OpenObserve HA: local single-cluster mode

August 26, 2023

HA mode does not support local storage. In cluster mode, OpenObserve runs multiple nodes, each of them stateless: the data itself is kept in object storage and the metadata in etcd, so in theory OpenObserve can be scaled horizontally at any time.

(figure: arch-etcd-s3.png)

The components are as follows:

(figure: arch-ha.png)

router: handles data ingestion and page queries; acts as the router

etcd: stores user information, functions, alert rules, and other metadata

s3: the data itself

querier: handles data queries

ingester: before data is written to S3, it is staged through a write-ahead log so that it cannot be lost, similar to Prometheus's WAL

compactor: merges small files into larger ones and enforces the data retention period
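
The ingester and compactor roles above can be sketched in a few lines of Python. This is a conceptual illustration only; the file names and layout are invented for the example and are not OpenObserve's actual on-disk format.

```python
import glob
import json
import os

# Ingester: append each record to a write-ahead log and fsync before
# acknowledging, so a crash cannot lose data that was already acknowledged.
def ingest(wal_path, record):
    with open(wal_path, "a") as wal:
        wal.write(json.dumps(record) + "\n")
        wal.flush()
        os.fsync(wal.fileno())

# Compactor: merge many small files into one large file, then delete the
# small ones (the real compactor also applies the retention policy).
def compact(small_files, merged_path):
    with open(merged_path, "w") as out:
        for path in sorted(small_files):
            with open(path) as f:
                out.write(f.read())
            os.remove(path)

ingest("wal_0.json", {"author": "marksugar", "name": "www.linuxea.com"})
ingest("wal_1.json", {"author": "marksugar", "name": "www.linuxea.com"})
compact(glob.glob("wal_*.json"), "merged.json")
```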

To configure cluster mode we need object storage (AWS S3, Alibaba Cloud OSS, or a local MinIO), an etcd deployment to store the metadata, and a PVC for the ingester data, since OpenObserve runs on Kubernetes.

etcd

We run etcd on a node outside the Kubernetes cluster:

version: '2'

services:
  oo_etcd:
    container_name: oo_etcd
    #image: 'docker.io/bitnami/etcd/3.5.8-debian-11-r4'
    image: uhub.service.ucloud.cn/marksugar-k8s/etcd:3.5.8-debian-11-r4
    #network_mode: host
    restart: always
    environment:
      - ALLOW_NONE_AUTHENTICATION=yes
      - ETCD_ADVERTISE_CLIENT_URLS=http://0.0.0.0:2379
      #- ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
      #- ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380
      - ETCD_DATA_DIR=/bitnami/etcd/data
    volumes:
    - /etc/localtime:/etc/localtime:ro  # sync container timezone with the host
    - /data/etcd/date:/bitnami/etcd # prepare first: chmod -R 777 /data/etcd/date/
    ports:
      - 2379:2379
      - 2380:2380
    logging:
      driver: "json-file"
      options:
        max-size: "50M"
    mem_limit: 2048m

pvc

This requires an existing StorageClass; here I use nfs-latest, created with nfs-subdir-external-provisioner.
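
For reference, a StorageClass produced by nfs-subdir-external-provisioner looks roughly like the sketch below; the provisioner name is the project's default, and the parameters are assumptions that should be adjusted to your deployment.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-latest
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner  # project default
parameters:
  archiveOnDelete: "false"  # remove the data directory when the PVC is deleted
```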

minio

A single-node MinIO deployment is enough for testing:

version: '2'

services:
  oo_minio:
    container_name: oo_minio
    image: "uhub.service.ucloud.cn/marksugar-k8s/minio:RELEASE.2023-02-10T18-48-39Z"
    volumes:
    - /etc/localtime:/etc/localtime:ro  # sync container timezone with the host
    - /docker/minio/data:/data
    command: server --console-address ':9001' /data
    environment:
    - MINIO_ACCESS_KEY=admin    # console username
    - MINIO_SECRET_KEY=admin1234 # console password, minimum 8 characters
    ports:
      - 9000:9000 # API port
      - 9001:9001 # console port
    logging:
      driver: "json-file"
      options:
        max-size: "50M"
    mem_limit: 2048m
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

Once it is up, create a bucket named openobserve.
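
Instead of the web console, the bucket can also be created with the MinIO client. This assumes `mc` is installed and MinIO is reachable at the address and credentials from the compose file above.

```shell
# Register the MinIO endpoint under the alias "oo", then create the bucket
mc alias set oo http://172.16.100.47:9000 admin admin1234
mc mb oo/openobserve
mc ls oo   # verify the bucket exists
```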

(figure: image-20230820161201847.png)

Installing OpenObserve

We again install with Helm:

helm repo add openobserve https://charts.openobserve.ai
helm repo update
kubectl create ns openobserve

The customized values.yaml, saved as latest.yaml, is as follows:

image:
  repository: uhub.service.ucloud.cn/marksugar-k8s/openobserve
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "latest"

# Replica counts
replicaCount:
  ingester: 1
  querier: 1
  router: 1
  alertmanager: 1
  compactor: 1

ingester:
  persistence:
    enabled: true  
    size: 10Gi
    storageClass: "nfs-latest" # the NFS StorageClass
    accessModes:
      - ReadWriteOnce

# Credentials for authentication
auth:
  ZO_ROOT_USER_EMAIL: "root@example.com"
  ZO_ROOT_USER_PASSWORD: "abc123"
  # S3 credentials
  ZO_S3_ACCESS_KEY: "admin"
  ZO_S3_SECRET_KEY: "admin1234"
etcd:
  enabled: false # if true then etcd will be deployed as part of openobserve
  externalUrl: "172.16.100.47:2379"   
config:
  # ZO_ETCD_ADDR: "172.16.100.47:2379"  # etcd address
  # ZO_HTTP_ADDR: "172.16.100.47:2379"
  ZO_DATA_DIR: "./data/" # data directory
  # use MinIO as the S3-compatible object storage
  ZO_LOCAL_MODE_STORAGE: s3
  ZO_S3_SERVER_URL: http://172.16.100.47:9000
  ZO_S3_REGION_NAME: local
  ZO_S3_ACCESS_KEY: admin
  ZO_S3_SECRET_KEY: admin1234
  ZO_S3_BUCKET_NAME: openobserve
  ZO_S3_BUCKET_PREFIX: openobserve
  ZO_S3_PROVIDER: minio
  ZO_TELEMETRY: "false" # disable anonymous telemetry
  ZO_WAL_MEMORY_MODE_ENABLED: "false" # WAL in-memory mode
  ZO_WAL_LINE_MODE_ENABLED: "true" # WAL line-by-line write mode
  #ZO_S3_FEATURE_FORCE_PATH_STYLE: "true"
  # Before data is written to S3 it is staged in a write-ahead log so that it
  # cannot be lost, similar to Prometheus's WAL
resources:
  ingester: {}
  querier: {}
  compactor: {}
  router: {}
  alertmanager: {}

autoscaling:
  ingester:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80
  querier:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80
  router:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80
  compactor:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80

This specifies the local MinIO endpoint, bucket name, and credentials; the etcd address; and the StorageClass for the ingester. Then install:

 helm upgrade --install openobserve -f latest.yaml --namespace openobserve openobserve/openobserve

The output looks like this:

[root@master-01 ~/openObserve]# helm upgrade --install openobserve -f latest.yaml --namespace openobserve openobserve/openobserve 
Release "openobserve" does not exist. Installing it now.
NAME: openobserve
LAST DEPLOYED: Sun Aug 20 18:04:31 2023
NAMESPACE: openobserve
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
  kubectl --namespace openobserve port-forward svc/openobserve-openobserve-router 5080:5080
[root@master-01 ~/openObserve]# kubectl -n openobserve get pod
NAME                                        READY   STATUS    RESTARTS   AGE
openobserve-alertmanager-6f486d5df5-krtxm   1/1     Running   0          53s
openobserve-compactor-98ccf664c-v9mkb       1/1     Running   0          53s
openobserve-ingester-0                      1/1     Running   0          53s
openobserve-querier-695cf4fcc9-854z8        1/1     Running   0          53s
openobserve-router-65b68b4899-j9hs7         1/1     Running   0          53s
[root@master-01 ~/openObserve]# kubectl -n openobserve get pvc
NAME                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-openobserve-ingester-0   Bound    pvc-5d86b642-4464-4b3e-950a-d5e0b4461c27   10Gi       RWO            nfs-latest     2m47s

Then configure an Ingress pointing to openobserve-router:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: openobserve-ui
  namespace: openobserve
  labels:
    app: openobserve 
  annotations:
    # kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: letsencrypt
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/connection-proxy-header: keep-alive
    nginx.ingress.kubernetes.io/proxy-connect-timeout: '600'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '600'
    nginx.ingress.kubernetes.io/proxy-read-timeout: '600'
    nginx.ingress.kubernetes.io/proxy-body-size: 32m
spec:
  ingressClassName: nginx
  rules:
  - host: openobserve.test.com
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: openobserve-router
            port:
              number: 5080

Add a local hosts entry and open the UI.
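
The hosts entry maps the Ingress host to the node where ingress-nginx is exposed; `<node-ip>` below is a placeholder to replace with your own address.

```shell
# <node-ip> is whichever node exposes the ingress-nginx controller
echo '<node-ip> openobserve.test.com' >> /etc/hosts
```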

(figure: image-20230820184851478.png)

At this point there is no data.

(figure: image-20230820184930100.png)

Testing

We write some test data manually:

[root@master-01 ~/openObserve]# curl http://openobserve.test.com/api/linuxea/0820/_json -i -u 'root@example.com:abc123' -d '[{"author":"marksugar","name":"www.linuxea.com"}]'
HTTP/1.1 200 OK
Date: Sun, 20 Aug 2023 11:02:08 GMT
Content-Type: application/json
Content-Length: 65
Connection: keep-alive
Vary: Accept-Encoding
vary: accept-encoding
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: GET, PUT, POST, DELETE, PATCH, OPTIONS
Access-Control-Allow-Headers: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization
Access-Control-Max-Age: 1728000

{"code":200,"status":[{"name":"0820","successful":1,"failed":0}]}

The data is inserted:

(figure: image-20230820190257317.png)
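
The ingested record can also be queried back over HTTP. The `_search` endpoint and payload shape below are my assumption based on the OpenObserve search API, not something shown in the original test; it requires the cluster from the steps above to be running.

```shell
# SQL search against the 0820 stream in the linuxea organization
curl -s http://openobserve.test.com/api/linuxea/_search \
  -u 'root@example.com:abc123' \
  -H 'Content-Type: application/json' \
  -d '{"query":{"sql":"SELECT * FROM \"0820\"","from":0,"size":10}}'
```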

Meanwhile, the same data is also written to the local NFS disk:

[root@Node-172_16_100_49 ~]# cat /data/nfs-share/openobserve/data-openobserve-ingester-0/wal/files/linuxea/logs/0820/0_2023_08_20_13_2c624affe8540b70_7099015230658842624DKMpVA.json 
{"_timestamp":1692537124314778,"author":"marksugar","name":"www.linuxea.com"}

The data is written into MinIO as well:

(figure: image-20230820211054041.png)

The objects stored in MinIO cannot be read directly, because the metadata lives in etcd.
