How to Monitor a MongoDB Cluster with Prometheus
What Is MongoDB
MongoDB is a non-relational (NoSQL) database management system designed to store and retrieve large volumes of data, making it well suited to big-data and real-time applications. The name comes from the word "humongous", highlighting MongoDB's core strength: handling massive datasets with ease.
MongoDB is widely used in web applications, big-data analytics, IoT applications, log management, and many other domains, because its flexibility and scalability fit a broad range of data storage needs.
Background
In real-world enterprise environments, MongoDB is deployed across multiple nodes as a cluster to provide high availability, capacity scaling, load balancing, and data backup. Such a cluster needs continuous monitoring of its health. To do this, we use MongoDB Exporter to expose MongoDB metrics in Prometheus format, have Prometheus scrape those metrics, and visualize them with Grafana.
Creating a StorageClass
Create an NFS-backed StorageClass to provide persistent storage for the MongoDB cluster.
```shell
# Create the StorageClass
$ kubectl apply -f sc.yml
storageclass.storage.k8s.io/kubesre-nfs created

# List StorageClasses
$ kubectl get storageclasses.storage.k8s.io
NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
kubesre-nfs          example.com/external-nfs   Delete          Immediate              false                  63s
standard (default)   rancher.io/local-path      Delete          WaitForFirstConsumer   false                  13d
```
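The contents of `sc.yml` are not shown above; a minimal sketch of an NFS-backed StorageClass follows. The provisioner name matches the listing above, but the `server` and `path` parameters are placeholders, and the exact parameter names depend on which external NFS provisioner you run:

```yaml
# Hypothetical sc.yml — assumes an external NFS provisioner is already
# running in the cluster and registered as example.com/external-nfs.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kubesre-nfs
provisioner: example.com/external-nfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: false
parameters:
  server: nfs.example.com   # placeholder NFS server address
  path: /exports/mongodb    # placeholder NFS export path
```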
Deploying the MongoDB Cluster
Deploy the MongoDB cluster with Helm:
```shell
# Add the Helm repository
$ helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories

# Refresh the repository index and search for MongoDB charts
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "higress.io" chart repository
...Successfully got an update from the "bitnami" chart repository
...Successfully got an update from the "ingress-nginx" chart repository
Update Complete. ⎈Happy Helming!⎈
$ helm search repo mongodb
NAME                      CHART VERSION   APP VERSION   DESCRIPTION
bitnami/mongodb           13.18.4         6.0.10        MongoDB(R) is a relational open source NoSQL da...
bitnami/mongodb-sharded   6.6.6           6.0.10        MongoDB(R) is an open source NoSQL database tha...

# Download the MongoDB chart locally
$ mkdir mongodb && cd mongodb
$ helm pull bitnami/mongodb
$ tar zxf mongodb-13.18.4.tgz
$ cp mongodb/values.yaml ./values-test.yaml
```

Edit `values-test.yaml`:

```yaml
## With storageClass: "" the chart falls back to the cluster default.
## Here we point it at the NFS-backed class created earlier; substitute
## your own StorageClass (ceph, nfs, cloud-provided NFS) as needed.
global:
  # StorageClass used for the persistent volumes
  storageClass: "kubesre-nfs"
# Run the cluster as a replica set
architecture: replicaset
# Enable authentication and set the root credentials
auth:
  enabled: true
  rootUser: root
  rootPassword: "root"
# Three replica-set members
replicaCount: 3
# Enable persistence; PVCs are created via global.storageClass
persistence:
  enabled: true
  size: 20Gi
```

Install the chart:

```shell
# Install the MongoDB cluster
$ helm install mongodb-cluster mongodb -f ./values-test.yaml
NAME: mongodb-cluster
LAST DEPLOYED: Tue Sep 19 15:54:36 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mongodb
CHART VERSION: 13.18.4
APP VERSION: 6.0.10

** Please be patient while the chart is being deployed **

MongoDB® can be accessed on the following DNS name(s) and ports from within your cluster:

    mongodb-cluster-0.mongodb-cluster-headless.default.svc.cluster.local:27017
    mongodb-cluster-1.mongodb-cluster-headless.default.svc.cluster.local:27017
    mongodb-cluster-2.mongodb-cluster-headless.default.svc.cluster.local:27017

To get the root password run:

    export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongodb-cluster -o jsonpath="{.data.mongodb-root-password}" | base64 -d)

To connect to your database, create a MongoDB® client container:

    kubectl run --namespace default mongodb-cluster-client --rm --tty -i --restart='Never' --env="MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD" --image docker.io/bitnami/mongodb:6.0.10-debian-11-r0 --command -- bash

Then, run the following command:

    mongosh admin --host "mongodb-cluster-0.mongodb-cluster-headless.default.svc.cluster.local:27017,mongodb-cluster-1.mongodb-cluster-headless.default.svc.cluster.local:27017,mongodb-cluster-2.mongodb-cluster-headless.default.svc.cluster.local:27017" --authenticationDatabase admin -u $MONGODB_ROOT_USER -p $MONGODB_ROOT_PASSWORD

# Check pod status
$ kubectl get pods | grep mongo
mongodb-cluster-0           1/1     Running   0               17m
mongodb-cluster-1           1/1     Running   0               6m42s
mongodb-cluster-2           1/1     Running   0               4m29s
mongodb-cluster-arbiter-0   1/1     Running   4 (7m51s ago)   20m
```
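The replica-set connection URI used later for the exporter can be assembled from the headless-service member names printed in the chart NOTES above. A small sketch (host names from the NOTES, credentials from `values-test.yaml`):

```python
# Build the replica-set connection URI from the headless-service
# member names printed in the chart NOTES above.
MEMBERS = [
    f"mongodb-cluster-{i}.mongodb-cluster-headless.default.svc.cluster.local:27017"
    for i in range(3)
]

def mongo_uri(user, password, hosts, auth_db="admin"):
    """Compose a standard mongodb:// URI for a replica set."""
    return f"mongodb://{user}:{password}@{','.join(hosts)}/{auth_db}?authSource={auth_db}"

print(mongo_uri("root", "root", MEMBERS))
```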
Deploying MongoDB Exporter
Next, deploy MongoDB Exporter (`exporter.yml`):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-exporter
spec:
  selector:
    matchLabels:
      app: mongo-exporter
  template:
    metadata:
      labels:
        app: mongo-exporter
    spec:
      containers:
        - name: mongo-exporter
          image: percona/mongodb_exporter:0.39.0
          imagePullPolicy: Always
          args:
            - '--web.listen-address=:9104'
            - '--mongodb.uri'
            # Credentials match auth.rootUser/rootPassword in values-test.yaml;
            # auth is enabled, so the exporter must authenticate.
            - >-
              mongodb://root:root@mongodb-cluster-0.mongodb-cluster-headless.default.svc.cluster.local:27017,mongodb-cluster-1.mongodb-cluster-headless.default.svc.cluster.local:27017,mongodb-cluster-2.mongodb-cluster-headless.default.svc.cluster.local:27017/admin?authSource=admin
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
```

```shell
$ kubectl apply -f exporter.yml
```

Then create the exporter Service (`service.yml`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo-exporter
spec:
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - port: 9104
      protocol: TCP
      targetPort: 9104
  selector:
    app: mongo-exporter
  type: ClusterIP
```

```shell
$ kubectl apply -f service.yml
```
That completes the exporter deployment.
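Before wiring it into Prometheus, it is worth sanity-checking what the exporter serves on `/metrics` (for example, `curl http://mongo-exporter:9104/metrics` from inside the cluster). The text exposition format is easy to inspect programmatically; the sample payload below is illustrative, not captured from a real exporter:

```python
# Minimal parser for the Prometheus text exposition format that
# mongodb_exporter serves on /metrics. SAMPLE is an illustrative
# payload; a real check would fetch http://mongo-exporter:9104/metrics.
SAMPLE = """\
# HELP mongodb_up Whether the last scrape of MongoDB was able to connect.
# TYPE mongodb_up gauge
mongodb_up 1
"""

def parse_metrics(text):
    """Return {metric_name: float_value} for non-comment lines."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        # Strip any label set, e.g. foo{a="b"} -> foo
        name = name.split("{", 1)[0]
        metrics[name] = float(value)
    return metrics

if __name__ == "__main__":
    parsed = parse_metrics(SAMPLE)
    print(parsed["mongodb_up"])  # 1.0 means the exporter can reach MongoDB
```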
Installing Prometheus
Next, create a ConfigMap holding the Prometheus configuration (`prometheus-cm.yml`):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus
data:
  prometheus.yml: |-
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    scrape_configs:
      - job_name: "exporter"
        static_configs:
          - targets: ["mongo-exporter:9104"]
```

```shell
$ kubectl apply -f prometheus-cm.yml
```

Deploy Prometheus (`prometheus.yml`):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      name: prometheus-pod
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9090
              name: metrics
              protocol: TCP
          volumeMounts:
            - mountPath: /etc/prometheus
              name: prometheus-config
      terminationGracePeriodSeconds: 15
      volumes:
        - name: prometheus-config
          configMap:
            name: prometheus
            defaultMode: 420
            items:
              - key: prometheus.yml
                path: prometheus.yml
```

```shell
$ kubectl apply -f prometheus.yml
```

Create the Prometheus Service (`prometheus-service.yml`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - port: 9090
      protocol: TCP
      targetPort: 9090
  selector:
    app: prometheus
  sessionAffinity: None
  type: ClusterIP
```

```shell
$ kubectl apply -f prometheus-service.yml
```
You can now open the Prometheus console:
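The Service is ClusterIP, so from outside the cluster you would typically run `kubectl port-forward svc/prometheus 9090:9090` first. Once the console is reachable, a few starter queries against the exporter's metrics; the metric names below follow recent percona/mongodb_exporter releases and should be checked against your exporter version:

```promql
# 1 if the exporter's last connection to MongoDB succeeded
mongodb_up

# Current open connections per member
mongodb_ss_connections{conn_type="current"}

# Per-operation throughput over the last 5 minutes
rate(mongodb_ss_opcounters[5m])
```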
Deploying Grafana
Now deploy Grafana (`grafana.yml`):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  selector:
    matchLabels:
      tool: grafana
  template:
    metadata:
      name: grafana-pod
      labels:
        tool: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
              protocol: TCP
```

```shell
$ kubectl apply -f grafana.yml
```

Create the Grafana Service (`grafana-service.yml`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - port: 3000
      protocol: TCP
      targetPort: 3000
  selector:
    tool: grafana
  sessionAffinity: None
  type: ClusterIP
```

```shell
$ kubectl apply -f grafana-service.yml
```
Visualization
Open Grafana and log in with the default credentials (admin/admin).
Configure the Prometheus data source:
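The data source can be added in the UI (data source type Prometheus, URL `http://prometheus:9090`), or provisioned declaratively. A sketch of a provisioning file, assuming the service names from the manifests above and Grafana's standard provisioning directory:

```yaml
# Hypothetical provisioning file, mounted into the Grafana container at
# /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090   # the Prometheus Service created above
    isDefault: true
```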
Import a dashboard template to view the data: