Installing a Kafka and ZooKeeper Cluster on Kubernetes with Helm

July 27, 2023

An earlier article covered installing a Kafka cluster from the binary distribution. Many environments now run on Kubernetes, where the goal is fast deployment and fast project bootstrapping. Below we use Helm to quickly stand up a Kafka cluster with persistent storage.

Kubernetes StorageClass persistence and the binary Kafka installation are not covered again here; refer to the earlier articles for those topics.

Note that containerized Kafka is constrained by the resources of the underlying host; think carefully before using it for high-concurrency workloads.

Install Helm

Component versions used here:

  • Kubernetes 1.24.0
  • Containerd v1.6.4
```shell
# Download
wget https://get.helm.sh/helm-v3.6.1-linux-amd64.tar.gz
# Extract
tar zxvf helm-v3.6.1-linux-amd64.tar.gz
# Install
mv linux-amd64/helm /usr/local/bin/
# Verify
helm version
```

Deploy a ZooKeeper Cluster with Helm

```shell
# Add the bitnami repo
helm repo add bitnami https://charts.bitnami.com/bitnami
# Search the charts
helm search repo bitnami
# Pull the zookeeper chart
helm pull bitnami/zookeeper
# Extract it
tar zxvf zookeeper-11.4.2.tgz
# Enter the chart directory
cd zookeeper
```

Next, edit values.yaml to configure the time zone, persistent storage, and replica count for ZooKeeper:

```yaml
extraEnvVars:
  - name: TZ
    value: "Asia/Shanghai"
---
# Allow anonymous connections (enabled by default)
allowAnonymousLogin: true
---
# Disable authentication (disabled by default)
auth:
  enabled: false
---
# Set the replica count
replicaCount: 3
---
# Configure persistence as needed
persistence:
  enabled: true
  storageClass: "rook-ceph-block"   # omit if the cluster has a default StorageClass
  accessModes:
    - ReadWriteOnce
```
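Instead of editing values.yaml inside the unpacked chart, the same overrides can be kept in a standalone file and passed to Helm with `-f`. A minimal sketch (the file name `zk-values.yaml` is my own choice, not something the chart requires):

```shell
# Write the ZooKeeper overrides to a standalone values file
cat > zk-values.yaml <<'EOF'
extraEnvVars:
  - name: TZ
    value: "Asia/Shanghai"
replicaCount: 3
persistence:
  enabled: true
  storageClass: "rook-ceph-block"   # omit if a default StorageClass exists
  accessModes:
    - ReadWriteOnce
EOF
# Show the file that will be passed to helm -f
cat zk-values.yaml
```

The chart can then be installed straight from the repo without unpacking it: `helm install zookeeper bitnami/zookeeper -n kafka -f zk-values.yaml`.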

Create the kafka namespace:

```shell
[root@k8s-02 zookeeper]# kubectl create ns kafka
```

Create the ZooKeeper cluster with Helm:

```shell
[root@k8s-02 zookeeper]# helm install zookeeper -n kafka .
# Output (versions from this environment); the NOTES show how to connect to the cluster later
NAME: zookeeper
LAST DEPLOYED: Tue May 23 13:40:12 2023
NAMESPACE: kafka
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: zookeeper
CHART VERSION: 11.4.2
APP VERSION: 3.8.1

** Please be patient while the chart is being deployed **

ZooKeeper can be accessed via port 2181 on the following DNS name from within your cluster:

    zookeeper.kafka.svc.cluster.local

To connect to your ZooKeeper server run the following commands:

    export POD_NAME=$(kubectl get pods --namespace kafka -l "app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=zookeeper,app.kubernetes.io/component=zookeeper" -o jsonpath="{.items[0].metadata.name}")
    kubectl exec -it $POD_NAME -- zkCli.sh

To connect to your ZooKeeper server from outside the cluster execute the following commands:

    kubectl port-forward --namespace kafka svc/zookeeper 2181:2181 &
    zkCli.sh 127.0.0.1:2181
```

Check the pod status:

```shell
[root@k8s-02 zookeeper]# kubectl get all -n kafka
NAME              READY   STATUS    RESTARTS   AGE
pod/zookeeper-0   1/1     Running   0          52s
pod/zookeeper-1   1/1     Running   0          51s
pod/zookeeper-2   1/1     Running   0          49s

NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/zookeeper            ClusterIP   10.110.142.203   <none>        2181/TCP,2888/TCP,3888/TCP   52s
service/zookeeper-headless   ClusterIP   None             <none>        2181/TCP,2888/TCP,3888/TCP   52s

NAME                         READY   AGE
statefulset.apps/zookeeper   3/3     52s
```

Check the PVCs:

```shell
[root@k8s-02 zookeeper]# kubectl get pvc | grep zook
data-zookeeper-0   Bound   pvc-997a81c1-6986-4620-88f4-2270247354f5   8Gi   RWO   nfs-storage   7d23h
data-zookeeper-1   Bound   pvc-a6012ebb-1f70-43d1-ac1b-8deaec660efe   8Gi   RWO   nfs-storage   7d23h
data-zookeeper-2   Bound   pvc-f6300de4-8cd9-4807-a5fd-2655deb05139   8Gi   RWO   nfs-storage   7d23h
```

Check the ZooKeeper cluster status:

```shell
[root@k8s-02 zookeeper]# kubectl exec -it -n kafka zookeeper-0 -- bash
I have no name!@zookeeper-0:/$ zkServer.sh status
/opt/bitnami/java/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/bitnami/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
```

Deploy a Kafka Cluster with Helm

Pull the kafka chart:

```shell
helm pull bitnami/kafka
```

Extract it:

```shell
[root@k8s-02 ~]# tar xf kafka-22.1.3.tgz
```

Enter the Kafka chart directory:

```shell
[root@k8s-02 ~]# cd kafka
[root@k8s-02 kafka]# ls
Chart.lock  charts  Chart.yaml  README.md  templates  values.yaml
```

Edit values.yaml:

```yaml
extraEnvVars:
  - name: TZ
    value: "Asia/Shanghai"
---
# Replica count
replicaCount: 3
---
# Persistent storage
persistence:
  enabled: true
  storageClass: "rook-ceph-block"   # omit if the cluster has a default StorageClass
  accessModes:
    - ReadWriteOnce
  size: 8Gi
---
kraft:
  ## @param kraft.enabled Switch to enable or disable the Kraft mode for Kafka
  ##
  enabled: false   # set to false to use ZooKeeper instead of KRaft
---
# Point Kafka at the external ZooKeeper
zookeeper:
  enabled: false           # do not deploy the chart's bundled ZooKeeper
externalZookeeper:
  servers: zookeeper       # Service name of the ZooKeeper cluster above
```

Optional settings:

```yaml
## Allow topic deletion (enable as needed)
deleteTopicEnable: true
## Log retention time (default: one week)
logRetentionHours: 168
## Default replication factor for automatically created topics
defaultReplicationFactor: 2
## Replication factor for the offsets topic
offsetsTopicReplicationFactor: 2
## Replication factor for the transaction state log topic
transactionStateLogReplicationFactor: 2
## min.insync.replicas for the transaction state log
transactionStateLogMinIsr: 2
## Default number of partitions for new topics
numPartitions: 3
```
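As with ZooKeeper, the Kafka overrides (including the optional broker settings above) can be collected in one standalone file instead of editing the unpacked chart. A sketch; the file name `kafka-values.yaml` is my own choice:

```shell
# Write the Kafka overrides to a standalone values file
cat > kafka-values.yaml <<'EOF'
extraEnvVars:
  - name: TZ
    value: "Asia/Shanghai"
replicaCount: 3
persistence:
  enabled: true
  storageClass: "rook-ceph-block"
  size: 8Gi
kraft:
  enabled: false          # use ZooKeeper instead of KRaft
zookeeper:
  enabled: false          # do not deploy the chart's bundled ZooKeeper
externalZookeeper:
  servers: zookeeper      # Service name of the ZooKeeper cluster above
deleteTopicEnable: true
logRetentionHours: 168
defaultReplicationFactor: 2
offsetsTopicReplicationFactor: 2
transactionStateLogReplicationFactor: 2
transactionStateLogMinIsr: 2
numPartitions: 3
EOF
# Show the file that will be passed to helm -f
cat kafka-values.yaml
```

It would then be installed with `helm install kafka bitnami/kafka -n kafka -f kafka-values.yaml`.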

Create the Kafka cluster:

```shell
[root@k8s-02 kafka]# helm install kafka -n kafka .
# Output:
W0523 13:52:58.673090   28827 warnings.go:70] spec.template.spec.containers[0].env[39].name: duplicate name "KAFKA_ENABLE_KRAFT"
NAME: kafka
LAST DEPLOYED: Tue May 23 13:52:58 2023
NAMESPACE: kafka
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 22.1.3
APP VERSION: 3.4.0

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka.kafka.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka-0.kafka-headless.kafka.svc.cluster.local:9092
    kafka-1.kafka-headless.kafka.svc.cluster.local:9092
    kafka-2.kafka-headless.kafka.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.4.0-debian-11-r33 --namespace kafka --command -- sleep infinity
    kubectl exec --tty -i kafka-client --namespace kafka -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --broker-list kafka-0.kafka-headless.kafka.svc.cluster.local:9092,kafka-1.kafka-headless.kafka.svc.cluster.local:9092,kafka-2.kafka-headless.kafka.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --bootstrap-server kafka.kafka.svc.cluster.local:9092 \
            --topic test \
            --from-beginning
```
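The per-broker addresses in the NOTES follow a fixed StatefulSet pattern: `<release>-<ordinal>.<release>-headless.<namespace>.svc.cluster.local`. A small sketch that derives the producer bootstrap list from the replica count, assuming the release name `kafka` in namespace `kafka` as above:

```shell
# Build the broker bootstrap list from the replica count
REPLICAS=3
BROKERS=""
for i in $(seq 0 $((REPLICAS - 1))); do
  # Append each broker's headless-service DNS name on port 9092
  BROKERS="${BROKERS:+$BROKERS,}kafka-$i.kafka-headless.kafka.svc.cluster.local:9092"
done
echo "$BROKERS"
```

This is handy for feeding `--broker-list` to `kafka-console-producer.sh` without copying the long hostnames by hand.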

Exec into the Kafka cluster and create a topic:

```shell
# Exec into a Kafka pod
kubectl exec -it -n kafka kafka-0 -- bash
# Create a topic
kafka-topics.sh --create --bootstrap-server kafka:9092 --topic abcdocker
# List topics
kafka-topics.sh --list --bootstrap-server kafka:9092
# Describe the topic: the values.yaml settings are in effect
# (3 partitions, replication factor 3, 168-hour retention by default)
I have no name!@kafka-0:/$ kafka-topics.sh --bootstrap-server kafka:9092 --describe --topic abcdocker
Topic: abcdocker  TopicId: jcJtxY1NSr-nSloax8oPnA  PartitionCount: 3  ReplicationFactor: 3  Configs: flush.ms=1000,segment.bytes=1073741824,flush.messages=10000,max.message.bytes=1000012,retention.bytes=1073741824
Topic: abcdocker  Partition: 0  Leader: 1  Replicas: 1,2,0  Isr: 1,2,0
Topic: abcdocker  Partition: 1  Leader: 0  Replicas: 0,1,2  Isr: 0,1,2
Topic: abcdocker  Partition: 2  Leader: 2  Replicas: 2,0,1  Isr: 2,0,1
```
