Installing a Kafka and ZooKeeper Cluster on Kubernetes with Helm
An earlier article covered installing a Kafka cluster from the binary packages. These days many environments run on Kubernetes, where fast deployment and fast project bootstrapping are the goal, so below we use Helm to quickly stand up a Kafka cluster with persistent storage.
Kubernetes StorageClass persistence and the binary Kafka installation are not covered again here; refer to the earlier articles for those.
Note that containerized Kafka is still constrained by the configuration of the underlying physical machines, so weigh it carefully for high-concurrency scenarios.
Installing Helm
Component versions
- Kubernetes 1.24.0
- Containerd v1.6.4
# Download
wget https://get.helm.sh/helm-v3.6.1-linux-amd64.tar.gz
# Extract
tar zxvf helm-v3.6.1-linux-amd64.tar.gz
# Install
mv linux-amd64/helm /usr/local/bin/
# Verify
helm version
Deploying the ZooKeeper Cluster with Helm
# Add the bitnami repository
helm repo add bitnami https://charts.bitnami.com/bitnami
# Search the repository's charts
helm search repo bitnami
# Pull the zookeeper chart
helm pull bitnami/zookeeper
# Extract it
tar zxvf zookeeper-11.4.2.tgz
# Enter the zookeeper directory
cd zookeeper
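If you want to stay on the exact chart version used in this article, helm pull can also pin and unpack it in one step, which is equivalent to the pull-and-tar above:

# Pull a specific chart version and unpack it into ./zookeeper
helm pull bitnami/zookeeper --version 11.4.2 --untar
cd zookeeper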
Next, configure the time zone, persistent storage, replica count, and related settings for ZooKeeper in values.yaml:
extraEnvVars:
  - name: TZ
    value: "Asia/Shanghai"
# Allow anonymous connections (enabled by default)
allowAnonymousLogin: true
---
# Disable authentication (disabled by default)
auth:
  enabled: false
---
# Replica count
replicaCount: 3
---
# Persistence, adjust to your environment
persistence:
  enabled: true
  storageClass: "rook-ceph-block"  # can be omitted if a default StorageClass exists
  accessModes:
    - ReadWriteOnce
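If you would rather not edit values.yaml at all, the same overrides can be passed on the command line; a minimal sketch, assuming the bitnami/zookeeper value names shown above (--create-namespace also saves the separate namespace step below):

helm install zookeeper bitnami/zookeeper -n kafka --create-namespace \
  --set replicaCount=3 \
  --set persistence.enabled=true \
  --set persistence.storageClass=rook-ceph-block \
  --set 'extraEnvVars[0].name=TZ' \
  --set 'extraEnvVars[0].value=Asia/Shanghai'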
Create the kafka namespace
[root@k8s-02 zookeeper]# kubectl create ns kafka
Create the ZooKeeper cluster with helm
[root@k8s-02 zookeeper]# helm install zookeeper -n kafka .
# Release and version information for this environment:
NAME: zookeeper
LAST DEPLOYED: Tue May 23 13:40:12 2023
NAMESPACE: kafka
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: zookeeper
CHART VERSION: 11.4.2
APP VERSION: 3.8.1
# The NOTES below explain how to check the ZooKeeper cluster afterwards:
** Please be patient while the chart is being deployed **

ZooKeeper can be accessed via port 2181 on the following DNS name from within your cluster:

    zookeeper.kafka.svc.cluster.local

To connect to your ZooKeeper server run the following commands:

    export POD_NAME=$(kubectl get pods --namespace kafka -l "app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=zookeeper,app.kubernetes.io/component=zookeeper" -o jsonpath="{.items[0].metadata.name}")
    kubectl exec -it $POD_NAME -- zkCli.sh

To connect to your ZooKeeper server from outside the cluster execute the following commands:

    kubectl port-forward --namespace kafka svc/zookeeper 2181:2181 &
    zkCli.sh 127.0.0.1:2181
[root@k8s-02 zookeeper]#
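These NOTES are not one-shot output; Helm can re-display them, along with the release status, at any time:

# Re-display the chart NOTES and deployment status
helm status zookeeper -n kafka
# List the releases Helm manages in this namespace
helm list -n kafka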
Check the pod status
[root@k8s-02 zookeeper]# kubectl get all -n kafka
NAME              READY   STATUS    RESTARTS   AGE
pod/zookeeper-0   1/1     Running   0          52s
pod/zookeeper-1   1/1     Running   0          51s
pod/zookeeper-2   1/1     Running   0          49s

NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/zookeeper            ClusterIP   10.110.142.203   <none>        2181/TCP,2888/TCP,3888/TCP   52s
service/zookeeper-headless   ClusterIP   None             <none>        2181/TCP,2888/TCP,3888/TCP   52s

NAME                         READY   AGE
statefulset.apps/zookeeper   3/3     52s
Check the PVCs
[root@k8s-02 zookeeper]# kubectl get pvc |grep zook
data-zookeeper-0   Bound   pvc-997a81c1-6986-4620-88f4-2270247354f5   8Gi   RWO   nfs-storage   7d23h
data-zookeeper-1   Bound   pvc-a6012ebb-1f70-43d1-ac1b-8deaec660efe   8Gi   RWO   nfs-storage   7d23h
data-zookeeper-2   Bound   pvc-f6300de4-8cd9-4807-a5fd-2655deb05139   8Gi   RWO   nfs-storage   7d23h
[root@k8s-02 zookeeper]#
Check the ZooKeeper cluster status
[root@k8s-02 zookeeper]# kubectl exec -it -n kafka zookeeper-0 -- bash
I have no name!@zookeeper-0:/$ zkServer.sh status
/opt/bitnami/java/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/bitnami/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
I have no name!@zookeeper-0:/$
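Checking each node one by one gets tedious; a small loop that prints every node's role (assuming the three pod names above and the bitnami image's zkServer.sh on PATH) can confirm the quorum at a glance. Exactly one pod should report Mode: leader:

for i in 0 1 2; do
  # Print each pod's role (leader/follower)
  echo -n "zookeeper-$i: "
  kubectl exec -n kafka zookeeper-$i -- zkServer.sh status 2>/dev/null | grep Mode
done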
Deploying the Kafka Cluster with Helm
Pull the kafka chart
helm pull bitnami/kafka
Extract the kafka chart
[root@k8s-02 ~]# tar xf kafka-22.1.3.tgz
Enter the kafka directory
[root@k8s-02 ~]# cd kafka
[root@k8s-02 kafka]# ls
Chart.lock  charts  Chart.yaml  README.md  templates  values.yaml
Modify values.yaml
extraEnvVars:
  - name: TZ
    value: "Asia/Shanghai"
---
# Replica count
replicaCount: 3
---
# Persistent storage
persistence:
  enabled: true
  storageClass: "rook-ceph-block"  # can be omitted if a default StorageClass exists
  accessModes:
    - ReadWriteOnce
  size: 8Gi
---
kraft:
  ## @param kraft.enabled Switch to enable or disable the Kraft mode for Kafka
  ##
  enabled: false  # set to false so the brokers use ZooKeeper
---
# Point Kafka at the external ZooKeeper
zookeeper:
  enabled: false  # do not deploy the bundled ZooKeeper (the default is false)
externalZookeeper:  # external ZooKeeper
  servers: zookeeper  # the ZooKeeper Service name
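Before installing, it can be worth confirming that the rendered manifests really point the brokers at the external ZooKeeper; a quick sketch run from the chart directory (the env var name assumes the bitnami image's KAFKA_CFG_* convention):

# Render the chart locally and inspect the ZooKeeper connection setting
helm template kafka . | grep -A1 'KAFKA_CFG_ZOOKEEPER_CONNECT'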
Optional settings
## Allow topic deletion (enable as needed)
deleteTopicEnable: true
## Log retention time (the default is one week)
logRetentionHours: 168
## Default replication factor for automatically created topics
defaultReplicationFactor: 2
## Replication factor for the offsets topic
offsetsTopicReplicationFactor: 2
## Replication factor for the transaction state log topic
transactionStateLogReplicationFactor: 2
## min.insync.replicas for the transaction state log topic
transactionStateLogMinIsr: 2
## Default number of partitions for new topics
numPartitions: 3
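Once the cluster is installed in the next step, these broker-level defaults can be confirmed at runtime; a sketch using kafka-configs.sh, which ships in the bitnami image (kafka:9092 is the Service created below):

# Describe all default broker configs, including values from server.properties
kubectl exec -it -n kafka kafka-0 -- kafka-configs.sh \
  --bootstrap-server kafka:9092 \
  --entity-type brokers --entity-default --describe --all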
Create the Kafka cluster
[root@k8s-02 kafka]# helm install kafka -n kafka .
# The output:
W0523 13:52:58.673090   28827 warnings.go:70] spec.template.spec.containers[0].env[39].name: duplicate name "KAFKA_ENABLE_KRAFT"
NAME: kafka
LAST DEPLOYED: Tue May 23 13:52:58 2023
NAMESPACE: kafka
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 22.1.3
APP VERSION: 3.4.0

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka.kafka.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka-0.kafka-headless.kafka.svc.cluster.local:9092
    kafka-1.kafka-headless.kafka.svc.cluster.local:9092
    kafka-2.kafka-headless.kafka.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.4.0-debian-11-r33 --namespace kafka --command -- sleep infinity
    kubectl exec --tty -i kafka-client --namespace kafka -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --broker-list kafka-0.kafka-headless.kafka.svc.cluster.local:9092,kafka-1.kafka-headless.kafka.svc.cluster.local:9092,kafka-2.kafka-headless.kafka.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --bootstrap-server kafka.kafka.svc.cluster.local:9092 \
            --topic test \
            --from-beginning
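The NOTES above already contain the client-pod recipe; condensed into a quick produce/consume smoke test, assuming the kafka-client pod from the NOTES is running:

# Produce one message, then read it back
kubectl exec -i kafka-client -n kafka -- bash -c \
  'echo hello | kafka-console-producer.sh --broker-list kafka:9092 --topic test'
kubectl exec -i kafka-client -n kafka -- kafka-console-consumer.sh \
  --bootstrap-server kafka:9092 --topic test --from-beginning \
  --max-messages 1 --timeout-ms 10000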
Enter the Kafka cluster and create a topic to verify the configuration
# Enter a Kafka broker
kubectl exec -it -n kafka kafka-0 -- bash
# Create a topic
kafka-topics.sh --create --bootstrap-server kafka:9092 --topic abcdocker
# List topics
kafka-topics.sh --list --bootstrap-server kafka:9092
# Describe the topic
kafka-topics.sh --bootstrap-server kafka:9092 --describe --topic abcdocker

# The values.yaml settings have taken effect: 3 partitions, 3 replicas, and a 168-hour default retention
I have no name!@kafka-0:/$ kafka-topics.sh --bootstrap-server kafka:9092 --describe --topic abcdocker
Topic: abcdocker   TopicId: jcJtxY1NSr-nSloax8oPnA   PartitionCount: 3   ReplicationFactor: 3   Configs: flush.ms=1000,segment.bytes=1073741824,flush.messages=10000,max.message.bytes=1000012,retention.bytes=1073741824
    Topic: abcdocker   Partition: 0   Leader: 1   Replicas: 1,2,0   Isr: 1,2,0
    Topic: abcdocker   Partition: 1   Leader: 0   Replicas: 0,1,2   Isr: 0,1,2
    Topic: abcdocker   Partition: 2   Leader: 2   Replicas: 2,0,1   Isr: 2,0,1
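Because deleteTopicEnable was turned on in the optional settings earlier, the test topic can also be removed again; a quick check, run from inside a broker pod as above:

# Delete the test topic and confirm it is gone
kafka-topics.sh --delete --bootstrap-server kafka:9092 --topic abcdocker
kafka-topics.sh --list --bootstrap-server kafka:9092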