Kafka standalone SCRAM authentication

August 18, 2023

This walkthrough still uses the older 2.5.0 release to set up a single-node ZooKeeper and Kafka, with SCRAM-SHA-256 as the authentication mechanism.

Since Docker is not used, a JDK must be installed first:

jdk

curl -Lk https://repo.huaweicloud.com/java/jdk/8u202-b08/jdk-8u202-linux-x64.tar.gz |tar xz -C /usr/local
cd /usr/local && ln -s jdk1.8.0_202 java
cat > /etc/profile.d/java.sh <<'EOF'
export JAVA_HOME=/usr/local/java
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
EOF
source /etc/profile.d/java.sh
java -version
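Note the quoted delimiter (`<<'EOF'`) in the profile snippet above: with an unquoted heredoc the shell would expand `$JAVA_HOME` and `$PATH` at write time, before `JAVA_HOME` is even defined, leaving a broken profile. A quick standalone illustration of the difference:

```shell
# Unquoted heredoc delimiters expand variables at write time;
# quoted delimiters ('EOF') keep them literal in the output file.
TMPD=$(mktemp -d)
FOO=expanded
cat > "$TMPD/unquoted.txt" <<EOF
value=$FOO
EOF
cat > "$TMPD/quoted.txt" <<'EOF'
value=$FOO
EOF
cat "$TMPD/unquoted.txt"   # value=expanded
cat "$TMPD/quoted.txt"     # value=$FOO
```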

kafka-2.5.0

Installation

Set up the Kafka directories

DPATH=/data
mkdir ${DPATH}/logging/zookeeper/logs ${DPATH}/logging/kafka/ ${DPATH}/logs/ -p
groupadd -r -g 699 kafka
useradd -u 699 -s /sbin/nologin -c 'kafka server' -g kafka kafka -M
chown -R  kafka.kafka ${DPATH}/logging

download kafka

wget https://archive.apache.org/dist/kafka/2.5.0/kafka_2.12-2.5.0.tgz
tar xf kafka_2.12-2.5.0.tgz -C /usr/local
cd /usr/local
ln -s kafka_2.12-2.5.0 kafka

Then create the required JAAS authentication files:

cat > /usr/local/kafka/config/kafka_client_jaas.conf << EOF
KafkaClient {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="marksugar"
    password="linuxea.com";
};
EOF
cat > /usr/local/kafka/config/kafka_server_jaas.conf << EOF
KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="markadmin"
    password="MwMzA0MGIwZjMwMjg3MjY4NWE2ZGFmOG";
};
EOF
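Clients launched from the Kafka CLI tools can also carry the SCRAM credentials in a plain properties file via `sasl.jaas.config`, instead of pointing `KAFKA_OPTS` at the client JAAS file. A minimal sketch (written to a temp directory here; in practice it would sit alongside the other configs, e.g. as `/usr/local/kafka/config/client-scram.properties` — that filename is an assumption, not part of the setup above):

```shell
# Sketch: client-side properties carrying the SCRAM credentials inline.
CONF_DIR=$(mktemp -d)   # in practice: /usr/local/kafka/config
cat > "$CONF_DIR/client-scram.properties" <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="marksugar" \
  password="linuxea.com";
EOF
cat "$CONF_DIR/client-scram.properties"
```

Such a file can then be handed to the console tools, e.g. `--producer.config` for kafka-console-producer.sh or `--consumer.config` for kafka-console-consumer.sh.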

Configuration files

ZooKeeper is bundled with the Kafka distribution, so it needs no separate installation.

  • zookeeper
mv /usr/local/kafka/config/zookeeper.properties /usr/local/kafka/config/zookeeper.properties.bak
cat > /usr/local/kafka/config/zookeeper.properties << EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/logging/zookeeper
dataLogDir=/data/logging/zookeeper/logs
clientPort=2181
EOF
echo "0" > /data/logging/zookeeper/myid

Startup script

cat > /etc/systemd/system/zookeeper.service << EOF
[Unit]
Description=ZooKeeper Service
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Environment=ZOO_LOG_DIR=/data/logging/zookeeper/logs
PIDFile=/data/logging/zookeeper/zookeeper_server.pid
User=kafka
Group=kafka
ExecStart=/usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties
#RestartSec=15
#LimitNOFILE=65536
#OOMScoreAdjust=-999
Type=simple
Restart=on-failure

[Install]
WantedBy=default.target
EOF
  • kafka — the broker settings below go into /usr/local/kafka/config/server-scram.properties, the file referenced by the systemd unit:
broker.id=1
listeners=SASL_PLAINTEXT://:9092
advertised.listeners=SASL_PLAINTEXT://172.16.100.151:9092

log.dirs=/data/logging/kafka/
zookeeper.connect=172.16.100.151:2181
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:markadmin;User:marksugar

num.network.threads=9
num.io.threads=16
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0

num.partitions=1
#auto.create.topics.enable=true
default.replication.factor=1

Startup script

cat > /etc/systemd/system/kafka.service << EOF
[Unit]
Description=kafka Service
After=network.target syslog.target

[Service]
Environment=LOG_DIR=/data/logging/kafka/logs
SyslogIdentifier=kafka

# resource limits
LimitFSIZE=infinity
LimitCPU=infinity
LimitAS=infinity
LimitMEMLOCK=infinity
LimitNOFILE=64000
LimitNPROC=64000


User=kafka
Group=kafka
Type=simple
Restart=on-failure
Environment="KAFKA_OPTS=-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_server_jaas.conf"
Environment="PATH=${PATH}:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
ExecStart=/usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server-scram.properties
ExecStop=/usr/local/kafka/bin/kafka-server-stop.sh

[Install]
WantedBy=default.target
EOF

Adjust the JVM heap in bin/kafka-server-start.sh, and set JMX port 9999 for kafka-eagle:

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    #export KAFKA_HEAP_OPTS="-server -Xms4G -Xmx4G -XX:PermSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:InitiatingHeapOccupancyPercent=70"
    export KAFKA_HEAP_OPTS="-server -Xms4G -Xmx4G -XX:PermSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=200"
    export JMX_PORT="9999"
fi
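The `if [ "x$KAFKA_HEAP_OPTS" = "x" ]` guard only applies the 4G default when the caller supplied nothing, so the heap can still be overridden per invocation without editing the script again. A small illustration of the idiom (with a dummy default, not the real values):

```shell
# The "x$VAR" = "x" test is a portable empty-or-unset check; the
# function assigns a default only when the caller set nothing.
set_heap() {
  if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xms4G -Xmx4G"
  fi
}

unset KAFKA_HEAP_OPTS
set_heap
echo "$KAFKA_HEAP_OPTS"    # default applied: -Xms4G -Xmx4G

KAFKA_HEAP_OPTS="-Xms512M -Xmx512M"
set_heap
echo "$KAFKA_HEAP_OPTS"    # caller's value kept: -Xms512M -Xmx512M
```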

Then add JAVA_HOME=/usr/local/java near the top of /usr/local/kafka/bin/kafka-run-class.sh.

  • Start the services
chown -R kafka.kafka /usr/local/kafka_2.12-2.5.0/
systemctl start zookeeper
systemctl enable zookeeper
systemctl status zookeeper

Create the users

/usr/local/kafka/bin/kafka-configs.sh --zookeeper 172.16.100.151:2181 --alter \
  --add-config 'SCRAM-SHA-256=[iterations=8192,password=linuxea.com],SCRAM-SHA-512=[password=linuxea.com]' \
  --entity-type users --entity-name marksugar

/usr/local/kafka/bin/kafka-configs.sh --zookeeper 172.16.100.151:2181 --alter \
  --add-config 'SCRAM-SHA-256=[iterations=8192,password=MwMzA0MGIwZjMwMjg3MjY4NWE2ZGFmOG],SCRAM-SHA-512=[password=MwMzA0MGIwZjMwMjg3MjY4NWE2ZGFmOG]' \
  --entity-type users --entity-name markadmin

Then start Kafka in the foreground as the kafka user to debug:

chown -R kafka.kafka /usr/local/kafka*
sudo -u kafka KAFKA_OPTS=-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_server_jaas.conf /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server-scram.properties

Once it runs cleanly, stop the foreground process and enable it at boot:

systemctl start kafka
systemctl enable kafka
systemctl status kafka

Once Kafka is secured with SCRAM, kafka-eagle becomes a good choice for monitoring it.

Reference

https://linuxea.com/2678.html
