Author: Zhang Yanying, O&M architect at the Shandong branch of a telecom systems integration company, cloud-native enthusiast, currently focused on cloud-native operations.
Prerequisites
- This series of documents is intended for small and medium scale production environments.
Basic security configuration
- Update the operating system and reboot
[root@k8s-master-0 ~]# yum update
[root@k8s-master-0 ~]# reboot
- Install dependency packages
[root@k8s-master-0 ~]# yum install socat conntrack ebtables ipset
Baseline hardening configuration
- Baseline scanning standards and tools differ from enterprise to enterprise, so configure the items in this section according to the remediation requirements in your own vulnerability scan report.
- If there is interest, we may share the automated baseline-hardening scripts we use in a later post; a rough sketch of the kind of steps they cover follows below.
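The example steps below are assembled from common Kubernetes prerequisites rather than taken from our actual script; treat them as an illustration only:

# Illustrative hardening / prerequisite steps -- adjust to your own scan report
# Disable swap (required by the kubelet)
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Set SELinux to permissive and disable firewalld (common for internal clusters)
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
systemctl disable --now firewalld

# Kernel parameters commonly required for Kubernetes networking
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system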
Docker installation and configuration
For the container runtime, we conservatively chose Docker 19.03 in our production environment; if you are installing fresh, simply picking the latest version is also fine.
- Configure the Docker yum repository
[root@k8s-master-0 ~]# vi /etc/yum.repos.d/docker.repo
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/$releasever/$basearch/stable
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg
enabled=1

[root@k8s-master-0 ~]# yum clean all
[root@k8s-master-0 ~]# yum makecache
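Before pinning a specific version in the install step below, you can list the versions the repository provides:

# List the docker-ce versions available in the repo, newest first
[root@k8s-master-0 ~]# yum list docker-ce --showduplicates | sort -r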
- Create the Docker configuration directory and configuration file
[root@k8s-master-0 ~]# mkdir -p /etc/docker/
[root@k8s-master-0 ~]# vi /etc/docker/daemon.json
{
  "data-root": "/data/docker",
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "log-opts": {
    "max-size": "5m",
    "max-file": "3"
  },
  "exec-opts": ["native.cgroupdriver=systemd"]
}
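Since data-root points to /data/docker, make sure that directory (and the data disk behind it, if you use one) exists before Docker starts:

# Create the Docker data directory in advance
[root@k8s-master-0 ~]# mkdir -p /data/docker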
- Install Docker
[root@k8s-master-0 ~]# yum install docker-ce-19.03.15-3.el7 docker-ce-cli-19.03.15-3.el7 -y
- Start the service and enable it at boot
[root@k8s-master-0 ~]# systemctl restart docker.service && systemctl enable docker.service
- Verify
[root@k8s-master-0 ~]# docker version
Client: Docker Engine - Community
 Version:           19.03.15
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        99e3ed8919
 Built:             Sat Jan 30 03:17:57 2021
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.15
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       99e3ed8919
  Built:            Sat Jan 30 03:16:33 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
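Optionally, confirm that daemon.json took effect; the cgroup driver and data root should match what we configured:

[root@k8s-master-0 ~]# docker info | grep -E 'Cgroup Driver|Docker Root Dir'
 Cgroup Driver: systemd
 Docker Root Dir: /data/docker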
Installing and configuring the load balancer
There are three solutions:
- Use the elastic load balancing service provided by a public or private cloud platform
- Configure the ports that the listeners listen on

  Service      Protocol   Port
  apiserver    TCP        6443
  ks-console   TCP        30880
  http         TCP        80
  https        TCP        443
- Build your own load balancer with HAProxy or Nginx (the option chosen here)
- Deploy HAProxy with the solution that ships with KubeSphere
- Supported since KubeKey v1.2.1
- See the document Create a Highly Available Cluster Using KubeKey's Built-in HAProxy; a minimal snippet based on it is shown below
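We did not take this route; for reference only, the document above enables it with a single field in the KubeKey configuration file. The snippet is a sketch based on that document, so verify the field name and values against your KubeKey version:

spec:
  controlPlaneEndpoint:
    # Let KubeKey deploy an HAProxy instance on every worker node
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""      # left empty when the internal load balancer is used
    port: 6443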
- Install the packages (on all load balancer nodes)
[root@k8s-master-0 ~]# yum install haproxy keepalived
- Configure HAProxy (all load balancer nodes, identical configuration)
- Edit the configuration file
[root@k8s-master-0 ~]# vi /etc/haproxy/haproxy.cfg
- Configuration example
global
    log /dev/log local0 warning
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    log global
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server kube-apiserver-1 192.168.9.4:6443 check # Replace the IP address with your own.
    server kube-apiserver-2 192.168.9.5:6443 check # Replace the IP address with your own.
    server kube-apiserver-3 192.168.9.6:6443 check # Replace the IP address with your own.

frontend ks-console
    bind *:30880
    mode tcp
    option tcplog
    default_backend ks-console

backend ks-console
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server kube-apiserver-1 192.168.9.4:30880 check # Replace the IP address with your own.
    server kube-apiserver-2 192.168.9.5:30880 check # Replace the IP address with your own.
    server kube-apiserver-3 192.168.9.6:30880 check # Replace the IP address with your own.
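Before starting the service, the configuration file can be syntax-checked with HAProxy's check mode:

# Validate the configuration; haproxy exits non-zero on errors
[root@k8s-master-0 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg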
- Start the service and enable it at boot (on all load balancer nodes)
[root@k8s-master-0 ~]# systemctl restart haproxy && systemctl enable haproxy
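A quick check that both frontends are listening:

[root@k8s-master-0 ~]# ss -lntp | grep -E '6443|30880'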
- Configure Keepalived
- Edit the configuration file (on all load balancer nodes)
[root@k8s-master-0 ~]# vi /etc/keepalived/keepalived.conf
- Configuration example for LB node 1
global_defs {
  notification_email {
  }
  router_id LVS_DEVEL
  vrrp_skip_check_adv_addr
  vrrp_garp_interval 0
  vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}

vrrp_instance haproxy-vip {
  state MASTER                  # Initial state of the primary node
  priority 100                  # Priority; the primary node gets the higher value
  interface eth0                # NIC name; replace according to your environment
  virtual_router_id 60
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass 1111
  }
  unicast_src_ip 192.168.9.2    # IP address of eth0 on this node
  unicast_peer {
    192.168.9.3                 # IP address of SLB node 2
  }
  virtual_ipaddress {
    192.168.9.1/24              # VIP address
  }
  track_script {
    chk_haproxy
  }
}
- Configuration example for LB node 2
global_defs {
  notification_email {
  }
  router_id LVS_DEVEL
  vrrp_skip_check_adv_addr
  vrrp_garp_interval 0
  vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}

vrrp_instance haproxy-vip {
  state BACKUP                  # Initial state of the backup node
  priority 99                   # Priority; lower on the backup node than on the primary
  interface eth0                # NIC name; replace according to your environment
  virtual_router_id 60
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass 1111
  }
  unicast_src_ip 192.168.9.3    # IP address of eth0 on this node
  unicast_peer {
    192.168.9.2                 # IP address of SLB node 1
  }
  virtual_ipaddress {
    192.168.9.1/24              # VIP address
  }
  track_script {
    chk_haproxy
  }
}
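The chk_haproxy health check relies on killall, which comes from the psmisc package and may be absent on a minimal installation; install it on both load balancer nodes if needed:

# killall (used by chk_haproxy) is provided by psmisc
[root@k8s-master-0 ~]# yum install psmisc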
- Start the service and enable it at boot (on all load balancer nodes)
[root@k8s-master-0 ~]# systemctl restart keepalived && systemctl enable keepalived
- Verify
- Check the VIP (on the load balancer node)
[root@k8s-slb-0 ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 52:54:9e:27:38:c8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.9.2/24 brd 192.168.9.255 scope global noprefixroute dynamic eth0
       valid_lft 73334sec preferred_lft 73334sec
    inet 192.168.9.1/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::510e:f96:98b2:af40/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
- Verify connectivity to the VIP (from the other k8s-master nodes)
[root@k8s-master-0 ~]# ping -c 4 192.168.9.1
PING 192.168.9.1 (192.168.9.1) 56(84) bytes of data.
64 bytes from 192.168.9.1: icmp_seq=1 ttl=64 time=0.664 ms
64 bytes from 192.168.9.1: icmp_seq=2 ttl=64 time=0.354 ms
64 bytes from 192.168.9.1: icmp_seq=3 ttl=64 time=0.339 ms
64 bytes from 192.168.9.1: icmp_seq=4 ttl=64 time=0.304 ms

--- 192.168.9.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.304/0.415/0.664/0.145 ms
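Beyond ICMP, you can also check that the TCP ports published by HAProxy answer on the VIP (a sketch using bash's built-in /dev/tcp; the Kubernetes backends are not installed yet, so this only proves the VIP and the HAProxy frontends are reachable):

[root@k8s-master-0 ~]# timeout 3 bash -c '</dev/tcp/192.168.9.1/6443'  && echo "6443 reachable"
[root@k8s-master-0 ~]# timeout 3 bash -c '</dev/tcp/192.168.9.1/30880' && echo "30880 reachable"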
Installing K8s with KubeSphere
- Download KubeKey
Here KubeKey is installed on the master-0 node; it could also be installed on a dedicated O&M management node.
# Use the download source for mainland China
[root@k8s-master-0 ~]# export KKZONE=cn

# Download KubeKey
[root@k8s-master-0 ~]# curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -

# Make kk executable (optional)
[root@k8s-master-0 ~]# chmod +x kk
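A quick sanity check of the downloaded binary (assuming this KubeKey release provides a version subcommand):

[root@k8s-master-0 ~]# ./kk version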
- Create the sample configuration file config-sample.yaml with default settings
[root@k8s-master-0 ~]# ./kk create config --with-kubesphere v3.2.1 --with-kubernetes v1.20.4
- --with-kubesphere specifies KubeSphere version v3.2.1
- --with-kubernetes specifies Kubernetes version v1.20.4
- Edit the configuration file according to the plan
- vi config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: k8s-master-0, address: 192.168.9.3, internalAddress: 192.168.9.3, user: root, password: P@ssw0rd@123}
  - {name: k8s-master-1, address: 192.168.9.4, internalAddress: 192.168.9.4, user: root, password: P@ssw0rd@123}
  - {name: k8s-master-2, address: 192.168.9.5, internalAddress: 192.168.9.5, user: root, password: P@ssw0rd@123}
  - {name: k8s-node-0, address: 192.168.9.6, internalAddress: 192.168.9.6, user: root, password: P@ssw0rd@123}
  - {name: k8s-node-1, address: 192.168.9.7, internalAddress: 192.168.9.7, user: root, password: P@ssw0rd@123}
  - {name: k8s-node-2, address: 192.168.9.8, internalAddress: 192.168.9.8, user: root, password: P@ssw0rd@123}
  roleGroups:
    etcd:
    - k8s-master-0
    - k8s-master-1
    - k8s-master-2
    control-plane:
    - k8s-master-0
    - k8s-master-1
    - k8s-master-2
    worker:
    - k8s-node-0
    - k8s-node-1
    - k8s-node-2
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "192.168.9.1"
    port: 6443
  kubernetes:
    version: v1.20.4
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
.... (the remainder is KubeSphere configuration, which is not covered in this article and is omitted here)
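kk logs in to every host listed under hosts over SSH with the user and password given there. A quick pre-flight loop (a sketch assuming sshpass is installed, e.g. from EPEL) can catch typos in the host list before the installation starts:

# Confirm root SSH access to every node declared in config-sample.yaml
for ip in 192.168.9.3 192.168.9.4 192.168.9.5 192.168.9.6 192.168.9.7 192.168.9.8; do
  sshpass -p 'P@ssw0rd@123' ssh -o StrictHostKeyChecking=no root@${ip} hostname
done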
- Notes on the key configuration items
- hosts: configures each K8s cluster node's name, IP address, admin user, and admin password
- roleGroups
- etcd: the names of the etcd nodes
- control-plane: the names of the master (control plane) nodes
- worker: the names of the worker nodes
- controlPlaneEndpoint
- domain: the domain name that maps to the load balancer IP, usually in the form lb.clusterName
- address: the IP address of the load balancer (here, the VIP configured above)
- kubernetes
- clusterName: the cluster name of the Kubernetes cluster
- Install the KubeSphere and Kubernetes cluster
[root@k8s-master-0 ~]# ./kk create cluster -f config-sample.yaml
- Verify the installation results
- Watch the installation process
[root@k8s-master-0 ~]# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
- Verify the cluster status
After the installation completes, you will see output like the following:
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.9.2:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             20xx-xx-xx xx:xx:xx
#####################################################
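In addition to the console, the cluster can be checked quickly from the command line on master-0:

# All nodes should be Ready and the system pods Running
[root@k8s-master-0 ~]# kubectl get nodes -o wide
[root@k8s-master-0 ~]# kubectl get pods --all-namespaces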
References
- Multi-node Installation
- Create a Highly Available K8s Cluster Using Keepalived and HAProxy
Follow-up
The next article will cover the next step on the road to K8s production practice based on KubeSphere: persistent storage with GlusterFS. Stay tuned.