Kubernetes v1.14.0 HA Master Cluster Deployment (kubeadm, offline install)

July 9, 2023

Cluster plan:

  • Distribution: CentOS 7
  • Container runtime: Docker
  • Kernel: 4.18.12-1.el7.elrepo.x86_64
  • Version: Kubernetes 1.14.0
  • Network plugin: Calico
  • kube-proxy mode: IPVS
  • Master HA: HAProxy + Keepalived + LVS
  • DNS add-on: CoreDNS
  • Metrics add-on: metrics-server
  • Dashboard: kubernetes-dashboard

Kubernetes Cluster Setup

Hostname   Role     IP
master1    master   192.168.56.103
master2    master   192.168.56.104
master3    master   192.168.56.105
node1      worker   192.168.56.106
node2      worker   192.168.56.107
node3      worker   192.168.56.108
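The hostnames and IPs above need to resolve on every node. A small helper (hypothetical, pure shell) can generate the `/etc/hosts` lines from the table:

```shell
# Generate /etc/hosts entries for the cluster nodes listed above.
# The name:IP pairs mirror the table; append the output to /etc/hosts
# on every machine.
nodes="master1:192.168.56.103 master2:192.168.56.104 master3:192.168.56.105 node1:192.168.56.106 node2:192.168.56.107 node3:192.168.56.108"
for entry in $nodes; do
  name=${entry%%:*}
  ip=${entry##*:}
  printf '%s\t%s\n' "$ip" "$name"
done
```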

1. Preparing the offline packages (download them on a server with Internet access)

# Configure the yum cache: cachedir sets the cache path; keepcache=1 keeps downloaded packages after installation
cat /etc/yum.conf  
[main]
cachedir=/home/yum
keepcache=1
...
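With keepcache=1 set as above, every package yum installs leaves its RPM under cachedir, which is what makes the offline bundle possible. A sketch of collecting the cache for transfer (CACHE_DIR and OUT are illustrative paths, not from the article):

```shell
# Bundle the RPMs yum cached under cachedir (per the yum.conf above) so they
# can be copied to the offline hosts.
CACHE_DIR=${CACHE_DIR:-/home/yum}
OUT=${OUT:-/tmp/offline-rpms}
mkdir -p "$OUT"
# flatten all cached RPMs into one directory (no error if the cache is empty)
find "$CACHE_DIR" -name '*.rpm' -exec cp {} "$OUT"/ \; 2>/dev/null || true
tar czf "$OUT.tar.gz" -C "$(dirname "$OUT")" "$(basename "$OUT")"
# on the offline host:
#   tar xzf offline-rpms.tar.gz && yum localinstall -y offline-rpms/*.rpm
```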

# Install net-tools (provides ifconfig)
yum install net-tools -y

# Time synchronization
yum install -y ntpdate

# Install Docker (18.09 is recommended for Kubernetes 1.14)
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
 --add-repo \
 https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
## List available Docker versions
yum list docker-ce --showduplicates | sort -r
## Install a specific version
sudo yum install docker-ce-<VERSION>

# Install lrzsz so XShell can upload/download files with the rz/sz commands
yum install lrzsz -y

# Install keepalived and haproxy (plus socat and ipvsadm)
yum install -y socat keepalived ipvsadm haproxy

# Install the Kubernetes components
# (requires the kubernetes yum repo, or the offline RPMs prepared earlier)
yum install -y kubelet-1.14.0 kubeadm-1.14.0 kubectl-1.14.0

2. Configuring the nodes (all machines)

  • Disable swap, which kubelet requires
    swapoff -a
    # comment out the swap entry in /etc/fstab, then verify:
    cat /etc/fstab
  • Configure L2-bridged packets to be filtered by iptables FORWARD rules when forwarded; CNI plugins require this. See Network Plugin Requirements for details.
    echo """
    vm.swappiness = 0
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    """ > /etc/sysctl.conf
    sysctl -p
    

    If CentOS 7 reports "No such file or directory" for bridge-nf-call-ip6tables, simply run `modprobe br_netfilter` first.

  • Synchronize the time
    ntpdate -u ntp.api.bz
    
  • Upgrade the kernel to the latest version (optional; an offline kernel package has been prepared). Reference: "CentOS 7 kernel upgrade".

    grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
    grubby --default-kernel
    grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
    
  • Reboot, confirm the kernel version, then enable IPVS (if the kernel was not upgraded, drop ip_vs_fo)
    uname -a
    cat > /etc/sysconfig/modules/ipvs.modules <<EOF
    #!/bin/bash
    ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
    for kernel_module in \${ipvs_modules}; do
        /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
        if [ \$? -eq 0 ]; then
            /sbin/modprobe \${kernel_module}
        fi
    done
    EOF
    chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
    

    If `sysctl -p` errors out, run `modprobe br_netfilter`; see "CentOS 7: bridge-nf-call-ip6tables No such file or directory".
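After running the module script, the loaded set can be double-checked without lsmod by reading /proc/modules directly (a quick sketch; the list below is a subset of the modules above):

```shell
# Check which of the core IPVS modules are currently loaded by reading
# /proc/modules directly (equivalent to `lsmod | grep ip_vs`).
required="ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
for m in $required; do
  if grep -q "^$m " /proc/modules 2>/dev/null; then
    echo "$m loaded"
  else
    echo "$m missing"
  fi
done
```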

  • All machines need the following system parameters set in /etc/sysctl.d/k8s.conf
    # https://github.com/moby/moby/issues/31208
    # ipvsadm -l --timeout
    # Fix long-connection timeouts under IPVS mode; any keepalive time below 900 works
    cat <<EOF > /etc/sysctl.d/k8s.conf
    net.ipv4.tcp_keepalive_time = 600
    net.ipv4.tcp_keepalive_intvl = 30
    net.ipv4.tcp_keepalive_probes = 10
    EOF
    sysctl --system

3. Deploying keepalived and haproxy

    The keepalived/haproxy deployment is scripted. Review the cluster IPs, then run on each master:

    cat ./cluster-info
    bash -c "$(curl -fsSL https://raw.githubusercontent.com/hnbcao/kubeadm-ha-master/v1.14.0/keepalived-haproxy.sh)"
    

4. Deploying the HA masters

    The HA master deployment is automated. Run the commands below on master1, taking care to adjust the IPs.

    The script performs the following steps:

    1) Reset any previous kubelet setup

    kubeadm reset -f
    rm -rf /etc/kubernetes/pki/
    

    2) Write the cluster configuration file and initialize the kubelet on master1

    echo """
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    kubernetesVersion: v1.14.0
    controlPlaneEndpoint: "${VIP}:8443"
    maxPods: 100
    networkPlugin: cni
    imageRepository: registry.aliyuncs.com/google_containers
    apiServer:
      certSANs:
      - ${CP0_IP}
      - ${CP1_IP}
      - ${CP2_IP}
      - ${VIP}
    networking:
      # This CIDR is a Calico default. Substitute or remove for your CNI provider.
      podSubnet: ${CIDR}
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: ipvs
    """ > /etc/kubernetes/kubeadm-config.yaml
    kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
    mkdir -p $HOME/.kube
    cp -f /etc/kubernetes/admin.conf ${HOME}/.kube/config
    
    • On the default gateway: if a machine has multiple NICs, first switch the default gateway to the NIC the cluster uses, or problems such as etcd failing to connect may occur. (On my VMs, one NIC cannot reach the other nodes. Use `route` to inspect the current gateway, `route del default` to remove it, and `route add default dev eth0` to set it, where eth0 is the cluster NIC.)

    3) Copy the certificates to master2 and master3

    for index in 1 2; do
      ip=${IPS[${index}]}
      ssh $ip "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/"
      scp /etc/kubernetes/pki/ca.crt $ip:/etc/kubernetes/pki/ca.crt
      scp /etc/kubernetes/pki/ca.key $ip:/etc/kubernetes/pki/ca.key
      scp /etc/kubernetes/pki/sa.key $ip:/etc/kubernetes/pki/sa.key
      scp /etc/kubernetes/pki/sa.pub $ip:/etc/kubernetes/pki/sa.pub
      scp /etc/kubernetes/pki/front-proxy-ca.crt $ip:/etc/kubernetes/pki/front-proxy-ca.crt
      scp /etc/kubernetes/pki/front-proxy-ca.key $ip:/etc/kubernetes/pki/front-proxy-ca.key
      scp /etc/kubernetes/pki/etcd/ca.crt $ip:/etc/kubernetes/pki/etcd/ca.crt
      scp /etc/kubernetes/pki/etcd/ca.key $ip:/etc/kubernetes/pki/etcd/ca.key
      scp /etc/kubernetes/admin.conf $ip:/etc/kubernetes/admin.conf
      scp /etc/kubernetes/admin.conf $ip:~/.kube/config
    
    done
    

    4) Join master2 and master3 as control-plane nodes

    JOIN_CMD=`kubeadm token create --print-join-command`
    for index in 1 2; do
      ip=${IPS[${index}]}
      ssh ${ip} "${JOIN_CMD} --experimental-control-plane"
    done
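The output of `kubeadm token create --print-join-command` is a ready-to-run `kubeadm join` line. The values below are made-up placeholders showing its shape; step 4 appends `--experimental-control-plane` so the node joins as a control-plane member rather than a worker:

```shell
# Illustrative only: the IP, token and hash are placeholders, not real values.
JOIN_CMD='kubeadm join 192.168.56.110:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:0000000000000000000000000000000000000000000000000000000000000000'
# On kubeadm 1.14, adding --experimental-control-plane turns a worker join
# into a control-plane join:
echo "$JOIN_CMD --experimental-control-plane"
```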
    

    The full script:

    # Deploy the HA masters
     
    bash -c "$(curl -fsSL https://raw.githubusercontent.com/hnbcao/kubeadm-ha-master/v1.14.0/kube-ha.sh)"
    

5. Joining the nodes

    • Every node needs keepalived and haproxy configured
      #/etc/haproxy/haproxy.cfg
      global
          log         127.0.0.1 local2
          chroot      /var/lib/haproxy
          pidfile     /var/run/haproxy.pid
          maxconn     4000
          user        haproxy
          group       haproxy
          daemon
          stats socket /var/lib/haproxy/stats
      
      defaults
          mode                    tcp
          log                     global
          option                  tcplog
          option                  dontlognull
          option                  redispatch
          retries                 3
          timeout queue           1m
          timeout connect         10s
          timeout client          1m
          timeout server          1m
          timeout check           10s
          maxconn                 3000
      
      listen stats
          mode   http
          bind :10086
          stats   enable
          stats   uri     /admin?stats
          stats   auth    admin:admin
          stats   admin   if TRUE
          
      frontend  k8s_https *:8443
          mode      tcp
          maxconn      2000
          default_backend     https_sri
          
      backend https_sri
          balance      roundrobin
          server master1-api ${MASTER1_IP}:6443  check inter 10000 fall 2 rise 2 weight 1
          server master2-api ${MASTER2_IP}:6443  check inter 10000 fall 2 rise 2 weight 1
          server master3-api ${MASTER3_IP}:6443  check inter 10000 fall 2 rise 2 weight 1
      
      #/etc/keepalived/keepalived.conf 
      global_defs {
         router_id LVS_DEVEL
      }
      
      vrrp_script check_haproxy {
          script "/etc/keepalived/check_haproxy.sh"
          interval 3
      }
      
      vrrp_instance VI_1 {
          state MASTER
          interface eth0
          virtual_router_id 80
          priority 100
          advert_int 1
          authentication {
              auth_type PASS
              auth_pass just0kk
          }
          virtual_ipaddress {
              ${VIP}/24
          }
          track_script {   
              check_haproxy
          }
      }
      
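The keepalived config above references /etc/keepalived/check_haproxy.sh, which the article never shows. A minimal assumed implementation (the keepalived-haproxy.sh script presumably installs something similar): report failure when no haproxy process exists, so keepalived lowers this node's priority and the VIP fails over to a backup.

```shell
#!/bin/bash
# Assumed /etc/keepalived/check_haproxy.sh: keepalived runs this every 3s
# (interval 3 above); a non-zero exit marks the node unhealthy.
check_haproxy() {
  # healthy when an haproxy process exists
  pgrep -x haproxy >/dev/null 2>&1
}
if check_haproxy; then
  status=0
else
  status=1
fi
echo "haproxy health: $status"
# In the real script, finish with: exit "$status"
```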
      

    Note: ${MASTER1_IP}, ${MASTER2_IP}, ${MASTER3_IP} and ${VIP} in the two configs above must be replaced with the corresponding IPs of your own cluster.

    • Enable and restart keepalived and haproxy
      systemctl stop keepalived
      systemctl enable keepalived
      systemctl start keepalived
      systemctl stop haproxy
      systemctl enable haproxy
      systemctl start haproxy
      
    • Get the node join command
      # Run this on a master, then execute the printed command on each node
      kubeadm token create --print-join-command
      

6. Finishing the installation

    The cluster itself is now installed, but a CNI plugin is still required; Calico is recommended here for its performance.
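Installing Calico is a single manifest apply. The URL below is an assumption based on the documented location for the Calico releases contemporary with Kubernetes 1.14 (v3.7); verify it against the Calico docs, and make sure the pod CIDR in the manifest matches the podSubnet set in kubeadm-config.yaml:

```shell
# Assumed manifest URL for Calico v3.7 (check the Calico docs for the
# version matching your cluster):
kubectl apply -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml
# verify that the calico-node pods come up on every node:
kubectl get pods -n kube-system -l k8s-app=calico-node
```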
