Step by Step: Configuring kube-vip High Availability on a kubeadm Cluster

January 21, 2024

Before installing with kubeadm, we need to do some basic system initialization: firewall, hostname, swap, time service, kernel modules and forwarding, and so on, as follows:

systemctl disable firewalld
systemctl stop firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
timedatectl set-timezone Asia/Shanghai
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
sed -i 's/0.centos.pool.ntp.org/ntp1.aliyun.com/g' /etc/chrony.conf
systemctl enable --now chronyd
systemctl restart chronyd
chronyc activity
chronyc sources
swapoff -a
sed -i '/swap/d' /etc/fstab
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
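Note that /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded (we load it again further below); if the echo above fails with "No such file or directory", load the module first:

modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables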

The kernel network parameters for the two netfilter backends, iptables and IPVS, also need to be configured:


cat << EOF >> /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
# The following kernel parameters help avoid idle-timeout problems for long-lived connections in IPVS mode
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv4.tcp_keepalive_time = 600
EOF
sysctl -p /etc/sysctl.conf
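To confirm the parameters took effect, you can query them individually, for example:

# each of these should print the value set above
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.ipv4.ip_forward
sysctl vm.swappiness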

Load the IPVS kernel modules

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
yum install ipset ipvsadm -y
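Note: on newer kernels (roughly 4.19 and later, as well as the CentOS 8 kernel), nf_conntrack_ipv4 has been merged into nf_conntrack, so modprobe nf_conntrack_ipv4 will fail there. In that case use nf_conntrack in the module script instead:

# replacement for the last modprobe line on newer kernels
modprobe -- nf_conntrack
lsmod | grep -e ip_vs -e nf_conntrack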

Set the hostnames and add them to the local hosts file

echo "172.16.100.151 master1" >> /etc/hosts
echo "172.16.100.152 master2" >> /etc/hosts
echo "172.16.100.153 master3" >> /etc/hosts
echo "172.16.100.154 node1" >> /etc/hosts
echo "172.16.100.123 k8sapi.local" >> /etc/hosts

172.16.100.123 k8sapi.local is the hosts mapping for the kube-vip VIP.
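The hostname itself still needs to be set on each node to match these entries, for example:

# run the matching command on each node
hostnamectl set-hostname master1   # on 172.16.100.151
hostnamectl set-hostname master2   # on 172.16.100.152
hostnamectl set-hostname master3   # on 172.16.100.153
hostnamectl set-hostname node1     # on 172.16.100.154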

Install other packages

CentOS 7:

yum install iproute

CentOS 8:

yum install iproute-tc -y

Configure module autoloading on boot

modprobe br_netfilter
cat >> /etc/rc.d/rc.local << EOF
for file in /etc/sysconfig/modules/*.modules ; do
[ -x \$file ] && \$file
done
EOF
echo "modprobe br_netfilter" > /etc/sysconfig/modules/br_netfilter.modules
chmod 755 /etc/sysconfig/modules/br_netfilter.modules
lsmod | grep br_netfilter
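Note that on systemd-based systems rc.local is only executed at boot if the file itself is executable, so also make sure of that:

chmod +x /etc/rc.d/rc.local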

2. Install containerd

If you are running CentOS 7, libseccomp needs to be upgraded to 2.5 to work with containerd 1.6:

rpm -qa |grep libseccomp
yum remove libseccomp-2.3.1 -y
wget http://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm
rpm -ivh libseccomp-2.5.1-1.el8.x86_64.rpm
rpm -qa | grep libseccomp
containerd -v
runc -v

Because containerd depends on the underlying runc tool, we also need to install runc first. However, containerd provides a bundle, cri-containerd-cni-${VERSION}.${OS}-${ARCH}.tar.gz, that already contains the required dependencies, and it is strongly recommended to install from this package, otherwise an incompatible runc version may cause problems. First download the 1.6.10 release tarball from the releases page:

wget https://github.com/containerd/containerd/releases/download/v1.6.10/cri-containerd-1.6.10-linux-amd64.tar.gz

If the direct download is slow or blocked, try the following mirror:

wget https://ghdl.feizhuqwq.cf/https://github.com/containerd/containerd/releases/download/v1.6.10/cri-containerd-1.6.10-linux-amd64.tar.gz

Install 1.6.10:

mkdir /usr/local/containerd-1.6.10 -p
curl https://ghdl.feizhuqwq.cf/https://github.com/containerd/containerd/releases/download/v1.6.10/cri-containerd-1.6.10-linux-amd64.tar.gz | tar -xz -C /usr/local/containerd-1.6.10
cp /usr/local/containerd-1.6.10/usr/local/bin/* /usr/local/bin/ 
cp /usr/local/containerd-1.6.10/usr/local/sbin/* /usr/local/sbin/ 
chmod +x /etc/profile.d/containerd.sh
source  /etc/profile.d/containerd.sh
containerd -v
runc -v
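The two /etc/profile.d/containerd.sh lines above assume that file has already been created (it is not shipped in the cri-containerd tarball extracted here). Since the binaries were copied into /usr/local/bin and /usr/local/sbin, a minimal sketch of such a script only needs to make sure those directories are on PATH:

# /etc/profile.d/containerd.sh - minimal sketch, only needed if these directories are not already on PATH
export PATH=/usr/local/bin:/usr/local/sbin:$PATH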

CentOS 8

If you are on CentOS 8, there is no need to upgrade libseccomp; here a different version is installed: 1.6.26.

wget https://github.moeyy.xyz/https://github.com/containerd/containerd/releases/download/v1.6.26/cri-containerd-1.6.26-linux-amd64.tar.gz
mkdir /usr/local/containerd-1.6 -p
tar xf cri-containerd-1.6.26-linux-amd64.tar.gz -C /usr/local/containerd-1.6
cp /usr/local/containerd-1.6/usr/local/bin/* /usr/local/bin/ 
cp /usr/local/containerd-1.6/usr/local/sbin/* /usr/local/sbin/ 
cp /usr/local/containerd-1.6/etc/systemd/system/containerd.service /etc/systemd/system
chmod +x /etc/profile.d/containerd.sh
source  /etc/profile.d/containerd.sh
containerd -v
runc -v

After the installation above is complete, we make some basic changes to the configuration.

Generate the default configuration file:

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
We replace two settings: the pause (sandbox) image and SystemdCgroup, setting SystemdCgroup to true, which is also the officially recommended configuration.

registry.k8s.io/pause:3.6
is replaced with
registry.cn-hangzhou.aliyuncs.com/marksugar-k8s/pause:3.8

SystemdCgroup = false
is replaced with
SystemdCgroup = true

Additionally, adjust the storage location if you have a dedicated disk for it:

path = "/opt/containerd"

As follows:

mkdir /data/containerd/opt -p
confPath=/etc/containerd/config.toml
sed -i 's@registry.k8s.io/pause:3.6@registry.cn-hangzhou.aliyuncs.com/marksugar-k8s/pause:3.8@g' $confPath
sed -i 's@SystemdCgroup = false@SystemdCgroup = true@g' $confPath 
sed -i 's@path = "/opt/containerd"@path = "/data/containerd/opt"@g' $confPath
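You can verify the three substitutions with a quick grep:

# check the sandbox image, cgroup driver and the relocated opt path
grep -nE 'sandbox_image|SystemdCgroup|/data/containerd/opt' /etc/containerd/config.toml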

Enable containerd on boot:

cp /usr/local/containerd-1.6.10/etc/systemd/system/containerd.service /etc/systemd/system
ln -s  /usr/local/containerd-1.6.10/usr/local/bin/containerd  /usr/local/bin/containerd
systemctl daemon-reload
systemctl enable containerd --now
systemctl restart containerd 
containerd -v
ctr version
crictl config runtime-endpoint unix:///run/containerd/containerd.sock
crictl config image-endpoint unix:///run/containerd/containerd.sock
crictl  images
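To confirm the runtime is healthy end to end, you can check the CRI status and optionally pull a test image (the image below is just the pause image from the registry used earlier):

# RuntimeReady should report true
crictl info | grep -i runtimeready
crictl pull registry.cn-hangzhou.aliyuncs.com/marksugar-k8s/pause:3.8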

3. Add the Kubernetes yum repository

With the environment above prepared, we can now install kubeadm. Here it is installed from a specified yum repository:

cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

4. Configure kube-vip

References: https://kube-vip.io/docs/installation/static/ and https://github.com/kube-vip/kube-vip/blob/main/docs/install_static/index.md

kube-vip provides high availability for the cluster. First, generate the basic Kubernetes static Pod manifest on the master1 node.

Declare the network interface name and the VIP, and pull the image:

export VIP=172.16.100.123
export INTERFACE=eth0
ctr image pull uhub.service.ucloud.cn/marksugar-k8s/kube-vip:v0.6.3

Then run the following command to generate the manifest (some parameters have been adjusted from the defaults) and write it to /etc/kubernetes/manifests/kube-vip.yaml:

mkdir -p /etc/kubernetes/manifests/
cd /etc/kubernetes/manifests/
ctr run --rm --net-host uhub.service.ucloud.cn/marksugar-k8s/kube-vip:v0.6.3 vip \
/kube-vip manifest pod \
--interface $INTERFACE \
--vip $VIP \
--controlplane \
--services \
--arp \
--leaseDuration 30 \
--leaseRenewDuration 20 \
--leaseRetry 4 \
--leaderElection | tee  /etc/kubernetes/manifests/kube-vip.yaml

The generated manifest is as follows:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "6443"
    - name: vip_interface
      value: eth0
    - name: vip_cidr
      value: "32"
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_ddns
      value: "false"
    - name: svc_enable
      value: "true"
    - name: svc_leasename
      value: plndr-svcs-lock
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "30"
    - name: vip_renewdeadline
      value: "20"
    - name: vip_retryperiod
      value: "4"
    - name: vip_address
      value: 172.16.100.123
    - name: prometheus_server
      value: :2112
    image: ghcr.io/kube-vip/kube-vip:v0.6.3
    imagePullPolicy: Always
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/admin.conf
    name: kubeconfig
status: {}

The VIP is now the address ending in .123 that was set above, and the current node is the leader.

5. Initialize the cluster

5.1 Install the kubeadm packages

Then install kubeadm, kubelet and kubectl:

kubelet: runs on every node of the cluster and is responsible for starting Pods and managing containers
kubeadm: the Kubernetes bootstrap tool, used to initialize the cluster
kubectl: the Kubernetes command-line tool, used to deploy and manage applications and maintain cluster components

# --disableexcludes disables all repositories except kubernetes
yum makecache fast
yum install -y kubelet-1.26.9 kubeadm-1.26.9  kubectl-1.26.9  --disableexcludes=kubernetes
kubeadm  version

Output such as kubeadm version: &version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.9", GitCommit:"d1483fdf7a0578c83523bc1e2212a606a44fd71d", GitTreeState:"clean", BuildDate:"2023-09-13T11:31:28Z", GoVersion:"go1.20.8", Compiler:"gc", Platform:"linux/amd64"} means the installation succeeded.

The other dependencies installed are:

 kubeadm                  
 kubectl                  
 kubelet                  
 conntrack-tools          
 cri-tools                
 kubernetes-cni           
 libnetfilter_cthelper    
 libnetfilter_cttimeout   
 libnetfilter_queue       
 socat 

As you can see, v1.26.9 is installed here. Then enable kubelet on the master nodes so it starts on boot:

 systemctl enable --now kubelet

5.2 Initialization configuration

Once the initialization above has been completed on all three nodes and kubelet is installed, we can begin configuring the cluster on the master. When you run kubelet --help you will see that most command-line flags are marked DEPRECATED, because the official recommendation is to use --config to specify a configuration file containing those settings; see the official document "Set Kubelet parameters via a config file" for details. This is also how Kubernetes supports dynamic kubelet configuration; see "Reconfigure a Node's Kubelet in a Live Cluster". Print the default configuration used for cluster initialization with the following command on the master node:

The installation described by kubeadm.yaml mainly consists of the api-server, etcd, scheduler, controller-manager and coredns images.

kubeadm config print init-defaults --component-configs KubeletConfiguration > kubeadm.yaml

Then we modify the configuration.

1. advertiseAddress

advertiseAddress: 1.2.3.4
change to
advertiseAddress: 172.16.100.151

2. Modify the taints

nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master1
  taints:  # taint the master so that application Pods are not scheduled onto it
  - effect: "NoSchedule"
    key: "node-role.kubernetes.io/master"       

3. Modify imageRepository

imageRepository: uhub.service.ucloud.cn/marksugar-k8s

If you use your own image registry, you need to push at least the following images into that repository:

registry.k8s.io/kube-apiserver:v1.26.9
registry.k8s.io/kube-controller-manager:v1.26.9
registry.k8s.io/kube-scheduler:v1.26.9
registry.k8s.io/kube-proxy:v1.26.9
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.6-0
coredns/coredns:v1.9.3

4. Modify the version number

kubernetesVersion: 1.26.9

5. Specify the Pod subnet (podSubnet)

networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16

6. Switch kube-proxy to IPVS mode

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

7. Add controlPlaneEndpoint and the control-plane certSANs

controlPlaneEndpoint: k8sapi.local:6443
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
  certSANs:
  - k8sapi.local
  - master1
  - master2
  - master3
  - 172.16.100.151
  - 172.16.100.152
  - 172.16.100.153

Merging all of the above changes into a single file gives the following:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.100.151
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master1
  taints:
  - effect: "NoSchedule"
    key: "node-role.kubernetes.io/master"
---
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: uhub.service.ucloud.cn/marksugar-k8s
kind: ClusterConfiguration
kubernetesVersion: 1.26.9
controlPlaneEndpoint: k8sapi.local:6443
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
  certSANs:
  - k8sapi.local
  - master1
  - master2
  - master3
  - 172.16.100.151
  - 172.16.100.152
  - 172.16.100.153
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
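If you want a sanity check before touching the node, kubeadm supports a dry run against the same configuration file:

# renders what kubeadm would do without making persistent changes to the host
kubeadm init --config kubeadm.yaml --dry-run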

5.3 Prepare the images

Before initializing the cluster, you can use kubeadm config images pull --config kubeadm.yaml to pre-pull the container images Kubernetes needs on every node. Once the configuration file is ready, pull the images with:

kubeadm config images pull --config kubeadm.yaml

List the images to be prepared:

[root@master1 manifests]# kubeadm config images list --config=kubeadm.yaml
uhub.service.ucloud.cn/marksugar-k8s/kube-apiserver:v1.26.9
uhub.service.ucloud.cn/marksugar-k8s/kube-controller-manager:v1.26.9
uhub.service.ucloud.cn/marksugar-k8s/kube-scheduler:v1.26.9
uhub.service.ucloud.cn/marksugar-k8s/kube-proxy:v1.26.9
uhub.service.ucloud.cn/marksugar-k8s/pause:3.9
uhub.service.ucloud.cn/marksugar-k8s/etcd:3.5.6-0
uhub.service.ucloud.cn/marksugar-k8s/coredns:v1.9.3

Start pulling:

#!/bin/bash

images=`kubeadm config images list --config  kubeadm.yaml`

if [[ -n ${images} ]]
then
  echo "开始拉取镜像"
  for i in ${images};
    do 
      echo $i
      ctr -n k8s.io i pull $i;
  done
else
 echo "没有可拉取的镜像"
fi
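Save the script, for example as pull-images.sh, and run it on every node:

bash pull-images.sh
# afterwards the images should show up in the k8s.io namespace
ctr -n k8s.io images ls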

5.4 Initialize the cluster

Now we initialize the cluster with kubeadm init, pointing it at the configuration file:

kubeadm init --config kubeadm.yaml

If initialization fails at this point, check the logs, then reset and retry:

journalctl -u containerd -f
journalctl -u kubelet -f
kubeadm reset -f
sudo rm -rvf $HOME/.kube
systemctl stop kubelet containerd
systemctl start kubelet containerd

For an offline installation, the following images need to be prepared in advance:

uhub.service.ucloud.cn/marksugar-k8s/kube-apiserver:v1.26.9
uhub.service.ucloud.cn/marksugar-k8s/kube-controller-manager:v1.26.9
uhub.service.ucloud.cn/marksugar-k8s/kube-scheduler:v1.26.9
uhub.service.ucloud.cn/marksugar-k8s/kube-proxy:v1.26.9
uhub.service.ucloud.cn/marksugar-k8s/pause:3.9
uhub.service.ucloud.cn/marksugar-k8s/etcd:3.5.6-0
uhub.service.ucloud.cn/marksugar-k8s/coredns:v1.9.3
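A common way to ship them is to export the images to a tarball on a connected machine and import it on the offline nodes (a sketch; repeat the export for every image in the list above):

# on a machine with internet access
ctr -n k8s.io images export k8s-v1.26.9-images.tar \
  uhub.service.ucloud.cn/marksugar-k8s/kube-apiserver:v1.26.9 \
  uhub.service.ucloud.cn/marksugar-k8s/kube-proxy:v1.26.9
# copy the tarball over, then on the offline node
ctr -n k8s.io images import k8s-v1.26.9-images.tar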

A successful initialization ends like this:

[init] Using Kubernetes version: v1.26.9
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster

....


[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8sapi.local:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:51aef1df7ff570b7dcd5819dec7baa31f000bec600e60f7017abf37c32ab70fc \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8sapi.local:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:51aef1df7ff570b7dcd5819dec7baa31f000bec600e60f7017abf37c32ab70fc 

After it completes, follow the prompts:

[root@master1 manifests]#   mkdir -p $HOME/.kube
[root@master1 manifests]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master1 manifests]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master1 manifests]#   kubectl get node
NAME      STATUS     ROLES           AGE   VERSION
master1   NotReady   control-plane   82s   v1.26.9

The kube-vip Pod is running:

[root@master1 manifests]# kubectl  get pod -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-684b46fbf5-sbj6t          0/1     Pending   0          5m54s
kube-system   coredns-684b46fbf5-vzpzw          0/1     Pending   0          5m54s
kube-system   etcd-master1                      1/1     Running   0          5m58s
kube-system   kube-apiserver-master1            1/1     Running   0          5m59s
kube-system   kube-controller-manager-master1   1/1     Running   0          5m59s
kube-system   kube-proxy-m9xfz                  1/1     Running   0          5m55s
kube-system   kube-scheduler-master1            1/1     Running   0          5m59s
kube-system   kube-vip-master1                  1/1     Running   0          6m2s

At this point, kubectl -n kube-system get cm kubeadm-config -o yaml shows the master members participating in the cluster:

[root@master1 ~]# kubectl -n kube-system get cm kubeadm-config -o yaml
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      certSANs:
      - k8sapi.local
      - master1
      - master2
      - master3
      - 172.16.100.151
      - 172.16.100.152
      - 172.16.100.153
      extraArgs:
        authorization-mode: Node,RBAC
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controlPlaneEndpoint: k8sapi.local:6443

And the VIP is currently on master1:

[root@master1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 96:99:94:f9:e2:97 brd ff:ff:ff:ff:ff:ff
    inet 172.16.100.151/24 brd 172.16.100.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 172.16.100.123/32 scope global eth0
       valid_lft forever preferred_lft forever
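To double-check that the API server is reachable through the VIP, a quick probe from any node should answer ok (assuming the default anonymous access to the health endpoint):

curl -k https://k8sapi.local:6443/healthz
# expected output: ok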

6. Add the second master

Use the join command from the output above:

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8sapi.local:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:688334f4831391ce3fd87c10a969aea2794e0dd878da654fc99d5df82cc41124 \
    --control-plane 

Copy the certificates from the first node (151) to 152 and 153:

scp -r /etc/kubernetes/pki 172.16.100.152:/etc/kubernetes/
scp -r /etc/kubernetes/pki 172.16.100.153:/etc/kubernetes/
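As an alternative to copying the pki directory by hand, kubeadm can distribute the control-plane certificates itself via an encrypted Secret; this is not the approach used below, just an option:

# on master1: upload the certificates and print a certificate key
kubeadm init phase upload-certs --upload-certs
# on the joining master: append --certificate-key <key printed above> to the kubeadm join ... --control-plane command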

On 152, remove the node-specific certificates that were copied over, then join:

cd /etc/kubernetes/pki/
rm -rf apiserver*
rm -rf etcd/healthcheck-client.*
rm -rf etcd/server.*
rm -rf etcd/peer.*

  kubeadm join k8sapi.local:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:688334f4831391ce3fd87c10a969aea2794e0dd878da654fc99d5df82cc41124 \
    --control-plane 

Start joining:

[root@master2 ~]#   kubeadm join k8sapi.local:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:688334f4831391ce3fd87c10a969aea2794e0dd878da654fc99d5df82cc41124 --control-plane 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
error execution phase preflight: 
One or more conditions for hosting a new control plane instance is not satisfied.

failure loading certificate for front-proxy CA: couldn't load the certificate file /etc/kubernetes/pki/front-proxy-ca.crt: open /etc/kubernetes/pki/front-proxy-ca.crt: no such file or directory

Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.


To see the stack trace of this error execute with --v=5 or higher
[root@master2 ~]#   kubeadm join k8sapi.local:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:688334f4831391ce3fd87c10a969aea2794e0dd878da654fc99d5df82cc41124 --control-plane 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...

.....


This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Configure kubectl access:

[root@master2 pki]# mkdir -p $HOME/.kube
[root@master2 pki]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite ‘/root/.kube/config’? yes
[root@master2 pki]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master2 pki]# kubectl get pod -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS        AGE
kube-system   coredns-684b46fbf5-c8fhl          0/1     Pending   0               11m
kube-system   coredns-684b46fbf5-g7r5t          0/1     Pending   0               11m
kube-system   etcd-master1                      1/1     Running   4               11m
kube-system   etcd-master2                      1/1     Running   1               7m33s
kube-system   kube-apiserver-master1            1/1     Running   12              11m
kube-system   kube-apiserver-master2            1/1     Running   2               7m31s
kube-system   kube-controller-manager-master1   1/1     Running   7 (7m33s ago)   11m
kube-system   kube-controller-manager-master2   1/1     Running   1               7m35s
kube-system   kube-proxy-2lr26                  1/1     Running   1               7m43s
kube-system   kube-proxy-wm5vr                  1/1     Running   0               11m
kube-system   kube-scheduler-master1            1/1     Running   7 (7m31s ago)   11m
kube-system   kube-scheduler-master2            1/1     Running   1               7m33s
kube-system   kube-vip-master1                  1/1     Running   0               11m

6.1 kube-vip

Then we add kube-vip on this master as well:

export VIP=172.16.100.123
export INTERFACE=eth0
ctr image pull uhub.service.ucloud.cn/marksugar-k8s/kube-vip:v0.6.3
ctr run --rm --net-host uhub.service.ucloud.cn/marksugar-k8s/kube-vip:v0.6.3 vip \
/kube-vip manifest pod \
--interface $INTERFACE \
--vip $VIP \
--controlplane \
--services \
--arp \
--leaseDuration 30 \
--leaseRenewDuration 20 \
--leaseRetry 4 \
--leaderElection | tee  /etc/kubernetes/manifests/kube-vip.yaml

The result:

[root@master2 ~]# kubectl get pod -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS      AGE
kube-system   coredns-684b46fbf5-c8fhl          0/1     Pending   0             22m
kube-system   coredns-684b46fbf5-g7r5t          0/1     Pending   0             22m
kube-system   etcd-master1                      1/1     Running   4             22m
kube-system   etcd-master2                      1/1     Running   1             17m
kube-system   etcd-master3                      1/1     Running   0             9m47s
kube-system   kube-apiserver-master1            1/1     Running   12            22m
kube-system   kube-apiserver-master2            1/1     Running   2             17m
kube-system   kube-apiserver-master3            1/1     Running   0             9m47s
kube-system   kube-controller-manager-master1   1/1     Running   7 (17m ago)   22m
kube-system   kube-controller-manager-master2   1/1     Running   1             17m
kube-system   kube-controller-manager-master3   1/1     Running   0             9m44s
kube-system   kube-proxy-2lr26                  1/1     Running   1             17m
kube-system   kube-proxy-rrkc8                  1/1     Running   0             9m51s
kube-system   kube-proxy-wm5vr                  1/1     Running   0             22m
kube-system   kube-scheduler-master1            1/1     Running   7 (17m ago)   22m
kube-system   kube-scheduler-master2            1/1     Running   1             17m
kube-system   kube-scheduler-master3            1/1     Running   0             9m45s
kube-system   kube-vip-master1                  1/1     Running   0             22m
kube-system   kube-vip-master2                  1/1     Running   0             20s

7. Add the third master

On 153, do the same and then join:

cd /etc/kubernetes/pki/
rm -rf apiserver*
rm -rf etcd/healthcheck-client.*
rm -rf etcd/server.*
rm -rf etcd/peer.*

  kubeadm join k8sapi.local:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:688334f4831391ce3fd87c10a969aea2794e0dd878da654fc99d5df82cc41124 \
    --control-plane 

After joining, verify:

[root@master3 ~]# mkdir -p $HOME/.kube
[root@master3 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master3 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master3 ~]# kubectl get pod -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS        AGE
kube-system   coredns-684b46fbf5-c8fhl          0/1     Pending   0               12m
kube-system   coredns-684b46fbf5-g7r5t          0/1     Pending   0               12m
kube-system   etcd-master1                      1/1     Running   4               12m
kube-system   etcd-master2                      1/1     Running   1               8m31s
kube-system   etcd-master3                      1/1     Running   0               33s
kube-system   kube-apiserver-master1            1/1     Running   12              12m
kube-system   kube-apiserver-master2            1/1     Running   2               8m29s
kube-system   kube-apiserver-master3            1/1     Running   0               33s
kube-system   kube-controller-manager-master1   1/1     Running   7 (8m31s ago)   12m
kube-system   kube-controller-manager-master2   1/1     Running   1               8m33s
kube-system   kube-controller-manager-master3   1/1     Running   0               30s
kube-system   kube-proxy-2lr26                  1/1     Running   1               8m41s
kube-system   kube-proxy-rrkc8                  1/1     Running   0               37s
kube-system   kube-proxy-wm5vr                  1/1     Running   0               12m
kube-system   kube-scheduler-master1            1/1     Running   7 (8m29s ago)   12m
kube-system   kube-scheduler-master2            1/1     Running   1               8m31s
kube-system   kube-scheduler-master3            1/1     Running   0               31s
kube-system   kube-vip-master1                  1/1     Running   0               12m

7.1 kube-vip

Generate and add the kube-vip manifest here as well:

export VIP=172.16.100.123
export INTERFACE=eth0
ctr image pull uhub.service.ucloud.cn/marksugar-k8s/kube-vip:v0.6.3
ctr run --rm --net-host uhub.service.ucloud.cn/marksugar-k8s/kube-vip:v0.6.3 vip \
/kube-vip manifest pod \
--interface $INTERFACE \
--vip $VIP \
--controlplane \
--services \
--arp \
--leaseDuration 30 \
--leaseRenewDuration 20 \
--leaseRetry 4 \
--leaderElection | tee  /etc/kubernetes/manifests/kube-vip.yaml

Check:

[root@master3 ~]# kubectl -n kube-system get pod
NAME                              READY   STATUS    RESTARTS      AGE
coredns-684b46fbf5-c8fhl          0/1     Pending   0             24m
coredns-684b46fbf5-g7r5t          0/1     Pending   0             24m
etcd-master1                      1/1     Running   4             24m
etcd-master2                      1/1     Running   1             20m
etcd-master3                      1/1     Running   0             12m
kube-apiserver-master1            1/1     Running   12            24m
kube-apiserver-master2            1/1     Running   2             20m
kube-apiserver-master3            1/1     Running   0             12m
kube-controller-manager-master1   1/1     Running   7 (20m ago)   24m
kube-controller-manager-master2   1/1     Running   1             20m
kube-controller-manager-master3   1/1     Running   0             12m
kube-proxy-2lr26                  1/1     Running   1             20m
kube-proxy-rrkc8                  1/1     Running   0             12m
kube-proxy-wm5vr                  1/1     Running   0             24m
kube-scheduler-master1            1/1     Running   7 (20m ago)   24m
kube-scheduler-master2            1/1     Running   1             20m
kube-scheduler-master3            1/1     Running   0             12m
kube-vip-master1                  1/1     Running   0             24m
kube-vip-master2                  1/1     Running   0             3m2s
kube-vip-master3                  1/1     Running   0             16s

8. Add a worker node

[root@node1 ~]# kubeadm join k8sapi.local:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:688334f4831391ce3fd87c10a969aea2794e0dd878da654fc99d5df82cc41124
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Back on a master:

[root@master1 ~]# kubectl get nodes
NAME      STATUS     ROLES           AGE   VERSION
master1   NotReady   control-plane   26m   v1.26.9
master2   NotReady   control-plane   22m   v1.26.9
master3   NotReady   control-plane   14m   v1.26.9
node1     NotReady   <none>          49s   v1.26.9

VIP failover

After shutting down master1, master2 successfully took over as leader, which can be seen in the kube-vip logs on master2 and master3.

master2

[root@master2 ~]# crictl logs ed354dddc762e
time="2023-11-13T16:29:54Z" level=info msg="Starting kube-vip.io [v0.6.3]"
time="2023-11-13T16:29:54Z" level=info msg="namespace [kube-system], Mode: [ARP], Features(s): Control Plane:[true], Services:[true]"
time="2023-11-13T16:29:54Z" level=info msg="prometheus HTTP server started"
time="2023-11-13T16:29:54Z" level=info msg="Starting Kube-vip Manager with the ARP engine"
time="2023-11-13T16:29:54Z" level=info msg="beginning services leadership, namespace [kube-system], lock name [plndr-svcs-lock], id [master2]"
I1113 16:29:54.266496       1 leaderelection.go:245] attempting to acquire leader lease kube-system/plndr-svcs-lock...
time="2023-11-13T16:29:54Z" level=info msg="Beginning cluster membership, namespace [kube-system], lock name [plndr-cp-lock], id [master2]"
I1113 16:29:54.266906       1 leaderelection.go:245] attempting to acquire leader lease kube-system/plndr-cp-lock...
time="2023-11-13T16:29:54Z" level=info msg="new leader elected: master1"
time="2023-11-13T16:29:54Z" level=info msg="Node [master1] is assuming leadership of the cluster"
I1113 16:42:49.506035       1 leaderelection.go:255] successfully acquired lease kube-system/plndr-cp-lock
time="2023-11-13T16:42:49Z" level=info msg="Node [master2] is assuming leadership of the cluster"
time="2023-11-13T16:42:49Z" level=info msg="Gratuitous Arp broadcast will repeat every 3 seconds for [172.16.100.123]"
time="2023-11-13T16:42:52Z" level=info msg="new leader elected: master3"

master3

[root@master3 ~]# crictl logs bffab2b35af39
time="2023-11-13T16:32:45Z" level=info msg="Starting kube-vip.io [v0.6.3]"
time="2023-11-13T16:32:45Z" level=info msg="namespace [kube-system], Mode: [ARP], Features(s): Control Plane:[true], Services:[true]"
time="2023-11-13T16:32:45Z" level=info msg="Starting Kube-vip Manager with the ARP engine"
time="2023-11-13T16:32:45Z" level=info msg="prometheus HTTP server started"
time="2023-11-13T16:32:45Z" level=info msg="beginning services leadership, namespace [kube-system], lock name [plndr-svcs-lock], id [master3]"
I1113 16:32:45.715711       1 leaderelection.go:245] attempting to acquire leader lease kube-system/plndr-svcs-lock...
time="2023-11-13T16:32:45Z" level=info msg="Beginning cluster membership, namespace [kube-system], lock name [plndr-cp-lock], id [master3]"
I1113 16:32:45.716210       1 leaderelection.go:245] attempting to acquire leader lease kube-system/plndr-cp-lock...
time="2023-11-13T16:32:45Z" level=info msg="Node [master1] is assuming leadership of the cluster"
time="2023-11-13T16:32:45Z" level=info msg="new leader elected: master1"
I1113 16:42:51.359775       1 leaderelection.go:255] successfully acquired lease kube-system/plndr-svcs-lock
time="2023-11-13T16:42:51Z" level=info msg="starting services watcher for all namespaces"
time="2023-11-13T16:42:51Z" level=info msg="Node [master2] is assuming leadership of the cluster"

The VIP has now moved to master2:

[root@master2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether ce:fa:bb:66:29:9f brd ff:ff:ff:ff:ff:ff
    inet 172.16.100.152/24 brd 172.16.100.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 172.16.100.123/32 scope global eth0
       valid_lft forever preferred_lft forever
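If you want to watch a failover live, a simple way (from node1, for example) is to keep probing the API server through the VIP while master1 is being shut down, and to check on the remaining masters which one now carries the address:

# from node1: the probe should keep answering ok across the switchover
watch -n1 'curl -sk https://k8sapi.local:6443/healthz; echo'
# on each remaining master: see whether the VIP is present locally
ip addr show eth0 | grep 172.16.100.123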

High availability is confirmed:

[root@master2 ~]# kubectl get node
NAME      STATUS     ROLES           AGE   VERSION
master1   NotReady   control-plane   38m   v1.26.9
master2   NotReady   control-plane   34m   v1.26.9
master3   NotReady   control-plane   26m   v1.26.9
node1     NotReady   <none>          13m   v1.26.9
[root@master2 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok        
scheduler            Healthy   ok        
etcd-0               Healthy 
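Note that all nodes still show NotReady because no Pod network add-on has been deployed yet. With the 10.244.0.0/16 podSubnet configured above, flannel is one option (upstream manifest shown; replace with your own mirror if the nodes cannot reach GitHub directly):

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml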
