kubeadm 1.13 High Availability


This post uses kubeadm to install and configure Kubernetes HA with an external etcd cluster, using a VIP for failover. What is different here is that the VIP also gets a DNS name. An earlier attempt with keepalived+haproxy ran into some problems.

It happens that we have an internal DNS server, so the two masters fail over through a domain name that resolves to the VIP, which gives us a highly available Kubernetes (diagram: k8sga-2.png). The environment is as follows:

[root@linuxea.com ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1",
[root@linuxea.com ~]# docker -v
Docker version 18.06.1-ce, build e68fc7a

Prerequisites

  • hosts
cat >> /etc/hosts << EOF
172.25.50.13 master-0.k8s.org
172.25.50.14 master-1.k8s.org
127.0.0.1 www.linuxea.com
EOF
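
The control-plane endpoint master-vip.k8s.org used later must also resolve to the VIP. In our case the internal DNS server handles that; if you have no DNS server handy, a minimal sketch (assuming the VIP 172.25.50.15 chosen below) is to map it in /etc/hosts as well:

cat >> /etc/hosts << EOF
172.25.50.15 master-vip.k8s.org
EOF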
  • hostname
[root@linuxea.com ~]# hostnamectl set-hostname  master-0.k8s.org
[root@host-172-25-50-13 ~]# echo "DHCP_HOSTNAME=master-0.k8s.org" >> /etc/sysconfig/network-scripts/ifcfg-eth0 
[root@linuxea.com ~]# systemctl restart network

Reboot after the changes. Before rebooting, disable the firewall:

[root@linuxea.com ~]# systemctl disable iptables firewalld.service 
[root@linuxea.com ~]# systemctl stop iptables firewalld.service 
[root@linuxea.com ~]# reboot

Of course, what I had installed here previously was iptables.

  • swap
[root@master-0 ~]# swapoff -a
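
Note that swapoff -a only lasts until the next reboot. To make it permanent, one common approach (a sketch, assuming swap is mounted via /etc/fstab) is to comment out the swap entries:

sed -ri 's/.*swap.*/#&/' /etc/fstab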

You can enable IPVS:

cat << 'EOF' > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules_dir="/usr/lib/modules/`uname -r`/kernel/net/netfilter/ipvs"
for i in `ls $ipvs_modules_dir | sed -r 's#(.*)\.ko.*#\1#'`; do
    /sbin/modinfo -F filename $i  &> /dev/null
    if [ $? -eq 0 ]; then
        /sbin/modprobe $i
    fi
done
EOF
chmod +x /etc/sysconfig/modules/ipvs.modules 
bash /etc/sysconfig/modules/ipvs.modules
echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables

Make sure the modules are loaded; nf_nat_ipv4 is also one of the key ones:

[root@master-0 ~]# lsmod|grep ip_vs
ip_vs_wrr              16384  0 
ip_vs_wlc              16384  0 
ip_vs_sh               16384  0 
ip_vs_sed              16384  0 
ip_vs_rr               16384  0 
ip_vs_pe_sip           16384  0 
nf_conntrack_sip       28672  1 ip_vs_pe_sip
ip_vs_ovf              16384  0 
ip_vs_nq               16384  0 
ip_vs_mh               16384  0 
ip_vs_lc               16384  0 
ip_vs_lblcr            16384  0 
ip_vs_lblc             16384  0 
ip_vs_ftp              16384  0 
ip_vs_fo               16384  0 
ip_vs_dh               16384  0 
ip_vs                 151552  30 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_ovf,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_pe_sip,ip_vs_wrr,ip_vs_lc,ip_vs_mh,ip_vs_sed,ip_vs_ftp
nf_nat                 32768  2 nf_nat_ipv4,ip_vs_ftp
nf_conntrack          135168  8 xt_conntrack,nf_conntrack_ipv4,nf_nat,ipt_MASQUERADE,nf_nat_ipv4,nf_conntrack_sip,nf_conntrack_netlink,ip_vs
libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs
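
The echo into /proc above is also lost on reboot. A hedged way to make it persistent is a sysctl drop-in (this assumes the bridge/br_netfilter module is loaded, which is already the case here):

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system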
  • If the steps above feel too tedious, you can use these scripts instead:
curl -Lk https://raw.githubusercontent.com/marksugar/kubeadMHA/master/systeminit/chenage_hostname|bash
curl -Lk https://raw.githubusercontent.com/marksugar/kubeadMHA/master/systeminit/ip_vs_a_init|bash

keepalived

  • install keepalived
 bash <(curl -s  https://raw.githubusercontent.com/marksugar/lvs/master/keepliaved/install.sh|more)

As follows, enter MASTER or BACKUP and the VIP:

[root@master-0 ~]# bash <(curl -s  https://raw.githubusercontent.com/marksugar/lvs/master/keepliaved/install.sh|more)
You install role MASTER/BACKUP ?
         please enter(block letter):MASTER
Please enter the use VIP: 172.25.50.15
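
The script is a convenience wrapper; the keepalived.conf it produces is roughly equivalent to the following minimal sketch (interface, virtual_router_id, and priority are assumptions and may differ from what the script writes):

vrrp_instance VI_1 {
    state MASTER              # BACKUP on master-1
    interface eth0
    virtual_router_id 51
    priority 100              # use a lower value, e.g. 90, on BACKUP
    advert_int 1
    virtual_ipaddress {
        172.25.50.15
    }
}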

Install kubeadm

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm
systemctl enable kubelet && systemctl start kubelet
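
The repository always serves the newest packages. Since this article targets v1.13.1, you may prefer to pin the versions (a hedged example; the packages follow yum's name-version convention):

yum install -y kubelet-1.13.1 kubeadm-1.13.1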

Deploying master-0

kubeadm init

  • master-0.k8s.org
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  name: master-0.k8s.org
#  taints:
#  - key: "kubeadmNode"
#    value: "master"
#    effect: "NoSchedule"
localAPIEndpoint:
  advertiseAddress: 172.25.50.13
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: "v1.13.1"
#controlPlaneEndpoint: 172.25.50.15:6444
controlPlaneEndpoint: master-vip.k8s.org:6443
apiServer:
  certSANs:
  - master-vip.k8s.org
  timeoutForControlPlane: 5m0s
etcd:
  external:
    endpoints:
    - "https://172.25.50.16:2379"
    - "https://172.25.50.17:2379"
    - "https://172.25.50.18:2379"
    caFile: /etc/kubernetes/pki/etcd/ca.pem
    certFile: /etc/kubernetes/pki/etcd/client.pem
    keyFile: /etc/kubernetes/pki/etcd/client-key.pem
networking:
  serviceSubnet: 172.25.50.0/23
  podSubnet: 172.25.56.0/22
  dnsDomain: cluster.local
imageRepository: k8s.gcr.io
clusterName: "Acluster"
#dns:
#  type: CoreDNS
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"

Start the initialization:

[root@master-0 ~]# kubeadm init --config ./kubeadm-init.yaml
...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join master-vip.k8s.org:6443 --token gjflgc.jg9i5vyrmiv295h3 --discovery-token-ca-cert-hash sha256:9b7943a35e4b6199b5f9fe50473bd336e28c184975d90e3a0f3076c25b694a18
[root@master-0 ~]#   mkdir -p $HOME/.kube
[root@master-0 ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master-0 ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master-0 ~]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok                  
componentstatus/controller-manager   Healthy   ok                  
componentstatus/etcd-0               Healthy   {"health":"true"}   
componentstatus/etcd-1               Healthy   {"health":"true"}   
componentstatus/etcd-2               Healthy   {"health":"true"}   

NAME                    STATUS     ROLES    AGE   VERSION
node/master-0.k8s.org   NotReady   master   75s   v1.13.1
[root@master-0 ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-br2x2                   0/1     Pending   0          71s
coredns-86c58d9df4-kcm42                   0/1     Pending   0          71s
kube-apiserver-master-0.k8s.org            1/1     Running   0          28s
kube-controller-manager-master-0.k8s.org   1/1     Running   0          29s
kube-proxy-rp8dg                           1/1     Running   0          71s
kube-scheduler-master-0.k8s.org            1/1     Running   0          31s

Install Calico

[root@master-0 ~]# wget https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml
[root@master-0 ~]# cd calico/
[root@master-0 calico]# ls
calicoctl  calico.yaml

apply

[root@master-0 calico]# kubectl apply -f ./
configmap/calico-config created
secret/calico-etcd-secrets created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
deployment.extensions/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
[root@master-0 calico]# kubectl get pods -n kube-system 
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5d94b577bb-7k7jn   1/1     Running   0          27s
calico-node-bdkgn                          1/1     Running   0          27s
coredns-86c58d9df4-br2x2                   1/1     Running   0          3m8s
coredns-86c58d9df4-kcm42                   1/1     Running   0          3m8s
kube-apiserver-master-0.k8s.org            1/1     Running   0          2m25s
kube-controller-manager-master-0.k8s.org   1/1     Running   0          2m26s
kube-proxy-rp8dg                           1/1     Running   0          3m8s
kube-scheduler-master-0.k8s.org            1/1     Running   0          2m28s

You may need to modify two places in calico.yaml. One is the network interface detection:

- name: IP
  value: "autodetect"
- name: IP_AUTODETECTION_METHOD
  value: "interface=eth.*"   

Or set the IP value directly. The other place is the IPv4 pool CIDR, which should match the podSubnet configured earlier:

- name: CALICO_IPV4POOL_CIDR
  value: "172.25.56.0/22"

Also, if necessary, adjust the tolerations so the pods tolerate the master taint:

tolerations:
  # Mark the pod as a critical add-on for rescheduling.
  - key: CriticalAddonsOnly
    operator: Exists
  - key: node-role.kubernetes.io/master
    effect: NoSchedule

You can refer to the files on my GitHub: https://github.com/marksugar/kubeadMHA/tree/master/calico

Further reading: https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/calico

Install metrics-server

We create a directory to hold the metrics-server manifests:

[root@master-0 ~]# mkdir -p deploy/metrics-server/
[root@master-0 ~]# cd ~/deploy/metrics-server/

Then we only need to download the metrics-server deployment manifests:

[root@master-0 metrics-server]# for i in \
aggregated-metrics-reader.yaml \
auth-delegator.yaml \
auth-reader.yaml \
metrics-apiservice.yaml \
metrics-server-deployment.yaml \
metrics-server-service.yaml \
resource-reader.yaml \
;do curl -Lks https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/$i -o "${i}";done
[root@master-0 metrics-server]# ll
total 28
-rw-r--r-- 1 root root 384 Jan  1 21:37 aggregated-metrics-reader.yaml
-rw-r--r-- 1 root root 308 Jan  1 21:37 auth-delegator.yaml
-rw-r--r-- 1 root root 329 Jan  1 21:37 auth-reader.yaml
-rw-r--r-- 1 root root 298 Jan  1 21:37 metrics-apiservice.yaml
-rw-r--r-- 1 root root 815 Jan  1 21:37 metrics-server-deployment.yaml
-rw-r--r-- 1 root root 249 Jan  1 21:37 metrics-server-service.yaml
-rw-r--r-- 1 root root 502 Jan  1 21:37 resource-reader.yaml
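
Depending on your kubelet certificates, metrics-server may fail to scrape nodes. A commonly used tweak (an assumption, not needed in every environment) is to add these flags to the container spec in metrics-server-deployment.yaml:

args:
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS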

Then deploy:

[root@master-0 metrics-server]# kubectl apply -f ./
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.extensions/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@master-0 metrics-server]# kubectl get pods -n kube-system 
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5d94b577bb-7k7jn   1/1     Running   0          64s
calico-node-bdkgn                          1/1     Running   0          64s
coredns-86c58d9df4-br2x2                   1/1     Running   0          3m45s
coredns-86c58d9df4-kcm42                   1/1     Running   0          3m45s
kube-apiserver-master-0.k8s.org            1/1     Running   0          3m2s
kube-controller-manager-master-0.k8s.org   1/1     Running   0          3m3s
kube-proxy-rp8dg                           1/1     Running   0          3m45s
kube-scheduler-master-0.k8s.org            1/1     Running   0          3m5s
metrics-server-54f6f996dc-kr5wz            1/1     Running   0          7s

After a short wait you can see node resource usage and so on:

[root@master-0 metrics-server]# kubectl top node
NAME               CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master-0.k8s.org   292m         14%    1272Mi          35%   
[root@master-0 metrics-server]# kubectl top pods -n kube-system
NAME                                       CPU(cores)   MEMORY(bytes)   
calico-kube-controllers-5d94b577bb-7k7jn   2m           10Mi            
calico-node-bdkgn                          20m          19Mi            
coredns-86c58d9df4-br2x2                   2m           11Mi            
coredns-86c58d9df4-kcm42                   3m           11Mi            
kube-apiserver-master-0.k8s.org            79m          391Mi           
kube-controller-manager-master-0.k8s.org   33m          67Mi            
kube-proxy-rp8dg                           2m           15Mi            
kube-scheduler-master-0.k8s.org            10m          13Mi            
metrics-server-54f6f996dc-kr5wz            1m           11Mi  

Deploying master-1

Copy the certificates and admin.conf to master-1:

[root@master-0 metrics-server]# cd /etc/kubernetes/ && scp -r ./pki 172.25.50.14:/etc/kubernetes/ && scp ./admin.conf 172.25.50.14:/etc/kubernetes/
ca.pem                                           100% 1371   101.9KB/s   00:00    
client.pem                                       100%  997   430.3KB/s   00:00    
client-key.pem                                   100%  227   132.2KB/s   00:00    
ca.key                                           100% 1675   366.5KB/s   00:00    
ca.crt                                           100% 1025     1.5MB/s   00:00    
apiserver-kubelet-client.key                     100% 1679   817.6KB/s   00:00    
apiserver-kubelet-client.crt                     100% 1099   602.7KB/s   00:00    
apiserver.key                                    100% 1679   170.8KB/s   00:00    
apiserver.crt                                    100% 1261   266.2KB/s   00:00    
front-proxy-ca.key                               100% 1675   796.2KB/s   00:00    
front-proxy-ca.crt                               100% 1038   595.1KB/s   00:00    
front-proxy-client.key                           100% 1679   816.4KB/s   00:00    
front-proxy-client.crt                           100% 1058   512.4KB/s   00:00    
sa.key                                           100% 1679   890.5KB/s   00:00    
sa.pub                                           100%  451   235.0KB/s   00:00    
admin.conf                                       100% 5450   442.9KB/s   00:00

Use kubeadm token create --print-join-command to get a current token:

[root@master-0 kubernetes]# kubeadm token create --print-join-command
kubeadm join master-vip.k8s.org:6443 --token qffwr6.4dqd3hshvfbxn3f8 --discovery-token-ca-cert-hash sha256:9b7943a35e4b6199b5f9fe50473bd336e28c184975d90e3a0f3076c25b694a18
[root@master-1 ~]# echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables && bash /etc/sysconfig/modules/ipvs.modules
[root@master-1 ~]#  kubeadm join master-vip.k8s.org:6443 --token qffwr6.4dqd3hshvfbxn3f8 --discovery-token-ca-cert-hash sha256:9b7943a35e4b6199b5f9fe50473bd336e28c184975d90e3a0f3076c25b694a18 --experimental-control-plane
...
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Master label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.


To start administering your cluster from this node, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
...

On the standby master node we join with --experimental-control-plane.

  • Further reading: https://kubernetes.io/docs/setup/independent/high-availability/#external-etcd
[root@master-1 ~]# mkdir -p $HOME/.kube
[root@master-1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master-1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master-1 ~]# kubectl get nodes
NAME               STATUS   ROLES    AGE   VERSION
master-0.k8s.org   Ready    master   11m   v1.13.1
master-1.k8s.org   Ready    master   70s   v1.13.1
[root@master-1 ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5d94b577bb-7k7jn   1/1     Running   0          8m13s
calico-node-bdkgn                          1/1     Running   0          8m13s
calico-node-nk6xw                          1/1     Running   0          79s
coredns-86c58d9df4-br2x2                   1/1     Running   0          10m
coredns-86c58d9df4-kcm42                   1/1     Running   0          10m
kube-apiserver-master-0.k8s.org            1/1     Running   0          10m
kube-apiserver-master-1.k8s.org            1/1     Running   0          79s
kube-controller-manager-master-0.k8s.org   1/1     Running   0          10m
kube-controller-manager-master-1.k8s.org   1/1     Running   0          79s
kube-proxy-cz8h8                           1/1     Running   0          79s
kube-proxy-rp8dg                           1/1     Running   0          10m
kube-scheduler-master-0.k8s.org            1/1     Running   0          10m
kube-scheduler-master-1.k8s.org            1/1     Running   0          79s
metrics-server-54f6f996dc-kr5wz            1/1     Running   0          7m16s
[root@master-1 ~]# kubectl top nodes
NAME               CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master-0.k8s.org   226m         11%    1314Mi          36%       
master-1.k8s.org   120m         6%     788Mi           21%   
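
Since the kube-proxy configuration sets mode: "ipvs", you can confirm it actually took effect (assuming the ipvsadm package is installed):

yum install -y ipvsadm
ipvsadm -Ln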

Adding nodes

When adding a node host, remember to set the hostname, enable IPVS, and so on. These steps are wrapped in a single script:

curl -Lk https://raw.githubusercontent.com/marksugar/kubeadMHA/master/systeminit/chenage_hostname|bash

You can also change the hostname like this:

echo $(ip addr show eth0 | grep -Po 'inet \K[\d.]+'|awk -F. '{print $0}') > /etc/hostname

CHOSTNAME=node-$(echo `sed 's@\.@-@g' /etc/hostname`).k8s.org
CHOSTNAME_pretty='k8s node'
sysctl -w kernel.hostname=$CHOSTNAME
hostnamectl set-hostname $CHOSTNAME --static
hostnamectl set-hostname "$CHOSTNAME_pretty" --pretty
sysctl kernel.hostname=$CHOSTNAME

echo -e "33[31m33[01m[ `hostnamectl` ]33[0m"

After changing the hostname, a few more steps are needed:

curl -Lk https://raw.githubusercontent.com/marksugar/kubeadMHA/master/systeminit/ip_vs_a_init|bash

You can open this script on GitHub to see what it does.

Then just join:

[root@node-172-25-50-19.k8s.org ~]# kubeadm join master-vip.k8s.org:6443 --token qffwr6.4dqd3hshvfbxn3f8 --discovery-token-ca-cert-hash sha256:9b7943a35e4b6199b5f9fe50473bd336e28c184975d90e3a0f3076c25b694a18
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "master-vip.k8s.org:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://master-vip.k8s.org:6443"
[discovery] Requesting info from "https://master-vip.k8s.org:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "master-vip.k8s.org:6443"
[discovery] Successfully established connection with API Server "master-vip.k8s.org:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node-172-25-50-19.k8s.org" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
[root@master-0 ~]# kubectl get nodes
NAME                        STATUS   ROLES    AGE    VERSION
master-0.k8s.org            Ready    master   133m   v1.13.1
master-1.k8s.org            Ready    master   123m   v1.13.1
node-172-25-50-19.k8s.org   Ready    <none>   15s    v1.13.1
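
The ROLES column of the new node shows <none>. If you want it labeled, a hedged example (the ROLES column is derived from node-role.kubernetes.io/* labels):

kubectl label node node-172-25-50-19.k8s.org node-role.kubernetes.io/node=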

Failover test

We stop keepalived to simulate a master-0 outage:

[root@master-0 kubernetes]# systemctl stop keepalived.service 
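
From another terminal you can watch the apiserver through the VIP while the failover happens; a simple sketch:

while true; do
    echo "$(date +%T) $(curl -k -s https://master-vip.k8s.org:6443/healthz)"
    sleep 1
done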

At this point the 172.25.50.15 VIP on eth0 floats over to master-1. master-0 now looks like this:

[root@master-0 kubernetes]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:ed:d3:21 brd ff:ff:ff:ff:ff:ff
    inet 172.25.50.13/16 brd 172.25.255.255 scope global dynamic eth0
       valid_lft 82951sec preferred_lft 82951sec

And master-1:

[root@master-1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:ae:76:b9 brd ff:ff:ff:ff:ff:ff
    inet 172.25.50.14/16 brd 172.25.255.255 scope global dynamic eth0
       valid_lft 85332sec preferred_lft 85332sec
    inet 172.25.50.15/16 brd 172.25.255.255 scope global secondary eth0:vip
       valid_lft forever preferred_lft forever
[root@master-1 ~]# kubectl get node
NAME               STATUS   ROLES    AGE    VERSION
master-0.k8s.org   Ready    master   15m    v1.13.1
master-1.k8s.org   Ready    master   5m7s   v1.13.1
[root@master-1 ~]# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5d94b577bb-7k7jn   1/1     Running   0          12m
calico-node-bdkgn                          1/1     Running   0          12m
calico-node-nk6xw                          1/1     Running   0          5m17s
coredns-86c58d9df4-br2x2                   1/1     Running   0          14m
coredns-86c58d9df4-kcm42                   1/1     Running   0          14m
kube-apiserver-master-0.k8s.org            1/1     Running   0          14m
kube-apiserver-master-1.k8s.org            1/1     Running   0          5m17s
kube-controller-manager-master-0.k8s.org   1/1     Running   0          14m
kube-controller-manager-master-1.k8s.org   1/1     Running   0          5m17s
kube-proxy-cz8h8                           1/1     Running   0          5m17s
kube-proxy-rp8dg                           1/1     Running   0          14m
kube-scheduler-master-0.k8s.org            1/1     Running   0          14m
kube-scheduler-master-1.k8s.org            1/1     Running   0          5m17s
metrics-server-54f6f996dc-kr5wz            1/1     Running   0          11m

Adding a master

We run the VIP with keepalived. Testing showed that a VIP is the more reliable choice; DNS round-robin did not work as well. So the domain name resolves to this VIP, and HA relies on the VIP floating between the masters.

For keepalived, refer to the installation above. It is very simple; just run the following script:

bash <(curl -s  https://raw.githubusercontent.com/marksugar/lvs/master/keepliaved/install.sh|more)

As follows, enter MASTER or BACKUP and the VIP:

[root@master-0 ~]# bash <(curl -s  https://raw.githubusercontent.com/marksugar/lvs/master/keepliaved/install.sh|more)
You install role MASTER/BACKUP ?
         please enter(block letter):MASTER
Please enter the use VIP: 172.25.50.15
  • Note that with three keepalived nodes you need to adjust the priorities manually.

If we want to add another master, we only need kubeadm token create --print-join-command to get a token, and then join with --experimental-control-plane, roughly like this:

kubeadm join master-vip.k8s.org:6443 --token qffwr6.4dqd3hshvfbxn3f8 --discovery-token-ca-cert-hash sha256:9b7943a35e4b6199b5f9fe50473bd336e28c184975d90e3a0f3076c25b694a18 --experimental-control-plane

Further reading:
https://kubernetes.io/docs/setup/independent/high-availability/#external-etcd
https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1alpha3
https://godoc.org/k8s.io/kube-proxy/config/v1alpha1#KubeProxyIPVSConfiguration
https://k8smeetup.github.io/docs/setup/independent/high-availability/

If you need to install and test repeatedly, the following commands may be useful:

kubeadm reset
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ip link delete cni0
ip link delete flannel.1
ip link delete docker0
ip link delete dummy0
ip link delete kube-ipvs0
ip link delete tunl0@NONE
ip link delete tunl0

ip addr add IP/32 brd IP dev eth0
ip addr del IP/32 brd IP dev tunl0@NONE

echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables && bash /etc/sysconfig/modules/ipvs.modules
