I. After a server reboot, disable swap and the firewall again
Disable the firewall:
$ systemctl stop firewalld
$ systemctl disable firewalld
Disable selinux:
$ sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
$ setenforce 0 # temporary
Disable swap:
$ swapoff -a # temporary
$ sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
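The "permanent" sed edit can be sanity-checked before touching the real /etc/fstab. A minimal sketch, run against a sample fstab with hypothetical device names:

```shell
#!/bin/sh
# Sketch: the "permanent" swap fix, demonstrated on a sample fstab
# (device names are made up; on a real host you would edit /etc/fstab).
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/centos-root /       xfs  defaults 0 0
/dev/mapper/centos-swap swap    swap defaults 0 0
EOF

# Same sed as above: comment out every line that mentions swap.
sed -ri 's/.*swap.*/#&/' /tmp/fstab.sample

# After the edit, exactly one commented-out swap entry should remain,
# and no active (uncommented) swap entries.
grep -c '^#.*swap' /tmp/fstab.sample   # prints 1
```

If this prints the expected count, the same sed expression is safe to run on /etc/fstab.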
II. Start kubelet
[root@localhost ~]# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since 六 2023-10-07 15:30:04 CST; 2s ago
     Docs: https://kubernetes.io/docs/
  Process: 27718 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
 Main PID: 27718 (code=exited, status=255)

10月 07 15:30:04 localhost.localdomain systemd[1]: Unit kubelet.service entered failed state.
10月 07 15:30:04 localhost.localdomain systemd[1]: kubelet.service failed.
[root@localhost ~]# swapoff -a
[root@localhost ~]# systemctl start kubelet.service
[root@localhost ~]# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since 六 2023-10-07 15:31:36 CST; 3s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 28461 (kubelet)
    Tasks: 14
   Memory: 24.7M
   CGroup: /system.slice/kubelet.service
           └─28461 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubern...

10月 07 15:31:37 localhost.localdomain kubelet[28461]: I1007 15:31:37.326134 28461 remote_runtime.go:59] parsed s...: ""
10月 07 15:31:37 localhost.localdomain kubelet[28461]: I1007 15:31:37.326146 28461 remote_runtime.go:59] scheme "...heme
10月 07 15:31:37 localhost.localdomain kubelet[28461]: I1007 15:31:37.326184 28461 passthrough.go:48] ccResolverW...il>}
10月 07 15:31:37 localhost.localdomain kubelet[28461]: I1007 15:31:37.326202 28461 clientconn.go:933] ClientConn ...rst"
10月 07 15:31:37 localhost.localdomain kubelet[28461]: I1007 15:31:37.326276 28461 remote_image.go:50] parsed scheme: ""
10月 07 15:31:37 localhost.localdomain kubelet[28461]: I1007 15:31:37.326280 28461 remote_image.go:50] scheme "" ...heme
10月 07 15:31:37 localhost.localdomain kubelet[28461]: I1007 15:31:37.326286 28461 passthrough.go:48] ccResolverW...il>}
10月 07 15:31:37 localhost.localdomain kubelet[28461]: I1007 15:31:37.326289 28461 clientconn.go:933] ClientConn ...rst"
10月 07 15:31:37 localhost.localdomain kubelet[28461]: I1007 15:31:37.326325 28461 kubelet.go:292] Adding pod pat...ests
10月 07 15:31:37 localhost.localdomain kubelet[28461]: I1007 15:31:37.326371 28461 kubelet.go:317] Watching apiserver
Hint: Some lines were ellipsized, use -l to show in full.
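The failure above is the classic "kubelet exits with status 255 because swap came back after a reboot". A minimal pre-start check, assuming a Linux host with the usual /proc/swaps layout:

```shell
#!/bin/sh
# Sketch: check whether swap is active before starting kubelet.
# /proc/swaps always contains one header line; any additional lines
# are active swap devices.
active=$(($(wc -l < /proc/swaps) - 1))
if [ "$active" -gt 0 ]; then
    echo "swap is on ($active device(s)); run 'swapoff -a' first"
else
    echo "swap is off; safe to 'systemctl start kubelet'"
fi
```

On the host in this section, running this before `systemctl start kubelet.service` would have flagged the re-enabled swap immediately.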
III. Generate a new token
[root@server253 kubernetes]# kubectl get secret -n kube-system | grep admin | awk '{print $1}'
dashboard-admin-token-mj5xj
[root@server253 kubernetes]# kubectl describe secret dashboard-admin-token-mj5xj -n kube-system | grep '^token' | awk '{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik42T1ZuWWpSVVpZVF9wQlA4OXdncjF5WE5WZjdXQnBXRHM3S1JBTGhpS0kifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tbWo1eGoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNmUwMDlmNjUtMzkzZS00ODJhLWFmYjItNDk4NjE1ZGFjZmRjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.sSVhZQ8WHPanlx54k09VK0CUpwdoGPLaxQAbWp3dvOZ1WHpOtLm15aas8IyfrODOvkA5v4NHvo3hH_rMRxUgxJnH4hJdbBZzFMjmqawzPvKoIKiPPmaxGK5qVrnCbgeSoyHI0T-zNzf2o9yETz_Stk3Dsnjd3b9UC4PDAkxbn_hgDWvgZ3GNCLjP9rcAM4bZZ5Z8vvccY7Y4oX2LBIM4cw8v93ocU3V6MgklxlhNHguxk5aR-jcbPJdI32R2fD4VlNPy5JGs1hPQcSrOEkc9cI1osC61k80QRPsEN8hsKYsmorkAv3n4Qmu3Y9v4nSK7lOmyjpQ_gxUGO74wzr2B9g
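The `grep '^token' | awk '{print $2}'` pipeline above can be exercised without a cluster. A sketch against a saved sample of `kubectl describe secret` output (the token value here is fake):

```shell
#!/bin/sh
# Sketch: the token-extraction pipeline from section III, demonstrated
# on a saved sample of 'kubectl describe secret' output.
cat > /tmp/secret.txt <<'EOF'
Name:         dashboard-admin-token-mj5xj
Namespace:    kube-system
Type:  kubernetes.io/service-account-token
token:      eyJhbGciOiJSUzI1NiJ9.fake.payload
EOF

# '^token' anchors on the token line; field 2 is the bare JWT.
grep '^token' /tmp/secret.txt | awk '{print $2}'
```

The same pipeline works on live output because `kubectl describe secret` prints the token on a line starting with `token:`.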
IV. If kubernetes-dashboard shows no content, create a dashboard-svc-account.yml file under /etc/kubernetes/ on the master node and apply it
[root@server253 kubernetes]# vim dashboard-svc-account.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
Apply it:
kubectl apply -f dashboard-svc-account.yml
Then regenerate the token as described in section III and log in to the dashboard again.
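Before applying, the manifest can be written out and sanity-checked with grep (the `kubectl apply` itself needs a live cluster). A sketch, using the non-deprecated rbac v1 API:

```shell
#!/bin/sh
# Sketch: write the service-account + binding manifest to a file and
# sanity-check it before 'kubectl apply'.
cat > /tmp/dashboard-svc-account.yml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF

# The ServiceAccount appears twice (as an object and as a subject),
# and the binding targets the cluster-admin role.
grep -c 'kind: ServiceAccount' /tmp/dashboard-svc-account.yml   # prints 2
grep -c 'name: cluster-admin' /tmp/dashboard-svc-account.yml    # prints 1
# kubectl apply -f /tmp/dashboard-svc-account.yml   # run on the master
```

Equivalently, the same two objects can be created imperatively with `kubectl create serviceaccount dashboard-admin -n kube-system` and `kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin`.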
V. Add a new node
First get the join command by running this on the master node:
[root@server253 kubernetes]# kubeadm token create --print-join-command
W1007 17:01:39.376097 26566 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 192.168.2.253:6443 --token mbd8fr.limeyyhyhsbu81nz --discovery-token-ca-cert-hash sha256:7eeb949618f9bfe4021795e460243851ca67abab018a3a59a947a754cc245a5a
Then run the printed command on the new node:
kubeadm join 192.168.2.253:6443 --token mbd8fr.limeyyhyhsbu81nz --discovery-token-ca-cert-hash sha256:7eeb949618f9bfe4021795e460243851ca67abab018a3a59a947a754cc245a5a
VI. Add a new master node
First get the join command by running this on the master node (the only difference from adding a worker node is the extra --control-plane flag):
[root@server253 kubernetes]# kubeadm token create --print-join-command
W1007 17:01:39.376097 26566 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
Then run the printed command on the new node, appending --control-plane:
kubeadm join 192.168.2.253:6443 --token mbd8fr.limeyyhyhsbu81nz --discovery-token-ca-cert-hash sha256:7eeb949618f9bfe4021795e460243851ca67abab018a3a59a947a754cc245a5a --control-plane
Note: joining a control-plane node usually also requires a --certificate-key, obtained by running kubeadm init phase upload-certs --upload-certs on an existing master.
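The two join variants differ only in the appended flag, so one saved join command can serve both. A sketch (token and hash copied from the output above):

```shell
#!/bin/sh
# Sketch: reuse one saved 'kubeadm token create --print-join-command'
# result for both worker and control-plane joins.
JOIN="kubeadm join 192.168.2.253:6443 --token mbd8fr.limeyyhyhsbu81nz --discovery-token-ca-cert-hash sha256:7eeb949618f9bfe4021795e460243851ca67abab018a3a59a947a754cc245a5a"

echo "$JOIN"                    # run as-is on a new worker node
echo "$JOIN --control-plane"    # run on a new master node
```

For the control-plane case, remember the --certificate-key requirement mentioned above; the flag value comes from an existing master, not from the token-create output.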
VII. Node NotReady after a reboot
In this situation, first check the kubelet startup log:
[root@localhost ~]# journalctl -f -u kubelet
-- Logs begin at 日 2023-10-08 19:42:47 CST. --
10月 09 09:33:39 localhost.localdomain kubelet[20491]: E1009 09:33:39.848444 20491 kubelet.go:2267] node "localhost.localdomain" not found
In my case the cause was a lost hostname: when I built the cluster I had set the hostname only temporarily, so it reverted after the server rebooted.
CentOS has three kinds of hostnames:
Static (static hostname), also called the kernel hostname: initialized at boot from /etc/hostname.
Transient (transient hostname): assigned temporarily at runtime, for example by a DHCP or mDNS server.
Pretty (pretty hostname), also called the "alias" hostname: may contain special characters or spaces. Static and transient hostnames must follow the same rules as Internet domain names.
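The "same rules as Internet domain names" constraint on static and transient hostnames can be checked with a small sketch (the regex below encodes the usual DNS-label rules: letters, digits, and hyphens, no leading or trailing hyphen, at most 63 characters per label):

```shell
#!/bin/sh
# Sketch: validate a candidate static hostname against DNS-label rules.
valid_hostname() {
    echo "$1" | grep -Eq '^([a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)*[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$'
}

valid_hostname "node249"  && echo "node249 ok"
valid_hostname "my host"  || echo "'my host' rejected (space)"
valid_hostname "-bad-"    || echo "'-bad-' rejected (leading hyphen)"
```

A pretty hostname like "My Server #1" would fail this check, which is exactly why it is kept separate from the static and transient names.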
1. Don't set the hostname with the temporary 'hostname xxxx' command
[root@server251 ~]# hostname server251
[root@server251 ~]# hostname
server251
[root@server251 ~]# hostnamectl status
   Static hostname: localhost.localdomain
Transient hostname: server251
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 6306a69c879e4eb596c916f574e9b1a4
           Boot ID: 613fc1f36edc47f1a4a3778743a21e04
    Virtualization: vmware
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-1160.el7.x86_64
      Architecture: x86-64
Only the transient hostname changed; the static hostname is still localhost.localdomain, so the name reverts on reboot.
2. Don't use the config-file form either:
[root@localhost ~]# vim /etc/sysconfig/network
Contents:
# Created by anaconda
NETWORKING=yes
HOSTNAME=node249
The hostname will still revert after a reboot: on CentOS 7, systemd reads /etc/hostname and ignores HOSTNAME in this file.
PS: On Ubuntu, the hostname is stored in /etc/hostname; to change it, edit that file, enter the new hostname, and save.
3. Use the hostnamectl command
[root@node249 ~]# hostnamectl set-hostname node249
[root@node249 ~]# hostnamectl status
   Static hostname: node249
         Icon name: computer-vm
           Chassis: vm
        Machine ID: da25eb7b3b0d4853aa54ed9c0230a9c5
           Boot ID: 383e36010014416cad47f60a641afe9a
    Virtualization: vmware
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-1160.24.1.el7.x86_64
      Architecture: x86-64