[KubeSphere Platform Setup, Part 1] Multi-Node KubeSphere Deployment and NFS Filesystem Setup

October 7, 2023

A single server must use the all-in-one installation mode.

A multi-node installation requires at least three servers.

1.1 Preparation

  • 4c8g (master node)

  • 8c16g * 2 (worker nodes)

  • Linux OS

  • Docker

  • Internal network connectivity between all nodes

  • Each machine has its own hostname, set with:

     hostnamectl set-hostname xxx
    
  • Open the NodePort range 30000-32767 in the firewall

    Either configure this through your provider's management console,

    or with ufw:

     # To remove a rule later
     sudo ufw delete allow [port]
     
     sudo ufw status verbose
     sudo ufw allow 22
     sudo ufw allow 53,111,8443,8080,80,179,6443,2379,2380,9099,9100/tcp
     sudo ufw allow 53,111,8443,8080,80,179,6443,2379,2380,9099,9100/udp
     sudo ufw allow 30000:32767/tcp
     sudo ufw allow 30000:32767/udp
     # On the master node only:
     sudo ufw allow 10250:10258/tcp
     sudo ufw allow 10250:10258/udp
     sudo ufw enable
     sudo ufw reload
    
  • Required packages (install on every node):

     sudo apt install conntrack
     sudo apt install socat
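
    Before moving on, a quick check that the pieces above (hostname, packages, firewall rule) are actually in place on each node is worthwhile. This is a minimal sketch; adjust the package list and port range to your setup:

     hostnamectl --static                  # should print the hostname you set
     command -v conntrack socat docker     # each should resolve to a path
     sudo ufw status | grep 30000:32767    # the NodePort range should show as ALLOW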
    

1.2 Creating the Cluster with KubeKey

1.2.1 Download KubeKey

 # If the server is located in mainland China
 export KKZONE=cn
 
 curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
 chmod +x kk
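
If the download succeeded, the binary should respond; kk ships a version subcommand:

 ./kk version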

1.2.2 Create the KubeKey Configuration File

The master node must be accessed as the root user.

 # Create the config file, specifying the Kubernetes and KubeSphere versions
 ./kk create config --with-kubernetes v1.23.5 --with-kubesphere v3.1.1
 # If all nodes are on the same network and the platform is not exposed to
 # the public internet, internal IPs can be used for everything

   hosts:
     - {name: master, address: [public IP], internalAddress: [internal IP], user: [username], password: "pwd"}
     - {name: node1, address: [public IP], internalAddress: [internal IP], user: [username], password: "pwd"}
     - {name: node2, address: [public IP], internalAddress: [internal IP], user: [username], password: "pwd"}
   roleGroups:
     etcd:
     - master
     control-plane:
     - master
     worker:
     - node1
     - node2
 # Other features (metrics, DevOps, the app store, etc.) are enabled
 # further down in this file (see 1.2.4)

The master node carries a default taint, so workloads will not be scheduled onto it; the taint can be removed if needed (see 1.4).

1.2.3 Modify the Internal IPs

Open config-sample.yaml and change the node information to the internal IPs.

1.2.4 Enable Pluggable Components

Open config-sample.yaml and, in the ClusterConfiguration section, set enabled: true for each pluggable component you want (metrics server, DevOps, the app store, and so on).
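
As an example, here is a minimal sketch of enabling the metrics server in place. It assumes the v3.1-style ClusterConfiguration layout, where each component has an enabled: false line directly beneath its name; editing the file by hand works just as well:

 # Flip "enabled: false" to true on the line following "metrics_server:"
 sed -i '/metrics_server:/{n;s/enabled: false/enabled: true/}' config-sample.yaml
 grep -A1 'metrics_server:' config-sample.yaml   # confirm it now shows enabled: true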

1.2.5 Create the Cluster

 ./kk create cluster -f config-sample.yaml
 ​
 # If Helm (and other) downloads are slow, run with the CN mirror zone:
 sudo KKZONE=cn ./kk create cluster -f config-sample.yaml

When the installation finishes, output like the following is printed:

 21:56:14 CST success: [master]
 #####################################################
 ###              Welcome to KubeSphere!           ###
 #####################################################
 ​
 Console: http://10.0.12.14:30880
 Account: admin
 Password: P@88w0rd
 NOTES:
   1. After you log into the console, please check the
      monitoring status of service components in
      "Cluster Management". If any service is not
      ready, please wait patiently until all components
      are up and running.
   2. Please change the default password after login.
 ​
 #####################################################
 https://kubesphere.io             2023-07-21 22:05:16
 #####################################################
 22:05:17 CST success: [master]
 22:05:17 CST Pipeline[CreateClusterPipeline] execute successfully
 Installation is complete.
 ​

Verify the installation by following the installer logs:

 kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
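
Once the installer reports success, a quick sanity check with plain kubectl:

 kubectl get nodes -o wide      # all three nodes should be Ready
 kubectl get pods -A            # kubesphere-system pods should be Running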

Command to restart the installer:

 kubectl rollout restart deploy -n kubesphere-system ks-installer

1.3 Uninstalling KubeSphere

 ./kk delete cluster [-f config-sample.yaml]

1.4 Notes

In a kubeadm-built cluster, Pods are not scheduled onto the master node for security reasons: the master is tainted by default and does not take workloads.

  • View taint info: kubectl get no -o yaml | grep taint -A 5
  • Remove the master node taint: kubectl taint nodes --all node-role.kubernetes.io/master- (see the example below)
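
A short worked example, using the node name master from the config above:

 kubectl describe node master | grep -i taint    # node-role.kubernetes.io/master:NoSchedule
 kubectl taint nodes master node-role.kubernetes.io/master-
 kubectl describe node master | grep -i taint    # should now report Taints: <none>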

1.5 NFS Deployment

1.5.1 Installation

 # Server side (master node)
 apt install nfs-kernel-server
 # Client side (worker nodes)
 apt install nfs-common

1.5.2 Create the NFS Shared Directory

 mkdir -p /data/nfs
 # The directory needs the execute bit to be traversable, so 777 rather than 666
 chmod 777 /data/nfs

1.5.3 Configure NFS

 # Edit /etc/exports and add the line below; it allows every IP in the subnet
 sudo vim /etc/exports
 
 /data/nfs 10.222.77.0/24(rw,sync,insecure,no_subtree_check,no_root_squash)
 
 # Re-export the updated table
 sudo exportfs -r

1.5.4 Start the RPC and NFS Services

 service rpcbind start
 # On Ubuntu/Debian the NFS server service is nfs-kernel-server
 service nfs-kernel-server start
 
 # Restart if needed
 sudo systemctl restart nfs-kernel-server
 
 rpcinfo -p localhost
 # Check the exports on the master node
 showmount -e localhost
 # Test the connection from a client (worker node)
 sudo showmount -e [master ip]
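
To have both services come back automatically after a reboot (an extra step beyond the list above):

 sudo systemctl enable --now rpcbind nfs-kernel-server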

1.5.5 Mount the Remote Directory

 mount [master ip]:[remote dir] [local dir]
 # The mount may default to UDP; TCP can be specified explicitly:
 mount [master ip]:[remote dir] [local dir] -o proto=tcp -o nolock
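
A quick read/write check from the client side; /mnt/nfs below is just a hypothetical local mount point:

 mount | grep nfs                                # the remote export should be listed
 touch /mnt/nfs/write-test && ls -l /mnt/nfs/    # confirms the share is writable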

1.5.6 Install the NFS Dynamic Provisioner

Create the ServiceAccount and RBAC rules for the provisioner in a file named rbac.yaml:

 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: nfs-client-provisioner
   # replace with namespace where provisioner is deployed
   namespace: default
 ---
 kind: ClusterRole
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: nfs-client-provisioner-runner
 rules:
   - apiGroups: [""]
     resources: ["persistentvolumes"]
     verbs: ["get", "list", "watch", "create", "delete"]
   - apiGroups: [""]
     resources: ["persistentvolumeclaims"]
     verbs: ["get", "list", "watch", "update"]
   - apiGroups: ["storage.k8s.io"]
     resources: ["storageclasses"]
     verbs: ["get", "list", "watch"]
   - apiGroups: [""]
     resources: ["events"]
     verbs: ["create", "update", "patch"]
 ---
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: run-nfs-client-provisioner
 subjects:
   - kind: ServiceAccount
     name: nfs-client-provisioner
     # replace with namespace where provisioner is deployed
     namespace: default
 roleRef:
   kind: ClusterRole
   name: nfs-client-provisioner-runner
   apiGroup: rbac.authorization.k8s.io
 ---
 kind: Role
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: leader-locking-nfs-client-provisioner
   # replace with namespace where provisioner is deployed
   namespace: default
 rules:
   - apiGroups: [""]
     resources: ["endpoints"]
     verbs: ["get", "list", "watch", "create", "update", "patch"]
 ---
 kind: RoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: leader-locking-nfs-client-provisioner
   # replace with namespace where provisioner is deployed
   namespace: default
 subjects:
   - kind: ServiceAccount
     name: nfs-client-provisioner
     # replace with namespace where provisioner is deployed
     namespace: default
 roleRef:
   kind: Role
   name: leader-locking-nfs-client-provisioner
   apiGroup: rbac.authorization.k8s.io

Apply it:

 kubectl apply -f rbac.yaml
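
The created objects can be confirmed by name (the names come from the manifest above):

 kubectl get serviceaccount nfs-client-provisioner -n default
 kubectl get clusterrole nfs-client-provisioner-runner
 kubectl get role leader-locking-nfs-client-provisioner -n default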

The official nfs-client-provisioner Deployment:

 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: nfs-client-provisioner
   labels:
     app: nfs-client-provisioner
   # replace with namespace where provisioner is deployed
   namespace: default
 spec:
   replicas: 1
   strategy:
     type: Recreate
   selector:
     matchLabels:
       app: nfs-client-provisioner
   template:
     metadata:
       labels:
         app: nfs-client-provisioner
     spec:
       serviceAccountName: nfs-client-provisioner
       containers:
         - name: nfs-client-provisioner
           image: quay.io/external_storage/nfs-client-provisioner:latest
           volumeMounts:
             - name: nfs-client-root
               mountPath: /persistentvolumes
           env:
             - name: PROVISIONER_NAME
               value: nfs/provisioner-229
             - name: NFS_SERVER
               value: 10.21.80.226
             - name: NFS_PATH
               value: /data/nfs
       volumes:
         - name: nfs-client-root
           nfs:
             server: 10.21.80.226
             path: /data/nfs
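
Assuming the manifest is saved as deployment.yaml (any filename works), apply it and check that the provisioner pod comes up; the label is the one set in the manifest:

 kubectl apply -f deployment.yaml
 kubectl get pods -l app=nfs-client-provisioner -n default   # expect 1/1 Running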

Create the storage class (storageClass.yaml):

 apiVersion: storage.k8s.io/v1
 kind: StorageClass
 metadata:
   name: managed-nfs-storage
 provisioner: nfs/provisioner-229 # any name works, but it must match the Deployment's PROVISIONER_NAME env
 parameters:
   archiveOnDelete: "false"

Apply it and verify:

 kubectl apply -f storageClass.yaml
 # Check PVCs and PVs
 kubectl get pvc -n default
 kubectl get pv
 # If a PVC stays Pending or in another unexpected state, inspect it
 kubectl describe pvc [pvc name] -n [namespace]
 # Verify the storage class exists
 kubectl get sc
 # Check pod status
 kubectl get pod -n kube-system
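
To confirm dynamic provisioning end to end, a throwaway claim against the new class is an easy test; the name test-nfs-pvc is hypothetical:

 # Create a 1Mi test claim (JSON manifest piped to kubectl via a here-string)
 kubectl apply -f - <<< '{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"name":"test-nfs-pvc"},"spec":{"storageClassName":"managed-nfs-storage","accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"1Mi"}}}}'
 kubectl get pvc test-nfs-pvc      # should reach Bound once the provisioner handles it
 kubectl delete pvc test-nfs-pvc   # clean up the test claim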
