Setting Up a Kubernetes Cluster in an Ubuntu 22.04 Development Environment

December 16, 2023

Installation

Basic Docker Configuration

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["https://uy35zvn6.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
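
To verify that Docker picked up the systemd cgroup driver after the restart:

sudo docker info | grep -i 'cgroup driver'
# expected output: Cgroup Driver: systemd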

Kernel Network Settings for k8s (bridge netfilter and IP forwarding)

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
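
The file above only loads br_netfilter on the next boot; to load the module immediately without rebooting:

sudo modprobe br_netfilter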

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# setting it here is better than modifying /etc/sysctl.conf
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system
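
To confirm the three settings are active (br_netfilter must already be loaded for the bridge keys to exist):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# all three should print "= 1"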

Certificate Tools

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

Add the k8s Package Repository (Aliyun)

sudo tee /etc/apt/sources.list.d/kubernetes.list <<-'EOF'
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
EOF

Install kubeadm, kubectl, and kubelet:

sudo apt-get update
sudo apt-get install -y kubelet=1.28.2-00 kubeadm=1.28.2-00 kubectl=1.28.2-00 
sudo apt-mark hold kubelet kubeadm kubectl
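
A quick sanity check that the pinned 1.28.2 versions are what actually got installed:

kubeadm version
kubectl version --client
kubelet --version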

Switch the Default CRI to containerd

Reference: github.com/cncamp/101/…

Create the containerd configuration directory and generate the default config:

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

Edit /etc/containerd/config.toml and make two changes:

k8s.gcr.io/pause:3.x
# change to >>>
registry.aliyuncs.com/google_containers/pause:3.9

SystemdCgroup = false
# change to >>>
SystemdCgroup = true
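
If you prefer to script these two edits, here is a sed sketch, assuming the stock config generated above (where the pause image is configured via the sandbox_image key):

# point the sandbox (pause) image at the Aliyun mirror
sudo sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
# switch the runc runtime to the systemd cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml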

Apply the configuration: disable Docker and enable containerd:

sudo systemctl daemon-reload
sudo systemctl disable docker
sudo systemctl stop docker
sudo systemctl enable containerd
sudo systemctl restart containerd
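
Check that containerd is up and serving before moving on:

systemctl is-active containerd   # should print "active"
sudo ctr version                 # prints client and server versions if the daemon responds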

Initialize the cluster with kubeadm; be sure to change the advertise address to this machine's IP:

sudo kubeadm init \
 --image-repository registry.aliyuncs.com/google_containers \
 --kubernetes-version v1.28.2 \
 --pod-network-cidr=192.168.0.0/16 \
 --apiserver-advertise-address=192.168.79.129

Be sure to save this output somewhere; it will print a message like the following:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.79.129:6443 --token xxxxxx.xxxxxxxxxxxxxxxx \
	--discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Now, as the current user, set up $HOME/.kube/config as the output instructs, then list all pods:

kubectl get po --all-namespaces

If the pods appear, the setup has succeeded so far. The coredns pods will stay Pending until a CNI plugin is installed in a later step, which is expected at this point:

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-66f779496c-5mlvq                   0/1     Pending   0          26m
kube-system   coredns-66f779496c-hgpr5                   0/1     Pending   0          26m
kube-system   etcd-matebook-x-pro                        1/1     Running   0          26m
kube-system   kube-apiserver-matebook-x-pro              1/1     Running   0          26m
kube-system   kube-controller-manager-matebook-x-pro     1/1     Running   0          26m
kube-system   kube-proxy-6jclh                           1/1     Running   0          26m
kube-system   kube-scheduler-matebook-x-pro              1/1     Running   0          26m

Remove the Taints on the Master Node

Run the following to remove the taints from the master; otherwise pods cannot be scheduled onto the master node, which matters for a single-node dev cluster. On newer versions such as 1.28 the master taint no longer exists, so the second command may report "taint not found"; that error is harmless.

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl taint nodes --all node-role.kubernetes.io/master-
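
To confirm the node no longer carries any taints:

kubectl describe nodes | grep -i taints
# expected: Taints:             <none>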

Install the Calico CNI Plugin

See the official docs: docs.tigera.io/calico/late…

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/custom-resources.yaml
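
While the operator brings everything up, you can watch the rollout (a convenience, not required):

watch kubectl get pods -n calico-system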

This takes a while. Once it finishes, wait a moment, and then kubectl get po --all-namespaces should show something like:

NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-5ffbc745df-95rvm          1/1     Running   0          3m22s
calico-apiserver   calico-apiserver-5ffbc745df-f4g88          1/1     Running   0          3m22s
calico-system      calico-kube-controllers-757d6c6d9d-9tnsk   1/1     Running   0          4m47s
calico-system      calico-node-cp4b9                          1/1     Running   0          4m47s
calico-system      calico-typha-648459974b-zqr7z              1/1     Running   0          4m47s
calico-system      csi-node-driver-f6qcp                      2/2     Running   0          4m47s
kube-system        coredns-66f779496c-pp6dc                   1/1     Running   0          12m
kube-system        coredns-66f779496c-rdc4m                   1/1     Running   0          12m
kube-system        etcd-matebook-x-pro                        1/1     Running   4          12m
kube-system        kube-apiserver-matebook-x-pro              1/1     Running   4          12m
kube-system        kube-controller-manager-matebook-x-pro     1/1     Running   4          12m
kube-system        kube-proxy-whwl6                           1/1     Running   0          12m
kube-system        kube-scheduler-matebook-x-pro              1/1     Running   4          12m
tigera-operator    tigera-operator-7f8cd97876-vgvvl           1/1     Running   0          4m57s

Check component status with kubectl get cs:

Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
scheduler            Healthy   ok        
controller-manager   Healthy   ok        
etcd-0               Healthy   ok
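
Since ComponentStatus is deprecated, the API server's health endpoint is the forward-compatible alternative:

kubectl get --raw='/readyz?verbose'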

Try Creating an nginx Instance

Create a deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

Then run kubectl create -f deployment.yaml. Note that image pulling happens invisibly in the background; if the pull is slow, the pods can look stuck for quite a while. In that case, pre-pull the image with the command below, which shows progress (the -n k8s.io flag targets the containerd namespace that kubelet actually uses):

sudo ctr -n k8s.io image pull docker.io/library/nginx:latest

If it succeeds, kubectl get po shows the following:

NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7c79c4bf97-8kjf6   1/1     Running   0          11s
nginx-deployment-7c79c4bf97-tl9mt   1/1     Running   0          11s

Then add a service.yaml that exposes nginx through a NodePort. Note that port 80 here is the service port inside the cluster; the node itself gets a randomly assigned high port (31153 below):

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
Apply it with kubectl create -f service.yaml, then check the results (k below is an alias for kubectl):

alfred@matebook-x-pro:~/kube/kubernetes$ k get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        48m
my-nginx     NodePort    10.108.220.245   <none>        80:31153/TCP   4s
alfred@matebook-x-pro:~/kube/kubernetes$ k get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/2     2            2           7m31s
alfred@matebook-x-pro:~/kube/kubernetes$ k get po
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7c79c4bf97-8kjf6   1/1     Running   0          7m34s
nginx-deployment-7c79c4bf97-tl9mt   1/1     Running   0          7m34s

Now open http://127.0.0.1:31153 (the NodePort) or http://10.108.220.245 (the cluster IP), and the nginx welcome page should load.
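
A quick check from the command line (headers only; the body is the default nginx welcome page):

curl -I http://127.0.0.1:31153
# expected first line: HTTP/1.1 200 OK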
