Kubernetes Binary Install: Adding and Scaling Out Node Nodes

The previous posts walked through binary installation of several Kubernetes versions, but in many cases the cluster has to grow afterwards. As the workload expands, adding Node nodes becomes unavoidable. This post is a scale-out tutorial based on the Kubernetes 1.14 binary installation; the procedure for clusters installed from the other guides is essentially the same, only some paths may differ.

    In this walk-through a new node, k8s-05 with IP 192.168.0.14, is added to the cluster. Adding several nodes follows exactly the same steps, so only one is shown here.
    Unless stated otherwise, all commands are run on k8s-01.
    First, update the hosts files so the cluster can resolve k8s-05.

    # add on the existing master nodes
    echo "192.168.0.14  k8s-05" >>/etc/hosts
    
    
    # run on k8s-05
    cat >>/etc/hosts<<EOF
    192.168.0.10  k8s-01
    192.168.0.11  k8s-02
    192.168.0.12  k8s-03
    192.168.0.13  k8s-04
    EOF
    

    Check host resolution

    [root@k8s-01 ~]# ping -c 1 k8s-05
    PING k8s-05 (192.168.0.14) 56(84) bytes of data.
    64 bytes from k8s-05 (192.168.0.14): icmp_seq=1 ttl=64 time=0.910 ms
    
    --- k8s-05 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.910/0.910/0.910/0.000 ms
    

    Distribute the SSH key

    cd 
    ssh-copy-id -i ~/.ssh/id_rsa.pub k8s-05
    # distributing the key now makes copying certificates and other files later quick and easy
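
    A quick sanity check (not in the original post) that passwordless login works before moving on:

    ssh k8s-05 "hostname && date"
    # should print "k8s-05" and the current time without prompting for a password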
    

    Create the K8s directories on the k8s-05 node

    mkdir -p  /opt/k8s/{bin,work} /etc/{kubernetes,etcd}/cert
    

    Push the CA certificate

    cd /etc/kubernetes/cert
    scp ca.pem ca-config.json k8s-05:/etc/kubernetes/cert/
    

    Deploy flanneld

    cd /opt/k8s/work/flannel
    scp flanneld mk-docker-opts.sh k8s-05:/opt/k8s/bin/
    

    Copy the flanneld certificates

    ssh k8s-05 "mkdir -p /etc/flanneld/cert"
    scp /etc/flanneld/cert/flanneld*.pem k8s-05:/etc/flanneld/cert
    

    Copy the flanneld systemd unit and start the service

    scp /etc/systemd/system/flanneld.service k8s-05:/etc/systemd/system/
    
    # start flanneld
    ssh k8s-05 "systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld"
    
    # check that it started successfully
    ssh k8s-05 "systemctl status flanneld|grep Active"
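
    Assuming the flanneld unit from the original 1.14 guide is used (it runs mk-docker-opts.sh, which writes /run/flannel/docker, the same file the Docker unit references later on), the generated options can be inspected as an extra check:

    ssh k8s-05 "cat /run/flannel/docker"
    # expect a DOCKER_NETWORK_OPTIONS line carrying this node's flannel subnet and MTU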
    

    Check the flannel network data in etcd
    The newly added flanneld registers its subnet in etcd

    source /opt/k8s/bin/environment.sh
    etcdctl \
      --endpoints=${ETCD_ENDPOINTS} \
      --ca-file=/etc/kubernetes/cert/ca.pem \
      --cert-file=/etc/flanneld/cert/flanneld.pem \
      --key-file=/etc/flanneld/cert/flanneld-key.pem \
      ls ${FLANNEL_ETCD_PREFIX}/subnets
    

    A healthy result lists one subnet key under ${FLANNEL_ETCD_PREFIX}/subnets per node, including a new entry for k8s-05 (the original post shows this as a screenshot).
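
    As a rough illustration only, assuming FLANNEL_ETCD_PREFIX is /kubernetes/network; the subnet values below are inferred from the pod IPs that appear later in this post, so treat them as an example of the shape rather than exact output:

    /kubernetes/network/subnets/172.30.88.0-21
    /kubernetes/network/subnets/172.30.184.0-21
    /kubernetes/network/subnets/172.30.104.0-21
    /kubernetes/network/subnets/172.30.240.0-21
    /kubernetes/network/subnets/172.30.64.0-21
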
    Once the k8s-05 subnet shows up, the flannel network for the new node is set up.

    Install Docker on the Kubernetes Node

    Here Docker is installed directly on k8s-05.

    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget
    yum makecache fast
    yum -y install docker-ce
    

    Create the Docker daemon configuration file

    mkdir -p /etc/docker/
    cat > /etc/docker/daemon.json <<EOF
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "registry-mirrors": ["https://hjvrgh7a.mirror.aliyuncs.com"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
    EOF
    

    Modify the Docker systemd unit

    vim /usr/lib/systemd/system/docker.service
    

    Around line 14 of the unit file (the ExecStart line in the [Service] section), remove the original line and add the parameters below (the original post shows the edited file as a screenshot):

    EnvironmentFile=-/run/flannel/docker
    ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
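
    For orientation, after the edit the [Service] section should look roughly like the sketch below. It is based on a stock docker-ce unit, so the surrounding lines on your system may differ slightly:

    [Service]
    Type=notify
    # environment file written by flannel's mk-docker-opts.sh
    EnvironmentFile=-/run/flannel/docker
    ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
    ExecReload=/bin/kill -s HUP $MAINPID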
    

    Start Docker and check the service status

    systemctl daemon-reload && systemctl enable docker && systemctl restart docker
    
    systemctl status docker|grep Active
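
    To confirm that Docker actually picked up daemon.json (a quick check that is not in the original post):

    docker info 2>/dev/null | grep -E 'Cgroup Driver|Storage Driver'
    # expected: "Cgroup Driver: systemd" and "Storage Driver: overlay2"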
    

    Check that the docker0 bridge is up and sits inside the subnet flannel allocated to this node (here docker0 at 172.30.64.1/21 matches flannel.1's 172.30.64.0)

    [root@k8s-05 ~]# ip addr show flannel.1 && /usr/sbin/ip addr show docker0
    3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
        link/ether 1a:c7:47:3a:f5:4e brd ff:ff:ff:ff:ff:ff
        inet 172.30.64.0/32 scope global flannel.1
           valid_lft forever preferred_lft forever
        inet6 fe80::18c7:47ff:fe3a:f54e/64 scope link
           valid_lft forever preferred_lft forever
    4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
        link/ether 02:42:99:c3:03:9d brd ff:ff:ff:ff:ff:ff
        inet 172.30.64.1/21 brd 172.30.71.255 scope global docker0
           valid_lft forever preferred_lft forever
    

    Install kubelet

    Create the kubelet bootstrap kubeconfig

    # run on k8s-01
    
    cd /opt/k8s/work
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:k8s-05 \
      --kubeconfig ~/.kube/config)
    # set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=https://192.168.0.100:8443 \
      --kubeconfig=kubelet-bootstrap-k8s-05.kubeconfig
    # set client credentials
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-k8s-05.kubeconfig
    # set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-k8s-05.kubeconfig
    # use the default context
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-k8s-05.kubeconfig
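
    Optionally, sanity-check the generated kubeconfig before distributing it; kubectl redacts the embedded token unless --raw is passed:

    kubectl config view --kubeconfig=kubelet-bootstrap-k8s-05.kubeconfig
    # expect a cluster "kubernetes" pointing at https://192.168.0.100:8443
    # and a "kubelet-bootstrap" user entry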
    

    Distribute the kubeconfig

    cd /opt/k8s/work
    scp kubelet-bootstrap-k8s-05.kubeconfig k8s-05:/etc/kubernetes/kubelet-bootstrap.kubeconfig
    

    List the tokens kubeadm has created for each node

    $ kubeadm token list --kubeconfig ~/.kube/config
    TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS
    2bd2l8.48aqiyi70ilmyapd   23h       2020-02-15T15:18:04-05:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-03
    2juok7.m3ovzxlplynkidg2   23h       2020-02-15T15:18:04-05:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-02
    4tco8p.fnzj1yfvsx5hkf2e   23h       2020-02-15T15:18:05-05:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-04
    8kw20m.ehj3git0b2e1bwkc   23h       2020-02-15T15:17:56-05:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-01
    j1olh5.qzktcctz5kcaywqk   23h       2020-02-15T16:02:01-05:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-05
    kktiu3.3a1adkatjo4zjuqh   23h       2020-02-15T15:17:57-05:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-02
    nzkb4r.g63nm9qqbq2e474q   23h       2020-02-15T15:18:04-05:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-01
    
    # the entry for the k8s-05 node is now visible
    

    Check the Secret associated with each token

    $ kubectl get secrets  -n kube-system|grep bootstrap-token
    bootstrap-token-2bd2l8                           bootstrap.kubernetes.io/token         7      47m
    bootstrap-token-2juok7                           bootstrap.kubernetes.io/token         7      47m
    bootstrap-token-4tco8p                           bootstrap.kubernetes.io/token         7      47m
    bootstrap-token-8kw20m                           bootstrap.kubernetes.io/token         7      47m
    bootstrap-token-j1olh5                           bootstrap.kubernetes.io/token         7      3m29s
    bootstrap-token-kktiu3                           bootstrap.kubernetes.io/token         7      47m
    bootstrap-token-nzkb4r                           bootstrap.kubernetes.io/token         7      47m
    
    # a newly created Secret (3m29s old) corresponding to the new token is visible
    

    Create and distribute the kubelet configuration file

    cd /opt/k8s/work
    sed -e "s/##NODE_IP##/192.168.0.14/" kubelet-config.yaml.template > kubelet-config-192.168.0.14.yaml.template
    scp kubelet-config-192.168.0.14.yaml.template root@k8s-05:/etc/kubernetes/kubelet-config.yaml
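
    A quick check (not in the original post) that the rendered file on k8s-05 has no leftover placeholders and carries the right node IP; this assumes the kubelet-config.yaml.template from the 1.14 guide, where ##NODE_IP## is substituted into the address field:

    ssh root@k8s-05 "grep -c '##' /etc/kubernetes/kubelet-config.yaml; grep 192.168.0.14 /etc/kubernetes/kubelet-config.yaml"
    # the first grep should report 0 matches, the second should print the address line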
    

    Create and copy the kubelet systemd unit

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    sed -e "s/##NODE_NAME##/k8s-05/" kubelet.service.template > kubelet-k8s-05.service
    scp kubelet-k8s-05.service root@k8s-05:/etc/systemd/system/kubelet.service
    

    Copy the kubelet binary

    scp /opt/k8s/bin/kubelet k8s-05:/opt/k8s/bin/
    

    Start kubelet

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    ssh root@k8s-05 "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
    ssh root@k8s-05 "/usr/sbin/swapoff -a"
    ssh root@k8s-05 "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
    

    Manually approve the server cert CSR
    After a short wait, the certificate signing requests have to be approved by hand.
    For security reasons, the CSR-approving controllers do not automatically approve kubelet server certificate signing requests, so we approve them manually:

    kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
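
    Afterwards the requests from the new node should move to Approved,Issued; CSR names are auto-generated, so yours will differ:

    kubectl get csr
    # the CSRs requested by system:bootstrap:j1olh5 (and, for the serving cert,
    # system:node:k8s-05) should now show CONDITION Approved,Issued
    kubectl get node k8s-05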
    

    When we list the nodes again, k8s-05 shows up as Ready:

    [root@k8s-01 work]# kubectl get node
    NAME     STATUS   ROLES    AGE   VERSION
    k8s-01   Ready    <none>   57m   v1.14.2
    k8s-02   Ready    <none>   57m   v1.14.2
    k8s-03   Ready    <none>   57m   v1.14.2
    k8s-04   Ready    <none>   57m   v1.14.2
    k8s-05   Ready    <none>   51s   v1.14.2
    

    Install kube-proxy

    Unless noted otherwise, the steps below are executed on k8s-01.
    Push the kube-proxy binary to k8s-05

    cd /opt/k8s/work/
    scp kubernetes/server/bin/kube-proxy k8s-05:/opt/k8s/bin/
    

    Distribute the kube-proxy kubeconfig file

    cd /opt/k8s/work/
    scp kube-proxy.kubeconfig root@k8s-05:/etc/kubernetes/
    

    Create and distribute the kube-proxy configuration file

    cd /opt/k8s/work/
    sed -e "s/##NODE_NAME##/k8s-05/" -e "s/##NODE_IP##/192.168.0.14/" kube-proxy-config.yaml.template > kube-proxy-config-k8s-05.yaml.template
    scp kube-proxy-config-k8s-05.yaml.template root@k8s-05:/etc/kubernetes/kube-proxy-config.yaml
    

    Distribute the kube-proxy systemd unit file

    scp kube-proxy.service root@k8s-05:/etc/systemd/system/
    

    Start the kube-proxy service

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    ssh root@k8s-05 "mkdir -p ${K8S_DIR}/kube-proxy"
    ssh root@k8s-05 "modprobe ip_vs_rr"
    ssh root@k8s-05 "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
    

    Check the startup result; kube-proxy should be active and listening on its metrics and health-check ports (10249 and 10256 by default)

    ssh root@k8s-05 "systemctl status kube-proxy|grep Active"
    ssh root@k8s-05 "netstat -lnpt|grep kube-prox"
    

    Check the ipvs rules

    ssh root@k8s-05 "/usr/sbin/ipvsadm -ln"
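
    The exact rules depend on your service CIDR and apiserver endpoints, but at a minimum there should be a virtual server for the kubernetes service cluster IP round-robining to the apiserver(s). The output below is purely illustrative; the IPs and ports will differ in your environment:

    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  10.254.0.1:443 rr
      -> 192.168.0.10:6443            Masq    1      0          0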
    

    With that, the new node is fully installed; create a Pod to test everything end to end.

    cd /opt/k8s/work
    cat > nginx-ds.yml <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-ds
      labels:
        app: nginx-ds
    spec:
      type: NodePort
      selector:
        app: nginx-ds
      ports:
      - name: http
        port: 80
        targetPort: 80
    ---
    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: nginx-ds
      labels:
        addonmanager.kubernetes.io/mode: Reconcile
    spec:
      template:
        metadata:
          labels:
            app: nginx-ds
        spec:
          containers:
          - name: my-nginx
            image: daocloud.io/library/nginx:1.13.0-alpine
            ports:
            - containerPort: 80
    EOF
    
    kubectl apply -f /opt/k8s/work/nginx-ds.yml
    

    Check the result

    kubectl get pod -o wide
    NAME             READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
    busybox          1/1     Running   1          63m   172.30.88.4    k8s-01   <none>           <none>
    nginx-ds-25w2v   1/1     Running   0          65m   172.30.104.2   k8s-03   <none>           <none>
    nginx-ds-2b6mn   1/1     Running   0          65m   172.30.240.2   k8s-04   <none>           <none>
    nginx-ds-6rsm4   1/1     Running   0          12m   172.30.64.2    k8s-05   <none>           <none>
    nginx-ds-n58rv   1/1     Running   0          65m   172.30.88.2    k8s-01   <none>           <none>
    nginx-ds-zvnx2   1/1     Running   0          65m   172.30.184.2   k8s-02   <none>           <none>
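
    As a final functional check (not in the original post), hit the NodePort of the nginx-ds Service; kube-proxy on the new node should answer and load-balance to one of the pods:

    NODE_PORT=$(kubectl get svc nginx-ds -o jsonpath='{.spec.ports[0].nodePort}')
    curl -sI http://192.168.0.14:${NODE_PORT} | head -n1
    # expect an HTTP/1.1 200 OK response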
    

    Related articles:

    1. Kubernetes 1.14 binary cluster installation
    2. Kubernetes 1.13.5 binary cluster installation
    3. Kubernetes 1.11 binary cluster installation
    4. Building a highly available Kubernetes v1.24.0 cluster with kubeadm