Two common ways to deploy ingress-nginx

July 15, 2023

Here is a scenario: the entry point is a China Mobile cloud server, and their network performs address translation with SNAT and DNAT. This is not common in cloud environments; it usually happens in self-built physical server rooms, where DNAT and SNAT are normally done on a firewall or an external router.
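For context, such a DNAT/SNAT pair on the edge device would look roughly like the sketch below (the public address 203.0.113.10 and the backend node are illustrative only):

# map public 203.0.113.10:31443 to a port on a backend node (DNAT)
iptables -t nat -A PREROUTING  -d 203.0.113.10 -p tcp --dport 31443 \
  -j DNAT --to-destination 172.16.100.50:31443
# rewrite the source on the way to the backend so replies return through this box (SNAT)
iptables -t nat -A POSTROUTING -d 172.16.100.50 -p tcp --dport 31443 -j MASQUERADE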

One IP address with different ports is mapped to some port on a single backend node. In that setup the traffic is necessarily forwarded at layer 4, which creates a problem: HTTPS cannot be terminated there. Either you put a proxy layer in front, or you terminate HTTPS on the backend, and the backend here is a Kubernetes cluster, as shown below.

[figure: image-20220313220752212.png]

Terminating HTTPS on the backend requires a layer that can route by host name, and this is where something like ingress-nginx comes in:

[figure: image-20220313220710263.png]

Whichever of the two approaches above you take, you are still left with the problems of load balancing and of taking failed backend nodes out of rotation. If a backend or frontend node goes down, you have a single point of failure and need to remove the failed backend quickly, so the setup most likely ends up looking like this:

[figure: image-20220313221652903.png]

This still leaves a single point of failure. The DNAT layer is out of scope, since it is beyond our control; the proxy layer only takes failed backend servers out of rotation or brings them back, and at this point you are most likely running a layer-4 nginx.

Taking ingress-nginx as an example, there are basically two ways to set it up:

  • 1. Forward traffic through the default NodePort Service
  • 2. Forward traffic in the host's network namespace (hostNetwork)

Both approaches are commonly used. The second is generally considered more efficient: the pod does not get an isolated network namespace but shares the host's, which removes one layer of network-namespace traversal (the extra NodePort NAT hop), so from that angle it is faster than NodePort.
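A quick way to observe the difference on a node (a sketch; it assumes kube-proxy in iptables mode and the fixed NodePort 31080 configured later in this article):

# NodePort mode: kube-proxy installs a DNAT rule for the node port
iptables -t nat -S KUBE-NODEPORTS | grep 31080
# hostNetwork mode: the controller process itself listens on the host's 80/443
ss -lntp | grep -E ':(80|443) '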

Installing ingress-nginx

Download the ingress-nginx 1.1.1 bare-metal manifest:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/baremetal/deploy.yaml

The upstream images live on k8s.gcr.io, so I used copies that someone has already mirrored:

docker pull liangjw/kube-webhook-certgen:v1.1.1
docker pull liangjw/ingress-nginx-controller:v1.1.1

Retag them to the names referenced in the manifest:

docker tag liangjw/kube-webhook-certgen:v1.1.1 k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
docker tag liangjw/ingress-nginx-controller:v1.1.1 k8s.gcr.io/ingress-nginx/controller:v1.1.1

If some nodes have no Internet access, save the images locally and copy them to those nodes:

docker save -o controller.tar k8s.gcr.io/ingress-nginx/controller:v1.1.1
docker save -o kube-webhook-certgen.tar k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
for i in 21 30 ;do scp controller.tar 172.16.1.$i:~/;done
for i in 21 30 ;do scp kube-webhook-certgen.tar 172.16.1.$i:~/;done
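On the receiving nodes the tarballs then need to be loaded back into Docker; assuming root SSH access, something like:

for i in 21 30; do
  ssh 172.16.1.$i "docker load -i ~/controller.tar && docker load -i ~/kube-webhook-certgen.tar"
done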

You may also want to run the controller as a DaemonSet, or schedule it with nodeName (sketched below):

sed -i 's/kind: Deployment/kind: DaemonSet/g' deploy.yaml
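If you keep the Deployment and pin it with nodeName instead, the relevant fragment of the controller's pod template would look roughly like this (a sketch; 172.16.100.50 is one of the nodes used later in this article):

# deploy.yaml, inside the controller's pod template (illustrative)
spec:
  template:
    spec:
      nodeName: 172.16.100.50   # bypasses the scheduler and pins the pod to this node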

Apply it declaratively:

[root@node1 ~/ingress]# kubectl apply -f deployment.yaml 
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created

Under normal circumstances the ingress-nginx-controller is now ready:

[root@linuxea-50 ~/ingress]# kubectl -n ingress-nginx  get pod  -o wide
NAME                                        READY   STATUS      RESTARTS   AGE    IP              NODE            NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-m7hph        0/1     Completed   0          114s   192.20.137.84   172.16.100.50   <none>           <none>
ingress-nginx-admission-patch-bmx2r         0/1     Completed   0          114s   192.20.180.14   172.16.100.51   <none>           <none>
ingress-nginx-controller-78c57d6886-m7mtc   1/1     Running     0          114s   192.20.180.16   172.16.100.51   <none>           <none>

We haven't modified anything, so its NodePort ports are randomly assigned; they can be changed to fixed ports:

[root@linuxea-50 ~/ingress]#  kubectl -n ingress-nginx get svc
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.68.108.97    <none>        80:31837/TCP,443:31930/TCP   2m28s
ingress-nginx-controller-admission   ClusterIP   10.68.102.110   <none>        443/TCP                      2m28s
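A quick way to pin the ports without re-editing the whole manifest is kubectl patch (a sketch; it assumes the http and https entries are the first and second items in .spec.ports, as in the stock manifest):

kubectl -n ingress-nginx patch svc ingress-nginx-controller --type=json \
  -p='[{"op":"replace","path":"/spec/ports/0/nodePort","value":31080},{"op":"replace","path":"/spec/ports/1/nodePort","value":31443}]'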

Testing the setup

Create an nginx Service and Deployment named myapp:

apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: linuxea_app
    version: v0.1.32
  ports:
  - name: http
    targetPort: 80
    port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea
  namespace: default
spec:
  replicas: 7
  selector:
    matchLabels:
      app: linuxea_app
      version: v0.1.32
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v0.1.32
    spec:
      containers:
      - name: nginx-a
        image: marksugar/nginx:1.14.b
        ports:
        - name: http
          containerPort: 80

Make sure it can be reached through the cluster IP:

[root@linuxea-50 ~/ingress]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.68.0.1       <none>        443/TCP    273d
myapp        ClusterIP   10.68.211.186   <none>        80/TCP     9m42s
mysql        ClusterIP   None            <none>        3306/TCP   2d5h
[root@linuxea-50 ~/ingress]# curl  10.68.211.186
linuxea-dpment-linuxea-6bdfbd7b77-tlh8k.com-127.0.0.1/8 192.20.137.98/32

Configuring the Ingress

  • ingress.yaml

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: test
      namespace: default
    spec:
      ingressClassName: nginx
      rules:
      - host: linuxea.test.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80

    Then just apply it:

    [root@linuxea-50 ~/ingress]# kubectl apply -f ingress.yaml
    ingress.networking.k8s.io/test configured
    [root@linuxea-50 ~/ingress]# kubectl get ingress
    NAME   CLASS   HOSTS              ADDRESS         PORTS   AGE
    test   nginx   linuxea.test.com   172.16.100.51   80      30s

Now let's test access.

First edit the local hosts file to handle name resolution:

# C:\Windows\System32\drivers\etc\hosts
# add the following line
172.16.100.51 linuxea.test.com
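If you would rather not touch the hosts file, the same thing can be verified from the command line (31837 is the random NodePort shown by `get svc` above):

curl -H 'Host: linuxea.test.com' http://172.16.100.51:31837/
# or let curl handle the name resolution itself
curl --resolve linuxea.test.com:31837:172.16.100.51 http://linuxea.test.com:31837/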

[figure: image-20220310225033503.png]

At this point the service is reachable with the default configuration.

Configuring load balancing

[figure: image-20220314215413535.png]

1. Bind ingress-nginx to specific nodes

At this point we only want a subset of nodes to run the controller, so we add labels for a node selector:

kubectl label node 172.16.100.50 beta.kubernetes.io/zone=ingress
kubectl label node 172.16.100.51 beta.kubernetes.io/zone=ingress

Remove the previous nodeSelector:

      nodeSelector:
        kubernetes.io/os: linux

and change it to:

      nodeSelector:
        beta.kubernetes.io/zone: ingress

Also change the Deployment into a DaemonSet:

sed -i 's/kind: Deployment/kind: DaemonSet/g' deployment.yaml2
[root@linuxea-50 ~/ingress]# kubectl apply -f deployment.yaml2 

After labeling, the nodes look like this:

[root@linuxea-50 ~/ingress]# kubectl -n ingress-nginx get node  --show-labels|grep "ingress"
172.16.100.50   Ready    master   276d   v1.20.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/os=linux,beta.kubernetes.io/zone=ingress,kubernetes.io/arch=amd64,kubernetes.io/hostname=172.16.100.50,kubernetes.io/os=linux,kubernetes.io/role=master
172.16.100.51   Ready    master   276d   v1.20.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/fluentd-ds-ready=true,beta.kubernetes.io/os=linux,beta.kubernetes.io/zone=ingress,kubernetes.io/arch=amd64,kubernetes.io/hostname=172.16.100.51,kubernetes.io/os=linux,kubernetes.io/role=master

And the pods are now spread across both labeled nodes:

[root@linuxea-50 ~/ingress]# kubectl -n ingress-nginx get pod -o wide
NAME                                   READY   STATUS      RESTARTS   AGE   IP               NODE           
ingress-nginx-admission-create-kfj7v   0/1     Completed   0          22m   192.20.137.99    172.16.100.50  
ingress-nginx-admission-patch-5dwvf    0/1     Completed   1          22m   192.20.137.110   172.16.100.50  
ingress-nginx-controller-4q9qb         1/1     Running     0          12m   192.20.180.27    172.16.100.51  
ingress-nginx-controller-n9qkl         1/1     Running     0          12m   192.20.137.92    172.16.100.50  

Access now goes through 172.16.100.51 and 172.16.100.50, so requests have to hit the NodePort on these two nodes; therefore we fix the NodePort ports:

apiVersion: v1
kind: Service
....
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
      nodePort: 31080
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      nodePort: 31443
      appProtocol: https
  selector:
....
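Apply the change and confirm that the service now exposes the fixed ports:

kubectl apply -f deployment.yaml2
kubectl -n ingress-nginx get svc ingress-nginx-controller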

[figure: image-20220313220101179.png]

2. Configure nginx

We can configure nginx at layer 4 or layer 7, depending on what we need. At layer 4 it only takes failed backend nodes out of rotation; at layer 7 we can additionally configure virtual hosts, certificates, and so on.

yum install nginx-mod-stream nginx

Once the layer-4 (stream) module is installed it is loaded automatically; we just reference it in nginx.conf:

stream {
    include  stream/*.conf;
}
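For clarity, the stream block sits at the top level of /etc/nginx/nginx.conf, next to the http block rather than inside it; an abbreviated sketch:

# /etc/nginx/nginx.conf (abbreviated sketch)
user nginx;
worker_processes auto;
events { worker_connections 1024; }

http {
    include /etc/nginx/conf.d/*.conf;   # layer-7 virtual hosts
}

stream {
    include stream/*.conf;              # layer-4 proxies, path relative to /etc/nginx
}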

And create the directory:

mkdir stream

Then create a configuration file under stream/ with the following content:

upstream test-server {
  server 172.16.100.50:31080 max_fails=3 fail_timeout=1s weight=1;
  server 172.16.100.51:31080 max_fails=3 fail_timeout=1s weight=1;
}
log_format  proxy '$remote_addr $remote_port - [$time_local] $status $protocol '
              '"$upstream_addr" "$upstream_bytes_sent" "$upstream_connect_time"' ;
access_log /data/logs/nginx/web-server.log proxy;
server {
        listen 31080;
        proxy_connect_timeout 3s;
        proxy_timeout 3s;
        proxy_pass test-server;
}

Let's test access:

[root@linuxea-49 /etc/nginx]# tail -f /data/logs/nginx/web-server.log 
172.16.100.3 4803 - [14/Mar/2022:00:38:27 +0800] 200 TCP "172.16.100.51:31080" "19763" "0.000"
172.16.100.3 4811 - [14/Mar/2022:00:38:29 +0800] 200 TCP "172.16.100.50:31080" "43999" "0.000"
172.16.100.3 4812 - [14/Mar/2022:00:38:30 +0800] 200 TCP "172.16.100.51:31080" "44105" "0.000"
172.16.100.3 4813 - [14/Mar/2022:00:38:31 +0800] 200 TCP "172.16.100.50:31080" "43944" "0.000"
172.16.100.3 4816 - [14/Mar/2022:00:38:34 +0800] 200 TCP "172.16.100.51:31080" "3464" "0.000"
172.16.100.3 4819 - [14/Mar/2022:00:38:43 +0800] 200 TCP "172.16.100.50:31080" "44105" "0.001"
172.16.100.3 4820 - [14/Mar/2022:00:38:44 +0800] 200 TCP "172.16.100.51:31080" "44105" "0.000"
172.16.100.3 4821 - [14/Mar/2022:00:38:47 +0800] 200 TCP "172.16.100.50:31080" "8660" "0.000"
172.16.100.3 4825 - [14/Mar/2022:00:39:06 +0800] 200 TCP "172.16.100.51:31080" "42747" "0.000"
172.16.100.3 4827 - [14/Mar/2022:00:39:09 +0800] 200 TCP "172.16.100.50:31080" "32058" "0.000"

As shown:

[figure: image-20220314004117656.png]

Configuring load balancing, option 2

We modify the ingress-nginx manifest to use hostNetwork: true. The ingress-nginx pods then no longer get an isolated network namespace and use the host network directly, so the proxy can target the hosts' own ports instead of going through a NodePort.

[figure: image-20220314215441420.png]

1. Change the network mode and the dnsPolicy. Once hostNetwork: true is set, dnsPolicy can no longer be ClusterFirst; it must be ClusterFirstWithHostNet so that names resolve both inside the cluster and on the host.

spec:
  selector:
...
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: controller
          image: k8s.gcr.io/ingress-nginx/controller:v1.1.1
....

Apply and check:

[root@linuxea-50 ~/ingress]# kubectl apply -f deployment.yaml2
[root@linuxea-50 ~/ingress]# kubectl -n ingress-nginx get pod -o wide
NAME                                   READY   STATUS      RESTARTS   AGE   IP               NODE          
ingress-nginx-admission-create-kfj7v   0/1     Completed   0          23h   192.20.137.99    172.16.100.50 
ingress-nginx-admission-patch-5dwvf    0/1     Completed   1          23h   192.20.137.110   172.16.100.50 
ingress-nginx-controller-5nd59         1/1     Running     0          46s   172.16.100.51    172.16.100.51 
ingress-nginx-controller-zzrsz         1/1     Running     0          85s   172.16.100.50    172.16.100.50 

After the change you will see the pod using the host's network interfaces, which means the network namespace is no longer isolated; compared with NodePort this is faster:

[root@linuxea-50 ~/ingress]# kubectl -n ingress-nginx exec -it ingress-nginx-controller-zzrsz -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether a2:5a:55:54:d1:1d brd ff:ff:ff:ff:ff:ff
    inet 172.16.100.50/24 brd 172.16.100.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::b68f:449c:af0f:d91f/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:c8:d3:08:a9 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: calib6c2ec954f8@docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
6: califa7cddb93a8@docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
....
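Because the controller now binds ports 80 and 443 on the node itself, it can be reached without any NodePort at all (assuming the test Ingress created earlier is still present):

# run from any machine that can reach the node
curl -H 'Host: linuxea.test.com' http://172.16.100.50/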

2. Configure nginx as a layer-7 proxy

Now we can configure nginx directly without worrying about the Service.

The certificate can now be configured on this layer-7 proxy.

Generate a self-signed SSL certificate:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout linuxea.key -out linuxea.crt -subj /C=CH/ST=ShangHai/L=Xian/O=Devops/CN=linuxea.test.com
[root@linuxea-49 /etc/nginx]# ll ssl
total 12
-rw-r--r-- 1 root root  424 Mar 14 22:23 dhparam.pem
-rw-r--r-- 1 root root 1285 Mar 14 22:31 linuxea.crt
-rw-r--r-- 1 root root 1704 Mar 14 22:31 linuxea.key
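The dhparam.pem in the listing is optional and only needed if you reference it (for example from an ssl-params.conf); if you want to generate one, something like this works:

openssl dhparam -out ssl/dhparam.pem 2048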

Log format:

    log_format upstream2 '$proxy_add_x_forwarded_for $remote_user [$time_local] "$request" $http_host '
        '$body_bytes_sent "$http_referer" "$http_user_agent" $ssl_protocol $ssl_cipher '
        '$request_time [$status] [$upstream_status] [$upstream_response_time] "$upstream_addr"';

Then drop the files into conf.d:

k8s.conf

upstream web {
    server 172.16.100.50:80 max_fails=3 fail_timeout=1s weight=1;
    server 172.16.100.51:80 max_fails=3 fail_timeout=1s weight=1;
}
#server {
#    listen 80;
#    server_name http://linuxea.test.com/;
#    if ($scheme = 'http' ) { rewrite ^(.*)$ https://$host$1 permanent; }
#    index index.html index.htm index.php default.html default.htm default.php;
#}
server {
    listen 80;
    server_name linuxea.test.com;
#    if ($scheme = 'http' ) { rewrite ^(.*)$ https://$host$1 permanent; }
    index index.html index.htm index.php default.html default.htm default.php;
    #limit_conn conn_one 20;
    #limit_conn perserver 20;
    #limit_rate 100k;
    #limit_req zone=anti_spider burst=10 nodelay;
    #limit_req zone=req_one burst=5 nodelay;
    access_log  /data/logs/nginx/web-server.log upstream2;
    location / {
      proxy_pass http://web;
      include proxy.conf;
    }
}
server {
    listen 443 ssl;
    server_name linuxea.test.com;
    #include fangzhuru.conf;
    ssl_certificate   ssl/linuxea.crt;
    ssl_certificate_key  ssl/linuxea.key;
    access_log  /data/logs/nginx/web-server.log upstream2;
#    include ssl-params.conf;
    location / {
        proxy_pass http://web;
        include proxy.conf;
    }
}

proxy.conf is as follows:

proxy_connect_timeout 1000s;
proxy_send_timeout  2000;
proxy_read_timeout   2000;
proxy_buffer_size    128k;
proxy_buffers     4 256k;
proxy_busy_buffers_size 256k;
proxy_redirect     off;

proxy_set_header   Upgrade   $http_upgrade;
proxy_set_header   Connection   "upgrade";
proxy_set_header   REMOTE-HOST   $remote_addr;

proxy_hide_header  Vary;
proxy_set_header   Accept-Encoding '';
proxy_set_header   Host   $host;
proxy_set_header   Referer $http_referer;
proxy_set_header   Cookie $http_cookie;
proxy_set_header   X-Real-IP  $remote_addr;
proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
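With k8s.conf and proxy.conf in place, validate and reload nginx before testing:

nginx -t && systemctl reload nginx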

3. Access test

[figure: image-20220314224427906.png] [figure: image-20220314224448987.png]

4. Configure HTTPS

Now configure HTTPS on ingress-nginx itself:

[root@linuxea-50 ~/ingress]# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout linuxea.key -out linuxea.crt -subj /C=CH/ST=ShangHai/L=Xian/O=Devops/CN=linuxea.test.com
Generating a 2048 bit RSA private key
......................+++
............................................+++
writing new private key to 'linuxea.key'
-----

Create the TLS secret:

[root@linuxea-50 ~/ingress]# kubectl create secret tls nginx-ingress-secret --cert=linuxea.crt --key=linuxea.key
secret/nginx-ingress-secret created

Inspect the secret:

[root@linuxea-50 ~/ingress]#  kubectl get secret nginx-ingress-secret 
NAME                   TYPE                DATA   AGE
nginx-ingress-secret   kubernetes.io/tls   2      24s
[root@linuxea-50 ~/ingress]#  kubectl describe secret nginx-ingress-secret 
Name:         nginx-ingress-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/tls

Data
====
tls.crt:  1285 bytes
tls.key:  1700 bytes

Then add a tls section to the Ingress spec:

spec:
  tls:
  - hosts:
    - linuxea.test.com
    secretName: nginx-ingress-secret

The full manifest now looks like this:

[root@linuxea-50 ~/ingress]# cat ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
  namespace: default
spec:
  tls:
  - hosts:
    - linuxea.test.com
    secretName: nginx-ingress-secret
  ingressClassName: nginx
  rules:
  - host: linuxea.test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80

Add port 443 to the layer-4 proxy:

# log_format must be defined before it is referenced; one access_log covers both servers
log_format  proxy '$remote_addr $remote_port - [$time_local] $status $protocol '
              '"$upstream_addr" "$upstream_bytes_sent" "$upstream_connect_time"' ;
access_log /data/logs/nginx/web-server.log proxy;

upstream test-server443 {
  server 172.16.100.50:443 max_fails=3 fail_timeout=1s weight=1;
  server 172.16.100.51:443 max_fails=3 fail_timeout=1s weight=1;
}
server {
        listen 443;
        proxy_connect_timeout 3s;
        proxy_timeout 3s;
        proxy_pass test-server443;
}

upstream test-server {
  server 172.16.100.50:80 max_fails=3 fail_timeout=1s weight=1;
  server 172.16.100.51:80 max_fails=3 fail_timeout=1s weight=1;
}
server {
        listen 80;
        proxy_connect_timeout 3s;
        proxy_timeout 3s;
        proxy_pass test-server;
}
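After reloading nginx, HTTPS can be verified end to end through the layer-4 proxy (with linuxea.test.com still resolving to the proxy in the hosts file; -k because the certificate is self-signed):

curl -kv https://linuxea.test.com/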

[figure: image-20220314230428775.png]

[root@linuxea-49 /etc/nginx/stream]# tail -f /data/logs/nginx/web-server.log 
172.16.100.3 12881 - [14/Mar/2022:22:55:40 +0800] 200 TCP "172.16.100.51:443" "1444" "0.000"
172.16.100.3 12881 - [14/Mar/2022:22:55:40 +0800] 200 TCP "172.16.100.51:443" "1444" "0.000"
172.16.100.3 13183 - [14/Mar/2022:23:04:39 +0800] 200 TCP "172.16.100.50:443" "547" "0.000"
172.16.100.3 13183 - [14/Mar/2022:23:04:39 +0800] 200 TCP "172.16.100.50:443" "547" "0.000"
172.16.100.3 13184 - [14/Mar/2022:23:04:42 +0800] 200 TCP "172.16.100.51:443" "1492" "0.000"
172.16.100.3 13184 - [14/Mar/2022:23:04:42 +0800] 200 TCP "172.16.100.51:443" "1492" "0.000"
172.16.100.3 13234 - [14/Mar/2022:23:05:58 +0800] 200 TCP "172.16.100.50:443" "547" "0.001"
172.16.100.3 13234 - [14/Mar/2022:23:05:58 +0800] 200 TCP "172.16.100.50:443" "547" "0.001"
172.16.100.3 13235 - [14/Mar/2022:23:06:01 +0800] 200 TCP "172.16.100.51:443" "1227" "0.000"
172.16.100.3 13235 - [14/Mar/2022:23:06:01 +0800] 200 TCP "172.16.100.51:443" "1227" "0.000"


