Preface
This article walks through the networking fundamentals behind Services in Kubernetes and demonstrates how to set up an Ingress from scratch. All of the images used can be pulled from within mainland China. The reference is the 5th edition of 《Kubernetes权威指南:从Docker到Kubernetes实践全接触》.
Main Text
I. Preliminary Preparation
To demonstrate the networking concepts, a Kubernetes cluster with one master and two worker nodes has been prepared. The cluster nodes are shown below.
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
m Ready control-plane,master 44d v1.23.14 192.168.52.128 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.7
w1 Ready <none> 44d v1.23.14 192.168.52.129 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.7
w2 Ready <none> 44d v1.23.14 192.168.52.130 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.7
A simple Spring Boot application image has also been prepared. The application exposes a single endpoint that returns a line of text; the corresponding controller is shown below.
@RestController
public class TestController {
    @GetMapping("/api/v1/say")
    public String sayHello() {
        return "你好世界";
    }
}
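The original does not show how the learn-k8s:v1 image is built or distributed. A minimal sketch, assuming the application is packaged as a standard Spring Boot jar with a hypothetical Dockerfile and that every node runs Docker (as the node table above indicates), could look like this:
# build the image on the build machine (the Dockerfile and jar are assumptions, not shown in the original)
docker build -t learn-k8s:v1 .
# export the image, copy it to the worker nodes, then load it there
docker save learn-k8s:v1 -o learn-k8s-v1.tar
scp learn-k8s-v1.tar root@192.168.52.129:/root/
scp learn-k8s-v1.tar root@192.168.52.130:/root/
# on w1 and w2
docker load -i /root/learn-k8s-v1.tar
Because the tag is not latest, the default imagePullPolicy of IfNotPresent lets the Deployment use the locally loaded image.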
The following deployment yml file was then prepared.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: learn-k8s
  labels:
    app: learn-k8s
spec:
  replicas: 2
  selector:
    matchLabels:
      app: learn-k8s
  template:
    metadata:
      labels:
        app: learn-k8s
    spec:
      containers:
      - name: learn-k8s
        image: learn-k8s:v1
        ports:
        - containerPort: 8080
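Assuming the file above is saved as learn-k8s-deploy.yaml (the file name is not given in the original), the Deployment can be applied and the Pods listed like this:
kubectl apply -f learn-k8s-deploy.yaml
# list the Pods together with their IPs and the nodes they run on
kubectl get pods -o wide | grep learn-k8s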
After a successful deployment, the two Pods look like this.
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
learn-k8s-587954dc67-b84z2 1/1 Running 1 (49m ago) 6d22h 10.244.80.196 w2 <none> <none>
learn-k8s-587954dc67-xd7vz 1/1 Running 0 40m 10.244.190.68 w1 <none> <none>
II. Pod Networking
Each of the two Pods created during preparation has its own IP address. These addresses are assigned by the cluster when the Pods are created; they belong to the same internal network and do not overlap.
Recall that when the cluster was bootstrapped with kubeadm, the master node was initialized with the following command.
kubeadm init --kubernetes-version=1.23.14 --apiserver-advertise-address=192.168.52.128 --pod-network-cidr=10.244.0.0/16
The --pod-network-cidr flag above specifies the IP range used for Pods.
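On an existing kubeadm cluster, one way to confirm the configured Pod CIDR (a quick check; the label and output shape assume a standard kubeadm setup) is to read the cluster-cidr flag of the controller manager:
kubectl -n kube-system get pod -l component=kube-controller-manager -o yaml | grep cluster-cidr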
Because the Calico network plugin was installed when the cluster was set up, the following kinds of network access all work inside the cluster.
- Network access between Pods;
- A Pod accessing a cluster Node;
- A cluster Node accessing a Pod.
This is illustrated below.
The basic principle of the Kubernetes network model is that each Pod has its own IP address and can be reached at that address from anywhere inside the cluster. There is therefore no need to reason about the network of the individual containers inside a Pod, because the Pod is the smallest unit of operation in Kubernetes. It also follows that if a Pod contains multiple containers, those containers share the Pod's IP address, network devices, and network configuration.
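A quick way to verify the access paths listed above, using the Pod IPs shown earlier (your IPs will differ, and the Pod-to-Pod check assumes the image ships a shell with curl):
# Node -> Pod: run on any cluster node
curl http://10.244.80.196:8080/api/v1/say
# Pod -> Pod: exec into one Pod and call the other one
kubectl exec learn-k8s-587954dc67-xd7vz -- curl -s http://10.244.80.196:8080/api/v1/say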
III. Services in Kubernetes
1. Basic Concepts
We now know that a Pod can be reached directly by its IP from inside the cluster. With that in mind, consider the following questions.
Question 1: Pods may be scaled out, scaled in, or rebuilt, which adds Pods or changes Pod IPs. How are these Pod IPs kept track of?
Question 2: A service is usually backed by multiple Pods. How can client traffic be spread evenly across those Pods?
Question 3: Most of the time Pods need to be accessed from outside the cluster. How is that handled?
These questions boil down to: how does a client outside the cluster keep up with the dynamic changes of the Pods inside the cluster?
To solve the problems above (and more), Kubernetes provides the Service component, which gives a group of Pods a stable access address together with load balancing. The Service is the core component with which Kubernetes implements microservices.
There are several types of Service; the rest of this section walks through the basics with examples.
2. ClusterIP
A Service can be created with its type set to ClusterIP. The cluster then assigns the Service a ClusterIP (a virtual IP address), and the Service can be reached at that ClusterIP from inside the cluster.
So how is a Service associated with Pods? Consider the following Service yml file.
apiVersion: v1
kind: Service
metadata:
  name: learn-k8s-service
spec:
  selector:
    app: learn-k8s
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 8080
  type: ClusterIP
The Service is named learn-k8s-service and its type is ClusterIP. It selects every Pod that carries the label app: learn-k8s. port sets the Service's own port to 8000, and targetPort sets the port of the selected Pods to 8080.
Deploy the Service with kubectl apply -f learn-k8s-service-deploy.yaml, then inspect it with kubectl get service -o wide | grep learn-k8s-service; the output is shown below.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
learn-k8s-service ClusterIP 10.110.100.168 <none> 8000/TCP 2m51s app=learn-k8s
The Service can now be reached at 10.110.100.168:8000. The relationship between the Service and its Pods at this point is shown below.
Now run the following command from inside the cluster.
curl http://10.110.100.168:8000/api/v1/say
It prints the following result.
你好世界
In other words, by calling the Service, the request did land on one of the Pods behind it. Running kubectl describe service learn-k8s-service now yields the following information.
Name: learn-k8s-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=learn-k8s
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.110.100.168
IPs: 10.110.100.168
Port: <unset> 8000/TCP
TargetPort: 8080/TCP
Endpoints: 10.244.190.68:8080,10.244.80.196:8080
Session Affinity: None
Events: <none>
As shown, the Endpoints of the Service (that is, IP:Port pairs) are 10.244.190.68:8080 and 10.244.80.196:8080, which are exactly the IPs and ports of the selected Pods. When a request reaches the Service, it is routed to one of these Endpoints according to the load-balancing policy. If Pods are added or removed, or a Pod's IP changes, the Service notices the change (Kubernetes keeps watching the Pod list and maintains the Endpoints accordingly). To demonstrate this, run kubectl scale deployment learn-k8s --replicas=3 to scale the Deployment out, then look at the Pods again, as shown below.
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
learn-k8s-587954dc67-b84z2 1/1 Running 2 (6h30m ago) 8d 10.244.80.196 w2 <none> <none>
learn-k8s-587954dc67-dx4kg 1/1 Running 0 11s 10.244.190.70 w1 <none> <none>
learn-k8s-587954dc67-xd7vz 1/1 Running 1 (6h30m ago) 42h 10.244.190.68 w1 <none> <none>
Now describe the Service again; the output is shown below.
Name: learn-k8s-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=learn-k8s
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.110.100.168
IPs: 10.110.100.168
Port: <unset> 8000/TCP
TargetPort: 8080/TCP
Endpoints: 10.244.190.68:8080,10.244.190.70:8080,10.244.80.196:8080
Session Affinity: None
Events: <none>
The Service has automatically picked up the newly created Pod. In other words, whether Pods are scaled out, scaled in, or rebuilt with new IPs, the Service maintains the list of backend Pods for us. To consume the service we only need to call the Service itself, without caring about the individual Pod instances behind it.
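Besides kubectl describe, the backend list can also be read directly from the Endpoints object that Kubernetes maintains for the Service:
kubectl get endpoints learn-k8s-service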
3. NodePort
Section 2 gave a first look at the ClusterIP type. Its obvious limitation is that the Service can only be reached through the ClusterIP from inside the cluster; it cannot be reached from outside.
One way to address this is the NodePort type of Service: the Service port is mapped to a port (the NodePort) on every node of the cluster. As long as a client can reach the cluster nodes, it can reach the Service, and therefore the Pods, through any node IP plus the NodePort.
Let's modify the Service yml file as follows.
apiVersion: v1
kind: Service
metadata:
  name: learn-k8s-service
spec:
  selector:
    app: learn-k8s
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 8080
    nodePort: 30000
  type: NodePort
That is, the type is changed from ClusterIP to NodePort, and nodePort: 30000 is added, which maps the Service's port 8000 to port 30000 on every Node of the cluster.
Redeploy the Service; once it is up, its information looks like this.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
learn-k8s-service NodePort 10.106.39.22 <none> 8000:30000/TCP 15s app=learn-k8s
Now access the Service from outside the cluster with a client tool; the result is as follows.
The network diagram at this point is shown below.
The Service can now be reached through the IP of any Node in the cluster plus the NodePort.
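As an alternative to a GUI client, a plain curl against any node IP plus the NodePort (the node IPs come from the node table at the top) returns the same greeting:
curl http://192.168.52.129:30000/api/v1/say
curl http://192.168.52.130:30000/api/v1/say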
4. Ingress
A NodePort Service does let clients outside the cluster reach a Service inside it, but the approach has a few drawbacks: every Service occupies a port on every node, the usable NodePort range is limited (30000-32767 by default), and forwarding happens at layer 4, with no routing by domain name or URL path.
To solve these problems we can use an Ingress. The idea is to deploy an Ingress controller (Ingress Controller), map the controller's ports to ports on the cluster nodes, and then add a routing table. From outside the cluster, a node IP plus the mapped port reaches the Ingress controller, the routing table routes the request to our Service, and the Service finally forwards it to a Pod.
Let's now create the Ingress Controller. Section 3 created a NodePort type Service; delete it first with the following command.
kubectl delete service learn-k8s-service
Then change the content of the Service yml file back to the ClusterIP type, as shown below.
apiVersion: v1
kind: Service
metadata:
  name: learn-k8s-service
spec:
  selector:
    app: learn-k8s
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 8080
  type: ClusterIP
Run the following command to deploy the Service.
kubectl apply -f learn-k8s-service-deploy.yaml
We will use the Ingress Nginx Controller. Its official documentation offers two installation methods: the first uses Helm and the second uses a plain yml manifest; we pick the simpler yml approach.
The official project already provides the yml file, which can be viewed at the following address.
https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.0/deploy/static/provider/cloud/deploy.yaml
Because of certain mysterious forces, two problems tend to appear at this point: the address above may not be reachable, and the container images it references are hosted on registries that are hard to pull from. That does not matter; the complete yml file, already adjusted, is pasted below.
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx
namespace: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- coordination.k8s.io
resourceNames:
- ingress-nginx-leader
resources:
- leases
verbs:
- get
- update
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx-admission
namespace: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
- namespaces
verbs:
- list
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx-admission
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx-admission
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx-admission
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: v1
data:
allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx-controller
namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
externalTrafficPolicy: Local
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- appProtocol: http
name: http
port: 80
protocol: TCP
targetPort: http
- appProtocol: https
name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx-controller-admission
namespace: ingress-nginx
spec:
ports:
- appProtocol: https
name: https-webhook
port: 443
targetPort: webhook
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
minReadySeconds: 0
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
spec:
containers:
- args:
- /nginx-ingress-controller
- --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
- --election-id=ingress-nginx-leader
- --controller-class=k8s.io/ingress-nginx
- --ingress-class=nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.8.0
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: controller
ports:
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
- containerPort: 8443
name: webhook
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
requests:
cpu: 100m
memory: 90Mi
securityContext:
allowPrivilegeEscalation: true
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 101
volumeMounts:
- mountPath: /usr/local/certificates/
name: webhook-cert
readOnly: true
dnsPolicy: ClusterFirst
hostNetwork: true
nodeSelector:
nodename: ingress
kubernetes.io/os: linux
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 300
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx-admission-create
namespace: ingress-nginx
spec:
template:
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx-admission-create
spec:
containers:
- args:
- create
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
- --namespace=$(POD_NAMESPACE)
- --secret-name=ingress-nginx-admission
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v20230407
imagePullPolicy: IfNotPresent
name: create
securityContext:
allowPrivilegeEscalation: false
nodeSelector:
kubernetes.io/os: linux
restartPolicy: OnFailure
securityContext:
fsGroup: 2000
runAsNonRoot: true
runAsUser: 2000
serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx-admission-patch
namespace: ingress-nginx
spec:
template:
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx-admission-patch
spec:
containers:
- args:
- patch
- --webhook-name=ingress-nginx-admission
- --namespace=$(POD_NAMESPACE)
- --patch-mutating=false
- --secret-name=ingress-nginx-admission
- --patch-failure-policy=Fail
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v20230407
imagePullPolicy: IfNotPresent
name: patch
securityContext:
allowPrivilegeEscalation: false
nodeSelector:
kubernetes.io/os: linux
restartPolicy: OnFailure
securityContext:
fsGroup: 2000
runAsNonRoot: true
runAsUser: 2000
serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: nginx
spec:
controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.8.0
name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: ingress-nginx-controller-admission
namespace: ingress-nginx
path: /networking/v1/ingresses
failurePolicy: Fail
matchPolicy: Equivalent
name: validate.nginx.ingress.kubernetes.io
rules:
- apiGroups:
- networking.k8s.io
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- ingresses
sideEffects: None
Compared with the official yml file, the version above contains the following modifications: the container images are replaced with mirrors hosted on registry.cn-hangzhou.aliyuncs.com, hostNetwork: true is set on the controller Deployment, and a nodename: ingress entry is added to its nodeSelector so that the controller Pod lands on a dedicated node.
Save the yml file as ingress-nginx-controller-deploy.yaml. Before deploying, add a label to the w1 node with the following command.
kubectl label node w1 nodename=ingress
The labels of the w1 node can be checked with the command below.
kubectl get node w1 --show-labels
The output is as follows.
NAME STATUS ROLES AGE VERSION LABELS
w1 Ready <none> 47d v1.23.14 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=w1,kubernetes.io/os=linux,nodename=ingress
With this label in place, the Controller Pod will be scheduled onto the w1 node.
Now deploy the Ingress-Nginx Controller with the following command.
kubectl apply -f ingress-nginx-controller-deploy.yaml
A successful deployment prints the following.
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
As shown, the Namespace, Services, Deployment and the other resources related to the Ingress-Nginx Controller have all been created. The Controller Pod lives in the ingress-nginx namespace and can be inspected with the following command.
kubectl get pods -n ingress-nginx -o wide
The output is as follows.
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx-admission-create-hfg27 0/1 Completed 0 113s 10.244.80.200 w2 <none> <none>
ingress-nginx-admission-patch-62l2c 0/1 Completed 0 113s 10.244.190.75 w1 <none> <none>
ingress-nginx-controller-5c7446448f-j4rnp 1/1 Running 0 113s 192.168.52.129 w1 <none> <none>
As expected, it has been scheduled onto the w1 node, so the Controller can now be reached at w1's node IP on port 80 or 443.
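As a quick sanity check from outside the cluster (no Ingress rules exist yet, so the controller's default backend is expected to answer, typically with a 404):
curl http://192.168.52.129/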
All that remains is the routing configuration. In Kubernetes, routing rules are themselves a kind of resource, whose kind is Ingress. The following Ingress yml file is used here.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: learn.k8s.service
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: learn-k8s-service
            port:
              number: 8000
The annotation in the yml above binds this Ingress to the Ingress Nginx Controller, and the rule declares that client requests for learn.k8s.service/ are routed to port 8000 of the Service named learn-k8s-service.
Save the yml file as ingress-nginx-deploy.yaml and deploy it with the following command.
kubectl apply -f ingress-nginx-deploy.yaml
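Once applied, the rule can be confirmed from inside the cluster (in a bare-metal setup like this one, the ADDRESS column may stay empty, which is fine):
kubectl get ingress nginx-ingress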
Also add the following entry to the Windows hosts file so that the domain name resolves.
192.168.52.129 learn.k8s.service
Finally, accessing learn.k8s.service/api/v1/say from outside the cluster with a client tool successfully returns the expected data.
The access flow is illustrated below.
When the client sends a request, the DNS resolver first returns the IP for the domain name, which is the w1 node IP configured in the hosts file. The client then connects to that IP on port 80 (the default HTTP port) and reaches the Ingress Controller we deployed, which listens on ports 80 and 443. Because the Ingress is configured and the request targets learn.k8s.service/, the request is routed to port 8000 of the Service named learn-k8s-service and is finally load-balanced to one of the Pods.
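If you prefer not to edit the hosts file, the same routing can be exercised from any machine that can reach w1 by setting the Host header explicitly:
curl -H "Host: learn.k8s.service" http://192.168.52.129/api/v1/say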