Istio: URL path-based routing (5)

July 15, 2023

Traffic management

Since Istio 1.5, istiod acts as the control plane and distributes configuration to all sidecar proxies and gateways, enabling applications in the mesh to get intelligent load balancing.

Envoy, after light customization, is known in Istio as istio-proxy; it runs inside an application's pod in sidecar mode, wrapping that pod's traffic. Envoy also powers the ingress and egress gateways, which handle inbound and outbound traffic respectively.

In Kubernetes, once Istio is deployed and the namespace is labeled, the sidecar is injected into pods automatically, without manual configuration. Whether injection is manual or automatic, it is performed by the Istio controller.

The ingress gateway, as the unified entry point, is required; the egress gateway is optional.

When an application in a pod sends traffic out, the sidecar handles it; when the same proxy is on the receiving side, it simply forwards the request to the backend application. Traffic from outside the cluster first enters through the ingress gateway and is then forwarded to the backend.

In Istio, the service registry discovers all services: each service name becomes a host and its pods become an upstream cluster. By default, traffic addressed to a host is forwarded to all pods behind that name without any routing conditions. Every sidecar receives the configuration of all services, and each service is converted into a virtual host whose host header is the service name; all traffic matching that host header is forwarded to the backend pods. The Kubernetes Service itself only discovers the pods and does not take part in forwarding the traffic.

Envoy defines a listener component plus an upstream server group called a cluster. To decide where incoming traffic goes, multiple vhosts are defined: the host header selects a virtual host, and route-matching rules (URL and so on) decide which upstream cluster the request is forwarded to. A cluster plays much the same role as an nginx upstream: scheduling, session handling and the like; traffic splitting by percentage is also configured there. The scheduling algorithm is defined by a DestinationRule, the virtual hosts are defined by the hosts of a VirtualService, and external traffic entering through the ingress gateway additionally requires a Gateway CRD. A hand-written sketch of this Envoy structure follows.
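
To make the listener / virtual host / route / cluster relationship concrete, here is a heavily simplified, hand-written Envoy configuration sketch. It is only an illustration: istiod generates far richer configuration over xDS, and the names below simply reuse this article's dpment-a service as placeholders.

static_resources:
  listeners:
  - name: outbound_0.0.0.0_80
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: outbound_http
          route_config:
            virtual_hosts:
            - name: "dpment-a.java-demo.svc.cluster.local:80"   # virtual host selected by the host header
              domains: ["dpment-a", "dpment-a.java-demo.svc.cluster.local"]
              routes:
              - match: { prefix: "/" }                          # URL matching rules live here
                route: { cluster: outbound|80||dpment-a.java-demo.svc.cluster.local }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: outbound|80||dpment-a.java-demo.svc.cluster.local     # behaves much like an nginx upstream
    type: STRICT_DNS                                            # Istio itself would use EDS here
    lb_policy: ROUND_ROBIN                                      # scheduling algorithm
    load_assignment:
      cluster_name: outbound|80||dpment-a.java-demo.svc.cluster.local
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: dpment-a.java-demo.svc.cluster.local, port_value: 80 }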

Writing raw Envoy configuration like this inside Istio would be far too complex, so to configure traffic management we work with a few CRDs instead, such as:

  • Gateway

Introduces external traffic into the mesh. The configuration is not pushed to the whole mesh; it is only distributed to the ingress-gateway pod, and that pod has no sidecar of its own.

  • ServiceEntry

Unified configuration for outbound traffic is defined with a ServiceEntry; it is likewise converted into native Envoy API configuration and is only pushed to the egress gateway to govern outbound traffic (a hedged example appears after this list).

  • VirtualService

As soon as the mesh exists, istiod automatically discovers every Service of the control plane and data plane in the Kubernetes cluster (istio-system plus any labeled namespaces), converts them into Envoy configuration, and pushes it to every sidecar proxy. This push covers everything, so services can already reach one another: each Service is turned into egress listeners in the pod's Envoy, and as long as the Service exists, services can talk to each other through the Envoy-configured listeners, routes and clusters.

Istio creates a listener for every service port in the mesh, and the endpoints it matches are grouped into a cluster.

A VirtualService supplements this in-mesh traffic configuration, extending how one service reaches another cluster, including the scheduling between them and other advanced features such as:

  • routing rules and subsets
  • URL matching
  • weights, and so on

A VirtualService is essentially the virtual hosts and route configuration applied on the listeners.

  • DestinationRule

A DestinationRule points the configured routes at a specific backend cluster, and on that cluster specifies traffic-distribution mechanisms such as load balancing and outlier detection.

Once applied, these configurations are pushed to every sidecar in the mesh, mostly on the outbound side.

DestinationRule and VirtualService are extensions of the generated configuration, so they are not needed every time; you only add them when the native defaults cannot satisfy a requirement, for example when advanced features are needed.
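
As mentioned in the ServiceEntry item above, a minimal hedged example of registering an external service in the mesh's registry might look like the following; the host api.external.example.com is purely hypothetical and not part of this article's environment.

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-api          # hypothetical name, for illustration only
  namespace: java-demo
spec:
  hosts:
  - api.external.example.com  # hypothetical external dependency; becomes a routable host in the mesh
  location: MESH_EXTERNAL     # the endpoints live outside the mesh
  resolution: DNS             # resolve the host via DNS
  ports:
  - number: 443
    name: https
    protocol: TLS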

To configure this kind of traffic management we need a VirtualService and, where required, a DestinationRule. In practice we at least need the cluster to be reachable from outside, then configure the ingress gateway and specify the virtual hosts through a VirtualService and a DestinationRule. A hedged sketch of how the two objects fit together follows.
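
The following is only a sketch of how the two objects usually complement each other; the subset labels, the LEAST_CONN policy, the outlier-detection numbers and the 99/1 split are illustrative and are not used later in this article.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dpment-sketch              # hypothetical, for illustration only
  namespace: java-demo
spec:
  host: dpment
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN           # scheduling algorithm applied to the cluster
    outlierDetection:              # passive health checking / ejection of bad endpoints
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
  subsets:                         # subsets group endpoints by pod labels
  - name: v1
    labels:
      version: v0.1
  - name: v2
    labels:
      version: v0.2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment-sketch              # hypothetical, for illustration only
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - route:
    - destination:
        host: dpment
        subset: v1
      weight: 99                   # 99% of the traffic stays on the old version
    - destination:
        host: dpment
        subset: v2
      weight: 1                    # 1% goes to the new version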

image-20220717161355400.png

External inbound traffic reaches the inside of the cluster through the ingress gateway; this is north-south traffic.

  • The ingress vhosts are defined by the Gateway,
  • including the "host" that the target traffic accesses and the port the virtual host listens on.

Traffic inside the cluster only flows between sidecars, i.e. east-west traffic, and is mostly handled on the outbound (egress) side of the sidecars.

  • A VirtualService defines the listeners for the sidecar Envoy (traffic-routing mechanisms and so on).
  • A DestinationRule defines the clusters for the sidecar Envoy (including endpoint discovery and so on).

Whether for releases, testing or anything else, in-mesh traffic is shaped on the outbound Envoy of the side that initiates the request, acting as its forward proxy; both the configuration and the movement of in-mesh traffic happen in the data plane. The control plane only defines and pushes the configuration policy.

To define the corresponding configuration on the egress or ingress gateway, it has to be expressed through a VirtualService.

1. URL path-based routing

Between the new version and the old one, we want one percent of the traffic on the new version while 99 percent stays on the old one.

Let's rework the manifests. I have prepared two pods; opening the root path displays the version number:

nginx:v1.0

linuxea-dpment-linuxea-x-xxxxx version number 1.0

nginx:v1.1

linuxea-dpment-linuxea-x-xxxxx version number 1.1

/version/ shows the same information.

Prepare pods of the two versions, each serving both the root path and /version/ with different version numbers, then configure and test.

registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0 
registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.1

We will use the two image versions above for the test.
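
The two images are prebuilt; they also print the pod hostname and IP addresses via a small startup script, which the sketch below does not reproduce. As a rough, hedged approximation of what they serve, a stock nginx plus a ConfigMap could expose both / and /version/ with the version string:

apiVersion: v1
kind: ConfigMap
metadata:
  name: version-page-v1            # hypothetical, for illustration only
  namespace: java-demo
data:
  index.html: |
    version number 1.0
  version.html: |
    version number 1.0
---
apiVersion: v1
kind: Pod
metadata:
  name: version-page-demo          # hypothetical, for illustration only
  namespace: java-demo
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    configMap:
      name: version-page-v1
      items:
      - key: index.html
        path: index.html           # served at /
      - key: version.html
        path: version/index.html   # served at /version/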

1.1 dpment-a

To be discovered by Istio we must create a Service, and then a dpment-a Deployment whose pods back it.

The manifest is as follows:

---
apiVersion: v1
kind: Service
metadata:
  name: dpment-a
  namespace: java-demo
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: linuxea_app
    version: v0.1
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea-a
  namespace: java-demo  
spec:
  replicas: 3
  selector:
    matchLabels:
      app: linuxea_app
      version: v0.1
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v0.1
    spec:
      containers:
      - name: nginx-a
        # imagePullPolicy: Always
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0 
        ports:
        - name: http
          containerPort: 80

Now we can use commands to inspect how the newly created pods appear in Istio.

Get the name of one of the pods:

(base) [root@linuxea.com test]# INGS=$(kubectl -n java-demo get pod -l app=linuxea_app -o jsonpath={.items[0].metadata.name})
(base) [root@linuxea.com test]# echo $INGS
dpment-linuxea-a-68dc49d5d-c9pcb

Check the proxy-status:

(base) [root@linuxea.com test]# istioctl proxy-status
NAME                                                   CLUSTER        CDS        LDS        EDS        RDS          ECDS         ISTIOD                      VERSION
dpment-linuxea-a-68dc49d5d-c9pcb.java-demo             Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
dpment-linuxea-a-68dc49d5d-h6v6v.java-demo             Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
dpment-linuxea-a-68dc49d5d-svl52.java-demo             Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
istio-egressgateway-65b46d7874-xdjkr.istio-system      Kubernetes     SYNCED     SYNCED     SYNCED     NOT SENT     NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
istio-ingressgateway-559d4ffc58-7rgft.istio-system     Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
sleep-557747455f-46jf5.java-demo                       Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1

Look at the routes for port 80; dpment-a has been created:

(base) [root@linuxea.com test]# istioctl proxy-config routes $INGS.java-demo --name 80
NAME     DOMAINS                                             MATCH     VIRTUAL SERVICE
80       argocd-server.argocd, 10.98.127.60                  /*        
80       dpment-a, dpment-a.java-demo + 1 more...            /*        
80       ingress-nginx.ingress-nginx, 10.99.195.253          /*        
80       istio-egressgateway.istio-system, 10.97.213.128     /*        
80       istio-ingressgateway.istio-system, 10.97.154.56     /*        
80       kuboard.kube-system, 10.97.104.136                  /*        
80       skywalking-ui.skywalking, 10.104.119.238            /*        
80       sleep, sleep.java-demo + 1 more...                  /*        
80       tracing.istio-system, 10.104.76.74                  /*        
80       web-nginx.test, 10.104.18.194                       /*   

The cluster has also been discovered:

(base) [root@linuxea.com test]# istioctl proxy-config cluster $INGS.java-demo | grep dpment-a
dpment-a.java-demo.svc.cluster.local                                   80        -          outbound      EDS     

The backend IPs are visible in the endpoints:

(base) [root@linuxea.com test]# istioctl proxy-config endpoints $INGS.java-demo  | grep dpment-a
130.130.0.3:80                                          HEALTHY       OK                outbound|80||dpment-a.java-demo.svc.cluster.local
130.130.0.4:80                                          HEALTHY       OK                outbound|80||dpment-a.java-demo.svc.cluster.local
130.130.1.119:80                                        HEALTHY       OK                outbound|80||dpment-a.java-demo.svc.cluster.local

Or filter by cluster:

(base) [root@linuxea.com test]# istioctl proxy-config endpoints $INGS.java-demo  --cluster "outbound|80||dpment-a.java-demo.svc.cluster.local"
ENDPOINT             STATUS      OUTLIER CHECK     CLUSTER
130.130.0.3:80       HEALTHY     OK                outbound|80||dpment-a.java-demo.svc.cluster.local
130.130.0.4:80       HEALTHY     OK                outbound|80||dpment-a.java-demo.svc.cluster.local
130.130.1.119:80     HEALTHY     OK                outbound|80||dpment-a.java-demo.svc.cluster.local

These IPs are the pod IPs:

(base) [root@linuxea.com test]# kubectl -n java-demo get pod -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
dpment-linuxea-a-68dc49d5d-c9pcb   2/2     Running   0          30m   130.130.0.4     master2   <none>           <none>
dpment-linuxea-a-68dc49d5d-h6v6v   2/2     Running   0          31m   130.130.0.3     master2   <none>           <none>
dpment-linuxea-a-68dc49d5d-svl52   2/2     Running   0          30m   130.130.1.119   k8s-03   <none>           <none>
a. Inspect

Next we run a test pod.

This pod will also be joined to the Istio mesh:

kubectl run cli --image=marksugar/alpine:netools -it --rm --restart=Never --command -- /bin/bash

As follows:

(base) [root@linuxea.com test]# kubectl -n java-demo run cli --image=marksugar/alpine:netools -it --rm --restart=Never --command -- /bin/bash
If you don't see a command prompt, try pressing enter.
bash-4.4# 

Access it via the Service name:

bash-4.4# curl dpment-a
linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0
bash-4.4# curl dpment-a
linuxea-dpment-linuxea-a-68dc49d5d-svl52.com-127.0.0.1/8 130.130.1.119/24 version number 1.0
bash-4.4# curl dpment-a
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0

The listener ports are also present inside this pod:

bash-4.4# ss -tlnpp
State               Recv-Q               Send-Q                              Local Address:Port                               Peer Address:Port               
LISTEN              0                    128                                       0.0.0.0:15021                                   0.0.0.0:*                  
LISTEN              0                    128                                       0.0.0.0:15021                                   0.0.0.0:*                  
LISTEN              0                    128                                       0.0.0.0:15090                                   0.0.0.0:*                  
LISTEN              0                    128                                       0.0.0.0:15090                                   0.0.0.0:*                  
LISTEN              0                    128                                     127.0.0.1:15000                                   0.0.0.0:*                  
LISTEN              0                    128                                       0.0.0.0:15001                                   0.0.0.0:*                  
LISTEN              0                    128                                       0.0.0.0:15001                                   0.0.0.0:*                  
LISTEN              0                    128                                     127.0.0.1:15004                                   0.0.0.0:*                  
LISTEN              0                    128                                       0.0.0.0:15006                                   0.0.0.0:*                  
LISTEN              0                    128                                       0.0.0.0:15006                                   0.0.0.0:*                  
LISTEN              0                    128                                             *:15020                                         *:*       

Now we can filter the listeners on port 80 to view them:

bash-4.4# curl -s 127.0.0.1:15000/listeners | grep 80
10.102.80.102_10257::10.102.80.102:10257
0.0.0.0_8080::0.0.0.0:8080
0.0.0.0_80::0.0.0.0:80
10.104.119.238_80::10.104.119.238:80
10.109.18.63_8083::10.109.18.63:8083
0.0.0.0_11800::0.0.0.0:11800
10.104.18.194_80::10.104.18.194:80
10.96.124.32_12800::10.96.124.32:12800
10.103.47.163_8080::10.103.47.163:8080
0.0.0.0_8060::0.0.0.0:8060
10.96.59.20_8084::10.96.59.20:8084
10.106.152.2_8080::10.106.152.2:8080
10.96.171.119_8080::10.96.171.119:8080
10.96.132.151_8080::10.96.132.151:8080
10.99.185.170_8080::10.99.185.170:8080
10.105.132.58_8082::10.105.132.58:8082
10.96.59.20_8081::10.96.59.20:8081
0.0.0.0_8085::0.0.0.0:8085

View the clusters:

bash-4.4# curl -s 127.0.0.1:15000/clusters|grep  dpment-a
outbound|80||dpment-a.java-demo.svc.cluster.local::observability_name::outbound|80||dpment-a.java-demo.svc.cluster.local
outbound|80||dpment-a.java-demo.svc.cluster.local::default_priority::max_connections::4294967295
outbound|80||dpment-a.java-demo.svc.cluster.local::default_priority::max_pending_requests::4294967295
outbound|80||dpment-a.java-demo.svc.cluster.local::default_priority::max_requests::4294967295
outbound|80||dpment-a.java-demo.svc.cluster.local::default_priority::max_retries::4294967295
outbound|80||dpment-a.java-demo.svc.cluster.local::high_priority::max_connections::1024
outbound|80||dpment-a.java-demo.svc.cluster.local::high_priority::max_pending_requests::1024
outbound|80||dpment-a.java-demo.svc.cluster.local::high_priority::max_requests::1024
outbound|80||dpment-a.java-demo.svc.cluster.local::high_priority::max_retries::3
outbound|80||dpment-a.java-demo.svc.cluster.local::added_via_api::true
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::cx_active::2
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::cx_connect_fail::0
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::cx_total::2
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::rq_active::0
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::rq_error::0
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::rq_success::2
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::rq_timeout::0
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::rq_total::2
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::hostname::
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::health_flags::healthy
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::weight::1
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::region::
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::zone::
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::sub_zone::
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::canary::false
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::priority::0
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::success_rate::-1
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.3:80::local_origin_success_rate::-1
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::cx_active::1
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::cx_connect_fail::0
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::cx_total::1
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::rq_active::0
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::rq_error::0
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::rq_success::1
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::rq_timeout::0
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::rq_total::1
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::hostname::
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::health_flags::healthy
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::weight::1
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::region::
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::zone::
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::sub_zone::
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::canary::false
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::priority::0
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::success_rate::-1
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.0.4:80::local_origin_success_rate::-1
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::cx_active::1
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::cx_connect_fail::0
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::cx_total::1
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::rq_active::0
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::rq_error::0
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::rq_success::1
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::rq_timeout::0
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::rq_total::1
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::hostname::
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::health_flags::healthy
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::weight::1
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::region::
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::zone::
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::sub_zone::
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::canary::false
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::priority::0
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::success_rate::-1
outbound|80||dpment-a.java-demo.svc.cluster.local::130.130.1.119:80::local_origin_success_rate::-1

Now, from the pod we just ran, we request dpment-a with curl.

dpment-a is natively implemented by the Kubernetes Service, but once Istio is involved, the forwarding is delegated to Istio.

while true;do curl dpment-a;sleep 0.5;done
(base) [root@linuxea.com ~]# kubectl -n java-demo run cli --image=marksugar/alpine:netools -it --rm --restart=Never --command -- /bin/bash
If you don't see a command prompt, try pressing enter.
bash-4.4# while true;do curl dpment-a;sleep 0.5;done
linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0
linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0
linuxea-dpment-linuxea-a-68dc49d5d-svl52.com-127.0.0.1/8 130.130.1.119/24 version number 1.0
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0
linuxea-dpment-linuxea-a-68dc49d5d-svl52.com-127.0.0.1/8 130.130.1.119/24 version number 1.0
linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0
linuxea-dpment-linuxea-a-68dc49d5d-svl52.com-127.0.0.1/8 130.130.1.119/24 version number 1.0
linuxea-dpment-linuxea-a-68dc49d5d-svl52.com-127.0.0.1/8 130.130.1.119/24 version number 1.0
linuxea-dpment-linuxea-a-68dc49d5d-h6v6v.com-127.0.0.1/8 130.130.0.3/24 version number 1.0
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0
linuxea-dpment-linuxea-a-68dc49d5d-c9pcb.com-127.0.0.1/8 130.130.0.4/24 version number 1.0

kiali

image-20220802111406000.png

The cli pod requests dpment-a; its own sidecar Envoy (egress listener) schedules the request. The request is handled first on the cli pod's sidecar: the outbound traffic hits the egress listener for the dpment-a service, and requests for this host are dispatched by that listener's cluster to the backend, which responds.

b. ingress-gw

To make this accessible from outside, we need to configure the ingress gateway.

The configuration is as follows:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: dpment-gateway
  namespace: istio-system           # must be the namespace where the ingress gateway pod runs
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "dpment.linuxea.com"
    - "dpment1.linuxea.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo
spec:
  hosts:
  - "dpment.linuxea.com"                     # 对应于gateways/proxy-gateway
  - "dpment1.linuxea.com"      
  gateways:
  - istio-system/dpment-gateway      # these definitions apply only on the Ingress Gateway
  #- mesh
  http:
  - name: dpment-a
    route:
    - destination:
        host: dpment-a
---

After applying, just resolve the domain names locally and test.

image-20220802145534827.png

1.2 dpment-b

Now we create a dpment-b Service and its Deployment:

---
apiVersion: v1
kind: Service
metadata:
  name: dpment-b
  namespace: java-demo
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: linuxea_app
    version: v0.2
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea-b
  namespace: java-demo  
spec:
  replicas: 2
  selector:
    matchLabels:
      app: linuxea_app
      version: v0.2
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v0.2
    spec:
      containers:
      - name: nginx-b
        # imagePullPolicy: Always
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v2.0 
        ports:
        - name: http
          containerPort: 80

Created:

(base) [root@linuxea.com test]# kubectl -n java-demo get pod,svc
NAME                                    READY   STATUS    RESTARTS   AGE
pod/cli                                 2/2     Running   0          5h59m
pod/dpment-linuxea-a-68dc49d5d-c9pcb    2/2     Running   0          23h
pod/dpment-linuxea-a-68dc49d5d-h6v6v    2/2     Running   0          23h
pod/dpment-linuxea-a-68dc49d5d-svl52    2/2     Running   0          23h
pod/dpment-linuxea-b-59b448f49c-j7gk9   2/2     Running   0          29m
pod/dpment-linuxea-b-59b448f49c-nfkfh   2/2     Running   0          29m

NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/dpment-a   ClusterIP   10.107.148.63    <none>        80/TCP    23h
service/dpment-b   ClusterIP   10.109.153.119   <none>        80/TCP    29m

In detail:

(base) [root@linuxea.com test]# kubectl -n java-demo get pod,svc -o wide
NAME                                    READY   STATUS    RESTARTS   AGE     IP              NODE     NOMINATED NODE   READINESS GATES
pod/cli                                 2/2     Running   0          5h59m   130.130.0.9     master2   <none>           <none>
pod/dpment-linuxea-a-68dc49d5d-c9pcb    2/2     Running   0          23h     130.130.0.4     master2   <none>           <none>
pod/dpment-linuxea-a-68dc49d5d-h6v6v    2/2     Running   0          23h     130.130.0.3     master2   <none>           <none>
pod/dpment-linuxea-a-68dc49d5d-svl52    2/2     Running   0          23h     130.130.1.119   k8s-03   <none>           <none>
pod/dpment-linuxea-b-59b448f49c-j7gk9   2/2     Running   0          29m     130.130.1.121   k8s-03   <none>           <none>
pod/dpment-linuxea-b-59b448f49c-nfkfh   2/2     Running   0          29m     130.130.0.13    master2   <none>           <none>

NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/dpment-a   ClusterIP   10.107.148.63    <none>        80/TCP    23h   app=linuxea_app,version=v0.1
service/dpment-b   ClusterIP   10.109.153.119   <none>        80/TCP    29m   app=linuxea_app,version=v0.2

1.3 dpment

Now that dpment-a and dpment-b exist, the corresponding listeners, clusters, routes and endpoints have been generated. Next we create a dpment Service.

Then we create a dpment VirtualService to do URL-based forwarding inside the mesh:

  • requests for /version/ are rewritten to / and forwarded to dpment-b;
  • everything else is forwarded to dpment-a.

The configuration is as follows:

---
apiVersion: v1
kind: Service
metadata:
  name: dpment
  namespace: java-demo    
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: dpment
  type: ClusterIP
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment
  namespace: java-demo  
spec:
  hosts:
  - dpment
  http:
  - name: version
    match:
    - uri:
        prefix: /version/
    rewrite:
      uri: /
    route:
    - destination:
        host: dpment-b
  - name: default
    route:
    - destination:
        host: dpment-a

Key configuration, annotated:

spec:
  hosts:
  - dpment  # same as the Service name
  http:     # layer-7 routing rules
  - name: version
    match:
    - uri:  # the URL path of the request
        prefix: /version/ # if the path starts with /version/
    rewrite: # rewrite
      uri: /  # paths starting with /version/ are rewritten to /
    route:
    - destination:
        host: dpment-b # ...and the rewritten request is sent to the dpment-b host
  - name: default  # anything that does not match /version/ falls through to default and is routed to dpment-a
    route:
    - destination:
        host: dpment-a

We have defined a routing rule: requests whose URL starts with /version/ are rewritten to / and routed to dpment-b; everything else is routed to dpment-a.
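
The rule above only uses a prefix match. If finer-grained URL path routing were needed, the same VirtualService mechanism also supports exact and regex matches, evaluated top to bottom. The following is a hedged sketch; the /healthz and /v2/ paths are hypothetical and not served by the demo images.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment-uri-variants        # hypothetical, for illustration only
  namespace: java-demo
spec:
  hosts:
  - dpment
  http:
  - name: health                   # exact match: only the literal /healthz path
    match:
    - uri:
        exact: /healthz
    route:
    - destination:
        host: dpment-a
  - name: v2-paths                 # regex match: anything under /v2/
    match:
    - uri:
        regex: ^/v2/.*$
    route:
    - destination:
        host: dpment-b
  - name: default                  # no match block: catch-all rule
    route:
    - destination:
        host: dpment-a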

After creating dpment, there is now one more svc:

(base) [root@linuxea.com test]# kubectl -n java-demo get svc
NAME       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
dpment     ClusterIP   10.96.155.138    <none>        80/TCP    6s
dpment-a   ClusterIP   10.107.148.63    <none>        80/TCP    23h
dpment-b   ClusterIP   10.109.153.119   <none>        80/TCP    51m

And a VirtualService:

(base) [root@master2 ~]# kubectl -n java-demo get vs
NAME     GATEWAYS   HOSTS        AGE
dpment              ["dpment"]   19s

Now check the routes, as shown below:

image-20220802174652262.png

(base) [root@linuxea.com ~]# istioctl proxy-config routes $IMP.java-demo | grep 80
web-nginx.test.svc.cluster.local:80                                       *                                                                    /*                     
8060                                                                      webhook-dingtalk.monitoring, 10.107.177.232                          /*                     
8080                                                                      argocd-applicationset-controller.argocd, 10.96.132.151               /*                     
8080                                                                      cloud-his-gateway-nodeport.default, 10.96.171.119                    /*                     
8080                                                                      cloud-his-gateway.default, 10.103.47.163                             /*                     
8080                                                                      devops-system-nodeport.default, 10.106.152.2                         /*                     
8080                                                                      devops-system.default, 10.99.185.170                                 /*                     
8080                                                                      jenkins-master-service.devops, 10.100.245.168                        /*                     
8080                                                                      jenkins-service.jenkins, 10.98.131.142                               /*                     
8085                                                                      cloud-base-uaa.devops, 10.109.0.226                                  /*                     
80                                                                        argocd-server.argocd, 10.98.127.60                                   /*                     
80                                                                        dpment-a, dpment-a.java-demo + 1 more...                             /*                     
80                                                                        dpment-b, dpment-b.java-demo + 1 more...                             /*                     
80                                                                        dpment, dpment.java-demo + 1 more...                                 /version/*             dpment.java-demo
80                                                                        dpment, dpment.java-demo + 1 more...                                 /*                     dpment.java-demo
80                                                                        ingress-nginx.ingress-nginx, 10.99.195.253                           /*                     
80                                                                        istio-egressgateway.istio-system, 10.97.213.128                      /*                     
80                                                                        istio-ingressgateway.istio-system, 10.97.154.56                      /*                     
80                                                                        kuboard.kube-system, 10.97.104.136                                   /*                     
80                                                                        skywalking-ui.skywalking, 10.104.119.238                             /*                     
80                                                                        tracing.istio-system, 10.104.76.74                                   /*                     
80                                                                        web-nginx.test, 10.104.18.194                                        /*                     
argocd-applicationset-controller.argocd.svc.cluster.local:8080            *                                                                    /*                     
devops-system-nodeport.default.svc.cluster.local:8080                     *                                                                    /*                     
argocd-metrics.argocd.svc.cluster.local:8082                              *               

We test from a pod in java-demo:

kubectl -n java-demo run cli --image=marksugar/alpine:netools -it --rm --restart=Never --command -- /bin/bash

Again start a cli pod for testing.

Plain requests go to the v0.1 workload; requests whose URL carries /version/ go to the v0.2 workload.

bash-4.4# while true;do curl dpment ; sleep 0.$RANDOM;done
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-lhkrb.com-127.0.0.1/8 130.130.1.122/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-b-55694cb7f5-576qs.com-127.0.0.1/8 130.130.0.15/24 version number 2.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0
linuxea-dpment-linuxea-a-777847fd74-fsnsv.com-127.0.0.1/8 130.130.0.19/24 version number 1.0

After looping over the requests, we can see the different states on the kiali page:

 while true;do curl dpment; curl dpment/version/;sleep 0.$RANDOM;done

Open the web page to observe:

istio-20220803.gif
