istio 1.14.1 Kiali configuration (4)

July 15, 2023

4. Kiali configuration

The ingress and egress gateways handle inbound and outbound traffic respectively, and most of the related behavior has to be set up by defining resources.

When a pod sends traffic outward, the traffic first reaches the sidecar, and the sidecar holds the corresponding configuration. This configuration includes:

  • how many services exist in the cluster
  • which pods back each service
  • what share of traffic should go to each service

VirtualService exists to define exactly this, and it is similar to an nginx virtual host. A VirtualService defines a virtual host (or a path) for each service; once user traffic arrives at the virtual host, the request is sent to an "upstream server" according to the match. On top of Istio this relies on the Kubernetes Service: the virtual host's name is the Service's name, and how many upstream endpoints sit behind the virtual host is discovered through the Kubernetes Service, which finds the pods. In Istio, each such pod is called a Destination, i.e. a target.

Each service is matched by a host name: client traffic carrying that host header is matched by the service and routed to the targets. How many pods back a target depends on how many pods the Kubernetes service has; in Envoy, these pods form a cluster.

A VirtualService defines how many virtual hosts there are and the rules by which traffic sent to a host is matched and routed. The Destination identifies the pods in the backend cluster, and if those pods need to be grouped, a DestinationRule defines the subsets.

For example, suppose a host serves service A with match rule MatchA: traffic to the root path is sent to all pods backing service A, and how it is load balanced among them depends on the Istio and Envoy configuration.

For this traffic, the pods can be split into two subsets, v1 and v2, with 99% of traffic going to v1 and 1% to v2; the 1% can additionally be given timeouts, retries, fault injection, and so on.
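
A minimal sketch of such a split, assuming the dpment service used later in this article; the v2 subset and its version: v0.2 label are illustrative, not part of the original setup:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment-split
spec:
  hosts:
  - dpment
  http:
  - timeout: 3s          # applies to the whole route
    retries:
      attempts: 2        # retry failed requests up to twice
    route:
    - destination:
        host: dpment
        subset: v1
      weight: 99         # 99% of traffic
    - destination:
        host: dpment
        subset: v2
      weight: 1          # 1% canary traffic
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dpment-subsets
spec:
  host: dpment
  subsets:
  - name: v1
    labels:
      version: v0.1
  - name: v2
    labels:
      version: v0.2      # hypothetical second version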

To configure traffic management, we need a VirtualService, and we also need to define a DestinationRule. In practice, we at least need the cluster to be reachable from outside, then configure the ingress gateway, and then the VirtualService and DestinationRule.

image-20220717161355400.png

4.1 Testing with a pod

From earlier we already have a java-demo.

Now create another pod that carries both the app and version labels:

    app: linuxea_app
    version: v0.1

As follows:

---
apiVersion: v1
kind: Service
metadata:
  name: dpment
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: linuxea_app
    version: v0.1
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linuxea_app
      version: v0.1
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v0.1
    spec:
      containers:
      - name: nginx-a
        image: marksugar/nginx:1.14.a
        ports:
        - name: http
          containerPort: 80         

apply

> kubectl.exe -n java-demo apply -f .\linuxeav1.yaml
service/dpment created
deployment.apps/dpment-linuxea created

istioctl ps shows the sidecar-injected pods and the gateways in the mesh:

> kubectl.exe -n java-demo get svc
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE  
dpment      ClusterIP   10.68.212.243   <none>        80/TCP           20h  
java-demo   NodePort    10.68.4.28      <none>        8080:31181/TCP   6d21h
> kubectl.exe -n java-demo get pod
NAME                              READY   STATUS    RESTARTS      AGE
dpment-linuxea-54b8b64c75-b6mqj   2/2     Running   2 (43m ago)   20h
java-demo-79485b6d57-rd6bm        2/2     Running   2 (43m ago)   42h
[root@linuxea-48 ~]# istioctl  ps
NAME                                                   CLUSTER        CDS        LDS        EDS        RDS          ECDS         ISTIOD                     VERSION
dpment-linuxea-54b8b64c75-b6mqj.java-demo              Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-56d9c5557-tffdv     1.14.1
istio-egressgateway-7fcb98978c-8t685.istio-system      Kubernetes     SYNCED     SYNCED     SYNCED     NOT SENT     NOT SENT     istiod-56d9c5557-tffdv     1.14.1
istio-ingressgateway-55b6cffcbc-9rn99.istio-system     Kubernetes     SYNCED     SYNCED     SYNCED     NOT SENT     NOT SENT     istiod-56d9c5557-tffdv     1.14.1
java-demo-79485b6d57-rd6bm.java-demo                   Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-56d9c5557-tffdv     1.14.1

4.2 Configuring a LoadBalancer

We add a VIP, 172.16.100.110/24, to simulate a LoadBalancer address.

[root@linuxea-11 ~]# ip addr add 172.16.100.110/24 dev eth0
[root@linuxea-11 ~]# ip a | grep 172.16.100.110
    inet 172.16.100.110/24 scope global secondary eth0
[root@linuxea-11 ~]# ping 172.16.100.110
PING 172.16.100.110 (172.16.100.110) 56(84) bytes of data.
64 bytes from 172.16.100.110: icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from 172.16.100.110: icmp_seq=2 ttl=64 time=0.017 ms
64 bytes from 172.16.100.110: icmp_seq=3 ttl=64 time=0.024 ms
64 bytes from 172.16.100.110: icmp_seq=4 ttl=64 time=0.037 ms
^C
--- 172.16.100.110 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3081ms
rtt min/avg/max/mdev = 0.017/0.027/0.037/0.007 ms

Then edit the Service with kubectl -n istio-system edit svc istio-ingressgateway:

  clusterIP: 10.68.113.92
  externalIPs:
  - 172.16.100.110
  clusterIPs:
  - 10.68.113.92
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
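
If you prefer not to edit interactively, the same change can be applied as a patch (a sketch equivalent to the edit above):

kubectl -n istio-system patch svc istio-ingressgateway \
  -p '{"spec":{"externalIPs":["172.16.100.110"]}}'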

As shown below:

image-20220716170521723.png

Once modified, it can be verified with:

[root@linuxea-48 /usr/local/istio/samples/addons]# kubectl -n istio-system get svc
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                                                                      AGE
grafana                ClusterIP      10.68.57.153    <none>           3000/TCP                                                                     45h
istio-egressgateway    ClusterIP      10.68.66.165    <none>           80/TCP,443/TCP                                                               2d16h
istio-ingressgateway   LoadBalancer   10.68.113.92    172.16.100.110   15021:31787/TCP,80:32368/TCP,443:30603/TCP,31400:30435/TCP,15443:32099/TCP   2d16h
istiod                 ClusterIP      10.68.7.43      <none>           15010/TCP,15012/TCP,443/TCP,15014/TCP                                        2d16h
jaeger-collector       ClusterIP      10.68.50.134    <none>           14268/TCP,14250/TCP,9411/TCP                                                 45h
kiali                  ClusterIP      10.68.203.141   <none>           20001/TCP,9090/TCP                                                           45h
prometheus             ClusterIP      10.68.219.101   <none>           9090/TCP                                                                     45h
tracing                ClusterIP      10.68.193.43    <none>           80/TCP,16685/TCP                                                             45h
zipkin                 ClusterIP      10.68.101.144   <none>           9411/TCP                                                                     45h

4.3.1 NodePort

Once istio-ingressgateway is switched to NodePort, it has to be accessed via ip:port.

NodePort is an extension of ClusterIP, but in a cloud environment you would typically want a LoadBalancer instead.

In fact, a real LoadBalancer IP is not a manually assigned IP as in the example above, but a highly available one.

Now switch to NodePort.

Edit with kubectl.exe -n istio-system edit svc istio-ingressgateway and change it to type: NodePort, as follows:

  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
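
The type switch can likewise be done non-interactively (a sketch):

kubectl -n istio-system patch svc istio-ingressgateway \
  -p '{"spec":{"type":"NodePort"}}'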

A random NodePort is assigned for each port:

PS C:\Users\usert> kubectl.exe -n istio-system get svc
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                      AGE
grafana                ClusterIP   10.110.43.38     <none>        3000/TCP                                                                     14d
istio-egressgateway    ClusterIP   10.97.213.128    <none>        80/TCP,443/TCP                                                               14d
istio-ingressgateway   NodePort    10.97.154.56     <none>        15021:32514/TCP,80:30142/TCP,443:31060/TCP,31400:30785/TCP,15443:32082/TCP   14d
istiod                 ClusterIP   10.98.150.70     <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                        14d
jaeger-collector       ClusterIP   10.111.33.218    <none>        14268/TCP,14250/TCP,9411/TCP                                                 14d
kiali                  ClusterIP   10.111.90.166    <none>        20001/TCP,9090/TCP                                                           14d
prometheus             ClusterIP   10.99.141.97     <none>        9090/TCP                                                                     14d
tracing                ClusterIP   10.104.76.74     <none>        80/TCP,16685/TCP                                                             14d
zipkin                 ClusterIP   10.100.238.112   <none>        9411/TCP                                                                     14d

4.3.2 Exposing Kiali externally

To expose Kiali outside the cluster, we need to define and create Kiali VirtualService, Gateway, and DestinationRule resource objects.

image-20220717165116105.png

When the installation completes, a set of CRDs is installed by default:

(base) [root@linuxea-master1 ~]# kubectl -n istio-system get crds | grep istio
authorizationpolicies.security.istio.io     2022-07-14T02:28:00Z
destinationrules.networking.istio.io        2022-07-14T02:28:00Z
envoyfilters.networking.istio.io            2022-07-14T02:28:00Z
gateways.networking.istio.io                2022-07-14T02:28:00Z
istiooperators.install.istio.io             2022-07-14T02:28:00Z
peerauthentications.security.istio.io       2022-07-14T02:28:00Z
proxyconfigs.networking.istio.io            2022-07-14T02:28:00Z
requestauthentications.security.istio.io    2022-07-14T02:28:00Z
serviceentries.networking.istio.io          2022-07-14T02:28:00Z
sidecars.networking.istio.io                2022-07-14T02:28:00Z
telemetries.telemetry.istio.io              2022-07-14T02:28:00Z
virtualservices.networking.istio.io         2022-07-14T02:28:00Z
wasmplugins.extensions.istio.io             2022-07-14T02:28:00Z
workloadentries.networking.istio.io         2022-07-14T02:28:00Z
workloadgroups.networking.istio.io          2022-07-14T02:28:00Z

along with the API resources:

(base) [root@linuxea-master1 ~]# kubectl -n istio-system api-resources | grep istio
wasmplugins                                          extensions.istio.io            true         WasmPlugin
istiooperators                    iop,io             install.istio.io               true         IstioOperator
destinationrules                  dr                 networking.istio.io            true         DestinationRule
envoyfilters                                         networking.istio.io            true         EnvoyFilter
gateways                          gw                 networking.istio.io            true         Gateway
proxyconfigs                                         networking.istio.io            true         ProxyConfig
serviceentries                    se                 networking.istio.io            true         ServiceEntry
sidecars                                             networking.istio.io            true         Sidecar
virtualservices                   vs                 networking.istio.io            true         VirtualService
workloadentries                   we                 networking.istio.io            true         WorkloadEntry
workloadgroups                    wg                 networking.istio.io            true         WorkloadGroup
authorizationpolicies                                security.istio.io              true         AuthorizationPolicy
peerauthentications               pa                 security.istio.io              true         PeerAuthentication
requestauthentications            ra                 security.istio.io              true         RequestAuthentication
telemetries                       telemetry          telemetry.istio.io             true         Telemetry

They can also be filtered by API group with --api-group=networking.istio.io:

(base) [root@linuxea-master1 ~]# kubectl -n istio-system api-resources --api-group=networking.istio.io
NAME               SHORTNAMES   APIGROUP              NAMESPACED   KIND
destinationrules   dr           networking.istio.io   true         DestinationRule
envoyfilters                    networking.istio.io   true         EnvoyFilter
gateways           gw           networking.istio.io   true         Gateway
proxyconfigs                    networking.istio.io   true         ProxyConfig
serviceentries     se           networking.istio.io   true         ServiceEntry
sidecars                        networking.istio.io   true         Sidecar
virtualservices    vs           networking.istio.io   true         VirtualService
workloadentries    we           networking.istio.io   true         WorkloadEntry
workloadgroups     wg           networking.istio.io   true         WorkloadGroup
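
The short names in the table (gw, vs, dr, se, ...) work anywhere a resource type is expected, for example:

kubectl -n istio-system get gw,vs,dr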

How each of these is defined can be seen with kubectl explain, for example the Gateway's server configuration:

kubectl explain gw.spec.servers

Define the Gateway

  • Gateway

The label selector matches istio-ingressgateway:

  selector:
    app: istio-ingressgateway

binding it to the istio-ingressgateway, as follows:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: kiali-gateway
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 20001
      name: http-kiali
      protocol: HTTP
    hosts:
    - "kiali.linuxea.com"
---

Once created, it can be checked with istioctl proxy-status.

Now we get the istio-ingressgateway pod name via its label:

kubectl -n istio-system get pod -l app=istio-ingressgateway -o jsonpath={.items[0].metadata.name} && echo
(base) [root@linuxea-master1 ~]# kubectl -n istio-system get pod -l app=istio-ingressgateway -o jsonpath={.items[0].metadata.name} && echo
istio-ingressgateway-559d4ffc58-7rgft

and store it in a variable for reuse:

(base) [root@linuxea-master1 ~]# INGS=$(kubectl -n istio-system get pod -l app=istio-ingressgateway -o jsonpath={.items[0].metadata.name})
(base) [root@linuxea-master1 ~]# echo $INGS
istio-ingressgateway-559d4ffc58-7rgft

Then inspect the listeners that are now defined:

(base) [root@linuxea-master1 ~]# istioctl -n istio-system proxy-config listeners $INGS
ADDRESS PORT  MATCH DESTINATION
0.0.0.0 8080  ALL   Route: http.8080
0.0.0.0 15021 ALL   Inline Route: /healthz/ready*
0.0.0.0 15090 ALL   Inline Route: /stats/prometheus*
0.0.0.0 20001 ALL   Route: http.20001

As you can see, the entry 0.0.0.0 20001 ALL Route: http.20001 has been added.

But the route is not created automatically on the ingress gateway, so in the routes output the VIRTUAL SERVICE column shows 404:

(base) [root@linuxea-master1 ~]# istioctl -n istio-system proxy-config routes $INGS
NAME           DOMAINS     MATCH                  VIRTUAL SERVICE
http.8080      *           /productpage           bookinfo.java-demo
http.8080      *           /static*               bookinfo.java-demo
http.8080      *           /login                 bookinfo.java-demo
http.8080      *           /logout                bookinfo.java-demo
http.8080      *           /api/v1/products*      bookinfo.java-demo
http.20001     *           /*                     404
               *           /stats/prometheus*
               *           /healthz/ready*

The Gateway has been created in the namespace:

(base) [root@linuxea-master1 package]# kubectl -n istio-system get gw
NAME            AGE
kiali-gateway   3m

Next, we create the VirtualService.

  • VirtualService

The Gateway specifies hosts, so the VirtualService must declare the same hosts, and it must also state where its traffic rules attach, i.e. the ingress gateway. In the configuration above the Gateway is named kiali-gateway, so that is what we reference here. The port on the Gateway is 20001, Kiali's default. Traffic is routed to the kiali Service on port 20001, and that Service in turn resolves to the backend pods.

A VirtualService is either attached to an ingress gateway to handle incoming traffic, or applied inside the cluster to handle internal traffic.
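
For the in-mesh case, a minimal sketch (illustrative only; when the gateways field is omitted it defaults to the reserved gateway mesh) could look like this:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment-internal
  namespace: java-demo
spec:
  hosts:
  - dpment              # in-mesh requests to the dpment service
  http:
  - route:
    - destination:
        host: dpment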

  • hosts must match the Gateway's hosts
  • reference the Gateway by its name:
(base) [root@master2 ~]# kubectl -n istio-system get gw
NAME            AGE
kiali-gateway   1m18s
  • the route's host points to the upstream cluster, and that cluster's name is identical to the Service's name.

Note that the traffic is not sent to the Service itself; the Service only handles discovery, and traffic is sent to the Istio cluster, similar to how ingress-nginx discovers upstreams.

(base) [root@master2 ~]# kubectl -n istio-system get svc kiali
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
kiali   ClusterIP   10.111.90.166   <none>        20001/TCP,9090/TCP   14m

As follows:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kiali-virtualservice
  namespace: istio-system
spec:
  hosts:
   - "kiali.linuxea.com"
  gateways:
  - kiali-gateway
  http:
  - match:
    - port: 20001
    route:
    - destination:
        host: kiali
        port:
          number: 20001
---

Now the routes output shows http.20001 kiali.linuxea.com /* kiali-virtualservice.istio-system:

(base) [root@linuxea-master1 ~]# istioctl -n istio-system proxy-config routes $INGS
NAME           DOMAINS               MATCH                  VIRTUAL SERVICE
http.8080      *                     /productpage           bookinfo.java-demo
http.8080      *                     /static*               bookinfo.java-demo
http.8080      *                     /login                 bookinfo.java-demo
http.8080      *                     /logout                bookinfo.java-demo
http.8080      *                     /api/v1/products*      bookinfo.java-demo
http.20001     kiali.linuxea.com     /*                     kiali-virtualservice.istio-system
               *                     /stats/prometheus*
               *                     /healthz/ready*

Once created, the VirtualService also appears in the namespace:

(base) [root@linuxea-master1 package]# kubectl -n istio-system get vs
NAME                   GATEWAYS            HOSTS                   AGE
kiali-virtualservice   ["kiali-gateway"]   ["kiali.linuxea.com"]   26m

Also inspect the clusters:

(base) [root@linuxea-master1 ~]# istioctl -n istio-system proxy-config cluster $INGS|grep kiali
kiali.istio-system.svc.cluster.local                                   9090      -               outbound      EDS
kiali.istio-system.svc.cluster.local                                   20001     -               outbound      EDS

To reach the UI from a browser, the istio-ingressgateway Service must also expose port 20001; without it, access is impossible, since 20001 is what we configured. So we edit the Service:

(base) [root@linuxea-master1 ~]# kubectl -n istio-system edit svc istio-ingressgateway
....

  - name: http-kiali
    nodePort: 32653
    port: 20001
    protocol: TCP
    targetPort: 20001
...
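
The same port can be appended without an interactive edit, using a JSON patch (a sketch; Kubernetes picks the nodePort if it is omitted):

kubectl -n istio-system patch svc istio-ingressgateway --type=json \
  -p '[{"op":"add","path":"/spec/ports/-","value":{"name":"http-kiali","port":20001,"protocol":"TCP","targetPort":20001}}]'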

The Service now has port 20001 mapped to NodePort 32653:

(base) [root@linuxea-master1 ~]# kubectl -n istio-system get svc
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                                                                      AGE
grafana                ClusterIP      10.110.43.38     <none>          3000/TCP                                                                                     14d
istio-egressgateway    ClusterIP      10.97.213.128    <none>          80/TCP,443/TCP                                                                               15d
istio-ingressgateway   LoadBalancer   10.97.154.56     172.16.15.111   15021:32514/TCP,80:30142/TCP,20001:32653/TCP,443:31060/TCP,31400:30785/TCP,15443:32082/TCP   15d

Both ports can be used for access:

20001:

image-20220729105525438.png

32653:

image-20220729105726789.png

This is also reflected in the proxy-config routes:

(base) [root@linuxea-master1 ~]# istioctl proxy-config routes $INGS.istio-system
NAME           DOMAINS               MATCH                  VIRTUAL SERVICE
http.8080      *                     /productpage           bookinfo.java-demo
http.8080      *                     /static*               bookinfo.java-demo
http.8080      *                     /login                 bookinfo.java-demo
http.8080      *                     /logout                bookinfo.java-demo
http.8080      *                     /api/v1/products*      bookinfo.java-demo
http.20001     kiali.linuxea.com     /*                     kiali-virtualservice.istio-system
               *                     /stats/prometheus*     
               *                     /healthz/ready*      
  • DestinationRule

A DestinationRule is generated automatically by default and does not have to be defined explicitly; whether you define one depends on whether you need additional functionality.

The kiali Service schedules traffic to the backend pods. Whether the traffic the ingress gateway forwards to the Kiali pods is encrypted in transit or not is decided at the cluster level; the load-balancing algorithm used and whether in-transit encryption is enabled are defined with a DestinationRule.

For example, tls: mode: DISABLE turns off in-transit encryption:

  trafficPolicy:
    tls:
      mode: DISABLE
  • host: kiali is the key setting; it matches the Service's name

As follows:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: kiali
  namespace: istio-system
spec:
  host: kiali
  trafficPolicy:
    tls:
      mode: DISABLE

Applying it creates the DestinationRule:

(base) [root@linuxea-master1 ~]# kubectl -n istio-system get dr
NAME    HOST    AGE
kiali   kiali   88s

After the configuration, check the clusters:

(base) [root@linuxea-master1 ~]# istioctl proxy-config cluster $INGS.istio-system
...
kiali.istio-system.svc.cluster.local                                   9090      -               outbound      EDS            kiali.istio-system
kiali.istio-system.svc.cluster.local                                   20001     -               outbound      EDS            kiali.istio-system
...

Note that a Gateway does not take effect inside the mesh.

istio-system is the namespace holding Istio's installed components, i.e. the control plane:

(base) [root@linuxea-master1 ~]# istioctl proxy-config listeners $INGS.istio-system
ADDRESS PORT  MATCH DESTINATION
0.0.0.0 8080  ALL   Route: http.8080
0.0.0.0 15021 ALL   Inline Route: /healthz/ready*
0.0.0.0 15090 ALL   Inline Route: /stats/prometheus*
0.0.0.0 20001 ALL   Route: http.20001

java-demo is a data-plane namespace:

  • in java-demo there is only the outbound PassthroughCluster, which is generated from the Service
  • it is created automatically as soon as the Service exists; here the route was created on the gateway, not inside the mesh:
(base) [root@linuxea-master1 ~]# istioctl -n java-demo  proxy-config listeners marksugar  --port 20001
ADDRESS PORT  MATCH                                DESTINATION
0.0.0.0 20001 Trans: raw_buffer; App: http/1.1,h2c Route: 20001
0.0.0.0 20001 ALL                                  PassthroughCluster

The port-80 manifest is as follows:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kiali-virtualservice
  namespace: istio-system
spec:
  hosts:
  - "kiali.linuxea.com"
  gateways:
  - kiali-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: kiali
        port:
          number: 20001
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: kiali-gateway
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http-kiali
      protocol: HTTP
    hosts:
    - "kiali.linuxea.com"
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: kiali
  namespace: istio-system
spec:
  host: kiali
  trafficPolicy:
    tls:
      mode: DISABLE
---

apply

> kubectl.exe apply -f .\kiali.linuxea.com.yaml
virtualservice.networking.istio.io/kiali-virtualservice created
gateway.networking.istio.io/kiali-gateway created
destinationrule.networking.istio.io/kiali created

Now we inspect with istioctl proxy-config:

[root@linuxea-48 ~]# istioctl -n istio-system proxy-config all istio-ingressgateway-55b6cffcbc-4vc94 
SERVICE FQDN                                                           PORT      SUBSET     DIRECTION     TYPE           DESTINATION RULE
BlackHoleCluster                                                       -         -          -             STATIC         
agent                                                                  -         -          -             STATIC         
... (entries omitted) ...
skywalking-ui.skywalking.svc.cluster.local                             80        -          outbound      EDS            
tracing.istio-system.svc.cluster.local                                 80        -          outbound      EDS            
tracing.istio-system.svc.cluster.local                                 16685     -          outbound      EDS            
xds-grpc                                                               -         -          -             STATIC         
zipkin                                                                 -         -          -             STRICT_DNS     
zipkin.istio-system.svc.cluster.local                                  9411      -          outbound      EDS            

ADDRESS PORT  MATCH DESTINATION
0.0.0.0 8080  ALL   Route: http.8080
0.0.0.0 15021 ALL   Inline Route: /healthz/ready*
0.0.0.0 15090 ALL   Inline Route: /stats/prometheus*

NAME          DOMAINS               MATCH                  VIRTUAL SERVICE
http.8080     kiali.linuxea.com     /*                     kiali-virtualservice.istio-system
              *                     /stats/prometheus*     
              *                     /healthz/ready*        

RESOURCE NAME     TYPE           STATUS     VALID CERT     SERIAL NUMBER                               NOT AFTER                NOT BEFORE
default           Cert Chain     ACTIVE     true           244178067234775886684219941410566024258     2022-07-18T06:25:04Z     2022-07-17T06:23:04Z
ROOTCA            CA             ACTIVE     true           102889470196612755194280100451505524786     2032-07-10T16:33:20Z     2022-07-13T16:33:20Z

image-20220717171903848.png

The routes can also be inspected with a command:

[root@linuxea-48 /usr/local]# istioctl -n java-demo pc route dpment-linuxea-54b8b64c75-b6mqj 
NAME                                                                      DOMAINS                                                                        MATCH                  VIRTUAL SERVICE
80                                                                        argocd-server.argocd, 10.68.36.89                                              /*                     
80                                                                        dpment, dpment.java-demo + 1 more...                                           /*                     
... (rows omitted)                                                        /*
inbound|80||                                                              *                                                                              /*                     
15014                                                                     istiod.istio-system, 10.68.7.43                                                /*                     
16685                                                                     tracing.istio-system, 10.68.193.43                                             /*                     
20001                                                                     kiali.istio-system, 10.68.203.141                                              /*                     
                                                                          *                                                                              /stats/prometheus*    

image-20220717174120081.png

The default target is the cluster itself, so the VIRTUAL SERVICE column is empty; this can be verified with the command:

image-20220717174317052.png

EDS comes from Envoy: it means the backend pods are discovered dynamically through EDS and assembled into a cluster.

We filter the endpoints to count the backend pods; right now there is only one:

[root@linuxea-48 /usr/local]# istioctl -n java-demo pc endpoint dpment-linuxea-54b8b64c75-b6mqj |grep dpment
172.20.1.12:80                                          HEALTHY     OK                outbound|80||dpment.java-demo.svc.cluster.local

If we scale the deployment, this changes.
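
For example, scaling the deployment from section 4.1 to two replicas:

kubectl -n java-demo scale deployment dpment-linuxea --replicas=2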

[root@linuxea-11 /usr/local]# istioctl -n java-demo pc endpoint dpment-linuxea-54b8b64c75-b6mqj |grep dpment
172.20.1.12:80                                          HEALTHY     OK                outbound|80||dpment.java-demo.svc.cluster.local
172.20.2.168:80                                         HEALTHY     OK                outbound|80||dpment.java-demo.svc.cluster.local

Now we are ready to access Kiali again.

Modify the local hosts file:

172.16.100.110 kiali.linuxea.com
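
Alternatively, the mapping can be tested without editing the hosts file by letting curl pin the name (a sketch):

curl --resolve kiali.linuxea.com:80:172.16.100.110 http://kiali.linuxea.com/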

kiali.linuxea.com:

image-20220716171455371.png

4.3.3 Grafana

For Kiali we configured access on port 20001, and we also had to modify the Service before access worked; afterwards we added a port-80 entry as well. So let's configure Grafana for port-80 access too.

  • once port 80 is used, hosts can no longer be set to *

Key points of the configuration:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80   # listener on port 80
      name: http 
      protocol: HTTP
    hosts:
    - "grafana.linuxea.com" # 域名1,这里可以是多个域名,不能为*
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: grafana-virtualservice
  namespace: istio-system
spec:
  hosts:
  - "grafana.linuxea.com" # 匹配Gateway的hosts
  gateways:
  - grafana-gateway # the Gateway's name; add a namespace prefix if it is in a different namespace
  http:
  - match:  # port 80 can no longer identify the app, so match on the URI
    - uri:
        prefix: /  # any request to grafana.linuxea.com, regardless of path
    route:
    - destination:
        host: grafana
        port:
          number: 3000
---
# the DestinationRule is optional here
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: grafana
  namespace: istio-system
spec:
  host: grafana
  trafficPolicy:
    tls:
      mode: DISABLE
---

So, first we create the Gateway:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "grafana.linuxea.com"

apply

(base) [root@linuxea-master1 ~]# kubectl -n istio-system get gw
NAME              AGE
grafana-gateway   52s
kiali-gateway     24h

After the Gateway is created, the listener port here is 8080.

Port 80 is where traffic is intercepted; it is translated to 8080 internally, while clients still request port 80:
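
To confirm the 80 to 8080 mapping on the Service side, a jsonpath filter works (a sketch):

kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.port==80)].targetPort}'
# prints 8080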

(base) [root@linuxea-master1 ~]# istioctl proxy-config listeners $INGS.istio-system
ADDRESS PORT  MATCH DESTINATION
0.0.0.0 8080  ALL   Route: http.8080
0.0.0.0 15021 ALL   Inline Route: /healthz/ready*
0.0.0.0 15090 ALL   Inline Route: /stats/prometheus*
0.0.0.0 20001 ALL   Route: http.20001

Define the VirtualService:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: grafana-virtualservice
  namespace: istio-system
spec:
  hosts:
  - "grafana.linuxea.com"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: grafana
        port:
          number: 3000

apply

(base) [root@linuxea-master1 ~]# kubectl -n istio-system get vs
NAME                     GATEWAYS              HOSTS                     AGE
grafana-virtualservice   ["grafana-gateway"]   ["grafana.linuxea.com"]   82s
kiali-virtualservice     ["kiali-gateway"]     ["kiali.linuxea.com"]     24h

The routes now contain the DOMAINS and VIRTUAL SERVICE entries:

(base) [root@linuxea-master1 opt]# istioctl -n istio-system  proxy-config routes $INGS
NAME           DOMAINS                 MATCH                  VIRTUAL SERVICE
http.8080      *                       /productpage           bookinfo.java-demo
http.8080      *                       /static*               bookinfo.java-demo
http.8080      *                       /login                 bookinfo.java-demo
http.8080      *                       /logout                bookinfo.java-demo
http.8080      *                       /api/v1/products*      bookinfo.java-demo
http.8080      grafana.linuxea.com     /*                     grafana-virtualservice.istio-system
http.20001     kiali.linuxea.com       /*                     kiali-virtualservice.istio-system

And the outbound grafana cluster is present:

(base) [root@linuxea-master1 opt]# istioctl -n istio-system  proxy-config cluster $INGS|grep grafana
grafana.istio-system.svc.cluster.local                                 3000      -               outbound      EDS            
grafana.monitoring.svc.cluster.local                                   3000      -               outbound      EDS 

Now, after adding the local hosts entry, Grafana opens:

image-20220729181638055.png

The default dashboards are already included.

4.4 A quick test

Next, we access dpment from inside the java-demo pod:

[root@linuxea-48 ~]# kubectl -n java-demo exec -it  java-demo-79485b6d57-rd6bm  -- /bin/bash
Defaulting container name to java-demo.
Use 'kubectl describe pod/java-demo-79485b6d57-rd6bm -n java-demo' to see all of the containers in this pod.
bash-5.1$  while true;do curl dpment; sleep 0.2;done
linuxea-dpment-linuxea-54b8b64c75-b6mqj.com-127.0.0.1/8 172.20.1.254/24

As shown below.

Our requests to dpment do not pass through the Service; they are handled by the service-mesh sidecar.

image-20220716173056422.png

Then return to Kiali and inspect the corresponding namespace.

image-20220716173402567.png

4.5 Exposing dpment externally

Above we exposed the Kiali UI externally; now we expose dpment as well.

So besides the deployment YAML, we also need to configure a few more pieces.

The earlier deployment.yaml:

---
apiVersion: v1
kind: Service
metadata:
  name: dpment
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: linuxea_app
    version: v0.1
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dpment-linuxea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linuxea_app
      version: v0.1
  template:
    metadata:
      labels:
        app: linuxea_app
        version: v0.1
    spec:
      containers:
      - name: nginx-a
        image: marksugar/nginx:1.14.a
        ports:
        - name: http
          containerPort: 80          

Configure external access via Istio:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dpment-virtualservice
  namespace: java-demo  
spec:
  hosts:
  - "kiali.linuxea.com"
  gateways:
  - dpment-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: dpment
        port:
          number: 80
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: dpment-gateway
  namespace: java-demo
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: dpment
      protocol: HTTP
    hosts:
    - "kiali.linuxea.com"
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dpment-destinationrule
  namespace: java-demo
spec:
  host: dpment
  trafficPolicy:
    tls:
      mode: DISABLE
---

It can then be accessed through a browser:

image-20220717195136458.png

As shown:

image-20220717195306881.png

Key concepts

  • listeners

A Service discovers the IP and port information of its backend pods on its own; the listeners are built on top of the Services thus discovered.
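
For example, to list the endpoints Envoy has discovered for one such cluster (a sketch reusing the dpment pod name from earlier; the --cluster filter narrows the output):

istioctl -n java-demo pc endpoint dpment-linuxea-54b8b64c75-b6mqj \
  --cluster "outbound|80||dpment.java-demo.svc.cluster.local"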
