Istio 1.14.1 Installation and Testing (2)

July 15, 2023

To deploy Istio, we need a Kubernetes cluster. The control plane is deployed in the istio-system namespace by default and includes istiod, the ingress gateway, the egress gateway, and the addons (Kiali, Prometheus, Grafana, Jaeger). The data plane is injected automatically into application namespaces once applications are deployed: each proxy runs as a sidecar, and injection is only enabled for namespaces that carry a specific label.

The egress gateway is optional, and the addons (e.g. Kiali, Prometheus) must be configured manually as needed.

Deployment

There are three ways to deploy Istio:

  • istioctl

Istio's dedicated CLI tool. Its command-line options fully support the IstioOperator API, including generating the CRs and CRDs, and it ships with built-in default profiles that can be selected for deployment:

    • default: enables the components recommended by the IstioOperator API defaults; suitable for production
    • demo: deploys a larger set of components to demonstrate Istio's features
    • minimal: similar to default, but deploys only the control plane
    • remote: used for multi-cluster setups that share a control plane
    • empty: deploys no components; typically used as a base configuration when customizing a profile
    • preview: contains preview features for exploring what's new; no guarantees of stability, security, or performance
  • Istio Operator

A dedicated controller for Istio's custom resources (CRs). It runs as a pod in the Kubernetes cluster and automatically maintains the resource objects that the CRs define. You write a CR configuration file as needed and submit it to the Kubernetes API; the operator then performs the corresponding operations. This still relies on the IstioOperator API, but unlike with istioctl, a custom configuration must be provided before deploying.

  • helm

Istio can also be installed and configured with Helm using dedicated charts; this method is currently in alpha.

These configurations belong to the install.istio.io/v1alpha1 API group; defining resource objects of that type configures the installation.
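A minimal resource of this type might look like the following sketch (the metadata.name here is an arbitrary example):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: demo-install        # arbitrary example name
  namespace: istio-system
spec:
  profile: demo             # one of the built-in profiles
```

Such a file can be applied with istioctl install -f <file>.yaml, or reconciled automatically by the operator if it is running in the cluster.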

istioctl is the most commonly used method, and it is what we use here.

1. Download the installation package

Deploying Istio is similar to deploying other software, except that Istio comes with the istioctl management tool; Helm and the Istio Operator are supported as well.

Download istioctl by following the Download Istio section of the official documentation, or fetch it from GitHub:

$ curl -L https://istio.io/downloadIstio | sh -

I'll download directly from GitHub instead:

wget https://github.com/istio/istio/releases/download/1.14.1/istioctl-1.14.1-linux-amd64.tar.gz
wget https://github.com/istio/istio/releases/download/1.14.1/istio-1.14.1-linux-amd64.tar.gz

tar xf istio-1.14.1-linux-amd64.tar.gz -C /usr/local
cd /usr/local
ln -s istio-1.14.1 istio
cd ~
tar xf istioctl-1.14.1-linux-amd64.tar.gz -C /usr/local/istio/bin
(base) [root@master1 istio]# istioctl version
no running Istio pods in "istio-system"
1.14.1

The directory structure looks like this:

(base) [root@master1 istio]# ll
total 28
drwxr-x---  2 root root    22 Jul 13 17:13 bin
-rw-r--r--  1 root root 11348 Jun  8 10:11 LICENSE
drwxr-xr-x  5 root root    52 Jun  8 10:11 manifests
-rw-r-----  1 root root   796 Jun  8 10:11 manifest.yaml
-rw-r--r--  1 root root  6016 Jun  8 10:11 README.md
drwxr-xr-x 23 root root  4096 Jun  8 10:11 samples  # bookinfo sample directory
drwxr-xr-x  3 root root    57 6月   8 10:11 tools

If you are a Windows user, download the Windows package instead.

2. Installation and configuration

After installation, traffic management relies on several out-of-the-box CRDs in the networking API group: VirtualService, DestinationRule, Gateway, ServiceEntry, EnvoyFilter, and so on.

Once Istio is deployed, every service must be reachable by all other services in the mesh.

Istio does not know in advance which pods talk to which, therefore:

1. The configuration pushed to each sidecar must allow it to discover every other service in the mesh.

2. Istio must handle outbound traffic.

In Kubernetes, a Service is usually implemented in the kernel through iptables or IPVS rules. A Service provides the registration and discovery mechanism for every service in the cluster.

When A accesses B, traffic usually reaches B's Service first and is then forwarded to B's pod.

At what point is A's request identified as traffic for service B?

The kube-proxy program on every node reads the Service definitions and translates them into IPVS or iptables rules on that node; every node runs kube-proxy and holds its own copy of every Service definition. When A accesses B, the request is identified as traffic for service B as soon as it reaches the kernel of A's own node, and is then scheduled to a B pod, possibly on another node. In effect, a Service configures every node as a scheduler for every service, but each node is only responsible for identifying and scheduling the traffic of clients on that node.

Defining a Service effectively turns every node in the cluster into a load balancer for that Service, with each node's load balancer handling only the traffic that enters through that node. This proxying happens at layer 4.

Compare Ingress and Service: traffic passing through an Ingress is proxied to pods without going through the Service; the ingress gateway performs the scheduling itself, and the Service is only used to discover the pods.

On top of this, Istio converts every Kubernetes Service into an Istio service. Istio starts a dedicated sidecar to proxy each pod, which is similar to giving every pod its own ingress gateway.

How many pods back a given service is still discovered through the Kubernetes Service; each service is then configured as a gateway-like resource in the sidecar. All of this concerns outbound traffic; inbound traffic needs no comparably complex handling.

istiod reads the service configuration of the whole cluster through Kubernetes and pushes it to the corresponding resources of every sidecar, not just to a single one.

For example: if the cluster holds configuration for A, B, C, and D, but the business only requires A to access D, istiod by default cannot restrict the push so that only A may access D.

2.1 install

Running install or apply uses the default profile by default. List the available profiles with istioctl profile list:

(base) [root@master1 istio]# istioctl profile list
Istio configuration profiles:
    default   # production
    demo      # demonstration
    empty     # no components; base for customization
    external
    minimal
    openshift
    preview
    remote

Running istioctl install --set profile=demo -y installs the demo profile.

Use istioctl profile dump demo to view the demo profile's YAML, or pass default to view the default profile's YAML.

  • profile: a built-in configuration profile, used as the base resource configuration for an environment

The images can be pulled ahead of time:

docker pull docker.io/istio/pilot:1.14.1
docker pull docker.io/istio/proxyv2:1.14.1

Options

-y: short for --skip-confirmation

--set: enable, disable, or override individual settings

All of these settings appear in the YAML printed by istioctl profile dump demo and can be overridden with --set; when there are too many of them, compose a YAML file instead and pass it with -f.
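For example, assuming you want Envoy access logs written to stdout, the --set flag and the YAML file below are equivalent (a sketch; the filename is arbitrary):

```yaml
# demo-accesslog.yaml -- same effect as:
#   istioctl install --set profile=demo --set meshConfig.accessLogFile=/dev/stdout
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo
  meshConfig:
    accessLogFile: /dev/stdout
```

Then install with: istioctl install -f demo-accesslog.yaml -y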

To add new settings after installation, simply run install again; it behaves like apply.

Now install:

[root@linuxea-48 ~]# istioctl install --set profile=demo -y
✔ Istio core installed                                                   
✔ Istiod installed                                     
✔ Ingress gateways installed                   
✔ Egress gateways installed 
✔ Installation complete                                                 
Making this installation the default for injection and validation.

Thank you for installing Istio 1.14.  Please take a few minutes to tell us about your install/upgrade experience!  https://forms.gle/yEtCbt45FZ3VoDT5A

After installation completes, run istioctl x precheck to verify the cluster:

PS C:\Users\usert> istioctl.exe x precheck
✔ No issues found when checking the cluster. Istio is safe to install or upgrade!
  To get started, check out https://istio.io/latest/docs/setup/getting-started/
  • Run istioctl verify-install and confirm every component reports a successful status.

The following objects are created in the istio-system namespace:

(base) [root@master1 local]# kubectl get all -n istio-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/istio-egressgateway-65b46d7874-xdjkr    1/1     Running   0          78s
pod/istio-ingressgateway-559d4ffc58-7rgft   1/1     Running   0          78s
pod/istiod-8689fcd796-mqd8n                 1/1     Running   0          104s

NAME                           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                      AGE
service/istio-egressgateway    ClusterIP      10.97.213.128   <none>        80/TCP,443/TCP                                                               78s
service/istio-ingressgateway   LoadBalancer   10.97.154.56    <pending>     15021:32514/TCP,80:30142/TCP,443:31060/TCP,31400:30785/TCP,15443:32082/TCP   78s
service/istiod                 ClusterIP      10.98.150.70    <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                        104s

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/istio-egressgateway    1/1     1            1           78s
deployment.apps/istio-ingressgateway   1/1     1            1           78s
deployment.apps/istiod                 1/1     1            1           104s

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/istio-egressgateway-65b46d7874    1         1         1       78s
replicaset.apps/istio-ingressgateway-559d4ffc58   1         1         1       78s
replicaset.apps/istiod-8689fcd796                 1         1         1       104s

Then install the other addons, found in the samples/addons directory:

[root@linuxea-48 /usr/local/istio/samples/addons]# kubectl  apply -f ./
serviceaccount/grafana created
configmap/grafana created
service/grafana created
deployment.apps/grafana created
configmap/istio-grafana-dashboards created
configmap/istio-services-grafana-dashboards created
deployment.apps/jaeger created
service/tracing created
service/zipkin created
service/jaeger-collector created
serviceaccount/kiali created
configmap/kiali created
clusterrole.rbac.authorization.k8s.io/kiali-viewer created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrolebinding.rbac.authorization.k8s.io/kiali created
role.rbac.authorization.k8s.io/kiali-controlplane created
rolebinding.rbac.authorization.k8s.io/kiali-controlplane created
service/kiali created
deployment.apps/kiali created
serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created

The result:

[root@linuxea-48 /usr/local/istio/samples/addons]# kubectl -n istio-system get pod 
NAME                                    READY   STATUS    RESTARTS      AGE
grafana-67f5ccd9d7-psrsn                1/1     Running   0             9m12s
istio-egressgateway-7fcb98978c-8t685    1/1     Running   1 (30m ago)   19h
istio-ingressgateway-55b6cffcbc-9rn99   1/1     Running   1 (30m ago)   19h
istiod-56d9c5557-tffdv                  1/1     Running   1 (30m ago)   19h
jaeger-78cb4f7d4b-btn7h                 1/1     Running   0             9m11s
kiali-6b455fd9f9-5cqjx                  1/1     Running   0             9m11s  # UI client
prometheus-7cc96d969f-l2rkt             2/2     Running   0             9m11s

2.2 Generating the configuration

Alternatively, the configuration can be generated with a command, producing Kubernetes YAML manifests:

istioctl manifest generate --set profile=demo

Saving the output of the command above and applying it deploys the same thing istioctl would:

istioctl manifest generate --set profile=demo | kubectl apply -f -

2.3 Labeling namespaces

Next, label every namespace that should use Istio with istio-injection=enabled.

Once a namespace carries the istio-injection=enabled label, pods created in it automatically get a sidecar injected.

In addition, every workload needs a Service, and its pods need the following labels, which later configuration relies on:

app:
version:
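Put together, a workload ready for mesh routing might look like this sketch (names and image are illustrative, borrowed from this article's examples):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-demo
  namespace: java-demo        # namespace labeled istio-injection=enabled
spec:
  replicas: 1
  selector:
    matchLabels:
      app: java-demo
      version: v1
  template:
    metadata:
      labels:
        app: java-demo        # matched by the Service selector
        version: v1           # distinguishes subsets for traffic shifting
    spec:
      containers:
      - name: java-demo
        image: registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0  # example image
        ports:
        - containerPort: 80
```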

For example, label the java-demo namespace with istio-injection=enabled:

[root@linuxea-48 /usr/local/istio/samples/addons]#  kubectl label namespace java-demo istio-injection=enabled
namespace/java-demo labeled

As shown:

[root@linuxea-48 /usr/local/istio/samples/addons]#  kubectl get ns --show-labels
NAME              STATUS   AGE     LABELS
argocd            Active   20d     kubernetes.io/metadata.name=argocd
default           Active   32d     kubernetes.io/metadata.name=default
ingress-nginx     Active   22d     app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,kubernetes.io/metadata.name=ingress-nginx
istio-system      Active   19h     kubernetes.io/metadata.name=istio-system
java-demo         Active   6d21h   istio-injection=enabled,kubernetes.io/metadata.name=java-demo
kube-node-lease   Active   32d     kubernetes.io/metadata.name=kube-node-lease
kube-public       Active   32d     kubernetes.io/metadata.name=kube-public
kube-system       Active   32d     kubernetes.io/metadata.name=kube-system
marksugar         Active   21d     kubernetes.io/metadata.name=marksugar
monitoring        Active   31d     kubernetes.io/metadata.name=monitoring
nacos             Active   18d     kubernetes.io/metadata.name=nacos
skywalking        Active   17d     kubernetes.io/metadata.name=skywalking

Pods then need at least one Service when they start. Start java-demo:

(base) [root@master1 sleep]# kubectl  -n java-demo get pod
NAME                        READY   STATUS    RESTARTS   AGE
java-demo-76b97fc95-fkmjs   2/2     Running   0          14m
java-demo-76b97fc95-gw9r6   2/2     Running   0          14m
java-demo-76b97fc95-ngkb9   2/2     Running   0          14m
java-demo-76b97fc95-pt2t5   2/2     Running   0          14m
java-demo-76b97fc95-znqrm   2/2     Running   0          14m

Each pod now shows two containers; the additional one is istio-proxy.

Alternatively, create a single pod directly:

> kubectl.exe -n java-demo run marksugar --image=registry.cn-hangzhou.aliyuncs.com/marksugar/nginx:v1.0 --restart=Never
pod/marksugar created
> kubectl.exe -n java-demo get pod
NAME                             READY   STATUS    RESTARTS   AGE
marksugar                        2/2     Running   0          9s

The injection can also be verified from the pod spec:

# kubectl.exe -n java-demo get pod marksugar -o yaml
.......
initContainers:
  - args:
    - istio-iptables
    - -p
    - "15001"
    - -z
    - "15006"
    - -u
    - "1337"
    - -m
    - REDIRECT
    - -i
    - '*'
    - -x
    - ""
    - -b
    - '*'
    - -d
    - 15090,15021,15020
    image: docker.io/istio/proxyv2:1.14.1
    imagePullPolicy: IfNotPresent
    name: istio-init
    resources:
      limits:
        cpu: "2"
        memory: 1Gi
      requests:
        cpu: 10m
        memory: 40Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
        drop:
        - ALL
      privileged: false
      readOnlyRootFilesystem: false
.......

Then curl -I against the pod IP shows Envoy headers:

x-envoy-upstream-service-time: 0
x-envoy-decorator-operation: :0/*

The full response:

# curl -I 130.130.0.106
HTTP/1.1 200 OK
server: istio-envoy
date: Tue, 26 Jul 2022 09:24:01 GMT
content-type: text/html
content-length: 70
last-modified: Tue, 26 Jul 2022 09:04:16 GMT
etag: "62dfae10-46"
accept-ranges: bytes
x-envoy-upstream-service-time: 0
x-envoy-decorator-operation: :0/*

The listening sockets can then be viewed with curl localhost:15000/listeners.

  • Every discovered Service is converted into a listener here, serving egress traffic.
> kubectl.exe -n java-demo exec -it  marksugar -- curl localhost:15000/listeners
0d333f68-f03b-44e1-b38d-6ed612e71f6c::0.0.0.0:15090
28f19ea0-a4d5-4935-a887-a906f0ea410b::0.0.0.0:15021
130.130.1.37_9094::130.130.1.37:9094
10.109.235.93_443::10.109.235.93:443
130.130.1.41_8443::130.130.1.41:8443
10.98.127.60_443::10.98.127.60:443
10.97.213.128_443::10.97.213.128:443
130.130.1.38_9094::130.130.1.38:9094
10.107.145.213_8443::10.107.145.213:8443
10.98.150.70_443::10.98.150.70:443
10.97.154.56_31400::10.97.154.56:31400
172.16.15.138_9100::172.16.15.138:9100
10.97.154.56_15443::10.97.154.56:15443
172.16.15.137_9100::172.16.15.137:9100
10.96.0.1_443::10.96.0.1:443
10.107.160.181_443::10.107.160.181:443
172.16.15.137_10250::172.16.15.137:10250
130.130.1.36_9094::130.130.1.36:9094
130.130.1.35_8443::130.130.1.35:8443
10.97.154.56_443::10.97.154.56:443
130.130.1.41_9443::130.130.1.41:9443
10.102.45.140_443::10.102.45.140:443
10.104.213.128_10259::10.104.213.128:10259
10.100.230.94_6379::10.100.230.94:6379
10.98.150.70_15012::10.98.150.70:15012
10.96.0.10_53::10.96.0.10:53
172.16.15.138_10250::172.16.15.138:10250
10.102.80.102_10257::10.102.80.102:10257
10.104.18.194_80::10.104.18.194:80
130.130.1.37_9093::130.130.1.37:9093
10.96.132.151_8080::10.96.132.151:8080
10.110.43.38_3000::10.110.43.38:3000
0.0.0.0_10255::0.0.0.0:10255
130.130.1.38_9093::130.130.1.38:9093
10.104.119.238_80::10.104.119.238:80
0.0.0.0_9090::0.0.0.0:9090
0.0.0.0_5557::0.0.0.0:5557
10.109.18.63_8083::10.109.18.63:8083
10.99.185.170_8080::10.99.185.170:8080
130.130.1.42_9090::130.130.1.42:9090
0.0.0.0_15014::0.0.0.0:15014
0.0.0.0_15010::0.0.0.0:15010
10.96.171.119_8080::10.96.171.119:8080
172.16.15.138_4194::172.16.15.138:4194
10.103.151.226_9090::10.103.151.226:9090
10.105.132.58_8082::10.105.132.58:8082
10.111.33.218_14250::10.111.33.218:14250
10.107.145.213_8302::10.107.145.213:8302
0.0.0.0_9080::0.0.0.0:9080
0.0.0.0_8085::0.0.0.0:8085
10.96.59.20_8084::10.96.59.20:8084
0.0.0.0_11800::0.0.0.0:11800
10.107.145.213_8600::10.107.145.213:8600
10.96.0.10_9153::10.96.0.10:9153
10.106.152.2_8080::10.106.152.2:8080
10.96.59.20_8081::10.96.59.20:8081
0.0.0.0_8500::0.0.0.0:8500
10.107.145.213_8400::10.107.145.213:8400
0.0.0.0_80::0.0.0.0:80
0.0.0.0_9411::0.0.0.0:9411
10.99.155.134_5558::10.99.155.134:5558
0.0.0.0_3000::0.0.0.0:3000
0.0.0.0_5556::0.0.0.0:5556
10.96.124.32_12800::10.96.124.32:12800
10.97.154.56_15021::10.97.154.56:15021
10.103.47.163_8080::10.103.47.163:8080
130.130.1.36_9093::130.130.1.36:9093
10.107.145.213_8301::10.107.145.213:8301
0.0.0.0_8060::0.0.0.0:8060
0.0.0.0_16685::0.0.0.0:16685
10.96.132.151_7000::10.96.132.151:7000
10.96.140.72_9001::10.96.140.72:9001
0.0.0.0_20001::0.0.0.0:20001
10.107.145.213_8300::10.107.145.213:8300
130.130.1.44_9090::130.130.1.44:9090
10.111.33.218_14268::10.111.33.218:14268
10.97.225.212_9093::10.97.225.212:9093
172.16.15.137_4194::172.16.15.137:4194
10.103.187.135_9088::10.103.187.135:9088
virtualOutbound::0.0.0.0:15001
virtualInbound::0.0.0.0:15006
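Each entry has the form name::address:port. A quick way to tally the listener ports is to cut off the last field (a local sketch over a few sample lines; in the real case, pipe the curl output instead of the sample variable):

```shell
# Sample of the listener dump; in practice use:
#   kubectl -n java-demo exec -it marksugar -- curl -s localhost:15000/listeners
listeners='virtualOutbound::0.0.0.0:15001
virtualInbound::0.0.0.0:15006
10.96.0.10_53::10.96.0.10:53
0.0.0.0_80::0.0.0.0:80'

# The port is the last ':'-separated field of every line.
ports=$(printf '%s\n' "$listeners" | awk -F: '{print $NF}' | sort -n)
printf '%s\n' "$ports"
# prints: 53, 80, 15001, 15006 (one per line)
```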

The clusters can be viewed the same way: kubectl -n java-demo exec -it marksugar -- curl localhost:15000/clusters

The output contains many outbound entries: istiod automatically discovers every Service in the Kubernetes cluster, converts them into Envoy configuration, and pushes that configuration down.

......
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.0.97:9080::health_flags::healthy
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.0.97:9080::weight::1
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.0.97:9080::region::
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.0.97:9080::zone::
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.0.97:9080::sub_zone::
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.0.97:9080::canary::false
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.0.97:9080::priority::0
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.0.97:9080::success_rate::-1
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.0.97:9080::local_origin_success_rate::-1
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.1.112:9080::cx_active::0
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.1.112:9080::cx_connect_fail::0
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.1.112:9080::cx_total::0
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.1.112:9080::rq_active::0
outbound|9080|v3|reviews.java-demo.svc.cluster.local::130.130.1.112:9080::rq_error::0
.....

The push status of this configuration can be checked with istioctl proxy-status, e.g. for marksugar.java-demo:

# istioctl  proxy-status
NAME                                                   CLUSTER        CDS        LDS        EDS        RDS          ECDS         ISTIOD                      VERSION
details-v1-6d89cf9847-46c4z.java-demo                  Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
istio-egressgateway-65b46d7874-xdjkr.istio-system      Kubernetes     SYNCED     SYNCED     SYNCED     NOT SENT     NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
istio-ingressgateway-559d4ffc58-7rgft.istio-system     Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
marksugar.java-demo                                    Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1


The status can also be viewed with istioctl ps (shorthand for proxy-status).

Install socat on every node first.

Then run istioctl ps:

(base) [root@master2 kube]# istioctl ps
NAME                                                   CLUSTER        CDS        LDS        EDS        RDS          ECDS         ISTIOD                      VERSION
istio-egressgateway-65b46d7874-xdjkr.istio-system      Kubernetes     SYNCED     SYNCED     SYNCED     NOT SENT     NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
istio-ingressgateway-559d4ffc58-7rgft.istio-system     Kubernetes     SYNCED     SYNCED     SYNCED     NOT SENT     NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
java-demo-76b97fc95-fkmjs.java-demo                    Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
java-demo-76b97fc95-gw9r6.java-demo                    Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
java-demo-76b97fc95-ngkb9.java-demo                    Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
java-demo-76b97fc95-pt2t5.java-demo                    Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1
java-demo-76b97fc95-znqrm.java-demo                    Kubernetes     SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-8689fcd796-mqd8n     1.14.1

2.4 Testing

For a clearer demonstration, create a Service associated with the pod above.

Label marksugar and create a Service that selects it:

kubectl -n java-demo  label pods marksugar app=marksugar
kubectl -n java-demo  create service clusterip marksugar --tcp=80:80
(base) [root@master1 ~]# kubectl -n java-demo  label pods marksugar app=marksugar
pod/marksugar labeled
(base) [root@master1 ~]# kubectl -n java-demo  create service clusterip marksugar --tcp=80:80
service/marksugar created
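The two imperative commands above correspond to roughly this declarative manifest (a sketch; kubectl create service clusterip NAME defaults the selector to app=NAME):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: marksugar
  namespace: java-demo
  labels:
    app: marksugar
spec:
  type: ClusterIP
  selector:
    app: marksugar       # matches the label added to the pod
  ports:
  - name: 80-80
    port: 80
    targetPort: 80
```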

As shown:

(base) [root@master1 ~]#  kubectl -n java-demo get svc
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
marksugar     ClusterIP   10.107.112.228   <none>        80/TCP     58s

The endpoint is associated:

(base) [root@master1 ~]#  kubectl -n java-demo describe svc marksugar
Name:              marksugar
Namespace:         java-demo
Labels:            app=marksugar
Annotations:       <none>
Selector:          app=marksugar
Type:              ClusterIP
IP:                10.107.112.228
Port:              80-80  80/TCP
TargetPort:        80/TCP
Endpoints:         130.130.0.106:80
Session Affinity:  None
Events:            <none>

When this Service is created, istiod discovers it and automatically generates configuration that is pushed to every sidecar.

  • listeners

The listeners now include one more entry, 10.107.112.228_80::10.107.112.228:80:

(base) [root@master1 ~]#   kubectl -n java-demo exec -it  marksugar -- curl localhost:15000/listeners
......
10.103.187.135_9088::10.103.187.135:9088
virtualOutbound::0.0.0.0:15001
virtualInbound::0.0.0.0:15006
10.107.112.228_80::10.107.112.228:80
  • clusters

Now look at the clusters:

(base) [root@master1 ~]#  kubectl -n java-demo exec -it marksugar -- /bin/sh
/ #  curl localhost:15000/clusters|less
outbound|80||marksugar.java-demo.svc.cluster.local::130.130.0.106:80::health_flags::healthy
outbound|80||marksugar.java-demo.svc.cluster.local::130.130.0.106:80::weight::1
outbound|80||marksugar.java-demo.svc.cluster.local::130.130.0.106:80::region::
outbound|80||marksugar.java-demo.svc.cluster.local::130.130.0.106:80::zone::
outbound|80||marksugar.java-demo.svc.cluster.local::130.130.0.106:80::sub_zone::
outbound|80||marksugar.java-demo.svc.cluster.local::130.130.0.106:80::canary::false
outbound|80||marksugar.java-demo.svc.cluster.local::130.130.0.106:80::priority::0
outbound|80||marksugar.java-demo.svc.cluster.local::130.130.0.106:80::success_rate::-1
outbound|80||marksugar.java-demo.svc.cluster.local::130.130.0.106:80::local_origin_success_rate::-1
outbound|80||kuboard.kube-system.svc.cluster.local::observability_name::outbound|80||kuboard.kube-system.svc.cluster.local
.......

Other routing configuration can also be inspected with curl localhost:15000/config_dump.
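config_dump returns a large JSON document whose top-level configs array is keyed by @type. A quick way to list the sections without extra tooling (a sketch over a hand-made sample fragment; in practice pipe the curl output instead of the sample variable):

```shell
# Minimal stand-in for the real dump; in practice use:
#   kubectl -n java-demo exec -it marksugar -- curl -s localhost:15000/config_dump
sample='{"configs":[
 {"@type":"type.googleapis.com/envoy.admin.v3.BootstrapConfigDump"},
 {"@type":"type.googleapis.com/envoy.admin.v3.ListenersConfigDump"},
 {"@type":"type.googleapis.com/envoy.admin.v3.ClustersConfigDump"}]}'

# Pull out every "@type" value to see which sections are present.
types=$(printf '%s\n' "$sample" | grep -o '"@type":"[^"]*"' | cut -d'"' -f4)
printf '%s\n' "$types"
```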
