When creating a resource, the apiserver only accepts resource definitions in JSON format; the run command used earlier simply generated that JSON for us. Writing JSON by hand is cumbersome, so configuration manifests are usually supplied in YAML, which the apiserver losslessly converts to JSON before processing, with no extra information needed. In this post we use such a configuration manifest, a YAML file, to manage an autonomous Pod (one not managed by a controller). A manifest is built around five top-level fields: apiVersion, kind, metadata, spec, and status.
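Before looking at a live object, the overall shape is easier to see in a minimal sketch; the name and image below are placeholders rather than resources from this cluster:
apiVersion: v1        # API group and version the object belongs to
kind: Pod             # resource type
metadata:             # identifying data: name, namespace, labels, annotations
  name: example
spec:                 # the desired state, written by the user
  containers:
  - name: app
    image: nginx:1.14-alpine
# status is filled in and maintained by the cluster; it is never written by the user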
[root@linuxea ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
client-linuxea 1/1 Running 0 21h
nginx-linuxea-5786698598-8qt5n 1/1 Running 0 21h
nginx-linuxea-5786698598-9w5z7 1/1 Running 0 21h
nginx-linuxea-5786698598-xl9xp 1/1 Running 0 21h
The YAML manifest
View the YAML representation of one of the Pods above:
[root@linuxea ~]# kubectl get pod nginx-linuxea-5786698598-8qt5n -o yaml
Key fields (annotated inline with ===>>):
apiVersion: v1 ===>> API group and version (group/version)
kind: Pod ===>> resource type
metadata: ===>> metadata
  creationTimestamp: 2018-08-23T15:22:34Z
  generateName: nginx-linuxea-5786698598-
  labels:
    pod-template-hash: "1342254154"
    run: nginx-linuxea
  name: nginx-linuxea-5786698598-8qt5n
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: nginx-linuxea-5786698598
    uid: 993df65c-a6de-11e8-9c95-88882fbd1028
  resourceVersion: "122893"
  selfLink: /api/v1/namespaces/default/pods/nginx-linuxea-5786698598-8qt5n
  uid: 5c6294c1-a6e8-11e8-9c95-88882fbd1028
spec: ===>> the desired state of the object, defined by the user
  containers:
  - image: marksugar/nginx:1.14.a
    imagePullPolicy: IfNotPresent
    name: nginx-linuxea
    ports:
    - containerPort: 8787
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-l2x78
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: linuxea.node-1.com
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-l2x78
    secret:
      defaultMode: 420
      secretName: default-token-l2x78
status: ===>> the current, observed state of the resource; read-only
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-08-23T15:22:34Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-08-23T15:22:35Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: 2018-08-23T15:22:34Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://8e340023d553c2de65a043eb9d643c42b9961a75440eda8a0334494e51da0f89
    image: marksugar/nginx:1.14.a
    imageID: docker-pullable://marksugar/nginx@sha256:103dd97b01c3283b56a587e7d95135f8fc410be1df36477d2d477a41f00daa59
    lastState: {}
    name: nginx-linuxea
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-08-23T15:22:35Z
  hostIP: 10.10.240.202
  phase: Running
  podIP: 172.16.1.14
  qosClass: BestEffort
  startTime: 2018-08-23T15:22:34Z
Most resource configuration manifests are made up of five top-level fields:
- apiVersion
apiVersion: the group/version of the API the resource belongs to; see kubectl api-versions for the full list. A group can carry several versions at once (alpha, beta, stable), so multiple versions can coexist for compatibility during testing and migration. (A complete manifest that uses a named group is sketched right after this list.)
[root@linuxea ~]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
- kind
kind: the resource type, e.g. Pod, ReplicaSet, Deployment, StatefulSet, DaemonSet, Job, Service, and so on.
- metadata
metadata, the object's metadata:
  name: must be unique among resources of the same kind, scoped to its namespace
  labels: labels; annotations: resource annotations
  selfLink: the PATH by which the resource is referenced: /api/GROUP/VERSION/namespaces/NAMESPACE/TYPE/NAME
- spec: desired state
spec nests many second- and third-level fields, and the fields it requires differ from one resource type to another (kubectl explain marks the mandatory ones as required); the rest are optional. Together these fields define the state the user wants.
- status: current state
This field is maintained by the Kubernetes cluster itself; users cannot set it.
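To make group/version and kind concrete, here is a sketch of a manifest for a resource from a named group; the Deployment name and labels are illustrative, and only the image is reused from this article. Core-group resources such as Pod write apiVersion: v1 alone, whereas resources in a named group write group/version:
apiVersion: apps/v1          # group "apps", version "v1", as listed by kubectl api-versions
kind: Deployment             # a resource type served by that group
metadata:
  name: deploy-demo          # illustrative name, unique within the default namespace
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deploy-demo
  template:
    metadata:
      labels:
        app: deploy-demo
    spec:
      containers:
      - name: nginx
        image: marksugar/nginx:1.14.a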
To define a Pod, run kubectl explain pod to see the documentation for each field. To drill into the metadata field, append .metadata, as in kubectl explain pod.metadata; the path can be nested as deeply as needed:
[root@linuxea ~]# kubectl explain pod
[root@linuxea ~]# kubectl explain pod.metadata
[root@linuxea ~]# kubectl explain pods.spec
[root@linuxea ~]# kubectl explain pods.spec.containers
[root@linuxea ~]# kubectl explain pods.spec.containers.tty
Defining a resource
For testing, define a single Pod that runs three different containers:
[root@linuxea linuxea]# cat pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo-linuxea
  namespace: default
  labels:
    www: linuxea-com
    tier: backend
spec:
  containers:
  - name: linuxea-pod1-com
    image: "marksugar/nginx:1.14.a"
    ports:
    - containerPort: 88
  - name: linuxea-pod2-com
    image: "marksugar/nginx:1.14.b"
    ports:
    - containerPort: 88
  - name: alpine
    image: "alpine:3.8"
    command:
    - "/bin/sh"
    - "-c"
    - "tail -f /etc/passwd"
With a YAML file the command is no longer run but create:
[root@linuxea linuxea]# kubectl create -f pod-demo.yaml
pod/pod-demo-linuxea created
The Pod goes through a brief ContainerCreating phase and then reaches Running:
[root@linuxea linuxea]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
client-linuxea 1/1 Running 0 23h 172.16.2.252 linuxea.node-2.com <none>
pod-demo-linuxea 1/3 ContainerCreating 0 3s 172.16.3.23 linuxea.node-3.com <none>
[root@linuxea linuxea]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
client-linuxea 1/1 Running 0 23h 172.16.2.252 linuxea.node-2.com <none>
pod-demo-linuxea 3/3 Running 0 3s 172.16.3.23 linuxea.node-3.com <none>
Use kubectl describe pods pod-demo-linuxea to inspect the Pod we just defined:
[root@linuxea linuxea]# kubectl describe pods pod-demo-linuxea
Name: pod-demo-linuxea
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: linuxea.node-3.com/10.10.240.146
Start Time: Fri, 24 Aug 2018 15:25:26 +0100
Labels: tier=backend
www=linuxea-com
Annotations: <none>
Status: Running
IP: 172.16.3.23
Containers:
linuxea-pod1-com:
Container ID: docker://7c845c4640e74d6d542a3a18c5b2de69719dc74649207431a62f4f525843f15f
Image: marksugar/nginx:1.14.a
Image ID: docker-pullable://marksugar/nginx@sha256:103dd97b01c3283b56a587e7d95135f8fc410be1df36477d2d477a41f00daa59
Port: 88/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 24 Aug 2018 15:25:26 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-l2x78 (ro)
linuxea-pod2-com:
Container ID: docker://0c75ff2650b0b26b7b909858fb9f7d6259b1b17da1b161073757334237711846
Image: marksugar/nginx:1.14.b
Image ID: docker-pullable://marksugar/nginx@sha256:f6998c772d71c6fd0398963b575833bfddac1c996095a3c9a4ba59953d2a05ec
Port: 88/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 24 Aug 2018 15:25:26 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-l2x78 (ro)
alpine:
Container ID: docker://df9e005ff05e4e40f3603f7b8704c47424c3c1427a2648514f5778c7fe5ad306
Image: alpine:3.8
Image ID: docker-pullable://alpine@sha256:7043076348bf5040220df6ad703798fd8593a0918d06d3ce30c6c93be117e430
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
tail -f /etc/passwd
State: Running
Started: Fri, 24 Aug 2018 15:25:27 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-l2x78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-l2x78:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-l2x78
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m default-scheduler Successfully assigned default/pod-demo-linuxea to linuxea.node-3.com
Normal Pulled 3m kubelet, linuxea.node-3.com Container image "marksugar/nginx:1.14.a" already present on machine
Normal Created 3m kubelet, linuxea.node-3.com Created container
Normal Started 3m kubelet, linuxea.node-3.com Started container
Normal Pulled 3m kubelet, linuxea.node-3.com Container image "marksugar/nginx:1.14.b" already present on machine
Normal Created 3m kubelet, linuxea.node-3.com Created container
Normal Started 3m kubelet, linuxea.node-3.com Started container
Normal Pulled 3m kubelet, linuxea.node-3.com Container image "alpine:3.8" already present on machine
Normal Created 3m kubelet, linuxea.node-3.com Created container
Normal Started 3m kubelet, linuxea.node-3.com Started container
Logs can be viewed with kubectl logs pod-demo-linuxea:
[root@linuxea linuxea]# kubectl logs pod-demo-linuxea
Error from server (BadRequest): a container name must be specified for pod pod-demo-linuxea, choose one of: [linuxea-pod1-com linuxea-pod2-com alpine]
As the error says, the Pod has several containers, so add the container name (positionally or with -c):
[root@linuxea linuxea]# kubectl logs pod-demo-linuxea linuxea-pod1-com
[root@linuxea linuxea]# kubectl logs pod-demo-linuxea linuxea-pod2-com
[root@linuxea linuxea]# kubectl logs pod-demo-linuxea -c alpine
Deleting the resource
kubectl delete -f pod-demo.yaml deletes every resource defined in the YAML file; to bring them back, simply run create with the same file again (an equivalent by-name delete is sketched after the transcript below).
[root@linuxea linuxea]# kubectl delete -f pod-demo.yaml
pod "pod-demo-linuxea" deleted