Reference: https://www.hi-linux.com/posts/10148.html

Prerequisites

Kata Containers and containerd with the CRI plugin are already installed.

root@ubuntu-001:~# kata-runtime --version
kata-runtime : 1.13.0-alpha0
commit : <<unknown>>
OCI specs: 1.0.1-dev

root@ubuntu-001:~# containerd --version
containerd containerd.io 1.4.3 269548fa27e0089a8b8278fc4fc781d7f65a939b

root@ubuntu-001:~# ctr --version
ctr containerd.io 1.4.3

Note:

containerd is already installed as a dependency of Docker (recent versions of Docker).

ctr is containerd's command-line tool (the containerd CLI).

cri is a native plugin of containerd 1.1 and above. It is built into containerd and enabled by default, so you do not need to install it separately. Just remove "cri" from the disabled_plugins list in the containerd configuration file (/etc/containerd/config.toml).
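For illustration, the edit can be sketched as below. The file path and the sed pattern are assumptions based on the default config that Docker ships (which disables cri); adjust them to your actual file, and restart containerd afterwards.

```shell
# Demo on a throwaway copy; edit the real /etc/containerd/config.toml the same way.
cfg=/tmp/containerd-config-demo.toml
printf 'disabled_plugins = ["cri"]\n' > "$cfg"   # the Docker-shipped default disables cri

# Clear the disabled_plugins list so the cri plugin loads on startup.
sed -i 's/^disabled_plugins = \["cri"\]/disabled_plugins = []/' "$cfg"
cat "$cfg"   # disabled_plugins = []

# On the real host, apply the change with: systemctl restart containerd
```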

The main issue at present is that Kata does not support host networking, while in Kubernetes the static Pods and DaemonSets — etcd, nodelocaldns, kube-apiserver, kube-scheduler, metrics-server, node-exporter, kube-proxy, calico, kube-controller-manager, and so on — all use the host network. So during installation we keep runc as the default runtime, and offer kata-runtime as an optional runtime for specific workloads.

Setting up a Kubernetes cluster with kubeadm

1. Install kubelet, kubeadm, and kubectl

Configure the Kubernetes apt source. Since the official repository is not reachable from mainland China, use the Alibaba Cloud mirror instead.

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

Install the specified versions of kubelet, kubeadm, and kubectl on all nodes.

apt-get update
apt-cache madison kubectl
apt-cache madison kubeadm
apt-cache madison kubelet
apt-get install -y kubelet=1.19.3-00 kubeadm=1.19.3-00 kubectl=1.19.3-00

Enable and start the kubelet service.

systemctl enable kubelet && systemctl start kubelet 

2. Disable the swap partition (swapoff -a only lasts until reboot; to make it persistent, also comment out the swap entry in /etc/fstab)

swapoff -a

3. Initialize the master node (--control-plane-endpoint specifies the VIP for building an HA cluster; --cri-socket selects containerd as the container runtime, the default being Docker)

kubeadm init --control-plane-endpoint 10.0.105.121 --image-repository registry.aliyuncs.com/google_containers --cri-socket /run/containerd/containerd.sock --kubernetes-version v1.19.3 --pod-network-cidr 10.244.0.0/16 --token-ttl 0 --ignore-preflight-errors Swap --upload-certs

root@ubuntu-001:~# kubeadm init --control-plane-endpoint 10.0.105.121 --image-repository registry.aliyuncs.com/google_containers --cri-socket /run/containerd/containerd.sock --kubernetes-version v1.19.3 --pod-network-cidr 10.244.0.0/16 --token-ttl 0 --ignore-preflight-errors Swap --upload-certs
W0208 10:57:44.075039 30107 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local ubuntu-001] and IPs [10.96.0.1 10.0.105.121]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-001] and IPs [10.0.105.121 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-001] and IPs [10.0.105.121 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.501962 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
4e30c9182268879cf56b505c9b4e173325d06ab7e1ef2ad8332ef448a9297333
[mark-control-plane] Marking the node ubuntu-001 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ubuntu-001 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: nh9l3h.4ufk3uoiyedne5mv
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

kubeadm join 10.0.105.121:6443 --token nh9l3h.4ufk3uoiyedne5mv \
--discovery-token-ca-cert-hash sha256:0ab2bf0feb7ba1ee5ebdbdc5d81f542a952e708044db5f6f2124ae5b5fe8a7cd \
--control-plane --certificate-key 4e30c9182268879cf56b505c9b4e173325d06ab7e1ef2ad8332ef448a9297333

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.105.121:6443 --token nh9l3h.4ufk3uoiyedne5mv \
--discovery-token-ca-cert-hash sha256:0ab2bf0feb7ba1ee5ebdbdc5d81f542a952e708044db5f6f2124ae5b5fe8a7cd

Following the instructions printed after a successful kubeadm init, install the flannel network plugin and join the remaining master and worker nodes.

root@ubuntu-001:~# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ubuntu-001 Ready master 53s v1.19.3 10.0.105.121 <none> Ubuntu 18.04.3 LTS 5.4.0-65-generic containerd://1.4.3

root@ubuntu-001:~# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-6d56c8448f-9fnlx 1/1 Running 0 15m 10.244.0.2 ubuntu-001 <none> <none>
kube-system coredns-6d56c8448f-vfvcg 1/1 Running 0 15m 10.244.0.3 ubuntu-001 <none> <none>
kube-system etcd-ubuntu-001 1/1 Running 0 16m 10.0.105.121 ubuntu-001 <none> <none>
kube-system kube-apiserver-ubuntu-001 1/1 Running 0 16m 10.0.105.121 ubuntu-001 <none> <none>
kube-system kube-controller-manager-ubuntu-001 1/1 Running 0 16m 10.0.105.121 ubuntu-001 <none> <none>
kube-system kube-flannel-ds-8j8cl 1/1 Running 0 22s 10.0.105.121 ubuntu-001 <none> <none>
kube-system kube-proxy-cg2ln 1/1 Running 0 15m 10.0.105.121 ubuntu-001 <none> <none>
kube-system kube-scheduler-ubuntu-001 1/1 Running 0 16m 10.0.105.121 ubuntu-001 <none> <none>

You can see that the container runtime in use is containerd://1.4.3.

By default, the master node will not schedule Pods; remove this restriction (optional).

root@ubuntu-001:~# kubectl taint node --all node-role.kubernetes.io/master:NoSchedule-
node/ubuntu-001 untainted

root@ubuntu-001:~# crictl pods
POD ID CREATED STATE NAME NAMESPACE ATTEMPT
9cc752c5fede8 22 seconds ago Ready coredns-6d56c8448f-vfvcg kube-system 0
980935e9ea8f2 24 seconds ago Ready coredns-6d56c8448f-9fnlx kube-system 0
a4ce86d80fa28 31 seconds ago Ready kube-flannel-ds-8j8cl kube-system 0
1aa88e7a6e40c 16 minutes ago Ready kube-proxy-cg2ln kube-system 0
bd1f9e7022de5 16 minutes ago Ready kube-scheduler-ubuntu-001 kube-system 0
2dd7e343fdaaa 16 minutes ago Ready etcd-ubuntu-001 kube-system 0
96f52cba3a131 16 minutes ago Ready kube-controller-manager-ubuntu-001 kube-system 0
6ab9ae42f6ff3 16 minutes ago Ready kube-apiserver-ubuntu-001 kube-system 0

root@ubuntu-001:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

root@ubuntu-001:~# crictl ps
CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT POD ID
0cd56ff487375 bfe3a36ebd252 About a minute ago Running coredns 0 9cc752c5fede8
d48bacaa9c210 bfe3a36ebd252 About a minute ago Running coredns 0 980935e9ea8f2
dc2be48b4143d f03a23d55e578 About a minute ago Running kube-flannel 0 a4ce86d80fa28
4dd35f6ad4ce3 cdef7632a242b 16 minutes ago Running kube-proxy 0 1aa88e7a6e40c
dc1baf15016e0 aaefbfa906bd8 17 minutes ago Running kube-scheduler 0 bd1f9e7022de5
f37feb9c32ad3 0369cf4303ffd 17 minutes ago Running etcd 0 2dd7e343fdaaa
08823e6edbb1c 9b60aca1d8180 17 minutes ago Running kube-controller-manager 0 96f52cba3a131
fc78e9051b6f0 a301be0cd44bb 17 minutes ago Running kube-apiserver 0 6ab9ae42f6ff3

root@ubuntu-001:~# ps -ef|grep containerd
root 29906 1 0 10:57 ? 00:00:03 /usr/bin/containerd
root 29928 1 0 10:57 ? 00:00:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root 30559 1 0 10:57 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 6ab9ae42f6ff3505762ce6c7676dbd74a45b6f2902c6eb32e82d70cb1d853be7 -address /run/containerd/containerd.sock
root 30631 1 0 10:57 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 96f52cba3a13165938769b136f4f786d7d073dcb85d120c9a1d45da653101545 -address /run/containerd/containerd.sock
root 30707 1 0 10:57 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 2dd7e343fdaaa0b0d54e704a429134584c5f85a8ead0623c41e242d8c92e0e50 -address /run/containerd/containerd.sock
root 30913 1 0 10:57 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id bd1f9e7022de5b8585afd4bfeabe6189eb21fd013643dff3c44dc71c5379dfa5 -address /run/containerd/containerd.sock
root 31322 1 1 10:58 ? 00:00:15 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock
root 31667 1 0 10:58 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 1aa88e7a6e40cea8cb3386841b9204e0ad17ba258efcd40e28bfbcdf2f4300be -address /run/containerd/containerd.sock
root 40524 1 0 11:13 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id a4ce86d80fa28dd289186c104698b36fc112de9195121c0d52a1b50e68007471 -address /run/containerd/containerd.sock
root 40920 1 0 11:13 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 980935e9ea8f29b3e77d4128a13da7cead0b0db068b04ac26076fe8ac9d17694 -address /run/containerd/containerd.sock
root 41153 1 0 11:14 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9cc752c5fede80ceb14c1e1f66ed87460e3c396b881b4d53d02b5f858cbe134e -address /run/containerd/containerd.sock

You can see that the underlying runtime is still runc, because that is how it is configured in our containerd configuration file.
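The exact file contents were not captured here, but a typical containerd 1.4 CRI configuration that keeps runc as the default handler while registering a kata handler looks roughly like this (an illustrative sketch, not the file from this host; verify the section names against your own /etc/containerd/config.toml):

```toml
# /etc/containerd/config.toml (illustrative excerpt)
[plugins."io.containerd.grpc.v1.cri".containerd]
  # Pods without a RuntimeClass fall back to this handler.
  default_runtime_name = "runc"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"

  # "kata" is the handler name that a RuntimeClass can refer to.
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata]
    runtime_type = "io.containerd.kata.v2"
```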

root@ubuntu-001:~# crictl images ls
IMAGE TAG IMAGE ID SIZE
docker.io/library/nginx latest f6d0b4767a6c4 53.6MB
quay.io/coreos/flannel v0.13.1-rc1 f03a23d55e578 20.7MB
registry.aliyuncs.com/google_containers/coredns 1.7.0 bfe3a36ebd252 14MB
registry.aliyuncs.com/google_containers/etcd 3.4.13-0 0369cf4303ffd 86.7MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.19.3 a301be0cd44bb 29.7MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.19.3 9b60aca1d8180 28MB
registry.aliyuncs.com/google_containers/kube-proxy v1.19.3 cdef7632a242b 49.3MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.19.3 aaefbfa906bd8 13.8MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5de 300kB

Running a Pod with runc

root@ubuntu-001:~# cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

root@ubuntu-001:~# kubectl apply -f pod.yaml
pod/nginx created

root@ubuntu-001:~# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default nginx 1/1 Running 0 2m56s
kube-system coredns-6d56c8448f-9fnlx 1/1 Running 0 33m
kube-system coredns-6d56c8448f-vfvcg 1/1 Running 0 33m
kube-system etcd-ubuntu-001 1/1 Running 0 33m
kube-system kube-apiserver-ubuntu-001 1/1 Running 0 33m
kube-system kube-controller-manager-ubuntu-001 1/1 Running 0 33m
kube-system kube-flannel-ds-8j8cl 1/1 Running 0 17m
kube-system kube-proxy-cg2ln 1/1 Running 0 33m
kube-system kube-scheduler-ubuntu-001 1/1 Running 0 33m

root@ubuntu-001:~# crictl pods
POD ID CREATED STATE NAME NAMESPACE ATTEMPT
1a7b030c70a52 26 seconds ago Ready nginx default 0
9cc752c5fede8 17 minutes ago Ready coredns-6d56c8448f-vfvcg kube-system 0
980935e9ea8f2 17 minutes ago Ready coredns-6d56c8448f-9fnlx kube-system 0
a4ce86d80fa28 17 minutes ago Ready kube-flannel-ds-8j8cl kube-system 0
1aa88e7a6e40c 33 minutes ago Ready kube-proxy-cg2ln kube-system 0
bd1f9e7022de5 33 minutes ago Ready kube-scheduler-ubuntu-001 kube-system 0
2dd7e343fdaaa 33 minutes ago Ready etcd-ubuntu-001 kube-system 0
96f52cba3a131 33 minutes ago Ready kube-controller-manager-ubuntu-001 kube-system 0
6ab9ae42f6ff3 33 minutes ago Ready kube-apiserver-ubuntu-001 kube-system 0

root@ubuntu-001:~# crictl ps
CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT POD ID
8aa19e57c4507 f6d0b4767a6c4 26 seconds ago Running nginx 0 1a7b030c70a52
0cd56ff487375 bfe3a36ebd252 17 minutes ago Running coredns 0 9cc752c5fede8
d48bacaa9c210 bfe3a36ebd252 17 minutes ago Running coredns 0 980935e9ea8f2
dc2be48b4143d f03a23d55e578 17 minutes ago Running kube-flannel 0 a4ce86d80fa28
4dd35f6ad4ce3 cdef7632a242b 33 minutes ago Running kube-proxy 0 1aa88e7a6e40c
dc1baf15016e0 aaefbfa906bd8 33 minutes ago Running kube-scheduler 0 bd1f9e7022de5
f37feb9c32ad3 0369cf4303ffd 33 minutes ago Running etcd 0 2dd7e343fdaaa
08823e6edbb1c 9b60aca1d8180 33 minutes ago Running kube-controller-manager 0 96f52cba3a131
fc78e9051b6f0 a301be0cd44bb 33 minutes ago Running kube-apiserver 0 6ab9ae42f6ff3

root@ubuntu-001:~# ps -ef|grep containerd
root 29906 1 0 10:57 ? 00:00:05 /usr/bin/containerd
root 29928 1 0 10:57 ? 00:00:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root 30559 1 0 10:57 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 6ab9ae42f6ff3505762ce6c7676dbd74a45b6f2902c6eb32e82d70cb1d853be7 -address /run/containerd/containerd.sock
root 30631 1 0 10:57 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 96f52cba3a13165938769b136f4f786d7d073dcb85d120c9a1d45da653101545 -address /run/containerd/containerd.sock
root 30707 1 0 10:57 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 2dd7e343fdaaa0b0d54e704a429134584c5f85a8ead0623c41e242d8c92e0e50 -address /run/containerd/containerd.sock
root 30913 1 0 10:57 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id bd1f9e7022de5b8585afd4bfeabe6189eb21fd013643dff3c44dc71c5379dfa5 -address /run/containerd/containerd.sock
root 31322 1 1 10:58 ? 00:00:26 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock
root 31667 1 0 10:58 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 1aa88e7a6e40cea8cb3386841b9204e0ad17ba258efcd40e28bfbcdf2f4300be -address /run/containerd/containerd.sock
root 40524 1 0 11:13 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id a4ce86d80fa28dd289186c104698b36fc112de9195121c0d52a1b50e68007471 -address /run/containerd/containerd.sock
root 40920 1 0 11:13 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 980935e9ea8f29b3e77d4128a13da7cead0b0db068b04ac26076fe8ac9d17694 -address /run/containerd/containerd.sock
root 41153 1 0 11:14 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9cc752c5fede80ceb14c1e1f66ed87460e3c396b881b4d53d02b5f858cbe134e -address /run/containerd/containerd.sock
root 46631 1 0 11:31 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 1a7b030c70a524c2ce71b07b63119c11a84738818d6986d58521a9fd7cde9c69 -address /run/containerd/containerd.sock

root@ubuntu-001:~# ps -ef|grep 46631
root 46631 1 0 11:31 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 1a7b030c70a524c2ce71b07b63119c11a84738818d6986d58521a9fd7cde9c69 -address /run/containerd/containerd.sock
root 46660 46631 0 11:31 ? 00:00:00 /pause
root 46737 46631 0 11:31 ? 00:00:00 nginx: master process nginx -g daemon off;
root 47687 58536 0 11:34 pts/2 00:00:00 grep --color=auto 46631

The pause container and the pod's container both exist as child processes of the containerd-shim.

root@ubuntu-001:~# ps -ef|grep 40920
root 40920 1 0 11:13 ? 00:00:00 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 980935e9ea8f29b3e77d4128a13da7cead0b0db068b04ac26076fe8ac9d17694 -address /run/containerd/containerd.sock
root 40960 40920 0 11:13 ? 00:00:00 /pause
root 41042 40920 0 11:13 ? 00:00:01 /coredns -conf /etc/coredns/Corefile
root 47867 58536 0 11:34 pts/2 00:00:00 grep --color=auto 40920

Running a Pod with Kata Containers

root@ubuntu-001:~# cat kata-runtime.yaml 
kind: RuntimeClass
apiVersion: node.k8s.io/v1beta1
metadata:
  name: kata-containers
handler: kata

root@ubuntu-001:~# cat kata-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kata-nginx
spec:
  runtimeClassName: kata-containers
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

root@ubuntu-001:~# kubectl apply -f kata-runtime.yaml
runtimeclass.node.k8s.io/kata-containers created

root@ubuntu-001:~# kubectl apply -f kata-pod.yaml
pod/kata-nginx created

root@ubuntu-001:~# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kata-nginx 1/1 Running 0 28s 10.244.0.5 ubuntu-001 <none> <none>
nginx 1/1 Running 0 4h12m 10.244.0.4 ubuntu-001 <none> <none>

root@ubuntu-001:~# crictl pods
POD ID CREATED STATE NAME NAMESPACE ATTEMPT
f7f69925f3bdf 43 seconds ago Ready kata-nginx default 0
1a7b030c70a52 4 hours ago Ready nginx default 0
9cc752c5fede8 4 hours ago Ready coredns-6d56c8448f-vfvcg kube-system 0
980935e9ea8f2 4 hours ago Ready coredns-6d56c8448f-9fnlx kube-system 0
a4ce86d80fa28 4 hours ago Ready kube-flannel-ds-8j8cl kube-system 0
1aa88e7a6e40c 5 hours ago Ready kube-proxy-cg2ln kube-system 0
bd1f9e7022de5 5 hours ago Ready kube-scheduler-ubuntu-001 kube-system 0
2dd7e343fdaaa 5 hours ago Ready etcd-ubuntu-001 kube-system 0
96f52cba3a131 5 hours ago Ready kube-controller-manager-ubuntu-001 kube-system 0
6ab9ae42f6ff3 5 hours ago Ready kube-apiserver-ubuntu-001 kube-system 0

root@ubuntu-001:~# crictl ps
CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT POD ID
4487c2621c1b4 f6d0b4767a6c4 41 seconds ago Running nginx 0 f7f69925f3bdf
8aa19e57c4507 f6d0b4767a6c4 4 hours ago Running nginx 0 1a7b030c70a52
0cd56ff487375 bfe3a36ebd252 4 hours ago Running coredns 0 9cc752c5fede8
d48bacaa9c210 bfe3a36ebd252 4 hours ago Running coredns 0 980935e9ea8f2
dc2be48b4143d f03a23d55e578 4 hours ago Running kube-flannel 0 a4ce86d80fa28
4dd35f6ad4ce3 cdef7632a242b 5 hours ago Running kube-proxy 0 1aa88e7a6e40c
dc1baf15016e0 aaefbfa906bd8 5 hours ago Running kube-scheduler 0 bd1f9e7022de5
f37feb9c32ad3 0369cf4303ffd 5 hours ago Running etcd 0 2dd7e343fdaaa
08823e6edbb1c 9b60aca1d8180 5 hours ago Running kube-controller-manager 0 96f52cba3a131
fc78e9051b6f0 a301be0cd44bb 5 hours ago Running kube-apiserver 0 6ab9ae42f6ff3

root@ubuntu-001:~# ps -ef|grep containerd
root 29906 1 0 12:21 ? 00:00:29 /usr/bin/containerd
root 29928 1 0 12:21 ? 00:00:03 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root 30559 1 0 12:21 ? 00:00:01 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 6ab9ae42f6ff3505762ce6c7676dbd74a45b6f2902c6eb32e82d70cb1d853be7 -address /run/containerd/containerd.sock
root 30631 1 0 12:21 ? 00:00:01 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 96f52cba3a13165938769b136f4f786d7d073dcb85d120c9a1d45da653101545 -address /run/containerd/containerd.sock
root 30707 1 0 12:21 ? 00:00:01 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 2dd7e343fdaaa0b0d54e704a429134584c5f85a8ead0623c41e242d8c92e0e50 -address /run/containerd/containerd.sock
root 30913 1 0 12:21 ? 00:00:01 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id bd1f9e7022de5b8585afd4bfeabe6189eb21fd013643dff3c44dc71c5379dfa5 -address /run/containerd/containerd.sock
root 31322 1 1 12:22 ? 00:02:30 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock
root 31667 1 0 12:22 ? 00:00:01 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 1aa88e7a6e40cea8cb3386841b9204e0ad17ba258efcd40e28bfbcdf2f4300be -address /run/containerd/containerd.sock
root 40524 1 0 12:37 ? 00:00:01 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id a4ce86d80fa28dd289186c104698b36fc112de9195121c0d52a1b50e68007471 -address /run/containerd/containerd.sock
root 40920 1 0 12:38 ? 00:00:01 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 980935e9ea8f29b3e77d4128a13da7cead0b0db068b04ac26076fe8ac9d17694 -address /run/containerd/containerd.sock
root 41153 1 0 12:38 ? 00:00:01 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9cc752c5fede80ceb14c1e1f66ed87460e3c396b881b4d53d02b5f858cbe134e -address /run/containerd/containerd.sock
root 46631 1 0 12:55 ? 00:00:01 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 1a7b030c70a524c2ce71b07b63119c11a84738818d6986d58521a9fd7cde9c69 -address /run/containerd/containerd.sock
root 96541 1 0 15:40 ? 00:00:00 /usr/bin/containerd-shim-kata-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id f7f69925f3bdfccc823f0672d7d554e4d2b7f3f7d1bf15c7db8802c03a162cff
root 97121 58536 0 15:42 pts/2 00:00:00 grep --color=auto containerd

root@ubuntu-001:~# ps -ef|grep 96541
root 96541 1 0 15:40 ? 00:00:00 /usr/bin/containerd-shim-kata-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id f7f69925f3bdfccc823f0672d7d554e4d2b7f3f7d1bf15c7db8802c03a162cff
root 97226 58536 0 15:42 pts/2 00:00:00 grep --color=auto 96541

Since the workload runs inside a lightweight VM, there is no pause process on the host.

root@ubuntu-001:~# ps -ef|grep 46631
root 46631 1 0 12:55 ? 00:00:01 /usr/bin/containerd-shim-runc-v2 -namespace k8s.io -id 1a7b030c70a524c2ce71b07b63119c11a84738818d6986d58521a9fd7cde9c69 -address /run/containerd/containerd.sock
root 46660 46631 0 12:55 ? 00:00:00 /pause
root 46737 46631 0 12:55 ? 00:00:00 nginx: master process nginx -g daemon off;
root 97379 58536 0 15:43 pts/2 00:00:00 grep --color=auto 46631

root@ubuntu-001:~# kata-runtime list
ID PID STATUS BUNDLE CREATED OWNER
f7f69925f3bdfccc823f0672d7d554e4d2b7f3f7d1bf15c7db8802c03a162cff -1 running /run/containerd/io.containerd.runtime.v2.task/k8s.io/f7f69925f3bdfccc823f0672d7d554e4d2b7f3f7d1bf15c7db8802c03a162cff 2021-02-08T07:40:46.667379896Z #0
4487c2621c1b4b2fe6214ea8d3916f1ce3fd0c691024ac947474603af7a51ea3 -1 running /run/containerd/io.containerd.runtime.v2.task/k8s.io/4487c2621c1b4b2fe6214ea8d3916f1ce3fd0c691024ac947474603af7a51ea3 2021-02-08T07:40:52.142467623Z #0

The two IDs are the pod (sandbox) ID and the container ID of kata-nginx, respectively.
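The ID column can be pulled out with standard text tools. A small sketch, using the output captured above as sample input; on a live node you would pipe `kata-runtime list` directly instead of the heredoc-style variable:

```shell
# Sample input copied from the session above (bundle paths shortened for readability).
sample='ID PID STATUS BUNDLE CREATED OWNER
f7f69925f3bdfccc823f0672d7d554e4d2b7f3f7d1bf15c7db8802c03a162cff -1 running /run/containerd/... 2021-02-08T07:40:46.667379896Z #0
4487c2621c1b4b2fe6214ea8d3916f1ce3fd0c691024ac947474603af7a51ea3 -1 running /run/containerd/... 2021-02-08T07:40:52.142467623Z #0'

# Skip the header row and print the first column (the sandbox and container IDs).
ids=$(printf '%s\n' "$sample" | awk 'NR > 1 { print $1 }')
printf '%s\n' "$ids"
```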