cri is a native plugin of containerd 1.1 and above. It is built into containerd and enabled by default, so if you are running containerd 1.1 or later you do not need to install cri separately. Just make sure the cri plugin is not listed under disabled_plugins in the containerd configuration file (/etc/containerd/config.toml).
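For reference, the relevant line in /etc/containerd/config.toml looks roughly like this (surrounding settings omitted; a sketch, not a full config):

```toml
# /etc/containerd/config.toml
# If "cri" appears in this list, remove it, then restart containerd
# (systemctl restart containerd) so the kubelet can reach the CRI socket.
disabled_plugins = []
```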
root@ubuntu-001:~# kubeadm init --control-plane-endpoint 10.0.105.121 --image-repository registry.aliyuncs.com/google_containers --cri-socket /run/containerd/containerd.sock --kubernetes-version v1.19.3 --pod-network-cidr 10.244.0.0/16 --token-ttl 0 --ignore-preflight-errors Swap --upload-certs
W0208 10:57:44.075039   30107 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local ubuntu-001] and IPs [10.96.0.1 10.0.105.121]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-001] and IPs [10.0.105.121 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-001] and IPs [10.0.105.121 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.501962 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
4e30c9182268879cf56b505c9b4e173325d06ab7e1ef2ad8332ef448a9297333
[mark-control-plane] Marking the node ubuntu-001 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ubuntu-001 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: nh9l3h.4ufk3uoiyedne5mv
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
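The commands kubeadm prints at this point are the standard kubeconfig setup, copying the admin credentials into your home directory (run on the control-plane node):

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```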
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
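Since the cluster was initialized with --pod-network-cidr 10.244.0.0/16 (Flannel's default) and kube-flannel pods appear in the output below, the pod network used in this walkthrough is Flannel. A typical apply command looks like this (the manifest URL is an assumption and may have moved between Flannel releases):

```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```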
You can now join any number of the control-plane node running the following command on each as root:
Please note that the certificate-key gives access to cluster sensitive data, keep it secret! As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
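The actual join commands kubeadm prints contain a bootstrap token and CA certificate hash specific to this cluster, so they are not reproduced here; their general shape is as follows, with all angle-bracket values as placeholders and 6443 being the default API server port:

```shell
# Join as an additional control-plane node:
kubeadm join 10.0.105.121:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>

# Join as a worker node:
kubeadm join 10.0.105.121:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```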
root@ubuntu-001:~# crictl pods
POD ID              CREATED             STATE   NAME                                 NAMESPACE     ATTEMPT
9cc752c5fede8       22 seconds ago      Ready   coredns-6d56c8448f-vfvcg             kube-system   0
980935e9ea8f2       24 seconds ago      Ready   coredns-6d56c8448f-9fnlx             kube-system   0
a4ce86d80fa28       31 seconds ago      Ready   kube-flannel-ds-8j8cl                kube-system   0
1aa88e7a6e40c       16 minutes ago      Ready   kube-proxy-cg2ln                     kube-system   0
bd1f9e7022de5       16 minutes ago      Ready   kube-scheduler-ubuntu-001            kube-system   0
2dd7e343fdaaa       16 minutes ago      Ready   etcd-ubuntu-001                      kube-system   0
96f52cba3a131       16 minutes ago      Ready   kube-controller-manager-ubuntu-001   kube-system   0
6ab9ae42f6ff3       16 minutes ago      Ready   kube-apiserver-ubuntu-001            kube-system   0
Note that docker ps shows no containers: because kubeadm was pointed at the containerd socket, the cluster's containers are managed by containerd directly and are invisible to Docker.

root@ubuntu-001:~# docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
root@ubuntu-001:~# crictl ps
CONTAINER ID        IMAGE               CREATED              STATE     NAME                      ATTEMPT   POD ID
0cd56ff487375       bfe3a36ebd252       About a minute ago   Running   coredns                   0         9cc752c5fede8
d48bacaa9c210       bfe3a36ebd252       About a minute ago   Running   coredns                   0         980935e9ea8f2
dc2be48b4143d       f03a23d55e578       About a minute ago   Running   kube-flannel              0         a4ce86d80fa28
4dd35f6ad4ce3       cdef7632a242b       16 minutes ago       Running   kube-proxy                0         1aa88e7a6e40c
dc1baf15016e0       aaefbfa906bd8       17 minutes ago       Running   kube-scheduler            0         bd1f9e7022de5
f37feb9c32ad3       0369cf4303ffd       17 minutes ago       Running   etcd                      0         2dd7e343fdaaa
08823e6edbb1c       9b60aca1d8180       17 minutes ago       Running   kube-controller-manager   0         96f52cba3a131
fc78e9051b6f0       a301be0cd44bb       17 minutes ago       Running   kube-apiserver            0         6ab9ae42f6ff3
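If crictl complains that it cannot connect to a runtime, point it at the containerd socket explicitly. A minimal /etc/crictl.yaml (crictl's default config path) matching the --cri-socket used above:

```shell
# Write crictl's config so it talks to containerd's CRI socket.
cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
```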