Kubernetes with flannel: CNI config uninitialized
I am new to Kubernetes and am trying to set up a Kubernetes cluster on local machines: bare metal, no OpenStack, no MaaS, nothing like that.
After running
kubeadm init ... on the master node and
kubeadm join ... on the slave nodes, then applying flannel on the master, I get this message from the slaves:
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Can anyone tell me what I have done wrong or missed any steps?
Should flannel be applied to all the slave nodes as well? If yes, note that they do not have an internet connection.
Thanks a lot!
PS. All the nodes do not have internet access. That means all files have to be copied manually via ssh.
The problem was the missing internet connection. After loading the Docker images onto the worker nodes manually, they appear as Ready.
Unfortunately, I did not find a helpful error message pointing to this.
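For others in the same air-gapped situation, the manual copy can be scripted. A minimal sketch, assuming ssh access from an internet-connected machine; the image tag, worker hostname, and paths are illustrative placeholders, and the function only prints the commands (a dry run) so you can review them before piping them to sh:

```shell
#!/bin/sh
# Hedged sketch: stage Docker images for an air-gapped worker node over ssh.
# Image tag, hostname, and /tmp paths below are placeholders -- adjust them.
copy_image() {
    img="$1"     # image to transfer, e.g. quay.io/coreos/flannel:v0.11.0
    worker="$2"  # ssh-reachable offline node
    # Derive an archive name by replacing '/' and ':' with '_'
    tar="$(printf '%s' "$img" | tr '/:' '__').tar"
    # Print (not run) the four commands: pull + save here, copy + load there
    printf '%s\n' \
        "docker pull $img" \
        "docker save -o $tar $img" \
        "scp $tar $worker:/tmp/$tar" \
        "ssh $worker docker load -i /tmp/$tar"
}

# Dry run for the flannel image on one worker:
copy_image quay.io/coreos/flannel:v0.11.0 worker1
```

Repeat for every image kubeadm expects (kubeadm config images list shows them).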
[BUG REPORT] cni config uninitialized after updating to v1.16.0

message:docker: network plugin is not ready: cni config uninitialized

After running journalctl -xeu kubelet I saw:

plugin flannel does not support config version ""
Unable to update cni config: No networks found in /etc/cni/net.d
Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

So I checked: there is no cni folder in /etc, even though kubernetes-cni-0.6.0-0.x86_64 is installed.
I think this problem is caused by kubeadm initializing CoreDNS before flannel is initialized, so it throws "network plugin is not ready: cni config uninitialized".
1. Install flannel:
kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
2. Reset the coredns pod
kubectl -n kube-system delete pod coredns-xx-xx
3. Then run
kubectl -n kube-system get pods to see if it works.
If you see the error "cni0 already has an IP address different from 10.244.1.1/24", run:
ifconfig cni0 down
brctl delbr cni0
ip link delete flannel.1
If you see the error "Back-off restarting failed container", get the log with:

root@master:/home/moonx/yaml# kubectl logs coredns-86c58d9df4-x6m9w -n=kube-system
.:53
2019-01-22T08:19:38.255Z [INFO] CoreDNS-1.2.6
2019-01-22T08:19:38.255Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
[INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
[FATAL] plugin/loop: Forwarding loop detected in "." zone. Exiting. See https://coredns.io/plugins/loop#troubleshooting. Probe query: "HINFO 1599094102175870692.6819166615156126341.".
Then look at the file /etc/resolv.conf on the failed node: if the nameserver is localhost, there will be a forwarding loop. Comment it out and point to an upstream resolver instead:

#nameserver 127.0.1.1
nameserver <your upstream DNS server>
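That edit can be sketched as a small shell function; this is an illustration only, and the upstream IP 192.0.2.53 used below is a documentation-range placeholder for your real resolver:

```shell
#!/bin/sh
# Hedged sketch: comment out loopback nameservers in resolv.conf content
# (read from stdin) and append an upstream resolver passed as $1.
fix_resolv() {
    upstream="$1"
    # Prefix any 127.x nameserver line with '#', keep all other lines as-is
    sed 's/^nameserver 127\./#&/'
    # Append the replacement upstream resolver
    echo "nameserver $upstream"
}

# Example (dry run on a copy; redirect over /etc/resolv.conf yourself if happy):
printf 'nameserver 127.0.1.1\nsearch local\n' | fix_resolv 192.0.2.53
```

After fixing resolv.conf, delete the failing CoreDNS pod so it is recreated with the new configuration.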
Usually flannel is deployed as a DaemonSet, meaning it runs on all worker nodes.
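To make that concrete: a DaemonSet schedules one flannel pod on every node, so a single kubectl apply on the master covers the whole cluster. An abridged sketch of the shape of such a manifest follows; the field values are illustrative, not the exact upstream kube-flannel.yml:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        app: flannel
    spec:
      hostNetwork: true
      tolerations:
        # Run on tainted nodes (e.g. the master) as well
        - operator: Exists
          effect: NoSchedule
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0   # example tag
          command: ["/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr"]
```

Because it is a DaemonSet, you never apply flannel on the slaves directly; the master's API server schedules a copy onto each node that joins.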
Kubernetes cni config uninitialized: check whether the images have been downloaded onto your machines (VMs) with docker image list, and compare against kubeadm config images list. The expected image is quay.io/coreos/flannel.

Out of the box, Rancher provides the following CNI network providers for Kubernetes clusters: Canal, Flannel, Calico, and Weave (Weave is available as of v2.2.0). You can choose your CNI network provider when you create new Kubernetes clusters from Rancher. Canal is a CNI network provider that gives you the best of Flannel and Calico.
Kubernetes - Two Steps Installation: learn how to use kubeadm to install Kubernetes in minutes; flannel is used as the CNI plugin for demonstration. Nodes stall in NotReady status due to cni config uninitialized, or pods are unable to reach the external network. I can see from the kubelet logs that the docker CNI network is uninitialized and kubelet cannot reach the HTTPS API server on port 6443. I gladly appreciate any hints on how to resolve this. Thanks in advance.

cni.go:189] Unable to update cni config: No networks found in /etc/cni/net.d
message:docker: network plugin is not ready: cni config uninitialized
Network Plugins: for CNI plugins, the plugin name is simply "cni". Besides providing the NetworkPlugin interface to configure and clean up pod networking … Multus is a multi-CNI plugin that supports the multi-networking feature in Kubernetes using CRD-based network objects. Multus supports all reference plugins (e.g. Flannel, DHCP, Macvlan) that implement the CNI specification, as well as third-party plugins.
Using flannel with Kubernetes, A ConfigMap containing both a CNI configuration and a flannel configuration. The network in the flannel configuration should match the pod network CIDR. The CNI plugin is selected by passing Kubelet the --network-plugin=cni command-line option. Kubelet reads a file from --cni-conf-dir (default /etc/cni/net.d) and uses the CNI configuration from that file to set up each pod’s network.
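To illustrate the two pieces that ConfigMap carries, here is roughly what they look like; the CIDR and backend shown are flannel's common defaults, not necessarily your cluster's, and Network must match the --pod-network-cidr you gave to kubeadm init:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
data:
  # CNI configuration: the flannel DaemonSet installs this under
  # /etc/cni/net.d, where kubelet picks it up (--cni-conf-dir)
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  # flannel configuration: Network must equal the pod network CIDR
  # (10.244.0.0/16 is flannel's conventional default)
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
```

This also explains the original error: until a flannel pod runs on a node and writes its file into /etc/cni/net.d, kubelet finds no networks there and reports "cni config uninitialized".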