Are these pods inside the overlay network?

How can I confirm whether or not some of the pods in this Kubernetes cluster are running inside the Calico overlay network?


Pod Names:

Specifically, when I run kubectl get pods --all-namespaces, only two of the pods in the resulting list have the word calico in their names. The other pods, such as etcd and kube-controller-manager, do NOT have the word calico in their names. From what I read online, the other pods should also have the word calico in their names.

$ kubectl get pods --all-namespaces  

NAMESPACE     NAME                                                               READY   STATUS              RESTARTS   AGE  
kube-system   calico-node-l6jd2                                                  1/2     Running             0          51m  
kube-system   calico-node-wvtzf                                                  1/2     Running             0          51m  
kube-system   coredns-86c58d9df4-44mpn                                           0/1     ContainerCreating   0          40m  
kube-system   coredns-86c58d9df4-j5h7k                                           0/1     ContainerCreating   0          40m  
kube-system   etcd-ip-10-0-0-128.us-west-2.compute.internal                      1/1     Running             0          50m  
kube-system   kube-apiserver-ip-10-0-0-128.us-west-2.compute.internal            1/1     Running             0          51m  
kube-system   kube-controller-manager-ip-10-0-0-128.us-west-2.compute.internal   1/1     Running             0          51m  
kube-system   kube-proxy-dqmb5                                                   1/1     Running             0          51m  
kube-system   kube-proxy-jk7tl                                                   1/1     Running             0          51m  
kube-system   kube-scheduler-ip-10-0-0-128.us-west-2.compute.internal            1/1     Running             0          51m  


Stdout from applying Calico:

The stdout that resulted from applying Calico is as follows:

$ sudo kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml  

configmap/calico-config created  
service/calico-typha created  
deployment.apps/calico-typha created  
poddisruptionbudget.policy/calico-typha created  
daemonset.extensions/calico-node created  
serviceaccount/calico-node created  
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created  


How the cluster was created:

The commands that installed the cluster are:

$ sudo -i 
# kubeadm init --kubernetes-version 1.13.1 --pod-network-cidr 192.168.0.0/16 | tee kubeadm-init.out
# exit 
$ sudo mkdir -p $HOME/.kube
$ sudo chown -R lnxcfg:lnxcfg /etc/kubernetes
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config 
$ sudo kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
$ sudo kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml  

This is running on AWS in Amazon Linux 2 host machines.
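
One quick sanity check on this setup (a hedged sketch, not part of the original post; it assumes the CALICO_IPV4POOL_CIDR environment variable is present in this version of the manifest) is to compare the CIDR handed to kubeadm with the pool CIDR baked into the applied Calico manifest:

$ # the value printed below should match --pod-network-cidr (192.168.0.0/16)
$ curl -s https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml | grep -A1 CALICO_IPV4POOL_CIDR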

As per the official docs (https://docs.projectcalico.org/v3.6/getting-started/kubernetes/), it looks fine. The docs contain further commands to activate Calico; also check out the demo on the front page, which shows some verifications:

NAMESPACE    NAME                                       READY  STATUS   RESTARTS  AGE
kube-system  calico-kube-controllers-6ff88bf6d4-tgtzb   1/1    Running  0         2m45s
kube-system  calico-node-24h85                          2/2    Running  0         2m43s
kube-system  coredns-846jhw23g9-9af73                   1/1    Running  0         4m5s
kube-system  coredns-846jhw23g9-hmswk                   1/1    Running  0         4m5s
kube-system  etcd-jbaker-1                              1/1    Running  0         6m22s
kube-system  kube-apiserver-jbaker-1                    1/1    Running  0         6m12s
kube-system  kube-controller-manager-jbaker-1           1/1    Running  0         6m16s
kube-system  kube-proxy-8fzp2                           1/1    Running  0         5m16s
kube-system  kube-scheduler-jbaker-1                    1/1    Running  0         5m41s
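
Beyond eyeballing the pod list, a minimal verification sketch (it assumes the calico-node pods carry the k8s-app=calico-node label that the upstream manifest applies) is to query the DaemonSet directly; in the output above it is healthy, whereas the 1/2 READY calico-node pods in the question suggest one of the two containers is not passing its readiness check:

$ kubectl -n kube-system get daemonset calico-node               # DESIRED/READY should equal the node count
$ kubectl -n kube-system get pods -l k8s-app=calico-node         # each pod should settle at 2/2 Running
$ kubectl -n kube-system describe pods -l k8s-app=calico-node    # events show why a pod is stuck at 1/2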

Could you please let me know where you found the literature mentioning that the other pods would also have calico in their names?

As far as I know, in the kube-system namespace, the scheduler, API server, controller manager, and proxy are provided by native Kubernetes, hence their names don't contain calico.

And one more thing: Calico applies to the pods you create for the actual applications you wish to run on k8s, not to the Kubernetes control plane.
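
For example (a hedged sketch, not from the original answer; the deployment name test-nginx is just an illustration), you can create a throwaway deployment and check that its pods receive addresses from the 192.168.0.0/16 pod CIDR rather than from the node network:

$ kubectl create deployment test-nginx --image=nginx
$ kubectl scale deployment test-nginx --replicas=3
$ kubectl get pods -l app=test-nginx -o wide    # the IP column should show 192.168.x.x addresses
$ kubectl delete deployment test-nginx          # clean up afterwards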

Are you facing any problem with the cluster creation? If so, that would be a different question.

Hope this helps.

This is normal and expected behavior: only a few pods have calico in their names. They are created when you initialize Calico or add new nodes to your cluster.

etcd-*, kube-apiserver-*, kube-controller-manager-*, coredns-*, kube-proxy-*, and kube-scheduler-* are mandatory system components; these pods have no dependency on Calico, hence their names are based on the components themselves.

Also, as @Jonathan_M already wrote, Calico doesn't apply to the K8s control plane, only to newly created pods.
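
One way to see this distinction directly (a small sketch; it relies on the fact that kubeadm runs the control-plane components with hostNetwork: true, while ordinary workload pods leave that field unset; replace the placeholder pod names with real ones) is to read the hostNetwork field from the pod spec:

$ kubectl -n kube-system get pod <control-plane-pod> -o jsonpath='{.spec.hostNetwork}'   # prints "true": shares the node's IP
$ kubectl get pod <application-pod> -o jsonpath='{.spec.hostNetwork}'                    # prints nothing: pod gets a Calico IP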

You can verify whether or not your pods are inside the network overlay by using kubectl get pods --all-namespaces -o wide

My example:

kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES
default       my-nginx-76bf4969df-4fwgt               1/1     Running   0          14s   192.168.1.3   kube-calico-2   <none>           <none>
default       my-nginx-76bf4969df-h9w9p               1/1     Running   0          14s   192.168.1.5   kube-calico-2   <none>           <none>
default       my-nginx-76bf4969df-mh46v               1/1     Running   0          14s   192.168.1.4   kube-calico-2   <none>           <none>
kube-system   calico-node-2b8rx                       2/2     Running   0          70m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   calico-node-q5n2s                       2/2     Running   0          60m   10.132.0.13   kube-calico-2   <none>           <none>
kube-system   coredns-86c58d9df4-q22lx                1/1     Running   0          74m   192.168.0.2   kube-calico-1   <none>           <none>
kube-system   coredns-86c58d9df4-q8nmt                1/1     Running   0          74m   192.168.1.2   kube-calico-2   <none>           <none>
kube-system   etcd-kube-calico-1                      1/1     Running   0          73m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   kube-apiserver-kube-calico-1            1/1     Running   0          73m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   kube-controller-manager-kube-calico-1   1/1     Running   0          73m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   kube-proxy-6zsxc                        1/1     Running   0          74m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   kube-proxy-97xsf                        1/1     Running   0          60m   10.132.0.13   kube-calico-2   <none>           <none>
kube-system   kube-scheduler-kube-calico-1            1/1     Running   0          73m   10.132.0.12   kube-calico-1   <none>           <none>


kubectl get nodes --all-namespaces -o wide
NAME            STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
kube-calico-1   Ready    master   84m   v1.13.4   10.132.0.12   <none>        Ubuntu 16.04.5 LTS   4.15.0-1023-gcp   docker://18.9.2
kube-calico-2   Ready    <none>   70m   v1.13.4   10.132.0.13   <none>        Ubuntu 16.04.6 LTS   4.15.0-1023-gcp   docker://18.9.2

You can see that the K8s control-plane pods use the nodes' own IPs (10.132.0.x), while the nginx deployment pods (and coredns) already use the Calico 192.168.0.0/16 range.
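
To also confirm the CIDR Calico itself is using (a sketch that assumes the Kubernetes-datastore install above, where the pool is stored in the ippools CRD created by the manifest), you can read it back from the cluster:

$ kubectl get ippools.crd.projectcalico.org -o yaml | grep -i cidr    # should report 192.168.0.0/16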

Comments
  • But as far as I know, coredns isn't part of Calico. Can you undo the Calico install and check the status before you start?
  • I found it in the CNCF's official training course for the Certified Kubernetes Administrator. To me, that seemed official. Note that ContainerCreating persists in the OP even after the pods are deleted. Google research indicates that this is often due to Calico installation problems, to which this OP also points.