Kubernetes 101: Pod networking explained - Flannel, cni0, ...



Pods on the same host communicate through the cni0 virtual bridge.
Because each pod lives in its own isolated network namespace, the cni0 bridge connects these network namespaces together using a virtual network device called veth.

A veth device is like a virtual network cable with one end attached to the pod's namespace and the other end attached to the cni0 virtual bridge.

Below is a simple diagram that illustrates the process:
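The same wiring can be reproduced by hand with iproute2, which helps make the mechanism concrete. This is only a rough sketch of what a CNI plugin does for a pod (it requires root, and all the names - demo-pod, demo-br0, veth-pod, veth-br - and the address are illustrative, not what the plugin actually uses):

```shell
# create a network namespace to stand in for a pod
ip netns add demo-pod

# create a bridge to stand in for cni0
ip link add demo-br0 type bridge
ip link set demo-br0 up

# create a veth pair: one end for the "pod", one end for the bridge
ip link add veth-pod type veth peer name veth-br

# move the pod end into the namespace, attach the other end to the bridge
ip link set veth-pod netns demo-pod
ip link set veth-br master demo-br0
ip link set veth-br up

# bring the pod end up and give it an address inside the namespace
ip netns exec demo-pod ip link set veth-pod up
ip netns exec demo-pod ip addr add 10.244.0.10/24 dev veth-pod
```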



To make communication between pods on different nodes of the Kubernetes cluster possible, the cluster administrator can choose layer 2 routing, layer 3 routing, or encapsulation of packets through an overlay VXLAN network.

Flannel, for example, uses VXLAN encapsulation for its overlay network.

Below is a simple diagram that shows how a packet moves between pods on different nodes through the flannel interface:


To pick a networking plugin, we pass the --network-plugin=cni parameter to the kubelet.
The CNI configuration files are located in the directory specified by the --cni-conf-dir parameter - /etc/cni/net.d by default - and the plugin binaries in the directory specified by the --cni-bin-dir parameter - /opt/cni/bin by default -.
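As an illustration, flannel's standard deployment drops a file like /etc/cni/net.d/10-flannel.conflist with roughly the following content (the exact fields may vary with the flannel version):

```json
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
```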

To see the veth interface and the virtual bridge, we could use the below command from the host:
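For example, from a node running flannel (cni0 is the bridge name flannel creates; other CNI plugins use other names):

```shell
# show the cni0 bridge and its address
ip addr show cni0

# list the veth devices attached to the cni0 bridge
ip link show master cni0

# or list every veth interface on the host
ip link show type veth
```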


The pods receive IP addresses from the range defined by the "--pod-network-cidr" parameter. It is set at the cluster level when setting up the cluster:
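With kubeadm, for example, the flag is passed at init time (10.244.0.0/16 is the range flannel's default manifest expects; adjust it to your environment):

```shell
kubeadm init --pod-network-cidr=10.244.0.0/16
```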


We could also define the CIDR at the node level, in the YAML manifest of the node.
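As an illustration, the node-level range lives in the podCIDR field of the Node object (the node name worker-1 and the range below are hypothetical):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-1
spec:
  podCIDR: 10.244.1.0/24
```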

We create an "nginx" pod using the below command:
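For example, with kubectl (on recent versions, kubectl run creates a single pod):

```shell
kubectl run nginx --image=nginx
```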


Then we check its IP address using the below command to see if it falls within the pod CIDR range:
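For example (-o wide prints the pod IP and the node it runs on; the jsonpath form prints only the IP):

```shell
kubectl get pod nginx -o wide

# or print only the IP
kubectl get pod nginx -o jsonpath='{.status.podIP}'
```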


Remark:

Flannel is a popular CNI - Container Network Interface - plugin that uses encapsulation to wrap Ethernet frames - layer 2 - in UDP packets - layer 4 -.
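As a rough sketch of what flannel sets up, a VXLAN device can be created by hand with iproute2 (requires root; the device name flannel-demo is hypothetical, while VNI 1 and UDP port 8472 mirror the defaults flannel uses for its flannel.1 device):

```shell
# create a VXLAN device; VXLAN wraps layer-2 frames in UDP datagrams
ip link add flannel-demo type vxlan id 1 dstport 8472

# inspect it: -d shows the VXLAN-specific details (VNI, UDP port)
ip -d link show flannel-demo

# clean up
ip link del flannel-demo
```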
