Containers 101: CNI - The Container Network Interface



When running a container engine like Rkt, Docker or CRI-O, when running pods in Kubernetes, or when working directly with Linux network namespaces, the networking components involved are more or less the same. 
They perform, among other things, the tasks below (a sketch of these steps with standard Linux commands follows the list):
  • Create network namespaces, to have isolated network stacks.
  • Create virtual bridges to connect containers, namespaces,...
  • Create network connections to link the namespaces to the virtual bridges.
  • Enable the network interfaces and assign IP addresses to them.
  • Enable NAT and port forwarding, for example to expose services to the outside world.
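
As a rough illustration of these steps, here is a hedged sketch of how they can be done by hand with standard Linux tools (the names ns1, v-net-0, veth-ns1 and the 10.244.0.0/24 subnet are made up for this example):

  # Create an isolated network namespace
  ip netns add ns1

  # Create a virtual bridge on the host, bring it up and give it an address
  ip link add v-net-0 type bridge
  ip link set dev v-net-0 up
  ip addr add 10.244.0.1/24 dev v-net-0

  # Create a veth pair linking the namespace to the bridge
  ip link add veth-ns1 type veth peer name veth-ns1-br
  ip link set veth-ns1 netns ns1
  ip link set veth-ns1-br master v-net-0
  ip link set veth-ns1-br up

  # Enable the interface and assign an IP address inside the namespace
  ip -n ns1 addr add 10.244.0.2/24 dev veth-ns1
  ip -n ns1 link set veth-ns1 up
  ip -n ns1 route add default via 10.244.0.1

  # Enable NAT so traffic from the namespace can reach the outside world
  iptables -t nat -A POSTROUTING -s 10.244.0.0/24 -j MASQUERADE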


Since the networking components in Kubernetes, Linux namespaces, and most container engines work almost in the same way, it was decided to bundle them into standard network plugins.
The network plugins and the runtime engines need to adhere to a set of rules so they can work together natively.
The standard they need to adhere to is called CNI (Container Network Interface).
CNI standardizes all the actions mentioned above (namespace creation, virtual bridge creation,...), so they can be bundled into plugins that work seamlessly with different container products, as long as these are CNI "certified". 
CNI defines the blueprint for the plugins, and also for the way these network plugins interact with the container runtime.

Examples of "CNI" rules: 

One of these rules is that the container engine creates the network namespaces, and runs the network plugin when a container is created and when a container is deleted.
Another rule is that every plugin needs to support the "add", "del" and "check" commands, and that the messages it sends should follow the standard format defined by CNI.
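
As an illustration, a successful "add" is expected to return a result message roughly like the JSON below (the address, gateway and interface name are invented for this example; the exact fields depend on the CNI spec version):

  {
    "cniVersion": "0.4.0",
    "interfaces": [
      { "name": "eth0", "sandbox": "/var/run/netns/namespace_id" }
    ],
    "ips": [
      { "version": "4", "address": "10.22.0.2/16", "gateway": "10.22.0.1", "interface": 0 }
    ],
    "routes": [ { "dst": "0.0.0.0/0" } ],
    "dns": {}
  }
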
A lighter container engine:
The container runtime engine delegates all the networking work to the "CNI compatible" programs, like the Linux virtual "bridge" plugin.
A CNI "compatible" container engine runs the below command to create a network bridge in the namespace (namespace_id):
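
A hedged sketch of what such an invocation can look like with the reference "bridge" plugin: per the CNI specification, the plugin receives its parameters through CNI_* environment variables and reads a JSON network configuration on its standard input (the configuration file path, the plugin location /opt/cni/bin, the network name and the subnet below are illustrative):

  # Illustrative network configuration for the bridge plugin,
  # stored for example in /etc/cni/net.d/10-mynet.conf
  {
    "cniVersion": "0.4.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
      "type": "host-local",
      "subnet": "10.22.0.0/16"
    }
  }

  # Ask the plugin to connect the namespace to the bridge
  # (container_id is the identifier the runtime assigned to the container)
  CNI_COMMAND=ADD \
  CNI_CONTAINERID=container_id \
  CNI_NETNS=/var/run/netns/namespace_id \
  CNI_IFNAME=eth0 \
  CNI_PATH=/opt/cni/bin \
  /opt/cni/bin/bridge < /etc/cni/net.d/10-mynet.conf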


CNI's plugins:

Among the CNI network plugins, we can mention:
  • Bridge
  • VLAN
  • MACVLAN
  • etc.
There are of course also third-party plugins, like:
  • Flannel
  • VMware NSX
  • Calico
  • etc.
Remark:

Docker has its own standard when it comes to networking, called CNM (Container Network Model), so the CNI plugins do not work natively with Docker.
We will need to run the below command to start the container with Docker networking disabled ( --network=none ), then use the networking offered by the "bridge" plugin:
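
  # Start the container without any Docker-managed networking
  # (image_name is a placeholder for whatever image is being run)
  docker run --network=none image_name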


We then run the below command to list the containers and get the container_id:
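
  # Lists all containers; the CONTAINER ID column gives the container_id
  docker ps -a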


Then we run the "bridge" plugin command:
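
A hedged sketch of that invocation, reusing the illustrative bridge configuration shown earlier; since Docker does not create an entry under /var/run/netns by default, the namespace path can be taken from /proc:

  # Find the PID of the container in order to locate its network namespace
  pid=$(docker inspect -f '{{.State.Pid}}' container_id)

  # Run the bridge plugin against that namespace
  CNI_COMMAND=ADD \
  CNI_CONTAINERID=container_id \
  CNI_NETNS=/proc/$pid/ns/net \
  CNI_IFNAME=eth0 \
  CNI_PATH=/opt/cni/bin \
  /opt/cni/bin/bridge < /etc/cni/net.d/10-mynet.conf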


This will take care of the networking part.
The container_id is the identifier of the container running image_name; we get it from the output of the "docker ps -a" command above.
