Kubernetes 101: Making our Pods accessible to client applications - ClusterIP, endpoints, kube-proxy, ...



We have four nginx Pods running on two different nodes. Below is the YAML manifest of the Deployment that runs these four nginx replicas, each listening on port 80:
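The original manifest is not reproduced here; a minimal sketch of such a Deployment could look like the following (the name and labels are assumptions — note that Kubernetes resource names cannot contain underscores, so `deploy-1` is used in place of the article's `deploy_1`):

```yaml
# Sketch of a Deployment running four nginx replicas on port 80
# (name and labels are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-1
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```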


We then create a ClusterIP service that client applications can use to reach these pods, using the command below:
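The exact command is not shown in the text; imperatively, creating such a service could look like this sketch (the deployment name is an assumption):

```shell
# Expose the deployment behind a ClusterIP service on port 8888,
# forwarding traffic to the pods' port 80 (deployment name is illustrative)
kubectl expose deployment deploy-1 --type=ClusterIP --port=8888 --target-port=80
```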


- 8888: The port the ClusterIP service is listening on.
- 80: The port the pods are listening on.

A service named "deploy_1.svc" will be created, along with its endpoints, which represent the IP addresses and ports of the nginx pods, as we can see below:
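The listing is not reproduced here; the service and its endpoints can be inspected with commands like these (the service name is assumed to match the one created above):

```shell
# Show the service (its ClusterIP and port) and the endpoints
# object listing the pod IP:port pairs behind it
kubectl get service deploy-1
kubectl get endpoints deploy-1
```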


kube-proxy creates the iptables rules for the ClusterIP service, which has the address and port 10.23.4.23:8888.
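These rules can be inspected on a node with `iptables-save`. For a service like this one they would follow roughly the shape below — a sketch only, with illustrative chain suffixes and pod IPs:

```
# Traffic to the ClusterIP 10.23.4.23:8888 jumps to the service chain
-A KUBE-SERVICES -d 10.23.4.23/32 -p tcp -m tcp --dport 8888 -j KUBE-SVC-XXXXXXXXXXXXXXXX

# The service chain picks one endpoint at random
# (with 4 endpoints: probability 1/4, then 1/3, then 1/2, then the remainder)
-A KUBE-SVC-XXXXXXXXXXXXXXXX -m statistic --mode random --probability 0.25 -j KUBE-SEP-AAAAAAAAAAAAAAAA

# Each endpoint chain DNATs to one pod's IP and port 80
-A KUBE-SEP-AAAAAAAAAAAAAAAA -p tcp -m tcp -j DNAT --to-destination 10.244.1.5:80
```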

Remark:

iptables rules form a sequential list that is evaluated rule by rule, so lookups tend to get slower as the number of services and endpoints grows. For clusters with many services we can run kube-proxy in IPVS mode instead of iptables mode; IPVS uses hash tables for its lookups.
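Switching modes is a kube-proxy configuration change; a minimal fragment of the relevant configuration looks like this:

```yaml
# kube-proxy configuration fragment selecting IPVS mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
```

Once IPVS mode is active, the virtual servers and their backends can be listed on a node with `ipvsadm -Ln`.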

The Cilium CNI, for example, does not use kube-proxy: a cilium agent running on each node creates the rules for the ClusterIP and stores them in an eBPF map.
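Assuming Cilium is deployed the usual way, as a DaemonSet named `cilium` in the `kube-system` namespace, the service translations the agent has programmed can be listed from inside an agent pod:

```shell
# Show the ClusterIP-to-backend translations held in Cilium's eBPF maps
kubectl -n kube-system exec ds/cilium -- cilium service list
```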

eBPF programs are somewhat similar to Linux kernel modules:

  • They can be loaded into the kernel.
  • They are easier to write.
  • They respond to events.
  • They are checked by an in-kernel verifier before they are allowed to run.
  • They share their data with user space through eBPF maps.
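That last point — maps as the kernel/user-space interface — can be observed directly with the `bpftool` utility (run as root; the map id below is illustrative):

```shell
# List the eBPF maps currently loaded in the kernel,
# then dump the contents of one of them by its id
sudo bpftool map list
sudo bpftool map dump id 42
```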
