Part 1: What is the Purpose of the Pause Container?
In my Kubernetes Core Concepts: Pod article, I stated that containers in the same Pod can access each other via localhost.
How does it work?
When you create two containers on Docker, these containers cannot communicate via localhost, because they run in different network namespaces.
Let’s verify this statement:
root@main:~# docker run -dit --name web1 -e PORT=1111 webratio/nodejs-http-server
eca3f2e0aad53fa2955473dae2af001c358936e654f2e9757a0af81887177f17
root@main:~# docker run -dit --name web2 -e PORT=2222 webratio/nodejs-http-server
bf0bfa83d15a4bbac98af94c3ebf19d13b4f250fbf4cb3f32a016f1a7704acae
root@main:~# docker exec -it web1 bash
root@eca3f2e0aad5:/# curl -I localhost:1111
HTTP/1.1 200 OK
...
root@eca3f2e0aad5:/# curl -I localhost:2222
curl: (7) Failed to connect to localhost port 2222: Connection refused
I can access port 1111 from the web1 container, because the process listening on port 1111 is running in web1. But I can’t access port 2222 from web1, because the process listening on port 2222 is running in web2.
Since the containers are isolated in separate network namespaces, they cannot communicate with each other via localhost.
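We can confirm that the two containers really do live in different network namespaces by comparing the namespace inodes of their main processes (a quick check, assuming a standard Docker host with /proc mounted):
# different inode numbers confirm separate network namespaces
root@main:~# readlink /proc/$(docker inspect -f '{{.State.Pid}}' web1)/ns/net
root@main:~# readlink /proc/$(docker inspect -f '{{.State.Pid}}' web2)/ns/net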
I’ll try running these two containers in a Kubernetes cluster with the same configuration:
00-multiple-container-in-one-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: adil-blog
spec:
  containers:
  - image: webratio/nodejs-http-server
    name: web1
    env:
    - name: PORT
      value: "1111"
  - image: webratio/nodejs-http-server
    name: web2
    env:
    - name: PORT
      value: "2222"
Apply:
➜ k8s kubectl apply -f 00-multiple-container-in-one-pod.yaml
pod/adil-blog created
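Both containers should come up in the same Pod; kubectl should report READY as 2/2:
➜ k8s kubectl get pod adil-blog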
Test connectivity between containers:
➜ ~ kubectl exec -it adil-blog --container web1 -- /bin/bash
root@adil-blog:/# curl -I localhost:1111
HTTP/1.1 200 OK
....
root@adil-blog:/# curl -I localhost:2222
HTTP/1.1 200 OK
...
curl localhost:2222 works fine in the web1 container on Kubernetes. But why?
Let’s ssh into the Kubernetes node and see what’s going on.
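kubectl tells us which node the Pod landed on; the NODE column of the wide output is what we need:
➜ ~ kubectl get pod adil-blog -o wide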
List containers:
[root@ip-192-168-46-242 ~]# ctr -n k8s.io c list
CONTAINER IMAGE RUNTIME
1cee2b4a5c283b5f0d0ded98f98cb741c0c1e8f11cb276395818f9cbdda93629 602401143452.dkr.ecr-fips.us-east-1.amazonaws.com/eks/pause:3.5 io.containerd.runc.v2
45d69046a900f991a26c64d7934e8f3423eaab6cea19601780738bd1f09de08e 602401143452.dkr.ecr-fips.us-east-1.amazonaws.com/eks/pause:3.5 io.containerd.runc.v2
4dd32a0c0623efafa210cd4bbb1a0628b990d3d80fd337dee62f4650a5141ebe docker.io/webratio/nodejs-http-server:latest io.containerd.runc.v2
4e7b895e5bdcf71610045c7133fcbf6b3bacbf02df1960b442dcbf134e2a0cd7 602401143452.dkr.ecr-fips.us-east-1.amazonaws.com/amazon-k8s-cni:v1.11.4 io.containerd.runc.v2
743c5b145d76e3051b50df9a049171f40fb265e3bc877c7addce773ee56a1b6c docker.io/webratio/nodejs-http-server:latest io.containerd.runc.v2
7c6848a6c21259ee42ce8dae109608c914642d1be49b9d1126d115dd0e60d72e 602401143452.dkr.ecr-fips.us-east-1.amazonaws.com/amazon-k8s-cni-init:v1.11.4 io.containerd.runc.v2
c202955ab57e3b8a09f2b683891547634712e2ec7c9c197e70355b6d56e78da9 602401143452.dkr.ecr-fips.us-east-1.amazonaws.com/eks/pause:3.5 io.containerd.runc.v2
e3ac2c00e0df70b3e6c22d03bf60930338d9776b47435535af45560f5f529c16 602401143452.dkr.ecr-fips.us-east-1.amazonaws.com/eks/kube-proxy:v1.24.7-minimal-eksbuild.2 io.containerd.runc.v2
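If crictl is installed on the node (an assumption, though it usually ships with EKS AMIs), the same information is available grouped by Pod, which makes the sandbox relationship easier to see. Replace <sandbox-id> with the POD ID printed by the first command:
# list the Pod sandbox (backed by the pause container)
[root@ip-192-168-46-242 ~]# crictl pods --name adil-blog
# list only the containers that belong to that sandbox
[root@ip-192-168-46-242 ~]# crictl ps --pod <sandbox-id>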
There are two containers with the webratio/nodejs-http-server
image. This is expected.
Let’s examine the namespaces of these webratio/nodejs-http-server
containers.
[root@ip-192-168-46-242 ~]# ctr -n k8s.io c info 4dd32a0c0623efafa210cd4bbb1a0628b990d3d80fd337dee62f4650a5141ebe | grep namespaces -A19
"namespaces": [
{
"type": "pid"
},
{
"type": "ipc",
"path": "/proc/5571/ns/ipc"
},
{
"type": "uts",
"path": "/proc/5571/ns/uts"
},
{
"type": "mount"
},
{
"type": "network",
"path": "/proc/5571/ns/net"
}
],
[root@ip-192-168-46-242 ~]# ctr -n k8s.io c info 743c5b145d76e3051b50df9a049171f40fb265e3bc877c7addce773ee56a1b6c | grep namespaces -A19
"namespaces": [
{
"type": "pid"
},
{
"type": "ipc",
"path": "/proc/5571/ns/ipc"
},
{
"type": "uts",
"path": "/proc/5571/ns/uts"
},
{
"type": "mount"
},
{
"type": "network",
"path": "/proc/5571/ns/net"
}
],
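A quicker way to compare the two namespace lists, assuming jq is installed on the node:
# both commands should print identical ipc/uts/network namespace paths
[root@ip-192-168-46-242 ~]# ctr -n k8s.io c info 4dd32a0c0623efafa210cd4bbb1a0628b990d3d80fd337dee62f4650a5141ebe | jq '.Spec.linux.namespaces'
[root@ip-192-168-46-242 ~]# ctr -n k8s.io c info 743c5b145d76e3051b50df9a049171f40fb265e3bc877c7addce773ee56a1b6c | jq '.Spec.linux.namespaces'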
Both web containers are joining PID 5571’s ipc, uts, and network namespaces. (Each keeps its own pid and mount namespaces; entries without a path mean a new namespace is created for the container.)
So what is PID 5571?
[root@ip-192-168-46-242 ~]# ps aux | grep 5571
65535 5571 0.0 0.0 972 4 ? Ss 09:05 0:00 /pause
PID 5571 belongs to the Pause container.
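We can double-check this from the node by comparing namespace inodes. Replace <web1-pid> with the host PID of web1’s node process (visible in ps aux); both commands should print the same inode:
[root@ip-192-168-46-242 ~]# readlink /proc/5571/ns/net          # the pause container's network namespace
[root@ip-192-168-46-242 ~]# readlink /proc/<web1-pid>/ns/net    # same inode => same namespace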
Since we have multiple Pause containers, here’s how we can tell which Pause container is being used by our containers:
[root@ip-192-168-46-242 ~]# ctr -n k8s.io c info 743c5b145d76e3051b50df9a049171f40fb265e3bc877c7addce773ee56a1b6c | grep sandbox-id
"io.kubernetes.cri.sandbox-id": "45d69046a900f991a26c64d7934e8f3423eaab6cea19601780738bd1f09de08e",
[root@ip-192-168-46-242 ~]# ctr -n k8s.io c info 45d69046a900f991a26c64d7934e8f3423eaab6cea19601780738bd1f09de08e | grep Image
"Image": "602401143452.dkr.ecr-fips.us-east-1.amazonaws.com/eks/pause:3.5",
What is the purpose of the ipc, uts, and network namespaces?
IPC namespace
The IPC namespace lets processes in different containers communicate with each other through Inter-Process Communication mechanisms (for example, System V message queues and shared memory).
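To see the shared IPC namespace in action, create a System V message queue in web1 and list queues from web2 (assuming the util-linux ipcmk/ipcs tools exist in the image):
➜ ~ kubectl exec adil-blog -c web1 -- ipcmk -Q   # create a message queue in web1
➜ ~ kubectl exec adil-blog -c web2 -- ipcs -q    # the same queue is visible from web2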
UTS namespace
The UTS namespace lets the containers in the Pod share the same hostname.
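You can verify this directly; both containers report the Pod’s name as their hostname:
➜ ~ kubectl exec adil-blog -c web1 -- hostname
adil-blog
➜ ~ kubectl exec adil-blog -c web2 -- hostname
adil-blog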
Network namespace
I have an article explaining how the network namespace works: Container Networking Under The Hood: Network Namespaces
The network namespace gives the containers a shared network stack. Thanks to the network namespace, containers in the same Pod can communicate with each other via localhost.
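They also share the Pod IP. Assuming hostname -i is available in the image, both containers print the same address:
➜ ~ kubectl exec adil-blog -c web1 -- hostname -i
➜ ~ kubectl exec adil-blog -c web2 -- hostname -i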
Relationship between the Pause container and our containers (web1, web2)
When you deploy one or more containers in a Pod, Kubernetes first creates a Pause container in that Pod. The containers you deploy are then attached to the Pause container’s namespaces.
How Can Two Containers Communicate Via localhost on Docker?
We will create a Pause container on Docker and attach two more containers to the Pause container’s network namespace.
root@main:~# docker run -d --name pause --ipc=shareable google/pause
701e90302606455c50e25143011cf7a4f811c2fa94625e19386fbcbe20acafc0
# attach web1 & web2 to the Pause container's network namespace
root@main:~# docker run -d --name web1 --net=container:pause -e PORT=1111 webratio/nodejs-http-server
85df94162c8330baa3f485bb30c28e038eff84b6d305a25b9be0bf27e0bd437a
root@main:~# docker run -d --name web2 --net=container:pause -e PORT=2222 webratio/nodejs-http-server
9fe8429c39b74ec0b433f3fcf6189c874b1b819ecc36c3c85f8b3a05699e8f03
Make sure the web1 and web2 containers are added to the Pause container’s namespace:
root@main:~# pause_container_id=$(docker ps -aqf name=pause)
root@main:~# docker inspect web1 | grep $pause_container_id
"ResolvConfPath": "/var/lib/docker/containers/701e90302606455c50e25143011cf7a4f811c2fa94625e19386fbcbe20acafc0/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/701e90302606455c50e25143011cf7a4f811c2fa94625e19386fbcbe20acafc0/hostname",
"HostsPath": "/var/lib/docker/containers/701e90302606455c50e25143011cf7a4f811c2fa94625e19386fbcbe20acafc0/hosts",
"NetworkMode": "container:701e90302606455c50e25143011cf7a4f811c2fa94625e19386fbcbe20acafc0",
"Hostname": "701e90302606",
root@main:~# docker inspect web2 | grep $pause_container_id
"ResolvConfPath": "/var/lib/docker/containers/701e90302606455c50e25143011cf7a4f811c2fa94625e19386fbcbe20acafc0/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/701e90302606455c50e25143011cf7a4f811c2fa94625e19386fbcbe20acafc0/hostname",
"HostsPath": "/var/lib/docker/containers/701e90302606455c50e25143011cf7a4f811c2fa94625e19386fbcbe20acafc0/hosts",
"NetworkMode": "container:701e90302606455c50e25143011cf7a4f811c2fa94625e19386fbcbe20acafc0",
"Hostname": "701e90302606",
Test connectivity between web1 and web2:
root@main:~# docker exec -it web1 bash
root@701e90302606:/# curl -I localhost:1111
HTTP/1.1 200 OK
...
root@701e90302606:/# curl -I localhost:2222
HTTP/1.1 200 OK
...
root@701e90302606:/# exit
root@main:~# docker exec -it web2 bash
root@701e90302606:/# curl -I localhost:1111
HTTP/1.1 200 OK
...
root@701e90302606:/# curl -I localhost:2222
HTTP/1.1 200 OK
...
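Note that the Pause container was started with --ipc=shareable for a reason: Docker can also join another container’s IPC namespace, which mirrors the ipc sharing we saw in the ctr output on the Kubernetes node. A sketch with a hypothetical third container:
# join the Pause container's IPC namespace as well as its network namespace
root@main:~# docker run -d --name web3 --net=container:pause --ipc=container:pause -e PORT=3333 webratio/nodejs-http-server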