As the world has embraced containers and recognized their importance, most teams are looking to Kubernetes. Kubernetes is an open-source container orchestration engine for automating deployment, scaling, and management of containerized applications. Pods are the smallest units that can be deployed and managed by Kubernetes, and they come in two types: single-container pods and multi-container pods.
If you are new to Kubernetes, check out our blog on Kubernetes for beginners.
What Is A Kubernetes Pod?

Kubernetes doesn’t run containers directly; instead, it wraps one or more containers into a higher-level structure called a Pod. A pod is a group of one or more containers that are deployed together on the same host, with shared storage and networking and a specification for how to run the containers. Containers in the same pod can easily communicate with each other as though they were on the same machine.
If you need to run a single container in Kubernetes, you create a pod for it; that is a single-container pod. If you have to run two or more containers together, the pod created to hold them is called a multi-container pod.
Know in detail about Pods, from our detailed blog on Kubernetes Pods, creating, deleting and the best practices.
Multi Container Pods

Since containers should not be deployed directly, we use pods to deploy them, as discussed above. But what about tightly coupled containers, for example a database container and a data container: should we place them in two different pods or in a single one?
This is where a multi-container pod comes in. When an application needs several containers running on the same host, the best option is to create a multi-container pod with everything you need for the deployment of the application. But wouldn’t that break the “one process per container” rule?! Yes, it does, and it can make troubleshooting harder. Even so, there are more pros than cons; for instance, more granular containers can be reused between teams.
Design-patterns Of Multi Container Pods
There are well-established design patterns and use cases for combining multiple containers into a single pod. The three common ones are the sidecar pattern, the adapter pattern, and the ambassador pattern; we will go through each of them.
Sidecar Design Pattern
Imagine a use case where a web server needs a log processor running alongside it; the sidecar design pattern addresses exactly this kind of problem. The sidecar pattern pairs the main application container, i.e. the web server, with a helper container whose responsibility is vital for your application but is not necessarily a part of the application itself and might not be needed for the main container to work.
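A minimal sketch of a sidecar pod might look like this; the pod name, mount paths, and log file name are illustrative, and the sidecar simply tails the web server's access log from a shared volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-pod            # illustrative name
spec:
  volumes:
  - name: logs                 # shared volume both containers mount
    emptyDir: {}
  containers:
  - name: web-app              # main application container
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-processor        # sidecar: consumes the logs
    image: busybox
    command: ["/bin/sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
```

In a real setup the sidecar would typically be a log shipper such as a Fluentd or Filebeat container, but the structure (main container plus helper sharing a volume) stays the same.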

Also check: our blog on Kubernetes Install
Adapter Design Pattern
The adapter pattern is used to standardize the output produced by the primary container. Standardizing means formatting the output in a specific manner that fits the standards across your applications. For instance, an adapter container could expose a standardized monitoring interface to the application even though the application does not implement it in a standard way. The adapter container takes care of transforming the output into what is acceptable at the cluster level.
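As a hedged sketch of the idea, the pod below runs a main container that writes status in its own ad-hoc format, and an adapter container that rewrites it into a standardized file; all names and formats here are made up for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: adapter-pod            # illustrative name
spec:
  volumes:
  - name: app-status
    emptyDir: {}
  containers:
  - name: main-app
    image: busybox
    # Writes status in its own, non-standard format
    command: ["/bin/sh", "-c",
      "while true; do date > /status/raw.txt; sleep 5; done"]
    volumeMounts:
    - name: app-status
      mountPath: /status
  - name: adapter
    image: busybox
    # Transforms the raw output into the format expected cluster-wide
    command: ["/bin/sh", "-c",
      "while true; do echo \"timestamp: $(cat /status/raw.txt)\" > /status/metrics.txt; sleep 5; done"]
    volumeMounts:
    - name: app-status
      mountPath: /status
```

A common real-world adapter is a metrics exporter (for example, a Prometheus exporter container) that translates an application's native statistics into the monitoring format the cluster expects.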

Also Read: Our previous blog post on Kubernetes Persistent Storage. Click here
Ambassador Design Pattern
The ambassador design pattern is used to connect containers with the outside world. In this design pattern, the helper container sends network requests on behalf of the main application. It is essentially a proxy that allows other containers to connect to a port on localhost. This is a pretty useful pattern, especially when you are migrating a legacy application to the cloud. For instance, a legacy application that is very difficult to modify can be migrated by using an ambassador container to handle requests on behalf of the main application.
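The shape of an ambassador pod might be sketched as below. Both image names are hypothetical placeholders: the legacy application only ever talks to localhost, and the ambassador container forwards that traffic to wherever the real backing service lives:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ambassador-pod         # illustrative name
spec:
  containers:
  - name: main-app
    image: my-legacy-app       # hypothetical image; connects to localhost:6379
  - name: redis-ambassador
    image: my-redis-proxy      # hypothetical proxy image that forwards
    ports:                     # localhost:6379 traffic to the real Redis service
    - containerPort: 6379
```

Because the ambassador owns the connection details, you can repoint the application from a local Redis to a managed cloud Redis by changing only the proxy's configuration, not the legacy application.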

Check out: Difference between Docker vs VM
Communication Inside A Multi Container Pod
There are three ways that containers in the pod communicate with each other. Namely, Shared Network Namespace, Shared Storage Volumes, and Shared Process Namespace.
Shared Network Namespace
All the containers in the pod share the same network namespace, so all containers can communicate with each other over localhost. For instance, say we have two containers in the same pod listening on ports 8080 and 8081 respectively. Container 1 can talk to container 2 on localhost:8081, and container 2 can reach container 1 on localhost:8080.
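A small sketch of this, using busybox's built-in httpd so each container listens on its own port (the pod name and ports are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-pod         # illustrative name
spec:
  containers:
  - name: container-1
    image: busybox
    command: ["httpd", "-f", "-p", "8080", "-h", "/www"]  # serves on :8080
  - name: container-2
    image: busybox
    command: ["httpd", "-f", "-p", "8081", "-h", "/www"]  # serves on :8081
```

With this pod running, something like `kubectl exec shared-net-pod -c container-1 -- wget -qO- localhost:8081` would hit container 2 directly, with no Service or pod IP involved.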

Check Out: How to Use Configmap Kubernetes. Click here
Shared Storage Volumes
All the containers can have the same volume mounted so that they can communicate with each other by reading and modifying files in the storage volume.
Click here to get detailed information on the Shared Volume way of communication.

To know more about the Kubernetes Scheduler, click here.
Shared Process Namespace
Another way for the containers to communicate is through a shared process namespace. With this, the containers inside the pod can signal each other. To enable it, set shareProcessNamespace to true in the pod spec.
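A minimal sketch of such a pod spec, assuming illustrative names; with the flag set, processes from one container are visible (and signalable) from the other:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-pod         # illustrative name
spec:
  shareProcessNamespace: true  # all containers see one process tree
  containers:
  - name: app
    image: nginx
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      capabilities:
        add: ["SYS_PTRACE"]    # lets this container signal the other's processes
```

Running `ps` inside the shell container would then list the nginx processes as well, and they could be signaled with an ordinary `kill`.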
Click here to get detailed information on the Shared Process Namespace way of communication.

How To Deploy A Multi Container Pod?
Prerequisites:
The only thing you need to deploy a multi-container pod is a running Kubernetes cluster! If you don’t have a cluster up and running, check out: How to deploy a Kubernetes cluster on an Ubuntu server. Once you spin up your cluster, you are ready to deploy a multi-container pod.
Defining a multi-container pod:
As with every resource definition in Kubernetes, we define our multi-container pod in a YAML file. So, the first step is to create a new file with the command:
vim multi-pod.yml
Paste the following code in the file:
apiVersion: v1
kind: Pod
metadata:
  name: multi-pod
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: ubuntu-container
    image: ubuntu
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello, World!!! > /pod-data/index.html"]
In the above YAML file, you will see that we have deployed a container based on the Nginx image as our web server. The second container, named ubuntu-container, is based on the Ubuntu image and writes the text “Hello, World!!!” to the index.html file served up by the first container.
Also Read: Our blog post on Kubernetes ckad exam, Everything you need to know before giving this exam.
How To Deploy A Multi Container Pod:
To deploy the multi-container pod, we use the kubectl command given below:
kubectl apply -f multi-pod.yml
Once the pod is deployed, give the containers a moment to reach the running state (the ubuntu-container will exit after writing its file, while the nginx-container keeps running). Then access the nginx-container with the command:
kubectl exec -it multi-pod -c nginx-container -- /bin/bash
By executing the above command you will find yourself at the bash prompt of the Nginx container. To verify that the second container has done its job, issue the following command:
curl localhost
You should see “Hello, World!!!” printed out.

Ubuntu-container successfully wrote the required text to the NGINX index.html file
Hurray! We have successfully deployed a multi-container pod in a Kubernetes cluster. Even though this is a very basic example, it shows how containers interact within the same pod.
Also Check: Our blog post on Kubernetes Architecture Components, to understand how it works.
Why Use Multi Container Pods
There are many good reasons to use them; here are some:
- The primary purpose of a multi-container Pod is to support co-located, co-managed helper processes for a primary application.
- With the same network namespace, shared volumes, and the same IPC namespace, these containers can communicate efficiently, ensuring data locality.
- They enable you to manage several tightly coupled application containers as a single unit.
- All containers in a pod share the same lifecycle and are always scheduled onto the same node.