Kubernetes is one of the leading container orchestration platforms and is very much in the spotlight right now. Everybody wants to learn Kubernetes these days, and the best way to learn is by doing. Developers in particular want to earn the Certified Kubernetes Application Developer (CKAD) badge and add Kubernetes application development to their expertise.
This blog outlines the tasks you need to carry out to learn Kubernetes application development, clear the Certified Kubernetes Application Developer (CKAD) exam, and gain a thorough knowledge of the subject.
The learning path for an aspiring Kubernetes Application Developer is given below:

Activity Guide I: Register For Azure Free Trial Account
The first thing you must do is get a trial account for Microsoft Azure (you get $200 of FREE credit from Microsoft to practice with).
Microsoft Azure is one of the top choices for any organization due to the freedom it gives you to build, manage, and deploy applications. Now, we will look at how to register for the Microsoft Azure FREE trial account; to register, click here.
After you register for the Microsoft Azure trial account, you should get an email like the one below from Microsoft:

II: VM Creation Walkthrough On Azure Cloud
The most basic task that can be performed on any cloud platform is the creation of a Virtual Machine.
Azure Virtual Machines (VM) is one of several types of on-demand, scalable computing resources that Azure offers. Typically, you choose a VM when you need more control over the computing environment than the other choices offer.

This guide gives you information about what you should consider before you create a VM, how you create it, and how you manage it.
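As a quick sketch of what that looks like from the command line (the resource group, VM name, size, and image alias here are placeholders, so adjust them to your own subscription and Azure CLI version), an Ubuntu VM can be created like this:

```bash
# Create a resource group to hold the lab resources (names and region are placeholders).
az group create --name k8s-learning-rg --location eastus

# Create a small Ubuntu VM with SSH key authentication.
az vm create \
  --resource-group k8s-learning-rg \
  --name docker-lab-vm \
  --image Ubuntu2204 \
  --size Standard_B2s \
  --admin-username azureuser \
  --generate-ssh-keys
```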
III: Docker Installation & Launching Container From Docker Hub
The first thing you must do is install Docker on the machine so that you can run Docker commands and perform operations. The second important task is to launch a container using the Ubuntu image from Docker Hub's public images.
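A minimal sketch of both steps on an Ubuntu host (the container name is just an example) looks like this:

```bash
# Install Docker using the official convenience script (fine for lab machines).
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo docker version   # verify the installation

# Launch an interactive container from the public Ubuntu image on Docker Hub.
sudo docker run -it --name my-ubuntu ubuntu:22.04 bash
```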
Read our blog to get an idea of Docker architecture.
IV: Working With Docker Images
A Docker image is a read-only template with instructions for creating a Docker container. It is a file made up of multiple layers and is used to execute code inside a Docker container.
In this activity guide, we cover how to create and push an image, tag images, inspect image details, list images, and delete images from the local repository.
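As a rough sketch of those operations (replace <your-dockerhub-user> with your own Docker Hub account), the core image commands look like this:

```bash
# List local images and inspect one in detail.
docker images
docker image inspect ubuntu:22.04

# Tag an image for your own Docker Hub repository.
docker tag ubuntu:22.04 <your-dockerhub-user>/my-ubuntu:v1

# Push it to Docker Hub (requires docker login), then delete it from the local repo.
docker push <your-dockerhub-user>/my-ubuntu:v1
docker rmi <your-dockerhub-user>/my-ubuntu:v1
```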
Check out our blog to get an overview of Docker images.

V: Docker Default And Custom Bridge Networking
Docker networking connects containers to each other and to the outside world so that they can communicate with one another and with the Docker host. The Docker bridge driver automatically installs rules on the host machine so that containers on different bridge networks cannot communicate directly with each other.
In this activity guide, we cover how to inspect the bridge network, start and stop containers on the default bridge network, check network connectivity, create a custom bridge network, and create containers connected to that custom bridge.
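Here is a small sketch of those steps (container and network names are examples only):

```bash
# Inspect the default bridge network.
docker network inspect bridge

# Create a custom bridge network and attach a container to it.
docker network create my-bridge
docker run -d --name web1 --network my-bridge nginx

# On a custom bridge, containers can resolve each other by name.
docker run --rm --network my-bridge busybox ping -c 2 web1
```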
To get a brief introduction, visit our page on Docker Networking.

Also check: Docker vs VM, a comparison of the two that you should know.
VI: Working With Docker Volume & Implementing Docker Storage Hostpath
Container storage is non-persistent by default, but you can mount a separate volume into a container. That volume provides persistent storage, so you can start up, shut down, and perform operations on the container without losing data or state.
In this activity guide, we cover how to create a container and mount a host path into it, customise the web page, create a Docker volume, inspect the volume, create a file in the mounted volume path, and create a directory on the Docker host.
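A minimal sketch of both approaches, assuming an NGINX container and a host directory of /opt/webcontent (both are just examples), could look like this:

```bash
# Bind-mount a host path into an NGINX container and customise the page it serves.
mkdir -p /opt/webcontent
echo "Hello from the Docker host" > /opt/webcontent/index.html
docker run -d --name web-host -p 8080:80 -v /opt/webcontent:/usr/share/nginx/html nginx
curl http://localhost:8080

# Create and inspect a named Docker volume, then mount it into another container.
docker volume create app-data
docker volume inspect app-data
docker run -d --name web-vol -v app-data:/usr/share/nginx/html nginx
```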

VII: Configuring External DNS, Logging And Storage Driver
By default, a container inherits the DNS settings of the host, as defined in its /etc/resolv.conf file. Containers that use the default bridge network get a copy of this file, whereas containers that use a custom network use Docker's embedded DNS server, which forwards external DNS lookups to the DNS servers configured on the host.
In this activity guide, we cover how to verify the resolv.conf file content, create or update daemon.json to use an external DNS server for all containers, restart the Docker service, start a container with a specific logging driver, and verify the currently configured storage driver.
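As a sketch (the DNS servers and log options below are example values, not a recommendation), the daemon-level configuration lives in /etc/docker/daemon.json:

```bash
# Check the DNS configuration a container received.
docker run --rm busybox cat /etc/resolv.conf

# Example /etc/docker/daemon.json setting external DNS and a default logging driver.
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "dns": ["8.8.8.8", "8.8.4.4"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF
sudo systemctl restart docker

# Start a container with a specific logging driver (journald works on systemd hosts)
# and verify which storage driver is currently configured.
docker run -d --name logtest --log-driver journald nginx
docker info --format '{{.Driver}}'
```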
VIII: Working With Application Stack & Create Dockerfile
When running Docker Engine in swarm mode, we can use docker stack to deploy a complete application stack to the swarm.
A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. The docker build command builds an image from a Dockerfile.
In this activity guide, we cover how to create a Dockerfile, add instructions to it, create a sample index.html file, build the Dockerfile, use the resulting “nginxbuilt” image to start an NGINX container, install docker-compose, build and run the application with docker-compose, and edit the Compose file to add a bind mount.
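A minimal sketch of the Dockerfile part (the directory name and page content are placeholders; the image tag “nginxbuilt” matches the one mentioned above):

```bash
# Create a build context with a sample page and a two-line Dockerfile.
mkdir nginx-build && cd nginx-build
echo "<h1>Hello from my custom image</h1>" > index.html

cat > Dockerfile <<'EOF'
FROM nginx:latest
COPY index.html /usr/share/nginx/html/index.html
EOF

# Build the image and start an NGINX container from it.
docker build -t nginxbuilt .
docker run -d --name custom-nginx -p 8080:80 nginxbuilt
```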

IX: Bootstrap A Cluster Using Kubeadm
A Kubernetes cluster is a set of node machines for running containerized applications. In Kubernetes, nodes pool together their resources to form a more powerful machine.
In this activity guide, we cover how to bootstrap a Kubernetes cluster using kubeadm: installing the kubeadm and kubectl packages, creating the cluster, joining worker nodes to it, running an NGINX server as a pod, and running an NGINX server as a scalable deployment.
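As a rough sketch of the flow (this assumes the container runtime and the kubeadm, kubelet, and kubectl packages are already installed; the CIDR is just an example):

```bash
# On the master node: initialise the control plane.
sudo kubeadm init --pod-network-cidr=192.168.0.0/16

# Configure kubectl for your user.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On each worker node, run the join command printed by kubeadm init, in the form:
# sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

# Run NGINX as a single pod, then as a scalable deployment.
kubectl run nginx-pod --image=nginx
kubectl create deployment nginx-deploy --image=nginx --replicas=3
```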
Visit our Kubernetes Architecture blog to know more about it.

Kubernetes Cluster (1 Master Node, 2 Worker Nodes)
X: Creating Pods With ClusterIP & NodePort Type Of Services
A Kubernetes pod is a group of containers that are deployed together on the same host.
ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created.
In this activity guide, we cover how to create pods, the smallest deployable units in Kubernetes, and how to expose them with ClusterIP and NodePort type services.
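A short sketch using kubectl expose (the deployment and service names are examples):

```bash
# Create a deployment, then expose it internally and externally.
kubectl create deployment web --image=nginx

# ClusterIP (the default): reachable only from inside the cluster.
kubectl expose deployment web --name=web-clusterip --port=80 --target-port=80

# NodePort: reachable on every node's IP at an allocated port in the 30000-32767 range.
kubectl expose deployment web --name=web-nodeport --type=NodePort --port=80 --target-port=80

kubectl get services web-clusterip web-nodeport
```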


XI: Deploying Highly Available & Scalable Application
High availability in Kubernetes means running multiple replicas of your containers so that the failure of a single instance does not take the application down. An application running in a single container can easily fail, so it is very important to deploy applications in a highly available and scalable way to provide better service.
This guide walks you through deploying such applications, which is one of the most vital aspects of application deployment.
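As a minimal sketch, a Deployment with several replicas (the names, image, and replica counts here are only examples) already gives you both redundancy and easy scaling:

```bash
# A deployment with multiple replicas, applied from an inline manifest.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ha-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ha-web
  template:
    metadata:
      labels:
        app: ha-web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
EOF

# Scale up or down as demand changes.
kubectl scale deployment ha-web --replicas=5
```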
Visit our blog to learn in detail about high availability and scalable applications.

XII: Upgrading & Rollback Application With Deployment And ReplicaSet
Using rolling updates, we can upgrade the image used by a Deployment. The state of the Deployment is saved, which allows us to roll back to previous versions of it. Every time you create a Deployment, it creates a ReplicaSet and delegates creating (and deleting) the Pods to it.
This activity guide helps you understand how to use the rollback options to return to a previous Deployment revision, which will be very helpful at times.
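A short sketch of that workflow, reusing the hypothetical ha-web deployment from the previous section:

```bash
# Upgrade the image used by the deployment (a rolling update) and watch it progress.
kubectl set image deployment/ha-web nginx=nginx:1.26
kubectl rollout status deployment/ha-web

# Review the revision history and roll back to the previous version.
kubectl rollout history deployment/ha-web
kubectl rollout undo deployment/ha-web
```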

XIII: Automated Scaling Of Application With HPA And Metric Server
The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, replica set, or stateful set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics). If the metrics-server plugin is installed in your cluster, you will be able to see the CPU and memory values for your cluster nodes or any of the pods.
This activity guide covers the process of autoscaling applications with the Horizontal Pod Autoscaler and the Metrics Server.
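A minimal sketch, again assuming the hypothetical ha-web deployment (its containers would need CPU requests defined for CPU-based autoscaling to work):

```bash
# Install metrics-server from its upstream manifest (check the project docs for the current URL).
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Autoscale between 2 and 10 replicas, targeting 50% average CPU utilisation.
kubectl autoscale deployment ha-web --cpu-percent=50 --min=2 --max=10

# Watch the autoscaler and the node/pod metrics.
kubectl get hpa
kubectl top nodes
kubectl top pods
```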

XIV: Pod Assignment With The Node Selector
Kubernetes (since version 1.6) offers four advanced scheduling features; this guide focuses on the node selector, and a minimal example is sketched after the list:
- Node Selector
- Node Affinity
- Pod Affinity/Anti-Affinity
- Taints and Tolerations
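Here is that node selector sketch (the node name and label are placeholders):

```bash
# Label a node, then pin a pod to nodes carrying that label via nodeSelector.
kubectl label nodes worker-1 disktype=ssd

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-ssd
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: nginx
    image: nginx
EOF
```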
XV: Advanced Pod Scheduling With Node Affinity And Anti-Affinity
Node affinity is a set of rules used by the scheduler to determine where a pod can be placed. The rules are defined using custom labels on nodes and label selectors specified in pods. Node affinity allows a pod to specify an affinity (or anti-affinity) towards a group of nodes it can be placed on. The node does not have control over the placement.
In this activity guide, we cover how to create a deployment with node affinity, verify pod scheduling, create a deployment with kubectl, create a deployment with node anti-affinity, and create a pod with node anti-affinity.
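A minimal sketch of required node affinity (the disktype=ssd label is the same placeholder as above; anti-affinity towards a node label is expressed with the NotIn or DoesNotExist operators):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: affinity-demo
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In        # use NotIn here for node anti-affinity
            values: ["ssd"]
  containers:
  - name: nginx
    image: nginx
EOF

# Verify which node the pod landed on.
kubectl get pod affinity-demo -o wide
```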

XVI: Advanced Scheduling With Pod Affinity And Anti-Affinity
Pod affinity and pod anti-affinity allow you to specify rules about how pods should be placed relative to other pods. The rules are defined using label selectors that match labels on other pods, together with a topology key (a node label such as hostname or zone) that defines the placement domain. Pod affinity/anti-affinity allows a pod to specify an affinity (or anti-affinity) towards a group of pods it can be placed with. The node does not have control over the placement.
In this activity guide, we cover how to create a sample application deployment and how to create deployments with pod affinity and anti-affinity.
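A sketch of pod anti-affinity that spreads replicas across nodes (names and labels are examples):

```bash
# No two pods with the label app=web may land on the same node (topologyKey = hostname).
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-spread
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: web
            topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx
        image: nginx
EOF
```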

XVII: Advanced Scheduling With Taint And Toleration
A taint allows a node to refuse a pod to be scheduled on it unless that pod has a matching toleration.
You apply taints to a node through the node specification and apply tolerations to a pod through the pod specification. A taint on a node instructs the node to repel all pods that do not tolerate the taint.
In this activity guide, we cover tainting a node to simulate advanced scheduling, creating pods with and without a toleration, and simulating eviction of a pod using the NoSchedule effect.
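A short sketch of that flow (the node name, taint key, and value are placeholders):

```bash
# Taint a worker node so that only tolerating pods can be scheduled on it.
kubectl taint nodes worker-1 dedicated=experiment:NoSchedule

# A pod carrying the matching toleration can still be scheduled there.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "experiment"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx
EOF

# Remove the taint afterwards (note the trailing minus).
kubectl taint nodes worker-1 dedicated=experiment:NoSchedule-
```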
XVIII: Assigning Resource Quota And Demonstrating Limiting Resources Scenario
If a container requests a resource, Kubernetes will only schedule it on a node that can give it that resource. Limits, on the other hand, make sure a container never goes above a certain value. The container is only allowed to go up to the limit, and then it is restricted.
In this activity guide, we cover how to create a namespace, create a resource quota, and simulate resource creation failures due to resource quota and count limits.
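A minimal sketch (the namespace name and quota values are examples only):

```bash
# Create a namespace and attach a quota that caps both counts and compute resources.
kubectl create namespace quota-demo

kubectl apply -n quota-demo -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
spec:
  hard:
    pods: "2"
    requests.cpu: "500m"
    requests.memory: 512Mi
    limits.cpu: "1"
    limits.memory: 1Gi
EOF

kubectl describe resourcequota demo-quota -n quota-demo
# Creating a third pod in this namespace will now be rejected by the quota.
```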

XIX: Understanding Helm & Helm charts
Helm is an open-source tool used for packaging and deploying applications on Kubernetes. It is often referred to as the Kubernetes Package Manager because of its similarities to any other package manager you would find on your favorite OS.
In this section of the activity guides, we cover how to install Helm, deploy applications using Helm, and access those applications.
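As a sketch (bitnami/nginx is just one example chart, and the release name is arbitrary):

```bash
# Install Helm v3 using its install script.
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Add a chart repository and install a release from it.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-web bitnami/nginx

# Inspect the release and the Kubernetes resources it created.
helm list
kubectl get all -l app.kubernetes.io/instance=my-web
```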
XX: Configuring Health Checks – Readiness/Liveness Probes
Health checks, or probes as they are called in Kubernetes, are carried out by the kubelet to determine when to restart a container, and are used by services and deployments to determine whether a pod should receive traffic.
In this activity guide, we cover how to create a pod with readiness and liveness probe health-check configuration and how to simulate readiness and liveness probe failures.
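A minimal sketch of a pod with both probes (pointing a probe at a path that does not exist is one easy way to simulate a failure):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /          # change to a non-existent path to simulate readiness failure
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 10
EOF

# Probe results show up in the pod's events.
kubectl describe pod probe-demo
```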
Learn more: Helm in Kubernetes: An Introduction to Helm
XXI: Troubleshooting Application-level Failure
Kubernetes troubleshooting has two sides: troubleshooting your application, which is useful for users who are deploying code into Kubernetes and wondering why it is not working, and troubleshooting your cluster, which is useful for cluster administrators and people whose Kubernetes cluster is unhappy.
In this activity guide, we cover simulating a failure scenario, analysing the failure by creating a pod, troubleshooting the control plane, troubleshooting application pod failures, and troubleshooting kubelet failures.
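A few typical first commands for each of those areas (pod names are placeholders):

```bash
# Application-level checks.
kubectl get pods -A                      # look for CrashLoopBackOff / Pending / Error states
kubectl describe pod <pod-name>          # events: image pull errors, failed scheduling, probe failures
kubectl logs <pod-name> --previous       # logs from the last crashed container

# Control plane and node-level checks on a kubeadm cluster.
kubectl get nodes
kubectl get pods -n kube-system
sudo systemctl status kubelet
sudo journalctl -u kubelet --no-pager | tail -n 50
```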
XXII: Monitor And Debug Container Logs
While building a containerized application, logging is definitely one of the most important things to get right. A logging agent is a dedicated tool that exposes or forwards logs; it typically runs as a container with access to a directory containing the log files from all of the application containers on that node.
In this activity guide, we cover how to monitor containerized applications and the different debugging techniques that involve container logs.
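Some of the basic log commands involved look like this (pod, container, and label names are placeholders):

```bash
# Stream logs from a pod, or from a specific container in a multi-container pod.
kubectl logs -f <pod-name>
kubectl logs -f <pod-name> -c <container-name>

# Recent log lines with timestamps, and logs from all pods sharing a label.
kubectl logs --since=10m --timestamps <pod-name>
kubectl logs -l app=web --tail=20
```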

XXIII: Adding ELK Logging And Monitoring To Your Guestbook Application
Logging, alerting, and monitoring are key components of the software life cycle. Having an effective alerting and monitoring tool improves system performance and productivity and helps reduce (or even eliminate) downtime. It can help you identify and fix issues faster, minimizing the impact on your customers and business.
In this activity guide, we cover how to add logging and monitoring to the application using an ELK (Elasticsearch, Logstash, Kibana) cluster.

XXIV: Deploying Stateful Applications In Kubernetes Cluster
A StatefulSet is the workload API object used to manage stateful applications. Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec.
In this activity guide, we cover creating a logging namespace, setting up an Elasticsearch application, working with the Pods in a StatefulSet, scaling a StatefulSet object up and down, rolling updates for StatefulSets, and cleaning up the resources created in the lab exercise.
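A short sketch of the scaling and rolling-update parts, assuming a StatefulSet named elasticsearch (with a container of the same name) has already been applied in a logging namespace:

```bash
kubectl create namespace logging

# Scale the StatefulSet; pods get stable ordinal names (elasticsearch-0, -1, -2).
kubectl scale statefulset elasticsearch -n logging --replicas=3
kubectl get pods -n logging

# A rolling update: change the image and watch the pods update in reverse ordinal order.
kubectl set image statefulset/elasticsearch \
  elasticsearch=docker.elastic.co/elasticsearch/elasticsearch:7.17.10 -n logging
kubectl rollout status statefulset/elasticsearch -n logging
```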

XXV: Demonstrating Ingress Controller Load Balancing Techniques
In order for the Ingress resource to work, the cluster must have an ingress controller running. Unlike other types of controllers that run as part of the kube-controller-manager binary, ingress controllers are not started automatically with a cluster.
In this activity guide, we cover deploying the NGINX ingress controller using a Helm chart, creating simple applications, creating ingress rules to route traffic, testing that the ingress controller routes correctly to both applications, and cleaning up the resources created in this lab exercise.
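As a sketch (the hostname and backend service name below are placeholders):

```bash
# Deploy the NGINX ingress controller from its Helm chart.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# Route traffic for a hostname to an existing "web" service on port 80.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
EOF
```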

XXVI: Demonstrating Various Ways To Spin Up Secured Containers
Security is one of the major concerns with containers, because containers face a variety of threats. These include the risk of privilege escalation via containers, attacks originating from one container that compromise data or resources used by a different container, simple DoS attacks, and insecure or unvalidated application images.
In this activity guide, we cover various methods of spinning up containers securely to avoid such issues when using containers for application deployment.
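One common technique is a restrictive securityContext; here is a minimal sketch (the image and user/group IDs are examples):

```bash
# A pod hardened with a securityContext: non-root user, no privilege escalation,
# read-only root filesystem, and all Linux capabilities dropped.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
EOF
```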
XXVII: Deploying PHP Guestbook Application With Redis
In this activity guide, we cover how to build and deploy a simple, multi-tier web application using Kubernetes and Docker. It consists of the following components (a minimal sketch of the Redis tier is shown after the list):
- A single-instance Redis master to store guestbook entries
- Multiple replicated Redis instances to serve reads
- Multiple web frontend instances
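As a rough sketch of the first component only (the names and labels are modelled on the upstream guestbook example and are assumptions; the full guide also adds the Redis replicas and the PHP frontend):

```bash
# The Redis leader: one deployment plus a service the frontend can resolve by name.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-leader
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: leader
  template:
    metadata:
      labels:
        app: redis
        role: leader
    spec:
      containers:
      - name: redis
        image: redis:6
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-leader
spec:
  selector:
    app: redis
    role: leader
  ports:
  - port: 6379
    targetPort: 6379
EOF
```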
Visit our blog on CKA if you feel the Certified Kubernetes Administrator role is the one for you.