This blog post walks through the step-by-step Activity Guides of the Certified Kubernetes Administrator (CKA) certification training program.

This post covers Hands-On Activity Guides that you must perform in order to learn Docker & Kubernetes and clear the CKA certification exam.

Activity Guide 1: Register For Azure Free Trial Account

The first thing you must do is get a trial account for Microsoft Azure (you get 200 USD of FREE credit from Microsoft to practice).

Microsoft Azure is one of the top choices for any organization due to the freedom it gives you to build, manage, and deploy applications. To learn how to register for the Microsoft Azure FREE Trial Account, click here.

2: VM Creation Walkthrough

The most basic task that can be performed on any cloud platform is the creation of a Virtual Machine.

Azure Virtual Machines (VM) is one of several types of on-demand, scalable computing resources that Azure offers. Typically, you choose a VM when you need more control over the computing environment than the other choices offer.

This guide gives you an insight into how to create a virtual machine and how to manage it.

Check out: Complete Guide on CKA Certification Exam

3: Docker Installation 

Docker is a free and open platform for building, shipping, and running apps inside containers. Docker allows you to deliver apps easily, and you can manage your infrastructure the same way you manage your applications.

Docker is available for download and installation on Windows, Linux, and macOS.

To learn how to install Docker on your machine, read our blog on Docker Installation.

4: Working with Docker Container

A Docker container is a runnable instance of an image. The Docker API and CLI can be used to build, start, stop, pause, and remove containers. You can mount storage to a container, connect it to one or more networks, and even create a new image based on its current state.

In this Activity guide, we cover how to create/delete a container, the lifecycle of a container, inspecting container details, listing containers, and how to exec into a container.

Read our blog to get an idea of the Docker container.
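As a rough sketch of the container lifecycle tasks listed above (the container name `web` is illustrative):

```shell
# Create and start an nginx container in the background
docker run -d --name web nginx

# List running containers, then all containers (including stopped ones)
docker ps
docker ps -a

# Inspect container details (IP address, mounts, state, ...)
docker inspect web

# Open a shell inside the running container
docker exec -it web sh

# Stop and remove the container
docker stop web
docker rm web
```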

Know more about Container Orchestration and Management Options

5: Working With Docker Images

A Docker image is a read-only template with instructions for creating a Docker container. It is a file, comprised of multiple layers, that is used to execute code in a Docker container.

In this Activity guide, we cover how to create/push an image, tag images, inspect image details, list images, and delete images from the local repo.

Read our blog to get an idea of Docker images.
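The image operations above look roughly like this (`myrepo` stands in for your own registry account):

```shell
docker pull nginx                    # download an image from Docker Hub
docker images                        # list local images
docker tag nginx myrepo/nginx:v1     # tag the image for a registry
docker push myrepo/nginx:v1          # push to the registry (requires docker login)
docker inspect nginx                 # inspect image details (layers, config, ...)
docker rmi myrepo/nginx:v1           # delete the image from the local repo
```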

6: Docker Host Networking

When a container runs in host network mode, network isolation between the Docker host and the container is removed, and the container does not receive its own IP address. For example, if you use host networking and run a container that binds to port 80, the container’s application is available on port 80 on the host’s IP address.

Since it does not require network address translation (NAT), host mode networking can be useful for optimizing performance and in situations where a container must handle a large number of ports.

Read our blog to know more about Docker Network
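For example, a container attached to the host network (Linux only) might be started like this:

```shell
# No port mapping needed: nginx binds to port 80 of the host's network stack directly
docker run -d --network host --name web-host nginx
```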

7: Docker Custom Bridge Networking

Docker networking connects containers to each other and to the outside world, so they can communicate with each other and with the Docker host. The Docker bridge driver automatically installs rules on the host machine so that containers on different bridge networks cannot communicate directly with each other.

In this Activity guide, we cover inspecting the bridge network, starting/stopping containers on the default bridge network, checking network connectivity, creating a custom bridge network, and creating containers connected to the custom bridge.

Read our blog to know more about Docker Network
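A minimal sketch of the custom bridge exercise (network and container names are illustrative):

```shell
# Create a custom bridge network
docker network create --driver bridge my-bridge

# Inspect it (subnet, gateway, connected containers)
docker network inspect my-bridge

# Containers on the same custom bridge can reach each other by name,
# resolved by Docker's embedded DNS server
docker run -d --name app1 --network my-bridge nginx
docker run --rm --network my-bridge alpine ping -c 2 app1
```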

8: Working With Docker Volume 

Container storage is non-persistent: when we stop a container, we lose all its data. To overcome this, we use persistent storage. In Docker we have two ways to store data persistently: 1) Docker volumes and 2) bind mounts. Docker volumes are completely managed by Docker, while bind mounts depend on the file structure of the host machine.

In this Activity guide, we cover creating a Docker volume, inspecting volumes, creating a file in the mounted volume path, and creating a directory on the Docker host.
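A short sketch of these volume steps (names are illustrative):

```shell
# Create and inspect a named volume, fully managed by Docker
docker volume create mydata
docker volume inspect mydata

# Mount the volume into a container; files written under the mount
# point survive container removal
docker run -d --name web -v mydata:/usr/share/nginx/html nginx
```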

9: Implementing Docker Storage Bind Mount

When you use bind mount storage, a file or directory on the Docker host machine is mounted into a container. Bind mounts perform very well, but they rely on the host machine’s filesystem having a specific directory structure available.

In this Activity guide, we cover creating a container and mounting a host path into it, and customising a web page mounted from the local filesystem.

Read our blog to know more about Docker Storage

10: Configuring External DNS, Logging and Storage Driver

By default, a container inherits the DNS settings of the host. Containers that use the default bridge network get a copy of the host’s resolv.conf file, whereas containers that use a custom network use Docker’s embedded DNS server, which forwards external DNS lookups to the DNS servers configured on the host.

In this Activity guide, we cover verifying the resolv.conf file content, creating/updating daemon.json to use an external DNS for all containers, restarting the Docker service, starting a container with a specific logging driver, and verifying the currently configured storage driver.
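A minimal `/etc/docker/daemon.json` along these lines applies an external DNS server, a logging driver, and a storage driver to all containers (the values shown are illustrative; restart the Docker service after editing):

```json
{
  "dns": ["8.8.8.8", "8.8.4.4"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" },
  "storage-driver": "overlay2"
}
```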

Also check out: Docker vs VM, a comparison of the two you should know.

11: Working with Dockerfile

Docker can read instructions from a Dockerfile and generate images for you automatically. A Dockerfile is a text file that contains all of the commands that a user may use to assemble an image from the command line. Users can use docker build to automate a build that executes multiple command-line instructions in a row.

In this Activity guide, we cover how to write Dockerfile instructions to create a Docker image, how to build an image, how to use different options in a Dockerfile, reducing image size using a multi-stage build, and the ONBUILD instruction.

Read more about Dockerfile
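As an illustration of a multi-stage build that keeps the final image small (a sketch for a hypothetical Go application; `CGO_ENABLED=0` produces a static binary that runs on Alpine):

```dockerfile
# Stage 1: compile in a full Go toolchain image
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: ship only the binary in a minimal base image
FROM alpine:3.19
COPY --from=build /app /app
EXPOSE 8080
ENTRYPOINT ["/app"]
```

Build it with `docker build -t myapp .`; only the final stage ends up in the image.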

12: Working With Application Stack

When running Docker Engine in swarm mode, we can use a docker stack to deploy a complete application stack to the swarm.

On a single host, Docker Compose serves a similar purpose: a Compose file defines the services that make up the application, and the docker-compose command builds and runs them together.

In this Activity guide, we cover installing docker-compose, Build and run the application with docker-compose, Edit Compose file to add a bind mount.
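A minimal `docker-compose.yml` along these lines covers the build, port mapping, and bind mount steps (paths and ports are illustrative):

```yaml
version: "3.8"
services:
  web:
    build: .               # build the image from the local Dockerfile
    ports:
      - "8080:80"          # map host port 8080 to container port 80
    volumes:
      - ./html:/usr/share/nginx/html   # bind mount for static content
```

Bring it up with `docker-compose up -d` and tear it down with `docker-compose down`.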

Also read: Kubernetes service by Amazon EKS

13: Bootstrap Kubernetes Cluster Using Kubeadm

A Kubernetes cluster is a set of node machines for running containerized applications. At the highest level of Kubernetes, there exist two kinds of servers: master and worker nodes. These servers can be Virtual Machines (VMs) or physical servers (bare metal). Together, these servers form a Kubernetes cluster and are controlled by the services that make up the Control Plane.

In this activity guide, we cover how to bootstrap a Kubernetes cluster using kubeadm: installing the kubeadm & kubectl packages, creating the cluster, joining a worker node to the master, and installing a CNI plugin for networking.

To know how to install the Kubernetes cluster on your machine read our blog on Kubernetes Installation.
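The kubeadm flow, in outline (the pod CIDR and CNI manifest shown assume Flannel; adjust for your plugin):

```shell
# On the master node: initialise the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Configure kubectl for the current user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a CNI plugin (Flannel shown as an example)
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# On each worker node: join using the token printed by kubeadm init
# sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash <hash>
```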

14: Deploying High Available Stateless Application with Deployment & ReplicaSet

In Kubernetes, most service-style applications use Deployments to run applications on Kubernetes. Using Deployments, you can describe how to run your application container as a Pod in Kubernetes and how many replicas of the application to run. Kubernetes will then take care of running as many replicas as specified.

In this activity guide, we cover deploying an NGINX server as a pod, running the NGINX server as a scalable Deployment, scaling Deployment replicas using the scale command, and auto-healing with the Deployment controller.

Visit our blog to know in detail about High availability and Scalable application.
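A minimal Deployment manifest for the NGINX exercise might look like this (image tag and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3              # Kubernetes keeps 3 replicas running (auto-healing)
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Apply it with `kubectl apply -f deployment.yaml`, then scale with `kubectl scale deployment nginx-deployment --replicas=5`.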

15: Creating pods with ClusterIP and NodePort types of Service

Kubernetes networking allows Kubernetes components like Pods, containers, API server, etc. to communicate with each other. The Kubernetes platform is different from other networking platforms because it is based on a flat network structure that eliminates the need to map host ports to container ports.

In this activity guide, we cover Running Nginx Server as Pod inside the Cluster, Exposing Nginx within Cluster Using ClusterIP, Exposing Nginx outside Cluster Using NodePort.

Read more about Kubernetes Networking and Services
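Sketches of the two Service types used in this exercise (the `nodePort` value is illustrative and must fall in the 30000-32767 range):

```yaml
# ClusterIP (the default) exposes nginx inside the cluster only
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
---
# NodePort additionally exposes it on a fixed port of every node
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
```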

16: Upgrading and rollback application with Deployment and Replicaset

The ability to execute rolling updates is one of the main advantages of using a Deployment to power your pods. Rolling updates allow you to gradually change the configuration of your pods, and Deployments give you a lot of control over the operation. Kubernetes creates a new ReplicaSet and retains the old one, so that we can use the old ReplicaSet to roll back to a previous state.

In this activity guide, we cover Update Deployment & Rolling Out New Versions Pod, Update Deployment & Fixing the Failed Pods, Rollback to previous Deployment version, Use Different Deployment Strategy.

Visit our blog to know in detail about Kubernetes Deployment
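The rollout and rollback steps map to kubectl commands along these lines (deployment and container names are illustrative):

```shell
# Roll out a new image version
kubectl set image deployment/nginx-deployment nginx=nginx:1.26

# Watch the rollout and inspect its history
kubectl rollout status deployment/nginx-deployment
kubectl rollout history deployment/nginx-deployment

# Roll back to the previous revision, or to a specific one
kubectl rollout undo deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment --to-revision=1
```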

17: Automated Scaling of Application HPA and Metric Server

Based on observed CPU and memory consumption, the Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, Deployment, ReplicaSet, or StatefulSet (or, with custom metrics support, based on some other application-provided metrics). Horizontal Pod Autoscaling does not apply to objects that cannot be scaled, such as DaemonSets.

In this activity guide, we cover Installing metrics-server in cluster, Creating Deployment with Resource Limit Defined, Verify Cluster & Pod Level Metrics By Metrics-Server, Creating Horizontal Pod Autoscaler (HPA), Demonstrating autoscaling of pod on load increase.
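A sketch of an HPA targeting the deployment (requires metrics-server and CPU requests defined on the pods; the numbers are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale out when average CPU use exceeds 50%
```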

18: Kubernetes Storage (Volume, PV, PVC, Storage Class)

In Kubernetes persistent storage, a PersistentVolume (PV) is a piece of storage within the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. A PV is an abstraction for the physical storage device (such as NFS or iSCSI) that you have attached to the cluster. A PersistentVolumeClaim (PVC) is a request for storage by a user. The claim can include specific storage parameters required by the application.

In this activity guide, we cover Configuring NFS storage Persistence Volume, Create Persistent Volumes (PV), Create Persistence Volume Claim (PVC), Mounting NFS volume inside Pod.

Visit our blog to know in detail about Kubernetes Volume
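Sketches of an NFS-backed PV and a matching PVC (server address and export path are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.5        # NFS server address
    path: /exports/data     # exported directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""      # bind to a statically provisioned PV
  resources:
    requests:
      storage: 1Gi
```

The PVC can then be referenced from a pod's `volumes` section to mount the NFS share.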

19: Advanced Scheduling and Node Affinity and Anti-affinity

Node affinity is a set of rules used by the scheduler to determine where a pod can be placed. The rules are defined using custom labels on nodes and label selectors specified in pods. Node affinity allows a pod to specify an affinity (or anti-affinity) towards a group of nodes it can be placed on. The node does not have control over the placement.

In this activity guide, we cover Create Deployment With Node Affinity, Verify pod Scheduling, Create Deployment With Node Anti-Affinity, Creating Pod With Node Anti-Affinity.

Visit our blog to know in detail about Kubernetes Scheduling
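A pod with required node affinity might look like this (the `disktype=ssd` label is illustrative; apply it first with `kubectl label node <node> disktype=ssd`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In          # use NotIn for anti-affinity
                values: ["ssd"]
  containers:
    - name: nginx
      image: nginx
```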

20: Advanced Scheduling and Pod Affinity and Anti-affinity

Pod affinity and pod anti-affinity allow you to specify rules about how pods should be placed relative to other pods. The rules are defined using labels on other pods and label selectors specified in the pod spec. Pod affinity/anti-affinity allows a pod to specify an affinity (or anti-affinity) towards a group of pods it can be placed with. The node does not have control over the placement.

In this activity guide, we cover Create Sample Application Deployment, Create Deployment With Pod Affinity/anti-affinity.

Visit our blog to know in detail about Kubernetes Scheduling

21: Advanced Scheduling with Taint and Toleration

A taint allows a node to refuse pods unless they have a matching toleration.

You apply taints to a node through the node specification and apply tolerations to a pod through the pod specification. A taint on a node instructs the node to repel all pods that do not tolerate the taint.

In this activity guide, we cover tainting a node to simulate advanced scheduling, creating a pod with/without a toleration, and simulating eviction of a pod using the NoSchedule effect.

Visit our blog to know in detail about Kubernetes Scheduling
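A sketch of the taint/toleration pairing (node name, key, and value are illustrative):

```yaml
# First taint a node: kubectl taint nodes worker-1 env=prod:NoSchedule
# Pods without a matching toleration will not be scheduled there.
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  tolerations:
    - key: "env"
      operator: "Equal"
      value: "prod"
      effect: "NoSchedule"   # this pod tolerates the taint above
  containers:
    - name: nginx
      image: nginx
```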

22: Deploy and Update DaemonSet Controller

DaemonSet ensures that all eligible nodes run a copy of a Pod. Normally, the node that a Pod runs on is selected by the Kubernetes scheduler. However, DaemonSet pods are created and scheduled by the DaemonSet controller instead.

In this activity guide, we cover creating a DaemonSet, updating a DaemonSet, and performing a rollback on a DaemonSet.

23: Deploying and Managing a StatefulSet Resource

A StatefulSet is the workload API object used to manage stateful applications. Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec.

In this activity guide, we cover Creating Logging namespace, Setting up Elasticsearch application, Pods in a StatefulSet.

24: Limiting Resources With Resource Quota

If a container requests a resource, Kubernetes will only schedule it on a node that can provide that resource. Limits, on the other hand, make sure a container never goes above a certain value: the container is allowed to go up to the limit, and is then restricted.

In this activity guide, we cover Create Namespace, Create Resource quota, Simulate Resource Creation Failure Due To Resource Quota/Count Limits.
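A ResourceQuota along these lines caps what a namespace can consume (the namespace name and limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: dev
spec:
  hard:
    pods: "10"               # at most 10 pods in the namespace
    requests.cpu: "2"        # total CPU requests across all pods
    requests.memory: 2Gi
    limits.cpu: "4"          # total CPU limits across all pods
    limits.memory: 4Gi
```

Once the quota is exhausted, further resource creation in the namespace fails, which is exactly the failure scenario this exercise simulates.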

25: Cluster Node Maintenance

If a node is facing a performance issue, or needs a kernel upgrade that requires downtime, we can drain the node: all of its pods are safely evicted and scheduled onto other nodes, and the node is marked as unschedulable.

In this activity guide, we cover Running Nginx Server As Deployment In The Cluster, Cordon/Uncordon Worker node, Drain Worker Node.
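The maintenance steps map to commands along these lines (`worker-1` is illustrative; `--delete-emptydir-data` is the flag name in recent kubectl versions):

```shell
# Stop new pods from being scheduled on the node
kubectl cordon worker-1

# Safely evict existing pods (DaemonSet pods are skipped, since
# the DaemonSet controller would immediately recreate them)
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data

# ... perform maintenance / kernel upgrade ...

# Make the node schedulable again
kubectl uncordon worker-1
```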

26: Troubleshooting App & Control Plane Failure

In this activity guide, we cover simulating a failure scenario, analysing the failure by creating a pod, troubleshooting the control plane, troubleshooting application pod failures, and troubleshooting kubelet failures.

27: Security In Kubernetes- RBAC, Service Account, Security Context, Configmap

RBAC stands for Role-Based Access Control. It’s an approach used for restricting the access of users and applications on the system/network. RBAC is a security design that restricts access to valuable resources based on the role the user holds, hence the name role-based.

In this activity guide, we cover Authentication and Authorisation using RBAC, Defining a Security Context with a default/specific/non-root user, Creating a read-only Pod, Creating a privileged Pod, Setting Container Environment Variables using a ConfigMap, and Creating a Pod that Uses a ConfigMap.

Read: All you need to know on Role Based Access Control

Read: All you need to know on Kubernetes Security
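A read-only Role and its RoleBinding might be sketched like this (the namespace and user name are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
  - apiGroups: [""]            # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
  - kind: User
    name: alice                # placeholder user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```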

28: Implementing Network Policy in Kubernetes Cluster

To control the traffic between pods, and between pods and the internet, we use a network policy. A Kubernetes network policy lets developers secure access to and from their applications; this is how we can restrict a user's access. By default, all Pods in a cluster can communicate with each other and are non-isolated; Pods become isolated when a Kubernetes NetworkPolicy selects them.

In this activity guide, we cover Restrict Incoming Traffic on pods, Restrict outgoing Traffic from pods, Securing Kubernetes network.

Read: All you need to know on Network policy
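A NetworkPolicy restricting incoming traffic might look like this (the labels are illustrative):

```yaml
# Allow ingress to app=web pods only from pods labelled role=frontend;
# all other incoming traffic to the selected pods is denied
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 80
```

Note that enforcement requires a CNI plugin that supports network policies (e.g. Calico).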

29: Backup And Restore Etcd In Kubernetes

Etcd is a consistent and highly available key-value store used as Kubernetes’ backing store for all cluster data.

If your Kubernetes cluster uses etcd as its backing store, make sure you have a backup plan for those data.

In this activity guide, we cover Installing And Placing Etcd Binaries, Taking Etcd Snapshot, Backing Up The Certificates, Restore ETCD.

Read: All you need to know on Etcd Backup And Restore In Kubernetes.
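The snapshot and restore steps, sketched with etcdctl (the certificate paths match a default kubeadm cluster; the backup paths are illustrative):

```shell
# Take a snapshot of etcd
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify the snapshot
ETCDCTL_API=3 etcdctl snapshot status /backup/etcd-snapshot.db

# Restore into a fresh data directory, then point etcd at it
ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot.db \
  --data-dir=/var/lib/etcd-restore
```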

30: Upgrade Kubernetes Cluster [Master & Worker Nodes]

Upgrading a Kubernetes cluster is very important to keep up with the latest security features and bug fixes, as well as to benefit from new features being released on an ongoing basis. This is especially important when we have installed a really outdated version, or if we want to automate the process and always be on top of the latest supported version.

In this activity guide, we cover installing an older version of a Kubernetes cluster, checking the stable version of Kubernetes, and upgrading the Kubernetes master and worker node components.

Read: All you need to know on Kubernetes Cluster Upgrade[Master & Worker Nodes].

31: Deploy an end to end PHP Guestbook Application on Kubernetes

In this exercise, we cover how to build and deploy a simple multi-tier web application using Kubernetes and Docker. In this example, we use Redis as a backend pod to store guestbook entries, with multiple PHP web frontend instances.

In this activity guide, we cover deploying a master and slave Redis backend pool, creating the Guestbook PHP frontend Service, and accessing the guestbook application.

32: Demonstrating Application and Cluster logging & Monitoring

The kubectl logs command can be used to view the output of a currently running container. Node-level logging is the next level of logging in the Kubernetes environment. This is divided into two parts: the log files themselves, and the Kubernetes side, which allows the logs to be accessed and deleted remotely under certain conditions.

For monitoring in Kubernetes, the most popular open-source tool is the ELK Stack. An acronym for Elasticsearch, Logstash, and Kibana, ELK also includes a fourth component, Beats, which are lightweight data shippers. Each component within the stack takes care of a different step in the logging pipeline, and together they provide a comprehensive and powerful logging solution for Kubernetes.

In this activity guide, we cover deploying an Elasticsearch cluster, deploying Logstash as a Deployment, deploying Filebeat as a DaemonSet, deploying Kibana as a Deployment, and accessing application logs on the Kibana dashboard.

Read more about Monitoring in Kubernetes

33: Advance Routing With Ingress-Controller

In order for the Ingress resource to work, the cluster must have an ingress controller running. Unlike other types of controllers that run as part of the kube-controller-manager binary, ingress controllers are not started automatically with a cluster.

In this activity guide, we cover deploying the NGINX ingress controller using a Helm chart, creating simple applications, creating ingress routes to route traffic, testing that the ingress controller routes correctly to both applications, and cleaning up the resources created in this lab exercise.

Read more about Kubernetes Networking and Services
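A path-based Ingress for two sample applications might be sketched like this (the host and Service names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx        # handled by the NGINX ingress controller
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-svc
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-svc
                port:
                  number: 80
```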

34: Dynamic Provisioning of Persistent Volumes 

Dynamic volume provisioning allows storage volumes to be created on-demand. Without dynamic provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes.

In this activity guide, we cover Built-in storage classes, Creating Persistent Volume Claim, Use PV in a Pod.

Visit our blog to know in detail about Kubernetes Volume
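With dynamic provisioning, a PVC alone is enough; the storage class creates the PV on demand (the class name `managed-csi` is an Azure example and varies by cloud):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-csi   # built-in class; triggers dynamic provisioning
  resources:
    requests:
      storage: 5Gi
```

As soon as the claim is bound (or, with some classes, when the first pod uses it), a matching PersistentVolume appears without any administrator action.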

35: Create and Configure Managed Kubernetes Cluster On Cloud

In Kubernetes, nodes pool together their resources to form a more powerful machine. When you deploy programs onto the cluster, it intelligently handles distributing work to the individual nodes for you. If any nodes are added or removed, the cluster will shift around work as necessary.

In this activity guide, we cover setting up and creating a Kubernetes cluster, installing the Azure CLI to manage the cluster, and using kubectl.

Also read: Kubernetes service by Amazon EKS
