Author: Mohinder Jaiswal
Kubernetes is also known as ‘k8s’ (a numeronym formed from the eight letters between the ‘k’ and the ‘s’). The name comes from a Greek word meaning pilot or helmsman.
Kubernetes is an open source container management tool hosted by the Cloud Native Computing Foundation (CNCF). It was open-sourced by Google in 2014 and is widely regarded as the successor to Google’s internal Borg system, which was built to manage both long-running processes and batch jobs that had previously been handled by separate systems.
Kubernetes automates the deployment, scaling, and operation of application containers across clusters, and is capable of creating container-centric infrastructure.
Features of Kubernetes
Following are the essential features of Kubernetes:

- Pod: The smallest deployable unit in Kubernetes; a group of one or more containers that share a single IP address and storage (see the minimal manifest sketch after this list).
- Horizontal Scaling: An important feature of Kubernetes. A HorizontalPodAutoscaler automatically increases or decreases the number of pods in a deployment, replication controller, replica set, or stateful set based on observed CPU utilization.
- Automatic Bin Packing: Kubernetes lets the user declare the minimum and maximum CPU and memory for each container, and places containers onto nodes based on those requirements and the available capacity.
- Service Discovery and Load Balancing: Kubernetes assigns a single IP address and a DNS name to a set of pods, and balances the load across them.
- Automated Rollouts and Rollbacks: Kubernetes rolls out changes and updates to an application or its configuration progressively. If any problem occurs, it rolls those changes back for you.
- Persistent Storage: Kubernetes provides an essential feature called ‘persistent storage’, so that data is not lost when a pod is killed or rescheduled. Kubernetes supports various storage systems for this, such as Google Compute Engine’s Persistent Disks (GCE PD) or Amazon Elastic Block Storage (EBS), as well as distributed file systems such as NFS or GFS.
- Self-Healing: This feature plays an important role in Kubernetes. Containers that fail during execution are restarted automatically, and containers that do not respond to the user-defined health check are stopped automatically.
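To make the pod and bin-packing features above concrete, the following is a minimal, hypothetical pod manifest; the names and image are placeholders used only for illustration and do not appear later in this guide. The requests and limits fields are what the scheduler uses when packing containers onto nodes:

    # demo-pod.yaml - a minimal example pod (hypothetical name and image)
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod
    spec:
      containers:
      - name: demo-container
        image: nginx                 # placeholder container image
        resources:
          requests:                  # minimum resources reserved by the scheduler
            memory: "64Mi"
            cpu: "250m"
          limits:                    # maximum resources the container may use
            memory: "128Mi"
            cpu: "500m"

Such a manifest is applied with kubectl apply -f demo-pod.yaml. Horizontal scaling, in contrast, normally targets a deployment rather than a single pod, for example: kubectl autoscale deployment <name> --min=2 --max=5 --cpu-percent=80.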
Kubernetes – Cluster Architecture
Kubernetes follows a client-server architecture, in which the master (control plane) is installed on one machine and the nodes run on separate Linux machines.

Master Components
Below are the main components found on the master node (a quick way to inspect them on a running cluster is shown after this list):
- etcd cluster – a simple, distributed key-value store used to hold the Kubernetes cluster data (such as the number of pods, their state, namespaces, etc.), API objects and service discovery details. It is only accessible from the API server for security reasons. etcd enables notifications to the cluster about configuration changes with the help of watchers: a watch on a key triggers an update in the watching component whenever the stored information changes.
- kube-apiserver – Kubernetes API server is the central management entity that receives all REST requests for modifications (to pods, services, replication sets/controllers and others), serving as frontend to the cluster. Also, this is the only component that communicates with the etcd cluster, making sure data is stored in etcd and is in agreement with the service details of the deployed pods.
- kube-controller-manager – runs a number of distinct controller processes in the background (for example, the replication controller maintains the desired number of pod replicas, the endpoints controller populates the endpoint objects that join services and pods, and others) to regulate the shared state of the cluster and perform routine tasks. When a change in a service configuration occurs (for example, replacing the image from which the pods are running, or changing parameters in the configuration YAML file), the controller spots the change and starts working towards the new desired state.
- cloud-controller-manager – is responsible for managing controller processes that depend on the underlying cloud provider (if applicable). For example, when a controller needs to check whether a node has been terminated, or to set up routes, load balancers or volumes in the cloud infrastructure, all of that is handled by the cloud-controller-manager.
- kube-scheduler – helps schedule the pods (a co-located group of containers inside which our application processes are running) on the various nodes based on resource utilization. It reads the service’s operational requirements and schedules it on the best fit node. For example, if the application needs 1GB of memory and 2 CPU cores, then the pods for that application will be scheduled on a node with at least those resources. The scheduler runs each time there is a need to schedule pods. The scheduler must know the total resources available as well as resources allocated to existing workloads on each node.
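On a cluster bootstrapped with kubeadm (as in the installation below), most of these control-plane components run as pods in the kube-system namespace. Assuming kubectl is already configured, they can be inspected with commands along these lines:
- kubectl get pods -n kube-system
- kubectl get componentstatuses
(The second command is deprecated on newer releases but, on clusters of the vintage used in this guide, it gives a quick health summary of the scheduler, controller manager and etcd.)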
Node (Worker) Components
Below are the main components found on a (worker) node (a way to check them on a running cluster is shown after this list):
- kubelet – the main service on a node, regularly taking in new or modified pod specifications (primarily through the kube-apiserver) and ensuring that pods and their containers are healthy and running in the desired state. This component also reports to the master on the health of the host where it is running.
- kube-proxy – a proxy service that runs on each worker node to deal with individual host subnetting and expose services to the external world. It performs request forwarding to the correct pods/containers across the various isolated networks in a cluster.
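On a kubeadm-based cluster with systemd (as set up below), these worker-side components can be checked roughly as follows: the kubelet runs as a system service on each node, while kube-proxy runs as a pod managed by a DaemonSet:
- sudo systemctl status kubelet
- kubectl get daemonset kube-proxy -n kube-system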
Kubernetes Setup (Linux OS)
The installation of Kubernetes on Linux is a straightforward process. Follow the steps below; each step is mandatory.
Step 1: In this step, we update the system’s package information and install a required dependency using two commands.
The first command refreshes the list of available packages. Execute the following command in the terminal; it will ask for the system’s password.
- sudo apt-get update
When the first command has executed successfully, type the following second command, which installs support for downloading packages over HTTPS:
- sudo apt-get install -y apt-transport-https
Step 2: After the above steps have completed successfully, we have to install Docker in this step.
Type the following command to install Docker. During the installation, choose Y to confirm:
- sudo apt install docker.io
After installing Docker, we have to run two commands to start and enable it. Type the following first command, which starts Docker:
- sudo systemctl start docker
Now, type the following second command, which enables Docker so that it starts automatically at boot:
- sudo systemctl enable docker
Now, we can check the Docker version by typing the following command:
- docker --version
Step 3: After all the commands in the second step have executed successfully, we have to install curl. curl is a command-line tool for transferring data using URL syntax.
Now, install curl by using the following command. During the installation, type Y to confirm.
- sudo apt-get install curl
Now, we have to download and add the package signing key for the Kubernetes repository with the following command:
- curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
If the above command returns an error, it means curl was not installed successfully; install curl first and then run the command again.
Now, we have to add the Kubernetes repository with the following command:
- sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
After the successful execution of the above command, we have to refresh the package lists again by executing the following command:
- sudo apt-get update
Step 4: After the commands in the previous steps have executed, we have to install the Kubernetes components by executing the following command:
- sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni
Step 5: After the above installation is complete, we have to initialize the cluster with kubeadm. First, execute the following command, which disables swap (kubeadm requires swap to be turned off):
- sudo swapoff -a
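Note that swapoff -a only disables swap until the next reboot. To keep swap disabled permanently, the swap entry can also be commented out in /etc/fstab, for example with the following sketch of a command (check the layout of /etc/fstab on your machine before running it):
- sudo sed -i '/ swap / s/^/#/' /etc/fstab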
Now, we have to initialize the cluster with kubeadm by executing the following command:
- sudo kubeadm init
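Because the flannel network add-on is deployed in Step 7, it is common (though not part of the command above) to also pass flannel's default pod network range to kubeadm, for example:
- sudo kubeadm init --pod-network-cidr=10.244.0.0/16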
Step 6: After the above command has executed successfully, we have to run the following commands, which are printed at the end of the kubeadm init output. They configure kubectl so that the cluster can be used as a regular user:
- mkdir -p $HOME/.kube
- sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
- sudo chown $(id -u):$(id -g) $HOME/.kube/config
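Alternatively, the kubeadm init output notes that the root user can simply point kubectl at the admin kubeconfig instead:
- export KUBECONFIG=/etc/kubernetes/admin.conf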
Step 7: In this step, we have to deploy a pod network (flannel) to the cluster using the following command:
- kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Step 8: After the execution of the above command, we have to run the following command to verify the installation:
- kubectl get pods --all-namespaces
Note: If the command lists the kube-system pods (including the flannel pods) in the Running state, it means that Kubernetes has been successfully installed on the system.
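As an optional sanity check beyond the original steps, a small test workload can be deployed (the deployment name below is just an example). On a single-node cluster created by kubeadm, the control-plane taint has to be removed first so that regular pods can be scheduled on the same machine; on releases of the vintage used here the taint key is node-role.kubernetes.io/master:
- kubectl taint nodes --all node-role.kubernetes.io/master-
- kubectl create deployment nginx-test --image=nginx
- kubectl get pods -o wide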