Getting Started with Kubernetes

Prerequisite: A basic understanding of container systems such as Docker

Kubernetes is the talk of the town in the tech world. What is it? Why was it created in the first place?

Why?

As internet usage has increased exponentially, web applications need to support the demands of a global audience. A traditional monolith running on a single powerful server is not sufficient at global scale. Hence, we needed more machines.

With more machines, we have a new set of challenges to address:

  • How to install the same instance of the application on different machines?
  • How to upgrade the software with little or no application downtime?
  • How to handle hardware failures with little or no application downtime?

Earlier, all of this was handled manually by system admins with scripts. It kept them busy at midnight when a new software update crashed on some machines!

Google, a pioneer in large-scale systems, faced the above issues in its search infrastructure, which led to the eventual development of Kubernetes. Kubernetes automates most of the manual tasks listed above and much more, thus improving the development and deployment lifecycle of applications.

What?

Now that we are convinced of the need for Kubernetes, let us see its official definition:

Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.

How?

Kubernetes is nothing but a distributed application deployed over a network of machines, collectively called a cluster.

Let’s understand the high-level architecture of Kubernetes. Each machine in this cluster is called a node. A node can be a physical machine or a VM (virtual machine). Each node runs a set of Kubernetes processes. But wait, where is our application? It runs on those same nodes, but inside containers (such as Docker containers). These containers are wrapped into an entity known as a pod. Pods are the smallest deployable units in Kubernetes. Essentially, our application runs in different pods distributed across different nodes, and each node may contain one or more pods.

How do these nodes interact with each other, and how do they know about each other?

Among these nodes, one is chosen as the master node, which keeps a record of every node and its associated pods.
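Once we have a cluster running (we will set one up shortly), this node-to-pod mapping is directly observable. For example, the following command lists each pod along with the node it is scheduled on:

kubectl get pods -o wide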

The above explanation is oversimplified to convey the gist of why Kubernetes is needed and how it works.

“Hello World” of Kubernetes

It all looks fine theoretically, but how do I get started and see it working quickly on my laptop?

Kubernetes has various open-source/commercial flavors that make it easier to develop and deploy applications, such as Minikube, kind, MicroK8s, and K3s.

I find K3s to be a lightweight and handy tool to get started with Kubernetes.

Almost all of the tools in the Kubernetes world are compatible with Linux. They may also support Mac/Windows, but usually need some tweaks before they work. Hence, I prefer Linux when playing around with Kubernetes.

  • If you are not in a Linux environment, you can install a Linux VM (such as Ubuntu) through VirtualBox. For this example, I am using an Ubuntu 20.04 LTS VM running in Oracle VM VirtualBox.
  • Install K3s using the following command. It will take a few seconds.

curl -sfL https://get.k3s.io | sh -

This installs Kubernetes as a single-node cluster. Here, the node is your VM.
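K3s runs as a systemd service, so you can confirm it is up with the command below (assuming a systemd-based distro such as Ubuntu). If kubectl later complains about permissions, note that K3s keeps its kubeconfig at /etc/rancher/k3s/k3s.yaml.

sudo systemctl status k3s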

  • After installation is done, check your node status with

kubectl get node
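The output will look roughly like this (your node name, age, and version will differ):

NAME     STATUS   ROLES                  AGE   VERSION
ubuntu   Ready    control-plane,master   1m    v1.21.5+k3s1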

What does kubectl do? kubectl is a Swiss Army knife for operating a Kubernetes cluster. Its power will become evident as we explore Kubernetes further.
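To give a flavor of it, here are a few everyday kubectl subcommands (the node and pod names below are placeholders):

kubectl get nodes                # list the nodes in the cluster
kubectl describe node my-node    # show the detailed state of one node
kubectl logs my-pod              # read the logs of a pod's container
kubectl exec -it my-pod -- sh    # open a shell inside a pod's container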

  • How can I see the pods deployed in my single-node cluster?

kubectl get pods --all-namespaces
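On a fresh K3s install, the output looks roughly like this (the random suffixes in the pod names will differ):

NAMESPACE     NAME                                     READY   STATUS      RESTARTS   AGE
kube-system   coredns-7944c66d8d-9tsjf                 1/1     Running     0          2m
kube-system   local-path-provisioner-6d59f47c7-x2b4k   1/1     Running     0          2m
kube-system   metrics-server-7566d596c8-tqk5d          1/1     Running     0          2m
kube-system   helm-install-traefik-vqxd8               0/1     Completed   0          2m
kube-system   traefik-758cd5fc85-5hk2p                 1/1     Running     0          2m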

The pod list displayed for you could differ from the above, as default pod names are suffixed with random strings. Observe that a single node holds application pods as well as pods for Kubernetes processes. Yes, Kubernetes processes themselves run in pods.

  • Now the fun part. Let’s deploy our application. For this example, let’s deploy the [nginx](https://nginx.org/en/) server in a pod.
  • The following command deploys nginx in a new pod.

kubectl run nginx --image=nginx

  • After a few seconds, check your nginx pod status with

kubectl get pods
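You should see something like:

NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          30s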

If you see the status as “Running”, your nginx server has been deployed successfully!

Great! How do I connect to my nginx server?

kubectl port-forward nginx 9000:80

This essentially forwards traffic from port 9000 on your host to port 80 in the container.
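While the port-forward is running, you can verify the connection from another terminal; this should return the HTML of the nginx welcome page:

curl http://localhost:9000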

  • Then, you should see the nginx welcome page in your browser at http://localhost:9000.

Welcome to Kubernetes! You have deployed your first application onto a Kubernetes cluster.
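When you are done experimenting, you can clean up the pod with:

kubectl delete pod nginx

K3s also ships an uninstall script at /usr/local/bin/k3s-uninstall.sh if you want to remove the whole cluster from your VM.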

What’s next?

Now that we have seen a 50,000-foot view of Kubernetes, we might be wondering:

  • How does Kubernetes manage scaling?
  • What happens if a pod crashes?
  • By the way, how are my REST endpoints reached when they are hidden deep inside a pod, and in turn inside a container?
  • How do we write these infrastructure details as code? Is this what is called IaC (Infrastructure as Code)? (See the sketch after this list.)
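As a preview of that last point: the imperative kubectl run command we used earlier has a declarative equivalent, where the pod is described in a YAML manifest and applied to the cluster. Here is a minimal sketch for our nginx example (the file name is my choice):

# nginx-pod.yaml: equivalent to `kubectl run nginx --image=nginx`
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx          # container name inside the pod
    image: nginx         # same image we used with kubectl run
    ports:
    - containerPort: 80  # nginx listens on port 80

Apply it with kubectl apply -f nginx-pod.yaml, and you get the same pod as before, but now its definition lives in a file you can version-control.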

In the coming articles, we will dig deeper into Kubernetes from both a usage and an architecture point of view.