Introduction to Kubernetes and its components

February 16, 2023

Before we dive into Kubernetes and its components, I just want to clarify that this is not a comprehensive tutorial on Kubernetes, since it’s a complex system. My goal is to share my approach to understanding Kubernetes concepts and the techniques that helped me learn it pretty quickly. I hope this approach, combined with various posts, videos, and real-world examples, will help others understand how Kubernetes works.

Also, I recommend having some basic knowledge of containers and containerization technologies such as Docker. If you’re new to containers, check out this guide on Docker as a prerequisite for this tutorial series.

Now, let’s get started with Kubernetes!

What is Kubernetes?

Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It was developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).

Kubernetes is designed to be a platform-agnostic and vendor-neutral solution that works across various environments, such as public, private, and hybrid clouds. It provides a unified API and control plane that can manage any containerized application across multiple hosts and clusters.

Kubernetes Architecture

Kubernetes is based on a client-server architecture that consists of a control plane and worker nodes. The control plane manages the overall state of the cluster and schedules the deployment of containerized applications on the worker nodes. The worker nodes are responsible for running the containers and providing the necessary resources for them to operate.

Kubernetes Control Plane Components

The Kubernetes control plane consists of the following core components:

  • etcd: A distributed key-value store that stores the configuration and state of the entire Kubernetes cluster.
  • kube-apiserver: The API server that exposes the Kubernetes API and serves as the primary interface for the control plane components.
  • kube-controller-manager: A set of controllers that monitor the state of the cluster and take actions to maintain the desired state.
  • kube-scheduler: A component that schedules the deployment of containerized applications to the worker nodes based on resource availability and application requirements.
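
On a cluster bootstrapped with a tool like kubeadm, these control plane components typically run as pods in the kube-system namespace, so you can inspect them with kubectl (managed Kubernetes services usually hide the control plane from you):

$ kubectl get pods -n kube-system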

Kubernetes Worker Node Components

The Kubernetes worker nodes consist of the following core components:

  • kubelet: A component that runs on each worker node and is responsible for managing the containers on that node.
  • kube-proxy: A network proxy that runs on each worker node and enables the containers to communicate with each other and with the outside world.
  • container runtime: The software that runs the containers, such as Docker, containerd, or CRI-O.
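
Once you have access to a cluster, you can see the worker nodes and the container runtime each one is using with the wide output of kubectl get nodes:

$ kubectl get nodes -o wide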

Key Concepts in Kubernetes

Before diving deeper into Kubernetes, it’s essential to understand some of its basic terms and concepts.

Clusters

A Kubernetes cluster is a group of nodes that work together to run containerized applications. The cluster includes one or more control plane nodes (historically called master nodes), which run the control plane components, and one or more worker nodes, which run the containerized applications.

The nodes are the worker machines that run applications and can be either physical or virtual machines. The nodes are managed by the Kubernetes control plane.

Control Plane

The Kubernetes control plane is a collection of components that manage the state of the cluster. The control plane components include:

  • API Server: The API server is the central management point of the Kubernetes control plane. It exposes the Kubernetes API, which is used to manage the state of the cluster.
  • etcd: etcd is a distributed key-value store that is used to store the state of the Kubernetes cluster.
  • Controller Manager: The controller manager is responsible for managing the various controllers that manage the state of the cluster.
  • Scheduler: The scheduler is responsible for scheduling workloads onto the nodes.


Nodes

A node is a physical or virtual machine that runs Kubernetes. Each node has a container runtime, which is responsible for running containers. Nodes are managed by the Kubernetes control plane and communicate with each other through the Kubernetes API server.

Namespaces

A namespace is a way to organize Kubernetes resources into virtual clusters. For example, you can create separate namespaces for development, testing, and production or to separate different applications or environments.
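
As a quick illustration, a namespace can be created with kubectl create namespace or declaratively from a small manifest like the one below (the name dev is just an example):

apiVersion: v1
kind: Namespace
metadata:
  name: dev

You can then scope kubectl commands to that namespace with the -n flag, for example kubectl get pods -n dev.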

Pods

A pod is the smallest deployable unit in Kubernetes. A pod is a single instance of a running process in a cluster. Each pod has its own IP address and can contain one or more containers. Pods are managed by Kubernetes controllers, such as Deployments or ReplicaSets.
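
Here is a minimal sketch of a standalone pod manifest; the name, labels, and image are placeholders, and in practice you will usually let a Deployment create pods for you:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: nginx:latest
    ports:
    - containerPort: 80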

ReplicaSets

A ReplicaSet is a Kubernetes controller that manages the scaling and availability of a set of identical pods. A ReplicaSet ensures that a specified number of pod replicas are running at all times. If a pod fails or is deleted, the ReplicaSet creates a new pod to replace it.
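
A ReplicaSet manifest looks almost identical to the Deployment example shown later in this post: the important fields are replicas and a selector that matches the labels in the pod template. A minimal sketch (names and labels are illustrative):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx:latest

In practice you rarely create ReplicaSets directly; Deployments create and manage them for you.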

Deployments

A Deployment is a Kubernetes controller that manages the rollout and scaling of a set of ReplicaSets. A Deployment ensures that a specified number of replicas of an application are running at all times. Deployments also provide rolling updates and rollbacks, making it easy to deploy new versions of an application.

Services

A Service is a Kubernetes object that provides a stable IP address and DNS name for a set of pods. Services can load-balance traffic across multiple pods and provide a single entry point to a set of pods. Services can be used to expose an application to the internet or to other services within the cluster.
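
For example, a simple ClusterIP service (the default type) that routes traffic on port 80 to pods labeled app: myapp might look like this sketch (the name and labels are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 80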

Jobs

A Job is a Kubernetes controller that manages the execution of a batch process to completion. A Job creates one or more pods to run a specific task, and the pods run until the task finishes. Jobs are useful for one-off tasks, such as backups or database migrations; for recurring tasks, Kubernetes provides CronJobs, which create Jobs on a schedule.
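
A minimal Job manifest might look like the sketch below; it runs a single busybox container to completion, and the command is only a stand-in for a real batch task:

apiVersion: batch/v1
kind: Job
metadata:
  name: example-migration
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: busybox
        command: ["sh", "-c", "echo running migration && sleep 5"]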

Volumes

A Volume is a way to store data in a pod that persists across container restarts. Volumes can be used to store configuration files, logs, or other data that needs to be shared across containers in a pod. Kubernetes supports a variety of volume types, such as hostPath, emptyDir, configMap, and secret.
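
As a simple illustration, the pod below mounts an emptyDir volume into its container; the data lives as long as the pod does and survives container restarts within that pod (the names and mount path are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: cache-pod
spec:
  containers:
  - name: app
    image: nginx:latest
    volumeMounts:
    - name: cache-volume
      mountPath: /cache
  volumes:
  - name: cache-volume
    emptyDir: {}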

Secrets

A Secret is a Kubernetes object that is used to store sensitive information, such as passwords, API keys, or TLS certificates. Secrets are stored in the Kubernetes API server and can be mounted as files or environment variables in pods.
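
A sketch of a Secret manifest using the stringData field, which lets you write plain-text values that Kubernetes stores base64-encoded (the keys and values here are examples only; never commit real credentials to version control):

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  username: admin
  password: changeme

A pod can then consume the secret, for example through an environment variable’s secretKeyRef or by mounting it as a volume.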

ConfigMaps

A ConfigMap is a Kubernetes object that is used to store configuration data in key-value pairs. ConfigMaps can be used to store application configuration files, command-line arguments, or other configuration data that needs to be shared across containers in a pod.
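
For example, a ConfigMap can hold simple key-value settings as well as whole configuration files (the keys and values below are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  LOG_LEVEL: "info"
  app.properties: |
    max.connections=100
    timeout.seconds=30

Pods can consume these values as environment variables or mount the ConfigMap as files in a volume.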

DaemonSets

A DaemonSet is a Kubernetes controller that ensures that a specific pod runs on every node in a cluster. DaemonSets are useful for running system daemons, such as log collectors or monitoring agents, on every node in the cluster.
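
A DaemonSet manifest has the same shape as a ReplicaSet or Deployment, except that it has no replicas field because one pod is scheduled per node. A minimal sketch with an illustrative log collector image:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: collector
        image: fluentd:latest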

Network Policies

Network Policies are a way to define rules for traffic within a Kubernetes cluster. For example, you can use Network Policies to control access to specific pods or services in the cluster, or to allow traffic only from specific IP addresses or to specific ports.
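
As a sketch, the NetworkPolicy below allows ingress traffic to pods labeled app: myapp only from pods labeled role: frontend, and only on TCP port 80 (the labels and port are examples, and your cluster’s network plugin must support NetworkPolicy for the rules to take effect):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80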

 

Getting Started with Kubernetes

Now that we have covered the basics of Kubernetes, let’s dive into the practical aspects of getting started with Kubernetes.

Installing Kubernetes

To get started with Kubernetes, you will need to install it on your local machine or a remote server. Kubernetes can be installed in various ways: by manually installing and configuring the components, by using a managed Kubernetes service, or by using tools such as kubeadm or Minikube.

Minikube

Minikube is a tool that enables you to run a single-node Kubernetes cluster on your local machine for testing and development purposes. It provides an easy way to get started with Kubernetes without the need for a full-blown cluster.

To install Minikube, follow the installation instructions on the official website: https://minikube.sigs.k8s.io/docs/start/

kubeadm

kubeadm is a tool that makes it easy to bootstrap a Kubernetes cluster across one or more nodes. It provides a convenient way to stand up a cluster that can be tailored to your needs.

To install kubeadm, follow the installation instructions on the official website: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

 

Let’s Interact With Kubernetes

List Pods

To list the pods in your cluster, use the kubectl get pods command:

$ kubectl get pods

 

This will list all of the pods in the default namespace.

You can use various options with kubectl get to customize the output. For example, to see the labels attached to each pod, add the --show-labels option:

$ kubectl get pods --show-labels

 

Create a Pod

To create a new pod in your cluster, you can use the kubectl run command. This command creates a new pod with a single container running the specified image:

$ kubectl run nginx --image=nginx

 

This will create a new pod with the name nginx and a container running the nginx image.

Create a Deployment

Deployments are a higher-level abstraction that makes it easier to manage and scale multiple replicas of a pod. To create a new deployment in your cluster, use the kubectl create deployment command:

$ kubectl create deployment nginx --image=nginx

 

This will create a new deployment with the name nginx and a single replica. You can use the --replicas option to specify a different number of replicas (note that the command will fail if a deployment named nginx already exists, so delete the earlier one first or pick a different name):

$ kubectl create deployment nginx --image=nginx --replicas=3

 

This will create a new deployment with the name nginx and three replicas.

Scale a Deployment

To scale a deployment up or down, use the kubectl scale command. For example, to scale the nginx deployment to five replicas, use the following command:

$ kubectl scale deployment nginx --replicas=5

 

This will increase the number of replicas in the nginx deployment to five.

Expose a Service

Services are used to expose pods to the network. To create a new service that exposes the nginx deployment, use the kubectl expose command:

$ kubectl expose deployment nginx --port=80 --type=LoadBalancer

 

This will create a new service with the name nginx that exposes the nginx deployment on port 80. The --type option specifies the type of service to create; in this case, a LoadBalancer service is created. On a cloud provider this provisions an external load balancer; on a local cluster such as Minikube, you can run minikube tunnel (or use a NodePort service instead) to reach it.

Using Port Forwarding

To access a pod directly, you can use port forwarding to forward traffic from a local port to a port on the pod. To forward traffic from port 8080 on your local machine to port 80 on the nginx pod, use the following command:

$ kubectl port-forward pod/nginx 8080:80

This will forward traffic from port 8080 on your local machine to port 80 on the nginx pod.


Deploying Your First Application

Once you have installed Kubernetes, the next step is to deploy your first application. In Kubernetes, you can deploy an application by creating a deployment object that specifies the desired state of the application.

To deploy an application using a deployment object, you will need to create a YAML file that defines the deployment object. Here is an example YAML file that deploys a simple web application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx:latest
        ports:
        - containerPort: 80

 

In this example, the deployment object specifies that we want to deploy three replicas of the nginx web server. The selector field specifies that the deployment should manage all pods with the label app: myapp. The template field specifies the pod template, which contains a single container running the nginx image.

To deploy the application, you can use the kubectl command-line tool to apply the YAML file:

$ kubectl apply -f myapp.yaml

 

This will create the deployment object and start the specified number of replicas.

Kubernetes Dashboard

The Kubernetes dashboard is a web-based user interface that enables you to view and manage your Kubernetes resources. You can use the dashboard to monitor the health of your applications, view logs and metrics, and perform various administrative tasks.

To access the dashboard on Minikube, run:

$ minikube dashboard

This enables the dashboard addon (if needed) and opens it in your browser. On a cluster where the dashboard is already deployed, you can instead start a local proxy:

$ kubectl proxy

The dashboard is then available at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Conclusion

In this tutorial, we have covered the basics of Kubernetes, including its architecture, components, and key concepts like clusters, nodes, pods, and services. We have also discussed how to install Kubernetes, interact with it using kubectl, and deploy a simple application. Finally, we have looked at the Kubernetes dashboard as a way to inspect and monitor your Kubernetes applications.

Kubernetes can be a complex and daunting technology to learn, but it is also a powerful tool for managing and deploying containerized applications. I encourage you to continue your learning journey by exploring the many resources available online, such as the official Kubernetes documentation, tutorials, and courses. With time and practice, you can become a proficient Kubernetes user and contribute to the vibrant and growing Kubernetes community.
