Deploying an Application on Kubernetes

Deploying a simple application on a local Kubernetes cluster

February 17, 2023 · 9 min read

In this tutorial, I will assume that you already have a basic understanding of Kubernetes and have already set up a local Kubernetes cluster; if you need help with that, I recommend checking out the official Kubernetes documentation. This tutorial covers creating YAML manifests and deploying a simple application on a local Kubernetes cluster.
Before we begin, it’s worth noting that there are many ways to deploy applications on a Kubernetes cluster, and what works best for you will depend on your specific use case. In this tutorial, we’ll be focusing on using kubectl to deploy applications using YAML manifests.

Without further ado, let’s get started!

Table of Contents

  1. Introduction
  2. Setting up the Environment
  3. Creating a Simple Web Server
  4. Deploying the Web Server to Kubernetes
  5. Updating the Web Server
  6. Scaling the Web Server
  7. Deleting the Web Server
  8. Tips and Tricks
  9. Conclusion

1. Introduction

In this tutorial, we will deploy a simple web server on a local Kubernetes cluster using kubectl. We will create a YAML manifest to define our deployment, and use kubectl to apply the manifest to our cluster. We will then update the deployment and delete it when we’re done.

By the end of this tutorial, you should be able to use kubectl to deploy simple applications on a local Kubernetes cluster.

2. Setting up the Environment

Before we begin, we need to make sure that our environment is set up correctly. Here are the steps to follow:

  1. Install kubectl: You can find the installation instructions for your platform on the official Kubernetes documentation website.
  2. Set up a local Kubernetes cluster: There are several options for setting up a local Kubernetes cluster, such as using Minikube or kind. I recommend using kind, as it is the easiest to set up and use. You can find the installation instructions for kind on the official Kubernetes documentation website.
  3. Verify that kubectl is configured correctly: Run the following command to check that kubectl is properly configured to talk to your Kubernetes cluster:
$ kubectl cluster-info

 If everything is set up correctly, you should see information about your cluster.
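
You can also confirm that the cluster's node(s) are registered and ready (for a default kind cluster this will be a single control-plane node):

$ kubectl get nodes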

 3. Creating a Simple Web Server


Now that our environment is set up, we can start creating our simple web server. For this example, we'll use the official nginx Docker image and serve a static HTML page from a ConfigMap.

Create a new file called web-server.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: web-server
  replicas: 1
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
      - name: web-server
        image: nginx:1.22.0-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        configMap:
          name: web-server-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-server-config
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
      <title>Simple Web Server</title>
    </head>
    <body>
      <h1>Hello, World!</h1>
      <p>This is a simple web server running in Kubernetes.</p>
    </body>
    </html>

Let's break the manifest down, starting with the Deployment.

The Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: web-server
  replicas: 1
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
      - name: web-server
        image: nginx:1.22.0-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        configMap:
          name: web-server-config

 

This is a basic deployment that specifies a single replica of a container called web-server. Here’s what each field means:

apiVersion and kind: These specify the Kubernetes API version and resource type, respectively.

metadata.name: This is the name of the deployment.

spec.selector.matchLabels.app: This selects the pod(s) that the deployment will manage. In this case, the label app is set to web-server.

spec.replicas: This specifies the desired number of replicas of the pod(s) to run.

spec.template.metadata.labels.app: This labels the pod(s) created by the deployment with the app: web-server label.

spec.template.spec.containers: This specifies the containers to run in the pod(s). In this case, we’re using the nginx:1.22.0-alpine Docker image, which serves as a basic web server.

spec.template.spec.containers[].ports[].containerPort: This specifies the port that the container exposes.

spec.template.spec.containers[].volumeMounts: This mounts the html volume into the container at /usr/share/nginx/html.

spec.template.spec.volumes[].configMap.name: This specifies the name of the ConfigMap that contains the index.html file.

The ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: web-server-config
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
      <title>Simple Web Server</title>
    </head>
    <body>
      <h1>Hello, World!</h1>
      <p>This is a simple web server running in Kubernetes.</p>
    </body>
    </html>

 

This is a basic ConfigMap that contains the index.html file that our web server will serve. Here’s what each field means:

  • apiVersion and kind: These specify the Kubernetes API version and resource type, respectively.
  • metadata.name: This is the name of the ConfigMap.
  • data.index.html: This is the key-value pair that contains the contents of the index.html file.
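
As an aside, instead of writing the HTML inline, you could generate an equivalent ConfigMap from a local index.html file (assuming such a file exists in your current directory) and print it as YAML:

$ kubectl create configmap web-server-config --from-file=index.html --dry-run=client -o yaml

This is handy when the file is too large to embed comfortably in the manifest.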

 

4. Deploying the Web Server to Kubernetes

Now that we have our YAML manifest, we can use kubectl to deploy the web server to our Kubernetes cluster.
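
If you want to validate the manifest first without creating anything, you can do a client-side dry run:

$ kubectl apply -f web-server.yaml --dry-run=client --namespace=default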

Since both the Deployment and the ConfigMap are defined in web-server.yaml, a single command creates them both:

$ kubectl apply -f web-server.yaml --namespace=default

This will create the web-server-config ConfigMap and the web-server deployment in the default namespace.

We can verify that the deployment and its associated resources have been created:

$ kubectl get deployments,pods --namespace=default

This should return something like:

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/web-server   1/1     1            1           3m3s

NAME                              READY   STATUS    RESTARTS   AGE
pod/web-server-68d7f94b6d-7s57s   1/1     Running   0          3m3s

This shows that our deployment is running and that we have one pod (web-server-68d7f94b6d-7s57s) running the web-server container.
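
You can also explicitly wait for the rollout to finish, which is useful in scripts:

$ kubectl rollout status deployment/web-server --namespace=default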

5. Updating the Web Server

Now that our web server is running, we can make changes to the index.html file and update the web server without needing to redeploy the entire application.

To update the index.html file, we can edit the ConfigMap directly using kubectl:

$ kubectl edit configmap web-server-config --namespace=default

 This will open up an editor where we can modify the index.html file. Once we’ve made our changes, we can save and exit the editor.

After the ConfigMap has been updated, the mounted files in the running container are refreshed automatically; this can take up to a minute or so, depending on the kubelet's sync period. Because the deployment is not exposed outside the cluster, we can use port forwarding to reach it and verify the change:

$ kubectl port-forward deployment/web-server 8080:80 --namespace=default

Then, in another terminal:

$ curl http://localhost:8080

This should return the updated index.html file.
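
Alternatively, and more in keeping with the declarative approach used so far, you can edit the index.html entry in web-server.yaml locally and re-apply the manifest:

$ kubectl apply -f web-server.yaml --namespace=default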

 6. Scaling the Web Server

 Now that our web server is running, we may want to scale the number of replicas to handle more traffic. We can do this using kubectl:

$ kubectl scale deployment web-server --replicas=3 --namespace=default

 This scales the web-server deployment to three replicas. We can verify that the replicas are running:

$ kubectl get deployments,pods --namespace=default

 This should return something like:

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/web-server   3/3     3            3           10m

NAME                              READY   STATUS    RESTARTS   AGE
pod/web-server-68d7f94b6d-7s57s   1/1     Running   0          10m
pod/web-server-68d7f94b6d-jjdfp   1/1     Running   0          5m
pod/web-server-68d7f94b6d-tz76p   1/1     Running   0          5m

 This shows that we now have three replicas of the web-server container running.
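
If you prefer not to pick a replica count by hand, and the metrics-server add-on is installed in your cluster, you can also let Kubernetes scale the deployment automatically based on CPU usage:

$ kubectl autoscale deployment web-server --min=1 --max=5 --cpu-percent=80 --namespace=default

This creates a separate HorizontalPodAutoscaler resource (which you would need to delete separately), and it only works if the containers declare CPU requests; see the resource limits tip later in this tutorial.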

 7. Deleting the Web Server

 To delete the web server and all of its associated resources, we can use kubectl:

$ kubectl delete -f web-server.yaml --namespace=default

 This deletes the web-server deployment, the ConfigMap, and all of the associated pods.

We can verify that everything has been deleted:

$ kubectl get deployments,pods,configmaps --namespace=default

 The web-server deployment, its pods, and the ConfigMap should no longer appear in the output.

 8. Tips and Tricks

 Use Imperative Commands for Quick Changes

While creating YAML manifests is the recommended way to manage Kubernetes resources, it can be slow and cumbersome for making quick changes. For example, if you just want to change the number of replicas of a deployment, you can use an imperative command instead of modifying the YAML manifest:

$ kubectl scale deployment web-server --replicas=3

This will change the number of replicas of the web-server deployment to 3.

Use Labels and Selectors to Group Resources

Labels and selectors are key concepts in Kubernetes that allow you to group resources and manage them together. For example, you can use labels to group all of the resources associated with a particular application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      role: web
  template:
    metadata:
      labels:
        app: my-app
        role: web
    spec:
      containers:
      - name: web-server
        image: my-web-server
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-server
spec:
  selector:
    app: my-app
    role: web
  ports:
  - name: http
    port: 80
    targetPort: 80

 

In this example, both the deployment and the service have labels that identify them as part of the my-app application. The service also uses a selector to only forward traffic to pods with the app: my-app and role: web labels.
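
The same labels are also handy on the command line. For example, you can list every resource belonging to the application with a label selector:

$ kubectl get deployments,pods,services -l app=my-app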

Use Namespaces to Organize Resources

Namespaces are another way to organize resources in Kubernetes. They allow you to create isolated environments within a single cluster. For example, you might create a separate namespace for each team or project.

To create a namespace, you can use the kubectl create namespace command:

$ kubectl create namespace my-namespace

To deploy resources to a namespace, you can add the --namespace flag to your kubectl apply or kubectl create commands:

$ kubectl apply -f web-server.yaml --namespace=my-namespace

To view the resources in a namespace, you can use the kubectl get command with the --namespace flag:

$ kubectl get deployments,pods,services --namespace=my-namespace
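
If you get tired of typing the --namespace flag, you can set a default namespace for your current kubectl context:

$ kubectl config set-context --current --namespace=my-namespace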

Use Port Forwarding for Debugging

Sometimes it’s helpful to debug a running container by accessing it from your local machine. You can use port forwarding to forward traffic from a local port to a port on a container:

$ kubectl port-forward pod/web-server-xxxxx 8080:80

This will forward traffic from port 8080 on your local machine to port 80 on the web-server container. You can then access the web server by visiting http://localhost:8080 in your web browser.
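
You can also forward to a deployment or service instead of a specific pod, which saves you from looking up the generated pod name:

$ kubectl port-forward deployment/web-server 8080:80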

Use Kubernetes Dashboards for Visualization

Kubernetes provides a web-based dashboard that allows you to visualize the resources running in your cluster. You can install the dashboard by applying its manifest:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

 This will deploy the dashboard to your cluster. To access the dashboard, you need to create a proxy connection to your cluster:

$ kubectl proxy

 This will start a proxy server on your local machine. You can then access the dashboard by visiting the following URL in your web browser:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

 Note that this URL is only accessible from your local machine. If you want to access the dashboard from a remote machine, you need to create an SSH tunnel or expose the dashboard using a LoadBalancer or NodePort service.
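
Note that the dashboard login screen asks for a bearer token. A minimal sketch for creating one, assuming a dedicated admin service account (the account name is illustrative, and kubectl create token requires Kubernetes 1.24 or later):

$ kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
$ kubectl -n kubernetes-dashboard create token dashboard-admin

Granting cluster-admin is convenient on a local cluster but far too permissive for anything shared or production-like.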

 Use Helm for Package Management

Helm is a package manager for Kubernetes that allows you to package and deploy applications as “charts”. Helm charts are pre-configured YAML manifests that can be easily installed and managed using Helm.

To install Helm, follow the instructions in the official documentation. Once you have Helm installed, you can install charts from the official Helm repository or from other sources.

For example, you can install the ingress-nginx chart (the successor to the now-deprecated stable/nginx-ingress chart) by adding its repository and installing it:

$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
$ helm install nginx-ingress ingress-nginx/ingress-nginx

This will install the NGINX ingress controller in your cluster, which allows you to route traffic to your services using host names and paths.
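
Helm also keeps track of everything it installs, so you can list and remove releases easily:

$ helm list
$ helm uninstall nginx-ingress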

 Keep Resource Limits in Mind

When deploying applications to Kubernetes, it’s important to keep resource limits in mind. Resource limits help to prevent your applications from consuming too much CPU or memory and causing problems for other applications running in the same cluster.

To set resource limits for a deployment, you can add the resources field to the deployment YAML manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
      - name: web-server
        image: my-web-server
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: "1"
            memory: "512Mi"
          requests:
            cpu: "0.5"
            memory: "256Mi"

 

In this example, we have set a limit of 1 CPU and 512 MiB of memory for the web-server container. We have also set a request of 0.5 CPU and 256 MiB of memory, which tells the Kubernetes scheduler how much CPU and memory to reserve for the container.
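
If the metrics-server add-on is installed in your cluster, you can compare actual usage against these requests and limits:

$ kubectl top pod -l app=web-server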

Backup and Restore Your Kubernetes Resources

Kubernetes provides several tools for backing up and restoring your cluster resources. These tools can be used to recover from disaster scenarios or to migrate resources between clusters.

One popular tool for backing up and restoring Kubernetes resources is Velero. Velero can be used to back up and restore resources to and from cloud object storage services like Amazon S3, Google Cloud Storage, and Microsoft Azure Blob Storage.

To install Velero, follow the instructions in the official documentation. Once you have Velero installed, you can use it to create and restore backups of your Kubernetes resources.
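
As a rough sketch of the workflow (the backup name and namespace are just examples), a backup and restore with the Velero CLI looks like this:

$ velero backup create default-backup --include-namespaces default
$ velero restore create --from-backup default-backup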

9. Conclusion

In this tutorial, we learned how to deploy a simple web server to a local Kubernetes cluster using kubectl. We covered creating a YAML manifest, deploying the web server, updating the configuration, scaling the deployment, and deleting the resources using kubectl. We also covered some tips and tricks for working with Kubernetes on a local cluster, including using Kubernetes dashboards for visualization, using Helm for package management, setting resource limits for deployments, and backing up and restoring Kubernetes resources.

Kubernetes is a powerful tool for deploying and managing containerized applications, and it can take some time to become comfortable with the various concepts and tools involved. However, with practice and experimentation, you can quickly become proficient in working with Kubernetes and deploying applications to your cluster.

I hope this tutorial has been helpful in getting you started with Kubernetes, and I encourage you to continue exploring the many features and capabilities of this powerful container orchestration platform.

 
