Kubernetes for Dummies… really

In this article, we'll talk about the basic principles of Kubernetes, the challenges of configuring and managing it, and a really simple alternative to harness its power.

At this point, you've probably heard the name Kubernetes a few times. It's often used synonymously with cloud-native software and applies to both software architecture and infrastructure. And it often plays a big role in DevOps. But what exactly is Kubernetes, what does it do, and what benefits does it bring?

What is Kubernetes?

First of all, Kubernetes is a tool for orchestrating all your containers running on nodes in the cloud. It was originally developed at Google to manage the large number of containers running in its different environments. In 2014, Google released it as open source and made it available to the public. Soon after, the Cloud Native Computing Foundation (CNCF) was created as part of the Linux Foundation, and Google donated Kubernetes to it. Since then, Kubernetes has become the de facto standard for container orchestration.

So Kubernetes is a container orchestration system. Containers are hosts for microservices, which are small, independent applications. And sometimes you need quite a few of them: thousands of containers, at times spread across multiple environments. That's why you need an orchestration system.

Central functions

  • High availability: no downtime
  • Scalability and high performance: scale up or down so your application adapts as load increases or decreases
  • Disaster recovery: backup and restore
  • Dynamic resource allocation: RAM, CPU, and even bandwidth with the right CNI plugin
  • Access control: RBAC and IAM
  • Networking: firewalling, subnets, VPNs, port forwarding
  • And many more…

How does Kubernetes work?

Several concepts and components are important for understanding how Kubernetes and its approach to orchestration work.

It all starts with a Kubernetes cluster, which includes a master node (the control plane) and multiple worker nodes for your application workloads. The containers run on the worker nodes. The master node contains the API server, the central management endpoint that you can access through the UI, API, or CLI. It also contains the controller manager, which keeps track of what's happening in the cluster, the scheduler, which decides on which node each pod is placed, and etcd, the Kubernetes backing store.

All nodes in a Kubernetes cluster communicate with each other over a virtual network.

Pods

Pods are an important component of Kubernetes. A pod contains a container, i.e., the application or microservice. It is an abstraction, or wrapper, around a container, and usually a pod contains one application container. It is the smallest deployable unit in Kubernetes. It is important to know that each pod gets its own IP address, through which it can be addressed and communicate with other pods, and that pods are ephemeral: when a pod dies, it is replaced by a new one with a new IP address.
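
As a sketch of what this looks like, here is a minimal pod manifest (the name, label, and nginx image are just illustrative examples):

```yaml
# pod.yaml: a minimal pod wrapping a single container
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app           # label used later to select this pod
spec:
  containers:
    - name: my-app
      image: nginx:1.25   # the application container
      ports:
        - containerPort: 80
```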

Services

To preserve a stable IP address when a pod dies, for example due to lack of resources, there are Services. A Service sits in front of a group of pods that provide the same functionality. It has an IP address that persists even if individual pods die, ensuring that IP-based communication with the application keeps working as pods come and go.

Services can be internal, accessible only to other services in the cluster (for example, type ClusterIP), or external, accessible to the outside world (for example, type NodePort or LoadBalancer). For external HTTP traffic there is also Ingress, which forwards external requests to an internal service. Remember that the Kubernetes pod, a group of containers with their shared volumes, is the basic functional unit; its containers tend to be tightly coupled in terms of the services and functions they provide.
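
A minimal Service manifest might look like this, assuming pods labeled app: my-app as in the sketch above:

```yaml
# service.yaml: a stable address in front of all pods labeled app: my-app
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP      # internal only; NodePort/LoadBalancer expose it externally
  selector:
    app: my-app        # traffic is forwarded to pods with this label
  ports:
    - port: 80         # port the Service listens on
      targetPort: 80   # port the container listens on
```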

ConfigMap

A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.

A ConfigMap allows you to decouple environment-specific configuration from your container images so that your applications are easily portable.
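
As an illustration, here is a ConfigMap and a pod that loads its keys as environment variables (the names and values are examples):

```yaml
# configmap.yaml: non-confidential key-value configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  LOG_LEVEL: "info"
  DATABASE_HOST: "db.internal"
---
# A pod consuming every key in the ConfigMap as an environment variable
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: nginx:1.25
      envFrom:
        - configMapRef:
            name: my-app-config
```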

Volumes

Files on a container's disk are ephemeral, which causes problems for non-trivial applications running in containers. One problem is the loss of files when a container crashes: the kubelet restarts the container, but with a clean state. A second problem occurs when sharing files between containers running together in a pod. The Kubernetes volume abstraction solves both problems.

So, in essence, you can use volumes to make your applications' data persistent.
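
For example, two containers in one pod can share files through an emptyDir volume, which survives container crashes but lives only as long as the pod (for truly durable storage you would use a PersistentVolumeClaim instead). A sketch, with illustrative names and images:

```yaml
# Two containers in one pod sharing files via a volume
apiVersion: v1
kind: Pod
metadata:
  name: shared-files
spec:
  volumes:
    - name: shared
      emptyDir: {}               # outlives container crashes, not the pod
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date >> /data/log.txt; sleep 5; done"]
      volumeMounts:
        - name: shared
          mountPath: /data
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "touch /data/log.txt; tail -f /data/log.txt"]
      volumeMounts:
        - name: shared
          mountPath: /data
```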

Load Balancing

Then there is load balancing, which ensures that traffic is routed to services as efficiently and effectively as possible. Kubernetes load balancing makes the most sense when you consider how Kubernetes organizes containers: it doesn't look at individual containers or individual instances of a service, but sees containers in terms of the specific services or groups of services they run or provide.

For true load balancing, Ingress is the most popular method. Ingress combines an Ingress resource, a set of routing rules for incoming traffic, with an Ingress controller, a daemon that applies those rules from within a dedicated Kubernetes pod. Ingress controllers have their own sophisticated capabilities and built-in load-balancing features, and can be customized for specific providers or systems.
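
A minimal Ingress resource might look like this, assuming an Ingress controller is installed in the cluster and the my-app Service from earlier exists (the hostname is a placeholder):

```yaml
# ingress.yaml: route external HTTP requests to an internal Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: myapp.example.com    # placeholder domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app     # the internal Service to forward to
                port:
                  number: 80
```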

Secrets

Another important aspect is Secrets, which are used to store confidential data such as credentials. Secrets are Kubernetes objects that are injected during deployment so that containers can use them at runtime. They can be exposed as environment variables or mounted as a volume for the pod to use at runtime.
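
A sketch of a Secret with example credentials, and how a container could reference one of its keys:

```yaml
# secret.yaml: confidential key-value data, base64-encoded in etcd
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:              # plain text here; Kubernetes encodes it on creation
  DB_USER: admin         # example values only
  DB_PASSWORD: s3cr3t
# In a container spec, a single key can then be injected like this:
#   env:
#     - name: DB_PASSWORD
#       valueFrom:
#         secretKeyRef:
#           name: db-credentials
#           key: DB_PASSWORD
```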

Deployments

A Kubernetes Deployment tells Kubernetes how to create or modify instances of the pods that hold a containerized application. Deployments help you efficiently scale the number of replica pods, roll out updated code in a controlled manner, or revert to a previous deployment version if needed.
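
A minimal Deployment, reusing the illustrative names from above; changing the image triggers a controlled rolling update:

```yaml
# deployment.yaml: declaratively manage a set of replica pods
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3               # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: my-app
  template:                 # the pod template, as in the Pod example above
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25   # updating this rolls out new pods gradually
# Revert to the previous version if a rollout goes wrong:
#   kubectl rollout undo deployment/my-app
```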

Namespaces

Finally, there are namespaces, a Kubernetes concept for organizing components, mainly pods and resources, into virtual groups that are isolated from each other. They are useful for a) structuring components, b) avoiding conflicts between teams, c) sharing services between different environments, and d) setting access and resource restrictions at the namespace level. Typical uses are separating projects, application components, or environments such as development, staging, and production into their own namespaces.
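
As an example, here is a namespace together with a ResourceQuota that caps the resources of everything inside it (the name and limits are illustrative):

```yaml
# namespace.yaml: an isolated group of components with resource limits
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "4"       # total CPU all pods in the namespace may request
    requests.memory: 8Gi    # total memory they may request
    pods: "20"              # maximum number of pods
```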

Managing Kubernetes

There are several ways to manage Kubernetes, but in each case you work with the API server to create and configure clusters, pods, services, volumes and namespaces, load balancing, SSL certificates, and so on. The usual way to do this is with YAML files: declarative manifests that describe the desired state of your resources and that you apply to the cluster. YAML is basically a human-friendly data serialization standard that works across programming languages.

Here’s an example of what a YAML manifest can look like.
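
This is a minimal sketch, assuming a simple nginx-based web service (all names, the image, and the ports are illustrative); note that a single file can hold several objects, separated by ---:

```yaml
# app.yaml, applied with: kubectl apply -f app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  selector:
    app: hello-web
  ports:
    - port: 80
      targetPort: 80
```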

There are a number of tools available; here are two of the most common ones you will need to configure and manage Kubernetes:

kubectl. kubectl is the command-line tool that allows you to give specific instructions to Kubernetes clusters. This mainly involves the allocation and management of resources.
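
A few typical kubectl commands, as a sketch (my-app, staging, and the file name are placeholders):

```bash
kubectl get pods -n staging                    # list pods in a namespace
kubectl describe pod my-app-7d4b9c             # inspect a pod's state and events
kubectl apply -f deployment.yaml               # create/update resources from a manifest
kubectl logs deployment/my-app                 # read application logs
kubectl scale deployment/my-app --replicas=5   # scale the deployment
```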

k9s. k9s is a terminal-based user interface for interacting with your Kubernetes clusters. The goal of this project is to make it easier to navigate, monitor, and manage your deployed applications in the wild. K9s continuously monitors Kubernetes for changes and then provides commands to interact with the monitored resources.

Challenges managing Kubernetes

  • Well, for starters, a lot of configuration needs to be done to even get started, and it's not that simple. Kubernetes is powerful but also very complex.
  • Managing workflows and scaling containers also requires a lot of ongoing configuration and maintenance.
  • There are a lot of points of failure. The entire chain from application code to ongoing deployment involves a series of steps that you need to configure, monitor and control. At any of these steps, a minor misconfiguration can cause deployment or scaling to fail or, in the worst case, result in unexpected costs and resource consumption.
  • Writing YAML files also presents a number of challenges: First, strict indentation is required, so a single misplaced space can cause the entire manifest to fail. Then, you end up writing a lot of code just to manage Kubernetes workflows, on top of the actual application code. And finally, there's the question of where to store those files, such as in a separate repository. So you have to maintain complex manifests to run a sometimes complex software architecture, which increases the overall complexity of a project.
  • Monitoring what is happening in Kubernetes and in the deployment process can also be challenging, as one needs to know where to look for deployment logs to detect failures, traffic between pods, etc. Logging data is created in many places, and aggregating it all and keeping track of it is a task in itself.
  • Updating or upgrading Kubernetes can also be a tricky process. You can run into breaking changes: some features are deprecated and removed while others are added, and you need to track all changes between your running version and the upgraded version. Most hyperscalers do not permit downgrading if something fails.

Fortunately, there's also an easy way to harness the full power of Kubernetes without having to manage it.

With mogenius, you can take advantage of Kubernetes without tedious configurations and with a user-friendly UI or API to deploy, run, and scale your applications. It takes care of all the tedious stuff between building an application and deploying it to your users in the cloud.

The platform automates the entire process, from setting up stages in isolated namespaces to deploying application containers on any stage, all with comprehensive security features.

Here are all the tasks it takes off your plate in a nutshell:

  • Provisioning cloud resources, e.g. virtual machine scale sets and storage, with a public cloud provider such as Microsoft Azure or AWS.
  • Setting up a Kubernetes cluster, including all configuration of the master node and the worker nodes, ingress, services, etc.
  • Setup and automatic assignment of namespaces (= mogenius cloudspace).
  • Set up a CI/CD pipeline to automatically deploy updates for your containerized applications
  • Deploy and scale pods with a user-friendly UI or via API
  • Automatically configure and assign SSL-encrypted IP addresses and URLs to services, with automatic SSL certificate handling
  • Automatically configured config maps
  • Automatic creation of volume mounts accessible to all services in a namespace
  • Consolidation of all logs from Kubernetes deployment processes, build processes, and application runtime in one place to easily track and troubleshoot errors during deployment or at runtime
  • Enterprise-grade DDoS protection and web application firewall protection for all namespaces and services
  • Automatic detection of vulnerabilities in containers to identify updates needed for security reasons
  • Deployment with global CDN
  • Keeping your spending under control: mogenius ensures you stay within your budget and gives you an easy way to track your costs

So let's see how easy it can be to deploy and scale a container with Auto-Config Kubernetes from mogenius. We will show screenshots from the mogenius Studio, but you can also manage the whole platform with API calls.

Head on over to https://studio.mogenius.com/user/registration to create a mogenius account, which only takes a few seconds.

Once you are logged in to the mogenius Studio, create a new cloudspace, which also only takes a few seconds:

The mogenius cloudspace is your project space. It contains a budget of resources that you can assign to individual services. By default, it contains a production stage, which is a Kubernetes namespace and is already fully set up for deployments at this point.

  1. Choose a name and create a new namespace
  2. Add additional stages, each of which again represents a Kubernetes namespace and is strictly isolated from the others in terms of resources:

Now let’s go back to the dashboard:

We are now ready to deploy a container as a Kubernetes service and scale up a few pods. In this example, we will use a prebuilt container image from a registry. Alternatively, you can pull, automatically build, and deploy a Dockerfile from a repository, or choose a pre-configured service from the service library to start a new application, database, or tool.

  1. In the dashboard overview, select container images
  2. Enter the name and version (tag) of the container image in the image registry. By default, the system pulls images from Docker Hub.
  3. You can specify any registry URL there, and also pull private container images by setting the access credentials
  4. Make sure you enter the right internal port
  5. Keep “Expose” selected if you want to make this service accessible outside of your cloudspace, i.e. to the Internet.
  6. Please refer to the documentation for each service to determine the right internal port number.
  7. You can leave the rest of the default settings and create the service

Now the system will start the process and deploy the container image as a service on Kubernetes, with one pod/replica set in your cloudspace, in this case on the production stage.

It’s that simple. Once the deployment has gone through, you will see the dashboard for your nginx service:

In the dashboard, you will see important stats on resource usage, drawn from the total resource budget available to your cloudspace. These are the Kubernetes resource limits that make sure your service stays within the resource budget you want to assign to it.

You can also see deployment logs and application runtime logs in one place to facilitate debugging, as well as build logs if your application had to be built prior to deployment, which in this case was not necessary.

Notice that an internal and an external hostname have been assigned to your service.

The internal hostname is important for the other services in your cloudspace to access this service, while the external hostname lets external users access it. You can also assign your own domain name to it, but that's for later. For now, it's important to know that any SSL certificates needed to ensure secure and encrypted access to the service are automatically configured and managed for you, so you don't have to worry about that aspect.

Now let's scale this application service. In this case I want to use Kubernetes' horizontal scaling and load balancing across replicas to make sure my service is redundantly available in case one pod crashes.

  1. Select Resources from the service menu and add additional instances
  2. Save your changes; the deployment process is triggered again and another pod is spun up. It's really that easy.
  3. View the logs for each application instance to make sure everything is OK

You can manage your resource allocation, scaling options, and environment variables, and stop and restart your service in the dashboard, using the full power of Kubernetes with just a few clicks.

Ready to get started?

Jump right in with a free trial or book a demo with a solution architect to discuss your needs.
