In this article, we'll talk about the basic principles of Kubernetes, the challenges of configuring and managing it, and a really simple alternative to harness its power.
At this point, you've probably heard the name Kubernetes a few times. It's often used synonymously with cloud-native software and applies to both software architecture and infrastructure. And it often plays a big role in DevOps. But what exactly is Kubernetes, what does it do, and what benefits does it bring?
First of all, Kubernetes is a tool for orchestrating all your containers running on nodes in the cloud. It was originally developed at Google to manage the large number of containers running in different environments. In 2014, Google released it as open source, and the Cloud Native Computing Foundation was created as part of the Linux Foundation, with Google donating Kubernetes to the foundation. Since then, it has become the de facto standard for container orchestration.
So Kubernetes is a container orchestration system. Containers are hosts for microservices, which are small independent applications. And sometimes you need quite a few of them: thousands of containers, at times across multiple environments. That's why you need an orchestration system.
There are several concepts and components that are important to understand how Kubernetes and its approach to orchestration work.
It all starts with a Kubernetes cluster, which includes a master node and multiple worker nodes that run your application workloads. Containers run on every node. The master node hosts the API server, the central management component that you can access through the UI, API, or CLI. It also runs the controller manager, which keeps track of what's happening in the cluster; the scheduler, which places pods on nodes; and etcd, the Kubernetes backing store.
All nodes in Kubernetes run on a virtual network for secure communication.
Pods are an important component of Kubernetes. A pod is an abstraction or wrapper around a container, i.e., the application or microservice, and usually a pod contains one application container. It is the smallest deployable unit in Kubernetes. Each pod is given its own IP address, through which it can be addressed by and communicate with other pods. It is important to know, however, that pods are ephemeral: when a pod is replaced, the new pod receives a new IP address.
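As a minimal illustration (all names and the image are placeholders), a single-container pod can be declared like this:

```yaml
# pod.yaml — a minimal pod wrapping one application container
apiVersion: v1
kind: Pod
metadata:
  name: my-app          # placeholder name
  labels:
    app: my-app
spec:
  containers:
    - name: my-app
      image: nginx:1.25 # any container image
      ports:
        - containerPort: 80
```

In practice you rarely create bare pods like this; they are usually managed through a Deployment, which is covered below.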
To preserve a stable IP address when a pod dies, for example due to lack of resources, there are Services. A Service groups Kubernetes pods that provide the same functionality and gives them a single IP address that persists even when individual pods die, so IP-based communication with the application keeps working.
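A Service selects pods by their labels. A minimal sketch (names match the placeholder pod above):

```yaml
# service.yaml — a stable address in front of all pods labeled app: my-app
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app      # matches the pod labels
  ports:
    - port: 80       # port the Service exposes
      targetPort: 80 # port the container listens on
```

Other pods in the cluster can now reach the application via the Service's stable IP or DNS name, regardless of which pods are currently alive.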
Depending on their configuration (e.g. ClusterIP), Services can be internal, accessible only to other services in the cluster, or external, accessible to the outside world. For external access, there is Ingress, which forwards external requests to an internal IP. The Kubernetes pod, a group of containers with their shared volumes, is a basic functional unit; its containers tend to be tightly coupled in terms of the services and functions they provide.
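A simple Ingress rule might route a hostname to the internal Service from above (the domain is a placeholder, and a working setup also requires an Ingress controller to be installed in the cluster):

```yaml
# ingress.yaml — route external HTTP traffic to an internal Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
    - host: my-app.example.com # placeholder domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service # the internal Service to forward to
                port:
                  number: 80
```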
A ConfigMap is an API object that is used to store non-confidential data in key-value pairs. Pods can use ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
A ConfigMap allows you to decouple environment-specific configuration from your container images so that your applications are easily portable.
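A sketch of a ConfigMap and a pod consuming it as environment variables (the keys and values are placeholders):

```yaml
# configmap.yaml — non-confidential configuration as key-value pairs
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  LOG_LEVEL: info            # placeholder values
  DATABASE_HOST: db.internal
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: nginx:1.25
      envFrom:
        - configMapRef:      # every key becomes an environment variable
            name: my-app-config
```

Because the configuration lives outside the image, the same image can run in development, staging, and production with different ConfigMaps.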
Disk files in a container are ephemeral, which causes problems for non-trivial applications running in containers. One problem is the loss of files when a container crashes: the kubelet restarts the container, but with a clean state. A second problem arises when sharing files between containers running together in a pod. The Kubernetes volume abstraction solves both problems.
So, in essence, you can use volumes to make your application data persistent.
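The file-sharing case can be sketched with the simplest volume type, emptyDir, which lives as long as the pod does (all names are placeholders; for data that must outlive the pod, a PersistentVolumeClaim would be used instead):

```yaml
# Two containers in one pod sharing an emptyDir volume. The volume
# survives container restarts, but is deleted together with the pod.
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "sleep 5 && cat /data/msg && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```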
Then there is load balancing, which ensures that traffic is routed to services as efficiently and effectively as possible. Kubernetes load balancing makes the most sense when you consider how Kubernetes organizes containers. Kubernetes doesn't look at individual containers or individual instances of a service, but sees containers in terms of the specific services or groups of services they run or provide.
For true load balancing, Ingress is the most popular method. Ingress combines an Ingress resource, a set of rules for routing traffic, with an Ingress controller, a daemon that applies those rules from within a dedicated Kubernetes pod. The Ingress controller has its own sophisticated capabilities and built-in load-balancing features, and can be customized for specific providers or systems.
Another important aspect is Secrets, which are used to store confidential data such as credentials. Secrets are a native part of Kubernetes and are injected during deployment so that containers can use them at runtime, either as environment variables or mounted as a volume.
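A minimal Secret might look like this (the credentials are placeholders; `stringData` lets you write plain text, which Kubernetes stores base64-encoded):

```yaml
# secret.yaml — confidential key-value data
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  DB_USER: admin       # placeholder credentials — never commit real ones
  DB_PASSWORD: changeme
```

In a pod spec you can then reference it via `envFrom` with a `secretRef`, analogous to a ConfigMap, or mount it as a volume.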
A Kubernetes deployment tells Kubernetes how to create or modify instances of the pods that contain a containerized application. Deployments can help efficiently scale the number of replication pods, enable the rollout of updated code in a controlled manner, or revert to a previous deployment version if needed.
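A Deployment wraps a pod template and keeps the desired number of replicas running (names and image are placeholders):

```yaml
# deployment.yaml — three identical replicas of one application pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3              # desired number of pods
  selector:
    matchLabels:
      app: my-app
  template:                # pod template used for each replica
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Changing the image triggers a controlled rolling update, and `kubectl rollout undo deployment/my-app` reverts to the previous version if needed.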
Finally, there are namespaces, a Kubernetes concept for organizing components, mainly pods and resources, into virtual groups that are strictly isolated from each other. These are useful for a) structuring components, b) avoiding conflicts between teams, c) sharing services between different environments, and/or d) setting access and resource restrictions at the namespace level. For example, you can set up separate namespaces for projects, for application components, or for environments such as development, staging, and production.
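A namespace with a resource restriction can be sketched like this (the name and limits are placeholders):

```yaml
# namespace.yaml — an isolated environment with a capped resource budget
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "4"     # cap total CPU requested in this namespace
    requests.memory: 8Gi  # cap total memory requested in this namespace
```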
There are several ways to manage Kubernetes, but in each case you work with the API server to create and configure clusters, pods, services, volumes and namespaces, load balancing, SSL certificates, and so on. The usual way to do this is with YAML files: declarative manifests that describe the desired state of your resources, which you then apply to the cluster. YAML is a human-friendly data serialization standard supported across programming languages.
Here’s an example of a YAML manifest:
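A typical manifest file (all names are placeholders) combines several resources separated by `---`, here a Deployment and the Service that exposes it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
```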
There are a number of tools available; here are two of the most common ones for configuring and managing Kubernetes:
kubectl. kubectl is the command-line tool that lets you give specific instructions to Kubernetes clusters: creating, inspecting, updating, and deleting resources, and allocating them across the cluster.
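A few everyday kubectl commands as a sketch (the file and resource names are placeholders, and the commands assume a configured cluster):

```shell
kubectl apply -f deployment.yaml   # create or update resources from a manifest
kubectl get pods -n default        # list pods in a namespace
kubectl describe pod my-app        # detailed state and recent events
kubectl logs my-app                # container logs
kubectl scale deployment my-app --replicas=3
kubectl delete -f deployment.yaml  # remove the resources again
```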
k9s. k9s is a terminal-based user interface for interacting with your Kubernetes clusters. The goal of this project is to make it easier to navigate, monitor, and manage your deployed applications in the wild. K9s continuously monitors Kubernetes for changes and then provides commands to interact with the monitored resources.
Fortunately, there's also an easy way to harness the full power of Kubernetes without having to manage it.
With mogenius, you can take advantage of Kubernetes without tedious configurations and with a user-friendly UI or API to deploy, run, and scale your applications. It takes care of all the tedious stuff between building an application and deploying it to your users in the cloud.
The platform automates the entire process from setting up stages in isolated namespaces to deploying application containers on any stage and with comprehensive security features.
Here are all the tasks it takes off your plate in a nutshell:
So let's see how easy it can be to deploy and scale a container with Auto-Config Kubernetes from mogenius. We will show screenshots from mogenius Studio; alternatively, you can manage the whole platform with API calls.
Head on over to https://studio.mogenius.com/user/registration to create a mogenius account, which only takes a few seconds.
Once you are logged in to mogenius Studio, create a new cloudspace, which also only takes a few seconds:
The mogenius cloudspace is your project space. It contains a budget of resources that you can assign to individual services. By default, it includes a production stage, which is a Kubernetes namespace that is already fully set up for deployments at this point.
We are now ready to deploy a container as a Kubernetes service and scale up a few pods. In this example, we will use a prebuilt container image from a registry. Alternatively, you can pull a Dockerfile from a repository and have it built and deployed automatically, or choose a pre-configured service from the service library to start a new application, database, or tool.
The system will now start the process and deploy the container image as a service on Kubernetes, with one pod/replica set in your cloudspace, in this case on the production stage.
It’s that simple. Once the deployment has gone through, you will see the dashboard for your nginx service:
In the dashboard, you will see important stats of resource usage from your total resource budget available to your cloudspace. These are the Kubernetes resource limits to make sure your service stays within the resource budget you want to assign to it.
You can also see deployment logs and application runtime logs in one place to facilitate debugging, as well as build logs if your application had to be built prior to deployment, which was not necessary in this case.
Notice that your service is assigned both an internal and an external hostname.
The internal hostname allows other services in your cloudspace to access this service, while the external hostname is for external users. You can also assign your own domain name to it, but that’s for later. For now, it’s important to know that the SSL certificates ensuring secure, encrypted access to the service are automatically configured and managed for you, so you don’t have to worry about that aspect.
Now let’s scale this application service. In this case, I want to use Kubernetes horizontal scaling and load balancing between replica sets to make sure my service remains available even if one pod crashes.
In the dashboard you can manage resource allocation, scaling options, and environment variables, stop and restart your service, and use the full power of Kubernetes with just a few clicks.
Creating a local development environment on Kubernetes can be tricky. Discover a simple yet powerful approach with Docker Desktop and mogenius.