tl;dr The goal: spinning up a simple and affordable Kubernetes cluster and deploying the first service
IaaS: Hetzner Cloud
DNS: Cloudflare
Environment and deployment: mogenius
Costs: ~EUR 4/month
What this is all about If you’re developing cloud-native software, it will probably run on Kubernetes sooner or later. But managing your own Kubernetes is often painful due to its high complexity and steep learning curve. Managed Kubernetes services (like EKS, AKS, or GKE), on the other hand, can quickly drain your budget. This is why we came up with an easy-to-use and cost-efficient setup that you can use for development environments, sandboxes, or your home lab.
In this detailed guide, we’ll walk through setting up a server on Hetzner Cloud and we’ll deploy the lightweight Kubernetes distribution K3s. We’ll then use mogenius to manage additional tools on Kubernetes as well as for deployment automation and monitoring. Let’s get started with Hetzner!
Set up a Hetzner Account The first step is to create an account with Hetzner. To do this, visit Hetzner Cloud and set up your account. A credit card is required during the sign up.
Create a Project Once you have your account, you can directly create your project in the Hetzner Cloud Console. This will be your central working environment where you will manage your cloud infrastructure.
Deposit SSH Key For a secure connection to your instances, it is essential to deposit an SSH key. You can enter this in the Security section of your project.
To create an SSH key you can use this command. Do not forget to choose a memorable name for your key, as you will need it later.
ssh-keygen -t ed25519 -C "your_email@example.com" -f <DESTINATION-FILE-NAME>
cat <DESTINATION-FILE-NAME>.pub
To add your SSH key to Hetzner, copy the entire public key, which is usually stored in the file <DESTINATION-FILE-NAME>.pub. Then paste it into the designated field in the Hetzner Cloud interface.
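As a concrete example, the whole key-generation step can also be run non-interactively. This is just a sketch: the key path below is a placeholder (here a temporary directory so nothing is overwritten), and `-N ""` creates the key without a passphrase, which you may not want for a production key.

```shell
# Generate an Ed25519 key pair without prompts; the path is a placeholder.
KEY_FILE="$(mktemp -d)/hetzner_k3s"
ssh-keygen -t ed25519 -C "your_email@example.com" -f "$KEY_FILE" -N "" -q

# This is the public key you paste into the Hetzner Cloud console:
cat "$KEY_FILE.pub"
```

The printed line starts with `ssh-ed25519` followed by the key material and your comment; that entire line is what goes into the Hetzner UI.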
Generate API Key For automation and interaction with the Hetzner Cloud API, you need an API key. Go to “Security > Tokens” in your project and generate a new token. Keep the generated token safe, for example in a password manager like 1Password. The token will not be displayed again.
Install Hetzner CLI To execute commands and control your cloud service, you need the Hetzner Command Line Interface (CLI). Alternatively, you can perform all actions through Hetzner’s user-friendly web interface (UI). However, the CLI allows for faster and simpler execution of these steps, so we’ll use it for the rest of this article.
brew install hcloud
We’re using Homebrew to install the Hetzner CLI, but you can find alternative options in the Hetzner docs.
Create a Context with Hetzner CLI To authenticate with your Hetzner account we’ll create a new context. A context in the Hetzner CLI represents a configuration including your API token. Create a new context with:
hcloud context create <PROJECT-NAME>
After entering the token, the context is active and will be used for all future terminal sessions.
Configure Private Network and Placement Group For internal communication of your instances, set up a private network.
hcloud network create --name <PRIVATE-NETWORK-NAME> --ip-range 10.0.0.0/16
hcloud network add-subnet <PRIVATE-NETWORK-NAME> --network-zone eu-central --type server --ip-range 10.0.0.0/16
Additionally, you can configure a Spread Placement Group if you require high availability. This ensures that your instances run on physically separate servers, so this step is not required if you plan on running a single-node Kubernetes.
hcloud placement-group create --name <PLACEMENT-GROUP-NAME> --type spread
Creating a VM Now, create a virtual machine (VM) with:
hcloud server create --datacenter nbg1-dc3 --type cax11 --name <SERVER-NAME> --image debian-12 --ssh-key <SSH-KEY-NAME> --network <PRIVATE-NETWORK-NAME> --placement-group <PLACEMENT-GROUP-NAME>
We are setting up a CAX11 server (based on the Arm64 architecture) in Nuremberg, Germany, that will run Debian 12. With this setup the costs are around EUR 4/month. Prices can vary, but overall you should get a similarly cost-efficient setup.
Configuring Firewall We’ll secure the VM with a firewall and define rules for inbound traffic. By default these rules allow traffic from all source IPs and you can modify them if you need to restrict inbound traffic.
hcloud firewall create --name <FIREWALL-NAME>
hcloud firewall add-rule <FIREWALL-NAME> --description "Allow SSH In" --direction in --port 22 --protocol tcp --source-ips 0.0.0.0/0 --source-ips ::/0
hcloud firewall add-rule <FIREWALL-NAME> --description "Allow HTTP In" --direction in --port 80 --protocol tcp --source-ips 0.0.0.0/0 --source-ips ::/0
hcloud firewall add-rule <FIREWALL-NAME> --description "Allow HTTPS In" --direction in --port 443 --protocol tcp --source-ips 0.0.0.0/0 --source-ips ::/0
hcloud firewall add-rule <FIREWALL-NAME> --description "Allow K3S In" --direction in --port 6443 --protocol tcp --source-ips 0.0.0.0/0 --source-ips ::/0
hcloud firewall add-rule <FIREWALL-NAME> --description "Allow ICMP In" --direction in --protocol icmp --source-ips 0.0.0.0/0 --source-ips ::/0
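The four TCP rules follow the same pattern, so they can also be generated in a loop. The sketch below only builds and prints the hcloud commands instead of running them, so you can review them first; the firewall name is a hypothetical placeholder.

```shell
FIREWALL_NAME="my-firewall"  # hypothetical name; use the one from your create step

# Build one hcloud command per TCP rule from port:description pairs.
CMDS=""
for rule in "22:Allow SSH In" "80:Allow HTTP In" "443:Allow HTTPS In" "6443:Allow K3S In"; do
  port="${rule%%:*}"
  desc="${rule#*:}"
  CMDS="$CMDS
hcloud firewall add-rule $FIREWALL_NAME --description \"$desc\" --direction in --port $port --protocol tcp --source-ips 0.0.0.0/0 --source-ips ::/0"
done

# Print the generated commands instead of executing them (pipe to sh to run for real).
printf '%s\n' "$CMDS"
```

ICMP has no port, so it stays a separate rule as shown above.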
Now assign the firewall to the previously created virtual machine.
hcloud firewall apply-to-resource <FIREWALL-NAME> --type server --server <SERVER-NAME>
Log into your server You can log into your server using the SSH key we created earlier.
ssh -l root <SERVER-IP>
You’ll find the IP of your server in the Hetzner UI on the detail page of your VM, or via the following command.
hcloud server describe <SERVER-NAME>
NFS Storage If you plan to use NFS storage, make sure the required client package is installed on the server after logging in. On systems where it’s missing, such as Debian 12, run the following command to install it:
apt-get install -y nfs-common
Disable swap Before proceeding with the Kubernetes installation, it’s crucial to disable swap, as K3s does not support running with swap memory enabled. Begin by turning off swap temporarily with the command
swapoff -a
To keep swap disabled, add the following line to
/etc/sysctl.conf
vm.swappiness = 0
If your server has a swap entry in /etc/fstab, comment it out as well so that swap stays off after a reboot. After modifying the file, validate that swap is indeed disabled by running
free -m
and ensuring that the swap line reads
Swap: 0 0 0
confirming that no swap is active.
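If you prefer to check this in a script rather than by eye, you can read the swap total straight from /proc/meminfo (Linux only); on a correctly configured node the value is 0 kB.

```shell
# Read the total swap size (in kB) from /proc/meminfo; 0 means swap is fully disabled.
swap_kb="$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)"
echo "SwapTotal: ${swap_kb} kB"
```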
Install AppArmor To finish the VM setup we’ll install AppArmor (Application Armor) and AppArmor-Utils to increase the security. AppArmor is a Linux kernel security module that implements Mandatory Access Control (MAC) to restrict the capabilities of programs based on individual profiles. It aims to enhance the security of the system by limiting what applications can do, based on a set of rules or policies that define which files and network ports the applications can access. These profiles are written in an easy-to-read language and can be set to an "allow" or "deny" policy for different system resources.
Apparmor-utils is a package that provides a collection of utilities for dealing with AppArmor profiles. These tools make it easier to manage AppArmor profiles, including their creation, updating, and monitoring.
apt update
apt upgrade -y
apt install apparmor apparmor-utils -y
Now, we’re done with the VM setup and we can move on to installing Kubernetes.
Install k3s We will install K3s on the virtual machine (VM). K3s is a lightweight Kubernetes distribution that has been deliberately simplified to allow for faster deployment and easier management. Overall, K3s enables quicker and simpler deployment of Kubernetes clusters while still offering the full functionality of Kubernetes for container orchestration and management.
First, generate a token that will be used for installing k3s. This token serves as an authentication method for communication between the server (or servers) and the agents (worker nodes) within the Kubernetes cluster.
openssl rand -base64 24
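As a quick sanity check: 24 random bytes encode to a 32-character base64 string, so the generated token should always have that length.

```shell
# Generate the cluster token; 24 random bytes become 32 base64 characters.
TOKEN="$(openssl rand -base64 24)"
echo "token length: ${#TOKEN}"
```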
Use the following command to install k3s on the VM. The previously generated token must be inserted in the command. The installation of traefik and metrics-server is initially deferred and will be performed later through mogenius.
curl -sfL https://get.k3s.io | sh -s - server \
  --disable traefik \
  --disable metrics-server \
  --write-kubeconfig-mode=644 \
  --node-name="$(hostname -f)" \
  --kubelet-arg="cloud-provider=external" \
  --token="<TOKEN>" \
  --tls-san="$(hostname -I | awk '{print $2}')"
We’re done with the installation of k3s on the VM. Next, let’s make sure that we can connect to the cluster locally.
Install kubectl and kubectx For efficient local management of Kubernetes we’ll install the Kubernetes command line interface kubectl on your local machine. In addition to that we’ll use kubectx to easily switch between contexts if you’re running multiple clusters. Open a new terminal window (we don’t want to run this on the VM but we’ll need the ssh connection again in a minute).
brew install kubectl
brew install kubectx
Prepare your kubeconfig file The configuration of your Kubernetes clusters is stored in the local kubeconfig file. If the local file ~/.kube/config doesn’t exist yet, create it.
Add k3s to your kubeconfig Now, on the Hetzner VM, open the k3s.yaml with the following command,
cat /etc/rancher/k3s/k3s.yaml
and copy the content to your local ~/.kube/config. Make sure to replace the default IP address 127.0.0.1 with the external IP of your server.
As your kubeconfig can contain multiple clusters you should change the cluster name from default to a more meaningful name, e.g. k3s-hetzner.
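Both changes can be made with a couple of sed expressions. The sketch below demonstrates this on a minimal dummy kubeconfig so it doesn’t touch your real ~/.kube/config; the server IP 203.0.113.10 is a placeholder for your actual external IP.

```shell
# Create a minimal dummy kubeconfig shaped like /etc/rancher/k3s/k3s.yaml.
cat > /tmp/kubeconfig-demo.yaml <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
EOF

# Point the server entry at the external IP (203.0.113.10 is a placeholder)
# and rename cluster/context/user from "default" to "k3s-hetzner".
sed -i.bak \
  -e 's/127\.0\.0\.1/203.0.113.10/' \
  -e 's/default/k3s-hetzner/g' \
  /tmp/kubeconfig-demo.yaml

cat /tmp/kubeconfig-demo.yaml
```

Applied to the real file copied from the server, the result is the kubeconfig shown below.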
In the end, your kubeconfig should look something like this.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <YOUR_DATA>
    server: https://<SERVER_IP>:6443
  name: k3s-hetzner
contexts:
- context:
    cluster: k3s-hetzner
    user: k3s-hetzner
  name: k3s-hetzner
current-context: k3s-hetzner
kind: Config
preferences: {}
users:
- name: k3s-hetzner
  user:
    client-certificate-data: <YOUR_DATA>
    client-key-data: <YOUR_DATA>
Test your connection Let’s make sure that you can connect to the K3s cluster from your local machine. Set the appropriate context using kubectx.
kubectx k3s-hetzner
Now you can use kubectl to test if you can retrieve data from your K3s.
kubectl get nodes
The response should look like this.
NAME       STATUS   ROLES                  AGE    VERSION
mogenius   Ready    control-plane,master   3d6h   v1.28.8+k3s1
Setting up mogenius Now that Kubernetes is running on our server, what’s next?! We want to deploy containers on Kubernetes, so there are usually a few more steps required, like ingress setup, certificate handling, build and deployment pipeline, and many more things we need to teach ourselves. Instead of manually handling these tasks, we're opting for a faster route to results. We can explore mogenius as a solution that automates most of the building blocks between our blank K3s and a deployed container, while also abstracting Kubernetes in a convenient manner.
Create an account First, sign up for mogenius and create an organization. The free tier doesn’t require a credit card and allows you to connect one cluster so it covers your K3s setup.
Add a cluster Once your account is ready, add a cluster. The name you enter for the cluster is only used for displaying it in your mogenius account.
Next, you’ll install the mogenius operator on K3s to connect your cluster with the mogenius platform. You can either use the command line interface or Helm for the installation. We’ll go with Helm and copy the command.
helm repo add mogenius https://helm.mogenius.com/public
helm repo update
helm install mogenius-operator mogenius/mogenius-k8s-manager -n mogenius --create-namespace --wait \
  --set global.cluster_name="k3s-hetzner" \
  --set global.api_key="API_KEY" \
  --set global.namespace="mogenius"
Switch to your terminal. Before running the install command, make sure that Helm is installed on your local machine.
brew install helm
Now paste the Helm command and run it with the context set to your K3s cluster. The mogenius operator will be installed and you can head back to the mogenius platform. A notification will be displayed once the operator has established a connection.
Once installed, the operator will suggest installing a set of services to finish your K3s setup. These tools are installed via Helm in the background.
After confirming at the bottom of the page it will take a minute for all services to be installed. You can check the status in the cluster settings.
Connect your domain To expose services externally a domain is required for your cluster. Connect it by adding two DNS records to your domain. If you don’t have one yet, you can sign up at a provider like Cloudflare and register your own domain quickly.
In the DNS settings of your domain, add two A records, pointing to the external loadbalancer IP of your K3s (it will be displayed in the cluster settings in mogenius once the ingress controller is installed).
*.k3s.yourdomain.com
k3s.yourdomain.com
This way, a subdomain will be created that is used as the default hostname, with a wildcard so that mogenius can automatically create external hostnames for each service. In mogenius, enter the subdomain (e.g. k3s.yourdomain.com) in the field “Loadbalancer host” and save your changes.
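In zone-file notation, the two records would look roughly like this (yourdomain.com, the TTL, and the IP are placeholders; in the Cloudflare UI you enter the same data through the A-record form):

```text
; wildcard for the service hostnames generated by mogenius
*.k3s.yourdomain.com.  300  IN  A  203.0.113.10
; the loadbalancer host itself
k3s.yourdomain.com.    300  IN  A  203.0.113.10
```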
With this setup, we’re now ready to deploy container images. Note that optionally you can also add credentials for an external container registry if you want to enable the internal build pipeline to deploy services from Github or Gitlab.
Create a project In the main navigation go to projects and create a new project. Select your K3s cluster from the dropdown and optionally adjust the resources assigned to this project. You can always change those later.
Deploy a service On your project dashboard add a new service from type container image.
We’ll deploy a simple nginx for testing and use the image from Docker Hub. The form field uses Docker Hub as a default, but you can also enter a URL to any other container registry.
For the internal port enter 80 and check the box to expose the service externally. You can leave all other settings as they are and deploy the service.
Your service will be deployed to your K3s immediately. An external hostname is created and a TLS certificate is issued automatically. You can access your service via the external hostname, or directly from other services in the namespace (which is a stage in your mogenius project) using the internal hostname.
The service also provides you with aggregated status monitoring so that you can always keep an eye on your services on Kubernetes. You can modify the deployment, scale your service with multiple pods, and manage environment variables.
For those who prefer to watch a brief video, we recommend this step-by-step guide. It shows how to connect a Kubernetes cluster on Hetzner to an organization and deploy a first service using the mogenius pipeline.
What’s happening in the background? With mogenius projects we can easily deploy any container to Kubernetes. The complexity of k8s is mostly abstracted and there is little to no configuration required. Great for moving fast, troubleshooting, and focusing on building the product. But especially when you’re starting with Kubernetes, it makes sense to understand what’s happening in the background. That’s why you can still access all workloads in Kubernetes directly. Check out the workload manager in your cluster settings. You’ll have access to all k8s resources, as well as a YAML editor, describe output, and a shell.
Wrap up In this guide we went through setting up a new server on Hetzner to deploy a simple and affordable Kubernetes cluster using K3s. We now have a stack at around EUR 4/month for playing around, prototyping, and development. At the same time, our setup is easily extensible if we want to launch in production later.
With mogenius on top of K3s we implemented a solution that takes care of creating namespaces, deployments, and all other required Kubernetes workloads. The tool allows us to quickly deploy services from Github, Gitlab, or any container registry without requiring deep knowledge of Kubernetes. Later, you can also invite your team and provide them with projects on the cluster, without them requiring direct access to the Kubernetes cluster.
We hope you enjoyed the guide! We’re thinking about making this a series, covering topics like scaling the cluster, working with the build pipeline, going to production, etc. Let us know what you think and feel free to reach out.