Using Kubernetes Labels in Deployment and Release Processes

Kubernetes is often used to host and manage complex applications consisting of multiple resources. When managing several application instances, the sheer volume of resources in a Kubernetes cluster can become substantial. To help you keep track, Kubernetes provides tools like labels and selectors for managing these resources effectively.
Labels categorize Kubernetes resources with arbitrary key-value pairs, while selectors filter resources based on these labels, making it easier to find specific objects amidst the noise. While labels and selectors themselves do not serve a single purpose, they are supported by numerous Kubernetes features and can facilitate various workflows:
- Resource management: Group resources like pods and services via labels to enable easier management, scaling, monitoring, or targeting of specific subsets.
- Service discovery and load balancing: Use labels in services to route traffic to the correct pods, enhancing load balancing across application instances.
- Deployment strategies: Employ labels to facilitate deployment strategies like blue-green or canary deployments.
This guide shows how labels and selectors can simplify Kubernetes resource management and improve flexibility in scaling, updating, and securing applications.
Understanding Kubernetes Labels and Selectors: The Basics

Before exploring examples, let's review the basics of how labels and selectors work in Kubernetes.
Labels

Kubernetes labels are key-value pairs attached to resources, typically following the syntax key=value. For instance, to create an Nginx pod with the label app=nginx, you could pass the label via the -l flag:
kubectl run nginx --image=nginx:latest -l app=nginx
Alternatively, if you want to define this pod using a YAML manifest, you would use standard YAML key-value syntax, specifying the labels under the metadata.labels object:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  containers:
  - image: nginx:latest
    name: nginx
Labels are dynamic, allowing you to add, remove, and modify them without restarting or recreating the object they're attached to. You can imperatively modify the labels on a running resource using the following commands:
# Add a label
kubectl label pod nginx environment=production
# Remove a label (this will remove the `app` label)
kubectl label pod nginx app-
# Modify an existing label
kubectl label pod nginx environment=staging --overwrite
After running these commands against the pod created earlier, the pod will be left with a single environment=staging label.
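You can confirm the result by listing the pod with its labels; the LABELS column should show just environment=staging:

```shell
# Display the pod along with all of its labels
kubectl get pod nginx --show-labels
```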
Labels are arbitrary, and you can define whatever labels you like. Still, common labeling strategies in Kubernetes include the following:

- Environment labels (environment=production)
- Version labels (version=1.0)
- Tier labels (tier=frontend)
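Applied together, these conventions might look like this in a resource's metadata (the values shown here are illustrative):

```yaml
metadata:
  labels:
    environment: production
    version: "1.0"
    tier: frontend
```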
Selectors

Selectors allow you to filter and select resources that match your specified criteria. There are two main types of selectors:

- Equality-based selectors: Match resources with (or without) a specific label (key=value or key!=value)
- Set-based selectors: Match resources based on inclusion in a set (key in (value1, value2) or key notin (value1, value2))
For instance, using these selectors with the kubectl get pods command might look something like this:
# Find pods in the production environment
kubectl get pods -l environment=production
# Find pods NOT in the staging environment
kubectl get pods -l environment!=staging
# Find pods of version 1.0 or 1.1
kubectl get pods -l 'version in (1.0, 1.1)'
# Find pods of versions other than 1.2 and 1.3
kubectl get pods -l 'version notin (1.2, 1.3)'
How Labels and Selectors Work Together

Labels and selectors offer a powerful combination for interacting with Kubernetes features like:

- Services: Use labels to discover and route traffic to the correct pods.
- ReplicaSets: Select pods based on labels for scaling and replication.
- Deployments: Use labels for rolling updates and rollbacks.
- Network policies: Control network traffic flow between pods using label-based rules.
- Resource quotas: Manage resource usage via labels across namespaces.
- Pod affinity/anti-affinity: Schedule pods based on proximity or separation criteria.
Using Labels and Selectors in Kubernetes

Let's look at some concrete examples of how Kubernetes labels and selectors can be used together in common real-world operations.
Create a Deployment with Environment and Version Labels

Suppose you are deploying an application that consists of a frontend and a backend. First, you might want to create a deployment for your application's frontend. You can include some labels to make the resources easier to operate on in subsequent steps. This manifest adds app, tier, version, and environment labels to both the deployment and the pods that it creates:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web-app
    tier: frontend
    version: "1.0"
    environment: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
        tier: frontend
        version: "1.0"
        environment: production
    spec:
      containers:
      - name: web-container
        image: nginx:1.27.0
Scale the Deployment to Multiple Replicas

A deployment created from the above manifest would start out with one replica. You may decide that you want to change this; perhaps your site is experiencing an influx of traffic, and you want to increase the replicas for all production deployments of your application. You could do so with the following command:

kubectl scale deployment --selector='environment=production,app=web-app' --replicas=3
This command uses a combination of selectors to target all deployments with the labels environment=production and app=web-app. Notably, this does not include the tier label, so if you also have a backend deployment for your application, it would be scaled as well. This is one of the useful traits of selectors: they let you operate on multiple resources at once while still giving fine-grained control over what gets selected.
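You can confirm the new replica counts by listing the matching deployments:

```shell
# List all production deployments of the app with their ready/desired replica counts
kubectl get deployments -l environment=production,app=web-app
```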
Update the Deployment to a New Version

A large number of kubectl commands support selectors. As another example, you could also update the image used by your production deployment using selectors. In this case, the -l flag is used instead of --selector; functionally, the two are identical, and -l is simply another way of specifying selectors for kubectl commands. Note that to match on several labels, you combine them into a single comma-separated selector; repeating the -l flag does not combine selectors, since the last occurrence overrides the earlier ones:

kubectl set image deployment -l app=web-app,environment=production web-container=nginx:1.27.1
This command will update the image used by the web-container container of any deployments with the app=web-app and environment=production labels.
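After updating the image, you can watch the rolling update progress for the deployment (the name web-app comes from the earlier manifest):

```shell
# Block until the rolling update completes (or fails)
kubectl rollout status deployment/web-app
```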
Selecting Pods by Label

A common use case for labels in busy Kubernetes clusters is finding and focusing on particular sets of resources, most commonly pods. With the application created by the above manifest, there are a few labels you could use and a few different ways you could specify them:
Select Pods from the Web Application Deployment

As you've seen, you can get pods that belong to a specific app:
kubectl get pods -l app=web-app
Select Pods Running Only the Latest Version of the Application

In some cases, you might be running multiple versions of the same application and are only interested in resources belonging to a specific version:

kubectl get pods -l app=web-app,version=1.0
Select Pods Running in the Production Environment

It's also quite common to run several versions of an application across different environments, such as staging, development, and production. Often, you'll only be interested in resources running in a particular environment:

kubectl get pods -l app=web-app,environment=production
Exclude Pods in the Production Environment from a Particular Operation

Alternatively, you might be interested in resources in all environments except a specific one. In this case, you can use the != equality operator to exclude a specific value:

kubectl get pods -l app=web-app,environment!=production
Select Pods That Are Part of the Frontend or Backend Tier of the Application

You may also want to select resources that fit more complex selectors. In this case, the command will select all application pods with either a frontend or backend tier:

kubectl get pods -l 'app=web-app,tier in (frontend, backend)'
By mixing equality- and set-based selectors, you can achieve high levels of specificity.
Select Pods That Have a Certain Label, Regardless of Its Value

Sometimes, you might want to see all resources with a given label, irrespective of its value. In this case, you can simply omit the value from the key-value selector, specifying just the key instead, like so:

kubectl get pods -l app
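The inverse also works: prefixing the key with ! selects resources that do not have the label at all:

```shell
# Find pods that have no `app` label (note the quotes, since ! is special in most shells)
kubectl get pods -l '!app'
```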
Create a Service That Targets Specific Pods

In Kubernetes, a service allows you to expose a network application that's running in one or more pods. Crucially, this gives you a way to reference the application without knowing the names or IP addresses of the running pods. However, for this to work, the service needs to know which pods it should send traffic to. This is a perfect use case for labels and selectors. Suppose you want to create a service that represents the backend of your application. A manifest for such a service might look like this:
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    tier: backend
    app: web-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
This would create a service that binds to pods with the tier=backend and app=web-app labels.
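To see which pod addresses the service has actually matched, you can inspect its endpoints:

```shell
# List the pod IPs currently selected by the service
kubectl get endpoints backend-service
```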
Use Node Selectors to Deploy Pods on Specific Types of Nodes

Another common use case for labels and selectors is describing the capabilities of specific nodes. You might have some nodes in your cluster with faster hardware or perhaps specialized hardware, like GPUs. In this case, you can apply a label to those nodes like so:
kubectl label nodes <node-name> node-type=high-performance
Then, when creating deployments and pods, you can specify that a resource should use one of these special nodes via the nodeSelector field:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fast-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fast-app
  template:
    metadata:
      labels:
        app: fast-app
    spec:
      nodeSelector:
        node-type: high-performance
      containers:
      - name: fast-container
        image: fast-app:latest
This will ensure that your deployment's pods are only scheduled onto nodes with the specified labels.
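You can check which nodes carry the label with the -L flag, which adds the label's value as a column in the output:

```shell
# Show all nodes with a NODE-TYPE column for the node-type label
kubectl get nodes -L node-type
```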
Create a Network Policy That Applies Only to Pods with Certain Labels

In Kubernetes, network policies allow you to control the flow of network traffic between pods. You might want to enforce a restriction that your backend pods can only receive traffic from pods labeled as belonging to the same application. In this case, you can use labels and selectors to specify an ingress.from rule:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web-app
This will ensure that the backend pods can only receive traffic from pods with the app=web-app label. Note that network policies are only enforced if your cluster's network plugin supports them.
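One way to verify the policy, assuming an enforcing network plugin and assuming the backend is reachable through the backend-service created earlier, is to attempt a connection from a temporary pod that lacks the app=web-app label; the request should time out:

```shell
# Launch a throwaway pod with no app=web-app label and try to reach the backend service
kubectl run policy-test --rm -it --image=busybox --restart=Never -- \
  wget -qO- -T 5 http://backend-service
```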
Use Labels to Implement a Blue-Green Deployment Strategy

The blue-green deployment strategy is a common deployment technique where you run two versions of your application, one labeled blue and the other green. When you update your application, you push the update to the color that is not receiving production traffic. Next, you switch the traffic to that new application version. If something goes wrong, your previous stable version is still running, and you can easily switch back to it. You can implement a basic version of blue-green deployment using labels and selectors.
To do this, first, create a service that routes traffic to one version of your application:
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    version: blue
    app: web-app
    tier: frontend
  ports:
  - port: 80
    targetPort: 8080
Next, deploy the new version of your application with a label of version=green:
kubectl apply -f green-deployment.yaml
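The green-deployment.yaml file is not shown in this guide; a minimal sketch might mirror the earlier frontend deployment, differing only in its name and version label:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-green
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app
      version: green
  template:
    metadata:
      labels:
        app: web-app
        tier: frontend
        version: green
        environment: production
    spec:
      containers:
      - name: web-container
        image: nginx:1.27.1
```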
You could directly access this new version to test it without changing the service that controls production traffic. This could be through a secondary service that exposes the nonactive version of the application for internal testing:
kubectl port-forward service/web-app-alt 8081:80
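The web-app-alt service referenced above is not defined in this guide; a sketch of such a secondary service, selecting the green version for internal testing, might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app-alt
spec:
  selector:
    version: green
    app: web-app
    tier: frontend
  ports:
  - port: 80
    targetPort: 8080
```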
Once you are satisfied with the results of testing, you can update your production service to point to the version=green instance of your application:
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    version: green
    app: web-app
    tier: frontend
  ports:
  - port: 80
    targetPort: 8080
This would switch production traffic to the green version of the application. If something goes wrong, you can update the service again to switch back to the blue version with minimal risk.
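Instead of editing and re-applying the manifest, you can also flip the selector in place with kubectl patch (shown here switching back to blue):

```shell
# Patch only the service's selector; a merge patch leaves the other selector keys intact
kubectl patch service web-app -p '{"spec":{"selector":{"version":"blue"}}}'
```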
Wrapping Up

Kubernetes is a powerful tool, but keeping track of and managing all of the resources needed by a Kubernetes application can be complex. Luckily, most Kubernetes features support labels and selectors.
These primitives enable efficient resource organization, streamline service discovery, and enhance deployment strategies such as blue-green and canary releases. By leveraging labels and selectors, you can simplify management tasks, improve application scalability, and more easily work with your ever-growing collection of resources. Mastering labels and selectors is essential for optimizing Kubernetes workflows and ensuring smoother application operations. Another aspect of ensuring smooth Kubernetes operations is having suitable environments in which to work.
Even with proper labeling, navigating clusters and resources can be challenging for developers due to Kubernetes' high complexity. mogenius eliminates this complexity by empowering developers to create self-service Kubernetes workspaces with unified access control, helpful guardrails, and various other features to take the hassle out of managing Kubernetes resources.