Getting cloud-native applications production ready  -  May 2022

Cloud-native: Two ways of getting your application production-ready

Gerrit

In recent years, cloud-native applications have gone from being a trend to becoming "the way forward" for developers. A case in point is CNCF's recent report on the state of cloud-native development in Q3 2021, which shows that adoption of cloud-native computing has reached nearly 50% worldwide.

The reasons for the rise of cloud-native apps are varied. However, one that stands out above the rest is the ability to leverage user feedback to improve the application continuously. This is crucial in today's market, where keeping customers happy is vital.

With the above in mind, we come to the main topic of this article. While it's true that cloud-native apps can deliver what users want at the pace that organizations need, this comes at a price: complexity.

That's why today we will cover two ways of getting your cloud-native application production-ready: the traditional way, and a new way that saves you time and effort.

Getting your application production-ready: the traditional way

Whether you have created a mobile application, an IoT backend service, or a web service, you will need to meet specific requirements to deploy your application in the cloud securely and reliably. The main ones are listed below.

Choosing an underlying infrastructure

For your application to run in the cloud, you will need a secure, scalable, and highly available infrastructure that provides resources such as storage, networking, computing power, and possibly one or more databases.

Building this infrastructure on-premises is a daunting task. On the one hand, you have to pick the right hardware for your use case: CPUs, memory, storage units, networking, firewalls, load balancers, and so on. On the other hand, configuring that hardware to suit your needs is also a challenge, requiring experience, time, and money.

Fortunately, modern public and private cloud computing services simplify the process, as you can rent such infrastructure and customize it to provide the resources you need to run your microservices.

So let's assume you take the most efficient path and use a public/private cloud provider that allows you to scale the infrastructure on-demand. Step one completed.

Provisioning the infrastructure

So you've chosen a cloud provider with a cutting-edge infrastructure with ultra-fast NVMe storage and the latest Intel/AMD CPUs to run your applications. That's exciting, but what about the tools, services, and runtime applications needed to get the most out of it?

As you can imagine, the latest hardware is useless if it's not provisioned to meet your specific needs, and fine-tuning the infrastructure requires a lot of experience and knowledge. However, as in the previous step, IaaS cloud hosting solutions come to the rescue.

Today, you can sign up for a managed Kubernetes service that offers you customizable, ready-to-use infrastructure. Well, almost ready.

In reality, what vendors like Amazon provide are the building blocks for you to design customized solutions. So your team is still primarily responsible for configuring persistent storage, CDNs, caching layers, reverse proxies, load balancing, and more.
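As an illustration, even something as basic as attaching persistent storage and exposing a service through a cloud load balancer on a managed Kubernetes cluster means writing and maintaining manifests yourself. A minimal sketch (names, sizes, and ports are placeholders):

```yaml
# PersistentVolumeClaim: requests a disk from the provider's storage backend
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
# Service of type LoadBalancer: asks the cloud provider to provision
# an external load balancer in front of the app's pods
apiVersion: v1
kind: Service
metadata:
  name: app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

Multiply this by every storage class, ingress rule, and proxy your architecture needs, and the effort adds up quickly.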

In short, even if you take the shortcut of using a managed container orchestration service, this step consumes time and resources.

Configuring cloud-native capabilities

What's the point of deploying your application in the cloud if you don't take advantage of being cloud-native?

While managed services have come a long way in making things easier, your DevOps team is still responsible for deciding which tools to use to collect metrics and logs, how health checks will be performed, and so on. Such an alerting and monitoring system is essential for establishing the SLIs and SLOs that ensure an optimal user experience.

So, for example, even if your cloud hosting provider lets you deploy Prometheus and Grafana with a single click, you still have to configure and customize them. Not only that, but you will also need to make sure your monitoring system scales, which can be challenging for some use cases.
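To give a sense of what "configure and customize" means in practice, even a bare-bones Prometheus setup starts with a scrape configuration like the following sketch (job name and target address are placeholders for your own services):

```yaml
# prometheus.yml — global collection settings plus one example scrape job
global:
  scrape_interval: 15s      # how often to collect metrics
  evaluation_interval: 15s  # how often to evaluate alerting rules

scrape_configs:
  - job_name: "my-app"
    metrics_path: /metrics
    static_configs:
      - targets: ["my-app.default.svc:8080"]
```

Real-world setups layer service discovery, relabeling rules, alerting rules, and long-term storage on top of this, each of which is its own configuration effort.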

Speaking of use cases, implementing an application monitoring system is just one of many cloud-native capabilities you may need. Depending on your application, you may need to configure additional cluster management tools, logging and tracing tools, security tools, observability tools, and development tools.

Overwhelmed? We haven't even touched on configuring network policies, authentication, cron jobs, or CI/CD pipelines. The list goes on and on: artifact repositories, integration with local development tools, testing systems, automation… This can easily be one of the most labor-intensive stages.
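Each of those items translates into more manifests to write and maintain. For instance, a scheduled background task in Kubernetes is its own CronJob resource; this sketch (schedule, image, and arguments are placeholders) shows the shape of just one such piece of configuration:

```yaml
# A nightly cleanup task — runs every day at 02:00
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-cleanup
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: cleanup
              image: my-registry/cleanup:latest
              args: ["--purge-temp-files"]
          restartPolicy: OnFailure
```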

Setting up cluster security

Welcome to step four, one of the most feared of all. Securing your cluster from malicious actors is a topic that deserves an entire article. Ideally, your organization should embrace DevSecOps to ensure that security becomes a shared responsibility across the whole application lifecycle.

This involves setting up firewalls and DDoS protection, configuring SSL/TLS certificates, and handling user authentication and authorization, to name just the most relevant aspects.
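Even TLS termination alone, one of the more automated parts of this step, still requires you to wire things together yourself. A minimal sketch of an Ingress with automatic certificates, assuming cert-manager is installed in the cluster (hostname, issuer, and service names are placeholders):

```yaml
# Ingress terminating TLS; cert-manager issues and renews the certificate
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts: ["app.example.com"]
      secretName: app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-lb
                port:
                  number: 80
```

And that's before firewalls, network policies, and authentication even enter the picture.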

Additional steps

We have tried to group and simplify the steps required to build the infrastructure needed to launch your cloud-native app. We have surely glossed over some details along the way, but in general terms, the above summarizes what it takes to enjoy the benefits that cloud computing offers.

So now that you've got the infrastructure set up, you can (finally) get to work on your application. Wouldn't it be great if you could develop your application without having to go through all the above steps?

A better way: the mogenius way

Getting your application production-ready is quite a challenge. Truth be told, that's the main reason why so many apps haven't migrated to the cloud yet. It’s simply too expensive and demanding to set up the infrastructure described so far.

At least until now.

Meet mogenius, a state-of-the-art platform that lets you build and run cloud-native applications with minimal effort. In fact, the approach is different enough that it's easiest to explain "the mogenius way" by walking through it.

After registering and verifying your email, you will be able to access the mogenius Studio home page.

The first thing you will need to do is create a cloudspace. You can think of the cloudspace as the environment where you will deploy your application or service. Each cloudspace is based on a fully managed Kubernetes cluster on Microsoft Azure that includes all the necessary tooling to run your application:

●     Cloud resources (storage, computing power, networking)

●     A CI/CD pipeline that allows you to have full control over the life cycle of your application

●     SSL certificates that are automatically created and managed for each cloudspace

●     Pre-configured databases (MySQL, Postgres, Redis, CouchDB, among others)

●     A huge library of preconfigured services and tools that ease your workflow

In other words, you skip all the steps involved in creating, provisioning, and configuring the cloud infrastructure to focus on developing and managing your application!

Once you create the cloudspace, you will see a window asking how you want to deploy your service. At this point, you can choose between using one of the preconfigured templates or connecting the repository where you have your code and Dockerfile.
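If you go the repository route, the Dockerfile in your repo tells the platform how to build and run your service. For a typical single-page app, it might look something like this sketch (base images, paths, and the build command are assumptions about your project layout):

```dockerfile
# Multi-stage build: compile the app, then serve the static output with nginx
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=build /app/dist/ /usr/share/nginx/html/
EXPOSE 80
```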

Suppose you have an idea for a killer app that you want to develop using Angular. In that case, click the Use a Template option, browse the template library, and choose Angular. Afterward, you'll need to name your project, connect or create a repository, and optionally set additional parameters.

That's it. Your Angular template is ready to use. No difficult decisions to make. You have an Angular deployment in the cloud in less than a minute.