Part 2: Building a microservice architecture from scratch to create a Twitter clone

Recap and outlook

Welcome to the second part of our series on building a microservice architecture from scratch to create a Twitter clone. In our first part, we introduced the project and talked about all the building blocks of our architecture. We've made some key design decisions, and now we're ready to start building. So today we'll start setting up the backend of our Twitter clone.
Here you can find Part 1: Introduction to building our Twitter clone

Using templates to get started quickly

In this episode of our series, we're beginning with three core services to build our backend: ArangoDB as the database, Redis as our key-value store, and nginx. We'll use the first two services as is; no changes to the base setup are required. nginx will need a bit more configuration, and we'll cover the most important steps later.
Our cloud platform will be mogenius, and luckily there are templates for most of the services we'll create along the series. You'll get more details on how templates are deployed in the mogenius docs.

Prerequisites: The services run with Docker, so please make sure Docker is installed on your local machine. Our app will run via https in production, so on our local machine we'll set up https with NGINX to get an equivalent environment.


Session/Cache service with Redis
Production setup

...or check out our docs for 'Setting up Redis...'

Once the Redis service is saved, or you push commits to the main branch, auto-provisioning runs.

If you are developing features in a service, you should work on a separate branch until your feature is ready for testing or release. Otherwise, your service will start a new deployment every time you push. It's best to stick to a common branching model like Gitflow or trunk-based development. By the way, you can also skip triggering the build process by prefixing your commit message with [skip ci].


Development setup

Prerequisites: We use make for our build and run commands to make life easier in our local development environment.

  1. Clone your Redis GitHub repository to your local machine.
  2. Switch to another branch.
  3. Create a Makefile.

For communication between local Docker services, we first set up an internal Docker network.
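For example, a sketch of this step (the network name twitter-clone is our assumption; use whatever name your services share):

```shell
# Create an internal bridge network once; containers attached to it
# can reach each other by container name.
docker network create twitter-clone

# Verify that the network exists
docker network ls --filter name=twitter-clone
```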

Build and run the container.

  1. In our Docker run command, we share the internal port with our local machine, using the default port of Redis (6379). Now we can check if Redis is accessible via the command line.

CAUTION: To avoid flooding our local machine with unnamed Docker images or zombie containers, we try to remove them before the build or run command. Please modify the Makefile if you don't want this behavior.
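A minimal Makefile along these lines might look as follows (image, container, and network names are assumptions; the actual file is in the repository):

```makefile
IMAGE     = twitter-redis
CONTAINER = twitter-redis
NETWORK   = twitter-clone

build:
	-docker rm -f $(CONTAINER)   # remove a leftover container; "-" ignores errors
	-docker rmi $(IMAGE)         # remove the old image to avoid unnamed leftovers
	docker build -t $(IMAGE) .

run:
	-docker rm -f $(CONTAINER)
	docker run -d --name $(CONTAINER) --network $(NETWORK) -p 6379:6379 $(IMAGE)

check:
	docker exec -it $(CONTAINER) redis-cli keys '*'
```

The leading `-` tells make to keep going if there is nothing to remove yet, which is exactly the cleanup behavior described above.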

You can check out the source code in our Redis GitHub repository.

Database service with ArangoDB

  1. We will start with the service setup again to get things running.
  2. Then we will discuss what the data structures will look like in our demo.
  3. Finally, we will log into ArangoDB and create the appropriate collections for documents and edges.

Production setup

Access the ArangoDB admin page via your service URL and the exposed port.

Log in to ArangoDB

You can also check out our ArangoDB docs


Development setup

  1. Clone your ArangoDB GitHub repository to your local machine.
  2. Switch to another branch.

Create a Makefile and set a random password for your development environment.

Build and run the container.

In our Docker run command, we share the internal port with our local machine. We use the default port of ArangoDB. Now we can check if ArangoDB is running on our local machine. Open it in your browser via:
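Under these assumptions (container and network names are placeholders, and the password is for development only), the run command might look like this; ArangoDB's default port is 8529:

```shell
# Run ArangoDB on the shared network, exposing its default port 8529.
# ARANGO_ROOT_PASSWORD is picked up by the official image on first start.
docker run -d --name twitter-arangodb \
  --network twitter-clone \
  -p 8529:8529 \
  -e ARANGO_ROOT_PASSWORD=dev-only-password \
  arangodb

# The admin interface should then be reachable at http://localhost:8529
```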

You can check out the source code in our ArangoDB GitHub repository.

Database structure

Basically, we have three main entities:
Post, Tag, and User.

As you can see from the image, we will use ArangoDB's multi-model approach for a graph database, so we will have documents and graphs in one step. The entities listed will be our documents, and the connections between them will be our edges (which, by the way, are again documents).
So we will have:
Collection of documents: Post, Tag, User
Collection of edges: Tagged (from: Tag, to: Post), Linked (from: User, to: Post), (from: Post, to: Post)


Setup the database and create collections

Using the root user and _system database for applications is not a good practice. Therefore, we will create a new user and database through the administration interface.


Create new collections of type documents or edges as listed above
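If you prefer the command line over the admin interface, the same setup can be sketched in arangosh (the database, user, and password names are assumptions; collection names follow the structure above):

```js
// Run inside arangosh, connected as root.
// Create an application database with its own user instead of using _system.
db._createDatabase("twitter", {}, [{ username: "twitter_app", passwd: "dev-only-password" }]);
db._useDatabase("twitter");

// Document collections
db._create("Post");
db._create("Tag");
db._create("User");

// Edge collections
db._createEdgeCollection("Tagged"); // Tag  -> Post
db._createEdgeCollection("Linked"); // User -> Post, Post -> Post
```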


Final database overview

Now we have made the basic setup for storage and are ready for the final step.

Reverse proxy with NGINX and https support

Before we dive into the NGINX configuration, we have to set up our local machine for https. For my machine I used the tutorial How to use HTTPS for local development. If this doesn't fit your operating system, please check your favorite search engine.
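On many systems this boils down to a few mkcert commands, one of the tools that tutorial covers (the cert folder and hostname are assumptions matching the local setup described below):

```shell
# Install a local certificate authority and trust it system-wide
mkcert -install

# Issue a certificate for localhost into the service's cert folder
mkcert -key-file cert/localhost-key.pem -cert-file cert/localhost.pem localhost
```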

NGINX is our gateway in front of our services. This means that each service gets its own location directive. To understand the idea, let's first take a look at the file structure of the service repository and the Dockerfile.

As you can see, each conf.d/* and include.d/* has a variant for each stage. We run a local environment and a production environment. The Dockerfile distinguishes between these environments via the build time arguments. The default is our production build. The cert folder contains the local https certificates. This folder is listed in .gitignore, so it is not pushed to the repository. To get things going, we first add our conf.d/default.*.conf, which replaces the NGINX default configuration and contains the custom include.d/* config files. As a little teaser for the next article, we will also add the location directive for our first service, 00-location-auth-service.*.conf.
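The Dockerfile mechanism described above can be sketched roughly like this (the exact file is in the repository; paths follow the structure described):

```dockerfile
FROM nginx:alpine

# Build-time stage switch: "local" or "prod"; default is the production build
ARG env=prod

# Replace the stock default configuration with the stage-specific one
COPY conf.d/default.$env.conf /etc/nginx/conf.d/default.conf

# Stage-specific include files, e.g. 00-location-auth-service.$env.conf
COPY include.d/ /etc/nginx/include.d/

# Local https certificates (gitignored; only relevant for local builds)
COPY cert/ /etc/nginx/cert/
```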

Since the Makefile is used for local development, we need to set the build argument --build-arg env=local.
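In the Makefile this might look like the following (target and image names are assumptions):

```makefile
build:
	docker build --build-arg env=local -t twitter-nginx .
```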

Please keep in mind that in our project each new microservice needs a new location directive. This is extra administrative effort, but for us it's worth it, since we use an NGINX feature called auth_request, which we will introduce in a minute.

To perform authentication, NGINX makes an HTTP subrequest to an external server where the subrequest is verified. If the subrequest returns a 2xx response code, the access is allowed, if it returns 401 or 403, the access is denied.

First we set up our default.local.conf . Most of it is a copy of the original default.conf . Only the https and include configuration is added.

Next is our 00-location-auth-service.local.conf .
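To give an idea of the shape of such a file, here is a sketch using auth_request (service host, port, and paths are assumptions; the real configs are in the repository):

```nginx
# include.d/00-location-auth-service.local.conf (sketch)

# Public routes of the auth service
location /auth/ {
    proxy_pass http://host.docker.internal:3000/;
}

# Internal endpoint used by auth_request; other services protect
# themselves by adding: auth_request /auth/verify;
location = /auth/verify {
    internal;
    proxy_pass              http://host.docker.internal:3000/verify;
    proxy_pass_request_body off;
    proxy_set_header        Content-Length "";
    proxy_set_header        X-Original-URI $request_uri;
}
```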

You can find the corresponding production configs (00-location-auth-service.prod.conf, default.prod.conf) in our GitHub repository. The only differences are the missing https directive (not needed with mogenius), the Kubernetes resolver, and the URL of the auth service.

Now we are ready for the implementation part.

Conclusion:

What we learned...

  • We set up our Redis production and development system.
  • We checked the local Redis system via the CLI with 'keys *'.
  • We set up our ArangoDB production and development system.
  • We logged into the ArangoDB admin interface.
  • We discussed the database design.
  • We created a new database user and a new database.
  • We created collections of type documents and edges.
  • We set up the NGINX reverse proxy with https support.

That's it for today!
