Welcome to the second part of our series on building a microservice architecture from scratch to create a Twitter clone. In our first part, we introduced the project and talked about all the building blocks of our architecture. We've made some key design decisions, and now we're ready to start building. So today we'll start setting up the backend of our Twitter clone.
Here you can find Part 1: Introduction to building our Twitter clone
In this episode of our series, we're starting with three core services for our backend: ArangoDB as the database, Redis as our key-value store, and nginx. We'll use the first two services as they come; no changes to the base setup are needed. nginx needs a bit more configuration, and we'll cover the most important steps later.
Our cloud platform will be mogenius and luckily there are templates for most of the services we'll create along the series. You'll get more details on how templates are deployed in the mogenius docs.
Prerequisites: The services run with Docker, so please make sure Docker is installed on your local machine. Our app will run via HTTPS in production, so on our local machine we'll set up HTTPS with NGINX to get an equivalent environment.
...or check out our docs for 'Setting up Redis...'
Once the Redis service is saved, or whenever you push commits to the main branch, auto-provisioning runs.
If you are developing features in one service, you should work on another branch until your feature is ready for testing or release; otherwise, your service will start a new deployment every time you push. It's best to stick to a common branching model like Gitflow or trunk-based development. By the way, you can also avoid triggering the build process by prefixing your commit message with [skip ci].
Prerequisites: We use make for our build and run commands to make life easier in our local development environment.
For communication between local Docker services, we first set up an internal Docker network.
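As a sketch, this can be a make target; the network name tw-clone is a placeholder, not taken from the repository:

```makefile
network:
	# One-time setup: a shared bridge network so the local containers
	# can reach each other by name. "tw-clone" is a placeholder name.
	# The leading "-" lets make continue if the network already exists.
	-docker network create tw-clone
```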
Build and run the container.
CAUTION: To avoid flooding our Docker images on the local machine with unnamed images or zombie containers, we try to remove them before the build or run command. Please modify the Makefile if you don't want this behavior.
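A minimal sketch of such a Makefile for the Redis service might look like this; the image, container, and network names are placeholders:

```makefile
# Placeholder container/image name and the network from the previous step.
SERVICE = redis-tw
NETWORK = tw-clone

clean:
	# Remove a leftover container and dangling images; the leading "-"
	# lets make continue if there is nothing to remove.
	-docker rm -f $(SERVICE)
	-docker image prune -f

build: clean
	docker build -t $(SERVICE) .

run: clean
	docker run -d --name $(SERVICE) --network $(NETWORK) $(SERVICE)
```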
You can check out the source code in our Redis GitHub repository.
Access the ArangoDB admin page via your service URL and the exposed port.
Log in to ArangoDB
You can also check out our ArangoDB docs
Create a Makefile and set a random 'password' for your development environment.
Build and run the container.
In our Docker run command, we publish the container's internal port to our local machine, using ArangoDB's default port. Now we can check whether ArangoDB is running on our local machine. Open it in your browser via:
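With ArangoDB's default port published, that's http://localhost:8529. As a sketch, the make target could look like this; the container and network names and the password are placeholders, while ARANGO_ROOT_PASSWORD is the official environment variable of the arangodb image:

```makefile
# Placeholder password for local development only.
PASSWORD = change-me-locally

run:
	# Remove a leftover container, then start ArangoDB on the shared
	# network with its default port 8529 published to localhost.
	-docker rm -f arangodb-tw
	docker run -d --name arangodb-tw --network tw-clone \
		-p 8529:8529 \
		-e ARANGO_ROOT_PASSWORD=$(PASSWORD) \
		arangodb
```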
You can check out the source code in our ArangoDB GitHub repository.
Basically we have three main entities.
Post, Tag, User
As you can see from the image, we will use ArangoDB's multi-model approach for a graph database, so we get documents and graphs in a single database. The entities listed will be our documents, and the connections between them will be our edges (which, by the way, are themselves documents).
So we will have:
Collection of documents: Post, Tag, User
Collection of edges: Tagged (from: Tag, to: Post), Linked (from: User, to: Post), (from: Post, to: Post)
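As a small teaser for how this model pays off later, an AQL traversal over these collections might look like the following; the attribute names name and text are assumptions about our documents:

```aql
// All posts tagged "nginx": start at the Tag document and follow
// outbound Tagged edges (from: Tag, to: Post).
FOR tag IN Tag
  FILTER tag.name == "nginx"
  FOR post IN OUTBOUND tag Tagged
    RETURN post.text
```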
Setup the database and create collections
Using the root user and _system database for applications is not a good practice. Therefore, we will create a new user and database through the administration interface.
Create new collections of type documents or edges as listed above
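If you prefer scripting over clicking through the UI, both steps can also be done in arangosh; the database name, user, and password below are placeholders:

```js
// Run in arangosh, connected as root to the _system database.
const users = require("@arangodb/users");

// 1) Application database and user, so we don't use root/_system.
db._createDatabase("twitter");
users.save("tw_app", "change-me");
users.grantDatabase("tw_app", "twitter", "rw");

// 2) Switch to the new database and create the collections.
db._useDatabase("twitter");
["Post", "Tag", "User"].forEach((name) => db._createDocumentCollection(name));
["Tagged", "Linked"].forEach((name) => db._createEdgeCollection(name));
```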
Final database overview
Now the basic setup for our storage is done and we are ready for the final step.
Before we dive into the NGINX configuration, we have to set up our local machine for HTTPS. For my machine I used the tutorial How to use HTTPS for local development. If this doesn't fit your operating system, please check your favorite search engine.
NGINX is our gateway in front of our services. This means that each service gets its own location directive. To understand the idea, let's first take a look at the file structure of the service repository and the Dockerfile.
As you can see, each conf.d/* and include.d/* has a variant for each stage. We run a local environment and a production environment. The Dockerfile distinguishes between these environments via build-time arguments; the default is our production build. The cert folder contains the local https certificates. This folder is listed in .gitignore, so it is not pushed to the repository. To get things going, we first add our conf.d/default.*.conf, which replaces the NGINX default configuration and contains the custom include.d/* config files. As a little teaser for the next article, we will also add the location directive for our first service, 00-location-auth-service.*.conf.
Since the Makefile is used for local development, we need to set the build argument --build-arg env=local.
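The stage switch itself is just a build-time ARG in the Dockerfile. The following is a sketch under the file structure described above, not the repository's exact Dockerfile:

```dockerfile
FROM nginx:stable

# Build-time stage switch; defaults to the production build.
# Override locally with: docker build --build-arg env=local .
ARG env=prod

# Copy the stage-specific configs into the image.
COPY conf.d/default.${env}.conf /etc/nginx/conf.d/default.conf
COPY include.d/*.${env}.conf /etc/nginx/include.d/
```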
Please keep in mind that in our project each new microservice needs its own location directive. This is extra administrative effort, but for us it's worth it, since we use an NGINX feature called auth_request, which we'll introduce in a minute.
To perform authentication, NGINX makes an HTTP subrequest to an external server where the subrequest is verified. If the subrequest returns a 2xx response code, access is allowed; if it returns 401 or 403, access is denied.
First we set up our default.local.conf. Most of it is a copy of the original default.conf; only the https and include configuration are added.
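A sketch of what that addition looks like; the certificate paths and server_name are assumptions for the local cert setup:

```nginx
server {
    listen       443 ssl;
    server_name  localhost;

    # Local certificates from the cert folder (not in the repository).
    ssl_certificate      /etc/nginx/cert/localhost.pem;
    ssl_certificate_key  /etc/nginx/cert/localhost-key.pem;

    # Pull in the per-service location files.
    include /etc/nginx/include.d/*.conf;
}
```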
Next is our 00-location-auth-service.local.conf.
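As a sketch, it could look like this; the auth service host, port, and paths are assumptions, while the auth_request wiring follows the standard NGINX pattern:

```nginx
# Public proxy to the auth service itself (login, signup, ...).
location /auth/ {
    proxy_pass http://host.docker.internal:3000/;
}

# Internal endpoint that other locations call via auth_request.
location = /auth/verify {
    internal;
    proxy_pass              http://host.docker.internal:3000/verify;
    proxy_pass_request_body off;
    proxy_set_header        Content-Length "";
    proxy_set_header        X-Original-URI $request_uri;
}
```

A protected service location then only needs one extra line, `auth_request /auth/verify;`, before its own proxy_pass.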
You can find the corresponding production configs (00-location-auth-service.prod.conf, default.prod.conf) in our GitHub repository. The only differences are the missing https directive (not needed with mogenius), the Kubernetes resolver, and the URL of the auth service.
Now we are ready for the implementation part.
What we learned...
That's it for today!