
A Practical Guide To HashiCorp Consul - Part 2

This is Part 2 of a two-part series, A Practical Guide to HashiCorp Consul. The previous part focused on understanding the problems that Consul solves and how it solves them. This part focuses on a practical application of Consul in a real-life example. Let's get started.

With most of the theory covered in the previous part, let's move on to a practical example of Consul.

What are we Building?

We are going to build a Django web application that stores its persistent data in MongoDB. We will containerize both of them using Docker, and build and run them using Docker Compose.

To show how our web app scales in this context, we are going to run two instances of the Django app. Also, to make this even more interesting, we will run MongoDB as a Replica Set with one primary node and two secondary nodes.

Given that we have two instances of the Django app, we need a way to balance the load between them, so we are going to use Fabio, a Consul-aware load balancer, to reach the Django app instances.

This example roughly simulates a practical, real-world application.

Example application nodes and the services deployed on them

The complete source code for this application is open-sourced and is available on GitHub - pranavcode/consul-demo.

Note: The architecture we are discussing here is not specifically constrained by any of the technologies used to build the app or data layers. This example could very well be built with a combination of Ruby on Rails and Postgres, or Node.js and MongoDB, or Laravel and MySQL.

How Does Consul Come into the Picture?

We are deploying both the app and data layers as Docker containers. They are going to be built as services and will talk to each other over HTTP.

Thus, we will use Consul for service discovery. This will allow the Django servers to find the MongoDB Primary node. For this example, we are going to resolve services via Consul's DNS interface.

Consul will also help us auto-configure Fabio as the load balancer that reaches the instances of our Django app.

We are also using Consul's health-check feature to monitor the health of every instance in the whole infrastructure.

Consul provides a clean user interface out of the box, as part of its Web UI, that shows all the services on a single dashboard. We will use it to see how our services are laid out.

Let’s begin.

Setup: MongoDB, Django, Consul, Fabio, and Dockerization

We will keep this as simple and minimal as possible to the extent it fulfills our need for a demonstration.

MongoDB

The MongoDB setup we are targeting is a MongoDB Replica Set: one primary node and two secondary nodes.

The primary node will handle all the write operations and maintain the oplog, the sequence of writes that is replicated across the secondaries. We are also configuring the secondaries to serve read operations. You can learn more about MongoDB Replica Sets in the official documentation.

We will call our replica set 'consuldemo'.

We will run MongoDB on the standard port 27017 and supply the name of the replica set on the command line using the '--replSet' parameter.

As the documentation describes, MongoDB also allows configuring the replica set name via a configuration file, using the replication parameter as below:

CODE: https://gist.github.com/velotiotech/dece9e1c0a5ecadb6339e8fe08e04e20.js
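For reference, a minimal sketch of the relevant mongod.conf fragment would look like this, with everything else in the file left at its defaults:

    replication:
      replSetName: consuldemo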

In our case, the replica set configuration that we will apply on one of the MongoDB nodes, once all the nodes are up and running, is given below:

CODE: https://gist.github.com/velotiotech/adbd8e5d0d1cba3432742bcb3d062d1f.js
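As a sketch, initiating the set from the mongo shell would look roughly like this; the 'mongo_N' hostnames are placeholders for the actual node addresses:

    rs.initiate({
      _id: "consuldemo",
      members: [
        { _id: 0, host: "mongo_1:27017" },
        { _id: 1, host: "mongo_2:27017" },
        { _id: 2, host: "mongo_3:27017" }
      ]
    })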

This configuration will be applied on one of the pre-defined nodes, and MongoDB will decide which nodes become primary and secondary.

Note: We are not forcing the set creation with any pre-defined designations of who becomes primary and secondary, to allow for dynamism in service discovery. Normally, the nodes would be assigned specific roles.

We are allowing reads from secondaries, with the nearest node as the Read Preference.

We will start MongoDB on all nodes with the following command:

CODE: https://gist.github.com/velotiotech/e3e4af6214859b36ee9bfb2637fc30a7.js
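A minimal form of that command, assuming the data directory and other options are left at their defaults, and binding to all interfaces so other containers can reach it:

    mongod --replSet consuldemo --port 27017 --bind_ip_all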

This gives us a MongoDB Replica Set with one primary instance and two secondary instances, running and ready to accept connections.

We will discuss containerizing the MongoDB service in the latter part of this article.

Django

We will create a simple Django project that represents a blog application and containerize it with Docker.

Building the Django app from scratch is beyond the scope of this tutorial; we recommend referring to Django's official documentation to get started with a Django project. But we will still go through some important aspects.

As we need our Django app to talk to MongoDB, we will use Djongo, a MongoDB connector for the Django ORM. We will set up our Django settings to use Djongo and connect to our MongoDB. Djongo is pretty straightforward to configure.

For a local MongoDB installation, it takes only two lines of configuration:

CODE: https://gist.github.com/velotiotech/2fe5567e44756dd5350a88b999cc42ee.js

In our case, as we will need to access MongoDB running in another container, our config would look like this:

CODE: https://gist.github.com/velotiotech/4b69cde165e1ed8e235a67d026fbaeec.js
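A sketch of what that configuration could look like; the database name and host are assumptions (the host later becomes a Consul DNS name), and the local variant simply drops HOST and PORT:

    DATABASES = {
        'default': {
            'ENGINE': 'djongo',
            'NAME': 'blog',                            # assumed database name
            'HOST': 'mongodb-primary.service.consul',  # however the MongoDB container is addressed
            'PORT': 27017,
        }
    }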

Details:

  • ENGINE: The database connector to use for Django ORM.
  • NAME: Name of the database.
  • HOST: Host address that has MongoDB running on it.
  • PORT: The port on which MongoDB is listening for requests.

Djongo internally talks to PyMongo and uses MongoClient for executing queries on MongoDB. Based on our needs, we could also use other MongoDB connectors available for Django, such as django-mongodb-engine, or use PyMongo directly.

Note: We are currently reading and writing via Django to a single MongoDB host, the primary one, but Djongo can also be configured to talk to the secondary hosts for read-only operations. That is outside the scope of our discussion; you can refer to Djongo's official documentation to achieve exactly this.

Continuing our Django app building process, we need to define our models. As we are building a blog-like application, our models would look like this:

CODE: https://gist.github.com/velotiotech/526fb38b5fc72f087a60d26b48ee9849.js
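As a sketch of such a model, assuming a single Entry model with hypothetical fields (the actual fields in the demo may differ):

    from django.db import models

    class Entry(models.Model):
        # Hypothetical fields for a blog entry.
        title = models.CharField(max_length=200)
        body = models.TextField()
        created_at = models.DateTimeField(auto_now_add=True)

        def __str__(self):
            return self.title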

We can run a local MongoDB instance and create migrations for these models. We also register these models in our Django Admin, like so:

CODE: https://gist.github.com/velotiotech/77b19c713b690ab9101c90b6e5099725.js
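The registration itself is a one-liner, assuming the Entry model sketched above:

    from django.contrib import admin
    from .models import Entry

    admin.site.register(Entry)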

We can play with the Entry model’s CRUD operations via Django Admin for this example.

Also, to demonstrate the Django-MongoDB connectivity, we will create a custom view and template that display information about the MongoDB setup and the currently connected MongoDB host.

Our Django views look like this:

CODE: https://gist.github.com/velotiotech/77a9a9414a1c9b7900649dd2d05fc76b.js
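One way such a view could be written, as a sketch that queries the connected server directly through PyMongo rather than reproducing the demo's actual code; the template name and context keys are assumptions:

    from django.conf import settings
    from django.shortcuts import render
    from pymongo import MongoClient

    def home(request):
        db = settings.DATABASES['default']
        client = MongoClient(db['HOST'], int(db['PORT']))
        # isMaster reports the connected host, the replica set name, and the node's role.
        info = client.admin.command('isMaster')
        return render(request, 'home.html', {
            'connected_host': info.get('me'),
            'replica_set': info.get('setName'),
            'is_primary': info.get('ismaster'),
        })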

Our URLs or routes configuration for the app looks like this:

CODE: https://gist.github.com/velotiotech/6b9097186c9dec7982ac89ec4447f78b.js
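A minimal sketch of such a routes file, assuming the home view above:

    from django.urls import path
    from . import views

    urlpatterns = [
        path('', views.home, name='home'),
    ]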

And for the project - the app URLs are included like so:

CODE: https://gist.github.com/velotiotech/e06093e8f6e3740dce464c18e1e75b1e.js

Our Django template, templates/home.html, looks like this:

CODE: https://gist.github.com/velotiotech/08d29af158be0d1845697692216766fc.js

To run the app, we first need to migrate the database using the command below:

CODE: https://gist.github.com/velotiotech/388be41b4be2895d46e18fb045ac119f.js
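These are the standard Django migration commands:

    python manage.py makemigrations
    python manage.py migrate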

And also collect all the static assets into the static directory:

CODE: https://gist.github.com/velotiotech/6a1b049998f8c25efeab60083a3e01b6.js
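Again the standard command, with --noinput so it can run unattended inside a container:

    python manage.py collectstatic --noinput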

Now run the Django app with Gunicorn, a WSGI HTTP server, as given below:

CODE: https://gist.github.com/velotiotech/ae6c88ec0ca955f3e0c018a874c62879.js
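A sketch of that command; the WSGI module name depends on the project name, and the port is assumed to match the Consul service definition we set up later:

    gunicorn blog_project.wsgi:application --bind 0.0.0.0:8000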

This gives us a basic blog-like Django app that connects to a MongoDB backend.

We will discuss containerizing this Django web application in the latter part of this article.

Consul

We place a Consul agent alongside every service as part of our Consul setup.

The Consul agent is responsible for service discovery: it registers the service on the Consul cluster and also monitors the health of every service instance.

Consul on nodes running MongoDB Replica Set

We will discuss the Consul setup in the context of the MongoDB Replica Set first, as it solves an interesting problem: at any given point in time, a MongoDB instance can be either a Primary or a Secondary.

The Consul agent registering and monitoring a MongoDB instance within a Replica Set uses a unique mechanism: it dynamically registers and deregisters the MongoDB service as a Primary or a Secondary instance, based on the role the Replica Set has designated for it.

We achieve this dynamism with a shell script that runs at an interval and toggles the Consul service definition between MongoDB Primary and MongoDB Secondary on the node's Consul agent.

The service definitions for the MongoDB services are stored as JSON files in Consul's config directory, '/etc/config.d'.

Service definition for MongoDB Primary instance:

CODE: https://gist.github.com/velotiotech/32872e14d1afef0f6daefcc604bf28c3.js
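A sketch of what such a definition could contain; the service name, script path, and interval are assumptions, and script checks require the agent to be started with -enable-script-checks:

    {
      "service": {
        "name": "mongodb-primary",
        "port": 27017,
        "check": {
          "args": ["/scripts/check_mongodb_primary.sh"],
          "interval": "10s"
        }
      }
    }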

If you look closely, the service definition gives us a DNS entry specific to the MongoDB Primary, rather than a generic MongoDB instance. This allows us to send database writes to a specific MongoDB instance; in a Replica Set, writes are handled by the Primary.

Thus, we are able to achieve both service discovery and health monitoring for the Primary instance of MongoDB.

Similarly, with a slight change, the service definition for a MongoDB Secondary instance goes like this:

CODE: https://gist.github.com/velotiotech/3cbefe4460871ee111d47a7807f17385.js

Given all this context, can you think of a way to dynamically switch these service definitions?

We can identify whether a given MongoDB instance is the primary by running the `db.isMaster()` command in the MongoDB shell.

The check can be drafted as a shell script like so:

CODE: https://gist.github.com/velotiotech/8b4e484f02d786c051ed014a81d12599.js

Similarly, the non-master or non-primary instances of MongoDB can be checked with the same command, by checking the `secondary` value:

CODE: https://gist.github.com/velotiotech/bd3d7b23fbc880e9373537f222e7ce02.js

Note: We are using jq - a lightweight and flexible command-line JSON processor - to process the JSON encoded output of MongoDB shell commands.
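A minimal sketch of both checks, assuming the mongo shell and jq are on the PATH; `jq -e` sets the exit status from the boolean, which is exactly what Consul's script check consumes:

    # Primary check: exits 0 only when this node reports ismaster == true.
    mongo --quiet --eval 'print(JSON.stringify(db.isMaster()))' | jq -e '.ismaster' > /dev/null

    # Secondary check: exits 0 only when this node reports secondary == true.
    mongo --quiet --eval 'print(JSON.stringify(db.isMaster()))' | jq -e '.secondary' > /dev/null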

One way of writing a script that does this dynamic switch looks like this:

CODE: https://gist.github.com/velotiotech/fffc779fa901a5a0abc6cbf06692c667.js
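A sketch of such a toggle loop, with hypothetical paths for the two definitions; it copies the definition matching the current role into the config directory and reloads the agent:

    #!/bin/bash
    # Hypothetical layout: both definitions live outside the config directory,
    # and only the one matching the current role is copied in.
    while true; do
      if mongo --quiet --eval 'print(JSON.stringify(db.isMaster()))' | jq -e '.ismaster' > /dev/null; then
        cp /definitions/mongodb-primary.json /etc/config.d/mongodb.json
      else
        cp /definitions/mongodb-secondary.json /etc/config.d/mongodb.json
      fi
      consul reload   # pick up the swapped definition
      sleep 10
    done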

Note: This is an example script, but we can be more creative and optimize the script further.

Once we are done with our service definitions, we can run the Consul agent on each MongoDB node. To run an agent, we will use the following command:

CODE: https://gist.github.com/velotiotech/871b8e19cee02a8964ad5d02f749423e.js
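A sketch of such a command, assuming the config directory from above and script checks enabled:

    consul agent -retry-join consul_server \
      -data-dir /tmp/consul -config-dir /etc/config.d \
      -enable-script-checks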

Here, 'consul_server' represents the host running the Consul server. Similarly, we can run such agents on each of the other MongoDB instance nodes.

Note: If we had multiple MongoDB instances running on the same host, the service definitions would change to reflect the different ports used by each instance, to uniquely identify, discover, and monitor each individual MongoDB instance.

Consul on nodes running Django App

For the Django application, the Consul setup is very simple. We only need to monitor the port on which Gunicorn is listening for requests.

The Consul service definition would look like this:

CODE: https://gist.github.com/velotiotech/02ccdd9b8cb53ab074566c1e7e90a794.js
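A sketch of what this could look like with an HTTP check; the service name and port are assumptions:

    {
      "service": {
        "name": "web",
        "port": 8000,
        "check": {
          "http": "http://localhost:8000/",
          "interval": "10s"
        }
      }
    }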

Once we have the Consul service definition for the Django app in place, we can run the Consul agent on the node where the Django app runs as a service. To run the Consul agent, we fire the following command:

CODE: https://gist.github.com/velotiotech/eeb045ec2c1f309477e95f46366734cc.js

Consul Server

We are running the Consul cluster with a dedicated Consul server node. The Consul server node could just as easily host, discover, and monitor services running on it, exactly the same way as we did in the sections above for MongoDB and the Django app.

To run Consul in server mode and allow agents to connect to it, we will fire the following command on the node that we want to run our Consul server:

CODE: https://gist.github.com/velotiotech/2fd08381bffd83bb9ac83b37a7d32034.js
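A sketch of a single-server setup; the bootstrap expectation and client bind address are assumptions for this demo topology:

    consul agent -server -bootstrap-expect 1 \
      -ui -client 0.0.0.0 \
      -data-dir /tmp/consul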

There are no services on our Consul server node for now, so there are no service definitions associated with this Consul agent configuration.

Fabio

We are using Fabio because it is auto-configurable and Consul-aware.

This makes our task of load-balancing the traffic to our Django app instances very easy.

To let Fabio auto-detect services via Consul, one way is to add or update a tag in the service definition with a prefix and a service identifier: `urlprefix-/<service>`. Our Consul service definition for the Django app would now look like this:

CODE: https://gist.github.com/velotiotech/1f20414896ba3a1cf32e7546af91e445.js
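Extending the sketch from earlier, the only change is the tag; '/web' here matches the route used later in this guide:

    {
      "service": {
        "name": "web",
        "port": 8000,
        "tags": ["urlprefix-/web"],
        "check": {
          "http": "http://localhost:8000/",
          "interval": "10s"
        }
      }
    }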

In our case, the Django app is the only service that needs load balancing, so this change to the Consul service definition completes the Fabio setup.

Dockerization

Our whole app is going to be deployed as a set of Docker containers. Let's talk about how we achieve that in the context of Consul.

Dockerizing MongoDB Replica Set along with Consul Agent

We need to run a Consul agent, as described above, alongside MongoDB in the same Docker container, so we will need a custom ENTRYPOINT on the container to allow running two processes.

Note: This can also be achieved using Docker-container-level checks in Consul. That way, you are free to run the Consul agent on the host and run checks against a service inside a Docker container; Consul will essentially exec into the container to monitor the service.

To achieve this, we will use a tool similar to Foreman, the process manager that runs and supervises multiple processes declared in a Procfile.

To be precise, we will use Goreman, a Golang port of Foreman. It takes its configuration in Heroku's Procfile format, which declares the processes to be kept alive on the host.

In our case, the Procfile looks like this:

CODE: https://gist.github.com/velotiotech/32481c6f358831621ba545014b173747.js
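A sketch of what that Procfile could declare, reusing the commands sketched above (entry names and script paths are assumptions):

    mongodb: mongod --replSet consuldemo --port 27017 --bind_ip_all
    consul: consul agent -retry-join consul_server -data-dir /tmp/consul -config-dir /etc/config.d -enable-script-checks
    consul_check: bash /scripts/consul_check.sh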

The `consul_check` entry at the end of the Procfile maintains the dynamism between the Primary and Secondary MongoDB checks, based on which role each node is voted into within the MongoDB Replica Set.

The shell scripts executed by the respective entries in the Procfile are as defined previously in this discussion.

Our Dockerfile, with some additional tools for debugging and diagnostics, would look like this:

CODE: https://gist.github.com/velotiotech/0fa7d63d44a3073746abff1d07602e63.js
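A rough sketch of such a Dockerfile; the package names, the pinned Consul release URL, and the way the goreman binary gets onto the image are all assumptions:

    FROM ubuntu:18.04

    # Base tooling plus MongoDB and jq for the role checks.
    RUN apt-get update && apt-get install -y mongodb jq curl unzip

    # Consul binary; the version pinned here is illustrative.
    RUN curl -fsSL -o /tmp/consul.zip \
          https://releases.hashicorp.com/consul/1.4.4/consul_1.4.4_linux_amd64.zip \
     && unzip /tmp/consul.zip -d /usr/local/bin

    # Process manager and its process declarations.
    COPY goreman /usr/local/bin/goreman
    COPY Procfile /app/Procfile

    WORKDIR /app
    ENTRYPOINT ["goreman", "start"]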

Note: We have used a bare Ubuntu 18.04 image here for our purposes, but you could use the official MongoDB image and adapt it to run Consul alongside MongoDB, or even do Consul checks at the Docker container level, as mentioned in the official documentation.

Dockerizing Django Web Application along with Consul Agent

We also need to run a Consul agent alongside our Django app in the same Docker container, as we did with the MongoDB container.

CODE: https://gist.github.com/velotiotech/7a62aa0d22b9681b99c25090a0f4af7c.js

Similarly, we will have a Dockerfile for the Django web application, like the one for our MongoDB containers.

CODE: https://gist.github.com/velotiotech/a60d1b7964a18678736c92e3952805ac.js

Dockerizing Consul Server

We are maintaining the same flow for the Consul server node, running it with a custom ENTRYPOINT. This is not a requirement, but it keeps a consistent view across the different Consul run files.

Also, we are using the Ubuntu 18.04 image for the demonstration. You could very well use Consul's official image for this, which accepts all the custom parameters mentioned here.

CODE: https://gist.github.com/velotiotech/27eb0144ccd54cec8ee55db26de8712f.js

Docker Compose

We are using Compose to run all our Docker containers in a desired, repeatable form.

Our Compose file captures all the aspects we mentioned above and uses the Docker Compose tooling to achieve them in a seamless fashion.

The Docker Compose file would look like the one given below:

CODE: https://gist.github.com/velotiotech/d80e9955d10fd099adb7cc8cb9d0ae98.js
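An abridged sketch of what such a Compose file could contain: one entry per container, a shared network, and a static address for Fabio matching the one used later in this guide. Service names, build paths, and addresses are all assumptions, and the MongoDB and web services would each be repeated per instance:

    version: "3"

    services:
      consul_server:
        build: ./consul
        networks: [app_net]

      mongo_1:
        build: ./mongo
        depends_on: [consul_server]
        networks: [app_net]

      web_1:
        build: ./web
        depends_on: [consul_server]
        networks: [app_net]

      fabio:
        image: fabiolb/fabio
        ports:
          - "9999:9999"   # routing port
          - "9998:9998"   # web UI
        networks:
          app_net:
            ipv4_address: 33.10.0.100

    networks:
      app_net:
        ipam:
          config:
            - subnet: 33.10.0.0/24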

That brings us to the end of the whole environment setup. We can now run Docker Compose to build and run the containers.

Service Discovery using Consul

When all the services are up and running, the Consul Web UI gives us a nice glance at our overall setup.

Consul Web UI showing the set of services we are running and their current state

The MongoDB service is now available for the Django app to discover via Consul's DNS interface.

CODE: https://gist.github.com/velotiotech/1ece8a4a15e765c0bac2b08c3ad41c70.js
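For instance, querying the agent's DNS interface directly; port 8600 is Consul's default DNS port, and the service name follows our definitions above:

    dig @127.0.0.1 -p 8600 mongodb-primary.service.consul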

The Django app can now connect to the MongoDB Primary instance and start writing data to it.

We can use the Fabio load balancer to connect to a Django app instance, auto-discovered via the Consul registry using the specialized service tags, and render the page with all the database connection information we have been talking about.

Our load balancer is sitting at '33.10.0.100', and '/web' is configured to route to one of our Django application instances running behind the load balancer.

Fabio auto-detecting the Django Web Application endpoints

As you can see from Fabio's UI above, it has auto-detected and configured the Django web application endpoints, weighting them equally. This helps balance the request traffic across the Django application instances.

When we visit our Fabio URL '33.10.0.100:9999' and use the source route '/web', we are routed to one of the Django instances. So, visiting '33.10.0.100:9999/web' gives us the following output.

Django Web Application rendering the MongoDB connection status on the home page

We restrict Fabio to load-balancing only the Django app instances by adding the required tags only to the Consul service definitions of the Django app services.

This MongoDB Primary instance discovery helps the Django app with database migration and app deployment.

One can explore the Consul Web UI to see all the instances of the Django web application services.

Django Web Application services as seen on Consul's Web UI

Similarly, see how the MongoDB Replica Set instances are laid out.

MongoDB Replica Set Primary service as seen on Consul's Web UI
MongoDB Replica Set Secondary services as seen on Consul's Web UI

Let's see how Consul helps with health-checking services and discovering only the live ones.

We will stop the current MongoDB Replica Set Primary container ('mongo_2') to see what happens.

MongoDB Primary service being swapped with one of the MongoDB Secondary instances
The MongoDB Secondary instance set is now left with only one service instance

Consul starts failing the health check for the previous MongoDB Primary service. The MongoDB Replica Set also detects that the node is down and that a re-election of the Primary node is needed, automatically giving us a new MongoDB Primary ('mongo_3').

Our check toggle has kicked in and swapped the check on 'mongo_3' from the MongoDB Secondary check to the MongoDB Primary check.

When we take a look at the view from the Django app, we see it is now connected to the new MongoDB Primary service ('mongo_3').

The switch of the MongoDB Primary is also reflected in the Django Web Application

Let’s see how this plays out when we bring back the stopped MongoDB instance.

The failing MongoDB Primary service instance is cleared out of the Primary's service instances, as it is now a healthy MongoDB Secondary service instance
The previously failed MongoDB Primary service instance is re-adopted as a MongoDB Secondary service instance once it becomes healthy again

Similarly, if we stop one of the Django application's service instances, Fabio detects only the healthy instance and routes traffic only to it.

Fabio auto-configuring itself using Consul's service registry, detecting only the live service instances

This is how one can use Consul's service discovery capability to discover, monitor, and health-check services.

Service Configuration using Consul

Currently, we configure the Django application instances either from environment variables set inside the containers by Docker Compose and consumed in the Django project settings, or by hard-coding the configuration parameters directly.

We can use Consul's Key/Value store to share configuration across both instances of the Django app.

We can use Consul's HTTP interface to store key/value pairs and retrieve them within the app using python-consul, an open-source Python client for Consul. You may also use any other Python library that can interact with Consul's KV store.

Let’s begin by looking at how we can set a key/value pair in Consul using its HTTP interface.

CODE: https://gist.github.com/velotiotech/00de67f9d100e89a85a3ad7de55474d3.js
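For instance, a PUT against the KV endpoint of a local agent; the key name and value here are illustrative:

    curl -X PUT -d 'blog' http://127.0.0.1:8500/v1/kv/django/database/name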

Once the values are in the KV store, we can consume them on the Django app instances to configure the app.

Let’s install python-consul and add it as a project dependency.

CODE: https://gist.github.com/velotiotech/7a3c7ec4d5f267c21daa5f664b637a7e.js

We will need to connect our app to Consul using python-consul.

CODE: https://gist.github.com/velotiotech/86b51640c90d50b3b93a3eba3c4744c0.js

We can then read these values and configure our Django app accordingly using the python-consul library.

CODE: https://gist.github.com/velotiotech/3904f14982ce492748da0ba4d2bda161.js
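Putting the two steps together, a sketch of how that could look; the agent host and the key names are assumptions matching the curl example above:

    import consul

    # Assumption: the Consul agent is reachable under this host name.
    c = consul.Consul(host='consul_server', port=8500)

    def get_kv(key, default=None):
        """Fetch one value from Consul's KV store; values come back as bytes."""
        index, data = c.kv.get(key)
        if data and data.get('Value') is not None:
            return data['Value'].decode('utf-8')
        return default

    # Hypothetical settings key, matching the curl example above.
    DB_NAME = get_kv('django/database/name', 'blog')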

These key/value pairs from Consul's KV store can also be viewed and updated from its Web UI.

Consul KV store as seen on the Consul Web UI, with the Django app configuration parameters

The code used for the service configuration section of this guide is available on the 'service-configuration' branch of the pranavcode/consul-demo project.

That is how one can use Consul’s KV store and configure individual services in their architecture with ease.

Service Segmentation using Consul

As part of Consul's service segmentation, we are going to look at Consul Connect intentions and datacenter distribution.

Connect provides service-to-service connection authorization and encryption using mutual TLS.

To use Connect, you need to enable it in the server configuration. Connect needs to be enabled across the Consul cluster for it to function properly.

CODE: https://gist.github.com/velotiotech/7fac767e2123bc012d6b382ed29ea9c1.js
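The relevant fragment of the server configuration is small:

    {
      "connect": {
        "enabled": true
      }
    }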

In our context, to make the communication TLS-identified and secured, we will define an upstream sidecar service with a proxy on the Django app for its communication with the MongoDB Primary instance.

CODE: https://gist.github.com/velotiotech/a0a6c9802f8e4d8941b27174648fa452.js
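A sketch of what that registration could look like; the service names and the local bind port are assumptions:

    {
      "service": {
        "name": "web",
        "port": 8000,
        "connect": {
          "sidecar_service": {
            "proxy": {
              "upstreams": [
                {
                  "destination_name": "mongodb-primary",
                  "local_bind_port": 27018
                }
              ]
            }
          }
        }
      }
    }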

Along with the Connect sidecar proxy configuration, we also need to run the Connect proxy for the Django app. This can be achieved by running the following command.

CODE: https://gist.github.com/velotiotech/a0a6c9802f8e4d8941b27174648fa452.js
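With the built-in proxy, that boils down to one command, assuming the service name from the sketch above:

    consul connect proxy -sidecar-for web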

We can add Consul Connect intentions to create a service graph across all the services and define traffic patterns. We can create intentions as shown below:

CODE: https://gist.github.com/velotiotech/00574fdf0314aaa9bda347753aeda1a5.js
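For example, using the intention CLI with the service names assumed above; the first rule allows the web service to reach the Primary, and the second denies it access to the Secondaries:

    consul intention create web mongodb-primary
    consul intention create -deny web mongodb-secondary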

Intentions for the service graph can also be managed from the Consul Web UI.

Defining access control for services via Connect and service connection restrictions

This defines the service connection restrictions, allowing or denying services to talk to each other via Connect.

We have also added the ability for Consul agents to denote which datacenter they belong to and to be reachable via one or more Consul servers in a given datacenter.

The code used for the service segmentation section of this guide is available on the 'service-segmentation' branch of the velotiotech/consul-demo project.

That is how one can use Consul's service segmentation features to configure service-level connection access control.

Conclusion

The ability to seamlessly control the service mesh that Consul provides makes an operator's life very easy. We hope you have learned how Consul can be used for service discovery, configuration, and segmentation through this practical implementation.

As usual, we hope this was an informative ride through the journey of Consul. This was the final piece of this two-part series, covering a practical application of Consul and how it fits into your current project. In case you missed the first part, find it here.

We will continue our endeavors with different technologies and bring you the most valuable information we can in every interaction. Let us know what you would like to hear more about from us, or if you have any questions around the topic; we will be more than happy to answer them.
