This is part 2 of a 2-part series on A Practical Guide to HashiCorp Consul. The previous part focused primarily on understanding the problems that Consul solves and how it solves them. This part focuses on a practical application of Consul in a real-life example. Let's get started.
With most of the theory covered in the previous part, let's move on to a practical example of Consul.
To show how our web app would scale in this context, we are going to run two instances of the Django app. To make this even more interesting, we will run MongoDB as a Replica Set with one primary node and two secondary nodes.
Given that we have two instances of the Django app, we need a way to balance the load between them, so we will use Fabio, a Consul-aware load balancer, to reach the Django app instances.
This example will roughly help us simulate a real-world practical application.
The complete source code for this application is open-sourced and is available on GitHub - pranavcode/consul-demo.
Note: The architecture we are discussing here is not specifically tied to any of the technologies used to build the app or data layers. This example could just as well be built with Ruby on Rails and Postgres, Node.js and MongoDB, or Laravel and MySQL.
How Does Consul Come into the Picture?
We are deploying both the app and data layers as Docker containers. They will be built as services and will talk to each other over HTTP.
Consul will also help us auto-configure Fabio as the load balancer to reach instances of our Django app.
We are also using Consul's health-check feature to monitor the health of each instance across the whole infrastructure.
Consul provides a Web UI out of the box that shows all the services on a single dashboard. We will use it to see how our services are laid out.
Let’s begin.
Setup: MongoDB, Django, Consul, Fabio, and Dockerization
We will keep this as simple and minimal as possible to the extent it fulfills our need for a demonstration.
MongoDB
The MongoDB setup we are targeting is a MongoDB Replica Set with one primary node and two secondary nodes.
The primary node will handle all write operations, maintain the oplog to preserve the sequence of writes, and replicate the data across the secondaries. We are also configuring the secondaries for read operations. You can learn more about MongoDB Replica Sets in the official documentation.
We will call our replica set 'consuldemo'.
We will run MongoDB on the standard port 27017 and supply the name of the replica set on the command line using the '--replSet' parameter.
As the documentation notes, MongoDB also allows configuring the replica set name via a configuration file, using the replication parameter as below:
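A minimal sketch of the corresponding mongod.conf entry (only the replication section is shown; the file path is an assumption):

```yaml
# /etc/mongod.conf
replication:
  replSetName: consuldemo
```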
In our case, the replica set configuration that we will apply on one of the MongoDB nodes, once all the nodes are up and running, is given below:
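A sketch of that initiation from the mongo shell, assuming the three nodes are reachable as mongo_1, mongo_2, and mongo_3:

```javascript
// Run once, on any one node, after all three mongod processes are up.
rs.initiate({
  _id: "consuldemo",
  members: [
    { _id: 0, host: "mongo_1:27017" },
    { _id: 1, host: "mongo_2:27017" },
    { _id: 2, host: "mongo_3:27017" }
  ]
})
```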
Note: We are not forcing the set creation with any pre-defined designation of who becomes primary and who becomes secondary, to keep service discovery dynamic. Normally, the nodes would be assigned specific roles.
We are allowing reads from secondaries, with the nearest node as the Read Preference, as sketched below.
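For example, from the mongo shell on a secondary, and from a driver connection string, this could look like the following (a sketch; hostnames are assumptions):

```javascript
// Allow reads on a secondary for this shell session (newer shells use rs.secondaryOk()).
rs.slaveOk()

// From a driver, the equivalent is a read preference in the connection string, e.g.:
// mongodb://mongo_1:27017,mongo_2:27017,mongo_3:27017/?replicaSet=consuldemo&readPreference=nearest
```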
We will start MongoDB on all nodes with the following command:
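A sketch of that command (the bind option and data path are assumptions):

```bash
mongod --replSet consuldemo --port 27017 --bind_ip_all --dbpath /data/db
```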
As we need our Django app to talk to MongoDB, we will use Djongo, a MongoDB connector for the Django ORM. We will set up our Django settings to use Djongo and connect to our MongoDB. Djongo is pretty straightforward to configure.
For a local MongoDB installation it would only take two lines of code:
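A sketch of the DATABASES setting in settings.py (the database name is an assumption; HOST and PORT are only needed for a non-local MongoDB):

```python
# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'djongo',        # use the Djongo connector for the Django ORM
        'NAME': 'consuldemo',      # database name (assumption)
        # For a non-local MongoDB, HOST and PORT would be added, e.g.:
        # 'HOST': 'mongo_1',
        # 'PORT': 27017,
    }
}
```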
ENGINE: The database connector to use for Django ORM.
NAME: Name of the database.
HOST: Host address that has MongoDB running on it.
PORT: The port on which MongoDB is listening for requests.
Djongo internally talks to PyMongo and uses MongoClient for executing queries on Mongo. We can also use other MongoDB connectors available for Django to achieve this, such as django-mongodb-engine, or PyMongo directly, based on our needs.
Note: We are currently reading and writing via Django to a single MongoDB host, the primary one, but we can configure Djongo to also talk to secondary hosts for read-only operations. That is outside the scope of our discussion; you can refer to Djongo's official documentation to achieve exactly this.
Continuing our Django app building process, we need to define our models. As we are building a blog-like application, our models would look like this:
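A sketch of what the Entry model could look like (field names are assumptions):

```python
# models.py
from django.db import models


class Entry(models.Model):
    title = models.CharField(max_length=200)
    content = models.TextField()
    created_at = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.title
```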
We can play with the Entry model’s CRUD operations via Django Admin for this example.
Also, to demonstrate the Django-MongoDB connectivity, we will create a custom View and Template that display information about the MongoDB setup and the currently connected MongoDB host.
We will discuss the Consul setup in the context of the MongoDB Replica Set first, as it solves an interesting problem: at any given point in time, a MongoDB instance can be either a Primary or a Secondary.
The Consul agent registering and monitoring a MongoDB instance within a Replica Set uses a particular mechanism: it dynamically registers and deregisters the MongoDB service as a Primary or a Secondary instance, based on what the Replica Set has designated it.
We achieve this dynamism with a shell script, run at an interval, that toggles the Consul service definition between MongoDB Primary and MongoDB Secondary on the instance node's Consul agent.
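A sketch of such a toggle script, assuming the service definition files and directories named below:

```bash
#!/bin/bash
# Toggles the registered Consul service between MongoDB Primary and Secondary
# based on this node's current replica set role. Runs in a loop at an interval.
CONFIG_DIR=/etc/config.d       # Consul config directory used in this demo
DEFS_DIR=/opt/consul-defs      # where both JSON definitions live (assumption)

while true; do
  if mongo --quiet --eval 'rs.isMaster().ismaster' | grep -q true; then
    cp "$DEFS_DIR/mongodb-primary.json" "$CONFIG_DIR/mongodb.json"
  else
    cp "$DEFS_DIR/mongodb-secondary.json" "$CONFIG_DIR/mongodb.json"
  fi
  consul reload   # ask the local agent to re-read its config directory
  sleep 10
done
```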
The service definitions for the MongoDB services are stored as JSON files in Consul's config directory, '/etc/config.d'.
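A sketch of the Primary service definition (the service name, check name, and intervals are assumptions):

```json
{
  "service": {
    "name": "mongodb-primary",
    "tags": ["mongodb", "primary"],
    "port": 27017,
    "check": {
      "id": "mongodb-primary-check",
      "name": "MongoDB Primary Alive",
      "tcp": "localhost:27017",
      "interval": "10s",
      "timeout": "1s"
    }
  }
}
```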
If you look closely, the service definition gives us a DNS entry specific to the MongoDB Primary, rather than a generic MongoDB instance. This allows us to send database writes to a specific MongoDB instance; in a Replica Set, writes are handled by the Primary.
Thus, we are able to achieve both service discovery and health monitoring for the Primary instance of MongoDB.
Similarly, with a slight change, the service definition for the MongoDB Secondary instance goes like this:
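A corresponding sketch for the Secondary (same assumptions as above):

```json
{
  "service": {
    "name": "mongodb-secondary",
    "tags": ["mongodb", "secondary"],
    "port": 27017,
    "check": {
      "id": "mongodb-secondary-check",
      "name": "MongoDB Secondary Alive",
      "tcp": "localhost:27017",
      "interval": "10s",
      "timeout": "1s"
    }
  }
}
```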
In the agent command sketched below, 'consul_server' represents the host running the Consul server. We can run such agents on each of the other MongoDB instance nodes in the same way.
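A sketch of that agent command (the bind address, node name, and data directory are assumptions):

```bash
consul agent -bind=33.10.0.2 -node=mongo_1 \
  -data-dir=/tmp/consul -config-dir=/etc/config.d \
  -retry-join=consul_server
```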
Note: If we have multiple MongoDB instances running on the same host, the service definition would change to reflect the different ports used by each instance, so as to uniquely identify, discover, and monitor each individual MongoDB instance.
Consul on Nodes Running the Django App
For the Django application, the Consul setup is very simple. We only need to monitor the Django app's port, on which Gunicorn is listening for requests.
The Consul service definition would look like this:
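A sketch of that definition, assuming the service is named 'web' and Gunicorn listens on port 8000:

```json
{
  "service": {
    "name": "web",
    "tags": ["django"],
    "port": 8000,
    "check": {
      "id": "web-check",
      "name": "Django App Alive",
      "tcp": "localhost:8000",
      "interval": "10s",
      "timeout": "1s"
    }
  }
}
```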
Once we have the Consul service definition for the Django app in place, we can run the Consul agent on the node where the Django app is running as a service. To run the Consul agent, we would fire the following command:
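A sketch of that command (the bind address and node name are assumptions):

```bash
consul agent -bind=33.10.0.10 -node=web_1 \
  -data-dir=/tmp/consul -config-dir=/etc/config.d \
  -retry-join=consul_server
```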
We are running the Consul cluster with a dedicated Consul server node. The Consul server node can just as easily host, discover, and monitor services running on it, exactly the same way as we did in the sections above for MongoDB and the Django app.
To run Consul in server mode and allow agents to connect to it, we will fire the following command on the node where we want to run our Consul server:
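A sketch of that command (the bind address, node name, and single-server bootstrap are assumptions):

```bash
consul agent -server -bootstrap-expect=1 \
  -bind=33.10.0.1 -node=consul_server \
  -data-dir=/tmp/consul -config-dir=/etc/config.d \
  -client=0.0.0.0 -ui
```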
There are no services on our Consul server node for now, so there are no service definitions associated with this Consul agent configuration.
Fabio
We are using Fabio because it is Consul-aware and configures itself automatically.
This makes load-balancing the traffic to our Django app instances very easy.
To allow Fabio to auto-detect the services via Consul, one way is to add or update a tag in the service definition with the prefix and a path identifier, `urlprefix-/<service>`. Our Consul service definition for the Django app would now look like this:
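A sketch of the updated definition, reusing the assumptions from the earlier 'web' service sketch:

```json
{
  "service": {
    "name": "web",
    "tags": ["django", "urlprefix-/web"],
    "port": 8000,
    "check": {
      "id": "web-check",
      "name": "Django App Alive",
      "tcp": "localhost:8000",
      "interval": "10s",
      "timeout": "1s"
    }
  }
}
```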
In our case, the Django app is the only service that needs load balancing, so this change to the Consul service definition completes the Fabio setup.
Dockerization
Our whole app is going to be deployed as a set of Docker containers. Let’s talk about how we are achieving it in the context of Consul.
Dockerizing MongoDB Replica Set along with Consul Agent
We need to run a Consul agent, as described above, alongside MongoDB in the same Docker container, so we need a custom ENTRYPOINT on the container to run both processes.
Note: This can also be achieved using Docker container-level checks in Consul. In that case, you are free to run the Consul agent on the host and have it check the service running in the Docker container; the check essentially execs into the container to monitor the service.
To achieve this, we will use a tool similar to Foreman - a process manager that runs and supervises the processes declared in a Procfile.
To be precise, we will use the Go port of Foreman, Goreman. It takes its configuration in the form of a Heroku-style Procfile that lists the processes to keep alive on the host.
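A sketch of the Procfile for a MongoDB node (the paths and addresses are assumptions; the consul_check entry runs the toggle script described earlier):

```
mongodb: mongod --replSet consuldemo --port 27017 --bind_ip_all --dbpath /data/db
consul: consul agent -bind=33.10.0.2 -node=mongo_1 -data-dir=/tmp/consul -config-dir=/etc/config.d -retry-join=consul_server
consul_check: bash /opt/consul-defs/toggle_mongodb_check.sh
```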
The `consul_check` entry at the end of the Procfile maintains the dynamism between the Primary and Secondary MongoDB checks, based on which role the node has been elected to within the MongoDB Replica Set.
The shell scripts executed by the respective entries in the Procfile are as defined previously in this discussion.
Our Dockerfile, with some additional tools for debugging and diagnostics, would look like this:
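A sketch of such a Dockerfile, assuming the Procfile, service definitions, and toggle script above are in the build context (package and release versions are assumptions):

```dockerfile
FROM ubuntu:18.04

# MongoDB, Go (to build Goreman), and a few debug/diagnostic tools
RUN apt-get update && \
    apt-get install -y mongodb golang-go git curl unzip dnsutils iputils-ping && \
    rm -rf /var/lib/apt/lists/*

# Consul binary
RUN curl -sLo /tmp/consul.zip \
      https://releases.hashicorp.com/consul/1.4.4/consul_1.4.4_linux_amd64.zip && \
    unzip /tmp/consul.zip -d /usr/local/bin/ && rm /tmp/consul.zip

# Goreman (Go port of Foreman) to supervise mongod, the Consul agent, and the toggle script
RUN GOPATH=/go go get github.com/mattn/goreman && cp /go/bin/goreman /usr/local/bin/

RUN mkdir -p /data/db /etc/config.d /opt/consul-defs

# Service definitions, the toggle script, and the Procfile described above
COPY mongodb-primary.json mongodb-secondary.json toggle_mongodb_check.sh /opt/consul-defs/
COPY Procfile /app/Procfile

WORKDIR /app
ENTRYPOINT ["goreman", "start"]
```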
Note: We have used a bare Ubuntu 18.04 image here for our purposes, but you can use the official MongoDB image and adapt it to run Consul alongside MongoDB, or even run Consul checks at the Docker container level as mentioned in the official documentation.
Dockerizing Django Web Application along with Consul Agent
We also need to run a Consul agent alongside our Django app in the same Docker container, as we did with the MongoDB container.
We are maintaining the same flow here, running the container with a custom ENTRYPOINT. It is not a requirement, but it keeps the different Consul run files consistent.
Also, we are using the Ubuntu 18.04 image for the demonstration. You could just as well use Consul's official image for this, which accepts all the custom parameters mentioned here.
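A sketch of the Procfile for the Django container (the WSGI module, port, and addresses are assumptions):

```
web: gunicorn blog.wsgi:application --bind 0.0.0.0:8000
consul: consul agent -bind=33.10.0.10 -node=web_1 -data-dir=/tmp/consul -config-dir=/etc/config.d -retry-join=consul_server
```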
We are using Docker Compose to run all our containers in a desired, repeatable form.
Our Compose file captures all the aspects mentioned above and uses Docker Compose to wire them together seamlessly.
The Docker Compose file would look like the one given below:
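An abbreviated sketch of that Compose file (build locations, service names, and the subnet are assumptions; the remaining MongoDB and Django replicas follow the same pattern):

```yaml
version: "3"

services:
  consul_server:
    build: ./consul_server
    container_name: consul_server
    networks:
      consuldemo:
        ipv4_address: 33.10.0.1

  mongo_1:
    build: ./mongo
    container_name: mongo_1
    depends_on:
      - consul_server
    networks:
      consuldemo:
        ipv4_address: 33.10.0.2
  # mongo_2 and mongo_3 are defined the same way, with their own addresses

  web_1:
    build: ./web
    container_name: web_1
    depends_on:
      - consul_server
      - mongo_1
    networks:
      consuldemo:
        ipv4_address: 33.10.0.10
  # web_2 is defined the same way

  fabio:
    build: ./fabio   # runs fabio with -registry.consul.addr pointing at 33.10.0.1:8500
    container_name: fabio
    depends_on:
      - consul_server
    ports:
      - "9999:9999"  # HTTP routing port
      - "9998:9998"  # Fabio UI
    networks:
      consuldemo:
        ipv4_address: 33.10.0.100

networks:
  consuldemo:
    driver: bridge
    ipam:
      config:
        - subnet: 33.10.0.0/16
```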
The Django app can now connect to the MongoDB Primary instance and start writing data to it.
We can use the Fabio load balancer to reach a Django app instance, auto-discovered via the Consul registry using the specialized service tags, and render the page with all the database connection information we have been discussing.
Our load-balancer is sitting at ‘33.10.0.100’ and ‘/web’ is configured to be routed to one of our Django application instances running behind the load-balancer.
As you can see from Fabio's auto-detection and configuration in its UI above, it has weighted the Django web application endpoints equally. This helps balance the traffic load across the Django application instances.
When we visit our Fabio URL '33.10.0.100:9999' with the source route '/web', we are routed to one of the Django instances. So, visiting '33.10.0.100:9999/web' gives us the following output.
We are able to restrict Fabio to load-balancing only the Django app instances by adding the required tags only to the Consul service definitions of the Django app services.
This MongoDB Primary instance discovery helps the Django app run database migrations and deploy.
You can explore the Consul Web UI to see all the instances of the Django web application services.
Similarly, you can see how the MongoDB Replica Set instances are laid out.
Let’s see how Consul helps with health-checking services and discovering only the alive services.
We will stop the current MongoDB Replica Set Primary (‘mongo_2’) container, to see what happens.
Consul starts failing the health check for the previous MongoDB Primary service. The MongoDB Replica Set also detects that the node is down and that a re-election of the Primary node is needed, automatically giving us a new MongoDB Primary ('mongo_3').
Our check toggle has kicked in and swapped the check on 'mongo_3' from the MongoDB Secondary check to the MongoDB Primary check.
When we take a look at the view from the Django app, we see it is now connected to the new MongoDB Primary service ('mongo_3').
Let’s see how this plays out when we bring back the stopped MongoDB instance.
Similarly, if we stop one of the Django application service instances, Fabio detects only the healthy instance and routes traffic only to it.
This is how one can use Consul’s service discovery capability to discover, monitor and health-check services.
We can use Consul's Key/Value store to share configuration across both instances of the Django app.
We can use Consul's HTTP interface to store key/value pairs and retrieve them within the app using python-consul, an open-source Python client for Consul. You may also use any other Python library that can interact with Consul's KV store.
Let’s begin by looking at how we can set a key/value pair in Consul using its HTTP interface.
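A sketch of setting and reading a key over the HTTP API, and reading the same key from the app with python-consul (the key name and the Consul address are assumptions):

```bash
# Store a key/value pair via the Consul HTTP interface
curl -X PUT -d 'no-reply@example.com' http://consul_server:8500/v1/kv/web/contact-email

# Read it back (values come back base64-encoded in the JSON response)
curl http://consul_server:8500/v1/kv/web/contact-email
```

And within the Django app:

```python
import consul

c = consul.Consul(host='consul_server', port=8500)
index, data = c.kv.get('web/contact-email')
contact_email = data['Value'].decode('utf-8')  # values are returned as bytes
```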
These key/value pairs in Consul's KV store can also be viewed and updated from its Web UI.
The code used as part of this guide for Consul’s service configuration section is available on ‘service-configuration’ branch of pranavcode/consul-demo project.
That is how one can use Consul’s KV store and configure individual services in their architecture with ease.
Consul Connect provides service-to-service connection authorization and encryption using mutual TLS.
To use Connect, you need to enable it in the server configuration. Connect needs to be enabled across the Consul cluster for it to function properly.
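A sketch of that server configuration stanza, dropped into the servers' config directory (the file name is an assumption):

```json
{
  "connect": {
    "enabled": true
  }
}
```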
In our context, to make the communication TLS-authenticated and secured, we will define a sidecar service with an upstream proxy on the Django app for its communication with the MongoDB Primary instance.
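A sketch of that registration, reusing the 'web' and 'mongodb-primary' service names assumed earlier:

```json
{
  "service": {
    "name": "web",
    "port": 8000,
    "connect": {
      "sidecar_service": {
        "proxy": {
          "upstreams": [
            {
              "destination_name": "mongodb-primary",
              "local_bind_port": 27017
            }
          ]
        }
      }
    }
  }
}
```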
Along with the Connect configuration of the sidecar proxy, we also need to run the Connect proxy for the Django app. This can be achieved by running the following command:
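A sketch, assuming the Django service is registered as 'web':

```bash
consul connect proxy -sidecar-for web
```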
We can add Consul Connect Intentions to create a service graph across all the services and define traffic patterns. We can create intentions as shown below:
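A sketch of such intentions, again assuming the 'web' and 'mongodb-primary' service names:

```bash
# Allow the Django app to talk to the MongoDB Primary over Connect
consul intention create web mongodb-primary

# Deny everything else from reaching the MongoDB Primary
consul intention create -deny '*' mongodb-primary
```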
Intentions for the service graph can also be managed from the Consul Web UI.
This defines the service connection restrictions, allowing or denying services to talk to each other via Connect.
We have also configured the Consul agents to denote which datacenter they belong to, so they can be reached via one or more Consul servers in a given datacenter.
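For example, an agent could be started with an explicit datacenter (a sketch; 'dc1' and the other values are assumptions):

```bash
consul agent -datacenter=dc1 -bind=33.10.0.10 -node=web_1 \
  -data-dir=/tmp/consul -config-dir=/etc/config.d -retry-join=consul_server
```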
The code used as part of this guide for Consul’s service segmentation section is available on ‘service-segmentation’ branch of velotiotech/consul-demo project.
That is how one can use Consul’s service segmentation feature and configure service level connection access control.
Conclusion
The ability to seamlessly control the service mesh that Consul provides makes an operator's life much easier. We hope you have learned how Consul can be used for service discovery, configuration, and segmentation through this practical implementation.
As usual, we hope it was an informative ride on the journey of Consul. This was the final piece of this two-part series, covering a practical view of how Consul's pieces fit together and how it fits into your current project. In case you missed the first part, you can find it here.
We will continue our endeavors with different technologies and bring you the most valuable information we can in every interaction. Let us know what you would like to hear more about from us, or if you have any questions around the topic - we will be more than happy to answer them.