How Much Do You Really Know About Simplified Cloud Deployments?
Is your EC2/VM bill giving you sleepless nights?
Are your EC2 instances under-utilized? Have you been wondering if there is an easy way to maximize EC2/VM usage?
Are you investing too much in your Control Plane, and do you wish you could divert some of that investment towards developing more features (business logic) in your applications?
Is your Configuration Management system overwhelming you and taking on a life of its own?
Do you have legacy applications that do not need Docker at all?
Would you like to simplify your deployment toolchain to streamline your workflows?
Has Kubernetes been recommended to you as the solution to fix all your woes, but you aren’t sure it will actually help you?
Do you feel you are moving towards Docker just so that Kubernetes can be used?
If you answered “Yes” to any of the questions above, do read on; this article might be just what you need.
There are steps to create a simple setup on your laptop at the end of the article.
Introduction
In this article, we will present the typical components of a multi-tier application and show how such an application is set up and deployed.
We shall then see how the same application deployment can be remodeled for scale using any cloud infrastructure. (The same software toolchain can be used to deploy the application on your on-premise infrastructure as well.)
The tools that we propose are Nomad and Consul. We shall focus on how to use these tools rather than deep-dive into their specifics, and briefly cover the features that help us achieve our goals.
Nomad is a distributed workload manager that handles not only Docker containers but also various other types of workloads, such as legacy applications, Java, LXC, etc.
Consul is a distributed service mesh, with features like service registry and a key-value store, among others.
Using these tools, the application/startup workflow would be as follows:
Nomad will be responsible for starting the service.
Nomad will publish the service information in Consul. The service information will include details like:
Where is the application running (IP:PORT)?
What "service-name" is used to identify the application?
What "tags" (metadata) does this application have?
A Typical Application
A typical application deployment consists of a certain fixed set of processes, usually coupled with a database and a few (or many) peripheral services.
These services could be primary (must-have) or support (optional) features of the application.
Note: We are aware of what a proper “service-oriented architecture” should look like, but we will skip that discussion for now and focus instead on how real-world applications are set up and deployed.
Simple Multi-tier Application
In this section, let’s see the components of a multi-tier application along with typical access patterns from outside the system and within the system.
Load Balancer/Web/Front End Tier
Application Services Tier
Database Tier
Utility (or Helper Servers): To run background, cron, or queued jobs.
Using a proxy/load-balancer, the services (Service-A, Service-B, Service-C) could be accessed using distinct hostnames:
a.example.tld
b.example.tld
c.example.tld
For an equivalent path-based routing approach, the setup would be similar. Instead of distinct hostnames, the communication mechanism would be:
common-proxy.example.tld/path-a/
common-proxy.example.tld/path-b/
common-proxy.example.tld/path-c/
Problem Scenario 1
Some of the basic problems with the deployment of the simple multi-tier application are:
What if the service process crashes during its runtime?
What if the host on which the services run shuts down, reboots or terminates?
This is where Nomad’s ability to always keep the service running is useful.
Despite this auto-restart feature, there could be issues if the service restarts on a different machine (i.e., with a different IP address).
With Docker and ephemeral ports, the service could come up on a different port as well.
To solve this, we will use Consul’s service discovery feature, combined with a Consul-aware load-balancer/proxy, to redirect traffic to the appropriate service.
The order of the operations within the Nomad job will thus be:
Nomad will launch the job/task.
Nomad will register the task details as a service definition in Consul. (These steps will be re-executed if/when the application is restarted due to a crash/fail-over)
The Consul-aware load-balancer will route the traffic to the service (IP:PORT)
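Once the job is running, you can verify the registration yourself (assuming a local Consul agent and the service-a name from the sketch above):

    # Ask the Consul HTTP API where service-a is running (IP:PORT)
    curl -s http://127.0.0.1:8500/v1/catalog/service/service-a

    # Or resolve the service via Consul's DNS interface (default port 8600)
    dig @127.0.0.1 -p 8600 service-a.service.consul SRV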
Multi-tier Application With Load Balancer
With the Consul-aware load-balancer in place, the setup now looks like this:
The details of the setup now are:
A Consul-aware load-balancer/proxy; the application will access the services via the load-balancer.
Three instances of service A: A1, A2, A3
Three instances of service B: B1, B2, B3
The Routing Question
At this moment, you could be wondering, “Why/how would the load-balancer know that it has to route traffic for service-A to A1/A2/A3 and traffic for service-B to B1/B2/B3?”
The answer lies in the Consul tags, which are published as part of the service definition (when Nomad registers the service in Consul).
The appropriate Consul tags will tell the load-balancer to route traffic of a particular service to the appropriate backend.
Let’s read that statement again (very slowly, just to be sure): the Consul tags, which are part of the service definition, inform (advertise to) the load-balancer how to route traffic to the appropriate backend.
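For example, if the Consul-aware load-balancer is Fabio (our choice here purely for illustration; Traefik and others use their own tag conventions), a tag following Fabio’s urlprefix- convention is all the routing configuration a service needs:

    service {
      name = "service-a"
      port = "http"
      # Fabio builds its routing table from tags like these:
      tags = ["urlprefix-a.example.tld/"]   # host-based routing
      # tags = ["urlprefix-/path-a"]        # alternative: path-based routing
    }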
This distinction is important to dwell upon, as it differs from how classic load-balancer/proxy software like HAProxy or NGINX is configured. With HAProxy/NGINX, the backend routing information resides with the load-balancer instance and is not “advertised” by the backend.
Traditional load-balancers like NGINX/HAProxy do not natively support dynamic reloading of backends (when backends stop, start, or move around). The heavy lifting of regenerating the configuration file and reloading the service is left to an external entity like Consul-Template.
The use of a Consul-aware load-balancer, instead of a traditional one, eliminates the need for such external workarounds.
The setup can thus be termed a zero-configuration setup: you don’t have to re-configure the load-balancer; it discovers the changing backend services from the information available in Consul.
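If you pick Fabio, the zero-configuration claim is quite literal: with a Consul agent running on the same machine, starting the binary with no arguments is enough for it to begin watching Consul and routing traffic:

    ./fabio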
Problem Scenario 2
So far we have achieved a method to “automatically” discover the backends, but isn’t the Load-Balancer itself a single-point-of-failure (SPOF)?
It absolutely is, and you should always have redundant load-balancer instances (which is what any cloud-provided load-balancer gives you).
As there is a certain cost associated with using a cloud-provided load-balancer, we will create the load-balancers ourselves instead of using a cloud-provided one.
To provide redundancy to the load-balancer instances, you should provision them using an Auto Scaling Group (AWS), VM Scale Sets (Azure), etc.
The same redundancy strategy, using Auto Scaling Groups/VMSS, should also be applied to the worker nodes, where the actual services reside.
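As a sketch of what this could look like with Terraform on AWS (the AMI ID, subnets, and instance sizes below are placeholders, not working values):

    resource "aws_launch_template" "lb" {
      name_prefix   = "lb-"
      image_id      = "ami-0123456789abcdef0"  # placeholder: image with the Consul agent and load-balancer pre-installed
      instance_type = "t3.small"
    }

    resource "aws_autoscaling_group" "lb" {
      min_size            = 2   # at least two instances, so the LB tier has no SPOF
      desired_capacity    = 2
      max_size            = 3
      vpc_zone_identifier = ["subnet-aaaa1111", "subnet-bbbb2222"]  # placeholder subnets across availability zones

      launch_template {
        id      = aws_launch_template.lb.id
        version = "$Latest"
      }
    }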
The Complete Picture
Installation and Configuration
Given how powerful laptops are nowadays, you can easily create a test setup on your laptop using VirtualBox, VMware Workstation Player, VMware Workstation, etc.
As a prerequisite, you will need a few virtual machines which can communicate with each other.
NOTE: Create the VMs with networking set to bridged mode.
The machines needed for the simple setup/demo would be:
1 Linux VM to act as a server (srv1)
1 Linux VM to act as a load-balancer (lb1)
2 Linux VMs to act as worker machines (client1, client2)
Each machine can have 2 CPUs and 1 GB of memory.
The configuration files and scripts needed for the demo, which will help you set up the Nomad and Consul cluster, are available here.
For the sake of simplicity, we shall assume the following IP addresses for the machines. (You can adapt the IPs as per your actual cluster configuration)
srv1: 192.168.1.11
lb1: 192.168.1.101
client1: 192.168.1.201
client2: 192.168.1.202
You can access the web GUI for Consul and Nomad at the following URLs:
Consul: http://192.168.1.11:8500
Nomad: http://192.168.1.11:4646
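You can also check the cluster state from the command line on any of the machines, using the standard Consul and Nomad CLIs:

    consul members          # every agent (server and clients) should show as "alive"
    nomad server members    # the Nomad server(s)
    nomad node status       # the worker nodes (client1, client2)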
Log in to the server and start a watch command to observe the services as they move around (the exact command is your choice; watching the Nomad status is one option):
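    # One option (our assumption): watch the Nomad job status, refreshed every second
    watch -n 1 nomad status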
In the browser, pressing F5 to refresh should keep changing the backend service that you are connected to.
Conclusion
This article should give you a fair idea of the common problems of a distributed application deployment and how they can be solved.
Remodeling an existing application deployment as it scales can be quite a challenge. Hopefully, the sample/demo setup will help you explore, design, and optimize the deployment workflows of your application, be it on-premise or in any cloud environment.