
How Much Do You Really Know About Simplified Cloud Deployments?

Shantanu Gadgil

Cloud & DevOps

Is your EC2/VM bill giving you sleepless nights?

Are your EC2 instances under-utilized? Have you been wondering if there was an easy way to maximize the EC2/VM usage?

Are you investing too much in your Control Plane and wish you could divert some of that investment towards developing more features in your applications (business logic)?

Is your Configuration Management system overwhelming you? Does it seem to have taken on a life of its own?

Do you have legacy applications that do not need Docker at all?

Would you like to simplify your deployment toolchain to streamline your workflows?

Has Kubernetes been recommended to you as the solution to fix all your woes, but you aren’t sure if Kubernetes is actually going to help you?

Do you feel you are moving to Docker just so that you can use Kubernetes?

If you answered “Yes” to any of the questions above, do read on; this article might be just what you need.

There are steps to create a simple setup on your laptop at the end of the article.

Introduction

In this article, we will present the typical components of a multi-tier application and how such an application is set up and deployed.

We shall further go on to see how the same application deployment can be remodeled for scale using any Cloud Infrastructure. (The same software toolchain can be used to deploy the application on your On-Premise Infrastructure as well.)

The tools that we propose are Nomad and Consul. We shall focus on how to use these tools rather than deep-diving into their specifics, and we will briefly cover the features that help us achieve our goals.

  • Nomad is a distributed workload manager that supports not only Docker containers but also various other types of workloads, such as legacy applications, Java, LXC, etc.

More about Nomad task drivers here: nomadproject.io; see also “Application Delivery with HashiCorp” and “Introduction to HashiCorp Nomad”.

  • Consul is a distributed service mesh that provides, among other features, a service registry and a key-value store.

Using these tools, the application startup workflow would be as follows (a minimal job specification illustrating it is sketched after the list):

Nomad will be responsible for starting the service.

Nomad will publish the service information in Consul. The service information will include details like:

  • Where is the application running (IP:PORT)?
  • What "service-name" is used to identify the application?
  • What "tags" (metadata) does this application have?

A Typical Application

A typical application deployment consists of a certain fixed set of processes, usually coupled with a database and a few (or many) peripheral services.

These services could be primary (must-have) or support (optional) features of the application.

Typical Application

Note: We are aware of what a proper “service-oriented architecture” should look like, but we will skip that discussion for now and focus instead on how real-world applications are set up and deployed.

Simple Multi-tier Application

In this section, let’s see the components of a multi-tier application along with typical access patterns from outside the system and within the system.

  • Load Balancer/Web/Front End Tier
  • Application Services Tier
  • Database Tier
  • Utility (or helper) servers: to run background, cron, or queued jobs.
Simple Multi-tier Application

Using a proxy/load-balancer, the services (Service-A, Service-B, Service-C) could be accessed using distinct hostnames:

  • a.example.tld
  • b.example.tld
  • c.example.tld

For an equivalent path-based routing approach, the setup would be similar. Instead of distinct hostnames, the communication mechanism would be:

  • common-proxy.example.tld/path-a/
  • common-proxy.example.tld/path-b/
  • common-proxy.example.tld/path-c/

Problem Scenario 1

Some of the basic problems with the deployment of the simple multi-tier application are:

  • What if the service process crashes during its runtime?
  • What if the host on which the services run shuts down, reboots or terminates?

This is where Nomad’s ability to always keep the service running would be useful.

In spite of this auto-restart feature, there could be issues if the service restarts on a different machine (i.e. different IP address).

In the case of Docker and ephemeral ports, the service could start on a different port as well.

To solve this, we will use the service discovery feature provided by Consul, combined with a Consul-aware load-balancer/proxy to route traffic to the appropriate service.

The order of the operations within the Nomad job will thus be:

  • Nomad will launch the job/task.
  • Nomad will register the task details as a service definition in Consul.
    (These steps will be re-executed if/when the application is restarted due to a crash/fail-over)
  • The Consul-aware load-balancer will route the traffic to the service (IP:PORT).
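
Once the registration exists, any Consul agent can answer “where is the service right now?”. Assuming the service name "foo" from the sketch above, a quick check could look like this:

# SRV records carry both the IP and the (possibly ephemeral) port
dig @127.0.0.1 -p 8600 foo.service.consul SRV

# The same information via Consul's HTTP API
curl -s http://127.0.0.1:8500/v1/catalog/service/foo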

Multi-tier Application With Load Balancer

Using the Consul-aware load-balancer, the diagram will now look like:

Multi-tier Application With Load Balancer

The details of the setup now are:

  • A Consul-aware load-balancer/proxy; the application will access the services via the load-balancer.
  • Three instances of service A: A1, A2, A3
  • Three instances of service B: B1, B2, B3

The Routing Question

At this moment, you could be wondering, “Why/How would the load-balancer know that it has to route traffic for service-A to A1/A2/A3 and route traffic for service-B to B1/B2/B3?”

The answer lies in the Consul tags which will be published as part of the service definition (when Nomad registers the service in Consul).

The appropriate Consul tags will tell the load-balancer to route traffic for a particular service to the appropriate backend.

Let’s read that statement again, slowly, just to be sure: the Consul tags, which are part of the service definition, inform (advertise to) the load-balancer how to route traffic to the appropriate backend.

This distinction is important to dwell upon, as it differs from how classic load-balancer/proxy software like HAProxy or NGINX is configured: for HAProxy/NGINX, the backend routing information resides with the load-balancer instance and is not “advertised” by the backend.

Traditional load-balancers like NGINX/HAProxy do not natively support dynamic reloading of backends (when the backends stop, start, or move around). The heavy lifting of regenerating the configuration file and reloading the service is left to an external entity like Consul-Template.

Using a Consul-aware load-balancer instead of a traditional one eliminates the need for such external workarounds.

The setup can thus be termed a zero-configuration setup; you don’t have to re-configure the load-balancer, as it discovers the changing backend services based on the information available from Consul.
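
As an illustration of such “advertised” routing: with Fabio (the Consul-aware proxy used in the demo later in this article), the routing table is built entirely from service tags carrying the urlprefix- prefix. A sketch of a service stanza, reusing the hostnames/paths from earlier:

service {
  name = "service-a"
  port = "http"
  tags = [
    # host-based routing: traffic for a.example.tld goes to this backend
    "urlprefix-a.example.tld/",

    # path-based alternative: common-proxy.example.tld/path-a
    # "urlprefix-/path-a",
  ]
}

Change the tag and redeploy the job, and the route follows; the load-balancer itself is never touched.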

Problem Scenario 2

So far we have achieved a method to “automatically” discover the backends, but isn’t the Load-Balancer itself a single-point-of-failure (SPOF)?

It absolutely is, and you should always have redundant load-balancer instances (which is what any cloud-provided load-balancer gives you).

As there is a certain cost associated with cloud-provided load-balancers, we will create the load-balancers ourselves rather than use the cloud-provided ones.

To provide redundancy for the load-balancer instances, you should configure them using an Auto Scaling Group (AWS), VM Scale Sets (Azure), etc.

The same redundancy strategy, using Auto Scaling Groups/VMSS, should also be applied to the worker nodes, where the actual services run.

The Complete Picture

The Complete Picture

Installation and Configuration

Given that nowadays laptops are pretty powerful, you can easily create a test setup on your laptop using VirtualBox, VMware Workstation Player, VMware Workstation, etc.

As a prerequisite, you will need a few virtual machines which can communicate with each other.

NOTE: Create the VMs with networking set to bridged mode.

The machines needed for the simple setup/demo would be:

  • 1 Linux VM to act as a server (srv1)
  • 1 Linux VM to act as a load-balancer (lb1)
  • 2 Linux VMs to act as worker machines (client1, client2)

Note: Each machine can have 2 CPUs and 1 GB of memory.

The configuration files and scripts needed for the demo, which will help you set up the Nomad and Consul cluster, are available here.

Set Up the Server

Install the binaries on the server

CODE: https://gist.github.com/velotiotech/225c70e94d5ca23a2944ff5dce4ffa0c.js

Create the Server Configuration

CODE: https://gist.github.com/velotiotech/e44013ed0579a48f9ac7ff047f52fa2b.js
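
The gist above contains the exact files used in the demo; for orientation only, a minimal single-server configuration might look roughly like the following sketch (the paths and addresses are assumptions based on the test setup below):

# /etc/consul.d/server.hcl (sketch)
server           = true
bootstrap_expect = 1
bind_addr        = "192.168.1.11"
client_addr      = "0.0.0.0"
ui               = true
data_dir         = "/opt/consul"

# /etc/nomad.d/server.hcl (sketch)
data_dir = "/opt/nomad"
server {
  enabled          = true
  bootstrap_expect = 1
}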

Set Up the Load-Balancer

Install the binaries on the load-balancer

CODE: https://gist.github.com/velotiotech/7a1317c36b1990eeb4c092674d55f004.js

Create the Load-Balancer Configuration

CODE: https://gist.github.com/velotiotech/45fe2c4aa50e3879681789c0f91ae3d6.js
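
Again as a sketch (not the gist’s contents): the load-balancer machine is just another Nomad client, but giving it a distinct node class lets the load-balancer job be pinned to it. The class name "loadbalancer" is an assumption for illustration:

# /etc/nomad.d/client.hcl on lb1 (sketch)
data_dir = "/opt/nomad"
client {
  enabled    = true
  servers    = ["192.168.1.11:4647"]
  node_class = "loadbalancer"   # used by the load-balancer job's constraint
}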

Set Up the Client (Worker) Machines

Install the binaries on the worker machines

CODE: https://gist.github.com/velotiotech/331b0b57f92756aab8df663290d7b9c5.js

Create the Worker Configuration

CODE: https://gist.github.com/velotiotech/6a928bd7c65f2f6c07e88e3e762c606d.js
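
The worker configuration is the same idea, minus the node class; roughly (again a sketch, not the gist’s contents):

# /etc/nomad.d/client.hcl on client1/client2 (sketch)
data_dir = "/opt/nomad"
client {
  enabled = true
  servers = ["192.168.1.11:4647"]
}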

Test the Setup

For the sake of simplicity, we shall assume the following IP addresses for the machines. (You can adapt the IPs to your actual cluster configuration.)

srv1: 192.168.1.11

lb1: 192.168.1.101

client1: 192.168.1.201

client2: 192.168.1.202

You can access the web GUI for Consul and Nomad at the following URLs:

Consul: http://192.168.1.11:8500

Nomad: http://192.168.1.11:4646

Log in to the server and start the following watch command:

CODE: https://gist.github.com/velotiotech/7f736073156bbb871d9609b01649ea08.js

Output:

CODE: https://gist.github.com/velotiotech/5cf010fc1fc85b0d33d2bb9681826d3b.js

Submit Jobs

Log in to the server (srv1) and download the sample jobs.

Run the load-balancer job

CODE: https://gist.github.com/velotiotech/cfe5812c5ef1d678e2d2f90df1ba800e.js

Output:

CODE: https://gist.github.com/velotiotech/c57c2e8813bd31f57d7e324c1c9d56ea.js
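
For reference, a Fabio job of the kind the gist runs might look roughly like this (the image, resource values, and node-class constraint are assumptions consistent with the sketches above):

job "fabio" {
  datacenters = ["dc1"]
  type        = "system"   # run on every eligible node

  constraint {
    attribute = "${node.class}"
    value     = "loadbalancer"
  }

  group "fabio" {
    task "fabio" {
      driver = "docker"

      config {
        image        = "fabiolb/fabio"
        network_mode = "host"   # fabio listens on :9999 (traffic) and :9998 (UI)
      }

      resources {
        cpu    = 200
        memory = 128
      }
    }
  }
}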

Check the status of the load-balancer

CODE: https://gist.github.com/velotiotech/7b53671b2cc3fc91c819fcad7aa81c33.js

Output:

CODE: https://gist.github.com/velotiotech/f00f5f63077c433857b38047fc70d247.js

Run the service 'foo'

CODE: https://gist.github.com/velotiotech/a371d145a803757f2015a5a112724806.js

Output:

CODE: https://gist.github.com/velotiotech/2322dee139c0366783b6fde1f849ed6b.js

Check the status of service 'foo'

CODE: https://gist.github.com/velotiotech/bd79a355c78854555750db857ac6edfd.js

Output:

CODE: https://gist.github.com/velotiotech/9b31b9ac7975150e2b331d3cb57084da.js

Run the service 'bar'

CODE: https://gist.github.com/velotiotech/983e12cd785b9c475025e9b62f3010de.js

Output:

CODE: https://gist.github.com/velotiotech/e6edb3c90812bb6de02d2f0106574742.js

Check the status of service 'bar'

CODE: https://gist.github.com/velotiotech/247dac70ce793f2e8fe2b5eebc81cd86.js

Output:

CODE: https://gist.github.com/velotiotech/0f7d084a3e7ccde216d39f309fcab436.js

Check the Fabio Routes

http://192.168.1.101:9998/routes

Checking Fabio Routes

Connect to the Services

The services "foo" and "bar" are available at:

http://192.168.1.101:9999/foo

http://192.168.1.101:9999/bar

Output:

CODE: https://gist.github.com/velotiotech/f747480547b734fac0517eb17614e297.js

Pressing F5 to refresh the browser should keep changing the backend instance you are connected to.
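
The same rotation can be observed from the command line, assuming the demo services echo an instance-identifying response:

# each response should come from a different backend instance
for i in 1 2 3 4 5 6; do
  curl -s http://192.168.1.101:9999/foo
done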

Conclusion

This article should give you a fair idea of the common problems of a distributed application deployment and how they can be solved.

Remodeling an existing application deployment as it scales can be quite a challenge. Hopefully, the sample/demo setup will help you explore, design, and optimize your application’s deployment workflows, be it On-Premise or in any Cloud Environment.


Did you like the blog? If yes, we're sure you'll also like to work with the people who write them - our best-in-class engineering team.

We're looking for talented developers who are passionate about new emerging technologies. If that's you, get in touch with us.

Explore current openings
