A Practical Guide to Deploying Multi-tier Applications on Google Container Engine (GKE)


Most modern programmers can attest that containerization affords more flexibility and lets us build truly cloud-native applications. Containers provide portability: the ability to easily move applications across environments. However, complex applications comprise many (tens or hundreds of) containers. Managing such applications is a real challenge, and that's where container orchestration and scheduling platforms like Kubernetes, Mesosphere, and Docker Swarm come into the picture.
Kubernetes, backed by Google, is leading the pack, with Red Hat, Microsoft, and now Amazon putting their weight behind it.

Kubernetes can run on any cloud or on bare-metal infrastructure. Setting up and managing Kubernetes yourself can be a challenge, but Google provides an easy way to use Kubernetes through the Google Container Engine (GKE) service.

What is GKE?

Google Container Engine is a management and orchestration system for containers. In short, it is hosted Kubernetes. The goal of GKE is to increase the productivity of DevOps and development teams by hiding the complexity of setting up the Kubernetes cluster, the overlay network, and so on: Google provisions the control plane, the nodes, and the cluster networking for you, so you can focus on your applications.

In this blog, we will see how to create your own Kubernetes cluster in GKE and how to deploy a multi-tier application in it. The blog assumes you have a basic understanding of Kubernetes and have used it before. It also assumes you have created an account with Google Cloud Platform. If you are not familiar with Kubernetes, this guide from Deis - https://deis.com/blog/2016/kubernetes-illustrated-guide/ is a good place to start.

Google provides a command-line tool, gcloud, as the primary interface to all Google Cloud Platform products and services. You can use it to perform many common platform tasks either from the command line or in scripts. Follow this guide to install the gcloud tool.

Now let's begin! The first step is to create the cluster.

Basic steps to create a cluster

In this section, we will create a GKE cluster using the gcloud command-line tool.

Set the zone in which you want to deploy the cluster:

$ gcloud config set compute/zone us-west1-a

Create the cluster using the following command:

$ gcloud container --project <project-name> \
clusters create <cluster-name> \
--machine-type n1-standard-2 \
--image-type "COS" --disk-size "50" \
--num-nodes 2 --network default \
--enable-cloud-logging --no-enable-cloud-monitoring

Let's try to understand what each of these parameters means:

--project: The project in which to create the cluster.

--machine-type: The machine type for each node, e.g. n1-standard-2 or n1-standard-4.

--image-type: The OS image for the nodes. "COS" is Google's Container-Optimized OS; more info here: https://cloud.google.com/container-optimized-os/

--disk-size: The boot disk size (in GB) of each node.

--num-nodes: Number of nodes in the cluster.

--network: The network to use for the cluster. In this case, we use the default network.

Apart from the above options, you can also use the following to provide specific requirements while creating the cluster:

--scopes: Scopes allow containers in the cluster to access Google services directly, without needing separate credentials. You can specify a comma-separated list of scopes, e.g. --scopes storage-ro,bigquery. You can find all the scopes that Google supports in the gcloud documentation.

--additional-zones: Specify additional zones for high availability, e.g. --additional-zones us-east1-b,us-east1-d. GKE will then create the cluster's nodes across 3 zones (the one set at the beginning plus the 2 specified here).

--enable-autoscaling: Enables the cluster autoscaler. If you specify this option, you also have to specify the minimum and maximum number of nodes, e.g. --enable-autoscaling --min-nodes=15 --max-nodes=50. You can read more about how autoscaling works here: https://cloud.google.com/container-engine/docs/cluster-autoscaler

Next, fetch the credentials of the newly created cluster. This step updates the kubeconfig file so that kubectl points to the right cluster.

$ gcloud container clusters get-credentials my-first-cluster --project project-name

Now your first Kubernetes cluster is ready. Let's check the cluster's nodes and their health:

$ kubectl get nodes
NAME                                            STATUS    AGE       VERSION
gke-first-cluster-default-pool-d344484d-vnj1    Ready     2h        v1.6.4
gke-first-cluster-default-pool-d344484d-kdd7    Ready     2h        v1.6.4
gke-first-cluster-default-pool-d344484d-ytre2   Ready     2h        v1.6.4

Now that the cluster is up, let's see how to deploy a multi-tier application on it. We will use a simple Python Flask app that greets the user and stores and retrieves employee data.

Application Deployment

I have created a simple Python Flask application to deploy on the Kubernetes cluster created with GKE. You can go through the source code at https://github.com/velotio-tech/GKE-and-Sample-App. The repository's directory structure is as follows:

├── Dockerfile
├── mysql-deployment.yaml
├── mysql-service.yaml
├── src
│   ├── app.py
│   └── requirements.txt
├── testapp-deployment.yaml
└── testapp-service.yaml

The repository includes a Dockerfile for the Python Flask application, so we can build our own image to deploy. For MySQL, we won't build an image of our own; we will use the latest MySQL image from the public Docker Hub registry.
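As a sketch of what such a Dockerfile might contain (the base image, paths, and port are my assumptions, not the repository's exact file):

```dockerfile
# Hypothetical sketch of a Dockerfile for the Flask app.
FROM python:2.7

WORKDIR /app

# Install dependencies first so Docker can cache this layer.
COPY src/requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code.
COPY src/ .

# Flask's default port; the Service's targetPort must match.
EXPOSE 5000

CMD ["python", "app.py"]
```

You would build and push this image to a registry (e.g. Google Container Registry) and reference it in testapp-deployment.yaml.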

Before deploying the application, let’s re-visit some of the important Kubernetes terms:


A Pod is a single container or a group of containers that are always scheduled together on the same host. It acts as a single unit of deployment.


A Deployment is an entity that manages ReplicaSets and provides declarative updates to Pods. It is recommended to use Deployments instead of managing ReplicaSets directly. We can use a Deployment to create, remove, and update ReplicaSets, and Deployments can roll out and roll back changes.


A Service in Kubernetes is an abstraction that connects you to one or more Pods. You could connect to a Pod using its IP address, but since Pods come and go, their IP addresses change. A Service gets its own stable IP and DNS name, and those remain for the entire lifetime of the Service.

Each tier in the application is represented by a Deployment, described in its own YAML file. We have two deployment YAMLs: one for MySQL and one for the Python application.

1. MySQL Deployment YAML

2. Python Application Deployment YAML

Each Service is also represented by a YAML file as follows:

1. MySQL service YAML

2. Python Application service YAML

You will find a ‘kind’ field in each YAML file. It is used to specify whether the given configuration is for deployment, service, pod, etc.
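To make this concrete, here is a hedged sketch of what the MySQL deployment YAML might look like. The labels, ports, and password handling are my assumptions; see the repository for the actual manifest.

```yaml
# Hypothetical sketch of mysql-deployment.yaml (illustrative values only).
apiVersion: apps/v1beta1      # Deployments lived under apps/v1beta1 in Kubernetes 1.6
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql            # the MySQL Service selects pods by this label
    spec:
      containers:
      - name: mysql
        image: mysql:latest   # pulled from the public Docker Hub registry
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "password"   # in a real deployment, use a Secret instead
```

The testapp deployment YAML follows the same shape, pointing at our own Flask image instead of mysql:latest.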

In the Python app service YAML, I am using type: LoadBalancer. In GKE, there are two types of cloud load balancers available to expose an application to the outside world:

  1. TCP (network) load balancer: a layer-4 load balancer that GKE provisions automatically for a Service of type LoadBalancer. We will use this in our example.

  2. HTTP(s) load balancer: It can be created using Ingress. For more information, refer to this post that talks about Ingress in detail: https://velotio.com/blog/2017/7/5/http-load-balancing-in-kubernetes-with-ingress

In the MySQL service YAML, I have not specified any type, so the default type, ClusterIP, is used. That exposes the MySQL container only inside the cluster, which is enough for the Python app to access it.
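For reference, here is a hedged sketch of the two Service manifests. The ports and labels are my assumptions; check the repository for the real files.

```yaml
# Hypothetical sketch of testapp-service.yaml: type LoadBalancer asks GKE
# to provision an external TCP load balancer for the app.
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  type: LoadBalancer
  ports:
  - port: 80           # external port
    targetPort: 5000   # assumed Flask port inside the pod
  selector:
    app: test-app
---
# Hypothetical sketch of mysql-service.yaml: no type specified, so it
# defaults to ClusterIP and is reachable only from inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
```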

If you check app.py, you can see that I have used "mysql-service.default" as the database hostname. This is the DNS name of the MySQL service, in the form <service-name>.<namespace>; the Python application refers to that DNS name when accessing the MySQL database.
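A minimal Python sketch of this idea (the function name, credentials, and database name are hypothetical, not the repository's actual code):

```python
# Hypothetical sketch: the app reaches MySQL through the Service's stable
# DNS name rather than a pod IP, which can change as pods come and go.
MYSQL_HOST = "mysql-service.default"  # <service-name>.<namespace>


def connection_settings(host=MYSQL_HOST, user="root",
                        password="password", db="employees"):
    """Build the settings a Flask app would hand to its MySQL client
    library (e.g. PyMySQL). Only the host matters for service discovery."""
    return {"host": host, "user": user, "password": password, "db": db}


# Cluster DNS resolves the host to the Service's ClusterIP at connect time.
settings = connection_settings()
```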

Now, let's actually set up the components from these configurations. As mentioned above, we will create the services first, followed by the deployments.


$ kubectl create -f mysql-service.yaml
$ kubectl create -f testapp-service.yaml


$ kubectl create -f mysql-deployment.yaml
$ kubectl create -f testapp-deployment.yaml

Check the status of the pods and services. Wait until all pods reach the Running state and the Python application service gets an external IP, like below:

$ kubectl get services
NAME            CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
kubernetes      10.x.x.x       <none>          443/TCP        5h
mysql-service   10.x.x.x       <none>          3306/TCP       1m
test-service    10.x.x.x       <external-ip>   80:32546/TCP   11s

Once you get the external IP, you should be able to make API calls using simple curl requests.

E.g., to store data (replace <external-ip> with your service's external IP; the exact endpoint paths are defined in app.py):

curl -H "Content-Type: application/x-www-form-urlencoded" -X POST -d id=1 -d name=NoOne http://<external-ip>/<store-endpoint>

E.g., to get data:

curl http://<external-ip>/<get-endpoint>
At this stage your application is completely deployed and is externally accessible.

Manual scaling of pods

Scaling your application up or down in Kubernetes is quite straightforward. Let’s scale up the test-app deployment.

$ kubectl scale deployment test-app --replicas=3

The Deployment configuration for test-app will be updated, and you will see 3 replicas of test-app running. Verify it using:

$ kubectl get pods

In the same manner, you can scale down your application by reducing the replica count.

Cleanup

Un-deploying an application from Kubernetes is also quite straightforward. All we have to do is delete the services and delete the deployments. The only caveat is that the deletion of the load balancer is an asynchronous process. You have to wait until it gets deleted.

$ kubectl delete service mysql-service
$ kubectl delete service test-service

The above commands will also deallocate the load balancer that was created as part of test-service. You can check the status of the load balancer with the following command:

$ gcloud compute forwarding-rules list

Once the load balancer is deleted, you can clean up the deployments as well.

$ kubectl delete deployments test-app
$ kubectl delete deployments mysql

Delete the Cluster:

$ gcloud container clusters delete my-first-cluster


In this blog, we saw how easy it is to deploy, scale & terminate applications on Google Container Engine. Google Container Engine abstracts away all the complexity of Kubernetes and gives us a robust platform to run containerised applications. I am super excited about what the future holds for Kubernetes!

Check out some of Velotio's other blogs on Kubernetes.

About the Author


Ajay is a Cloud & Virtualization specialist. He has a strong understanding of the VMware virtualization platform, Amazon Web Services, and Google Cloud Platform. Lately, he has been working in the world of Docker & Kubernetes. He has helped several customers adopt Kubernetes and has built tooling and automation around it. He is a big fan of FRIENDS and Game of Thrones!