Need & Challenges for Cloud Migration & Containerization

Containerized applications are becoming more popular with each passing year, and enterprises are increasingly adopting container technology as they modernize their IT systems. Migrating your applications from VMs or physical machines to containers brings multiple advantages: better resource utilization, faster deployment times, easy replication and cloning, and reduced lock-in. Container orchestration platforms like Kubernetes, Google Container Engine (GKE) and Amazon EC2 Container Service (Amazon ECS) make it easy to deploy and manage your containerized applications. But in order to use these platforms, you need to migrate your legacy applications to containers, or rewrite/redeploy your applications from scratch with a containerization approach. Rearchitecting your applications around containers is preferable, but is that feasible for complex legacy applications? Can your deployment team list every detail of your application's deployment process? Do you have the patience to author a Dockerfile for each component of your complex application stack?

Automated migrations!

Velotio has been helping customers with automated migration of VMs and bare-metal servers to various container platforms. We have developed automation that converts these migrated applications into containers and deploys them on platforms like GKE, Amazon ECS and Kubernetes. In this blog post, we will cover one such migration tool developed at Velotio, which migrates an application running on a VM or physical machine to Google Container Engine (GKE) with a single command.

Migration tool details

We have named our migration tool A2C (Anything to Container). It can migrate applications running on any Unix or Windows operating system.

The migration tool requires the following information about the server to be migrated:

  • IP of the server

  • SSH user and SSH key/password for the application server

  • Configuration file containing data paths for application/database/components (more details below)

  • Name for your Docker image (the image that will be created for your application)

  • GKE Container Cluster details

To store persistent data, volumes can be defined in the container definition. Changes made under a volume path remain persistent even if the container is killed or crashes. A volume is essentially a filesystem path mounted into the container from outside: a directory on the host machine the container runs on, an NFS share, or cloud storage. Because the container mounts that external path, data changes are written to the backing storage rather than the container's own filesystem. Our migration tool supports data volumes defined in the configuration file: it automatically creates disks for the defined volumes and copies data from your application server to these disks in a consistent way.
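This is the same bind-mount behaviour plain Docker exposes with the -v flag; a minimal illustration (the host path, image and password here are just examples, not something the tool generates):

$ docker run -d -e MYSQL_ROOT_PASSWORD=secret -v /data/mysql:/var/lib/mysql mariadb

Everything MariaDB writes under /var/lib/mysql inside the container lands in /data/mysql on the host, so the data survives the container being removed or recreated.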

The configuration file we have been talking about is a YAML file containing filesystem-level information about your application server. A sample is shown below:

includes:
 - /
volumes:
 - var/log/httpd
 - var/log/mariadb
 - var/www/html
 - var/lib/mysql
excludes:
 - mnt
 - var/tmp
 - etc/fstab
 - proc
 - tmp

The configuration file contains 3 sections: includes, volumes and excludes:

  • The includes section lists filesystem paths on your application server that you want to add to your container image.
  • The volumes section lists filesystem paths on your application server that store your application data. Paths containing database files, application code, configuration files and log files are generally good candidates for volumes.
  • The excludes section lists filesystem paths that you don't want to make part of the container. These may include temporary paths like /proc and /tmp, as well as NFS-mounted paths. Ideally, you include everything by specifying "/" in the includes section and exclude specifics in the excludes section.

The Docker image name given as input to the migration tool is the Docker registry path where the image will be stored, followed by the name and tag of the image. A Docker registry is like a GitHub for Docker images: it stores all your images, and different versions of the same image can be kept by giving each a version-specific tag. GKE also provides a Docker registry, and since in this demo we are migrating to GKE, we will store our image in the GKE registry as well.
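Outside the tool, tagging and pushing an image to the Google Container Registry looks roughly like this (the project ID and tag are placeholders; depending on your gcloud version, registry credentials are wired up with gcloud auth configure-docker or the older gcloud docker -- push):

$ docker tag migrate-lamp us.gcr.io/glassy-chalice-XXXXX/migrate-lamp:v1
$ gcloud auth configure-docker
$ docker push us.gcr.io/glassy-chalice-XXXXX/migrate-lamp:v1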

The GKE container cluster details given as input to the migration tool include the GCP project name, the container cluster name and the zone. A container cluster can be created in GKE to host the containerized applications. We have a separate set of scripts to perform cluster creation, and a cluster can also be created easily through the GKE UI. For now, we will assume that we have a three-node cluster created in GKE, which we will use to host our application.
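For reference, creating and connecting to a three-node cluster like the one used in this demo boils down to a couple of gcloud calls (the cluster name and zone match the demo below; adjust them for your project):

$ gcloud container clusters create a2c-demo --num-nodes=3 --zone=us-central1-b
$ gcloud container clusters get-credentials a2c-demo --zone=us-central1-b

The second command fetches kubectl credentials so that the later deployment step can talk to the cluster.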

Tasks performed during migration

Our migration tool (A2C) performs the following activities to migrate an application running on a VM or physical machine to a GKE container cluster:

1. Install the A2C migration tool with all of its dependencies on the target application server

2. Create a Docker image of the application server, based on the filesystem-level information given in the configuration file

3. Capture metadata from the application server, such as configured services, port usage, network configuration, external services, etc.

4. Push the Docker image to the GKE container registry

5. Create a disk in Google Cloud for each volume path defined in the configuration file and prepopulate the disks with data from the application server (a rough manual sketch of this step follows the list)

6. Create a deployment spec for the container application in the GKE container cluster, which opens the required ports, configures the required services, adds multi-container dependencies, attaches the prepopulated disks to the containers, etc.

7. Deploy the application, after which your application runs as containers in GKE with the application software in a running state. The new application URLs are given as output.

8. Configure load balancing and HA for your application.
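To give a feel for step 5, here is roughly what creating and prepopulating one of the volume disks would look like by hand; the disk size, the helper instance (a2c-host) and the rsync-based copy are illustrative, not necessarily what A2C does internally:

$ gcloud compute disks create migrate-lamp-0 --size=10GB --zone=us-central1-b
$ gcloud compute instances attach-disk a2c-host --disk=migrate-lamp-0 --zone=us-central1-b
# format and mount the new disk (device name varies), then copy the volume data across:
$ rsync -a root@130.211.231.58:/var/log/httpd/ /mnt/migrate-lamp-0/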

Demo

For demonstration purposes, we will deploy a LAMP stack (Apache + PHP + MySQL) on a CentOS 7 VM and run the migration utility against the VM, which will migrate the application to our GKE cluster. After the migration, we will show the application running on GKE, preconfigured with the same data as on our VM.

Step 1

We set up a LAMP stack using Apache, PHP and MySQL on a CentOS 7 VM in GCP. The PHP application can be used to list, add, delete or edit user data, and the data is stored in a MySQL database.
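For completeness, the base environment is just a stock CentOS 7 LAMP install, with the sample PHP application placed under /var/www/html; roughly (run as root or via sudo):

$ yum install -y httpd php php-mysql mariadb-server
$ systemctl enable httpd mariadb
$ systemctl start httpd mariadb
$ mysql_secure_installation

We then added some data to the database through the application, and the UI shows the following: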

[Screenshot: the PHP application UI listing the user data added on the source VM]

Step 2

Now we run the A2C migration tool, which will migrate this application stack running on a VM into a container and auto-deploy it to GKE.

[root@a2c-host velotio]# ./migrate.py -c lamp_data_handler.yml -d "tcp://35.202.201.247:4243" -i migrate-lamp -p glassy-chalice-XXXXX -u root -k ~/mykey -l a2c-host --gcecluster a2c-demo --gcezone us-central1-b 130.211.231.58

Pushing converter binary to target machine
Pushing data config to target machine
Pushing installer script to target machine
Running converter binary on target machine
[130.211.231.58] out: creating docker image
[130.211.231.58] out: image created with id 6dad12ba171eaa8615a9c353e2983f0f9130f3a25128708762228f293e82198d
[130.211.231.58] out: Collecting metadata for image
[130.211.231.58] out: Generating metadata for cent7
[130.211.231.58] out: Building image from metadata
Pushing the docker image to GCP container registry

Initiate remote data copy
Activated service account credentials for: [glassy-chaliceXXXXX@appspot.gserviceaccount.com]
for volume var/log/httpd
Creating disk migrate-lamp-0
Disk Created Successfully
transferring data from source

for volume var/log/mariadb
Creating disk migrate-lamp-1
Disk Created Successfully
transferring data from source

for volume var/www/html
Creating disk migrate-lamp-2
Disk Created Successfully
transferring data from source

for volume var/lib/mysql
Creating disk migrate-lamp-3
Disk Created Successfully
transferring data from source

Connecting to GCP cluster for deployment
Created service file /tmp/gcp-service.yaml
Created deployment file /tmp/gcp-deployment.yaml

Deploying to GKE

$ kubectl get pod

NAME                            READY     STATUS              RESTARTS   AGE
migrate-lamp-3707510312-6dr5g   0/1       ContainerCreating   0          58s

$ kubectl get deployment
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
migrate-lamp   1         1         1            0           1m

$ kubectl get service
NAME           CLUSTER-IP     EXTERNAL-IP     PORT(S)                                    AGE
kubernetes     10.59.240.1    <none>          443/TCP                                    23h
migrate-lamp   10.59.248.44   35.184.53.100   3306:31494/TCP,80:30909/TCP,22:31448/TCP   53s

You can access your application using the above connection details!

Step 3

Access the LAMP stack on GKE using the external IP 35.184.53.100 on the default port 80, just as on the source machine.
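You can also spot-check the migrated endpoints from any machine; appuser below is a placeholder for whatever MySQL user was configured on the source VM:

$ curl -I http://35.184.53.100/
$ mysql -h 35.184.53.100 -P 3306 -u appuser -p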

[Screenshot: the same application, now served from GKE at 35.184.53.100]

Here is the Docker image created in the GCP Container Registry:

[Screenshot: the migrate-lamp image in the GCP Container Registry]

We can also see that disks named migrate-lamp-0 through migrate-lamp-3 were created as part of this automated migration.
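The same disks can be listed from the command line (the filter is a regex match on the disk name):

$ gcloud compute disks list --filter="name~migrate-lamp"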

[Screenshot: the migrate-lamp persistent disks in GCP]

A load balancer was also provisioned in GCP as part of the migration process:

[Screenshot: the load balancer provisioned in GCP for the migrate-lamp service]

The following service file and deployment file were created by our migration tool to deploy the application on GKE:

[root@a2c-host ~]# cat /tmp/gcp-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: migrate-lamp
  name: migrate-lamp
spec:
  ports:
  - name: migrate-lamp-3306
    port: 3306
  - name: migrate-lamp-80
    port: 80
  - name: migrate-lamp-22
    port: 22
  selector:
    app: migrate-lamp
  type: LoadBalancer

[root@a2c-host ~]# cat /tmp/gcp-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: migrate-lamp
  name: migrate-lamp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: migrate-lamp
  template:
    metadata:
      labels:
        app: migrate-lamp
    spec:
      containers:
      - image: us.gcr.io/glassy-chalice-129514/migrate-lamp
        name: migrate-lamp
        ports:
        - containerPort: 3306
        - containerPort: 80
        - containerPort: 22
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /var/log/httpd
          name: migrate-lamp-var-log-httpd
        - mountPath: /var/www/html
          name: migrate-lamp-var-www-html
        - mountPath: /var/log/mariadb
          name: migrate-lamp-var-log-mariadb
        - mountPath: /var/lib/mysql
          name: migrate-lamp-var-lib-mysql
      volumes:
      - gcePersistentDisk:
          fsType: ext4
          pdName: migrate-lamp-0
        name: migrate-lamp-var-log-httpd
      - gcePersistentDisk:
          fsType: ext4
          pdName: migrate-lamp-2
        name: migrate-lamp-var-www-html
      - gcePersistentDisk:
          fsType: ext4
          pdName: migrate-lamp-1
        name: migrate-lamp-var-log-mariadb
      - gcePersistentDisk:
          fsType: ext4
          pdName: migrate-lamp-3
        name: migrate-lamp-var-lib-mysql
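Since these are ordinary Kubernetes manifests, they can also be re-applied by hand, for example to recreate the deployment on another cluster:

$ kubectl apply -f /tmp/gcp-service.yaml -f /tmp/gcp-deployment.yaml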

Conclusion

Migrations are always hard for IT and development teams. At Velotio, we have been helping customers migrate to cloud and container platforms using streamlined processes and automation. Feel free to reach out to us at contact@velotio.com to learn more about our cloud and container adoption/migration offerings.



Madhur is a full-stack engineer who likes to explore new cutting-edge technologies. He has worked extensively on cloud-native development, DevOps and Big Data. He is currently exploring the world of containers, Docker, Kubernetes and microservices! If you'd like to chat about anything related to this article, or have questions around Kubernetes, containers, networking, or anything else, get in touch.