
The Ultimate Guide to Disaster Recovery for Your Kubernetes Clusters

Prafull Ladha

Cloud & DevOps

Kubernetes lets us run containerized applications at scale without drowning in the details of application load balancing. You can ensure high availability for applications running on Kubernetes by running multiple replicas (pods) of each application. All the complexity of container orchestration is safely hidden away so that you can focus on developing your application instead of deploying it. Learn more about high availability of Kubernetes clusters and how you can use kubeadm for high availability in Kubernetes here.

But using Kubernetes comes with its own challenges, and getting Kubernetes up and running takes some real work. If you are not familiar with setting up Kubernetes, you might want to take a look here.

Kubernetes allows us to have zero-downtime deployments, yet service-interrupting events are inevitable and can occur at any time. Your network can go down, your latest application push can introduce a critical bug, or, in the rarest case, you might even face a natural disaster.

When you are using Kubernetes, sooner or later you need to set up backups. If your cluster ever goes into an unrecoverable state, you will need a backup to return to the previous stable state of the cluster.

Why Backup and Recovery?

There are three reasons why you need a backup and recovery mechanism in place for your Kubernetes cluster:

  1. To recover from disasters: for example, someone accidentally deletes the namespace where your deployments reside.
  2. To replicate the environment: you want to replicate your production environment to a staging environment before any major upgrade.
  3. To migrate the Kubernetes cluster: say you want to move your Kubernetes cluster from one environment to another.

What to Backup?

Now that you know why, let's see what exactly you need to back up. There are two things:

  1. Your Kubernetes control plane stores its state in etcd, so you need to back up the etcd state to recover all the Kubernetes resources.
  2. If you have stateful containers (which you will have in the real world), you need to back up the persistent volumes as well.

How to Backup?

There are various tools, like Heptio Ark (now Velero) and Kube-backup, to back up and restore Kubernetes clusters running on cloud providers. But what if you are not using a managed Kubernetes cluster? You might have to get your hands dirty if you are running Kubernetes on bare metal, just like we are.

We are running a three-master Kubernetes cluster with one etcd member on each master, three etcd members in total. If we lose one master, we can still recover it, because the etcd quorum (two of the three members) is intact. But if we lose two masters, etcd loses quorum, so for a production-grade cluster we need a mechanism to recover from that situation as well.

Kubernetes Cluster Backup

Want to know how to set up a multi-master Kubernetes cluster? Keep reading!

Taking an etcd backup:

The mechanism for taking an etcd backup differs depending on how the etcd cluster is set up in your Kubernetes environment.

There are two ways to set up an etcd cluster in a Kubernetes environment:

  1. Internal etcd cluster: you run your etcd cluster as containers/pods inside the Kubernetes cluster, and it is Kubernetes' responsibility to manage those pods.
  2. External etcd cluster: you run your etcd cluster outside the Kubernetes cluster, mostly as Linux services, and provide its endpoints for the Kubernetes cluster to write to.

Backup Strategy for Internal Etcd Cluster:

To take a backup from inside an etcd pod, we will use Kubernetes' CronJob functionality, which does not require the etcdctl client to be installed on the host.

The following is the definition of a Kubernetes CronJob that takes an etcd backup every minute:

CODE: https://gist.github.com/velotiotech/57cb0774508f0594e97efbcc2d9f3f82.js
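In case the gist above does not load, here is a minimal sketch of what such a CronJob can look like. It runs etcdctl inside the etcd image itself, so nothing needs to be installed on the host. The image tag, host paths, and node labels are assumptions; adapt them to your cluster:

apiVersion: batch/v1  # use batch/v1beta1 on Kubernetes versions older than 1.21
kind: CronJob
metadata:
  name: etcd-backup
  namespace: kube-system
spec:
  schedule: "* * * * *"        # every minute, as described above
  jobTemplate:
    spec:
      template:
        spec:
          hostNetwork: true    # reach etcd on the node's loopback address
          nodeSelector:
            node-role.kubernetes.io/master: ""   # assumed master label
          tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
          containers:
          - name: etcd-backup
            image: k8s.gcr.io/etcd:3.4.13-0      # assumed; use your cluster's etcd image
            command: ["/bin/sh", "-c"]
            args:
            - >-
              ETCDCTL_API=3 etcdctl snapshot save
              /backup/etcd-snapshot-$(date +%Y-%m-%d_%H-%M-%S).db
              --endpoints=https://127.0.0.1:2379
              --cacert=/etc/kubernetes/pki/etcd/ca.crt
              --cert=/etc/kubernetes/pki/etcd/server.crt
              --key=/etc/kubernetes/pki/etcd/server.key
            volumeMounts:
            - name: etcd-certs
              mountPath: /etc/kubernetes/pki/etcd
              readOnly: true
            - name: backup-dir
              mountPath: /backup
          restartPolicy: OnFailure
          volumes:
          - name: etcd-certs
            hostPath:
              path: /etc/kubernetes/pki/etcd    # assumed kubeadm certificate location
              type: Directory
          - name: backup-dir
            hostPath:
              path: /var/backups/etcd           # assumed backup destination on the host
              type: DirectoryOrCreate

A per-minute schedule is aggressive for most clusters; in practice you would loosen the schedule and ship the snapshot files off the node.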

Backup Strategy for External Etcd Cluster:

If you are running the etcd cluster on Linux hosts as a service, you should set up a Linux cron job to back up your cluster.

Run the following command to save an etcd backup:

CODE: https://gist.github.com/velotiotech/d5a5c5f990e233944ae928b4a001a18e.js
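If the gist is unavailable, the command is etcdctl's v3 snapshot save, along these lines (the endpoint and certificate paths are assumptions based on a typical external etcd setup):

# Take a snapshot of the etcd keyspace (v3 API)
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd/snapshot-$(date +%F).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/pki/ca.crt \
  --cert=/etc/etcd/pki/server.crt \
  --key=/etc/etcd/pki/server.key

# Example crontab entry: run a backup script wrapping the command above every night at 2 AM
# 0 2 * * * root /usr/local/bin/etcd-backup.sh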

Disaster Recovery

Now, let's say the Kubernetes cluster went completely down and we need to recover it from the etcd snapshot.

Normally, you would start the etcd cluster and then run kubeadm init on the master node, pointing it at the etcd endpoints.

Make sure you put the backed-up certificates into the /etc/kubernetes/pki folder before running kubeadm init, so that it picks up the same certificates.
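For an external etcd cluster, the kubeadm configuration file points at the restored etcd endpoints, roughly as in the sketch below (the endpoint IPs and certificate paths are assumptions; adjust them to your cluster):

# kubeadm-config.yaml (sketch)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
    - https://10.0.0.1:2379
    - https://10.0.0.2:2379
    - https://10.0.0.3:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key

# Then initialize the master against the restored etcd cluster:
# kubeadm init --config kubeadm-config.yaml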

Restore Strategy for Internal Etcd Cluster:

CODE: https://gist.github.com/velotiotech/53c5dd7737853b69fcc6e2888f72274a.js
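If the gist does not load, the usual flow on a kubeadm node running stacked (internal) etcd looks roughly like this; the snapshot path and data directory are assumptions:

# Stop the kubelet so the static etcd pod is not restarted mid-restore
systemctl stop kubelet

# Restore the snapshot into a fresh data directory
ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd/snapshot.db \
  --data-dir=/var/lib/etcd-restored

# Swap the restored data into place, keeping the old data as a fallback
mv /var/lib/etcd /var/lib/etcd.old
mv /var/lib/etcd-restored /var/lib/etcd

# Bring the node back up; the static pods, including etcd, come back with it
systemctl start kubelet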

Restore Strategy for External Etcd Cluster:

Restore etcd on the three nodes using the following commands:

CODE: https://gist.github.com/velotiotech/ae59e9c96435c3c644ddf9ef10749661.js
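For reference, the per-node restore commands typically look like the sketch below; the node names, IPs, and cluster token are assumptions, and each node gets its own --name and --initial-advertise-peer-urls:

# On master-0 (repeat on master-1 and master-2 with that node's values)
ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd/snapshot.db \
  --name master-0 \
  --initial-cluster master-0=https://10.0.0.1:2380,master-1=https://10.0.0.2:2380,master-2=https://10.0.0.3:2380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-advertise-peer-urls https://10.0.0.1:2380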

Running the three restore commands gives you three restored folders, one on each node, named master-0.etcd, master-1.etcd, and master-2.etcd.

Now, stop the etcd service on all the nodes, replace each node's existing etcd data folder with its restored folder, and start the etcd service again (a sketch of this swap follows this paragraph). You will then see all the nodes, but after some time you will notice that only the primary master stays Ready while the other two nodes go into the NotReady state. You need to join those two nodes again using the existing ca.crt file (you should have a backup of it).
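The swap step might look like this on each node, assuming a systemd-managed etcd service and the default data directory:

systemctl stop etcd
mv /var/lib/etcd /var/lib/etcd.old       # keep the old data as a fallback
mv /path/to/master-0.etcd /var/lib/etcd  # use the folder restored on this node
systemctl start etcd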

Run the following command on the master node:

CODE: https://gist.github.com/velotiotech/b41f74d33fa9d970226e1c54287183a1.js
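If the gist is unreachable: kubeadm can mint a fresh token and print the matching join command, roughly as below. The IP, token, and hash are placeholders that your own cluster prints for you:

# On the primary master: print a fresh join command
kubeadm token create --print-join-command

# On each of the other two nodes, run the printed command with the extra flag:
kubeadm join 10.0.0.1:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --ignore-preflight-errors=all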

It will give you a kubeadm join command. Add the --ignore-preflight-errors flag and run that command on the other two nodes to bring them back into the Ready state.

Conclusion

One way to deal with master failure is to set up a multi-master Kubernetes cluster, but even that does not let you completely eliminate etcd backup and restore: you may still accidentally destroy data in an HA environment.

Need help with disaster recovery for your Kubernetes Cluster? Connect with the experts at Velotio!

For more insights into Kubernetes disaster recovery, take a look here.


Did you like the blog? If yes, we're sure you'll also like to work with the people who write them - our best-in-class engineering team.

We're looking for talented developers who are passionate about new emerging technologies. If that's you, get in touch with us.

Explore current openings
