The Ultimate Guide to Disaster Recovery for Your Kubernetes Clusters
Kubernetes allows us to run containerized applications at scale without drowning in the details of load balancing. You can ensure high availability for your applications running on Kubernetes by running multiple replicas (pods) of each application. All the complexity of container orchestration is hidden away safely so that you can focus on developing your application instead of deploying it. Learn more about high availability of Kubernetes clusters and how you can use kubeadm for high availability in Kubernetes here.
But using Kubernetes has its own challenges, and getting it up and running takes some real work. If you are not familiar with setting up Kubernetes, you might want to take a look here.
Kubernetes allows us to have zero-downtime deployments, yet service-interrupting events are inevitable and can occur at any time: your network can go down, your latest application push can introduce a critical bug, or, in the rarest case, you might even have to face a natural disaster.
When you are using Kubernetes, sooner or later you need to set up backups. If your cluster ever goes into an unrecoverable state, you will need a backup to return to the previous stable state of the cluster.
Why Backup and Recovery?
There are three reasons to have a backup and recovery mechanism in place for your Kubernetes cluster:
Recovering from disasters: for example, someone accidentally deletes the namespace where your deployments reside.
Replicating the environment: you want to replicate your production environment to a staging environment before a major upgrade.
Migrating the cluster: say you want to migrate your Kubernetes cluster from one environment to another.
What to Back Up?
Now that you know why, let's see exactly what you need to back up. There are two things:
The state of your Kubernetes control plane is stored in etcd, so you need to back up the etcd state to capture all your Kubernetes resources.
If you have stateful workloads (which you will have in the real world), you need a backup of the persistent volumes as well; a volume-snapshot sketch follows this list.
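One common way to back up a persistent volume is a VolumeSnapshot. This is a minimal sketch, assuming your storage is provisioned by a CSI driver with snapshot support; the snapshot class name and the PVC name app-data are illustrative assumptions:

```yaml
# Minimal VolumeSnapshot sketch; requires a CSI driver with snapshot support.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snapshot
spec:
  volumeSnapshotClassName: csi-snapclass   # assumption: your installed snapshot class
  source:
    persistentVolumeClaimName: app-data    # assumption: the PVC you want to protect
```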
How to Back Up?
There are various tools, like Heptio Ark (now Velero) and kube-backup, to back up and restore Kubernetes clusters on cloud providers. But what if you are not using a managed Kubernetes cluster? You might have to get your hands dirty if you are running Kubernetes on bare metal, just like we are.
We are running a three-master Kubernetes cluster with one etcd member on each master, i.e., three etcd members in total. If we lose one master, we can still recover it because the etcd quorum is intact: with three members, quorum is two, so the cluster tolerates a single failure. If we lose two masters, quorum is lost, and a production-grade cluster needs a mechanism to recover from that situation as well.
Want to know how to set up a multi-master Kubernetes cluster? Keep reading!
Taking an etcd Backup
The mechanism for taking an etcd backup differs depending on how the etcd cluster was set up in your Kubernetes environment.
There are two ways to set up an etcd cluster in a Kubernetes environment:
Internal etcd cluster: the etcd cluster runs as containers/pods inside the Kubernetes cluster, and it is the responsibility of Kubernetes to manage those pods.
External etcd cluster: the etcd cluster runs outside the Kubernetes cluster, mostly as Linux services, and its endpoints are provided to the Kubernetes cluster to write to; a kubeadm configuration sketch for this case follows the list.
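For the external case, this is roughly how the etcd endpoints are handed to a kubeadm-provisioned cluster. A minimal sketch; the endpoints and certificate paths are illustrative assumptions:

```yaml
# kubeadm ClusterConfiguration sketch for an external etcd cluster.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    endpoints:                       # assumption: your etcd members' client URLs
    - https://10.0.0.11:2379
    - https://10.0.0.12:2379
    - https://10.0.0.13:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```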
Backup Strategy for an Internal etcd Cluster
To take a backup from inside an etcd pod, we will use the Kubernetes CronJob functionality, which does not require any etcdctl client to be installed on the host.
Following is the definition of a Kubernetes CronJob that takes an etcd backup every minute:
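A minimal sketch of such a CronJob, assuming a kubeadm-provisioned cluster where etcd listens on the node's loopback and keeps its certificates under /etc/kubernetes/pki/etcd; the image tag, the control-plane node label, and the backup path /var/backups/etcd are illustrative assumptions:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: etcd-backup
  namespace: kube-system
spec:
  schedule: "* * * * *"                # every minute, as described above
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          hostNetwork: true            # reach etcd on the node's 127.0.0.1:2379
          nodeSelector:
            node-role.kubernetes.io/control-plane: ""   # assumption: kubeadm master label
          tolerations:
          - key: node-role.kubernetes.io/control-plane
            operator: Exists
            effect: NoSchedule
          containers:
          - name: etcd-backup
            image: registry.k8s.io/etcd:3.5.9-0         # assumption: match your etcd version
            env:
            - name: ETCDCTL_API
              value: "3"
            command: ["/bin/sh", "-c"]
            args:
            - >-
              etcdctl
              --endpoints=https://127.0.0.1:2379
              --cacert=/etc/kubernetes/pki/etcd/ca.crt
              --cert=/etc/kubernetes/pki/etcd/server.crt
              --key=/etc/kubernetes/pki/etcd/server.key
              snapshot save /backup/etcd-snapshot-$(date +%Y-%m-%d_%H-%M-%S).db
            volumeMounts:
            - name: etcd-certs
              mountPath: /etc/kubernetes/pki/etcd
              readOnly: true
            - name: backup
              mountPath: /backup
          restartPolicy: OnFailure
          volumes:
          - name: etcd-certs
            hostPath:
              path: /etc/kubernetes/pki/etcd
              type: Directory
          - name: backup
            hostPath:
              path: /var/backups/etcd                   # assumption: snapshot destination
              type: DirectoryOrCreate
```

When disaster strikes, you restore the snapshot once per master with etcdctl snapshot restore. A sketch of the three restore commands; the snapshot path, member names (master-0, master-1, master-2), and peer IPs are illustrative assumptions:

```bash
# Each command materializes a restored data directory named <member>.etcd
# in the current working directory.
ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd/etcd-snapshot.db \
  --name master-0 \
  --initial-cluster master-0=https://10.0.0.11:2380,master-1=https://10.0.0.12:2380,master-2=https://10.0.0.13:2380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-advertise-peer-urls https://10.0.0.11:2380

ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd/etcd-snapshot.db \
  --name master-1 \
  --initial-cluster master-0=https://10.0.0.11:2380,master-1=https://10.0.0.12:2380,master-2=https://10.0.0.13:2380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-advertise-peer-urls https://10.0.0.12:2380

ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd/etcd-snapshot.db \
  --name master-2 \
  --initial-cluster master-0=https://10.0.0.11:2380,master-1=https://10.0.0.12:2380,master-2=https://10.0.0.13:2380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-advertise-peer-urls https://10.0.0.13:2380
```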
The above three commands will give you three restored folders, one on each node, named master-0.etcd, master-1.etcd, and master-2.etcd.
Now stop the etcd service on all the nodes, replace each node's etcd data folder with its restored folder, and start the etcd service again; a sketch of that swap follows. At first you will see all the nodes, but after some time only one master stays in the Ready state while the other two go NotReady. You need to join those two nodes again using the existing ca.crt file (you should have a backup of it).
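A sketch of the data-directory swap, assuming etcd runs as a systemd service with its data in /var/lib/etcd (both are assumptions; with an internal etcd you would move the static-pod manifest out of /etc/kubernetes/manifests instead):

```bash
# Run on each master, substituting that node's restored folder.
systemctl stop etcd
mv /var/lib/etcd /var/lib/etcd.old     # keep the broken state around, just in case
mv ./master-0.etcd /var/lib/etcd       # this node's restored data directory
systemctl start etcd
```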
Running kubeadm token create --print-join-command on the working master will give you a kubeadm join command; add an --ignore-preflight-errors flag and run that command on the other two nodes to bring them back into the Ready state, as sketched below.
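A sketch of that re-join, with placeholder token, hash, and API server address; ignoring all preflight errors is an assumption here, and you can instead list only the checks that fail (control-plane nodes additionally need the --control-plane flag and uploaded certificates):

```bash
# On the healthy master: print a fresh join command.
kubeadm token create --print-join-command

# On each NotReady node, using the values printed above.
kubeadm join 10.0.0.11:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --ignore-preflight-errors=all
```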
Conclusion
One way to deal with master failures is to set up a multi-master Kubernetes cluster, but even that does not let you completely eliminate Kubernetes etcd backup and restore: it is still possible to accidentally destroy data in an HA environment.