An Innovator’s Guide to Kubernetes Storage Using Ceph
Kubernetes, the popular container orchestration tool, is changing the way applications are developed and deployed. You can specify the resources you need and have them available without worrying about the underlying infrastructure. Kubernetes is well ahead in terms of high availability, scaling, and application management, but storage in Kubernetes is still evolving: support for more storage backends keeps being added, and many of them are already production ready.
People prefer clustered applications for storing data. But what about non-clustered applications? Where do these applications store data so that it stays highly available? With these questions in mind, let’s go through Ceph storage and its integration with Kubernetes.
What is Ceph Storage?
Ceph is open-source, software-defined storage maintained by Red Hat. It is capable of block, object, and file storage. Ceph clusters are designed to run on commodity hardware with the help of an algorithm called CRUSH (Controlled Replication Under Scalable Hashing). This algorithm ensures that data is properly distributed across the cluster and can be located quickly, without any central lookup bottleneck. Replication, thin provisioning, and snapshots are the key features of Ceph storage.
There are other good storage solutions, such as GlusterFS and Swift, but we are going with Ceph for the following reasons:
File, Block, and Object storage in the same wrapper.
Better transfer speed and lower latency
Easily accessible storage that can quickly scale up or down
We are going to integrate two types of Ceph storage with Kubernetes in this blog: Ceph-RBD (block) and CephFS (file).
Deploying a highly available Ceph cluster is fairly straightforward. I am assuming that you are already familiar with setting up a Ceph cluster; if not, refer to the official Ceph deployment documentation.
If you check the status, you should see something like:
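Below is a sketch of a ceph -s check on a healthy three-monitor cluster; the exact fields and layout vary by Ceph release, and the cluster ID and daemon names are placeholders.

```sh
# Run on an admin or monitor node; the output below is illustrative.
sudo ceph -s
#   cluster:
#     id:     <cluster-fsid>
#     health: HEALTH_OK
#
#   services:
#     mon: 3 daemons, quorum mon1,mon2,mon3
#     mgr: mon1(active), standbys: mon2
#     osd: 3 osds: 3 up, 3 in
#
#   data:
#     pools:   1 pools, 128 pgs
#     objects: 0 objects, 0 B
#     usage:   3.0 GiB used, 147 GiB / 150 GiB avail
#     pgs:     128 active+clean
```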
Notice that my Ceph monitor IPs are 10.0.1.118, 10.0.1.227, and 10.0.1.172.
K8s Integration
After setting up the Ceph cluster, we will consume it from Kubernetes. I am assuming that your Kubernetes cluster is up and running. We will be using Ceph-RBD and CephFS as storage in Kubernetes.
Ceph-RBD and Kubernetes
We need a Ceph RBD client so that the Kubernetes cluster can interact with the Ceph cluster. This client is not included in the official kube-controller-manager container, so let’s create an external storage plugin (provisioner) for Ceph.
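Here is a trimmed-down sketch of such an external provisioner, based on the rbd-provisioner from the kubernetes-incubator/external-storage project. The namespace, object names, image tag, and the breadth of the RBAC rules are assumptions; adjust them to your environment and tighten the permissions for production.

```sh
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets", "endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      serviceAccountName: rbd-provisioner
      containers:
        - name: rbd-provisioner
          image: quay.io/external_storage/rbd-provisioner:latest
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd   # must match the "provisioner" field of the storage class
EOF
```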
Check the RBD volume provisioner status and wait until it comes up in the Running state. You should see something like the following:
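A quick way to verify, using the namespace and label assumed above; the pod name suffix is generated:

```sh
kubectl get pods -n kube-system -l app=rbd-provisioner
# NAME                              READY   STATUS    RESTARTS   AGE
# rbd-provisioner-<hash>-<hash>     1/1     Running   0          1m
```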
Once the provisioner is up, it needs the Ceph admin key in order to provision storage. You can run the following command to get the admin key:
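Run this on a Ceph admin or monitor node:

```sh
# Prints the secret key of the client.admin user.
sudo ceph auth get-key client.admin
```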
Let’s create a separate Ceph pool for Kubernetes and a new client key for that pool:
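For example (the pool name kube and the placement-group count are assumptions; size them for your cluster):

```sh
# Create a dedicated pool for Kubernetes volumes.
sudo ceph osd pool create kube 128

# Create a client key that is restricted to that pool.
sudo ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'
```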
Get the auth token for the client we created in the above command, create Kubernetes secrets for both the admin key and the new client.kube key, and then define a storage class that uses the kube pool.
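A sketch of those steps is below. The secret names, namespace, and storage class name are assumptions; the monitor addresses are the ones from my cluster, and the provisioner field must match the PROVISIONER_NAME set above. The commands assume a host that has both the ceph CLI and kubectl configured.

```sh
# Secret with the admin key (used by the provisioner to create RBD images).
kubectl create secret generic ceph-admin-secret \
  --type="kubernetes.io/rbd" \
  --from-literal=key="$(sudo ceph auth get-key client.admin)" \
  --namespace=kube-system

# Secret with the client.kube key (used by kubelets to map RBD images).
kubectl create secret generic ceph-kube-secret \
  --type="kubernetes.io/rbd" \
  --from-literal=key="$(sudo ceph auth get-key client.kube)" \
  --namespace=kube-system

# Storage class backed by the external RBD provisioner.
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 10.0.1.118:6789,10.0.1.227:6789,10.0.1.172:6789
  pool: kube
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  userId: kube
  userSecretName: ceph-kube-secret
  userSecretNamespace: kube-system
  imageFormat: "2"
  imageFeatures: layering
EOF
```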
We are all set now. We can test Ceph-RBD by creating a PVC; the corresponding PV will be created automatically. Let’s create the PVC now:
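A minimal claim against the ceph-rbd storage class defined above (the claim name and size are arbitrary):

```sh
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi
EOF

kubectl get pvc ceph-rbd-claim
# NAME             STATUS   STORAGECLASS   VOLUME       CAPACITY   ACCESS MODES   AGE
# ceph-rbd-claim   Bound    ceph-rbd       pvc-<uuid>   1Gi        RWO            10s
```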
If you check the PVC, you will see that it is Bound to a PV that was created dynamically by the storage class.
Let’s check the persistent volume:
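Something like the following, with a generated name and the default Delete reclaim policy:

```sh
kubectl get pv
# NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   AGE
# pvc-<uuid>   1Gi        RWO            Delete           Bound    default/ceph-rbd-claim   ceph-rbd       1m
```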
So far we have seen how to use block-based storage, i.e. Ceph-RBD, with Kubernetes by creating a dynamic storage provisioner. Now let’s go through the same process for file-system-based storage, i.e. CephFS.
CephFS and Kubernetes
Let’s create the provisioner and storage class for CephFS. First, create a dedicated namespace for CephFS:
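The namespace name is an assumption; any dedicated namespace works:

```sh
kubectl create namespace cephfs
```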
Create the Kubernetes secret using the Ceph admin auth token:
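For example (the secret name is an assumption; the command assumes access to both the ceph CLI and kubectl):

```sh
kubectl create secret generic ceph-admin-secret \
  --from-literal=key="$(sudo ceph auth get-key client.admin)" \
  --namespace=cephfs
```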
Create the cluster role, role binding, and the provisioner itself:
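The sketch below is modelled on the cephfs-provisioner from the same external-storage project. The RBAC rule is deliberately broad to keep the example short, and the image tag, command, flags, and object names are assumptions taken from that project’s example manifests; tighten and adjust for production.

```sh
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: cephfs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cephfs-provisioner
rules:
  # Broad on purpose for brevity: the provisioner manages PVs, watches PVCs and
  # storage classes, records events, and creates per-volume secrets.
  - apiGroups: [""]
    resources: ["persistentvolumes", "persistentvolumeclaims", "secrets", "events", "endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: cephfs
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      serviceAccountName: cephfs-provisioner
      containers:
        - name: cephfs-provisioner
          image: quay.io/external_storage/cephfs-provisioner:latest
          command: ["/usr/local/bin/cephfs-provisioner"]
          args: ["-id=cephfs-provisioner-1"]
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/cephfs   # must match the storage class "provisioner" field
EOF
```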
We are all set now. The CephFS provisioner is created; let’s wait until it gets into the Running state.
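For example:

```sh
kubectl get pods -n cephfs
# NAME                                 READY   STATUS    RESTARTS   AGE
# cephfs-provisioner-<hash>-<hash>     1/1     Running   0          1m
```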
Once the CephFS provisioner is up, create the storage class and a persistent volume claim. The storage class will take care of creating the persistent volume dynamically.
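A sketch of the storage class and claim, assuming the cluster already has a CephFS filesystem (an MDS) configured; the object names are assumptions and the monitors are the ones noted earlier. CephFS volumes can be mounted read-write by many pods, hence ReadWriteMany.

```sh
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 10.0.1.118:6789,10.0.1.227:6789,10.0.1.172:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: cephfs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-claim
  namespace: cephfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: cephfs
  resources:
    requests:
      storage: 1Gi
EOF

kubectl get pvc -n cephfs
# NAME           STATUS   STORAGECLASS   VOLUME       CAPACITY   ACCESS MODES   AGE
# cephfs-claim   Bound    cephfs         pvc-<uuid>   1Gi        RWX            15s
```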
We have seen how to integrate Ceph storage with Kubernetes, covering both Ceph-RBD and CephFS. This approach is especially useful when your application is not a clustered application and you still want its data to be highly available.