Extending Kubernetes APIs with Custom Resource Definitions (CRDs)

Introduction:

A Custom Resource Definition (CRD) is a powerful feature introduced in Kubernetes 1.7 that enables users to add their own custom objects to a Kubernetes cluster and use them like any other native Kubernetes object. In this blog post, we will see how to add a custom resource to a Kubernetes cluster using the command line as well as the Golang client library, and in the process learn how to interact with a Kubernetes cluster programmatically.

What is a custom resource definition (CRD)?

In the Kubernetes API, a resource is an endpoint that stores a collection of API objects of a certain kind. For example, the built-in pods resource contains a collection of Pod objects. The standard Kubernetes distribution ships with many built-in API objects/resources. CRDs come into the picture when we want to introduce our own objects into the Kubernetes cluster to fulfill our requirements. Once we create a CRD in Kubernetes, we can use it like any other native Kubernetes object, thus leveraging all the features of Kubernetes such as its CLI, security, API services, RBAC, etc.

The custom resources we create are also stored in the etcd cluster with proper replication and lifecycle management. CRDs let us use all the functionality provided by a Kubernetes cluster for our custom objects and save us the overhead of implementing it on our own.

How to register a CRD using command line interface (CLI):

Step-1: Create a CRD definition file sslconfig-crd.yaml

Here we create a custom resource definition for an object of kind SslConfig. This object lets us store the SSL configuration information for a domain. As we can see under the validation section, specifying the cert, key, and domain is mandatory for creating objects of this kind; along with these, we can store other information such as the provider of the certificate. The metadata name that we specify must be spec.names.plural + "." + spec.group.

An API group (blog.velotio.com here) is a collection of API objects that are logically related to each other. We also specify a version for our objects (spec.version); if the definition of the object is expected to evolve, it is better to start with an alpha version so that users of the object know that the definition might change later. For the scope we have specified Namespaced; by default, a custom resource is cluster-scoped.
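The YAML file itself is not reproduced in this post, so here is a sketch of what sslconfig-crd.yaml could look like based on the description above; the v1alpha1 version string and the exact validation layout are assumptions:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # must be spec.names.plural + "." + spec.group
  name: sslconfigs.blog.velotio.com
spec:
  group: blog.velotio.com
  version: v1alpha1          # assumed; start with alpha if the definition may evolve
  scope: Namespaced
  names:
    plural: sslconfigs
    singular: sslconfig
    kind: SslConfig
  validation:
    openAPIV3Schema:
      properties:
        spec:
          required:
            - cert
            - key
            - domain
```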


# kubectl create -f sslconfig-crd.yaml
# kubectl get crd
NAME                          AGE
sslconfigs.blog.velotio.com   5s

 

Step-2: Create objects using the definition we created above


# kubectl create -f crd-obj.yaml
# kubectl get sslconfig
NAME                    AGE
sslconfig-velotio.com   12s

Along with the mandatory fields cert, key, and domain, we have also stored the provider (certifying authority) of the certificate.
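The file crd-obj.yaml is likewise not shown; a sketch of what such an object manifest could look like (all field values below, including the provider name, are illustrative):

```yaml
apiVersion: blog.velotio.com/v1alpha1
kind: SslConfig
metadata:
  name: sslconfig-velotio.com
spec:
  domain: velotio.com                 # mandatory
  cert: <base64-encoded certificate>  # mandatory
  key: <base64-encoded private key>   # mandatory
  provider: Lets Encrypt              # optional extra information
```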

How to register a CRD programmatically using client-go

The client-go project provides packages with which we can easily create a Go client and access the Kubernetes cluster. To create a client, we first need to establish a connection with the API server. How we connect to the API server depends on whether our code is running within the cluster itself or outside the cluster (e.g., locally).

If the code is running outside the cluster, then we need to provide either the path of the kubeconfig file or the URL of a Kubernetes proxy server running against the cluster.


kubeconfig := filepath.Join(
    os.Getenv("HOME"), ".kube", "config",
)
config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
if err != nil {
    log.Fatal(err)
}

OR

var (
    proxyURL = flag.String("proxy", "",
        `If specified, it is assumed that a kubectl proxy server is running on the
given URL and a proxy client is created. If it is not given, the in-cluster
Kubernetes setup will be used.`)
)

if *proxyURL != "" {
    config, err = clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
        &clientcmd.ClientConfigLoadingRules{},
        &clientcmd.ConfigOverrides{
            ClusterInfo: clientcmdapi.Cluster{
                Server: *proxyURL,
            },
        }).ClientConfig()
    if err != nil {
        glog.Fatalf("error creating client configuration: %v", err)
    }
}

When the code is to be run as part of the cluster, then we can simply use:

import "k8s.io/client-go/rest"
...
config, err := rest.InClusterConfig()

Once the connection is established, we can use it to create a clientset. For accessing Kubernetes objects, the clientset from the client-go project is generally used, but for CRD-related operations we need to use the clientset from the apiextensions-apiserver project:

import apiextension "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"

kubeClient, err := apiextension.NewForConfig(config)
if err != nil {
    glog.Fatalf("Failed to create client: %v.", err)
}

Now we can use the client to make the API call which will create the CRD for us.

In the createCRD function, we first build the definition of our custom object and then pass it to the Create method, which creates it in our cluster. Just as we did when creating the definition via the CLI, here too we set parameters such as the version, group, and kind.
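The function itself is not shown in this excerpt, so here is a sketch of what a createCRD function could look like against the 1.7-era apiextensions/v1beta1 API; the field values mirror the sslconfigs.blog.velotio.com definition we registered via the CLI, and the exact function signature is an assumption:

```
import (
    apiextensionv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
    apiextension "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// createCRD registers the SslConfig custom resource definition with the
// cluster (sketch; assumes the apiextensions/v1beta1 API of that era).
func createCRD(kubeClient apiextension.Interface) error {
    crd := &apiextensionv1beta1.CustomResourceDefinition{
        ObjectMeta: metav1.ObjectMeta{
            // must be spec.names.plural + "." + spec.group
            Name: "sslconfigs.blog.velotio.com",
        },
        Spec: apiextensionv1beta1.CustomResourceDefinitionSpec{
            Group:   "blog.velotio.com",
            Version: "v1alpha1",
            Scope:   apiextensionv1beta1.NamespaceScoped,
            Names: apiextensionv1beta1.CustomResourceDefinitionNames{
                Plural: "sslconfigs",
                Kind:   "SslConfig",
            },
        },
    }
    _, err := kubeClient.ApiextensionsV1beta1().CustomResourceDefinitions().Create(crd)
    return err
}
```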

Once our definition is ready we can create objects of its type just like we did earlier using the CLI. First we need to define our object.

Kubernetes API conventions suggest that each object must have two nested object fields that govern the object's configuration: the object spec and the object status. Objects must also have metadata associated with them. The custom objects that we define here comply with these standards. It is also recommended to create a list type for every type, so we have also created an SslConfigList struct.

Now we need to write a function which will create a custom client which is aware of the new resource that we have created.

Building the custom client library:

Once we have registered our custom resource definition with the Kubernetes cluster, we can create objects of its type using the Kubernetes CLI as we did earlier. But to build controllers for these objects, or to develop custom functionality around them, we need a client library through which we can access them from the Go API. For native Kubernetes objects, this type of library is provided for each object.
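One common pattern (a sketch, not the only way) is to build a rest.RESTClient configured for our group/version and wrap it with typed helper methods for the SslConfig type described earlier; the constructor and method names below are illustrative:

```
import (
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/client-go/rest"
)

// SslConfigClient is a minimal typed client for SslConfig objects
// (sketch; method names are illustrative).
type SslConfigClient struct {
    restClient rest.Interface
    namespace  string
}

func NewSslConfigClient(cfg *rest.Config, namespace string) (*SslConfigClient, error) {
    config := *cfg
    config.GroupVersion = &schema.GroupVersion{Group: "blog.velotio.com", Version: "v1alpha1"}
    config.APIPath = "/apis"
    // NOTE: a real client must also register SslConfig in a runtime.Scheme
    // and set a matching codec/negotiated serializer on the config.
    rc, err := rest.RESTClientFor(&config)
    if err != nil {
        return nil, err
    }
    return &SslConfigClient{restClient: rc, namespace: namespace}, nil
}

// Create POSTs a new SslConfig object to the sslconfigs resource.
func (c *SslConfigClient) Create(obj *SslConfig) (*SslConfig, error) {
    result := &SslConfig{}
    err := c.restClient.Post().
        Namespace(c.namespace).
        Resource("sslconfigs").
        Body(obj).
        Do().
        Into(result)
    return result, err
}

// Get fetches a single SslConfig object by name.
func (c *SslConfigClient) Get(name string) (*SslConfig, error) {
    result := &SslConfig{}
    err := c.restClient.Get().
        Namespace(c.namespace).
        Resource("sslconfigs").
        Name(name).
        Do().
        Into(result)
    return result, err
}
```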

We can add more methods, like watch, update status, etc.; their implementation will be similar to the methods we have defined above. To see the methods available for the various Kubernetes objects, such as Pod and Node, we can refer to the v1 package.

Putting it all together:

Now, in our main function, we will bring all the pieces together.
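A sketch of what such a main function could look like, tying the steps together: build a client configuration, register the CRD, then create an object of its type. The createCRD function and the custom SslConfig client are the ones described in the text; their exact names and signatures here (createCRD, NewSslConfigClient) are assumptions:

```
func main() {
    // Out-of-cluster configuration; an in-cluster fallback via
    // rest.InClusterConfig() could be added here as discussed above.
    kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        log.Fatal(err)
    }

    // CRD operations use the apiextensions clientset.
    apiClient, err := apiextension.NewForConfig(config)
    if err != nil {
        log.Fatalf("failed to create apiextensions client: %v", err)
    }

    // Register sslconfigs.blog.velotio.com (signature assumed).
    if err := createCRD(apiClient); err != nil {
        log.Fatalf("failed to create CRD: %v", err)
    }

    // Use the custom client to create an object of the new kind.
    sslClient, err := NewSslConfigClient(config, "default")
    if err != nil {
        log.Fatal(err)
    }
    obj := &SslConfig{ /* metadata plus spec with cert, key, and domain */ }
    if _, err := sslClient.Create(obj); err != nil {
        log.Fatal(err)
    }
}
```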

If we run our code now, our custom resource definition will be created in the Kubernetes cluster, along with an object of its type, just as with the CLI. The Docker image akash125/crdblog is built using the code discussed above; it can be pulled directly from Docker Hub and run in a Kubernetes cluster. After the image runs successfully, the CRD definition discussed above will be created in the cluster along with an object of its type. We can verify this using the CLI the way we did earlier, and we can also check the logs of the pod running the Docker image. The complete code is available here.

Conclusion and future work:

We learned how to create a custom resource definition and objects using the Kubernetes command line interface as well as the Golang client. We also learned how to access a Kubernetes cluster programmatically, which lets us build some really cool stuff on Kubernetes; for example, we can now create custom controllers for our resources that continuously watch the cluster for the various lifecycle events of our objects and take the desired action accordingly. To read more about CRDs, refer to the following links:



Akash is an AWS certified developer with expertise in infrastructure automation, containerized deployments, and microservice design patterns. He is also a gopher and has built custom Kubernetes controllers. In his free time, he likes to read books.