Continuous Integration & Delivery (CI/CD) for Kubernetes Using CircleCI & Helm


Kubernetes is being adopted rapidly across the software industry and is becoming the preferred option for deploying and managing containerized applications. Once we have a fully functional Kubernetes cluster, we need an automated process to deploy our applications on it. In this blog post, we will create a fully automated “commit to deploy” pipeline for Kubernetes using CircleCI and Helm.

What is CircleCI?

CircleCI is a fully managed SaaS offering that allows us to build, test, or deploy our code on every check-in. To get started with CircleCI, we log into its web console with our GitHub or Bitbucket credentials, add a project for the repository we want to build, and then add the CircleCI config file to our repository. The CircleCI config file is a YAML file that lists the steps we want to execute every time code is pushed to that repository.

Some salient features of CircleCI are:

  1. Little or no operational overhead, as the infrastructure is managed completely by CircleCI.

  2. User authentication is done via GitHub or Bitbucket, so user management is quite simple.

  3. It automatically sends build status notifications to the GitHub/Bitbucket email addresses of the users following the project on CircleCI.

  4. The UI is quite simple and gives a holistic view of builds.

  5. It can be integrated with Slack, HipChat, Jira, etc.

What is Helm?

Helm is a chart manager, where a chart is a package of Kubernetes resources. Helm allows us to bundle related Kubernetes objects into charts and treat them as a single unit of deployment, referred to as a release. For example, suppose you have an application app1 that you want to run on Kubernetes. For app1 you create multiple Kubernetes resources like a deployment, a service, an ingress, a horizontal pod autoscaler, etc. Without Helm, while deploying the application you need to create all those Kubernetes resources separately by applying their manifest files. Helm lets us group all those files into one chart (a Helm chart), and then we just need to deploy the chart. This also makes deleting and upgrading the resources quite simple.

Some other benefits of Helm are:

  1. It makes the deployment highly configurable. Thus, just by changing the parameters, we can use the same chart for deploying to multiple environments like staging/production, or to multiple cloud providers.

  2. We can rollback to a previous release with a single helm command.

  3. It makes managing and sharing Kubernetes-specific applications much simpler.

Note: Helm is composed of two components: the Helm client and the Tiller server. Tiller is the component that runs inside the cluster as a deployment and serves the requests made by the Helm client. Tiller has potential security vulnerabilities, so we will use tillerless Helm in our pipeline, which runs Tiller only when we need it.

Building the Pipeline

[Image: CI/CD pipeline for Kubernetes]


We will create the pipeline for a Golang application. The pipeline will first build the binary, create a Docker image from it, push the image to ECR, and then deploy it on the Kubernetes cluster using its Helm chart.

We will use a simple app which just exposes a `hello` endpoint and returns a hello world message:
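A minimal sketch of such an app might look like the following; the handler path and port are assumptions, and the JSON payload matches the `{"Msg":"Hello World"}` response shown later in the post:

```go
package main

import (
	"log"
	"net/http"
)

// helloMessage is the JSON payload returned by the /hello endpoint.
func helloMessage() string {
	return `{"Msg":"Hello World"}`
}

// helloHandler writes the hello world message as JSON.
func helloHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	w.Write([]byte(helloMessage()))
}

func main() {
	// Expose the hello endpoint on port 8080 (assumed port).
	http.HandleFunc("/hello", helloHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```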

We will create a Docker image for the hello app using the following Dockerfile:
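Since the pipeline builds the binary before building the image, the Dockerfile only needs to copy the prebuilt binary into a minimal base image. A hypothetical sketch (base image, binary name, and port are assumptions):

```dockerfile
# Minimal runtime image; the helloapp binary is built in CI and copied in.
FROM alpine:3.9
COPY helloapp /helloapp
EXPOSE 8080
ENTRYPOINT ["/helloapp"]
```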

Creating Helm Chart:

Now we need to create the Helm chart for the hello app.

First, we create the Kubernetes manifest files. We will create a deployment and a service file:
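Hedged sketches of the two manifest files are shown below; the label names and the value keys (`replicaCount`, `image.*`, `service.*`) are assumptions and must match the chart's values.yaml:

```yaml
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: {{ .Values.service.internalPort }}
---
# templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Chart.Name }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.externalPort }}
      targetPort: {{ .Values.service.internalPort }}
  selector:
    app: {{ .Chart.Name }}
```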

In the above files, you may have noticed that we have used the .Values object. All the values that we specify in the values.yaml file of our Helm chart can be accessed through the .Values object inside the templates.

Let’s create the helm chart now:

helm create helloapp

The above command will create the Helm chart folder structure for us:

      |- .helmignore   # Contains patterns to ignore when packaging Helm charts.
      |- Chart.yaml    # Information about your chart
      |- values.yaml   # The default values for your templates
      |- charts/       # Charts that this chart depends on
      |- templates/    # The template files

We can remove the charts/ folder inside our helloapp chart, as our chart won’t have any sub-charts. Now we need to move our Kubernetes manifest files to the templates folder and update our values.yaml and Chart.yaml.

Our values.yaml looks like:
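A hypothetical values.yaml consistent with the discussion below (the ECR repository placeholder and port numbers are assumptions; the service type and pull policy match the text):

```yaml
replicaCount: 1

image:
  repository: <account-id>.dkr.ecr.<region>.amazonaws.com/helloapp
  tag: latest
  pullPolicy: Always

service:
  type: LoadBalancer
  externalPort: 80
  internalPort: 8080
```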

This allows us to make our deployment more configurable. For example, here we have set our service type to LoadBalancer in values.yaml, but if we want to change it to NodePort we just need to set it while installing the chart (--set service.type=NodePort). Similarly, we have set the image pull policy to Always, which is fine for a development/staging environment, but when we deploy to production we may want to set it to IfNotPresent. In our chart, we need to identify the parameters/values which may change from one environment to another and make them configurable. This allows us to be flexible with our deployment and reuse the same chart.

Finally, we need to update the Chart.yaml file. This file mostly contains metadata about the chart, like the name, version, maintainer, etc., where name and version are the two mandatory fields.
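A minimal Chart.yaml sketch (the description and version are assumptions; only name and version are mandatory):

```yaml
apiVersion: v1
name: helloapp
version: 0.1.0
description: A Helm chart for the hello world Go application
```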

Now that our Helm chart is ready, we can start with the pipeline. We need to create a folder named .circleci in the root of our repository and create a file named config.yml in it. In our config.yml we have defined two jobs: build&pushImage and deploy.

Configure the pipeline:

  1. We set the working directory for our job. We set it on the GOPATH so that we don’t need to do anything additional.

  2. We set the Docker image inside which we want the job to run. As our app is built using Golang, we use an image which already has Golang installed.

  3. This step checks out our repository into the working directory.

  4. In this step, we build the binary.

  5. Here we set up Docker with the help of the setup_remote_docker key provided by CircleCI.

  6. In this step, we create the tag we will use while building the image. We use the app version available in the VERSION file and append the $CIRCLE_BUILD_NUM value to it, separated by a dash (`-`).

  7. Here we build and tag the image.

  8. Install the AWS CLI to interact with ECR later.

  9. Here we log into ECR.

  10. We tag the image built in step 7 with the ECR repository name.

  11. Finally, we push the image to ECR.
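The steps above might be sketched in config.yml as follows; the image versions, repository path, and run-step details are assumptions, while the environment variables ($AWS_REGION, $HELLOAPP_ECR_REPO, $CIRCLE_BUILD_NUM) are the ones discussed in this post:

```yaml
jobs:
  build&pushImage:
    working_directory: /go/src/github.com/<org>/helloapp   # step 1
    docker:
      - image: circleci/golang:1.12                        # step 2
    steps:
      - checkout                                           # step 3
      - run:
          name: Build binary                               # step 4
          command: CGO_ENABLED=0 go build -o helloapp .
      - setup_remote_docker                                # step 5
      - run:
          name: Create image tag                           # step 6
          command: echo "export TAG=$(cat VERSION)-$CIRCLE_BUILD_NUM" >> $BASH_ENV
      - run:
          name: Build image                                # step 7
          command: docker build -t helloapp:$TAG .
      - run:
          name: Install AWS CLI                            # step 8
          command: sudo pip install awscli
      - run:
          name: Log in to ECR                              # step 9
          command: $(aws ecr get-login --no-include-email --region $AWS_REGION)
      - run:
          name: Tag and push image                         # steps 10 and 11
          command: |
            docker tag helloapp:$TAG $HELLOAPP_ECR_REPO:$TAG
            docker push $HELLOAPP_ECR_REPO:$TAG
```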

Now we will deploy our Helm chart. For this, we have a separate job: deploy.

  1. Set the Docker image inside which we want to execute the job.

  2. Check out the code using the `checkout` key.

  3. Install the AWS CLI.

  4. Set the value of the tag just like we did in the build&pushImage job. Note that here we use the CIRCLE_PREVIOUS_BUILD_NUM variable, which gives us the build number of the build&pushImage job and ensures that the tag values are the same.

  5. Download kubectl and make it executable.

  6. Install aws-iam-authenticator. This is required because our k8s cluster is on EKS.

  7. Here we install the latest version of the AWS CLI; EKS is a relatively new service from AWS, and older versions of the AWS CLI don’t support it.

  8. Here we fetch the kubeconfig file. This step will vary depending upon where the k8s cluster has been set up. As our cluster is on EKS, we get the kubeconfig file via the AWS CLI. Similarly, if your cluster is on GKE, you need to configure gcloud and use the command `gcloud container clusters get-credentials <cluster-name> --zone=<zone-name>`. We could also keep the kubeconfig file on some other secure storage system and fetch it from there.

  9. Download Helm and make it executable.

  10. Initialize Helm. Note that we initialize Helm in client-only mode so that it doesn’t start the Tiller server.

  11. Download the tillerless Helm plugin.

  12. Execute the shell script and pass it the TAG value from step 4.
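The deploy job might be sketched as follows; the tool versions, download URLs, and base image are assumptions chosen to match the era of Helm v2 and EKS described in the post:

```yaml
  deploy:
    docker:
      - image: circleci/python:3.7                         # step 1
    steps:
      - checkout                                           # step 2
      - run:
          name: Install AWS CLI                            # steps 3 and 7
          command: sudo pip install --upgrade awscli
      - run:
          name: Compute image tag                          # step 4
          command: echo "export TAG=$(cat VERSION)-$CIRCLE_PREVIOUS_BUILD_NUM" >> $BASH_ENV
      - run:
          name: Install kubectl                            # step 5
          command: |
            curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl
            chmod +x kubectl && sudo mv kubectl /usr/local/bin/
      - run:
          name: Install aws-iam-authenticator              # step 6
          command: |
            curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/aws-iam-authenticator
            chmod +x aws-iam-authenticator && sudo mv aws-iam-authenticator /usr/local/bin/
      - run:
          name: Fetch kubeconfig                           # step 8
          command: aws eks update-kubeconfig --name $EKS_CLUSTER_NAME --region $AWS_REGION
      - run:
          name: Install Helm                               # step 9
          command: |
            curl -LO https://get.helm.sh/helm-v2.14.0-linux-amd64.tar.gz
            tar -xzf helm-v2.14.0-linux-amd64.tar.gz
            sudo mv linux-amd64/helm /usr/local/bin/helm
      - run: helm init --client-only                       # step 10
      - run: helm plugin install https://github.com/rimusz/helm-tiller   # step 11
      - run: ./deploy.sh $TAG                              # step 12
```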

In the script, we first start Tiller. Then we check whether the release is already present: if it is, we upgrade it; otherwise, we make a new release. Here we override the value of the image tag in the chart by setting it to the tag of the newly built image. Finally, we stop the Tiller server.
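A hedged sketch of such a script, assuming the tillerless Helm plugin's `helm tiller start-ci`/`helm tiller stop` commands and the hypothetical release name helloapp:

```shell
#!/bin/bash
# deploy.sh — release and chart names are assumptions.
set -e
TAG=$1

# Start a local Tiller instance (tillerless helm plugin) and point helm at it.
helm tiller start-ci
export HELM_HOST=127.0.0.1:44134

# Upgrade the release if it already exists, otherwise install it,
# overriding the image tag with the newly built one.
if helm ls | grep -q helloapp; then
    helm upgrade helloapp ./helloapp --set image.tag="$TAG"
else
    helm install ./helloapp --name helloapp --set image.tag="$TAG"
fi

# Stop the local Tiller instance.
helm tiller stop
```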

The complete CircleCI config.yml file looks like:

At the end of the file, we see the workflows. Workflows control the order in which the jobs specified in the file are executed and establish dependencies and conditions between jobs. For example, we may want our deploy job to trigger only after our build job is complete, so we added a dependency between them. Similarly, if we want to exclude jobs from running on some particular branch, we can specify those kinds of conditions as well.
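A minimal workflows stanza expressing this dependency might look like the following (the workflow name is an assumption; the job names are the ones used in this post):

```yaml
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build&pushImage
      - deploy:
          requires:
            - build&pushImage
```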

We have used a few environment variables in our pipeline configuration; some of them were created by us and some are made available by CircleCI. We created the AWS_REGION, HELLOAPP_ECR_REPO, EKS_CLUSTER_NAME, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY variables. These variables are set via the CircleCI web console by going to the project’s settings. The other variables we used are made available by CircleCI as part of its environment setup process. The complete list of environment variables set by CircleCI can be found here.

Verify the working of the pipeline:

Once everything is set up properly, our application will be deployed on the k8s cluster and should be available for access. Get the external IP of the helloapp service and make a curl request to the hello endpoint:

akash@EMPID17037:~$ curl <EXTERNAL-IP>/hello && printf "\n"

{"Msg":"Hello World"}

Now update the code, change the message “Hello World” to “Hello World Returns”, and push your code. It will take a few minutes for the pipeline to complete execution, and once it is complete, make the curl request again to see the changes getting reflected.

akash@EMPID17037:~$ curl <EXTERNAL-IP>/hello && printf "\n"

{"Msg":"Hello World Returns"}

Also, verify that a new tag is created for the helloapp Docker image on ECR.


In this blog post, we explored how to set up a CI/CD pipeline for Kubernetes and got basic exposure to CircleCI and Helm. Although Helm is not absolutely necessary for building a pipeline, it has lots of benefits and is widely used across the industry. We can extend the pipeline to handle cases where we have multiple environments like dev, staging, and production, and make the pipeline deploy the application to any of them depending upon some conditions. We can also add more jobs, like integration tests. All the code used in this blog post is available here.

Related Reads:

  1. Continuous Deployment with Azure Kubernetes Service, Azure Container Registry & Jenkins

  2. Know Everything About Spinnaker & How to Deploy Using Kubernetes Engine

About the Author


Akash is an AWS certified developer and Kubernetes expert. He has deep expertise in infrastructure automation, containerized deployments, and micro-service design patterns. He is also a gopher and has built custom Kubernetes controllers. In his free time, he likes to read books.