Prow + Kubernetes - A Perfect Combination To Execute CI/CD At Scale
Intro
Kubernetes is currently the de facto standard for deploying workloads in the cloud. It's well suited for companies and vendors that need self-healing, high availability, cloud-agnostic portability, and easy extensibility.
Now, on another front, a challenge has emerged in the CI/CD domain. Since teams are using Kubernetes as the underlying orchestrator, they need a robust CI/CD tool that is itself Kubernetes-native.
Enter Prow
Prow complements the Kubernetes family in the realm of automation and CI/CD.
In fact, it is arguably the project that best exemplifies why and how Kubernetes is such a superb platform for executing CI/CD at scale.
Prow (meaning: the portion of a ship's bow that rides above the water) is a Kubernetes-native CI/CD system, and over the past few years it has been used by many projects, such as Kyma, Istio, Kubeflow, and OpenShift.
Where did it come from?
Kubernetes is one of the largest and most successful open-source projects on GitHub. At the time of Prow's conception, the Kubernetes community was struggling to keep its head above water in matters of CI/CD. Their needs included executing more than 10,000 CI/CD jobs per day, spanning 100+ repositories across various GitHub organizations, and other automation stacks simply could not handle everything at that scale.
So, the Kubernetes Testing SIG created their own tools to complement Prow. Because Prow currently resides under the Kubernetes test-infra project, one might underestimate its true prowess. I would personally like to see Prow receive a dedicated repository, out from under the test-infra umbrella.
What is Prow?
Prow is not hard to understand, but it is vast in a subtle way. It is designed and built as a distributed microservice architecture native to Kubernetes.
It has many components that integrate with one another (Plank, Hook, etc.) and a number of standalone, plug-and-play ones (Trigger, config-updater, etc.).
For the context of this blog, I will not be covering Prow’s entire architecture, but feel free to dive into it on your own later.
Just to name the main building blocks for Prow:
Hook - acts as an API gateway that intercepts all webhook events from GitHub; it then creates a ProwJob custom resource, reading the job configuration and calling any relevant plugins.
Plank - the ProwJob controller; after Hook creates a ProwJob, Plank processes it and creates a Kubernetes pod to run the tests.
Deck - serves as the UI showing jobs that ran in the past or are currently running.
Horologium - the component that handles periodic jobs.
Sinker - responsible for cleaning up old jobs and pods from the cluster.
More can be found here: Prow Architecture. Note that this link is not official Kubernetes documentation but comes from another great open-source project that uses Prow extensively day in, day out: Kyma.
This is how Prow can be pictured:
Here is a list of things Prow can do and why it was conceived in the first place.
GitHub automation on a wide range:
- ChatOps via slash commands like "/foo"
- Fine-tuned policies and permission management in GitHub via OWNERS files
- tide - PR/merge automation
- ghProxy - a GitHub API request cache, used to avoid hitting API rate limits
- label plugin - label management
- branchprotector - branch protection configuration
- releasenote - release note management
And many more.
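Enabling plugins like these is a matter of listing them per repository in Prow's plugins.yaml. A minimal sketch, assuming a hypothetical repository "my-org/my-repo":

```yaml
# "my-org/my-repo" is a placeholder; substitute your own org/repo.
plugins:
  my-org/my-repo:
    - trigger   # ChatOps commands like /test and /retest
    - label     # label management via /kind, /area, /priority
    - lgtm      # /lgtm from reviewers listed in OWNERS files
    - approve   # /approve from approvers listed in OWNERS files
```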
Possible Jobs in Prow
Here, a job means any "task that is executed on a trigger." This trigger can be anything from a GitHub commit to a new PR or a periodic cron schedule. Possible job types in Prow include:
Presubmit - triggered when a new GitHub PR is created or updated.
Postsubmit - triggered when there is a new commit.
Periodic - triggered on a cron-like schedule.
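In Prow's config.yaml, these three types map to top-level keys. A minimal sketch, where the repo name, images, and commands are placeholders rather than real jobs:

```yaml
# Placeholder repo and commands; illustrates the three job types only.
presubmits:
  my-org/my-repo:
    - name: unit-tests          # runs on every new or updated PR
      always_run: true
      spec:
        containers:
          - image: golang:1.17
            command: ["make", "test"]
postsubmits:
  my-org/my-repo:
    - name: build-image         # runs on every new commit to the repo
      spec:
        containers:
          - image: golang:1.17
            command: ["make", "build"]
periodics:
  - name: nightly-e2e           # runs on a fixed interval or cron schedule
    interval: 24h
    spec:
      containers:
        - image: golang:1.17
          command: ["make", "e2e"]
```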
Possible states for a job
triggered - a new ProwJob custom resource is created, reading the job configs
pending - a pod is created in response to the ProwJob to run the scripts/tests; the ProwJob is marked pending while the pod is being created and running
success - if the pod succeeds, the ProwJob status changes to success
failure - if the pod fails, the ProwJob status is marked failure
aborted - when a job is running and the same job is retriggered, the first ProwJob execution is aborted (its status changes to aborted) and the new one is marked pending
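The description below refers to a sample presubmit job config. As a hedged sketch only, such a config might look like the following, where the job name, image tag, and verify command are my assumptions, not the actual file:

```yaml
# Hypothetical reconstruction; job name, image tag, and command are assumptions.
presubmits:
  kubernetes/community:
    - name: pull-community-verify   # assumed job name
      always_run: true
      decorate: true                # Prow's pod utilities clone the repo into the pod
      branches:
        - master
      spec:
        containers:
          - image: golang:1.17      # the post only says a "golang" image
            command: ["make", "verify"]  # its exit code decides success/failure
```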
Here, this job is of the "presubmit" type, meaning it will be executed when a PR is created against the "master" branch of the "kubernetes/community" repo.
As shown in the spec, a pod will be created from the "golang" image, the repo will be cloned into it, and the listed command will be executed when the container starts.
The exit status of that command decides whether the pod succeeded or failed, which in turn decides whether the Prow job completed successfully.
More job configs used by Kubernetes itself can be found here - Jobs
Getting a minimal Prow cluster up and running on your local system in minutes
Pre-reqs:
Knowledge of Kubernetes
Knowledge of Google Cloud and IAM
For the context of this blog, I have created a sample GitHub repo containing all the basic manifests and config files. Basic CI has also been configured for this repo. Feel free to clone/fork it and use it as a getting-started guide.
Let’s look at the directory structure for the repo:
6. To expose a webhook from the GitHub repo and point it at your local machine, install and use Ultrahook. This will give you a publicly accessible endpoint. In my case, the result looked like this: http://github.sanster23.ultrahook.com.
7. Create a webhook in your repo so that all events can be published to Hook via the public URL above:
Set the webhook URL to the public URL from Step 6
Set the content type to application/json
Set the webhook secret to the same hmac token secret created in Step 2
Check the "Send me everything" box
8. Create a new PR and see the magic.
9. The Prow dashboard will be accessible at http://<MINIKUBE_IP>:<DECK_NODE_PORT>
MINIKUBE_IP: 192.168.99.100 (run "minikube ip")
DECK_NODE_PORT: 32710 (run "kubectl get svc deck")
I will leave you with an official reference for the Prow dashboard:
What’s Next
The above is an effort to give you a taste of what Prow can do and how easy it is to set up for infrastructure of any scale and a project of any complexity.
---
P.S. - Content on Prow is scarce, making it a bit unexplored in certain ways, but I found a helpful channel on the Kubernetes Slack: #prow. Hopefully, this helps you explore the uncharted waters of Kubernetes-native CI/CD.