Taking Amazon's Elastic Kubernetes Service for a spin

Introduction:

With the introduction of Elastic Kubernetes Service (EKS) at AWS re:Invent last year, AWS finally threw its hat into the booming space of managed Kubernetes services. In this blog post, we will learn the basic concepts of EKS, launch an EKS cluster and deploy a multi-tier application on it.

What is Elastic Kubernetes Service (EKS)?

Kubernetes works on a master-worker architecture, where the master is also referred to as the control plane. If the master goes down, it brings the entire cluster down with it, so ensuring high availability of the master is absolutely critical: it is a single point of failure. Keeping the master highly available and managing all the worker nodes alongside it is a cumbersome task in itself, so most organisations prefer a managed Kubernetes cluster, letting them focus on the most important task, running their applications, rather than managing the cluster. Other cloud providers like Google Cloud and Azure already had managed Kubernetes services, named GKE and AKS respectively. With EKS, Amazon has now rolled out its own managed Kubernetes offering to provide a seamless way to run Kubernetes workloads.

Key EKS concepts:

EKS takes full advantage of the fact that it runs on AWS: instead of building Kubernetes-specific features from scratch, it reuses and plugs in existing AWS services to achieve Kubernetes-specific functionality. Here is a brief overview:

IAM integration: Amazon EKS integrates IAM authentication with Kubernetes RBAC (the role-based access control system native to Kubernetes) with the help of Heptio Authenticator, a tool that uses AWS IAM credentials to authenticate to a Kubernetes cluster. Here we can directly attach an RBAC role to an IAM entity, which saves the pain of managing another set of credentials at the cluster level.
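
For instance, the authenticator turns your current IAM credentials into a short-lived token that kubectl sends with every request. A quick way to see it in action (assuming the authenticator binary is installed, which we do later in this post, and the cluster is named eks-blog-cluster):

# Print a bearer token derived from your IAM credentials; kubectl
# normally invokes this for you via the 'exec' section of the kubeconfig.
aws-iam-authenticator token -i eks-blog-cluster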

Container Network Interface: AWS has developed an open-source CNI plugin that takes advantage of the fact that multiple network interfaces can be attached to a single EC2 instance, and that these interfaces can have multiple secondary private IPs associated with them. These secondary IPs are used to give pods running on EKS real IP addresses from the VPC CIDR pool. This improves latency for inter-pod communication, as the traffic flows without any overlay.

ELB Support: We can use any of the AWS ELB offerings (Classic, Network, Application) to route traffic to services running on the worker nodes.

Auto scaling: The number of worker nodes in the cluster can grow and shrink using the EC2 Auto Scaling service.
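
For instance, once the worker nodes belong to an Auto Scaling group (we create one later in this post), resizing the cluster is a single CLI call; the group name below is a placeholder:

# Grow the worker fleet to 4 nodes (shrinking works the same way);
# 'eks-blog-workers' stands in for your node group's ASG name.
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name eks-blog-workers \
    --desired-capacity 4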

Route 53: With the help of the ExternalDNS project and AWS Route 53, we can manage the DNS entries for the load balancers that get created when we create an ingress object or a Service of type LoadBalancer in our EKS cluster. This way the DNS names are always in sync with the load balancers and we don't have to give them separate attention.
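
As a sketch, assuming ExternalDNS is already deployed in the cluster and manages a Route 53 hosted zone for example.com, a single annotation on a Service is enough for the corresponding record to be created and kept in sync:

apiVersion: v1
kind: Service
metadata:
  name: testapp
  annotations:
    # ExternalDNS watches for this annotation and creates/updates a
    # Route 53 record pointing at the ELB provisioned for this Service.
    external-dns.alpha.kubernetes.io/hostname: books.example.com
spec:
  type: LoadBalancer
  selector:
    app: testapp
  ports:
    - port: 80
      targetPort: 8080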

Shared responsibility for the cluster: The responsibilities for an EKS cluster are shared between AWS and the customer. AWS takes care of the most critical part, managing the control plane (the API server and etcd database), while customers manage the worker nodes. Amazon EKS automatically runs Kubernetes with three masters across three availability zones to protect against a single point of failure; control plane nodes are monitored and replaced if they fail, and are patched and updated automatically. This ensures high availability of the cluster and makes it extremely simple to migrate existing workloads to EKS.

Prerequisites for launching an EKS cluster:

1.  IAM role to be assumed by the cluster: Create an IAM role that allows EKS to manage clusters on your behalf. Choose EKS as the service that will assume this role, and attach the AWS managed policies ‘AmazonEKSClusterPolicy’ and ‘AmazonEKSServicePolicy’ to it.
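
The same role can also be created from the CLI; a minimal sketch (the role name and file name are placeholders):

# trust-policy.json: allow the EKS service to assume this role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "eks.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role --role-name eks-service-role \
    --assume-role-policy-document file://trust-policy.json

# Attach the two AWS managed policies EKS needs
aws iam attach-role-policy --role-name eks-service-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy --role-name eks-service-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy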

2.  VPC for the cluster: We need to create the VPC where our cluster is going to reside, with subnets, internet gateways and other components configured. We can use an existing VPC if we wish, create one using the CloudFormation script provided by AWS here, or use the Terraform script available here. The scripts take the CIDR block of the VPC and CIDR blocks for three subnets as arguments.
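
If you take the CloudFormation route, the launch looks roughly like this; the template URL is the EKS VPC sample template from the AWS documentation at the time of writing, and the CIDR values are placeholders for your own:

aws cloudformation create-stack --stack-name eks-blog-vpc \
    --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-08-30/amazon-eks-vpc-sample.yaml \
    --parameters ParameterKey=VpcBlock,ParameterValue=10.0.0.0/16 \
                 ParameterKey=Subnet01Block,ParameterValue=10.0.1.0/24 \
                 ParameterKey=Subnet02Block,ParameterValue=10.0.2.0/24 \
                 ParameterKey=Subnet03Block,ParameterValue=10.0.3.0/24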

Launching an EKS cluster:

1.  Using the web console: With the prerequisites in place, we can go to the EKS console and launch a cluster. When launching, we need to provide a name for the EKS cluster, choose the Kubernetes version to use, provide the IAM role we created in step one, and choose a VPC. Once we choose a VPC, we also need to select the subnets where we want our worker nodes to be launched (by default, all the subnets in the VPC are selected). Finally, we need to provide a security group, which is applied to the elastic network interfaces (ENIs) that EKS creates to allow the control plane to communicate with the worker nodes.

NOTE: A couple of things to note here: the subnets must be in at least two different availability zones, and the security group we provide is later updated when we create the worker node group, so it is better not to use this security group with any other entity, or at least to be completely sure of the changes happening to it.


2. Using the AWS CLI:

aws eks create-cluster --name eks-blog-cluster --role-arn arn:aws:iam::XXXXXXXXXXXX:role/eks-service-role --resources-vpc-config subnetIds=subnet-0b8da2094908e1b23,subnet-01a46af43b2c5e16c,securityGroupIds=sg-03fa0c02886c183d4

{
    "cluster": {
        "status": "CREATING",
        "name": "eks-blog-cluster",
        "certificateAuthority": {},
        "roleArn": "arn:aws:iam::XXXXXXXXXXXX:role/eks-service-role",
        "resourcesVpcConfig": {
            "subnetIds": [
                "subnet-0b8da2094908e1b23",
                "subnet-01a46af43b2c5e16c"
            ],
            "vpcId": "vpc-0364b5ed9f85e7ce1",
            "securityGroupIds": [
                "sg-03fa0c02886c183d4"
            ]
        },
        "version": "1.10",
        "arn": "arn:aws:eks:us-east-1:XXXXXXXXXXXX:cluster/eks-blog-cluster",
        "createdAt": 1535269577.147
    }
}

In the response, we see that the cluster is in the CREATING state. It will take a few minutes before it becomes available. We can check the status using the command below:

aws eks describe-cluster --name=eks-blog-cluster
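
If you would rather script the wait, --query can pull out just the status field, and recent versions of the CLI also ship a waiter:

# Print just the status: "CREATING" until the control plane is ready
aws eks describe-cluster --name eks-blog-cluster \
    --query cluster.status --output text

# Block until the cluster reaches the ACTIVE state
aws eks wait cluster-active --name eks-blog-cluster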

Configure kubectl for EKS:

We know that in Kubernetes we interact with the control plane by making requests to the API server, most commonly via the kubectl command-line utility. As our cluster is ready, we now need to install kubectl.

1.  Install the kubectl binary

curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/kubectl

Give executable permission to the binary.

chmod +x ./kubectl

Copy the binary to a folder in your $PATH.

sudo cp ./kubectl /bin/kubectl

As discussed earlier, EKS uses the AWS IAM Authenticator for Kubernetes to allow IAM authentication to the cluster, so we need to download and install that as well.

2.  Install aws-iam-authenticator

curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/aws-iam-authenticator

Give executable permission to the binary.

chmod +x ./aws-iam-authenticator

Copy the binary to a folder in your $PATH.

sudo cp ./aws-iam-authenticator /bin/aws-iam-authenticator

3.  Create the kubeconfig file

First create the directory.

mkdir -p ~/.kube

Open a config file in the directory created above:

vi ~/.kube/config-eks-blog-cluster

Paste the below configuration into the file:

apiVersion: v1
clusters:
- cluster:
    server: https://DBFE36D09896EECAB426959C35FFCC47.sk1.us-east-1.eks.amazonaws.com
    certificate-authority-data: "...................."
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "eks-blog-cluster"

Replace the values of server and certificate-authority-data with your cluster's values, and update the cluster name in the args section. You can get these values from the web console or with the command below:

aws eks describe-cluster --name=eks-blog-cluster

Save and exit.

Add that file path to your KUBECONFIG environment variable so that kubectl knows where to look for your cluster configuration.

export KUBECONFIG=$KUBECONFIG:~/.kube/config-eks-blog-cluster
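
The export only lasts for the current shell session; to make it permanent, append it to your shell profile:

echo 'export KUBECONFIG=$KUBECONFIG:~/.kube/config-eks-blog-cluster' >> ~/.bashrc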

To verify that kubectl is now properly configured:

kubectl get all

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   50m

Launch and configure worker nodes:

Now we need to launch worker nodes before we can start deploying apps. We can create the worker node group using the CloudFormation script provided by AWS, which is available here, or use the Terraform script available here. The scripts take the following parameters (a sketch of the CloudFormation invocation follows the list):

  • ClusterName: Name of the Amazon EKS cluster we created earlier.

  • ClusterControlPlaneSecurityGroup: Id of the security group we used in EKS cluster.

  • NodeGroupName: Name for the worker node auto scaling group.

  • NodeAutoScalingGroupMinSize: Minimum number of worker nodes that you always want in your cluster.

  • NodeAutoScalingGroupMaxSize: Maximum number of worker nodes that you want in your cluster.

  • NodeInstanceType: Type of worker node you wish to launch.

  • NodeImageId: AWS provides an Amazon EKS-optimized AMI to be used for worker nodes. Currently EKS is available in only two AWS regions, Oregon and N. Virginia, and the AMI IDs are ami-02415125ccd555295 and ami-048486555686d18a0 respectively.

  • KeyName: Name of the key you will use to ssh into the worker node.

  • VpcId: Id of the VPC that we created earlier.

  • Subnets: Subnets from the VPC we created earlier.
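
Putting it together, the CloudFormation launch might look like the sketch below. The template URL is the EKS worker node sample template from the AWS documentation at the time of writing; the stack name, node group name and key pair are placeholders, and the resource IDs shown are the ones from our cluster above:

aws cloudformation create-stack --stack-name eks-blog-workers \
    --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-08-30/amazon-eks-nodegroup.yaml \
    --capabilities CAPABILITY_IAM \
    --parameters \
        ParameterKey=ClusterName,ParameterValue=eks-blog-cluster \
        ParameterKey=ClusterControlPlaneSecurityGroup,ParameterValue=sg-03fa0c02886c183d4 \
        ParameterKey=NodeGroupName,ParameterValue=eks-blog-nodes \
        ParameterKey=NodeAutoScalingGroupMinSize,ParameterValue=2 \
        ParameterKey=NodeAutoScalingGroupMaxSize,ParameterValue=4 \
        ParameterKey=NodeInstanceType,ParameterValue=t2.medium \
        ParameterKey=NodeImageId,ParameterValue=ami-048486555686d18a0 \
        ParameterKey=KeyName,ParameterValue=my-key \
        ParameterKey=VpcId,ParameterValue=vpc-0364b5ed9f85e7ce1 \
        'ParameterKey=Subnets,ParameterValue=subnet-0b8da2094908e1b23\,subnet-01a46af43b2c5e16c'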


To enable worker nodes to join your cluster, we need to download, edit and apply the AWS authenticator config map.

Download the config map:

curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/aws-auth-cm.yaml

Open it in an editor:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

Edit the value of rolearn with the ARN of your worker nodes' role. This value is available in the output of the scripts that you ran. Save the change and then apply it:

kubectl apply -f aws-auth-cm.yaml 

Now you can check whether the nodes have joined the cluster:

kubectl get nodes

NAME                         STATUS     ROLES   AGE  VERSION
ip-10-0-2-171.ec2.internal   Ready      <none>  12s  v1.10.3
ip-10-0-3-58.ec2.internal    Ready      <none>  14s  v1.10.3

Deploying an application:

As our cluster is now completely ready, we can start deploying applications on it. We will deploy a simple books API application which connects to a MongoDB database and allows users to store, list and delete book information. Four manifests are used (a minimal sketch of the MongoDB pair follows the list):

1. MongoDB Deployment YAML

2. Test Application Deployment YAML

3. MongoDB Service YAML

4. Test Application Service YAML
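
The manifest files themselves are not reproduced in full here, but a minimal sketch of the MongoDB pair gives the idea; the image tag and labels are assumptions, and the test application's manifests follow the same pattern with type LoadBalancer for its Service:

# mongodb-deployment.yaml -- a minimal sketch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:3.6        # assumed image tag
          ports:
            - containerPort: 27017
---
# mongodb-service.yaml -- exposes MongoDB inside the cluster only
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  type: ClusterIP
  selector:
    app: mongodb
  ports:
    - port: 27017
      targetPort: 27017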

Services

$ kubectl create -f mongodb-service.yaml
$ kubectl create -f testapp-service.yaml

Deployments

$ kubectl create -f mongodb-deployment.yaml
$ kubectl create -f testapp-deployment.yaml
$ kubectl get services

NAME              TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kubernetes        ClusterIP      172.20.0.1      <none>          443/TCP        12m
mongodb-service   ClusterIP      172.20.55.194   <none>          27017/TCP      4m
test-service      LoadBalancer   172.20.188.77   a7ee4f4c3b0ea   80:31427/TCP   3m

In the EXTERNAL-IP column of test-service we see the DNS name of a load balancer (truncated in the output above); we can now access the application from outside the cluster using this DNS name.

To store data:

curl -X POST -d '{"name":"A Game of Thrones (A Song of Ice and Fire)", "author":"George R.R. Martin","price":343}' http://a7ee4f4c3b0ea11e8b0f912f36098e4d-672471149.us-east-1.elb.amazonaws.com/books

{"id":"5b8fab49fa142b000108d6aa","name":"A Game of Thrones (A Song of Ice and Fire)","author":"George R.R. Martin","price":343}

To get data:

curl -X GET http://a7ee4f4c3b0ea11e8b0f912f36098e4d-672471149.us-east-1.elb.amazonaws.com/books

[{"id":"5b8fab49fa142b000108d6aa","name":"A Game of Thrones (A Song of Ice and Fire)","author":"George R.R. Martin","price":343}]

We can also put the URL used in the curl commands above directly into our browser; we will get the same response.


Our application is now deployed on EKS and can be accessed by users.

Comparison between GKE, ECS and EKS:

Cluster creation: Creating a GKE or ECS cluster is considerably simpler than creating an EKS cluster, with GKE being the simplest of the three.

Cost: With both GKE and ECS we pay only for the infrastructure that is visible to us, i.e. servers, volumes, ELBs etc., and there is no cost for master nodes or other cluster-management services, but with EKS there is a charge of $0.20 per hour for the control plane.

Add-ons: GKE provides the option of using Calico as the network plugin, which helps in defining network policies for controlling inter-pod communication (by default, all pods in Kubernetes can communicate with each other).
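
For example, a NetworkPolicy that lets only the test application's pods reach MongoDB would look something like this (the pod labels are assumptions):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mongodb-allow-testapp
spec:
  # Applies to the MongoDB pods; once selected, all other ingress is denied
  podSelector:
    matchLabels:
      app: mongodb
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: testapp
      ports:
        - protocol: TCP
          port: 27017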

Serverless: An ECS cluster can be created using Fargate, the containers-as-a-service (CaaS) offering from AWS. EKS is also expected to support Fargate soon.

In terms of availability and scalability, all three services are on par with each other.

Conclusion:

In this blog post we learned the basic concepts of EKS, launched our own EKS cluster and deployed an application on it. EKS is a much-awaited service from AWS, especially for folks who were already running Kubernetes workloads on AWS, as they can now easily migrate to EKS and have a fully managed Kubernetes control plane. EKS is expected to be adopted by many organisations in the near future.

About the Author


Akash is an AWS certified developer and Kubernetes expert. He has deep expertise in infrastructure automation, containerized deployments, and micro-service design patterns. He is also a gopher and has built custom Kubernetes controllers. In his free time, he likes to read books.