How to Use the Kubernetes Operations Tool

Today there are multiple ways to deploy a Kubernetes cluster; a good summary of the available options can be found here. For deploying to the AWS cloud, my personal favourite is the Kubernetes Operations (kops) tool. The availability of kops has made deployment of Kubernetes clusters to AWS a breeze. I would love to see kops extended to on-premise environments, but let’s park that for now.

Pre-built binaries for Mac and Linux are available for download. As of this writing, binaries for version 1.4.1 can be downloaded from the following links:

In this article we’ll see how to use the kops tool to deploy a Kubernetes cluster in the AWS cloud. We’ll also see how to set up Ingress and use it to expose services deployed in the cluster.

Prerequisites for using kops

  • AWS Access ID and Secret Key for the user

Set the access ID and secret key in the $HOME/.aws/credentials file.

A sample credentials file looks like this:

[default]
aws_access_key_id = AAAAAAAAAAAAAAAAAAA
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXX

Ensure the user has the IAMFullAccess policy attached; otherwise kops will fail to create the cluster.
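If you have the AWS CLI installed, the same credentials can also be set from the command line with `aws configure set`; the key values below are placeholders, not real credentials:

```shell
# Placeholder key pair; substitute your real values.
KEY_ID=AAAAAAAAAAAAAAAAAAA
SECRET_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXX

# 'aws configure set' writes these entries to $HOME/.aws/credentials.
if command -v aws >/dev/null 2>&1; then
  aws configure set aws_access_key_id "$KEY_ID"
  aws configure set aws_secret_access_key "$SECRET_KEY"
fi
```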

  • AWS S3 bucket

This is used to store the cluster configuration.
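The bucket itself can be created with the AWS CLI. A minimal sketch, assuming the CLI is configured; the bucket name and region below are illustrative:

```shell
BUCKET=pradipta-kops-bucket   # bucket names must be globally unique
REGION=us-west-2

if command -v aws >/dev/null 2>&1; then
  # Create the bucket in the chosen region.
  aws s3api create-bucket --bucket "$BUCKET" --region "$REGION" \
    --create-bucket-configuration LocationConstraint="$REGION" || true
  # Versioning makes it possible to recover older cluster state files.
  aws s3api put-bucket-versioning --bucket "$BUCKET" \
    --versioning-configuration Status=Enabled || true
fi

echo "State store will be s3://$BUCKET"
```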

  • DNS Hosted Zone in AWS Route53

If you are using an existing domain from a different provider, ensure that you add the AWS NS records in your provider’s NS configuration. For example, my domain is registered with BIGROCK. When the hosted zone is created in AWS, the NS records shown in the diagram below get created.

Route 53 Management Console

These AWS Route53 NS records need to be updated in BIGROCK for my domain. Every domain hosting provider has a configuration option for updating the NS records.
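After updating the registrar, you can check from the command line that the delegation has propagated; a quick sketch using a placeholder domain:

```shell
DOMAIN=example.com   # placeholder; substitute your own domain

# The NS answers should list the Route53 name servers (awsdns-*)
# once the registrar change has propagated.
if command -v dig >/dev/null 2>&1; then
  dig +short NS "$DOMAIN" || true
fi
```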

Setup and Configuration

The commands below assume that the kops binary is named ‘kops’ and is available in your PATH.

Before creating a cluster, set KOPS_STATE_STORE to point to the S3 bucket:

$ export KOPS_STATE_STORE=s3://pradipta-kops-bucket

Create Cluster
The following command creates the cluster configuration (and saves it in S3) but doesn’t actually provision the cluster.

$ kops create cluster --cloud=aws --zones=us-west-2a


* list clusters with: kops get cluster
* edit this cluster with: kops edit cluster
* edit your node instance group: kops edit ig nodes
* edit your master instance group: kops edit ig master-us-west-2a

Finally configure your cluster with: kops update cluster --yes

By default the maximum number of nodes is 2 and the node instance type is t2.medium; for the master, the default instance type is m3.medium. The output of the above step suggests the commands to use for changing these defaults.
For example, if you want to change the master’s instance type to t2.medium, run the following command:

$ kops edit ig master-us-west-2a
metadata:
  creationTimestamp: "2016-11-18T06:25:32Z"
  name: master-us-west-2a
spec:
  associatePublicIp: true
  machineType: m3.medium
  maxSize: 1
  minSize: 1
  role: Master
  zones:
  - us-west-2a
Edit machineType to t2.medium and save the file.
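Alternatively, the defaults can be overridden when the cluster configuration is first created, avoiding the edit step altogether. A sketch with illustrative sizes; the cluster name here is hypothetical:

```shell
CLUSTER=mycluster.example.com   # hypothetical cluster name; must match your hosted zone

if command -v kops >/dev/null 2>&1; then
  kops create cluster --cloud=aws --zones=us-west-2a \
    --master-size=t2.medium --node-size=t2.medium --node-count=2 \
    "$CLUSTER" || true
fi
```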

Provision the Cluster

$ kops update cluster --yes

I1118 12:02:01.920825   51894 populate_cluster_spec.go:196] Defaulting DNS zone to: Z1UA3Q1EEG7GA0
I1118 12:02:04.663625   51894 executor.go:68] Tasks: 0 done / 51 total; 26 can run
I1118 12:02:06.013447   51894 vfs_castore.go:384] Issuing new certificate: "master"
I1118 12:02:06.311324   51894 vfs_castore.go:384] Issuing new certificate: "kubecfg"
I1118 12:02:06.375961   51894 vfs_castore.go:384] Issuing new certificate: "kubelet"
I1118 12:02:11.556805   51894 executor.go:68] Tasks: 26 done / 51 total; 10 can run
I1118 12:02:13.771592   51894 executor.go:68] Tasks: 36 done / 51 total; 13 can run
I1118 12:02:16.105466   51894 launchconfiguration.go:276] Waiting for IAM to replicate
I1118 12:02:16.202462   51894 launchconfiguration.go:276] Waiting for IAM to replicate
I1118 12:02:27.957765   51894 executor.go:68] Tasks: 49 done / 51 total; 2 can run
I1118 12:02:29.112813   51894 executor.go:68] Tasks: 51 done / 51 total; 0 can run
I1118 12:02:30.873410   51894 update_cluster.go:150] Exporting kubecfg for cluster

Wrote config for to "/Users/bpradipt/.kube/config"

Cluster is starting.  It should be ready in a few minutes.

* list nodes: kubectl get nodes --show-labels
* ssh to the master: ssh -i ~/.ssh/id_rsa
* read about installing addons:

Note that it takes some time before the DNS entries for the API server are available; you’ll need to wait a few minutes. In my case the DNS entries for the API server were available approximately 10 minutes after the update command completed.

The final entries in Route53 for my setup look like the following:

Route 53 Management Console-1

Additional Kubernetes Cluster Configurations

In the following sections we’ll deploy the Kubernetes dashboard and an Ingress controller.

Deploy Kubernetes Dashboard

$ kubectl create -f
deployment "kubernetes-dashboard-v1.4.0" created
service "kubernetes-dashboard" created

$ kubectl cluster-info

Kubernetes master is running at
KubeDNS is running at

kubernetes-dashboard is running at
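To confirm that the dashboard actually came up, you can list the pods in the kube-system namespace; a sketch assuming kubectl is already pointing at the new cluster:

```shell
NAMESPACE=kube-system

# The kubernetes-dashboard pod should appear with STATUS 'Running'.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods --namespace="$NAMESPACE" || true
fi
```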

To verify that the cluster is working correctly, you can try the sample guestbook application available here.
Note that the guestbook example exposes its frontend service as type ‘LoadBalancer’, leveraging an AWS load balancer.
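With a ‘LoadBalancer’ service, Kubernetes asks AWS for an ELB and publishes its DNS name in the service listing. A sketch for checking it, assuming the guestbook frontend service is named ‘guestbook’:

```shell
SERVICE=guestbook   # assumed service name from the guestbook example

# The EXTERNAL-IP column shows the ELB's DNS name once it is provisioned.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get svc "$SERVICE" -o wide || true
fi
```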

Deploy Ingress Controller
The following command creates an Ingress controller and exposes it via an AWS load balancer.

$ kubectl create -f

$ kubectl get svc/ingress-nginx -o wide

NAME           CLUSTER-IP     EXTERNAL-IP                                                            PORT(S)         AGE   SELECTOR
ingress-nginx 80/TCP,443/TCP  54m   app=ingress-nginx

Here is a sample ingress service definition for the guestbook application.

$ cat guestbook-ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: guestbook-ingress
spec:
  rules:
  - host:
    http:
      paths:
      - path:
        backend:
          serviceName: guestbook
          servicePort: 3000
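Assuming the definition above is saved in a file named guestbook-ingress, it can be applied with kubectl:

```shell
INGRESS_FILE=guestbook-ingress   # file containing the Ingress definition above

if command -v kubectl >/dev/null 2>&1; then
  kubectl create -f "$INGRESS_FILE" || true
fi
```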
$ kubectl get ing
NAME                HOSTS                           ADDRESS          PORTS     AGE
guestbook-ingress   80        6m

You can verify that the ingress is working by using curl with the -H option. The following is an example verifying the guestbook-ingress service.

Sample output:

$ curl -H "Host:"
"HOME": "/",
"HOSTNAME": "guestbook-rt0z1",
"KUBERNETES_PORT": "tcp://",
"KUBERNETES_PORT_443_TCP": "tcp://",
"PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"REDIS_MASTER_PORT": "tcp://",
"REDIS_MASTER_PORT_6379_TCP": "tcp://",
"REDIS_SLAVE_PORT": "tcp://",
"REDIS_SLAVE_PORT_6379_TCP": "tcp://",

If the ingress is not working, then ‘default backend - 404’ will be displayed when accessing the guestbook service.

$ curl
default backend - 404%

Another option for verification is to use the Virtual Host extension from the Chrome Web Store.

The VHost Domain needs to be set to the HTTP host used in the ingress rule. The VHost IP is the actual host serving the ingress, i.e. the external IP of the ingress service.

This entire exercise of creating the cluster and configuring dashboard and ingress took less than half an hour for me. Pretty impressive.

If you are into operations and are planning to deploy a Kubernetes cluster in AWS, I would strongly suggest you try the kops tool.

Pradipta Kumar Banerjee

I'm a cloud and Linux/open-source enthusiast with 16 years of industry experience at IBM. You can find more details about me on LinkedIn.
