Deploying a Service on a Kubernetes Cluster

In this article, let us see how to deploy a Kubernetes Service. If you are looking to set up and install Kubernetes, please have a look at my previous article.

First, let us understand what exactly a Kubernetes Service is.

Simply put, a Kubernetes Service is a logical grouping of Pods. While Pods can get created or destroyed dynamically, a Service is a permanent (stable) entity with a unique IP address and a DNS hostname. This allows applications to consume the Service using a well-defined address.

Deploying an application via Kubernetes
Let us deploy a MySQL Pod and expose it as a Service so that other applications can access it by referring to the Service IP and port.

The source code for the MySQL container used in this article is available from the link below –
https://github.com/bpradipt/docker-mysql.git

The source code contains Dockerfiles and related setup scripts to build the MySQL Docker image for both Intel and OpenPower systems.

Given below is a sample YAML file to create a MySQL Pod on an OpenPower system. Modify the ‘image’ attribute accordingly when creating the Pod on an Intel system.

 
# cat mysqlpod.yml
apiVersion: "v1"
kind: "Pod"
metadata:
  name: "mysqlpod"
  labels:
    name: "db"
spec:
  containers:
    - name: "mysql"
      image: "registry-rhel71.kube.com:5000/ppc64le/mysql"
      imagePullPolicy: "IfNotPresent"
      env:
        - name: MYSQL_USER
          value: test
        - name: MYSQL_PASSWORD
          value: test
        - name: MYSQL_ROOT_PASSWORD
          value: password
        - name: MYSQL_DB
          value: BucketList
      ports:
        - containerPort: 3306

Create the MySQL Pod using the above YAML file:

# kubectl create -f mysqlpod.yml
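
Once the Pod is created, you can verify that it reaches the Running state (the exact output columns vary with the Kubernetes version):

# kubectl get pod mysqlpod
# kubectl describe pod mysqlpod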

Given below is a sample YAML file to create the MySQL Service:

# cat mysqlservice.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: dbfrontend
  name: dbfrontend
spec:
  ports:
    # the port that this service should serve on
    - port: 3306
  # label keys and values that must match in order to receive traffic for this service
  selector:
    name: db

Note that the selector label (name: db in the above YAML) should match the Pod label for all the Pods that are to be part of this Service.

Create the MySQL Service using the above YAML file:

# kubectl create -f mysqlservice.yml
# kubectl get services
NAME         CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
dbfrontend   10.254.93.20                 3306/TCP   7m

As shown above, the dbfrontend (MySQL) Service is available at 10.254.93.20:3306. Note that the CLUSTER-IP is reachable only from within the Kubernetes cluster and is not accessible from outside the cluster.
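
To confirm that the Service selector has matched the MySQL Pod, you can also list the Service endpoints; the endpoint shown will be the Pod IP and port in your environment:

# kubectl get endpoints dbfrontend
# kubectl describe service dbfrontend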

For automatic Service discovery, environment variables containing the IP address and port of each Service in the cluster are injected into all containers that are created after the Service. For example, for the above Service named “dbfrontend”, the following environment variables are available in all Pods created after the Service, enabling those Pods to reach the Service through the variables:

DBFRONTEND_PORT=tcp://10.254.93.20:3306
DBFRONTEND_PORT_3306_TCP=tcp://10.254.93.20:3306
DBFRONTEND_PORT_3306_TCP_ADDR=10.254.93.20
DBFRONTEND_PORT_3306_TCP_PORT=3306
DBFRONTEND_PORT_3306_TCP_PROTO=tcp
DBFRONTEND_SERVICE_HOST=10.254.93.20
DBFRONTEND_SERVICE_PORT=3306
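
As an illustration, a client Pod that has the mysql command-line client installed could use these variables to reach the database. The Pod name app-pod below is hypothetical; substitute a Pod from your own deployment:

# kubectl exec -it app-pod -- sh -c 'mysql -h $DBFRONTEND_SERVICE_HOST -P $DBFRONTEND_SERVICE_PORT -u test -ptest BucketList'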

The above Service definition is an example of the ClusterIP ServiceType. This is the default ServiceType in Kubernetes and makes the Service accessible only from within the cluster. There are two other ServiceTypes available in Kubernetes, viz. NodePort and LoadBalancer, which can be used to expose the Service externally.

NodePort: Exposes the Service on a specific port on each node in the cluster. In other words, the Service will be available on the same port on every node, i.e. at any <NodeIP>:NodePort address.
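
For example, the dbfrontend Service could be exposed as a NodePort Service with a definition like the one below. This is a minimal sketch; the nodePort value 30306 is an arbitrary example from the default 30000-32767 range, and Kubernetes will allocate a port automatically if the field is omitted.

apiVersion: v1
kind: Service
metadata:
  name: dbfrontend
  labels:
    name: dbfrontend
spec:
  type: NodePort
  ports:
    - port: 3306
      nodePort: 30306
  selector:
    name: db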

LoadBalancer: Some cloud providers support external load balancers, and this ServiceType can be used when running on such a provider. It provisions a load balancer for the Service, which then forwards requests to the Service at <NodeIP>:NodePort on each node.
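
Similarly, on a supported cloud provider, setting the type to LoadBalancer is enough to request an external load balancer for the Service; once provisioned, its address shows up in the EXTERNAL-IP column of kubectl get services. A minimal sketch:

apiVersion: v1
kind: Service
metadata:
  name: dbfrontend
  labels:
    name: dbfrontend
spec:
  type: LoadBalancer
  ports:
    - port: 3306
  selector:
    name: db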

There is another option to expose a Service externally. If there are external IPs that route to one or more cluster nodes, Kubernetes Services can be exposed on these externalIPs.

The following example YAML adds externalIPs to the Service definition.

apiVersion: v1
kind: Service
metadata:
  labels:
    name: dbfrontend
  name: dbfrontend
spec:
  ports:
    # the port that this service should serve on
    - port: 3306
  # label keys and values that must match in order to receive traffic for this service
  selector:
    name: db
  externalIPs: ["172.16.127.2"]

The Service is now available at 172.16.127.2:3306, where 172.16.127.2 is a routable IP that reaches one of the Kubernetes cluster nodes.
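
Any external client that can reach that IP can now connect to MySQL directly, for example with the mysql command-line client (assuming it is installed on the client machine) and the credentials defined in the Pod YAML above:

# mysql -h 172.16.127.2 -P 3306 -u test -ptest BucketList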

A brief primer on how Service IPs work
Service IPs are virtual IPs that are available only within the cluster. This is unlike Pod IP addresses, which actually route to a fixed destination. Kubernetes uses the Linux kernel’s iptables functionality to define virtual Service IP addresses which are transparently redirected as needed. When clients connect to the Service IP, their traffic is automatically redirected to an appropriate endpoint via iptables rules that Kubernetes (kube-proxy) inserts on every cluster node whenever a Service is created.

The kube-proxy instance that runs on every Kubernetes cluster node watches for any new Service creation. When kube-proxy sees a new Service created, it opens a new random port, establishes an iptables redirect from the Service IP to this new port, and starts accepting connections on it. When a client connects to the Service IP, the iptables rule kicks in and redirects the packets to the proxy’s own port. The proxy chooses a backend and starts proxying traffic from the client to the backend. You can see the iptables rules by running iptables-save on the Kubernetes nodes. For example, this is the output from one of the nodes in my environment:

-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-BQQ4WLK5U2ZTHPP5 -s 172.16.127.32/32 -m comment --comment "default/dbfrontend:" -j KUBE-MARK-MASQ
-A KUBE-SEP-BQQ4WLK5U2ZTHPP5 -p tcp -m comment --comment "default/dbfrontend:" -m tcp -j DNAT --to-destination 172.16.127.32:3306
-A KUBE-SERVICES -d 10.254.93.20/32 -p tcp -m comment --comment "default/dbfrontend: cluster IP" -m tcp --dport 3306 -j KUBE-SVC-BDXKXB7Z72QP7S2E
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-BDXKXB7Z72QP7S2E -m comment --comment "default/dbfrontend:" -j KUBE-SEP-BQQ4WLK5U2ZTHPP5
