Experimenting with Socketplane for Docker

Current Docker networking functionality is limited: there is no support for Open vSwitch, tunneling, or multiple networks, among other things. Third-party solutions such as pipework, socketplane, weave, and flannel provide advanced networking functionality for Docker. None of these is officially part of Docker, but they are nonetheless very useful in practical, large-scale cloud environments.

However, with the recent acquisition of SocketPlane by Docker, things look very promising for Docker networking.

In this article, let’s see how socketplane works.

Socketplane provides the following functionality:

1. Allows you to connect Docker containers on multiple hosts together

2. Supports multiple networks

3. Supports Open vSwitch (OVS)

4. Provides distributed IP address management across the socketplane cluster

Socketplane Installation

Master Node:
curl -sSL http://get.socketplane.io/ | sudo BOOTSTRAP=true sh
Slave Nodes:
curl -sSL http://get.socketplane.io/ | sudo sh
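
A quick sanity check after the script completes (assuming a standard Docker install) is to confirm the agent container is running and the wrapper script is on the PATH:

```shell
# The socketplane agent runs as a container named 'socketplane'
sudo docker ps | grep socketplane

# The wrapper script should be installed at /usr/bin/socketplane
which socketplane
```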

This installs the socketplane agent container and a ‘socketplane’ wrapper command that is used to manage containers and their associated networks.

/usr/bin/socketplane is a wrapper script for managing the socketplane agent as well as the containers that connect through it.

The socketplane container is started as a privileged container so that it can manipulate OVS on the host as well as network namespaces.

This is the command used:

docker run --name socketplane -itd --privileged=true \
-v /etc/socketplane/socketplane.toml:/etc/socketplane/socketplane.toml \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker -v /proc:/hostproc -e PROCFS=/hostproc \
--net=host socketplane/socketplane socketplane

Conceptually, the high-level architecture looks like the following:

[Figure: socketplane conceptual architecture]

What happens when the socketplane container starts?

  1. Create the OVS bridge (docker0-ovs)

  2. Create the default network 10.1.0.0/16

    1. Create an OVS internal port named ‘default’

    2. Assign the IP address 10.1.0.1 to the ‘default’ interface

    3. Create an iptables masquerade rule for the subnet

  3. If this is the master node, start the cluster; otherwise join the existing cluster
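
A rough manual equivalent of these startup steps looks like the following. This is a sketch only: the agent performs these operations programmatically, and the bridge, port, and subnet values here are taken from the output shown below.

```shell
# Sketch of what the agent sets up at startup (run as root)
ovs-vsctl add-br docker0-ovs                        # 1. create the OVS bridge
ovs-vsctl add-port docker0-ovs default tag=1 \
    -- set interface default type=internal          # 2a. internal port for the 'default' network
ip addr add 10.1.0.1/16 dev default                 # 2b. gateway IP on the 'default' interface
ip link set default up
iptables -t nat -A POSTROUTING -s 10.1.0.0/16 \
    -j MASQUERADE                                   # 2c. NAT so containers can reach outside
```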


vagrant@socketplane:~$ sudo socketplane network list
[
    {
        "gateway": "10.1.0.1",
        "id": "default",
        "subnet": "10.1.0.0/16",
        "vlan": 1
    }
]

vagrant@socketplane:~$ sudo ovs-vsctl show
b293c7d7-62fa-4fed-ba69-0e7a89d201ec
    Manager "ptcp:6640"
        is_connected: true
    Bridge "docker0-ovs"
        Port default
            tag: 1
            Interface default
                type: internal
        Port "docker0-ovs"
            Interface "docker0-ovs"
                type: internal
    ovs_version: "2.1.3"

An IP address is assigned to the 'default' interface connected to the OVS bridge:

vagrant@socketplane:~$ ip addr show default
default: <BROADCAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default 
    link/ether ce:ad:14:00:f2:c3 brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.1/16 scope global default
       valid_lft forever preferred_lft forever
    inet6 fe80::ccad:14ff:fe00:f2c3/64 scope link 
       valid_lft forever preferred_lft forever

An iptables masquerade rule is also set up for the network, allowing containers to reach the outside world.

vagrant@socketplane:~$ sudo iptables -L -n -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0           
MASQUERADE  all  --  10.1.0.0/16          0.0.0.0/0

What happens when a container is started via the socketplane wrapper?
  1. Start the container with ‘--net=none’ [docker run --net=none]

  2. Call the socketplane API to attach the container to a network (the ‘default’ network if none is specified)

    1. Create an OVS internal port and add it to the OVS switch

    2. Move the OVS internal port to the container network namespace
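
These attach steps can be sketched with plain Docker, OVS, and iproute2 commands. This is illustrative only; the port name (ovs0000001 here) is a placeholder for the name the agent generates, and the IP comes from the agent's distributed IPAM.

```shell
# Sketch of how the agent plumbs a container (run as root)
CID=$(docker run --net=none -itd busybox /bin/sh)       # 1. container with no networking
PID=$(docker inspect --format '{{.State.Pid}}' "$CID")
PORT=ovs0000001                                         # placeholder port name

ovs-vsctl add-port docker0-ovs "$PORT" tag=1 \
    -- set interface "$PORT" type=internal              # 2a. internal port on the bridge
ip link set "$PORT" netns "$PID"                        # 2b. move it into the container's netns
nsenter -t "$PID" -n ip addr add 10.1.0.2/16 dev "$PORT"
nsenter -t "$PID" -n ip link set "$PORT" up
nsenter -t "$PID" -n ip route add default via 10.1.0.1
```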

Now let’s see the above steps in action.

vagrant@socketplane:~$ sudo socketplane run -itd busybox /bin/sh
03fac8ba30dc001d8ce8fad0f28453e90a42d38f6d52b23914ea76d62cf44991
vagrant@socketplane:~$ sudo socketplane info
{
	"03fac8ba30dc001d8ce8fad0f28453e90a42d38f6d52b23914ea76d62cf44991": {
		"connection_details": {
			"gateway": "10.1.0.1",
			"ip": "10.1.0.2",
			"mac": "02:42:0a:01:00:02",
			"name": "ovs14763e7",
			"subnet": "/16"
		},
		"container_id": "03fac8ba30dc001d8ce8fad0f28453e90a42d38f6d52b23914ea76d62cf44991",
		"container_name": "/prickly_nobel",
		"container_pid": "19684",
		"network": "default",
		"ovs_port_id": "ovs14763e7"
	}
}
vagrant@socketplane:~$ sudo ovs-vsctl show
5e19792f-8215-489f-a01f-f7f0b4d20aaa
	Manager "ptcp:6640"
		is_connected: true
	Bridge "docker0-ovs"
		Port "docker0-ovs"
			Interface "docker0-ovs"
				type: internal
		Port default
			tag: 1
			Interface default
				type: internal
		Port "ovs14763e7"
			tag: 1
			Interface "ovs14763e7"
				type: internal
	ovs_version: "2.1.3"

Conceptually, it looks like the following:

[Figure: container attached to the socketplane OVS bridge]

vagrant@socketplane:~$ sudo socketplane attach 03fac8ba30dc001d8ce8fad0f28453e90a42d38f6d52b23914ea76d62cf44991

/ # ifconfig
lo        Link encap:Local Loopback  
		  inet addr:127.0.0.1  Mask:255.0.0.0
		  UP LOOPBACK RUNNING  MTU:65536  Metric:1
		  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
		  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
		  collisions:0 txqueuelen:0 
		  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

ovs14763e7 Link encap:Ethernet  HWaddr 02:42:0A:01:00:02  
		  inet addr:10.1.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
		  UP BROADCAST RUNNING  MTU:1440  Metric:1
		  RX packets:16 errors:0 dropped:0 overruns:0 frame:0
		  TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
		  collisions:0 txqueuelen:0 
		  RX bytes:800 (800.0 B)  TX bytes:906 (906.0 B)

vagrant@socketplane:~$ sudo ovs-appctl fdb/show docker0-ovs
port  VLAN                 MAC    Age
   2     1   86:bd:9c:cc:e8:f7    223
   2     1   02:42:0a:01:00:02     81
   1     1   ce:ad:14:00:f2:c3      2

What happens when a new network is created?

vagrant@socketplane:~$ sudo socketplane network create test-net 15.15.0.0/16
{
	"gateway": "15.15.0.1",
	"id": "test-net",
	"subnet": "15.15.0.0/16",
	"vlan": 2
}
vagrant@socketplane:~$ sudo iptables -L -n -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0           
MASQUERADE  all  --  10.1.0.0/16          0.0.0.0/0           
MASQUERADE  all  --  15.15.0.0/16         0.0.0.0/0           


An internal ‘test-net’ interface is also created on the OVS bridge and assigned the gateway IP:
test-net: <BROADCAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default 
	link/ether 36:dc:07:12:84:4f brd ff:ff:ff:ff:ff:ff
	inet 15.15.0.1/16 scope global test-net
	   valid_lft forever preferred_lft forever
	inet6 fe80::34dc:7ff:fe12:844f/64 scope link 
	   valid_lft forever preferred_lft forever



vagrant@socketplane:~$ sudo ovs-vsctl show
5e19792f-8215-489f-a01f-f7f0b4d20aaa
	Manager "ptcp:6640"
		is_connected: true
	Bridge "docker0-ovs"
		Port test-net
			tag: 2
			Interface test-net
				type: internal
		Port "docker0-ovs"
			Interface "docker0-ovs"
				type: internal
		Port default
			tag: 1
			Interface default
				type: internal
		Port "ovs14763e7"
			tag: 1
			Interface "ovs14763e7"
				type: internal
	ovs_version: "2.1.3"

Creating a container and connecting it to the new network

The network name needs to be specified as an option when starting the container.

vagrant@socketplane:~$ sudo socketplane run -n test-net -itd busybox /bin/sh
0fd5383a684c78458df9b007a0e46273e8e9c2502516ab43586102715953ce81
vagrant@socketplane:~$ sudo socketplane info
{
	"03fac8ba30dc001d8ce8fad0f28453e90a42d38f6d52b23914ea76d62cf44991": {
		"connection_details": {
			"gateway": "10.1.0.1",
			"ip": "10.1.0.2",
			"mac": "02:42:0a:01:00:02",
			"name": "ovs14763e7",
			"subnet": "/16"
		},
		"container_id": "03fac8ba30dc001d8ce8fad0f28453e90a42d38f6d52b23914ea76d62cf44991",
		"container_name": "/prickly_nobel",
		"container_pid": "19684",
		"network": "default",
		"ovs_port_id": "ovs14763e7"
	},
	"0fd5383a684c78458df9b007a0e46273e8e9c2502516ab43586102715953ce81": {
		"connection_details": {
			"gateway": "15.15.0.1",
			"ip": "15.15.0.2",
			"mac": "02:42:0f:0f:00:02",
			"name": "ovscb5edfb",
			"subnet": "/16"
		},
		"container_id": "0fd5383a684c78458df9b007a0e46273e8e9c2502516ab43586102715953ce81",
		"container_name": "/furious_shockley",
		"container_pid": "21162",
		"network": "test-net",
		"ovs_port_id": "ovscb5edfb"
	}
}
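
Note that the two networks are isolated by VLAN tags (1 for ‘default’, 2 for ‘test-net’), which can be verified with a quick ping test from inside a container (container ID and addresses taken from the output above):

```shell
# Attach to the container on 'test-net' and check connectivity
sudo socketplane attach 0fd5383a684c78458df9b007a0e46273e8e9c2502516ab43586102715953ce81

/ # ping -c 1 15.15.0.1    # its own gateway: should succeed
/ # ping -c 1 10.1.0.2     # container on the 'default' network: should fail (different VLAN)
```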

Hopefully this gives you a fair idea of how socketplane works.

Pradipta Kumar Banerjee

I'm a Cloud and Linux/open-source enthusiast, with 16 years of industry experience at IBM. You can find more details about me here - LinkedIn
