Routing, IPv6, secondary IFs, traffic control, tunnelling trial... #122

Open · wants to merge 9 commits into master
133 changes: 103 additions & 30 deletions README.md
@@ -16,16 +16,22 @@ Pipework uses cgroups and namespaces and works with "plain" LXC containers
* [Setting container internal interface](#setting_internal)
* [Using a different netmask](#different_netmask)
* [Setting a default gateway](#default_gateway)
* [Setting routes on the internal interface](#route_internal)
* [Connect a container to a local physical interface](#local_physical)
* [Let the Docker host communicate over macvlan interfaces](#macvlan)
* [Wait for the network to be ready](#wait_ready)
* [Add the interface without an IP address](#no_ip)
* [DHCP](#dhcp)
* [Specify a custom MAC address](#custom_mac)
* [Virtual LAN (VLAN)](#vlan)
* [IPv6](#ipv6)
* [Secondary addresses](#secondary)
* [Traffic Control (QoS)](#traffic_control)
* [Support Open vSwitch](#openvswitch)
* [Support Infiniband](#infiniband)
* [Cleanup](#cleanup)
* [Debugging](#debug)
* [Experimental](#experimental)


<a name="notes"/>
@@ -75,7 +81,7 @@ Let's create two containers, running the web tier and the database tier:

Now, bring superpowers to the web tier:

pipework br1 $APACHE -a ip 192.168.1.1/24

This will:

@@ -86,7 +92,7 @@ This will:

Now (drum roll), let's do this:

pipework br1 $MYSQL -a ip 192.168.1.2/24

This will:

@@ -106,7 +112,7 @@ you gave to Pipework cannot be found, Pipework will try to resolve it
with `docker inspect`. This makes it even simpler to use:

docker run -name web1 -d apache
pipework br1 web1 -a ip 192.168.12.23/24


<a name="peeking_inside"/>
@@ -120,9 +126,10 @@ Voilà!

<a name="setting_internal"/>
### Setting container internal interface
By default pipework creates a new interface `eth1` inside the container. If you want to
change this interface name to something like `eth2` (e.g., to set up more than one interface with pipework), use:

`pipework br1 web1 -i eth2 ...`

**Note:** for Infiniband IPoIB interfaces, the default interface name is `ib0`, not `eth1`.
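
For example, here is a sketch (the bridges `br1` and `br2`, the container name `web1`, and the addresses are illustrative assumptions) that gives one container two extra interfaces with distinct names:

    # first extra interface, created as eth1 by default
    pipework br1 web1 -a ip 192.168.1.10/24
    # second extra interface, explicitly named eth2
    pipework br2 web1 -i eth2 -a ip 192.168.2.10/24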

@@ -134,7 +141,7 @@ tool; so you can append a subnet size using traditional CIDR notation.

For example:

pipework br1 $CONTAINERID -a ip 192.168.4.25/20

Don't forget that all containers should use the same subnet size;
pipework is not clever enough to use your specified subnet size for
@@ -154,7 +161,19 @@ you want the container to use a specific outbound IP address.
This can be automated by Pipework, by adding the gateway address
after the IP address and subnet mask:

pipework br1 $CONTAINERID -a ip 192.168.4.25/20@192.168.4.4
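
To check that the gateway was applied, you can inspect the container's routing table; a quick sketch, assuming `docker exec` is available and the default internal interface name `eth1`:

    docker exec $CONTAINERID ip route
    # the default route should point at the gateway given after the "@",
    # e.g. "default via 192.168.4.4 dev eth1" (illustrative output)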


<a name="route_internal"/>
### Setting routes on the internal interface

If you add more than one internal interface, or have specific use cases such as multicast routing,
you may want to add routes other than the default one.
This can be done by adding networks and masks (comma-separated) after the gateway:

pipework br1 $CONTAINERID -a ip 192.168.4.25/20@192.168.4.4 -r 192.168.5.0/25,192.168.6.0/24

Please note that the last added internal interface takes the default route.


<a name="local_physical"/>
@@ -163,8 +182,8 @@ after the IP address and subnet mask:
Let's pretend that you want to run two Hipache instances, listening on real
interfaces eth2 and eth3, using specific (public) IP addresses. Easy!

pipework eth2 $(docker run -d hipache /usr/sbin/hipache) -a ip 50.19.169.157/24
pipework eth3 $(docker run -d hipache /usr/sbin/hipache) -a ip 107.22.140.5/24

Note that this will use `macvlan` subinterfaces, so you can actually put
multiple containers on the same physical interface.
@@ -195,26 +214,14 @@ Then, you would start a container and assign it a macvlan interface
the usual way:

CID=$(docker run -d ...)
pipework eth0 $CID -a ip 10.1.1.234/24@10.1.1.254
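
For reference, creating such a host-side macvlan interface typically looks like this sketch (the interface name `eth0m` and the address are illustrative assumptions):

    # create a macvlan subinterface on the host and give it an address
    ip link add link eth0 name eth0m type macvlan mode bridge
    ip link set eth0m up
    ip addr add 10.1.1.200/24 dev eth0m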


<a name="wait_ready"/>
### Wait for the network to be ready

Since `docker create` allows you to instantiate a container without starting it,
there is no longer any reason for pipework to provide tooling to wait for the network.

<a name="no_ip"/>
### Add the interface without an IP address
@@ -224,7 +231,7 @@ container, you can use the `link` action instead of an IP address. The interface will
be created, connected to the network, and assigned to the container,
but without configuring an IP address:

pipework br1 $CONTAINERID -a link
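
You can then configure the interface later yourself, for example from the host; a sketch, assuming `docker exec` is available and the default internal interface name `eth1`:

    docker exec $CONTAINERID ip addr add 192.168.1.5/24 dev eth1
    docker exec $CONTAINERID ip link set eth1 up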


<a name="dhcp"/>
@@ -233,7 +240,7 @@ but without configuring an IP address:
You can use DHCP to obtain the IP address of the new interface. Just
specify `dhcp` instead of an IP address; for instance:

pipework eth1 $CONTAINERID -a dhcp

The value of $CONTAINERID will be provided to the DHCP client to use
as the hostname in the DHCP request. Depending on the configuration of
@@ -267,15 +274,15 @@ If you need to specify the MAC address to be used (either by the `macvlan`
subinterface, or the `veth` interface), no problem. Just add it on the
command line with the `-m` option:

pipework eth0 $(docker run -d haproxy) -a ip 192.168.1.2/24 -m 26:2e:71:98:60:8f

This can be useful if your network environment requires whitelisting
your hardware addresses (some hosting providers do that), or if you want
to obtain a specific address from your DHCP server. Also, some projects like
[Orchestrator](https://github.com/cvlc/orchestrator) rely on static
MAC-IPv6 bindings for DHCPv6:

pipework br0 $(docker run -d zerorpcworker) -a dhcp -m fa:de:b0:99:52:1c

**Note:** if you generate your own MAC addresses, try to remember these two
simple rules:
@@ -290,6 +297,8 @@ be `2`, `6`, `a`, or `e`. You can check [Wikipedia](
http://en.wikipedia.org/wiki/MAC_address) if you want even more details.
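
For illustration, a small bash sketch that generates a random MAC address whose second hex digit is `2`, `6`, `a`, or `e` (i.e. a unicast, locally administered address):

    # first octet: clear the multicast bit (0x01), set the locally-administered bit (0x02)
    first=$(( (RANDOM & 0xfc) | 0x02 ))
    printf '%02x:%02x:%02x:%02x:%02x:%02x\n' "$first" \
        $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))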

**Note:** Setting the MAC address of an IPoIB interface is not supported.


<a name="vlan"/>
### Virtual LAN (VLAN)

@@ -303,7 +312,40 @@ bridges are currently not supported.
The following will attach the container zerorpcworker to the Open vSwitch bridge
ovsbr0 and place it on VLAN ID 10.

pipework ovsbr0 $(docker run -d zerorpcworker) -a dhcp -V @10
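
You can verify the tag on the newly created port with Open vSwitch's own tooling, for instance:

    ovs-vsctl show
    # the container's port on ovsbr0 should be listed with "tag: 10"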

<a name="ipv6"/>
### IPv6

IPv6 global-scope addressing is also supported, using the same options:

pipework eth0 $(docker run -d haproxy) -a ip 2001:db8::beef/64@2001:db8::1

**Note:** this is a Docker 1.5 feature.

<a name="secondary"/>
### Secondary addresses

You can attach secondary addresses to the container by using the action `sec_ip` instead of `ip`:

pipework eth0 $(docker run -d haproxy) -a sec_ip 192.168.1.2/24
pipework eth0 $(docker run -d haproxy) -a sec_ip 2001:db8::beef/64
pipework eth0 $(docker run -d haproxy) -a sec_ip 2001:db8::face/64
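
To see the primary and secondary addresses together, you can list the interface from inside the container; a sketch, assuming `docker exec` and the default internal interface name `eth1`:

    docker exec $CONTAINERID ip addr show dev eth1
    # the IPv4 and IPv6 secondary addresses added above should all appear here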

<a name="traffic_control"/>
### Traffic Control (QoS)

You can apply traffic control to a container's internal interface, to emulate network
properties such as limited bandwidth, packet drops, latency, policing and marking, etc.

Here, pipework provides a simple wrapper around `tc`, so you keep control of all its parameters:

pipework eth0 $MYSQL -a tc qdisc add dev eth1 root netem loss 30%
pipework eth0 $MYSQL -a tc qdisc add dev eth1 root netem delay 100ms

See `man tc` for more details.

**Note:** as it is a wrapper, be sure that all your pipework arguments come before `-a tc ...`.
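
Assuming the arguments after `-a tc` are passed straight through to `tc`, the same wrapper can also be used to inspect or remove the emulation; a sketch:

    # show the qdisc currently attached to the container interface
    pipework eth0 $MYSQL -a tc qdisc show dev eth1
    # remove it
    pipework eth0 $MYSQL -a tc qdisc del dev eth1 root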

<a name="openvswitch"/>
### Support Open vSwitch
@@ -312,7 +354,7 @@ If you want to attach a container to the Open vSwitch bridge, no problem.

ovs-vsctl list-br
ovsbr0
pipework ovsbr0 $(docker run -d mysql /usr/sbin/mysqld_safe) -a ip 192.168.1.2/24

If the OVS bridge doesn't exist, it will be created automatically.

@@ -333,3 +375,34 @@ When a container is terminated (the last process of the net namespace exits),
the network interfaces are garbage collected. The interface in the container
is automatically destroyed, and the interface in the docker host (part of the
bridge) is then destroyed as well.

<a name="debug"/>
### Debugging

Two switches can help you debug tricky situations:

    -v    log every iproute2 call

    -x    enable shell debugging (similar to sh -x pipework ...)

<a name="experimental"/>
### Experimental

TBD: test / kernel watch / ...

- Tunnel interfaces (GRE/IPIP/IP6_TUNNEL)

pipework eth0 $(docker run -d haproxy) -i eth1 -a ipip 192.168.1.3
pipework eth0 $(docker run -d haproxy) -a ipip 2001:db8::2

If the container has more than one internal interface, specify the internal interface (`-i`) so that
the tunnel is attached to the right device.

There is no driver/mode to remember (ipip, ip6_tunnel, ipip6, ip6ip6, gre, ip6_gre, ...): pipework adapts itself
to your addressing scheme (doing IPv4-in-IPv4 or IPv6-in-IPv6 encapsulation as appropriate).

Be careful about the MTU in these situations: tunneling in the container on top of tunneling on the host may lead
to problems.
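
If you do hit MTU issues, one option is to lower the MTU on the container's tunnel-facing interface; a sketch using `docker exec` (the value 1400 is only an example, the right value depends on your encapsulation overhead):

    docker exec $CID ip link set eth1 mtu 1400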

- Clean up unused ports on OVS bridges
