Overview of Docker Networking: Let's Connect the Containers!

Mohit Talniya
3 min read · Jan 20, 2018


Docker's networking goals are formalized in the Container Network Model (CNM), a contract between containers and the underlying network.

Libnetwork is a library that provides Docker’s native implementation of the CNM.

Docker networking builds on existing Linux kernel networking features such as iptables, network namespaces, and bridges.

With Docker networking, we can connect containers running on the same host or across multiple hosts.

By default, Docker creates three networks:

  1. Bridge
  2. Host
  3. None (the null driver)

Let's look at each of these network types in more detail.
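You can list the default networks and inspect any one of them with the standard `docker network` commands:

```shell
# List the networks Docker created by default (bridge, host, none)
docker network ls

# Inspect the bridge network: its subnet, gateway, and attached containers
docker network inspect bridge
```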

1. Host Networking:

Host networking shares the TCP/IP stack and network namespace of the host OS. That is, all of the network interfaces defined on the host are accessible to the container. The command below connects a container started from the microservice-demo image to the host network.

$ docker run --net=host microservice-demo
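To see that such a container really shares the host's interfaces, you can run a throwaway container on the host network (using the alpine image here purely as an example):

```shell
# The container sees the host's network interfaces, not an isolated namespace
docker run --rm --net=host alpine ip addr

# Compare with the host's own interfaces: the lists should match
ip addr
```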

2. Bridge Network:

The bridge network driver provides single-host networking. By default, containers connect to the bridge network. When a container starts, it is assigned an internal IP address, and all containers attached to the same bridge can communicate with one another. They cannot, however, be reached from outside the bridge network.

With the -p flag, however, we can publish a container port on a host port.

$ docker run -p 4000:80 microservice-demo
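Container-to-container communication is easiest to see on a user-defined bridge (the names my-bridge and web below are arbitrary). Unlike the default bridge, user-defined bridges also give containers DNS resolution by name:

```shell
# Create a user-defined bridge network
docker network create -d bridge my-bridge

# Start a web server on it
docker run -d --name web --network my-bridge nginx

# A second container on the same bridge can reach it by name
docker run --rm --network my-bridge alpine ping -c 2 web
```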

Both Host and Bridge Networking are contained within a single host.

3. Overlay Networking:

Overlay networking provides simple and secure multi-host networking. An overlay network uses VXLAN tunnels over the underlying (underlay) network.

Containers that are part of an overlay network can communicate with one another regardless of which host they run on; to each other, they appear to be on the same L2 network.

[Diagram: a Docker overlay network spanning multiple hosts (http://docker-k8s-lab.readthedocs.io/en/latest/_images/docker-overlay.png)]

Each container on an overlay network receives two IP addresses.

The first allows containers to communicate with each other across hosts on the overlay. The second maps to the VXLAN tunnel endpoint (VTEP) on the underlay network, which carries the actual encapsulated traffic between hosts.
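A minimal sketch of setting up an overlay network, assuming swarm mode is used for the required key-value coordination (the names my-overlay and demo are placeholders):

```shell
# Overlay networks require swarm mode (or an external key-value store)
docker swarm init

# Create an overlay network
docker network create -d overlay my-overlay

# Services attached to this network can reach each other across swarm nodes
docker service create --name demo --network my-overlay nginx
```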

4. Macvlan Driver:

Macvlan is a hardware-oriented way of networking: each container is given its own MAC address and, as a result, a full TCP/IP stack of its own. Each container therefore acts like a physical device directly connected to the underlying network.

[Diagram: multi-tenant 802.1q VLANs with the macvlan driver (https://docs.docker.com/engine/userguide/networking/images/multi_tenant_8021q_vlans.png)]

However, macvlan carries the additional overhead of network management, since each container is now a first-class citizen of the underlying network.
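Creating a macvlan network might look like the sketch below. The subnet, gateway, and parent interface must match your actual physical network; the values here are placeholders:

```shell
# Subnet, gateway, and parent interface are examples only;
# adjust them to your physical LAN
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 my-macvlan

# This container gets its own MAC address and an IP on the physical network
docker run --rm --network my-macvlan alpine ip addr
```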
