Docker uses an architecture called the Container Network Model (CNM) to manage container networking.
CNM is based on the following concepts:
- Sandbox: contains all the network resources the container will use; it is an environment isolated from other containers and from the host.
- Endpoint: consists of two network interfaces; one connects to the network sandbox of the container and the other to a designated network. A network sandbox may have many endpoints.
- Network: a group of endpoints that allows containers to communicate with each other.
An important thing to know is that CNM itself is not a protocol but an architecture; this means there are many implementations of CNM that cover different needs. The actual implementations of CNM are the Network Driver and the IPAM Driver.
Network Driver: the actual implementation of the CNM architecture; there are many drivers that cover various uses. Docker includes the following drivers by default:
- Bridge: allows containers on the same node to communicate.
- Host: allows containers to communicate with the node they run on.
- Overlay: allows containers on the same node or on different nodes to communicate.
- MACVLAN: each container network interface gets its own MAC address and uses a physical interface of the Docker host.
- None: a totally isolated container.
IPAM Driver: handles IP addressing for the endpoints and the networks.
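As a quick illustration of the IPAM driver at work, we can ask Docker's built-in IPAM driver to use a specific subnet and gateway when creating a network. This is only a sketch; the network name and addresses below are examples, not values used elsewhere in this article:

```shell
# Ask the built-in IPAM driver for a specific subnet and gateway
docker network create \
  --driver bridge \
  --subnet 172.28.0.0/16 \
  --gateway 172.28.0.1 \
  ipam-example-net

# Endpoints on this network will receive addresses from 172.28.0.0/16
docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' ipam-example-net
```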
How to create a bridge network and test that containers on this network can communicate with each other
For our test we will create two containers: one will use the image of nginx, a very popular web server, and the other will use the radial/busyboxplus:curl image, a minimal image that includes curl, a tool that can make HTTP requests.
We will consider our bridge network test successful if we manage to make an HTTP request from the busybox container to the nginx container and get content back from nginx.
Our first step is to create the bridge network on our Docker host. This bridge network will allow all containers on this host to communicate with each other; to accomplish this, the network will need a virtual network interface on the host.
Let's examine the current network interfaces of the host using the ip link command:
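On my host the output looks roughly like this (heavily abbreviated and illustrative; your interface names and flags will differ):

```shell
$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> ...
2: wlps0: <BROADCAST,MULTICAST,UP,LOWER_UP> ...
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> ...
```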
From the output we can identify three network interfaces: lo, wlps0, and docker0. The first two are host specific: the loopback and the Wi-Fi interface of the host.
The docker0 interface is the default bridge interface, created upon Docker installation. It backs the default bridge network, which is used if we don't define any network options when we run containers. Let's ignore it and create our own bridge network using the following command:
docker network create --driver bridge bridge-network-medium-test
We can see that this command created an interface named br-716f08d93a59; the suffix is random and will be different in your case. This interface will be the network interface of the bridge-network-medium-test network.
Let's have a look at the parameters:
- docker network create: the command to create a docker network
- --driver bridge: defines what kind of network we want to create; in this case, a bridge network
- bridge-network-medium-test: the name we chose for the network
Now let's try our network. We will create the first container with nginx and configure it to use the bridge network we just created.
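The exact command is not shown above; a minimal version, using the container name medium-nginx from the verification step below, would look like this:

```shell
# Start nginx attached to our custom bridge network
docker run -d --name medium-nginx --network bridge-network-medium-test nginx

# Verify that the container is up and running
docker ps --filter name=medium-nginx
```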
We can see from the output that we created an nginx container named medium-nginx, and we have verified that it is up and running.
Among the parameters there is the --network parameter followed by the name of the network we just created; this instructs Docker to attach the container to bridge-network-medium-test.
Now let's run the busybox container, attach it to bridge-network-medium-test, and make an HTTP GET request to port 80 of medium-nginx using the curl command.
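A minimal sketch of that command, using the radial/busyboxplus:curl image (which bundles curl) and the container name medium_busybox referenced below:

```shell
# curl the nginx container by name; the custom bridge network provides DNS resolution
docker run --rm --name medium_busybox --network bridge-network-medium-test \
  radial/busyboxplus:curl curl http://medium-nginx:80
```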
From the output we can see that the medium_busybox container could communicate with medium-nginx on port 80.
Now, if we create another busybox container that does not use the bridge-network-medium-test network and still try to reach medium-nginx, the HTTP request will fail.
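Sketched, assuming the same busybox image:

```shell
# No --network option: the container lands on the default bridge network,
# where automatic container-name DNS resolution is not available, so this fails
docker run --rm radial/busyboxplus:curl curl http://medium-nginx:80
```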
Note that in the command above we didn't use the network option, so the container defaults to the default bridge network we discussed before.
How to use a Host network
When using the host network, the container's networking stack is not isolated from the Docker host and the container shares the host's IP address. If the container binds to port 80, the application is available on port 80 of the Docker host.
Host networks are highly performant because there is no NAT between the host and the container.
A limitation of host networking is that two containers running on a Docker host cannot bind the same port. This might not be a problem if you intend to run only one container binding a specific port on the Docker host; if that is not the case, consider something like bridge or overlay networking.
To use a host network you need to use the --net host parameter.
Let's create an nginx container and test host networking:
$ docker run -d --net host --name host_nginx nginx
Now, if we run the curl command from the Docker host, we will receive the default nginx web page:
$ curl http://localhost:80
How to create an overlay network
An overlay network is recommended for Docker Swarm; it allows containers spread across a swarm cluster to communicate with each other.
Let's create a custom overlay network; enter the following on a swarm manager:
$ docker network create --driver overlay overlay-net-test
Now let's create two swarm services that use this network; the first will run an nginx container:
$ docker service create --name ovrly_nginx --network overlay-net-test nginx
Let's create a second service that will run curl and make an HTTP request to the nginx service; it will start a busybox container, make the HTTP request, and then wait for 3600 seconds:
$ docker service create --name ovrly_busybox --network overlay-net-test radial/busyboxplus:curl sh -c 'curl ovrly_nginx:80 && sleep 3600'
To verify that the HTTP request succeeded, we can check the logs.
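For example, using the service name created above:

```shell
# The curl output from the busybox service ends up in its service logs
docker service logs ovrly_busybox
```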
It is a matter of chance on which host each container will run; in my case the containers started on different hosts of the swarm. If we didn't use overlay networking, the containers would not be able to communicate with each other.
How to create a MACVLAN network and things you should be careful about
macvlan networking assigns a MAC address to the container's virtual interface, making it appear as a physical interface connected to the physical network. macvlan networks might be necessary for some applications that monitor network traffic. To do this, a physical interface of the Docker host needs to be dedicated to this purpose.
macvlan networking might turn problematic because of IP or MAC address exhaustion if you have a very large number of containers going up and down very often.
Also, your networking equipment needs to be able to handle "promiscuous mode", where one physical interface can be assigned multiple MAC addresses.
Lets try macvlan networking!
$ docker network create -d macvlan --subnet 192.168.0.0/24 --gateway 192.168.0.1 -o parent=ens5 macvlan-test
macvlan-test is the name of the macvlan network we created.
The -o parent=ens5 parameter dedicates the ens5 interface of the Docker host to the macvlan network.
The --subnet and --gateway parameters set the subnet and the gateway IP address of this macvlan network.
Now let's create two containers that will use this macvlan network:
$ docker run -d --name macvlan_nginx --net macvlan-test nginx
$ docker run --rm --name macvlan_busybox --net macvlan-test radial/busyboxplus:curl curl 192.168.0.2:80
Note that we know nginx has IP 192.168.0.2 because in macvlan mode IP addresses are assigned in order of creation.
We can verify this with docker inspect macvlan_nginx.
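A format filter can print just the assigned address instead of the full inspect output:

```shell
# Print only the container's IP address on its attached network(s)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' macvlan_nginx
```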
How to create a None network (and why do this?)
Containers that use the none network are network isolated; this means that no containers can reach them. This can be good practice if you run an application that does not need networking and you want to isolate it for security reasons.
Running the commands below will verify that the none_nginx container is not reachable from the busybox container:
docker run --net none -d --name none_nginx nginx
docker run --rm radial/busyboxplus:curl curl none_nginx:80
curl: (6) Couldn't resolve host 'none_nginx'
How to see network details
If we run docker network inspect bridge-network-medium-test, we can see the containers that use this network, along with details like the containers' assigned IP addresses, the network's gateway IP address, and the subnet it uses.
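For example, a format filter can list just the connected containers and their addresses:

```shell
# Show each connected container with its assigned IPv4 address
docker network inspect \
  -f '{{range .Containers}}{{.Name}}: {{.IPv4Address}}{{println}}{{end}}' \
  bridge-network-medium-test
```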
Other useful commands are:
- docker network ls: lists all networks
- docker network connect / disconnect: connects or disconnects an existing container to/from a network
- docker network rm / create: deletes or creates a network
I hope you find this article useful as a set of side notes when you work with Docker networking!