Fun with Docker - Part 5: Docker Networking

In Part 4, we talked about Volumes and Bind Mounts.  Now it's time to connect everything together (pun intended) with a discussion about networking in Docker, and more specifically in Docker Compose.

The Fun with Docker Series

Links to the entire series are here:

Yes!  My favorite topic!  Or, at least the one I'm most comfortable with.  Ironically, Docker Networking is also the part of Docker I've had to touch the least, because by default everything "just works," and I haven't had to mess with it much. That said, there are some cool things you can do with Docker Networking to customize your environment, and I'll go over some of the basics in this post.

Seriously, things just work

One thing you might have noticed after installing Docker is that it creates an interface on the host called docker0. You can see this on the host by typing ifconfig:

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:93:6e:af:4f  txqueuelen 0  (Ethernet)

This interface facilitates networking in Docker by default. Every container that you start using the docker run command will connect to this network unless otherwise specified. Containers on this network will receive an IP address on the 172.17.0.0/16 network with 172.17.0.1 as the default gateway.

To demonstrate this, let's bring up a new container using the BusyBox image, which is a scaled-down embedded Linux image that can be used for creating your own Docker containers (I may cover this in an advanced Docker series in the future). I'll start the container with this command:

pi@raspberrypi:~/docker $ docker run -it --name busybox busybox

To verify the IP address and default gateway, I'll use the ifconfig and ip route commands as below:

/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02  
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2771 (2.7 KiB)  TX bytes:494 (494.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # ip route
default via 172.17.0.1 dev eth0 
172.17.0.0/16 dev eth0 scope link  src 172.17.0.2

Finally, let's ping 8.8.8.8 to show the connectivity:

/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=52 time=12.315 ms
64 bytes from 8.8.8.8: seq=1 ttl=52 time=13.007 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 12.315/12.661/13.007 ms

This is what I meant by things "just work."

This default network is named bridge and can be viewed with the docker network ls command:

pi@raspberrypi:~ $ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
dfba4cd7f562        bridge              bridge              local
884acfd2d91d        host                host                local
50a0756f187c        none                null                local

Note: If you want to see a lot more detail about the bridge network, look at the output of: docker network inspect bridge
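For the curious, the interesting parts of that output look something like this (heavily abridged, and the exact fields can vary between Docker versions):

[
    {
        "Name": "bridge",
        "Driver": "bridge",
        "IPAM": {
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": { ... }
    }
]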

If we were to bring up another container, it would also receive an address on the bridge network and would be able to communicate with the other containers on it. For you networky types, this is essentially a VLAN that Docker brings up, with every container placed into it by default.
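As a quick sketch (assuming the first container received 172.17.0.2 as shown above), you could open a second terminal, start another BusyBox container, and ping the first one directly:

pi@raspberrypi:~/docker $ docker run -it --name busybox2 busybox
/ # ping -c 2 172.17.0.2

The replies come straight back, because both containers sit on the same docker0 bridge.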

Access from the outside

By default, Docker containers on the same bridge can communicate with each other and with the host on all TCP & UDP ports. But what if we need devices outside of the host or network to communicate with our containers? In most cases, this will be required.

You might remember the -p or the ports: options from previous posts. These options provide the connectivity that we need from outside of the host to the container.

Going back to our Nginx container from the previous post, we used the -p option on the docker run command to forward TCP port 80 on the host to TCP port 80 on the container. We also did the same with TCP port 443.

docker run --name nginx -v /var/www/html:/data/www -p 80:80 -p 443:443 -d nginx

This is how we accomplish the same thing in our docker-compose.yml file:

version: '3'
services:
  nginx: 
    image: nginx
    container_name: nginx
    volumes:
      - /var/www/html:/data/www
    ports:
      - 80:80
      - 443:443

Note: To forward UDP ports, you append /udp to the container port number.
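For example, if you were running a hypothetical DNS container, the ports section might look like this (just a sketch to show the syntax):

    ports:
      - 53:53/udp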

With these ports forwarded (or "published"), any traffic to http://<hostip> or https://<hostip> will be forwarded to the container IP.
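If you ever need to double-check what a container has published, the docker port command will list each mapping (I'll spare you the output here, since yours will reflect your own setup):

pi@raspberrypi:~ $ docker port nginx

It prints each container port along with the host address and port it is mapped to.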

If you have multiple containers that need to use the same port, you can map a different port on the host to the same port on each of the containers without a problem. For instance, if you had a second Nginx container listening on TCP port 80, you could forward TCP port 8080 to port 80 on the second container. Here's what that docker-compose.yml file would look like:

version: '3'
services:
  nginx: 
    image: nginx
    container_name: nginx
    volumes:
      - /var/www/html:/data/www
    ports:
      - 80:80
      - 443:443
  nginx2: 
    image: nginx
    container_name: nginx2
    volumes:
      - /var/www/html:/data/www
    ports:
      - 8080:80
      - 4443:443

With the above configuration, TCP port 8080 on the host will be forwarded to TCP port 80 on the nginx2 container.

Let's get a little crazy

So far, we've really only played with the default bridge network that Docker creates when it is installed.  There are a number of use-cases where you might need more than one bridge.  

Back in the early 2000s, as applications got more complex and distributed, network architectures started to evolve into tiers.  Companies would create a web server tier in an Internet DMZ that could only communicate with an application server tier behind it (along with the Internet).  The application server tier could only talk to the web server tier and the database tier behind it.

You can configure this kind of architecture using user-defined bridges in Docker.
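If you're not using Docker Compose, the plain CLI equivalent is docker network create plus the --network option on docker run (a quick sketch with placeholder names):

pi@raspberrypi:~ $ docker network create web-tier
pi@raspberrypi:~ $ docker run -d --name nginx --network web-tier nginx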

Note: Docker Compose makes this really easy, so I will focus on that method.

If we wanted our Nginx web server container to communicate with an application container, in this case the blogging platform called Ghost, and then have Ghost communicate with a MySQL database container, this is what our docker-compose.yml configuration could look like (don't try to use this configuration in production, as I've left some lines out of it for the sake of clarity):

version: '3'
networks:
  nginx:
  ghost:
  mysql:

volumes:
  mysql-volume:
  ghost-volume:
  nginx-volume:
  
services:
  nginx: 
    image: nginx
    container_name: nginx
    volumes:
      - nginx-volume:/data/www
    networks:
      - nginx
      - ghost
    ports:
      - 80:80
      - 443:443
  ghost:
    image: ghost:latest
    container_name: ghost
    restart: unless-stopped
    environment:
      url: https://ccie.tv
    volumes:
      - ghost-volume:/var/lib/ghost/content
    networks:
      - ghost
      - mysql
  mysql:
    image: hypriot/rpi-mysql
    container_name: mysql
    volumes:
      - mysql-volume:/var/lib/mysql
    networks:
      - mysql

You can see that the first thing we do is declare three networks (nginx, ghost, and mysql) so that Docker can create them (along with the Volumes that we're using). Next, we assign each container (or service) to the appropriate network(s). The Nginx container connects to the nginx network for outside connectivity over TCP 80/443, and to the ghost network for connectivity to Ghost. The Ghost container connects to the ghost network for connectivity to Nginx, and to the mysql network for connectivity to MySQL, and so on.

These networks are created when we issue the docker-compose up -d command:

pi@raspberrypi:~/blog $ docker-compose up -d  
Creating network "blog_ghost" with the default driver
Creating network "blog_nginx" with the default driver
Creating network "blog_mysql" with the default driver
Creating volume "blog_nginx-volume" with default driver
Creating volume "blog_ghost-volume" with default driver
Creating volume "blog_mysql-volume" with default driver

They can be seen with the docker network ls command:

pi@raspberrypi:~/blog $ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
cfb8f223db06        blog_ghost          bridge              local
4352d02f7c1c        blog_mysql          bridge              local
7a0e9bed6b2f        blog_nginx          bridge              local
e31cc0a8d4d9        bridge              bridge              local
884acfd2d91d        host                host                local
50a0756f187c        none                null                local

Note: Similar to Volumes, Docker Compose prefixes network names with the name of the directory that the docker-compose.yml file lives in.
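If you'd rather pick the names yourself, Compose file format 3.5 and later lets you set a name on each network explicitly (a small sketch; the name here is just an example):

networks:
  ghost:
    name: blog-ghost-net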

The beauty of this type of architecture is that there is no connectivity to Ghost or MySQL from outside of the host, because we haven't forwarded any ports to them.
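You can convince yourself of this from another machine on your network: Ghost listens on TCP 2368 by default, but since nothing is published, a connection attempt against the host simply fails (replace <hostip> with your Docker host's address):

$ curl http://<hostip>:2368

The same goes for MySQL's TCP 3306.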

Docker Compose extras

Docker Compose takes things to the next level and creates a bridge network for each docker-compose.yml file that it processes. This is the docker network ls output on my server:

[~/docker] root# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
6734819e981c        bridge              bridge              local
326af54db948        docker_default      bridge              local
acbb1eb51ca1        host                host                local
191cdd799fec        none                null                local

You can see above that we have a network named docker_default that Docker Compose created.

One benefit of user-defined (or Docker Compose-defined) networks is that containers can reach each other by container name instead of by IP address. This is really handy for our configuration: we don't have to worry about container IP addresses changing when we move containers around or upgrade them, because the name will always resolve to the running container.

Let's go back to our blog environment from earlier and attach to the Nginx container so that we can try ping ghost and ping mysql and see what happens.
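I'll use docker exec to get a shell inside the running container (a quick sketch; this assumes the image has bash available, which the official nginx image does):

pi@raspberrypi:~/blog $ docker exec -it nginx bash
root@e0c70f72ab67:/#

From inside the container, here's what the two pings look like: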

root@e0c70f72ab67:/# ping ghost
PING ghost (172.22.0.2) 56(84) bytes of data.
64 bytes from ghost.blog_ghost (172.22.0.2): icmp_seq=1 ttl=64 time=0.341 ms
64 bytes from ghost.blog_ghost (172.22.0.2): icmp_seq=2 ttl=64 time=0.237 ms
64 bytes from ghost.blog_ghost (172.22.0.2): icmp_seq=3 ttl=64 time=0.240 ms
^C
--- ghost ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 61ms
rtt min/avg/max/mdev = 0.237/0.272/0.341/0.051 ms
root@e0c70f72ab67:/# ping mysql
ping: mysql: Name or service not known

We can see that we can ping the Ghost container by name, but not the MySQL container. This is because the Nginx container is only connected to the outside world and the network with Ghost. If we connect to the Ghost container and try to ping the Nginx and MySQL containers, we get some better results:

root@6680c9215a3c:~# ping nginx
PING nginx (172.22.0.3): 56 data bytes
64 bytes from 172.22.0.3: icmp_seq=0 ttl=64 time=0.381 ms
64 bytes from 172.22.0.3: icmp_seq=1 ttl=64 time=0.327 ms
^C--- nginx ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.327/0.354/0.381/0.027 ms
root@6680c9215a3c:~# ping mysql
PING mysql (172.24.0.2): 56 data bytes
64 bytes from 172.24.0.2: icmp_seq=0 ttl=64 time=0.394 ms
64 bytes from 172.24.0.2: icmp_seq=1 ttl=64 time=0.324 ms
64 bytes from 172.24.0.2: icmp_seq=2 ttl=64 time=0.320 ms
^C--- mysql ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.320/0.346/0.394/0.034 ms

Whew...

Ok, this post was a lot longer than I had planned, but we managed to cover most of the basics of Docker networking and the options you have to allow your containers to communicate with each other as well as the outside world.

In the next post, I'll cover a really handy tool that you can use to manage and monitor your Docker installations.  Continue to Part 6.

If you enjoyed this post or series, please drop a note in the comments below or Tweet them at me @eiddor.  If you would like me to cover any other Docker topics, let me know!