So, now that we've got Docker installed, and we've tested it with a couple of containers and reviewed some basic commands, we need to get a little more involved in some actual use-cases. But first, I want to review some other concepts that will make your Docker experience a little easier.
The Fun with Docker Series
Links to the entire series are here:
- Part 1 - Docker - An introduction
- Part 2a - Getting started with Docker
- Part 2b - More getting started with Docker
- Part 3 - Docker Compose (this post)
- Part 4 - Docker Volumes (and Bind Mounts)
- Part 5 - Docker Networking
- Part 6 - Monitoring your Docker setup with Portainer
- Part 7 - Setting up this blog with Let's Encrypt, Nginx, and Ghost
The default way of starting Docker containers
Docker is cool - there, I said it.
Docker is also quite useful for things other than large production implementations. I covered my motivation for using it in Part 1, and you might have your own, but how do we actually use it?
By default, you rely on the `docker run` command to download, create, and start containers in a single step. There are a bunch of other Docker commands that you can use for more granular control of containers, but `docker run` is the quick and dirty way.
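For a sense of what those more granular commands look like, here's a rough sketch of roughly what a single `docker run` does, split into individual steps (using the hello-world image as an example):

```bash
# Roughly what one `docker run` does, broken into separate commands:
docker pull hello-world                   # download the image from Docker Hub
docker create --name hello hello-world    # create a container from the image
docker start -a hello                     # start the container and attach to its output
```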
So far, the examples I've shown were pretty simple and are mostly useful for testing things out, but as you saw in the CentOS example from Part 2b, the CLI commands can start to get pretty involved:
docker run --name centos-linux -d centos /bin/sh -c "while true; do ping 8.8.8.8; done"
This was still a relatively simple command for Docker. Here's an example of another fairly basic Docker command used to bring up Apache in a container:
docker run -dit --name container-name -p 8080:80 -v /home/user/website/:/usr/local/apache2/htdocs/ --restart unless-stopped httpd
Note: We've introduced a couple of new options with this command. Most containers will use these in some way. Containers on Docker Hub generally come with instructions on how to use them and which options are required.
- `-p` tells Docker to publish a port (TCP by default, but you can also use UDP) from the container to the host. I will cover Docker networking in a later post (yay networking!)
- `-v` maps a directory or file from the host to one on the container. This is used for both data sharing between containers and persistence. I'll cover this concept in the next post.
- `--restart` tells Docker whether or not to restart the container automatically depending on how it was stopped previously. This option is important if you want containers to automatically start when the host is rebooted.
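To make these options a bit more concrete, here's a minimal sketch that publishes a UDP port, mounts a host directory read-only, and sets a restart policy (the container name, paths, port, and image are just placeholders):

```bash
# Hypothetical example - publish a UDP port, mount a host directory read-only,
# and restart the container unless it was stopped manually:
docker run -d --name my-container \
  -p 5353:5353/udp \
  -v /home/user/config:/config:ro \
  --restart unless-stopped \
  some-image
```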
Here's an example of an even more involved `docker run` command (from the haugene/transmission-openvpn container):
docker run --cap-add=NET_ADMIN -d \
-v /your/storage/path/:/data \
-v /etc/localtime:/etc/localtime:ro \
-e CREATE_TUN_DEVICE=true \
-e OPENVPN_PROVIDER=PIA \
-e OPENVPN_CONFIG=CA\ Toronto \
-e OPENVPN_USERNAME=user \
-e OPENVPN_PASSWORD=pass \
-e WEBPROXY_ENABLED=false \
-e LOCAL_NETWORK=192.168.0.0/16 \
--log-driver json-file \
--log-opt max-size=10m \
--restart unless-stopped \
-p 9091:9091 \
haugene/transmission-openvpn
You can see from these examples how long the command can get. Whenever you want to start a new container based on your preferences, you also have to remember each of these options so that it will start properly and work as expected. Of course, you can always store these commands in a text file or script somewhere, but that's not practical and doesn't scale very well.
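For example, you might end up with a little wrapper script like this hypothetical sketch (based on the Apache command above) - one per container, each of which you have to keep in sync by hand:

```bash
#!/bin/bash
# start-web.sh - a hypothetical wrapper script for a single container.
# Every container you run ends up needing its own copy of a script like this.
docker run -dit --name container-name \
  -p 8080:80 \
  -v /home/user/website/:/usr/local/apache2/htdocs/ \
  --restart unless-stopped \
  httpd
```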
A tool that can help
Enter Docker Compose. Docker Compose helps us in a few ways: some are covered in this post, and others will be covered in the Docker networking post. The major features are listed on the official page.
The biggest benefit that I've found is that I can define my Docker and application options in a single place for all of the containers that I want to run in a given environment. I can also have multiple environments on a single host system, which adds a level of isolation beyond what Docker provides on its own.
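As a quick illustration (the directory and service names here are hypothetical), each directory with its own docker-compose.yml becomes its own Compose project, and the containers it creates are prefixed with that project's name:

```bash
# Hypothetical layout - each directory is a separate Compose project/environment:
cd ~/docker/media && docker-compose up -d        # creates containers like media_plex_1
cd ~/docker/monitoring && docker-compose up -d   # creates containers like monitoring_portainer_1
```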
A Docker Compose environment is defined in a YAML (YAML Ain't Markup Language) file called `docker-compose.yml`, which is where you list each of your containers along with any options that you want to use to run them.
Note: I'm one of those weirdos who actually likes YAML. Yes, it can be a pain in the ass, and one bad indentation can ruin your day, but still... I get it.
Here's how the `docker-compose.yml` file looks based on the command example above:
version: '3'
services:
  transmission:
    image: haugene/transmission-openvpn
    cap_add:
      - NET_ADMIN
    container_name: transmission
    devices:
      - /dev/net/tun
    restart: unless-stopped
    ports:
      - "9091:9091"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /your/storage/path/:/data
    logging:
      driver: json-file
      options:
        max-size: "10m"
    environment:
      - CREATE_TUN_DEVICE=true
      - OPENVPN_PROVIDER=PIA
      - OPENVPN_USERNAME=user
      - OPENVPN_CONFIG=CA\ Toronto
      - OPENVPN_PASSWORD=pass
      - LOCAL_NETWORK=192.168.0.0/16
      - WEBPROXY_ENABLED=false
Note: As you might know if you've worked with Ansible, YAML is very particular about indentation, so keep that in mind as you create and edit your own `docker-compose.yml` files.
You can think of this as the container configuration file that Docker will parse prior to starting your containers. It contains not only Docker options, but also application options via the `environment:` field that can be pushed to the application while it starts.
Once your `docker-compose.yml` file is created, you simply run `docker-compose up -d` and wait for the magic to happen. Docker Compose will parse the file, download any container images that don't already exist, and then start each of the containers listed in the file. The `-d` option will detach the containers so that they run in the background.
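Since the containers are detached, you won't see their output directly; if you want to peek at it, Docker Compose has a logs command (a quick sketch below):

```bash
# Follow the output of all containers defined in the docker-compose.yml
# in the current directory (Ctrl-C stops following, not the containers):
docker-compose logs -f
```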
Let's install Docker Compose, create a `docker-compose.yml` file, and test things out.
Installing Docker Compose
Due to the architecture differences between the Raspberry Pi and other platforms (I'm assuming that you're running Ubuntu in a VM of some sort), there are two ways to install Docker Compose.
Note: Docker for Mac already includes Docker Compose
On Ubuntu, it's quite simple. The first step is to download the file from Docker and save it to `/usr/local/bin` - the command below will grab the correct build and architecture:
sudo curl -L https://github.com/docker/compose/releases/download/1.29.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
Next, you need to set the newly downloaded file to be executable:
sudo chmod +x /usr/local/bin/docker-compose
Note: The steps above will download the most recent version (1.29.2) as of the date of this blog post. For the latest version, please go to this page, where you will find the same commands as above, but for the newest version.
On the Raspberry Pi, we need to use pip, the package installer for Python.
First, let's install some dependencies:
sudo apt install -y libffi-dev libssl-dev python3-dev python3 python3-pip
Finally, we'll install Docker Compose with pip.
sudo pip3 install docker-compose
And that's it! You can type `docker-compose version` to verify that everything installed correctly.
root@ccie:~/docker# docker-compose version
docker-compose version 1.29.2, build c4eb3a1f
docker-py version: 4.4.4
CPython version: 3.7.10
OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019
Hello World revisited
The first thing we should do is create a directory to work in - this directory will hold our `docker-compose.yml` file.
Now, let's go back to the last post with our hello-world image, and create a `docker-compose.yml` file inside of this directory that looks like this:
version: '3'
services:
  hello-world:
    image: hello-world
Note: The format of the `docker-compose.yml` file should be pretty self-explanatory since it's in YAML, but you can read more detail on its structure here.
Next, let's run `docker-compose up` and see what happens.
pi@raspberrypi:~/hello-world $ docker-compose up
Creating network "hello-world_default" with the default driver
Pulling hello-world (hello-world:)...
latest: Pulling from library/hello-world
c1eda109e4da: Pull complete
Digest: sha256:6540fc08ee6e6b7b63468dc3317e3303aae178cb8a45ed3123180328bcc1d20f
Status: Downloaded newer image for hello-world:latest
Creating hello-world_hello-world_1 ... done
Attaching to hello-world_hello-world_1
hello-world_1 |
hello-world_1 | Hello from Docker!
hello-world_1 | This message shows that your installation appears to be working correctly.
hello-world_1 |
hello-world_1 | To generate this message, Docker took the following steps:
hello-world_1 | 1. The Docker client contacted the Docker daemon.
hello-world_1 | 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
hello-world_1 | (arm32v7)
hello-world_1 | 3. The Docker daemon created a new container from that image which runs the
hello-world_1 | executable that produces the output you are currently reading.
hello-world_1 | 4. The Docker daemon streamed that output to the Docker client, which sent it
hello-world_1 | to your terminal.
hello-world_1 |
hello-world_1 | To try something more ambitious, you can run an Ubuntu container with:
hello-world_1 | $ docker run -it ubuntu bash
hello-world_1 |
hello-world_1 | Share images, automate workflows, and more with a free Docker ID:
hello-world_1 | https://hub.docker.com/
hello-world_1 |
hello-world_1 | For more examples and ideas, visit:
hello-world_1 | https://docs.docker.com/get-started/
hello-world_1 |
hello-world_hello-world_1 exited with code 0
You'll notice a couple of differences from the `docker run` method:
- The `docker-compose.yml` is automatically read from the current directory without you having to specify it in the command.
- A Docker network is automatically created (we'll cover this in a future post).
- The output from the container is prepended with `hello-world_1`. This is because Docker Compose can run multiple containers from a single `docker-compose.yml` file, so the output is shown with its corresponding container name for clarity (see the sketch after this list).
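For example, here's a minimal sketch of a `docker-compose.yml` with two services (the service names and images are just for illustration); Compose would start both and prefix each line of output with the corresponding container name:

```yaml
version: '3'
services:
  web:
    image: httpd     # output would show as web_1
  proxy:
    image: nginx     # output would show as proxy_1
```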
Another example
Going back again to our last post, let's take the CentOS example and create a `docker-compose.yml` file based on what we ran on the CLI.
Again, you should create this in a different directory from the Hello World example.
version: '3'
services:
  centos:
    image: centos
    container_name: centos-linux
    command: /bin/sh -c "while true; do ping 8.8.8.8; done"
The differences from the Hello World example are minor:
- we're giving the container a name for clarity
- we're using the `command:` field to specify the ping loop
Now we have two options: we could issue a `docker-compose up -d` to start the new container and run it in the background, or we can run without the `-d` option and see the output from the container. Let's go with the first option so that we can review some other commands:
pi@raspberrypi:~/centos $ docker-compose up -d
Creating network "centos_default" with the default driver
Pulling centos (centos:)...
latest: Pulling from library/centos
193bcbf05ff9: Pull complete
Digest: sha256:a799dd8a2ded4a83484bbae769d97655392b3f86533ceb7dd96bbac929809f3c
Status: Downloaded newer image for centos:latest
Creating centos-linux ... done
As with the Hello World example, we can see a new network created as well as the usual Docker mechanics of the container image downloading and starting. Let's verify with a `docker-compose ps` (no line-wrapping workarounds required with this one!):
pi@raspberrypi:~/centos $ docker-compose ps
Name Command State Ports
-------------------------------------------------------------
centos-linux /bin/sh -c while true; do ... Up
We can see our container with the name we gave it along with the command that is running inside of it.
Now, let's stop the container with a `docker-compose down`:
pi@raspberrypi:~/centos $ docker-compose down
Stopping centos-linux ... done
Removing centos-linux ... done
Removing network centos_default
Note: If you don't specify a service name with `docker-compose up`, it will act on all of the containers listed in `docker-compose.yml`; you can add a service name to focus on just one container. `docker-compose down` always tears down the whole environment, so to stop a single service, use `docker-compose stop` with the service name.
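For example, using the `centos` service from the file above, a quick sketch of targeting just one service:

```bash
# Start (or recreate) only the centos service in the background:
docker-compose up -d centos

# Stop just the centos service, leaving any other services running:
docker-compose stop centos
```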
Now, let's verify that the container has stopped:
pi@raspberrypi:~/centos $ docker-compose ps
Name Command State Ports
------------------------------
Let's use the second option that I mentioned and start the container without detaching it.
Note: Use Ctrl-C to stop a container while you're attached to it.
pi@raspberrypi:~/centos $ docker-compose up
Creating network "centos_default" with the default driver
Creating centos-linux ... done
Attaching to centos-linux
centos-linux | PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
centos-linux | 64 bytes from 8.8.8.8: icmp_seq=1 ttl=52 time=260 ms
centos-linux | 64 bytes from 8.8.8.8: icmp_seq=2 ttl=52 time=20.2 ms
centos-linux | 64 bytes from 8.8.8.8: icmp_seq=3 ttl=52 time=288 ms
centos-linux | 64 bytes from 8.8.8.8: icmp_seq=4 ttl=52 time=125 ms
centos-linux | 64 bytes from 8.8.8.8: icmp_seq=5 ttl=52 time=20.7 ms
^CGracefully stopping... (press Ctrl+C again to force)
Stopping centos-linux ... done
Voila! You've just created a couple of Docker Compose environments and learned how to stop and start them!
But that's not all!
One of the niftiest features of Docker Compose for me is that it will download updates to your container images from the Docker Hub with a single command. It will then update and recreate any containers as required, again with a single command.
To pull image updates for all containers listed in a `docker-compose.yml` file, you run `docker-compose pull` in the appropriate directory:
[~/docker] root# docker-compose pull
Pulling jackett ... done
Pulling portainer ... done
Pulling plex ... done
Pulling letsencrypt ... done
You can't tell from the output above, since every pull simply shows as done, but Docker Compose downloaded updated images for my letsencrypt and jackett containers.
To update the containers listed in a `docker-compose.yml` file, you simply run `docker-compose up -d` again:
[~/docker] root# docker-compose up -d
plex is up-to-date
portainer is up-to-date
Recreating letsencrypt ... done
Recreating jackett ... done
As you can see above, Docker Compose automatically updated and recreated only the containers that required updating.
You should then run a `docker image prune` to clean up any old images and free up space on your host:
[~/docker] root# docker image prune
WARNING! This will remove all dangling images.
Are you sure you want to continue? [y/N] y
Deleted Images:
untagged: linuxserver/jackett@sha256:bc18aa2f1157c7b9624c564ca4b86a257f0cca79f40959c059420756b3337af8
deleted: sha256:f8e7c951e40b0c413b8fbd8c5dc33d53f40c0d7f054585cfcc2fa03c9733660e
deleted: sha256:e09c6e375f26b8c3bd835a64468e717ce14d26d1bf2b7b91913d04f290c47053
deleted: sha256:b8e46f85db6581cb3a938114c81333f25cae739bf62b60d13bb2d3c4d8d1c9b0
untagged: linuxserver/letsencrypt@sha256:bc7a14ee5f3c309764df15568e9589eef06e509814367cb7e68f281940cbd648
deleted: sha256:8d16d83a9e8c83220f4ec7008fc456c942575cdc200e471d99d9b98dadc98c72
deleted: sha256:24ab74b28845235d805dda87df25b17ef0b7a9029f3413520d8d92f29b3b8391
deleted: sha256:b714a0a5a95d86dd28e39229338432e9255d973dedb277cc5b4181d789c408a9
Total reclaimed space: 349.4MB
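If you do this regularly, you can chain the three steps together; here's a one-liner sketch (the `-f` flag skips the prune confirmation prompt):

```bash
# Pull newer images, recreate any containers whose images changed,
# then remove the now-dangling old images without prompting:
docker-compose pull && docker-compose up -d && docker image prune -f
```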
Summary
There are many tools available to manage and monitor Docker containers on your host, but I've found Docker Compose to be the most useful and straightforward. I also use Portainer (running in a container, oddly enough) to keep an eye on things, but I will cover that in a future post.
Next up - Volumes in Docker (and Docker Compose). Continue to Part 4.
I hope you found this post and series valuable! Feel free to leave a comment below if you would like to see more or have any topic suggestions for future posts. Or Tweet them at me @eiddor