Just a note - As discussed here, I'm primarily a Network nerd, so a lot of this information is new to me and I'm always on a learning journey. This post (and blog) is in no way definitive or to be taken as gospel.  I'm just relating what I've discovered so far and my perspective on this technology.

What is Docker?

A Google search for this question gives us this clear-as-mud answer:

[Image: Google result for "What is Docker?"]

Not very helpful.

Is Docker virtualization?

Yes & no - It's not like VMware, VirtualBox, or KVM.  Docker does not emulate hardware allowing you to install an operating  system and run a full virtual machine.  Those systems certainly have their uses, but in some cases can be overkill for simpler tasks and services, especially if they need to communicate with each other.

So what is it?

As the muddy answer above mentions, Docker provides OS-level virtualization. It lets you run your software in self-contained environments, aptly called "containers," which are separated and segmented from the underlying OS (called the Host OS) and from other containers, while still allowing explicit connectivity via networking and file systems (volumes) as required.
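To make that isolation concrete, here's a minimal sketch using the Docker CLI (the image tag is just an example):

    # Start a throwaway Ubuntu container; --rm deletes it when you exit
    docker run --rm -it ubuntu:18.04 bash

    # Inside the container, install whatever you like - the Host OS is untouched
    apt update && apt install -y apache2

    # Leave the shell, and the container (and everything installed in it) is gone
    exit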

This gives us the ability to create sandboxes to test applications and services without leaving a mess behind in the Host OS. After testing, you can run those services in their containers permanently or move them to different Hosts with relative ease, making them quite portable and easy to manage.  Volumes are also persistent (discussed in the next post), making migration and upgrades super easy and safe.
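As a quick sketch of what that portability and persistence look like (the container name, image, and paths here are made up for illustration):

    # Map a Host directory into the container: anything the app writes to
    # /config inside the container lands in /opt/docker/myapp on the Host
    # and survives the container being destroyed
    docker run -d --name myapp -v /opt/docker/myapp:/config myimage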

Why do I need it?

That's a more complex question. The best answer I can give is to describe what led me to use Docker and how I use it.

I've run a bunch of personal servers for many years, and they've evolved over time. I started with bare-metal servers running FreeBSD at home and at a colocation facility in 1997, and I've migrated them to various other platforms and environments since then.

Today, I have an Ubuntu VM at home running on a VMware host, and an Ubuntu instance that I run in AWS. These servers are for personal use and learning, not production, so I'm frequently messing with them to test various software packages and services. I also have a whole slew of Raspberry Pis at home that I use for the same purposes.

Migration challenges

The process of upgrading or migrating these servers has never been easy.  Whenever it was time to upgrade, I'd have to go through this routine:

  • bring up the new server
  • gather a list of the applications and services running on the old server (not an easy task)
  • install or build each application, along with dependencies
  • figure out where each service or application kept its data and settings on the old server
  • copy that data and settings from the various locations to the new server
  • hope and pray that the applications would start and be happy on the new server
  • keep the old server around for a while just in case I missed something
  • spend a lot of time trying to figure out why things weren't working (typically due to behavior changes between application/OS versions)

For instance: years ago, Apache 1.x kept its default document root in /usr/local/share/htdocs/, which eventually moved to /usr/local/httpd/htdocs, which eventually moved to /usr/local/apache/htdocs, which eventually moved to /usr/local/apache2/htdocs, which eventually moved to /usr/local/www/data, which eventually moved to /var/www/data/html, and well... you get the idea.

And that was just where the website files were kept. Apache modules and configuration files also moved around over that time, making each upgrade/migration a pain, which in turn made me put off upgrades, sometimes for years.

A more modern-day problem: some of the services I use on my servers (Sonarr/Radarr) require Mono in order to run properly, which I have installed via the Ubuntu apt package system. Every week or so, I run sudo apt update and sudo apt upgrade to keep packages current, which means Mono gets upgraded automatically as new versions are released. Not infrequently, however, a new Mono version is incompatible with the services that depend on it, breaking them and, worse, leaving me to figure out how to downgrade Mono. Painful.

The answer for me

My solution? Run these services in Docker containers. These particular containers are built with a compatible version of Mono and the other dependencies (like SQLite), making them ready to run without me having to install anything on the underlying Host. When it comes time to upgrade a service, I simply destroy the container, download the latest image, and start it again. Any settings or customizations I've made are saved in volumes, which persist on the Host. If an upgrade requires different path locations, it's simply a matter of changing those volume mappings in my settings file (typically one line) and starting the container.
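In practice, the upgrade cycle looks something like this. A rough sketch only: the container name, volume path, and the linuxserver/sonarr community image are illustrative, not necessarily how you'd run it:

    # Stop and destroy the old container (settings live in the volume, so nothing is lost)
    docker stop sonarr && docker rm sonarr

    # Download the latest image
    docker pull linuxserver/sonarr

    # Start a fresh container from the new image, with the same volume mapping
    docker run -d --name sonarr -v /opt/docker/sonarr:/config linuxserver/sonarr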

Even better, moving an application to a new host is simply a matter of copying my Docker Compose configuration file (explained in a future post) and the mapped volumes, which I keep in one easy-to-remember place, and firing up the container on the new Host. No more manually parsing the output of sudo apt list --installed to figure out what needs to be installed, and no more re-reading the applications' install documents to make sure I have all of the dependencies straight.

And as an added bonus: no more confusion or futzing with startup scripts whenever Ubuntu changes its default init system (SysV -> Upstart -> systemd).
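To give a flavor of that Compose file before the dedicated post, here's a minimal sketch; the image, port, and paths are placeholders rather than my actual config:

    version: "3"
    services:
      sonarr:
        image: linuxserver/sonarr        # a community-built image, as an example
        ports:
          - "8989:8989"                  # Sonarr's default web UI port
        volumes:
          - /opt/docker/sonarr:/config   # all settings live here on the Host
        restart: unless-stopped          # restarts on reboot - no init scripts to futz with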

Another use-case

As I mentioned, I have a bunch of Raspberry Pis that I use for various things. One of those functions is to test Docker containers and configurations before moving them to their permanent home. Once I'm finished customizing and testing, I can simply copy the contents of my docker-compose.yml file to the existing one on my main host and just start the new container to have it up and running in seconds.
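So the "move it to its permanent home" step boils down to roughly this (the service name is illustrative):

    # On the main host, after pasting the tested service block into docker-compose.yml
    docker-compose up -d sonarr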

So what now?

In my next post, I'll share instructions on installing and setting up Docker on both Ubuntu 18.04 and Raspberry Pi.

Continue to Part 2a.

Are you using Docker for non-production uses?  What images do you use?  Comment and discuss below, or Tweet them at me @eiddor!