Docker makes it easy to test software in different Linux environments (different distributions, distribution versions, architectures) by enabling the creation of lightweight throw-away containers.
Here is an example of how Docker can be used to ensure the compatibility of a Debian package with several versions of Debian. Suppose we are packaging a Linux service for Debian and would like to test whether it functions correctly under both the squeeze and wheezy versions.
Packaging a service for Linux is somewhat error-prone because the package manager does not normally provide a standard facility to create/delete the service user (this is commonly done in the postinst and prerm/postrm maintainer scripts), and because of all the distribution-specific cruft surrounding the service lifecycle (hopefully this will change in the future with the widespread adoption of systemd).
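For illustration, here is roughly what the user-creation part of such a postinst could look like. This is only a sketch: the "proactive" user name and home directory are placeholders, and a real package would need matching cleanup in its postrm.

```shell
#!/bin/sh
# Sketch of a Debian postinst fragment (illustrative names only).
set -e

case "$1" in
    configure)
        # Create a dedicated system user/group on first install;
        # getent guards against re-creating it on upgrades.
        if ! getent passwd proactive >/dev/null; then
            adduser --system --group --no-create-home \
                --home /var/lib/proactive proactive
        fi
        ;;
esac

exit 0
```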
Here are the things we would like to test:
- the dependencies of the package are correct (satisfiable)
- the service user and group are correctly created upon installation
- the service is started upon installation
- the permissions of the files/directories are correct
- the service executes under the correct user
- the output is properly logged to a log file
- service start/stop work
- on uninstall, the service is stopped
- on uninstall, the user/group are deleted
- on uninstall, package files are removed but the config files are left intact
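A sketch of how part of this checklist might translate into a script; the service name proactive-agent, the user proactive, and the file paths are illustrative placeholders, not the actual package's names:

```shell
#!/bin/sh
# check-package.sh -- sketch of automated post-install checks
# (all names below are placeholders for illustration)

check_user()    { getent passwd proactive >/dev/null; }         # user created
check_group()   { getent group proactive >/dev/null; }          # group created
check_running() { pgrep -u proactive -f proactive-agent >/dev/null; }  # runs as the right user
check_perms()   { [ "$(stat -c %U /var/lib/proactive)" = proactive ]; }
check_log()     { [ -s /var/log/proactive/agent.log ]; }        # output logged
check_stop()    { service proactive-agent stop && ! check_running; }

# Only run the checks when explicitly asked: they require the
# package to be installed, e.g. inside the test container.
if [ "${RUN_CHECKS:-no}" = yes ]; then
    check_user && check_group && check_running \
        && check_perms && check_log && check_stop \
        && echo "all checks passed"
fi
```

Inside a container with the package installed, this would be invoked as `RUN_CHECKS=yes sh check-package.sh`.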
When developing the packaging procedure, we are unlikely to get everything on the list above right the first time. We need to iterate through the loop:
- change the packaging procedure
- re-create the package
- install the package
- perform the tests
Naturally, on each iteration we would like our target environment (the Debian system) to be pristine. We also want several environments: one for each Debian version we are testing. This is where Docker comes into play. It makes it super easy and fast to return to the same system state, as many times as needed.
Creating the "pristine" images
We could grab the Debian images from the Docker Hub, but we are truly paranoid, so we’ll create the images ourselves using the trusty old debootstrap:
# debootstrap squeeze ./squeeze
Docker makes it easy to import a root filesystem into an image:
# tar -C ./squeeze -c . | docker import - activeeon/debian-squeeze:pristine
The image named activeeon/debian-squeeze:pristine now appears in the local list of Docker images and can be used to start containers:
# docker images
REPOSITORY                 TAG        IMAGE ID       CREATED          VIRTUAL SIZE
activeeon/debian-squeeze   pristine   e08a54c4a759   2 minutes ago    198.8 MB
The Docker Hub enables sharing of Docker images. We can make our image available to others with a single command:
# docker push activeeon/debian-squeeze:pristine
We repeat the same steps for wheezy.
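Since the three commands are identical except for the suite name, they could be wrapped in a small helper, sketched below (requires root for debootstrap and access to a Docker daemon):

```shell
# build_pristine SUITE: bootstrap a Debian suite, import it as a
# "pristine" Docker image, and push it to the Docker Hub
build_pristine() {
    suite="$1"
    debootstrap "$suite" "./$suite"
    tar -C "./$suite" -c . | docker import - "activeeon/debian-$suite:pristine"
    docker push "activeeon/debian-$suite:pristine"
}

# build_pristine wheezy    # uncomment to run (needs root and docker)
```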
Starting the container
For manual testing, we just run the interactive shell:
# docker run -v /src/proactive:/src/proactive -ti activeeon/debian-squeeze:pristine /bin/bash
root@d824ff1de395 #
We specified the image to use and the shell to execute. We also used Docker's volumes feature to make the host directory containing the package sources available inside the container.
Now from within the container, we can build and install our package:
root@d824ff1de395 # make -C /src/proactive/linux-agent/ package
root@d824ff1de395 # dpkg -i /src/proactive/linux-agent/linux-agent_1.0_all.deb
root@d824ff1de395 # ps -ef    # check that the service is started, etc.
Found a problem and need to repeat? Here is where Docker shines: to scrap the changes made by installing the package and return to the original state, all we need to do is exit the shell (which stops the container) and execute the “run” command again:
root@d824ff1de395 # exit
# docker run -v /src/proactive:/src/proactive -ti activeeon/debian-squeeze:pristine /bin/bash
root@9427e7d35937 #
Notice that the id of the container has changed. It is a new container started from the same image, and it knows nothing of the changes made inside the previous one. All in the blink of an eye!
It is also pretty straightforward to automate the manual tests above: we could replace the interactive shell invocation with a script that performs the installation and verification. This would, for example, let us test our package in a continuous integration system such as Jenkins. If we publish our image to the Docker Hub, Docker transparently handles delivering it to the Jenkins slaves for us.
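Sketched as a shell helper (run-tests.sh is a hypothetical script that would perform the install/verification steps inside the container):

```shell
# Run the (hypothetical) test script inside a fresh container of each
# pristine image. Any non-zero exit status aborts, which is exactly
# what a CI job such as Jenkins needs to mark the build as failed.
run_package_tests() {
    for suite in squeeze wheezy; do
        docker run -v /src/proactive:/src/proactive \
            "activeeon/debian-$suite:pristine" \
            /bin/sh /src/proactive/linux-agent/run-tests.sh || return 1
    done
}

# run_package_tests    # e.g. as a Jenkins build step (requires docker)
```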
Of course the Docker container is not a full system (no other services are running inside), so some things cannot be tested, such as the interaction with other services. But most things can be, and the fast turnaround times provided by Docker make it a pleasant experience.
To summarize, here are the features of Docker we have used:
- creating images given the root filesystem
- image naming and indexing to manage the set of images on the local machine
- publishing images to the Docker Hub to share with others / use on other machines
- instantaneous creation of throw-away environments for testing