Friday, March 6, 2015

ProActive and R: a Machine Learning Example

The ProActive Scheduler is open-source software for orchestrating, scaling and monitoring tasks across many hosts. It supports several languages, among them R, the environment for statistical computing and graphics. R is known for computationally intensive workloads, so you can write your R scripts on a laptop and execute them on different, more powerful machines.

Docker container for portability and isolation

At Cloud Expo Europe in London you can see an exciting, heavily developed feature of the ProActive Scheduler: Docker container support. Running tasks in containers increases isolation between tasks and provides self-defined environments in which to run them. Taken further, containers could replace plain tasks altogether, running your whole environment inside a container. The possibilities are endless, and you do not have to worry about error recovery, network outages or the other complications of distributed environments, because the ProActive software deals with that.

Machine Learning with ProActive and R: Local Setup

Here are a few steps to install and run ProActive and finally execute an R script with the ProActive Scheduler. The following steps were done on Ubuntu.

Requirement: Installing the R Environment and rJava

Install the R environment and the rJava package by typing:

    # sudo apt-get install r-base r-cran-rjava

Download ProActive

  1. Create an account on
  2. Download the current ProActive Workflows & Scheduling
  3. Unzip
  4. Download the ProActive-R-Connector (
  5. Unzip into the ‘ProActiveWorkflowsScheduling-linux-x64-6.1.0/addons’ folder

That's it: you just installed ProActive with R support.

Start ProActive Server


# ./ProActiveWorkflowsScheduling-linux-x64-6.1.0/bin/proactive-server

The default settings will run the ProActive Scheduler with 4 local nodes.

Note: ProActiveWorkflowsScheduling-linux-x64-6.1.0 is the ProActive home directory; it might be named differently if you downloaded a newer version.

Wait until you see “Get started at” showing the link to access the web-interface.

Start the ProActive Studio

The interface shows three circles; the leftmost orange circle links to the ProActive Studio, which is used to create and execute workflows. Click it to open the Studio and log in with username admin and password admin.

Create an R task

After creating and opening a workflow, the interface shows a 'Tasks' drop-down menu; select 'Language R' to create an R task.

Add your R code

Add your R code; an altered example can be downloaded from
Add your code under the "Execution" menu, which appears after selecting the R_Task.

Note: when R executes on another machine, that machine must have all necessary packages installed and loaded. Ensure this by installing the packages in advance; it can be done within the script itself by specifying the library and a CRAN mirror.

Add datasets to R_Task

The script loads the SP500_Shiller dataset, which you can download here. The R script will be sent to one ProActive node, and to ensure that the node has the data, we need to declare the data dependency in the 'Data Management' settings of the R_Task. Specify SP500_Shiller.csv as an input file from user-space. The file must be copied to ProActiveWorkflowsScheduling-linux-x64-6.1.0/data/defaultuser/admin, which is the user-space of the admin user.
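The copy into user-space can be sketched from a shell, assuming you run it from the directory containing the ProActive home; the `touch` below is only a stand-in for the real downloaded CSV:

```shell
# Stand-in for the downloaded dataset; replace with the real SP500_Shiller.csv
touch SP500_Shiller.csv

# Copy it into the admin user-space so the node can fetch it as an input file
PA_HOME=ProActiveWorkflowsScheduling-linux-x64-6.1.0
mkdir -p "$PA_HOME/data/defaultuser/admin"
cp SP500_Shiller.csv "$PA_HOME/data/defaultuser/admin/"
ls "$PA_HOME/data/defaultuser/admin"
```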

Specify output file inside Scheduler

The R script outputs an image, 'ml-result.png'. To see the result, we need to tell the task to copy it into our user-space after the task finishes. That is done by adding ml-result.png as an output file to user-space.

Access result

To see the result, open ProActiveWorkflowsScheduling-linux-x64-6.1.0/data/defaultuser/admin – the user-space of the admin user – which contains 'ml-result.png' after the R script has finished.

Workflows for Big Data

One of ActiveEon's most remarkable contributions to the French project DataScale is the ability to execute ProActive workflows on HPC platforms. Why are these workflows so interesting? They have lots of features! Some that come to mind:
  • Workflow data management mechanisms (take and bring your files from one task to another without the need for a shared file system)
  • Workflows are made of tasks, which can be native tasks (executing installed applications)
  • Tasks can also be implemented in OS-independent scripting languages: Groovy, Ruby, Python, Java, R, JavaScript, and more to come...
  • Tasks support dependencies (don't execute Y unless X finished), replication (execute this task N times in parallel), loops (keep executing this task while a condition holds), and conditionals (execute this task, or that one, depending on a condition)
  • Error handling at job and task levels, with different re-scheduling policies (what to do if your task fails?)
  • Inter-task variable passing mechanisms (tasks communicate with each other through variables)
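As a rough illustration of the dependency and replication semantics above (this is not ProActive syntax, which is declarative; plain bash merely stands in for the behavior, and the task names are hypothetical):

```shell
# Two hypothetical tasks
task_x() { echo "X done"; }
task_y() { echo "Y done"; }

# Dependency: Y does not execute unless X finished successfully
task_x && task_y

# Replication: execute the same task 3 times in parallel, then join
for i in 1 2 3; do
  ( echo "replica $i finished" ) &
done
wait
```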

By executing these kinds of workflows, helped by predefined workflow templates, your use case can be tackled easily. For a more complete overview of our features, please try our product at  

One example of a use case is presented in the following demo video (enable subtitles for an explanation). Here we show how ProActive Workflows & Scheduling can be used in the Big Data and HPC domains to:
1. Write any kind of workflow (involving tools like Hadoop, Unix command-line tools, and even custom scripts in Groovy).
2. Execute those workflows on an HPC platform.
3. Follow the execution of those workflows (task output, logs, execution time, execution node, etc.).
4. Run a workflow that prepares TXT book files for processing, word-counts them (using Hadoop), generates a report, and uploads that report to the cloud to make it public.

Maybe we can also help you boost your productivity, with ProActive!

Monday, March 2, 2015

Slurm, Big Data, Big Files on Lustre Parallel Distributed File System, and ActiveEon's Workflows

See the original post at

This blog post describes the Big Data solutions we chose for the French project DataScale and, more importantly, why we chose them.
Let's first walk through ActiveEon and its participation in DataScale.

ActiveEon & DataScale

ActiveEon has been in the market since 2007, helping customers optimize the use of their own compute infrastructures.
We often provide solutions to problems like under-exploitation of compute resources, business workflows that are too slow and could be enormously accelerated, or teams spending too much time on infrastructure management rather than on their own business. We do that. But lately we have increasingly been hearing about the same problem: Big Data being processed on regular HPC platforms. Research teams with growing amounts of data, plus a platform that somehow needs to evolve to handle it better. So we decided to join the DataScale project to make our product capable of answering those questions.
But not so fast. In this situation, data management, hierarchical storage, fast distributed file systems, efficient job schedulers, flexible and research-oriented workflow engines, infrastructure isolation and security are just some of the concepts that must come to mind, especially if you plan to evolve your platform to satisfy most of these requirements properly. But do not panic. Please do not. We have a good solution (as we always do). In this article we explain why DataScale and its tools may guide you towards the light.
You start from a regular HPC cluster: several hosts, hundreds of cores, good network connectivity, lots of memory. You probably use your own scripts to launch processes, and have some sort of mechanism for data synchronization. In the best case you have a native scheduler that eases your life a bit. Believe it, that is what we encounter at many of our customers. For people who now want to go Big Data, here are some tips.

What file system?

Big Data requires big throughput and scalability. So better think about Lustre. Lustre is a parallel distributed file system, used in more than 50% of the TOP100 supercomputers. That seems to us like a good argument already. But there is more.
Clients benefit from its standard POSIX semantics, so on the client side you can work with it pretty much like you would with EXT4, NFS and other familiar file systems. That means less time learning new things for people who just want to use it. But why not use my business application's specific file system? Although that is a valid option, we consider that most teams exploit infrastructures with several business applications rather than just one. Sticking to a single file system that is incompatible with your other applications would probably have an important impact on your work process, one way or another. For instance, imagine data created in one specific file system, say HDFS, that needs to be read by a different application incompatible with HDFS. Before proceeding, we would have to migrate this data to a different file system, say NFS, so that it can be used by the next tool in the workflow... difficult and, in the end, time consuming. Why do that if performance can at least be kept the same? There are probably better ways: just use Lustre, and configure your tools to use it, even as a regular POSIX file system. Make all your Big Data applications work on it as much as possible, and exploit its great performance. Small tweaks can make your application behave better with Lustre; one simple tip: use only big files.
For the sake of extra information, Lustre also offers HSM (Hierarchical Storage Management), which works as a cache system: frequently accessed data is kept on faster tiers of storage, while less-accessed data remains on slower tiers. I will not go into details, because my colleagues from Bull will surely do so in a coming blog post.

What scheduler?

Big platforms require resource management: booking nodes, complex allocations, scheduling algorithms. As we do not like reinventing the wheel, we use the scheduler that more than 50% of the TOP500 supercomputers use: SLURM.
Free, open source, Linux-native, very customizable and efficient, it is definitely a must-have in your HPC cluster. It handles up to around 1000 job submissions per second. A job is a shell script with annotations that SLURM interprets at submission time. The annotations are very simple, so if you are familiar with any shell scripting language you can quickly learn to write SLURM jobs. Combined with a distributed file system such as Lustre, SLURM becomes a very powerful yet simple tool, usable by almost anyone.
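For illustration, a minimal SLURM job script could look like this (the job name, resource values and command are made up for the example). The `#SBATCH` lines are plain comments to the shell but directives to SLURM, and the script would be submitted with `sbatch job.sh`:

```shell
#!/bin/bash
#SBATCH --job-name=wordcount        # name shown in the queue
#SBATCH --ntasks=4                  # number of parallel tasks to allocate
#SBATCH --time=00:10:00             # wall-clock limit
#SBATCH --output=wordcount-%j.out   # %j is replaced by the job id

# The actual work; under SLURM, srun would spread a command over the
# allocated tasks, e.g.:  srun ./count_words input.txt
echo "job started on $(hostname)"
```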

Why a Workflow Engine?

A flexible workflow engine adds flexibility to your platform, especially if it can be integrated with SLURM. This is the case with ProActive Workflows & Scheduling. This beauty offers extra features on top of the infrastructure: flexible multi-task ProActive workflows, task dependencies, a Web Studio for easy creation of ProActive workflows, cron-based job submission, control blocks like replication and loops, dataspaces, job templates for managing data in cloud storage services (including support for more than 8 different file systems), and job templates for interacting with SLURM and Hadoop, among others.
If multiple languages are what you are looking for, ProActive tasks can be implemented in several of them: Java, JavaScript, Groovy, Ruby, Python, Bash, Cmd or R. You can also execute native Windows/Mac OS/Linux processes.
There is a node source integrated with SLURM: in other words, if ProActive requires compute resources, they can be taken from the SLURM pool of resources and added to ProActive Workflows & Scheduling, so that ProActive workflows execute on SLURM nodes. After those workflows finish, the SLURM nodes are released.
ProActive also offers monitoring of the resources involved and the possibility to extend the infrastructure using private clouds (such as OpenStack or VMware) and public clouds (such as Numergy or Windows Azure).
Last but not least, ProActive provides a flexible mechanism for centralized authentication. After loading the user credentials (a procedure done only once) with a simple initial user/password login, ProActive workflows execute with no further password requests, no matter which services they invoke. Imagine your workflow accessing cloud storage accounts, executing business applications under given accounts, changing file permissions using specific credentials for Linux accounts, etc. All user credentials are safely stored, once, in the ProActive Third-Party Credential Store.
Execution of any workflow can be requested via a simple REST API call, making it easy to trigger your data processing from any cloud service.


ArmadilloDB rocks when it comes to seismic use cases. It has been optimized to work over the Lustre FS and supports correlation operations over seismic Big Data. But that is not my topic; I will let them explain it better in a different blog post.

What does it give?

With every piece in place, we have an interesting result: a platform that lets you bring data from outside the cluster (several cloud storage services are supported) using intelligent workflows; process it via SLURM, ProActive tasks (implementable in more than 8 languages), or external processes such as ArmadilloDB or Hadoop; and move it to a convenient place. To give you a clearer example of what working with a DataScale platform looks like, here is a simple video (enable subtitles to better understand):

In this video the user executes a simple SLURM job on data available on a DataScale-powered cluster. The results are then put back on a cloud storage server. Everything is done through a command-line client that makes REST calls to the ProActive Workflow Catalog server, a module of the ProActive Workflows & Scheduling product. We will not show performance results for now; we will do so in coming blog posts.
Hope you enjoyed!

Thursday, January 22, 2015

Speeding up web app deployment in Jetty

When the ProActive Workflows and Scheduling server is started, no fewer than 5 web applications are deployed in the embedded Jetty servlet container. It was taking some time (a few seconds) and we decided to improve it. So, what can be done?

Servlet annotation scanning

By default, Jetty will scan all webapp jars for servlet annotations (those were introduced in version 3.0 of the servlet spec). Since we don't use any, we can safely disable the scanning, saving time at startup (around 1 second on my machine). This page on the eclipse wiki describes how to do it in Jetty. Just set an attribute on your WebAppContext:
WebAppContext webApp = createWebAppContext(...);
webApp.setAttribute("org.eclipse.jetty.server.webapp.WebInfIncludeJarPattern", "^$");

Parallel deployment

By default, Jetty deploys web apps sequentially, but it can be configured to do so in parallel. Here is how:
HandlerList handlerList = createHandlerList(...);
handlerList.setParallelStart(true); // the parallel-start flag HandlerList inherits from HandlerCollection
This change shaved off 3 seconds from the startup time on my machine.
Note: in the Jetty version we are using (8.1.16.v20140903), there seems to be a problem in the initialization sequence: the deployment starts before the thread pool is started, and hangs. A simple workaround is to start the thread pool manually before starting Jetty:
org.eclipse.jetty.server.Server server = createHttpServer(...);
QueuedThreadPool threadPool = new QueuedThreadPool(maxThreads);
threadPool.start(); // start the pool ourselves before server.start()
server.setThreadPool(threadPool);

Unpack wars

This one is trivial, but the speedup is no less real than the other ones! We were shipping some of our webapps as WARs inside the distribution zip file, so on every startup the WARs would be unpacked, and this took time. We changed our release scripts to ship unpacked WARs in the distribution instead. This simple change cut another 2 seconds off the startup time.

Moreutils: ts

The moreutils collection contains a nice little tool called ts: a utility to timestamp standard input. The latest version supports the -s switch, which displays timestamps relative to the start of the program - very useful for looking at the startup times of your app, especially if it prints a few lines of output when it starts.
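If ts is not installed, the same idea can be improvised in plain shell, a rough `ts -s` lookalike with whole-second resolution only (the echoed messages are just placeholders):

```shell
# Prefix each line of stdin with the seconds elapsed since the pipeline started
start=$(date +%s)
( echo "Starting the scheduler..."; sleep 1; echo "Scheduler started" ) |
while IFS= read -r line; do
  printf '%02d %s\n' "$(( $(date +%s) - start ))" "$line"
done
```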


So here is the final result, with all of the improvements described above, and with timestamps provided by ts:
 $ ./bin/proactive-server -ln 0 -c | ts -s "%.S" 
01.099732 Starting the scheduler...
01.107951 Starting the resource manager...
05.920992 The resource manager with 0 local nodes created on pnp://
07.095439 The scheduler created on pnp://
07.103600 Starting the web applications...
08.399879 The web application /studio created on
08.399985 The web application /scheduler created on
08.400001 The web application /rest created on
08.400014 The web application /rm created on
08.400027 *** Get started at ***
As you can see, the webapp startup takes only slightly over a second now. Much better!

Wednesday, December 31, 2014

Web UI testing with Nightwatch.js and Selenium

One of the new features of the latest release of ProActive Workflows & Scheduling is the Workflow Studio. You might have known it as an Eclipse RCP application, i.e. a rich client application, but we rewrote it from scratch as a web application to simplify its usage. No more installation: it comes packaged with ProActive and starts along with the Scheduler and Resource Manager portals.

If you haven’t tested it yet, we encourage you to do so on our demo platform.

The new Workflow Studio is also a new technical challenge, as we developed it in JavaScript where the existing portals were built with GWT. Enter the magic world of JS! Dozens of libraries to choose from! A new framework to use every day!

And they say Maven is evil...

Nevertheless, we can acknowledge that coding in JavaScript is much more pleasant now with all these tools and libraries, but it still requires good engineering practices like testing.

Since the Workflow Studio is mostly targeted at end users, UI testing makes sense, and we chose to go that way for now. This need also arose from the frustration of repeating the same manual tests again and again. Why not try to automate them?

We can distinguish two approaches to testing web applications. The first is to use a real browser like Firefox or Chrome and drive it with a tool like Selenium to perform actions (click, type text...) and checks. The rise of JavaScript has also pushed towards even faster testing with headless browsers like PhantomJS.

As we wanted to be as realistic as possible, we chose for now to use Selenium, so we can run tests with real browsers and easily follow the test execution. We also picked Nightwatch.js as the test framework; it uses Selenium underneath and provides the test runner as well as some useful commands and assertions. Let's look at a concrete example:

module.exports = {
    "Login": function (browser) {
        browser
            .url("")
            .waitForElementVisible('button[type=submit]')
            .setValue('#user', 'user')
            .setValue('#password', 'pwd')
            .click('button[type=submit]')
            .waitForElementVisible('.no-workflows')
            .assert.containsText('.no-workflows', 'No workflows are open')
            .end();
    }
};

We simply define a test that opens a URL, checks that the submit button is there, fills the login form and submits it. Then we check that the login succeeded. Now to run it:

$ nightwatch -t specs/login.js
[Login] Test Suite
===================
Running:  Login 

✔  Element <button[type=submit]> was visible after 1221 milliseconds. 
✔  Element <.no-workflows-open-help-title> was visible after 974 milliseconds. 
✔  Testing if element <.no-workflows-open-help-title> contains text: "No workflows are open".
OK. 3 total assertions passed. (5.598s)

What happens here is that Nightwatch.js will start Selenium (or connect to an existing Selenium server), tell Selenium to launch a browser (you will actually see Firefox starting) and then drive it to perform the test’s actions.

Writing such tests is made easy by tools like Selenium and Nightwatch.js; however, they are often very fragile, easy to break if the UI changes or if something is slower than usual. A few good practices can help make them more robust:
  • Rely on ID selectors. Selectors are used to pick the elements you want to act on. IDs should target a specific element, like '#login-submit-button', instead of selecting any form button of type submit on the page. IDs tend to be stable, whereas a CSS class can easily change.
  • Avoid duplication. This is just standard coding practice, but it is even more important here. Share common pieces of code between your tests to make them more stable. All of our Workflow Studio tests log in and then perform additional actions, so this series of login steps should be shared across all tests. Nightwatch.js lets you write custom commands and assertions for this purpose ( In our tests, we have a login command, an open-workflow command, a create-task command, and also assertions to check the content of a notification, for instance.
  • Do not sleep! As often with asynchronous testing, you might be tempted to wait at some point for some action to complete. With UI tests you often have to wait for an element to be displayed before clicking on it. And you should wait, but by using the provided methods instead of sleeping for a few seconds. Active waiting (checking periodically, with a timeout) is more efficient and will help you understand failures.
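The active-wait idea can be sketched with nothing beyond standard shell tools (UI frameworks like Nightwatch.js implement this loop for you in methods such as waitForElementVisible; the flag file below is a hypothetical stand-in for "the element is visible"):

```shell
# Poll for a condition until it holds or a timeout expires,
# instead of sleeping a fixed number of seconds.
wait_for() {   # usage: wait_for <timeout-seconds> <command...>
  deadline=$(( $(date +%s) + $1 )); shift
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 0.2    # short poll interval; GNU sleep accepts fractions
  done
}

flag=$(mktemp -u)             # hypothetical "element" we are waiting for
( sleep 1; touch "$flag" ) &  # it "appears" one second later
wait_for 5 test -e "$flag" && echo "element appeared"
rm -f "$flag"
```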

Once you have a few tests, you also want to make sure they are executed as part of your continuous integration. The NodeJS plugin is quite handy for getting npm installed on your Jenkins slaves. Since your CI slaves probably don't have desktop environments or graphical servers running, Xvfb is a good solution for running browsers on machines without X. Here is our simple job that runs the tests:

/usr/bin/Xvfb :99 -ac -screen 0 1280x1024x8 &
export DISPLAY=:99
npm install
cd test/ui && ../../node_modules/nightwatch/bin/nightwatch
pkill Xvfb

It just starts a virtual X server and runs the tests. Here we simply test against our test platform, which is deployed frequently. The Jenkins job is configured to pick up the JUnit test reports (generated by Nightwatch.js), and screenshots captured on failure are also available via the JUnit attachment plugin.

Hopefully the automated tests will pass and provide useful feedback for ongoing developments!

Monday, September 29, 2014

ProActive Workflows & Scheduling 6.0 is out!

The last few weeks have been quite busy at Activeeon! We are now very happy to deliver the result of our hard work, the new release of ProActive Workflows & Scheduling!

Getting started page

This is a major release that represents a long year of efforts. We labelled it as 6.0 and upgraded all components to this version. We previously had components that were using different version numbers, like the Scheduler in version 3.4 and ProActive in version 5.4. It was creating some confusion so we just simplified that.

So what’s in it?

As the name says, ProActive Workflows & Scheduling contains all the components that enable the creation of workflows and their execution. It embeds:
  • ProActive Programming: the low level library for ProActive Active Objects
  • ProActive Scheduling: the Scheduler and Resource Manager, server components that provide the execution of workflows and aggregation of resources to run them
  • ProActive REST: the REST API and its server, that exposes ProActive Scheduling functionalities over HTTP
  • ProActive Web Portals: the Scheduler and Resource Manager web applications
  • Agents for Linux and Windows

And last but not least ProActive Workflow Studio, the web application to create, edit and submit workflows. ProActive Workflow Studio now replaces the old Studio, an Eclipse RCP based application.

ProActive Workflow Studio

All the server components are embedded in the distribution of ProActive Workflows & Scheduling and started by default. It means you don’t have to bother with what to choose, what to install. Just download the distribution, unzip it and run it!

How do you run it?

We wanted to greatly simplify the usage of ProActive. To reach this goal, we made the distribution self-contained. Running it is very easy too: we provide native scripts in the bin/ folder, so on Linux, for instance, just run:

$> ./bin/proactive-server

On Windows, navigate to the bin/ folder and double click proactive-server.bat.

You can notice that we simplified the native scripts and chose simpler names:
  • proactive-server: starts the server components
  • proactive-client: starts the command line client, to interact with ProActive Workflows & Scheduling via a console and to automate actions
  • proactive-node: starts a ProActive node, with the -r option you can specify the Resource Manager to connect to

The structure of the distribution archive has been simplified:
  • addons: where to place your custom Java tasks or policies
  • bin: native scripts to the server, client and node
  • config: all configuration files, with subfolders for each component
  • data: runtime data for the server part; by default it will contain the database files, the default dataspaces, monitoring statistics, … Want to start with a fresh installation? Just delete this folder.
  • dist: the files used to run ProActive Workflows & Scheduling, mostly JARs and WARs
  • jre: the Java Runtime Environment that we now embed, no more installation of Java required, nor configuration of JAVA_HOME
  • logs: where log files are stored
  • samples: some workflows and scripts to help you get started
  • tools: native scripts to run tools such as dataspace server, create credentials,...

A release with “good defaults”

To make it easier to use, we simplified the configuration files and used good defaults where possible. For instance, PNP is now the default protocol, because it is the most reliable and most performant protocol in most cases. Nodes use the password method when executing runAsMe tasks, because it is simple to set up. The command line uses default credentials so you don't have to type a fake password when testing the product (on a real installation, we strongly recommend changing the default private key!). These are all examples of small improvements we made to enhance the user experience.

Still finding it hard to use? 

Check out our new documentation; it is much nicer to read. The old documentation was a bit fragmented: you had to look in different places to find the answer to your question. Now we have two guides: one for the end user, the person creating and running workflows, and one for the administrator, the person responsible for the infrastructure, adding nodes and monitoring them.

To build the documentation, we replaced DocBook with Asciidoctor. It should now be much easier to write documentation and to update it. Let us know if there is missing information.

One more thing

Well, we have barely scratched the surface, and there are many more things to say about this new release. Expect more blog posts presenting the new ProActive Workflow Studio and the technologies we used to build it, the Java API based on the REST API, the automatic update of ProActive nodes, ...

In the meantime, you are welcome to download and test this new release of ProActive Workflows & Scheduling. We will be very happy to get your feedback!

Tuesday, July 8, 2014

Testing a Debian package with Docker

Docker makes it easy to test software in different Linux environments (different distributions, distribution versions, architectures) by enabling the creation of lightweight throw-away containers.

Here is an example of how Docker can be used to ensure the compatibility of a Debian package with several versions of Debian. Suppose we are packaging a linux service for Debian and would like to test if it functions correctly under both Debian versions squeeze and wheezy.

Packaging a service for Linux is somewhat error-prone, because the package manager does not normally provide a standard facility to create/delete the service user (this is commonly done via the postinst and pre/post rm scripts), and because of all the distribution-specific cruft surrounding the service lifecycle (hopefully this will change in the future with the widespread adoption of systemd).
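For example, the user-creation half typically lives in postinst, along these lines (the service user name is hypothetical; since adduser needs root, the demo call at the end checks an always-present user instead of creating one):

```shell
# Typical postinst fragment: create the service user on first install,
# guarded so that re-installation does not fail.
ensure_service_user() {
  if getent passwd "$1" >/dev/null; then
    echo "user $1 already exists"
  else
    adduser --system --group --no-create-home "$1"   # requires root
  fi
}

ensure_service_user root   # demo against a user that always exists
```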

Here are the things we would like to test:
  • the dependencies of the package are correct (satisfiable)
  • the service user and group are correctly created upon installation
  • the service is started upon installation
  • the permissions of the files/directories are correct
  • the service executes under the correct user
  • the output is properly logged to a log file
  • service start/stop work
  • on uninstall, the service is stopped
  • on uninstall, the user/group are deleted
  • on uninstall, package files are removed but the config files are left intact

When developing the packaging procedure, we will not get everything on the list above right the first time. We need to iterate through the loop:
  • change the packaging procedure
  • re-create the package
  • install the package
  • perform the tests

Naturally, on each iteration we would like our target environment (the Debian system) to be pristine. We also want several environments: one for each Debian version we are testing. This is where Docker comes into play: it makes it super easy and fast to return to the same system state multiple times.

Creating the "pristine" images

We could grab the Debian images from the Docker hub, but we are truly paranoid, so we’ll create the images ourselves using the trusty old debootstrap:
# debootstrap squeeze ./squeeze
Docker makes it easy to import a root filesystem into an image:
 # tar -C ./squeeze -c . | docker import - activeeon/debian-squeeze:pristine
The image named activeeon/debian-squeeze:pristine now exists in the local Docker list of images, and can be used to start containers:
 # docker images  
    REPOSITORY                 TAG           IMAGE ID        CREATED          VIRTUAL SIZE  
    activeeon/debian-squeeze   pristine      e08a54c4a759    2 minutes ago    198.8 MB  

Docker hub enables sharing of Docker images. We can make our image available to others with one command:
# docker push activeeon/debian-squeeze:pristine
We repeat the same steps for wheezy.

Starting the container

For manual testing, we just run the interactive shell:
# docker run -v /src/proactive:/src/proactive -ti activeeon/debian-squeeze:pristine /bin/bash
root@d824ff1de395 #
We specified the image to use and the shell to execute. We also used the volumes feature of Docker to make available inside the container the directory from the host system with the source files for the package.

Now from within the container, we can build and install our package:
root@d824ff1de395 # make -C /src/proactive/linux-agent/ package
root@d824ff1de395 # dpkg -i /src/proactive/linux-agent/linux-agent_1.0_all.deb
root@d824ff1de395 # ps -ef # check that the service is started, etc.
Found a problem and need to repeat? Here is where Docker shines: to scrap the changes made by installing the package and return to the original state, all we need to do is to exit the shell (this stops the container), and execute the “run” command again:
root@d824ff1de395: # exit
# docker run -v /src/proactive:/src/proactive -ti activeeon/debian-squeeze:pristine /bin/bash
root@9427e7d35937: # 
Notice that the id of the container has changed. It is a new container started from the same image, and it knows nothing of the changes made inside the previous one. All in a blink of an eye!

It is also pretty straightforward to automate the manual tests above: we could replace the interactive shell invocation with a script that performs installation and verification. This would enable us, for example, to test our package in a continuous integration system such as Jenkins. If we publish our image to the Docker hub, delivery of the image to the Jenkins slaves is transparently handled by Docker for us.

Of course, a Docker container is not a real system (no other services run inside), so some things cannot be tested, such as interaction with other services; but most things can be, and the fast turnaround times provided by Docker make it a pleasant experience.

To summarize, here are the features of Docker we have used:
  • creating images given the root filesystem
  • image naming and indexing to manage the set of images on the local machine
  • publishing images to the Docker hub to share with others / use on other machines
  • instantaneous creation of throw-away environments for testing