Wednesday, December 31, 2014

Web UI testing with Nightwatch.js and Selenium

One of the new features of the 6.0 version of ProActive Workflows & Scheduling is the Workflow Studio. You might have known it as an Eclipse RCP application, i.e. a rich client application, but we rewrote it from scratch as a Web application to simplify its usage. No more installation: it comes packaged with ProActive and is started along with the Scheduler and Resource Manager portals.

If you haven’t tested it yet, we encourage you to do so on our demo platform.

The new Workflow Studio is also a new technical challenge, as we developed it in Javascript whereas the existing portals were built with GWT. Enter the magic world of JS! Dozens of libraries to choose from! A new framework to use every day!

And they say Maven is evil...

Nevertheless, we can acknowledge that coding in Javascript is much more pleasant now with all these tools and libraries, but it still requires good engineering practices like testing.

Since the Workflow Studio is mostly targeted at end users, UI testing makes sense and we chose to go that way for now. This need also arose from the frustration of repeating the same manual tests again and again. Why not try to automate them?

We can distinguish two approaches to testing web applications. The first is to use a real browser like Firefox or Chrome and to drive it with a tool like Selenium to perform actions (click, type text…) and checks. The rise of Javascript has also pushed in favor of even faster testing with headless browsers like PhantomJS.

As we wanted to be as realistic as possible, we chose for now to use Selenium, to be able to run tests with real browsers and easily follow the test execution. We also picked Nightwatch.js as the test framework: it uses Selenium underneath and provides the test runner as well as some useful commands and assertions. Let’s look at a concrete example:

module.exports = {
    "Login": function (browser) {
        browser
            .url("http://trydev.activeeon.com/studio")
            .waitForElementVisible('button[type=submit]')
            .setValue('#user', 'user')
            .setValue('#password', 'pwd')
            .click('button[type=submit]')
            .waitForElementVisible('.no-workflows')
            .assert.containsText('.no-workflows', 'No workflows are open')
            .end();
    }
};

We simply define a test that opens a URL, checks that the submit button is there, fills in the login form and submits it. Then we check that the login succeeded. Now to run it:

$ nightwatch -t specs/login.js
[Login] Test Suite
===================
Running:  Login 

✔  Element <button[type=submit]> was visible after 1221 milliseconds. 
✔  Element <.no-workflows-open-help-title> was visible after 974 milliseconds. 
✔  Testing if element <.no-workflows-open-help-title> contains text: "No workflows are open".
OK. 3 total assertions passed. (5.598s)

What happens here is that Nightwatch.js will start Selenium (or connect to an existing Selenium server), tell Selenium to launch a browser (you will actually see Firefox starting) and then drive it to perform the test’s actions.

Writing such tests is made easy with tools like Selenium and Nightwatch.js; however, these tests are often very fragile and easy to break if the UI changes or if something is slower than usual. A few good practices can help make them more robust:

Rely on ID selectors. Selectors are used to pick the elements you want to perform actions on. IDs should be used to target a specific element, like '#login-submit-button', instead of selecting any form button of type submit on the page. IDs should be more stable, whereas a CSS class can easily be changed.

Avoid duplication. This is just standard coding practice but it is even more important here. Share common pieces of code between your tests to make them more stable. All of our Workflow Studio tests are going to log in and then perform additional actions, so this series of login steps should be shared across all tests. Nightwatch.js enables you to write custom commands and assertions for this purpose (http://nightwatchjs.org/guide#custom-commands). In our tests, we have a login command, an open workflow command, a create task command, and also assertions to check the content of a notification, for example.
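
Here is a sketch of what such a login command could look like (assuming the commands folder is declared via the custom_commands_path setting; the selectors and URL are reused from the test above):

// commands/login.js: a sketch of a custom login command
exports.command = function (username, password) {
    this
        .url("http://trydev.activeeon.com/studio")
        .waitForElementVisible('button[type=submit]')
        .setValue('#user', username)
        .setValue('#password', password)
        .click('button[type=submit]')
        .waitForElementVisible('.no-workflows');
    return this; // keep the call chainable
};

A test can then simply start with browser.login('user', 'pwd') and carry on with its own steps and assertions.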

Do not sleep! As often with asynchronous testing, you might be tempted to wait at some point for some action to be performed. With UI tests, you often have to wait for an element to be displayed before clicking on it. And you should wait, but by using the provided methods to do so instead of sleeping for a few seconds. Active wait (checking periodically, with a timeout) is more efficient and will help you understand failures.
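
With Nightwatch.js, that means preferring an explicit wait over a fixed pause (the selector and timeouts below are illustrative):

// Fragile: always costs 5 seconds, and still fails if the app needs 6
browser.pause(5000).click('#open-workflow-button');

// Robust: polls until the element is visible, up to a 10 second timeout,
// and tells you exactly which wait failed
browser.waitForElementVisible('#open-workflow-button', 10000)
       .click('#open-workflow-button');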

Once you have a few tests, you also want to make sure they are executed as part of your continuous integration. The NodeJS plugin is quite handy to get npm installed on your Jenkins slaves. Since your CI slaves probably don’t have desktop environments and graphical servers running, Xvfb is a good solution to run the browsers on machines without X. Here is our simple job that runs the tests:

# Start a virtual X server on display :99
/usr/bin/Xvfb :99 -ac -screen 0 1280x1024x8 &
export DISPLAY=:99
# Install the test dependencies and run the Nightwatch.js suite
npm install
cd test/ui && ../../node_modules/nightwatch/bin/nightwatch
# Stop the virtual X server
pkill Xvfb

It just starts a virtual X server and runs the tests. Here we simply test against our test platform, which is deployed frequently. The Jenkins job is configured to pick up the JUnit test reports generated by Nightwatch.js, and the screenshots captured in case of failure are also available via the JUnit Attachments plugin.
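
For reference, the relevant part of a Nightwatch.js configuration could look like this sketch (paths are illustrative): src_folders is where the tests live, output_folder is where the JUnit XML reports end up, and custom_commands_path is the commands folder mentioned earlier.

{
  "src_folders": ["specs"],
  "output_folder": "reports",
  "custom_commands_path": "commands",
  "selenium": {
    "start_process": true,
    "server_path": "bin/selenium-server-standalone.jar"
  }
}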


Hopefully the automated tests will pass and provide useful feedback for ongoing developments!


Monday, September 29, 2014

ProActive Workflows & Scheduling 6.0 is out!


The last few weeks have been quite busy at Activeeon! We are now very happy to deliver the result of our hard work, the new release of ProActive Workflows & Scheduling!

Getting started page

This is a major release that represents a long year of efforts. We labelled it as 6.0 and upgraded all components to this version. We previously had components that were using different version numbers, like the Scheduler in version 3.4 and ProActive in version 5.4. It was creating some confusion so we just simplified that.

So what’s in it?


As the name says, ProActive Workflows & Scheduling contains all the components that enable the creation of workflows and their execution. It embeds:
  • ProActive Programming: the low level library for ProActive Active Objects
  • ProActive Scheduling: the Scheduler and Resource Manager, server components that provide the execution of workflows and aggregation of resources to run them
  • ProActive REST: the REST API and its server, that exposes ProActive Scheduling functionalities over HTTP
  • ProActive Web Portals: the Scheduler and Resource Manager web applications
  • Agents for Linux and Windows

And last but not least ProActive Workflow Studio, the web application to create, edit and submit workflows. ProActive Workflow Studio now replaces the old Studio, an Eclipse RCP based application.

ProActive Workflow Studio

All the server components are embedded in the distribution of ProActive Workflows & Scheduling and started by default. It means you don’t have to bother with what to choose, what to install. Just download the distribution, unzip it and run it!

How do you run it?


We wanted to greatly simplify the usage of ProActive. To reach this goal, we improved the distribution to be self-contained. Running it is very easy too. We provide native scripts in the bin/ folder, so on Linux for instance just run:

$> ./bin/proactive-server

On Windows, navigate to the bin/ folder and double click proactive-server.bat.

You can notice that we simplified the native scripts and chose simpler names:
  • proactive-server: starts the server components
  • proactive-client: starts the command line client, to interact with ProActive Workflows & Scheduling via a console and to automate actions
  • proactive-node: starts a ProActive node; with the -r option you can specify the Resource Manager to connect to (see the example below)
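
For instance, to start a node and connect it to a remote Resource Manager (the URL is illustrative, PNP being the default protocol):

$> ./bin/proactive-node -r pnp://myserver:64738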

The structure of the distribution archive has been simplified:
  • addons: where to place your custom Java tasks or policies
  • bin: native scripts to the server, client and node
  • config: all configuration files, with subfolders for each component
  • data: runtime data for the server part; by default it will contain the database files, the default dataspaces, monitoring statistics, … Want to start with a fresh installation? Just delete this folder.
  • dist: the files used to run ProActive Workflows & Scheduling, mostly JARs and WARs
  • jre: the Java Runtime Environment that we now embed, no more installation of Java required, nor configuration of JAVA_HOME
  • logs: where log files are stored
  • samples: some workflows and scripts to help you get started
  • tools: native scripts to run tools such as the dataspace server, credentials creation, ...

A release with “good defaults”


To make it easier to use, we simplified the configuration files and used good defaults where possible. For instance, PNP is now the default protocol, because it is the most reliable and most performant protocol in most cases. Nodes will use the password method when executing runAsMe tasks, because it is simple to set up. The command line will use default credentials so you don’t have to type a fake password when testing the product (on a real installation, we strongly recommend that you change the default private key!). These are all examples of small improvements we made to enhance the user experience.

Still finding it hard to use? 


Check out our new documentation, it is much nicer to read. The old documentation was a bit fragmented: you had to look in different places to find the answer to your question. Now we have two guides, one for the end user, the person creating and running workflows, and one for the administrator, the person responsible for the infrastructure, adding nodes and monitoring them.

To build the documentation, we replaced DocBook with Asciidoctor. It should now be much easier to write documentation and to update it. Let us know if there is missing information.

One more thing


Well, we have barely scratched the surface and there are many more things to say about this new release. Expect more blog posts that will present the new ProActive Workflow Studio and the technologies we used to build it, the Java API based on the REST API, the automatic update of ProActive nodes, ...

In the meantime, you are welcome to download and test this new release of ProActive Workflows & Scheduling. We will be very happy to get your feedback!

Tuesday, July 8, 2014

Testing a Debian package with Docker

Docker makes it easy to test software in different Linux environments (different distributions, distribution versions, architectures) by enabling the creation of lightweight throw-away containers.

Here is an example of how Docker can be used to ensure the compatibility of a Debian package with several versions of Debian. Suppose we are packaging a Linux service for Debian and would like to test whether it functions correctly under both Debian squeeze and wheezy.

Packaging a service for Linux is somewhat error-prone, because the package manager does not normally provide a standard facility to create/delete the service user (it is commonly done via the postinst and prerm/postrm scripts), and because of all the distribution-specific cruft surrounding the service lifecycle (hopefully this will change in the future with the widespread adoption of systemd).

Here are the things we would like to test (a sketch of how some of them can be scripted follows the list):
  • the dependencies of the package are correct (satisfiable)
  • the service user and group are correctly created upon installation
  • the service is started upon installation
  • the permissions of the files/directories are correct
  • the service executes under the correct user
  • the output is properly logged to a log file
  • service start/stop work
  • on uninstall, the service is stopped
  • on uninstall, the user/group are deleted
  • on uninstall, package files are removed but the config files are left intact
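
Some of these checks are straightforward to script. Here is a sketch of the kind of verification we can run inside a container once the package is installed (the package, service and user names are illustrative):

# install the freshly built package
dpkg -i linux-agent_1.0_all.deb
# the service user and group were created by the postinst script
id proactive-agent
# the service was started and runs under the correct user
service proactive-agent status
ps -o user= -C proactive-agent
# the output is properly logged
test -s /var/log/proactive-agent.log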

When developing the packaging procedure, we would not get all the things from the list above right the first time. We would need to iterate through the loop:
  • change the packaging procedure
  • re-create the package
  • install the package
  • perform the tests

Naturally, on each iteration we would like our target environment (the Debian system) to be pristine. Also we want to have several environments: one for each Debian version we are testing. Here is where Docker comes into play. It makes it super easy and fast to return multiple times to the same system state.

Creating the "pristine" images

We could grab the Debian images from the Docker hub, but we are truly paranoid, so we’ll create the images ourselves using the trusty old debootstrap:
# debootstrap squeeze ./squeeze
Docker makes it easy to import a root filesystem into an image:
 # tar -C ./squeeze -c . | docker import - activeeon/debian-squeeze:pristine
The image named activeeon/debian-squeeze:pristine now exists in the local Docker list of images, and can be used to start containers:
 # docker images  
    REPOSITORY                 TAG           IMAGE ID        CREATED          VIRTUAL SIZE  
    activeeon/debian-squeeze   pristine      e08a54c4a759    2 minutes ago    198.8 MB  

Docker hub enables sharing of Docker images. We can make our image available to others with one command:
# docker push activeeon/debian-squeeze:pristine
We repeat the same steps for wheezy.

Starting the container

For manual testing, we just run the interactive shell:
# docker run -v /src/proactive:/src/proactive -ti activeeon/debian-squeeze:pristine /bin/bash
root@d824ff1de395 #
We specified the image to use and the shell to execute. We also used the volumes feature of Docker to make the host directory containing the source files for the package available inside the container.

Now from within the container, we can build and install our package:
root@d824ff1de395 # make -C /src/proactive/linux-agent/ package
root@d824ff1de395 # dpkg -i /src/proactive/linux-agent/linux-agent_1.0_all.deb
root@d824ff1de395 # ps -ef # check that the service is started, etc.
Found a problem and need to repeat? Here is where Docker shines: to scrap the changes made by installing the package and return to the original state, all we need to do is to exit the shell (this stops the container), and execute the “run” command again:
root@d824ff1de395: # exit
# docker run -v /src/proactive:/src/proactive -ti activeeon/debian-squeeze:pristine /bin/bash
root@9427e7d35937: # 
Notice that the id of the container has changed. It is a new container started from the same image, and it knows nothing of the changes made inside the previous one. All in a blink of an eye!

It is also pretty straightforward to automate the manual tests above: we could envision replacing the interactive shell invocation with a script that performs installation/verification. As an example, it would enable us to test our package in a continuous integration system such as Jenkins. If we publish our image to the Docker hub, the delivery of the image to the Jenkins slaves would be transparently handled by Docker for us.
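
For instance, the interactive invocation above could become something like this, where run-tests.sh is a hypothetical script performing the installation and the checks:

# docker run -v /src/proactive:/src/proactive activeeon/debian-squeeze:pristine /src/proactive/linux-agent/run-tests.sh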

Of course the Docker container is not a real system (no other services are running inside), so it is not possible to test some things, such as the interaction with other services, but most things can be tested, and the fast turnaround times provided by Docker make it a pleasant experience.

To summarize, here are the features of Docker we have used:
  • creating images given the root filesystem
  • image naming and indexing to manage the set of images on the local machine
  • publishing images to the Docker hub to share with others / use on other machines
  • instantaneous creation of throw-away environments for testing

Wednesday, June 11, 2014

Docker & ProActive

Nowadays the cloud is all about Docker! Docker there, Docker here! And with the recent release of Docker 1.0, you just can’t miss it. My Twitter timeline is filled with #docker!

In this blog post, I will demonstrate how ProActive can play along with Docker. If you are new to this, I recommend going through the documentation, trying it online with the emulator and, even better, trying it on your machine.

Docker looks really interesting when it comes to ProActive, because it can provide good isolation and control over the ProActive nodes. It means you could share a powerful machine between multiple users and enforce certain rules (e.g. the resources available) while keeping things lightweight. You could also use it to provide customized and portable environments for your users. Docker relies on Linux containers and it is just fast!

I’ll assume that the latest release is used; you can download it from our website (use version 5.4 of the Server). We will start by running the ProActive Scheduler:

$> unzip ProActiveScheduling-3.4.4_bin_full.zip
$> cd ProActiveScheduling-3.4.4_bin_full
$> ./bin/unix/scheduler-start-gui -Dproactive.useIPaddress=true -Dproactive.net.interface=docker0



Here we explicitly tell the Scheduler to bind to the docker0 interface, the interface used between your machine (the host) and the containers. We also rely on IP addresses instead of host names for simplicity. The URLs of the Scheduler and the web portals are printed out.

Starting the scheduler...
Starting the resource manager...
The resource manager with 4 local nodes created on rmi://jily.local:1099/
The scheduler created on rmi://jily.local:1099/
Deployed application: http://localhost:8080/rest
Deployed application: http://localhost:8080/rm
Deployed application: http://localhost:8080/sched



Now we want to add ProActive nodes running in containers and connect them to the Scheduler.

$> docker run -ti activeeon/proactive:3.4.4 /opt/proactive/rm-start-docker-node rmi://172.17.42.1:1099/


The address 172.17.42.1 is the address of the docker0 network interface on the host when containers are running; the ProActive node in a container will thus connect to rmi://172.17.42.1:1099/, the address of the Resource Manager.

Here we run the docker run command interactively (-ti) so you should see the node’s output. You can check that a new ProActive node has been added in the Resource Manager web portal.

So what just happened there? Well, Docker downloaded the image called activeeon/proactive:3.4.4 from the Docker registry and ran it with the command /opt/proactive/rm-start-docker-node rmi://172.17.42.1:1099/. The image is available on the Docker hub, which is used to share images; it already contains Java and ProActive. We also added a script called rm-start-docker-node to easily start a node (using default credentials and the id of the container in the node’s name).

The first launch probably took some time because the image was downloaded but if you relaunch the same command, it will be just blazing fast.

Below is a screenshot of the Resource Manager portal, showing 4 Docker nodes. You can see that a Docker node has only two processes: the shell script passed to the command line and the node itself (running with Java).


The Docker ProActive nodes can be used to schedule tasks; let’s see an example with the following job:

The following output is produced:

[550000@b2a793c40183;12:26:49] PID TTY TIME CMD
[550000@b2a793c40183;12:26:49] 1 ? 00:00:00 rm-start-docker
[550000@b2a793c40183;12:26:49] 7 ? 00:00:08 java
[550000@b2a793c40183;12:26:49] 236 ? 00:00:06 java
[550000@b2a793c40183;12:26:49] 283 ? 00:00:00 ps


There is one more Java process, the one running the task.

The Docker image that was downloaded is fairly simple, here is the Dockerfile:
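
It is only a few lines; roughly something like the following (the base image name and the exact paths are assumptions):

# start from an official image that ships with Java (exact name is an assumption)
FROM dockerfile/java
# copy the ProActive distribution and the node startup script into the image
ADD ProActiveScheduling-3.4.4_bin_full /opt/proactive
ADD rm-start-docker-node /opt/proactive/rm-start-docker-node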


It just takes an official Docker image that comes with Java, copies ProActive into it, and voilà! To make it easy to test, the image has been pushed to the Docker registry.

Docker is well known as a devops tool to easily share environments from development to production. We think it can also be used for HPC use cases by allowing ProActive users to configure and test their computations on their machines and run them at scale on powerful grids. Let us know how you use it!

Thursday, May 15, 2014

The tools we use: IntelliJ IDEA

As a software vendor, most of us spend our days writing (or actually reading) code, and the tools we use to do that matter a lot. At Activeeon, we are of course free to choose whatever tool does the job, but the majority uses IntelliJ IDEA. Given that the JetBrains guys recently extended our Open Source license, I can only return the favor by spreading my love for such a great IDE.


I personally switched to IntelliJ 3 years ago and never looked back. When I joined Activeeon, I was pleased to see that IntelliJ was already used, and we have since convinced other developers to switch. Let me share the reasons to start coding with IntelliJ and a few tips.

Why switch?

For me, the main reason to switch was the Maven integration. It just works! It is always nice to be able to take a POM file, import it, and have all the src and test folders and the dependencies configured.

To build ProActive we rely on Gradle, and I must say it works, but it is not as good as the Maven integration. One big missing feature for me is the ability to transform dependencies into IntelliJ module dependencies. Let’s say you have several Git repositories; for us, Programming and Scheduling. Scheduling depends on Programming, but since these are two different repositories, we use Maven dependencies between them and not Gradle “project” dependencies. With the Maven integration, this situation is detected and instead of having references to JARs in your local repository, IntelliJ creates module dependencies. If you change code somewhere, you see the changes directly in the other modules in your IDE. Unfortunately, this does not work with Gradle, apparently due to some limitations of the Gradle tooling API.

The Maven and Gradle integrations illustrate one thing that is really nice with IntelliJ: the support of many tools, languages and frameworks. HTML, Javascript, CSS, PHP, Python, Ruby, Groovy and Java of course, they all have decent support out of the box. It is very pleasant to stop fighting with incompatible plugins and versions to get a working IDE.

New user? Learn one shortcut!

CTRL+SHIFT+A is the shortcut to remember, because it will help you find all the other shortcuts. Simply type what you are trying to do.

Code navigation

Given that, as a developer, you spend more time reading code than writing it, you have to be efficient when it comes to navigating your code base. Just learn the shortcuts that help you move quickly between files, code blocks, ...

While doing that, remember that IntelliJ is quite smart and you often don’t have to type everything; just type some characters that appear in the name. For instance if I type CTRL+N (go to class) and then RST, it will suggest opening the RestSchedulerTest class.


IntelliJ indexes a lot of things, making search fast. In my IDE, I have all of the ProActive projects opened, and it is just handy to check whether a refactoring affects other projects. Just make sure that your project folders are correctly recognized, to avoid indexing unnecessary files like build/ or target/ folders.

Code editing

Smart completion is often presented as a killer feature of IntelliJ. I personally don’t notice it anymore, it just works. This is maybe the best way a feature can work: the user does not even notice it.
For me, coding is often a matter of hitting the completion shortcut, the refactoring shortcuts and the intention actions shortcut (ALT+ENTER); I’m thinking about what the code should do and my IDE takes care of the boilerplate.

What could be improved?

The Gradle integration is definitely something I would like to see improved, mostly the issue I mentioned above about project dependencies, and the build file editing support (so slow right now!).

I also had an issue with my dual screen setup and window focus, but this is probably more related to Java and Xorg. I had to disable the “focus follows mouse” option in XFCE.

Wrap up

If you are not already using it, just give it a try. The community edition is free and open source. The ultimate edition has more features, and it is not that expensive given the amount of time you spend in your IDE. JetBrains offers discounts for startups and educational purposes. And they even allow you to use the ultimate edition on your open source project, thanks for that!

Good tools are essential to build good software. Be it your machine, the operating system or the IDE, you need them to be productive. Along with SSD technology, Linux and Bash, IntelliJ IDEA is one tool I could not live without.


Wednesday, April 23, 2014

Devoxx France 2014

Just this once, this post covers a French event: Devoxx France. Devoxx is originally a conference held in November in Antwerp; it is the biggest Java conference in Europe and one of the most popular among developers. In the last few years, more local editions have appeared: Devoxx France and Devoxx UK. Devoxx France takes place in Paris, with sessions mostly in French.

I had the chance to attend Devoxx France 2014, which was held last week over three days. On the program: a first day called “University”, with long sessions (3h) and workshops, followed by two days of classic conference with keynotes and one-hour sessions. Devoxx is clearly Java-oriented, with themes such as Java SE/EE, alternative languages, agility/devops, Web, startups & innovation, mobile, and Cloud/Big data/NoSQL. On top of all that, there are also workshops and side events such as Devoxx4Kids, an Open Data Camp, … Take a look at the program to get an idea of how rich the content is.

Organization

The organization is very well handled. The biggest issue, in my opinion, is the lack of space: the conference takes place at the Marriott Rive Gauche hotel, which struggles to host 1500 people. Between sessions it is a bit of a scramble to change rooms and wander between the booths. At noon, it gets wild trying to grab food (survival instinct takes over and people bite!). The movie theater rooms at Devoxx (the real, big one) are so much better than rows of chairs where you sit shoulder to shoulder. The organizers are well aware of the problem, and next year Devoxx France moves to a bigger venue, the Palais des Congrès, which will also make it possible to host a few more people (tickets are usually almost all sold out before the schedule is even published).

Talks

Now let’s talk a bit about the content: it is very hard to choose, because there are 6 talks running in parallel and you always doubt your choice (what if it is better elsewhere…).

Wednesday

On Wednesday, I did a full Java 8 day: 3 hours in the morning and 3 hours in the afternoon with Rémi Forax and José Paumard. Not really motivated by Java 8 at first, I let myself be tempted by Rémi Forax, to see whether he really is as good as he looks (he is a bit of the French star of the moment in the Java community). Rémi Forax is a university professor, the professor we would have loved to have to teach us Java… He also works on the JVM and therefore knows it very well; his talk on lambdas thus ends with a few lines of Java bytecode. The afternoon was dedicated to the streams API, a natural continuation of the lambdas. I was less convinced by the examples, which push the use of the API quite far but, to my taste, make the description of an algorithm more complex. Chaining filter, map and reduce operations on the same line is certainly quick to write, but in the end rather hard to read. Or maybe we will have to get used to it, as José Paumard said about the method reference syntax (::println). In any case, the two sessions allowed me to catch up on Java 8 and lambdas, even though it is not something I will use in the near future (except when writing Groovy :)).

The day not being over yet, we continued with a few Tools-in-Action, 30-minute sessions presenting a tool. I went to see Redis by Nicolas Martignole, a well-paced talk that leaves you wanting more; we would have liked to see more, especially about how to model data with this kind of tool. Then I attended a talk on Vert.x, the JVM-side equivalent of Node.js; the speaker was sluggish, very monotonous, and I retained almost nothing… The last Tools-in-Action I saw was about pac4j, a Java library to manage authentication. The topic interests me, since we recently used Shiro for that purpose. The talk was rather interesting, but gave the impression that the speaker develops it all on his own (https://github.com/leleuj/pac4j/graphs/contributors), which is rather surprising because the idea is good and the problem a common one.

Thursday

After an already packed first day, the second day starts with the keynotes. The keynote by Gilles Babinet and Kwam Yamgnane was well delivered, with relevant remarks on education (in the style of école 42). The president of the Syntec then came to talk to us; let’s salute the gesture. As for the content, coming from the president of the Syntec and from a big IT services company, I find it hard to believe his kind words about the role of the developer; in any case, nothing concrete that would suggest France is about to move away from the body-shopping services model. The last keynote presented the simplon.co initiative, which supports and trains non-developers in creating their own company. Finally, Tariq Krim paid us a surprise visit to talk about his ministerial report on developers. On this topic, I invite you to listen to the CastCodeurs episode that covers the report in more detail.

For what follows, I will adopt a more concise style to comment on the various talks I attended, otherwise this post would end up a few kilometers long…
  • Realtime Web avec Akka, Kafka, Spark et Mesos: the Mesos side interested me (obviously). Integrating all these solutions looks quite complex, but it seems motivated by their use case, and it has the merit of combining many open source building blocks.
  • Software Craftsmanship: one of my favorites, a very good speaker and a very good message on a topic dear to my heart.
  • Square: de la collecte d’information à la prise de décision: a rather complete presentation of their business and the problems they had to solve; you could tell they spoke from experience!
  • Reactive Angular: here I did not get the point of it (maybe because I do not yet understand these “reactive” architectures very well and do not know Angular well enough); it was my afternoon nap session…
  • La révolution Docker: I expected a lot from this talk, and apparently so did many others (the room was more than full). I found the content rather shallow and did not discover much. The topic is so young yet so sexy that, for now, people mostly just talk about it a lot.
  • Soyons RESTful avec RESTX: very, very good, both in content and in delivery. The speaker was well prepared and did a lot of live coding (almost a bit too much). RESTX really impressed me; I left wanting to try it.

I also attended a quickie, Chérie, j’ai rétréci le build ! 300kloc / 30kt / 3m, rather interesting, but as it is a topic I care a lot about, I did not learn that much (their quick win was apparently the Java 7 compiler).

In the evening, I also attended the Groovy BOF, an informal meeting of users and contributors. It was very friendly, and unfortunately the one-hour slot was quite short for the discussions.

Friday

Friday kicks off with the keynotes. The first one was rather inspirational, about our creativity as developers. I tuned out during the following ones: the Oracle keynote on education, for which I did not get (or hear) the message, and the next one on PIMS, a way to manage our personal data that seemed very theoretical and disconnected from reality.

As for the talks:
  • SARAH: connecter et interagir avec l'Internet des Objets au quotidien: it was neat, Iron Man’s Jarvis at home. Too bad it all relies on Windows APIs (the Kinect is somewhat key for voice and gesture recognition).
  • Hadoop @ Criteo, du laboratoire à la production 24h/24: very, very applied, certainly more interesting for people doing Hadoop on a daily basis. In any case Criteo takes it seriously; they have several people dedicated to their Hadoop infrastructure.
  • Web performances, regardons les résultats de près: the idea was to take a few web frameworks and compare their performance. JAX-RS with Jersey does well, but with hand-written JSON serialization… What does that look like in real life? Performance remains a very delicate subject to handle, and the talk did not leave a lasting impression on me.
  • 33 things you want to do better: a good speaker presenting handy tools for day-to-day Java development; I already knew quite a few of them (and the tool that does it all is Groovy :)).
  • Go pour Javaneros: a good introduction to Go from a Java developer’s point of view.
  • Hacking your home: Raspberry Pi and Arduino to tinker with your home, nice demos that make you want to get the soldering iron out!
  • And to finish, the live recording of the CastCodeurs podcast, very lively!

At lunchtime, between two bites, I also saw two quickies, Etre développeur à la sortie de l'école and Pourquoi vous devriez essayer sérieusement les Object Calisthenics, both well presented. I really like this quickie format, which quickly and concretely zooms in on a specific topic. They sometimes feel better prepared than the one-hour talks.

Conclusion

Three very intense days with a lot of content, some average and some very good. At this kind of event, I tell myself I should go and see new things, things I do not usually do, to get out of my comfort zone and learn. For instance, all the workshops and hackathons must have been really interesting.

Congratulations to the organizers for all this work!

And in two weeks, there is Mix-IT in Lyon, which looks great too! In any case, I will be there.

Monday, April 21, 2014

By the way… Java 8 is out

Last month Java 8 was released! The previous version was released in 2011, and Java 6 in 2006. This version had been expected for a long time, but to me it did not generate a lot of buzz. I definitely heard more about the Google Cloud and Amazon EC2 price drops in the same week, and now the Internet is all about Heartbleed. The new features of Java 8 have been discussed for a while now, and a few interesting ones (JDK modularity) were sadly dropped along the way. It probably gave Java 8 the look of a failed soufflé.

Nevertheless, Java 8 introduces lambdas! A long-awaited feature, especially with the popularity of functional programming nowadays. Lambdas do not come alone, as there are now also default methods in interfaces and the Stream API to process collections in parallel. You can find the full list of new features on Oracle’s web site.

Now what does it mean for ProActive? Well, we still support Java 6, as it is often installed at our customers’ sites. We mostly develop using a Java 7 runtime but do not code with Java 7 features, to keep compatibility with Java 6. Given that Java 8 is now available for download, we will also start using it as a runtime and add it to our automated testing jobs on Jenkins. That being said, we will still have to wait to use Java 8 features in the code, and will probably jump from Java 6 to Java 8 once it is mainstream. As one of ProActive’s use cases is to build desktop grids, we often don’t control the Java runtime installed on these machines. If we were developing server-side software, I would advocate switching to Java 8 after some period of testing.

Originally I planned to write one blog post explaining how Java 8 would impact ProActive, but as I tried to use it I found some issues along the way. Some of these issues actually have quite an impact for us, and we should probably have started testing it months ago. Upcoming blog posts will detail the problems we faced and how we solved them.

If you are already using Java 8 as your runtime, you will have to add the JVM option “-noverify” to the native scripts we provide in order to start the Scheduler and the nodes. And do not expect the Javascript integration (for tasks and pre/post/selection scripts) to work properly; more details to come.

Monday, April 14, 2014

If ProActive, no Heartbleed

Heartbleed?

To understand the rationale behind this bug called “Heartbleed”, we first need to understand some other concepts.
First, SSL is an encryption technology used to protect the privacy of web users while they transmit information over the internet. It was first introduced by Netscape in 1994.
There are several implementations of the SSL protocol. One of them is the popular OpenSSL library. It also implements TLS, a newer version of SSL. The TLS implementation provided by OpenSSL is buggy and hence vulnerable to attacks.
As the bug is in its heartbeat mechanism, it was named the Heartbleed bug.

What services are affected? 

The affected component is the TLS implementation provided by OpenSSL. HTTPS servers that use OpenSSL are affected, as the HTTPS implementation uses the buggy TLS implementation (and its buggy heartbeat extension).
OpenSSH also uses OpenSSL (mainly for key generation functions), but not the buggy component (the TLS implementation), so it is not affected. There is no need to worry about SSH being compromised, though it is still a good idea to update OpenSSL to 1.0.1g or 1.0.2-beta2 (you don’t have to worry about replacing SSH key pairs, though).

What are the OpenSSL affected versions?

OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable.
OpenSSL 1.0.1g is NOT vulnerable.
OpenSSL 1.0.0 branch is NOT vulnerable.
OpenSSL 0.9.8 branch is NOT vulnerable.

What is the estimated number of affected servers?

The two most popular web servers, Apache and nginx, use OpenSSL. Together, these vulnerable servers account for about two-thirds of the sites on the web.

How does it impact ProActive?

It does not impact ProActive, as it does not depend on such an implementation of SSL at all.
However, we often see our web portals exposed through nginx to beautify the URLs where the portals are exposed. In such cases (knowing that nginx uses OpenSSL), we encourage sysadmins to check whether the installed version of the OpenSSL library is affected by the bug and, if it is, to upgrade it, renew all HTTPS certificates, and ask users to renew their passwords.
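
A quick way to check which version is installed (the output below is just an example):

$ openssl version
OpenSSL 1.0.1f 6 Jan 2014

Anything from 1.0.1 through 1.0.1f here means the bug is present and an upgrade to 1.0.1g is needed.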

Are there available tests?

Yes, there are a couple. 


Tuesday, March 25, 2014

ProActive Cloud Connectors

ProActive Scheduler is able not only to launch computing jobs on the infrastructure where it is installed, but also to create infrastructures on demand. This is often useful when you want to isolate your computations, have root access to machines, or use exotic software for your computations.

At the core of ProActive Scheduler stands a component responsible for managing resources. All resources are split into node sources, each sharing the same infrastructure and access policy. A node source can be a set of desktop machines available through the network or a cloud infrastructure dynamically deployed on servers.

We support two major cloud management platforms, OpenStack and CloudStack. To use them with ProActive, clone and build the connectors from the dedicated repository.



Then drop the jars into the addons folder of ProActive Scheduler and enable them in the configuration by adding (a sketch of the resulting file follows the list):


  • org.ow2.proactive.iaas.cloudstack.CloudStackInfrastructure to the config/rm/nodesource/infrastructures configuration file (CloudStack)
  • org.ow2.proactive.iaas.openstack.NovaInfrastructure to the config/rm/nodesource/infrastructures configuration file (OpenStack)
  • org.ow2.proactive.iaas.IaasPolicy to the config/rm/nodesource/policies configuration file (deployment policy)
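
Assuming these files follow the usual layout of one fully qualified class name per line, config/rm/nodesource/infrastructures would then end with:

org.ow2.proactive.iaas.cloudstack.CloudStackInfrastructure
org.ow2.proactive.iaas.openstack.NovaInfrastructure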

Once enabled, you can either deploy the infrastructure manually using the ProActive Resource Manager interface or configure your computations to deploy the infrastructure on demand.




In order to launch your computations on virtual machines, we need to configure them so that the ProActive daemon is launched when the VM boots. For this purpose we use “user data” (see cloud-init) to pass a launching script to the VM instance, or just preconfigured images. E.g. for a CloudStack infrastructure, ProActive Scheduler launches the daemon using the following script (this daemon will be used later by the Scheduler to run computations on this host):


This script uses a pre-installed ProActive, but it can be modified to download ProActive automatically. Once your VMs are up and running, you can submit jobs to them through ProActive Scheduler.

Sometimes it is important to launch a set of VMs for particular computations on demand and prevent other jobs from being scheduled on these hosts. We developed a special deployment policy (IaasPolicy) for this purpose (see our doc for details). It scans the queue of jobs in the scheduler and triggers the infrastructure deployment for jobs with special markers. The infrastructure will be protected by a special token (see the first parameter in the generic information below), and only jobs having it will be scheduled there. Here is an example of such a job:




For this job, the policy will start the infrastructure described in the generic information. Once deployed, the scheduler launches tasks on these computing resources and turns them off at the end of the computations. It is also possible to control the exact moment of resource deployment/undeployment from a workflow, but this will be discussed in another post.