One of the new features of the 6.0 version of ProActive Workflows & Scheduling is the Workflow Studio. You might have known it as an Eclipse RCP application, i.e. a rich client application, but we rewrote it from scratch as a web application to simplify its usage. No more installation: it comes packaged with ProActive and is started along with the Scheduler and Resource Manager portals.
If you haven’t tested it yet, we encourage you to do so on our demo platform.
The new Workflow Studio is also a new technical challenge, as we developed it in Javascript whereas the existing portals were built with GWT. Enter the magic world of JS! Dozens of libraries to choose from! A new framework to use every day!
And they say Maven is evil...
Nevertheless, we can acknowledge that coding in Javascript is much more pleasant now with all these tools and libraries, but it still requires good engineering practices like testing.
Since the Workflow Studio is mostly targeted at end users, UI testing makes sense and we chose to go that way for now. This need also arose from the frustration of repeating the same manual tests again and again. Why not try to automate them?
We can distinguish two approaches to testing web applications. The first is to use a real browser like Firefox or Chrome and drive it with a tool like Selenium to perform actions (click, type text…) and checks. The rise of Javascript has also pushed in favor of the second, even faster approach: headless browsers like PhantomJS.
As we wanted to be as realistic as possible, we chose for now to use Selenium, so that we can run tests with real browsers and easily follow the test execution. We also picked Nightwatch.js as the test framework: it uses Selenium underneath and provides the test runner as well as some useful commands and assertions. Let’s look at a concrete example:
module.exports = {
    "Login": function (browser) {
        browser
            .url("http://trydev.activeeon.com/studio")
            .waitForElementVisible('button[type=submit]')
            .setValue('#user', 'user')
            .setValue('#password', 'pwd')
            .click('button[type=submit]')
            .waitForElementVisible('.no-workflows')
            .assert.containsText('.no-workflows', 'No workflows are open')
            .end();
    }
};
We simply define a test that opens a URL, checks that the submit button is there, fills in the login form and submits it. Then we check that the login succeeded. Now to run it:
$ nightwatch -t specs/login.js
[Login] Test Suite
==================
Running: Login
✔ Element <button[type=submit]> was visible after 1221 milliseconds.
✔ Element <.no-workflows-open-help-title> was visible after 974 milliseconds.
✔ Testing if element <.no-workflows-open-help-title> contains text: "No workflows are open".
OK. 3 total assertions passed. (5.598s)
What happens here is that Nightwatch.js will start Selenium (or connect to an existing Selenium server), tell Selenium to launch a browser (you will actually see Firefox starting) and then drive it to perform the test’s actions.
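For reference, here is a minimal sketch of what the nightwatch.json configuration behind this behaviour could look like (the jar path, folder name and browser choice are illustrative, adjust them to your setup):

{
    "src_folders": ["specs"],
    "selenium": {
        "start_process": true,
        "server_path": "bin/selenium-server-standalone.jar"
    },
    "test_settings": {
        "default": {
            "selenium_host": "localhost",
            "selenium_port": 4444,
            "desiredCapabilities": {
                "browserName": "firefox"
            }
        }
    }
}

Setting start_process to false and pointing selenium_host at a running server makes Nightwatch.js reuse an existing Selenium instance instead of spawning one for each run.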
Writing such tests is made easy with tools like Selenium and Nightwatch.js; however, they are often very fragile, easy to break if the UI changes or if something is slower than usual. A few good practices can help make them more robust:
Rely on ID selectors. Selectors are used to pick the elements you want to perform actions on. IDs should be used to target a specific element, like '#login-submit-button', instead of selecting any form button of type submit on the page. An ID is also more stable, whereas a CSS class can easily change.
Avoid duplication. This is just standard coding practice but it is even more important here. Share common pieces of code in your tests to make them more stable. All of our Workflow Studio tests log in and then perform additional actions, so this series of login steps should be shared across all tests. Nightwatch.js lets you write custom commands and assertions for this purpose (http://nightwatchjs.org/guide#custom-commands). In our tests, we have a login command, an open workflow command, a create task command and also assertions to check the content of a notification, for instance (see the sketch after this list).
Do not sleep! As often with asynchronous testing, you might be tempted to wait at some point for some action to complete. With UI tests you often have to wait for an element to be displayed before clicking on it. And you should wait, but by using the provided methods instead of sleeping for a fixed number of seconds. Active waiting (checking periodically with a timeout) is more efficient and will help you understand failures.
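To give an idea, here is a sketch of what such a custom login command could look like, reusing the selectors from the example above (the timeouts and file layout are illustrative, and the commands folder has to be declared via custom_commands_path in the configuration):

// commands/login.js
exports.command = function (username, password) {
    // active wait: fail fast if the login form never shows up
    this
        .url('http://trydev.activeeon.com/studio')
        .waitForElementVisible('button[type=submit]', 5000)
        .setValue('#user', username)
        .setValue('#password', password)
        .click('button[type=submit]')
        .waitForElementVisible('.no-workflows', 5000);
    return this;
};

A test can then simply start with browser.login('user', 'pwd') before performing its own actions and assertions.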
Once you have a few tests, you also want to make sure they are executed as part of your continuous integration. The NodeJS plugin is quite handy to get npm installed on your Jenkins slaves. Since your CI slaves probably don’t have desktop environments and graphical servers running, Xvfb is a good solution to run the browsers on machines without X. Here is our simple job that runs the tests:
# start a virtual framebuffer X server on display :99
/usr/bin/Xvfb :99 -ac -screen 0 1280x1024x8 &
export DISPLAY=:99
# install the dependencies and run the UI tests
npm install
cd test/ui && ../../node_modules/nightwatch/bin/nightwatch
# stop the virtual X server
pkill Xvfb
It just starts a virtual X server and runs the tests. Here we simply test against our test platform, which is deployed frequently. The Jenkins job is configured to pick up the JUnit test reports generated by Nightwatch.js, and the screenshots captured in case of failure are also available via the JUnit Attachments plugin.
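The relevant part of the Nightwatch.js configuration could look like the sketch below (folder names are illustrative, and the on_failure option depends on the Nightwatch.js version): the JUnit XML reports are written to output_folder and a screenshot is taken whenever an assertion fails.

{
    "output_folder": "reports",
    "test_settings": {
        "default": {
            "screenshots": {
                "enabled": true,
                "on_failure": true,
                "path": "screenshots"
            }
        }
    }
}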
Hopefully the automated tests will pass and provide useful feedback for ongoing developments!