Monday, April 14, 2014

If ProActive, no Heartbleed

Heartbleed?

To understand the rationale behind the bug called "Heartbleed", we first need to cover a few other concepts.
First, SSL is an encryption technology used to protect the privacy of web users while they transmit information over the Internet. It was first introduced by Netscape in 1994.
There are several implementations of the SSL protocol. One of them is the popular OpenSSL library, which also implements TLS, the successor of SSL. It is OpenSSL's TLS implementation that is buggy and hence vulnerable to attacks.
Since the bug lies in the heartbeat mechanism of that implementation, it was named the "heartbleed" bug.

What services are affected? 

The affected component is the TLS implementation provided by OpenSSL. HTTPS servers that use OpenSSL are affected, as their HTTPS stack relies on the buggy TLS implementation (and its buggy heartbeat extension).
OpenSSH also uses OpenSSL (mainly for key generation functions), but not the buggy TLS component, so it is not affected. There is no need to worry about SSH being compromised, though it is still a good idea to update OpenSSL to 1.0.1g or 1.0.2-beta2 (you do not need to replace your SSH key pairs).

What are the OpenSSL affected versions?

OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable.
OpenSSL 1.0.1g is NOT vulnerable.
OpenSSL 1.0.0 branch is NOT vulnerable.
OpenSSL 0.9.8 branch is NOT vulnerable.
What is the estimated number of affected servers?

The two most popular web servers, Apache and nginx, both use OpenSSL. Together, the sites they serve account for about two-thirds of the web, so a very large number of servers was potentially exposed.

How does it impact ProActive?

ProActive itself is not impacted, as it does not depend on this implementation of SSL at all.
However, we often see our web portals exposed through nginx to beautify the URLs at which the portals are served. In such cases (knowing that nginx uses OpenSSL), we encourage sysadmins to check whether the installed version of the OpenSSL library is affected by the bug and, if so, to upgrade it, renew all HTTPS certificates, and ask users to renew their passwords.
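A quick way to see which version is installed on a machine (a minimal Groovy one-liner; it simply shells out to the openssl binary):

   // prints something like "OpenSSL 1.0.1f 6 Jan 2014"; 1.0.1 through 1.0.1f are the vulnerable versions
   println "openssl version".execute().text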

Are there available tests?

Yes, there are a couple of online tools that let you check whether a given HTTPS server is vulnerable.

Tuesday, March 25, 2014

ProActive Cloud Connectors

ProActive Scheduler is able not only to launch computing jobs on the infrastructure where it is installed, but also to create infrastructures on demand. This is often useful when you want to isolate your computations, have root access to the machines, or use exotic software for your computations.

At the core of ProActive Scheduler stands a component responsible for managing resources. Resources are grouped into node sources, each sharing the same infrastructure and access policy. A node source can be a set of desktop machines available through the network or a cloud infrastructure dynamically deployed on servers.

We support two major cloud management stacks, OpenStack and CloudStack. To use them with ProActive, clone and build the dedicated repository.



Then drop the JARs into the addons folder of ProActive Scheduler and enable them in the configuration by adding the following (an example of the resulting file follows the list):


  • org.ow2.proactive.iaas.cloudstack.CloudStackInfrastructure to the config/rm/nodesource/infrastructures configuration file (CloudStack)
  • org.ow2.proactive.iaas.openstack.NovaInfrastructure to the config/rm/nodesource/infrastructures configuration file (OpenStack)
  • org.ow2.proactive.iaas.IaasPolicy to the config/rm/nodesource/policies configuration file (deployment policy)
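Each of these configuration files is a plain list of class names, one per line. After the change, the infrastructures file might contain, for instance (a sketch; the pre-existing entries of your installation may differ):

   org.ow2.proactive.iaas.cloudstack.CloudStackInfrastructure
   org.ow2.proactive.iaas.openstack.NovaInfrastructure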

Once enabled, you can either deploy the infrastructure manually using the ProActive Resource Manager interface or configure your computations to deploy the infrastructure on demand.




In order to launch your computations on virtual machines, we need to configure them so that the ProActive daemon is launched when the VM boots. For this purpose we either use “user data” (see cloud-init) to pass a launching script to the VM instance, or simply use preconfigured images. For the CloudStack infrastructure, for example, ProActive Scheduler passes a launching script that starts the daemon (this daemon will be used later by the Scheduler to run computations on this host).


Such a script relies on a pre-installed ProActive, but it can be modified to download ProActive automatically. Once your VMs are up and running, you can submit jobs to them through ProActive Scheduler.

Sometimes it’s important to launch a set of VMs for particular computations on demand and to prevent other jobs from being scheduled on these hosts. We developed a special deployment policy (IaasPolicy) for this purpose (see our doc for details). It scans the queue of jobs in the scheduler and triggers the infrastructure deployment for jobs carrying special markers in their generic information. The deployed infrastructure is protected by a special token, given as the first parameter of that generic information, and only jobs holding the token will be scheduled there.




For such a job, the policy will start the infrastructure described in its generic information. Once it is deployed, the scheduler launches tasks on these computing resources and turns them off at the end of the computations. It is also possible to control the exact moment of resource deployment and undeployment from a workflow, but this will be discussed in another post.

Monday, March 17, 2014

Native script engine with JSR 223 and ProActive

Behind the cryptic name of JSR 223 lies an interesting feature of Java: the ability to run script engines inside the JVM. You might know dynamic languages that integrate with the JVM using this JSR, such as Groovy, Jython or JRuby.

At Activeeon, we also use this capability to enable users to customize parts of a workflow. It is possible to specify pre, post, selection and cleaning scripts using JSR 223. These scripts are often used to customize the task execution, or to set up or clean the environment before running the task.

Last year we introduced a new type of task where you can directly write scripts. It is known as the script task and it again leverages JSR 223 to integrate easily with scripting languages. You can quickly test it on try.activeeon.com by following the quick start tutorial, which uses the JDK JavaScript engine to run the well-known Hello World example.

JSR 223 is an interesting and quite old (8 years!) specification. It lacks a few features that probably prevented it from becoming more popular. For instance, there is no way to restrict the classes and methods the script runtime has access to, i.e. sandboxing. This can become problematic if you intend to run scripts provided by end users, as it is a big hole in your system's security. In ProActive, it is not really an issue, as users tend to have full access to the system. Most installations are dedicated to a small set of power users and not open to everyone, and system policies are often in place to prevent abuse. For others it can become a problem, and the Riviere Groovy User Group recently organized a hackathon to secure Groovy script execution. The solution is specific to Groovy and goes beyond JSR 223; you can find more details here.

As I mentioned at the beginning, several dynamic languages running on the JVM are supported as script engines, but surprisingly I could not find any implementation that supported Bash. Sometimes you have existing scripts that you would like to use inside a ProActive workflow. Sometimes Bash is just the right tool to do simple things (pipes to the rescue!). Well, Bash is still a scripting language, so why not create a JSR 223 implementation for Bash? And, by the way, could it work for BAT scripts too?

Ta-dah! Here comes the JSR 223 for native scripts: https://github.com/youribonnaffe/jsr223-nativeshell

It enables you to create a script engine inside the JVM that will handle BAT and Bash scripts. For instance, it means that you can write ProActive script tasks whose body is a plain Bash script.


The implementation is fairly simple and you can take a look at the source code on GitHub. The idea is simply to take the script, write it to a file and run a native process, Bash or cmd.exe, with this script file as a parameter. My first version passed the script content directly as a parameter, but I quickly hit a limitation on the size of the command line… You can also easily access the output of the script using JSR 223.

You can easily test it by cloning the project, building it, and invoking the script engine directly.
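As a minimal sketch of direct usage through the standard javax.script API (assuming the JAR is on the classpath and the engine registers itself under the name "bash"; check the project's README for the exact names):

   import javax.script.ScriptEngineManager

   // look up the native shell engine and evaluate a trivial Bash script
   def engine = new ScriptEngineManager().getEngineByName("bash")
   engine.eval('echo "Hello from Bash, running as $(whoami)"')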


One interesting feature of script engines is the ability to pass data in and out, a.k.a. bindings. In the case of native scripts, you can define bindings that will be visible from the native script as environment variables. Because the script runs in a separate native process, there is no easy way to modify bindings within the script and get them back once the script engine exits; the only output you get from a native process is its exit code. As a workaround, you can always write to a file and read it back in your application. A few common Java objects are supported as bindings, as detailed here.
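For instance, a binding set through the standard API shows up as an environment variable inside the script (a sketch; the variable name GREETING is just an illustration):

   import javax.script.ScriptEngineManager

   def engine = new ScriptEngineManager().getEngineByName("bash")
   // the binding GREETING becomes the environment variable $GREETING in the script
   engine.put("GREETING", "hello")
   engine.eval('echo "$GREETING from Bash"')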

If you want to integrate this particular script engine in your application, it is just a matter of adding the JAR to the classpath (no external dependencies required). To integrate it with ProActive, you need to have this JAR file on every ProActive node. The script engines are automatically detected, and you can access them by name or by extension.
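To see which engines are visible from your classpath, together with the names and extensions they answer to, you can list the factories (plain javax.script API):

   import javax.script.ScriptEngineManager

   // print every detected engine with the names and file extensions it registers
   new ScriptEngineManager().engineFactories.each {
       println "${it.engineName}: names=${it.names}, extensions=${it.extensions}"
   }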

Tuesday, February 25, 2014

OpenStack & VMWare VM Disk Migration

This short tutorial shows a way to perform VM migration from an OpenStack infrastructure to a VMWare infrastructure, and vice versa. By migration we do not mean hot, on-the-fly migration of a VM, but rather VM disk migration. Please note that this is just one out of many possible procedures; if you have found something better, please let us all know.

Migration from OpenStack to VMWare


Current versions of OpenStack do not allow retrieving VM images in a container format (such as OVF or VMX). However, it is possible to obtain their disk images, which is enough for us.

1. Obtain VM disk image from OpenStack infrastructure

It is usually located in the Glance image directory (for example /opt/stack/data/glance/images/). We assume the format is QEMU QCOW Image (v2). The OpenStack Image Service API (v2.0) allows you to download the VM disk image through an HTTP GET at v2/images/{image_id}/file .

You can also use the glance client (see the Annexe for further information on installation and version):

   $ glance -I <user> -K <pass> -T <tenant> -N <url>/v2.0 image-download --progress --file <vmImageFile> <imageId>

2. Convert vmdisk.qcow2 into vmdisk.vmdk (the VMDK format, supported by VMWare)

   $ qemu-img convert -f qcow2 -O vmdk vmdisk.qcow2 vmdisk.vmdk

3. Create a vm.vmx container for vmdisk.vmdk

When creating the vm.vmx metadata file, make sure the disk file vmdisk.vmdk is in the same directory. There are plenty of parameters VMWare allows you to set; you can start with these (see the documentation for more information):



config.version = "6"
memsize = "1024"
displayName = "vm"
scsi0.present = "true"
scsi0.sharedBus = "none"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "true"
scsi0:0.fileName = "vmdisk.vmdk"
scsi0:0.deviceType = "scsi-hardDisk"
virtualHW.productCompatibility = "hosted"

4. Import the VM image (vm.vmx + vmdisk.vmdk) into VMWare infrastructure

   $ ovftool --powerOn vm.vmx vi://<user>:<password>@<ESXi-server>/

That's all. You should see your new VM called "vm" (see the .vmx file, property displayName) booting.

Migration from VMWare to OpenStack


We will now do the opposite: take a VM from VMWare and make it run on an OpenStack infrastructure.

1. Obtain the OVF container file of the VM from the VMWare infrastructure

   $ ovftool --powerOffSource vi://<user>:<password>@<ESXi-server>/<vmName> vm.ovf

This will download the OVF file together with some other files, such as the VMDK disk file, which usually has a name like vm-disk1.vmdk.

2. Convert VM disk file into a QCOW2 format disk file (supported by OpenStack)

   $ qemu-img convert vm-disk1.vmdk -O qcow2 vm-disk1.qcow2

Note: use qemu-img v1.2 or newer to avoid a weird sector error. See the Annexe for further information on version and installation.

3. Import the QCOW2 disk image into Glance 

   $ glance -I <user> -K <pass> -T <tenant> -N <url/v2.0> image-create --name <newVmName> --disk-format=qcow2 --container-format=bare --file <srcVmImageFile>

4. Start the VM using the OpenStack API

Use Horizon (the OpenStack web portal) to launch a VM using the image you just imported.
You can also do it from the command line using the nova client:

   $ nova --os-username <user> --os-password <password> --os-tenant-name <tenant> --os-auth-url <url/v2.0> boot --image <imageId> --flavor <newVmFlavorId> <newVmName>

Annexe

Install glance client (v0.12.0.49)

   $ git clone https://github.com/openstack/python-glanceclient.git
   $ cd python-glanceclient
   $ sudo pip install pbr
   $ python setup.py install

Install nova client (v2.15.0.237)

   $ git clone https://github.com/openstack/python-novaclient.git
   $ cd python-novaclient/
   $ sudo python setup.py install

Install qemu (v1.2.0)

   $ sudo apt-get install libglib2.0-dev
   $ wget http://wiki.qemu.org/download/qemu-1.2.0.tar.bz2
   $ tar xfj qemu-1.2.0.tar.bz2
   $ cd qemu-1.2.0
   $ ./configure && make qemu-img

Some other references

Here is a very interesting article I found about VM image containers and disk formats.

That's all folks. 

Monday, September 16, 2013

Distributed Groovy with ProActive: GPars - Part 2


Since we leveraged ProActive to execute remote Groovy closures in the previous blog post, in this one we will integrate ProActive with GPars to distribute concurrent workloads.
GPars is a nice library written in Groovy for developing parallel applications. It leverages closures to provide a nice and concise API, something that we will “soon” benefit from in Java with the Lambda project. GPars covers many concepts of concurrent programming such as concurrent collections, actors, fork/join, agents and STM. Here we will focus on concurrent collection processing.

GPars, in its implementation details, relies on threads, plain old-school Java synchronization and, amongst other things, the Fork/Join framework. To avoid changing GPars itself, I’ve chosen to implement the distribution as a decoration of the closure that we want to parallelize. The basic API stays the same and the execution on the client side keeps the same concept, a thread pool that GPars uses to run the closure, except that this time it will call a ProActive node to run the closure and get the result back. This simplistic approach is of course limited, but it is enough to illustrate the concept and the potential of mixing Groovy with ProActive.

Let’s start with an example of what GPars can do:
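A minimal sketch of such an example, using GPars' GParsPool (the original snippet is embedded in the post, so the exact code here is illustrative):

   import groovyx.gpars.GParsPool

   def friends = ["Wally", "Wilma", "Woof"]
   GParsPool.withPool {
       // each friend answers from a different worker thread of the pool
       println friends.collectParallel { "$it is here: ${Thread.currentThread().name}" }
   }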
Here we ask Wally’s friends where they are, and since they are not very disciplined they can all answer at the same time (and they do!). We see that each of them lives in a different thread, yet on the same JVM.
$> [Wally is here: ForkJoinPool-1-worker-1, Wilma is here: ForkJoinPool-1-worker-2, Woof is here: ForkJoinPool-1-worker-3]
Since Wally discovered worm holes (seriously?) his friends are able to travel to remote locations, such as a ProActive node!
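A sketch of the distributed variant; as explained below, ProActivePool mimics GParsPool, so the call site barely changes (its exact API is assumed here):

   // same code as before, but each closure now runs on a remote ProActive node
   ProActivePool.withPool {
       println friends.collectParallel { "$it is here: ${Thread.currentThread().name}" }
   }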

Here we ask again where Wally’s friends are and again they answer as they wish. Now they are in different universes, aka different JVMs.

$> [Wally is here: RemoteClosure on rmi://jily.local:1099/PA_JVM1522417505_GCMNode-0, Wilma is here: RemoteClosure on rmi://jily.local:1099/PA_JVM139013877_GCMNode-0, Woof is here: RemoteClosure on rmi://jily.local:1099/PA_JVM1680423209_GCMNode-0]

If you want to get into the gory details, you can take a look at ProActivePool, which mimics the GParsPool class to add the distributed call to a ProActive node.

With this simple example, we have demonstrated how GPars can be extended with ProActive to actually distribute the parallel computations. Groovy helps us keep a simple API and hide the implementation details from the user.

On a side note, we are adding Groovy as a script engine in ProActive's latest release. Combined with the new script task, we hope it will help you build ProActive applications faster while still benefiting from a tight integration with Java.

Tuesday, July 30, 2013

Distributed Groovy with ProActive - Part 1


Being a new member of the ProActive team, I tried to teach myself a bit about active objects, or at least to create one. Active objects are a core concept of ProActive, a distributed computing framework which is the foundation of the Scheduler and Resource Manager.

I will not detail how active objects work but rather illustrate how they can be used with concrete examples, especially showing how a language such as Groovy can integrate with them to provide a nice and fluent distributed API. Groovy is a dynamic language running on the JVM, providing scripting capabilities and a transparent integration with Java. It also enables you to adopt functional programming in Java, with closures for instance. To me, it is really a Java++ where you can increase your code readability. And if you are afraid of the performance of a dynamic language, well, the latest version brought static typing, so you can still use all the nice features and keep a coding style closer to Java with good performance.

This is the first of two blog posts in which I will explain how to use Groovy with ProActive to execute code in a distributed manner and how GPars can be distributed with ProActive.

The code presented here is available on GitHub and can be compiled and run using the Gradle wrapper (distributed with the sources). Useful commands are gradlew tasks and gradlew build. To build it you will also need the programming library (where the active objects live), available here, and to set the project.ext.programmingLibDir property inside the build.gradle script. The programming library sources are available here.

Let’s start with a very simple example showing the deployment of a remote execution node, the creation of a remote active object on this node and the remote execution of code.

 
This program finds Wally and his friend the Wizard whitebeard. Wally, being a normal guy, lives here in the present world (the main thread), whereas the Wizard, with his magic powers, lives elsewhere (on a remote node). So when you run this program, you will see where each of them lives.

$> gradle WhereIsWallyJava
Wally is here: main 
Wizard whitebeard is here: WallyFingerPointer on rmi://192.168.1.54:1099/PA_JVM862712172_GCMNode-0 

Here we deployed the node locally using a GCM descriptor. WallyFingerPointer is the active object, created by the call to PAActiveObject.newActive(). It is a very simple class with one method that prints the current thread name, from which we can infer Wally’s location. Now, this is all Java code; we will now try to improve it with some Groovy.

Here we have the same program written in Groovy; the output is exactly the same. Let’s explain how it works. We create a WallyFingerPointer object and, using the with() method, we express that all the code inside the closure (inside the braces) will be invoked on the WallyFingerPointer object. It is a shortcut to avoid repeating the variable name when calling several methods on the same object. Then we have the remote code execution using the same closure block, but with a call to a static method remote(). This method takes care of creating the remote deployment context and cleans it up once the closure has executed. The foundHim() calls inside the closure will be executed by the active object. Groovy enables us to hide the technical details around active objects and helps provide a clean API to the user. Of course this is a very limited example; a few technical limitations exist, related to object serialization for instance.
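A sketch of what this Groovy version might look like; WallyFingerPointer, foundHim() and the static remote() helper come from the description above, and their exact signatures are assumed:

   // local execution: with() invokes foundHim() on the WallyFingerPointer instance
   new WallyFingerPointer().with {
       foundHim()
   }

   // remote execution: remote() sets up the deployment, runs the closure on the
   // active object, and cleans up afterwards
   remote {
       foundHim()
   }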

Now, what would be nice is to be able to execute remote closures, because closures are core to Groovy, used everywhere for conciseness (and that will be useful for the second blog post of this series). A closure can be seen as an interface with one method, so it is itself an object, with call() methods, possibly taking parameters. A closure also has access to variables outside its scope, but we will not allow that in our example, to facilitate serialization.

We still use the same program, with the first line printing out the location of Wally (local execution) and the second line using a remote closure. The code inside the braces will be executed on the remote node. The way it works is that we created an active object that has one method taking a closure as a parameter and executing it. The closure object is serialized and sent to the remote node for execution. To serialize the closure, we use the dehydrate() method, which returns a copy of the closure stripped of references to objects outside its scope (the script itself, for example). It is now even simpler to execute remote code with Groovy!
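A minimal sketch under the assumptions above; ClosureRunner is a hypothetical name for the closure-running active object, and node is a ProActive Node obtained from your deployment (the actual class in the GitHub project may differ):

   import org.objectweb.proactive.api.PAActiveObject

   // a hypothetical active object whose single method executes the closure it receives
   class ClosureRunner implements Serializable {
       void run(Closure c) { c() }
   }

   // the closure must not capture variables from its enclosing scope, to stay serializable
   def whereAmI = { println "I am here: ${Thread.currentThread().name}" }

   // dehydrate() returns a copy stripped of owner/delegate references (the script itself, for example)
   def dry = whereAmI.dehydrate()

   // create the active object on the remote node and hand it the closure
   def runner = PAActiveObject.newActive(ClosureRunner, null, node)
   runner.run(dry)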

We have shown a very simple example of remote code execution with ProActive, adapted this code to use Groovy and introduced a way to run remote closures. In the next blog post, we will use these remote closures to turn GPars, a concurrency framework, into a distributed concurrent framework!

Wednesday, June 12, 2013

3D Game Remote Rendering with ProActive


ActiveEon is again on the cloud, providing real solutions to real use cases.

Within the framework of the CompatibleOne project, ActiveEon collaborated with INRIA Sophia Antipolis and Eureva to deliver a proof of concept that represents the future of game computing or, as we like to call it, cloud gaming.

It does not really matter whether your workstation has a powerful GPU able to withstand heavy 3D rendering loads: as long as your Internet connection bandwidth is good enough, the cloud will do the hard work for you.

3D Game Rendering scenario



Launching a game in the cloud has become very simple with the ProActive framework. The game player simply chooses a game from Eureva's Cloud Gaming Client interface. The Eureva Games Broker then receives the request and processes it. Once the constraints are determined, a resource request is sent to the CompatibleOne Broker, which uses a ProActive connector to furnish ProActive-enabled physical resources matching those constraints: CPU, physical memory and the presence of a GPU, among others. After ProActive has booked the appropriate resources from the ProActive Games Infrastructure, the game is launched together with a streaming server that allows your computer to receive a video stream whose heavy frames are processed in the cloud.

Take a look at our video on ActiveEon's YouTube channel and let us know if you see the future as we do.