Friday, August 11, 2017

Activeeon Job Planner: in depth

In the previous article, we presented the new Job Planner, a tool to execute jobs periodically. Now let's explore how to add exceptions, that is, how to include or exclude specific dates and be more flexible.
For example, if you want to send analytics reports every Monday except on days off, you can create a calendar that skips those days. Or, if you execute a job once a month but need two extra runs, one at the beginning and one at the end of the summer, that is possible too.
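As an illustration, here is a minimal Groovy sketch that builds such a calendar definition as JSON. The field names (cron, include_dates, exclude_dates) and the way the calendar is registered are illustrative assumptions, not the exact Job Planner schema; check the Job Planner documentation of your ProActive version.

    import groovy.json.JsonOutput

    // Minimal sketch of a calendar with inclusion/exclusion dates.
    // Field names are illustrative assumptions, not the exact schema.
    def calendar = [
        name         : "weekly-report",
        cron         : "0 8 * * MON",                 // every Monday at 08:00
        exclude_dates: ["2017-12-25", "2018-01-01"],  // skip days off
        include_dates: ["2017-06-01", "2017-08-31"]   // extra runs around summer
    ]

    println JsonOutput.prettyPrint(JsonOutput.toJson(calendar))
    // The resulting JSON would then be registered with the Job Planner,
    // e.g. through its REST API (endpoint paths depend on your installation).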

How to use it?

Saturday, August 5, 2017

Pilot with your Applications (ETL, CI/CD, etc.)

As with any software solution nowadays, the key to company success is the ability to integrate all solutions into one. Indeed, as technology evolves, software solutions are becoming more and more specialized and require tight integration with each other to bring true value.

ActiveEon ProActive has been developed with an open approach. The solution is easy to integrate into any architecture and to connect to external services. To be more precise, the solution has a comprehensive open REST API that lets any third-party application integrate with it. Conversely, tasks can be developed in any language to execute REST calls or simply run a command line on the relevant host.
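For instance, a minimal Groovy sketch of a third-party application talking to the scheduler REST API could look like the following. The host name and credentials are placeholders, and the endpoint paths should be checked against the REST API documentation of your ProActive version.

    // Log in against the ProActive REST API and list jobs.
    def base = "http://proactive-server:8080/rest"

    // POST form-encoded credentials; the response body is a session id.
    def login = new URL("$base/scheduler/login").openConnection()
    login.requestMethod = "POST"
    login.doOutput = true
    login.setRequestProperty("Content-Type", "application/x-www-form-urlencoded")
    login.outputStream.withWriter { it << "username=admin&password=admin" }
    def sessionId = login.inputStream.text

    // Reuse the session id in the 'sessionid' header for subsequent calls.
    def jobs = new URL("$base/scheduler/jobs").openConnection()
    jobs.setRequestProperty("sessionid", sessionId)
    println jobs.inputStream.text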

Benefits of using ProActive as a pilot

Without going into too much detail, some of the benefits of using ProActive to pilot third-party software are:

  • Shared resources: allocate resources dynamically depending on each service and application's needs,
  • Priority: preempt resources from low-priority tasks to give them to urgent ones,
  • Multi-language: automate business workflows through custom-made scripts written in the most suitable language,
  • Automation of preparation tasks before starting third-party services,
  • Error handling: monitor and manage errors at a higher level for more control.

How simple is it?

Nowadays, most companies provide ways to connect to their systems through APIs, CLIs or SDKs.

  • API - If the company provides an API, a ProActive task will be responsible for connecting and submitting REST calls to the ETL endpoints. In Groovy, it is as simple as json = ['curl', '-X', 'GET', '--header', '', 'http://api.giphy.com/v1/stickers/search?q='+input+'&api_key=dc6zaTOxFJmzC'].execute().text (see the expanded sketch after this list).
  • SDK - If the company provides a library in Java/Python/etc., a ProActive task in the relevant language will be responsible for connecting and submitting the relevant requests to the ETL service. In that case, the library will have to be loaded within the ProActive folder or made available through a fork environment such as Docker.
  • CLI - If the company provides a CLI, a ProActive task will be responsible for connecting and submitting requests to the CLI service. In that case, a selection script may be used to select the relevant host and execute the command on it or, as explained above, a Docker container with the relevant SDK can be used.
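To make the API case above concrete, here is a slightly expanded, self-contained Groovy sketch of the one-liner shown in the list; the input value and the Accept header are illustrative additions.

    import groovy.json.JsonSlurper

    // Expanded version of the inline API example: call a REST endpoint
    // from a Groovy task and parse the JSON response.
    def input = "cat"  // illustrative placeholder for a task input variable
    def url = "http://api.giphy.com/v1/stickers/search?q=${input}&api_key=dc6zaTOxFJmzC"

    def json = ['curl', '-X', 'GET', '--header', 'Accept: application/json', url].execute().text

    // Parse the response so downstream tasks can work with it.
    def parsed = new JsonSlurper().parseText(json)
    println "Received ${parsed.data?.size() ?: 0} results"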
Do not hesitate to try these solutions on our user platform.

Successful customer integrations

Thursday, July 27, 2017

Accelerating machining processes



The MC-SUITE project proposes a new generation of ICT-enabled process simulation and optimization tools, enhanced by physical measurements and monitoring, that can increase the competitiveness of the European manufacturing industry by reducing the gap between the programmed process and the real part.

Automation of the full machining process using ProActive workflows


Figure 1: The full machining process

The workflow controls the complete execution of all tools involved in the virtual machining process and automatically manages file transfers between tool executions. Figure 1 depicts the graphical representation of the orchestration XML file.

Using dataspaces is crucial since tasks are submitted to ProActive nodes that may live remotely. Therefore, the files required by a task must be placed in the scheduler dataspace so that they are automatically transferred to the running task's temporary directory. To achieve that, ProActive provides dedicated tags (transferFromUserSpace, transferToUserSpace, ...). Moreover, files are referenced from the task script by file name, i.e. without specifying the path.
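To illustrate the last point, a task script simply opens transferred files by name, since ProActive copies declared input files into the task local space before the script runs; the file names below are illustrative.

    // Sketch of a task script reading and writing transferred files by name.
    // File names are illustrative; no path is needed inside the local space.
    def input  = new File("part_geometry.stp")      // transferred from the user space
    def report = new File("machining_report.txt")   // declared as an output file

    report.text = "Input file size: ${input.length()} bytes\n"
    // Output files declared with transferToUserSpace are copied back
    // to the dataspace automatically once the task completes.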

This workflow suffers from a lack of automation. Indeed, the CAD task pops up a GUI, requiring parameters to be set for the Himill configuration file generation. This step breaks full automation of the procedure. To tackle that, we proposed an updated version of the workflow: first, by migrating all the CAD parameters into the workflow parameters section, which is easily achieved since the orchestration code follows the XML syntax and is clearly separated from the functional code, and by removing the CAD task and the CAD installation path parameter; then, by adding a Groovy section that dynamically generates the Himill configuration file according to the workflow parameters. Each task supports most of the main programming languages, and we used Groovy, which offers convenient methods to easily work with .ini files.
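A minimal sketch of such a Groovy section is shown below. The parameter, section and key names are illustrative, not the actual Himill schema; in a real ProActive task the workflow parameters would be read from the variables map rather than the hard-coded map used here to keep the sketch standalone.

    // Generate an .ini configuration file from workflow parameters.
    // Names are illustrative; a real task would read the 'variables' map.
    def variables = [TOOL_DIAMETER: "8.0", SPINDLE_SPEED: "12000", FEED_RATE: "950"]

    def ini = new StringBuilder()
    ini << "[Tool]\n"
    ini << "diameter=${variables.TOOL_DIAMETER}\n"
    ini << "[Process]\n"
    ini << "spindle_speed=${variables.SPINDLE_SPEED}\n"
    ini << "feed_rate=${variables.FEED_RATE}\n"

    new File("himill_config.ini").text = ini.toString()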




Friday, July 21, 2017

High Availability / Disaster Recovery Plan

Today let's discuss high availability (HA) or, more precisely, disaster recovery plans (DRP).

As with any system, downtime can have major consequences for businesses. This quick article discusses two simple ways of achieving HA for ProActive.

Overall architecture

There are multiple ways for ProActive to be configured for High Availability (HA) / Disaster Recovery Plan (DRP).

  • ProActive stores its state in a database and includes an abstraction layer to configure the database connection. By default, the database is embedded within the ProActive folder. The objective is consequently to connect ProActive to an HA database (e.g. MariaDB can be configured this way, as can AWS RDS, etc.).
  • With the state stored in an external database, it is important to monitor the behavior of ProActive itself. If it does not respond, it can be restarted, which will restart the scheduler and the various interfaces and reconnect to the database; see the watchdog sketch below.
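As an illustration of the second point, a minimal watchdog sketch in Groovy could poll the REST API and trigger a restart when the server stops responding. The URL, polling endpoint and restart command are assumptions to adapt to your deployment.

    // Watchdog sketch: poll ProActive and restart it when unresponsive.
    // URL and restart script path are assumptions for illustration.
    def healthUrl = "http://proactive-server:8080/rest/scheduler/status"

    def alive = {
        try {
            def conn = new URL(healthUrl).openConnection()
            conn.connectTimeout = 5000
            conn.readTimeout = 5000
            conn.responseCode > 0   // any HTTP answer means the server is up
        } catch (IOException ignored) {
            false
        }
    }

    while (true) {
        if (!alive()) {
            // ProActive ships start scripts in its bin/ directory;
            // the exact path depends on your installation.
            ["/opt/proactive/bin/proactive-server"].execute()
        }
        sleep 60000   // check once a minute
    }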

Below are two simple examples.

Monday, July 10, 2017

Introduction to Job Planner

Job Planner Methods

  • polling information from other websites,
  • updating your data regularly,
  • performing verifications and maintenance,
  • checking for new files in a folder,
  • etc.

In this article, we will review these methods and go deeper into the last one: the Job Planner.

Friday, June 23, 2017

Workflow Catalog through Examples

Introduction with an example: workflow lifecycle management

Today let’s discover the new workflow catalog from ProActive. In a few words, the Workflow Catalog is a ProActive component that provides storage and versioning of Workflows through a REST API.

For a simplified explanation, consider an example of ProActive usage with three buckets. Each bucket represents a different stage of the workflow lifecycle. For instance, workflow1, in the development bucket, has been edited three times so far. Each edit corresponds to a revision. Everyone who has access to the same bucket can read and write all of its workflows and their revisions.

A few use cases:

How would you handle sharing workflows? Since buckets can be accessed by several users, transferring workflows between buckets simplifies the sharing process.

What about when you need a specific workflow among hundreds? Don't worry, use the search tool to narrow the returned list. Parameters such as owner can be used, and other custom fields are also available thanks to generic information (e.g. infrastructure, language, etc.).

You found new bugs in the latest workflow revision? The delete function can remove a selected revision to roll back to a previous version.

Now let's try some of these functions:
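Below is a minimal Groovy sketch against the Workflow Catalog REST API. The host, bucket id and endpoint paths are assumptions to check against the catalog documentation of your ProActive version, and authentication headers are omitted for brevity.

    import groovy.json.JsonSlurper

    // List buckets and the workflows of one bucket through the catalog API.
    // Host and endpoint paths are assumptions; auth is omitted for brevity.
    def base = "http://proactive-server:8080/workflow-catalog"
    def slurper = new JsonSlurper()

    // List all buckets (e.g. development, test, production).
    def buckets = slurper.parseText(new URL("$base/buckets").text)
    buckets.each { println "bucket: ${it.name}" }

    // List the workflows (and hence their revisions) stored in one bucket.
    def bucketId = 1  // illustrative id
    def workflows = slurper.parseText(new URL("$base/buckets/$bucketId/workflows").text)
    workflows.each { println "workflow: ${it.name}" }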



Tuesday, April 4, 2017

Legal & General Use Case

Resource consumption optimization and cloud leverage

Financial institutions are heavy consumers of computing resources for the calculation of risks, opportunities, etc. They take advantage of schedulers to ensure good distribution of workloads onto their existing infrastructure and to minimize computing needs. Today let's focus on the Legal & General (L&G) case study and their transition to the new generation of open source schedulers.

Background and Specifications

Legal & General Group plc is a British multinational financial services company headquartered in London (UK). Its products include life insurance, general insurance, pensions and investments. It has operations in the United Kingdom, Egypt, France, Germany, the Gulf, India, the Netherlands and the United States. Its market capitalisation is £13.5bn and it has £746bn of assets under management.

Technologically, L&G used to base its Economic Capital and Solvency II simulations on IBM AlgoBatch. Their objective was to migrate from a private datacenter and Tibco DataSynapse to the Azure Cloud and a hybrid scheduling solution. As part of this migration, the requirements were to handle Solvency II analysis on 2.5 million Monte Carlo scenarios, dynamically define and prioritize workloads, and minimize time to delivery of results.