Wednesday, January 18, 2017

Build Your Employee Self-Service Portal With ProActive

 In the past few years, people have been consuming more and more services to perform their jobs properly. Managing these services is the role of the IT department. However, some employees might want to bypass this department to move faster, which can create what is called “shadow IT”. Others might be frustrated at being queued while waiting for additional checks to be performed.

 A solution to this situation needs to balance IT and non-IT needs by providing a user-friendly interface that is fast for non-IT users (from other departments) and provides governance for IT users. More precisely, this can be achieved by bringing all applications and services together into a single platform. This way, application and service lifecycles can be easily managed, and custom templates can be made available for everyone to use (and create). This allows for faster and more agile deployment, and improves governance by giving visibility over the running services and by enforcing a common standard.

 ProActive Cloud Automation offers a solution through a self-service portal to monitor and manage application and service lifecycles. The IT department can easily create templates which follow business policies and make them available to user groups.

 This portal enables service utilization based on user groups. Each user then accesses services and configuration parameters according to their group-level rights.

 For instance, some teams might use the portal to manage the lifecycle of their EC2 instances (IaaS), others will only need to manage their Swarm or K8s cluster (PaaS), and others, at the SaaS level, will need various end-user services available on demand (e.g. Wordpress, Graylog, etc.).

 Since many services can be created as Docker containers, I will focus on the creation of a generic service that launches a container. It should be adapted to fit your needs (as is, it is an impractical service, but it can be used for everything if the users know how to use it).

Service Creation

 To add a service to the portal:

  • Write a deployment script for the service
  • Put the parameters your users want to modify as variables
  • Add some information about the service
  • Add it to the catalog (a space to store workflows)
The service will then be ready. It is possible to do the same for more than just deployment: deletion, pausing, resuming, etc. After that, the services can be launched from the portal by filling in a few fields.

 To be fully functional with Cloud-Automation, a workflow needs to fulfill a few requirements:

  1. The following Generic Information:
    • pca.service.model: an identification number for the service
    • pca.service.type
    • pca.service.name (for the portal to display)
    • pca.service.description
    • pca.action.type (e.g.: create, delete, …)
    • pca.action.name
    • pca.action.description
    • pca.action.origin_state (null for “every state”)
    • pca.action.icon (the path relative to PROACTIVE_HOME/dist/war/cloud-automation)
  2. The following workflow variables:
    • instance_name
    • infrastructure_name
  3. Variables for the final user (creation workflow only)
  4. An end_deployment task (creation workflow only; see the example below to create it)

Time for the example

First step: base structure

 First, in the creation workflow, add the Generic Information. The important entries are pca.service.model (which must not be in use by another service), pca.action.name : create and pca.action.origin_state : null . Then add the 2 workflow variables and the end_deployment task.
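 For now, the end_deployment task only needs a placeholder. As a minimal sketch, assuming it is implemented as a python script task:

# end_deployment placeholder (sketch): does nothing for now;
# the last step will extend it to check the deployment status
pass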

 Finally, create a main bash task preceding the end_deployment task; it will hold the deployment script.
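 Given the replacements described in the second step below, a sketch of this deployment script could be:

# Deployment script (sketch): start a container named javaContainer from the java image
docker run -d --name javaContainer java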

 It is now time to create the deletion workflow. The important Generic Information entries are pca.service.model (same as for the deployment), pca.action.name : delete and pca.action.origin_state : null . For the body of the workflow, a bash task with the command below is enough.
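 As a sketch, assuming the container name used by the creation workflow:

# Deletion script (sketch): force-remove the container started at creation
docker rm -f javaContainer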

 Now, the service will be usable as soon as the two workflows are in the catalog. The name of the service will be that of the creation workflow.

Second step: (optional) other images

 To be able to select the image when launching the container, add the workflow variable “image” to the creation workflow, then replace all occurrences of “java” with “$variables_image” and of “javaContainer” with “$variables_instance_name” in both workflows. As a side effect, the container name will be set by the user at creation.
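 The deployment script of the creation workflow would then become, as a sketch:

# Deployment script (sketch): the image and container name now come from workflow variables
docker run -d --name $variables_instance_name $variables_image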

Third step: (optional) networking

 First, make sure to target the right host. For that, use a selection script (“checkIP” from the samples, for instance); the target host may be passed as a variable. Then the -P or -p option can be given to docker run to bind ports.

 To configure the ports at launch time, add two workflow variables (container_port and exterior_port for instance) and a task to process the input. This task can be a python task containing the snippet below; it must run before the deployment task.
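 A sketch of that python task, assuming the ProActive variables map is available and using port_option as an illustrative output variable name:

# Port pre-processing task (sketch): build the docker port option
container_port = variables.get("container_port")
exterior_port = variables.get("exterior_port")
if container_port and exterior_port:
    variables.put("port_option", "-p " + exterior_port + ":" + container_port)
else:
    # no ports given: let Docker expose all ports on random host ports
    variables.put("port_option", "-P")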
The deployment script would then be, as a sketch:
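# Deployment script with port binding (sketch, port_option as above)
docker run -d --name $variables_instance_name $variables_port_option $variables_image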

 To offer the choice of which method to use, it is possible to use an “if” control based on a variable. In order to offer a clear choice, expose the variable selecting the behavior as a drop-down menu in the cloud-automation portal. For that, set the default value of the variable to #{direct, precise} .

Fourth step: (optional) volumes

 First, make sure to target the right host. For that, use a selection script (“checkIP” from the samples, for instance); the target host may be passed as a variable. Then the -v option can be given to docker run to mount volumes.

 To select volumes at launch time, add two workflow variables (container_volume and host_volume for instance) and a task to process the input. This task can be a python task containing the snippet below; it must run before the deployment task.
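 A sketch of that python task, again assuming the ProActive variables map and an illustrative volume_option output variable:

# Volume pre-processing task (sketch): build the docker volume option
host_volume = variables.get("host_volume")
container_volume = variables.get("container_volume")
if host_volume and container_volume:
    variables.put("volume_option", "-v " + host_volume + ":" + container_volume)
else:
    # no volume requested
    variables.put("volume_option", "")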

 The deployment script would then be, as a sketch:
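# Deployment script with volume mounting (sketch, volume_option as above)
docker run -d --name $variables_instance_name $variables_volume_option $variables_image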

 To offer the choice of whether or not to use volumes, it is possible to use an “if” control based on a variable. In order to offer a clear choice, expose the variable selecting the behavior as a drop-down menu in the cloud-automation portal. For that, set the default value of the variable to #{Yes,No} .

Last step: errors

 The last step is to make the deployment script aware of its state. However, it is only possible to retrieve the status of a task in a clean script, which cannot write variables. A way around this limitation is to use a post-script, since a post-script is not executed in case of failure.

 This script, associated with the deployment task, can write into a variable (let’s say “success”); the end_deployment task can then read it as shown below.
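 As a sketch, assuming python script tasks and that the post-script has access to the variables map:

# Post-script of the deployment task (sketch): only executed on success
variables.put("success", "true")

# end_deployment task (sketch): fail the service if the post-script never ran
if variables.get("success") != "true":
    raise Exception("Deployment failed")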

Now a concrete example

 Let’s choose Wordpress.

 The description of the official Wordpress container on Docker Hub reads:

$ docker run --name some-wordpress --link some-mysql:mysql -d wordpress
The container for Wordpress is meant to use a mysql container, but I will use a mariadb one instead (it makes no difference here).

 The mariadb container description reads:

$ docker run --name some-mariadb -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mariadb:tag

 For this example, the parameters that will be exposed are:

  • the name of the containers,
  • the password for the database,
  • the versions of the images,
as well as the address of the target host for the services.

 Use them to create the docker run commands, as sketched below.
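 As a sketch, with illustrative variable names (db_name, db_password, db_version, wordpress_version), the deployment script could be:

# Start the database container first, then the linked Wordpress container (sketch)
docker run --name $variables_db_name -e MYSQL_ROOT_PASSWORD=$variables_db_password -d mariadb:$variables_db_version
docker run --name $variables_instance_name --link $variables_db_name:mysql -d wordpress:$variables_wordpress_version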
