Creating a Virtual Environment for the OpenShift Platform Using Oracle VirtualBox

Posted: February 23rd, 2014 | Filed under: Technology | No Comments »


In the past, I have demonstrated several projects and features for OpenShift, Red Hat’s Platform as a Service product. You may be aware of OpenShift Online, available at http://openshift.com, for seamless deployment of applications to the cloud. What you may not be aware of is that OpenShift Online is only one of the OpenShift products available. The OpenShift line of products features three solutions: Online, Origin and Enterprise. While OpenShift Online may suit your public cloud needs, there may be interest in establishing a private or hybrid cloud solution within your enterprise. The best way to become familiar with a product is to install and configure it yourself on a local machine, such as a personal desktop or laptop. Evaluating cloud-based technologies on a single machine typically requires the use of virtual machines to replicate multiple machines or environments. However, applying the appropriate configuration when leveraging multiple virtual machines can be challenging, as they must be able to communicate both with each other and with the external Internet. In the following discussion, we will walk through the process of configuring the supporting infrastructure for a minimal installation of a private OpenShift environment using Oracle VirtualBox.

While there are several virtualization products on the market today, we will utilize Oracle VirtualBox, primarily due to its ease of use and compatibility with multiple platforms. Before we begin the configuration process, let’s discuss what an OpenShift environment consists of. A typical basic installation of OpenShift requires the allocation of two machines: a Broker, which handles user and application management, and a Node, which hosts cartridges and stores the applications deployed by users within gears. Communication between broker and node instances is critical to the performance of the OpenShift environment and presents the greatest challenge when configuring the virtual instances. We need to provide an environment that fulfills the following requirements:

  • Host can communicate with each virtual guest
  • Virtual guests can communicate with each other
  • Virtual guests can access external resources on the Internet

VirtualBox, similar to most virtualization software, supports several network hardware devices that can be run in a variety of networking modes. A full discussion of the network device types and networking modes VirtualBox supports can be found on the VirtualBox website. To accommodate the aforementioned requirements, we will leverage multiple network adapters running in separate networking modes. The first adapter will be configured in host-only mode, which creates a private network shared by the host and every virtual machine attached to it. The limitation of host-only mode is that it provides no access to external resources such as the Internet. To allow each guest to access external networks, a second adapter is configured in NAT networking mode. NAT networking is one of the modes VirtualBox provides for allowing a guest to connect to external resources. It was chosen because it requires the least amount of work and configuration by the end user and is ideal for simple access to external systems. An overall diagram of the virtual machine network architecture can be found below.

VirtualBox Network Diagram
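
For those who prefer to script the setup rather than use the VirtualBox GUI, the same adapter configuration can be applied with the VBoxManage command line tool. This is only a sketch: the VM name openshift-broker is a placeholder, the host-only interface name (vboxnet0 below) varies by platform, and the same two commands would be repeated for the node VM.

# Create a host-only interface on the host (typically appears as vboxnet0 on Linux/OS X)
VBoxManage hostonlyif create

# Adapter 1: host-only network shared by the host, broker and node
VBoxManage modifyvm "openshift-broker" --nic1 hostonly --hostonlyadapter1 vboxnet0

# Adapter 2: NAT for outbound access to the Internet
VBoxManage modifyvm "openshift-broker" --nic2 nat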


OpenShift Camel Component – Manage OpenShift from Camel

Posted: February 10th, 2014 | Filed under: Technology | 3 Comments »


Since its creation in 2007, the Apache Camel project has allowed developers to integrate systems by creating Service Oriented Architecture (SOA) applications using industry-standard Enterprise Integration Patterns (EIP). At its core, Camel is a routing engine builder through which messages can be received and processed. The processing of messages uses a standardized API library backed by an extensive collection of components that abstract the actual implementation of routing and transformation of messages. These components also include the ability to communicate with external systems through an array of protocols ranging from web services and REST to JMS. With more than 80 components in the standard Camel distribution, developers have the flexibility to perform powerful operations with minimal effort to suit their business needs. One of the benefits of the Camel component framework is that it was designed as a factory system where new components can be easily added to solve a business use case.

As more companies and individuals look to migrate their operations and applications into cloud-based solutions, the management of these resources can prove to be a challenge. The overall architecture may be different than what many individuals are accustomed to. The OpenShift Platform as a Service (PaaS) is one such cloud offering where developers can quickly create powerful scaled applications without having to worry about managing infrastructure or complex software installation.

Introducing the OpenShift Camel Component

Imagine being able to communicate with the OpenShift platform directly from a Camel route. With the OpenShift Camel component, it is now possible. Exposed through an Endpoint as both a Camel Consumer and Producer, OpenShift resource details can be retrieved or modified all within a Camel route. Want to find out information about a particular OpenShift Domain, such as the applications and their details? How about automating application deployments? These are only some of the possibilities now available.

Communication between the OpenShift Camel component and OpenShift is facilitated through the REST API exposed by the OpenShift platform, accessed via the OpenShift Java Client. Responses from the OpenShift Camel component are returned as types from the OpenShift Java Client API.
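
Because responses come back as OpenShift Java Client types, a downstream processor can work with that API directly. The following is a minimal sketch assuming the user endpoint returns an IUser instance; the package and accessor names come from the OpenShift Java Client and should be verified against the client version in use.

import org.apache.camel.Exchange;
import org.apache.camel.Processor;

import com.openshift.client.IUser;

public class LogOpenShiftUserProcessor implements Processor {

    public void process(Exchange exchange) throws Exception {
        // Assumption: the OpenShift Camel component returns an OpenShift Java Client IUser
        IUser user = exchange.getIn().getBody(IUser.class);

        // Log the OpenShift login associated with the credentials supplied on the endpoint
        System.out.println("Authenticated as: " + user.getRhlogin());
    }
}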

Using the OpenShift Camel Component in your project

The OpenShift Camel Component can be easily added to a new or existing project. The following steps will describe how this can be accomplished.

GitHub Project: https://github.com/sabre1041/camel-openshift

  1. Fork/Clone the GitHub repository 
    git clone https://github.com/sabre1041/camel-openshift.git
  2. Build the project using Maven
    mvn clean install
  3. Add the component to a new or existing Camel project as a Maven dependency in the project’s POM file.
    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-openshift</artifactId>
        <version>0.0.1-SNAPSHOT</version>
    </dependency>
  4. The following example implementation will retrieve information about the authenticated user and print out their user details to the log:

    Java DSL

    import org.apache.camel.builder.RouteBuilder;
    
    public class PrintOpenShiftUserRoute extends RouteBuilder {
    
        public void configure() throws Exception {
            // Retrieve User Details
            from("openshift://user?userName=<openshift_username>&password=<openshift_password>")
                //Print to Log
                .log("${body}");
        }
    
    }

    Spring XML

    <route>
      <description>Spring XML Route. Full XML omitted for brevity</description>
      <from uri="openshift://user?userName=<openshift_user>&amp;password=<openshift_password>"/>
      <log message="${body}"/>
    </route>

With the steps above complete, the project can be deployed to a running container, or unit tests can be created to verify the expected results. Additional functionality, including how to leverage authentication as message headers, can be found on the GitHub project page.

In an upcoming post, I will demonstrate several uses for the OpenShift Camel component, including how it can be leveraged within a SwitchYard application.


Nexus Cartridge for OpenShift

Posted: November 16th, 2013 | Filed under: Technology | No Comments »

Deploying an application to the OpenShift Platform as a Service (PaaS) can be accomplished in only a matter of minutes due to the wide array of tools available to the developer. One of the tools provided by OpenShift is the ability to add the Jenkins Continuous Integration server to the development lifecycle of an application. The inclusion of Jenkins affords several benefits for both developers and stakeholders of any development project. First, by adding Jenkins to an OpenShift application, the build process is delegated to Jenkins, resulting in far less downtime for the application, which is typically taken offline during the build and deployment process. Second, it brings the Jenkins Continuous Integration server, with its wide array of plugins and large community support, into your environment. Jenkins is one of the components that make up a continuous integration environment: a set of software tools responsible for the assembly and distribution of software artifacts. The primary components of this environment consist of a build process, a version control system, a continuous integration server and, finally, a repository manager. The OpenShift ecosystem contains the majority of these components, with the exception of a place to easily store and access the artifacts assembled as a result of a build. This responsibility is typically delegated to the final component of the continuous integration environment: the repository manager. One of the more popular repository managers available on the market today is Sonatype Nexus. Nexus can serve multiple purposes in an environment: it can be a central location for binary artifacts and their dependencies, act as a proxy between your organization and publicly available repositories, and serve as a deployment destination for internally created artifacts.

Each component of an OpenShift application, whether it is a web framework or a database server, is known as a cartridge, and a wide number of cartridges are readily available today. One of the benefits of the OpenShift cartridge system is the ability to leverage downloadable user cartridges to extend the base set of included cartridges. Offering Nexus as a downloadable cartridge completes the picture for creating a continuous integration environment in the cloud. To learn more about a continuous integration environment and how to build one of your own, refer to the following link: http://www.redhat.com/resourcelibrary/whitepapers/eap-create-compile-and-deployment-environment

The Nexus cartridge project is hosted both on GitHub and on OpenShift and is built using the Cartridge Development Kit (CDK). By leveraging the CDK, an end user can choose the particular build of a cartridge to add to their application. The Nexus cartridge project is available at the following locations:

After selecting the CDK link above, you can use the OpenShift RHC client tools or the web console to add the cartridge to an application based on the information provided. If you are using the client tools, you can use the command presented at the top of the page, which represents the most recent build from the master branch of the project, to create an application. When using the web console, first log in and click Add Application. Downloadable cartridges can be added by entering the location of the cartridge manifest in the textbox provided in the “Code Anything” section. Click Next, enter the name for the newly created application using the Nexus cartridge, and click Create Application. After the cartridge has been created, you will be presented with a confirmation along with the user name and password for the Nexus administrator account. After waiting a few moments to allow the server to come online, you can click the application link provided, which will bring you to the Nexus server start page. More information on how to configure and utilize Nexus can be found at the following documentation page: http://books.sonatype.com/nexus-book/reference/
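
For reference, creating the application with the RHC client tools amounts to passing the cartridge manifest URL to rhc app create. The application name and manifest URL below are placeholders for the values shown on the CDK page.

rhc app create nexus <cartridge_manifest_url>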

At this point, you have a fully functional repository manager hosted in the cloud. In an upcoming post, I will demonstrate how you can create a fully functional continuous integration environment hosted on the OpenShift platform.


Java/JBoss EAP 6.1+ – Power to the Properties

Posted: October 28th, 2013 | Filed under: Technology | 1 Comment »

Separating the configuration parameters of an application from the source code is considered a best practice, as it makes an application more versatile. Instead of being tied to a particular configuration by hardcoding values in the source code, parameters can be loaded externally according to the particular environment the application is running in. In Java, these values are typically stored in properties files on the Java classpath or as system properties within the JVM. Java containers also provide methods and utilities for storing and loading configuration values for use by applications. The following discussion will explore the various methods for using property values on JBoss Enterprise Application Platform version 6.1 and above.

Most Java developers are familiar with the basics of properties files: sets of key/value pair strings stored in files with the .properties extension. A basic example of loading a properties file within an application can be accomplished with the following code:

// Locate the properties file on the classpath using the context class loader
InputStream is = Thread.currentThread().getContextClassLoader()
     .getResourceAsStream(PROPERTIES_FILE_NAME);

Properties properties = new Properties();

try {
    // Load the key/value pairs from the stream; note that getResourceAsStream()
    // returns null if the file cannot be found on the classpath
    properties.load(is);
} catch (IOException e) {
    LOGGER.error(e.getMessage(), e);
} finally {
    try {
        is.close();
    } catch (Exception e) {
        // Ignore failures while closing the stream
    }
}

The most common method of using a properties file is to package it within an application archive such as a Java archive (JAR) or web archive (WAR). Doing so will automatically place the file on the classpath for use by the application. In addition, there are several alternate methods that can be used. To demonstrate some of these methods, a sample application has been created that illustrates how they can be accomplished within an application deployed on JBoss EAP 6.1. It is hosted on GitHub at the following location:

https://github.com/sabre1041/jboss-properties

The project consists of a simple servlet which loads a specific properties file from the classpath, iterates through each of the keys, and prints out the value. In addition to using the value found from the properties file, the application will search for the existence of a system property from within the JVM and print out its corresponding value. The project can be built using Maven by navigating to the location where it is stored on your machine and executing the following from the command line:

mvn clean install

This will produce a web archive named jboss-properties.war inside the target folder which can be deployed to the JBoss server. For more information on how to deploy applications to JBoss, please refer to this link. After deployment, and assuming the JBoss server is running on your local machine, the application can be accessed at http://localhost:8080/jboss-properties with the following presented:

JBoss Properties
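
As one example of the JVM system property lookup performed by the servlet, a property can be supplied either when the server is started or through the JBoss CLI on a running server; the property name used below is purely illustrative and assumes a standalone server.

# Supply the property when starting the server
./standalone.sh -Dexample.property=HelloWorld

# Or define it on a running server through the JBoss CLI
/system-property=example.property:add(value=HelloWorld)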


Setting the Stage for a SwitchYard Implementation

Posted: October 6th, 2013 | Filed under: Technology | 1 Comment »

Creating and deploying software in a service oriented architecture (SOA) environment can be a daunting task. Some of the challenges one can face are the complexities of communicating with external systems, transforming data between multiple formats, and managing and orchestrating multiple services. There are a number of well-established frameworks available to choose from, including IBM WebSphere ESB, ActiveMatrix from TIBCO, and Microsoft’s BizTalk server. One of the more recent implementations of a services framework is SwitchYard. While similar to a traditional Enterprise Service Bus (ESB), SwitchYard aims to provide a “lightweight service delivery framework providing full lifecycle support for developing, deploying, and managing service-oriented applications”. So what does that really mean to both developers and stakeholders? For developers, SwitchYard provides integration with multiple well-established frameworks including Apache Camel, Drools, and jBPM for managing long running business processes. In addition, rich testing support allows for the validation of components at each level of the development process. For stakeholders, SwitchYard provides multiple levels of governance support by interacting with the S-RAMP artifact repository for design-time governance and JBoss Overlord for runtime governance. SwitchYard has gained the support and confidence of Red Hat to drive the next version of their Service Oriented Architecture Platform (SOA-P), replacing the existing Rosetta-based ESB offering. Over the course of the following several posts, we will discuss how SwitchYard can be deployed within an organization using a real-life business use case. In the process, we will discuss several key components within the SwitchYard toolkit. Finally, we will explore how to deploy a SwitchYard project to both a local JBoss installation and an installation deployed on Red Hat’s OpenShift Platform as a Service (PaaS) offering.

To establish a business use case for SwitchYard and to illustrate practical implementations of various SwitchYard components, we will utilize SwitchBrick, a fictitious company in the electronics recycling market that is looking to improve a key aspect of its business. SwitchBrick’s business model is similar to other competitors in its marketplace: consumers trade in their used electronics in exchange for cash. What differentiates SwitchBrick from its competitors is that instead of requiring consumers to ship their product to the company to receive compensation, consumers can visit an automated kiosk to complete the exchange process. SwitchBrick sales have been consistently improving, largely due to the customer feedback the company has received. Seeing how this feedback mechanism has been so critical to the success of the business, they are looking at ways to improve their business process and technology stack. At the present moment, consumers can submit feedback to SwitchBrick in one of two ways:

  • At the conclusion of a transaction at an automated kiosk
  • A form on the SwitchBrick website

While the two systems are not currently integrated, they each perform the same overall function. A customer enters their email address and any comments they may have. In addition, the customer is given the opportunity to subscribe to the SwitchBrick newsletter, which entitles them to exclusive offers such as bonuses on future exchanges. Once records are submitted and hit the backend systems, they are validated, and if the consumer chose to subscribe to the newsletter, a notification is sent to the email address provided confirming their subscription. The following diagram provides a visualization of the feedback process.

SwitchYard Overview

Now that we have a basic understanding of the feedback process, let’s discuss some of the specifics of the implementation as we continue to set the stage for integrating SwitchYard components. Feedback submitted at a physical kiosk and feedback submitted from the website are not treated the same, primarily due to the way the records are received. Records submitted on the SwitchBrick website are received in real time via JMS, while each kiosk batches up an entire day’s worth of records at the end of the day as a flat file in CSV format. SwitchBrick already has a canonical model for its feedback system, which the website adheres to. The key is to transform the CSV records received from kiosk locations into this format before continuing on with the process. Once the records have been transformed into the canonical model format, validation and business rules can then be applied. Finally, if a subscription is requested, a notification is sent to the email address submitted as part of the original request. For brevity, no persistence to a datastore will be attempted in this example application, but one can assume this step exists in a real-life implementation. With an end-to-end understanding of the feedback process, here’s how it can be visualized:

SwitchBrick Detailed Implementation

Those who have previous experience working in a message-oriented middleware (MOM) environment may recognize the icons representing the transform and subscription steps of the application in the diagram above. Those icons signify the Transformation and Content-Based Router Enterprise Integration Patterns (EIP). An EIP describes an accepted solution for a given context when designing complex message-based systems. Gregor Hohpe and Bobby Woolf’s Enterprise Integration Patterns book, which is approaching its 10 year anniversary, is still considered the go-to reference when designing message-based architectures such as SwitchYard. The Apache Camel project has also assembled a quick reference guide for the majority of commonly used patterns on their website. Having a basic understanding of Enterprise Integration Patterns is a key skill to attain before working with SwitchYard.
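
To make the content-based router pattern concrete, here is a minimal sketch using the Java DSL of Apache Camel, one of the frameworks SwitchYard integrates with. The endpoint URIs and header name are illustrative only and are not part of the SwitchBrick implementation.

import org.apache.camel.builder.RouteBuilder;

public class FeedbackRouter extends RouteBuilder {

    public void configure() throws Exception {
        from("direct:feedback")
            .choice()
                // Content-based router: inspect the message and choose a destination
                .when(header("newsletterSubscription").isEqualTo(Boolean.TRUE))
                    .to("direct:sendConfirmation")
                .otherwise()
                    .to("direct:storeFeedback");
    }
}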

A key feature of SwitchYard and the development toolkit it provides is the support for visualizing components of a SwitchYard application graphically. This visualization allows for a live representation of the state of the application instead of having to leverage third party tools such as Microsoft Visio or open source applications such as OpenOffice/LibreOffice Draw or Dia to depict the application after the fact. Since we have all the steps in place to start developing an application, here’s how it would be modeled within SwitchYard (yes, like a good TV chef, the final product has already been completed):

SwitchYard Overview

In the next post, we will get down to business, demonstrating in detail the steps required to create a SwitchYard application using the SwitchBrick use case as our guide. Along the way, we will introduce commonly used SwitchYard components, illustrate how they can be used, and review how to test and validate the execution of an application. In the meantime, be sure to check out the SwitchYard product page, which includes references for getting your hands on SwitchYard, product documentation, and sample projects.


Configuring a Connection Timeout Value for the JBoss CLI

Posted: July 31st, 2013 | Filed under: Technology | 2 Comments »
Yes Chris, even you can set a timeout value

There are multiple methods for configuring the JBoss Enterprise Application Platform 6 server. These options include using the Management web interface, the Command Line Interface (CLI) tool, or manually editing the XML configuration file (not recommended, but still available as an option). One of the benefits of using the JBoss CLI tool is that it not only allows for the remote management of almost every option available in the EAP server (the Management console only exposes a subset), but it also allows tasks to be scripted and run in a repetitive manner. I have used the CLI tool for everything from provisioning the initial configuration of server instances, to monitoring platform metrics, to performing nightly deployments as part of a continuous integration environment. One of the issues that I have frequently faced is that the CLI tool would time out while connecting to the JBoss instance. This situation tends to occur more frequently on remote instances or on instances that have their management interface secured with SSL, as it takes longer for the handshake exchange to complete.

The default timeout value for the CLI to connect to a remote JBoss instance is 5000ms (5 seconds). Users can provide their own timeout value along with a number of other configuration options in a file called jboss-cli.xml. The CLI tool will search for this file in three (3) locations:

  1. The current directory where the CLI tool is invoked from
  2. The location specified by a Java property called jboss.cli.config
  3. The bin folder of the JBoss server defined by the JBOSS_HOME environment variable

The JBoss server itself provides a jboss-cli.xml file in the <JBOSS_HOME>/bin folder which can be used as a starting point for any custom modifications. This file is used when invoking either the jboss-cli.bat or jboss-cli.sh script, depending on the operating system platform, to connect to a local or remote instance. As previously mentioned, I utilize the CLI tool to handle remote deployments of artifacts from a centralized server where JBoss is not installed locally. For this purpose, the JBoss platform provides a client jar, jboss-cli-client.jar, within the <JBOSS_HOME>/bin/client folder to provide access to the CLI. To specify the location of the XML file as a Java property while invoking the client jar, use the following syntax:

java -Djboss.cli.config=<location_of_xml_file> -jar jboss-cli-client.jar [OPTIONS]

Specifying a timeout value within the jboss-cli.xml file was introduced in JBoss AS 7.2/EAP 6.1 by means of a connection-timeout option. If a timeout value of 10 seconds is desired, the jboss-cli.xml file can be configured with the following option:

<connection-timeout>10000</connection-timeout>
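
For context, connection-timeout is a top-level element of jboss-cli.xml, alongside settings such as the default controller. The trimmed sketch below is an assumption of the overall layout; the exact element names and ordering should be verified against the schema referenced in the next paragraph.

<jboss-cli xmlns="urn:jboss:cli:1.2">

    <default-controller>
        <host>localhost</host>
        <port>9999</port>
    </default-controller>

    <!-- Wait up to 10 seconds when establishing the connection -->
    <connection-timeout>10000</connection-timeout>

</jboss-cli>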

The full range of configuration options for the jboss-cli.xml file is available within the XML schema located at <JBOSS_HOME>/docs/schema/jboss-as-cli_1.2.xsd. The <JBOSS_HOME>/docs/schema directory contains a collection of schemas available for use within the JBoss server.

With the value set in the jboss-cli.xml file, any timeout-related issues experienced while connecting to remote JBoss servers using the Command Line Interface tool should be resolved.


Integrated Windows Authentication (SPNEGO) for Web Applications on JBoss EAP 6.1

Posted: July 7th, 2013 | Filed under: Technology | 16 Comments »

One of the benefits of web applications living in a Windows domain is their ability to utilize single sign-on and integrated authentication as an authentication mechanism. This functionality is natively supported on Microsoft’s own Internet Information Services (IIS) web server, but it can also be employed on non-Windows-based web servers and applications. In both cases, credentials are negotiated between the web browser and a server backed by an implementation of the Kerberos protocol (Active Directory in a Windows environment) using SPNEGO (Simple and Protected GSS-API Negotiation Mechanism). The SPNEGO protocol is used when a client wishes to authenticate to a server, but each is unaware of the security mechanisms the other supports. The client proposes both a preferred authentication mechanism and a set of supported mechanisms. The server will either accept the proposed mechanism, choose one of the supported mechanisms, or reject the authentication request altogether. If both the client (web browser) and the server support the Kerberos authentication mechanism, it will be chosen. Otherwise, NTLM can be used as a fallback, though it is less secure and often not a supported mechanism. SPNEGO is supported on a range of Java frameworks and containers such as Spring, Tomcat and JBoss. Its functionality is enabled in JBoss through a component of the PicketLink framework called JBoss Negotiation. Initially a standalone product, the JBoss Negotiation toolkit provides a Tomcat authenticator and a JAAS login module which enable applications to leverage single sign-on. While EJBs can participate in single sign-on, web applications account for the majority of its use cases. For a web application to leverage SPNEGO on JBoss, configurations must be made within the web application, the JBoss server, and Active Directory. Sections of these steps are documented in various locations, including the JBoss Negotiation User Guide (written for EAP 5) and the JBoss EAP 6.1 Security Guide to name a few; however, a complete end-to-end reference is hard to find. This article will discuss the steps involved in configuring a web application to utilize Integrated Windows Authentication (SPNEGO) on JBoss EAP 6.1.

Before diving into the specific configurations, let’s discuss how a web application is able to obtain the user name of the currently logged-in user through integrated authentication. First, a user utilizes a browser to navigate to the web application. The server responds stating that it must negotiate authentication. If both the browser and server support SPNEGO, it will be used. This is determined by verifying the existence of a trust between the browser and server. Since the browser will be seamlessly passing authentication information about the user to the server, a trust must be configured within the web browser; otherwise SPNEGO authentication will not be attempted. If this trust exists, the browser then contacts Active Directory, which acts as a Kerberos Ticket Granting Service (TGS), and retrieves a Service Ticket (ST) which is used to prove the identity of the caller. This ticket is then sent to the web server. The web server, which is configured within Active Directory as a Service Principal (SP), uses a keytab file containing its credentials to decode the ticket and determine the desired user name. A diagram of this process is shown below.

spnego

Don’t worry if some of these concepts seem unclear; they will be explained over the course of the discussion. At a high level, the following are the steps required to configure a web application with integrated authentication on EAP 6.1 (a condensed configuration sketch follows the list):

  1. Generate a keytab file which will store the Service Principal credentials used by the JBoss server to authenticate against Active Directory
  2. Configure the JBoss server
  3. Configure the web application
  4. Configure the client web browser
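
As a condensed preview of steps 1 and 2, the keytab is typically generated with the ktpass utility on a domain controller and then referenced from a pair of security domains in the security-domains section of the JBoss server configuration. Everything below is a sketch: the hostnames, realm, service account and file paths are placeholders, while the Kerberos and SPNEGO login module aliases are those provided by PicketBox and JBoss Negotiation on EAP 6.

ktpass -princ HTTP/jboss.example.com@EXAMPLE.COM -mapuser jbosssvc@EXAMPLE.COM -pass <password> -out jboss.keytab -ptype KRB5_NT_PRINCIPAL

<security-domain name="host" cache-type="default">
    <authentication>
        <login-module code="Kerberos" flag="required">
            <module-option name="storeKey" value="true"/>
            <module-option name="useKeyTab" value="true"/>
            <module-option name="keyTab" value="/etc/jboss.keytab"/>
            <module-option name="principal" value="HTTP/jboss.example.com@EXAMPLE.COM"/>
            <module-option name="doNotPrompt" value="true"/>
        </login-module>
    </authentication>
</security-domain>

<security-domain name="SPNEGO" cache-type="default">
    <authentication>
        <login-module code="SPNEGO" flag="requisite">
            <module-option name="password-stacking" value="useFirstPass"/>
            <module-option name="serverSecurityDomain" value="host"/>
        </login-module>
    </authentication>
</security-domain>

The web application would then typically declare SPNEGO as its auth-method in web.xml and reference the SPNEGO security domain in jboss-web.xml (step 3), and the browser would be configured to trust the site (step 4).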

Exploring the Errai Framework

Posted: April 11th, 2013 | Filed under: Technology | No Comments »

One of the prominent features of Java EE 6 was the introduction of Contexts and Dependency Injection (JSR 299). Built on top of Dependency Injection for Java (JSR 330), CDI allowed Dependency Injection (DI) and Aspect Oriented Programming (AOP) to become standardized, giving developers the flexibility to use a native solution rather than having to utilize a third-party library such as Spring. CDI makes it easy to inject resources such as managed beans and services. It has done wonders for server-side components, but if there is one thing Web 2.0 has taught us, it is that more and more functionality is being leveraged on the client side. Java-based web frameworks have moved in this direction as well, with implementations such as Google Web Toolkit (GWT) compiling Java code into highly optimized JavaScript. Until now, CDI was restricted to being a server-side technology, but with the help of the Errai Framework, CDI can now be leveraged in the browser. Errai is more than a dependency injection toolkit; it is a set of technologies built on top of the ErraiBus messaging framework for developing rich web applications using GWT. In this post, we will demonstrate several components of the Errai framework and how they can be used to build powerful web applications.

To help illustrate the various components of the Errai framework, we will walk through a sample application. This application allows users to perform basic mathematical operations such as addition, subtraction, multiplication and division, among others. After a successful submission, the result of the operation is displayed below, along with a log of all of the operations the user has previously performed. Also included next to each submission is an icon indicating the user’s operating system and browser.

Errai Math

This application was designed specifically for showcasing various Errai components. The source code for this application is found at https://github.com/sabre1041/errai-math, which will be useful when we begin discussing the implementation. The application can also be accessed directly, as it is deployed on Red Hat’s OpenShift PaaS, by navigating to http://erraimath.andyserver.com. The application can be deployed using GWT development mode, JBoss EAP 6 or OpenShift. It utilizes the following Errai components:

  • Errai JAX-RS
  • Errai CDI
  • Errai UI
  • Errai Data Binding

We will discuss each component in detail moving forward.

Before we dive into the details of the application, let’s briefly discuss how Errai applications fit into the conventions of a GWT application. A GWT application is organized into modules, which are defined in XML files placed at the top of the project hierarchy. In our application, the module file is named ErraiMath.gwt.xml. Standalone GWT applications also need to define an EntryPoint class in their module file, which denotes the starting point for the application. Errai applications can forgo this requirement by instead annotating a class with the @EntryPoint annotation, as sketched below. Our entry point class is called ErraiMath and resides in the com.redhat.errai.math.client.local package. All client-side files will be placed in this package.
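
To make the convention concrete, a bare-bones Errai entry point might look like the following. This is only a sketch: the annotation packages reflect standard Errai IOC and javax.annotation usage, and the init() method body is a placeholder rather than the actual ErraiMath implementation.

package com.redhat.errai.math.client.local;

import javax.annotation.PostConstruct;

import org.jboss.errai.ioc.client.api.EntryPoint;

@EntryPoint
public class ErraiMath {

    @PostConstruct
    public void init() {
        // Build the UI and register event handlers once the bean has been constructed
    }
}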


Business Process Management using jBPM – Part II

Posted: April 7th, 2013 | Filed under: Technology | No Comments »

In our introductory post on jBPM, we demonstrated how jBPM3 can be utilized as a Business Process Management solution. While we did not explore the full potential jBPM3 has to offer, we were able to introduce and examine key features that are present in most BPM solutions. Since its introduction back in 2007, jBPM3 has been a rock star, especially in the SOA Platform. However, many have felt that it was too complex and required a deep understanding of the data model and the API, and a number of features would need to be improved in upcoming releases. At the same time, a separate project on the JBoss community side of the house called Drools Flow attempted to leverage BPM on the Drools platform. It embraced the industry-standard Business Process Modeling Notation (BPMN) for modeling processes and Human Tasks based on the WS-HumanTask specification for executing tasks. jBPM4 improved many of the shortcomings of its predecessor by reworking the process execution engine, but the project was ultimately abandoned after numerous issues arose. After a complete rework, jBPM5 was born as a merge of jBPM and Drools Flow. We will forego the discussion of Drools Flow and get right to jBPM5. During the course of our discussion, we will introduce key concepts and walk through a sample application in a similar fashion to the first post. Both applications are identical in functionality, which will allow for comparisons between jBPM3 and jBPM5.

Installation and configuration

BRMS 5.3.1 was released in mid-December 2012 and is available both as a standalone application and as a deployable package for JBoss Enterprise Application Platform 5 and 6. Before we can walk through the example application, we need to make sure our local environment is configured properly with the following tools:

  • Apache Maven
  • JBoss Developer Studio 5 (Eclipse with the BPMN plugin can also be used but this post will focus on a JBoss Developer Studio installation)
  • JBoss SOA-P

Seeing as our previous discussion on jBPM3 targeted JBoss Service Oriented Architecture Platform 5.3.1, we will install the BRMS deployable package for EAP 5. Obtain the BRMS 5.3.1 deployable package for EAP 5 (not EAP 6) and have the package available on your local machine. You may be able to configure BRMS on EAP 6 and utilize the sample project, but this was not tested.

The BRMS 5.3.1 deployable package contains the following:

  • Guvnor Repository Manager
  • jBPM5 Console
  • Modeshape data store
  • jBPM5 library
  • BRMS library

Since the necessary artifacts for building BRMS projects are not found in the JBoss public Maven repository, we will use the artifacts found in the jBPM5 library package to populate our local Maven repository. Extract the contents of jboss-jbpm-engine.zip to the directory of your choosing. This package contains the necessary jBPM5 artifacts along with other BRMS artifacts that may be required by the example project. Navigate to the directory where you extracted the archive and execute the following commands from the terminal to install the necessary artifacts into your local Maven repository.

mvn install:install-file -Dfile=lib/drools-core-5.3.1.BRMS.jar -DgroupId=org.drools -DartifactId=drools-core -Dversion=5.3.1.BRMS -Dpackaging=jar
mvn install:install-file -Dfile=lib/knowledge-api-5.3.1.BRMS.jar -DgroupId=org.drools -DartifactId=knowledge-api -Dversion=5.3.1.BRMS -Dpackaging=jar
mvn install:install-file -Dfile=lib/drools-compiler-5.3.1.BRMS.jar -DgroupId=org.drools -DartifactId=drools-compiler -Dversion=5.3.1.BRMS -Dpackaging=jar
mvn install:install-file -Dfile=jbpm-workitems-5.3.1.BRMS.jar -DgroupId=org.jbpm -DartifactId=jbpm-workitems -Dversion=5.3.1.BRMS -Dpackaging=jar
mvn install:install-file -Dfile=jbpm-bpmn2-5.3.1.BRMS.jar -DgroupId=org.jbpm -DartifactId=jbpm-bpmn2 -Dversion=5.3.1.BRMS -Dpackaging=jar
mvn install:install-file -Dfile=jbpm-flow-5.3.1.BRMS.jar -DgroupId=org.jbpm -DartifactId=jbpm-flow -Dversion=5.3.1.BRMS -Dpackaging=jar
mvn install:install-file -Dfile=jbpm-flow-builder-5.3.1.BRMS.jar -DgroupId=org.jbpm -DartifactId=jbpm-flow-builder -Dversion=5.3.1.BRMS -Dpackaging=jar
mvn install:install-file -Dfile=jbpm-human-task-5.3.1.BRMS.jar -DgroupId=org.jbpm -DartifactId=jbpm-human-task -Dversion=5.3.1.BRMS -Dpackaging=jar
mvn install:install-file -Dfile=jbpm-persistence-jpa-5.3.1.BRMS.jar -DgroupId=org.jbpm -DartifactId=jbpm-persistence-jpa -Dversion=5.3.1.BRMS -Dpackaging=jar
mvn install:install-file -Dfile=jbpm-bam-5.3.1.BRMS.jar -DgroupId=org.jbpm -DartifactId=jbpm-bam -Dversion=5.3.1.BRMS -Dpackaging=jar
mvn install:install-file -Dfile=jbpm-test-5.3.1.BRMS.jar -DgroupId=org.jbpm -DartifactId=jbpm-test -Dversion=5.3.1.BRMS -Dpackaging=jar

To help illustrate the components of jBPM5, we will utilize the BPMN designer within JBoss Developer Studio 5. This designer is not installed by default in fresh installations and must be installed separately through the JBoss Central console. Navigate to JBoss Central by selecting Help -> JBoss Central. At the bottom of the panel, select the Software/Update tab. Locate the Business Rules Tooling component which contains tools for jBPM, Drools and Guvnor interaction. Check the box next to the component and click Install. Accept any End User Licensing Agreements which might appear, complete the installation and restart JBoss Developer Studio when prompted.

With Maven and JBoss Developer Studio configured, the sample project can be obtained from https://github.com/sabre1041/jbpm3-jbpm5. Once the code has been downloaded to your local machine, import the project as an existing Maven project in JBoss Developer Studio.

With the project imported, let’s open up the BPMN 2.0-compliant SampleWorkflow.bpmn file located in the src/main/resources folder. This file is the business process itself. JBoss Developer Studio provides a graphical editor for ease of process creation and manipulation. The raw XML can also be edited by right-clicking the file -> Open With -> Text Editor. By opening the file with the BPMN2 process editor, you are presented with the following workflow:

jBPM5 Workflow
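
For completeness, here is how a jBPM5 process such as this one is typically loaded and started programmatically through the jBPM5/Drools knowledge API. This is only a sketch; in particular, the process id passed to startProcess() is a placeholder for the id attribute declared in SampleWorkflow.bpmn.

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

public class SampleWorkflowRunner {

    public static void main(String[] args) {
        // Compile the BPMN2 process definition from the classpath
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newClassPathResource("SampleWorkflow.bpmn"), ResourceType.BPMN2);

        if (kbuilder.hasErrors()) {
            throw new IllegalStateException(kbuilder.getErrors().toString());
        }

        // Assemble a knowledge base and create a session to execute the process
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
        StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();

        // Start the process by the id declared in the BPMN2 definition (placeholder value)
        ksession.startProcess("com.sample.workflow");

        ksession.dispose();
    }
}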


Business Process Management using jBPM – Part I

Posted: March 31st, 2013 | Filed under: Technology | No Comments »

Companies are constantly searching for ways to innovate and improve efficiency. One such method is to incorporate a Business Process Management (BPM) solution into their organization. BPM involves modeling a business goal as distinct steps and expressing these steps visually as a workflow. There are a number of BPM solutions available on the market today, with one such solution being jBPM. jBPM is a suite of tools which aims to bridge the gap between business analysts and technical developers. There are two versions currently available, jBPM3 and jBPM5. While they each have similar goals, they are built on very different technologies. In this post, I will introduce the core technology behind jBPM3 and illustrate the underlying technology by building a simple application and deploying it on the JBoss Service Oriented Architecture Platform (SOA-P).

The key behind any BPM solution is the workflow defining the business process itself. A jBPM3 workflow is created as a process definition written in the jBPM Process Definition Language (jPDL), an XML-based language used to describe a business process graphically. A process definition contains the logic for the workflow and is expressed using nodes, transitions and tasks. The process definition, supporting classes and libraries are packaged into a process archive (.par) file and deployed to the JBoss server. Process archives are deployed in one of three ways: programmatically, using an Ant task, or through the jBPM console. Seeing as jBPM3 has been available for a few years, there is a wealth of documentation compiled by others in the open source community. However, the best place to start is the jBPM Reference Guide (https://access.redhat.com/knowledge/docs/en-US/JBoss_Enterprise_SOA_Platform/5/html-single/JBPM_Reference_Guide/index.html) as it covers the entire stack.

To help illustrate various concepts, we will build a sample application for an order intake system. At a high level, this is a very simple application: the contents of an order are received and, based on the details of the order, the order is either approved or denied. A notification of the result is then sent via email to a single recipient’s inbox.

Before we get to the project itself, we need to make sure our environment is set up with the following prerequisites:

  • Apache Maven
  • JBoss Developer Studio with SOA Tooling (Eclipse with JBoss Tools can also be used but this post will focus on a JBoss Developer Studio installation)
  • JBoss SOA-P 5.0+

jBPM3 is included in the default, production and all profiles of JBoss SOA-P, so one of these profiles must be used for running the application. jBPM is functional out of the box; however, further configuration is required to provide additional features, as you will see shortly. Process definitions and state are persisted to a database, which is configured in the jBPM datasource file located at <JBoss_Root>/server/<JBoss_Server_Profile>/deploy/jbpm.esb/jbpm-ds.xml.

The latest jbpm-jpdl dependency is not available in the public JBoss Maven repository and must be installed into the local Maven repository. Navigate to the following path:

cd <JBOSS_ROOT>/server/<Server_Profile>/deploy/jbpm.esb/

Run the following command to install the required dependencies to your local Maven repository:

mvn install:install-file -Dfile=jbpm-jpdl.jar -DgroupId=org.jbpm -DartifactId=jbpm-jpdl -Dversion=3.2.13 -Dpackaging=jar

With the dependencies in place, the sample project can be downloaded from its Git repository on GitHub.

Import the project into JBoss Developer Studio as an existing Maven project. If JBoss Developer Studio 5 is used, the SOA Tooling add-on must also be installed. It can be installed from the Software/Update page within JBoss Central.

The process graph for the sample application is called processdefinition.xml and located within the workflow-jbpm3 folder in the jbpm-jbpm3 project. It should look like the following:

jBPM3 Workflow

You can view process definition and node properties through the Properties view. If this view is not currently visible, it can be opened by navigating to Window -> Show View -> Properties. If Properties is not listed, click Other…; within the Show View window, expand General and select Properties.

Process Variables and Nodes

Process variables are key/value pairs which can be used to maintain process instance information throughout the execution of a workflow. In the sample application, we use process variables to store the id of the user who submitted the order and the total amount of the order. In a real-world application, these values would be passed in at the beginning of the workflow. However, if we want to test the application from the jBPM console, there is no way to supply these variables. To work around this limitation, we create test variables at the beginning of the workflow. We accomplish this by placing the logic in a node. There are several types of nodes (task, decision, state, fork, join) whose responsibility is to complete the function of the node and pass process execution on to the next phase of the workflow. For applying the test values, we will use a plain node and attach a Java handler class to it. The handler class must implement the ActionHandler interface and its execute method. The sample application defines the class TestDataActionHandler as follows:

import java.util.Random;

import org.jbpm.graph.def.ActionHandler;
import org.jbpm.graph.exe.ExecutionContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TestDataActionHandler implements ActionHandler {

	static final long serialVersionUID = 1L;
	private static final Logger LOGGER = LoggerFactory.getLogger(TestDataActionHandler.class);

	public void execute(ExecutionContext executionContext) throws Exception {

		try
		{
			if(executionContext.getVariable("user") == null)
			{
				LOGGER.info("Values Not Found. Setting Random Values");

				int high = 150;
				int low = 50;

				Random rand = new Random();

				String user = "User"+rand.nextInt(10);
				String amount = String.valueOf(rand.nextInt(high-low)+low);

				executionContext.setVariable("user", user);
				executionContext.setVariable("amount", amount);
			}
		}
		catch(Exception e)
		{
			LOGGER.error("An Exception Occurred", e);
		}
		finally
		{
			executionContext.leaveNode();
		}
	}
}

Within the execute method, we create a random user id and random order amount and set two process variables within the ExecutionContext. Finally, we tell the ExecutionContext to leave the node and proceed to the next step in the workflow.

Decisions

We want to have our workflow take different actions depending on whether the price of the order is above or below a certain threshold. This is accomplished by adding a decision node to the workflow. Decisions can be modeled directly in the workflow using Expression Language, BeanShell scripts, or an external entity. Personally, I prefer the external entity route of providing a Java class, as it gives you finer-grained control over the decision-making process. A Java handler class which implements the DecisionHandler interface is used. The interface specifies the decide method, which must return a string value naming the outgoing transition the workflow should take. In our application, we have created the AmountDecisionHandler class for this purpose and created two outgoing transitions from the decision node to fulfill the handler requirements. In our handler class, we check whether the order amount is greater than or equal to $100. If the condition is met, the approval transition is taken and the result is marked “Approved”; otherwise, the denial transition is taken and the result is marked “Denied”.

import java.math.BigDecimal;

import org.jbpm.graph.exe.ExecutionContext;
import org.jbpm.graph.node.DecisionHandler;

public class AmountDecisionHandler implements DecisionHandler {

	private static final long serialVersionUID = 4843616362227490130L;

	private static final String AMOUNT_PROP = "amount";
	private static final String RESULT_PROP = "result";

	private static final String APPROVAL_TRANSITION = "to-approval";
	private static final String DENIAL_TRANSITION = "to-denial";
	private static final BigDecimal AMOUNT_THRESHOLD = new BigDecimal(100.00);

	public String decide(ExecutionContext executionContext) throws Exception {

		String amountString = (String) executionContext.getVariable(AMOUNT_PROP);

		if(amountString == null)
		{	
			executionContext.setVariable(RESULT_PROP, "Denied");
			return DENIAL_TRANSITION;
		}

		BigDecimal amount = new BigDecimal(amountString);

		if(amount.compareTo(AMOUNT_THRESHOLD) >= 0)
		{
			executionContext.setVariable(RESULT_PROP, "Approved");
			return APPROVAL_TRANSITION;
		}
		else
		{
			executionContext.setVariable(RESULT_PROP, "Denied");
			return DENIAL_TRANSITION;
		}
	}
}

Sending Emails

Emails are sent in jBPM3 using an Email node. In our application, we want to notify a recipient with the details of the order process. This will include the user id, order amount and result of the order approval process which was implemented previously. There are two steps required to send email via jBPM:

  1. Configure the email server (jBPM system configuration)
  2. Configure the email node for each individual email being sent

Configuration of email server properties is made in the jBPM configuration file (jbpm.cfg.xml). Alternatively, these settings can be placed in a properties file which is configured by setting the resource.settings.properties property to the name of the properties file. Both files would be located at the root of the classpath. The following table describes the settings required for email configuration.

jBPM Configuration File   | Resource Properties File | Description
jbpm.mail.smtp.host       | mail.smtp.host           | SMTP email server host
jbpm.mail.smtp.port       | mail.smtp.port           | SMTP email server port
jbpm.mail.user            | jbpm.mail.user           | SMTP email username
jbpm.mail.password        | jbpm.mail.password       | SMTP email password
jbpm.mail.smtp.starttls   | N/A                      | Optional boolean value required when communicating with the email server over TLS
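
For reference, an external resource properties file using the names from the middle column might contain entries like the following; the host, port and credentials shown are placeholders.

mail.smtp.host=smtp.example.com
mail.smtp.port=25
jbpm.mail.user=jbpm@example.com
jbpm.mail.password=changeit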

 
You will find the jbpm.cfg.xml and jbpm.mail.properties files in the src/main/resources folder of the sample application. Fill out these values to match your email server configuration. If you plan to deploy the sample application to your JBoss server, you must also modify the jbpm.cfg.xml file in the <JBoss_Root>/server/<JBoss_Server_Profile>/deploy/jbpm.esb/ folder. Update the jbpm.mail.smtp.host property, which by default points at a local mail server, with the appropriate value. Be sure to add the other configuration properties as well or, if using an external properties file, remove the jbpm.mail.smtp.host property and add a resource.settings.properties configuration instead.

The second component of sending emails within jBPM is configuring the email nodes within the process definition. In the sample application, open the process definition and you will notice two email nodes are defined. Click on one of the email nodes and inspect the properties. You will see a series of tabs on the left side of the pane. Click the Mail Info tab to view the details of the outgoing email. The only value requiring modification is the “To” field. Enter a destination email address you would like the sample application to send emails to. The subject and body of the message have already been preconfigured.

Mail Configuration

Running the Sample Application

In the previous sections, we described how to modify the sample application in order to run it as a JUnit test or deploy it on a JBoss server. I’ll admit a lot of content was covered, so I’ll provide the CliffsNotes version just in case you skipped the sections above. Feel free to skip this section if you have already made the necessary modifications; otherwise, refer back to the previous sections for any material which may appear unclear.

  1. Configure the email server in the jbpm.cfg.xml file
    • Host, Port, Username, Password
    • Configuration is made either within this file or within an external properties file specified by the resource.settings.properties property
  2. Configure each email node in the process definition
    • Set the “To” address to the recipient email address

Running the JUnit Test

The provided JUnit test class Jbpm3WorkflowTestIT emulates the execution of jBPM processes. The provided test exercises both workflow decision outcomes: an approved order and a denied order.

Deploying the sample application

As mentioned previously, process definitions can be deployed multiple ways. Personally, I prefer to build the project using Maven and deploy the resulting process archive through the jBPM console, as it allows me to view the contents of the process archive prior to deployment.

Build the project by navigating to the project directory and packaging it with the following Maven command, which will produce a process archive (.par) in the target folder:

mvn clean package

Start up the JBoss server and navigate to the jBPM console at http://localhost:8080/jbpm-console. Log in using the default credentials of admin:admin.

On the left side of the page, click “Deploy a new process” to deploy a new process definition

Click Browse, navigate to the process archive located in the target folder of the project directory, and click Deploy to deploy the process.

Click “Start a new instance of this process”

Tokens represent the current position within a workflow. By default, once a process is initiated, the token is positioned at the start node. Click Tokens under the Views panel on the left side of the page. You should see a token listed at the start-state node with a status of Running. Under the Actions column, click Examine.

Listing Process Instance Tokens

Click Signal to move the token out of the start-state node

Displaying Node Transitions

If you view the server console, you will notice that the workflow has taken the appropriate decision based on the random sample values, that an email notification has been sent to the configured recipient, and that a message with the result of the decision has been printed to the console.

Result of Process Execution

That’s it! We have successfully run our sample jBPM3 application both as a JUnit test and deployed on JBoss SOA-P. In an upcoming post, we will cover the latest release, jBPM5, how the jBPM platform transitioned into the JBoss Business Rules Management System (BRMS) world, and how jBPM5 compares to jBPM3.