Integrating OpenShift Authentication into Jenkins

Posted: October 19th, 2016 | Filed under: Technology

Note: This post contains details about components which may be in active development. Please refer to any release notes or documentation for the current status and supportability.

Jenkins, the open source continuous integration server, has been included as a supported offering on OpenShift since the release of OpenShift version 3.1. With only a few simple clicks, an entire continuous integration and delivery pipeline for automating and simplifying the application build and deployment process can be created. All a user needs to do is use the OpenShift web console or command line tool, select the Jenkins template, and log in to the deployed Jenkins environment using the password entered or automatically generated. The Jenkins image for OpenShift provides a fully containerized solution supported by plugins specifically designed for integration with OpenShift. One feature that had previously been missing was the ability to leverage OpenShift as a single sign-on (SSO) provider and apply the same authentication mechanisms to access Jenkins. As a user attempts to access a protected resource, they are redirected to authenticate with OpenShift. After authenticating successfully, they are redirected back to the original application with an OAuth token that can be used by the application to make requests on behalf of the user. A similar approach already exists for the Kibana integrated aggregated logging solution. A newly released Jenkins plugin, called the OpenShift Login Plugin, now provides the support to enable this functionality.

The first step towards implementing this solution is to add the OpenShift Login Plugin to Jenkins. Adding a plugin to Jenkins takes only a few steps. Log in to Jenkins and, on the left side of the page, select Manage Jenkins and then Manage Plugins. On the Available tab, enter “OpenShift Login” in the textbox to limit the available plugins presented.

Jenkins Login Plugin Installation

Select the “OpenShift Login Plugin” and click Download Now and Install After Restart. As it’s downloading, be sure to check the “Restart Jenkins when the installation is complete and no jobs are running” option so that Jenkins restarts once the plugin has been downloaded. After Jenkins restarts, log in once again.
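
As an alternative to the web UI, the plugin can also be installed from the Jenkins CLI. This is only a sketch: it assumes the plugin's short name is openshift-login and that the CLI jar is available and authorized against your Jenkins instance.

# Install the OpenShift Login plugin via the Jenkins CLI and restart once no jobs are running
# (plugin short name and Jenkins URL are assumptions)
java -jar jenkins-cli.jar -s https://<jenkins_host>/ install-plugin openshift-login -restart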

Since the OpenShift Login Plugin interacts with the OpenShift OAuth server to facilitate the single sign-on process, Jenkins must first be configured as an OAuth client within OpenShift. There are two ways this can be configured:

  • Create a new OAuth client
  • Use a service account to represent an OAuth client

The first method is to create a new standalone OAuth client. An OAuthClient is another API object type, similar to other API objects such as a Pod, Service and Route. However, since it is configured at the cluster level, the ability to create or modify one is restricted to elevated users, such as cluster administrators. If your user does not have this level of access, the service account method described in a subsequent section must be used.

An OAuthClient is represented by a structure similar to the following:

{
    "kind": "OAuthClient",
    "apiVersion": "v1",
    "metadata": {
        "name": "oauth-client-example"
    },
    "secret": "...",
    "redirectURIs": [
        "https://example.com"
    ]
}

When configuring the OAuth client within an application such as Jenkins, the name in the metadata section is analogous to a client_id. The secret is an access credential that is shared by both the authorization server (OpenShift) and the client (Jenkins) and is used to establish trust between them. Finally, the redirect URI field specifies the location to which the OAuth server will redirect the user once the authorization process completes (successfully or unsuccessfully).
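
If your user has the necessary cluster-level access and you simply want to create the object directly, a minimal example can be fed straight to oc create. The sketch below assumes the name jenkins-oauth and uses placeholders for the secret and the Jenkins hostname; the /securityRealm/finishLogin callback path is discussed in more detail below.

oc create -f - <<EOF
{
    "kind": "OAuthClient",
    "apiVersion": "v1",
    "metadata": {
        "name": "jenkins-oauth"
    },
    "secret": "<client_secret>",
    "redirectURIs": [
        "https://<jenkins_hostname>/securityRealm/finishLogin"
    ]
}
EOF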

To simplify the creation of an OAuth client, an OpenShift template has been created that streamlines the process and allows the user to provide values for the OAuthClient name, secret, and a single redirect URI. If a secret is not provided, one will be automatically generated.

Execute the following command to add the template to OpenShift.

oc create -f - <<EOF
{
    "kind": "Template",
    "apiVersion": "v1",
    "metadata": {
        "name": "jenkins-oauth-template"
    },
    "labels": {
        "template": "jenkins-oauth-template"
    },
    "parameters": [
        {
            "description": "The name for the oauth client.",
            "name": "OAUTH_CLIENT_NAME",
            "value": "jenkins-oauth",
            "required": true
        },
        {
            "description": "Oauth client secret",
            "name": "OAUTH_CLIENT_SECRET",
            "from": "user[a-zA-Z0-9]{64}",
            "generate": "expression"
        },
        {
            "description": "The name for the oauth client.",
            "name": "OAUTH_CLIENT_REDIRECT_URI",
            "required": true
        }
    ],
    "objects": [
        {
            "kind": "OAuthClient",
            "apiVersion": "v1",
            "metadata": {
                "name": "\${OAUTH_CLIENT_NAME}"
            },
            "secret": "\${OAUTH_CLIENT_SECRET}",
            "redirectURIs": [
                "\${OAUTH_CLIENT_REDIRECT_URI}"
            ]
        }
    ]
}
EOF

The most important parameter, and the only required parameter, in the template is the redirect URI. Jenkins exposes an endpoint at /securityRealm/finishLogin that processes the OAuth response and stores the OAuth token for subsequent use. The following command will create the OAuth client by first looking up the hostname of the exposed route and passing the returned value as an input parameter:

oc process -v=OAUTH_CLIENT_REDIRECT_URI=`oc get route jenkins --template='{{if .spec.tls }}https{{ else }}http{{ end }}://{{ .spec.host }}/securityRealm/finishLogin'` jenkins-oauth-template | oc create -f-

Confirm the new OAuthClient called jenkins-oauth has been created by running the following command:

oc get oauthclients

As mentioned previously, if the client secret is not provided during template instantiation, one will be randomly generated. This secret will need to be configured in Jenkins to establish the client/server trust. To locate the secret that was configured in the OAuthClient API object, execute the following command:

oc get oauthclients jenkins-oauth --template='{{ .secret }}'

With the OAuthClient in place, configure Jenkins to make use of the OpenShift Login plugin. Navigate to the Jenkins summary page and click Manage Jenkins and then Configure Global Security. Select the Login with OpenShift radio button to expose several textboxes for further configuration.

Jenkins Login Security Realm

The OpenShift Login plugin will attempt to make use of preconfigured variables based on the environment; however, several fields do require manual input. The following fields require modification:

  • OpenShift Server Prefix: Location of the internal OpenShift service (For externally hosted Jenkins servers, utilize the Master API address)
    • Enter https://openshift.default.svc
  • OpenShift Redirect URL: Location of the OpenShift Master API which will be used to redirect the user to authenticate with OpenShift
    • Dependent on the environment. An example could be https://master.example.com:8443
  • Client Id: Name of the OAuth Client
    • Enter the name of the OAuthClient created previously: jenkins-oauth
  • Client Secret: OAuth Client Secret
    • Enter the client secret value obtained from the OAuthClient previously

Click Save to apply the changes. Once the changes have been made, click the logout button on the top right corner of the page. This will trigger a new authentication process and redirect the user to the OpenShift web console to authenticate.

OpenShift Login Console

Enter a valid username and password and click Log In. Authentication is performed against the realm configured within OpenShift, and upon success, the browser redirects back to the Jenkins home page.


Visualizing the OpenShift API with Swagger – Revisited

Posted: September 8th, 2015 | Filed under: Technology

Several months ago, an article was published by OpenShift lead engineer Clayton Coleman on how the OpenShift API can be expressed using Swagger. For those who have never heard of Swagger, it is a framework for describing and producing a visualization of a RESTful API. The post gave users an excellent introduction to the resources exposed by OpenShift. Fast forward a few months, and now that OpenShift has been officially released, it is a fine time to revisit Clayton’s post, provide updates that allow users to interact with the GA version of the product, and cover any necessary configurations that need to be made beforehand.

First, let’s revisit the OpenShift architecture and how the Swagger framework is integrated into the product. Two RESTful endpoints are exposed by OpenShift: the Kubernetes API and the OpenShift API. Each provides decorated Swagger endpoints which can be accessed at the following locations:

Kubernetes API:

https://<openshift_master>:8443/swaggerapi/api/<api_version>/

OpenShift API:

https://<openshift_master>:8443/swaggerapi/oapi/<api_version>/

The cURL command can be used to validate the endpoints against an existing OpenShift environment, which will produce a Swagger-formatted JSON result. The power of the Swagger framework is realized by the ability to take this API representation and present it in a consumable form the user can interact with, such as retrieving results and invoking requests. These interactions are facilitated using the Swagger UI.
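
For example, a quick sanity check of the OpenShift API endpoint can be performed with cURL. The master hostname is a placeholder, and the -k flag skips certificate verification:

# Retrieve the Swagger description of the OpenShift API (v1 for the GA release)
curl -k https://<openshift_master>:8443/swaggerapi/oapi/v1/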

User Interface

The Swagger UI is a set of HTML, CSS and JavaScript components that produce the representation of the API. An updated customized Swagger UI implementation is available to communicate with OpenShift and can be found at http://openshiftv3swagger-sabre1041.rhcloud.com/.

Since the Swagger UI is a client-based application that may be served from a domain other than OpenShift, settings within OpenShift need to be configured on the master to allow for Cross-Origin Resource Sharing (CORS). The OpenShift master configuration file located at /etc/openshift/master/master-config.yaml contains the key corsAllowedOrigins, which defines the origins that are allowed to invoke the API. To allow all origins, add a “.*” entry to the list similar to the following:

corsAllowedOrigins:
  - 127.0.0.1
  - localhost
  - ".*"

Restart the OpenShift service on the master to apply the changes:

systemctl restart openshift-master

Return to the Swagger UI and you will notice several input fields at the top of the page, which correspond to the following:

OpenShift Swagger Navigation Bar

 

  • Master Base URL – URL (host and port) of the OpenShift Master
  • API Type – Type of API to invoke (Kubernetes or OpenShift)
  • API Version – Version of the OpenShift API (use v1 for OpenShift V3 GA)
  • Authentication Token – User token of a privileged user to execute commands as (explained later)

The endpoint is unsecured by default, so listing the functions can be completed without entering a token. First, enter the URL of your OpenShift master. For example, this could be https://master.openshift.example.com:8443. Next, select the API to invoke (either the Kubernetes or OpenShift API) from the dropdown. Then enter the API version in the next textbox (a dropdown of values is available to aid with input selection). Finally, hit the Explore button. It may take a moment for the page to render, but once finished, the list of functions for the selected API should be presented.

 

OpenShift Swagger Functions

Consult the Swagger documentation for the full range of capabilities provided by the framework.

Authentication Token

Many of the API functions require an authenticated user for invocation. To invoke these additional functions in the Swagger UI, an authentication token from a user registered in OpenShift is required. The token can be obtained using the OpenShift CLI from a developer machine.

Assuming the CLI tools have been previously installed, first log in to OpenShift:

oc login https://<openshift_master>:8443

After successfully authenticating, obtain the token for the session:

oc whoami -t

The token value will be returned. Enter this value in the token textbox in the Swagger UI to allow for the execution of additional resources the user is authorized to invoke.
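
The token is not limited to the Swagger UI; as a quick sanity check, it can also be passed as a bearer token when calling the API directly with cURL (the master hostname is a placeholder):

# Call the OpenShift API directly, passing the session token as a bearer token
TOKEN=$(oc whoami -t)
curl -k -H "Authorization: Bearer $TOKEN" https://<openshift_master>:8443/oapi/v1/projects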

The source code for the project is located at https://github.com/sabre1041/openshift-api-swagger and contributions are always welcome.


OpenShift Camel Component in a SwitchYard Project – An Overview

Posted: April 3rd, 2014 | Filed under: Technology

Camel OpenShift SwitchYard Logo

In an earlier post, I introduced a custom Apache Camel component that can be used to manage the OpenShift platform. It allows for the invocation of operations that control key components of the OpenShift lifecycle such as displaying user metrics and managing applications. The operations and metrics generated from this component can be used in a variety of applications that leverage the Camel framework to solve business goals. Camel can be deployed to a variety of runtime environments ranging from servlet containers such as Tomcat, application servers such as JBoss, OSGi containers such as Karaf, or even standalone as a self-sufficient application. To demonstrate the functionality of the OpenShift Camel component in a project, we will leverage SwitchYard and JBoss as the running environment within a sample project called camel-openshift-switchyard. SwitchYard is a services delivery framework for running and managing service-oriented applications. Running on SwitchYard provides the opportunity to cover both the OpenShift Camel component and developing and running applications using SwitchYard. This post will provide an overview of the sample project including the steps necessary to configure and deploy either in your own environment or on the OpenShift platform. In subsequent posts, we will review the project in depth, including how Camel can be integrated into a SwitchYard project and how to integrate the OpenShift Camel component in a project of your own.

The camel-openshift-switchyard project consists of a SwitchYard application and a basic web application that can be used to invoke services exposed by SwitchYard. The SwitchYard application contains two services: OpenShiftOperationsService and OpenShiftApplicationsStartupService. Each contains Camel routes utilizing the OpenShift Camel component. The OpenShiftOperationsService invokes operations against a given user’s account, such as retrieving the details of a given domain or applications within a domain. Meanwhile, the OpenShiftApplicationsStartupService will attempt to start any application associated with a user that is not currently started. A diagram depicting these services and the overall SwitchYard composite is shown below:

Camel OpenShift SwitchYard Composite

Both services contain a RESTEasy binding to allow for external invocation via REST. The OpenShiftApplicationsStartupService also contains a timer binding which will initiate the service at the top of each hour based on a cron schedule. To take advantage of the scheduled service operation, the following two system properties must be configured either within the application itself or on the application platform; one option is sketched after the list below.

  • openshift.user – The login of the OpenShift user
  • openshift.password – The password on the account
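
As a rough sketch of one way to do this (credential values below are placeholders), the properties can be supplied as JVM system properties when starting a local JBoss container:

# Supply the OpenShift credentials as JVM system properties (placeholder values)
JAVA_OPTS="$JAVA_OPTS -Dopenshift.user=user@example.com -Dopenshift.password=changeme"
export JAVA_OPTS
$JBOSS_HOME/bin/standalone.sh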

Testing

Included in the project are a series of integration tests designed to validate the functionality of the application and to demonstrate the testing capabilities of SwitchYard applications. Injection of property values and invocation of services and HTTP resources are some of the capabilities demonstrated.

Building and Deploying

The project is hosted on GitHub and can be cloned into a local environment by running the following command:

git clone https://github.com/sabre1041/camel-openshift-switchyard.git

Before the project can be built or imported into an IDE, certain project dependencies must be configured. A core project dependency is the OpenShift Camel component, which must be installed and available in the local Maven repository. Because this library is not part of any publicly available repository, initialization scripts for both Windows and Unix have been provided to configure the required dependencies. These scripts are found in the support folder of the project. Execute init.sh or init.bat depending on your operating system.

Next, build the project using Maven by running the following command:

mvn clean install

With the project now built, deploy the archive to Fuse Service Works or a JBoss container with SwitchYard installed. Once deployed, the application can be accessed by browsing to http://localhost:8080/camel-openshift-switchyard/
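
As a sketch of one deployment option (the artifact name, packaging, and path are assumptions based on the project name), the built archive can simply be copied into the container's deployments directory:

# Hot deploy the built archive to a local JBoss/SwitchYard container
# (artifact name and path are assumptions based on the project name)
cp target/camel-openshift-switchyard-*.jar "$JBOSS_HOME/standalone/deployments/"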

 Camel OpenShift SwitchYard Application

Deployment to OpenShift

The project has been configured to be seamlessly deployed to the OpenShift platform. OpenShift provides the functionality to utilize custom cartridges, which are not part of the core OpenShift platform, to host applications. Since this project is built on SwitchYard, we will leverage a custom JBoss EAP 6 cartridge preconfigured with the SwitchYard runtime. The OpenShift web interface provides the ability to specify a custom cartridge and existing source code during new application creation. However, due to the number of dependencies required by the SwitchYard platform, the project is unable to be built and deployed before the default application creation timeout. To get around this limitation, we will create the application using the OpenShift command line tools (RHC), merge the local OpenShift project with the camel-openshift-switchyard project hosted on GitHub, and finally push the application to OpenShift. From a terminal, in the location on your local machine where the repository of the newly created application should be created, issue the following command to create an application using the custom SwitchYard cartridge as the container:

rhc app create <app-name> "http://cartridge-switchyard.rhcloud.com/manifest/466c7020661420c4604e870802fe673244861a5a"

After the application has been successfully created and the source code is available on your machine, change into the application directory:

cd <app-name>

First, add the GitHub-hosted project as an upstream Git remote repository:

git remote add upstream -m master https://github.com/sabre1041/camel-openshift-switchyard.git

Next, pull in the changes from the upstream repository:

git pull -s recursive -X theirs upstream master

Finally, push the changes to OpenShift:

git push origin master

OpenShift will then handle obtaining the required dependencies, building and deploying the project. Once complete, the application can be viewed in a web browser. To determine the URL of an OpenShift application from the command line, execute the following command:

rhc app show <app-name>

You should be able to browse to the URL and utilize the application as you would on your local machine.

As mentioned previously, we will walk through the project in detail in a future post. This will allow for the demonstration of SwitchYard components and provide examples of how the OpenShift Camel component can be used in projects of your own.


Creating a Virtual Environment for the OpenShift Platform Using Oracle VirtualBox

Posted: February 23rd, 2014 | Filed under: Technology

VirtualBox Logo

In the past, I have demonstrated several projects and features for OpenShift, Red Hat’s Platform as a Service product. You may be aware of OpenShift Online, available at http://openshift.com, for seamless deployment of applications to the cloud. What you may not be aware of is that OpenShift Online is only one of the OpenShift products available. The OpenShift line of products features three solutions: Online, Origin and Enterprise. While OpenShift Online may suit your public cloud needs, there may be interest in establishing a private or hybrid cloud solution within your enterprise. The best way to become familiar with a product is to install and configure it yourself on a local machine, such as a personal desktop or laptop. Evaluating cloud-based technologies on a single machine typically requires the use of virtual machines to replicate multiple machines or environments. However, applying the appropriate configurations when leveraging multiple virtual machines can be challenging, as they must be able to communicate both with each other and with the external Internet. In the following discussion, we will walk through the process of configuring the supporting infrastructure of a minimal installation of a private OpenShift environment using Oracle VirtualBox.

While there are several virtualization products on the market today, we will utilize Oracle VirtualBox primarily due to its ease of use and compatibility with multiple platforms. Before we begin the configuration process, let’s discuss what an OpenShift environment consists of. A typical basic installation of OpenShift requires the allocation of two machines: A Broker, which handles user and application management, and a Node which provides hosting of cartridges and the actual storage of applications deployed by users, which are contained within gears. Communication between broker and node instances is critical to the performance of the OpenShift environment and presents the greatest challenge when configuring the virtual instances. We need to provide an environment that fulfills the following requirements:

  • Host can communicate with each virtual guest
  • Virtual guests can communicate with each other
  • Virtual guests can access external resources on the Internet

VirtualBox, similar to most virtualization software, supports several network hardware devices that can be run in a variety of networking modes. A full discussion of the different types of network devices and modes VirtualBox can be configured in can be found on the VirtualBox website. To accommodate the aforementioned requirements, we will leverage multiple network adapters running in separate networking modes. The first adapter will be configured in host-only mode, as it creates a private network among the host and each virtual machine configured with this network type. The limitation when running in host-only mode is that there is no access to external resources such as the Internet. To allow access to external networks within each guest, a second adapter is configured in NAT networking mode. NAT networking is one of the modes VirtualBox provides for allowing a guest to connect to external resources. It was chosen as it requires the least amount of work and configuration by the end user and is ideal for simple access to external systems. An overall diagram of the virtual machine network architecture can be found below.

VirtualBox Network Diagram
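
For those who prefer scripting over the VirtualBox UI, the same dual-adapter setup can be sketched with the VBoxManage command line tool. The VM name below is only an example:

# Create a host-only interface (typically vboxnet0 on the first invocation)
VBoxManage hostonlyif create
# Attach adapter 1 in host-only mode and adapter 2 in NAT mode for the guest
VBoxManage modifyvm "openshift-broker" --nic1 hostonly --hostonlyadapter1 vboxnet0
VBoxManage modifyvm "openshift-broker" --nic2 nat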


OpenShift Camel Component – Manage OpenShift from Camel

Posted: February 10th, 2014 | Filed under: Technology

Apache Camel

Since its creation in 2007, the Apache Camel project has allowed developers to integrate systems by creating Service Oriented Architecture (SOA) applications using industry standard Enterprise Integration Patterns (EIP). At its core, Camel is a routing engine builder through which messages can be received and processed. The processing of messages uses a standardized API library backed by an extensive collection of components which abstracts the actual implementation of routing and transformation of messages. These components also include the ability to communicate with external systems through an array of protocols including web services, REST and JMS. With more than 80 components in the standard Camel distribution, developers have the flexibility to perform powerful operations with minimal effort to suit their business needs. One of the benefits of the Camel component framework is that it was designed as a factory system where new components can be easily added to solve a business use case.

As more companies and individuals look to migrate their operations and applications into cloud-based solutions, the management of these resources can prove to be a challenge. The overall architecture may be different than what many individuals are accustomed to. The OpenShift Platform as a Service (PaaS) is one such cloud offering where developers can quickly create powerful scaled applications without having to worry about managing infrastructure or complex software installation.

Introducing the OpenShift Camel Component

Imagine being able to communicate with the OpenShift platform directly through a Camel route. With the OpenShift Camel component, it is now possible. Exposed through an Endpoint as both a Camel Consumer and Producer, OpenShift resource details can be retrieved or modified all within a Camel route. Want to find out information about a particular OpenShift Domain, such as the applications and their details? How about automating application deployments? These are only some of the possibilities now available.

Communication between the OpenShift Camel component and OpenShift is facilitated by using the REST API exposed by the OpenShift platform and the OpenShift Java Client. Responses from the OpenShift Camel component will be returned using the API from the OpenShift Java Client.

Using the OpenShift Camel Component in your project

The OpenShift Camel Component can be easily added to a new or existing project. The following steps will describe how this can be accomplished.

GitHub Project: https://github.com/sabre1041/camel-openshift

  1. Fork/Clone the GitHub repository 
    git clone https://github.com/sabre1041/camel-openshift.git
  2. Build the project using Maven
    mvn clean install
  3. Add the component to a new or existing Camel project as a Maven dependency in the project’s POM file.
    <dependency>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-openshift</artifactId>
        <version>0.0.1-SNAPSHOT</version>
    </dependency>
  4. The following example implementation will retrieve information about the authenticated user and print out their user details to the log:

    Java DSL

    import org.apache.camel.builder.RouteBuilder;
    
    public class PrintOpenShiftUserRoute extends RouteBuilder {
    
        public void configure() throws Exception {
            // Retrieve User Details
            from("openshift://user?userName=<openshift_username>&password=<openshift_password>")
                //Print to Log
                .log("${body}");
        }
    
    }

    Spring XML

    <route>
      <description>Spring XML Route. Full XML omitted for brevity</description>
      <from uri="openshift://user?userName=<openshift_user>&amp;password=<openshift_password>"/>
      <log message="${body}"/>
    </route>

With the steps above complete, the project can be deployed to a running container, or unit tests can be created to verify the expected results. Additional functionality, including how to leverage Authentication as Message Headers, can be found on the GitHub project page.

In an upcoming post, I will demonstrate several uses for the OpenShift Camel component, including how it can be leveraged within a SwitchYard application.


Nexus Cartridge for OpenShift

Posted: November 16th, 2013 | Filed under: Technology

OpenShift Nexus Cartridge

Deploying an application to the OpenShift Platform as a Service (PaaS) can be accomplished in only a matter of minutes due to the wide array of tools available to the developer. One of the tools provided by OpenShift is the ability to add the Jenkins Continuous Integration server to the development lifecycle of an application. The inclusion of Jenkins affords several incentives for both developers and stakeholders of any development project. First, by adding Jenkins to an OpenShift application, the build process is delegated to Jenkins, resulting in far less downtime for the application, as the application is typically taken offline during the build and deployment process. Secondly, it brings the Jenkins Continuous Integration server, with its wide array of plugins and large community support, into your environment. Jenkins is one of the components that make up a continuous integration environment, which is a set of software tools responsible for the assembly and distribution of software artifacts. The primary components of this environment consist of a build process, version control system, continuous integration server and finally a repository manager. The OpenShift ecosystem contains the majority of these components, with the exception of an easy way to access and store artifacts that have been assembled as a result of a build process. This responsibility is typically delegated to the final component of the continuous integration environment: the repository manager. One of the more popular repository managers available on the market today is Sonatype Nexus. Nexus can serve multiple purposes in an environment: it can be a central location for binary artifacts and their dependencies, act as a proxy between your organization and publicly available repositories, and serve as a deployment destination for internally created artifacts.

Each component of an OpenShift application, whether it is a web or database server framework, is known as a cartridge, and there is a wide number of cartridges readily available today. One of the benefits of the OpenShift cartridge system is the ability to leverage downloadable user cartridges to extend the base set of included cartridges. Offering Nexus as a downloadable cartridge completes the set of components needed to create a continuous integration environment in the cloud. To learn more about a continuous integration environment and how to build one of your own, refer to the following link: http://www.redhat.com/resourcelibrary/whitepapers/eap-create-compile-and-deployment-environment

The Nexus cartridge project is hosted both on GitHub and on OpenShift and is built using the Cartridge Development Kit. By leveraging the CDK, an end user can choose the particular build of a cartridge to add to their application. The Nexus cartridge project is available at the following locations:

After selecting the CDK link above, you can use the OpenShift RHC client tools or the web console to add the cartridge to an application based on the information provided. If you are using the client tools, you can use the command presented at the top of the page, which represents the most recent build from the master branch of the project, to create an application. When using the web console, first log in and click Add Application. Downloadable cartridges can be added by entering the location of the cartridge manifest in the textbox provided in the “Code Anything” section. Click Next, enter the name for the newly created application using the Nexus cartridge, and click Create Application. After the cartridge has been created, you will be presented with a confirmation along with the user name and password for the Nexus administrator account. After waiting a few moments to allow the server to come online, you can click the application link provided, which will bring you to the Nexus server start page. More information on how to configure and utilize Nexus can be found at the following documentation page: http://books.sonatype.com/nexus-book/reference/
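
If you go the client tools route, the command takes the same general form used for any downloadable cartridge; as a sketch, with the manifest URL below serving as a placeholder for the one published on the CDK page:

# Create an application backed by the downloadable cartridge manifest
rhc app create nexus "<cartridge_manifest_url>"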

At this point, you have a fully functional repository manager hosted in the cloud. In an upcoming post, I will demonstrate how you can create a fully functional continuous integration environment hosted on the OpenShift platform.


Exploring the Errai Framework

Posted: April 11th, 2013 | Filed under: Technology

One of the prominent features of Java EE 6 was the introduction of Contexts and Dependency Injection (JSR 299). Built on top of Dependency Injection for Java (JSR 330), CDI allowed Dependency Injection (DI) and Aspect Oriented Programming (AOP) to become standardized, giving developers the flexibility to implement a native solution rather than having to utilize a third party library such as Spring. CDI makes it easy to inject resources such as managed beans and services. It has done wonders for server side components, but if there is one thing web 2.0 has taught us, it is that more and more functionality is being leveraged on the client side. Java based web frameworks have moved in this direction as well, with implementations such as Google Web Toolkit (GWT) compiling Java code into highly optimized JavaScript. Until now, CDI was restricted to being a server side technology, but with the help of the Errai Framework, CDI can now be leveraged in the browser. Errai is more than a dependency injection toolkit; rather, it is a set of technologies built on top of the ErraiBus messaging framework for developing rich web applications using GWT. In this post, we will demonstrate several components of the Errai framework and how they can be used to build powerful web applications.

To help illustrate the various components of the Errai framework, we will walk through a sample application. This application allows users to perform basic mathematical operations such as addition, subtraction, multiplication and division, among others. After a successful submission, the result of the operation is displayed below along with a log of all of the operations they have previously performed. Also included next to each submission is an icon indicating the user’s operating system and browser.

Errai Math

This application was designed specifically for showcasing various Errai components. The source code for this application is found at https://github.com/sabre1041/errai-math which will be useful when we begin discussing the implementation. The application can also be accessed directly as it is deployed on Red Hat’s OpenShift PaaS by navigating to http://erraimath.andyserver.com. The application can be deployed using GWT development mode, JBoss EAP 6 or OpenShift. It utilizes the following Errai components:

  • Errai JAX-RS
  • Errai CDI
  • Errai UI
  • Errai Data Binding

We will discuss each component in detail moving forward.

Before we dive into the details of the application, let’s briefly discuss how Errai applications fit into the conventions of a GWT application. A GWT application is organized into modules, which are defined in XML files. These files are placed at the top of the project hierarchy. In our application, the module file is named ErraiMath.gwt.xml. Standalone GWT applications also need to define an EntryPoint class in their module file, which denotes the starting point for the application. Errai applications can forgo this requirement by instead annotating a class with the @EntryPoint annotation. Our EntryPoint class is called ErraiMath and resides in the com.redhat.errai.math.client.local package. All client side files will be placed in this package.


The Block 87 Blog Introduction: How we got to this point

Posted: March 17th, 2013 | Filed under: Technology

Congratulations. From the estimated 200 million blogs on the internet today, you have found your way to this neck of the information superhighway. Before we delve into real business, let’s take a step back to who I am. My name is Andrew Block and I live for two things: technology and sports (hence the tagline at the top). While my two hometown teams, the Buffalo Sabres and Buffalo Bills, continue their path towards futility, I’ll go beyond the X’s and O’s to provide more than you would find on the front page of espn.com. While sports will be a frequent topic, it will not be the primary subject. Technology, ever increasing throughout our lives, will be the primary topic of discussion. The majority of posts will be how-tos and best practices on a particular piece of technology (yes, even including this post, but we will get to that later).

So, now that we got that little introduction out of the way, let’s get down to business. The title of this post, beyond the introduction, is “how we got to this point”. How did we get to this point? Over the past few years, I have written and contributed to a number of technology best practice documents. However, most of these were internal documents. I came to the conclusion that others outside my organization would also find these tidbits of tech useful. So I decided to go down the route of countless others on the internet today: start a blog. This isn’t my first attempt at “blogging”. I had a short-lived blog which was written using one of the first versions of Movable Type. That was quite some time ago considering Movable Type was released in 2001. While I could have consulted the handy “Blogging for Dummies” book, I did consult 16 Ideas To Avoid Complete and Utter Failure. Fairly useful, and it has helped shape the blog into what it is today. One of my personal traits regarding technology is that I like to get my hands dirty. While I could have gone the easy route by choosing a somewhat preconfigured site like Blogger or a hosted solution provided by WordPress, I wanted to be able to control my own destiny (like a team choosing to go last in a shootout). I chose WordPress, as emphasized by the aforementioned best practices document, from the countless blogging software options available. The next step was finding a home. I have a number of domain names in my possession, and since this was a personal blog, I decided to fall back to andyserver.com, which happens to be the oldest in my collection. This domain was never hosted by any hosting provider and was once hosted out of my basement. To avoid paying a hosting provider to house this blog, I turned to a Platform as a Service (PaaS). A Platform as a Service is a way to rent hardware and other services to deploy and run applications. Red Hat’s OpenShift platform was a perfect match to house my blog as it allowed me to get my hands dirty with the configuration and management, but more importantly, it was free! The rest of this post will explain the steps necessary to spin up a WordPress blog on the OpenShift platform.

Openshift Logo

To get started, first create an OpenShift account at http://openshift.com. After registering and confirming your email, you are presented with a list of possible application types to create. These types range from Java and Ruby to Python, among many others.

Openshift Application List

In addition, there are a number of preconfigured applications (called instant apps) which allow for apps to be spun up even faster. One of these options is WordPress, which will package and configure PHP, MySQL, and the WordPress software. Perfect! Once this is selected, you are asked to create a name for your application, and if this is your first OpenShift application, you are also asked to create a namespace for your OpenShift account. A namespace is essentially a subdomain for all of the applications you deploy. Once a namespace is set, it cannot be changed, so be sure to think long and hard about this name. Go ahead and click Create Application to create the application. Once the application has been created, you are presented with a page detailing key information about your application such as MySQL connection information and the publicly accessible URL for the application. Clicking on this URL initially redirects to the WordPress initial configuration page where basic WordPress settings are made.

Wordpress Installation

Also on this page is a link to key documentation for the initial installation and configuration process. Enter the required information and select Install WordPress. Once successful, you will be able to log in to the WordPress Administration Console using the username and password previously configured. I’ll forgo the discussion on configuring WordPress, but consult the full documentation to learn more about how to manage your blog’s settings, modify the appearance, and add plugins to expand functionality.

Currently the blog is accessible at a URL such as [app_name]-[namespace].rhcloud.com as previously configured. Not too easy to remember and certainly not in the andyserver.com domain as we desire. OpenShift has the ability to leverage CNAME (Canonical Name) records in DNS. This will allow us to use an address such as blog.andyserver.com and have all requests transparently redirected to our OpenShift application address. Full instructions for this process can be found here, but we will go over the high level steps. Configuration needs to occur both within OpenShift and with the DNS provider of the andyserver.com domain. Add a new CNAME record such as the following configuration:

cname
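
Once the record has been added at the DNS provider, a quick check from a terminal confirms it resolves (this blog's hostname is used as the example):

# Confirm the CNAME record points at the OpenShift application address
dig +short blog.andyserver.com CNAME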

With the configuration now complete on the domain side, we now turn to the OpenShift configuration. While a number of application configuration options can be managed through the online management console, some cannot. CNAME records cannot be managed online and must be configured through the OpenShift command line tools. Available for Windows, Mac and Linux platforms, these tools provide full configuration and management of OpenShift applications. Detailed instructions for installing and configuring these tools on your machine can be found on the Installing RHC Client Tools page.

After completing the processes detailed in the installation documentation, the RHC client tools and git should be installed on your machine. In addition, running the rhc setup command should have walked you through the initial configuration steps, such as generating an SSH key, which is used to configure access to your account through the command line tools. Full OpenShift documentation is available by consulting the User Guide. Once all of these steps have been completed, the rhc alias command can be run to add an alias to an application:

rhc alias add <application_name> <alias>

For the blog application, we would issue the command:

rhc alias add blog blog.andyserver.com

If the command was successful, the alias can be verified on the application properties page of the OpenShift web management console under the alias section. With the configurations now complete on both the domain and OpenShift side, try to access the blog using the CNAME address. Keep in mind that even though both steps were completed successfully, your application may not be reachable due to DNS propagation delays. Once your site is reachable by its CNAME value, the final step is to configure the base URL within WordPress. Log in to the administration console of your blog and navigate to the General page under the Settings section. Set the WordPress Address and Site Address URL to your CNAME value.

Congratulations, your blog is up and accessible by your countless followers. And that is how we got to this point. We started off introducing the Block 87 blog. We then introduced the OpenShift platform not only as a way to deploy and run applications in the cloud, but how to get a blog of your own like this one up and running. This is only a taste of things to come. Stay tuned.