Dynamic Jenkins Slave Provisioning on OpenShift

Posted: January 31st, 2016 | Filed under: Technology

Configure Jenkins

The Jenkins web console can be accessed at jenkins-<project>.<default_app_domain>. This link is also available at the top of the overview page. The default username and password combination to log into the Jenkins master is admin:password.
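If you are unsure of the exact hostname, it can be retrieved from the route exposing the Jenkins master (the route name jenkins below is an assumption based on the template; adjust it to match your project):

oc get route jenkins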

After successfully authenticating, the homepage is presented containing the list of jobs in the center and, more importantly, the list of executors on the left. If the slave instance that was part of the template built and deployed successfully, it should be listed under the Executors section. This statically designated slave was a great fit for the previous post, but not for demonstrating dynamically allocated slaves, so let’s go ahead and scale the slave instance down to 0.

Run the following command to scale down the slave:

oc scale dc jenkins-slave --replicas=0

In a few seconds, the slave listed in the executors section will disappear.
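To confirm from the command line that the slave pod has been removed, list the pods in the project; once the scale down completes, no jenkins-slave pod should remain:

oc get pods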

As we mentioned previously, there are a few manual steps that need to be completed before slaves can be dynamically allocated by Jenkins. These configurations are found in the Jenkins system management page. To access this page, select Manage Jenkins on the left-hand side of the page and then select Configure System. The required modifications are all within the Cloud section, underneath Kubernetes.

[Image: The Kubernetes cloud section of the Jenkins system configuration page]

First, confirm the name of the Kubernetes namespace. This is equivalent to the name of the project in OpenShift. By default, the jenkins namespace is specified.
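The name of the project currently in use can also be confirmed from the command line:

oc project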

Next is the method by which Jenkins will communicate with the OpenShift API. By default, the assembly of the Docker image configured Jenkins to utilize the service account token credential as discussed earlier. It is also possible to use a normal user account to communicate with the API. However, since the username and password are configured in Jenkins, this becomes difficult to manage when the password changes, especially when OpenShift is integrated with an external identity management system such as LDAP. If you choose to leverage an alternate strategy for communicating with the OpenShift API, it can be managed by clicking the Add button next to Credentials and performing the appropriate configurations.
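If you need to inspect the token associated with the service account, for example to configure the credential manually, it can be found in one of the secrets attached to the account. The commands below are a sketch that assumes the service account is named jenkins; the generated suffix on the token secret name will differ in your project:

oc describe sa jenkins                  # lists the token secrets attached to the service account
oc describe secret jenkins-token-abcde  # hypothetical secret name; substitute one from the output above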

Validate Jenkins can communicate with OpenShift by clicking the Test Connection button. If you see “Connection successful”, Jenkins is able to communicate with OpenShift. Finally, the location of the Docker image that will be used for the slave must be configured. The same Docker image that was used in the slave instance scaled down earlier can also act as a dynamic slave, as it has the JNLP client installed. Since the image was built and pushed to the integrated Docker registry within OpenShift, let’s determine its location by inspecting the ImageStream that was configured in the project.

Execute the following command to determine the location of the Jenkins slave image within the OpenShift integrated registry:

oc get is jenkins-slave --no-headers | awk '{ print $2 }'

Enter the value returned into the textbox next to Docker image. The value will look very similar to the default value with the exception of a different IP address. With all of the required modifications complete, click Save at the bottom of the page to apply the changes.
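Alternatively, the same value can be read directly from the status.dockerImageRepository field of the ImageStream. The Go template output below is a sketch; depending on the version of the oc client, -o jsonpath may also be available:

oc get is jenkins-slave -o template --template='{{.status.dockerImageRepository}}'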

Now that Jenkins is configured to dynamically allocate job executions to slaves, let’s go ahead and create a simple job that can be executed by a slave. This simple job will print “Hello from <hostname>”, where the hostname of the slave is substituted in, followed by a brief pause to simulate an extended build period. From the Jenkins homepage, select New Item on the left-hand side, enter the name Test Slave Job in the textbox next to Item Name, select Freestyle project and click OK.

On the project configuration page, scroll down to the build section and select Add build step and then Execute Shell. Enter the following in the command textbox:

set +x                         # disable command echoing for cleaner output
echo "Hello from ${HOSTNAME}"  # print the hostname of the slave executing the job
sleep 20                       # simulate an extended build period

Click Save to apply the changes to the job.

Start a new build by selecting the Build Now button on the left-hand side. Return to the main screen by clicking the Jenkins logo at the top left of the screen. The job should now be visible in the job queue, and momentarily a new slave will appear underneath Build Executor Status.

[Image: The dynamically provisioned Jenkins slave appearing offline under Build Executor Status]

Over on the OpenShift user interface, the new pod containing the slave can also be seen on the project overview page.
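The pod can also be watched from the command line as it is created and, later, terminated:

oc get pods -w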

[Image: The Jenkins slave pod on the OpenShift project overview page]

Once the pod fully comes online, the job will run, and once it completes, the slave and pod will be destroyed.

[Image: The completed Jenkins slave job]

The results of the build can be seen by clicking on the name of the job on the Jenkins main screen, selecting the #1 link underneath build history and then selecting Console Output.

Started by user Jenkins Admin
Building remotely on 2c20d91e0a873 in workspace /opt/jenkins-slave/workspace/Test Slave Job
[Test Slave Job] $ /bin/sh -xe /tmp/hudson2524873781198705586.sh
+ set +x
Hello from 2c20d91e0a873
Finished: SUCCESS

Being able to dynamically allocate resources in OpenShift to fulfill Jenkins executions gives organizations the flexibility to embrace the principles of Continuous Integration and Continuous Delivery by being able to accommodate the increased workload. Another paradigm to consider is a mix of the concepts from this post and the previous one: a pool of statically defined slaves, with additional slaves dynamically allocated once the static pool is fully consumed. To accomplish this, all you would need to do is scale the jenkins-slave DeploymentConfig that we scaled down earlier back up, setting the replicas value to a value of your choosing. Jobs would be allocated to the set of static resources first, and once the master determined that there were no more available resources to take on additional work, slave instances would be dynamically created.
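For example, to bring two statically defined slaves back online (the value 2 here is arbitrary):

oc scale dc jenkins-slave --replicas=2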

Another alternative is to integrate with an existing Jenkins master instance outside of OpenShift and use OpenShift as the execution environment for Jenkins slaves. Two primary modifications are required to accomplish this paradigm:

  • The two OpenShift services that are currently referencing the master located in OpenShift would need to be converted to external services which point to the location of the externally hosted master (a minimal sketch follows this list).
  • The location of the OpenShift API for the Kubernetes cloud plugin in the Jenkins system configuration page would need to refer to the publicly accessible API location instead of SkyDNS.
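The sketch below illustrates the external service approach for the first item, using a service without a selector paired with a manually defined Endpoints object. The service name jenkins, the port, and the IP address are assumptions; substitute the names used by your template and the actual location of the external master:

oc create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: jenkins            # assumed service name from the template
spec:
  ports:
  - port: 8080             # port the Jenkins web UI listens on
---
apiVersion: v1
kind: Endpoints
metadata:
  name: jenkins            # must match the service name above
subsets:
- addresses:
  - ip: 192.0.2.10         # hypothetical IP of the externally hosted master
  ports:
  - port: 8080
EOF

The second service, which carries JNLP traffic between the master and slaves (commonly port 50000), would be defined in the same manner.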

The possibilities are endless.
