Jenkins Slaves in OpenShift Using an External Jenkins Environment

Posted: February 14th, 2016 | Filed under: Technology

At this point, the configuration of both OpenShift and Jenkins to utilize statically defined slave instances is complete. Using either the OpenShift web console or the OpenShift CLI, verify that at least one slave instance is currently running. The slave instance should now be available in the list of executors on the left-hand side of the Jenkins landing page.

Jenkins Static Slave Ready
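Alternatively, the running slave instances can be listed from the CLI (pod names are derived from the jenkins-slave deployment configuration created earlier):

oc get pods | grep jenkins-slave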

Select an existing job or create a new one, and start a build as a test to validate that it runs on the slave instance within OpenShift.

Jenkins Static Slave Running

Now that jobs have been validated to run on statically defined slave instances within OpenShift from an external Jenkins master, let’s cover the second use case, where Jenkins leverages the Kubernetes plugin to dynamically provision slave instances in OpenShift. The first step is to delete the DeploymentConfig, which removes the statically defined instances that are currently deployed and prevents new instances from being deployed if the slave image is updated:

oc delete dc jenkins-slave

Next, let’s configure the settings for the Kubernetes plugin on the system configuration page. Once again, it can be accessed by selecting Manage Jenkins on the landing page and then selecting Configure System. Scroll down towards the bottom of the page and locate the Cloud section. Select Add a new cloud and choose Kubernetes, which will generate a new section of configuration options.

Under Kubernetes, enter a name for the configuration, such as OpenShift, and then enter the URL of the OpenShift API. You can either enter the server certificate key used for HTTPS communication or select Disable HTTPS Security Check to disable this validation. Since a Kubernetes namespace is equivalent to an OpenShift project, enter jenkins in the text box next to the Kubernetes Namespace field.

Jenkins Kubernetes Plugin Settings
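For reference, the completed fields might resemble the following, where the API hostname is illustrative and will differ in your environment:

Name: OpenShift
Kubernetes URL: https://openshift.example.com:8443
Kubernetes Namespace: jenkins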

When Jenkins communicates with OpenShift, it must provide authentication in order to interact with the API. In Jenkins, these values are stored as credentials. In the previous post, we leveraged an OpenShift service account to provide the authentication details, since the Jenkins master was running in OpenShift. Because Jenkins is now running outside of OpenShift, a username and password for an account with access to the jenkins project must be provided instead. Click the Add button to start the credential creation process, then in the dialog box next to Kind, use the dropdown to select OpenShift OAuth Access Token. Enter the username, password, and description into the text boxes. If desired, select the Advanced button and enter a unique ID for the credential. Otherwise, one will be generated automatically.

Jenkins Kubernetes Plugin Credentials

Finally, click Add to create the credential, and then select it from the dropdown box next to Credentials.

Validate that the master can successfully communicate with the OpenShift API by clicking the Test Connection button, which should return a Connection Successful message.

Jenkins Kubernetes Connection Success
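If the connection test fails with an authorization error, verify the account actually has access to the jenkins project. A user with administrative rights on the project can grant the edit role as follows (the username is illustrative):

oc policy add-role-to-user edit developer -n jenkins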

Next, we will specify the addresses the dynamically provisioned slaves will use to communicate back to the Jenkins master. Two OpenShift capabilities are used within these addresses. First, SkyDNS provides a mechanism for reaching OpenShift resources, including Kubernetes services, by domain name. Earlier, Kubernetes services were created to reference the location of the Jenkins master, and by referencing a service, the slaves can reach the master. Service addresses in SkyDNS take the form <service>.<project>.svc.cluster.local. In the Jenkins URL field, enter http://jenkins.jenkins.svc.cluster.local:8080, which leverages the jenkins Kubernetes service. For the Jenkins tunnel field, enter jenkins-slave.jenkins.svc.cluster.local:50000 to use the jenkins-slave Kubernetes service, which provides a communication channel between the slave and master over the JNLP port. Click Apply to save the changes.
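Before moving on, it can be helpful to confirm that both services referenced by these addresses exist in the jenkins project:

oc get svc -n jenkins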

When a new slave instance is provisioned by the master, it communicates with OpenShift to create a pod with the appropriate Docker image and settings to execute the job. These configurations are specified by defining a new pod template. In the Kubernetes plugin configuration, select Add Pod Template and then Kubernetes Pod Template to add the fields to the configuration page. Only three fields require input. First, give the template a name, such as slave. Next, specify the Docker image that will be used for the slave. When the template was instantiated earlier, a new build of the slave image was started, completed, and then pushed to the integrated Docker registry. To determine the location of the image in the integrated registry, execute the following command:

oc get is jenkins-slave --no-headers | awk '{ print $2 }'
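The output will resemble the following, where the IP address corresponds to the integrated registry service in your environment and is shown here for illustration only:

172.30.45.123:5000/jenkins/jenkins-slave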

Insert the response into the textbox next to Docker image.

Finally, set the Jenkins slave root directory textbox to match the home directory specified in the slave Dockerfile.

/opt/jenkins-slave

Click Save to apply the changes and complete the configuration required to provision dynamic slaves.

Jenkins Kubernetes Pod Template

To validate the configuration, first ensure no statically defined slave instances are running. If the jenkins-slave DeploymentConfig from the first use case still exists, scale it down by running the following command:

oc scale dc jenkins-slave --replicas=0

Start a new build and confirm the job is running in a pod on OpenShift.

Jenkins Kubernetes Dynamic Slaves
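The provisioned slave pod can also be observed from the CLI by watching the pod list as the build runs:

oc get pods -w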

With the successful execution of Jenkins jobs from both statically defined and dynamically provisioned slaves, we were able to demonstrate how an existing Jenkins environment can integrate with OpenShift. The ability to leverage the elastic resources of a platform as a service, such as OpenShift, to support the continuous integration and continuous delivery of applications gives businesses additional opportunities to meet the demands of the modern world.


