KubeCon NA 2021 – Getting back to reality

Posted: October 11th, 2021 | Filed under: Technology

January 25, 2020. At the time, just another day at DevConf.cz in Brno, Czech Republic, delivering hands-on workshops for an emerging concept in the Kubernetes space: GitOps. Looking back, it would become the last in-person conference that I would have the opportunity to attend due to the COVID-19 pandemic.

Fast forward 21 months, and while the world continues to be ravaged by the pandemic, there are reasons to believe things are headed in the right direction. Case in point: KubeCon + CloudNativeCon North America 2021, where adopters and technologists from leading open source cloud native communities come together and share the latest and greatest news within this space.

This conference, like many others these days, will be available in a virtual format, but in-person participation is an option as well. After all these months, I am excited to be on site at the event, where I will once again be able to share some of my recent efforts with the community. The following are some of the areas where you can find me, either at the physical event or virtually.

OpenShift Commons Gathering – AMA Session

When: Tuesday October 12, 2021 – 2:30 PM PDT

If you or your organization uses the OpenShift Container Platform, what could be better than hearing how the community uses OpenShift to drive application deployments that deliver real business value? OpenShift Commons Gathering once again returns to KubeCon as one of the Day-0 events.

As many of you are aware, one of my key responsibilities is to help organizations achieve success by delivering solutions with OpenShift. I will be joining a group of Red Hat engineers and guest speakers for an Ask Me Anything (AMA) session on the OpenShift ecosystem during the OpenShift Commons Gathering event. This session provides an opportunity for you to ask any burning questions that you have always wanted to ask, as well as to hear our thoughts on where we all see OpenShift and Kubernetes headed in the future.

GitOpsCon North America 2021 – Securing GitOps

When: Tuesday October 12, 2021 – 3:30 PM PDT

GitOps is no longer the emerging concept in the Kubernetes space that it was back in January 2020, as evidenced by the second GitOpsCon returning to KubeCon as another Day-0 event. Adopting a GitOps based approach is fundamentally a paradigm shift in managing both applications and infrastructure for many organizations. It is important that proper security controls be enforced at each step and component involved in GitOps.

This lightning talk on Securing GitOps will highlight the key areas that anyone adopting a GitOps based approach should consider in order to implement GitOps securely. Not only will the key areas of concern be highlighted, but a set of tools will be introduced that you can take advantage of immediately.

KubeCon North America 2021 – Helm: The Charts and the Curious

When: Wednesday October 13, 2021 – 11:00 AM PDT

Helm is a package manager for Kubernetes and one of the most popular ways that applications are deployed to Kubernetes. Charts are the packaged Kubernetes manifests, and a vast ecosystem exists for building, packaging and distributing them.

This talk will focus primarily on how to accelerate and secure the packaging and distribution of Helm charts, including some of the approaches and tools that you can integrate into your Continuous Integration and Continuous Delivery process. You really do not want to miss this session, especially if you use Helm as part of your standard workflow.

Booth Duty

Aside from the formal presentations, I will also be on the expo floor working several of the booths. This affords you the opportunity to “talk shop” and experience open source and cloud native solutions in action.

Red Hat Booth

It should come as no surprise that I will be present at the Red Hat booth at various times throughout the convention. Aside from stopping by to say hi, be sure to check out the associated activities delivered by Red Hat’s best, including demos, workshops and live office hours.

More information related to Red Hat’s presence at KubeCon can be found here.

sigstore Booth

One of the open source projects that I am heavily involved in these days is sigstore, a Linux Foundation sponsored project that aims to make signing and verifying content easier. Stop by, learn, and take the sigstore tooling for a spin by signing content of your very own. Trust me, as soon as you see it, you will be hooked!

I’ll be around for the entire week, so feel free to contact me via my various social media channels (LinkedIn, Twitter, Facebook) if you are interested in chatting. For those who are not attending the in-person event in Los Angeles, I am happy to set aside time so that you do not miss out either.

This is going to be fun!


Adding Image Digest References to Your Helm Charts

Posted: September 15th, 2021 | Filed under: Technology

A container image is a foundational component of any deployable resource in Kubernetes as it represents an executable workload. However, all images are not the same. An image with the name foo deployed on cluster A may not be the same as image foo on cluster B. This is due in part to the way that images are resolved at both the Kubernetes level as well as by the underlying container runtime.

When using Helm, the package manager for Kubernetes, the majority of publicly available charts provide methods for specifying the names of these backing images. Unfortunately, these options can lull end users into a false sense of security. In this post, we will describe the potential pitfalls that can occur when specifying image references within Helm charts, along with options that both chart maintainers and consumers can employ to increase not only the security, but also the reliability of their Kubernetes applications.

What’s in a Name

All container images are given names, such as the foo image referenced previously. In addition, the hostname and any hierarchical details of the registry where the image is stored may also be provided, such as quay.io/myco, resulting in an image name of quay.io/myco/foo. If the hostname of the registry is not provided, a default is supplied by the container runtime: either a set of configured additional or mirror registries is consulted, otherwise Docker Hub is used.

The final, optional portion of an image reference is a tag, which identifies different versions of the same image. If no tag is specified, the latest tag is used by default. In Helm, a common pattern employed by chart publishers is to provide configurable values for each component of an image, typically represented by the following values:

image:
  registry: quay.io
  repository: myco/foo
  tag: 1.0.0

While this configuration appears to be sensible given the composition of images, there are additional considerations that should be taken into account.

First, Kubernetes takes different approaches when it comes to satisfying the requirement that the image be available within the local container runtime. By default, the Kubernetes Kubelet will query the container runtime for the specified image. If found, a container will be started using this image. If the image is not found, it will be retrieved by the container runtime based on the configuration in the deployable resource. End users can modify this default behavior by specifying the imagePullPolicy property of the container. The imagePullPolicy can be one of the following values:

  • IfNotPresent (default) – Pull the image from the remote source only if it is not present in the container runtime
  • Always – The Kubelet queries the remote registry to compare the digest value of the image. The image will be retrieved only if an image with the same digest is not found within the container runtime.
  • Never – Assumes the image is already available in the container runtime

Many chart maintainers provide the imagePullPolicy as a configurable value within their charts in order to allow the end user to control how images are retrieved.
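As an illustrative sketch (value names such as pullPolicy vary from chart to chart and are assumptions here), a chart template might wire both the image reference and the pull policy to user supplied values:

containers:
  - name: app
    image: "{{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    # Falls back to IfNotPresent when the user does not supply a value
    imagePullPolicy: {{ .Values.image.pullPolicy | default "IfNotPresent" }}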

There is one gotcha here. Even though the default imagePullPolicy is IfNotPresent, if the latest tag is used and no imagePullPolicy is specified, the policy defaults to Always instead. This subtle detail has been known to trip up even experienced Kubernetes users, as a different image may be retrieved compared to a previous deployment even though no changes to the Kubernetes resource were made.

So, how can we avoid this type of situation?

Referencing Images by Digest

It is important to note that an image tag functions as a dynamic pointer to a concrete reference, known as a digest. A digest is an immutable SHA-256 representation of an image and its layers. An image deployed yesterday with tag 1.0.0 may not reference the same underlying digest as it does today, which could cause adverse results depending on the contents of the updated image. Tags are provided for convenience purposes; it’s a lot easier to say “give me version 1.0.0” instead of “give me the image with reference sha256:d478cd82cb6a604e3a27383daf93637326d402570b2f3bec835d1f84c9ed0acc”. Instead of using a tag to reference an image, such as quay.io/myco/foo:1.0.0, a digest can be used by adding @sha256:<digest> in place of the colon separator and tag name, such as quay.io/myco/foo@sha256:d478cd82cb6a604e3a27383daf93637326d402570b2f3bec835d1f84c9ed0acc.
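If you need to discover the digest that a tag currently points to, a tool such as skopeo can report it. A quick sketch, assuming skopeo and jq are installed and using the hypothetical image from above:

# Query the remote registry for the digest behind the 1.0.0 tag
skopeo inspect docker://quay.io/myco/foo:1.0.0 | jq -r '.Digest'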

Referencing images by digest has a number of advantages:

  1. Avoid unexpected or undesirable image changes.
  2. Increase security and awareness by knowing the specific image running in your environment.

The last point is increasingly important as more and more organizations look to tighten their grip on the software that is deployed. When combined with concepts like a Software Bill of Materials (SBOM), it is crucial that the exact image that is defined matches the running image.

Supporting Image Digest in Helm Charts

Given that referencing a container image by digest merely involves a change in a Helm template, it should be fairly easy to implement. The primary challenge with this refactoring effort is the way the image itself is referenced. When using a Values structure for an image similar to the example provided previously, an image within a template file could be represented by the following:

image: "{{ .Values.image.registry}}/{{ .Values.image.repository }}:{{ .Values.image.tag }}"

The colon separator prior to the image tag is the primary reason why using image digests is a challenge in most publicly available Helm charts, since an image digest reference uses a @ after the name of the image instead of a :.

Note: It is possible to include a digest with the above format, as @sha256:<digest> can be suffixed to the name of the image. However, this approach is not recommended as it may affect other portions of the chart that rely on the name of the image.

Freedom of choice is the name of the game, and chart maintainers (particularly those that support publicly available charts) should provide suitable options for consumers to choose from. While referencing images by digest does have many benefits as described previously, there is still a need to support those that may want the convenience of referencing an image by tag. 

To satisfy both user personas, we can make use of a Helm Named Template to produce the desired image reference based on user input. A Named Template in Helm is a piece of reusable code that can be referenced throughout a chart.

First, let’s spend a moment thinking about how users should be able to specify the image so that the image reference can be correctly produced. Currently, as illustrated previously, the image dictionary accepts the name of the registry, repository and tag. Since the goal of this effort is to support either a tag or a digest, let’s change the name of the tag property to version:

image:
  registry: <registry>
  repository: <repository>
  version: <version>

Now either a tag (such as 1.0.0) or a digest (such as sha256:d478cd82cb6a604e3a27383daf93637326d402570b2f3bec835d1f84c9ed0acc) can be specified in the version property.

The next step is to create the Named Template that will produce the correctly formatted image reference. Named Templates are conventionally defined within the templates/_helpers.tpl file (any file whose name begins with an underscore can be used to store Named Templates, as it is convention in Helm that these files do not contain Kubernetes manifests). The key to properly formatting the image reference is being able to differentiate when the input is a tag versus a digest. Fortunately, since all image digests begin with sha256:, logic can be employed to apply the appropriate format when this situation is detected. The result is a Named Template similar to the following:

{{/*
Create the image path for the passed in image field
*/}}
{{- define "mychart.image" -}}
{{- if eq (substr 0 7 .version) "sha256:" -}}
{{- printf "%s/%s@%s" .registry .repository .version -}}
{{- else -}}
{{- printf "%s/%s:%s" .registry .repository .version -}}
{{- end -}}
{{- end -}}

This Named Template, called mychart.image, first determines whether the first 7 characters of the version property are sha256: using the Sprig substr function, which would indicate an image digest reference. If so, a correctly formatted image reference is produced with the appropriate @ separator between the registry/repository and the digest. Otherwise, an image reference making use of a tag is produced.

The final step is to include the mychart.image Named Template within a Kubernetes template manifest. This is achieved by using the template function and providing both the name of the Named Template and the dictionary containing the image from the Values file.

image: "{{ template "mychart.image" .Values.image }}"

Now, specifying either the tag or digest in the version property within a Values file as shown below will result in a properly formatted image reference.

Use of a tag:

image:
  registry: quay.io
  repository: myco/foo
  version: 1.0.0

Result: quay.io/myco/foo:1.0.0

Use of a digest:

image:
  registry: quay.io
  repository: myco/foo
  version: sha256:d478cd82cb6a604e3a27383daf93637326d402570b2f3bec835d1f84c9ed0acc

Result: quay.io/myco/foo@sha256:d478cd82cb6a604e3a27383daf93637326d402570b2f3bec835d1f84c9ed0acc

By implementing this type of capability, chart producers give consumers the flexibility to determine how images within charts should be rendered. The use of image digests has many benefits, including security and an increased level of assurance of the content that is operating within a Kubernetes environment. It is my hope that these types of patterns continue to proliferate within the Helm community.


Rotating the OpenShift kubeadmin Password

Posted: July 15th, 2021 | Filed under: Technology

OpenShift includes the capabilities to integrate with a variety of identity providers to enable the authentication of users accessing the platform. When an OpenShift cluster is installed, a default kubeadmin administrator user is provided which enables access to complete some of the initial configuration, such as setting up identity providers and bootstrapping the cluster.

While steps are available to remove the kubeadmin user from OpenShift, there may be a desire to retain the account long term as one of the “break glass” methods for gaining elevated access to the cluster (another being the kubeconfig file that is also provided at installation time and uses certificate based authentication).

In many organizations, policies are in place that require accounts with passwords associated with them to be rotated on a periodic basis. Given that the kubeadmin account provides privileged access to an OpenShift environment, it is important that options be available to not only provide additional security measures for protecting the integrity of the account, but to also comply with organizational policies.

The kubeadmin password consists of four sets of five characters separated by dashes (xxxxx-xxxxx-xxxxx-xxxxx); it is generated by the OpenShift installer and stored in a secret called kubeadmin in the kube-system namespace. If you query the content stored within the secret, you will find a hashed value instead of the password itself.

oc extract -n kube-system secret/kubeadmin --to=-

# kubeadmin
$2a$10$QyUIC9VCglBZw4/pcbjZK.vVo4neHYLrl5uJgd9la36uGF6hgN1IW

To properly rotate the kubeadmin password, a new password must be generated in a format that aligns with OpenShift’s standard kubeadmin password format followed by a hashing function being applied so that it can be stored within the platform.

There are a variety of methods by which a password representative of the kubeadmin user can be generated. However, it only made sense to create a program that aligns with the functions and libraries present in the OpenShift installation binary. The following Go program generates not only the password and its hash, but also, as a convenience, the base64-encoded value that should be updated in the secret.

package main

import (
	"fmt"
	"crypto/rand"
	"golang.org/x/crypto/bcrypt"
	b64 "encoding/base64"
	"math/big"
)

// generateRandomPasswordHash generates a hash of a random ASCII password
// 5char-5char-5char-5char
func generateRandomPasswordHash(length int) (string, string, error) {
	const (
		lowerLetters = "abcdefghijkmnopqrstuvwxyz"
		upperLetters = "ABCDEFGHIJKLMNPQRSTUVWXYZ"
		digits       = "23456789"
		all          = lowerLetters + upperLetters + digits
	)
	var password string
	for i := 0; i < length; i++ {
		n, err := rand.Int(rand.Reader, big.NewInt(int64(len(all))))
		if err != nil {
			return "", "", err
		}
		newchar := string(all[n.Int64()])
		if password == "" {
			password = newchar
		}
		if i < length-1 {
			n, err = rand.Int(rand.Reader, big.NewInt(int64(len(password)+1)))
			if err != nil {
				return "", "",err
			}
			j := n.Int64()
			password = password[0:j] + newchar + password[j:]
		}
	}
	pw := []rune(password)
	for _, replace := range []int{5, 11, 17} {
		pw[replace] = '-'
	}
	
	bytes, err := bcrypt.GenerateFromPassword([]byte(string(pw)), bcrypt.DefaultCost)
	if err != nil {
		return "", "",err
	}

	return string(pw), string(bytes), nil
}

func main() {
	password, hash, err := generateRandomPasswordHash(23)
	if err != nil {
		fmt.Println(err.Error())
		return
	}
	fmt.Printf("Actual Password: %s\n", password)
	fmt.Printf("Hashed Password: %s\n", hash)
	fmt.Printf("Data to Change in Secret: %s", b64.StdEncoding.EncodeToString([]byte(hash)))
}

If you do not have the Go programming language installed on your machine, you can use the following link to interact with the program on the Go Playground.

https://play.golang.org/p/D8c4P90x5du

Hit the Run button to execute the program and a response similar to the following will be provided:

Actual Password: WbRso-QnRdn-6uE3e-x2reD
Hashed Password: $2a$10$sNtIgflx/nQyV51IXMuY7OtyGMIyTZpGROBN70vJZ4AoS.eau63VG
Data to Change in Secret: JDJhJDEwJHNOdElnZmx4L25ReVY1MUlYTXVZN090eUdNSXlUWnBHUk9CTjcwdkpaNEFvUy5lYXU2M1ZH

As you can see, the program provides the plaintext value that you can use to authenticate as the kubeadmin user, the hashed value that should be stored in the secret within the kube-system namespace, and the base64-encoded value that can be substituted into the secret.

To update the value of the kubeadmin password, execute the following command and replace the SECRET_DATA text with the value provided next to the “Data to Change in Secret” from the program execution above.

kubectl patch secret -n kube-system kubeadmin --type json -p '[{"op": "replace", "path": "/data/kubeadmin", "value": "SECRET_DATA"}]'

Once the password has been updated, all active OAuth tokens and any associated sessions will be invalidated and you will be required to reauthenticate. Confirm that you are able to login to either the OpenShift CLI or web console using the plain text password provided above.
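For example, using the CLI (a sketch; the API server URL is a placeholder for your own cluster, and the password is the generated value from above):

oc login -u kubeadmin -p 'WbRso-QnRdn-6uE3e-x2reD' https://api.cluster.example.com:6443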

It really is that easy to manage the password associated with the kubeadmin user. The ability to rotate the password as desired allows for compliance with most organizational password policies. Keep in mind that the secret containing the kubeadmin password can always be removed, thus eliminating this method of authenticating into the cluster. The generated kubeconfig file provided at install time can be used as a method of last resort for accessing an OpenShift environment if a need arises.
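If a decision is ever made to remove the account entirely, deleting the secret is a single, irreversible command:

oc delete secret kubeadmin -n kube-system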


Helm Subchart Pattern Using Git Submodules

Posted: April 26th, 2021 | Filed under: Technology

Helm has become the de facto tool for packaging and deploying applications in a Kubernetes environment, not strictly due to its ease of use, but also because of its versatility. What once was a complex process for managing applications can now be facilitated with ease. In more complex deployments, an application may have one or more components that it relies on for normal operation, such as a database for a front end application. Helm uses the concept of dependencies to define a relationship between the current chart and any other chart that must be deployed in order for a release to be deemed both complete and successful. In most cases, dependencies are sourced from Helm repositories, where a chart has been packaged and served from an external web server. An example of how dependencies can be defined in a Chart.yaml file can be found below:

dependencies:
  - name: database
    repository: https://mychartrepository.example.com
    version: 1.0.0

However, another approach is to source dependent charts from the local file system. This method has several advantages, including avoiding a reliance on any external resource (the chart repository), as well as the ability to test dependent charts that are still in development before formal packaging is complete.

Instead of specifying the location of the remote chart repository server using the http(s) protocol, the file protocol can be used instead:

dependencies:
  - name: database
    repository: file://./<path_to_chart>
    version: 1.0.0

The process of installing a chart containing dependencies is the same regardless of whether they are sourced from a remote repository or the local file system. Dependent charts are referenced from the charts directory at install time, and the helm dependency subcommand can be used to build or update the contents of this directory, as shown below.
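For reference, the two relevant subcommands, run from the root directory of the chart, are:

# Rebuild the charts/ directory based on the Chart.lock file
helm dependency build

# Re-resolve dependencies from Chart.yaml and refresh Chart.lock
helm dependency update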

While the file system approach for managing dependent charts sounds appealing, it does introduce challenges when it is time to version the chart in a source code management tool, such as a Git repository. Do you want to include the entire contents of each dependent chart in your repository? As your Git repository evolves with the content of your own chart (or others), including the contents from other dependencies within the same repository may cause unwanted and excessive bloat. As most charts that would be consumed as dependencies are stored in their own Git repositories, an alternate method for sourcing dependent charts is to reference them from their own repositories using Git submodules. A submodule allows a separate Git repository to be embedded within another repository. One of the benefits of this approach is that only a reference to the associated repository is tracked instead of the entire contents. In addition, since the repository referenced in the submodule is pinned to the fixed SHA of a given commit, it is akin to the tag commonly associated with a chart packaged in a Helm repository. This ensures that the contents used today will be the same in the future.
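For illustration, registering another chart repository as a submodule is a single command; a sketch using the repository referenced later in this post might look like the following:

git submodule add https://github.com/redhat-developer/redhat-helm-charts dependencies/redhat-helm-charts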

Dependencies also enable another approach used in the Helm ecosystem called subcharts. A subchart is a mechanism for providing inheritance to one or more charts, as the parent of the associated subchart(s) can provide overriding values along with additional content (templates) of its own. To demonstrate an end to end overview of using Git submodules in Helm, let’s walk through an example to highlight how this approach can be accomplished.

The following repository contains the associated resources:

https://github.com/sabre1041/helm-dependency-submodules

By using the subchart approach, we are going to make use of the Quarkus Helm chart found within the Red Hat Developers Helm Chart repository as the subchart and specify overriding values within our parent chart.

Installing the Helm Chart

Before beginning, ensure that you have the following prerequisites satisfied:

  • Git
  • Helm CLI
  • Kubernetes CLI

First, clone the repository to your local machine and change into the repository directory:

git clone https://github.com/sabre1041/helm-dependency-submodules.git
cd helm-dependency-submodules

This chart provides many of the common attributes that you would find in any other chart, including values.yaml and Chart.yaml files.

The Git submodule is located in the dependencies directory and is named redhat-helm-charts. However, if you list the contents of this directory, it will be empty:

ls -l dependencies/redhat-helm-charts/

This is due to the fact that, by default, submodules are not initialized or updated to bring in the contents of the referenced repository when a repository is cloned. To pull in the content of the submodule, initialize and update it:

Initialize the submodule

git submodule init

Submodule 'dependencies/redhat-helm-charts' (https://github.com/redhat-developer/redhat-helm-charts) registered for path 'dependencies/redhat-helm-charts'

Update the submodule

git submodule update

Cloning into '<base_directory>/helm-dependency-submodules/dependencies/redhat-helm-charts'...
Submodule path 'dependencies/redhat-helm-charts': checked out '47ae04c40a4e75b33ad6a2ae84b09a173f739781'

If you inspect the contents of the Chart.yaml file, you will note the dependency referencing the Quarkus Helm chart within the submodule path:

dependencies:
- name: quarkus
  version: 0.0.3
  repository: file://./dependencies/redhat-helm-charts/alpha/quarkus-chart

Use the helm dependency update subcommand to package the dependent chart into the charts directory:

helm dependency update

Saving 1 charts
Deleting outdated charts

Now, install the chart to your Kubernetes environment into a new namespace called helm-dependency-submodules:

helm upgrade -i -n helm-dependency-submodules --create-namespace helm-dependency-submodules .

Note: By default, the Quarkus Helm chart assumes a deployment to an OpenShift environment and therefore creates a Route resource. To skip the creation of the Route, pass the --set quarkus.deploy.route.enabled=false flag to the helm upgrade command.

A new namespace called helm-dependency-submodules will be created if it did not exist previously and the Quarkus application will be deployed. If running in an OpenShift environment, a new Route will be created exposing the application. Execute the following command to obtain the URL of the application:

kubectl get routes -n helm-dependency-submodules helm-dependency-submodules -o jsonpath='https://{ .spec.host }'

Finally, uninstall the application by using the helm uninstall command as shown below:

helm uninstall -n helm-dependency-submodules helm-dependency-submodules

GitOps Support Using Argo CD

More and more organizations are adopting GitOps as a mechanism for managing applications. Argo CD is one such tool that implements GitOps principles, and it provides support not only for Helm charts, but also for submodules found within Git repositories. Using the Git repository covered in the last section, let’s describe how Argo CD can facilitate deployment of the Quarkus application within the Helm chart to the Kubernetes cluster.

First, deploy an instance of Argo CD to the environment and ensure that the Argo CD controller has the necessary permissions to create a namespace and resources in a project called helm-dependency-submodules-argocd. There are multiple ways that Argo CD can be deployed, including the community based operator, OpenShift GitOps when operating in an OpenShift environment, as well as static manifest files.

Once Argo CD has been deployed and is operational, create the Application using the manifest found in the argocd directory of the git repository.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: quarkus
spec:
  destination:
    namespace: helm-dependency-submodules-argocd
    server: https://kubernetes.default.svc
  project: default
  source:
    path: .
    repoURL: https://github.com/sabre1041/helm-dependency-submodules
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: false
      selfHeal: true
    syncOptions:
    - CreateNamespace=true

Argo CD will take care of all the heavy lifting that we previously had to perform manually, including:

  • Cloning the repository
  • Initializing and updating the submodule
  • Updating the chart dependencies
  • Installing the chart

Navigating to the quarkus application in the Argo CD user interface will illustrate the resources that were deployed and synchronized to the newly created helm-dependency-submodules-argocd namespace.

Application deployed using Argo CD

When changes are made to the repository, Argo CD will pick up the modifications and apply them based on the settings of the application. 

Whether using the Helm CLI or a GitOps based tool like Argo CD as described in this discussion, Git submodules provide yet another approach by which Helm can streamline the deployment and management of applications in a Kubernetes environment.


Kubernetes API Event Driven Triggering of Tekton Pipelines

Posted: March 30th, 2021 | Filed under: Technology

The world of cloud native architectures has enabled solutions to be delivered faster than ever before. Even as the overall time to delivery has been reduced, one area that continues to be of utmost importance is understanding the contents of the software package at each level. In a containerized environment, this includes everything from the base operating system and core libraries up to the application itself. The assurance that each layer meets an agreed upon standard prior to being deployed into a production environment is known as the Trusted Software Supply Chain (TSSC). This assembly line approach to software delivery typically makes use of Continuous Integration and Continuous Delivery (CI/CD) tooling in a series of processes, modeled as a pipeline of steps, that every application must undergo.

Given that there are various layers to a containerized application, when a change occurs at any layer, the entire validation process must be repeated to ensure that the application continues to meet the expected level of compliance. In a cloud native world, this can occur frequently, and it is important that the revalidation process begins as soon as a change is detected to mitigate any potential vulnerability. The primary questions to ask are how to detect when a change occurs and what should be done in response. The answer depends first on the source. If source code changes, an event can be triggered from the source code management (SCM) system. Similar actions can be undertaken when a container image is pushed to an image registry. The event can then trigger some form of action or remediation.

Within the context of the Trusted Software Supply Chain, the trigger will typically invoke a pipeline along with a series of steps. This pipeline execution engine can take many forms, but Tekton is one such tool that has been gaining popularity in the Kubernetes ecosystem as it makes use of cloud native principles, such as declarative configurations (Custom Resources) and a distributed execution model. In the scenario where an image is pushed to a container registry, a triggering action would invoke a Tekton pipeline that includes a series of Tasks (steps), such as retrieving the image, scanning, and validating the contents.

Triggering a Tekton Pipeline when an Image is pushed

In OpenShift, one of the key features of the platform is that it contains an image registry along with an entire ecosystem of resources to aid in the life cycle of images (Images and ImageStreams, along with several virtual resources including ImageStreamTags). As container images are either pushed to the integrated registry or referenced in an ImageStream, the metadata contained within these resources is updated to reflect the latest values. These changes, as with any change to a resource in OpenShift, can be monitored (one of the key principles of Operators) to invoke other actions in response. Such an action could be the triggering of a Tekton pipeline. The remainder of this article will describe how Tekton pipelines can be triggered based on changes to resources in OpenShift using cloud native event patterns.

There are many components encompassing the Tekton project. While the primary feature set focuses on the pipeline itself, another subproject, called triggers, provides the functionality for detecting events and extracting information from them in order to execute pipelines. The combination of the pipelines and triggers subprojects is only half of the overall solution. Another component must provide the capability to not only monitor when changes occur within OpenShift, but also send events in response. The missing piece of the puzzle is Knative. Knative is a platform for building serverless applications and, similar to Tekton, its full feature set is broken down into several subprojects. Knative Serving is responsible for managing serverless applications, while Knative Eventing provides support for passing events from producers to consumers. Knative Eventing provides the desired capability of sending events based on actions from the Kubernetes API, which are then consumed by Tekton triggers to start pipelines. The diagram below provides an overview of the end state architecture.

High level architecture
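To make the eventing half more concrete, the following is a minimal sketch (not the exact manifest from the repository referenced below; the names imagestream-source, imagestream-watcher and el-image-listener are hypothetical) of a Knative ApiServerSource that watches ImageStream resources and forwards the resulting events to a Tekton EventListener service:

apiVersion: sources.knative.dev/v1
kind: ApiServerSource
metadata:
  name: imagestream-source
  namespace: image-watcher
spec:
  # Service account with RBAC permissions to watch ImageStreams
  serviceAccountName: imagestream-watcher
  # Send the full resource in the event payload
  mode: Resource
  resources:
    - apiVersion: image.openshift.io/v1
      kind: ImageStream
  sink:
    ref:
      apiVersion: v1
      kind: Service
      name: el-image-listener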

To begin, you will need access to an OpenShift environment with cluster administrator rights.

Note: While the use case described in this implementation is specific to OpenShift, the concepts are applicable to any Kubernetes environment with the associated projects deployed.

Then, you will need to install two operators from OperatorHub: OpenShift Pipelines, which provides support for Tekton, and OpenShift Serverless, which provides support for Knative.

Navigate to the OpenShift Web Console and select Operators and then OperatorHub.

In the textbox, search for OpenShift Pipelines and then click Install. A page will be presented with the available channels and approval modes. Click Install again to install the operator.

Next, install OpenShift Serverless. Once again, select OperatorHub and then search for OpenShift Serverless. Click Install and on the subsequent page, select the channel matching your version of OpenShift and then click Install.

Once the operator has been installed, one additional step needs to be completed. Recall that Knative features several subprojects: Serving and Eventing. By default, when the operator is deployed, neither subproject is installed. Given that only Knative Eventing will be used in this scenario, it will be the only subproject installed. Under the Operators section of the left hand navigation pane, select Installed Operators. At the top of the page, you will see a dropdown indicating the name of the current OpenShift project that you are in. Select the dropdown and choose knative-eventing. Even though the subproject is not installed, OperatorHub will still create the project at deployment time.

With the Installed Operators page still loaded, select the OpenShift Serverless operator. On the operator page, select the Knative Eventing tab. Click the Create KnativeEventing button and then click Create. The OpenShift Serverless operator will then deploy and configure Knative Eventing in the cluster.

With the necessary cluster prerequisites complete, clone the git repository containing the OpenShift manifests as they will be referenced in each of the following sections:

git clone https://github.com/sabre1041/image-trigger-tekton
cd image-trigger-tekton

Next, create a new namespace called image-watcher where we will be able to deploy our resources:

kubectl create namespace image-watcher
kubectl config set-context --current --namespace=image-watcher

Now, let’s move on to the first portion of the solution: the Tekton based pipeline.


Argo CD Namespace Isolation

Posted: December 20th, 2020 | Filed under: Technology

GitOps, the process of declaring the state of resources in a Git repository, has become synonymous with managing Kubernetes, and one of the most popular GitOps tools is Argo CD. The ability to drastically reduce the time and effort required to manage cluster configuration and associated applications has further accelerated the adoption of Kubernetes. However, as Kubernetes becomes more commonplace, there is a need to segregate the levels of access granted to users and tools to enable the proliferation of the technology.

In many enterprise organizations and managed services offerings, multi-tenancy is the norm and access is restricted for the types of operations that can be performed. This poses a challenge for Argo CD, which by default, manages resources at a cluster scope, meaning that it will attempt to perform operations across all namespaces, effectively breaking multi-tenancy. Contributors to the Argo CD project realized this concern early on and actually added support for namespace isolation back in version 1.4. Unfortunately, the namespace isolation feature in Argo CD is poorly documented, with most end users being unaware of such functionality. This article will illustrate the namespace isolation feature of Argo CD, how it can be used, as well as some of the limitations that currently exist.

Argo CD can be deployed to a Kubernetes environment in several ways. The only method that currently supports namespace isolation is the use of raw manifests, and a separate manifest for namespace isolation has been included with each Argo CD release since version 1.4 (you can find the manifests on the releases page of Argo CD; the file is named namespace-install.yaml instead of install.yaml for both the standard and highly available deployments).

The typical deployment of Argo CD creates two ClusterRoles:

  • Argo CD server – to provide the necessary level of access for resources that are made available through the browser, such as viewing logs from pods or events within namespaces.
  • Argo CD application controller – Full, unrestricted access to manage resources in a cluster as declared by the manifests from the Git repository

Any unprivileged user would be unable to successfully apply these cluster scoped resources, which is what necessitated the creation of a separate set of manifests. When using the set of manifests that supports namespace isolation, instead of ClusterRoles being created at the cluster scope, Roles and associated RoleBindings are created in the namespace where Argo CD is deployed. In addition, the Argo CD controller is granted access to only a limited set of resources instead of full access. The process by which Argo CD can apply and manage the resources that are declared in Git repositories will be described later on.

Deploying Argo CD in Namespace Isolation Mode

To demonstrate how the namespace isolation feature of Argo CD can be used, an OpenShift Container Platform environment will be used (any Kubernetes environment will work, however there are several considerations that need to be made when running in OpenShift).

First, obtain access to an OpenShift environment and create a new project called argocd which will be where the set of Argo CD resources will be deployed:

$ oc new-project argocd

Apply the namespace isolation manifest

$ oc apply -f https://raw.githubusercontent.com/argoproj/argo-cd/v1.7.8/manifests/namespace-install.yaml

For this demonstration, version 1.7.8 was used. Feel free to replace with a version of your choosing.
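To see the namespaced Roles and RoleBindings that were created in lieu of ClusterRoles, you can inspect the argocd namespace:

$ oc get roles,rolebindings -n argocd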

After applying the manifests, the resources will be deployed. You may notice that the Deployment for Redis is not running. As of version 1.7, the Redis deployment includes considerations for ensuring that the container does not run as the root user. The configuration in the pod securityContext conflicts with the standard security mechanisms employed in OpenShift through the use of Security Context Constraints (SCCs). Given that OpenShift already enforces that all pods run as a non-root user with a randomly generated ID by default, the value in the securityContext field can be safely removed.

Execute the following command to patch the Deployment and remove the field:

$ oc patch deployment argocd-redis -p '{"spec": {"template": {"spec": {"securityContext": null}}}}'

The Redis pod will now start since the conflicting configuration has been removed.

The final step is to expose the Argo CD server service as a Route. Execute the following command to create a new Route for the Argo CD server:

$ oc create route passthrough argocd --service=argocd-server --port=https --insecure-policy=Redirect

The hostname for the route created can be found by executing the following command:

$ oc get route argocd -o jsonpath='{ .spec.host }'

Argo CD supports several methods for securing access to the server, including SSO. The most straightforward is to use the out of the box integrated authentication provider. By default, the admin password is set to the name of the Argo CD server pod the first time the server starts.

The Argo CD CLI can be used to change the admin password so that if the server pod restarts, the password will not be lost.

Login to the Argo CD CLI:

$ argocd --insecure --grpc-web login "$(oc get routes argocd -o jsonpath='{ .spec.host }')":443 --username "admin" --password "$(oc get pod -l app.kubernetes.io/name=argocd-server -o jsonpath='{.items[*].metadata.name}')"

Set the admin password for Argo CD to be “password” by executing the following command:

$ argocd account update-password --current-password=$(oc get pod -l app.kubernetes.io/name=argocd-server -o jsonpath='{.items[*].metadata.name}') --new-password=password

With the default password changed, launch a web browser and navigate to the url of the route discovered previously. Enter the admin username and password to access the console.

Namespace Isolation

Clusters define the Kubernetes environments to which resources will be deployed. A cluster can be either the environment Argo CD is deployed on or a remote instance. When Argo CD is first deployed, a single local cluster is created called in-cluster, which references the local environment on which Argo CD is running and communicates via the internal Kubernetes service (https://kubernetes.default.svc). If we were to create an application that attempted to manipulate cluster level resources, the process would fail as Argo CD does not have the necessary permissions. As described previously, Argo CD uses the argocd-application-controller service account to manage resources, and in a typical deployment this service account has a ClusterRoleBinding against a ClusterRole with unrestricted permissions. In a namespaced deployment of Argo CD, this level of permission does not exist and the service account is only granted a limited level of access to manage Argo CD related resources and internal functions.

For Argo CD to be able to function as desired, access to namespaces must be explicitly granted. This process requires the use of the Argo CD CLI and the argocd cluster add subcommand to specify the namespaces that should be granted access to manage.

Create a namespace called argocd-managed that we will be able to test against:

$ oc new-project argocd-managed --skip-config-write

The --skip-config-write option was specified to avoid changing into the newly created project since the majority of our actions will remain in the argocd project.

To grant Argo CD access to manage resources in the argocd-managed project, add a new cluster called “argocd-managed” using the following command:

$ argocd cluster add $(oc config current-context) --name=argocd-managed --in-cluster --system-namespace=argocd --namespace=argocd-managed

You may have noticed a few interesting options in the above command:

  • --name – Friendly name of the cluster
  • --in-cluster – Specifies that the internal Kubernetes service should be used to communicate with the OpenShift API.
  • --system-namespace – Configurations for clusters managed by Argo CD are typically written to a secret in the kube-system namespace. As the kube-system namespace requires elevated access, the argocd namespace in which Argo CD is deployed will be used instead.
  • --namespace – Namespace that Argo CD should be granted access to manage. The namespace parameter can be repeated in the argocd cluster add command to manage multiple namespaces.

The command will then return the following result:

INFO[0002] ServiceAccount "argocd-manager" created in namespace "argocd"
INFO[0002] Role "argocd-managed/argocd-manager-role" created
INFO[0003] RoleBinding "argocd-managed/argocd-manager-role-binding" created
Cluster 'https://kubernetes.default.svc' added

A new service account called argocd-manager is created in the argocd namespace along with a role and rolebinding in the targeted namespace that grants the argocd-manager service account unrestricted privileges.

The details for the cluster are written to a secret in the argocd namespace and contain the following key properties:

  • name – Friendly name for the cluster
  • server – Hostname for the cluster
  • config – json data structure describing how to communicate with the cluster

The full list of properties can be found here.

For the cluster that was previously added, the following is the decoded content of the secret:

config: '{"bearerToken":"<TOKEN>","tlsClientConfig":{"insecure":true}}'
name: argocd-managed
namespaces: argocd-managed
server: https://kubernetes.default.svc

The bearerToken defined in the cluster config is associated with the newly created argocd-manager service account, which was granted access to the argocd-managed namespace. The namespaces field is a comma separated list of namespaces against which Argo CD can manage resources.

Let’s demonstrate that Argo CD can be used to deploy resources against the argocd-managed namespace and validate namespace isolation.

Using the Argo CD CLI, create a new application called nexus to deploy a Sonatype Nexus instance:

$ argocd app create nexus --repo=https://github.com/redhat-canada-gitops/catalog --path=nexus2/base --dest-server=https://kubernetes.default.svc --dest-namespace=argocd-managed --sync-policy=auto

You can verify the application in the Argo CD web console using the route, username and password that was previously created.

By selecting the nexus application, you will be presented with a depiction similar to the following indicating Argo CD was successfully configured for namespace isolation:

Note: You may ignore the “OutOfSync” message, as it indicates that the live OpenShift Route for Nexus within the cluster contains differences from the declared manifest. These types of situations are managed by customizing how differences are calculated.

Validating Namespace Isolation Enforcement

The enforcement of namespace isolation can be validated using multiple approaches. First, when configured in namespace isolation mode, Argo CD will forbid the management of resources in any namespace not present in the namespaces field of the cluster configuration. Otherwise, standard Kubernetes RBAC will forbid the argocd-application-controller service account from managing resources in a namespace it cannot access.

Let’s validate this assessment by creating a new namespace called argocd-not-managed and attempting to deploy the same nexus application.

First, create the new project:

$ oc new-project argocd-not-managed --skip-config-write

Next, create an application called argocd-not-managed in the argocd-not-managed namespace

$ argocd app create nexus-not-managed --repo=https://github.com/redhat-canada-gitops/catalog --path=nexus2/base --dest-server=https://kubernetes.default.svc --dest-namespace=argocd-not-managed --sync-policy=auto

Verify the application was not successfully deployed either in the Argo CD web console or on the command line by executing the following command:

$ argocd app get nexus-not-managed

Name: nexus-not-managed
Project: default
Server: https://kubernetes.default.svc
Namespace: argocd-not-managed
Repo: https://github.com/redhat-canada-gitops/catalog
Target:
Path: nexus2/base
SyncWindow: Sync Allowed
Sync Policy: Automated
Sync Status: Unknown (5978975)
Health Status: Missing

CONDITION MESSAGE LAST TRANSITION
ComparisonError Namespace "argocd-not-managed" for Service "nexus" is not managed 2020-11-15 23:12:28 -0600 CST

GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
PersistentVolumeClaim argocd-not-managed nexus Unknown Missing
Service argocd-not-managed nexus Unknown Missing
apps.openshift.io DeploymentConfig argocd-not-managed nexus Unknown Missing
image.openshift.io ImageStream argocd-not-managed nexus Unknown Missing
route.openshift.io Route argocd-not-managed nexus Unknown Missing

Notice how, under the condition, it explains that the application cannot be deployed to the argocd-not-managed namespace as it is not managed in this cluster, thus validating that namespace isolation is functioning as expected.

Namespace Isolation Automation

The primary goal of Argo CD is to apply resources that are expressed in a declarative manner. The Argo CD server itself embraces declarative configuration through the use of Custom Resource Definitions, Secrets and ConfigMaps. Given that the argocd cluster add command creates a series of resources itself, we can avoid having to use the Argo CD CLI to manage cluster configuration by specifying those resources in a declarative fashion.

Let’s automate the steps that the argocd cluster add command performs. Recall, the command added a Service Account, Role, RoleBinding and Secret.

Note: It is best to have a fresh environment of Argo CD available to work through these steps. To reuse the existing environment, execute the following command which should reset the environment to a semi-clean state.

$ argocd app delete nexus
$ argocd app delete nexus-not-managed
$ oc delete role argocd-manager-role -n argocd-managed
$ oc delete rolebinding argocd-manager-role-binding -n argocd-managed
$ oc delete sa argocd-manager -n argocd
$ oc delete secret -n argocd -l=argocd.argoproj.io/secret-type=cluster

First, create a service account called argocd-manager in the argocd namespace:

$ oc -n argocd create sa argocd-manager

Next, create a Role called argocd-manager-role with unrestricted access in the argocd-managed project:

$ oc create role -n argocd-managed argocd-manager-role --resource='*.*' --verb='*'

Now, create a rolebinding to bind the newly created role to the service account previously created:

$ oc create rolebinding argocd-manager-role-binding -n argocd-managed --role=argocd-manager-role --serviceaccount=argocd:argocd-manager

Finally, the cluster secret can be created. Execute the following command to create the secret which will contain the bearer token for the argocd-manager service account and the namespace that the cluster will manage (among a few others).

oc -n argocd create -f - << EOF
apiVersion: v1
stringData:
  config: '{"bearerToken":"$(oc serviceaccounts get-token argocd-manager)","tlsClientConfig":{"insecure":true}}'
  name: argocd-managed
  namespaces: argocd-managed
  server: https://kubernetes.default.svc
kind: Secret
metadata:
  annotations:
    managed-by: argocd.argoproj.io
  labels:
    argocd.argoproj.io/secret-type: cluster
  name: cluster-kubernetes.default.svc-argocd-managed
type: Opaque
EOF

Notice how the secret created above contains the label argocd.argoproj.io/secret-type: cluster. Any secret with this label will be interpreted by Argo CD as a cluster secret.

At this point, Argo CD has been set up in the same manner as the CLI. This type of configuration affords greater flexibility and avoids needing to use the Argo CD CLI to perform common and repeatable configurations. Feel free to repeat the application creation and deployment as described previously to confirm a successful synchronization of resources into the cluster.

Additional Forms of Restricting Access

Aside from using namespaces and clusters to limit access to where resources can be deployed, Argo CD does have other constructs available for supporting multi-tenancy. Projects allow for a logical grouping of applications and policies within Argo CD and can either supplement or act as a replacement for the namespace isolation feature.

For example, there may be a need for a single Argo CD instance to be deployed with access to manage cluster level resources instead of separate instances, but still provide some form of isolation between teams. By using a combination of Argo CD projects and RBAC, this can be achieved.

Projects provide the capability to limit the source repositories containing content (Git), the clusters to which resources can be deployed, the namespaces, and the types of resources that can be deployed in a whitelist/blacklist fashion, both at a cluster and namespace scope. Finally, RBAC policies, through the use of group association, can be applied to determine the rights that users have against projects.
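As a rough sketch (the project name, repositories and policies below are hypothetical), an AppProject expressing these kinds of restrictions might look like the following:

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
  namespace: argocd
spec:
  description: Applications managed by team A
  # Limit the Git repositories that applications may source content from
  sourceRepos:
    - https://github.com/team-a/*
  # Limit the clusters and namespaces that resources may be deployed to
  destinations:
    - server: https://kubernetes.default.svc
      namespace: team-a-*
  # Deny all cluster scoped resources
  clusterResourceWhitelist: []
  # Deny specific namespaced resources
  namespaceResourceBlacklist:
    - group: ""
      kind: ResourceQuota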

While projects do provide a finer grained access and configuration model, additional work is required to achieve the desired rights granted to users. Since Argo CD in this model is deployed with rights to manage resources at a cluster level, it is imperative that proper considerations be made to protect the integrity of the cluster as well as to restrict the level of access that can be achieved by various tenants.

Limitations of Argo CD Namespace Isolation

While the namespace isolation feature in Argo CD does provide a path towards supporting true multi-tenancy, there are still additional hurdles that must be overcome (as of version 1.7.8) before it can be achieved. An Argo CD cluster configuration provides a method for specifying the Kubernetes cluster URL, the credentials used to communicate with the cluster, and the namespaces that resources can be deployed to. However, regardless of the number of cluster configurations made against a single cluster, only one can be active at a time. This gap prevents using the namespace isolation feature to give two separate teams that manage different namespaces the ability to easily manage their own set of resources, without knowledge of each other, from a single namespace scoped deployment of Argo CD.

The other limitation, as described near the beginning of the article, is the lack of documentation around the support for namespace isolation. It may be that you, the reader, are just now learning about this feature. If there were more awareness of this type of functionality, existing issues could be resolved and new features could be developed to expand its potential capabilities.

The creators and community surrounding Argo CD realize that multi-tenant support is important for broader adoption of the tool into enterprise organizations and those with a high security posture. The namespace isolation feature is a great first step, but additional work still needs to be achieved. For now, the recommended approach is to deploy separate namespace scoped instances of Argo CD for teams that do not require access to cluster scoped resources and are looking to leverage a specific set of namespaces. Fortunately, given that Argo CD emphasizes declarative configuration, the implementation can be easily achieved.


Using the subPath Property of OpenShift Volume Mounts

Posted: February 18th, 2019 | Filed under: Technology

One of the methodologies of cloud native architectures is to externalize application configuration and store it within the environment. OpenShift provides multiple mechanisms for storing configurations within the platform, including the use of Secrets and ConfigMaps. These resources can then be exposed to applications as either environment variables or as file system volumes mounted within pods and containers. By default, volume mounts use the standard operating system mount command to inject the external content into the container. While this implementation works well for the majority of use cases, there are situations where there is a desire to retain the contents of the existing directory and only inject a specific file instead of an entire directory. Fortunately, OpenShift and Kubernetes have a solution for this challenge using the subPath property of the container volumeMount, which this discussion will highlight.

To demonstrate how one can utilize subPath volume mounting, let’s deploy an application that can benefit from this feature. Applications commonly consume configuration files within dedicated directories, such as a conf or conf.d. One such application that leverages this paradigm is the Apache HTTP Server, better known as httpd. A variety of configuration files are spread across these two directories within the /etc/httpd folder. If custom configuration files were injected using a typical volume mount to one of these locations, key assets the application server had been expecting to find may not be available. The conf.d directory is the common location for user defined configurations. For example, one can specify that all requests that are made to specific context paths are automatically redirected to another location as shown below:

<VirtualHost *:*>
  
  Redirect permanent /redhat https://www.redhat.com/
  Redirect permanent /openshift https://www.openshift.com/
 
</VirtualHost>

In the example above, requests to the /redhat context path are redirected to https://www.redhat.com while requests made against the /openshift context path are redirected to https://www.openshift.com. While one could configure this file to be placed in this directory at image build time, there may be a desire to customize the contents per deployed environment. This is where externalizing the configuration outside the image and injecting the contents at runtime becomes valuable. Given that this file does not contain any sensitive data, it is an ideal candidate for a ConfigMap. First, create a new file on the local file system called subpath-redirect.conf with the contents from the example above.

Now, let's use an OpenShift environment to demonstrate this use case. Login and create a new project called subpath-example:

oc new-project subpath-example

By default, OpenShift provides an ImageStream containing an Apache HTTPD server which can be deployed with a single command. Execute the following to create a new application called subpath-example:

oc new-app --image-stream=httpd --name=subpath-example

A new deployment of the httpd image will be initiated. When the new-app command is used against an ImageStream, no route to expose the application outside the cluster is created. Execute the following command to expose the service to allow for ingress traffic.

oc expose svc subpath-example

Locate the hostname of the application by executing the following command:

oc get route subpath-example --template='{{ .spec.host }}'

Copy the resulting location into a web browser which should display the default Apache welcome page.

Now, let's use the contents of the subpath-redirect.conf file created previously to build a new ConfigMap that can then be injected into the application.

oc create configmap subpath-example --from-file=subpath-redirect.conf

Before mounting the ConfigMap, explore the running application by starting a remote shell session into the running container.

First, confirm the container is running by locating the name of the running pod:

oc get pods

Start the remote shell session:

oc rsh <pod_name>

List the files in the /etc/httpd/conf.d directory:

ls -l /etc/httpd/conf.d

-rwxrwxrwx. 1 root       root        366 Nov  7 12:26 README
-rwxrwxrwx. 1 root       root         63 Jan  8  2018 auth_mellon.conf
-rwxrwxrwx. 1 root       root       2966 Nov  7 12:25 autoindex.conf
-rwxrwxrwx. 1 1000150000 root       9410 Feb 14 06:33 ssl.conf
-rwxrwxrwx. 1 root       root       1252 Nov  7 12:21 userdir.conf
-rwxrwxrwx. 1 root       root        556 Nov  7 12:21 welcome.conf

As demonstrated by the above output, there are a number of user defined configurations already present within the httpd image (inside the conf.d directory), and overwriting these files with a standard directory level volume mount could cause the container to fail to operate.

Define a new volume in the pod referencing the ConfigMap containing the contents of the Apache HTTPD configuration file, along with a volumeMount targeting the /etc/httpd/conf.d directory, by editing the DeploymentConfig with oc edit dc subpath-example.

...
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/httpd/conf.d/subpath-redirect.conf
          name: subpath-example
          subPath: subpath-redirect.conf
      volumes:
      - configMap:
          defaultMode: 420
          name: subpath-example
        name: subpath-example
...

Notice how the subPath property of the volumeMount specifies the name of the file within the ConfigMap (oc describe configmap subpath-example would display this name as well) along with the full mountPath to the file that will be created in the Apache configuration directory.

Save the changes, which will trigger a new deployment of the application.
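
Alternatively, rather than editing the DeploymentConfig by hand, the same mount may be achievable in a single command, assuming your oc client supports the --sub-path flag of oc set volume:

oc set volume dc/subpath-example --add --name=subpath-example \
  --type=configmap --configmap-name=subpath-example \
  --mount-path=/etc/httpd/conf.d/subpath-redirect.conf \
  --sub-path=subpath-redirect.conf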

Running oc get pods again will confirm that the new version of the application has been deployed.

Once again, obtain a remote shell to the running pod using the steps previously described.

List the files in the /etc/httpd/conf.d directory and notice the presence of the subpath-redirect.conf file from the ConfigMap:

-rwxrwxrwx. 1 root       root        366 Nov  7 12:26 README
-rwxrwxrwx. 1 root       root         63 Jan  8  2018 auth_mellon.conf
-rwxrwxrwx. 1 root       root       2966 Nov  7 12:25 autoindex.conf
-rwxrwxrwx. 1 1000150000 root       9410 Feb 14 06:33 ssl.conf
-rw-r--r--. 1 root       1000150000  150 Feb 14 06:33 subpath-redirect.conf
-rwxrwxrwx. 1 root       root       1252 Nov  7 12:21 userdir.conf
-rwxrwxrwx. 1 root       root        556 Nov  7 12:21 welcome.conf
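
As a quick check, the contents of the injected file can also be printed without opening a full shell session (substitute the pod name from oc get pods):

oc exec <pod_name> -- cat /etc/httpd/conf.d/subpath-redirect.conf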

Confirm the configuration has been applied to the application by navigating to the /redhat context of the application in your browser. If successful, you should be redirected to https://www.redhat.com. In addition, navigating to the /openshift context will redirect to https://www.openshift.com.

The ability to inject individual files from externally stored resources within the platform using the subPath feature of volumes expands and accelerates the delivery of applications to achieve greater business value.


Enabling the OpenShift Cluster Console in Minishift

Posted: October 23rd, 2018 | Author: | Filed under: Technology | 2 Comments »

Through the continued evolution of the platform, OpenShift has shifted the focus from the installation and initial deployment of infrastructure and applications to understanding how the platform and its applications are performing, better known as day two operations. As a result of the incorporation of the CoreOS team and their existing ecosystem of tools into the OpenShift portfolio, the release of OpenShift Container Platform 3.11 includes a new administrator focused web console (cluster console) which provides insights into the management of nodes, role based access controls, and the underlying cloud infrastructure objects. While this new console is automatically enabled in the deployment of the OpenShift Container Platform, the console is not enabled in Minishift/Container Development Kit (CDK), the local containerized version of OpenShift. This post will describe the steps necessary for enabling the deployment of the cluster console in Minishift.

Before beginning, ensure that you have the latest release of Minishift. You can download the latest release from Github or from the Red Hat Developers website if making use of the Container Development Kit (CDK).

As of the publishing of this article, Minishift makes use of OpenShift version 3.10. To align with the features that are provided with OpenShift 3.11 to support the cluster console, Minishift should also be configured to make use of this version. When starting up an instance of Minishift, the --openshift-version flag can be provided to specify the version that should be utilized (the CDK uses the --ocp-tag flag).

Start an instance of Minishift to make use of OpenShift version 3.11. In addition, be sure to provide the VM containing OpenShift with enough resources to support the containers required for the deployment using the --memory parameter.

minishift start --openshift-version=v3.11.0 --memory=6144

When using the Container Development Kit, use the following command:

minishift start --ocp-tag=v3.11.16 --memory=6144

Once provisioning of the VM completes and the necessary container images have been retrieved and started, information on how to access the cluster will be provided in the command line output, similar to the following:

The server is accessible via web console at:
    https://192.168.99.100:8443

When the provisioning process completes, you will be logged in to the OpenShift Command Line Interface (CLI) as a user called “developer“. Since the majority of the steps for deploying the cluster console require elevated permissions, you will need to login as a user with a higher level of access. You can login as the system administrator account using the following command as noted in the prior output:

oc login -u system:admin

The entire list of projects configured in OpenShift is then displayed. Unfortunately, this account cannot be used to access the web console. We need to grant another user cluster-admin permissions. Let’s give a user called “admin” cluster-admin privileges by executing the following command:

oc adm policy add-cluster-role-to-user cluster-admin admin

Now, login as this user to confirm that it has the same set of permissions as the system administrator user:

$ oc login -u admin

Enter any password when prompted to finalize the login process.

Note: The Container Development Kit (CDK) ships with a set of addons that provide additional features and functionality on top of the base set of components. One of these addons is the “admin-user” addon, which configures a user named admin with cluster-admin privileges. Similar to the admin-user addon, another addon called anyuid is enabled by default in the CDK to streamline the development process. By default, containers running on OpenShift make use of a random user ID, which increases the overall security of OpenShift. The functionality within OpenShift that aids in this process is called Security Context Constraints (SCC). By default, all containers use the restricted SCC. The anyuid SCC, which the anyuid addon makes use of, allows all containers to use the user ID as defined within the container instead of a random user ID. However, the utilization of the anyuid SCC by all OpenShift components has been known to cause challenges. Since new container development is not being emphasized as part of this effort, disable the configurations that were made by the addon by executing the following command:

$ oc adm policy remove-scc-from-group anyuid system:authenticated

With all of the policies now properly configured, let’s try to access the OpenShift web console. Due to a known issue, navigating to the base address in a web browser will result in an error. Instead, add the /console context to the OpenShift server address to work around this issue.

For example, if OpenShift is available at https://192.168.99.100:8443, the console would be accessible at https://192.168.99.100:8443/console

Accept the self signed certificate warning and you should be presented with the OpenShift web console. Login to the web console with the user “admin”. Any password can be entered as no additional validation is performed. Once authenticated successfully, you will be presented with the OpenShift catalog.

While access to the OpenShift web console is great, it only provides a developer centric viewpoint into the platform, which has been available since the infancy of OpenShift 3. Additional steps will need to be performed to install the cluster console to provide a more operational viewpoint into the platform.

Coinciding with the release of OpenShift 3.11 was the introduction of the Operator Framework and Operators into mainstream use. Operators are a method for packaging, deploying and managing Kubernetes based applications. The cluster console makes use of an operator called the console-operator to manage its lifecycle.

To make use of an operator, a set of resources must be deployed to an OpenShift environment. These manifests are stored within the Github repository associated with the operator.

The content of the repository can either be downloaded as a zip from Github or cloned. The commands below will use git to clone the repository to a local machine and navigate into the directory created.

$ git clone https://github.com/openshift/console-operator 
$ cd console-operator

All of the manifests needed to create the necessary project in OpenShift along with the remaining assets are located in a directory called manifests. The contents of the directory can all be applied with a single command using the OpenShift CLI.

$ oc apply -f manifests/

A new namespace containing the operator and the console will be deployed. This can be confirmed by viewing the set of running pods in the newly created openshift-console namespace.

$ oc get pods -n openshift-console

NAME                                 READY     STATUS    RESTARTS   AGE
console-operator-7748b877b5-58h2z    1/1       Running   0          5m
openshift-console-67b8f48b9d-dw7dl   1/1       Running   0          5m

In addition, a route is also created to expose access outside the cluster.

$ oc get routes -n openshift-console

NAME      HOST/PORT                                     PATH      SERVICES   PORT      TERMINATION          WILDCARD
console   console-openshift-console.192.168.99.100.nip.io             console    https     reencrypt/Redirect   None

Navigate to the URL provided. Once again, accept the self signed certificate and login using the admin user. Once authenticated, the list of projects is presented.

Note: If you attempt to access the cluster console and are presented with a redirect loop where the login page continues to appear, it indicates a race condition has occurred where the console was not properly configured with the correct permissions to make requests against the OpenShift API. When this situation occurs, execute the following command to delete the console pod, which should allow the secrets to be mounted properly in the newly created pod:

oc delete pod -n openshift-console -l app=openshift-console

Now, under the administration section of the navigation bar, roles and their bindings, quotas, along with the set of defined custom resource definitions can be browsed. Take a moment to view each of these sections at your leisure.

Most platform administrators are concerned with a holistic snapshot of the entire OpenShift environment. This is provided on the status page underneath the Home section of the left hand navigation bar.

When navigating to the status page for the first time, only the default namespace is displayed. To view all namespaces, select the “all projects” option from the Projects dropdown at the top of the page. This will display an aggregation across all namespaces. Currently, only events are displayed, which is only a portion of what platform administrators need to determine the state of the environment. Key components of this page are missing because the remaining content is sourced from metrics gathered by Prometheus, which is not deployed by default in Minishift.

Fortunately, there is an operator available as of OpenShift 3.11 to manage the deployment of Prometheus. The ecosystem of Prometheus tools, including Alertmanager and Kube State Metrics, is made available by the content found in the cluster-monitoring-operator repository. In a similar fashion as completed previously for the console-operator, open up a terminal session, clone the contents of the repository locally, and change into the directory.

$ git clone https://github.com/openshift/cluster-monitoring-operator
$ cd cluster-monitoring-operator

Apply the contents of the manifest directory to OpenShift.

$ oc apply -f manifests/

A new namespace called openshift-monitoring will be created along with operators for managing Prometheus and the rest of the monitoring stack. There are a number of components that are deployed by the operators so it may take a few minutes for all of the components to become active. If necessary, refresh the cluster console status page to reveal additional telemetry of the current state of the OpenShift environment.

When reviewing the metrics now presented from Prometheus, a portion of the values may not be displayed. If this is the case, additional permissions may need to be added to allow for the OpenShift controller to perform a TokenReview. The operator configured a ClusterRole called prometheus-k8s which enables access to perform TokenReviews. Execute the following command to associate this ClusterRole to the service account being used by the controller-manager pod.

$ oc adm policy add-cluster-role-to-user prometheus-k8s -n openshift-controller-manager -z openshift-controller-manager

Wait a few moments and the remaining graphs should display values properly.

While the cluster console that is deployed on Minishift does not contain all of the metrics that are typically available in a full OpenShift deployment, it provides insights into the capabilities unlocked by this new administrative console and the expanded day two operations features.


Minishift and the Enterprise: Disconnected Image Registry

Posted: October 9th, 2018 | Author: | Filed under: Technology | Tags: , , , | No Comments »

Security continues to be a priority in most organizations. Any breach may result in intellectual or financial losses. Reducing access to external systems by internal resources is one way to limit the threat potential. One such method is to place a middleman, or proxy, between internal and external resources to govern the types of traffic. Considerations for how the Container Development Kit (CDK) can traverse proxy servers were covered in a prior blog. However, many organizations are further reducing the need for communicating with remote systems and placing all resources within their infrastructure. Systems that operate with access to external resources completely restricted are known as running in a disconnected environment. OpenShift supports operating in a disconnected environment, and cluster operators can take steps to prepare for normal operation. A full discussion on managing OpenShift in a disconnected environment is beyond the scope of this discussion, but can be found here. While there are several areas that must be accounted for when operating in a disconnected environment, having access to the container images that reside in external image registries is essential. The CDK, like the full platform, is driven by container images sourced from external locations. Fortunately, the CDK does contain the functionality to specify an alternate location from which the images that control the execution can originate.

OpenShift’s container images are stored by default in the Red Hat Container Catalog (RHCC). Many organizations operate their own container registry internally to provide content either mirrored from remote locations or created in house. Common registry examples in use include a standalone docker registry (docker distribution), Sonatype Nexus, JFrog Artifactory and Red Hat Quay. Since the same container images that are used by the OpenShift Container Platform are used by the CDK, organizations can serve them using an internal registry and satisfy both sets of consumers. One requirement that must be adhered to is that the image repository name and tag must match the source from the Red Hat Container Catalog (they can differ, however several manual changes would then be required).
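
For example, one common way to seed an internal registry while preserving the repository name and tag is with a tool such as skopeo (a sketch; registry.mycorp.com is a hypothetical internal registry, and credential flags may be required for both endpoints):

skopeo copy \
  docker://registry.access.redhat.com/openshift3/ose:<tag> \
  docker://registry.mycorp.com/openshift3/ose:<tag>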

Once the images are available in the local registry, a few configuration changes can be made to fully support their use in the CDK (see the section on syncing images). First, several of the options that will be leveraged in the CDK are classified as “experimental features”. To enable support for experimental features, set the “MINISHIFT_ENABLE_EXPERIMENTAL” environment variable to “y” as shown below:

export MINISHIFT_ENABLE_EXPERIMENTAL=y

With experimental features enabled, the CDK can now be started. For this example, let’s assume that there is an image registry located at registry.mycorp.com which has been seeded with the images to support OpenShift. Execute the following command to utilize the CDK with images sourced from this internal registry:

minishift start --insecure-registry registry.mycorp.com --docker-opt add-registry=registry.mycorp.com --docker-opt block-registry=registry.access.redhat.com --extra-clusterup-flags --image=registry.mycorp.com/openshift3/ose

Phew, that was a long command. Let’s take a moment to break it down.

  • minishift start

This is the primary command and subcommand used to start the CDK

  • --insecure-registry registry.mycorp.com

While the registry may be served using trusted SSL certificates, many organizations have their own Certificate Authority instead of leveraging a public CA, such as Comodo. Since the VM running the CDK only trusts certificates from public CAs, this option allows docker to communicate with the registry.

  • --docker-opt add-registry=registry.mycorp.com

Many OpenShift components do not include the registry portion of the image and instead rely on the configuration of the underlying Docker daemon to provide a default set of registries to use. Both the OpenShift Container Platform and the Container Development Kit have the RHCC configured by default. By specifying the location of the internal registry, the CDK will be able to reference it when images are specified without the value of the registry.

  • --docker-opt block-registry=registry.access.redhat.com

To ensure images are only being sourced from the corporate registry and not the default location (RHCC), the CDK VM can be configured to place a restriction at the docker daemon level.

  • --extra-clusterup-flags --image=registry.mycorp.com/openshift3/ose

OpenShift in the context of the CDK, as of OpenShift version 3.9, utilizes the same image as the containerized installation, which contains all of the necessary logic to manage an OpenShift cluster. Under the covers, the CDK leverages the “oc cluster up” utility to deploy OpenShift. By default, “oc cluster up” references the full path of the image, including the registry. This experimental feature flag allows the value to be overridden with the location of the image in the enterprise registry.

The CDK will now start by pulling the container image, and once this image is started, all dependent images required by the platform will be retrieved. After the CDK has started fully, verify all running images are using the enterprise container registry.

First, check the names of the images currently running at a Docker level using the minishift ssh command:

minishift ssh "docker images --format '{{.Repository}}:{{.Tag}}'"
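
Any image still referencing the default upstream location can be spotted quickly by filtering out entries from the internal registry (again assuming the hypothetical registry.mycorp.com):

minishift ssh "docker images --format '{{.Repository}}:{{.Tag}}'" | grep -v registry.mycorp.com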

The final component that requires modification to support leveraging an enterprise registry is to update all of the ImageStreams that are populated in OpenShift. By default, they reference images from the RHCC. The Ansible based OpenShift installer does contain logic to update ImageStreams if the location differs from the RHCC. Unfortunately, the CDK does not contain this logic. Fortunately, this issue can be corrected with only a few commands.

oc login -u admin

Similar to all other accounts in the CDK, any password can be specified.

Next, replace the RHCC with the location of the enterprise registry for all ImageStreams by executing the following command:

oc get is -n openshift -o json | sed -e 's|registry.access.redhat.com|registry.mycorp.com|g' | oc replace -n openshift -f-

Make sure to replace registry.mycorp.com with the address of the enterprise registry.

With all of the ImageStreams now utilizing the enterprise registry as the source, reimport the ImageStreams:

for x in `oc get is -n openshift -o name`; do oc import-image $x -n openshift --all --insecure=true; done

After the command completes, all ImageStreams will be updated.

At this point, the CDK is fully functional, with images being referenced from the enterprise registry, thus enabling productivity in environments where security is a high priority.


Minishift and the Enterprise: Registration

Posted: October 9th, 2018 | Author: | Filed under: Technology | Tags: , , | No Comments »

One of the many hallmarks of Open Source Software is the ability for anyone in the community to freely contribute to a software project. This open model provides an opportunity to garner insight into the direction of a project from a larger pool of resources, in contrast to a closed source model where software may be regulated by a single organization or group. Many enterprises also see the value of Open Source Software to power their most critical systems. However, enterprises must be cognizant that Open Source Software from the community may not have the integrity that they have been accustomed to when using software obtained directly from a vendor. Red Hat, as a leader of Open Source Software solutions, provides a subscription model that can be used to meet the quality and support requirements necessary for any organization. A subscription includes fully tested and hardened software, patches, and customer support. Once a subscription has been purchased, licensed software must be registered to activate the necessary included features.

The Container Development Kit (CDK) is the supported version of the upstream minishift project, and given that the software package is built on top of a Red Hat Enterprise Linux base, a valid subscription associated with a Red Hat account is required to access the entire feature set provided by the CDK. To enable development on Red Hat’s ecosystem of tools, a no-cost developer subscription is available through the Red Hat Developer program and includes an entitlement to Red Hat Enterprise Linux along with a suite of development tools that are regularly updated with the latest enhancements and features. Information about the Red Hat Developer Subscription along with the steps to create an account can be found at the Red Hat Developer Website.

Once a Red Hat Developer account has been obtained, the configuration of associating the account within the CDK can be completed. These steps were detailed in the prior post, Minishift and the Enterprise: Installation.

While the Red Hat Developer subscription is a great way for developers to take advantage of enterprise Linux software, many organizations frown upon the use of personal licenses operating within the organization, especially on company owned machines. The CDK is configured to automatically register and associate subscriptions against Red Hat’s hosted subscription management infrastructure. Accounts for developers can be created within the Red Hat Customer Portal for use with the CDK. As described in the post Minishift and the Enterprise: Proxies, subscription-manager, the tool within RHEL for tracking and managing subscriptions, is automatically configured to traverse a corporate proxy server to the public internet when this option is enabled. This feature, as previously mentioned, is useful as most enterprises employ some form of barrier between the end user and external network.

Unfortunately, many enterprises do not use Red Hat’s hosted subscription management system to register machines on their network and instead leverage Red Hat Satellite within their internal network. The CDK, as of version 3.3, is only able to register subscriptions against Red Hat’s hosted infrastructure automatically as part of normal startup. Fortunately, there are methods by which the user can configure the CDK to register against a Satellite server instead of Red Hat. These options include:

  1. Executing commands to facilitate the registration process
  2. Leveraging an add-on which streamlines the registration process

Regardless of the method utilized, the CDK should be instructed to not attempt to register the machine during startup. This is accomplished by passing the --skip-registration parameter when executing the minishift start command as shown below:

minishift start --skip-registration

Even though the RHEL machine within the CDK is not registered, the majority of the functionality will remain unaffected. The key exception is managing software packages using the yum utility. Since RHEL based images inherit subscription and repository information from the host they are running on, operations both on the host machine as well as within a container making use of yum will fail due to the lack of valid subscriptions. This is primarily noticeable at image build time, as builds typically involve the installation of packages using yum.

The RHEL machine within the CDK can be registered manually in a similar fashion to any other RHEL machine using the subscription-manager utility. To gain access to a prompt within the CDK, the minishift ssh command can be used.

minishift ssh

By default, an ssh session is established within the CDK using the “docker” user. Since subscription-manager requires root privileges, access must be elevated using the sudo command. Execute the following command to elevate to the root user:

sudo su -

With access to root privileges, the machine can now be registered to Red Hat using the subscription-manager register command. Either a username/password or activation key/organization combination can be used as follows:

subscription-manager register --username=<username> --password=<password>

Or

subscription-manager register --org=<organization-key> --activationkey=<activation-key>

In either case, adding the --auto-attach parameter to each command will attach a subscription automatically to the new registration.
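
For example, registering with a username and password and attaching a subscription in one step looks like the following:

subscription-manager register --username=<username> --password=<password> --auto-attach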

To subscribe the CDK against an instance of Red Hat Satellite instead of Red Hat’s hosted infrastructure, many of the same commands can be reused. An additional step is required to first download the bundle containing the certificates for the Satellite server so that the CDK can communicate securely to facilitate the registration process. Execute the following command to install the certificates into the CDK:

rpm -Uvh http://<satellite_server>/pub/katello-ca-consumer-latest.noarch.rpm

Now use subscription-manager to complete the registration process using the --org and --activationkey parameters:

subscription-manager register --org=<organization-key> --activationkey=<activation-key> --auto-attach

To validate the CDK is properly subscribed, let’s start a new container and attempt to install a package using yum. Once again, in a session within the CDK as the root user, execute the following command:

docker run -it --rm rhel:7.5 yum install -y dos2unix

If the above command succeeded, the CDK is properly registered and subscribed.

Automate Satellite Registration using an Add-on

Active users of the CDK routinely delete the RHEL VM that is part of the CDK using the minishift delete command and start with a clean slate, as it eliminates the artifacts that have accumulated from prior work. As demonstrated previously, registration of the CDK against a Red Hat Satellite involves a number of manual steps. Fortunately, there is a way to automate this process through the use of a minishift add-on. An add-on is a method to extend the base minishift startup process by injecting custom actions. This is ideal as the add-on can streamline the repetitive manual processes that would normally need to be executed to register against Satellite.

An addon called satellite-registration is available to facilitate the registration of the CDK against a Satellite instance. To install the add-on, first clone the repository to the local machine:

git clone https://github.com/sabre1041/cdk-minishift-utils.git

With the repository available on the local machine, install the add-on into the CDK:

minishift addons install cdk-minishift-utils/addons/satellite-registration

Confirm the add-on was installed successfully by executing the following command:

minishift addons list

When any new add-on is installed, it is disabled by default (as indicated by the disabled designation). Add-ons can be enabled, which will automatically execute them at startup, or they can be manually invoked using the minishift addons apply command. If you recall, registration against a Satellite instance requires that several values be provided to complete the process:

  • Location of the satellite server to obtain the certificate bundle
  • Organization ID
  • Activation Key

The add-on similarly requires these values be provided so that it can register the CDK successfully. Add-ons offer a method of injecting parameters during the execution process through the --addon-env flag. The above items are associated with the add-on environment variables listed below:

  • SATELLITE_CA_URL
  • SATELLITE_ORG
  • SATELLITE_ACTIVATION_KEY

To test the add-on against a satellite server, first start up the CDK with auto registration disabled:

minishift start --skip-registration

Once the CDK has started, apply the satellite-registration add-on along with the required flags:

minishift addons apply satellite-registration --addon-env "SATELLITE_CA_URL=<CA_LOCATION>" --addon-env "SATELLITE_ORG=<ORG_NAME>" --addon-env "SATELLITE_ACTIVATION_KEY=<ACTIVATION_KEY>"

Confirm the registration was successful by checking the status as reported by subscription-manager from the local machine:

minishift ssh sudo subscription-manager status

If the “Overall Status” as reported by the previous command returns “Current”, the CDK was successfully subscribed to the satellite instance.

Whether using Red Hat hosted infrastructure or Red Hat Satellite, developers in the community or within an enterprise setting have access to build powerful applications using trusted Red Hat software by registering and associating subscriptions to the Container Development Kit.