Minishift and the Enterprise: Disconnected Image Registry

Posted: May 31st, 2018 | Author: | Filed under: Technology | Tags: , , , | No Comments »

Security continues to be a priority in most organizations. Any breach may result in intellectual property or financial losses. Reducing access to external systems from internal resources is one way to limit the threat potential. One such method is to place a middleman, or proxy, between internal and external resources to govern the types of traffic. Considerations for how the Container Development Kit (CDK) can traverse proxy servers were covered in a prior blog. However, many organizations are further reducing the need to communicate with remote systems by placing all resources within their own infrastructure. A system operating where access to external resources is completely restricted is known as running in a disconnected environment. OpenShift supports operating in a disconnected environment, and cluster operators can take steps to prepare for normal operation. A full discussion on managing OpenShift in a disconnected environment is beyond the scope of this post, but can be found here. While there are several areas that must be accounted for when operating in a disconnected environment, having access to the container images that normally reside in external image registries is essential. The CDK, like the full platform, is driven by container images sourced from external locations. Fortunately, the CDK does contain the functionality to specify an alternate location from which the images that control its execution can originate.

OpenShift’s container images are stored by default in the Red Hat Container Catalog (RHCC). Many organizations operate their own container registry internally to provide content either mirrored from remote locations or created in house. Common registry examples in use include a standalone docker registry (docker distribution), Sonatype Nexus, JFrog Artifactory, and Red Hat Quay. Since the same container images that are used by OpenShift Container Platform are used by the CDK, organizations can serve them from an internal registry and satisfy both sets of consumers. One requirement that must be adhered to is that the image repository name and tag must match those from the Red Hat Container Catalog (they can differ, but several manual changes would then be required).
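One prerequisite worth calling out: the `--extra-clusterup-flags` option used later in this post is gated behind the CDK's experimental feature flag. Enabling it is typically done with an environment variable before starting the CDK (a minimal sketch, assuming the standard `MINISHIFT_ENABLE_EXPERIMENTAL` variable):

```shell
# Enable experimental Minishift/CDK features for this shell session
export MINISHIFT_ENABLE_EXPERIMENTAL=y
```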


With experimental features enabled, the CDK can now be started. For this example, let’s assume that there is an internal image registry which has been seeded with the images needed to support OpenShift. Execute the following command to utilize the CDK with images sourced from this internal registry:

minishift start --insecure-registry --docker-opt --docker-opt --extra-clusterup-flags
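With hypothetical values filled in, the invocation might look like the following sketch. Every value shown (the registry host `registry.example.com`, the `block-registry` option, and the `--image` path) is an assumption for illustration, not a value from the original post:

```shell
minishift start --insecure-registry registry.example.com \
  --docker-opt add-registry=registry.example.com \
  --docker-opt block-registry=registry.access.redhat.com \
  --extra-clusterup-flags "--image=registry.example.com/openshift3/ose"
```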

Note: Concepts from both prior blogs on Proxies and Registration can also be used when running in a fully disconnected environment.

Note: Due to a regression in version 3.4 of the CDK, the --extra-clusterup-flags parameter is not accepted.

Phew, that was a long command. Let’s take a moment to break it down.

  • minishift start

This is the primary command and subcommand used to start the CDK

  • --insecure-registry

While the registry may be served using trusted SSL certificates, many organizations leverage their own Certificate Authority instead of a public CA, such as Comodo. Since the VM running the CDK only trusts certificates from public CAs, this option allows docker to communicate with the registry.

  • --docker-opt add-registry=

Many OpenShift components do not include the registry portion of the image name and instead rely on the configuration of the underlying Docker daemon to provide a default set of registries to use. Both the OpenShift Container Platform and the Container Development Kit have the RHCC configured by default. By specifying the location of the internal registry, the CDK will be able to reference it whenever an image is specified without a registry component.
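The daemon's behavior here can be sketched with a small shell function: an unqualified image name (one without a registry host before the first slash) is prefixed with the configured default registry. This is an illustration only, and `registry.example.com` is a hypothetical value, not one from the original post:

```shell
# resolve_image mimics how a docker daemon with add-registry configured would
# qualify an image name that lacks a registry component.
resolve_image() {
  local image="$1" default_registry="registry.example.com"
  case "$image" in
    *.*/*) echo "$image" ;;                       # already registry-qualified
    *)     echo "${default_registry}/${image}" ;;
  esac
}

resolve_image "openshift3/ose"  # -> registry.example.com/openshift3/ose
```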

  • --docker-opt

To ensure images are only sourced from the corporate registry and not the default location (the RHCC), the CDK VM can be configured to place a restriction at the docker daemon level.

  • --extra-clusterup-flags --image=

As of OpenShift version 3.9, the CDK utilizes the same image as a containerized installation, which contains all of the necessary logic to manage an OpenShift cluster. Under the covers, the CDK leverages the “oc cluster up” utility to deploy OpenShift. By default, “oc cluster up” references the full path of the image, including the registry. This experimental feature flag allows that value to be overridden with the location of the image in the enterprise registry.

The CDK will now start by pulling the container image, and once this image is started, all images the platform depends on will be retrieved. After the CDK has started fully, verify all running images are using the enterprise container registry.

First, check the names of the images currently running at a Docker level using the minishift ssh command:

minishift ssh "docker images --format '{{.Repository}}:{{.Tag}}'"
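As a quick sanity check, the output of the command above can be piped through a small filter that flags any image not sourced from the enterprise registry. This is a hedged sketch; `registry.example.com` stands in for the enterprise registry host:

```shell
# check_images reads image names (one per line) on stdin and reports any that
# are not prefixed with the (hypothetical) enterprise registry host.
check_images() {
  local registry="registry.example.com" bad=0
  while IFS= read -r image; do
    case "$image" in
      "${registry}"/*) : ;;                        # expected source
      *) echo "unexpected source: $image"; bad=1 ;;
    esac
  done
  return "$bad"
}

# Example: a conforming image passes silently (exit status 0)
printf '%s\n' 'registry.example.com/openshift3/ose:v3.9' | check_images
```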

The final component that requires modification to support leveraging an enterprise registry is to update all of the ImageStreams that are populated in OpenShift. By default, they reference images from the RHCC. The Ansible based OpenShift installer does contain logic to update ImageStreams if the location differs from the RHCC. Unfortunately, the CDK does not contain this logic. Fortunately, this issue can be corrected with only a few commands.

First, make sure you are logged into OpenShift as a user with `cluster-admin` rights. By default, the `admin` user contains these privileges.

oc login -u admin

Similar to all other accounts in the CDK, any password can be specified.

Next replace the RHCC with the location of the enterprise registry for all ImageStreams by executing the following command:

oc get is -n openshift -o json | sed -e 's|||g' | oc replace -n openshift -f-

Make sure to substitute the address of the enterprise registry in the command above.
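As a hedged illustration of what that substitution does to a single image reference (`registry.example.com` is a hypothetical enterprise registry host; `registry.access.redhat.com` is the RHCC):

```shell
# A single RHCC-qualified image reference rewritten to the enterprise registry
echo 'registry.access.redhat.com/openshift3/ose-haproxy-router:v3.9' |
  sed -e 's|registry.access.redhat.com|registry.example.com|g'
# -> registry.example.com/openshift3/ose-haproxy-router:v3.9
```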

With all of the ImageStreams now utilizing the enterprise registry as their source, reimport each of the ImageStreams:

for x in `oc get is -n openshift -o name`; do oc import-image $x -n openshift --all --insecure=true; done

After the command completes, all ImageStreams will be updated.

At this point the CDK is fully functional with images being referenced from the enterprise registry, thus enabling productivity in environments where security is a high priority.

Red Hat Summit 2018 Labs: A Recap

Posted: May 21st, 2018 | Author: | Filed under: Uncategorized | Tags: , | No Comments »

Red Hat Summit week is one of those weeks that I regularly circle on the calendar. Not only does it afford the opportunity to connect with my fellow Red Hat colleagues, but I am also able to reconnect with some of my former customers. In reality, the entire three days that comprise Summit (plus the OpenShift Commons Gathering, which is held the day prior) is one big blur of non-stop activity, but most importantly, a whole lot of fun. Part of the fun is the ability to showcase some of the latest and greatest technology through the use of hands-on labs that attendees are able to take advantage of. A full overview of these lab sessions was provided in a prior post.

There are several challenges when it comes to hands-on labs at Red Hat Summit:

  • Lab sessions are longer than breakout sessions. The time commitment required may result in missing other sessions of interest.
  • Some prior knowledge may be required in order to fully appreciate the content

Regardless of the reason, one of the key goals of being an open source advocate is to make materials available publicly. Not only do I preach this sentiment, but it is also expressed by Red Hat as a whole. Fortunately, this year, all of the lab content from Summit is available on GitHub at the following location:

Included in the repository are the lab guides that the actual attendees at Summit leveraged. Several of the guides also include steps for creating the supporting environment.

As for the lab sessions themselves, they were a mix of angst, frustration, excitement, and finally relief. As anyone who has previously presented at a conference can attest, if anything can happen, it will happen. Each of my lab sessions was in the 4pm-6pm timeslot. This timing can be dangerous, as exhaustion from the day starts to take hold and the lab becomes the one impediment holding attendees back from happy hours and evening events. Nevertheless, each of the sessions was packed to the brim.

Develop IoT solutions with containers and serverless patterns

The first day of Summit featured my first lab combining the Internet of Things (IoT), containers (OpenShift), and serverless (Functions as a Service [FaaS] using OpenWhisk). While things kicked off great, they were about to go downhill quickly. One of the first steps in the lab was for attendees to clone the Git repository containing the lab material. Unfortunately, the speed of the cloning action hovered around 5kb/s. For anyone who has worked previously in the container space, this meant trouble, especially when larger container images needed to be retrieved later on. Fears became reality as attendees struggled through the next set of tasks, and we quickly realized that successful completion of the lab might not be attainable. The entire lab environment was hosted in a cloud environment, which removed the majority of the constraints on the infrastructure at Summit. However, it was communicated to the various teams who were running labs during this session time that the cloud provider was having technical faults communicating with their external Internet Service Provider, which resulted in a slow, almost unusable connection to the public internet. After 45 minutes of waiting and hoping the issue would be resolved, attention turned to giving attendees the best experience possible given the constraints.

Fortunately, the associated lab guide provides a high level overview of Functions as a Service and the key OpenWhisk concepts that were to be introduced. While attendees were a little disappointed they would not be able to have hands-on experience during this session, they left with an understanding of where to find the lab material and, most importantly, how to create an environment representative of the lab themselves.

You too can learn how to set up and complete the lab in your own environment by utilizing the following set of assets:

Managing Your OpenShift Cluster From Installation and Beyond

The second day of Red Hat Summit featured the next evolution in the “Managing OpenShift from Installation and Beyond” series. Even with the technical challenges faced the prior day, there were several factors that provided some form of assurance that this lab would go smoother:

  • This was actually the second opportunity at Red Hat Summit to execute this lab. The team completed a lab session for a few select individuals the day prior to Summit kicking off. Any outstanding issues or enhancements were addressed after this session so that attendees of Red Hat Summit proper could have the best experience possible.
  • The lab was hosted in Amazon Web Services (AWS). If a similar issue in the environment occurred, this lab session would be the least of Amazon’s concern 🙂

As anticipated, the lab went off without any issues and attendees were able to fully immerse themselves into the ways that automation using Ansible and Ansible Tower can install and manage the OpenShift Container Platform. This lab was advantageous as it covered a variety of topics ranging from Ansible Tower, the Prometheus ecosystem with visualization support from Grafana along with building and deploying Ansible Playbook Bundles.

Those interested in reviewing the guide or standing up the environment itself can refer to the lab assets:

While Red Hat Summit 2018 has come to a close, eyes are already on next year’s event, back on the east coast of the US in Boston, May 7-9, 2019. Hope to see you there as well!

Hands on Labs at Red Hat Summit 2018

Posted: May 4th, 2018 | Author: | Filed under: Uncategorized | Tags: , , , | No Comments »

Red Hat Summit once again returns to the city by the bay, San Francisco, California, the week of May 7th. Not only is the event returning to the Bay Area, but I will be reprising my role as hands-on lab magician, offering Red Hat Summit attendees the opportunity to take advantage of some of the latest and greatest technology available. Last year, I had the fortune of working with two of the smartest groups of individuals to develop and deliver labs focused on two hot topics in the industry: the installation and management of the OpenShift Container Platform, along with the Internet of Things (IoT). This year, an entirely new set of labs has been developed to demonstrate new and exciting ways to utilize the technology.

Develop IoT solutions with containers and serverless patterns

Tuesday is the first full day of Red Hat Summit, and Ishu Verma (@IoT_Ishu), Technical Marketing Manager for IoT solutions at Red Hat, and I will showcase how the Internet of Things can be applied to one of the hottest trends in the industry: serverless technologies. In this session, attendees will be introduced to the principles and benefits of serverless technology and deploy Apache OpenWhisk on top of the OpenShift Container Platform. Once deployed, a variety of OpenWhisk concepts and patterns will be leveraged to demonstrate how metrics sent from IoT devices can best be utilized. These patterns include the basics of OpenWhisk tasks, such as creating actions, triggers, and rules, along with methods for ingesting IoT data onto the platform. This session is ideal for anyone interested in working with serverless technologies on an open platform using real-world IoT use cases.

Date: Tuesday May 8
Time: 4:00pm-6:00pm
Location: Moscone South – Room 158
Session Link:

Managing your OpenShift cluster from installation and beyond

In this session, Scott Collier (@collier_s), Vinny Valdez (@VinnyValdez), and Brett Thurber (LinkedIn) from Red Hat and I will not only showcase how to install the OpenShift Container Platform using automation tools such as Ansible and Ansible Tower, but we will also cover key “day two operations” concepts that ensure the entire OpenShift Container Platform ecosystem remains healthy. This includes leveraging the Prometheus ecosystem of services to monitor the platform, along with visualization support using Grafana. In addition, attendees will learn how to build their own Ansible Playbook Bundles to deploy an application that plays an integral part in managing the lifecycle of cluster management events. You do not want to miss this session!

Date: Wednesday May 9
Time: 4:00pm-6:00pm
Location: Moscone South – Room 157
Session Link: