Posted: October 23rd, 2018 | Author: sabre1041 | Filed under: Technology | 2 Comments »
Through the continued evolution of the platform, OpenShift has shifted its focus from the installation and initial deployment of infrastructure and applications to understanding how the platform and its applications are performing, better known as day two operations. As a result of the incorporation of the CoreOS team and their existing ecosystem of tools into the OpenShift portfolio, the release of OpenShift Container Platform 3.11 includes a new administrator-focused web console (cluster console) which provides insights into the management of nodes, role-based access control, and the underlying cloud infrastructure objects. While this new console is automatically enabled in a deployment of the OpenShift Container Platform, it is not enabled in Minishift/Container Development Kit (CDK), the local containerized version of OpenShift. This post will describe the steps necessary for enabling the deployment of the cluster console in Minishift.
Before beginning, ensure that you have the latest release of Minishift. You can download the latest release from GitHub or from the Red Hat Developers website if making use of the Container Development Kit (CDK).
As of the publishing of this article, Minishift makes use of OpenShift version 3.10. To align with the features provided in OpenShift 3.11 that support the cluster console, Minishift should also be configured to make use of this newer version. When starting up an instance of Minishift, the --openshift-version flag can be provided to specify the version that should be utilized (the CDK uses the --ocp-tag flag).
Start an instance of Minishift to make use of OpenShift version 3.11. In addition, be sure to provide the VM containing OpenShift with enough resources to support the containers required for the deployment using the --memory parameter.
minishift start --openshift-version=v3.11.0 --memory=6114
When using the Container Development Kit, use the following command:
minishift start --ocp-tag=v3.11.16 --memory=6114
Once provisioning of the VM completes and the necessary container images have been retrieved and started, information on how to access the cluster will be provided in the command line output, similar to the following:
The server is accessible via web console at:
https://192.168.99.100:8443
When the provisioning process completes, you will be logged in to the OpenShift Command Line Interface (CLI) as a user called “developer”. Since the majority of the steps for deploying the cluster console require elevated permissions, you will need to log in as a user with higher privileges. You can log in as the system administrator account using the following command, as noted in the prior output:
oc login -u system:admin
The entire list of projects configured in OpenShift is then displayed. Unfortunately, this account cannot be used to access the web console. We need to grant another user cluster-admin permissions. Let’s give a user called “admin” cluster-admin privileges by executing the following command:
oc adm policy add-cluster-role-to-user cluster-admin admin
Now, log in as this user to confirm that it has the same set of permissions as the system administrator user:
$ oc login -u admin
Enter any password when prompted to finalize the login process.
Note: The Container Development Kit (CDK) ships with a set of addons that provide additional features and functionality on top of the base set of components. One of these addons is the “admin-user” addon, which configures a user named admin with cluster-admin privileges. Similar to the admin-user addon, another addon called anyuid is enabled by default in the CDK to streamline the development process. By default, containers running on OpenShift make use of a random user ID, which increases the overall security of OpenShift. The functionality within OpenShift that aids in this process is called Security Context Constraints (SCC). By default, all containers use the restricted SCC. The anyuid SCC, which the anyuid addon makes use of, allows containers to use the user ID defined within the container instead of a random user ID. However, the utilization of the anyuid SCC by all OpenShift components has been known to cause challenges. Since new container development is not being emphasized as part of this effort, disable the configuration that was made by the addon by executing the following command:
$ oc adm policy remove-scc-from-group anyuid system:authenticated
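If you want to confirm the change, the SCCs defined in the cluster can still be listed; the anyuid SCC will remain present, it is simply no longer granted to every authenticated user:

$ oc get scc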
With all of the policies now properly configured, let’s try to access the OpenShift web console. Due to a known issue, navigating to the base address in a web browser will result in an error. Instead, add the /console context to the OpenShift server address to work around this issue.
For example, if OpenShift is available at https://192.168.99.100:8443, the console would be accessible at https://192.168.99.100:8443/console
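If the address is not handy, Minishift can also print the console URL for you (the /console context still needs to be appended manually because of the issue noted above):

minishift console --url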
Accept the self-signed certificate warning and you should be presented with the OpenShift web console. Log in to the web console with the user “admin”. Any password can be entered as no additional validation is performed. Once authenticated successfully, you will be presented with the OpenShift catalog.
While access to the OpenShift web console is great, it only provides a developer-centric viewpoint into the platform, which has been available since the infancy of OpenShift 3. Additional steps will need to be performed to install the cluster console and provide a more operational viewpoint into the platform.
Coinciding with the release of OpenShift 3.11 was the introduction of the Operator Framework and Operators into mainstream use. Operators are a method for packaging, deploying, and managing Kubernetes-based applications. The cluster console makes use of an operator called the console-operator to manage its lifecycle.
To make use of an operator, a set of resources must be deployed to an OpenShift environment. These manifests are stored within the GitHub repository associated with the operator.
The content of the repository can either be downloaded as a zip archive from GitHub or cloned. The commands below use git to clone the repository to the local machine and navigate into the newly created directory.
$ git clone https://github.com/openshift/console-operator
$ cd console-operator
All of the manifests needed to create the necessary project in OpenShift, along with the remaining assets, are located in a directory called manifests. The contents of the directory can all be created with a single OpenShift CLI command.
$ oc apply -f manifests/
A new namespace containing the operator and the console will be deployed. This can be confirmed by viewing the set of running pods in the newly created openshift-console namespace.
$ oc get pods -n openshift-console
NAME READY STATUS RESTARTS AGE
console-operator-7748b877b5-58h2z 1/1 Running 0 5m
openshift-console-67b8f48b9d-dw7dl 1/1 Running 0 5m
In addition, a route is created to expose access to the console from outside the cluster.
$ oc get routes -n openshift-console
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
console console-openshift-console.192.168.99.100.nip.io console https reencrypt/Redirect None
Navigate to the URL provided. Once again, accept the self-signed certificate and log in using the admin user. Once authenticated, the list of projects is presented.
Note: If you attempt to access the cluster console and are presented with a redirect loop where the login page continues to appear, it indicates a race condition has occurred where the console was not properly configured with the correct permissions to make requests against the OpenShift API. When this situation occurs, execute the following command to delete the console pod; the replacement pod should mount the secrets properly when it is created:
oc delete pod -n openshift-console -l app=openshift-console
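If desired, you can watch the replacement console pod start before refreshing the browser:

$ oc get pods -n openshift-console -w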
Now, under the Administration section of the navigation bar, roles and role bindings, quotas, and the set of defined Custom Resource Definitions can be browsed. Take a moment to view each of these sections at your leisure.
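Much of the same information is also available from the CLI if you prefer; for example, the cluster roles and Custom Resource Definitions shown in the console can be listed with commands such as the following:

$ oc get clusterroles
$ oc get crd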
Most platform administrators are concerned with a holistic snapshot of the entire OpenShift environment. This is provided on the status page underneath the Home section of the left hand navigation bar.
When navigating to the status page for the first time, only the default namespace is displayed. To view all namespaces, select the “all projects” option from the Projects dropdown at the top of the page. This will display an aggregation across all namespaces. Currently, only events are displayed, which is only a portion of what platform administrators need to determine the state of the environment. Key components of this page are missing. This is due to the fact that the remaining content is sourced from metrics gathered by Prometheus, which is not deployed by default in Minishift.
Fortunately, as of OpenShift 3.11, there is an operator available to manage the deployment of Prometheus. The ecosystem of Prometheus tooling, including Alertmanager and kube-state-metrics, is made available by the content found in the cluster-monitoring-operator repository. In a similar fashion to the console-operator previously, open up a terminal session, clone the contents of the repository locally, and change into the directory.
$ git clone https://github.com/openshift/cluster-monitoring-operator
$ cd cluster-monitoring-operator
Apply the contents of the manifest directory to OpenShift.
$ oc apply -f manifests/
A new namespace called openshift-monitoring will be created along with operators for managing Prometheus and the rest of the monitoring stack. There are a number of components that are deployed by the operators so it may take a few minutes for all of the components to become active. If necessary, refresh the cluster console status page to reveal additional telemetry of the current state of the OpenShift environment.
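To follow the rollout, you can watch the pods in the new namespace until they are all in a Running state:

$ oc get pods -n openshift-monitoring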
When reviewing the metrics now presented from Prometheus, a portion of the values may not be displayed. If this is the case, additional permissions may need to be added to allow the OpenShift controller to perform a TokenReview. The operator configured a ClusterRole called prometheus-k8s which enables access to perform TokenReviews. Execute the following command to associate this ClusterRole with the service account used by the controller-manager pod.
$ oc adm policy add-cluster-role-to-user prometheus-k8s -n openshift-controller-manager -z openshift-controller-manager
Wait a few moments and the remaining graphs should display values properly.
While the cluster console that is deployed on Minishift does not contain all of the metrics that are typically available in a full OpenShift deployment, it provides insights into the capabilities unlocked by this new administrative console and the expanded day two operations features.
Posted: October 9th, 2018 | Author: sabre1041 | Filed under: Technology | Tags: cdk, minishift, OpenShift, registries | No Comments »
Security continues to be a priority in most organizations. Any breach may result in intellectual or financial losses. Reducing access to external systems by internal resources is one way to limit the threat potential. One such method is to place a middleman, or proxy, between internal and external resources to govern the types of traffic. Considerations for how the Container Development Kit (CDK) can traverse proxy servers were covered in a prior blog. However, many organizations are further reducing the need for communicating with remote systems and placing all resources within their infrastructure. A system operating in a manner where access to external resources is completely restricted is known as running in a disconnected environment. OpenShift supports operating in a disconnected environment, and cluster operators can take steps to prepare for normal operation. A full discussion on managing OpenShift in a disconnected environment is beyond the scope of this discussion, but can be found here. While there are several areas that must be accounted for when operating in a disconnected environment, having access to the container images that reside in external image registries is essential. The CDK, like the full platform, is driven by container images sourced from external locations. Fortunately, the CDK does contain the functionality to specify an alternate location from which the images that control its execution can originate.
OpenShift’s container images are stored by default in the Red Hat Container Catalog (RHCC). Many organizations operate their own container registry internally for providing content either mirrored from remote locations or created internally. Common registry examples in use include a standalone docker registry (docker distribution), Sonatype Nexus, JFrog Artifactory, and Red Hat Quay. Since the same container images that are used by the OpenShift Container Platform are used by the CDK, organizations can serve them using an internal registry and satisfy both sets of consumers. One requirement that must be adhered to is that the repository name and tag of each image must match the source from the Red Hat Container Catalog (they can differ, however several manual changes would then be required).
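As an illustration, a single image could be mirrored into the internal registry using the standard docker pull/tag/push workflow (registry.mycorp.com and <tag> are placeholders; substitute the registry address and the tags that correspond to your OpenShift version):

docker pull registry.access.redhat.com/openshift3/ose:<tag>
docker tag registry.access.redhat.com/openshift3/ose:<tag> registry.mycorp.com/openshift3/ose:<tag>
docker push registry.mycorp.com/openshift3/ose:<tag>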
Once the images are available in the local registry, a few configuration changes can be made to fully support their use in the CDK (see the section on syncing images). First, several of the options that will be leveraged in the CDK are classified as “Experimental features”. To enable support for experimental features, set the “MINISHIFT_ENABLE_EXPERIMENTAL” environment variable to “y” as shown below:
export MINISHIFT_ENABLE_EXPERIMENTAL=y
With experimental features enabled, the CDK can now be started. For this example, let’s assume that there is an image registry located at registry.mycorp.com which has been seeded with the images to support OpenShift. Execute the following command to utilize the CDK with images sourced from this internal registry:
minishift start --insecure-registry registry.mycorp.com --docker-opt add-registry=registry.mycorp.com --docker-opt block-registry=registry.access.redhat.com --extra-clusterup-flags --image=registry.mycorp.com/openshift3/ose
Phew, that was a long command. Let’s take a moment to break it down.
- minishift start
This is the primary command and subcommand used to start the CDK.
- --insecure-registry registry.mycorp.com
While the registry may be served using trusted SSL certificates, many organizations have their own Certificate Authority instead of leveraging a public CA, such as Comodo. Since the VM running the CDK only trusts certificates from public CAs, this option allows Docker to communicate with the registry.
- --docker-opt add-registry=registry.mycorp.com
Many OpenShift components do not include the registry portion of the image and instead rely on the configuration of the underlying Docker daemon to provide a default set of registries to use. Both the OpenShift Container Platform and the Container Development Kit have the RHCC configured by default. By specifying the location of the internal registry, the CDK will be able to reference it when images are specified without the value of the registry.
- --docker-opt block-registry=registry.access.redhat.com
To ensure images are only being sourced from the corporate registry and not the default location (RHCC), the CDK VM can be configured to place a restriction at the docker daemon level.
- --extra-clusterup-flags --image=registry.mycorp.com/openshift3/ose
OpenShift in the context of the CDK, as of OpenShift version 3.9, utilizes the same image as the containerized installation, which contains all of the necessary logic to manage an OpenShift cluster. Under the covers of the CDK, the “oc cluster up” utility is leveraged to deploy OpenShift. By default, “oc cluster up” references the full path of the image, including the registry. This experimental feature flag allows this value to be overridden with the location of the image from the enterprise registry.
The CDK will now start by pulling the container image, and once this image is started, all images the platform depends on will be retrieved. After the CDK has started fully, verify all running images are using the enterprise container registry.
First, check the names of the images currently running at a Docker level using the minishift ssh command:
minishift ssh "docker images --format '{{.Repository}}:{{.Tag}}'"
The final component that requires modification to support leveraging an enterprise registry is to update all of the ImageStreams that are populated in OpenShift. By default, they reference images from the RHCC. The Ansible-based OpenShift installer does contain logic to update ImageStreams if the location differs from the RHCC. Unfortunately, the CDK does not contain this logic. Fortunately, this issue can be corrected with only a few commands. First, log in as a user with cluster-admin privileges:
oc login -u admin
Similar to all other accounts in the CDK, any password can be specified.
Next replace the RHCC with the location of the enterprise registry for all ImageStreams by executing the following command:
oc get is -n openshift -o json | sed -e 's|registry.access.redhat.com|registry.mycorp.com|g' | oc replace -n openshift -f-
Make sure to replace registry.mycorp.com with the address of the enterprise registry.
With all of the ImageStreams now utilizing the enterprise registry as the source, reimport all of the ImageStreams:
for x in `oc get is -n openshift -o name`; do oc import-image $x -n openshift --all --insecure=true; done
After the command completes, all ImageStreams will be updated.
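A quick way to spot-check the result is to confirm that the ImageStream definitions now reference the enterprise registry (again, substitute your own registry address):

oc get is -n openshift -o json | grep registry.mycorp.com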
At this point the CDK is fully functional, with images being referenced from the enterprise registry, thus enabling productivity in environments where security is a high priority.
Posted: October 9th, 2018 | Author: sabre1041 | Filed under: Technology | Tags: cdk, minishift, OpenShift | No Comments »
One of the many hallmarks of Open Source Software is the ability for anyone in the community to freely contribute to a software project. This open model provides an opportunity to garner insight into the direction of a project from a larger pool of resources, in contrast to a closed source model where software may be regulated by a single organization or group. Many enterprises also see the value of Open Source Software to power their most critical systems. However, enterprises must be cognizant that Open Source Software from the community may not have the integrity that they have been accustomed to when using software obtained directly from a vendor. Red Hat, as a leader of Open Source Software solutions, provides a subscription model that can be used to meet the quality and support requirements necessary for any organization. A subscription includes fully tested and hardened software, patches, and customer support. Once a subscription has been purchased, licensed software must be registered to activate the necessary included features.
The Container Development Kit (CDK) is the supported version of the upstream minishift project, and given that the software package is built on top of a Red Hat Enterprise Linux base, a valid subscription associated with a Red Hat account is required to access the entire feature set provided by the CDK. To enable development on Red Hat’s ecosystem of tools, a no-cost developer subscription is available through the Red Hat Developer program and includes an entitlement to Red Hat Enterprise Linux along with a suite of development tools that are regularly updated with the latest enhancements and features. Information about the Red Hat Developer Subscription along with the steps to create an account can be found at the Red Hat Developer Website.
Once a Red Hat Developer account has been obtained, the configuration of associating the account within the CDK can be completed. These steps were detailed in the prior post, Minishift and the Enterprise: Installation.
While the Red Hat Developer subscription is a great way for developers to take advantage of enterprise Linux software, many organizations frown upon the use of personal licenses operating within the organization, especially on company owned machines. The CDK is configured to automatically register and associate subscriptions against Red Hat’s hosted subscription management infrastructure. Accounts for developers can be created within the Red Hat Customer Portal for use with the CDK. As described in the post Minishift and the Enterprise: Proxies, subscription-manager, the tool within RHEL for tracking and managing subscriptions, is automatically configured to traverse a corporate proxy server to the public internet when this option is enabled. This feature, as previously mentioned, is useful as most enterprises employ some form of barrier between the end user and external network.
Unfortunately, many enterprises do not use Red Hat’s hosted subscription management system to register machines on their network and instead leverage Red Hat Satellite within their internal network. The CDK, as of version 3.3, is only able to automatically register subscriptions against Red Hat as part of normal startup. Fortunately, there are methods by which the user can configure the CDK to register against a Satellite server instead of Red Hat. These options include:
- Executing commands to facilitate the registration process
- Leveraging an add-on which streamlines the registration process
Regardless of the method utilized, the CDK should be instructed not to attempt to register the machine during startup. This is accomplished by passing the --skip-registration parameter when executing the minishift start command, as shown below:
minishift start --skip-registration
Even though the RHEL machine within the CDK is not registered, the majority of the functionality will remain unaffected. The key exception is managing software packages using the yum utility. Since RHEL-based images inherit subscription and repository information from the host they are running on, operations both on the host machine as well as within a container making use of yum will fail due to the lack of valid subscriptions. This is primarily noticeable at image build time as it typically involves the installation of packages using yum.
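You can see this for yourself by attempting to install a package inside a RHEL-based container before the VM is subscribed; the install should fail because no repositories are enabled (the package chosen here is arbitrary):

minishift ssh "docker run --rm rhel:7.5 yum install -y dos2unix"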
The RHEL machine within the CDK can be registered manually in a similar fashion to any other RHEL machine using the subscription-manager utility. To gain access to a prompt within the CDK, the minishift ssh command can be used.
minishift ssh
By default, an ssh session is established within the CDK using the “docker” user. Since subscription-manager requires root privileges, access must be elevated using the sudo command. Execute the following command to elevate to the root user:
sudo su -
With access to root privileges, the machine can now be registered to Red Hat using the subscription-manager register command. Either a username/password or activation key/organization combination can be used as follows:
subscription-manager register --username=<username> --password=<password>
Or
subscription-manager register --org=<organization-key> --activationkey=<activation-key>
In either case, adding the --auto-attach parameter to each command will attach a subscription automatically to the new registration.
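For example, when using username/password credentials:

subscription-manager register --username=<username> --password=<password> --auto-attach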
To subscribe the CDK against an instance of Red Hat Satellite instead of Red Hat’s hosted infrastructure, many of the same commands can be reused. An additional step is required to first download the bundle containing the certificates for the Satellite server so that the CDK can communicate securely to facilitate the registration process. Execute the following command to install the certificates into the CDK:
rpm -Uvh http://<satellite_server>/pub/katello-ca-consumer-latest.noarch.rpm
Now use subscription-manager to complete the registration process using the --org and --activationkey parameters:
subscription-manager register --org=<organization-key> --activationkey=<activation-key> --auto-attach
To validate the CDK is properly subscribed, let’s start a new container and attempt to install a package using yum. Once again, in a session within the CDK as the root user, execute the following command:
docker run -it --rm rhel:7.5 yum install -y dos2unix
If the above command succeeded, the CDK is properly registered and subscribed.
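The overall subscription state can also be confirmed directly from within the same session:

subscription-manager status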
Automate Satellite Registration using an Add-on
Active users of the CDK routinely delete the RHEL VM that is part of the CDK using the minishift delete command and start with a clean slate, as it eliminates the artifacts that have accumulated from prior work. As demonstrated previously, registration of the CDK against a Red Hat Satellite does involve a number of manual steps. Fortunately, there is a way to automate this process through the use of a minishift add-on. An add-on is a method to extend the base minishift startup process by injecting custom actions. This is ideal as the add-on can streamline the repetitive manual processes that would normally need to be executed to register against Satellite.
An addon called satellite-registration is available to facilitate the registration of the CDK against a Satellite instance. To install the add-on, first clone the repository to the local machine:
git clone https://github.com/sabre1041/cdk-minishift-utils.git
With the repository available on the local machine, install the add-on into the CDK:
minishift addons install cdk-minishift-utils/addons/satellite-registration
Confirm the add-on was installed successfully by executing the following command:
minishift addons list
When any new add-on is installed, it is disabled by default (as indicated by the disabled designation). Add-ons can be enabled, which will automatically execute them at startup, or they can be manually invoked using the minishift addons apply command. If you recall, registration against a Satellite instance requires several values to be provided to complete the process:
- Location of the satellite server to obtain the certificate bundle
- Organization ID
- Activation Key
The add-on similarly requires these also be provided so that it can register the CDK successfully. Add-ons offer a method of injecting parameters during the execution process through the --addon-env flag. The above items are associated with the add-on environment variables listed below:
- SATELLITE_CA_URL
- SATELLITE_ORG
- SATELLITE_ACTIVATION_KEY
To test the add-on against a satellite server, first start up the CDK with auto registration disabled:
minishift start --skip-registration
Once the CDK has started, apply the satellite-registration add-on along with the required flags:
minishift addons apply satellite-registration --addon-env "SATELLITE_CA_URL=<CA_LOCATION>" --addon-env "SATELLITE_ORG=<ORG_NAME>" --addon-env "SATELLITE_ACTIVATION_KEY=<ACTIVATION_KEY>"
Confirm the registration was successful by checking the status as reported by subscription-manager from the local machine:
minishift ssh sudo subscription-manager status
If the “Overall Status” as reported by the previous command returns “Current”, the CDK was successfully subscribed to the satellite instance.
Whether using Red Hat hosted infrastructure or Red Hat Satellite, developers in the community or within an enterprise setting have access to build powerful applications using trusted Red Hat software by registering and associating subscriptions to the Container Development Kit.