Serving OCI Helm Charts in Helm Chart Repositories

Posted: June 3rd, 2024 | Filed under: Technology

The introduction of OCI registries as a medium for storing Helm charts brought a number of advantages over traditional Helm repositories, including the ability to leverage the same infrastructure and primitives as standard containers and a reduction in the overall effort it takes to serve Helm charts for others to consume. However, even as the adoption of OCI based Helm charts continues to increase, there are several limitations compared to their traditional Helm chart repository counterparts. Examples include the inability to organize multiple charts alongside each other or to search for charts stored within OCI registries.

A recent discussion within the Helm project GitHub repository brought to light a new method by which OCI based Helm charts can be managed. Charts stored in Helm repositories make use of an index file, which defines not only the charts being served, but also the remote locations from which they can be retrieved.

The following is an example of a Helm index file:

apiVersion: v1
entries:
  my-chart:
  - apiVersion: v2
    appVersion: 1.16.0
    created: "2024-05-31T11:50:24.431916-05:00"
    description: Sample Helm Chart
    digest: db5c00dcae815f35b4d0d18d507aeae98f648e849ec1786a0573111210e5f337
    name: my-chart
    type: application
    urls:
    - https://example.com/charts/my-chart-0.1.1.tgz
    version: 0.1.1
  - apiVersion: v2
    appVersion: 1.16.0
    created: "2024-05-31T11:50:24.431418-05:00"
    description: Sample Helm Chart
    digest: a59db5293c542bdbe5f3e85aff3f357d1d0501ae308f51407541644baf8bd32a
    name: my-chart
    type: application
    urls:
    - https://example.com/charts/my-chart-0.1.0.tgz
    version: 0.1.0
generated: "2024-05-31T11:50:24.43089-05:00"

As illustrated by the index file above, the urls field specifies the location of the packaged chart. While the majority of Helm chart repositories serve packaged charts alongside the index file, this is not a hard requirement, and a packaged chart can be served from any accessible location.

Under the covers, Helm utilizes an interface (Getter) for accessing charts stored in remote locations. As one might expect, there are two implementations of the interface: one for accessing charts stored in Helm chart repositories and one for OCI registries. Which implementation is used is determined by the protocol specified within the URL. URLs specifying the http or https scheme access charts from traditional Helm repositories, while those with the oci scheme access charts from OCI registries.
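
For example, the same helm pull command can retrieve a chart from either kind of source, with Helm selecting the appropriate Getter based solely on the URL scheme. A minimal sketch, where the chart URL and registry host are placeholders:

# Pull a packaged chart directly from an HTTP location
helm pull https://example.com/charts/my-chart-0.1.0.tgz

# Pull the same chart from an OCI registry
helm pull oci://myregistry.com/charts/my-chart --version 0.1.0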

The way Helm retrieves remote chart content based on the specified protocol enables using a Helm index in new ways. Instead of referencing the location of a packaged chart stored within an HTTP server, the chart can instead be stored in an OCI registry. All that needs to change is the location of the chart specified in the urls field. This reference needs to include the oci:// protocol so that the underlying functionality for retrieving OCI based charts is activated. Let's see how we can accomplish this.
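
For illustration, this is the only change needed for a single chart version within the index file (the registry reference below is a placeholder):

# Before: packaged chart served from an HTTP location
urls:
- https://example.com/charts/my-chart-0.1.0.tgz

# After: the same chart stored in an OCI registry
urls:
- oci://myregistry.com/charts/my-chart:0.1.0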

Enabling OCI Charts in Chart Repositories

First, create a new Helm chart called my-chart:

helm create my-chart

Package the chart:

helm package my-chart

We will create a second version of the chart so that the index we build includes multiple versions. To increment the chart version, use the yq utility, which enables modifying YAML based content. If yq is not currently installed on your machine, navigate to the project website and follow the steps to download and install it.

Once yq has been installed, update the version of the my-chart Helm chart to version 0.1.1:

yq -i '.version = "0.1.1"' my-chart/Chart.yaml
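
To confirm the change took effect, you can print the version field (the expected output is 0.1.1):

yq '.version' my-chart/Chart.yaml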

Package the updated chart:

helm package my-chart

At this point, there are two packaged charts in the current directory (versions 0.1.0 and 0.1.1). Before publishing these charts to an OCI registry, set an environment variable called HELM_REGISTRY_REFERENCE representing the reference of the remote registry where the charts will be stored (for example: myregistry.com/charts).

export HELM_REGISTRY_REFERENCE=myregistry.com/charts
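
If the remote registry requires authentication, log in before pushing. A minimal example, substituting your own registry host and username (you will typically be prompted for the password):

helm registry login myregistry.com --username <username>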

Now, push both charts to the remote registry:

helm push my-chart-0.1.0.tgz oci://${HELM_REGISTRY_REFERENCE}
helm push my-chart-0.1.1.tgz oci://${HELM_REGISTRY_REFERENCE}

Next, generate a Helm index file based on the charts stored within the current directory:

helm repo index .

With the index file generated, use yq to dynamically update the URLs within the index file so that they reference the location of each chart in the OCI registry instead of the default path generated by Helm.

yq eval -i '. |= .entries[][] |= .urls[0] = "oci://" + env(HELM_REGISTRY_REFERENCE) + "/" + .name + ":" + .version' index.yaml

View the contents of the index file and you will notice that each chart version utilizes an OCI based reference in the urls field, enabling the use of the charts stored in the OCI registry.

apiVersion: v1
entries:
  my-chart:
    - apiVersion: v2
      appVersion: 1.16.0
      created: "2024-06-01T12:31:02.734732-05:00"
      description: A Helm chart for Kubernetes
      digest: a7f05a380cc6ed45feab837b65b1f0a51f8737236105d7325b1b13953d7abb96
      name: my-chart
      type: application
      urls:
        - oci://${HELM_REGISTRY_REFERENCE}/my-chart:0.1.1
      version: 0.1.1
    - apiVersion: v2
      appVersion: 1.16.0
      created: "2024-06-01T12:31:02.734383-05:00"
      description: A Helm chart for Kubernetes
      digest: bbee6c6f09d1535bafdd4cb5c0c7344ed6812a0b23650e2afd005cb80450a89a
      name: my-chart
      type: application
      urls:
        - oci://${HELM_REGISTRY_REFERENCE}/my-chart:0.1.0
      version: 0.1.0
generated: "2024-06-01T12:31:02.733967-05:00"

NOTE: in your own index file, ${HELM_REGISTRY_REFERENCE} in the example above will be rendered as the actual registry reference that you configured.

The index file can then be uploaded to an HTTP based web server for broader distribution and use by others. There are a number of options for hosting the index file, including GitHub Pages. Otherwise, since the file only needs to be served briefly for this walkthrough, we can use Python to start a minimal web server locally.

Execute the following command to start the web server on port 8000, serving the contents of the current directory:

python -m http.server
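
Depending on your environment, the Python interpreter may need to be invoked as python3, and the port can be specified explicitly:

python3 -m http.server 8000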

Confirm the index file is being served properly:

curl http://localhost:8000/index.yaml

Now that the chart repository has been configured and confirmed to be accessible, add a new repository called repo-oci and update its contents.

helm repo add repo-oci http://localhost:8000
helm repo update repo-oci

Verifying the Approach

With all of the steps now complete for leveraging OCI based charts within chart repositories, including adding the associated repository to the local machine, you can begin to use it like any other Helm repository, including searching for all available charts and versions:

helm search repo repo-oci/ --versions

NAME             	CHART VERSION	APP VERSION	DESCRIPTION                
repo-oci/my-chart	0.1.1        	1.16.0     	A Helm chart for Kubernetes
repo-oci/my-chart	0.1.0        	1.16.0     	A Helm chart for Kubernetes

Confirm that typical operations for interacting with a chart can still be performed, such as inspecting the contents of the Chart.yaml file using the helm show chart command:

helm show chart repo-oci/my-chart

apiVersion: v2
appVersion: 1.16.0
description: A Helm chart for Kubernetes
name: my-chart
type: application
version: 0.1.1

You can even install one of these charts to a Kubernetes cluster using the helm install or helm upgrade commands.
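
For example, assuming access to a Kubernetes cluster and using a hypothetical release name of my-release:

helm install my-release repo-oci/my-chart --version 0.1.1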

Capabilities surrounding the use of OCI artifacts continue to evolve both within the Helm project and in the broader OCI community. Until new features that offer an improved experience for working with OCI based content become more readily available, combining Helm repositories and OCI registries provides a suitable middle ground.


KubeCon EU 2024: A Model Conference

Posted: March 27th, 2024 | Filed under: Technology
KubeCon + CloudNativeCon EU

The cloud native world recently descended upon the City of Light, Paris, for the 2024 edition of KubeCon + CloudNativeCon EU. As has become the norm, the main conference was filled with three days of keynotes, breakouts, and the ever popular Partner Pavilion, consisting of a dizzying array of vendors and CNCF projects sharing their offerings. All of this was preceded by a series of co-located events that brought together individuals and organizations focusing on some of cloud native's most popular projects and initiatives. Looking back at a wild and action packed week, I wanted to share my thoughts, opinions, and experiences, and what they mean looking forward.

An Entire Conference Before the Conference

While many attendees focus on just the primary KubeCon + CloudNativeCon event, the conference in reality kicks off the day prior with the day-0 events. Each KubeCon + CloudNativeCon features co-located events covering some of the currently most popular projects and technologies, like BackstageCon, Cloud Native AI Day, and Platform Engineering Day, along with familiar staples like ArgoCon and the OpenShift Commons Gathering. ArgoCon and the OpenShift Commons Gathering were the two co-located events that I participated in, and while the activities at ArgoCon will be described in detail later on, the OpenShift Commons Gathering certainly did not disappoint.

Attendees at OpenShift Commons Gathering

The OpenShift Commons Gathering this time around took place at the Gaumont Aquaboulevard, a movie theater approximately a 15 minute walk from the main venue. The format was somewhat different from past gatherings: for most of the day, there were two concurrent tracks, the main stage and a series of focused breakout sessions.

Each of the breakout sessions lasted approximately one hour and enabled attendees to immerse themselves in a particular topic area and to collaborate with other members of the OpenShift community.
I, along with my good friend Piotr Godowski from IBM, held an interactive breakout session focused on all things security. Not only did we touch upon many of the best practices involved in securing containers and the OpenShift platform, but we also made the session as engaging as possible: attendees used a real time survey platform to share how they are currently addressing common security concerns and how those efforts are prioritized compared to other IT initiatives. The theater style seating also helped foster conversations between participants, which helped maximize the value the session could provide. The survey responses will be used to establish future topics for OpenShift Commons initiatives, including follow up sessions at future OpenShift Commons Gathering events.

For more information on OpenShift Commons including learning more about how to get involved with the community, check out the OpenShift Commons website.

OCI Artifacts Take Center Stage

OCI (Open Container Initiative) artifacts enable the packaging and storage of additional content types aside from container images within traditional OCI registries. OCI artifacts are not new, as they have been used for several years now (see the support for storing Helm charts as OCI artifacts), but recent announcements have helped bring them to the forefront. Just prior to KubeCon, OCI specification v1.1.0 was released, solidifying how OCI artifacts are defined and managed. A good blog post published by the Microsoft Azure Container Registry team highlights many of the changes and enhancements that are part of the OCI v1.1.0 specification.

Discussions surrounding OCI artifacts were part of both the co-located events and the main KubeCon + CloudNativeCon event, and I was fortunate enough to speak to their benefits, the features they enable, and how the community can participate.

OCI Artifacts to the Masses

AI and ML are undoubtedly the hottest topics in the tech industry these days. As the community and organizations come to grips with the ways that AI and ML technologies can be utilized, one area of focus is the ability to manage and utilize ML models in a scalable way. While S3 is one option for serving these types of models, OCI artifacts represent an alternative solution that not only provides storage and management capabilities, but also enables the reuse of many of the other technologies that have been developed to support traditional containers, including security and provenance.

Attendees of KubeCon + CloudNativeCon got a first glimpse into the world of OCI artifacts and their possible use as they were mentioned several times during the keynotes as well as within dedicated breakout sessions (see below).

GitOps Management using OCI Artifacts in Argo CD

ArgoCon

One of the efforts that I have been spearheading for some time now is the ability to manage GitOps assets more natively in Argo CD. At the ArgoCon co-located event, Christian Hernandez, Dan Garfield, Hilliary Lipsig, and I held a panel discussing a new proposal in the Argo CD community to bring first class support for handling GitOps content (content traditionally stored in Git repositories and standard Helm chart repositories).

The discussion offered insights into the challenges that OCI artifacts can help solve, how they can be used, and ways to join the community to bring these new sets of capabilities to fruition. The assets, including the presentation and recording, can be found below:

If you are interested in contributing or participating in the efforts surrounding Argo CD and OCI artifacts, feel free to join the #argo-cd-oci-integration channel on CNCF Slack. I am personally excited to work with members of the Argo CD community to bring these new opportunities to reality.

A Working Group dedicated to OCI Artifacts

The TAG App Delivery within the CNCF includes projects and initiatives related to delivering cloud-native applications, including building, packaging, deploying, managing, and operating them. As OCI artifacts represent a way to address many of the concerns that the TAG is tasked with, there is a working group within TAG App Delivery that is specifically focused on OCI artifacts. There are three key functions for the working group:

  1. Gather End User Feedback
  2. Advocate for Innovative Projects
  3. Develop Common Patterns

To provide greater visibility and an overview of the Artifacts WG within TAG App Delivery, I participated in a series of lightning talks held at the TAG App Delivery project booth that highlighted many of the efforts the TAG is working on.

The presentation consisted of an overview of the challenges found with managing artifacts in a cloud native world, an overview of OCI artifacts, and some of the key areas that the working group is currently focusing on. Of course, no conference presentation would be complete without a demonstration, and this one gave attendees an overview of some of the initial efforts to address one of the key concerns in managing artifacts effectively: searching for artifacts. The demonstration illustrated a recent feature added to the Zot container registry, a CNCF sandbox project, to enable artifact searching.

If you are interested in participating in the Artifacts WG of TAG App Delivery, head over to the working group website to learn how to get involved, including joining the #wg-artifacts Slack channel and the bi-weekly community meeting. The presentation from the lightning talk can be found here.

Organizations Taking Note

Bloomberg breakout session

While most organizations are just getting to grips with the concepts of AI/ML, including OCI artifacts, others have identified the benefits that OCI artifacts can provide in this space and have started developing solutions to take advantage of the opportunities. Bloomberg shared how their internal Data Science Platform (DSP) is exploring the use of OCI artifacts to manage their ML assets. They are still early in their journey, but it is exciting to see organizations recognizing the challenges and the potential ways they will be able to take advantage of OCI artifacts to achieve their business goals. I had the opportunity to meet with the presenters and will be exploring how they can share their perspectives, including experiences and roadmap, back to the TAG App Delivery Artifacts WG.

Managing OCI Artifacts

Looking across the cloud native landscape, from capabilities that are already in place, such as Helm, to those that are just at the incubation stage, there must be methods to support the management of assets as OCI artifacts. ORAS (OCI Registry As Storage), a CNCF sandbox project, has become the de facto tool for managing OCI artifacts, and it is already in use by projects utilizing OCI artifacts along with those that are just at the exploratory phase. Helm and Argo CD already use ORAS within their projects, and it will be the basis for the expanded use of OCI artifacts by Argo CD. The Bloomberg team is also making use of ORAS as the reference library as part of their initial implementation.

I have been a maintainer of the ORAS project for some time now and it is refreshing to see so many Open Source projects starting to investigate and utilize ORAS. With each of these implementations making use of ORAS, they will be able to both provide concrete use cases as well as potential features that can be used to increase the capabilities of ORAS.

If you are interested in participating in the ORAS community, join the #oras CNCF channel or check out the ORAS website for more information.

The Helm Community Remains Strong

One of the primary reasons that I attended KubeCon was to represent the Helm project leadership as a project maintainer at the conference. Events like KubeCon + CloudNativeCon EU are one of the ways to raise awareness of the current state and initiatives of Open Source projects with the community as a whole. The Helm project offered three ways for attendees to interact with the project:

  1. Maintainers Track breakout session
    1. Chart Your Course Like a Champion (Andrew Block, Karena Angell, Joe Julian, Scott Rigby)
  2. Contributor Session
    1. Building the Helm 4 Highway (Andrew Block, Joe Julian, Scott Rigby)
  3. Helm project booth in the Project Pavilion

There continues to be a good amount of interest in the Helm project, and it was evident in the number of attendees who packed the breakout sessions and stopped by the booth in the Project Pavilion. Probably the most refreshing aspect was the number of attendees, both at the project booth and in the hallway tracks, who voiced their support for the project, including their willingness to offer their time and energy to contribute. This becomes increasingly important as the Helm project works toward the next major version: Helm 4. It is the community that will help guide the project into its next phase so that the right features and capabilities are documented and tasked out appropriately. In fact, the entire contributor breakout session was dedicated to Helm 4, giving attendees a first glimpse into some of the areas the maintainers envision as the key priorities to focus on.

If you are interested in learning more about the Helm project including how to contribute, visit the Helm website and/or join the #helm-users channel on Kubernetes Slack.

Conveying the Value of Open Source

Open Source projects are only as strong as the maintainers and contributors who take an active role in them. However, in today's economic climate, it has become increasingly difficult for many individuals to continue their participation in Open Source projects. This can be attributed to a variety of factors, but one area that has seen a substantial dropoff from the past is gainfully employed individuals having dedicated time for Open Source contribution.

While this may come as a surprise to many, it does make sense. Profits are at a premium these days and many organizations are focusing the efforts of their employees on areas that are within the bounds of the organization. The dropoff of eligible contributors has impacted many Open Source projects, causing them to either remain stagnant or become abandoned altogether. This disparity was highlighted in two ways at KubeCon.

First, Bob Killen, Program Manager at Google, spoke directly on this topic in his presentation Why is this so HARD! Conveying the Business Value of Open Source. He illustrated that there is often a disparity between the time that employees dedicate to Open Source initiatives and leadership's understanding of what that time provides for the organization. Often, the problem is a lack of data. Without the facts, specifically the direct relationship and benefits for organizations, leadership is unable to justify the time being spent, and as a result, the pool of eligible contributors is reduced. I have seen it firsthand as a maintainer of several Open Source projects: there just aren't as many contributors as there once were. However, if projects establish appropriate tooling, such as metrics that interested contributors can take back to their organizations, contributors would be able to justify the time they spend on these projects and the true value it provides.

This specific challenge, that organizations relying on Open Source software should provide opportunities for their employees to dedicate time to the associated Open Source projects, was highlighted during the Flux and the Wider Ecosystem Planning Birds of a Feather (BoF) session. The future of Flux, a GitOps management tool and CNCF graduated project, was called into question as WeaveWorks, the commercial organization supporting the Open Source project, had recently ceased operations. Since a large number of contributors and maintainers of the Flux project were WeaveWorks employees, there was no clear understanding of what the future would hold once WeaveWorks ceased operations.

Alexis Richardson, former WeaveWorks CEO, and Stefan Prodan, maintainer of the Flux project, led the Birds of a Feather session in front of a packed KubeCon audience to address many of these concerns. As someone who works in the Kubernetes GitOps space on a daily basis, it was great to see the overwhelming response from the community to what could have been a dire outcome. Thanks to corporate support from organizations such as GitLab, the Flux project will indeed continue on into the future. However, Richardson was adamant that organizations that use the project must dedicate time for their employees to contribute. Without this level of support, more and more Open Source projects will unfortunately fall by the wayside.

KubeCon is All About the People

We all live in a distributed world where the community is spread across the entire globe. Events like KubeCon + CloudNativeCon EU offer the opportunity to bring as many people from the community as possible into a single location. While technology has certainly helped close the gap in terms of making distributed teams as productive as possible, nothing beats the face-to-face collaboration and “hallway type” conversations that a conference like KubeCon can enable. I cannot begin to count the number of individuals I met throughout the course of the week whom I had previously only interacted with in forums like Slack or in project level discussions.

In addition, to be honest, KubeCon has become something of a Red Hat reunion. Red Hatters, current and former, are everywhere, in almost every community. With that being said, I spent a good amount of time catching up with Red Hatters to hear about the projects they are working on and their thoughts on the past, present, and future.

The Red Hat booth became a location where many of these conversations occurred. Once again, the Red Hat booth was a popular destination for all attendees where they had the opportunity to learn about Red Hat solutions and to interact with Red Hat experts. Each day, scores of attendees lined up for the chance to take home a coveted Fedora of their own. Throughout the conference, and even on the streets of Paris, the iconic red Fedoras were everywhere, illustrating the connection of the Red Hat brand with the market.

The Best KubeCon Yet

KubeCon + CloudNativeCon EU 2024

Looking back at the week that was in Paris, I can confidently say that it was the best KubeCon + CloudNativeCon EU that I have personally attended. Granted, we have come a long way since the first KubeCon + CloudNativeCon that I attended, back in North America in 2021, the first post-pandemic edition.

Everything, from the location (who doesn't love Paris in spring) to the venue (well appointed, right in the city of Paris, and well connected by the city's robust transit system), made for an overwhelmingly enjoyable event. The vibe was infectious: 13,000+ attendees embracing Cloud Native and Open Source and having a blast at the same time. Of course, not everything was perfect. Several of the popular sessions were overcrowded, with would-be attendees overflowing out into the hallways. However, for the majority of the sessions, the room sizes were suited to the expected and actual attendance.

Looking forward, the CNCF announced the locations for the North American and European KubeCon + CloudNativeCon events for 2025 and 2026:

  • Europe 2025 – London – April 1-4, 2025
  • North America 2025 – Atlanta – November 10-13 
  • Europe 2026 – Amsterdam – March 23-26
  • North America 2026 – Los Angeles  – October 26-29 

Salt Lake City, the location for KubeCon + CloudNativeCon NA in November 2024, has its work cut out to match the success of the KubeCon + CloudNativeCon EU Paris event. Fortunately, there is a continuous set of Kubernetes Community Days (KCDs) running throughout the world to satisfy the demand in the meantime.