| Authors | Creation Date | Status | Extra |
|---|---|---|---|
| @dashanji, @camilamacedo86, @LCaparelli | Sep 2023 | Implemented | - |
This proposal aims to introduce an optional mechanism that allows users to generate a Helm Chart from their Kubebuilder-scaffolded project. This will enable them to effectively package and distribute their solutions.
To achieve this goal, we are proposing a new native Kubebuilder plugin (i.e., `helm/v1-alpha`) which will provide the necessary scaffolds. The plugin will function similarly to the existing Grafana plugin, generating or regenerating the Helm Chart files using the `init` and `edit` sub-commands (i.e., `kubebuilder init|edit --plugins helm/v1-alpha`).
An alternative solution could be to implement an alpha command, similar to the helper provided to upgrade projects, which would generate the Helm Chart under the `dist` directory, much as helmify does.
To enable the helm-chart generation when a project is initialized:

```shell
kubebuilder init --plugins=go/v4,helm/v1-alpha
```

To enable the helm-chart generation after the project is scaffolded:

```shell
kubebuilder edit --plugins=helm/v1-alpha
```

Note that the Helm Chart should be scaffolded under the `dist/` directory in both scenarios:

```
example-project/
└── dist/
    └── chart/
```
To sync the Helm Chart with the latest changes and add the generated manifests:

```shell
kubebuilder edit --plugins=helm/v1-alpha
```

The above command will be responsible for ensuring that the Helm Chart is properly updated with the latest changes in the project, including the files generated by controller-gen when users run `make manifests`.
According to Helm Best Practices for Custom Resource Definitions, there are two main methods for handling CRDs:
- **Method 1: Let Helm Do It For You**: Place CRDs in the `crds/` directory. Helm installs these CRDs during the initial install but does not manage upgrades or deletions.
- **Method 2: Separate Charts**: Place the CRD definition in one chart and the resources using the CRD in another chart. This method requires separate installations for each chart.
### Raised Considerations and Concerns
- **Use the Helm `crds/` directory**: Upgraded chart versions will silently ignore CRDs even if they differ from the installed versions. This could lead to surprising and unexpected behavior. Therefore, Kubebuilder should not encourage or promote this approach.
- **Templates folder**: Moving CRDs to the `templates` folder facilitates upgrades but uninstalls the CRDs when the operator is uninstalled. However, it allows users to manage the CRDs more easily and to install them on upgrades. It is a common approach adopted by maintainers but is not considered a good practice by Helm itself.
- **Separate Helm Chart for CRDs**: This approach allows control over both CRD and operator versions without deleting the CRDs when the operator chart is deleted, and it follows the Helm Chart best practices. A problem with this approach is ensuring that the CRDs are applied before the CRs, since both will be under the `templates` directory.
- **When webhooks are used**: If a CRD specifies, for example, a conversion webhook, the "API chart" needs to contain the CRDs and the webhook service/workload. It would also make sense to include validating/mutating webhooks, requiring the scaffolding of separate main modules and image builds for webhooks and controllers, which does not appear to be compatible with the Kubebuilder Golang scaffold.
### Proposed Solution
Follow the same approach adopted by Cert-Manager: add the CRDs under the `templates` directory and have an option in the `values.yaml` which defines whether the CRDs should be applied:

```shell
helm install|upgrade \
  myrelease \
  --namespace my-namespace \
  --set crds.enabled=true
```

Also, add another option to the `values.yaml` to prevent the CRDs from being deleted when the chart is uninstalled:

```yaml
{{- if .Values.crds.keep }}
annotations:
  helm.sh/resource-policy: keep
{{- end }}
```
Additionally, we might want to scaffold separate charts for the APIs and support both. An example of this approach, provided as feedback, is karpenter-provider-aws.
We should make the usage of both supported approaches clear and clarify their limitations. The proposed solution would result in the following layout:
```
example-project/
└── dist/
    └── chart/
        ├── example-project-crd/
        │   ├── Chart.yaml
        │   ├── templates/
        │   │   ├── _helpers.tpl
        │   │   └── crds/
        │   │       └── <CRDs YAML files generated under config/crds/>
        │   └── values.yaml
        └── example-project/
            ├── Chart.yaml
            ├── templates/
            │   ├── _helpers.tpl
            │   ├── crds/
            │   │   └── <CRDs YAML files generated under config/crds/>
            │   ├── ...
```
Helm charts allow maintainers to define dependencies via the `Chart.yaml` file.
However, in the initial version of this plugin at least, we do not need to consider the management of dependencies.
Adding dependencies such as Cert-Manager and Prometheus directly in the `Chart.yaml`
could introduce issues since these components are intended to be installed only once per cluster.
Attempting to manage multiple installations could lead to conflicts and cause unintended behaviors,
especially in shared cluster environments.
To avoid these issues, the plugin for now will not scaffold this file and will not try to manage it. Instead, users will be responsible for managing these dependencies outside of the generated Helm chart, ensuring they are correctly installed and only installed once in the cluster.
Currently, projects scaffolded with Kubebuilder can be distributed via YAML. Users can run
`make build-installer IMG=<some-registry>/<project-name>:tag`, which will generate `dist/install.yaml`.
Therefore, its consumers can install the solution by applying this YAML file, such as:
`kubectl apply -f https://raw.githubusercontent.com/<org>/<project-name>/<tag or branch>/dist/install.yaml`.
However, many adopted solutions, such as FluxCD, require the Helm Chart format. Therefore, maintainers are looking to also provide their solutions via Helm Charts. Users currently face the challenge of lacking an officially supported distribution mechanism for Helm Charts. They seek to:
- Harness the power of Helm Chart as a package manager for the project, enabling seamless adaptation to diverse deployment environments.
- Take advantage of Helm's dependency management capabilities to simplify the installation process of project dependencies, such as cert-manager.
- Seamlessly integrate with Helm's ecosystem, including FluxCD, to efficiently manage the project.
Consequently, this proposal aims to introduce a method that allows Kubebuilder users to easily distribute their projects through Helm Charts, a strategy that many well-known projects have adopted.

NOTE: For further context, see the discussion topic.
- Allow Kubebuilder users to distribute their projects using Helm easily.
- Make the best effort to preserve any customizations made by users to the Helm Charts, which means we will skip syncing the `values.yaml`.
- Stick with the Helm layout definitions and externalize only the relevant options into values to distribute the default scaffold done by Kubebuilder. We should follow the Helm Chart best practices (https://helm.sh/docs/chart_best_practices).
- Converting any Kustomize configuration to Helm Charts as helmify does.
- Support the deprecated plugins. This option should be supported from `go/v4` and `kustomize/v2` onwards.
- Introduce support for Helm in addition to Kustomize, or replace Kustomize with Helm entirely, similar to the approach taken by Operator-SDK, thereby allowing users to utilize Helm Charts to build their project.
- Adopt practices that deviate from the Helm Chart layout, definitions, or conventions to work around its limitations.
- As a developer, I want to be able to generate a helm chart from a kustomize directory so that I can distribute the helm chart to my users. Also, I want the generation to be as simple as possible without the need to write any additional duplicate files.
- As a user, I want the Helm chart to cover all potential configurations when I deploy it on a Kubernetes cluster.
- As a platform engineer, I want to be able to manage different versions and configurations of a project across multiple clusters and environments based on the same distribution artifact (Helm Chart), with versioning and dependency locking for supply chain security.
- **Location and Versioning**: The new plugin should follow Kubebuilder standards and be implemented under `pkg/plugins/optional`. It should be introduced as an alpha version (`v1alpha`), similar to the Grafana plugin.
- **The data should be tracked in the PROJECT file**: Usage of the plugin should be tracked in the `PROJECT` file, with the input provided via flags and options if required. Example entry in the `PROJECT` file:
```yaml
...
plugins:
  helm.go.kubebuilder.io/v1-alpha:
    options: ## (If ANY)
      <flag/key>: <value>
```
Ensure that user-provided input is properly tracked, similar to how it is done in other plugins (see the code in `plugin.go` and the code used to track the data in the deploy-image plugin for reference).

NOTE: We might not need options/flags in the first implementation. However, we should still track the plugin as we do for the Grafana plugin.
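For illustration, below is a minimal sketch of how the plugin could persist its entry (and any user-provided options) in the PROJECT file. It assumes the `InjectConfig`, `DecodePluginConfig`, and `EncodePluginConfig` hooks of the Kubebuilder v4 plugin/config libraries, as used by the deploy-image plugin; the `pluginConfig` struct, the `trackPluginUsage` helper, and the options map are hypothetical:

```go
// Sketch: persisting the plugin entry (and any user options) in the PROJECT
// file, following the same pattern used by the deploy-image plugin.
package v1alpha

import "sigs.k8s.io/kubebuilder/v4/pkg/config"

// pluginConfig mirrors the options block stored under the plugin key in the
// PROJECT file; the field below is illustrative only.
type pluginConfig struct {
    Options map[string]string `json:"options,omitempty"`
}

// pluginKey matches the entry shown in the PROJECT file example above.
const pluginKey = "helm.go.kubebuilder.io/v1-alpha"

type editSubcommand struct {
    config config.Config
}

// InjectConfig gives the subcommand access to the PROJECT configuration.
func (p *editSubcommand) InjectConfig(c config.Config) error {
    p.config = c
    return nil
}

// trackPluginUsage records the plugin usage (and user inputs, if any) so that
// future re-scaffolds can re-apply the same options.
func (p *editSubcommand) trackPluginUsage(opts map[string]string) error {
    cfg := pluginConfig{}
    // Best effort: the entry may not exist yet on the first run.
    _ = p.config.DecodePluginConfig(pluginKey, &cfg)
    cfg.Options = opts
    return p.config.EncodePluginConfig(pluginKey, cfg)
}
```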
The following is the intended structure for the source code of this plugin:

```
.
├── helm-chart
│   └── v1alpha1
│       ├── init.go
│       ├── edit.go
│       ├── plugin.go
│       └── scaffolds
│           ├── init.go
│           ├── edit.go
│           └── internal
│               └── templates
```
For each subCommand, we will need to check which resources are scaffolded by the kustomize plugin and ensure that the corresponding subCommand of the HelmChart plugin implements the respective scaffolds as well.
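As a rough sketch of how `plugin.go` could wire the `init` and `edit` subcommands (modelled on the Grafana plugin), see below. The import paths and interface shapes follow the Kubebuilder v4 plugin library and are assumptions; the subcommand bodies are illustrative stubs, and the plugin identity methods (`Name`, `Version`, `SupportedProjectVersions`) are omitted for brevity:

```go
// Sketch of plugin.go wiring the init and edit subcommands, modelled on the
// Grafana plugin. The real implementation would delegate to the scaffolders
// under scaffolds/.
package v1alpha

import (
    "sigs.k8s.io/kubebuilder/v4/pkg/machinery"
    "sigs.k8s.io/kubebuilder/v4/pkg/plugin"
)

// initSubcommand scaffolds the Helm chart when the project is initialized.
type initSubcommand struct{}

func (p *initSubcommand) Scaffold(fs machinery.Filesystem) error {
    // TODO: scaffold dist/chart (Chart.yaml, values.yaml, templates/...).
    return nil
}

// editSubcommand (re)generates the Helm chart to sync it with the project.
type editSubcommand struct{}

func (p *editSubcommand) Scaffold(fs machinery.Filesystem) error {
    // TODO: re-scaffold the chart, skipping values.yaml unless --force is set.
    return nil
}

// Plugin exposes the Helm chart scaffolding through the init and edit
// subcommands (kubebuilder init|edit --plugins helm/v1-alpha).
type Plugin struct {
    initSubcommand initSubcommand
    editSubcommand editSubcommand
}

func (p Plugin) GetInitSubcommand() plugin.InitSubcommand { return &p.initSubcommand }
func (p Plugin) GetEditSubcommand() plugin.EditSubcommand { return &p.editSubcommand }
```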
Users will need to call the `edit` subcommand, passing the plugin, to ensure that the Helm chart is properly synced.
Therefore, the `PostScaffold` of this command could perform steps such as:
- Run `make manifests`: Generate the latest CRDs and other manifests.
- Copy the files to the Helm chart templates:
  - Copy CRDs: `cp config/crd/bases/*.yaml chart/example-project-crd/templates/crds/`
  - Copy RBAC manifests: `cp config/rbac/*.yaml chart/example-project/templates/rbac/`
  - Copy webhook configurations: `cp config/webhook/*.yaml chart/example-project/templates/webhook/`
  - Copy the manager manifest: `cp config/default/manager.yaml chart/example-project/templates/manager/manager.yaml`
- Replace placeholders with Helm values: Ensure that customized fields, such as the namespace, are properly replaced. Example: replace `name: system` with `{{ .Release.Name }}`.

This ensures the Helm chart is always up to date with the latest manifests generated by Kubebuilder, maintaining consistency with the configured namespace and other customizable fields.
We will need to use util helpers such as `ReplaceInFile` or `EnsureExistAndReplace` to achieve this goal; a sketch of these steps follows.
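The sketch below assumes the commands run from the project root and uses only the standard library for the copy/replace logic; the `syncHelmChart`, `copyManifests`, and `replaceInFile` helpers and the exact paths are illustrative rather than part of the actual implementation (in practice, the Kubebuilder util helpers mentioned above would be preferred):

```go
// Sketch of the sync logic that the edit subcommand's PostScaffold hook could
// run: regenerate manifests, copy them into the chart, and swap hard-coded
// values for Helm expressions.
package v1alpha

import (
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

// copyManifests copies every YAML file from a kustomize config directory
// into the corresponding chart templates directory.
func copyManifests(srcDir, dstDir string) error {
    if err := os.MkdirAll(dstDir, 0o755); err != nil {
        return err
    }
    files, err := filepath.Glob(filepath.Join(srcDir, "*.yaml"))
    if err != nil {
        return err
    }
    for _, f := range files {
        data, err := os.ReadFile(f)
        if err != nil {
            return err
        }
        if err := os.WriteFile(filepath.Join(dstDir, filepath.Base(f)), data, 0o644); err != nil {
            return err
        }
    }
    return nil
}

// replaceInFile swaps a hard-coded value for a Helm expression, e.g.
// "name: system" -> "name: {{ .Release.Name }}".
func replaceInFile(path, oldValue, newValue string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    return os.WriteFile(path, []byte(strings.ReplaceAll(string(data), oldValue, newValue)), 0o644)
}

// syncHelmChart would be called from the edit subcommand's PostScaffold hook.
func syncHelmChart() error {
    // 1. Regenerate CRDs, RBAC, and webhook manifests.
    cmd := exec.Command("make", "manifests")
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    if err := cmd.Run(); err != nil {
        return fmt.Errorf("make manifests failed: %w", err)
    }
    // 2. Copy the generated manifests into the chart templates directories.
    if err := copyManifests("config/crd/bases", "chart/example-project-crd/templates/crds"); err != nil {
        return err
    }
    if err := copyManifests("config/rbac", "chart/example-project/templates/rbac"); err != nil {
        return err
    }
    // 3. Replace customizable fields with Helm values/expressions.
    return replaceInFile("chart/example-project/templates/manager/manager.yaml",
        "name: system", "name: {{ .Release.Name }}")
}
```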
- Allow `values.yaml` to be fully re-generated with the flag `--force`:

By default, the `values.yaml` file should not be overwritten. However, users should have the option to overwrite it using a flag (`--force=true`).
This can be implemented in the specific template as done for other plugins:

```go
// When --force is used, overwrite the existing file; otherwise, raise an
// error if it already exists so that user customizations are preserved.
if f.Force {
    f.IfExistsAction = machinery.OverwriteFile
} else {
    f.IfExistsAction = machinery.Error
}
```
NOTE: We will evaluate the cases when we implement `webhook.go` and `api.go` for the HelmChart plugin. However, we might use the force flag to replicate the same behavior implemented in the subCommands of the kustomize plugin. For instance, if the flag is used when creating an API, it forces the overwriting of the generated samples. Similarly, if the `api` subCommand of the HelmChart plugin is called with `--force`, we should replace all samples with the latest versions instead of only adding the new one.
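For completeness, here is one possible way to expose the flag on the subcommand, assuming the `BindFlags` hook available to Kubebuilder subcommands (as used by the deploy-image plugin); the struct and flag wiring are illustrative:

```go
// Sketch: exposing --force on the HelmChart plugin subcommands so users can
// explicitly opt in to overwriting files such as values.yaml.
package v1alpha

import "github.com/spf13/pflag"

type editSubcommand struct {
    // force indicates whether existing files (e.g., values.yaml) may be overwritten.
    force bool
}

// BindFlags registers the subcommand flags; Kubebuilder invokes this hook on
// subcommands that declare it.
func (p *editSubcommand) BindFlags(fs *pflag.FlagSet) {
    fs.BoolVar(&p.force, "force", false,
        "overwrite existing files such as values.yaml with the re-generated versions")
}
```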
- Helm Chart Templates should have conditions:

Ensure templates install resources based on conditions defined in the `values.yaml`. Example for CRDs:

```yaml
# To install CRDs
{{- if .Values.crd.enable }}
...
{{- end }}
```
- Customizable Values: Set customizable values in the `values.yaml`, such as defining ServiceAccount names and whether they should be created or not. Furthermore, we should include comments to help end-users understand the source of the configurations. Example:

```yaml
{{- if .Values.rbac.enable }}
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/name: project-v4
    app.kubernetes.io/managed-by: kustomize
  name: {{ .Values.rbac.serviceAccountName }}
  namespace: {{ .Release.Namespace }}
{{- end }}
```
- Example of `values.yaml`: The following example illustrates the expected result of this plugin:

```yaml
# Install CRDs under the template
crd:
  enable: false
  keep: true
# Webhook configuration sourced from the `config/webhook`
webhook:
  enabled: true
  conversion:
    enabled: true
## RBAC configuration under the `config/rbac` directory
rbac:
  create: true
  serviceAccountName: "controller-manager"
# Cert-manager configuration
certmanager:
  enabled: false
  issuerName: "letsencrypt-prod"
  commonName: "example.com"
  dnsName: "example.com"
# Network policy configuration sourced from the `config/network_policy`
networkPolicy:
  enabled: false
# Prometheus configuration
prometheus:
  enabled: false
# Manager configuration sourced from the `config/manager`
manager:
  replicas: 1
  image:
    repository: "controller"
    tag: "latest"
  resources:
    limits:
      cpu: 100m
      memory: 128Mi
    requests:
      cpu: 100m
      memory: 64Mi
# Metrics configuration sourced from the `config/metrics`
metrics:
  enabled: true
# Leader election configuration sourced from the `config/leader_election`
leaderElection:
  enabled: true
  role: "leader-election-role"
  rolebinding: "leader-election-rolebinding"
# Controller Manager configuration sourced from the `config/manager`
controllerManager:
  manager:
    args:
      - --metrics-bind-address=:8443
      - --leader-elect
      - --health-probe-bind-address=:8081
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
    image:
      repository: controller
      tag: latest
    resources:
      limits:
        cpu: 500m
        memory: 128Mi
      requests:
        cpu: 10m
        memory: 64Mi
  replicas: 1
  serviceAccount:
    annotations: {}
# Kubernetes cluster domain configuration
kubernetesClusterDomain: cluster.local
# Metrics service configuration sourced from the `config/metrics`
metricsService:
  ports:
    - name: https
      port: 8443
      protocol: TCP
      targetPort: 8443
  type: ClusterIP
# Webhook service configuration sourced from the `config/webhook`
webhookService:
  ports:
    - port: 443
      protocol: TCP
      targetPort: 9443
  type: ClusterIP
```
The HelmChart plugin should not scaffold optional features as enabled when those are scaffolded as disabled by the default implementation of `kustomize/v2` (and consequently the `go/v4` plugin used by default). For example, the dependency on Cert-Manager is disabled by default in `config/default/kustomization.yaml`:

```yaml
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'. 'WEBHOOK' components are required.
#- ../certmanager
```
Therefore, by default the `values.yaml` should be scaffolded with:

```yaml
# Cert-manager configuration
certmanager:
  enabled: false
```
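One possible way to honor these defaults is for the scaffolder to inspect the kustomize configuration when generating `values.yaml`. Below is a minimal sketch; the `isCertManagerEnabled` helper is hypothetical, and the same kind of check could be applied to other optional pieces such as Prometheus and webhooks:

```go
// Sketch: derive the default for `certmanager.enabled` in values.yaml from
// config/default/kustomization.yaml. The helper name and the exact check are
// illustrative.
package v1alpha

import (
    "bufio"
    "os"
    "strings"
)

// isCertManagerEnabled returns true only when the "../certmanager" overlay is
// present and not commented out in the given kustomization file.
func isCertManagerEnabled(kustomizationPath string) (bool, error) {
    f, err := os.Open(kustomizationPath)
    if err != nil {
        return false, err
    }
    defer f.Close()

    scanner := bufio.NewScanner(f)
    for scanner.Scan() {
        line := strings.TrimSpace(scanner.Text())
        // "#- ../certmanager" means the option is still disabled by default.
        if strings.HasPrefix(line, "- ../certmanager") {
            return true, nil
        }
    }
    return false, scanner.Err()
}
```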
The following is an example of the expected result of this plugin:

```
example-project/
└── dist/
    └── chart/
        ├── example-project-crd/
        │   ├── Chart.yaml
        │   ├── templates/
        │   │   ├── _helpers.tpl
        │   │   └── crds/
        │   │       └── <CRDs YAML files generated under config/crds/>
        │   └── values.yaml
        └── example-project/
            ├── Chart.yaml
            ├── templates/
            │   ├── _helpers.tpl
            │   ├── crds/
            │   │   └── <CRDs YAML files generated under config/crds/>
            │   ├── certmanager/
            │   │   └── certificate.yaml
            │   ├── manager/
            │   │   └── manager.yaml
            │   ├── network-policy/
            │   │   ├── allow-metrics-traffic.yaml
            │   │   └── allow-webhook-traffic.yaml // Should be added by the plugin subCommand webhook.go
            │   ├── prometheus/
            │   │   └── monitor.yaml
            │   ├── rbac/
            │   │   ├── kind_editor_role.yaml
            │   │   ├── kind_viewer_role.yaml
            │   │   ├── leader_election_role.yaml
            │   │   ├── leader_election_role_binding.yaml
            │   │   ├── metrics_auth_role.yaml
            │   │   ├── metrics_auth_role_binding.yaml
            │   │   ├── metrics_reader_role.yaml
            │   │   ├── role.yaml
            │   │   ├── role_binding.yaml
            │   │   └── service_account.yaml
            │   ├── samples/
            │   │   └── kind_version_admiral.yaml
            │   └── webhook/
            │       ├── manifests.yaml
            │       └── service.yaml
            └── values.yaml
```
A README.md is scaffolded for the projects (see its implementation here). Therefore, if the project is scaffolded with the HelmChart plugin, we should update the Distribution section of the README to add information and steps on how to keep the Helm Chart synced.
To ensure that the new plugin works well, we will need to:

- Implement e2e tests for the plugin (for reference, see the e2e tests for the DeployImage plugin); a sketch of such a check is shown after this list.
- Ensure that the plugin is scaffolded with all samples under the testdata directory (we will need to call the plugin in test/testdata/generate.sh).
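For illustration, a minimal sketch of an e2e-style check that the generated chart renders and lints cleanly, assuming the chart lands under `dist/chart/` and that the `helm` binary is available on the test machine; the path and test name are hypothetical, and the standard `testing` package is used for brevity even though the existing plugin e2e tests are Ginkgo-based:

```go
// Sketch: an e2e-style check that the scaffolded chart is valid. The chart
// path and project name are illustrative.
package e2e_test

import (
    "os/exec"
    "testing"
)

func TestScaffoldedHelmChartIsValid(t *testing.T) {
    chartDir := "dist/chart/example-project" // hypothetical sample project chart

    // `helm lint` validates the chart structure and templates.
    if out, err := exec.Command("helm", "lint", chartDir).CombinedOutput(); err != nil {
        t.Fatalf("helm lint failed: %v\n%s", err, out)
    }

    // `helm template` ensures the manifests render with the default values.yaml.
    if out, err := exec.Command("helm", "template", chartDir).CombinedOutput(); err != nil {
        t.Fatalf("helm template failed: %v\n%s", err, out)
    }
}
```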
The new plugin should be properly documented, as the others are. For reference, see:
### Difficulty in Maintaining the Solution
Maintaining the solution may prove challenging in the long term, particularly if it does not gain community adoption and, consequently, collaboration. To mitigate this risk, the proposal aims to introduce an optional alpha plugin or to implement it through an alpha command. This approach provides us with greater flexibility to make adjustments or, if necessary, to deprecate the feature without definitively compromising support.
To demonstrate that this is possible, we can refer to the open-source tool helmify.
### Inability to Handle Complex Kubebuilder Scenarios
The proposed plugin may struggle to appropriately handle complex scenarios commonly encountered in Kubebuilder projects, such as intricate webhook configurations. Kubebuilder’s scaffolded projects can have sophisticated webhook setups, and translating these accurately into Helm Charts may prove challenging. This could result in Helm Charts that are not fully reflective of the original project’s functionality or configurations.
### Incomplete Generation of Valid and Deployable Helm Charts
The proposed solution may not be capable of generating a fully valid and deployable Helm Chart for all use cases supported by Kubebuilder. Given the diversity and complexity of potential configurations within Kubebuilder projects, there is a risk that the generated Helm Charts may require significant manual intervention to be functional. This drawback undermines the goal of simplifying distribution via Helm Charts and could lead to frustration for users who expect a seamless and automated process.
### Via a new command (Alternative Option)
By running the following command, the plugin will generate a Helm chart from the specified kustomize directory and output it to the directory specified by the `--output` flag.

```shell
kubebuilder alpha generate-helm-chart --from=<path> --output=<path>
```
The main drawback of this option is that it does not adhere to the Kubebuilder ecosystem.
Additionally, we would not take advantage of Kubebuilder library features, such as avoiding
overwriting the `values.yaml`. It might also be harder to support and maintain since we would
not have the templates as we usually do.

Lastly, another con is that it would not allow us to scaffold projects with the plugin
enabled and, in the future, provide further configurations and customizations for this plugin.
These configurations would be tracked in the `PROJECT` file, allowing integration with other
projects, extensions, and the re-scaffolding of the Helm Chart while preserving the inputs
provided by the user via plugin flags, as is done, for example, for the Deploy Image plugin.