Mismatch between README values.yaml and actual values.yaml #21

Open
lindhe opened this issue Aug 28, 2023 · 2 comments

Comments

lindhe commented Aug 28, 2023

The example for values.yaml in the README doesn't match the contents of the actual values.yaml. If the example from the README is up-to-date, I think those should be merged into the real values.yaml file (as comments, if they are not reasonable default values).

The general cluster configuration options are available through [values.yaml](./charts/values.yaml).
```yaml
# cluster specific values
cluster:
  # specify cluster name
  name: cluster-example
  # specify cluster labels
  labels: {}
  # specify cluster annotations
  annotations: {}
  # specify cloud credential secret name, do not need to be provided if using custom driver
  cloudCredentialSecretName: example
  # specify cloud provider, options are amazonec2, digitalocean, azure, vsphere or custom
  cloudprovider: ""
  # enable network policy
  enableNetworkPolicy: false
  kubernetesVersion: "v1.21.0-alpha2+rke2r1"
  # specify rancher helm chart values deployed into downstream cluster
  rancherValues: {}
  # specify extra env variables in cluster-agent deployment
  # agentEnvs:
  # - name: HTTP_PROXY
  #   value: foo.bar
  # general RKE options
  rke:
    # specify rancher helm chart values deployed into downstream cluster
    chartValues: {}
    # controlplane/etcd configuration settings
    controlPlaneConfig:
      # Path to the file that defines the audit policy configuration
      # audit-policy-file: ""
      # IPv4/IPv6 network CIDRs to use for pod IPs (default: 10.42.0.0/16)
      # cluster-cidr: ""
      # IPv4 Cluster IP for coredns service. Should be in your service-cidr range (default: 10.43.0.10)
      # cluster-dns: ""
      # Cluster Domain (default: "cluster.local")
      # cluster-domain: ""
      # CNI Plugin to deploy, one of none, canal, cilium (default: "canal")
      cni: calico
      # Do not deploy packaged components and delete any deployed components (valid items: rke2-coredns, rke2-ingress-nginx, rke2-kube-proxy, rke2-metrics-server)
      # disable: false
      # Disable automatic etcd snapshots
      # etcd-disable-snapshots: false
      # Expose etcd metrics to client interface. (Default false)
      # etcd-expose-metrics: false
      # Directory to save db snapshots. (Default location: ${data-dir}/db/snapshots)
      # etcd-snapshot-dir: ""
      # Set the base name of etcd snapshots. Default: etcd-snapshot-<unix-timestamp> (default: "etcd-snapshot")
      # etcd-snapshot-name: ""
      # Number of snapshots to retain (default: 5)
      # etcd-snapshot-retention: 5
      # Snapshot interval time in cron spec. eg. every 5 hours '* */5 * * *' (default: "0 */12 * * *")
      # etcd-snapshot-schedule-cron: "0 */12 * * *"
      # Customized flag for kube-apiserver process
      # kube-apiserver-arg: ""
      # Customized flag for kube-scheduler process
      # kube-scheduler-arg: ""
      # Customized flag for kube-controller-manager process
      # kube-controller-manager-arg: ""
      # Validate system configuration against the selected benchmark (valid items: cis-1.5, cis-1.6)
      # profile: "cis-1.6"
      # Enable Secret encryption at rest
      # secrets-encryption: false
      # IPv4/IPv6 network CIDRs to use for service IPs (default: 10.43.0.0/16)
      # service-cidr: "10.43.0.0/16"
      # Port range to reserve for services with NodePort visibility (default: "30000-32767")
      # service-node-port-range: "30000-32767"
      # Add additional hostnames or IPv4/IPv6 addresses as Subject Alternative Names on the server TLS cert
      # tls-san: []
    # worker configuration settings
    workerConfig:
    - config:
        # Node name
        # node-name: ""
        # Disable embedded containerd and use alternative CRI implementation
        # container-runtime-endpoint: ""
        # Override default containerd snapshotter (default: "overlayfs")
        # snapshotter: ""
        # IP address to advertise for node
        # node-ip: "1.1.1.1"
        # Kubelet resolv.conf file
        # resolv-conf: ""
        # Customized flag for kubelet process
        # kubelet-arg: ""
        # Customized flag for kube-proxy process
        # kube-proxy-arg: ""
        # Kernel tuning behavior. If set, error if kernel tunables are different than kubelet defaults. (default: false)
        # protect-kernel-defaults: false
        # Enable SELinux in containerd (default: false)
        # selinux: true
        # Cloud provider name
        # cloud-provider-name: ""
        # Cloud provider configuration file path
        # cloud-provider-config: ""
      machineLabelSelector:
        matchLabels:
          foo: bar
  # enable local auth endpoint
  localClusterAuthEndpoint:
    enabled: false
    # specify fqdn of local access endpoint
    # fqdn: foo.bar.example
    # specify cacert of local access endpoint
    # caCerts: ""
  # Specify upgrade options
  upgradeStrategy:
    controlPlaneDrainOptions:
      enabled: false
      # deleteEmptyDirData: false
      # disableEviction: false
      # gracePeriod: 0
      # ignoreErrors: false
      # skipWaitForDeleteTimeoutSeconds: 0
      # timeout: 0
    workerDrainOptions:
      enabled: false
      # deleteEmptyDirData: false
      # disableEviction: false
      # gracePeriod: 0
      # ignoreErrors: false
      # skipWaitForDeleteTimeoutSeconds: 0
      # timeout: 0
    workerConcurrency: "1"
```
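
One quick way to spot this kind of drift is to diff the fenced example in the README against the real chart file. The sketch below is a minimal illustration, assuming the README keeps the example in a single ```yaml fence and that the file lives at charts/values.yaml as in the issue; it is not part of the repository.

```python
#!/usr/bin/env python3
"""Sketch: diff the YAML example embedded in README.md against charts/values.yaml."""

import difflib
import pathlib
import re

readme = pathlib.Path("README.md").read_text()          # assumed location
actual = pathlib.Path("charts/values.yaml").read_text()  # assumed location

# Grab the first fenced ```yaml block from the README (assumes a single example block).
match = re.search(r"```yaml\n(.*?)```", readme, flags=re.DOTALL)
if match is None:
    raise SystemExit("no ```yaml block found in README.md")

diff = difflib.unified_diff(
    match.group(1).splitlines(keepends=True),
    actual.splitlines(keepends=True),
    fromfile="README.md (example)",
    tofile="charts/values.yaml",
)
print("".join(diff) or "README example and charts/values.yaml match")
```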

lindhe commented Aug 28, 2023

It seems like rancherValues is unused, so that should be removed from the example.

This is resolved in #22.
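
Whether a key such as rancherValues is actually consumed can be confirmed by scanning the chart templates for references to it. The snippet below is only a sketch; the charts/templates path and the default key name are assumptions, not something taken from the repository.

```python
#!/usr/bin/env python3
"""Sketch: report where (if anywhere) a values key is referenced in chart templates."""

import pathlib
import sys

key = sys.argv[1] if len(sys.argv) > 1 else "rancherValues"  # assumed key to look for
templates = pathlib.Path("charts/templates")                  # assumed templates directory

hits = [
    (path, lineno)
    for path in sorted(templates.rglob("*"))
    if path.is_file()
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1)
    if key in line
]

if hits:
    for path, lineno in hits:
        print(f"{path}:{lineno}")
else:
    print(f"{key} is not referenced by any template")
```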

lindhe commented Aug 28, 2023

After #22, the example in README.md is almost identical to the contents of values.yaml. I think the examples should be removed from README.md, so we don't risk having out-of-date documentation again in the future.

lindhe added a commit to lindhe/cluster-template-examples that referenced this issue Aug 28, 2023
After this change, charts/values.yaml is the authoritative example.

* Move `agentEnvs` example to values.yaml
* Update relative link to charts/values.yaml

Fixes rancher#21