Merge pull request #44710 from kubernetes/dev-1.30
Official 1.30 Release Docs
drewhagen authored Apr 17, 2024
2 parents 13dd6a8 + 344254b commit 0471ca1
Showing 108 changed files with 2,342 additions and 450 deletions.
9 changes: 8 additions & 1 deletion content/en/docs/concepts/architecture/garbage-collection.md
until disk usage reaches the `LowThresholdPercent` value.

{{< feature-state feature_gate_name="ImageMaximumGCAge" >}}

As a beta feature, you can specify the maximum time a local image can be unused for,
regardless of disk usage. This is a kubelet setting that you configure for each node.

To configure the setting, enable the `ImageMaximumGCAge`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the kubelet,
and also set a value for the `ImageMaximumGCAge` field in the kubelet configuration file.
The value is specified as a Kubernetes _duration_; for example, you can set the configuration
field to `3d12h`, which means 3 days and 12 hours.
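
As an illustrative sketch (assuming the KubeletConfiguration v1beta1 field names
`featureGates` and `imageMaximumGCAge`; check the API reference for your release),
the setting might look like this in a kubelet configuration file:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  ImageMaximumGCAge: true
# Garbage collect any local image unused for 3 days and 12 hours,
# regardless of disk usage.
imageMaximumGCAge: 3d12h
```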

{{< note >}}
This feature does not track image usage across kubelet restarts. If the kubelet
is restarted, the tracked image age is reset, causing the kubelet to wait the full
`ImageMaximumGCAge` duration before qualifying images for garbage collection
based on image age.
{{< /note >}}

### Container garbage collection {#container-image-garbage-collection}

The kubelet garbage collects unused containers based on the following variables,
41 changes: 35 additions & 6 deletions content/en/docs/concepts/architecture/nodes.md
During a non-graceful shutdown, Pods are terminated in two phases:
recovered since the user was the one who originally added the taint.
{{< /note >}}

### Forced storage detach on timeout {#storage-force-detach-on-timeout}

In any situation where a Pod deletion has not succeeded for 6 minutes, Kubernetes will
force detach volumes being unmounted if the node is unhealthy at that instant. Any
workload still running on the node that uses a force-detached volume will cause a
violation of the
[CSI specification](https://github.com/container-storage-interface/spec/blob/master/spec.md#controllerunpublishvolume),
which states that `ControllerUnpublishVolume` "**must** be called after all
`NodeUnstageVolume` and `NodeUnpublishVolume` on the volume are called and succeed".
In such circumstances, volumes on the node in question might encounter data corruption.

The forced storage detach behaviour is optional; users might opt to use the "Non-graceful
node shutdown" feature instead.

Force storage detach on timeout can be disabled by setting the `disable-force-detach-on-timeout`
config field in `kube-controller-manager`. Disabling the force detach on timeout feature means
that a volume that is hosted on a node that is unhealthy for more than 6 minutes will not have
its associated
[VolumeAttachment](/docs/reference/kubernetes-api/config-and-storage-resources/volume-attachment-v1/)
deleted.
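
As a sketch (the exact flag name and form should be verified against the
kube-controller-manager reference for your release), disabling this behaviour
might look like:

```shell
# Hypothetical invocation: keep VolumeAttachments for unhealthy nodes instead of
# force-deleting them after the 6 minute timeout.
kube-controller-manager --disable-force-detach-on-timeout=true
```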

After this setting has been applied, unhealthy Pods still attached to volumes must be recovered
via the [Non-Graceful Node Shutdown](#non-graceful-node-shutdown) procedure mentioned above.

{{< note >}}
- Caution must be taken while using the [Non-Graceful Node Shutdown](#non-graceful-node-shutdown) procedure.
- Deviation from the steps documented above can result in data corruption.
{{< /note >}}

## Swap memory management {#swap-memory}

{{< feature-state feature_gate_name="NodeSwap" >}}

To enable swap on a node, the `NodeSwap` feature gate must be enabled on
the kubelet (default is true), and the `--fail-swap-on` command line flag or `failSwapOn`
[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/)
must be set to false.
To allow Pods to utilize swap, `swapBehavior` should not be set to `NoSwap` (which is the default behavior) in the kubelet config.

{{< warning >}}
When the memory swap feature is turned on, Kubernetes data such as the content
Expand All @@ -535,17 +565,16 @@ specify how a node will use swap memory. For example,

```yaml
memorySwap:
swapBehavior: LimitedSwap
```

- `NoSwap` (default): Kubernetes workloads will not use swap.
- `LimitedSwap`: The utilization of swap memory by Kubernetes workloads is subject to limitations.
Only Pods of Burstable QoS are permitted to employ swap.

If configuration for `memorySwap` is not specified and the feature gate is
enabled, by default the kubelet will apply the same behaviour as the
`NoSwap` setting.

With `LimitedSwap`, Pods that do not fall under the Burstable QoS classification (i.e.
`BestEffort`/`Guaranteed` QoS Pods) are prohibited from utilizing swap memory.
31 changes: 28 additions & 3 deletions content/en/docs/concepts/cluster-administration/logging.md
using the [kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/).
These settings let you configure the maximum size for each log file and the maximum number of
files allowed for each container respectively.

To perform efficient log rotation in clusters where the volume of logs generated by
the workload is large, the kubelet also provides a mechanism to tune how logs are rotated:
how many concurrent log rotations can be performed, and the interval at which logs are
monitored and rotated as required.
You can configure two kubelet [configuration settings](/docs/reference/config-api/kubelet-config.v1beta1/),
`containerLogMaxWorkers` and `containerLogMonitorInterval` using the
[kubelet configuration file](/docs/tasks/administer-cluster/kubelet-config-file/).
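
Taken together, a kubelet configuration that tunes log rotation might look like the
following sketch (the values are illustrative, not recommendations):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi         # rotate a container log once it reaches 10 MiB
containerLogMaxFiles: 5           # keep at most 5 log files per container
containerLogMaxWorkers: 2         # number of concurrent log rotation workers
containerLogMonitorInterval: 10s  # how often logs are checked for rotation
```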


When you run [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) as in
the basic logging example, the kubelet on the node handles the request and
reads directly from the log file. The kubelet returns the content of the log file.
If systemd is not present, the kubelet and container runtime write to `.log` files in
the `/var/log` directory. If you want to have logs written elsewhere, you can indirectly
run the kubelet via a helper tool, `kube-log-runner`, and use that tool to redirect
kubelet logs to a directory that you choose.

By default, kubelet directs your container runtime to write logs into directories within
`/var/log/pods`.

For more information on `kube-log-runner`, read [System Logs](/docs/concepts/cluster-administration/system-logs/#klog).
If you want to have logs written elsewhere, you can indirectly
run the kubelet via a helper tool, `kube-log-runner`, and use that tool to redirect
kubelet logs to a directory that you choose.

However, by default, kubelet directs your container runtime to write logs within the
directory `C:\var\log\pods`.

For more information on `kube-log-runner`, read [System Logs](/docs/concepts/cluster-administration/system-logs/#klog).
the `/var/log` directory, bypassing the default logging mechanism (the components
do not write to the systemd journal). You can use Kubernetes' storage mechanisms
to map persistent storage into the container that runs the component.

The kubelet allows changing the pod logs directory from the default `/var/log/pods`
to a custom path. This adjustment can be made by configuring the `podLogsDir`
parameter in the kubelet's configuration file.
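
For example, a kubelet configuration file might set (the path shown is purely
illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Write pod logs under a custom directory instead of the default /var/log/pods.
podLogsDir: "/var/log/custom-pod-logs"
```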

{{< caution >}}
It's important to note that the default location `/var/log/pods` has been in use for
an extended period and certain processes might implicitly assume this path.
Therefore, altering this parameter must be approached with caution and at your own risk.

Another caveat to keep in mind is that the kubelet supports the location being on the same
disk as `/var`. Otherwise, if the logs are on a separate filesystem from `/var`,
then the kubelet will not track that filesystem's usage, potentially leading to issues if
it fills up.

{{< /caution >}}

For details about etcd and its logs, view the [etcd documentation](https://etcd.io/docs/).
Again, you can use Kubernetes' storage mechanisms to map persistent storage into
the container that runs the component.
as your responsibility.

## Cluster-level logging architectures

While Kubernetes does not provide a native solution for cluster-level logging, there are
several common approaches you can consider. Here are some options:

* Use a node-level logging agent that runs on every node.
27 changes: 14 additions & 13 deletions content/en/docs/concepts/cluster-administration/system-logs.md

### Contextual Logging

{{< feature-state for_k8s_version="v1.30" state="beta" >}}

Contextual logging builds on top of structured logging. It is primarily about
how developers use logging calls: code based on that concept is more flexible
Expand All @@ -133,8 +133,9 @@ If developers use additional functions like `WithValues` or `WithName` in
their components, then log entries contain additional information that gets
passed into functions by their caller.

For Kubernetes {{< skew currentVersion >}}, this is gated behind the `ContextualLogging`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) and is
enabled by default. The infrastructure for this was added in 1.24 without
modifying components. The
[`component-base/logs/example`](https://github.com/kubernetes/kubernetes/blob/v1.24.0-beta.0/staging/src/k8s.io/component-base/logs/example/cmd/logger.go)
command demonstrates how to use the new logging calls and how a component
```console
$ go run . --help
--feature-gates mapStringBool A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
AllAlpha=true|false (ALPHA - default=false)
AllBeta=true|false (BETA - default=false)
ContextualLogging=true|false (BETA - default=true)
$ go run . --feature-gates ContextualLogging=true
...
I0222 15:13:31.645988 197901 example.go:54] "runtime" logger="example.myname" foo="bar" duration="1m0s"
I0222 15:13:31.646007 197901 example.go:55] "another runtime" logger="example" foo="bar" duration="1h0m0s" duration="1m0s"
```

The `logger` key and `foo="bar"` were added by the caller of the function
which logs the `runtime` message and `duration="1m0s"` value, without having to
modify that function.

Expand All @@ -165,8 +166,8 @@ is not in the log output anymore:
```console
$ go run . --feature-gates ContextualLogging=false
...
I0222 15:14:40.497333 198174 example.go:54] "runtime" duration="1m0s"
I0222 15:14:40.497346 198174 example.go:55] "another runtime" duration="1h0m0s" duration="1m0s"
```

### JSON log format
To help with debugging issues on nodes, Kubernetes v1.27 introduced a feature that allows viewing logs of services
running on the node. To use the feature, ensure that the `NodeLogQuery`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled for that node, and that the
kubelet configuration options `enableSystemLogHandler` and `enableSystemLogQuery` are both set to true. On Linux
the assumption is that service logs are available via journald. On Windows the assumption is that service logs are
available in the application log provider. On both operating systems, logs are also available by reading files within
`/var/log/`.

Provided you are authorized to interact with node objects, you can try out this feature on all your nodes or
just a subset. Here is an example to retrieve the kubelet service logs from a node:

```shell
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet"
```
* Read about [Contextual Logging](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging)
* Read about [deprecation of klog flags](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components)
* Read about the [Conventions for logging severity](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md)

* Read about [Log Query](https://kep.k8s.io/2258)
36 changes: 36 additions & 0 deletions content/en/docs/concepts/configuration/configmap.md
ConfigMaps consumed as environment variables are not updated automatically and require a pod restart.
A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes#using-subpath) volume mount will not receive ConfigMap updates.
{{< /note >}}


### Using ConfigMaps as environment variables

To use a ConfigMap in an {{< glossary_tooltip text="environment variable" term_id="container-env-variables" >}}
in a Pod:

1. For each container in your Pod specification, add an environment variable
for each ConfigMap key that you want to use to the
`env[].valueFrom.configMapKeyRef` field.
1. Modify your image and/or command line so that the program looks for values
in the specified environment variables.

This is an example of defining a ConfigMap as a pod environment variable:

```yaml
apiVersion: v1
kind: Pod
metadata:
name: env-configmap
spec:
containers:
- name: envars-test-container
image: nginx
env:
- name: CONFIGMAP_USERNAME
valueFrom:
configMapKeyRef:
name: myconfigmap
key: username
```

It's important to note that the range of characters allowed for environment
variable names in pods is [restricted](/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config).
If any keys do not meet the rules, those keys are not made available to your container, though
the Pod is allowed to start.

## Immutable ConfigMaps {#configmap-immutable}

{{< feature-state for_k8s_version="v1.21" state="stable" >}}
23 changes: 4 additions & 19 deletions content/en/docs/concepts/configuration/secret.md
in a Pod:
For instructions, refer to
[Define container environment variables using Secret data](/docs/tasks/inject-data-application/distribute-credentials-secure/#define-container-environment-variables-using-secret-data).

It's important to note that the range of characters allowed for environment variable
names in pods is [restricted](/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config).
If any keys do not meet the rules, those keys are not made available to your container, though
the Pod is allowed to start.

### Container image pull Secrets {#using-imagepullsecrets}

There are three types of hook handlers that can be implemented for Containers:
Resources consumed by the command are counted against the Container.
* HTTP - Executes an HTTP request against a specific endpoint on the Container.
* Sleep - Pauses the container for a specified duration.
This is a beta-level feature, enabled by default via the `PodLifecycleSleepAction` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).

### Hook handler execution

When you add a custom resource, you can access it using:
(generating one is an advanced undertaking, but some projects may provide a client along with
the CRD or AA).


## Custom resource field selectors

[Field Selectors](/docs/concepts/overview/working-with-objects/field-selectors/)
let clients select custom resources based on the value of one or more resource
fields.

All custom resources support the `metadata.name` and `metadata.namespace` field
selectors.

Fields declared in a {{< glossary_tooltip term_id="CustomResourceDefinition" text="CustomResourceDefinition" >}}
may also be used with field selectors when included in the `spec.versions[*].selectableFields` field of the
{{< glossary_tooltip term_id="CustomResourceDefinition" text="CustomResourceDefinition" >}}.

### Selectable fields for custom resources {#crd-selectable-fields}

{{< feature-state feature_gate_name="CustomResourceFieldSelectors" >}}

You need to enable the `CustomResourceFieldSelectors`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) to
use this behavior, which then applies to all CustomResourceDefinitions in your
cluster.

The `spec.versions[*].selectableFields` field of a {{< glossary_tooltip term_id="CustomResourceDefinition" text="CustomResourceDefinition" >}} may be used to
declare which other fields in a custom resource may be used in field selectors.
The following example adds the `.spec.color` and `.spec.size` fields as
selectable fields.

{{% code_sample file="customresourcedefinition/shirt-resource-definition.yaml" %}}
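
The linked sample contains the full definition; the relevant stanza might look
like this sketch (field paths assumed from the example above):

```yaml
# Excerpt of a CustomResourceDefinition declaring selectable fields
spec:
  versions:
    - name: v1
      selectableFields:
        - jsonPath: .spec.color
        - jsonPath: .spec.size
```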

Field selectors can then be used to get only resources with a `color` of `blue`:

```shell
kubectl get shirts.stable.example.com --field-selector spec.color=blue
```

The output should be:

```
NAME COLOR SIZE
example1 blue S
example2 blue M
```

## {{% heading "whatsnext" %}}

* Learn how to [Extend the Kubernetes API with the aggregation layer](/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/).
that plugin or [networking provider](/docs/concepts/cluster-administration/networking/).

## Network Plugin Requirements


### Loopback CNI

In addition to the CNI plugin installed on the nodes for implementing the Kubernetes network