Merge pull request #34290 from kubernetes/dev-1.25
Official 1.25 Release Docs
kcmartin authored Aug 23, 2022
2 parents 5706c58 + 67d8155 commit b79539d
Showing 161 changed files with 9,268 additions and 17,541 deletions.
21,434 changes: 6,475 additions & 14,959 deletions api-ref-assets/api/swagger.json

Large diffs are not rendered by default.

23 changes: 11 additions & 12 deletions api-ref-assets/config/fields.yaml
@@ -4,6 +4,7 @@
fields:
- containers
- initContainers
+ - ephemeralContainers
- imagePullSecrets
- enableServiceLinks
- os
@@ -20,7 +21,9 @@
- runtimeClassName
- priorityClassName
- priority
+ - preemptionPolicy
- topologySpreadConstraints
+ - overhead
- name: Lifecycle
fields:
- restartPolicy
@@ -48,11 +51,9 @@
- name: Security context
fields:
- securityContext
- - name: Beta level
+ - name: Alpha level
fields:
- - ephemeralContainers
- - preemptionPolicy
- - overhead
+ - hostUsers
- name: Deprecated
fields:
- serviceAccount
@@ -384,6 +385,9 @@
fields:
- selector
- manualSelector
+ - name: Alpha level
+ fields:
+ - podFailurePolicy

- definition: io.k8s.api.batch.v1.JobStatus
field_categories:
@@ -396,7 +400,7 @@
- completedIndexes
- conditions
- uncountedTerminatedPods
- - name: Alpha level
+ - name: Beta level
fields:
- ready

@@ -525,6 +529,7 @@
- cephfs
- cinder
- csi
+ - ephemeral
- fc
- flexVolume
- flocker
@@ -539,9 +544,6 @@
- scaleIO
- storageos
- vsphereVolume
- - name: Alpha level
- fields:
- - ephemeral
- name: Deprecated
fields:
- gitRepo
@@ -591,7 +593,7 @@
- volumeName
- storageClassName
- volumeMode
- - name: Alpha level
+ - name: Beta level
fields:
- dataSource
- dataSourceRef
@@ -714,6 +716,3 @@
- resourceVersion
- selfLink
- uid
- - name: Ignored
- fields:
- - clusterName
6 changes: 3 additions & 3 deletions api-ref-assets/config/toc.yaml
@@ -179,9 +179,6 @@ parts:
- name: PodDisruptionBudget
group: policy
version: v1
- - name: PodSecurityPolicy
- group: policy
- version: v1beta1
- name: Extend Resources
chapters:
- name: CustomResourceDefinition
@@ -230,6 +227,9 @@
- name: ComponentStatus
group: ""
version: v1
+ - name: ClusterCIDR
+ group: networking.k8s.io
+ version: v1alpha1
- name: Common Definitions
chapters:
- name: DeleteOptions
41 changes: 22 additions & 19 deletions config.toml
@@ -139,10 +139,10 @@ time_format_default = "January 02, 2006 at 3:04 PM PST"
description = "Production-Grade Container Orchestration"
showedit = true

- latest = "v1.24"
+ latest = "v1.25"

- fullversion = "v1.24.0"
- version = "v1.24"
+ fullversion = "v1.25.0"
+ version = "v1.25"
githubbranch = "main"
docsbranch = "main"
deprecated = false
@@ -169,6 +169,9 @@ algolia_docsearch = false
# Enable Lunr.js offline search
offlineSearch = false

+ # Official CVE feed bucket URL
+ cveFeedBucket = "https://storage.googleapis.com/k8s-cve-feed/official-cve-feed.json"

[params.pushAssets]
css = [
"callouts",
@@ -179,40 +182,40 @@ js = [
]

[[params.versions]]
- fullversion = "v1.24.0"
- version = "v1.24"
- githubbranch = "v1.24.0"
+ fullversion = "v1.25.0"
+ version = "v1.25"
+ githubbranch = "v1.25.0"
docsbranch = "main"
url = "https://kubernetes.io"

[[params.versions]]
- fullversion = "v1.23.6"
+ fullversion = "v1.24.2"
+ version = "v1.24"
+ githubbranch = "v1.24.2"
+ docsbranch = "release-1.24"
+ url = "https://v1-24.docs.kubernetes.io"
+
+ [[params.versions]]
+ fullversion = "v1.23.8"
version = "v1.23"
- githubbranch = "v1.23.6"
+ githubbranch = "v1.23.8"
docsbranch = "release-1.23"
url = "https://v1-23.docs.kubernetes.io"

[[params.versions]]
- fullversion = "v1.22.9"
+ fullversion = "v1.22.11"
version = "v1.22"
- githubbranch = "v1.22.9"
+ githubbranch = "v1.22.11"
docsbranch = "release-1.22"
url = "https://v1-22.docs.kubernetes.io"

[[params.versions]]
- fullversion = "v1.21.12"
+ fullversion = "v1.21.14"
version = "v1.21"
- githubbranch = "v1.21.12"
+ githubbranch = "v1.21.14"
docsbranch = "release-1.21"
url = "https://v1-21.docs.kubernetes.io"

- [[params.versions]]
- fullversion = "v1.20.15"
- version = "v1.20"
- githubbranch = "v1.20.15"
- docsbranch = "release-1.20"
- url = "https://v1-20.docs.kubernetes.io"

# User interface configuration
[params.ui]
# Enable to show the side bar menu in its compact state.
126 changes: 126 additions & 0 deletions content/en/docs/concepts/architecture/cgroups.md
@@ -0,0 +1,126 @@
---
title: About cgroup v2
content_type: concept
weight: 50
---

<!-- overview -->

On Linux, {{< glossary_tooltip text="control groups" term_id="cgroup" >}}
constrain resources that are allocated to processes.

The {{< glossary_tooltip text="kubelet" term_id="kubelet" >}} and the
underlying container runtime need to interface with cgroups to enforce
[resource management for pods and containers](/docs/concepts/configuration/manage-resources-containers/), which
includes CPU and memory requests and limits for containerized workloads.

There are two versions of cgroups in Linux: cgroup v1 and cgroup v2. cgroup v2 is
the new generation of the `cgroup` API.

<!-- body -->


## What is cgroup v2? {#cgroup-v2}
{{< feature-state for_k8s_version="v1.25" state="stable" >}}

cgroup v2 is the next version of the Linux `cgroup` API. cgroup v2 provides a
unified control system with enhanced resource management
capabilities.

cgroup v2 offers several improvements over cgroup v1, such as the following:

- Single unified hierarchy design in API
- Safer sub-tree delegation to containers
- Newer features like [Pressure Stall Information](https://www.kernel.org/doc/html/latest/accounting/psi.html)
- Enhanced resource allocation management and isolation across multiple resources
- Unified accounting for different types of memory allocations (network memory, kernel memory, etc.)
- Accounting for non-immediate resource changes such as page cache write backs

Some Kubernetes features exclusively use cgroup v2 for enhanced resource
management and isolation. For example, the
[MemoryQoS](/blog/2021/11/26/qos-memory-resources/) feature improves memory QoS
and relies on cgroup v2 primitives.


## Using cgroup v2 {#using-cgroupv2}

The recommended way to use cgroup v2 is to use a Linux distribution that
enables and uses cgroup v2 by default.

To check if your distribution uses cgroup v2, refer to [Identify the cgroup version on Linux nodes](#check-cgroup-version).

### Requirements

cgroup v2 has the following requirements:

* OS distribution enables cgroup v2
* Linux Kernel version is 5.8 or later
* Container runtime supports cgroup v2. For example:
* [containerd](https://containerd.io/) v1.4 and later
* [cri-o](https://cri-o.io/) v1.20 and later
* The kubelet and the container runtime are configured to use the [systemd cgroup driver](/docs/setup/production-environment/container-runtimes#systemd-cgroup-driver)
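
To illustrate that last requirement, here is a minimal sketch of a kubelet configuration
that selects the systemd cgroup driver. The file path is an assumption, not something this
page specifies; containerd has a corresponding `SystemdCgroup = true` option in its runc
runtime settings.

```yaml
# /var/lib/kubelet/config.yaml (path is an assumption)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```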

### Linux Distribution cgroup v2 support

For a list of Linux distributions that use cgroup v2, refer to the [cgroup v2 documentation](https://github.com/opencontainers/runc/blob/main/docs/cgroup-v2.md).

<!-- the list should be kept in sync with https://github.com/opencontainers/runc/blob/main/docs/cgroup-v2.md -->
* Container Optimized OS (since M97)
* Ubuntu (since 21.10, 22.04+ recommended)
* Debian GNU/Linux (since Debian 11 bullseye)
* Fedora (since 31)
* Arch Linux (since April 2021)
* RHEL and RHEL-like distributions (since 9)

To check if your distribution is using cgroup v2, refer to your distribution's
documentation or follow the instructions in [Identify the cgroup version on Linux nodes](#check-cgroup-version).

You can also enable cgroup v2 manually on your Linux distribution by modifying
the kernel cmdline boot arguments. If your distribution uses GRUB,
`systemd.unified_cgroup_hierarchy=1` should be added in `GRUB_CMDLINE_LINUX`
under `/etc/default/grub`, followed by `sudo update-grub`. However, the
recommended approach is to use a distribution that already enables cgroup v2 by
default.
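
As a sketch only (verify against your distribution's boot loader documentation before
running anything like this):

```shell
# Append the flag inside the existing GRUB_CMDLINE_LINUX value,
# then regenerate the GRUB config and reboot for it to take effect.
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&systemd.unified_cgroup_hierarchy=1 /' /etc/default/grub
sudo update-grub
sudo reboot
```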

### Migrating to cgroup v2 {#migrating-cgroupv2}

To migrate to cgroup v2, ensure that you meet the [requirements](#requirements), then upgrade
to a kernel version that enables cgroup v2 by default.

The kubelet automatically detects that the OS is running on cgroup v2 and
behaves accordingly, with no additional configuration required.

There should not be any noticeable difference in the user experience when
switching to cgroup v2, unless users are accessing the cgroup file system
directly, either on the node or from within the containers.

cgroup v2 uses a different API than cgroup v1, so if there are any
applications that directly access the cgroup file system, they need to be
updated to newer versions that support cgroup v2. For example:

* Some third-party monitoring and security agents may depend on the cgroup filesystem.
Update these agents to versions that support cgroup v2.
* If you run [cAdvisor](https://github.com/google/cadvisor) as a stand-alone
DaemonSet for monitoring pods and containers, update it to v0.43.0 or later.
* If you use JDK, prefer to use JDK 11.0.16 and later or JDK 15 and later, which [fully support cgroup v2](https://bugs.openjdk.org/browse/JDK-8230305).

## Identify the cgroup version on Linux nodes {#check-cgroup-version}

The cgroup version depends on the Linux distribution being used and the
default cgroup version configured on the OS. To check which cgroup version your
distribution uses, run the `stat -fc %T /sys/fs/cgroup/` command on
the node:

```shell
stat -fc %T /sys/fs/cgroup/
```

For cgroup v2, the output is `cgroup2fs`.

For cgroup v1, the output is `tmpfs`.
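
If you need this check in a script, a small sketch (not part of the page) that branches
on those two outputs:

```shell
# Prints which cgroup version the node's /sys/fs/cgroup mount indicates.
if [ "$(stat -fc %T /sys/fs/cgroup/)" = "cgroup2fs" ]; then
  echo "cgroup v2"
else
  echo "cgroup v1"
fi
```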

## {{% heading "whatsnext" %}}

- Learn more about [cgroups](https://man7.org/linux/man-pages/man7/cgroups.7.html)
- Learn more about [container runtime](/docs/concepts/architecture/cri)
- Learn more about [cgroup drivers](/docs/setup/production-environment/container-runtimes#cgroup-drivers)
31 changes: 30 additions & 1 deletion content/en/docs/concepts/cluster-administration/system-traces.md
@@ -61,7 +61,7 @@ as the kube-apiserver is often a public endpoint.
To enable tracing, enable the `APIServerTracing`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
- on the kube-apiserver. Also, provide the kube-apiserver with a tracing configration file
+ on the kube-apiserver. Also, provide the kube-apiserver with a tracing configuration file
with `--tracing-config-file=<path-to-config>`. This is an example config that records
spans for 1 in 10000 requests, and uses the default OpenTelemetry endpoint:

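The example file itself is collapsed in this diff view. A minimal sketch of what such a
file might look like, assuming the `TracingConfiguration` kind from the
`apiserver.config.k8s.io/v1alpha1` API linked below (the commented-out endpoint is the
default mentioned above):

```yaml
apiVersion: apiserver.config.k8s.io/v1alpha1
kind: TracingConfiguration
# default value
#endpoint: localhost:4317
samplingRatePerMillion: 100
```
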
@@ -76,6 +76,35 @@ samplingRatePerMillion: 100
For more information about the `TracingConfiguration` struct, see
[API server config API (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/#apiserver-k8s-io-v1alpha1-TracingConfiguration).

### kubelet traces

{{< feature-state for_k8s_version="v1.25" state="alpha" >}}

The kubelet CRI interface and authenticated HTTP servers are instrumented to generate
trace spans. As with the apiserver, the endpoint and sampling rate are configurable.
Trace context propagation is also configured: a parent span's sampling decision is
always respected, and the sampling rate from the provided tracing configuration applies
only to spans without a parent. If tracing is enabled without a configured endpoint, the
default OpenTelemetry Collector receiver address of `localhost:4317` is used.

#### Enabling tracing in the kubelet

To enable tracing, enable the `KubeletTracing`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
on the kubelet. Also, provide the kubelet with a
[tracing configuration](https://github.com/kubernetes/component-base/blob/release-1.25/tracing/api/v1/types.go).
This is an example snippet of a kubelet config that records spans for 1 in 10000 requests, and uses the default OpenTelemetry endpoint:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  KubeletTracing: true
tracing:
  # default value
  #endpoint: localhost:4317
  samplingRatePerMillion: 100
```
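
This snippet belongs in the kubelet's configuration file; assuming the conventional
path (an assumption, not stated on this page), the kubelet would be started with it
like so:

```shell
kubelet --config=/var/lib/kubelet/config.yaml
```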

## Stability

Tracing instrumentation is still under active development, and may change
@@ -236,7 +236,7 @@ directly or from your monitoring tools.
## Local ephemeral storage

<!-- feature gate LocalStorageCapacityIsolation -->
- {{< feature-state for_k8s_version="v1.10" state="beta" >}}
+ {{< feature-state for_k8s_version="v1.25" state="stable" >}}

Nodes have local ephemeral storage, backed by
locally-attached writeable devices or, sometimes, by RAM.
@@ -306,13 +306,15 @@ as you like.
{{< /tabs >}}

The kubelet can measure how much local storage it is using. It does this provided
- that:
+ that you have set up the node using one of the supported configurations for local
+ ephemeral storage.

- the `LocalStorageCapacityIsolation`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
- is enabled (the feature is on by default), and
- - you have set up the node using one of the supported configurations
- for local ephemeral storage.
+ is enabled (the feature is on by default), and you have set up the node using one
+ of the supported configurations for local ephemeral storage.
+ - Quotas are faster and more accurate than directory scanning. The
+ `LocalStorageCapacityIsolationFSQuotaMonitoring` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled (the feature is on by default),

If you have a different configuration, then the kubelet does not apply resource
limits for ephemeral local storage.
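
For background (this example is not part of the diff): the limits in question are the
per-container `ephemeral-storage` requests and limits, set roughly like this sketch
(names and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-demo        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx              # placeholder image
    resources:
      requests:
        ephemeral-storage: "1Gi"   # used by the scheduler for placement
      limits:
        ephemeral-storage: "2Gi"   # kubelet evicts the Pod if exceeded
```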
@@ -446,7 +448,7 @@ that file but the kubelet does not categorize the space as in use.
{{% /tab %}}
{{% tab name="Filesystem project quota" %}}

- {{< feature-state for_k8s_version="v1.15" state="alpha" >}}
+ {{< feature-state for_k8s_version="v1.25" state="beta" >}}

Project quotas are an operating-system level feature for managing
storage use on filesystems. With Kubernetes, you can enable project