Preparation and study material for Certified Kubernetes Administrator exam v1.30.
- Reasoning
- Aliases
- Allowed Kubernetes documentation resources
- CKA Environment
- CKA Exam Simulator
- Cluster Architecture, Installation and Configuration
- Workloads and Scheduling
- Understand deployments and how to perform rolling update and rollbacks
- Use ConfigMaps and Secrets to configure applications
- Know how to scale applications
- Understand the primitives used to create robust, self-healing, application deployments
- Understand how resource limits can affect Pod scheduling
- Awareness of manifest management and common templating tools
- Services and Networking
- Understand host networking configuration on the cluster nodes
- Understand connectivity between Pods
- Understand ClusterIP, NodePort, LoadBalancer service types and endpoints
- Know how to use Ingress controllers and Ingress resources
- Know how to configure and use CoreDNS
- Choose an appropriate container network interface plugin
- Storage
- Troubleshooting
- Bonus Exercise: am I ready for the CKA exam?
After using Kubernetes in production for over a year now (both on-prem and AWS/EKS), I wanted to see what the CKA exam is all about.
Keep it simple:
alias k=kubectl
export do="--dry-run=client -o yaml"
export now="--force --grace-period 0"
alias kc="kubectl config"
alias kcgc="kubectl config get-contexts"
alias kccc="kubectl config current-context"
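For example, the do and now variables can be combined with the k alias to generate a manifest without creating the object, or to delete a pod immediately:
k create deploy nginx --image=nginx $do > nginx-deploy.yaml
k delete pod nginx $now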
- kubectl Cheat Sheet
- kubectl-commands Reference
- Kubernetes Documentation
- GitHub Kubernetes
- Kubernetes Blog
See https://docs.linuxfoundation.org/tc-docs/certification/tips-cka-and-ckad#cka-and-ckad-environment
There are six clusters (CKA) that comprise the exam environment, made up of varying numbers of containers, as follows:
Cluster | Members | CNI | Description |
---|---|---|---|
k8s | 1 master, 2 worker | flannel | k8s cluster |
hk8s | 1 master, 2 worker | calico | k8s cluster |
bk8s | 1 master, 1 worker | flannel | k8s cluster |
wk8s | 1 master, 2 worker | flannel | k8s cluster |
ek8s | 1 master, 2 worker | flannel | k8s cluster |
ik8s | 1 master, 1 base node | loopback | k8s cluster - missing worker node |
At the start of each task you'll be provided with the command to ensure you are on the correct cluster to complete the task.
Command line tools kubectl, jq, tmux, curl, wget and man are pre-installed in all environments.
Do not sit the CKA exam until you can get a perfect score on the simulator and understand the solutions (regardless of the time taken to solve all questions).
Unless stated otherwise, all Kubernetes resources should be created in the cka
namespace.
We have a six-node (three control planes and three worker nodes) Kubernetes homelab cluster running Rocky Linux already.
For the sake of this exercise, we will create a new two-node cluster, with one control plane and one worker node, using Ubuntu 20.04 LTS. It makes sense to use a Debian-based distribution here because we have a RHEL-based homelab cluster already.
Libvirt/KVM nodes:
- srv37-master: 2 vCPUs, 4GB RAM, 16GB disk, 10.11.1.37/24
- srv38-node: 2 vCPUs, 4GB RAM, 16GB disk, 10.11.1.38/24
Provision a KVM guest for the control plane using PXE boot:
virt-install \
--connect qemu+ssh://[email protected]/system \
--name srv37-master \
--network bridge=br0,model=virtio,mac=C0:FF:EE:D0:5E:37 \
--disk path=/var/lib/libvirt/images/srv37.qcow2,size=16 \
--pxe \
--ram 4096 \
--vcpus 2 \
--os-type linux \
--os-variant ubuntu20.04 \
--sound none \
--rng /dev/urandom \
--virt-type kvm \
--wait 0
When asked for an OS, select Ubuntu 20.04 LTS Server option.
Provision a KVM guest for the worker node using PXE boot:
virt-install \
--connect qemu+ssh://[email protected]/system \
--name srv38-node \
--network bridge=br0,model=virtio,mac=C0:FF:EE:D0:5E:38 \
--disk path=/var/lib/libvirt/images/srv38.qcow2,size=16 \
--pxe \
--ram 4096 \
--vcpus 2 \
--os-type linux \
--os-variant ubuntu20.04 \
--sound none \
--rng /dev/urandom \
--virt-type kvm \
--wait 0
We will use kubeadm
to install a Kubernetes v1.29 cluster. We will upgrade the cluster to v1.30 in the next chapter.
Docs: https://kubernetes.io/docs/setup/production-environment/container-runtimes/
Install container runtime on all nodes:
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system
Install containerd
on all nodes:
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get -y install containerd.io
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
To use the systemd
cgroup driver in /etc/containerd/config.toml
with runc
, set:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
sudo sed -i 's/ SystemdCgroup =.*/ SystemdCgroup = true/g' /etc/containerd/config.toml
Make sure to restart containerd:
sudo systemctl restart containerd
Docs: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
Install kubeadm
, kubelet
and kubectl
(v1.29):
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring-1.29.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring-1.29.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes-1.29.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl enable kubelet
Initialise the control plane node. Set pod network CIDR based on the CNI that you plan to install later:
- Calico - 192.168.0.0/16
- Flannel - 10.244.0.0/16
- Weave Net - 10.32.0.0/12
We are going to use Flannel, hence 10.244.0.0/16.
sudo kubeadm init \
--kubernetes-version "1.29.9" \
--pod-network-cidr "10.244.0.0/16"
Configure kubectl
access on the control plane:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run the output of the init command on the worker node:
sudo kubeadm join 10.11.1.37:6443 --token "ktlb43.llip8nym905afakm" \
--discovery-token-ca-cert-hash sha256:b3f1c31e2777bd54b3f7a797659a96072711809ae84e8c9be3fba449c8e32dd4
Install a pod network add-on to the cluster. You can choose one of the following: Calico, Flannel, Weave Net.
- To install Calico, run the following:
kubectl apply -f "https://projectcalico.docs.tigera.io/manifests/calico.yaml"
- To install Flannel (for Kubernetes v1.17+), run the following:
kubectl apply -f "https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml"
- To install Weave Net, run the following:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Check the cluster to make sure that all nodes are running and ready:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
srv37 Ready control-plane 14m v1.29.9
srv38 Ready <none> 102s v1.29.9
Docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/
Create a new token on the control plane:
kubeadm token create --print-join-command
The output will be something like this:
kubeadm join 10.11.1.37:6443 --token hh{truncated}g4 --discovery-token-ca-cert-hash sha256:77{truncated}28
Run the kubeadm join
command on a new worker node that is to be added to the cluster.
Install etcd client package on the control plane using a package manager (this may not always install the version of the package that you want):
sudo apt-get -y install etcd-client
Alternatively, download the etcd release tarball:
ETCD_VER=v3.5.16
GITHUB_URL=https://github.com/etcd-io/etcd/releases/download
DOWNLOAD_URL=${GITHUB_URL}
mkdir -p /tmp/etcd-download-test
curl -fsSL ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/etcd-download-test --strip-components=1
rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
/tmp/etcd-download-test/etcdctl version
etcdctl version: 3.5.16
API version: 3.5
Find paths of certificates and keys:
sudo egrep "cert-|key-|trusted-" /etc/kubernetes/manifests/etcd.yaml|grep -ve peer
- --cert-file=/etc/kubernetes/pki/etcd/server.crt
- --client-cert-auth=true
- --key-file=/etc/kubernetes/pki/etcd/server.key
- --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
Take a snapshot by specifying the endpoint and certificates:
sudo ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
snapshot save "/root/etcd_backup_$(date +%F).db"
Do not use snapshot status
command because it can alter the snapshot file and render it invalid.
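If you need to inspect a snapshot, the etcdutl utility shipped in the same etcd release tarball is the safer option, for example:
/tmp/etcd-download-test/etcdutl snapshot status "/root/etcd_backup_$(date +%F).db" --write-out=table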
Restore an etcd cluster from the snapshot. Identify the default data-dir
:
sudo grep data-dir /etc/kubernetes/manifests/etcd.yaml
- --data-dir=/var/lib/etcd
If any API servers are running in your cluster, you should not attempt to restore instances of etcd. Instead, follow these steps to restore etcd:
- Stop all API server instances.
- Restore state in all etcd instances.
- Restart all API server instances.
Stop all control plane components:
cd /etc/kubernetes/manifests/
sudo mv ./*yaml ../
Make sure all control plane pods are NotReady
:
sudo crictl pods | egrep "kube|etcd"
Restore the snapshot into a specific directory:
sudo ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
--data-dir /var/lib/etcd_backup \
snapshot restore "/root/etcd_backup_$(date +%F).db"
Tell etcd to use the new directory /var/lib/etcd_backup
:
sudo sed -i 's/\/var\/lib\/etcd/\/var\/lib\/etcd_backup/g' /etc/kubernetes/etcd.yaml
Start all control plane components:
cd /etc/kubernetes/manifests/
sudo mv ../*yaml ./
Give it some time (up to several minutes) for etcd to restart.
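Once the manifests are back in place, a quick sanity check confirms the restore (assuming kubectl access is configured on the control plane):
sudo crictl pods | egrep "kube|etcd"
kubectl get nodes
kubectl get pods -A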
Docs: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
We will upgrade previously deployed Kubernetes cluster v1.29 to v1.30.
Set up Kubernetes repository for the new release:
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring-1.30.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring-1.30.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes-1.30.list
Upgrade the control plane:
sudo apt-mark unhold kubeadm && \
sudo apt-get update && \
sudo apt-get install -y kubeadm=1.30.1-1.1 && \
sudo apt-mark hold kubeadm
kubeadm version
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.30.1
kubectl drain srv37 --ignore-daemonsets
sudo apt-mark unhold kubelet kubectl && \
sudo apt-get install -y kubelet=1.30.1-1.1 kubectl=1.30.1-1.1 && \
sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl uncordon srv37
Upgrade the worker node:
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring-1.30.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring-1.30.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes-1.30.list
sudo apt-mark unhold kubeadm && \
sudo apt-get update && \
sudo apt-get install -y kubeadm=1.30.1-1.1 && \
sudo apt-mark hold kubeadm
sudo kubeadm upgrade node
kubectl drain srv38 --ignore-daemonsets
sudo apt-mark unhold kubelet kubectl && \
sudo apt-get install -y kubelet=1.30.1-1.1 kubectl=1.30.1-1.1 && \
sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl uncordon srv38
Verify the status of the cluster:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
srv37 Ready control-plane 38m v1.30.1
srv38 Ready <none> 33m v1.30.1
Docs:
- https://kubernetes.io/docs/reference/access-authn-authz/rbac/
- https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/
- https://kubernetes.io/docs/tasks/administer-cluster/certificates/
Exercise 1: perform the following tasks.
- Create a new user called peasant and grant the user access to the Kubernetes cluster.
- Generate a private key peasant.key and CSR peasant.csr for the user.
- The user should have permissions to get and list the following resources in the cka namespace: pods and services.
- Check API access using the auth can-i subcommand.
A few steps are required in order to get a normal user to be able to authenticate and invoke an API. First, this user must have a certificate issued by the Kubernetes cluster, and then present that certificate to the Kubernetes API.
Create a private key and a CSR using openssl
:
openssl genrsa -out peasant.key 2048
openssl req -new -key peasant.key -out peasant.csr
Create a CertificateSigningRequest
and submit it to a Kubernetes cluster:
cat peasant.csr | base64 | tr -d "\n"
cat > peasant-csr.yaml <<EOF
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
name: peasant
spec:
request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2lEQ0NBWEFDQVFBd1F6RUxNQWtHQTFVRUJoTUNSMEl4RURBT0JnTlZCQWdNQjBodmJXVnNZV0l4RURBTwpCZ05WQkFvTUIwaHZiV1ZzWVdJeEVEQU9CZ05WQkFNTUIzQmxZWE5oYm5Rd2dnRWlNQTBHQ1NxR1NJYjNEUUVCCkFRVUFBNElCRHdBd2dnRUtBb0lCQVFEYm4xd3c5bExDbERDd1EyeTlsVE1sSG9SUHNJS0VmbnFvb25mUmZiMnQKdzE5dzhrK3BGbkJsQkVxNEtSTldHSThnRkpab2MvS0lJN25lUnQwVDNnSkcrV09hWW56M2lYK3liL3IyNmh6ZwpTUWVWVjJCVEYxTVp3T0FmcjJpa2dPN1l4Z1YrZ0gwNFRmU1hKRUdSa2hUWEs4bWZzT3NrMHpnWW1NRHgvSkxpCmVZSWdlSkZMVnhoeCtUU3B2NzZQREk4OEJrVXpQbzYrQ2hXaExPcHRsL041Ym5BWXpXY2hIa2VTbGEwcC9UNnAKZ2RtR0tPOXkyN3k5bUx0MnJkd1Q2Tld5cXBsQk9ZbzloZ0VzKy9BYm5CSllvc0VMUzBKZmxpai90S3hEUyt0Qwp5YUY5NmQ0eTd3MUE3eVk4bmxZV3VIOFRUbXg4N0VySHA2OE16eUlZcmVYSkFnTUJBQUdnQURBTkJna3Foa2lHCjl3MEJBUXNGQUFPQ0FRRUFOMVIzY2Z4bTErZ0NiS1hxK0oyUkJ4eUFrSnFxNzF0SlZBemJmT3dScDV6REk4WTAKUHhCYlpBTjVGZVh3UFRyZnlPNm50TDFWSy9vTW9ETXoxTXNPZDZ5OXp6RGZYaHZRb3EvWW1Yd1g3M05iVzhJNQpweG1PRWkyeTZydEd2eVA2SHpicmtGTnNiVklPRktWMWhPSU9neitvd25TOFpUNWZzQmZOdXQ3eTRJTEZydmNRCnl6Mk0rWlAyWm96VlZ0NE9GaXNSSUF1MGlocE5Uc0ZLNkZUM3ZjblQ4bnZHQm5rS0NEeEh2YmRWVVdDK2xIZmkKZDI2Q3FwVmxQemNQRnRqRCtYQyswU2UzamJ1eW40MkxJYzRkaVBHUDA0a2Nsa2NLLzF1Z2NITHNCSjZZUGdyeQpSc05iQkhCSHM3c1hvNUw4NHIyOGlTL0RWR281d2tSd2hHNzlwUT09Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=
signerName: kubernetes.io/kube-apiserver-client
usages:
- client auth
EOF
kubectl apply -f peasant-csr.yaml
Approve certificate signing request:
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
peasant 10s kubernetes.io/kube-apiserver-client kubernetes-admin <none> Pending
kubectl certificate approve peasant
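The signed certificate can then be retrieved and added to a kubeconfig so the user is able to authenticate. A minimal sketch, assuming the default kubeadm cluster name kubernetes:
kubectl get csr peasant -o jsonpath='{.status.certificate}' | base64 -d > peasant.crt
kubectl config set-credentials peasant --client-key=peasant.key --client-certificate=peasant.crt --embed-certs=true
kubectl config set-context peasant --cluster=kubernetes --user=peasant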
Create a role and rolebinding:
kubectl create role peasant-role --resource=pods,svc --verb=get,list -n cka
kubectl create rolebinding peasant-role-binding --role=peasant-role --user=peasant -n cka
Verify:
kubectl auth can-i get pods --as=peasant -n cka
yes
kubectl auth can-i delete pods --as=peasant -n cka
no
kubectl auth can-i list svc --as=peasant -n cka
yes
kubectl auth can-i update svc --as=peasant -n cka
no
Exercise 2: perform the following tasks.
- Create a new service account kubernetes-dashboard-sa.
- Create a cluster role kubernetes-dashboard-clusterrole that grants permissions get,list,watch to resources pods,nodes in the API group metrics.k8s.io.
- Grant the service account access to the cluster by creating a cluster role binding kubernetes-dashboard-role-binding.
Create a service account:
kubectl create sa kubernetes-dashboard-sa -n cka
Create a cluster role:
kubectl create clusterrole kubernetes-dashboard-clusterrole \
--verb=get,list,watch --resource=pods,nodes \
--dry-run=client -o yaml | sed 's/- ""/- metrics.k8s.io/g' | kubectl apply -f -
Create a cluster role binding:
kubectl create clusterrolebinding kubernetes-dashboard-role-binding \
--clusterrole=kubernetes-dashboard-clusterrole \
--serviceaccount=cka:kubernetes-dashboard-sa
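Verify using the auth can-i subcommand, which accepts the RESOURCE.GROUP form and can impersonate the service account (the first two should return yes, the last one no):
kubectl auth can-i list pods.metrics.k8s.io --as=system:serviceaccount:cka:kubernetes-dashboard-sa
kubectl auth can-i watch nodes.metrics.k8s.io --as=system:serviceaccount:cka:kubernetes-dashboard-sa
kubectl auth can-i delete pods.metrics.k8s.io --as=system:serviceaccount:cka:kubernetes-dashboard-sa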
Docs: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
A stacked HA cluster is a topology where the distributed data storage cluster provided by etcd is stacked on top of the cluster formed by the nodes managed by kubeadm that run control plane components. This is the default topology in kubeadm.
Create a kube-apiserver load balancer with a DNS-resolvable name. Install HAProxy, configure it to listen on the kube-apiserver port 6443, and configure the firewall to allow inbound HAProxy traffic on port 6443.
sudo apt-get -y install haproxy psmisc
Configure HAProxy /etc/haproxy/haproxy.cfg
:
global
log /dev/log local0
log /dev/log local1 notice
stats socket /var/lib/haproxy/stats level admin
chroot /var/lib/haproxy
user haproxy
group haproxy
daemon
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
frontend kubernetes
bind 10.11.1.30:6443
option tcplog
mode tcp
default_backend kubernetes-master-nodes
backend kubernetes-master-nodes
mode tcp
balance roundrobin
option tcp-check
server srv31-master 10.11.1.31:6443 check fall 3 rise 2
server srv32-master 10.11.1.32:6443 check fall 3 rise 2
server srv33-master 10.11.1.33:6443 check fall 3 rise 2
Configure DNS:
host kubelb.hl.test
kubelb.hl.test has address 10.11.1.30
Enable and start HAProxy service:
sudo systemctl enable --now haproxy
Initialise the control plane:
sudo kubeadm init \
--control-plane-endpoint "kubelb.hl.test:6443" \
--upload-certs
Join other control plane nodes to the cluster.
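The kubeadm init output includes a join command for additional control plane nodes; it looks similar to the following (token, hash and certificate key are placeholders):
sudo kubeadm join kubelb.hl.test:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <certificate-key>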
Unless stated otherwise, all Kubernetes resources should be created in the cka
namespace.
Docs: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-back-a-deployment
Exercise 1: perform the following tasks.
- Create a deployment object httpd-pii-demo consisting of 4 pods, each containing a single lisenet/httpd-pii-demo:0.2 container. It should be able to run on control-plane nodes as well.
- Identify the update strategy employed by this deployment.
- Modify the update strategy so maxSurge is equal to 50% and maxUnavailable is equal to 50%.
- Perform a rolling update to this deployment so that the image gets updated to lisenet/httpd-pii-demo:0.3.
- Undo the most recent change.
Imperative commands:
kubectl create deploy httpd-pii-demo --image=lisenet/httpd-pii-demo:0.2 --replicas=4 -n cka
kubectl describe deploy httpd-pii-demo -n cka
kubectl edit deploy httpd-pii-demo -n cka
kubectl set image deploy/httpd-pii-demo httpd-pii-demo=lisenet/httpd-pii-demo:0.3 -n cka
kubectl rollout undo deploy/httpd-pii-demo -n cka
kubectl rollout status deploy/httpd-pii-demo -n cka
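To identify the update strategy and inspect the rollout history:
kubectl get deploy httpd-pii-demo -n cka -o jsonpath='{.spec.strategy}'
kubectl rollout history deploy/httpd-pii-demo -n cka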
Declarative YAML (updated):
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: httpd-pii-demo
name: httpd-pii-demo
namespace: cka
spec:
replicas: 4
selector:
matchLabels:
app: httpd-pii-demo
strategy:
rollingUpdate:
maxSurge: 50%
maxUnavailable: 50%
template:
metadata:
labels:
app: httpd-pii-demo
spec:
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
containers:
- image: lisenet/httpd-pii-demo:0.2
name: httpd-pii-demo
Docs:
- https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
- https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
Exercise 2: perform the following tasks.
- Create a single pod httpd of image httpd:2.4. The container should be named webserver.
- This pod should only be scheduled on a control plane node.
Imperative commands:
kubectl run httpd --image=httpd:2.4 --dry-run=client -o yaml -n cka > httpd-control-plane.yaml
Edit the file httpd-control-plane.yaml
and add tolerations
and nodeSelector
sections:
---
apiVersion: v1
kind: Pod
metadata:
labels:
run: httpd
name: httpd
spec:
containers:
- image: httpd:2.4
name: webserver
restartPolicy: Always
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
operator: Exists
nodeSelector:
node-role.kubernetes.io/control-plane: ""
Deploy the pod:
kubectl apply -f ./httpd-control-plane.yaml
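Verify that the pod was scheduled on the control plane node:
kubectl get po httpd -n cka -o wide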
Docs: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
Exercise 1: perform the following tasks to configure an application to use a configmap.
- Create a configmap webapp-color that has the following key=value pair: key = color, value = blue.
- Create a pod webapp-color that uses the kodekloud/webapp-color image.
- Configure the pod so that the underlying container has the environment variable APP_COLOR set to the value of the configmap.
- Check pod logs to ensure that the variable has been set correctly.
Imperative commands:
kubectl create cm webapp-color --from-literal=color=blue -n cka
kubectl run webapp-color --image=kodekloud/webapp-color \
--dry-run=client -o yaml -n cka > webapp-color.yaml
Edit the file webapp-color.yaml
and add env
section:
---
apiVersion: v1
kind: Pod
metadata:
labels:
run: webapp-color
name: webapp-color
namespace: cka
spec:
containers:
- image: kodekloud/webapp-color
name: webapp-color
env:
# Define the environment variable
- name: APP_COLOR
valueFrom:
configMapKeyRef:
# The ConfigMap containing the value you want to assign to APP_COLOR
name: webapp-color
# Specify the key associated with the value
key: color
Deploy the pod and validate:
kubectl apply -f webapp-color.yaml
kubectl logs webapp-color -n cka | grep "Color from environment variable"
Declarative YAML:
---
apiVersion: v1
data:
color: blue
kind: ConfigMap
metadata:
name: webapp-color
namespace: cka
---
apiVersion: v1
kind: Pod
metadata:
labels:
run: webapp-color
name: webapp-color
namespace: cka
spec:
containers:
- image: kodekloud/webapp-color
name: webapp-color
env:
# Define the environment variable
- name: APP_COLOR
valueFrom:
configMapKeyRef:
# The ConfigMap containing the value you want to assign to APP_COLOR
name: webapp-color
# Specify the key associated with the value
key: color
Exercise 2: perform the following tasks to configure an application to use a configmap.
- Create a configmap grafana-ini that contains a file named grafana.ini with the following content:
[server]
protocol = http
http_port = 3000
- Create a pod grafana that uses the grafana/grafana:9.3.1 image.
- Mount the configmap to the pod using /etc/grafana/grafana.ini as a mountPath and grafana.ini as a subPath.
Imperative commands:
cat > grafana.ini <<EOF
[server]
protocol = http
http_port = 3000
EOF
kubectl create configmap grafana-ini --from-file=grafana.ini -n cka
kubectl run grafana --image=grafana/grafana:9.3.1 \
--dry-run=client -o yaml -n cka > grafana.yaml
Edit the file grafana.yaml
and add volumes
and volumeMounts
sections:
---
apiVersion: v1
kind: Pod
metadata:
labels:
run: grafana
name: grafana
namespace: cka
spec:
containers:
- image: grafana/grafana:9.3.1
name: grafana
volumeMounts:
- name: grafana-config
mountPath: /etc/grafana/grafana.ini
subPath: grafana.ini
volumes:
- name: grafana-config
configMap:
# Provide the name of the ConfigMap containing the files you want
# to add to the container
name: grafana-ini
Deploy the pod and verify:
kubectl apply -f grafana.yaml
kubectl exec grafana -n cka -- cat /etc/grafana/grafana.ini
[server]
protocol = http
http_port = 3000
Declarative YAML:
---
apiVersion: v1
kind: ConfigMap
metadata:
name: grafana-ini
namespace: cka
data:
grafana.ini: |
[server]
protocol = http
http_port = 3000
---
apiVersion: v1
kind: Pod
metadata:
labels:
run: grafana
name: grafana
namespace: cka
spec:
containers:
- image: grafana/grafana:9.3.1
name: grafana
volumeMounts:
- name: grafana-config
      mountPath: /etc/grafana/grafana.ini
      subPath: grafana.ini
volumes:
- name: grafana-config
configMap:
# Provide the name of the ConfigMap containing the files you want
# to add to the container
name: grafana-ini
Docs: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables
Exercise 3: perform the following tasks to configure an application to use a secret.
- Create a secret mysql-credentials that has the following key=value pairs:
  - mysql_root_password = Mysql5.7RootPassword
  - mysql_username = dbadmin
  - mysql_password = Mysql5.7UserPassword
- Create a pod mysql-pod-secret that uses the mysql:5.7 image.
- Configure the pod so that the underlying container has the following environment variables set:
  - MYSQL_ROOT_PASSWORD from secret key mysql_root_password
  - MYSQL_USER from secret key mysql_username
  - MYSQL_PASSWORD from secret key mysql_password
- Exec a command in the container to show that it has the configured environment variables.
Imperative commands:
kubectl create secret generic mysql-credentials \
--from-literal=mysql_root_password="Mysql5.7RootPassword" \
--from-literal=mysql_username=dbadmin \
--from-literal=mysql_password="Mysql5.7UserPassword" \
-n cka
kubectl run mysql-pod-secret --image=mysql:5.7 \
--dry-run=client -o yaml -n cka > mysql-pod-secret.yaml
Edit the file mysql-pod-secret.yaml
and add env
section:
---
apiVersion: v1
kind: Pod
metadata:
labels:
run: mysql-pod-secret
name: mysql-pod-secret
namespace: cka
spec:
containers:
- image: mysql:5.7
name: mysql-pod-secret
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-credentials
key: mysql_root_password
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: mysql-credentials
key: mysql_username
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-credentials
key: mysql_password
Deploy the pod and validate:
kubectl apply -f mysql-pod-secret.yaml
kubectl exec mysql-pod-secret -n cka -- env | grep ^MYSQL
MYSQL_MAJOR=5.7
MYSQL_VERSION=5.7.37-1debian10
MYSQL_ROOT_PASSWORD=Mysql5.7RootPassword
MYSQL_USER=dbadmin
MYSQL_PASSWORD=Mysql5.7UserPassword
Declarative YAML:
---
apiVersion: v1
kind: Secret
metadata:
name: mysql-credentials
namespace: cka
data:
mysql_password: TXlzcWw1LjdVc2VyUGFzc3dvcmQ=
mysql_root_password: TXlzcWw1LjdSb290UGFzc3dvcmQ=
mysql_username: ZGJhZG1pbg==
type: Opaque
---
apiVersion: v1
kind: Pod
metadata:
labels:
run: mysql-pod-secret
name: mysql-pod-secret
namespace: cka
spec:
containers:
- image: mysql:5.7
name: mysql-pod-secret
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-credentials
key: mysql_root_password
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: mysql-credentials
key: mysql_username
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-credentials
key: mysql_password
Docs: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#scaling-a-deployment
Exercise: perform the following tasks.
- Create a deployment object nginx-deployment consisting of 2 pods, each containing a single nginx:1.22 container.
- Increase the deployment size by adding 1 additional pod. Record the action.
- Decrease the deployment back to its original size of 2 pods. Record the action.
Imperative commands:
kubectl create deploy nginx-deployment --image=nginx:1.22 --replicas=2 -n cka
kubectl scale deploy nginx-deployment --replicas=3 --record -n cka
kubectl scale deploy nginx-deployment --replicas=2 --record -n cka
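Note that --record is deprecated; if your kubectl version no longer accepts it, the change-cause shown by rollout history can be set manually with an annotation, for example:
kubectl annotate deploy/nginx-deployment kubernetes.io/change-cause="scaled to 3 replicas" -n cka
kubectl rollout history deploy/nginx-deployment -n cka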
Declarative YAML (initial):
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx-deployment
name: nginx-deployment
namespace: cka
spec:
replicas: 2
selector:
matchLabels:
app: nginx-deployment
template:
metadata:
labels:
app: nginx-deployment
spec:
containers:
- image: nginx:1.22
name: nginx
Declarative YAML (updated):
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx-deployment
name: nginx-deployment
namespace: cka
spec:
replicas: 3
selector:
matchLabels:
app: nginx-deployment
template:
metadata:
labels:
app: nginx-deployment
spec:
containers:
- image: nginx:1.22
name: nginx
Docs:
- https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
- https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
- https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
- https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
Exercise 1: perform the following tasks.
- Create a pod httpd-liveness-readiness that uses the lisenet/httpd-healthcheck:1.0.0 image.
- Configure both liveness and readiness probes for a TCP check on port 10001.
- Set initialDelaySeconds to 5. The probe should be performed every 10 seconds.
Imperative commands:
kubectl run httpd-liveness-readiness --image=lisenet/httpd-healthcheck:1.0.0 \
--dry-run=client -o yaml -n cka > httpd-liveness-readiness.yaml
Edit the file httpd-liveness-readiness.yaml
and add probes:
---
apiVersion: v1
kind: Pod
metadata:
labels:
run: httpd-liveness-readiness
name: httpd-liveness-readiness
namespace: cka
spec:
containers:
- image: lisenet/httpd-healthcheck:1.0.0
name: httpd-liveness-readiness
readinessProbe:
tcpSocket:
port: 10001
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
tcpSocket:
port: 10001
initialDelaySeconds: 5
periodSeconds: 10
Deploy the pod:
kubectl apply -f httpd-liveness-readiness.yaml
Verify:
kubectl describe po/httpd-liveness-readiness -n cka | grep tcp
Liveness: tcp-socket :10001 delay=5s timeout=1s period=10s #success=1 #failure=3
Readiness: tcp-socket :10001 delay=5s timeout=1s period=10s #success=1 #failure=3
Exercise 2: perform the following tasks.
- Create a pod called multi-container that has the following:
  - A container named blue with the lisenet/httpd-pii-demo:0.2 image.
  - A container named healthcheck with the lisenet/httpd-healthcheck:1.0.0 image.
  - An initContainer named busybox with the busybox:1.35.0 image. The container runs the following command: echo FIRST.
Imperative commands:
kubectl run multi-container --image=lisenet/httpd-healthcheck:1.0.0 \
--dry-run=client -o yaml -n cka > multi-container.yaml
Edit the file multi-container.yaml
and add containers.
---
apiVersion: v1
kind: Pod
metadata:
labels:
run: multi-container
name: multi-container
namespace: cka
spec:
containers:
- image: lisenet/httpd-pii-demo:0.2
name: blue
- image: lisenet/httpd-healthcheck:1.0.0
name: healthcheck
initContainers:
- name: busybox
image: busybox:1.35.0
    command: ['sh', '-c', 'echo FIRST']
Deploy the pod and verify that 2 containers are running:
kubectl apply -f multi-container.yaml
kubectl get po/multi-container -n cka
NAME READY STATUS RESTARTS AGE
multi-container 2/2 Running 0 11s
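The init container should have logged FIRST:
kubectl logs multi-container -c busybox -n cka
FIRST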
Docs: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#example-1
Exercise 1: perform the following tasks.
- Create a pod httpd-healthcheck that uses the lisenet/httpd-healthcheck:1.0.0 image.
- Set the pod memory request to 40Mi and memory limit to 128Mi.
Imperative commands:
kubectl run httpd-healthcheck --image=lisenet/httpd-healthcheck:1.0.0 \
--dry-run=client -o yaml -n cka > httpd-healthcheck.yaml
Edit the file httpd-healthcheck.yaml
and add resources
section:
---
apiVersion: v1
kind: Pod
metadata:
labels:
run: httpd-healthcheck
name: httpd-healthcheck
spec:
containers:
- image: lisenet/httpd-healthcheck:1.0.0
name: httpd-healthcheck
resources:
requests:
memory: 40Mi
limits:
memory: 128Mi
Deploy the pod:
kubectl apply -f httpd-healthcheck.yaml
Verify:
kubectl describe po/httpd-healthcheck -n cka | grep -B1 memory
Limits:
  memory: 128Mi
Requests:
  memory: 40Mi
Docs: https://kubernetes.io/docs/concepts/policy/limit-range/
Exercise 2: perform the following tasks.
- Create a namespace cka-memlimit with a container memory limit of 30Mi.
- Create a pod httpd-healthcheck-memlimit that uses the lisenet/httpd-healthcheck:1.0.0 image in the cka-memlimit namespace, and set the pod memory request to 100Mi.
- Observe the error.
Imperative commands:
kubectl create ns cka-memlimit
cat > cka-memlimit.yaml <<EOF
apiVersion: v1
kind: LimitRange
metadata:
name: cka-memlimit
namespace: cka-memlimit
spec:
limits:
- max:
memory: 30Mi
type: Container
EOF
kubectl apply -f cka-memlimit.yaml
kubectl run httpd-healthcheck-memlimit --image=lisenet/httpd-healthcheck:1.0.0 \
--dry-run=client -o yaml -n cka-memlimit > httpd-healthcheck-memlimit.yaml
Edit the file httpd-healthcheck-memlimit.yaml
and add resources
section:
---
apiVersion: v1
kind: Pod
metadata:
labels:
run: httpd-healthcheck-memlimit
name: httpd-healthcheck-memlimit
namespace: cka-memlimit
spec:
containers:
- image: lisenet/httpd-healthcheck:1.0.0
name: httpd-healthcheck-memlimit
resources:
requests:
memory: 100Mi
Deploy the pod:
kubectl apply -f httpd-healthcheck-memlimit.yaml
The Pod "httpd-healthcheck-memlimit" is invalid: spec.containers[0].resources.requests: Invalid value: "100Mi": must be less than or equal to memory limit
Helm is the package manager (analogous to yum and apt) and Charts are packages (analogous to debs and rpms).
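A minimal Helm workflow looks like the following (assuming Helm is installed and using the public Bitnami chart repository; the release name webserver is illustrative):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami/nginx
helm install webserver bitnami/nginx -n cka
helm list -n cka
helm uninstall webserver -n cka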
Unless stated otherwise, all Kubernetes resources should be created in the cka
namespace.
Doc: https://kubernetes.io/docs/concepts/cluster-administration/networking/
Exercise 1: perform the following tasks.
- Change the Service CIDR to 10.112.0.0/12 for the cluster.
- Create a new service new-cluster-ip of type ClusterIP that exposes port 8080.
- Save the iptables rules of the control plane node for the created service new-cluster-ip to a file /tmp/iptables.txt.
Imperative commands. Check the cluster IP:
kubectl get svc/kubernetes
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 12d
Change the Service CIDR on the kube-apiserver:
sudo sed -i 's/10.96.0.0/10.112.0.0/g' /etc/kubernetes/manifests/kube-apiserver.yaml
Give it some time for the kube-apiserver to restart. Check that the pod was restarted using crictl:
sudo crictl ps | grep kube-apiserver
a34270848373d f40be0088a83e About a minute ago Running kube-apiserver ...
Do the same for the controller manager:
sudo sed -i 's/10.96.0.0/10.112.0.0/g' /etc/kubernetes/manifests/kube-controller-manager.yaml
Give it some time for the controller-manager to restart. Check that the pod was restarted using crictl:
sudo crictl ps | grep kube-controller-manager
e5666a46b88ac b07520cd7ab76 46 seconds ago Running kube-controller-manager ...
Create a new service and verify its IP address:
kubectl create svc clusterip new-cluster-ip --tcp 8080
kubectl get svc/new-cluster-ip
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
new-cluster-ip ClusterIP 10.118.114.114 <none> 8080/TCP 7s
Kubernetes services are implemented using iptables
rules on all nodes. Save firewall rules for the service:
iptables-save | grep new-cluster-ip > /tmp/iptables.txt
Doc: https://kubernetes.io/docs/concepts/services-networking/
Every Pod gets its own IP address. This means you do not need to explicitly create links between Pods and you almost never need to deal with mapping container ports to host ports.
Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):
- pods on a node can communicate with all pods on all nodes without NAT,
- agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node.
Note: For those platforms that support Pods running in the host network (e.g. Linux):
- pods in the host network of a node can communicate with all pods on all nodes without NAT.
Kubernetes IP addresses exist at the Pod scope - containers within a Pod share their network namespaces - including their IP address and MAC address. This means that containers within a Pod can all reach each other's ports on localhost. This also means that containers within a Pod must coordinate port usage, but this is no different from processes in a VM. This is called the "IP-per-pod" model.
Docs: https://kubernetes.io/docs/concepts/services-networking/network-policies/
Exercise 1: perform the following tasks.
- Create a pod httpd-netpol-blue that uses the image lisenet/httpd-pii-demo:0.2 and has a label of app=blue.
- Create a pod httpd-netpol-green that uses the image lisenet/httpd-pii-demo:0.3 and has a label of app=green.
- Create a pod curl-netpol that uses the image curlimages/curl:7.87.0 and has a label of app=admin. The pod should run the following command: sleep 1800.
- Create a NetworkPolicy called netpol-blue-green.
- The policy should allow the curl-netpol pod only to:
  - connect to httpd-netpol-blue pods on port 80.
- Use the app label of pods in your policy.
After implementation, connections from the curl-netpol pod to the httpd-netpol-green pod on port 80 should no longer work.
Imperative commands. Create pods:
kubectl run httpd-netpol-blue --image="lisenet/httpd-pii-demo:0.2" --labels=app=blue -n cka
kubectl run httpd-netpol-green --image="lisenet/httpd-pii-demo:0.3" --labels=app=green -n cka
kubectl run curl-netpol --image="curlimages/curl:7.87.0" --labels=app=admin -n cka -- sleep 1800
Get IP addresses of pods and test web access.
kubectl get po -o wide -n cka | awk '{ print $1" "$6 }'
NAME IP
curl-netpol 192.168.135.204
httpd-netpol-blue 192.168.137.36
httpd-netpol-green 192.168.137.40
kubectl -n cka exec curl-netpol -- curl -sI http://192.168.137.36
HTTP/1.1 200 OK
Date: Mon, 21 Feb 2022 01:05:37 GMT
Server: Apache/2.4.48 (Debian)
X-Powered-By: PHP/7.3.30
Content-Type: text/html; charset=UTF-8
kubectl -n cka exec curl-netpol -- curl -sI http://192.168.137.40
HTTP/1.1 200 OK
Date: Mon, 21 Feb 2022 01:05:37 GMT
Server: Apache/2.4.48 (Debian)
X-Powered-By: PHP/7.3.30
Content-Type: text/html; charset=UTF-8
Create a file netpol-blue-green.yaml
with the following content:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: netpol-blue-green
namespace: cka
spec:
podSelector:
matchLabels:
app: admin
policyTypes:
- Egress # policy is only about Egress
egress:
- to: # first condition "to"
- podSelector:
matchLabels:
app: blue
ports: # second condition "port"
- protocol: TCP
port: 80
Create the network policy and verify:
kubectl apply -f netpol-blue-green.yaml
kubectl get networkpolicy -n cka
NAME POD-SELECTOR AGE
netpol-blue-green app=admin 29s
Test web access again:
kubectl -n cka exec curl-netpol -- curl -sI --max-time 5 http://192.168.137.36
HTTP/1.1 200 OK
Date: Mon, 21 Feb 2022 01:05:37 GMT
Server: Apache/2.4.48 (Debian)
X-Powered-By: PHP/7.3.30
Content-Type: text/html; charset=UTF-8
kubectl -n cka exec curl-netpol -- curl -sI --max-time 5 http://192.168.137.40
command terminated with exit code 28
Declarative YAML:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: netpol-blue-green
namespace: cka
spec:
podSelector:
matchLabels:
app: admin
policyTypes:
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: blue
ports:
- protocol: TCP
port: 80
---
apiVersion: v1
kind: Pod
metadata:
labels:
app: blue
name: httpd-netpol-blue
namespace: cka
spec:
containers:
- image: lisenet/httpd-pii-demo:0.2
name: httpd-netpol-blue
---
apiVersion: v1
kind: Pod
metadata:
labels:
app: green
name: httpd-netpol-green
namespace: cka
spec:
containers:
- image: lisenet/httpd-pii-demo:0.3
name: httpd-netpol-green
---
apiVersion: v1
kind: Pod
metadata:
labels:
app: admin
name: curl-netpol
namespace: cka
spec:
containers:
- args:
- "sleep"
- "1800"
image: curlimages/curl:7.87.0
name: curl-netpol
Docs: https://kubernetes.io/docs/concepts/services-networking/service/
Pods are nonpermanent resources. If you use a Deployment to run your app, it can create and destroy Pods dynamically. Each Pod gets its own IP address, however in a Deployment, the set of Pods running in one moment in time could be different from the set of Pods running that application a moment later.
A service is an abstraction which defines a logical set of Pods and a policy by which to access them. Types of services:
- ClusterIP - exposes the Service on an internal IP in the cluster. Service only reachable from within the cluster.
- NodePort - exposes the Service to external clients. Each cluster node opens a port on the node itself (hence the name) and redirects traffic received on that port to the underlying service.
- LoadBalancer - exposes the Service to external clients. A LoadBalancer service is accessible through a dedicated load balancer, provisioned from the cloud infrastructure Kubernetes is running on.
Exercise 1: perform the following tasks.
- Create a pod httpd-healthcheck-nodeport that uses the lisenet/httpd-healthcheck:1.0.0 image.
- Expose the pod's port 10001 through a service of type NodePort and set the node port to 30080.
- Get the index page through the NodePort using curl.
Imperative commands:
kubectl run httpd-healthcheck-nodeport --image=lisenet/httpd-healthcheck:1.0.0 -n cka
kubectl expose pod httpd-healthcheck-nodeport \
--port=10001 \
--target-port=10001 \
--type=NodePort \
--dry-run=client -o yaml \
-n cka > httpd-healthcheck-nodeport.yaml
Edit the file httpd-healthcheck-nodeport.yaml
and add the given node port:
---
apiVersion: v1
kind: Service
metadata:
labels:
run: httpd-healthcheck-nodeport
name: httpd-healthcheck-nodeport
namespace: cka
spec:
ports:
- port: 10001
protocol: TCP
targetPort: 10001
nodePort: 30080
selector:
run: httpd-healthcheck-nodeport
type: NodePort
Deploy the service and verify:
kubectl apply -f httpd-healthcheck-nodeport.yaml
kubectl describe svc/httpd-healthcheck-nodeport -n cka
Name: httpd-healthcheck-nodeport
Namespace: cka
Labels: run=httpd-healthcheck-nodeport
Annotations: <none>
Selector: run=httpd-healthcheck-nodeport
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.97.138.226
IPs: 10.97.138.226
Port: <unset> 10001/TCP
TargetPort: 10001/TCP
NodePort: <unset> 30080/TCP
Endpoints: 10.244.1.48:10001
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
curl http://10.11.1.40:30080/
httpd-healthcheck
Declarative YAML:
---
apiVersion: v1
kind: Pod
metadata:
labels:
run: httpd-healthcheck-nodeport
name: httpd-healthcheck-nodeport
namespace: cka
spec:
containers:
- image: lisenet/httpd-healthcheck:1.0.0
name: httpd-healthcheck-nodeport
---
apiVersion: v1
kind: Service
metadata:
labels:
run: httpd-healthcheck-nodeport
name: httpd-healthcheck-nodeport
namespace: cka
spec:
ports:
- port: 10001
protocol: TCP
targetPort: 10001
nodePort: 30080
selector:
run: httpd-healthcheck-nodeport
type: NodePort
Exercise 2: perform the following tasks.
- Create a pod httpd-healthcheck-loadbalancer that uses the lisenet/httpd-healthcheck:1.0.0 image.
- Expose the pod's port 10001 through a service of type LoadBalancer.
- Verify that the service has an external IP pending.
Imperative commands:
kubectl run httpd-healthcheck-loadbalancer --image=lisenet/httpd-healthcheck:1.0.0 -n cka
kubectl expose pod httpd-healthcheck-loadbalancer \
--port=10001 --target-port=10001 --type=LoadBalancer -n cka
kubectl get svc/httpd-healthcheck-loadbalancer -n cka
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
httpd-healthcheck-loadbalancer LoadBalancer 10.111.116.18 <pending> 10001:32465/TCP 68s
Declarative YAML:
---
apiVersion: v1
kind: Pod
metadata:
labels:
run: httpd-healthcheck-loadbalancer
name: httpd-healthcheck-loadbalancer
namespace: cka
spec:
containers:
- image: lisenet/httpd-healthcheck:1.0.0
name: httpd-healthcheck-loadbalancer
---
apiVersion: v1
kind: Service
metadata:
labels:
run: httpd-healthcheck-loadbalancer
name: httpd-healthcheck-loadbalancer
namespace: cka
spec:
ports:
- port: 10001
protocol: TCP
targetPort: 10001
selector:
run: httpd-healthcheck-loadbalancer
type: LoadBalancer
Exercise 3: perform the following tasks.
- Follow up from the previous exercise 2 and deploy MetalLB to the cluster.
- Verify that the service httpd-healthcheck-loadbalancer has an external IP address assigned.
- Get the index page through the LoadBalancer service using curl.
Imperative commands:
kubectl create ns metallb-system
cat > metallb-configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 10.11.1.61-10.11.1.65
EOF
kubectl apply -f metallb-configmap.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/metallb.yaml
kubectl get svc/httpd-healthcheck-loadbalancer -n cka
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
httpd-healthcheck-loadbalancer LoadBalancer 10.111.116.18 10.11.1.61 10001:32465/TCP 10m
curl http://10.11.1.61:10001
httpd-healthcheck
Docs: https://kubernetes.io/docs/concepts/services-networking/ingress/
You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
Docs: https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
Exercise: perform the following tasks.
- Make sure that you have MetalLB installed from the previous task.
- Create a deployment object httpd-pii-demo-blue containing a single lisenet/httpd-pii-demo:0.2 container and expose its port 80 through a type LoadBalancer service.
- Create a deployment object httpd-pii-demo-green containing a single lisenet/httpd-pii-demo:0.3 container and expose its port 80 through a type LoadBalancer service.
- Deploy the ingress-nginx controller.
- Create the ingress resource ingress-blue-green to make the applications available at /blue and /green on the Ingress service.
Imperative commands:
kubectl create deploy httpd-pii-demo-blue --image=lisenet/httpd-pii-demo:0.2 -n cka
kubectl expose deploy/httpd-pii-demo-blue --port=80 --target-port=80 --type=LoadBalancer -n cka
kubectl create deploy httpd-pii-demo-green --image=lisenet/httpd-pii-demo:0.3 -n cka
kubectl expose deploy/httpd-pii-demo-green --port=80 --target-port=80 --type=LoadBalancer -n cka
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml
kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.107.59.72 10.11.1.62 80:30811/TCP,443:31972/TCP 51s
ingress-nginx-controller-admission ClusterIP 10.100.251.17 <none> 443/TCP 52s
Create a manifest file ingress-blue-green.yaml
for an ingress resource:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-blue-green
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
namespace: cka
spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /blue
pathType: Prefix
backend:
service:
name: httpd-pii-demo-blue
port:
number: 80
- path: /green
pathType: Prefix
backend:
service:
name: httpd-pii-demo-green
port:
number: 80
Deploy ingress resource and verify:
kubectl apply -f ingress-blue-green.yaml
kubectl describe ingress/ingress-blue-green -n cka
Name: ingress-blue-green
Namespace: cka
Address: 10.11.1.62
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
*
/blue httpd-pii-demo-blue:80 (10.244.1.61:80)
/green httpd-pii-demo-green:80 (10.244.1.62:80)
Annotations: nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 12s (x2 over 33s) nginx-ingress-controller Scheduled for sync
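Test both paths through the ingress controller's external IP from the output above; both should return HTTP/1.1 200 OK:
curl -sI http://10.11.1.62/blue
curl -sI http://10.11.1.62/green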
Docs: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/
As of Kubernetes v1.12, CoreDNS is the recommended DNS Server, replacing kube-dns.
Exercise: perform the following tasks.
- Create a pod busybox that uses the busybox image and runs the following command: sleep 7200.
- Exec a command in the container to show what nameservers have been configured in /etc/resolv.conf.
- Exec a command in the container to perform nslookup on the service httpd-healthcheck-loadbalancer.
- Verify that the IP address returned matches the service IP reported by Kubernetes.
- Update the coredns configmap to forward queries that are not within the cluster domain of Kubernetes to 10.11.1.2 and 10.11.1.3.
- Exec a command in the container to perform nslookup on kubelb.hl.test.
Imperative commands:
kubectl -n cka run busybox --image=busybox -- sleep 7200
kubectl -n cka exec busybox -- cat /etc/resolv.conf
search cka.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
Execute nslookup
command and check with the service to make sure the IPs match.
kubectl -n cka exec busybox -- nslookup httpd-healthcheck-loadbalancer
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: httpd-healthcheck-loadbalancer.cka.svc.cluster.local
Address: 10.111.116.18
kubectl -n cka get svc/httpd-healthcheck-loadbalancer
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
httpd-healthcheck-loadbalancer LoadBalancer 10.111.116.18 10.11.1.61 10001:32465/TCP 11h
Update coredns configmap and delete pods to pick up changes:
kubectl edit cm/coredns -n kube-system
for i in $(kubectl get po -n kube-system -l k8s-app=kube-dns -o name); do
kubectl delete ${i} -n kube-system;
done
kubectl get cm/coredns -o yaml -n kube-system
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . 10.11.1.2 10.11.1.3 /etc/resolv.conf {
max_concurrent 1000
}
cache 30
loop
reload
loadbalance
}
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
kubectl -n cka exec busybox -- nslookup kubelb.hl.test
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: kubelb.hl.test
Address: 10.11.1.30
Docs: https://kubernetes.io/docs/concepts/cluster-administration/networking/
- AWS VPC CNI offers integrated AWS Virtual Private Cloud (VPC) networking for Kubernetes clusters.
- Calico is an open source networking and network security solution for containers, virtual machines, and native host-based workloads.
- Flannel is a very simple overlay network that satisfies the Kubernetes requirements.
- Weave Net is a resilient and simple to use network for Kubernetes and its hosted applications.
Unless stated otherwise, all Kubernetes resources should be created in the cka
namespace.
Docs:
- https://kubernetes.io/docs/concepts/storage/storage-classes/
- https://kubernetes.io/docs/concepts/storage/persistent-volumes/
Exercise: perform the following tasks.
- Create a storageclass object named portworx-csi with the following specifications:
  - Provisioner is set to kubernetes.io/portworx-volume.
  - Reclaim policy is set to Delete.
  - Filesystem type fs is set to xfs.
  - Volume expansion is allowed.
- Create a persistentvolume named portworx-volume based on this storage class with the following specifications:
  - Storage class name is set to portworx-csi.
  - Access mode is set to ReadWriteOnce.
  - Storage capacity is set to 100Mi.
  - Persistent volume reclaim policy is set to Delete.
  - Volume mode is set to Filesystem.
- Note, the persistentvolume will not initialise unless Portworx integration is configured, but the goal here is to get the YAML right.
Imperative commands:
cat > storageclass.yaml <<EOF
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: portworx-csi
provisioner: kubernetes.io/portworx-volume
parameters:
fs: xfs
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
EOF
kubectl apply -f storageclass.yaml
kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
portworx-csi kubernetes.io/portworx-volume Delete Immediate true 5s
cat > persistentvolume.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
name: portworx-volume
spec:
capacity:
storage: 100Mi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: portworx-csi
EOF
kubectl apply -f persistentvolume.yaml
Docs: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
Kubernetes supports two volumeModes of PersistentVolumes:
- Filesystem - the volume is mounted into Pods as a directory.
- Block - the volume is presented to a Pod as a raw block device, without any filesystem on it.
Access modes:
- ReadWriteOnce (RWO) - the volume can be mounted as read-write by a single node.
- ReadOnlyMany (ROX) - the volume can be mounted as read-only by many nodes.
- ReadWriteMany (RWX) - the volume can be mounted as read-write by many nodes.
- ReadWriteOncePod (RWOP) - the volume can be mounted as read-write by a single Pod. Only supported for CSI volumes and Kubernetes version 1.22+.
Reclaim policies:
- Retain - manual reclamation.
- Recycle - basic scrub (rm -rf /thevolume/*). Currently, only NFS and HostPath support recycling.
- Delete - the associated storage asset, such as an AWS EBS volume, is deleted.
A PersistentVolume
is a piece of storage in the cluster that has been either provisioned by an administrator, or dynamically provisioned using Storage Classes
.
A PersistentVolumeClaim
is a request for storage by a user. For example, Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes.
Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the Pod using the claim. The cluster finds the claim in the Pod's namespace and uses it to get the PersistentVolume backing the claim. The volume is then mounted to the host and into the Pod.
Docs: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes
Exercise: perform the following tasks.
- Create a persistent volume named pv-httpd-webroot with the following specifications:
  - Storage class name is set to manual.
  - Access mode is set to ReadWriteMany.
  - Storage capacity is set to 64Mi.
  - Persistent volume reclaim policy is set to Retain.
  - Volume mode is set to Filesystem.
  - hostPath is set to /mnt/data.
- Create a persistent volume claim object pvc-httpd-webroot.
- Create a deployment object httpd-persistent consisting of 2 pods, each containing a single httpd:2.4 container.
- Configure the deployment to use the persistent volume claim as a volume where mountPath is set to /usr/local/apache2/htdocs.
- Exec a command in either of the httpd containers and create a blank file /usr/local/apache2/htdocs/blank.html.
- Delete the deployment object httpd-persistent.
- Create the deployment object httpd-persistent again.
- Exec a command in either of the httpd containers to verify that the file /usr/local/apache2/htdocs/blank.html exists.
Imperative commands:
cat > pv-httpd-webroot.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-httpd-webroot
spec:
capacity:
storage: 64Mi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: manual
hostPath:
path: "/mnt/data"
EOF
kubectl apply -f pv-httpd-webroot.yaml
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-httpd-webroot 64Mi RWX Retain Available manual 15s
cat > pvc-httpd-webroot.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-httpd-webroot
namespace: cka
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 64Mi
EOF
kubectl apply -f pvc-httpd-webroot.yaml
kubectl get pvc -n cka
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-httpd-webroot Bound pv-httpd-webroot 64Mi RWX manual 8s
kubectl create deploy httpd-persistent \
--image=httpd:2.4 --replicas=2 \
--dry-run=client -o yaml -n cka > httpd-persistent.yaml
Edit the file httpd-persistent.yaml
and add volumes
and volumeMounts
sections:
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: httpd-persistent
name: httpd-persistent
namespace: cka
spec:
replicas: 2
selector:
matchLabels:
app: httpd-persistent
template:
metadata:
labels:
app: httpd-persistent
spec:
containers:
- image: httpd:2.4
name: httpd
volumeMounts:
- mountPath: "/usr/local/apache2/htdocs"
name: pvc-httpd-persistent
volumes:
- name: pvc-httpd-persistent
persistentVolumeClaim:
claimName: pvc-httpd-webroot
Create deployment:
kubectl apply -f httpd-persistent.yaml
Create a blank file:
kubectl -n cka exec $(k get po -n cka | grep httpd-persistent | cut -d" " -f1 | head -n1) -- touch /usr/local/apache2/htdocs/blank.html
Delete and re-create the deployment:
kubectl delete deploy/httpd-persistent -n cka
kubectl apply -f httpd-persistent.yaml
Verify the blank file exists:
kubectl -n cka exec $(k get po -n cka | grep httpd-persistent | cut -d" " -f1 | head -n1) -- ls /usr/local/apache2/htdocs/blank.html
/usr/local/apache2/htdocs/blank.html
Declarative YAML:
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-httpd-webroot
spec:
capacity:
storage: 64Mi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: manual
hostPath:
path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-httpd-webroot
namespace: cka
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 64Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: httpd-persistent
name: httpd-persistent
namespace: cka
spec:
replicas: 2
selector:
matchLabels:
app: httpd-persistent
template:
metadata:
labels:
app: httpd-persistent
spec:
containers:
- image: httpd:2.4
name: httpd
volumeMounts:
- mountPath: "/usr/local/apache2/htdocs"
name: pvc-httpd-persistent
volumes:
- name: pvc-httpd-persistent
persistentVolumeClaim:
claimName: pvc-httpd-webroot
Display the addresses of the control plane and cluster services:
kubectl cluster-info
Show the latest events in the whole cluster, ordered by time (see the kubectl cheat sheet documentation):
kubectl get events -A --sort-by=.metadata.creationTimestamp
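Sorting by `.lastTimestamp` is a handy variation when you care about the most recently seen events rather than when they were first recorded:
kubectl get events -A --sort-by=.lastTimestamp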
Check the status of the nodes:
kubectl get nodes
Check the status of the container runtime service (containerd in our case; adjust if your cluster runs a different runtime):
systemctl status containerd
View `kubelet` systemd service logs:
journalctl -u kubelet
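The usual journalctl flags help narrow this down, for example to follow the log live or to limit it to a recent window:
journalctl -u kubelet -f                          # follow new log entries
journalctl -u kubelet --since "1 hour ago" --no-pager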
Check etcd health and status:
sudo ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
endpoint health
https://127.0.0.1:2379 is healthy: successfully committed proposal: took = 12.882378ms
sudo ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
endpoint status
https://127.0.0.1:2379, 9dc11d4117b1b759, 3.5.5, 4.7 MB, true, 3, 561317
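The comma-separated status line is hard to read; asking etcdctl for table output (`-w table`) makes the fields self-describing:
sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint status -w table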
Dump pod logs to stdout:
kubectl logs ${POD_NAME}
Dump pod logs for a deployment (single-container case):
kubectl logs deploy/${DEPLOYMENT_NAME}
Dump pod logs for a deployment (multi-container case):
kubectl logs deploy/${DEPLOYMENT_NAME} -c ${CONTAINER_NAME}
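Two more flags worth remembering: `--previous` shows the logs of the last terminated container instance (useful for crash loops), and `-f` streams new lines:
kubectl logs ${POD_NAME} --previous
kubectl logs deploy/${DEPLOYMENT_NAME} -f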
We can write container logs to a file. For example, get container ID of coredns:
sudo crictl ps --quiet --name coredns
5582a4b80318a741ad0d9a05df6d235642e73a2e88ff53933c103ffd854c0069
Dump container logs to a file (both the standard output and standard error):
sudo crictl logs ${CONTAINER_ID} 2>&1 | tee /tmp/container.log
Docs: https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/
The approach depends on the application. The first step in debugging a pod is taking a look at it. Check the current state of the pod and recent events with the following command:
kubectl describe pods ${POD_NAME}
Docs: https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/
Check the control plane log files (these exist when the components run directly on the host; on a kubeadm cluster the control plane runs as static pods, so use `kubectl logs -n kube-system` or `crictl logs` instead):
/var/log/kube-apiserver.log
/var/log/kube-scheduler.log
/var/log/kube-controller-manager.log
/var/log/kubelet.log
/var/log/kube-proxy.log
Docs: https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/
Run commands in a pod. The `--rm` parameter deletes the pod after it exits:
kubectl run busybox --image=busybox --rm --restart=Never -it -- sh
Now you can run things like `nslookup`, `nc` or `telnet` into Mordor.
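For example, a quick in-cluster DNS and connectivity check from that throwaway pod (note that busybox's `nslookup` behaviour varies slightly between image versions):
# run inside the busybox pod
nslookup kubernetes.default
nc -zv -w 2 kubernetes.default.svc.cluster.local 443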
If you can't solve this then you're likely not ready.
Bonus exercise: perform the following tasks to create a stateful application.
Docs: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
- Create a namespace called `cka`. All resources should be created in this namespace.
- Create a new secret `mysql-password` that has the following key=value pair:
  - `mysql_root_password` = `Mysql5.6Password`
- Create a new `PersistentVolume` named `pv-mysql`.
  - Set capacity to `1Gi`.
  - Set `accessMode` to `ReadWriteOnce`.
  - Set `hostPath` to `/data_mysql`.
  - Set `persistentVolumeReclaimPolicy` to `Recycle`.
  - The volume should have no `storageClassName` defined.
- Create a new `PersistentVolumeClaim` named `pvc-mysql`. It should request `1Gi` storage, accessMode `ReadWriteOnce` and should not define a `storageClassName`. The PVC should bind to the PV correctly.
- Create a new `StatefulSet` named `mysql`.
  - Use container image `mysql:5.6`.
  - The container in the pod should use `runAsUser=65534` and `runAsGroup=65534`.
  - Mount the persistent volume at `/var/lib/mysql`.
  - There should be only `1 replica` running.
  - Define an `initContainer` called `fix-permissions` that uses image `busybox:1.35`.
  - The `initContainer` should run as `runAsUser=0`.
  - The `initContainer` should mount the persistent volume and run the following command: `["sh", "-c", "chown -R 65534:65534 /var/lib/mysql"]`.
- Configure the `mysql` stateful set so that the underlying container has the following environment variables set:
  - `MYSQL_ROOT_PASSWORD` from secret key `mysql_root_password`.
- Create a new `ClusterIP` service named `mysql` which exposes `mysql` pods on port `3306`.
- Create a new `Deployment` named `wordpress`.
  - Use container image `wordpress:5.6-apache`.
  - Use deployment strategy `Recreate`.
  - There should be `3 replicas` created.
  - The pods should request `10m` CPU and `64Mi` memory.
  - The `livenessProbe` should perform an HTTP GET request to the path `/readme.html` on port `80` every 5 seconds.
  - Configure `PodAntiAffinity` to ensure that the scheduler does not co-locate replicas on a single node.
  - Pods of this deployment should be able to run on control-plane nodes as well; create the proper `toleration`.
- Configure the `wordpress` deployment so that the underlying container has the following environment variables set:
  - `WORDPRESS_DB_PASSWORD` from secret key `mysql_root_password`.
  - `WORDPRESS_DB_HOST` set to the value `mysql`.
  - `WORDPRESS_DB_USER` set to the value `root`.
- Create a `NodePort` service named `wordpress` which exposes the `wordpress` deployment on port `80` and connects to the container on port `80`. The port on the node should be set to `31234`.
- Create a `NetworkPolicy` called `netpol-mysql`. Use the `app` label of pods in your policy. The policy should allow the `mysql-*` pods to:
  - accept ingress traffic on port `3306` from `wordpress-*` pods only.
  - connect to IP block `10.0.0.0/8`.
- Navigate your web browser to http://10.11.1.38:31234/ and take a moment to enjoy a brand new instance of WordPress on Kubernetes.
- Take a backup of `etcd` running on the control plane and save it on the control plane to `/tmp/etcd-backup.db`.
- Delete the `wordpress` deployment configuration from the cluster. Verify that the application is no longer accessible.
- Restore the `etcd` configuration from the backup file `/tmp/etcd-backup.db`. Confirm that the cluster is working and that all `wordpress` pods are back.
Imperative commands:
kubectl create ns cka
kubectl create secret generic mysql-password \
--from-literal=mysql_root_password="Mysql5.6Password" -n cka
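To double-check the stored value (Secret data is base64-encoded in the API):
kubectl get secret mysql-password -n cka -o jsonpath='{.data.mysql_root_password}' | base64 -d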
cat > pv-mysql.yaml <<EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-mysql
spec:
capacity:
storage: 1Gi
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Recycle
accessModes:
- ReadWriteOnce
hostPath:
path: "/data_mysql"
EOF
cat > pvc-mysql.yaml <<EOF
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-mysql
namespace: cka
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 1Gi
EOF
kubectl apply -f pv-mysql.yaml
kubectl apply -f pvc-mysql.yaml
kubectl create deploy mysql --image=mysql:5.6 --replicas=1 \
--dry-run=client -o yaml -n cka > deployment-mysql.yaml
Edit the file `deployment-mysql.yaml`, change `Deployment` to `StatefulSet` and add the required configuration:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: mysql
name: mysql
namespace: cka
spec:
replicas: 1
selector:
matchLabels:
app: mysql
serviceName: mysql
template:
metadata:
labels:
app: mysql
spec:
securityContext:
runAsUser: 65534
runAsGroup: 65534
volumes:
- name: pv-mysql
persistentVolumeClaim:
claimName: pvc-mysql
initContainers:
- name: fix-permissions
image: busybox:1.35
command: ["sh", "-c", "chown -R 65534:65534 /var/lib/mysql"]
securityContext:
runAsUser: 0
volumeMounts:
- mountPath: "/var/lib/mysql"
name: pv-mysql
containers:
- image: mysql:5.6
name: mysql
volumeMounts:
- mountPath: "/var/lib/mysql"
name: pv-mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-password
key: mysql_root_password
Create the stateful set and verify the mysql pod logs:
kubectl apply -f deployment-mysql.yaml
kubectl logs $(kubectl get po -n cka -l app=mysql -o name) -n cka | tail -n1
Version: '5.6.51' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)
Create a service for mysql, which serves on port 3306:
kubectl create svc clusterip mysql --tcp=3306 -n cka
kubectl get svc -n cka
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql ClusterIP 10.107.189.130 <none> 3306/TCP 111s
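As an optional sanity check (not part of the task), a throwaway client pod can confirm that MySQL is reachable through the service DNS name:
kubectl run mysql-client --image=mysql:5.6 -i --rm --restart=Never -n cka -- \
  mysql -h mysql -uroot -pMysql5.6Password -e "SELECT VERSION();"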
Create a deployment file for wordpress:
kubectl create deploy wordpress --image=wordpress:5.6-apache --replicas=3 \
--dry-run=client -o yaml -n cka > deployment-wordpress.yaml
Edit the file `deployment-wordpress.yaml` and add the required configuration:
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: wordpress
name: wordpress
namespace: cka
spec:
replicas: 3
selector:
matchLabels:
app: wordpress
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
    spec:
# The deployment has PodAntiAffinity configured to ensure
# the scheduler does not co-locate replicas on a single node.
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- wordpress
topologyKey: "kubernetes.io/hostname"
# The following toleration "matches" the taint on the control-plane node, therefore
# a pod with this toleration would be able to schedule onto control-plane.
tolerations:
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: NoSchedule
containers:
- image: wordpress:5.6-apache
name: wordpress
env:
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-password
key: mysql_root_password
- name: WORDPRESS_DB_HOST
value: mysql
- name: WORDPRESS_DB_USER
value: root
resources:
requests:
cpu: 10m
memory: 64Mi
livenessProbe:
httpGet:
path: /readme.html
port: 80
periodSeconds: 5
Create the deployment and verify wordpress pod logs:
kubectl apply -f deployment-wordpress.yaml
kubectl logs $(kubectl get po -n cka -l app=wordpress -o name|head -n1) -n cka | grep WordPress
WordPress not found in /var/www/html - copying now...
Complete! WordPress has been successfully copied to /var/www/html
While the `tolerations` allow the wordpress pods to run on both the worker and the control-plane node, we only have one of each (a single control plane and a single worker), and the required pod anti-affinity allows at most one replica per node, so the 3rd wordpress replica cannot be scheduled and stays `Pending`.
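To confirm the reason, list the pods and look at the scheduler events for the Pending one:
kubectl get po -n cka -l app=wordpress -o wide
kubectl get events -n cka --field-selector reason=FailedScheduling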
Create a service for wordpress, which serves on port 80 and connects to the containers on port 80:
kubectl expose deployment wordpress --type=NodePort --port=80 --target-port=80 \
--dry-run=client -o yaml -n cka > service-wordpress.yaml
Edit the file `service-wordpress.yaml` and add `nodePort`:
---
apiVersion: v1
kind: Service
metadata:
labels:
app: wordpress
name: wordpress
namespace: cka
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
nodePort: 31234
selector:
app: wordpress
type: NodePort
Create the service and verify:
kubectl apply -f service-wordpress.yaml
kubectl get svc -n cka
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql ClusterIP 10.107.189.130 <none> 3306/TCP 20m
wordpress NodePort 10.110.61.109 <none> 80:31234/TCP 20m
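A quick check from outside the cluster that the NodePort answers (10.11.1.38 is the worker node provisioned earlier):
curl -sI http://10.11.1.38:31234/readme.html    # expect an HTTP 200 status line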
Create a file `netpol-mysql.yaml` that contains our network policy configuration:
cat > netpol-mysql.yaml <<EOF
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: netpol-mysql
namespace: cka
spec:
podSelector:
matchLabels:
app: mysql
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: wordpress
ports:
- protocol: TCP
port: 3306
egress:
- to:
- ipBlock:
cidr: 10.0.0.0/8
EOF
Deploy the network policy:
kubectl apply -f netpol-mysql.yaml
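If the cluster's CNI enforces NetworkPolicy (flannel alone does not, Calico and Cilium do), a pod without the `app: wordpress` label should no longer be able to reach MySQL; a quick hypothetical test:
kubectl run netpol-test --image=busybox:1.35 --rm -it --restart=Never -n cka -- \
  nc -zv -w 3 mysql 3306    # expected to time out when the policy is enforced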
Now, navigate your browser to http://10.11.1.38:31234/ and enjoy a brand new instance of WordPress on Kubernetes.
Take an `etcd` snapshot on the control plane by specifying the endpoint and certificates:
sudo ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /tmp/etcd-backup.db
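Optionally verify the snapshot before proceeding; `snapshot status` reports its hash, revision and size (etcd 3.5 prefers `etcdutl` for this, but etcdctl still works):
sudo ETCDCTL_API=3 etcdctl snapshot status /tmp/etcd-backup.db -w table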
Delete wordpress deployment:
kubectl delete deploy/wordpress -n cka
No wordpress pods should be present at this point.
Restore the `etcd` configuration from the snapshot. On the control plane, identify the default `data-dir`:
sudo grep data-dir /etc/kubernetes/manifests/etcd.yaml
- --data-dir=/var/lib/etcd
Stop all control plane components:
cd /etc/kubernetes/manifests/
sudo mv ./*yaml ../
Make sure that all control plane pods are `NotReady`:
sudo crictl pods | egrep "kube|etcd"
Restore the snapshot to the directory `/var/lib/etcd_backup`:
sudo ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
--data-dir /var/lib/etcd_backup \
snapshot restore /tmp/etcd-backup.db
Configure etcd to use the new directory `/var/lib/etcd_backup`:
sudo sed -i 's/\/var\/lib\/etcd/\/var\/lib\/etcd_backup/g' /etc/kubernetes/etcd.yaml
Start all control plane components:
cd /etc/kubernetes/manifests/
sudo mv ../*yaml ./
Give it some time (up to several minutes) for etcd and the rest of the control plane to restart, then verify that the wordpress pods are back:
kubectl get po -n cka
NAME READY STATUS RESTARTS AGE
mysql-0 1/1 Running 0 40m
wordpress-c7b5c7666-bntkc 1/1 Running 0 38s
wordpress-c7b5c7666-djrzl 0/1 Pending 0 38s
wordpress-c7b5c7666-lf6l7 1/1 Running 0 38s
For the declarative YAML solution, see bonus-task-solution.yaml.