We’ve just posted a course on the freecodecamp.org YouTube channel to help prepare you for the Certified Kubernetes Administrator certification. This course is designed to provide a deep, practical understanding of Kubernetes administration, from fundamental concepts to advanced troubleshooting.
You can view the course on the freeCodeCamp.org YouTube channel (2-hour watch):
https://www.youtube.com/watch?v=fr9gqfwl6nm
The course includes many hands-on Kubernetes demos. Below you'll find all the commands used in the course so you can easily practice on your own machine.
CKA Hands-on Companion: Commands and Demos
Part 1: Kubernetes Fundamentals and Lab Setup
This section uses a single-node cluster set up with kubeadm to create an environment that mirrors the CKA exam.
Section 1.3: Setting Up Your CKA Practice Environment
Step 1: Install the container runtime (on all nodes)
Load the required kernel modules:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
Configure sysctl for networking:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
Install containerd:
sudo apt-get update
sudo apt-get install -y containerd
Configure containerd to use the systemd cgroup driver:
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
Restart and enable containerd:
sudo systemctl restart containerd
sudo systemctl enable containerd
Step 2: Install the Kubernetes binaries (on all nodes)
Disable swap memory:
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
Add the Kubernetes apt repository:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Install and pin the binaries (adjust versions as needed):
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
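To confirm the binaries installed and are held at the expected version, you can check them before initializing the cluster:
kubectl version --client
kubeadm version
apt-mark showhold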
Step 3: Create a single-node cluster (on the control plane node)
Start the control plane node:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Create a kubeconfig for the admin user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Remove the control plane taint (so pods can schedule on the single node):
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
Install the Flannel CNI plugin:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
Verify the cluster:
kubectl get nodes
kubectl get pods -n kube-system
Part 2: Cluster Architecture, Installation and Configuration (25%)
Section 2.1: Bootstrapping a Multi-Node Cluster with kubeadm
Initialize the control plane (run on the control plane node)
Run kubeadm init (change the advertise address to your control plane node's IP):
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=<control-plane-ip>
- Note: save the kubeadm join command from the output.
Install the Calico CNI plugin:
kubectl apply -f <calico-manifest-url>
Verify the cluster and CNI installation:
kubectl get pods -n kube-system
kubectl get nodes
Joining worker nodes (run on each worker node)
Run the join command saved from kubeadm init:
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
Verify the full cluster (from the control plane node):
kubectl get nodes -o wide
Section 2.2: Cluster Lifecycle Management
Upgrading clusters with kubeadm (e.g., upgrading to v1.29.1)
Upgrade the control plane:
Upgrade the kubeadm binary:
sudo apt-mark unhold kubeadm
sudo apt-get update && sudo apt-get install -y kubeadm='1.29.1-1.1'
sudo apt-mark hold kubeadm
Plan and apply the upgrade (on the control plane node):
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.29.1
Upgrade kubelet and kubectl (on the control plane node):
sudo apt-mark unhold kubelet kubectl
sudo apt-get update && sudo apt-get install -y kubelet='1.29.1-1.1' kubectl='1.29.1-1.1'
sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Upgrade the worker node:
Drain the node (from the control plane node):
kubectl drain <node-name> --ignore-daemonsets
Upgrade the binaries (on the worker node):
sudo apt-mark unhold kubeadm kubelet
sudo apt-get update
sudo apt-get install -y kubeadm='1.29.1-1.1' kubelet='1.29.1-1.1'
sudo apt-mark hold kubeadm kubelet
Upgrade the node configuration and restart the kubelet (on the worker node):
sudo kubeadm upgrade node
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Uncordon the node (from the control plane node):
kubectl uncordon <node-name>
Back up and restore etcd (run on the control plane node)
Perform a backup (using the host's etcdctl):
sudo mkdir -p /var/lib/etcd-backup
sudo ETCDCTL_API=3 etcdctl snapshot save /var/lib/etcd-backup/snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
Perform a restore (on the control plane node):
sudo systemctl stop kubelet
sudo ETCDCTL_API=3 etcdctl snapshot restore /var/lib/etcd-backup/snapshot.db \
  --data-dir /var/lib/etcd-restored
sudo systemctl start kubelet
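Restoring the snapshot into a new data directory does not by itself switch etcd over to it. On a kubeadm cluster you would also point the etcd static pod at the restored directory before the kubelet recreates it; a minimal sketch, assuming the default kubeadm manifest layout:
sudo vi /etc/kubernetes/manifests/etcd.yaml
#   - change --data-dir=/var/lib/etcd to --data-dir=/var/lib/etcd-restored
#   - change the etcd-data hostPath volume to path: /var/lib/etcd-restored
# The kubelet recreates the etcd pod from the updated manifest once it is running again.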
Section 2.3: Implementing a Highly Available (HA) Control Plane
Initialize the first control plane node (change the control plane endpoint to your load balancer's address):
sudo kubeadm init --control-plane-endpoint "load-balancer.example.com:6443" --upload-certs
- Note: save the HA-specific join command and the --certificate-key value from the output.
Join additional control plane nodes (run on second and third control plane nodes):
sudo kubeadm join load-balancer.example.com:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <certificate-key>
Section 2.4: Managing Role-Based Access Control (RBAC)
Demo: Granting read-only access
Create a namespace and service account:
kubectl create namespace rbac-test
kubectl create serviceaccount dev-user -n rbac-test
Create a Role manifest (role.yaml):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: rbac-test
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
Apply:
kubectl apply -f role.yaml
Create a RoleBinding manifest (rolebinding.yaml):
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: rbac-test
subjects:
- kind: ServiceAccount
  name: dev-user
  namespace: rbac-test
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
Apply:
kubectl apply -f rolebinding.yaml
Verify the permissions:
kubectl auth can-i list pods --as=system:serviceaccount:rbac-test:dev-user -n rbac-test
kubectl auth can-i delete pods --as=system:serviceaccount:rbac-test:dev-user -n rbac-test
Section 2.5: Application Management with Helm and Kustomize
Demo: Installing an Application with Helm
Add a chart repository:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
Install a chart with a value override:
helm install my-nginx bitnami/nginx --set service.type=NodePort
Manage the application:
helm upgrade my-nginx bitnami/nginx --set service.type=ClusterIP
helm rollback my-nginx 1
helm uninstall my-nginx
Demo: Customizing a Deployment with Kustomize
Create a base manifest (my-app/base/deployment.yaml):
mkdir -p my-app/base
cat <<EOF > my-app/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: nginx
        image: nginx:1.25.0
EOF
Create a base kustomization file (my-app/base/kustomization.yaml):
cat <<EOF > my-app/base/kustomization.yaml
resources:
- deployment.yaml
EOF
Create a production overlay and patch:
mkdir -p my-app/overlays/production
cat <<EOF > my-app/overlays/production/patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
EOF
cat <<EOF > my-app/overlays/production/kustomization.yaml
bases:
- ../../base
patches:
- path: patch.yaml
EOF
Apply the overlay (note the -k flag for Kustomize):
kubectl apply -k my-app/overlays/production
Verify the change:
kubectl get deployment my-app
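If you want to inspect what the overlay renders before applying it, kubectl can print the result without touching the cluster:
kubectl kustomize my-app/overlays/production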
Part 3: Workload and Scheduling (15%)
Section 3.1: Mastering Deployment
Demo: Performing a Rolling Update
Create a deployment manifest (deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.24.0
        ports:
        - containerPort: 80
Apply:
kubectl apply -f deployment.yaml
Update the container image to trigger a rolling update:
kubectl set image deployment/nginx-deployment nginx=nginx:1.25.0
Watch the rollout:
kubectl rollout status deployment/nginx-deployment
kubectl get pods -l app=nginx -w
Implementing and verifying rollbacks
View revision history:
kubectl rollout history deployment/nginx-deployment
Roll back to the previous revision:
kubectl rollout undo deployment/nginx-deployment
Roll back to a specific revision (e.g., revision 1):
kubectl rollout undo deployment/nginx-deployment --to-revision=1
Section 3.2: Configuring Applications with ConfigMaps and Secrets
Methods of creation
ConfigMap: imperative creation:
kubectl create configmap app-config --from-literal=app.color=blue --from-literal=app.mode=production
echo "retries = 3" > config.properties
kubectl create configmap app-config-file --from-file=config.properties
Secret: imperative creation:
kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password='s3cr3t'
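The environment-variable demo below also references a ConfigMap named app-config-declarative with a ui.theme key, whose manifest isn't included in these notes. A minimal declarative sketch (the file name and the theme value are assumptions):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-declarative
data:
  ui.theme: dark   # illustrative value; the pod below only reads the key
Apply:
kubectl apply -f configmap-declarative.yaml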
Demo: Using ConfigMaps and Secrets in Pods
Manifest: environment variables (pod-config.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: config-demo-pod
spec:
  containers:
  - name: demo-container
    image: busybox
    command: ["/bin/sh", "-c", "env && sleep 3600"]
    env:
    - name: THEME
      valueFrom:
        configMapKeyRef:
          name: app-config-declarative
          key: ui.theme
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
  restartPolicy: Never
Apply:
kubectl apply -f pod-config.yaml
Verify:
kubectl logs config-demo-pod
Manifest: mounting a ConfigMap as a volume (pod-volume.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo-pod
spec:
  containers:
  - name: demo-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/config/config.properties && sleep 3600"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: app-config-file
  restartPolicy: Never
Apply:
kubectl apply -f pod-volume.yaml
Verify:
kubectl logs volume-demo-pod
Section 3.3: Implementing Workload Autoscaling
Demo: Installing and Verifying Metrics Server
Install Metrics Server:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Verify the installation:
kubectl top nodes
kubectl top pods -A
Demo: Autoscaling a Deployment
Create a deployment with resource requests (the course's hpa-demo-deployment.yaml isn't shown; a simple imperative alternative works, and a declarative sketch follows this demo):
kubectl create deployment php-apache --image=k8s.gcr.io/hpa-example
kubectl set resources deployment php-apache --requests=cpu=200m
kubectl expose deployment php-apache --port=80
Create an HPA (target 50% CPU, scale between 1 and 10 replicas):
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
Generate load (run this in a separate terminal):
kubectl run -it --rm load-generator --image=busybox -- /bin/sh -c "while true; do wget -q -O- http://php-apache; done"
Observe the scaling:
kubectl get hpa -w
(Stop the load generator to observe the scale-down.)
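For reference, since hpa-demo-deployment.yaml isn't reproduced here, a declarative sketch equivalent to the imperative php-apache commands above (the labels and file name are assumptions) could look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: php-apache
  template:
    metadata:
      labels:
        app: php-apache
    spec:
      containers:
      - name: php-apache
        image: k8s.gcr.io/hpa-example
        resources:
          requests:
            cpu: 200m   # the HPA's 50% target is measured against this request
        ports:
        - containerPort: 80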
Section 3.5: Advanced Scheduling
Demo: Using Node Affinity
Label a node:
kubectl label node <node-name> disktype=ssd
Create a pod with node affinity (the course's affinity-pod.yaml isn't shown; see the sketch after this demo):
kubectl apply -f affinity-pod.yaml
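A minimal affinity-pod.yaml sketch that targets the disktype=ssd label (the pod name and image are assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: affinity-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx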
Demo: Using Taints and Tolerations
Taint a node (effect: NoSchedule):
kubectl taint node <node-name> app=gpu:NoSchedule
Create a pod with a matching toleration (the course's toleration-pod.yaml isn't shown; see the sketch after this demo):
kubectl apply -f toleration-pod.yaml
Verify that the pod is scheduled on the tainted node:
kubectl get pod gpu-pod -o wide
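A minimal toleration-pod.yaml sketch that tolerates the taint above (the pod name gpu-pod matches the verification command; the image is an assumption):
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx
# Note: a toleration only permits scheduling on the tainted node; add a
# nodeSelector or node affinity if you need to force the pod onto it.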
Part 4: Services and Networking (20%)
Section 4.2: Kubernetes Services
Demo: Creating a ClusterIP Service
Create a deployment:
kubectl create deployment my-app --image=nginx --replicas=2
Expose the deployment with a ClusterIP Service (the course's clusterip-service.yaml isn't shown; an imperative command works):
kubectl expose deployment my-app --port=80 --target-port=80 --name=my-app-service --type=ClusterIP
Verify access from inside a temporary pod (see the check after this demo):
kubectl run tmp-shell --rm -it --image=busybox -- /bin/sh
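Once inside the temporary shell, a quick check against the Service's cluster DNS name (assuming everything is in the default namespace) might look like this:
wget -qO- http://my-app-service   # should return the nginx welcome page
exit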
Demo: Creating a NodePort Service
Create the NodePort Service (the course's nodeport-service.yaml isn't shown; an imperative command works):
kubectl expose deployment my-app --port=80 --target-port=80 --name=my-app-nodeport --type=NodePort
Verify the access information:
kubectl get service my-app-nodeport
kubectl get nodes -o wide
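With the NodePort Service in place, you can reach the app from outside the cluster using any node's IP and the allocated port (both shown as placeholders here):
curl http://<node-ip>:<node-port>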
Section 4.3: Ingress and Gateway API
Demo: Path-Based Routing with NGINX Ingress
Install the NGINX Ingress controller:
kubectl apply -f <ingress-nginx-manifest-url>
Deploy two sample applications and services:
kubectl create deployment app-one --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment app-one --port=8080
kubectl create deployment app-two --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment app-two --port=8080
Create an Ingress resource (the course's ingress.yaml isn't shown; see the sketch after this demo):
kubectl apply -f ingress.yaml
Test the Ingress:
INGRESS_IP=$(kubectl get svc -n ingress-nginx ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl http://$INGRESS_IP/app1
curl http://$INGRESS_IP/app2
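A minimal ingress.yaml sketch matching the test URLs above (the resource name and the nginx ingress class are assumptions):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-based-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /app1          # routed to app-one, matching the curl test above
        pathType: Prefix
        backend:
          service:
            name: app-one
            port:
              number: 8080
      - path: /app2          # routed to app-two
        pathType: Prefix
        backend:
          service:
            name: app-two
            port:
              number: 8080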
Section 4.4: Network Policies
Demo: Securing an Application with Network Policies
Create a default deny-all ingress policy (deny-all.yaml):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
Apply:
kubectl apply -f deny-all.yaml
Deploy a web server and a service:
kubectl create deployment web-server --image=nginx
kubectl expose deployment web-server --port=80
Test connectivity (this will fail):
kubectl run tmp-shell --rm -it --image=busybox -- /bin/sh -c "wget -O- --timeout=2 web-server"
Create an "allow" policy (allow-web-access.yaml):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-access
spec:
  podSelector:
    matchLabels:
      app: web-server
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"
    ports:
    - protocol: TCP
      port: 80
Apply:
kubectl apply -f allow-web-access.yaml
Test the "allow" policy (the connection will succeed):
kubectl run tmp-shell --rm -it --labels=access=true --image=busybox -- /bin/sh -c "wget -O- web-server"
Section 4.5: CoreDNS
Demo: Customizing CoreDNS for an external domain
Edit the CoreDNS ConfigMap:
kubectl edit configmap coredns -n kube-system
Add a new server block inside the Corefile data key (e.g., for my-corp.com):
my-corp.com:53 {
    errors
    cache 30
    forward . 10.10.0.53
}
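To confirm the new zone is being forwarded, you can test resolution from inside the cluster (host.my-corp.com is a hypothetical record that the 10.10.0.53 upstream would serve):
kubectl run dns-test --rm -it --image=busybox -- nslookup host.my-corp.com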
Part 5: Storage (10%)
Section 5.2: Creating Volumes
Demo: Static Provisioning
Create a PersistentVolume (pv.yaml):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    path: "/mnt/data"
Apply:
kubectl apply -f pv.yaml
Create a PersistentVolumeClaim (pvc.yaml):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
Apply:
kubectl apply -f pvc.yaml
Verify the binding:
kubectl get pv,pvc
Create a pod that uses the PVC (pod-storage.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: storage-pod
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: my-storage
  volumes:
  - name: my-storage
    persistentVolumeClaim:
      claimName: task-pv-claim
Apply:
kubectl apply -f pod-storage.yaml
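To confirm the claim is actually mounted, you can write a file through the pod and read it back (the file content is just an example):
kubectl exec storage-pod -- sh -c 'echo "hello from the PV" > /usr/share/nginx/html/index.html'
kubectl exec storage-pod -- cat /usr/share/nginx/html/index.html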
Section 5.3: Storage Classes and Dynamic Provisioning
Demo: Using the default storage class
Check out the available storage classes:
kubectl get storageclass
Create a PVC without a pre-provisioned PV (dynamic-pvc.yaml, relying on the default storage class):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-dynamic-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Apply:
kubectl apply -f dynamic-pvc.yaml
Observe the dynamic provisioning:
kubectl get pv
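You can also check that the claim itself reports a Bound status once the provisioner has created a volume for it:
kubectl get pvc my-dynamic-claim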
Part 6: Troubleshooting (30%)
Section 6.2: Troubleshooting Applications and Pods
Debugging tools for crashes and failures
Get detailed information about a resource (the most useful debugging command):
kubectl describe pod <pod-name>
Check the application logs (for the current container):
kubectl logs <pod-name>
Check the application logs (for the previous, crashed container instance):
kubectl logs <pod-name> --previous
Get a shell inside a running container for live debugging:
kubectl exec -it <pod-name> -- /bin/sh
Section 6.3: Troubleshooting Clusters and Nodes
Check node status:
kubectl get nodes
Get detailed node information:
kubectl describe node <node-name>
See node resource capacity (for scheduling issues):
kubectl describe node <node-name> | grep Allocatable
Check the kubelet service status (via SSH on the affected node):
sudo systemctl status kubelet
sudo journalctl -u kubelet -f
Re-enable scheduling on a cordoned node:
kubectl uncordon <node-name>
Section 6.5: Troubleshooting Services and Networking
Check service and endpoints (for connectivity issues):
kubectl describe service <service-name>
Check DNS resolution from a client pod (from within the client pod's shell):
kubectl exec -it client-pod -- nslookup <service-name>
Check network policies (to see if traffic is being blocked):
kubectl get networkpolicy
Section 6.6: Monitoring Cluster and Application Resource Usage
Get node resource usage (requires Metrics Server):
kubectl top nodes
Get pod resource usage (requires Metrics Server):
kubectl top pods -n <namespace>