Prepare for and Pass the Certified Kubernetes Administrator (CKA) Certification

by SkillAiNest

We’ve just posted a course on the freecodecamp.org YouTube channel to help prepare you for the Certified Kubernetes Administrator certification. This course is designed to provide a deep, practical understanding of Kubernetes administration, from fundamental concepts to advanced troubleshooting.

You can watch the course on the freeCodeCamp.org YouTube channel (2-hour watch):

https://www.youtube.com/watch?v=fr9gqfwl6nm

There are many demos in the course using Kubernetes. Below you can find all the commands used in the course so it’s easy for you to practice on your local machine.

CKA Hands-on Companion: Commands and Demos

Part 1: Kubernetes Fundamentals and Lab Setup

This section uses kubeadm to set up a single-node cluster that mirrors the CKA exam environment.

Section 1.3: Setting Up Your CKA Practice Environment

Step 1: Install the container runtime (on all nodes)

  1. Load the required kernel modules:

     cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
     overlay
     br_netfilter
     EOF
     sudo modprobe overlay
     sudo modprobe br_netfilter
    
  2. Configure sysctl parameters for networking:

     cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
     net.bridge.bridge-nf-call-iptables  = 1
     net.ipv4.ip_forward                 = 1
     net.bridge.bridge-nf-call-ip6tables = 1
     EOF
     sudo sysctl --system
    
  3. Install containerd:

     sudo apt-get update
     sudo apt-get install -y containerd
    
  4. Configure containerd to use the systemd cgroup driver:

     sudo mkdir -p /etc/containerd
     sudo containerd config default | sudo tee /etc/containerd/config.toml
     sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
    
  5. Restart and enable containerd:

     sudo systemctl restart containerd
     sudo systemctl enable containerd
    

Step 2: Install the Kubernetes binaries (on all nodes)

  1. Disable swap memory:

     sudo swapoff -a
     
     sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
    
  2. Add the Kubernetes apt repository:

     sudo apt-get update
     sudo apt-get install -y apt-transport-https ca-certificates curl gpg
     sudo mkdir -p -m 755 /etc/apt/keyrings
     curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
     echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
    
  3. Install and hold the binaries (adjust versions as needed):

     sudo apt-get update
     sudo apt-get install -y kubelet kubeadm kubectl
     sudo apt-mark hold kubelet kubeadm kubectl
    

Step 3: Create a single-node cluster (on the control plane node)

  1. Start the control plane node:

     sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    
  2. Create a kubeconfig for the admin user:

     mkdir -p $HOME/.kube
     sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
     sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
  3. Remove the control-plane taint:

     kubectl taint nodes --all node-role.kubernetes.io/control-plane-
    
  4. Install the Flannel CNI plugin:

     kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
    
  5. Verify the cluster:

     kubectl get nodes
     kubectl get pods -n kube-system
    

Part 2: Cluster Architecture, Installation and Configuration (25%)

Section 2.1: Bootstrapping a Multi-Node Cluster with kubeadm

Initializing the control plane (run on the control plane node)

  1. Run kubeadm init (change <control-plane-ip> to your node's address):

     sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=<control-plane-ip>
    
    • Note: Save the kubeadm join command from the output.
  2. Install the Calico CNI plugin (use the manifest URL from the Calico docs):

     kubectl apply -f <calico-manifest-url>
    
  3. Verify the cluster and CNI installation:

     kubectl get pods -n kube-system
     kubectl get nodes
    

Joining worker nodes (run on each worker node)

  1. Run the join command saved from the kubeadm init output:

     
     sudo kubeadm join <control-plane-ip>:6443 --token <token> \
         --discovery-token-ca-cert-hash sha256:<hash>
    
  2. Verify the full cluster (from the control plane node):

     kubectl get nodes -o wide
    

Section 2.2: Cluster Lifecycle Management

Upgrading clusters with kubeadm (e.g., upgrading to v1.29.1)

  1. Upgrade the control plane: upgrade the kubeadm binary:

     sudo apt-mark unhold kubeadm
     sudo apt-get update && sudo apt-get install -y kubeadm='1.29.1-1.1'
     sudo apt-mark hold kubeadm
    
  2. Plan and apply the upgrade (on the control plane node):

     sudo kubeadm upgrade plan
     sudo kubeadm upgrade apply v1.29.1
    
  3. Upgrade kubelet and kubectl (on the control plane node):

     sudo apt-mark unhold kubelet kubectl
     sudo apt-get update && sudo apt-get install -y kubelet='1.29.1-1.1' kubectl='1.29.1-1.1'
     sudo apt-mark hold kubelet kubectl
     sudo systemctl daemon-reload
     sudo systemctl restart kubelet
    
  4. Upgrade a worker node: drain the node (from the control plane node):

     kubectl drain <worker-node-name> --ignore-daemonsets
    
  5. Upgrade the binaries (on the worker node):

     
     sudo apt-mark unhold kubeadm kubelet
     sudo apt-get update
     sudo apt-get install -y kubeadm='1.29.1-1.1' kubelet='1.29.1-1.1'
     sudo apt-mark hold kubeadm kubelet
    
  6. Upgrade the node configuration and restart kubelet (on the worker node):

     
     sudo kubeadm upgrade node
     sudo systemctl daemon-reload
     sudo systemctl restart kubelet
    
  7. Uncordon the node (from the control plane node):

     kubectl uncordon <worker-node-name>
    

Backing up and restoring etcd (run on the control plane node)

  1. Take a snapshot (using etcdctl on the host):

     
     sudo mkdir -p /var/lib/etcd-backup
    
     sudo ETCDCTL_API=3 etcdctl snapshot save /var/lib/etcd-backup/snapshot.db \
          --endpoints=https://127.0.0.1:2379 \
         --cacert=/etc/kubernetes/pki/etcd/ca.crt \
         --cert=/etc/kubernetes/pki/etcd/server.crt \
         --key=/etc/kubernetes/pki/etcd/server.key
    
  2. Restore the snapshot (on the control plane node):

     # Stop the kubelet so the etcd static pod is not restarted during the restore
     sudo systemctl stop kubelet
    
     # Restore the snapshot into a new data directory
     sudo ETCDCTL_API=3 etcdctl snapshot restore /var/lib/etcd-backup/snapshot.db \
         --data-dir /var/lib/etcd-restored
    
     # Point the etcd static pod at the restored data directory
     # (see the manifest excerpt below), then restart the kubelet
     sudo systemctl start kubelet
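
After the restore, the etcd static pod has to use the restored data directory. One common approach is to repoint the etcd-data hostPath volume in the static pod manifest at the restored directory; a minimal excerpt, assuming the default kubeadm layout (paths may differ in your cluster):

     # /etc/kubernetes/manifests/etcd.yaml (excerpt)
     volumes:
     - hostPath:
         path: /var/lib/etcd-restored   # was /var/lib/etcd; points etcd at the restored snapshot
         type: DirectoryOrCreate
       name: etcd-data

Because static pod manifests are watched by the kubelet, saving this file recreates the etcd pod with the restored data.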
    

Section 2.3: Implementing a Highly Available (HA) Control Plane

  1. Initialize the first control plane node (change the load balancer address as needed):

     sudo kubeadm init --control-plane-endpoint "load-balancer.example.com:6443" --upload-certs
    
    • Note: Save the HA-specific join command and --certificate-key.
  2. Join additional control plane nodes (run on second and third control plane nodes):

     
     sudo kubeadm join load-balancer.example.com:6443 --token <token> \
         --discovery-token-ca-cert-hash sha256:<hash> \
         --control-plane --certificate-key <certificate-key>
    

Section 2.4: Managing Role-Based Access Control (RBAC)

Demo: Getting read-only access

  1. Create a namespace and service account:

     kubectl create namespace rbac-test
     kubectl create serviceaccount dev-user -n rbac-test
    
  2. Create the Role manifest (role.yaml):

     
     apiVersion: rbac.authorization.k8s.io/v1
     kind: Role
     metadata:
       namespace: rbac-test
       name: pod-reader
     rules:
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["get", "list", "watch"]
    

    Apply: kubectl apply -f role.yaml

  3. Create the RoleBinding manifest (rolebinding.yaml):

     
     apiVersion: rbac.authorization.k8s.io/v1
     kind: RoleBinding
     metadata:
       name: read-pods
       namespace: rbac-test
     subjects:
     - kind: ServiceAccount
       name: dev-user
       namespace: rbac-test
     roleRef:
       kind: Role
       name: pod-reader
       apiGroup: rbac.authorization.k8s.io
    

    Apply: kubectl apply -f rolebinding.yaml

  4. Verify the permissions:

     # Expected: yes
     kubectl auth can-i list pods --as=system:serviceaccount:rbac-test:dev-user -n rbac-test
    
     # Expected: no
     kubectl auth can-i delete pods --as=system:serviceaccount:rbac-test:dev-user -n rbac-test
    

Section 2.5: Application Management with Helm and Kustomize

Demo: Installing an Application with Helm

  1. Add a chart repository:

     helm repo add bitnami https://charts.bitnami.com/bitnami
     helm repo update
    
  2. Install a chart with value override:

     helm install my-nginx bitnami/nginx --set service.type=NodePort
    
  3. Manage the application:

     helm upgrade my-nginx bitnami/nginx --set service.type=ClusterIP
     helm rollback my-nginx 1
     helm uninstall my-nginx
    

Demo: Customizing a Deployment with Kustomize

  1. Create a base manifest (my-app/base/deployment.yaml):

     mkdir -p my-app/base
     cat <<EOF > my-app/base/deployment.yaml
     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: my-app
     spec:
       replicas: 1
       selector:
         matchLabels:
           app: my-app
       template:
         metadata:
           labels:
             app: my-app
         spec:
           containers:
           - name: nginx
             image: nginx:1.25.0
     EOF
    
  2. Create the base kustomization file (my-app/base/kustomization.yaml):

     cat <<EOF > my-app/base/kustomization.yaml
     resources:
     - deployment.yaml
     EOF
    
  3. Create production overlays and patches:

     mkdir -p my-app/overlays/production
     cat <<EOF > my-app/overlays/production/patch.yaml
     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: my-app
     spec:
       replicas: 3
     EOF
     cat <<EOF > my-app/overlays/production/kustomization.yaml
     bases:
     - ../../base
     patches:
     - path: patch.yaml
     EOF
    
  4. Apply the overlay (note the -k flag for Kustomize):

     kubectl apply -k my-app/overlays/production
    
  5. Confirm the change:

     kubectl get deployment my-app
    

Part 3: Workload and Scheduling (15%)

Section 3.1: Mastering Deployment

Demo: Performing a Rolling Update

  1. Create a base deployment manifest (deployment.yaml):

     
     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: nginx-deployment
     spec:
       replicas: 3
       selector:
         matchLabels:
           app: nginx
       template:
         metadata:
           labels:
             app: nginx
         spec:
           containers:
           - name: nginx
             image: nginx:1.24.0
             ports:
             - containerPort: 80
    

    Apply: kubectl apply -f deployment.yaml

  2. Update the container image to trigger a rolling update:

     kubectl set image deployment/nginx-deployment nginx=nginx:1.25.0
    
  3. Watch the rollout:

     kubectl rollout status deployment/nginx-deployment
     kubectl get pods -l app=nginx -w
    

Implementing and verifying rollbacks

  1. View revision history:

     kubectl rollout history deployment/nginx-deployment
    
  2. Roll back to previous version:

     kubectl rollout undo deployment/nginx-deployment
    
  3. Roll back to a specific revision (e.g., revision 1):

     kubectl rollout undo deployment/nginx-deployment --to-revision=1
    

Section 3.2: Configuring Applications with ConfigMaps and Secrets

Methods of creation

  1. ConfigMap: imperative creation:

     # From literal key-value pairs
     kubectl create configmap app-config --from-literal=app.color=blue --from-literal=app.mode=production
    
     # From a file
     echo "retries = 3" > config.properties
     kubectl create configmap app-config-file --from-file=config.properties
    
  2. Secret: imperative creation:

     # From literal key-value pairs
     kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password='s3cr3t'
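
The environment-variable demo below also reads from a ConfigMap named app-config-declarative with a ui.theme key, which the imperative commands above do not create. A minimal declarative manifest that would satisfy that reference (the name and key come from the pod spec; the value and the file name app-config-declarative.yaml are assumptions):

     apiVersion: v1
     kind: ConfigMap
     metadata:
       name: app-config-declarative
     data:
       ui.theme: "dark"

Apply: kubectl apply -f app-config-declarative.yaml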
    

Demo: Using ConfigMaps and Secrets in Pods

  1. Manifest: environment variables (pod-config.yaml):

     
     apiVersion: v1
     kind: Pod
     metadata:
       name: config-demo-pod
     spec:
       containers:
       - name: demo-container
         image: busybox
          command: ["/bin/sh", "-c", "env && sleep 3600"]
         env:
           - name: THEME
             valueFrom:
               configMapKeyRef:
                 name: app-config-declarative
                 key: ui.theme
           - name: DB_PASSWORD
             valueFrom:
               secretKeyRef:
                 name: db-credentials
                 key: password
       restartPolicy: Never
    

    Apply: kubectl apply -f pod-config.yaml. Confirm: kubectl logs config-demo-pod

  2. Manifest: mounting a ConfigMap as a volume (pod-volume.yaml):

     
     apiVersion: v1
     kind: Pod
     metadata:
       name: volume-demo-pod
     spec:
       containers:
       - name: demo-container
         image: busybox
          command: ["/bin/sh", "-c", "cat /etc/config/config.properties && sleep 3600"]
         volumeMounts:
         - name: config-volume
           mountPath: /etc/config
       volumes:
       - name: config-volume
         configMap:
           name: app-config-file
       restartPolicy: Never
    

    Apply: kubectl apply -f pod-volume.yaml. Confirm: kubectl logs volume-demo-pod

Section 3.3: Implementing Workload Autoscaling

Demo: Installing and Verifying the Metrics Server

  1. Install the Metrics Server:

     kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
    
  2. Confirm installation:

     kubectl top nodes
     kubectl top pods -A
    

Demo: Autoscaling a Deployment with an HPA

  1. Create a deployment with CPU resource requests (the hpa-demo-deployment.yaml manifest is not shown; use simple imperative commands, or see the declarative sketch at the end of this demo):

     kubectl create deployment php-apache --image=k8s.gcr.io/hpa-example
     kubectl set resources deployment php-apache --requests=cpu=200m
     kubectl expose deployment php-apache --port=80
    
  2. Create an HPA (target 50% CPU, scale between 1 and 10 replicas):

     kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
    
  3. Generate load (run this in a separate terminal and leave it running):

     kubectl run -it --rm load-generator --image=busybox -- /bin/sh -c "while true; do wget -q -O- http://php-apache; done"
    
  4. Observe the scaling:

     kubectl get hpa -w
    

    (Stop the load generator to observe the scale down)
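
A declarative equivalent of the php-apache deployment above, for reference (a sketch of what an hpa-demo-deployment.yaml could contain; the course's exact manifest is not shown):

     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: php-apache
     spec:
       replicas: 1
       selector:
         matchLabels:
           app: php-apache
       template:
         metadata:
           labels:
             app: php-apache
         spec:
           containers:
           - name: php-apache
             image: k8s.gcr.io/hpa-example
             ports:
             - containerPort: 80
             resources:
               requests:
                 cpu: 200m   # the HPA scales on CPU usage relative to this request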

Section 3.5: Advanced Scheduling

Demo: Using Node Affinity

  1. Label a node:

     kubectl label node <node-name> disktype=ssd
    
  2. Create a pod with node affinity (the affinity-pod.yaml manifest is not shown; create a pod that requires the disktype=ssd label, as sketched below):

     
     kubectl apply -f affinity-pod.yaml 
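
A minimal affinity-pod.yaml targeting the disktype=ssd label set above (a sketch; the course's exact manifest is not shown):

     apiVersion: v1
     kind: Pod
     metadata:
       name: affinity-pod
     spec:
       affinity:
         nodeAffinity:
           requiredDuringSchedulingIgnoredDuringExecution:
             nodeSelectorTerms:
             - matchExpressions:
               - key: disktype
                 operator: In
                 values:
                 - ssd
       containers:
       - name: nginx
         image: nginx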
    

Demo: Using Taints and Tolerations

  1. Taint a node (effect: NoSchedule):

     kubectl taint node <node-name> app=gpu:NoSchedule
    
  2. Create a pod with a toleration (the toleration-pod.yaml manifest is not shown; a sketch follows after this list):

     
     kubectl apply -f toleration-pod.yaml 
    
  3. Verify pod scheduling on the tainted node:

     kubectl get pod gpu-pod -o wide
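
A minimal toleration-pod.yaml that tolerates the app=gpu:NoSchedule taint above (a sketch; the pod name gpu-pod matches the verification command in step 3):

     apiVersion: v1
     kind: Pod
     metadata:
       name: gpu-pod
     spec:
       tolerations:
       - key: "app"
         operator: "Equal"
         value: "gpu"
         effect: "NoSchedule"
       containers:
       - name: nginx
         image: nginx

Note that a toleration only allows the pod onto the tainted node; on a multi-node cluster you would also add a nodeSelector or node affinity to force it there.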
    

Part 4: Services and Networking (20%)

Section 4.2: Kubernetes Services

Demo: Creating a ClusterIP Service

  1. Create a deployment:

     kubectl create deployment my-app --image=nginx --replicas=2
    
  2. Expose the deployment with a ClusterIP service (the clusterip-service.yaml manifest is not shown; use an imperative command, or see the declarative sketch after this list):

     kubectl expose deployment my-app --port=80 --target-port=80 --name=my-app-service --type=ClusterIP
    
  3. Verify access (from inside a temporary pod):

     kubectl run tmp-shell --rm -it --image=busybox -- /bin/sh
     # Then, inside the pod's shell:
     wget -qO- my-app-service
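
A declarative equivalent of the imperative expose command above (a sketch of what clusterip-service.yaml could look like; the selector matches the labels kubectl create deployment applies):

     apiVersion: v1
     kind: Service
     metadata:
       name: my-app-service
     spec:
       type: ClusterIP
       selector:
         app: my-app
       ports:
       - port: 80
         targetPort: 80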
     
     
    

Demo: Creating a NodePort Service

  1. Create the NodePort service (the nodeport-service.yaml manifest is not shown; use an imperative command, or see the declarative sketch after this list):

     kubectl expose deployment my-app --port=80 --target-port=80 --name=my-app-nodeport --type=NodePort
    
  2. Verify the access information:

     kubectl get service my-app-nodeport
     kubectl get nodes -o wide
     # Then access the service at http://<node-ip>:<node-port>
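
A declarative equivalent of the NodePort service (a sketch of what nodeport-service.yaml could look like):

     apiVersion: v1
     kind: Service
     metadata:
       name: my-app-nodeport
     spec:
       type: NodePort
       selector:
         app: my-app
       ports:
       - port: 80
         targetPort: 80
         # nodePort: 30080   # optional; omit it to let Kubernetes pick a port in 30000-32767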
     
    

Section 4.3: Ingress and Gateway API

Demo: Path-Based Routing with the NGINX Ingress Controller

  1. Install the NGINX Ingress Controller (use the deploy manifest URL from the ingress-nginx docs):

     kubectl apply -f <ingress-nginx-deploy-manifest-url>
    
  2. Deploy two sample applications and services:

     kubectl create deployment app-one --image=k8s.gcr.io/echoserver:1.4
     kubectl expose deployment app-one --port=8080
    
     kubectl create deployment app-two --image=k8s.gcr.io/echoserver:1.4
     kubectl expose deployment app-two --port=8080
    
  3. Create an Ingress resource (the ingress.yaml manifest is not shown; a sketch with the /app1 and /app2 paths follows after this list):

     
     kubectl apply -f ingress.yaml
    
  4. Test the Ingress:

     INGRESS_IP=$(kubectl get svc -n ingress-nginx ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
     curl http://$INGRESS_IP/app1
     curl http://$INGRESS_IP/app2
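
A sketch of an ingress.yaml matching the paths tested above (save it as ingress.yaml before running step 3's apply; the exact manifest from the course is not shown, so the rewrite annotation and class name are assumptions):

     apiVersion: networking.k8s.io/v1
     kind: Ingress
     metadata:
       name: path-routing
       annotations:
         nginx.ingress.kubernetes.io/rewrite-target: /
     spec:
       ingressClassName: nginx
       rules:
       - http:
           paths:
           - path: /app1
             pathType: Prefix
             backend:
               service:
                 name: app-one
                 port:
                   number: 8080
           - path: /app2
             pathType: Prefix
             backend:
               service:
                 name: app-two
                 port:
                   number: 8080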
    

Section 4.4: Network Policies

Demo: Securing an Application with Network Policies

  1. Create a default deny-all ingress policy (deny-all.yaml):

     
     apiVersion: networking.k8s.io/v1
     kind: NetworkPolicy
     metadata:
       name: default-deny-ingress
     spec:
       podSelector: {} 
       policyTypes:
       - Ingress
    

    Apply: kubectl apply -f deny-all.yaml

  2. Deploy a web server and a service:

     kubectl create deployment web-server --image=nginx
     kubectl expose deployment web-server --port=80
    
  3. Test connectivity (it will fail):

     kubectl run tmp-shell --rm -it --image=busybox -- /bin/sh -c "wget -O- --timeout=2 web-server"
    
  4. Create an “Allow” policy (allow-web-access.yaml):

     
     apiVersion: networking.k8s.io/v1
     kind: NetworkPolicy
     metadata:
       name: allow-web-access
     spec:
       podSelector:
         matchLabels:
           app: web-server
       policyTypes:
       - Ingress
       ingress:
       - from:
         - podSelector:
             matchLabels:
               access: "true"
         ports:
         - protocol: TCP
           port: 80
    

    Apply: kubectl apply -f allow-web-access.yaml

  5. Check the “Allow” policy (the connection will succeed):

     kubectl run tmp-shell --rm -it --labels=access=true --image=busybox -- /bin/sh -c "wget -O- web-server"
    

Section 4.5: CoreDNS

Demo: Customizing CoreDNS for an external domain

  1. Edit the CoreDNS ConfigMap:

     kubectl edit configmap coredns -n kube-system
    
  2. Add a new server block inside the Corefile data (e.g., for my-corp.com):

     
         my-corp.com:53 {
             errors
             cache 30
             forward . 10.10.0.53 
         }
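
In the ConfigMap, the new block sits alongside the default .:53 server block. A sketch of how the edited data section might look (the default block is abbreviated here; 10.10.0.53 is the example forwarder above):

     apiVersion: v1
     kind: ConfigMap
     metadata:
       name: coredns
       namespace: kube-system
     data:
       Corefile: |
         .:53 {
             # ...default plugins (errors, health, kubernetes, forward, cache, loop, reload)...
         }
         my-corp.com:53 {
             errors
             cache 30
             forward . 10.10.0.53
         }

If the reload plugin is enabled (the kubeadm default), CoreDNS picks up the change automatically; otherwise restart the coredns deployment in kube-system.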
     
    

Part 5: Storage (10%)

Section 5.2: Creating Volumes

Static provisioning demo

  1. Create a PersistentVolume (pv.yaml):

     
     apiVersion: v1
     kind: PersistentVolume
     metadata:
       name: task-pv-volume
     spec:
       capacity:
         storage: 10Gi
       accessModes:
         - ReadWriteOnce
       persistentVolumeReclaimPolicy: Retain
       storageClassName: manual
       hostPath:
         path: "/mnt/data"
    

    Apply: kubectl apply -f pv.yaml

  2. Create a PersistentVolumeClaim (pvc.yaml):

     
     apiVersion: v1
     kind: PersistentVolumeClaim
     metadata:
       name: task-pv-claim
     spec:
       storageClassName: manual
       accessModes:
         - ReadWriteOnce
       resources:
         requests:
           storage: 3Gi
    

    Apply: kubectl apply -f pvc.yaml

  3. Verify the binding:

     kubectl get pv,pvc
    
  4. Create a pod that uses the PVC (pod-storage.yaml):

     
     apiVersion: v1
     kind: Pod
     metadata:
       name: storage-pod
     spec:
       containers:
         - name: nginx
           image: nginx
           volumeMounts:
           - mountPath: "/usr/share/nginx/html"
             name: my-storage
       volumes:
         - name: my-storage
           persistentVolumeClaim:
             claimName: task-pv-claim
    

    Apply: kubectl apply -f pod-storage.yaml

Section 5.3: Storage Classes and Dynamic Provisioning

Demo: Using the default storage class

  1. List the available storage classes:

     kubectl get storageclass
    
  2. Create a PVC without a pre-created PV (dynamic-pvc.yaml; it relies on the default StorageClass):

     
     apiVersion: v1
     kind: PersistentVolumeClaim
     metadata:
       name: my-dynamic-claim
     spec:
       accessModes:
         - ReadWriteOnce
       resources:
         requests:
           storage: 1Gi
    

    Apply: kubectl apply -f dynamic-pvc.yaml

  3. Observe the dynamically provisioned PV:

     kubectl get pv
    

Part 6: Troubleshooting (30%)

Section 6.2: Troubleshooting Applications and Pods

Debugging tools for crashes and failures

  1. Get detailed information about a resource (the most useful debugging command):

     kubectl describe pod <pod-name>
    
  2. Check the application logs (for the current container):

     kubectl logs <pod-name>
    
  3. Check the application logs (for the previous, crashed container instance):

     kubectl logs <pod-name> --previous
    
  4. Get a shell inside the running container for live debugging:

     kubectl exec -it <pod-name> -- /bin/sh
    

Section 6.3: Troubleshooting Clusters and Nodes

  1. Check node status:

     kubectl get nodes
    
  2. Get detailed node information:

     kubectl describe node <node-name>
    
  3. See node resource capacity (for scheduling issues):

     kubectl describe node <node-name> | grep Allocatable
    
  4. Check the kubelet service status (via SSH on the affected node):

     sudo systemctl status kubelet
     sudo journalctl -u kubelet -f
    
  5. Re-enable scheduling on a cordoned node:

     kubectl uncordon <node-name>
    

Section 6.5: Troubleshooting Services and Networking

  1. Check service and endpoints (for connectivity issues):

     kubectl describe service <service-name>
    
  2. Check DNS resolution from a client pod (from within the client pod shell):

     kubectl exec -it client-pod -- nslookup <service-name>
    
  3. Check network policies (to see if traffic is being blocked):

     kubectl get networkpolicy
    

Section 6.6: Monitoring Cluster and Application Resource Usage

  1. Get node resource usage (requires the Metrics Server):

     kubectl top nodes
    
  2. Get pod resource usage (requires the Metrics Server):

     kubectl top pods -n <namespace>
    
