
Most Recent Linux Foundation CKA Exam Dumps

 

Prepare for the Linux Foundation Certified Kubernetes Administrator exam with our extensive collection of questions and answers. These practice Q&As are updated to the latest syllabus, giving you the tools you need to review and test your knowledge.

QA4Exam focuses on the latest syllabus and exam objectives, and our practice Q&As are designed to help you identify key topics and solidify your understanding. By concentrating on the core curriculum, these Questions & Answers cover all the essential topics, ensuring you're well prepared for every section of the exam. Each question comes with a detailed explanation, offering valuable insights and helping you learn from your mistakes. Whether you're looking to assess your progress or dive deeper into complex topics, our updated Q&As will give you the support you need to approach the Linux Foundation CKA exam with confidence and achieve success.

The questions for CKA were last updated on Apr 21, 2026.
Question No. 1

SIMULATION

Score: 7%

Task

Given an existing Kubernetes cluster running version 1.20.0, upgrade all of the Kubernetes control plane and node components on the master node only to version 1.20.1.

Be sure to drain the master node before upgrading it and uncordon it after the upgrade.

You are also expected to upgrade kubelet and kubectl on the master node.


SOLUTION:

[student@node-1] > ssh ek8s

kubectl cordon k8s-master

kubectl drain k8s-master --delete-local-data --ignore-daemonsets --force

apt-get update

apt-get install -y kubeadm=1.20.1-00 kubelet=1.20.1-00 kubectl=1.20.1-00

kubeadm upgrade apply 1.20.1 --etcd-upgrade=false

systemctl daemon-reload

systemctl restart kubelet

kubectl uncordon k8s-master
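Once the node is uncordoned, it is worth confirming that it reports the new kubelet version with kubectl get nodes. The check below is only a sketch: it parses hard-coded sample output, since no live cluster is assumed here (the node name k8s-master comes from the solution above).

```shell
# Sample output standing in for: kubectl get nodes
sample='NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   30d   v1.20.1'

# Extract the VERSION column for the master node and confirm the upgrade.
ver=$(echo "$sample" | awk '$1 == "k8s-master" {print $NF}')
[ "$ver" = "v1.20.1" ] && echo "upgrade verified: $ver"
```

On a real cluster you would pipe `kubectl get nodes` into the same awk filter instead of the sample string.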


Question No. 2

SIMULATION

Create a persistent volume with name app-data, of capacity 2Gi and access mode ReadWriteMany. The type of volume is hostPath and its location is /srv/app-data.


SOLUTION:

Persistent Volume

A PersistentVolume is a piece of storage in a Kubernetes cluster. PersistentVolumes are a cluster-level resource, like nodes, and do not belong to any namespace. They are provisioned by the administrator with a defined capacity. This way, a developer deploying an app on Kubernetes need not know the underlying infrastructure: when the developer needs a certain amount of persistent storage for an application, they simply consume a PersistentVolume that the administrator has already provisioned.

Creating Persistent Volume

kind: PersistentVolume
apiVersion: v1
metadata:
  name: app-data
spec:
  capacity:                 # defines the capacity of the PV we are creating
    storage: 2Gi            # the amount of storage we are trying to claim
  accessModes:              # defines the access rights of the volume we are creating
    - ReadWriteMany
  storageClassName: shared  # must match the PVC's storageClassName so the claim can bind
  hostPath:
    path: '/srv/app-data'   # host path backing the volume
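Before running kubectl create -f, the manifest can be written to a file and its key fields sanity-checked with grep. The sketch below uses the file name app-data.yaml, which is just a convention, and checks the fields named in the task statement:

```shell
# Write the PV manifest to a file (fields taken from the task statement).
cat > app-data.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /srv/app-data
EOF

# Verify each required field is present before applying the manifest.
grep -q 'name: app-data' app-data.yaml && echo 'name ok'
grep -q 'storage: 2Gi'   app-data.yaml && echo 'capacity ok'
grep -q 'ReadWriteMany'  app-data.yaml && echo 'access mode ok'
grep -q '/srv/app-data'  app-data.yaml && echo 'hostPath ok'
```

A few seconds of grepping catches the most common exam mistakes (misspelled name, wrong capacity, wrong access mode) before kubectl ever sees the file.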

Challenge

1. Using the manifest above, create a file (for example app-data.yaml) describing a Persistent Volume named app-data, with access mode ReadWriteMany, storage class name shared, 2Gi of storage capacity and the host path /srv/app-data.

2. Save the file and create the persistent volume:

kubectl create -f app-data.yaml

3. View the persistent volume:

kubectl get pv

Our persistent volume's status is Available, meaning it has not been mounted yet. This status will change once we bind the PersistentVolume to a PersistentVolumeClaim.

PersistentVolumeClaim

In a real ecosystem, a system admin creates the PersistentVolume, and a developer then creates a PersistentVolumeClaim, which is referenced in a pod. A PersistentVolumeClaim is created by specifying the minimum size and the access mode required from the PersistentVolume.

Challenge

Create a Persistent Volume Claim that requests the Persistent Volume we had created above. The claim should request 2Gi. Ensure that the Persistent Volume Claim has the same storageClassName as the persistentVolume you had previously created.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: shared

2. Save and create the pvc

njerry191@cloudshell:~ (extreme-clone-2654111)$ kubectl create -f app-data.yaml

persistentvolumeclaim/app-data created

3. View the PVC:

kubectl get pvc

4. Let's see what has changed in the PV we initially created:

kubectl get pv

Our status has now changed from Available to Bound.

5. Create a new pod named myapp with image nginx that mounts the Persistent Volume Claim at the path /var/app/config.

Mounting a Claim

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: myapp
spec:
  volumes:
    - name: configpvc
      persistentVolumeClaim:
        claimName: app-data
  containers:
    - image: nginx
      name: app
      volumeMounts:
        - mountPath: '/var/app/config'
          name: configpvc
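A common slip in pod manifests like this is a mismatch between the volume name declared under volumes: and the one referenced under volumeMounts:. The sketch below writes the manifest to a hypothetical file myapp-pod.yaml and checks that the two names agree:

```shell
# Write the pod manifest (the file name is just a convention).
cat > myapp-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  volumes:
    - name: configpvc
      persistentVolumeClaim:
        claimName: app-data
  containers:
    - image: nginx
      name: app
      volumeMounts:
        - mountPath: /var/app/config
          name: configpvc
EOF

# The name under volumes: must match the name under volumeMounts:,
# or the pod will fail validation.
vol=$(awk '/volumes:/ {f=1} f && /- name:/ {print $3; exit}' myapp-pod.yaml)
mnt=$(awk '/volumeMounts:/ {f=1} f && /name:/ {print $2; exit}' myapp-pod.yaml)
[ "$vol" = "$mnt" ] && echo "volume names match: $vol"
```

The same two-line awk check works on any single-pod manifest with one volume; with several volumes you would compare the full sets of names instead.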


Question No. 3

SIMULATION

Create and configure the service front-end-service so it's accessible through NodePort and routes to the existing pod named front-end.


SOLUTION:

One way to do this is with kubectl expose, which reuses the pod's labels as the service selector (this assumes the front-end pod listens on port 80; adjust --port and --target-port to the pod's actual container port):

kubectl expose pod front-end --name=front-end-service --type=NodePort --port=80

kubectl get svc front-end-service # confirm the assigned NodePort


Question No. 4

SIMULATION

Create a pod that echoes 'hello world' and then exits. Have the pod deleted automatically once it has completed.


kubectl run busybox --image=busybox -it --rm --restart=Never -- /bin/sh -c 'echo hello world'

kubectl get po # You shouldn't see pod with the name 'busybox'
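Everything after the bare -- is the command the container runs, so it can be tried locally to confirm what the pod will print before it exits:

```shell
# The exact command the busybox container executes:
/bin/sh -c 'echo hello world'
# → hello world
```

The --rm flag then deletes the pod after that command finishes, which is why it no longer shows up in kubectl get po.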


Question No. 5

SIMULATION

You must connect to the correct host.

Failure to do so may result in a zero score.

[candidate@base] $ ssh cka000060

Task

Install Argo CD in the cluster by performing the following tasks:

Add the official Argo CD Helm repository with the name argo

The Argo CD CRDs have already been pre-installed in the cluster

Generate a template of the Argo CD Helm chart version 7.7.3 for the argocd namespace and save it to ~/argo-helm.yaml. Configure the chart to not install CRDs.


Task Summary

SSH into cka000060

Add the Argo CD Helm repo named argo

Generate a manifest (~/argo-helm.yaml) for Argo CD version 7.7.3

Target namespace: argocd

Do not install CRDs

Just generate, don't install

Step-by-Step Solution

1. SSH into the correct host

ssh cka000060

Required: skipping this step results in a zero score.

2. Add the Argo CD Helm repository

helm repo add argo https://argoproj.github.io/argo-helm

helm repo update

This adds the official Argo Helm chart source.

3. Generate the Argo CD Helm chart template (version 7.7.3)

Use the helm template command to generate a manifest and write it to ~/argo-helm.yaml.

helm template argocd argo/argo-cd \
  --version 7.7.3 \
  --namespace argocd \
  --set crds.install=false \
  > ~/argo-helm.yaml

argocd: the release name (it can be anything; here it matches the namespace)

--set crds.install=false: disables CRD installation (the CRDs are already pre-installed)

> ~/argo-helm.yaml: saves the rendered manifest to the required file

4. Verify the generated file (optional but recommended)

head ~/argo-helm.yaml

Check that it contains valid Kubernetes YAML and does not include CRDs.
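The "does not include CRDs" check can be scripted with grep. The sketch below runs against a tiny stand-in file, since no rendered chart is available here; in practice you would grep ~/argo-helm.yaml directly:

```shell
# Tiny stand-in for the rendered manifest (hypothetical content; a real
# render with crds.install=false contains Deployments, Services, etc.).
cat > /tmp/argo-helm-sample.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
EOF

# With crds.install=false the rendered output should contain zero CRDs.
crds=$(grep -c 'kind: CustomResourceDefinition' /tmp/argo-helm-sample.yaml || true)
echo "CRD documents: $crds"
```

A nonzero count would mean the --set crds.install=false flag was dropped or misspelled when the template was generated.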

Final Command Summary

ssh cka000060

helm repo add argo https://argoproj.github.io/argo-helm

helm repo update

helm template argocd argo/argo-cd \
  --version 7.7.3 \
  --namespace argocd \
  --set crds.install=false \
  > ~/argo-helm.yaml

head ~/argo-helm.yaml # Optional verification

