Prepare for the Linux Foundation Kubernetes and Cloud Native Associate exam with our extensive collection of questions and answers. These practice Q&A are updated according to the latest syllabus, providing you with the tools needed to review and test your knowledge.
QA4Exam focuses on the latest syllabus and exam objectives; our practice Q&A are designed to help you identify key topics and solidify your understanding. By focusing on the core curriculum, these Questions & Answers help you cover all the essential topics, ensuring you're well prepared for every section of the exam. Each question comes with a detailed explanation, offering valuable insights and helping you learn from your mistakes. Whether you're looking to assess your progress or dive deeper into complex topics, our updated Q&A will provide the support you need to confidently approach the Linux Foundation KCNA exam and achieve success.
Which key-value store is used to persist Kubernetes cluster data?
Kubernetes stores its cluster state (API objects) in etcd, making A correct. etcd is a distributed, strongly consistent key-value store that serves as the source of truth for the Kubernetes control plane. When you create or update objects such as Pods, Deployments, ConfigMaps, Secrets, or Nodes, the kube-apiserver validates the request and then persists the desired state into etcd. Controllers and the scheduler watch the API for changes (which ultimately reflect etcd state) and reconcile the cluster to match that desired state.
etcd's consistency guarantees are crucial. Kubernetes relies on accurate, up-to-date state to make scheduling decisions, enforce RBAC/admission policies, coordinate leader elections, and ensure controllers behave correctly. etcd uses the Raft consensus algorithm to replicate data among members and requires quorum for writes, enabling fault tolerance when deployed in HA configurations (commonly three or five members).
The other options are incorrect in Kubernetes' standard architecture. ZooKeeper is a distributed coordination system used by some other platforms, but Kubernetes does not use it as its primary datastore. Redis is an in-memory data store used for caching or messaging, not as Kubernetes' authoritative state store. "ControlPlaneStore" is not a standard Kubernetes component.
Operationally, etcd health is one of the most important determinants of cluster reliability. Slow disk I/O or unstable networking can degrade etcd performance and cause API latency spikes. Backup and restore procedures for etcd are critical disaster-recovery practices, and securing etcd (TLS, access restrictions) is essential because it contains sensitive data: Secrets, for example, are stored only base64-encoded unless encryption at rest is configured.
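As a sketch, a point-in-time backup can be taken with etcdctl's snapshot command. The endpoint and certificate paths below are illustrative (they match a typical kubeadm layout) and will differ per cluster:

```shell
# Take a point-in-time snapshot of etcd (paths and endpoint are illustrative).
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Inspect the snapshot's size, revision, and hash.
ETCDCTL_API=3 etcdctl snapshot status /backup/etcd-snapshot.db --write-out=table
```

Restores from such a snapshot, combined with off-cluster storage of the backup file, form the core of an etcd disaster-recovery plan.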
Therefore, the verified Kubernetes datastore is etcd, option A.
=========
Which of these is a valid container restart policy?
The correct answer is D: On failure. In Kubernetes, restart behavior is controlled by the Pod-level field spec.restartPolicy, with valid values Always, OnFailure, and Never. The option presented here ("On failure") maps to Kubernetes' OnFailure policy. This setting determines what the kubelet should do when containers exit:
Always: restart containers whenever they exit (typical for long-running services)
OnFailure: restart containers only if they exit with a non-zero status (common for batch workloads)
Never: do not restart containers (leave them terminated)
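In a manifest, the policy is set at the Pod level rather than per container; a minimal sketch (names, image, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-task            # illustrative name
spec:
  restartPolicy: OnFailure    # restart containers only on non-zero exit
  containers:
  - name: worker
    image: busybox:1.36
    command: ["sh", "-c", "do-work || exit 1"]   # illustrative command
```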
So "On failure" is a valid restart policy concept and the only option in the list that matches Kubernetes semantics.
The other options are not Kubernetes restart policies. "On login," "On update," and "On start" are not recognized values and don't align with how Kubernetes models the container lifecycle. Kubernetes is declarative and event-driven: it reacts to container exit codes and controller intent, not user "logins."
Operationally, choosing the right restart policy is important. Jobs typically use restartPolicy: OnFailure or Never because the goal is completion, not continuous uptime. Deployments require Always (the only value allowed in their Pod template), since the workload should keep serving traffic and a crashed container should be restarted. Controllers also interact with restarts: a Deployment's ReplicaSet replaces Pods that are deleted or fail outright, while a Job counts completions and failures based on Pod termination behavior.
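For a batch workload, this combination looks like the following sketch (names are illustrative); backoffLimit caps how many failed Pods the Job will create before giving up:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: data-migration       # illustrative name
spec:
  backoffLimit: 3            # give up after 3 failed Pods
  template:
    spec:
      restartPolicy: Never   # each failure yields a new Pod, counted by the Job
      containers:
      - name: migrate
        image: busybox:1.36
        command: ["sh", "-c", "echo migrating && exit 0"]
```

With restartPolicy: OnFailure instead, the kubelet would restart the container in place, and the Job would track retries through container restarts rather than replacement Pods.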
Therefore, among the options, the only valid (Kubernetes-aligned) restart policy is D.
=========
In the DevOps framework and culture, who builds, automates, and offers continuous delivery tools for developer teams?
The correct answer is C (Platform Engineers). In modern DevOps and platform operating models, platform engineering teams build and maintain the shared delivery capabilities that product/application teams use to ship software safely and quickly. This includes CI/CD pipeline templates, standardized build and test automation, artifact management (registries), deployment tooling (Helm/Kustomize/GitOps), secrets management patterns, policy guardrails, and paved-road workflows that reduce cognitive load for developers.
While application developers (B) write the application code and often contribute pipeline steps for their service, the "build, automate, and offer tooling for developer teams" responsibility maps directly to platform engineering: they provide the internal platform that turns Kubernetes and cloud services into a consumable product. This is especially common in Kubernetes-based organizations where you want consistent deployment standards, repeatable security checks, and uniform observability.
Cluster operators (D) typically focus on the health and lifecycle of the Kubernetes clusters themselves: upgrades, node pools, networking, storage, cluster security posture, and control plane reliability. They may work closely with platform engineers, but "continuous delivery tools for developer teams" is broader than cluster operations. Application users (A) are consumers of the software, not builders of delivery tooling.
In cloud-native application delivery, this division of labor is important: platform engineers enable higher velocity with safety by automating the software supply chain: builds, tests, scans, deploys, progressive delivery, and rollback. Kubernetes provides the runtime substrate, but the platform team makes it easy and safe for developers to use it repeatedly and consistently across many services.
Therefore, Platform Engineers (C) is the verified correct choice.
=========
Which cloud native tool keeps Kubernetes clusters in sync with sources of configuration (like Git repositories), and automates updates to configuration when there is new code to deploy?
Tools that continuously reconcile cluster state to match a Git repository's desired configuration are GitOps controllers, and the best match here is Flux and ArgoCD, so A is correct. GitOps is the practice where Git is the source of truth for declarative system configuration. A GitOps tool continuously compares the desired state (manifests/Helm/Kustomize outputs stored in Git) with the actual state in the cluster and then applies changes to eliminate drift.
Flux and Argo CD both implement this reconciliation loop. They watch Git repositories, detect updates (new commits/tags), and apply the updated Kubernetes resources. They also surface drift and sync status, enabling auditable, repeatable deployments and easy rollbacks (revert Git). This model improves delivery velocity and security because changes flow through code review, and cluster changes can be restricted to the GitOps controller identity rather than ad-hoc human kubectl access.
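As an illustration of this reconciliation model, an Argo CD Application resource declares the Git source and the target cluster/namespace to keep in sync (the repository URL, path, and names below are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app            # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/repo.git   # placeholder repo
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc         # the local cluster
    namespace: demo
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift in the cluster
```

With automated sync enabled, reverting a commit in Git rolls the cluster back; no one needs to run kubectl against production directly.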
Option B ("GitOps Toolkit") is related (Flux v2 is built from the GitOps Toolkit's component APIs and controllers), but the question asks for a "tool" that keeps clusters in sync; the recognized tools in this list are Flux and Argo CD. Option C lists service meshes (traffic/security/telemetry), not deployment-synchronization tools. Option D lists packaging/templating tools; Helm and Kustomize help build manifests, but they do not, by themselves, continuously reconcile cluster state to a Git source.
In Kubernetes application delivery, GitOps tools become the deployment engine: CI builds artifacts, updates references in Git (image tags/digests), and the GitOps controller deploys those changes. This separation strengthens traceability and reduces configuration drift. Therefore, A is the verified correct answer.
=========
Which of the following workloads requires a headless Service when deployed into a namespace?
A StatefulSet commonly requires a headless Service, so A is the correct answer. In Kubernetes, StatefulSets are designed for workloads that need stable identities, stable network names, and often stable storage per replica. To support that stable identity model, Kubernetes typically uses a headless Service (spec.clusterIP: None) to provide DNS records that map directly to each Pod, rather than load-balancing behind a single virtual ClusterIP.
With a headless Service, DNS queries return individual endpoint records (the Pod IPs) so that each StatefulSet Pod can be addressed predictably, such as pod-0.service-name.namespace.svc.cluster.local. This is critical for clustered databases, quorum systems, and leader/follower setups where members must discover and address specific peers. The StatefulSet controller then ensures ordered creation/deletion and preserves identity (pod-0, pod-1, etc.), while the headless Service provides discovery for those stable hostnames.
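A minimal sketch of the pairing (names and image are illustrative): clusterIP: None makes the Service headless, and the StatefulSet references it via serviceName so each Pod gets a stable DNS name like web-0.web.default.svc.cluster.local:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None        # headless: DNS returns per-Pod records, no virtual IP
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web       # governing headless Service for stable Pod DNS
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
```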
CronJobs run periodic Jobs and don't require stable DNS identity for multiple replicas. Deployments manage stateless replicas and normally use a standard Service that load-balances across Pods. DaemonSets run one Pod per node, and while they can be exposed by Services, they do not intrinsically require headless discovery.
So while you can use a headless Service for other designs, StatefulSet is the workload type most associated with "requires a headless Service" due to how stable identities and per-Pod addressing work in Kubernetes.