Linux Foundation Kubernetes and Cloud Native Associate (KCNA) Exam Questions

Page: 1 / 14
Total 240 questions
Question 1

What is the default value for authorization-mode in Kubernetes API server?



Answer : B

The Kubernetes API server supports multiple authorization modes that determine whether an authenticated request is allowed to perform an action (verb) on a resource. Historically, the API server's default authorization mode was AlwaysAllow, meaning that once a request was authenticated, it would be authorized without further checks. That is why the correct answer here is B.

However, it's crucial to distinguish the "default flag value" from the "recommended configuration." In production clusters, running with AlwaysAllow is insecure because it effectively removes authorization controls: any authenticated user (or component credential) could do anything the API permits. Modern Kubernetes best practices strongly recommend enabling RBAC (Role-Based Access Control), often alongside Node and Webhook authorization, so that permissions are granted explicitly using Roles/ClusterRoles and RoleBindings/ClusterRoleBindings. Many managed Kubernetes distributions and kubeadm-based setups enable RBAC by default as part of cluster bootstrap, even if the API server's historical default flag value is AlwaysAllow.
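To make the explicit grants concrete, here is a minimal sketch of a Role and RoleBinding; the namespace, Role name, and user name are hypothetical placeholders:

```yaml
# Hypothetical namespace-scoped Role granting read-only access to Pods
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]            # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# RoleBinding granting that Role to a (hypothetical) user in the same namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

With these objects in place, the bound user can list and read Pods in the dev namespace but cannot modify them or touch other resources, which is exactly the granularity AlwaysAllow gives up.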

So, the exam-style interpretation of this question is about the API server flag default, not what most real clusters should run. With RBAC enabled, authorization becomes granular: you can control who can read Secrets, who can create Deployments, who can exec into Pods, and so on, scoped to namespaces or cluster-wide. ABAC (Attribute-Based Access Control) exists but is generally discouraged compared to RBAC because it relies on policy files and is less ergonomic and less commonly used. AlwaysDeny is useful for hard lockdown testing but not for normal clusters.

In short: AlwaysAllow is the API server's default mode (answer B), but RBAC is the secure, recommended choice you should expect to see enabled in almost any serious Kubernetes environment.

=========


Question 2

CI/CD stands for:



Answer : D

CI/CD is a foundational practice for delivering software rapidly and reliably, and it maps strongly to cloud native delivery workflows commonly used with Kubernetes. CI stands for Continuous Integration: developers merge code changes frequently into a shared repository, and automated systems build and test those changes to detect issues early. CD is commonly used to mean Continuous Delivery or Continuous Deployment depending on how far automation goes. In many certification contexts and simplified definitions like this question, CD is interpreted as Continuous Deployment, meaning every change that passes the automated pipeline is automatically released to production. That matches option D.

In a Kubernetes context, CI typically produces artifacts such as container images (built from Dockerfiles or similar build definitions), runs unit/integration tests, scans dependencies, and pushes images to a registry. CD then promotes those images into environments by updating Kubernetes manifests (Deployments, Helm charts, Kustomize overlays, etc.). Progressive delivery patterns (rolling updates, canary, blue/green) often use Kubernetes-native controllers and Service routing to reduce risk.
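As a sketch of what the CD side touches, a pipeline commonly rewrites the container image reference in a Deployment manifest to promote a new build; the registry, image name, and tag below are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        # CD pipelines typically update this line to roll out a new build;
        # Kubernetes then performs a rolling update to reconcile the change
        image: registry.example.com/team/web:1.4.2
        ports:
        - containerPort: 8080
```

Because the manifest declares desired state, the pipeline's only job is to change that state; Kubernetes handles the rollout mechanics.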

Why the other options are incorrect: "Continuous Development" isn't the standard expansion of the "D"; it's ambiguous and not an established term. "Cloud Integration/Cloud Development" is unrelated. Continuous Delivery (in the stricter sense) means changes are always in a deployable state but releases may still require a manual approval step, while Continuous Deployment removes that final manual gate. Because the option set explicitly includes "Continuous Deployment," and that is one of the accepted canonical expansions of CD, D is the correct selection here.

Practically, CI/CD complements Kubernetes' declarative model: pipelines update desired state (Git or manifests), and Kubernetes reconciles it. This combination enables frequent releases, repeatability, reduced human error, and faster recovery through automated rollbacks and controlled rollout strategies.

=========


Question 3

Which option represents best practices when building container images?



Answer : C

Building secure, efficient, and reproducible container images is a core principle of cloud native application delivery. Kubernetes documentation and container security best practices emphasize minimizing image size, reducing attack surface, and ensuring deterministic builds. Option C fully aligns with these principles, making it the correct answer.

Multi-stage builds allow developers to separate the build environment from the runtime environment. Dependencies such as compilers, build tools, and temporary artifacts are used only in intermediate stages and excluded from the final image. This significantly reduces image size and limits the presence of unnecessary tools that could be exploited at runtime.

Pinning the base image to a specific digest ensures immutability and reproducibility. Tags such as latest can change over time, potentially introducing breaking changes or vulnerabilities without notice. By using a digest, teams guarantee that the same base image is used every time the image is built, which is essential for predictable behavior, security auditing, and reliable rollbacks.

Installing only necessary packages further reduces the attack surface. Every additional package increases the risk of vulnerabilities and expands the maintenance burden. Minimal images are faster to pull, quicker to start, and easier to scan for vulnerabilities. Kubernetes security guidance consistently recommends keeping container images as small and purpose-built as possible.
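All three practices can be sketched in one Dockerfile; this is an illustrative Go-based example, and the `<digest>` placeholders stand in for real image digests you would pin in practice:

```dockerfile
# --- Build stage: compilers and build tools live only here ---
# In practice, pin the builder to a digest as well, e.g.
#   FROM golang:1.22@sha256:<digest>
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# --- Runtime stage: minimal base image, only the compiled binary ---
# Replace <digest> with the actual sha256 digest of the chosen base image
FROM gcr.io/distroless/static@sha256:<digest>
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image contains no compiler, shell, or package manager, and the digest-pinned base guarantees byte-for-byte reproducible layers across builds.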

Option A is incorrect because using the latest tag undermines build determinism and traceability. Option B is incorrect because installing extra packages "just in case" contradicts the principle of minimalism and increases security risk. Option D is incorrect because avoiding multi-stage builds and installing unnecessary packages leads to larger, less secure images and is explicitly discouraged in cloud native best practices.

According to Kubernetes and CNCF security guidance, combining multi-stage builds, immutable image references, and minimal dependencies results in more secure, reliable, and maintainable container images. Therefore, option C represents the best and fully verified approach when building container images.

=========


Question 4

What's the most adopted way of conflict resolution and decision-making for the open-source projects under the CNCF umbrella?



Answer : B

B (Discussion and Voting) is correct. CNCF-hosted open-source projects generally operate with open governance practices that emphasize transparency, community participation, and documented decision-making. While each project can have its own governance model (maintainers, technical steering committees, SIGs, TOC interactions, etc.), a very common and widely adopted approach to resolving disagreements and making decisions is to first pursue discussion (often on GitHub issues/PRs, mailing lists, or community meetings) and then use voting/consensus mechanisms when needed.

This approach is important because open-source communities are made up of diverse contributors across companies and geographies. "Project Founder Say" (D) is not a sustainable or typical CNCF governance norm for mature projects; CNCF explicitly encourages neutral, community-led governance rather than single-person control. "Financial Analysis" (A) is not a conflict-resolution mechanism for technical decisions, and "Flipism Technique" (C) is not a real governance practice.

In Kubernetes specifically, community decisions are often made within structured groups (e.g., SIGs) using discussion and consensus-building, sometimes followed by formal votes where governance requires it. The goal is to ensure decisions are fair, recorded, and aligned with the project's mission and contributor expectations. This also reduces risk of vendor capture and builds trust: anyone can review the rationale in meeting notes, issues, or PR threads, and decisions can be revisited with new evidence.

Therefore, the most adopted conflict resolution and decision-making method across CNCF open-source projects is discussion and voting, making B the verified correct answer.

=========


Question 5

What are the two steps performed by the kube-scheduler to select a node to schedule a pod?



Answer : C

The kube-scheduler selects a node in two main phases: filtering and scoring, so C is correct. First, filtering identifies which nodes are feasible for the Pod by applying hard constraints. These include resource availability (CPU/memory requests), node taints/tolerations, node selectors and required affinities, topology constraints, and other scheduling requirements. Nodes that cannot satisfy the Pod's requirements are removed from consideration.

Second, scoring ranks the remaining feasible nodes using priority functions to choose the "best" placement. Scoring can consider factors like spreading Pods across nodes/zones, packing efficiency, affinity preferences, and other policies configured in the scheduler. The node with the highest score is selected (with tie-breaking), and the scheduler binds the Pod by setting spec.nodeName.
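The two phases map directly onto fields in a Pod spec: required fields act as filters, while preferred fields only influence scoring. The labels, values, and weights below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:            # hard constraint: nodes lacking this capacity
        cpu: "500m"        # are eliminated during the filtering phase
        memory: "256Mi"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # filtering
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
      preferredDuringSchedulingIgnoredDuringExecution:  # scoring
      - weight: 80
        preference:
          matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values: ["us-east-1a"]
```

A node without the disktype=ssd label is never considered; among the nodes that pass, those in the preferred zone simply score higher.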

Option B ("filtering and selecting") is close but misses the explicit scoring step that is central to the scheduler's design. The scheduler does ultimately "select" a node, but the canonical two-step wording in Kubernetes scheduling is filtering then scoring. Options A and D do not describe how the scheduler works.

Operationally, understanding filtering vs. scoring helps troubleshoot scheduling problems. If a Pod can't be scheduled at all, it failed filtering; kubectl describe pod often shows "0/... nodes are available" reasons (insufficient CPU, taints, affinity mismatch). If it schedules but lands in unexpected places, the cause is usually scoring preferences (affinity weights, topology spread preferences, default scheduler profiles).

So the verified correct answer is C: kube-scheduler uses Filtering and Scoring.

=========


Question 6

What does vertical scaling an application deployment describe best?



Answer : C

Vertical scaling means changing the resources allocated to a single instance of an application (more or less CPU/memory), which is why C is correct. In Kubernetes terms, this corresponds to adjusting container resource requests and limits (for CPU and memory). Increasing resources can help a workload handle more load per Pod by giving it more compute or memory headroom; decreasing can reduce cost and improve cluster packing efficiency.
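In manifest terms, vertical scaling means editing the per-container resource values rather than the replica count; the numbers, image, and names below are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1                # horizontal scaling changes this number
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: registry.example.com/team/api:1.0.0
        resources:
          requests:          # vertical scaling raises or lowers these
            cpu: "250m"      # per-instance values
            memory: "512Mi"
          limits:
            cpu: "1"
            memory: "1Gi"
```

Doubling the requests/limits here gives each Pod more headroom (vertical); raising replicas adds more Pods at the same size (horizontal).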

This differs from horizontal scaling, which changes the number of instances (replicas). Option D describes horizontal scaling: adding/removing replicas of the same workload, typically managed by a Deployment and often automated via the Horizontal Pod Autoscaler (HPA). Option B describes scaling the infrastructure layer (nodes) which is cluster/node autoscaling (Cluster Autoscaler in cloud environments). Option A is not a standard scaling definition.

In practice, vertical scaling in Kubernetes can be manual (edit the Deployment resource requests/limits) or automated using the Vertical Pod Autoscaler (VPA), which can recommend or apply new requests based on observed usage. A key nuance is that changing requests/limits often requires Pod restarts to take effect, so vertical scaling is less "instant" than HPA and can disrupt workloads if not planned. That's why many production teams prefer horizontal scaling for traffic-driven workloads and use vertical scaling to right-size baseline resources or address memory-bound or CPU-bound behavior.

From a cloud-native architecture standpoint, understanding vertical vs horizontal scaling helps you design for elasticity: use vertical scaling to tune per-instance capacity; use horizontal scaling for resilience and throughput; and combine with node autoscaling to ensure the cluster has sufficient capacity. The definition the question is testing is simple: vertical scaling = change resources per application instance, which is option C.

=========


Question 7

What is an advantage of using the Gateway API compared to Ingress in Kubernetes?



Answer : B

The Gateway API is a newer Kubernetes networking API designed to address several limitations of the traditional Ingress resource. One of its most significant advantages is the clear separation of roles and responsibilities between infrastructure providers (such as platform teams or cluster administrators) and application developers. This design principle is a core motivation behind the Gateway API and directly differentiates it from Ingress.

With Ingress, a single resource often combines concerns such as load balancer configuration, TLS settings, routing rules, and application-level details. This frequently leads to heavy reliance on annotations, which are controller-specific, non-standardized, and blur ownership boundaries. Application developers may need elevated permissions to modify Ingress objects, even when changes affect shared infrastructure, creating operational risk.

The Gateway API introduces multiple distinct resources, such as GatewayClass, Gateway, and route resources (e.g., HTTPRoute), each aligned with a specific role. Infrastructure providers manage GatewayClass and Gateway resources, which define how traffic enters the cluster and what capabilities are available. Application developers interact primarily with route resources to define how traffic is routed to their Services, without needing access to the underlying infrastructure configuration. This separation improves security, governance, and scalability in multi-team environments.
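A sketch of the split in practice: the platform team owns the Gateway, and an application team owns the HTTPRoute that attaches to it. The GatewayClass name, namespaces, hostname, and Service name are hypothetical:

```yaml
# Owned by the platform team
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: example-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All            # permit routes from other namespaces to attach
---
# Owned by the application team, in its own namespace
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: team-a
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra
  hostnames: ["app.example.com"]
  rules:
  - backendRefs:
    - name: app-service
      port: 8080
```

Note that the application team never edits the Gateway itself; the structured fields on HTTPRoute replace the controller-specific annotations that Ingress typically relies on.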

Option A is incorrect because automatic scaling based on CPU and memory is handled by the Horizontal Pod Autoscaler, not by Gateway API or Ingress. Option C describes a characteristic of Ingress, not an advantage of Gateway API; in fact, Gateway API explicitly reduces reliance on annotations by using structured, portable fields. Option D is incorrect because exposing applications externally requires more than just a Service; traffic management resources like Ingress or Gateway are still necessary.

Therefore, the correct and verified answer is Option B, as the Gateway API's role-oriented design is a key advancement over Ingress and is clearly documented in Kubernetes networking architecture guidance.

