Top Interview Questions
Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications. Initially developed by Google and released in 2014, Kubernetes has become the de facto standard for container orchestration due to its robustness, flexibility, and extensive community support. It addresses the challenges that arise when running applications in containers at scale, enabling organizations to deploy cloud-native applications efficiently and reliably.
Containers, popularized by Docker, package applications and their dependencies into a single unit that can run consistently across different environments. While containers simplify application deployment, managing hundreds or thousands of containers across multiple servers can become highly complex. Challenges include:
Ensuring container availability and fault tolerance
Efficient resource utilization across servers
Automated deployment and scaling
Networking and service discovery
Rolling updates and rollbacks
Kubernetes solves these challenges by providing a unified platform to orchestrate containers, ensuring that applications are resilient, scalable, and manageable across diverse infrastructures.
Kubernetes operates using a set of core concepts that define its architecture and how applications are deployed:
Cluster:
A Kubernetes cluster is a set of nodes that run containerized applications. A cluster consists of a control plane and worker nodes. The control plane manages the cluster, while worker nodes run the application containers.
Node:
A node is a physical or virtual machine in the Kubernetes cluster. Nodes host Pods, the basic unit of deployment. Each node runs essential services, including a container runtime (e.g., Docker or containerd) and the kubelet, which communicates with the control plane.
Pod:
The Pod is the smallest deployable unit in Kubernetes, representing one or more containers that share storage, networking, and configuration. Pods are ephemeral; if a Pod fails, Kubernetes automatically replaces it.
Deployment:
A Deployment defines the desired state of an application, including the number of replicas (Pods) to run. Kubernetes ensures that the actual state matches the desired state, performing rolling updates, scaling, and self-healing automatically.
Service:
A Service is an abstraction that defines a logical set of Pods and provides a stable endpoint (IP or DNS) for accessing them. Services enable communication between different application components and external clients, even as Pods are dynamically added or removed.
ConfigMap and Secret:
ConfigMaps store configuration data that can be injected into Pods, while Secrets manage sensitive information, such as passwords or API keys, securely.
Namespace:
Namespaces allow multiple virtual clusters within a single physical cluster, providing isolation for teams, projects, or environments.
Volume:
Volumes provide persistent storage to Pods, enabling data to survive container restarts. Kubernetes supports multiple types of storage, including local disks, cloud storage, and network-attached storage.
The architecture of Kubernetes is designed for high availability, scalability, and extensibility:
Control Plane Components:
API Server: The central entry point for all administrative tasks. It exposes the Kubernetes API and processes requests from users, CLI, or other components.
etcd: A distributed key-value store that stores cluster state and configuration.
Controller Manager: Ensures the cluster’s desired state by managing controllers, such as ReplicaSet and Node controllers.
Scheduler: Assigns Pods to nodes based on resource requirements and availability.
Node Components:
kubelet: An agent that runs on each node and ensures that containers are running in the specified Pods.
kube-proxy: Manages networking rules, enabling communication between Pods and Services.
Container Runtime: Software responsible for running containers, such as Docker or containerd.
Kubernetes offers numerous features that make it ideal for modern cloud-native applications:
Automated Deployment and Rollbacks: Kubernetes supports automated application deployment, updates, and rollbacks. If an update fails, the system can revert to the previous stable version.
Self-Healing: Kubernetes automatically replaces failed containers, kills unresponsive Pods, and reschedules Pods on healthy nodes.
Horizontal Scaling: Applications can be scaled in or out automatically based on CPU utilization or custom metrics.
Service Discovery and Load Balancing: Kubernetes provides built-in service discovery and distributes traffic evenly among Pods.
Storage Orchestration: Persistent volumes and dynamic storage provisioning allow stateful applications to be easily managed.
Secret and Configuration Management: Kubernetes secures sensitive data and separates configuration from code, enabling environment-specific settings without changing the application.
Extensibility: Kubernetes supports custom controllers, operators, and APIs, allowing organizations to extend its functionality to meet specific requirements.
Kubernetes is a cornerstone of the DevOps and cloud-native movement. It integrates with CI/CD pipelines, enabling rapid development, testing, and deployment. Kubernetes is also cloud-agnostic, allowing applications to run consistently across on-premises, public cloud, and hybrid environments. Popular cloud providers, such as AWS (EKS), Google Cloud (GKE), and Azure (AKS), offer managed Kubernetes services, reducing operational complexity.
Kubernetes also works seamlessly with service meshes (like Istio) for traffic management and observability, logging and monitoring tools (Prometheus, Grafana), and serverless frameworks (Knative), creating a complete ecosystem for modern application delivery.
While Kubernetes offers powerful capabilities, organizations must consider the following challenges:
Complexity: Kubernetes has a steep learning curve due to its extensive architecture and configuration options.
Resource Management: Improperly configured clusters can lead to resource inefficiencies and higher costs.
Security: Misconfigured clusters can expose vulnerabilities. Security practices, such as RBAC, network policies, and Secrets management, are critical.
Monitoring and Troubleshooting: Managing logs, metrics, and debugging applications in large clusters can be challenging.
Despite these challenges, the benefits of Kubernetes—automation, scalability, and portability—often outweigh the complexity.
Kubernetes is widely adopted across industries, powering applications in finance, healthcare, e-commerce, and technology. Some common use cases include:
Microservices Architecture: Kubernetes simplifies the deployment and management of microservices by managing dependencies, scaling, and networking automatically.
Hybrid and Multi-Cloud Deployments: Organizations can deploy workloads across multiple clouds while maintaining consistency and avoiding vendor lock-in.
Big Data and Machine Learning: Kubernetes orchestrates data pipelines, AI/ML model training, and distributed processing frameworks like Apache Spark.
Continuous Deployment Pipelines: Kubernetes integrates with CI/CD tools to automate application testing, deployment, and rollback, reducing time-to-market.
Q: What is Kubernetes?
Answer:
Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It helps manage containers across multiple hosts and provides features like load balancing, service discovery, self-healing, and automated rollouts.
Q: Explain the Kubernetes architecture.
Answer:
Kubernetes has a master-worker architecture:
Master Node Components:
API Server – The front-end of the Kubernetes control plane, handling all REST requests.
Scheduler – Assigns pods to nodes based on resource availability.
Controller Manager – Ensures desired state is maintained (e.g., scaling pods, replication).
etcd – A key-value store that stores cluster configuration and state.
Worker Node Components:
Kubelet – An agent that runs on each node and communicates with the master to ensure containers are running.
Kube-proxy – Maintains network rules and load balances network traffic.
Container Runtime – Software to run containers (e.g., Docker, containerd).
Q: What is a Pod?
Answer:
A Pod is the smallest deployable unit in Kubernetes. It can contain one or more containers that share storage, network, and specifications on how to run them. Pods are ephemeral in nature; if a pod dies, Kubernetes can create a new one.
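A minimal Pod manifest sketch (the name, labels, and image below are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: my-pod            # hypothetical name
  labels:
    app: my-app
spec:
  containers:
  - name: web
    image: nginx:1.25     # any container image
    ports:
    - containerPort: 80
It can be created with kubectl apply -f pod.yaml.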
Q: What is a Node?
Answer:
A Node is a worker machine (physical or virtual) in Kubernetes. It runs pods and is managed by the master node. Nodes contain the Kubelet, Kube-proxy, and a container runtime.
Q: What is a Kubernetes Cluster?
Answer:
A Kubernetes Cluster is a set of master and worker nodes. The master node manages the cluster, while worker nodes run applications in pods. The cluster provides high availability, scaling, and resource management.
Q: What are Namespaces in Kubernetes?
Answer:
Namespaces are virtual clusters within a Kubernetes cluster. They help divide cluster resources between multiple users or teams. Useful in environments with multiple projects.
Example: default, kube-system, development, testing.
Q: What is a Service in Kubernetes?
Answer:
A Service is an abstraction that defines a logical set of pods and a policy to access them. Services provide stable IP addresses and DNS names for pods, even if pod IPs change.
Types of Services:
ClusterIP – Default type, accessible only within the cluster.
NodePort – Exposes service on a static port on each node.
LoadBalancer – Uses cloud provider’s load balancer.
ExternalName – Maps service to an external DNS name.
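A minimal ClusterIP Service sketch, assuming the backing Pods are labeled app: my-app and listen on port 8080 (names and ports are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP          # default; use NodePort or LoadBalancer to expose externally
  selector:
    app: my-app            # selects the backing Pods by label
  ports:
  - port: 80               # port exposed by the Service
    targetPort: 8080       # port the container listens on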
Q: What is a Deployment?
Answer:
A Deployment is a Kubernetes object that manages replica sets and ensures the desired number of pods are running. It provides rolling updates, rollbacks, and scaling.
Example Command:
kubectl create deployment nginx --image=nginx
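The same Deployment can also be written declaratively; a minimal manifest sketch (the replica count and labels are illustrative values):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3                # desired number of Pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80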
Q: What is a ReplicaSet?
Answer:
A ReplicaSet ensures that a specified number of pod replicas are running at any given time. Deployments usually manage ReplicaSets to handle scaling and updates automatically.
Q: What is the difference between a ConfigMap and a Secret?
Answer:
ConfigMap – Stores non-sensitive configuration data (e.g., environment variables, config files).
Secret – Stores sensitive data like passwords, tokens, and keys in an encoded format.
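Minimal sketches of both objects (key names and values are placeholders; the Secret value is the base64 encoding of "password"):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_URL: "postgres://db:5432/app"    # non-sensitive configuration
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=           # base64-encoded "password"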
Q: What is a StatefulSet?
Answer:
A StatefulSet manages stateful applications, where each pod has a unique identity and persistent storage. Useful for databases like MySQL, MongoDB, and Kafka.
Q: What is a DaemonSet?
Answer:
A DaemonSet ensures that a specific pod runs on all or selected nodes. Useful for logging, monitoring, or node management tools.
Q: What is Ingress in Kubernetes?
Answer:
Ingress manages external access to services in a cluster, usually HTTP/HTTPS. It provides URL-based routing, SSL termination, and load balancing.
Example:
Route /app1 → service1
Route /app2 → service2
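A sketch of the routing above, assuming Services named service1 and service2 on port 80 and an ingress controller (e.g., NGINX) installed in the cluster:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80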
Q: What is Helm?
Answer:
Helm is a package manager for Kubernetes. It helps define, install, and upgrade applications using charts (pre-configured templates).
Example Command:
helm install myapp stable/nginx
Q: How do you scale applications in Kubernetes?
Answer:
Kubernetes supports manual and automatic scaling:
Manual Scaling:
kubectl scale deployment nginx --replicas=5
Horizontal Pod Autoscaler (HPA):
Automatically adjusts pod count based on CPU/memory usage.
kubectl autoscale deployment nginx --cpu-percent=50 --min=2 --max=10
Q: What are PersistentVolumes (PV) and PersistentVolumeClaims (PVC)?
Answer:
PersistentVolume (PV) – A piece of storage in the cluster provisioned by admin or dynamically.
PersistentVolumeClaim (PVC) – A request by a pod for storage. Pods use PVCs to access PVs.
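A sketch of a Pod consuming storage through a PVC (the claim name and mount path are placeholders; the PVC is assumed to be bound to a PV):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /var/lib/data     # where the volume appears inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc            # references an existing PVC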
Q: How do you monitor a Kubernetes cluster?
Answer:
Monitoring tools for Kubernetes:
Prometheus + Grafana – Metrics collection and dashboards.
Kubernetes Dashboard – Web UI to view cluster resources.
kubectl top – CLI to see resource usage.
Q: What is the difference between a Job and a CronJob?
Answer:
Job: Runs a pod until completion (batch task).
CronJob: Runs jobs periodically using cron syntax.
Q: Difference between a Deployment and a StatefulSet?
Answer:
| Feature | Deployment | StatefulSet |
|---|---|---|
| Pod Identity | Random | Stable (unique name) |
| Storage | Ephemeral | Persistent |
| Use Case | Stateless apps | Stateful apps |
kubectl commands:
| Command | Description |
|---|---|
| kubectl get pods | List all pods |
| kubectl describe pod <pod-name> | Detailed pod info |
| kubectl logs <pod-name> | View pod logs |
| kubectl exec -it <pod-name> -- bash | Access pod terminal |
| kubectl apply -f <file.yaml> | Create or update resources from YAML |
Q: What is a container, and how does Kubernetes relate to containers?
Answer:
A container is a lightweight, portable unit that packages an application along with its dependencies. Kubernetes orchestrates containers across multiple nodes, ensuring they run efficiently and reliably.
| Feature | Docker | Kubernetes |
|---|---|---|
| Purpose | Container runtime | Container orchestration |
| Management | Single container | Cluster of containers |
| Scaling | Manual | Automatic (HPA) |
| Networking | Basic | Advanced, service-based |
Q: What is the Kubelet?
Answer:
Kubelet is an agent that runs on each worker node. Its primary role is to ensure that containers are running in pods as expected, based on the specifications provided by the master node.
Q: What is Kube-proxy?
Answer:
Kube-proxy is a networking component running on each node that maintains network rules, load balancing, and allows communication between pods and services.
Q: What is etcd?
Answer:
etcd is a key-value store used by Kubernetes to store cluster configuration, state, and metadata. It ensures consistency and reliability for the cluster’s desired state.
Q: What is the API Server?
Answer:
The API Server is the central management point in the control plane. It exposes RESTful APIs, validates requests, and updates etcd with the desired state.
Q: What is the Horizontal Pod Autoscaler (HPA)?
Answer:
HPA automatically scales the number of pods based on CPU/memory utilization or custom metrics. It ensures applications can handle variable loads efficiently.
Q: Compare the Kubernetes Service types.
Answer:
| Service Type | Access | Use Case |
|---|---|---|
| ClusterIP | Internal to cluster | Internal services |
| NodePort | External via node IP:port | Expose small services externally |
| LoadBalancer | External, cloud-based | Production-grade services |
Q: What are the phases of a Pod lifecycle?
Answer:
Pod goes through several phases:
Pending – Pod created but not scheduled.
Running – Pod scheduled and container started.
Succeeded – Pod completed successfully.
Failed – Pod terminated with error.
Unknown – State cannot be determined.
Q: What is a sidecar container?
Answer:
A sidecar container runs alongside the main container in a pod to enhance functionality, e.g., logging, monitoring, proxy, or backup services.
Q: What are taints and tolerations?
Answer:
Taint – Prevents pods from being scheduled on certain nodes unless they tolerate the taint.
Toleration – Allows a pod to run on nodes with matching taints.
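For example, after tainting a node with kubectl taint nodes node1 dedicated=gpu:NoSchedule (node name and key are hypothetical), only Pods carrying a matching toleration can be scheduled there; a Pod spec fragment sketch:
tolerations:
- key: "dedicated"        # matches the taint key
  operator: "Equal"
  value: "gpu"            # matches the taint value
  effect: "NoSchedule"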
Q: What is the difference between Node Selector and Node Affinity?
Answer:
Node Selector – Simple way to constrain pods to specific nodes using labels.
Node Affinity – More expressive rules for scheduling pods to nodes (preferred or required).
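A sketch of both approaches as Pod spec fragments, assuming nodes are labeled disktype=ssd:
# Simple nodeSelector
nodeSelector:
  disktype: ssd

# Equivalent required node affinity (supports richer operators and preferred rules)
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disktype
          operator: In
          values:
          - ssd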
Q: What is a Volume in Kubernetes?
Answer:
A volume is a directory accessible to the containers in a pod, used for shared or persistent storage. Unlike a container's writable layer, a volume can persist data across container restarts.
Types of Volumes: emptyDir, hostPath, nfs, persistentVolumeClaim.
| Feature | Ephemeral | Persistent |
|---|---|---|
| Lifespan | Pod lifetime | Beyond pod lifetime |
| Use Case | Cache, temp files | Databases, logs |
Q: What is a CronJob?
Answer:
A CronJob runs tasks periodically using cron syntax. Useful for backups, batch jobs, and scheduled maintenance tasks.
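A minimal CronJob sketch that runs a task every night at 2 AM (name, image, and command are placeholders):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"              # cron syntax: daily at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: busybox
            command: ["sh", "-c", "echo running backup"]
          restartPolicy: OnFailure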
Q: What is RBAC in Kubernetes?
Answer:
Role-Based Access Control (RBAC) restricts access to Kubernetes resources based on roles and permissions.
Components:
Role – Permissions within a namespace.
ClusterRole – Permissions across the cluster.
RoleBinding/ClusterRoleBinding – Assign roles to users/groups.
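A sketch granting a user read-only access to Pods in a single namespace (the namespace and user name are hypothetical):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]                 # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane                      # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io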
Q: How does Kubernetes achieve self-healing?
Answer:
Kubernetes automatically:
Restarts failed containers
Replaces pods that die
Reschedules pods on healthy nodes
Kills pods that fail liveness probes
Q: What is the difference between liveness and readiness probes?
Answer:
Liveness Probe: Checks if a container is alive; restarts if failed.
Readiness Probe: Checks if a container is ready to serve traffic; removes it from service endpoints if not ready.
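A sketch of both probes on a container (paths, port, and timings are illustrative):
livenessProbe:
  httpGet:
    path: /healthz          # restart the container if this check fails
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /ready            # remove the Pod from Service endpoints if this check fails
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10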
Q: When would you use a DaemonSet?
Answer:
A DaemonSet ensures specific pods run on all or selected nodes. Example: logging agents, monitoring tools, or security daemons.
| Feature | Job | CronJob |
|---|---|---|
| Run Frequency | One-time | Scheduled periodically |
| Use Case | Batch tasks | Scheduled backups or tasks |
Q: When should you use a Deployment vs a StatefulSet?
Answer:
Deployment: Stateless apps; pods interchangeable.
StatefulSet: Stateful apps; unique pod identity and persistent storage.
Q: What are the main workload controllers in Kubernetes?
Answer:
ReplicaSet – Maintains desired pod replicas.
Deployment – Manages ReplicaSets.
StatefulSet – Stateful workloads.
DaemonSet – Runs pods on all/specific nodes.
Job / CronJob – Batch or scheduled tasks.
Q: What is a Custom Resource Definition (CRD)?
Answer:
CRD allows you to extend Kubernetes API with custom objects. This enables developers to manage custom resources like native Kubernetes objects.
Q: How does networking work in Kubernetes?
Answer:
Kubernetes networking ensures:
Pods can communicate across nodes.
Each pod has a unique IP.
Services provide stable endpoints.
Network policies control traffic.
Q: How do you upgrade a Kubernetes cluster?
Answer:
Use kubeadm upgrade commands:
kubeadm upgrade plan
kubeadm upgrade apply <version>
Also, upgrade kubectl and kubelet on nodes.
Q: What is a LoadBalancer Service?
Answer:
A LoadBalancer service exposes a service externally using cloud provider load balancers, distributing traffic to pods automatically.
Q: How do you troubleshoot a failing pod?
Answer:
Steps:
Check pod status:
kubectl get pods
Describe pod:
kubectl describe pod <pod-name>
Check logs:
kubectl logs <pod-name>
Exec into pod:
kubectl exec -it <pod-name> -- bash
Q: What is a rolling update in Kubernetes?
Answer:
A rolling update gradually replaces old pods with new pods without downtime. Kubernetes ensures the desired number of pods are always running during updates.
Command:
kubectl set image deployment/nginx nginx=nginx:1.18
Q: What is the Kubernetes Dashboard?
Answer:
Kubernetes Dashboard is a web-based UI to manage cluster resources, monitor workloads, and perform basic administrative tasks.
kubectl apply vs kubectl create:
| Command | Description |
|---|---|
| kubectl create | Creates a new resource |
| kubectl apply | Creates or updates a resource using YAML |
Q: What is a PersistentVolumeClaim (PVC)?
Answer:
PVC is a request for storage by a pod. Kubernetes binds PVCs to available PVs based on size and access mode.
Q: How does Kubernetes store and manage Secrets?
Answer:
Kubernetes stores secrets in etcd in base64 encoded form. Access is controlled using RBAC, and secrets can be mounted as files or environment variables in pods.
Q: You deployed a pod, but it’s in CrashLoopBackOff. How do you troubleshoot?
A:
Check pod status: kubectl get pods
Describe pod: kubectl describe pod <pod>
Check logs: kubectl logs <pod>
Verify image, environment variables, and config files.
Check liveness/readiness probes.
Q1. What is Kubernetes, and why is it used?
Answer:
Kubernetes (K8s) is an open-source container orchestration platform used to automate deploying, scaling, and managing containerized applications. It abstracts the underlying infrastructure, allowing developers to focus on applications rather than server management.
Key Use Cases:
Automated deployment and rollback.
Self-healing (auto-restarting failed containers).
Horizontal scaling of applications.
Service discovery and load balancing.
Q2. Explain the Kubernetes architecture.
Answer:
Kubernetes has a master-worker architecture:
Master Node (Control Plane): Manages the cluster.
API Server: Entry point for all commands.
Controller Manager: Ensures desired state matches current state.
Scheduler: Assigns pods to nodes based on resources.
etcd: Distributed key-value store for cluster state.
Worker Nodes: Run application workloads.
Kubelet: Communicates with API server and ensures containers run.
Kube-proxy: Maintains networking rules.
Container Runtime: Docker, containerd, etc.
Q3. What are Pods in Kubernetes?
Answer:
A Pod is the smallest deployable unit in Kubernetes. It can contain one or more containers that share:
Network (IP)
Storage volumes
Configuration and environment
Pods are ephemeral; if a Pod or its node fails, its controller (e.g., a Deployment) creates a replacement.
Scenario: For a web application, a pod may contain a frontend container and a logging sidecar container.
Q4. Difference between Deployment, ReplicaSet, and StatefulSet?
Answer:
| Resource | Use Case | Key Feature |
|---|---|---|
| Deployment | Stateless apps | Rollbacks, scaling, updates |
| ReplicaSet | Ensures a set number of pods run | Works behind deployments |
| StatefulSet | Stateful apps (DBs, queues) | Stable network IDs & persistent storage |
Scenario: Deploying Redis cluster → StatefulSet is preferred for stable IDs.
Q5. How does auto-scaling work in Kubernetes?
Answer:
K8s supports:
Horizontal Pod Autoscaler (HPA): Scales pods based on CPU/memory or custom metrics.
Vertical Pod Autoscaler (VPA): Adjusts resources allocated to pods.
Cluster Autoscaler: Scales worker nodes based on pending pods.
Example: If CPU > 70% for 2 minutes, HPA spins up additional pods.
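A sketch of an HPA for that example, assuming a Deployment named frontend and a metrics server running in the cluster:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70    # scale out when average CPU exceeds 70%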
Q6. Explain Kubernetes Service types.
Answer:
| Service Type | Description | Use Case |
|---|---|---|
| ClusterIP | Internal access only | Microservices communication |
| NodePort | Exposes service on each node's IP at a static port (NodeIP:NodePort) | Testing on a single node |
| LoadBalancer | External access via cloud LB | Production deployments |
| ExternalName | Maps service to external DNS | Legacy services integration |
Q7. What is Ingress in Kubernetes?
Answer:
Ingress manages HTTP/S routing to services. It sits at the edge and can:
Route traffic based on host/path.
Terminate SSL/TLS connections.
Provide load balancing and authentication.
Scenario: Direct www.example.com/api → API service, www.example.com/web → frontend service.
Q8. Difference between PersistentVolume (PV) and PersistentVolumeClaim (PVC)?
Answer:
PV: Cluster-level resource representing storage.
PVC: User request for storage (size, access mode).
Binding: Kubernetes binds a PVC to an available PV.
Example: MySQL pod requires 10GB storage → PVC requests it, PV provides it.
Q9. Explain Storage Classes in Kubernetes.
Answer:
StorageClass defines dynamic provisioning rules for PVs.
Types: fast (SSD), standard (HDD).
Allows automated storage allocation using cloud providers.
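A StorageClass sketch for SSD-backed volumes; the provisioner and parameters are provider-specific (the values below assume GCE persistent disks):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd   # e.g., ebs.csi.aws.com on AWS
parameters:
  type: pd-ssd
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer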
Q10. How do ConfigMaps and Secrets differ?
Answer:
| Feature | ConfigMap | Secret |
|---|---|---|
| Data Type | Non-sensitive | Sensitive (passwords, keys) |
| Encoding | Plain text | Base64 |
| Use Case | App config | Credentials |
Scenario: Storing DB URL in ConfigMap, DB password in Secret.
Q11. How can you update ConfigMap/Secret without restarting pods?
Answer:
Environment variables injected via envFrom are fixed at container start → pods must be restarted to pick up changes.
ConfigMaps/Secrets mounted as volumes → updated keys are auto-reflected in the mounted files (after a short kubelet sync delay), without a restart.
Q12. Explain RBAC in Kubernetes.
Answer:
Role-Based Access Control (RBAC) manages user permissions:
Role: Namespaced permissions.
ClusterRole: Cluster-wide permissions.
RoleBinding / ClusterRoleBinding: Assigns Role/ClusterRole to users/groups.
Scenario: Developers can only create pods in dev namespace, not in prod.
Q13. How to secure a Kubernetes cluster?
Answer:
Use RBAC to limit permissions.
Enable Network Policies to control pod communication.
Regularly update K8s version.
Enable TLS for API server.
Use secrets instead of plain text configs.
Q14. How do you troubleshoot a pod in CrashLoopBackOff?
Answer:
Check pod events: kubectl describe pod <pod-name>
Check logs: kubectl logs <pod-name>
Verify image and command.
Check resource limits.
Debug interactively: kubectl exec -it <pod> -- /bin/bash
Q15. How do you monitor a Kubernetes cluster?
Answer:
Prometheus + Grafana: Metrics and dashboards.
ELK Stack / Loki: Logs aggregation.
Kube-state-metrics: Cluster state metrics.
Kubectl top: Quick resource usage check.
Q16. What is a DaemonSet and when would you use it?
Answer:
DaemonSet ensures a pod runs on every node.
Use Cases:
Log collection agents (Fluentd).
Node monitoring agents (Prometheus Node Exporter).
Q17. What is a Job and CronJob in Kubernetes?
Answer:
Job: Runs one-off tasks until completion.
CronJob: Runs jobs periodically based on schedule.
Scenario: Database backup every night → CronJob. Data migration → Job.
Q18. Can you explain Kubernetes Federation?
Answer:
Kubernetes Federation allows managing multiple clusters across regions.
Benefits: Multi-cluster app deployment, failover, geo-redundancy.
Use Case: High availability for global applications.
Q19. Explain the difference between rolling update and blue-green deployment.
Answer:
| Deployment Type | Strategy | Downtime |
|---|---|---|
| Rolling Update | Update pods gradually | Minimal |
| Blue-Green | Deploy new version alongside old one | Zero |
Scenario: For mission-critical apps, blue-green avoids downtime entirely.
Q20. How do you manage secrets with external vaults?
Answer:
Use HashiCorp Vault or AWS Secrets Manager.
K8s integrates via CSI driver or external secrets operator.
Advantage: Rotate secrets without redeploying pods.
Q21. How do you upgrade a Kubernetes cluster with zero downtime?
Answer:
Upgrade Control Plane first using kubeadm upgrade.
Upgrade worker nodes one at a time.
Drain nodes: kubectl drain <node-name> to safely evict pods.
Upgrade kubelet and kubectl on nodes.
Verify workloads using kubectl get nodes and kubectl get pods -A.
Scenario: In production clusters, upgrading worker nodes in batches ensures applications continue running.
Q22. How do you backup and restore etcd in Kubernetes?
Answer:
Backup:
ETCDCTL_API=3 etcdctl snapshot save backup.db \
--endpoints=<etcd-endpoints> \
--cert=<cert-file> --key=<key-file> --cacert=<cacert-file>
Restore:
ETCDCTL_API=3 etcdctl snapshot restore backup.db \
--name <new-etcd-name> --initial-cluster <initial-cluster-config> \
--initial-cluster-token <token> --initial-advertise-peer-urls <peer-urls>
Always take etcd backup regularly; critical for disaster recovery.
Q23. How do you handle node failures in Kubernetes?
Answer:
Kubernetes automatically reschedules pods to healthy nodes.
Use taints and tolerations to control which pods run on which nodes.
Set up Cluster Autoscaler to add new nodes if resources are insufficient.
Monitor with Prometheus for proactive alerts.
Scenario: If a node running a database pod crashes, StatefulSet ensures pod recreates on another node with persistent storage.
Q24. What are Network Policies in Kubernetes?
Answer:
NetworkPolicy defines allowed traffic between pods.
Example: Allow only frontend pods to talk to backend pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      role: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
Improves security in multi-tenant clusters.
Q25. How do you troubleshoot service communication issues in Kubernetes?
Answer:
Check pod logs: kubectl logs <pod>
Check endpoints: kubectl get endpoints <service>
Test connectivity from inside a pod, e.g., kubectl exec <pod> -- wget -qO- http://<service>:<port> (ClusterIP addresses often do not answer ICMP ping)
Verify DNS resolution: kubectl exec <pod> -- nslookup <service>
Scenario: If a pod can’t reach a database service, check if the Service IP or port is correct and NetworkPolicy allows traffic.
Q26. Explain CNI plugins and examples.
Answer:
CNI (Container Network Interface) manages pod networking.
Examples: Calico, Flannel, Weave, Cilium.
CNI provides:
Pod-to-pod communication
Network isolation
IP allocation
Scenario: Calico is used when NetworkPolicies are required; Flannel is lightweight and simple for basic networking.
Q27. How do you optimize Kubernetes for high performance?
Answer:
Use resource requests and limits to prevent CPU/memory starvation.
Use Horizontal Pod Autoscaler (HPA) for scaling.
Use readiness and liveness probes to prevent routing traffic to unhealthy pods.
Optimize container images for size and caching.
Consider node pools with different machine types for workloads.
Q28. How does Kubernetes handle large-scale deployments?
Answer:
Use multiple namespaces for isolation.
Use label selectors and affinity rules for pod distribution.
Cluster Autoscaler manages node scaling automatically.
Custom Metrics with HPA for application-level scaling.
Scenario: E-commerce website spikes during sale → HPA scales frontend pods, Cluster Autoscaler adds nodes.
Q29. How do you run a database like PostgreSQL in Kubernetes?
Answer:
Use StatefulSet for stable network IDs.
Use PersistentVolumes for data storage.
Define readiness/liveness probes to monitor database health.
Optionally, use headless services for internal communication in clusters.
Scenario: PostgreSQL pod restarts → StatefulSet ensures pod retains the same storage and network identity.
Q30. How does dynamic volume provisioning work?
Answer:
Create a PVC with a StorageClass.
Kubernetes automatically provisions PV based on StorageClass definition.
Example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: fast
Q31. How do you implement centralized logging?
Answer:
Use Fluentd or Logstash as log collector.
Store logs in ElasticSearch or Loki.
Visualize in Kibana or Grafana.
Q32. How do you secure Kubernetes secrets?
Answer:
Store in etcd with encryption at rest.
Use KMS provider for extra security.
Integrate with HashiCorp Vault or AWS Secrets Manager.
Use RBAC to restrict access.
Q33. What is Pod Security Context?
Answer:
Defines security settings for pods/containers:
Run as a specific user/group
Set file permissions
Restrict capabilities
Example:
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
Q34. How do you perform a rolling update with zero downtime?
Answer:
Use Deployment with spec.strategy.type: RollingUpdate.
Set maxUnavailable and maxSurge values.
Monitor rollout: kubectl rollout status deployment <name>
Scenario: Update frontend application while serving traffic without downtime.
Q35. How do you debug a pod that cannot pull images?
Answer:
Check pod events: kubectl describe pod <pod>
Check image name and tag.
Verify container registry authentication.
Use kubectl run with the same image to test pull.
Q36. What is a sidecar container pattern?
Answer:
A sidecar container runs alongside the main container in a pod.
Use Cases: logging, monitoring, proxy, configuration update.
Example: Fluentd sidecar to collect app logs.
Q37. How do you implement blue-green deployments in Kubernetes?
Answer:
Deploy a new version alongside the old.
Use Ingress or Service to route traffic to the new version.
Switch traffic gradually or fully once verified.
Old version can be rolled back if needed.
Q38. How do you handle multi-tenancy in Kubernetes?
Answer:
Use namespaces for tenant isolation.
Apply NetworkPolicies to restrict traffic.
Use ResourceQuotas to limit CPU/memory.
Apply RBAC for tenant-specific access.
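A ResourceQuota sketch limiting one tenant's namespace (the namespace name and limits are illustrative):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"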
Q39. Disaster Recovery in Kubernetes – How do you plan it?
Answer:
Backup etcd regularly.
Use stateful backup for databases (PV snapshots).
Deploy apps across multi-zone clusters.
Keep a recovery playbook with tested restore steps.
Q40. How do you implement CI/CD with Kubernetes?
Answer:
Build container images with Jenkins/GitHub Actions.
Push images to container registry.
Deploy to K8s using kubectl, Helm, or ArgoCD.
Use rolling updates for zero downtime.
Q41. What is Helm in Kubernetes, and why is it used?
Answer:
Helm is a package manager for Kubernetes that simplifies deployment of applications using charts (pre-configured templates).
It manages installation, upgrade, rollback, and versioning of apps.
Scenario: Deploying a WordPress application with a single command instead of creating multiple manifests manually.
Q42. Difference between Helm 2 and Helm 3?
Answer:
| Feature | Helm 2 | Helm 3 |
|---|---|---|
| Tiller | Present, server-side | Removed, client-side only |
| Security | Issues with Tiller access | Improved with K8s RBAC |
| CRD management | Manual | Built-in support |
Q43. How do you perform rollback in Helm?
Answer:
helm rollback <release-name> <revision-number>
Helm keeps a history of releases, making rollback easy.
Scenario: After upgrading an app, if pods fail health checks, you can rollback to the previous stable release.
Q44. What are Helm values.yaml files?
Answer:
values.yaml contains default configuration values for a Helm chart.
Can override values during deployment using --set or -f custom-values.yaml.
Scenario: Deploying the same app in dev, staging, and prod with different resources or replicas.
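A sketch of environment-specific override files, assuming the chart's values.yaml exposes replicaCount and resources (applied with helm install or helm upgrade using -f):
# values-dev.yaml (hypothetical overrides)
replicaCount: 1
resources:
  requests:
    cpu: 100m
    memory: 128Mi

# values-prod.yaml (hypothetical overrides)
replicaCount: 5
resources:
  requests:
    cpu: 500m
    memory: 512Mi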
Q45. What are Custom Resource Definitions (CRDs)?
Answer:
CRDs allow you to extend Kubernetes with custom resources.
You define a new resource type (like MySQLCluster) and manage it via API.
Scenario: Creating a CRD for a Kafka cluster that automates scaling and backup.
Q46. What is a Kubernetes Operator?
Answer:
Operator is a custom controller that manages CRDs and automates lifecycle tasks for complex applications.
Examples: Prometheus Operator, MongoDB Operator.
Scenario: MongoDB Operator automates provisioning, scaling, backup, and failover.
Q47. Difference between CRD and Operator?
Answer:
| Feature | CRD | Operator |
|---|---|---|
| Purpose | Define new resource type | Manage lifecycle of resources |
| Automation | No | Yes, includes logic for operations |
| Examples | MySQLCluster CRD | MongoDB Operator |
Q48. What is a service mesh, and why use it?
Answer:
Service mesh manages service-to-service communication transparently.
Features: load balancing, encryption, observability, retries, circuit breaking.
Examples: Istio, Linkerd, Consul.
Scenario: Istio can enforce mutual TLS and route traffic gradually for canary deployments.
Q49. Explain Istio sidecar injection.
Answer:
Istio injects Envoy proxy as a sidecar in pods to manage traffic.
Can be automatic (by labeling the namespace with istio-injection=enabled) or manual (using istioctl kube-inject).
Enables: traffic routing, telemetry, security policies without changing app code.
Q50. What are Ingress Controllers vs Service Mesh?
Answer:
| Feature | Ingress Controller | Service Mesh |
|---|---|---|
| Scope | North-south traffic (external → cluster) | East-west traffic (service → service) |
| Functions | Routing, TLS termination | Routing, retries, telemetry, security |
| Example | NGINX, Traefik | Istio, Linkerd |
Q51. How do you monitor Kubernetes cluster health?
Answer:
Node metrics: CPU, memory → kubectl top nodes
Pod metrics: kubectl top pods
Cluster metrics: Prometheus, Grafana
Logging: ELK stack or Loki
Scenario: Alert if CPU usage > 80% for 5 minutes, HPA scales pods automatically.
Q52. What are Prometheus and Alertmanager?
Answer:
Prometheus: Collects metrics using pull model.
Alertmanager: Sends alerts (email, Slack, PagerDuty) based on Prometheus rules.
Works with Grafana dashboards for visualization.
Q53. How do you implement distributed tracing in Kubernetes?
Answer:
Use Jaeger or Zipkin.
Trace requests across microservices for performance analysis.
Sidecar proxies in service mesh can automatically collect traces.
Scenario: Identify latency in multi-service request chain in production.
Q54. How do you troubleshoot Kubernetes pod startup issues?
Answer:
Check events: kubectl describe pod <pod>
Check logs: kubectl logs <pod>
Verify image pull and container command
Check init containers
Check resource limits (CPU/memory)
Q55. How do you debug networking issues?
Answer:
kubectl exec into pod and ping other pods.
Verify service endpoints: kubectl get endpoints
Check NetworkPolicy configuration.
Check CNI plugin logs (Flannel/Calico).
Q56. How do you debug DNS resolution issues in Kubernetes?
Answer:
Use kubectl exec <pod> -- nslookup <service>
Verify CoreDNS pod status: kubectl get pods -n kube-system
Check /etc/resolv.conf in pod.
Q57. How do you optimize resource allocation for pods?
Answer:
Set requests and limits for CPU/memory.
Use Vertical Pod Autoscaler for adjusting resources.
Use node pools for high-performance workloads.
Monitor resource usage and adjust HPA thresholds.
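A container spec fragment showing requests and limits (values are illustrative):
resources:
  requests:
    cpu: "250m"        # guaranteed minimum used for scheduling
    memory: "256Mi"
  limits:
    cpu: "500m"        # hard ceiling enforced at runtime
    memory: "512Mi"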
Q58. How do you prevent pod eviction under high load?
Answer:
Reserve resources with requests for critical pods.
Set priorityClass for important pods.
Use taints and tolerations to control pod placement.
Q59. How do you implement GitOps in Kubernetes?
Answer:
Use ArgoCD or Flux to sync Git repository with K8s cluster.
Any change in Git automatically reflects in cluster.
Ensures declarative, version-controlled deployments.
Scenario: Developers commit YAML to Git → ArgoCD deploys app automatically → audit trail available.
Q60. How do you perform zero-downtime deployments?
Answer:
Use rolling updates in Deployment.
Set maxUnavailable=0 and maxSurge=1 (or higher).
Verify readiness probes before routing traffic.
Optionally, use blue-green or canary deployment for safer releases.
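A Deployment spec fragment sketch for the strategy described above (values are illustrative):
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0   # never take a Pod down before its replacement is ready
    maxSurge: 1         # allow one extra Pod during the update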