Top Interview Questions
OpenShift is a powerful, enterprise-grade container platform developed by Red Hat, designed to facilitate the deployment, management, and scaling of applications in a cloud-native environment. It is built on Kubernetes, the leading open-source container orchestration platform, and extends its capabilities with developer-friendly tools, enhanced security features, and enterprise support. OpenShift allows organizations to adopt DevOps practices, accelerate software development, and efficiently manage containerized applications across hybrid and multi-cloud infrastructures.
Originally, OpenShift started as a Platform-as-a-Service (PaaS) solution but has evolved into a comprehensive Kubernetes-based container platform, enabling both developers and IT operations teams to collaborate seamlessly in deploying scalable applications.
OpenShift architecture is composed of several layers that integrate to provide a robust container management ecosystem:
Kubernetes Core:
At its heart, OpenShift uses Kubernetes as the container orchestration engine. Kubernetes manages container deployment, scaling, and networking, ensuring high availability and resource optimization.
OpenShift API and Web Console:
OpenShift provides a REST API and a user-friendly web console for developers and administrators. The console allows monitoring, deploying, and scaling applications with minimal complexity.
Container Runtime:
OpenShift uses CRI-O as its container runtime (earlier releases used Docker) to run applications in lightweight, isolated environments.
Operators:
OpenShift uses Kubernetes Operators to manage applications and infrastructure components. Operators automate complex tasks such as backups, updates, and scaling.
Build and Deployment Tools:
OpenShift includes Source-to-Image (S2I) and BuildConfigs to automate building container images directly from source code, streamlining continuous integration and delivery (CI/CD).
Networking and Service Mesh:
OpenShift offers advanced networking capabilities with OpenShift SDN, routing, load balancing, and optional service mesh integration via Istio or Red Hat OpenShift Service Mesh.
Storage Integration:
Persistent storage in OpenShift can be managed via Persistent Volume Claims (PVCs) and integrates with cloud storage providers and on-premises storage solutions.
Security Layer:
Security is integral in OpenShift. It enforces Role-Based Access Control (RBAC), Security Context Constraints (SCCs), and integrates with SELinux and other security mechanisms to ensure safe multi-tenant operations.
OpenShift is available in several editions depending on deployment preferences:
OpenShift Container Platform (OCP):
The enterprise edition suitable for on-premises or private cloud deployments, offering full support and enterprise-grade features.
Red Hat OpenShift Dedicated:
Managed by Red Hat, this edition is hosted on public clouds like AWS or Azure. Users do not need to manage underlying infrastructure.
OpenShift Online:
A public cloud PaaS offering from Red Hat suitable for developers to quickly deploy and test applications.
OpenShift OKD (Origin Kubernetes Distribution):
The upstream community version of OpenShift. It is open-source and ideal for learning, testing, and experimentation.
OpenShift brings a rich set of features that distinguish it from plain Kubernetes:
Developer-Centric Tools:
OpenShift emphasizes a developer-first approach. It supports multiple programming languages, frameworks, and databases, along with integrated CI/CD pipelines to accelerate development.
Automated Scaling:
With Horizontal Pod Autoscaling (HPA), OpenShift can automatically scale applications based on CPU/memory usage or custom metrics.
Integrated CI/CD Pipelines:
OpenShift integrates with Jenkins, Tekton, and other CI/CD tools, enabling automated build, test, and deployment workflows.
Multi-Tenant Security:
OpenShift enforces security boundaries between projects and teams, making it suitable for organizations with strict compliance and regulatory requirements.
Hybrid Cloud Support:
OpenShift runs on on-premises data centers, private clouds, and public clouds, providing flexibility for hybrid or multi-cloud strategies.
Monitoring and Logging:
Built-in monitoring with Prometheus and logging with the Elasticsearch, Fluentd, and Kibana (EFK) stack give operators visibility into cluster health and application performance.
Service Catalog and Operators:
OpenShift has a Service Catalog that allows deployment of pre-configured services, databases, and middleware. Operators manage the lifecycle of applications and infrastructure components, reducing operational overhead.
OpenShift follows a layered architecture:
Master Nodes:
These manage the Kubernetes control plane components, including the API server, controller manager, scheduler, and etcd database. They handle cluster management, authentication, and scheduling.
Worker Nodes:
These nodes host the application containers and pods. Each worker node runs the kubelet, a container runtime (CRI-O), and networking components for intra-cluster communication.
Etcd:
A distributed key-value store used for cluster state management and configuration storage. It ensures consistency across the cluster.
Ingress Controllers and Routing:
OpenShift uses ingress controllers to route external traffic to internal services, along with secure HTTPS termination and load balancing.
Persistent Storage:
OpenShift supports dynamic provisioning for persistent volumes, allowing applications to retain state even if pods are rescheduled.
OpenShift provides several advantages over plain Kubernetes or traditional infrastructure:
Enterprise-Ready Kubernetes:
OpenShift enhances Kubernetes with security, monitoring, and automation tools suitable for enterprise deployments.
Security by Default:
Features like SCCs, integrated OAuth, and image scanning reduce risks associated with containerized applications.
Simplified DevOps Workflows:
Developers can focus on writing code, while OpenShift handles building, scaling, and deploying containers.
Platform Consistency:
OpenShift offers a consistent platform across on-premises and cloud environments, simplifying hybrid deployments.
Reduced Operational Overhead:
Automated updates, Operator-managed applications, and built-in monitoring reduce the burden on IT teams.
Scalability:
OpenShift supports large-scale clusters with hundreds of nodes and thousands of pods.
Integration with Red Hat Ecosystem:
Integration with Red Hat Enterprise Linux (RHEL), Ansible automation, and middleware like Red Hat JBoss enhances enterprise adoption.
OpenShift is versatile and supports a wide range of use cases:
Microservices Architecture:
OpenShift is ideal for deploying microservices-based applications with independent scaling and CI/CD pipelines.
Hybrid and Multi-Cloud Deployments:
Enterprises can deploy applications across multiple cloud providers while maintaining a unified management platform.
DevOps and Continuous Delivery:
Integrated tools allow teams to implement CI/CD pipelines, automate testing, and streamline releases.
Application Modernization:
Legacy applications can be containerized and migrated to OpenShift, enabling modernization without full rewrites.
Big Data and AI/ML Workloads:
OpenShift can manage resource-intensive applications like machine learning pipelines, data analytics workloads, and AI model deployments.
Edge Computing:
Lightweight OpenShift clusters can run at the edge for IoT or remote environments.
While OpenShift is built on Kubernetes, there are key differences:
| Feature | Kubernetes | OpenShift |
|---|---|---|
| Installation | Manual, requires configuration | Easy installation with preconfigured settings |
| Security | Basic | Enhanced security (SCC, SELinux) |
| CI/CD | Not included | Built-in Jenkins, Tekton pipelines |
| Networking | Requires manual configuration | Preconfigured SDN and routing |
| Support | Community support | Enterprise support from Red Hat |
| User Interface | Minimal | Web console and CLI for developers |
OpenShift simplifies many Kubernetes complexities, making it more approachable for enterprises without sacrificing flexibility or power.
Despite its advantages, OpenShift has some challenges:
Learning Curve:
OpenShift introduces additional abstractions like Projects, BuildConfigs, and SCCs that require learning.
Resource Intensive:
OpenShift clusters require significant CPU, memory, and storage resources compared to plain Kubernetes.
Cost:
Enterprise editions and cloud hosting may involve higher costs, although the managed services reduce operational burden.
Complex Networking:
For advanced networking setups or hybrid deployments, configuration can be complex.
Answer:
OpenShift is a container orchestration platform developed by Red Hat. It is built on top of Kubernetes and provides a complete platform for developing, deploying, and managing containerized applications. OpenShift automates the deployment, scaling, and management of applications in a cloud environment.
Key features include:
Built-in CI/CD pipelines.
Application monitoring and logging.
Automated scaling.
Integrated developer tools and web console.
Answer:
OpenShift comes in multiple variants:
OpenShift Origin (OKD): Open-source version of OpenShift.
OpenShift Container Platform (OCP): Enterprise version provided by Red Hat.
OpenShift Online: Public cloud version managed by Red Hat.
OpenShift Dedicated: Managed cluster for enterprises on public clouds like AWS or Azure.
Answer:
| Feature | Kubernetes | OpenShift |
|---|---|---|
| Base | Open-source | Kubernetes-based, enterprise-ready |
| Security | Needs manual configuration | Includes built-in security policies |
| UI | Dashboard is basic | Web console with UI and CLI |
| CI/CD | Not integrated | Integrated Jenkins pipelines |
| Registry | Optional | Includes built-in container registry |
Answer:
A Pod is the smallest deployable unit in OpenShift (same as Kubernetes). It can contain one or more containers that share storage, network, and configuration. Pods are ephemeral and can be managed automatically by OpenShift controllers.
Answer:
A Deployment defines the desired state of your application, including the number of replicas, container image, and update strategy. OpenShift ensures that the actual state matches the desired state.
DeploymentConfig (DC) in OpenShift is similar but provides extra features like hooks and triggers.
Rollouts can be automated for updates without downtime.
Answer:
A Service is an abstraction that defines a logical set of Pods and a policy for accessing them. It allows Pods to communicate with each other or expose applications to external users.
Types of Services:
ClusterIP (default, internal communication)
NodePort (external access)
LoadBalancer (external load balancing)
ExternalName (external service mapping)
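A minimal Service definition might look like the sketch below; the service name, labels, and port numbers are illustrative assumptions, not values from the source:
apiVersion: v1
kind: Service
metadata:
  name: my-app                # assumed service name
spec:
  selector:
    app: my-app               # selects pods labeled app=my-app
  ports:
  - port: 80                  # port exposed by the Service
    targetPort: 8080          # container port traffic is forwarded to
  type: ClusterIP             # default type; use NodePort or LoadBalancer for external access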
Answer:
A Route exposes an OpenShift service to the outside world using a hostname, such as www.example.com. Routes are OpenShift-specific and provide features like TLS termination and path-based routing.
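As a sketch, a Route with edge TLS termination could be declared like this (the hostname comes from the answer above; the route and service names are assumptions):
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
spec:
  host: www.example.com       # external hostname
  to:
    kind: Service
    name: my-app              # assumed backing service
  tls:
    termination: edge         # TLS is terminated at the router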
Answer:
A Project in OpenShift is similar to a namespace in Kubernetes. It provides a boundary for resources, security, and user access. Each project has its own quota, policies, and roles.
Answer:
oc is the command-line tool for OpenShift that allows you to manage applications, resources, and the cluster.
Common commands:
oc login → Login to OpenShift cluster
oc new-project → Create a new project
oc get pods → List all pods
oc create -f <file> → Create resources from a YAML file
oc describe pod <pod-name> → Detailed information about a pod
Answer:
OpenShift supports 3 main types of builds:
Source-to-Image (S2I): Automatically builds a container image from application source code.
Docker Builds: Builds images using a Dockerfile.
Custom Builds: Allows custom build strategies using scripts.
Answer:
ConfigMap: Stores non-sensitive configuration data (like environment variables or config files) for pods.
Secret: Stores sensitive information like passwords, tokens, or keys. Secrets are base64-encoded (an encoding, not encryption), so access to them should be restricted with RBAC.
Command example:
oc create configmap my-config --from-file=config.txt
oc create secret generic my-secret --from-literal=password=12345
Answer:
Persistent Volume (PV): A storage resource in the cluster, provisioned by an admin.
Persistent Volume Claim (PVC): A request by a user for storage resources, specifying size and access mode.
OpenShift attaches PVs to Pods using PVCs for persistent storage, unlike ephemeral storage that disappears with the Pod.
Answer:
HPA automatically scales the number of pod replicas based on CPU, memory usage, or custom metrics. It ensures that applications handle variable load without manual intervention.
Example command:
oc autoscale dc my-app --min 2 --max 5 --cpu-percent=70
Answer:
A Template defines a set of objects (like pods, services, routes) that can be parameterized and reused to deploy applications quickly. Templates make deployment repeatable and consistent.
Answer:
The OpenShift Router manages incoming external traffic and routes it to the appropriate service inside the cluster using Routes. It supports load balancing, TLS termination, and sticky sessions.
Answer:
OpenShift comes with an internal container image registry to store, manage, and deploy images. Users can push images directly from builds or manually.
Commands:
oc get is # List ImageStreams
oc import-image <image> # Import images
Answer:
Docker: Manages individual containers.
OpenShift: Orchestrates containers at scale using Kubernetes, adding features like CI/CD, monitoring, logging, and enterprise security.
Answer:
oc get all → List all resources in a project
oc logs <pod-name> → View logs
oc exec -it <pod-name> -- bash → Access pod shell
oc rollout status dc/<deployment-config> → Check deployment status
oc delete pod <pod-name> → Delete a pod
Answer:
BuildConfig defines how application source code is built into a container image. It specifies the build strategy (S2I, Docker, Custom), source repository, and triggers for automatic builds.
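A minimal S2I-style BuildConfig sketch is shown below; the repository URL, builder image, and names are assumptions for illustration:
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app
spec:
  source:
    git:
      uri: https://github.com/user/repo.git   # assumed source repository
  strategy:
    sourceStrategy:                            # S2I build strategy
      from:
        kind: ImageStreamTag
        name: openjdk-11:latest                # assumed builder image
  output:
    to:
      kind: ImageStreamTag
      name: my-app:latest                      # resulting image pushed to the internal registry
  triggers:
  - type: ConfigChange                         # rebuild when the BuildConfig changes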
Answer:
SCC defines permissions and security policies for pods, including:
Running as a non-root user
Accessing host resources
Using privileged containers
OpenShift uses SCC to enforce security best practices in multi-tenant clusters.
Answer:
Deployment (Kubernetes): Standard Kubernetes object to manage pods and replicas.
DeploymentConfig (OpenShift-specific): Provides extra features like:
Hooks (pre/post deployment scripts)
Triggers (automatic deployment on image changes or config updates)
Rollback support
Example Scenario:
If you push a new container image to the internal OpenShift registry, DeploymentConfig can automatically trigger a new deployment.
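A hedged sketch of a DeploymentConfig with an ImageChange trigger (names and replica counts are assumptions) could look like this:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: ' '                     # resolved by the ImageChange trigger
  triggers:
  - type: ConfigChange
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - my-app
      from:
        kind: ImageStreamTag
        name: my-app:latest            # pushes to this tag start a new rollout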
Answer:
An ImageStream tracks container images in the registry and allows automatic updates of deployments when a new image is available.
Key points:
Helps manage versions of images
Can trigger DeploymentConfig automatically
Command example:
oc get is # List ImageStreams
oc describe is <image-name>
Answer:
Code Push: Developer pushes code to a repository (Git).
BuildConfig: OpenShift triggers a build (S2I/Docker).
Image Creation: A container image is created and pushed to the internal registry.
DeploymentConfig: New image triggers a deployment automatically.
Pods: OpenShift schedules pods using the new image.
Service & Route: Application becomes accessible internally or externally.
Scenario: This workflow ensures CI/CD is automated without manual intervention.
Answer:
An Operator is a method of packaging, deploying, and managing Kubernetes applications. It automates tasks like installation, upgrades, scaling, and backup.
Key points:
Written as custom controllers
Handles complex applications (databases, messaging queues)
Can be installed via OpenShift OperatorHub
Answer:
| Type | Description | Use Case |
|---|---|---|
| Ephemeral Storage | Data is lost when the pod is deleted | Caches, temporary files |
| Persistent Storage | Data survives pod deletion using PV and PVC | Databases, logs, configuration files |
Answer:
A Multi-Container Pod contains more than one container that share the same network namespace and storage volumes.
Use Case:
Sidecar containers for logging, monitoring, or proxy
Example: Nginx container + log-collector container in the same pod
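The sidecar pattern from the example above could be sketched as follows; the pod name, images, and mount path are assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
spec:
  containers:
  - name: nginx
    image: nginx:latest                # main application container
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-collector
    image: fluentd:latest              # assumed sidecar image for log collection
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx        # same volume shared by both containers
  volumes:
  - name: logs
    emptyDir: {}                       # ephemeral shared volume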
Answer:
Route (OpenShift-specific): Exposes services outside the cluster, supports TLS termination, path-based routing.
Ingress (Kubernetes-native): Similar functionality but requires Ingress controller.
Scenario: OpenShift often uses Routes internally, but Ingress may be used in hybrid environments.
Answer:
Manual Scaling:
oc scale dc <deployment-name> --replicas=5
Automatic Scaling:
oc autoscale dc <deployment-name> --min 2 --max 10 --cpu-percent=70
OpenShift automatically increases or decreases pods based on load.
Answer:
A ServiceAccount is used to provide permissions and identity for pods to interact with the OpenShift API.
Default: default service account in each project
Can be assigned roles via RoleBindings
Scenario: Pods that need to pull secrets or access other services must use a proper ServiceAccount.
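A minimal sketch of creating a ServiceAccount, granting it a role, and assigning it to a workload (the names my-app-sa and my-app are assumptions):
oc create serviceaccount my-app-sa
oc adm policy add-role-to-user view -z my-app-sa     # -z targets a service account in the current project
oc set serviceaccount dc/my-app my-app-sa            # run the DeploymentConfig's pods under this service account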
Answer:
Templates allow parameterization so that you can deploy multiple instances of an application with different configurations without changing YAML files.
Example parameter definition inside a template:
parameters:
- name: APP_NAME
  value: myapp
Deploy with: oc new-app -f template.yaml -p APP_NAME=testapp
Answer:
The OpenShift Scheduler assigns pods to nodes based on resource availability, constraints, and policies.
Key points:
Ensures efficient use of cluster resources
Supports nodeSelector, taints, and tolerations
Scenario: You want certain pods to run only on GPU-enabled nodes → use nodeSelector.
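For the GPU scenario above, a sketch would be to label the node and add a nodeSelector to the pod spec (the label key/value are assumptions):
oc label node <node-name> gpu=true
spec:
  nodeSelector:
    gpu: "true"        # pods schedule only on nodes carrying this label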
Answer:
Taint: Applied to a node to repel pods unless they tolerate it.
Toleration: Applied to a pod to allow it to run on tainted nodes.
Example:
oc adm taint nodes node1 key=value:NoSchedule
This prevents pods from being scheduled unless they have matching tolerations.
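The matching toleration in the pod spec would look roughly like this (key, value, and effect mirror the taint command above):
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"   # allows the pod onto nodes tainted key=value:NoSchedule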
Answer:
Logging: OpenShift integrates ELK/EFK stack (Elasticsearch, Fluentd, Kibana) for centralized logging.
Monitoring: Uses Prometheus and Grafana to monitor cluster health and metrics.
Scenario: If a pod crashes frequently, logs and metrics help troubleshoot the issue.
Answer:
Step 1: Check pod status
oc get pods
Step 2: Check logs
oc logs <pod-name>
Step 3: Access pod shell for debugging
oc exec -it <pod-name> -- bash
Step 4: Describe the pod for events
oc describe pod <pod-name>
Step 5: Verify PVC, ConfigMap, Secret, and image versions.
Answer:
OpenShift Online: Public cloud, managed by Red Hat. You don’t manage nodes or infrastructure.
OpenShift Container Platform: Enterprise deployment on-premise or private cloud. You manage nodes, networking, and storage.
Answer:
OpenShift DeploymentConfig supports rolling back to previous versions:
oc rollout undo dc/<deployment-config-name>
Useful in case of failed deployments or application issues.
Answer:
| Strategy | Description |
|---|---|
| Rolling | Updates pods gradually with zero downtime |
| Recreate | Stops old pods first, then starts new pods |
| Custom | Executes user-defined scripts or actions during deployment |
Answer:
Quotas limit resource usage within a project: CPU, memory, storage, number of pods, services, etc.
Command Example:
oc get quota
oc describe quota
Answer:
Horizontal Scaling: Adding/removing pod replicas (HPA).
Vertical Scaling: Increasing resources (CPU/memory) of existing pods (VPA).
Scenario: Web applications usually use horizontal scaling for handling traffic spikes.
Answer:
Steps:
Create a project: oc new-project my-java-app
Use S2I build: oc new-app openjdk-11~https://github.com/user/repo.git
Expose service: oc expose svc/my-java-app
Access application via route: oc get route
Answer:
RBAC controls who can do what in the OpenShift cluster. It uses:
Role: Permissions within a project (namespace)
ClusterRole: Permissions across the cluster
RoleBinding: Assigns Role to a user or group in a project
ClusterRoleBinding: Assigns ClusterRole to a user/group across cluster
Example:
oc create rolebinding dev-user-binding --clusterrole=edit --user=john
This allows john to edit resources in a project.
Answer:
Secrets store sensitive data like passwords, keys, and tokens.
They can be mounted as files or used as environment variables in pods.
Example as environment variable:
env:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: db-secret
      key: password
| Feature | ConfigMap | Secret |
|---|---|---|
| Use | Non-sensitive data | Sensitive data (passwords, tokens) |
| Storage | Plain text | Base64-encoded (not encrypted) |
| Mount as | File or environment variable | File or environment variable |
Answer:
OpenShift uses Software Defined Networking (SDN) for pod communication:
Every pod gets a unique IP
Pods communicate across nodes transparently
Supports network policies to restrict traffic between pods
Command to view network policies:
oc get networkpolicy
Answer:
NetworkPolicy defines ingress and egress rules for pods.
Controls which pods or external IPs can communicate
Improves cluster security
Example: Allow only app pods to access the database pod:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
...
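Completing the truncated example above, a minimal sketch might look like this; the label values and database port are assumptions:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-to-db
spec:
  podSelector:
    matchLabels:
      role: database           # policy applies to the database pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: app            # only pods labeled role=app may connect
    ports:
    - protocol: TCP
      port: 5432               # assumed database port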
Answer:
Namespace: Kubernetes concept to isolate resources
Project: OpenShift abstraction built on namespaces, includes:
Quotas
Security policies
Roles & bindings
Scenario: Every project is essentially a namespace, but with additional security and management features.
Answer:
OpenShift DeploymentConfig allows rollback to previous versions:
oc rollout undo dc/my-app
Useful when a new image causes pod failures
Triggers start new rollouts automatically; rolling back to a previous revision is done explicitly with oc rollout undo
Answer:
Steps:
Check pod events:
oc describe pod <pod-name>
Check image pull secret configuration:
oc get secrets
Verify Docker registry credentials
Make sure image exists in registry
Scenario: Often occurs when private registry credentials are missing.
Answer:
OperatorHub is a marketplace of operators that simplify deployment of complex apps in OpenShift.
Benefits:
Automates installation, updates, backup, scaling
Manages database, messaging, or monitoring systems
Easy to install via Web Console or CLI
| Feature | S2I (Source-to-Image) | Docker Build |
|---|---|---|
| Input | Source code | Dockerfile |
| Output | Container image | Container image |
| Purpose | Simplified build for developers | Full control over image customization |
| Automation | Automatic integration with OpenShift | Manual build or CI/CD integration |
Answer:
OpenShift provides Jenkins pipelines to automate build, test, and deployment:
Build: Source code → Container image
Test: Automated testing in pipeline
Deploy: DeploymentConfig triggers deployment
Monitor: Check logs, metrics, and alerts
Scenario: Developer pushes code → Jenkins pipeline automatically builds and deploys to OpenShift.
Answer:
Create a PVC YAML file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Apply it:
oc create -f pvc.yaml
Mount in pod as volume.
Answer:
Probes are used to check pod health:
Liveness Probe: Checks if the pod is alive; restarts if unhealthy
Readiness Probe: Checks if the pod is ready to serve traffic
Example:
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
Answer:
Use Secrets for sensitive data
Use NetworkPolicy to control pod communication
Enable Role-Based Access Control (RBAC)
Enforce Security Context Constraints (SCC)
Use TLS termination on Routes
Answer:
OpenShift uses Prometheus and Grafana:
Prometheus: Collects metrics from pods, nodes, and services
Grafana: Visualizes metrics with dashboards
Alerts: Triggered when thresholds are breached
Command example:
oc adm top nodes
oc adm top pods
Answer:
Create separate Secrets for each environment
Use ConfigMaps for non-sensitive environment-specific configs
Use parameters in Templates or CI/CD pipelines to inject secrets automatically
Answer:
Steps:
Check pod logs: oc logs <pod-name>
Describe pod for events: oc describe pod <pod-name>
Verify image, config, secrets, and environment variables
Check liveness and readiness probes
Answer:
ImageStreamTag (IST): Refers to a specific version/tag of an image in an ImageStream
Can be used to trigger deployments when a new tag is available
Example:
oc import-image my-app:latest --from=registry/my-app:latest
| Component | Purpose |
|---|---|
| Service | Internal load balancing between pods |
| Route | Expose service to external users via hostname |
Answer:
Create two environments (blue and green)
Deploy the new version to green
Update Route to point to green environment
Rollback to blue if issues occur
Scenario: Provides zero downtime deployments.
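One way to switch traffic in this scenario is to repoint the Route to the green service; the route and service names below are assumptions:
oc patch route my-app -p '{"spec":{"to":{"name":"my-app-green"}}}'   # switch traffic to the green deployment
oc patch route my-app -p '{"spec":{"to":{"name":"my-app-blue"}}}'    # roll back to blue if issues occur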
Answer:
OpenShift is built on Kubernetes, enhanced with enterprise-grade features:
Components:
Master Node: Controls the cluster (API server, Scheduler, Controller Manager, etcd)
Worker Node: Runs the workloads (pods)
etcd: Distributed key-value store for cluster state
Router: Handles external traffic and load balancing
Registry: Internal image registry for container images
SDN (Software Defined Network): Provides pod-to-pod communication
Operators: Automate deployment and lifecycle of applications
Key Points:
Master nodes manage authentication, authorization, scheduling, and scaling
Worker nodes run application pods, including monitoring and logging agents
Answer:
DeploymentConfig (DC) triggers allow automatic deployment when certain events occur:
Types of Triggers:
ImageChange: Deploys automatically when a new image is available in ImageStream
ConfigChange: Deploys when configuration files, environment variables, or secrets change
Manual: Deploy only when manually triggered
Command Example:
oc set triggers dc/my-app --from-image=my-image:latest -c my-app
Scenario: Ensures zero manual intervention during CI/CD pipelines.
Answer:
Operator: Automates installation, management, upgrades, and scaling of complex applications (e.g., databases, Kafka)
OLM (Operator Lifecycle Manager): Manages the operator lifecycle:
Install and upgrade operators
Manage dependencies
Assign RBAC permissions
Example: Using the Prometheus Operator for monitoring deployment.
Answer:
Resource Quotas: Limit the number of pods, CPU, memory, and services per project
LimitRange: Sets default, minimum, and maximum CPU/memory per pod or container
Commands:
oc get quota
oc describe quota
oc get limits
oc describe limits
Scenario: Prevent a single application from consuming all cluster resources.
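A ResourceQuota sketch for a project could look like this; the quota name and limits are illustrative assumptions:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-quota
spec:
  hard:
    pods: "20"               # maximum number of pods in the project
    requests.cpu: "4"        # total CPU that can be requested
    requests.memory: 8Gi
    limits.cpu: "8"          # total CPU limit across all pods
    limits.memory: 16Gi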
Answer:
HPA: Automatically scales pods horizontally based on CPU, memory, or custom metrics
VPA: Adjusts resources (CPU/memory) of running pods vertically
Command Example for HPA:
oc autoscale dc/my-app --min 2 --max 10 --cpu-percent=70
Scenario: HPA for web app scaling under traffic spikes; VPA for database pods to optimize resources.
Answer:
Persistent Volumes (PV): Pre-provisioned storage
Persistent Volume Claims (PVC): User request for storage
Dynamic Provisioning: Automatically provision storage using StorageClass
Scenario:
Database pods require PV to store data
Pods can be deleted or rescheduled without losing data
Command:
oc get pv
oc get pvc
Answer:
Cluster Networking: Pod-to-pod communication with unique IPs
Service Networking: ClusterIP, NodePort, LoadBalancer
SDN Plugins: OpenShift SDN, OVN-Kubernetes, or third-party CNI
NetworkPolicy: Control traffic flow between pods
Scenario: Use NetworkPolicy to isolate frontend and backend pods for security compliance.
Answer:
Check pod status: oc get pods
Describe pod events: oc describe pod <pod-name>
Check logs: oc logs <pod-name>
Access pod shell: oc exec -it <pod-name> -- bash
Check image, configmaps, secrets, volumes, probes
Scenario: CrashLoopBackOff due to missing environment variables or volume mount errors.
Answer:
Create environment-specific Secrets and ConfigMaps
Use Templates and parameters to inject values dynamically
Use CI/CD pipelines to deploy environment-specific configurations
Scenario: Automating deployments with Jenkins pipelines while keeping credentials secure.
Answer:
| Component | Purpose |
|---|---|
| Route | OpenShift-specific, exposes service via hostname, supports TLS |
| Ingress | Kubernetes-native, requires Ingress Controller for external access |
| LoadBalancer | External traffic distribution across nodes and pods |
Scenario: Use Routes for public web apps, LoadBalancer for internal services in hybrid cloud.
Answer:
Jenkins Pipelines: Automate build → test → deploy
S2I Builds: Source-to-Image builds integrated in pipeline
Triggers: DeploymentConfig triggers deployment on image change
Notifications & Rollbacks: Integrated for automated rollback on failure
Scenario: A new Git commit triggers a Jenkins job → OpenShift S2I build → DeploymentConfig rollout → Route updated.
Answer:
Blue-Green Deployment:
Deploy new version alongside old
Update Route to switch traffic
Rollback by pointing Route to old version
Canary Deployment:
Gradually route a small percentage of traffic to new version
Monitor performance before full rollout
Scenario: Zero-downtime deployment with monitoring.
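For the canary case, traffic can be split across two services behind one Route; a sketch with assumed route and service names:
oc set route-backends my-app my-app-v1=90 my-app-v2=10   # send roughly 10% of traffic to the new version
oc set route-backends my-app my-app-v2=100               # promote fully once metrics look healthy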
Answer:
SCC defines security permissions for pods:
Privileged access
Root vs non-root user
Access to host filesystem or ports
Bind SCCs to users or service accounts
Command:
oc get scc
oc describe scc restricted
Scenario: Ensuring multi-tenant security compliance in production clusters.
Answer:
Prometheus: Metrics collection for pods, nodes, services
Grafana: Visual dashboards for metrics
Alertmanager: Sends alerts for threshold breaches
Elasticsearch + Fluentd + Kibana (EFK): Centralized logging
Scenario: Monitor CPU/memory usage to optimize pod scaling.
Answer:
Verify internal registry is running: oc get pods -n openshift-image-registry
Ensure image pull secrets are configured for private registries
Check network connectivity between nodes and registry
Inspect ImageStreamTag and triggers
Answer:
OperatorHub hosts pre-built operators for applications like databases, monitoring, Kafka
Operators automate lifecycle management: installation, scaling, upgrades
Install via Web Console or CLI
Scenario: Deploying PostgreSQL Operator to manage database cluster in OpenShift.
Answer:
Requests: Guaranteed resources allocated to a pod
Limits: Maximum resources a pod can consume
Prevents resource starvation and overcommitment
Command:
oc describe pod <pod-name>
Scenario: Database pods require high CPU and memory requests to ensure performance.
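In the container spec this is expressed as a resources block; the values below are illustrative assumptions:
resources:
  requests:
    cpu: "500m"        # guaranteed CPU used for scheduling decisions
    memory: 1Gi        # guaranteed memory
  limits:
    cpu: "2"           # hard ceiling; the pod is throttled beyond this
    memory: 4Gi        # exceeding this can result in OOMKilled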
Answer:
Use OpenShift Cluster Version Operator (CVO)
Upgrade sequence: Master → Infrastructure → Worker nodes
Validate cluster health before and after upgrade
Test workloads in staging before production upgrade
Command:
oc get clusterversion
oc adm upgrade
Answer:
Use etcd snapshot for control plane data
Use Velero for namespace-level backup (including PVs)
Test restores in staging before production
Automate backup schedules
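A hedged sketch of both approaches (node name, backup path, and namespace are placeholders; the backup script path applies to OpenShift 4 and may vary by version):
oc debug node/<master-node> -- chroot /host /usr/local/bin/cluster-backup.sh /home/core/assets/backup   # etcd snapshot on a control-plane node
velero backup create my-project-backup --include-namespaces my-project                                  # namespace-level backup including PVs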
Answer:
Monitor pods using oc adm top pods
Check node usage using oc adm top nodes
Inspect slow queries or application logs
Check for resource limits or OOMKilled pods
Use Prometheus/Grafana dashboards
Answer:
Use OpenShift Hive or Red Hat Advanced Cluster Management (ACM)
Centralized policy management and monitoring
Automates cluster provisioning, upgrades, and RBAC across multiple clusters
Answer:
S2I: Builds images from source code automatically
Docker: Uses Dockerfile for custom builds
Custom: User-defined build steps and scripts
Choose strategy based on control, simplicity, or automation
Answer:
TLS for Routes and Services
NetworkPolicy to control traffic flow
ServiceAccount and RBAC for pod authorization
Secrets and ConfigMaps for credentials
Answer:
Ping pods across nodes to check connectivity
Inspect SDN logs (/var/log/messages or ovs-vswitchd)
Verify NetworkPolicy rules
Use oc get pods -o wide for pod IPs and node mapping
Check router and ingress pods for external access
Answer:
Projects: Logical isolation for teams or applications
RBAC: Controls access per project
SCC: Ensures pod-level security restrictions
NetworkPolicy: Restricts cross-project traffic
Answer:
Jenkins:
Use OpenShift Jenkins Pipeline plugin
Integrates with BuildConfig triggers
Automates S2I or Docker builds
GitLab:
Use GitLab Runner inside OpenShift or externally
Trigger builds using Webhooks
Tekton:
Cloud-native CI/CD pipelines
Runs pipelines as Kubernetes/OpenShift resources
Scenario: A commit to Git triggers an automated pipeline → builds container → deploys to dev environment.
Answer:
CVO automates cluster upgrades and ensures version consistency
Monitors cluster state and available updates
Performs rolling upgrades of masters, infrastructure, and worker nodes
Alerts admin if cluster is unhealthy before upgrade
Answer:
Use Red Hat Advanced Cluster Management (ACM) or Hive
Central dashboard to manage clusters, policies, and compliance
Supports cluster provisioning, upgrades, and disaster recovery
Synchronizes policies and RBAC across clusters
Answer:
Prometheus: Collect metrics from nodes, pods, and services
Grafana: Dashboard visualization
Alertmanager: Sends alerts for thresholds and SLA breaches
Node & Pod monitoring: oc adm top nodes and oc adm top pods
Scenario: Auto-scale pods based on CPU spikes and receive notifications for failures.
Answer:
Steps:
Check pod status and events: oc describe pod <pod-name>
Inspect logs for errors: oc logs <pod-name>
Verify liveness/readiness probes
Check for OOMKilled or resource constraints
Inspect ConfigMap, Secret, PVC bindings, or image versions
Answer:
Use ImageStreams to track image tags
Promote images by updating ImageStreamTag from dev → QA → prod
Deploy using DeploymentConfig triggers
Avoid rebuilding the same image in multiple environments
Scenario: Ensures consistent application versions across multiple environments.
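Image promotion can be done by retagging the same image between projects with oc tag; the project and tag names below are assumptions:
oc tag dev/my-app:latest qa/my-app:promoted        # promote the dev image into the qa project
oc tag qa/my-app:promoted prod/my-app:promoted     # later, promote the identical image to prod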
Answer:
Deploy a new version alongside the old pods
Update Route to send a small percentage of traffic to new pods
Monitor application metrics, logs, and error rates
Gradually increase traffic or rollback if issues occur
Tools: DeploymentConfig, Routes, Prometheus/Grafana monitoring
Answer:
Horizontal Scaling: HPA to scale pod replicas based on metrics
Vertical Scaling: VPA to adjust CPU/memory of pods
Node Scaling: Add/remove worker nodes to the cluster
Use cluster autoscaler on cloud environments for dynamic node scaling
Answer:
Check node status: oc get nodes
Describe node: oc describe node <node-name>
Inspect kubelet and SDN logs on node
Verify pod scheduling, taints, and tolerations
Check disk space, CPU, memory, and network connectivity
Answer:
Use TLS encryption for API server and etcd
Enable RBAC for API access
Use OAuth or LDAP for authentication
Enable audit logging for API requests
Limit admin access to secure networks
Answer:
Check router pods: oc get pods -n openshift-ingress
Inspect router logs: oc logs <router-pod>
Verify Route configurations and service bindings
Test DNS resolution and firewall rules
Restart router pods if necessary
Answer:
Check PV and PVC status: oc get pv,pvc
Monitor storage metrics in Prometheus
Use Ceph, Gluster, or AWS EBS metrics for backend storage
Identify slow I/O or high latency affecting pods
Answer:
Use Projects/Namespaces for tenant isolation
Configure RBAC and SCC per project
Apply NetworkPolicies to restrict cross-project traffic
Assign separate ServiceAccounts for applications
Answer:
Enforce SCC for privileged access control
Enable SELinux in enforcing mode
Use NetworkPolicy to isolate sensitive apps
Store secrets in Kubernetes Secrets or external vaults
Audit cluster changes and pod activities
Answer:
Check node and pod resource utilization: oc adm top nodes / oc adm top pods
Inspect logs for application errors
Identify pods causing high CPU/memory usage
Optimize SDN performance or adjust resource requests/limits
Evaluate storage I/O and network latency
Answer:
Backup etcd snapshots for cluster state
Backup PVs using Velero or other tools
Test restores in staging environments
Maintain multiple availability zones for high availability
Automate backup schedules and disaster recovery drills
Answer:
Use Cluster Version Operator (CVO)
Enable automated updates with rollback policies
Test upgrades on staging clusters before production
Monitor workloads and metrics during upgrade
Answer:
Test pod-to-pod communication with ping or iperf
Check SDN plugin logs (ovs-vswitchd, ovn-controller)
Monitor router and ingress pods for bottlenecks
Optimize overlay network or use alternative CNI
Inspect firewall or security rules affecting latency
Answer:
Use ResourceQuotas to limit CPU, memory, pods per project
Implement LimitRanges for containers
Monitor node utilization and adjust pod placement
Use HPA/VPA to balance workloads dynamically
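A LimitRange sketch that sets per-container defaults and ceilings (name and values are illustrative assumptions):
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
spec:
  limits:
  - type: Container
    default:               # applied when a container declares no limits
      cpu: "1"
      memory: 1Gi
    defaultRequest:        # applied when a container declares no requests
      cpu: 250m
      memory: 256Mi
    max:                   # hard per-container ceiling
      cpu: "2"
      memory: 2Gi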
Answer:
Upgrade masters first, then infrastructure nodes, then workers
Drain nodes gracefully to avoid downtime
Monitor pods and services after each upgrade
Use rolling deployments and readiness probes for applications