Top Interview Questions
In the rapidly evolving world of cloud computing, managing infrastructure consistently, efficiently, and securely has become a critical requirement for organizations. Traditionally, infrastructure provisioning involved manual steps like setting up servers, configuring networks, and managing storage. This manual approach was error-prone, time-consuming, and difficult to scale. Terraform, an open-source tool developed by HashiCorp, addresses these challenges by enabling Infrastructure as Code (IaC), allowing infrastructure to be defined, provisioned, and managed using declarative configuration files.
Terraform is a declarative IaC tool that allows you to define cloud and on-premises infrastructure using a high-level configuration language called HCL (HashiCorp Configuration Language). Unlike imperative tools that require step-by-step instructions, Terraform uses declarative syntax to describe the desired state of infrastructure. Once the desired state is defined, Terraform automatically creates, updates, or deletes resources to match it.
Terraform supports multiple cloud providers such as AWS, Azure, Google Cloud Platform (GCP), Oracle Cloud, and many others, along with on-premises solutions like VMware and OpenStack. This multi-cloud and hybrid-cloud capability makes Terraform a popular choice for organizations adopting cloud-agnostic strategies.
Infrastructure as Code (IaC):
Terraform treats infrastructure like software. By writing code, teams can version control, test, and collaborate on infrastructure configurations, reducing errors caused by manual provisioning.
Declarative Configuration Language (HCL):
HCL is simple, human-readable, and designed for infrastructure management. It allows users to define resources, dependencies, and outputs clearly.
Provider Ecosystem:
Terraform uses providers to interact with cloud services and APIs. Providers act as plugins that manage resources in different environments. For example, the AWS provider allows Terraform to create EC2 instances, S3 buckets, and IAM roles.
Resource Graph & Dependency Management:
Terraform automatically builds a resource dependency graph to determine the order of operations, ensuring resources are provisioned or destroyed in the correct sequence.
State Management:
Terraform maintains a state file that represents the current infrastructure state. This allows Terraform to track changes, plan updates, and prevent drift between declared and actual infrastructure.
Plan & Apply Workflow:
Terraform provides a two-step workflow:
terraform plan generates an execution plan, showing what changes will be applied.
terraform apply applies the changes to reach the desired state.
This approach ensures safe and predictable modifications to infrastructure.
Modularity & Reusability:
Terraform supports modules, which are reusable units of infrastructure code. Modules improve code organization, reduce duplication, and enable teams to share standard templates across projects.
Immutable Infrastructure:
Terraform encourages immutable infrastructure, meaning resources are replaced rather than modified in-place whenever possible. This reduces configuration drift and ensures consistency.
Terraform’s architecture can be broken down into several core components:
Configuration Files:
These are written in .tf files using HCL. They define resources, data sources, variables, outputs, and modules.
Providers:
Providers are responsible for managing interactions with APIs. Each provider knows how to create, read, update, and delete (CRUD) resources in a specific environment.
State File (terraform.tfstate):
The state file tracks the current infrastructure state. It is critical for detecting changes and planning updates. Terraform can store state locally or remotely using backends like S3, Azure Blob Storage, or Terraform Cloud.
CLI & Commands:
Terraform is primarily a command-line tool. Commands like init, plan, apply, and destroy allow users to initialize projects, plan changes, provision resources, and tear down infrastructure.
Execution Plan:
Terraform creates an execution plan before making changes. This ensures that users can review the proposed actions and avoid unintended modifications.
A typical Terraform workflow consists of the following steps:
Write Configuration:
Define the desired infrastructure using HCL. This includes specifying resources like servers, databases, and networking components.
Initialize Project:
Run terraform init to download provider plugins and initialize the working directory.
Plan Changes:
Execute terraform plan to preview changes. Terraform compares the desired state with the current state and shows a detailed plan.
Apply Changes:
Run terraform apply to implement the infrastructure changes. Terraform updates the state file to reflect the new reality.
Manage Infrastructure:
Over time, make updates by modifying configuration files. Terraform ensures that only necessary changes are applied without affecting other resources.
Destroy Infrastructure:
When resources are no longer needed, terraform destroy can be used to safely remove all resources defined in the configuration.
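The steps above map to a short command sequence. A minimal sketch of the day-to-day CLI loop (the plan file name is illustrative):

```shell
# Initialize the working directory and download provider plugins
terraform init

# Preview the changes Terraform would make, saving the plan
terraform plan -out=tfplan

# Apply exactly the plan that was reviewed
terraform apply tfplan

# Tear everything down when the resources are no longer needed
terraform destroy
```

Saving the plan with -out and applying that file guarantees that what runs is exactly what was reviewed.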
Consistency:
By defining infrastructure as code, Terraform ensures that environments are consistent across development, staging, and production.
Version Control & Collaboration:
Terraform configurations can be stored in Git or other version control systems. Teams can collaborate, review changes, and roll back configurations if needed.
Multi-Cloud Flexibility:
Terraform supports a wide range of providers, making it easy to manage resources across multiple clouds using a single tool.
Automation:
Terraform automates provisioning, updates, and deletions, reducing human error and operational overhead.
Audit & Compliance:
The declarative approach and state management enable auditability. Organizations can track who made changes and ensure compliance with policies.
Cloud Infrastructure Provisioning:
Terraform is widely used to create, manage, and scale cloud resources such as virtual machines, storage buckets, and databases.
Hybrid & Multi-Cloud Management:
Organizations using multiple cloud providers can manage resources from a single configuration, simplifying operations and reducing complexity.
Continuous Integration/Continuous Deployment (CI/CD):
Terraform integrates with CI/CD pipelines to automatically provision infrastructure during application deployment, ensuring seamless development workflows.
Disaster Recovery & Scaling:
Terraform enables quick replication of infrastructure in different regions, aiding disaster recovery planning and scaling operations.
Infrastructure Standardization:
Organizations can create reusable Terraform modules that enforce best practices, security standards, and compliance rules across teams.
Use Remote State:
Store the state file in a remote backend with locking to prevent conflicts when multiple team members work on the same infrastructure.
Modularize Configurations:
Use modules to organize code logically. This enhances maintainability and promotes reuse across projects.
Version Control Everything:
Keep all Terraform code, including modules and variable files, under version control to track changes and support collaboration.
Plan Before Apply:
Always review terraform plan output to avoid accidental changes and ensure that modifications are intentional.
Protect Sensitive Data:
Avoid hardcoding secrets in Terraform files. Use environment variables, secret managers, or Terraform Vault integration.
Implement CI/CD Pipelines:
Integrate Terraform with CI/CD tools like Jenkins, GitHub Actions, or GitLab CI to automate provisioning and updates safely.
Adopt Immutable Infrastructure:
Prefer replacing resources rather than modifying them in-place to maintain consistency and reduce configuration drift.
Answer:
Terraform is an open-source Infrastructure as Code (IaC) tool developed by HashiCorp. It allows you to define and provision infrastructure using a high-level configuration language called HCL (HashiCorp Configuration Language). Terraform can manage resources across multiple cloud providers like AWS, Azure, Google Cloud, and even on-premises environments.
Answer:
Infrastructure as Code (IaC) is the practice of managing and provisioning computing infrastructure using machine-readable configuration files, rather than manual processes. IaC enables automation, consistency, and version control for infrastructure, reducing human error.
Answer:
Declarative Language: You define the desired state of infrastructure.
Multi-Cloud Support: Works with AWS, Azure, GCP, and others.
Immutable Infrastructure: Updates are made by creating new resources instead of modifying existing ones.
Dependency Graph: Terraform automatically understands resource dependencies.
Execution Plans: Shows what changes will occur before applying.
State Management: Keeps track of resources it manages.
Answer:
HCL (HashiCorp Configuration Language) is a human-readable configuration language used in Terraform. It is declarative and allows you to define infrastructure resources, variables, and outputs in a structured way.
Answer:
The Terraform workflow consists of 4 main steps:
Write: Define resources in .tf files using HCL.
Plan: Run terraform plan to see what changes will be applied.
Apply: Run terraform apply to create or modify resources.
Destroy: Run terraform destroy to remove all resources managed by Terraform.
Answer:
Providers are plugins that allow Terraform to interact with cloud platforms or services. Each provider exposes resources that can be managed. For example, the AWS provider lets you manage EC2 instances, S3 buckets, and IAM users.
Answer:
A resource is the fundamental building block in Terraform. It represents a piece of infrastructure, such as a virtual machine, storage account, or database instance. Example:
resource "aws_instance" "my_ec2" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}
Answer:
A module is a container for multiple resources that are used together. Modules help organize code, reuse configurations, and reduce duplication. You can use built-in modules, custom modules, or public modules from Terraform Registry.
Answer:
Terraform state is a file (terraform.tfstate) that keeps track of resources created by Terraform. It stores metadata, dependencies, and resource IDs. State is crucial for managing updates and deletes.
Local state: Stored on the local machine.
Remote state: Stored in a remote backend like S3, Azure Blob, or Terraform Cloud.
Answer:
Variables allow dynamic and reusable Terraform configurations. They are defined in .tf files and can have default values or be passed at runtime.
Example:
variable "region" {
  description = "AWS region"
  default     = "us-east-1"
}
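Values for such a variable can be supplied at runtime in several ways; for example (variable name taken from the snippet above):

```shell
# Command-line flag
terraform apply -var="region=us-west-2"

# Variable definitions file
terraform apply -var-file="prod.tfvars"

# Environment variable, using the TF_VAR_<name> convention
export TF_VAR_region=us-west-2
terraform apply
```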
Answer:
Outputs allow you to display useful information after Terraform applies the configuration, such as instance IP addresses or resource IDs.
output "instance_ip" {
  value = aws_instance.my_ec2.public_ip
}
Answer:
A backend in Terraform determines where Terraform stores state and how operations are performed.
Types of backends:
Local: Default backend, stores state locally.
Remote: Stores state on remote storage like AWS S3, GCP Cloud Storage, or Terraform Cloud, enabling team collaboration.
Answer:
terraform plan is a command that shows the execution plan. It previews the changes Terraform will make to reach the desired state, without actually applying them. It helps avoid unintended changes.
What is terraform apply?
Answer:
terraform apply executes the actions proposed in the plan, creating, updating, or deleting resources to match the configuration.
What is terraform destroy?
Answer:
terraform destroy removes all resources defined in the Terraform configuration. This is useful for cleaning up environments after testing or decommissioning infrastructure.
Answer:
Terraform builds a dependency graph by analyzing resource relationships. It ensures resources are created or destroyed in the correct order based on dependencies.
Answer:
Yes. Terraform supports multi-cloud deployments by configuring multiple providers in a single configuration.
Example:
provider "aws" {
  region = "us-east-1"
}

provider "google" {
  project = "my-gcp-project"
  region  = "us-central1"
}
Answer:
Workspaces allow multiple state files for the same configuration. Each workspace can represent a different environment (like dev, staging, prod).
Default workspace: default
Create new workspace: terraform workspace new staging
| Terraform | Ansible |
|---|---|
| Infrastructure provisioning | Configuration management |
| Declarative | Procedural / Declarative |
| Tracks state | Does not track state |
| Handles cloud resources | Mostly manages software/config |
Answer:
Sensitive data like passwords or API keys can be handled using:
sensitive = true in variables.
Environment variables.
Remote state backends with encryption.
Secrets management tools (Vault, AWS Secrets Manager).
Answer:
terraform import allows Terraform to take control of existing resources that were not created by Terraform. It updates the state file without changing the resource configuration.
Example:
terraform import aws_instance.my_ec2 i-1234567890abcdef0
Answer:
Use Git or any version control system to manage .tf files.
Use modules and Terraform Registry for reusable and versioned configurations.
Specify provider versions in configuration:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}
Answer:
Provisioners allow executing scripts on resources after creation. They are mainly used for bootstrapping but are not recommended for long-term configuration.
Example:
provisioner "remote-exec" {
  inline = ["sudo apt-get update"]
}
Difference between taint and destroy in Terraform.
Taint: Marks a resource for recreation in the next terraform apply.
Destroy: Removes the resource entirely from the infrastructure.
Answer:
Use terraform plan to preview changes.
Use terraform show to inspect state.
Use TF_LOG=DEBUG terraform apply for verbose logs.
Check the .terraform directory for provider info and cache.
| Terraform | CloudFormation |
|---|---|
| Multi-cloud support | AWS only |
| Open-source | AWS proprietary |
| Supports modules | Uses stacks/templates |
| State management via file | Managed by AWS |
Answer:
Use terraform init -upgrade to upgrade providers.
Test configuration on a separate workspace.
Review the upgrade guide for breaking changes.
Answer:
Automates infrastructure provisioning.
Supports CI/CD pipelines (Jenkins, GitHub Actions).
Ensures consistency across environments.
Reduces manual errors and speeds up deployments.
Answer:
Yes, using providers like VMware vSphere, OpenStack, or Microsoft Hyper-V. Terraform treats on-premises resources similarly to cloud resources, using providers.
Use remote state for teams.
Keep configurations modular.
Avoid hardcoding sensitive values.
Lock provider versions.
Always review terraform plan before apply.
Answer:
.tf: Main configuration files where you define resources, variables, and outputs.
.tfvars: Files used to pass variable values.
.tfstate: State file that keeps track of created resources.
.terraform.lock.hcl: Locks provider versions.
Difference between terraform refresh and terraform apply.
terraform refresh: Updates the state file to match the real infrastructure without changing resources.
terraform apply: Creates, updates, or deletes resources to match the configuration.
What is terraform fmt?
Answer:
terraform fmt formats Terraform code according to the standard style, making it readable and consistent.
What is terraform validate?
Answer:
terraform validate checks the syntax and structure of your Terraform files without applying changes. It ensures the configuration is valid.
Answer:
Encrypt state at rest (for example, in S3 with KMS).
Use access controls to restrict backend access.
Avoid storing secrets directly in .tf files; use environment variables or Vault integration.
Answer:
Drift occurs when resources change outside Terraform.
Detect drift using terraform plan.
Correct it by updating configuration or applying Terraform changes.
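Drift detection can be automated in a pipeline using plan's -detailed-exitcode flag, which makes terraform plan exit with status 2 when changes are pending. A minimal shell sketch:

```shell
# Exit codes with -detailed-exitcode:
#   0 = no changes, 1 = error, 2 = drift/changes detected
terraform plan -detailed-exitcode -out=tfplan
case $? in
  0) echo "No drift detected" ;;
  2) echo "Drift detected - review tfplan" ;;
  *) echo "Plan failed" ;;
esac
```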
Difference between count and for_each?
count: Creates multiple copies of a resource by index.
for_each: Creates resources from a map or set with keys, allowing unique identification.
Example:
resource "aws_instance" "example" {
  for_each      = var.instance_names
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
  tags          = { Name = each.key }
}
Difference between local-exec and remote-exec provisioners?
local-exec: Executes a command on the machine where Terraform runs.
remote-exec: Executes commands on the created resource via SSH or WinRM.
Answer:
Specify versions in the terraform block to avoid breaking changes:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.5"
    }
  }
}
Answer:
Data sources allow Terraform to read existing resources without managing them. Example:
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}
Answer:
Create a directory with .tf files containing resources.
Use variables.tf for inputs and outputs.tf for outputs.
Call the module in main configuration:
module "vpc" {
  source = "./modules/vpc"
  name   = "my-vpc"
}
Answer:
Meta-arguments modify resource behavior. Examples:
depends_on – specify explicit dependencies
count – create multiple instances
for_each – iterate over collections
provider – override provider configuration
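A minimal sketch combining two of these meta-arguments (resource names and the AMI are illustrative):

```hcl
resource "aws_s3_bucket" "logs" {
  bucket = "my-log-bucket"
}

resource "aws_instance" "web" {
  count         = 2
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  # Explicit ordering: wait for the bucket even though
  # no attribute of it is referenced here
  depends_on = [aws_s3_bucket.logs]
}
```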
Answer:
Use workspaces (dev, staging, prod).
Use environment-specific variable files (dev.tfvars, prod.tfvars).
Use modules for reusable code.
Example:
terraform workspace new staging
terraform apply -var-file="staging.tfvars"
Difference between terraform import and terraform apply?
terraform import: Adds existing infrastructure to Terraform state.
terraform apply: Creates or modifies resources based on configuration.
Explain terraform graph.
Answer:
terraform graph generates a visual representation of the dependency graph, which is useful for understanding resource relationships.
terraform graph | dot -Tpng > graph.png
Answer:
Terraform does not automatically rollback, but it stops at the failed step. You can fix the configuration and rerun terraform apply. For full rollback, you may use versioned state or backup state files.
Difference between plan and apply -auto-approve?
terraform plan: Shows changes without applying.
terraform apply -auto-approve: Applies changes without asking for confirmation.
Answer:
Locking prevents multiple users from modifying the state simultaneously. Supported by remote backends like S3 with DynamoDB.
Answer:
Use remote state backends with locking.
Split infrastructure into modules.
Keep resources logically grouped to reduce conflicts.
Answer:
Use terraform plan to preview changes.
Use terraform validate for syntax check.
Use tools like Terratest (Go-based) or Kitchen-Terraform for automated tests.
Answer:
Lifecycle blocks control resource behavior. Common options:
create_before_destroy – creates new resource before destroying old one
prevent_destroy – prevents accidental deletion
ignore_changes – ignores certain attribute changes
resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  lifecycle {
    prevent_destroy = true
  }
}
Answer:
Not using version control for .tf files.
Hardcoding sensitive data.
Forgetting to backup state files.
Not using terraform plan before apply.
Mixing multiple environments in a single state file.
Answer:
Yes, using the Docker provider, you can create images, containers, networks, and volumes using Terraform.
Difference between terraform output and terraform show?
terraform output: Shows user-defined outputs.
terraform show: Displays all resource states and attributes in the state file.
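For example (output name taken from the earlier instance_ip snippet):

```shell
# Print a single user-defined output, handy in scripts
terraform output instance_ip

# Machine-readable form for pipelines
terraform output -json

# Human-readable dump of the full state
terraform show
```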
| Terraform | Pulumi |
|---|---|
| Uses HCL | Uses programming languages (Python, JS, Go) |
| Declarative | Imperative / Declarative |
| State management via file | Uses cloud or local state |
| Mature ecosystem | Modern, flexible |
Answer:
I have worked on designing and managing Terraform modules for AWS, Azure, and GCP. I implemented infrastructure automation for multi-environment deployments, including CI/CD pipelines, remote state management using S3 and Terraform Cloud, and automated testing with Terratest. I also optimized state management for large teams and implemented best practices like version pinning, workspaces, and modularization.
Answer:
Root module: Main configuration calling other modules.
Environment modules: Dev, staging, prod separate modules.
Resource modules: Individual modules for VPC, networking, compute, database.
Inputs/Outputs: Proper variable definition for reuse.
Example structure:
terraform/
├── modules/
│   ├── vpc/
│   ├── ec2/
│   └── rds/
├── env/
│   ├── dev/
│   ├── staging/
│   └── prod/
├── main.tf
├── variables.tf
└── outputs.tf
Answer:
Use remote backends (S3, Azure Blob, GCS, Terraform Cloud) with state locking using DynamoDB or equivalent.
Separate state files per environment using workspaces or directories.
Enable versioning for rollback capability.
Example:
terraform {
  backend "s3" {
    bucket         = "terraform-state-prod"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
Answer:
Workspaces allow multiple states for the same configuration. I use them for environment separation (dev, staging, prod) while keeping a single codebase. Commands:
terraform workspace new staging
terraform workspace select dev
terraform workspace show
Workspaces simplify CI/CD pipeline integration and reduce manual state file management.
Answer:
Use terraform plan regularly to detect drift.
Enable automation pipelines to ensure infrastructure aligns with Terraform configuration.
If drift occurs, apply the configuration to sync resources or manually import/adjust using terraform import.
Answer:
Never hardcode secrets in .tf files.
Use Terraform variables marked as sensitive = true.
Integrate with Vault, AWS Secrets Manager, or Azure Key Vault.
Encrypt remote state.
Example:
variable "db_password" {
  description = "Database password"
  type        = string
  sensitive   = true
}
Answer:
Use Jenkins, GitHub Actions, GitLab CI, or Azure DevOps.
Typical flow:
terraform fmt → code formatting check
terraform validate → configuration validation
terraform plan → generate execution plan
terraform apply → apply approved changes
Integrate workspaces and environment-specific variable files.
Use pull requests and automated plan approvals for production changes.
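The flow above can be expressed as a pipeline script; a hedged sketch (the variable file name and plan file are assumptions, and the apply step would normally run only after approval):

```shell
set -e  # stop on the first failing step

terraform fmt -check                # fail the build on unformatted code
terraform init -input=false         # non-interactive init for CI
terraform validate
terraform plan -input=false -var-file="staging.tfvars" -out=tfplan
terraform apply -input=false tfplan # apply only the reviewed plan
```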
How do you use count and for_each in complex modules?
Answer:
count: Creates multiple instances based on a number index. Suitable for identical resources.
for_each: Creates multiple instances from a map or set, allows unique identification. Preferred in production modules for flexibility.
resource "aws_instance" "app_server" {
  for_each      = var.servers
  ami           = each.value.ami
  instance_type = each.value.type
  tags          = { Name = each.key }
}
Answer:
Terraform automatically detects implicit dependencies via resource references.
For explicit dependency control, use depends_on meta-argument:
resource "aws_instance" "app" {
  # ...
  depends_on = [aws_security_group.sg]
}
Answer:
Use lifecycle blocks to control resource behavior:
prevent_destroy to avoid accidental deletion.
create_before_destroy for zero downtime updates.
ignore_changes to ignore certain attribute updates from external changes.
resource "aws_lb" "app_lb" {
  # ...
  lifecycle {
    create_before_destroy = true
  }
}
Answer:
Pin provider versions using required_providers block.
Use Terraform version constraints in terraform block.
Upgrade cautiously after testing in dev/staging.
terraform {
  required_version = ">= 1.5.0"
  required_providers {
    aws = { source = "hashicorp/aws", version = "~> 4.5" }
  }
}
Answer:
Store modules in a central repository or use Terraform Registry.
Version modules with tags for stability.
Example:
module "vpc" {
  source = "git::https://github.com/org/terraform-modules.git//vpc?ref=v1.2"
  name   = "prod-vpc"
  cidr   = "10.0.0.0/16"
}
Answer:
Use count or for_each with conditions.
Example:
resource "aws_instance" "optional_server" {
  count         = var.create_server ? 1 : 0
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}
Explain the use of terraform import in production.
Answer:
Import existing resources into Terraform state to bring unmanaged infrastructure under IaC.
Helps migrate legacy resources or avoid downtime.
terraform import aws_instance.my_ec2 i-1234567890abcdef0
Answer:
Use terraform plan to understand intended changes.
Enable debug logs: TF_LOG=DEBUG terraform apply.
Check state files for inconsistencies.
Validate provider credentials and API limits.
Review Terraform module version compatibility.
Answer:
Configure multiple providers in a single configuration.
Isolate environment/state files per cloud.
Example:
provider "aws" {
region = "us-east-1"
}
provider "google" {
project = "my-gcp-project"
region = "us-central1"
}
Use modules for shared architecture (VPC, IAM, network policies).
Answer:
Store state in remote backend with encryption (S3+KMS, Terraform Cloud).
Enable access control and IAM policies.
Version the state for rollback.
Avoid storing sensitive data directly; use sensitive = true variables.
Answer:
Split resources into logical modules with separate state files.
Use remote backend with locking to prevent concurrent edits.
Implement CI/CD pipelines to control changes.
Answer:
Terratest is a Go library to automate testing of Terraform modules.
Validates resources are created as expected.
Integration testing ensures Terraform modules work across environments.
Example: Check EC2 instance exists or verify S3 bucket configuration.
Answer:
Test upgrades in dev/staging.
Review release notes for breaking changes.
Upgrade providers with terraform init -upgrade.
Apply changes incrementally with terraform plan approval.
Answer:
Use terraform state mv to move resources within the state.
Ensures Terraform treats the renamed resource as the same resource.
Avoids recreation when refactoring modules or changing resource names.
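For example, renaming a resource or moving it into a module without recreating it (the addresses are illustrative):

```shell
# Rename a resource in state after renaming it in configuration
terraform state mv aws_instance.web aws_instance.app_server

# Move a resource into a module
terraform state mv aws_instance.app_server module.compute.aws_instance.app_server
```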
Answer:
Use remote state with encryption and locking.
Modularize code and reuse via modules.
Version providers and Terraform carefully.
Implement CI/CD for automated validation and deployment.
Maintain proper documentation for modules.
Regularly backup state files.
Monitor drift and enforce Terraform as the source of truth.
Answer:
Mark a resource for recreation using terraform taint.
Example:
terraform taint aws_instance.my_ec2
terraform apply
Useful when a resource is corrupted or misconfigured and requires replacement. (Note that newer Terraform versions recommend terraform apply -replace=ADDRESS over taint.)
Answer:
Terraform creates infrastructure, then tools like Ansible, Chef, or Puppet configure the resources.
Example workflow: Terraform provisions EC2 → Ansible configures application stack → CI/CD deploys application.
Answer:
Use Git tags or releases for each module version.
Reference module versions in projects using ref in source URL.
Ensure backward compatibility to avoid breaking deployments.
Answer:
Integrate Terraform with CloudWatch, Prometheus, Datadog for monitoring.
Tag resources for easy identification.
Maintain drift detection scripts using Terraform plan.
Answer:
Yes, Terraform encourages immutable infrastructure by creating new resources rather than modifying live ones.
Achieved with create_before_destroy, blue/green deployments, and module refactoring.
Reduces downtime and ensures stable deployments.
Answer:
Use retry blocks in providers if supported.
Split resource creation into smaller batches.
Handle API rate limits in CI/CD pipeline with delays or throttling.
Answer:
Explicit depends_on for cross-provider dependencies.
Use outputs from one provider as inputs to another.
Maintain modular and decoupled architecture.
Answer:
Deploy modules in staging or dev environments first.
Use automated tests (Terratest or kitchen-terraform).
Validate compliance and configuration standards.
Answer:
Use separate provider configurations for each region.
Use modules that are region-agnostic.
Maintain separate state files or workspaces per region.
Example:
provider "aws" {
  alias  = "us_east"
  region = "us-east-1"
}

provider "aws" {
  alias  = "us_west"
  region = "us-west-2"
}

module "vpc_east" {
  source    = "./modules/vpc"
  providers = { aws = aws.us_east }
}

module "vpc_west" {
  source    = "./modules/vpc"
  providers = { aws = aws.us_west }
}
Answer:
Use create_before_destroy lifecycle.
Implement blue/green deployments.
Use load balancers to switch traffic after replacement.
resource "aws_instance" "app" {
  lifecycle {
    create_before_destroy = true
  }
}
Answer:
Remote state with locking (S3 + DynamoDB).
Enforce branch strategy and pull request approvals for Terraform code.
Use CI/CD pipelines for plan/apply.
Module versioning and shared registry for consistency.
Answer:
Configure multiple providers for cloud and on-premises resources.
Use modular architecture for decoupling cloud-specific resources.
Maintain separate state files or workspaces per environment.
Example:
provider "aws" {
  region = "us-east-1"
}

provider "vsphere" {
  user           = var.vsphere_user
  password       = var.vsphere_password
  vsphere_server = var.vcenter
}
Answer:
Keep versioned remote states.
Use state backups (the automatic terraform.tfstate.backup file, or snapshots taken with terraform state pull) to revert to a previous state.
Reapply the previous configuration version.
Automate rollback in CI/CD pipelines when a deployment fails.
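State snapshots can be scripted with terraform state pull/push; a hedged sketch (file names are illustrative, and pushing old state should be a last resort):

```shell
# Snapshot the current remote state before a risky change
terraform state pull > backup-$(date +%F).tfstate

# If the change goes wrong, restore the snapshot (use with extreme care;
# the pushed state must still match real infrastructure)
terraform state push backup.tfstate
```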
Answer:
Unit testing with terraform validate and terraform fmt.
Integration testing with Terratest or Kitchen-Terraform.
Deploy in staging environments with CI/CD to validate resources.
Test inputs, outputs, and dependencies of modules.
Answer:
Split resources into logical modules.
Use the -target flag to apply specific resources.
Use terraform graph to visualize dependencies.
Avoid unnecessary changes by ignoring drift-prone attributes.
Enable parallelism with -parallelism=N.
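These flags look like the following in practice (module and resource addresses are illustrative):

```shell
# Apply only one module's resources (use sparingly; it can hide drift elsewhere)
terraform apply -target=module.vpc

# Raise concurrent operations above the default of 10
terraform apply -parallelism=30

# Skip the refresh step when state is known to be current
terraform plan -refresh=false
```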
Answer:
Terraform provisions the infrastructure.
Tools like Ansible, Chef, or Puppet configure applications on provisioned resources.
Example workflow: Terraform creates EC2 → Ansible configures software → CI/CD deploys application.
Answer:
Import allows managing existing infrastructure.
Example: Importing a subnet and assigning it to a new module:
terraform import module.network.aws_subnet.subnet1 subnet-0abc123
After import, update the configuration to match imported resources.
Answer:
Use retry blocks in provider configuration.
Split resource creation into smaller batches.
Implement rate-limiting in CI/CD scripts.
Monitor provider quotas and adjust Terraform runs accordingly.
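The AWS provider, for instance, exposes a max_retries setting that retries throttled API calls with backoff before failing; a minimal sketch:

```hcl
provider "aws" {
  region      = "us-east-1"
  max_retries = 10 # retry throttled/transient API errors before giving up
}
```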
Answer:
Use outputs from one cloud as inputs for another.
Explicit depends_on for cross-cloud dependencies.
Maintain decoupled modules with minimal inter-cloud coupling.
Example:
output "vpc_id" {
  value = aws_vpc.main.id
}

module "gcp_network" {
  source          = "./modules/gcp_network"
  vpc_id_from_aws = module.aws_vpc.vpc_id
}
Answer:
Terraform is inherently declarative and idempotent.
Always run terraform plan to validate changes.
Avoid imperative scripts inside provisioners that modify infrastructure outside Terraform.
Answer:
Use for_each with maps or sets to dynamically create resources.
Use dynamic blocks for nested resource attributes.
Example:
dynamic "ingress" {
  for_each = var.ingress_rules
  content {
    from_port   = ingress.value.from
    to_port     = ingress.value.to
    protocol    = ingress.value.protocol
    cidr_blocks = ingress.value.cidr
  }
}
Answer:
Use remote backend with locking (S3 + DynamoDB).
Enforce CI/CD for Terraform apply to avoid manual conflicts.
Split state files for large modules.
Enable automatic versioning and backups.
Answer:
Use pre-commit hooks for formatting (terraform fmt).
Use terraform validate in pipelines.
Implement module testing with Terratest.
Code review with branch protection and pull requests.
Answer:
Use prevent_destroy to protect critical resources.
Use create_before_destroy for updates requiring replacement.
Ignore non-critical changes with ignore_changes.
| Feature | Terraform Cloud | Terraform Enterprise |
|---|---|---|
| Hosted solution | SaaS | Self-hosted |
| Collaboration | Yes | Yes |
| Remote state management | Yes | Yes |
| Policy enforcement (OPA) | Paid tier | Full access |
| Enterprise support | Limited | Full support |
Answer:
Use Sentinel in Terraform Enterprise or Open Policy Agent (OPA) for open-source.
Enforce policies like approved instance types, restricted regions, or tags.
Example: Prevent certain instance types:
import "tfplan/v2" as tfplan

main = rule {
  all tfplan.resource_changes as rc {
    rc.type is "aws_instance" implies rc.change.after.instance_type != "t2.micro"
  }
}
Answer:
Use environment variables for provider credentials.
Use Vault or cloud secret managers.
Avoid hardcoding secrets in .tf or .tfvars.
Use encrypted storage for .tfstate.
Answer:
Identify resources to manage.
Use terraform import to import resources into state.
Write Terraform configurations matching imported resources.
Apply changes carefully in a controlled environment.
Answer:
Test in dev/staging first.
Upgrade Terraform version and provider versions in a controlled workspace.
Backup remote state before applying.
Use terraform plan to check changes.
Answer:
Tag all resources with environment and ownership.
Integrate CloudWatch, Datadog, or Prometheus for monitoring.
Use Sentinel/OPA policies for compliance checks.
Automate drift detection and remediation scripts.
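Consistent tagging can be enforced centrally with the AWS provider's default_tags block (tag keys and values here are illustrative):

```hcl
provider "aws" {
  region = "us-east-1"

  # Applied to every taggable resource this provider creates
  default_tags {
    tags = {
      Environment = "prod"
      Owner       = "platform-team"
      ManagedBy   = "terraform"
    }
  }
}
```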
Explain terraform console.
Answer:
Interactive console to inspect state, variables, and expressions.
Useful for debugging and testing outputs.
terraform console
> var.region
> aws_instance.my_instance.id
Answer:
Use targeted applies (-target).
Parallelism (-parallelism=N) for large resource creation.
Modularize infrastructure for smaller state files.
Avoid unnecessary depends_on and implicit drift-prone attributes.
Answer:
Maintain versioned remote state files.
Automate infrastructure rebuilds using modules.
Use infrastructure snapshots or backups for databases.
Test recovery regularly in staging environments.
Answer:
Create highly reusable and parameterized modules.
Include outputs, variables, and defaults for flexibility.
Use internal module registry for organization-wide consistency.
Version control and enforce testing before production deployment.
Answer:
Use pull requests with peer review.
Enforce linting and formatting.
Validate plans automatically with CI/CD pipelines.
Restrict direct commits to production branches.
Answer:
Define multiple providers with aliases.
Pass specific providers to modules using providers argument.
Use outputs from one provider to feed resources for another.
Answer:
Integrate Terraform with Vault for dynamic secrets.
Use provider or null resource with remote-exec for secret updates.
Automate rotation in CI/CD pipelines.
Answer:
Modularize all infrastructure.
Use versioned remote state.
Enforce CI/CD for all Terraform code.
Keep secrets and sensitive data secure.
Document modules and variables.
Automate drift detection and remediation.
Regularly test upgrades and disaster recovery.