Claude Code + Terraform: Safe AI-Assisted IaC with Guardrails

AI can write Terraform faster than any human, but one wrong apply can destroy production. Here is the guardrails framework we use at VVVHQ to get 70% faster module development with zero unintended production changes.

By VVVHQ Team

AI-assisted Infrastructure as Code is one of the most powerful productivity multipliers we have seen in DevOps. It is also one of the most dangerous. A single hallucinated terraform destroy or an overly permissive IAM policy generated by an AI can take down production in seconds. At VVVHQ, we use Claude Code extensively for Terraform work, but we have built a guardrails framework that lets us move fast without breaking things.

This is not a theoretical post. This is how we actually work, every day, across client engagements.

Why AI-Assisted IaC Is Powerful but Dangerous

Terraform is declarative, which makes it a natural fit for AI code generation. You describe what you want, the AI writes the HCL, and terraform plan shows you exactly what will change. The problem is the gap between plan and apply.

An AI that can run shell commands can also run terraform apply -auto-approve. One wrong resource configuration, one missing lifecycle block, one overlooked dependency, and you are staring at a destroyed RDS instance with no recent snapshot. We have seen it happen at organizations that gave AI tools unrestricted access to their IaC pipelines.

The solution is not to avoid AI. The solution is guardrails.

Our Guardrails Framework

We enforce four layers of protection between AI-generated Terraform and production infrastructure.

Plan-Only Mode

The most critical rule: AI generates and executes terraform plan, but never terraform apply without explicit human approval. We enforce this through Claude Code hooks that intercept shell commands.

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "check-terraform-command.sh"
          }
        ]
      }
    ]
  }
}

The check-terraform-command.sh script inspects every shell command before execution. Any command matching terraform apply, terraform destroy, or other state-modifying operations is denied before it runs. The AI can plan all day long, but it cannot apply anything.
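We will not publish our exact production script, but the core check logic can be sketched in a few lines. The deny list below is illustrative; Claude Code delivers the PreToolUse payload as JSON on stdin, and in the real hook a small wrapper extracts the shell command (e.g. with jq) before calling this check:

```shell
#!/usr/bin/env bash
# check-terraform-command.sh (minimal sketch, not our exact production script).
# In the real hook, the wrapper extracts the shell command from the PreToolUse
# JSON payload on stdin, e.g.: jq -r '.tool_input.command'

# State-modifying terraform subcommands the AI is never allowed to execute.
DENY_REGEX='terraform[[:space:]]+(apply|destroy|taint|untaint|import|state)'

check_terraform_command() {
  local cmd="$1"
  if [[ "$cmd" =~ $DENY_REGEX ]]; then
    echo "DENY: state-modifying terraform command blocked: $cmd" >&2
    return 2   # a blocking exit status tells Claude Code to deny the tool call
  fi
  return 0     # anything else (plan, validate, fmt, ...) may proceed
}
```

Because the regex is unanchored, it also catches compound commands like `cd infra && terraform destroy`.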

Deny Rules via .claude/rules/

We encode IaC safety policies directly in the rules files that Claude Code reads on every session start.

## Terraform Safety Rules
  • NEVER run terraform apply, destroy, taint, untaint, or state commands
  • NEVER generate IAM policies with broad Action or Resource wildcards
  • NEVER open security groups to 0.0.0.0/0 on any port
  • NEVER modify terraform state files directly
  • NEVER remove lifecycle prevent_destroy blocks
  • ALWAYS include lifecycle prevent_destroy on stateful resources
  • ALWAYS run terraform validate and terraform plan before presenting changes
  • ALWAYS use variables instead of hardcoded values for environments

These rules act as persistent context. Every time Claude Code works on Terraform in our repos, it operates within these constraints.
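The prevent_destroy rule, for example, looks like this on a stateful resource (the resource and identifier names below are illustrative):

```hcl
# Stateful resources get an explicit destruction guard. With this block in
# place, any plan that would destroy the database fails at plan time.
resource "aws_db_instance" "primary" {
  identifier        = "app-primary"    # illustrative name
  engine            = "postgres"
  instance_class    = "db.t3.medium"
  allocated_storage = 100

  lifecycle {
    prevent_destroy = true
  }
}
```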

Workspace Isolation

AI operates exclusively in dev and staging workspaces. Production workspaces require a separate CI/CD pipeline with manual approval gates.

# backend.tf
terraform {
  backend "s3" {
    bucket         = "vvvhq-terraform-state"
    key            = "infra/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
  }
}

The AI only has credentials for dev/staging workspaces; production credentials live exclusively in the CI/CD pipeline.

The AI literally cannot touch production because it does not have the credentials. Simple, effective, hard to circumvent.
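The credential boundary can be reinforced inside the configuration itself by pinning the provider to the non-production account, so even a misloaded credential fails fast (the account ID below is a placeholder):

```hcl
# provider.tf -- refuse to run against any account other than dev/staging.
# With the wrong credentials loaded, plan and apply both fail immediately.
provider "aws" {
  region              = "us-east-1"
  allowed_account_ids = ["111111111111"]   # placeholder dev/staging account ID
}
```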

Diff Review

Every AI-generated plan is reviewed by a human before apply. We use a pull request workflow where the AI commits its changes to a feature branch, the plan output is posted as a PR comment, and a human reviews both the code diff and the plan output before merging. The merge triggers terraform apply through our CI/CD pipeline, not through the AI.
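In GitHub Actions terms, that split between plan-on-PR and apply-on-merge looks roughly like this (workflow, branch, and environment names are ours, not a prescription):

```yaml
# .github/workflows/terraform.yml (sketch; triggers and names are illustrative)
name: terraform
on:
  pull_request:        # PRs get a plan only
  push:
    branches: [main]   # merges to main trigger the apply

jobs:
  plan:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init -input=false
      - run: terraform plan -no-color -input=false
        # plan output can then be posted to the PR as a comment

  apply:
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    environment: production   # manual approval gate lives here
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init -input=false
      - run: terraform apply -input=false -auto-approve
```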

Automated Security Gates

Beyond Claude Code hooks, we run automated security scanning as pre-commit hooks and CI checks.

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.96.1 # pre-commit requires a pinned tag; use the latest release
    hooks:
      - id: terraform_validate
      - id: terraform_fmt
      - id: terraform_tfsec
      - id: terraform_checkov

Every commit, whether AI-generated or human-written, passes through terraform validate, tfsec, and checkov. These tools catch common misconfigurations: public S3 buckets, unencrypted volumes, overly permissive security groups. The AI cannot bypass these gates.

What AI Excels At in Terraform

With guardrails in place, AI becomes remarkably productive for specific Terraform tasks.

Generating boilerplate modules. Describe what you need in plain language, and Claude Code generates a complete module with variables, outputs, documentation, and sensible defaults. A module that would take 45 minutes to write from scratch takes 5 minutes to generate and 10 minutes to review.

Refactoring existing configs. Point the AI at a Terraform file full of hardcoded values and ask it to extract variables. It handles the mechanical work of creating variables.tf, updating references, and generating terraform.tfvars.example files.
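A typical before/after for that extraction (the resource and variable names are illustrative):

```hcl
# Before, in main.tf:
#   instance_type = "t3.large"

# After: variables.tf carries the value with documentation and a default
variable "instance_type" {
  description = "EC2 instance type for the web tier"
  type        = string
  default     = "t3.large"
}

# main.tf now references the variable instead of the literal
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"   # illustrative AMI ID
  instance_type = var.instance_type
}
```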

Writing documentation. AI-generated README files for Terraform modules are consistently good. It reads the variables, outputs, and resources, and produces clear documentation with usage examples.

Suggesting security improvements. Ask Claude Code to review a module for security issues and it will flag missing encryption settings, overly broad CIDR blocks, absent logging configurations, and missing tags.

Debugging plan errors. When terraform plan fails with a cryptic error, the AI is excellent at diagnosing the issue: dependency cycles, provider version conflicts, state drift.

What AI Should NOT Do Alone

Even with guardrails, some operations should remain exclusively human.

  • Apply changes to production. Always through CI/CD with manual approval.
  • Modify state files. Operations like state mv, state rm, and import are too risky for AI execution. State corruption can be catastrophic.
  • Create IAM policies with broad permissions. AI tends to over-permission for convenience. Every IAM policy should be least-privilege and human-reviewed.
  • Delete or taint resources. Resource destruction requires understanding of dependencies that AI may not fully grasp from code alone.
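The IAM point deserves an illustration. A least-privilege policy scopes both actions and resources instead of reaching for wildcards (the policy name and bucket ARN below are placeholders):

```hcl
# Least-privilege: one action, one resource -- not Action = "s3:*" on "*".
data "aws_iam_policy_document" "uploads_read" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::example-uploads/*"]   # placeholder bucket ARN
  }
}

resource "aws_iam_policy" "uploads_read" {
  name   = "uploads-read"
  policy = data.aws_iam_policy_document.uploads_read.json
}
```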

Real Results

We have been running this framework across multiple client engagements for six months. The numbers speak for themselves.

  • 70% faster module development. From requirements to reviewed, tested module.
  • Zero unintended production changes. Not one. The guardrails have caught multiple potentially destructive operations.
  • 40% reduction in Terraform-related PR review cycles. AI-generated code is consistently well-formatted and follows our conventions, reducing back-and-forth.
  • 100% of AI-generated plans reviewed by humans. No exceptions.

The Trust-but-Verify Principle

Our philosophy is simple: AI proposes, human disposes. Claude Code is an incredibly capable pair programmer for infrastructure work. It generates code faster than any human, it does not get tired, and it does not forget to add tags or documentation. But it also does not understand the blast radius of a misconfigured security group or the business impact of five minutes of downtime.

The guardrails framework is not about limiting AI. It is about using AI responsibly in a domain where mistakes are measured in downtime, data loss, and security breaches. With the right constraints, AI-assisted IaC is not just safe, it is a genuine competitive advantage.

If you are running Terraform at scale and want to adopt AI-assisted workflows safely, reach out to our team. We will help you build guardrails that fit your infrastructure and risk tolerance.

Tags: claude code, terraform, ai assisted iac, infrastructure as code, ai guardrails