Infrastructure as Code with Terraform

Category: DevOps · October 8, 2025 · 15 min read · TekTık Yazılım DevOps Team
Tags: #Terraform #IaC #DevOps #Cloud

Codifying infrastructure management with Terraform, with best-practice examples: multi-cloud IaC strategies, state management, and automated infrastructure provisioning.

Infrastructure as Code (IaC) is the foundation of modern cloud infrastructure management. In this comprehensive guide you will learn how to define your infrastructure as code with Terraform and achieve versionable, repeatable, and scalable infrastructure management.

Table of Contents

  1. IaC Fundamentals and Introduction to Terraform
  2. Terraform Installation and Configuration
  3. State Management Strategies
  4. Modular Infrastructure Design
  5. Multi-Cloud and Multi-Environment
  6. Security and Best Practices
  7. CI/CD Pipeline Integration
  8. Advanced Terraform Patterns

IaC Fundamentals and Introduction to Terraform {#iac-temelleri}

Advantages of Infrastructure as Code

Consistency

  • Recreate the same infrastructure over and over, identically
  • Consistency across environments
  • Minimized human error

Version Control

  • Tracking every infrastructure change
  • Applying Git workflows to infrastructure
  • Rollback and change tracking

Automation

  • Automating manual processes
  • CI/CD pipeline integration
  • Self-service infrastructure provisioning

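In practice, the manual console workflow collapses into a short, reviewable definition plus a three-command cycle. A minimal, illustrative sketch (the region and bucket name are placeholders, not part of this article's stack):

hcl
# quickstart.tf -- minimal illustrative example
provider "aws" {
  region = "eu-west-1" # placeholder region
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "example-artifacts-bucket-12345" # S3 bucket names must be globally unique
}

# Typical workflow, run from this directory:
#   terraform init    # download providers and configure the backend
#   terraform plan    # preview changes before touching anything
#   terraform apply   # create or update the infrastructure
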
Terraform Core Concepts

hcl
# main.tf
terraform {
  required_version = ">= 1.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.20"
    }
  }
}

# Provider configurations
provider "aws" {
  region = var.aws_region
  
  default_tags {
    tags = {
      Environment = var.environment
      Project     = var.project_name
      ManagedBy   = "Terraform"
      Owner       = var.team_name
    }
  }
}

# Data sources
data "aws_availability_zones" "available" {
  state = "available"
}

data "aws_caller_identity" "current" {}

Terraform Installation and Configuration {#terraform-kurulum}

Project Structure Best Practices

text
terraform/
├── environments/
│   ├── dev/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── terraform.tfvars
│   │   └── outputs.tf
│   ├── staging/
│   │   └── ...
│   └── production/
│       └── ...
├── modules/
│   ├── vpc/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── eks/
│   │   └── ...
│   └── rds/
│       └── ...
├── shared/
│   ├── locals.tf
│   └── data.tf
└── scripts/
    ├── plan.sh
    └── apply.sh

Variables and Locals Management

hcl
# variables.tf
variable "environment" {
  description = "Environment name"
  type        = string
  validation {
    condition     = contains(["dev", "staging", "production"], var.environment)
    error_message = "Environment must be dev, staging, or production."
  }
}

variable "vpc_cidr" {
  description = "CIDR block for VPC"
  type        = string
  default     = "10.0.0.0/16"
  
  validation {
    condition     = can(cidrhost(var.vpc_cidr, 0))
    error_message = "VPC CIDR must be a valid IPv4 CIDR block."
  }
}

variable "instance_types" {
  description = "Map of instance types by environment"
  type        = map(string)
  default = {
    dev        = "t3.micro"
    staging    = "t3.small"
    production = "t3.medium"
  }
}

# locals.tf
locals {
  name_prefix = "${var.project_name}-${var.environment}"
  
  common_tags = {
    Environment = var.environment
    Project     = var.project_name
    ManagedBy   = "Terraform"
    # NOTE: timestamp() is re-evaluated on every run, so this tag causes a plan diff whenever the date changes
    CreatedAt   = formatdate("YYYY-MM-DD", timestamp())
  }
  
  # Calculate subnets
  public_subnet_cidrs = [
    for i in range(var.subnet_count) :
    cidrsubnet(var.vpc_cidr, 8, i)
  ]
  
  private_subnet_cidrs = [
    for i in range(var.subnet_count) :
    cidrsubnet(var.vpc_cidr, 8, i + var.subnet_count)
  ]
}

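To make the cidrsubnet() arithmetic concrete: with the default vpc_cidr of 10.0.0.0/16 and, for illustration, subnet_count = 3 (that variable's declaration is not shown above), the expressions yield consecutive /24 blocks:

hcl
# Assuming vpc_cidr = "10.0.0.0/16" and subnet_count = 3 (illustrative values):
#   cidrsubnet("10.0.0.0/16", 8, 0) = "10.0.0.0/24"   # first public subnet
#   cidrsubnet("10.0.0.0/16", 8, 3) = "10.0.3.0/24"   # first private subnet
# public_subnet_cidrs  = ["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"]
# private_subnet_cidrs = ["10.0.3.0/24", "10.0.4.0/24", "10.0.5.0/24"]
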
State Management Strategies {#state-management}

Remote State Configuration

hcl
# backend.tf
terraform {
  backend "s3" {
    bucket         = "company-terraform-state"
    key            = "environments/production/terraform.tfstate"
    region         = "us-west-2"
    encrypt        = true
    dynamodb_table = "terraform-state-lock"
    
    # Role assumption for cross-account access
    role_arn = "arn:aws:iam::123456789012:role/TerraformStateRole"
  }
}

# State bucket module
module "terraform_state_backend" {
  source = "./modules/terraform-backend"
  
  state_bucket_name      = "company-terraform-state"
  dynamodb_table_name    = "terraform-state-lock"
  enable_versioning      = true
  enable_encryption      = true
  enable_mfa_delete      = var.environment == "production"
  
  allowed_account_ids = [
    "123456789012", # Production
    "123456789013", # Staging
    "123456789014", # Development
  ]
}

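The terraform-backend module referenced above is not reproduced here; at its core it creates the versioned, encrypted state bucket and the DynamoDB lock table. A minimal sketch using the same input names (the exact resources are an assumption, not the article's module):

hcl
# modules/terraform-backend/main.tf (sketch)
resource "aws_s3_bucket" "state" {
  bucket = var.state_bucket_name
}

resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "state" {
  bucket = aws_s3_bucket.state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}

resource "aws_dynamodb_table" "lock" {
  name         = var.dynamodb_table_name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
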
State Import and Migration

hcl
# Import existing resources
resource "aws_instance" "existing_server" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t3.medium"
  
  tags = {
    Name = "imported-server"
  }
}

# Import command:
# terraform import aws_instance.existing_server i-1234567890abcdef0

bash
#!/bin/bash
# migrate-state.sh

# State migration script
terraform init

# Import existing resources
terraform import aws_vpc.main vpc-12345
terraform import aws_subnet.public_1 subnet-67890
terraform import aws_internet_gateway.main igw-abcdef

# Verify state
terraform plan

echo "State migration completed. Review the plan above."

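On Terraform 1.5 and newer, the same imports can also be written declaratively as import blocks, which terraform plan then carries out as part of the normal workflow:

hcl
# imports.tf -- declarative alternative to the CLI commands above (Terraform >= 1.5)
import {
  to = aws_instance.existing_server
  id = "i-1234567890abcdef0"
}

import {
  to = aws_vpc.main
  id = "vpc-12345"
}
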
Modular Infrastructure Design {#modular-design}

VPC Module

hcl
# modules/vpc/main.tf
resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = merge(var.tags, {
    Name = "${var.name_prefix}-vpc"
  })
}

resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = merge(var.tags, {
    Name = "${var.name_prefix}-igw"
  })
}

resource "aws_subnet" "public" {
  count = length(var.availability_zones)

  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_subnet_cidrs[count.index]
  availability_zone       = var.availability_zones[count.index]
  map_public_ip_on_launch = true

  tags = merge(var.tags, {
    Name = "${var.name_prefix}-public-${count.index + 1}"
    Type = "Public"
  })
}

resource "aws_subnet" "private" {
  count = length(var.availability_zones)

  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnet_cidrs[count.index]
  availability_zone = var.availability_zones[count.index]

  tags = merge(var.tags, {
    Name = "${var.name_prefix}-private-${count.index + 1}"
    Type = "Private"
  })
}

resource "aws_nat_gateway" "main" {
  count = var.enable_nat_gateway ? length(aws_subnet.public) : 0

  allocation_id = aws_eip.nat[count.index].id
  subnet_id     = aws_subnet.public[count.index].id

  tags = merge(var.tags, {
    Name = "${var.name_prefix}-nat-${count.index + 1}"
  })

  depends_on = [aws_internet_gateway.main]
}

resource "aws_eip" "nat" {
  count = var.enable_nat_gateway ? length(aws_subnet.public) : 0

  domain = "vpc"

  tags = merge(var.tags, {
    Name = "${var.name_prefix}-eip-nat-${count.index + 1}"
  })
}

# Route tables
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }

  tags = merge(var.tags, {
    Name = "${var.name_prefix}-public-rt"
  })
}

resource "aws_route_table" "private" {
  count = var.enable_nat_gateway ? length(aws_nat_gateway.main) : 0

  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.main[count.index].id
  }

  tags = merge(var.tags, {
    Name = "${var.name_prefix}-private-rt-${count.index + 1}"
  })
}

resource "aws_route_table_association" "public" {
  count = length(aws_subnet.public)

  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "private" {
  count = length(aws_subnet.private)

  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private[count.index].id
}

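The environment configurations later in this article reference module.vpc.vpc_id and the public/private subnet ID lists, so the module also needs an outputs.tf along these lines (a minimal sketch):

hcl
# modules/vpc/outputs.tf
output "vpc_id" {
  description = "ID of the VPC"
  value       = aws_vpc.main.id
}

output "public_subnet_ids" {
  description = "IDs of the public subnets"
  value       = aws_subnet.public[*].id
}

output "private_subnet_ids" {
  description = "IDs of the private subnets"
  value       = aws_subnet.private[*].id
}
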
EKS Module

hcl
# modules/eks/main.tf
resource "aws_eks_cluster" "main" {
  name     = var.cluster_name
  role_arn = aws_iam_role.cluster.arn
  version  = var.kubernetes_version

  vpc_config {
    subnet_ids              = var.subnet_ids
    endpoint_private_access = var.endpoint_private_access
    endpoint_public_access  = var.endpoint_public_access
    public_access_cidrs     = var.endpoint_public_access_cidrs
    
    security_group_ids = [aws_security_group.cluster.id]
  }

  encryption_config {
    provider {
      key_arn = aws_kms_key.eks.arn
    }
    resources = ["secrets"]
  }

  enabled_cluster_log_types = var.cluster_log_types

  depends_on = [
    aws_iam_role_policy_attachment.cluster_policy,
    aws_iam_role_policy_attachment.service_policy,
  ]

  tags = var.tags
}

resource "aws_eks_node_group" "main" {
  for_each = var.node_groups

  cluster_name    = aws_eks_cluster.main.name
  node_group_name = each.key
  node_role_arn   = aws_iam_role.node_group.arn
  subnet_ids      = var.private_subnet_ids

  instance_types = each.value.instance_types
  ami_type       = each.value.ami_type
  capacity_type  = each.value.capacity_type
  disk_size      = each.value.disk_size

  scaling_config {
    desired_size = each.value.desired_size
    max_size     = each.value.max_size
    min_size     = each.value.min_size
  }

  update_config {
    max_unavailable_percentage = each.value.max_unavailable_percentage
  }

  remote_access {
    ec2_ssh_key               = each.value.key_name
    source_security_group_ids = [aws_security_group.node_group.id]
  }

  launch_template {
    id      = aws_launch_template.node_group[each.key].id
    version = aws_launch_template.node_group[each.key].latest_version
  }

  depends_on = [
    aws_iam_role_policy_attachment.worker_node_policy,
    aws_iam_role_policy_attachment.cni_policy,
    aws_iam_role_policy_attachment.registry_policy,
  ]

  tags = merge(var.tags, {
    Name = "${var.cluster_name}-${each.key}"
  })
}

# Launch template for advanced configuration
resource "aws_launch_template" "node_group" {
  for_each = var.node_groups

  name_prefix = "${var.cluster_name}-${each.key}-"

  user_data = base64encode(templatefile("${path.module}/user_data.sh", {
    cluster_name        = aws_eks_cluster.main.name
    cluster_endpoint    = aws_eks_cluster.main.endpoint
    cluster_ca          = aws_eks_cluster.main.certificate_authority[0].data
    additional_userdata = each.value.additional_userdata
  }))

  vpc_security_group_ids = [aws_security_group.node_group.id]

  tag_specifications {
    resource_type = "instance"
    tags = merge(var.tags, {
      Name = "${var.cluster_name}-${each.key}-node"
    })
  }

  metadata_options {
    http_endpoint = "enabled"
    http_tokens   = "required"
    http_put_response_hop_limit = 2
  }
}

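The node_groups input iterated with for_each above is a map of objects; judging from the attributes the module reads, its declaration would look roughly like this (a sketch):

hcl
# modules/eks/variables.tf (sketch)
variable "node_groups" {
  description = "Map of EKS managed node group definitions"
  type = map(object({
    instance_types             = list(string)
    ami_type                   = string
    capacity_type              = string
    disk_size                  = number
    desired_size               = number
    min_size                   = number
    max_size                   = number
    max_unavailable_percentage = number
    key_name                   = string
    additional_userdata        = string
  }))
  default = {}
}
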
Multi-Cloud and Multi-Environment {#multi-cloud}

Environment Configuration

hcl
# environments/production/main.tf
module "vpc" {
  source = "../../modules/vpc"

  name_prefix        = local.name_prefix
  vpc_cidr          = var.vpc_cidr
  availability_zones = data.aws_availability_zones.available.names
  enable_nat_gateway = true
  
  public_subnet_cidrs  = local.public_subnet_cidrs
  private_subnet_cidrs = local.private_subnet_cidrs

  tags = local.common_tags
}

module "eks" {
  source = "../../modules/eks"

  cluster_name       = "${local.name_prefix}-cluster"
  kubernetes_version = var.kubernetes_version
  
  subnet_ids         = module.vpc.public_subnet_ids
  private_subnet_ids = module.vpc.private_subnet_ids
  
  endpoint_private_access      = true
  endpoint_public_access       = true
  endpoint_public_access_cidrs = var.allowed_cidr_blocks

  node_groups = {
    general = {
      instance_types = ["t3.medium"]
      ami_type       = "AL2_x86_64"
      capacity_type  = "ON_DEMAND"
      disk_size      = 50
      
      desired_size = 2
      min_size     = 1
      max_size     = 5
      
      max_unavailable_percentage = 25
      key_name = var.key_pair_name
      additional_userdata = ""
    }
    
    spot = {
      instance_types = ["t3.medium", "t3.large"]
      ami_type       = "AL2_x86_64" 
      capacity_type  = "SPOT"
      disk_size      = 50
      
      desired_size = 3
      min_size     = 0
      max_size     = 10
      
      max_unavailable_percentage = 50
      key_name = var.key_pair_name
      additional_userdata = file("${path.module}/spot_userdata.sh")
    }
  }

  tags = local.common_tags
}

module "rds" {
  source = "../../modules/rds"

  identifier = "${local.name_prefix}-db"
  
  engine         = "postgres"
  engine_version = "14.9"
  instance_class = var.db_instance_class
  
  allocated_storage     = var.db_allocated_storage
  max_allocated_storage = var.db_max_allocated_storage
  storage_encrypted     = true
  
  db_name  = var.db_name
  username = var.db_username
  password = random_password.db_password.result
  
  vpc_security_group_ids = [aws_security_group.rds.id]
  db_subnet_group_name   = aws_db_subnet_group.main.name
  
  backup_retention_period = var.backup_retention_period
  backup_window          = var.backup_window
  maintenance_window     = var.maintenance_window
  
  skip_final_snapshot = var.environment != "production"
  
  tags = local.common_tags
}

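The RDS module call references aws_security_group.rds and aws_db_subnet_group.main, which are not shown; a minimal sketch of what they might look like (the PostgreSQL port and resource layout are assumptions):

hcl
resource "aws_db_subnet_group" "main" {
  name       = "${local.name_prefix}-db-subnets"
  subnet_ids = module.vpc.private_subnet_ids

  tags = local.common_tags
}

resource "aws_security_group" "rds" {
  name_prefix = "${local.name_prefix}-rds-"
  vpc_id      = module.vpc.vpc_id
  description = "Allow PostgreSQL access from within the VPC"

  ingress {
    description = "PostgreSQL"
    from_port   = 5432
    to_port     = 5432
    protocol    = "tcp"
    cidr_blocks = [var.vpc_cidr]
  }

  tags = local.common_tags
}
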
Multi-Provider Setup

hcl
# Multi-cloud configuration
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
    google = {
      source  = "hashicorp/google"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
  alias  = "primary"
}

provider "aws" {
  region = var.aws_secondary_region
  alias  = "secondary"
}

provider "azurerm" {
  features {}
  subscription_id = var.azure_subscription_id
}

provider "google" {
  project = var.gcp_project_id
  region  = var.gcp_region
}

# Multi-region AWS resources
module "primary_vpc" {
  source = "./modules/vpc"
  
  providers = {
    aws = aws.primary
  }
  
  region = var.aws_region
  name_prefix = "${var.project_name}-primary"
}

module "secondary_vpc" {
  source = "./modules/vpc"
  
  providers = {
    aws = aws.secondary
  }
  
  region = var.aws_secondary_region
  name_prefix = "${var.project_name}-secondary"
}

Security and Best Practices {#security-practices}

Security Configurations

hcl
# security.tf
resource "aws_kms_key" "main" {
  description         = "${local.name_prefix} KMS Key"
  enable_key_rotation = true
  
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "Enable IAM User Permissions"
        Effect = "Allow"
        Principal = {
          AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
        }
        Action   = "kms:*"
        Resource = "*"
      },
      {
        Sid    = "Allow EKS Service"
        Effect = "Allow"
        Principal = {
          Service = "eks.amazonaws.com"
        }
        Action = [
          "kms:Decrypt",
          "kms:GenerateDataKey"
        ]
        Resource = "*"
      }
    ]
  })

  tags = local.common_tags
}

resource "aws_kms_alias" "main" {
  name          = "alias/${local.name_prefix}-key"
  target_key_id = aws_kms_key.main.key_id
}

# Security Groups with restrictive rules
resource "aws_security_group" "web" {
  name_prefix = "${local.name_prefix}-web-"
  vpc_id      = module.vpc.vpc_id
  description = "Security group for web servers"

  ingress {
    description = "HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = var.allowed_cidr_blocks
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = var.allowed_cidr_blocks
  }

  egress {
    description = "All outbound traffic"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = merge(local.common_tags, {
    Name = "${local.name_prefix}-web-sg"
  })

  lifecycle {
    create_before_destroy = true
  }
}

# WAF Configuration
resource "aws_wafv2_web_acl" "main" {
  name  = "${local.name_prefix}-waf"
  scope = "REGIONAL"

  default_action {
    allow {}
  }

  rule {
    name     = "RateLimitRule"
    priority = 1

    action {
      block {}
    }

    statement {
      rate_based_statement {
        limit              = 10000
        aggregate_key_type = "IP"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "${local.name_prefix}-RateLimitRule"
      sampled_requests_enabled   = true
    }
  }

  rule {
    name     = "AWSManagedRulesCommonRuleSet"
    priority = 2

    override_action {
      none {}
    }

    statement {
      managed_rule_group_statement {
        name        = "AWSManagedRulesCommonRuleSet"
        vendor_name = "AWS"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "${local.name_prefix}-CommonRuleSet"
      sampled_requests_enabled   = true
    }
  }

  tags = local.common_tags
}

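A regional web ACL only takes effect once it is associated with a resource such as an Application Load Balancer or API Gateway stage. Assuming an aws_lb.main defined elsewhere (a hypothetical name, not shown in this article), the association is a single resource:

hcl
resource "aws_wafv2_web_acl_association" "main" {
  resource_arn = aws_lb.main.arn # hypothetical ALB, defined elsewhere
  web_acl_arn  = aws_wafv2_web_acl.main.arn
}
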
Secrets Management

hcl
# secrets.tf
resource "random_password" "db_password" {
  length  = 32
  special = true
}

resource "aws_secretsmanager_secret" "db_password" {
  name = "${local.name_prefix}-db-password"
  description = "Database password for ${local.name_prefix}"
  
  kms_key_id = aws_kms_key.main.arn
  
  tags = local.common_tags
}

resource "aws_secretsmanager_secret_version" "db_password" {
  secret_id     = aws_secretsmanager_secret.db_password.id
  secret_string = jsonencode({
    username = var.db_username
    password = random_password.db_password.result
    host     = module.rds.endpoint
    port     = module.rds.port
    dbname   = var.db_name
  })
}

# Parameter Store for non-sensitive configs
resource "aws_ssm_parameter" "app_config" {
  for_each = var.app_parameters

  name  = "/${local.name_prefix}/${each.key}"
  type  = "String"
  value = each.value

  tags = local.common_tags
}

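The app_parameters input driving the for_each above is a flat map of strings; a sketch of its declaration (the keys are purely illustrative):

hcl
variable "app_parameters" {
  description = "Non-sensitive application configuration published to SSM Parameter Store"
  type        = map(string)
  default = {
    "app/log_level"     = "info"
    "app/feature_flags" = "checkout_v2,dark_mode"
  }
}
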
CI/CD Pipeline Integration {#cicd-integration}

GitHub Actions Terraform Pipeline

yaml
# .github/workflows/terraform.yml
name: Terraform

on:
  push:
    branches: [main, develop]
    paths: ['terraform/**']
  pull_request:
    branches: [main]
    paths: ['terraform/**']

env:
  TF_VERSION: '1.6.0'
  AWS_REGION: 'us-west-2'

jobs:
  plan:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        environment: [dev, staging, production]
        
    steps:
    - uses: actions/checkout@v4
    
    - name: Setup Terraform
      uses: hashicorp/setup-terraform@v3
      with:
        terraform_version: ${{ env.TF_VERSION }}
    
    - name: Configure AWS Credentials
      uses: aws-actions/configure-aws-credentials@v4
      with:
        role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
        aws-region: ${{ env.AWS_REGION }}
    
    - name: Terraform Format Check
      run: |
        cd terraform/environments/${{ matrix.environment }}
        terraform fmt -check -recursive
    
    - name: Terraform Init
      run: |
        cd terraform/environments/${{ matrix.environment }}
        terraform init
    
    - name: Terraform Validate
      run: |
        cd terraform/environments/${{ matrix.environment }}
        terraform validate
    
    - name: Terraform Plan
      run: |
        cd terraform/environments/${{ matrix.environment }}
        terraform plan -out=tfplan -input=false
        terraform show -no-color tfplan > plan.txt
    
    - name: Update PR with Plan
      if: github.event_name == 'pull_request'
      uses: actions/github-script@v7
      with:
        script: |
          const fs = require('fs');
          const plan = fs.readFileSync('terraform/environments/${{ matrix.environment }}/plan.txt', 'utf8');
          const maxGitHubBodyCharacters = 65536;
          
          function chunkSubstr(str, size) {
            const numChunks = Math.ceil(str.length / size)
            const chunks = new Array(numChunks)
            for (let i = 0, o = 0; i < numChunks; ++i, o += size) {
              chunks[i] = str.substr(o, size)
            }
            return chunks
          }
          
          const planChunks = chunkSubstr(plan, maxGitHubBodyCharacters);
          
          for (let i = 0; i < planChunks.length; i++) {
            const body = `### Terraform Plan (${{ matrix.environment }}) - Part ${i + 1}
            
            \`\`\`terraform
            ${planChunks[i]}
            \`\`\``;
            
            await github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: body
            });
          }

  apply:
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    needs: plan
    runs-on: ubuntu-latest
    strategy:
      matrix:
        environment: [dev, staging, production]
    environment: ${{ matrix.environment }}
    
    steps:
    - uses: actions/checkout@v4
    
    - name: Setup Terraform
      uses: hashicorp/setup-terraform@v3
      with:
        terraform_version: ${{ env.TF_VERSION }}
    
    - name: Configure AWS Credentials
      uses: aws-actions/configure-aws-credentials@v4
      with:
        role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
        aws-region: ${{ env.AWS_REGION }}
    
    - name: Terraform Init
      run: |
        cd terraform/environments/${{ matrix.environment }}
        terraform init
    
    - name: Terraform Apply
      run: |
        cd terraform/environments/${{ matrix.environment }}
        terraform apply -auto-approve

Advanced Terraform Patterns {#advanced-patterns}

Dynamic Configurations

hcl
# Dynamic block example
resource "aws_security_group" "dynamic_sg" {
  name_prefix = "${local.name_prefix}-dynamic-"
  vpc_id      = module.vpc.vpc_id

  dynamic "ingress" {
    for_each = var.security_group_rules
    content {
      description = ingress.value.description
      from_port   = ingress.value.from_port
      to_port     = ingress.value.to_port
      protocol    = ingress.value.protocol
      cidr_blocks = ingress.value.cidr_blocks
    }
  }

  dynamic "egress" {
    for_each = var.egress_rules
    content {
      description = egress.value.description
      from_port   = egress.value.from_port
      to_port     = egress.value.to_port
      protocol    = egress.value.protocol
      cidr_blocks = egress.value.cidr_blocks
    }
  }

  tags = local.common_tags
}

# Conditional resources
resource "aws_instance" "web" {
  count = var.enable_web_servers ? var.web_server_count : 0

  ami           = data.aws_ami.amazon_linux.id
  instance_type = local.instance_type
  subnet_id     = element(module.vpc.public_subnet_ids, count.index)

  vpc_security_group_ids = [aws_security_group.web.id]

  user_data = templatefile("${path.module}/user_data.tpl", {
    environment = var.environment
    app_name    = var.app_name
  })

  tags = merge(local.common_tags, {
    Name = "${local.name_prefix}-web-${count.index + 1}"
  })
}

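The dynamic blocks above iterate over rule objects passed in as variables; based on the attributes they reference, security_group_rules (and, analogously, egress_rules) would be declared roughly like this (a sketch):

hcl
variable "security_group_rules" {
  description = "Ingress rules rendered by the dynamic block"
  type = list(object({
    description = string
    from_port   = number
    to_port     = number
    protocol    = string
    cidr_blocks = list(string)
  }))
  default = []
}
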
Custom Validation Rules

hcl
# variables.tf with advanced validations
variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t3.micro"

  validation {
    condition     = can(regex("^t[2-4]\\.(nano|micro|small|medium|large|xlarge|2xlarge)$", var.instance_type))
    error_message = "Instance type must be a burstable (t2/t3/t4) instance type."
  }
}