
Conversation

@killev
Collaborator

@killev killev commented May 24, 2025

  • Add comprehensive AWS architecture documentation with Mermaid diagram
  • Document key architecture components including public layer, compute layer, storage layer, security layer, and operations layer
  • Include detailed component descriptions for ALB, CloudFront, ECS Fargate, ECR, S3 buckets, MongoDB, IAM roles, and monitoring services
  • Document environment isolation strategy with consistent naming conventions

This documentation is essential for understanding the AWS infrastructure setup and helps both developers and operations teams navigate the system architecture effectively.

- Add comprehensive Mermaid flowchart showing complete AWS infrastructure

- Document key architecture components organized by layers (Public, Compute, Storage, Security, Operations)

- Include environment isolation details and resource naming conventions
Copilot AI review requested due to automatic review settings May 24, 2025 09:23
@coderabbitai
Contributor

coderabbitai bot commented May 24, 2025

📝 Walkthrough

This change introduces a comprehensive AWS infrastructure-as-code setup for the "SF Website" project using Terraform. It adds modular Terraform configurations, environment-specific variable files, IAM policies, deployment scripts, and extensive documentation. The infrastructure provisions VPC, ECS Fargate, DocumentDB, ElastiCache Redis, S3, CloudFront, ALB, IAM roles, security groups, monitoring, and CI/CD integration, with environment isolation and operational guidance.

Changes

Files / Groups Change Summary
Documentation
docs/aws-architecture-diagram.md
docs/infrastructureQNA.md
docs/Infrastructure.md
deployment/README.md
Added detailed AWS architecture diagram and explanations, infrastructure Q&A, and updated infrastructure documentation to reflect migration from self-managed MongoDB to managed DocumentDB and ElastiCache Redis, revised domain conventions, and enhanced monitoring. Added a comprehensive README covering prerequisites, deployment steps, module structure, security, monitoring, CI/CD integration, and operational best practices.
Terraform Root & Variables
deployment/main.tf
deployment/variables.tf
deployment/outputs.tf
deployment/terraform.tfvars.example
deployment/environments/backend-development.hcl
deployment/environments/backend-staging.hcl
deployment/environments/backend-production.hcl
deployment/environments/development.tfvars
Introduced root Terraform configuration, variable definitions with validation, comprehensive outputs, example tfvars, backend configs for remote state per environment, and a development environment tfvars file.
Terraform Modules: Networking & Security
deployment/modules/vpc/
deployment/modules/security_groups/

deployment/modules/bastion/*
Added modular Terraform code for VPC (subnets, NAT, IGW, flow logs), security groups for ALB, ECS, DocumentDB, Redis, and a bastion host with SSH access, public IP, and user data initialization.
Terraform Modules: Storage & CDN
deployment/modules/s3/
deployment/modules/cloudfront/
Added S3 buckets for attachments and logs with versioning, encryption, lifecycle, CORS, and CloudFront integration. CloudFront module provisions distribution and origin access identity for secure media delivery.
Terraform Modules: Compute & Containerization
deployment/modules/ecr/
deployment/modules/ecs/
Added ECR repository with lifecycle policies and ECS Fargate cluster/service with autoscaling, task definitions, environment/secrets injection, logging, and ALB integration.
Terraform Modules: Database & Cache
deployment/modules/documentdb/
deployment/modules/redis/
Added DocumentDB cluster with subnet, parameter group, audit logging, and backup; ElastiCache Redis cluster with subnet, encryption, authentication, and backup.
Terraform Modules: IAM & Parameter Store
deployment/modules/iam/
deployment/modules/parameter_store/
Provisioned IAM roles/policies for ECS, GitHub Actions (OIDC), and S3/ECR access. Added SSM Parameter Store resources for secrets (session, DB, Redis, GCS key).
Terraform Modules: Monitoring
deployment/modules/cloudwatch/*
Added CloudWatch log groups, dashboards, and notification integration (Slack webhook) for ECS, ALB, DocumentDB, Redis, and VPC flow logs.
Terraform IAM Policies
deployment/terraform-core-management-policy.json
deployment/terraform-infrastructure-management-policy.json
deployment/terraform-service-management-policy.json
deployment/terraform-s3-management-policy.json
Added IAM policy JSON files granting Terraform permissions for core AWS resources, infrastructure, service management, and S3 backend usage.
Scripts
deploy-ecs.sh
terraform-deploy.sh
scripts/build_assets.sh
Added Bash scripts for automating ECS deployment, Terraform apply, and generating a release ID for asset builds.
Application Integration
website/app.js
website/modules/@apostrophecms/uploadfs/index.js
Dockerfile
Updated application to construct DocumentDB URI using environment variables, integrated it into Apostrophe CMS config, refactored S3 uploadfs config, and updated Dockerfile to download the DocumentDB CA bundle for TLS.
CI/CD Workflow
.github/workflows/deploy_to_aws.yml
Updated GitHub Actions workflow to align with new branch, environment, repo naming, Terraform version, and deployment steps for AWS infrastructure.
.gitignore
Added ignores for Terraform state/plan files, .pem keys, and Cursor temp data.
New GitHub Actions Workflow
.github/workflows/destroy_aws_environment.yml
Added a manual-trigger GitHub Actions workflow for safe destruction of AWS environments with a destroy plan generation, manual approval via GitHub issue, and conditional apply of destruction plan.

Sequence Diagram(s)

sequenceDiagram
    participant Dev as Developer
    participant GitHub as GitHub Actions
    participant ECR as AWS ECR
    participant ECS as AWS ECS
    participant SSM as AWS Parameter Store
    participant S3 as AWS S3
    participant DocDB as AWS DocumentDB
    participant Redis as AWS ElastiCache Redis
    participant ALB as AWS ALB
    participant CloudFront as AWS CloudFront
    participant CloudWatch as AWS CloudWatch

    Dev->>GitHub: Push code/trigger workflow
    GitHub->>ECR: Build & push Docker image
    GitHub->>ECS: Update ECS service with new image
    ECS->>SSM: Fetch secrets (DB, Redis, session)
    ECS->>DocDB: Connect to DocumentDB (via TLS)
    ECS->>Redis: Connect to Redis
    ECS->>S3: Store/retrieve media attachments
    ECS->>CloudWatch: Send logs/metrics
    User->>ALB: HTTPS request to app
    ALB->>ECS: Forward request to container
    ECS->>S3: Serve/upload media
    User->>CloudFront: Request media asset
    CloudFront->>S3: Fetch media via OAI
    CloudWatch->>Slack: Send alert/notification (if triggered)



Contributor

Copilot AI left a comment


Pull Request Overview

This PR introduces comprehensive AWS infrastructure documentation, complete with a detailed Mermaid diagram that visually represents the SF Website architecture and its key components.

  • Adds a Mermaid diagram outlining public, compute, storage, security, and operations layers
  • Documents each AWS component with environment-based naming conventions
  • Details environment isolation strategies with clear naming patterns

ECS -->|Send logs| CloudWatch
%% IAM relationships
IAM_Task -.->|Assume role| ECS

Copilot AI May 24, 2025


Consider adding a legend to clarify the significance of dashed arrows (used for IAM relationships) versus solid arrows in the Mermaid diagram for better reader comprehension.

MongoDB -->|System metrics| CloudWatch
%% Backup
AWSBackup -->|Snapshot| MongoDB

Copilot AI May 24, 2025


Update the documentation to specify that AWS Backup snapshots target the EBS volume of the MongoDB EC2 instance, ensuring clear understanding of what is being backed up.

@github-actions

github-actions bot commented May 24, 2025

🔍 Vulnerabilities of apostrophe-cms:test

📦 Image Reference apostrophe-cms:test
digest: sha256:5cd25d021604e0b0c38267875fb0a1f133e89fce8cf868f89c7750e3aabea7b9
vulnerabilities: critical: 0 high: 3 medium: 0 low: 0
platform: linux/amd64
size: 289 MB
packages: 971
📦 Base Image node:23-alpine
also known as
  • 23-alpine3.22
  • 23.11-alpine
  • 23.11-alpine3.22
  • 23.11.1-alpine
  • 23.11.1-alpine3.22
digest: sha256:b9d38d589853406ff0d4364f21969840c3e0397087643aef8eede40edbb6c7cd
vulnerabilities: critical: 0 high: 0 medium: 0 low: 0
critical: 0 high: 1 medium: 0 low: 0 connect-multiparty 2.2.0 (npm)

pkg:npm/connect-multiparty@2.2.0

high 7.8: CVE-2022-29623 Unrestricted Upload of File with Dangerous Type

Affected range: <=2.2.0
Fixed version: Not Fixed
CVSS Score: 7.8
CVSS Vector: CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
EPSS Score: 0.334%
EPSS Percentile: 56th percentile
Description

An arbitrary file upload vulnerability in the file upload module of Express Connect-Multiparty 2.2.0 allows attackers to execute arbitrary code via a crafted PDF file. NOTE: the Supplier has not verified this vulnerability report.

critical: 0 high: 1 medium: 0 low: 0 async 0.9.2 (npm)

pkg:npm/async@0.9.2

high 7.8: CVE-2021-43138 OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities

Affected range: <2.6.4
Fixed version: 2.6.4, 3.2.2
CVSS Score: 7.8
CVSS Vector: CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
EPSS Score: 1.061%
EPSS Percentile: 77th percentile
Description

A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2), which could let a malicious user obtain privileges via the mapValues() method.

critical: 0 high: 1 medium: 0 low: 0 async 1.5.2 (npm)

pkg:npm/async@1.5.2

high 7.8: CVE-2021-43138 OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities

Affected range: <2.6.4
Fixed version: 2.6.4, 3.2.2
CVSS Score: 7.8
CVSS Vector: CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
EPSS Score: 1.061%
EPSS Percentile: 77th percentile
Description

A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2), which could let a malicious user obtain privileges via the mapValues() method.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
docs/aws-architecture-diagram.md (3)

1-4: Add an introductory context section

It may help to include a brief paragraph below the main heading (lines 1–4) that outlines the document’s purpose, target audience, and links to related architecture references or repos. This orients new readers before diving into the diagram.


5-86: Enhance the Mermaid diagram for clarity and consistency

  • Leverage Mermaid’s subgraph blocks to visually group components by layer (Public, Compute, Storage, Security, Monitoring).
  • Align naming between diagram and text (e.g., “Monitoring & Backup” vs “Operations Layer”) to avoid confusion.
  • Optionally set explicit layout direction (e.g., flowchart TB is fine, but you could switch to LR for horizontal views).

Example snippet:

 flowchart TB
-    %% Public facing components
+  subgraph Public Layer
     ALB[⚖️ Application Load Balancer<br/>…]
     CF[🌐 CloudFront Distribution<br/>…]
+  end
+  subgraph Monitoring & Backup Layer
     CloudWatch[📊 CloudWatch<br/>…]
     AWSBackup[💾 AWS Backup<br/>…]
+  end

111-115: Specify AWS tagging best practices

The environment isolation section (lines 111–115) clearly states naming conventions but could be strengthened by listing recommended AWS tag keys (e.g., Environment, Project, Owner) and example values. This enforces consistent cost allocation and resource management across environments.
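For example, a minimal tag set could look like the sketch below. The keys mirror the Project, Owner, and CostCenter tags already used elsewhere in this PR; the values are placeholders, not taken from the repository.

```hcl
# Illustrative only; align keys and values with your cost-allocation scheme.
variable "default_tags" {
  description = "Tags applied to all resources"
  type        = map(string)
  default = {
    Environment = "dev"             # dev | staging | prod
    Project     = "sf-website"
    Owner       = "platform-team"   # placeholder value
    CostCenter  = "web"             # placeholder value
  }
}
```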

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9ec6a27 and ce33511.

📒 Files selected for processing (1)
  • docs/aws-architecture-diagram.md (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: lint
  • GitHub Check: e2e-tests
  • GitHub Check: security-scan
  • GitHub Check: unit-tests

coderabbitai[bot]
coderabbitai bot previously approved these changes May 24, 2025
…lete Terraform configuration

- Replace MongoDB EC2 instance with managed DocumentDB cluster
- Add ElastiCache Redis cluster for caching and session storage
- Implement complete Terraform infrastructure as code with modular design
- Update domain configuration from prettyclear.com to sandbox-prettyclear.com
- Add comprehensive infrastructure documentation and Q&A guide
- Configure multi-environment support with backend configs for dev/staging/prod
- Add monitoring, security groups, and networking modules
- Update ECS environment variables to include Redis URI and DocumentDB endpoint
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (23)
terraform/README.md (2)

93-93: Specify language for code block
The fenced code block showing the Terraform file structure lacks a language specifier. Add ```hcl for syntax highlighting.

- ```
+ ```hcl
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

93-93: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)


136-136: Specify language for resource naming example
The code block for resource naming conventions should declare a language (e.g., hcl) for consistency and readability.

- ```
+ ```hcl
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

136-136: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)

terraform/modules/vpc/main.tf (1)

142-161: Add Name tag to IAM role for consistency
The IAM role for VPC Flow Logs uses var.tags but misses a Name tag. Consider merging tags to include a Name for easier resource identification.

resource "aws_iam_role" "flow_log" {
  name = "${var.name_prefix}-vpc-flow-log-role-${var.environment}"
- tags = var.tags
+ tags = merge(var.tags, {
+   Name = "${var.name_prefix}-vpc-flow-log-role-${var.environment}"
+ })
}
docs/Infrastructure.md (4)

171-172: Clarify environment variable naming
The variable APOS_MONGODB_URI actually points to a DocumentDB endpoint. For clarity, consider renaming it to APOS_DATABASE_URI or APOS_DOCUMENTDB_URI, or add a note that DocumentDB is MongoDB-compatible to avoid confusion.


252-253: CloudWatch metrics extended to new services
Monitoring now includes DocumentDB and Redis. You might link to example dashboards or alarms in your runbooks to help operators interpret these metrics.


266-318: Review ElastiCache (Redis) section for clarity and redundancy
The Redis section is thorough, but the phrase “Auth token enabled for authentication” followed immediately by the Authentication heading is repetitive. Consider streamlining that wording (e.g., move Auth token enabled under the Authentication heading).

🧰 Tools
🪛 LanguageTool

[duplication] ~294-~294: Possible typo: you repeated a word.
Context: ...on enabled * Auth token enabled for authentication * Authentication: * Auth token stored in AWS Param...

(ENGLISH_WORD_REPEAT_RULE)


321-382: DocumentDB section completeness
Great level of detail. The engine version 4.0.0 is hard-coded—consider parameterizing this (e.g., via var.documentdb_engine_version) so future upgrades need not update the docs & code in tandem.
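A sketch of that parameterization, assuming the documentdb module accepts an engine_version input (the variable name follows the suggestion above, and the default mirrors the currently hard-coded 4.0.0):

```hcl
variable "documentdb_engine_version" {
  description = "DocumentDB engine version"
  type        = string
  default     = "4.0.0"
}

module "documentdb" {
  source         = "./modules/documentdb"
  engine_version = var.documentdb_engine_version # assumed module input name
  # ...remaining inputs unchanged
}
```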

terraform/main.tf (5)

1-20: Lock Terraform and provider versions
Pinning required_version and providers is solid. For Terraform core, you may use a range (e.g., >= 1.0, < 2.0) to allow patch updates without breaking changes.
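For illustration, a range-pinned terraform block might look like this; the AWS provider constraint is an assumption, so keep whatever version the modules were actually written against:

```hcl
terraform {
  required_version = ">= 1.0, < 2.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # assumed major version
    }
  }
}
```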


39-42: Redis auth token generation
Generating a 32-character password with special characters is secure. If you encounter issues with certain symbols in Redis, consider specifying override_special to control allowable characters.
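A sketch of restricting the character set (resource name assumed; the allowed set is only an example, chosen to avoid characters ElastiCache auth tokens reject such as @, ", and /):

```hcl
resource "random_password" "redis_auth_token" {
  length           = 32
  special          = true
  override_special = "!&#$^<>-" # example subset only
}
```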


45-59: Consolidate tag definitions
You merge var.default_tags then re-add Project, CostCenter, and Owner in local.common_tags. Since they’re identical, you can simplify by only merging Environment into var.default_tags.
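In other words, something along these lines (a sketch; it assumes Project, CostCenter, and Owner already live in var.default_tags):

```hcl
locals {
  common_tags = merge(var.default_tags, {
    Environment = var.environment
  })
}
```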


144-158: Parameterized DocumentDB deployment
Consider exposing the DocumentDB engine version as a variable (e.g., var.documentdb_engine_version) so you can upgrade without code changes.


209-247: ECS environment variables alignment
The app still expects APOS_MONGODB_URI for DocumentDB. Confirm compatibility or consider standardizing to APOS_DATABASE_URI. Additionally, you may want to pass in var.container_image_tag, var.container_cpu, var.container_memory, var.ecs_desired_count, and var.ecs_max_capacity here rather than rely on module defaults.
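If you do pass them explicitly, the module call could look roughly like this; the input names on the left are assumptions about the module interface, not verified against modules/ecs:

```hcl
module "ecs" {
  source = "./modules/ecs"

  container_image_tag = var.container_image_tag
  container_cpu       = var.container_cpu
  container_memory    = var.container_memory
  desired_count       = var.ecs_desired_count
  max_capacity        = var.ecs_max_capacity
  # ...networking, IAM, environment, and secret inputs unchanged
}
```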

terraform/modules/s3/main.tf (3)

31-39: SSE-S3 encryption setup
AES256 is suitable; if you need audit/rotation controls, consider KMS-managed keys.


101-109: SSE-S3 encryption for logs
Consider using a KMS key if compliance requires key rotation or audit logging.


147-172: Restrict bucket policy to CloudFront
Your policy locks down S3 access to CloudFront via SourceArn. For extra security, you could also use aws:Referer conditions or an S3 VPC endpoint if needed.

terraform/variables.tf (8)

10-13: Environment variable
Consider adding a validation block (e.g., regex) to restrict var.environment to known values like dev, staging, prod.


38-42: Domain name variable
You might add a validation block to ensure the domain matches *.sandbox-prettyclear.com to catch typos early.
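For example (a sketch; the suffix check assumes every environment uses a sandbox-prettyclear.com subdomain):

```hcl
variable "domain_name" {
  description = "Application domain name"
  type        = string

  validation {
    condition     = can(regex("\\.sandbox-prettyclear\\.com$", var.domain_name))
    error_message = "domain_name must end with .sandbox-prettyclear.com."
  }
}
```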


43-47: Media domain variable
Similar to domain_name, a validation rule could ensure it follows the expected pattern.


60-69: Password length validation
Enforcing a minimum of 8 characters is reasonable; you may also enforce complexity rules (upper, lower, digits) if required.


91-94: GitHub repository variable
Consider adding a regex validation to enforce the owner/repo format.
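A sketch of such a check (the variable name github_repository is assumed from the description above):

```hcl
variable "github_repository" {
  description = "GitHub repository in owner/repo format"
  type        = string

  validation {
    condition     = can(regex("^[A-Za-z0-9_.-]+/[A-Za-z0-9_.-]+$", var.github_repository))
    error_message = "github_repository must be in the form owner/repo."
  }
}
```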


97-101: Slack webhook URL
Marking this as sensitive is correct. If Terraform supports it, you could add validation { condition = can(regex("^https://hooks.slack.com", var.slack_webhook_url)) }.


111-115: Container image tag
Defaulting to latest is convenient for dev but consider requiring explicit tags in prod for immutability.


163-167: DocumentDB instance class
Default db.t3.medium works for small workloads; if you support larger tiers, list them in your module README.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ce33511 and 3fcc647.

📒 Files selected for processing (19)
  • docs/Infrastructure.md (6 hunks)
  • docs/infrastructureQNA.md (1 hunks)
  • terraform/README.md (1 hunks)
  • terraform/backend-dev.hcl (1 hunks)
  • terraform/backend-prod.hcl (1 hunks)
  • terraform/backend-staging.hcl (1 hunks)
  • terraform/main.tf (1 hunks)
  • terraform/modules/s3/main.tf (1 hunks)
  • terraform/modules/s3/outputs.tf (1 hunks)
  • terraform/modules/s3/variables.tf (1 hunks)
  • terraform/modules/security_groups/main.tf (1 hunks)
  • terraform/modules/security_groups/outputs.tf (1 hunks)
  • terraform/modules/security_groups/variables.tf (1 hunks)
  • terraform/modules/vpc/main.tf (1 hunks)
  • terraform/modules/vpc/outputs.tf (1 hunks)
  • terraform/modules/vpc/variables.tf (1 hunks)
  • terraform/outputs.tf (1 hunks)
  • terraform/terraform.tfvars.example (1 hunks)
  • terraform/variables.tf (1 hunks)
✅ Files skipped from review due to trivial changes (10)
  • terraform/backend-dev.hcl
  • terraform/backend-staging.hcl
  • terraform/backend-prod.hcl
  • terraform/terraform.tfvars.example
  • terraform/modules/s3/outputs.tf
  • terraform/modules/security_groups/outputs.tf
  • docs/infrastructureQNA.md
  • terraform/modules/vpc/variables.tf
  • terraform/modules/vpc/outputs.tf
  • terraform/outputs.tf
🧰 Additional context used
🪛 Checkov (3.2.334)
terraform/modules/vpc/main.tf

[HIGH] 164-184: Ensure no IAM policies documents allow "*" as a statement's resource for restrictable actions

(CKV_AWS_355)


[HIGH] 164-184: Ensure IAM policies does not allow write access without constraints

(CKV_AWS_290)

terraform/modules/s3/main.tf

[MEDIUM] 50-76: Ensure S3 lifecycle configuration sets period for aborting failed uploads

(CKV_AWS_300)


[MEDIUM] 120-136: Ensure S3 lifecycle configuration sets period for aborting failed uploads

(CKV_AWS_300)

🪛 LanguageTool
docs/Infrastructure.md

[duplication] ~294-~294: Possible typo: you repeated a word.
Context: ...on enabled * Auth token enabled for authentication * Authentication: * Auth token stored in AWS Param...

(ENGLISH_WORD_REPEAT_RULE)

🪛 markdownlint-cli2 (0.17.2)
terraform/README.md

93-93: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)


136-136: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)

⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: e2e-tests
  • GitHub Check: lint
  • GitHub Check: unit-tests
  • GitHub Check: security-scan
🔇 Additional comments (65)
terraform/modules/security_groups/variables.tf (4)

3-6: Well-defined prefix variable
The name_prefix variable has an appropriate description and enforces a string type, aligning with naming conventions.


8-11: Environment variable is clear
The environment variable properly captures the deployment context with a clear string type.


13-16: VPC ID variable is correctly typed
The vpc_id variable succinctly represents the VPC identifier with the correct string type and description.


18-22: Tags variable defaults correctly
The tags variable uses a map(string) with a sensible default of {}, enabling optional tagging.

terraform/modules/security_groups/main.tf (4)

3-36: ALB Security Group is well-configured
The Application Load Balancer security group allows HTTP/HTTPS from anywhere and all outbound traffic, correctly using cidr_blocks and merged tags.


38-63: ECS Security Group ingress scoped to ALB
Restricting ECS ingress to the ALB SG ID ensures least-privilege access. The egress rule allows all outbound traffic as expected.


65-90: DocumentDB Security Group is correctly locked down
Ingress is limited to the ECS SG on port 27017 and all outbound traffic is permitted. Naming and tagging follow conventions.


92-117: Redis Security Group follows best practices
Port 6379 is restricted to ECS tasks, egress is open, and resource naming and tagging are consistent.

terraform/README.md (1)

3-3: Verify documentation link consistency
The README references docs/Infrastructure.md, but the PR introduces docs/aws-architecture-diagram.md. Confirm that docs/Infrastructure.md exists or update this link to point to the newly added architecture diagram file.

terraform/modules/s3/variables.tf (7)

3-6: Name prefix variable is well-defined
name_prefix includes a clear description and correct string type for resource naming.


8-11: Environment variable is consistent
The environment variable clearly identifies the target environment and uses the appropriate type.


13-17: Tags variable setup is correct
Using map(string) with a default {} allows flexible tagging.


19-23: Domain name for CORS declared
The domain_name variable is correctly set up with string type and an empty default for optional usage.


25-29: Media domain name variable is clear
media_domain_name follows the same pattern as domain_name, supporting CORS configuration.


31-35: CloudFront ARN variable is appropriate
cloudfront_distribution_arn defaults to "", allowing optional policy attachment.


37-41: ECS task role ARN variable is well-typed
The ecs_task_role_arn variable enables granting bucket access and is properly declared.

terraform/modules/vpc/main.tf (12)

3-12: VPC resource configuration is solid
The VPC is created with DNS support and appropriate tagging conventions.


14-21: Internet Gateway setup is correct
Attaching the IGW to the VPC and tagging follow best practices.


23-36: Public subnets are provisioned per AZ
Using cidrsubnet with correct indexing and tagging each subnet ensures a robust multi-AZ design.


38-50: Private subnets mirror public setup
Private subnet CIDR allocation and tagging are consistent with the public subnets.


52-63: Elastic IPs for NAT Gateways are handled properly
Allocating EIPs with depends_on ensures the IGW is ready before assignment.


64-76: NAT Gateways created per AZ
Each NAT Gateway references its EIP and subnet, with explicit dependencies for order.


78-90: Public route table configuration is correct
Default route through IGW is set, and tagging matches naming conventions.


92-99: Public route table associations are accurate
Associating each public subnet to its route table ensures proper routing.


100-114: Private route tables use NAT Gateways
Traffic in private subnets is routed through the corresponding NAT Gateway.


116-123: Private route table associations are correct
Linking each private subnet to its private route table is implemented properly.


124-131: VPC Flow Logs resource is well-defined
Configuring flow logs to send all traffic data to CloudWatch follows best practices.


132-140: CloudWatch Log Group for flow logs is good
The log group is named and retained for 30 days; tags are merged correctly.

docs/Infrastructure.md (4)

7-7: Confirm domain naming consistency
The domain was updated to use the sandbox-prettyclear.com suffix. Please verify that this matches the var.domain_name and var.media_domain_name values in your Terraform configuration and DNS records.


161-162: ECS integration updated with DocumentDB & Redis
The ECS cluster now integrates with both Amazon DocumentDB and ElastiCache Redis services. This aligns with the infrastructure shift—no further changes needed here.


203-203: Validate ALB domain update
The ALB domain is now sf-website-<env>.sandbox-prettyclear.com. Ensure that your ACM certificate and Route 53 records are configured to cover this new domain.


224-224: Verify CloudFront custom domain
The media distribution now uses sf-website-media-<env>.sandbox-prettyclear.com. Confirm that this CNAME is set up in DNS and attached to your CloudFront distribution.

terraform/main.tf (11)

23-29: Use default_tags for consistent resource tagging
The default_tags block ensures all resources carry your standard tags. No action needed.


31-37: Fetch availability zones dynamically
Good use of data sources to populate availability zones.


61-72: Modular VPC configuration
Well-structured VPC module invocation encapsulates networking concerns.


75-84: Modular security groups configuration
Security group module is clear and aligns with your architecture.


97-110: IAM roles for ECS and GitHub OIDC
ECS task and execution roles plus GitHub OIDC configuration look correct.


113-120: Container registry configuration
Enabling image scan on push and a lifecycle policy is best practice.


123-141: Parameter Store secrets management
Good separation of static and generated secrets. Ensure an empty gcs_service_account_key does not result in a malformed secret entry.


161-174: Redis module configuration
Auth token sourcing and subnet/security settings match your docs. Confirm engine version 7.0 matches the module’s default.


177-191: ALB module configuration
You use var.domain_name here—ensure your certificate ARN and DNS (Route 53) match this new domain.


194-207: CloudFront module integration
Good setup; verify that the media_domain_name and OAC settings correspond to the S3 bucket policy.


250-267: CloudWatch module configuration
Monitoring parameters cover all key components. Make sure alert thresholds and your Slack webhook URL are tested in staging.

terraform/modules/s3/main.tf (9)

4-11: Attachments bucket naming
Using ${var.name_prefix}-s3-attachments-${var.environment} follows your naming convention.


14-21: Logs bucket naming
Consistent naming for logs bucket—good.


24-29: Versioning enabled for attachments
Versioning is correctly enabled.


41-48: Block public access
Strict public access block prevents unintended exposure.


78-90: CORS rules for attachments
Origins and methods align with your application’s requirements.


94-99: Versioning enabled for logs
Versioning for logs is enabled as expected.


111-118: Block public access for logs
Correctly restricts public access to your logs bucket.


139-145: CloudFront Origin Access Control
Creating an OAC for the attachments bucket is best practice—ensure your CloudFront distribution references aws_cloudfront_origin_access_control.attachments.id.


175-207: ECS task policy document
IAM policy statements grant the correct S3 permissions. Make sure this data.aws_iam_policy_document.attachments_access output is attached to an aws_iam_policy resource in the parent module.

terraform/variables.tf (13)

4-8: AWS region default
Defaulting to us-east-1 is common. For production, you may require an explicit override to avoid unintended deployments.


15-23: Default tags
Your default tag map is clear and well-structured.


26-35: VPC CIDR validation
Using can(cidrhost(...)) is a robust check. If you plan IPv6 support, consider adding a corresponding CIDR validation for IPv6 blocks.


48-52: Certificate ARN
ARN input is correctly marked as non-sensitive string.


54-59: DocumentDB username
The master username is captured correctly.


72-81: Session secret validation
Requiring 32+ characters improves security.


83-88: Optional GCS key
An empty default is fine; ensure your module logic skips creating a secret when this is blank.


104-108: Route53 zone ID
Default empty is acceptable; if Route53 is optional, document when to set this variable.


117-126: Container CPU validation
Validation against allowed CPU units is well done.


128-137: Container memory validation
The memory range check covers valid Fargate limits.


140-149: ECS desired count validation
Ensuring at least one task is good; consider adding an upper bound for cost control.
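For instance, a bounded version could look like this; the ceiling of 10 and the default of 1 are arbitrary examples, not project requirements:

```hcl
variable "ecs_desired_count" {
  description = "Desired number of ECS tasks"
  type        = number
  default     = 1 # illustrative default

  validation {
    condition     = var.ecs_desired_count >= 1 && var.ecs_desired_count <= 10
    error_message = "ecs_desired_count must be between 1 and 10."
  }
}
```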


151-160: ECS max capacity validation
Linking max_capacity to desired_count prevents misconfiguration—nice touch.


169-173: Redis node type default
cache.t3.micro is appropriate for dev; production overrides should be documented.

Comment on lines 163 to 184
# IAM Policy for VPC Flow Logs
resource "aws_iam_role_policy" "flow_log" {
  name = "${var.name_prefix}-vpc-flow-log-policy-${var.environment}"
  role = aws_iam_role.flow_log.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents",
          "logs:DescribeLogGroups",
          "logs:DescribeLogStreams",
        ]
        Effect   = "Allow"
        Resource = "*"
      },
    ]
  })
}
Contributor


⚠️ Potential issue

Restrict IAM policy resource scope
The inline IAM policy allows unrestricted access (Resource = "*") for CloudWatch Logs actions. To adhere to the principle of least privilege, scope the policy to the specific log group ARN.

resource "aws_iam_role_policy" "flow_log" {
  name = "${var.name_prefix}-vpc-flow-log-policy-${var.environment}"
  role = aws_iam_role.flow_log.id

- policy = jsonencode({
-   Version = "2012-10-17"
-   Statement = [
-     {
-       Action = [
-         "logs:CreateLogGroup",
-         "logs:CreateLogStream",
-         "logs:PutLogEvents",
-         "logs:DescribeLogGroups",
-         "logs:DescribeLogStreams",
-       ]
-       Effect   = "Allow"
-       Resource = "*"
-     },
-   ]
- })
+ policy = jsonencode({
+   Version = "2012-10-17"
+   Statement = [
+     {
+       Effect   = "Allow"
+       Action = [
+         "logs:CreateLogGroup",
+         "logs:CreateLogStream",
+         "logs:PutLogEvents",
+         "logs:DescribeLogGroups",
+         "logs:DescribeLogStreams",
+       ]
+       Resource = [
+         aws_cloudwatch_log_group.vpc_flow_log.arn,
+         "${aws_cloudwatch_log_group.vpc_flow_log.arn}:*"
+       ]
+     },
+   ]
+ })
}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# IAM Policy for VPC Flow Logs
resource "aws_iam_role_policy" "flow_log" {
  name = "${var.name_prefix}-vpc-flow-log-policy-${var.environment}"
  role = aws_iam_role.flow_log.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents",
          "logs:DescribeLogGroups",
          "logs:DescribeLogStreams",
        ]
        Effect   = "Allow"
        Resource = "*"
      },
    ]
  })
}
resource "aws_iam_role_policy" "flow_log" {
  name = "${var.name_prefix}-vpc-flow-log-policy-${var.environment}"
  role = aws_iam_role.flow_log.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents",
          "logs:DescribeLogGroups",
          "logs:DescribeLogStreams",
        ]
        Resource = [
          aws_cloudwatch_log_group.vpc_flow_log.arn,
          "${aws_cloudwatch_log_group.vpc_flow_log.arn}:*"
        ]
      },
    ]
  })
}
🧰 Tools
🪛 Checkov (3.2.334)

[HIGH] 164-184: Ensure no IAM policies documents allow "*" as a statement's resource for restrictable actions

(CKV_AWS_355)


[HIGH] 164-184: Ensure IAM policies does not allow write access without constraints

(CKV_AWS_290)

🤖 Prompt for AI Agents
In terraform/modules/vpc/main.tf around lines 163 to 184, the IAM policy for VPC
Flow Logs currently allows unrestricted access with Resource set to "*". To fix
this, replace the wildcard resource with the specific ARN of the CloudWatch log
group used by the flow logs. Construct the ARN using the log group name variable
or resource reference, and update the Resource field in the policy JSON to this
ARN to restrict permissions to only the necessary log group.

Comment on lines 87 to 94
module "s3" {
source = "./modules/s3"

name_prefix = local.name_prefix
environment = local.environment

tags = local.common_tags
}
Contributor


⚠️ Potential issue

Pass required variables to S3 module
The s3 module uses var.domain_name, var.media_domain_name, var.cloudfront_distribution_arn, and var.ecs_task_role_arn internally but these aren’t passed here. Please verify the module interface or update the call to include these inputs.
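If the module does expect those inputs, the call would need to look roughly like the sketch below. It is untested, and the root-level variables (or alternative module outputs) with these names may not exist yet:

```hcl
module "s3" {
  source = "./modules/s3"

  name_prefix = local.name_prefix
  environment = local.environment

  domain_name                 = var.domain_name
  media_domain_name           = var.media_domain_name
  cloudfront_distribution_arn = var.cloudfront_distribution_arn # or the cloudfront module output, if defined
  ecs_task_role_arn           = var.ecs_task_role_arn           # or the iam module output, if defined

  tags = local.common_tags
}
```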

🤖 Prompt for AI Agents
In terraform/main.tf around lines 87 to 94, the s3 module call is missing
required input variables var.domain_name, var.media_domain_name,
var.cloudfront_distribution_arn, and var.ecs_task_role_arn that the module uses
internally. Review the s3 module's variables.tf or documentation to confirm
these inputs, then update the module block here to pass these variables
explicitly by adding lines like domain_name = var.domain_name, media_domain_name
= var.media_domain_name, cloudfront_distribution_arn =
var.cloudfront_distribution_arn, and ecs_task_role_arn = var.ecs_task_role_arn.

Comment on lines 50 to 76
resource "aws_s3_bucket_lifecycle_configuration" "attachments" {
bucket = aws_s3_bucket.attachments.id

rule {
id = "transition_to_ia"
status = "Enabled"

transition {
days = 30
storage_class = "STANDARD_IA"
}

transition {
days = 90
storage_class = "GLACIER"
}
}

rule {
id = "expire_non_current_versions"
status = "Enabled"

noncurrent_version_expiration {
noncurrent_days = 30
}
}
}
Contributor


⚠️ Potential issue

Add abort incomplete multipart upload rule
Your lifecycle rules handle transitions and non-current expiration but don’t clean up failed multipart uploads. Add an abort_incomplete_multipart_upload { days_after_initiation = 7 } block to avoid orphaned parts.
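One way to add it, sketched as an extra rule inside the same aws_s3_bucket_lifecycle_configuration block (the rule id is illustrative; 7 days is the example threshold from this comment):

```hcl
rule {
  id     = "abort_incomplete_multipart_uploads"
  status = "Enabled"

  filter {} # applies to the whole bucket

  # Clean up parts left behind by multipart uploads that never completed.
  abort_incomplete_multipart_upload {
    days_after_initiation = 7
  }
}
```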

🧰 Tools
🪛 Checkov (3.2.334)

[MEDIUM] 50-76: Ensure S3 lifecycle configuration sets period for aborting failed uploads

(CKV_AWS_300)

🤖 Prompt for AI Agents
In terraform/modules/s3/main.tf around lines 50 to 76, the lifecycle
configuration resource for the S3 bucket is missing a rule to abort incomplete
multipart uploads. Add an abort_incomplete_multipart_upload block inside the
lifecycle rule with days_after_initiation set to 7 to ensure failed multipart
uploads are cleaned up and do not accumulate orphaned parts.

Comment on lines 120 to 136
resource "aws_s3_bucket_lifecycle_configuration" "logs" {
bucket = aws_s3_bucket.logs.id

rule {
id = "log_lifecycle"
status = "Enabled"

transition {
days = 90
storage_class = "GLACIER"
}

expiration {
days = 365
}
}
}
Contributor


⚠️ Potential issue

Add abort incomplete multipart upload to logs lifecycle
Like the attachments bucket, include an abort_incomplete_multipart_upload rule to prevent storage bloat from incomplete uploads.

🧰 Tools
🪛 Checkov (3.2.334)

[MEDIUM] 120-136: Ensure S3 lifecycle configuration sets period for aborting failed uploads

(CKV_AWS_300)

🤖 Prompt for AI Agents
In terraform/modules/s3/main.tf around lines 120 to 136, the lifecycle
configuration for the logs bucket is missing an
abort_incomplete_multipart_upload rule. Add an abort_incomplete_multipart_upload
block inside the rule to specify the number of days after which incomplete
multipart uploads should be aborted, similar to the attachments bucket
configuration, to prevent storage bloat.

…ript

- Unify S3 bucket names to 'sf-website-infrastructure' across all environments

- Update state file paths to consistent terraform/terraform-{env}.tfstate format

- Replace hardcoded session secret with environment variable placeholder

- Add comprehensive AWS initialization script for Terraform backend resources

- Include S3 bucket and DynamoDB table management with proper error handling
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
terraform/backend-dev.hcl (1)

1-6: Reduce duplication across environment backend configs
All three backend files differ only by the key value. Consider consolidating these into a single templated backend.hcl and passing the environment-specific key at init time (e.g., terraform init -backend-config="key=terraform/${ENV}.tfstate"). This will DRY up your configs and simplify maintenance.
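A sketch of the consolidated file; the bucket and key values follow the pattern discussed elsewhere in this PR, so double-check them against the actual backend before use:

```hcl
# backend.hcl -- shared settings for every environment.
# Initialize with the per-environment key, e.g.:
#   terraform init -backend-config=backend.hcl -backend-config="key=terraform/terraform-dev.tfstate"
bucket  = "sf-website-infrastructure"
region  = "us-east-1"
encrypt = true
```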

terraform/init-aws-for-terraform.sh (2)

36-42: Use an array for AWS CLI invocation to avoid word-splitting edge cases
Building the AWS command as an array is more robust than echoing a string, especially when profile names contain spaces or special characters.

-aws_cmd() {
-  if [ -n "${PROFILE}" ]; then
-    echo "aws --profile ${PROFILE}"
-  else
-    echo "aws"
-  fi
-}
+aws_cmd() {
+  local aws_cli=(aws)
+  if [ -n "${PROFILE}" ]; then
+    aws_cli+=(--profile "${PROFILE}")
+  fi
+  echo "${aws_cli[@]}"
+}

243-248: Use raw input flag with read to avoid backslash interpretation
It's safer to use read -r so that any backslashes in user input aren’t treated as escape characters.

-read -p 'Are you sure? (yes/no): ' confirmation
+read -r -p 'Are you sure? (yes/no): ' confirmation
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3fcc647 and 196a33b.

📒 Files selected for processing (5)
  • terraform/backend-dev.hcl (1 hunks)
  • terraform/backend-prod.hcl (1 hunks)
  • terraform/backend-staging.hcl (1 hunks)
  • terraform/init-aws-for-terraform.sh (1 hunks)
  • terraform/terraform.tfvars.example (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • terraform/terraform.tfvars.example
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: lint
  • GitHub Check: e2e-tests
  • GitHub Check: security-scan
  • GitHub Check: unit-tests
🔇 Additional comments (4)
terraform/backend-dev.hcl (1)

1-6: Correct remote state configuration for Development environment
The backend file correctly specifies the shared S3 bucket, state key, DynamoDB lock table, region, and enables encryption—this aligns with Terraform best practices for remote state management.

terraform/backend-staging.hcl (1)

1-6: Remote state configuration for Staging is valid
The staging backend correctly mirrors the development setup with the appropriate key path and encryption.

terraform/backend-prod.hcl (1)

1-6: Remote state configuration for Production is valid
Production backend is properly configured with locking, encryption, and distinct state path.

terraform/init-aws-for-terraform.sh (1)

1-338: Bash script for Terraform backend resource management looks solid
The script cleanly implements create/delete/status operations, credential checks, resource existence checks, and user prompts. Functions are well-named and the flow is clear.

# Script to manage Terraform backend resources (S3 bucket and DynamoDB table)
# Usage: ./init-aws-for-terraform.sh [create|delete|status] [--profile PROFILE_NAME]

set -e
Contributor


🛠️ Refactor suggestion

Enable strict error checking for pipelines and unset variables
With set -e alone, a pipeline fails only on the exit status of its last command, and references to unset variables go unnoticed. Adding -o pipefail and -u ensures that failures anywhere in a pipeline, as well as uses of unset variables, abort the script as expected.

-set -e
+set -euo pipefail
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
set -e
set -euo pipefail
🤖 Prompt for AI Agents
In terraform/init-aws-for-terraform.sh at line 6, the script uses 'set -e' which
only stops execution on the last command's failure. To improve error handling,
modify this line to include '-o pipefail' and '-u' options as well, so the
script will also fail on any command in a pipeline that fails and on any use of
unset variables. This change enhances the script's robustness by ensuring all
errors are caught promptly.

… and comprehensive documentation

- Refactor aws_cmd() to aws_exec() with improved error handling and output suppression
- Add non-interactive mode support for automation and CI/CD pipelines
- Add comprehensive documentation to README.md with usage examples and feature overview
- Create IAM policy file with specific permissions for Terraform backend resources
- Improve function return codes and error handling throughout the script
- Redirect command output to reduce noise and improve user experience
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (1)
terraform/init-aws-for-terraform.sh (1)

6-6: Enhance error handling with pipefail and unset variable checks
As previously suggested, use:

-set -e
+set -euo pipefail

to catch failures in pipelines and references to unset variables.

🧹 Nitpick comments (7)
terraform/terraform-backend-policy.json (3)

4-22: Scope down S3 bucket permissions for least privilege
Currently, the policy grants broad permissions (CreateBucket, DeleteBucket, PutBucketPolicy, etc.) on the entire bucket. Consider whether Terraform truly needs to manage bucket lifecycle (creation/deletion) on an ongoing basis, or if those operations should be separated into a bootstrap policy. Reducing permissions post-creation minimizes blast radius.


40-53: Consolidate DynamoDB permissions and consider wildcard resources
You grant CreateTable/DeleteTable for a single table ARN. If you plan to reuse this policy across environments with table suffixes, you could use a wildcard:

"Resource": "arn:aws:dynamodb:us-east-1:695912022152:table/sf-website-terraform-locks-*"

Alternatively, parameterize the table name in Terraform. Also verify if CreateTable/DeleteTable should be limited to bootstrap only.


54-61: Restrict STS permissions to specific principals
Granting sts:GetCallerIdentity on all resources is safe but broad. You could tighten this by specifying the ARN of the IAM role(s) intended to call Terraform, e.g.:

"Resource": "arn:aws:iam::695912022152:role/terraform-*"

This follows least-privilege principles.

terraform/init-aws-for-terraform.sh (2)

35-43: Review AWS CLI error suppression
Filtering out any lines containing head: or cat: may inadvertently silence legitimate AWS CLI errors. Consider using the AWS CLI environment variable AWS_PAGER="" to disable paging or refining your grep regex to only ignore known benign messages.


234-236: Use read -r to prevent backslash escapes
Change to:

-read -p 'Are you sure? (yes/no): ' confirmation
+read -r -p 'Are you sure? (yes/no): ' confirmation

to avoid unintended processing of backslashes in user input.

terraform/README.md (2)

93-99: Specify language for fenced code block
The code block starting at line 93 is missing a language identifier. For markdownlint compliance, add text or bash after the backticks, e.g.:

- ```
+ ```text
  terraform/
  ├── main.tf
  └── README.md

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

93-93: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)


136-138: Specify language for resource naming code block
The snippet defining the naming convention lacks a language marker. Update to:

- ```
+ ```text
  sf-website-{resource-type}-{environment}

to satisfy markdownlint (MD040).
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

136-136: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 196a33b6bf5e18457ada34a37e94a85424c0f4ef and e5e4b6070d93e313c9faf3560eb092b82b9f8989.

📒 Files selected for processing (3)
  • terraform/README.md (1 hunks)
  • terraform/init-aws-for-terraform.sh (1 hunks)
  • terraform/terraform-backend-policy.json (1 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.17.2)
terraform/README.md

93-93: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)


136-136: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)

⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: e2e-tests
  • GitHub Check: lint
  • GitHub Check: unit-tests
  • GitHub Check: security-scan


Comment on lines 24 to 26
"arn:aws:s3:::sf-website-infrastructure",
"arn:aws:s3:::sf-website-infrastructure/*"
]
Contributor


⚠️ Potential issue

Align resource ARNs with environment-specific buckets
The policy hardcodes the S3 bucket sf-website-infrastructure, but your backend configs (e.g., backend-dev.hcl) and README use buckets like sf-website-terraform-state-dev, -staging, -prod. This mismatch could prevent Terraform from accessing the correct bucket. Please confirm the intended bucket names and update either the policy or the backend docs to align.

🤖 Prompt for AI Agents
In terraform/terraform-backend-policy.json around lines 24 to 26, the S3 bucket
ARNs are hardcoded to "sf-website-infrastructure" which does not match the
environment-specific bucket names used in backend configs and documentation like
"sf-website-terraform-state-dev", "-staging", and "-prod". Update the resource
ARNs in this policy file to use the correct environment-specific bucket names
consistent with the backend configuration and README, or alternatively update
the backend configs and docs to match the bucket names in this policy, ensuring
alignment between all references.

Comment on lines 157 to 164
aws_exec s3api list-object-versions --bucket "${BUCKET_NAME}" \
  --query 'Versions[].{Key:Key,VersionId:VersionId}' \
  --output text | while read -r key version_id; do
  if [ -n "${key}" ] && [ -n "${version_id}" ]; then
    aws_exec s3api delete-object --bucket "${BUCKET_NAME}" \
      --key "${key}" --version-id "${version_id}"
  fi
done
Contributor


🛠️ Refactor suggestion

Handle object keys with special characters safely
Using --output text and while read -r key version_id can break on keys containing spaces or unusual characters. A more robust pattern is to output JSON and parse with jq, for example:

aws_exec s3api list-object-versions --bucket "${BUCKET_NAME}" --query 'Versions[].{Key:Key,VersionId:VersionId}' --output json |
jq -c '.[]' |
while read -r item; do
  key=$(jq -r '.Key' <<<"$item")
  version_id=$(jq -r '.VersionId' <<<"$item")
  aws_exec s3api delete-object --bucket "$BUCKET_NAME" --key "$key" --version-id "$version_id"
done

This ensures correct handling of any key names.

Also applies to: 167-174

🤖 Prompt for AI Agents
In terraform/init-aws-for-terraform.sh around lines 157 to 164, the current use
of --output text with while read can break on S3 object keys containing spaces
or special characters. To fix this, change the command to output JSON (--output
json), pipe it through jq to parse each object safely, extract Key and VersionId
using jq, and then delete each object using these safely parsed values. Apply
the same fix to lines 167 to 174 where a similar pattern occurs.

Comment on lines 43 to 47
# Create S3 buckets for each environment
aws s3 mb s3://sf-website-terraform-state-dev
aws s3 mb s3://sf-website-terraform-state-staging
aws s3 mb s3://sf-website-terraform-state-prod

Contributor


⚠️ Potential issue

Inconsistent bucket naming in Quick Start
The README instructs creating buckets named sf-website-terraform-state-dev etc., but the IAM policy and init-aws-for-terraform.sh reference sf-website-infrastructure. Please verify and align the bucket names across your Terraform backends, scripts, and docs to avoid confusion.

🤖 Prompt for AI Agents
In terraform/README.md around lines 43 to 47, the S3 bucket names used in the
instructions do not match the bucket names referenced in the IAM policy and
init-aws-for-terraform.sh script. Review the bucket naming convention used in
all related files and update the README to use consistent bucket names, ensuring
they align exactly with those in the IAM policy and scripts to prevent confusion
and errors during setup.

- Remove DynamoDB table references from all backend configuration files

- Simplify terraform initialization script by removing DynamoDB functionality

- Add comprehensive terraform modules for AWS services (ALB, CloudFront, CloudWatch, DocumentDB, ECR, ECS, IAM, Parameter Store, Redis)

- Improve initialization script with environment variable configuration support

- Update gitignore to exclude terraform.tfvars and .terraform directory

- Add terraform lock file for dependency management
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

♻️ Duplicate comments (2)
terraform/init-aws-for-terraform.sh (2)

6-6: Enable strict error handling
Already flagged in prior reviews: switch from set -e to set -euo pipefail to catch errors in pipelines and unset variables.

-set -e
+set -euo pipefail

126-133: Handle S3 object keys with special characters safely
The loops using --output text | while read -r key version_id break on keys with spaces or unusual characters. Switch to JSON output and parse with jq, as suggested previously.

- aws_exec s3api list-object-versions --bucket "${BUCKET_NAME}" \
-   --query 'Versions[].{Key:Key,VersionId:VersionId}' \
-   --output text | while read -r key version_id; do
+ aws_exec s3api list-object-versions --bucket "${BUCKET_NAME}" \
+   --query 'Versions[].{Key:Key,VersionId:VersionId}' \
+   --output json | jq -c '.[]' | while read -r item; do
+   key=$(jq -r '.Key' <<<"$item")
+   version_id=$(jq -r '.VersionId' <<<"$item")

Apply the same pattern for DeleteMarkers.

Also applies to: 136-143

🧹 Nitpick comments (10)
terraform/modules/cloudfront/main.tf (1)

2-46: Add default root object and consider origin failover & WAF

  • To prevent 404s when accessing the root path, add a default_root_object (e.g., "index.html") (CKV_AWS_305).
  • For high availability, consider configuring an origin_group with failover criteria (CKV_AWS_310).
  • Evaluate attaching a WAFv2 WebACL for added security (CKV2_AWS_47).

Example diff to add just the default root object:

 resource "aws_cloudfront_distribution" "main" {
+  default_root_object = "index.html"
   origin {
     domain_name = var.s3_bucket_domain_name
     origin_id   = "S3-${var.s3_bucket_id}"
🧰 Tools
🪛 Checkov (3.2.334)

[MEDIUM] 2-46: Ensure CloudFront distribution has a default root object configured

(CKV_AWS_305)


[MEDIUM] 2-46: Ensure CloudFront distributions should have origin failover configured

(CKV_AWS_310)


[MEDIUM] 2-46: Ensure AWS CloudFront attached WAFv2 WebACL is configured with AMR for Log4j Vulnerability

(CKV2_AWS_47)

terraform/modules/cloudfront/variables.tf (1)

1-35: Input variables defined without validations
All necessary inputs are declared, but adding validation blocks would help catch misconfigurations early. For example:

variable "media_domain_name" {
  description = "Media domain name"
  type        = string

  validation {
    condition     = can(regex("[A-Za-z0-9.-]+\\.[A-Za-z]{2,}", var.media_domain_name))
    error_message = "media_domain_name must be a valid domain (e.g. media.example.com)"
  }
}

Similarly, enforce non-empty strings for critical values like name_prefix and certificate_arn.
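A minimal sketch of such a non-empty check for name_prefix (the error message wording is illustrative):

variable "name_prefix" {
  description = "Prefix for resource names"
  type        = string

  validation {
    condition     = length(trimspace(var.name_prefix)) > 0
    error_message = "name_prefix must be a non-empty string."
  }
}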

terraform/modules/redis/outputs.tf (1)

1-4: Simplify fallback with try()
Consider using Terraform’s built-in try() to streamline the fallback:

-value = aws_elasticache_replication_group.main.configuration_endpoint_address != "" ? aws_elasticache_replication_group.main.configuration_endpoint_address : aws_elasticache_replication_group.main.primary_endpoint_address
+value = try(aws_elasticache_replication_group.main.configuration_endpoint_address, aws_elasticache_replication_group.main.primary_endpoint_address)
terraform/modules/ecr/variables.tf (1)

6-9: Add validation for environment variable
Consider constraining this variable to known environments:

 variable "environment" {
   description = "Environment name"
   type        = string
+  validation {
+    condition     = contains(["dev","staging","prod"], var.environment)
+    error_message = "Environment must be one of dev, staging, or prod."
+  }
 }
terraform/modules/ecr/main.tf (1)

13-47: Parameterize lifecycle policy rules for flexibility
Hardcoded countNumber values can be exposed as module inputs to allow easy tuning. Example refactor:

# In terraform/modules/ecr/variables.tf
+variable "keep_tagged_count" {
+  description = "Number of tagged images to retain"
+  type        = number
+  default     = 10
+}
+variable "untagged_retention_days" {
+  description = "Days to retain untagged images"
+  type        = number
+  default     = 1
+}

# In terraform/modules/ecr/main.tf
   policy = jsonencode({
     rules = [
       {
         ...
-        countNumber   = 10
+        countNumber   = var.keep_tagged_count
         ...
       },
       {
         ...
-        countNumber = 1
+        countNumber = var.untagged_retention_days
         ...
       }
     ]
   })
terraform/modules/cloudwatch/main.tf (2)

1-7: Make retention period configurable

The hard-coded retention_in_days = 7 may not fit all use cases. Consider exposing this as a variable (e.g., var.log_retention_in_days) with a sensible default to allow future flexibility.


9-36: Parameterize AWS region for the dashboard

The dashboard’s region property is currently hard-coded to "us-east-1". To support multi-region deployments, consider replacing it with var.region or data.aws_region.current.name.
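A possible shape for both changes (the variable name log_retention_in_days and its default of 7 mirror the current hard-coded value; the log group name shown is illustrative and should stay whatever the module currently uses):

data "aws_region" "current" {}

variable "log_retention_in_days" {
  description = "Retention period for application logs, in days"
  type        = number
  default     = 7
}

resource "aws_cloudwatch_log_group" "app_logs" {
  name              = "/ecs/app" # illustrative; keep the module's existing name
  retention_in_days = var.log_retention_in_days
}

The dashboard body can then reference data.aws_region.current.name instead of the literal "us-east-1".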

terraform/modules/redis/main.tf (1)

10-26: Parameterize cluster sizing and backups

The node_type = "cache.t3.micro", num_cache_clusters = 1, and absence of snapshot retention settings are hard-coded. For production workloads, consider exposing var.node_type, var.num_cache_clusters, and var.snapshot_retention_limit/var.maintenance_window variables to make sizing and backup policies configurable.
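A sketch of how those knobs could be exposed (defaults mirror the current hard-coded values; the snapshot default is illustrative):

variable "node_type" {
  description = "ElastiCache node type"
  type        = string
  default     = "cache.t3.micro"
}

variable "num_cache_clusters" {
  description = "Number of cache clusters in the replication group"
  type        = number
  default     = 1
}

variable "snapshot_retention_limit" {
  description = "Number of days to retain automatic snapshots (0 disables them)"
  type        = number
  default     = 0
}

aws_elasticache_replication_group.main would then reference var.node_type, var.num_cache_clusters, and var.snapshot_retention_limit in place of the literals.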

terraform/modules/documentdb/main.tf (1)

23-30: Parameterize instance class and snapshot behavior

The instance_class = "db.t3.medium" and skip_final_snapshot = true are hard-coded. Consider introducing var.instance_class and var.skip_final_snapshot (defaulting to false for safer production tear-down) to increase flexibility and prevent accidental data loss.
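A sketch of the corresponding inputs (defaults chosen to favor safety for production tear-down, as suggested above):

variable "instance_class" {
  description = "DocumentDB instance class"
  type        = string
  default     = "db.t3.medium"
}

variable "skip_final_snapshot" {
  description = "Whether to skip the final snapshot when the cluster is destroyed"
  type        = bool
  default     = false
}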

terraform/modules/ecs/main.tf (1)

93-94: Remove or correct the unnecessary depends_on usage
Using depends_on = [var.target_group_arn] is incorrect: depends_on expects resource addresses, not variable values. Terraform already infers the dependency from the target_group_arn reference inside the load_balancer block. You can safely remove this block or, if explicit ordering is required, reference the actual ALB target group resource.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e5e4b60 and 45d75c7.

📒 Files selected for processing (33)
  • .gitignore (1 hunks)
  • terraform/.terraform.lock.hcl (1 hunks)
  • terraform/backend-dev.hcl (1 hunks)
  • terraform/backend-prod.hcl (1 hunks)
  • terraform/backend-staging.hcl (1 hunks)
  • terraform/init-aws-for-terraform.sh (1 hunks)
  • terraform/modules/alb/main.tf (1 hunks)
  • terraform/modules/alb/outputs.tf (1 hunks)
  • terraform/modules/alb/variables.tf (1 hunks)
  • terraform/modules/cloudfront/main.tf (1 hunks)
  • terraform/modules/cloudfront/outputs.tf (1 hunks)
  • terraform/modules/cloudfront/variables.tf (1 hunks)
  • terraform/modules/cloudwatch/main.tf (1 hunks)
  • terraform/modules/cloudwatch/outputs.tf (1 hunks)
  • terraform/modules/cloudwatch/variables.tf (1 hunks)
  • terraform/modules/documentdb/main.tf (1 hunks)
  • terraform/modules/documentdb/outputs.tf (1 hunks)
  • terraform/modules/documentdb/variables.tf (1 hunks)
  • terraform/modules/ecr/main.tf (1 hunks)
  • terraform/modules/ecr/outputs.tf (1 hunks)
  • terraform/modules/ecr/variables.tf (1 hunks)
  • terraform/modules/ecs/main.tf (1 hunks)
  • terraform/modules/ecs/outputs.tf (1 hunks)
  • terraform/modules/ecs/variables.tf (1 hunks)
  • terraform/modules/iam/main.tf (1 hunks)
  • terraform/modules/iam/outputs.tf (1 hunks)
  • terraform/modules/iam/variables.tf (1 hunks)
  • terraform/modules/parameter_store/main.tf (1 hunks)
  • terraform/modules/parameter_store/outputs.tf (1 hunks)
  • terraform/modules/parameter_store/variables.tf (1 hunks)
  • terraform/modules/redis/main.tf (1 hunks)
  • terraform/modules/redis/outputs.tf (1 hunks)
  • terraform/modules/redis/variables.tf (1 hunks)
✅ Files skipped from review due to trivial changes (13)
  • .gitignore
  • terraform/modules/ecr/outputs.tf
  • terraform/modules/documentdb/outputs.tf
  • terraform/.terraform.lock.hcl
  • terraform/modules/ecs/outputs.tf
  • terraform/modules/iam/outputs.tf
  • terraform/modules/cloudwatch/variables.tf
  • terraform/modules/alb/outputs.tf
  • terraform/modules/iam/variables.tf
  • terraform/modules/ecs/variables.tf
  • terraform/modules/alb/variables.tf
  • terraform/modules/redis/variables.tf
  • terraform/modules/parameter_store/outputs.tf
🚧 Files skipped from review as they are similar to previous changes (2)
  • terraform/backend-prod.hcl
  • terraform/backend-dev.hcl
🧰 Additional context used
🪛 Checkov (3.2.334)
terraform/modules/documentdb/main.tf

[MEDIUM] 10-21: Ensure DocumentDB Logging is enabled

(CKV_AWS_85)

terraform/modules/alb/main.tf

[MEDIUM] 2-12: Ensure that ALB drops HTTP headers

(CKV_AWS_131)

terraform/modules/cloudfront/main.tf

[MEDIUM] 2-46: Ensure CloudFront distribution has a default root object configured

(CKV_AWS_305)


[MEDIUM] 2-46: Ensure CloudFront distributions should have origin failover configured

(CKV_AWS_310)


[MEDIUM] 2-46: Ensure AWS CloudFront attached WAFv2 WebACL is configured with AMR for Log4j Vulnerability

(CKV2_AWS_47)

terraform/modules/parameter_store/main.tf

[HIGH] 2-8: Ensure SSM parameters are using KMS CMK

(CKV_AWS_337)


[HIGH] 20-26: Ensure SSM parameters are using KMS CMK

(CKV_AWS_337)


[HIGH] 29-35: Ensure SSM parameters are using KMS CMK

(CKV_AWS_337)


[HIGH] 38-46: Ensure SSM parameters are using KMS CMK

(CKV_AWS_337)

terraform/modules/iam/main.tf

[HIGH] 116-150: Ensure no IAM policies documents allow "*" as a statement's resource for restrictable actions

(CKV_AWS_355)


[HIGH] 116-150: Ensure IAM policies does not allow write access without constraints

(CKV_AWS_290)

⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: unit-tests
  • GitHub Check: e2e-tests
  • GitHub Check: lint
  • GitHub Check: security-scan
🔇 Additional comments (14)
terraform/modules/cloudfront/outputs.tf (2)

1-4: Outputs expose CloudFront distribution ID
The distribution_id output correctly exposes aws_cloudfront_distribution.main.id for downstream modules to reference.


6-9: Outputs expose CloudFront distribution domain name
The distribution_domain_name output appropriately exposes aws_cloudfront_distribution.main.domain_name.

terraform/modules/cloudfront/main.tf (1)

48-51: Origin Access Identity configuration is correct
The OAI resource is correctly defined to restrict direct S3 bucket access, with a clear comment reflecting the name prefix and environment.

terraform/modules/redis/outputs.tf (2)

1-4: Clear endpoint fallback logic
The conditional correctly selects the configuration endpoint when available, otherwise falls back to the primary endpoint.


6-9: Expose replication group ID
The cluster_id output correctly surfaces the replication_group_id.

terraform/modules/cloudwatch/outputs.tf (2)

1-4: Output CloudWatch log group name
This output correctly exposes the app_logs log group name.


6-9: Output CloudWatch dashboard name
This output correctly exposes the main dashboard identifier.

terraform/modules/ecr/variables.tf (2)

1-4: name_prefix variable is appropriately defined
The string type and description are clear and align with naming conventions.


11-15: tags variable default is sensible
Defaulting to an empty map is appropriate for optional tags.

terraform/modules/ecr/main.tf (1)

1-11: ECR repository resource is correctly configured
The repository name, tag mutability, scanning, and tagging settings are all sound.

terraform/modules/redis/main.tf (1)

1-7: Subnet group resource configuration looks good

The ElastiCache subnet group is correctly defined with dynamic naming and applied tags.

terraform/modules/documentdb/variables.tf (1)

1-41: Variable definitions look solid

The module’s input variables are well-defined with clear descriptions, types, and sensible defaults (including marking master_password as sensitive).

terraform/modules/ecs/main.tf (2)

1-11: Approve ECS Cluster Configuration
The ECS cluster resource enables Container Insights and applies consistent tagging—this aligns with best practices.


35-47: Verify mapping of environment variables and secrets
The dynamic blocks for environment and secrets rely on var.environment_variables and var.secrets. Ensure these variables are properly defined in variables.tf (with correct types and defaults) to avoid runtime JSON-encode errors.

Comment on lines 1 to 5
# Backend configuration for Staging environment
bucket = "sf-website-infrastructure"
key = "terraform/terraform-staging.tfstate"
region = "us-east-1"
encrypt = true

⚠️ Potential issue

Missing DynamoDB locking configuration
To prevent concurrent state operations corrupting your state, include a dynamodb_table setting for Terraform state locking, for example:

dynamodb_table = "terraform-state-lock"
🤖 Prompt for AI Agents
In terraform/backend-staging.hcl at lines 1 to 5, the backend configuration is
missing the dynamodb_table setting required for state locking. Add a line
specifying dynamodb_table = "terraform-state-lock" to enable Terraform state
locking and prevent concurrent state operation conflicts.

Comment on lines 2 to 12
resource "aws_lb" "main" {
name = "${var.name_prefix}-${var.environment}-alb"
internal = false
load_balancer_type = "application"
security_groups = var.security_group_ids
subnets = var.subnet_ids

enable_deletion_protection = false

tags = var.tags
}

⚠️ Potential issue

Enforce dropping of invalid HTTP headers

Checkov (CKV_AWS_131) recommends that the ALB drop invalid or sensitive HTTP headers to prevent information leakage. Add:

 resource "aws_lb" "main" {
   ...
+  drop_invalid_header_fields = true
+  idle_timeout               = 60
   ...
 }

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Checkov (3.2.334)

[MEDIUM] 2-12: Ensure that ALB drops HTTP headers

(CKV_AWS_131)

🤖 Prompt for AI Agents
In terraform/modules/alb/main.tf around lines 2 to 12, the AWS ALB resource
lacks configuration to drop invalid or sensitive HTTP headers as recommended by
Checkov (CKV_AWS_131). To fix this, add the setting that enforces dropping
invalid HTTP headers by including the attribute that enables this behavior in
the aws_lb resource block, ensuring the ALB is configured to prevent information
leakage.

Comment on lines 2 to 8
resource "aws_ssm_parameter" "session_secret" {
name = "/${var.name_prefix}/${var.environment}/session-secret"
type = "SecureString"
value = var.session_secret

tags = var.tags
}

🛠️ Refactor suggestion

⚠️ Potential issue

Specify Customer-Managed KMS Key for SecureString parameters
All aws_ssm_parameter resources of type "SecureString" currently fall back to the AWS-managed KMS key. To meet security best practices and organizational requirements, you should explicitly reference a customer-managed KMS CMK via the key_id attribute. Introduce a kms_key_id variable and apply it to each SecureString parameter.

Example diff:

 resource "aws_ssm_parameter" "session_secret" {
   name  = "/${var.name_prefix}/${var.environment}/session-secret"
   type  = "SecureString"
   value = var.session_secret
+  key_id = var.kms_key_id
   tags  = var.tags
 }
…
 resource "aws_ssm_parameter" "documentdb_password" {
   …
   type  = "SecureString"
   value = var.documentdb_master_password
+  key_id = var.kms_key_id
   tags  = var.tags
 }
…
 resource "aws_ssm_parameter" "redis_auth_token" {
   …
   type  = "SecureString"
   value = var.redis_auth_token
+  key_id = var.kms_key_id
   tags  = var.tags
 }
…
 resource "aws_ssm_parameter" "gcs_service_account_key" {
   …
   type  = "SecureString"
   value = var.gcs_service_account_key
+  key_id = var.kms_key_id
   tags  = var.tags
 }

Also applies to: 20-26, 29-35, 38-46

🧰 Tools
🪛 Checkov (3.2.334)

[HIGH] 2-8: Ensure SSM parameters are using KMS CMK

(CKV_AWS_337)

🤖 Prompt for AI Agents
In terraform/modules/parameter_store/main.tf around lines 2 to 8, the
aws_ssm_parameter resource of type SecureString does not specify a
customer-managed KMS key, defaulting to the AWS-managed key. To fix this, add a
new variable kms_key_id to the module inputs and set the key_id attribute of the
aws_ssm_parameter resource to this kms_key_id variable. Repeat this update for
the other SecureString parameters on lines 20-26, 29-35, and 38-46 to ensure all
use the customer-managed KMS key.

Comment on lines 1 to 45
variable "name_prefix" {
description = "Prefix for resource names"
type = string
}

variable "environment" {
description = "Environment name"
type = string
}

variable "documentdb_master_username" {
description = "Master username for DocumentDB"
type = string
}

variable "documentdb_master_password" {
description = "Master password for DocumentDB"
type = string
sensitive = true
}

variable "session_secret" {
description = "Session secret for the application"
type = string
sensitive = true
}

variable "redis_auth_token" {
description = "Auth token for Redis"
type = string
sensitive = true
}

variable "gcs_service_account_key" {
description = "Google Cloud Storage service account key (JSON)"
type = string
default = ""
sensitive = true
}

variable "tags" {
description = "Tags to apply to resources"
type = map(string)
default = {}
}

🛠️ Refactor suggestion

Add KMS CMK variable for SecureString parameters
To support specifying a customer-managed key in main.tf, introduce a kms_key_id input variable. This will allow passing your own KMS CMK ARN into the module for stronger encryption controls.

Example addition:

variable "kms_key_id" {
  description = "ARN of the customer-managed KMS key for SecureString SSM parameters"
  type        = string
}
🤖 Prompt for AI Agents
In terraform/modules/parameter_store/variables.tf around lines 1 to 45, add a
new input variable named kms_key_id to allow passing a customer-managed KMS key
ARN for SecureString SSM parameters. Define it as a string type with a
description explaining it is the ARN of the customer-managed KMS key. This
enables stronger encryption controls by allowing the module to use a specified
KMS CMK.

Comment on lines 116 to 150
resource "aws_iam_role_policy" "github_actions_policy" {
name = "${var.name_prefix}-${var.environment}-github-actions-policy"
role = aws_iam_role.github_actions.id

policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Action = [
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"ecr:InitiateLayerUpload",
"ecr:UploadLayerPart",
"ecr:CompleteLayerUpload",
"ecr:PutImage"
]
Resource = [
var.ecr_repository_arn,
"*"
]
},
{
Effect = "Allow"
Action = [
"ecs:UpdateService",
"ecs:DescribeServices"
]
Resource = "*"
}
]
})
}

🛠️ Refactor suggestion

⚠️ Potential issue

Restrict wildcard resource usage in GitHub Actions IAM policy
The aws_iam_role_policy.github_actions_policy currently grants ECR and ECS permissions against "*"—this violates least-privilege principles and raises a Checkov alert. Narrow the scope by removing "*" and only allowing actions on the specific ECR repository ARN and the target ECS cluster or service ARN (pass these ARNs in as variables).

Example diff:

      Resource = [
-        var.ecr_repository_arn,
-        "*"
+        var.ecr_repository_arn
      ]
...
      Statement = [
        {
          Effect = "Allow"
          Action = [
            "ecs:UpdateService",
            "ecs:DescribeServices"
          ]
-        Resource = "*"
+        Resource = var.ecs_service_arn
        }
      ]

Define new input variables ecs_service_arn (or ecs_cluster_arn) and use them here to enforce least privilege.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
resource "aws_iam_role_policy" "github_actions_policy" {
name = "${var.name_prefix}-${var.environment}-github-actions-policy"
role = aws_iam_role.github_actions.id
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Action = [
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"ecr:InitiateLayerUpload",
"ecr:UploadLayerPart",
"ecr:CompleteLayerUpload",
"ecr:PutImage"
]
Resource = [
var.ecr_repository_arn,
"*"
]
},
{
Effect = "Allow"
Action = [
"ecs:UpdateService",
"ecs:DescribeServices"
]
Resource = "*"
}
]
})
}
resource "aws_iam_role_policy" "github_actions_policy" {
name = "${var.name_prefix}-${var.environment}-github-actions-policy"
role = aws_iam_role.github_actions.id
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Action = [
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"ecr:InitiateLayerUpload",
"ecr:UploadLayerPart",
"ecr:CompleteLayerUpload",
"ecr:PutImage"
]
Resource = [
var.ecr_repository_arn
]
},
{
Effect = "Allow"
Action = [
"ecs:UpdateService",
"ecs:DescribeServices"
]
Resource = var.ecs_service_arn
}
]
})
}
🧰 Tools
🪛 Checkov (3.2.334)

[HIGH] 116-150: Ensure no IAM policies documents allow "*" as a statement's resource for restrictable actions

(CKV_AWS_355)


[HIGH] 116-150: Ensure IAM policies does not allow write access without constraints

(CKV_AWS_290)

🤖 Prompt for AI Agents
In terraform/modules/iam/main.tf around lines 116 to 150, the IAM policy uses
"*" as a resource for ECR and ECS actions, which violates least-privilege
principles. To fix this, remove the "*" from the Resource lists and instead add
variables for the specific ECS cluster or service ARNs (e.g., ecs_service_arn or
ecs_cluster_arn). Update the policy to reference these variables so permissions
are scoped only to the intended resources.

- Add Terraform plan files to .gitignore

- Hardcode availability zones to avoid IAM permission issues

- Fix S3 lifecycle configuration filters

- Fix ALB and CloudFront output references

- Split monolithic IAM policy into focused policies

- Create core management policy for IAM and DynamoDB

- Create infrastructure policy for VPC, ALB, and databases

- Create S3 management policy for bucket operations

- Create service management policy for ECR, ECS, and CloudFront
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
terraform/modules/s3/main.tf (2)

50-84: ⚠️ Potential issue

Add rule to abort incomplete multipart uploads for the attachments bucket
Your lifecycle configuration for the attachments bucket is missing a rule to clean up failed multipart uploads, which can accumulate orphaned parts and incur storage costs.
Apply the following diff to add an abort_incomplete_multipart_upload rule:

 resource "aws_s3_bucket_lifecycle_configuration" "attachments" {
   bucket = aws_s3_bucket.attachments.id

   rule {
     id     = "expire_non_current_versions"
     status = "Enabled"

     filter {
       prefix = ""
     }

     noncurrent_version_expiration {
       noncurrent_days = 30
     }
+    abort_incomplete_multipart_upload {
+      days_after_initiation = 7
+    }
   }
 }
🧰 Tools
🪛 Checkov (3.2.334)

[MEDIUM] 50-84: Ensure S3 lifecycle configuration sets period for aborting failed uploads

(CKV_AWS_300)


128-148: ⚠️ Potential issue

Add rule to abort incomplete multipart uploads for the logs bucket
Similarly, the logs bucket lifecycle config lacks cleanup for failed multipart uploads. Add this to prevent storage bloat:

 resource "aws_s3_bucket_lifecycle_configuration" "logs" {
   bucket = aws_s3_bucket.logs.id

   rule {
     id     = "log_lifecycle"
     status = "Enabled"

     filter {
       prefix = ""
     }

     transition {
       days          = 90
       storage_class = "GLACIER"
     }

     expiration {
       days = 365
     }
+    abort_incomplete_multipart_upload {
+      days_after_initiation = 7
+    }
   }
 }
🧰 Tools
🪛 Checkov (3.2.334)

[MEDIUM] 128-148: Ensure S3 lifecycle configuration sets period for aborting failed uploads

(CKV_AWS_300)

🧹 Nitpick comments (1)
terraform/terraform-service-management-policy.json (1)

61-66: Optional: Remove legacy OAI permissions if unused
Since you’re provisioning CloudFront Origin Access Control (OAC), the cloudfront:CreateCloudFrontOriginAccessIdentity and cloudfront:DeleteCloudFrontOriginAccessIdentity actions may be unnecessary. Consider removing them to narrow the policy.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 45d75c7 and 0bad9ce.

📒 Files selected for processing (8)
  • .gitignore (1 hunks)
  • terraform/main.tf (1 hunks)
  • terraform/modules/s3/main.tf (1 hunks)
  • terraform/outputs.tf (1 hunks)
  • terraform/terraform-core-management-policy.json (1 hunks)
  • terraform/terraform-infrastructure-management-policy.json (1 hunks)
  • terraform/terraform-s3-management-policy.json (1 hunks)
  • terraform/terraform-service-management-policy.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • .gitignore
  • terraform/main.tf
  • terraform/outputs.tf
🧰 Additional context used
🪛 Checkov (3.2.334)
terraform/modules/s3/main.tf

[MEDIUM] 50-84: Ensure S3 lifecycle configuration sets period for aborting failed uploads

(CKV_AWS_300)


[MEDIUM] 128-148: Ensure S3 lifecycle configuration sets period for aborting failed uploads

(CKV_AWS_300)

⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: e2e-tests
  • GitHub Check: unit-tests
  • GitHub Check: lint
  • GitHub Check: security-scan
🔇 Additional comments (4)
terraform/terraform-s3-management-policy.json (1)

1-67: S3 management policy looks good
The IAM policy correctly scopes bucket-level and object-level permissions for Terraform’s S3 backend and attachment buckets, following least-privilege principles with explicit ARNs.
No changes required.

terraform/terraform-core-management-policy.json (1)

1-79: Core Terraform management policy is comprehensive
This policy appropriately grants permissions for DynamoDB state locking, IAM operations, CloudWatch Logs, and STS. It aligns with Terraform’s backend requirements.

terraform/terraform-service-management-policy.json (1)

1-78: Service management policy is complete
The policy grants the necessary ECR, ECS, SSM Parameter Store, and CloudFront permissions for Terraform automation.

terraform/terraform-infrastructure-management-policy.json (1)

1-130: Infrastructure management policy is well-scoped
This IAM policy covers EC2 VPC, ELB, DocumentDB/RDS, and ElastiCache resources needed by Terraform. Actions are correctly grouped and scope is appropriate for infrastructure provisioning.

- Simplify Redis cluster endpoint output to use primary endpoint address only

- Move DocumentDB, ALB, CloudFront, and ACM permissions to core management policy

- Remove duplicate ELB and CloudFront permissions from infrastructure and service policies

- Consolidate AWS service permissions for better organization
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (2)
terraform/terraform-infrastructure-management-policy.json (1)

4-52: Use least privilege by scoping resources

Currently, this Statement grants all EC2 VPC and subnet operations on all resources ("*"). For better security, consider scoping the Resource field to the specific VPC, subnets, and related resources managed by Terraform, or add IAM conditions (e.g., based on resource tags) to enforce least privilege.
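As an illustration only (written in the HCL/jsonencode style the iam module uses, not as a drop-in edit to this JSON file; the Project tag key and value are assumptions), a tag-scoped statement could look like:

locals {
  tag_scoped_subnet_statement = {
    Effect   = "Allow"
    Action   = ["ec2:DeleteSubnet", "ec2:ModifySubnetAttribute"]
    Resource = "*"
    Condition = {
      StringEquals = {
        "aws:ResourceTag/Project" = "sf-website" # assumed tag value
      }
    }
  }
}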

terraform/terraform-core-management-policy.json (1)

19-53: Restrict iam:PassRole to specific Terraform roles

Granting iam:PassRole on * can allow privilege escalation. For least privilege, specify the ARN(s) of the IAM role(s) that Terraform will pass to AWS services:

-      "Action": ["iam:PassRole", ...],
-      "Resource": "*"
+      "Action": ["iam:PassRole"],
+      "Resource": "arn:aws:iam::ACCOUNT_ID:role/terraform-*"

Adjust the ARN pattern to match your Terraform-managed roles.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0bad9ce and 5f51b9c.

📒 Files selected for processing (4)
  • terraform/modules/redis/outputs.tf (1 hunks)
  • terraform/terraform-core-management-policy.json (1 hunks)
  • terraform/terraform-infrastructure-management-policy.json (1 hunks)
  • terraform/terraform-service-management-policy.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • terraform/modules/redis/outputs.tf
  • terraform/terraform-service-management-policy.json
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: unit-tests
  • GitHub Check: e2e-tests
  • GitHub Check: lint
  • GitHub Check: security-scan
🔇 Additional comments (6)
terraform/terraform-infrastructure-management-policy.json (1)

53-75: Consider ElastiCache snapshot permissions

The current ElastiCache permissions cover subnet groups, clusters, and replication groups, but if your Terraform Redis module manages snapshots (e.g., automated backups or manual snapshots), you may need to add:

  • elasticache:CreateSnapshot
  • elasticache:DescribeSnapshots
  • elasticache:DeleteSnapshot

Please verify if snapshot operations are required and update accordingly.

terraform/terraform-core-management-policy.json (5)

5-17: DynamoDB lock table permissions scoped correctly

The Statement precisely restricts DynamoDB actions to the sf-website-terraform-locks table. This aligns with Terraform state locking best practices.


55-69: CloudWatch Logs permissions look good

The Statement covers creation, deletion, retention policy management, and tagging for log groups. These actions align with Terraform-managed CloudWatch Logs operations.


71-76: STS identity retrieval permission approved

Allowing sts:GetCallerIdentity is necessary for Terraform to detect the current AWS account and IAM identity.


109-130: ALB permissions are comprehensive

The permissions for Elastic Load Balancing cover load balancer and target group lifecycle, listeners, and tagging. This aligns with Terraform ECS and ALB module requirements.


152-165: ACM certificate permissions look correct

The Statement provides necessary ACM actions for certificate management, listing, and tagging.

Comment on lines 132 to 150
"Sid": "TerraformCloudFrontPermissions",
"Effect": "Allow",
"Action": [
"cloudfront:CreateDistribution",
"cloudfront:DeleteDistribution",
"cloudfront:GetDistribution",
"cloudfront:UpdateDistribution",
"cloudfront:ListDistributions",
"cloudfront:CreateOriginAccessControl",
"cloudfront:DeleteOriginAccessControl",
"cloudfront:GetOriginAccessControl",
"cloudfront:UpdateOriginAccessControl",
"cloudfront:ListOriginAccessControls",
"cloudfront:TagResource",
"cloudfront:UntagResource",
"cloudfront:ListTagsForResource"
],
"Resource": "*"
},

⚠️ Potential issue

Include GetDistributionConfig for CloudFront

Terraform's aws_cloudfront_distribution resource requires both cloudfront:GetDistribution and cloudfront:GetDistributionConfig to read existing distribution configurations before updates. Please add:

"Action": [
+  "cloudfront:GetDistributionConfig",
  ...
]
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"Sid": "TerraformCloudFrontPermissions",
"Effect": "Allow",
"Action": [
"cloudfront:CreateDistribution",
"cloudfront:DeleteDistribution",
"cloudfront:GetDistribution",
"cloudfront:UpdateDistribution",
"cloudfront:ListDistributions",
"cloudfront:CreateOriginAccessControl",
"cloudfront:DeleteOriginAccessControl",
"cloudfront:GetOriginAccessControl",
"cloudfront:UpdateOriginAccessControl",
"cloudfront:ListOriginAccessControls",
"cloudfront:TagResource",
"cloudfront:UntagResource",
"cloudfront:ListTagsForResource"
],
"Resource": "*"
},
"Sid": "TerraformCloudFrontPermissions",
"Effect": "Allow",
"Action": [
"cloudfront:GetDistributionConfig",
"cloudfront:CreateDistribution",
"cloudfront:DeleteDistribution",
"cloudfront:GetDistribution",
"cloudfront:UpdateDistribution",
"cloudfront:ListDistributions",
"cloudfront:CreateOriginAccessControl",
"cloudfront:DeleteOriginAccessControl",
"cloudfront:GetOriginAccessControl",
"cloudfront:UpdateOriginAccessControl",
"cloudfront:ListOriginAccessControls",
"cloudfront:TagResource",
"cloudfront:UntagResource",
"cloudfront:ListTagsForResource"
],
"Resource": "*"
🤖 Prompt for AI Agents
In terraform/terraform-core-management-policy.json around lines 132 to 150, the
CloudFront permissions are missing the "cloudfront:GetDistributionConfig"
action, which is required by Terraform's aws_cloudfront_distribution resource to
read distribution configurations. Add "cloudfront:GetDistributionConfig" to the
"Action" array alongside the existing CloudFront permissions to ensure proper
access.

Comment on lines 79 to 107
"Sid": "TerraformDocumentDBPermissions",
"Effect": "Allow",
"Action": [
"rds:DescribeGlobalClusters",
"rds:CreateGlobalCluster",
"rds:DeleteGlobalCluster",
"rds:ModifyGlobalCluster",
"rds:CreateDBCluster",
"rds:DeleteDBCluster",
"rds:DescribeDBClusters",
"rds:ModifyDBCluster",
"rds:CreateDBSubnetGroup",
"rds:DeleteDBSubnetGroup",
"rds:DescribeDBSubnetGroups",
"rds:ModifyDBSubnetGroup",
"rds:CreateDBClusterParameterGroup",
"rds:DeleteDBClusterParameterGroup",
"rds:DescribeDBClusterParameterGroups",
"rds:ModifyDBClusterParameterGroup",
"rds:CreateDBInstance",
"rds:DeleteDBInstance",
"rds:DescribeDBInstances",
"rds:ModifyDBInstance",
"rds:AddTagsToResource",
"rds:RemoveTagsFromResource",
"rds:ListTagsForResource"
],
"Resource": "*"
},

⚠️ Potential issue

Missing DocumentDB-specific (docdb:) permissions

This Statement uses rds: actions, but Amazon DocumentDB requires docdb:-prefixed actions. Terraform's aws_docdb_cluster resources rely on DocumentDB APIs (e.g., docdb:CreateDBCluster, docdb:DeleteDBInstance, and related). Without these, Terraform may fail to manage DocumentDB clusters. Please replace or supplement the rds: actions with the corresponding docdb: actions:

{
+  "Sid": "TerraformDocumentDBPermissionsDocDB",
+  "Effect": "Allow",
+  "Action": [
+    "docdb:CreateDBCluster",
+    "docdb:DeleteDBCluster",
+    "docdb:DescribeDBClusters",
+    "docdb:ModifyDBCluster",
+    "docdb:CreateDBInstance",
+    "docdb:DeleteDBInstance",
+    "docdb:DescribeDBInstances",
+    "docdb:ModifyDBInstance",
+    "docdb:CreateDBSubnetGroup",
+    "docdb:DeleteDBSubnetGroup",
+    "docdb:DescribeDBSubnetGroups",
+    "docdb:ModifyDBSubnetGroup",
+    "docdb:AddTagsToResource",
+    "docdb:RemoveTagsFromResource",
+    "docdb:ListTagsForResource"
+  ],
+  "Resource": "*"
}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"Sid": "TerraformDocumentDBPermissions",
"Effect": "Allow",
"Action": [
"rds:DescribeGlobalClusters",
"rds:CreateGlobalCluster",
"rds:DeleteGlobalCluster",
"rds:ModifyGlobalCluster",
"rds:CreateDBCluster",
"rds:DeleteDBCluster",
"rds:DescribeDBClusters",
"rds:ModifyDBCluster",
"rds:CreateDBSubnetGroup",
"rds:DeleteDBSubnetGroup",
"rds:DescribeDBSubnetGroups",
"rds:ModifyDBSubnetGroup",
"rds:CreateDBClusterParameterGroup",
"rds:DeleteDBClusterParameterGroup",
"rds:DescribeDBClusterParameterGroups",
"rds:ModifyDBClusterParameterGroup",
"rds:CreateDBInstance",
"rds:DeleteDBInstance",
"rds:DescribeDBInstances",
"rds:ModifyDBInstance",
"rds:AddTagsToResource",
"rds:RemoveTagsFromResource",
"rds:ListTagsForResource"
],
"Resource": "*"
},
// … previous statements …
{
"Sid": "TerraformDocumentDBPermissions",
"Effect": "Allow",
"Action": [
"rds:DescribeGlobalClusters",
"rds:CreateGlobalCluster",
"rds:DeleteGlobalCluster",
"rds:ModifyGlobalCluster",
"rds:CreateDBCluster",
"rds:DeleteDBCluster",
"rds:DescribeDBClusters",
"rds:ModifyDBCluster",
"rds:CreateDBSubnetGroup",
"rds:DeleteDBSubnetGroup",
"rds:DescribeDBSubnetGroups",
"rds:ModifyDBSubnetGroup",
"rds:CreateDBClusterParameterGroup",
"rds:DeleteDBClusterParameterGroup",
"rds:DescribeDBClusterParameterGroups",
"rds:ModifyDBClusterParameterGroup",
"rds:CreateDBInstance",
"rds:DeleteDBInstance",
"rds:DescribeDBInstances",
"rds:ModifyDBInstance",
"rds:AddTagsToResource",
"rds:RemoveTagsFromResource",
"rds:ListTagsForResource"
],
"Resource": "*"
},
{
"Sid": "TerraformDocumentDBPermissionsDocDB",
"Effect": "Allow",
"Action": [
"docdb:CreateDBCluster",
"docdb:DeleteDBCluster",
"docdb:DescribeDBClusters",
"docdb:ModifyDBCluster",
"docdb:CreateDBInstance",
"docdb:DeleteDBInstance",
"docdb:DescribeDBInstances",
"docdb:ModifyDBInstance",
"docdb:CreateDBSubnetGroup",
"docdb:DeleteDBSubnetGroup",
"docdb:DescribeDBSubnetGroups",
"docdb:ModifyDBSubnetGroup",
"docdb:AddTagsToResource",
"docdb:RemoveTagsFromResource",
"docdb:ListTagsForResource"
],
"Resource": "*"
}
// … following statements …
🤖 Prompt for AI Agents
In terraform/terraform-core-management-policy.json between lines 79 and 107, the
IAM policy statement currently uses only `rds:` actions, but for managing Amazon
DocumentDB clusters with Terraform, you need to include the corresponding
`docdb:` actions. Update the "Action" array to replace or supplement the
existing `rds:` actions with the equivalent `docdb:` actions such as
`docdb:CreateDBCluster`, `docdb:DeleteDBInstance`, and others required for
DocumentDB management to ensure Terraform can properly manage DocumentDB
resources.

killev added 2 commits May 24, 2025 23:02
- Add elasticloadbalancing DescribeLoadBalancerAttributes permission

- Add elasticloadbalancing DescribeTargetGroupAttributes permission

- Add CloudFront Origin Access Identity management permissions
- Add ECS service-linked role for proper service management
- Make container configuration configurable (CPU, memory, port)
- Add auto-scaling configuration with CPU and memory targets
- Make log retention and scaling parameters configurable
- Filter out empty values from secrets configuration
- Add module outputs for auto-scaling resources
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (10)
terraform/modules/ecs/main.tf (1)

25-25: Make container name configurable
Avoid hard-coding the container name "app". Introduce a variable (e.g., var.container_name) to improve flexibility across different services.
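A minimal sketch of that change (the variable name container_name is an assumption; "app" stays as the default so existing deployments keep their current name):

variable "container_name" {
  description = "Name of the container in the task definition"
  type        = string
  default     = "app"
}

The container definition and the service's load_balancer block would then reference var.container_name instead of the literal "app".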

terraform/modules/ecs/variables.tf (2)

1-9: Add validation for naming consistency
Consider enforcing constraints on name_prefix and environment (e.g., allowed character set, length limits, and a predefined list of environment values like "dev", "staging", "prod") to prevent invalid or inconsistent resource names across deployments.
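For example, a length/character constraint on name_prefix could look like this (the exact pattern is illustrative; the environment check can reuse the contains([...]) pattern suggested for the ECR module earlier in this PR):

variable "name_prefix" {
  description = "Prefix for resource names"
  type        = string

  validation {
    condition     = can(regex("^[a-z][a-z0-9-]{1,30}$", var.name_prefix))
    error_message = "name_prefix must start with a letter and contain only lowercase letters, digits, or hyphens (max 31 characters)."
  }
}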


26-29: Optional: Validate ECR URL format
You may add a validation block with a regex to ensure ecr_repository_url follows the ECR URL pattern (e.g., <account>.dkr.ecr.<region>.amazonaws.com/<repo>). Early validation can catch misconfigurations before apply.
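A sketch of such a check (the regex is an approximation of the standard ECR URL format):

variable "ecr_repository_url" {
  description = "URL of the ECR repository"
  type        = string

  validation {
    condition     = can(regex("^[0-9]{12}\\.dkr\\.ecr\\.[a-z0-9-]+\\.amazonaws\\.com/.+$", var.ecr_repository_url))
    error_message = "ecr_repository_url must look like <account-id>.dkr.ecr.<region>.amazonaws.com/<repository>."
  }
}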

terraform/variables.tf (7)

4-13: Add validation for region and environment
Consider adding validation blocks for aws_region to restrict to supported AWS regions and for environment to enforce a known set of environment names (e.g., ["dev","staging","prod","qa","demo","sandbox"]). This reduces typos and misconfigurations.
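For aws_region, a format check avoids hard-coding a full region list (the regex is illustrative and intentionally loose); environment can use the same contains([...]) approach mentioned above:

variable "aws_region" {
  description = "AWS region to deploy into"
  type        = string

  validation {
    condition     = can(regex("^[a-z]{2}(-[a-z]+)+-[0-9]$", var.aws_region))
    error_message = "aws_region must be a valid region identifier such as us-east-1."
  }
}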


15-23: Enhance tagging strategy
The default_tags map is clear but could include an Environment = var.environment entry. This automatically labels resources with their environment context.
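Since a variable default cannot reference another variable, the usual place for this is where the tags are assembled, e.g. the common_tags local already used in this PR (a sketch):

locals {
  common_tags = merge(var.default_tags, {
    Environment = var.environment
  })
}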


37-51: Approve domain-related variables
The domain_name, media_domain_name, and certificate_arn variables are well-documented. Optionally, you could add a regex-based validation for certificate_arn to match ACM ARN patterns.


83-88: Clarify optional GCS key usage
Since gcs_service_account_key is optional by defaulting to an empty string, ensure downstream modules skip any GCS-related resources when it's unset, or document that behavior.
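One common pattern is a count guard on the GCS-related resource, sketched here against the SSM parameter this PR already defines (the parameter path shown is illustrative; keep the module's existing name):

resource "aws_ssm_parameter" "gcs_service_account_key" {
  count = var.gcs_service_account_key == "" ? 0 : 1

  name  = "/${var.name_prefix}/${var.environment}/gcs-service-account-key" # illustrative path
  type  = "SecureString"
  value = var.gcs_service_account_key

  tags = var.tags
}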


110-115: Review default image tag usage
Using "latest" as the default for container_image_tag can reduce operational overhead but may hinder immutability in production. Recommend pinning explicit tags via CI/CD pipelines.


185-189: Optional: Restrict DocumentDB instance types
You might add a validation block to restrict documentdb_instance_class to a predefined list of supported classes (e.g., ["db.t3.medium","db.r5.large"]).


191-195: Optional: Restrict Redis node types
Similarly, consider validating redis_node_type against a known set of ElastiCache node types to catch configuration errors early.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a3f4473 and 0ffa8ab.

📒 Files selected for processing (7)
  • .gitignore (1 hunks)
  • terraform/main.tf (1 hunks)
  • terraform/modules/ecs/main.tf (1 hunks)
  • terraform/modules/ecs/outputs.tf (1 hunks)
  • terraform/modules/ecs/variables.tf (1 hunks)
  • terraform/terraform.tfvars.example (1 hunks)
  • terraform/variables.tf (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • terraform/terraform.tfvars.example
🚧 Files skipped from review as they are similar to previous changes (3)
  • .gitignore
  • terraform/modules/ecs/outputs.tf
  • terraform/main.tf
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: e2e-tests
  • GitHub Check: lint
  • GitHub Check: unit-tests
  • GitHub Check: security-scan
🔇 Additional comments (20)
terraform/modules/ecs/main.tf (1)

2-11: ECS cluster resource: correct configuration
Cluster is defined with container insights enabled and tags applied appropriately.

terraform/modules/ecs/variables.tf (7)

11-24: Approve network ID variables
The vpc_id, subnet_ids, and security_group_ids inputs are correctly typed (string and list(string)) with clear descriptions. This aligns with how the root module will supply networking resources.


31-39: Approve IAM role ARN variables
The task_role_arn and execution_role_arn variables are well-defined as string with precise descriptions. They will integrate cleanly with the IAM module outputs.


41-44: Approve ALB target group input
target_group_arn is correctly declared and will allow the ECS service to register tasks with the Application Load Balancer target group.


46-59: Approve container resource variables
Defining container_cpu, container_memory, and container_port without defaults in this module is appropriate since the root module provides defaults. Types and descriptions align with ECS Fargate requirements.


61-74: Approve ECS task scaling configuration
The log_retention_days, ecs_desired_count, and ecs_max_capacity variables are declared correctly. If you want additional safety, you could duplicate the root-level validation for log retention here, but it's not strictly necessary.


76-92: Approve optional maps for container settings
Using environment_variables, secrets, and tags as default-empty map(string) is a solid pattern for optional settings injection without forcing every caller to provide them.


94-97: Ensure service_linked_role_arn is supplied
service_linked_role_arn is required and has no default. Verify that the root module or caller passes this value; otherwise Terraform will error. If you intend to let AWS auto-create the service-linked role, consider defaulting to an empty string or documenting that behavior.
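If letting AWS auto-create the role is acceptable, the optional-input shape could be (a sketch; callers and the service resource would then need to tolerate an empty string):

variable "service_linked_role_arn" {
  description = "ARN of the ECS service-linked role; leave empty to let AWS manage it"
  type        = string
  default     = ""
}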

terraform/variables.tf (12)

25-35: Approve VPC CIDR validation
Using can(cidrhost(var.vpc_cidr, 0)) to ensure a valid IPv4 CIDR block is a solid approach.


54-69: Approve DocumentDB credentials
Marking documentdb_master_username and documentdb_master_password as sensitive with a minimum length check for the password is aligned with best security practices.


71-81: Approve session secret validation
session_secret is sensitive and has a strict length requirement (≥32), which is appropriate for cryptographic keys.


90-94: Approve GitHub Actions repo variable
github_repository is required for OIDC integration and is clearly described.


96-101: Approve Slack webhook input
slack_webhook_url marked as sensitive is appropriate for alerting configuration.


103-108: Approve optional Route 53 zone
Defaulting route53_zone_id to an empty string cleanly makes DNS record creation optional.


117-125: Approve container CPU validation
Enforcing allowed CPU units through contains([...]) helps prevent unsupported configurations.


128-137: Approve container memory validation
The memory range check (512–30720 MB) matches ECS requirements and guards against invalid inputs.


139-147: Approve container port validation
Validating that container_port falls within the TCP port range (1–65535) is a good safeguard.


150-159: Approve log retention options
Limiting log_retention_days to supported CloudWatch retention periods prevents silent misconfigurations.


161-170: Approve ECS desired count constraints
Constraining ecs_desired_count between 1 and 10 strikes a balance between flexibility and safety.


173-182: Approve ECS max capacity validation
Ensuring ecs_max_capacity >= ecs_desired_count avoids misconfigured autoscaling setups.

container_definitions = jsonencode([
{
name = "app"
image = "${var.ecr_repository_url}:latest"

🛠️ Refactor suggestion

Avoid using the “latest” image tag
Referencing :latest can lead to non-deterministic deployments. Use a versioned tag or digest (e.g., "${var.ecr_repository_url}:${var.image_tag}" or specifying imageDigest) for reproducible builds.

I can help refactor this to use a variable-based tag or digest if needed.
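A sketch of the variable-based form, assuming the root-level container_image_tag variable from this PR is passed through to the module (the module-side variable name is an assumption):

variable "container_image_tag" {
  description = "Tag of the container image to deploy"
  type        = string
}

# In the container definition:
#   image = "${var.ecr_repository_url}:${var.container_image_tag}"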

🤖 Prompt for AI Agents
In terraform/modules/ecs/main.tf at line 26, the image tag is hardcoded to
"latest", which can cause non-deterministic deployments. Replace the ":latest"
tag with a variable-based tag such as "${var.image_tag}" or use an image digest
variable to ensure reproducible builds. Update the code to reference this
variable instead of the fixed "latest" tag.

Comment on lines 93 to 94
depends_on = [var.target_group_arn, var.service_linked_role_arn]


⚠️ Potential issue

Invalid depends_on references
The depends_on block references variables instead of resource addresses. Terraform already infers dependencies on the ALB target group and service-linked role via their references. Remove this block or replace it with actual resource references.

 resource "aws_ecs_service" "main" {
-  depends_on = [var.target_group_arn, var.service_linked_role_arn]
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
depends_on = [var.target_group_arn, var.service_linked_role_arn]
resource "aws_ecs_service" "main" {
# …other configuration…
# explicit depends_on removed; Terraform will infer dependencies automatically
}
🤖 Prompt for AI Agents
In terraform/modules/ecs/main.tf at lines 93 to 94, the depends_on block
incorrectly references variables instead of resource addresses, which is
invalid. Remove the depends_on block entirely or replace the variable references
with the actual resource addresses that need to be depended on, as Terraform
automatically infers dependencies from resource references.

- Parameterize container port across ALB, ECS, and security group modules

- Make health check path configurable in ALB module

- Make log retention days configurable in CloudWatch module

- Add target_type = ip to ALB target group for ECS Fargate compatibility

- Add IAM service linked role permissions to core management policy
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
terraform/modules/alb/main.tf (1)

2-12: 🛠️ Refactor suggestion

Enforce dropping invalid headers and set idle timeout
Checkov recommends configuring the ALB to drop invalid or sensitive HTTP headers (CKV_AWS_131) and it’s a best practice to set an idle timeout. Please add the following attributes inside the aws_lb resource:

 resource "aws_lb" "main" {
   ...
+  drop_invalid_header_fields = true
+  idle_timeout               = 60
   ...
 }
🧰 Tools
🪛 Checkov (3.2.334)

[MEDIUM] 2-12: Ensure that ALB drops HTTP headers

(CKV_AWS_131)

🧹 Nitpick comments (2)
terraform/modules/alb/main.tf (1)

38-51: Consider updating TLS policy and enabling HTTP/2
The listener’s ssl_policy is set to ELBSecurityPolicy-TLS-1-2-2017-01. You may want to use a more recent policy (e.g., ELBSecurityPolicy-TLS-1-2-Ext-2018-06). Note that HTTP/2 is controlled by enable_http2 on the aws_lb resource (it already defaults to true for application load balancers), not on the listener:

 resource "aws_lb_listener" "https" {
   ...
-  ssl_policy        = "ELBSecurityPolicy-TLS-1-2-2017-01"
+  ssl_policy        = "ELBSecurityPolicy-TLS-1-2-Ext-2018-06"
   ...
 }

 resource "aws_lb" "main" {
   ...
+  enable_http2 = true
   ...
 }
terraform/main.tf (1)

54-63: Switch from hardcoded AZs to data source when possible
Hardcoded availability zones can become stale or invalid if you change regions. If you have ec2:DescribeAvailabilityZones permissions, consider using the data source instead:

 locals {
-  azs = ["us-east-1a", "us-east-1b"]
+  azs = slice(data.aws_availability_zones.available.names, 0, 2)
 }
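The slice() call above assumes a data source along these lines exists in the root module (a sketch):

data "aws_availability_zones" "available" {
  state = "available"
}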
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0ffa8ab and a6bd59a.

📒 Files selected for processing (9)
  • terraform/main.tf (1 hunks)
  • terraform/modules/alb/main.tf (1 hunks)
  • terraform/modules/alb/variables.tf (1 hunks)
  • terraform/modules/cloudwatch/main.tf (1 hunks)
  • terraform/modules/cloudwatch/variables.tf (1 hunks)
  • terraform/modules/security_groups/main.tf (1 hunks)
  • terraform/modules/security_groups/variables.tf (1 hunks)
  • terraform/terraform-core-management-policy.json (1 hunks)
  • terraform/variables.tf (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (7)
  • terraform/modules/cloudwatch/main.tf
  • terraform/modules/security_groups/variables.tf
  • terraform/modules/security_groups/main.tf
  • terraform/modules/cloudwatch/variables.tf
  • terraform/modules/alb/variables.tf
  • terraform/variables.tf
  • terraform/terraform-core-management-policy.json
🧰 Additional context used
🪛 Checkov (3.2.334)
terraform/modules/alb/main.tf

[MEDIUM] 2-12: Ensure that ALB drops HTTP headers

(CKV_AWS_131)

⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: e2e-tests
  • GitHub Check: lint
  • GitHub Check: unit-tests
  • GitHub Check: security-scan
🔇 Additional comments (3)
terraform/modules/alb/main.tf (2)

15-35: Target group configuration looks good
You’ve correctly set target_type = "ip" for ECS Fargate integration. Health check settings and tagging are all in order.


55-70: HTTP→HTTPS redirect listener is correctly configured
Your redirect listener will send all port 80 traffic to HTTPS with a 301. This aligns with best practices.

terraform/main.tf (1)

102-110:


❓ Verification inconclusive

Verify S3 module interface
The s3 module may expect inputs like domain_name, media_domain_name, cloudfront_distribution_arn, or ecs_task_role_arn, which aren’t passed here. Please confirm the module’s variables.tf and update the call accordingly.

You can run:


🏁 Script executed:

#!/bin/bash
# List variables defined by the s3 module
rg -n '^variable ' terraform/modules/s3/variables.tf

# Search for domain_name usage in s3 module
rg -n 'domain_name' terraform/modules/s3

Length of output: 754


🏁 Script executed:

#!/bin/bash
# Show definitions for relevant variables to check for defaults
rg -n -A5 '^variable "domain_name"' terraform/modules/s3/variables.tf
rg -n -A5 '^variable "media_domain_name"' terraform/modules/s3/variables.tf
rg -n -A5 '^variable "cloudfront_distribution_arn"' terraform/modules/s3/variables.tf
rg -n -A5 '^variable "ecs_task_role_arn"' terraform/modules/s3/variables.tf

Length of output: 912


Verify S3 module CORS and policy inputs

The s3 module defines the following variables (each defaults to an empty string):

  • var.domain_name
  • var.media_domain_name
  • var.cloudfront_distribution_arn
  • var.ecs_task_role_arn

Without overrides, CORS origins will be built as "https://" and policy statements relying on an ARN may be skipped. Please confirm whether you need to supply these values in your module call (or adjust the module to handle blank defaults):

• File: terraform/main.tf, module "s3" block (lines 102–110)

Example update (if required):

module "s3" {
  source                        = "./modules/s3"
  name_prefix                   = local.name_prefix
  environment                   = local.environment
  tags                          = local.common_tags

  domain_name                   = var.domain_name
  media_domain_name             = var.media_domain_name
  cloudfront_distribution_arn   = var.cloudfront_distribution_arn
  ecs_task_role_arn             = var.ecs_task_role_arn
}

Comment on lines 193 to 211
module "alb" {
source = "./modules/alb"

name_prefix = local.name_prefix
environment = local.environment

vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.public_subnet_ids
security_group_ids = [module.security_groups.alb_sg_id]

domain_name = var.domain_name
certificate_arn = var.certificate_arn

# Container configuration
container_port = var.container_port
health_check_path = var.health_check_path

tags = local.common_tags
}

💡 Verification agent

🧩 Analysis chain

Review unused domain_name input for ALB module
You’re passing domain_name = var.domain_name into the alb module, but the module’s main.tf doesn’t reference it. Either remove the unused variable or add logic (e.g., a Route 53 record) that leverages it.

You can verify with:


🏁 Script executed:

#!/bin/bash
# Check if the alb module defines or references domain_name
rg -R 'variable "domain_name"' -n terraform/modules/alb
rg -R 'domain_name' -n terraform/modules/alb

Length of output: 912


🏁 Script executed:

#!/bin/bash
set -e

echo "Listing files in terraform/modules/alb:"
find terraform/modules/alb -maxdepth 1 -type f

echo -e "\nSearching for a variable declaration of domain_name:"
rg -n 'variable "domain_name"' terraform/modules/alb

echo -e "\nSearching for any references to var.domain_name in the module:"
rg -n 'var\.domain_name' terraform/modules/alb

echo -e "\nSearching for any other occurrences of domain_name:"
rg -n 'domain_name' terraform/modules/alb

Length of output: 673


Unused domain_name variable in ALB module
The alb module declares variable "domain_name" in terraform/modules/alb/variables.tf (line 26) but never references it in main.tf or outputs.tf. Either remove this input from the module and drop the domain_name = var.domain_name argument in your root module, or add logic to consume it—e.g. create a Route 53 record:

• terraform/modules/alb/variables.tf:26 – remove or deprecate the variable
• terraform/modules/alb/main.tf – add something like:

resource "aws_route53_record" "alb" {
  zone_id = data.aws_route53_zone.this.zone_id
  name    = var.domain_name
  type    = "A"
  alias {
    name                   = aws_lb.this.dns_name
    zone_id                = aws_lb.this.zone_id
    evaluate_target_health = true
  }
}
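
Note that the snippet references data.aws_route53_zone.this, which the module would also need to declare; a minimal sketch, assuming a hosted-zone name input (the zone_name variable is hypothetical):

# Hypothetical lookup of the hosted zone used by the alias record above.
data "aws_route53_zone" "this" {
  name         = var.zone_name   # assumed input, e.g. "example.com."
  private_zone = false
}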
🤖 Prompt for AI Agents
In terraform/main.tf lines 193 to 211, the domain_name variable is passed to the
alb module but not used inside the module. To fix this, either remove the
domain_name input from the root module call and delete the corresponding
variable declaration in terraform/modules/alb/variables.tf around line 26, or
add logic inside terraform/modules/alb/main.tf to consume domain_name, such as
creating a Route 53 record resource that uses var.domain_name to point to the
ALB DNS name.

killev added 2 commits May 25, 2025 11:50
…nced ECS module with ElastiCache integration support
- Updated infrastructure management policy with ElastiCache permissions
- Added comprehensive Redis cluster configuration
- Improved security group and networking setup
- Added analysis documentation for infrastructure changes
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
terraform/modules/alb/main.tf (1)

2-12: 🛠️ Refactor suggestion

Enforce dropping of invalid HTTP headers and set idle timeout
As per Checkov CKV_AWS_131, consider adding drop_invalid_header_fields = true to prevent leaking sensitive headers and idle_timeout = 60 to align with AWS best practices for ALBs.
Proposed diff:

 resource "aws_lb" "main" {
   name               = "${var.name_prefix}-${var.environment}-alb"
   internal           = false
   load_balancer_type = "application"
   security_groups    = var.security_group_ids
   subnets            = var.subnet_ids

   enable_deletion_protection = false
+  drop_invalid_header_fields = true
+  idle_timeout               = 60

   tags = var.tags
 }
🧰 Tools
🪛 Checkov (3.2.334)

[MEDIUM] 2-12: Ensure that ALB drops HTTP headers

(CKV_AWS_131)

🧹 Nitpick comments (2)
terraform/modules/alb/main.tf (2)

37-58: Reduce downtime by enabling create_before_destroy for HTTPS listener
With create_before_destroy = false, updates to the listener can cause brief traffic disruption. Switching to create_before_destroy = true ensures the new listener is in place before the old one is removed:

 resource "aws_lb_listener" "https" {
   ...
   lifecycle {
-    create_before_destroy = false
+    create_before_destroy = true
   }
   ...
 }

60-84: Reevaluate lifecycle blocks for HTTP listener
This listener only performs an HTTP→HTTPS redirect. The custom lifecycle block may be unnecessary—Terraform’s defaults should handle ordering based on the depends_on. Removing or simplifying it can reduce configuration noise.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a6bd59a and d824a1d.

📒 Files selected for processing (7)
  • terraform/main.tf (1 hunks)
  • terraform/modules/alb/main.tf (1 hunks)
  • terraform/modules/ecs/main.tf (1 hunks)
  • terraform/modules/ecs/variables.tf (1 hunks)
  • terraform/terraform-core-management-policy.json (1 hunks)
  • terraform/terraform-infrastructure-management-policy.json (1 hunks)
  • terraform/terraform-service-management-policy.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
  • terraform/terraform-service-management-policy.json
  • terraform/terraform-infrastructure-management-policy.json
  • terraform/modules/ecs/variables.tf
  • terraform/modules/ecs/main.tf
  • terraform/terraform-core-management-policy.json
  • terraform/main.tf
🧰 Additional context used
🪛 Checkov (3.2.334)
terraform/modules/alb/main.tf

[MEDIUM] 2-12: Ensure that ALB drops HTTP headers

(CKV_AWS_131)


[MEDIUM] 15-35: Ensure AWS Load Balancer doesn't use HTTP protocol

(CKV_AWS_378)

⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: e2e-tests
  • GitHub Check: unit-tests
  • GitHub Check: security-scan
🔇 Additional comments (1)
terraform/modules/alb/main.tf (1)

15-35: Valid Fargate integration with target_type = "ip"
The inclusion of target_type = "ip" guarantees ECS Fargate tasks (awsvpc mode) register correctly with this target group. The health check configuration also aligns with common requirements.
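
For readers unfamiliar with the requirement, a Fargate-compatible target group generally follows this shape (a sketch only; names and health-check values are assumptions, not the module's actual configuration):

resource "aws_lb_target_group" "app" {
  name        = "${var.name_prefix}-${var.environment}-tg"
  port        = var.container_port
  protocol    = "HTTP"
  vpc_id      = var.vpc_id
  target_type = "ip"   # required for ECS tasks using awsvpc networking

  health_check {
    path                = var.health_check_path
    matcher             = "200-399"
    interval            = 30
    healthy_threshold   = 2
    unhealthy_threshold = 3
  }
}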

🧰 Tools
🪛 Checkov (3.2.334)

[MEDIUM] 15-35: Ensure AWS Load Balancer doesn't use HTTP protocol

(CKV_AWS_378)

killev added 2 commits May 25, 2025 13:33
- Remove random provider dependency from terraform configuration
- Remove random_password resource for Redis auth token
- Add redis_auth_token variable with validation rules
- Update main.tf to use user-provided redis_auth_token variable
- Add redis_auth_token example to terraform.tfvars.example
- Add SSM access policy for ECS execution role to retrieve parameters
- Expand Terraform service management policy with ECR and CloudWatch Logs permissions
- Refactor ApostropheCMS uploadfs configuration for AWS S3 integration
- Add environment: Development to manual-approval job

- Add environment: Development to destroy-apply job
- Simplify plan_only boolean expressions using negation syntax

- Move conditional from job level to step level in manual-approval

- Simplify job dependencies in destroy-apply workflow
- Move plan_only check to job level for manual-approval job

- Change step condition to only require approval for non-Development environments
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (12)
.github/workflows/destroy_aws_environment.yml (12)

25-31: Replace undefined vars.ENV_NAME and hardcoded environment value
Both the job-level environment: Development and the get_env_name step reference an undefined vars.ENV_NAME. Replace these with the workflow input and/or the normalization step output (${{ inputs.environment }}).

-    environment: Development
+    environment: ${{ inputs.environment }}

-      - name: Get Environment Name for ${{ vars.ENV_NAME }}
+      - name: Normalize environment name
       id: get_env_name
       uses: Entepotenz/change-string-case-action-min-dependencies@v1
       with:
-          string: ${{ vars.ENV_NAME }}
+          string: ${{ inputs.environment }}
🧰 Tools
🪛 YAMLlint (1.37.1)

[warning] 27-27: wrong indentation: expected 4 but found 6

(indentation)


43-46: Fix tfvars copy to use the correct environment input
The step is still pulling from terraform-config/${{ vars.ENV_NAME }}.tfvars. Update it to use ${{ inputs.environment }} so the right tfvars file is selected.

-      - name: Copy tfvars for ${{ vars.ENV_NAME }}
+      - name: Copy tfvars for ${{ inputs.environment }}
         run: |
-          cat "terraform-config/${{ vars.ENV_NAME }}.tfvars"  "deployment/environments/${{ vars.ENV_NAME }}.tfvars"  > "deployment/terraform.tfvars"
+          cat "terraform-config/${{ inputs.environment }}.tfvars"  "deployment/environments/${{ inputs.environment }}.tfvars"  > "deployment/terraform.tfvars"

53-58: Use dynamic environment in Terraform init and plan
The backend config and plan output still reference vars.ENV_NAME. They must point to the chosen environment via ${{ inputs.environment }} (or use the normalized output).

-          terraform init -backend-config="environments/backend-${{ vars.ENV_NAME }}.hcl"
-          terraform plan -destroy -out="${{ vars.ENV_NAME }}-destroy.tfplan"
+          terraform init -backend-config="environments/backend-${{ inputs.environment }}.hcl"
+          terraform plan -destroy -out="${{ inputs.environment }}-destroy.tfplan"

76-78: Correct S3 path for destroy plan upload
The aws s3 cp still uses vars.ENV_NAME. Replace it so uploads go to the proper environment folder.

-        run: aws s3 cp "deployment/${{ vars.ENV_NAME }}-destroy.tfplan" "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}-destroy.tfplan"
+        run: aws s3 cp "deployment/${{ inputs.environment }}-destroy.tfplan" "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}-destroy.tfplan"

85-87: Correct S3 path for lock file upload
Similarly, update the lock-file upload path to reference ${{ inputs.environment }}.

-        run: aws s3 cp "deployment/.terraform.lock.hcl" "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}.terraform.lock.hcl"
+        run: aws s3 cp "deployment/.terraform.lock.hcl" "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}.terraform.lock.hcl"

97-107: Make manual-approval job dynamic for the selected environment
The manual-approval job still hardcodes environment: Development and references vars.ENV_NAME in the step name and issue title. Patch them to use ${{ inputs.environment }}.

-    environment: Development
+    environment: ${{ inputs.environment }}

-      - name: Wait for approval to destroy ${{ vars.ENV_NAME }}
+      - name: Wait for approval to destroy ${{ inputs.environment }}

-          issue-title: "🚨 DESTROY ${{ vars.ENV_NAME }} AWS Environment 🚨"
+          issue-title: "🚨 DESTROY ${{ inputs.environment }} AWS Environment 🚨"
🧰 Tools
🪛 YAMLlint (1.37.1)

[warning] 100-100: wrong indentation: expected 4 but found 6

(indentation)


111-113: Use dynamic environment in the issue body
The issue body still shows **Environment**: ${{ vars.ENV_NAME }} which will be empty. Swap to ${{ inputs.environment }} for accuracy.

-            **Environment**: ${{ vars.ENV_NAME }}
+            **Environment**: ${{ inputs.environment }}

145-149: Fix get_env_name invocation in the destroy-apply job
This step is still feeding an undefined variable into the normalization action. Update to use the workflow input.

-      - name: Get Environment Name for ${{ vars.ENV_NAME }}
+      - name: Normalize environment name
        id: get_env_name
        uses: Entepotenz/change-string-case-action-min-dependencies@v1
        with:
-          string: ${{ vars.ENV_NAME }}
+          string: ${{ inputs.environment }}
🧰 Tools
🪛 YAMLlint (1.37.1)

[warning] 145-145: wrong indentation: expected 4 but found 6

(indentation)


160-161: Use dynamic environment when fetching the destroy plan
Replace vars.ENV_NAME with ${{ inputs.environment }} so the correct plan is pulled.

-        run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}-destroy.tfplan" "deployment/${{ vars.ENV_NAME }}-destroy.tfplan"
+        run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}-destroy.tfplan" "deployment/${{ inputs.environment }}-destroy.tfplan"

168-169: Use dynamic environment when fetching the lock file
Update the S3 key to reference ${{ inputs.environment }} instead of the undefined variable.

-        run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}.terraform.lock.hcl" "deployment/.terraform.lock.hcl"
+        run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}.terraform.lock.hcl" "deployment/.terraform.lock.hcl"

176-181: Use dynamic environment for Terraform apply in destroy-apply
The apply command still points to vars.ENV_NAME for both backend-config and plan file. Switch to ${{ inputs.environment }}.

-          terraform init -backend-config="environments/backend-${{ vars.ENV_NAME }}.hcl"
+          terraform init -backend-config="environments/backend-${{ inputs.environment }}.hcl"
-          terraform apply "${{ vars.ENV_NAME }}-destroy.tfplan"
+          terraform apply "${{ inputs.environment }}-destroy.tfplan"

188-191: Use dynamic environment for cleanup of S3 artifacts
The cleanup commands still refer to vars.ENV_NAME. Update to ${{ inputs.environment }} to remove the correct files.

-          aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}-destroy.tfplan" || true
+          aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}-destroy.tfplan" || true
-          aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}.terraform.lock.hcl" || true
+          aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}.terraform.lock.hcl" || true
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 775bd50 and b59b21d.

📒 Files selected for processing (1)
  • .github/workflows/destroy_aws_environment.yml (1 hunks)
🧰 Additional context used
🪛 YAMLlint (1.37.1)
.github/workflows/destroy_aws_environment.yml

[warning] 27-27: wrong indentation: expected 4 but found 6

(indentation)


[warning] 100-100: wrong indentation: expected 4 but found 6

(indentation)


[error] 110-110: trailing spaces

(trailing-spaces)


[error] 114-114: trailing spaces

(trailing-spaces)


[error] 127-127: trailing spaces

(trailing-spaces)


[error] 133-133: trailing spaces

(trailing-spaces)


[error] 136-136: trailing spaces

(trailing-spaces)


[warning] 145-145: wrong indentation: expected 4 but found 6

(indentation)

Comment on lines +63 to +70
- name: Show Destroy Plan Summary
run: |
cd deployment
echo "## 🚨 DESTROY PLAN SUMMARY FOR ${{ vars.ENV_NAME }} 🚨" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "The following resources will be **DESTROYED**:" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
terraform show -no-color "${{ vars.ENV_NAME }}-destroy.tfplan" | head -50 >> $GITHUB_STEP_SUMMARY

⚠️ Potential issue

Update summary step to reflect the dynamic environment
The summary header still uses vars.ENV_NAME, which is undefined and will display incorrectly. Swap to ${{ inputs.environment }} to ensure it matches the dispatched environment.

-          echo "## 🚨 DESTROY PLAN SUMMARY FOR ${{ vars.ENV_NAME }} 🚨" >> $GITHUB_STEP_SUMMARY
+          echo "## 🚨 DESTROY PLAN SUMMARY FOR ${{ inputs.environment }} 🚨" >> $GITHUB_STEP_SUMMARY
🤖 Prompt for AI Agents
In .github/workflows/destroy_aws_environment.yml around lines 63 to 70, the
summary header uses the undefined variable vars.ENV_NAME, causing incorrect
display. Replace all instances of vars.ENV_NAME with inputs.environment to
correctly reference the dispatched environment and ensure the summary header
reflects the dynamic environment.

- Enable force_delete on ECR repository resource to allow deletion with images
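
For context, the force_delete flag sits directly on the repository resource; a minimal sketch (names are illustrative, not the repository's actual module code):

resource "aws_ecr_repository" "app" {
  name         = "${var.name_prefix}-${var.environment}"
  force_delete = true   # allows terraform destroy even when images remain

  image_scanning_configuration {
    scan_on_push = true
  }
}
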
@sonarqubecloud

sonarqubecloud bot commented Jun 7, 2025

Quality Gate failed

Failed conditions
0.0% Coverage on New Code (required ≥ 80%)

See analysis details on SonarQube Cloud

@killev killev marked this pull request as draft June 11, 2025 17:51