docs: Add AWS architecture diagram and documentation #117
Conversation
- Add comprehensive Mermaid flowchart showing the complete AWS infrastructure
- Document key architecture components organized by layer (Public, Compute, Storage, Security, Operations)
- Include environment isolation details and resource naming conventions
📝 Walkthrough
This change introduces a comprehensive AWS infrastructure-as-code setup for the "SF Website" project using Terraform. It adds modular Terraform configurations, environment-specific variable files, IAM policies, deployment scripts, and extensive documentation. The infrastructure provisions VPC, ECS Fargate, DocumentDB, ElastiCache Redis, S3, CloudFront, ALB, IAM roles, security groups, monitoring, and CI/CD integration, with environment isolation and operational guidance.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Dev as Developer
    participant GitHub as GitHub Actions
    participant ECR as AWS ECR
    participant ECS as AWS ECS
    participant SSM as AWS Parameter Store
    participant S3 as AWS S3
    participant DocDB as AWS DocumentDB
    participant Redis as AWS ElastiCache Redis
    participant ALB as AWS ALB
    participant CloudFront as AWS CloudFront
    participant CloudWatch as AWS CloudWatch
    Dev->>GitHub: Push code/trigger workflow
    GitHub->>ECR: Build & push Docker image
    GitHub->>ECS: Update ECS service with new image
    ECS->>SSM: Fetch secrets (DB, Redis, session)
    ECS->>DocDB: Connect to DocumentDB (via TLS)
    ECS->>Redis: Connect to Redis
    ECS->>S3: Store/retrieve media attachments
    ECS->>CloudWatch: Send logs/metrics
    User->>ALB: HTTPS request to app
    ALB->>ECS: Forward request to container
    ECS->>S3: Serve/upload media
    User->>CloudFront: Request media asset
    CloudFront->>S3: Fetch media via OAI
    CloudWatch->>Slack: Send alert/notification (if triggered)
```
Pull Request Overview
This PR introduces comprehensive AWS infrastructure documentation, complete with a detailed Mermaid diagram that visually represents the SF Website architecture and its key components.
- Adds a Mermaid diagram outlining public, compute, storage, security, and operations layers
- Documents each AWS component with environment-based naming conventions
- Details environment isolation strategies with clear naming patterns
```mermaid
    ECS -->|Send logs| CloudWatch
    %% IAM relationships
    IAM_Task -.->|Assume role| ECS
```
Copilot AI · May 24, 2025
Consider adding a legend to clarify the significance of dashed arrows (used for IAM relationships) versus solid arrows in the Mermaid diagram for better reader comprehension.
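One lightweight way to do that is a small `subgraph` acting as a legend; a sketch (node names are purely illustrative, not taken from the PR's diagram):

```mermaid
flowchart TB
    subgraph Legend
        S1[Service A] -->|runtime data flow| S2[Service B]
        R1[IAM Role] -.->|permission / assume-role relationship| S3[Service C]
    end
```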
```mermaid
    MongoDB -->|System metrics| CloudWatch
    %% Backup
    AWSBackup -->|Snapshot| MongoDB
```
Copilot AI · May 24, 2025
Update the documentation to specify that AWS Backup snapshots target the EBS volume of the MongoDB EC2 instance, ensuring clear understanding of what is being backed up.
Actionable comments posted: 0
🧹 Nitpick comments (3)
docs/aws-architecture-diagram.md (3)
1-4: Add an introductory context section
It may help to include a brief paragraph below the main heading (lines 1–4) that outlines the document's purpose, target audience, and links to related architecture references or repos. This orients new readers before diving into the diagram.
5-86: Enhance the Mermaid diagram for clarity and consistency
- Leverage Mermaid's `subgraph` blocks to visually group components by layer (Public, Compute, Storage, Security, Monitoring).
- Align naming between the diagram and the text (e.g., "Monitoring & Backup" vs "Operations Layer") to avoid confusion.
- Optionally set an explicit layout direction (`flowchart TB` is fine, but you could switch to `LR` for horizontal views).

Example snippet:

```diff
 flowchart TB
-    %% Public facing components
+    subgraph Public Layer
     ALB[⚖️ Application Load Balancer<br/>…]
     CF[🌐 CloudFront Distribution<br/>…]
+    end
+    subgraph Monitoring & Backup Layer
     CloudWatch[📊 CloudWatch<br/>…]
     AWSBackup[💾 AWS Backup<br/>…]
+    end
```
111-115: Specify AWS tagging best practices
The environment isolation section (lines 111–115) clearly states naming conventions but could be strengthened by listing recommended AWS tag keys (e.g., `Environment`, `Project`, `Owner`) and example values. This enforces consistent cost allocation and resource management across environments.
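As a concrete illustration, such a tag baseline might be declared once and merged into every resource. A sketch with placeholder values (the example resource, team name, and cost center are hypothetical, not part of this PR):

```hcl
locals {
  common_tags = {
    Environment = var.environment   # e.g. "dev", "staging", "prod"
    Project     = "sf-website"
    Owner       = "platform-team"   # team accountable for the resource
    CostCenter  = "web"             # supports cost-allocation reports
    ManagedBy   = "terraform"       # flags IaC-managed resources
  }
}

resource "aws_s3_bucket" "example" {
  bucket = "sf-website-s3-example-${var.environment}"
  tags = merge(local.common_tags, {
    Name = "sf-website-s3-example-${var.environment}"
  })
}
```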
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- docs/aws-architecture-diagram.md (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: lint
- GitHub Check: e2e-tests
- GitHub Check: security-scan
- GitHub Check: unit-tests
…lete Terraform configuration
- Replace MongoDB EC2 instance with managed DocumentDB cluster
- Add ElastiCache Redis cluster for caching and session storage
- Implement complete Terraform infrastructure as code with modular design
- Update domain configuration from prettyclear.com to sandbox-prettyclear.com
- Add comprehensive infrastructure documentation and Q&A guide
- Configure multi-environment support with backend configs for dev/staging/prod
- Add monitoring, security groups, and networking modules
- Update ECS environment variables to include Redis URI and DocumentDB endpoint
Actionable comments posted: 4
🧹 Nitpick comments (23)
terraform/README.md (2)
93-93: Specify language for code block
The fenced code block showing the Terraform file structure lacks a language specifier. Add `hcl` for syntax highlighting.

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
93-93: Fenced code blocks should have a language specified
(MD040, fenced-code-language)

136-136: Specify language for resource naming example
The code block for resource naming conventions should declare a language (e.g., `hcl`) for consistency and readability.

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
136-136: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
terraform/modules/vpc/main.tf (1)
142-161: Add `Name` tag to IAM role for consistency
The IAM role for VPC Flow Logs uses `var.tags` but misses a `Name` tag. Consider merging tags to include a Name for easier resource identification.

```hcl
resource "aws_iam_role" "flow_log" {
  name = "${var.name_prefix}-vpc-flow-log-role-${var.environment}"
  tags = merge(var.tags, {
    Name = "${var.name_prefix}-vpc-flow-log-role-${var.environment}"
  })
}
```

docs/Infrastructure.md (4)
171-172: Clarify environment variable naming
The variable `APOS_MONGODB_URI` actually points to a DocumentDB endpoint. For clarity, consider renaming it to `APOS_DATABASE_URI` or `APOS_DOCUMENTDB_URI`, or add a note that DocumentDB is MongoDB-compatible to avoid confusion.

252-253: CloudWatch metrics extended to new services
Monitoring now includes DocumentDB and Redis. You might link to example dashboards or alarms in your runbooks to help operators interpret these metrics.

266-318: Review ElastiCache (Redis) section for clarity and redundancy
The Redis section is thorough, but the phrase "Auth token enabled for authentication" followed immediately by the Authentication heading is repetitive. Consider streamlining that wording (e.g., move "Auth token enabled" under the Authentication heading).

🧰 Tools
🪛 LanguageTool
[duplication] ~294-~294: Possible typo: you repeated a word.
Context: ...on enabled * Auth token enabled for authentication * Authentication: * Auth token stored in AWS Param...
(ENGLISH_WORD_REPEAT_RULE)

321-382: DocumentDB section completeness
Great level of detail. The engine version `4.0.0` is hard-coded; consider parameterizing it (e.g., via `var.documentdb_engine_version`) so future upgrades need not update the docs and code in tandem.

terraform/main.tf (5)
1-20: Lock Terraform and provider versions
Pinning `required_version` and providers is solid. For Terraform core, you may use a range (e.g., `>= 1.0, < 2.0`) to allow patch updates without breaking changes.

39-42: Redis auth token generation
Generating a 32-character password with special characters is secure. If you encounter issues with certain symbols in Redis, consider specifying `override_special` to control allowable characters.

45-59: Consolidate tag definitions
You merge `var.default_tags` then re-add `Project`, `CostCenter`, and `Owner` in `local.common_tags`. Since they're identical, you can simplify by only merging `Environment` into `var.default_tags`.

144-158: Parameterized DocumentDB deployment
Consider exposing the DocumentDB engine version as a variable (e.g., `var.documentdb_engine_version`) so you can upgrade without code changes.
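A minimal sketch of what that variable could look like (the name and default are suggestions, not the module's actual interface):

```hcl
variable "documentdb_engine_version" {
  description = "Engine version for the DocumentDB cluster"
  type        = string
  default     = "4.0.0"
}
```

The module call would then pass `engine_version = var.documentdb_engine_version`, turning a future upgrade into a one-line variable change.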
209-247: ECS environment variables alignment
The app still expects `APOS_MONGODB_URI` for DocumentDB. Confirm compatibility or consider standardizing to `APOS_DATABASE_URI`. Additionally, you may want to pass in `var.container_image_tag`, `var.container_cpu`, `var.container_memory`, `var.ecs_desired_count`, and `var.ecs_max_capacity` here rather than rely on module defaults.

terraform/modules/s3/main.tf (3)
31-39: SSE-S3 encryption setup
AES256 is suitable; if you need audit or rotation controls, consider KMS-managed keys.

101-109: SSE-S3 encryption for logs
Consider using a KMS key if compliance requires key rotation or audit logging.

147-172: Restrict bucket policy to CloudFront
Your policy locks down S3 access to CloudFront via `SourceArn`. For extra security, you could also use `aws:Referer` conditions or an S3 VPC endpoint if needed.

terraform/variables.tf (8)
10-13: Environment variable
Consider adding a `validation` block (e.g., a regex or `contains` check) to restrict `var.environment` to known values like `dev`, `staging`, and `prod`.
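For illustration, a `validation` block along these lines would reject unknown environments at plan time (a sketch, assuming the allowed values are exactly `dev`, `staging`, and `prod`):

```hcl
variable "environment" {
  description = "Deployment environment"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "environment must be one of: dev, staging, prod."
  }
}
```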
38-42: Domain name variable
You might add a `validation` block to ensure the domain matches `*.sandbox-prettyclear.com` to catch typos early.

43-47: Media domain variable
Similar to `domain_name`, a validation rule could ensure it follows the expected pattern.

60-69: Password length validation
Enforcing a minimum of 8 characters is reasonable; you may also enforce complexity rules (upper, lower, digits) if required.

91-94: GitHub repository variable
Consider adding a regex validation to enforce the `owner/repo` format.

97-101: Slack webhook URL
Marking this as sensitive is correct. If Terraform supports it, you could add `validation { condition = can(regex("^https://hooks.slack.com", var.slack_webhook_url)) }`.

111-115: Container image tag
Defaulting to `latest` is convenient for dev, but consider requiring explicit tags in prod for immutability.

163-167: DocumentDB instance class
The default `db.t3.medium` works for small workloads; if you support larger tiers, list them in your module README.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (19)
- docs/Infrastructure.md (6 hunks)
- docs/infrastructureQNA.md (1 hunks)
- terraform/README.md (1 hunks)
- terraform/backend-dev.hcl (1 hunks)
- terraform/backend-prod.hcl (1 hunks)
- terraform/backend-staging.hcl (1 hunks)
- terraform/main.tf (1 hunks)
- terraform/modules/s3/main.tf (1 hunks)
- terraform/modules/s3/outputs.tf (1 hunks)
- terraform/modules/s3/variables.tf (1 hunks)
- terraform/modules/security_groups/main.tf (1 hunks)
- terraform/modules/security_groups/outputs.tf (1 hunks)
- terraform/modules/security_groups/variables.tf (1 hunks)
- terraform/modules/vpc/main.tf (1 hunks)
- terraform/modules/vpc/outputs.tf (1 hunks)
- terraform/modules/vpc/variables.tf (1 hunks)
- terraform/outputs.tf (1 hunks)
- terraform/terraform.tfvars.example (1 hunks)
- terraform/variables.tf (1 hunks)
✅ Files skipped from review due to trivial changes (10)
- terraform/backend-dev.hcl
- terraform/backend-staging.hcl
- terraform/backend-prod.hcl
- terraform/terraform.tfvars.example
- terraform/modules/s3/outputs.tf
- terraform/modules/security_groups/outputs.tf
- docs/infrastructureQNA.md
- terraform/modules/vpc/variables.tf
- terraform/modules/vpc/outputs.tf
- terraform/outputs.tf
🧰 Additional context used
🪛 Checkov (3.2.334)
terraform/modules/vpc/main.tf
[HIGH] 164-184: Ensure no IAM policies documents allow "*" as a statement's resource for restrictable actions
(CKV_AWS_355)
[HIGH] 164-184: Ensure IAM policies does not allow write access without constraints
(CKV_AWS_290)
terraform/modules/s3/main.tf
[MEDIUM] 50-76: Ensure S3 lifecycle configuration sets period for aborting failed uploads
(CKV_AWS_300)
[MEDIUM] 120-136: Ensure S3 lifecycle configuration sets period for aborting failed uploads
(CKV_AWS_300)
🪛 LanguageTool
docs/Infrastructure.md
[duplication] ~294-~294: Possible typo: you repeated a word.
Context: ...on enabled * Auth token enabled for authentication * Authentication: * Auth token stored in AWS Param...
(ENGLISH_WORD_REPEAT_RULE)
🪛 markdownlint-cli2 (0.17.2)
terraform/README.md
93-93: Fenced code blocks should have a language specified
null
(MD040, fenced-code-language)
136-136: Fenced code blocks should have a language specified
null
(MD040, fenced-code-language)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: e2e-tests
- GitHub Check: lint
- GitHub Check: unit-tests
- GitHub Check: security-scan
🔇 Additional comments (65)
terraform/modules/security_groups/variables.tf (4)
3-6: Well-defined prefix variable
The `name_prefix` variable has an appropriate description and enforces a `string` type, aligning with naming conventions.

8-11: Environment variable is clear
The `environment` variable properly captures the deployment context with a clear `string` type.

13-16: VPC ID variable is correctly typed
The `vpc_id` variable succinctly represents the VPC identifier with the correct `string` type and description.

18-22: Tags variable defaults correctly
The `tags` variable uses a `map(string)` with a sensible default of `{}`, enabling optional tagging.

terraform/modules/security_groups/main.tf (4)
3-36: ALB Security Group is well-configured
The Application Load Balancer security group allows HTTP/HTTPS from anywhere and all outbound traffic, correctly using `cidr_blocks` and merged `tags`.

38-63: ECS Security Group ingress scoped to ALB
Restricting ECS ingress to the ALB SG ID ensures least-privilege access. The egress rule allows all outbound traffic as expected.

65-90: DocumentDB Security Group is correctly locked down
Ingress is limited to the ECS SG on port 27017 and all outbound traffic is permitted. Naming and tagging follow conventions.

92-117: Redis Security Group follows best practices
Port 6379 is restricted to ECS tasks, egress is open, and resource naming and tagging are consistent.

terraform/README.md (1)
3-3: Verify documentation link consistency
The README references `docs/Infrastructure.md`, but the PR introduces `docs/aws-architecture-diagram.md`. Confirm that `docs/Infrastructure.md` exists or update this link to point to the newly added architecture diagram file.

terraform/modules/s3/variables.tf (7)
3-6: Name prefix variable is well-defined
`name_prefix` includes a clear description and the correct `string` type for resource naming.

8-11: Environment variable is consistent
The `environment` variable clearly identifies the target environment and uses the appropriate type.

13-17: Tags variable setup is correct
Using `map(string)` with a default `{}` allows flexible tagging.

19-23: Domain name for CORS declared
The `domain_name` variable is correctly set up with `string` type and an empty default for optional usage.

25-29: Media domain name variable is clear
`media_domain_name` follows the same pattern as `domain_name`, supporting CORS configuration.

31-35: CloudFront ARN variable is appropriate
`cloudfront_distribution_arn` defaults to `""`, allowing optional policy attachment.

37-41: ECS task role ARN variable is well-typed
The `ecs_task_role_arn` variable enables granting bucket access and is properly declared.

terraform/modules/vpc/main.tf (12)
3-12: VPC resource configuration is solid
The VPC is created with DNS support and appropriate tagging conventions.

14-21: Internet Gateway setup is correct
Attaching the IGW to the VPC and tagging follow best practices.

23-36: Public subnets are provisioned per AZ
Using `cidrsubnet` with correct indexing and tagging each subnet ensures a robust multi-AZ design.

38-50: Private subnets mirror public setup
Private subnet CIDR allocation and tagging are consistent with the public subnets.

52-63: Elastic IPs for NAT Gateways are handled properly
Allocating EIPs with `depends_on` ensures the IGW is ready before assignment.

64-76: NAT Gateways created per AZ
Each NAT Gateway references its EIP and subnet, with explicit dependencies for order.

78-90: Public route table configuration is correct
The default route through the IGW is set, and tagging matches naming conventions.

92-99: Public route table associations are accurate
Associating each public subnet to its route table ensures proper routing.

100-114: Private route tables use NAT Gateways
Traffic in private subnets is routed through the corresponding NAT Gateway.

116-123: Private route table associations are correct
Linking each private subnet to its private route table is implemented properly.

124-131: VPC Flow Logs resource is well-defined
Configuring flow logs to send all traffic data to CloudWatch follows best practices.

132-140: CloudWatch Log Group for flow logs is good
The log group is named and retained for 30 days; tags are merged correctly.

docs/Infrastructure.md (4)
7-7: Confirm domain naming consistency
The domain was updated to use the `sandbox-prettyclear.com` suffix. Please verify that this matches the `var.domain_name` and `var.media_domain_name` values in your Terraform configuration and DNS records.

161-162: ECS integration updated with DocumentDB & Redis
The ECS cluster now integrates with both Amazon DocumentDB and ElastiCache Redis services. This aligns with the infrastructure shift; no further changes needed here.

203-203: Validate ALB domain update
The ALB domain is now `sf-website-<env>.sandbox-prettyclear.com`. Ensure that your ACM certificate and Route 53 records are configured to cover this new domain.

224-224: Verify CloudFront custom domain
The media distribution now uses `sf-website-media-<env>.sandbox-prettyclear.com`. Confirm that this CNAME is set up in DNS and attached to your CloudFront distribution.

terraform/main.tf (11)
23-29: Use default_tags for consistent resource tagging
The `default_tags` block ensures all resources carry your standard tags. No action needed.

31-37: Fetch availability zones dynamically
Good use of data sources to populate availability zones.

61-72: Modular VPC configuration
The well-structured VPC module invocation encapsulates networking concerns.

75-84: Modular security groups configuration
The security group module is clear and aligns with your architecture.

97-110: IAM roles for ECS and GitHub OIDC
ECS task and execution roles plus the GitHub OIDC configuration look correct.

113-120: Container registry configuration
Enabling image scanning on push and a lifecycle policy is best practice.

123-141: Parameter Store secrets management
Good separation of static and generated secrets. Ensure an empty `gcs_service_account_key` does not result in a malformed secret entry.
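One common pattern for skipping the secret when the value is blank is a conditional `count`. A sketch in which the resource name, parameter path, and locals are assumptions about this module's internals, not the PR's actual code:

```hcl
resource "aws_ssm_parameter" "gcs_service_account_key" {
  # Only create the parameter when a key was actually provided
  count = var.gcs_service_account_key != "" ? 1 : 0

  name  = "/${local.name_prefix}-${local.environment}/gcs-service-account-key"
  type  = "SecureString"
  value = var.gcs_service_account_key
  tags  = local.common_tags
}
```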
161-174: Redis module configuration
Auth token sourcing and subnet/security settings match your docs. Confirm that engine version 7.0 matches the module's default.

177-191: ALB module configuration
You use `var.domain_name` here; ensure your certificate ARN and DNS (Route 53) match this new domain.

194-207: CloudFront module integration
Good setup; verify that the `media_domain_name` and OAC settings correspond to the S3 bucket policy.

250-267: CloudWatch module configuration
Monitoring parameters cover all key components. Make sure alert thresholds and your Slack webhook URL are tested in staging.

terraform/modules/s3/main.tf (9)
4-11: Attachments bucket naming
Using `${var.name_prefix}-s3-attachments-${var.environment}` follows your naming convention.

14-21: Logs bucket naming
Consistent naming for the logs bucket; good.

24-29: Versioning enabled for attachments
Versioning is correctly enabled.

41-48: Block public access
The strict public access block prevents unintended exposure.

78-90: CORS rules for attachments
Origins and methods align with your application's requirements.

94-99: Versioning enabled for logs
Versioning for logs is enabled as expected.

111-118: Block public access for logs
Correctly restricts public access to your logs bucket.

139-145: CloudFront Origin Access Control
Creating an OAC for the attachments bucket is best practice; ensure your CloudFront distribution references `aws_cloudfront_origin_access_control.attachments.id`.

175-207: ECS task policy document
The IAM policy statements grant the correct S3 permissions. Make sure the `data.aws_iam_policy_document.attachments_access` output is attached to an `aws_iam_policy` resource in the parent module.

terraform/variables.tf (13)
4-8: AWS region default
Defaulting to `us-east-1` is common. For production, you may require an explicit override to avoid unintended deployments.

15-23: Default tags
Your default tag map is clear and well-structured.

26-35: VPC CIDR validation
Using `can(cidrhost(...))` is a robust check. If you plan IPv6 support, consider adding a corresponding CIDR validation for IPv6 blocks.

48-52: Certificate ARN
The ARN input is correctly marked as a non-sensitive string.

54-59: DocumentDB username
The master username is captured correctly.

72-81: Session secret validation
Requiring 32+ characters improves security.

83-88: Optional GCS key
An empty default is fine; ensure your module logic skips creating a secret when this is blank.

104-108: Route53 zone ID
An empty default is acceptable; if Route53 is optional, document when to set this variable.

117-126: Container CPU validation
Validation against allowed CPU units is well done.

128-137: Container memory validation
The memory range check covers valid Fargate limits.

140-149: ECS desired count validation
Ensuring at least one task is good; consider adding an upper bound for cost control.

151-160: ECS max capacity validation
Linking `max_capacity` to `desired_count` prevents misconfiguration; a nice touch.

169-173: Redis node type default
`cache.t3.micro` is appropriate for dev; production overrides should be documented.
```hcl
# IAM Policy for VPC Flow Logs
resource "aws_iam_role_policy" "flow_log" {
  name = "${var.name_prefix}-vpc-flow-log-policy-${var.environment}"
  role = aws_iam_role.flow_log.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents",
          "logs:DescribeLogGroups",
          "logs:DescribeLogStreams",
        ]
        Effect   = "Allow"
        Resource = "*"
      },
    ]
  })
}
```
Restrict IAM policy resource scope
The inline IAM policy allows unrestricted access (Resource = "*") for CloudWatch Logs actions. To adhere to the principle of least privilege, scope the policy to the specific log group ARN.
```diff
 resource "aws_iam_role_policy" "flow_log" {
   name = "${var.name_prefix}-vpc-flow-log-policy-${var.environment}"
   role = aws_iam_role.flow_log.id
-  policy = jsonencode({
-    Version = "2012-10-17"
-    Statement = [
-      {
-        Action = [
-          "logs:CreateLogGroup",
-          "logs:CreateLogStream",
-          "logs:PutLogEvents",
-          "logs:DescribeLogGroups",
-          "logs:DescribeLogStreams",
-        ]
-        Effect = "Allow"
-        Resource = "*"
-      },
-    ]
-  })
+  policy = jsonencode({
+    Version = "2012-10-17"
+    Statement = [
+      {
+        Effect = "Allow"
+        Action = [
+          "logs:CreateLogGroup",
+          "logs:CreateLogStream",
+          "logs:PutLogEvents",
+          "logs:DescribeLogGroups",
+          "logs:DescribeLogStreams",
+        ]
+        Resource = [
+          aws_cloudwatch_log_group.vpc_flow_log.arn,
+          "${aws_cloudwatch_log_group.vpc_flow_log.arn}:*"
+        ]
+      },
+    ]
+  })
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Suggested replacement:

```hcl
resource "aws_iam_role_policy" "flow_log" {
  name = "${var.name_prefix}-vpc-flow-log-policy-${var.environment}"
  role = aws_iam_role.flow_log.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents",
          "logs:DescribeLogGroups",
          "logs:DescribeLogStreams",
        ]
        Resource = [
          aws_cloudwatch_log_group.vpc_flow_log.arn,
          "${aws_cloudwatch_log_group.vpc_flow_log.arn}:*"
        ]
      },
    ]
  })
}
```
🧰 Tools
🪛 Checkov (3.2.334)
[HIGH] 164-184: Ensure no IAM policies documents allow "*" as a statement's resource for restrictable actions
(CKV_AWS_355)
[HIGH] 164-184: Ensure IAM policies does not allow write access without constraints
(CKV_AWS_290)
🤖 Prompt for AI Agents
In terraform/modules/vpc/main.tf around lines 163 to 184, the IAM policy for VPC
Flow Logs currently allows unrestricted access with Resource set to "*". To fix
this, replace the wildcard resource with the specific ARN of the CloudWatch log
group used by the flow logs. Construct the ARN using the log group name variable
or resource reference, and update the Resource field in the policy JSON to this
ARN to restrict permissions to only the necessary log group.
```hcl
module "s3" {
  source = "./modules/s3"

  name_prefix = local.name_prefix
  environment = local.environment

  tags = local.common_tags
}
```
Pass required variables to S3 module
The s3 module uses `var.domain_name`, `var.media_domain_name`, `var.cloudfront_distribution_arn`, and `var.ecs_task_role_arn` internally, but these aren't passed here. Please verify the module interface or update the call to include these inputs.
🤖 Prompt for AI Agents
In terraform/main.tf around lines 87 to 94, the s3 module call is missing
required input variables var.domain_name, var.media_domain_name,
var.cloudfront_distribution_arn, and var.ecs_task_role_arn that the module uses
internally. Review the s3 module's variables.tf or documentation to confirm
these inputs, then update the module block here to pass these variables
explicitly by adding lines like domain_name = var.domain_name, media_domain_name
= var.media_domain_name, cloudfront_distribution_arn =
var.cloudfront_distribution_arn, and ecs_task_role_arn = var.ecs_task_role_arn.
```hcl
resource "aws_s3_bucket_lifecycle_configuration" "attachments" {
  bucket = aws_s3_bucket.attachments.id

  rule {
    id     = "transition_to_ia"
    status = "Enabled"

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 90
      storage_class = "GLACIER"
    }
  }

  rule {
    id     = "expire_non_current_versions"
    status = "Enabled"

    noncurrent_version_expiration {
      noncurrent_days = 30
    }
  }
}
```
Add abort incomplete multipart upload rule
Your lifecycle rules handle transitions and non-current version expiration but don't clean up failed multipart uploads. Add an `abort_incomplete_multipart_upload { days_after_initiation = 7 }` block to avoid orphaned parts.
🧰 Tools
🪛 Checkov (3.2.334)
[MEDIUM] 50-76: Ensure S3 lifecycle configuration sets period for aborting failed uploads
(CKV_AWS_300)
🤖 Prompt for AI Agents
In terraform/modules/s3/main.tf around lines 50 to 76, the lifecycle
configuration resource for the S3 bucket is missing a rule to abort incomplete
multipart uploads. Add an abort_incomplete_multipart_upload block inside the
lifecycle rule with days_after_initiation set to 7 to ensure failed multipart
uploads are cleaned up and do not accumulate orphaned parts.
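The suggested rule could be sketched as follows (a fragment to merge into the existing lifecycle configuration shown above, not a standalone resource):

```hcl
# Inside the existing aws_s3_bucket_lifecycle_configuration "attachments" block:
rule {
  id     = "abort_incomplete_uploads"
  status = "Enabled"

  # Discard parts of multipart uploads that never completed
  abort_incomplete_multipart_upload {
    days_after_initiation = 7
  }
}
```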
```hcl
resource "aws_s3_bucket_lifecycle_configuration" "logs" {
  bucket = aws_s3_bucket.logs.id

  rule {
    id     = "log_lifecycle"
    status = "Enabled"

    transition {
      days          = 90
      storage_class = "GLACIER"
    }

    expiration {
      days = 365
    }
  }
}
```
Add abort incomplete multipart upload to logs lifecycle
Like the attachments bucket, include an abort_incomplete_multipart_upload rule to prevent storage bloat from incomplete uploads.
🧰 Tools
🪛 Checkov (3.2.334)
[MEDIUM] 120-136: Ensure S3 lifecycle configuration sets period for aborting failed uploads
(CKV_AWS_300)
🤖 Prompt for AI Agents
In terraform/modules/s3/main.tf around lines 120 to 136, the lifecycle
configuration for the logs bucket is missing an
abort_incomplete_multipart_upload rule. Add an abort_incomplete_multipart_upload
block inside the rule to specify the number of days after which incomplete
multipart uploads should be aborted, similar to the attachments bucket
configuration, to prevent storage bloat.
…ript
- Unify S3 bucket names to 'sf-website-infrastructure' across all environments
- Update state file paths to consistent terraform/terraform-{env}.tfstate format
- Replace hardcoded session secret with environment variable placeholder
- Add comprehensive AWS initialization script for Terraform backend resources
- Include S3 bucket and DynamoDB table management with proper error handling
Actionable comments posted: 1
🧹 Nitpick comments (3)
terraform/backend-dev.hcl (1)
1-6: Reduce duplication across environment backend configs
All three backend files differ only by the `key` value. Consider consolidating these into a single templated `backend.hcl` and passing the environment-specific key at init time (e.g., `terraform init -backend-config="key=terraform/${ENV}.tfstate"`). This will DRY up your configs and simplify maintenance.

terraform/init-aws-for-terraform.sh (2)
36-42: Use an array for AWS CLI invocation to avoid word-splitting edge cases
Building the AWS command as an array is more robust than echoing a string, especially when profile names contain spaces or special characters.

```diff
-aws_cmd() {
-  if [ -n "${PROFILE}" ]; then
-    echo "aws --profile ${PROFILE}"
-  else
-    echo "aws"
-  fi
-}
+aws_cmd() {
+  local aws_cli=(aws)
+  if [ -n "${PROFILE}" ]; then
+    aws_cli+=(--profile "${PROFILE}")
+  fi
+  echo "${aws_cli[@]}"
+}
```
243-248: Use raw input flag with read to avoid backslash interpretation
It's safer to useread -rso that any backslashes in user input aren’t treated as escape characters.-read -p 'Are you sure? (yes/no): ' confirmation +read -r -p 'Are you sure? (yes/no): ' confirmation
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- terraform/backend-dev.hcl (1 hunks)
- terraform/backend-prod.hcl (1 hunks)
- terraform/backend-staging.hcl (1 hunks)
- terraform/init-aws-for-terraform.sh (1 hunks)
- terraform/terraform.tfvars.example (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- terraform/terraform.tfvars.example
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: lint
- GitHub Check: e2e-tests
- GitHub Check: security-scan
- GitHub Check: unit-tests
🔇 Additional comments (4)
terraform/backend-dev.hcl (1)
1-6: Correct remote state configuration for Development environment
The backend file correctly specifies the shared S3 bucket, state key, DynamoDB lock table, region, and enables encryption. This aligns with Terraform best practices for remote state management.

terraform/backend-staging.hcl (1)
1-6: Remote state configuration for Staging is valid
The staging backend correctly mirrors the development setup with the appropriate key path and encryption.

terraform/backend-prod.hcl (1)
1-6: Remote state configuration for Production is valid
Production backend is properly configured with locking, encryption, and a distinct state path.

terraform/init-aws-for-terraform.sh (1)
1-338: Bash script for Terraform backend resource management looks solid
The script cleanly implements create/delete/status operations, credential checks, resource existence checks, and user prompts. Functions are well-named and the flow is clear.
terraform/init-aws-for-terraform.sh
Outdated
```bash
# Script to manage Terraform backend resources (S3 bucket and DynamoDB table)
# Usage: ./init-aws-for-terraform.sh [create|delete|status] [--profile PROFILE_NAME]

set -e
```
🛠️ Refactor suggestion
Enable strict error checking for pipelines and unset variables
Currently using set -e only catches non-zero exit codes of the last command. Adding -o pipefail and -u will ensure failures in pipelines and references to unset variables abort the script as expected.
```diff
-set -e
+set -euo pipefail
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-set -e
+set -euo pipefail
```
🤖 Prompt for AI Agents
In terraform/init-aws-for-terraform.sh at line 6, the script uses 'set -e' which
only stops execution on the last command's failure. To improve error handling,
modify this line to include '-o pipefail' and '-u' options as well, so the
script will also fail on any command in a pipeline that fails and on any use of
unset variables. This change enhances the script's robustness by ensuring all
errors are caught promptly.
… and comprehensive documentation

- Refactor aws_cmd() to aws_exec() with improved error handling and output suppression
- Add non-interactive mode support for automation and CI/CD pipelines
- Add comprehensive documentation to README.md with usage examples and feature overview
- Create IAM policy file with specific permissions for Terraform backend resources
- Improve function return codes and error handling throughout the script
- Redirect command output to reduce noise and improve user experience
Actionable comments posted: 3
♻️ Duplicate comments (1)
terraform/init-aws-for-terraform.sh (1)
6-6: Enhance error handling with `pipefail` and unset variable checks
As previously suggested, use:

```diff
-set -e
+set -euo pipefail
```

to catch failures in pipelines and references to unset variables.
🧹 Nitpick comments (7)
terraform/terraform-backend-policy.json (3)
4-22: Scope down S3 bucket permissions for least privilege
Currently, the policy grants broad permissions (CreateBucket, DeleteBucket, PutBucketPolicy, etc.) on the entire bucket. Consider whether Terraform truly needs to manage bucket lifecycle (creation/deletion) on an ongoing basis, or if those operations should be separated into a bootstrap policy. Reducing permissions post-creation minimizes blast radius.
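As an illustration of that split, a runtime-only statement might look like this (a sketch; the `Sid` name and exact action list are assumptions, and bucket creation/deletion actions would live in a separate bootstrap policy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TerraformStateRuntimeOnly",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::sf-website-infrastructure",
        "arn:aws:s3:::sf-website-infrastructure/terraform/*"
      ]
    }
  ]
}
```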
40-53: Consolidate DynamoDB permissions and consider wildcard resources
You grant CreateTable/DeleteTable for a single table ARN. If you plan to reuse this policy across environments with table suffixes, you could use a wildcard:

```json
"Resource": "arn:aws:dynamodb:us-east-1:695912022152:table/sf-website-terraform-locks-*"
```

Alternatively, parameterize the table name in Terraform. Also verify if CreateTable/DeleteTable should be limited to bootstrap only.
54-61: Restrict STS permissions to specific principals
Granting `sts:GetCallerIdentity` on all resources is safe but broad. You could tighten this by specifying the ARN of the IAM role(s) intended to call Terraform, e.g.:

```json
"Resource": "arn:aws:iam::695912022152:role/terraform-*"
```

This follows least-privilege principles.
terraform/init-aws-for-terraform.sh (2)
35-43: Review AWS CLI error suppression
Filtering out any lines containing `head:` or `cat:` may inadvertently silence legitimate AWS CLI errors. Consider using the AWS CLI environment variable `AWS_PAGER=""` to disable paging, or refining your `grep` regex to only ignore known benign messages.
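A minimal sketch of that approach (the `PROFILE` variable matches the script; the rest is illustrative): disable the pager via `AWS_PAGER` and build the invocation as an array rather than grep-filtering output.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Disable the AWS CLI pager instead of filtering its output with grep.
export AWS_PAGER=""

# Build the CLI invocation as an array so profile names with spaces survive.
PROFILE="${PROFILE:-}"
aws_cli=(aws)
if [ -n "${PROFILE}" ]; then
  aws_cli+=(--profile "${PROFILE}")
fi

# Example invocation (commented out because it needs AWS credentials):
# "${aws_cli[@]}" sts get-caller-identity
printf '%s\n' "${aws_cli[*]}"
```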
234-236: Useread -rto prevent backslash escapes
Change to:

```diff
-read -p 'Are you sure? (yes/no): ' confirmation
+read -r -p 'Are you sure? (yes/no): ' confirmation
```

to avoid unintended processing of backslashes in user input.
terraform/README.md (2)
93-99: Specify language for fenced code block
The code block starting at line 93 is missing a language identifier. For markdownlint compliance, add `text` or `bash` after the opening backticks of the block showing the directory tree (terraform/, main.tf, README.md).

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
93-93: Fenced code blocks should have a language specified
(MD040, fenced-code-language)

136-138: Specify language for resource naming code block
The snippet defining the naming convention (`sf-website-{resource-type}-{environment}`) lacks a language marker. Add `text` after the opening backticks to satisfy markdownlint (MD040).

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
136-136: Fenced code blocks should have a language specified
(MD040, fenced-code-language)

📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📥 Commits
Reviewing files that changed from the base of the PR and between 196a33b6bf5e18457ada34a37e94a85424c0f4ef and e5e4b6070d93e313c9faf3560eb092b82b9f8989.
📒 Files selected for processing (3)
- terraform/README.md (1 hunks)
- terraform/init-aws-for-terraform.sh (1 hunks)
- terraform/terraform-backend-policy.json (1 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.17.2)
terraform/README.md
93-93: Fenced code blocks should have a language specified (MD040, fenced-code-language)
136-136: Fenced code blocks should have a language specified (MD040, fenced-code-language)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: e2e-tests
- GitHub Check: lint
- GitHub Check: unit-tests
- GitHub Check: security-scan
| "arn:aws:s3:::sf-website-infrastructure", | ||
| "arn:aws:s3:::sf-website-infrastructure/*" | ||
| ] |
Align resource ARNs with environment-specific buckets
The policy hardcodes the S3 bucket sf-website-infrastructure, but your backend configs (e.g., backend-dev.hcl) and README use buckets like sf-website-terraform-state-dev, -staging, -prod. This mismatch could prevent Terraform from accessing the correct bucket. Please confirm the intended bucket names and update either the policy or the backend docs to align.
🤖 Prompt for AI Agents
In terraform/terraform-backend-policy.json around lines 24 to 26, the S3 bucket
ARNs are hardcoded to "sf-website-infrastructure" which does not match the
environment-specific bucket names used in backend configs and documentation like
"sf-website-terraform-state-dev", "-staging", and "-prod". Update the resource
ARNs in this policy file to use the correct environment-specific bucket names
consistent with the backend configuration and README, or alternatively update
the backend configs and docs to match the bucket names in this policy, ensuring
alignment between all references.
terraform/init-aws-for-terraform.sh
Outdated
```bash
aws_exec s3api list-object-versions --bucket "${BUCKET_NAME}" \
  --query 'Versions[].{Key:Key,VersionId:VersionId}' \
  --output text | while read -r key version_id; do
  if [ -n "${key}" ] && [ -n "${version_id}" ]; then
    aws_exec s3api delete-object --bucket "${BUCKET_NAME}" \
      --key "${key}" --version-id "${version_id}"
  fi
done
```
🛠️ Refactor suggestion
Handle object keys with special characters safely
Using --output text and while read -r key version_id can break on keys containing spaces or unusual characters. A more robust pattern is to output JSON and parse with jq, for example:
```bash
aws_exec s3api list-object-versions --bucket "${BUCKET_NAME}" \
  --query 'Versions[].{Key:Key,VersionId:VersionId}' --output json |
  jq -c '.[]' |
  while read -r item; do
    key=$(jq -r '.Key' <<<"$item")
    version_id=$(jq -r '.VersionId' <<<"$item")
    aws_exec s3api delete-object --bucket "$BUCKET_NAME" --key "$key" --version-id "$version_id"
  done
```

This ensures correct handling of any key names.
Also applies to: 167-174
🤖 Prompt for AI Agents
In terraform/init-aws-for-terraform.sh around lines 157 to 164, the current use
of --output text with while read can break on S3 object keys containing spaces
or special characters. To fix this, change the command to output JSON (--output
json), pipe it through jq to parse each object safely, extract Key and VersionId
using jq, and then delete each object using these safely parsed values. Apply
the same fix to lines 167 to 174 where a similar pattern occurs.
```bash
# Create S3 buckets for each environment
aws s3 mb s3://sf-website-terraform-state-dev
aws s3 mb s3://sf-website-terraform-state-staging
aws s3 mb s3://sf-website-terraform-state-prod
```
Inconsistent bucket naming in Quick Start
The README instructs creating buckets named sf-website-terraform-state-dev etc., but the IAM policy and init-aws-for-terraform.sh reference sf-website-infrastructure. Please verify and align the bucket names across your Terraform backends, scripts, and docs to avoid confusion.
🤖 Prompt for AI Agents
In terraform/README.md around lines 43 to 47, the S3 bucket names used in the
instructions do not match the bucket names referenced in the IAM policy and
init-aws-for-terraform.sh script. Review the bucket naming convention used in
all related files and update the README to use consistent bucket names, ensuring
they align exactly with those in the IAM policy and scripts to prevent confusion
and errors during setup.
- Remove DynamoDB table references from all backend configuration files
- Simplify terraform initialization script by removing DynamoDB functionality
- Add comprehensive terraform modules for AWS services (ALB, CloudFront, CloudWatch, DocumentDB, ECR, ECS, IAM, Parameter Store, Redis)
- Improve initialization script with environment variable configuration support
- Update gitignore to exclude terraform.tfvars and .terraform directory
- Add terraform lock file for dependency management
Actionable comments posted: 7
♻️ Duplicate comments (2)
terraform/init-aws-for-terraform.sh (2)
6-6: Enable strict error handling
Already flagged in prior reviews: switch from `set -e` to `set -euo pipefail` to catch errors in pipelines and unset variables.

```diff
-set -e
+set -euo pipefail
```
126-133: Handle S3 object keys with special characters safely
The loops using `--output text | while read -r key version_id` break on keys with spaces or unusual characters. Switch to JSON output and parse with `jq`, as suggested previously.

```diff
-  aws_exec s3api list-object-versions --bucket "${BUCKET_NAME}" \
-    --query 'Versions[].{Key:Key,VersionId:VersionId}' \
-    --output text | while read -r key version_id; do
+  aws_exec s3api list-object-versions --bucket "${BUCKET_NAME}" \
+    --query 'Versions[].{Key:Key,VersionId:VersionId}' \
+    --output json | jq -c '.[]' | while read -r item; do
+    key=$(jq -r '.Key' <<<"$item")
+    version_id=$(jq -r '.VersionId' <<<"$item")
```

Apply the same pattern for DeleteMarkers.
Also applies to: 136-143
🧹 Nitpick comments (10)
terraform/modules/cloudfront/main.tf (1)
2-46: Add default root object and consider origin failover & WAF
- To prevent 404s when accessing the root path, add a `default_root_object` (e.g., `"index.html"`) (CKV_AWS_305).
- For high availability, consider configuring an `origin_group` with failover criteria (CKV_AWS_310).
- Evaluate attaching a WAFv2 WebACL for added security (CKV2_AWS_47).

Example diff to add just the default root object:

```diff
 resource "aws_cloudfront_distribution" "main" {
+  default_root_object = "index.html"
   origin {
     domain_name = var.s3_bucket_domain_name
     origin_id   = "S3-${var.s3_bucket_id}"
```
🪛 Checkov (3.2.334)
[MEDIUM] 2-46: Ensure CloudFront distribution has a default root object configured
(CKV_AWS_305)
[MEDIUM] 2-46: Ensure CloudFront distributions should have origin failover configured
(CKV_AWS_310)
[MEDIUM] 2-46: Ensure AWS CloudFront attached WAFv2 WebACL is configured with AMR for Log4j Vulnerability
(CKV2_AWS_47)
terraform/modules/cloudfront/variables.tf (1)
1-35: Input variables defined without validations
All necessary inputs are declared, but adding `validation` blocks would help catch misconfigurations early. For example:

```hcl
variable "media_domain_name" {
  description = "Media domain name"
  type        = string
  validation {
    condition     = can(regex("[A-Za-z0-9.-]+\\.[A-Za-z]{2,}", var.media_domain_name))
    error_message = "media_domain_name must be a valid domain (e.g. media.example.com)"
  }
}
```

Similarly, enforce non-empty strings for critical values like `name_prefix` and `certificate_arn`.

terraform/modules/redis/outputs.tf (1)
name_prefixandcertificate_arn.terraform/modules/redis/outputs.tf (1)
1-4: Simplify fallback with `try()`
Consider using Terraform's built-in `try()` to streamline the fallback:

```diff
-value = aws_elasticache_replication_group.main.configuration_endpoint_address != "" ? aws_elasticache_replication_group.main.configuration_endpoint_address : aws_elasticache_replication_group.main.primary_endpoint_address
+value = try(aws_elasticache_replication_group.main.configuration_endpoint_address, aws_elasticache_replication_group.main.primary_endpoint_address)
```

terraform/modules/ecr/variables.tf (1)
6-9: Add validation for `environment` variable
Consider constraining this variable to known environments:

```diff
 variable "environment" {
   description = "Environment name"
   type        = string
+  validation {
+    condition     = contains(["dev","staging","prod"], var.environment)
+    error_message = "Environment must be one of dev, staging, or prod."
+  }
 }
```

terraform/modules/ecr/main.tf (1)
13-47: Parameterize lifecycle policy rules for flexibility
Hardcoded `countNumber` values can be exposed as module inputs to allow easy tuning. Example refactor:

```diff
 # In terraform/modules/ecr/variables.tf
+variable "keep_tagged_count" {
+  description = "Number of tagged images to retain"
+  type        = number
+  default     = 10
+}
+variable "untagged_retention_days" {
+  description = "Days to retain untagged images"
+  type        = number
+  default     = 1
+}

 # In terraform/modules/ecr/main.tf
 policy = jsonencode({
   rules = [
     {
       ...
-      countNumber = 10
+      countNumber = var.keep_tagged_count
       ...
     },
     {
       ...
-      countNumber = 1
+      countNumber = var.untagged_retention_days
       ...
     }
   ]
 })
```

terraform/modules/cloudwatch/main.tf (2)
1-7: Make retention period configurable
The hard-coded `retention_in_days = 7` may not fit all use cases. Consider exposing this as a variable (e.g., `var.log_retention_in_days`) with a sensible default to allow future flexibility.
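A sketch of the variable-driven version (the log group name expression here is illustrative, not the module's actual value):

```hcl
variable "log_retention_in_days" {
  description = "CloudWatch log retention period in days"
  type        = number
  default     = 7
}

resource "aws_cloudwatch_log_group" "app_logs" {
  # Illustrative name; keep the module's existing name expression.
  name              = "/ecs/${var.name_prefix}-${var.environment}"
  retention_in_days = var.log_retention_in_days
  tags              = var.tags
}
```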
9-36: Parameterize AWS region for the dashboard
The dashboard's `region` property is currently hard-coded to `"us-east-1"`. To support multi-region deployments, consider replacing it with `var.region` or `data.aws_region.current.name`.
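One way to realize this with the data-source approach (a sketch; the widget body shown is illustrative, not the module's actual dashboard):

```hcl
data "aws_region" "current" {}

resource "aws_cloudwatch_dashboard" "main" {
  dashboard_name = "${var.name_prefix}-${var.environment}"
  dashboard_body = jsonencode({
    widgets = [
      {
        type = "metric"
        properties = {
          # Follow the provider's region instead of hard-coding "us-east-1".
          region  = data.aws_region.current.name
          metrics = [["AWS/ECS", "CPUUtilization"]]
        }
      }
    ]
  })
}
```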
10-26: Parameterize cluster sizing and backups
The `node_type = "cache.t3.micro"`, `num_cache_clusters = 1`, and absence of snapshot retention settings are hard-coded. For production workloads, consider exposing `var.node_type`, `var.num_cache_clusters`, and `var.snapshot_retention_limit`/`var.maintenance_window` variables to make sizing and backup policies configurable.

terraform/modules/documentdb/main.tf (1)
23-30: Parameterize instance class and snapshot behavior
The `instance_class = "db.t3.medium"` and `skip_final_snapshot = true` are hard-coded. Consider introducing `var.instance_class` and `var.skip_final_snapshot` (defaulting to `false` for safer production tear-down) to increase flexibility and prevent accidental data loss.

terraform/modules/ecs/main.tf (1)
93-94: Remove or correct unnecessary `depends_on` usage
Using `depends_on = [var.target_group_arn]` is incorrect: `depends_on` expects resource addresses, not raw strings. Terraform automatically tracks the dependency through the `load_balancer` block's `target_group_arn` reference. You can safely remove this block or, if explicit ordering is required, reference the actual ALB target group resource.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (33)
- .gitignore (1 hunks)
- terraform/.terraform.lock.hcl (1 hunks)
- terraform/backend-dev.hcl (1 hunks)
- terraform/backend-prod.hcl (1 hunks)
- terraform/backend-staging.hcl (1 hunks)
- terraform/init-aws-for-terraform.sh (1 hunks)
- terraform/modules/alb/main.tf (1 hunks)
- terraform/modules/alb/outputs.tf (1 hunks)
- terraform/modules/alb/variables.tf (1 hunks)
- terraform/modules/cloudfront/main.tf (1 hunks)
- terraform/modules/cloudfront/outputs.tf (1 hunks)
- terraform/modules/cloudfront/variables.tf (1 hunks)
- terraform/modules/cloudwatch/main.tf (1 hunks)
- terraform/modules/cloudwatch/outputs.tf (1 hunks)
- terraform/modules/cloudwatch/variables.tf (1 hunks)
- terraform/modules/documentdb/main.tf (1 hunks)
- terraform/modules/documentdb/outputs.tf (1 hunks)
- terraform/modules/documentdb/variables.tf (1 hunks)
- terraform/modules/ecr/main.tf (1 hunks)
- terraform/modules/ecr/outputs.tf (1 hunks)
- terraform/modules/ecr/variables.tf (1 hunks)
- terraform/modules/ecs/main.tf (1 hunks)
- terraform/modules/ecs/outputs.tf (1 hunks)
- terraform/modules/ecs/variables.tf (1 hunks)
- terraform/modules/iam/main.tf (1 hunks)
- terraform/modules/iam/outputs.tf (1 hunks)
- terraform/modules/iam/variables.tf (1 hunks)
- terraform/modules/parameter_store/main.tf (1 hunks)
- terraform/modules/parameter_store/outputs.tf (1 hunks)
- terraform/modules/parameter_store/variables.tf (1 hunks)
- terraform/modules/redis/main.tf (1 hunks)
- terraform/modules/redis/outputs.tf (1 hunks)
- terraform/modules/redis/variables.tf (1 hunks)
✅ Files skipped from review due to trivial changes (13)
- .gitignore
- terraform/modules/ecr/outputs.tf
- terraform/modules/documentdb/outputs.tf
- terraform/.terraform.lock.hcl
- terraform/modules/ecs/outputs.tf
- terraform/modules/iam/outputs.tf
- terraform/modules/cloudwatch/variables.tf
- terraform/modules/alb/outputs.tf
- terraform/modules/iam/variables.tf
- terraform/modules/ecs/variables.tf
- terraform/modules/alb/variables.tf
- terraform/modules/redis/variables.tf
- terraform/modules/parameter_store/outputs.tf
🚧 Files skipped from review as they are similar to previous changes (2)
- terraform/backend-prod.hcl
- terraform/backend-dev.hcl
🧰 Additional context used
🪛 Checkov (3.2.334)
terraform/modules/documentdb/main.tf
[MEDIUM] 10-21: Ensure DocumentDB Logging is enabled
(CKV_AWS_85)
terraform/modules/alb/main.tf
[MEDIUM] 2-12: Ensure that ALB drops HTTP headers
(CKV_AWS_131)
terraform/modules/cloudfront/main.tf
[MEDIUM] 2-46: Ensure CloudFront distribution has a default root object configured
(CKV_AWS_305)
[MEDIUM] 2-46: Ensure CloudFront distributions should have origin failover configured
(CKV_AWS_310)
[MEDIUM] 2-46: Ensure AWS CloudFront attached WAFv2 WebACL is configured with AMR for Log4j Vulnerability
(CKV2_AWS_47)
terraform/modules/parameter_store/main.tf
[HIGH] 2-8: Ensure SSM parameters are using KMS CMK
(CKV_AWS_337)
[HIGH] 20-26: Ensure SSM parameters are using KMS CMK
(CKV_AWS_337)
[HIGH] 29-35: Ensure SSM parameters are using KMS CMK
(CKV_AWS_337)
[HIGH] 38-46: Ensure SSM parameters are using KMS CMK
(CKV_AWS_337)
terraform/modules/iam/main.tf
[HIGH] 116-150: Ensure no IAM policies documents allow "*" as a statement's resource for restrictable actions
(CKV_AWS_355)
[HIGH] 116-150: Ensure IAM policies does not allow write access without constraints
(CKV_AWS_290)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: unit-tests
- GitHub Check: e2e-tests
- GitHub Check: lint
- GitHub Check: security-scan
🔇 Additional comments (14)
terraform/modules/cloudfront/outputs.tf (2)
1-4: Outputs expose CloudFront distribution ID
The `distribution_id` output correctly exposes `aws_cloudfront_distribution.main.id` for downstream modules to reference.
6-9: Outputs expose CloudFront distribution domain name
The `distribution_domain_name` output appropriately exposes `aws_cloudfront_distribution.main.domain_name`.

terraform/modules/cloudfront/main.tf (1)
48-51: Origin Access Identity configuration is correct
The OAI resource is correctly defined to restrict direct S3 bucket access, with a clear comment reflecting the name prefix and environment.

terraform/modules/redis/outputs.tf (2)
1-4: Clear endpoint fallback logic
The conditional correctly selects the configuration endpoint when available, otherwise falls back to the primary endpoint.
6-9: Expose replication group ID
The `cluster_id` output correctly surfaces the `replication_group_id`.

terraform/modules/cloudwatch/outputs.tf (2)
1-4: Output CloudWatch log group name
This output correctly exposes the `app_logs` log group name.
6-9: Output CloudWatch dashboard name
This output correctly exposes the `main` dashboard identifier.

terraform/modules/ecr/variables.tf (2)
1-4: `name_prefix` variable is appropriately defined
The string type and description are clear and align with naming conventions.
11-15: `tags` variable default is sensible
Defaulting to an empty map is appropriate for optional tags.

terraform/modules/ecr/main.tf (1)
1-11: ECR repository resource is correctly configured
The repository name, tag mutability, scanning, and tagging settings are all sound.

terraform/modules/redis/main.tf (1)
1-7: Subnet group resource configuration looks good
The ElastiCache subnet group is correctly defined with dynamic naming and applied tags.
terraform/modules/documentdb/variables.tf (1)
1-41: Variable definitions look solid
The module's input variables are well-defined with clear descriptions, types, and sensible defaults (including marking `master_password` as sensitive).

terraform/modules/ecs/main.tf (2)
1-11: Approve ECS Cluster Configuration
The ECS cluster resource enables Container Insights and applies consistent tagging—this aligns with best practices.
35-47: Verify mapping of environment variables and secrets
The dynamic blocks for `environment` and `secrets` rely on `var.environment_variables` and `var.secrets`. Ensure these variables are properly defined in `variables.tf` (with correct types and defaults) to avoid runtime JSON-encode errors.
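A plausible shape for those variables, matching the structure ECS container definitions expect (a sketch, since the module's actual `variables.tf` is not shown here):

```hcl
variable "environment_variables" {
  description = "Plain-text environment variables for the container"
  type = list(object({
    name  = string
    value = string
  }))
  default = []
}

variable "secrets" {
  description = "Secrets injected from SSM parameter ARNs"
  type = list(object({
    name      = string
    valueFrom = string
  }))
  default = []
}
```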
```hcl
# Backend configuration for Staging environment
bucket  = "sf-website-infrastructure"
key     = "terraform/terraform-staging.tfstate"
region  = "us-east-1"
encrypt = true
```
Missing DynamoDB locking configuration
To prevent concurrent state operations corrupting your state, include a dynamodb_table setting for Terraform state locking, for example:
dynamodb_table = "terraform-state-lock"🤖 Prompt for AI Agents
In terraform/backend-staging.hcl at lines 1 to 5, the backend configuration is
missing the dynamodb_table setting required for state locking. Add a line
specifying dynamodb_table = "terraform-state-lock" to enable Terraform state
locking and prevent concurrent state operation conflicts.
| resource "aws_lb" "main" { | ||
| name = "${var.name_prefix}-${var.environment}-alb" | ||
| internal = false | ||
| load_balancer_type = "application" | ||
| security_groups = var.security_group_ids | ||
| subnets = var.subnet_ids | ||
|
|
||
| enable_deletion_protection = false | ||
|
|
||
| tags = var.tags | ||
| } |
Enforce dropping of invalid HTTP headers
Checkov (CKV_AWS_131) recommends that the ALB drop invalid or sensitive HTTP headers to prevent information leakage. Add:
resource "aws_lb" "main" {
...
+ drop_invalid_header_fields = true
+ idle_timeout = 60
...
}Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Checkov (3.2.334)
[MEDIUM] 2-12: Ensure that ALB drops HTTP headers
(CKV_AWS_131)
🤖 Prompt for AI Agents
In terraform/modules/alb/main.tf around lines 2 to 12, the AWS ALB resource
lacks configuration to drop invalid or sensitive HTTP headers as recommended by
Checkov (CKV_AWS_131). To fix this, add the setting that enforces dropping
invalid HTTP headers by including the attribute that enables this behavior in
the aws_lb resource block, ensuring the ALB is configured to prevent information
leakage.
| resource "aws_ssm_parameter" "session_secret" { | ||
| name = "/${var.name_prefix}/${var.environment}/session-secret" | ||
| type = "SecureString" | ||
| value = var.session_secret | ||
|
|
||
| tags = var.tags | ||
| } |
🛠️ Refactor suggestion
Specify Customer-Managed KMS Key for SecureString parameters
All aws_ssm_parameter resources of type "SecureString" currently fall back to the AWS-managed KMS key. To meet security best practices and organizational requirements, you should explicitly reference a customer-managed KMS CMK via the key_id attribute. Introduce a kms_key_id variable and apply it to each SecureString parameter.
Example diff:
resource "aws_ssm_parameter" "session_secret" {
name = "/${var.name_prefix}/${var.environment}/session-secret"
type = "SecureString"
value = var.session_secret
+ key_id = var.kms_key_id
tags = var.tags
}
…
resource "aws_ssm_parameter" "documentdb_password" {
…
type = "SecureString"
value = var.documentdb_master_password
+ key_id = var.kms_key_id
tags = var.tags
}
…
resource "aws_ssm_parameter" "redis_auth_token" {
…
type = "SecureString"
value = var.redis_auth_token
+ key_id = var.kms_key_id
tags = var.tags
}
…
resource "aws_ssm_parameter" "gcs_service_account_key" {
…
type = "SecureString"
value = var.gcs_service_account_key
+ key_id = var.kms_key_id
tags = var.tags
}Also applies to: 20-26, 29-35, 38-46
🧰 Tools
🪛 Checkov (3.2.334)
[HIGH] 2-8: Ensure SSM parameters are using KMS CMK
(CKV_AWS_337)
🤖 Prompt for AI Agents
In terraform/modules/parameter_store/main.tf around lines 2 to 8, the
aws_ssm_parameter resource of type SecureString does not specify a
customer-managed KMS key, defaulting to the AWS-managed key. To fix this, add a
new variable kms_key_id to the module inputs and set the key_id attribute of the
aws_ssm_parameter resource to this kms_key_id variable. Repeat this update for
the other SecureString parameters on lines 20-26, 29-35, and 38-46 to ensure all
use the customer-managed KMS key.
| variable "name_prefix" { | ||
| description = "Prefix for resource names" | ||
| type = string | ||
| } | ||
|
|
||
| variable "environment" { | ||
| description = "Environment name" | ||
| type = string | ||
| } | ||
|
|
||
| variable "documentdb_master_username" { | ||
| description = "Master username for DocumentDB" | ||
| type = string | ||
| } | ||
|
|
||
| variable "documentdb_master_password" { | ||
| description = "Master password for DocumentDB" | ||
| type = string | ||
| sensitive = true | ||
| } | ||
|
|
||
| variable "session_secret" { | ||
| description = "Session secret for the application" | ||
| type = string | ||
| sensitive = true | ||
| } | ||
|
|
||
| variable "redis_auth_token" { | ||
| description = "Auth token for Redis" | ||
| type = string | ||
| sensitive = true | ||
| } | ||
|
|
||
| variable "gcs_service_account_key" { | ||
| description = "Google Cloud Storage service account key (JSON)" | ||
| type = string | ||
| default = "" | ||
| sensitive = true | ||
| } | ||
|
|
||
| variable "tags" { | ||
| description = "Tags to apply to resources" | ||
| type = map(string) | ||
| default = {} | ||
| } No newline at end of file |
🛠️ Refactor suggestion
Add KMS CMK variable for SecureString parameters
To support specifying a customer-managed key in main.tf, introduce a kms_key_id input variable. This will allow passing your own KMS CMK ARN into the module for stronger encryption controls.
Example addition:
variable "kms_key_id" {
description = "ARN of the customer-managed KMS key for SecureString SSM parameters"
type = string
}🤖 Prompt for AI Agents
In terraform/modules/parameter_store/variables.tf around lines 1 to 45, add a
new input variable named kms_key_id to allow passing a customer-managed KMS key
ARN for SecureString SSM parameters. Define it as a string type with a
description explaining it is the ARN of the customer-managed KMS key. This
enables stronger encryption controls by allowing the module to use a specified
KMS CMK.
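As a sketch of how the module might consume the new variable, assuming an `aws_ssm_parameter` resource inside the `parameter_store` module (the resource name `session_secret` is illustrative, not from this PR):

```hcl
variable "kms_key_id" {
  description = "ARN of the customer-managed KMS key for SecureString SSM parameters"
  type        = string
  default     = null # when unset, SSM falls back to the AWS-managed aws/ssm key
}

resource "aws_ssm_parameter" "session_secret" {
  name   = "/${var.name_prefix}/${var.environment}/session_secret"
  type   = "SecureString"
  value  = var.session_secret
  key_id = var.kms_key_id
  tags   = var.tags
}
```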
```hcl
resource "aws_iam_role_policy" "github_actions_policy" {
  name = "${var.name_prefix}-${var.environment}-github-actions-policy"
  role = aws_iam_role.github_actions.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "ecr:GetAuthorizationToken",
          "ecr:BatchCheckLayerAvailability",
          "ecr:GetDownloadUrlForLayer",
          "ecr:BatchGetImage",
          "ecr:InitiateLayerUpload",
          "ecr:UploadLayerPart",
          "ecr:CompleteLayerUpload",
          "ecr:PutImage"
        ]
        Resource = [
          var.ecr_repository_arn,
          "*"
        ]
      },
      {
        Effect = "Allow"
        Action = [
          "ecs:UpdateService",
          "ecs:DescribeServices"
        ]
        Resource = "*"
      }
    ]
  })
}
```
🛠️ Refactor suggestion

Restrict wildcard resource usage in GitHub Actions IAM policy

The `aws_iam_role_policy.github_actions_policy` currently grants ECR and ECS permissions against `"*"`, which violates least-privilege principles and raises a Checkov alert. Narrow the scope by removing `"*"` and only allowing actions on the specific ECR repository ARN and the target ECS cluster or service ARN (pass these ARNs in as variables).

Example diff:

```diff
       Resource = [
-        var.ecr_repository_arn,
-        "*"
+        var.ecr_repository_arn
       ]
...
       {
         Effect = "Allow"
         Action = [
           "ecs:UpdateService",
           "ecs:DescribeServices"
         ]
-        Resource = "*"
+        Resource = var.ecs_service_arn
       }
```

Define new input variables `ecs_service_arn` (or `ecs_cluster_arn`) and use them here to enforce least privilege.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```hcl
resource "aws_iam_role_policy" "github_actions_policy" {
  name = "${var.name_prefix}-${var.environment}-github-actions-policy"
  role = aws_iam_role.github_actions.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "ecr:GetAuthorizationToken",
          "ecr:BatchCheckLayerAvailability",
          "ecr:GetDownloadUrlForLayer",
          "ecr:BatchGetImage",
          "ecr:InitiateLayerUpload",
          "ecr:UploadLayerPart",
          "ecr:CompleteLayerUpload",
          "ecr:PutImage"
        ]
        Resource = [
          var.ecr_repository_arn
        ]
      },
      {
        Effect = "Allow"
        Action = [
          "ecs:UpdateService",
          "ecs:DescribeServices"
        ]
        Resource = var.ecs_service_arn
      }
    ]
  })
}
```
🧰 Tools
🪛 Checkov (3.2.334)
[HIGH] 116-150: Ensure no IAM policies documents allow "*" as a statement's resource for restrictable actions
(CKV_AWS_355)
[HIGH] 116-150: Ensure IAM policies does not allow write access without constraints
(CKV_AWS_290)
🤖 Prompt for AI Agents
In terraform/modules/iam/main.tf around lines 116 to 150, the IAM policy uses
"*" as a resource for ECR and ECS actions, which violates least-privilege
principles. To fix this, remove the "*" from the Resource lists and instead add
variables for the specific ECS cluster or service ARNs (e.g., ecs_service_arn or
ecs_cluster_arn). Update the policy to reference these variables so permissions
are scoped only to the intended resources.
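A minimal sketch of the new inputs the prompt asks for, assuming the policy lives in `terraform/modules/iam` (the ARN format in the description is illustrative):

```hcl
variable "ecs_service_arn" {
  description = "ARN of the ECS service GitHub Actions may update (e.g., arn:aws:ecs:REGION:ACCOUNT:service/CLUSTER/SERVICE)"
  type        = string
}
```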
- Add Terraform plan files to .gitignore
- Hardcode availability zones to avoid IAM permission issues
- Fix S3 lifecycle configuration filters
- Fix ALB and CloudFront output references
- Split monolithic IAM policy into focused policies
- Create core management policy for IAM and DynamoDB
- Create infrastructure policy for VPC, ALB, and databases
- Create S3 management policy for bucket operations
- Create service management policy for ECR, ECS, and CloudFront
Actionable comments posted: 0
♻️ Duplicate comments (2)
terraform/modules/s3/main.tf (2)

50-84: ⚠️ Potential issue — Add rule to abort incomplete multipart uploads for the attachments bucket

Your lifecycle configuration for the attachments bucket is missing a rule to clean up failed multipart uploads, which can accumulate orphaned parts and incur storage costs.

Apply the following diff to add an `abort_incomplete_multipart_upload` rule:

```diff
 resource "aws_s3_bucket_lifecycle_configuration" "attachments" {
   bucket = aws_s3_bucket.attachments.id

   rule {
     id     = "expire_non_current_versions"
     status = "Enabled"

     filter {
       prefix = ""
     }

     noncurrent_version_expiration {
       noncurrent_days = 30
     }
+
+    abort_incomplete_multipart_upload {
+      days_after_initiation = 7
+    }
   }
 }
```

🧰 Tools
🪛 Checkov (3.2.334)
[MEDIUM] 50-84: Ensure S3 lifecycle configuration sets period for aborting failed uploads
(CKV_AWS_300)
128-148: ⚠️ Potential issue — Add rule to abort incomplete multipart uploads for the logs bucket

Similarly, the logs bucket lifecycle config lacks cleanup for failed multipart uploads. Add this to prevent storage bloat:

```diff
 resource "aws_s3_bucket_lifecycle_configuration" "logs" {
   bucket = aws_s3_bucket.logs.id

   rule {
     id     = "log_lifecycle"
     status = "Enabled"

     filter {
       prefix = ""
     }

     transition {
       days          = 90
       storage_class = "GLACIER"
     }

     expiration {
       days = 365
     }
+
+    abort_incomplete_multipart_upload {
+      days_after_initiation = 7
+    }
   }
 }
```

🧰 Tools
🪛 Checkov (3.2.334)
[MEDIUM] 128-148: Ensure S3 lifecycle configuration sets period for aborting failed uploads
(CKV_AWS_300)
🧹 Nitpick comments (1)

terraform/terraform-service-management-policy.json (1)

61-66: Optional: Remove legacy OAI permissions if unused

Since you're provisioning CloudFront Origin Access Control (OAC), the `cloudfront:CreateCloudFrontOriginAccessIdentity` and `cloudfront:DeleteCloudFrontOriginAccessIdentity` actions may be unnecessary. Consider removing them to narrow the policy.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- .gitignore (1 hunks)
- terraform/main.tf (1 hunks)
- terraform/modules/s3/main.tf (1 hunks)
- terraform/outputs.tf (1 hunks)
- terraform/terraform-core-management-policy.json (1 hunks)
- terraform/terraform-infrastructure-management-policy.json (1 hunks)
- terraform/terraform-s3-management-policy.json (1 hunks)
- terraform/terraform-service-management-policy.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- .gitignore
- terraform/main.tf
- terraform/outputs.tf
🧰 Additional context used
🪛 Checkov (3.2.334)
terraform/modules/s3/main.tf
[MEDIUM] 50-84: Ensure S3 lifecycle configuration sets period for aborting failed uploads
(CKV_AWS_300)
[MEDIUM] 128-148: Ensure S3 lifecycle configuration sets period for aborting failed uploads
(CKV_AWS_300)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: e2e-tests
- GitHub Check: unit-tests
- GitHub Check: lint
- GitHub Check: security-scan
🔇 Additional comments (4)
terraform/terraform-s3-management-policy.json (1)

1-67: S3 management policy looks good

The IAM policy correctly scopes bucket-level and object-level permissions for Terraform's S3 backend and attachment buckets, following least-privilege principles with explicit ARNs. No changes required.

terraform/terraform-core-management-policy.json (1)

1-79: Core Terraform management policy is comprehensive

This policy appropriately grants permissions for DynamoDB state locking, IAM operations, CloudWatch Logs, and STS. It aligns with Terraform's backend requirements.

terraform/terraform-service-management-policy.json (1)

1-78: Service management policy is complete

The policy grants the necessary ECR, ECS, SSM Parameter Store, and CloudFront permissions for Terraform automation.

terraform/terraform-infrastructure-management-policy.json (1)

1-130: Infrastructure management policy is well-scoped

This IAM policy covers the EC2 VPC, ELB, DocumentDB/RDS, and ElastiCache resources needed by Terraform. Actions are correctly grouped, and the scope is appropriate for infrastructure provisioning.
- Simplify Redis cluster endpoint output to use primary endpoint address only
- Move DocumentDB, ALB, CloudFront, and ACM permissions to core management policy
- Remove duplicate ELB and CloudFront permissions from infrastructure and service policies
- Consolidate AWS service permissions for better organization
Actionable comments posted: 2
🧹 Nitpick comments (2)

terraform/terraform-infrastructure-management-policy.json (1)

4-52: Use least privilege by scoping resources

Currently, this Statement grants all EC2 VPC and subnet operations on all resources (`"*"`). For better security, consider scoping the `Resource` field to the specific VPC, subnets, and related resources managed by Terraform, or add IAM conditions (e.g., based on resource tags) to enforce least privilege.

terraform/terraform-core-management-policy.json (1)

19-53: Restrict `iam:PassRole` to specific Terraform roles

Granting `iam:PassRole` on `*` can allow privilege escalation. For least privilege, specify the ARN(s) of the IAM role(s) that Terraform will pass to AWS services:

```diff
-  "Action": ["iam:PassRole", ...],
-  "Resource": "*"
+  "Action": ["iam:PassRole"],
+  "Resource": "arn:aws:iam::ACCOUNT_ID:role/terraform-*"
```

Adjust the ARN pattern to match your Terraform-managed roles.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- terraform/modules/redis/outputs.tf (1 hunks)
- terraform/terraform-core-management-policy.json (1 hunks)
- terraform/terraform-infrastructure-management-policy.json (1 hunks)
- terraform/terraform-service-management-policy.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- terraform/modules/redis/outputs.tf
- terraform/terraform-service-management-policy.json
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: unit-tests
- GitHub Check: e2e-tests
- GitHub Check: lint
- GitHub Check: security-scan
🔇 Additional comments (6)
terraform/terraform-infrastructure-management-policy.json (1)

53-75: Consider ElastiCache snapshot permissions

The current ElastiCache permissions cover subnet groups, clusters, and replication groups, but if your Terraform Redis module manages snapshots (e.g., automated backups or manual snapshots), you may need to add:

- elasticache:CreateSnapshot
- elasticache:DescribeSnapshots
- elasticache:DeleteSnapshot

Please verify whether snapshot operations are required and update accordingly.

terraform/terraform-core-management-policy.json (5)

5-17: DynamoDB lock table permissions scoped correctly

The Statement precisely restricts DynamoDB actions to the `sf-website-terraform-locks` table. This aligns with Terraform state-locking best practices.

55-69: CloudWatch Logs permissions look good

The Statement covers creation, deletion, retention-policy management, and tagging for log groups. These actions align with Terraform-managed CloudWatch Logs operations.

71-76: STS identity retrieval permission approved

Allowing `sts:GetCallerIdentity` is necessary for Terraform to detect the current AWS account and IAM identity.

109-130: ALB permissions are comprehensive

The permissions for Elastic Load Balancing cover load balancer and target group lifecycle, listeners, and tagging. This aligns with Terraform ECS and ALB module requirements.

152-165: ACM certificate permissions look correct

The Statement provides the necessary ACM actions for certificate management, listing, and tagging.
```json
{
  "Sid": "TerraformCloudFrontPermissions",
  "Effect": "Allow",
  "Action": [
    "cloudfront:CreateDistribution",
    "cloudfront:DeleteDistribution",
    "cloudfront:GetDistribution",
    "cloudfront:UpdateDistribution",
    "cloudfront:ListDistributions",
    "cloudfront:CreateOriginAccessControl",
    "cloudfront:DeleteOriginAccessControl",
    "cloudfront:GetOriginAccessControl",
    "cloudfront:UpdateOriginAccessControl",
    "cloudfront:ListOriginAccessControls",
    "cloudfront:TagResource",
    "cloudfront:UntagResource",
    "cloudfront:ListTagsForResource"
  ],
  "Resource": "*"
},
```
Include GetDistributionConfig for CloudFront

Terraform's `aws_cloudfront_distribution` resource requires both `cloudfront:GetDistribution` and `cloudfront:GetDistributionConfig` to read existing distribution configurations before updates. Please add:

```diff
 "Action": [
+  "cloudfront:GetDistributionConfig",
   ...
 ]
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```json
{
  "Sid": "TerraformCloudFrontPermissions",
  "Effect": "Allow",
  "Action": [
    "cloudfront:GetDistributionConfig",
    "cloudfront:CreateDistribution",
    "cloudfront:DeleteDistribution",
    "cloudfront:GetDistribution",
    "cloudfront:UpdateDistribution",
    "cloudfront:ListDistributions",
    "cloudfront:CreateOriginAccessControl",
    "cloudfront:DeleteOriginAccessControl",
    "cloudfront:GetOriginAccessControl",
    "cloudfront:UpdateOriginAccessControl",
    "cloudfront:ListOriginAccessControls",
    "cloudfront:TagResource",
    "cloudfront:UntagResource",
    "cloudfront:ListTagsForResource"
  ],
  "Resource": "*"
},
```
🤖 Prompt for AI Agents
In terraform/terraform-core-management-policy.json around lines 132 to 150, the
CloudFront permissions are missing the "cloudfront:GetDistributionConfig"
action, which is required by Terraform's aws_cloudfront_distribution resource to
read distribution configurations. Add "cloudfront:GetDistributionConfig" to the
"Action" array alongside the existing CloudFront permissions to ensure proper
access.
```json
{
  "Sid": "TerraformDocumentDBPermissions",
  "Effect": "Allow",
  "Action": [
    "rds:DescribeGlobalClusters",
    "rds:CreateGlobalCluster",
    "rds:DeleteGlobalCluster",
    "rds:ModifyGlobalCluster",
    "rds:CreateDBCluster",
    "rds:DeleteDBCluster",
    "rds:DescribeDBClusters",
    "rds:ModifyDBCluster",
    "rds:CreateDBSubnetGroup",
    "rds:DeleteDBSubnetGroup",
    "rds:DescribeDBSubnetGroups",
    "rds:ModifyDBSubnetGroup",
    "rds:CreateDBClusterParameterGroup",
    "rds:DeleteDBClusterParameterGroup",
    "rds:DescribeDBClusterParameterGroups",
    "rds:ModifyDBClusterParameterGroup",
    "rds:CreateDBInstance",
    "rds:DeleteDBInstance",
    "rds:DescribeDBInstances",
    "rds:ModifyDBInstance",
    "rds:AddTagsToResource",
    "rds:RemoveTagsFromResource",
    "rds:ListTagsForResource"
  ],
  "Resource": "*"
},
```
Missing DocumentDB-specific (docdb:) permissions

This Statement uses `rds:` actions, but Amazon DocumentDB requires `docdb:`-prefixed actions. Terraform's `aws_docdb_cluster` resources rely on DocumentDB APIs (e.g., `docdb:CreateDBCluster`, `docdb:DeleteDBInstance`, and related). Without these, Terraform may fail to manage DocumentDB clusters. Please replace or supplement the `rds:` actions with the corresponding `docdb:` actions:

```diff
 {
+  "Sid": "TerraformDocumentDBPermissionsDocDB",
+  "Effect": "Allow",
+  "Action": [
+    "docdb:CreateDBCluster",
+    "docdb:DeleteDBCluster",
+    "docdb:DescribeDBClusters",
+    "docdb:ModifyDBCluster",
+    "docdb:CreateDBInstance",
+    "docdb:DeleteDBInstance",
+    "docdb:DescribeDBInstances",
+    "docdb:ModifyDBInstance",
+    "docdb:CreateDBSubnetGroup",
+    "docdb:DeleteDBSubnetGroup",
+    "docdb:DescribeDBSubnetGroups",
+    "docdb:ModifyDBSubnetGroup",
+    "docdb:AddTagsToResource",
+    "docdb:RemoveTagsFromResource",
+    "docdb:ListTagsForResource"
+  ],
+  "Resource": "*"
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```json
// … previous statements …
{
  "Sid": "TerraformDocumentDBPermissions",
  "Effect": "Allow",
  "Action": [
    "rds:DescribeGlobalClusters",
    "rds:CreateGlobalCluster",
    "rds:DeleteGlobalCluster",
    "rds:ModifyGlobalCluster",
    "rds:CreateDBCluster",
    "rds:DeleteDBCluster",
    "rds:DescribeDBClusters",
    "rds:ModifyDBCluster",
    "rds:CreateDBSubnetGroup",
    "rds:DeleteDBSubnetGroup",
    "rds:DescribeDBSubnetGroups",
    "rds:ModifyDBSubnetGroup",
    "rds:CreateDBClusterParameterGroup",
    "rds:DeleteDBClusterParameterGroup",
    "rds:DescribeDBClusterParameterGroups",
    "rds:ModifyDBClusterParameterGroup",
    "rds:CreateDBInstance",
    "rds:DeleteDBInstance",
    "rds:DescribeDBInstances",
    "rds:ModifyDBInstance",
    "rds:AddTagsToResource",
    "rds:RemoveTagsFromResource",
    "rds:ListTagsForResource"
  ],
  "Resource": "*"
},
{
  "Sid": "TerraformDocumentDBPermissionsDocDB",
  "Effect": "Allow",
  "Action": [
    "docdb:CreateDBCluster",
    "docdb:DeleteDBCluster",
    "docdb:DescribeDBClusters",
    "docdb:ModifyDBCluster",
    "docdb:CreateDBInstance",
    "docdb:DeleteDBInstance",
    "docdb:DescribeDBInstances",
    "docdb:ModifyDBInstance",
    "docdb:CreateDBSubnetGroup",
    "docdb:DeleteDBSubnetGroup",
    "docdb:DescribeDBSubnetGroups",
    "docdb:ModifyDBSubnetGroup",
    "docdb:AddTagsToResource",
    "docdb:RemoveTagsFromResource",
    "docdb:ListTagsForResource"
  ],
  "Resource": "*"
}
// … following statements …
```
🤖 Prompt for AI Agents
In terraform/terraform-core-management-policy.json between lines 79 and 107, the
IAM policy statement currently uses only `rds:` actions, but for managing Amazon
DocumentDB clusters with Terraform, you need to include the corresponding
`docdb:` actions. Update the "Action" array to replace or supplement the
existing `rds:` actions with the equivalent `docdb:` actions such as
`docdb:CreateDBCluster`, `docdb:DeleteDBInstance`, and others required for
DocumentDB management to ensure Terraform can properly manage DocumentDB
resources.
- Add elasticloadbalancing DescribeLoadBalancerAttributes permission
- Add elasticloadbalancing DescribeTargetGroupAttributes permission
- Add CloudFront Origin Access Identity management permissions
- Add ECS service-linked role for proper service management
- Make container configuration configurable (CPU, memory, port)
- Add auto-scaling configuration with CPU and memory targets
- Make log retention and scaling parameters configurable
- Filter out empty values from secrets configuration
- Add module outputs for auto-scaling resources
Actionable comments posted: 2
🧹 Nitpick comments (10)
terraform/modules/ecs/main.tf (1)

25-25: Make container name configurable

Avoid hard-coding the container name `"app"`. Introduce a variable (e.g., `var.container_name`) to improve flexibility across different services.

terraform/modules/ecs/variables.tf (2)

1-9: Add validation for naming consistency

Consider enforcing constraints on `name_prefix` and `environment` (e.g., allowed character set, length limits, and a predefined list of environment values like `"dev"`, `"staging"`, `"prod"`) to prevent invalid or inconsistent resource names across deployments.

26-29: Optional: Validate ECR URL format

You may add a `validation` block with a regex to ensure `ecr_repository_url` follows the ECR URL pattern (e.g., `<account>.dkr.ecr.<region>.amazonaws.com/<repo>`). Early validation can catch misconfigurations before apply.

terraform/variables.tf (7)

4-13: Add validation for region and environment

Consider adding `validation` blocks for `aws_region` to restrict it to supported AWS regions and for `environment` to enforce a known set of environment names (e.g., `["dev", "staging", "prod", "qa", "demo", "sandbox"]`). This reduces typos and misconfigurations.

15-23: Enhance tagging strategy

The `default_tags` map is clear but could include an `Environment = var.environment` entry. This automatically labels resources with their environment context.

37-51: Approve domain-related variables

The `domain_name`, `media_domain_name`, and `certificate_arn` variables are well documented. Optionally, you could add a regex-based `validation` for `certificate_arn` to match ACM ARN patterns.

83-88: Clarify optional GCS key usage

Since `gcs_service_account_key` is optional by defaulting to an empty string, ensure downstream modules skip any GCS-related resources when it's unset, or document that behavior.

110-115: Review default image tag usage

Using `"latest"` as the default for `container_image_tag` can reduce operational overhead but may hinder immutability in production. Recommend pinning explicit tags via CI/CD pipelines.

185-189: Optional: Restrict DocumentDB instance types

You might add a `validation` block to restrict `documentdb_instance_class` to a predefined list of supported classes (e.g., `["db.t3.medium", "db.r5.large"]`).

191-195: Optional: Restrict Redis node types

Similarly, consider validating `redis_node_type` against a known set of ElastiCache node types to catch configuration errors early.
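A minimal sketch of the suggested `validation` block, assuming the existing `environment` variable in `terraform/variables.tf` (the allowed values shown are illustrative; adjust to the environments you actually deploy):

```hcl
variable "environment" {
  description = "Environment name"
  type        = string

  validation {
    # Allowed values are illustrative, not from this PR.
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "environment must be one of: dev, staging, prod."
  }
}
```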
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- .gitignore (1 hunks)
- terraform/main.tf (1 hunks)
- terraform/modules/ecs/main.tf (1 hunks)
- terraform/modules/ecs/outputs.tf (1 hunks)
- terraform/modules/ecs/variables.tf (1 hunks)
- terraform/terraform.tfvars.example (1 hunks)
- terraform/variables.tf (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- terraform/terraform.tfvars.example
🚧 Files skipped from review as they are similar to previous changes (3)
- .gitignore
- terraform/modules/ecs/outputs.tf
- terraform/main.tf
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: e2e-tests
- GitHub Check: lint
- GitHub Check: unit-tests
- GitHub Check: security-scan
🔇 Additional comments (20)
terraform/modules/ecs/main.tf (1)

2-11: ECS cluster resource: correct configuration

Cluster is defined with Container Insights enabled and tags applied appropriately.

terraform/modules/ecs/variables.tf (7)

11-24: Approve network ID variables

The `vpc_id`, `subnet_ids`, and `security_group_ids` inputs are correctly typed (`string` and `list(string)`) with clear descriptions. This aligns with how the root module will supply networking resources.

31-39: Approve IAM role ARN variables

The `task_role_arn` and `execution_role_arn` variables are well defined as `string` with precise descriptions. They will integrate cleanly with the IAM module outputs.

41-44: Approve ALB target group input

`target_group_arn` is correctly declared and will allow the ECS service to register tasks with the Application Load Balancer target group.

46-59: Approve container resource variables

Defining `container_cpu`, `container_memory`, and `container_port` without defaults in this module is appropriate since the root module provides defaults. Types and descriptions align with ECS Fargate requirements.

61-74: Approve ECS task scaling configuration

The `log_retention_days`, `ecs_desired_count`, and `ecs_max_capacity` variables are declared correctly. If you want additional safety, you could duplicate the root-level validation for log retention here, but it's not strictly necessary.

76-92: Approve optional maps for container settings

Using `environment_variables`, `secrets`, and `tags` as default-empty `map(string)` is a solid pattern for optional settings injection without forcing every caller to provide them.

94-97: Ensure `service_linked_role_arn` is supplied

`service_linked_role_arn` is required and has no default. Verify that the root module or caller passes this value; otherwise Terraform will error. If you intend to let AWS auto-create the service-linked role, consider defaulting to an empty string or documenting that behavior.

terraform/variables.tf (12)

25-35: Approve VPC CIDR validation

Using `can(cidrhost(var.vpc_cidr, 0))` to ensure a valid IPv4 CIDR block is a solid approach.

54-69: Approve DocumentDB credentials

Marking `documentdb_master_username` and `documentdb_master_password` as `sensitive`, with a minimum length check for the password, is aligned with security best practices.

71-81: Approve session secret validation

`session_secret` is sensitive and has a strict length requirement (≥32), which is appropriate for cryptographic keys.

90-94: Approve GitHub Actions repo variable

`github_repository` is required for OIDC integration and is clearly described.

96-101: Approve Slack webhook input

`slack_webhook_url` marked as `sensitive` is appropriate for alerting configuration.

103-108: Approve optional Route 53 zone

Defaulting `route53_zone_id` to an empty string cleanly makes DNS record creation optional.

117-125: Approve container CPU validation

Enforcing allowed CPU units through `contains([...])` helps prevent unsupported configurations.

128-137: Approve container memory validation

The memory range check (512–30720 MB) matches ECS requirements and guards against invalid inputs.

139-147: Approve container port validation

Validating that `container_port` falls within the TCP port range (1–65535) is a good safeguard.

150-159: Approve log retention options

Limiting `log_retention_days` to supported CloudWatch retention periods prevents silent misconfigurations.

161-170: Approve ECS desired count constraints

Constraining `ecs_desired_count` between 1 and 10 strikes a balance between flexibility and safety.

173-182: Approve ECS max capacity validation

Ensuring `ecs_max_capacity >= ecs_desired_count` avoids misconfigured autoscaling setups.
```hcl
container_definitions = jsonencode([
  {
    name  = "app"
    image = "${var.ecr_repository_url}:latest"
```
🛠️ Refactor suggestion
Avoid using the “latest” image tag
Referencing :latest can lead to non-deterministic deployments. Use a versioned tag or digest (e.g., "${var.ecr_repository_url}:${var.image_tag}" or specifying imageDigest) for reproducible builds.
I can help refactor this to use a variable-based tag or digest if needed.
🤖 Prompt for AI Agents
In terraform/modules/ecs/main.tf at line 26, the image tag is hardcoded to
"latest", which can cause non-deterministic deployments. Replace the ":latest"
tag with a variable-based tag such as "${var.image_tag}" or use an image digest
variable to ensure reproducible builds. Update the code to reference this
variable instead of the fixed "latest" tag.
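A minimal sketch of the variable-based tag the prompt describes; the variable name `container_image_tag` mirrors the root-module input mentioned elsewhere in this review, and the fragment is illustrative rather than the module's actual code:

```hcl
variable "container_image_tag" {
  description = "Image tag (or digest) to deploy; avoid mutable tags like \"latest\" in production"
  type        = string
}

# Then, inside the task definition's container_definitions:
#   image = "${var.ecr_repository_url}:${var.container_image_tag}"
```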
terraform/modules/ecs/main.tf
Outdated
```hcl
  depends_on = [var.target_group_arn, var.service_linked_role_arn]
```
Invalid depends_on references

The `depends_on` block references variables instead of resource addresses. Terraform already infers dependencies on the ALB target group and service-linked role via their references. Remove this block or replace it with actual resource references.

```diff
 resource "aws_ecs_service" "main" {
-  depends_on = [var.target_group_arn, var.service_linked_role_arn]
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```hcl
resource "aws_ecs_service" "main" {
  # …other configuration…
  # explicit depends_on removed; Terraform will infer dependencies automatically
}
```
🤖 Prompt for AI Agents
In terraform/modules/ecs/main.tf at lines 93 to 94, the depends_on block
incorrectly references variables instead of resource addresses, which is
invalid. Remove the depends_on block entirely or replace the variable references
with the actual resource addresses that need to be depended on, as Terraform
automatically infers dependencies from resource references.
- Parameterize container port across ALB, ECS, and security group modules
- Make health check path configurable in ALB module
- Make log retention days configurable in CloudWatch module
- Add target_type = ip to ALB target group for ECS Fargate compatibility
- Add IAM service-linked role permissions to core management policy
Actionable comments posted: 1
♻️ Duplicate comments (1)
terraform/modules/alb/main.tf (1)
2-12: 🛠️ Refactor suggestion: Enforce dropping invalid headers and set idle timeout

Checkov recommends configuring the ALB to drop invalid or sensitive HTTP headers (CKV_AWS_131), and it's a best practice to set an idle timeout. Please add the following attributes inside the `aws_lb` resource:

```diff
 resource "aws_lb" "main" {
   ...
+  drop_invalid_header_fields = true
+  idle_timeout               = 60
   ...
 }
```

🧰 Tools
🪛 Checkov (3.2.334)
[MEDIUM] 2-12: Ensure that ALB drops HTTP headers
(CKV_AWS_131)
🧹 Nitpick comments (2)
terraform/modules/alb/main.tf (1)
38-51: Consider updating TLS policy and enabling HTTP/2

The listener's `ssl_policy` is set to `ELBSecurityPolicy-TLS-1-2-2017-01`. You may want to use a more recent policy (e.g., `ELBSecurityPolicy-TLS-1-2-Ext-2018-06`). Note that HTTP/2 support is controlled by `enable_http2` on the `aws_lb` resource itself (it defaults to `true` for application load balancers), not on the listener:

```diff
 resource "aws_lb_listener" "https" {
   ...
-  ssl_policy = "ELBSecurityPolicy-TLS-1-2-2017-01"
+  ssl_policy = "ELBSecurityPolicy-TLS-1-2-Ext-2018-06"
   ...
 }
```

terraform/main.tf (1)
54-63: Switch from hardcoded AZs to data source when possible
Hardcoding availability zones can drift if you change regions. If you have `ec2:DescribeAvailabilityZones` permissions, consider using the data source instead:

```diff
 locals {
-  azs = ["us-east-1a", "us-east-1b"]
+  azs = slice(data.aws_availability_zones.available.names, 0, 2)
 }
```
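The `slice(...)` expression in the suggestion assumes a `data "aws_availability_zones"` block exists; a minimal sketch of the declaration it would read from (names are illustrative):

```hcl
# Sketch: declares the data source the suggested slice() expression reads.
# Requires ec2:DescribeAvailabilityZones permission at plan time.
data "aws_availability_zones" "available" {
  state = "available"
}

locals {
  # First two AZs of the current region, instead of hardcoded names
  azs = slice(data.aws_availability_zones.available.names, 0, 2)
}
```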
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- terraform/main.tf (1 hunks)
- terraform/modules/alb/main.tf (1 hunks)
- terraform/modules/alb/variables.tf (1 hunks)
- terraform/modules/cloudwatch/main.tf (1 hunks)
- terraform/modules/cloudwatch/variables.tf (1 hunks)
- terraform/modules/security_groups/main.tf (1 hunks)
- terraform/modules/security_groups/variables.tf (1 hunks)
- terraform/terraform-core-management-policy.json (1 hunks)
- terraform/variables.tf (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (7)
- terraform/modules/cloudwatch/main.tf
- terraform/modules/security_groups/variables.tf
- terraform/modules/security_groups/main.tf
- terraform/modules/cloudwatch/variables.tf
- terraform/modules/alb/variables.tf
- terraform/variables.tf
- terraform/terraform-core-management-policy.json
🧰 Additional context used
🪛 Checkov (3.2.334)
terraform/modules/alb/main.tf
[MEDIUM] 2-12: Ensure that ALB drops HTTP headers
(CKV_AWS_131)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: e2e-tests
- GitHub Check: lint
- GitHub Check: unit-tests
- GitHub Check: security-scan
🔇 Additional comments (3)
terraform/modules/alb/main.tf (2)
15-35: Target group configuration looks good
You've correctly set `target_type = "ip"` for ECS Fargate integration. Health check settings and tagging are all in order.
55-70: HTTP→HTTPS redirect listener is correctly configured
Your redirect listener will send all port 80 traffic to HTTPS with a 301. This aligns with best practices.

terraform/main.tf (1)

102-110: Details
❓ Verification inconclusive
Verify S3 module interface
The `s3` module may expect inputs like `domain_name`, `media_domain_name`, `cloudfront_distribution_arn`, or `ecs_task_role_arn`, which aren't passed here. Please confirm the module's `variables.tf` and update the call accordingly. You can run:
🏁 Script executed:
```shell
#!/bin/bash
# List variables defined by the s3 module
rg -n '^variable ' terraform/modules/s3/variables.tf
# Search for domain_name usage in s3 module
rg -n 'domain_name' terraform/modules/s3
```

Length of output: 754
🏁 Script executed:
```shell
#!/bin/bash
# Show definitions for relevant variables to check for defaults
rg -n -A5 '^variable "domain_name"' terraform/modules/s3/variables.tf
rg -n -A5 '^variable "media_domain_name"' terraform/modules/s3/variables.tf
rg -n -A5 '^variable "cloudfront_distribution_arn"' terraform/modules/s3/variables.tf
rg -n -A5 '^variable "ecs_task_role_arn"' terraform/modules/s3/variables.tf
```

Length of output: 912
Verify S3 module CORS and policy inputs

The `s3` module defines the following variables (each defaults to an empty string):

- `var.domain_name`
- `var.media_domain_name`
- `var.cloudfront_distribution_arn`
- `var.ecs_task_role_arn`

Without overrides, CORS origins will be built as `"https://"` and policy statements relying on an ARN may be skipped. Please confirm whether you need to supply these values in your module call (or adjust the module to handle blank defaults):

• File: terraform/main.tf, module `"s3"` block (lines 102–110)

Example update (if required):

```hcl
module "s3" {
  source      = "./modules/s3"
  name_prefix = local.name_prefix
  environment = local.environment
  tags        = local.common_tags

  domain_name                 = var.domain_name
  media_domain_name           = var.media_domain_name
  cloudfront_distribution_arn = var.cloudfront_distribution_arn
  ecs_task_role_arn           = var.ecs_task_role_arn
}
```
```hcl
module "alb" {
  source = "./modules/alb"

  name_prefix = local.name_prefix
  environment = local.environment

  vpc_id             = module.vpc.vpc_id
  subnet_ids         = module.vpc.public_subnet_ids
  security_group_ids = [module.security_groups.alb_sg_id]

  domain_name     = var.domain_name
  certificate_arn = var.certificate_arn

  # Container configuration
  container_port    = var.container_port
  health_check_path = var.health_check_path

  tags = local.common_tags
}
```
💡 Verification agent
🧩 Analysis chain
Review unused domain_name input for ALB module
You’re passing domain_name = var.domain_name into the alb module, but the module’s main.tf doesn’t reference it. Either remove the unused variable or add logic (e.g., a Route 53 record) that leverages it.
You can verify with:
🏁 Script executed:
```shell
#!/bin/bash
# Check if the alb module defines or references domain_name
rg -n 'variable "domain_name"' terraform/modules/alb
rg -n 'domain_name' terraform/modules/alb
```

Length of output: 912
🏁 Script executed:
```shell
#!/bin/bash
set -e
echo "Listing files in terraform/modules/alb:"
find terraform/modules/alb -maxdepth 1 -type f
echo -e "\nSearching for a variable declaration of domain_name:"
rg -n 'variable "domain_name"' terraform/modules/alb
echo -e "\nSearching for any references to var.domain_name in the module:"
rg -n 'var\.domain_name' terraform/modules/alb
echo -e "\nSearching for any other occurrences of domain_name:"
rg -n 'domain_name' terraform/modules/alb
```

Length of output: 673
Unused domain_name variable in ALB module
The alb module declares variable "domain_name" in terraform/modules/alb/variables.tf (line 26) but never references it in main.tf or outputs.tf. Either remove this input from the module and drop the domain_name = var.domain_name argument in your root module, or add logic to consume it—e.g. create a Route 53 record:
• terraform/modules/alb/variables.tf:26 – remove or deprecate the variable
• terraform/modules/alb/main.tf – add something like:
```hcl
resource "aws_route53_record" "alb" {
  zone_id = data.aws_route53_zone.this.zone_id
  name    = var.domain_name
  type    = "A"

  alias {
    name                   = aws_lb.this.dns_name
    zone_id                = aws_lb.this.zone_id
    evaluate_target_health = true
  }
}
```

🤖 Prompt for AI Agents
In terraform/main.tf lines 193 to 211, the domain_name variable is passed to the
alb module but not used inside the module. To fix this, either remove the
domain_name input from the root module call and delete the corresponding
variable declaration in terraform/modules/alb/variables.tf around line 26, or
add logic inside terraform/modules/alb/main.tf to consume domain_name, such as
creating a Route 53 record resource that uses var.domain_name to point to the
ALB DNS name.
…nced ECS module with ElastiCache integration support
- Updated infrastructure management policy with ElastiCache permissions
- Added comprehensive Redis cluster configuration
- Improved security group and networking setup
- Added analysis documentation for infrastructure changes
Actionable comments posted: 0
♻️ Duplicate comments (1)
terraform/modules/alb/main.tf (1)
2-12: 🛠️ Refactor suggestion: Enforce dropping of invalid HTTP headers and set idle timeout

As per Checkov CKV_AWS_131, consider adding `drop_invalid_header_fields = true` to prevent leaking sensitive headers and `idle_timeout = 60` to align with AWS best practices for ALBs.

Proposed diff:

```diff
 resource "aws_lb" "main" {
   name               = "${var.name_prefix}-${var.environment}-alb"
   internal           = false
   load_balancer_type = "application"
   security_groups    = var.security_group_ids
   subnets            = var.subnet_ids

   enable_deletion_protection = false
+  drop_invalid_header_fields = true
+  idle_timeout               = 60

   tags = var.tags
 }
```
🪛 Checkov (3.2.334)
[MEDIUM] 2-12: Ensure that ALB drops HTTP headers
(CKV_AWS_131)
🧹 Nitpick comments (2)
terraform/modules/alb/main.tf (2)
37-58: Reduce downtime by enabling `create_before_destroy` for HTTPS listener

With `create_before_destroy = false`, updates to the listener can cause brief traffic disruption. Switching to `create_before_destroy = true` ensures the new listener is in place before the old one is removed:

```diff
 resource "aws_lb_listener" "https" {
   ...
   lifecycle {
-    create_before_destroy = false
+    create_before_destroy = true
   }
   ...
 }
```
60-84: Reevaluate lifecycle blocks for HTTP listener

This listener only performs an HTTP→HTTPS redirect. The custom `lifecycle` block may be unnecessary—Terraform's defaults should handle ordering based on the `depends_on`. Removing or simplifying it can reduce configuration noise.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- terraform/main.tf (1 hunks)
- terraform/modules/alb/main.tf (1 hunks)
- terraform/modules/ecs/main.tf (1 hunks)
- terraform/modules/ecs/variables.tf (1 hunks)
- terraform/terraform-core-management-policy.json (1 hunks)
- terraform/terraform-infrastructure-management-policy.json (1 hunks)
- terraform/terraform-service-management-policy.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
- terraform/terraform-service-management-policy.json
- terraform/terraform-infrastructure-management-policy.json
- terraform/modules/ecs/variables.tf
- terraform/modules/ecs/main.tf
- terraform/terraform-core-management-policy.json
- terraform/main.tf
🧰 Additional context used
🪛 Checkov (3.2.334)
terraform/modules/alb/main.tf
[MEDIUM] 2-12: Ensure that ALB drops HTTP headers
(CKV_AWS_131)
[MEDIUM] 15-35: Ensure AWS Load Balancer doesn't use HTTP protocol
(CKV_AWS_378)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: e2e-tests
- GitHub Check: unit-tests
- GitHub Check: security-scan
🔇 Additional comments (1)
terraform/modules/alb/main.tf (1)
15-35: Valid Fargate integration with `target_type = "ip"`

The inclusion of `target_type = "ip"` guarantees ECS Fargate tasks (awsvpc mode) register correctly with this target group. The health check configuration also aligns with common requirements.

🧰 Tools
🪛 Checkov (3.2.334)
[MEDIUM] 15-35: Ensure AWS Load Balancer doesn't use HTTP protocol
(CKV_AWS_378)
- Remove random provider dependency from terraform configuration
- Remove random_password resource for Redis auth token
- Add redis_auth_token variable with validation rules
- Update main.tf to use user-provided redis_auth_token variable
- Add redis_auth_token example to terraform.tfvars.example
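The user-provided `redis_auth_token` variable mentioned above might look like the following sketch. The exact validation rules in this repository's `terraform/variables.tf` may differ; the constraint shown reflects ElastiCache's documented AUTH token requirements (16–128 printable characters, excluding `@`, `"`, and `/`).

```hcl
# Hypothetical sketch of the variable described in the commit message.
variable "redis_auth_token" {
  description = "Auth token for the ElastiCache Redis cluster (operator-supplied, not generated)"
  type        = string
  sensitive   = true

  validation {
    # ElastiCache AUTH tokens must be 16-128 printable characters
    # and may not contain '@', '"', or '/'.
    condition = (
      length(var.redis_auth_token) >= 16 &&
      length(var.redis_auth_token) <= 128 &&
      !can(regex("[@\"/]", var.redis_auth_token))
    )
    error_message = "redis_auth_token must be 16-128 characters and must not contain '@', '\"', or '/'."
  }
}
```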
- Add SSM access policy for ECS execution role to retrieve parameters
- Expand Terraform service management policy with ECR and CloudWatch Logs permissions
- Refactor ApostropheCMS uploadfs configuration for AWS S3 integration
- Add environment: Development to manual-approval job
- Add environment: Development to destroy-apply job
- Simplify plan_only boolean expressions using negation syntax
- Move conditional from job level to step level in manual-approval
- Simplify job dependencies in destroy-apply workflow
- Move plan_only check to job level for manual-approval job
- Change step condition to only require approval for non-Development environments
Actionable comments posted: 1
♻️ Duplicate comments (12)
.github/workflows/destroy_aws_environment.yml (12)
25-31: Replace undefined `vars.ENV_NAME` and hardcoded environment value

Both the job-level `environment: Development` and the `get_env_name` step reference an undefined `vars.ENV_NAME`. Replace these with the workflow input and/or the normalization step output (`${{ inputs.environment }}`).

```diff
-    environment: Development
+    environment: ${{ inputs.environment }}

-      - name: Get Environment Name for ${{ vars.ENV_NAME }}
+      - name: Normalize environment name
         id: get_env_name
         uses: Entepotenz/change-string-case-action-min-dependencies@v1
         with:
-          string: ${{ vars.ENV_NAME }}
+          string: ${{ inputs.environment }}
```
🪛 YAMLlint (1.37.1)
[warning] 27-27: wrong indentation: expected 4 but found 6
(indentation)
43-46: Fix `tfvars` copy to use the correct environment input

The step is still pulling from `terraform-config/${{ vars.ENV_NAME }}.tfvars`. Update it to use `${{ inputs.environment }}` so the right `tfvars` file is selected.

```diff
-      - name: Copy tfvars for ${{ vars.ENV_NAME }}
+      - name: Copy tfvars for ${{ inputs.environment }}
         run: |
-          cat "terraform-config/${{ vars.ENV_NAME }}.tfvars" "deployment/environments/${{ vars.ENV_NAME }}.tfvars" > "deployment/terraform.tfvars"
+          cat "terraform-config/${{ inputs.environment }}.tfvars" "deployment/environments/${{ inputs.environment }}.tfvars" > "deployment/terraform.tfvars"
```
53-58: Use dynamic environment in Terraform init and plan

The backend config and plan output still reference `vars.ENV_NAME`. They must point to the chosen environment via `${{ inputs.environment }}` (or use the normalized output).

```diff
-          terraform init -backend-config="environments/backend-${{ vars.ENV_NAME }}.hcl"
+          terraform init -backend-config="environments/backend-${{ inputs.environment }}.hcl"
-          terraform plan -destroy -out="${{ vars.ENV_NAME }}-destroy.tfplan"
+          terraform plan -destroy -out="${{ inputs.environment }}-destroy.tfplan"
```
76-78: Correct S3 path for destroy plan upload

The `aws s3 cp` still uses `vars.ENV_NAME`. Replace it so uploads go to the proper environment folder.

```diff
-        run: aws s3 cp "deployment/${{ vars.ENV_NAME }}-destroy.tfplan" "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}-destroy.tfplan"
+        run: aws s3 cp "deployment/${{ inputs.environment }}-destroy.tfplan" "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}-destroy.tfplan"
```
85-87: Correct S3 path for lock file upload

Similarly, update the lock-file upload path to reference `${{ inputs.environment }}`.

```diff
-        run: aws s3 cp "deployment/.terraform.lock.hcl" "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}.terraform.lock.hcl"
+        run: aws s3 cp "deployment/.terraform.lock.hcl" "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}.terraform.lock.hcl"
```
97-107: Make manual-approval job dynamic for the selected environment

The `manual-approval` job still hardcodes `environment: Development` and references `vars.ENV_NAME` in the step name and issue title. Patch them to use `${{ inputs.environment }}`.

```diff
-    environment: Development
+    environment: ${{ inputs.environment }}
-      - name: Wait for approval to destroy ${{ vars.ENV_NAME }}
+      - name: Wait for approval to destroy ${{ inputs.environment }}
-          issue-title: "🚨 DESTROY ${{ vars.ENV_NAME }} AWS Environment 🚨"
+          issue-title: "🚨 DESTROY ${{ inputs.environment }} AWS Environment 🚨"
```
🪛 YAMLlint (1.37.1)
[warning] 100-100: wrong indentation: expected 4 but found 6
(indentation)
111-113: Use dynamic environment in the issue body

The issue body still shows `**Environment**: ${{ vars.ENV_NAME }}`, which will be empty. Swap to `${{ inputs.environment }}` for accuracy.

```diff
-            **Environment**: ${{ vars.ENV_NAME }}
+            **Environment**: ${{ inputs.environment }}
```
145-149: Fix `get_env_name` invocation in the destroy-apply job

This step is still feeding an undefined variable into the normalization action. Update to use the workflow input.

```diff
-      - name: Get Environment Name for ${{ vars.ENV_NAME }}
+      - name: Normalize environment name
         id: get_env_name
         uses: Entepotenz/change-string-case-action-min-dependencies@v1
         with:
-          string: ${{ vars.ENV_NAME }}
+          string: ${{ inputs.environment }}
```
🪛 YAMLlint (1.37.1)
[warning] 145-145: wrong indentation: expected 4 but found 6
(indentation)
160-161: Use dynamic environment when fetching the destroy plan

Replace `vars.ENV_NAME` with `${{ inputs.environment }}` so the correct plan is pulled.

```diff
-        run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}-destroy.tfplan" "deployment/${{ vars.ENV_NAME }}-destroy.tfplan"
+        run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}-destroy.tfplan" "deployment/${{ inputs.environment }}-destroy.tfplan"
```
168-169: Use dynamic environment when fetching the lock file

Update the S3 key to reference `${{ inputs.environment }}` instead of the undefined variable.

```diff
-        run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}.terraform.lock.hcl" "deployment/.terraform.lock.hcl"
+        run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}.terraform.lock.hcl" "deployment/.terraform.lock.hcl"
```
176-181: Use dynamic environment for Terraform apply in destroy-apply

The apply command still points to `vars.ENV_NAME` for both backend-config and plan file. Switch to `${{ inputs.environment }}`.

```diff
-          terraform init -backend-config="environments/backend-${{ vars.ENV_NAME }}.hcl"
+          terraform init -backend-config="environments/backend-${{ inputs.environment }}.hcl"
-          terraform apply "${{ vars.ENV_NAME }}-destroy.tfplan"
+          terraform apply "${{ inputs.environment }}-destroy.tfplan"
```
188-191: Use dynamic environment for cleanup of S3 artifacts

The cleanup commands still refer to `vars.ENV_NAME`. Update to `${{ inputs.environment }}` to remove the correct files.

```diff
-          aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}-destroy.tfplan" || true
+          aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}-destroy.tfplan" || true
-          aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}.terraform.lock.hcl" || true
+          aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}.terraform.lock.hcl" || true
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- .github/workflows/destroy_aws_environment.yml (1 hunks)
🧰 Additional context used
🪛 YAMLlint (1.37.1)
.github/workflows/destroy_aws_environment.yml
[warning] 27-27: wrong indentation: expected 4 but found 6
(indentation)
[warning] 100-100: wrong indentation: expected 4 but found 6
(indentation)
[error] 110-110: trailing spaces
(trailing-spaces)
[error] 114-114: trailing spaces
(trailing-spaces)
[error] 127-127: trailing spaces
(trailing-spaces)
[error] 133-133: trailing spaces
(trailing-spaces)
[error] 136-136: trailing spaces
(trailing-spaces)
[warning] 145-145: wrong indentation: expected 4 but found 6
(indentation)
```yaml
      - name: Show Destroy Plan Summary
        run: |
          cd deployment
          echo "## 🚨 DESTROY PLAN SUMMARY FOR ${{ vars.ENV_NAME }} 🚨" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "The following resources will be **DESTROYED**:" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          terraform show -no-color "${{ vars.ENV_NAME }}-destroy.tfplan" | head -50 >> $GITHUB_STEP_SUMMARY
```
Update summary step to reflect the dynamic environment
The summary header still uses vars.ENV_NAME, which is undefined and will display incorrectly. Swap to ${{ inputs.environment }} to ensure it matches the dispatched environment.
```diff
-          echo "## 🚨 DESTROY PLAN SUMMARY FOR ${{ vars.ENV_NAME }} 🚨" >> $GITHUB_STEP_SUMMARY
+          echo "## 🚨 DESTROY PLAN SUMMARY FOR ${{ inputs.environment }} 🚨" >> $GITHUB_STEP_SUMMARY
```

📝 Committable suggestion
```yaml
      - name: Show Destroy Plan Summary
        run: |
          cd deployment
          echo "## 🚨 DESTROY PLAN SUMMARY FOR ${{ inputs.environment }} 🚨" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "The following resources will be **DESTROYED**:" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          terraform show -no-color "${{ inputs.environment }}-destroy.tfplan" | head -50 >> $GITHUB_STEP_SUMMARY
```
🤖 Prompt for AI Agents
In .github/workflows/destroy_aws_environment.yml around lines 63 to 70, the
summary header uses the undefined variable vars.ENV_NAME, causing incorrect
display. Replace all instances of vars.ENV_NAME with inputs.environment to
correctly reference the dispatched environment and ensure the summary header
reflects the dynamic environment.
- Enable force_delete on ECR repository resource to allow deletion with images
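The `force_delete` change described above can be sketched as follows. This is an illustration only: the resource name (`aws_ecr_repository.main`) and repository naming scheme are assumptions, not this repository's verified configuration.

```hcl
# Sketch: force_delete lets `terraform destroy` remove the ECR repository
# even when it still contains images, instead of failing with RepositoryNotEmpty.
resource "aws_ecr_repository" "main" {
  name         = "${var.name_prefix}-${var.environment}-app"  # hypothetical naming
  force_delete = true

  image_scanning_configuration {
    scan_on_push = true
  }
}
```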
This documentation is essential for understanding the AWS infrastructure setup and helps both developers and operations teams navigate the system architecture effectively.