Changes from all commits (48 commits)
ce33511
Add AWS architecture diagram and documentation
killev May 24, 2025
3fcc647
Migrate infrastructure from MongoDB EC2 to managed services with comp…
killev May 24, 2025
196a33b
Standardize Terraform backend configuration and add initialization sc…
killev May 24, 2025
e5e4b60
Enhance Terraform backend initialization with improved error handling…
killev May 24, 2025
45d75c7
Refactor Terraform configuration and remove DynamoDB state locking
killev May 24, 2025
0bad9ce
Refactor Terraform configuration and split IAM policies
killev May 24, 2025
5f51b9c
Reorganize Terraform IAM policies and simplify Redis configuration
killev May 24, 2025
a3f4473
Add additional AWS permissions to Terraform core management policy
killev May 24, 2025
0ffa8ab
Enhance ECS infrastructure with configurable parameters and auto-scaling
killev May 24, 2025
a6bd59a
Make Terraform configuration more flexible and configurable
killev May 24, 2025
e807885
Update Terraform infrastructure configuration and IAM policies - Enha…
killev May 25, 2025
d824a1d
Update Terraform IAM policies for core and service management
killev May 25, 2025
aa4b964
Replace auto-generated Redis auth token with user-provided variable
killev May 25, 2025
156934d
Enhance AWS infrastructure configuration and S3 setup
killev May 26, 2025
37c7f8a
Add bastion host module for secure SSH access to infrastructure
killev May 26, 2025
b21f767
Improve bastion host configuration and IAM permissions
killev May 26, 2025
ff1a0d0
Add AWS deployment infrastructure and fix configuration issues
killev May 27, 2025
9f1d99d
Add Terraform deployment automation and release tracking
killev May 27, 2025
e72605e
feat: refactor DocumentDB connection to use separate environment vari…
killev May 27, 2025
6d4c974
Improve DocumentDB connection configuration for AWS compatibility
killev May 27, 2025
2851747
Update infrastructure configuration and deployment workflow - Update …
killev May 27, 2025
5d1a159
Merge commit '0db4fcbe6a3b20651351d3131cb2a51467191b61' into create-t…
killev May 27, 2025
5c8bc9e
Standardize environment naming and enhance Terraform configuration
killev May 27, 2025
7e5da24
Restructure infrastructure configuration and update CI/CD workflow
killev May 27, 2025
1b2fece
Update GitHub workflow for Terraform deployment configuration
killev May 27, 2025
15a40d5
Fix GitHub Actions workflow for terraform configuration
killev May 27, 2025
4616764
Reorganize deployment configuration into environments structure
killev May 27, 2025
a53edfa
Fix directory name typo in deployment configuration
killev May 27, 2025
283e35b
Fix Terraform configuration and AWS region in deploy workflow
killev May 27, 2025
4b67b30
Fix terraform plan output path and enable S3 plan storage
killev May 27, 2025
0e79606
Fix Terraform plan file paths in GitHub workflow
killev May 27, 2025
b400800
Fix AWS region configuration in deployment workflow
killev May 27, 2025
88e7564
Simplify GitHub workflow and update ECR repository naming
killev May 28, 2025
8b49799
Update Terraform version to 1.12.0 in apply job
killev May 28, 2025
ad646a3
Fix Terraform lock file consistency issue in GitHub Actions - Add ste…
killev May 28, 2025
03a5c5b
Fix duplicate cd command in terraform apply step
killev May 28, 2025
c2bb29c
Remove duplicate cd deployment command in workflow
killev May 28, 2025
9d2154a
Update Docker build step in AWS deployment workflow
killev May 28, 2025
797968f
Update Docker build configuration and ignore patterns
killev May 28, 2025
50d0594
Simplify ECS secrets configuration in Terraform
killev May 28, 2025
d21c003
Add AWS environment destruction workflow and documentation
killev Jun 5, 2025
5f884d9
Update wget version to 1.25.0-r1 in Dockerfile
killev Jun 5, 2025
b98f1ab
Resolve merge conflicts: keep full implementation and AWS certificate
killev Jun 7, 2025
775bd50
Add conditional approval for AWS environment destruction
killev Jun 7, 2025
168bb71
Add environment specification to destroy workflow jobs
killev Jun 7, 2025
a1aced8
Optimize GitHub Actions destroy workflow conditions
killev Jun 7, 2025
b59b21d
Fix manual approval conditions in destroy AWS environment workflow
killev Jun 7, 2025
3b5bac3
Add force delete option to ECR repository
killev Jun 7, 2025
2 changes: 1 addition & 1 deletion .cursor/rules/common
Submodule common updated 1 files
+9 −0 README.md
4 changes: 4 additions & 0 deletions .dockerignore
@@ -2,6 +2,10 @@
.git
.gitignore

# Submodules (not needed in container)
.cursor/
website-pages/

# Node.js dependencies
node_modules
coverage
574 changes: 96 additions & 478 deletions .github/workflows/deploy_to_aws.yml

Large diffs are not rendered by default.

175 changes: 160 additions & 15 deletions .github/workflows/destroy_aws_environment.yml
@@ -22,30 +22,175 @@ permissions:
jobs:
destroy-plan:
runs-on: ubuntu-latest
environment: ${{ inputs.environment }}
environment: Development
⚠️ Potential issue

Hardcoded environment context

The job uses environment: Development statically. To honor the user’s input, change this to:

environment: ${{ inputs.environment }}

so the workflow dynamically targets the selected environment.

🤖 Prompt for AI Agents
In .github/workflows/destroy_aws_environment.yml at line 25, replace the
hardcoded environment value "Development" with the dynamic expression "${{
inputs.environment }}" to ensure the workflow uses the environment specified by
the user's input instead of a fixed value.

steps:
- name: Placeholder - Destroy plan step
run: |
echo "This step is empty - destroy plan functionality not implemented on main branch"
echo "Environment: ${{ inputs.environment }}"
echo "Plan only: ${{ inputs.plan_only }}"
- name: Get Environment Name for ${{ vars.ENV_NAME }}
id: get_env_name
uses: Entepotenz/change-string-case-action-min-dependencies@v1
with:
string: ${{ vars.ENV_NAME }}

Comment on lines +27 to +32
⚠️ Potential issue

Undefined variable vars.ENV_NAME

The get_env_name step references ${{ vars.ENV_NAME }}, but no such variable is defined. It should read from the workflow input, for example:

with:
  string: ${{ inputs.environment }}

Adjust downstream references accordingly.

🧰 Tools: 🪛 YAMLlint (1.37.1): [warning] 27-27: wrong indentation: expected 4 but found 6 (indentation)

🤖 Prompt for AI Agents
In .github/workflows/destroy_aws_environment.yml around lines 27 to 32, the step
get_env_name incorrectly references an undefined variable vars.ENV_NAME. Replace
all occurrences of ${{ vars.ENV_NAME }} with ${{ inputs.environment }} to
correctly use the workflow input variable. Also, update any downstream steps
that depend on this variable to use the corrected input reference.
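The case normalization the `get_env_name` step delegates to the action can be reproduced in plain shell; the snippet below is only an illustrative stand-in (variable names are hypothetical, and the action's actual output variable name is not shown in this diff):

```shell
# Illustrative stand-in for the case-normalization step: lowercase the
# environment name so it matches file names like "development.tfvars".
# (Variable names here are hypothetical, not taken from the workflow.)
environment="Development"
env_name=$(printf '%s' "$environment" | tr '[:upper:]' '[:lower:]')
echo "$env_name"
```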

- name: Checkout repo
uses: actions/checkout@v4

- name: Checkout config repository
uses: actions/checkout@v4
with:
repository: 'speedandfunction/website-ci-secret'
path: 'terraform-config'
token: ${{ secrets.PAT }}

- name: Copy tfvars for ${{ vars.ENV_NAME }}
run: |
cat "terraform-config/${{ vars.ENV_NAME }}.tfvars" "deployment/environments/${{ vars.ENV_NAME }}.tfvars" > "deployment/terraform.tfvars"

Comment on lines +43 to +46
⚠️ Potential issue

Incorrect reference in tfvars copy command

The cat command concatenates files using terraform-config/${{ vars.ENV_NAME }}.tfvars and deployment/environments/${{ vars.ENV_NAME }}.tfvars, but vars.ENV_NAME is undefined. Update these paths to use ${{ inputs.environment }} or the output of the corrected get_env_name step.

🤖 Prompt for AI Agents
In .github/workflows/destroy_aws_environment.yml around lines 43 to 46, the cat
command uses an undefined variable vars.ENV_NAME to reference tfvars files.
Replace all occurrences of vars.ENV_NAME with the correct variable, either
inputs.environment or the output from the get_env_name step, to correctly
reference the environment-specific tfvars files.
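Independent of which variable supplies the name, the concatenation itself can be hardened so a missing input fails loudly instead of silently writing a partial terraform.tfvars. A minimal sketch, with a hypothetical helper name and illustrative directory layout:

```shell
# Hypothetical helper: merge the secret and per-environment tfvars files,
# failing fast if either input file is missing.
merge_tfvars() {
  # usage: merge_tfvars <env> <secrets_dir> <env_dir> <output_file>
  local env="$1" secrets="$2/$1.tfvars" shared="$3/$1.tfvars" out="$4"
  [ -f "$secrets" ] || { echo "Missing tfvars: $secrets" >&2; return 1; }
  [ -f "$shared" ]  || { echo "Missing tfvars: $shared" >&2; return 1; }
  cat "$secrets" "$shared" > "$out"
}
```

A workflow step would then call, for example, `merge_tfvars "$ENV_NAME" terraform-config deployment/environments deployment/terraform.tfvars` and rely on the non-zero exit status to stop the job.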

- name: Setup Terraform
uses: hashicorp/setup-terraform@v2
with:
terraform_version: 1.12.0
terraform_wrapper: false

- name: Terraform Destroy Plan for ${{ vars.ENV_NAME }}
run: |
cd deployment
terraform init -backend-config="environments/backend-${{ vars.ENV_NAME }}.hcl"
terraform plan -destroy -out="${{ vars.ENV_NAME }}-destroy.tfplan"
env:
AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"

Comment on lines +55 to +62
⚠️ Potential issue

Use correct environment variable in Terraform init/plan commands
Backend config and plan steps still reference the undefined ${{ vars.ENV_NAME }}. Switch these to ${{ inputs.environment }} or the normalization output:

-          terraform init -backend-config="environments/backend-${{ vars.ENV_NAME }}.hcl"
-          terraform plan -destroy -out="${{ vars.ENV_NAME }}-destroy.tfplan"
+          terraform init -backend-config="environments/backend-${{ inputs.environment }}.hcl"
+          terraform plan -destroy -out="${{ inputs.environment }}-destroy.tfplan"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
cd deployment
terraform init -backend-config="environments/backend-${{ vars.ENV_NAME }}.hcl"
terraform plan -destroy -out="${{ vars.ENV_NAME }}-destroy.tfplan"
env:
AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
cd deployment
terraform init -backend-config="environments/backend-${{ inputs.environment }}.hcl"
terraform plan -destroy -out="${{ inputs.environment }}-destroy.tfplan"
env:
AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"
🤖 Prompt for AI Agents
In .github/workflows/destroy_aws_environment.yml around lines 55 to 62, the
Terraform init and plan commands use the undefined variable `${{ vars.ENV_NAME
}}`. Replace all occurrences of `${{ vars.ENV_NAME }}` with `${{
inputs.environment }}` or the appropriate normalized input variable to correctly
reference the environment name in the backend config and plan output file.

- name: Show Destroy Plan Summary
run: |
cd deployment
echo "## 🚨 DESTROY PLAN SUMMARY FOR ${{ vars.ENV_NAME }} 🚨" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "The following resources will be **DESTROYED**:" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
terraform show -no-color "${{ vars.ENV_NAME }}-destroy.tfplan" | head -50 >> $GITHUB_STEP_SUMMARY
Comment on lines +63 to +70
⚠️ Potential issue

Update summary step to reflect the dynamic environment
The summary header still uses vars.ENV_NAME, which is undefined and will display incorrectly. Swap to ${{ inputs.environment }} to ensure it matches the dispatched environment.

-          echo "## 🚨 DESTROY PLAN SUMMARY FOR ${{ vars.ENV_NAME }} 🚨" >> $GITHUB_STEP_SUMMARY
+          echo "## 🚨 DESTROY PLAN SUMMARY FOR ${{ inputs.environment }} 🚨" >> $GITHUB_STEP_SUMMARY
📝 Committable suggestion


Suggested change
- name: Show Destroy Plan Summary
run: |
cd deployment
echo "## 🚨 DESTROY PLAN SUMMARY FOR ${{ vars.ENV_NAME }} 🚨" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "The following resources will be **DESTROYED**:" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
terraform show -no-color "${{ vars.ENV_NAME }}-destroy.tfplan" | head -50 >> $GITHUB_STEP_SUMMARY
- name: Show Destroy Plan Summary
run: |
cd deployment
echo "## 🚨 DESTROY PLAN SUMMARY FOR ${{ inputs.environment }} 🚨" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "The following resources will be **DESTROYED**:" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
terraform show -no-color "${{ vars.ENV_NAME }}-destroy.tfplan" | head -50 >> $GITHUB_STEP_SUMMARY
🤖 Prompt for AI Agents
In .github/workflows/destroy_aws_environment.yml around lines 63 to 70, the
summary header uses the undefined variable vars.ENV_NAME, causing incorrect
display. Replace all instances of vars.ENV_NAME with inputs.environment to
correctly reference the dispatched environment and ensure the summary header
reflects the dynamic environment.

env:
AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"

- name: Copy destroy plan to S3 bucket for ${{ vars.ENV_NAME }}
if: ${{ inputs.plan_only == false }}
run: aws s3 cp "deployment/${{ vars.ENV_NAME }}-destroy.tfplan" "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}-destroy.tfplan"
env:
AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }}
SETTINGS_BUCKET: sf-website-infrastructure
AWS_DEFAULT_REGION: "us-east-1"

Comment on lines +76 to +84
⚠️ Potential issue

Update S3 artifact paths to use the valid environment variable
Both S3 copy steps reference ${{ vars.ENV_NAME }}, which is undefined. Replace with ${{ inputs.environment }} or the normalization output:

-        run: aws s3 cp "deployment/${{ vars.ENV_NAME }}-destroy.tfplan" "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}-destroy.tfplan"
+        run: aws s3 cp "deployment/${{ inputs.environment }}-destroy.tfplan" "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}-destroy.tfplan"
...
-        run: aws s3 cp "deployment/.terraform.lock.hcl" "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}.terraform.lock.hcl"
+        run: aws s3 cp "deployment/.terraform.lock.hcl" "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}.terraform.lock.hcl"

Also applies to: 85-92

🤖 Prompt for AI Agents
In .github/workflows/destroy_aws_environment.yml around lines 76 to 92, the S3
copy steps incorrectly use the undefined variable `${{ vars.ENV_NAME }}`.
Replace all occurrences of `${{ vars.ENV_NAME }}` with the correct environment
variable `${{ inputs.environment }}` or the normalized output variable to ensure
the paths reference the valid environment name.

- name: Copy lock file to S3 bucket for ${{ vars.ENV_NAME }}
if: ${{ !inputs.plan_only }}
run: aws s3 cp "deployment/.terraform.lock.hcl" "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}.terraform.lock.hcl"
env:
AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }}
SETTINGS_BUCKET: sf-website-infrastructure
AWS_DEFAULT_REGION: "us-east-1"

manual-approval:
needs: [destroy-plan]
runs-on: ubuntu-latest
environment: Development
if: ${{ !inputs.plan_only }}
steps:
- name: Placeholder - Manual approval step
run: |
echo "This step is empty - manual approval functionality not implemented on main branch"
echo "Would wait for approval to destroy ${{ inputs.environment }}"
- name: Wait for approval to destroy ${{ vars.ENV_NAME }}
if: ${{ inputs.environment != 'Development' }}
uses: trstringer/manual-approval@v1
with:
secret: ${{ github.TOKEN }}
approvers: killev
minimum-approvals: 1
issue-title: "🚨 DESTROY ${{ vars.ENV_NAME }} AWS Environment 🚨"
issue-body: |
## ⚠️ CRITICAL: Infrastructure Destruction Request ⚠️

**Environment**: ${{ vars.ENV_NAME }}
**Requested by**: @${{ github.actor }}
**Workflow run**: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}

### 🚨 WARNING: This action will PERMANENTLY DESTROY the following infrastructure:
- VPC and all networking components
- ECS cluster and services
- DocumentDB cluster and all data
- ElastiCache Redis cluster and all data
- Application Load Balancer
- CloudFront distribution
- S3 buckets and all stored files
- ECR repository and all images
- IAM roles and policies
- CloudWatch logs and metrics
- Parameter Store secrets

### 📋 Before approving, please verify:
- [ ] This is the correct environment to destroy
- [ ] All important data has been backed up
- [ ] Team has been notified of the destruction
- [ ] No critical services depend on this infrastructure

**To approve**: Comment "approve" or "approved"
**To deny**: Comment "deny" or "denied"

**⚠️ THIS ACTION CANNOT BE UNDONE ⚠️**

destroy-apply:
needs: [manual-approval]
runs-on: ubuntu-latest
if: ${{ !inputs.plan_only }}
environment: Development
if: ${{ inputs.plan_only == false && (inputs.environment == 'Development' || needs.manual-approval.result == 'success') }}
steps:
- name: Placeholder - Destroy apply step
run: |
echo "This step is empty - destroy apply functionality not implemented on main branch"
echo "Would destroy infrastructure for ${{ inputs.environment }}"
- name: Get Environment Name for ${{ vars.ENV_NAME }}
id: get_env_name
uses: Entepotenz/change-string-case-action-min-dependencies@v1
with:
string: ${{ vars.ENV_NAME }}

Comment on lines +143 to +150
🛠️ Refactor suggestion

Fix get_env_name invocation in destroy-apply job
The destroy-apply job reuses the get_env_name step but still references ${{ vars.ENV_NAME }}. Update it to use ${{ inputs.environment }} or to reference the previous normalization output:

-      - name: Get Environment Name for ${{ vars.ENV_NAME }}
-        id: get_env_name
-        uses: Entepotenz/change-string-case-action-min-dependencies@v1
-        with:
-          string: ${{ vars.ENV_NAME }}
+      - name: Normalize environment name
+        id: get_env_name
+        uses: Entepotenz/change-string-case-action-min-dependencies@v1
+        with:
+          string: ${{ inputs.environment }}
📝 Committable suggestion


Suggested change
if: ${{ inputs.plan_only == false && (inputs.environment == 'Development' || needs.manual-approval.result == 'success') }}
steps:
- name: Placeholder - Destroy apply step
run: |
echo "This step is empty - destroy apply functionality not implemented on main branch"
echo "Would destroy infrastructure for ${{ inputs.environment }}"
- name: Get Environment Name for ${{ vars.ENV_NAME }}
id: get_env_name
uses: Entepotenz/change-string-case-action-min-dependencies@v1
with:
string: ${{ vars.ENV_NAME }}
if: ${{ inputs.plan_only == false && (inputs.environment == 'Development' || needs.manual-approval.result == 'success') }}
steps:
- name: Normalize environment name
id: get_env_name
uses: Entepotenz/change-string-case-action-min-dependencies@v1
with:
string: ${{ inputs.environment }}
🧰 Tools: 🪛 YAMLlint (1.37.1): [warning] 142-142: wrong indentation: expected 4 but found 6 (indentation)

🤖 Prompt for AI Agents
In .github/workflows/destroy_aws_environment.yml around lines 140 to 147, the
get_env_name step incorrectly uses the static variable reference ${{
vars.ENV_NAME }} instead of the dynamic input or previous step output. Update
the string input to use ${{ inputs.environment }} or reference the output from
the prior normalization step to ensure the environment name is correctly passed
and processed.

- name: Setup Terraform
uses: hashicorp/setup-terraform@v2
with:
terraform_version: 1.12.0
terraform_wrapper: false

- name: Checkout
uses: actions/checkout@v4

- name: Get destroy plan from S3 bucket for ${{ vars.ENV_NAME }}
run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}-destroy.tfplan" "deployment/${{ vars.ENV_NAME }}-destroy.tfplan"
env:
AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }}
SETTINGS_BUCKET: sf-website-infrastructure
AWS_DEFAULT_REGION: "us-east-1"

Comment on lines +160 to +167
⚠️ Potential issue

Use valid environment reference for S3 download steps
The Get destroy plan and Get lock file steps reference ${{ vars.ENV_NAME }}. Replace these with ${{ inputs.environment }}:

-        run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}-destroy.tfplan" "deployment/${{ vars.ENV_NAME }}-destroy.tfplan"
+        run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}-destroy.tfplan" "deployment/${{ inputs.environment }}-destroy.tfplan"
...
-        run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}.terraform.lock.hcl" "deployment/.terraform.lock.hcl"
+        run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}.terraform.lock.hcl" "deployment/.terraform.lock.hcl"

Also applies to: 165-172

🤖 Prompt for AI Agents
In .github/workflows/destroy_aws_environment.yml around lines 157 to 164, the
steps that download from S3 use the invalid environment variable reference `${{
vars.ENV_NAME }}`. Replace all occurrences of `${{ vars.ENV_NAME }}` with the
correct input reference `${{ inputs.environment }}` to properly access the
environment value passed to the workflow. Apply the same fix to lines 165 to 172
as well.

- name: Get lock file from S3 bucket for ${{ vars.ENV_NAME }}
run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}.terraform.lock.hcl" "deployment/.terraform.lock.hcl"
env:
AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }}
SETTINGS_BUCKET: sf-website-infrastructure
AWS_DEFAULT_REGION: "us-east-1"

- name: 🚨 DESTROY Infrastructure for ${{ vars.ENV_NAME }} 🚨
run: |
cd deployment
terraform init -backend-config="environments/backend-${{ vars.ENV_NAME }}.hcl"
echo "🚨 DESTROYING INFRASTRUCTURE FOR ${{ vars.ENV_NAME }} - THIS CANNOT BE UNDONE! 🚨"
terraform apply "${{ vars.ENV_NAME }}-destroy.tfplan"
echo "✅ Infrastructure for ${{ vars.ENV_NAME }} has been destroyed"
env:
AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: "us-east-1"

- name: Clean up destroy plan files
run: |
aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}-destroy.tfplan" || true
aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}.terraform.lock.hcl" || true
env:
AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }}
SETTINGS_BUCKET: sf-website-infrastructure
AWS_DEFAULT_REGION: "us-east-1"
Comment on lines +188 to +196
⚠️ Potential issue

Fix cleanup step to reference the valid environment variable
The cleanup commands use ${{ vars.ENV_NAME }} to remove artifacts. Update to ${{ inputs.environment }}:

-          aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}-destroy.tfplan" || true
-          aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}.terraform.lock.hcl" || true
+          aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}-destroy.tfplan" || true
+          aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}.terraform.lock.hcl" || true
📝 Committable suggestion


Suggested change
- name: Clean up destroy plan files
run: |
aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}-destroy.tfplan" || true
aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}.terraform.lock.hcl" || true
env:
AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }}
SETTINGS_BUCKET: sf-website-infrastructure
AWS_DEFAULT_REGION: "us-east-1"
- name: Clean up destroy plan files
run: |
aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}-destroy.tfplan" || true
aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ inputs.environment }}.terraform.lock.hcl" || true
env:
AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }}
SETTINGS_BUCKET: sf-website-infrastructure
AWS_DEFAULT_REGION: "us-east-1"
🤖 Prompt for AI Agents
In .github/workflows/destroy_aws_environment.yml around lines 185 to 193, the
cleanup step incorrectly references the environment variable as `${{
vars.ENV_NAME }}`. Update both aws s3 rm commands to use `${{ inputs.environment
}}` instead to correctly reference the valid input variable for the environment
name.

7 changes: 7 additions & 0 deletions .gitignore
@@ -144,3 +144,10 @@ website/public/apos-frontend
dump.archive
.prettierrc
aposUsersSafe.json
terraform.tfvars
deployment/.terraform
*.tfplan
plan.out.txt
.cursor/tmp
.terraform.lock.hcl
*.pem
3 changes: 3 additions & 0 deletions Dockerfile
@@ -5,6 +5,9 @@

# Install dependencies needed for health checks with pinned version
RUN apk add --no-cache wget=1.25.0-r1

RUN wget https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem

Check failure on line 9 in Dockerfile (GitHub Actions / docker-lint): DL3059 info: Multiple consecutive `RUN` instructions. Consider consolidation.

Check failure on line 9 in Dockerfile (GitHub Actions / docker-lint): DL3047 info: Avoid use of wget without progress bar. Use `wget --progress=dot:giga <url>`. Or consider using `-q` or `-nv` (shorthands for `--quiet` or `--no-verbose`).
RUN chmod 600 global-bundle.pem

Check failure on line 10 in Dockerfile (GitHub Actions / docker-lint): DL3059 info: Multiple consecutive `RUN` instructions. Consider consolidation.

# Create a non-root user and group
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
46 changes: 46 additions & 0 deletions deploy-ecs.sh
@@ -0,0 +1,46 @@
#!/bin/bash

Comment on lines +1 to +2
🛠️ Refactor suggestion

Enable full strict mode for safer Bash execution.

Currently the script uses set -e, but pipelines can mask errors and undefined variables slip through.
Consider using:

set -euo pipefail

to ensure the script exits on errors, undefined variables, and any command in a pipeline.

🤖 Prompt for AI Agents
In deploy-ecs.sh at lines 1 to 2, the script currently lacks full strict mode
which can allow errors and undefined variables to go unnoticed. Update the
script to include `set -euo pipefail` near the top, right after the shebang
line, to ensure it exits on any error, undefined variable usage, or failure in
any part of a pipeline.
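To see why plain `set -e` is not enough, compare pipeline exit statuses with and without `pipefail` in a minimal, self-contained demo:

```shell
# Demo: `set -e` alone uses the LAST command's status for a pipeline,
# so a failing left-hand stage can be silently masked; `pipefail`
# propagates the failing stage's non-zero status instead.
set -e

masked=no
if false | cat; then
  masked=yes   # reached: without pipefail the pipeline "succeeds"
fi

set -o pipefail
caught=no
if ! false | cat; then
  caught=yes   # reached: pipefail surfaces the failure of `false`
fi

echo "masked=$masked caught=$caught"
```

Adding `set -u` on top of this catches references to undefined variables, which is the third leg of the suggested `set -euo pipefail`.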

# ECS Deployment Script
# This script rebuilds the Docker image, pushes it to ECR, and restarts the ECS service

set -e # Exit on any error

# Configuration
AWS_PROFILE="tf-sf-website"
AWS_REGION="us-east-1"
ECR_REPOSITORY="695912022152.dkr.ecr.us-east-1.amazonaws.com/sf-website-development"
ECS_CLUSTER="sf-website-dev-cluster"
ECS_SERVICE="sf-website-dev-service"
IMAGE_TAG="latest"

echo "🚀 Starting ECS deployment process..."

# Step 1: Login to ECR
echo "📝 Logging into ECR..."
AWS_PROFILE=$AWS_PROFILE aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REPOSITORY

# Step 2: Build the Docker image
echo "🔨 Building Docker image for linux/amd64 platform..."
docker build --platform linux/amd64 -t $ECR_REPOSITORY:$IMAGE_TAG .

# Step 3: Push the image to ECR
echo "📤 Pushing image to ECR..."
docker push $ECR_REPOSITORY:$IMAGE_TAG

# Step 4: Force ECS service to restart with new image
echo "🔄 Restarting ECS service..."
AWS_PROFILE=$AWS_PROFILE aws ecs update-service \
--cluster $ECS_CLUSTER \
--service $ECS_SERVICE \
--force-new-deployment \
--region $AWS_REGION

# Step 5: Wait for deployment to complete
echo "⏳ Waiting for deployment to complete..."
AWS_PROFILE=$AWS_PROFILE aws ecs wait services-stable \
--cluster $ECS_CLUSTER \
--services $ECS_SERVICE \
--region $AWS_REGION

echo "✅ Deployment completed successfully!"
echo "🌐 Your application should be available at: https://sf-website-dev.sandbox-prettyclear.com"
Comment on lines +45 to +46
🛠️ Refactor suggestion

Avoid hard-coded URLs for environment-specific outputs.

The final echo prints a fixed dev URL. For staging or production this will be incorrect.
Parameterize the domain via a variable or derive it from your environment settings.

🤖 Prompt for AI Agents
In deploy-ecs.sh around lines 45 to 46, the script prints a hard-coded URL for
the dev environment, which is incorrect for staging or production. Replace the
fixed URL with a variable that holds the domain name, and set this variable
based on the current environment or configuration. Update the echo statement to
use this variable so the output dynamically reflects the correct environment
URL.
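One way to do that is a small environment-to-domain mapping; in this sketch only the dev domain comes from the original script, and the staging/prod values and variable names are placeholders:

```shell
# Hypothetical parameterization of the final URL. Only the dev domain is
# taken from the original script; staging/prod domains are placeholders.
ENVIRONMENT="${ENVIRONMENT:-dev}"
case "$ENVIRONMENT" in
  dev)     APP_DOMAIN="sf-website-dev.sandbox-prettyclear.com" ;;
  staging) APP_DOMAIN="sf-website-staging.example.com" ;;
  prod)    APP_DOMAIN="sf-website.example.com" ;;
  *)       echo "Unknown environment: $ENVIRONMENT" >&2; exit 1 ;;
esac
echo "🌐 Your application should be available at: https://${APP_DOMAIN}"
```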
