From ce33511afe938f0a555e72de256c54bb31d13d03 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Sat, 24 May 2025 10:07:32 +0200 Subject: [PATCH 01/46] Add AWS architecture diagram and documentation - Add comprehensive Mermaid flowchart showing complete AWS infrastructure - Document key architecture components organized by layers (Public, Compute, Storage, Security, Operations) - Include environment isolation details and resource naming conventions --- docs/aws-architecture-diagram.md | 115 +++++++++++++++++++++++++++++++ 1 file changed, 115 insertions(+) create mode 100644 docs/aws-architecture-diagram.md diff --git a/docs/aws-architecture-diagram.md b/docs/aws-architecture-diagram.md new file mode 100644 index 00000000..d73e6545 --- /dev/null +++ b/docs/aws-architecture-diagram.md @@ -0,0 +1,115 @@ +# SF Website AWS Architecture + +## Principal AWS Architecture Diagram + +```mermaid +flowchart TB + %% External actors + Users[👥 Users] + GitHub[🐙 GitHub Actions CI/CD] + + %% Public facing components + ALB[⚖️ Application Load Balancer
sf-website-alb-env
HTTPS Only] + CF[🌐 CloudFront Distribution
sf-website-media-env
CDN for Media Assets] + + %% Compute layer + ECS[🚢 ECS Fargate Cluster
sf-website-ecs-cluster-env
Apostrophe CMS App] + ECR[🐳 ECR Repository
sf-website-ecr-env
Container Images] + + %% Storage layer + S3_Attachments[🪣 S3 Attachments Bucket
sf-website-s3-attachments-env
Media & Files] + S3_Logs[🪣 S3 Logs Bucket
sf-website-s3-logs-env
Centralized Logs] + MongoDB[📄 MongoDB on EC2
sf-website-mongodb-env
t3.medium + 100GB EBS] + + %% Security & Identity + IAM_Task[👤 ECS Task Role
sf-website-ecs-task-env
S3 Access Permissions] + IAM_Exec[👤 ECS Execution Role
sf-website-ecs-execution-env
ECR & Parameter Store] + ParamStore[🔐 Parameter Store
Session Secrets & DB Credentials] + + %% Monitoring & Backup + CloudWatch[📊 CloudWatch
sf-website-cloudwatch-env
Logs & Metrics] + AWSBackup[💾 AWS Backup
Daily EBS Snapshots
7 daily, 4 weekly retention] + + %% User flows + Users -->|HTTPS requests| ALB + Users -->|Media requests| CF + + %% CI/CD flow + GitHub -->|Build & Push| ECR + GitHub -->|Deploy| ECS + + %% Load balancer to application + ALB -->|Route traffic| ECS + + %% CloudFront to storage + CF -->|Origin requests| S3_Attachments + + %% ECS relationships + ECS -->|Pull images| ECR + ECS -->|Read/Write media| S3_Attachments + ECS -->|Database operations| MongoDB + ECS -->|Get secrets| ParamStore + ECS -->|Send logs| CloudWatch + + %% IAM relationships + IAM_Task -.->|Assume role| ECS + IAM_Exec -.->|Assume role| ECS + IAM_Task -.->|S3 permissions| S3_Attachments + IAM_Exec -.->|ECR permissions| ECR + IAM_Exec -.->|Parameter Store| ParamStore + + %% Logging flows + ALB -->|Access logs| S3_Logs + CF -->|Access logs| S3_Logs + S3_Attachments -->|Server logs| S3_Logs + + %% Monitoring + ECS -->|Metrics & logs| CloudWatch + ALB -->|Metrics| CloudWatch + MongoDB -->|System metrics| CloudWatch + + %% Backup + AWSBackup -->|Snapshot| MongoDB + + %% Styling + classDef public fill:#e1f5fe + classDef compute fill:#f3e5f5 + classDef storage fill:#e8f5e8 + classDef security fill:#fff3e0 + classDef monitoring fill:#fce4ec + + class ALB,CF public + class ECS,ECR compute + class S3_Attachments,S3_Logs,MongoDB storage + class IAM_Task,IAM_Exec,ParamStore security + class CloudWatch,AWSBackup monitoring +``` + +## Key Architecture Components + +### 🌐 Public Layer +- **Application Load Balancer**: HTTPS-only entry point for web traffic +- **CloudFront**: Global CDN for media asset delivery from S3 + +### 🚢 Compute Layer +- **ECS Fargate**: Serverless container hosting for Apostrophe CMS +- **ECR**: Private container registry for application images + +### 🪣 Storage Layer +- **S3 Attachments**: Media files and uploads from CMS +- **S3 Logs**: Centralized logging for all services +- **MongoDB on EC2**: Primary database with automated backups + +### 👤 Security Layer +- **IAM Roles**: Least-privilege access for ECS tasks +- **Parameter Store**: Secure storage for secrets and configuration + +### 📊 Operations Layer +- **CloudWatch**: Monitoring, metrics, and alerting +- **AWS Backup**: Automated daily snapshots with retention policies + +## Environment Isolation +All resources are tagged and named with environment suffix: +- `dev`, `staging`, `prod` +- Complete isolation between environments +- Consistent naming: `sf-website--` \ No newline at end of file From 3fcc64705e8ecec1123f7f444a2fcca5405ea8cc Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Sat, 24 May 2025 15:21:04 +0200 Subject: [PATCH 02/46] Migrate infrastructure from MongoDB EC2 to managed services with complete Terraform configuration - Replace MongoDB EC2 instance with managed DocumentDB cluster - Add ElastiCache Redis cluster for caching and session storage - Implement complete Terraform infrastructure as code with modular design - Update domain configuration from prettyclear.com to sandbox-prettyclear.com - Add comprehensive infrastructure documentation and Q&A guide - Configure multi-environment support with backend configs for dev/staging/prod - Add monitoring, security groups, and networking modules - Update ECS environment variables to include Redis URI and DocumentDB endpoint --- docs/Infrastructure.md | 142 +++++++-- docs/infrastructureQNA.md | 66 ++++ terraform/README.md | 294 ++++++++++++++++++ terraform/backend-dev.hcl | 6 + terraform/backend-prod.hcl | 6 + terraform/backend-staging.hcl | 6 + terraform/main.tf | 267 ++++++++++++++++ 
terraform/modules/s3/main.tf | 207 ++++++++++++ terraform/modules/s3/outputs.tf | 31 ++ terraform/modules/s3/variables.tf | 41 +++ terraform/modules/security_groups/main.tf | 117 +++++++ terraform/modules/security_groups/outputs.tf | 21 ++ .../modules/security_groups/variables.tf | 22 ++ terraform/modules/vpc/main.tf | 184 +++++++++++ terraform/modules/vpc/outputs.tf | 46 +++ terraform/modules/vpc/variables.tf | 27 ++ terraform/outputs.tf | 156 ++++++++++ terraform/terraform.tfvars.example | 53 ++++ terraform/variables.tf | 173 +++++++++++ 19 files changed, 1836 insertions(+), 29 deletions(-) create mode 100644 docs/infrastructureQNA.md create mode 100644 terraform/README.md create mode 100644 terraform/backend-dev.hcl create mode 100644 terraform/backend-prod.hcl create mode 100644 terraform/backend-staging.hcl create mode 100644 terraform/main.tf create mode 100644 terraform/modules/s3/main.tf create mode 100644 terraform/modules/s3/outputs.tf create mode 100644 terraform/modules/s3/variables.tf create mode 100644 terraform/modules/security_groups/main.tf create mode 100644 terraform/modules/security_groups/outputs.tf create mode 100644 terraform/modules/security_groups/variables.tf create mode 100644 terraform/modules/vpc/main.tf create mode 100644 terraform/modules/vpc/outputs.tf create mode 100644 terraform/modules/vpc/variables.tf create mode 100644 terraform/outputs.tf create mode 100644 terraform/terraform.tfvars.example create mode 100644 terraform/variables.tf diff --git a/docs/Infrastructure.md b/docs/Infrastructure.md index db89138c..6c2e71ac 100644 --- a/docs/Infrastructure.md +++ b/docs/Infrastructure.md @@ -4,7 +4,7 @@ * **AWS Region**: `us-east-1` * **Environments**: `dev`, `staging`, `prod` -* **Domain**: `sf-website-.prettyclear.com` +* **Domain**: `sf-website-.sandbox-prettyclear.com` * **Structure**: Modular Terraform setup for multi-environment support * **Resource Tags** (applied to all resources): @@ -158,7 +158,8 @@ * ECR (container image repository) * ALB (for ingress) * CloudWatch (for logs & metrics) - * MongoDB EC2 instance (database) + * DocumentDB Cluster (database) + * ElastiCache Redis (caching) * S3 Attachments Bucket * **Service Name**: `sf-website` * **Container Image**: Built from project Dockerfile and stored in ECR @@ -167,7 +168,8 @@ * **Auto-scaling**: Based on CPU usage (target: 70%) * **Environment Variables**: * `NODE_ENV=production` - * `APOS_MONGODB_URI=mongodb://:27017/apostrophe` + * `APOS_MONGODB_URI=mongodb://:27017/apostrophe` + * `REDIS_URI=redis://:6379` * `SESSION_SECRET=` * `APOS_S3_BUCKET=sf-website-s3-attachments-` * `APOS_S3_REGION=us-east-1` @@ -198,7 +200,7 @@ * ACM (for SSL certificates) * **Type**: HTTPS-only * **SSL**: Via AWS ACM - * **Domain**: `sf-website-.prettyclear.com` + * **Domain**: `sf-website-.sandbox-prettyclear.com` --- @@ -219,7 +221,7 @@ * ECS Cluster (via APOS_CDN_URL environment variable) * **Origin**: S3 bucket `sf-website-s3-attachments-` * **Access**: Origin access identity (OAI) to restrict direct S3 access - * **Custom domain**: `sf-website-media-.prettyclear.com` + * **Custom domain**: `sf-website-media-.sandbox-prettyclear.com` * **SSL Certificate**: Managed through AWS ACM * **Cache Behavior**: * Default TTL: 86400 seconds (1 day) @@ -247,51 +249,133 @@ * **Resource Integration**: * ECS Apostrophe Cluster * ALB + * DocumentDB Cluster + * ElastiCache Redis * Slack (for alerts) * **Features**: * ECS logs and detailed metrics * ALB metrics (e.g., 5xx, latency) + * DocumentDB cluster and instance metrics + * 
ElastiCache Redis performance metrics * CloudWatch alarms for key metrics * **Alerts**: Sent to Slack * **Log retention**: 90 days --- -### 📄 MongoDB on EC2 +### 🔴 Amazon ElastiCache (Redis) -* **MongoDB**: - * **Instance Name**: `sf-website-mongodb-` - * **Purpose**: Primary data store for ApostropheCMS +* **ElastiCache Redis Cluster**: + * **Cluster Name**: `sf-website-redis-` + * **Purpose**: Managed Redis service for session storage and application caching * **Resource Tags**: - * `Name: sf-website-mongodb-` + * `Name: sf-website-redis-` * `Project: Website` * `CostCenter: Website` * `Environment: ` * `Owner: peter.ovchyn` * **Resource Integration**: * ECS Apostrophe Cluster - * AWS Backup service * CloudWatch (for monitoring) - * Parameter Store (for credentials) - * **Instance Type**: t3.medium (2 vCPU, 4GB RAM) - * **Storage**: 100GB gp3 EBS volume with 3000 IOPS - * **AMI**: Amazon Linux 2 - * **Deployment**: Single EC2 instance in private subnet + * Cache Subnet Group (for networking) + * **Engine Version**: Redis 7.0 (latest stable) + * **Node Configuration**: + * **Node Type**: `cache.t3.micro` (1 vCPU, 0.5GB RAM) for dev/staging + * **Node Type**: `cache.t3.small` (2 vCPU, 1.5GB RAM) for production + * **Number of Nodes**: 1 (single node for simplicity) + * **Port**: 6379 (Redis standard) + * **Deployment**: + * Deployed in private subnets + * Cache Subnet Group spans both availability zones * **Security**: - * No public IP assigned - * Security group allows ingress only from ECS service security group on port 27017 - * SSH access via Session Manager (no direct SSH allowed) - * **Authentication**: Username/password authentication enabled - * Credentials stored in AWS Parameter Store + * VPC security group restricting access to ECS service only + * No public access + * Transit encryption enabled + * Auth token enabled for authentication + * **Authentication**: + * Auth token stored in AWS Parameter Store * Referenced in ECS task environment variables * **Backup Strategy**: - * Daily automated snapshots of EBS volume - * Retention period: 7 daily, 4 weekly - * Snapshot automation via AWS Backup service + * **Automatic Backups**: + * Daily snapshots enabled + * Retention period: 5 days + * Backup window: 02:00-03:00 UTC + * **Monitoring**: + * CloudWatch metrics for cluster performance + * CloudWatch alarms for: + * CPU utilization > 80% + * Memory usage > 80% + * Connection count thresholds + * Cache hit ratio < 80% + * **High Availability**: + * Automatic failover enabled + * Multi-AZ deployment for production environment + * Automatic minor version updates during maintenance window + * **Network Configuration**: + * **Cache Subnet Group**: `sf-website-redis-subnet-group-` + * **Security Group**: `sf-website-redis-sg-` + * **Endpoint**: Primary endpoint for read/write operations + +--- + +### 📄 Amazon DocumentDB + +* **DocumentDB Cluster**: + * **Cluster Name**: `sf-website-documentdb-` + * **Purpose**: Managed MongoDB-compatible database service for ApostropheCMS + * **Resource Tags**: + * `Name: sf-website-documentdb-` + * `Project: Website` + * `CostCenter: Website` + * `Environment: ` + * `Owner: peter.ovchyn` + * **Resource Integration**: + * ECS Apostrophe Cluster + * CloudWatch (for monitoring) + * Parameter Store (for credentials) + * DB Subnet Group (for networking) + * **Engine Version**: 4.0.0 (MongoDB compatible) + * **Cluster Configuration**: + * **Primary Instance**: `db.t3.medium` (2 vCPU, 4GB RAM) + * **Replica Instances**: 1 replica for high availability + * 
**Storage**: Encrypted with AWS managed keys + * **Port**: 27017 (MongoDB standard) + * **Deployment**: + * Multi-AZ deployment across private subnets + * DB Subnet Group spans both availability zones + * **Security**: + * VPC security group restricting access to ECS service only + * TLS encryption in transit required + * No public access + * Authentication required + * **Authentication**: + * Master username/password stored in AWS Parameter Store + * Referenced in ECS task environment variables via Parameter Store + * Database: `apostrophe` + * **Backup Strategy**: + * **Automated Backups**: + * Backup retention period: 7 days + * Backup window: 03:00-04:00 UTC + * Point-in-time recovery enabled + * **Manual Snapshots**: Available for major releases * **Monitoring**: - * CloudWatch agent for system metrics - * Custom MongoDB metrics published to CloudWatch - * Alerts for disk usage, connections, and query performance + * CloudWatch metrics for cluster and instance performance + * Enhanced monitoring enabled (60-second granularity) + * CloudWatch alarms for: + * CPU utilization > 80% + * Database connections > 80% of max + * Free storage < 20% + * Read/Write latency thresholds + * **Parameter Group**: + * Custom parameter group for performance optimization + * TLS enforcement enabled + * Audit logging enabled for security compliance * **High Availability**: - * Configured for future upgrade to a replica set - * Placeholder DNS record for future replica nodes + * Multi-AZ replica instance for automatic failover + * Cross-AZ backup replication + * Automatic minor version updates during maintenance window + * **Network Configuration**: + * **DB Subnet Group**: `sf-website-documentdb-subnet-group-` + * **Security Group**: `sf-website-documentdb-sg-` + * **Endpoint**: Cluster endpoint for write operations + * **Reader Endpoint**: Available for read-only operations diff --git a/docs/infrastructureQNA.md b/docs/infrastructureQNA.md new file mode 100644 index 00000000..b82dcff3 --- /dev/null +++ b/docs/infrastructureQNA.md @@ -0,0 +1,66 @@ +# Infrastructure Q&A for Terraform Implementation + +## Questions and Answers + +### Q1: Certificate ARNs +**Question**: What are the actual ARN values for your existing SSL certificates? +- Main app certificates: `sf-website-{env}.sandbox-prettyclear.com` +- Media certificates: `sf-website-media-{env}.sandbox-prettyclear.com` + +**Answer**: Wildcard certificate `*.sandbox-prettyclear.com` covers all subdomains +**ARN**: `arn:aws:acm:us-east-1:548271326349:certificate/7e11016f-f90e-4800-972d-622bf1a82948` + +--- + +### Q2: Route 53 Hosted Zone ID +**Question**: What's the hosted zone ID for `sandbox-prettyclear.com`? + +**Answer**: [Skipped for now - will address later] + +--- + +### Q3: Parameter Store Secrets +**Question**: Should I generate these automatically or do you have specific values? +- DocumentDB master username/password +- SESSION_SECRET +- Any other app secrets? 
+ +**Answer**: +- **DocumentDB master username/password**: Store in tfvars files +- **SESSION_SECRET**: User will provide specific value in tfvars +- **Other secrets**: Based on docker-compose.yml: + - **REDIS_URI**: Will be auto-generated (ElastiCache endpoint) + - **BASE_URL**: Will be auto-generated from ALB domain + - **SERVICE_ACCOUNT_PRIVATE_KEY**: User will provide if using Google Cloud Storage + - **NODE_ENV**: Will be set to 'production' + +--- + +### Q4: Deployment Scope +**Question**: Should I create Terraform to deploy all three environments at once, or one environment at a time (which one first)? + +**Answer**: Terraform script should create 1 environment at a time. Environment should be specified via tfvars file. + +--- + +### Q5: Remote State +**Question**: Do you want S3 backend for Terraform state storage? + +**Answer**: Yes, use S3 bucket for Terraform state storage with DynamoDB for state locking. + +--- + +### Q6: CI/CD Integration +**Question**: Do you need IAM roles for GitHub Actions to deploy? + +**Answer**: Yes, include all 3: +- IAM role that GitHub Actions can assume +- Permissions for Terraform operations (creating/updating resources) +- ECR permissions for pushing Docker images + +--- + +### Q7: CloudWatch Alerts +**Question**: For notifications, do you have Slack webhook URLs, or should I create SNS topics instead? + +**Answer**: Slack webhook URLs - should be provided in tfvars file \ No newline at end of file diff --git a/terraform/README.md b/terraform/README.md new file mode 100644 index 00000000..f02f029b --- /dev/null +++ b/terraform/README.md @@ -0,0 +1,294 @@ +# SF Website Infrastructure - Terraform + +This Terraform configuration implements the complete infrastructure for the SF Website project as documented in `docs/Infrastructure.md`. + +## 🏗️ **Architecture Overview** + +- **VPC**: Multi-AZ setup with public/private subnets +- **ECS Fargate**: Containerized ApostropheCMS application +- **DocumentDB**: MongoDB-compatible database +- **ElastiCache Redis**: Session storage and caching +- **S3**: Media attachments and logs storage +- **CloudFront**: CDN for media delivery +- **ALB**: Load balancer with SSL termination +- **CloudWatch**: Monitoring and alerting + +## 📋 **Prerequisites** + +1. **AWS CLI** configured with appropriate permissions +2. **Terraform** >= 1.0 installed +3. **S3 buckets** for Terraform state storage (see Backend Setup) +4. **SSL Certificate** in AWS Certificate Manager +5. **Route 53 hosted zone** (optional) + +## 🚀 **Quick Start** + +### 1. **Clone and Setup** +```bash +cd terraform +cp terraform.tfvars.example terraform.tfvars +``` + +### 2. **Configure Variables** +Edit `terraform.tfvars` with your specific values: +- Domain names +- Certificate ARN +- Database credentials +- GitHub repository +- Slack webhook URL + +### 3. **Setup Backend Storage** +Create S3 buckets and DynamoDB table for state management: +```bash +# Create S3 buckets for each environment +aws s3 mb s3://sf-website-terraform-state-dev +aws s3 mb s3://sf-website-terraform-state-staging +aws s3 mb s3://sf-website-terraform-state-prod + +# Enable versioning +aws s3api put-bucket-versioning \ + --bucket sf-website-terraform-state-dev \ + --versioning-configuration Status=Enabled + +# Create DynamoDB table for state locking +aws dynamodb create-table \ + --table-name sf-website-terraform-locks \ + --attribute-definitions AttributeName=LockID,AttributeType=S \ + --key-schema AttributeName=LockID,KeyType=HASH \ + --billing-mode PAY_PER_REQUEST +``` + +### 4. 
**Initialize and Deploy** +```bash +# Initialize with backend +terraform init -backend-config=backend-dev.hcl + +# Plan deployment +terraform plan + +# Apply changes +terraform apply +``` + +## 🌍 **Environment Management** + +Deploy different environments using different backend configurations: + +```bash +# Development +terraform init -backend-config=backend-dev.hcl +terraform apply -var-file=dev.tfvars + +# Staging +terraform init -backend-config=backend-staging.hcl +terraform apply -var-file=staging.tfvars + +# Production +terraform init -backend-config=backend-prod.hcl +terraform apply -var-file=prod.tfvars +``` + +## 📁 **File Structure** + +``` +terraform/ +├── main.tf # Main configuration +├── variables.tf # Variable definitions +├── outputs.tf # Output values +├── terraform.tfvars.example # Example variables +├── backend-{env}.hcl # Backend configurations +├── modules/ +│ ├── vpc/ # VPC and networking +│ ├── security_groups/ # Security groups +│ ├── s3/ # S3 buckets +│ ├── iam/ # IAM roles and policies +│ ├── ecr/ # Container registry +│ ├── parameter_store/ # Secrets management +│ ├── documentdb/ # Database cluster +│ ├── redis/ # Cache cluster +│ ├── alb/ # Load balancer +│ ├── cloudfront/ # CDN +│ ├── ecs/ # Container service +│ └── cloudwatch/ # Monitoring +└── README.md # This file +``` + +## 🔐 **Security Best Practices** + +### **Secrets Management** +- All sensitive values are stored in AWS Parameter Store +- Database credentials are provided via tfvars (never committed) +- Redis auth token is auto-generated + +### **Network Security** +- Private subnets for all data services +- Security groups with least-privilege access +- VPC Flow Logs enabled + +### **IAM Security** +- Separate roles for ECS tasks and execution +- GitHub Actions OIDC integration (no long-term keys) +- Scoped permissions for each service + +## 🏷️ **Resource Naming** + +All resources follow the naming convention: +``` +sf-website-{resource-type}-{environment} +``` + +Examples: +- `sf-website-vpc-dev` +- `sf-website-ecs-cluster-prod` +- `sf-website-documentdb-staging` + +## 📊 **Monitoring & Alerts** + +CloudWatch alarms are configured for: +- ECS CPU/Memory utilization +- DocumentDB connections and performance +- Redis memory usage and hit ratio +- ALB response times and error rates + +Alerts are sent to Slack via webhook URL. 
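For illustration, a minimal sketch of how one of these alarms could be expressed inside the `cloudwatch` module (the resource names, thresholds, and the SNS topic below are assumptions for the sketch, not the module's actual contents):

```hcl
# Hypothetical sketch: an ECS CPU alarm wired to an SNS topic whose subscriber
# (e.g. a small Lambda or AWS Chatbot) forwards notifications to the Slack webhook.
resource "aws_sns_topic" "alerts" {
  name = "${var.name_prefix}-alerts-${var.environment}"
}

resource "aws_cloudwatch_metric_alarm" "ecs_cpu_high" {
  alarm_name          = "${var.name_prefix}-ecs-cpu-high-${var.environment}"
  alarm_description   = "ECS service CPU above 80% for two consecutive periods"
  namespace           = "AWS/ECS"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  period              = 300
  evaluation_periods  = 2
  threshold           = 80
  comparison_operator = "GreaterThanThreshold"

  dimensions = {
    ClusterName = var.ecs_cluster_name
    ServiceName = var.ecs_service_name
  }

  alarm_actions = [aws_sns_topic.alerts.arn]
  ok_actions    = [aws_sns_topic.alerts.arn]

  tags = var.tags
}
```

The other alarms (DocumentDB, Redis, ALB) follow the same pattern against their respective CloudWatch namespaces.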
+ +## 🔄 **CI/CD Integration** + +The configuration includes IAM roles for GitHub Actions: +- **Terraform Role**: Deploy infrastructure changes +- **ECR Push Role**: Build and push container images + +Example GitHub Actions workflow: +```yaml +name: Deploy Infrastructure +on: + push: + branches: [main] + paths: ['terraform/**'] + +jobs: + terraform: + runs-on: ubuntu-latest + permissions: + id-token: write + contents: read + + steps: + - uses: actions/checkout@v4 + + - name: Configure AWS Credentials + uses: aws-actions/configure-aws-credentials@v4 + with: + role-to-assume: ${{ secrets.AWS_ROLE_ARN }} + aws-region: us-east-1 + + - name: Setup Terraform + uses: hashicorp/setup-terraform@v3 + + - name: Terraform Init + run: terraform init -backend-config=backend-prod.hcl + + - name: Terraform Apply + run: terraform apply -auto-approve +``` + +## 📋 **Common Commands** + +```bash +# Format code +terraform fmt -recursive + +# Validate configuration +terraform validate + +# Plan with specific var file +terraform plan -var-file=prod.tfvars + +# Show current state +terraform show + +# List resources +terraform state list + +# Import existing resource +terraform import aws_s3_bucket.example bucket-name + +# Destroy environment (BE CAREFUL!) +terraform destroy +``` + +## 🔧 **Customization** + +### **Environment-Specific Variables** +Create separate tfvars files for each environment: +- `dev.tfvars` +- `staging.tfvars` +- `prod.tfvars` + +### **Scaling Configuration** +Adjust container resources and auto-scaling: +```hcl +# For production +container_cpu = 2048 # 2 vCPU +container_memory = 4096 # 4 GB +ecs_desired_count = 2 +ecs_max_capacity = 10 +``` + +### **Database Sizing** +Configure instance types per environment: +```hcl +# Development +documentdb_instance_class = "db.t3.medium" +redis_node_type = "cache.t3.micro" + +# Production +documentdb_instance_class = "db.r5.large" +redis_node_type = "cache.r6g.large" +``` + +## 🆘 **Troubleshooting** + +### **Common Issues** + +1. **Backend bucket doesn't exist** + ```bash + aws s3 mb s3://sf-website-terraform-state-dev + ``` + +2. **Certificate ARN invalid** + - Verify certificate exists in correct region + - Ensure certificate covers required domains + +3. **State lock conflicts** + ```bash + terraform force-unlock LOCK_ID + ``` + +4. **Module source errors** + ```bash + terraform get -update + ``` + +### **Getting Help** +- Check AWS CloudTrail for API errors +- Review CloudWatch logs for application issues +- Validate Terraform syntax: `terraform validate` + +## 📈 **Cost Optimization** + +- Use `t3.micro` instances for development +- Schedule ECS tasks to scale down during off-hours +- Enable S3 lifecycle policies for log retention +- Consider Reserved Instances for production + +## 🔄 **Updates and Maintenance** + +1. **Provider Updates**: Regularly update AWS provider version +2. **Module Updates**: Test module changes in dev first +3. **State Backup**: S3 versioning provides automatic backups +4. **Security Updates**: Monitor AWS security bulletins + +--- + +**Next Steps**: After deploying infrastructure, configure your CI/CD pipeline to build and deploy the ApostropheCMS application to the created ECS cluster. 
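As a rough sketch of that follow-up step (the repository URL, cluster, and service names come from the Terraform outputs; the exact commands will depend on your pipeline):

```bash
# Hypothetical manual deploy, assuming the AWS CLI and Docker are configured locally.
ECR_URL=$(terraform output -raw ecr_repository_url)
CLUSTER=$(terraform output -raw ecs_cluster_name)
SERVICE=$(terraform output -raw ecs_service_name)

# Authenticate Docker against the ECR registry, then build and push the image
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin "${ECR_URL%%/*}"
docker build -t "${ECR_URL}:latest" .
docker push "${ECR_URL}:latest"

# Roll the ECS service so it picks up the new image
aws ecs update-service --cluster "$CLUSTER" --service "$SERVICE" --force-new-deployment
```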
\ No newline at end of file diff --git a/terraform/backend-dev.hcl b/terraform/backend-dev.hcl new file mode 100644 index 00000000..6b86f6f0 --- /dev/null +++ b/terraform/backend-dev.hcl @@ -0,0 +1,6 @@ +# Backend configuration for Development environment +bucket = "sf-website-terraform-state-dev" +key = "dev/terraform.tfstate" +region = "us-east-1" +dynamodb_table = "sf-website-terraform-locks" +encrypt = true \ No newline at end of file diff --git a/terraform/backend-prod.hcl b/terraform/backend-prod.hcl new file mode 100644 index 00000000..bc9de90b --- /dev/null +++ b/terraform/backend-prod.hcl @@ -0,0 +1,6 @@ +# Backend configuration for Production environment +bucket = "sf-website-terraform-state-prod" +key = "prod/terraform.tfstate" +region = "us-east-1" +dynamodb_table = "sf-website-terraform-locks" +encrypt = true \ No newline at end of file diff --git a/terraform/backend-staging.hcl b/terraform/backend-staging.hcl new file mode 100644 index 00000000..c43fd1c6 --- /dev/null +++ b/terraform/backend-staging.hcl @@ -0,0 +1,6 @@ +# Backend configuration for Staging environment +bucket = "sf-website-terraform-state" +key = "staging/terraform.tfstate" +region = "us-east-1" +dynamodb_table = "sf-website-terraform-locks" +encrypt = true \ No newline at end of file diff --git a/terraform/main.tf b/terraform/main.tf new file mode 100644 index 00000000..69ec798a --- /dev/null +++ b/terraform/main.tf @@ -0,0 +1,267 @@ +# Terraform configuration for SF Website Infrastructure +terraform { + required_version = ">= 1.0" + + required_providers { + aws = { + source = "hashicorp/aws" + version = "~> 5.0" + } + random = { + source = "hashicorp/random" + version = "~> 3.4" + } + } + + backend "s3" { + # Backend configuration will be provided via backend.hcl files + # Example: terraform init -backend-config=backend-dev.hcl + } +} + +# AWS Provider Configuration +provider "aws" { + region = var.aws_region + + default_tags { + tags = var.default_tags + } +} + +# Data sources +data "aws_caller_identity" "current" {} +data "aws_region" "current" {} +data "aws_availability_zones" "available" { + state = "available" +} + +# Random password for Redis auth token +resource "random_password" "redis_auth_token" { + length = 32 + special = true +} + +# Local values for common naming +locals { + name_prefix = "sf-website" + environment = var.environment + + # Availability zones (first 2) + azs = slice(data.aws_availability_zones.available.names, 0, 2) + + # Common tags for all resources + common_tags = merge(var.default_tags, { + Environment = var.environment + Project = "Website" + CostCenter = "Website" + Owner = "peter.ovchyn" + }) +} + +# VPC Module +module "vpc" { + source = "./modules/vpc" + + name_prefix = local.name_prefix + environment = local.environment + + vpc_cidr = var.vpc_cidr + azs = local.azs + + tags = local.common_tags +} + +# Security Groups Module +module "security_groups" { + source = "./modules/security_groups" + + name_prefix = local.name_prefix + environment = local.environment + + vpc_id = module.vpc.vpc_id + + tags = local.common_tags +} + +# S3 Module +module "s3" { + source = "./modules/s3" + + name_prefix = local.name_prefix + environment = local.environment + + tags = local.common_tags +} + +# IAM Module +module "iam" { + source = "./modules/iam" + + name_prefix = local.name_prefix + environment = local.environment + + s3_attachments_bucket_arn = module.s3.attachments_bucket_arn + ecr_repository_arn = module.ecr.repository_arn + + # GitHub Actions configuration + github_repo = 
var.github_repository + + tags = local.common_tags +} + +# ECR Module +module "ecr" { + source = "./modules/ecr" + + name_prefix = local.name_prefix + environment = local.environment + + tags = local.common_tags +} + +# Parameter Store Module +module "parameter_store" { + source = "./modules/parameter_store" + + name_prefix = local.name_prefix + environment = local.environment + + # Secrets from tfvars + documentdb_master_username = var.documentdb_master_username + documentdb_master_password = var.documentdb_master_password + session_secret = var.session_secret + + # Auto-generated Redis auth token + redis_auth_token = random_password.redis_auth_token.result + + # Optional Google Cloud Storage key + gcs_service_account_key = var.gcs_service_account_key + + tags = local.common_tags +} + +# DocumentDB Module +module "documentdb" { + source = "./modules/documentdb" + + name_prefix = local.name_prefix + environment = local.environment + + vpc_id = module.vpc.vpc_id + subnet_ids = module.vpc.private_subnet_ids + security_group_ids = [module.security_groups.documentdb_sg_id] + + master_username = var.documentdb_master_username + master_password = var.documentdb_master_password + + tags = local.common_tags +} + +# ElastiCache Redis Module +module "redis" { + source = "./modules/redis" + + name_prefix = local.name_prefix + environment = local.environment + + vpc_id = module.vpc.vpc_id + subnet_ids = module.vpc.private_subnet_ids + security_group_ids = [module.security_groups.redis_sg_id] + + auth_token = random_password.redis_auth_token.result + + tags = local.common_tags +} + +# ALB Module +module "alb" { + source = "./modules/alb" + + name_prefix = local.name_prefix + environment = local.environment + + vpc_id = module.vpc.vpc_id + subnet_ids = module.vpc.public_subnet_ids + security_group_ids = [module.security_groups.alb_sg_id] + + domain_name = var.domain_name + certificate_arn = var.certificate_arn + + tags = local.common_tags +} + +# CloudFront Module +module "cloudfront" { + source = "./modules/cloudfront" + + name_prefix = local.name_prefix + environment = local.environment + + s3_bucket_domain_name = module.s3.attachments_bucket_domain_name + s3_bucket_id = module.s3.attachments_bucket_id + + media_domain_name = var.media_domain_name + certificate_arn = var.certificate_arn + + tags = local.common_tags +} + +# ECS Module +module "ecs" { + source = "./modules/ecs" + + name_prefix = local.name_prefix + environment = local.environment + + vpc_id = module.vpc.vpc_id + subnet_ids = module.vpc.private_subnet_ids + security_group_ids = [module.security_groups.ecs_sg_id] + + # Service configuration + ecr_repository_url = module.ecr.repository_url + task_role_arn = module.iam.ecs_task_role_arn + execution_role_arn = module.iam.ecs_execution_role_arn + + # Target group for ALB + target_group_arn = module.alb.target_group_arn + + # Environment variables + environment_variables = { + NODE_ENV = "production" + APOS_MONGODB_URI = "mongodb://${module.documentdb.cluster_endpoint}:27017/apostrophe" + REDIS_URI = "redis://${module.redis.cluster_endpoint}:6379" + BASE_URL = "https://${var.domain_name}" + APOS_S3_BUCKET = module.s3.attachments_bucket_id + APOS_S3_REGION = var.aws_region + APOS_CDN_URL = "https://${var.media_domain_name}" + APOS_CDN_ENABLED = "true" + } + + # Secrets from Parameter Store + secrets = { + SESSION_SECRET = module.parameter_store.session_secret_arn + SERVICE_ACCOUNT_PRIVATE_KEY = module.parameter_store.gcs_service_account_key_arn + } + + tags = local.common_tags +} + +# 
CloudWatch Module +module "cloudwatch" { + source = "./modules/cloudwatch" + + name_prefix = local.name_prefix + environment = local.environment + + # Resources to monitor + ecs_cluster_name = module.ecs.cluster_name + ecs_service_name = module.ecs.service_name + alb_arn_suffix = module.alb.arn_suffix + documentdb_cluster_id = module.documentdb.cluster_identifier + redis_cluster_id = module.redis.cluster_id + + # Slack webhook for notifications + slack_webhook_url = var.slack_webhook_url + + tags = local.common_tags +} \ No newline at end of file diff --git a/terraform/modules/s3/main.tf b/terraform/modules/s3/main.tf new file mode 100644 index 00000000..eeb11296 --- /dev/null +++ b/terraform/modules/s3/main.tf @@ -0,0 +1,207 @@ +# S3 Module for SF Website Infrastructure + +# S3 Bucket for Attachments +resource "aws_s3_bucket" "attachments" { + bucket = "${var.name_prefix}-s3-attachments-${var.environment}" + + tags = merge(var.tags, { + Name = "${var.name_prefix}-s3-attachments-${var.environment}" + Type = "Attachments" + }) +} + +# S3 Bucket for Logs +resource "aws_s3_bucket" "logs" { + bucket = "${var.name_prefix}-s3-logs-${var.environment}" + + tags = merge(var.tags, { + Name = "${var.name_prefix}-s3-logs-${var.environment}" + Type = "Logs" + }) +} + +# Attachments Bucket Configuration +resource "aws_s3_bucket_versioning" "attachments" { + bucket = aws_s3_bucket.attachments.id + versioning_configuration { + status = "Enabled" + } +} + +resource "aws_s3_bucket_server_side_encryption_configuration" "attachments" { + bucket = aws_s3_bucket.attachments.id + + rule { + apply_server_side_encryption_by_default { + sse_algorithm = "AES256" + } + } +} + +resource "aws_s3_bucket_public_access_block" "attachments" { + bucket = aws_s3_bucket.attachments.id + + block_public_acls = true + block_public_policy = true + ignore_public_acls = true + restrict_public_buckets = true +} + +resource "aws_s3_bucket_lifecycle_configuration" "attachments" { + bucket = aws_s3_bucket.attachments.id + + rule { + id = "transition_to_ia" + status = "Enabled" + + transition { + days = 30 + storage_class = "STANDARD_IA" + } + + transition { + days = 90 + storage_class = "GLACIER" + } + } + + rule { + id = "expire_non_current_versions" + status = "Enabled" + + noncurrent_version_expiration { + noncurrent_days = 30 + } + } +} + +resource "aws_s3_bucket_cors_configuration" "attachments" { + bucket = aws_s3_bucket.attachments.id + + cors_rule { + allowed_headers = ["*"] + allowed_methods = ["GET", "PUT", "POST", "DELETE"] + allowed_origins = [ + "https://${var.domain_name}", + "https://${var.media_domain_name}" + ] + expose_headers = ["ETag"] + max_age_seconds = 3000 + } +} + +# Logs Bucket Configuration +resource "aws_s3_bucket_versioning" "logs" { + bucket = aws_s3_bucket.logs.id + versioning_configuration { + status = "Enabled" + } +} + +resource "aws_s3_bucket_server_side_encryption_configuration" "logs" { + bucket = aws_s3_bucket.logs.id + + rule { + apply_server_side_encryption_by_default { + sse_algorithm = "AES256" + } + } +} + +resource "aws_s3_bucket_public_access_block" "logs" { + bucket = aws_s3_bucket.logs.id + + block_public_acls = true + block_public_policy = true + ignore_public_acls = true + restrict_public_buckets = true +} + +resource "aws_s3_bucket_lifecycle_configuration" "logs" { + bucket = aws_s3_bucket.logs.id + + rule { + id = "log_lifecycle" + status = "Enabled" + + transition { + days = 90 + storage_class = "GLACIER" + } + + expiration { + days = 365 + } + } +} + +# CloudFront Origin Access 
Control for Attachments Bucket +resource "aws_cloudfront_origin_access_control" "attachments" { + name = "${var.name_prefix}-attachments-oac-${var.environment}" + description = "OAC for attachments bucket" + origin_access_control_origin_type = "s3" + signing_behavior = "always" + signing_protocol = "sigv4" +} + +# Bucket Policy for CloudFront Access +resource "aws_s3_bucket_policy" "attachments" { + bucket = aws_s3_bucket.attachments.id + + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Sid = "AllowCloudFrontServicePrincipal" + Effect = "Allow" + Principal = { + Service = "cloudfront.amazonaws.com" + } + Action = "s3:GetObject" + Resource = "${aws_s3_bucket.attachments.arn}/*" + Condition = { + StringEquals = { + "AWS:SourceArn" = var.cloudfront_distribution_arn + } + } + } + ] + }) + + depends_on = [aws_s3_bucket_public_access_block.attachments] +} + +# Bucket Policy for Application Access (via IAM role) +data "aws_iam_policy_document" "attachments_access" { + statement { + sid = "AllowECSTaskAccess" + effect = "Allow" + + principals { + type = "AWS" + identifiers = [var.ecs_task_role_arn] + } + + actions = [ + "s3:GetObject", + "s3:PutObject", + "s3:DeleteObject" + ] + + resources = ["${aws_s3_bucket.attachments.arn}/*"] + } + + statement { + sid = "AllowECSTaskListBucket" + effect = "Allow" + + principals { + type = "AWS" + identifiers = [var.ecs_task_role_arn] + } + + actions = ["s3:ListBucket"] + + resources = [aws_s3_bucket.attachments.arn] + } +} \ No newline at end of file diff --git a/terraform/modules/s3/outputs.tf b/terraform/modules/s3/outputs.tf new file mode 100644 index 00000000..97f9f777 --- /dev/null +++ b/terraform/modules/s3/outputs.tf @@ -0,0 +1,31 @@ +# Outputs for S3 Module + +output "attachments_bucket_id" { + description = "ID of the attachments bucket" + value = aws_s3_bucket.attachments.id +} + +output "attachments_bucket_arn" { + description = "ARN of the attachments bucket" + value = aws_s3_bucket.attachments.arn +} + +output "attachments_bucket_domain_name" { + description = "Domain name of the attachments bucket" + value = aws_s3_bucket.attachments.bucket_domain_name +} + +output "logs_bucket_id" { + description = "ID of the logs bucket" + value = aws_s3_bucket.logs.id +} + +output "logs_bucket_arn" { + description = "ARN of the logs bucket" + value = aws_s3_bucket.logs.arn +} + +output "cloudfront_oac_id" { + description = "ID of the CloudFront Origin Access Control" + value = aws_cloudfront_origin_access_control.attachments.id +} \ No newline at end of file diff --git a/terraform/modules/s3/variables.tf b/terraform/modules/s3/variables.tf new file mode 100644 index 00000000..151f4ec5 --- /dev/null +++ b/terraform/modules/s3/variables.tf @@ -0,0 +1,41 @@ +# Variables for S3 Module + +variable "name_prefix" { + description = "Name prefix for resources" + type = string +} + +variable "environment" { + description = "Environment name" + type = string +} + +variable "tags" { + description = "Tags to apply to resources" + type = map(string) + default = {} +} + +variable "domain_name" { + description = "Application domain name for CORS" + type = string + default = "" +} + +variable "media_domain_name" { + description = "Media domain name for CORS" + type = string + default = "" +} + +variable "cloudfront_distribution_arn" { + description = "ARN of CloudFront distribution for bucket policy" + type = string + default = "" +} + +variable "ecs_task_role_arn" { + description = "ARN of ECS task role for bucket access" + type = string + default = "" 
+} \ No newline at end of file diff --git a/terraform/modules/security_groups/main.tf b/terraform/modules/security_groups/main.tf new file mode 100644 index 00000000..d882e52d --- /dev/null +++ b/terraform/modules/security_groups/main.tf @@ -0,0 +1,117 @@ +# Security Groups Module for SF Website Infrastructure + +# ALB Security Group +resource "aws_security_group" "alb" { + name = "${var.name_prefix}-alb-sg-${var.environment}" + description = "Security group for Application Load Balancer" + vpc_id = var.vpc_id + + ingress { + description = "HTTP" + from_port = 80 + to_port = 80 + protocol = "tcp" + cidr_blocks = ["0.0.0.0/0"] + } + + ingress { + description = "HTTPS" + from_port = 443 + to_port = 443 + protocol = "tcp" + cidr_blocks = ["0.0.0.0/0"] + } + + egress { + description = "All outbound traffic" + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + tags = merge(var.tags, { + Name = "${var.name_prefix}-alb-sg-${var.environment}" + }) +} + +# ECS Security Group +resource "aws_security_group" "ecs" { + name = "${var.name_prefix}-ecs-sg-${var.environment}" + description = "Security group for ECS tasks" + vpc_id = var.vpc_id + + ingress { + description = "HTTP from ALB" + from_port = 3000 + to_port = 3000 + protocol = "tcp" + security_groups = [aws_security_group.alb.id] + } + + egress { + description = "All outbound traffic" + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + tags = merge(var.tags, { + Name = "${var.name_prefix}-ecs-sg-${var.environment}" + }) +} + +# DocumentDB Security Group +resource "aws_security_group" "documentdb" { + name = "${var.name_prefix}-documentdb-sg-${var.environment}" + description = "Security group for DocumentDB cluster" + vpc_id = var.vpc_id + + ingress { + description = "MongoDB from ECS" + from_port = 27017 + to_port = 27017 + protocol = "tcp" + security_groups = [aws_security_group.ecs.id] + } + + egress { + description = "All outbound traffic" + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + tags = merge(var.tags, { + Name = "${var.name_prefix}-documentdb-sg-${var.environment}" + }) +} + +# Redis Security Group +resource "aws_security_group" "redis" { + name = "${var.name_prefix}-redis-sg-${var.environment}" + description = "Security group for ElastiCache Redis cluster" + vpc_id = var.vpc_id + + ingress { + description = "Redis from ECS" + from_port = 6379 + to_port = 6379 + protocol = "tcp" + security_groups = [aws_security_group.ecs.id] + } + + egress { + description = "All outbound traffic" + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + tags = merge(var.tags, { + Name = "${var.name_prefix}-redis-sg-${var.environment}" + }) +} \ No newline at end of file diff --git a/terraform/modules/security_groups/outputs.tf b/terraform/modules/security_groups/outputs.tf new file mode 100644 index 00000000..47149ec9 --- /dev/null +++ b/terraform/modules/security_groups/outputs.tf @@ -0,0 +1,21 @@ +# Outputs for Security Groups Module + +output "alb_sg_id" { + description = "ID of the ALB security group" + value = aws_security_group.alb.id +} + +output "ecs_sg_id" { + description = "ID of the ECS security group" + value = aws_security_group.ecs.id +} + +output "documentdb_sg_id" { + description = "ID of the DocumentDB security group" + value = aws_security_group.documentdb.id +} + +output "redis_sg_id" { + description = "ID of the Redis security group" + value = aws_security_group.redis.id +} \ No newline at end of file 
diff --git a/terraform/modules/security_groups/variables.tf b/terraform/modules/security_groups/variables.tf new file mode 100644 index 00000000..ccec9e16 --- /dev/null +++ b/terraform/modules/security_groups/variables.tf @@ -0,0 +1,22 @@ +# Variables for Security Groups Module + +variable "name_prefix" { + description = "Name prefix for resources" + type = string +} + +variable "environment" { + description = "Environment name" + type = string +} + +variable "vpc_id" { + description = "ID of the VPC" + type = string +} + +variable "tags" { + description = "Tags to apply to resources" + type = map(string) + default = {} +} \ No newline at end of file diff --git a/terraform/modules/vpc/main.tf b/terraform/modules/vpc/main.tf new file mode 100644 index 00000000..6c25cbf0 --- /dev/null +++ b/terraform/modules/vpc/main.tf @@ -0,0 +1,184 @@ +# VPC Module for SF Website Infrastructure + +# VPC +resource "aws_vpc" "main" { + cidr_block = var.vpc_cidr + enable_dns_hostnames = true + enable_dns_support = true + + tags = merge(var.tags, { + Name = "${var.name_prefix}-vpc-${var.environment}" + }) +} + +# Internet Gateway +resource "aws_internet_gateway" "main" { + vpc_id = aws_vpc.main.id + + tags = merge(var.tags, { + Name = "${var.name_prefix}-igw-${var.environment}" + }) +} + +# Public Subnets +resource "aws_subnet" "public" { + count = length(var.azs) + + vpc_id = aws_vpc.main.id + cidr_block = cidrsubnet(var.vpc_cidr, 8, count.index) + availability_zone = var.azs[count.index] + map_public_ip_on_launch = true + + tags = merge(var.tags, { + Name = "${var.name_prefix}-public-subnet-${count.index + 1}-${var.environment}" + Type = "Public" + }) +} + +# Private Subnets +resource "aws_subnet" "private" { + count = length(var.azs) + + vpc_id = aws_vpc.main.id + cidr_block = cidrsubnet(var.vpc_cidr, 8, count.index + 10) + availability_zone = var.azs[count.index] + + tags = merge(var.tags, { + Name = "${var.name_prefix}-private-subnet-${count.index + 1}-${var.environment}" + Type = "Private" + }) +} + +# Elastic IPs for NAT Gateways +resource "aws_eip" "nat" { + count = length(var.azs) + + domain = "vpc" + depends_on = [aws_internet_gateway.main] + + tags = merge(var.tags, { + Name = "${var.name_prefix}-eip-nat-${count.index + 1}-${var.environment}" + }) +} + +# NAT Gateways +resource "aws_nat_gateway" "main" { + count = length(var.azs) + + allocation_id = aws_eip.nat[count.index].id + subnet_id = aws_subnet.public[count.index].id + + tags = merge(var.tags, { + Name = "${var.name_prefix}-nat-${count.index + 1}-${var.environment}" + }) + + depends_on = [aws_internet_gateway.main] +} + +# Route Table for Public Subnets +resource "aws_route_table" "public" { + vpc_id = aws_vpc.main.id + + route { + cidr_block = "0.0.0.0/0" + gateway_id = aws_internet_gateway.main.id + } + + tags = merge(var.tags, { + Name = "${var.name_prefix}-public-rt-${var.environment}" + }) +} + +# Route Table Associations for Public Subnets +resource "aws_route_table_association" "public" { + count = length(aws_subnet.public) + + subnet_id = aws_subnet.public[count.index].id + route_table_id = aws_route_table.public.id +} + +# Route Tables for Private Subnets (one per AZ) +resource "aws_route_table" "private" { + count = length(var.azs) + + vpc_id = aws_vpc.main.id + + route { + cidr_block = "0.0.0.0/0" + nat_gateway_id = aws_nat_gateway.main[count.index].id + } + + tags = merge(var.tags, { + Name = "${var.name_prefix}-private-rt-${count.index + 1}-${var.environment}" + }) +} + +# Route Table Associations for Private Subnets +resource 
"aws_route_table_association" "private" { + count = length(aws_subnet.private) + + subnet_id = aws_subnet.private[count.index].id + route_table_id = aws_route_table.private[count.index].id +} + +# VPC Flow Logs +resource "aws_flow_log" "vpc" { + iam_role_arn = aws_iam_role.flow_log.arn + log_destination = aws_cloudwatch_log_group.vpc_flow_log.arn + traffic_type = "ALL" + vpc_id = aws_vpc.main.id +} + +# CloudWatch Log Group for VPC Flow Logs +resource "aws_cloudwatch_log_group" "vpc_flow_log" { + name = "/aws/vpc/flowlogs/${var.name_prefix}-${var.environment}" + retention_in_days = 30 + + tags = merge(var.tags, { + Name = "${var.name_prefix}-vpc-flowlogs-${var.environment}" + }) +} + +# IAM Role for VPC Flow Logs +resource "aws_iam_role" "flow_log" { + name = "${var.name_prefix}-vpc-flow-log-role-${var.environment}" + + assume_role_policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Sid = "" + Principal = { + Service = "vpc-flow-logs.amazonaws.com" + } + }, + ] + }) + + tags = var.tags +} + +# IAM Policy for VPC Flow Logs +resource "aws_iam_role_policy" "flow_log" { + name = "${var.name_prefix}-vpc-flow-log-policy-${var.environment}" + role = aws_iam_role.flow_log.id + + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = [ + "logs:CreateLogGroup", + "logs:CreateLogStream", + "logs:PutLogEvents", + "logs:DescribeLogGroups", + "logs:DescribeLogStreams", + ] + Effect = "Allow" + Resource = "*" + }, + ] + }) +} \ No newline at end of file diff --git a/terraform/modules/vpc/outputs.tf b/terraform/modules/vpc/outputs.tf new file mode 100644 index 00000000..8c2643a1 --- /dev/null +++ b/terraform/modules/vpc/outputs.tf @@ -0,0 +1,46 @@ +# Outputs for VPC Module + +output "vpc_id" { + description = "ID of the VPC" + value = aws_vpc.main.id +} + +output "vpc_cidr_block" { + description = "CIDR block of the VPC" + value = aws_vpc.main.cidr_block +} + +output "internet_gateway_id" { + description = "ID of the Internet Gateway" + value = aws_internet_gateway.main.id +} + +output "public_subnet_ids" { + description = "IDs of the public subnets" + value = aws_subnet.public[*].id +} + +output "private_subnet_ids" { + description = "IDs of the private subnets" + value = aws_subnet.private[*].id +} + +output "public_route_table_id" { + description = "ID of the public route table" + value = aws_route_table.public.id +} + +output "private_route_table_ids" { + description = "IDs of the private route tables" + value = aws_route_table.private[*].id +} + +output "nat_gateway_ids" { + description = "IDs of the NAT gateways" + value = aws_nat_gateway.main[*].id +} + +output "nat_gateway_ips" { + description = "Elastic IP addresses of the NAT gateways" + value = aws_eip.nat[*].public_ip +} \ No newline at end of file diff --git a/terraform/modules/vpc/variables.tf b/terraform/modules/vpc/variables.tf new file mode 100644 index 00000000..76d19bdd --- /dev/null +++ b/terraform/modules/vpc/variables.tf @@ -0,0 +1,27 @@ +# Variables for VPC Module + +variable "name_prefix" { + description = "Name prefix for resources" + type = string +} + +variable "environment" { + description = "Environment name" + type = string +} + +variable "vpc_cidr" { + description = "CIDR block for VPC" + type = string +} + +variable "azs" { + description = "List of availability zones" + type = list(string) +} + +variable "tags" { + description = "Tags to apply to resources" + type = map(string) + default = {} +} \ No newline at end of file diff --git 
a/terraform/outputs.tf b/terraform/outputs.tf new file mode 100644 index 00000000..9acd02fa --- /dev/null +++ b/terraform/outputs.tf @@ -0,0 +1,156 @@ +# Outputs for SF Website Infrastructure + +# VPC Outputs +output "vpc_id" { + description = "ID of the VPC" + value = module.vpc.vpc_id +} + +output "vpc_cidr_block" { + description = "CIDR block of the VPC" + value = module.vpc.vpc_cidr_block +} + +output "public_subnet_ids" { + description = "IDs of the public subnets" + value = module.vpc.public_subnet_ids +} + +output "private_subnet_ids" { + description = "IDs of the private subnets" + value = module.vpc.private_subnet_ids +} + +# S3 Outputs +output "attachments_bucket_name" { + description = "Name of the S3 attachments bucket" + value = module.s3.attachments_bucket_id +} + +output "logs_bucket_name" { + description = "Name of the S3 logs bucket" + value = module.s3.logs_bucket_id +} + +# ECR Outputs +output "ecr_repository_url" { + description = "URL of the ECR repository" + value = module.ecr.repository_url +} + +output "ecr_repository_arn" { + description = "ARN of the ECR repository" + value = module.ecr.repository_arn +} + +# ALB Outputs +output "alb_dns_name" { + description = "DNS name of the Application Load Balancer" + value = module.alb.dns_name +} + +output "alb_zone_id" { + description = "Zone ID of the Application Load Balancer" + value = module.alb.zone_id +} + +output "domain_name" { + description = "Domain name for the application" + value = var.domain_name +} + +# CloudFront Outputs +output "cloudfront_distribution_id" { + description = "ID of the CloudFront distribution" + value = module.cloudfront.distribution_id +} + +output "cloudfront_domain_name" { + description = "Domain name of the CloudFront distribution" + value = module.cloudfront.domain_name +} + +output "media_domain_name" { + description = "Domain name for media CDN" + value = var.media_domain_name +} + +# Database Outputs +output "documentdb_cluster_endpoint" { + description = "DocumentDB cluster endpoint" + value = module.documentdb.cluster_endpoint + sensitive = true +} + +output "documentdb_cluster_reader_endpoint" { + description = "DocumentDB cluster reader endpoint" + value = module.documentdb.cluster_reader_endpoint + sensitive = true +} + +# Cache Outputs +output "redis_cluster_endpoint" { + description = "ElastiCache Redis cluster endpoint" + value = module.redis.cluster_endpoint + sensitive = true +} + +# ECS Outputs +output "ecs_cluster_name" { + description = "Name of the ECS cluster" + value = module.ecs.cluster_name +} + +output "ecs_service_name" { + description = "Name of the ECS service" + value = module.ecs.service_name +} + +# IAM Outputs +output "ecs_task_role_arn" { + description = "ARN of the ECS task role" + value = module.iam.ecs_task_role_arn +} + +output "ecs_execution_role_arn" { + description = "ARN of the ECS execution role" + value = module.iam.ecs_execution_role_arn +} + +output "github_actions_role_arn" { + description = "ARN of the GitHub Actions IAM role" + value = module.iam.github_actions_role_arn +} + +# Security Group Outputs +output "security_groups" { + description = "Security group IDs" + value = { + alb = module.security_groups.alb_sg_id + ecs = module.security_groups.ecs_sg_id + documentdb = module.security_groups.documentdb_sg_id + redis = module.security_groups.redis_sg_id + } +} + +# Application URLs +output "application_url" { + description = "URL to access the application" + value = "https://${var.domain_name}" +} + +output "media_url" { + description = 
"URL for media assets" + value = "https://${var.media_domain_name}" +} + +# Environment Information +output "environment" { + description = "Environment name" + value = var.environment +} + +output "aws_region" { + description = "AWS region" + value = var.aws_region +} \ No newline at end of file diff --git a/terraform/terraform.tfvars.example b/terraform/terraform.tfvars.example new file mode 100644 index 00000000..7f6abaae --- /dev/null +++ b/terraform/terraform.tfvars.example @@ -0,0 +1,53 @@ +# Example Terraform Variables for SF Website Infrastructure +# Copy this file to terraform.tfvars and customize the values + +# General Configuration +aws_region = "us-east-1" +environment = "dev" # Can be any name: dev, staging, prod, qa, demo, sandbox, etc. + +# Networking +vpc_cidr = "10.0.0.0/16" # Choose unique CIDR for each environment to avoid conflicts + +# Domain Configuration +domain_name = "sf-website-dev.sandbox-prettyclear.com" +media_domain_name = "sf-website-media-dev.sandbox-prettyclear.com" +certificate_arn = "arn:aws:acm:us-east-1:548271326349:certificate/7e11016f-f90e-4800-972d-622bf1a82948" + +# Database Configuration (SENSITIVE - Store securely) +documentdb_master_username = "apostrophe_admin" +documentdb_master_password = "your-secure-password-here" # Min 8 characters + +# Application Secrets (SENSITIVE - Store securely) +session_secret = "your-super-secure-32-character-session-secret-here" + +# Optional: Google Cloud Storage Service Account (if using GCS) +gcs_service_account_key = "" # Leave empty if not using GCS + +# GitHub Actions Configuration +github_repository = "your-username/sf-website" # Format: owner/repo + +# Monitoring Configuration (SENSITIVE) +slack_webhook_url = "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK" + +# Optional: Route 53 Configuration +route53_zone_id = "" # Leave empty to skip DNS record creation + +# Container Configuration +container_image_tag = "latest" +container_cpu = 1024 # 1024 = 1 vCPU +container_memory = 2048 # 2048 MB = 2 GB + +# Scaling Configuration +ecs_desired_count = 1 +ecs_max_capacity = 3 + +# Environment-specific Instance Types +documentdb_instance_class = "db.t3.medium" +redis_node_type = "cache.t3.micro" # cache.t3.small for production + +# Tags +default_tags = { + Project = "Website" + CostCenter = "Website" + Owner = "peter.ovchyn" +} \ No newline at end of file diff --git a/terraform/variables.tf b/terraform/variables.tf new file mode 100644 index 00000000..c5d966bf --- /dev/null +++ b/terraform/variables.tf @@ -0,0 +1,173 @@ +# Variables for SF Website Infrastructure + +# General Configuration +variable "aws_region" { + description = "AWS region for resources" + type = string + default = "us-east-1" +} + +variable "environment" { + description = "Environment name (e.g., dev, staging, prod, qa, demo, sandbox, etc.)" + type = string +} + +variable "default_tags" { + description = "Default tags to apply to all resources" + type = map(string) + default = { + Project = "Website" + CostCenter = "Website" + Owner = "peter.ovchyn" + } +} + +# Networking Configuration +variable "vpc_cidr" { + description = "CIDR block for VPC" + type = string + default = "10.0.0.0/16" + + validation { + condition = can(cidrhost(var.vpc_cidr, 0)) + error_message = "VPC CIDR must be a valid IPv4 CIDR block." 
+ } +} + +# Domain Configuration +variable "domain_name" { + description = "Domain name for the application (e.g., sf-website-dev.sandbox-prettyclear.com)" + type = string +} + +variable "media_domain_name" { + description = "Domain name for media CDN (e.g., sf-website-media-dev.sandbox-prettyclear.com)" + type = string +} + +variable "certificate_arn" { + description = "ARN of the SSL certificate from ACM" + type = string +} + +# Database Configuration +variable "documentdb_master_username" { + description = "Master username for DocumentDB cluster" + type = string + sensitive = true +} + +variable "documentdb_master_password" { + description = "Master password for DocumentDB cluster" + type = string + sensitive = true + + validation { + condition = length(var.documentdb_master_password) >= 8 + error_message = "DocumentDB master password must be at least 8 characters long." + } +} + +# Application Secrets +variable "session_secret" { + description = "Secret key for session management" + type = string + sensitive = true + + validation { + condition = length(var.session_secret) >= 32 + error_message = "Session secret must be at least 32 characters long." + } +} + +variable "gcs_service_account_key" { + description = "Google Cloud Storage service account private key (optional)" + type = string + default = "" + sensitive = true +} + +# GitHub Actions Configuration +variable "github_repository" { + description = "GitHub repository in the format 'owner/repo' for OIDC integration" + type = string +} + +# Monitoring Configuration +variable "slack_webhook_url" { + description = "Slack webhook URL for CloudWatch alerts" + type = string + sensitive = true +} + +# Route 53 Configuration (optional) +variable "route53_zone_id" { + description = "Route 53 hosted zone ID for DNS records" + type = string + default = "" +} + +# Container Configuration +variable "container_image_tag" { + description = "Container image tag to deploy" + type = string + default = "latest" +} + +variable "container_cpu" { + description = "CPU units for the container (1024 = 1 vCPU)" + type = number + default = 1024 + + validation { + condition = contains([256, 512, 1024, 2048, 4096], var.container_cpu) + error_message = "Container CPU must be one of: 256, 512, 1024, 2048, 4096." + } +} + +variable "container_memory" { + description = "Memory in MB for the container" + type = number + default = 2048 + + validation { + condition = var.container_memory >= 512 && var.container_memory <= 30720 + error_message = "Container memory must be between 512 and 30720 MB." + } +} + +# Scaling Configuration +variable "ecs_desired_count" { + description = "Desired number of ECS tasks" + type = number + default = 1 + + validation { + condition = var.ecs_desired_count >= 1 && var.ecs_desired_count <= 10 + error_message = "ECS desired count must be between 1 and 10." + } +} + +variable "ecs_max_capacity" { + description = "Maximum number of ECS tasks for auto-scaling" + type = number + default = 3 + + validation { + condition = var.ecs_max_capacity >= var.ecs_desired_count + error_message = "ECS max capacity must be greater than or equal to desired count." 
+ } +} + +# Environment-specific Instance Types +variable "documentdb_instance_class" { + description = "DocumentDB instance class" + type = string + default = "db.t3.medium" +} + +variable "redis_node_type" { + description = "ElastiCache Redis node type" + type = string + default = "cache.t3.micro" +} \ No newline at end of file From 196a33b6bf5e18457ada34a37e94a85424c0f4ef Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Sat, 24 May 2025 15:48:30 +0200 Subject: [PATCH 03/46] Standardize Terraform backend configuration and add initialization script - Unify S3 bucket names to 'sf-website-infrastructure' across all environments - Update state file paths to consistent terraform/terraform-{env}.tfstate format - Replace hardcoded session secret with environment variable placeholder - Add comprehensive AWS initialization script for Terraform backend resources - Include S3 bucket and DynamoDB table management with proper error handling --- terraform/backend-dev.hcl | 4 +- terraform/backend-prod.hcl | 6 +- terraform/backend-staging.hcl | 4 +- terraform/init-aws-for-terraform.sh | 338 ++++++++++++++++++++++++++++ terraform/terraform.tfvars.example | 2 +- 5 files changed, 346 insertions(+), 8 deletions(-) create mode 100755 terraform/init-aws-for-terraform.sh diff --git a/terraform/backend-dev.hcl b/terraform/backend-dev.hcl index 6b86f6f0..7a6cb983 100644 --- a/terraform/backend-dev.hcl +++ b/terraform/backend-dev.hcl @@ -1,6 +1,6 @@ # Backend configuration for Development environment -bucket = "sf-website-terraform-state-dev" -key = "dev/terraform.tfstate" +bucket = "sf-website-infrastructure" +key = "terraform/terraform-dev.tfstate" region = "us-east-1" dynamodb_table = "sf-website-terraform-locks" encrypt = true \ No newline at end of file diff --git a/terraform/backend-prod.hcl b/terraform/backend-prod.hcl index bc9de90b..e2a6b7dc 100644 --- a/terraform/backend-prod.hcl +++ b/terraform/backend-prod.hcl @@ -1,6 +1,6 @@ -# Backend configuration for Production environment -bucket = "sf-website-terraform-state-prod" -key = "prod/terraform.tfstate" +# Backend configuration for Prod environment +bucket = "sf-website-infrastructure" +key = "terraform/terraform-prod.tfstate" region = "us-east-1" dynamodb_table = "sf-website-terraform-locks" encrypt = true \ No newline at end of file diff --git a/terraform/backend-staging.hcl b/terraform/backend-staging.hcl index c43fd1c6..dba7befa 100644 --- a/terraform/backend-staging.hcl +++ b/terraform/backend-staging.hcl @@ -1,6 +1,6 @@ # Backend configuration for Staging environment -bucket = "sf-website-terraform-state" -key = "staging/terraform.tfstate" +bucket = "sf-website-infrastructure" +key = "terraform/terraform-staging.tfstate" region = "us-east-1" dynamodb_table = "sf-website-terraform-locks" encrypt = true \ No newline at end of file diff --git a/terraform/init-aws-for-terraform.sh b/terraform/init-aws-for-terraform.sh new file mode 100755 index 00000000..39b7bba6 --- /dev/null +++ b/terraform/init-aws-for-terraform.sh @@ -0,0 +1,338 @@ +#!/bin/bash + +# Script to manage Terraform backend resources (S3 bucket and DynamoDB table) +# Usage: ./init-aws-for-terraform.sh [create|delete|status] [--profile PROFILE_NAME] + +set -e + +# Configuration from backend-dev.hcl +readonly BUCKET_NAME='sf-website-infrastructure' +readonly DYNAMODB_TABLE='sf-website-terraform-locks' +readonly AWS_REGION='us-east-1' + +# Global variables +PROFILE='' + +# Colors for output +readonly RED='\033[0;31m' +readonly GREEN='\033[0;32m' +readonly YELLOW='\033[1;33m' +readonly 
NC='\033[0m' # No Color + +# Print colored output +print_info() { + echo -e "${GREEN}[INFO]${NC} $1" +} + +print_warning() { + echo -e "${YELLOW}[WARN]${NC} $1" +} + +print_error() { + echo -e "${RED}[ERROR]${NC} $1" +} + +# Get AWS CLI command with profile if specified +aws_cmd() { + if [ -n "${PROFILE}" ]; then + echo "aws --profile ${PROFILE}" + else + echo "aws" + fi +} + +# Check if AWS CLI is installed and configured +check_aws_cli() { + if ! command -v aws &> /dev/null; then + print_error 'AWS CLI is not installed' + exit 1 + fi + + local aws_command + aws_command=$(aws_cmd) + + # Check credentials by testing the command and capturing exit code + if ${aws_command} sts get-caller-identity >/dev/null 2>&1; then + if [ -n "${PROFILE}" ]; then + print_info "Using AWS profile: ${PROFILE}" + fi + else + if [ -n "${PROFILE}" ]; then + print_error "AWS CLI profile '${PROFILE}' is not configured or credentials are invalid" + else + print_error 'AWS CLI is not configured or credentials are invalid' + fi + exit 1 + fi +} + +# Check if S3 bucket exists +bucket_exists() { + local aws_command + aws_command=$(aws_cmd) + ${aws_command} s3api head-bucket --bucket "${BUCKET_NAME}" \ + --region "${AWS_REGION}" 2>/dev/null +} + +# Check if DynamoDB table exists +table_exists() { + local aws_command + aws_command=$(aws_cmd) + ${aws_command} dynamodb describe-table --table-name "${DYNAMODB_TABLE}" \ + --region "${AWS_REGION}" &> /dev/null +} + +# Create S3 bucket +create_s3_bucket() { + if bucket_exists; then + print_warning "S3 bucket '${BUCKET_NAME}' already exists" + return 0 + fi + + print_info "Creating S3 bucket '${BUCKET_NAME}'..." + + local aws_command + aws_command=$(aws_cmd) + + # Create bucket + if [ "${AWS_REGION}" = 'us-east-1' ]; then + ${aws_command} s3api create-bucket --bucket "${BUCKET_NAME}" \ + --region "${AWS_REGION}" + else + ${aws_command} s3api create-bucket --bucket "${BUCKET_NAME}" \ + --region "${AWS_REGION}" \ + --create-bucket-configuration LocationConstraint="${AWS_REGION}" + fi + + # Enable versioning + ${aws_command} s3api put-bucket-versioning --bucket "${BUCKET_NAME}" \ + --versioning-configuration Status=Enabled + + # Enable server-side encryption + ${aws_command} s3api put-bucket-encryption --bucket "${BUCKET_NAME}" \ + --server-side-encryption-configuration '{ + "Rules": [ + { + "ApplyServerSideEncryptionByDefault": { + "SSEAlgorithm": "AES256" + } + } + ] + }' + + # Block public access + ${aws_command} s3api put-public-access-block --bucket "${BUCKET_NAME}" \ + --public-access-block-configuration \ + BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true + + print_info "S3 bucket '${BUCKET_NAME}' created successfully" +} + +# Create DynamoDB table +create_dynamodb_table() { + if table_exists; then + print_warning "DynamoDB table '${DYNAMODB_TABLE}' already exists" + return 0 + fi + + print_info "Creating DynamoDB table '${DYNAMODB_TABLE}'..." + + local aws_command + aws_command=$(aws_cmd) + + ${aws_command} dynamodb create-table \ + --table-name "${DYNAMODB_TABLE}" \ + --attribute-definitions AttributeName=LockID,AttributeType=S \ + --key-schema AttributeName=LockID,KeyType=HASH \ + --billing-mode PAY_PER_REQUEST \ + --region "${AWS_REGION}" + + print_info "Waiting for DynamoDB table to be active..." 
+ ${aws_command} dynamodb wait table-exists --table-name "${DYNAMODB_TABLE}" \ + --region "${AWS_REGION}" + + print_info "DynamoDB table '${DYNAMODB_TABLE}' created successfully" +} + +# Delete S3 bucket (including all objects and versions) +delete_s3_bucket() { + if ! bucket_exists; then + print_warning "S3 bucket '${BUCKET_NAME}' does not exist" + return 0 + fi + + print_info "Deleting all objects from S3 bucket '${BUCKET_NAME}'..." + + local aws_command + aws_command=$(aws_cmd) + + # Delete all object versions and delete markers + ${aws_command} s3api list-object-versions --bucket "${BUCKET_NAME}" \ + --query 'Versions[].{Key:Key,VersionId:VersionId}' \ + --output text | while read -r key version_id; do + if [ -n "${key}" ] && [ -n "${version_id}" ]; then + ${aws_command} s3api delete-object --bucket "${BUCKET_NAME}" \ + --key "${key}" --version-id "${version_id}" + fi + done + + # Delete all delete markers + ${aws_command} s3api list-object-versions --bucket "${BUCKET_NAME}" \ + --query 'DeleteMarkers[].{Key:Key,VersionId:VersionId}' \ + --output text | while read -r key version_id; do + if [ -n "${key}" ] && [ -n "${version_id}" ]; then + ${aws_command} s3api delete-object --bucket "${BUCKET_NAME}" \ + --key "${key}" --version-id "${version_id}" + fi + done + + print_info "Deleting S3 bucket '${BUCKET_NAME}'..." + ${aws_command} s3api delete-bucket --bucket "${BUCKET_NAME}" \ + --region "${AWS_REGION}" + + print_info "S3 bucket '${BUCKET_NAME}' deleted successfully" +} + +# Delete DynamoDB table +delete_dynamodb_table() { + if ! table_exists; then + print_warning "DynamoDB table '${DYNAMODB_TABLE}' does not exist" + return 0 + fi + + print_info "Deleting DynamoDB table '${DYNAMODB_TABLE}'..." + + local aws_command + aws_command=$(aws_cmd) + + ${aws_command} dynamodb delete-table --table-name "${DYNAMODB_TABLE}" \ + --region "${AWS_REGION}" + + print_info "Waiting for DynamoDB table to be deleted..." + ${aws_command} dynamodb wait table-not-exists --table-name "${DYNAMODB_TABLE}" \ + --region "${AWS_REGION}" + + print_info "DynamoDB table '${DYNAMODB_TABLE}' deleted successfully" +} + +# Show status of resources +show_status() { + print_info 'Checking Terraform backend resources status...' + + if bucket_exists; then + print_info "S3 bucket '${BUCKET_NAME}' exists" + else + print_warning "S3 bucket '${BUCKET_NAME}' does not exist" + fi + + if table_exists; then + print_info "DynamoDB table '${DYNAMODB_TABLE}' exists" + else + print_warning "DynamoDB table '${DYNAMODB_TABLE}' does not exist" + fi +} + +# Create all resources +create_resources() { + print_info 'Creating Terraform backend resources...' + create_s3_bucket + create_dynamodb_table + print_info 'All resources created successfully!' +} + +# Delete all resources +delete_resources() { + print_warning 'This will delete all Terraform backend resources!' + read -p 'Are you sure? (yes/no): ' confirmation + + if [ "${confirmation}" != 'yes' ]; then + print_info 'Operation cancelled' + return 0 + fi + + print_info 'Deleting Terraform backend resources...' + delete_s3_bucket + delete_dynamodb_table + print_info 'All resources deleted successfully!' 
+} + +# Show usage +show_usage() { + echo 'Usage: ./init-aws-for-terraform.sh [create|delete|status] [--profile PROFILE_NAME]' + echo '' + echo 'Commands:' + echo ' create - Create S3 bucket and DynamoDB table' + echo ' delete - Delete S3 bucket and DynamoDB table' + echo ' status - Show status of resources' + echo '' + echo 'Options:' + echo ' --profile PROFILE_NAME - Use specified AWS profile' + echo '' + echo 'Examples:' + echo ' ./init-aws-for-terraform.sh create' + echo ' ./init-aws-for-terraform.sh create --profile tf-sf-website' + echo ' ./init-aws-for-terraform.sh status --profile tf-sf-website' + echo '' + echo 'Resources managed:' + echo " S3 Bucket: ${BUCKET_NAME}" + echo " DynamoDB Table: ${DYNAMODB_TABLE}" + echo " Region: ${AWS_REGION}" +} + +# Main function +main() { + local command='' + + # Parse arguments directly in main function + while [ $# -gt 0 ]; do + case "$1" in + 'create'|'delete'|'status'|'help'|'-h'|'--help') + command="$1" + shift + ;; + '--profile') + if [ -n "$2" ]; then + PROFILE="$2" + shift 2 + else + print_error 'Profile name is required after --profile' + exit 1 + fi + ;; + *) + print_error "Unknown argument: $1" + show_usage + exit 1 + ;; + esac + done + + check_aws_cli + + case "${command}" in + 'create') + create_resources + ;; + 'delete') + delete_resources + ;; + 'status') + show_status + ;; + 'help'|'-h'|'--help') + show_usage + ;; + '') + show_usage + exit 1 + ;; + *) + print_error "Unknown command: ${command}" + show_usage + exit 1 + ;; + esac +} + +# Run main function with all arguments +main "$@" \ No newline at end of file diff --git a/terraform/terraform.tfvars.example b/terraform/terraform.tfvars.example index 7f6abaae..c3b1ec8e 100644 --- a/terraform/terraform.tfvars.example +++ b/terraform/terraform.tfvars.example @@ -18,7 +18,7 @@ documentdb_master_username = "apostrophe_admin" documentdb_master_password = "your-secure-password-here" # Min 8 characters # Application Secrets (SENSITIVE - Store securely) -session_secret = "your-super-secure-32-character-session-secret-here" +session_secret = "${SESSION_SECRET_KEY}" # Optional: Google Cloud Storage Service Account (if using GCS) gcs_service_account_key = "" # Leave empty if not using GCS From e5e4b6070d93e313c9faf3560eb092b82b9f8989 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Sat, 24 May 2025 16:13:58 +0200 Subject: [PATCH 04/46] Enhance Terraform backend initialization with improved error handling and comprehensive documentation - Refactor aws_cmd() to aws_exec() with improved error handling and output suppression - Add non-interactive mode support for automation and CI/CD pipelines - Add comprehensive documentation to README.md with usage examples and feature overview - Create IAM policy file with specific permissions for Terraform backend resources - Improve function return codes and error handling throughout the script - Redirect command output to reduce noise and improve user experience --- terraform/README.md | 58 ++++++++++++++++- terraform/init-aws-for-terraform.sh | 84 +++++++++++-------------- terraform/terraform-backend-policy.json | 63 +++++++++++++++++++ 3 files changed, 157 insertions(+), 48 deletions(-) create mode 100644 terraform/terraform-backend-policy.json diff --git a/terraform/README.md b/terraform/README.md index f02f029b..4b3e123c 100644 --- a/terraform/README.md +++ b/terraform/README.md @@ -291,4 +291,60 @@ redis_node_type = "cache.r6g.large" --- -**Next Steps**: After deploying infrastructure, configure your CI/CD pipeline to build and deploy the 
ApostropheCMS application to the created ECS cluster. \ No newline at end of file +**Next Steps**: After deploying infrastructure, configure your CI/CD pipeline to build and deploy the ApostropheCMS application to the created ECS cluster. + +# Terraform Infrastructure Setup + +This directory contains the Terraform configuration and initialization scripts for the SF Website infrastructure. + +## Quick Start + +### Initialize AWS Resources + +The `init-aws-for-terraform.sh` script manages the Terraform backend resources (S3 bucket and DynamoDB table). + +```bash +# Create resources +./init-aws-for-terraform.sh create --profile tf-sf-website + +# Check status +./init-aws-for-terraform.sh status --profile tf-sf-website + +# Delete resources (interactive) +./init-aws-for-terraform.sh delete --profile tf-sf-website + +# Delete resources (non-interactive for automation) +TERRAFORM_NON_INTERACTIVE=true ./init-aws-for-terraform.sh delete --profile tf-sf-website +``` + +### Non-Interactive Mode + +The script supports non-interactive mode for automation and CI/CD pipelines: + +- Set `TERRAFORM_NON_INTERACTIVE=true` environment variable +- The script will automatically skip confirmation prompts +- Useful for automated deployments and scripts + +### Error Handling + +The script includes robust error handling: + +- Proper AWS CLI profile support +- Graceful handling of existing resources +- Suppression of spurious AWS CLI configuration errors +- Clear status reporting and logging + +## Script Features + +- ✅ **Non-interactive support** - No user prompts when `TERRAFORM_NON_INTERACTIVE=true` +- ✅ **AWS Profile support** - Works with named AWS profiles +- ✅ **Error suppression** - Filters out spurious AWS CLI configuration errors +- ✅ **Resource validation** - Checks if resources exist before creating/deleting +- ✅ **Comprehensive logging** - Clear status messages and error reporting +- ✅ **Safe deletion** - Confirms before deleting resources (unless non-interactive) + +## Resources Managed + +- **S3 Bucket**: `sf-website-infrastructure` (with versioning and encryption) +- **DynamoDB Table**: `sf-website-terraform-locks` (for state locking) +- **Region**: `us-east-1` \ No newline at end of file diff --git a/terraform/init-aws-for-terraform.sh b/terraform/init-aws-for-terraform.sh index 39b7bba6..631544d4 100755 --- a/terraform/init-aws-for-terraform.sh +++ b/terraform/init-aws-for-terraform.sh @@ -32,12 +32,13 @@ print_error() { echo -e "${RED}[ERROR]${NC} $1" } -# Get AWS CLI command with profile if specified -aws_cmd() { +# Execute AWS command with proper profile handling and error suppression +aws_exec() { if [ -n "${PROFILE}" ]; then - echo "aws --profile ${PROFILE}" + # Suppress spurious head/cat errors that occur with some AWS CLI configurations + aws --profile "${PROFILE}" "$@" 2> >(grep -v "head:" | grep -v "cat:" >&2) else - echo "aws" + aws "$@" 2> >(grep -v "head:" | grep -v "cat:" >&2) fi } @@ -48,11 +49,8 @@ check_aws_cli() { exit 1 fi - local aws_command - aws_command=$(aws_cmd) - # Check credentials by testing the command and capturing exit code - if ${aws_command} sts get-caller-identity >/dev/null 2>&1; then + if aws_exec sts get-caller-identity >/dev/null 2>&1; then if [ -n "${PROFILE}" ]; then print_info "Using AWS profile: ${PROFILE}" fi @@ -68,18 +66,16 @@ check_aws_cli() { # Check if S3 bucket exists bucket_exists() { - local aws_command - aws_command=$(aws_cmd) - ${aws_command} s3api head-bucket --bucket "${BUCKET_NAME}" \ - --region "${AWS_REGION}" 2>/dev/null + aws_exec s3api 
head-bucket --bucket "${BUCKET_NAME}" \ + --region "${AWS_REGION}" >/dev/null 2>&1 + return $? } # Check if DynamoDB table exists table_exists() { - local aws_command - aws_command=$(aws_cmd) - ${aws_command} dynamodb describe-table --table-name "${DYNAMODB_TABLE}" \ - --region "${AWS_REGION}" &> /dev/null + aws_exec dynamodb describe-table --table-name "${DYNAMODB_TABLE}" \ + --region "${AWS_REGION}" >/dev/null 2>&1 + return $? } # Create S3 bucket @@ -91,25 +87,22 @@ create_s3_bucket() { print_info "Creating S3 bucket '${BUCKET_NAME}'..." - local aws_command - aws_command=$(aws_cmd) - # Create bucket if [ "${AWS_REGION}" = 'us-east-1' ]; then - ${aws_command} s3api create-bucket --bucket "${BUCKET_NAME}" \ + aws_exec s3api create-bucket --bucket "${BUCKET_NAME}" \ --region "${AWS_REGION}" else - ${aws_command} s3api create-bucket --bucket "${BUCKET_NAME}" \ + aws_exec s3api create-bucket --bucket "${BUCKET_NAME}" \ --region "${AWS_REGION}" \ --create-bucket-configuration LocationConstraint="${AWS_REGION}" fi # Enable versioning - ${aws_command} s3api put-bucket-versioning --bucket "${BUCKET_NAME}" \ + aws_exec s3api put-bucket-versioning --bucket "${BUCKET_NAME}" \ --versioning-configuration Status=Enabled # Enable server-side encryption - ${aws_command} s3api put-bucket-encryption --bucket "${BUCKET_NAME}" \ + aws_exec s3api put-bucket-encryption --bucket "${BUCKET_NAME}" \ --server-side-encryption-configuration '{ "Rules": [ { @@ -121,7 +114,7 @@ create_s3_bucket() { }' # Block public access - ${aws_command} s3api put-public-access-block --bucket "${BUCKET_NAME}" \ + aws_exec s3api put-public-access-block --bucket "${BUCKET_NAME}" \ --public-access-block-configuration \ BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true @@ -137,18 +130,15 @@ create_dynamodb_table() { print_info "Creating DynamoDB table '${DYNAMODB_TABLE}'..." - local aws_command - aws_command=$(aws_cmd) - - ${aws_command} dynamodb create-table \ + aws_exec dynamodb create-table \ --table-name "${DYNAMODB_TABLE}" \ --attribute-definitions AttributeName=LockID,AttributeType=S \ --key-schema AttributeName=LockID,KeyType=HASH \ --billing-mode PAY_PER_REQUEST \ - --region "${AWS_REGION}" + --region "${AWS_REGION}" >> /dev/null 2>&1 print_info "Waiting for DynamoDB table to be active..." - ${aws_command} dynamodb wait table-exists --table-name "${DYNAMODB_TABLE}" \ + aws_exec dynamodb wait table-exists --table-name "${DYNAMODB_TABLE}" \ --region "${AWS_REGION}" print_info "DynamoDB table '${DYNAMODB_TABLE}' created successfully" @@ -163,31 +153,28 @@ delete_s3_bucket() { print_info "Deleting all objects from S3 bucket '${BUCKET_NAME}'..." 
- local aws_command - aws_command=$(aws_cmd) - # Delete all object versions and delete markers - ${aws_command} s3api list-object-versions --bucket "${BUCKET_NAME}" \ + aws_exec s3api list-object-versions --bucket "${BUCKET_NAME}" \ --query 'Versions[].{Key:Key,VersionId:VersionId}' \ --output text | while read -r key version_id; do if [ -n "${key}" ] && [ -n "${version_id}" ]; then - ${aws_command} s3api delete-object --bucket "${BUCKET_NAME}" \ + aws_exec s3api delete-object --bucket "${BUCKET_NAME}" \ --key "${key}" --version-id "${version_id}" fi done # Delete all delete markers - ${aws_command} s3api list-object-versions --bucket "${BUCKET_NAME}" \ + aws_exec s3api list-object-versions --bucket "${BUCKET_NAME}" \ --query 'DeleteMarkers[].{Key:Key,VersionId:VersionId}' \ --output text | while read -r key version_id; do if [ -n "${key}" ] && [ -n "${version_id}" ]; then - ${aws_command} s3api delete-object --bucket "${BUCKET_NAME}" \ + aws_exec s3api delete-object --bucket "${BUCKET_NAME}" \ --key "${key}" --version-id "${version_id}" fi done print_info "Deleting S3 bucket '${BUCKET_NAME}'..." - ${aws_command} s3api delete-bucket --bucket "${BUCKET_NAME}" \ + aws_exec s3api delete-bucket --bucket "${BUCKET_NAME}" \ --region "${AWS_REGION}" print_info "S3 bucket '${BUCKET_NAME}' deleted successfully" @@ -202,14 +189,11 @@ delete_dynamodb_table() { print_info "Deleting DynamoDB table '${DYNAMODB_TABLE}'..." - local aws_command - aws_command=$(aws_cmd) - - ${aws_command} dynamodb delete-table --table-name "${DYNAMODB_TABLE}" \ - --region "${AWS_REGION}" + aws_exec dynamodb delete-table --table-name "${DYNAMODB_TABLE}" \ + --region "${AWS_REGION}" >> /dev/null 2>&1 print_info "Waiting for DynamoDB table to be deleted..." - ${aws_command} dynamodb wait table-not-exists --table-name "${DYNAMODB_TABLE}" \ + aws_exec dynamodb wait table-not-exists --table-name "${DYNAMODB_TABLE}" \ --region "${AWS_REGION}" print_info "DynamoDB table '${DYNAMODB_TABLE}' deleted successfully" @@ -240,10 +224,16 @@ create_resources() { print_info 'All resources created successfully!' } -# Delete all resources +# Delete all resources with automatic confirmation for non-interactive use delete_resources() { - print_warning 'This will delete all Terraform backend resources!' - read -p 'Are you sure? (yes/no): ' confirmation + # Check if running in non-interactive mode (for automation) + if [ "${TERRAFORM_NON_INTERACTIVE:-false}" = "true" ] || [ ! -t 0 ]; then + print_warning 'Running in non-interactive mode, skipping confirmation...' + confirmation="yes" + else + print_warning 'This will delete all Terraform backend resources!' + read -p 'Are you sure? 
(yes/no): ' confirmation + fi if [ "${confirmation}" != 'yes' ]; then print_info 'Operation cancelled' diff --git a/terraform/terraform-backend-policy.json b/terraform/terraform-backend-policy.json new file mode 100644 index 00000000..a4c8afc2 --- /dev/null +++ b/terraform/terraform-backend-policy.json @@ -0,0 +1,63 @@ +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "TerraformBackendS3Permissions", + "Effect": "Allow", + "Action": [ + "s3:CreateBucket", + "s3:DeleteBucket", + "s3:ListBucket", + "s3:ListBucketVersions", + "s3:GetBucketLocation", + "s3:GetBucketVersioning", + "s3:PutBucketVersioning", + "s3:PutEncryptionConfiguration", + "s3:GetEncryptionConfiguration", + "s3:PutBucketPublicAccessBlock", + "s3:GetBucketPublicAccessBlock", + "s3:PutBucketPolicy", + "s3:GetBucketPolicy", + "s3:DeleteBucketPolicy" + ], + "Resource": [ + "arn:aws:s3:::sf-website-infrastructure", + "arn:aws:s3:::sf-website-infrastructure/*" + ] + }, + { + "Sid": "TerraformBackendS3ObjectPermissions", + "Effect": "Allow", + "Action": [ + "s3:GetObject", + "s3:PutObject", + "s3:DeleteObject", + "s3:GetObjectVersion", + "s3:DeleteObjectVersion" + ], + "Resource": "arn:aws:s3:::sf-website-infrastructure/*" + }, + { + "Sid": "TerraformBackendDynamoDBPermissions", + "Effect": "Allow", + "Action": [ + "dynamodb:CreateTable", + "dynamodb:DeleteTable", + "dynamodb:DescribeTable", + "dynamodb:GetItem", + "dynamodb:PutItem", + "dynamodb:DeleteItem", + "dynamodb:UpdateItem" + ], + "Resource": "arn:aws:dynamodb:us-east-1:695912022152:table/sf-website-terraform-locks" + }, + { + "Sid": "STSPermissions", + "Effect": "Allow", + "Action": [ + "sts:GetCallerIdentity" + ], + "Resource": "*" + } + ] +} \ No newline at end of file From 45d75c7fa37d7097b0f0b1b5a698eab699202d7b Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Sat, 24 May 2025 19:24:22 +0200 Subject: [PATCH 05/46] Refactor Terraform configuration and remove DynamoDB state locking - Remove DynamoDB table references from all backend configuration files - Simplify terraform initialization script by removing DynamoDB functionality - Add comprehensive terraform modules for AWS services (ALB, CloudFront, CloudWatch, DocumentDB, ECR, ECS, IAM, Parameter Store, Redis) - Improve initialization script with environment variable configuration support - Update gitignore to exclude terraform.tfvars and .terraform directory - Add terraform lock file for dependency management --- .gitignore | 2 + terraform/.terraform.lock.hcl | 45 ++++++ terraform/backend-dev.hcl | 1 - terraform/backend-prod.hcl | 1 - terraform/backend-staging.hcl | 1 - terraform/init-aws-for-terraform.sh | 92 +++-------- terraform/modules/alb/main.tf | 69 ++++++++ terraform/modules/alb/outputs.tf | 19 +++ terraform/modules/alb/variables.tf | 40 +++++ terraform/modules/cloudfront/main.tf | 51 ++++++ terraform/modules/cloudfront/outputs.tf | 9 ++ terraform/modules/cloudfront/variables.tf | 35 ++++ terraform/modules/cloudwatch/main.tf | 36 +++++ terraform/modules/cloudwatch/outputs.tf | 9 ++ terraform/modules/cloudwatch/variables.tf | 46 ++++++ terraform/modules/documentdb/main.tf | 30 ++++ terraform/modules/documentdb/outputs.tf | 14 ++ terraform/modules/documentdb/variables.tf | 41 +++++ terraform/modules/ecr/main.tf | 47 ++++++ terraform/modules/ecr/outputs.tf | 14 ++ terraform/modules/ecr/variables.tf | 15 ++ terraform/modules/ecs/main.tf | 96 +++++++++++ terraform/modules/ecs/outputs.tf | 14 ++ terraform/modules/ecs/variables.tf | 62 ++++++++ terraform/modules/iam/main.tf | 150 ++++++++++++++++++ 
terraform/modules/iam/outputs.tf | 14 ++ terraform/modules/iam/variables.tf | 30 ++++ terraform/modules/parameter_store/main.tf | 46 ++++++ terraform/modules/parameter_store/outputs.tf | 24 +++ .../modules/parameter_store/variables.tf | 45 ++++++ terraform/modules/redis/main.tf | 26 +++ terraform/modules/redis/outputs.tf | 9 ++ terraform/modules/redis/variables.tf | 36 +++++ 33 files changed, 1094 insertions(+), 75 deletions(-) create mode 100644 terraform/.terraform.lock.hcl create mode 100644 terraform/modules/alb/main.tf create mode 100644 terraform/modules/alb/outputs.tf create mode 100644 terraform/modules/alb/variables.tf create mode 100644 terraform/modules/cloudfront/main.tf create mode 100644 terraform/modules/cloudfront/outputs.tf create mode 100644 terraform/modules/cloudfront/variables.tf create mode 100644 terraform/modules/cloudwatch/main.tf create mode 100644 terraform/modules/cloudwatch/outputs.tf create mode 100644 terraform/modules/cloudwatch/variables.tf create mode 100644 terraform/modules/documentdb/main.tf create mode 100644 terraform/modules/documentdb/outputs.tf create mode 100644 terraform/modules/documentdb/variables.tf create mode 100644 terraform/modules/ecr/main.tf create mode 100644 terraform/modules/ecr/outputs.tf create mode 100644 terraform/modules/ecr/variables.tf create mode 100644 terraform/modules/ecs/main.tf create mode 100644 terraform/modules/ecs/outputs.tf create mode 100644 terraform/modules/ecs/variables.tf create mode 100644 terraform/modules/iam/main.tf create mode 100644 terraform/modules/iam/outputs.tf create mode 100644 terraform/modules/iam/variables.tf create mode 100644 terraform/modules/parameter_store/main.tf create mode 100644 terraform/modules/parameter_store/outputs.tf create mode 100644 terraform/modules/parameter_store/variables.tf create mode 100644 terraform/modules/redis/main.tf create mode 100644 terraform/modules/redis/outputs.tf create mode 100644 terraform/modules/redis/variables.tf diff --git a/.gitignore b/.gitignore index d8aacc0c..87270015 100644 --- a/.gitignore +++ b/.gitignore @@ -144,3 +144,5 @@ website/public/apos-frontend dump.archive .prettierrc aposUsersSafe.json +terraform.tfvars +terraform/.terraform diff --git a/terraform/.terraform.lock.hcl b/terraform/.terraform.lock.hcl new file mode 100644 index 00000000..60495490 --- /dev/null +++ b/terraform/.terraform.lock.hcl @@ -0,0 +1,45 @@ +# This file is maintained automatically by "terraform init". +# Manual edits may be lost in future updates. 
+ +provider "registry.terraform.io/hashicorp/aws" { + version = "5.98.0" + constraints = "~> 5.0" + hashes = [ + "h1:neMFK/kP1KT6cTGID+Tkkt8L7PsN9XqwrPDGXVw3WVY=", + "zh:23377bd90204b6203b904f48f53edcae3294eb072d8fc18a4531c0cde531a3a1", + "zh:2e55a6ea14cc43b08cf82d43063e96c5c2f58ee953c2628523d0ee918fe3b609", + "zh:4885a817c16fdaaeddc5031edc9594c1f300db0e5b23be7cd76a473e7dcc7b4f", + "zh:6ca7177ad4e5c9d93dee4be1ac0792b37107df04657fddfe0c976f36abdd18b5", + "zh:78bf8eb0a67bae5dede09666676c7a38c9fb8d1b80a90ba06cf36ae268257d6f", + "zh:874b5a99457a3f88e2915df8773120846b63d820868a8f43082193f3dc84adcb", + "zh:95e1e4cf587cde4537ac9dfee9e94270652c812ab31fce3a431778c053abf354", + "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", + "zh:a75145b58b241d64570803e6565c72467cd664633df32678755b51871f553e50", + "zh:aa31b13d0b0e8432940d6892a48b6268721fa54a02ed62ee42745186ee32f58d", + "zh:ae4565770f76672ce8e96528cbb66afdade1f91383123c079c7fdeafcb3d2877", + "zh:b99f042c45bf6aa69dd73f3f6d9cbe0b495b30442c526e0b3810089c059ba724", + "zh:bbb38e86d926ef101cefafe8fe090c57f2b1356eac9fc5ec81af310c50375897", + "zh:d03c89988ba4a0bd3cfc8659f951183ae7027aa8018a7ca1e53a300944af59cb", + "zh:d179ef28843fe663fc63169291a211898199009f0d3f63f0a6f65349e77727ec", + ] +} + +provider "registry.terraform.io/hashicorp/random" { + version = "3.7.2" + constraints = "~> 3.4" + hashes = [ + "h1:KG4NuIBl1mRWU0KD/BGfCi1YN/j3F7H4YgeeM7iSdNs=", + "zh:14829603a32e4bc4d05062f059e545a91e27ff033756b48afbae6b3c835f508f", + "zh:1527fb07d9fea400d70e9e6eb4a2b918d5060d604749b6f1c361518e7da546dc", + "zh:1e86bcd7ebec85ba336b423ba1db046aeaa3c0e5f921039b3f1a6fc2f978feab", + "zh:24536dec8bde66753f4b4030b8f3ef43c196d69cccbea1c382d01b222478c7a3", + "zh:29f1786486759fad9b0ce4fdfbbfece9343ad47cd50119045075e05afe49d212", + "zh:4d701e978c2dd8604ba1ce962b047607701e65c078cb22e97171513e9e57491f", + "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", + "zh:7b8434212eef0f8c83f5a90c6d76feaf850f6502b61b53c329e85b3b281cba34", + "zh:ac8a23c212258b7976e1621275e3af7099e7e4a3d4478cf8d5d2a27f3bc3e967", + "zh:b516ca74431f3df4c6cf90ddcdb4042c626e026317a33c53f0b445a3d93b720d", + "zh:dc76e4326aec2490c1600d6871a95e78f9050f9ce427c71707ea412a2f2f1a62", + "zh:eac7b63e86c749c7d48f527671c7aee5b4e26c10be6ad7232d6860167f99dbb0", + ] +} diff --git a/terraform/backend-dev.hcl b/terraform/backend-dev.hcl index 7a6cb983..3cc7b14c 100644 --- a/terraform/backend-dev.hcl +++ b/terraform/backend-dev.hcl @@ -2,5 +2,4 @@ bucket = "sf-website-infrastructure" key = "terraform/terraform-dev.tfstate" region = "us-east-1" -dynamodb_table = "sf-website-terraform-locks" encrypt = true \ No newline at end of file diff --git a/terraform/backend-prod.hcl b/terraform/backend-prod.hcl index e2a6b7dc..07d7aee6 100644 --- a/terraform/backend-prod.hcl +++ b/terraform/backend-prod.hcl @@ -2,5 +2,4 @@ bucket = "sf-website-infrastructure" key = "terraform/terraform-prod.tfstate" region = "us-east-1" -dynamodb_table = "sf-website-terraform-locks" encrypt = true \ No newline at end of file diff --git a/terraform/backend-staging.hcl b/terraform/backend-staging.hcl index dba7befa..70025e3c 100644 --- a/terraform/backend-staging.hcl +++ b/terraform/backend-staging.hcl @@ -2,5 +2,4 @@ bucket = "sf-website-infrastructure" key = "terraform/terraform-staging.tfstate" region = "us-east-1" -dynamodb_table = "sf-website-terraform-locks" encrypt = true \ No newline at end of file diff --git a/terraform/init-aws-for-terraform.sh b/terraform/init-aws-for-terraform.sh index 631544d4..a50064e1 
100755 --- a/terraform/init-aws-for-terraform.sh +++ b/terraform/init-aws-for-terraform.sh @@ -1,17 +1,16 @@ #!/bin/bash -# Script to manage Terraform backend resources (S3 bucket and DynamoDB table) +# Script to manage Terraform backend resources (S3 bucket) # Usage: ./init-aws-for-terraform.sh [create|delete|status] [--profile PROFILE_NAME] set -e # Configuration from backend-dev.hcl -readonly BUCKET_NAME='sf-website-infrastructure' -readonly DYNAMODB_TABLE='sf-website-terraform-locks' -readonly AWS_REGION='us-east-1' +BUCKET_NAME="${BUCKET_NAME:-sf-website-infrastructure}" +AWS_REGION="${AWS_REGION:-us-east-1}" # Global variables -PROFILE='' +AWS_PROFILE="${AWS_PROFILE:-}" # Colors for output readonly RED='\033[0;31m' @@ -34,9 +33,9 @@ print_error() { # Execute AWS command with proper profile handling and error suppression aws_exec() { - if [ -n "${PROFILE}" ]; then + if [ -n "${AWS_PROFILE}" ]; then # Suppress spurious head/cat errors that occur with some AWS CLI configurations - aws --profile "${PROFILE}" "$@" 2> >(grep -v "head:" | grep -v "cat:" >&2) + aws --profile "${AWS_PROFILE}" "$@" 2> >(grep -v "head:" | grep -v "cat:" >&2) else aws "$@" 2> >(grep -v "head:" | grep -v "cat:" >&2) fi @@ -51,12 +50,12 @@ check_aws_cli() { # Check credentials by testing the command and capturing exit code if aws_exec sts get-caller-identity >/dev/null 2>&1; then - if [ -n "${PROFILE}" ]; then - print_info "Using AWS profile: ${PROFILE}" + if [ -n "${AWS_PROFILE}" ]; then + print_info "Using AWS profile: ${AWS_PROFILE}" fi else - if [ -n "${PROFILE}" ]; then - print_error "AWS CLI profile '${PROFILE}' is not configured or credentials are invalid" + if [ -n "${AWS_PROFILE}" ]; then + print_error "AWS CLI profile '${AWS_PROFILE}' is not configured or credentials are invalid" else print_error 'AWS CLI is not configured or credentials are invalid' fi @@ -71,13 +70,6 @@ bucket_exists() { return $? } -# Check if DynamoDB table exists -table_exists() { - aws_exec dynamodb describe-table --table-name "${DYNAMODB_TABLE}" \ - --region "${AWS_REGION}" >/dev/null 2>&1 - return $? -} - # Create S3 bucket create_s3_bucket() { if bucket_exists; then @@ -121,29 +113,6 @@ create_s3_bucket() { print_info "S3 bucket '${BUCKET_NAME}' created successfully" } -# Create DynamoDB table -create_dynamodb_table() { - if table_exists; then - print_warning "DynamoDB table '${DYNAMODB_TABLE}' already exists" - return 0 - fi - - print_info "Creating DynamoDB table '${DYNAMODB_TABLE}'..." - - aws_exec dynamodb create-table \ - --table-name "${DYNAMODB_TABLE}" \ - --attribute-definitions AttributeName=LockID,AttributeType=S \ - --key-schema AttributeName=LockID,KeyType=HASH \ - --billing-mode PAY_PER_REQUEST \ - --region "${AWS_REGION}" >> /dev/null 2>&1 - - print_info "Waiting for DynamoDB table to be active..." - aws_exec dynamodb wait table-exists --table-name "${DYNAMODB_TABLE}" \ - --region "${AWS_REGION}" - - print_info "DynamoDB table '${DYNAMODB_TABLE}' created successfully" -} - # Delete S3 bucket (including all objects and versions) delete_s3_bucket() { if ! bucket_exists; then @@ -180,25 +149,6 @@ delete_s3_bucket() { print_info "S3 bucket '${BUCKET_NAME}' deleted successfully" } -# Delete DynamoDB table -delete_dynamodb_table() { - if ! table_exists; then - print_warning "DynamoDB table '${DYNAMODB_TABLE}' does not exist" - return 0 - fi - - print_info "Deleting DynamoDB table '${DYNAMODB_TABLE}'..." 
- - aws_exec dynamodb delete-table --table-name "${DYNAMODB_TABLE}" \ - --region "${AWS_REGION}" >> /dev/null 2>&1 - - print_info "Waiting for DynamoDB table to be deleted..." - aws_exec dynamodb wait table-not-exists --table-name "${DYNAMODB_TABLE}" \ - --region "${AWS_REGION}" - - print_info "DynamoDB table '${DYNAMODB_TABLE}' deleted successfully" -} - # Show status of resources show_status() { print_info 'Checking Terraform backend resources status...' @@ -208,19 +158,12 @@ show_status() { else print_warning "S3 bucket '${BUCKET_NAME}' does not exist" fi - - if table_exists; then - print_info "DynamoDB table '${DYNAMODB_TABLE}' exists" - else - print_warning "DynamoDB table '${DYNAMODB_TABLE}' does not exist" - fi } # Create all resources create_resources() { print_info 'Creating Terraform backend resources...' create_s3_bucket - create_dynamodb_table print_info 'All resources created successfully!' } @@ -242,7 +185,6 @@ delete_resources() { print_info 'Deleting Terraform backend resources...' delete_s3_bucket - delete_dynamodb_table print_info 'All resources deleted successfully!' } @@ -251,21 +193,27 @@ show_usage() { echo 'Usage: ./init-aws-for-terraform.sh [create|delete|status] [--profile PROFILE_NAME]' echo '' echo 'Commands:' - echo ' create - Create S3 bucket and DynamoDB table' - echo ' delete - Delete S3 bucket and DynamoDB table' + echo ' create - Create S3 bucket' + echo ' delete - Delete S3 bucket' echo ' status - Show status of resources' echo '' echo 'Options:' echo ' --profile PROFILE_NAME - Use specified AWS profile' echo '' + echo 'Environment Variables:' + echo ' AWS_PROFILE - AWS profile to use (can be overridden by --profile)' + echo ' BUCKET_NAME - S3 bucket name (default: sf-website-infrastructure)' + echo ' AWS_REGION - AWS region (default: us-east-1)' + echo '' echo 'Examples:' echo ' ./init-aws-for-terraform.sh create' echo ' ./init-aws-for-terraform.sh create --profile tf-sf-website' echo ' ./init-aws-for-terraform.sh status --profile tf-sf-website' + echo ' AWS_PROFILE=tf-sf-website ./init-aws-for-terraform.sh create' + echo ' BUCKET_NAME=my-terraform-state AWS_REGION=us-west-2 ./init-aws-for-terraform.sh create' echo '' echo 'Resources managed:' echo " S3 Bucket: ${BUCKET_NAME}" - echo " DynamoDB Table: ${DYNAMODB_TABLE}" echo " Region: ${AWS_REGION}" } @@ -282,7 +230,7 @@ main() { ;; '--profile') if [ -n "$2" ]; then - PROFILE="$2" + AWS_PROFILE="$2" shift 2 else print_error 'Profile name is required after --profile' diff --git a/terraform/modules/alb/main.tf b/terraform/modules/alb/main.tf new file mode 100644 index 00000000..f20ee80a --- /dev/null +++ b/terraform/modules/alb/main.tf @@ -0,0 +1,69 @@ +# Application Load Balancer +resource "aws_lb" "main" { + name = "${var.name_prefix}-${var.environment}-alb" + internal = false + load_balancer_type = "application" + security_groups = var.security_group_ids + subnets = var.subnet_ids + + enable_deletion_protection = false + + tags = var.tags +} + +# Target Group +resource "aws_lb_target_group" "main" { + name = "${var.name_prefix}-${var.environment}-tg" + port = 3000 + protocol = "HTTP" + vpc_id = var.vpc_id + + health_check { + enabled = true + healthy_threshold = 2 + interval = 30 + matcher = "200" + path = "/health" + port = "traffic-port" + protocol = "HTTP" + timeout = 5 + unhealthy_threshold = 2 + } + + tags = var.tags +} + +# HTTPS Listener +resource "aws_lb_listener" "https" { + load_balancer_arn = aws_lb.main.arn + port = "443" + protocol = "HTTPS" + ssl_policy = "ELBSecurityPolicy-TLS-1-2-2017-01" 
+ certificate_arn = var.certificate_arn + + default_action { + type = "forward" + target_group_arn = aws_lb_target_group.main.arn + } + + tags = var.tags +} + +# HTTP Listener (redirect to HTTPS) +resource "aws_lb_listener" "http" { + load_balancer_arn = aws_lb.main.arn + port = "80" + protocol = "HTTP" + + default_action { + type = "redirect" + + redirect { + port = "443" + protocol = "HTTPS" + status_code = "HTTP_301" + } + } + + tags = var.tags +} \ No newline at end of file diff --git a/terraform/modules/alb/outputs.tf b/terraform/modules/alb/outputs.tf new file mode 100644 index 00000000..fffb0216 --- /dev/null +++ b/terraform/modules/alb/outputs.tf @@ -0,0 +1,19 @@ +output "target_group_arn" { + description = "ARN of the target group" + value = aws_lb_target_group.main.arn +} + +output "alb_dns_name" { + description = "DNS name of the ALB" + value = aws_lb.main.dns_name +} + +output "alb_zone_id" { + description = "Zone ID of the ALB" + value = aws_lb.main.zone_id +} + +output "arn_suffix" { + description = "ARN suffix for CloudWatch monitoring" + value = aws_lb.main.arn_suffix +} \ No newline at end of file diff --git a/terraform/modules/alb/variables.tf b/terraform/modules/alb/variables.tf new file mode 100644 index 00000000..7ee742b8 --- /dev/null +++ b/terraform/modules/alb/variables.tf @@ -0,0 +1,40 @@ +variable "name_prefix" { + description = "Prefix for resource names" + type = string +} + +variable "environment" { + description = "Environment name" + type = string +} + +variable "vpc_id" { + description = "VPC ID where the ALB will be created" + type = string +} + +variable "subnet_ids" { + description = "List of subnet IDs for the ALB" + type = list(string) +} + +variable "security_group_ids" { + description = "List of security group IDs for the ALB" + type = list(string) +} + +variable "domain_name" { + description = "Domain name for the ALB" + type = string +} + +variable "certificate_arn" { + description = "ACM certificate ARN for HTTPS" + type = string +} + +variable "tags" { + description = "Tags to apply to resources" + type = map(string) + default = {} +} \ No newline at end of file diff --git a/terraform/modules/cloudfront/main.tf b/terraform/modules/cloudfront/main.tf new file mode 100644 index 00000000..3434789a --- /dev/null +++ b/terraform/modules/cloudfront/main.tf @@ -0,0 +1,51 @@ +# CloudFront Distribution +resource "aws_cloudfront_distribution" "main" { + origin { + domain_name = var.s3_bucket_domain_name + origin_id = "S3-${var.s3_bucket_id}" + + s3_origin_config { + origin_access_identity = aws_cloudfront_origin_access_identity.main.cloudfront_access_identity_path + } + } + + enabled = true + comment = "CloudFront distribution for ${var.name_prefix} ${var.environment}" + + aliases = [var.media_domain_name] + + default_cache_behavior { + allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"] + cached_methods = ["GET", "HEAD"] + target_origin_id = "S3-${var.s3_bucket_id}" + compress = true + viewer_protocol_policy = "redirect-to-https" + + forwarded_values { + query_string = false + cookies { + forward = "none" + } + } + } + + price_class = "PriceClass_100" + + restrictions { + geo_restriction { + restriction_type = "none" + } + } + + viewer_certificate { + acm_certificate_arn = var.certificate_arn + ssl_support_method = "sni-only" + } + + tags = var.tags +} + +# Origin Access Identity +resource "aws_cloudfront_origin_access_identity" "main" { + comment = "OAI for ${var.name_prefix} ${var.environment}" +} \ No newline at end of file 
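
For orientation (not part of the patch itself): the modules introduced in this commit are meant to be consumed from the root configuration through `module` blocks. A minimal sketch of how the ALB and CloudFront modules defined above might be wired is shown below. The module source paths and argument names come from the variables each module declares in this patch; the `module.networking`, `module.security_groups`, and `module.s3` output names are assumptions used only to make the sketch self-contained, not values taken from the patch.

```hcl
# Hypothetical root-module wiring for the ALB and CloudFront modules above.
# Right-hand-side references to networking, security-group, and S3 modules
# are illustrative placeholders, not resources defined in this patch.
module "alb" {
  source = "./modules/alb"

  name_prefix        = "sf-website"
  environment        = var.environment
  vpc_id             = module.networking.vpc_id            # assumed output name
  subnet_ids         = module.networking.public_subnet_ids # assumed output name
  security_group_ids = [module.security_groups.alb_sg_id]  # assumed output name
  domain_name        = var.domain_name
  certificate_arn    = var.certificate_arn
  tags               = var.default_tags
}

module "cloudfront" {
  source = "./modules/cloudfront"

  name_prefix           = "sf-website"
  environment           = var.environment
  s3_bucket_id          = module.s3.attachments_bucket_id          # assumed output name
  s3_bucket_domain_name = module.s3.attachments_bucket_domain_name # assumed output name
  media_domain_name     = var.media_domain_name
  certificate_arn       = var.certificate_arn
  tags                  = var.default_tags
}
```

Because every module interpolates `${var.name_prefix}-${var.environment}` into its resource names, the root configuration only has to pass these two inputs to keep naming consistent across environments.
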
diff --git a/terraform/modules/cloudfront/outputs.tf b/terraform/modules/cloudfront/outputs.tf new file mode 100644 index 00000000..dfea10df --- /dev/null +++ b/terraform/modules/cloudfront/outputs.tf @@ -0,0 +1,9 @@ +output "distribution_id" { + description = "CloudFront distribution ID" + value = aws_cloudfront_distribution.main.id +} + +output "distribution_domain_name" { + description = "CloudFront distribution domain name" + value = aws_cloudfront_distribution.main.domain_name +} \ No newline at end of file diff --git a/terraform/modules/cloudfront/variables.tf b/terraform/modules/cloudfront/variables.tf new file mode 100644 index 00000000..2201a58b --- /dev/null +++ b/terraform/modules/cloudfront/variables.tf @@ -0,0 +1,35 @@ +variable "name_prefix" { + description = "Prefix for resource names" + type = string +} + +variable "environment" { + description = "Environment name" + type = string +} + +variable "s3_bucket_domain_name" { + description = "S3 bucket domain name" + type = string +} + +variable "s3_bucket_id" { + description = "S3 bucket ID" + type = string +} + +variable "media_domain_name" { + description = "Media domain name" + type = string +} + +variable "certificate_arn" { + description = "ACM certificate ARN" + type = string +} + +variable "tags" { + description = "Tags to apply to resources" + type = map(string) + default = {} +} \ No newline at end of file diff --git a/terraform/modules/cloudwatch/main.tf b/terraform/modules/cloudwatch/main.tf new file mode 100644 index 00000000..6641e9f5 --- /dev/null +++ b/terraform/modules/cloudwatch/main.tf @@ -0,0 +1,36 @@ +# CloudWatch Log Group +resource "aws_cloudwatch_log_group" "app_logs" { + name = "/aws/ecs/${var.name_prefix}-${var.environment}" + retention_in_days = 7 + + tags = var.tags +} + +# CloudWatch Dashboard +resource "aws_cloudwatch_dashboard" "main" { + dashboard_name = "${var.name_prefix}-${var.environment}-dashboard" + + dashboard_body = jsonencode({ + widgets = [ + { + type = "metric" + x = 0 + y = 0 + width = 12 + height = 6 + + properties = { + metrics = [ + ["AWS/ECS", "CPUUtilization", "ServiceName", var.ecs_service_name, "ClusterName", var.ecs_cluster_name], + [".", "MemoryUtilization", ".", ".", ".", "."] + ] + view = "timeSeries" + stacked = false + region = "us-east-1" + title = "ECS Service Metrics" + period = 300 + } + } + ] + }) +} \ No newline at end of file diff --git a/terraform/modules/cloudwatch/outputs.tf b/terraform/modules/cloudwatch/outputs.tf new file mode 100644 index 00000000..8acb3019 --- /dev/null +++ b/terraform/modules/cloudwatch/outputs.tf @@ -0,0 +1,9 @@ +output "log_group_name" { + description = "CloudWatch log group name" + value = aws_cloudwatch_log_group.app_logs.name +} + +output "dashboard_name" { + description = "CloudWatch dashboard name" + value = aws_cloudwatch_dashboard.main.dashboard_name +} \ No newline at end of file diff --git a/terraform/modules/cloudwatch/variables.tf b/terraform/modules/cloudwatch/variables.tf new file mode 100644 index 00000000..a7497516 --- /dev/null +++ b/terraform/modules/cloudwatch/variables.tf @@ -0,0 +1,46 @@ +variable "name_prefix" { + description = "Prefix for resource names" + type = string +} + +variable "environment" { + description = "Environment name" + type = string +} + +variable "ecs_cluster_name" { + description = "ECS cluster name" + type = string +} + +variable "ecs_service_name" { + description = "ECS service name" + type = string +} + +variable "alb_arn_suffix" { + description = "ALB ARN suffix" + type = string +} + +variable 
"documentdb_cluster_id" { + description = "DocumentDB cluster ID" + type = string +} + +variable "redis_cluster_id" { + description = "Redis cluster ID" + type = string +} + +variable "slack_webhook_url" { + description = "Slack webhook URL for notifications" + type = string + default = "" +} + +variable "tags" { + description = "Tags to apply to resources" + type = map(string) + default = {} +} \ No newline at end of file diff --git a/terraform/modules/documentdb/main.tf b/terraform/modules/documentdb/main.tf new file mode 100644 index 00000000..e8c9c50e --- /dev/null +++ b/terraform/modules/documentdb/main.tf @@ -0,0 +1,30 @@ +# DocumentDB Subnet Group +resource "aws_docdb_subnet_group" "main" { + name = "${var.name_prefix}-${var.environment}-docdb-subnet-group" + subnet_ids = var.subnet_ids + + tags = var.tags +} + +# DocumentDB Cluster +resource "aws_docdb_cluster" "main" { + cluster_identifier = "${var.name_prefix}-${var.environment}-docdb-cluster" + engine = "docdb" + master_username = var.master_username + master_password = var.master_password + db_subnet_group_name = aws_docdb_subnet_group.main.name + vpc_security_group_ids = var.security_group_ids + storage_encrypted = true + skip_final_snapshot = true + + tags = var.tags +} + +# DocumentDB Instance +resource "aws_docdb_cluster_instance" "cluster_instances" { + identifier = "${var.name_prefix}-${var.environment}-docdb" + cluster_identifier = aws_docdb_cluster.main.id + instance_class = "db.t3.medium" + + tags = var.tags +} \ No newline at end of file diff --git a/terraform/modules/documentdb/outputs.tf b/terraform/modules/documentdb/outputs.tf new file mode 100644 index 00000000..d1a6bb4b --- /dev/null +++ b/terraform/modules/documentdb/outputs.tf @@ -0,0 +1,14 @@ +output "cluster_endpoint" { + description = "DocumentDB cluster endpoint" + value = aws_docdb_cluster.main.endpoint +} + +output "cluster_identifier" { + description = "DocumentDB cluster identifier" + value = aws_docdb_cluster.main.cluster_identifier +} + +output "cluster_arn" { + description = "DocumentDB cluster ARN" + value = aws_docdb_cluster.main.arn +} \ No newline at end of file diff --git a/terraform/modules/documentdb/variables.tf b/terraform/modules/documentdb/variables.tf new file mode 100644 index 00000000..5b449ec6 --- /dev/null +++ b/terraform/modules/documentdb/variables.tf @@ -0,0 +1,41 @@ +variable "name_prefix" { + description = "Prefix for resource names" + type = string +} + +variable "environment" { + description = "Environment name" + type = string +} + +variable "vpc_id" { + description = "VPC ID" + type = string +} + +variable "subnet_ids" { + description = "List of subnet IDs" + type = list(string) +} + +variable "security_group_ids" { + description = "List of security group IDs" + type = list(string) +} + +variable "master_username" { + description = "Master username for DocumentDB" + type = string +} + +variable "master_password" { + description = "Master password for DocumentDB" + type = string + sensitive = true +} + +variable "tags" { + description = "Tags to apply to resources" + type = map(string) + default = {} +} \ No newline at end of file diff --git a/terraform/modules/ecr/main.tf b/terraform/modules/ecr/main.tf new file mode 100644 index 00000000..1848e142 --- /dev/null +++ b/terraform/modules/ecr/main.tf @@ -0,0 +1,47 @@ +# ECR Repository +resource "aws_ecr_repository" "main" { + name = "${var.name_prefix}-${var.environment}" + image_tag_mutability = "MUTABLE" + + image_scanning_configuration { + scan_on_push = true + } + + tags = 
var.tags +} + +# ECR Lifecycle Policy +resource "aws_ecr_lifecycle_policy" "main" { + repository = aws_ecr_repository.main.name + + policy = jsonencode({ + rules = [ + { + rulePriority = 1 + description = "Keep last 10 images" + selection = { + tagStatus = "tagged" + tagPrefixList = ["v"] + countType = "imageCountMoreThan" + countNumber = 10 + } + action = { + type = "expire" + } + }, + { + rulePriority = 2 + description = "Delete untagged images older than 1 day" + selection = { + tagStatus = "untagged" + countType = "sinceImagePushed" + countUnit = "days" + countNumber = 1 + } + action = { + type = "expire" + } + } + ] + }) +} \ No newline at end of file diff --git a/terraform/modules/ecr/outputs.tf b/terraform/modules/ecr/outputs.tf new file mode 100644 index 00000000..d2b4d9ba --- /dev/null +++ b/terraform/modules/ecr/outputs.tf @@ -0,0 +1,14 @@ +output "repository_url" { + description = "URL of the ECR repository" + value = aws_ecr_repository.main.repository_url +} + +output "repository_arn" { + description = "ARN of the ECR repository" + value = aws_ecr_repository.main.arn +} + +output "repository_name" { + description = "Name of the ECR repository" + value = aws_ecr_repository.main.name +} \ No newline at end of file diff --git a/terraform/modules/ecr/variables.tf b/terraform/modules/ecr/variables.tf new file mode 100644 index 00000000..9935f6a8 --- /dev/null +++ b/terraform/modules/ecr/variables.tf @@ -0,0 +1,15 @@ +variable "name_prefix" { + description = "Prefix for resource names" + type = string +} + +variable "environment" { + description = "Environment name" + type = string +} + +variable "tags" { + description = "Tags to apply to resources" + type = map(string) + default = {} +} \ No newline at end of file diff --git a/terraform/modules/ecs/main.tf b/terraform/modules/ecs/main.tf new file mode 100644 index 00000000..958f9aa1 --- /dev/null +++ b/terraform/modules/ecs/main.tf @@ -0,0 +1,96 @@ +# ECS Cluster +resource "aws_ecs_cluster" "main" { + name = "${var.name_prefix}-${var.environment}-cluster" + + setting { + name = "containerInsights" + value = "enabled" + } + + tags = var.tags +} + +# ECS Task Definition +resource "aws_ecs_task_definition" "main" { + family = "${var.name_prefix}-${var.environment}-app" + network_mode = "awsvpc" + requires_compatibilities = ["FARGATE"] + cpu = "1024" + memory = "2048" + execution_role_arn = var.execution_role_arn + task_role_arn = var.task_role_arn + + container_definitions = jsonencode([ + { + name = "app" + image = "${var.ecr_repository_url}:latest" + + portMappings = [ + { + containerPort = 3000 + protocol = "tcp" + } + ] + + environment = [ + for key, value in var.environment_variables : { + name = key + value = value + } + ] + + secrets = [ + for key, value in var.secrets : { + name = key + valueFrom = value + } + ] + + logConfiguration = { + logDriver = "awslogs" + options = { + awslogs-group = aws_cloudwatch_log_group.main.name + awslogs-region = data.aws_region.current.name + awslogs-stream-prefix = "ecs" + } + } + } + ]) + + tags = var.tags +} + +# CloudWatch Log Group +resource "aws_cloudwatch_log_group" "main" { + name = "/ecs/${var.name_prefix}-${var.environment}" + retention_in_days = 7 + + tags = var.tags +} + +# Data source for current region +data "aws_region" "current" {} + +# ECS Service +resource "aws_ecs_service" "main" { + name = "${var.name_prefix}-${var.environment}-service" + cluster = aws_ecs_cluster.main.id + task_definition = aws_ecs_task_definition.main.arn + desired_count = 1 + launch_type = "FARGATE" + + 
network_configuration { + subnets = var.subnet_ids + security_groups = var.security_group_ids + } + + load_balancer { + target_group_arn = var.target_group_arn + container_name = "app" + container_port = 3000 + } + + depends_on = [var.target_group_arn] + + tags = var.tags +} \ No newline at end of file diff --git a/terraform/modules/ecs/outputs.tf b/terraform/modules/ecs/outputs.tf new file mode 100644 index 00000000..48e173e5 --- /dev/null +++ b/terraform/modules/ecs/outputs.tf @@ -0,0 +1,14 @@ +output "cluster_name" { + description = "ECS cluster name" + value = aws_ecs_cluster.main.name +} + +output "service_name" { + description = "ECS service name" + value = aws_ecs_service.main.name +} + +output "cluster_arn" { + description = "ECS cluster ARN" + value = aws_ecs_cluster.main.arn +} \ No newline at end of file diff --git a/terraform/modules/ecs/variables.tf b/terraform/modules/ecs/variables.tf new file mode 100644 index 00000000..ce1619bd --- /dev/null +++ b/terraform/modules/ecs/variables.tf @@ -0,0 +1,62 @@ +variable "name_prefix" { + description = "Prefix for resource names" + type = string +} + +variable "environment" { + description = "Environment name" + type = string +} + +variable "vpc_id" { + description = "VPC ID" + type = string +} + +variable "subnet_ids" { + description = "List of subnet IDs" + type = list(string) +} + +variable "security_group_ids" { + description = "List of security group IDs" + type = list(string) +} + +variable "ecr_repository_url" { + description = "ECR repository URL" + type = string +} + +variable "task_role_arn" { + description = "ECS task role ARN" + type = string +} + +variable "execution_role_arn" { + description = "ECS execution role ARN" + type = string +} + +variable "target_group_arn" { + description = "ALB target group ARN" + type = string +} + +variable "environment_variables" { + description = "Environment variables for the container" + type = map(string) + default = {} +} + +variable "secrets" { + description = "Secrets for the container" + type = map(string) + default = {} +} + +variable "tags" { + description = "Tags to apply to resources" + type = map(string) + default = {} +} \ No newline at end of file diff --git a/terraform/modules/iam/main.tf b/terraform/modules/iam/main.tf new file mode 100644 index 00000000..25bc380c --- /dev/null +++ b/terraform/modules/iam/main.tf @@ -0,0 +1,150 @@ +# ECS Task Execution Role +resource "aws_iam_role" "ecs_execution_role" { + name = "${var.name_prefix}-${var.environment}-ecs-execution-role" + + assume_role_policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = "ecs-tasks.amazonaws.com" + } + } + ] + }) + + tags = var.tags +} + +# Attach AWS managed policy for ECS task execution +resource "aws_iam_role_policy_attachment" "ecs_execution_role_policy" { + role = aws_iam_role.ecs_execution_role.name + policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy" +} + +# ECS Task Role +resource "aws_iam_role" "ecs_task_role" { + name = "${var.name_prefix}-${var.environment}-ecs-task-role" + + assume_role_policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = "ecs-tasks.amazonaws.com" + } + } + ] + }) + + tags = var.tags +} + +# S3 access policy for ECS task +resource "aws_iam_role_policy" "ecs_s3_policy" { + name = "${var.name_prefix}-${var.environment}-ecs-s3-policy" + role = aws_iam_role.ecs_task_role.id 
+ + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Effect = "Allow" + Action = [ + "s3:GetObject", + "s3:PutObject", + "s3:DeleteObject", + "s3:ListBucket" + ] + Resource = [ + var.s3_attachments_bucket_arn, + "${var.s3_attachments_bucket_arn}/*" + ] + } + ] + }) +} + +# GitHub Actions OIDC provider +resource "aws_iam_openid_connect_provider" "github_actions" { + url = "https://token.actions.githubusercontent.com" + + client_id_list = [ + "sts.amazonaws.com" + ] + + thumbprint_list = [ + "6938fd4d98bab03faadb97b34396831e3780aea1" + ] + + tags = var.tags +} + +# GitHub Actions Role +resource "aws_iam_role" "github_actions" { + name = "${var.name_prefix}-${var.environment}-github-actions-role" + + assume_role_policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Effect = "Allow" + Principal = { + Federated = aws_iam_openid_connect_provider.github_actions.arn + } + Action = "sts:AssumeRoleWithWebIdentity" + Condition = { + StringEquals = { + "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com" + } + StringLike = { + "token.actions.githubusercontent.com:sub" = "repo:${var.github_repo}:*" + } + } + } + ] + }) + + tags = var.tags +} + +# GitHub Actions policy for ECR and ECS +resource "aws_iam_role_policy" "github_actions_policy" { + name = "${var.name_prefix}-${var.environment}-github-actions-policy" + role = aws_iam_role.github_actions.id + + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Effect = "Allow" + Action = [ + "ecr:GetAuthorizationToken", + "ecr:BatchCheckLayerAvailability", + "ecr:GetDownloadUrlForLayer", + "ecr:BatchGetImage", + "ecr:InitiateLayerUpload", + "ecr:UploadLayerPart", + "ecr:CompleteLayerUpload", + "ecr:PutImage" + ] + Resource = [ + var.ecr_repository_arn, + "*" + ] + }, + { + Effect = "Allow" + Action = [ + "ecs:UpdateService", + "ecs:DescribeServices" + ] + Resource = "*" + } + ] + }) +} \ No newline at end of file diff --git a/terraform/modules/iam/outputs.tf b/terraform/modules/iam/outputs.tf new file mode 100644 index 00000000..b3b4ca41 --- /dev/null +++ b/terraform/modules/iam/outputs.tf @@ -0,0 +1,14 @@ +output "ecs_task_role_arn" { + description = "ARN of the ECS task role" + value = aws_iam_role.ecs_task_role.arn +} + +output "ecs_execution_role_arn" { + description = "ARN of the ECS execution role" + value = aws_iam_role.ecs_execution_role.arn +} + +output "github_actions_role_arn" { + description = "ARN of the GitHub Actions role" + value = aws_iam_role.github_actions.arn +} \ No newline at end of file diff --git a/terraform/modules/iam/variables.tf b/terraform/modules/iam/variables.tf new file mode 100644 index 00000000..d3e6bda2 --- /dev/null +++ b/terraform/modules/iam/variables.tf @@ -0,0 +1,30 @@ +variable "name_prefix" { + description = "Prefix for resource names" + type = string +} + +variable "environment" { + description = "Environment name" + type = string +} + +variable "s3_attachments_bucket_arn" { + description = "ARN of the S3 attachments bucket" + type = string +} + +variable "ecr_repository_arn" { + description = "ARN of the ECR repository" + type = string +} + +variable "github_repo" { + description = "GitHub repository in format owner/repo" + type = string +} + +variable "tags" { + description = "Tags to apply to resources" + type = map(string) + default = {} +} \ No newline at end of file diff --git a/terraform/modules/parameter_store/main.tf b/terraform/modules/parameter_store/main.tf new file mode 100644 index 00000000..ace82e88 --- /dev/null +++ 
b/terraform/modules/parameter_store/main.tf @@ -0,0 +1,46 @@ +# Session Secret Parameter +resource "aws_ssm_parameter" "session_secret" { + name = "/${var.name_prefix}/${var.environment}/session-secret" + type = "SecureString" + value = var.session_secret + + tags = var.tags +} + +# DocumentDB Master Username Parameter +resource "aws_ssm_parameter" "documentdb_username" { + name = "/${var.name_prefix}/${var.environment}/documentdb-username" + type = "String" + value = var.documentdb_master_username + + tags = var.tags +} + +# DocumentDB Master Password Parameter +resource "aws_ssm_parameter" "documentdb_password" { + name = "/${var.name_prefix}/${var.environment}/documentdb-password" + type = "SecureString" + value = var.documentdb_master_password + + tags = var.tags +} + +# Redis Auth Token Parameter +resource "aws_ssm_parameter" "redis_auth_token" { + name = "/${var.name_prefix}/${var.environment}/redis-auth-token" + type = "SecureString" + value = var.redis_auth_token + + tags = var.tags +} + +# Google Cloud Storage Service Account Key Parameter (optional) +resource "aws_ssm_parameter" "gcs_service_account_key" { + count = var.gcs_service_account_key != "" ? 1 : 0 + + name = "/${var.name_prefix}/${var.environment}/gcs-service-account-key" + type = "SecureString" + value = var.gcs_service_account_key + + tags = var.tags +} \ No newline at end of file diff --git a/terraform/modules/parameter_store/outputs.tf b/terraform/modules/parameter_store/outputs.tf new file mode 100644 index 00000000..acebd32b --- /dev/null +++ b/terraform/modules/parameter_store/outputs.tf @@ -0,0 +1,24 @@ +output "session_secret_arn" { + description = "ARN of the session secret parameter" + value = aws_ssm_parameter.session_secret.arn +} + +output "documentdb_username_arn" { + description = "ARN of the DocumentDB username parameter" + value = aws_ssm_parameter.documentdb_username.arn +} + +output "documentdb_password_arn" { + description = "ARN of the DocumentDB password parameter" + value = aws_ssm_parameter.documentdb_password.arn +} + +output "redis_auth_token_arn" { + description = "ARN of the Redis auth token parameter" + value = aws_ssm_parameter.redis_auth_token.arn +} + +output "gcs_service_account_key_arn" { + description = "ARN of the GCS service account key parameter" + value = var.gcs_service_account_key != "" ? 
aws_ssm_parameter.gcs_service_account_key[0].arn : "" +} \ No newline at end of file diff --git a/terraform/modules/parameter_store/variables.tf b/terraform/modules/parameter_store/variables.tf new file mode 100644 index 00000000..fdfbe606 --- /dev/null +++ b/terraform/modules/parameter_store/variables.tf @@ -0,0 +1,45 @@ +variable "name_prefix" { + description = "Prefix for resource names" + type = string +} + +variable "environment" { + description = "Environment name" + type = string +} + +variable "documentdb_master_username" { + description = "Master username for DocumentDB" + type = string +} + +variable "documentdb_master_password" { + description = "Master password for DocumentDB" + type = string + sensitive = true +} + +variable "session_secret" { + description = "Session secret for the application" + type = string + sensitive = true +} + +variable "redis_auth_token" { + description = "Auth token for Redis" + type = string + sensitive = true +} + +variable "gcs_service_account_key" { + description = "Google Cloud Storage service account key (JSON)" + type = string + default = "" + sensitive = true +} + +variable "tags" { + description = "Tags to apply to resources" + type = map(string) + default = {} +} \ No newline at end of file diff --git a/terraform/modules/redis/main.tf b/terraform/modules/redis/main.tf new file mode 100644 index 00000000..b5702ddc --- /dev/null +++ b/terraform/modules/redis/main.tf @@ -0,0 +1,26 @@ +# ElastiCache Subnet Group +resource "aws_elasticache_subnet_group" "main" { + name = "${var.name_prefix}-${var.environment}-redis-subnet-group" + subnet_ids = var.subnet_ids + + tags = var.tags +} + +# ElastiCache Redis Cluster +resource "aws_elasticache_replication_group" "main" { + replication_group_id = "${var.name_prefix}-${var.environment}-redis" + description = "Redis cluster for ${var.name_prefix} ${var.environment}" + node_type = "cache.t3.micro" + port = 6379 + parameter_group_name = "default.redis7" + num_cache_clusters = 1 + + subnet_group_name = aws_elasticache_subnet_group.main.name + security_group_ids = var.security_group_ids + + auth_token = var.auth_token + transit_encryption_enabled = true + at_rest_encryption_enabled = true + + tags = var.tags +} \ No newline at end of file diff --git a/terraform/modules/redis/outputs.tf b/terraform/modules/redis/outputs.tf new file mode 100644 index 00000000..7cba5482 --- /dev/null +++ b/terraform/modules/redis/outputs.tf @@ -0,0 +1,9 @@ +output "cluster_endpoint" { + description = "Redis cluster endpoint" + value = aws_elasticache_replication_group.main.configuration_endpoint_address != "" ? 
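Because the replication group above enables transit encryption and an auth token, clients have to connect over TLS and present the token as the password. A sketch of assembling such a connection string inside the module (illustrative only; exporting this through a plain environment variable would put the token into the task definition, so injecting it via Parameter Store remains the safer route):

```hcl
# Sketch: "rediss://" (double s) selects TLS; the auth token goes in the password slot.
locals {
  redis_uri = "rediss://:${var.auth_token}@${aws_elasticache_replication_group.main.primary_endpoint_address}:6379"
}
```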
aws_elasticache_replication_group.main.configuration_endpoint_address : aws_elasticache_replication_group.main.primary_endpoint_address +} + +output "cluster_id" { + description = "Redis cluster ID" + value = aws_elasticache_replication_group.main.replication_group_id +} \ No newline at end of file diff --git a/terraform/modules/redis/variables.tf b/terraform/modules/redis/variables.tf new file mode 100644 index 00000000..c4c76d88 --- /dev/null +++ b/terraform/modules/redis/variables.tf @@ -0,0 +1,36 @@ +variable "name_prefix" { + description = "Prefix for resource names" + type = string +} + +variable "environment" { + description = "Environment name" + type = string +} + +variable "vpc_id" { + description = "VPC ID" + type = string +} + +variable "subnet_ids" { + description = "List of subnet IDs" + type = list(string) +} + +variable "security_group_ids" { + description = "List of security group IDs" + type = list(string) +} + +variable "auth_token" { + description = "Auth token for Redis" + type = string + sensitive = true +} + +variable "tags" { + description = "Tags to apply to resources" + type = map(string) + default = {} +} \ No newline at end of file From 0bad9cecf85eeb87a74a0d9e7465661acbabdfd5 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Sat, 24 May 2025 22:15:05 +0200 Subject: [PATCH 06/46] Refactor Terraform configuration and split IAM policies - Add Terraform plan files to .gitignore - Hardcode availability zones to avoid IAM permission issues - Fix S3 lifecycle configuration filters - Fix ALB and CloudFront output references - Split monolithic IAM policy into focused policies - Create core management policy for IAM and DynamoDB - Create infrastructure policy for VPC, ALB, and databases - Create S3 management policy for bucket operations - Create service management policy for ECR, ECS, and CloudFront --- .gitignore | 2 + terraform/main.tf | 17 ++- terraform/modules/s3/main.tf | 12 ++ terraform/outputs.tf | 12 +- terraform/terraform-backend-policy.json | 63 --------- .../terraform-core-management-policy.json | 79 +++++++++++ ...form-infrastructure-management-policy.json | 130 ++++++++++++++++++ terraform/terraform-s3-management-policy.json | 67 +++++++++ .../terraform-service-management-policy.json | 78 +++++++++++ 9 files changed, 383 insertions(+), 77 deletions(-) delete mode 100644 terraform/terraform-backend-policy.json create mode 100644 terraform/terraform-core-management-policy.json create mode 100644 terraform/terraform-infrastructure-management-policy.json create mode 100644 terraform/terraform-s3-management-policy.json create mode 100644 terraform/terraform-service-management-policy.json diff --git a/.gitignore b/.gitignore index 87270015..159f98f2 100644 --- a/.gitignore +++ b/.gitignore @@ -146,3 +146,5 @@ dump.archive aposUsersSafe.json terraform.tfvars terraform/.terraform +dev.tfplan +plan.out.txt diff --git a/terraform/main.tf b/terraform/main.tf index 69ec798a..57986567 100644 --- a/terraform/main.tf +++ b/terraform/main.tf @@ -31,9 +31,12 @@ provider "aws" { # Data sources data "aws_caller_identity" "current" {} data "aws_region" "current" {} -data "aws_availability_zones" "available" { - state = "available" -} + +# Hardcoded availability zones for us-east-1 to avoid IAM permission issues +# Comment out if you have ec2:DescribeAvailabilityZones permission +# data "aws_availability_zones" "available" { +# state = "available" +# } # Random password for Redis auth token resource "random_password" "redis_auth_token" { @@ -46,8 +49,12 @@ locals { name_prefix 
= "sf-website" environment = var.environment - # Availability zones (first 2) - azs = slice(data.aws_availability_zones.available.names, 0, 2) + # Hardcoded availability zones for us-east-1 (first 2) + # Change this if using a different region + azs = ["us-east-1a", "us-east-1b"] + + # If you have ec2:DescribeAvailabilityZones permission, uncomment this line and comment the one above: + # azs = slice(data.aws_availability_zones.available.names, 0, 2) # Common tags for all resources common_tags = merge(var.default_tags, { diff --git a/terraform/modules/s3/main.tf b/terraform/modules/s3/main.tf index eeb11296..0f3f1813 100644 --- a/terraform/modules/s3/main.tf +++ b/terraform/modules/s3/main.tf @@ -54,6 +54,10 @@ resource "aws_s3_bucket_lifecycle_configuration" "attachments" { id = "transition_to_ia" status = "Enabled" + filter { + prefix = "" + } + transition { days = 30 storage_class = "STANDARD_IA" @@ -69,6 +73,10 @@ resource "aws_s3_bucket_lifecycle_configuration" "attachments" { id = "expire_non_current_versions" status = "Enabled" + filter { + prefix = "" + } + noncurrent_version_expiration { noncurrent_days = 30 } @@ -124,6 +132,10 @@ resource "aws_s3_bucket_lifecycle_configuration" "logs" { id = "log_lifecycle" status = "Enabled" + filter { + prefix = "" + } + transition { days = 90 storage_class = "GLACIER" diff --git a/terraform/outputs.tf b/terraform/outputs.tf index 9acd02fa..6a30b137 100644 --- a/terraform/outputs.tf +++ b/terraform/outputs.tf @@ -46,12 +46,12 @@ output "ecr_repository_arn" { # ALB Outputs output "alb_dns_name" { description = "DNS name of the Application Load Balancer" - value = module.alb.dns_name + value = module.alb.alb_dns_name } output "alb_zone_id" { description = "Zone ID of the Application Load Balancer" - value = module.alb.zone_id + value = module.alb.alb_zone_id } output "domain_name" { @@ -67,7 +67,7 @@ output "cloudfront_distribution_id" { output "cloudfront_domain_name" { description = "Domain name of the CloudFront distribution" - value = module.cloudfront.domain_name + value = module.cloudfront.distribution_domain_name } output "media_domain_name" { @@ -82,12 +82,6 @@ output "documentdb_cluster_endpoint" { sensitive = true } -output "documentdb_cluster_reader_endpoint" { - description = "DocumentDB cluster reader endpoint" - value = module.documentdb.cluster_reader_endpoint - sensitive = true -} - # Cache Outputs output "redis_cluster_endpoint" { description = "ElastiCache Redis cluster endpoint" diff --git a/terraform/terraform-backend-policy.json b/terraform/terraform-backend-policy.json deleted file mode 100644 index a4c8afc2..00000000 --- a/terraform/terraform-backend-policy.json +++ /dev/null @@ -1,63 +0,0 @@ -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "TerraformBackendS3Permissions", - "Effect": "Allow", - "Action": [ - "s3:CreateBucket", - "s3:DeleteBucket", - "s3:ListBucket", - "s3:ListBucketVersions", - "s3:GetBucketLocation", - "s3:GetBucketVersioning", - "s3:PutBucketVersioning", - "s3:PutEncryptionConfiguration", - "s3:GetEncryptionConfiguration", - "s3:PutBucketPublicAccessBlock", - "s3:GetBucketPublicAccessBlock", - "s3:PutBucketPolicy", - "s3:GetBucketPolicy", - "s3:DeleteBucketPolicy" - ], - "Resource": [ - "arn:aws:s3:::sf-website-infrastructure", - "arn:aws:s3:::sf-website-infrastructure/*" - ] - }, - { - "Sid": "TerraformBackendS3ObjectPermissions", - "Effect": "Allow", - "Action": [ - "s3:GetObject", - "s3:PutObject", - "s3:DeleteObject", - "s3:GetObjectVersion", - "s3:DeleteObjectVersion" - ], - "Resource": 
"arn:aws:s3:::sf-website-infrastructure/*" - }, - { - "Sid": "TerraformBackendDynamoDBPermissions", - "Effect": "Allow", - "Action": [ - "dynamodb:CreateTable", - "dynamodb:DeleteTable", - "dynamodb:DescribeTable", - "dynamodb:GetItem", - "dynamodb:PutItem", - "dynamodb:DeleteItem", - "dynamodb:UpdateItem" - ], - "Resource": "arn:aws:dynamodb:us-east-1:695912022152:table/sf-website-terraform-locks" - }, - { - "Sid": "STSPermissions", - "Effect": "Allow", - "Action": [ - "sts:GetCallerIdentity" - ], - "Resource": "*" - } - ] -} \ No newline at end of file diff --git a/terraform/terraform-core-management-policy.json b/terraform/terraform-core-management-policy.json new file mode 100644 index 00000000..029447a0 --- /dev/null +++ b/terraform/terraform-core-management-policy.json @@ -0,0 +1,79 @@ +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "TerraformBackendDynamoDBPermissions", + "Effect": "Allow", + "Action": [ + "dynamodb:CreateTable", + "dynamodb:DeleteTable", + "dynamodb:DescribeTable", + "dynamodb:GetItem", + "dynamodb:PutItem", + "dynamodb:DeleteItem", + "dynamodb:UpdateItem" + ], + "Resource": "arn:aws:dynamodb:us-east-1:695912022152:table/sf-website-terraform-locks" + }, + { + "Sid": "TerraformIAMPermissions", + "Effect": "Allow", + "Action": [ + "iam:CreateRole", + "iam:DeleteRole", + "iam:GetRole", + "iam:ListRoles", + "iam:UpdateRole", + "iam:PutRolePolicy", + "iam:GetRolePolicy", + "iam:DeleteRolePolicy", + "iam:ListRolePolicies", + "iam:CreatePolicy", + "iam:DeletePolicy", + "iam:GetPolicy", + "iam:GetPolicyVersion", + "iam:ListPolicyVersions", + "iam:AttachRolePolicy", + "iam:DetachRolePolicy", + "iam:ListAttachedRolePolicies", + "iam:PassRole", + "iam:CreateOpenIDConnectProvider", + "iam:DeleteOpenIDConnectProvider", + "iam:GetOpenIDConnectProvider", + "iam:ListOpenIDConnectProviders", + "iam:UpdateOpenIDConnectProviderThumbprint", + "iam:TagRole", + "iam:UntagRole", + "iam:ListRoleTags", + "iam:TagOpenIDConnectProvider", + "iam:UntagOpenIDConnectProvider", + "iam:ListOpenIDConnectProviderTags" + ], + "Resource": "*" + }, + { + "Sid": "TerraformCloudWatchLogsPermissions", + "Effect": "Allow", + "Action": [ + "logs:CreateLogGroup", + "logs:DeleteLogGroup", + "logs:DescribeLogGroups", + "logs:PutRetentionPolicy", + "logs:DeleteRetentionPolicy", + "logs:TagLogGroup", + "logs:UntagLogGroup", + "logs:ListTagsForResource", + "logs:ListTagsLogGroup" + ], + "Resource": "*" + }, + { + "Sid": "STSPermissions", + "Effect": "Allow", + "Action": [ + "sts:GetCallerIdentity" + ], + "Resource": "*" + } + ] +} \ No newline at end of file diff --git a/terraform/terraform-infrastructure-management-policy.json b/terraform/terraform-infrastructure-management-policy.json new file mode 100644 index 00000000..4d76055c --- /dev/null +++ b/terraform/terraform-infrastructure-management-policy.json @@ -0,0 +1,130 @@ +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "TerraformEC2VPCPermissions", + "Effect": "Allow", + "Action": [ + "ec2:CreateVpc", + "ec2:DeleteVpc", + "ec2:DescribeVpcs", + "ec2:DescribeVpcAttribute", + "ec2:ModifyVpcAttribute", + "ec2:CreateSubnet", + "ec2:DeleteSubnet", + "ec2:DescribeSubnets", + "ec2:ModifySubnetAttribute", + "ec2:CreateInternetGateway", + "ec2:DeleteInternetGateway", + "ec2:DescribeInternetGateways", + "ec2:AttachInternetGateway", + "ec2:DetachInternetGateway", + "ec2:CreateNatGateway", + "ec2:DeleteNatGateway", + "ec2:DescribeNatGateways", + "ec2:AllocateAddress", + "ec2:ReleaseAddress", + "ec2:DescribeAddresses", + 
"ec2:DescribeAddressesAttribute", + "ec2:CreateRouteTable", + "ec2:DeleteRouteTable", + "ec2:DescribeRouteTables", + "ec2:CreateRoute", + "ec2:DeleteRoute", + "ec2:ReplaceRoute", + "ec2:AssociateRouteTable", + "ec2:DisassociateRouteTable", + "ec2:CreateSecurityGroup", + "ec2:DeleteSecurityGroup", + "ec2:DescribeSecurityGroups", + "ec2:AuthorizeSecurityGroupIngress", + "ec2:AuthorizeSecurityGroupEgress", + "ec2:RevokeSecurityGroupIngress", + "ec2:RevokeSecurityGroupEgress", + "ec2:CreateFlowLogs", + "ec2:DeleteFlowLogs", + "ec2:DescribeFlowLogs", + "ec2:DescribeAvailabilityZones", + "ec2:CreateTags", + "ec2:DeleteTags", + "ec2:DescribeTags" + ], + "Resource": "*" + }, + { + "Sid": "TerraformELBPermissions", + "Effect": "Allow", + "Action": [ + "elasticloadbalancing:CreateLoadBalancer", + "elasticloadbalancing:DeleteLoadBalancer", + "elasticloadbalancing:DescribeLoadBalancers", + "elasticloadbalancing:DescribeLoadBalancerAttributes", + "elasticloadbalancing:ModifyLoadBalancerAttributes", + "elasticloadbalancing:CreateTargetGroup", + "elasticloadbalancing:DeleteTargetGroup", + "elasticloadbalancing:DescribeTargetGroups", + "elasticloadbalancing:DescribeTargetGroupAttributes", + "elasticloadbalancing:ModifyTargetGroupAttributes", + "elasticloadbalancing:CreateListener", + "elasticloadbalancing:DeleteListener", + "elasticloadbalancing:DescribeListeners", + "elasticloadbalancing:ModifyListener", + "elasticloadbalancing:RegisterTargets", + "elasticloadbalancing:DeregisterTargets", + "elasticloadbalancing:DescribeTargetHealth", + "elasticloadbalancing:AddTags", + "elasticloadbalancing:RemoveTags", + "elasticloadbalancing:DescribeTags" + ], + "Resource": "*" + }, + { + "Sid": "TerraformDocumentDBRDSPermissions", + "Effect": "Allow", + "Action": [ + "rds:CreateDBSubnetGroup", + "rds:DeleteDBSubnetGroup", + "rds:DescribeDBSubnetGroups", + "rds:ModifyDBSubnetGroup", + "rds:AddTagsToResource", + "rds:RemoveTagsFromResource", + "rds:ListTagsForResource", + "rds:CreateDBClusterParameterGroup", + "rds:DeleteDBClusterParameterGroup", + "rds:DescribeDBClusterParameterGroups", + "rds:ModifyDBClusterParameterGroup", + "rds:CreateDBCluster", + "rds:DeleteDBCluster", + "rds:DescribeDBClusters", + "rds:ModifyDBCluster", + "rds:CreateDBInstance", + "rds:DeleteDBInstance", + "rds:DescribeDBInstances", + "rds:ModifyDBInstance" + ], + "Resource": "*" + }, + { + "Sid": "TerraformElastiCachePermissions", + "Effect": "Allow", + "Action": [ + "elasticache:CreateCacheSubnetGroup", + "elasticache:DeleteCacheSubnetGroup", + "elasticache:DescribeCacheSubnetGroups", + "elasticache:ModifyCacheSubnetGroup", + "elasticache:CreateReplicationGroup", + "elasticache:DeleteReplicationGroup", + "elasticache:DescribeReplicationGroups", + "elasticache:ModifyReplicationGroup", + "elasticache:CreateCacheCluster", + "elasticache:DeleteCacheCluster", + "elasticache:DescribeCacheClusters", + "elasticache:ModifyCacheCluster", + "elasticache:AddTagsToResource", + "elasticache:RemoveTagsFromResource", + "elasticache:ListTagsForResource" + ], + "Resource": "*" + } + ] +} \ No newline at end of file diff --git a/terraform/terraform-s3-management-policy.json b/terraform/terraform-s3-management-policy.json new file mode 100644 index 00000000..0a0fc609 --- /dev/null +++ b/terraform/terraform-s3-management-policy.json @@ -0,0 +1,67 @@ +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "TerraformBackendS3Permissions", + "Effect": "Allow", + "Action": [ + "s3:CreateBucket", + "s3:DeleteBucket", + "s3:ListBucket", + "s3:ListBucketVersions", + 
"s3:GetBucketLocation", + "s3:GetBucketVersioning", + "s3:PutBucketVersioning", + "s3:PutEncryptionConfiguration", + "s3:GetEncryptionConfiguration", + "s3:PutBucketPublicAccessBlock", + "s3:GetBucketPublicAccessBlock", + "s3:PutBucketPolicy", + "s3:GetBucketPolicy", + "s3:DeleteBucketPolicy", + "s3:GetBucketAcl", + "s3:PutBucketAcl", + "s3:GetBucketCORS", + "s3:PutBucketCORS", + "s3:GetBucketWebsite", + "s3:PutBucketWebsite", + "s3:DeleteBucketWebsite", + "s3:GetAccelerateConfiguration", + "s3:PutAccelerateConfiguration", + "s3:GetBucketRequestPayment", + "s3:PutBucketRequestPayment", + "s3:GetBucketLogging", + "s3:PutBucketLogging", + "s3:GetLifecycleConfiguration", + "s3:PutLifecycleConfiguration", + "s3:GetReplicationConfiguration", + "s3:PutReplicationConfiguration", + "s3:GetBucketObjectLockConfiguration", + "s3:PutBucketObjectLockConfiguration", + "s3:GetBucketTagging", + "s3:PutBucketTagging" + ], + "Resource": [ + "arn:aws:s3:::sf-website-infrastructure", + "arn:aws:s3:::sf-website-infrastructure/*", + "arn:aws:s3:::sf-website-s3-*", + "arn:aws:s3:::sf-website-s3-*/*" + ] + }, + { + "Sid": "TerraformBackendS3ObjectPermissions", + "Effect": "Allow", + "Action": [ + "s3:GetObject", + "s3:PutObject", + "s3:DeleteObject", + "s3:GetObjectVersion", + "s3:DeleteObjectVersion" + ], + "Resource": [ + "arn:aws:s3:::sf-website-infrastructure/*", + "arn:aws:s3:::sf-website-s3-*/*" + ] + } + ] +} \ No newline at end of file diff --git a/terraform/terraform-service-management-policy.json b/terraform/terraform-service-management-policy.json new file mode 100644 index 00000000..f779a014 --- /dev/null +++ b/terraform/terraform-service-management-policy.json @@ -0,0 +1,78 @@ +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "TerraformECRPermissions", + "Effect": "Allow", + "Action": [ + "ecr:CreateRepository", + "ecr:DeleteRepository", + "ecr:DescribeRepositories", + "ecr:PutLifecyclePolicy", + "ecr:GetLifecyclePolicy", + "ecr:DeleteLifecyclePolicy", + "ecr:PutImageScanningConfiguration", + "ecr:PutImageTagMutability", + "ecr:TagResource", + "ecr:UntagResource", + "ecr:ListTagsForResource" + ], + "Resource": "*" + }, + { + "Sid": "TerraformECSPermissions", + "Effect": "Allow", + "Action": [ + "ecs:CreateCluster", + "ecs:DeleteCluster", + "ecs:DescribeClusters", + "ecs:CreateService", + "ecs:DeleteService", + "ecs:DescribeServices", + "ecs:UpdateService", + "ecs:RegisterTaskDefinition", + "ecs:DeregisterTaskDefinition", + "ecs:DescribeTaskDefinition", + "ecs:TagResource", + "ecs:UntagResource", + "ecs:ListTagsForResource" + ], + "Resource": "*" + }, + { + "Sid": "TerraformSSMParameterStorePermissions", + "Effect": "Allow", + "Action": [ + "ssm:PutParameter", + "ssm:GetParameter", + "ssm:GetParameters", + "ssm:DeleteParameter", + "ssm:DescribeParameters", + "ssm:AddTagsToResource", + "ssm:RemoveTagsFromResource", + "ssm:ListTagsForResource" + ], + "Resource": "*" + }, + { + "Sid": "TerraformCloudFrontPermissions", + "Effect": "Allow", + "Action": [ + "cloudfront:CreateCloudFrontOriginAccessIdentity", + "cloudfront:DeleteCloudFrontOriginAccessIdentity", + "cloudfront:GetCloudFrontOriginAccessIdentity", + "cloudfront:CreateOriginAccessControl", + "cloudfront:DeleteOriginAccessControl", + "cloudfront:GetOriginAccessControl", + "cloudfront:CreateDistribution", + "cloudfront:DeleteDistribution", + "cloudfront:GetDistribution", + "cloudfront:UpdateDistribution", + "cloudfront:TagResource", + "cloudfront:UntagResource", + "cloudfront:ListTagsForResource" + ], + "Resource": "*" + } + ] +} \ No 
newline at end of file From 5f51b9ce720289e0e7472f89d26c92360f8acb3c Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Sat, 24 May 2025 22:53:43 +0200 Subject: [PATCH 07/46] Reorganize Terraform IAM policies and simplify Redis configuration - Simplify Redis cluster endpoint output to use primary endpoint address only - Move DocumentDB, ALB, CloudFront, and ACM permissions to core management policy - Remove duplicate ELB and CloudFront permissions from infrastructure and service policies - Consolidate AWS service permissions for better organization --- terraform/modules/redis/outputs.tf | 2 +- .../terraform-core-management-policy.json | 88 +++++++++++++++++++ ...form-infrastructure-management-policy.json | 53 ----------- .../terraform-service-management-policy.json | 20 ----- 4 files changed, 89 insertions(+), 74 deletions(-) diff --git a/terraform/modules/redis/outputs.tf b/terraform/modules/redis/outputs.tf index 7cba5482..472deae1 100644 --- a/terraform/modules/redis/outputs.tf +++ b/terraform/modules/redis/outputs.tf @@ -1,6 +1,6 @@ output "cluster_endpoint" { description = "Redis cluster endpoint" - value = aws_elasticache_replication_group.main.configuration_endpoint_address != "" ? aws_elasticache_replication_group.main.configuration_endpoint_address : aws_elasticache_replication_group.main.primary_endpoint_address + value = aws_elasticache_replication_group.main.primary_endpoint_address } output "cluster_id" { diff --git a/terraform/terraform-core-management-policy.json b/terraform/terraform-core-management-policy.json index 029447a0..fe1dcbf6 100644 --- a/terraform/terraform-core-management-policy.json +++ b/terraform/terraform-core-management-policy.json @@ -74,6 +74,94 @@ "sts:GetCallerIdentity" ], "Resource": "*" + }, + { + "Sid": "TerraformDocumentDBPermissions", + "Effect": "Allow", + "Action": [ + "rds:DescribeGlobalClusters", + "rds:CreateGlobalCluster", + "rds:DeleteGlobalCluster", + "rds:ModifyGlobalCluster", + "rds:CreateDBCluster", + "rds:DeleteDBCluster", + "rds:DescribeDBClusters", + "rds:ModifyDBCluster", + "rds:CreateDBSubnetGroup", + "rds:DeleteDBSubnetGroup", + "rds:DescribeDBSubnetGroups", + "rds:ModifyDBSubnetGroup", + "rds:CreateDBClusterParameterGroup", + "rds:DeleteDBClusterParameterGroup", + "rds:DescribeDBClusterParameterGroups", + "rds:ModifyDBClusterParameterGroup", + "rds:CreateDBInstance", + "rds:DeleteDBInstance", + "rds:DescribeDBInstances", + "rds:ModifyDBInstance", + "rds:AddTagsToResource", + "rds:RemoveTagsFromResource", + "rds:ListTagsForResource" + ], + "Resource": "*" + }, + { + "Sid": "TerraformALBPermissions", + "Effect": "Allow", + "Action": [ + "elasticloadbalancing:CreateLoadBalancer", + "elasticloadbalancing:DeleteLoadBalancer", + "elasticloadbalancing:DescribeLoadBalancers", + "elasticloadbalancing:ModifyLoadBalancerAttributes", + "elasticloadbalancing:CreateTargetGroup", + "elasticloadbalancing:DeleteTargetGroup", + "elasticloadbalancing:DescribeTargetGroups", + "elasticloadbalancing:ModifyTargetGroup", + "elasticloadbalancing:CreateListener", + "elasticloadbalancing:DeleteListener", + "elasticloadbalancing:DescribeListeners", + "elasticloadbalancing:DescribeListenerAttributes", + "elasticloadbalancing:ModifyListener", + "elasticloadbalancing:AddTags", + "elasticloadbalancing:RemoveTags", + "elasticloadbalancing:DescribeTags" + ], + "Resource": "*" + }, + { + "Sid": "TerraformCloudFrontPermissions", + "Effect": "Allow", + "Action": [ + "cloudfront:CreateDistribution", + "cloudfront:DeleteDistribution", + "cloudfront:GetDistribution", + 
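The split policy documents are plain JSON files; nothing in this configuration creates or attaches them, so they have to be registered against whatever principal runs Terraform. One possible sketch of doing that in Terraform itself (the deployer role name is an assumption; attaching them via the console or CLI works just as well):

```hcl
# Sketch: register each split policy document and attach it to the deployer principal.
locals {
  terraform_policy_files = [
    "terraform-core-management-policy.json",
    "terraform-infrastructure-management-policy.json",
    "terraform-s3-management-policy.json",
    "terraform-service-management-policy.json",
  ]
}

resource "aws_iam_policy" "terraform_management" {
  for_each = toset(local.terraform_policy_files)

  name   = trimsuffix(each.value, ".json")
  policy = file("${path.module}/${each.value}")
}

resource "aws_iam_role_policy_attachment" "terraform_management" {
  for_each = aws_iam_policy.terraform_management

  role       = "terraform-deployer" # assumed role name; use aws_iam_user_policy_attachment for an IAM user
  policy_arn = each.value.arn
}
```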
"cloudfront:UpdateDistribution", + "cloudfront:ListDistributions", + "cloudfront:CreateOriginAccessControl", + "cloudfront:DeleteOriginAccessControl", + "cloudfront:GetOriginAccessControl", + "cloudfront:UpdateOriginAccessControl", + "cloudfront:ListOriginAccessControls", + "cloudfront:TagResource", + "cloudfront:UntagResource", + "cloudfront:ListTagsForResource" + ], + "Resource": "*" + }, + { + "Sid": "TerraformACMPermissions", + "Effect": "Allow", + "Action": [ + "acm:DescribeCertificate", + "acm:ListCertificates", + "acm:GetCertificate", + "acm:RequestCertificate", + "acm:DeleteCertificate", + "acm:AddTagsToCertificate", + "acm:RemoveTagsFromCertificate", + "acm:ListTagsForCertificate" + ], + "Resource": "*" } ] } \ No newline at end of file diff --git a/terraform/terraform-infrastructure-management-policy.json b/terraform/terraform-infrastructure-management-policy.json index 4d76055c..51375139 100644 --- a/terraform/terraform-infrastructure-management-policy.json +++ b/terraform/terraform-infrastructure-management-policy.json @@ -51,59 +51,6 @@ ], "Resource": "*" }, - { - "Sid": "TerraformELBPermissions", - "Effect": "Allow", - "Action": [ - "elasticloadbalancing:CreateLoadBalancer", - "elasticloadbalancing:DeleteLoadBalancer", - "elasticloadbalancing:DescribeLoadBalancers", - "elasticloadbalancing:DescribeLoadBalancerAttributes", - "elasticloadbalancing:ModifyLoadBalancerAttributes", - "elasticloadbalancing:CreateTargetGroup", - "elasticloadbalancing:DeleteTargetGroup", - "elasticloadbalancing:DescribeTargetGroups", - "elasticloadbalancing:DescribeTargetGroupAttributes", - "elasticloadbalancing:ModifyTargetGroupAttributes", - "elasticloadbalancing:CreateListener", - "elasticloadbalancing:DeleteListener", - "elasticloadbalancing:DescribeListeners", - "elasticloadbalancing:ModifyListener", - "elasticloadbalancing:RegisterTargets", - "elasticloadbalancing:DeregisterTargets", - "elasticloadbalancing:DescribeTargetHealth", - "elasticloadbalancing:AddTags", - "elasticloadbalancing:RemoveTags", - "elasticloadbalancing:DescribeTags" - ], - "Resource": "*" - }, - { - "Sid": "TerraformDocumentDBRDSPermissions", - "Effect": "Allow", - "Action": [ - "rds:CreateDBSubnetGroup", - "rds:DeleteDBSubnetGroup", - "rds:DescribeDBSubnetGroups", - "rds:ModifyDBSubnetGroup", - "rds:AddTagsToResource", - "rds:RemoveTagsFromResource", - "rds:ListTagsForResource", - "rds:CreateDBClusterParameterGroup", - "rds:DeleteDBClusterParameterGroup", - "rds:DescribeDBClusterParameterGroups", - "rds:ModifyDBClusterParameterGroup", - "rds:CreateDBCluster", - "rds:DeleteDBCluster", - "rds:DescribeDBClusters", - "rds:ModifyDBCluster", - "rds:CreateDBInstance", - "rds:DeleteDBInstance", - "rds:DescribeDBInstances", - "rds:ModifyDBInstance" - ], - "Resource": "*" - }, { "Sid": "TerraformElastiCachePermissions", "Effect": "Allow", diff --git a/terraform/terraform-service-management-policy.json b/terraform/terraform-service-management-policy.json index f779a014..6bbe89cc 100644 --- a/terraform/terraform-service-management-policy.json +++ b/terraform/terraform-service-management-policy.json @@ -53,26 +53,6 @@ "ssm:ListTagsForResource" ], "Resource": "*" - }, - { - "Sid": "TerraformCloudFrontPermissions", - "Effect": "Allow", - "Action": [ - "cloudfront:CreateCloudFrontOriginAccessIdentity", - "cloudfront:DeleteCloudFrontOriginAccessIdentity", - "cloudfront:GetCloudFrontOriginAccessIdentity", - "cloudfront:CreateOriginAccessControl", - "cloudfront:DeleteOriginAccessControl", - "cloudfront:GetOriginAccessControl", - 
"cloudfront:CreateDistribution", - "cloudfront:DeleteDistribution", - "cloudfront:GetDistribution", - "cloudfront:UpdateDistribution", - "cloudfront:TagResource", - "cloudfront:UntagResource", - "cloudfront:ListTagsForResource" - ], - "Resource": "*" } ] } \ No newline at end of file From a3f44732cfae7475552b2d81cdbdef5120d8e6cd Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Sat, 24 May 2025 23:02:45 +0200 Subject: [PATCH 08/46] Add additional AWS permissions to Terraform core management policy - Add elasticloadbalancing DescribeLoadBalancerAttributes permission - Add elasticloadbalancing DescribeTargetGroupAttributes permission - Add CloudFront Origin Access Identity management permissions --- terraform/terraform-core-management-policy.json | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/terraform/terraform-core-management-policy.json b/terraform/terraform-core-management-policy.json index fe1dcbf6..754dcef0 100644 --- a/terraform/terraform-core-management-policy.json +++ b/terraform/terraform-core-management-policy.json @@ -112,10 +112,12 @@ "elasticloadbalancing:CreateLoadBalancer", "elasticloadbalancing:DeleteLoadBalancer", "elasticloadbalancing:DescribeLoadBalancers", + "elasticloadbalancing:DescribeLoadBalancerAttributes", "elasticloadbalancing:ModifyLoadBalancerAttributes", "elasticloadbalancing:CreateTargetGroup", "elasticloadbalancing:DeleteTargetGroup", "elasticloadbalancing:DescribeTargetGroups", + "elasticloadbalancing:DescribeTargetGroupAttributes", "elasticloadbalancing:ModifyTargetGroup", "elasticloadbalancing:CreateListener", "elasticloadbalancing:DeleteListener", @@ -142,6 +144,11 @@ "cloudfront:GetOriginAccessControl", "cloudfront:UpdateOriginAccessControl", "cloudfront:ListOriginAccessControls", + "cloudfront:CreateCloudFrontOriginAccessIdentity", + "cloudfront:DeleteCloudFrontOriginAccessIdentity", + "cloudfront:GetCloudFrontOriginAccessIdentity", + "cloudfront:UpdateCloudFrontOriginAccessIdentity", + "cloudfront:ListCloudFrontOriginAccessIdentities", "cloudfront:TagResource", "cloudfront:UntagResource", "cloudfront:ListTagsForResource" From 0ffa8ab710bc86f00efdee3bb568beffe1e4d54e Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Sat, 24 May 2025 23:21:47 +0200 Subject: [PATCH 09/46] Enhance ECS infrastructure with configurable parameters and auto-scaling - Add ECS service-linked role for proper service management - Make container configuration configurable (CPU, memory, port) - Add auto-scaling configuration with CPU and memory targets - Make log retention and scaling parameters configurable - Filter out empty values from secrets configuration - Add module outputs for auto-scaling resources --- .gitignore | 1 + terraform/main.tf | 25 ++++++++++-- terraform/modules/ecs/main.tf | 61 ++++++++++++++++++++++++++---- terraform/modules/ecs/outputs.tf | 15 ++++++++ terraform/modules/ecs/variables.tf | 35 +++++++++++++++++ terraform/terraform.tfvars.example | 2 + terraform/variables.tf | 22 +++++++++++ 7 files changed, 151 insertions(+), 10 deletions(-) diff --git a/.gitignore b/.gitignore index 159f98f2..67154a4c 100644 --- a/.gitignore +++ b/.gitignore @@ -148,3 +148,4 @@ terraform.tfvars terraform/.terraform dev.tfplan plan.out.txt +.cursor/tmp diff --git a/terraform/main.tf b/terraform/main.tf index 57986567..386931d7 100644 --- a/terraform/main.tf +++ b/terraform/main.tf @@ -28,6 +28,12 @@ provider "aws" { } } +# ECS Service-Linked Role (required for ECS services) +resource "aws_iam_service_linked_role" "ecs" { + aws_service_name = "ecs.amazonaws.com" + 
description = "Service-linked role for ECS services" +} + # Data sources data "aws_caller_identity" "current" {} data "aws_region" "current" {} @@ -232,6 +238,17 @@ module "ecs" { # Target group for ALB target_group_arn = module.alb.target_group_arn + # Container configuration + container_cpu = var.container_cpu + container_memory = var.container_memory + container_port = 3000 + log_retention_days = 7 + ecs_desired_count = var.ecs_desired_count + ecs_max_capacity = var.ecs_max_capacity + + # Service-linked role dependency + service_linked_role_arn = aws_iam_service_linked_role.ecs.arn + # Environment variables environment_variables = { NODE_ENV = "production" @@ -244,10 +261,12 @@ module "ecs" { APOS_CDN_ENABLED = "true" } - # Secrets from Parameter Store + # Secrets from Parameter Store - filter out empty values secrets = { - SESSION_SECRET = module.parameter_store.session_secret_arn - SERVICE_ACCOUNT_PRIVATE_KEY = module.parameter_store.gcs_service_account_key_arn + for k, v in { + SESSION_SECRET = module.parameter_store.session_secret_arn + SERVICE_ACCOUNT_PRIVATE_KEY = module.parameter_store.gcs_service_account_key_arn + } : k => v if v != "" } tags = local.common_tags diff --git a/terraform/modules/ecs/main.tf b/terraform/modules/ecs/main.tf index 958f9aa1..79c25ceb 100644 --- a/terraform/modules/ecs/main.tf +++ b/terraform/modules/ecs/main.tf @@ -15,8 +15,8 @@ resource "aws_ecs_task_definition" "main" { family = "${var.name_prefix}-${var.environment}-app" network_mode = "awsvpc" requires_compatibilities = ["FARGATE"] - cpu = "1024" - memory = "2048" + cpu = var.container_cpu + memory = var.container_memory execution_role_arn = var.execution_role_arn task_role_arn = var.task_role_arn @@ -27,7 +27,7 @@ resource "aws_ecs_task_definition" "main" { portMappings = [ { - containerPort = 3000 + containerPort = var.container_port protocol = "tcp" } ] @@ -63,7 +63,7 @@ resource "aws_ecs_task_definition" "main" { # CloudWatch Log Group resource "aws_cloudwatch_log_group" "main" { name = "/ecs/${var.name_prefix}-${var.environment}" - retention_in_days = 7 + retention_in_days = var.log_retention_days tags = var.tags } @@ -76,7 +76,7 @@ resource "aws_ecs_service" "main" { name = "${var.name_prefix}-${var.environment}-service" cluster = aws_ecs_cluster.main.id task_definition = aws_ecs_task_definition.main.arn - desired_count = 1 + desired_count = var.ecs_desired_count launch_type = "FARGATE" network_configuration { @@ -87,10 +87,57 @@ resource "aws_ecs_service" "main" { load_balancer { target_group_arn = var.target_group_arn container_name = "app" - container_port = 3000 + container_port = var.container_port } - depends_on = [var.target_group_arn] + depends_on = [var.target_group_arn, var.service_linked_role_arn] tags = var.tags +} + +# Application Auto Scaling Target +resource "aws_appautoscaling_target" "ecs_target" { + max_capacity = var.ecs_max_capacity + min_capacity = var.ecs_desired_count + resource_id = "service/${aws_ecs_cluster.main.name}/${aws_ecs_service.main.name}" + scalable_dimension = "ecs:service:DesiredCount" + service_namespace = "ecs" + + tags = var.tags +} + +# Auto Scaling Policy - Scale Up +resource "aws_appautoscaling_policy" "scale_up" { + name = "${var.name_prefix}-${var.environment}-scale-up" + policy_type = "TargetTrackingScaling" + resource_id = aws_appautoscaling_target.ecs_target.resource_id + scalable_dimension = aws_appautoscaling_target.ecs_target.scalable_dimension + service_namespace = aws_appautoscaling_target.ecs_target.service_namespace + + 
target_tracking_scaling_policy_configuration { + predefined_metric_specification { + predefined_metric_type = "ECSServiceAverageCPUUtilization" + } + target_value = 70.0 + scale_in_cooldown = 300 + scale_out_cooldown = 300 + } +} + +# Auto Scaling Policy - Memory Based +resource "aws_appautoscaling_policy" "scale_memory" { + name = "${var.name_prefix}-${var.environment}-scale-memory" + policy_type = "TargetTrackingScaling" + resource_id = aws_appautoscaling_target.ecs_target.resource_id + scalable_dimension = aws_appautoscaling_target.ecs_target.scalable_dimension + service_namespace = aws_appautoscaling_target.ecs_target.service_namespace + + target_tracking_scaling_policy_configuration { + predefined_metric_specification { + predefined_metric_type = "ECSServiceAverageMemoryUtilization" + } + target_value = 80.0 + scale_in_cooldown = 300 + scale_out_cooldown = 300 + } } \ No newline at end of file diff --git a/terraform/modules/ecs/outputs.tf b/terraform/modules/ecs/outputs.tf index 48e173e5..109c4fbd 100644 --- a/terraform/modules/ecs/outputs.tf +++ b/terraform/modules/ecs/outputs.tf @@ -11,4 +11,19 @@ output "service_name" { output "cluster_arn" { description = "ECS cluster ARN" value = aws_ecs_cluster.main.arn +} + +output "autoscaling_target_arn" { + description = "Application Auto Scaling target ARN" + value = aws_appautoscaling_target.ecs_target.arn +} + +output "cpu_scaling_policy_arn" { + description = "CPU-based scaling policy ARN" + value = aws_appautoscaling_policy.scale_up.arn +} + +output "memory_scaling_policy_arn" { + description = "Memory-based scaling policy ARN" + value = aws_appautoscaling_policy.scale_memory.arn } \ No newline at end of file diff --git a/terraform/modules/ecs/variables.tf b/terraform/modules/ecs/variables.tf index ce1619bd..01d4d89c 100644 --- a/terraform/modules/ecs/variables.tf +++ b/terraform/modules/ecs/variables.tf @@ -43,6 +43,36 @@ variable "target_group_arn" { type = string } +variable "container_cpu" { + description = "CPU units for the container" + type = number +} + +variable "container_memory" { + description = "Memory in MB for the container" + type = number +} + +variable "container_port" { + description = "Port that the container exposes" + type = number +} + +variable "log_retention_days" { + description = "Number of days to retain CloudWatch logs" + type = number +} + +variable "ecs_desired_count" { + description = "Desired number of ECS tasks" + type = number +} + +variable "ecs_max_capacity" { + description = "Maximum number of ECS tasks for auto-scaling" + type = number +} + variable "environment_variables" { description = "Environment variables for the container" type = map(string) @@ -59,4 +89,9 @@ variable "tags" { description = "Tags to apply to resources" type = map(string) default = {} +} + +variable "service_linked_role_arn" { + description = "ARN of the ECS service-linked role" + type = string } \ No newline at end of file diff --git a/terraform/terraform.tfvars.example b/terraform/terraform.tfvars.example index c3b1ec8e..c1a54434 100644 --- a/terraform/terraform.tfvars.example +++ b/terraform/terraform.tfvars.example @@ -36,6 +36,8 @@ route53_zone_id = "" # Leave empty to skip DNS record creation container_image_tag = "latest" container_cpu = 1024 # 1024 = 1 vCPU container_memory = 2048 # 2048 MB = 2 GB +container_port = 3000 # Application port +log_retention_days = 7 # CloudWatch log retention # Scaling Configuration ecs_desired_count = 1 diff --git a/terraform/variables.tf b/terraform/variables.tf index c5d966bf..2583322e 
100644 --- a/terraform/variables.tf +++ b/terraform/variables.tf @@ -136,6 +136,28 @@ variable "container_memory" { } } +variable "container_port" { + description = "Port that the container exposes" + type = number + default = 3000 + + validation { + condition = var.container_port > 0 && var.container_port <= 65535 + error_message = "Container port must be between 1 and 65535." + } +} + +variable "log_retention_days" { + description = "Number of days to retain CloudWatch logs" + type = number + default = 7 + + validation { + condition = contains([1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, 3653], var.log_retention_days) + error_message = "Log retention must be one of the valid CloudWatch retention periods." + } +} + # Scaling Configuration variable "ecs_desired_count" { description = "Desired number of ECS tasks" From a6bd59aa4287f90e41fe4715886e842bdc1d496b Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Sat, 24 May 2025 23:34:13 +0200 Subject: [PATCH 10/46] Make Terraform configuration more flexible and configurable - Parameterize container port across ALB, ECS, and security group modules - Make health check path configurable in ALB module - Make log retention days configurable in CloudWatch module - Add target_type = ip to ALB target group for ECS Fargate compatibility - Add IAM service linked role permissions to core management policy --- terraform/main.tf | 14 ++++++++++++-- terraform/modules/alb/main.tf | 11 ++++++----- terraform/modules/alb/variables.tf | 12 ++++++++++++ terraform/modules/cloudwatch/main.tf | 2 +- terraform/modules/cloudwatch/variables.tf | 6 ++++++ terraform/modules/security_groups/main.tf | 4 ++-- terraform/modules/security_groups/variables.tf | 6 ++++++ terraform/terraform-core-management-policy.json | 5 ++++- terraform/variables.tf | 11 +++++++++++ 9 files changed, 60 insertions(+), 11 deletions(-) diff --git a/terraform/main.tf b/terraform/main.tf index 386931d7..bbef9788 100644 --- a/terraform/main.tf +++ b/terraform/main.tf @@ -93,6 +93,9 @@ module "security_groups" { vpc_id = module.vpc.vpc_id + # Container configuration + container_port = var.container_port + tags = local.common_tags } @@ -200,6 +203,10 @@ module "alb" { domain_name = var.domain_name certificate_arn = var.certificate_arn + # Container configuration + container_port = var.container_port + health_check_path = var.health_check_path + tags = local.common_tags } @@ -241,8 +248,8 @@ module "ecs" { # Container configuration container_cpu = var.container_cpu container_memory = var.container_memory - container_port = 3000 - log_retention_days = 7 + container_port = var.container_port + log_retention_days = var.log_retention_days ecs_desired_count = var.ecs_desired_count ecs_max_capacity = var.ecs_max_capacity @@ -286,6 +293,9 @@ module "cloudwatch" { documentdb_cluster_id = module.documentdb.cluster_identifier redis_cluster_id = module.redis.cluster_id + # Configuration + log_retention_days = var.log_retention_days + # Slack webhook for notifications slack_webhook_url = var.slack_webhook_url diff --git a/terraform/modules/alb/main.tf b/terraform/modules/alb/main.tf index f20ee80a..cde86068 100644 --- a/terraform/modules/alb/main.tf +++ b/terraform/modules/alb/main.tf @@ -13,17 +13,18 @@ resource "aws_lb" "main" { # Target Group resource "aws_lb_target_group" "main" { - name = "${var.name_prefix}-${var.environment}-tg" - port = 3000 - protocol = "HTTP" - vpc_id = var.vpc_id + name = "${var.name_prefix}-${var.environment}-tg" + port = var.container_port + protocol = "HTTP" + 
vpc_id = var.vpc_id + target_type = "ip" health_check { enabled = true healthy_threshold = 2 interval = 30 matcher = "200" - path = "/health" + path = var.health_check_path port = "traffic-port" protocol = "HTTP" timeout = 5 diff --git a/terraform/modules/alb/variables.tf b/terraform/modules/alb/variables.tf index 7ee742b8..bec2026b 100644 --- a/terraform/modules/alb/variables.tf +++ b/terraform/modules/alb/variables.tf @@ -33,6 +33,18 @@ variable "certificate_arn" { type = string } +variable "container_port" { + description = "Port that the container exposes" + type = number + default = 3000 +} + +variable "health_check_path" { + description = "Path for ALB health checks" + type = string + default = "/health" +} + variable "tags" { description = "Tags to apply to resources" type = map(string) diff --git a/terraform/modules/cloudwatch/main.tf b/terraform/modules/cloudwatch/main.tf index 6641e9f5..46160da7 100644 --- a/terraform/modules/cloudwatch/main.tf +++ b/terraform/modules/cloudwatch/main.tf @@ -1,7 +1,7 @@ # CloudWatch Log Group resource "aws_cloudwatch_log_group" "app_logs" { name = "/aws/ecs/${var.name_prefix}-${var.environment}" - retention_in_days = 7 + retention_in_days = var.log_retention_days tags = var.tags } diff --git a/terraform/modules/cloudwatch/variables.tf b/terraform/modules/cloudwatch/variables.tf index a7497516..795f4284 100644 --- a/terraform/modules/cloudwatch/variables.tf +++ b/terraform/modules/cloudwatch/variables.tf @@ -39,6 +39,12 @@ variable "slack_webhook_url" { default = "" } +variable "log_retention_days" { + description = "Number of days to retain CloudWatch logs" + type = number + default = 7 +} + variable "tags" { description = "Tags to apply to resources" type = map(string) diff --git a/terraform/modules/security_groups/main.tf b/terraform/modules/security_groups/main.tf index d882e52d..19125873 100644 --- a/terraform/modules/security_groups/main.tf +++ b/terraform/modules/security_groups/main.tf @@ -43,8 +43,8 @@ resource "aws_security_group" "ecs" { ingress { description = "HTTP from ALB" - from_port = 3000 - to_port = 3000 + from_port = var.container_port + to_port = var.container_port protocol = "tcp" security_groups = [aws_security_group.alb.id] } diff --git a/terraform/modules/security_groups/variables.tf b/terraform/modules/security_groups/variables.tf index ccec9e16..d501264b 100644 --- a/terraform/modules/security_groups/variables.tf +++ b/terraform/modules/security_groups/variables.tf @@ -15,6 +15,12 @@ variable "vpc_id" { type = string } +variable "container_port" { + description = "Port that the container exposes" + type = number + default = 3000 +} + variable "tags" { description = "Tags to apply to resources" type = map(string) diff --git a/terraform/terraform-core-management-policy.json b/terraform/terraform-core-management-policy.json index 754dcef0..f19c1f98 100644 --- a/terraform/terraform-core-management-policy.json +++ b/terraform/terraform-core-management-policy.json @@ -47,7 +47,10 @@ "iam:ListRoleTags", "iam:TagOpenIDConnectProvider", "iam:UntagOpenIDConnectProvider", - "iam:ListOpenIDConnectProviderTags" + "iam:ListOpenIDConnectProviderTags", + "iam:CreateServiceLinkedRole", + "iam:DeleteServiceLinkedRole", + "iam:GetServiceLinkedRoleDeletionStatus" ], "Resource": "*" }, diff --git a/terraform/variables.tf b/terraform/variables.tf index 2583322e..1a51ecf8 100644 --- a/terraform/variables.tf +++ b/terraform/variables.tf @@ -147,6 +147,17 @@ variable "container_port" { } } +variable "health_check_path" { + description = "Path for 
ALB health checks" + type = string + default = "/health" + + validation { + condition = can(regex("^/", var.health_check_path)) + error_message = "Health check path must start with a forward slash." + } +} + variable "log_retention_days" { description = "Number of days to retain CloudWatch logs" type = number From e807885cf25217d1fd347725599562c4d4fb09a4 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Sun, 25 May 2025 11:50:49 +0200 Subject: [PATCH 11/46] Update Terraform infrastructure configuration and IAM policies - Enhanced ECS module with ElastiCache integration support - Updated infrastructure management policy with ElastiCache permissions - Added comprehensive Redis cluster configuration - Improved security group and networking setup - Added analysis documentation for infrastructure changes --- terraform/main.tf | 9 --------- terraform/modules/alb/main.tf | 14 ++++++++++++++ terraform/modules/ecs/main.tf | 2 +- terraform/modules/ecs/variables.tf | 5 ----- terraform/terraform-core-management-policy.json | 12 +++++++++++- ...terraform-infrastructure-management-policy.json | 2 ++ 6 files changed, 28 insertions(+), 16 deletions(-) diff --git a/terraform/main.tf b/terraform/main.tf index bbef9788..e8474d8a 100644 --- a/terraform/main.tf +++ b/terraform/main.tf @@ -28,12 +28,6 @@ provider "aws" { } } -# ECS Service-Linked Role (required for ECS services) -resource "aws_iam_service_linked_role" "ecs" { - aws_service_name = "ecs.amazonaws.com" - description = "Service-linked role for ECS services" -} - # Data sources data "aws_caller_identity" "current" {} data "aws_region" "current" {} @@ -253,9 +247,6 @@ module "ecs" { ecs_desired_count = var.ecs_desired_count ecs_max_capacity = var.ecs_max_capacity - # Service-linked role dependency - service_linked_role_arn = aws_iam_service_linked_role.ecs.arn - # Environment variables environment_variables = { NODE_ENV = "production" diff --git a/terraform/modules/alb/main.tf b/terraform/modules/alb/main.tf index cde86068..821b2f0e 100644 --- a/terraform/modules/alb/main.tf +++ b/terraform/modules/alb/main.tf @@ -47,6 +47,13 @@ resource "aws_lb_listener" "https" { target_group_arn = aws_lb_target_group.main.arn } + # Ensure this listener is created after target group and destroyed before target group + depends_on = [aws_lb_target_group.main] + + lifecycle { + create_before_destroy = false + } + tags = var.tags } @@ -66,5 +73,12 @@ resource "aws_lb_listener" "http" { } } + # Ensure this listener is destroyed before target group (even though it doesn't reference it) + depends_on = [aws_lb_target_group.main] + + lifecycle { + create_before_destroy = false + } + tags = var.tags } \ No newline at end of file diff --git a/terraform/modules/ecs/main.tf b/terraform/modules/ecs/main.tf index 79c25ceb..a5894247 100644 --- a/terraform/modules/ecs/main.tf +++ b/terraform/modules/ecs/main.tf @@ -90,7 +90,7 @@ resource "aws_ecs_service" "main" { container_port = var.container_port } - depends_on = [var.target_group_arn, var.service_linked_role_arn] + depends_on = [var.target_group_arn] tags = var.tags } diff --git a/terraform/modules/ecs/variables.tf b/terraform/modules/ecs/variables.tf index 01d4d89c..27182087 100644 --- a/terraform/modules/ecs/variables.tf +++ b/terraform/modules/ecs/variables.tf @@ -89,9 +89,4 @@ variable "tags" { description = "Tags to apply to resources" type = map(string) default = {} -} - -variable "service_linked_role_arn" { - description = "ARN of the ECS service-linked role" - type = string } \ No newline at end of file diff --git 
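Dropping the explicit `aws_iam_service_linked_role.ecs` resource is commonly done because the role is a per-account singleton that AWS usually creates on first use of ECS, so managing it in Terraform tends to collide with an existing role. If anything still needs to reference it, it can be looked up rather than managed; a sketch:

```hcl
# Sketch: reference the account's existing ECS service-linked role without managing it.
data "aws_iam_role" "ecs_service_linked" {
  name = "AWSServiceRoleForECS"
}

output "ecs_service_linked_role_arn" {
  value = data.aws_iam_role.ecs_service_linked.arn
}
```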
a/terraform/terraform-core-management-policy.json b/terraform/terraform-core-management-policy.json index f19c1f98..b89a75d7 100644 --- a/terraform/terraform-core-management-policy.json +++ b/terraform/terraform-core-management-policy.json @@ -50,7 +50,8 @@ "iam:ListOpenIDConnectProviderTags", "iam:CreateServiceLinkedRole", "iam:DeleteServiceLinkedRole", - "iam:GetServiceLinkedRoleDeletionStatus" + "iam:GetServiceLinkedRoleDeletionStatus", + "iam:ListInstanceProfilesForRole" ], "Resource": "*" }, @@ -122,11 +123,20 @@ "elasticloadbalancing:DescribeTargetGroups", "elasticloadbalancing:DescribeTargetGroupAttributes", "elasticloadbalancing:ModifyTargetGroup", + "elasticloadbalancing:ModifyTargetGroupAttributes", "elasticloadbalancing:CreateListener", "elasticloadbalancing:DeleteListener", "elasticloadbalancing:DescribeListeners", "elasticloadbalancing:DescribeListenerAttributes", "elasticloadbalancing:ModifyListener", + "elasticloadbalancing:CreateRule", + "elasticloadbalancing:DeleteRule", + "elasticloadbalancing:DescribeRules", + "elasticloadbalancing:ModifyRule", + "elasticloadbalancing:SetRulePriorities", + "elasticloadbalancing:RegisterTargets", + "elasticloadbalancing:DeregisterTargets", + "elasticloadbalancing:DescribeTargetHealth", "elasticloadbalancing:AddTags", "elasticloadbalancing:RemoveTags", "elasticloadbalancing:DescribeTags" diff --git a/terraform/terraform-infrastructure-management-policy.json b/terraform/terraform-infrastructure-management-policy.json index 51375139..86e11d04 100644 --- a/terraform/terraform-infrastructure-management-policy.json +++ b/terraform/terraform-infrastructure-management-policy.json @@ -26,6 +26,7 @@ "ec2:ReleaseAddress", "ec2:DescribeAddresses", "ec2:DescribeAddressesAttribute", + "ec2:DisassociateAddress", "ec2:CreateRouteTable", "ec2:DeleteRouteTable", "ec2:DescribeRouteTables", @@ -45,6 +46,7 @@ "ec2:DeleteFlowLogs", "ec2:DescribeFlowLogs", "ec2:DescribeAvailabilityZones", + "ec2:DescribeNetworkInterfaces", "ec2:CreateTags", "ec2:DeleteTags", "ec2:DescribeTags" From d824a1d47694414b4530e20770feeb6616bd431b Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Sun, 25 May 2025 12:32:57 +0200 Subject: [PATCH 12/46] Update Terraform IAM policies for core and service management --- terraform/terraform-core-management-policy.json | 16 ++++++++++++++-- .../terraform-service-management-policy.json | 17 +++++++++++++++++ 2 files changed, 31 insertions(+), 2 deletions(-) diff --git a/terraform/terraform-core-management-policy.json b/terraform/terraform-core-management-policy.json index b89a75d7..b3e20ab9 100644 --- a/terraform/terraform-core-management-policy.json +++ b/terraform/terraform-core-management-policy.json @@ -56,7 +56,7 @@ "Resource": "*" }, { - "Sid": "TerraformCloudWatchLogsPermissions", + "Sid": "TerraformCloudWatchPermissions", "Effect": "Allow", "Action": [ "logs:CreateLogGroup", @@ -67,7 +67,19 @@ "logs:TagLogGroup", "logs:UntagLogGroup", "logs:ListTagsForResource", - "logs:ListTagsLogGroup" + "logs:ListTagsLogGroup", + "cloudwatch:PutDashboard", + "cloudwatch:GetDashboard", + "cloudwatch:DeleteDashboards", + "cloudwatch:ListDashboards", + "cloudwatch:PutMetricAlarm", + "cloudwatch:DeleteAlarms", + "cloudwatch:DescribeAlarms", + "cloudwatch:EnableAlarmActions", + "cloudwatch:DisableAlarmActions", + "cloudwatch:TagResource", + "cloudwatch:UntagResource", + "cloudwatch:ListTagsForResource" ], "Resource": "*" }, diff --git a/terraform/terraform-service-management-policy.json b/terraform/terraform-service-management-policy.json index 
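The dashboard and alarm permissions added here back the `cloudwatch` module; for illustration, the shape of a metric alarm that `cloudwatch:PutMetricAlarm` lets Terraform manage (names and thresholds below are made up, not copied from `modules/cloudwatch`):

```hcl
# Sketch: a CPU alarm on the ECS service; 2 periods of 5 minutes above 80%.
resource "aws_cloudwatch_metric_alarm" "ecs_cpu_high" {
  alarm_name          = "sf-website-dev-ecs-cpu-high"
  namespace           = "AWS/ECS"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  period              = 300
  evaluation_periods  = 2
  threshold           = 80
  comparison_operator = "GreaterThanThreshold"
  alarm_description   = "Average ECS CPU above 80% for 10 minutes"

  dimensions = {
    ClusterName = "sf-website-dev-cluster"  # illustrative cluster name
    ServiceName = "sf-website-dev-service"  # follows the ECS module's naming
  }
}
```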
6bbe89cc..15eb4a3e 100644 --- a/terraform/terraform-service-management-policy.json +++ b/terraform/terraform-service-management-policy.json @@ -39,6 +39,23 @@ ], "Resource": "*" }, + { + "Sid": "TerraformApplicationAutoScalingPermissions", + "Effect": "Allow", + "Action": [ + "application-autoscaling:RegisterScalableTarget", + "application-autoscaling:DeregisterScalableTarget", + "application-autoscaling:DescribeScalableTargets", + "application-autoscaling:PutScalingPolicy", + "application-autoscaling:DeleteScalingPolicy", + "application-autoscaling:DescribeScalingPolicies", + "application-autoscaling:DescribeScalingActivities", + "application-autoscaling:TagResource", + "application-autoscaling:UntagResource", + "application-autoscaling:ListTagsForResource" + ], + "Resource": "*" + }, { "Sid": "TerraformSSMParameterStorePermissions", "Effect": "Allow", From aa4b96459b43ec033af7910242549dda43f5b7c6 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Sun, 25 May 2025 13:33:21 +0200 Subject: [PATCH 13/46] Replace auto-generated Redis auth token with user-provided variable - Remove random provider dependency from terraform configuration - Remove random_password resource for Redis auth token - Add redis_auth_token variable with validation rules - Update main.tf to use user-provided redis_auth_token variable - Add redis_auth_token example to terraform.tfvars.example --- terraform/main.tf | 14 ++------------ terraform/terraform.tfvars.example | 1 + terraform/variables.tf | 16 ++++++++++++++++ 3 files changed, 19 insertions(+), 12 deletions(-) diff --git a/terraform/main.tf b/terraform/main.tf index e8474d8a..e16da626 100644 --- a/terraform/main.tf +++ b/terraform/main.tf @@ -7,10 +7,6 @@ terraform { source = "hashicorp/aws" version = "~> 5.0" } - random = { - source = "hashicorp/random" - version = "~> 3.4" - } } backend "s3" { @@ -38,12 +34,6 @@ data "aws_region" "current" {} # state = "available" # } -# Random password for Redis auth token -resource "random_password" "redis_auth_token" { - length = 32 - special = true -} - # Local values for common naming locals { name_prefix = "sf-website" @@ -142,7 +132,7 @@ module "parameter_store" { session_secret = var.session_secret # Auto-generated Redis auth token - redis_auth_token = random_password.redis_auth_token.result + redis_auth_token = var.redis_auth_token # Optional Google Cloud Storage key gcs_service_account_key = var.gcs_service_account_key @@ -178,7 +168,7 @@ module "redis" { subnet_ids = module.vpc.private_subnet_ids security_group_ids = [module.security_groups.redis_sg_id] - auth_token = random_password.redis_auth_token.result + auth_token = var.redis_auth_token tags = local.common_tags } diff --git a/terraform/terraform.tfvars.example b/terraform/terraform.tfvars.example index c1a54434..969ee481 100644 --- a/terraform/terraform.tfvars.example +++ b/terraform/terraform.tfvars.example @@ -19,6 +19,7 @@ documentdb_master_password = "your-secure-password-here" # Min 8 characters # Application Secrets (SENSITIVE - Store securely) session_secret = "${SESSION_SECRET_KEY}" +redis_auth_token = "your-secure-redis-token-here" # 16-128 chars, alphanumeric and symbols (no @, ", /) # Optional: Google Cloud Storage Service Account (if using GCS) gcs_service_account_key = "" # Leave empty if not using GCS diff --git a/terraform/variables.tf b/terraform/variables.tf index 1a51ecf8..896c68a2 100644 --- a/terraform/variables.tf +++ b/terraform/variables.tf @@ -80,6 +80,22 @@ variable "session_secret" { } } +variable "redis_auth_token" { + description = 
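If generating the Redis token inside Terraform is still preferred over supplying it by hand, the removed `random_password` approach can be kept compatible with the validation rules that follow by restricting the special characters. A sketch, with the caveats that it needs the `hashicorp/random` provider this patch drops and that ElastiCache rejects `@`, `"` and `/`:

```hcl
# Sketch: a token that satisfies the 16-128 length rule and the allowed character set.
resource "random_password" "redis_auth_token" {
  length           = 32
  special          = true
  override_special = "!#$%&*()-_=+[]{}<>:;.,?~"
}
```

Its `result` attribute would then feed the same places `var.redis_auth_token` is used.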
"Auth token for Redis cluster (alphanumeric and symbols excluding @, \", /)" + type = string + sensitive = true + + validation { + condition = length(var.redis_auth_token) >= 16 && length(var.redis_auth_token) <= 128 + error_message = "Redis auth token must be between 16 and 128 characters long." + } + + validation { + condition = can(regex("^[A-Za-z0-9!#$%&*()\\-_=\\+\\[\\]{}<>:;.,?~]+$", var.redis_auth_token)) + error_message = "Redis auth token can only contain alphanumeric characters and symbols (excluding @, \", and /)." + } +} + variable "gcs_service_account_key" { description = "Google Cloud Storage service account private key (optional)" type = string From 156934d4a30e9be103b5322240434e2b59418690 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Mon, 26 May 2025 17:42:41 +0200 Subject: [PATCH 14/46] Enhance AWS infrastructure configuration and S3 setup - Add SSM access policy for ECS execution role to retrieve parameters - Expand Terraform service management policy with ECR and CloudWatch Logs permissions - Refactor ApostropheCMS uploadfs configuration for AWS S3 integration --- terraform/modules/iam/main.tf | 22 +++++++++++++ .../terraform-service-management-policy.json | 31 ++++++++++++++++- .../modules/@apostrophecms/uploadfs/index.js | 33 ++++++++++++++----- 3 files changed, 76 insertions(+), 10 deletions(-) diff --git a/terraform/modules/iam/main.tf b/terraform/modules/iam/main.tf index 25bc380c..137cc16d 100644 --- a/terraform/modules/iam/main.tf +++ b/terraform/modules/iam/main.tf @@ -24,6 +24,28 @@ resource "aws_iam_role_policy_attachment" "ecs_execution_role_policy" { policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy" } +# SSM access policy for ECS execution role +resource "aws_iam_role_policy" "ecs_execution_ssm_policy" { + name = "${var.name_prefix}-${var.environment}-ecs-execution-ssm-policy" + role = aws_iam_role.ecs_execution_role.id + + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Effect = "Allow" + Action = [ + "ssm:GetParameters", + "ssm:GetParameter" + ] + Resource = [ + "arn:aws:ssm:*:*:parameter/${var.name_prefix}/${var.environment}/*" + ] + } + ] + }) +} + # ECS Task Role resource "aws_iam_role" "ecs_task_role" { name = "${var.name_prefix}-${var.environment}-ecs-task-role" diff --git a/terraform/terraform-service-management-policy.json b/terraform/terraform-service-management-policy.json index 15eb4a3e..be7e53bf 100644 --- a/terraform/terraform-service-management-policy.json +++ b/terraform/terraform-service-management-policy.json @@ -15,7 +15,15 @@ "ecr:PutImageTagMutability", "ecr:TagResource", "ecr:UntagResource", - "ecr:ListTagsForResource" + "ecr:ListTagsForResource", + "ecr:GetAuthorizationToken", + "ecr:BatchCheckLayerAvailability", + "ecr:GetDownloadUrlForLayer", + "ecr:BatchGetImage", + "ecr:InitiateLayerUpload", + "ecr:UploadLayerPart", + "ecr:CompleteLayerUpload", + "ecr:PutImage" ], "Resource": "*" }, @@ -26,6 +34,7 @@ "ecs:CreateCluster", "ecs:DeleteCluster", "ecs:DescribeClusters", + "ecs:ListClusters", "ecs:CreateService", "ecs:DeleteService", "ecs:DescribeServices", @@ -33,6 +42,9 @@ "ecs:RegisterTaskDefinition", "ecs:DeregisterTaskDefinition", "ecs:DescribeTaskDefinition", + "ecs:ListServices", + "ecs:DescribeTasks", + "ecs:ListTasks", "ecs:TagResource", "ecs:UntagResource", "ecs:ListTagsForResource" @@ -70,6 +82,23 @@ "ssm:ListTagsForResource" ], "Resource": "*" + }, + { + "Sid": "TerraformCloudWatchLogsPermissions", + "Effect": "Allow", + "Action": [ + "logs:CreateLogGroup", + 
"logs:DeleteLogGroup", + "logs:DescribeLogGroups", + "logs:DescribeLogStreams", + "logs:GetLogEvents", + "logs:FilterLogEvents", + "logs:PutRetentionPolicy", + "logs:TagResource", + "logs:UntagResource", + "logs:ListTagsForResource" + ], + "Resource": "*" } ] } \ No newline at end of file diff --git a/website/modules/@apostrophecms/uploadfs/index.js b/website/modules/@apostrophecms/uploadfs/index.js index 38898a73..00c8e3c7 100644 --- a/website/modules/@apostrophecms/uploadfs/index.js +++ b/website/modules/@apostrophecms/uploadfs/index.js @@ -1,19 +1,34 @@ const { getEnv } = require('../../../utils/env'); +const s3aws = { + bucket: getEnv('APOS_S3_BUCKET'), + region: getEnv('APOS_S3_REGION'),, + https: true, +}; + +// const s3localstack = { +// bucket: getEnv('APOS_S3_BUCKET'), +// region: getEnv('APOS_S3_REGION'), +// endpoint: getEnv('APOS_S3_ENDPOINT'), +// style: 'path', +// https: false, +// }; + const res = { options: { uploadfs: { storage: 's3', + ...s3aws, // Get your credentials at aws.amazon.com - secret: getEnv('APOS_S3_SECRET'), - key: getEnv('APOS_S3_KEY'), - // Bucket name created on aws.amazon.com - bucket: getEnv('APOS_S3_BUCKET'), - // Region name for endpoint - region: getEnv('APOS_S3_REGION'), - endpoint: getEnv('APOS_S3_ENDPOINT'), - style: getEnv('APOS_S3_STYLE'), - https: getEnv('APOS_S3_HTTPS'), + // secret: getEnv('APOS_S3_SECRET'), + // key: getEnv('APOS_S3_KEY'), + // // Bucket name created on aws.amazon.com + // bucket: getEnv('APOS_S3_BUCKET'), + // // Region name for endpoint + // region: getEnv('APOS_S3_REGION'), + // endpoint: getEnv('APOS_S3_ENDPOINT'), + // style: getEnv('APOS_S3_STYLE'), + // https: getEnv('APOS_S3_HTTPS'), cdn: { enabled: true, url: getEnv('APOS_CDN_URL'), From 37c7f8ad8854f8a4f67546cf9fb90b7685bb8b92 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Mon, 26 May 2025 21:37:32 +0200 Subject: [PATCH 15/46] Add bastion host module for secure SSH access to infrastructure - Add bastion host module with EC2 instance in public subnet - Configure security groups for SSH access from bastion to ECS, DocumentDB, and Redis - Add bastion configuration variables and validation rules - Update main.tf to include bastion module integration - Add bastion outputs for instance details and SSH command - Remove terraform lock file for clean state --- terraform/.terraform.lock.hcl | 45 ----------- terraform/main.tf | 19 +++++ terraform/modules/bastion/main.tf | 75 +++++++++++++++++++ terraform/modules/bastion/outputs.tf | 26 +++++++ terraform/modules/bastion/variables.tf | 43 +++++++++++ terraform/modules/security_groups/main.tf | 39 ++++++++++ .../modules/security_groups/variables.tf | 6 ++ terraform/outputs.tf | 21 ++++++ terraform/terraform.tfvars.example | 8 ++ terraform/variables.tf | 27 +++++++ 10 files changed, 264 insertions(+), 45 deletions(-) delete mode 100644 terraform/.terraform.lock.hcl create mode 100644 terraform/modules/bastion/main.tf create mode 100644 terraform/modules/bastion/outputs.tf create mode 100644 terraform/modules/bastion/variables.tf diff --git a/terraform/.terraform.lock.hcl b/terraform/.terraform.lock.hcl deleted file mode 100644 index 60495490..00000000 --- a/terraform/.terraform.lock.hcl +++ /dev/null @@ -1,45 +0,0 @@ -# This file is maintained automatically by "terraform init". -# Manual edits may be lost in future updates. 
- -provider "registry.terraform.io/hashicorp/aws" { - version = "5.98.0" - constraints = "~> 5.0" - hashes = [ - "h1:neMFK/kP1KT6cTGID+Tkkt8L7PsN9XqwrPDGXVw3WVY=", - "zh:23377bd90204b6203b904f48f53edcae3294eb072d8fc18a4531c0cde531a3a1", - "zh:2e55a6ea14cc43b08cf82d43063e96c5c2f58ee953c2628523d0ee918fe3b609", - "zh:4885a817c16fdaaeddc5031edc9594c1f300db0e5b23be7cd76a473e7dcc7b4f", - "zh:6ca7177ad4e5c9d93dee4be1ac0792b37107df04657fddfe0c976f36abdd18b5", - "zh:78bf8eb0a67bae5dede09666676c7a38c9fb8d1b80a90ba06cf36ae268257d6f", - "zh:874b5a99457a3f88e2915df8773120846b63d820868a8f43082193f3dc84adcb", - "zh:95e1e4cf587cde4537ac9dfee9e94270652c812ab31fce3a431778c053abf354", - "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", - "zh:a75145b58b241d64570803e6565c72467cd664633df32678755b51871f553e50", - "zh:aa31b13d0b0e8432940d6892a48b6268721fa54a02ed62ee42745186ee32f58d", - "zh:ae4565770f76672ce8e96528cbb66afdade1f91383123c079c7fdeafcb3d2877", - "zh:b99f042c45bf6aa69dd73f3f6d9cbe0b495b30442c526e0b3810089c059ba724", - "zh:bbb38e86d926ef101cefafe8fe090c57f2b1356eac9fc5ec81af310c50375897", - "zh:d03c89988ba4a0bd3cfc8659f951183ae7027aa8018a7ca1e53a300944af59cb", - "zh:d179ef28843fe663fc63169291a211898199009f0d3f63f0a6f65349e77727ec", - ] -} - -provider "registry.terraform.io/hashicorp/random" { - version = "3.7.2" - constraints = "~> 3.4" - hashes = [ - "h1:KG4NuIBl1mRWU0KD/BGfCi1YN/j3F7H4YgeeM7iSdNs=", - "zh:14829603a32e4bc4d05062f059e545a91e27ff033756b48afbae6b3c835f508f", - "zh:1527fb07d9fea400d70e9e6eb4a2b918d5060d604749b6f1c361518e7da546dc", - "zh:1e86bcd7ebec85ba336b423ba1db046aeaa3c0e5f921039b3f1a6fc2f978feab", - "zh:24536dec8bde66753f4b4030b8f3ef43c196d69cccbea1c382d01b222478c7a3", - "zh:29f1786486759fad9b0ce4fdfbbfece9343ad47cd50119045075e05afe49d212", - "zh:4d701e978c2dd8604ba1ce962b047607701e65c078cb22e97171513e9e57491f", - "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:7b8434212eef0f8c83f5a90c6d76feaf850f6502b61b53c329e85b3b281cba34", - "zh:ac8a23c212258b7976e1621275e3af7099e7e4a3d4478cf8d5d2a27f3bc3e967", - "zh:b516ca74431f3df4c6cf90ddcdb4042c626e026317a33c53f0b445a3d93b720d", - "zh:dc76e4326aec2490c1600d6871a95e78f9050f9ce427c71707ea412a2f2f1a62", - "zh:eac7b63e86c749c7d48f527671c7aee5b4e26c10be6ad7232d6860167f99dbb0", - ] -} diff --git a/terraform/main.tf b/terraform/main.tf index e16da626..71f3df1d 100644 --- a/terraform/main.tf +++ b/terraform/main.tf @@ -68,6 +68,22 @@ module "vpc" { tags = local.common_tags } +# Bastion Host Module +module "bastion" { + source = "./modules/bastion" + + name_prefix = local.name_prefix + environment = local.environment + + vpc_id = module.vpc.vpc_id + subnet_id = module.vpc.public_subnet_ids[0] + instance_type = var.bastion_instance_type + key_pair_name = var.bastion_key_pair_name + allowed_cidr_blocks = var.bastion_allowed_cidr_blocks + + tags = local.common_tags +} + # Security Groups Module module "security_groups" { source = "./modules/security_groups" @@ -80,6 +96,9 @@ module "security_groups" { # Container configuration container_port = var.container_port + # Bastion security group for SSH access + bastion_security_group_id = module.bastion.security_group_id + tags = local.common_tags } diff --git a/terraform/modules/bastion/main.tf b/terraform/modules/bastion/main.tf new file mode 100644 index 00000000..226b4946 --- /dev/null +++ b/terraform/modules/bastion/main.tf @@ -0,0 +1,75 @@ +# Bastion Host Module + +# Data source for latest Amazon Linux 2023 AMI +data "aws_ami" "amazon_linux" { + 
most_recent = true + owners = ["amazon"] + + filter { + name = "name" + values = ["al2023-ami-*-x86_64"] + } + + filter { + name = "virtualization-type" + values = ["hvm"] + } +} + +# Security Group for Bastion Host +resource "aws_security_group" "bastion" { + name_prefix = "${var.name_prefix}-bastion-${var.environment}" + description = "Security group for bastion host" + vpc_id = var.vpc_id + + # SSH access from allowed CIDR blocks + ingress { + description = "SSH access" + from_port = 22 + to_port = 22 + protocol = "tcp" + cidr_blocks = var.allowed_cidr_blocks + } + + # All outbound traffic + egress { + description = "All outbound traffic" + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + tags = merge(var.tags, { + Name = "${var.name_prefix}-bastion-sg-${var.environment}" + }) +} + +# Bastion Host EC2 Instance +resource "aws_instance" "bastion" { + ami = data.aws_ami.amazon_linux.id + instance_type = var.instance_type + key_name = var.key_pair_name + subnet_id = var.subnet_id + vpc_security_group_ids = [aws_security_group.bastion.id] + associate_public_ip_address = true + + # User data script for basic setup + user_data = base64encode(templatefile("${path.module}/user_data.sh", {})) + + tags = merge(var.tags, { + Name = "${var.name_prefix}-bastion-${var.environment}" + }) +} + +# Elastic IP for Bastion Host +resource "aws_eip" "bastion" { + instance = aws_instance.bastion.id + domain = "vpc" + + tags = merge(var.tags, { + Name = "${var.name_prefix}-bastion-eip-${var.environment}" + }) + + depends_on = [aws_instance.bastion] +} \ No newline at end of file diff --git a/terraform/modules/bastion/outputs.tf b/terraform/modules/bastion/outputs.tf new file mode 100644 index 00000000..72e8015a --- /dev/null +++ b/terraform/modules/bastion/outputs.tf @@ -0,0 +1,26 @@ +# Outputs for Bastion Host Module + +output "instance_id" { + description = "ID of the bastion host instance" + value = aws_instance.bastion.id +} + +output "public_ip" { + description = "Public IP address of the bastion host" + value = aws_eip.bastion.public_ip +} + +output "private_ip" { + description = "Private IP address of the bastion host" + value = aws_instance.bastion.private_ip +} + +output "security_group_id" { + description = "ID of the bastion host security group" + value = aws_security_group.bastion.id +} + +output "ssh_command" { + description = "SSH command to connect to bastion host" + value = "ssh -i /path/to/${var.key_pair_name}.pem ec2-user@${aws_eip.bastion.public_ip}" +} \ No newline at end of file diff --git a/terraform/modules/bastion/variables.tf b/terraform/modules/bastion/variables.tf new file mode 100644 index 00000000..12168155 --- /dev/null +++ b/terraform/modules/bastion/variables.tf @@ -0,0 +1,43 @@ +# Variables for Bastion Host Module + +variable "name_prefix" { + description = "Prefix for resource names" + type = string +} + +variable "environment" { + description = "Environment name" + type = string +} + +variable "vpc_id" { + description = "VPC ID where bastion host will be deployed" + type = string +} + +variable "subnet_id" { + description = "Public subnet ID where bastion host will be deployed" + type = string +} + +variable "instance_type" { + description = "EC2 instance type for bastion host" + type = string + default = "t3.micro" +} + +variable "key_pair_name" { + description = "Name of the EC2 key pair for SSH access" + type = string +} + +variable "allowed_cidr_blocks" { + description = "List of CIDR blocks allowed to SSH to bastion host" + type = 
list(string) +} + +variable "tags" { + description = "Tags to apply to all resources" + type = map(string) + default = {} +} \ No newline at end of file diff --git a/terraform/modules/security_groups/main.tf b/terraform/modules/security_groups/main.tf index 19125873..4e737d40 100644 --- a/terraform/modules/security_groups/main.tf +++ b/terraform/modules/security_groups/main.tf @@ -62,6 +62,19 @@ resource "aws_security_group" "ecs" { }) } +# SSH access from bastion to ECS (optional rule) +resource "aws_security_group_rule" "ecs_ssh_from_bastion" { + count = var.bastion_security_group_id != "" ? 1 : 0 + + type = "ingress" + from_port = 22 + to_port = 22 + protocol = "tcp" + source_security_group_id = var.bastion_security_group_id + security_group_id = aws_security_group.ecs.id + description = "SSH from bastion host" +} + # DocumentDB Security Group resource "aws_security_group" "documentdb" { name = "${var.name_prefix}-documentdb-sg-${var.environment}" @@ -89,6 +102,19 @@ resource "aws_security_group" "documentdb" { }) } +# MongoDB access from bastion to DocumentDB (for debugging) +resource "aws_security_group_rule" "documentdb_from_bastion" { + count = var.bastion_security_group_id != "" ? 1 : 0 + + type = "ingress" + from_port = 27017 + to_port = 27017 + protocol = "tcp" + source_security_group_id = var.bastion_security_group_id + security_group_id = aws_security_group.documentdb.id + description = "MongoDB from bastion host" +} + # Redis Security Group resource "aws_security_group" "redis" { name = "${var.name_prefix}-redis-sg-${var.environment}" @@ -114,4 +140,17 @@ resource "aws_security_group" "redis" { tags = merge(var.tags, { Name = "${var.name_prefix}-redis-sg-${var.environment}" }) +} + +# Redis access from bastion (for debugging) +resource "aws_security_group_rule" "redis_from_bastion" { + count = var.bastion_security_group_id != "" ? 
1 : 0 + + type = "ingress" + from_port = 6379 + to_port = 6379 + protocol = "tcp" + source_security_group_id = var.bastion_security_group_id + security_group_id = aws_security_group.redis.id + description = "Redis from bastion host" } \ No newline at end of file diff --git a/terraform/modules/security_groups/variables.tf b/terraform/modules/security_groups/variables.tf index d501264b..9b3fca30 100644 --- a/terraform/modules/security_groups/variables.tf +++ b/terraform/modules/security_groups/variables.tf @@ -21,6 +21,12 @@ variable "container_port" { default = 3000 } +variable "bastion_security_group_id" { + description = "Security group ID of the bastion host for SSH access" + type = string + default = "" +} + variable "tags" { description = "Tags to apply to resources" type = map(string) diff --git a/terraform/outputs.tf b/terraform/outputs.tf index 6a30b137..5cc40b2d 100644 --- a/terraform/outputs.tf +++ b/terraform/outputs.tf @@ -21,6 +21,27 @@ output "private_subnet_ids" { value = module.vpc.private_subnet_ids } +# Bastion Host Outputs +output "bastion_instance_id" { + description = "ID of the bastion host instance" + value = module.bastion.instance_id +} + +output "bastion_public_ip" { + description = "Public IP address of the bastion host" + value = module.bastion.public_ip +} + +output "bastion_private_ip" { + description = "Private IP address of the bastion host" + value = module.bastion.private_ip +} + +output "bastion_ssh_command" { + description = "SSH command to connect to bastion host" + value = module.bastion.ssh_command +} + # S3 Outputs output "attachments_bucket_name" { description = "Name of the S3 attachments bucket" diff --git a/terraform/terraform.tfvars.example b/terraform/terraform.tfvars.example index 969ee481..be5ae797 100644 --- a/terraform/terraform.tfvars.example +++ b/terraform/terraform.tfvars.example @@ -48,6 +48,14 @@ ecs_max_capacity = 3 documentdb_instance_class = "db.t3.medium" redis_node_type = "cache.t3.micro" # cache.t3.small for production +# Bastion Host Configuration +bastion_instance_type = "t3.micro" +bastion_key_pair_name = "your-ec2-key-pair-name" # Must exist in AWS +bastion_allowed_cidr_blocks = [ + "203.0.113.0/24", # Example: Your office IP range + "198.51.100.0/24" # Example: Your home IP range +] + # Tags default_tags = { Project = "Website" diff --git a/terraform/variables.tf b/terraform/variables.tf index 896c68a2..7138a48a 100644 --- a/terraform/variables.tf +++ b/terraform/variables.tf @@ -219,4 +219,31 @@ variable "redis_node_type" { description = "ElastiCache Redis node type" type = string default = "cache.t3.micro" +} + +# Bastion Host Configuration +variable "bastion_instance_type" { + description = "EC2 instance type for bastion host" + type = string + default = "t3.micro" + + validation { + condition = can(regex("^[tm][0-9]+\\.", var.bastion_instance_type)) + error_message = "Bastion instance type must be a valid EC2 instance type." + } +} + +variable "bastion_key_pair_name" { + description = "Name of the EC2 key pair for SSH access to bastion host" + type = string +} + +variable "bastion_allowed_cidr_blocks" { + description = "List of CIDR blocks allowed to SSH to bastion host" + type = list(string) + + validation { + condition = length(var.bastion_allowed_cidr_blocks) > 0 + error_message = "At least one CIDR block must be specified for bastion access." 
+ } } \ No newline at end of file From b21f76731e3b9aa76b19c0f413e10655ddb62509 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Mon, 26 May 2025 22:10:42 +0200 Subject: [PATCH 16/46] Improve bastion host configuration and IAM permissions - Add inline user data script with security hardening and useful tools - Make key pair optional for Session Manager only access - Add missing EC2 instance management permissions to IAM policy - Add .terraform.lock.hcl to gitignore --- .gitignore | 1 + terraform/modules/bastion/main.tf | 24 ++++++++++++++++++- terraform/modules/bastion/variables.tf | 3 ++- ...form-infrastructure-management-policy.json | 13 +++++++++- 4 files changed, 38 insertions(+), 3 deletions(-) diff --git a/.gitignore b/.gitignore index 67154a4c..f7ccc0b0 100644 --- a/.gitignore +++ b/.gitignore @@ -149,3 +149,4 @@ terraform/.terraform dev.tfplan plan.out.txt .cursor/tmp +.terraform.lock.hcl diff --git a/terraform/modules/bastion/main.tf b/terraform/modules/bastion/main.tf index 226b4946..cf15f3b9 100644 --- a/terraform/modules/bastion/main.tf +++ b/terraform/modules/bastion/main.tf @@ -55,7 +55,29 @@ resource "aws_instance" "bastion" { associate_public_ip_address = true # User data script for basic setup - user_data = base64encode(templatefile("${path.module}/user_data.sh", {})) + user_data = base64encode(<<-EOF + #!/bin/bash + yum update -y + yum install -y aws-cli + + # Start and enable SSM agent for Session Manager access + systemctl start amazon-ssm-agent + systemctl enable amazon-ssm-agent + + # Basic security hardening + sed -i 's/#PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config + systemctl restart sshd + + # Install additional useful tools + yum install -y htop vim wget curl git + + # Configure timezone + timedatectl set-timezone UTC + + # Create a log file to verify user data execution + echo "Bastion host user data script completed at $(date)" > /var/log/user-data.log + EOF + ) tags = merge(var.tags, { Name = "${var.name_prefix}-bastion-${var.environment}" diff --git a/terraform/modules/bastion/variables.tf b/terraform/modules/bastion/variables.tf index 12168155..362f99c3 100644 --- a/terraform/modules/bastion/variables.tf +++ b/terraform/modules/bastion/variables.tf @@ -27,8 +27,9 @@ variable "instance_type" { } variable "key_pair_name" { - description = "Name of the EC2 key pair for SSH access" + description = "Name of the EC2 key pair for SSH access (optional - leave null for Session Manager only)" type = string + default = null } variable "allowed_cidr_blocks" { diff --git a/terraform/terraform-infrastructure-management-policy.json b/terraform/terraform-infrastructure-management-policy.json index 86e11d04..d973e70f 100644 --- a/terraform/terraform-infrastructure-management-policy.json +++ b/terraform/terraform-infrastructure-management-policy.json @@ -27,6 +27,7 @@ "ec2:DescribeAddresses", "ec2:DescribeAddressesAttribute", "ec2:DisassociateAddress", + "ec2:AssociateAddress", "ec2:CreateRouteTable", "ec2:DeleteRouteTable", "ec2:DescribeRouteTables", @@ -49,7 +50,17 @@ "ec2:DescribeNetworkInterfaces", "ec2:CreateTags", "ec2:DeleteTags", - "ec2:DescribeTags" + "ec2:DescribeTags", + "ec2:DescribeImages", + "ec2:RunInstances", + "ec2:TerminateInstances", + "ec2:DescribeInstances", + "ec2:DescribeInstanceAttribute", + "ec2:ModifyInstanceAttribute", + "ec2:StartInstances", + "ec2:StopInstances", + "ec2:RebootInstances", + "ec2:DescribeKeyPairs" ], "Resource": "*" }, From ff1a0d0e035d8393e24c29a474414b4b2255beb9 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: 
Tue, 27 May 2025 08:34:33 +0200 Subject: [PATCH 17/46] Add AWS deployment infrastructure and fix configuration issues - Add GitHub Actions workflow for AWS deployment with Terraform - Add ECS deployment script for Docker image builds and service updates - Update Terraform IAM policy with additional EC2 permissions - Remove bastion instance type validation from Terraform variables - Fix syntax error in uploadfs S3 configuration --- .github/workflows/deploy_to_aws.yml | 553 ++++++++++++++++++ deploy-ecs.sh | 46 ++ ...form-infrastructure-management-policy.json | 4 + terraform/variables.tf | 5 - .../modules/@apostrophecms/uploadfs/index.js | 2 +- 5 files changed, 604 insertions(+), 6 deletions(-) create mode 100644 .github/workflows/deploy_to_aws.yml create mode 100755 deploy-ecs.sh diff --git a/.github/workflows/deploy_to_aws.yml b/.github/workflows/deploy_to_aws.yml new file mode 100644 index 00000000..6aa07f9f --- /dev/null +++ b/.github/workflows/deploy_to_aws.yml @@ -0,0 +1,553 @@ +name: 'Deploy to AWS' + +on: + push: + branches: + - origin/create-terraform-configuration + # - 'releases/**' + + workflow_dispatch: + inputs: + deploy-env: + description: 'Environment to deploy' + required: true + type: choice + options: + - Development + default: Development + deploy-plan-only: + description: 'Plan only' + required: false + type: boolean + default: false + # restore-db: + # description: 'Restore database to original state (Reset database for Development and restore anon dump for Test and Pre-prod)' + # required: false + # type: boolean + # default: false + # clear-opensearch: + # description: 'Clear the custom OpenSearch indexes and templates' + # required: false + # type: boolean + # default: false + +jobs: + init-and-plan: + runs-on: ubuntu-latest + environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'prod/') && 'Production') || (startsWith(github.ref_name, 'staging/') && 'Staging') || 'None' }} + steps: + - name: Get Environment Name for ${{ vars.ENV_NAME }} + id: get_env_name + uses: Entepotenz/change-string-case-action-min-dependencies@v1 + with: + string: ${{ vars.ENV_NAME }} + - name: Checkout repo + uses: actions/checkout@v4 + + - name: Checkout config repository + uses: actions/checkout@v4 + with: + repository: 'speedandfunction/websie-ci-secrets' + path: 'terraform-config' + token: ${{ secrets.PAT }} + + - name: Copy tfvars for ${{ vars.ENV_NAME }} + run: | + cat "terraform-config/environments/${{ env.TFVAR_NAME }}" "deployment/environments/${{env.TFVAR_NAME}}" > "deployment/terraform/${{ env.SETTINGS_BUCKET }}.tfvars" + env: + SETTINGS_BUCKET: oshub-settings-${{ steps.get_env_name.outputs.lowercase }} + TFVAR_NAME: terraform-${{steps.get_env_name.outputs.lowercase}}.tfvars + + - name: Setup Terraform + uses: hashicorp/setup-terraform@v2 + with: + terraform_version: 1.5.0 + terraform_wrapper: false + + + - name: Terraform Plan for ${{ vars.ENV_NAME }} + run: ./deployment/infra plan + env: + AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + AWS_DEFAULT_REGION: "eu-west-1" + SETTINGS_BUCKET: oshub-settings-${{ steps.get_env_name.outputs.lowercase }} + + - name: Copy planfile to S3 bucket for ${{ vars.ENV_NAME }} + run: aws s3 cp "deployment/terraform/${{ env.SETTINGS_BUCKET }}.tfplan" "s3://${{ env.SETTINGS_BUCKET }}/terraform/${{ env.SETTINGS_BUCKET }}.tfplan" + env: + AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ 
secrets.AWS_SECRET_ACCESS_KEY }} + SETTINGS_BUCKET: oshub-settings-${{ steps.get_env_name.outputs.lowercase }} + AWS_DEFAULT_REGION: "eu-west-1" + + # detach-waf-if-needed: + # needs: init-and-plan + # if: ${{ inputs.deploy-plan-only == false }} + # runs-on: ubuntu-latest + # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} + # steps: + # - name: Get Environment Name for ${{ vars.ENV_NAME }} + # id: get_env_name + # uses: Entepotenz/change-string-case-action-min-dependencies@v1 + # with: + # string: ${{ vars.ENV_NAME }} + + # - name: Checkout repo + # uses: actions/checkout@v4 + + # - name: Checkout config repository + # uses: actions/checkout@v4 + # with: + # repository: 'opensupplyhub/ci-deployment' + # path: 'terraform-config' + # token: ${{ secrets.PAT }} + + # - name: Copy tfvars for ${{ vars.ENV_NAME }} + # run: | + # cat "terraform-config/environments/${{ env.TFVAR_NAME }}" "deployment/environments/${{ env.TFVAR_NAME }}" > "deployment/terraform/${{ env.SETTINGS_BUCKET }}.tfvars" + # env: + # SETTINGS_BUCKET: oshub-settings-${{ steps.get_env_name.outputs.lowercase }} + # TFVAR_NAME: terraform-${{ steps.get_env_name.outputs.lowercase }}.tfvars + + # - name: Check whether AWS WAF enabled and find CloudFront distribution id + # run: | + # CLOUDFRONT_DISTRIBUTION_ID=$(./scripts/find_cloudfront_distribution_id.sh "$CLOUDFRONT_DOMAIN") + # waf_enabled=$(grep -E '^waf_enabled\s*=' deployment/environments/${{ env.TFVAR_NAME }} | awk '{print $3}') + + # echo "WAF Enabled: $waf_enabled" + # echo "Distribution ID: $CLOUDFRONT_DISTRIBUTION_ID" + + # if [[ "$waf_enabled" == "false" && -n "$CLOUDFRONT_DISTRIBUTION_ID" ]]; then + # echo "Detaching WAF from CloudFront distribution $CLOUDFRONT_DISTRIBUTION_ID" + # config=$(aws cloudfront get-distribution-config --id "$CLOUDFRONT_DISTRIBUTION_ID") + # etag=$(echo "$config" | jq -r '.ETag') + # dist=$(echo "$config" | jq '.DistributionConfig | .WebACLId = ""') + + # echo "$dist" > updated_config.json + + # aws cloudfront update-distribution \ + # --id "$CLOUDFRONT_DISTRIBUTION_ID" \ + # --if-match "$etag" \ + # --distribution-config file://updated_config.json + # else + # echo "Skipping WAF detachment" + # fi + # env: + # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} + # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + # AWS_DEFAULT_REGION: "eu-west-1" + # TFVAR_NAME: terraform-${{ steps.get_env_name.outputs.lowercase }}.tfvars + # SETTINGS_BUCKET: oshub-settings-${{ steps.get_env_name.outputs.lowercase }} + # CLOUDFRONT_DOMAIN: ${{ vars.CLOUDFRONT_DOMAIN }} + + # apply: + # needs: [init-and-plan, detach-waf-if-needed] + # runs-on: ubuntu-latest + # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} + # if: ${{ inputs.deploy-plan-only == false }} + # steps: + # - name: Get Environment Name for ${{ vars.ENV_NAME }} + # id: get_env_name + # uses: Entepotenz/change-string-case-action-min-dependencies@v1 + # with: + # string: ${{ vars.ENV_NAME }} + + # - name: Setup Terraform + # uses: hashicorp/setup-terraform@v2 + # with: + # terraform_version: 1.5.0 + # terraform_wrapper: false + + # - name: Checkout + # uses: actions/checkout@v4 + + # - name: Get planfile from S3 bucket for ${{ vars.ENV_NAME }} + # run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/terraform/${{ env.SETTINGS_BUCKET }}.tfplan" "deployment/terraform/${{ env.SETTINGS_BUCKET }}.tfplan" 
+ # env: + # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} + # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + # SETTINGS_BUCKET: oshub-settings-${{ steps.get_env_name.outputs.lowercase }} + # AWS_DEFAULT_REGION: "eu-west-1" + + # - name: Terraform Apply for ${{ vars.ENV_NAME }} + # run: | + # ./deployment/infra apply + # env: + # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} + # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + # SETTINGS_BUCKET: oshub-settings-${{ steps.get_env_name.outputs.lowercase }} + # AWS_DEFAULT_REGION: "eu-west-1" + + # build_and_push_react_app: + # needs: apply + # runs-on: ubuntu-latest + # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} + # if: ${{ inputs.deploy-plan-only == false }} + # steps: + # - name: Get Environment Name for ${{ vars.ENV_NAME }} + # id: get_env_name + # uses: Entepotenz/change-string-case-action-min-dependencies@v1 + # with: + # string: ${{ vars.ENV_NAME }} + + # - name: Checkout repo + # uses: actions/checkout@v4 + + # - name: Setup Node.js + # uses: actions/setup-node@v2 + # with: + # node-version: '14' + + # - name: Cache dependencies + # uses: actions/cache@v4 + # with: + # path: | + # src/react/node_modules + # key: ${{ runner.os }}-node-${{ hashFiles('**/yarn.lock') }} + + # - name: Install dependencies + # working-directory: src/react + # run: yarn install + + # - name: Build static assets + # working-directory: src/react + # run: yarn run build + + # - id: project + # uses: Entepotenz/change-string-case-action-min-dependencies@v1 + # with: + # string: ${{ vars.PROJECT }} + + # - name: Move static + # env: + # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} + # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + # FRONTEND_BUCKET: ${{ steps.project.outputs.lowercase }}-${{ steps.get_env_name.outputs.lowercase }}-frontend + # AWS_DEFAULT_REGION: "eu-west-1" + # CLOUDFRONT_DOMAIN: ${{ vars.CLOUDFRONT_DOMAIN }} + # run: | + # CLOUDFRONT_DISTRIBUTION_ID=$(./scripts/find_cloudfront_distribution_id.sh "$CLOUDFRONT_DOMAIN") + # if [ -z "$CLOUDFRONT_DISTRIBUTION_ID" ]; then + # echo "Error: No CloudFront distribution found for domain: $CLOUDFRONT_DOMAIN" + # exit 1 + # fi + # aws s3 sync src/react/build/ s3://$FRONTEND_BUCKET-$AWS_DEFAULT_REGION/ --delete + # aws cloudfront create-invalidation --distribution-id "$CLOUDFRONT_DISTRIBUTION_ID" --paths "/*" + + # build_and_push_docker_image: + # needs: build_and_push_react_app + # runs-on: ubuntu-latest + # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} + # if: ${{ inputs.deploy-plan-only == false }} + # steps: + # - name: Get Environment Name for ${{ vars.ENV_NAME }} + # id: get_env_name + # uses: Entepotenz/change-string-case-action-min-dependencies@v1 + # with: + # string: ${{ vars.ENV_NAME }} + + # - name: Checkout repo + # uses: actions/checkout@v4 + + # - name: Configure AWS credentials for ${{ vars.ENV_NAME }} + # uses: aws-actions/configure-aws-credentials@v1 + # with: + # aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }} + # aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + # aws-region: "eu-west-1" + + # - name: Get GIT_COMMIT + # run: | + # export SHORT_SHA="$(git rev-parse --short HEAD)" + # export GIT_COMMIT_CI="${SHORT_SHA:0:7}" + # echo "GIT_COMMIT=$GIT_COMMIT_CI" >> $GITHUB_ENV + + # - name: Login to Amazon ECR + # 
uses: aws-actions/amazon-ecr-login@v1 + + + # - name: Build and push Kafka Tools Docker image to ECR for ${{ vars.ENV_NAME }} + # uses: docker/build-push-action@v2 + # with: + # context: src/kafka-tools/ + # file: src/kafka-tools/Dockerfile + # push: true + # tags: ${{ vars.ECR_REGISTRY }}/${{ vars.IMAGE_NAME }}-kafka-${{ steps.get_env_name.outputs.lowercase }}:${{ env.GIT_COMMIT }} + + # - name: Build and push Django Docker image to ECR for ${{ vars.ENV_NAME }} + # uses: docker/build-push-action@v2 + # with: + # context: src/django + # file: src/django/Dockerfile + # push: true + # tags: ${{ vars.ECR_REGISTRY }}/${{ vars.IMAGE_NAME }}-${{ steps.get_env_name.outputs.lowercase }}:${{ env.GIT_COMMIT }} + + # - name: Build and push Batch Docker image to ECR for ${{ vars.ENV_NAME }} + # uses: docker/build-push-action@v2 + # with: + # context: src/batch + # file: src/batch/Dockerfile + # push: true + # tags: ${{ vars.ECR_REGISTRY }}/${{ vars.IMAGE_NAME }}-batch-${{ steps.get_env_name.outputs.lowercase }}:${{ env.GIT_COMMIT }} + # build-args: | + # GIT_COMMIT=${{ env.GIT_COMMIT }} + # DOCKER_IMAGE=${{ vars.DOCKER_IMAGE }} + # ENVIRONMENT=${{ steps.get_env_name.outputs.lowercase }} + + # - name: Build and push Dedupe Hub Docker image to ECR for ${{ vars.ENV_NAME }} + # uses: docker/build-push-action@v2 + # with: + # context: src/dedupe-hub/api + # file: src/dedupe-hub/api/Dockerfile + # push: true + # tags: ${{ vars.ECR_REGISTRY }}/${{ vars.IMAGE_NAME }}-deduplicate-${{ steps.get_env_name.outputs.lowercase }}:${{ env.GIT_COMMIT }} + + # - name: Build and push Logstash Docker image to ECR for ${{ vars.ENV_NAME }} + # uses: docker/build-push-action@v2 + # with: + # context: src/logstash + # file: src/logstash/Dockerfile + # push: true + # tags: ${{ vars.ECR_REGISTRY }}/${{ vars.IMAGE_NAME }}-logstash-${{ steps.get_env_name.outputs.lowercase }}:${{ env.GIT_COMMIT }} + + # - name: Build and push Database Anonymizer Docker image to ECR for ${{ vars.ENV_NAME }} + # uses: docker/build-push-action@v2 + # if: ${{ steps.get_env_name.outputs.lowercase == 'production' }} + # with: + # context: deployment/terraform/database_anonymizer_scheduled_task/docker + # file: deployment/terraform/database_anonymizer_scheduled_task/docker/Dockerfile + # push: true + # tags: ${{ vars.ECR_REGISTRY }}/${{ vars.IMAGE_NAME }}-database-anonymizer-${{ steps.get_env_name.outputs.lowercase }}:${{ env.GIT_COMMIT }} + + # - name: Build and push Anonymize Database Dump Docker image to ECR for ${{ vars.ENV_NAME }} + # uses: docker/build-push-action@v2 + # if: ${{ steps.get_env_name.outputs.lowercase == 'test' }} + # with: + # context: deployment/terraform/anonymized_database_dump_scheduled_task/docker + # file: deployment/terraform/anonymized_database_dump_scheduled_task/docker/Dockerfile + # push: true + # tags: ${{ vars.ECR_REGISTRY }}/${{ vars.IMAGE_NAME }}-anonymized-database-dump-${{ steps.get_env_name.outputs.lowercase }}:${{ env.GIT_COMMIT }} + + # create_kafka_topic: + # needs: build_and_push_docker_image + # runs-on: ubuntu-latest + # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} + # if: ${{ inputs.deploy-plan-only == false }} + # steps: + # - name: Get Environment Name for ${{ vars.ENV_NAME }} + # id: get_env_name + # uses: Entepotenz/change-string-case-action-min-dependencies@v1 + # with: + # string: ${{ vars.ENV_NAME }} + # - name: Checkout repo + # uses: actions/checkout@v4 + + # - name: Create or update kafka topics for 
${{ vars.ENV_NAME }} + # run: | + # ./deployment/run_kafka_task ${{ vars.ENV_NAME }} + # env: + # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} + # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + # AWS_DEFAULT_REGION: "eu-west-1" + + # stop_logstash: + # needs: create_kafka_topic + # runs-on: ubuntu-latest + # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} + # if: ${{ inputs.deploy-plan-only == false }} + # steps: + # - name: Get Environment Name for ${{ vars.ENV_NAME }} + # id: get_env_name + # uses: Entepotenz/change-string-case-action-min-dependencies@v1 + # with: + # string: ${{ vars.ENV_NAME }} + # - name: Checkout repo + # uses: actions/checkout@v4 + # - name: Stop Logstash for ${{ vars.ENV_NAME }} + # if: ${{ inputs.restore-db == true || inputs.clear-opensearch == true}} + # env: + # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} + # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + # AWS_DEFAULT_REGION: "eu-west-1" + # run: | + # aws \ + # ecs update-service --desired-count 0 --cluster=ecsOpenSupplyHub${{vars.ENV_NAME}}Cluster \ + # --service=OpenSupplyHub${{vars.ENV_NAME}}AppLogstash + + # restore_database: + # needs: stop_logstash + # runs-on: self-hosted + # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} + # if: ${{ inputs.deploy-plan-only == false }} + # steps: + # - name: Get Environment Name for ${{ vars.ENV_NAME }} + # id: get_env_name + # uses: Entepotenz/change-string-case-action-min-dependencies@v1 + # with: + # string: ${{ vars.ENV_NAME }} + # - name: Checkout repo + # uses: actions/checkout@v4 + # - name: Restore database for ${{ vars.ENV_NAME }} + # if: ${{ (vars.ENV_NAME == 'Preprod' || vars.ENV_NAME == 'Test') && inputs.restore-db == true}} + # run: | + # cd ./src/anon-tools + # mkdir -p ./keys + # echo "${{ secrets.KEY_FILE }}" > ./keys/key + # docker build -t restore -f Dockerfile.restore . + # docker run -v ./keys/key:/keys/key --shm-size=2gb --rm \ + # -e AWS_ACCESS_KEY_ID=${{ secrets.AWS_ACCESS_KEY_ID }} \ + # -e AWS_SECRET_ACCESS_KEY=${{ secrets.AWS_SECRET_ACCESS_KEY }} \ + # -e AWS_DEFAULT_REGION=eu-west-1 \ + # -e ENVIRONMENT=${{ vars.ENV_NAME }} \ + # -e DATABASE_NAME=opensupplyhub \ + # -e DATABASE_USERNAME=opensupplyhub \ + # -e DATABASE_PASSWORD=${{ secrets.DATABASE_PASSWORD }} \ + # restore + # - name: Reset database for ${{ vars.ENV_NAME }} + # if: ${{ vars.ENV_NAME == 'Development' && inputs.restore-db == true}} + # run: | + # echo "Creating an S3 folder with production location lists to reset DB if it doesn't exist." + # aws s3api put-object --bucket ${{ env.LIST_S3_BUCKET }} --key ${{ env.RESET_LISTS_FOLDER }} + # echo "Coping all production location files from the repo to S3 to ensure consistency between local and environment resets." + # aws s3 cp ./src/django/${{ env.RESET_LISTS_FOLDER }} s3://${{ env.LIST_S3_BUCKET }}/${{ env.RESET_LISTS_FOLDER }} --recursive + # echo "Triggering reset commands." 
+ # ./deployment/run_cli_task ${{ vars.ENV_NAME }} "reset_database" + # ./deployment/run_cli_task ${{ vars.ENV_NAME }} "matchfixtures" + # env: + # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} + # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + # AWS_DEFAULT_REGION: "eu-west-1" + # LIST_S3_BUCKET: opensupplyhub-${{ steps.get_env_name.outputs.lowercase }}-files-eu-west-1 + # RESET_LISTS_FOLDER: "api/fixtures/list_files/" + + # update_services: + # needs: restore_database + # runs-on: ubuntu-latest + # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} + # if: ${{ inputs.deploy-plan-only == false }} + # steps: + # - name: Get Environment Name for ${{ vars.ENV_NAME }} + # id: get_env_name + # uses: Entepotenz/change-string-case-action-min-dependencies@v1 + # with: + # string: ${{ vars.ENV_NAME }} + # - name: Update ECS Django Service with new Image for ${{ vars.ENV_NAME }} + # run: | + # aws ecs update-service --cluster ${{ vars.CLUSTER }} --service ${{ vars.SERVICE_NAME }} --force-new-deployment --region ${{env.AWS_DEFAULT_REGION}} + # env: + # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} + # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + # AWS_DEFAULT_REGION: "eu-west-1" + # - name: Update ECS Dedupe Hub Service with new Image for ${{ vars.ENV_NAME }} + # run: | + # aws ecs update-service --cluster ${{ vars.CLUSTER }} --service ${{ vars.SERVICE_NAME }}DD --force-new-deployment --region ${{env.AWS_DEFAULT_REGION}} + # env: + # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} + # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + # AWS_DEFAULT_REGION: "eu-west-1" + + # post_deploy: + # needs: update_services + # runs-on: ubuntu-latest + # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} + # if: ${{ inputs.deploy-plan-only == false }} + # steps: + # - name: Get Environment Name for ${{ vars.ENV_NAME }} + # id: get_env_name + # uses: Entepotenz/change-string-case-action-min-dependencies@v1 + # with: + # string: ${{ vars.ENV_NAME }} + # - name: Checkout repo + # uses: actions/checkout@v4 + # - name: Run migrations and other post-deployment tasks for ${{ vars.ENV_NAME }} + # run: | + # ./deployment/run_cli_task ${{ vars.ENV_NAME }} "post_deployment" + # env: + # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} + # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + # AWS_DEFAULT_REGION: "eu-west-1" + + # clear_opensearch: + # needs: post_deploy + # runs-on: ubuntu-latest + # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} + # if: ${{ inputs.deploy-plan-only == false }} + # steps: + # - name: Get Environment Name for ${{ vars.ENV_NAME }} + # id: get_env_name + # uses: Entepotenz/change-string-case-action-min-dependencies@v1 + # with: + # string: ${{ vars.ENV_NAME }} + # - name: Checkout repo + # uses: actions/checkout@v4 + # - name: Get OpenSearch domain, filesystem and access point IDs for ${{ vars.ENV_NAME }} + # if: ${{ inputs.restore-db == true || inputs.clear-opensearch == true}} + # id: export_variables + # env: + # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} + # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + # AWS_DEFAULT_REGION: "eu-west-1" + # run: | + # OS_DOMAIN_NAME=$(echo "${{ vars.ENV_NAME }}-os-domain" | tr 
'[:upper:]' '[:lower:]') + # OPENSEARCH_DOMAIN=$(aws \ + # es describe-elasticsearch-domains --domain-names $OS_DOMAIN_NAME \ + # --query "DomainStatusList[].Endpoints.vpc" --output text) + # EFS_ID=$(aws \ + # efs describe-file-systems \ + # --query "FileSystems[?Tags[?Key=='Environment' && Value=='${{ vars.ENV_NAME }}']].FileSystemId" \ + # --output text) + # EFS_AP_ID=$(aws \ + # efs describe-access-points \ + # --query "AccessPoints[?FileSystemId=='$EFS_ID'].AccessPointId" \ + # --output text) + # echo "EFS_ID=$EFS_ID" >> $GITHUB_OUTPUT + # echo "EFS_AP_ID=$EFS_AP_ID" >> $GITHUB_OUTPUT + # echo "OPENSEARCH_DOMAIN=$OPENSEARCH_DOMAIN" >> $GITHUB_OUTPUT + # - name: Clear the custom OpenSearch indexes and templates for ${{ vars.ENV_NAME }} + # if: ${{ inputs.restore-db == true || inputs.clear-opensearch == true}} + # env: + # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} + # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + # OPENSEARCH_DOMAIN: ${{ steps.export_variables.outputs.OPENSEARCH_DOMAIN }} + # EFS_AP_ID: ${{ steps.export_variables.outputs.EFS_AP_ID }} + # EFS_ID: ${{ steps.export_variables.outputs.EFS_ID }} + # BASTION_IP: ${{ vars.BASTION_IP }} + # run: | + # cd ./deployment/clear_opensearch + # mkdir -p script + # mkdir -p ssh + # echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ssh/config + # printf "%s\n" "${{ secrets.SSH_PRIVATE_KEY }}" > ssh/id_rsa + # echo "" >> ssh/id_rsa + # echo -n ${{ vars.BASTION_IP }} > script/.env + # envsubst < clear_opensearch.sh.tpl > script/clear_opensearch.sh + # envsubst < run.sh.tpl > script/run.sh + # docker run --rm \ + # -v ./script:/script \ + # -v ./ssh:/root/.ssh \ + # kroniak/ssh-client bash /script/run.sh + + # start_logstash: + # needs: clear_opensearch + # runs-on: ubuntu-latest + # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} + # if: ${{ inputs.deploy-plan-only == false }} + # steps: + # - name: Get Environment Name for ${{ vars.ENV_NAME }} + # id: get_env_name + # uses: Entepotenz/change-string-case-action-min-dependencies@v1 + # with: + # string: ${{ vars.ENV_NAME }} + # - name: Checkout repo + # uses: actions/checkout@v4 + # - name: Start Logstash for ${{ vars.ENV_NAME }} + # if: ${{ inputs.restore-db == true || inputs.clear-opensearch == true}} + # env: + # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} + # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + # AWS_DEFAULT_REGION: "eu-west-1" + # run: | + # aws \ + # ecs update-service --desired-count 1 --cluster=ecsOpenSupplyHub${{vars.ENV_NAME}}Cluster \ + # --service=OpenSupplyHub${{vars.ENV_NAME}}AppLogstash diff --git a/deploy-ecs.sh b/deploy-ecs.sh new file mode 100755 index 00000000..4f37b312 --- /dev/null +++ b/deploy-ecs.sh @@ -0,0 +1,46 @@ +#!/bin/bash + +# ECS Deployment Script +# This script rebuilds the Docker image, pushes it to ECR, and restarts the ECS service + +set -e # Exit on any error + +# Configuration +AWS_PROFILE="tf-sf-website" +AWS_REGION="us-east-1" +ECR_REPOSITORY="695912022152.dkr.ecr.us-east-1.amazonaws.com/sf-website-dev" +ECS_CLUSTER="sf-website-dev-cluster" +ECS_SERVICE="sf-website-dev-service" +IMAGE_TAG="latest" + +echo "🚀 Starting ECS deployment process..." + +# Step 1: Login to ECR +echo "📝 Logging into ECR..." 
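+# Note: `aws ecr get-login-password` prints a temporary authorization token; piping it to
+# `docker login --password-stdin` keeps the credential out of shell history and process arguments.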
+AWS_PROFILE=$AWS_PROFILE aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REPOSITORY + +# Step 2: Build the Docker image +echo "🔨 Building Docker image for linux/amd64 platform..." +docker build --platform linux/amd64 -t $ECR_REPOSITORY:$IMAGE_TAG . + +# Step 3: Push the image to ECR +echo "📤 Pushing image to ECR..." +docker push $ECR_REPOSITORY:$IMAGE_TAG + +# Step 4: Force ECS service to restart with new image +echo "🔄 Restarting ECS service..." +AWS_PROFILE=$AWS_PROFILE aws ecs update-service \ + --cluster $ECS_CLUSTER \ + --service $ECS_SERVICE \ + --force-new-deployment \ + --region $AWS_REGION + +# Step 5: Wait for deployment to complete +echo "⏳ Waiting for deployment to complete..." +AWS_PROFILE=$AWS_PROFILE aws ecs wait services-stable \ + --cluster $ECS_CLUSTER \ + --services $ECS_SERVICE \ + --region $AWS_REGION + +echo "✅ Deployment completed successfully!" +echo "🌐 Your application should be available at: https://sf-website-dev.sandbox-prettyclear.com" \ No newline at end of file diff --git a/terraform/terraform-infrastructure-management-policy.json b/terraform/terraform-infrastructure-management-policy.json index d973e70f..ae57aeb3 100644 --- a/terraform/terraform-infrastructure-management-policy.json +++ b/terraform/terraform-infrastructure-management-policy.json @@ -39,6 +39,7 @@ "ec2:CreateSecurityGroup", "ec2:DeleteSecurityGroup", "ec2:DescribeSecurityGroups", + "ec2:DescribeSecurityGroupRules", "ec2:AuthorizeSecurityGroupIngress", "ec2:AuthorizeSecurityGroupEgress", "ec2:RevokeSecurityGroupIngress", @@ -52,6 +53,9 @@ "ec2:DeleteTags", "ec2:DescribeTags", "ec2:DescribeImages", + "ec2:DescribeInstanceTypes", + "ec2:DescribeInstanceCreditSpecifications", + "ec2:DescribeVolumes", "ec2:RunInstances", "ec2:TerminateInstances", "ec2:DescribeInstances", diff --git a/terraform/variables.tf b/terraform/variables.tf index 7138a48a..2aaaac0f 100644 --- a/terraform/variables.tf +++ b/terraform/variables.tf @@ -226,11 +226,6 @@ variable "bastion_instance_type" { description = "EC2 instance type for bastion host" type = string default = "t3.micro" - - validation { - condition = can(regex("^[tm][0-9]+\\.", var.bastion_instance_type)) - error_message = "Bastion instance type must be a valid EC2 instance type." 
- } } variable "bastion_key_pair_name" { diff --git a/website/modules/@apostrophecms/uploadfs/index.js b/website/modules/@apostrophecms/uploadfs/index.js index 00c8e3c7..0ae9cdc5 100644 --- a/website/modules/@apostrophecms/uploadfs/index.js +++ b/website/modules/@apostrophecms/uploadfs/index.js @@ -2,7 +2,7 @@ const { getEnv } = require('../../../utils/env'); const s3aws = { bucket: getEnv('APOS_S3_BUCKET'), - region: getEnv('APOS_S3_REGION'),, + region: getEnv('APOS_S3_REGION'), https: true, }; From 9f1d99d58e39a4ff735dff8b0532bb899aa52b5f Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Tue, 27 May 2025 15:48:45 +0200 Subject: [PATCH 18/46] Add Terraform deployment automation and release tracking - Add build_assets.sh script for generating APOS_RELEASE_ID - Add terraform-deploy.sh script for automated deployment process - Add APOS_RELEASE_ID environment variable to ECS container configuration - Add apos_release_id variable to Terraform configuration - Update terraform.tfvars.example with release ID configuration --- scripts/build_assets.sh | 3 +++ terraform-deploy.sh | 37 ++++++++++++++++++++++++++++++ terraform/main.tf | 1 + terraform/terraform.tfvars.example | 1 + terraform/variables.tf | 5 ++++ 5 files changed, 47 insertions(+) create mode 100755 scripts/build_assets.sh create mode 100755 terraform-deploy.sh diff --git a/scripts/build_assets.sh b/scripts/build_assets.sh new file mode 100755 index 00000000..ed698fb6 --- /dev/null +++ b/scripts/build_assets.sh @@ -0,0 +1,3 @@ +export APOS_RELEASE_ID=`cat /dev/urandom |env LC_CTYPE=C tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1` + +echo $APOS_RELEASE_ID > ./release-id \ No newline at end of file diff --git a/terraform-deploy.sh b/terraform-deploy.sh new file mode 100755 index 00000000..eb7c18ef --- /dev/null +++ b/terraform-deploy.sh @@ -0,0 +1,37 @@ +#!/bin/bash + +# Terraform Deployment Script +# This script runs terraform plan and terraform apply for the SF Website infrastructure +# Run this script from the project root directory + +set -e # Exit on any error + +# Configuration from existing setup +AWS_PROFILE="tf-sf-website" +AWS_REGION="us-east-1" +TERRAFORM_DIR="terraform" + +echo "🚀 Starting Terraform deployment process..." + +# Change to terraform directory +echo "📁 Changing to terraform directory..." +cd $TERRAFORM_DIR + +# Step 1: Initialize Terraform (if needed) +echo "🔧 Initializing Terraform..." +AWS_PROFILE=$AWS_PROFILE terraform init + +# Step 2: Run Terraform Plan +echo "📋 Running Terraform Plan..." +AWS_PROFILE=$AWS_PROFILE terraform plan -out=tfplan + +# Step 3: Apply the changes automatically +echo "🚀 Applying Terraform changes..." +AWS_PROFILE=$AWS_PROFILE terraform apply tfplan + +echo "✅ Terraform deployment completed successfully!" + +# Clean up plan file +rm -f tfplan + +echo "🎉 All done!" 
\ No newline at end of file diff --git a/terraform/main.tf b/terraform/main.tf index 71f3df1d..3ad13535 100644 --- a/terraform/main.tf +++ b/terraform/main.tf @@ -266,6 +266,7 @@ module "ecs" { APOS_S3_REGION = var.aws_region APOS_CDN_URL = "https://${var.media_domain_name}" APOS_CDN_ENABLED = "true" + APOS_RELEASE_ID = var.apos_release_id } # Secrets from Parameter Store - filter out empty values diff --git a/terraform/terraform.tfvars.example b/terraform/terraform.tfvars.example index be5ae797..bd0dac2b 100644 --- a/terraform/terraform.tfvars.example +++ b/terraform/terraform.tfvars.example @@ -35,6 +35,7 @@ route53_zone_id = "" # Leave empty to skip DNS record creation # Container Configuration container_image_tag = "latest" +apos_release_id = "v1.0.0" # Release ID for deployment tracking (required) container_cpu = 1024 # 1024 = 1 vCPU container_memory = 2048 # 2048 MB = 2 GB container_port = 3000 # Application port diff --git a/terraform/variables.tf b/terraform/variables.tf index 2aaaac0f..edb325e5 100644 --- a/terraform/variables.tf +++ b/terraform/variables.tf @@ -130,6 +130,11 @@ variable "container_image_tag" { default = "latest" } +variable "apos_release_id" { + description = "Release ID for the application deployment" + type = string +} + variable "container_cpu" { description = "CPU units for the container (1024 = 1 vCPU)" type = number From e72605e209d301277e95e48336d312e4509d5f66 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Tue, 27 May 2025 16:22:32 +0200 Subject: [PATCH 19/46] feat: refactor DocumentDB connection to use separate environment variables - Update Terraform to pass individual DocumentDB connection parameters - Add constructMongoDbUri function in app.js to build URI from env vars - Include proper SSL and authentication configuration for DocumentDB - Add logging for successful MongoDB URI construction --- terraform/main.tf | 6 +++++- website/app.js | 28 ++++++++++++++++++++++++++++ 2 files changed, 33 insertions(+), 1 deletion(-) diff --git a/terraform/main.tf b/terraform/main.tf index 3ad13535..cb2df1e1 100644 --- a/terraform/main.tf +++ b/terraform/main.tf @@ -259,7 +259,9 @@ module "ecs" { # Environment variables environment_variables = { NODE_ENV = "production" - APOS_MONGODB_URI = "mongodb://${module.documentdb.cluster_endpoint}:27017/apostrophe" + DOCUMENTDB_HOST = module.documentdb.cluster_endpoint + DOCUMENTDB_PORT = "27017" + DOCUMENTDB_DATABASE = "apostrophe" REDIS_URI = "redis://${module.redis.cluster_endpoint}:6379" BASE_URL = "https://${var.domain_name}" APOS_S3_BUCKET = module.s3.attachments_bucket_id @@ -274,6 +276,8 @@ module "ecs" { for k, v in { SESSION_SECRET = module.parameter_store.session_secret_arn SERVICE_ACCOUNT_PRIVATE_KEY = module.parameter_store.gcs_service_account_key_arn + DOCUMENTDB_USERNAME = module.parameter_store.documentdb_username_arn + DOCUMENTDB_PASSWORD = module.parameter_store.documentdb_password_arn } : k => v if v != "" } diff --git a/website/app.js b/website/app.js index e152ad0f..07df09f8 100644 --- a/website/app.js +++ b/website/app.js @@ -2,6 +2,27 @@ const apostrophe = require('apostrophe'); require('dotenv').config({ path: '../.env' }); const { getEnv } = require('./utils/env'); +// Construct MongoDB URI from environment variables +// This must happen before Apostrophe initialization +function constructMongoDbUri() { + const mongoUsername = getEnv('DOCUMENTDB_USERNAME'); + const mongoPassword = getEnv('DOCUMENTDB_PASSWORD'); + const mongoHost = getEnv('DOCUMENTDB_HOST'); + const mongoPort = 
getEnv('DOCUMENTDB_PORT'); + const mongoDatabase = getEnv('DOCUMENTDB_DATABASE'); + + // Construct the full MongoDB URI with authentication and SSL + const mongoUri = `mongodb://${encodeURIComponent(mongoUsername)}:${encodeURIComponent(mongoPassword)}@${mongoHost}:${mongoPort}/${mongoDatabase}?ssl=true&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false`; + + // Log success (using simple logging since Apostrophe isn't initialized yet) + process.stdout.write('✅ MongoDB URI constructed successfully\n'); + process.stdout.write(` Host: ${mongoHost}:${mongoPort}\n`); + process.stdout.write(` Database: ${mongoDatabase}\n`); + process.stdout.write(` Username: ${mongoUsername}\n`); + + return mongoUri; +} + function createAposConfig() { return { shortName: 'apostrophe-site', @@ -9,6 +30,13 @@ function createAposConfig() { // Session configuration modules: { + // Database configuration with direct URI + '@apostrophecms/db': { + options: { + uri: constructMongoDbUri() + } + }, + // Core modules configuration '@apostrophecms/express': { options: { From 6d4c9744a02617f98523d581ce1a0b2a7f2e7b71 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Tue, 27 May 2025 18:29:30 +0200 Subject: [PATCH 20/46] Improve DocumentDB connection configuration for AWS compatibility - Update MongoDB URI to use TLS instead of SSL for DocumentDB compatibility - Add comprehensive connection options for DocumentDB timeouts and pooling - Add TLS validation settings for AWS DocumentDB requirements - Include additional logging for TLS configuration status --- website/app.js | 26 ++++++++++++++++++++++---- 1 file changed, 22 insertions(+), 4 deletions(-) diff --git a/website/app.js b/website/app.js index 07df09f8..9af47b90 100644 --- a/website/app.js +++ b/website/app.js @@ -11,14 +11,16 @@ function constructMongoDbUri() { const mongoPort = getEnv('DOCUMENTDB_PORT'); const mongoDatabase = getEnv('DOCUMENTDB_DATABASE'); - // Construct the full MongoDB URI with authentication and SSL - const mongoUri = `mongodb://${encodeURIComponent(mongoUsername)}:${encodeURIComponent(mongoPassword)}@${mongoHost}:${mongoPort}/${mongoDatabase}?ssl=true&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false`; + // AWS DocumentDB requires TLS/SSL encryption + // Build connection string with proper DocumentDB parameters + const mongoUri = `mongodb://${encodeURIComponent(mongoUsername)}:${encodeURIComponent(mongoPassword)}@${mongoHost}:${mongoPort}/${mongoDatabase}?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false&tls=true&tlsAllowInvalidHostnames=true&tlsAllowInvalidCertificates=true`; // Log success (using simple logging since Apostrophe isn't initialized yet) process.stdout.write('✅ MongoDB URI constructed successfully\n'); process.stdout.write(` Host: ${mongoHost}:${mongoPort}\n`); process.stdout.write(` Database: ${mongoDatabase}\n`); process.stdout.write(` Username: ${mongoUsername}\n`); + process.stdout.write(` TLS: enabled with relaxed validation\n`); return mongoUri; } @@ -33,8 +35,24 @@ function createAposConfig() { // Database configuration with direct URI '@apostrophecms/db': { options: { - uri: constructMongoDbUri() - } + uri: constructMongoDbUri(), + // Additional MongoDB connection options for DocumentDB + connect: { + useNewUrlParser: true, + useUnifiedTopology: true, + serverSelectionTimeoutMS: 30000, + connectTimeoutMS: 30000, + socketTimeoutMS: 30000, + maxPoolSize: 10, + minPoolSize: 1, + maxIdleTimeMS: 30000, + waitQueueTimeoutMS: 30000, + heartbeatFrequencyMS: 10000, + // Retry settings 
for DocumentDB + retryWrites: false, + retryReads: true, + }, + }, }, // Core modules configuration From 28517476617b1591628f9d7e4a8d784a3c0239f6 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Tue, 27 May 2025 20:01:46 +0200 Subject: [PATCH 21/46] Update infrastructure configuration and deployment workflow - Update .gitignore to exclude terraform state files and sensitive data - Modify terraform core management policy for improved permissions - Update Dockerfile for optimized container builds - Configure DocumentDB module with enhanced security settings - Update ApostropheCMS app.js with improved configuration - Configure uploadfs module for better file handling --- .gitignore | 1 + Dockerfile | 3 ++ terraform/modules/documentdb/main.tf | 40 +++++++++++++++---- .../terraform-core-management-policy.json | 15 +------ website/app.js | 31 +++++--------- .../modules/@apostrophecms/uploadfs/index.js | 30 +++++++------- 6 files changed, 63 insertions(+), 57 deletions(-) diff --git a/.gitignore b/.gitignore index f7ccc0b0..8dcbe04e 100644 --- a/.gitignore +++ b/.gitignore @@ -150,3 +150,4 @@ dev.tfplan plan.out.txt .cursor/tmp .terraform.lock.hcl +*.pem diff --git a/Dockerfile b/Dockerfile index 641c114a..c6f18b61 100644 --- a/Dockerfile +++ b/Dockerfile @@ -5,6 +5,9 @@ WORKDIR /app # Install dependencies needed for health checks with pinned version RUN apk add --no-cache wget=1.25.0-r0 + + RUN wget https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem + RUN chmod 600 global-bundle.pem # Create a non-root user and group RUN addgroup -S appgroup && adduser -S appuser -G appgroup diff --git a/terraform/modules/documentdb/main.tf b/terraform/modules/documentdb/main.tf index e8c9c50e..04e594e2 100644 --- a/terraform/modules/documentdb/main.tf +++ b/terraform/modules/documentdb/main.tf @@ -6,16 +6,40 @@ resource "aws_docdb_subnet_group" "main" { tags = var.tags } +# DocumentDB Parameter Group +resource "aws_docdb_cluster_parameter_group" "main" { + family = "docdb5.0" + name = "${var.name_prefix}-${var.environment}-docdb-parameter-group" + description = "DocumentDB cluster parameter group for ${var.name_prefix}-${var.environment}" + + parameter { + name = "tls" + value = "enabled" + } + + tags = var.tags +} + # DocumentDB Cluster resource "aws_docdb_cluster" "main" { - cluster_identifier = "${var.name_prefix}-${var.environment}-docdb-cluster" - engine = "docdb" - master_username = var.master_username - master_password = var.master_password - db_subnet_group_name = aws_docdb_subnet_group.main.name - vpc_security_group_ids = var.security_group_ids - storage_encrypted = true - skip_final_snapshot = true + cluster_identifier = "${var.name_prefix}-${var.environment}-docdb-cluster" + engine = "docdb" + engine_version = "5.0.0" + master_username = var.master_username + master_password = var.master_password + db_cluster_parameter_group_name = aws_docdb_cluster_parameter_group.main.name + db_subnet_group_name = aws_docdb_subnet_group.main.name + vpc_security_group_ids = var.security_group_ids + storage_encrypted = true + skip_final_snapshot = true + + # Enable audit logging for security compliance + enabled_cloudwatch_logs_exports = ["audit"] + + # Backup configuration + backup_retention_period = 7 + preferred_backup_window = "03:00-04:00" + preferred_maintenance_window = "sun:04:00-sun:05:00" tags = var.tags } diff --git a/terraform/terraform-core-management-policy.json b/terraform/terraform-core-management-policy.json index b3e20ab9..d34084b1 100644 --- 
a/terraform/terraform-core-management-policy.json +++ b/terraform/terraform-core-management-policy.json @@ -1,20 +1,6 @@ { "Version": "2012-10-17", "Statement": [ - { - "Sid": "TerraformBackendDynamoDBPermissions", - "Effect": "Allow", - "Action": [ - "dynamodb:CreateTable", - "dynamodb:DeleteTable", - "dynamodb:DescribeTable", - "dynamodb:GetItem", - "dynamodb:PutItem", - "dynamodb:DeleteItem", - "dynamodb:UpdateItem" - ], - "Resource": "arn:aws:dynamodb:us-east-1:695912022152:table/sf-website-terraform-locks" - }, { "Sid": "TerraformIAMPermissions", "Effect": "Allow", @@ -110,6 +96,7 @@ "rds:CreateDBClusterParameterGroup", "rds:DeleteDBClusterParameterGroup", "rds:DescribeDBClusterParameterGroups", + "rds:DescribeDBClusterParameters", "rds:ModifyDBClusterParameterGroup", "rds:CreateDBInstance", "rds:DeleteDBInstance", diff --git a/website/app.js b/website/app.js index 9af47b90..8ffe64f5 100644 --- a/website/app.js +++ b/website/app.js @@ -2,8 +2,10 @@ const apostrophe = require('apostrophe'); require('dotenv').config({ path: '../.env' }); const { getEnv } = require('./utils/env'); -// Construct MongoDB URI from environment variables -// This must happen before Apostrophe initialization +/* + * Construct MongoDB URI from environment variables + * This must happen before Apostrophe initialization + */ function constructMongoDbUri() { const mongoUsername = getEnv('DOCUMENTDB_USERNAME'); const mongoPassword = getEnv('DOCUMENTDB_PASSWORD'); @@ -11,16 +13,18 @@ function constructMongoDbUri() { const mongoPort = getEnv('DOCUMENTDB_PORT'); const mongoDatabase = getEnv('DOCUMENTDB_DATABASE'); - // AWS DocumentDB requires TLS/SSL encryption - // Build connection string with proper DocumentDB parameters - const mongoUri = `mongodb://${encodeURIComponent(mongoUsername)}:${encodeURIComponent(mongoPassword)}@${mongoHost}:${mongoPort}/${mongoDatabase}?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false&tls=true&tlsAllowInvalidHostnames=true&tlsAllowInvalidCertificates=true`; + /* + * AWS DocumentDB requires TLS/SSL encryption and SCRAM-SHA-1 authentication + * Build connection string with proper DocumentDB parameters + */ + const mongoUri = `mongodb://${encodeURIComponent(mongoUsername)}:${encodeURIComponent(mongoPassword)}@${mongoHost}:${mongoPort}/${mongoDatabase}?tls=true&tlsCAFile=global-bundle.pem&retryWrites=false`; // Log success (using simple logging since Apostrophe isn't initialized yet) process.stdout.write('✅ MongoDB URI constructed successfully\n'); process.stdout.write(` Host: ${mongoHost}:${mongoPort}\n`); process.stdout.write(` Database: ${mongoDatabase}\n`); process.stdout.write(` Username: ${mongoUsername}\n`); - process.stdout.write(` TLS: enabled with relaxed validation\n`); + process.stdout.write(` TLS: enabled with CA file validation\n`); return mongoUri; } @@ -37,21 +41,6 @@ function createAposConfig() { options: { uri: constructMongoDbUri(), // Additional MongoDB connection options for DocumentDB - connect: { - useNewUrlParser: true, - useUnifiedTopology: true, - serverSelectionTimeoutMS: 30000, - connectTimeoutMS: 30000, - socketTimeoutMS: 30000, - maxPoolSize: 10, - minPoolSize: 1, - maxIdleTimeMS: 30000, - waitQueueTimeoutMS: 30000, - heartbeatFrequencyMS: 10000, - // Retry settings for DocumentDB - retryWrites: false, - retryReads: true, - }, }, }, diff --git a/website/modules/@apostrophecms/uploadfs/index.js b/website/modules/@apostrophecms/uploadfs/index.js index 0ae9cdc5..56350c14 100644 --- a/website/modules/@apostrophecms/uploadfs/index.js +++ 
b/website/modules/@apostrophecms/uploadfs/index.js @@ -6,13 +6,15 @@ const s3aws = { https: true, }; -// const s3localstack = { -// bucket: getEnv('APOS_S3_BUCKET'), -// region: getEnv('APOS_S3_REGION'), -// endpoint: getEnv('APOS_S3_ENDPOINT'), -// style: 'path', -// https: false, -// }; +/* + * Const s3localstack = { + * bucket: getEnv('APOS_S3_BUCKET'), + * region: getEnv('APOS_S3_REGION'), + * endpoint: getEnv('APOS_S3_ENDPOINT'), + * style: 'path', + * https: false, + * }; + */ const res = { options: { @@ -20,15 +22,15 @@ const res = { storage: 's3', ...s3aws, // Get your credentials at aws.amazon.com - // secret: getEnv('APOS_S3_SECRET'), - // key: getEnv('APOS_S3_KEY'), + // Secret: getEnv('APOS_S3_SECRET'), + // Key: getEnv('APOS_S3_KEY'), // // Bucket name created on aws.amazon.com - // bucket: getEnv('APOS_S3_BUCKET'), + // Bucket: getEnv('APOS_S3_BUCKET'), // // Region name for endpoint - // region: getEnv('APOS_S3_REGION'), - // endpoint: getEnv('APOS_S3_ENDPOINT'), - // style: getEnv('APOS_S3_STYLE'), - // https: getEnv('APOS_S3_HTTPS'), + // Region: getEnv('APOS_S3_REGION'), + // Endpoint: getEnv('APOS_S3_ENDPOINT'), + // Style: getEnv('APOS_S3_STYLE'), + // Https: getEnv('APOS_S3_HTTPS'), cdn: { enabled: true, url: getEnv('APOS_CDN_URL'), From 5c8bc9e8d924cd4fc40edfc7fa84b2960e47a71d Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Tue, 27 May 2025 21:19:03 +0200 Subject: [PATCH 22/46] Standardize environment naming and enhance Terraform configuration - Rename 'prod' environment to 'production' throughout configuration - Update backend configuration files for consistent naming - Add comprehensive variable validation rules - Enhance security group module with proper variable definitions - Update deployment workflow for production environment - Add detailed documentation and examples - Improve .gitignore for Terraform files --- .github/workflows/deploy_to_aws.yml | 4 +- .gitignore | 2 +- terraform/README.md | 32 +++++++-------- ...ackend-dev.hcl => backend-development.hcl} | 2 +- terraform/backend-prod.hcl | 5 --- terraform/backend-production.hcl | 5 +++ terraform/init-aws-for-terraform.sh | 2 +- terraform/main.tf | 5 +-- terraform/modules/security_groups/main.tf | 39 ------------------- .../modules/security_groups/variables.tf | 6 --- terraform/terraform.tfvars.example | 6 +-- terraform/variables.tf | 6 +-- 12 files changed, 33 insertions(+), 81 deletions(-) rename terraform/{backend-dev.hcl => backend-development.hcl} (71%) delete mode 100644 terraform/backend-prod.hcl create mode 100644 terraform/backend-production.hcl diff --git a/.github/workflows/deploy_to_aws.yml b/.github/workflows/deploy_to_aws.yml index 6aa07f9f..06de7340 100644 --- a/.github/workflows/deploy_to_aws.yml +++ b/.github/workflows/deploy_to_aws.yml @@ -34,7 +34,7 @@ on: jobs: init-and-plan: runs-on: ubuntu-latest - environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'prod/') && 'Production') || (startsWith(github.ref_name, 'staging/') && 'Staging') || 'None' }} + environment: Development steps: - name: Get Environment Name for ${{ vars.ENV_NAME }} id: get_env_name @@ -47,7 +47,7 @@ jobs: - name: Checkout config repository uses: actions/checkout@v4 with: - repository: 'speedandfunction/websie-ci-secrets' + repository: 'speedandfunction/website-ci-secrets' path: 'terraform-config' token: ${{ secrets.PAT }} diff --git a/.gitignore b/.gitignore index 8dcbe04e..2f15629d 100644 --- a/.gitignore +++ b/.gitignore @@ -146,7 +146,7 @@ dump.archive 
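Stepping back to the DocumentDB changes in the two patches above: before relying on the CA bundle from inside app.js, it can help to confirm the TLS path to the cluster works at all. The following is a minimal shell sketch, assuming the same DOCUMENTDB_* variables that app.js reads are exported in the shell and that the cluster listens on the default port 27017; it is illustrative only and not part of the patches.

```bash
# Fetch the same RDS/DocumentDB CA bundle the Dockerfile downloads
wget -q https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem

# 1. Confirm the TLS handshake against the cluster endpoint
openssl s_client -connect "${DOCUMENTDB_HOST}:${DOCUMENTDB_PORT:-27017}" \
  -CAfile global-bundle.pem </dev/null

# 2. Connect with mongosh using the same query parameters app.js builds into its URI
#    (credentials containing special characters must be URL-encoded, as app.js does)
mongosh "mongodb://${DOCUMENTDB_USERNAME}:${DOCUMENTDB_PASSWORD}@${DOCUMENTDB_HOST}:${DOCUMENTDB_PORT:-27017}/${DOCUMENTDB_DATABASE}?tls=true&tlsCAFile=global-bundle.pem&retryWrites=false" \
  --eval 'db.runCommand({ ping: 1 })'
```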
aposUsersSafe.json terraform.tfvars terraform/.terraform -dev.tfplan +*.tfplan plan.out.txt .cursor/tmp .terraform.lock.hcl diff --git a/terraform/README.md b/terraform/README.md index 4b3e123c..4e112849 100644 --- a/terraform/README.md +++ b/terraform/README.md @@ -41,13 +41,13 @@ Edit `terraform.tfvars` with your specific values: Create S3 buckets and DynamoDB table for state management: ```bash # Create S3 buckets for each environment -aws s3 mb s3://sf-website-terraform-state-dev +aws s3 mb s3://sf-website-terraform-state-development aws s3 mb s3://sf-website-terraform-state-staging -aws s3 mb s3://sf-website-terraform-state-prod +aws s3 mb s3://sf-website-terraform-state-production # Enable versioning aws s3api put-bucket-versioning \ - --bucket sf-website-terraform-state-dev \ + --bucket sf-website-terraform-state-development \ --versioning-configuration Status=Enabled # Create DynamoDB table for state locking @@ -61,7 +61,7 @@ aws dynamodb create-table \ ### 4. **Initialize and Deploy** ```bash # Initialize with backend -terraform init -backend-config=backend-dev.hcl +terraform init -backend-config=backend-development.hcl # Plan deployment terraform plan @@ -76,16 +76,16 @@ Deploy different environments using different backend configurations: ```bash # Development -terraform init -backend-config=backend-dev.hcl -terraform apply -var-file=dev.tfvars +terraform init -backend-config=backend-development.hcl +terraform apply -var-file=development.tfvars # Staging terraform init -backend-config=backend-staging.hcl terraform apply -var-file=staging.tfvars # Production -terraform init -backend-config=backend-prod.hcl -terraform apply -var-file=prod.tfvars +terraform init -backend-config=backend-production.hcl +terraform apply -var-file=production.tfvars ``` ## 📁 **File Structure** @@ -138,8 +138,8 @@ sf-website-{resource-type}-{environment} ``` Examples: -- `sf-website-vpc-dev` -- `sf-website-ecs-cluster-prod` +- `sf-website-vpc-development` +- `sf-website-ecs-cluster-production` - `sf-website-documentdb-staging` ## 📊 **Monitoring & Alerts** @@ -186,7 +186,7 @@ jobs: uses: hashicorp/setup-terraform@v3 - name: Terraform Init - run: terraform init -backend-config=backend-prod.hcl + run: terraform init -backend-config=backend-production.hcl - name: Terraform Apply run: terraform apply -auto-approve @@ -202,7 +202,7 @@ terraform fmt -recursive terraform validate # Plan with specific var file -terraform plan -var-file=prod.tfvars +terraform plan -var-file=production.tfvars # Show current state terraform show @@ -221,9 +221,9 @@ terraform destroy ### **Environment-Specific Variables** Create separate tfvars files for each environment: -- `dev.tfvars` +- `development.tfvars` - `staging.tfvars` -- `prod.tfvars` +- `production.tfvars` ### **Scaling Configuration** Adjust container resources and auto-scaling: @@ -253,7 +253,7 @@ redis_node_type = "cache.r6g.large" 1. **Backend bucket doesn't exist** ```bash - aws s3 mb s3://sf-website-terraform-state-dev + aws s3 mb s3://sf-website-terraform-state-development ``` 2. **Certificate ARN invalid** @@ -285,7 +285,7 @@ redis_node_type = "cache.r6g.large" ## 🔄 **Updates and Maintenance** 1. **Provider Updates**: Regularly update AWS provider version -2. **Module Updates**: Test module changes in dev first +2. **Module Updates**: Test module changes in development first 3. **State Backup**: S3 versioning provides automatic backups 4. 
**Security Updates**: Monitor AWS security bulletins diff --git a/terraform/backend-dev.hcl b/terraform/backend-development.hcl similarity index 71% rename from terraform/backend-dev.hcl rename to terraform/backend-development.hcl index 3cc7b14c..7fdc94a0 100644 --- a/terraform/backend-dev.hcl +++ b/terraform/backend-development.hcl @@ -1,5 +1,5 @@ # Backend configuration for Development environment bucket = "sf-website-infrastructure" -key = "terraform/terraform-dev.tfstate" +key = "terraform/terraform-development.tfstate" region = "us-east-1" encrypt = true \ No newline at end of file diff --git a/terraform/backend-prod.hcl b/terraform/backend-prod.hcl deleted file mode 100644 index 07d7aee6..00000000 --- a/terraform/backend-prod.hcl +++ /dev/null @@ -1,5 +0,0 @@ -# Backend configuration for Prod environment -bucket = "sf-website-infrastructure" -key = "terraform/terraform-prod.tfstate" -region = "us-east-1" -encrypt = true \ No newline at end of file diff --git a/terraform/backend-production.hcl b/terraform/backend-production.hcl new file mode 100644 index 00000000..64bf2d47 --- /dev/null +++ b/terraform/backend-production.hcl @@ -0,0 +1,5 @@ +# Backend configuration for Production environment +bucket = "sf-website-infrastructure" +key = "terraform/terraform-production.tfstate" +region = "us-east-1" +encrypt = true \ No newline at end of file diff --git a/terraform/init-aws-for-terraform.sh b/terraform/init-aws-for-terraform.sh index a50064e1..35d37df6 100755 --- a/terraform/init-aws-for-terraform.sh +++ b/terraform/init-aws-for-terraform.sh @@ -5,7 +5,7 @@ set -e -# Configuration from backend-dev.hcl +# Configuration from backend-development.hcl BUCKET_NAME="${BUCKET_NAME:-sf-website-infrastructure}" AWS_REGION="${AWS_REGION:-us-east-1}" diff --git a/terraform/main.tf b/terraform/main.tf index cb2df1e1..f1e6c3cb 100644 --- a/terraform/main.tf +++ b/terraform/main.tf @@ -11,7 +11,7 @@ terraform { backend "s3" { # Backend configuration will be provided via backend.hcl files - # Example: terraform init -backend-config=backend-dev.hcl + # Example: terraform init -backend-config=backend-development.hcl } } @@ -96,9 +96,6 @@ module "security_groups" { # Container configuration container_port = var.container_port - # Bastion security group for SSH access - bastion_security_group_id = module.bastion.security_group_id - tags = local.common_tags } diff --git a/terraform/modules/security_groups/main.tf b/terraform/modules/security_groups/main.tf index 4e737d40..19125873 100644 --- a/terraform/modules/security_groups/main.tf +++ b/terraform/modules/security_groups/main.tf @@ -62,19 +62,6 @@ resource "aws_security_group" "ecs" { }) } -# SSH access from bastion to ECS (optional rule) -resource "aws_security_group_rule" "ecs_ssh_from_bastion" { - count = var.bastion_security_group_id != "" ? 1 : 0 - - type = "ingress" - from_port = 22 - to_port = 22 - protocol = "tcp" - source_security_group_id = var.bastion_security_group_id - security_group_id = aws_security_group.ecs.id - description = "SSH from bastion host" -} - # DocumentDB Security Group resource "aws_security_group" "documentdb" { name = "${var.name_prefix}-documentdb-sg-${var.environment}" @@ -102,19 +89,6 @@ resource "aws_security_group" "documentdb" { }) } -# MongoDB access from bastion to DocumentDB (for debugging) -resource "aws_security_group_rule" "documentdb_from_bastion" { - count = var.bastion_security_group_id != "" ? 
1 : 0 - - type = "ingress" - from_port = 27017 - to_port = 27017 - protocol = "tcp" - source_security_group_id = var.bastion_security_group_id - security_group_id = aws_security_group.documentdb.id - description = "MongoDB from bastion host" -} - # Redis Security Group resource "aws_security_group" "redis" { name = "${var.name_prefix}-redis-sg-${var.environment}" @@ -140,17 +114,4 @@ resource "aws_security_group" "redis" { tags = merge(var.tags, { Name = "${var.name_prefix}-redis-sg-${var.environment}" }) -} - -# Redis access from bastion (for debugging) -resource "aws_security_group_rule" "redis_from_bastion" { - count = var.bastion_security_group_id != "" ? 1 : 0 - - type = "ingress" - from_port = 6379 - to_port = 6379 - protocol = "tcp" - source_security_group_id = var.bastion_security_group_id - security_group_id = aws_security_group.redis.id - description = "Redis from bastion host" } \ No newline at end of file diff --git a/terraform/modules/security_groups/variables.tf b/terraform/modules/security_groups/variables.tf index 9b3fca30..d501264b 100644 --- a/terraform/modules/security_groups/variables.tf +++ b/terraform/modules/security_groups/variables.tf @@ -21,12 +21,6 @@ variable "container_port" { default = 3000 } -variable "bastion_security_group_id" { - description = "Security group ID of the bastion host for SSH access" - type = string - default = "" -} - variable "tags" { description = "Tags to apply to resources" type = map(string) diff --git a/terraform/terraform.tfvars.example b/terraform/terraform.tfvars.example index bd0dac2b..883d8879 100644 --- a/terraform/terraform.tfvars.example +++ b/terraform/terraform.tfvars.example @@ -3,14 +3,14 @@ # General Configuration aws_region = "us-east-1" -environment = "dev" # Can be any name: dev, staging, prod, qa, demo, sandbox, etc. +environment = "development" # Can be any name: development, staging, production, qa, demo, sandbox, etc. 
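Because the prod-to-production rename touches file names, variable defaults, and documentation at once, a quick local check for leftovers is cheap insurance. This is only a suggested sketch against the pre-move terraform/ layout used at this point in the series:

```bash
# Look for stale short environment names that the rename may have missed
grep -rnE '(^|[^a-z])(dev|prod)([^a-z]|$)' terraform/ \
  --include='*.tf' --include='*.tfvars' --include='*.hcl' || echo 'no stale names found'

# Re-initialise each environment against its renamed backend file
terraform -chdir=terraform init -reconfigure -backend-config=backend-development.hcl
terraform -chdir=terraform plan -var-file=development.tfvars
```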
# Networking vpc_cidr = "10.0.0.0/16" # Choose unique CIDR for each environment to avoid conflicts # Domain Configuration -domain_name = "sf-website-dev.sandbox-prettyclear.com" -media_domain_name = "sf-website-media-dev.sandbox-prettyclear.com" +domain_name = "sf-website-development.sandbox-prettyclear.com" +media_domain_name = "sf-website-media-development.sandbox-prettyclear.com" certificate_arn = "arn:aws:acm:us-east-1:548271326349:certificate/7e11016f-f90e-4800-972d-622bf1a82948" # Database Configuration (SENSITIVE - Store securely) diff --git a/terraform/variables.tf b/terraform/variables.tf index edb325e5..accc1624 100644 --- a/terraform/variables.tf +++ b/terraform/variables.tf @@ -8,7 +8,7 @@ variable "aws_region" { } variable "environment" { - description = "Environment name (e.g., dev, staging, prod, qa, demo, sandbox, etc.)" + description = "Environment name (e.g., development, staging, production, qa, demo, sandbox, etc.)" type = string } @@ -36,12 +36,12 @@ variable "vpc_cidr" { # Domain Configuration variable "domain_name" { - description = "Domain name for the application (e.g., sf-website-dev.sandbox-prettyclear.com)" + description = "Domain name for the application (e.g., sf-website-development.sandbox-prettyclear.com)" type = string } variable "media_domain_name" { - description = "Domain name for media CDN (e.g., sf-website-media-dev.sandbox-prettyclear.com)" + description = "Domain name for media CDN (e.g., sf-website-media-development.sandbox-prettyclear.com)" type = string } From 7e5da24a4b646300237d2cc128139f96edc917ac Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Tue, 27 May 2025 21:21:31 +0200 Subject: [PATCH 23/46] Restructure infrastructure configuration and update CI/CD workflow - Rename terraform directory to deployment for better organization - Update GitHub workflow to use GITHUB_TOKEN instead of PAT - Update gitignore to reflect new deployment directory structure --- .github/workflows/deploy_to_aws.yml | 2 +- .gitignore | 2 +- {terraform => deployment}/README.md | 0 {terraform => deployment}/backend-development.hcl | 0 {terraform => deployment}/backend-production.hcl | 0 {terraform => deployment}/backend-staging.hcl | 0 {terraform => deployment}/init-aws-for-terraform.sh | 0 {terraform => deployment}/main.tf | 0 {terraform => deployment}/modules/alb/main.tf | 0 {terraform => deployment}/modules/alb/outputs.tf | 0 {terraform => deployment}/modules/alb/variables.tf | 0 {terraform => deployment}/modules/bastion/main.tf | 0 {terraform => deployment}/modules/bastion/outputs.tf | 0 {terraform => deployment}/modules/bastion/variables.tf | 0 {terraform => deployment}/modules/cloudfront/main.tf | 0 {terraform => deployment}/modules/cloudfront/outputs.tf | 0 {terraform => deployment}/modules/cloudfront/variables.tf | 0 {terraform => deployment}/modules/cloudwatch/main.tf | 0 {terraform => deployment}/modules/cloudwatch/outputs.tf | 0 {terraform => deployment}/modules/cloudwatch/variables.tf | 0 {terraform => deployment}/modules/documentdb/main.tf | 0 {terraform => deployment}/modules/documentdb/outputs.tf | 0 {terraform => deployment}/modules/documentdb/variables.tf | 0 {terraform => deployment}/modules/ecr/main.tf | 0 {terraform => deployment}/modules/ecr/outputs.tf | 0 {terraform => deployment}/modules/ecr/variables.tf | 0 {terraform => deployment}/modules/ecs/main.tf | 0 {terraform => deployment}/modules/ecs/outputs.tf | 0 {terraform => deployment}/modules/ecs/variables.tf | 0 {terraform => deployment}/modules/iam/main.tf | 0 {terraform => 
deployment}/modules/iam/outputs.tf | 0 {terraform => deployment}/modules/iam/variables.tf | 0 {terraform => deployment}/modules/parameter_store/main.tf | 0 {terraform => deployment}/modules/parameter_store/outputs.tf | 0 {terraform => deployment}/modules/parameter_store/variables.tf | 0 {terraform => deployment}/modules/redis/main.tf | 0 {terraform => deployment}/modules/redis/outputs.tf | 0 {terraform => deployment}/modules/redis/variables.tf | 0 {terraform => deployment}/modules/s3/main.tf | 0 {terraform => deployment}/modules/s3/outputs.tf | 0 {terraform => deployment}/modules/s3/variables.tf | 0 {terraform => deployment}/modules/security_groups/main.tf | 0 {terraform => deployment}/modules/security_groups/outputs.tf | 0 {terraform => deployment}/modules/security_groups/variables.tf | 0 {terraform => deployment}/modules/vpc/main.tf | 0 {terraform => deployment}/modules/vpc/outputs.tf | 0 {terraform => deployment}/modules/vpc/variables.tf | 0 {terraform => deployment}/outputs.tf | 0 {terraform => deployment}/terraform-core-management-policy.json | 0 .../terraform-infrastructure-management-policy.json | 0 {terraform => deployment}/terraform-s3-management-policy.json | 0 .../terraform-service-management-policy.json | 0 {terraform => deployment}/terraform.tfvars.example | 0 {terraform => deployment}/variables.tf | 0 54 files changed, 2 insertions(+), 2 deletions(-) rename {terraform => deployment}/README.md (100%) rename {terraform => deployment}/backend-development.hcl (100%) rename {terraform => deployment}/backend-production.hcl (100%) rename {terraform => deployment}/backend-staging.hcl (100%) rename {terraform => deployment}/init-aws-for-terraform.sh (100%) rename {terraform => deployment}/main.tf (100%) rename {terraform => deployment}/modules/alb/main.tf (100%) rename {terraform => deployment}/modules/alb/outputs.tf (100%) rename {terraform => deployment}/modules/alb/variables.tf (100%) rename {terraform => deployment}/modules/bastion/main.tf (100%) rename {terraform => deployment}/modules/bastion/outputs.tf (100%) rename {terraform => deployment}/modules/bastion/variables.tf (100%) rename {terraform => deployment}/modules/cloudfront/main.tf (100%) rename {terraform => deployment}/modules/cloudfront/outputs.tf (100%) rename {terraform => deployment}/modules/cloudfront/variables.tf (100%) rename {terraform => deployment}/modules/cloudwatch/main.tf (100%) rename {terraform => deployment}/modules/cloudwatch/outputs.tf (100%) rename {terraform => deployment}/modules/cloudwatch/variables.tf (100%) rename {terraform => deployment}/modules/documentdb/main.tf (100%) rename {terraform => deployment}/modules/documentdb/outputs.tf (100%) rename {terraform => deployment}/modules/documentdb/variables.tf (100%) rename {terraform => deployment}/modules/ecr/main.tf (100%) rename {terraform => deployment}/modules/ecr/outputs.tf (100%) rename {terraform => deployment}/modules/ecr/variables.tf (100%) rename {terraform => deployment}/modules/ecs/main.tf (100%) rename {terraform => deployment}/modules/ecs/outputs.tf (100%) rename {terraform => deployment}/modules/ecs/variables.tf (100%) rename {terraform => deployment}/modules/iam/main.tf (100%) rename {terraform => deployment}/modules/iam/outputs.tf (100%) rename {terraform => deployment}/modules/iam/variables.tf (100%) rename {terraform => deployment}/modules/parameter_store/main.tf (100%) rename {terraform => deployment}/modules/parameter_store/outputs.tf (100%) rename {terraform => deployment}/modules/parameter_store/variables.tf (100%) rename 
{terraform => deployment}/modules/redis/main.tf (100%) rename {terraform => deployment}/modules/redis/outputs.tf (100%) rename {terraform => deployment}/modules/redis/variables.tf (100%) rename {terraform => deployment}/modules/s3/main.tf (100%) rename {terraform => deployment}/modules/s3/outputs.tf (100%) rename {terraform => deployment}/modules/s3/variables.tf (100%) rename {terraform => deployment}/modules/security_groups/main.tf (100%) rename {terraform => deployment}/modules/security_groups/outputs.tf (100%) rename {terraform => deployment}/modules/security_groups/variables.tf (100%) rename {terraform => deployment}/modules/vpc/main.tf (100%) rename {terraform => deployment}/modules/vpc/outputs.tf (100%) rename {terraform => deployment}/modules/vpc/variables.tf (100%) rename {terraform => deployment}/outputs.tf (100%) rename {terraform => deployment}/terraform-core-management-policy.json (100%) rename {terraform => deployment}/terraform-infrastructure-management-policy.json (100%) rename {terraform => deployment}/terraform-s3-management-policy.json (100%) rename {terraform => deployment}/terraform-service-management-policy.json (100%) rename {terraform => deployment}/terraform.tfvars.example (100%) rename {terraform => deployment}/variables.tf (100%) diff --git a/.github/workflows/deploy_to_aws.yml b/.github/workflows/deploy_to_aws.yml index 06de7340..d5f8ce00 100644 --- a/.github/workflows/deploy_to_aws.yml +++ b/.github/workflows/deploy_to_aws.yml @@ -49,7 +49,7 @@ jobs: with: repository: 'speedandfunction/website-ci-secrets' path: 'terraform-config' - token: ${{ secrets.PAT }} + token: ${{ secrets.GITHUB_TOKEN }} - name: Copy tfvars for ${{ vars.ENV_NAME }} run: | diff --git a/.gitignore b/.gitignore index 2f15629d..600e7477 100644 --- a/.gitignore +++ b/.gitignore @@ -145,7 +145,7 @@ dump.archive .prettierrc aposUsersSafe.json terraform.tfvars -terraform/.terraform +deployment/.terraform *.tfplan plan.out.txt .cursor/tmp diff --git a/terraform/README.md b/deployment/README.md similarity index 100% rename from terraform/README.md rename to deployment/README.md diff --git a/terraform/backend-development.hcl b/deployment/backend-development.hcl similarity index 100% rename from terraform/backend-development.hcl rename to deployment/backend-development.hcl diff --git a/terraform/backend-production.hcl b/deployment/backend-production.hcl similarity index 100% rename from terraform/backend-production.hcl rename to deployment/backend-production.hcl diff --git a/terraform/backend-staging.hcl b/deployment/backend-staging.hcl similarity index 100% rename from terraform/backend-staging.hcl rename to deployment/backend-staging.hcl diff --git a/terraform/init-aws-for-terraform.sh b/deployment/init-aws-for-terraform.sh similarity index 100% rename from terraform/init-aws-for-terraform.sh rename to deployment/init-aws-for-terraform.sh diff --git a/terraform/main.tf b/deployment/main.tf similarity index 100% rename from terraform/main.tf rename to deployment/main.tf diff --git a/terraform/modules/alb/main.tf b/deployment/modules/alb/main.tf similarity index 100% rename from terraform/modules/alb/main.tf rename to deployment/modules/alb/main.tf diff --git a/terraform/modules/alb/outputs.tf b/deployment/modules/alb/outputs.tf similarity index 100% rename from terraform/modules/alb/outputs.tf rename to deployment/modules/alb/outputs.tf diff --git a/terraform/modules/alb/variables.tf b/deployment/modules/alb/variables.tf similarity index 100% rename from terraform/modules/alb/variables.tf rename to 
deployment/modules/alb/variables.tf diff --git a/terraform/modules/bastion/main.tf b/deployment/modules/bastion/main.tf similarity index 100% rename from terraform/modules/bastion/main.tf rename to deployment/modules/bastion/main.tf diff --git a/terraform/modules/bastion/outputs.tf b/deployment/modules/bastion/outputs.tf similarity index 100% rename from terraform/modules/bastion/outputs.tf rename to deployment/modules/bastion/outputs.tf diff --git a/terraform/modules/bastion/variables.tf b/deployment/modules/bastion/variables.tf similarity index 100% rename from terraform/modules/bastion/variables.tf rename to deployment/modules/bastion/variables.tf diff --git a/terraform/modules/cloudfront/main.tf b/deployment/modules/cloudfront/main.tf similarity index 100% rename from terraform/modules/cloudfront/main.tf rename to deployment/modules/cloudfront/main.tf diff --git a/terraform/modules/cloudfront/outputs.tf b/deployment/modules/cloudfront/outputs.tf similarity index 100% rename from terraform/modules/cloudfront/outputs.tf rename to deployment/modules/cloudfront/outputs.tf diff --git a/terraform/modules/cloudfront/variables.tf b/deployment/modules/cloudfront/variables.tf similarity index 100% rename from terraform/modules/cloudfront/variables.tf rename to deployment/modules/cloudfront/variables.tf diff --git a/terraform/modules/cloudwatch/main.tf b/deployment/modules/cloudwatch/main.tf similarity index 100% rename from terraform/modules/cloudwatch/main.tf rename to deployment/modules/cloudwatch/main.tf diff --git a/terraform/modules/cloudwatch/outputs.tf b/deployment/modules/cloudwatch/outputs.tf similarity index 100% rename from terraform/modules/cloudwatch/outputs.tf rename to deployment/modules/cloudwatch/outputs.tf diff --git a/terraform/modules/cloudwatch/variables.tf b/deployment/modules/cloudwatch/variables.tf similarity index 100% rename from terraform/modules/cloudwatch/variables.tf rename to deployment/modules/cloudwatch/variables.tf diff --git a/terraform/modules/documentdb/main.tf b/deployment/modules/documentdb/main.tf similarity index 100% rename from terraform/modules/documentdb/main.tf rename to deployment/modules/documentdb/main.tf diff --git a/terraform/modules/documentdb/outputs.tf b/deployment/modules/documentdb/outputs.tf similarity index 100% rename from terraform/modules/documentdb/outputs.tf rename to deployment/modules/documentdb/outputs.tf diff --git a/terraform/modules/documentdb/variables.tf b/deployment/modules/documentdb/variables.tf similarity index 100% rename from terraform/modules/documentdb/variables.tf rename to deployment/modules/documentdb/variables.tf diff --git a/terraform/modules/ecr/main.tf b/deployment/modules/ecr/main.tf similarity index 100% rename from terraform/modules/ecr/main.tf rename to deployment/modules/ecr/main.tf diff --git a/terraform/modules/ecr/outputs.tf b/deployment/modules/ecr/outputs.tf similarity index 100% rename from terraform/modules/ecr/outputs.tf rename to deployment/modules/ecr/outputs.tf diff --git a/terraform/modules/ecr/variables.tf b/deployment/modules/ecr/variables.tf similarity index 100% rename from terraform/modules/ecr/variables.tf rename to deployment/modules/ecr/variables.tf diff --git a/terraform/modules/ecs/main.tf b/deployment/modules/ecs/main.tf similarity index 100% rename from terraform/modules/ecs/main.tf rename to deployment/modules/ecs/main.tf diff --git a/terraform/modules/ecs/outputs.tf b/deployment/modules/ecs/outputs.tf similarity index 100% rename from terraform/modules/ecs/outputs.tf rename to 
deployment/modules/ecs/outputs.tf diff --git a/terraform/modules/ecs/variables.tf b/deployment/modules/ecs/variables.tf similarity index 100% rename from terraform/modules/ecs/variables.tf rename to deployment/modules/ecs/variables.tf diff --git a/terraform/modules/iam/main.tf b/deployment/modules/iam/main.tf similarity index 100% rename from terraform/modules/iam/main.tf rename to deployment/modules/iam/main.tf diff --git a/terraform/modules/iam/outputs.tf b/deployment/modules/iam/outputs.tf similarity index 100% rename from terraform/modules/iam/outputs.tf rename to deployment/modules/iam/outputs.tf diff --git a/terraform/modules/iam/variables.tf b/deployment/modules/iam/variables.tf similarity index 100% rename from terraform/modules/iam/variables.tf rename to deployment/modules/iam/variables.tf diff --git a/terraform/modules/parameter_store/main.tf b/deployment/modules/parameter_store/main.tf similarity index 100% rename from terraform/modules/parameter_store/main.tf rename to deployment/modules/parameter_store/main.tf diff --git a/terraform/modules/parameter_store/outputs.tf b/deployment/modules/parameter_store/outputs.tf similarity index 100% rename from terraform/modules/parameter_store/outputs.tf rename to deployment/modules/parameter_store/outputs.tf diff --git a/terraform/modules/parameter_store/variables.tf b/deployment/modules/parameter_store/variables.tf similarity index 100% rename from terraform/modules/parameter_store/variables.tf rename to deployment/modules/parameter_store/variables.tf diff --git a/terraform/modules/redis/main.tf b/deployment/modules/redis/main.tf similarity index 100% rename from terraform/modules/redis/main.tf rename to deployment/modules/redis/main.tf diff --git a/terraform/modules/redis/outputs.tf b/deployment/modules/redis/outputs.tf similarity index 100% rename from terraform/modules/redis/outputs.tf rename to deployment/modules/redis/outputs.tf diff --git a/terraform/modules/redis/variables.tf b/deployment/modules/redis/variables.tf similarity index 100% rename from terraform/modules/redis/variables.tf rename to deployment/modules/redis/variables.tf diff --git a/terraform/modules/s3/main.tf b/deployment/modules/s3/main.tf similarity index 100% rename from terraform/modules/s3/main.tf rename to deployment/modules/s3/main.tf diff --git a/terraform/modules/s3/outputs.tf b/deployment/modules/s3/outputs.tf similarity index 100% rename from terraform/modules/s3/outputs.tf rename to deployment/modules/s3/outputs.tf diff --git a/terraform/modules/s3/variables.tf b/deployment/modules/s3/variables.tf similarity index 100% rename from terraform/modules/s3/variables.tf rename to deployment/modules/s3/variables.tf diff --git a/terraform/modules/security_groups/main.tf b/deployment/modules/security_groups/main.tf similarity index 100% rename from terraform/modules/security_groups/main.tf rename to deployment/modules/security_groups/main.tf diff --git a/terraform/modules/security_groups/outputs.tf b/deployment/modules/security_groups/outputs.tf similarity index 100% rename from terraform/modules/security_groups/outputs.tf rename to deployment/modules/security_groups/outputs.tf diff --git a/terraform/modules/security_groups/variables.tf b/deployment/modules/security_groups/variables.tf similarity index 100% rename from terraform/modules/security_groups/variables.tf rename to deployment/modules/security_groups/variables.tf diff --git a/terraform/modules/vpc/main.tf b/deployment/modules/vpc/main.tf similarity index 100% rename from terraform/modules/vpc/main.tf 
rename to deployment/modules/vpc/main.tf diff --git a/terraform/modules/vpc/outputs.tf b/deployment/modules/vpc/outputs.tf similarity index 100% rename from terraform/modules/vpc/outputs.tf rename to deployment/modules/vpc/outputs.tf diff --git a/terraform/modules/vpc/variables.tf b/deployment/modules/vpc/variables.tf similarity index 100% rename from terraform/modules/vpc/variables.tf rename to deployment/modules/vpc/variables.tf diff --git a/terraform/outputs.tf b/deployment/outputs.tf similarity index 100% rename from terraform/outputs.tf rename to deployment/outputs.tf diff --git a/terraform/terraform-core-management-policy.json b/deployment/terraform-core-management-policy.json similarity index 100% rename from terraform/terraform-core-management-policy.json rename to deployment/terraform-core-management-policy.json diff --git a/terraform/terraform-infrastructure-management-policy.json b/deployment/terraform-infrastructure-management-policy.json similarity index 100% rename from terraform/terraform-infrastructure-management-policy.json rename to deployment/terraform-infrastructure-management-policy.json diff --git a/terraform/terraform-s3-management-policy.json b/deployment/terraform-s3-management-policy.json similarity index 100% rename from terraform/terraform-s3-management-policy.json rename to deployment/terraform-s3-management-policy.json diff --git a/terraform/terraform-service-management-policy.json b/deployment/terraform-service-management-policy.json similarity index 100% rename from terraform/terraform-service-management-policy.json rename to deployment/terraform-service-management-policy.json diff --git a/terraform/terraform.tfvars.example b/deployment/terraform.tfvars.example similarity index 100% rename from terraform/terraform.tfvars.example rename to deployment/terraform.tfvars.example diff --git a/terraform/variables.tf b/deployment/variables.tf similarity index 100% rename from terraform/variables.tf rename to deployment/variables.tf From 1b2fece150f8cc1d179a7c66108b4e9348075519 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Tue, 27 May 2025 21:35:45 +0200 Subject: [PATCH 24/46] Update GitHub workflow for Terraform deployment configuration - Fix branch reference from origin/create-terraform-configuration to create-terraform-configuration - Correct repository name from website-ci-secrets to website-ci-secret - Comment out tfvars copying step for debugging - Update Terraform version from 1.5.0 to 1.12.0 - Replace custom infra script with direct terraform commands - Update AWS credentials to use TF_AWS prefixed secrets - Change AWS region from eu-west-1 to eu-east-1 --- .github/workflows/deploy_to_aws.yml | 32 +++++++++++++++-------------- 1 file changed, 17 insertions(+), 15 deletions(-) diff --git a/.github/workflows/deploy_to_aws.yml b/.github/workflows/deploy_to_aws.yml index d5f8ce00..1a3b1e63 100644 --- a/.github/workflows/deploy_to_aws.yml +++ b/.github/workflows/deploy_to_aws.yml @@ -3,7 +3,7 @@ name: 'Deploy to AWS' on: push: branches: - - origin/create-terraform-configuration + - create-terraform-configuration # - 'releases/**' workflow_dispatch: @@ -47,32 +47,34 @@ jobs: - name: Checkout config repository uses: actions/checkout@v4 with: - repository: 'speedandfunction/website-ci-secrets' + repository: 'speedandfunction/website-ci-secret' path: 'terraform-config' token: ${{ secrets.GITHUB_TOKEN }} - - name: Copy tfvars for ${{ vars.ENV_NAME }} - run: | - cat "terraform-config/environments/${{ env.TFVAR_NAME }}" "deployment/environments/${{env.TFVAR_NAME}}" > 
"deployment/terraform/${{ env.SETTINGS_BUCKET }}.tfvars" - env: - SETTINGS_BUCKET: oshub-settings-${{ steps.get_env_name.outputs.lowercase }} - TFVAR_NAME: terraform-${{steps.get_env_name.outputs.lowercase}}.tfvars + # - name: Copy tfvars for ${{ vars.ENV_NAME }} + # run: | + # cat "terraform-config/environments/${{ env.TFVAR_NAME }}" "deployment/environments/${{env.TFVAR_NAME}}" > "deployment/terraform/${{ env.SETTINGS_BUCKET }}.tfvars" + # env: + # SETTINGS_BUCKET: oshub-settings-${{ steps.get_env_name.outputs.lowercase }} + # TFVAR_NAME: terraform-${{steps.get_env_name.outputs.lowercase}}.tfvars - name: Setup Terraform uses: hashicorp/setup-terraform@v2 with: - terraform_version: 1.5.0 + terraform_version: 1.12.0 terraform_wrapper: false - name: Terraform Plan for ${{ vars.ENV_NAME }} - run: ./deployment/infra plan + run: | + cd deployment + terraform init -backend-config="backend-${{ vars.ENV_NAME }}.hcl" + terraform plan -var-file="terraform-config/environments/${{ env.TFVAR_NAME }}" -out="terraform/${{ env.SETTINGS_BUCKET }}.tfplan" env: - AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - AWS_DEFAULT_REGION: "eu-west-1" - SETTINGS_BUCKET: oshub-settings-${{ steps.get_env_name.outputs.lowercase }} - + AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} + AWS_DEFAULT_REGION: "eu-east-1" + - name: Copy planfile to S3 bucket for ${{ vars.ENV_NAME }} run: aws s3 cp "deployment/terraform/${{ env.SETTINGS_BUCKET }}.tfplan" "s3://${{ env.SETTINGS_BUCKET }}/terraform/${{ env.SETTINGS_BUCKET }}.tfplan" env: From 15a40d5dc66330e8b9e99b4d10ca6250dda0b1fd Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Tue, 27 May 2025 21:52:03 +0200 Subject: [PATCH 25/46] Fix GitHub Actions workflow for terraform configuration - Change token from GITHUB_TOKEN to PAT for private repo access - Enable tfvars copying step with corrected environment variables - Fix tfvars file path to use proper terraform.tfvars naming --- .github/workflows/deploy_to_aws.yml | 14 ++++++-------- 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/.github/workflows/deploy_to_aws.yml b/.github/workflows/deploy_to_aws.yml index 1a3b1e63..25c335ba 100644 --- a/.github/workflows/deploy_to_aws.yml +++ b/.github/workflows/deploy_to_aws.yml @@ -49,14 +49,12 @@ jobs: with: repository: 'speedandfunction/website-ci-secret' path: 'terraform-config' - token: ${{ secrets.GITHUB_TOKEN }} - - # - name: Copy tfvars for ${{ vars.ENV_NAME }} - # run: | - # cat "terraform-config/environments/${{ env.TFVAR_NAME }}" "deployment/environments/${{env.TFVAR_NAME}}" > "deployment/terraform/${{ env.SETTINGS_BUCKET }}.tfvars" - # env: - # SETTINGS_BUCKET: oshub-settings-${{ steps.get_env_name.outputs.lowercase }} - # TFVAR_NAME: terraform-${{steps.get_env_name.outputs.lowercase}}.tfvars + token: ${{ secrets.PAT }} + + - name: Copy tfvars for ${{ vars.ENV_NAME }} + run: | + cat "terraform-config/environments/${{ vars.ENV_NAME }}" "deployment/environments/${{ vars.ENV_NAME }}" > "deployment/terraform/terraform.tfvars" + - name: Setup Terraform uses: hashicorp/setup-terraform@v2 From 461676447a2007ee394519b1435749b96dce5a77 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Tue, 27 May 2025 21:59:35 +0200 Subject: [PATCH 26/46] Reorganize deployment configuration into environments structure - Move backend configuration files to deployment/enviroments/ directory - Add development.tfvars with complete infrastructure 
configuration - Update GitHub workflow to use new file paths for environment configs - Remove deprecated init-aws-for-terraform.sh script --- .github/workflows/deploy_to_aws.yml | 2 +- .../{ => enviroments}/backend-development.hcl | 0 .../{ => enviroments}/backend-production.hcl | 0 .../{ => enviroments}/backend-staging.hcl | 0 deployment/enviroments/development.tfvars | 44 +++ deployment/init-aws-for-terraform.sh | 276 ------------------ 6 files changed, 45 insertions(+), 277 deletions(-) rename deployment/{ => enviroments}/backend-development.hcl (100%) rename deployment/{ => enviroments}/backend-production.hcl (100%) rename deployment/{ => enviroments}/backend-staging.hcl (100%) create mode 100644 deployment/enviroments/development.tfvars delete mode 100755 deployment/init-aws-for-terraform.sh diff --git a/.github/workflows/deploy_to_aws.yml b/.github/workflows/deploy_to_aws.yml index 25c335ba..2761565c 100644 --- a/.github/workflows/deploy_to_aws.yml +++ b/.github/workflows/deploy_to_aws.yml @@ -53,7 +53,7 @@ jobs: - name: Copy tfvars for ${{ vars.ENV_NAME }} run: | - cat "terraform-config/environments/${{ vars.ENV_NAME }}" "deployment/environments/${{ vars.ENV_NAME }}" > "deployment/terraform/terraform.tfvars" + cat "terraform-config/${{ vars.ENV_NAME }}.tfvars" "deployment/environments/${{ vars.ENV_NAME }}.tfvars" > "deployment/terraform.tfvars" - name: Setup Terraform diff --git a/deployment/backend-development.hcl b/deployment/enviroments/backend-development.hcl similarity index 100% rename from deployment/backend-development.hcl rename to deployment/enviroments/backend-development.hcl diff --git a/deployment/backend-production.hcl b/deployment/enviroments/backend-production.hcl similarity index 100% rename from deployment/backend-production.hcl rename to deployment/enviroments/backend-production.hcl diff --git a/deployment/backend-staging.hcl b/deployment/enviroments/backend-staging.hcl similarity index 100% rename from deployment/backend-staging.hcl rename to deployment/enviroments/backend-staging.hcl diff --git a/deployment/enviroments/development.tfvars b/deployment/enviroments/development.tfvars new file mode 100644 index 00000000..b5476dc0 --- /dev/null +++ b/deployment/enviroments/development.tfvars @@ -0,0 +1,44 @@ +# Example Terraform Variables for SF Website Infrastructure +# Copy this file to terraform.tfvars and customize the values + +# General Configuration +aws_region = "us-east-1" +environment = "development" # Can be any name: development, staging, production, qa, demo, sandbox, etc. 
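The workflow above assembles the final terraform.tfvars from two halves: secret values kept in the private website-ci-secret repository and the non-sensitive defaults committed here. A hedged sketch of the same assembly done locally, assuming the private repo is cloned as a sibling directory and using the corrected environments/ paths that the next two patches settle on:

```bash
ENV_NAME=development

# Merge private values (secrets repo) with the committed environment defaults
cat "../website-ci-secret/${ENV_NAME}.tfvars" \
    "deployment/environments/${ENV_NAME}.tfvars" > deployment/terraform.tfvars

# Initialise and plan against the matching backend definition
terraform -chdir=deployment init -backend-config="environments/backend-${ENV_NAME}.hcl"
terraform -chdir=deployment plan -out="${ENV_NAME}.tfplan"
```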
+ +apos_release_id = "v1.0.0(0)" +# Domain Configuration +domain_name = "sf-website-development.sandbox-prettyclear.com" +media_domain_name = "sf-website-media-development.sandbox-prettyclear.com" +certificate_arn = "arn:aws:acm:us-east-1:695912022152:certificate/3dd0c51e-6749-485b-95f8-c92a4950ac93" + +# GitHub Actions Configuration +github_repository = "speedandfunction/website" # Format: owner/repo + +# Monitoring Configuration (SENSITIVE) +slack_webhook_url = "" + +# Optional: Route 53 Configuration +route53_zone_id = "Z031220720LW1I1AB9GUY" # Leave empty to skip DNS record creation + +# Container Configuration +container_image_tag = "latest" +container_cpu = 1024 # 1024 = 1 vCPU +container_memory = 2048 # 2048 MB = 2 GB +container_port = 3000 # Application port +log_retention_days = 7 # CloudWatch log retention + +# Scaling Configuration +ecs_desired_count = 1 +ecs_max_capacity = 3 + +# Environment-specific Instance Types +documentdb_instance_class = "db.t3.micro" +redis_node_type = "cache.t3.micro" # cache.t3.small for production + +# Tags +default_tags = { + Project = "Website" + Environment = "development" + CostCenter = "Website" + Owner = "peter.ovchyn" +} \ No newline at end of file diff --git a/deployment/init-aws-for-terraform.sh b/deployment/init-aws-for-terraform.sh deleted file mode 100755 index 35d37df6..00000000 --- a/deployment/init-aws-for-terraform.sh +++ /dev/null @@ -1,276 +0,0 @@ -#!/bin/bash - -# Script to manage Terraform backend resources (S3 bucket) -# Usage: ./init-aws-for-terraform.sh [create|delete|status] [--profile PROFILE_NAME] - -set -e - -# Configuration from backend-development.hcl -BUCKET_NAME="${BUCKET_NAME:-sf-website-infrastructure}" -AWS_REGION="${AWS_REGION:-us-east-1}" - -# Global variables -AWS_PROFILE="${AWS_PROFILE:-}" - -# Colors for output -readonly RED='\033[0;31m' -readonly GREEN='\033[0;32m' -readonly YELLOW='\033[1;33m' -readonly NC='\033[0m' # No Color - -# Print colored output -print_info() { - echo -e "${GREEN}[INFO]${NC} $1" -} - -print_warning() { - echo -e "${YELLOW}[WARN]${NC} $1" -} - -print_error() { - echo -e "${RED}[ERROR]${NC} $1" -} - -# Execute AWS command with proper profile handling and error suppression -aws_exec() { - if [ -n "${AWS_PROFILE}" ]; then - # Suppress spurious head/cat errors that occur with some AWS CLI configurations - aws --profile "${AWS_PROFILE}" "$@" 2> >(grep -v "head:" | grep -v "cat:" >&2) - else - aws "$@" 2> >(grep -v "head:" | grep -v "cat:" >&2) - fi -} - -# Check if AWS CLI is installed and configured -check_aws_cli() { - if ! command -v aws &> /dev/null; then - print_error 'AWS CLI is not installed' - exit 1 - fi - - # Check credentials by testing the command and capturing exit code - if aws_exec sts get-caller-identity >/dev/null 2>&1; then - if [ -n "${AWS_PROFILE}" ]; then - print_info "Using AWS profile: ${AWS_PROFILE}" - fi - else - if [ -n "${AWS_PROFILE}" ]; then - print_error "AWS CLI profile '${AWS_PROFILE}' is not configured or credentials are invalid" - else - print_error 'AWS CLI is not configured or credentials are invalid' - fi - exit 1 - fi -} - -# Check if S3 bucket exists -bucket_exists() { - aws_exec s3api head-bucket --bucket "${BUCKET_NAME}" \ - --region "${AWS_REGION}" >/dev/null 2>&1 - return $? -} - -# Create S3 bucket -create_s3_bucket() { - if bucket_exists; then - print_warning "S3 bucket '${BUCKET_NAME}' already exists" - return 0 - fi - - print_info "Creating S3 bucket '${BUCKET_NAME}'..." 
- - # Create bucket - if [ "${AWS_REGION}" = 'us-east-1' ]; then - aws_exec s3api create-bucket --bucket "${BUCKET_NAME}" \ - --region "${AWS_REGION}" - else - aws_exec s3api create-bucket --bucket "${BUCKET_NAME}" \ - --region "${AWS_REGION}" \ - --create-bucket-configuration LocationConstraint="${AWS_REGION}" - fi - - # Enable versioning - aws_exec s3api put-bucket-versioning --bucket "${BUCKET_NAME}" \ - --versioning-configuration Status=Enabled - - # Enable server-side encryption - aws_exec s3api put-bucket-encryption --bucket "${BUCKET_NAME}" \ - --server-side-encryption-configuration '{ - "Rules": [ - { - "ApplyServerSideEncryptionByDefault": { - "SSEAlgorithm": "AES256" - } - } - ] - }' - - # Block public access - aws_exec s3api put-public-access-block --bucket "${BUCKET_NAME}" \ - --public-access-block-configuration \ - BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true - - print_info "S3 bucket '${BUCKET_NAME}' created successfully" -} - -# Delete S3 bucket (including all objects and versions) -delete_s3_bucket() { - if ! bucket_exists; then - print_warning "S3 bucket '${BUCKET_NAME}' does not exist" - return 0 - fi - - print_info "Deleting all objects from S3 bucket '${BUCKET_NAME}'..." - - # Delete all object versions and delete markers - aws_exec s3api list-object-versions --bucket "${BUCKET_NAME}" \ - --query 'Versions[].{Key:Key,VersionId:VersionId}' \ - --output text | while read -r key version_id; do - if [ -n "${key}" ] && [ -n "${version_id}" ]; then - aws_exec s3api delete-object --bucket "${BUCKET_NAME}" \ - --key "${key}" --version-id "${version_id}" - fi - done - - # Delete all delete markers - aws_exec s3api list-object-versions --bucket "${BUCKET_NAME}" \ - --query 'DeleteMarkers[].{Key:Key,VersionId:VersionId}' \ - --output text | while read -r key version_id; do - if [ -n "${key}" ] && [ -n "${version_id}" ]; then - aws_exec s3api delete-object --bucket "${BUCKET_NAME}" \ - --key "${key}" --version-id "${version_id}" - fi - done - - print_info "Deleting S3 bucket '${BUCKET_NAME}'..." - aws_exec s3api delete-bucket --bucket "${BUCKET_NAME}" \ - --region "${AWS_REGION}" - - print_info "S3 bucket '${BUCKET_NAME}' deleted successfully" -} - -# Show status of resources -show_status() { - print_info 'Checking Terraform backend resources status...' - - if bucket_exists; then - print_info "S3 bucket '${BUCKET_NAME}' exists" - else - print_warning "S3 bucket '${BUCKET_NAME}' does not exist" - fi -} - -# Create all resources -create_resources() { - print_info 'Creating Terraform backend resources...' - create_s3_bucket - print_info 'All resources created successfully!' -} - -# Delete all resources with automatic confirmation for non-interactive use -delete_resources() { - # Check if running in non-interactive mode (for automation) - if [ "${TERRAFORM_NON_INTERACTIVE:-false}" = "true" ] || [ ! -t 0 ]; then - print_warning 'Running in non-interactive mode, skipping confirmation...' - confirmation="yes" - else - print_warning 'This will delete all Terraform backend resources!' - read -p 'Are you sure? (yes/no): ' confirmation - fi - - if [ "${confirmation}" != 'yes' ]; then - print_info 'Operation cancelled' - return 0 - fi - - print_info 'Deleting Terraform backend resources...' - delete_s3_bucket - print_info 'All resources deleted successfully!' 
-} - -# Show usage -show_usage() { - echo 'Usage: ./init-aws-for-terraform.sh [create|delete|status] [--profile PROFILE_NAME]' - echo '' - echo 'Commands:' - echo ' create - Create S3 bucket' - echo ' delete - Delete S3 bucket' - echo ' status - Show status of resources' - echo '' - echo 'Options:' - echo ' --profile PROFILE_NAME - Use specified AWS profile' - echo '' - echo 'Environment Variables:' - echo ' AWS_PROFILE - AWS profile to use (can be overridden by --profile)' - echo ' BUCKET_NAME - S3 bucket name (default: sf-website-infrastructure)' - echo ' AWS_REGION - AWS region (default: us-east-1)' - echo '' - echo 'Examples:' - echo ' ./init-aws-for-terraform.sh create' - echo ' ./init-aws-for-terraform.sh create --profile tf-sf-website' - echo ' ./init-aws-for-terraform.sh status --profile tf-sf-website' - echo ' AWS_PROFILE=tf-sf-website ./init-aws-for-terraform.sh create' - echo ' BUCKET_NAME=my-terraform-state AWS_REGION=us-west-2 ./init-aws-for-terraform.sh create' - echo '' - echo 'Resources managed:' - echo " S3 Bucket: ${BUCKET_NAME}" - echo " Region: ${AWS_REGION}" -} - -# Main function -main() { - local command='' - - # Parse arguments directly in main function - while [ $# -gt 0 ]; do - case "$1" in - 'create'|'delete'|'status'|'help'|'-h'|'--help') - command="$1" - shift - ;; - '--profile') - if [ -n "$2" ]; then - AWS_PROFILE="$2" - shift 2 - else - print_error 'Profile name is required after --profile' - exit 1 - fi - ;; - *) - print_error "Unknown argument: $1" - show_usage - exit 1 - ;; - esac - done - - check_aws_cli - - case "${command}" in - 'create') - create_resources - ;; - 'delete') - delete_resources - ;; - 'status') - show_status - ;; - 'help'|'-h'|'--help') - show_usage - ;; - '') - show_usage - exit 1 - ;; - *) - print_error "Unknown command: ${command}" - show_usage - exit 1 - ;; - esac -} - -# Run main function with all arguments -main "$@" \ No newline at end of file From a53edfad277f3f22f5c04ffbe9d9f9966950cad7 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Tue, 27 May 2025 22:02:18 +0200 Subject: [PATCH 27/46] Fix directory name typo in deployment configuration - Rename deployment/enviroments to deployment/environments - Move all backend configuration files to correctly spelled directory - Move development.tfvars to correctly spelled directory --- deployment/{enviroments => environments}/backend-development.hcl | 0 deployment/{enviroments => environments}/backend-production.hcl | 0 deployment/{enviroments => environments}/backend-staging.hcl | 0 deployment/{enviroments => environments}/development.tfvars | 0 4 files changed, 0 insertions(+), 0 deletions(-) rename deployment/{enviroments => environments}/backend-development.hcl (100%) rename deployment/{enviroments => environments}/backend-production.hcl (100%) rename deployment/{enviroments => environments}/backend-staging.hcl (100%) rename deployment/{enviroments => environments}/development.tfvars (100%) diff --git a/deployment/enviroments/backend-development.hcl b/deployment/environments/backend-development.hcl similarity index 100% rename from deployment/enviroments/backend-development.hcl rename to deployment/environments/backend-development.hcl diff --git a/deployment/enviroments/backend-production.hcl b/deployment/environments/backend-production.hcl similarity index 100% rename from deployment/enviroments/backend-production.hcl rename to deployment/environments/backend-production.hcl diff --git a/deployment/enviroments/backend-staging.hcl b/deployment/environments/backend-staging.hcl 
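The typo fix in this patch is only a directory rename, but it is worth verifying that no workflow or module path still points at the old spelling. An equivalent local sequence might look like this (commands are illustrative, not taken from the patch itself):

```bash
git mv deployment/enviroments deployment/environments

# Make sure nothing still references the misspelled directory
grep -rn 'enviroments' .github/ deployment/ || echo 'no stale references'

git commit -m 'Fix directory name typo in deployment configuration'
```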
similarity index 100% rename from deployment/enviroments/backend-staging.hcl rename to deployment/environments/backend-staging.hcl diff --git a/deployment/enviroments/development.tfvars b/deployment/environments/development.tfvars similarity index 100% rename from deployment/enviroments/development.tfvars rename to deployment/environments/development.tfvars From 283e35bb21cb73e4aed6bd36329ab7bb00742eb1 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Tue, 27 May 2025 22:09:28 +0200 Subject: [PATCH 28/46] Fix Terraform configuration and AWS region in deploy workflow - Fix backend config path to use environments/ subdirectory - Remove var-file parameter from terraform plan command - Change AWS region from eu-east-1 to us-east-1 - Comment out S3 planfile copy step --- .github/workflows/deploy_to_aws.yml | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/.github/workflows/deploy_to_aws.yml b/.github/workflows/deploy_to_aws.yml index 2761565c..ed079a58 100644 --- a/.github/workflows/deploy_to_aws.yml +++ b/.github/workflows/deploy_to_aws.yml @@ -66,20 +66,20 @@ jobs: - name: Terraform Plan for ${{ vars.ENV_NAME }} run: | cd deployment - terraform init -backend-config="backend-${{ vars.ENV_NAME }}.hcl" - terraform plan -var-file="terraform-config/environments/${{ env.TFVAR_NAME }}" -out="terraform/${{ env.SETTINGS_BUCKET }}.tfplan" + terraform init -backend-config="environments/backend-${{ vars.ENV_NAME }}.hcl" + terraform plan -out="terraform/${{ env.SETTINGS_BUCKET }}.tfplan" env: AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} - AWS_DEFAULT_REGION: "eu-east-1" + AWS_DEFAULT_REGION: "us-east-1" - - name: Copy planfile to S3 bucket for ${{ vars.ENV_NAME }} - run: aws s3 cp "deployment/terraform/${{ env.SETTINGS_BUCKET }}.tfplan" "s3://${{ env.SETTINGS_BUCKET }}/terraform/${{ env.SETTINGS_BUCKET }}.tfplan" - env: - AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - SETTINGS_BUCKET: oshub-settings-${{ steps.get_env_name.outputs.lowercase }} - AWS_DEFAULT_REGION: "eu-west-1" + # - name: Copy planfile to S3 bucket for ${{ vars.ENV_NAME }} + # run: aws s3 cp "deployment/terraform/${{ env.SETTINGS_BUCKET }}.tfplan" "s3://${{ env.SETTINGS_BUCKET }}/terraform/${{ env.SETTINGS_BUCKET }}.tfplan" + # env: + # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} + # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + # SETTINGS_BUCKET: oshub-settings-${{ steps.get_env_name.outputs.lowercase }} + # AWS_DEFAULT_REGION: "eu-west-1" # detach-waf-if-needed: # needs: init-and-plan From 4b67b307063801722b4c6919174387b9332f60d7 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Tue, 27 May 2025 22:14:50 +0200 Subject: [PATCH 29/46] Fix terraform plan output path and enable S3 plan storage - Fix terraform plan output path to use vars.ENV_NAME instead of env.SETTINGS_BUCKET - Enable S3 plan file storage step with correct paths and environment variables - Update S3 bucket path structure to use plans/ subdirectory - Remove extra blank line for cleaner formatting --- .github/workflows/deploy_to_aws.yml | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) diff --git a/.github/workflows/deploy_to_aws.yml b/.github/workflows/deploy_to_aws.yml index ed079a58..eef2d240 100644 --- a/.github/workflows/deploy_to_aws.yml +++ b/.github/workflows/deploy_to_aws.yml @@ -62,24 +62,23 @@ jobs: terraform_version: 1.12.0 terraform_wrapper: 
false - - name: Terraform Plan for ${{ vars.ENV_NAME }} run: | cd deployment terraform init -backend-config="environments/backend-${{ vars.ENV_NAME }}.hcl" - terraform plan -out="terraform/${{ env.SETTINGS_BUCKET }}.tfplan" + terraform plan -out="terraform/${{ vars.ENV_NAME }}.tfplan" env: AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} AWS_DEFAULT_REGION: "us-east-1" - # - name: Copy planfile to S3 bucket for ${{ vars.ENV_NAME }} - # run: aws s3 cp "deployment/terraform/${{ env.SETTINGS_BUCKET }}.tfplan" "s3://${{ env.SETTINGS_BUCKET }}/terraform/${{ env.SETTINGS_BUCKET }}.tfplan" - # env: - # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - # SETTINGS_BUCKET: oshub-settings-${{ steps.get_env_name.outputs.lowercase }} - # AWS_DEFAULT_REGION: "eu-west-1" + - name: Copy planfile to S3 bucket for ${{ vars.ENV_NAME }} + run: aws s3 cp "terraform/${{ vars.ENV_NAME }}.tfplan" "s3://${{ env.SETTINGS_BUCKET }}/plans/${{ vars.ENV_NAME }}.tfplan" + env: + AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} + SETTINGS_BUCKET: sf-website-infrastructure + AWS_DEFAULT_REGION: "eu-east-1" # detach-waf-if-needed: # needs: init-and-plan From 0e796061486f2b3ff1ad013e370a5a793cf33c3f Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Tue, 27 May 2025 22:16:26 +0200 Subject: [PATCH 30/46] Fix Terraform plan file paths in GitHub workflow - Update terraform plan output path to use deployment directory - Fix S3 copy command to use correct plan file location --- .github/workflows/deploy_to_aws.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/.github/workflows/deploy_to_aws.yml b/.github/workflows/deploy_to_aws.yml index eef2d240..e9641ed3 100644 --- a/.github/workflows/deploy_to_aws.yml +++ b/.github/workflows/deploy_to_aws.yml @@ -66,14 +66,14 @@ jobs: run: | cd deployment terraform init -backend-config="environments/backend-${{ vars.ENV_NAME }}.hcl" - terraform plan -out="terraform/${{ vars.ENV_NAME }}.tfplan" + terraform plan -out="${{ vars.ENV_NAME }}.tfplan" env: AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} AWS_DEFAULT_REGION: "us-east-1" - name: Copy planfile to S3 bucket for ${{ vars.ENV_NAME }} - run: aws s3 cp "terraform/${{ vars.ENV_NAME }}.tfplan" "s3://${{ env.SETTINGS_BUCKET }}/plans/${{ vars.ENV_NAME }}.tfplan" + run: aws s3 cp "deployment/${{ vars.ENV_NAME }}.tfplan" "s3://${{ env.SETTINGS_BUCKET }}/plans/${{ vars.ENV_NAME }}.tfplan" env: AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} From b4008007e303b9da75d2cdede87a9ccc6710a6bd Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Tue, 27 May 2025 22:22:44 +0200 Subject: [PATCH 31/46] Fix AWS region configuration in deployment workflow - Change AWS_DEFAULT_REGION from eu-east-1 to us-east-1 --- .github/workflows/deploy_to_aws.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/deploy_to_aws.yml b/.github/workflows/deploy_to_aws.yml index e9641ed3..40795b4a 100644 --- a/.github/workflows/deploy_to_aws.yml +++ b/.github/workflows/deploy_to_aws.yml @@ -78,7 +78,7 @@ jobs: AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} SETTINGS_BUCKET: sf-website-infrastructure - 
AWS_DEFAULT_REGION: "eu-east-1" + AWS_DEFAULT_REGION: "us-east-1" # detach-waf-if-needed: # needs: init-and-plan From 88e7564f2c1acb56dbd669a94ed019a386f5bbe1 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Wed, 28 May 2025 10:27:53 +0200 Subject: [PATCH 32/46] Simplify GitHub workflow and update ECR repository naming - Remove commented out jobs and simplify workflow structure - Add apply and build_and_push_docker_image jobs to workflow - Update ECR repository name from sf-website-dev to sf-website-development - Configure proper job dependencies and environment settings --- .github/workflows/deploy_to_aws.yml | 523 ++++------------------------ deploy-ecs.sh | 2 +- 2 files changed, 64 insertions(+), 461 deletions(-) diff --git a/.github/workflows/deploy_to_aws.yml b/.github/workflows/deploy_to_aws.yml index 40795b4a..bc44b564 100644 --- a/.github/workflows/deploy_to_aws.yml +++ b/.github/workflows/deploy_to_aws.yml @@ -79,474 +79,77 @@ jobs: AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} SETTINGS_BUCKET: sf-website-infrastructure AWS_DEFAULT_REGION: "us-east-1" + apply: + needs: [init-and-plan] + runs-on: ubuntu-latest + environment: Development + if: ${{ inputs.deploy-plan-only == false }} + steps: + - name: Get Environment Name for ${{ vars.ENV_NAME }} + id: get_env_name + uses: Entepotenz/change-string-case-action-min-dependencies@v1 + with: + string: ${{ vars.ENV_NAME }} - # detach-waf-if-needed: - # needs: init-and-plan - # if: ${{ inputs.deploy-plan-only == false }} - # runs-on: ubuntu-latest - # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} - # steps: - # - name: Get Environment Name for ${{ vars.ENV_NAME }} - # id: get_env_name - # uses: Entepotenz/change-string-case-action-min-dependencies@v1 - # with: - # string: ${{ vars.ENV_NAME }} - - # - name: Checkout repo - # uses: actions/checkout@v4 - - # - name: Checkout config repository - # uses: actions/checkout@v4 - # with: - # repository: 'opensupplyhub/ci-deployment' - # path: 'terraform-config' - # token: ${{ secrets.PAT }} - - # - name: Copy tfvars for ${{ vars.ENV_NAME }} - # run: | - # cat "terraform-config/environments/${{ env.TFVAR_NAME }}" "deployment/environments/${{ env.TFVAR_NAME }}" > "deployment/terraform/${{ env.SETTINGS_BUCKET }}.tfvars" - # env: - # SETTINGS_BUCKET: oshub-settings-${{ steps.get_env_name.outputs.lowercase }} - # TFVAR_NAME: terraform-${{ steps.get_env_name.outputs.lowercase }}.tfvars - - # - name: Check whether AWS WAF enabled and find CloudFront distribution id - # run: | - # CLOUDFRONT_DISTRIBUTION_ID=$(./scripts/find_cloudfront_distribution_id.sh "$CLOUDFRONT_DOMAIN") - # waf_enabled=$(grep -E '^waf_enabled\s*=' deployment/environments/${{ env.TFVAR_NAME }} | awk '{print $3}') - - # echo "WAF Enabled: $waf_enabled" - # echo "Distribution ID: $CLOUDFRONT_DISTRIBUTION_ID" - - # if [[ "$waf_enabled" == "false" && -n "$CLOUDFRONT_DISTRIBUTION_ID" ]]; then - # echo "Detaching WAF from CloudFront distribution $CLOUDFRONT_DISTRIBUTION_ID" - # config=$(aws cloudfront get-distribution-config --id "$CLOUDFRONT_DISTRIBUTION_ID") - # etag=$(echo "$config" | jq -r '.ETag') - # dist=$(echo "$config" | jq '.DistributionConfig | .WebACLId = ""') - - # echo "$dist" > updated_config.json - - # aws cloudfront update-distribution \ - # --id "$CLOUDFRONT_DISTRIBUTION_ID" \ - # --if-match "$etag" \ - # --distribution-config file://updated_config.json - # else - # echo "Skipping WAF detachment" - # fi - # 
env: - # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - # AWS_DEFAULT_REGION: "eu-west-1" - # TFVAR_NAME: terraform-${{ steps.get_env_name.outputs.lowercase }}.tfvars - # SETTINGS_BUCKET: oshub-settings-${{ steps.get_env_name.outputs.lowercase }} - # CLOUDFRONT_DOMAIN: ${{ vars.CLOUDFRONT_DOMAIN }} - - # apply: - # needs: [init-and-plan, detach-waf-if-needed] - # runs-on: ubuntu-latest - # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} - # if: ${{ inputs.deploy-plan-only == false }} - # steps: - # - name: Get Environment Name for ${{ vars.ENV_NAME }} - # id: get_env_name - # uses: Entepotenz/change-string-case-action-min-dependencies@v1 - # with: - # string: ${{ vars.ENV_NAME }} - - # - name: Setup Terraform - # uses: hashicorp/setup-terraform@v2 - # with: - # terraform_version: 1.5.0 - # terraform_wrapper: false - - # - name: Checkout - # uses: actions/checkout@v4 - - # - name: Get planfile from S3 bucket for ${{ vars.ENV_NAME }} - # run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/terraform/${{ env.SETTINGS_BUCKET }}.tfplan" "deployment/terraform/${{ env.SETTINGS_BUCKET }}.tfplan" - # env: - # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - # SETTINGS_BUCKET: oshub-settings-${{ steps.get_env_name.outputs.lowercase }} - # AWS_DEFAULT_REGION: "eu-west-1" - - # - name: Terraform Apply for ${{ vars.ENV_NAME }} - # run: | - # ./deployment/infra apply - # env: - # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - # SETTINGS_BUCKET: oshub-settings-${{ steps.get_env_name.outputs.lowercase }} - # AWS_DEFAULT_REGION: "eu-west-1" - - # build_and_push_react_app: - # needs: apply - # runs-on: ubuntu-latest - # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} - # if: ${{ inputs.deploy-plan-only == false }} - # steps: - # - name: Get Environment Name for ${{ vars.ENV_NAME }} - # id: get_env_name - # uses: Entepotenz/change-string-case-action-min-dependencies@v1 - # with: - # string: ${{ vars.ENV_NAME }} - - # - name: Checkout repo - # uses: actions/checkout@v4 - - # - name: Setup Node.js - # uses: actions/setup-node@v2 - # with: - # node-version: '14' - - # - name: Cache dependencies - # uses: actions/cache@v4 - # with: - # path: | - # src/react/node_modules - # key: ${{ runner.os }}-node-${{ hashFiles('**/yarn.lock') }} - - # - name: Install dependencies - # working-directory: src/react - # run: yarn install - - # - name: Build static assets - # working-directory: src/react - # run: yarn run build - - # - id: project - # uses: Entepotenz/change-string-case-action-min-dependencies@v1 - # with: - # string: ${{ vars.PROJECT }} - - # - name: Move static - # env: - # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - # FRONTEND_BUCKET: ${{ steps.project.outputs.lowercase }}-${{ steps.get_env_name.outputs.lowercase }}-frontend - # AWS_DEFAULT_REGION: "eu-west-1" - # CLOUDFRONT_DOMAIN: ${{ vars.CLOUDFRONT_DOMAIN }} - # run: | - # CLOUDFRONT_DISTRIBUTION_ID=$(./scripts/find_cloudfront_distribution_id.sh "$CLOUDFRONT_DOMAIN") - # if [ -z "$CLOUDFRONT_DISTRIBUTION_ID" ]; then - # echo "Error: No CloudFront distribution found for domain: 
$CLOUDFRONT_DOMAIN" - # exit 1 - # fi - # aws s3 sync src/react/build/ s3://$FRONTEND_BUCKET-$AWS_DEFAULT_REGION/ --delete - # aws cloudfront create-invalidation --distribution-id "$CLOUDFRONT_DISTRIBUTION_ID" --paths "/*" - - # build_and_push_docker_image: - # needs: build_and_push_react_app - # runs-on: ubuntu-latest - # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} - # if: ${{ inputs.deploy-plan-only == false }} - # steps: - # - name: Get Environment Name for ${{ vars.ENV_NAME }} - # id: get_env_name - # uses: Entepotenz/change-string-case-action-min-dependencies@v1 - # with: - # string: ${{ vars.ENV_NAME }} - - # - name: Checkout repo - # uses: actions/checkout@v4 - - # - name: Configure AWS credentials for ${{ vars.ENV_NAME }} - # uses: aws-actions/configure-aws-credentials@v1 - # with: - # aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }} - # aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - # aws-region: "eu-west-1" - - # - name: Get GIT_COMMIT - # run: | - # export SHORT_SHA="$(git rev-parse --short HEAD)" - # export GIT_COMMIT_CI="${SHORT_SHA:0:7}" - # echo "GIT_COMMIT=$GIT_COMMIT_CI" >> $GITHUB_ENV - - # - name: Login to Amazon ECR - # uses: aws-actions/amazon-ecr-login@v1 - - - # - name: Build and push Kafka Tools Docker image to ECR for ${{ vars.ENV_NAME }} - # uses: docker/build-push-action@v2 - # with: - # context: src/kafka-tools/ - # file: src/kafka-tools/Dockerfile - # push: true - # tags: ${{ vars.ECR_REGISTRY }}/${{ vars.IMAGE_NAME }}-kafka-${{ steps.get_env_name.outputs.lowercase }}:${{ env.GIT_COMMIT }} - - # - name: Build and push Django Docker image to ECR for ${{ vars.ENV_NAME }} - # uses: docker/build-push-action@v2 - # with: - # context: src/django - # file: src/django/Dockerfile - # push: true - # tags: ${{ vars.ECR_REGISTRY }}/${{ vars.IMAGE_NAME }}-${{ steps.get_env_name.outputs.lowercase }}:${{ env.GIT_COMMIT }} - - # - name: Build and push Batch Docker image to ECR for ${{ vars.ENV_NAME }} - # uses: docker/build-push-action@v2 - # with: - # context: src/batch - # file: src/batch/Dockerfile - # push: true - # tags: ${{ vars.ECR_REGISTRY }}/${{ vars.IMAGE_NAME }}-batch-${{ steps.get_env_name.outputs.lowercase }}:${{ env.GIT_COMMIT }} - # build-args: | - # GIT_COMMIT=${{ env.GIT_COMMIT }} - # DOCKER_IMAGE=${{ vars.DOCKER_IMAGE }} - # ENVIRONMENT=${{ steps.get_env_name.outputs.lowercase }} - - # - name: Build and push Dedupe Hub Docker image to ECR for ${{ vars.ENV_NAME }} - # uses: docker/build-push-action@v2 - # with: - # context: src/dedupe-hub/api - # file: src/dedupe-hub/api/Dockerfile - # push: true - # tags: ${{ vars.ECR_REGISTRY }}/${{ vars.IMAGE_NAME }}-deduplicate-${{ steps.get_env_name.outputs.lowercase }}:${{ env.GIT_COMMIT }} - - # - name: Build and push Logstash Docker image to ECR for ${{ vars.ENV_NAME }} - # uses: docker/build-push-action@v2 - # with: - # context: src/logstash - # file: src/logstash/Dockerfile - # push: true - # tags: ${{ vars.ECR_REGISTRY }}/${{ vars.IMAGE_NAME }}-logstash-${{ steps.get_env_name.outputs.lowercase }}:${{ env.GIT_COMMIT }} + - name: Setup Terraform + uses: hashicorp/setup-terraform@v2 + with: + terraform_version: 1.5.0 + terraform_wrapper: false - # - name: Build and push Database Anonymizer Docker image to ECR for ${{ vars.ENV_NAME }} - # uses: docker/build-push-action@v2 - # if: ${{ steps.get_env_name.outputs.lowercase == 'production' }} - # with: - # context: 
deployment/terraform/database_anonymizer_scheduled_task/docker - # file: deployment/terraform/database_anonymizer_scheduled_task/docker/Dockerfile - # push: true - # tags: ${{ vars.ECR_REGISTRY }}/${{ vars.IMAGE_NAME }}-database-anonymizer-${{ steps.get_env_name.outputs.lowercase }}:${{ env.GIT_COMMIT }} + - name: Checkout + uses: actions/checkout@v4 - # - name: Build and push Anonymize Database Dump Docker image to ECR for ${{ vars.ENV_NAME }} - # uses: docker/build-push-action@v2 - # if: ${{ steps.get_env_name.outputs.lowercase == 'test' }} - # with: - # context: deployment/terraform/anonymized_database_dump_scheduled_task/docker - # file: deployment/terraform/anonymized_database_dump_scheduled_task/docker/Dockerfile - # push: true - # tags: ${{ vars.ECR_REGISTRY }}/${{ vars.IMAGE_NAME }}-anonymized-database-dump-${{ steps.get_env_name.outputs.lowercase }}:${{ env.GIT_COMMIT }} + - name: Get planfile from S3 bucket for ${{ vars.ENV_NAME }} + run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/plans/${{ vars.ENV_NAME }}.tfplan" "deployment/${{ vars.ENV_NAME }}.tfplan" + env: + AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} + SETTINGS_BUCKET: sf-website-infrastructure + AWS_DEFAULT_REGION: "us-east-1" - # create_kafka_topic: - # needs: build_and_push_docker_image - # runs-on: ubuntu-latest - # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} - # if: ${{ inputs.deploy-plan-only == false }} - # steps: - # - name: Get Environment Name for ${{ vars.ENV_NAME }} - # id: get_env_name - # uses: Entepotenz/change-string-case-action-min-dependencies@v1 - # with: - # string: ${{ vars.ENV_NAME }} - # - name: Checkout repo - # uses: actions/checkout@v4 + - name: Terraform Apply for ${{ vars.ENV_NAME }} + run: | + cd deployment + terraform apply "${{ vars.ENV_NAME }}.tfplan" + env: + AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} + AWS_DEFAULT_REGION: "us-east-1" - # - name: Create or update kafka topics for ${{ vars.ENV_NAME }} - # run: | - # ./deployment/run_kafka_task ${{ vars.ENV_NAME }} - # env: - # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - # AWS_DEFAULT_REGION: "eu-west-1" + + build_and_push_docker_image: + needs: [apply] + runs-on: ubuntu-latest + environment: Development + if: ${{ inputs.deploy-plan-only == false }} + steps: + - name: Get Environment Name for ${{ vars.ENV_NAME }} + id: get_env_name + uses: Entepotenz/change-string-case-action-min-dependencies@v1 + with: + string: ${{ vars.ENV_NAME }} - # stop_logstash: - # needs: create_kafka_topic - # runs-on: ubuntu-latest - # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} - # if: ${{ inputs.deploy-plan-only == false }} - # steps: - # - name: Get Environment Name for ${{ vars.ENV_NAME }} - # id: get_env_name - # uses: Entepotenz/change-string-case-action-min-dependencies@v1 - # with: - # string: ${{ vars.ENV_NAME }} - # - name: Checkout repo - # uses: actions/checkout@v4 - # - name: Stop Logstash for ${{ vars.ENV_NAME }} - # if: ${{ inputs.restore-db == true || inputs.clear-opensearch == true}} - # env: - # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - # 
AWS_DEFAULT_REGION: "eu-west-1" - # run: | - # aws \ - # ecs update-service --desired-count 0 --cluster=ecsOpenSupplyHub${{vars.ENV_NAME}}Cluster \ - # --service=OpenSupplyHub${{vars.ENV_NAME}}AppLogstash + - name: Checkout repo + uses: actions/checkout@v4 - # restore_database: - # needs: stop_logstash - # runs-on: self-hosted - # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} - # if: ${{ inputs.deploy-plan-only == false }} - # steps: - # - name: Get Environment Name for ${{ vars.ENV_NAME }} - # id: get_env_name - # uses: Entepotenz/change-string-case-action-min-dependencies@v1 - # with: - # string: ${{ vars.ENV_NAME }} - # - name: Checkout repo - # uses: actions/checkout@v4 - # - name: Restore database for ${{ vars.ENV_NAME }} - # if: ${{ (vars.ENV_NAME == 'Preprod' || vars.ENV_NAME == 'Test') && inputs.restore-db == true}} - # run: | - # cd ./src/anon-tools - # mkdir -p ./keys - # echo "${{ secrets.KEY_FILE }}" > ./keys/key - # docker build -t restore -f Dockerfile.restore . - # docker run -v ./keys/key:/keys/key --shm-size=2gb --rm \ - # -e AWS_ACCESS_KEY_ID=${{ secrets.AWS_ACCESS_KEY_ID }} \ - # -e AWS_SECRET_ACCESS_KEY=${{ secrets.AWS_SECRET_ACCESS_KEY }} \ - # -e AWS_DEFAULT_REGION=eu-west-1 \ - # -e ENVIRONMENT=${{ vars.ENV_NAME }} \ - # -e DATABASE_NAME=opensupplyhub \ - # -e DATABASE_USERNAME=opensupplyhub \ - # -e DATABASE_PASSWORD=${{ secrets.DATABASE_PASSWORD }} \ - # restore - # - name: Reset database for ${{ vars.ENV_NAME }} - # if: ${{ vars.ENV_NAME == 'Development' && inputs.restore-db == true}} - # run: | - # echo "Creating an S3 folder with production location lists to reset DB if it doesn't exist." - # aws s3api put-object --bucket ${{ env.LIST_S3_BUCKET }} --key ${{ env.RESET_LISTS_FOLDER }} - # echo "Coping all production location files from the repo to S3 to ensure consistency between local and environment resets." - # aws s3 cp ./src/django/${{ env.RESET_LISTS_FOLDER }} s3://${{ env.LIST_S3_BUCKET }}/${{ env.RESET_LISTS_FOLDER }} --recursive - # echo "Triggering reset commands." 
- # ./deployment/run_cli_task ${{ vars.ENV_NAME }} "reset_database" - # ./deployment/run_cli_task ${{ vars.ENV_NAME }} "matchfixtures" - # env: - # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - # AWS_DEFAULT_REGION: "eu-west-1" - # LIST_S3_BUCKET: opensupplyhub-${{ steps.get_env_name.outputs.lowercase }}-files-eu-west-1 - # RESET_LISTS_FOLDER: "api/fixtures/list_files/" + - name: Configure AWS credentials for ${{ vars.ENV_NAME }} + uses: aws-actions/configure-aws-credentials@v1 + with: + aws-access-key-id: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} + aws-secret-access-key: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} + aws-region: "us-east-1" - # update_services: - # needs: restore_database - # runs-on: ubuntu-latest - # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} - # if: ${{ inputs.deploy-plan-only == false }} - # steps: - # - name: Get Environment Name for ${{ vars.ENV_NAME }} - # id: get_env_name - # uses: Entepotenz/change-string-case-action-min-dependencies@v1 - # with: - # string: ${{ vars.ENV_NAME }} - # - name: Update ECS Django Service with new Image for ${{ vars.ENV_NAME }} - # run: | - # aws ecs update-service --cluster ${{ vars.CLUSTER }} --service ${{ vars.SERVICE_NAME }} --force-new-deployment --region ${{env.AWS_DEFAULT_REGION}} - # env: - # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - # AWS_DEFAULT_REGION: "eu-west-1" - # - name: Update ECS Dedupe Hub Service with new Image for ${{ vars.ENV_NAME }} - # run: | - # aws ecs update-service --cluster ${{ vars.CLUSTER }} --service ${{ vars.SERVICE_NAME }}DD --force-new-deployment --region ${{env.AWS_DEFAULT_REGION}} - # env: - # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - # AWS_DEFAULT_REGION: "eu-west-1" + - name: Login to Amazon ECR + uses: aws-actions/amazon-ecr-login@v1 - # post_deploy: - # needs: update_services - # runs-on: ubuntu-latest - # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} - # if: ${{ inputs.deploy-plan-only == false }} - # steps: - # - name: Get Environment Name for ${{ vars.ENV_NAME }} - # id: get_env_name - # uses: Entepotenz/change-string-case-action-min-dependencies@v1 - # with: - # string: ${{ vars.ENV_NAME }} - # - name: Checkout repo - # uses: actions/checkout@v4 - # - name: Run migrations and other post-deployment tasks for ${{ vars.ENV_NAME }} - # run: | - # ./deployment/run_cli_task ${{ vars.ENV_NAME }} "post_deployment" - # env: - # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - # AWS_DEFAULT_REGION: "eu-west-1" - # clear_opensearch: - # needs: post_deploy - # runs-on: ubuntu-latest - # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} - # if: ${{ inputs.deploy-plan-only == false }} - # steps: - # - name: Get Environment Name for ${{ vars.ENV_NAME }} - # id: get_env_name - # uses: Entepotenz/change-string-case-action-min-dependencies@v1 - # with: - # string: ${{ vars.ENV_NAME }} - # - name: Checkout repo - # uses: actions/checkout@v4 - # - name: Get OpenSearch domain, filesystem and access point IDs for ${{ vars.ENV_NAME }} - # if: 
${{ inputs.restore-db == true || inputs.clear-opensearch == true}} - # id: export_variables - # env: - # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - # AWS_DEFAULT_REGION: "eu-west-1" - # run: | - # OS_DOMAIN_NAME=$(echo "${{ vars.ENV_NAME }}-os-domain" | tr '[:upper:]' '[:lower:]') - # OPENSEARCH_DOMAIN=$(aws \ - # es describe-elasticsearch-domains --domain-names $OS_DOMAIN_NAME \ - # --query "DomainStatusList[].Endpoints.vpc" --output text) - # EFS_ID=$(aws \ - # efs describe-file-systems \ - # --query "FileSystems[?Tags[?Key=='Environment' && Value=='${{ vars.ENV_NAME }}']].FileSystemId" \ - # --output text) - # EFS_AP_ID=$(aws \ - # efs describe-access-points \ - # --query "AccessPoints[?FileSystemId=='$EFS_ID'].AccessPointId" \ - # --output text) - # echo "EFS_ID=$EFS_ID" >> $GITHUB_OUTPUT - # echo "EFS_AP_ID=$EFS_AP_ID" >> $GITHUB_OUTPUT - # echo "OPENSEARCH_DOMAIN=$OPENSEARCH_DOMAIN" >> $GITHUB_OUTPUT - # - name: Clear the custom OpenSearch indexes and templates for ${{ vars.ENV_NAME }} - # if: ${{ inputs.restore-db == true || inputs.clear-opensearch == true}} - # env: - # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - # OPENSEARCH_DOMAIN: ${{ steps.export_variables.outputs.OPENSEARCH_DOMAIN }} - # EFS_AP_ID: ${{ steps.export_variables.outputs.EFS_AP_ID }} - # EFS_ID: ${{ steps.export_variables.outputs.EFS_ID }} - # BASTION_IP: ${{ vars.BASTION_IP }} - # run: | - # cd ./deployment/clear_opensearch - # mkdir -p script - # mkdir -p ssh - # echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ssh/config - # printf "%s\n" "${{ secrets.SSH_PRIVATE_KEY }}" > ssh/id_rsa - # echo "" >> ssh/id_rsa - # echo -n ${{ vars.BASTION_IP }} > script/.env - # envsubst < clear_opensearch.sh.tpl > script/clear_opensearch.sh - # envsubst < run.sh.tpl > script/run.sh - # docker run --rm \ - # -v ./script:/script \ - # -v ./ssh:/root/.ssh \ - # kroniak/ssh-client bash /script/run.sh + - name: Build and push Kafka Tools Docker image to ECR for ${{ vars.ENV_NAME }} + uses: docker/build-push-action@v2 + with: + push: true + tags: ${{ env.ECR_REPOSITORY }}:${{ env.IMAGE_TAG }} + env: + IMAGE_TAG: latest + ECR_REPOSITORY: 695912022152.dkr.ecr.us-east-1.amazonaws.com/sf-website-${{ vars.ENV_NAME }} - # start_logstash: - # needs: clear_opensearch - # runs-on: ubuntu-latest - # environment: ${{ inputs.deploy-env || (github.ref_name == 'main' && 'Development') || (startsWith(github.ref_name, 'releases/') && 'Pre-prod') }} - # if: ${{ inputs.deploy-plan-only == false }} - # steps: - # - name: Get Environment Name for ${{ vars.ENV_NAME }} - # id: get_env_name - # uses: Entepotenz/change-string-case-action-min-dependencies@v1 - # with: - # string: ${{ vars.ENV_NAME }} - # - name: Checkout repo - # uses: actions/checkout@v4 - # - name: Start Logstash for ${{ vars.ENV_NAME }} - # if: ${{ inputs.restore-db == true || inputs.clear-opensearch == true}} - # env: - # AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - # AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - # AWS_DEFAULT_REGION: "eu-west-1" - # run: | - # aws \ - # ecs update-service --desired-count 1 --cluster=ecsOpenSupplyHub${{vars.ENV_NAME}}Cluster \ - # --service=OpenSupplyHub${{vars.ENV_NAME}}AppLogstash diff --git a/deploy-ecs.sh b/deploy-ecs.sh index 4f37b312..9c70becc 100755 --- a/deploy-ecs.sh +++ b/deploy-ecs.sh @@ -8,7 +8,7 @@ set -e # Exit on any error # Configuration AWS_PROFILE="tf-sf-website" 
AWS_REGION="us-east-1" -ECR_REPOSITORY="695912022152.dkr.ecr.us-east-1.amazonaws.com/sf-website-dev" +ECR_REPOSITORY="695912022152.dkr.ecr.us-east-1.amazonaws.com/sf-website-development" ECS_CLUSTER="sf-website-dev-cluster" ECS_SERVICE="sf-website-dev-service" IMAGE_TAG="latest" From 8b49799a8e4f1b2cc037ecdbcff4282bde261479 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Wed, 28 May 2025 10:29:53 +0200 Subject: [PATCH 33/46] Update Terraform version to 1.12.0 in apply job - Align Terraform version in apply job with init-and-plan job for consistency --- .github/workflows/deploy_to_aws.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/deploy_to_aws.yml b/.github/workflows/deploy_to_aws.yml index bc44b564..33a418bd 100644 --- a/.github/workflows/deploy_to_aws.yml +++ b/.github/workflows/deploy_to_aws.yml @@ -94,7 +94,7 @@ jobs: - name: Setup Terraform uses: hashicorp/setup-terraform@v2 with: - terraform_version: 1.5.0 + terraform_version: 1.12.0 terraform_wrapper: false - name: Checkout From ad646a3fb81977c7b9b5e08a5907ee5e6f5c7a22 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Wed, 28 May 2025 10:45:37 +0200 Subject: [PATCH 34/46] Fix Terraform lock file consistency issue in GitHub Actions - Add steps to save/restore .terraform.lock.hcl to/from S3 - Ensures consistent provider versions between plan and apply phases --- .github/workflows/deploy_to_aws.yml | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/.github/workflows/deploy_to_aws.yml b/.github/workflows/deploy_to_aws.yml index 33a418bd..6118fc69 100644 --- a/.github/workflows/deploy_to_aws.yml +++ b/.github/workflows/deploy_to_aws.yml @@ -79,6 +79,14 @@ jobs: AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} SETTINGS_BUCKET: sf-website-infrastructure AWS_DEFAULT_REGION: "us-east-1" + + - name: Copy lock file to S3 bucket for ${{ vars.ENV_NAME }} + run: aws s3 cp "deployment/.terraform.lock.hcl" "s3://${{ env.SETTINGS_BUCKET }}/plans/${{ vars.ENV_NAME }}.terraform.lock.hcl" + env: + AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} + SETTINGS_BUCKET: sf-website-infrastructure + AWS_DEFAULT_REGION: "us-east-1" apply: needs: [init-and-plan] runs-on: ubuntu-latest @@ -108,6 +116,14 @@ jobs: SETTINGS_BUCKET: sf-website-infrastructure AWS_DEFAULT_REGION: "us-east-1" + - name: Get lock file from S3 bucket for ${{ vars.ENV_NAME }} + run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/plans/${{ vars.ENV_NAME }}.terraform.lock.hcl" "deployment/.terraform.lock.hcl" + env: + AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} + SETTINGS_BUCKET: sf-website-infrastructure + AWS_DEFAULT_REGION: "us-east-1" + - name: Terraform Apply for ${{ vars.ENV_NAME }} run: | cd deployment From 03a5c5bc14e66faff21bd1187524b7eb9612b601 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Wed, 28 May 2025 10:50:43 +0200 Subject: [PATCH 35/46] Fix duplicate cd command in terraform apply step --- .github/workflows/deploy_to_aws.yml | 2 ++ 1 file changed, 2 insertions(+) diff --git a/.github/workflows/deploy_to_aws.yml b/.github/workflows/deploy_to_aws.yml index 6118fc69..80628a85 100644 --- a/.github/workflows/deploy_to_aws.yml +++ b/.github/workflows/deploy_to_aws.yml @@ -127,6 +127,8 @@ jobs: - name: Terraform Apply for ${{ vars.ENV_NAME }} run: | cd deployment + cd deployment + terraform init -backend-config="environments/backend-${{ vars.ENV_NAME }}.hcl" 
terraform apply "${{ vars.ENV_NAME }}.tfplan" env: AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} From c2bb29c075ea24d82e9b6562f1dd96bc590c23fe Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Wed, 28 May 2025 10:52:49 +0200 Subject: [PATCH 36/46] Remove duplicate cd deployment command in workflow - Remove redundant cd deployment line in Terraform Apply step --- .github/workflows/deploy_to_aws.yml | 1 - 1 file changed, 1 deletion(-) diff --git a/.github/workflows/deploy_to_aws.yml b/.github/workflows/deploy_to_aws.yml index 80628a85..73687f90 100644 --- a/.github/workflows/deploy_to_aws.yml +++ b/.github/workflows/deploy_to_aws.yml @@ -126,7 +126,6 @@ jobs: - name: Terraform Apply for ${{ vars.ENV_NAME }} run: | - cd deployment cd deployment terraform init -backend-config="environments/backend-${{ vars.ENV_NAME }}.hcl" terraform apply "${{ vars.ENV_NAME }}.tfplan" From 9d2154afe91dbc6d1f15719e18a4ca59d621e193 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Wed, 28 May 2025 11:34:05 +0200 Subject: [PATCH 37/46] Update Docker build step in AWS deployment workflow - Change step name from Kafka Tools to Apostrophe Docker image - Add github-token parameter to docker/build-push-action --- .github/workflows/deploy_to_aws.yml | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/.github/workflows/deploy_to_aws.yml b/.github/workflows/deploy_to_aws.yml index 73687f90..95922901 100644 --- a/.github/workflows/deploy_to_aws.yml +++ b/.github/workflows/deploy_to_aws.yml @@ -161,11 +161,12 @@ jobs: uses: aws-actions/amazon-ecr-login@v1 - - name: Build and push Kafka Tools Docker image to ECR for ${{ vars.ENV_NAME }} + - name: Build and push Apostrophe Docker image to ECR for ${{ vars.ENV_NAME }} uses: docker/build-push-action@v2 with: push: true tags: ${{ env.ECR_REPOSITORY }}:${{ env.IMAGE_TAG }} + github-token: ${{ secrets.PAT }} env: IMAGE_TAG: latest ECR_REPOSITORY: 695912022152.dkr.ecr.us-east-1.amazonaws.com/sf-website-${{ vars.ENV_NAME }} From 797968f873916256e788c6b454239c5a5bf40626 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Wed, 28 May 2025 11:43:33 +0200 Subject: [PATCH 38/46] Update Docker build configuration and ignore patterns - Add .cursor/ and website-pages/ to .dockerignore to exclude submodules from container - Replace docker/build-push-action with direct docker commands in GitHub workflow - Specify linux/amd64 platform for Docker build compatibility --- .dockerignore | 4 ++++ .github/workflows/deploy_to_aws.yml | 8 +++----- 2 files changed, 7 insertions(+), 5 deletions(-) diff --git a/.dockerignore b/.dockerignore index 47691999..13b0a0ae 100644 --- a/.dockerignore +++ b/.dockerignore @@ -2,6 +2,10 @@ .git .gitignore +# Submodules (not needed in container) +.cursor/ +website-pages/ + # Node.js dependencies node_modules coverage diff --git a/.github/workflows/deploy_to_aws.yml b/.github/workflows/deploy_to_aws.yml index 95922901..78063326 100644 --- a/.github/workflows/deploy_to_aws.yml +++ b/.github/workflows/deploy_to_aws.yml @@ -162,11 +162,9 @@ jobs: - name: Build and push Apostrophe Docker image to ECR for ${{ vars.ENV_NAME }} - uses: docker/build-push-action@v2 - with: - push: true - tags: ${{ env.ECR_REPOSITORY }}:${{ env.IMAGE_TAG }} - github-token: ${{ secrets.PAT }} + run: | + docker build --platform linux/amd64 -t $ECR_REPOSITORY:$IMAGE_TAG . 
+ docker push $ECR_REPOSITORY:$IMAGE_TAG env: IMAGE_TAG: latest ECR_REPOSITORY: 695912022152.dkr.ecr.us-east-1.amazonaws.com/sf-website-${{ vars.ENV_NAME }} From 50d0594584111ed21be7070a5c295f61e39c8a45 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Wed, 28 May 2025 12:02:08 +0200 Subject: [PATCH 39/46] Simplify ECS secrets configuration in Terraform - Remove conditional filtering for empty values from secrets block --- deployment/main.tf | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/deployment/main.tf b/deployment/main.tf index f1e6c3cb..460fd810 100644 --- a/deployment/main.tf +++ b/deployment/main.tf @@ -270,12 +270,10 @@ module "ecs" { # Secrets from Parameter Store - filter out empty values secrets = { - for k, v in { - SESSION_SECRET = module.parameter_store.session_secret_arn - SERVICE_ACCOUNT_PRIVATE_KEY = module.parameter_store.gcs_service_account_key_arn - DOCUMENTDB_USERNAME = module.parameter_store.documentdb_username_arn - DOCUMENTDB_PASSWORD = module.parameter_store.documentdb_password_arn - } : k => v if v != "" + SESSION_SECRET = module.parameter_store.session_secret_arn + SERVICE_ACCOUNT_PRIVATE_KEY = module.parameter_store.gcs_service_account_key_arn + DOCUMENTDB_USERNAME = module.parameter_store.documentdb_username_arn + DOCUMENTDB_PASSWORD = module.parameter_store.documentdb_password_arn } tags = local.common_tags From d21c0030a9a647ee607e7788a372fbb294423239 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Thu, 5 Jun 2025 18:18:16 +0200 Subject: [PATCH 40/46] Add AWS environment destruction workflow and documentation - Add destroy_aws_environment.yml GitHub workflow for safe infrastructure teardown - Add destroy_workflow_requirements.md with detailed workflow specifications - Update cursor rules submodule to latest version --- .cursor/rules/common | 2 +- .github/workflows/destroy_aws_environment.yml | 193 ++++++++++++++++++ destroy_workflow_requirements.md | 105 ++++++++++ 3 files changed, 299 insertions(+), 1 deletion(-) create mode 100644 .github/workflows/destroy_aws_environment.yml create mode 100644 destroy_workflow_requirements.md diff --git a/.cursor/rules/common b/.cursor/rules/common index 402558c9..0048a591 160000 --- a/.cursor/rules/common +++ b/.cursor/rules/common @@ -1 +1 @@ -Subproject commit 402558c9424d32857338fd40a8c52f63050e424b +Subproject commit 0048a59175d3774a9ed517c1fe8ade681d8cea42 diff --git a/.github/workflows/destroy_aws_environment.yml b/.github/workflows/destroy_aws_environment.yml new file mode 100644 index 00000000..737ff00f --- /dev/null +++ b/.github/workflows/destroy_aws_environment.yml @@ -0,0 +1,193 @@ +name: 'Destroy AWS Environment' + +on: + workflow_dispatch: + inputs: + environment: + description: 'Environment to destroy' + required: true + type: choice + options: + - Development + default: Development + destroy-plan-only: + description: 'Plan only (show what would be destroyed without destroying)' + required: false + type: boolean + default: false + +permissions: + issues: write + +jobs: + destroy-plan: + runs-on: ubuntu-latest + environment: Development + steps: + - name: Get Environment Name for ${{ vars.ENV_NAME }} + id: get_env_name + uses: Entepotenz/change-string-case-action-min-dependencies@v1 + with: + string: ${{ vars.ENV_NAME }} + + - name: Checkout repo + uses: actions/checkout@v4 + + - name: Checkout config repository + uses: actions/checkout@v4 + with: + repository: 'speedandfunction/website-ci-secret' + path: 'terraform-config' + token: ${{ secrets.PAT }} + + - name: Copy tfvars 
for ${{ vars.ENV_NAME }} + run: | + cat "terraform-config/${{ vars.ENV_NAME }}.tfvars" "deployment/environments/${{ vars.ENV_NAME }}.tfvars" > "deployment/terraform.tfvars" + + - name: Setup Terraform + uses: hashicorp/setup-terraform@v2 + with: + terraform_version: 1.12.0 + terraform_wrapper: false + + - name: Terraform Destroy Plan for ${{ vars.ENV_NAME }} + run: | + cd deployment + terraform init -backend-config="environments/backend-${{ vars.ENV_NAME }}.hcl" + terraform plan -destroy -out="${{ vars.ENV_NAME }}-destroy.tfplan" + env: + AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} + AWS_DEFAULT_REGION: "us-east-1" + + - name: Show Destroy Plan Summary + run: | + cd deployment + echo "## 🚨 DESTROY PLAN SUMMARY FOR ${{ vars.ENV_NAME }} 🚨" >> $GITHUB_STEP_SUMMARY + echo "" >> $GITHUB_STEP_SUMMARY + echo "The following resources will be **DESTROYED**:" >> $GITHUB_STEP_SUMMARY + echo "" >> $GITHUB_STEP_SUMMARY + terraform show -no-color "${{ vars.ENV_NAME }}-destroy.tfplan" | head -50 >> $GITHUB_STEP_SUMMARY + env: + AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} + AWS_DEFAULT_REGION: "us-east-1" + + - name: Copy destroy plan to S3 bucket for ${{ vars.ENV_NAME }} + if: ${{ inputs.destroy-plan-only == false }} + run: aws s3 cp "deployment/${{ vars.ENV_NAME }}-destroy.tfplan" "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}-destroy.tfplan" + env: + AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} + SETTINGS_BUCKET: sf-website-infrastructure + AWS_DEFAULT_REGION: "us-east-1" + + - name: Copy lock file to S3 bucket for ${{ vars.ENV_NAME }} + if: ${{ inputs.destroy-plan-only == false }} + run: aws s3 cp "deployment/.terraform.lock.hcl" "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}.terraform.lock.hcl" + env: + AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} + SETTINGS_BUCKET: sf-website-infrastructure + AWS_DEFAULT_REGION: "us-east-1" + + manual-approval: + needs: [destroy-plan] + runs-on: ubuntu-latest + if: ${{ inputs.destroy-plan-only == false }} + steps: + - name: Wait for approval to destroy ${{ vars.ENV_NAME }} + uses: trstringer/manual-approval@v1 + with: + secret: ${{ github.TOKEN }} + approvers: killev + minimum-approvals: 1 + issue-title: "🚨 DESTROY ${{ vars.ENV_NAME }} AWS Environment 🚨" + issue-body: | + ## ⚠️ CRITICAL: Infrastructure Destruction Request ⚠️ + + **Environment**: ${{ vars.ENV_NAME }} + **Requested by**: @${{ github.actor }} + **Workflow run**: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }} + + ### 🚨 WARNING: This action will PERMANENTLY DESTROY the following infrastructure: + - VPC and all networking components + - ECS cluster and services + - DocumentDB cluster and all data + - ElastiCache Redis cluster and all data + - Application Load Balancer + - CloudFront distribution + - S3 buckets and all stored files + - ECR repository and all images + - IAM roles and policies + - CloudWatch logs and metrics + - Parameter Store secrets + + ### 📋 Before approving, please verify: + - [ ] This is the correct environment to destroy + - [ ] All important data has been backed up + - [ ] Team has been notified of the destruction + - [ ] No critical services depend on this infrastructure + + **To approve**: Comment "approve" 
or "approved" + **To deny**: Comment "deny" or "denied" + + **⚠️ THIS ACTION CANNOT BE UNDONE ⚠️** + + destroy-apply: + needs: [manual-approval] + runs-on: ubuntu-latest + if: ${{ inputs.destroy-plan-only == false }} + steps: + - name: Get Environment Name for ${{ vars.ENV_NAME }} + id: get_env_name + uses: Entepotenz/change-string-case-action-min-dependencies@v1 + with: + string: ${{ vars.ENV_NAME }} + + - name: Setup Terraform + uses: hashicorp/setup-terraform@v2 + with: + terraform_version: 1.12.0 + terraform_wrapper: false + + - name: Checkout + uses: actions/checkout@v4 + + - name: Get destroy plan from S3 bucket for ${{ vars.ENV_NAME }} + run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}-destroy.tfplan" "deployment/${{ vars.ENV_NAME }}-destroy.tfplan" + env: + AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} + SETTINGS_BUCKET: sf-website-infrastructure + AWS_DEFAULT_REGION: "us-east-1" + + - name: Get lock file from S3 bucket for ${{ vars.ENV_NAME }} + run: aws s3 cp "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}.terraform.lock.hcl" "deployment/.terraform.lock.hcl" + env: + AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} + SETTINGS_BUCKET: sf-website-infrastructure + AWS_DEFAULT_REGION: "us-east-1" + + - name: 🚨 DESTROY Infrastructure for ${{ vars.ENV_NAME }} 🚨 + run: | + cd deployment + terraform init -backend-config="environments/backend-${{ vars.ENV_NAME }}.hcl" + echo "🚨 DESTROYING INFRASTRUCTURE FOR ${{ vars.ENV_NAME }} - THIS CANNOT BE UNDONE! 🚨" + terraform apply "${{ vars.ENV_NAME }}-destroy.tfplan" + echo "✅ Infrastructure for ${{ vars.ENV_NAME }} has been destroyed" + env: + AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} + AWS_DEFAULT_REGION: "us-east-1" + + - name: Clean up destroy plan files + run: | + aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}-destroy.tfplan" || true + aws s3 rm "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}.terraform.lock.hcl" || true + env: + AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ secrets.TF_AWS_SECRET_ACCESS_KEY }} + SETTINGS_BUCKET: sf-website-infrastructure + AWS_DEFAULT_REGION: "us-east-1" \ No newline at end of file diff --git a/destroy_workflow_requirements.md b/destroy_workflow_requirements.md new file mode 100644 index 00000000..461908e8 --- /dev/null +++ b/destroy_workflow_requirements.md @@ -0,0 +1,105 @@ +# AWS Environment Destroy Workflow Requirements + +## Overview +Create a GitHub Actions workflow to destroy AWS environments created by the "Deploy to AWS" workflow. 
+ +## Trigger Configuration +- **Manual trigger only**: `workflow_dispatch` +- **No automatic triggers**: Only humans can initiate the workflow +- **Environment selection**: Human selects environment when initiating workflow + +## Environment Support +- **Current scope**: Development environment only +- **Future scope**: Staging and Production environments are not ready yet +- **Environment source**: Match existing deploy workflow environment configuration + +## Safety Measures +### Approval Process +- **Method**: Use Manual Workflow Approval action (`trstringer/manual-approval@v1`) +- **Approver**: @killev (GitHub username) +- **Minimum approvals**: 1 approval required +- **Process**: Creates GitHub issue assigned to approver, requires "approve"/"deny" response + +### Confirmation Requirements +- **No manual confirmation typing**: User does not need to type "DESTROY" or environment name +- **Plan-only option**: Include "destroy plan only" option to preview what will be destroyed +- **Safety warnings**: Display warnings about what will be destroyed in approval issue + +## Workflow Structure +### Job 1: destroy-plan +- Run `terraform destroy -plan` +- Show what resources would be destroyed +- Store destroy plan file in S3 bucket +- Always runs when workflow is triggered + +### Job 2: manual-approval +- Use `trstringer/manual-approval@v1` action +- Create GitHub issue assigned to @killev +- Require 1 approval to proceed +- Only runs if `destroy-plan-only` is false + +### Job 3: destroy-apply +- Execute actual infrastructure destruction using stored plan +- Only runs after approval +- Only runs if `destroy-plan-only` is false + +## Technical Configuration +### AWS Configuration +- **Region**: us-east-1 +- **Credentials**: Same as deploy workflow (`TF_AWS_ACCESS_KEY_ID`, `TF_AWS_SECRET_ACCESS_KEY`) +- **S3 bucket**: sf-website-infrastructure (for storing plans) + +### Terraform Configuration +- **Version**: 1.12.0 +- **Backend**: Use same backend configuration as deploy workflow +- **Directory**: `deployment/` +- **Backend config**: `environments/backend-Development.hcl` + +### Workflow Inputs +```yaml +environment: + description: 'Environment to destroy' + required: true + type: choice + options: + - Development + default: Development + +destroy-plan-only: + description: 'Plan only (show what would be destroyed without destroying)' + required: false + type: boolean + default: false +``` + +## Infrastructure Scope +Based on existing Terraform configuration, the workflow will destroy: +- VPC and networking components +- ECS cluster and services +- DocumentDB cluster +- ElastiCache Redis cluster +- Application Load Balancer +- CloudFront distribution +- S3 buckets +- ECR repository +- IAM roles and policies +- CloudWatch resources +- Parameter Store secrets + +## Security Considerations +- Manual approval required before destruction +- Only authorized approver (@killev) can approve +- Plan-only option allows safe preview +- No automatic triggers prevent accidental execution +- Uses same secure credential management as deploy workflow +- Clear audit trail via GitHub issues +- Workflow will wait indefinitely for approval (subject to 35-day GitHub limit) + +## Setup Requirements +- **Permissions**: Workflow needs `issues: write` permission to create approval issues +- **Approver access**: @killev must have repository access to comment on issues +- **No additional environment setup required** + +## File Location +- **Path**: `.github/workflows/destroy_aws_environment.yml` +- **Repository**: Same 
repository as existing deploy workflow \ No newline at end of file From 5f884d93b5fc55321a94bab42d781e7aefd595a2 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Thu, 5 Jun 2025 18:23:21 +0200 Subject: [PATCH 41/46] Update wget version to 1.25.0-r1 in Dockerfile - Update wget package version from 1.25.0-r0 to 1.25.0-r1 --- Dockerfile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Dockerfile b/Dockerfile index c6f18b61..8ce66e4d 100644 --- a/Dockerfile +++ b/Dockerfile @@ -4,7 +4,7 @@ FROM node:23-alpine WORKDIR /app # Install dependencies needed for health checks with pinned version - RUN apk add --no-cache wget=1.25.0-r0 + RUN apk add --no-cache wget=1.25.0-r1 RUN wget https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem RUN chmod 600 global-bundle.pem From 775bd50f6bb938a1fefcb8d8715b0e0f7d73ace3 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Sat, 7 Jun 2025 15:08:27 +0200 Subject: [PATCH 42/46] Add conditional approval for AWS environment destruction - Skip manual approval for Development environment destruction - Add both destroy-plan and manual-approval as dependencies for destroy-apply job - Update destroy-apply condition to handle Development environment without approval --- .github/workflows/destroy_aws_environment.yml | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/.github/workflows/destroy_aws_environment.yml b/.github/workflows/destroy_aws_environment.yml index 56a29831..72ccd465 100644 --- a/.github/workflows/destroy_aws_environment.yml +++ b/.github/workflows/destroy_aws_environment.yml @@ -94,7 +94,7 @@ jobs: manual-approval: needs: [destroy-plan] runs-on: ubuntu-latest - if: ${{ inputs.plan_only == false }} + if: ${{ inputs.plan_only == false && inputs.environment != 'Development' }} steps: - name: Wait for approval to destroy ${{ vars.ENV_NAME }} uses: trstringer/manual-approval@v1 @@ -135,9 +135,9 @@ jobs: **⚠️ THIS ACTION CANNOT BE UNDONE ⚠️** destroy-apply: - needs: [manual-approval] + needs: [destroy-plan, manual-approval] runs-on: ubuntu-latest - if: ${{ inputs.plan_only == false }} + if: ${{ inputs.plan_only == false && (inputs.environment == 'Development' || needs.manual-approval.result == 'success') }} steps: - name: Get Environment Name for ${{ vars.ENV_NAME }} id: get_env_name From 168bb715e8a8637b92e3a8680c3a5c4542c02b8b Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Sat, 7 Jun 2025 15:49:46 +0200 Subject: [PATCH 43/46] Add environment specification to destroy workflow jobs - Add environment: Development to manual-approval job - Add environment: Development to destroy-apply job --- .github/workflows/destroy_aws_environment.yml | 2 ++ 1 file changed, 2 insertions(+) diff --git a/.github/workflows/destroy_aws_environment.yml b/.github/workflows/destroy_aws_environment.yml index 72ccd465..62ac17fc 100644 --- a/.github/workflows/destroy_aws_environment.yml +++ b/.github/workflows/destroy_aws_environment.yml @@ -94,6 +94,7 @@ jobs: manual-approval: needs: [destroy-plan] runs-on: ubuntu-latest + environment: Development if: ${{ inputs.plan_only == false && inputs.environment != 'Development' }} steps: - name: Wait for approval to destroy ${{ vars.ENV_NAME }} @@ -137,6 +138,7 @@ jobs: destroy-apply: needs: [destroy-plan, manual-approval] runs-on: ubuntu-latest + environment: Development if: ${{ inputs.plan_only == false && (inputs.environment == 'Development' || needs.manual-approval.result == 'success') }} steps: - name: Get Environment Name for ${{ vars.ENV_NAME }} From 
a1aced8cb1bc15ad4aa5c3ffeb3352448ad16dfe Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Sat, 7 Jun 2025 15:53:14 +0200 Subject: [PATCH 44/46] Optimize GitHub Actions destroy workflow conditions - Simplify plan_only boolean expressions using negation syntax - Move conditional from job level to step level in manual-approval - Simplify job dependencies in destroy-apply workflow --- .github/workflows/destroy_aws_environment.yml | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/.github/workflows/destroy_aws_environment.yml b/.github/workflows/destroy_aws_environment.yml index 62ac17fc..091b8683 100644 --- a/.github/workflows/destroy_aws_environment.yml +++ b/.github/workflows/destroy_aws_environment.yml @@ -83,7 +83,7 @@ jobs: AWS_DEFAULT_REGION: "us-east-1" - name: Copy lock file to S3 bucket for ${{ vars.ENV_NAME }} - if: ${{ inputs.plan_only == false }} + if: ${{ !inputs.plan_only }} run: aws s3 cp "deployment/.terraform.lock.hcl" "s3://${{ env.SETTINGS_BUCKET }}/destroy-plans/${{ vars.ENV_NAME }}.terraform.lock.hcl" env: AWS_ACCESS_KEY_ID: ${{ secrets.TF_AWS_ACCESS_KEY_ID }} @@ -95,9 +95,9 @@ jobs: needs: [destroy-plan] runs-on: ubuntu-latest environment: Development - if: ${{ inputs.plan_only == false && inputs.environment != 'Development' }} steps: - name: Wait for approval to destroy ${{ vars.ENV_NAME }} + if: ${{ !inputs.plan_only }} uses: trstringer/manual-approval@v1 with: secret: ${{ github.TOKEN }} @@ -136,7 +136,7 @@ jobs: **⚠️ THIS ACTION CANNOT BE UNDONE ⚠️** destroy-apply: - needs: [destroy-plan, manual-approval] + needs: [manual-approval] runs-on: ubuntu-latest environment: Development if: ${{ inputs.plan_only == false && (inputs.environment == 'Development' || needs.manual-approval.result == 'success') }} From b59b21d46ed326ce2f21d014074a8e098fb7fc38 Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Sat, 7 Jun 2025 15:55:19 +0200 Subject: [PATCH 45/46] Fix manual approval conditions in destroy AWS environment workflow - Move plan_only check to job level for manual-approval job - Change step condition to only require approval for non-Development environments --- .github/workflows/destroy_aws_environment.yml | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/.github/workflows/destroy_aws_environment.yml b/.github/workflows/destroy_aws_environment.yml index 091b8683..670affc3 100644 --- a/.github/workflows/destroy_aws_environment.yml +++ b/.github/workflows/destroy_aws_environment.yml @@ -95,9 +95,10 @@ jobs: needs: [destroy-plan] runs-on: ubuntu-latest environment: Development + if: ${{ !inputs.plan_only }} steps: - name: Wait for approval to destroy ${{ vars.ENV_NAME }} - if: ${{ !inputs.plan_only }} + if: ${{ inputs.environment != 'Development' }} uses: trstringer/manual-approval@v1 with: secret: ${{ github.TOKEN }} From 3b5bac3908a8a65ff36df5a3d08435d6e477539a Mon Sep 17 00:00:00 2001 From: Peter Ovchyn Date: Sat, 7 Jun 2025 21:24:43 +0200 Subject: [PATCH 46/46] Add force delete option to ECR repository - Enable force_delete on ECR repository resource to allow deletion with images --- deployment/modules/ecr/main.tf | 1 + 1 file changed, 1 insertion(+) diff --git a/deployment/modules/ecr/main.tf b/deployment/modules/ecr/main.tf index 1848e142..328b409b 100644 --- a/deployment/modules/ecr/main.tf +++ b/deployment/modules/ecr/main.tf @@ -2,6 +2,7 @@ resource "aws_ecr_repository" "main" { name = "${var.name_prefix}-${var.environment}" image_tag_mutability = "MUTABLE" + force_delete = true image_scanning_configuration { scan_on_push = true