This Terraform module deploys Detectify's Internal Scanning solution on AWS using Amazon EKS with Auto Mode.
- EKS Auto Mode - Simplified Kubernetes management with automatic node provisioning
- Automatic TLS - ACM certificate provisioning with DNS validation
- Internal ALB - Secure internal Application Load Balancer
- Horizontal Pod Autoscaling - Automatic scaling based on CPU/memory utilization
- KMS Encryption - Secrets encrypted at rest using AWS KMS
- CloudWatch Integration - Optional observability with CloudWatch Logs and Metrics
- Prometheus Monitoring - Optional Prometheus stack with Pushgateway
- Terraform >= 1.5.0
- AWS CLI configured with credentials (`aws configure` or environment variables)
- AWS IAM permissions - The IAM principal running Terraform needs permissions to create:
- EKS clusters and node groups
- IAM roles and policies
- KMS keys
- Security groups
- Application Load Balancers
- Route53 records (if using DNS integration)
- ACM certificates (if using automatic TLS)
- VPC with at least 2 private subnets (in different availability zones)
- Detectify License - Contact Detectify to obtain:
- License key
- Registry credentials
- Connector API key
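The license and registry values above are secrets referenced as `var.*` in the usage example below. A minimal sketch of how they might be declared as sensitive Terraform variables (variable names match the example; supply values outside version control, e.g. via a `.tfvars` file or `TF_VAR_*` environment variables):

```hcl
# Sensitive input variables for the Detectify-provided secrets.
variable "license_key" {
  description = "Scanner license key provided by Detectify"
  type        = string
  sensitive   = true
}

variable "connector_api_key" {
  description = "Connector API key provided by Detectify"
  type        = string
  sensitive   = true
}

variable "registry_username" {
  description = "Docker registry username provided by Detectify"
  type        = string
  sensitive   = true
}

variable "registry_password" {
  description = "Docker registry password provided by Detectify"
  type        = string
  sensitive   = true
}
```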
provider "aws" {
region = "eu-west-1"
default_tags {
tags = {
Project = "detectify-internal-scanning"
Environment = "production"
ManagedBy = "terraform"
}
}
}
# EKS authentication - retrieves token for Kubernetes and Helm providers
data "aws_eks_cluster_auth" "cluster" {
name = module.internal_scanner.cluster_name
depends_on = [
# Explicit dependency must be set for data resources or it'll be
# evaluated as soon as the cluster name is known rather than waiting
# until the cluster is deployed.
# https://developer.hashicorp.com/terraform/language/data-sources#dependencies
module.internal_scanner.cluster_name,
]
}
provider "kubernetes" {
host = module.internal_scanner.cluster_endpoint
cluster_ca_certificate = base64decode(module.internal_scanner.cluster_certificate_authority_data)
token = data.aws_eks_cluster_auth.cluster.token
}
provider "helm" {
kubernetes = {
host = module.internal_scanner.cluster_endpoint
cluster_ca_certificate = base64decode(module.internal_scanner.cluster_certificate_authority_data)
token = data.aws_eks_cluster_auth.cluster.token
}
}
module "internal_scanner" {
source = "detectify/detectify-internal-scanning/aws"
version = "~> 1.0"
# Core Configuration
environment = "production"
aws_region = "eu-west-1"
# Network Configuration
vpc_id = "vpc-xxxxx"
private_subnet_ids = ["subnet-xxxxx", "subnet-yyyyy"]
alb_inbound_cidrs = ["10.0.0.0/8"]
# Scanner Endpoint
scanner_url = "scanner.internal.example.com"
create_route53_record = true
route53_zone_id = "Z1234567890ABC"
# License Configuration (provided by Detectify)
license_key = var.license_key
connector_api_key = var.connector_api_key
# Registry Authentication (provided by Detectify)
registry_username = var.registry_username
registry_password = var.registry_password
}- Basic Deployment - Minimal configuration
- Complete Deployment - Full-featured with autoscaling, monitoring, and DNS
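As a rough sketch of what the complete deployment layers on top of the basic configuration (flag names come from the inputs below; the Prometheus URL and version tag are illustrative placeholders):

```hcl
module "internal_scanner" {
  source  = "detectify/detectify-internal-scanning/aws"
  version = "~> 1.0"

  # ... core, network, DNS, and license configuration as in the basic example ...

  # Horizontal Pod Autoscaling for scan-scheduler and scan-manager
  enable_autoscaling = true

  # Observability
  enable_cloudwatch_observability = true
  enable_prometheus               = true
  prometheus_url                  = "http://prometheus.monitoring.svc:9090" # illustrative

  # Pin scanner images for production stability
  internal_scanning_version = "v1.0.0" # illustrative version tag
}
```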
| Name | Version |
|---|---|
| terraform | >= 1.5.0 |
| aws | >= 5.52 |
| helm | >= 2.9.0 |
| kubernetes | >= 2.13.1 |
| Name | Version |
|---|---|
| aws | >= 5.52 |
| helm | >= 2.9.0 |
| kubernetes | >= 2.13.1 |
| Name | Source | Version |
|---|---|---|
| eks | terraform-aws-modules/eks/aws | ~> 20.0 |
| load_balancer_controller_irsa_role | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks | ~> 5.0 |
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| acm_certificate_arn | ARN of an existing ACM certificate. Required when create_acm_certificate = false. | string | null | no |
| acm_validation_zone_id | Route53 hosted zone ID for ACM certificate DNS validation. Defaults to route53_zone_id if not specified. IMPORTANT: Must be a PUBLIC hosted zone - ACM certificate validation requires publicly resolvable DNS records. If route53_zone_id is a private zone, you must provide a separate public zone here, or use create_acm_certificate = false with your own certificate. | string | null | no |
| alb_inbound_cidrs | CIDR blocks allowed to access the scanner API endpoint via the internal ALB. Typically includes: your VPC CIDR (required - scanner components communicate via the ALB) and VPN/corporate network CIDRs (for administrative access and debugging). Example: ["10.0.0.0/16", "172.16.0.0/12"] | list(string) | n/a | yes |
| aws_region | AWS Region where resources will be deployed | string | "us-east-1" | no |
| chrome_controller_replicas | Number of Chrome Controller replicas | number | 1 | no |
| chrome_controller_resources | Resource requests and limits for Chrome Controller | object({...}) | {...} | no |
| cluster_admin_role_arns | IAM role ARNs to grant cluster admin access (for AWS Console/CLI access) | list(string) | [] | no |
| cluster_endpoint_public_access | Enable public access to the EKS cluster API endpoint. When true, the Kubernetes API is reachable over the internet (subject to cluster_endpoint_public_access_cidrs). Use this when users need kubectl/deployment access without VPN connectivity. Private access remains enabled regardless of this setting. IMPORTANT: Even with public access, all requests still require valid IAM authentication. Restrict access further using cluster_endpoint_public_access_cidrs. | bool | false | no |
| cluster_endpoint_public_access_cidrs | CIDR blocks allowed to access the EKS cluster API endpoint over the public internet. Only applies when cluster_endpoint_public_access = true. IMPORTANT: When enabling public access, restrict this to specific IPs instead of using the default 0.0.0.0/0. AWS requires at least one CIDR in this list. Example: ["203.0.113.0/24", "198.51.100.10/32"] | list(string) | ["0.0.0.0/0"] | no |
| cluster_name_prefix | Prefix for EKS cluster name (will be combined with environment) | string | "internal-scanning" | no |
| cluster_security_group_additional_rules | Additional security group rules for the EKS cluster API endpoint. Required when Terraform runs from a network that doesn't have direct access to the private subnets (e.g., local machine via VPN, CI/CD pipeline in another VPC, bastion host). Add an ingress rule for port 443 from the CIDR where Terraform is running. Example: cluster_security_group_additional_rules = { terraform_access = { description = "Allow Terraform access from VPN" type = "ingress" from_port = 443 to_port = 443 protocol = "tcp" cidr_blocks = ["10.0.0.0/8"] } } | map(any) | {} | no |
| cluster_version | Kubernetes version for EKS cluster | string | "1.35" | no |
| completed_scans_poll_interval_seconds | Interval in seconds for checking if running scans have completed. Minimum: 10 seconds. Lower values provide faster result reporting but increase Redis load. | number | 60 | no |
| connector_api_key | Connector API key for authentication with Detectify services | string | n/a | yes |
| connector_server_url | Connector service URL for scanner communication. Defaults to production connector. | string | "https://connector.detectify.com" | no |
| create_acm_certificate | Create and validate an ACM certificate for the scanner endpoint. Set to false to use an existing certificate. | bool | true | no |
| create_route53_record | Create Route53 DNS A record pointing scanner_url to the ALB. Set to false if managing DNS externally. | bool | false | no |
| deploy_redis | Deploy in-cluster Redis. Set to false when using managed Redis (e.g., ElastiCache, Memorystore) and override redis_url. | bool | true | no |
| enable_autoscaling | Enable Horizontal Pod Autoscaler (HPA) for scan-scheduler and scan-manager | bool | false | no |
| enable_cloudwatch_observability | Enable Amazon CloudWatch Observability addon for logs and metrics | bool | true | no |
| enable_cluster_creator_admin_permissions | Whether to grant the cluster creator admin permissions. Set to true to allow the creator to manage the cluster, false to manage all access manually via cluster_admin_role_arns. | bool | true | no |
| enable_prometheus | Enable Prometheus monitoring stack with pushgateway. When true, deploys Prometheus for metrics scraping and pushgateway for ephemeral job metrics. When false, no metrics infrastructure is deployed. | bool | false | no |
| environment | Environment name (e.g., development, staging, production) | string | n/a | yes |
| helm_chart_path | Local path to Helm chart. Takes precedence over helm_chart_repository when set. | string | null | no |
| helm_chart_repository | Helm chart repository URL. Set to null to use helm_chart_path instead. | string | "https://detectify.github.io/helm-charts" | no |
| helm_chart_version | Helm chart version. Only used when helm_chart_repository is set. | string | null | no |
| image_registry_path | Path within the registry where scanner images are stored. Combined with registry_server to form full image URLs (e.g., registry_server/image_registry_path/scan-scheduler). | string | "internal-scanning" | no |
| internal_scanning_version | Version tag for all scanner images (scan-scheduler, scan-manager, scan-worker, chrome-controller, chrome-container). Defaults to 'latest' to ensure customers always have the newest security tests. For production stability, consider pinning to a specific version (e.g., 'v1.0.0'). | string | "stable" | no |
| kms_key_arn | ARN of an existing KMS key to use for secrets encryption. If not provided, a new KMS key will be created. | string | null | no |
| kms_key_deletion_window | Number of days before KMS key is deleted after destruction (7-30 days) | number | 30 | no |
| license_key | Scanner license key | string | n/a | yes |
| license_server_url | License validation server URL. Defaults to production license server. | string | "https://license.detectify.com" | no |
| log_format | Log output format. Use 'json' for machine-readable logs (recommended for log aggregation systems like ELK, Splunk, CloudWatch). Use 'text' for human-readable console output. | string | "json" | no |
| max_scan_duration_seconds | Maximum duration for a single scan in seconds. If not specified, defaults to 172800 (2 days). Only set this if you need to override the default. | number | null | no |
| private_subnet_ids | Private subnet IDs for EKS nodes and internal ALB | list(string) | n/a | yes |
| prometheus_url | Full URL for Prometheus endpoint (required if enable_prometheus is true) | string | null | no |
| redis_resources | Resource requests and limits for Redis | object({...}) | {...} | no |
| redis_storage_class | Kubernetes StorageClass for the Redis PVC. Defaults to ebs-gp3 (EKS Auto Mode with EBS CSI driver). | string | "ebs-gp3" | no |
| redis_storage_size | Redis persistent volume size | string | "8Gi" | no |
| redis_url | Redis connection URL. Override when using external/managed Redis. Include credentials and use rediss:// for TLS (e.g., rediss://user:pass@my-redis.example.com:6379). | string | "redis://redis:6379" | no |
| registry_password | Docker registry password for image pulls | string | n/a | yes |
| registry_server | Docker registry server hostname for authentication and image pulls (e.g., registry.detectify.com) | string | "registry.detectify.com" | no |
| registry_username | Docker registry username for image pulls | string | n/a | yes |
| route53_zone_id | Route53 hosted zone ID for the scanner DNS A record. Required when create_route53_record = true. Can be a private hosted zone if the scanner is only accessed internally. The zone must contain the domain used in scanner_url. | string | null | no |
| scan_manager_autoscaling | Autoscaling configuration for Scan Manager | object({...}) | {...} | no |
| scan_manager_replicas | Number of Scan Manager replicas | number | 1 | no |
| scan_manager_resources | Resource requests and limits for Scan Manager | object({...}) | {...} | no |
| scan_scheduler_autoscaling | Autoscaling configuration for Scan Scheduler | object({...}) | {...} | no |
| scan_scheduler_replicas | Number of Scan Scheduler replicas | number | 1 | no |
| scan_scheduler_resources | Resource requests and limits for Scan Scheduler | object({...}) | {...} | no |
| scanner_url | Hostname for the scanner API endpoint (e.g., scanner.example.com). This endpoint is used to start scans (from CI/CD pipelines or manually), get scan status, and get logs (for support requests to Detectify). The endpoint is exposed via an internal ALB and is only accessible from networks specified in alb_inbound_cidrs. | string | n/a | yes |
| scheduled_scans_poll_interval_seconds | Interval in seconds for polling the connector for scheduled scans. Minimum: 60 seconds. Lower values increase API calls to Detectify. | number | 600 | no |
| vpc_id | VPC ID where EKS cluster will be deployed | string | n/a | yes |
| Name | Description |
|---|---|
| acm_certificate_arn | ARN of the ACM certificate for scan scheduler |
| acm_certificate_domain_validation_options | Domain validation options for ACM certificate. Use these to create DNS validation records when managing DNS externally (create_route53_record = false). |
| alb_controller_role_arn | IAM role ARN for AWS Load Balancer Controller |
| alb_dns_name | DNS name of the ALB created for scan scheduler |
| alb_zone_id | Route53 zone ID of the ALB (for DNS record creation) |
| cloudwatch_observability_role_arn | IAM role ARN for CloudWatch Observability addon |
| cluster_certificate_authority_data | Base64 encoded certificate data for cluster authentication |
| cluster_endpoint | EKS cluster API endpoint |
| cluster_id | EKS cluster ID |
| cluster_name | EKS cluster name |
| cluster_oidc_issuer_url | The URL on the EKS cluster OIDC Issuer |
| cluster_primary_security_group_id | Cluster security group that was created by Amazon EKS for the cluster. Managed node groups use this security group for control-plane-to-data-plane communication. Referred to as 'Cluster security group' in the EKS console |
| cluster_security_group_id | Security group ID attached to the EKS cluster |
| kms_key_arn | ARN of the KMS key used for EKS secrets encryption |
| kms_key_id | ID of the KMS key used for EKS secrets encryption (only if created by module) |
| kubeconfig_command | Command to update kubeconfig for kubectl access |
| oidc_provider_arn | ARN of the OIDC Provider for EKS |
| prometheus_url | Prometheus endpoint URL (if enabled) |
| scanner_namespace | Kubernetes namespace where scanner is deployed |
| scanner_url | Scanner API endpoint URL |
| vpc_cni_role_arn | IAM role ARN for VPC CNI addon |
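If you manage the scanner's DNS record yourself rather than via create_route53_record, a sketch like the following can consume the ALB outputs (the zone ID and hostname are placeholders for your own values):

```hcl
# Alias record pointing the scanner hostname at the internal ALB,
# built from the module's alb_dns_name and alb_zone_id outputs.
resource "aws_route53_record" "scanner" {
  zone_id = "Z_MY_PRIVATE_ZONE" # placeholder: your own hosted zone
  name    = "scanner.internal.example.com"
  type    = "A"

  alias {
    name                   = module.internal_scanner.alb_dns_name
    zone_id                = module.internal_scanner.alb_zone_id
    evaluate_target_health = true
  }
}
```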
+------------------+
| Route53 DNS |
+--------+---------+
|
+--------v---------+
| Internal ALB |
| (HTTPS/443) |
+--------+---------+
|
+--------v---------+
| Scan Scheduler | <-- API Entry Point
| (HPA: 2-10) |
+--------+---------+
|
+--------v---------+
| Redis | <-- Job Queue
| (Persistent) |
+--------+---------+
|
+--------v---------+
| Scan Manager | <-- Creates scan workers
| (HPA: 1-20) |
+--------+---------+
| |
+--------------+ +--------------+
| |
+--------v--------+ +-----------v-----------+
| Scan Worker | | Chrome Controller |
| (ephemeral pods)|<------------------>| |
+-----------------+ +-----------+-----------+
|
+-----------v-----------+
| Chrome Container |
| (browser instances) |
+-----------------------+
Component Responsibilities:
- Scan Scheduler: API entry point, validates licenses, queues scan jobs to Redis
- Redis: Persistent job queue for scan requests
- Scan Manager: Polls Redis, creates ephemeral scan-worker pods, reports results to Connector
- Scan Worker: Ephemeral pods that execute security scans
- Chrome Controller: Manages browser instances for JavaScript-heavy scanning
- Chrome Container: Ephemeral Chrome instances used by scan workers
The scanner exposes an internal API endpoint via an Application Load Balancer (ALB). This endpoint is used to:
- Start scans from CI/CD pipelines or manually
- Get scan status to monitor progress
- Get logs for support requests to Detectify
Network Access: Configure alb_inbound_cidrs to allow access from:
- Your VPC CIDR (for CI/CD pipelines running in your network)
- VPN/corporate networks (for manual access and support debugging)
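For instance (the CIDRs are placeholders for your own ranges):

```hcl
# Allow the VPC itself plus a corporate VPN range to reach the scanner API
alb_inbound_cidrs = [
  "10.0.0.0/16",   # VPC CIDR - required, scanner components communicate via the ALB
  "172.16.0.0/12", # VPN/corporate network for manual access and debugging
]
```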
If you have a public hosted zone:
scanner_url = "scanner.example.com"
create_route53_record = true
route53_zone_id = "Z1234567890ABC" # Public zone works for both DNS and ACM
If you have separate private and public zones:
scanner_url = "scanner.internal.example.com"
create_route53_record = true
route53_zone_id = "Z_PRIVATE_ZONE" # Private zone for DNS A record
acm_validation_zone_id = "Z_PUBLIC_ZONE" # Public zone for ACM validation
If you manage DNS outside of Route53 (create_route53_record = false):
- Option A: Bring your own certificate
  create_acm_certificate = false
  acm_certificate_arn    = "arn:aws:acm:region:account:certificate/xxx"
  Then create your DNS record pointing to the ALB (available in outputs).
- Option B: Manual certificate validation
  - Set `create_acm_certificate = true`
  - The certificate will be created in `PENDING_VALIDATION` state
  - Retrieve validation records from the `acm_certificate_domain_validation_options` output
  - Create the CNAME records in your DNS provider
  - ACM will automatically validate once records are detected
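A small sketch for surfacing those validation records from your root module, so they can be handed to whoever manages DNS:

```hcl
# Re-export the ACM DNS validation records (CNAME name/value pairs)
# so they can be created in an external DNS provider.
output "acm_validation_records" {
  value = module.internal_scanner.acm_certificate_domain_validation_options
}
```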
By default, the EKS cluster API endpoint is only accessible from within the VPC (private access). If your users don't have VPN connectivity to the VPC, you can enable public access:
module "internal_scanner" {
# ...
# Enable public Kubernetes API access
cluster_endpoint_public_access = true
cluster_endpoint_public_access_cidrs = ["198.51.100.0/24"] # Your office/CI IP range
}
When both private and public access are enabled:
- From inside the VPC (EKS nodes, ALB controller): traffic routes over the private endpoint
- From outside the VPC (your laptop, CI/CD): traffic routes over the public endpoint
IAM authentication is always required regardless of endpoint type. Restrict cluster_endpoint_public_access_cidrs to known IP ranges for defense in depth.
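If you instead keep the API endpoint private and reach it over VPN or VPC peering, open the cluster security group to the network Terraform runs from, following the pattern shown in the cluster_security_group_additional_rules variable description (the CIDR below is a placeholder):

```hcl
module "internal_scanner" {
  # ...

  # Allow Terraform and kubectl traffic from the VPN/CI network to the
  # private EKS API endpoint on port 443.
  cluster_security_group_additional_rules = {
    terraform_access = {
      description = "Allow Terraform access from VPN"
      type        = "ingress"
      from_port   = 443
      to_port     = 443
      protocol    = "tcp"
      cidr_blocks = ["10.0.0.0/8"] # your VPN/CI network CIDR
    }
  }
}
```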
The module supports Horizontal Pod Autoscaling (HPA) for scan-scheduler and scan-manager:
| Component | Default Min | Default Max | Scale Trigger |
|---|---|---|---|
| Scan Scheduler | 2 | 10 | 70% CPU |
| Scan Manager | 1 | 20 | 80% CPU |
Scaling Behavior:
- Scale Up: Fast (100% increase every 30s or +2 pods, whichever is larger)
- Scale Down: Gradual (50% decrease every 60s with 5-minute stabilization)
Node autoscaling is handled automatically by EKS Auto Mode.
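Enabling HPA is a single flag; the scan_scheduler_autoscaling and scan_manager_autoscaling inputs can override the defaults in the table above if needed. A minimal sketch:

```hcl
module "internal_scanner" {
  # ...

  # Turn on HPA for scan-scheduler and scan-manager with the default
  # ranges shown above (2-10 and 1-20 replicas respectively).
  enable_autoscaling = true
}
```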
To upgrade the scanner components:
module "internal_scanner" {
# ...
internal_scanning_version = "v1.2.0" # New version
}
Then apply:
terraform apply
# Get kubeconfig
aws eks update-kubeconfig --region <region> --name <cluster-name>
# Check pods
kubectl get pods -n scanner
# Check logs
kubectl logs -n scanner deployment/scan-scheduler
kubectl logs -n scanner deployment/scan-manager
| Issue | Solution |
|---|---|
| Terraform hangs or times out connecting to EKS | Terraform needs network access to the EKS API endpoint (port 443). Option 1 (no VPN): Set cluster_endpoint_public_access = true and restrict cluster_endpoint_public_access_cidrs to your IP. Option 2 (VPN/peering): Add cluster_security_group_additional_rules with an ingress rule for your network CIDR. See variable descriptions for examples. |
| Pods stuck in ImagePullBackOff | Verify registry credentials are correct |
| ALB not provisioning | Check subnet tags and IAM permissions |
| Certificate validation failing | Ensure ACM validation DNS zone is public |
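For the ALB case, one common cause is missing subnet discovery tags: internal load balancers are only provisioned into subnets carrying the internal-ELB role tag. A sketch that tags the private subnets, assuming that tag is what is missing (subnet IDs are placeholders):

```hcl
# The load balancer controller discovers subnets for internal ALBs via this tag.
resource "aws_ec2_tag" "internal_elb" {
  for_each    = toset(["subnet-xxxxx", "subnet-yyyyy"]) # your private subnet IDs
  resource_id = each.value
  key         = "kubernetes.io/role/internal-elb"
  value       = "1"
}
```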
Business Source License 1.1 (BSL 1.1) - See LICENSE for details.
- Documentation: Detectify Docs
- Contact: Detectify Support