Conversation
Ensure S3 Bucket has public access blocks
Resource: aws_s3_bucket.financials | ID: BC_AWS_NETWORKING_52
How to Fix
```hcl
resource "aws_s3_bucket" "bucket_good_1" {
  bucket = "bucket_good"
}

resource "aws_s3_bucket_public_access_block" "access_good_1" {
  bucket              = aws_s3_bucket.bucket_good_1.id
  block_public_acls   = true
  block_public_policy = true
}
```
Description
When you create an S3 bucket, it is good practice to also set the resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally made public. We recommend you ensure the S3 bucket has public access blocks. If a public access block is not attached, its settings default to false.
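The fix above sets only two of the four public-access-block arguments. A fuller sketch for the flagged bucket (the resource reference matches `aws_s3_bucket.financials`; enabling all four arguments is a recommendation, not part of the bot's fix) would be:

```hcl
resource "aws_s3_bucket_public_access_block" "financials" {
  bucket = aws_s3_bucket.financials.id

  # Each of these defaults to false when this resource is absent.
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

Enabling all four blocks both the creation of new public ACLs/policies and the honoring of any that already exist.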
Ensure all S3 buckets have an owner tag
Resource: aws_s3_bucket.financials | ID: caswalker_AWS_1638221712530
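This custom policy ships no fix snippet. Assuming it checks for an `owner` key in the bucket's tags (the exact tag key and value conventions are an assumption), a minimal sketch:

```hcl
resource "aws_s3_bucket" "financials" {
  bucket = "financials-bucket" # hypothetical name

  tags = {
    owner = "data-platform-team" # illustrative value
  }
}
```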
Ensure S3 Bucket has public access blocks
Resource: aws_s3_bucket.data_science | ID: BC_AWS_NETWORKING_52
How to Fix
```hcl
resource "aws_s3_bucket" "bucket_good_1" {
  bucket = "bucket_good"
}

resource "aws_s3_bucket_public_access_block" "access_good_1" {
  bucket              = aws_s3_bucket.bucket_good_1.id
  block_public_acls   = true
  block_public_policy = true
}
```
Description
When you create an S3 bucket, it is good practice to also set the resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally made public. We recommend you ensure the S3 bucket has public access blocks. If a public access block is not attached, its settings default to false.
Ensure all S3 buckets have an owner tag
Resource: aws_s3_bucket.logs | ID: caswalker_AWS_1638221712530
Ensure S3 bucket has cross-region replication enabled
Resource: aws_s3_bucket.financials | ID: BC_AWS_GENERAL_72
How to Fix
```diff
resource "aws_s3_bucket" "test" {
  ...
+ replication_configuration {
+   role = aws_iam_role.replication.arn
+   rules {
+     id     = "foobar"
+     prefix = "foo"
+     status = "Enabled"
+
+     destination {
+       bucket        = aws_s3_bucket.destination.arn
+       storage_class = "STANDARD"
+     }
+   }
+ }
}
```
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication supports copying new S3 objects after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
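The fix above uses the inline `replication_configuration` block, which is deprecated in AWS provider v4+. A sketch of the standalone-resource form for the flagged bucket (the replication role and destination bucket are assumptions; versioning must be enabled on both buckets):

```hcl
resource "aws_s3_bucket_replication_configuration" "financials" {
  bucket = aws_s3_bucket.financials.id
  role   = aws_iam_role.replication.arn # assumes an existing replication role

  rule {
    id     = "replicate-all"
    status = "Enabled"

    destination {
      bucket        = aws_s3_bucket.destination.arn # bucket in another region
      storage_class = "STANDARD"
    }
  }
}
```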
Ensure all S3 buckets have an owner tag
Resource: aws_s3_bucket.data_science | ID: caswalker_AWS_1638221712530
Suggested change:
```hcl
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
      }
    }
  }
}
```
Ensure S3 buckets are encrypted with KMS by default
Resource: aws_s3_bucket.data_science | ID: BC_AWS_GENERAL_56
Description
TBA
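To satisfy this check, the default-encryption rule can also reference a customer-managed key via `kms_master_key_id`. A sketch using the inline (pre-v4) provider syntax, with hypothetical resource names:

```hcl
resource "aws_kms_key" "data_science" {
  description         = "CMK for the data_science bucket" # hypothetical key
  enable_key_rotation = true
}

resource "aws_s3_bucket" "data_science" {
  bucket = "data-science-bucket" # hypothetical name

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm     = "aws:kms"
        kms_master_key_id = aws_kms_key.data_science.arn
      }
    }
  }
}
```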
Ensure AWS access logging is enabled on S3 buckets
Resource: aws_s3_bucket.operations | ID: BC_AWS_S3_13
How to Fix
```diff
resource "aws_s3_bucket" "bucket" {
  acl           = var.s3_bucket_acl
  bucket        = var.s3_bucket_name
  policy        = var.s3_bucket_policy
  force_destroy = var.s3_bucket_force_destroy

  versioning {
    enabled    = var.versioning
    mfa_delete = var.mfa_delete
  }

+ dynamic "logging" {
+   for_each = var.logging
+   content {
+     target_bucket = logging.value["target_bucket"]
+     target_prefix = "log/${var.s3_bucket_name}"
+   }
+ }
}
```
Description
Access logging provides detailed audit logging for all objects and folders in an S3 bucket.
Benchmarks
- HIPAA 164.312(B) Audit controls
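The fix above uses the inline `logging` block; in AWS provider v4+ the same configuration lives in a standalone resource. A sketch for the flagged bucket (using the `logs` bucket as the target is an assumption):

```hcl
resource "aws_s3_bucket_logging" "operations" {
  bucket = aws_s3_bucket.operations.id

  target_bucket = aws_s3_bucket.logs.id # assumes a dedicated log bucket
  target_prefix = "log/operations/"
}
```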
Suggested change:
```hcl
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
```
Ensure data stored in the S3 bucket is securely encrypted at rest
Resource: aws_s3_bucket.financials | ID: BC_AWS_S3_14
Description
SSE helps prevent unauthorized access to S3 buckets. Encrypting and decrypting data at the S3 bucket level is transparent to users when accessing data.
Benchmarks
- PCI-DSS V3.2.1 3.4
- PCI-DSS V3.2 3
- NIST-800-53 AC-17, SC-2
- CIS AWS V1.3 2.1.1
- FEDRAMP (MODERATE) SC-28
Ensure all S3 buckets have an owner tag
Resource: aws_s3_bucket.operations | ID: caswalker_AWS_1638221712530
Ensure S3 bucket has cross-region replication enabled
Resource: aws_s3_bucket.financials | ID: BC_AWS_GENERAL_72
How to Fix
```diff
resource "aws_s3_bucket" "test" {
  ...
+ replication_configuration {
+   role = aws_iam_role.replication.arn
+   rules {
+     id     = "foobar"
+     prefix = "foo"
+     status = "Enabled"
+
+     destination {
+       bucket        = aws_s3_bucket.destination.arn
+       storage_class = "STANDARD"
+     }
+   }
+ }
}
```
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication supports copying new S3 objects after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
Ensure S3 bucket has cross-region replication enabled
Resource: aws_s3_bucket.operations | ID: BC_AWS_GENERAL_72
How to Fix
```diff
resource "aws_s3_bucket" "test" {
  ...
+ replication_configuration {
+   role = aws_iam_role.replication.arn
+   rules {
+     id     = "foobar"
+     prefix = "foo"
+     status = "Enabled"
+
+     destination {
+       bucket        = aws_s3_bucket.destination.arn
+       storage_class = "STANDARD"
+     }
+   }
+ }
}
```
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication supports copying new S3 objects after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
Suggested change:
```hcl
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
      }
    }
  }
}
```
Ensure S3 buckets are encrypted with KMS by default
Resource: aws_s3_bucket.data_science | ID: BC_AWS_GENERAL_56
Description
TBA
Ensure S3 bucket has cross-region replication enabled
Resource: aws_s3_bucket.data | ID: BC_AWS_GENERAL_72
How to Fix
```diff
resource "aws_s3_bucket" "test" {
  ...
+ replication_configuration {
+   role = aws_iam_role.replication.arn
+   rules {
+     id     = "foobar"
+     prefix = "foo"
+     status = "Enabled"
+
+     destination {
+       bucket        = aws_s3_bucket.destination.arn
+       storage_class = "STANDARD"
+     }
+   }
+ }
}
```
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication supports copying new S3 objects after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
Ensure all S3 buckets have an owner tag
Resource: aws_s3_bucket.logs | ID: caswalker_AWS_1638221712530
Suggested change:
```hcl
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
      }
    }
  }
}
```
Ensure S3 buckets are encrypted with KMS by default
Resource: aws_s3_bucket.financials | ID: BC_AWS_GENERAL_56
Description
TBA
Ensure AWS access logging is enabled on S3 buckets
Resource: aws_s3_bucket.logs | ID: BC_AWS_S3_13
How to Fix
```diff
resource "aws_s3_bucket" "bucket" {
  acl           = var.s3_bucket_acl
  bucket        = var.s3_bucket_name
  policy        = var.s3_bucket_policy
  force_destroy = var.s3_bucket_force_destroy

  versioning {
    enabled    = var.versioning
    mfa_delete = var.mfa_delete
  }

+ dynamic "logging" {
+   for_each = var.logging
+   content {
+     target_bucket = logging.value["target_bucket"]
+     target_prefix = "log/${var.s3_bucket_name}"
+   }
+ }
}
```
Description
Access logging provides detailed audit logging for all objects and folders in an S3 bucket.
Benchmarks
- HIPAA 164.312(B) Audit controls
Suggested change:
```hcl
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
      }
    }
  }
}
```
Ensure S3 buckets are encrypted with KMS by default
Resource: aws_s3_bucket.operations | ID: BC_AWS_GENERAL_56
Description
TBA
Ensure all S3 buckets have an owner tag
Resource: aws_s3_bucket.data | ID: caswalker_AWS_1638221712530
Ensure S3 Bucket has public access blocks
Resource: aws_s3_bucket.financials | ID: BC_AWS_NETWORKING_52
How to Fix
```hcl
resource "aws_s3_bucket" "bucket_good_1" {
  bucket = "bucket_good"
}

resource "aws_s3_bucket_public_access_block" "access_good_1" {
  bucket              = aws_s3_bucket.bucket_good_1.id
  block_public_acls   = true
  block_public_policy = true
}
```
Description
When you create an S3 bucket, it is good practice to also set the resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally made public. We recommend you ensure the S3 bucket has public access blocks. If a public access block is not attached, its settings default to false.
Ensure all S3 buckets have an owner tag
Resource: aws_s3_bucket.data_science | ID: caswalker_AWS_1638221712530
Ensure S3 Bucket has public access blocks
Resource: aws_s3_bucket.data_science | ID: BC_AWS_NETWORKING_52
How to Fix
```hcl
resource "aws_s3_bucket" "bucket_good_1" {
  bucket = "bucket_good"
}

resource "aws_s3_bucket_public_access_block" "access_good_1" {
  bucket              = aws_s3_bucket.bucket_good_1.id
  block_public_acls   = true
  block_public_policy = true
}
```
Description
When you create an S3 bucket, it is good practice to also set the resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally made public. We recommend you ensure the S3 bucket has public access blocks. If a public access block is not attached, its settings default to false.
Ensure S3 Bucket has public access blocks
Resource: aws_s3_bucket.data | ID: BC_AWS_NETWORKING_52
How to Fix
```hcl
resource "aws_s3_bucket" "bucket_good_1" {
  bucket = "bucket_good"
}

resource "aws_s3_bucket_public_access_block" "access_good_1" {
  bucket              = aws_s3_bucket.bucket_good_1.id
  block_public_acls   = true
  block_public_policy = true
}
```
Description
When you create an S3 bucket, it is good practice to also set the resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally made public. We recommend you ensure the S3 bucket has public access blocks. If a public access block is not attached, its settings default to false.
Ensure S3 Bucket has public access blocks
Resource: aws_s3_bucket.operations | ID: BC_AWS_NETWORKING_52
How to Fix
```hcl
resource "aws_s3_bucket" "bucket_good_1" {
  bucket = "bucket_good"
}

resource "aws_s3_bucket_public_access_block" "access_good_1" {
  bucket              = aws_s3_bucket.bucket_good_1.id
  block_public_acls   = true
  block_public_policy = true
}
```
Description
When you create an S3 bucket, it is good practice to also set the resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally made public. We recommend you ensure the S3 bucket has public access blocks. If a public access block is not attached, its settings default to false.
Ensure S3 bucket has cross-region replication enabled
Resource: aws_s3_bucket.logs | ID: BC_AWS_GENERAL_72
How to Fix
```diff
resource "aws_s3_bucket" "test" {
  ...
+ replication_configuration {
+   role = aws_iam_role.replication.arn
+   rules {
+     id     = "foobar"
+     prefix = "foo"
+     status = "Enabled"
+
+     destination {
+       bucket        = aws_s3_bucket.destination.arn
+       storage_class = "STANDARD"
+     }
+   }
+ }
}
```
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication supports copying new S3 objects after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
Suggested change:
```hcl
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
```
Ensure data stored in the S3 bucket is securely encrypted at rest
Resource: aws_s3_bucket.data_science | ID: BC_AWS_S3_14
Description
SSE helps prevent unauthorized access to S3 buckets. Encrypting and decrypting data at the S3 bucket level is transparent to users when accessing data.
Benchmarks
- PCI-DSS V3.2.1 3.4
- PCI-DSS V3.2 3
- NIST-800-53 AC-17, SC-2
- CIS AWS V1.3 2.1.1
- FEDRAMP (MODERATE) SC-28
Suggested change:
```diff
-      sse_algorithm = "AES256"
+      sse_algorithm = "aws:kms"
```
Ensure S3 buckets are encrypted with KMS by default
Resource: aws_s3_bucket.data | ID: BC_AWS_GENERAL_56
Description
TBA
Ensure S3 Bucket has public access blocks
Resource: aws_s3_bucket.logs | ID: BC_AWS_NETWORKING_52
How to Fix
```hcl
resource "aws_s3_bucket" "bucket_good_1" {
  bucket = "bucket_good"
}

resource "aws_s3_bucket_public_access_block" "access_good_1" {
  bucket              = aws_s3_bucket.bucket_good_1.id
  block_public_acls   = true
  block_public_policy = true
}
```
Description
When you create an S3 bucket, it is good practice to also set the resource **aws_s3_bucket_public_access_block** to ensure the bucket is never accidentally made public. We recommend you ensure the S3 bucket has public access blocks. If a public access block is not attached, its settings default to false.
Ensure AWS access logging is enabled on S3 buckets
Resource: aws_s3_bucket.operations | ID: BC_AWS_S3_13
How to Fix
```diff
resource "aws_s3_bucket" "bucket" {
  acl           = var.s3_bucket_acl
  bucket        = var.s3_bucket_name
  policy        = var.s3_bucket_policy
  force_destroy = var.s3_bucket_force_destroy

  versioning {
    enabled    = var.versioning
    mfa_delete = var.mfa_delete
  }

+ dynamic "logging" {
+   for_each = var.logging
+   content {
+     target_bucket = logging.value["target_bucket"]
+     target_prefix = "log/${var.s3_bucket_name}"
+   }
+ }
}
```
Description
Access logging provides detailed audit logging for all objects and folders in an S3 bucket.
Benchmarks
- HIPAA 164.312(B) Audit controls
Suggested change:
```hcl
  versioning {
    enabled = true
  }
}
```
Ensure AWS S3 object versioning is enabled
Resource: aws_s3_bucket.data | ID: BC_AWS_S3_16
Description
S3 versioning is a managed data backup and recovery service provided by AWS. When enabled, it allows users to retrieve and restore previous versions of their buckets. S3 versioning can be used for data protection and retention scenarios, such as recovering objects that have been accidentally or intentionally deleted or overwritten.
Benchmarks
- PCI-DSS V3.2.1 10.5.3
- FEDRAMP (MODERATE) CP-10, SI-12
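The suggested inline `versioning` block maps to a standalone resource in AWS provider v4+; a sketch for the flagged bucket:

```hcl
resource "aws_s3_bucket_versioning" "data" {
  bucket = aws_s3_bucket.data.id

  versioning_configuration {
    status = "Enabled"
  }
}
```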
Ensure S3 bucket has cross-region replication enabled
Resource: aws_s3_bucket.data_science | ID: BC_AWS_GENERAL_72
How to Fix
```diff
resource "aws_s3_bucket" "test" {
  ...
+ replication_configuration {
+   role = aws_iam_role.replication.arn
+   rules {
+     id     = "foobar"
+     prefix = "foo"
+     status = "Enabled"
+
+     destination {
+       bucket        = aws_s3_bucket.destination.arn
+       storage_class = "STANDARD"
+     }
+   }
+ }
}
```
Description
Cross-region replication enables automatic, asynchronous copying of objects across S3 buckets. By default, replication supports copying new S3 objects after it is enabled. It is also possible to use replication to copy existing objects and clone them to a different bucket, but in order to do so, you must contact AWS Support.
Suggested change:
```hcl
  versioning {
    enabled = true
  }
}
```
Ensure AWS S3 object versioning is enabled
Resource: aws_s3_bucket.financials | ID: BC_AWS_S3_16
Description
S3 versioning is a managed data backup and recovery service provided by AWS. When enabled, it allows users to retrieve and restore previous versions of their buckets. S3 versioning can be used for data protection and retention scenarios, such as recovering objects that have been accidentally or intentionally deleted or overwritten.
Benchmarks
- PCI-DSS V3.2.1 10.5.3
- FEDRAMP (MODERATE) CP-10, SI-12
Ensure S3 bucket Object is encrypted by KMS using a customer managed Key (CMK)
Resource: aws_s3_bucket_object.data_object | ID: BC_AWS_GENERAL_106
How to Fix
```diff
resource "aws_s3_bucket_object" "object" {
  bucket = "your_bucket_name"
  key    = "new_object_key"
  source = "path/to/file"
+ kms_key_id = "ckv_kms"

  # The filemd5() function is available in Terraform 0.11.12 and later.
  # For Terraform 0.11.11 and earlier, use the md5() and file() functions:
  # etag = "${md5(file("path/to/file"))}"
  etag = filemd5("path/to/file")
}
```
Description
This is a simple check to ensure that the S3 bucket object is using AWS Key Management Service (KMS) to encrypt its contents. To resolve, add the ARN of your KMS key, or link one on creation of the object.
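Rather than the "ckv_kms" placeholder in the fix above, `kms_key_id` can reference a customer-managed key defined in the same configuration; a sketch with hypothetical names:

```hcl
resource "aws_kms_key" "objects" {
  description = "CMK for bucket objects" # hypothetical key
}

resource "aws_s3_bucket_object" "data_object" {
  bucket     = aws_s3_bucket.data.id
  key        = "data-object"    # hypothetical object key
  source     = "path/to/file"
  kms_key_id = aws_kms_key.objects.arn
}
```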
Suggested change:
```hcl
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
```
Ensure data stored in the S3 bucket is securely encrypted at rest
Resource: aws_s3_bucket.operations | ID: BC_AWS_S3_14
Description
SSE helps prevent unauthorized access to S3 buckets. Encrypting and decrypting data at the S3 bucket level is transparent to users when accessing data.
Benchmarks
- PCI-DSS V3.2 3
- NIST-800-53 AC-17, SC-2
- PCI-DSS V3.2.1 3.4
- FEDRAMP (MODERATE) SC-28
- CIS AWS V1.3 2.1.1
🎉 Fixed by commit bf71701 - Update terraform/s324.tf
Change details
| Error ID | Change | Path | Resource |
| --- | --- | --- | --- |
| BC_AWS_S3_14 | Fixed | /terraform/s324.tf | aws_s3_bucket.operations |