# tf-aws-module_collection-s3_bucket

License: CC BY-NC-ND 4.0

## Overview

This Terraform module creates an S3 bucket.

## Usage

A sample variable file, example.tfvars, is available in the root directory and can be used to test this module. Follow the steps below to execute this module:

1. Update example.tfvars: manually enter values for all fields marked within `<>` to make the variable file usable (see the sketch after these steps).
2. Create a file provider.tf with the contents below:

   ```hcl
   provider "aws" {
     profile = "<profile_name>"
     region  = "<region_name>"
   }
   ```

   If using SSO, make sure you are logged in: `aws sso login --profile <profile_name>`.
3. Make sure the terraform binary is installed on your local machine. Use `type terraform` to find the installation location. If you are using asdf, you can run `asdf install` and it will install the correct Terraform version for you. `.tool-versions` contains all the dependencies.
4. Run Terraform to provision the infrastructure on AWS:

   ```sh
   # Initialize
   terraform init
   # Plan
   terraform plan -var-file example.tfvars
   # Apply (this creates the actual infrastructure)
   terraform apply -var-file example.tfvars -auto-approve
   ```
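
For reference, a filled-in example.tfvars might look like the following. This is only a sketch: the variable names come from the Inputs table below, and the values are illustrative.

```hcl
# Hypothetical example.tfvars — variable names come from this module's Inputs, values are illustrative
logical_product_family  = "launch"
logical_product_service = "backend"
region                  = "us-east-2"
class_env               = "dev"
enable_versioning       = true
tags = {
  provisioner = "terraform"
}
```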

## Known Issues

  1. NA

## Pre-Commit hooks

The .pre-commit-config.yaml file defines pre-commit hooks relevant to Terraform, Golang, and common linting tasks. There are no custom hooks added.

The commitlint hook enforces commit messages in a certain format. A commit message contains the following structural elements, which communicate intent to the consumers of your commit messages:

- `fix`: a commit of the type fix patches a bug in your codebase (this correlates with PATCH in Semantic Versioning).
- `feat`: a commit of the type feat introduces a new feature to the codebase (this correlates with MINOR in Semantic Versioning).
- `BREAKING CHANGE`: a commit that has a footer BREAKING CHANGE:, or appends a ! after the type/scope, introduces a breaking API change (correlating with MAJOR in Semantic Versioning). A BREAKING CHANGE can be part of commits of any type. Footers other than BREAKING CHANGE: may be provided and follow a convention similar to the git trailer format.
- `build`: a commit of the type build adds changes that affect the build system or external dependencies (example scopes: gulp, broccoli, npm).
- `chore`: a commit of the type chore adds changes that don't modify src or test files.
- `ci`: a commit of the type ci adds changes to our CI configuration files and scripts (example scopes: Travis, Circle, BrowserStack, SauceLabs).
- `docs`: a commit of the type docs adds documentation-only changes.
- `perf`: a commit of the type perf adds a code change that improves performance.
- `refactor`: a commit of the type refactor adds a code change that neither fixes a bug nor adds a feature.
- `revert`: a commit of the type revert reverts a previous commit.
- `style`: a commit of the type style adds code changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc.).
- `test`: a commit of the type test adds missing tests or corrects existing tests.

The base configuration used for this project is commitlint-config-conventional (based on the Angular convention).
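
For illustration, a commit message following this convention might look like the hypothetical example below (the described change and footer are invented purely to show the format):

```text
feat(s3): add lifecycle rule support

BREAKING CHANGE: lifecycle_rule now expects a list of maps instead of a single map
```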

If you are a developer using VS Code, this plugin may be helpful.

detect-secrets-hook prevents new secrets from being introduced into the baseline. TODO: INSERT DOC LINK ABOUT HOOKS

In order for pre-commit hooks to work properly:

- You need to have the pre-commit package manager installed. Here are the installation instructions.
- pre-commit installs all of the hooks by default when a commit message is added, except for the commitlint hook; the commitlint hook must be installed manually using the command below:

```sh
pre-commit install --hook-type commit-msg
```
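
Once installed, you can also run all configured hooks against the repository at any time (standard pre-commit usage):

```sh
# Run every configured hook against all files, not just staged changes
pre-commit run --all-files
```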

## To test this module locally

1. For development or enhancements to this module locally, you'll need to install all of its components. This is controlled by the configure target in the project's Makefile. Before you can run configure, familiarize yourself with the variables in the Makefile and ensure they're pointing to the right places.

   ```sh
   make configure
   ```

This adds several files and directories that are ignored by Git. They expose many new Make targets.

2. The next target you care about is env. This is the common interface for setting up environment variables. The values of these environment variables will be used to authenticate with the cloud provider from your local development workstation.

The make configure command will bring down an aws_env.sh file onto the local workstation. The developer needs to modify this file, replacing the environment variable values with relevant values.

These environment variables are used by the terratest integration suite.

Then run this make target to set the environment variables on the developer workstation:

```sh
make env
```
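
For illustration, aws_env.sh might contain entries along these lines. This is only a sketch; the actual variable names in the generated file may differ.

```sh
# Hypothetical contents of aws_env.sh — replace the placeholder values
export AWS_PROFILE=<profile_name>
export AWS_REGION=<region_name>
```
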
3. The next target you care about is check.

Prerequisites: Before running this target, ensure that the developer has created the files mentioned below on the local workstation, under the root directory of the Git repository that contains the code for primitives/segments. Note that these files are AWS specific. If the primitive/segment under development uses a cloud provider other than AWS, this section may not be relevant.

- A file named provider.tf with the contents below:

  ```hcl
  provider "aws" {
    profile = "<profile_name>"
    region  = "<region_name>"
  }
  ```

- A file named terraform.tfvars which contains key/value pairs for the variables used (similar to the example.tfvars sketch shown earlier).

Note that since these files are listed in .gitignore, they will not be checked in to the primitive/segment's Git repo.

After creating these files, run the following to execute the tests associated with the primitive/segment:

```sh
make check
```

If the make check target is successful, the developer is good to commit the code to the primitive/segment's Git repo.

The make check target:

- runs terraform commands to lint, validate, and plan the Terraform code
- runs conftest, which makes sure the policy checks are successful
- runs terratest, the integration test suite
- runs OPA tests

## Known Issues

Currently, encryption in transit is not supported in Terraform. There is an open issue logged with HashiCorp: hashicorp/terraform-provider-aws#26987.

## Requirements

| Name | Version |
|------|---------|
| terraform | ~> 1.0 |
| aws | ~> 5.0 |

## Providers

No providers.

## Modules

| Name | Source | Version |
|------|--------|---------|
| resource_names | terraform.registry.launch.nttdata.com/module_library/resource_name/launch | ~> 2.0 |
| s3_bucket | terraform-aws-modules/s3-bucket/aws | ~> 3.14.0 |

## Resources

No resources.

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| logical_product_family | (Required) Name of the product family for which the resource is created. Example: org_name, department_name. | `string` | `"launch"` | no |
| logical_product_service | (Required) Name of the product service for which the resource is created. For example: backend, frontend, middleware. | `string` | `"backend"` | no |
| region | (Required) The location where the resource will be created. Must not have spaces. For example: us-east-1, us-west-2, eu-west-1. | `string` | `"us-east-2"` | no |
| class_env | (Required) Environment where the resource is going to be deployed. For example: dev, qa, uat. | `string` | `"dev"` | no |
| instance_env | Number that represents the instance of the environment. | `number` | `0` | no |
| instance_resource | Number that represents the instance of the resource. | `number` | `0` | no |
| maximum_length | Number that represents the maximum length the resource name could have. | `number` | `60` | no |
| separator | Separator to be used in the name. | `string` | `"-"` | no |
| resource_names_map | A map of key to resource_name that will be used by tf-launch-module_library-resource_name to generate resource names. | `map(object({ name = string, max_length = optional(number, 60) }))` | `{ "s3": { "max_length": 63, "name": "s3" } }` | no |
| block_public_acls | Whether Amazon S3 should block public ACLs for this bucket. | `bool` | `true` | no |
| block_public_policy | Whether Amazon S3 should block public bucket policies for this bucket. | `bool` | `true` | no |
| ignore_public_acls | Whether Amazon S3 should ignore public ACLs for this bucket. | `bool` | `true` | no |
| restrict_public_buckets | Whether Amazon S3 should restrict public bucket policies for this bucket. | `bool` | `true` | no |
| use_default_server_side_encryption | Flag to indicate whether default server-side encryption should be used. SSE-KMS encryption is used if the flag is set to false (the default). If set to true, the default server-side encryption that AWS applies to all S3 objects is used instead. | `bool` | `false` | no |
| kms_s3_key_arn | ARN of the AWS KMS key used for S3 bucket encryption. | `string` | `"aws/s3"` | no |
| kms_s3_key_sse_algorithm | Server-side encryption algorithm to use. Valid values are AES256 and aws:kms. | `string` | `"aws:kms"` | no |
| bucket_key_enabled | Whether to enable a bucket key for encryption; it reduces encryption costs. Default is false. | `bool` | `false` | no |
| enable_versioning | Whether to enable versioning for this S3 bucket. Default is false. | `bool` | `false` | no |
| object_lock_enabled | Whether the S3 bucket should have an Object Lock configuration enabled. | `bool` | `false` | no |
| object_lock_configuration | Map containing the S3 object locking configuration. | `any` | `{}` | no |
| lifecycle_rule | List of maps containing the configuration of object lifecycle management. | `any` | `[]` | no |
| metric_configuration | Map containing the bucket metric configuration. | `any` | `[]` | no |
| analytics_configuration | Map containing the bucket analytics configuration. | `any` | `{}` | no |
| policy | (Optional) A valid bucket policy JSON document. Note that if the policy document is not specific enough (but still valid), Terraform may view the policy as constantly changing in a terraform plan. In this case, please make sure you use the verbose/specific version of the policy. For more information about building AWS IAM policy documents with Terraform, see the AWS IAM Policy Document Guide. | `string` | `null` | no |
| tags | An arbitrary map of tags that can be added to all resources. | `map(string)` | `{}` | no |
| bucket_name | (Optional, Forces new resource) The name of the bucket. If set to null, this module will generate a name per the standard naming convention and assign it. | `string` | `null` | no |
| logging | Map containing access bucket logging configuration. | `map(string)` | `{}` | no |
| access_log_delivery_policy_source_buckets | (Optional) List of S3 bucket ARNs which should be allowed to deliver access logs to this bucket. | `list(string)` | `[]` | no |
| access_log_delivery_policy_source_accounts | (Optional) List of AWS account IDs which should be allowed to deliver access logs to this bucket. | `list(string)` | `[]` | no |
| attach_access_log_delivery_policy | Controls if the S3 bucket should have the S3 access log delivery policy attached. | `bool` | `false` | no |
| attach_policy | Controls if the S3 bucket should have a bucket policy attached (set to true to use the value of policy as the bucket policy). | `bool` | `false` | no |
| object_ownership | Object ownership. Valid values: BucketOwnerEnforced, BucketOwnerPreferred, or ObjectWriter. BucketOwnerEnforced: ACLs are disabled, and the bucket owner automatically owns and has full control over every object in the bucket. BucketOwnerPreferred: objects uploaded to the bucket change ownership to the bucket owner if the objects are uploaded with the bucket-owner-full-control canned ACL. ObjectWriter: the uploading account will own the object if the object is uploaded with the bucket-owner-full-control canned ACL. | `string` | `"BucketOwnerEnforced"` | no |
| control_object_ownership | Whether to manage S3 Bucket Ownership Controls on this bucket. | `bool` | `false` | no |
| acl | (Optional) The canned ACL to apply. Conflicts with grant. | `string` | `null` | no |
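
As an example of how these inputs combine, the hypothetical tfvars snippet below enables SSE-KMS with a customer-managed key (the key ARN is a placeholder):

```hcl
# Hypothetical tfvars snippet — enables SSE-KMS encryption with a customer-managed key
use_default_server_side_encryption = false
kms_s3_key_sse_algorithm           = "aws:kms"
kms_s3_key_arn                     = "arn:aws:kms:us-east-2:<account_id>:key/<key_id>"
bucket_key_enabled                 = true # reduces KMS request costs
```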

## Outputs

| Name | Description |
|------|-------------|
| id | ID of the S3 bucket |
| arn | ARN of the S3 bucket |
| bucket_regional_domain_name | The region-specific bucket domain name, including the region name |
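
If this module is called from a parent configuration, its outputs can be referenced in the usual Terraform way (the module label s3_bucket below is illustrative):

```hcl
# Hypothetical parent-module usage of this module's outputs
output "bucket_arn" {
  value = module.s3_bucket.arn
}
```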
