AWS EKS cluster using Terraform, AWS Controllers for Kubernetes (ACK), and ELB Controller for Kubernetes
-
Log in to your AWS account:
You have several options:
- AWS Access Key
- IAM user
- SSO
We recommend using an IAM user or SSO because, when developers use the access key method, they often create the keys under the root user profile, which makes it impossible to limit the key's access rights (the root user cannot be used as a principal when assigning roles).
aws configure
or
aws configure sso
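After configuring, it is worth confirming which identity the CLI will use before running Terraform:
aws sts get-caller-identity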
- Initialize the working directory and create the dependency lock file (this downloads and configures the providers):
terraform init
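Optionally, review the planned changes before applying them (the exact plan depends on the variables and modules in this repository):
terraform plan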
- Apply the Terraform configuration:
terraform apply -auto-approve
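The bastion's public IP used in the next step comes from this Terraform run; if the configuration exposes it as an output (the output name below is only an assumed example), you can retrieve it with:
terraform output bastion_public_ip   # assumed output name; check outputs.tf for the real one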
- Log in to the EC2 bastion host instance:
ssh -i "aws-terraform-key.pem" ec2-user@54.237.112.108
- From the bastion instance, open an SSH connection to any EKS cluster node instance using its private IP:
ssh -i "/tmp/eks_nodes_keypair.pem" ec2-user@10.0.26.128
You should now be able to reach the EKS cluster's EC2 instances through the bastion host, which acts as a jump host.
To close the connections, run the following commands:
exit   # closes the SSH connection between the bastion instance and the EKS cluster node
exit   # closes the SSH connection between your local machine and the bastion instance
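As an alternative to the two manual hops, a single ProxyJump command can reach a node directly. This is only a sketch: it assumes the bastion key is loaded into your local ssh-agent and that the node key pair is also present on your local machine.
ssh-add aws-terraform-key.pem
ssh -i eks_nodes_keypair.pem -J ec2-user@54.237.112.108 ec2-user@10.0.26.128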
- Update your kubeconfig with the EKS cluster context:
aws eks --region us-east-1 update-kubeconfig --name ekscluster-simpleecommerce
Check the connection to the control plane:
kubectl get svc
kubectl get pods --all-namespaces
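You can also confirm which context kubectl is currently pointing at:
kubectl config current-context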
Deploy the nginx Deployment and the public NLBs:
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/public-lb.yaml
Deploy private NLBs:
kubectl apply -f k8s/private-lb.yaml
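To confirm that the NLBs were provisioned, list the Services and look for the load balancer DNS names in the EXTERNAL-IP column (the Service names depend on the manifests under k8s/):
kubectl get svc -o wide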
- Deploy the EKS Cluster Autoscaler:
kubectl apply -f k8s/cluster-autoscaler.yaml
Verify that the autoscaler pod is up and running:
kubectl get pods -n kube-system
Check logs for any errors:
kubectl logs -l app=cluster-autoscaler -n kube-system -f
Verify that the AWS Auto Scaling group has the required tags:
k8s.io/cluster-autoscaler/<cluster-name> : owned
k8s.io/cluster-autoscaler/enabled : TRUE
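One way to inspect these tags from the CLI (the query expression below is only a sketch; the group name depends on your node group):
aws autoscaling describe-auto-scaling-groups \
  --query "AutoScalingGroups[].{Name:AutoScalingGroupName,Tags:Tags[?starts_with(Key,'k8s.io/cluster-autoscaler')]}"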
Split the terminal screen. In the first window run:
watch -n 1 -t kubectl get pods
In the second window run:
watch -n 1 -t kubectl get nodes
Now trigger autoscaling by increasing the replica count of the nginx deployment from 1 to 5 in k8s/deployment.yaml, then re-apply it:
kubectl apply -f k8s/deployment.yaml
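Equivalently, if you prefer not to edit the manifest, you can scale the deployment directly (assuming it is named nginx; confirm with kubectl get deployments):
kubectl scale deployment nginx --replicas=5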
- To remove all the resources created, run the following commands:
kubectl delete -f k8s/cluster-autoscaler.yaml
kubectl delete -f k8s/private-lb.yaml
kubectl delete -f k8s/deployment.yaml
kubectl delete -f k8s/public-lb.yaml
terraform destroy --auto-approve
We are using tls_private_key to create a PEM (and OpenSSH) formatted private key. The private key generated by this resource will be stored unencrypted in your Terraform state file. Use of this resource for production deployments is not recommended. Instead, generate a private key file outside of Terraform and distribute it securely to the system where Terraform will be run.
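If you want to follow that recommendation, here is a minimal sketch for generating the key pair outside of Terraform and importing the public key into AWS (the file and key-pair names are assumptions):
ssh-keygen -t rsa -b 4096 -f eks_nodes_keypair -N ""   # writes eks_nodes_keypair (private) and eks_nodes_keypair.pub
aws ec2 import-key-pair --key-name eks_nodes_keypair --public-key-material fileb://eks_nodes_keypair.pub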