A distributed, cloud-native voting application built with a microservices architecture. This application demonstrates modern container orchestration, security monitoring, and multi-cloud deployment patterns using Kubernetes, Terraform, and security best practices.
Vote App is a multi-tier application that allows users to cast votes and view real-time results. The application showcases:
- Microservices architecture with independent, scalable components
- Multi-cloud deployment support for AWS (EKS), Azure (AKS), and GCP (GKE)
- Infrastructure as Code using Terraform and AWS CloudFormation
- Security monitoring integration with Lacework
- CI/CD pipeline support with Jenkins
- Container orchestration with Kubernetes
The application consists of five main microservices, plus a maintenance utility container:
┌─────────────┐       ┌─────────────┐
│    Vote     │       │   Result    │
│  Frontend   │       │  Frontend   │
│  (Python)   │       │  (Node.js)  │
└──────┬──────┘       └──────┬──────┘
       │                     │
       │                     │ (reads)
       ▼                     ▼
  ┌─────────┐          ┌────────────┐
  │  Redis  │          │ PostgreSQL │
  │  Queue  │          │  Database  │
  └────┬────┘          └─────▲──────┘
       │                     │
       │     ┌──────────┐    │
       └────►│  Worker  │────┘
             │  (Java)  │  (writes)
             └──────────┘
- Vote Service: Python-based frontend where users cast their votes
  - Exposed via LoadBalancer on port 80
  - Stores votes in a Redis queue
  - Runs with 2 replicas for high availability
- Result Service: Node.js-based frontend displaying real-time voting results
  - Exposed via LoadBalancer on port 5001
  - Reads results from the PostgreSQL database
  - Single-replica deployment
- Worker Service: Java-based background worker
  - Processes votes from the Redis queue
  - Stores processed votes in PostgreSQL
  - Ensures vote persistence and reliability
- Redis: In-memory data store
  - Acts as a message queue for incoming votes
  - Uses the Alpine Linux image for a minimal footprint
- PostgreSQL: Relational database
  - Stores processed voting results
  - Configured with a persistent volume for data durability
  - 1Gi storage allocation
- Maintenance Container: Ubuntu-based utility container
  - Used for debugging and administrative tasks
  - Provides shell access to the cluster environment
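To make the wiring above concrete, here is a minimal, hypothetical sketch of the kind of Deployment and LoadBalancer Service the vote frontend uses; the image name, labels, and field values are assumptions, not the contents of deploys/voteapp/vote.yml:

```yaml
# Hypothetical sketch of the vote frontend objects; names, labels, and the
# image tag are assumptions, not the actual manifest in deploys/voteapp/vote.yml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vote
spec:
  replicas: 2                      # two replicas for high availability
  selector:
    matchLabels:
      app: vote
  template:
    metadata:
      labels:
        app: vote
    spec:
      containers:
        - name: vote
          image: example/vote:latest   # placeholder image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: vote
spec:
  type: LoadBalancer               # exposes the frontend on port 80
  selector:
    app: vote
  ports:
    - port: 80
      targetPort: 80
```

The other components are declared with the same pattern; the manifest in deploys/voteapp/vote.yml remains the source of truth.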
- Kubernetes cluster (1.19+)
- kubectl configured to access your cluster
- LoadBalancer support (cloud provider or MetalLB for on-premises)
- Terraform (>= 0.14)
- Cloud provider CLI tools:
  - AWS: aws-cli configured with credentials
  - Azure: az-cli with an active subscription
  - GCP: gcloud with an authenticated account
- Appropriate IAM permissions for resource creation
# Apply the Kubernetes manifests
kubectl apply -f deploys/voteapp/vote.yml
# Check deployment status
kubectl get pods
kubectl get services
# Get the LoadBalancer URLs
kubectl get svc vote result

Access the application:
- Vote: http://<vote-service-external-ip>
- Results: http://<result-service-external-ip>:5001
To provision an EKS cluster with Terraform:

cd terraform/aws/eks
# Initialize Terraform
terraform init
# Set variables
export TF_VAR_AWS_REGION="us-west-2"
export TF_VAR_DEPLOYMENT_NAME="voteapp-demo"
# Plan and apply
terraform plan
terraform apply

For Azure (AKS):

cd terraform/azure/aks
terraform init
terraform plan
terraform apply

For GCP (GKE):

cd terraform/gcp/gke
terraform init
terraform plan
terraform apply

To deploy the frontend with AWS CloudFormation:

cd cft/voteapp
aws cloudformation create-stack \
--stack-name voteapp \
--template-body file://voteapp-frontend.json \
--region us-west-2

The application includes Lacework security monitoring for:
- Runtime threat detection
- Container vulnerability scanning
- Compliance monitoring
- Anomaly detection
To enable Lacework:
# Configure Lacework credentials
kubectl create secret generic lacework-config \
--from-file=config.json=./deploys/lacework/lacework-cfg-k8s.yaml
# Deploy Lacework agent
kubectl apply -f deploys/lacework/lacework-k8s.yaml

Additional security best practices:
- Container image scanning and vulnerability management
- Kubernetes RBAC policies
- Network policies for service isolation (see the sketch after this list)
- Secret management for sensitive data
- Infrastructure compliance validation
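As a sketch of the network-policy item above, a policy of the following shape could restrict database access to the worker and result services; the pod labels and port are assumptions and must match the labels actually used in the manifests:

```yaml
# Hypothetical NetworkPolicy; the labels (app: db, app: worker, app: result)
# are assumptions, not taken from the repository manifests.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-worker-result
spec:
  podSelector:
    matchLabels:
      app: db                  # the PostgreSQL pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: worker      # worker writes processed votes
        - podSelector:
            matchLabels:
              app: result      # result frontend reads results
      ports:
        - protocol: TCP
          port: 5432           # default PostgreSQL port
```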
The repository includes Jenkins configuration for automated builds and deployments:
# Deploy Jenkins in your cluster
cd terraform/aws/jenkins
terraform init
terraform apply
# Apply service account for build robot
kubectl apply -f deploys/jenkins/build-robot-sa.yaml
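For reference, a build-robot service account of the kind that manifest likely defines might look like the hedged sketch below; the names, namespace, and bound role are assumptions, not the contents of build-robot-sa.yaml:

```yaml
# Hypothetical CI service account plus a namespaced RoleBinding; names and the
# granted role are assumptions for illustration only.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: build-robot-edit
  namespace: default
subjects:
  - kind: ServiceAccount
    name: build-robot
    namespace: default
roleRef:
  kind: ClusterRole
  name: edit              # built-in role allowing workloads to be updated
  apiGroup: rbac.authorization.k8s.io
```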
For testing and demo purposes, traffic generation modules are available:

# AWS traffic generation
cd terraform/aws/traffic
terraform apply
# Azure traffic generation
cd terraform/azure/traffic
terraform apply
# GCP traffic generation
cd terraform/gcp/traffic
terraform apply

Access the maintenance pod for troubleshooting:
kubectl exec -it deployment/maintenance -- /bin/bash

The repository is organized as follows:

voteapp/
├── cft/                  # CloudFormation templates
│   └── voteapp/          # Frontend CFT template
├── deploys/              # Kubernetes manifests
│   ├── jenkins/          # Jenkins CI/CD configs
│   ├── lacework/         # Security monitoring configs
│   └── voteapp/          # Main application manifest
├── terraform/            # Infrastructure as Code
│   ├── aws/              # AWS deployments (EKS)
│   ├── azure/            # Azure deployments (AKS)
│   ├── gcp/              # GCP deployments (GKE)
│   └── scripts/          # Helper scripts
└── README.md
The application components can be configured through environment variables:
Database (PostgreSQL):
- POSTGRES_USER: Database username (default: postgres)
- POSTGRES_PASSWORD: Database password (default: postgres)
- POSTGRES_HOST_AUTH_METHOD: Authentication method (default: trust)
- PGDATA: Data directory path
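These are typically supplied through the container spec; the following is a hedged sketch of how the database pod might set them (the deployment name, labels, and image tag are assumptions, not taken from the repository manifest):

```yaml
# Hypothetical excerpt of the PostgreSQL deployment; name and image are
# assumptions. Values shown are the documented defaults.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:15            # placeholder tag
          env:
            - name: POSTGRES_USER
              value: "postgres"
            - name: POSTGRES_PASSWORD
              value: "postgres"         # default only; override in production
            - name: POSTGRES_HOST_AUTH_METHOD
              value: "trust"
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
```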
Maintenance Container:
- AWS_DEFAULT_REGION: AWS region for CLI operations (default: us-west-2)
Scale components based on your needs:
# Scale vote frontend
kubectl scale deployment vote --replicas=5
# Scale worker service
kubectl scale deployment worker --replicas=3
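Beyond manual scaling, a HorizontalPodAutoscaler can adjust replica counts automatically when a metrics server is running in the cluster; the sketch below is an illustration with assumed CPU targets and bounds, not a manifest from this repository:

```yaml
# Hypothetical autoscaler for the vote deployment; target utilization and
# replica bounds are assumptions. Requires metrics-server in the cluster.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: vote
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vote
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```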
Common issues:

Pods not starting:
kubectl describe pod <pod-name>
kubectl logs <pod-name>

LoadBalancer pending:
- Ensure your cluster has LoadBalancer support
- Check cloud provider quotas and permissions
Database connection issues:
- Verify PersistentVolumeClaim is bound
- Check database pod logs
- Ensure worker can reach database service
Votes not appearing in results:
- Check worker pod is running
- Verify Redis and PostgreSQL connectivity
- Review worker logs for processing errors
Monitor application health:
# Check all components
kubectl get all
# Check persistent volumes
kubectl get pvc
# View logs
kubectl logs -f deployment/vote
kubectl logs -f deployment/result
kubectl logs -f deployment/worker

This application is ideal for demonstrating:
- Microservices Architecture: Show how independent services communicate
- Cloud-Native Deployment: Deploy across multiple cloud providers
- Container Orchestration: Demonstrate Kubernetes capabilities
- Infrastructure as Code: Provision infrastructure declaratively
- Security Monitoring: Real-time threat detection with Lacework
- CI/CD Pipelines: Automated testing and deployment
- Scalability: Horizontal scaling of application components
- High Availability: Multi-replica deployments and load balancing
Contributions are welcome! Please ensure:
- Terraform code follows best practices
- Kubernetes manifests are validated
- Security scanning shows no critical vulnerabilities
- Documentation is updated for new features
This project is maintained as a demonstration application. Please check with the repository owner for specific licensing terms.
For issues, questions, or feature requests, please open an issue in the GitHub repository.
Note: This is a demonstration application. For production use, ensure you:
- Change default credentials
- Implement proper secret management
- Configure appropriate resource limits
- Enable monitoring and alerting
- Apply security hardening
- Set up backup and disaster recovery
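As a hedged sketch of the first two items, the default database credentials could be replaced by a Kubernetes Secret; the secret name, keys, and values below are illustrative assumptions:

```yaml
# Hypothetical Secret holding database credentials; name, keys, and values are
# assumptions for illustration only. Generate real values out-of-band.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  POSTGRES_USER: voteapp
  POSTGRES_PASSWORD: change-me       # replace with a generated password
```

The PostgreSQL and worker containers would then read these values through valueFrom.secretKeyRef entries instead of plain value fields, and POSTGRES_HOST_AUTH_METHOD=trust should be dropped in favor of password authentication.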