This project demonstrates real-world load balancing behavior using NGINX, Docker, and multiple backend microservices.
You will learn how modern systems gradually roll out features, balance traffic efficiently, and prevent outages using traffic weights, canary releases, sticky routing, and circuit-breaker principles.
| Functionality | Status |
|---|---|
| Round Robin Load Balancing | ✔ |
| Weighted Load Balancing (90/10 Canary) | ✔ |
| Sticky Session Routing | ✔ |
| Canary Release Simulation | ✔ |
| Circuit Breaker Failure Handling | ✔ |
| Auto Canary Monitoring Script | ✔ |
Project structure:

```
LoadBalancer-Project/
│
├── backend1/               # Stable Version (Blue)
│   ├── server.js
│   └── Dockerfile
│
├── backend2/               # Canary Version (Green)
│   ├── server.js
│   └── Dockerfile
│
├── nginx.conf              # NGINX Load Balancer Configuration
├── docker-compose.yaml     # Runs all services
├── canary-monitor.js       # Health Test Script
└── README.md               # Documentation
```
backend1/server.js:

```js
const express = require("express");
const app = express();

app.get("/", (req, res) => res.send("⚡ Backend 1 - Stable OK"));

app.listen(3001, () => console.log("Backend1 running on port 3001"));
```
backend2/server.js:

```js
const express = require("express");
const app = express();

app.get("/", (req, res) => res.send("🌱 Backend 2 - Canary OK"));

app.listen(3002, () => console.log("Backend2 running on port 3002"));
```
backend1/Dockerfile (backend2's Dockerfile is the same except for the port):

```dockerfile
FROM node:18
WORKDIR /app
COPY server.js .
RUN npm install express

# backend2 uses 3002 (Docker does not support inline comments, so this stays on its own line)
EXPOSE 3001

CMD ["node", "server.js"]
```
docker-compose.yaml:

```yaml
services:
  backend1:
    build: ./backend1
    ports: ["3001:3001"]

  backend2:
    build: ./backend2
    ports: ["3002:3002"]

  nginx:
    image: nginx
    ports: ["8080:8080"]
    depends_on: [backend1, backend2]
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
```
Run the project:
```bash
docker-compose up --build
```
Visit: http://localhost:8080
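To sanity-check the stack before going through the proxy, you can list the containers and hit each backend's published port directly (ports as mapped in the compose file above):

```bash
docker-compose ps                 # all three services should be "Up"
curl -s localhost:3001; echo ""   # ⚡ Backend 1 - Stable OK
curl -s localhost:3002; echo ""   # 🌱 Backend 2 - Canary OK
```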
nginx.conf (90/10 weighted canary split):

```nginx
events {}

http {
    upstream backend_servers {
        server backend1:3001 weight=9;   # 90% of traffic (stable)
        server backend2:3002 weight=1;   # 10% of traffic (canary)
    }

    server {
        listen 8080;

        location / {
            proxy_pass http://backend_servers;
        }
    }
}
```

The empty `events {}` block is required because this file replaces NGINX's entire main configuration.
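If you change nginx.conf later (for example to switch routing modes), reload the proxy so the new configuration takes effect; the service is named nginx in the compose file above:

```bash
docker-compose exec nginx nginx -s reload   # or: docker-compose restart nginx
```

Then send a burst of requests through the load balancer: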
```bash
for i in {1..20}; do curl -s localhost:8080; echo ""; done
```
Expected Output (with a 9:1 weighting, roughly 1 in 10 requests hits the canary):

```
⚡ Backend1
⚡ Backend1
🌱 Backend2   ← Canary Hit
⚡ Backend1
🌱 Backend2
```
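To check the split over a larger sample, count the responses; the exact ratio will vary a little from run to run:

```bash
for i in {1..100}; do curl -s localhost:8080; echo ""; done | sort | uniq -c
```

For sticky session routing, replace the weighted upstream in nginx.conf with `ip_hash`, which pins each client IP to one backend: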
```nginx
upstream backend_servers {
    ip_hash;
    server backend1:3001;
    server backend2:3002;
}
```
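After reloading NGINX, repeated requests from the same machine should all land on the same backend, because `ip_hash` keys on the client address:

```bash
for i in {1..5}; do curl -s localhost:8080; echo ""; done
```

All five responses should be identical.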
To test failure handling, stop the canary container:

```bash
docker-compose stop backend2
```

The system self-heals: NGINX stops sending traffic to the failed backend and serves everything from backend1.
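This works out of the box thanks to NGINX's passive health checks: when a connection to an upstream server fails, the request is retried on the next server and the failed one is temporarily marked unavailable (tunable via the `max_fails` and `fail_timeout` parameters on each upstream `server` line). A quick way to watch the failover and recovery, using the service names from the compose file:

```bash
# With backend2 stopped, every response should come from backend1
for i in {1..10}; do curl -s localhost:8080; echo ""; done

# Bring the canary back; NGINX returns it to the rotation automatically
docker-compose start backend2
```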
canary-monitor.js:

```js
const { execSync } = require("child_process");

let success = 0, fail = 0;

for (let i = 0; i < 200; i++) {
  try {
    // -f makes curl exit non-zero on HTTP errors (e.g. 502), so they count as failures
    execSync("curl -sf localhost:8080");
    success++;
  } catch {
    fail++;
  }
}

console.log(`Total: ${success + fail}  Success: ${success}  Failures: ${fail}`);
```
Run:
```bash
node canary-monitor.js
```
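With both backends healthy, all 200 requests should succeed. To exercise the failure path (this relies on the `-f` flag shown above), stop both backends, run the monitor again, and every request should be counted as a failure:

```bash
docker-compose stop backend1 backend2
node canary-monitor.js
docker-compose start backend1 backend2
```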
| Concept | Status |
|---|---|
| Round Robin LB | ✔ |
| Weighted Canary | ✔ |
| Sticky Sessions | ✔ |
| Circuit Breaker | ✔ |
| Real Deployment Simulation | ✔ |