(README generated with Claude Code)
A real-time multi-user webchat application built with Python (aiohttp), Redis, and vanilla JavaScript. This application demonstrates modern web technologies including WebSockets, containerization, and load balancing.
- Real-time messaging: Instant message delivery using WebSockets
- Multi-user support: Multiple users can chat simultaneously in a single room
- Message persistence: Messages are stored in Redis Streams with configurable history retrieval
- Automatic reconnection: Client automatically reconnects on connection loss with exponential backoff
- Scalable architecture: Horizontally scalable using Redis Streams for inter-process communication
- Load balancing: Nginx reverse proxy with support for multiple backend instances
- Containerized deployment: Full Docker Compose setup for easy deployment
- Health checks: Built-in health monitoring for all services
- Observability: Prometheus metrics with Grafana dashboards for monitoring
- Backend: Python 3.13, aiohttp, Redis
- Frontend: Vanilla JavaScript, HTML5, CSS3
- Infrastructure: Docker, Nginx, Redis, Prometheus, Grafana
- Build System: Pants build system
- Code Quality: Ruff (linting/formatting), MyPy (type checking)
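The client's automatic reconnection (feature list above) uses exponential backoff. The actual logic lives in the JavaScript client (`frontend/static/app.js`) and may differ in its constants; a minimal Python sketch of the delay schedule, with assumed base and cap values:

```python
# Exponential backoff with a cap: the delay doubles per failed attempt,
# but never exceeds max_delay. Base (1s) and cap (30s) are illustrative
# assumptions, not values taken from the client code.
def backoff_delays(base: float = 1.0, max_delay: float = 30.0, attempts: int = 8) -> list[float]:
    return [min(base * 2 ** n, max_delay) for n in range(attempts)]

# First eight reconnect delays, in seconds:
print(backoff_delays())  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0, 30.0, 30.0]
```

Capping the delay keeps a long outage from pushing reconnect attempts arbitrarily far apart; production clients often also add random jitter so many clients do not reconnect in lockstep.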
```
┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│   Browser   │◄──► │    Nginx    │◄──► │  Chat App   │
│             │      │ (Load Bal.) │      │ (Multiple)  │
└─────────────┘      └─────────────┘      └─────────────┘
                            │                    │
                            │                    ▼
                            │             ┌─────────────┐
                            │             │    Redis    │
                            │             │  (Streams)  │
                            │             └─────────────┘
                            ▼
                     ┌─────────────┐
                     │   Static    │
                     │    Files    │
                     └─────────────┘

┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│   Grafana   │◄─────│ Prometheus  │◄─────│  Chat App   │
│    :3001    │      │    :9091    │      │  /metrics   │
└─────────────┘      └─────────────┘      └─────────────┘
```
| Endpoint | Method | Description |
|---|---|---|
| `/` | GET | Serves the main chat application (static files) |
| `/ws` | WebSocket | WebSocket endpoint for real-time messaging |
| `/healthz` | GET | Health check endpoint (returns 200 if healthy) |
| `/metrics` | GET | Prometheus metrics endpoint |
| `/messages` | GET | Retrieve message history (see below) |
The /messages endpoint returns historical messages from Redis Streams:
```bash
# Get messages from the last 30 minutes (default)
curl http://localhost:8080/messages

# Get messages from the last 60 minutes
curl "http://localhost:8080/messages?minutes=60"
```

Query Parameters:

| Parameter | Default | Range | Description |
|---|---|---|---|
| `minutes` | 30 | 1-1440 | Number of minutes of history to retrieve |
Response Format:

```json
[
  {
    "text": "Hello, world!",
    "type": "message",
    "ts": 1703347200000
  }
]
```

Messages are automatically loaded on the frontend when a user first connects, displaying a visual indicator showing how many historical messages were loaded.
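Judging by the sample value, the `ts` field is a Unix timestamp in milliseconds. A small Python sketch for consuming a `/messages` response (the field semantics here are inferred from the example above, not from the server code):

```python
import json
from datetime import datetime, timezone

# A response body in the format shown above
body = '[{"text": "Hello, world!", "type": "message", "ts": 1703347200000}]'

for msg in json.loads(body):
    # ts appears to be milliseconds since the Unix epoch, so divide by 1000
    when = datetime.fromtimestamp(msg["ts"] / 1000, tz=timezone.utc)
    print(f"[{when:%Y-%m-%d %H:%M}] {msg['text']}")  # [2023-12-23 16:00] Hello, world!
```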
- Docker: Version 20.10 or higher
- Docker Compose: Version 2.0 or higher
- Python: 3.13.x (Recommended: pyenv)
- Pants: Build system (link)
- Redis: A local Redis instance (only needed if not using Docker)
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd multiuser-webchat
  ```

- Start the application:

  ```bash
  docker compose up -d
  ```

- Access the application: open your browser and navigate to http://localhost:8080

- Scale the application (optional):

  ```bash
  WEB_REPLICAS=8 docker compose up -d --scale web=8
  ```

- Stop the application:

  ```bash
  docker compose down
  ```
If you prefer to build the Docker image manually:

```bash
# Build the image
docker build -t chat-app:latest .

# Run with Docker Compose
docker compose up -d
```

For development without Docker:
- Set up Python environment:

  ```bash
  # Install Python 3.13 with pyenv
  pyenv install 3.13

  # Set local Python version for this project
  pyenv local 3.13

  # Verify Python version
  python --version
  ```

- Install dependencies:

  ```bash
  pants --version
  ```

- Start Redis:

  ```bash
  # Using Docker
  docker run -d -p 6379:6379 redis:7-alpine

  # Or install Redis locally and start it
  redis-server
  ```

- Run the application:

  ```bash
  # Development mode
  pants run src/server:app -- --host 0.0.0.0 --port 8080 --workers 1 --redis-url redis://localhost:6379

  # Or build and run the PEX
  pants package src/server:app
  python dist/src.server/app.pex --host 0.0.0.0 --port 8080
  ```

- Access the application: open your browser and navigate to http://localhost:8080
You can customize the application behavior using environment variables:
| Variable | Default | Description |
|---|---|---|
| `HOST` | `0.0.0.0` | Server bind address |
| `PORT` | `8080` | Server port |
| `WORKERS` | `1` | Number of worker processes |
| `REDIS_URL` | `redis://localhost:6379` | Redis connection URL |
| `WEB_REPLICAS` | `4` | Number of web service replicas (Docker Compose only) |
| `PROMETHEUS_MULTIPROC_DIR` | `/tmp/prometheus_multiproc` | Directory for Prometheus multiprocess metrics |
Note: Message history is stored in Redis Streams with a maximum of 10,000 messages (auto-trimmed). Messages older than this limit are automatically removed.
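The auto-trim behavior means the stream acts as a bounded buffer: once the cap is reached, each new message evicts the oldest one. A stdlib analogy of that semantics (the real trimming is performed by Redis itself via the `MAXLEN` option to `XADD`, not by application code like this):

```python
from collections import deque

MAXLEN = 5  # the real stream caps at 10,000

# deque with maxlen evicts the oldest entry on overflow,
# mirroring how Redis trims the stream
stream = deque(maxlen=MAXLEN)
for i in range(8):
    stream.append({"text": f"message {i}"})

print(len(stream))        # 5
print(stream[0]["text"])  # message 3 -- messages 0-2 were trimmed away
```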
Example:

```bash
# Run with custom settings
HOST=127.0.0.1 PORT=3000 WORKERS=4 pants run src/server:app
```

This project uses pre-commit to automatically run code quality checks before each commit.
- Install pre-commit:

  ```bash
  # Using pip
  pip install pre-commit

  # Or using Homebrew (macOS)
  brew install pre-commit
  ```

- Install the git hook scripts:

  ```bash
  pre-commit install
  ```

- Verify installation (optional):

  ```bash
  # Run against all files to test
  pre-commit run --all-files
  ```
Once installed, pre-commit hooks run automatically before each commit. If any checks fail:

- Review the error messages
- Fix the issues (many are auto-fixed)
- Stage the fixed files: `git add .`
- Commit again: `git commit -m "your message"`
Run hooks manually without committing:

```bash
# Run on all files
pre-commit run --all-files

# Run on specific files
pre-commit run --files src/server/app.py

# Run a specific hook
pre-commit run pants-fmt --all-files
```

Keep hooks up to date:

```bash
# Update to latest versions
pre-commit autoupdate
```

```
multiuser-webchat/
├── src/server/          # Python backend application
│   ├── app.py           # Main application entry point
│   ├── ws.py            # WebSocket handling
│   ├── models.py        # Data models
│   ├── redis.py         # Redis connection management
│   └── metrics.py       # Prometheus metrics definitions
├── frontend/static/     # Frontend assets
│   ├── index.html       # Main HTML page
│   ├── app.js           # WebSocket client with auto-reconnection
│   └── style.css        # Styles
├── monitoring/          # Observability configuration
│   ├── prometheus/      # Prometheus config
│   └── grafana/         # Grafana dashboards and datasources
├── tests/               # Test suite
├── docker-compose.yml   # Multi-service Docker setup
├── Dockerfile           # Application container definition
├── nginx.conf           # Nginx load balancer configuration
├── pants.toml           # Pants build configuration
├── mypy.ini             # Type checking configuration
└── ruff.toml            # Linting and formatting configuration
```
Run linting and type checking:

```bash
# Format code
pants fmt ::

# Lint code
pants lint ::

# Type check
pants check ::

# Run all checks
pants fmt lint check ::
```

Run tests:

```bash
# Run all tests
pants test ::

# Run specific test
pants test tests/test_example.py

# Run tests with coverage
pants test --coverage-report=html ::
```

Build artifacts:

```bash
# Build PEX executable
pants package src/server:app

# Build Docker image
docker build -t chat-app:latest .
```
- Security:
  - Use HTTPS in production (configure SSL termination at Nginx or the load balancer)
  - Set secure environment variables
  - Consider using Redis AUTH for Redis connections
- Scaling:
  - Adjust `WEB_REPLICAS` based on expected load
  - Monitor Redis memory usage (messages are stored in Redis Streams)
  - Consider Redis Cluster for high availability
- Message Persistence:
  - Messages are stored in Redis Streams (up to 10,000 messages)
  - Enable Redis persistence (RDB/AOF) to survive Redis restarts
  - Configure `maxmemory-policy` appropriately for your use case
- Monitoring:
  - The application exposes a health check endpoint at `/healthz`
  - Prometheus metrics are available at `/metrics`
  - Grafana dashboards provide real-time monitoring
  - Monitor Docker container health status
  - Set up logging aggregation for distributed logs
For production deployment, consider:

```bash
# Production deployment with scaling
WEB_REPLICAS=8 docker compose -f docker-compose.yml up -d

# View logs
docker compose logs -f

# Monitor health
docker compose ps
```

The application includes built-in observability with Prometheus metrics and Grafana dashboards.

- Grafana: http://localhost:3001 (anonymous access enabled)
- Prometheus: http://localhost:9091
The application exposes the following metrics at `/metrics`:

| Metric | Type | Description |
|---|---|---|
| `webchat_connected_users` | Gauge | Number of currently connected WebSocket users |
| `webchat_messages_total` | Counter | Total number of messages processed |
| `webchat_message_latency_seconds` | Histogram | Time to process a message |
| `webchat_connections_total` | Counter | Total WebSocket connection attempts (by status) |
| `webchat_disconnections_total` | Counter | Total WebSocket disconnections (by reason) |
| `webchat_redis_operations_total` | Counter | Total Redis operations (by operation/status) |
| `webchat_redis_latency_seconds` | Histogram | Redis operation latency |
| `webchat_errors_total` | Counter | Total errors (by type) |
Prometheus uses Docker service discovery to automatically find and scrape all web container replicas. This means:

- No manual configuration is needed when scaling workers
- Metrics are collected directly from each container (not through Nginx)
- Use `sum(webchat_connected_users)` in PromQL to get the total connected users across all replicas
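The `sum(...)` is needed because each replica reports only its own gauge, so Prometheus sees one time series per container. A Python sketch of that aggregation, parsing the Prometheus text exposition format by hand (the scrape bodies and values here are made up for illustration):

```python
# Two hypothetical scrapes, one per web replica, in Prometheus text format
scrapes = [
    "webchat_connected_users 3.0\n",
    "webchat_connected_users 5.0\n",
]

def gauge_value(exposition: str, metric: str) -> float:
    # Minimal parser: find the sample line for `metric` and read its value.
    # (Real expositions also carry HELP/TYPE comments and labels.)
    for line in exposition.splitlines():
        if line.startswith(metric):
            return float(line.split()[-1])
    return 0.0

total = sum(gauge_value(s, "webchat_connected_users") for s in scrapes)
print(total)  # 8.0 -- the figure sum(webchat_connected_users) yields in PromQL
```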
- WebSocket connection failed:
  - Ensure Redis is running and accessible
  - Check Docker container logs: `docker compose logs web`
  - Verify port 8080 is not in use by another application
- Messages not appearing for all users:
  - Confirm Redis is properly configured for pub/sub
  - Check that all application instances connect to the same Redis instance
- Build failures:
  - Ensure Python 3.13 is available in the build environment
  - Check that all dependencies in `pyproject.toml` are accessible
- Application: http://localhost:8080/healthz
- Nginx: Monitors application health and fails over accordingly
- Redis: Built-in health check via `redis-cli ping`
This project uses GitHub Actions for automated CI/CD. The pipeline runs on every push and pull request, checking:
- Code formatting and linting (Ruff)
- Type checking (MyPy)
- Tests with Redis integration
- Security scanning (Gitleaks)
- Application builds (PEX and Docker)
Check the Actions tab to view workflow runs. CI checks must pass before pull requests can be merged.
This project is licensed under the terms specified in the LICENSE file.
- Fork the repository
- Create a feature branch: `git checkout -b feature-name`
- Make your changes and ensure tests pass: `pants test ::`
- Run code quality checks: `pants fmt lint check ::`
- Commit your changes: `git commit -am 'Add feature'`
- Push to the branch: `git push origin feature-name`
- Submit a pull request (CI will run automatically)