Shortly is a scalable URL shortener built entirely in Go with a clean microservices architecture. It includes authentication, analytics, rate limiting, Redis-based caching, and a high-performance Key Generation Service (KGS) that pre-generates short keys to ensure consistent latency under heavy load.
Below is a high-level illustration of Shortly's architecture to help you visualize the flow and interactions between components:
- Built using Go + Gin framework for RESTful HTTP APIs.
- Handles user authentication (signup, signin, logout) with JWT tokens.
- Manages URL lifecycle operations: creating shortened URLs, fetching details, updating, deleting, and redirection.
- Communicates internally with the KGS Service using gRPC to request pre-generated keys for URL shortening.
- Caches frequently accessed data like user profiles and short key mappings in Redis for low-latency reads.
- Implements rate limiting using Redis (DB 2) with ulule middleware to protect endpoints from abuse.
- For redirection, looks up the short key in Redis; if cached, instantly issues a 302 redirect response.
- Stores analytics data asynchronously using goroutines to avoid blocking the request lifecycle.
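As a rough sketch of how these pieces might be wired together, the Gin setup below registers the public and authenticated route groups under `/api/v1`. The handler stub and middleware constructors (`todo`, `authRequired`, `rateLimit`) are placeholders rather than Shortly's actual package layout, and which routes sit behind auth is an assumption:

```go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// todo is a placeholder handler so the sketch compiles; the real handlers
// live in their own packages in the actual service.
func todo(c *gin.Context) { c.Status(http.StatusNotImplemented) }

// Stand-ins for the JWT-auth and Redis rate-limit middleware described above.
func authRequired() gin.HandlerFunc { return func(c *gin.Context) { c.Next() } }
func rateLimit() gin.HandlerFunc    { return func(c *gin.Context) { c.Next() } }

func main() {
	r := gin.Default()

	// Public routes: signup, signin, and redirection need no token.
	public := r.Group("/api/v1")
	public.POST("/auth/signup", todo)
	public.POST("/auth/signin", todo)
	public.GET("/url/redirect/:shortKey", todo)

	// Authenticated routes share the JWT middleware and the rate limiter.
	private := r.Group("/api/v1", authRequired(), rateLimit())
	private.POST("/auth/logout", todo)
	private.GET("/profile/", todo)
	private.PATCH("/profile/update", todo)
	private.GET("/url/", todo)
	private.POST("/url/shorten", todo)
	private.GET("/url/:shortKey", todo)
	private.PATCH("/url/:shortKey", todo)
	private.DELETE("/url/:shortKey", todo)
	private.GET("/analytics/:urlId", todo)

	r.Run(":8080")
}
```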
- A dedicated microservice written in Go and exposed via gRPC.
- Responsible for generating unique short keys ahead of time (pre-generation), avoiding latency spikes during user requests.
- Stores generated keys in MongoDB for persistence and in Redis (DB 0) as a queue (using `LPUSH`/`RPOP` operations).
- When the API service requests a key, KGS pops one from the Redis queue, marks it as used in MongoDB, and returns it.
- Automatically refills the key queue if the pool size drops below a threshold, ensuring high availability.
- Ensures uniqueness and consistency of keys even under high concurrent load, avoiding collisions. The system can generate up to ~56.8 billion unique keys using 6-character Base62 encoding (62⁶ combinations), with an extremely low probability of collision.
- Currently writes newly generated keys to MongoDB using bulk operations with `InsertMany`. The current batch size is around 1,000 keys, but `InsertMany` can efficiently handle batches of up to 100,000 keys at once.
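A minimal sketch of what batch pre-generation with `InsertMany` could look like using the official MongoDB Go driver; the document shape (`key`, `status`, `createdAt`) is an assumption, not Shortly's exact schema:

```go
package kgs

import (
	"context"
	"crypto/rand"
	"math/big"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
)

const base62 = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

// randomKey builds a 6-character Base62 key (62^6 is roughly 56.8 billion combinations).
func randomKey() (string, error) {
	b := make([]byte, 6)
	for i := range b {
		n, err := rand.Int(rand.Reader, big.NewInt(int64(len(base62))))
		if err != nil {
			return "", err
		}
		b[i] = base62[n.Int64()]
	}
	return string(b), nil
}

// insertBatch pre-generates `size` keys and bulk-inserts them as "available"
// in a single InsertMany call. A unique index on "key" (assumed) guards
// against the rare collision with an already stored key.
func insertBatch(ctx context.Context, keys *mongo.Collection, size int) error {
	docs := make([]interface{}, 0, size)
	for i := 0; i < size; i++ {
		k, err := randomKey()
		if err != nil {
			return err
		}
		docs = append(docs, bson.M{"key": k, "status": "available", "createdAt": time.Now()})
	}
	_, err := keys.InsertMany(ctx, docs)
	return err
}
```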
- **PostgreSQL**:
  - Stores core relational data: users, URLs, and analytics events.
  - A structured schema supports complex queries and joins for analytics and user management.
- **MongoDB**:
  - Holds the collection of pre-generated short keys.
  - Acts as durable storage for key state (used or available).
- **Redis (Logical DBs)**:
  - DB 0: Queue of pre-generated keys used by the KGS service.
  - DB 1: Cache layer for fast access to user profiles and short key mappings (TTL up to 24h).
  - DB 2: Rate-limiting counters per user/IP via the ulule middleware.
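The three logical databases can simply be opened as separate clients; a sketch with go-redis (client library and version assumed):

```go
package store

import "github.com/redis/go-redis/v9"

// NewRedisClients opens one client per logical database, mirroring the
// layout above. The single-address setup is an assumption; in practice the
// address would come from configuration.
func NewRedisClients(addr string) (keyQueue, cache, limits *redis.Client) {
	keyQueue = redis.NewClient(&redis.Options{Addr: addr, DB: 0}) // KGS key queue
	cache = redis.NewClient(&redis.Options{Addr: addr, DB: 1})    // profile / URL cache
	limits = redis.NewClient(&redis.Options{Addr: addr, DB: 2})   // rate-limit counters
	return keyQueue, cache, limits
}
```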
- On each redirection or URL access, metadata such as OS, device type, IP address, country, and timestamp is collected.
- Analytics logging is done asynchronously in the API service using Goroutines, ensuring minimal impact on latency.
- Stored in PostgreSQL for reporting and metrics.
| Layer | Technology |
|---|---|
| API Service | Go + Gin |
| Auth | JWT-based Authentication |
| Microservice | Go + gRPC |
| Key Queue | Redis (RPOP/LPUSH as queue) |
| DB (Main) | PostgreSQL |
| DB (Keys) | MongoDB |
| Cache | Redis |
| Logging | Slog |
| Rate Limiting | Redis + ulule Middleware |
All endpoints are prefixed with: `/api/v1`

- `POST /auth/signup`
- `POST /auth/signin`
- `POST /auth/logout`
- `GET /profile/`
- `PATCH /profile/update`
- `GET /url/`
- `POST /url/shorten`
- `GET /url/:shortKey`
- `PATCH /url/:shortKey`
- `DELETE /url/:shortKey`
- `GET /url/redirect/:shortKey` (302 redirection)
- `GET /analytics/:urlId`
- The user sends a request to:
  - `POST /auth/signup` → Register a new user.
  - `POST /auth/signin` → Authenticate and log in.
- The API Service:
- Validates the request.
- Stores user credentials in PostgreSQL (hashed password).
- On login, generates and returns a JWT.
- The token is used for all authenticated endpoints and is validated in middleware.
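A hedged sketch of token issuance and validation using the `golang-jwt/jwt` library; the actual library, claim names, and expiry used by Shortly may differ:

```go
package auth

import (
	"errors"
	"time"

	"github.com/golang-jwt/jwt/v5"
)

// issueToken signs a JWT for an authenticated user. The claim names, 24h
// expiry, and HMAC secret handling are illustrative.
func issueToken(userID string, secret []byte) (string, error) {
	claims := jwt.MapClaims{
		"sub": userID,
		"iat": time.Now().Unix(),
		"exp": time.Now().Add(24 * time.Hour).Unix(),
	}
	token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
	return token.SignedString(secret)
}

// parseToken is roughly what the auth middleware would call on each request
// to validate the bearer token and recover the user ID.
func parseToken(tokenString string, secret []byte) (string, error) {
	token, err := jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
		if _, ok := t.Method.(*jwt.SigningMethodHMAC); !ok {
			return nil, errors.New("unexpected signing method")
		}
		return secret, nil
	})
	if err != nil || !token.Valid {
		return "", errors.New("invalid token")
	}
	claims, ok := token.Claims.(jwt.MapClaims)
	if !ok {
		return "", errors.New("unexpected claims type")
	}
	sub, _ := claims["sub"].(string)
	return sub, nil
}
```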
- Endpoint: `GET /profile/`
- The API service attempts to fetch user profile data from Redis DB 1.
- If hit: Return cached profile immediately.
- If miss:
- Query PostgreSQL for user data.
- On success, cache it in Redis DB 1 with a TTL of 30 minutes.
- Then return the response.
- This lazy caching pattern ensures minimal database hits under load.
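A sketch of this cache-aside lookup, assuming go-redis, `database/sql`, and an illustrative `Profile` struct, Redis key format, and `users` schema:

```go
package profile

import (
	"context"
	"database/sql"
	"encoding/json"
	"time"

	"github.com/redis/go-redis/v9"
)

// Profile is an illustrative shape; the real user model will differ.
type Profile struct {
	ID    string `json:"id"`
	Name  string `json:"name"`
	Email string `json:"email"`
}

// getProfile implements the lazy (cache-aside) pattern: Redis DB 1 first,
// PostgreSQL on a miss, then re-populate the cache with a 30-minute TTL.
func getProfile(ctx context.Context, rdb *redis.Client, db *sql.DB, userID string) (*Profile, error) {
	cacheKey := "profile:" + userID // key format is an assumption

	// 1. Try the cache.
	if raw, err := rdb.Get(ctx, cacheKey).Result(); err == nil {
		var p Profile
		if json.Unmarshal([]byte(raw), &p) == nil {
			return &p, nil
		}
	}

	// 2. Cache miss: read from PostgreSQL.
	var p Profile
	err := db.QueryRowContext(ctx,
		`SELECT id, name, email FROM users WHERE id = $1`, userID,
	).Scan(&p.ID, &p.Name, &p.Email)
	if err != nil {
		return nil, err
	}

	// 3. Write back to Redis so the next read is a cache hit.
	if raw, err := json.Marshal(p); err == nil {
		rdb.Set(ctx, cacheKey, raw, 30*time.Minute)
	}
	return &p, nil
}
```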
- If the user provides a custom key, the API checks it for uniqueness:
  - Looks up PostgreSQL to see if the key is already used.
  - If available:
    - Inserts the mapping into PostgreSQL and Redis.
  - Else:
    - Returns a `409 Conflict`.
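A possible shape for the custom-key path (the Redis write is omitted for brevity); the `urls` table, column names, and the `ErrKeyTaken` mapping to `409 Conflict` are assumptions:

```go
package shortener

import (
	"context"
	"database/sql"
	"errors"
)

// ErrKeyTaken is mapped to a 409 Conflict by the HTTP handler.
var ErrKeyTaken = errors.New("short key already in use")

// createCustomKey checks a user-supplied key for uniqueness and stores the
// mapping. Table and column names are assumptions about Shortly's schema.
func createCustomKey(ctx context.Context, db *sql.DB, key, originalURL string) error {
	var exists bool
	err := db.QueryRowContext(ctx,
		`SELECT EXISTS (SELECT 1 FROM urls WHERE short_key = $1)`, key,
	).Scan(&exists)
	if err != nil {
		return err
	}
	if exists {
		return ErrKeyTaken
	}
	// A unique constraint on short_key (assumed) still protects against a
	// race between this check and the insert.
	_, err = db.ExecContext(ctx,
		`INSERT INTO urls (short_key, original_url) VALUES ($1, $2)`, key, originalURL)
	return err
}
```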
- If no custom key is provided, the API makes a gRPC request to the KGS service to fetch a short key:
  - KGS does an `RPOP` from Redis DB 0 (which acts as a key queue).
  - Marks the key as used in MongoDB (to avoid reuse).
  - Returns the key to the API service.
- The API service:
  - Stores `{shortKey → originalURL}` in PostgreSQL.
  - Also caches the same mapping in Redis DB 1 with a 24-hour TTL.
  - Returns the shortened URL to the client.
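Putting the shorten flow together, an API-side sketch might look like this; `KeyFetcher` is a hypothetical wrapper around the generated gRPC client, and the table and cache-key layout are assumed:

```go
package shortener

import (
	"context"
	"database/sql"
	"time"

	"github.com/redis/go-redis/v9"
)

// KeyFetcher abstracts the KGS gRPC client; the stub generated from the real
// .proto would be wrapped to satisfy an interface like this (hypothetical).
type KeyFetcher interface {
	GetKey(ctx context.Context) (string, error)
}

// shorten fetches a pre-generated key from KGS, persists the mapping in
// PostgreSQL, and warms the Redis cache with a 24-hour TTL.
func shorten(ctx context.Context, kgs KeyFetcher, db *sql.DB, rdb *redis.Client, originalURL string) (string, error) {
	// 1. Ask KGS for a key (the RPOP and mark-as-used happen inside KGS).
	key, err := kgs.GetKey(ctx)
	if err != nil {
		return "", err
	}

	// 2. Store {shortKey -> originalURL} in PostgreSQL (schema assumed).
	if _, err := db.ExecContext(ctx,
		`INSERT INTO urls (short_key, original_url) VALUES ($1, $2)`, key, originalURL); err != nil {
		return "", err
	}

	// 3. Cache the same mapping in Redis DB 1 so the first redirect is a hit.
	rdb.Set(ctx, "url:"+key, originalURL, 24*time.Hour)
	return key, nil
}
```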
- Checks Redis DB 1 for the short key.
- If hit:
  - Instantly redirects with an HTTP `302`.
  - Triggers analytics logging asynchronously via goroutines.
- If cache miss:
  - Queries PostgreSQL for the original URL.
  - If found:
    - Returns a `302` redirect.
    - Re-populates Redis with the mapping (24h TTL).
    - Fires off async analytics logging.
  - If not found:
    - Returns `404 Not Found`.
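A sketch of a redirect handler following this flow; the Redis key format, table schema, and the `logAnalytics` stub are illustrative:

```go
package shortener

import (
	"context"
	"database/sql"
	"net/http"
	"time"

	"github.com/gin-gonic/gin"
	"github.com/redis/go-redis/v9"
)

// logAnalytics is a stub here; see the analytics sketch further below.
func logAnalytics(shortKey, ip, userAgent string) {}

// redirectHandler checks Redis DB 1 first, falls back to PostgreSQL on a
// miss, and logs analytics asynchronously either way.
func redirectHandler(rdb *redis.Client, db *sql.DB) gin.HandlerFunc {
	return func(c *gin.Context) {
		shortKey := c.Param("shortKey")
		cacheKey := "url:" + shortKey

		// Cache hit: issue the 302 immediately, log analytics off the hot path.
		if target, err := rdb.Get(c.Request.Context(), cacheKey).Result(); err == nil {
			go logAnalytics(shortKey, c.ClientIP(), c.Request.UserAgent())
			c.Redirect(http.StatusFound, target)
			return
		}

		// Cache miss: fall back to PostgreSQL.
		var target string
		err := db.QueryRowContext(c.Request.Context(),
			`SELECT original_url FROM urls WHERE short_key = $1`, shortKey,
		).Scan(&target)
		if err != nil {
			c.JSON(http.StatusNotFound, gin.H{"error": "short URL not found"})
			return
		}

		// Re-populate the cache with a 24h TTL, then redirect.
		rdb.Set(context.Background(), cacheKey, target, 24*time.Hour)
		go logAnalytics(shortKey, c.ClientIP(), c.Request.UserAgent())
		c.Redirect(http.StatusFound, target)
	}
}
```

Analytics runs in a goroutine on both paths, so redirect latency stays bound by the Redis or PostgreSQL lookup alone.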
- Triggered in a non-blocking fashion from the redirect handler via goroutines.
- Collected metadata includes:
- IP address
- User Agent (parsed for OS and device)
- Country (via IP geo lookup)
- Timestamp
- Data is stored in PostgreSQL under the analytics table.
- This is fully decoupled to keep the redirect fast and scalable.
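A minimal sketch of the asynchronous write path, assuming the event has already been enriched (user-agent parsing and IP geo lookup are omitted) and an illustrative `analytics` table layout:

```go
package analytics

import (
	"context"
	"database/sql"
	"log/slog"
	"time"
)

// Event mirrors the metadata listed above; field and column names are assumptions.
type Event struct {
	URLID     int64
	IP        string
	OS        string
	Device    string
	Country   string
	Timestamp time.Time
}

// Log is intended to be called as `go analytics.Log(db, ev)` from the
// redirect handler so the insert never blocks the 302 response.
func Log(db *sql.DB, ev Event) {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	_, err := db.ExecContext(ctx,
		`INSERT INTO analytics (url_id, ip, os, device, country, created_at)
		 VALUES ($1, $2, $3, $4, $5, $6)`,
		ev.URLID, ev.IP, ev.OS, ev.Device, ev.Country, ev.Timestamp)
	if err != nil {
		// A failed analytics write is only logged, never surfaced to the user.
		slog.Error("analytics insert failed", "url_id", ev.URLID, "err", err)
	}
}
```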
- All critical endpoints are protected with a custom rate limiter middleware.
- Uses Redis DB 2 to store counters per user ID or IP address.
- Returns `429 Too Many Requests` when the quota is exceeded.
- Helps prevent abuse and maintains QoS under high traffic.
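A sketch of wiring `ulule/limiter` to Redis DB 2 as Gin middleware; the `100-H` quota (100 requests per hour), key prefix, and library versions are example choices, and the limiter keys on client IP by default (per-user keys would need a custom key getter):

```go
package middleware

import (
	"github.com/gin-gonic/gin"
	libredis "github.com/redis/go-redis/v9"
	"github.com/ulule/limiter/v3"
	mgin "github.com/ulule/limiter/v3/drivers/middleware/gin"
	sredis "github.com/ulule/limiter/v3/drivers/store/redis"
)

// RateLimiter builds a Gin middleware backed by Redis DB 2. It answers
// 429 Too Many Requests once the quota is exhausted.
func RateLimiter(addr string) (gin.HandlerFunc, error) {
	// "100-H" means 100 requests per hour (an illustrative quota).
	rate, err := limiter.NewRateFromFormatted("100-H")
	if err != nil {
		return nil, err
	}

	client := libredis.NewClient(&libredis.Options{Addr: addr, DB: 2})
	store, err := sredis.NewStoreWithOptions(client, limiter.StoreOptions{
		Prefix: "shortly:ratelimit", // assumed key prefix
	})
	if err != nil {
		return nil, err
	}

	return mgin.NewMiddleware(limiter.New(store, rate)), nil
}
```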
- The KGS service runs as a standalone microservice.
- Maintains a queue of short keys in Redis DB 0.
- When the API service requests a short key (via gRPC):
  - KGS performs an `RPOP` from Redis DB 0 to fetch a key.
  - Immediately checks the queue length.
  - If the queue size is below a configured threshold (e.g., 1000):
    - KGS generates a new batch of keys.
    - Stores them in:
      - Redis DB 0 → via `LPUSH`.
      - MongoDB → each key is marked as `available`.
- Once a key is popped:
  - KGS attempts to mark the key as `used` in MongoDB.
  - If the DB update fails (e.g., MongoDB is down or the query errors):
    - The popped key is immediately pushed back into Redis DB 0 via `LPUSH`.
    - This ensures no key is lost in the system.
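A compact sketch of that pop / mark-used / push-back sequence; the Redis list name, Mongo collection shape, and refill threshold are assumptions:

```go
package kgs

import (
	"context"

	"github.com/redis/go-redis/v9"
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
)

// popKey hands one pre-generated key to the API service. If marking the key
// as used in MongoDB fails, the key is returned to the queue so it is not lost.
func popKey(ctx context.Context, rdb *redis.Client, keys *mongo.Collection) (string, error) {
	// 1. RPOP a pre-generated key from the Redis DB 0 queue.
	key, err := rdb.RPop(ctx, "kgs:keys").Result()
	if err != nil {
		return "", err // queue empty or Redis unavailable
	}

	// 2. If the pool has dropped below the threshold, kick off a refill
	// (the batch generation itself is sketched with insertBatch above).
	if n, err := rdb.LLen(ctx, "kgs:keys").Result(); err == nil && n < 1000 {
		// go refill(...) is omitted in this sketch
	}

	// 3. Mark the key as used in MongoDB so it is never handed out twice.
	_, err = keys.UpdateOne(ctx,
		bson.M{"key": key},
		bson.M{"$set": bson.M{"status": "used"}},
	)
	if err != nil {
		// 4. MongoDB failed: push the key back so it is not lost.
		rdb.LPush(ctx, "kgs:keys", key)
		return "", err
	}
	return key, nil
}
```

Pushing the key back with `LPUSH` rather than discarding it is what keeps the pool lossless when MongoDB is temporarily unavailable.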
- JWT Auth (Signup, Signin, Logout)
- Pre-generated Key Pool with Redis Queue
- gRPC for Internal Microservice Communication
- Asynchronous Analytics Collection
- Redis-based Profile and URL Caching
- Rate Limiting with Redis (DB 2)
- 302 Redirection with TTL-based Caching
- Gin Web Framework for REST API
- MongoDB-backed KGS Validation
- Clean and Maintainable Microservice Architecture
