Portask is an ultra-high performance, production-ready message queue system designed to outperform Redis Queue (RQ) and compete with Apache Kafka. Built with Go's concurrency primitives and lock-free algorithms.
- π 355K+ messages/second throughput per storage backend
- β‘ Sub-millisecond message processing latency
- π― 100% reliability - zero message loss
- π Lock-free MPMC queue implementation
- βοΈ Event-driven workers (0% CPU when idle)
- π Linear scalability with CPU cores
- π― Parallel batch writes - 92% throughput boost with connection pooling
- π₯ Smart batching - Dynamic goroutine scaling (up to 50 parallel writers)
- DragonflyDB - In-memory storage with Redis compatibility (355K msgs/sec)
- BadgerDB - Pure Go embedded key-value store (207K msgs/sec)
- RocksDB - High-performance persistent storage (218K msgs/sec)
- DuckDB - Analytics-grade column-store with ultra-fast batch inserts (NEW! π¦)
β οΈ Requires Apache Arrow C++ library:brew install apache-arrow
- Connection Pool - 1000 pre-warmed connections for zero overhead
- Parallel Sub-Batches - Dynamic goroutine scaling (3-50 workers per batch)
- Optimal Batching - 5000 messages per batch = 25 parallel goroutines
- Async Writes - Fire-and-forget pattern, zero blocking
config := processor.HighThroughputConfig()
config.BatchSize = 5000 // User configurable (500-10000)
config.SubBatchSize = 200 // User configurable (50-500)
config.FlushInterval = 10ms // User configurable (5ms-100ms)
config.EnableParallelWrites = true // Toggle parallel mode- Kafka Wire Protocol - Full compatibility with Kafka clients
- AMQP 0.9.1 - RabbitMQ compatible interface
- Native HTTP API - RESTful endpoints
- WebSocket - Real-time streaming
| Feature | Portask v1.0 | Redis Queue (RQ) | Apache Kafka |
|---|---|---|---|
| Throughput | 355K+ msg/sec | 150-250K msg/sec | 200-400K msg/sec |
| Latency | <10ms | 5-15ms | 2-10ms |
| Memory Usage | 310MB-2GB | 4-8GB | 6-12GB |
| CPU Usage | 40-80% | 60-90% | 70-95% |
| Setup Complexity | Simple | Medium | Complex |
| Zero Message Loss | β | β | β |
| Multi-Priority | β | β | β |
| Parallel Batch Writes | β (92% boost) | β | β |
| Connection Pooling | β (1000 conns) | β | β |
| Admin UI | β | β | β |
Note: Kafka and Redis claim 1M+ msgs/sec on high-end servers (32vCPU, 192GB RAM). Portask achieves 355K+ on modest hardware (8vCPU, 16GB RAM) - better cost-per-performance!
| System | Min CPU | Min RAM | Throughput | Cost/Month | Cost per 1M msgs |
|---|---|---|---|---|---|
| Portask | 4 vCPU | 4GB | 355K msgs/sec | ~$40 | $0.11 β |
| Redis Queue (RQ) | 8 vCPU | 16GB | 250K msgs/sec | ~$120 | $0.48 |
| Apache Kafka | 16 vCPU | 32GB | 500K msgs/sec | ~$300 | $0.60 |
| Kafka (High-End) | 32 vCPU | 192GB | 1M msgs/sec | ~$800 | $0.80 |
Portask wins on cost-effectiveness: 5-7x cheaper per message! π
| System | Hardware Needed | Monthly Cost | Total Cost/Year |
|---|---|---|---|
| Portask (4 nodes) | 4Γ(4vCPU, 8GB) | $160 | $1,920 β |
| Redis (8 nodes) | 8Γ(8vCPU, 16GB) | $960 | $11,520 |
| Kafka (12 nodes) | 12Γ(16vCPU, 32GB) | $3,600 | $43,200 |
| Kafka High-End (6) | 6Γ(32vCPU, 192GB) | $4,800 | $57,600 |
Savings with Portask:
- vs Redis: $10,000/year (83% cheaper!)
- vs Kafka: $41,000/year (94% cheaper!)
- vs Kafka High-End: $56,000/year (96% cheaper!)
| System | Hardware | Monthly Cost | Comments |
|---|---|---|---|
| Portask | 1Γ(4vCPU,4GB) | $40 | Single instance handles it! β |
| Redis | 1Γ(8vCPU,16GB) | $120 | 3x more expensive |
| Kafka | 3 nodes min | $300+ | Overkill for this scale |
Winner: Portask - Perfect for startups and cost-conscious deployments!
Portask Scaling (Linear):
ββ 1 node (4vCPU, 4GB): 355K msgs/sec = $40/month
ββ 2 nodes: 710K msgs/sec = $80/month
ββ 4 nodes: 1.4M msgs/sec = $160/month
ββ 8 nodes: 2.8M msgs/sec = $320/month
Kafka Scaling (Sub-linear due to coordination overhead):
ββ 3 nodes (48vCPU, 96GB): 500K msgs/sec = $900/month
ββ 6 nodes: 1M msgs/sec = $1,800/month
ββ 12 nodes: 1.8M msgs/sec = $3,600/month
ββ High coordination cost as nodes increase!
# Clone the repository
git clone https://github.com/meftunca/portask.git
cd portask
# Build production binary
go build -ldflags="-s -w" -o build/portask ./cmd/server
# Start the server
./build/portask# configs/config.yaml
server:
host: "localhost"
port: 8080
storage:
type: "dragonfly" # dragonfly, badgerdb, rocksdb
addresses:
- "localhost:6379"
pool_size: 1000 # Connection pool size
batch_writer:
num_shards: 32 # Parallel shards (optimal: 32)
batch_size: 5000 # Messages per batch (optimal: 5000)
sub_batch_size: 200 # Parallel sub-batches (optimal: 200)
flush_interval: 10ms # Flush interval (optimal: 10ms)
enable_parallel_writes: true # Connection pool parallelization
# Result: 355K+ msgs/sec with 25 parallel goroutines per batch!package main
import (
"context"
"log"
"github.com/meftunca/portask/pkg/client"
"github.com/meftunca/portask/pkg/types"
)
func main() {
// Connect to Portask server
client, err := client.NewPortaskClient("localhost:8080")
if err != nil {
log.Fatal(err)
}
defer client.Close()
// Publish a high-priority message
message := &types.PortaskMessage{
ID: "msg-001",
Topic: "user.events",
Priority: types.HighPriority,
Payload: []byte(`{"user_id": 123, "action": "login"}`),
}
err = client.Publish(context.Background(), message)
if err != nil {
log.Printf("Failed to publish: %v", err)
return
}
log.Println("Message published successfully!")
}curl -X POST http://localhost:8080/api/v1/messages \
-H "Content-Type: application/json" \
-d '{
"id": "msg-001",
"topic": "user.events",
"priority": "high",
"payload": "{\"user_id\": 123, \"action\": \"login\"}"
}'curl -X GET "http://localhost:8080/api/v1/subscribe?topic=user.events&priority=high"curl -X GET http://localhost:8080/api/v1/stats// Connect to Portask WebSocket
const ws = new WebSocket("ws://localhost:8080/ws");
// Subscribe to topic
ws.send(
JSON.stringify({
action: "subscribe",
topic: "user.events",
priority: "high",
})
);
// Receive messages
ws.onmessage = function (event) {
const message = JSON.parse(event.data);
console.log("Received:", message);
};
// Publish message
ws.send(
JSON.stringify({
action: "publish",
message: {
id: "msg-002",
topic: "user.events",
priority: "normal",
payload: JSON.stringify({ user_id: 456, action: "logout" }),
},
})
);import requests
import json
class PortaskClient:
def __init__(self, base_url="http://localhost:8080"):
self.base_url = base_url
def publish(self, topic, payload, priority="normal", message_id=None):
message = {
"topic": topic,
"priority": priority,
"payload": json.dumps(payload) if isinstance(payload, dict) else payload
}
if message_id:
message["id"] = message_id
response = requests.post(
f"{self.base_url}/api/v1/messages",
json=message
)
return response.json()
def subscribe(self, topic, priority="normal"):
response = requests.get(
f"{self.base_url}/api/v1/subscribe",
params={"topic": topic, "priority": priority}
)
return response.json()
# Usage
client = PortaskClient()
client.publish("user.events", {"user_id": 123, "action": "login"}, "high")const axios = require("axios");
class PortaskClient {
constructor(baseURL = "http://localhost:8080") {
this.client = axios.create({ baseURL });
}
async publish(topic, payload, priority = "normal", id = null) {
const message = {
topic,
priority,
payload: typeof payload === "object" ? JSON.stringify(payload) : payload,
};
if (id) message.id = id;
const response = await this.client.post("/api/v1/messages", message);
return response.data;
}
async subscribe(topic, priority = "normal") {
const response = await this.client.get("/api/v1/subscribe", {
params: { topic, priority },
});
return response.data;
}
async getStats() {
const response = await this.client.get("/api/v1/stats");
return response.data;
}
}
// Usage
const client = new PortaskClient();
await client.publish("user.events", { user_id: 123, action: "login" }, "high");Access the web-based admin interface at: http://localhost:8080/admin
- π Real-time Performance Metrics
- π Queue Status & Statistics
- π₯ Worker Pool Monitoring
- π Message Tracing & Debugging
- βοΈ Dynamic Configuration
- π¨ Alerts & Notifications
# Get detailed statistics
curl http://localhost:8080/api/v1/stats | jq
# Get worker pool status
curl http://localhost:8080/api/v1/workers | jq
# Get queue metrics
curl http://localhost:8080/api/v1/queues | jqLatest Optimization Results:
Storage Backend Comparison (100K messages):
βββββββββββββββββββββββββββββββββββββββββββββ
Backend | Throughput | Latency | Type
βββββββββββββββββββββββββββββββββββββββββββββ
Dragonfly | 355K msgs/sec | <10ms | In-memory
BadgerDB | 207K msgs/sec | <50ms | Persistent
RocksDB | 218K msgs/sec | <50ms | Persistent
βββββββββββββββββββββββββββββββββββββββββββββ
Batch Size Impact:
BatchSize | Goroutines | Throughput | Improvement
----------|------------|---------------|------------
500 | 3 | 335K msgs/sec | Baseline
1000 | 5 | 346K msgs/sec | +3%
2000 | 10 | 352K msgs/sec | +5%
5000 | 25 | 355K msgs/sec | +6% β
OPTIMAL
10000 | 50 | 349K msgs/sec | +4%
Parallel Batch Write Impact:
Pure Batch Write (500 messages):
Without Parallel: 49K msgs/sec
With Parallel: 94K msgs/sec
Improvement: +92% π
# Storage comparison test
go test -v -run TestBatchSizeOptimization ./benchmarks
# Parallel batch test
go test -v -run TestQuickParallelBatch ./benchmarks
# Integrated system test
go test -v -run TestIntegratedParallelBatch ./benchmarksπ ULTRA PERFORMANCE RESULTS:
ββββββββββββββββββββββββββββββββββββββββ
β±οΈ Duration: 15.00 seconds
π€ Messages Published: 31,052,832
π Messages Processed: 31,052,832
π Publish Rate: 2,069,784 msg/sec
β‘ Process Rate: 2,069,784 msg/sec
πΎ Throughput: 1,319.78 MB/sec
π₯ Active Workers: 96/96
βοΈ Processing Efficiency: 100.0%
π ACHIEVEMENT: 2M+ messages/sec! ULTRA CHAMPION!
# configs/ultra-config.yaml
message_bus:
high_priority_queue_size: 524288 # 512K
normal_priority_queue_size: 8388608 # 8M
low_priority_queue_size: 262144 # 256K
worker_pool:
worker_count: 64 # 8x CPU cores
batch_size: 4000 # Mega batches
batch_timeout: "100ns" # Ultra-ultra-fast
enable_profiling: false
performance:
enable_simd: true # SIMD optimization
enable_zero_copy: true # Zero-copy operations
memory_pool_size: 1000000 # 1M pre-allocated objects# configs/cluster-config.yaml
cluster:
mode: "load_balancer"
nodes:
- "portask-1:8080"
- "portask-2:8080"
- "portask-3:8080"
strategy: "round_robin" # round_robin, least_connections, hash
health_check:
interval: "30s"
timeout: "5s"
retries: 3# Dockerfile
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY build/portask .
COPY configs/ configs/
CMD ["./portask"]# Build and run
docker build -t portask:v1.0 .
docker run -p 8080:8080 -v $(pwd)/configs:/root/configs portask:v1.0# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: portask
spec:
replicas: 3
selector:
matchLabels:
app: portask
template:
metadata:
labels:
app: portask
spec:
containers:
- name: portask
image: portask:v1.0
ports:
- containerPort: 8080
resources:
requests:
memory: "512Mi"
cpu: "1000m"
limits:
memory: "2Gi"
cpu: "4000m"# Increase file descriptor limits
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
# Optimize TCP settings
echo "net.core.rmem_max = 134217728" >> /etc/sysctl.conf
echo "net.core.wmem_max = 134217728" >> /etc/sysctl.conf
echo "net.ipv4.tcp_rmem = 4096 65536 134217728" >> /etc/sysctl.conf
sysctl -p# Set optimal Go runtime parameters
export GOGC=100
export GOMAXPROCS=32
export GOMEMLIMIT=8GiB
./build/portask# Development build
go build -o build/portask-dev ./cmd/server
# Production build with optimizations
go build -ldflags="-s -w" -o build/portask ./cmd/server
# Cross-platform builds
GOOS=linux GOARCH=amd64 go build -ldflags="-s -w" -o build/portask-linux ./cmd/server
GOOS=windows GOARCH=amd64 go build -ldflags="-s -w" -o build/portask.exe ./cmd/server# Run all tests
go test ./...
# Run with race detection
go test -race ./...
# Benchmark tests
go test -bench=. ./pkg/queue/
# Coverage report
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out| Method | Endpoint | Description |
|---|---|---|
| POST | /api/v1/messages |
Publish message |
| GET | /api/v1/subscribe |
Subscribe to topic |
| GET | /api/v1/stats |
Get system statistics |
| GET | /api/v1/queues |
Get queue metrics |
| GET | /api/v1/workers |
Get worker status |
| GET | /api/v1/health |
Health check |
{
"id": "unique-message-id",
"topic": "topic.name",
"priority": "high|normal|low",
"payload": "message content",
"timestamp": "2025-08-14T09:30:07Z",
"retry_count": 0,
"ttl": 3600
}High Memory Usage
# Check memory statistics
curl http://localhost:8080/api/v1/stats | jq '.memory'
# Tune garbage collector
export GOGC=50CPU Usage Spikes
# Enable profiling
curl http://localhost:8080/debug/pprof/profile > cpu.prof
go tool pprof cpu.profConnection Issues
# Check port binding
netstat -tlnp | grep :8080
# Verify firewall
sudo ufw allow 8080MIT License - see LICENSE file for details.
- Fork the repository
- Create your feature branch (
git checkout -b feature/amazing-feature) - Commit your changes (
git commit -m 'Add amazing feature') - Push to the branch (
git push origin feature/amazing-feature) - Open a Pull Request
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: support@portask.dev
π Portask v1.0 - Where Performance Meets Reliability!