A high-performance HTTP proxy server for IPTV content with true live proxying, per-client connection management, and seamless failover support. Built with FastAPI and optimized for efficiency.
Feel free to open an issue on this repo, or hit us up on Discord.
Join our Discord server to ask questions, get and give help, and suggest new ideas and improvements! You can also try out and help test new features!
- Pure HTTP Proxy: Zero transcoding, direct byte-for-byte streaming
- Per-Client Connections: Each client gets an independent provider connection
- Truly Ephemeral: Provider connections open only while a client is consuming
- HLS Support: Optimized playlist and segment handling (.m3u8)
- Continuous Streams: Direct proxy for .ts, .mp4, .mkv, .webm, .avi files
- Real-time URL Rewriting: Automatic playlist modification for proxied content
- Full VOD Support: Byte-range requests, seeking, multiple positions
- Strict Live TS Mode: Enhanced stability for live MPEG-TS with pre-buffering and a circuit breaker
- uvloop Integration: 2-4x faster async I/O operations
- Seamless Failover: <100ms transparent URL switching per client
- Immediate Cleanup: Connections close instantly when a client stops
- FFmpeg Integration: Built-in hardware-accelerated video processing
- GPU Acceleration: Automatic detection of NVIDIA, Intel, and AMD GPUs
- VAAPI Support: Intel/AMD hardware encoding (3-8x faster than CPU)
- NVENC Support: NVIDIA hardware encoding (10-20x faster than CPU)
- Auto-Configuration: Zero-config hardware acceleration setup
- Multiple Codecs: H.264, H.265/HEVC, VP8, VP9, AV1 support
- Client Tracking: Individual client sessions and bandwidth monitoring
- Real-time Statistics: Live metrics on streams, clients, and data usage
- Stream Type Detection: Automatic HLS/VOD/Live detection
- Automatic Cleanup: Inactive streams and clients are removed automatically
- Event System: Real-time events and webhook notifications
- Health Checks: Built-in health endpoints for monitoring
- Custom Metadata: Attach arbitrary key/value pairs to streams for identification
Use the example below to run the precompiled Docker Hub image.
You can also replace latest with dev or experimental to try another branch.
services:
  m3u-proxy:
    image: sparkison/m3u-proxy:latest
    container_name: m3u-proxy
    ports:
      - "8085:8085"
    # Hardware acceleration (optional)
    devices:
      - /dev/dri:/dev/dri  # Intel/AMD GPU support
    # For NVIDIA GPUs, use this instead:
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]
    environment:
      # Server Configuration
      - M3U_PROXY_HOST=0.0.0.0
      - M3U_PROXY_PORT=8085
      - LOG_LEVEL=INFO
      # Base path (default: /m3u-proxy for m3u-editor integration)
      # Set to empty string if not using a reverse proxy: ROOT_PATH=
      - ROOT_PATH=/m3u-proxy
      # Hardware acceleration (optional)
      - LIBVA_DRIVER_NAME=i965  # For older Intel GPUs
      # - LIBVA_DRIVER_NAME=iHD  # For newer Intel GPUs
      # Timeouts (optional)
      - CLIENT_TIMEOUT=300
      - CLEANUP_INTERVAL=60
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8085/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

Requires python (>=3.10) and pip (>=23) installed on your system.
git clone https://github.com/sparkison/m3u-proxy.git && cd m3u-proxy
pip install -r requirements.txt
python main.py --debug

The server will start on http://localhost:8085
# HLS stream with custom user agent
curl -X POST "http://localhost:8085/streams" \
-H "Content-Type: application/json" \
-d '{"url": "https://your-stream.m3u8", "user_agent": "MyApp/1.0"}'
# Direct IPTV stream with failover
curl -X POST "http://localhost:8085/streams" \
-H "Content-Type: application/json" \
-d '{
  "url": "http://server.com/stream.ts",
  "failover_urls": ["http://backup.com/stream.ts"],
  "user_agent": "VLC/3.0.18"
}'
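For programmatic use, the same stream-creation request can be built with only the Python standard library. This is a sketch: the base URL and payload fields mirror the curl example above, and the response shape is not specified here.

```python
import json
from urllib import request

# Build the same POST /streams request as the curl example above.
# BASE_URL assumes a local proxy; adjust for your deployment.
BASE_URL = "http://localhost:8085"

payload = {
    "url": "http://server.com/stream.ts",
    "failover_urls": ["http://backup.com/stream.ts"],
    "user_agent": "VLC/3.0.18",
}

req = request.Request(
    f"{BASE_URL}/streams",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.full_url, req.get_method())
# with request.urlopen(req) as resp:   # uncomment with a running proxy
#     print(json.load(resp))
```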
# Using the CLI client
python m3u_client.py create "https://your-stream.m3u8" --user-agent "MyApp/1.0"
python m3u_client.py create "http://server.com/movie.mkv" --failover "http://backup.com/movie.mkv"

POST /streams
Content-Type: application/json
{
  "url": "stream_url",
  "failover_urls": ["backup_url1", "backup_url2"],
  "user_agent": "Custom User Agent String"
}

GET /streams
GET /streams/{stream_id}
DELETE /streams/{stream_id}
POST /streams/{stream_id}/failover
GET /stats
GET /health
GET /clients
GET /clients/{client_id}

The included CLI client (m3u_client.py) provides easy access to all proxy features:
# Create a stream with failover
python m3u_client.py create "https://primary.m3u8" --failover "https://backup1.m3u8" "https://backup2.m3u8"
# List all active streams
python m3u_client.py list
# View comprehensive statistics
python m3u_client.py stats
# Monitor in real-time (updates every 5 seconds)
python m3u_client.py monitor
# Check health status
python m3u_client.py health
# Get detailed stream information
python m3u_client.py info <stream_id>
# Trigger manual failover
python m3u_client.py failover <stream_id>
# Delete a stream
python m3u_client.py delete <stream_id>

# Server configuration
M3U_PROXY_HOST=0.0.0.0
M3U_PROXY_PORT=8085
# Base path for API routes (useful for reverse proxy integration)
# Default: /m3u-proxy (optimized for m3u-editor integration)
# Set to empty string for root path
ROOT_PATH=/m3u-proxy
# API Authentication (optional)
# Set API_TOKEN to require authentication for management endpoints
# Leave unset or empty to disable authentication
API_TOKEN=your_secret_token_here
# Client timeout (seconds)
CLIENT_TIMEOUT=300
# Cleanup interval (seconds)
CLEANUP_INTERVAL=60
# Stream Retry Configuration (improves reliability for unstable connections)
# Number of retry attempts before failover or giving up
STREAM_RETRY_ATTEMPTS=3
# Delay between retries (seconds)
STREAM_RETRY_DELAY=1.0
# Total timeout across all retries (seconds, 0 to disable)
STREAM_TOTAL_TIMEOUT=60.0
# Use exponential backoff for retry delays (false/true)
STREAM_RETRY_EXPONENTIAL_BACKOFF=false
# Timeout for receiving data chunks (seconds)
LIVE_CHUNK_TIMEOUT_SECONDS=15.0
# Sticky Session Handler (prevents playback loops with load-balanced providers)
# Locks to specific backend after redirect to maintain playlist consistency
USE_STICKY_SESSION=false

When API_TOKEN is set in the environment, all management endpoints require authentication via the X-API-Token header. This includes:
- / - Root endpoint
- /streams - Create, list, get, delete streams
- /stats/* - All statistics endpoints
- /clients - Client management
- /health - Health check endpoint
- /webhooks - Webhook management
- /streams/{stream_id}/failover - Failover control
- /hls/{stream_id}/clients/{client_id} - Client disconnect
Stream endpoints (the actual streaming URLs) do NOT require authentication since they are accessed by media players that identify streams via stream_id.
Example with authentication:
# Set your API token
export API_TOKEN="my_secret_token"
# Method 1: Using header (recommended for API calls)
curl -X POST "http://localhost:8085/streams" \
-H "Content-Type: application/json" \
-H "X-API-Token: my_secret_token" \
-d '{"url": "https://your-stream.m3u8"}'
# Method 2: Using query parameter (useful for browser access)
curl -X POST "http://localhost:8085/streams?api_token=my_secret_token" \
-H "Content-Type: application/json" \
-d '{"url": "https://your-stream.m3u8"}'
# Browser access example
# Visit: http://localhost:8085/stats?api_token=my_secret_token
# Without token - will get 401 error
curl -X POST "http://localhost:8085/streams" \
-H "Content-Type: application/json" \
-d '{"url": "https://your-stream.m3u8"}'

To disable authentication, simply leave API_TOKEN unset or set it to an empty string.
m3u-proxy includes comprehensive hardware acceleration support for video transcoding operations using FFmpeg with GPU acceleration.
- NVIDIA GPUs: CUDA, NVENC, NVDEC (10-20x faster than CPU)
- Intel GPUs: VAAPI, QuickSync (QSV) (3-8x faster than CPU)
- AMD GPUs: VAAPI acceleration (3-5x faster than CPU)
- CPU Fallback: Software encoding when no GPU is available
The container automatically detects available hardware on startup:
Running hardware acceleration check...
Device /dev/dri/renderD128 is accessible.
Intel GPU: Intel GPU (Device ID: 0x041e)
FFmpeg VAAPI acceleration: AVAILABLE
Tip: for older Intel GPUs, try: LIBVA_DRIVER_NAME=i965
services:
  m3u-proxy:
    image: sparkison/m3u-proxy:latest
    devices:
      - /dev/dri:/dev/dri
    environment:
      - LIBVA_DRIVER_NAME=i965  # For older Intel GPUs
      # - LIBVA_DRIVER_NAME=iHD  # For newer Intel GPUs

services:
  m3u-proxy:
    image: sparkison/m3u-proxy:latest
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

# Intel/AMD GPU
docker run -d --name m3u-proxy \
--device /dev/dri:/dev/dri \
-e LIBVA_DRIVER_NAME=i965 \
-p 8085:8085 \
sparkison/m3u-proxy:latest
# NVIDIA GPU
docker run -d --name m3u-proxy \
--gpus all \
-p 8085:8085 \
sparkison/m3u-proxy:latest

Hardware acceleration is available through Python APIs:
from hwaccel import get_ffmpeg_hwaccel_args, is_hwaccel_available

# Check if hardware acceleration is available
if is_hwaccel_available():
    # Get optimized FFmpeg arguments
    hwaccel_args = get_ffmpeg_hwaccel_args("h264")

    # Example: hardware-accelerated transcoding
    cmd = ["ffmpeg"] + hwaccel_args + [
        "-i", "input_stream.m3u8",
        "-c:v", "h264_vaapi",  # Hardware encoder
        "-preset", "fast",
        "-b:v", "2M",
        "output_stream.mp4",
    ]

| Hardware | Encoding Speed | Concurrent Streams | CPU Usage Reduction |
|---|---|---|---|
| NVIDIA GPU | 10-20x faster | 4-8 streams | 95%+ |
| Intel GPU | 3-8x faster | 2-4 streams | 90%+ |
| AMD GPU | 3-5x faster | 2-3 streams | 85%+ |
| CPU Only | Baseline | 1 stream | N/A |
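As a sketch of executing such a transcode command from Python, the snippet below assembles and runs an illustrative command line. The VAAPI flags and file names here are assumptions for the example, not output of the hwaccel helpers, and FFmpeg is assumed to be on PATH.

```python
import shlex
import subprocess

# Illustrative hardware-accelerated transcode command; the hwaccel flags,
# device path, and file names are assumptions, not values produced by
# get_ffmpeg_hwaccel_args().
cmd = [
    "ffmpeg", "-hide_banner", "-y",
    "-hwaccel", "vaapi", "-hwaccel_device", "/dev/dri/renderD128",
    "-i", "input_stream.m3u8",
    "-c:v", "h264_vaapi", "-b:v", "2M",
    "output_stream.mp4",
]

print(shlex.join(cmd))  # inspect the final command line first
# subprocess.run(cmd, check=True)  # uncomment to run; raises on FFmpeg failure
```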
- H.264/AVC: High compatibility, universal support
- H.265/HEVC: Better compression, 4K/8K content
- VP8/VP9: WebM containers, streaming optimized
- AV1: Next-gen codec, best compression
- MJPEG: Low latency, surveillance applications
For detailed hardware acceleration setup and troubleshooting, see Hardware Acceleration Guide.
# Main server with all features
python main.py
# With custom options
python main.py --port 8002 --debug --reload

- Stream Won't Load
  - Check original URL accessibility
  - Verify CORS headers if accessing from a browser
  - Check server logs for detailed errors

- High Memory Usage
  - Reduce CLIENT_TIMEOUT for faster cleanup
  - Monitor client connections and clean up inactive ones
  - Consider horizontal scaling for high loads

- Failover Not Working
  - Verify failover URLs are accessible
  - Check failover trigger conditions in logs
  - Test manual failover via the API
# Enable detailed logging
export LOG_LEVEL=DEBUG
python main.py --debug

<video controls>
  <source src="http://localhost:8085/hls/{stream_id}/playlist.m3u8" type="application/x-mpegURL">
</video>

ffplay "http://localhost:8085/hls/{stream_id}/playlist.m3u8"

vlc "http://localhost:8085/hls/{stream_id}/playlist.m3u8"

The proxy includes a comprehensive event system for monitoring and integration:
# Add webhook to receive events
curl -X POST "http://localhost:8085/webhooks" \
-H "Content-Type: application/json" \
-d '{
  "url": "https://your-server.com/webhook",
  "events": ["stream_started", "client_connected", "failover_triggered"],
  "timeout": 10,
  "retry_attempts": 3
}'

- stream_started - New stream created
- stream_stopped - Stream ended
- client_connected - Client joined stream
- client_disconnected - Client left stream
- failover_triggered - Switched to backup URL
{
  "event_id": "uuid",
  "event_type": "stream_started",
  "stream_id": "abc123",
  "timestamp": "2025-09-25T22:38:34.392830",
  "data": {
    "primary_url": "http://example.com/stream.m3u8",
    "user_agent": "MyApp/1.0"
  }
}

# Try the event system demo
python demo_events.py

Full Documentation: See EVENT_SYSTEM.md for the complete webhook integration guide.
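As a sketch of consuming these events, a minimal receiver using only the Python standard library might look like the following. The port and URL are arbitrary choices, and the field names come from the example payload above.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal stdlib webhook receiver for the event payloads shown above.
# Register e.g. http://your-host:9000/ via POST /webhooks to receive events.
class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        print(event.get("event_type"), event.get("stream_id"))
        self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request access logging

# HTTPServer(("0.0.0.0", 9000), WebhookHandler).serve_forever()  # start receiving
```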
- Documentation Index - Complete, always-current list of all documentation
├── docker/                      # Container and deployment assets
├── docs/                        # Full documentation set
│   ├── README.md                # Canonical docs index
│   └── *.md                     # Architecture, failover, retry, auth, etc.
├── logs/                        # Runtime logs (local/dev)
├── src/
│   ├── __init__.py              # Package marker
│   ├── api.py                   # FastAPI server application
│   ├── broadcast_manager.py     # Client broadcast coordination
│   ├── config.py                # Configuration management
│   ├── events.py                # Event system with webhooks
│   ├── hwaccel.py               # Hardware acceleration detection/helpers
│   ├── models.py                # Data models and schemas
│   ├── pooled_stream_manager.py # Shared/pooling stream orchestration
│   ├── redis_config.py          # Redis settings
│   ├── redis_manager.py         # Redis coordination layer
│   ├── stream_manager.py        # Per-client direct proxy core
│   └── transcoding.py           # FFmpeg transcoding pipeline
├── static/                      # Static assets (icons, images)
├── tests/                       # Test suite
│   ├── integration/             # Integration tests
│   └── test_*.py                # Unit tests
├── tools/                       # Utility scripts and tools
│   ├── performance_test.py      # Performance testing
│   ├── m3u_client.py            # CLI client
│   ├── demo_events.py           # Event system demo
│   └── run_tests.py             # Enhanced test runner
├── docker-compose.yml           # Default compose stack
├── Dockerfile                   # Container build definition
├── main.py                      # Server entry point (uvloop support)
├── requirements.txt             # Python dependencies
├── pytest.ini                   # Test configuration
└── README.md                    # This file
Whether it's writing docs, squashing bugs, or building new features, your contribution matters!
We welcome PRs, issues, ideas, and suggestions!
Here's how you can join the party:
- Follow our coding style and best practices.
- Be respectful, helpful, and open-minded.
- Respect the CC BY-NC-SA license.
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Submit a pull request
m3u editor is licensed under CC BY-NC-SA 4.0:
- BY: Give credit where credit's due.
- NC: No commercial use.
- SA: Share alike if you remix.
For full license details, see LICENSE.