Linucast is a Linux virtual camera application that enhances your webcam feed with AI-powered features like face tracking, auto-framing, and background effects. It's designed to be a free, open-source alternative to commercial solutions like NVIDIA Broadcast, optimized specifically for Linux systems.
- Face Tracking & Auto-framing: Automatically centers and zooms on your face (like Apple Center Stage)
- Dynamic Zoom Control: Adjust zoom level in real-time while tracking is active
- Background Effects:
- Blur: Apply Gaussian blur to the background
- Remove: Replace background with black
- Replace: Use a custom background image
- Face Landmarks: Option to display facial feature points
- FPS Control: Toggle between 30 and 60 fps modes for different performance levels
- Virtual Camera Integration: Output to a virtual camera for use in video conferencing apps
- Linux system (tested on Ubuntu 20.04+)
- Python 3.10
- Webcam
- v4l2loopback (for virtual camera output)
- MediaPipe
- OpenCV
- NumPy
- pyvirtualcam (optional, for virtual camera output)
For a complete automated installation, run this single command:

```bash
curl -fsSL https://raw.githubusercontent.com/Adelkazzaz/Linucast/main/install/install_wrapper.sh | bash
```

This will:
- Install all required dependencies
- Set up the virtual camera
- Build the C++ components
- Create desktop shortcuts and launcher scripts

Alternatively, install the Python dependencies manually:

```bash
# Install required Python packages
pip install mediapipe opencv-python numpy

# Optional: for virtual camera support
pip install pyvirtualcam
```

To use Linucast as a virtual camera for video conferencing:
```bash
# Install v4l2loopback
sudo apt install v4l2loopback-dkms

# Load the kernel module
sudo modprobe v4l2loopback devices=1 video_nr=10 card_label="LinucastCam" exclusive_caps=1
```

To make the virtual camera persistent across reboots:

```bash
echo "v4l2loopback" | sudo tee -a /etc/modules
echo "options v4l2loopback devices=1 video_nr=10 card_label=LinucastCam exclusive_caps=1" | sudo tee -a /etc/modprobe.d/v4l2loopback.conf
```
### Post-Installation Fixes
If you encounter issues after installation:
#### Fix Duplicate Desktop Icons
If you see multiple Linucast icons in your applications menu:
```bash
cd Linucast
./install/fix_desktop_icon.sh
```
#### Fix Unexpected Crashes

The latest version includes stability improvements. If the app still stops unexpectedly:

- Update to the latest version:

  ```bash
  cd Linucast
  git pull origin main
  ./install/build_cpp.sh
  ```

- Check system resources and camera availability
- See MAINTENANCE.md for detailed troubleshooting
To completely remove Linucast:

```bash
cd Linucast
./install/uninstall.sh
```

Run Linucast with the options that suit your setup:

```bash
# Basic usage
python linucast_simple.py

# Enable face tracking
python linucast_simple.py --face-tracking

# Face tracking with landmarks, background blur, and 60 fps
python linucast_simple.py --face-tracking --landmarks --mode blur --fps 60

# Output to the virtual camera
python linucast_simple.py --face-tracking --virtual-cam

# All features combined
python linucast_simple.py --face-tracking --landmarks --mode blur --virtual-cam --fps 60 --zoom-ratio 1.5
```

Keyboard controls:

| Key | Action |
|---|---|
| C | Switch camera |
| B | Blur background mode |
| R | Remove background mode |
| I | Image background mode |
| L | Toggle face landmarks display |
| T | Toggle face tracking auto-frame |
| +/- | Increase/decrease zoom (tracking mode) |
| F | Toggle between 30fps and 60fps modes |
| Q or ESC | Quit |
Command-line options:

| Option | Description | Default |
|---|---|---|
| --camera CAMERA | Camera index | 1 |
| --blur BLUR | Blur strength | 55 |
| --threshold THRESHOLD | Segmentation threshold (0.0-1.0) | 0.6 |
| --landmarks | Show face landmarks | False |
| --resolution RES | Output resolution (e.g., 640x480) | 640x480 |
| --virtual-cam | Output to virtual camera | False |
| --virtual-device DEV | Virtual camera device | /dev/video10 |
| --bg-image IMAGE | Background image for replacement mode | None |
| --mode MODE | Background mode (blur, remove, replace) | blur |
| --face-tracking | Enable face tracking and auto-framing | False |
| --zoom-ratio RATIO | Zoom ratio for face tracking | 1.8 |
| --smoothing FACTOR | Smoothing factor for tracking (0.05-0.5) | 0.2 |
| --fps FPS | Target FPS (30 or 60) | 30 |
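The option table above could be declared with `argparse` roughly as follows. This is a sketch that mirrors the documented flags and defaults, not the app's actual parser:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Each flag and default mirrors the option table above.
    p = argparse.ArgumentParser(description="Linucast simple mode (sketch)")
    p.add_argument("--camera", type=int, default=1, help="Camera index")
    p.add_argument("--blur", type=int, default=55, help="Blur strength")
    p.add_argument("--threshold", type=float, default=0.6,
                   help="Segmentation threshold (0.0-1.0)")
    p.add_argument("--landmarks", action="store_true", help="Show face landmarks")
    p.add_argument("--resolution", default="640x480",
                   help="Output resolution, e.g. 640x480")
    p.add_argument("--virtual-cam", action="store_true",
                   help="Output to virtual camera")
    p.add_argument("--virtual-device", default="/dev/video10",
                   help="Virtual camera device")
    p.add_argument("--bg-image", default=None,
                   help="Background image for replacement mode")
    p.add_argument("--mode", choices=["blur", "remove", "replace"], default="blur")
    p.add_argument("--face-tracking", action="store_true",
                   help="Enable face tracking and auto-framing")
    p.add_argument("--zoom-ratio", type=float, default=1.8)
    p.add_argument("--smoothing", type=float, default=0.2)
    p.add_argument("--fps", type=int, choices=[30, 60], default=30)
    return p

args = build_parser().parse_args(["--face-tracking", "--fps", "60"])
```

Unspecified options fall back to the defaults in the table, so `args.zoom_ratio` here is still 1.8.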
- If tracking is jittery, try decreasing the smoothing factor
- If tracking is too slow, try increasing the smoothing factor
- If the face appears cut off, try decreasing the zoom ratio
- If face tracking is lost during fast movements, the app now has improved recovery mechanisms
- Try a lower resolution: `--resolution 640x360`
- Disable face landmarks if not needed
- The app includes adaptive frame skipping to maintain responsiveness when the CPU is heavily loaded
- For best FPS, ensure your system has sufficient resources (CPU, GPU)
- The FPS counter shows the actual processing framerate, which might be lower than the target due to:
- Complex processing operations (face detection, background segmentation)
- System load from other applications
- Camera hardware limitations
- The app adaptively manages processing to maintain smooth video even when the full target FPS can't be reached
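The adaptive frame skipping described above can be sketched as follows. This is an illustration of the idea, not the app's actual logic:

```python
import math

def frames_to_skip(processing_time_s: float, target_fps: int) -> int:
    """Return how many camera frames to drop so processing stays real-time.

    If a frame takes longer to process than its time budget (1/target_fps),
    the backlog is cleared by skipping the frames that arrived meanwhile.
    """
    budget = 1.0 / target_fps
    if processing_time_s <= budget:
        return 0  # keeping up: process every frame
    return math.ceil(processing_time_s / budget) - 1
```

For example, at a 30 fps target (about a 33 ms budget), a frame that takes 70 ms to process means two queued frames are dropped, keeping the video current at the cost of a lower effective framerate.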
The face tracking feature implements a "virtual pan-tilt-zoom" effect:
- Detect Face: Uses MediaPipe Face Mesh to detect facial landmarks
- Calculate Center: Finds the midpoint between the eyes
- Smart Tracking: Adaptively adjusts tracking speed based on movement and FPS
- Face Recovery: Maintains last known position when face detection is temporarily lost
- Smooth Movement: Applies an Exponential Moving Average (EMA) filter with adaptive smoothing
- Crop and Zoom: Creates a zoomed view centered on the face (with adjustable zoom level)
- Resize to Output: Scales the cropped region back to full size
- Dynamic FPS Control: Adjusts the processing rate between standard (30fps) and high (60fps) modes
- Adaptive Performance: Balances image quality and responsiveness based on system capabilities
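The smoothing and crop-and-zoom steps above can be sketched in a few lines. This is a simplified illustration of the approach, not the app's exact code:

```python
def smooth_center(prev_xy, measured_xy, alpha=0.2):
    """Exponential moving average (EMA): smaller alpha means smoother but
    slower tracking; larger alpha reacts faster but can look jittery."""
    px, py = prev_xy
    mx, my = measured_xy
    return (px + alpha * (mx - px), py + alpha * (my - py))

def crop_box(center_xy, frame_w, frame_h, zoom=1.8):
    """Crop rectangle of size frame/zoom centered on the face, clamped so it
    stays inside the frame; resizing it back to full size yields the zoom."""
    w, h = frame_w / zoom, frame_h / zoom
    x = min(max(center_xy[0] - w / 2, 0), frame_w - w)
    y = min(max(center_xy[1] - h / 2, 0), frame_h - h)
    return int(x), int(y), int(w), int(h)
```

For example, a face centered at (320, 240) in a 640x480 frame with zoom 2.0 gives the crop (160, 120, 320, 240); a face near a corner is clamped so the crop never leaves the frame, which is why the view stops panning at the edges.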
This project is fully functional and ready for use. Future updates may include:
- GUI interface for easier control
- More background effects
- Multi-face tracking
- Performance optimizations
This project is licensed under the Apache License 2.0.
To run with the GUI:

```bash
cd python_core
poetry run linucast
```

To run headless:

```bash
cd python_core
poetry run linucast --nogui --input /dev/video0 --output /dev/video10
```

```
linucast [OPTIONS]

Options:
  --config FILE     Configuration file path
  --nogui           Run without GUI (headless mode)
  --input DEVICE    Input camera device (default: /dev/video0)
  --output DEVICE   Output virtual device (default: /dev/video10)
  --gpu N           GPU device index (default: 0)
  --debug           Enable debug logging
  --help            Show help message
```

Linucast uses a YAML configuration file. See python_core/config.yaml for all options:
```yaml
camera:
  device: /dev/video1
  resolution: [1280, 720]
  fps: 30

background:
  mode: blur  # Options: blur, replace, none
  replacement_image: ""
  blur_strength: 51

face_tracking:
  smoothing: true
  lock_identity: true
  min_similarity: 0.65
  max_faces: 5

output:
  virtual_device: /dev/video10
  resolution: [1280, 720]
  fps: 30

ai:
  device: auto
  face_detection:
    model: mediapipe
    confidence_threshold: 0.5
  background_segmentation:
    model: mediapipe
    model_path: ""
  face_recognition:
    model: mediapipe
    model_path: ""

performance:
  num_threads: 4
  batch_size: 1
  optimize_memory: true

logging:
  level: INFO
  file: "logs/linucast.log"
  console: true
```

To use Linucast with a video conferencing app:

- Start Linucast:

  ```bash
  cd python_core
  poetry run linucast
  ```

- Configure your settings in the GUI:
  - Select background mode (blur/replace/none)
  - Adjust face detection sensitivity
  - Choose background image (if using replace mode)
- In your video conferencing app (Zoom, Teams, Discord, etc.):
  - Select "Linucast" as your camera device
  - Enjoy AI-enhanced video!
C++ Backend:

```bash
cd cpp_core
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Debug
make -j$(nproc)
```

Python Package:

```bash
cd python_core
poetry install --with dev
poetry run pytest tests/
```

To add a custom AI model:

- Export your model to ONNX format
- Place it in the models/ directory
- Update the configuration in config.yaml
- Modify the appropriate AI component in python_core/linucast/ai/
We welcome contributions! Please see our Contributing Guide for details.
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
- OS: Ubuntu 20.04+ (other Linux distributions may work)
- RAM: 4GB minimum, 8GB recommended
- CPU: Multi-core processor (4+ cores recommended)
- GPU: Optional but recommended for best performance
- NVIDIA GPU with CUDA 11.0+
- AMD GPU with ROCm support
- Intel GPU with OpenVINO
- GPU systems: ≥30 FPS at 720p
- CPU-only systems: ≥15 FPS at 720p (with fallback models)
- Latency: ≤50ms end-to-end processing time
For detailed troubleshooting information, please see our Troubleshooting Guide.
- Virtual camera not detected - Check v4l2loopback setup
- Performance issues - Adjust resolution, processing options
- Face tracking problems - Tune sensitivity and smoothing parameters
- Python module errors - Rebuild components or check dependencies
Enable detailed logging:

```bash
poetry run linucast --debug
```

Check the log files in logs/linucast.log.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
- MediaPipe - Face detection and segmentation
- MODNet - Background matting
- InsightFace - Face recognition
- OpenCV - Computer vision library
- v4l2loopback - Virtual camera driver
- Documentation Home
- Installation Guide
- Usage Guide
- Troubleshooting Guide
- Contributing Guide
- Issue Tracker
- Discussions
- Releases
Linucast - Bringing professional AI camera effects to Linux! 🎥✨
