Real-time drowsiness detection using computer vision and machine learning
Features • Quick Start • Architecture • Documentation
SleepSafe is a cross-platform drowsiness detection ecosystem that helps prevent accidents caused by fatigue. Using computer vision and on-device AI, the system monitors eye closure patterns in real time and triggers alerts when drowsiness is detected.
- 🌐 Offline-First Web App: Progressive Web App with TensorFlow.js for browser-based detection
- 📱 Native Mobile Apps: iOS (Swift) and Android (Java) with shared Rust core
- 🦀 High-Performance Rust Core: Memory-safe, optimized logic shared across platforms
- 🎨 Beautiful UI: Glassmorphism design with dark/light modes
- 🔒 Privacy-Focused: All processing happens on-device, no data leaves your machine
- ⚡ Real-Time Performance: Optimized for low-latency detection (< 100ms)
```
┌───────────────────────────────────────────────────────────────┐
│                      SleepSafe Ecosystem                      │
├───────────────────────────────────────────────────────────────┤
│                                                               │
│  ┌────────────┐  ┌────────────┐  ┌────────────────────┐       │
│  │  Web PWA   │  │ Android App│  │       iOS App      │       │
│  │ (Next.js)  │  │   (Java)   │  │      (Swift)       │       │
│  │ TensorFlow │  │    JNI     │  │        FFI         │       │
│  └─────┬──────┘  └─────┬──────┘  └─────────┬──────────┘       │
│        │               │                   │                  │
│        │               └─────────┬─────────┘                  │
│        │                         │                            │
│        │                ┌────────▼────────┐                   │
│        │                │    Rust Core    │                   │
│        │                │   (libsleep)    │                   │
│        │                └─────────────────┘                   │
│        │                                                      │
│  ┌─────▼────────────────────────────────────┐                 │
│  │           MediaPipe Face Mesh            │                 │
│  │    (468 Facial Landmarks Detection)      │                 │
│  └──────────────────────────────────────────┘                 │
│                                                               │
└───────────────────────────────────────────────────────────────┘
```
```
Sleep-Detection/
│
├── 🌐 web/                       # Next.js Progressive Web App
│   ├── app/
│   │   ├── page.tsx              # Main detection interface
│   │   ├── layout.tsx            # App shell
│   │   └── globals.css           # Global styles
│   ├── public/
│   │   ├── manifest.json         # PWA manifest
│   │   └── icons/                # App icons
│   ├── package.json              # Dependencies
│   └── Dockerfile                # Container config
│
├── 🐍 api/                       # FastAPI Backend
│   ├── main.py                   # API entry point
│   ├── models/                   # Pydantic models
│   │   ├── __init__.py
│   │   └── models.py
│   ├── services/                 # Business logic
│   │   ├── __init__.py
│   │   └── services.py
│   ├── mlops/                    # ML training
│   │   ├── __init__.py
│   │   └── train_model.py
│   ├── db/                       # Database (Django-style)
│   │   ├── __init__.py
│   │   ├── database.py
│   │   ├── models.py
│   │   ├── sleepsafe.db          # SQLite database
│   │   └── postgres/             # PostgreSQL data (Docker)
│   ├── pyproject.toml
│   └── Dockerfile
│
├── 🦀 core/                      # Rust Shared Library
│   ├── src/
│   │   └── lib.rs                # FFI/JNI exports
│   └── Cargo.toml                # Rust dependencies
│
├── 📱 app/                       # Native Mobile Apps
│   ├── android/                  # Android Application
│   │   └── app/src/main/
│   │       ├── java/.../MainActivity.java
│   │       └── AndroidManifest.xml
│   │
│   └── ios/                      # iOS Application
│       └── SleepDetection/
│           ├── ViewController.swift
│           ├── AppDelegate.swift
│           └── SleepCoreBridge.h # C bridge for Rust
│
├── 📦 lib/                       # Future Libraries
│   ├── npm/                      # (Planned) NPM package
│   └── pypi/                     # (Planned) PyPI package
│
├── 📚 docs/                      # Documentation
│   ├── ARCHITECTURE.md           # System design
│   ├── DEPLOYMENT.md             # Deployment guide
│   ├── DATABASE-STRUCTURE.md     # Database setup
│   ├── BACKEND-COMPLETE.md       # Backend features
│   └── DOCKER.md                 # Docker guide
│
├── docker-compose.yml            # Multi-container orchestration
├── .env.example                  # Environment template
└── README.md                     # This file
```
| Component | Requirement |
|---|---|
| Web | Node.js 18+, npm 8+ |
| Mobile | Android Studio / Xcode |
| Rust | Rust 1.70+ (for core compilation) |
The backend provides telemetry logging and MLOps features:
```bash
cd api

# Install dependencies
uv sync

# Run development server
uv run uvicorn main:app --reload
```

🧠 API Docs: http://localhost:8000/docs
Features:
- Detection event logging
- Model metrics tracking
- MLflow experiment tracking
- Statistics and analytics
- Database: `api/db/sleepsafe.db` (Django-style)

Endpoints:
- `POST /telemetry` - Log detection event
- `GET /statistics` - Get stats
- `GET /dashboard` - Dashboard data
- `POST /metrics/model` - Log model metrics
Note: the backend is fully functional with SQLite; PostgreSQL is optional for production.
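As a sketch of how a client might log a detection event, the request below targets the `POST /telemetry` endpoint listed above using only Python's standard library. The payload field names (`ear`, `drowsy`, `duration_ms`) are illustrative assumptions, not the backend's actual Pydantic schema:

```python
import json
from urllib import request

# Hypothetical payload -- field names are illustrative only,
# not the backend's actual schema.
event = {"ear": 0.18, "drowsy": True, "duration_ms": 3200}

req = request.Request(
    "http://localhost:8000/telemetry",
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With the dev server running (`uv run uvicorn main:app --reload`):
# response = request.urlopen(req)
```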
The web app is fully functional and works offline:
```bash
# Clone repository
git clone https://github.com/nishanth-kj/Sleep-Detection.git
cd Sleep-Detection/web

# Install dependencies
npm install

# Start development server
npm run dev
```

📱 Open http://localhost:3000 in your browser
- Click the camera icon to start detection
- Allow camera access when prompted
- Position your face in the webcam view
- Close your eyes for 3+ seconds to trigger the alarm
- Toggle dark/light mode with the moon/sun icon
```bash
cd core

# Add the Android target and install cargo-ndk
rustup target add aarch64-linux-android
cargo install cargo-ndk

# Build the shared library
cargo ndk -t arm64-v8a --platform 24 build --release

# Copy the library into the Android project
mkdir -p app/android/app/src/main/jniLibs/arm64-v8a
cp target/aarch64-linux-android/release/libsleep_core.so \
   app/android/app/src/main/jniLibs/arm64-v8a/

# Open app/android/ folder
android-studio app/android
```

Build and run on device or emulator.
```bash
cd core

# Add the iOS targets and install cargo-lipo
rustup target add aarch64-apple-ios x86_64-apple-ios
cargo install cargo-lipo

# Build a universal static library
cargo lipo --release
```

- Open `app/ios/SleepDetection.xcodeproj` in Xcode
- Add `core/target/universal/release/libsleep_core.a` to Link Binary With Libraries
- Set the Objective-C Bridging Header to `SleepDetection/SleepCoreBridge.h`
- Build and run on device/simulator
The system uses the Eye Aspect Ratio (EAR) metric to detect eye closure:
```
        ||p2 - p6|| + ||p3 - p5||
EAR = ─────────────────────────────
             2 × ||p1 - p4||
```

Where p1...p6 are the six eye landmark coordinates.
Detection Logic:
- EAR > 0.25 → Eyes OPEN ✅
- EAR < 0.25 for 10 consecutive frames (≈ 3 seconds) → DROWSINESS DETECTED 🚨
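The formula and threshold logic above can be sketched in a few lines of Python. This is an illustrative re-implementation, not the project's TensorFlow.js or Rust code; landmark ordering follows the p1...p6 convention in the formula:

```python
import math

EAR_THRESHOLD = 0.25   # below this, the eyes are treated as closed
CONSEC_FRAMES = 10     # closed frames required before alerting

def ear(p):
    """Eye Aspect Ratio from six (x, y) landmarks p[0]..p[5].

    p[1]/p[5] and p[2]/p[4] are the vertical pairs, p[0]/p[3]
    the horizontal pair -- matching p1..p6 in the formula above.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))

def update(counter, ear_value):
    """Advance the per-frame closure counter; return (counter, drowsy?)."""
    counter = counter + 1 if ear_value < EAR_THRESHOLD else 0
    return counter, counter >= CONSEC_FRAMES

# A wide-open eye: vertical gaps are large relative to the width.
open_eye = [(0, 1), (1, 2), (2, 2), (3, 1), (2, 0), (1, 0)]
print(round(ear(open_eye), 2))  # → 0.67
```

One blink resets the counter, so only sustained closure across `CONSEC_FRAMES` frames triggers the alert.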
- Framework: Next.js 16.1 (React 19.2)
- AI/ML: TensorFlow.js 4.22, MediaPipe Face Mesh 1.0
- Styling: TailwindCSS 3.4, Framer Motion 12.23
- PWA: next-pwa 5.6 for offline support
- Utilities: react-webcam, lucide-react icons
- Language: Rust 2021 Edition
- Build: Cargo with aggressive optimizations
- Features:
  - `opt-level = 3` - Maximum optimization
  - `lto = true` - Link-time optimization
  - `codegen-units = 1` - Single compilation unit
  - `panic = "abort"` - Smaller binary size
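The optimization flags listed above correspond to a release profile along these lines in `core/Cargo.toml` (a sketch of the typical layout, not a verbatim copy of the project's manifest):

```toml
[profile.release]
opt-level = 3       # maximum optimization
lto = true          # link-time optimization
codegen-units = 1   # single codegen unit for better optimization
panic = "abort"     # abort on panic for a smaller binary
```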
- Android: JNI (Java Native Interface)
- iOS: FFI (Foreign Function Interface) via C bridge
✅ Real-Time Face Detection
- 468 facial landmarks tracked at 30 FPS
- MediaPipe Face Mesh model (optimized for web)
✅ Eye Closure Monitoring
- Continuous EAR calculation for both eyes
- Configurable threshold and frame count
✅ Smart Alerting
- Audio alarm using Web Audio API
- Visual on-screen alerts
- Mute/unmute toggle
✅ Offline Capability
- PWA with service worker caching
- Install to home screen (mobile/desktop)
- Works without internet after first load
✅ Dark/Light Modes
- System preference detection
- Manual toggle
- Smooth transitions
🎨 Modern Design
- Glassmorphism effects
- Smooth animations with Framer Motion
- Responsive layout (mobile-first)
📊 Live Statistics
- Current EAR value display
- Online/offline indicator
- FPS counter
- Detection status
Run the complete stack with Docker:
```bash
docker compose up -d --build
```

Services:
- Frontend: http://localhost:80 (Next.js PWA)
- Backend API: http://localhost:8000 (FastAPI)
- MLflow UI: http://localhost:5001 (Experiment tracking)
- PostgreSQL: Port 5432 (Database)
Data Persistence:
- PostgreSQL: `api/db/postgres/`
- MLruns: `api/mlruns/`
Commands:
```bash
# Start all services
docker compose up -d

# View logs
docker compose logs -f

# Stop all
docker compose down

# Run ML training
docker compose --profile training up ml_training
```

- 📖 ARCHITECTURE.md - System design, diagrams, data flow
- 📖 DEPLOYMENT.md - Detailed deployment instructions
- 💻 Code Comments - Inline documentation in all source files
- NO data is sent to external servers
- Facial landmarks processed locally
- Web app works 100% offline
- NO persistent storage of video/images
- NO tracking or analytics
- Optional browser cache for PWA only
- Camera: Required for face detection
- Audio: For alarm playback (Web Audio API)
```bash
npm run dev     # Start dev server
npm run build   # Build for production
npm run start   # Run production server
npm run lint    # Run ESLint
```

```bash
cargo build --release   # Build optimized library
cargo test              # Run unit tests
cargo clippy            # Lint checks
cargo fmt               # Format code
```

None required! The app works out-of-the-box.
| Component | Status |
|---|---|
| Web PWA | ✅ Fully Functional |
| Rust Core | ✅ Code Complete |
| Android App | 🏗️ Skeleton Code |
- Web PWA (Next.js + TensorFlow.js)
- Backend API (FastAPI + SQLAlchemy)
- Database (SQLite + PostgreSQL support)
- MLOps (MLflow + training pipeline)
- Docker setup (multi-container)
- Documentation (comprehensive)
- Compile Rust core for Android (`libsleep_core.so`)
- Compile Rust core for iOS (`libsleep_core.a`)
- Integrate Rust with mobile apps
- Publish NPM package (`lib/npm`)
- Publish PyPI package (`lib/pypi`)
- Location: `api/db/` (Django-style)
- SQLite: `api/db/sleepsafe.db`
- PostgreSQL: `api/db/postgres/` (Docker)
- Models: 4 tables (events, metrics, sessions, system)
- Customizable EAR thresholds
- Bluetooth alerting (mobile)
Contributions are welcome! Please follow these guidelines:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
```bash
# Setup all components
npm install   # Web dependencies
cargo build   # Rust core
```

This project is licensed under the MIT License - see the LICENSE file for details.
- MediaPipe team for Face Mesh model
- TensorFlow.js for browser ML capabilities
- Rust community for FFI/JNI tooling
- Next.js team for the amazing framework
- Author: Nishanth KJ
- GitHub: @nishanth-kj
- Repository: Sleep-Detection
- Issues: Report a Bug
Made with ❤️ for safer roads and workplaces

⭐ Star this repo if you find it useful!