A practical computer vision system for classroom analysis: attendance, emotions, attention, and reporting.
- Install Python 3.9+ and ffmpeg
- Create a virtual environment and install minimal dependencies:

  ```bash
  python3 -m venv .venv
  source .venv/bin/activate
  pip install -r requirements.txt
  ```

- Launch the Flask web app (dashboard):

  ```bash
  python src/app.py  # open http://localhost:5000
  ```

- Run the CLI (pipeline) on a video:

  ```bash
  python src/main.py --video test_website_video.mp4 --mode full
  ```

- Run unit tests for core services (if pytest is installed):

  ```bash
  pytest -q test_attention_analysis_unit.py test_emotion_analysis_unit.py test_attendance_manager_unit.py
  ```
The default requirements.txt lists minimal, conflict-free packages (OpenCV, Flask, Dash, Plotly). Optional heavy frameworks (YOLO/Ultralytics, InsightFace, FaceNet, DeepFace) are not required for basic operation. The code will gracefully fall back to classical methods if those packages are not installed.
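The graceful fallback described above is typically a capability probe at import time. The snippet below is an illustrative sketch of that pattern only; the names `HAS_YOLO` and `detector_backend` are hypothetical, not the project's actual API:

```python
# Illustrative optional-dependency fallback (hypothetical names, not FaceClass's real API).
try:
    from ultralytics import YOLO  # optional heavy dependency
    HAS_YOLO = True
except ImportError:
    HAS_YOLO = False

def detector_backend() -> str:
    """Return the name of the best available detection backend."""
    # Prefer the deep-learning detector when installed, else fall back
    # to the classical OpenCV path that ships with the base install.
    return "yolo" if HAS_YOLO else "opencv"
```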
Optional installs to enable extra features:

- YOLOv8 detection:

  ```bash
  pip install ultralytics
  ```

- ArcFace recognition:

  ```bash
  pip install insightface onnxruntime
  ```

- FaceNet embeddings:

  ```bash
  pip install facenet-pytorch torch torchvision
  ```

- DeepFace (emotion/recognition):

  ```bash
  pip install deepface
  ```
- Generated outputs and large media are ignored via `.gitignore` (e.g., `data/outputs/`, `static/processed_videos/`).
- Legacy demo/experimental scripts were removed for maintainability.
- Service modules include docstrings and follow PEP8 style.
FaceClass is an intelligent system that uses computer vision to record and analyze student attendance in the classroom. It simultaneously analyzes emotions and attention, detects behavioral patterns, and automatically records each student's presence and absence. Classroom events are logged and presented in clear visualizations.
- Face Detection, Tracking and Recognition
  - Multiple face detection models (YOLO, RetinaFace, MTCNN, OpenCV)
  - Advanced tracking algorithms (ByteTrack, Deep OC-SORT)
  - Face recognition with ArcFace, FaceNet, VGGFace
  - Student ID matching and database management
- Emotion and Attention Analysis
  - Emotion classification (FER-2013, AffectNet)
  - Attention detection using MediaPipe and OpenFace
  - Gaze direction and head pose analysis
  - Behavioral pattern recognition
- Attendance Tracking
  - Automatic attendance recording
  - Duration-based attendance scoring
  - Absence detection and reporting
  - Session management
- Spatial Analysis
  - Classroom heatmaps
  - Seat assignment analysis
  - Movement pattern detection
  - Spatial distribution statistics
- Reporting Dashboard
  - Interactive visualizations
  - Real-time monitoring
  - Comprehensive reports
  - Data export capabilities
- Multi-Model Face Detection: Support for YOLO, RetinaFace, MTCNN, and OpenCV
- Advanced Face Recognition: ArcFace, FaceNet, VGGFace integration
- Emotion Analysis: 9 emotion categories (angry, disgust, fear, happy, sad, surprise, neutral, confused, tired)
- Attention Detection: Gaze direction, head pose, and attention scoring
- Attendance Tracking: Automatic attendance recording with duration and confidence scoring
- Spatial Analysis: Heatmaps, seat assignments, and spatial distribution
- Comprehensive Reporting: HTML reports with charts, statistics, and recommendations
- Interactive Dashboard: Real-time monitoring and visualization
- Data Export: CSV and JSON export capabilities
- Session Management: Multi-session support with data persistence
- Real-time Video Processing: Upload and process videos with live feedback
- Interactive Visualizations: Charts, heatmaps, and statistics
- Attendance Monitoring: Live attendance tracking and statistics
- Emotion Analysis: Real-time emotion detection and trends
- Attention Tracking: Attention scores and patterns
- Spatial Analysis: Classroom layout and heatmaps
- Report Generation: Comprehensive analysis reports
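CSV/JSON export of this kind can be done entirely with the Python standard library. The sketch below is illustrative only; the `export_attendance` helper and its record layout are assumptions, not the project's documented API:

```python
import csv
import io
import json

def export_attendance(records, fmt="csv"):
    """Serialize attendance records (a list of dicts) to CSV or JSON text.

    Hypothetical helper illustrating the export feature; field names
    are taken from the first record.
    """
    if fmt == "json":
        return json.dumps(records, indent=2)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```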
```text
FaceClass/
├── config.yaml              # Configuration file
├── requirements.txt         # Python dependencies
├── README.md                # Project documentation
├── src/                     # Source code
│   ├── main.py              # Main entry point
│   ├── config.py            # Configuration management
│   ├── detection/           # Face detection and tracking
│   │   └── face_tracker.py
│   ├── recognition/         # Face recognition
│   │   └── face_identifier.py
│   ├── emotion/             # Emotion and attention analysis
│   │   └── emotion_detector.py
│   ├── attendance/          # Attendance tracking
│   │   └── attendance_tracker.py
│   ├── layout_analysis/     # Spatial analysis
│   │   └── layout_mapper.py
│   ├── reporting/           # Report generation
│   │   └── report_generator.py
│   ├── dashboard/           # Web dashboard
│   │   └── dashboard_ui.py
│   └── utils/               # Utility functions
│       └── video_utils.py
├── models/                  # Model files
│   ├── face_detection/
│   ├── face_recognition/
│   ├── emotion_recognition/
│   └── attention_detection/
├── data/                    # Data storage
│   ├── raw_videos/          # Input videos
│   ├── frames/              # Extracted frames
│   ├── labeled_faces/       # Labeled face data
│   ├── heatmaps/            # Generated heatmaps
│   ├── outputs/             # Analysis outputs
│   └── temp/                # Temporary files
├── reports/                 # Generated reports
└── notebooks/               # Jupyter notebooks
```
- Python 3.8+
- OpenCV 4.5+
- PyTorch 1.9+
- CUDA (optional, for GPU acceleration)
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd FaceClass
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Download models (optional):

  ```bash
  # Download pre-trained models
  python scripts/download_models.py
  ```

- Configure the system:

  ```bash
  # Edit config.yaml for your specific needs
  nano config.yaml
  ```
- Launch Dashboard:

  ```bash
  python src/main.py --mode dashboard
  ```

- Process Video:

  ```bash
  python src/main.py --video path/to/video.mp4 --mode full
  ```

- Generate Report:

  ```bash
  python src/main.py --video path/to/video.mp4 --mode full --generate-report
  ```
```text
python src/main.py [OPTIONS]

Options:
  --video PATH         Path to input video file
  --config PATH        Configuration file path (default: config.yaml)
  --output-dir PATH    Output directory (default: data/outputs)
  --mode MODE          Analysis mode:
                         - detection: Face detection only
                         - recognition: Face recognition only
                         - emotion: Emotion analysis only
                         - attendance: Attendance tracking only
                         - full: Comprehensive analysis
                         - dashboard: Launch dashboard only
                         - extract-frames: Extract video frames
                         - report: Generate report only
  --extract-frames     Extract frames from video
  --generate-report    Generate comprehensive report
```

Edit `config.yaml` to customize:
- Face Detection: Model selection, confidence thresholds
- Face Recognition: Model selection, similarity thresholds
- Emotion Detection: Model selection, emotion categories
- Attention Detection: Gaze and head pose thresholds
- Video Processing: FPS, resolution, batch size
- Dashboard: Port, host, refresh rate
- Reporting: Report format, chart options
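The CLI option surface listed earlier maps naturally onto Python's `argparse`. The parser below is a hedged sketch of how such an interface could be declared, not the project's actual `main.py`:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Illustrative parser mirroring the documented CLI options."""
    p = argparse.ArgumentParser(prog="faceclass")
    p.add_argument("--video", help="Path to input video file")
    p.add_argument("--config", default="config.yaml",
                   help="Configuration file path")
    p.add_argument("--output-dir", default="data/outputs",
                   help="Output directory")
    p.add_argument("--mode", default="full",
                   choices=["detection", "recognition", "emotion", "attendance",
                            "full", "dashboard", "extract-frames", "report"],
                   help="Analysis mode")
    p.add_argument("--extract-frames", action="store_true",
                   help="Extract frames from video")
    p.add_argument("--generate-report", action="store_true",
                   help="Generate comprehensive report")
    return p
```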
- Multiple Models: YOLO, RetinaFace, MTCNN, OpenCV
- Tracking: ByteTrack, Deep OC-SORT algorithms
- Recognition: ArcFace, FaceNet, VGGFace models
- Database: Student face database management
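Recognition against a student database with a similarity threshold usually reduces to nearest-neighbour search over face embeddings. A minimal sketch, assuming cosine similarity; the `match_student` helper and the 0.6 default (echoing the config's `similarity_threshold`) are illustrative:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_student(embedding, database, threshold=0.6):
    """Return the ID of the closest enrolled student, or None.

    Hypothetical helper: `database` maps student IDs to reference embeddings;
    matches below `threshold` are rejected as unknown faces.
    """
    best_id, best_sim = None, threshold
    for student_id, ref in database.items():
        sim = cosine_similarity(embedding, ref)
        if sim >= best_sim:
            best_id, best_sim = student_id, sim
    return best_id
```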
- Emotion Categories: 9 emotions (angry, disgust, fear, happy, sad, surprise, neutral, confused, tired)
- Models: FER-2013, AffectNet integration
- Real-time: Live emotion detection and tracking
- Statistics: Emotion distribution and trends
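Per-session emotion statistics of this kind can be computed from per-frame labels with the standard library. The `emotion_distribution` helper below is a hypothetical illustration, not the project's API:

```python
from collections import Counter

def emotion_distribution(frame_emotions):
    """Return each emotion's share of the labelled frames, as fractions.

    Hypothetical helper: `frame_emotions` is one label string per frame.
    """
    counts = Counter(frame_emotions)
    total = sum(counts.values()) or 1  # avoid division by zero on empty input
    return {emotion: n / total for emotion, n in counts.items()}
```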
- Gaze Direction: Eye tracking and gaze analysis
- Head Pose: Yaw, pitch, roll estimation
- Attention Scoring: Combined attention metrics
- Patterns: Attention trends and patterns
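One plausible way to combine gaze and head pose into a single attention metric is a weighted blend, with the pose term decaying as yaw/pitch exceed a threshold (compare `head_pose_threshold: 30.0` in the config). The formula and `attention_score` helper below are assumptions for illustration, not the project's documented scoring:

```python
def attention_score(gaze_score, yaw_deg, pitch_deg, head_pose_threshold=30.0):
    """Blend a gaze score in [0, 1] with a head-pose penalty into one metric.

    Hypothetical formula: the pose term is 1.0 for a frontal head and falls
    to 0.0 once yaw or pitch reaches twice the threshold.
    """
    pose_dev = max(abs(yaw_deg), abs(pitch_deg))
    pose_score = max(0.0, 1.0 - pose_dev / (2 * head_pose_threshold))
    return 0.5 * gaze_score + 0.5 * pose_score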
- Automatic Recording: Duration-based attendance
- Confidence Scoring: Multi-factor attendance scoring
- Session Management: Multi-session support
- Statistics: Attendance rates and trends
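Duration-based attendance scoring can be sketched as presence time weighted by detection confidence. The `attendance_score` helper below is hypothetical, not the project's actual formula:

```python
def attendance_score(present_seconds, session_seconds,
                     mean_confidence, min_confidence=0.5):
    """Score attendance in [0, 1]: fraction of the session the student was
    detected, weighted by mean detection confidence.

    Hypothetical helper: detections below `min_confidence` count as absent.
    """
    if session_seconds <= 0 or mean_confidence < min_confidence:
        return 0.0
    presence = min(1.0, present_seconds / session_seconds)
    return presence * mean_confidence
```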
- Heatmaps: Presence, attention, emotion heatmaps
- Seat Assignment: Automatic seat assignment
- Movement Patterns: Student movement analysis
- Spatial Distribution: Classroom layout analysis
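A presence heatmap is essentially a 2-D histogram of face-centre positions over a classroom grid. A minimal sketch with a hypothetical `accumulate_heatmap` helper (grid and frame dimensions are illustrative defaults):

```python
def accumulate_heatmap(detections, grid_w=8, grid_h=6,
                       frame_w=1920, frame_h=1080):
    """Count face-centre hits per grid cell across all frames.

    Hypothetical helper: `detections` is a list of (cx, cy) pixel coordinates;
    the return value is a row-major grid of counts.
    """
    grid = [[0] * grid_w for _ in range(grid_h)]
    for cx, cy in detections:
        # Clamp to the last cell so edge pixels stay in range.
        col = min(grid_w - 1, int(cx * grid_w / frame_w))
        row = min(grid_h - 1, int(cy * grid_h / frame_h))
        grid[row][col] += 1
    return grid
```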
- Comprehensive Report: Full analysis with all metrics
- Attendance Report: Attendance-specific analysis
- Emotion Report: Emotion analysis and trends
- Attention Report: Attention patterns and scores
- Spatial Report: Spatial distribution and heatmaps
- Interactive Charts: Attendance, emotion, attention charts
- Heatmaps: Spatial distribution visualizations
- Statistics: Comprehensive statistics and metrics
- Recommendations: AI-generated recommendations
- Export: CSV, JSON, HTML export options
- Video Upload: Drag-and-drop video upload
- Real-time Processing: Live video processing
- Interactive Charts: Real-time charts and visualizations
- Attendance Monitoring: Live attendance tracking
- Emotion Analysis: Real-time emotion detection
- Attention Tracking: Live attention scores
- Spatial Analysis: Interactive heatmaps
- Report Generation: On-demand report generation
- Launch Dashboard:

  ```bash
  python src/main.py --mode dashboard
  ```

- Access Interface:
  - Open browser: http://localhost:8080
  - Upload video for analysis
  - View real-time results
  - Generate reports
```yaml
# Face Detection
face_detection:
  model: "yolo"  # yolo, retinaface, mtcnn, opencv
  confidence_threshold: 0.5
  nms_threshold: 0.4

# Face Recognition
face_recognition:
  model: "arcface"  # arcface, facenet, vggface, opencv
  similarity_threshold: 0.6

# Emotion Detection
emotion_detection:
  model: "fer2013"  # fer2013, affectnet, placeholder
  emotions: ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral", "confused", "tired"]

# Attention Detection
attention_detection:
  model: "mediapipe"  # mediapipe, openface, placeholder
  gaze_threshold: 0.7
  head_pose_threshold: 30.0

# Dashboard
dashboard:
  port: 8080
  host: "localhost"
  refresh_rate: 1.0
```

- CPU: Intel i5 or equivalent
- RAM: 8GB minimum, 16GB recommended
- GPU: NVIDIA GTX 1060 or equivalent (optional)
- Storage: 10GB free space
- Processing Speed: 30 FPS (with GPU acceleration)
- Accuracy: 95%+ face detection accuracy
- Scalability: Supports up to 50 students per session
- Real-time: Live processing and analysis
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- OpenCV for computer vision capabilities
- PyTorch for deep learning models
- MediaPipe for face mesh and pose estimation
- Dash for interactive dashboard
- Plotly for data visualization
For support and questions:
- Documentation: Check the docs/ directory
- Issues: Create an issue on GitHub
- Email: Contact the development team
- Comprehensive student attendance analysis
- Multi-model face detection and recognition
- Advanced emotion and attention analysis
- Spatial analysis and heatmaps
- Interactive dashboard
- Comprehensive reporting system
- Basic face detection and tracking
- Simple emotion analysis
- Basic dashboard interface
FaceClass - Transforming classroom analysis with computer vision technology.