A small human verification system that combines mouse click behavior analysis and facial analysis to estimate whether the current user is a human or a bot.
Built as a learning / portfolio project by Ziad Saqr.
This project is an experimental human verification demo that uses:
- User behavior signals (mouse click timing & variability)
- Computer vision (face analysis via DeepFace)
The goal is not to replace production-grade CAPTCHA / security systems, but to showcase:
- How to collect and analyze simple behavioral features
- How to integrate DeepFace + OpenCV into a Streamlit app
- How to combine multiple signals into a single verification score
🖱️ Mouse Click Behavior Analyzer
- Records timestamps of button clicks
- Computes:
  - Number of clicks
  - Average interval between clicks
  - Standard deviation of intervals
- Uses simple heuristics to produce a behavior_score ∈ [0, 1]
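The exact heuristics live in `behavior_analyzer.py` and are not reproduced here, but a minimal sketch of this kind of scoring (the function name and thresholds below are illustrative assumptions, not the repo's actual code) could look like:

```python
from statistics import mean, stdev

def behavior_score(click_times: list[float]) -> float:
    """Heuristic human-likeness score from click timestamps (0 = bot-like, 1 = human-like)."""
    if len(click_times) < 3:
        return 0.0  # not enough clicks to judge

    # Intervals between consecutive clicks, in seconds
    intervals = [b - a for a, b in zip(click_times, click_times[1:])]
    avg = mean(intervals)
    sd = stdev(intervals)

    score = 0.0
    # Humans rarely average faster than ~100 ms between clicks.
    if avg > 0.1:
        score += 0.5
    # Near-perfectly regular intervals suggest automation.
    if sd > 0.05:
        score += 0.5
    return score
```

The idea is simply that very fast or metronome-regular clicking is more likely scripted than human.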
👤 Face Analysis using DeepFace
- Capture a frame from the webcam or upload an image
- Runs DeepFace to:
  - Detect a face
  - Estimate dominant emotion
  - Get a face confidence score
- Produces a face_score ∈ [0, 1]
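How `face_verifier.py` derives its score is not shown here; the sketch below is an assumption about one plausible mapping, operating on a dict shaped like what `DeepFace.analyze(...)[0]` returns in recent DeepFace versions (`dominant_emotion`, `face_confidence`):

```python
def face_score(analysis: dict) -> float:
    """Map one DeepFace-style analysis dict to a score in [0, 1].

    Assumes keys like those in DeepFace.analyze(...)[0]:
    'dominant_emotion' and 'face_confidence'.
    """
    confidence = float(analysis.get("face_confidence", 0.0))
    if confidence <= 0.0:
        return 0.0  # no face detected -> lowest score

    # Base the score on detector confidence, with a small bonus
    # when an emotion could be read from the face.
    score = min(confidence, 1.0)
    if analysis.get("dominant_emotion"):
        score = min(score + 0.1, 1.0)
    return round(score, 2)

# Example with a made-up DeepFace-style result:
sample = {"dominant_emotion": "happy", "face_confidence": 0.92}
print(face_score(sample))  # 1.0 (0.92 capped after the emotion bonus)
```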
🧮 Score Fusion & Final Decision
- Combines behavior_score + face_score into a final_score
- Applies a threshold (default: 0.6) to label the session as:
  - Human
  - Bot / Suspicious
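Concretely, the fusion step can be sketched like this (weights taken from the formula quoted in the Usage section; the function name is illustrative):

```python
def final_decision(behavior_score: float, face_score: float,
                   threshold: float = 0.6) -> tuple[float, str]:
    """Weighted fusion of the two signals into one verdict."""
    # Behavior is weighted slightly higher than the face signal.
    final_score = 0.6 * behavior_score + 0.4 * face_score
    label = "Human" if final_score >= threshold else "Bot / Suspicious"
    return final_score, label
```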
🌐 Interactive UI (Streamlit)
- Three clear tabs: Behavior, Face, Result
- JSON views for debugging and explainability
🛠️ Tech Stack
- Language: Python
- Web App: Streamlit
- Computer Vision: OpenCV, Pillow
- Face & Emotion Analysis: DeepFace
- Numerical Computing: NumPy
```text
human-verification-system/
│
├─ app.py                 # Streamlit main app
├─ behavior_analyzer.py   # Mouse click behavior features & scoring
├─ face_verifier.py       # DeepFace wrapper for face analysis
├─ requirements.txt       # Python dependencies
└─ README.md              # This file
```
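The contents of `requirements.txt` are not reproduced in this README; based on the tech stack listed above, it presumably includes something like:

```text
streamlit
deepface
opencv-python
Pillow
numpy
```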
⚙️ Installation & Setup
1. Clone the repository
```bash
git clone https://github.com/Zsaqr/human-verification-system.git
cd human-verification-system
```
2. (Optional but recommended) Create a virtual environment
```bash
python -m venv .venv

# On Windows:
.venv\Scripts\activate

# On Linux / macOS:
# source .venv/bin/activate
```
3. Install dependencies
```bash
pip install --upgrade pip setuptools wheel
pip install -r requirements.txt
```
💡 On first run, DeepFace may download some pre-trained models, which can take a bit of time.
4. Run the app
```bash
streamlit run app.py
```
Open the URL printed in the terminal (usually: http://localhost:8501).
🚀 Usage
1️⃣ Behavior Tab
- Go to the "Behavior" tab.
- Click the "Click here repeatedly" button multiple times (at least 5).
- The app will:
  - Record timestamps
  - Compute intervals & statistics
  - Show a behavior_score and some basic metrics in JSON form
2️⃣ Face Tab
- Go to the "Face" tab.
- Either:
  - Capture an image from the webcam, or
  - Upload a face image (.jpg, .jpeg, .png).
- Click "Analyze face with DeepFace".
- The app will:
  - Run DeepFace analysis
  - Show:
    - Dominant emotion
    - Face confidence
    - A face_score
3️⃣ Result Tab
- Go to the "Result" tab.
- Click "Run full verification".
- The app will:
  - Recalculate behavior_score
  - Recalculate face_score (if a face image is available)
  - Fuse them into a final_score:

    ```text
    final_score = 0.6 * behavior_score + 0.4 * face_score
    ```

  - Apply a threshold (default: 0.6):
    - final_score >= 0.6 → Human
    - otherwise → Bot / Suspicious
Details are shown in expandable sections for:
- Behavior features
- Face analysis (DeepFace output)
🖼️ Screenshots
Screenshots will be added soon, such as:
- Behavior analysis tab
- Face analysis tab
- Final decision tab
(You can simply add images to this section later.)
⚠️ Disclaimer
This is a learning / portfolio project, not a production-ready security system.
- It should not be used as a primary defense against bots in real-world applications.
- The heuristics are simple and mainly for demonstration.
- The face analysis is limited by DeepFace models and input image quality.

Use it for:
- Learning about behavior-based signals
- Playing with DeepFace and OpenCV
- Showcasing skills in Python / Streamlit / CV
📌 Possible Improvements / Future Work
Some potential ideas to extend the project:
- Add more behavior signals (mouse movement paths, scroll patterns, keypress timing)
- Train a small ML model on synthetic data
- Add logging / database storage for sessions
- Try different weightings for fusion or adaptive thresholds
- Add more visualization (charts for click intervals, etc.)
👤 Author & Contact
- Author: Ziad Saqr
- 💼 LinkedIn: ziadmuhammedsaqr
- 📧 Email: ziadmuhammedsaq@gmail.com
🇪🇬 Quick Summary
This is a simple Human Verification System project that tries to distinguish between:
- Human
- Bot / unnatural behavior

using two main signals:

🖱️ Mouse click behavior analysis
- Records the timestamp of each click
- Computes:
  - Number of clicks
  - Average time between clicks
  - Variation in the time between clicks
- Produces a behavior_score from 0 to 1 (the higher it is, the closer the behavior is to human).

👤 Face analysis using DeepFace
- You can capture an image from the camera or upload one from your device
- DeepFace tries to:
  - Determine whether there is a face in the image
  - Extract the dominant emotion
- It returns a face_score from 0 to 1.

The system then combines the scores into a final_score and produces a decision:
- If final_score is above the threshold (e.g. 0.6) → Human
- Otherwise → Bot / Suspicious

The project was built mainly as a learning / portfolio piece to showcase Python + Streamlit + OpenCV + DeepFace work, not as a production security system.
Thanks for reading this far 🙌