Multi-Modal Behavioral Fraud Detection System
Real-time fraud detection using behavioral biometrics and AI-powered multi-agent analysis
Features • Architecture • Getting Started • Demo • Documentation
QuadFusion is an advanced multi-modal behavioral fraud detection system that leverages AI and machine learning to identify fraudulent activities through behavioral biometrics. Unlike traditional authentication methods, QuadFusion continuously monitors user behavior patterns across multiple dimensions:
- Touch Patterns - Swipe dynamics, tap pressure, gesture recognition
- Typing Behavior - Keystroke dynamics, rhythm analysis, timing patterns
- Voice Authentication - Speaker identification, voice pattern analysis
- Visual Biometrics - Face recognition, scene analysis
- Motion Analysis - Accelerometer, gyroscope, magnetometer data
- App Usage Patterns - Usage frequency, navigation patterns, temporal analysis
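To make the typing dimension concrete, here is a minimal sketch of the timing features that keystroke-dynamics analysis typically relies on. The event schema and feature names are assumptions for illustration, not QuadFusion's actual data format:

```python
from statistics import mean, stdev

def keystroke_features(events):
    """Extract simple rhythm features from keystroke events.

    Each event is a dict with 'key', 'down_ms', and 'up_ms' timestamps
    (hypothetical schema, for illustration only).
    """
    # Dwell time: how long each key is held down
    dwells = [e["up_ms"] - e["down_ms"] for e in events]
    # Flight time: gap between releasing one key and pressing the next
    flights = [events[i + 1]["down_ms"] - events[i]["up_ms"]
               for i in range(len(events) - 1)]
    return {
        "mean_dwell_ms": mean(dwells),
        "std_dwell_ms": stdev(dwells) if len(dwells) > 1 else 0.0,
        "mean_flight_ms": mean(flights) if flights else 0.0,
    }

events = [
    {"key": "q", "down_ms": 0,   "up_ms": 80},
    {"key": "f", "down_ms": 150, "up_ms": 240},
    {"key": "d", "down_ms": 310, "up_ms": 400},
]
print(keystroke_features(events))
```

Per-user baselines over features like these are what let an agent flag typing that does not match the enrolled user.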
The system uses a multi-agent architecture where specialized AI agents analyze different behavioral aspects and a coordinator agent fuses their decisions for robust fraud detection.
- Continuous behavioral biometric monitoring
- Real-time anomaly detection
- Risk scoring with confidence levels
- Session-based fraud analysis
- 6 Specialized Agents:
  - Touch Pattern Agent
  - Typing Behavior Agent
  - Voice Command Agent
  - Visual Agent
  - Movement Agent
  - App Usage Agent
- Coordinator Agent for intelligent decision fusion
- Lightweight models optimized for mobile deployment
- React Native + Expo for cross-platform support
- Real-time sensor data collection
- Live monitoring dashboard
- Beautiful, responsive UI with animations
- Offline-capable with local processing
- End-to-end encryption for biometric data
- On-device processing where possible
- Secure data storage and transmission
- GDPR-compliant data handling
- RESTful API with comprehensive documentation
- Easy integration with existing apps
- Detailed logging and monitoring
- Performance metrics and analytics
```
QuadFusion/
├── Backend (Python)            # AI/ML Processing Server
│   ├── API Server              # FastAPI REST endpoints
│   ├── Multi-Agent System      # 6 specialized + 1 coordinator
│   ├── Models                  # ML models (LSTM, CNN, etc.)
│   ├── Data Pipeline           # Collection, preprocessing, encryption
│   └── Mobile Deployment       # ONNX/TFLite conversion
│
└── Frontend (React Native)     # Mobile Application
    ├── Sensor Managers         # Data collection
    ├── Live Monitoring         # Real-time dashboard
    ├── UI Components           # Responsive, animated UI
    └── API Client              # Backend communication
```
```
        User Interaction Data
                 ▼
┌───────────────────────────────────┐
│      Specialized Agent Layer      │
├───────────────────────────────────┤
│ • TouchPatternAgent (20%)         │
│ • TypingBehaviorAgent (15%)       │
│ • VoiceCommandAgent (20%)         │
│ • VisualAgent (25%)               │
│ • MovementAgent (10%)             │
│ • AppUsageAgent (10%)             │
└───────────────────────────────────┘
                 ▼
┌───────────────────────────────────┐
│         Coordinator Agent         │
│ • Weighted fusion                 │
│ • Confidence aggregation          │
│ • Risk level determination        │
└───────────────────────────────────┘
                 ▼
           Fraud Decision
```
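The fusion stage can be sketched in a few lines of Python. This is an illustration only, not the actual `CoordinatorAgent` code: the weights and thresholds mirror the sample `config.yaml` shown later in this README, while the threshold semantics and the top-tier label are assumptions.

```python
# Example agent weights and risk thresholds (mirroring the sample config.yaml)
AGENT_WEIGHTS = {
    "TouchPatternAgent": 0.20,
    "TypingBehaviorAgent": 0.15,
    "VoiceCommandAgent": 0.20,
    "VisualAgent": 0.25,
    "MovementAgent": 0.10,
    "AppUsageAgent": 0.10,
}
RISK_THRESHOLDS = {"low": 0.3, "medium": 0.6, "high": 0.8}

def fuse(agent_results):
    """Fuse per-agent results into a single decision (simplified sketch).

    agent_results maps agent name -> {"anomaly_score": float, "confidence": float}.
    """
    # Weighted fusion of anomaly scores (weights sum to 1.0)
    score = sum(AGENT_WEIGHTS[n] * r["anomaly_score"]
                for n, r in agent_results.items())
    # Confidence aggregation as a weighted average of agent confidences
    confidence = sum(AGENT_WEIGHTS[n] * r["confidence"]
                     for n, r in agent_results.items())
    # Risk level determination; threshold semantics are assumed here
    # ("critical" is a made-up top tier, not from the README)
    if score < RISK_THRESHOLDS["low"]:
        risk = "low"
    elif score < RISK_THRESHOLDS["medium"]:
        risk = "medium"
    elif score < RISK_THRESHOLDS["high"]:
        risk = "high"
    else:
        risk = "critical"
    return {"anomaly_score": score, "risk_level": risk, "confidence": confidence}
```

Because the weights sum to 1.0, the fused score stays in the same 0-1 range as the per-agent scores.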
Backend:
- Python 3.10+
- FastAPI (REST API)
- TensorFlow & PyTorch (Deep Learning)
- Scikit-learn (ML algorithms)
- ONNX/TFLite (Mobile optimization)
- Librosa (Audio processing)
- OpenCV & MediaPipe (Computer Vision)
Frontend:
- React Native 0.79
- Expo 53.0
- TypeScript
- Expo Sensors, Camera, Audio
- Victory Native (Charts)
- React Navigation
- Backend: Python 3.10+, pip
- Frontend: Node.js 18+, npm/yarn
- Mobile: Expo Go app (for testing) or Expo CLI
```bash
git clone https://github.com/Samrudhp/OnDevice-Multimodal-Agent.git
cd QuadFusion
cd src/backend/src

# Create virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Start API server
cd ..
python api_server.py
```

The backend server will start at `http://127.0.0.1:8000`.

API Documentation: Visit `http://127.0.0.1:8000/docs` for interactive API docs.
```bash
cd src/qf

# Install dependencies
npm install

# Start development server
npm run dev
```

Scan the QR code with the Expo Go app to run on your device.
Edit `src/backend/src/config.yaml`:

```yaml
agents:
  coordinator:
    agent_weights:
      TouchPatternAgent: 0.2
      TypingBehaviorAgent: 0.15
      VoiceCommandAgent: 0.2
      VisualAgent: 0.25
      AppUsageAgent: 0.1
      MovementAgent: 0.1
    risk_thresholds:
      low: 0.3
      medium: 0.6
      high: 0.8
```

```bash
# Terminal 1: Start backend
cd src/backend
python api_server.py

# Terminal 2: Start frontend
cd src/qf
npm run dev
```

Example request:

```bash
curl -X POST http://127.0.0.1:8000/api/v1/process/realtime \
  -H "Content-Type: application/json" \
  -d '{
    "session_id": "session-123",
    "sensor_data": {
      "touch_events": [...],
      "keystroke_events": [...],
      "motion_data": {...},
      "audio_data": "base64...",
      "image_data": "base64..."
    }
  }'
```

Response:
```json
{
  "anomaly_score": 0.23,
  "risk_level": "low",
  "confidence": 0.87,
  "agent_results": {
    "MovementAgent": {
      "anomaly_score": 0.15,
      "risk_level": "low",
      "confidence": 0.9
    },
    "TouchPatternAgent": {...},
    ...
  }
}
```

The mobile app provides real-time visualization of:
- Sensor data collection (touch, motion, audio, camera)
- Agent analysis results with individual scores
- Risk assessment with confidence levels
- Processing metrics and performance stats
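As a rough sketch of how a client might act on such a response, the policy below maps the fields from the example response to an app-side action. The policy, action names, and confidence cutoff are illustrative, not part of the QuadFusion API:

```python
def decide_action(result, min_confidence=0.6):
    """Map a fraud-detection response to an app-side action (illustrative policy)."""
    # Low confidence: keep monitoring rather than act on weak signal
    if result["confidence"] < min_confidence:
        return "collect_more_data"
    return {
        "low": "allow",
        "medium": "step_up_auth",   # e.g. ask for a PIN or biometric re-check
        "high": "block_session",
    }.get(result["risk_level"], "block_session")

response = {"anomaly_score": 0.23, "risk_level": "low", "confidence": 0.87}
print(decide_action(response))  # -> allow
```

Unknown risk levels fall through to the most conservative action, so a new server-side tier cannot silently bypass the check.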
- API Specification - Complete API reference
- Backend Setup - Detailed backend setup guide
- Mobile Setup - Frontend setup and testing
- Model Documentation - ML model details
- Architecture Docs - System architecture and design
```
QuadFusion/
├── src/
│   ├── backend/
│   │   ├── api_server.py            # Main API server
│   │   ├── API_SPECIFICATION.md     # API docs
│   │   └── src/
│   │       ├── agents/              # Multi-agent system
│   │       │   ├── coordinator_agent.py
│   │       │   ├── touch_pattern_agent.py
│   │       │   ├── typing_behavior_agent.py
│   │       │   ├── voice_command_agent.py
│   │       │   ├── visual_agent.py
│   │       │   ├── movement_agent.py
│   │       │   └── app_usage_agent.py
│   │       ├── models/              # ML models
│   │       ├── data/                # Data pipeline
│   │       ├── mobile_deployment/   # Model conversion
│   │       ├── training/            # Model training
│   │       └── utils/               # Utilities
│   │
│   └── qf/                          # React Native app
│       ├── app/                     # Expo Router pages
│       ├── components/              # UI components
│       ├── lib/                     # Utilities
│       │   ├── sensor-manager.ts    # Sensor data collection
│       │   ├── api.ts               # API client
│       │   └── audio-recorder.ts    # Audio recording
│       └── config/                  # Configuration
│
├── docs/                            # Documentation
└── README.md                        # This file
```
Contributions are welcome! Please follow these steps:
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Built for Samsung EnnovateX 2025 AI Challenge
- TensorFlow and PyTorch communities
- Expo and React Native teams
- Open-source ML model contributors
Project Repository: https://github.com/Samrudhp/OnDevice-Multimodal-Agent
Built with ❤️ using AI and Multi-Agent Systems
Protecting users through behavioral intelligence