An AI-assisted bug-hunting framework that automates high-volume reconnaissance, surfaces high-probability attack paths, runs smart dynamic checks, and produces prioritized findings with reproducible PoCs and recommended mitigations.
- Python 3.8+
- PostgreSQL 12+
- Redis 6+
- Git
- Clone the repository:

```bash
git clone <repository-url>
cd hunter
```

- Set up environment:

```bash
# Create virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Install Playwright browsers
playwright install chromium
```

- Configure environment:

```bash
# Copy example environment file
cp .env.example .env

# Edit .env with your configuration
nano .env
```

- Initialize database:

```bash
python3 scripts/init_db.py
```

- Start services:

```bash
./scripts/start_services.sh
```

- Access the application:
  - API Documentation: http://localhost:8000/docs
  - Health Check: http://localhost:8000/health
Create a `.env` file with the following configuration:

```bash
# Database Configuration
DATABASE_URL=postgresql://postgres:password@localhost:5432/bug_hunter

# Redis Configuration
REDIS_URL=redis://localhost:6379/0
CELERY_BROKER_URL=redis://localhost:6379/0
CELERY_RESULT_BACKEND=redis://localhost:6379/0

# API Keys (optional but recommended)
SHODAN_API_KEY=your_shodan_api_key_here
VIRUSTOTAL_API_KEY=your_virustotal_api_key_here
SECURITYTRAILS_API_KEY=your_securitytrails_api_key_here
GITHUB_TOKEN=your_github_token_here
CENSYS_API_KEY=your_censys_api_key_here

# OpenAI Configuration (for AI features)
OPENAI_API_KEY=your_openai_api_key_here

# Evidence Storage
EVIDENCE_STORAGE_TYPE=local  # or 's3'
EVIDENCE_BASE_PATH=evidence

# Security (generate with Fernet.generate_key())
API_ENCRYPTION_KEY=your_fernet_key_here
```

The framework supports multiple external services for enhanced reconnaissance:
- Shodan: Host and service discovery
- VirusTotal: Passive DNS and malware analysis
- SecurityTrails: Historical DNS data
- GitHub: Code repository scanning
- Censys: Internet-wide scanning data
- OpenAI: AI-powered analysis and PoC generation
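The variables in the `.env` example above might be read at startup along the following lines. This is a stdlib sketch for illustration only; the function name, defaults, and field selection are assumptions, not the framework's actual configuration code.

```python
import os

def load_settings(env=os.environ):
    # Variable names match the .env example; defaults are illustrative.
    return {
        "database_url": env.get("DATABASE_URL", "postgresql://postgres:password@localhost:5432/bug_hunter"),
        "redis_url": env.get("REDIS_URL", "redis://localhost:6379/0"),
        "evidence_storage_type": env.get("EVIDENCE_STORAGE_TYPE", "local"),
        # Optional API keys stay None when unset, so integrations can be skipped.
        "shodan_api_key": env.get("SHODAN_API_KEY"),
        "openai_api_key": env.get("OPENAI_API_KEY"),
    }

settings = load_settings()
```

Keeping optional keys as `None` rather than raising lets the recon modules degrade gracefully when a service isn't configured.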
```
hunter/
├── automation/              # Orchestration and core services
│   ├── orchestrator.py      # Job scheduling and workflow management
│   ├── database.py          # Database models and repositories
│   ├── api_manager.py       # API key management and rate limiting
│   ├── ai_services.py       # LLM and embedding services
│   └── logging_config.py    # Audit logging and evidence storage
├── recon/                   # Reconnaissance modules
│   ├── collectors.py        # Data collection from various sources
│   └── tasks.py             # Celery tasks for distributed recon
├── analysis/                # Content discovery and app analysis
│   └── tasks.py             # Web application analysis tasks
├── fuzz/                    # Vulnerability scanning and fuzzing
│   └── tasks.py             # Automated vulnerability detection
├── ui/                      # Web interface
│   └── api.py               # FastAPI REST API
├── data/                    # Data models and schemas
│   └── schemas.py           # Pydantic models for all entities
├── docs/                    # Documentation
│   └── legal-ethics-policy.md  # Legal and ethical guidelines
└── scripts/                 # Utility scripts
    ├── init_db.py           # Database initialization
    ├── start_services.sh    # Service startup script
    └── stop_services.sh     # Service shutdown script
```
- Job Submission → API receives scan requests
- Task Distribution → Celery distributes work to workers
- Data Collection → Collectors gather information from various sources
- Analysis → AI services analyze findings and generate insights
- Storage → Results stored in PostgreSQL with evidence on the file system
- Reporting → Dashboard and API provide access to findings
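The submission step of this flow can also be scripted. A minimal standard-library sketch, assuming a local deployment at `http://localhost:8000` (the actual send is commented out since it needs the services running):

```python
import json
from urllib import request

BASE_URL = "http://localhost:8000"  # assumed local deployment

def build_scan_request(target: str, scan_type: str = "recon", priority: int = 8) -> request.Request:
    # Same JSON payload shape the POST /scans endpoint accepts.
    body = json.dumps({"target": target, "scan_type": scan_type, "priority": priority}).encode()
    return request.Request(
        f"{BASE_URL}/scans",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_scan_request("example.com")
# request.urlopen(req) would submit the job once the API is up.
```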
```bash
# Submit a scan using curl
curl -X POST "http://localhost:8000/scans" \
  -H "Content-Type: application/json" \
  -d '{
    "target": "example.com",
    "scan_type": "recon",
    "priority": 8
  }'

# Start comprehensive reconnaissance
curl -X POST "http://localhost:8000/workflows/recon?target=example.com"

# Start vulnerability assessment
curl -X POST "http://localhost:8000/workflows/vulnerability-assessment?target=example.com"
```

```bash
# List all scans
curl "http://localhost:8000/scans"

# Get scan status
curl "http://localhost:8000/scans/{job_id}"

# List findings
curl "http://localhost:8000/findings"

# Get dashboard statistics
curl "http://localhost:8000/dashboard/stats"
```

- Read the Legal & Ethics Policy
- Obtain written authorization for all targets
- Respect scope limitations and out-of-scope rules
- Follow responsible disclosure practices
- ✅ Only test systems you own or have explicit permission to test
- ✅ Implement reasonable rate limiting to avoid service disruption
- ✅ Document all activities for audit purposes
- ✅ Report findings responsibly to appropriate parties
- ❌ Never test without authorization
- ❌ Never access or modify sensitive data
- ❌ Never perform destructive actions
- Certificate Transparency Logs - Subdomain discovery via CT logs
- Passive DNS - Historical DNS data analysis
- Shodan Integration - Internet-wide host discovery
- GitHub Dorking - Code repository scanning
- Wayback Machine - Historical content analysis
- Technology Fingerprinting - Framework and service identification
- SQL Injection - Automated SQLi detection with error-based analysis
- Cross-Site Scripting (XSS) - Reflected and stored XSS detection
- Server-Side Request Forgery (SSRF) - Internal service probing
- Directory Traversal - File inclusion vulnerability testing
- Information Disclosure - Sensitive file and configuration exposure
- Security Misconfigurations - Missing security headers and controls
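Error-based SQLi detection of the kind listed above typically reduces to sending a probe payload and matching known database error signatures in the response. A simplified sketch; the signatures here are illustrative, not the framework's actual rule set:

```python
import re

# Illustrative database error signatures for error-based SQLi detection.
ERROR_SIGNATURES = [
    r"you have an error in your sql syntax",  # MySQL
    r"unterminated quoted string",            # PostgreSQL
    r"sqlite3\.operationalerror",             # SQLite
    r"ora-\d{5}",                             # Oracle
]

def looks_like_sql_error(response_body: str) -> bool:
    # Case-insensitive search for any known signature in the response body.
    return any(re.search(sig, response_body, re.IGNORECASE) for sig in ERROR_SIGNATURES)
```

A production scanner would pair this with baseline responses to cut false positives from pages that merely discuss SQL errors.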
- Intelligent Triage - AI-assisted finding prioritization
- PoC Generation - Automated proof-of-concept creation
- Vulnerability Analysis - LLM-powered security assessment
- Report Summarization - Natural language finding summaries
- Screenshot Capture - Automated web application screenshots
- Request/Response Logging - Complete HTTP transaction recording
- Audit Trail - Immutable activity logging
- Evidence Storage - Secure file storage with integrity verification
- Real-time Scan Monitoring - Live status updates
- Finding Management - Triage, assignment, and tracking
- Asset Inventory - Comprehensive asset discovery view
- Evidence Viewer - Integrated evidence examination
- Export Capabilities - PDF and JSON report generation
- `GET /health` - System health check
- `POST /scans` - Submit new scan job
- `GET /scans/{id}` - Get scan status
- `GET /findings` - List security findings
- `POST /findings/{id}/triage` - Triage findings
- `GET /assets` - List discovered assets
- `GET /dashboard/stats` - System statistics
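A finding returned by `GET /findings` might carry fields along these lines. This is a dataclass stand-in for illustration; the real Pydantic models live in `data/schemas.py` and may differ:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class Finding:
    # Hypothetical field set; see data/schemas.py for the actual models.
    id: str
    target: str
    title: str
    severity: str = "info"   # e.g. info / low / medium / high / critical
    triaged: bool = False
    evidence_paths: list = field(default_factory=list)

finding = Finding(id="f-1", target="example.com", title="Missing CSP header")
```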
```bash
# Start individual components
redis-server
celery -A automation.orchestrator worker --loglevel=info
python3 -m uvicorn ui.api:app --reload --host 0.0.0.0 --port 8000
```

```bash
# Run tests
pytest

# Run with coverage
pytest --cov=. --cov-report=html
```

To add a new collector:

- Create a collector class in `recon/collectors.py`
- Implement the `collect()` method
- Add it to `ReconOrchestrator`
- Create a corresponding Celery task in `recon/tasks.py`
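A new collector might look like the following. This is a hedged sketch: the class is hypothetical, and the actual base-class contract in `recon/collectors.py` may differ.

```python
class RobotsTxtCollector:
    """Hypothetical collector that records candidate paths from robots.txt."""

    name = "robots_txt"

    def collect(self, target: str) -> list:
        # A real implementation would fetch https://{target}/robots.txt and
        # parse its Disallow lines; this sketch returns placeholder records
        # in a plausible common shape.
        return [
            {"source": self.name, "target": target, "path": "/admin/"},
        ]
```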
```bash
# Reset database (WARNING: deletes all data)
python3 scripts/init_db.py --reset

# Check database connection
python3 scripts/init_db.py --check
```

```bash
# Restart services
./scripts/stop_services.sh
./scripts/start_services.sh
```

```bash
# Check individual services
redis-cli ping
celery -A automation.orchestrator inspect ping
curl http://localhost:8000/health
```

- Legal & Ethics Policy - Legal compliance and ethical guidelines
- API Documentation - Interactive API documentation
- Data Schemas - Complete data model documentation
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Follow PEP 8 style guidelines
- Add comprehensive docstrings
- Include unit tests for new features
- Update documentation as needed
- Ensure legal and ethical compliance
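Unit tests for new code might follow this pytest-style shape. The collector here is a stand-in defined inline so the example is self-contained; it is not part of the framework.

```python
class EchoCollector:
    """Stand-in collector used only to illustrate the test shape."""

    def collect(self, target: str) -> list:
        return [{"target": target}]

def test_collect_returns_records_for_target():
    # Arrange / act / assert, exactly as pytest would discover and run it.
    results = EchoCollector().collect("example.com")
    assert results and results[0]["target"] == "example.com"

test_collect_returns_records_for_target()  # pytest calls this automatically
```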
This project is licensed under the MIT License - see the LICENSE file for details.
This tool is intended for authorized security testing and educational purposes only. Users are responsible for ensuring they have proper authorization before testing any systems. The developers assume no liability for misuse of this software.
- Issues: GitHub Issues
- Documentation: Project Wiki
- Security: Report security issues privately to security@yourproject.com
Happy Bug Hunting!