---
license: gpl-3.0
library_name: docking-at-home
pipeline_tag: other
---
# Docking@HOME: Distributed and Parallel Molecular Docking Platform

Docking@HOME is a distributed computing platform that combines volunteer computing, GPU acceleration, decentralized networking, and AI-driven orchestration to perform large-scale molecular docking simulations. The project brings these technologies together to democratize drug discovery and computational chemistry.
- AutoDock Integration: Uses AutoDock Suite 4.2.6 for molecular docking simulations
- GPU Acceleration: CUDPP-powered parallel processing for enhanced performance
- Distributed Computing: BOINC framework for volunteer computing at scale
- Decentralized Networking: Distributed Network Settings-based coordination using the Decentralized Internet SDK
- AI Orchestration: Cloud Agents for intelligent task distribution and optimization
- HuggingFace Integration: Model cards and datasets for reproducible research
```
             Docking@HOME Platform

             ┌─────────────────────┐
             │    BOINC Server     │
             │     (Task Mgmt)     │
             └──────────▲──────────┘
                        │
             ┌──────────┴──────────┐
             │    Decentralized    │
             │      Internet       │
             └──────────▲──────────┘
                        │
             ┌──────────┴──────────┐
             │    Cloud Agents     │
             │    (AI Routing)     │
             └──────────┬──────────┘
                        │
                        ▼
┌────────────────────────────────────────────────┐
│      Distributed Worker Nodes (Clients)        │
│   ┌────────────────┐      ┌────────────────┐   │
│   │    AutoDock    │<---->│     CUDPP      │   │
│   │   (Docking)    │      │  (GPU Accel)   │   │
│   └────────────────┘      └────────────────┘   │
└────────────────────────────────────────────────┘
```
The platform builds on five core components:

- AutoDock Suite 4.2.6: Core molecular docking engine that predicts binding modes and affinities of small molecules to protein targets.
- CUDPP: GPU-accelerated parallel primitives that enhance AutoDock's computational performance.
- BOINC: Distributed computing middleware that manages volunteer computing resources globally.
- Decentralized Internet SDK: Enables Distributed Network Settings-based coordination, ensuring transparent and decentralized task distribution.
- Cloud Agents: AI-powered orchestration layer that optimizes task scheduling and resource allocation based on workload characteristics.
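To make the orchestration idea concrete, here is a minimal sketch of workload-aware task routing. The class names, fields, and scoring rule below are illustrative assumptions, not part of the actual Cloud Agents API:

```python
from dataclasses import dataclass
from typing import List

# Illustrative models only: the real Cloud Agents API is not documented here,
# so these names, fields, and the scoring rule are assumptions.
@dataclass
class WorkerNode:
    node_id: str
    has_gpu: bool
    queued_tasks: int

@dataclass
class DockingTask:
    task_id: str
    num_runs: int
    prefers_gpu: bool

def score(node: WorkerNode, task: DockingTask) -> float:
    """Higher is better: favour idle nodes, and GPU nodes for GPU-friendly tasks."""
    gpu_bonus = 10.0 if (task.prefers_gpu and node.has_gpu) else 0.0
    return gpu_bonus - node.queued_tasks

def route(task: DockingTask, nodes: List[WorkerNode]) -> WorkerNode:
    """Assign the task to the best-scoring node and update its queue length."""
    best = max(nodes, key=lambda n: score(n, task))
    best.queued_tasks += 1
    return best

nodes = [WorkerNode("cpu-01", has_gpu=False, queued_tasks=2),
         WorkerNode("gpu-01", has_gpu=True, queued_tasks=5)]
task = DockingTask("ligand-42", num_runs=100, prefers_gpu=True)
print(route(task, nodes).node_id)  # -> gpu-01
```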
Project partners and contributors:

- OpenPeer AI - AI/ML Integration & Cloud Agents
- Riemann Computing Inc. - Distributed Computing Architecture
- Bleunomics - Bioinformatics & Drug Discovery Expertise
- Andrew Magdy Kamal - Project Lead & System Integration
Prerequisites:

- C++ compiler (GCC 7+ or MSVC 2019+)
- CUDA Toolkit 11.0+ (for GPU acceleration)
- Python 3.8+
- Node.js 16+ (for the Decentralized Internet SDK)
- BOINC client/server software
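Before building, it can help to sanity-check that these tools are on the PATH. The snippet below uses only the Python standard library and assumes the usual binary names (g++, nvcc, node, cmake, boinccmd), which may differ on your platform (e.g., MSVC on Windows):

```python
import shutil
import subprocess
import sys

def check(cmd):
    """Report whether a required tool is on PATH, plus its self-reported version."""
    path = shutil.which(cmd)
    if path is None:
        print(f"[MISSING] {cmd}")
        return
    out = subprocess.run([cmd, "--version"], capture_output=True, text=True)
    lines = (out.stdout or out.stderr).strip().splitlines()
    print(f"[OK] {cmd}: {lines[0] if lines else path}")

print(f"Python {sys.version.split()[0]} (3.8+ required)")
for tool in ("g++", "nvcc", "node", "cmake", "boinccmd"):
    check(tool)
```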
To build and install from source:

```bash
# Clone the repository
git clone https://huggingface.co/OpenPeerAI/DockingAtHOME
cd DockingAtHOME

# Initialize submodules
git submodule update --init --recursive

# Build the project
mkdir build && cd build
cmake ..
make -j$(nproc)

# Install
sudo make install
```
Alternatively, to set up the Python and Node.js dependencies and build the C++/CUDA components in place:

```bash
# Clone repository
git clone https://huggingface.co/OpenPeerAI/DockingAtHOME
cd DockingAtHOME

# Install dependencies
pip install -r requirements.txt
npm install

# Build C++/CUDA components
mkdir build && cd build
cmake .. && make -j$(nproc)
```
```bash
# Start the web-based GUI (fastest way to get started)
docking-at-home gui

# Or with Python
python -m docking_at_home.gui

# Open browser to http://localhost:8080
```
To submit a docking job from the Python client instead:

```python
from docking_at_home import DockingClient

# Initialize client (localhost mode)
client = DockingClient(mode="localhost")

# Submit docking job
job = client.submit_job(
    ligand="path/to/ligand.pdbqt",
    receptor="path/to/receptor.pdbqt",
    num_runs=100
)

# Monitor progress
status = client.get_status(job.id)

# Retrieve results
results = client.get_results(job.id)
print(f"Best binding energy: {results.best_energy} kcal/mol")
```
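Building only on the `submit_job` and `get_results` calls shown above, a small virtual-screening loop might look like the sketch below; the ligand directory and receptor path are placeholders:

```python
from pathlib import Path
from docking_at_home import DockingClient

client = DockingClient(mode="localhost")

ligand_dir = Path("ligands")          # placeholder: directory of prepared .pdbqt ligands
receptor = "path/to/receptor.pdbqt"   # placeholder receptor

# Submit one job per ligand and keep the job handles.
jobs = {
    ligand.name: client.submit_job(
        ligand=str(ligand),
        receptor=receptor,
        num_runs=50,
    )
    for ligand in sorted(ligand_dir.glob("*.pdbqt"))
}

# Rank ligands by best binding energy (more negative is better).
scores = {name: client.get_results(job.id).best_energy for name, job in jobs.items()}
for name, energy in sorted(scores.items(), key=lambda kv: kv[1])[:10]:
    print(f"{name}: {energy:.2f} kcal/mol")
```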
To run in client/server mode from the command line:

```bash
# Start server
docking-at-home server --port 8080

# In another terminal, run worker
docking-at-home worker --local
```
The same workflow can be driven programmatically through the server API:

```python
from docking_at_home.server import job_manager, initialize_server
import asyncio

async def main():
    await initialize_server()

    job_id = await job_manager.submit_job(
        ligand_file="molecule.pdbqt",
        receptor_file="protein.pdbqt",
        num_runs=100,
        use_gpu=True
    )

    # Monitor progress
    while True:
        job = job_manager.get_job(job_id)
        if job["status"] == "completed":
            print(f"Best energy: {job['results']['best_energy']}")
            break
        await asyncio.sleep(1)

asyncio.run(main())
```

Configuration files are located in `config/`:
- `autodock.conf` - AutoDock parameters
- `boinc_server.conf` - BOINC server settings
- `gpu_config.conf` - CUDPP and GPU settings
- `decentralized.conf` - Distributed Network Settings
- `cloud_agents.conf` - AI orchestration parameters
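The on-disk format of these files is not documented in this card. Assuming INI-style sections (an assumption), they could be inspected with Python's standard `configparser`:

```python
from configparser import ConfigParser
from pathlib import Path

CONFIG_DIR = Path("config")

def load_config(name):
    """Read one .conf file from config/, assuming INI-style syntax (an assumption)."""
    parser = ConfigParser()
    path = CONFIG_DIR / name
    if not parser.read(path):
        raise FileNotFoundError(f"Missing or unreadable configuration file: {path}")
    return parser

# Dump whatever sections and keys the file actually contains.
gpu_conf = load_config("gpu_config.conf")
for section in gpu_conf.sections():
    print(f"[{section}]")
    for key, value in gpu_conf[section].items():
        print(f"  {key} = {value}")
```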
Approximate throughput on representative configurations (a rough campaign-time estimate based on these figures follows the list):
- CPU-only: ~100 docking runs/hour
- Single GPU (RTX 3090): ~2,000 docking runs/hour
- Distributed (1000 nodes): ~100,000+ docking runs/hour
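For back-of-the-envelope planning, the figures above translate directly into campaign-time estimates. The library size below is a hypothetical example, not a benchmark:

```python
# Campaign-time estimate using only the throughput figures quoted above.
throughput_runs_per_hour = {
    "CPU-only": 100,
    "Single GPU (RTX 3090)": 2_000,
    "Distributed (1000 nodes)": 100_000,
}

library_size = 1_000_000   # hypothetical number of ligands to screen
runs_per_ligand = 1        # single docking run per ligand for a first pass

for setup, rate in throughput_runs_per_hour.items():
    hours = library_size * runs_per_ligand / rate
    print(f"{setup}: ~{hours:,.0f} hours")
```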
Typical use cases include:

- Drug Discovery and Virtual Screening
- Protein-Ligand Binding Studies
- Large-Scale Chemical Library Screening
- Educational Computational Chemistry
- Pandemic Response (e.g., COVID-19 drug discovery)
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
This project is licensed under the GNU General Public License v3.0 - see LICENSE for details.
Individual components retain their original licenses:
- AutoDock: GNU GPL v2
- BOINC: GNU LGPL v3
- CUDPP: BSD License
If you use Docking@HOME in your research, please cite:
```bibtex
@software{docking_at_home_2025,
  title={Docking@HOME: A Distributed Platform for Molecular Docking},
  author={OpenPeer AI and Riemann Computing Inc. and Bleunomics and Andrew Magdy Kamal},
  year={2025},
  url={https://huggingface.co/OpenPeerAI/DockingAtHOME}
}
```

Contact and support:

- Email: andrew@bleunomics.com
- Issues: HuggingFace Issues
- Community: HuggingFace Discussions
With thanks to:

- The AutoDock development team at The Scripps Research Institute
- The BOINC project at UC Berkeley
- The CUDPP developers
- The Lonero Team for the Decentralized Internet SDK
- OpenPeer AI for the Cloud Agents framework
Made with ❤️ by the open-source computational chemistry community