A self-contained AI neural engine with tool execution, scheduled goals, and 100% local LLM inference via llama.cpp.
- Local LLM: Built-in llama.cpp server (Mistral 7B) - no external APIs
- Tool System: Semantic tool discovery and execution (Strava, memory, calculator, etc.)
- Scheduler: Run goals on cron schedules or intervals
- PostgreSQL Storage: Persistent tool data and credentials
- GPU Acceleration: Auto-detects NVIDIA GPU for faster inference
```bash
# Clone
git clone https://github.com/gradrix/dendrite.git
cd dendrite

# Start (auto-detects GPU)
./start.sh

# Run a single goal
./start.sh goal "What is 2+2?"

# Run scheduler daemon
./start.sh scheduler
```

All commands:

```bash
./start.sh            # Start services (auto-detect GPU)
./start.sh goal "..." # Run single goal and exit
./start.sh scheduler  # Run scheduler daemon (uses goals.yaml)
./start.sh api        # Start HTTP API server
./start.sh stop       # Stop all services
./start.sh status     # Show service status
./start.sh logs       # Follow logs
./start.sh test       # Run tests
./start.sh help       # Show help
```

Environment variables (`.env`):

```bash
# RAM profile for model selection (8gb, 16gb, 32gb)
RAM_PROFILE=32gb

# GPU VRAM in GB (auto-detected if not set)
VRAM_GB=32

# Strava OAuth (optional)
STRAVA_CLIENT_ID=your_id
STRAVA_CLIENT_SECRET=your_secret
```

Scheduled goals (`goals.yaml`):

```yaml
goals:
  - id: collect_kudos
    goal: "Use strava_collect_kudos_givers with hours_back=48"
    schedule: cron
    cron: "0 */4 * * *"   # Every 4 hours
    enabled: true
  - id: reciprocate_kudos
    goal: "Use strava_reciprocate_kudos with count=30 and max_age_hours=24"
    schedule: cron
    cron: "0 */6 * * *"   # Every 6 hours
    enabled: true
settings:
  check_interval: 60
```
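The `cron` fields use standard five-field crontab syntax (minute, hour, day-of-month, month, day-of-week). A hand-rolled matcher for the subset used in this config, for illustration only (not dendrite's actual scheduler code):

```python
# Toy matcher for the cron subset used in goals.yaml ("*", "*/N", or a number).
# Illustration only; the real scheduler's parser may differ.
def field_matches(field: str, value: int) -> bool:
    if field == "*":
        return True
    if field.startswith("*/"):          # step values, e.g. */4
        return value % int(field[2:]) == 0
    return value == int(field)

def cron_matches(expr: str, minute: int, hour: int) -> bool:
    """True if the minute/hour fields of a 5-field cron expression match."""
    m_field, h_field = expr.split()[:2]
    return field_matches(m_field, minute) and field_matches(h_field, hour)

# "0 */4 * * *" matches minute 0 of hours 0, 4, 8, 12, 16, 20
```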
```
┌───────────────────────────────────────────────────────────┐
│                       Orchestrator                        │
│    Routes goals to appropriate neurons based on intent    │
└───────────────────────────────────────────────────────────┘
                              │
         ┌────────────────────┼────────────────────┐
         ▼                    ▼                    ▼
   ┌───────────┐        ┌───────────┐        ┌───────────┐
   │  Intent   │        │   Tool    │        │ Generative│
   │  Neuron   │        │  Neuron   │        │  Neuron   │
   └───────────┘        └───────────┘        └───────────┘
         │                    │                    │
         ▼                    ▼                    ▼
   ┌───────────┐        ┌───────────┐        ┌───────────┐
   │   LLM     │        │   Tool    │        │   LLM     │
   │  Client   │        │ Registry  │        │  Client   │
   └───────────┘        └───────────┘        └───────────┘
                              │
                              ▼
                 ┌─────────────────────────┐
                 │  Tools (Strava, etc)    │
                 │  PostgreSQL Storage     │
                 └─────────────────────────┘
```
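The routing step above can be sketched as follows; the class and function names are invented for this example, not dendrite's actual API:

```python
# Illustrative sketch of orchestrator routing; names are hypothetical.
def classify_intent(goal: str) -> str:
    """Crude keyword stand-in for the Intent Neuron's LLM-based classification."""
    g = goal.lower()
    if g.startswith("use ") or any(t in g for t in ("strava_", "memory_", "calculator")):
        return "tool"
    return "generative"

def route(goal: str) -> str:
    """Stand-in for the Orchestrator: pick a neuron based on intent."""
    neuron = "ToolNeuron" if classify_intent(goal) == "tool" else "GenerativeNeuron"
    return f"{neuron} handles: {goal}"
```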
Core tools:

- `calculator` - Math expressions
- `memory_write` / `memory_read` - Persistent key-value storage
- `current_datetime` - Current date/time

Strava tools:

- `strava_get_activities` - Get your activities
- `strava_get_dashboard_feed` - Get friends' activities
- `strava_give_kudos` - Give kudos to an activity
- `strava_collect_kudos_givers` - Track who gives you kudos
- `strava_reciprocate_kudos` - Auto-kudos back to givers
- `strava_list_kudos_givers` - List known kudos givers
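A minimal, hypothetical picture of the tool system: a registry mapping tool names to callables. The real registry also does semantic discovery and PostgreSQL-backed storage; this sketch only shows lookup and execution.

```python
# Simplified, hypothetical tool registry; not dendrite's real implementation.
from datetime import datetime

_memory: dict[str, str] = {}  # in-memory stand-in for the PostgreSQL store

def calculator(expr: str):
    # Demo only: eval with no builtins. A real tool should use a proper parser.
    return eval(expr, {"__builtins__": {}})

def memory_write(key: str, value: str) -> str:
    _memory[key] = value
    return "ok"

def memory_read(key: str):
    return _memory.get(key)

def current_datetime() -> str:
    return datetime.now().isoformat()

TOOLS = {f.__name__: f for f in (calculator, memory_write, memory_read, current_datetime)}

def run_tool(name: str, *args):
    """Look up a tool by name and execute it."""
    return TOOLS[name](*args)
```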
| Service | Port | Description |
|---|---|---|
| llama-gpu | 8080 | llama.cpp server (Mistral 7B) |
| postgres | 5432 | PostgreSQL with pgvector |
| redis | 6379 | Message bus / caching |
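llama.cpp's server exposes an OpenAI-compatible chat endpoint (`/v1/chat/completions` by default). Assuming the bundled `llama-gpu` service keeps that default, a request can be built like this; everything here is a sketch, and no model name is sent since the server uses its loaded model:

```python
# Sketch: building a request to the llama.cpp server on port 8080.
import json
import urllib.request

def build_chat_request(prompt: str, base_url: str = "http://localhost:8080") -> urllib.request.Request:
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually call the server (only once the stack is running):
# with urllib.request.urlopen(build_chat_request("What is 2+2?")) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```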
```bash
# Run tests
./start.sh test

# Run specific tests
./start.sh test -k "test_strava"

# Access container shell
./scripts/docker/shell.sh
```
Project layout:

```
├── main.py              # Entry point
├── start.sh             # Main startup script
├── goals.yaml           # Scheduled goals config
├── docker-compose.yml   # Services definition
├── neural_engine/
│   └── v2/
│       ├── core/        # Config, LLM, Orchestrator
│       ├── neurons/     # Intent, Tool, Generative, Memory
│       ├── tools/       # Tool implementations
│       ├── scheduler/   # Goal scheduling
│       ├── forge/       # Dynamic tool creation
│       ├── cli.py       # Command-line interface
│       └── api.py       # HTTP API
└── scripts/
    ├── db/              # Database migrations
    ├── docker/          # Docker helper scripts
    └── testing/         # Test runners
```
License: MIT