mofaclaw is an ultra-lightweight personal AI assistant inspired by nanobot.

⚡️ Written in Rust for maximum performance, safety, and efficiency – 99% smaller than Clawdbot's 430k+ lines.

- 2026-02-04 🚀 mofaclaw, inspired by nanobot, rewritten in Rust! Blazing-fast performance with memory safety.

- 🦀 Rust-Powered: Memory-safe, zero-cost abstractions, and fearless concurrency for rock-solid stability.
- 🪶 Ultra-Lightweight: Minimal dependencies and a clean architecture – easy to understand and modify.
- 🔬 Research-Ready: Clean, readable code that's easy to extend for research.
- ⚡️ Lightning Fast: Compiled performance with instant startup, a minimal memory footprint, and efficient execution.
- 🚀 Easy to Use: Simple installation and setup – you're ready to go in minutes.
Rust (1.85+ required)
- For users in China, use the fast mirror: https://rsproxy.cn/
- Official installation: https://www.rust-lang.org/tools/install
Quick install (China mirror):

```bash
# Visit https://rsproxy.cn/ for the latest installation command
curl --proto '=https' --tlsv1.2 -sSf https://rsproxy.cn/install.sh | sh
```

Install from source (recommended)
```bash
# Install the CLI binary
cargo install --path cli/

# Or build and run directly
cargo run --release -- onboard
```

Build manually
```bash
cargo build --release
# The binary will be at target/release/mofaclaw
```

Install with cargo (from crates.io, coming soon)

```bash
cargo install mofaclaw
```

Tip
Set your API key in ~/.mofaclaw/config.json.
Get API keys: OpenRouter (LLM) · Brave Search (optional, for web search)
You can also change the model to minimax/minimax-m2 for lower cost.
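For example, switching the default model is a one-key change in ~/.mofaclaw/config.json (a sketch using the minimax/minimax-m2 model ID from the tip above; the surrounding keys mirror the quick-start config):

```json
{
  "agents": {
    "defaults": {
      "model": "minimax/minimax-m2"
    }
  }
}
```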
1. Initialize
```bash
mofaclaw onboard
```

2. Configure (~/.mofaclaw/config.json)

```json
{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    }
  },
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5"
    }
  },
  "webSearch": {
    "apiKey": "BSA-xxx"
  }
}
```

3. Chat

```bash
mofaclaw agent -m "What is 2+2?"
```

That's it! You have a working AI assistant in 2 minutes.
Run mofaclaw with your own local models using vLLM or any OpenAI-compatible server.
1. Start your vLLM server

```bash
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
```

2. Configure (~/.mofaclaw/config.json)

```json
{
  "providers": {
    "vllm": {
      "apiKey": "dummy",
      "apiBase": "http://localhost:8000/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "meta-llama/Llama-3.1-8B-Instruct"
    }
  }
}
```

3. Chat

```bash
mofaclaw agent -m "Hello from my local LLM!"
```

Tip
The apiKey can be any non-empty string for local servers that don't require authentication.
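The same pattern should work for any OpenAI-compatible server, not just vLLM. As a sketch, here is what the config might look like for Ollama, whose OpenAI-compatible endpoint defaults to http://localhost:11434/v1 (reusing the `vllm` provider entry since it is the one shown accepting a custom apiBase; the `llama3.1:8b` model tag is an assumption – use whatever model you have pulled):

```json
{
  "providers": {
    "vllm": {
      "apiKey": "dummy",
      "apiBase": "http://localhost:11434/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "llama3.1:8b"
    }
  }
}
```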
Talk to your mofaclaw through DingTalk or Feishu – anytime, anywhere.
| Channel | Status |
|---|---|
| DingTalk | tested |
| Feishu | not tested |
DingTalk and Feishu channels require Python 3.11+ with the following packages:
DingTalk:

```bash
pip install dingtalk-stream websockets certifi
```

Feishu:

```bash
pip install lark-oapi websockets cryptography
```

Note: The gateway will automatically check and install these packages on first run. You may still need to install them manually if you lack write permissions to the Python environment, or if you prefer to manage dependencies explicitly.
To check your Python version:

```bash
python3 --version
```

If Python is not installed:
- Download from: https://www.python.org/downloads/
- On macOS: `brew install python3`
- On Ubuntu/Debian: `sudo apt-get install python3 python3-pip`
- On Windows: use the Python installer from python.org
DingTalk Configuration
1. Create a DingTalk App
- Go to DingTalk Open Platform
- Create an app and obtain `client_id` and `client_secret`
- For detailed setup, see the DingTalk developer documentation (钉钉开发文档)
2. Configure (~/.mofaclaw/config.json)
```json
{
  "channels": {
    "dingtalk": {
      "enabled": true,
      "client_id": "YOUR_CLIENT_ID",
      "client_secret": "YOUR_CLIENT_SECRET"
    }
  }
}
```

3. Run

```bash
mofaclaw gateway
```

Config file: ~/.mofaclaw/config.json
Note
Groq provides free voice transcription via Whisper. If configured, Telegram voice messages will be automatically transcribed.
| Provider | Purpose | Get API Key |
|---|---|---|
| `openrouter` | LLM (recommended, access to all models) | openrouter.ai |
| `anthropic` | LLM (Claude direct) | console.anthropic.com |
| `openai` | LLM (GPT direct) | platform.openai.com |
| `groq` | LLM + voice transcription (Whisper) | console.groq.com |
| `gemini` | LLM (Gemini direct) | aistudio.google.com |
Full config example

```json
{
  "agents": {
    "defaults": {
      "workspace": "~/.mofaclaw/workspace",
      "model": "glm-4.7-flash",
      "max_tokens": 16000,
      "temperature": 0.7,
      "max_tool_iterations": 5
    }
  },
  "channels": {
    "dingtalk": {
      "enabled": true,
      "client_id": "YOUR_CLIENT_ID",
      "client_secret": "YOUR_CLIENT_SECRET"
    },
    "feishu": {
      "enabled": false,
      "app_id": "",
      "app_secret": "",
      "encrypt_key": "",
      "verification_token": ""
    }
  },
  "providers": {
    "anthropic": {
      "api_key": ""
    },
    "openai": {
      "api_key": ""
    },
    "openrouter": {
      "api_key": ""
    },
    "zhipu": {
      "api_base": "https://open.bigmodel.cn/api/paas/v4",
      "api_key": ""
    },
    "vllm": {
      "api_key": ""
    },
    "gemini": {
      "api_key": ""
    },
    "groq": {
      "api_key": ""
    }
  },
  "gateway": {
    "host": "0.0.0.0",
    "port": 18790
  },
  "tools": {
    "web": {
      "search": {
        "api_key": "",
        "max_results": 0
      }
    },
    "transcription": {
      "groq_api_key": ""
    }
  }
}
```

| Command | Description |
|---|---|
| `mofaclaw onboard` | Initialize config & workspace |
| `mofaclaw agent -m "..."` | Chat with the agent |
| `mofaclaw agent` | Interactive chat mode |
| `mofaclaw gateway` | Start the gateway |
| `mofaclaw status` | Show status |
Scheduled Tasks (Cron)

```bash
# Add a job
mofaclaw cron add --name "daily" --message "Good morning!" --cron "0 9 * * *"
mofaclaw cron add --name "hourly" --message "Check status" --every 3600

# List jobs
mofaclaw cron list

# Remove a job
mofaclaw cron remove <job_id>
```
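The `--cron` flag takes a standard five-field cron expression (minute, hour, day of month, month, day of week), while `--every` takes an interval in seconds. A few illustrative schedules (the job names and messages here are made up; the flags match those shown above):

```shell
# Field order: minute hour day-of-month month day-of-week
mofaclaw cron add --name "weekday-standup" --message "Standup reminder" --cron "0 9 * * 1-5"   # 09:00 Mon-Fri
mofaclaw cron add --name "quarter-hourly" --message "Poll feeds" --cron "*/15 * * * *"         # every 15 minutes
mofaclaw cron add --name "half-hourly" --message "Check status" --every 1800                   # every 30 minutes
```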
Note

Docker support is coming soon for the Rust version. For now, please install from source.

```bash
# Coming soon: docker build -t mofaclaw .
# Coming soon: docker run -v ~/.mofaclaw:/root/.mofaclaw mofaclaw gateway
```
```
mofaclaw/
├── core/          # 🧠 Core library (agent, tools, providers)
│   ├── agent/     # Agent loop, context, memory, skills
│   ├── tools/     # Built-in tools (filesystem, shell, web, spawn)
│   ├── provider/  # LLM provider clients
│   ├── channels/  # Channel integrations (DingTalk, Feishu)
│   ├── bus/       # Message routing
│   ├── cron/      # Scheduled tasks
│   └── heartbeat/ # Proactive wake-up service
├── cli/           # 🖥️ Command-line interface
├── channels/      # 📱 Channel implementations
└── skills/        # 🎯 Bundled skills (github, weather, tmux...)
```

