The High-Performance Go AI Swarm
Multi-agent orchestration with zero-latency routing, extensible skills, and multi-channel support.
Clawkido is a multi-agent AI swarm written in Go. You define agents (each with their own LLM provider, personality, and tools), organize them into teams, and talk to them through Telegram or Discord. Agents collaborate autonomously: a manager delegates to a coder, the coder writes code, the reviewer checks it, all from a single message.
Why Go instead of Python/Node?
- Go channels = zero-latency agent-to-agent routing (no file-system polling)
- Single binary, ~50MB RAM β runs on a Raspberry Pi
- Goroutines = each agent runs in its own lightweight thread
- `context.Context` flows everywhere = clean graceful shutdown on Ctrl+C
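The bullets above describe the actor pattern that the swarm is built on. Here is a minimal sketch of it, assuming illustrative names (`agent`, `runDemo` are not Clawkido's actual API): each agent is a goroutine draining a buffered channel inbox, and a cancelled `context.Context` shuts it down cleanly.

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// agent runs as its own goroutine, draining its inbox until the context
// is cancelled -- the actor pattern the swarm uses (names illustrative).
func agent(ctx context.Context, name string, inbox <-chan string, replies chan<- string, wg *sync.WaitGroup) {
	defer wg.Done()
	for {
		select {
		case <-ctx.Done(): // Ctrl+C propagates here via context.Context
			replies <- name + " shutting down"
			return
		case m := <-inbox:
			replies <- name + " got: " + m
		}
	}
}

// runDemo spawns one agent, hands it a message over a channel (in-memory,
// no file-system polling), then cancels the context to shut it down.
func runDemo(msg string) []string {
	ctx, cancel := context.WithCancel(context.Background())
	inbox := make(chan string, 16)
	replies := make(chan string, 2)
	var wg sync.WaitGroup

	wg.Add(1)
	go agent(ctx, "coder", inbox, replies, &wg)

	inbox <- msg
	first := <-replies // wait for the reply before shutting down
	cancel()
	wg.Wait()
	return []string{first, <-replies}
}

func main() {
	for _, line := range runDemo("write a fibonacci function") {
		fmt.Println(line)
	}
}
```

Because the channel send is an in-process handoff, there is no serialization or polling between agents, which is where the "zero-latency routing" claim comes from.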
git clone https://github.com/shamspias/clawkido.git
cd clawkido
make deps
make build
cp .env.example .env

Edit .env with your real keys:
GROQ_API_KEY=gsk_your_real_key_here
TELEGRAM_BOT_TOKEN=123456789:AAF-your_real_token_here
# Optional β only needed if you configure agents with these providers:
OPENAI_API_KEY=sk-proj-...
DISCORD_BOT_TOKEN=...

How to get a Telegram bot token: Message @BotFather on Telegram → /newbot → follow prompts → copy the token.
How to get a Groq API key: Sign up at console.groq.com → API Keys → Create.
Edit config.json. The default ships with 3 agents (manager, coder, reviewer). No changes needed to start.
make run

You should see:
CLAWKIDO — AI AGENT SWARM ENGINE
──────────────────────────────────────
10:04:35 │ INFO │ Swarm    │ Agent 'manager' registered
10:04:35 │ INFO │ Swarm    │ Agent 'coder' registered
10:04:35 │ INFO │ Swarm    │ Agent 'reviewer' registered
10:04:35 │ INFO │ Swarm    │ Hive active: 3 agents, 1 teams
10:04:35 │ INFO │ Telegram │ Connected as @your_bot
Now message your bot on Telegram. It works.
{
"ai": {
"ollama_url": "http://localhost:11434"
},
"telegram": {
"allowed_users": []
},
"discord": {},
"swarm": {
"max_handoff_depth": 5,
"inbox_buffer_size": 256,
"router_buffer": 256
},
"agents": [
...
],
"teams": [
...
]
}

The allowed_users field controls who can talk to your bot:

| Value | Behavior |
|---|---|
| `[]` (empty) | Allow everyone: any Telegram user can message the bot |
| `[910739932]` | Only user ID 910739932 can message the bot |
| `[910739932, 123456789]` | Only these two users can message the bot |
How to find your Telegram user ID: Message @userinfobot on Telegram and it replies with your numeric ID.
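The allow-list semantics in the table above can be sketched as a single check (a hypothetical helper, not the project's actual code): an empty list means open access, otherwise the sender's ID must appear in the list.

```go
package main

import "fmt"

// allowed mirrors the allowed_users semantics: an empty list admits
// everyone, a non-empty list admits only the listed Telegram user IDs.
func allowed(allowList []int64, userID int64) bool {
	if len(allowList) == 0 {
		return true // [] means open to everyone
	}
	for _, id := range allowList {
		if id == userID {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(allowed(nil, 42))                // true: empty list allows all
	fmt.Println(allowed([]int64{910739932}, 42)) // false: 42 is not on the list
}
```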
Each agent entry in the agents array:
{
"name": "coder",
"provider": "groq",
"model_name": "openai/gpt-oss-120b",
"temperature": 0.2,
"max_history": 80,
"skills": [
"shell",
"time"
],
"fallback": "ollama",
"system_prompt": "You are a Senior Software Engineer..."
}

| Field | Type | Description |
|---|---|---|
| `name` | string | Unique agent name (used for @mentions) |
| `provider` | string | `"groq"`, `"openai"`, or `"ollama"` |
| `model_name` | string | Model ID for the provider |
| `temperature` | float | 0.0 (deterministic) to 2.0 (creative). Default: 0.7 |
| `max_history` | int | Max conversation turns to keep in memory. Default: 50 |
| `skills` | string[] | Skill names this agent can invoke |
| `fallback` | string | Fallback provider if the primary fails (optional) |
| `system_prompt` | string | The agent's personality and instructions |
{
"name": "dev",
"members": [
"manager",
"coder",
"reviewer"
],
"leader": "manager"
}

Message @dev to broadcast to all members simultaneously.
| Field | Default | Description |
|---|---|---|
| `max_handoff_depth` | 5 | Prevents infinite agent-to-agent loops |
| `inbox_buffer_size` | 256 | Buffer size for the external message queue |
| `router_buffer` | 256 | Buffer size for the internal handoff queue |
Talk to a specific agent by prefixing with @name:
You: @coder write a fibonacci function in Go
Coder: 🤖 Here's an optimized implementation...
Messages without @ go to the first agent in the config (typically the manager):
You: Build me a REST API for a todo app
Manager: 🤖 I'll coordinate this. [@coder: Build a REST API in Go with...]
This is the core power. The manager can autonomously delegate:
You: @manager I need a Python web scraper
Manager: 🤖 I'll have the coder handle this.
         [@coder: Write a Python web scraper using BeautifulSoup]
→ (Swarm routes internally; you see both responses)
Coder: 🤖 Here's the scraper:
```python
import requests
from bs4 import BeautifulSoup
...
```
The Swarm routes [@coder: ...] tags internally. You receive all responses from the chain in your chat.
Agents can chain through multiple handoffs:
You: @manager review and optimize my sorting algorithm
Manager: [@coder: Optimize this sorting algorithm]
Coder: Here's the optimized version. [@reviewer: Check this for edge cases]
Reviewer: 🤖 Found 2 issues: ...
The max_handoff_depth (default: 5) prevents infinite loops.
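The depth guard can be sketched as a bounded recursion over handoff tags. In this sketch (illustrative, not Clawkido's actual router), `reply` stands in for an LLM call, and two agents that always tag each other would loop forever without the limit:

```go
package main

import (
	"fmt"
	"regexp"
)

// handoffTag matches the [@agent: task] delegation syntax from the docs.
var handoffTag = regexp.MustCompile(`\[@(\w+): ([^\]]+)\]`)

// route follows handoff tags from reply to reply, stopping at maxDepth so
// agent A tagging agent B tagging agent A cannot recurse forever.
func route(reply func(agent, task string) string, agent, task string, depth, maxDepth int, out *[]string) {
	if depth > maxDepth {
		*out = append(*out, "handoff depth exceeded, dropping")
		return
	}
	resp := reply(agent, task)
	*out = append(*out, resp)
	if m := handoffTag.FindStringSubmatch(resp); m != nil {
		route(reply, m[1], m[2], depth+1, maxDepth, out)
	}
}

func main() {
	// Two agents that always delegate to each other: an infinite loop
	// without the depth limit.
	reply := func(agent, task string) string {
		if agent == "a" {
			return "[@b: ping]"
		}
		return "[@a: pong]"
	}
	var out []string
	route(reply, "a", "start", 0, 2, &out)
	fmt.Println(len(out)) // prints 4: three replies, then the depth-guard drop
}
```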
You: @dev what's the status of the auth module?
→ (All 3 agents receive the message in parallel)
Manager: 🤖 From a project perspective...
Coder: 🤖 Implementation is at 80%...
Reviewer: 🤖 I've flagged 3 security concerns...
Skills are tools agents can invoke inline during responses. The LLM includes [!skill_name: args] in its output, and
Clawkido executes it before delivering the response.
| Skill | Usage in LLM output | Description |
|---|---|---|
| `shell` | `[!shell: ls -la]` | Run a shell command (30s timeout, output truncated at 4KB) |
| `time` | `[!time]` | Current UTC timestamp |
| `memory_reset` | `[!memory_reset]` | Clear the agent's conversation history |
- You configure which skills each agent can use in config.json: `{ "name": "coder", "skills": ["shell", "time"], ... }`
- The agent's system prompt is automatically augmented with skill descriptions.
- When the LLM outputs `[!shell: ls -la]`, Clawkido:
  - Parses the tag
  - Executes the skill
  - Replaces the tag with the output in the response
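The parse/execute/replace steps above can be sketched in a few lines. This is an illustrative version, assuming the `[!skill: args]` grammar from the table; the executor map stands in for Clawkido's skill registry:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// skillTag matches [!name] and [!name: args] as shown in the skill table.
var skillTag = regexp.MustCompile(`\[!(\w+)(?::\s*([^\]]*))?\]`)

// expandSkills replaces every skill tag in an LLM response with the
// skill's output before the response is delivered to the user.
func expandSkills(resp string, skills map[string]func(args string) string) string {
	return skillTag.ReplaceAllStringFunc(resp, func(tag string) string {
		m := skillTag.FindStringSubmatch(tag)
		name, args := m[1], strings.TrimSpace(m[2])
		if fn, ok := skills[name]; ok {
			return fn(args) // execute the skill, splice in its output
		}
		return tag // unknown skill: leave the tag untouched
	})
}

func main() {
	skills := map[string]func(string) string{
		"time": func(string) string { return "2025-01-01T00:00:00Z" },
	}
	fmt.Println(expandSkills("The time is [!time].", skills))
}
```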
Create a new file (e.g., internal/skills/weather.go):

```go
package skills

import (
	"context"
	"fmt"
	"io"
	"net/http"
)

type WeatherSkill struct{}

func (w WeatherSkill) Name() string {
	return "weather"
}

func (w WeatherSkill) Description() string {
	return "Get weather for a city. Usage: [!weather: London]"
}

func (w WeatherSkill) Execute(ctx context.Context, args string) (string, error) {
	url := fmt.Sprintf("https://wttr.in/%s?format=3", args)
	req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
	if err != nil {
		return "", err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}
```

Register it in cmd/clawkido/main.go after skills.RegisterDefaults(skillReg):

```go
skillReg.Register(skills.WeatherSkill{})
```

Add it to agent configs:
{
"name": "manager",
"skills": [
"time",
"weather"
],
...
}

Rebuild and run. The manager can now say [!weather: Tokyo] and it works.
| Skill | What it does |
|---|---|
| `http_get` | Fetch a URL and return the body |
| `calculator` | Evaluate a math expression |
| `file_read` | Read a local file |
| `file_write` | Write content to a file |
| `git_status` | Run git status in a project directory |
| `docker_ps` | List running containers |
| `db_query` | Execute a read-only SQL query |
| `web_search` | Search the web via an API (SerpAPI, Brave, etc.) |
┌─────────────┐      ┌─────────────┐
│  Telegram   │      │   Discord   │
└──────┬──────┘      └──────┬──────┘
       │                    │
       └─────────┬──────────┘
                 ▼
         ┌───────────────┐
         │     Inbox     │   ← Buffered channel (256)
         └───────┬───────┘
                 ▼
         ┌───────────────┐
         │   The Hive    │   ← Router goroutine
         │    (Swarm)    │
         └──┬─────┬────┬─┘
            │     │    │
            ▼     ▼    ▼
         ┌─────┐┌─────┐┌─────┐
         │ Mgr ││Coder││ QA  │   ← Agent goroutines (actors)
         └──┬──┘└──┬──┘└──┬──┘      Each has: inbox, history, skills
            │      │      │
            └──────┼──────┘
                   ▼
         ┌───────────────┐
         │  Router Bus   │   ← Handoff channel (256)
         └───────┬───────┘
                 ▼
         ┌───────────────┐
         │   Event Bus   │   ← Pub/sub for extensions
         └───────┬───────┘
                 ▼
         ┌───────────────┐
         │ TUI + Health  │   ← Dashboard + metrics
         └───────────────┘
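The router box in the middle of the diagram is essentially one goroutine multiplexing two channels onto per-agent inboxes. A sketch under that assumption (type and channel names here are illustrative, not the project's actual types):

```go
package main

import "fmt"

type msg struct{ to, body string }

// router is the Hive loop in miniature: one goroutine selects over the
// external inbox and the internal handoff bus, delivering each message
// to the named agent's own inbox channel.
func router(inbox, bus <-chan msg, agents map[string]chan msg, quit <-chan struct{}) {
	for {
		select {
		case m := <-inbox:
			deliver(agents, m)
		case m := <-bus:
			deliver(agents, m)
		case <-quit:
			return
		}
	}
}

func deliver(agents map[string]chan msg, m msg) {
	if ch, ok := agents[m.to]; ok {
		ch <- m
	}
}

func main() {
	inbox := make(chan msg, 256) // same buffer sizes as the config defaults
	bus := make(chan msg, 256)
	quit := make(chan struct{})
	agents := map[string]chan msg{"coder": make(chan msg, 16)}

	go router(inbox, bus, agents, quit)
	inbox <- msg{to: "coder", body: "hello"}
	got := <-agents["coder"]
	fmt.Println(got.body)
	close(quit)
}
```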
| Decision | Why |
|---|---|
| Buffered reply channel (16) | A handoff chain (manager β coder β reviewer) produces multiple responses. Buffer holds them all without blocking. |
| Rolling idle timeout (90s) | After receiving a response, wait 90s for more before closing. Lets slow LLMs in the chain finish. |
| Depth limiting | max_handoff_depth=5 prevents agent A tagging agent B tagging agent A in a loop. |
| History trimming | max_history=50 keeps the last 50 turns. System prompt (index 0) is never trimmed. |
| Non-blocking sends | Every channel send uses select/default. If a buffer is full, the message is dropped with a warning, never deadlocked. |
| Atomic metrics | Message counts and latency use atomic.Int64 β no mutex contention on the hot path. |
| Provider fallback | If an agent's primary provider (Groq) fails after retries, it automatically tries the fallback (Ollama). |
| Markdown fallback | If Telegram rejects a message due to bad Markdown, it retries without formatting instead of failing silently. |
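Two of the decisions above, non-blocking sends and atomic metrics, fit in one small sketch (illustrative, not the project's exact code): a `select` with a `default` branch drops instead of blocking, and the drop count is an `atomic.Int64` so the hot path never takes a mutex.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// metrics counts drops with an atomic, lock-free on the hot path.
type metrics struct{ dropped atomic.Int64 }

// trySend mirrors the "non-blocking sends" decision: if the buffer is
// full, the message is dropped and counted rather than deadlocking.
func (m *metrics) trySend(ch chan string, s string) bool {
	select {
	case ch <- s:
		return true
	default:
		m.dropped.Add(1)
		return false
	}
}

func main() {
	var m metrics
	ch := make(chan string, 1)
	fmt.Println(m.trySend(ch, "a")) // true: fits in the buffer
	fmt.Println(m.trySend(ch, "b")) // false: buffer full, dropped, no block
	fmt.Println(m.dropped.Load())   // 1
}
```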
Configure a coder agent with shell skill. Ask it to write code, and it can run tests:
@coder write a Go function to reverse a string, then test it with [!shell: go test ./...]
@manager review my auth implementation
→ Manager delegates to coder for analysis
→ Coder reviews and delegates to reviewer for security check
→ You get all three perspectives in your chat
Add a devops agent with the shell skill:
{
"name": "devops",
"provider": "groq",
"model_name": "openai/gpt-oss-120b",
"skills": [
"shell"
],
"system_prompt": "You are a DevOps engineer. Use [!shell: command] to check system status."
}

@devops check if nginx is running and show disk usage
{
"agents": [
{
"name": "researcher",
"system_prompt": "Find information and cite sources..."
},
{
"name": "writer",
"system_prompt": "Write clear, engaging content..."
},
{
"name": "editor",
"system_prompt": "Edit for grammar, clarity, and accuracy..."
}
],
"teams": [
{
"name": "content",
"members": [
"researcher",
"writer",
"editor"
]
}
]
}

@content write a blog post about Go's concurrency model
Your Telegram user ID is not in allowed_users. Fix:
Option A: Allow everyone (recommended for personal use):
"telegram": {"allowed_users": []}Option B: Add your specific ID:
"telegram": {"allowed_users": [910739946]}Your DISCORD_BOT_TOKEN in .env is invalid or empty. If you don't use Discord, just leave it empty β the error is
non-fatal and Telegram still works.
This was a bug in v1 β the reply goroutine only read one message and then exited. v2 uses a draining loop that collects all responses from handoff chains. Make sure you're running the latest code.
Your GROQ_API_KEY in .env is missing or empty. The brain only registers providers that have valid keys.
Check max_history in config.json. Default is 50 turns. For long conversations, increase it (cost goes up).
make fmt # gofmt
make vet # go vet
make lint # golangci-lint (if installed)
make test # Tests with race detector
make release # Cross-compile for Linux/macOS/Windows

MIT license; see LICENSE.