LocalAGI is a powerful, self-hostable AI Agent platform that allows you to design AI automations without writing code. It is a complete drop-in replacement for OpenAI's Responses API, with advanced agentic capabilities.
- 🎛 No-Code Agents: Easy-to-configure multiple agents via Web UI.
- 🖥 Web-Based Interface: Simple and intuitive agent management.
- 🤖 Advanced Agent Teaming: Instantly create cooperative agent teams from a single prompt.
- 📡 Connectors Galore: Built-in integrations with Discord, Slack, Telegram, GitHub Issues, and IRC.
- 🛠 Comprehensive REST API: Seamless integration into your workflows. Every agent created will support OpenAI Responses API out of the box.
- 🧠 Planning & Reasoning: Agents intelligently plan, reason, and adapt.
- 🔄 Periodic Tasks: Schedule tasks with cron-like syntax.
- 💾 Memory Management: Control memory usage with options for long-term and summary memory.
- 🖼 Multimodal Support: Ready for vision, text, and more.
- 🔧 Extensible Custom Actions: Easily script dynamic agent behaviors in Go (interpreted, no compilation!).
- 📊 Observability: Monitor agent status and view detailed observable updates in real-time.
```bash
# Clone the repository
git clone https://github.com/bit-gpt/local-agi
cd local-agi

# Build the project
./build.sh

# Run the Go backend (from the project root)
./app

# Access the application
# Open your browser and go to the address where the backend is running (e.g., http://localhost:3000)
# If you are unsure of the port, check main.go or webui/app.go for the port configuration.
```
Now you can access and manage your agents at http://localhost:3000
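Once the backend is up, you can also reach it over the REST API described later in this README. A quick sanity check (if Privy authentication is enabled, attach your token as shown in the Curl Examples section):

```bash
# List agents, assuming the backend listens on port 3000 as above
curl http://localhost:3000/api/agents
```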
- ✓ Developer-Friendly: Rich APIs and intuitive interfaces.
- ✓ Effortless Setup: Simple setup and pre-built binaries.
- ✓ Feature-Rich: From planning and multimodal capabilities to Slack connectors and MCP support, LocalAGI has it all.
Explore the detailed documentation below, covering installation options, connectors, custom actions, MCP support, the REST API, and environment configuration.
Download ready-to-run binaries from the Releases page.
Requirements:
- Go 1.20+
- Git
- Bun 1.2+
```bash
# Clone the repository
git clone https://github.com/bit-gpt/local-agi
cd local-agi

# Build the project
./build.sh

# Run the Go backend (from the project root)
./app
```

LocalAGI provides a powerful way to extend its functionality with custom actions:
LocalAGI supports both smithery.ai and glama.ai MCP servers, allowing you to extend functionality with external tools and services.
The Model Context Protocol (MCP) is a standard for connecting AI applications to external data sources and tools. LocalAGI can connect to any MCP-compliant server to access additional capabilities.
- Via Web UI: In the MCP Settings section of agent creation, add MCP servers
- Security: Always validate inputs and use proper authentication for remote MCP servers
- Error Handling: Implement robust error handling in your MCP servers
- Documentation: Provide clear descriptions for all tools exposed by your MCP server
- Testing: Test your MCP servers independently before integrating with LocalAGI
- Resource Management: Ensure your MCP servers properly clean up resources
The development workflow is similar to the source build, but with additional steps for hot reloading of the frontend:
```bash
# Clone the repository
git clone https://github.com/bit-gpt/local-agi.git
cd local-agi

# Install dependencies and start the frontend development server
cd webui/react-ui && bun i && bun run dev
```

Then, in a separate terminal:

```bash
# Start the Go backend development server
cd ../.. && go run main.go
```

Note: see webui/react-ui/.vite.config.js for environment variables that can be used to configure the backend URL.
Link your agents to the services you already use. Configuration examples below.
GitHub Issues
```json
{
  "token": "YOUR_PAT_TOKEN",
  "repository": "repo-to-monitor",
  "owner": "repo-owner",
  "botUserName": "bot-username"
}
```

Discord
After creating your Discord bot:
```json
{
  "token": "Bot YOUR_DISCORD_TOKEN",
  "defaultChannel": "OPTIONAL_CHANNEL_ID"
}
```

Don't forget to enable "Message Content Intent" in the Bot tab of your application settings!
Slack
Use the included slack.yaml manifest to create your app, then configure:
```json
{
  "botToken": "xoxb-your-bot-token",
  "appToken": "xapp-your-app-token"
}
```

- Create the bot OAuth token under "OAuth & Permissions" -> "OAuth Tokens for Your Workspace"
- Create an app-level token under "Basic Information" -> "App-Level Tokens" (scopes: connections:write, authorizations:read)
Telegram
Get a token from @botfather, then:
```json
{
  "token": "your-bot-father-token",
  "group_mode": "true",
  "mention_only": "true",
  "admins": "username1,username2"
}
```

Configuration options:

- token: Your bot token from BotFather
- group_mode: Enable/disable group chat functionality
- mention_only: When enabled, the bot only responds when mentioned in groups
- admins: Comma-separated list of Telegram usernames allowed to use the bot in private chats
- channel_id: Optional channel ID for the bot to send messages to
Important: For group functionality to work properly:
- Go to @BotFather
- Select your bot
- Go to "Bot Settings" > "Group Privacy"
- Select "Turn off" to allow the bot to read all messages in groups
- Restart your bot after changing this setting
IRC
Connect to IRC networks:
```json
{
  "server": "irc.example.com",
  "port": "6667",
  "nickname": "LocalAGIBot",
  "channel": "#yourchannel",
  "alwaysReply": "false"
}
```

Email

```json
{
  "smtpServer": "smtp.gmail.com:587",
  "imapServer": "imap.gmail.com:993",
  "smtpInsecure": "false",
  "imapInsecure": "false",
  "username": "user@gmail.com",
  "email": "user@gmail.com",
  "password": "correct-horse-battery-staple",
  "name": "LocalAGI Agent"
}
```

Agent Management
| Endpoint | Method | Description |
|---|---|---|
| `/api/agents` | GET | List all available agents |
| `/api/agent/:id` | GET | Get agent details |
| `/api/agent/:id/status` | GET | View agent status history |
| `/api/agent/create` | POST | Create a new agent |
| `/api/agent/:id` | DELETE | Remove an agent |
| `/api/agent/:id/pause` | PUT | Pause agent activities |
| `/api/agent/:id/start` | PUT | Resume a paused agent |
| `/api/agent/:id/config` | GET | Get agent configuration |
| `/api/agent/:id/config` | PUT | Update agent configuration |
| `/api/agent/config/metadata` | GET | Get agent configuration metadata |
| `/api/meta/agent/config` | GET | Get agent configuration metadata |
| `/settings/export/:id` | GET | Export agent config |
| `/settings/import` | POST | Import agent config |
Actions and Groups
| Endpoint | Method | Description |
|---|---|---|
| `/api/actions` | GET | List available actions |
| `/api/action/:name/run` | POST | Execute an action |
| `/api/action/:name/definition` | POST | Get action definition |
| `/api/agent/group/generateProfiles` | POST | Generate group profiles |
| `/api/agent/group/create` | POST | Create a new agent group |
Chat and Communication
| Endpoint | Method | Description |
|---|---|---|
| `/api/chat/:id` | POST | Send message & get response |
| `/api/chat/:id` | GET | Get chat history |
| `/api/chat/:id` | DELETE | Clear chat history |
| `/api/sse/:id` | GET | Real-time agent event stream |
| `/api/agent/:id/observables` | GET | Get agent observables |
Usage and Analytics
| Endpoint | Method | Description |
|---|---|---|
| `/api/usage` | GET | Get usage statistics |
Curl Examples
Note: When using the API with curl, you need to include your Privy authentication token. You can either:

- Include it as a cookie: `-b "privy-token=YOUR_TOKEN_HERE"`
- Or set it as a header: `-H "Cookie: privy-token=YOUR_TOKEN_HERE"`

Replace `YOUR_TOKEN_HERE` with your actual Privy JWT token obtained from the web interface.
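For example, a request to list agents with the token passed as a cookie header:

```bash
curl -X GET "http://localhost:3000/api/agents" \
  -H "Cookie: privy-token=YOUR_TOKEN_HERE"
```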
```bash
# List all agents
curl -X GET "http://localhost:3000/api/agents"

# Get agent details
curl -X GET "http://localhost:3000/api/agent/agent-id"

# View agent status history
curl -X GET "http://localhost:3000/api/agent/agent-id/status"

# Create a new agent
curl -X POST "http://localhost:3000/api/agent/create" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-agent",
    "model": "gpt-4",
    "system_prompt": "You are an AI assistant.",
    "enable_kb": true,
    "enable_reasoning": true
  }'

# Remove an agent
curl -X DELETE "http://localhost:3000/api/agent/agent-id"

# Pause agent activities
curl -X PUT "http://localhost:3000/api/agent/agent-id/pause"

# Resume a paused agent
curl -X PUT "http://localhost:3000/api/agent/agent-id/start"

# Get agent configuration
curl -X GET "http://localhost:3000/api/agent/agent-id/config"

# Update agent configuration
curl -X PUT "http://localhost:3000/api/agent/agent-id/config" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "system_prompt": "You are an AI assistant."
  }'

# Export agent config
curl -X GET "http://localhost:3000/settings/export/agent-id" --output my-agent.json

# Import agent config
curl -X POST "http://localhost:3000/settings/import" \
  -F "file=@/path/to/my-agent.json"

# Send a message to an agent
curl -X POST "http://localhost:3000/api/chat/agent-id" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, how are you today?"}'

# Get chat history
curl -X GET "http://localhost:3000/api/agent/agent-id/chat"

# Clear chat history
curl -X DELETE "http://localhost:3000/api/agent/agent-id/chat"

# Stream real-time agent events (SSE)
curl -N -X GET "http://localhost:3000/api/sse/agent-id"
```

Note: For proper SSE handling, you should use a client that supports SSE natively.
curl -X GET "http://localhost:3000/api/usage"curl -X POST "http://localhost:3000/api/action/action-name/run" \
-H "Content-Type: application/json" \
-d '{
"parameters": {
"param1": "value1",
"param2": "value2"
}
}'curl -X POST "http://localhost:3000/api/action/action-name/definition"curl -X POST "http://localhost:3000/api/agent/group/generateProfiles" \
-H "Content-Type: application/json" \
-d '{
"description": "A team of agents to help with project management"
}'curl -X POST "http://localhost:3000/api/agent/group/create" \
-H "Content-Type: application/json" \
-d '{
"name": "project-team",
"agents": [
{
"name": "coordinator",
"role": "Project Coordinator"
},
{
"name": "developer",
"role": "Developer"
}
]
}'Configuration Structure
The agent configuration defines how an agent behaves and what capabilities it has. You can view the available configuration options and their descriptions by using the metadata endpoint:
curl -X GET "http://localhost:3000/api/meta/agent/config"This will return a JSON object containing all available configuration fields, their types, and descriptions.
Here's an example of the agent configuration structure:
```json
{
  "name": "my-agent",
  "model": "gpt-4",
  "multimodal_model": "gpt-4-vision",
  "hud": true,
  "standalone_job": false,
  "random_identity": false,
  "initiate_conversations": true,
  "enable_planning": true,
  "identity_guidance": "You are a helpful assistant.",
  "periodic_runs": "0 * * * *",
  "permanent_goal": "Help users with their questions.",
  "enable_kb": true,
  "enable_reasoning": true,
  "kb_results": 5,
  "can_stop_itself": false,
  "system_prompt": "You are an AI assistant.",
  "long_term_memory": true,
  "summary_long_term_memory": false
}
```

Environment Configuration
LocalAGI supports configuration via environment variables. Note that these variables need to be set on the localagi container in the docker-compose file to take effect.
| Variable | What It Does |
|---|---|
| `DB_HOST` | MySQL Database host address |
| `DB_NAME` | MySQL Database name |
| `DB_PASS` | MySQL Database password |
| `DB_USER` | MySQL Database user |
| `LOCALAGI_LLM_API_URL` | OpenAI-compatible API server URL (e.g., for OpenRouter) |
| `LOCALAGI_LLM_API_KEY` | API authentication key for LLM API |
| `LOCALAGI_MODEL` | Model name to use (e.g., deepseek/deepseek-chat-v3-0324:free) |
| `LOCALAGI_TIMEOUT` | Request timeout settings (e.g., 5m) |
| `VITE_PRIVY_APP_ID` | Privy App ID for frontend (Vite) |
| `PRIVY_APP_ID` | Privy App ID for backend |
| `PRIVY_APP_SECRET` | Privy App Secret for backend authentication |
| `PRIVY_PUBLIC_KEY_PEM` | Privy public key PEM (if required) |
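For example, when running with docker-compose these variables would be set on the localagi service. The snippet below is an illustrative sketch only; the service name, omitted image/build settings, and all values are placeholders, not the project's actual compose file:

```yaml
services:
  localagi:
    # image/build and other settings omitted; values below are placeholders
    environment:
      - LOCALAGI_LLM_API_URL=https://openrouter.ai/api/v1   # example OpenAI-compatible endpoint
      - LOCALAGI_LLM_API_KEY=your-api-key
      - LOCALAGI_MODEL=deepseek/deepseek-chat-v3-0324:free
      - LOCALAGI_TIMEOUT=5m
      - DB_HOST=mysql
      - DB_NAME=localagi
      - DB_USER=localagi
      - DB_PASS=changeme
      - PRIVY_APP_ID=your-privy-app-id
      - PRIVY_APP_SECRET=your-privy-app-secret
```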
MIT License — See the LICENSE file for details.
LOCAL PROCESSING. GLOBAL THINKING.
Made with ❤️ by BitGPT








