Website | Quick Start | GitHub | Hackathon | Community
MoFA (Modular Framework for Agents) is not just another entry in the crowded agent framework landscape. It is a production-grade framework built for "write once, run everywhere" across languages, with a focus on raw performance, broad extensibility, and runtime programmability. Through its microkernel architecture and dual-layer plugin system (compile-time + runtime), MoFA strikes the elusive balance between performance and dynamic flexibility.
What Sets MoFA Apart:
✅ Rust Core + UniFFI: Blazing performance with native multi-language interoperability
✅ Dual-Layer Plugins: Zero-cost compile-time extensions meet hot-swappable runtime scripts
✅ Microkernel Architecture: Clean separation of concerns, effortless to extend
✅ Cloud-Native by Design: First-class support for distributed and edge deployments
Extreme Performance (Rust Core)
- Zero-cost abstractions in Rust
- Memory safety without garbage collection
- Significantly faster than Python-based frameworks on compute-bound paths
Native Multi-Language Interop (UniFFI)
- Auto-generated bindings for Python, Java, Go, Kotlin, and Swift via UniFFI
- Call Rust core logic natively from any supported language
- Near-zero overhead compared to traditional FFI
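As a sketch of the flow (generic UniFFI usage, not MoFA's actual exported surface; `greet` is a made-up function):

```rust
// Generic UniFFI proc-macro sketch: a plain Rust function becomes callable
// from Python, Kotlin, Swift, etc. Requires the `uniffi` crate; `greet` is
// illustrative only and not part of MoFA's real API.
uniffi::setup_scaffolding!();

#[uniffi::export]
pub fn greet(name: String) -> String {
    format!("Hello from the Rust core, {name}!")
}
```

Running uniffi-bindgen against the built library then emits Python, Kotlin, and Swift wrappers that call `greet` through the generated FFI layer.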
Runtime Programmability (Rhai)
- Embedded Rhai scripting engine
- Hot-reload business logic without recompilation
- Runtime configuration and rule adjustments
- User-defined extensions on the fly
Dual-Layer Plugin System
- Compile-time plugins: Extreme performance, native integration
- Runtime plugins: Dynamic loading, instant effect
- Plugin hot loading and version management
Distributed by Design (Dora-rs)
- Built on Dora-rs for distributed dataflow
- Seamless cross-process, cross-machine agent communication
- Edge computing ready
Actor-Model Concurrency (Ractor)
- Isolated agent processes via Ractor
- Message-passing architecture
- Battle-tested for high-concurrency workloads
MoFA adopts a layered microkernel architecture, achieving extreme extensibility through a dual-layer plugin system:
```
┌─────────────────────────────────────────────────────────┐
│                     Business Layer                      │
│        (User-defined Agents, Workflows, Rules)          │
└─────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────┐
│           Runtime Plugin Layer (Rhai Scripts)           │
│  • Dynamic tool registration  • Rule engine  • Scripts  │
│  • Hot-load logic  • Expression evaluation              │
└─────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────┐
│          Compile-time Plugin Layer (Rust/WASM)          │
│  • LLM plugins  • Tool plugins  • Storage  • Protocol   │
│  • High-performance modules  • Native system integration│
└─────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────┐
│                Microkernel (mofa-kernel)                │
│  • Lifecycle management  • Metadata  • Communication    │
│  • Task scheduling  • Memory management                 │
└─────────────────────────────────────────────────────────┘
```
Compile-time Plugins (Rust/WASM)
- Extreme performance, zero runtime overhead
- Type safety, compile-time error checking
- Support complex system calls and native integration
- WASM sandbox provides secure isolation
Runtime Plugins (Rhai Scripts)
- No recompilation needed, instant effect
- Business logic hot updates
- User-defined extensions
- Secure sandbox execution with configurable resource limits
Combined Power
- Use Rust plugins for performance-critical paths (e.g., LLM inference, data processing)
- Use Rhai scripts for business logic (e.g., rule engines, workflow orchestration)
- Seamless interoperability between the two, covering nearly all extension scenarios
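A minimal sketch of this division of labor using the plain `rhai` crate (the order/discount scenario is illustrative, not a MoFA API): the hot path stays in compiled Rust, while the pricing rule lives in a script that could be reloaded from disk without recompiling the host.

```rust
use rhai::{Engine, Scope};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Performance-critical work stays in compiled Rust...
    let order_total: f64 = [19.99_f64, 5.49, 42.00].iter().sum();

    // ...while the business rule is a hot-swappable Rhai script.
    let discount_rule = r#"
        if total > 50.0 { total * 0.9 } else { total }
    "#;

    let engine = Engine::new();
    let mut scope = Scope::new();
    scope.push("total", order_total);

    let final_total: f64 = engine.eval_with_scope(&mut scope, discount_rule)?;
    println!("total: {order_total:.2} -> after rules: {final_total:.2}");
    Ok(())
}
```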
MoFA adopts a layered microkernel architecture with mofa-kernel at its core. All other features (including plugin system, LLM capabilities, multi-agent collaboration, etc.) are built as modular components on top of the microkernel.
- Core Simplicity: The microkernel contains only the most basic functions: agent lifecycle management, metadata system, and dynamic management
- High Extensibility: All advanced features are extended through modular components and plugins, keeping the kernel stable
- Loose Coupling: Components communicate through standardized interfaces, easy to replace and upgrade
- The plugin system is built on the microkernel's `Plugin` interface. All plugins (including LLM plugins, tool plugins, etc.) are integrated through the standard `AgentPlugin` interface
- The microkernel provides a plugin registry and lifecycle management, supporting plugin hot loading and version control
- LLM capabilities are implemented through `LLMPlugin`, which encapsulates LLM providers as plugins compliant with microkernel specifications
- The LLM exists as a plugin component of the microkernel, providing standard LLM access through the `LLMCapability` interface
- All agent collaboration patterns (chain, parallel, debate, etc.) are built on the microkernel's workflow engine and interact with LLMs through the standardized LLM plugin interface
- Secretary mode is likewise implemented on top of the microkernel's A2A communication protocol and task scheduling system
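The actual trait definitions live in mofa-kernel. Purely to illustrate the shape such an interface implies, here is a hypothetical sketch (every signature below is an assumption, not the real API):

```rust
// Illustrative only: the real `AgentPlugin` trait in mofa-kernel may differ.
pub trait AgentPlugin: Send + Sync {
    /// Stable identifier used by the plugin registry.
    fn name(&self) -> &str;
    /// Semantic version, used for version control and hot-loading decisions.
    fn version(&self) -> &str;
    /// Called once when the kernel registers the plugin.
    fn on_register(&mut self) -> Result<(), String>;
}

struct EchoToolPlugin;

impl AgentPlugin for EchoToolPlugin {
    fn name(&self) -> &str { "echo-tool" }
    fn version(&self) -> &str { "0.1.0" }
    fn on_register(&mut self) -> Result<(), String> { Ok(()) }
}
```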
- Compile-time plugins: Extreme performance, native integration
- Runtime plugins: Dynamic loading, instant effect
- Seamless collaboration between both, covering all scenarios
- Priority Scheduling: Task scheduling system based on priority levels
- Communication Bus: Built-in inter-agent communication bus
- Workflow Engine: Visual workflow builder and executor
- LLM Abstraction Layer: Standardized LLM integration interface
- OpenAI Support: Built-in OpenAI API integration
- ReAct Pattern: Agent framework based on reasoning and action
- Multi-Agent Collaboration: LLM-driven agent coordination, supporting multiple collaboration modes (a code sketch follows this feature list):
- Request-Response: One-to-one deterministic tasks with synchronous replies
- Publish-Subscribe: One-to-many broadcast tasks with multiple receivers
- Consensus: Multi-round negotiation and voting for decision-making
- Debate: Agents alternate speaking to iteratively refine results
- Parallel: Simultaneous execution with automatic result aggregation
- Sequential: Pipeline execution where output flows to the next agent
- Custom: User-defined modes interpreted by the LLM
- Secretary Mode: Provides end-to-end closed-loop task management across five core phases: receive ideas → record todos; clarify requirements → convert into project documents; schedule and dispatch → call execution agents; monitor feedback → push key decisions to humans; acceptance report → update todos
Features:
- 🧠 Autonomous task planning and decomposition
- 🔄 Intelligent agent scheduling and orchestration
- 👤 Human intervention at key nodes
- 📊 Full process observability and traceability
- 🔁 Closed-loop feedback and continuous optimization
- Multiple Backends: Supports PostgreSQL, MySQL, and SQLite
- Session Management: Persistent agent session storage
- Memory System: Stateful agent memory management
- Dashboard: Built-in web dashboard with real-time metrics
- Metrics System: Prometheus-compatible metrics system
- Tracing Framework: Distributed tracing system
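As referenced in the collaboration list above, here is a hypothetical sketch of how those modes might be modeled (the enum and dispatch are illustrative only, not MoFA's actual types):

```rust
/// Hypothetical representation of the collaboration modes described above.
enum CollaborationMode {
    RequestResponse { target: String },
    PublishSubscribe { topic: String },
    Consensus { rounds: u32 },
    Debate { turns: u32 },
    Parallel,
    Sequential,
    Custom(String), // free-form description interpreted by the LLM
}

fn describe(mode: &CollaborationMode) -> String {
    match mode {
        CollaborationMode::RequestResponse { target } => format!("ask {target} and wait"),
        CollaborationMode::PublishSubscribe { topic } => format!("broadcast on '{topic}'"),
        CollaborationMode::Consensus { rounds } => format!("vote over {rounds} rounds"),
        CollaborationMode::Debate { turns } => format!("alternate for {turns} turns"),
        CollaborationMode::Parallel => "run all agents at once, then aggregate".into(),
        CollaborationMode::Sequential => "pipe each output into the next agent".into(),
        CollaborationMode::Custom(desc) => format!("LLM-interpreted: {desc}"),
    }
}
```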
MoFA integrates the Rhai embedded scripting language, providing runtime programmability without recompilation.
Core Scripting Engine
- Safe Sandbox Execution: Configurable operation limits, call-stack depth, loop control
- Script Compilation Cache: Pre-compile scripts to speed up repeated execution
- Rich Built-in Functions: String manipulation, math functions, JSON processing, time utilities
- Bidirectional JSON Conversion: Seamless conversion between JSON and Rhai Dynamic types
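Since these capabilities come from the embedded Rhai engine, they can be demonstrated with the plain `rhai` crate, independent of MoFA's wrappers. A minimal sketch of the sandbox limits plus the compile-once cache:

```rust
use rhai::{Engine, Scope, AST};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut engine = Engine::new();

    // Sandbox limits: abort runaway scripts deterministically.
    engine.set_max_operations(10_000);  // total VM operations
    engine.set_max_call_levels(16);     // call stack depth
    engine.set_max_expr_depths(32, 32); // expression nesting

    // Compile once, evaluate many times: the AST acts as the script cache.
    let ast: AST = engine.compile("x * x + 1")?;

    for x in [2_i64, 3, 4] {
        let mut scope = Scope::new();
        scope.push("x", x);
        let y: i64 = engine.eval_ast_with_scope(&mut scope, &ast)?;
        println!("f({x}) = {y}");
    }
    Ok(())
}
```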
Script-Powered Workflow Nodes
- Script Task Nodes: Execute business logic via scripts
- Script Condition Nodes: Dynamic branch decisions
- Script Transform Nodes: Data format transformation
- YAML/JSON Workflow Loading: Define workflows through configuration files
Runtime Tool Definition
- Script-based Tool Definition: Register tools at runtime
- Parameter Validation: Type checking, range validation, enum constraints
- Auto JSON Schema Generation: Compatible with LLM Function Calling
- Hot Loading: Dynamically load tools from directories
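MoFA's own registration API is not reproduced here; the sketch below illustrates the underlying idea with plain `rhai` and `serde_json`: a tool whose body is a script, described by a Function Calling style JSON schema (the tool name and schema are illustrative):

```rust
use rhai::Engine;
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Tool body defined as a script, loadable from a file at runtime.
    let tool_script = r#"
        fn celsius_to_fahrenheit(c) { c * 9.0 / 5.0 + 32.0 }
    "#;

    let engine = Engine::new();
    let ast = engine.compile(tool_script)?;

    // A Function Calling style schema the LLM can be handed with the tool.
    let schema = json!({
        "name": "celsius_to_fahrenheit",
        "description": "Convert a temperature from Celsius to Fahrenheit",
        "parameters": {
            "type": "object",
            "properties": { "c": { "type": "number" } },
            "required": ["c"]
        }
    });
    println!("{}", serde_json::to_string_pretty(&schema)?);

    // Invoke the scripted tool by name, as a dispatcher would after an LLM call.
    let f: f64 = engine.call_fn(&mut rhai::Scope::new(), &ast,
                                "celsius_to_fahrenheit", (25.0_f64,))?;
    println!("25C = {f}F");
    Ok(())
}
```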
Embedded Rule Engine
- Priority Rules: Critical > High > Normal > Low
- Multiple Match Modes: First match, all match, ordered match
- Composite Actions: Set variables, trigger events, goto rules
- Rule Group Management: Support default fallback actions
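A toy illustration of priority-ordered, first-match rule evaluation with conditions written as Rhai expressions (the `Rule` struct is hypothetical, not MoFA's actual rule type):

```rust
use rhai::{Engine, Scope};

// Illustrative rule shape; MoFA's real rule engine types are not shown here.
struct Rule {
    priority: u8,            // 0 = Critical, 1 = High, 2 = Normal, 3 = Low
    condition: &'static str, // Rhai boolean expression
    action: &'static str,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut rules = vec![
        Rule { priority: 2, condition: "amount > 100.0",   action: "apply_discount" },
        Rule { priority: 0, condition: "amount > 10000.0", action: "flag_for_review" },
    ];
    // Priority order: Critical > High > Normal > Low.
    rules.sort_by_key(|r| r.priority);

    let engine = Engine::new();
    let mut scope = Scope::new();
    scope.push("amount", 250.0_f64);

    // First-match mode: stop at the first rule whose condition holds.
    for rule in &rules {
        if engine.eval_expression_with_scope::<bool>(&mut scope, rule.condition)? {
            println!("fired: {}", rule.action);
            break;
        }
    }
    Ok(())
}
```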
| Scenario | Description |
|---|---|
| Dynamic Business Rules | Discount strategies, content moderation rules, no redeployment needed |
| Configurable Workflows | User-defined data processing pipelines |
| LLM Tool Extensions | Register new tools at runtime for LLM calls |
| A/B Testing | Control experiment logic through scripts |
| Expression Evaluation | Dynamic condition checking, formula calculation |
- Dora-rs runtime support for distributed dataflow
- Complete distributed tracing implementation
- Python binding generation
- More LLM provider integrations
- Visual workflow designer UI
- Cloud-native deployment support
- Advanced agent coordination algorithms
- Agent platform
- Cross-process/cross-machine distributed agent collaboration
- Multi-agent collaboration standard protocol
- Cross-platform mobile support
- Evolve into an agent operating system
Add MoFA to your Cargo.toml:
```toml
[dependencies]
mofa-sdk = "0.1.0"
```

See Quick Start for the full guide!
The runtime mode is best suited to scenarios that require building complete agent workflows, including:
- Multi-agent collaboration scenarios
The runtime provides a message bus (SimpleMessageBus/DoraChannel) and agent registration system, supporting communication between agents:
- Point-to-point communication (send_to_agent)
- Broadcast messages (broadcast)
- Topic pub/sub (publish_to_topic/subscribe_topic)
- Role management (get_agents_by_role)
When you need multiple agents to collaborate on complex tasks (such as master-worker architectures or division of labor), the runtime's communication mechanism can significantly simplify development, as the toy sketch below illustrates.
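This toy in-memory bus sketches those patterns; MoFA's SimpleMessageBus/DoraChannel are the real implementations, and only the method names are taken from the list above (the signatures are assumed):

```rust
use std::collections::HashMap;

// A toy bus illustrating point-to-point, broadcast, and topic pub/sub.
#[derive(Default)]
struct ToyBus {
    inboxes: HashMap<String, Vec<String>>, // agent id -> pending messages
    topics: HashMap<String, Vec<String>>,  // topic -> subscriber agent ids
}

impl ToyBus {
    fn send_to_agent(&mut self, agent: &str, msg: &str) {
        self.inboxes.entry(agent.into()).or_default().push(msg.into());
    }
    fn broadcast(&mut self, msg: &str) {
        for inbox in self.inboxes.values_mut() {
            inbox.push(msg.into());
        }
    }
    fn subscribe_topic(&mut self, topic: &str, agent: &str) {
        self.topics.entry(topic.into()).or_default().push(agent.into());
    }
    fn publish_to_topic(&mut self, topic: &str, msg: &str) {
        let subs = self.topics.get(topic).cloned().unwrap_or_default();
        for agent in subs {
            self.send_to_agent(&agent, msg);
        }
    }
}

fn main() {
    let mut bus = ToyBus::default();
    bus.send_to_agent("planner", "new task: summarize report");
    bus.subscribe_topic("alerts", "planner");
    bus.publish_to_topic("alerts", "deadline moved up");
    bus.broadcast("system: shutting down at 18:00");
    println!("{:?}", bus.inboxes);
}
```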
- Event-driven agent applications
The runtime has a built-in event loop (run_with_receiver/run_event_loop) and interrupt handling system, automatically managing:
- Event reception and dispatch
- Agent state lifecycle
- Timeout and interrupt handling
Suitable for building applications that need to respond to external events or timers (such as real-time dialogue systems or event-response bots).
- Distributed agent systems
When the dora feature is enabled, the runtime provides Dora adapters (DoraAgentNode/DoraDataflow), supporting:
- Distributed node deployment
- Cross-node agent communication
- Data flow management
Suitable for production scenarios requiring large-scale deployment and low-latency communication.
- Structured agent building
The runtime provides the AgentBuilder fluent API, which simplifies:
- Configuration management
- Plugin integration
- Capability declaration
- Port configuration
Suitable for quickly building standardized agents, especially when multiple agent configurations must be managed uniformly; a toy sketch of the builder style follows.
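This toy illustration shows the fluent style only (the method names here are assumptions; consult the mofa-sdk docs for the real AgentBuilder surface):

```rust
// Toy builder illustrating the fluent pattern; mofa-sdk's real AgentBuilder
// methods may be named differently.
#[derive(Debug, Default)]
struct AgentConfig {
    name: String,
    role: String,
    capabilities: Vec<String>,
}

#[derive(Default)]
struct AgentBuilder {
    config: AgentConfig,
}

impl AgentBuilder {
    fn name(mut self, n: &str) -> Self { self.config.name = n.into(); self }
    fn role(mut self, r: &str) -> Self { self.config.role = r.into(); self }
    fn capability(mut self, c: &str) -> Self { self.config.capabilities.push(c.into()); self }
    fn build(self) -> AgentConfig { self.config }
}

fn main() {
    let agent = AgentBuilder::default()
        .name("summarizer")
        .role("worker")
        .capability("text-summarization")
        .build();
    println!("{agent:?}");
}
```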
- Production-grade applications
The runtime provides comprehensive:
- Health checks and state management
- Logging and monitoring integration
- Error handling mechanisms
Suitable for building production applications that need stable operation, rather than simple plugin testing or prototype development.
- API Documentation
- Security Guide - Comprehensive security documentation
- GitHub Repository
- Examples
MoFA is designed with security-first principles. Key security features include:
- WASM Sandboxing: Strong isolation for compile-time plugins
- Rhai Script Limits: Configurable resource limits for runtime scripts (memory, CPU, operations)
- Type Safety: Rust's memory safety and type system prevent entire classes of vulnerabilities
- Credential Management: Environment variable support for secure credential handling
- Plugin Verification: Version tracking and integrity checks for plugins
For comprehensive security documentation, including:
- Credential management best practices
- Runtime scripting security configuration
- Plugin security guidelines
- Production deployment security checklist
- Threat model and attack surface analysis
please see our Security Guide and Security Policy.
Reporting Vulnerabilities: If you discover a security vulnerability, please report it privately through our Security Policy.
We welcome contributions! Please check out our contributing guide for more details.
- GitHub Discussions: https://github.com/mofa-org/mofa/discussions
- Discord: https://discord.com/invite/hKJZzDMMm9
MoFA stands on the shoulders of giants:
- Rust - Perfect combination of performance and safety
- UniFFI - Mozilla's multi-language binding magic
- Rhai - Powerful embedded scripting engine
- Tokio - Async runtime cornerstone
- Ractor - Actor model concurrency framework
- Dora - Distributed dataflow runtime
- Wasmtime - WebAssembly runtime
Supported by Upstream Labs
