MoM is a Ruby on Rails application for managing Mule AI agents. It provides centralized management and coordination of multiple Mule instances through both REST API and gRPC interfaces.
- Client Management: Register, monitor, and manage multiple Mule AI clients
- Workflow Orchestration: Execute workflows across different Mule instances
- gRPC Integration: High-performance gRPC server for real-time communication
- REST API: HTTP endpoints for web-based integrations
- Status Monitoring: Track client health and workflow execution status
- Install dependencies:

  ```bash
  bundle install
  ```

- Generate gRPC code:

  ```bash
  rake generate_grpc
  ```

- Set up the database:

  ```bash
  rails db:create db:migrate
  ```

- Start the server:

  ```bash
  rails server
  ```
The gRPC server will start automatically on port 50051 (configurable via GRPC_PORT environment variable).
- `GET /api/v1/mule_clients` - List all registered clients
- `POST /api/v1/mule_clients` - Register a new client
- `GET /api/v1/mule_clients/:id` - Get client details
- `DELETE /api/v1/mule_clients/:id` - Deregister a client
- `POST /api/v1/mule_clients/:id/execute_workflow` - Execute a workflow on a client
- `GET /api/v1/mule_clients/:id/status` - Get client status and running workflows
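For example, a Mule instance could register itself over the REST API with plain `Net::HTTP`. The `mule_client` payload fields shown here (`name`, `host`, `port`) are assumptions, so adjust them to the actual model attributes:

```ruby
require "net/http"
require "json"
require "uri"

# Hypothetical helper: builds the registration request for MoM's REST API.
# The nested payload shape is an assumption -- match it to your MuleClient model.
def build_register_request(base_url, name:, host:, port:)
  uri = URI.join(base_url, "/api/v1/mule_clients")
  req = Net::HTTP::Post.new(uri, "Content-Type" => "application/json")
  req.body = { mule_client: { name: name, host: host, port: port } }.to_json
  req
end

req = build_register_request("http://localhost:3000",
                             name: "mule-1", host: "10.0.0.5", port: 4567)

# To actually send it:
# Net::HTTP.start(req.uri.host, req.uri.port) { |http| http.request(req) }
```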
The MuleManagerService provides equivalent functionality through gRPC:
- `RegisterMuleClient` - Register a new Mule client
- `GetMuleClients` - List all registered clients
- `ExecuteWorkflowOnClient` - Execute a workflow on a specific client
- `GetClientStatus` - Get client status and running workflows
- `DeregisterMuleClient` - Remove a client from management
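From Ruby, these methods are invoked through the stub classes that `rake generate_grpc` emits. The `MuleManager` module and request class names below are assumptions based on the service name, so the actual calls are shown commented; only the target-address helper runs as-is:

```ruby
# Build the gRPC target address from the documented default port.
def grpc_target(host = "localhost", port = 50051)
  "#{host}:#{port}"
end

# With the grpc gem and the generated stubs loaded, a call might look like:
#
#   require "grpc"
#   require "mule_manager_services_pb"  # generated by rake generate_grpc
#
#   stub = MuleManager::MuleManagerService::Stub.new(
#     grpc_target, :this_channel_is_insecure
#   )
#   clients = stub.get_mule_clients(MuleManager::GetMuleClientsRequest.new)

target = grpc_target  # "localhost:50051"
```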
Environment variables:
- `GRPC_PORT` - gRPC server port (default: 50051)
- `GRPC_HOST` - gRPC server host (default: 0.0.0.0)
- `DB_USERNAME` - Database username
- `DB_PASSWORD` - Database password
- `DB_HOST` - Database host
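As a sketch, the database variables might be consumed like this (in a Rails app they are normally read inside `config/database.yml` via ERB; the fallback defaults here are assumptions, not documented values):

```ruby
# Resolve database settings from the documented environment variables.
# The "postgres" and "localhost" fallbacks are illustrative assumptions.
db_config = {
  username: ENV.fetch("DB_USERNAME", "postgres"),
  password: ENV["DB_PASSWORD"],
  host:     ENV.fetch("DB_HOST", "localhost"),
}
```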
Generate gRPC code after modifying the protobuf definition:

```bash
rake generate_grpc
```

MoM acts as a central coordinator for multiple Mule instances:
- Client Registration: Mule instances register themselves with MoM
- Workflow Distribution: MoM routes workflow requests to appropriate clients
- Status Monitoring: Track execution status and client health
- Load Balancing: Distribute work across available clients (future feature)
This enables scalable AI agent orchestration across multiple machines or containers.
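The routing step above can be sketched as a simple selection over registered clients. The fields and the least-loaded policy are illustrative (the actual MuleClient model and, per the note above, load balancing itself are not specified here):

```ruby
# Hypothetical in-memory view of registered clients and their health.
clients = [
  { id: 1, status: "online",  running_workflows: 2 },
  { id: 2, status: "offline", running_workflows: 0 },
  { id: 3, status: "online",  running_workflows: 0 },
]

# Route a workflow to the least-loaded online client -- one possible
# load-balancing policy for the future feature noted above.
target = clients
  .select { |c| c[:status] == "online" }
  .min_by { |c| c[:running_workflows] }
# target[:id] => 3
```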