# SerdesAI

A complete Rust port of pydantic-ai for building production-ready AI agents.

SerdesAI is a comprehensive, type-safe Rust framework for building AI agents that interact with large language models. It is a complete port of pydantic-ai to pure Rust, providing ergonomic async APIs with compile-time safety guarantees.
## Features

- **Type-safe Agents** - Generic over dependencies and output types with compile-time validation
- **Multi-provider Support** - OpenAI, Anthropic, Google Gemini, Groq, Mistral, Ollama, AWS Bedrock, Azure OpenAI
- **Tool Calling** - Define tools with automatic JSON schema generation via macros
- **Streaming** - Real-time response streaming with backpressure support
- **Smart Retries** - Configurable retry strategies with exponential backoff
- **Evaluations** - Built-in testing and benchmarking framework
- **Graph Workflows** - Complex multi-agent orchestration with state management
- **MCP Support** - Model Context Protocol integration for tool servers
- **Structured Output** - JSON schema-based output validation with serde
- **Embeddings** - Semantic search and RAG support
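To make the retry behavior above concrete: exponential backoff typically doubles a base delay on each attempt and caps it at a maximum. The sketch below is dependency-free and illustrative only; the actual `serdes-ai-retries` API may expose this differently.

```rust
use std::time::Duration;

/// Compute the delay before a given retry attempt (0-based) using
/// exponential backoff: base * 2^attempt, capped at `max`.
fn backoff_delay(base: Duration, max: Duration, attempt: u32) -> Duration {
    base.checked_mul(2u32.saturating_pow(attempt))
        .unwrap_or(max)
        .min(max)
}

fn main() {
    let base = Duration::from_millis(100);
    let max = Duration::from_secs(5);
    for attempt in 0..6 {
        // Delays: 100ms, 200ms, 400ms, 800ms, 1.6s, 3.2s
        println!("attempt {attempt}: wait {:?}", backoff_delay(base, max, attempt));
    }
}
```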
## Installation

Add to your `Cargo.toml`:

```toml
[dependencies]
serdes-ai = "0.1"
tokio = { version = "1", features = ["full"] }
```

## Quick Start

```rust
use serdes_ai::prelude::*;
use serdes_ai::OpenAIChatModel;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let agent = Agent::new(OpenAIChatModel::from_env("gpt-4o")?)
        .system_prompt("You are a helpful assistant.")
        .build();

    let result = agent.run("Hello! What can you help me with?", ()).await?;
    println!("{}", result.output);
    Ok(())
}
```
## Tool Calling

```rust
use serdes_ai::prelude::*;
use serdes_ai_tools::{Tool, ToolDefinition, ToolReturn, ToolResult, SchemaBuilder};

struct CalculatorTool;

impl Tool<()> for CalculatorTool {
    fn definition(&self) -> ToolDefinition {
        ToolDefinition::new("calculate", "Perform arithmetic calculations")
            .with_parameters(
                SchemaBuilder::new()
                    .string("expression", "Math expression to evaluate", true)
                    .build()
                    .unwrap(),
            )
    }

    async fn call(
        &self,
        _ctx: &RunContext<()>,
        args: serde_json::Value,
    ) -> ToolResult {
        // Avoid panicking if the model omits the argument.
        let expr = args["expression"].as_str().unwrap_or_default();
        // Evaluate the expression...
        Ok(ToolReturn::text("42"))
    }
}

let agent = Agent::new(model)
    .tool(CalculatorTool)
    .build();
```
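The tool body above leaves evaluation as a stub. A minimal, dependency-free evaluator for simple `<number> <op> <number>` expressions could fill it in; this is only a sketch (a real tool would want a proper expression parser):

```rust
/// Evaluate a whitespace-separated binary expression such as "6 * 7".
/// Returns None for anything it does not understand.
fn eval_simple(expr: &str) -> Option<f64> {
    let mut parts = expr.split_whitespace();
    let lhs: f64 = parts.next()?.parse().ok()?;
    let op = parts.next()?;
    let rhs: f64 = parts.next()?.parse().ok()?;
    if parts.next().is_some() {
        return None; // trailing tokens: not a simple binary expression
    }
    match op {
        "+" => Some(lhs + rhs),
        "-" => Some(lhs - rhs),
        "*" => Some(lhs * rhs),
        "/" if rhs != 0.0 => Some(lhs / rhs),
        _ => None,
    }
}

fn main() {
    // prints Some(42.0)
    println!("{:?}", eval_simple("6 * 7"));
}
```

Returning `Option` (rather than panicking) lets the tool report bad input back to the model as a retryable error.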
## Structured Output

```rust
use serdes_ai::prelude::*;
use serdes_ai_macros::Output;
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize, Output)]
struct PersonInfo {
    name: String,
    age: u32,
    occupation: String,
}

let agent = Agent::new(model)
    .output_type::<PersonInfo>()
    .build();

let result = agent.run("John is a 30 year old engineer", ()).await?;
println!(
    "Extracted: {} is {} and works as {}",
    result.output.name,
    result.output.age,
    result.output.occupation
);
```
## Streaming

```rust
use serdes_ai::prelude::*;
use serdes_ai_streaming::AgentStreamEvent;
use futures::StreamExt;

let mut stream = agent.run_stream("Write a poem", ()).await?;
while let Some(event) = stream.next().await {
    if let AgentStreamEvent::TextDelta { content, .. } = event {
        print!("{}", content);
    }
}
```
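Conceptually, consuming the stream is just folding text deltas into a buffer as they arrive. A synchronous sketch with a mock event enum (standing in for `AgentStreamEvent`, whose real shape may differ):

```rust
/// Mock of a streaming event: either a text fragment or end-of-stream.
enum StreamEvent {
    TextDelta(String),
    Done,
}

/// Fold a sequence of events into the final response text.
fn collect_text(events: impl IntoIterator<Item = StreamEvent>) -> String {
    let mut out = String::new();
    for event in events {
        match event {
            StreamEvent::TextDelta(chunk) => out.push_str(&chunk),
            StreamEvent::Done => break,
        }
    }
    out
}

fn main() {
    let events = vec![
        StreamEvent::TextDelta("Roses ".into()),
        StreamEvent::TextDelta("are red".into()),
        StreamEvent::Done,
    ];
    // prints "Roses are red"
    println!("{}", collect_text(events));
}
```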
## Graph Workflows

```rust
use serdes_ai::prelude::*;
use serdes_ai_graph::{Graph, BaseNode, NodeResult, GraphRunContext, GraphResult};
use async_trait::async_trait;

#[derive(Debug, Clone, Default)]
struct WorkflowState {
    query: String,
    research: Option<String>,
    response: Option<String>,
}

struct ResearchNode;
struct WriteNode;

#[async_trait]
impl BaseNode<WorkflowState, (), String> for ResearchNode {
    fn name(&self) -> &str { "research" }

    async fn run(
        &self,
        ctx: &mut GraphRunContext<WorkflowState, ()>,
    ) -> GraphResult<NodeResult<WorkflowState, (), String>> {
        ctx.state.research = Some(format!("Research for: {}", ctx.state.query));
        Ok(NodeResult::next(WriteNode))
    }
}

#[async_trait]
impl BaseNode<WorkflowState, (), String> for WriteNode {
    fn name(&self) -> &str { "write" }

    async fn run(
        &self,
        ctx: &mut GraphRunContext<WorkflowState, ()>,
    ) -> GraphResult<NodeResult<WorkflowState, (), String>> {
        let response = format!("Based on: {}", ctx.state.research.as_deref().unwrap_or(""));
        Ok(NodeResult::end(response))
    }
}

let graph = Graph::new()
    .node("research", ResearchNode)
    .node("write", WriteNode)
    .entry("research")
    .build()?;

let result = graph.run(WorkflowState::default(), ()).await?;
```

## Crates

SerdesAI is organized as a workspace of focused crates:
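The graph above can be understood as a small state machine: each node mutates shared state and either names the next node or ends the run with a value. A dependency-free sketch of that run loop (a simplification, not the real `Graph`/`BaseNode` machinery):

```rust
/// What a node hands back: either the next node to run, or a final value.
enum Step<T> {
    Next(&'static str),
    End(T),
}

#[derive(Default)]
struct State {
    query: String,
    research: Option<String>,
}

/// Dispatch a node by name, mutating shared state as it runs.
fn run_node(name: &str, state: &mut State) -> Step<String> {
    match name {
        "research" => {
            state.research = Some(format!("Research for: {}", state.query));
            Step::Next("write")
        }
        "write" => Step::End(format!(
            "Based on: {}",
            state.research.as_deref().unwrap_or("")
        )),
        _ => panic!("unknown node: {name}"),
    }
}

/// Drive the graph from an entry node until some node ends the run.
fn run_graph(entry: &'static str, mut state: State) -> String {
    let mut current = entry;
    loop {
        match run_node(current, &mut state) {
            Step::Next(next) => current = next,
            Step::End(value) => return value,
        }
    }
}

fn main() {
    let state = State { query: "rust agents".into(), ..Default::default() };
    // prints "Based on: Research for: rust agents"
    println!("{}", run_graph("research", state));
}
```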
| Crate | Description |
|---|---|
| `serdes-ai` | Main facade with re-exports |
| `serdes-ai-core` | Core types, messages, errors |
| `serdes-ai-agent` | Agent implementation |
| `serdes-ai-models` | Model trait and providers |
| `serdes-ai-providers` | Provider abstractions |
| `serdes-ai-tools` | Tool definitions and execution |
| `serdes-ai-toolsets` | Tool collections and composition |
| `serdes-ai-output` | Output schemas and validation |
| `serdes-ai-streaming` | Streaming support |
| `serdes-ai-mcp` | MCP protocol support |
| `serdes-ai-embeddings` | Embedding models |
| `serdes-ai-retries` | Retry strategies |
| `serdes-ai-graph` | Graph-based workflows |
| `serdes-ai-evals` | Evaluation framework |
| `serdes-ai-macros` | Procedural macros |
## Providers

| Provider | Feature Flag | Models | Status |
|---|---|---|---|
| OpenAI | `openai` (default) | GPT-4, GPT-4o, o1, o3 | ✅ Full |
| Anthropic | `anthropic` (default) | Claude 3.5, Claude 4 | ✅ Full |
| Google | `google` (default) | Gemini 1.5, Gemini 2.0 | ✅ Full |
| Groq | `groq` | Llama 3, Mixtral, Gemma | ✅ Full |
| Mistral | `mistral` | Mistral Large, Codestral | ✅ Full |
| Ollama | `ollama` | Any local model | ✅ Full |
| Azure OpenAI | `azure` | Azure-hosted OpenAI | ✅ Full |
| AWS Bedrock | `bedrock` | Claude, Llama, Titan | ✅ Full |
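Because every provider sits behind a common model trait (see `serdes-ai-models` above), application code can stay provider-agnostic. A simplified, dependency-free illustration of that pattern; the `Model` trait and mock types here are stand-ins, not the real serdes-ai API:

```rust
/// Stand-in for the provider-agnostic model interface.
trait Model {
    fn name(&self) -> String;
    fn complete(&self, prompt: &str) -> String;
}

struct MockOpenAI;
struct MockOllama;

impl Model for MockOpenAI {
    fn name(&self) -> String { "gpt-4o".into() }
    fn complete(&self, prompt: &str) -> String {
        format!("[gpt-4o] reply to: {prompt}")
    }
}

impl Model for MockOllama {
    fn name(&self) -> String { "llama3.1".into() }
    fn complete(&self, prompt: &str) -> String {
        format!("[llama3.1] reply to: {prompt}")
    }
}

/// Agent code only sees `dyn Model`, so swapping providers is one line.
fn ask(model: &dyn Model, prompt: &str) -> String {
    model.complete(prompt)
}

fn main() {
    let models: Vec<Box<dyn Model>> = vec![Box::new(MockOpenAI), Box::new(MockOllama)];
    for m in &models {
        println!("{}: {}", m.name(), ask(m.as_ref(), "hi"));
    }
}
```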
```rust
// OpenAI
let model = OpenAIChatModel::from_env("gpt-4o")?;

// Anthropic
let model = AnthropicModel::from_env("claude-3-5-sonnet-20241022")?;

// Google Gemini
let model = GoogleModel::from_env("gemini-1.5-pro")?;

// Groq (ultra-fast inference)
let model = GroqModel::from_env("llama-3.1-70b-versatile")?;

// Mistral
let model = MistralModel::from_env("mistral-large-latest")?;

// Ollama (local)
let model = OllamaModel::new("llama3.1");

// Azure OpenAI
let model = AzureOpenAIModel::from_env("my-deployment")?;

// AWS Bedrock
let model = BedrockModel::new("anthropic.claude-3-sonnet-20240229-v1:0")?;
```

## Feature Flags

```toml
[dependencies]
serdes-ai = { version = "0.1", features = ["full"] }
```

| Feature | Description | Default |
|---|---|---|
| `openai` | OpenAI GPT models | ✅ |
| `anthropic` | Anthropic Claude models | ✅ |
| `google` | Google Gemini models | ✅ |
| `groq` | Groq fast inference | |
| `mistral` | Mistral AI models | |
| `ollama` | Ollama local models | |
| `azure` | Azure OpenAI | |
| `bedrock` | AWS Bedrock | |
| `mcp` | Model Context Protocol | |
| `graph` | Graph workflows | ✅ |
| `evals` | Evaluation framework | ✅ |
| `macros` | Procedural macros | ✅ |
| `full` | All features | |
## Comparison with pydantic-ai

| Feature | pydantic-ai | serdes-ai |
|---|---|---|
| Language | Python | Rust |
| Type Safety | Runtime (Pydantic) | Compile-time |
| Async Runtime | asyncio | tokio |
| Validation | Pydantic v2 | serde + custom |
| Performance | Good | Excellent |
| Memory Safety | Garbage Collected | Ownership system |
| Binary Size | Large (Python runtime) | Minimal |
| Startup Time | Slow | Instant |
| Thread Safety | GIL limitations | Fully concurrent |
## Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                          serdes-ai                          │
│                     (Main Facade Crate)                     │
└─────────────────────────────────────────────────────────────┘
                              │
          ┌───────────────────┼───────────────────┐
          │                   │                   │
          ▼                   ▼                   ▼
  ┌───────────────┐   ┌───────────────┐   ┌───────────────┐
  │  serdes-ai-   │   │  serdes-ai-   │   │  serdes-ai-   │
  │     agent     │   │    models     │   │     graph     │
  │               │   │               │   │               │
  │  Agent logic  │   │  Model trait  │   │  Multi-agent  │
  │  Run context  │   │   Providers   │   │   workflows   │
  └───────────────┘   └───────────────┘   └───────────────┘
          │                   │                   │
          └─────────┬─────────┴─────────┬─────────┘
                    │                   │
                    ▼                   ▼
          ┌───────────────┐   ┌───────────────┐
          │  serdes-ai-   │   │  serdes-ai-   │
          │     tools     │   │     core      │
          │               │   │               │
          │  Tool traits  │   │   Messages    │
          │  Schema gen   │   │    Errors     │
          └───────────────┘   └───────────────┘
```
## Testing

```bash
# Run all tests
cargo test --workspace --all-features

# Run with a specific provider
cargo test --features openai

# Run benchmarks
cargo bench --workspace
```

## Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
```bash
git clone https://github.com/janfeddersen-wq/serdesAI
cd serdesAI
cargo build --workspace --all-features
cargo test --workspace --all-features
```

## License

MIT License - see LICENSE for details.
## Acknowledgements

- pydantic-ai - The original Python implementation that inspired this project
- Anthropic - For Claude and the Model Context Protocol
- OpenAI - For the OpenAI API and tool-calling standards
- The Rust community - For excellent crates like `tokio`, `serde`, and `async-trait`

Made with 🦀 and ❤️