4 changes: 2 additions & 2 deletions crates/dspy-rs/README.md
@@ -48,7 +48,7 @@ cargo add dspy-rs
Here's a simple example to get you started:

```rust
use dsrs::prelude::*;
use dsrs::*;
use anyhow::Result;

#[Signature]
@@ -67,7 +67,7 @@ async fn main() -> Result<()> {
// Configure your LM (Language Model)
configure(
LM::builder()
.api_key(SecretString::from(std::env::var("OPENAI_API_KEY")?))
.api_key(std::env::var("OPENAI_API_KEY")?.into())
.build(),
ChatAdapter {},
);
104 changes: 104 additions & 0 deletions docs/docs/building-blocks/predictors.mdx
@@ -3,3 +3,107 @@ title: 'Predictors'
description: 'Learn how to create and use predictors for LM inference'
icon: 'robot'
---


## What is a predictor?

Predictors execute LM calls for a signature and input data. In Rust terms, a `Predict` struct wraps a `MetaSignature` and calls the LM via an `Adapter`.

- **Purpose:** Execute an LM call for a signature with the provided input data.
- **Rust shape:** `Predict` holds a `Box<dyn MetaSignature>` and uses the configured `Adapter` + `LM` to run inference.
- **Trait:** Anything that implements the `Predictor` trait can be invoked with `forward`/`forward_with_config`.

## API surface

- **`Predictor` trait:**
- `async fn forward(&self, inputs: Example) -> Result<Prediction>` — uses global settings (LM + Adapter).
- `async fn forward_with_config(&self, inputs: Example, lm: &mut LM) -> Result<Prediction>` — supply your own `LM` (uses `ChatAdapter`).
- **`Predict` struct:** concrete predictor that wraps a `MetaSignature`.
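
To make the shape concrete, here is a simplified, synchronous sketch of the predictor contract. All names here (`Example`, `Prediction`, `EchoPredict`) are stand-ins, and the real `Predictor::forward` in dspy-rs is async and calls an LM rather than echoing its input.

```rust
// Stand-in types: the real Example/Prediction in dspy-rs carry field maps.
struct Example {
    question: String,
}

struct Prediction {
    answer: String,
}

// Simplified, synchronous analogue of the Predictor trait.
trait Predictor {
    fn forward(&self, inputs: Example) -> Result<Prediction, String>;
}

// A toy predictor that echoes the question instead of calling an LM.
struct EchoPredict;

impl Predictor for EchoPredict {
    fn forward(&self, inputs: Example) -> Result<Prediction, String> {
        Ok(Prediction {
            answer: format!("echo: {}", inputs.question),
        })
    }
}

fn main() {
    let out = EchoPredict
        .forward(Example { question: "What is gravity?".into() })
        .unwrap();
    println!("{}", out.answer); // echo: What is gravity?
}
```

The real trait returns a `Result<Prediction>` that you `.await`; the synchronous version above only illustrates the input/output contract.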

## Minimal usage

```rust
use dspy_rs::{
ChatAdapter, Example, LM, LMConfig, Predict, Predictor, Signature, configure, hashmap,
};

#[Signature]
struct QA {
/// Use Renaissance-era English to answer the question.
#[input]
question: String,
#[output]
answer: String,
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
// Configure global LM + adapter once
let lm = LM::builder()
.config(LMConfig::builder().model("gpt-4.1-nano".to_string()).build())
.api_key(std::env::var("OPENAI_API_KEY")?.into())
.build();
configure(lm, ChatAdapter::default());

// Create predictor for your signature
let predictor = Predict::new(QA::new());

// Provide inputs as an Example
let inputs = Example::new(
hashmap! { "question".to_string() => "What is gravity?".into() },
vec!["question".to_string()],
vec!["answer".to_string()],
);

let pred = predictor.forward(inputs).await?;
println!("Answer: {}", pred.get("answer", None).as_str().unwrap());
Ok(())
}
```

## Inline signatures

You can also build a predictor from an inline signature:

```rust
use dspy_rs::{Predict, sign};

let predict = Predict::new(sign! { (question: String) -> answer: String });
```

## How it works under the hood

- `Predict::forward` reads the globally configured `LM` and `Adapter` from `GLOBAL_SETTINGS` and calls the adapter with your signature and inputs.
- `Predict::forward_with_config` lets you supply a mutable `&mut LM` directly and uses `ChatAdapter` for the call. This is helpful for tests or local overrides.
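
The global-settings mechanism can be pictured with a small self-contained sketch; `GLOBAL_SETTINGS`, `configure`, and `forward` here are simplified stand-ins, not the crate's actual items.

```rust
use std::sync::OnceLock;

// Stand-in for the crate's global LM + adapter settings.
struct GlobalSettings {
    model: String,
}

static GLOBAL_SETTINGS: OnceLock<GlobalSettings> = OnceLock::new();

// configure() stores the settings once, process-wide.
fn configure(model: &str) {
    let _ = GLOBAL_SETTINGS.set(GlobalSettings {
        model: model.to_string(),
    });
}

// forward() reads the global settings at call time.
fn forward() -> String {
    let settings = GLOBAL_SETTINGS
        .get()
        .expect("configure() must be called before forward()");
    format!("calling model {}", settings.model)
}

fn main() {
    configure("gpt-4.1-nano");
    println!("{}", forward()); // calling model gpt-4.1-nano
}
```

This is why `configure` must run before any `forward` call: the predictor resolves its LM and adapter lazily from the global state rather than owning them.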

## Testing and determinism

In tests, inject a `DummyLM` and call `forward_with_config` to avoid network calls and ensure deterministic outputs.

```rust
use dspy_rs::{DummyLM, Example, Predict, Predictor, Signature, hashmap};

#[Signature]
struct QA { #[input] question: String, #[output] answer: String }

#[tokio::test]
async fn predicts_locally() -> anyhow::Result<()> {
let predict = Predict::new(QA::new());
let mut lm = DummyLM::default().into(); // convert into LM

let inputs = Example::new(
hashmap! { "question".to_string() => "Test?".into() },
vec!["question".to_string()],
vec!["answer".to_string()],
);

let out = predict.forward_with_config(inputs, &mut lm).await?;
assert!(out.get("answer", None).is_string());
Ok(())
}
```

## Where predictors fit

- A `Signature` defines the task schema; a `Predictor` executes it.
- You can compose multiple predictors and custom logic into higher-level `Module`s. See the modules guide for composition patterns.
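
As a rough sketch of that composition idea, with plain closures standing in for LM-backed predictors, a higher-level module is just something that threads one step's output into the next step's input:

```rust
// A toy two-step module: `first` and `second` stand in for predictors.
struct Pipeline<A, B> {
    first: A,
    second: B,
}

impl<A, B> Pipeline<A, B>
where
    A: Fn(&str) -> String,
    B: Fn(&str) -> String,
{
    // Run the steps in sequence, feeding the intermediate result forward.
    fn run(&self, input: &str) -> String {
        let intermediate = (self.first)(input);
        (self.second)(&intermediate)
    }
}

fn main() {
    let summarize = |text: &str| format!("summary({text})");
    let translate = |text: &str| format!("translation({text})");
    let module = Pipeline { first: summarize, second: translate };
    println!("{}", module.run("long document"));
    // translation(summary(long document))
}
```

A real `Module` would hold `Predict` instances and await their `forward` calls, but the control flow is the same: plain Rust code deciding how predictions feed into each other.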
65 changes: 26 additions & 39 deletions docs/docs/getting-started/quickstart.mdx
@@ -30,57 +30,52 @@ tokio = "1.47.1"
cargo add dspy-rs anyhow serde serde_json tokio
```

This will create an alias `dsrs` for the `dspy-rs` crate, which is the intended way to use it.

<Note>
We need to install DSRS using the name `dspy-rs` for now, because
`dsrs` is an already-published crate.
This may change in the future if the `dsrs` crate name is donated back or becomes available.
This may change in the future if the `dsrs` crate name is donated or becomes available.
</Note>
</Step>

<Step title="Set up your language model">

The first step in DSRs is to configure your Language Model (LM). DSRs supports
any LM supported via the `async-openai` crate. You can define your LM
any OpenAI-compatible LM supported via the `async-openai` crate. You can define your LM
configuration using the builder pattern as follows.

Once the LM instance is created, pass it to the configure function along with
a chat adapter to set the global LM and adapter for your application.
a chat adapter to set the global LM and adapter for your application. An adapter sits on top of the LM and signature and
is responsible for converting signatures to prompts and parsing the output fields from the LM response.
`ChatAdapter` is the default adapter in DSRs and is responsible for converting
the instructions and the structure from your signature (defined in the next step)
into a prompt that the LM can follow to complete the task.
the signature (defined in the next step) to a list of messages that the `LM` can use to generate the output.

```rust
use dspy_rs::{configure, ChatAdapter, LM, LMConfig};
use dspy_rs::{configure, ChatAdapter, LM};
use std::env;
use secrecy::SecretString;

fn main() -> Result<(), anyhow::Error> {
//Define a config for the LM
let config = LMConfig::builder()
.model("gpt-4.1-nano".to_string())
.build();
// Create the LM instance via the builder
let lm = LM::builder()
.config(config)
.api_key(env::var("OPENAI_API_KEY")?.into())
.build();
// Configure the global LM and adapter
configure(lm, ChatAdapter::default());
configure(
// Build the LM with default config
LM::builder()
.api_key(SecretString::from(std::env::var("OPENAI_API_KEY")?))
.build(),
ChatAdapter {},
);

Ok(())
...
}
```
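
For intuition, here is a hypothetical sketch of the adapter's job: turning a signature's instruction and an input field into a list of chat messages. The names and message layout are illustrative, not `ChatAdapter`'s exact output.

```rust
// Stand-in for a chat message; real adapters use the LM client's types.
struct Message {
    role: &'static str,
    content: String,
}

// Sketch: the signature's instruction becomes the system message and the
// input field becomes the user message.
fn format_messages(instruction: &str, question: &str) -> Vec<Message> {
    vec![
        Message { role: "system", content: instruction.to_string() },
        Message { role: "user", content: format!("question: {question}") },
    ]
}

fn main() {
    for m in format_messages("Answer the question concisely.", "What is gravity?") {
        println!("{}: {}", m.role, m.content);
    }
}
```

The adapter also does the reverse trip, parsing the LM's text response back into the signature's typed output fields.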

</Step>

<Step title="Define task via signatures">

A signature defines the structure of your task: what inputs it takes and what outputs it should produce. Think of it as a schema for your LM call,
A signature in DSRs specifies your task: it describes the instructions, the expected inputs, and the outputs your LM should generate. You can think of it as a schema that guides how your prompt for the LLM call is structured.

You can create your signature in DSRs in one of two ways: using an inline macro, and via an attribute macro.
You can create your signature in DSRs in one of two ways: using an inline signature or via structs.

Let's create a question-answering signature using the inline macro:
Let's create a question-answering signature using the inline signature:

```rust
let signature = sign! {
@@ -90,11 +85,11 @@ let signature = sign! {
The input fields are to the left of the `->` arrow, and the output fields are to the right. Multiple fields can be comma-separated, e.g., `(question: String,
context: String) -> answer: String`.

Alternatively, you can have more control over defining more granular aspects of the signature by defining signature using attribute macro on structs.
Alternatively, you can have more control over defining more granular aspects of the signature by defining them using structs.

```rust
#[Signature]
struct QASignature {
struct QA {
/// Answer the question concisely.

#[input(desc="Question to be answered.")]
@@ -106,8 +101,7 @@ struct QASignature {
```

The advantage of the latter approach is that you can add doc comments at the
top of the struct, specifying
important domain information or specific instructions to the LM.
top of the struct, specifying detailed instructions for the task.
Additionally, you can also annotate each field with `#[input]` and `#[output]`
attributes, useful when you have multiple input and output fields, and when
you want to add descriptions to each field.
@@ -116,7 +110,11 @@ you want to add descriptions to each field.

<Step title="Create a simple predictor">

A predictor is the simplest module in DSRs. It takes a signature and input data, and orchestrates the LM call to produce a prediction. Let's demonstrate this
The `LM` defines the configuration of the LLM call being made. A predictor takes a signature and input data and calls the LM to produce a prediction.

Let's demonstrate this
with an example.

Gravity was explained by Isaac Newton in 1687. To make this more interesting,
@@ -131,17 +129,6 @@ use dspy_rs::{
};
use std::env;

#[Signature]
struct QA {
/// Use Renaissance-era English to answer the question.

#[input]
pub question: String,

#[output]
pub answer: String,
}

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
let config = LMConfig::builder().model("gpt-4.1-nano".to_string()).build();