Merged
41 changes: 6 additions & 35 deletions openai_agents/README.md
**@cretz** (Member) commented on Jul 28, 2025:

Just noticed there is no reference to these samples from the primary repo README like there is for the others; may want to add one (just adding this general comment, deferring code review to Dan/Tim)

**Contributor Author** replied:

Yes, we should add some readme cleanup + linking. Since there are a lot of changes I've been cutting them up into separate PRs. We should add that once we bring them all together.

@@ -21,42 +21,13 @@ This approach ensures that AI agent workflows are durable, observable, and can h
- Required dependencies installed via `uv sync --group openai-agents`
- OpenAI API key set as environment variable: `export OPENAI_API_KEY=your_key_here`

-## Running the Examples
+## Examples

-1. **Start the worker** (supports all samples):
-   ```bash
-   uv run openai_agents/run_worker.py
-   ```
+Each directory contains a complete example with its own README for detailed instructions:

-2. **Run individual samples** in separate terminals:
+- **[Basic Examples](./basic/README.md)** - Simple agent examples including a hello world agent and a tools-enabled agent that can access external APIs like weather services.
+- **[Agent Patterns](./agent_patterns/README.md)** - Advanced patterns for agent composition, including using agents as tools within other agents.
+- **[Research Bot](./research_bot/README.md)** - Multi-agent research system with specialized roles: a planner agent, search agent, and writer agent working together to conduct comprehensive research.
+- **[Customer Service](./customer_service/README.md)** - Interactive customer service agent with escalation capabilities, demonstrating conversational workflows.

-### Basic Agent Examples
-
-- **Hello World Agent** - Simple agent that responds in haikus:
-  ```bash
-  uv run openai_agents/run_hello_world_workflow.py
-  ```
-
-- **Tools Agent** - Agent with access to external tools (weather API):
-  ```bash
-  uv run openai_agents/run_tools_workflow.py
-  ```
-
-### Advanced Multi-Agent Examples
-
-- **Research Workflow** - Multi-agent research system with specialized roles:
-  ```bash
-  uv run openai_agents/run_research_workflow.py
-  ```
-  Features a planner agent, search agent, and writer agent working together.
-
-- **Customer Service Workflow** - Customer service agent with escalation capabilities (interactive):
-  ```bash
-  uv run openai_agents/run_customer_service_client.py --conversation-id my-conversation-123
-  ```
-
-- **Agents as Tools** - Demonstrate using agents as tools within other agents:
-  ```bash
-  uv run openai_agents/run_agents_as_tools_workflow.py
-  ```

68 changes: 68 additions & 0 deletions openai_agents/agent_patterns/README.md
@@ -0,0 +1,68 @@
# Agent Patterns

Common agentic patterns extended with Temporal's durable execution capabilities.

*Adapted from [OpenAI Agents SDK agent patterns](https://github.com/openai/openai-agents-python/tree/main/examples/agent_patterns)*

## Running the Examples

First, start the worker (supports all patterns):
```bash
uv run openai_agents/agent_patterns/run_worker.py
```

Then run individual examples in separate terminals:

## Deterministic Flows

**TODO**

A common tactic is to break down a task into a series of smaller steps. Each step can be performed by an agent, and the output of one agent is used as input to the next. For example, if your task was to generate a story, you could break it down into the following steps:

1. Generate an outline
2. Generate the story
3. Generate the ending

Each of these steps can be performed by an agent. The output of one agent is used as input to the next.
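The steps above can be sketched as a plain function pipeline. This is a minimal illustration with stub functions standing in for real LLM agent calls (in the actual samples, each call would be a durable step inside a Temporal workflow):

```python
# Deterministic flow: each step's output is the next step's input.
# The *_agent functions are stubs standing in for real LLM agent calls.

def outline_agent(topic):
    return f"Outline for '{topic}': beginning, middle, end"

def story_agent(outline):
    return f"Story based on [{outline}]"

def ending_agent(story):
    return story + " The end."

def deterministic_flow(topic):
    outline = outline_agent(topic)  # step 1: generate an outline
    story = story_agent(outline)    # step 2: generate the story
    return ending_agent(story)      # step 3: generate the ending

result = deterministic_flow("a robot learning to paint")
```

Because each step depends only on the previous step's output, the chain replays deterministically, which is exactly what durable execution needs.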

## Handoffs and Routing

**TODO**

In many situations, you have specialized sub-agents that handle specific tasks. You can use handoffs to route the task to the right agent.

For example, you might have a frontline agent that receives a request, and then hands off to a specialized agent based on the language of the request.
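The routing decision can be sketched with stubs in place of real sub-agents; a real triage agent would let the model choose a handoff target from its configured `handoffs`, whereas the keyword check here merely plays that role:

```python
# Handoff routing: a frontline triager picks a specialized sub-agent,
# which then owns the conversation. The sub-agents are stubs.

def spanish_agent(msg):
    return f"[spanish agent] {msg}"

def english_agent(msg):
    return f"[english agent] {msg}"

def triage(msg):
    # Stand-in for the LLM's handoff decision.
    target = spanish_agent if msg.startswith("hola") else english_agent
    return target(msg)  # the chosen agent takes over from here
```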

## Agents as Tools

The mental model for handoffs is that the new agent "takes over". It sees the previous conversation history, and owns the conversation from that point onwards. However, this is not the only way to use agents. You can also use agents as a tool - the tool agent goes off and runs on its own, and then returns the result to the original agent.

For example, you could model a translation task as tool calls instead: rather than handing over to the language-specific agent, you could call the agent as a tool, and then use the result in the next step. This enables things like translating multiple languages at once.

```bash
uv run openai_agents/agent_patterns/run_agents_as_tools_workflow.py
```
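The contrast with a handoff can be sketched with stubs: the orchestrator keeps control, invokes each translator agent like an ordinary tool, and composes the results, so several can run in a single turn. The function names below are illustrative, not the sample's actual agents:

```python
# Agents as tools: the orchestrator calls each sub-agent as a function
# and composes the results, instead of handing the conversation over.

def translate_to_spanish(text):
    return f"es:{text}"

def translate_to_french(text):
    return f"fr:{text}"

def orchestrator(text, tools):
    # Unlike a handoff, the orchestrator sees every result.
    return {name: tool(text) for name, tool in tools.items()}

results = orchestrator(
    "hello",
    {"spanish": translate_to_spanish, "french": translate_to_french},
)
```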

## LLM-as-a-Judge

**TODO**

LLMs can often improve the quality of their output if given feedback. A common pattern is to generate a response using a model, and then use a second model to provide feedback. You can even use a small model for the initial generation and a larger model for the feedback, to optimize cost.

For example, you could use an LLM to generate an outline for a story, and then use a second LLM to evaluate the outline and provide feedback. You can then use the feedback to improve the outline, and repeat until the LLM is satisfied with the outline.
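The generate/judge loop reduces to a few lines once the model calls are stubbed out; both roles below stand in for real (possibly differently sized) models:

```python
# LLM-as-a-judge loop: generate, get feedback, revise, repeat until the
# judge is satisfied. Both roles are stubs for real model calls.

def writer(topic, feedback=None):
    draft = f"Outline for {topic}"
    return f"{draft} (revised)" if feedback else draft

def judge(draft):
    # Returns feedback, or None when satisfied.
    return None if "(revised)" in draft else "add more detail"

def refine(topic, max_rounds=5):
    feedback = None
    for _ in range(max_rounds):
        draft = writer(topic, feedback)
        feedback = judge(draft)
        if feedback is None:
            break
    return draft
```

Capping the rounds (`max_rounds`) keeps cost bounded when the judge never becomes satisfied.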

## Parallelization

**TODO**

Running multiple agents in parallel is a common pattern. This can be useful both for latency (e.g. when multiple steps don't depend on each other) and for quality (e.g. generating multiple responses and picking the best one).
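A minimal fan-out sketch with stub agents, using `asyncio.gather` for concurrency and a trivial "pick the longest" selection in place of a real judge:

```python
# Parallel fan-out: run several stub agents concurrently, then pick the
# "best" response (here, simply the longest one).

import asyncio

async def candidate_agent(prompt, style):
    await asyncio.sleep(0)  # stands in for a real model call
    return f"{prompt}, {style} version"

async def best_of(prompt):
    drafts = await asyncio.gather(
        candidate_agent(prompt, "formal"),
        candidate_agent(prompt, "casual"),
        candidate_agent(prompt, "extremely detailed and verbose"),
    )
    return max(drafts, key=len)  # a judge agent could rank these instead

best = asyncio.run(best_of("greet the user"))
```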

## Guardrails

**TODO**

Related to parallelization, you often want to run input guardrails to make sure the inputs to your agents are valid. For example, if you have a customer support agent, you might want to make sure that the user isn't trying to ask for help with a math problem.

You can definitely do this without any special Agents SDK features by using parallelization, but we support a special guardrail primitive. Guardrails can have a "tripwire" - if the tripwire is triggered, the agent execution will immediately stop and a `GuardrailTripwireTriggered` exception will be raised.

This is really useful for latency: for example, you might have a very fast model that runs the guardrail and a slow model that runs the actual agent. You wouldn't want to wait for the slow model to finish, so guardrails let you quickly reject invalid inputs.
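The tripwire idea can be sketched without the SDK: a fast, cheap check runs first and aborts before the slow agent is ever invoked. The exception class below only mirrors the SDK's `GuardrailTripwireTriggered` name; everything here is a stub:

```python
# Guardrail tripwire: a fast, cheap check runs first and aborts before
# the slow, expensive agent is invoked.

class GuardrailTripwireTriggered(Exception):
    pass

def fast_guardrail(user_input):
    # A small, fast model would make this judgment in practice.
    if "math homework" in user_input:
        raise GuardrailTripwireTriggered("off-topic request")

def slow_support_agent(user_input):
    return f"Support answer for: {user_input}"

def handle(user_input):
    fast_guardrail(user_input)             # fails fast on invalid input
    return slow_support_agent(user_input)  # expensive call only if safe
```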
@@ -3,7 +3,9 @@
from temporalio.client import Client
from temporalio.contrib.openai_agents import OpenAIAgentsPlugin

-from openai_agents.workflows.agents_as_tools_workflow import AgentsAsToolsWorkflow
+from openai_agents.agent_patterns.workflows.agents_as_tools_workflow import (
+    AgentsAsToolsWorkflow,
+)


async def main():
39 changes: 39 additions & 0 deletions openai_agents/agent_patterns/run_worker.py
@@ -0,0 +1,39 @@
from __future__ import annotations

import asyncio
from datetime import timedelta

from temporalio.client import Client
from temporalio.contrib.openai_agents import ModelActivityParameters, OpenAIAgentsPlugin
from temporalio.worker import Worker

from openai_agents.agent_patterns.workflows.agents_as_tools_workflow import (
    AgentsAsToolsWorkflow,
)


async def main():
    # Create client connected to server at the given address
    client = await Client.connect(
        "localhost:7233",
        plugins=[
            OpenAIAgentsPlugin(
                model_params=ModelActivityParameters(
                    start_to_close_timeout=timedelta(seconds=30)
                )
            ),
        ],
    )

    worker = Worker(
        client,
        task_queue="openai-agents-task-queue",
        workflows=[
            AgentsAsToolsWorkflow,
        ],
    )
    await worker.run()


if __name__ == "__main__":
    asyncio.run(main())
25 changes: 25 additions & 0 deletions openai_agents/basic/README.md
@@ -0,0 +1,25 @@
# Basic Agent Examples

Simple examples to get started with OpenAI Agents SDK integrated with Temporal workflows.

*Adapted from [OpenAI Agents SDK basic examples](https://github.com/openai/openai-agents-python/tree/main/examples/basic)*

## Running the Examples

First, start the worker (supports all basic examples):
```bash
uv run openai_agents/basic/run_worker.py
```

Then run individual examples in separate terminals:

### Hello World Agent
```bash
uv run openai_agents/basic/run_hello_world_workflow.py
```

### Tools Agent
Agent with access to external tools (weather API):
```bash
uv run openai_agents/basic/run_tools_workflow.py
```
@@ -3,7 +3,7 @@
from temporalio.client import Client
from temporalio.contrib.openai_agents import OpenAIAgentsPlugin

-from openai_agents.workflows.hello_world_workflow import HelloWorldAgent
+from openai_agents.basic.workflows.hello_world_workflow import HelloWorldAgent


async def main():
@@ -3,7 +3,7 @@
from temporalio.client import Client
from temporalio.contrib.openai_agents import OpenAIAgentsPlugin

-from openai_agents.workflows.tools_workflow import ToolsWorkflow
+from openai_agents.basic.workflows.tools_workflow import ToolsWorkflow


async def main():
@@ -7,12 +7,9 @@
from temporalio.contrib.openai_agents import ModelActivityParameters, OpenAIAgentsPlugin
from temporalio.worker import Worker

-from openai_agents.workflows.agents_as_tools_workflow import AgentsAsToolsWorkflow
-from openai_agents.workflows.customer_service_workflow import CustomerServiceWorkflow
-from openai_agents.workflows.get_weather_activity import get_weather
-from openai_agents.workflows.hello_world_workflow import HelloWorldAgent
-from openai_agents.workflows.research_bot_workflow import ResearchWorkflow
-from openai_agents.workflows.tools_workflow import ToolsWorkflow
+from openai_agents.basic.activities.get_weather_activity import get_weather
+from openai_agents.basic.workflows.hello_world_workflow import HelloWorldAgent
+from openai_agents.basic.workflows.tools_workflow import ToolsWorkflow


async def main():
@@ -22,7 +19,7 @@ async def main():
        plugins=[
            OpenAIAgentsPlugin(
                model_params=ModelActivityParameters(
-                    start_to_close_timeout=timedelta(seconds=120)
+                    start_to_close_timeout=timedelta(seconds=30)
                )
            ),
        ],
@@ -34,9 +31,6 @@ async def main():
        workflows=[
            HelloWorldAgent,
            ToolsWorkflow,
-            ResearchWorkflow,
-            CustomerServiceWorkflow,
-            AgentsAsToolsWorkflow,
        ],
        activities=[
            get_weather,
@@ -6,7 +6,7 @@
from temporalio import workflow
from temporalio.contrib import openai_agents as temporal_agents

-from openai_agents.workflows.get_weather_activity import get_weather
+from openai_agents.basic.activities.get_weather_activity import get_weather


@workflow.defn
21 changes: 21 additions & 0 deletions openai_agents/customer_service/README.md
@@ -0,0 +1,21 @@
# Customer Service

Interactive customer service agent with escalation capabilities, extended with Temporal's durable conversational workflows.

*Adapted from [OpenAI Agents SDK customer service](https://github.com/openai/openai-agents-python/tree/main/examples/customer_service)*

This example demonstrates how to build persistent, stateful conversations where each conversation maintains state across multiple interactions and can survive system restarts and failures.

## Running the Example

First, start the worker:
```bash
uv run openai_agents/customer_service/run_worker.py
```

Then start a customer service conversation:
```bash
uv run openai_agents/customer_service/run_customer_service_client.py --conversation-id my-conversation-123
```

You can start a new conversation with any unique conversation ID, or resume existing conversations by using the same conversation ID. The conversation state is persisted in the Temporal workflow, allowing you to resume conversations even after restarting the client.
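The resumption behavior rests on a simple idea that can be sketched with an in-memory stand-in for the server: a stable conversation ID maps deterministically to a workflow, so a client that reconnects with the same ID reaches the same state. The names below are illustrative, not the sample's actual scheme:

```python
# In-memory stand-in for the Temporal server: a stable conversation ID
# always resolves to the same workflow, so its history survives client
# restarts. Names are illustrative, not the sample's actual scheme.

workflows = {}  # workflow_id -> message history

def workflow_id_for(conversation_id):
    return f"customer-service-{conversation_id}"

def send_message(conversation_id, message):
    history = workflows.setdefault(workflow_id_for(conversation_id), [])
    history.append(message)
    return list(history)

send_message("my-conversation-123", "hello")
history = send_message("my-conversation-123", "I lost my booking")
```

In the real sample the workflow holds this state durably on the server, so even a worker restart does not lose the conversation.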
@@ -10,7 +10,7 @@
from temporalio.contrib.openai_agents import OpenAIAgentsPlugin
from temporalio.service import RPCError, RPCStatusCode

-from openai_agents.workflows.customer_service_workflow import (
+from openai_agents.customer_service.workflows.customer_service_workflow import (
    CustomerServiceWorkflow,
    ProcessUserMessageInput,
)
39 changes: 39 additions & 0 deletions openai_agents/customer_service/run_worker.py
@@ -0,0 +1,39 @@
from __future__ import annotations

import asyncio
from datetime import timedelta

from temporalio.client import Client
from temporalio.contrib.openai_agents import ModelActivityParameters, OpenAIAgentsPlugin
from temporalio.worker import Worker

from openai_agents.customer_service.workflows.customer_service_workflow import (
    CustomerServiceWorkflow,
)


async def main():
    # Create client connected to server at the given address
    client = await Client.connect(
        "localhost:7233",
        plugins=[
            OpenAIAgentsPlugin(
                model_params=ModelActivityParameters(
                    start_to_close_timeout=timedelta(seconds=30)
                )
            ),
        ],
    )

    worker = Worker(
        client,
        task_queue="openai-agents-task-queue",
        workflows=[
            CustomerServiceWorkflow,
        ],
    )
    await worker.run()


if __name__ == "__main__":
    asyncio.run(main())
@@ -14,7 +14,7 @@
)
from temporalio import workflow

-from openai_agents.workflows.customer_service import (
+from openai_agents.customer_service.customer_service import (
    AirlineAgentContext,
    ProcessUserMessageInput,
    init_agents,
35 changes: 35 additions & 0 deletions openai_agents/research_bot/README.md
@@ -0,0 +1,35 @@
# Research Bot

Multi-agent research system with specialized roles, extended with Temporal's durable execution.

*Adapted from [OpenAI Agents SDK research bot](https://github.com/openai/openai-agents-python/tree/main/examples/research_bot)*

## Architecture

The flow is:

1. User enters their research topic
2. `planner_agent` comes up with a plan to search the web for information. The plan is a list of search queries, with a search term and a reason for each query.
3. For each search item, we run a `search_agent`, which uses the Web Search tool to search for that term and summarize the results. These all run in parallel.
4. Finally, the `writer_agent` receives the search summaries, and creates a written report.
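The flow above can be sketched with stub agents; in the sample each stage is a real agent, and the searches run concurrently inside the Temporal workflow:

```python
# Planner -> parallel searches -> writer, with stubs standing in for the
# real planner_agent, search_agent, and writer_agent.

import asyncio

def planner_agent(topic):
    # A real planner returns search terms plus a reason for each query.
    return [f"{topic} overview", f"{topic} recent developments"]

async def search_agent(query):
    await asyncio.sleep(0)  # stands in for a real web search + summary
    return f"summary({query})"

def writer_agent(topic, summaries):
    return f"Report on {topic}: " + "; ".join(summaries)

async def research(topic):
    queries = planner_agent(topic)
    summaries = await asyncio.gather(*(search_agent(q) for q in queries))
    return writer_agent(topic, list(summaries))

report = asyncio.run(research("fusion energy"))
```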

## Running the Example

First, start the worker:
```bash
uv run openai_agents/research_bot/run_worker.py
```

Then run the research workflow:
```bash
uv run openai_agents/research_bot/run_research_workflow.py
```

## Suggested Improvements

If you're building your own research bot, some ideas to add to this are:

1. Retrieval: Add support for fetching relevant information from a vector store. You could use the File Search tool for this.
2. Image and file upload: Allow users to attach PDFs or other files, as baseline context for the research.
3. More planning and thinking: Models often produce better results given more time to think. Improve the planning process to come up with a better plan, and add an evaluation step so that the model can choose to improve its results, search for more stuff, etc.
4. Code execution: Allow running code, which is useful for data analysis.
@@ -8,13 +8,13 @@
# TODO: Restore progress updates
from agents import RunConfig, Runner, custom_span, trace

-from openai_agents.workflows.research_agents.planner_agent import (
+from openai_agents.research_bot.agents.planner_agent import (
    WebSearchItem,
    WebSearchPlan,
    new_planner_agent,
)
-from openai_agents.workflows.research_agents.search_agent import new_search_agent
-from openai_agents.workflows.research_agents.writer_agent import (
+from openai_agents.research_bot.agents.search_agent import new_search_agent
+from openai_agents.research_bot.agents.writer_agent import (
    ReportData,
    new_writer_agent,
)