diff --git a/openai_agents/README.md b/openai_agents/README.md index 1eceb747..96975ec2 100644 --- a/openai_agents/README.md +++ b/openai_agents/README.md @@ -21,42 +21,13 @@ This approach ensures that AI agent workflows are durable, observable, and can h - Required dependencies installed via `uv sync --group openai-agents` - OpenAI API key set as environment variable: `export OPENAI_API_KEY=your_key_here` -## Running the Examples +## Examples -1. **Start the worker** (supports all samples): - ```bash - uv run openai_agents/run_worker.py - ``` +Each directory contains a complete example with its own README for detailed instructions: -2. **Run individual samples** in separate terminals: +- **[Basic Examples](./basic/README.md)** - Simple agent examples including a hello world agent and a tools-enabled agent that can access external APIs like weather services. +- **[Agent Patterns](./agent_patterns/README.md)** - Advanced patterns for agent composition, including using agents as tools within other agents. +- **[Research Bot](./research_bot/README.md)** - Multi-agent research system with specialized roles: a planner agent, search agent, and writer agent working together to conduct comprehensive research. +- **[Customer Service](./customer_service/README.md)** - Interactive customer service agent with escalation capabilities, demonstrating conversational workflows. -### Basic Agent Examples - -- **Hello World Agent** - Simple agent that responds in haikus: - ```bash - uv run openai_agents/run_hello_world_workflow.py - ``` - -- **Tools Agent** - Agent with access to external tools (weather API): - ```bash - uv run openai_agents/run_tools_workflow.py - ``` - -### Advanced Multi-Agent Examples - -- **Research Workflow** - Multi-agent research system with specialized roles: - ```bash - uv run openai_agents/run_research_workflow.py - ``` - Features a planner agent, search agent, and writer agent working together. - -- **Customer Service Workflow** - Customer service agent with escalation capabilities (interactive): - ```bash - uv run openai_agents/run_customer_service_client.py --conversation-id my-conversation-123 - ``` - -- **Agents as Tools** - Demonstrate using agents as tools within other agents: - ```bash - uv run openai_agents/run_agents_as_tools_workflow.py - ``` diff --git a/openai_agents/agent_patterns/README.md b/openai_agents/agent_patterns/README.md new file mode 100644 index 00000000..26867f10 --- /dev/null +++ b/openai_agents/agent_patterns/README.md @@ -0,0 +1,68 @@ +# Agent Patterns + +Common agentic patterns extended with Temporal's durable execution capabilities. + +*Adapted from [OpenAI Agents SDK agent patterns](https://github.com/openai/openai-agents-python/tree/main/examples/agent_patterns)* + +## Running the Examples + +First, start the worker (supports all patterns): +```bash +uv run openai_agents/agent_patterns/run_worker.py +``` + +Then run individual examples in separate terminals: + +## Deterministic Flows + +**TODO** + +A common tactic is to break down a task into a series of smaller steps. Each task can be performed by an agent, and the output of one agent is used as input to the next. For example, if your task was to generate a story, you could break it down into the following steps: + +1. Generate an outline +2. Generate the story +3. Generate the ending + +Each of these steps can be performed by an agent. The output of one agent is used as input to the next. 
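As a rough illustration of this pattern (not a file added by this change), a deterministic chain can be written directly inside a Temporal workflow with the Agents SDK's `Agent` and `Runner`; the workflow class, agent names, and instructions below are hypothetical:

```python
from agents import Agent, Runner
from temporalio import workflow


@workflow.defn
class StoryFlowWorkflow:
    @workflow.run
    async def run(self, topic: str) -> str:
        outline_agent = Agent(
            name="outline_agent",
            instructions="Write a brief outline for a short story on the given topic.",
        )
        story_agent = Agent(
            name="story_agent",
            instructions="Write a short story that follows the provided outline.",
        )

        # Step 1: generate the outline.
        outline = await Runner.run(outline_agent, input=topic)
        # Step 2: the outline becomes the input to the next agent.
        story = await Runner.run(story_agent, input=str(outline.final_output))
        return str(story.final_output)
```

Because each model call is executed as a Temporal activity by the `OpenAIAgentsPlugin`, a worker crash between steps resumes the chain from the last completed step instead of starting over.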
+ +## Handoffs and Routing + +**TODO** + +In many situations, you have specialized sub-agents that handle specific tasks. You can use handoffs to route the task to the right agent. + +For example, you might have a frontline agent that receives a request, and then hands off to a specialized agent based on the language of the request. + +## Agents as Tools + +The mental model for handoffs is that the new agent "takes over". It sees the previous conversation history, and owns the conversation from that point onwards. However, this is not the only way to use agents. You can also use agents as a tool - the tool agent goes off and runs on its own, and then returns the result to the original agent. + +For example, you could model a translation task as tool calls instead: rather than handing over to the language-specific agent, you could call the agent as a tool, and then use the result in the next step. This enables things like translating multiple languages at once. + +```bash +uv run openai_agents/agent_patterns/run_agents_as_tools_workflow.py +``` + +## LLM-as-a-Judge + +**TODO** + +LLMs can often improve the quality of their output if given feedback. A common pattern is to generate a response using a model, and then use a second model to provide feedback. You can even use a small model for the initial generation and a larger model for the feedback, to optimize cost. + +For example, you could use an LLM to generate an outline for a story, and then use a second LLM to evaluate the outline and provide feedback. You can then use the feedback to improve the outline, and repeat until the LLM is satisfied with the outline. + +## Parallelization + +**TODO** + +Running multiple agents in parallel is a common pattern. This can be useful for both latency (e.g. if you have multiple steps that don't depend on each other) and also for other reasons e.g. generating multiple responses and picking the best one. + +## Guardrails + +**TODO** + +Related to parallelization, you often want to run input guardrails to make sure the inputs to your agents are valid. For example, if you have a customer support agent, you might want to make sure that the user isn't trying to ask for help with a math problem. + +You can definitely do this without any special Agents SDK features by using parallelization, but we support a special guardrail primitive. Guardrails can have a "tripwire" - if the tripwire is triggered, the agent execution will immediately stop and a `GuardrailTripwireTriggered` exception will be raised. + +This is really useful for latency: for example, you might have a very fast model that runs the guardrail and a slow model that runs the actual agent. You wouldn't want to wait for the slow model to finish, so guardrails let you quickly reject invalid inputs. 
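For reference, an input guardrail sketch with the Agents SDK might look like the following; the checker prompt and agent names are invented for illustration, and for input guardrails the tripwire surfaces as the SDK's `InputGuardrailTripwireTriggered` exception:

```python
from agents import (
    Agent,
    GuardrailFunctionOutput,
    InputGuardrailTripwireTriggered,
    Runner,
    input_guardrail,
)

# Small, fast agent used only for the guardrail check.
checker_agent = Agent(
    name="guardrail_checker",
    instructions="Reply with exactly 'yes' or 'no': is the user asking for math homework help?",
)


@input_guardrail
async def math_homework_guardrail(ctx, agent, user_input) -> GuardrailFunctionOutput:
    result = await Runner.run(checker_agent, input=user_input)
    tripped = "yes" in str(result.final_output).lower()
    return GuardrailFunctionOutput(
        output_info=result.final_output,
        tripwire_triggered=tripped,
    )


support_agent = Agent(
    name="support_agent",
    instructions="Answer customer-support questions.",
    input_guardrails=[math_homework_guardrail],
)


async def answer(question: str) -> str:
    try:
        result = await Runner.run(support_agent, input=question)
        return str(result.final_output)
    except InputGuardrailTripwireTriggered:
        return "Sorry, this assistant can't help with homework questions."
```

The cheap checker runs alongside the main agent, so an off-topic request can be rejected without waiting for the slower model to finish.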
\ No newline at end of file diff --git a/openai_agents/run_agents_as_tools_workflow.py b/openai_agents/agent_patterns/run_agents_as_tools_workflow.py similarity index 85% rename from openai_agents/run_agents_as_tools_workflow.py rename to openai_agents/agent_patterns/run_agents_as_tools_workflow.py index 822ca709..a42d3aac 100644 --- a/openai_agents/run_agents_as_tools_workflow.py +++ b/openai_agents/agent_patterns/run_agents_as_tools_workflow.py @@ -3,7 +3,9 @@ from temporalio.client import Client from temporalio.contrib.openai_agents import OpenAIAgentsPlugin -from openai_agents.workflows.agents_as_tools_workflow import AgentsAsToolsWorkflow +from openai_agents.agent_patterns.workflows.agents_as_tools_workflow import ( + AgentsAsToolsWorkflow, +) async def main(): diff --git a/openai_agents/agent_patterns/run_worker.py b/openai_agents/agent_patterns/run_worker.py new file mode 100644 index 00000000..3ca0eda2 --- /dev/null +++ b/openai_agents/agent_patterns/run_worker.py @@ -0,0 +1,39 @@ +from __future__ import annotations + +import asyncio +from datetime import timedelta + +from temporalio.client import Client +from temporalio.contrib.openai_agents import ModelActivityParameters, OpenAIAgentsPlugin +from temporalio.worker import Worker + +from openai_agents.agent_patterns.workflows.agents_as_tools_workflow import ( + AgentsAsToolsWorkflow, +) + + +async def main(): + # Create client connected to server at the given address + client = await Client.connect( + "localhost:7233", + plugins=[ + OpenAIAgentsPlugin( + model_params=ModelActivityParameters( + start_to_close_timeout=timedelta(seconds=30) + ) + ), + ], + ) + + worker = Worker( + client, + task_queue="openai-agents-task-queue", + workflows=[ + AgentsAsToolsWorkflow, + ], + ) + await worker.run() + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/openai_agents/workflows/agents_as_tools_workflow.py b/openai_agents/agent_patterns/workflows/agents_as_tools_workflow.py similarity index 100% rename from openai_agents/workflows/agents_as_tools_workflow.py rename to openai_agents/agent_patterns/workflows/agents_as_tools_workflow.py diff --git a/openai_agents/basic/README.md b/openai_agents/basic/README.md new file mode 100644 index 00000000..56d9f528 --- /dev/null +++ b/openai_agents/basic/README.md @@ -0,0 +1,25 @@ +# Basic Agent Examples + +Simple examples to get started with OpenAI Agents SDK integrated with Temporal workflows. 
+ +*Adapted from [OpenAI Agents SDK basic examples](https://github.com/openai/openai-agents-python/tree/main/examples/basic)* + +## Running the Examples + +First, start the worker (supports all basic examples): +```bash +uv run openai_agents/basic/run_worker.py +``` + +Then run individual examples in separate terminals: + +### Hello World Agent +```bash +uv run openai_agents/basic/run_hello_world_workflow.py +``` + +### Tools Agent +Agent with access to external tools (weather API): +```bash +uv run openai_agents/basic/run_tools_workflow.py +``` \ No newline at end of file diff --git a/openai_agents/workflows/get_weather_activity.py b/openai_agents/basic/activities/get_weather_activity.py similarity index 100% rename from openai_agents/workflows/get_weather_activity.py rename to openai_agents/basic/activities/get_weather_activity.py diff --git a/openai_agents/run_hello_world_workflow.py b/openai_agents/basic/run_hello_world_workflow.py similarity index 89% rename from openai_agents/run_hello_world_workflow.py rename to openai_agents/basic/run_hello_world_workflow.py index 9ec8833f..412dbb58 100644 --- a/openai_agents/run_hello_world_workflow.py +++ b/openai_agents/basic/run_hello_world_workflow.py @@ -3,7 +3,7 @@ from temporalio.client import Client from temporalio.contrib.openai_agents import OpenAIAgentsPlugin -from openai_agents.workflows.hello_world_workflow import HelloWorldAgent +from openai_agents.basic.workflows.hello_world_workflow import HelloWorldAgent async def main(): diff --git a/openai_agents/run_tools_workflow.py b/openai_agents/basic/run_tools_workflow.py similarity index 89% rename from openai_agents/run_tools_workflow.py rename to openai_agents/basic/run_tools_workflow.py index cfb3d811..eb6adcc1 100644 --- a/openai_agents/run_tools_workflow.py +++ b/openai_agents/basic/run_tools_workflow.py @@ -3,7 +3,7 @@ from temporalio.client import Client from temporalio.contrib.openai_agents import OpenAIAgentsPlugin -from openai_agents.workflows.tools_workflow import ToolsWorkflow +from openai_agents.basic.workflows.tools_workflow import ToolsWorkflow async def main(): diff --git a/openai_agents/run_worker.py b/openai_agents/basic/run_worker.py similarity index 58% rename from openai_agents/run_worker.py rename to openai_agents/basic/run_worker.py index b17fbd0f..ec94e907 100644 --- a/openai_agents/run_worker.py +++ b/openai_agents/basic/run_worker.py @@ -7,12 +7,9 @@ from temporalio.contrib.openai_agents import ModelActivityParameters, OpenAIAgentsPlugin from temporalio.worker import Worker -from openai_agents.workflows.agents_as_tools_workflow import AgentsAsToolsWorkflow -from openai_agents.workflows.customer_service_workflow import CustomerServiceWorkflow -from openai_agents.workflows.get_weather_activity import get_weather -from openai_agents.workflows.hello_world_workflow import HelloWorldAgent -from openai_agents.workflows.research_bot_workflow import ResearchWorkflow -from openai_agents.workflows.tools_workflow import ToolsWorkflow +from openai_agents.basic.activities.get_weather_activity import get_weather +from openai_agents.basic.workflows.hello_world_workflow import HelloWorldAgent +from openai_agents.basic.workflows.tools_workflow import ToolsWorkflow async def main(): @@ -22,7 +19,7 @@ async def main(): plugins=[ OpenAIAgentsPlugin( model_params=ModelActivityParameters( - start_to_close_timeout=timedelta(seconds=120) + start_to_close_timeout=timedelta(seconds=30) ) ), ], @@ -34,9 +31,6 @@ async def main(): workflows=[ HelloWorldAgent, ToolsWorkflow, - 
ResearchWorkflow, - CustomerServiceWorkflow, - AgentsAsToolsWorkflow, ], activities=[ get_weather, diff --git a/openai_agents/workflows/hello_world_workflow.py b/openai_agents/basic/workflows/hello_world_workflow.py similarity index 100% rename from openai_agents/workflows/hello_world_workflow.py rename to openai_agents/basic/workflows/hello_world_workflow.py diff --git a/openai_agents/workflows/tools_workflow.py b/openai_agents/basic/workflows/tools_workflow.py similarity index 90% rename from openai_agents/workflows/tools_workflow.py rename to openai_agents/basic/workflows/tools_workflow.py index c9f80e9f..70964dc0 100644 --- a/openai_agents/workflows/tools_workflow.py +++ b/openai_agents/basic/workflows/tools_workflow.py @@ -6,7 +6,7 @@ from temporalio import workflow from temporalio.contrib import openai_agents as temporal_agents -from openai_agents.workflows.get_weather_activity import get_weather +from openai_agents.basic.activities.get_weather_activity import get_weather @workflow.defn diff --git a/openai_agents/customer_service/README.md b/openai_agents/customer_service/README.md new file mode 100644 index 00000000..34b9504d --- /dev/null +++ b/openai_agents/customer_service/README.md @@ -0,0 +1,21 @@ +# Customer Service + +Interactive customer service agent with escalation capabilities, extended with Temporal's durable conversational workflows. + +*Adapted from [OpenAI Agents SDK customer service](https://github.com/openai/openai-agents-python/tree/main/examples/customer_service)* + +This example demonstrates how to build persistent, stateful conversations where each conversation maintains state across multiple interactions and can survive system restarts and failures. + +## Running the Example + +First, start the worker: +```bash +uv run openai_agents/customer_service/run_worker.py +``` + +Then start a customer service conversation: +```bash +uv run openai_agents/customer_service/run_customer_service_client.py --conversation-id my-conversation-123 +``` + +You can start a new conversation with any unique conversation ID, or resume existing conversations by using the same conversation ID. The conversation state is persisted in the Temporal workflow, allowing you to resume conversations even after restarting the client. 
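The sketch below shows one generic way a durable conversation loop like this can be structured; it is illustrative only, not the `CustomerServiceWorkflow` from this sample, and the signal and query names are made up:

```python
from temporalio import workflow


@workflow.defn
class DurableConversationSketch:
    def __init__(self) -> None:
        self.history: list[str] = []
        self.pending: list[str] = []

    @workflow.run
    async def run(self) -> None:
        while True:
            # Durable wait: the worker can restart here and the conversation
            # resumes at the same point when the workflow replays.
            await workflow.wait_condition(lambda: len(self.pending) > 0)
            message = self.pending.pop(0)
            self.history.append(f"user: {message}")
            # ... run an agent over the history and append its reply ...
            # A long-lived conversation would eventually continue-as-new
            # to keep the event history bounded.

    @workflow.signal
    async def submit_message(self, message: str) -> None:
        self.pending.append(message)

    @workflow.query
    def conversation_history(self) -> list[str]:
        return self.history
```

Reusing the same workflow for a given conversation ID is what lets the client above resume prior sessions after a restart.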
\ No newline at end of file diff --git a/openai_agents/workflows/customer_service.py b/openai_agents/customer_service/customer_service.py similarity index 100% rename from openai_agents/workflows/customer_service.py rename to openai_agents/customer_service/customer_service.py diff --git a/openai_agents/run_customer_service_client.py b/openai_agents/customer_service/run_customer_service_client.py similarity index 96% rename from openai_agents/run_customer_service_client.py rename to openai_agents/customer_service/run_customer_service_client.py index ef8aaf62..e66419e4 100644 --- a/openai_agents/run_customer_service_client.py +++ b/openai_agents/customer_service/run_customer_service_client.py @@ -10,7 +10,7 @@ from temporalio.contrib.openai_agents import OpenAIAgentsPlugin from temporalio.service import RPCError, RPCStatusCode -from openai_agents.workflows.customer_service_workflow import ( +from openai_agents.customer_service.workflows.customer_service_workflow import ( CustomerServiceWorkflow, ProcessUserMessageInput, ) diff --git a/openai_agents/customer_service/run_worker.py b/openai_agents/customer_service/run_worker.py new file mode 100644 index 00000000..b82f6919 --- /dev/null +++ b/openai_agents/customer_service/run_worker.py @@ -0,0 +1,39 @@ +from __future__ import annotations + +import asyncio +from datetime import timedelta + +from temporalio.client import Client +from temporalio.contrib.openai_agents import ModelActivityParameters, OpenAIAgentsPlugin +from temporalio.worker import Worker + +from openai_agents.customer_service.workflows.customer_service_workflow import ( + CustomerServiceWorkflow, +) + + +async def main(): + # Create client connected to server at the given address + client = await Client.connect( + "localhost:7233", + plugins=[ + OpenAIAgentsPlugin( + model_params=ModelActivityParameters( + start_to_close_timeout=timedelta(seconds=30) + ) + ), + ], + ) + + worker = Worker( + client, + task_queue="openai-agents-task-queue", + workflows=[ + CustomerServiceWorkflow, + ], + ) + await worker.run() + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/openai_agents/workflows/customer_service_workflow.py b/openai_agents/customer_service/workflows/customer_service_workflow.py similarity index 98% rename from openai_agents/workflows/customer_service_workflow.py rename to openai_agents/customer_service/workflows/customer_service_workflow.py index 3186c5ab..c816d868 100644 --- a/openai_agents/workflows/customer_service_workflow.py +++ b/openai_agents/customer_service/workflows/customer_service_workflow.py @@ -14,7 +14,7 @@ ) from temporalio import workflow -from openai_agents.workflows.customer_service import ( +from openai_agents.customer_service.customer_service import ( AirlineAgentContext, ProcessUserMessageInput, init_agents, diff --git a/openai_agents/research_bot/README.md b/openai_agents/research_bot/README.md new file mode 100644 index 00000000..f4f4a074 --- /dev/null +++ b/openai_agents/research_bot/README.md @@ -0,0 +1,35 @@ +# Research Bot + +Multi-agent research system with specialized roles, extended with Temporal's durable execution. + +*Adapted from [OpenAI Agents SDK research bot](https://github.com/openai/openai-agents-python/tree/main/examples/research_bot)* + +## Architecture + +The flow is: + +1. User enters their research topic +2. `planner_agent` comes up with a plan to search the web for information. The plan is a list of search queries, with a search term and a reason for each query. +3. 
For each search item, we run a `search_agent`, which uses the Web Search tool to search for that term and summarize the results. These all run in parallel. +4. Finally, the `writer_agent` receives the search summaries, and creates a written report. + +## Running the Example + +First, start the worker: +```bash +uv run openai_agents/research_bot/run_worker.py +``` + +Then run the research workflow: +```bash +uv run openai_agents/research_bot/run_research_workflow.py +``` + +## Suggested Improvements + +If you're building your own research bot, some ideas to add to this are: + +1. Retrieval: Add support for fetching relevant information from a vector store. You could use the File Search tool for this. +2. Image and file upload: Allow users to attach PDFs or other files, as baseline context for the research. +3. More planning and thinking: Models often produce better results given more time to think. Improve the planning process to come up with a better plan, and add an evaluation step so that the model can choose to improve its results, search for more stuff, etc. +4. Code execution: Allow running code, which is useful for data analysis. \ No newline at end of file diff --git a/openai_agents/workflows/research_agents/planner_agent.py b/openai_agents/research_bot/agents/planner_agent.py similarity index 100% rename from openai_agents/workflows/research_agents/planner_agent.py rename to openai_agents/research_bot/agents/planner_agent.py diff --git a/openai_agents/workflows/research_agents/research_manager.py b/openai_agents/research_bot/agents/research_manager.py similarity index 91% rename from openai_agents/workflows/research_agents/research_manager.py rename to openai_agents/research_bot/agents/research_manager.py index 356da1d7..62a77a07 100644 --- a/openai_agents/workflows/research_agents/research_manager.py +++ b/openai_agents/research_bot/agents/research_manager.py @@ -8,13 +8,13 @@ # TODO: Restore progress updates from agents import RunConfig, Runner, custom_span, trace - from openai_agents.workflows.research_agents.planner_agent import ( + from openai_agents.research_bot.agents.planner_agent import ( WebSearchItem, WebSearchPlan, new_planner_agent, ) - from openai_agents.workflows.research_agents.search_agent import new_search_agent - from openai_agents.workflows.research_agents.writer_agent import ( + from openai_agents.research_bot.agents.search_agent import new_search_agent + from openai_agents.research_bot.agents.writer_agent import ( ReportData, new_writer_agent, ) diff --git a/openai_agents/workflows/research_agents/search_agent.py b/openai_agents/research_bot/agents/search_agent.py similarity index 100% rename from openai_agents/workflows/research_agents/search_agent.py rename to openai_agents/research_bot/agents/search_agent.py diff --git a/openai_agents/workflows/research_agents/writer_agent.py b/openai_agents/research_bot/agents/writer_agent.py similarity index 100% rename from openai_agents/workflows/research_agents/writer_agent.py rename to openai_agents/research_bot/agents/writer_agent.py diff --git a/openai_agents/run_research_workflow.py b/openai_agents/research_bot/run_research_workflow.py similarity index 88% rename from openai_agents/run_research_workflow.py rename to openai_agents/research_bot/run_research_workflow.py index 5279648e..e2739ef5 100644 --- a/openai_agents/run_research_workflow.py +++ b/openai_agents/research_bot/run_research_workflow.py @@ -3,7 +3,7 @@ from temporalio.client import Client from temporalio.contrib.openai_agents import OpenAIAgentsPlugin 
-from openai_agents.workflows.research_bot_workflow import ResearchWorkflow +from openai_agents.research_bot.workflows.research_bot_workflow import ResearchWorkflow async def main(): diff --git a/openai_agents/research_bot/run_worker.py b/openai_agents/research_bot/run_worker.py new file mode 100644 index 00000000..fd6827d6 --- /dev/null +++ b/openai_agents/research_bot/run_worker.py @@ -0,0 +1,37 @@ +from __future__ import annotations + +import asyncio +from datetime import timedelta + +from temporalio.client import Client +from temporalio.contrib.openai_agents import ModelActivityParameters, OpenAIAgentsPlugin +from temporalio.worker import Worker + +from openai_agents.research_bot.workflows.research_bot_workflow import ResearchWorkflow + + +async def main(): + # Create client connected to server at the given address + client = await Client.connect( + "localhost:7233", + plugins=[ + OpenAIAgentsPlugin( + model_params=ModelActivityParameters( + start_to_close_timeout=timedelta(seconds=120) + ) + ), + ], + ) + + worker = Worker( + client, + task_queue="openai-agents-task-queue", + workflows=[ + ResearchWorkflow, + ], + ) + await worker.run() + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/openai_agents/workflows/research_bot_workflow.py b/openai_agents/research_bot/workflows/research_bot_workflow.py similarity index 68% rename from openai_agents/workflows/research_bot_workflow.py rename to openai_agents/research_bot/workflows/research_bot_workflow.py index c0779c02..c2523c8c 100644 --- a/openai_agents/workflows/research_bot_workflow.py +++ b/openai_agents/research_bot/workflows/research_bot_workflow.py @@ -1,6 +1,6 @@ from temporalio import workflow -from openai_agents.workflows.research_agents.research_manager import ResearchManager +from openai_agents.research_bot.agents.research_manager import ResearchManager @workflow.defn diff --git a/openai_agents/workflows/__init__.py b/openai_agents/workflows/__init__.py deleted file mode 100644 index e69de29b..00000000 diff --git a/openai_agents/workflows/research_agents/__init__.py b/openai_agents/workflows/research_agents/__init__.py deleted file mode 100644 index e69de29b..00000000
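As a closing note on the research bot's fan-out step (step 3 in its README above), the parallel search stage can be sketched roughly as follows; the instructions and the helper function are illustrative, not the actual code in `research_manager.py`:

```python
import asyncio

from agents import Agent, Runner, WebSearchTool


async def summarize_searches(search_terms: list[str]) -> list[str]:
    # One search agent definition, run once per planned query, all in parallel.
    search_agent = Agent(
        name="search_agent",
        instructions="Search the web for the term and write a concise summary of the results.",
        tools=[WebSearchTool()],
    )
    results = await asyncio.gather(
        *(Runner.run(search_agent, input=term) for term in search_terms)
    )
    return [str(r.final_output) for r in results]
```

`asyncio.gather` is deterministic under Temporal's workflow event loop, and each underlying model call still runs as its own retryable activity, so a single failed search can retry without redoing the others.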