This repository showcases a deterministic, two-agent workflow that fetches todos from a public API and ingests them into an in-memory sink service. The goal is to provide a minimal, reproducible template for building LangGraph-driven orchestrators that coordinate specialized agents while keeping the implementation approachable for local development and containerized deployments.
- FetchAgent downloads todos from JSONPlaceholder.
- PostAgent transforms todo records and persists them in the sink service.
- Orchestrator exposes a FastAPI endpoint (`POST /query`) that routes requests through the LangGraph graph.
- Intent Parser uses the OpenAI Responses API with a rule-based fallback to recognize ingestion requests and limits.
- Sink API is a FastAPI app that stores records in memory and exposes health and retrieval endpoints.
The agents themselves rely on deterministic logic (no LLM calls) to maintain predictable behavior. Only the intent classifier optionally calls OpenAI.
- Python 3.12
- FastAPI + Uvicorn
- LangGraph
- httpx
- Pydantic
- OpenAI Python SDK (for optional intent classification)
- Docker & Docker Compose
- pytest (unit and integration tests)
```
.
├── orchestrator/
│   ├── app.py               # FastAPI entrypoint exposing /query
│   ├── graph.py             # LangGraph definition for FetchAgent & PostAgent
│   ├── intent.py            # Intent parsing with LLM fallback
│   ├── models.py            # Shared Pydantic models and state schema
│   ├── prompts.py           # Prompt templates (intent + agent context)
│   ├── tools.py             # HTTP helpers for fetch/sink interactions
│   ├── Dockerfile           # Uvicorn-based service image
│   └── requirements.txt
├── sink-api/
│   ├── main.py              # In-memory sink FastAPI app
│   ├── Dockerfile
│   └── requirements.txt
├── tests/
│   ├── test_graph.py        # LangGraph unit coverage
│   ├── test_intent.py       # Intent parsing unit coverage
│   └── test_integration.py  # Orchestrator↔sink integration tests
├── docker-compose.yml       # Multi-service orchestration
├── .env                     # Runtime environment variables (not committed)
├── .env.example             # Sample environment configuration
├── .gitignore               # Git ignore list
└── README.md                # Project documentation
```
- Docker Desktop or Python 3.12 with virtualenv tooling.
- OpenAI API key (if you want the LLM-powered intent classifier).
- Clone the repository:

  ```shell
  git clone <repo_url>
  cd ai-agent
  ```

- Create an environment file:

  ```shell
  cp .env.example .env
  # edit .env with your OPENAI_API_KEY, or leave the placeholder for heuristic-only mode
  ```

- Build and start the stack:

  ```shell
  docker compose up --build
  ```

- Frontend available at http://localhost:3000.
- Orchestrator available at http://localhost:8080.
- Sink API available at http://localhost:8000.
- Stop with Ctrl+C, and clean up via `docker compose down`.
- Create and activate a virtual environment (recommended).
- Install dependencies:

  ```shell
  pip install -r orchestrator/requirements.txt
  pip install -r sink-api/requirements.txt
  ```

- Start the sink API:

  ```shell
  cd sink-api
  uvicorn main:app --reload --port 8000
  ```

- Start the orchestrator in another shell:

  ```shell
  cd orchestrator
  uvicorn app:app --reload --port 8080
  ```

- (Optional) Launch the React frontend, then open http://localhost:3000 in your browser:

  ```shell
  cd frontend
  npm install
  npm run dev -- --host 0.0.0.0 --port 3000
  ```
- Start the stack (Docker or local).
- Open http://localhost:3000 to access the React client.
- Enter a message (e.g., “please ingest todos 3”) and submit to trigger the orchestrator. The UI shows the latest results plus recent history.
You can still interact with the API directly for scripts or testing.
- Ensure the services are running.
- Send a request to the orchestrator:

  ```shell
  curl -X POST http://localhost:8080/query \
    -H "Content-Type: application/json" \
    -d '{"message": "please ingest todos 3"}'
  ```

- Inspect persisted records:

  ```shell
  curl http://localhost:8000/records
  ```
If the message does not request todo ingestion, the orchestrator returns 400 Bad Request. If the sink rejects the payload, the orchestrator surfaces a 502 with error details.
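The status-code mapping described above can be sketched as a small dispatcher. This is a pure-Python stand-in (the real app presumably raises FastAPI's `HTTPException`); the helper names and stubs are assumptions for illustration:

```python
def handle_query(message: str, classify, run_graph) -> tuple:
    """Map orchestrator outcomes to HTTP-style (status, body) pairs."""
    intent = classify(message)
    if intent is None:
        # Message is not an ingestion request -> 400 Bad Request.
        return 400, {"detail": "message does not request todo ingestion"}
    try:
        result = run_graph(intent)
    except RuntimeError as exc:
        # Sink rejected the payload -> surface a 502 with details.
        return 502, {"detail": f"sink error: {exc}"}
    return 200, {"result": result}

# Stubs exercising each path.
classify = lambda m: {"limit": 3} if "ingest" in m else None
ok_graph = lambda intent: {"posted": intent["limit"]}
def bad_graph(intent):
    raise RuntimeError("schema mismatch")
```

Separating classification, graph execution, and status mapping this way keeps each failure path independently testable.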
Run tests after installing dependencies:
```shell
pytest -q
```

- `tests/test_intent.py` covers the intent classifier with and without LLM help.
- `tests/test_graph.py` validates graph behavior using stubbed tool functions.
- `tests/test_integration.py` spins up both FastAPI apps via `TestClient` to ensure the end-to-end flow (success and failure paths).
Key environment variables (see `.env.example`):

- `OPENAI_API_KEY` – optional key for LLM intent classification.
- `OPENAI_MODEL` – defaults to `gpt-4o-mini`; only used when an API key is provided.
- `FETCH_URL` – defaults to the JSONPlaceholder todos endpoint.
- `SINK_URL` – defaults to the internal Docker network URL for the sink.
- `VITE_API_BASE_URL` – overrides the frontend’s API target (defaults to `http://orchestrator:8080` in Docker and `http://localhost:8080` locally).
You can override these per environment, including at compose runtime via `docker compose run -e KEY=value`.
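A minimal way to resolve these variables with their documented defaults is sketched below. The helper name, dict shape, and the two default URLs are assumptions (the project may use Pydantic settings, and its actual defaults live in the code):

```python
import os

def load_settings(env=os.environ) -> dict:
    """Resolve runtime settings, falling back to documented defaults.

    Illustrative only: the default URLs here are guesses based on the
    service names and ports in this README.
    """
    return {
        "openai_api_key": env.get("OPENAI_API_KEY"),  # None -> heuristic-only mode
        "openai_model": env.get("OPENAI_MODEL", "gpt-4o-mini"),
        "fetch_url": env.get("FETCH_URL", "https://jsonplaceholder.typicode.com/todos"),
        "sink_url": env.get("SINK_URL", "http://sink-api:8000"),
    }
```

Accepting the environment mapping as a parameter makes the resolution logic trivial to unit-test without mutating `os.environ`.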
- Add new LangGraph nodes (e.g., validation or enrichment) in `orchestrator/graph.py`.
- Introduce richer prompts while keeping the agents deterministic if desired.
- Persist records to an actual database by replacing the sink API implementation.
- Expand test coverage with additional scenarios (authorization, alternative sources, etc.).
This project is intended as a reference implementation. Feel free to fork it, adapt it to your needs, and share improvements via pull requests or issues.