Note: Pre-Release Software - AgentSystems is in active development. Join our Discord for updates and early access. ⭐ Star the main repository to show your support!
This is the reference agent template for AgentSystems. See the main repository for platform overview and documentation.
A minimal starter for building containerized AI agents that plug into the AgentSystems platform.
- Built on FastAPI + LangChain + LangGraph
- Uses OCI standard labels for container metadata
- No version tags or Docker images are published here – this repo is a template, not a distributable agent
The Agent Template is a minimal, batteries-included starter repo for building containerized AI agents that plug into the Agent Control Plane.
This repo is intended to be used via GitHub's "Use this template" button or the `gh repo create` CLI. It can also be cloned directly for experiments.
| Path / file | Purpose |
|---|---|
| `main.py` | FastAPI app exposing `/invoke` and `/health` endpoints. Contains an `invoke()` function you can customize. |
| `Dockerfile` | Multi-stage Python 3.13 image with OCI labels, license attribution, and healthcheck. |
| `requirements.txt` | Runtime dependencies. |
| Langfuse tracing | Pre-configured via `agentsystems-toolkit` for observability. |
Note: Agent metadata (model dependencies, egress requirements, setup instructions) is defined in the agent-index when you publish your agent, not in the container itself.
```mermaid
graph LR
    client((Client)) --> gateway[Gateway]
    gateway --> agent((Your-Agent))
    agent --> lf[Langfuse]
```
- Client calls `POST /your-agent` on the Gateway.
- Gateway forwards to your container's `/invoke` endpoint and injects `X-Thread-Id`.
- Your code adds Langfuse traces and responds with JSON.
```bash
docker compose -f compose/docker-standard.yml up --build
```

After a few seconds check http://localhost:8000/docs for the Swagger UI.
- Click "Use this template" on GitHub and create a new repository (e.g.
johndoe/my-agent). - Clone your new repo and customize
main.py:- Update the FastAPI app metadata (lines 38-42)
- Modify the
State,InvokeRequest, andInvokeResponsemodels - Implement your agent logic in the graph nodes
- Start the agent locally with hot-reload:

```bash
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
uvicorn main:app --reload --port 8000
```

Open http://localhost:8000/docs to test the `/invoke` endpoint.
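The customization steps above might end up looking roughly like the minimal sketch below. The field names, response shape, and echo logic are illustrative assumptions, not the template's exact definitions:

```python
# Illustrative sketch only - field names and logic are placeholders.
from datetime import datetime, timezone

from fastapi import FastAPI, Request
from pydantic import BaseModel

app = FastAPI(title="my-agent", version="0.1.0")  # update app metadata here


class InvokeRequest(BaseModel):
    prompt: str  # replace with your agent's input fields


class InvokeResponse(BaseModel):
    thread_id: str  # echoed back so audit logs and traces stay correlated
    reply: str
    timestamp: str


@app.post("/invoke", response_model=InvokeResponse)
async def invoke(body: InvokeRequest, request: Request) -> InvokeResponse:
    # The gateway injects X-Thread-Id; echo it back in the response.
    thread_id = request.headers.get("X-Thread-Id", "")
    return InvokeResponse(
        thread_id=thread_id,
        reply=f"Echo: {body.prompt}",  # replace with your agent / graph logic
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
```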
AgentSystems CLI waits until Docker marks your container healthy before routing traffic. Add a simple HEALTHCHECK to your Dockerfile so the platform knows when the agent is ready:
```dockerfile
# after EXPOSE 8000
ENV PORT=8000
HEALTHCHECK --interval=10s --retries=3 CMD curl -sf http://localhost:${PORT}/health || exit 1
```

The template exposes a GET `/health` endpoint that returns 200, so the example healthcheck will work with the default app.
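If you replace the template's app entirely, any route that returns 200 satisfies this check; a minimal sketch of such an endpoint:

```python
from fastapi import FastAPI

app = FastAPI()


@app.get("/health")
async def health() -> dict:
    # Any 200 response satisfies the Docker HEALTHCHECK above.
    return {"status": "ok"}
```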
Build your agent image with standard Docker commands:
```bash
docker build -t yourname/my-agent:0.1.0 .
```

To embed metadata in OCI labels (recommended):
```bash
docker build \
  --build-arg AGENT_NAME="my-agent" \
  --build-arg AGENT_DESCRIPTION="My custom agent" \
  --build-arg AGENT_DEVELOPER="yourname" \
  --build-arg VERSION="0.1.0" \
  -t yourname/my-agent:0.1.0 \
  .
```

Then push to your registry:

```bash
docker push yourname/my-agent:0.1.0
```

Add the service to agent-platform-deployments:
```yaml
# compose/local/docker-compose.yml
echo-agent:
  image: mycorp/echo-agent:0.1
  networks:
    - agents-int
  labels:
    - agent.enabled=true
    - agent.port=8000
```

The Gateway should now route `POST /echo-agent` to your container (once the container is healthy and registered).
| Var | Purpose |
|---|---|
| `LANGFUSE_PUBLIC_KEY` / `LANGFUSE_SECRET_KEY` | Needed for Langfuse tracing. |
| Any model API keys | e.g. `OPENAI_API_KEY`, `ANTHROPIC_API_KEY` – accessed in `invoke()`. |
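Inside `invoke()` these are read from the environment as usual; a minimal sketch (which keys you need, and how the toolkit reacts to missing Langfuse keys, depends on your agent and its configuration):

```python
import os

# Langfuse credentials for tracing
langfuse_public_key = os.environ.get("LANGFUSE_PUBLIC_KEY")
langfuse_secret_key = os.environ.get("LANGFUSE_SECRET_KEY")

# Model provider keys - only for the providers your agent actually calls
openai_api_key = os.environ.get("OPENAI_API_KEY")
anthropic_api_key = os.environ.get("ANTHROPIC_API_KEY")

if not (langfuse_public_key and langfuse_secret_key):
    print("Warning: Langfuse keys not set")
```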
Agents can receive file uploads and access shared artifacts through the /artifacts volume mounted at runtime. The platform uses a thread-centric structure where each request gets its own directory.
Upload files using multipart requests to the gateway:
```bash
# Upload file with JSON payload
curl -X POST http://localhost:18080/invoke/agent-template \
  -H "Authorization: Bearer your-token" \
  -F "file=@input.txt" \
  -F 'json={"sync": true}'
```

```
/artifacts/
├── {thread-id-1}/
│   ├── in/           # Input files (uploaded by client)
│   │   └── input.txt
│   └── out/          # Output files (created by agent)
│       └── result.txt
└── {thread-id-2}/
    ├── in/
    └── out/
```
```python
import pathlib  # at module level in main.py

# In your agent's invoke() function
thread_id = request.headers.get("X-Thread-Id", "")
in_dir = pathlib.Path("/artifacts") / thread_id / "in"

# Check for uploaded files
if (in_dir / "data.txt").exists():
    content = (in_dir / "data.txt").read_text()

# Create output directory and write results
out_dir = pathlib.Path("/artifacts") / thread_id / "out"
out_dir.mkdir(parents=True, exist_ok=True)
(out_dir / "result.txt").write_text("Processing complete")
```

Check artifacts from any container with the volume mounted:
```bash
# List all threads
docker exec local-gateway-1 ls -la /artifacts/

# Read specific output file
docker exec local-gateway-1 cat /artifacts/{thread-id}/out/result.txt

# Use CLI helper
agentsystems artifacts-path {thread-id} result.txt
```

- Keep the container port consistent (8080 or 8000); the Gateway connects over the internal Docker network, so host port mapping is optional.
- You should return JSON with the `thread_id` you received – this keeps the audit log and Langfuse trace in sync.
- Use the Add a New Agent guide when integrating into the full stack.
To make your agent discoverable in the AgentSystems platform:
- Build and push your Docker image to a container registry
- Publish metadata to the agent-index
- Users can discover and install your agent via the platform UI
See the AgentSystems documentation for detailed publishing instructions.
Issues and PRs are welcome – feel free to open a discussion if you need changes to the template.
- Use this template to create your own repository.
- Customize `main.py`:
  - Update FastAPI metadata
  - Define your request/response models
  - Implement your agent logic
- Build and run locally (see sections above).
Request contract

- Client must include `Authorization: Bearer <token>` header (any placeholder for now).
- Gateway injects `X-Thread-Id: <uuid>` header before forwarding to the agent.

Response contract

- JSON must include the same `thread_id` so audit logs can correlate request/response pairs.

Example curl (once the agent is behind the gateway):

```bash
curl -X POST localhost:18080/my-agent \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "Hello"}'
```

Response:

```json
{
  "thread_id": "550e8400-e29b-41d4-a716-446655440000",
  "reply": "Echo: Hello",
  "timestamp": "2025-06-16T09:34:00Z"
}
```
- Build & run locally:

```bash
docker build -t my-agent .
docker run -p 8000:8000 my-agent
```

- Test:

```bash
curl -X POST localhost:8000/invoke -H 'Content-Type: application/json' \
  -d '{"prompt": "Hello"}'
```

In production you usually build & push the image, then reference it in the deployment bundle stored in agent-platform-deployments.
```yaml
# example snippet in compose/local/docker-compose.yml
my-agent:
  image: mycorp/my-agent:1.0
  labels:
    - agent.enabled=true
    - agent.port=8000
```
The Gateway will auto-discover the container and route `POST /my-agent` to its `/invoke` endpoint.
The project ships with a `ci.yml` workflow that now goes beyond linting:
- Runs pre-commit hooks (ruff, black, shellcheck, hadolint).
- Builds the agent Docker image.
- Starts the container mapped to `localhost:9800` (internal port 8000).
- Polls `http://localhost:9800/health` for up to 60 s and fails the job if the endpoint never returns 200 OK.
- Removes the container in a cleanup step.
This helps verify that PRs produce images that boot successfully and expose the health endpoint.
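For reference, the health poll is roughly equivalent to the Python sketch below (the actual workflow step may use curl in a shell loop instead):

```python
# Poll the mapped health endpoint for up to 60 s, as the CI job does.
import time

import requests

deadline = time.monotonic() + 60

while time.monotonic() < deadline:
    try:
        if requests.get("http://localhost:9800/health", timeout=2).status_code == 200:
            print("agent is healthy")
            break
    except requests.RequestException:
        pass  # container may still be starting
    time.sleep(2)
else:
    raise SystemExit("health endpoint never returned 200 OK within 60 s")
```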
Licensed under the Apache-2.0 license.