AgentStack OSS

"The XAMPP for Vertical AI Agents"

License: Apache 2.0

AgentStack bundles all the essential infrastructure for building Vertical AI Agents into a single docker compose up command, running locally for $0.

🚀 What It Solves

Building a Vertical AI Agent (e.g., "AI Dental Billing Clerk") traditionally requires:

  • Database with vector search ($70/mo Pinecone)
  • AI Gateway for model routing ($50/mo Portkey)
  • Observability to debug traces ($100/mo Langfuse)
  • Task Queue for async jobs ($30/mo AWS SQS)
  • Document Parser for PDFs ($100/mo Unstructured.io)

Total SaaS Cost: $350+/month before writing agent code.

AgentStack provides all of this locally with open-source alternatives.

✨ Features

| Component | Technology | Purpose | Cost |
|---|---|---|---|
| AI Gateway | LiteLLM Proxy | Model routing, load balancing, cost tracking | $0 |
| Observability | Arize Phoenix | LLM tracing, latency analysis, prompt comparison | $0 |
| Database | PostgreSQL + pgvector | Relational + vector search (1536-dim embeddings) | $0 |
| REST API | pREST | Auto-generated REST API from database schema | $0 |
| Task Queue | Celery + Valkey | Async document processing, job status tracking | $0 |
| Document Parser | Docling (IBM) | PDF/HTML/DOCX → clean Markdown | $0 |
| Admin UI | Adminer | Database administration interface | $0 |

πŸ—οΈ Architecture

┌─────────────────────────────────────────────────────────────┐
│                    USER APPLICATION                         │
│                (Next.js, React, Mobile App)                 │
└─────────────────────────┬───────────────────────────────────┘
                          │
                 ┌────────▼────────┐
                 │   LiteLLM       │ ← AI Gateway (Port 4000)
                 │   Gateway       │
                 └────────┬────────┘
                          │ Traces (OTLP)
                 ┌────────▼────────┐
                 │   Phoenix       │ ← Observability (Port 6006)
                 │   Dashboard     │
                 └────────┬────────┘
                          │ SQL
                 ┌────────▼────────┐
                 │ PostgreSQL      │ ← Database (Port 5432)
                 │ + pgvector      │
                 └────────┬────────┘
                          │ Read/Write
                 ┌────────▼────────┐
                 │   Celery        │ ← Async Worker
                 │   Tasks         │
                 └────────┬────────┘
                          │ Jobs
                 ┌────────▼────────┐
                 │   Valkey        │ ← Message Broker (Port 6379)
                 │   (Redis alt)   │
                 └─────────────────┘
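
The gateway at the top of this chain speaks the OpenAI wire format, so any OpenAI-compatible client can simply point at port 4000. Below is a minimal stdlib-only sketch; the /v1/chat/completions route is LiteLLM's OpenAI-compatible default, and the snippet assumes no master key is configured (add an Authorization header if one is):

```python
import json
import urllib.request

GATEWAY_URL = "http://localhost:4000/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat payload for the LiteLLM gateway."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_gateway(model: str, prompt: str) -> str:
    """POST one chat turn through the gateway and return the reply text."""
    req = urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(build_chat_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the wire format is OpenAI's, swapping gpt-4o for any other model_name defined in the LiteLLM config reroutes the call to a different provider with no client changes.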

🚀 Quick Start

Prerequisites

  • Docker & Docker Compose
  • 2GB+ RAM, 5GB+ disk space
  • OpenSSL (for SSL certificates)

Installation

# 1. Clone repository
git clone <repository-url>
cd agentstack-oss

# 2. Copy environment template
cp .env.example .env

# 3. Add your OpenAI API key (required)
nano .env
# Set: OPENAI_API_KEY=sk-your-openai-key-here

# 4. Start all services
docker compose up -d --build

# 5. Wait for initialization (2-5 minutes on first run)
docker compose logs -f worker

Verification

| Service | URL | Expected Result |
|---|---|---|
| Phoenix | http://localhost:6006 | Dashboard loads, shows "Projects" |
| pREST API | http://localhost:3000/tables | JSON list of database tables |
| LiteLLM | http://localhost:4000/health | {"status": "healthy"} |
| Adminer | http://localhost:8080 | Login form (postgres/password) |
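
These checks can also be scripted. A stdlib-only sketch (URLs copied from the table above; any non-error HTTP response is treated as "up"):

```python
import urllib.request

# Endpoints from the verification table
ENDPOINTS = {
    "Phoenix": "http://localhost:6006",
    "pREST": "http://localhost:3000/tables",
    "LiteLLM": "http://localhost:4000/health",
    "Adminer": "http://localhost:8080",
}

def check(name: str, url: str, timeout: float = 5.0) -> tuple:
    """Return (service, HTTP status) on success, (service, error string) on failure."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return name, resp.status
    except Exception as exc:
        return name, f"DOWN ({exc})"

def run_checks(endpoints: dict = ENDPOINTS) -> list:
    """Probe every endpoint once and collect the results."""
    return [check(name, url) for name, url in endpoints.items()]
```

Call run_checks() once docker compose reports the containers healthy; any tuple containing "DOWN" points at the service to debug first.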

📖 Usage Examples

Ingest a Document

from celery import Celery

# Valkey is Redis-compatible, so the standard redis:// URL works.
# A result backend is required, otherwise result.get() below raises.
app = Celery(
    broker='redis://localhost:6379/0',
    backend='redis://localhost:6379/0',
)

# Ingest PDF from URL
result = app.send_task(
    'tasks.ingest_document',
    args=['https://example.com/document.pdf'],
    kwargs={'metadata': {'category': 'research'}}
)

# Wait for completion
document = result.get(timeout=120)
print(f"Document ID: {document['document_id']}")

Semantic Search

# Search knowledge base (args: query, match count, similarity threshold)
result = app.send_task(
    'tasks.semantic_search',
    args=['What is machine learning?', 5, 0.7]
)

matches = result.get(timeout=30)
for doc in matches:
    print(f"Match: {doc['filename']} (similarity: {doc['similarity']:.2f})")

RAG Query

# Ask question with context from knowledge base
result = app.send_task(
    'tasks.rag_query',
    args=['Explain quantum computing'],
    kwargs={'model': 'gpt-4o'}
)

response = result.get(timeout=60)
print(response['answer'])
print("Sources:", response['sources'])

🔧 Configuration

Environment Variables

# Required
OPENAI_API_KEY=sk-your-key-here

# Optional
ANTHROPIC_API_KEY=sk-ant-your-key-here
DEFAULT_MODEL=gpt-4o
EMBEDDING_MODEL=text-embedding-3-small

# Database
POSTGRES_USER=postgres
POSTGRES_PASSWORD=password
POSTGRES_DB=agentstack

# Security
JWT_SECRET=change-in-production

Model Configuration

Edit configs/litellm_config.yaml to add/remove models:

model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

  - model_name: claude-3-5-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: os.environ/ANTHROPIC_API_KEY

πŸ—„οΈ Database Schema

Core Tables

  • knowledge_base: Documents + embeddings
  • chat_sessions: Conversation memory
  • chat_messages: Individual messages with tracing
  • job_status: Async task tracking
  • prompt_templates: Reusable prompts
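
Because pREST auto-generates a REST API from the schema, these tables are also reachable over plain HTTP. A sketch of reading job_status rows this way, assuming pREST's default /{database}/{schema}/{table} routing, the agentstack database name from the env template, and a task_id column (details beyond what is documented above):

```python
import json
import urllib.parse
import urllib.request

PREST_BASE = "http://localhost:3000"

def table_url(database: str, schema: str, table: str, **filters) -> str:
    """Build a pREST table URL using its default /{db}/{schema}/{table} routing."""
    url = f"{PREST_BASE}/{database}/{schema}/{table}"
    if filters:
        url += "?" + urllib.parse.urlencode(filters)
    return url

def fetch_rows(database: str, schema: str, table: str, **filters) -> list:
    """GET rows as JSON; keyword filters become query-string column filters."""
    with urllib.request.urlopen(table_url(database, schema, table, **filters)) as resp:
        return json.load(resp)

# e.g. fetch_rows("agentstack", "public", "job_status", task_id="<celery-task-id>")
```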

Vector Search

-- Search by semantic similarity
SELECT * FROM search_knowledge(
    query_embedding => '[0.1,0.2,...]',
    match_threshold => 0.7,
    match_count => 5
);
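
Calling this function from Python means serializing the query embedding into pgvector's text literal. A sketch, with the database-driver usage shown only in comments to keep the snippet dependency-free (to_pgvector is a helper introduced here, not part of the stack):

```python
def to_pgvector(embedding: list) -> str:
    """Serialize a Python list of floats into pgvector's text literal, e.g. '[0.1,0.2]'."""
    return "[" + ",".join(repr(x) for x in embedding) + "]"

# Parameterized form of the query above; %s placeholders keep it injection-safe.
SEARCH_SQL = """
    SELECT * FROM search_knowledge(
        query_embedding => %s::vector,
        match_threshold => %s,
        match_count => %s
    )
"""

# With psycopg2 (pip install psycopg2-binary), against the default credentials:
#   conn = psycopg2.connect("postgresql://postgres:password@localhost:5432/agentstack")
#   with conn, conn.cursor() as cur:
#       cur.execute(SEARCH_SQL, (to_pgvector(embedding), 0.7, 5))
#       rows = cur.fetchall()
```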

πŸ” Observability

All LLM calls are automatically traced in Phoenix:

  • Latency analysis per model
  • Cost tracking per request
  • Prompt comparison across versions
  • Error debugging with full context

Access Phoenix at: http://localhost:6006

πŸ› οΈ Development

Project Structure

agentstack-oss/
├── docker-compose.yml          # Main orchestration
├── .env.example                # Environment template
├── configs/
│   └── litellm_config.yaml     # Model routing config
├── init/
│   └── 01_extensions.sql       # Database schema
├── scripts/                    # Utility scripts
└── src/
    └── agentstack_worker/      # Celery worker code

Adding New Tasks

  1. Define task in src/agentstack_worker/tasks.py
  2. Add tracing with Phoenix spans
  3. Update database schema if needed
  4. Test with docker compose restart worker

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests for new functionality
  5. Submit a pull request

Development Setup

# Install dependencies
pip install -r requirements-dev.txt

# Run tests
pytest

# Lint code
ruff check .
mypy .

📄 License

Licensed under the Apache License 2.0. See LICENSE for details.

πŸ™ Acknowledgments

Built with open-source components: LiteLLM, Arize Phoenix, PostgreSQL + pgvector, pREST, Celery, Valkey, Docling, and Adminer.

Ready to build your Vertical AI Agent? Start with docker compose up and focus on your agent logic instead of infrastructure.
