A production-ready, containerized FastAPI application with background task processing, AI model serving, and persistent storage. This project leverages Docker Compose to orchestrate a modern stack for scalable, automated workflows.
- FastAPI: High-performance Python web API.
- Celery: Distributed task queue for background and scheduled jobs.
- PostgreSQL: Reliable relational database.
- Redis: Fast in-memory broker for Celery.
- Ollama: Local AI model server for LLM inference.
- CrewAI: Agentic workflow orchestration (via `crewai` and `crewai-tools`).
- Automated Model Management: Scripts for managing Ollama models.
```
[FastAPI] <--> [PostgreSQL]
     |              ^
     v              |
[Celery Worker]     |
     |              |
     v              |
[Redis] <-----------+
     |
     v
[Ollama]
```
- FastAPI serves as the main API and interacts with the database and Ollama.
- Celery Worker runs background and scheduled tasks, including database checks and agentic workflows.
- Ollama provides local LLM inference, managed via helper scripts.
- Celery Beat schedules periodic tasks (e.g., database checks every minute).
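The broker and scheduling wiring implied by this diagram could look roughly like the sketch below; the task path `app.tasks.check_database_task` and the `redis` service hostname are assumptions based on the description above, not verified project code.

```python
# Illustrative Celery wiring only; module paths and task names are assumed.
from celery import Celery

celery_app = Celery(
    "app",
    broker="redis://redis:6379/0",    # Redis as the Celery message broker
    backend="redis://redis:6379/0",   # Redis as the Celery result backend
)

# Celery Beat: trigger the periodic database check once per minute
celery_app.conf.beat_schedule = {
    "check-database-every-minute": {
        "task": "app.tasks.check_database_task",
        "schedule": 60.0,
    },
}
```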
- Clone the repository
  ```bash
  git clone <your-repo-url>
  cd <your-repo-directory>
  ```
- Build and start all services
  ```bash
  docker compose up --build
  ```
- (Optional) Pull a default Ollama model
  ```bash
  docker exec ollama ollama pull qwen3:1.7b
  ```
- Access the API (a quick smoke test follows these steps)
  - FastAPI: http://localhost:8000
  - Ollama: http://localhost:11434
- Stop all services
  ```bash
  docker compose down
  ```
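Once the stack is up, a quick smoke test from the host might look like this; the FastAPI `/docs` path is the framework default, and the Ollama calls assume the `qwen3:1.7b` model from the optional step above.

```python
# Minimal smoke test for the running stack (requires the `requests` package).
import requests

# FastAPI: the interactive docs page is served by default at /docs
print(requests.get("http://localhost:8000/docs").status_code)

# Ollama: list locally available models
print(requests.get("http://localhost:11434/api/tags").json())

# Ollama: one-off generation with the model pulled in the optional step
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "qwen3:1.7b", "prompt": "Say hello", "stream": False},
)
print(resp.json()["response"])
```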
- Main entrypoint: `app/main.py`
- Auto-reloads on code changes (if configured)
- Connects to PostgreSQL and Redis
- Worker and Beat services for background and scheduled tasks
- Example tasks: `add`, `multiply`, and a periodic DB check that runs the agentic workflow (a sketch follows below)
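A hypothetical sketch of what those example tasks could look like; the real implementations live in the project's `tasks.py` and may differ.

```python
# Hypothetical task definitions matching the names mentioned above.
from celery import shared_task

@shared_task(name="add_task")
def add_task(x: int, y: int) -> int:
    return x + y

@shared_task(name="multiply_task")
def multiply_task(x: int, y: int) -> int:
    return x * y

@shared_task(name="check_database_task")
def check_database_task() -> None:
    # Scheduled by Celery Beat; query PostgreSQL for pending todos and,
    # if any are found, kick off the CrewAI workflow in app/todos_crew/.
    ...
```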
- Local LLM server, managed via `ollama-commands.sh` and `startup-ollama.sh`
- Pull, run, and manage models interactively
- User: `postgres`
- Password: `postgres`
- Database: `tododb` (see the connection sketch below)
- Data persisted in a Docker volume
- Used as broker and result backend for Celery
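For reference, connection strings built from these defaults would look like the following sketch, assuming SQLAlchemy and the `postgres`/`redis` service names from `docker-compose.yml`.

```python
# Illustrative only; in the containers these URLs come from DATABASE_URL
# and the Celery environment variables rather than being hard-coded.
from sqlalchemy import create_engine, text

DATABASE_URL = "postgresql://postgres:postgres@postgres:5432/tododb"
CELERY_BROKER_URL = "redis://redis:6379/0"

engine = create_engine(DATABASE_URL)
with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())  # simple connectivity check
```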
Use the provided scripts to manage Ollama models:
- List models: `./ollama-commands.sh list`
- Pull a model: `./ollama-commands.sh pull <model-name>`
- Run a model interactively: `./ollama-commands.sh run <model-name>`
- Remove a model: `./ollama-commands.sh remove <model-name>`
- Show logs: `./ollama-commands.sh logs`
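The same operations are also available over Ollama's HTTP API if you prefer not to go through the script; a rough Python equivalent:

```python
# HTTP equivalents of the script commands above (requires `requests`).
import requests

OLLAMA = "http://localhost:11434"

# List models (like `./ollama-commands.sh list`)
print(requests.get(f"{OLLAMA}/api/tags").json())

# Pull a model (like `./ollama-commands.sh pull qwen3:1.7b`)
requests.post(f"{OLLAMA}/api/pull", json={"name": "qwen3:1.7b", "stream": False})

# Remove a model (like `./ollama-commands.sh remove qwen3:1.7b`)
requests.delete(f"{OLLAMA}/api/delete", json={"name": "qwen3:1.7b"})
```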
Set in `docker-compose.yml`:
- `CELERY_BROKER_URL` and `CELERY_RESULT_BACKEND`: Redis connection
- `DATABASE_URL`: PostgreSQL connection
- `OLLAMA_HOST` and `OLLAMA_PORT`: Ollama service
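Inside the containers, the services would read this configuration along these lines; this is a sketch, and the exact variable handling depends on the code in `app/`.

```python
# Reads the environment variables defined in docker-compose.yml.
import os

CELERY_BROKER_URL = os.environ["CELERY_BROKER_URL"]
CELERY_RESULT_BACKEND = os.environ["CELERY_RESULT_BACKEND"]
DATABASE_URL = os.environ["DATABASE_URL"]
OLLAMA_URL = f"http://{os.environ['OLLAMA_HOST']}:{os.environ['OLLAMA_PORT']}"
```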
- Python dependencies in `requirements.txt`
- Dockerfile for reproducible builds
- All code in the `app/` directory (see `main.py`, `models.py`, `database.py`, and agentic logic in `todos_crew/`)
- Add your FastAPI endpoints in `app/main.py` (a wiring sketch follows this list)
- Add Celery tasks in `tasks.py` (e.g., `add_task`, `multiply_task`, `check_database_task`)
- Scheduled DB checks trigger agentic workflows using CrewAI
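A minimal sketch of wiring a new endpoint to one of the existing tasks; the import path and response shape are assumptions, not the project's actual code.

```python
# Hypothetical endpoint in app/main.py that enqueues add_task on the worker.
from celery.result import AsyncResult
from fastapi import FastAPI

from tasks import add_task  # assumed import path

app = FastAPI()

@app.post("/add")
def enqueue_add(x: int, y: int):
    result = add_task.delay(x, y)  # hand the work to the Celery worker via Redis
    return {"task_id": result.id}

@app.get("/add/{task_id}")
def add_status(task_id: str):
    res = AsyncResult(task_id)     # looked up in the Redis result backend
    return {"ready": res.ready(), "result": res.result if res.ready() else None}
```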
- Data for PostgreSQL and Ollama is persisted in Docker volumes.
- The system is designed for local development and experimentation with agentic workflows and LLMs.
- You can extend the agent logic in `app/todos_crew/`.
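As a starting point for extending the agents, a minimal CrewAI crew might look like this sketch; the actual agents and tasks in `app/todos_crew/` may be structured differently, and how the Ollama model is attached depends on the `crewai` version pinned in `requirements.txt`.

```python
# Hypothetical crew; replace roles, goals, and tasks with your own logic.
from crewai import Agent, Crew, Task

reviewer = Agent(
    role="Todo reviewer",
    goal="Review items flagged by the periodic database check",
    backstory="Runs inside the Celery worker when the scheduled DB check finds work.",
)

review = Task(
    description="Summarize the flagged todos and propose next actions.",
    expected_output="A short, actionable summary.",
    agent=reviewer,
)

crew = Crew(agents=[reviewer], tasks=[review])

def run_crew() -> str:
    # Typically invoked from check_database_task when the schedule fires
    return str(crew.kickoff())
```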