
ORM Benchmarks: TortoiseORM vs SQLAlchemy

A comprehensive benchmark suite comparing TortoiseORM and SQLAlchemy performance when using the asyncpg driver for asynchronous PostgreSQL operations.

Overview

Both ORMs are tested using asyncpg, which is a native async PostgreSQL driver for Python. This ensures a fair comparison of ORM overhead since both use the same underlying database driver.
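
For illustration, wiring each ORM to asyncpg looks roughly like this (the connection URL uses the default credentials from the examples below; the repo's actual initialization may differ):

# SQLAlchemy: the "+asyncpg" suffix in the URL selects the asyncpg driver
from sqlalchemy.ext.asyncio import create_async_engine

engine = create_async_engine(
    "postgresql+asyncpg://postgres:postgres@localhost:5432/orm_benchmark"
)

# TortoiseORM: the "postgres://" scheme uses asyncpg by default
# (Tortoise.init must be awaited inside an async function)
from tortoise import Tortoise

await Tortoise.init(
    db_url="postgres://postgres:postgres@localhost:5432/orm_benchmark",
    modules={"models": ["models.tortoise_models"]},
)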

Benchmarked Operations

Operation            Description
-------------------  ------------------------------------------
Single Insert        Insert a single record
Bulk Insert          Insert 100 records at once
Select by ID         Fetch a single record by primary key
Select with Filter   Query with a WHERE clause (LIKE filter)
Select All           Fetch multiple records with LIMIT
Select with JOIN     Eager loading of related entities
Update Single        Update a single record
Update Bulk          Update multiple records matching a filter
Aggregate Count      COUNT(*) operation
Aggregate GROUP BY   GROUP BY with COUNT
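
For a feel of what one of these operations looks like in each ORM, here is a sketch of "Select by ID" (the model and session objects are illustrative, not necessarily the repo's exact code):

from sqlalchemy import select

# SQLAlchemy 2.0 style, given an AsyncSession and a Book model
async def sqlalchemy_select_by_id(session, book_id):
    result = await session.execute(select(Book).where(Book.id == book_id))
    return result.scalar_one_or_none()

# TortoiseORM, given a Book model
async def tortoise_select_by_id(book_id):
    return await Book.get_or_none(id=book_id)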

Requirements

  • Python 3.12+
  • Docker & Docker Compose
  • uv package manager

Quick Start

# Install dependencies
uv sync

# Start PostgreSQL and run benchmarks (one command!)
make benchmark

Installation

# Clone the repository
git clone <repository-url>
cd orm-benchmarks-python

# Install dependencies with uv
uv sync

Database Setup (Docker)

The project includes a Docker Compose file for PostgreSQL. Use the Makefile commands:

# Start PostgreSQL container
make up

# Stop PostgreSQL container
make down

# View PostgreSQL logs
make logs

# Show container status
make ps

# Clean up (stop container and remove data volume)
make clean

# Reset (clean and start fresh)
make reset

Alternative: Cloud PostgreSQL

You can also use free PostgreSQL providers:

  • Neon - Free tier with serverless PostgreSQL
  • Supabase - Free tier PostgreSQL
  • ElephantSQL - Free "Tiny Turtle" plan (the service has since reached end of life)

Note: For accurate benchmarks, local Docker is recommended to eliminate network latency.

Running Benchmarks

Using Makefile (Recommended)

# Start PostgreSQL and run benchmarks
make benchmark

# Quick test with fewer iterations
make benchmark-quick

# Full benchmark with more iterations
make benchmark-full

Manual Execution

# Make sure PostgreSQL is running
make up

# Run with default settings
uv run python main.py

# Or directly run the benchmark script
uv run python benchmarks.py

With Custom Database Settings

# Using command line arguments
uv run python main.py \
    --db-host localhost \
    --db-port 5432 \
    --db-user postgres \
    --db-password postgres \
    --db-name orm_benchmark

# Using environment variables
export DB_HOST=localhost
export DB_PORT=5432
export DB_USER=postgres
export DB_PASSWORD=postgres
export DB_NAME=orm_benchmark
uv run python main.py
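
A minimal sketch of how these settings might be collected, with CLI flags falling back to environment variables (config.py in this repo may be structured differently):

import argparse
import os

parser = argparse.ArgumentParser(description="ORM benchmarks")
parser.add_argument("--db-host", default=os.getenv("DB_HOST", "localhost"))
parser.add_argument("--db-port", type=int, default=int(os.getenv("DB_PORT", "5432")))
parser.add_argument("--db-user", default=os.getenv("DB_USER", "postgres"))
parser.add_argument("--db-password", default=os.getenv("DB_PASSWORD", "postgres"))
parser.add_argument("--db-name", default=os.getenv("DB_NAME", "orm_benchmark"))
args = parser.parse_args()

# Same server, two URL schemes: one per ORM
sqlalchemy_url = (
    f"postgresql+asyncpg://{args.db_user}:{args.db_password}"
    f"@{args.db_host}:{args.db_port}/{args.db_name}"
)
tortoise_url = sqlalchemy_url.replace("postgresql+asyncpg", "postgres")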

Adjusting Benchmark Parameters

# Run with more iterations for statistical significance
uv run python main.py --iterations 500 --warmup 50

# Quick test with fewer iterations
uv run python main.py --iterations 20 --warmup 5

Cloud Database Example (Neon)

export DB_HOST=ep-xxx.us-east-2.aws.neon.tech
export DB_PORT=5432
export DB_USER=your_username
export DB_PASSWORD=your_password
export DB_NAME=neondb
uv run python main.py

Output

The benchmark produces:

  1. Comparison Table: Side-by-side comparison with winner for each operation
  2. Detailed Statistics: Mean, std dev, min, max times for each benchmark
  3. Summary: Overall winner count

Example output:

═══════════════════════════════════════════════════════════════
                      BENCHMARK RESULTS
═══════════════════════════════════════════════════════════════

┌─────────────────────┬──────────────────┬───────────────────┬─────────────┬─────────────┐
│ Benchmark           │ SQLAlchemy Mean  │ TortoiseORM Mean  │ Winner      │ Difference  │
├─────────────────────┼──────────────────┼───────────────────┼─────────────┼─────────────┤
│ Single Insert       │ 0.543            │ 0.612             │ SQLAlchemy  │ 11.3% faster│
│ Select by ID        │ 0.234            │ 0.198             │ TortoiseORM │ 15.4% faster│
│ ...                 │ ...              │ ...               │ ...         │ ...         │
└─────────────────────┴──────────────────┴───────────────────┴─────────────┴─────────────┘

Project Structure

orm-benchmarks-python/
├── main.py              # Entry point with CLI
├── benchmarks.py        # Main benchmark logic
├── config.py            # Database configuration
├── docker-compose.yml   # PostgreSQL container config
├── Makefile             # Commands for managing containers & benchmarks
├── models/
│   ├── __init__.py
│   ├── sqlalchemy_models.py   # SQLAlchemy ORM models
│   └── tortoise_models.py     # TortoiseORM models
├── pyproject.toml       # Project dependencies
└── README.md

Models

The benchmark uses a book/author/publisher schema:

  • Author: Basic entity with text fields
  • Publisher: Entity with numeric fields
  • Book: Entity with foreign keys to Author and Publisher
  • Review: Entity with foreign key to Book

This schema allows testing various relationship patterns (one-to-many, many-to-one) and join operations.
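
A rough sketch of what the Book model might look like in each ORM (field names beyond the relationships are illustrative; the Author and Publisher models are omitted for brevity):

# models/sqlalchemy_models.py (sketch)
from sqlalchemy import ForeignKey, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class Book(Base):
    __tablename__ = "sa_books"  # separate table names keep the two ORMs isolated

    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str] = mapped_column(String(255))
    author_id: Mapped[int] = mapped_column(ForeignKey("sa_authors.id"))
    publisher_id: Mapped[int] = mapped_column(ForeignKey("sa_publishers.id"))

# models/tortoise_models.py (sketch)
from tortoise import fields
from tortoise.models import Model

class Book(Model):
    class Meta:
        table = "t_books"

    id = fields.IntField(pk=True)
    title = fields.CharField(max_length=255)
    author = fields.ForeignKeyField("models.Author", related_name="books")
    publisher = fields.ForeignKeyField("models.Publisher", related_name="books")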

Technical Notes

Why asyncpg?

Both TortoiseORM and SQLAlchemy support multiple database backends. For this benchmark:

  • asyncpg is among the fastest PostgreSQL drivers for Python
  • It's natively async (not a sync driver wrapped in thread pools)
  • Both ORMs are configured to use asyncpg, so the comparison isolates ORM overhead

Measurement Methodology

  1. Warmup: Each benchmark runs warmup iterations first (default: 10)
  2. Timing: time.perf_counter() is used for high-precision timing
  3. Statistics: Mean, standard deviation, min, and max are calculated
  4. Isolation: Each ORM uses separate tables to avoid conflicts
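
A minimal sketch of the loop this describes (helper names and the default iteration count are illustrative, not the repo's actual functions):

import statistics
import time

async def run_benchmark(operation, iterations=100, warmup=10):
    # 1. Warmup: discarded runs let connection pools and caches settle
    for _ in range(warmup):
        await operation()

    # 2. Timing: perf_counter gives a high-resolution monotonic clock
    timings_ms = []
    for _ in range(iterations):
        start = time.perf_counter()
        await operation()
        timings_ms.append((time.perf_counter() - start) * 1000)

    # 3. Statistics over the timed runs
    return {
        "mean": statistics.mean(timings_ms),
        "stdev": statistics.stdev(timings_ms),
        "min": min(timings_ms),
        "max": max(timings_ms),
    }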

License

MIT
