params-proto is a lightweight, decorator-based library for defining configurations in Python. Write your parameters once with type hints and inline documentation, and get automatic CLI parsing, validation, and help generation.
Note: This is v3 with a completely redesigned API. For the v2 API, see params-proto-v2.
Stop fighting with argparse and click. With params-proto, your configuration is your documentation:
from params_proto import proto

@proto.cli
def train_mnist(
    batch_size: int = 128,  # Training batch size
    lr: float = 0.001,  # Learning rate
    epochs: int = 10,  # Number of training epochs
):
    """Train an MLP on MNIST dataset."""
    print(f"Training with lr={lr}, batch_size={batch_size}, epochs={epochs}")
    # Your training code here...

if __name__ == "__main__":
    train_mnist()

That's it! No argparse boilerplate, no manual help strings, no type conversion logic. Just pure Python functions with type hints and inline comments.
Run it:
$ python train.py --help
usage: train.py [-h] [--batch-size INT] [--lr FLOAT] [--epochs INT]
Train an MLP on MNIST dataset.
options:
-h, --help show this help message and exit
--batch-size INT Training batch size (default: 128)
--lr FLOAT Learning rate (default: 0.001)
--epochs INT Number of training epochs (default: 10)
$ python train.py --lr 0.01 --batch-size 256
Training with lr=0.01, batch_size=256, epochs=10

Note: The actual terminal output includes beautiful ANSI colors! See the demo below or check the documentation for colorized examples.
Want to see the colorized help in action? Clone the repo and run the demo:
# Clone and setup
git clone https://github.com/geyang/params-proto.git
cd params-proto
uv sync
# See the colorized help (with bright blue types, bold cyan defaults, bold red required)
uv run python scratch/demo_v3.py --help
# Try running without required --seed (shows error)
uv run python scratch/demo_v3.py
# Error: the following arguments are required: --seed
# Run with required parameter (keyword syntax)
uv run python scratch/demo_v3.py --seed 42
# Or use positional syntax
uv run python scratch/demo_v3.py 42
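The required --seed in the demo is simply a parameter declared without a default. The following is a hypothetical sketch of that pattern (not the actual contents of scratch/demo_v3.py), assuming params-proto treats parameters that lack defaults as required arguments:

@proto.cli
def demo(
    seed: int,           # assumption: no default value makes --seed required
    lr: float = 3e-4,    # has a default, so --lr stays optional
):
    """Hypothetical demo entry point."""
    print(f"seed={seed}, lr={lr}")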
Install the release candidate from PyPI:

pip install params-proto==3.0.0-rc7

Define parameters using type-annotated functions:

@proto.cli
def train(
    model: str = "resnet50",  # Model architecture
    dataset: str = "imagenet",  # Dataset to use
    gpu: bool = True,  # Enable GPU acceleration
):
    """Train a model on a dataset."""
    print(f"Training {model} on {dataset}")

Or use classes for more structure:
@proto
class Params:
    """Training configuration."""

    # Model settings
    model: str = "resnet50"
    pretrained: bool = True  # Use pretrained weights

    # Training settings
    lr: float = 0.001  # Learning rate
    batch_size: int = 32  # Batch size
    epochs: int = 100  # Number of epochs

Create modular, reusable configuration groups:
from params_proto import proto

@proto.prefix
class Environment:
    """Environment configuration."""

    domain: str = "cartpole"  # Domain name
    task: str = "swingup"  # Task name
    time_limit: float = 10.0  # Episode time limit

@proto.prefix
class Agent:
    """Agent hyperparameters."""

    algorithm: str = "SAC"  # RL algorithm
    lr: float = 3e-4  # Learning rate
    gamma: float = 0.99  # Discount factor

@proto.cli
def train_rl(
    seed: int = 0,  # Random seed
    total_steps: int = 1000000,  # Total training steps
):
    """Train RL agent on dm_control."""
    print(f"Training {Agent.algorithm} on {Environment.domain}-{Environment.task}")
    print(f"Agent LR: {Agent.lr}, Gamma: {Agent.gamma}")

Command line:
$ python train_rl.py --Agent.lr 0.001 --Environment.domain walker --seed 42
Training SAC on walker-swingup
Agent LR: 0.001, Gamma: 0.99
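Prefixed groups can also be set from Python, which makes small hyperparameter sweeps easy to script. The sketch below is a minimal example, assuming that direct attribute assignment (shown for @proto classes in the override examples that follow) works the same way on @proto.prefix classes:

# Hypothetical sweep, continuing the example above.
# Assumes @proto.prefix classes accept direct attribute assignment,
# like the Params.lr = 0.01 override shown below.
Environment.domain = "walker"

for lr in (3e-4, 1e-3, 3e-3):
    Agent.lr = lr          # override the prefixed group in Python
    train_rl(seed=42)      # keyword arguments work as in train(lr=0.01)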
Override parameters in multiple ways:

# 1. Command line
$ python train.py --lr 0.01

# 2. Direct attribute assignment
Params.lr = 0.01

# 3. Function call with kwargs
train(lr=0.01, batch_size=256)

# 4. Using proto.bind() context manager
with proto.bind(lr=0.01, **{"train.epochs": 50}):
    train()

Support for complex types:
from typing import Literal
from enum import Enum, auto

class Optimizer(Enum):
    ADAM = auto()
    SGD = auto()
    RMSPROP = auto()

@proto
class Params:
    # Literal types
    precision: Literal["fp16", "fp32", "fp64"] = "fp32"
    # Enums
    optimizer: Optimizer = Optimizer.ADAM
    # Tuples
    image_size: tuple[int, int] = (224, 224)
    # Optional types
    checkpoint: str | None = None

To recap:
- Define your configuration with a decorated function or class
- Add type hints for automatic validation
- Add inline comments for automatic documentation
- Call your function - params-proto handles the rest! (see the sketch below)
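Put together, those four steps look like the sketch below, which uses only the @proto.cli decorator shown above (the file name fit.py is just for illustration):

# fit.py -- hypothetical file name, for illustration only
from params_proto import proto

@proto.cli                      # 1. decorate a plain function (or class)
def fit(
    lr: float = 3e-4,           # 2. type hints drive parsing and validation
    steps: int = 1000,          # 3. inline comments become --help text
):
    """Fit a model."""
    print(f"lr={lr}, steps={steps}")

if __name__ == "__main__":
    fit()                       # 4. call it; CLI overrides are applied automatically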
See our Quick Start Guide for more.
- Quick Start - Get started in 5 minutes
- User Guide - Detailed documentation
- Examples - Real-world usage patterns
- API Reference - Complete API documentation
- Migration from v2 - Upgrade guide
v2 (old): Class-based with inheritance
from params_proto import ParamsProto

class Args(ParamsProto):
    lr = 0.001
    batch_size = 32

v3 (new): Decorator-based with type hints
from params_proto import proto

@proto
class Args:
    lr: float = 0.001  # Learning rate
    batch_size: int = 32  # Batch size

Key improvements:
- ✅ Cleaner decorator syntax (no inheritance needed)
- ✅ Full IDE support with type hints
- ✅ Inline documentation becomes automatic help text
- ✅ Support for functions, not just classes
- ✅ Better Union types and Enum support
- ✅ Simplified singleton pattern with @proto.prefix
To set up for development:

git clone https://github.com/episodeyang/params_proto.git
cd params_proto
make dev test

To publish:

make publish

MIT License - see LICENSE for details.