A simple and robust caching library for Python functions, supporting both synchronous and asynchronous code.
- Cache function results based on function ID and arguments
- Supports both synchronous and asynchronous functions
- Thread-safe locking to prevent duplicate cached function calls
- Configurable Time-To-Live (TTL) for cached items
- "Never Die" mode for functions that should keep cache refreshed automatically
- Skip cache functionality to force fresh function execution while updating cache
- Redis cache for distributed caching across multiple processes/machines
```shell
# Using pip
pip install cachify

# Using poetry
poetry add cachify

# Using uv
uv add cachify
```

```python
from cachify import cache

# Cache results of sync functions
@cache(ttl=60)  # ttl in seconds
def expensive_calculation(a, b):
    # Some expensive operation
    return a + b

# And async functions
@cache(ttl=3600)  # ttl in seconds
async def another_calculation(url):
    # Some expensive IO call
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
        return response.json()
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `ttl` | `int \| float` | `300` | Time to live for cached items, in seconds |
| `never_die` | `bool` | `False` | If `True`, the cache refreshes automatically in the background |
| `cache_key_func` | `Callable` | `None` | Custom function to generate cache keys |
| `ignore_fields` | `Sequence[str]` | `()` | Function parameters to exclude from the cache key |
| `no_self` | `bool` | `False` | If `True`, ignores the first parameter (usually `self` or `cls`) |
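To illustrate how `ignore_fields` and `no_self` shape the cache key, here is a hedged sketch of argument-based key generation; cachify's actual key format is not documented here, so this only demonstrates the idea:

```python
import hashlib
import inspect

def make_cache_key(fn, args, kwargs, ignore_fields=(), no_self=False):
    """Build a deterministic cache key from a function's bound arguments."""
    bound = inspect.signature(fn).bind(*args, **kwargs)
    bound.apply_defaults()
    items = list(bound.arguments.items())
    if no_self:
        items = items[1:]  # drop self/cls so all instances share entries
    items = [(k, v) for k, v in items if k not in ignore_fields]
    raw = f"{fn.__module__}.{fn.__qualname__}:{items!r}"
    return hashlib.sha256(raw.encode()).hexdigest()
```

Two calls that differ only in ignored parameters produce the same key, so they hit the same cache entry.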
Use `cache_key_func` when you need custom control over how cache keys are generated:
```python
from cachify import cache

def custom_key(args: tuple, kwargs: dict) -> str:
    user_id = kwargs.get("user_id") or args[0]
    return f"user:{user_id}"

@cache(ttl=60, cache_key_func=custom_key)
def get_user_profile(user_id: int):
    return fetch_from_database(user_id)
```

Use `ignore_fields` to exclude specific parameters from the cache key. This is useful when some arguments don't affect the result:
```python
from cachify import cache

@cache(ttl=300, ignore_fields=("logger", "request_id"))
def fetch_data(query: str, logger: Logger, request_id: str):
    # The cache key only uses 'query', ignoring logger and request_id
    logger.info(f"Fetching data for request {request_id}")
    return database.execute(query)
```

For distributed caching across multiple processes or machines, use `rcache`:
```python
import redis

from cachify import setup_redis_config, rcache

# Configure Redis (call once at startup)
setup_redis_config(
    sync_client=redis.from_url("redis://localhost:6379/0"),
    key_prefix="{myapp}",  # default: "{cachify}"; keys are searchable in Redis as "PREFIX:*"
    lock_timeout=10,       # default: 10; maximum lock lifetime in seconds
    on_error="silent",     # "silent" (default) or "raise" on Redis errors
)

@rcache(ttl=300)
def get_user(user_id: int) -> dict:
    return fetch_from_database(user_id)
```

```python
# Async version
import redis.asyncio as aredis

from cachify import setup_redis_config, rcache

setup_redis_config(async_client=aredis.from_url("redis://localhost:6379/0"))

@rcache(ttl=300)
async def get_user_async(user_id: int) -> dict:
    return await fetch_from_database(user_id)
```

The `never_die` feature ensures that cached values never expire by automatically refreshing them in the background:
```python
# Cache with never_die (automatic background refresh)
@cache(ttl=300, never_die=True)
def critical_operation(data_id: str):
    # Expensive operation that should always be served from cache
    return fetch_data_from_database(data_id)
```

How Never Die Works:
- When a function with `never_die=True` is first called, the result is cached
- A background thread monitors all `never_die` functions
- When the TTL expires, the function is automatically called again
- The cache is updated with the new result
- If the refresh operation fails, the existing cached value is preserved
- Clients always get fast response times by reading from cache
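The refresh loop described above can be sketched with a background thread. This is a stdlib-only illustration of the pattern, not cachify's implementation:

```python
import threading
import time

class NeverDieEntry:
    """Keep one function's cached result perpetually refreshed in the background."""

    def __init__(self, fn, ttl):
        self.fn = fn
        self.value = fn()  # first call populates the cache
        self._stop = threading.Event()
        self._thread = threading.Thread(
            target=self._refresh_loop, args=(ttl,), daemon=True
        )
        self._thread.start()

    def _refresh_loop(self, ttl):
        # Wake up every TTL period and re-run the function
        while not self._stop.wait(ttl):
            try:
                self.value = self.fn()  # replace with the fresh result
            except Exception:
                pass  # refresh failed: keep serving the last good value

    def get(self):
        return self.value  # always served from cache, never blocks

    def stop(self):
        self._stop.set()
```

Note how a failed refresh is swallowed: callers keep seeing the last successful value, which is exactly the behavior the bullets above describe.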
Benefits:
- Cache is always "warm" and ready to serve
- No user request ever has to wait for the expensive operation
- If a dependency of the cached function goes down temporarily, the last successful result is still available
- Perfect for critical operations where latency must be minimized
The `skip_cache` feature lets you bypass reading from the cache while still updating it with the fresh result:
```python
@cache(ttl=300)
def get_user_data(user_id):
    # Expensive operation to fetch user data
    return fetch_from_database(user_id)

# Normal call - uses the cache if available
user = get_user_data(123)

# Force fresh execution while updating the cache
fresh_user = get_user_data(123, skip_cache=True)

# The next normal call gets the updated cached value
updated_user = get_user_data(123)
```

How Skip Cache Works:
- When `skip_cache=True` is passed, the function bypasses reading from the cache
- The function executes normally and returns fresh results
- The fresh result is stored in the cache, updating any existing cached value
- Subsequent calls without `skip_cache=True` use the updated cached value
- The TTL timer resets from when the cache was last updated
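The flow above can be sketched as a minimal in-memory TTL decorator with a `skip_cache` call-time flag. This is a stdlib-only sketch of the mechanism, not cachify's actual code:

```python
import functools
import time

_store = {}  # cache key -> (value, stored_at)

def cache(ttl=300):
    """Minimal in-memory TTL cache supporting a skip_cache=True call-time flag."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, skip_cache=False, **kwargs):
            key = (fn.__qualname__, args, tuple(sorted(kwargs.items())))
            now = time.monotonic()
            if not skip_cache:
                hit = _store.get(key)
                if hit is not None and now - hit[1] < ttl:
                    return hit[0]        # fresh enough: serve from cache
            value = fn(*args, **kwargs)  # skip_cache or miss: run the function
            _store[key] = (value, now)   # update the cache; TTL timer resets here
            return value
        return wrapper
    return decorator
```

Because the store is written on every real execution, a `skip_cache=True` call both returns fresh data and re-warms the cache for subsequent normal calls.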
Benefits:
- Force a refresh of potentially stale data while keeping the cache warm
- Ensure fresh data for critical operations while keeping the cache available for other callers
Run the test suite:

```shell
poetry run python -m pytest
```

Contributions are welcome! Feel free to open an issue or submit a pull request.
This project is licensed under the MIT License - see the LICENSE file for details.