Unified LLM API provider and engine library for Python.

## Features
- Support for multiple LLM providers (OpenAI, DeepSeek, Ollama, Custom)
- Both synchronous and asynchronous API support
- Unified configuration via `providers.yml`
- Environment variable resolution
- Automatic retry logic
- Streaming support
- Token estimation and resource management
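The environment-variable resolution listed above means placeholders like `${DEEPSEEK_API_KEY}` in `providers.yml` are filled in from the process environment. A minimal sketch of how such resolution can work (the `resolve_env` helper is illustrative, not the library's actual implementation):

```python
import os
import re

def resolve_env(text: str) -> str:
    """Replace ${VAR} placeholders with values from the environment.

    Unset variables are left untouched so missing keys are easy to spot.
    """
    def _sub(match: re.Match) -> str:
        name = match.group(1)
        return os.environ.get(name, match.group(0))

    return re.sub(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}", _sub, text)

os.environ["DEEPSEEK_API_KEY"] = "sk-demo"
print(resolve_env("api_key: ${DEEPSEEK_API_KEY}"))  # api_key: sk-demo
```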
## Installation

Install from PyPI:

```bash
pip install llm-engine
```

For local development:

```bash
cd llm-engine
pip install -e .
```

Or install with development dependencies:

```bash
pip install -e ".[dev]"
```

## Configuration

Create a `providers.yml` file:
```yaml
providers:
  deepseek:
    base_url: "https://api.deepseek.com/v1"
    api_key: ${DEEPSEEK_API_KEY}
    default_model: "deepseek-chat"
    models:
      - name: "deepseek-chat"
        context_length: 128000
        functions:
          json_output: true
```

## Quick Start

Asynchronous usage with the engine:

```python
from llm_engine import LLMConfig, LLMProvider, LLMEngine

config = LLMConfig(
    provider=LLMProvider.DEEPSEEK,
    model_name="deepseek-chat",
    api_key="your-api-key",
)

engine = LLMEngine(config)
response = await engine.generate("Hello, world!")
```
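Since `engine.generate` is awaited, it must run inside an event loop. A common entry-point pattern, sketched with a stand-in engine so it is self-contained (in real code you would construct `LLMEngine(config)` instead of `EchoEngine`):

```python
import asyncio

class EchoEngine:
    """Stand-in with the same async generate() shape as the real engine."""
    async def generate(self, prompt: str) -> str:
        # A real engine would call the configured provider here; we just echo.
        return f"echo: {prompt}"

async def main() -> str:
    engine = EchoEngine()  # real code: engine = LLMEngine(config)
    return await engine.generate("Hello, world!")

result = asyncio.run(main())  # drives the coroutine to completion
print(result)  # echo: Hello, world!
```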
## Direct Provider Usage

You can also call a provider synchronously, without the engine:

```python
from llm_engine import LLMConfig, LLMProvider
from llm_engine.providers.openai_compatible import OpenAICompatibleProvider

config = LLMConfig(
    provider=LLMProvider.DEEPSEEK,
    model_name="deepseek-chat",
    api_key="your-api-key",
)

provider = OpenAICompatibleProvider(config)
response = provider.call("Hello, world!")
```

## License

MIT