3 changes: 3 additions & 0 deletions .gitignore
@@ -48,6 +48,9 @@ coverage.xml
*.cover
*.py.cover
.hypothesis/

# Model configuration (contains API keys)
.aloop/models.yaml
.pytest_cache/
cover/

14 changes: 11 additions & 3 deletions CLAUDE.md
@@ -29,6 +29,14 @@ pre-commit install

Never commit directly to `main`. All changes go through PR review.

## Checkpoint Commits

Prefer small, reviewable commits:
- Before committing, run `./scripts/dev.sh check` (precommit + typecheck + tests).
- Keep mechanical changes (formatting, renames) in their own commit when possible.
- **Human-in-the-loop**: at key checkpoints, the agent should *ask* whether to `git commit` and/or `git push` (do not do it automatically).
- Before asking to commit, show a short change summary (e.g. `git diff --stat`) and the `./scripts/dev.sh check` result.
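
A minimal sketch of the checkpoint flow this list describes — `./scripts/dev.sh check` and the git commands are from this repo; the wrapper function itself is hypothetical:

```python
# Illustrative checkpoint helper; the dev.sh/git commands are real,
# the function and prompts are a sketch, not part of this PR.
import subprocess

def checkpoint(message: str) -> None:
    # Run the full check suite first (precommit + typecheck + tests).
    subprocess.run(["./scripts/dev.sh", "check"], check=True)

    # Show a short change summary before asking the human.
    subprocess.run(["git", "diff", "--stat"], check=True)

    # Human-in-the-loop: ask, never commit or push automatically.
    if input(f"Commit as {message!r}? [y/N] ").strip().lower() == "y":
        subprocess.run(["git", "commit", "-am", message], check=True)
        if input("Push? [y/N] ").strip().lower() == "y":
            subprocess.run(["git", "push"], check=True)
```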

## CI

GitHub Actions runs `./scripts/dev.sh precommit`, `./scripts/dev.sh test -q`, and strict typecheck on PRs.
@@ -44,7 +52,7 @@ TYPECHECK_STRICT=1 ./scripts/dev.sh typecheck
```

Manual doc/workflow checks:
- README/AGENTS/docs: avoid legacy/removed commands (`LLM_PROVIDER`, `pip install -e`, `requirements.txt`, `setup.py`)
- README/AGENTS/docs: avoid legacy/removed commands or env-based config; use current docs only
- Docker examples use `--mode`/`--task`
- Python 3.12+ and uv-only prerequisites documented consistently

@@ -53,7 +61,7 @@ Change impact reminders:
- Config changes → update `docs/configuration.md`
- Workflow scripts → update `AGENTS.md`, `docs/packaging.md`

Run a quick smoke task (requires a configured provider in `.aloop/config`):
Run a quick smoke task (requires a configured provider in `.aloop/models.yaml`):

```bash
python main.py --task "Calculate 1+1"
@@ -122,7 +130,7 @@ Unified entrypoint: `./scripts/dev.sh format`

## Docs Pointers

- Configuration & `.aloop/config`: `docs/configuration.md`
- Configuration & `.aloop/models.yaml`: `docs/configuration.md`
- Packaging & release checklist: `docs/packaging.md`
- Extending tools/agents: `docs/extending.md`
- Memory system: `docs/memory-management.md`, `docs/memory_persistence.md`
74 changes: 35 additions & 39 deletions README.md
@@ -57,57 +57,56 @@ pre-commit install

### 1. Configuration

On first run, `.aloop/config` is created automatically with sensible defaults. Edit it to configure your LLM provider:
On first run, `.aloop/models.yaml` is created automatically with a template. Edit it to configure your models and API keys (this file is gitignored):

```bash
$EDITOR .aloop/config
$EDITOR .aloop/models.yaml
```

Example `.aloop/config`:
Example `.aloop/models.yaml`:

```bash
# LiteLLM Model Configuration (supports 100+ providers)
# Format: provider/model_name
LITELLM_MODEL=anthropic/claude-3-5-sonnet-20241022
```yaml
models:
openai/gpt-4o:
api_key: sk-...
timeout: 300

# API Keys (set the key for your chosen provider)
ANTHROPIC_API_KEY=your_anthropic_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
GEMINI_API_KEY=your_gemini_api_key_here
anthropic/claude-3-5-sonnet-20241022:
api_key: sk-ant-...

# Optional: Custom base URL for proxies or custom endpoints
LITELLM_API_BASE=
# Local model example
ollama/llama2:
api_base: http://localhost:11434

# Optional: LiteLLM-specific settings
LITELLM_DROP_PARAMS=true # Drop unsupported params instead of erroring
LITELLM_TIMEOUT=600 # Request timeout in seconds
default: openai/gpt-4o
```

# Agent Configuration
MAX_ITERATIONS=100 # Maximum iteration loops
Non-model runtime settings live in `.aloop/config` (created automatically). Example:

# Memory Management
```bash
MAX_ITERATIONS=100
MEMORY_ENABLED=true
MEMORY_COMPRESSION_THRESHOLD=25000
MEMORY_SHORT_TERM_SIZE=100
MEMORY_COMPRESSION_RATIO=0.3
```

# Retry Configuration (for handling rate limits)
RETRY_MAX_ATTEMPTS=3
RETRY_INITIAL_DELAY=1.0
RETRY_MAX_DELAY=60.0
**Switching Models:**

# Logging
LOG_LEVEL=DEBUG
In interactive mode, use the `/model` command:
```bash
# Pick a model
/model

# Or edit the config
/model edit
```

Or use the CLI flag:
```bash
python main.py --task "Hello" --model openai/gpt-4o
```

**Quick setup for different providers:**
**Model setup:**

- **Anthropic Claude**: `LITELLM_MODEL=anthropic/claude-3-5-sonnet-20241022`
- **OpenAI GPT**: `LITELLM_MODEL=openai/gpt-4o`
- **Google Gemini**: `LITELLM_MODEL=gemini/gemini-1.5-pro`
- **Azure OpenAI**: `LITELLM_MODEL=azure/gpt-4`
- **AWS Bedrock**: `LITELLM_MODEL=bedrock/anthropic.claude-v2`
- **Local (Ollama)**: `LITELLM_MODEL=ollama/llama2`
Edit `.aloop/models.yaml` and add your provider model IDs + API keys.

See [LiteLLM Providers](https://docs.litellm.ai/docs/providers) for 100+ supported providers.

@@ -238,10 +237,7 @@ See the [Configuration Guide](docs/configuration.md) for all options. Key settings:

| Setting | Description | Default |
|---------|-------------|---------|
| `LITELLM_MODEL` | LiteLLM model (provider/model format) | `anthropic/claude-3-5-sonnet-20241022` |
| `LITELLM_API_BASE` | Custom base URL for proxies | Empty |
| `LITELLM_DROP_PARAMS` | Drop unsupported params | `true` |
| `LITELLM_TIMEOUT` | Request timeout in seconds | `600` |
| `.aloop/models.yaml` | Model configuration (models + keys + default) | - |
| `MAX_ITERATIONS` | Maximum agent iterations | `100` |
| `MEMORY_COMPRESSION_THRESHOLD` | Compress when exceeded | `25000` |
| `MEMORY_SHORT_TERM_SIZE` | Recent messages to keep | `100` |
77 changes: 76 additions & 1 deletion agent/base.py
@@ -14,7 +14,7 @@
from .tool_executor import ToolExecutor

if TYPE_CHECKING:
from llm import LiteLLMAdapter
from llm import LiteLLMAdapter, ModelManager

logger = get_logger(__name__)

@@ -27,16 +27,19 @@ def __init__(
llm: "LiteLLMAdapter",
tools: List[BaseTool],
max_iterations: int = 10,
model_manager: Optional["ModelManager"] = None,
):
"""Initialize the agent.

Args:
llm: LLM instance to use
tools: List of tools available to the agent
max_iterations: Maximum number of agent loop iterations
model_manager: Optional model manager for switching models
"""
self.llm = llm
self.max_iterations = max_iterations
self.model_manager = model_manager

# Initialize todo list system
self.todo_list = TodoList()
@@ -56,6 +59,16 @@ def __init__(
# This injects current todo state into summaries instead of preserving all todo messages
self.memory.set_todo_context_provider(self._get_todo_context)

def _set_llm_adapter(self, llm: "LiteLLMAdapter") -> None:
self.llm = llm

# Keep memory/compressor in sync with the active LLM.
# Otherwise stats/compression might continue using the previous model.
if hasattr(self, "memory") and self.memory:
self.memory.llm = llm
if hasattr(self.memory, "compressor") and self.memory.compressor:
self.memory.compressor.llm = llm

@abstractmethod
def run(self, task: str) -> str:
"""Execute the agent on a task and return final answer."""
@@ -225,3 +238,65 @@ async def _react_loop(
await self.memory.add_message(result_messages)
else:
messages.append(result_messages)

def switch_model(self, model_id: str) -> bool:
"""Switch to a different model.

Args:
model_id: LiteLLM model ID to switch to

Returns:
True if switch was successful, False otherwise
"""
if not self.model_manager:
logger.warning("No model manager available for switching models")
return False

profile = self.model_manager.get_model(model_id)
if not profile:
logger.error(f"Model '{model_id}' not found")
return False

# Validate the model
is_valid, error_msg = self.model_manager.validate_model(profile)
if not is_valid:
logger.error(f"Invalid model: {error_msg}")
return False

# Switch the model
new_profile = self.model_manager.switch_model(model_id)
if not new_profile:
logger.error(f"Failed to switch to model '{model_id}'")
return False

# Reinitialize LLM adapter with new model
from llm import LiteLLMAdapter

new_llm = LiteLLMAdapter(
model=new_profile.model_id,
api_key=new_profile.api_key,
api_base=new_profile.api_base,
timeout=new_profile.timeout,
drop_params=new_profile.drop_params,
)
self._set_llm_adapter(new_llm)

logger.info(f"Switched to model: {new_profile.model_id}")
return True

def get_current_model_info(self) -> Optional[dict]:
"""Get information about the current model.

Returns:
Dictionary with model info or None if not available
"""
if self.model_manager:
profile = self.model_manager.get_current_model()
if not profile:
return None
return {
"name": profile.model_id,
"model_id": profile.model_id,
"provider": profile.provider,
}
return None
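
A sketch of how the new switching API might be used. `switch_model` and `get_current_model_info` come from the diff above; the concrete agent subclass and constructor calls are assumptions for illustration:

```python
# Illustrative only; constructor defaults and the agent subclass are assumed.
from llm import LiteLLMAdapter, ModelManager  # imports as in agent/base.py

manager = ModelManager()                      # assumed to read .aloop/models.yaml
llm = LiteLLMAdapter(model="openai/gpt-4o")   # assumed minimal construction
agent = MyAgent(llm=llm, tools=[], model_manager=manager)  # hypothetical subclass

if agent.switch_model("anthropic/claude-3-5-sonnet-20241022"):
    info = agent.get_current_model_info()
    print(f"Active model: {info['model_id']} ({info['provider']})")
```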
57 changes: 17 additions & 40 deletions config.py
@@ -11,20 +11,10 @@
# Default configuration template
_DEFAULT_CONFIG = """\
# AgenticLoop Configuration
#
# NOTE: Model configuration lives in `.aloop/models.yaml`.
# This file controls non-model runtime settings only.

# LiteLLM Model Configuration
# Format: provider/model_name (e.g. "anthropic/claude-3-5-sonnet-20241022")
LITELLM_MODEL=anthropic/claude-3-5-sonnet-20241022

# API Keys (set the key for your chosen provider)
ANTHROPIC_API_KEY=
OPENAI_API_KEY=
GEMINI_API_KEY=

# Optional settings
LITELLM_API_BASE=
LITELLM_DROP_PARAMS=true
LITELLM_TIMEOUT=600
TOOL_TIMEOUT=600
MAX_ITERATIONS=1000
"""
@@ -63,25 +53,23 @@ def _ensure_config():
_cfg = _load_config(_CONFIG_FILE)


def get_raw_config() -> dict[str, str]:
"""Get the raw config dictionary.

Returns:
Dictionary of config key-value pairs
"""
return _cfg.copy()


class Config:
"""Configuration for the agentic system.

All configuration is centralized here. Access config values directly via Config.XXX.
"""

# LiteLLM Model Configuration
# Format: provider/model_name (e.g. "anthropic/claude-3-5-sonnet-20241022")
LITELLM_MODEL = _cfg.get("LITELLM_MODEL", "anthropic/claude-3-5-sonnet-20241022")

# Common provider API keys (optional depending on provider)
ANTHROPIC_API_KEY = _cfg.get("ANTHROPIC_API_KEY") or None
OPENAI_API_KEY = _cfg.get("OPENAI_API_KEY") or None
GEMINI_API_KEY = _cfg.get("GEMINI_API_KEY") or _cfg.get("GOOGLE_API_KEY") or None

# Optional LiteLLM Configuration
LITELLM_API_BASE = _cfg.get("LITELLM_API_BASE") or None
LITELLM_DROP_PARAMS = _cfg.get("LITELLM_DROP_PARAMS", "true").lower() == "true"
LITELLM_TIMEOUT = int(_cfg.get("LITELLM_TIMEOUT", "600"))
# Model configuration is handled by `.aloop/models.yaml` via ModelManager.
# `.aloop/config` controls non-model runtime settings only.
TOOL_TIMEOUT = float(_cfg.get("TOOL_TIMEOUT", "600"))

# Agent Configuration
Expand Down Expand Up @@ -147,17 +135,6 @@ def validate(cls):
Raises:
ValueError: If required configuration is missing
"""
if not cls.LITELLM_MODEL:
raise ValueError(
"LITELLM_MODEL not set. Please set it in .aloop/config.\n"
"Example: LITELLM_MODEL=anthropic/claude-3-5-sonnet-20241022"
)

# Validate common providers (LiteLLM supports many; only enforce the ones we document).
provider = cls.LITELLM_MODEL.split("/", 1)[0].lower() if "/" in cls.LITELLM_MODEL else ""
if provider == "anthropic" and not cls.ANTHROPIC_API_KEY:
raise ValueError("ANTHROPIC_API_KEY not set. Please set it in .aloop/config.")
if provider == "openai" and not cls.OPENAI_API_KEY:
raise ValueError("OPENAI_API_KEY not set. Please set it in .aloop/config.")
if provider == "gemini" and not cls.GEMINI_API_KEY:
raise ValueError("GEMINI_API_KEY not set. Please set it in .aloop/config.")
# Model configuration is handled by `.aloop/models.yaml` via ModelManager.
# `.aloop/config` is used for non-model runtime settings only.
return
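
With model settings moved out of `Config`, a hypothetical reader for `.aloop/models.yaml` matching the README schema might look like this — the real loading lives in `ModelManager`, whose internals are not shown in this PR:

```python
# Hypothetical sketch of parsing the models.yaml schema from the README;
# not the actual ModelManager implementation.
from pathlib import Path
import yaml  # PyYAML

def load_models(path: str = ".aloop/models.yaml") -> tuple[dict, str]:
    data = yaml.safe_load(Path(path).read_text()) or {}
    models = data.get("models", {})    # model_id -> {api_key, api_base, timeout, ...}
    default = data.get("default", "")  # model_id used when none is requested
    return models, default
```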
23 changes: 16 additions & 7 deletions docs/advanced-features.md
@@ -194,9 +194,15 @@ Support for proxies, Azure, and local deployments:

### Configuration

```bash
```yaml
# Proxy / custom endpoint
LITELLM_API_BASE=http://proxy.company.com
# Set `api_base` on the model you want to route through a proxy.
models:
openai/gpt-4o:
api_key: sk-...
api_base: http://proxy.company.com

default: openai/gpt-4o
```

### Use Cases
@@ -210,11 +216,14 @@ LITELLM_API_BASE=http://proxy.company.com
### Example: Azure OpenAI

```bash
# .aloop/config
LITELLM_MODEL=azure/gpt-4
AZURE_API_KEY=your_azure_key
AZURE_API_BASE=https://your-resource.openai.azure.com
AZURE_API_VERSION=2024-02-15-preview
# .aloop/models.yaml
models:
azure/gpt-4:
api_key: your_azure_key
api_base: https://your-resource.openai.azure.com
api_version: 2024-02-15-preview

default: azure/gpt-4
```
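
For intuition, here is how the YAML fields above would map onto a LiteLLM call — a sketch using standard `litellm.completion` keyword arguments, with the same placeholder credentials as the example:

```python
# Sketch only: shows the field mapping, not how this repo invokes LiteLLM.
import litellm

response = litellm.completion(
    model="azure/gpt-4",
    messages=[{"role": "user", "content": "ping"}],
    api_key="your_azure_key",
    api_base="https://your-resource.openai.azure.com",
    api_version="2024-02-15-preview",
)
print(response.choices[0].message.content)
```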

## Agent Mode Comparison