From 1e2279adc38b21661f48d5d4c3f8466f60fea720 Mon Sep 17 00:00:00 2001
From: "google-labs-jules[bot]" <161369871+google-labs-jules[bot]@users.noreply.github.com>
Date: Thu, 11 Dec 2025 06:07:33 +0000
Subject: [PATCH] Update documentation for secrets and integrate Agent Harness

- Updated MIGRATION.md and README.md to include BIBLE_API_URL and BIBLE_API_KEY.
- Removed obsolete REVIEW_AND_PROPOSAL.md.
- Integrated Agent Harness structure:
  - Added scripts/tasks.py, scripts/memory.py, scripts/bootstrap.py, scripts/tasks.
  - Added .cursorrules and templates/maintenance_mode.md.
  - Added docs/tasks/GUIDE.md and docs/interop/tool_definitions.json.
  - Updated AGENTS.md and CLAUDE.md to match harness standards while preserving project instructions.
  - Created docs/memories/ directory.
---
 .cursorrules                       |  15 ++
 AGENTS.md                          |  97 ++++++-
 CLAUDE.md                          | 118 ++++++++-
 MIGRATION.md                       |   2 +
 README.md                          |   4 +-
 REVIEW_AND_PROPOSAL.md             |  86 -------
 docs/interop/tool_definitions.json | 185 +++++++++++++
 docs/memories/.keep                |   0
 docs/tasks/GUIDE.md                | 122 +++++++++
 scripts/bootstrap.py               | 230 +++++++++++++++++
 scripts/memory.py                  | 239 +++++++++++++++++
 scripts/tasks                      |  15 ++
 scripts/tasks.py                   | 399 +++++++++++++++++++++++++++--
 templates/maintenance_mode.md      |  88 +++++++
 14 files changed, 1467 insertions(+), 133 deletions(-)
 create mode 100644 .cursorrules
 delete mode 100644 REVIEW_AND_PROPOSAL.md
 create mode 100644 docs/interop/tool_definitions.json
 create mode 100644 docs/memories/.keep
 create mode 100644 docs/tasks/GUIDE.md
 create mode 100644 scripts/bootstrap.py
 create mode 100644 scripts/memory.py
 create mode 100644 scripts/tasks
 create mode 100644 templates/maintenance_mode.md

diff --git a/.cursorrules b/.cursorrules
new file mode 100644
index 00000000..740e7c5a
--- /dev/null
+++ b/.cursorrules
@@ -0,0 +1,15 @@
+# Cursor Rules
+
+You are working in a project that follows a strict Task Documentation System.
+
+## Task System
+- **Source of Truth**: The `docs/tasks/` directory contains the state of all work.
+- **Workflow**:
+  1. Check context: `./scripts/tasks context`
+  2. Create task if needed: `./scripts/tasks create ...`
+  3. Update status: `./scripts/tasks update ...`
+- **Reference**: See `docs/tasks/GUIDE.md` for details.
+
+## Tools
+- Use `./scripts/tasks` for all task operations.
+- Use `--format json` if you need to parse output.
diff --git a/AGENTS.md b/AGENTS.md
index 59ef9f51..d1fd6d68 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -1,17 +1,87 @@
-# ScriptureBot Developer Guide
+# AI Agent Instructions
 
-**CURRENT STATUS: MAINTENANCE MODE**
+You are an expert Software Engineer working on this project. Your primary responsibility is to implement features and fixes while strictly adhering to the **Task Documentation System**.
 
-## Helper Scripts
-- `scripts/tasks.py`: Manage development tasks.
-  - `python3 scripts/tasks.py list`: List tasks.
-  - `python3 scripts/tasks.py create <category> <title>`: Create a task.
-  - `python3 scripts/tasks.py update <id> <status>`: Update task status.
+## Core Philosophy
+**"If it's not documented in `docs/tasks/`, it didn't happen."**
 
-## Documentation
-- `docs/architecture/`: System architecture and directory structure.
-- `docs/features/`: Feature specifications.
-- `docs/tasks/`: Active and pending tasks.
+## Workflow
+1. **Pick a Task**: Run `python3 scripts/tasks.py next` to find the best task, `context` to see active tasks, or `list` to see pending ones.
+2. 
**Plan & Document**: + * **Memory Check**: Run `python3 scripts/memory.py list` (or use the Memory Skill) to recall relevant long-term information. + * **Security Check**: Ask the user about specific security considerations for this task. + * If starting a new task, use `scripts/tasks.py create` (or `python3 scripts/tasks.py create`) to generate a new task file. + * Update the task status: `python3 scripts/tasks.py update [TASK_ID] in_progress`. +3. **Implement**: Write code, run tests. +4. **Update Documentation Loop**: + * As you complete sub-tasks, check them off in the task document. + * If you hit a blocker, update status to `wip_blocked` and describe the issue in the file. + * Record key architectural decisions in the task document. + * **Memory Update**: If you learn something valuable for the long term, use `scripts/memory.py create` to record it. +5. **Review & Verify**: + * Once implementation is complete, update status to `review_requested`: `python3 scripts/tasks.py update [TASK_ID] review_requested`. + * Ask a human or another agent to review the code. + * Once approved and tested, update status to `verified`. +6. **Finalize**: + * Update status to `completed`: `python3 scripts/tasks.py update [TASK_ID] completed`. + * Record actual effort in the file. + * Ensure all acceptance criteria are met. + +## Tools +* **Wrapper**: `./scripts/tasks` (Checks for Python, recommended). +* **Next**: `./scripts/tasks next` (Finds the best task to work on). +* **Create**: `./scripts/tasks create [category] "Title"` +* **List**: `./scripts/tasks list [--status pending]` +* **Context**: `./scripts/tasks context` +* **Update**: `./scripts/tasks update [ID] [status]` +* **Migrate**: `./scripts/tasks migrate` (Migrate legacy tasks to new format) +* **Memory**: `./scripts/memory.py [create|list|read]` +* **JSON Output**: Add `--format json` to any command for machine parsing. + +## Documentation Reference +* **Guide**: Read `docs/tasks/GUIDE.md` for strict formatting and process rules. +* **Architecture**: Refer to `docs/architecture/` for system design. +* **Features**: Refer to `docs/features/` for feature specifications. +* **Security**: Refer to `docs/security/` for risk assessments and mitigations. +* **Memories**: Refer to `docs/memories/` for long-term project context. + +## Code Style & Standards +* Follow the existing patterns in the codebase. +* Ensure all new code is covered by tests (if testing infrastructure exists). + +## PR Review Methodology +When performing a PR review, follow this "Human-in-the-loop" process to ensure depth and efficiency. + +### 1. Preparation +1. **Create Task**: `python3 scripts/tasks.py create review "Review PR #<N>: <Title>"` +2. **Fetch Details**: Use `gh` to get the PR context. + * `gh pr view <N>` + * `gh pr diff <N>` + +### 2. Analysis & Planning (The "Review Plan") +**Do not review line-by-line yet.** Instead, analyze the changes and document a **Review Plan** in the task file (or present it for approval). + +Your plan must include: +* **High-Level Summary**: Purpose, new APIs, breaking changes. +* **Dependency Check**: New libraries, maintenance status, security. +* **Impact Assessment**: Effect on existing code/docs. +* **Focus Areas**: Prioritized list of files/modules to check. +* **Suggested Comments**: Draft comments for specific lines. + * Format: `File: <path> | Line: <N> | Comment: <suggestion>` + * Tone: Friendly, suggestion-based ("Consider...", "Nit: ..."). + +### 3. Execution +Once the human approves the plan and comments: +1. 
**Pending Review**: Create a pending review using `gh`. + * `COMMIT_SHA=$(gh pr view <N> --json headRefOid -q .headRefOid)` + * `gh api repos/{owner}/{repo}/pulls/{N}/reviews -f commit_id="$COMMIT_SHA"` +2. **Batch Comments**: Add comments to the pending review. + * `gh api repos/{owner}/{repo}/pulls/{N}/comments -f body="..." -f path="..." -f commit_id="$COMMIT_SHA" -F line=<L> -f side="RIGHT"` +3. **Submit**: + * `gh pr review <N> --approve --body "Summary..."` (or `--request-changes`). + +### 4. Close Task +* Update task status to `completed`. ## Project Specific Instructions @@ -35,3 +105,8 @@ - **Setup**: Create a `.env` file with `TELEGRAM_ID` and `TELEGRAM_ADMIN_ID`. - **Run**: `go run main.go` - **Testing**: Use `ngrok` to tunnel webhooks or send mock HTTP requests. + +## Agent Interoperability +- **Task Manager Skill**: `.claude/skills/task_manager/` +- **Memory Skill**: `.claude/skills/memory/` +- **Tool Definitions**: `docs/interop/tool_definitions.json` diff --git a/CLAUDE.md b/CLAUDE.md index 4927e159..d1fd6d68 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -1,10 +1,112 @@ -# Claude Instructions +# AI Agent Instructions -See [AGENTS.md](AGENTS.md) for full context. +You are an expert Software Engineer working on this project. Your primary responsibility is to implement features and fixes while strictly adhering to the **Task Documentation System**. -## Task Management -Use `scripts/tasks.py` to manage tasks: -- `python3 scripts/tasks.py list` -- `python3 scripts/tasks.py create <category> <title>` -- `python3 scripts/tasks.py update <id> <status>` -- `python3 scripts/tasks.py show <id>` +## Core Philosophy +**"If it's not documented in `docs/tasks/`, it didn't happen."** + +## Workflow +1. **Pick a Task**: Run `python3 scripts/tasks.py next` to find the best task, `context` to see active tasks, or `list` to see pending ones. +2. **Plan & Document**: + * **Memory Check**: Run `python3 scripts/memory.py list` (or use the Memory Skill) to recall relevant long-term information. + * **Security Check**: Ask the user about specific security considerations for this task. + * If starting a new task, use `scripts/tasks.py create` (or `python3 scripts/tasks.py create`) to generate a new task file. + * Update the task status: `python3 scripts/tasks.py update [TASK_ID] in_progress`. +3. **Implement**: Write code, run tests. +4. **Update Documentation Loop**: + * As you complete sub-tasks, check them off in the task document. + * If you hit a blocker, update status to `wip_blocked` and describe the issue in the file. + * Record key architectural decisions in the task document. + * **Memory Update**: If you learn something valuable for the long term, use `scripts/memory.py create` to record it. +5. **Review & Verify**: + * Once implementation is complete, update status to `review_requested`: `python3 scripts/tasks.py update [TASK_ID] review_requested`. + * Ask a human or another agent to review the code. + * Once approved and tested, update status to `verified`. +6. **Finalize**: + * Update status to `completed`: `python3 scripts/tasks.py update [TASK_ID] completed`. + * Record actual effort in the file. + * Ensure all acceptance criteria are met. + +## Tools +* **Wrapper**: `./scripts/tasks` (Checks for Python, recommended). +* **Next**: `./scripts/tasks next` (Finds the best task to work on). 
+* **Create**: `./scripts/tasks create [category] "Title"` +* **List**: `./scripts/tasks list [--status pending]` +* **Context**: `./scripts/tasks context` +* **Update**: `./scripts/tasks update [ID] [status]` +* **Migrate**: `./scripts/tasks migrate` (Migrate legacy tasks to new format) +* **Memory**: `./scripts/memory.py [create|list|read]` +* **JSON Output**: Add `--format json` to any command for machine parsing. + +## Documentation Reference +* **Guide**: Read `docs/tasks/GUIDE.md` for strict formatting and process rules. +* **Architecture**: Refer to `docs/architecture/` for system design. +* **Features**: Refer to `docs/features/` for feature specifications. +* **Security**: Refer to `docs/security/` for risk assessments and mitigations. +* **Memories**: Refer to `docs/memories/` for long-term project context. + +## Code Style & Standards +* Follow the existing patterns in the codebase. +* Ensure all new code is covered by tests (if testing infrastructure exists). + +## PR Review Methodology +When performing a PR review, follow this "Human-in-the-loop" process to ensure depth and efficiency. + +### 1. Preparation +1. **Create Task**: `python3 scripts/tasks.py create review "Review PR #<N>: <Title>"` +2. **Fetch Details**: Use `gh` to get the PR context. + * `gh pr view <N>` + * `gh pr diff <N>` + +### 2. Analysis & Planning (The "Review Plan") +**Do not review line-by-line yet.** Instead, analyze the changes and document a **Review Plan** in the task file (or present it for approval). + +Your plan must include: +* **High-Level Summary**: Purpose, new APIs, breaking changes. +* **Dependency Check**: New libraries, maintenance status, security. +* **Impact Assessment**: Effect on existing code/docs. +* **Focus Areas**: Prioritized list of files/modules to check. +* **Suggested Comments**: Draft comments for specific lines. + * Format: `File: <path> | Line: <N> | Comment: <suggestion>` + * Tone: Friendly, suggestion-based ("Consider...", "Nit: ..."). + +### 3. Execution +Once the human approves the plan and comments: +1. **Pending Review**: Create a pending review using `gh`. + * `COMMIT_SHA=$(gh pr view <N> --json headRefOid -q .headRefOid)` + * `gh api repos/{owner}/{repo}/pulls/{N}/reviews -f commit_id="$COMMIT_SHA"` +2. **Batch Comments**: Add comments to the pending review. + * `gh api repos/{owner}/{repo}/pulls/{N}/comments -f body="..." -f path="..." -f commit_id="$COMMIT_SHA" -F line=<L> -f side="RIGHT"` +3. **Submit**: + * `gh pr review <N> --approve --body "Summary..."` (or `--request-changes`). + +### 4. Close Task +* Update task status to `completed`. + +## Project Specific Instructions + +### Core Directives +- **API First**: The Bible AI API is the primary source for data. Scraping (`pkg/app/passage.go` fallback) is deprecated and should be avoided for new features. +- **Secrets**: Do not commit secrets. Use `pkg/secrets` to retrieve them from Environment or Google Secret Manager. +- **Testing**: Run tests from the root using `go test ./pkg/...`. + +### Code Guidelines +- **Go Version**: 1.24+ +- **Naming**: + - Variables: `camelCase` + - Functions: `PascalCase` (exported), `camelCase` (internal) + - Packages: `underscore_case` +- **Structure**: + - `pkg/app`: Business logic. + - `pkg/bot`: Platform integration. + - `pkg/utils`: Shared utilities. + +### Local Development +- **Setup**: Create a `.env` file with `TELEGRAM_ID` and `TELEGRAM_ADMIN_ID`. +- **Run**: `go run main.go` +- **Testing**: Use `ngrok` to tunnel webhooks or send mock HTTP requests. 
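+
+For local setup, a minimal `.env` sketch (placeholder values, mirroring the README; per the README, the Bible API pair is required for Q&A and Search):
+
+```env
+TELEGRAM_ID=your_bot_token
+TELEGRAM_ADMIN_ID=your_user_id
+BIBLE_API_URL=https://api.example.com
+BIBLE_API_KEY=your_key
+```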
+ +## Agent Interoperability +- **Task Manager Skill**: `.claude/skills/task_manager/` +- **Memory Skill**: `.claude/skills/memory/` +- **Tool Definitions**: `docs/interop/tool_definitions.json` diff --git a/MIGRATION.md b/MIGRATION.md index a8556d0e..f6e7ec9f 100644 --- a/MIGRATION.md +++ b/MIGRATION.md @@ -31,6 +31,8 @@ Update the following secrets in the GitHub Repository settings: * `GCLOUD_ARTIFACT_REPOSITORY_ID`: The name of the repository created in Artifact Registry. * `TELEGRAM_ID`: The Telegram Bot Token (ensure it matches the one used in the source project if preserving identity). * `TELEGRAM_ADMIN_ID`: Your Telegram User ID. +* `BIBLE_API_URL`: The URL for the Bible API (required for new features). +* `BIBLE_API_KEY`: The API Key for the Bible API. ## 2. Data Migration diff --git a/README.md b/README.md index dbf2ead0..8d1d6358 100644 --- a/README.md +++ b/README.md @@ -28,8 +28,8 @@ A Telegram bot to make the Bible more accessible, providing passages, search, an ```env TELEGRAM_ID=your_bot_token TELEGRAM_ADMIN_ID=your_user_id - BIBLE_API_URL=https://api.example.com (optional) - BIBLE_API_KEY=your_key (optional) + BIBLE_API_URL=https://api.example.com (Required for Q&A and Search) + BIBLE_API_KEY=your_key (Required for Q&A and Search) ``` 3. Run the bot: ```bash diff --git a/REVIEW_AND_PROPOSAL.md b/REVIEW_AND_PROPOSAL.md deleted file mode 100644 index 2275f4da..00000000 --- a/REVIEW_AND_PROPOSAL.md +++ /dev/null @@ -1,86 +0,0 @@ -# Review and Proposal: BotPlatform Refactoring - -## 1. Review of Current State - -The `BotPlatform` repository is currently tightly coupled with the `ScriptureBot` application. This coupling prevents `BotPlatform` from being a truly "democratized" and generic platform for other chatbots. - -### Key Issues Identified: - -1. **Data Structure Coupling (`pkg/def/class.go`)**: - * **`UserData`**: Contains `datastore:""` tags. These are specific to Google Cloud Datastore and the schema used by `ScriptureBot`. A generic platform should be storage-agnostic. - * **`UserData`**: Contains `Action` and `Config` fields. These are application-level state tracking fields specific to ScriptureBot's state machine logic, not properties of a Platform User. - * **`SessionData`**: Contains `ResourcePath string`. This is a `ScriptureBot`-specific configuration used to locate local resources. Generic session data should not enforce specific configuration fields. - * **UI Constraints**: The generic `ResponseOptions` struct forces a 1-column layout (via hardcoded constants in the Telegram implementation), limiting flexibility for other bots. - -2. **Platform Implementation (`pkg/platform/telegram.go`)**: - * The `Translate` method populates `env.User` directly into the struct. While functional, it needs to ensure generic extensibility points (like `Props`) are initialized. - -3. **ScriptureBot Usage**: - * `ScriptureBot` relies on `BotPlatform`'s `UserData` for its database operations (`utils.RegisterUser`, `utils.PushUser`) and state tracking (`Action`). - * `ScriptureBot` uses `SessionData.ResourcePath` to pass configuration. - -## 2. Refactoring Proposal for BotPlatform - -The goal is to remove all `ScriptureBot`-specific artifacts from `BotPlatform` while providing extension points so `ScriptureBot` (and other bots) can still function effectively. - -### Proposed Changes: - -1. **Clean `UserData`**: - * Remove all `datastore` tags from the `UserData` struct. - * Remove `Action` and `Config` fields. 
`UserData` should only contain fields relevant to the chat platform identity (Id, Username, Firstname, Lastname, Type).
-
-2. **Generalize `SessionData`**:
-    * Remove `ResourcePath` from `SessionData`.
-    * Add a generic `Props map[string]interface{}`. This allows applications to attach arbitrary data (like `ResourcePath` or other context) to the session.
-    * **Crucial Implementation Detail**: Platform implementations (e.g., `Translate` in `telegram.go`) *must* initialize this map (`make(map[string]interface{})`) to prevent runtime panics for consumers.
-
-3. **Enhance UI Flexibility**:
-    * Add `ColWidth int` to `ResponseOptions`.
-    * Update platform logic to use this value for button layout, defaulting to the standard (1 column) if not set.
-
-## 3. Adaptation Plan for ScriptureBot
-
-Since `BotPlatform` will be modifying its public API, `ScriptureBot` must be updated.
-
-### Required Changes in ScriptureBot:
-
-1. **Define Local User Model**:
-    * Create a `User` struct in `ScriptureBot` (e.g., in `pkg/models/user.go`) that includes:
-        * The basic fields (Firstname, etc.)
-        * The `datastore` tags.
-        * **The State Fields**: `Action` and `Config`.
-    * Example:
-    ```go
-    type User struct {
-        Firstname string `datastore:""`
-        Action    string `datastore:""`
-        Config    string `datastore:""`
-        // ... other fields
-    }
-    ```
-
-2. **Map Data**:
-    * In `TelegramHandler`, map `platform.UserData` (identity) to `ScriptureBot.User` (identity + state).
-    * Load `Action` and `Config` from the database (via `utils.RegisterUser`), not from the platform session.
-
-3. **Handle ResourcePath**:
-    * Populate `env.Props["ResourcePath"]` in the handler and read it from there in command processors.
-
-## 4. Migration Impact Analysis
-
-### Will this affect existing users?
-**No, the data for existing users will remain intact.**
-
-* **Data Compatibility**: The removal of fields (`Action`, `Config`) from the *library struct* does not delete columns in the *database*. Since `ScriptureBot` will define a local struct that *includes* these fields before writing back to the DB, the data is preserved.
-* **Datastore Tags**: Removing tags is safe as the Go Datastore client defaults to field names, which matches the previous behavior.
-
-### Do we need a migration task?
-**Yes, a *Code Migration* task is required.**
-
-`ScriptureBot` **will fail to compile** or **lose state functionality** without code changes because `UserData` will no longer have `Action` or `Config`.
-
-* **Task**: Implement the "Define Local User Model" step. This is critical to preserve the bot's ability to remember user state (e.g., "waiting for search term").
-
-## 5. Conclusion
-
-This refactoring strictly separates "Platform Identity" from "Application State" and "Storage". `BotPlatform` handles the delivery of messages, while `ScriptureBot` owns the user's state and data persistence.
diff --git a/docs/interop/tool_definitions.json b/docs/interop/tool_definitions.json
new file mode 100644
index 00000000..f9c4572f
--- /dev/null
+++ b/docs/interop/tool_definitions.json
@@ -0,0 +1,185 @@
+{
+  "tools": [
+    {
+      "type": "function",
+      "function": {
+        "name": "task_create",
+        "description": "Create a new development task.",
+        "parameters": {
+          "type": "object",
+          "properties": {
+            "category": {
+              "type": "string",
+              "enum": ["foundation", "infrastructure", "domain", "presentation", "migration", "features", "testing", "review", "security"],
+              "description": "The category of the task."
+            },
+            "title": {
+              "type": "string",
+              "description": "The title of the task."
+            },
+            "description": {
+              "type": "string",
+              "description": "Detailed description of the task."
+            }
+          },
+          "required": ["category", "title"]
+        }
+      }
+    },
+    {
+      "type": "function",
+      "function": {
+        "name": "task_list",
+        "description": "List existing tasks, optionally filtered by status or category.",
+        "parameters": {
+          "type": "object",
+          "properties": {
+            "status": {
+              "type": "string",
+              "enum": ["pending", "in_progress", "wip_blocked", "review_requested", "verified", "completed", "blocked", "cancelled", "deferred"],
+              "description": "Filter by task status."
+            },
+            "category": {
+              "type": "string",
+              "enum": ["foundation", "infrastructure", "domain", "presentation", "migration", "features", "testing", "review", "security"],
+              "description": "Filter by task category."
+            },
+            "archived": {
+              "type": "boolean",
+              "description": "Include archived tasks in the list."
+            }
+          }
+        }
+      }
+    },
+    {
+      "type": "function",
+      "function": {
+        "name": "task_update",
+        "description": "Update the status of an existing task.",
+        "parameters": {
+          "type": "object",
+          "properties": {
+            "task_id": {
+              "type": "string",
+              "description": "The ID of the task (e.g., FOUNDATION-20230521-120000)."
+            },
+            "status": {
+              "type": "string",
+              "enum": ["pending", "in_progress", "wip_blocked", "review_requested", "verified", "completed", "blocked", "cancelled", "deferred"],
+              "description": "The new status of the task."
+            }
+          },
+          "required": ["task_id", "status"]
+        }
+      }
+    },
+    {
+      "type": "function",
+      "function": {
+        "name": "task_show",
+        "description": "Show the details of a specific task.",
+        "parameters": {
+          "type": "object",
+          "properties": {
+            "task_id": {
+              "type": "string",
+              "description": "The ID of the task."
+            }
+          },
+          "required": ["task_id"]
+        }
+      }
+    },
+    {
+      "type": "function",
+      "function": {
+        "name": "task_context",
+        "description": "Show tasks that are currently in progress.",
+        "parameters": {
+          "type": "object",
+          "properties": {}
+        }
+      }
+    },
+    {
+      "type": "function",
+      "function": {
+        "name": "task_archive",
+        "description": "Archive a completed task.",
+        "parameters": {
+          "type": "object",
+          "properties": {
+            "task_id": {
+              "type": "string",
+              "description": "The ID of the task to archive."
+            }
+          },
+          "required": ["task_id"]
+        }
+      }
+    },
+    {
+      "type": "function",
+      "function": {
+        "name": "memory_create",
+        "description": "Create a new long-term memory.",
+        "parameters": {
+          "type": "object",
+          "properties": {
+            "title": {
+              "type": "string",
+              "description": "The title of the memory."
+            },
+            "content": {
+              "type": "string",
+              "description": "The content of the memory."
+            },
+            "tags": {
+              "type": "string",
+              "description": "Comma-separated tags for the memory."
+            }
+          },
+          "required": ["title", "content"]
+        }
+      }
+    },
+    {
+      "type": "function",
+      "function": {
+        "name": "memory_list",
+        "description": "List existing memories, optionally filtered by tag.",
+        "parameters": {
+          "type": "object",
+          "properties": {
+            "tag": {
+              "type": "string",
+              "description": "Filter by tag."
+            },
+            "limit": {
+              "type": "integer",
+              "description": "Limit the number of results."
+            }
+          }
+        }
+      }
+    },
+    {
+      "type": "function",
+      "function": {
+        "name": "memory_read",
+        "description": "Read a specific memory.",
+        "parameters": {
+          "type": "object",
+          "properties": {
+            "filename": {
+              "type": "string",
+              "description": "The filename or slug of the memory to read."
+            }
+          },
+          "required": ["filename"]
+        }
+      }
+    }
+  ]
+}
diff --git a/docs/memories/.keep b/docs/memories/.keep
new file mode 100644
index 00000000..e69de29b
diff --git a/docs/tasks/GUIDE.md b/docs/tasks/GUIDE.md
new file mode 100644
index 00000000..3d0a9440
--- /dev/null
+++ b/docs/tasks/GUIDE.md
@@ -0,0 +1,122 @@
+# Task Documentation System Guide
+
+This guide explains how to create, maintain, and update task documentation. It provides a reusable system for tracking implementation work, decisions, and progress.
+
+## Core Philosophy
+**"If it's not documented in `docs/tasks/`, it didn't happen."**
+
+## Directory Structure
+Tasks are organized by category in `docs/tasks/`:
+- `foundation/`: Core architecture and setup
+- `infrastructure/`: Services, adapters, platform code
+- `domain/`: Business logic, use cases
+- `presentation/`: UI, state management
+- `features/`: End-to-end feature implementation
+- `migration/`: Refactoring, upgrades
+- `testing/`: Testing infrastructure
+- `review/`: Code reviews and PR analysis
+- `security/`: Security considerations, risks, mitigations
+
+## Task Document Format
+
+We use **YAML Frontmatter** for metadata and **Markdown** for content.
+
+### Frontmatter (Required)
+```yaml
+---
+id: FOUNDATION-20250521-103000    # Auto-generated Timestamp ID
+status: pending                   # Current status
+title: Initial Project Setup      # Task Title
+priority: medium                  # high, medium, low
+created: 2025-05-21 10:30:00      # Creation timestamp
+category: foundation              # Category
+type: task                        # task, story, bug, epic (Optional)
+sprint: Sprint 1                  # Iteration identifier (Optional)
+estimate: 3                       # Story points / T-shirt size (Optional)
+dependencies: TASK-001, TASK-002  # Comma-separated list of IDs (Optional)
+---
+```
+
+### Status Workflow
+1. `pending`: Created but not started.
+2. `in_progress`: Active development.
+3. `review_requested`: Implementation done, awaiting code review.
+4. `verified`: Reviewed and approved.
+5. `completed`: Merged and finalized.
+6. `wip_blocked` / `blocked`: Development halted.
+7. `cancelled` / `deferred`: Stopped or postponed.
+
+### Content Template
+```markdown
+# [Task Title]
+
+## Task Information
+- **Dependencies**: [List IDs]
+
+## Task Details
+[Description of what needs to be done]
+
+### Acceptance Criteria
+- [ ] Criterion 1
+- [ ] Criterion 2
+
+## Implementation Status
+### Completed Work
+- ✅ Implemented X (file.py)
+
+### Blockers
+[Describe blockers if any]
+```
+
+## Tools
+
+Use the `scripts/tasks` wrapper to manage tasks.
+
+```bash
+# Create a new task (standard)
+./scripts/tasks create foundation "Task Title"
+
+# Create an Agile Story in a Sprint
+./scripts/tasks create features "User Login" --type story --sprint "Sprint 1" --estimate 5
+
+# List tasks (can filter by sprint)
+./scripts/tasks list
+./scripts/tasks list --sprint "Sprint 1"
+
+# Find the next best task to work on (Smart Agent Mode)
+./scripts/tasks next
+
+# Update status
+./scripts/tasks update [TASK_ID] in_progress
+./scripts/tasks update [TASK_ID] review_requested
+./scripts/tasks update [TASK_ID] verified
+./scripts/tasks update [TASK_ID] completed
+
+# Migrate legacy tasks (if updating from older version)
+./scripts/tasks migrate
+```
+
+## Agile Methodology
+
+This system supports Agile/Scrum workflows for LLM-Human collaboration.
+
+### Sprints
+- Tag tasks with `sprint: [Name]` to group them into iterations.
+- Use `./scripts/tasks list --sprint [Name]` to view the sprint backlog.
+
+### Estimation
+- Use `estimate: [Value]` (e.g., Fibonacci numbers 1, 2, 3, 5, 8) to size tasks (see the sketch below).
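+
+Dependencies can also be declared at creation time so that `next` schedules around them. A sketch (the referenced task ID is hypothetical):
+
+```bash
+# Story that stays blocked until its dependency reaches completed/verified
+./scripts/tasks create features "Checkout Flow" --type story --sprint "Sprint 1" \
+  --estimate 5 --dependencies "FEATURES-20251211-080000"
+```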
+ +### Auto-Pilot +- The `./scripts/tasks next` command uses an algorithm to determine the optimal next task based on: + 1. Status (In Progress > Pending) + 2. Dependencies (Unblocked > Blocked) + 3. Sprint (Current Sprint > Backlog) + 4. Priority (High > Low) + 5. Type (Stories/Bugs > Tasks) + +## Agent Integration + +Agents (Claude, etc.) use this system to track their work. +- Always check `./scripts/tasks context` or use `./scripts/tasks next` before starting. +- Keep the task file updated with your progress. +- Use `review_requested` when you need human feedback. diff --git a/scripts/bootstrap.py b/scripts/bootstrap.py new file mode 100644 index 00000000..e180f128 --- /dev/null +++ b/scripts/bootstrap.py @@ -0,0 +1,230 @@ +#!/usr/bin/env python3 +import os +import sys +import shutil +import subprocess + +SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) +REPO_ROOT = os.path.dirname(SCRIPT_DIR) +AGENTS_FILE = os.path.join(REPO_ROOT, "AGENTS.md") +CLAUDE_FILE = os.path.join(REPO_ROOT, "CLAUDE.md") +TEMPLATE_MAINTENANCE = os.path.join(REPO_ROOT, "templates", "maintenance_mode.md") + +STANDARD_HEADERS = [ + "Helper Scripts", + "Agent Interoperability", + "Step 1: Detect Repository State", + "Step 2: Execution Strategy", + "Step 3: Finalize & Switch to Maintenance Mode" +] + +PREAMBLE_IGNORE_PATTERNS = [ + "# AI Agent Bootstrap Instructions", + "# AI Agent Instructions", + "**CURRENT STATUS: BOOTSTRAPPING MODE**", + "You are an expert Software Architect", + "Your current goal is to bootstrap", +] + +def is_ignored_preamble_line(line): + l = line.strip() + # Keep empty lines to preserve spacing in custom content, + # but we will strip the final result to remove excess whitespace. + if not l: + return False + + for p in PREAMBLE_IGNORE_PATTERNS: + if p in l: + return True + return False + +def extract_custom_content(content): + lines = content.splitlines() + custom_sections = [] + preamble_lines = [] + current_header = None + current_lines = [] + + for line in lines: + if line.startswith("## "): + header = line[3:].strip() + + # Flush previous section + if current_header: + if current_header not in STANDARD_HEADERS: + custom_sections.append((current_header, "\n".join(current_lines))) + else: + # Capture preamble (lines before first header) + for l in current_lines: + if not is_ignored_preamble_line(l): + preamble_lines.append(l) + + current_header = header + current_lines = [] + else: + current_lines.append(line) + + # Flush last section + if current_header: + if current_header not in STANDARD_HEADERS: + custom_sections.append((current_header, "\n".join(current_lines))) + else: + # If no headers found, everything is preamble + for l in current_lines: + if not is_ignored_preamble_line(l): + preamble_lines.append(l) + + return "\n".join(preamble_lines).strip(), custom_sections + +def check_state(): + print("Repository Analysis:") + + # Check if already in maintenance mode + if os.path.exists(AGENTS_FILE): + with open(AGENTS_FILE, "r") as f: + content = f.read() + if "BOOTSTRAPPING MODE" not in content: + print("Status: MAINTENANCE MODE (AGENTS.md is already updated)") + print("To list tasks: python3 scripts/tasks.py list") + return + + files = [f for f in os.listdir(REPO_ROOT) if not f.startswith(".")] + print(f"Files in root: {len(files)}") + + if os.path.exists(os.path.join(REPO_ROOT, "src")) or os.path.exists(os.path.join(REPO_ROOT, "lib")) or os.path.exists(os.path.join(REPO_ROOT, ".git")): + print("Status: EXISTING REPOSITORY (Found src/, lib/, or .git/)") + else: + print("Status: 
NEW REPOSITORY (Likely)") + + # Check for hooks + hook_path = os.path.join(REPO_ROOT, ".git", "hooks", "pre-commit") + if not os.path.exists(hook_path): + print("\nTip: Run 'python3 scripts/tasks.py install-hooks' to enable safety checks.") + + print("\nNext Steps:") + print("1. Run 'python3 scripts/tasks.py init' to scaffold directories.") + print("2. Run 'python3 scripts/tasks.py create foundation \"Initial Setup\"' to track your work.") + print("3. Explore docs/architecture/ and docs/features/.") + print("4. When ready to switch to maintenance mode, run: python3 scripts/bootstrap.py finalize --interactive") + +def finalize(): + interactive = "--interactive" in sys.argv + print("Finalizing setup...") + if not os.path.exists(TEMPLATE_MAINTENANCE): + print(f"Error: Template {TEMPLATE_MAINTENANCE} not found.") + sys.exit(1) + + # Safety check + if os.path.exists(AGENTS_FILE): + with open(AGENTS_FILE, "r") as f: + content = f.read() + if "BOOTSTRAPPING MODE" not in content and "--force" not in sys.argv: + print("Error: AGENTS.md does not appear to be in bootstrapping mode.") + print("Use --force to overwrite anyway.") + sys.exit(1) + + # Ensure init is run + print("Ensuring directory structure...") + tasks_script = os.path.join(SCRIPT_DIR, "tasks.py") + try: + subprocess.check_call([sys.executable, tasks_script, "init"]) + except subprocess.CalledProcessError: + print("Error: Failed to initialize directories.") + sys.exit(1) + + # Analyze AGENTS.md for custom sections + custom_sections = [] + custom_preamble = "" + if os.path.exists(AGENTS_FILE): + try: + with open(AGENTS_FILE, "r") as f: + current_content = f.read() + custom_preamble, custom_sections = extract_custom_content(current_content) + except Exception as e: + print(f"Warning: Failed to parse AGENTS.md for custom sections: {e}") + + if interactive: + print("\n--- Merge Analysis ---") + if custom_preamble: + print("[PRESERVED] Custom Preamble (lines before first header)") + print(f" Snippet: {custom_preamble.splitlines()[0][:60]}...") + else: + print("[INFO] No custom preamble found.") + + if custom_sections: + print(f"[PRESERVED] {len(custom_sections)} Custom Sections:") + for header, _ in custom_sections: + print(f" - {header}") + else: + print("[INFO] No custom sections found.") + + print("\n[REPLACED] The following standard bootstrapping sections will be replaced by Maintenance Mode instructions:") + for header in STANDARD_HEADERS: + print(f" - {header}") + + print(f"\n[ACTION] AGENTS.md will be backed up to AGENTS.md.bak") + + try: + # Use input if available, but handle non-interactive environments + response = input("\nProceed with finalization? 
[y/N] ") + except EOFError: + response = "n" + + if response.lower() not in ["y", "yes"]: + print("Aborting.") + sys.exit(0) + + # Backup AGENTS.md + if os.path.exists(AGENTS_FILE): + backup_file = AGENTS_FILE + ".bak" + try: + shutil.copy2(AGENTS_FILE, backup_file) + print(f"Backed up AGENTS.md to {backup_file}") + if not custom_sections and not custom_preamble and not interactive: + print("IMPORTANT: If you added custom instructions to AGENTS.md, they are now in .bak") + print("Please review AGENTS.md.bak and merge any custom context into the new AGENTS.md manually.") + elif not interactive: + print(f"NOTE: Custom sections/preamble were preserved in the new AGENTS.md.") + print("Please review AGENTS.md.bak to ensure no other context was lost.") + except Exception as e: + print(f"Warning: Failed to backup AGENTS.md: {e}") + + # Read template + with open(TEMPLATE_MAINTENANCE, "r") as f: + content = f.read() + + # Prepend custom preamble + if custom_preamble: + content = custom_preamble + "\n\n" + content + + # Append custom sections + if custom_sections: + content += "\n" + for header, body in custom_sections: + content += f"\n## {header}\n{body}" + if not interactive: + print(f"Appended {len(custom_sections)} custom sections to new AGENTS.md") + + # Overwrite AGENTS.md + with open(AGENTS_FILE, "w") as f: + f.write(content) + + print(f"Updated {AGENTS_FILE} with maintenance instructions.") + + # Check CLAUDE.md symlink + if os.path.islink(CLAUDE_FILE): + print(f"{CLAUDE_FILE} is a symlink. Verified.") + else: + print(f"{CLAUDE_FILE} is NOT a symlink. Recreating it...") + if os.path.exists(CLAUDE_FILE): + os.remove(CLAUDE_FILE) + os.symlink("AGENTS.md", CLAUDE_FILE) + print("Symlink created.") + + print("\nBootstrapping Complete! The agent is now in Maintenance Mode.") + +if __name__ == "__main__": + if len(sys.argv) > 1 and sys.argv[1] == "finalize": + finalize() + else: + check_state() diff --git a/scripts/memory.py b/scripts/memory.py new file mode 100644 index 00000000..f82fef42 --- /dev/null +++ b/scripts/memory.py @@ -0,0 +1,239 @@ +#!/usr/bin/env python3 +import os +import sys +import argparse +import json +import datetime +import re + +# Determine the root directory of the repo +SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) +# Allow overriding root for testing, similar to tasks.py +REPO_ROOT = os.getenv("TASKS_REPO_ROOT", os.path.dirname(SCRIPT_DIR)) +MEMORY_DIR = os.path.join(REPO_ROOT, "docs", "memories") + +def init_memory(): + """Ensures the memory directory exists.""" + os.makedirs(MEMORY_DIR, exist_ok=True) + if not os.path.exists(os.path.join(MEMORY_DIR, ".keep")): + with open(os.path.join(MEMORY_DIR, ".keep"), "w") as f: + pass + +def slugify(text): + """Creates a URL-safe slug from text.""" + text = text.lower().strip() + return re.sub(r'[^a-z0-9-]', '-', text).strip('-') + +def create_memory(title, content, tags=None, output_format="text"): + init_memory() + tags = tags or [] + if isinstance(tags, str): + tags = [t.strip() for t in tags.split(",") if t.strip()] + + date_str = datetime.date.today().isoformat() + slug = slugify(title) + if not slug: + slug = "untitled" + + filename = f"{date_str}-{slug}.md" + filepath = os.path.join(MEMORY_DIR, filename) + + # Handle duplicates by appending counter + counter = 1 + while os.path.exists(filepath): + filename = f"{date_str}-{slug}-{counter}.md" + filepath = os.path.join(MEMORY_DIR, filename) + counter += 1 + + # Create Frontmatter + fm = f"""--- +date: {date_str} +title: "{title}" +tags: {json.dumps(tags)} +created: 
{datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")} +--- +""" + + full_content = fm + "\n" + content + "\n" + + try: + with open(filepath, "w") as f: + f.write(full_content) + + if output_format == "json": + print(json.dumps({ + "success": True, + "filepath": filepath, + "title": title, + "date": date_str + })) + else: + print(f"Created memory: {filepath}") + except Exception as e: + msg = f"Error creating memory: {e}" + if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + +def list_memories(tag=None, limit=20, output_format="text"): + if not os.path.exists(MEMORY_DIR): + if output_format == "json": + print(json.dumps([])) + else: + print("No memories found.") + return + + memories = [] + try: + files = [f for f in os.listdir(MEMORY_DIR) if f.endswith(".md") and f != ".keep"] + except FileNotFoundError: + files = [] + + for f in files: + path = os.path.join(MEMORY_DIR, f) + try: + with open(path, "r") as file: + content = file.read() + + # Extract basic info from frontmatter + title = "Unknown" + date = "Unknown" + tags = [] + + # Simple regex parsing to avoid YAML dependency + m_title = re.search(r'^title:\s*"(.*)"', content, re.MULTILINE) + if m_title: + title = m_title.group(1) + else: + # Fallback: unquoted title + m_title_uq = re.search(r'^title:\s*(.*)', content, re.MULTILINE) + if m_title_uq: title = m_title_uq.group(1).strip() + + m_date = re.search(r'^date:\s*(.*)', content, re.MULTILINE) + if m_date: date = m_date.group(1).strip() + + m_tags = re.search(r'^tags:\s*(\[.*\])', content, re.MULTILINE) + if m_tags: + try: + tags = json.loads(m_tags.group(1)) + except: + pass + + if tag and tag not in tags: + continue + + memories.append({ + "filename": f, + "title": title, + "date": date, + "tags": tags, + "path": path + }) + except Exception: + # Skip unreadable files + pass + + # Sort by date desc (filename usually works for YYYY-MM-DD prefix) + memories.sort(key=lambda x: x["filename"], reverse=True) + memories = memories[:limit] + + if output_format == "json": + print(json.dumps(memories)) + else: + if not memories: + print("No memories found.") + return + + print(f"{'Date':<12} {'Title'}") + print("-" * 50) + for m in memories: + print(f"{m['date']:<12} {m['title']}") + +def read_memory(filename, output_format="text"): + path = os.path.join(MEMORY_DIR, filename) + if not os.path.exists(path): + # Try finding by partial match if not exact + if os.path.exists(MEMORY_DIR): + matches = [f for f in os.listdir(MEMORY_DIR) if filename in f and f.endswith(".md")] + if len(matches) == 1: + path = os.path.join(MEMORY_DIR, matches[0]) + elif len(matches) > 1: + msg = f"Error: Ambiguous memory identifier '{filename}'. Matches: {', '.join(matches)}" + if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + else: + msg = f"Error: Memory file '{filename}' not found." + if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + else: + msg = f"Error: Memory directory does not exist." 
+ if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + + try: + with open(path, "r") as f: + content = f.read() + + if output_format == "json": + print(json.dumps({"filename": os.path.basename(path), "content": content})) + else: + print(content) + except Exception as e: + msg = f"Error reading file: {e}" + if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + +def main(): + # Common argument for format + parent_parser = argparse.ArgumentParser(add_help=False) + parent_parser.add_argument("--format", choices=["text", "json"], default="text", help="Output format") + + parser = argparse.ArgumentParser(description="Manage long-term memories") + + subparsers = parser.add_subparsers(dest="command") + + # Create + create_parser = subparsers.add_parser("create", parents=[parent_parser], help="Create a new memory") + create_parser.add_argument("title", help="Title of the memory") + create_parser.add_argument("content", help="Content of the memory") + create_parser.add_argument("--tags", help="Comma-separated tags") + + # List + list_parser = subparsers.add_parser("list", parents=[parent_parser], help="List memories") + list_parser.add_argument("--tag", help="Filter by tag") + list_parser.add_argument("--limit", type=int, default=20, help="Max results") + + # Read + read_parser = subparsers.add_parser("read", parents=[parent_parser], help="Read a memory") + read_parser.add_argument("filename", help="Filename or slug part") + + args = parser.parse_args() + + # Default format to text if not present (though parents default handles it) + fmt = getattr(args, "format", "text") + + if args.command == "create": + create_memory(args.title, args.content, args.tags, fmt) + elif args.command == "list": + list_memories(args.tag, args.limit, fmt) + elif args.command == "read": + read_memory(args.filename, fmt) + else: + parser.print_help() + +if __name__ == "__main__": + main() diff --git a/scripts/tasks b/scripts/tasks new file mode 100644 index 00000000..9c4d703a --- /dev/null +++ b/scripts/tasks @@ -0,0 +1,15 @@ +#!/bin/bash + +# Wrapper for tasks.py to ensure Python 3 is available + +if ! command -v python3 &> /dev/null; then + echo "Error: Python 3 is not installed or not in PATH." + echo "Please install Python 3 to use the task manager." 
+ exit 1 +fi + +# Get the directory of this script +SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" + +# Execute tasks.py +exec python3 "$SCRIPT_DIR/tasks.py" "$@" diff --git a/scripts/tasks.py b/scripts/tasks.py index d5c26d89..a585378c 100755 --- a/scripts/tasks.py +++ b/scripts/tasks.py @@ -1,6 +1,7 @@ #!/usr/bin/env python3 import os import sys +import shutil import argparse import re import json @@ -11,7 +12,7 @@ # Determine the root directory of the repo # Assumes this script is in scripts/ SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) -REPO_ROOT = os.path.dirname(SCRIPT_DIR) +REPO_ROOT = os.getenv("TASKS_REPO_ROOT", os.path.dirname(SCRIPT_DIR)) DOCS_DIR = os.path.join(REPO_ROOT, "docs", "tasks") TEMPLATES_DIR = os.path.join(REPO_ROOT, "templates") @@ -23,6 +24,8 @@ "migration", "features", "testing", + "review", + "security", ] VALID_STATUSES = [ @@ -37,6 +40,15 @@ "deferred" ] +VALID_TYPES = [ + "epic", + "story", + "task", + "bug" +] + +ARCHIVE_DIR_NAME = "archive" + def init_docs(): """Scaffolds the documentation directory structure.""" print("Initializing documentation structure...") @@ -49,14 +61,46 @@ def init_docs(): with open(os.path.join(path, ".keep"), "w") as f: pass + # Copy GUIDE.md if missing + guide_path = os.path.join(DOCS_DIR, "GUIDE.md") + guide_template = os.path.join(TEMPLATES_DIR, "GUIDE.md") + if not os.path.exists(guide_path) and os.path.exists(guide_template): + shutil.copy(guide_template, guide_path) + print(f"Created {guide_path}") + # Create other doc directories - for doc_type in ["architecture", "features"]: + for doc_type in ["architecture", "features", "security"]: path = os.path.join(REPO_ROOT, "docs", doc_type) os.makedirs(path, exist_ok=True) readme_path = os.path.join(path, "README.md") if not os.path.exists(readme_path): - with open(readme_path, "w") as f: - f.write(f"# {doc_type.capitalize()} Documentation\n\nAdd {doc_type} documentation here.\n") + if doc_type == "security": + content = """# Security Documentation + +Use this section to document security considerations, risks, and mitigations. 
+ +## Risk Assessment +* [ ] Threat Model +* [ ] Data Privacy + +## Compliance +* [ ] Requirements + +## Secrets Management +* [ ] Policy +""" + else: + content = f"# {doc_type.capitalize()} Documentation\n\nAdd {doc_type} documentation here.\n" + + with open(readme_path, "w") as f: + f.write(content) + + # Create memories directory + memories_path = os.path.join(REPO_ROOT, "docs", "memories") + os.makedirs(memories_path, exist_ok=True) + if not os.path.exists(os.path.join(memories_path, ".keep")): + with open(os.path.join(memories_path, ".keep"), "w") as f: + pass print(f"Created directories in {os.path.join(REPO_ROOT, 'docs')}") @@ -111,11 +155,18 @@ def parse_task_content(content, filepath=None): # Try Frontmatter first frontmatter, body = extract_frontmatter(content) if frontmatter: + deps_str = frontmatter.get("dependencies") or "" + deps = [d.strip() for d in deps_str.split(",") if d.strip()] + return { "id": frontmatter.get("id", "unknown"), "status": frontmatter.get("status", "unknown"), "title": frontmatter.get("title", "No Title"), "priority": frontmatter.get("priority", "medium"), + "type": frontmatter.get("type", "task"), + "sprint": frontmatter.get("sprint", ""), + "estimate": frontmatter.get("estimate", ""), + "dependencies": deps, "filepath": filepath, "content": content } @@ -136,11 +187,15 @@ def parse_task_content(content, filepath=None): "status": status, "title": title, "priority": priority, + "type": "task", + "sprint": "", + "estimate": "", + "dependencies": [], "filepath": filepath, "content": content } -def create_task(category, title, description, priority="medium", status="pending", output_format="text"): +def create_task(category, title, description, priority="medium", status="pending", dependencies=None, task_type="task", sprint="", estimate="", output_format="text"): if category not in CATEGORIES: msg = f"Error: Category '{category}' not found. Available: {', '.join(CATEGORIES)}" if output_format == "json": @@ -158,6 +213,18 @@ def create_task(category, title, description, priority="medium", status="pending filepath = os.path.join(DOCS_DIR, category, filename) # New YAML Frontmatter Format + deps_str = "" + if dependencies: + deps_str = ", ".join(dependencies) + + extra_fm = "" + if task_type: + extra_fm += f"type: {task_type}\n" + if sprint: + extra_fm += f"sprint: {sprint}\n" + if estimate: + extra_fm += f"estimate: {estimate}\n" + content = f"""--- id: {task_id} status: {status} @@ -165,7 +232,8 @@ def create_task(category, title, description, priority="medium", status="pending priority: {priority} created: {datetime.now().strftime("%Y-%m-%d %H:%M:%S")} category: {category} ---- +dependencies: {deps_str} +{extra_fm}--- # {title} @@ -182,7 +250,8 @@ def create_task(category, title, description, priority="medium", status="pending "title": title, "filepath": filepath, "status": status, - "priority": priority + "priority": priority, + "type": task_type })) else: print(f"Created task: {filepath}") @@ -201,12 +270,7 @@ def find_task_file(task_id): for file in os.listdir(category_dir): if file.startswith(task_id) and file.endswith(".md"): return os.path.join(category_dir, file) - # If not found in expected category, return None (or fall through if we want to be paranoid) - # But the ID structure is strict, so we can likely return None here. - # However, for safety against moved files, let's fall through to full search if not found? - # No, if it has the category prefix, it SHOULD be in that folder. - # But if the user moved it manually... 
let's stick to the optimization. - return None + # Fallback to full search if not found in expected category (e.g. moved to archive) for root, _, files in os.walk(DOCS_DIR): for file in files: @@ -268,6 +332,37 @@ def delete_task(task_id, output_format="text"): print(msg) sys.exit(1) +def archive_task(task_id, output_format="text"): + filepath = find_task_file(task_id) + if not filepath: + msg = f"Error: Task ID {task_id} not found." + if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + + try: + archive_dir = os.path.join(DOCS_DIR, ARCHIVE_DIR_NAME) + os.makedirs(archive_dir, exist_ok=True) + filename = os.path.basename(filepath) + new_filepath = os.path.join(archive_dir, filename) + + os.rename(filepath, new_filepath) + + if output_format == "json": + print(json.dumps({"success": True, "id": task_id, "message": "Archived task", "new_path": new_filepath})) + else: + print(f"Archived task: {task_id} -> {new_filepath}") + + except Exception as e: + msg = f"Error archiving task: {e}" + if output_format == "json": + print(json.dumps({"error": msg})) + else: + print(msg) + sys.exit(1) + def migrate_to_frontmatter(content, task_data): """Converts legacy content to Frontmatter format.""" # Strip the header section from legacy content @@ -283,6 +378,12 @@ def migrate_to_frontmatter(content, task_data): if "*Created:" in description: description = description.split("---")[0].strip() + # Check for extra keys in task_data that might need preservation + extra_fm = "" + if task_data.get("type"): extra_fm += f"type: {task_data['type']}\n" + if task_data.get("sprint"): extra_fm += f"sprint: {task_data['sprint']}\n" + if task_data.get("estimate"): extra_fm += f"estimate: {task_data['estimate']}\n" + new_content = f"""--- id: {task_data['id']} status: {task_data['status']} @@ -290,7 +391,7 @@ def migrate_to_frontmatter(content, task_data): priority: {task_data['priority']} created: {datetime.now().strftime("%Y-%m-%d %H:%M:%S")} category: unknown ---- +{extra_fm}--- # {task_data['title']} @@ -367,14 +468,20 @@ def update_task_status(task_id, new_status, output_format="text"): print(f"Updated {task_id} status to {new_status}") -def list_tasks(status=None, category=None, output_format="text"): +def list_tasks(status=None, category=None, sprint=None, include_archived=False, output_format="text"): tasks = [] for root, dirs, files in os.walk(DOCS_DIR): + rel_path = os.path.relpath(root, DOCS_DIR) + + # Exclude archive unless requested + if not include_archived: + if rel_path == ARCHIVE_DIR_NAME or rel_path.startswith(ARCHIVE_DIR_NAME + os.sep): + continue + # Filter by category if provided if category: - rel_path = os.path.relpath(root, DOCS_DIR) - if rel_path != category: + if rel_path != category and not rel_path.startswith(category + os.sep): continue for file in files: @@ -400,6 +507,9 @@ def list_tasks(status=None, category=None, output_format="text"): if status and status.lower() != task["status"].lower(): continue + if sprint and sprint != task.get("sprint"): + continue + tasks.append(task) if output_format == "json": @@ -407,11 +517,11 @@ def list_tasks(status=None, category=None, output_format="text"): print(json.dumps(summary)) else: # Adjust width for ID to handle longer IDs - print(f"{'ID':<25} {'Status':<20} {'Title'}") - print("-" * 75) + print(f"{'ID':<25} {'Status':<20} {'Type':<8} {'Title'}") + print("-" * 85) for t in tasks: - # Status width increased to accommodate 'review_requested' - print(f"{t['id']:<25} {t['status']:<20} {t['title']}") + 
t_type = t.get("type", "task")[:8] + print(f"{t['id']:<25} {t['status']:<20} {t_type:<8} {t['title']}") def get_context(output_format="text"): """Lists tasks that are currently in progress.""" @@ -451,6 +561,9 @@ def migrate_all(): def validate_all(output_format="text"): """Validates all task files.""" errors = [] + all_tasks = {} # id -> {path, deps} + + # Pass 1: Parse and Basic Validation for root, dirs, files in os.walk(DOCS_DIR): for file in files: if not file.endswith(".md") or file in ["GUIDE.md", "README.md"]: @@ -468,17 +581,71 @@ def validate_all(output_format="text"): # Check 2: Required fields required_fields = ["id", "status", "title", "created"] - for field in required_fields: - if field not in frontmatter: - errors.append(f"{file}: Missing required field '{field}'") + missing = [field for field in required_fields if field not in frontmatter] + if missing: + errors.append(f"{file}: Missing required fields: {', '.join(missing)}") + continue + + task_id = frontmatter["id"] # Check 3: Valid Status if "status" in frontmatter and frontmatter["status"] not in VALID_STATUSES: errors.append(f"{file}: Invalid status '{frontmatter['status']}'") + # Check 4: Valid Type + if "type" in frontmatter and frontmatter["type"] not in VALID_TYPES: + errors.append(f"{file}: Invalid type '{frontmatter['type']}'") + + # Parse dependencies + deps_str = frontmatter.get("dependencies") or "" + deps = [d.strip() for d in deps_str.split(",") if d.strip()] + + # Check for Duplicate IDs + if task_id in all_tasks: + errors.append(f"{file}: Duplicate Task ID '{task_id}' (also in {all_tasks[task_id]['path']})") + + all_tasks[task_id] = {"path": path, "deps": deps} + except Exception as e: errors.append(f"{file}: Error reading/parsing: {str(e)}") + # Pass 2: Dependency Validation & Cycle Detection + visited = set() + recursion_stack = set() + + def detect_cycle(curr_id, path): + visited.add(curr_id) + recursion_stack.add(curr_id) + + if curr_id in all_tasks: + for dep_id in all_tasks[curr_id]["deps"]: + # Dependency Existence Check + if dep_id not in all_tasks: + # This will be caught in the loop below, but we need to handle it here to avoid error + continue + + if dep_id not in visited: + if detect_cycle(dep_id, path + [dep_id]): + return True + elif dep_id in recursion_stack: + path.append(dep_id) + return True + + recursion_stack.remove(curr_id) + return False + + for task_id, info in all_tasks.items(): + # Check dependencies exist + for dep_id in info["deps"]: + if dep_id not in all_tasks: + errors.append(f"{os.path.basename(info['path'])}: Invalid dependency '{dep_id}' (task not found)") + + # Check cycles + if task_id not in visited: + cycle_path = [task_id] + if detect_cycle(task_id, cycle_path): + errors.append(f"Circular dependency detected: {' -> '.join(cycle_path)}") + if output_format == "json": print(json.dumps({"valid": len(errors) == 0, "errors": errors})) else: @@ -490,6 +657,161 @@ def validate_all(output_format="text"): print(f" - {err}") sys.exit(1) +def visualize_tasks(output_format="text"): + """Generates a Mermaid diagram of task dependencies.""" + tasks = [] + # Collect all tasks + for root, dirs, files in os.walk(DOCS_DIR): + for file in files: + if not file.endswith(".md") or file in ["GUIDE.md", "README.md"]: + continue + path = os.path.join(root, file) + try: + with open(path, "r") as f: + content = f.read() + task = parse_task_content(content, path) + if task["id"] != "unknown": + tasks.append(task) + except: + pass + + if output_format == "json": + nodes = [{"id": t["id"], 
"title": t["title"], "status": t["status"]} for t in tasks] + edges = [] + for t in tasks: + for dep in t.get("dependencies", []): + edges.append({"from": dep, "to": t["id"]}) + print(json.dumps({"nodes": nodes, "edges": edges})) + return + + # Mermaid Output + print("graph TD") + + status_colors = { + "completed": "#90EE90", + "verified": "#90EE90", + "in_progress": "#ADD8E6", + "review_requested": "#FFFACD", + "wip_blocked": "#FFB6C1", + "blocked": "#FF7F7F", + "pending": "#D3D3D3", + "deferred": "#A9A9A9", + "cancelled": "#696969" + } + + # Nodes + for t in tasks: + # Sanitize title for label + safe_title = t["title"].replace('"', '').replace('[', '').replace(']', '') + print(f' {t["id"]}["{t["id"]}: {safe_title}"]') + + # Style + color = status_colors.get(t["status"], "#FFFFFF") + print(f" style {t['id']} fill:{color},stroke:#333,stroke-width:2px") + + # Edges + for t in tasks: + deps = t.get("dependencies", []) + for dep in deps: + print(f" {dep} --> {t['id']}") + +def get_next_task(output_format="text"): + """Identifies the next best task to work on.""" + # 1. Collect all tasks + all_tasks = {} + for root, _, files in os.walk(DOCS_DIR): + for file in files: + if not file.endswith(".md") or file in ["GUIDE.md", "README.md"]: + continue + path = os.path.join(root, file) + try: + with open(path, "r") as f: + content = f.read() + task = parse_task_content(content, path) + if task["id"] != "unknown": + all_tasks[task["id"]] = task + except: + pass + + candidates = [] + + # Priority mapping + prio_score = {"high": 3, "medium": 2, "low": 1, "unknown": 1} + + for tid, task in all_tasks.items(): + # Filter completed + if task["status"] in ["completed", "verified", "cancelled", "deferred", "blocked"]: + continue + + # Check dependencies + deps = task.get("dependencies", []) + blocked = False + for dep_id in deps: + if dep_id not in all_tasks: + blocked = True # Missing dependency + break + + dep_status = all_tasks[dep_id]["status"] + if dep_status not in ["completed", "verified"]: + blocked = True + break + + if blocked: + continue + + # Calculate Score + score = 0 + + # Status Bonus + if task["status"] == "in_progress": + score += 1000 + elif task["status"] == "pending": + score += 100 + elif task["status"] == "wip_blocked": + # Unblocked now + score += 500 + + # Priority + score += prio_score.get(task.get("priority", "medium"), 1) * 10 + + # Sprint Bonus + if task.get("sprint"): + score += 50 + + # Type Bonus (Stories/Bugs > Tasks > Epics) + t_type = task.get("type", "task") + if t_type in ["story", "bug"]: + score += 20 + elif t_type == "task": + score += 10 + + candidates.append((score, task)) + + candidates.sort(key=lambda x: x[0], reverse=True) + + if not candidates: + msg = "No suitable tasks found (all completed or blocked)." 
+        if output_format == "json":
+            print(json.dumps({"message": msg}))
+        else:
+            print(msg)
+        return
+
+    best = candidates[0][1]
+
+    if output_format == "json":
+        print(json.dumps(best))
+    else:
+        print(f"Recommended Next Task (Score: {candidates[0][0]}):")
+        print(f"ID: {best['id']}")
+        print(f"Title: {best['title']}")
+        print(f"Status: {best['status']}")
+        print(f"Priority: {best.get('priority', 'medium')}")
+        print(f"Type: {best.get('type', 'task')}")
+        if best.get("sprint"):
+            print(f"Sprint: {best.get('sprint')}")
+        print(f"\nRun: scripts/tasks show {best['id']}")
+
 def install_hooks():
     """Installs the git pre-commit hook."""
     hook_path = os.path.join(REPO_ROOT, ".git", "hooks", "pre-commit")
@@ -533,11 +855,17 @@ def main():
     create_parser.add_argument("--desc", default="To be determined", help="Task description")
     create_parser.add_argument("--priority", default="medium", help="Task priority")
     create_parser.add_argument("--status", choices=VALID_STATUSES, default="pending", help="Task status")
+    create_parser.add_argument("--dependencies", help="Comma-separated list of task IDs this task depends on")
+    create_parser.add_argument("--type", choices=VALID_TYPES, default="task", help="Task type")
+    create_parser.add_argument("--sprint", default="", help="Sprint name/ID")
+    create_parser.add_argument("--estimate", default="", help="Estimate (points/size)")
 
     # List
     list_parser = subparsers.add_parser("list", parents=[parent_parser], help="List tasks")
     list_parser.add_argument("--status", help="Filter by status")
     list_parser.add_argument("--category", choices=CATEGORIES, help="Filter by category")
+    list_parser.add_argument("--sprint", help="Filter by sprint")
+    list_parser.add_argument("--archived", action="store_true", help="Include archived tasks")
 
     # Show
     show_parser = subparsers.add_parser("show", parents=[parent_parser], help="Show task details")
@@ -552,9 +880,16 @@
     delete_parser = subparsers.add_parser("delete", parents=[parent_parser], help="Delete a task")
     delete_parser.add_argument("task_id", help="Task ID (e.g., FOUNDATION-001)")
 
+    # Archive
+    archive_parser = subparsers.add_parser("archive", parents=[parent_parser], help="Archive a task")
+    archive_parser.add_argument("task_id", help="Task ID")
+
     # Context
     subparsers.add_parser("context", parents=[parent_parser], help="Show current context (in_progress tasks)")
 
+    # Next
+    subparsers.add_parser("next", parents=[parent_parser], help="Suggest the next task to work on")
+
     # Migrate
     subparsers.add_parser("migrate", parents=[parent_parser], help="Migrate legacy tasks to new format")
 
@@ -565,6 +900,9 @@
     # Validate
     subparsers.add_parser("validate", parents=[parent_parser], help="Validate task files")
 
+    # Visualize
+    subparsers.add_parser("visualize", parents=[parent_parser], help="Visualize task dependencies (Mermaid)")
+
     # Install Hooks
     subparsers.add_parser("install-hooks", parents=[parent_parser], help="Install git hooks")
 
@@ -574,25 +912,34 @@
     fmt = getattr(args, "format", "text")
 
     if args.command == "create":
-        create_task(args.category, args.title, args.desc, priority=args.priority, status=args.status, output_format=fmt)
+        deps = []
+        if args.dependencies:
+            deps = [d.strip() for d in args.dependencies.split(",") if d.strip()]
+        create_task(args.category, args.title, args.desc, priority=args.priority, status=args.status, dependencies=deps, task_type=args.type, sprint=args.sprint, estimate=args.estimate, output_format=fmt)
     elif args.command == "list":
-        list_tasks(args.status, args.category, output_format=fmt)
+        list_tasks(args.status, args.category, sprint=args.sprint, include_archived=args.archived, output_format=fmt)
     elif args.command == "init":
         init_docs()
     elif args.command == "show":
         show_task(args.task_id, output_format=fmt)
     elif args.command == "delete":
         delete_task(args.task_id, output_format=fmt)
+    elif args.command == "archive":
+        archive_task(args.task_id, output_format=fmt)
     elif args.command == "update":
         update_task_status(args.task_id, args.status, output_format=fmt)
     elif args.command == "context":
         get_context(output_format=fmt)
+    elif args.command == "next":
+        get_next_task(output_format=fmt)
     elif args.command == "migrate":
         migrate_all()
     elif args.command == "complete":
         update_task_status(args.task_id, "completed", output_format=fmt)
     elif args.command == "validate":
         validate_all(output_format=fmt)
+    elif args.command == "visualize":
+        visualize_tasks(output_format=fmt)
     elif args.command == "install-hooks":
         install_hooks()
     else:
diff --git a/templates/maintenance_mode.md b/templates/maintenance_mode.md
new file mode 100644
index 00000000..3d53c806
--- /dev/null
+++ b/templates/maintenance_mode.md
@@ -0,0 +1,88 @@
+# AI Agent Instructions
+
+You are an expert Software Engineer working on this project. Your primary responsibility is to implement features and fixes while strictly adhering to the **Task Documentation System**.
+
+## Core Philosophy
+**"If it's not documented in `docs/tasks/`, it didn't happen."**
+
+## Workflow
+1. **Pick a Task**: Run `python3 scripts/tasks.py context` to see active tasks, or `list` to see pending ones.
+2. **Plan & Document**:
+    * **Memory Check**: Run `python3 scripts/memory.py list` (or use the Memory Skill) to recall relevant long-term information.
+    * **Security Check**: Ask the user about specific security considerations for this task.
+    * If starting a new task, use `scripts/tasks.py create` (or `python3 scripts/tasks.py create`) to generate a new task file.
+    * Update the task status: `python3 scripts/tasks.py update [TASK_ID] in_progress`.
+3. **Implement**: Write code, run tests.
+4. **Update Documentation Loop**:
+    * As you complete sub-tasks, check them off in the task document.
+    * If you hit a blocker, update status to `wip_blocked` and describe the issue in the file.
+    * Record key architectural decisions in the task document.
+    * **Memory Update**: If you learn something valuable for the long term, use `scripts/memory.py create` to record it.
+5. **Review & Verify**:
+    * Once implementation is complete, update status to `review_requested`: `python3 scripts/tasks.py update [TASK_ID] review_requested`.
+    * Ask a human or another agent to review the code.
+    * Once approved and tested, update status to `verified`.
+6. **Finalize**:
+    * Update status to `completed`: `python3 scripts/tasks.py update [TASK_ID] completed`.
+    * Record actual effort in the file.
+    * Ensure all acceptance criteria are met.
+
+## Tools
+* **Wrapper**: `./scripts/tasks` (checks for Python; recommended).
+* **Create**: `./scripts/tasks create [category] "Title"`
+* **List**: `./scripts/tasks list [--status pending]`
+* **Context**: `./scripts/tasks context`
+* **Update**: `./scripts/tasks update [ID] [status]`
+* **Migrate**: `./scripts/tasks migrate` (Migrate legacy tasks to new format)
+* **Memory**: `./scripts/memory.py [create|list|read]`
+* **JSON Output**: Add `--format json` to any command for machine parsing.
+
+## Documentation Reference
+* **Guide**: Read `docs/tasks/GUIDE.md` for strict formatting and process rules.
+* **Architecture**: Refer to `docs/architecture/` for system design. +* **Features**: Refer to `docs/features/` for feature specifications. +* **Security**: Refer to `docs/security/` for risk assessments and mitigations. +* **Memories**: Refer to `docs/memories/` for long-term project context. + +## Code Style & Standards +* Follow the existing patterns in the codebase. +* Ensure all new code is covered by tests (if testing infrastructure exists). + +## PR Review Methodology +When performing a PR review, follow this "Human-in-the-loop" process to ensure depth and efficiency. + +### 1. Preparation +1. **Create Task**: `python3 scripts/tasks.py create review "Review PR #<N>: <Title>"` +2. **Fetch Details**: Use `gh` to get the PR context. + * `gh pr view <N>` + * `gh pr diff <N>` + +### 2. Analysis & Planning (The "Review Plan") +**Do not review line-by-line yet.** Instead, analyze the changes and document a **Review Plan** in the task file (or present it for approval). + +Your plan must include: +* **High-Level Summary**: Purpose, new APIs, breaking changes. +* **Dependency Check**: New libraries, maintenance status, security. +* **Impact Assessment**: Effect on existing code/docs. +* **Focus Areas**: Prioritized list of files/modules to check. +* **Suggested Comments**: Draft comments for specific lines. + * Format: `File: <path> | Line: <N> | Comment: <suggestion>` + * Tone: Friendly, suggestion-based ("Consider...", "Nit: ..."). + +### 3. Execution +Once the human approves the plan and comments: +1. **Pending Review**: Create a pending review using `gh`. + * `COMMIT_SHA=$(gh pr view <N> --json headRefOid -q .headRefOid)` + * `gh api repos/{owner}/{repo}/pulls/{N}/reviews -f commit_id="$COMMIT_SHA"` +2. **Batch Comments**: Add comments to the pending review. + * `gh api repos/{owner}/{repo}/pulls/{N}/comments -f body="..." -f path="..." -f commit_id="$COMMIT_SHA" -F line=<L> -f side="RIGHT"` +3. **Submit**: + * `gh pr review <N> --approve --body "Summary..."` (or `--request-changes`). + +### 4. Close Task +* Update task status to `completed`. + +## Agent Interoperability +- **Task Manager Skill**: `.claude/skills/task_manager/` +- **Memory Skill**: `.claude/skills/memory/` +- **Tool Definitions**: `docs/interop/tool_definitions.json`
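
For agents that integrate with the harness programmatically, the sketch below shows one way to consume the `--format json` contract, assuming the output shapes emitted by `get_next_task()` in `scripts/tasks.py` (a task object, or `{"message": ...}` when nothing is actionable). The `pick_next_task` helper is illustrative, not part of the harness.

```python
import json
import subprocess

def pick_next_task():
    """Return the recommended task dict, or None when nothing is actionable."""
    result = subprocess.run(
        ["./scripts/tasks", "next", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    payload = json.loads(result.stdout)
    if "message" in payload:
        # get_next_task() emits {"message": ...} when every task is
        # completed or blocked.
        return None
    return payload  # task dict: id, title, status, ...

if __name__ == "__main__":
    task = pick_next_task()
    if task:
        print(f"Next: {task['id']} - {task['title']} [{task['status']}]")
```

The same pattern should apply to `context`, `list`, and `validate`, which accept the same `--format json` flag.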