Multi-agent orchestration for AI coding assistants.
"The Panopticon had six sides, one for each of the Founders of Gallifrey..."
Panopticon is a unified orchestration layer for AI coding assistants. It works with:
| Tool | Support |
|---|---|
| Claude Code | Full support |
| Codex | Skills sync |
| Cursor | Skills sync |
| Gemini CLI | Skills sync |
| Google Antigravity | Skills sync |
- Multi-agent orchestration - Spawn and manage multiple AI agents in tmux sessions
- Cloister AI Lifecycle Manager - Automatic model routing, stuck detection, and specialist handoffs
- Universal skills - One SKILL.md format works across all supported tools
- Heartbeat Hooks - Real-time agent activity monitoring via Claude Code hooks
- Multi-project support - YAML-based project registry with label-based issue routing
- Health Monitoring - Deacon-style stuck detection with auto-recovery
- Context Engineering - Structured state management (STATE.md, WORKSPACE.md)
- Agent CVs - Work history tracking for capability-based routing
"AI works great on greenfield projects, but it's hopeless on our legacy code."
Sound familiar? Your developers aren't wrong. But they're not stuck, either.
AI coding assistants are trained on modern, well-documented open-source code. When they encounter your 15-year-old monolith with:
- Mixed naming conventions (some `snake_case`, some `camelCase`, some `SCREAMING_CASE`)
- Undocumented tribal knowledge ("we never touch the `processUser()` function directly")
- Schemas that don't match the ORM ("the `accounts` table is actually users")
- Three different async patterns in the same codebase
- Build systems that require arcane incantations
...they stumble. Repeatedly. Every session starts from zero.
Panopticon includes two AI self-monitoring skills that no other orchestration framework provides:
| Skill | What It Does | Business Impact |
|---|---|---|
| Knowledge Capture | Detects when AI makes mistakes or gets corrected, prompts to document the learning | AI gets smarter about YOUR codebase over time |
| Refactor Radar | Identifies systemic code issues causing repeated AI confusion, creates actionable proposals | Surfaces technical debt that's costing you AI productivity |
Session 1: AI queries users.created_at → Error (column is "createdAt")
           → Knowledge Capture prompts: "Document this convention?"
           → User: "Yes, create skill"
           → Creates project-specific skill documenting naming conventions

Session 2: AI knows to use camelCase for this project
           No more mistakes on column names

Session 5: Refactor Radar detects: "Same entity called 'user', 'account', 'member'
           across layers - this is causing repeated confusion"
           → Offers to create issue with refactoring proposal
           → Tech lead reviews and schedules cleanup sprint
| Week | Without Panopticon | With Panopticon |
|---|---|---|
| 1 | AI makes 20 mistakes/day on conventions | AI makes 20 mistakes, captures 8 learnings |
| 2 | AI makes 20 mistakes/day (no memory) | AI makes 12 mistakes, captures 5 more |
| 4 | AI makes 20 mistakes/day (still no memory) | AI makes 3 mistakes, codebase improving |
| 8 | Developers give up on AI for legacy code | AI is productive, tech debt proposals in backlog |
When one developer learns, everyone benefits.
Captured skills live in your project's .claude/skills/ directory - they're version-controlled alongside your code. When Sarah documents that "we use camelCase columns" after hitting that error, every developer on the team - and every AI session from that point forward - inherits that knowledge automatically.
myproject/
├── .claude/skills/
│   └── project-knowledge/      # ← Git-tracked, shared by entire team
│       └── SKILL.md            # "Database uses camelCase, not snake_case"
├── src/
└── ...
No more repeating the same corrections to AI across 10 different developers. No more tribal knowledge locked in one person's head. The team's collective understanding of your codebase becomes permanent, searchable, and automatically applied.
New hire onboarding? The AI already knows your conventions from day one.
What gets measured gets managed. Panopticon's Refactor Radar surfaces the specific patterns that are costing you AI productivity:
- "Here are the 5 naming inconsistencies causing 40% of AI errors"
- "These 3 missing FK constraints led to 12 incorrect deletions last month"
- "Mixed async patterns in payments module caused 8 rollbacks"
Each proposal includes:
- Evidence: Specific file paths and examples
- Impact: How this affects AI (and new developers)
- Migration path: Incremental fix that won't break production
ROI is simple:
- $200K/year senior developer spends 2 hours/day correcting AI on legacy code
- That's $50K/year in wasted productivity per developer
- Team of 10 = $500K/year in AI friction
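Spelled out as shell arithmetic, using the illustrative dollar figures above (2 wasted hours out of an 8-hour day):

```shell
salary=200000                                   # $200K/year senior developer
hours_wasted=2; workday=8
per_dev=$((salary * hours_wasted / workday))    # $50,000/year per developer
team=$((per_dev * 10))                          # $500,000/year for a team of 10
echo "$per_dev $team"
```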
Panopticon's learning system:
- Captures corrections once, applies them forever
- Identifies root causes (not just symptoms)
- Creates actionable improvement proposals
- Works across your entire AI toolchain (Claude, Codex, Cursor, Gemini)
This isn't "AI for greenfield only." This is AI that learns your business.
Different teams have different ownership boundaries. Individual developers have different preferences. Panopticon respects both:
# In ~/.claude/CLAUDE.md (developer's personal config)
## AI Suggestion Preferences
### refactor-radar
skip: database-migrations, infrastructure # DBA/Platform team handles these
welcome: naming, code-organization # Always happy for these
### knowledge-capture
skip: authentication              # Security team owns this

- "Skip database migrations" - Your DBA has a change management process
- "Skip infrastructure" - Platform team owns that
- "Welcome naming fixes" - Low risk, high value, always appreciated
The AI adapts to your org structure, not the other way around.
# Install Panopticon
npm install -g panopticon-cli
# Install prerequisites and setup (includes optional HTTPS/Traefik)
pan install
# Sync skills to all AI tools
pan sync
# Check system health
pan doctor

Panopticon supports local HTTPS via Traefik reverse proxy:
# Full install (includes Traefik + mkcert for HTTPS)
pan install
# Add to /etc/hosts (macOS/Linux)
echo "127.0.0.1 pan.localhost" | sudo tee -a /etc/hosts
# Start with HTTPS
pan up
# → Dashboard: https://pan.localhost
# → Traefik UI: https://traefik.pan.localhost:8080

Minimal install (skip Traefik, use ports):
pan install --minimal
pan up
# → Dashboard: http://localhost:3010

See docs/DNS_SETUP.md for detailed DNS configuration (especially for WSL2).
| Platform | Support |
|---|---|
| Linux | Full support |
| macOS | Full support |
| Windows | WSL2 required |
Windows users: Panopticon requires WSL2 (Windows Subsystem for Linux 2). Native Windows is not supported. Install WSL2
- Node.js 18+
- Git (for worktree-based workspaces)
- Docker (for Traefik and workspace containers)
- tmux (for agent sessions)
- ttyd - Web terminal for interactive planning sessions. Auto-installed by `pan install`.
- GitHub CLI (`gh`) - For GitHub integration (issues, PRs, merges)
- GitLab CLI (`glab`) - For GitLab integration (if using GitLab)
- mkcert - For HTTPS certificates (recommended)
- Linear API key - For issue tracking integration
- Beads CLI (bd) - For persistent task tracking across sessions
The Panopticon dashboard includes terminal streaming, which requires a native binary (node-pty). Prebuilt binaries are available for:
| Platform | Architecture | Support |
|---|---|---|
| macOS | Intel (x64) | ✅ Prebuilt |
| macOS | Apple Silicon (arm64) | ✅ Prebuilt |
| Linux | x64 (glibc) | ✅ Prebuilt |
| Linux | arm64 (glibc) | ✅ Prebuilt |
| Linux | musl (Alpine) | ✅ Prebuilt |
| Windows | x64 | ✅ Prebuilt |
If a prebuilt binary is not available for your platform, node-gyp will automatically compile from source during installation (requires Python and build tools).
Panopticon uses gh and glab CLIs instead of raw API tokens because:
- Better auth: OAuth tokens that auto-refresh (no expiring PATs)
- Simpler setup: `gh auth login` handles everything
- Agent-friendly: Agents can use them for PRs, merges, reviews
Create ~/.panopticon.env:
LINEAR_API_KEY=lin_api_xxxxx
GITHUB_TOKEN=ghp_xxxxx # Optional: for GitHub-tracked projects
RALLY_API_KEY=_xxxxx     # Optional: for Rally as secondary tracker

Panopticon supports multiple issue trackers:
| Tracker | Role | Configuration |
|---|---|---|
| Linear | Primary tracker | LINEAR_API_KEY in .panopticon.env |
| GitHub Issues | Secondary tracker | GITHUB_TOKEN or gh auth login |
| GitLab Issues | Secondary tracker | glab auth login |
| Rally | Secondary tracker | RALLY_API_KEY in .panopticon.env |
Secondary trackers sync issues to the dashboard alongside Linear issues, allowing unified project management.
Register your local project directories so Panopticon knows where to create workspaces:
# Register a project
pan project add /path/to/your/project --name myproject
# List registered projects
pan project list

If you have multiple Linear projects, configure which local directory each maps to. Create/edit ~/.panopticon/project-mappings.json:
[
{
"linearProjectId": "abc123",
"linearProjectName": "Mind Your Now",
"linearPrefix": "MIN",
"localPath": "/home/user/projects/myn"
},
{
"linearProjectId": "def456",
"linearProjectName": "Househunt",
"linearPrefix": "HH",
"localPath": "/home/user/projects/househunt"
}
]

The dashboard uses this mapping to determine where to create workspaces when you click "Create Workspace" or "Start Agent" for an issue.
Cloister is Panopticon's intelligent agent lifecycle manager. It monitors all running agents and automatically handles:
- Model Routing - Routes tasks to appropriate models based on complexity
- Stuck Detection - Identifies agents that have stopped making progress
- Automatic Handoffs - Escalates to specialists when needed
- Specialist Coordination - Manages test-agent, review-agent, and merge-agent
┌───────────────────────────────────────────────────────────┐
│                     CLOISTER SERVICE                      │
│                                                           │
│  ┌───────────┐    ┌───────────┐    ┌───────────┐          │
│  │ Heartbeat │───▶│  Trigger  │───▶│  Handoff  │          │
│  │  Monitor  │    │ Detector  │    │  Manager  │          │
│  └───────────┘    └───────────┘    └───────────┘          │
│        │                │                │                │
│        ▼                ▼                ▼                │
│  ┌───────────┐    ┌───────────┐    ┌───────────┐          │
│  │   Agent   │    │Complexity │    │Specialists│          │
│  │  Health   │    │ Analysis  │    │           │          │
│  └───────────┘    └───────────┘    └───────────┘          │
└───────────────────────────────────────────────────────────┘
# Via dashboard - click "Start" in the Cloister status bar
# Or via CLI:
pan cloister start
# Check status
pan cloister status
# Stop monitoring
pan cloister stop

Cloister manages specialized agents that handle specific phases of the development lifecycle:
| Specialist | Purpose | Trigger |
|---|---|---|
| test-agent | Runs test suite after implementation | implementation_complete signal |
| review-agent | Code review before merge | After tests pass (manual trigger) |
| merge-agent | Handles git merge and conflict resolution | "Approve & Merge" button |
The merge-agent is a specialist that handles ALL merges, not just conflicts. This ensures:
- It sees all code changes coming through the pipeline
- It builds context about the codebase over time
- When conflicts DO occur, it has better understanding for intelligent resolution
- Tests are always run before completing the merge
Workflow:
- Pull latest main - Ensures local main is up-to-date
- Analyze incoming changes - Reviews what the feature branch contains
- Perform merge - Merges feature branch into main
- Resolve conflicts - If conflicts exist, uses AI to resolve them intelligently
- Run tests - Verifies the merge didn't break anything
- Commit merge - Commits the merge with descriptive message
- Report results - Returns success/failure with details
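The git mechanics behind these steps can be sketched on a throwaway repo; the real merge-agent layers AI conflict resolution, test runs, and reporting on top of commands like these:

```shell
repo=$(mktemp -d)
git -C "$repo" init -q -b main
git -C "$repo" -c user.email=a@b.c -c user.name=bot commit -q --allow-empty -m "init"
git -C "$repo" checkout -q -b feature/min-123
echo fix > "$repo/fix.txt"
git -C "$repo" add fix.txt
git -C "$repo" -c user.email=a@b.c -c user.name=bot commit -q -m "MIN-123: fix"
git -C "$repo" checkout -q main                 # steps 1-2: main is current in this toy repo
git -C "$repo" merge -q --no-ff -m "Merge feature/min-123" feature/min-123   # steps 3-4
# steps 5-7 would run the test suite and report results here
git -C "$repo" log --oneline -1
```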
Triggering merge-agent:
# Via dashboard - click "Approve & Merge" on an issue card
# merge-agent is ALWAYS invoked, regardless of whether conflicts exist
# Via CLI
pan specialists wake merge-agent --issue MIN-123

The merge-agent uses a specialized prompt template that instructs it to:
- Never force-push
- Always run tests before completing
- Document conflict resolution decisions
- Provide detailed feedback on what was merged
The review pipeline is a sequential handoff between specialists:
Human clicks "Review"
         │
         ▼
┌───────────────────┐
│   review-agent    │  Reviews code, checks for issues
└─────────┬─────────┘
          │  If PASSED: queues test-agent
          │  If BLOCKED: sends feedback to work-agent
          ▼
┌───────────────────┐
│    test-agent     │  Runs test suite, analyzes failures
└─────────┬─────────┘
          │  If PASSED: marks ready for merge
          │  If FAILED: sends feedback to work-agent
          ▼
┌───────────────────┐
│  (Human clicks    │  Human approval required
│ "Approve & Merge")│  before merge
└─────────┬─────────┘
          ▼
┌───────────────────┐
│    merge-agent    │  Performs merge, resolves conflicts
└───────────────────┘
Key Points:
- Human-initiated start - A human must click "Review" to start the pipeline
- Automatic handoffs - review-agent β test-agent happens automatically
- Human approval for merge - Merge is NOT automatic; human clicks "Approve & Merge"
- Feedback loops - Failed reviews/tests send feedback back to the work-agent
Each specialist has a task queue (~/.panopticon/specialists/{name}/hook.json) managed via the FPP (Fixed Point Principle):
1. Task arrives (via API or handoff)
        │
        ▼
2. wakeSpecialistOrQueue() checks if specialist is busy
        │
        ├── If IDLE: Wake specialist immediately with task
        │
        └── If BUSY: Add task to queue (hook.json)
                │
                ▼
3. When specialist completes current task:
        │
        ├── Updates status via API (passed/failed/skipped)
        │
        └── Dashboard automatically wakes specialist for next queued task
Queue priority order: urgent > high > normal > low
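That ordering can be sketched in shell by mapping each priority to a numeric rank and sorting; the task list and field layout below are illustrative, not the real hook.json schema:

```shell
printf 'normal\ttask-2\nurgent\ttask-1\nlow\ttask-4\nhigh\ttask-3\n' |
awk -F'\t' 'BEGIN { rank["urgent"]=0; rank["high"]=1; rank["normal"]=2; rank["low"]=3 }
            { print rank[$1] "\t" $2 }' |
sort -n | cut -f2
# task-1, then task-3, then task-2, then task-4
```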
Completion triggers: When a specialist reports status (passed, failed, or skipped), the dashboard:
- Sets the specialist state to `idle`
- Checks the specialist's queue for pending work
- If work exists, immediately wakes the specialist with the next task
After a human initiates the first review, work-agents can request re-review up to 3 times automatically:
# Work-agent requests re-review after fixing issues
pan work request-review MIN-123 -m "Fixed: added tests for edge cases"

Circuit breaker behavior:
- First human click resets the counter to 0
- Each `pan work request-review` increments the counter
- After 3 automatic re-requests, returns HTTP 429
- Human must click "Review" in dashboard to continue
This prevents infinite loops where an agent repeatedly fails review.
API endpoint: POST /api/workspaces/:issueId/request-review
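The counter logic amounts to the following sketch; only the threshold comes from the behavior described above, the rest is an assumed illustration of the server side:

```shell
MAX_AUTO=3
count=0                         # the first human click resets this to 0
request_review() {
  count=$((count + 1))
  if [ "$count" -gt "$MAX_AUTO" ]; then echo 429; else echo 202; fi
}
request_review; request_review; request_review   # three automatic re-reviews: 202 each
request_review                                   # fourth attempt: 429, human must click "Review"
```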
When Cloister starts, it automatically initializes specialists that don't exist yet. This ensures the test-agent, review-agent, and merge-agent are ready to receive wake signals without manual setup.
Cloister detects situations that require intervention:
| Trigger | Condition | Action |
|---|---|---|
| stuck_escalation | No activity for 30+ minutes | Escalate to more capable model |
| complexity_upgrade | Task complexity exceeds model capability | Route to Opus |
| implementation_complete | Agent signals work is done | Wake test-agent |
| test_failure | Tests fail repeatedly | Escalate model or request help |
| planning_complete | Planning session finishes | Transition to implementation |
| merge_requested | User clicks "Approve & Merge" | Wake merge-agent |
Cloister supports two handoff methods, automatically selected based on agent type:
| Method | When Used | How It Works |
|---|---|---|
| Kill & Spawn | General agents (agent-min-123, etc.) | 1. Captures full context (STATE.md, beads, git state) 2. Kills tmux session 3. Spawns new agent with handoff prompt 4. New agent continues work with preserved context |
| Specialist Wake | Permanent specialists (merge-agent, test-agent) | 1. Captures handoff context 2. Sends wake message to existing session 3. Specialist resumes with context injection |
Kill & Spawn is used for temporary agents that work on specific issues. It creates a clean handoff by:
- Capturing the agent's current understanding (from STATE.md)
- Preserving beads task progress and open items
- Including relevant git diff and file context
- Building a comprehensive handoff prompt for the new model
Specialist Wake is used for permanent specialists that persist across multiple issues. It avoids the overhead of killing/respawning by injecting context into the existing session.
When a handoff occurs, Cloister captures:
{
"agentId": "agent-min-123",
"issueId": "MIN-123",
"currentModel": "sonnet",
"targetModel": "opus",
"reason": "stuck_escalation",
"handoffCount": 1,
"state": {
"phase": "implementation",
"complexity": "complex",
"lastActivity": "2024-01-22T10:30:00-08:00"
},
"beadsTasks": [...],
"gitContext": {
"branch": "feature/min-123",
"uncommittedChanges": ["src/auth.ts", "src/tests/auth.test.ts"],
"recentCommits": [...]
}
}

Handoff prompts are saved to ~/.panopticon/agents/{agent-id}/handoffs/ for debugging.
Agents send heartbeats via Claude Code hooks. Cloister tracks:
- Last tool use and timestamp
- Current task being worked on
- Git branch and workspace
- Process health
Heartbeat files are stored in ~/.panopticon/heartbeats/:
{
"timestamp": "2024-01-22T10:30:00-08:00",
"agent_id": "agent-min-123",
"tool_name": "Edit",
"last_action": "{\"file_path\":\"/path/to/file.ts\"...}",
"git_branch": "feature/min-123",
"workspace": "/home/user/projects/myapp/workspaces/feature-min-123"
}

The heartbeat hook is automatically synced to ~/.panopticon/bin/heartbeat-hook via pan sync. It's also installed automatically when you install or upgrade Panopticon via npm.
Manual installation:
pan sync    # Syncs all skills, agents, AND hooks

Hook configuration in ~/.claude/settings.json:
{
"hooks": {
"PostToolUse": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "~/.panopticon/bin/heartbeat-hook"
}
]
}
]
}
}

Hook resilience: The heartbeat hook is designed to fail silently if:
- The heartbeats directory doesn't exist
- Write permissions are missing
- The hook script has errors
This prevents hook failures from interrupting agent work.
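A minimal sketch of such a fail-silent hook: every error is swallowed so the agent is never interrupted. Field names follow the example heartbeat above; the shipped hook's internals may differ, and `demo-agent` is an illustrative agent ID:

```shell
write_heartbeat() {
  hb_dir="${HOME}/.panopticon/heartbeats"
  {
    # Any failure (missing dir, no write permission) is silenced by 2>/dev/null and || true
    mkdir -p "$hb_dir" &&
    printf '{"timestamp":"%s","agent_id":"%s","tool_name":"%s"}\n' \
      "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "${AGENT_ID:-unknown}" "$1" \
      > "$hb_dir/${AGENT_ID:-unknown}.json"
  } 2>/dev/null || true
}
AGENT_ID=demo-agent
write_heartbeat "Edit"
```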
Cloister configuration lives in ~/.panopticon/cloister/config.json:
{
"monitoring": {
"heartbeat_interval_ms": 5000,
"stuck_threshold_minutes": 30,
"health_check_interval_ms": 30000
},
"specialists": {
"test_agent": { "enabled": true, "auto_wake": true },
"review_agent": { "enabled": true, "auto_wake": false },
"merge_agent": { "enabled": true, "auto_wake": false }
},
"triggers": {
"stuck_escalation": { "enabled": true },
"complexity_upgrade": { "enabled": true }
}
}

Cloister automatically routes tasks to the appropriate model based on detected complexity, optimizing for cost while ensuring quality.
| Level | Model | Use Case |
|---|---|---|
| trivial | Haiku | Typos, comments, documentation updates |
| simple | Haiku | Small fixes, test additions, minor changes |
| medium | Sonnet | Features, components, integrations |
| complex | Sonnet/Opus | Refactors, migrations, redesigns |
| expert | Opus | Architecture, security, performance optimization |
Complexity is detected from multiple signals (in priority order):
- Explicit field - Task has a `complexity` field set (e.g., in beads)
- Labels/tags - Issue labels like `architecture`, `security`, `refactor`
- Keywords - Title/description contains keywords like "migration", "overhaul"
- File count - Number of files changed (>20 files = complex)
- Time estimate - If estimate exceeds thresholds
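A sketch of the keyword signal in shell, checking the most severe patterns first (keyword lists abbreviated; this is not Panopticon's actual detector):

```shell
classify() {
  title=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$title" in
    *architecture*|*security*)                    echo expert ;;
    *refactor*|*migration*|*redesign*|*overhaul*) echo complex ;;
    *feature*|*endpoint*|*component*|*service*)   echo medium ;;
    *typo*|*rename*|*comment*|*readme*)           echo trivial ;;
    *)                                            echo simple ;;   # default bucket
  esac
}
classify "Fix typo in README"              # trivial
classify "Database migration overhaul"     # complex
```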
Keyword patterns:
{
trivial: ['typo', 'rename', 'comment', 'documentation', 'readme'],
simple: ['add comment', 'update docs', 'fix typo', 'small fix'],
medium: ['feature', 'endpoint', 'component', 'service'],
complex: ['refactor', 'migration', 'redesign', 'overhaul'],
expert: ['architecture', 'security', 'performance optimization']
}

Edit ~/.panopticon/cloister/config.json:
{
"model_selection": {
"default_model": "sonnet",
"complexity_routing": {
"trivial": "haiku",
"simple": "haiku",
"medium": "sonnet",
"complex": "sonnet",
"expert": "opus"
}
}
}

Model routing helps optimize costs:
| Model | Relative Cost | Best For |
|---|---|---|
| Haiku | 1x (cheapest) | Simple tasks, bulk operations |
| Sonnet | 3x | Most development work |
| Opus | 15x | Complex architecture, critical fixes |
A typical agent run might:
- Start on Haiku for initial exploration
- Escalate to Sonnet for implementation
- Escalate to Opus only if stuck or complexity detected
Panopticon supports managing multiple projects with intelligent issue routing.
Projects are registered in ~/.panopticon/projects.yaml:
projects:
myn:
name: "Mind Your Now"
path: /home/user/projects/myn
linear_team: MIN
issue_routing:
- labels: [splash, landing-pages, seo]
path: /home/user/projects/myn/splash
- labels: [docs, marketing]
path: /home/user/projects/myn/docs
- default: true
path: /home/user/projects/myn
panopticon:
name: "Panopticon"
path: /home/user/projects/panopticon
linear_team: PAN

Issues are routed to different subdirectories based on their labels:
- Labeled issues - Matched against `issue_routing` rules in order
- Default route - Issues without matching labels use the `default: true` path
- Fallback - If no default, uses the project root path
Example: An issue with label "splash" in the MIN team would create its workspace at /home/user/projects/myn/splash/workspaces/feature-min-xxx/.
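The first-match routing above can be sketched as follows; paths come from the example config, while the matcher itself is internal to Panopticon:

```shell
route() {  # $1 = comma-separated labels on the issue
  case ",$1," in
    *,splash,*|*,landing-pages,*|*,seo,*) echo /home/user/projects/myn/splash ;;
    *,docs,*|*,marketing,*)               echo /home/user/projects/myn/docs ;;
    *)                                    echo /home/user/projects/myn ;;   # default: true
  esac
}
route "frontend,splash"   # /home/user/projects/myn/splash
route "bugfix"            # /home/user/projects/myn
```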
Note: For most polyrepo projects, use the built-in `workspace` configuration (see below) instead of custom scripts. Custom commands are only needed for highly specialized setups.
For projects that need logic beyond what the configuration supports, you can specify custom workspace scripts:
projects:
myn:
name: "Mind Your Now"
path: /home/user/projects/myn
linear_team: MIN
# Custom scripts handle complex workspace setup
workspace_command: /home/user/projects/myn/infra/new-feature
workspace_remove_command: /home/user/projects/myn/infra/remove-feature

When workspace_command is specified, Panopticon calls your script instead of creating a standard git worktree. The script receives the normalized issue ID (e.g., min-123) as an argument.
When workspace_remove_command is specified, Panopticon calls your script when deleting workspaces (e.g., aborting planning with "delete workspace" enabled). This is important for complex setups that need to:
- Stop Docker containers and remove volumes
- Clean up root-owned files created by containers
- Remove git worktrees from multiple repositories
- Release port assignments
- Remove DNS entries
What your custom script should handle:
- Creating git worktrees for multiple repositories (polyrepo structure)
- Setting up Docker Compose files and dev containers
- Configuring environment variables and `.env` files
- Setting up DNS entries for workspace-specific URLs (e.g., Traefik routing)
- Creating a `./dev` script for container management
- Copying agent configuration templates (CLAUDE.md, .mcp.json, etc.)
Example script flow:
#!/bin/bash
# new-feature script for a polyrepo project
ISSUE_ID=$1 # e.g., "min-123"
# Create worktrees for frontend and api repos
git -C /path/to/frontend worktree add ../workspaces/feature-$ISSUE_ID/fe feature/$ISSUE_ID
git -C /path/to/api worktree add ../workspaces/feature-$ISSUE_ID/api feature/$ISSUE_ID
# Generate docker-compose from templates
sed "s/{{FEATURE_FOLDER}}/feature-$ISSUE_ID/g" template.yml > workspace/docker-compose.yml
# Set up DNS and Traefik routing
# ... additional setup

The standard pan workspace create command will automatically detect and use your custom script.
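A matching remove-feature script might look like this sketch; the paths, repo names, and container teardown are illustrative and should be adapted to your own layout:

```shell
#!/bin/sh
ISSUE_ID=${1:-min-123}                                 # e.g. "min-123"
WS="${WORKSPACES_DIR:-/tmp/workspaces}/feature-$ISSUE_ID"
# Stop containers and drop volumes, if Docker and a compose file are present
command -v docker >/dev/null && docker compose -f "$WS/docker-compose.yml" down -v 2>/dev/null
# Remove the worktrees created by the new-feature script (errors ignored if absent)
git -C /path/to/frontend worktree remove --force "$WS/fe" 2>/dev/null
git -C /path/to/api worktree remove --force "$WS/api" 2>/dev/null
# Finally delete the workspace folder itself
rm -rf "$WS"
```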
When setting up Docker containers for workspaces, avoid these common pitfalls:
Maven projects:
- DO NOT set `MAVEN_CONFIG=/some/path` as an environment variable
- Maven interprets `MAVEN_CONFIG` as additional CLI arguments, not a directory path
- This causes "Unknown lifecycle phase" errors (e.g., "Unknown lifecycle phase /maven-cache")
- Instead, use `-Dmaven.repo.local=/path/to/cache` in the Maven command
# WRONG - causes Maven startup failure
environment:
- MAVEN_CONFIG=/maven-cache
# CORRECT - use command line argument
command: ./mvnw spring-boot:run -Dmaven.repo.local=/maven-cache/repository
volumes:
- ~/.m2:/maven-cache:cached

pnpm projects:
- Set `PNPM_HOME=/path` to configure the pnpm store location
- Mount a named volume for the store to share across containers
Panopticon is an orchestration layer - it manages workspaces, agents, and workflows, but your project repository provides the actual templates and configuration.
Projects can be as simple as just a git repo (for worktree-only workspaces) or as complex as a full polyrepo with Docker, Traefik, and database seeding. Here's what you need for each level:
Your project needs a .devcontainer/ or template directory with:
your-project/
├── infra/
│   └── .devcontainer-template/             # Template for workspace containers
│       ├── docker-compose.devcontainer.yml.template
│       ├── compose.infra.yml.template      # Optional: separate infra services
│       ├── Dockerfile
│       └── devcontainer.json.template
└── ...
Docker Compose templates should use placeholders that Panopticon will replace:
# docker-compose.devcontainer.yml.template
services:
api:
build: ./api
labels:
- "traefik.http.routers.{{FEATURE_FOLDER}}-api.rule=Host(`api-{{FEATURE_FOLDER}}.{{DOMAIN}}`)"
environment:
- DATABASE_URL=postgres://app:app@postgres:5432/mydb
frontend:
build: ./fe
labels:
- "traefik.http.routers.{{FEATURE_FOLDER}}.rule=Host(`{{FEATURE_FOLDER}}.{{DOMAIN}}`)"

If you want local HTTPS (recommended), provide a Traefik compose file:
your-project/
├── infra/
│   └── docker-compose.traefik.yml          # Traefik reverse proxy
└── ...
Example Traefik config:
# infra/docker-compose.traefik.yml
services:
traefik:
image: traefik:v2.10
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ~/.panopticon/traefik/certs:/certs:ro
command:
- --providers.docker=true
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443

For projects with databases:
your-project/
├── infra/
│   └── seed/
│       └── seed.sql                        # Sanitized production data
└── ...
Your compose template should mount this:
services:
postgres:
image: postgres:16
volumes:
- /path/to/project/infra/seed:/docker-entrypoint-initdb.d:ro

For customizing how agents work in your project:
your-project/
├── infra/
│   └── .agent-template/
│       ├── CLAUDE.md.template              # Project-specific AI instructions
│       └── .mcp.json.template              # MCP server configuration
└── .claude/
    └── skills/                             # Project-specific skills
        └── my-project-standards/
            └── SKILL.md
Point Panopticon to your templates:
# ~/.panopticon/projects.yaml
projects:
myproject:
name: "My Project"
path: /home/user/projects/myproject
linear_team: PRJ
workspace:
type: polyrepo # or monorepo
workspaces_dir: workspaces
docker:
traefik: infra/docker-compose.traefik.yml
compose_template: infra/.devcontainer-template
database:
seed_file: /home/user/projects/myproject/infra/seed/seed.sql
container_name: "{{PROJECT}}-postgres-1"
agent:
template_dir: infra/.agent-template
templates:
- source: CLAUDE.md.template
target: CLAUDE.md

| Component | Required? | Location | Purpose |
|---|---|---|---|
| Docker Compose template | Yes (for Docker workspaces) | `infra/.devcontainer-template/` | Container configuration |
| Traefik config | Only for HTTPS | `infra/docker-compose.traefik.yml` | Reverse proxy |
| Seed file | Only if database needed | `infra/seed/seed.sql` | Pre-populate database |
| Agent template | Recommended | `infra/.agent-template/` | AI instructions |
| Project skills | Optional | `.claude/skills/` | Project-specific workflows |
For a simple monorepo with no Docker:
# ~/.panopticon/projects.yaml
projects:
simple-app:
name: "Simple App"
path: /home/user/projects/simple-app
linear_team: APP
# No workspace config needed - uses git worktrees

Panopticon creates workspaces as git worktrees. Docker, HTTPS, and seeding are opt-in.
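Under the hood, a default workspace is essentially a git worktree on a fresh feature branch. This throwaway-repo sketch shows the equivalent commands (`feature/app-7` is an illustrative issue branch):

```shell
repo=$(mktemp -d)
git -C "$repo" init -q -b main
git -C "$repo" -c user.email=a@b.c -c user.name=bot commit -q --allow-empty -m "init"
# Roughly what a default workspace is: a worktree on a new feature branch
git -C "$repo" worktree add -q -b feature/app-7 "$repo/workspaces/feature-app-7" main
ls "$repo/workspaces"
# feature-app-7
```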
For projects with multiple git repositories, configure workspace settings directly in projects.yaml:
projects:
myapp:
name: "My App"
path: /home/user/projects/myapp
linear_team: APP
workspace:
type: polyrepo
workspaces_dir: workspaces
# Git repositories to include in each workspace
repos:
- name: fe
path: frontend
branch_prefix: "feature/"
- name: api
path: backend
branch_prefix: "feature/"
# DNS entries for local development
dns:
domain: myapp.test
entries:
- "{{FEATURE_FOLDER}}.{{DOMAIN}}"
- "api-{{FEATURE_FOLDER}}.{{DOMAIN}}"
sync_method: wsl2hosts # or: hosts_file, dnsmasq
# Port assignments for services
ports:
redis:
range: [6380, 6499]
# Service definitions - how to start each service
services:
- name: api
path: api
start_command: ./mvnw spring-boot:run
docker_command: ./mvnw spring-boot:run
health_url: "https://api-{{FEATURE_FOLDER}}.{{DOMAIN}}/actuator/health"
port: 8080
- name: frontend
path: fe
start_command: pnpm start
docker_command: pnpm start
health_url: "https://{{FEATURE_FOLDER}}.{{DOMAIN}}"
port: 3000
# Docker configuration
docker:
traefik: infra/docker-compose.traefik.yml
compose_template: infra/.devcontainer-template
# Agent configuration templates
agent:
template_dir: infra/.agent-template
templates:
- source: CLAUDE.md.template
target: CLAUDE.md
symlinks:
- .claude/commands
- .claude/skills
# Environment template
env:
template: |
COMPOSE_PROJECT_NAME={{COMPOSE_PROJECT}}
FRONTEND_URL=https://{{FEATURE_FOLDER}}.{{DOMAIN}}

Template Placeholders:
| Placeholder | Example | Description |
|---|---|---|
| `{{FEATURE_NAME}}` | `min-123` | Normalized issue ID |
| `{{FEATURE_FOLDER}}` | `feature-min-123` | Workspace folder name |
| `{{BRANCH_NAME}}` | `feature/min-123` | Git branch name |
| `{{COMPOSE_PROJECT}}` | `myapp-feature-min-123` | Docker Compose project |
| `{{DOMAIN}}` | `myapp.test` | DNS domain |
Service Templates:
Panopticon provides built-in templates for common frameworks. Use these to avoid boilerplate:
| Template | Start Command | Port |
|---|---|---|
| `react` | `npm start` | 3000 |
| `react-vite` | `npm run dev` | 5173 |
| `react-pnpm` | `pnpm start` | 3000 |
| `nextjs` | `npm run dev` | 3000 |
| `spring-boot-maven` | `./mvnw spring-boot:run` | 8080 |
| `spring-boot-gradle` | `./gradlew bootRun` | 8080 |
| `express` | `npm start` | 3000 |
| `fastapi` | `uvicorn main:app --reload` | 8000 |
| `django` | `python manage.py runserver` | 8000 |
Use a template by referencing it in your service config:
services:
- name: api
template: spring-boot-maven
path: api
health_url: "https://api-{{FEATURE_FOLDER}}.myapp.test/health"

See /pan-workspace-config skill for complete documentation.
# List registered projects
pan project list
# Add a project
pan project add /path/to/project --name myproject --linear-team PRJ
# Remove a project
pan project remove myproject

Many projects need a pre-populated database for development and testing. Panopticon provides database seeding commands that work with your existing infrastructure.
Problem: Development databases often need:
- Schema with 100+ migrations already applied
- Seed data for testing (users, reference data)
- External QA database connections
- Database snapshots from staging/production (sanitized)
Solution: Configure database seeding in projects.yaml:
projects:
myapp:
workspace:
database:
# Path to seed file (loaded on first container start)
seed_file: /path/to/sanitized-seed.sql
# Command to create new snapshots from external source
snapshot_command: "kubectl exec -n prod pod/postgres -- pg_dump -U app mydb"
# Or connect to external database directly
external_db:
host: qa-db.example.com
database: myapp_qa
user: readonly
password_env: QA_DB_PASSWORD
# Container naming pattern
container_name: "{{PROJECT}}-postgres-1"
# Migration tool (for status checks)
migrations:
type: flyway # flyway | liquibase | prisma | typeorm | custom
path: src/main/resources/db/migration

Commands:
# Create a snapshot from production/staging
pan db snapshot --project myapp --output /path/to/seed.sql
# Seed a workspace database
pan db seed MIN-123
# Check database status
pan db status MIN-123
# Clean kubectl noise from pg_dump files
pan db clean /path/to/dump.sql
# View database configuration
pan db config myapp

Workflow for capturing production data:
1. Create snapshot from production (via kubectl or direct connection):

   pan db snapshot --project myapp --sanitize

2. Verify the seed file:

   pan db clean /path/to/seed.sql --dry-run

3. Update projects.yaml with seed file path

4. Workspaces automatically seed on first postgres container start
Container integration:
Your Docker Compose template should mount the seed directory:
# compose.infra.yml
services:
postgres:
image: postgres:16
volumes:
- pgdata:/var/lib/postgresql/data
# Seed database on first startup
- /path/to/project/infra/seed:/docker-entrypoint-initdb.d:ro

Troubleshooting:
| Issue | Solution |
|---|---|
| "relation does not exist" | Seed file missing or incomplete - run pan db snapshot |
| Slow database startup | Large seed file - consider pruning old data |
| kubectl garbage in dump | Run pan db clean <file> to remove stderr noise |
| Migrations fail after seed | Check Flyway version matches seed file's schema_history |
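The "kubectl garbage" issue above comes down to non-SQL lines (kubectl warnings, stderr noise) interleaved with the dump. Here is a minimal sketch of that kind of filter; the noise patterns are illustrative assumptions, not Panopticon's actual list:

```shell
# clean_dump: keep SQL, drop kubectl/stderr noise (illustrative sketch).
clean_dump() {
  grep -v -E '^(Defaulted container |Unable to use a TTY|error: |command terminated)' "$1"
}

# Example: a dump polluted by kubectl output, then cleaned.
cat > /tmp/pan-noisy-dump.sql <<'EOF'
Defaulted container "postgres" out of: postgres, pgbouncer
CREATE TABLE users (id integer PRIMARY KEY);
Unable to use a TTY - input is not a terminal
INSERT INTO users VALUES (1);
EOF
clean_dump /tmp/pan-noisy-dump.sql
```

The real pan db clean presumably handles more patterns; the point is that a seed file must contain only SQL before postgres can load it.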
Panopticon includes a notification system that alerts you when agents complete their work.
Desktop Notifications:
The notify-complete script sends desktop notifications across platforms:
| Platform | Notification Method |
|---|---|
| WSL2/Windows | PowerShell toast notifications |
| macOS | osascript display notification |
| Linux | notify-send |
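A sketch of how such a script might pick its backend from the table above; the detection heuristics (uname, /proc/version) are assumptions, not the actual implementation:

```shell
# pick_notifier: choose a desktop notification backend per platform (sketch).
pick_notifier() {
  case "$(uname -s)" in
    Darwin) echo osascript ;;          # macOS: display notification
    Linux)
      if grep -qi microsoft /proc/version 2>/dev/null; then
        echo powershell                # WSL2: Windows toast via PowerShell
      else
        echo notify-send               # native Linux desktop
      fi ;;
    *) echo none ;;
  esac
}
pick_notifier
```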
Usage:
# Send a completion notification
~/.panopticon/bin/notify-complete MIN-123 "Fixed login button" "https://gitlab.com/mr/456"

Completion log: All notifications are logged to ~/.panopticon/agent-completed.log with timestamps.
Integration with agent workflows:
Agents can call notify-complete at the end of their work:
# In agent completion script or /work-complete skill
notify-complete "$ISSUE_ID" "$SUMMARY" "$MR_URL"

pan init # Initialize ~/.panopticon/
pan sync # Sync skills, commands, agents, AND hooks to all AI tools
pan sync --dry-run # Preview what will be synced
pan doctor # Check system health
pan skills # List available skills
pan status # Show running agents
pan up # Start dashboard (Docker or minimal)
pan down # Stop dashboard and services
pan update # Update Panopticon to latest version
pan backup # Create backup of ~/.panopticon/
pan restore # Restore from backup
pan setup hooks # Install Claude Code hooks (heartbeat, etc.)

pan sync synchronizes Panopticon assets to all supported AI tools:
| Asset Type | Source | Destinations |
|---|---|---|
| Skills | ~/.panopticon/skills/ | ~/.claude/skills/, ~/.codex/skills/, ~/.gemini/skills/ |
| Agents | ~/.panopticon/agents/*.md | ~/.claude/agents/ |
| Commands | ~/.panopticon/commands/ | ~/.claude/commands/ |
| Hooks | src/hooks/ (in package) | ~/.panopticon/bin/ |
Automatic sync: Hooks are also synced automatically when you install or upgrade Panopticon via npm (postinstall hook).
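Conceptually, the skills portion of pan sync is a symlink fan-out from the canonical directory into each tool's skills directory. A rough sketch; the function name and exact mechanics are assumptions:

```shell
# sync_skills <src> <dest...>: symlink each skill dir into each destination.
sync_skills() {
  src="$1"; shift
  for dest in "$@"; do
    mkdir -p "$dest"
    for skill in "$src"/*/; do
      [ -d "$skill" ] || continue
      ln -sfn "$skill" "$dest/$(basename "$skill")"
    done
  done
}

# Fan out from the canonical source to every supported tool:
sync_skills "$HOME/.panopticon/skills" \
  "$HOME/.claude/skills" "$HOME/.codex/skills" "$HOME/.gemini/skills"
```

Symlinks (rather than copies) mean an edit to a skill in ~/.panopticon/skills/ is immediately visible to every tool.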
# Spawn an agent for a Linear issue
pan work issue MIN-123
# List all running agents
pan work status
# Send a message to an agent (handles Enter key automatically)
pan work tell min-123 "Please also add tests"
# Kill an agent
pan work kill min-123
# Show work completed and awaiting review
pan work pending
# Approve agent work, merge MR, update Linear
pan work approve min-123
# List issues from configured trackers
pan work list
# Triage issues from secondary tracker
pan work triage
# Reopen a closed issue and re-run planning
pan work reopen min-123
# Request re-review after fixing feedback (for agents, max 3 auto-requeues)
pan work request-review min-123 -m "Fixed: added tests and removed duplication"
# Recover crashed agents
pan work recover min-123
pan work recover --all

# Run a health check
pan work health check
# Show health status of all agents
pan work health status
# Start the health daemon (background monitoring)
pan work health daemon --interval 30

# Run all tests for a workspace
pan test run min-123
# Run tests for main branch
pan test run main
# Run specific test suites only
pan test run min-123 --tests backend,frontend_unit
# List configured tests for a project
pan test list
# List tests for specific project
pan test list myproject

Test Configuration:
Configure test suites in projects.yaml:
projects:
myapp:
tests:
backend:
type: maven # maven, vitest, jest, playwright, pytest, cargo
path: api # Path relative to workspace
command: ./mvnw test # Command to run
frontend_unit:
type: vitest
path: fe
command: pnpm test:unit --run
container: true # Run inside Docker container
container_name: "{{COMPOSE_PROJECT}}-fe-1"
frontend_e2e:
type: playwright
path: fe
command: pnpm test:e2e
env:
BASE_URL: "https://{{FEATURE_FOLDER}}.myapp.test"

Reports:
Test results are saved to {project}/reports/test-run-{target}-{timestamp}.md with detailed logs for each suite.
Notifications:
Desktop notifications are sent when tests complete (disable with --no-notify).
See /pan-test-config skill for complete documentation.
# Check for pending work on hook
pan work hook check
# Push work to an agent's hook
pan work hook push agent-min-123 "Continue with tests"
# Send mail to an agent
pan work hook mail agent-min-123 "Review feedback received"

Workspaces are git worktrees - isolated working directories for each issue/feature. Each workspace:
- Has its own feature branch (e.g., feature/min-123-add-login)
- Shares git history with the main repo (no separate clone)
- Can run independently (separate node_modules, builds, etc.)
- Is located at {project}/workspaces/{issue-id}/
This allows multiple agents to work on different features simultaneously without conflicts.
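Under the hood, workspace creation is roughly a git worktree add on a new feature branch. This self-contained sketch mirrors the naming conventions above (the exact flags pan uses are an assumption), demonstrated in a throwaway repo:

```shell
# Rough equivalent of `pan workspace create MIN-123` (illustrative).
cd "$(mktemp -d)"
git init -q -b main
git -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "init"

# One worktree per issue, each on its own feature branch:
git worktree add -b feature/min-123-add-login workspaces/feature-min-123 main
git worktree list
```

Because a worktree shares the object database with the main checkout, creating one is nearly instant and branches stay visible from every workspace.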
Planning artifacts are stored inside the workspace, making them part of the feature branch:
workspaces/feature-min-123/
├── .planning/
│   ├── output.jsonl             # Full conversation history (tool uses + results)
│   ├── PLANNING_PROMPT.md       # Initial planning prompt
│   ├── CONTINUATION_PROMPT.md   # Context for continued sessions
│   └── output-*.jsonl           # Backup of previous rounds
└── ... (code)
When the planning session completes, the AI generates:
- STATE.md - Current understanding, decisions made, and implementation plan
- Beads tasks - Trackable sub-tasks with dependencies for the implementation
- Feature PRD - Copied to docs/prds/active/{issue}-plan.md for documentation
This enables:
- Collaborative async planning - Push your branch, someone else pulls and continues the planning session with full context
- Context recovery - If Claude's context compacts, the full conversation is preserved in the branch
- Audit trail - See how planning decisions were made, what files were explored, what questions were asked
- Branch portability - The planning state travels with the feature branch
Dashboard workflow (recommended):
The planning dialog has Pull and Push buttons that handle git operations automatically:
| Button | What it does |
|---|---|
| Pull | Fetches from origin, creates workspace from remote branch if needed, pulls latest changes |
| Push | Commits .planning/ artifacts and pushes to origin |
- Person A starts planning in dashboard, clicks Push when interrupted
- Person B opens same issue in dashboard, clicks Pull → gets Person A's full context
- Person B continues the planning session and clicks Push when done
CLI workflow:
# Person A starts planning
pan work plan MIN-123
# ... answers discovery questions, gets interrupted ...
# Push the branch (includes planning context)
cd workspaces/feature-min-123
git add .planning && git commit -m "WIP: planning session"
git push origin feature/min-123
# Person B continues
git pull origin feature/min-123
pan work plan MIN-123 --continue
# Claude has full context from Person A's session

# Create a workspace (git worktree) without starting an agent
pan workspace create MIN-123
# Create workspace and start Docker containers
pan workspace create MIN-123 --docker
# List all workspaces
pan workspace list
# Destroy a workspace
pan workspace destroy MIN-123
# Force destroy (even with uncommitted changes)
pan workspace destroy MIN-123 --force

The --docker flag automatically starts containers after workspace creation:
pan workspace create MIN-123 --docker

What it does:
- Creates the workspace (git worktree or custom command)
- Looks for docker-compose files in:
  - {workspace}/docker-compose.yml
  - {workspace}/docker-compose.yaml
  - {workspace}/.devcontainer/docker-compose.yml
  - {workspace}/.devcontainer/docker-compose.devcontainer.yml
  - {workspace}/.devcontainer/compose.yml
- Runs docker compose -p "{project}-feature-{issue}" -f {file} up -d --build
Docker Project Naming:
Each workspace gets a unique Docker Compose project name to avoid container conflicts:
- Format: {project-name}-feature-{issue-id} (e.g., mind-your-now-feature-min-123)
- The project name comes from name in your ~/.panopticon/projects.yaml
- Container names follow {project}-{service}-1 (e.g., mind-your-now-feature-min-123-api-1)
This allows multiple workspaces to run simultaneously without port or name conflicts.
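The naming scheme can be computed directly; a small sketch (the "api" service name is an illustrative assumption):

```shell
# Derive the compose project and container names from the registry
# name and issue id, following the format described above.
project="mind-your-now"   # `name` from ~/.panopticon/projects.yaml
issue="min-123"
compose_project="${project}-feature-${issue}"
api_container="${compose_project}-api-1"
echo "$compose_project"   # mind-your-now-feature-min-123
echo "$api_container"     # mind-your-now-feature-min-123-api-1
```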
Why this matters:
- Containers start warming up while you review the issue
- Environment is ready when the planning agent starts asking questions
- You can test assumptions during planning without waiting for builds
Dashboard integration:
The planning dialog includes a "Start Docker containers" checkbox:
- Default: Enabled (containers start automatically)
- Preference saved: Your choice is remembered in browser localStorage
- Key: panopticon.planning.startDocker
To change the default via browser console:
// Disable Docker by default
localStorage.setItem('panopticon.planning.startDocker', 'false');
// Enable Docker by default (this is the out-of-box default)
localStorage.setItem('panopticon.planning.startDocker', 'true');

Example workflow:
# From dashboard: click "Start Planning" (Docker enabled by default)
# Or from CLI:
pan workspace create MIN-123 --docker
# While containers build in background:
# - Review the Linear issue
# - Check related PRs
# - Think about approach
# By the time you're ready to engage with the planning agent,
# the dev environment is warm and ready for testing

# Register a project
pan project add /path/to/project --name myproject
# List managed projects
pan project list
# Show project details
pan project show myproject
# Initialize project config (creates .panopticon.json)
pan project init
# Remove a project
pan project remove myproject# Show agent state
pan work context state agent-min-123
# Set a checkpoint
pan work context checkpoint "Completed auth module"
# Search history
pan work context history "test"# View an agent's CV (work history)
pan work cv agent-min-123
# Show agent rankings by success rate
pan work cv --rankings# Recover a specific crashed agent
pan work recover min-123
# Auto-recover all crashed agents
pan work recover --all# Start Cloister monitoring service
pan cloister start
# Stop Cloister
pan cloister stop
# Emergency stop all agents (force kill)
pan cloister emergency-stop
# Check Cloister status
pan cloister status
# List all specialists
pan specialists list
# Wake a specialist (resumes previous session if exists)
pan specialists wake merge-agent
# Wake and send a task
pan specialists wake merge-agent --task "Review PR #123 for security issues"
# View specialist queue
pan specialists queue merge-agent
# Reset a single specialist (wipes context)
pan specialists reset merge-agent
# Reset ALL specialists (fresh start)
pan specialists reset --all

Start the monitoring dashboard:
pan up

Recommended (containerized with HTTPS):
- Dashboard: https://pan.localhost
- Traefik UI: https://traefik.pan.localhost:8082
This runs everything in Docker containers, avoiding port conflicts with your other projects.
Minimal install (no Docker):
pan up --minimal

- Dashboard: http://localhost:3010
Stop with pan down.
| Tab | Purpose |
|---|---|
| Board | Kanban view of Linear issues with drag-and-drop status changes |
| Agents | Running agent sessions with terminal output |
| Activity | Real-time pan command output log |
| Metrics | Runtime comparison and cost tracking |
| Skills | Available skills and their descriptions |
| Health | System health checks and diagnostics |
The dashboard automatically filters issues to reduce visual clutter:
- Linear issues - Shows current cycle issues only
- Done column - Shows items completed in the last 24 hours
- Canceled column - Shows items canceled in the last 24 hours
This filtering applies to both Linear and GitHub issues. Older completed items are excluded from the dashboard but remain in your issue tracker.
Issue cards on the Kanban board display:
- Cost badges - Color-coded by amount ($0-5 green, $5-20 yellow, $20+ red)
- Container status - Shows if workspace has Docker containers (running/stopped)
- Agent status - Indicates if an agent is currently working on the issue
- Workspace status - Shows if workspace exists, is corrupted, or needs creation
Click an issue card to open the workspace detail panel:
| Button | Action |
|---|---|
| Create Workspace | Creates git worktree for the issue |
| Containerize | Adds Docker containers to an existing workspace |
| Start Containers | Starts stopped Docker containers |
| Start Planning | Opens interactive planning session with AI |
| Start Agent | Spawns autonomous agent in tmux |
| Approve & Merge | Triggers merge-agent to handle PR merge |
The planning dialog provides a real-time terminal for collaborative planning:
- Discovery questions - AI asks clarifying questions before implementation
- Codebase exploration - AI reads files and understands patterns
- Pull/Push buttons - Git operations to share planning context with teammates
- AskUserQuestion rendering - Questions from the AI appear as interactive prompts
The Metrics tab provides insights into AI agent performance and costs:
- Per-issue cost badges - See costs directly on Kanban cards (color-coded by amount)
- Issue cost breakdown - Click an issue to see detailed costs by model and session
- Runtime comparison - Compare success rates, duration, and costs across runtimes (Claude, Codex, etc.)
- Capability analysis - See how different task types (feature, bugfix, refactor) perform
Cost data is stored in ~/.panopticon/:
- session-map.json - Links Claude Code sessions to issues
- runtime-metrics.json - Aggregated runtime performance data
- costs/ - Raw cost logs
API Endpoints:
| Endpoint | Description |
|---|---|
| GET /api/costs/summary | Overall cost summary (today/week/month) |
| GET /api/costs/by-issue | Costs grouped by issue |
| GET /api/issues/:id/costs | Cost details for a specific issue |
| GET /api/metrics/runtimes | Runtime comparison metrics |
| GET /api/metrics/tasks | Recent task history |
Panopticon ships with 25+ skills organized into categories:
| Skill | Description |
|---|---|
| feature-work | Standard feature development workflow |
| bug-fix | Systematic bug investigation and fix |
| refactor | Safe refactoring with test coverage |
| code-review | Comprehensive code review checklist |
| code-review-security | OWASP Top 10 security analysis |
| code-review-performance | Algorithm and resource optimization |
| release | Step-by-step release process |
| dependency-update | Safe dependency updates |
| incident-response | Production incident handling |
| onboard-codebase | Understanding new codebases |
| work-complete | Checklist for completing agent work |
| Skill | Description |
|---|---|
| knowledge-capture | Captures learnings when AI gets confused or corrected |
| refactor-radar | Detects systemic issues causing AI confusion |
| session-health | Detect and clean up stuck sessions |
| Skill | Description |
|---|---|
| pan-help | Show all Panopticon commands |
| pan-up | Start dashboard and services |
| pan-down | Stop dashboard and services |
| pan-status | Show running agents |
| pan-issue | Spawn agent for an issue |
| pan-plan | Create execution plan for issue |
| pan-tell | Send message to running agent |
| pan-kill | Kill a running agent |
| pan-approve | Approve agent work and merge |
| pan-health | Check system health |
| pan-sync | Sync skills to AI tools |
| pan-install | Install prerequisites |
| pan-setup | Initial setup wizard |
| pan-quickstart | Quick start guide |
| pan-projects | Manage registered projects |
| pan-tracker | Issue tracker operations |
| pan-logs | View agent logs |
| pan-rescue | Recover crashed agents |
| pan-diagnose | Diagnose agent issues |
| pan-docker | Docker operations |
| pan-network | Network diagnostics |
| pan-config | Configuration management |
| pan-restart | Safely restart Panopticon dashboard and services |
| pan-code-review | Orchestrate parallel code review (3 reviewers + synthesis) |
| pan-convoy-synthesis | Synthesize convoy coordination |
| pan-subagent-creator | Create specialized subagents |
| pan-skill-creator | Create new skills (guided) |
| pan-workspace-config | Configure polyrepo workspaces, DNS, ports |
| pan-test-config | Configure project test suites |
| Skill | Description |
|---|---|
| beads | Git-backed issue tracking with dependencies |
| skill-creator | Guide for creating new skills |
| web-design-guidelines | UI/UX review checklist |
Panopticon includes specialized subagent templates for common development tasks. Subagents are invoked via the Task tool or convoy orchestration for parallel execution.
| Subagent | Model | Focus | Output |
|---|---|---|---|
| code-review-correctness | haiku | Logic errors, edge cases, type safety | .claude/reviews/<timestamp>-correctness.md |
| code-review-security | sonnet | OWASP Top 10, vulnerabilities | .claude/reviews/<timestamp>-security.md |
| code-review-performance | haiku | Algorithms, N+1 queries, memory | .claude/reviews/<timestamp>-performance.md |
| code-review-synthesis | sonnet | Combines all findings into unified report | .claude/reviews/<timestamp>-synthesis.md |
Usage Example:
/pan-code-review --files "src/auth/*.ts"

This spawns all three reviewers in parallel, then synthesizes their findings into a prioritized report.
| Subagent | Model | Focus | Permission Mode |
|---|---|---|---|
| planning-agent | sonnet | Codebase research, execution planning | plan (read-only) |
| codebase-explorer | haiku | Fast architecture discovery, pattern finding | plan (read-only) |
| triage-agent | haiku | Issue categorization, complexity estimation | default |
| health-monitor | haiku | Agent stuck detection, log analysis | default |
Usage Examples:
# Explore codebase architecture
Task(subagent_type='codebase-explorer', prompt='Map out the authentication system')
# Triage an issue
Task(subagent_type='triage-agent', prompt='Categorize and estimate ISSUE-123')
# Check agent health
Task(subagent_type='health-monitor', prompt='Check status of all running agents')

The /pan-code-review skill orchestrates a comprehensive parallel review:
1. Determine scope (git diff, files, or branch)
2. Spawn 3 parallel reviewers:
   ├── correctness (logic, types)
   ├── security (vulnerabilities)
   └── performance (bottlenecks)
3. Each writes findings to .claude/reviews/
4. Spawn synthesis agent
5. Synthesis combines all findings
6. Present unified, prioritized report
Benefits:
- 3x faster than sequential reviews
- Comprehensive coverage across all dimensions
- Prioritized findings (blocker > critical > high > medium > low)
- Actionable recommendations with code examples
Review Output:
# Code Review - Complete Analysis
## Executive Summary
**Overall Assessment:** Needs Major Revisions
**Key Findings:**
- 1 blocker (SQL injection)
- 4 critical issues
- 6 high-priority items
## Blocker Issues
### 1. [Security] SQL Injection in login endpoint
...
## Critical Issues
...

Use the /pan-subagent-creator skill to create project-specific subagents:
/pan-subagent-creator

Subagent templates live in ~/.panopticon/agents/ and sync to ~/.claude/agents/.
Panopticon reserves all skill names listed above. Do not use these names for project-specific skills to avoid conflicts.
Recommendation: Use a project-specific prefix for your skills (e.g., myn-standards, acme-deployment) to avoid namespace collisions.
Projects can have their own skills alongside Panopticon's:
~/.claude/skills/
├── pan-help/            # Symlink → ~/.panopticon/skills/pan-help/
├── feature-work/        # Symlink → ~/.panopticon/skills/feature-work/
└── ... (other pan skills)

{project}/.claude/skills/
├── myn-standards/       # Project-specific (git-tracked)
└── myn-api-patterns/    # Project-specific (git-tracked)
Project-specific skills in {project}/.claude/skills/ are not managed by Panopticon. They live in your project's git repo and take precedence over global skills with the same name.
Skills are synced to all supported AI tools via symlinks:
~/.panopticon/skills/   # Canonical source
        ↓ pan sync
~/.claude/skills/       # Claude Code + Cursor
~/.codex/skills/        # Codex
~/.gemini/skills/       # Gemini CLI

Panopticon enforces a standard approach to Product Requirements Documents (PRDs) across all managed projects.
Every project has a canonical PRD that defines the core product:
{project}/
├── docs/
│   └── PRD.md                   # The canonical PRD (core product definition)
└── workspaces/
    └── feature-{issue}/
        └── docs/
            └── {ISSUE}-plan.md  # Feature PRD (lives in feature branch)
| PRD Type | Location | Purpose |
|---|---|---|
| Canonical PRD | docs/PRD.md | Core product definition, always on main |
| Feature PRD | workspaces/feature-{issue}/docs/{ISSUE}-plan.md | Feature spec, lives in feature branch, merged with PR |
When you start planning an issue, Panopticon creates:
- A git worktree (workspace) for the feature branch
- A planning session that generates a feature PRD
The feature PRD lives in the workspace (feature branch) because:
- It gets merged with the PR (documentation travels with code)
- If you abort planning and delete the workspace, you don't want orphaned PRDs
- Clean separation - each feature is self-contained
When registering a new project with Panopticon (pan project add), the system will:
- Check for existing PRD - Look for docs/PRD.md, PRD.md, README.md, or similar
- If found: Use it to create/update the canonical PRD format, prompting for any missing crucial information
- If not found: Generate one by:
- Analyzing the codebase structure
- Identifying key technologies and patterns
- Asking discovery questions about the product
This ensures every Panopticon-managed project has a well-defined canonical PRD that agents can reference.
| Document | Naming | Example |
|---|---|---|
| Canonical PRD | PRD.md | docs/PRD.md |
| Feature PRD | {ISSUE}-plan.md | MIN-123-plan.md, PAN-4-plan.md |
| Planning artifacts | In .planning/{issue}/ | .planning/min-123/STATE.md |
~/.panopticon/
  config.toml             # Main configuration
  projects.yaml           # Multi-project registry with issue routing
  project-mappings.json   # Linear project → local path mappings (legacy)
  session-map.json        # Claude sessions → issue linking
  runtime-metrics.json    # Runtime performance metrics
  skills/                 # Shared skills (SKILL.md format)
  commands/               # Slash commands
  agents/                 # Subagent templates (.md files)
  bin/                    # Hook scripts (synced via pan sync)
    heartbeat-hook        # Real-time activity monitoring hook
  agents/                 # Per-agent runtime state
    agent-min-123/
      state.json          # Agent state (model, phase, complexity)
      health.json         # Health status
      hook.json           # FPP work queue
      cv.json             # Work history
      mail/               # Incoming messages
      handoffs/           # Handoff prompts (for debugging)
  cloister/               # Cloister AI lifecycle manager
    config.json           # Cloister settings
    state.json            # Running state
    events.jsonl          # Handoff event log
  heartbeats/             # Real-time agent activity
    agent-min-123.json    # Last heartbeat from agent
  logs/                   # Log files
    handoffs.jsonl        # All handoff events (for analytics)
  costs/                  # Raw cost logs (JSONL)
  backups/                # Sync backups
  traefik/                # Traefik reverse proxy config
    dynamic/              # Dynamic route configs
    certs/                # TLS certificates
Each agent's state is tracked in ~/.panopticon/agents/{agent-id}/state.json:
{
"id": "agent-min-123",
"issueId": "MIN-123",
"workspace": "/home/user/projects/myapp/workspaces/feature-min-123",
"branch": "feature/min-123",
"phase": "implementation",
"model": "sonnet",
"complexity": "medium",
"handoffCount": 0,
"sessionId": "abc123",
"createdAt": "2024-01-22T10:00:00-08:00",
"updatedAt": "2024-01-22T10:30:00-08:00"
}

| Field | Description |
|---|---|
| phase | Current work phase: planning, implementation, testing, review, merging |
| model | Current model: haiku, sonnet, opus |
| complexity | Detected complexity: trivial, simple, medium, complex, expert |
| handoffCount | Number of times the agent has been handed off to a different model |
| sessionId | Claude Code session ID (for resuming after handoff) |
State Cleanup: When an agent is killed or aborted (pan work kill), Panopticon automatically cleans up its state files to prevent stale data from affecting future runs.
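Because state.json is written one field per line, its values can be grepped out without jq. This is an illustrative one-liner, not a supported pan command:

```shell
# Peek at an agent's current phase and model from state.json (sketch;
# relies on the one-field-per-line layout shown above).
state=/tmp/pan-state-demo.json
cat > "$state" <<'EOF'
{
  "id": "agent-min-123",
  "phase": "implementation",
  "model": "sonnet"
}
EOF
sed -n 's/.*"phase": "\([^"]*\)".*/\1/p' "$state"   # implementation
sed -n 's/.*"model": "\([^"]*\)".*/\1/p' "$state"   # sonnet
```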
Panopticon implements the Deacon pattern for stuck agent detection:
- Ping timeout: 30 seconds
- Consecutive failures: 3 before recovery
- Cooldown: 5 minutes between force-kills
When an agent is stuck (no activity for 30+ minutes), Panopticon will:
- Force kill the tmux session
- Record the kill in health.json
- Respawn with crash recovery context
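The stuck check reduces to comparing a last-activity timestamp against a threshold. A sketch, assuming heartbeat freshness is measured by file mtime (GNU stat/date assumed; the real implementation may track activity differently):

```shell
# is_stale <heartbeat-file> <max-age-seconds>: "stale" if the file is
# older than the threshold, or missing entirely.
is_stale() {
  [ -f "$1" ] || { echo stale; return; }
  age=$(( $(date +%s) - $(stat -c %Y "$1") ))
  if [ "$age" -gt "$2" ]; then echo stale; else echo healthy; fi
}

touch /tmp/pan-heartbeat-demo.json
is_stale /tmp/pan-heartbeat-demo.json 1800   # healthy: just touched
```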
"Any runnable action is a fixed point and must resolve before the system can rest."
Inspired by Doctor Who: a fixed point in time must occur; it cannot be avoided.
Fixed Point Principle (FPP): Any runnable bead, hook, or agent action represents a fixed point in execution and must be resolved immediately. Panopticon continues executing until no fixed points remain.
FPP ensures agents are self-propelling:
- Work items are pushed to the agent's hook
- On spawn/recovery, the hook is checked
- Pending work is injected into the agent's prompt
- Completed work is popped from the hook
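The push/check/pop cycle can be illustrated with a simple line-based queue (the real hook.json is JSON; this sketch shows only the FPP flow):

```shell
# A line-based stand-in for the agent hook queue.
HOOK=/tmp/pan-hook-demo.queue
: > "$HOOK"
hook_push() { echo "$1" >> "$HOOK"; }
hook_pop() {
  [ -s "$HOOK" ] || return 1            # no fixed points left: rest
  head -n 1 "$HOOK"                     # oldest pending work first
  tail -n +2 "$HOOK" > "$HOOK.tmp" && mv "$HOOK.tmp" "$HOOK"
}

hook_push "Continue with tests"
hook_push "Address review feedback"
hook_pop    # prints "Continue with tests"
```

An agent keeps calling hook_pop until it fails, which is the FPP invariant: the system rests only when no runnable work remains.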
Panopticon uses a shared-config, switchable-CLI approach:
~/.panopticon/              # Shared by both dev and prod
├── config.toml             # Settings
├── projects.json           # Registered projects
├── project-mappings.json   # Linear → local path mappings
├── agents/                 # Agent state
└── skills/                 # Shared skills
Both dev and production versions read/write the same config, so you can switch between them freely.
# Clone and setup
git clone https://github.com/eltmon/panopticon.git
cd panopticon
npm install
# Link dev version globally (makes 'pan' use your local code)
npm link
# Start the dashboard (with hot reload)
cd src/dashboard
npm run install:all
npm run dev
# → Frontend: http://localhost:3010
# → API: http://localhost:3011

# Use dev version (from your local repo)
cd /path/to/panopticon && npm link
# Switch back to stable release
npm unlink panopticon-cli
npm install -g panopticon-cli

| Mode | Command | Use Case |
|---|---|---|
| Production | pan up | Daily usage, containerized, HTTPS at https://pan.localhost |
| Dev | cd src/dashboard && npm run dev | Only for active development on Panopticon itself |
Note: Use pan up for normal usage - it runs in Docker and won't conflict with your project's ports. Only use dev mode when actively working on Panopticon's codebase.
If you're both developing Panopticon AND using it for your own projects:
- Use npm link so CLI changes take effect immediately
- Run dashboard from source for hot reload on UI changes
- Config is shared - workspaces/agents work the same either way
- Test in a real project - your own usage is the best test
Panopticon has two types of skills:
| Directory | Purpose | Synced When |
|---|---|---|
| skills/ | User-facing skills for all Panopticon users | Always via pan sync |
| dev-skills/ | Developer-only skills for Panopticon contributors | Only in dev mode |
Dev mode is automatically detected when running from the Panopticon source repo (npm link). Skills in dev-skills/ are:
- Checked into the repo and version-controlled
- Only synced to developers' machines, not end users
- Shown with [dev] label in pan sync --dry-run
# Check what will be synced (including dev-skills)
pan sync --dry-run
# Output shows:
# Developer mode detected - dev-skills will be synced
# ...
# + skill/test-specialist-workflow [dev]

The test-specialist-workflow skill provides end-to-end testing for the specialist handoff pipeline.
Location: dev-skills/test-specialist-workflow/SKILL.md
What it tests:
- Full approve workflow: review-agent → test-agent → merge-agent
- Specialist handoffs via pan specialists wake --task
- Merge completion and branch preservation
Usage:
# First sync to get dev-skills
pan sync
# In Claude Code, invoke the skill
/test-specialist-workflow

The skill guides you through:
- Prerequisites check - Dashboard running, specialists available
- Create test issue - GitHub issue for tracking
- Create workspace - Git worktree for the test
- Make test change - Trivial commit to verify merge
- Trigger approve - Kicks off the specialist pipeline
- Monitor handoffs - Watch each specialist complete and hand off
- Verify merge - Confirm change reached main branch
- Cleanup - Close issue, remove workspace
Expected timeline: 2-4 minutes total
When to use:
- After making changes to the specialist system
- After modifying the approve workflow
- As a smoke test before releases
If you run multiple containerized workspaces with Vite/React frontends, you may notice CPU spikes and slow HMR. Inside containers, Vite's file watcher typically falls back to polling (usePolling), and the default 100ms polling interval compounds across multiple instances.
Fix: Increase the polling interval in your vite.config.mjs:
server: {
watch: {
usePolling: true,
interval: 3000, // Poll every 3s instead of 100ms default
},
}

A 3000ms interval supports 4-5 concurrent workspaces comfortably while maintaining acceptable HMR responsiveness.
A workspace can become "corrupted" when it exists as a directory but is no longer a valid git worktree. The dashboard will show a yellow "Workspace Corrupted" warning with an option to clean and recreate.
Symptoms:
- Dashboard shows "Workspace Corrupted" warning
- git status in the workspace fails with "not a git repository"
- The .git file is missing from the workspace directory
Common Causes:
| Cause | Description |
|---|---|
| Interrupted creation | pan workspace create was killed mid-execution (Ctrl+C, system crash) |
| Manual .git deletion | Someone accidentally deleted the .git file in the workspace |
| Disk space issues | Ran out of disk space during workspace creation |
| Git worktree pruning | Running git worktree prune in the main repo removed the worktree link |
| Force-deleted main repo | The main repository was moved or deleted while workspaces existed |
Resolution:
1. Via Dashboard (recommended):

   - Click on the issue to open the detail panel
   - Click the "Clean & Recreate" button
   - Review the files that will be deleted
   - Check "Create backup" to preserve your work (recommended)
   - Click "Backup & Recreate"

2. Via CLI:

   # Option 1: Manual cleanup
   rm -rf /path/to/project/workspaces/feature-issue-123
   pan workspace create ISSUE-123

   # Option 2: Backup first
   cp -r /path/to/project/workspaces/feature-issue-123 /tmp/backup-issue-123
   rm -rf /path/to/project/workspaces/feature-issue-123
   pan workspace create ISSUE-123
   # Then manually restore files from backup
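The check behind the "Workspace Corrupted" warning amounts to verifying the directory is still a usable git worktree; a sketch (illustrative, not pan's actual implementation):

```shell
# check_workspace <dir>: "ok" if the directory is a usable git
# worktree, "corrupted" otherwise.
check_workspace() {
  if [ -e "$1/.git" ] && git -C "$1" status >/dev/null 2>&1; then
    echo ok
  else
    echo corrupted
  fi
}

mkdir -p /tmp/pan-fake-workspace          # a directory with no .git
check_workspace /tmp/pan-fake-workspace   # corrupted
```

In a healthy worktree, .git is a small file pointing back at the main repo's .git/worktrees/ entry, which is why deleting either side breaks the link.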
Prevention:
- Don't interrupt pan workspace create commands
- Don't run git worktree prune in the main repo without checking for active workspaces
- Ensure adequate disk space before creating workspaces
This project is licensed under the MIT License - see the LICENSE file for details.