Optimize setup context budget: tiered skills, deferred reads, pruned … #13

Open
friedrch wants to merge 2 commits into main from refactor/setup

Conversation

@friedrch
Contributor

…re-reads

Three changes to reduce setup's context consumption by ~2,400 lines:

  1. Tiered skill generation (Step 9): Classify 16 skills into three tiers — A (copy: learn, tasks), B (frontmatter-only: reflect, reweave, stats, graph, refactor), C (DOMAIN substitution: seed, pipeline, ralph, rethink, next, validate, remember, reduce, verify). Tiers A+B skip body reasoning entirely; Tier C does mechanical string replacement only. Corrects plan misclassification of reduce and verify (both have {DOMAIN:xxx} patterns, moved from B to C).

  2. Defer upfront reference reads: Trim mandatory reads from 8 files (~2,268 lines) to 3 (kernel.yaml, interaction-constraints.md, vocabulary-transforms.md). Defer use-case-presets to Step 3c (matched preset only), personality-layer to Step 3d (only if signals detected). Inline failure-modes vulnerability matrix into Step 3f. Drop three-spaces.md (covered by Step 2) and conversation-patterns.md (consult on ambiguity only).

  3. Eliminate redundant re-reads: Remove 6 of 14 ops/derivation.md re-reads (Steps 4, 5, 5f, 5g, 6, 11 — small files whose content is already in context) and 2 interaction-constraints.md re-reads (Steps 3b, 3e — already loaded upfront).
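The tiered generation in change 1 can be illustrated with a minimal sketch. The tier membership below is taken from the description above, but the data structures, the function, and the exact placeholder semantics are assumptions for illustration; in particular, this sketch simply replaces each whole `{DOMAIN:xxx}` placeholder with the domain name, which may differ from how the real setup uses the placeholder payload.

```python
import re

# Hypothetical tier tables mirroring the three tiers described above;
# skill names come from the PR description, the structure is illustrative.
TIER_A = {"learn", "tasks"}                                    # copy verbatim
TIER_B = {"reflect", "reweave", "stats", "graph", "refactor"}  # frontmatter only
TIER_C = {"seed", "pipeline", "ralph", "rethink", "next",
          "validate", "remember", "reduce", "verify"}          # {DOMAIN:xxx} substitution

# Matches placeholders like {DOMAIN:xxx}; the real pattern may differ.
DOMAIN_PATTERN = re.compile(r"\{DOMAIN:([^}]*)\}")

def generate_skill_body(name: str, body: str, domain: str) -> str:
    """Apply the cheapest transformation a skill's tier allows."""
    if name in TIER_A or name in TIER_B:
        # Tiers A and B never touch the body, so no reasoning is spent on it.
        return body
    if name in TIER_C:
        # Tier C is mechanical string replacement only.
        return DOMAIN_PATTERN.sub(domain, body)
    raise ValueError(f"unclassified skill: {name}")
```

Under this sketch, `reduce` and `verify` landing in Tier C (rather than B) matters: a Tier B classification would leave their `{DOMAIN:xxx}` placeholders unreplaced in the generated output.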

friedrch and others added 2 commits February 19, 2026 20:47
…re-reads

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…4 and 5g

- failure-modes.md: "not needed" annotation contradicted line 446 which
  still said "consult." Changed to scoped deferred read (HIGH-risk sections
  only) with explicit instruction to use prevention patterns for Common Pitfalls.
- Step 4 (identity.md): restored re-read of derivation.md + conditional
  personality-layer.md re-read. Step 3 is the largest generation task;
  without the re-read anchor, personality dimensions drift out of attention.
- Step 5g (manual/): restored re-read of derivation.md before 7 manual
  pages. Prevents vocabulary drift across a long multi-file generation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>