
Conversation

@zby (Owner) commented Jan 10, 2026

Catalog 15 concrete problems that benefit from recursive LLM worker
architectures, with sketched implementations using llm-do:

  1. Document summarization (hierarchical map-reduce)
  2. Task decomposition / planning
  3. Code analysis and refactoring
  4. Mathematical/logical proof
  5. Research question exploration
  6. Debate/adversarial analysis
  7. Creative writing (hierarchical expansion)
  8. Data pipeline construction
  9. Test generation
  10. Knowledge graph construction
  11. Bug root cause analysis
  12. Multi-level translation/localization
  13. Game AI / decision trees
  14. Explanation generation
  15. Contract/legal analysis

Also documents four implementation patterns:

  • Pattern A: Self-recursive worker (requires policy change)
  • Pattern B: Twin workers (works now)
  • Pattern C: Typed recursion (different workers per level)
  • Pattern D: Depth-controlled via explicit parameter

Key insight: recursion works because messages are locals, not globals.
Each call gets fresh context, preventing unbounded accumulation.
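
To make that concrete, here is a minimal plain-Python sketch of Pattern D (depth-controlled recursion) applied to problem 1, hierarchical summarization. This is not llm-do's actual API; call_worker and split_into_chunks are hypothetical stand-ins, and the size threshold is arbitrary.

```python
def call_worker(messages: list[dict]) -> str:
    """Placeholder for a single LLM worker invocation (not llm-do's real API)."""
    raise NotImplementedError

def split_into_chunks(text: str, size: int = 2000) -> list[str]:
    """Naive fixed-size chunking; a real splitter would respect document structure."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarize(text: str, depth: int = 0, max_depth: int = 5) -> str:
    # Messages are locals: each call builds its own list, so nothing
    # accumulates in the parent call's context.
    messages = [{"role": "user", "content": f"Summarize:\n{text}"}]

    # Base case: short enough to handle directly, or depth budget exhausted.
    if depth >= max_depth or len(text) < 4000:
        return call_worker(messages)

    # Recursive case: map over chunks, then reduce the partial summaries.
    chunks = split_into_chunks(text)
    partials = [summarize(c, depth + 1, max_depth) for c in chunks]
    return summarize("\n\n".join(partials), depth + 1, max_depth)
```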

@gemini-code-assist (Contributor)

Summary of Changes

Hello @zby, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces extensive documentation outlining how to effectively utilize recursive patterns within LLM worker architectures. It addresses the inherent context limitations of large language models by providing a structured approach to breaking down complex problems into manageable, recursive sub-tasks. The document serves as a foundational resource, detailing various problem domains where recursion is beneficial and offering practical implementation strategies, thereby enabling the development of more robust and scalable LLM-powered applications.

Highlights

  • New Documentation on Recursive LLM Worker Patterns: A new document, 'recursive-problem-patterns.md', has been added to the docs/notes directory. This document serves as a comprehensive guide for implementing recursive patterns with LLM workers.
  • Catalog of 15 Recursive Problem Patterns: The document details 15 concrete problem types that significantly benefit from recursive LLM worker architectures, including document summarization, task decomposition, code analysis, research exploration, and creative writing, each with sketched implementations using llm-do.
  • Four Implementation Patterns for Recursion: Four distinct implementation patterns for recursion are outlined: Self-Recursive Worker, Twin Workers, Typed Recursion, and Depth-Controlled Recursion, providing practical approaches for developers.
  • Context Management and Depth Limit Considerations: The documentation clarifies the crucial insight that 'messages are locals, not globals' in recursive calls, preventing unbounded context accumulation. It also discusses considerations for recursion depth limits and strategies for managing deeper recursions.

@zby zby merged commit 5e9ad19 into main Jan 10, 2026
2 checks passed
@gemini-code-assist (Contributor) left a comment


Code Review

This is an excellent and very thorough document cataloging recursive patterns for LLM workers. It's well-structured and provides great examples. I've added a few comments to improve clarity and correctness in some of the examples and explanations. Specifically, I've pointed out a structural mismatch in one of the schemas, noted a few missing schema definitions that would improve clarity, and corrected a calculation regarding recursion depth.


class CounterargumentAnalysis(BaseModel):
    counterargument: str
    rebuttals: list[str]
@gemini-code-assist (Contributor), severity: medium

The rebuttals: list[str] field in CounterargumentAnalysis creates a structural mismatch with the recursive process. The recursive call to debate_analyzer returns a full ArgumentAnalysis object, and flattening this to a list of strings would lose valuable structured information.

To better represent the recursive analysis and preserve the structured output, consider replacing rebuttals with a field that can hold the nested ArgumentAnalysis object. I've suggested renaming it to analysis for clarity.

Suggested change
    rebuttals: list[str]
    analysis: "ArgumentAnalysis"
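
For reference, a minimal sketch of what the two schemas could look like once the suggestion is applied. The fields on ArgumentAnalysis are illustrative assumptions, since its definition is not quoted here; only the analysis field comes from the suggestion.

```python
from pydantic import BaseModel

class CounterargumentAnalysis(BaseModel):
    counterargument: str
    # Nested result of the recursive debate_analyzer call (per the suggestion)
    analysis: "ArgumentAnalysis"

class ArgumentAnalysis(BaseModel):
    # Illustrative fields; the real schema lives in the reviewed document
    claim: str
    supporting_points: list[str] = []
    counterarguments: list[CounterargumentAnalysis] = []

# Resolve the forward reference now that ArgumentAnalysis exists (Pydantic v2)
CounterargumentAnalysis.model_rebuild()
```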

name: kg_builder
description: Build knowledge graphs from text recursively
model: anthropic:claude-haiku-4-5
schema_out_ref: schemas.py:KnowledgeFragment
@gemini-code-assist (Contributor), severity: medium

The kg_builder.worker references schemas.py:KnowledgeFragment, but the schema definition is not provided in the document. Including the schema would make this example much clearer and more complete by showing the expected structure of the worker's output.
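
Purely as an illustration of what such a schema might look like (every field name below is a hypothetical assumption; the reviewed document does not define KnowledgeFragment):

```python
from pydantic import BaseModel

class Entity(BaseModel):
    name: str
    type: str  # e.g. "person", "organization", "concept"

class Relation(BaseModel):
    source: str
    predicate: str
    target: str

class KnowledgeFragment(BaseModel):
    # Hypothetical output of kg_builder: a partial graph plus the text
    # spans that still need a recursive pass.
    entities: list[Entity] = []
    relations: list[Relation] = []
    unresolved_spans: list[str] = []
```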

name: game_ai
description: Explore game trees for optimal decisions
model: anthropic:claude-haiku-4-5
schema_out_ref: schemas.py:GameAnalysis
@gemini-code-assist (Contributor), severity: medium

The game_ai.worker references schemas.py:GameAnalysis, but its definition is missing from the document. Adding the GameAnalysis schema would help readers better understand the expected output of this worker and make the pattern easier to implement.
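
Again purely as an illustration (the field names are hypothetical; GameAnalysis is not defined in the reviewed document):

```python
from pydantic import BaseModel

class MoveEvaluation(BaseModel):
    move: str
    score: float           # estimated value of the resulting position
    line: list[str] = []   # principal variation explored by recursive calls

class GameAnalysis(BaseModel):
    # Hypothetical output of game_ai for one position in the game tree
    position: str
    best_move: str
    evaluations: list[MoveEvaluation] = []
```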


The default max_depth of 5 is usually sufficient because:

1. **Exponential work**: Each level multiplies work. Depth 5 with branching factor 3 = 243 calls
@gemini-code-assist (Contributor), severity: medium

The calculation for the number of calls appears to be ambiguous. For a tree of depth 5 with a branching factor of 3, the number of calls at the deepest level (the leaves) is 3^5 = 243. However, the total number of calls in the entire tree is 364. Please clarify whether you're referring to total calls or just leaf-node calls to ensure the documentation is precise.

Suggested change
1. **Exponential work**: Each level multiplies work. Depth 5 with branching factor 3 = 243 calls
1. **Exponential work**: Each level multiplies work. A tree of depth 5 with a branching factor of 3 has 243 leaf nodes, and a total of 364 calls.
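
For readers checking the arithmetic, a short sketch of both counts, assuming the root call sits at depth 0 and every internal call fans out to exactly 3 sub-calls:

```python
branching, depth = 3, 5

leaves = branching ** depth                              # 3**5 = 243 leaf calls
total = sum(branching ** d for d in range(depth + 1))    # 1+3+9+27+81+243 = 364

# Closed form for the total: (b**(d+1) - 1) // (b - 1)
assert total == (branching ** (depth + 1) - 1) // (branching - 1)
print(leaves, total)  # 243 364
```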
