
Conversation


@jsw324 jsw324 commented Jul 31, 2025

Add Prompt Management Service Integration

🎯 What We Built

This PR introduces a new Prompt Management Service integration to the Agentuity Python SDK, enabling dynamic prompt template compilation with variable substitution and version management. This feature allows agents to compile prompt templates stored in Agentuity Cloud at runtime, providing flexibility and centralized prompt management.

🔧 Key Components Added

1. PromptClient (agentuity/server/prompt.py)

  • Async HTTP client for interacting with the Agentuity prompt management service
  • Template compilation with variable substitution
  • Version management support for prompt templates
  • Comprehensive error handling and logging
  • OpenTelemetry integration for distributed tracing

2. Data Models

  • CompilePromptRequest: Request model for prompt compilation
  • CompilePromptResponse: Response model with success/error handling
  • PromptCompileResult: Result container with compiled content and metadata
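The PR body doesn't show the model definitions themselves; a minimal sketch of what the three models could look like as plain dataclasses (any field beyond those named above is an illustrative assumption, not the PR's actual code):

```python
from dataclasses import dataclass, field
from typing import Any, Optional


@dataclass
class CompilePromptRequest:
    """Request payload for compiling a prompt template."""
    name: str                          # template name in Agentuity Cloud
    variables: dict[str, Any] = field(default_factory=dict)
    version: Optional[str] = None      # omit to use the latest version


@dataclass
class CompilePromptResponse:
    """Raw service response with success/error handling."""
    success: bool
    error: Optional[str] = None
    data: Optional[dict[str, Any]] = None


@dataclass
class PromptCompileResult:
    """Result container with compiled content and metadata."""
    compiled_content: str
    metadata: dict[str, Any] = field(default_factory=dict)
```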

3. Integration Points

  • AgentContext: Added prompt client for easy access in agent implementations
  • Public API: Exported PromptClient and PromptCompileResult in main package
  • Type Safety: Full type annotations and validation

🚀 Why We Built This

Centralized Prompt Management

  • Store and version prompt templates in Agentuity Cloud
  • Avoid hardcoding prompts in agent code
  • Enable A/B testing and prompt iteration without code changes

Dynamic Variable Substitution

  • Compile templates with runtime variables
  • Support complex prompt engineering workflows
  • Maintain consistency across agent instances
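As a concrete illustration of what substitution means here: a stored template contains placeholders that compilation fills with runtime variables. The sketch below uses Python's `string.Template` with `$name` placeholders purely for illustration — the actual placeholder syntax is defined by the Agentuity prompt service, not shown in this PR:

```python
from string import Template

# Hypothetical template as it might be stored in Agentuity Cloud.
stored_template = (
    "Hello $customer_name, we are treating your $issue_type issue "
    "as $priority priority."
)

# Compilation substitutes runtime variables into the template.
compiled = Template(stored_template).substitute(
    customer_name="Ada",
    issue_type="billing",
    priority="high",
)
print(compiled)
```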

Observability & Debugging

  • Full OpenTelemetry tracing integration
  • Detailed logging for request/response debugging
  • Error handling with meaningful error messages

Developer Experience

  • Simple async API: await context.prompt.compile("template_name", variables)
  • Context manager support for resource cleanup
  • Type-safe interfaces with comprehensive validation

📝 Usage Example

# In an agent implementation
async def handle_request(self, context: AgentContext, request: AgentRequest):
    # Compile a prompt template with variables
    result = await context.prompt.compile(
        name="customer_support_response",
        variables={
            "customer_name": request.user.name,
            "issue_type": request.data.get("issue_type"),
            "priority": "high"
        }
    )
    
    # Use the compiled prompt
    response = await self.llm.generate(result.compiled_content)
    return AgentResponse(content=response)
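For unit tests that shouldn't hit Agentuity Cloud, an in-memory stand-in that mirrors the `compile()` call shape can be handy. This fake is entirely hypothetical and not part of this PR; the `$name` placeholder syntax is an illustration only:

```python
import asyncio
from dataclasses import dataclass, field
from string import Template
from typing import Any


@dataclass
class FakePromptCompileResult:
    compiled_content: str
    metadata: dict[str, Any] = field(default_factory=dict)


class FakePromptClient:
    """In-memory stand-in mirroring the compile() call shape for tests."""

    def __init__(self, templates: dict[str, str]):
        self._templates = templates

    async def compile(self, name: str, variables: dict[str, Any]) -> FakePromptCompileResult:
        # Substitute variables into the locally stored template instead
        # of calling the remote service.
        template = Template(self._templates[name])
        return FakePromptCompileResult(
            compiled_content=template.safe_substitute(**variables),
            metadata={"name": name},
        )


fake = FakePromptClient({"greet": "Hi $customer_name!"})
result = asyncio.run(fake.compile("greet", {"customer_name": "Ada"}))
print(result.compiled_content)
```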

Summary by CodeRabbit

  • New Features

    • Introduced a client for compiling prompt templates via the Agentuity prompt management service.
    • Added new public components for prompt compilation, making them available for external use.
    • Enhanced context handling with a dedicated prompt client for compiling prompt templates.
  • Style

    • Reformatted method signatures for improved readability.


coderabbitai bot commented Jul 31, 2025

Walkthrough

This update introduces a new prompt compilation client (PromptClient) and related data models into the agentuity package, exposing them in the public API. The AgentContext class now includes a PromptClient instance. Additional minor changes include import/export adjustments and a method signature reformatting in the Telegram service interface.

Changes

Cohort / File(s) Change Summary
Public API Exposure
agentuity/__init__.py
Added PromptClient and PromptCompileResult to public exports and imports from agentuity.server.
Server Package Exports
agentuity/server/__init__.py
Explicitly imported and exposed PromptClient and PromptCompileResult from .prompt.
Prompt Compilation Client
agentuity/server/prompt.py
Introduced new module with PromptClient, data models for prompt compilation, and async HTTP logic with tracing.
Agent Context Extension
agentuity/server/context.py
Added PromptClient as self.prompt in AgentContext, initialized in the constructor.
Code Formatting
agentuity/server/types.py
Reformatted send_reply method signature in TelegramServiceInterface to multiline with trailing comma.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant AgentContext
    participant PromptClient
    participant AgentuityService

    User->>AgentContext: Request prompt compilation
    AgentContext->>PromptClient: compile(name, variables, version)
    PromptClient->>AgentuityService: HTTP POST /prompt/compile
    AgentuityService-->>PromptClient: JSON response (compiled prompt data)
    PromptClient-->>AgentContext: PromptCompileResult
    AgentContext-->>User: Return compiled prompt

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~18 minutes

Poem

A prompty new client hops into view,
With data models fresh as morning dew.
Contexts now compile with glee,
While exports bloom for all to see.
Telegram’s lines are neat and spry—
This rabbit’s code just loves to try!
🐇✨




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (5)
agentuity/server/context.py (1)

69-71: Consider maintaining consistency with the other services' initialization pattern.

While the implementation is functionally correct, the PromptClient is instantiated directly in the constructor, whereas the other services (kv, vector, objectstore) are passed in via the services dict. For consistency, consider either:

  1. Adding prompt to the services dict passed to the constructor, or
  2. Documenting why prompt is treated differently from the other services
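Option 1 might look roughly like this; the constructor signature and service keys are assumptions for illustration, since the actual AgentContext internals aren't reproduced in this review:

```python
# Hypothetical sketch: resolve "prompt" from the services dict like its
# siblings, falling back to a default only when nothing is injected.
class AgentContext:
    def __init__(self, services=None):
        services = services or {}
        self.kv = services.get("kv")
        self.vector = services.get("vector")
        self.objectstore = services.get("objectstore")
        # Consistent with kv/vector/objectstore, but still constructible
        # by default when no prompt client is supplied.
        self.prompt = services.get("prompt") or self._default_prompt_client()

    def _default_prompt_client(self):
        # Placeholder; real code would construct a PromptClient here.
        return None
```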
agentuity/server/prompt.py (4)

154-154: Consider making the API version configurable.

The API endpoint hardcodes the date version 2025-03-17. This version should be:

  1. Made configurable via an environment variable or parameter,
  2. Documented to explain why this specific date version is pinned, or
  3. Updated to reflect the actual API versioning pattern

159-164: Consider reusing HTTP client for better performance.

Creating a new httpx.AsyncClient for each request prevents connection pooling. Consider creating a client instance at the class level and reusing it across requests for better performance.

Example implementation:

import httpx
from opentelemetry import trace

def __init__(self, base_url: str, api_key: str, tracer: trace.Tracer):
    self.base_url = base_url
    self.api_key = api_key
    self.tracer = tracer
    # One shared client enables connection pooling across requests
    self._client = httpx.AsyncClient()

async def __aenter__(self):
    return self

async def __aexit__(self, exc_type, exc_val, exc_tb):
    # Release pooled connections when used as an async context manager
    await self._client.aclose()

230-235: Consider using more specific exception types.

The code raises generic Exception instances throughout. Consider creating specific exception classes like PromptCompilationError, PromptServiceConnectionError, etc., to make error handling more precise for callers.
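A possible exception hierarchy, using the names suggested above; the `raise_for_status` helper is a hypothetical illustration of how callers could then catch failures selectively:

```python
class PromptError(Exception):
    """Base class for prompt service errors (hypothetical naming)."""


class PromptCompilationError(PromptError):
    """The service rejected or failed to compile the template."""


class PromptServiceConnectionError(PromptError):
    """The prompt service could not be reached."""


def raise_for_status(status_code: int, error_msg: str) -> None:
    # Callers can catch PromptError subclasses selectively instead of
    # a bare Exception.
    if status_code != 200:
        raise PromptCompilationError(f"HTTP {status_code} - {error_msg}")
```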


222-225: Improve error message for non-200 responses.

When the response status is not 200, the error message only includes the status code. Consider including more context such as the actual HTTP status code and potentially the response body (if not sensitive).

-                    raise Exception(f"Failed to compile prompt: {response.status_code}")
+                    raise Exception(f"Failed to compile prompt: HTTP {response.status_code} - {error_msg}")
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 10b84bf and b1a64bd.

📒 Files selected for processing (5)
  • agentuity/__init__.py (2 hunks)
  • agentuity/server/__init__.py (1 hunks)
  • agentuity/server/context.py (2 hunks)
  • agentuity/server/prompt.py (1 hunks)
  • agentuity/server/types.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (4)
agentuity/server/context.py (1)
agentuity/server/prompt.py (1)
  • PromptClient (82-235)
agentuity/server/__init__.py (1)
agentuity/server/prompt.py (2)
  • PromptClient (82-235)
  • PromptCompileResult (70-79)
agentuity/__init__.py (1)
agentuity/server/prompt.py (2)
  • PromptClient (82-235)
  • PromptCompileResult (70-79)
agentuity/server/types.py (1)
agentuity/io/telegram.py (2)
  • chat_id (64-65)
  • message_id (60-61)
🔇 Additional comments (3)
agentuity/__init__.py (1)

15-16: LGTM! Properly exposed new prompt management classes.

The new PromptClient and PromptCompileResult classes are correctly imported and added to the __all__ list, following the existing pattern for public API exposure.

Also applies to: 32-33

agentuity/server/__init__.py (1)

27-30: LGTM! Explicit re-export pattern properly implemented.

The use of Name as Name aliasing pattern is correct for ensuring these classes are properly re-exported from the server module.

agentuity/server/types.py (1)

300-306: LGTM! Clean formatting improvement.

The multi-line parameter formatting with trailing comma improves readability and makes future changes cleaner.

@jsw324 jsw324 changed the title prompt management first pass prompt management first pass - AGENT-431 Jul 31, 2025
