Add Langfuse observability to Unified API #457
base: main
Conversation
Walkthrough
Adds an observe_llm_execution decorator factory to emit Langfuse traces/generations around LLM provider execute calls when credentials are available, and integrates it into execute_job to wrap provider execution with optional session/conversation IDs.
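As a rough, non-authoritative sketch of the integration the walkthrough describes (the helper names and keyword arguments here are assumptions inferred from the walkthrough and sequence diagram, not the PR's exact code):

```python
# Sketch only: observe_llm_execution wraps the provider's execute call when
# Langfuse credentials are available; otherwise it calls through unchanged.
langfuse_credentials = get_provider_credential(  # assumed signature
    session=session,
    org_id=org_id,
    project_id=project_id,
    provider="langfuse",
)

decorated_execute = observe_llm_execution(
    session_id=query.conversation_id,  # optional session/conversation grouping
    credentials=langfuse_credentials,
)(provider_instance.execute)

response, error = decorated_execute(
    completion_config,
    query,
    include_provider_raw_response=include_provider_raw_response,
)
```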
Sequence Diagram(s)
```mermaid
sequenceDiagram
participant Job as execute_job
participant Decorator as observe_llm_execution (wrapper)
participant Langfuse as Langfuse Client
participant Provider as LLM Provider
Job->>Decorator: call decorated_execute(completion_config, query, include_provider_raw_response)
alt credentials available & client init OK
Decorator->>Langfuse: init client (credentials)
Decorator->>Langfuse: create trace (session_id or conversation_id)
Decorator->>Langfuse: create generation
Decorator->>Provider: execute(completion_config, query, ...)
alt success
Provider-->>Decorator: response + usage
Decorator->>Langfuse: update generation (output, usage, model)
Decorator->>Langfuse: update trace (session context) & flush
Decorator-->>Job: return (response, None)
else error
Provider-->>Decorator: exception
Decorator->>Langfuse: flush data
Decorator-->>Job: propagate error
end
else credentials missing or client init failed
Decorator->>Provider: execute(...) (bypass observability)
Provider-->>Decorator: response
Decorator-->>Job: return (response, None)
end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Codecov Report
❌ Patch coverage is
Actionable comments posted: 0
🧹 Nitpick comments (2)
backend/app/core/langfuse/langfuse.py (2)
3-3: Optional: modernize typing imports and Dict usage to match Ruff hints.
Ruff is flagging `Callable` and `Dict` from `typing`; in Python 3.11+ you can simplify by importing `Callable` from `collections.abc` and using the builtin `dict[...]` instead of `Dict[...]`. This is purely stylistic but will keep the module aligned with current best practices and avoid future deprecation noise.
Example (conceptual only):
```diff
-from typing import Any, Callable, Dict, Optional
+from collections.abc import Callable
+from typing import Any, Optional
 ...
-    input: Dict[str, Any],
-    metadata: Optional[Dict[str, Any]] = None,
+    input: dict[str, Any],
+    metadata: Optional[dict[str, Any]] = None,
```
Also applies to: 55-61, 73-78, 88-92
114-218: Tighten type hints on `observe_llm_execution` and its wrapper.
The decorator logic looks sound and preserves the original `(response, error)` contract, including graceful fallback when credentials are missing or client init fails. To better leverage type checking (and per project guidelines on type hints), consider adding explicit return types for the decorator and wrapper:
```diff
-def observe_llm_execution(
-    session_id: str | None = None,
-    credentials: dict | None = None,
-):
+def observe_llm_execution(
+    session_id: str | None = None,
+    credentials: dict | None = None,
+) -> Callable:
@@
-    def decorator(func: Callable) -> Callable:
+    def decorator(func: Callable) -> Callable:
@@
-        def wrapper(completion_config: CompletionConfig, query: QueryParams, **kwargs):
+        def wrapper(
+            completion_config: CompletionConfig,
+            query: QueryParams,
+            **kwargs,
+        ) -> tuple[LLMCallResponse | None, str | None]:
```
You can later narrow the `Callable` annotations if you want stronger guarantees, but even this minimal change makes the behavior clearer to tooling without affecting runtime.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- backend/app/core/langfuse/langfuse.py (2 hunks)
- backend/app/services/llm/jobs.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use type hints in Python code (Python 3.11+ project)
Files:
- backend/app/core/langfuse/langfuse.py
- backend/app/services/llm/jobs.py
backend/app/core/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Place core functionality (config, DB session, security, exceptions, middleware) in backend/app/core/
Files:
backend/app/core/langfuse/langfuse.py
backend/app/core/langfuse/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Place Langfuse observability integration under backend/app/core/langfuse/
Files:
backend/app/core/langfuse/langfuse.py
backend/app/services/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Implement business logic services under backend/app/services/
Files:
backend/app/services/llm/jobs.py
🧠 Learnings (2)
📓 Common learnings
Learnt from: CR
Repo: ProjectTech4DevAI/ai-platform PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-10-08T12:05:01.317Z
Learning: Applies to backend/app/core/langfuse/**/*.py : Place Langfuse observability integration under backend/app/core/langfuse/
📚 Learning: 2025-10-08T12:05:01.317Z
Learnt from: CR
Repo: ProjectTech4DevAI/ai-platform PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-10-08T12:05:01.317Z
Learning: Applies to backend/app/core/langfuse/**/*.py : Place Langfuse observability integration under backend/app/core/langfuse/
Applied to files:
- backend/app/core/langfuse/langfuse.py
- backend/app/services/llm/jobs.py
🧬 Code graph analysis (2)
backend/app/core/langfuse/langfuse.py (3)
backend/app/models/llm/request.py (2)
- CompletionConfig (49-58)
- QueryParams (35-46)
backend/app/models/llm/response.py (1)
- LLMCallResponse (42-52)
backend/app/tests/services/llm/providers/test_openai.py (2)
- completion_config (32-37)
- provider (27-29)
backend/app/services/llm/jobs.py (3)
backend/app/crud/credentials.py (1)
- get_provider_credential (121-159)
backend/app/core/langfuse/langfuse.py (1)
- observe_llm_execution (114-218)
backend/app/services/llm/providers/base.py (1)
- execute (35-55)
🪛 Ruff (0.14.6)
backend/app/core/langfuse/langfuse.py
3-3: Import from collections.abc instead: Callable
Import from collections.abc
(UP035)
3-3: typing.Dict is deprecated, use dict instead
(UP035)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: checks (3.11.7, 6)
🔇 Additional comments (2)
backend/app/services/llm/jobs.py (2)
187-193: Confirm `get_provider_credential` supports `provider="langfuse"`.
This call assumes the credentials CRUD/validation layer recognizes `"langfuse"` as a valid provider; otherwise `validate_provider` inside `get_provider_credential` will raise and short-circuit the LLM job before the actual provider executes. Please double-check that:
- `"langfuse"` is included wherever provider names are validated, and
- Langfuse credentials are stored with the expected shape so that `decrypt_credentials` returns the `public_key`/`secret_key`/`host` fields used in `observe_llm_execution`.
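As a purely hypothetical illustration of the first point above (the real `validate_provider` lives in `backend/app/crud/credentials.py`; its allowed-provider list and signature are not shown here, so everything below is an assumption):

```python
# Hypothetical sketch only: names and the allowed set are assumptions,
# not the repository's actual implementation.
ALLOWED_PROVIDERS = {"openai", "langfuse"}


def validate_provider(provider: str) -> None:
    # Reject any provider name the credentials layer does not know about,
    # so a lookup for "langfuse" credentials can succeed only if it is listed.
    if provider not in ALLOWED_PROVIDERS:
        raise ValueError(f"Unsupported provider: {provider}")
```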
194-205: Verify provider/session lifetime and note clean fallback when Langfuse is absent.
`decorated_execute` is created and invoked after the `with Session(engine) as session` block has exited. That's fine as long as:
- `get_llm_provider` only uses the DB session during provider construction (e.g., to fetch credentials/config), and
- `provider_instance.execute` does not depend on the original `Session` remaining open.
If any provider still uses the passed `session` during `execute`, it should instead manage its own short-lived sessions internally, or `decorated_execute` should be moved back inside the `with Session(...)` block.
On the positive side, the decorator is correctly wired:
- When `langfuse_credentials` is `None` or invalid, `observe_llm_execution` will call through to `provider_instance.execute` unchanged.
- When credentials are valid, you get tracing without altering the external `(response, error)` behavior.
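A minimal sketch of the second remediation option (invoking the decorated call while the session is still open); the surrounding names mirror the review's description of `execute_job` and are assumptions, not the file's exact code:

```python
from sqlmodel import Session  # assumed ORM import for this project

# Assumed context: engine, get_llm_provider, observe_llm_execution,
# langfuse_credentials, completion_config, and query follow the PR's
# execute_job flow but are not copied from it verbatim.
with Session(engine) as session:
    provider_instance = get_llm_provider(session, completion_config.provider)

    decorated_execute = observe_llm_execution(
        session_id=query.conversation_id,
        credentials=langfuse_credentials,
    )(provider_instance.execute)

    # Invoking inside the block keeps the DB session open for any provider
    # that still reads from it during execute().
    response, error = decorated_execute(completion_config, query)
```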
Actionable comments posted: 2
🧹 Nitpick comments (2)
backend/app/core/langfuse/langfuse.py (2)
173-175: Simplify variable declaration and assignment.
The separate type hint declarations on lines 173-174 followed by assignment on line 175 are unnecessary. Python's type inference combined with the function's return annotation provides sufficient typing.
Apply this diff:
```diff
-        # Execute the actual LLM call
-        response: LLMCallResponse | None
-        error: str | None
-        response, error = func(completion_config, query, **kwargs)
+        # Execute the actual LLM call
+        response, error = func(completion_config, query, **kwargs)
```
114-220: Consider leveraging the existing `LangfuseTracer` class to reduce duplication.
The decorator reimplements logic similar to `LangfuseTracer` (lines 14-111), including trace/generation creation, error handling, and flushing. Refactoring the decorator to use `LangfuseTracer` internally would improve maintainability and eliminate the duplicate error-handling blocks (lines 198-203 vs. 209-214).
Example refactor:
```python
def observe_llm_execution(
    session_id: str | None = None,
    credentials: dict | None = None,
):
    def decorator(func: Callable) -> Callable:
        @wraps(func)
        def wrapper(completion_config: CompletionConfig, query: QueryParams, **kwargs):
            if not credentials or not all(
                key in credentials for key in ["public_key", "secret_key", "host"]
            ):
                logger.info("[Langfuse] No credentials - skipping observability")
                return func(completion_config, query, **kwargs)

            tracer = LangfuseTracer(credentials=credentials, session_id=session_id)

            # Use tracer methods for trace/generation lifecycle
            tracer.start_trace(
                name="unified-llm-call",
                input={"query": query.input},
                metadata={"provider": completion_config.provider},
                tags=[completion_config.provider],
            )
            tracer.start_generation(
                name=f"{completion_config.provider}-completion",
                input={"query": query.input},
            )

            try:
                response, error = func(completion_config, query, **kwargs)

                if response:
                    tracer.end_generation(
                        output={"status": "success", "output": response.response.output.text},
                        usage={"input": response.usage.input_tokens, "output": response.usage.output_tokens},
                        model=response.response.model,
                    )
                    tracer.update_trace(
                        tags=[completion_config.provider],
                        output={"status": "success", "output": response.response.output.text},
                    )
                else:
                    tracer.log_error(error or "Unknown error")

                tracer.flush()
                return response, error
            except Exception as e:
                tracer.log_error(str(e))
                tracer.flush()
                raise

        return wrapper
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- backend/app/core/langfuse/langfuse.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use type hints in Python code (Python 3.11+ project)
Files:
backend/app/core/langfuse/langfuse.py
backend/app/core/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Place core functionality (config, DB session, security, exceptions, middleware) in backend/app/core/
Files:
backend/app/core/langfuse/langfuse.py
backend/app/core/langfuse/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Place Langfuse observability integration under backend/app/core/langfuse/
Files:
backend/app/core/langfuse/langfuse.py
🧠 Learnings (1)
📚 Learning: 2025-10-08T12:05:01.317Z
Learnt from: CR
Repo: ProjectTech4DevAI/ai-platform PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-10-08T12:05:01.317Z
Learning: Applies to backend/app/core/langfuse/**/*.py : Place Langfuse observability integration under backend/app/core/langfuse/
Applied to files:
backend/app/core/langfuse/langfuse.py
🧬 Code graph analysis (1)
backend/app/core/langfuse/langfuse.py (2)
backend/app/models/llm/request.py (2)
- CompletionConfig (49-58)
- QueryParams (35-46)
backend/app/models/llm/response.py (1)
- LLMCallResponse (42-52)
🪛 Ruff (0.14.6)
backend/app/core/langfuse/langfuse.py
3-3: Import from collections.abc instead: Callable
Import from collections.abc
(UP035)
3-3: typing.Dict is deprecated, use dict instead
(UP035)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: checks (3.11.7, 6)
🔇 Additional comments (1)
backend/app/core/langfuse/langfuse.py (1)
183-186: The review comment is incorrect. `usage_details` is the correct and preferred parameter for Langfuse 2.60.3.
Based on verification:
- Langfuse version 2.60.3 uses `usage_details` as the current v2/v3 standard parameter for `generation.end()`
- The format `{"input": ..., "output": ...}` matches the expected generic-style structure
- The `usage` parameter at line 95 is legacy/v1 style but remains backward-compatible
- Both approaches work; `usage_details` is actually more modern and correct
No changes are needed. The code at lines 183-186 is properly implemented.
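A minimal sketch of ending a generation with `usage_details` as described above; the field accesses follow the PR's response model and are assumptions, and exact keyword support depends on the installed Langfuse SDK version:

```python
# Sketch only: `generation` is an already-created Langfuse generation object,
# and `response` is assumed to follow the PR's LLMCallResponse shape.
generation.end(
    output={"status": "success", "output": response.response.output.text},
    usage_details={
        "input": response.usage.input_tokens,
        "output": response.usage.output_tokens,
    },
    model=response.response.model,
)
```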
```python
if not credentials or not all(
    key in credentials for key in ["public_key", "secret_key", "host"]
):
```
Do we need this extensive check? If credentials are there, then "public_key", "secret_key", and "host" should be there; otherwise the user can't save credentials.
It's just a simple check; the backend should not assume the stored object is always well-formed.
Credentials may come from DB inconsistencies, migrations, manual edits, or older versions of the system.
I don't fully understand or agree with this. Ideally we should ensure the DB has correct credentials; otherwise, what's the point of saving credentials that are incorrect and raise an error whenever they're used? That way we don't need checks when fetching credentials.
Actionable comments posted: 1
♻️ Duplicate comments (2)
backend/app/core/langfuse/langfuse.py (2)
3-4: Use modern type hints for Python 3.11+.
Per the coding guidelines, import `Callable` from `collections.abc` and use the built-in `dict` instead of `Dict`. This was flagged in a previous review and by static analysis.
```diff
-from typing import Any, Callable, Dict, Optional
+from collections.abc import Callable
+from typing import Any, Optional
 from functools import wraps
```
Then replace all `Dict` occurrences (lines 58, 76, 90, 91, 97) with `dict`.
152-156: Pass `session_id` to trace creation for proper session grouping.
The `session_id` is only set via `trace.update()` later, which means traces are created without session grouping initially. Include `session_id` in trace creation for conversation-level analytics to work correctly from the start.
```diff
 trace = langfuse.trace(
     name="unified-llm-call",
     input=query.input,
     tags=[completion_config.provider],
+    session_id=session_id,
 )
```
🧹 Nitpick comments (2)
backend/app/core/langfuse/langfuse.py (2)
152-156: Add request correlation metadata for debugging.
Unlike `LangfuseTracer.start_trace`, which includes `request_id` from `correlation_id.get()` in metadata, this trace is created without metadata. For consistent observability and easier debugging, include the request correlation ID.
```diff
+trace_metadata = {"request_id": correlation_id.get() or "N/A"}
+
 trace = langfuse.trace(
     name="unified-llm-call",
     input=query.input,
+    metadata=trace_metadata,
     tags=[completion_config.provider],
+    session_id=session_id,
 )
```
131-133: Add return type hints for better type safety.
The wrapper function has typed parameters but lacks a return type annotation. Based on the code, it returns `tuple[LLMCallResponse | None, str | None]`.
```diff
-    def decorator(func: Callable) -> Callable:
+    def decorator(func: Callable[..., tuple[LLMCallResponse | None, str | None]]) -> Callable[..., tuple[LLMCallResponse | None, str | None]]:
         @wraps(func)
-        def wrapper(completion_config: CompletionConfig, query: QueryParams, **kwargs):
+        def wrapper(completion_config: CompletionConfig, query: QueryParams, **kwargs) -> tuple[LLMCallResponse | None, str | None]:
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- backend/app/core/langfuse/langfuse.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use type hints in Python code (Python 3.11+ project)
Files:
backend/app/core/langfuse/langfuse.py
backend/app/core/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Place core functionality (config, DB session, security, exceptions, middleware) in backend/app/core/
Files:
backend/app/core/langfuse/langfuse.py
backend/app/core/langfuse/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Place Langfuse observability integration under backend/app/core/langfuse/
Files:
backend/app/core/langfuse/langfuse.py
🧬 Code graph analysis (1)
backend/app/core/langfuse/langfuse.py (3)
backend/app/models/llm/request.py (2)
- CompletionConfig (49-58)
- QueryParams (35-46)
backend/app/models/llm/response.py (1)
- LLMCallResponse (42-52)
backend/app/tests/services/llm/providers/test_openai.py (2)
- completion_config (32-37)
- provider (27-29)
🪛 GitHub Actions: AI Platform CI
backend/app/core/langfuse/langfuse.py
[error] 1-1: Black formatting failed. Reformatted 1 file during pre-commit. Exit code 1.
🪛 Ruff (0.14.8)
backend/app/core/langfuse/langfuse.py
3-3: Import from collections.abc instead: Callable
Import from collections.abc
(UP035)
3-3: typing.Dict is deprecated, use dict instead
(UP035)
🔇 Additional comments (1)
backend/app/core/langfuse/langfuse.py (1)
164-209: LGTM: Core execution logic is well-structured.
The decorator properly handles:
- Success path with generation output and usage tracking
- Error path when response is None
- Exception path with proper cleanup and re-raise
- Langfuse flush in all exit paths
```python
            return func(completion_config, query, **kwargs)


        trace = langfuse.trace(
```
Fix Black formatting: remove extra blank line.
The pipeline failure indicates Black formatting failed. There's an extra blank line between line 149 and 152. Remove one blank line to fix the formatting.
```diff
         except Exception as e:
             logger.warning(f"[Langfuse] Failed to initialize client: {e}")
             return func(completion_config, query, **kwargs)
-
         trace = langfuse.trace(
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
            return func(completion_config, query, **kwargs)

        trace = langfuse.trace(
```
🤖 Prompt for AI Agents
In backend/app/core/langfuse/langfuse.py around lines 149 to 152, remove the
extra blank line between the return statement and the subsequent trace
invocation so the two lines are adjacent; reformat the file (or run Black) to
ensure no additional formatting issues remain.
5631985 to 8318707
Actionable comments posted: 1
♻️ Duplicate comments (2)
backend/app/core/langfuse/langfuse.py (2)
3-4: Use modern type hints for Python 3.11+.
This issue was already flagged in a previous review. Import `Callable` from `collections.abc` instead of `typing`, and replace `Dict` with the built-in `dict` type throughout the file.
151-155: Pass `session_id` to trace creation for proper session grouping.
This issue was already flagged in a previous review. The `session_id` parameter should be included when creating the trace so that session grouping works from the start, rather than being set later via `trace.update()`.
🧹 Nitpick comments (4)
backend/app/core/langfuse/langfuse.py (4)
114-117: Add return type annotation to the decorator factory.
The function signature is missing a return type annotation. Since this is a decorator factory that returns a decorator, add `-> Callable` to improve type safety.
Apply this diff:
```diff
 def observe_llm_execution(
     session_id: str | None = None,
     credentials: dict | None = None,
-):
+) -> Callable:
```
131-133: Add return type annotations to nested functions.
Both the `decorator` and `wrapper` functions are missing return type annotations. Add them for better type safety and IDE support.
Apply this diff:
```diff
-    def decorator(func: Callable) -> Callable:
+    def decorator(func: Callable) -> Callable:
         @wraps(func)
-        def wrapper(completion_config: CompletionConfig, query: QueryParams, **kwargs):
+        def wrapper(completion_config: CompletionConfig, query: QueryParams, **kwargs) -> tuple[LLMCallResponse | None, str | None]:
```
141-146: Use direct key access after validation.
Since you've already validated that `public_key`, `secret_key`, and `host` exist in credentials (lines 135-137), use direct dictionary access (`credentials["key"]`) instead of `.get()` to avoid unnecessary `Optional` types and make the code clearer.
Apply this diff:
```diff
         try:
             langfuse = Langfuse(
-                public_key=credentials.get("public_key"),
-                secret_key=credentials.get("secret_key"),
-                host=credentials.get("host"),
+                public_key=credentials["public_key"],
+                secret_key=credentials["secret_key"],
+                host=credentials["host"],
             )
```
114-212: Consider refactoring to use the existing `LangfuseTracer` class.
The `observe_llm_execution` decorator reimplements trace and generation management logic that already exists in the `LangfuseTracer` class above (lines 14-111). This creates code duplication and maintenance overhead. Consider refactoring the decorator to instantiate and use `LangfuseTracer` instead, which would provide a consistent pattern across the codebase and reduce duplication.
For example:
```python
def decorator(func: Callable) -> Callable:
    @wraps(func)
    def wrapper(completion_config: CompletionConfig, query: QueryParams, **kwargs):
        tracer = LangfuseTracer(credentials=credentials, session_id=session_id)
        tracer.start_trace(
            name="unified-llm-call",
            input=query.input,
            tags=[completion_config.provider]
        )
        tracer.start_generation(
            name=f"{completion_config.provider}-completion",
            input=query.input,
            metadata={"model": completion_config.params.get("model")}
        )
        try:
            response, error = func(completion_config, query, **kwargs)
            if response:
                tracer.end_generation(
                    output={"status": "success", "output": response.response.output.text},
                    usage={...},
                    model=response.response.model
                )
                tracer.update_trace(
                    tags=[],
                    output={"status": "success", "output": response.response.output.text}
                )
            else:
                tracer.log_error(error or "Unknown error")
            tracer.flush()
            return response, error
        except Exception as e:
            tracer.log_error(str(e))
            tracer.flush()
            raise
    return wrapper
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- backend/app/core/langfuse/langfuse.py (2 hunks)
- backend/app/services/llm/jobs.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/app/services/llm/jobs.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use type hints in Python code (Python 3.11+ project)
Files:
backend/app/core/langfuse/langfuse.py
backend/app/core/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Place core functionality (config, DB session, security, exceptions, middleware) in backend/app/core/
Files:
backend/app/core/langfuse/langfuse.py
backend/app/core/langfuse/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Place Langfuse observability integration under backend/app/core/langfuse/
Files:
backend/app/core/langfuse/langfuse.py
🧬 Code graph analysis (1)
backend/app/core/langfuse/langfuse.py (3)
backend/app/models/llm/request.py (1)
- CompletionConfig (49-58)
backend/app/models/llm/response.py (1)
- LLMCallResponse (42-52)
backend/app/tests/services/llm/providers/test_openai.py (2)
- completion_config (32-37)
- provider (27-29)
🪛 Ruff (0.14.8)
backend/app/core/langfuse/langfuse.py
3-3: Import from collections.abc instead: Callable
Import from collections.abc
(UP035)
3-3: typing.Dict is deprecated, use dict instead
(UP035)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: checks (3.11.7, 6)
🔇 Additional comments (3)
backend/app/core/langfuse/langfuse.py (3)
157-161: LGTM! The generation is correctly created as a child of the trace, with appropriate input and model information extracted from the configuration.
200-208: LGTM! The exception handling correctly logs the error to Langfuse, flushes the data, and re-raises the exception to preserve the original error flow. This ensures observability without hiding failures.
169-180: No changes needed. The `usage_details` parameter is correct according to the Langfuse SDK documentation. The code at line 175 properly uses `usage_details` with the correct structure (`{"input": ..., "output": ...}`). Likely an incorrect or invalid review comment.
Summary
Target issue is #438
This PR introduces Langfuse observability into the LLM provider execution flow by wrapping provider_instance.execute with a configurable decorator. This allows every LLM call to automatically generate:
This enables unified tracing, debugging, and analytics across all LLM providers.
Checklist
Before submitting a pull request, please ensure that you mark these tasks.
Run `fastapi run --reload app/main.py` or `docker compose up` in the repository root and test.
Notes
Please add here if any other information is required for the reviewer.
Summary by CodeRabbit
New Features
Bug Fixes