
feat: add LangChain integration, docs and example #54

Open
chtushar wants to merge 8 commits into main from chtushar/explore-ui-options

Conversation

@chtushar
Contributor

Summary

Adds comprehensive LangChain integration for Amarillo, enabling users to route LangChain LLM calls through the LLMOps gateway and send execution traces to the built-in OTLP endpoint.

Changes

  • Integration Guide (docs/content/docs/integrations/langchain.mdx): Documents both LLM proxy setup and OpenTelemetry tracing configuration with TypeScript and Python examples
  • Example Project (examples/langchain/): Full-stack Express server with LLMOps middleware showing chat completions, streaming, and embeddings endpoints
  • Icon Support: Added LangChain icon to the docs icon plugin (simple-icons)

This enables organizations to replace LangSmith with Amarillo's native observability for full chain tracing via OTLP.

Add comprehensive LangChain integration guide covering:
- LLM proxy setup to route ChatOpenAI calls through LLMOps gateway
- OpenTelemetry tracing configuration for full chain observability
- Support for direct provider routing, streaming, embeddings, and tool calling
- Examples in both TypeScript and Python

Includes new @llmops/langchain-example with Express server, LLMOps SDK setup,
and endpoints for chat completions, streaming, and embeddings.
LangChain's ChatOpenAI does not append /v1 automatically, unlike the
OpenAI SDK, causing a 404 on /api/genai/chat/completions.
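As a configuration sketch of the workaround (the gateway URL here is an assumption): LangChain's ChatOpenAI passes its `configuration` object straight through to the underlying OpenAI client, so the `/v1` segment has to be included in `baseURL` explicitly.

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Include the /v1 prefix yourself; ChatOpenAI will not add it for you.
// "http://localhost:3000/api/genai/v1" is a placeholder gateway URL.
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  configuration: {
    baseURL: "http://localhost:3000/api/genai/v1",
  },
});
```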
Accumulate delta.content from SSE chunks in the streaming cost
extractor so streaming responses show output in the trace view
instead of "No output captured".
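The accumulation this commit describes can be sketched as follows, assuming the standard OpenAI chat-completion chunk shape (`choices[0].delta.content`); the function name is hypothetical.

```typescript
// Each SSE chunk carries at most a fragment of the assistant message
// in delta.content; concatenating the fragments recovers the output.
interface SseChunk {
  choices: { delta: { content?: string } }[];
}

function accumulateOutput(chunks: SseChunk[]): string {
  let output = "";
  for (const chunk of chunks) {
    const delta = chunk.choices[0]?.delta?.content;
    if (delta) output += delta; // skip role-only / empty deltas
  }
  return output;
}

// Three chunks (one with no content) reassemble into the full message.
const text = accumulateOutput([
  { choices: [{ delta: { content: "Hel" } }] },
  { choices: [{ delta: {} }] },
  { choices: [{ delta: { content: "lo" } }] },
]);
// text === "Hello"
```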
provider() sets x-llmops-internal, which skips trace creation on the
assumption that a separate OTLP exporter is in use. Use an explicit
baseURL instead so the gateway creates traces automatically.
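A minimal sketch of the gateway-side behavior this commit relies on (the function name is hypothetical; x-llmops-internal is the header the commit mentions): requests carrying the header skip automatic trace creation, so routing via a plain baseURL, with no header, gets a trace.

```typescript
// Requests tagged x-llmops-internal are assumed to be re-exported via a
// separate OTLP pipeline, so the gateway does not open a trace for them.
function shouldCreateTrace(headers: Record<string, string>): boolean {
  return !("x-llmops-internal" in headers);
}

const internal = shouldCreateTrace({ "x-llmops-internal": "1" }); // false
const external = shouldCreateTrace({}); // true — gateway creates the trace
```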
The OTLP handler, trace batch writer, credentials cache, gateway adapter,
and playground execute path were all logging internal processing details
at info level, cluttering production output.
… client

Add full LangChain trace capture (agents, chains, tools, LLM calls) as
nested spans. Two integration paths:

- TypeScript SDK: llmopsClient.langchainTracer() with LangChainTracer
- Python/env vars: LANGCHAIN_ENDPOINT pointed at /api/langsmith
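For the Python/env-var path, the setup might look like the following (host and key values are placeholders; LANGCHAIN_TRACING_V2 and LANGCHAIN_API_KEY are LangChain's standard tracing variables):

```shell
# Point LangChain's LangSmith-compatible tracing at the gateway.
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_ENDPOINT="http://localhost:3000/api/langsmith"
export LANGCHAIN_API_KEY="<your-llmops-key>"
```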

New files:
- LangSmith HTTP handler (GET /info, POST /runs/batch)
- SDK LangChain client (buffer-and-merge strategy, in-process routing)
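A buffer-and-merge strategy along these lines might look like the sketch below (all names are hypothetical; the run shape is loosely based on the LangSmith /runs/batch payload, where run creations and later patch updates arrive separately and can be merged before posting):

```typescript
// Buffer run creations, fold patch updates into them, and emit each run
// once per flush instead of posting every create/patch individually.
interface Run {
  id: string;
  [k: string]: unknown;
}

class RunBuffer {
  private runs = new Map<string, Run>();

  post(run: Run): void {
    this.runs.set(run.id, run);
  }

  patch(id: string, update: Partial<Run>): void {
    const existing = this.runs.get(id);
    if (existing) Object.assign(existing, update);
    else this.runs.set(id, { id, ...update }); // patch arrived before post
  }

  flush(): Run[] {
    const batch = [...this.runs.values()];
    this.runs.clear();
    return batch;
  }
}

const buf = new RunBuffer();
buf.post({ id: "r1", name: "chain" });
buf.patch("r1", { outputs: { ok: true } }); // merged into the buffered run
const batch = buf.flush(); // one merged run, buffer now empty
```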

Updated example with agent + tools using the tracer, and docs with all
three tracing approaches (SDK, env vars, OpenTelemetry).
@chtushar changed the title from "feat: add LangChain integration docs and example" to "feat: add LangChain integration, docs and example" on Feb 23, 2026
- Use full 32-char UUID hex for spanId (was truncated to 16 chars,
  causing collisions with UUID v7 time-ordered IDs from LangChain)
- Fix TraceBatchWriter flush order: insert spans before upserting
  traces to prevent spanCount inflation on re-queue
- Make batchInsertSpans/Events skip invalid items instead of failing
  the entire batch
- Add response status check to SDK langchain-client postBatch
- Classify LangSmith spans by run_type for proper color coding
  (llm=blue, tool/retriever=green, embedding=cyan, chain=purple)
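The spanId collision in the first bullet can be illustrated with a small sketch (the UUID values are made up; uuidToHex is a hypothetical helper). UUID v7 IDs are time-ordered, so their leading hex characters are nearly identical for runs created close together, and truncating to 16 characters collides.

```typescript
// Strip the dashes to get the full 32-char hex form of a UUID.
function uuidToHex(uuid: string): string {
  return uuid.replace(/-/g, "");
}

// Two UUID v7 values generated moments apart share their time-ordered prefix.
const a = "01890bdc-1111-7abc-8000-aaaaaaaaaaaa";
const b = "01890bdc-1111-7abc-8000-bbbbbbbbbbbb";

const truncatedCollision =
  uuidToHex(a).slice(0, 16) === uuidToHex(b).slice(0, 16); // true — collision
const fullCollision = uuidToHex(a) === uuidToHex(b); // false — distinct spanIds
```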
@chtushar marked this pull request as ready for review on February 23, 2026 at 09:29
