
Conversation


@akitaSummer akitaSummer commented Jan 9, 2026

Checklist
  • npm test passes
  • tests and/or benchmarks are included
  • documentation is changed or added
  • commit message follows commit guidelines
Affected core subsystem(s)
Description of change

Summary by CodeRabbit

  • Chores

    • Pinned core LangChain dependencies to exact versions to improve build consistency and runtime stability.
  • New Features

    • Introduced a base middleware class to streamline integration behavior.
  • Tests

    • Added comprehensive test utilities and mocks for language-model interactions, tool-calling flows, structured outputs, and checkpoint validation to improve test coverage and reliability.



coderabbitai bot commented Jan 9, 2026

📝 Walkthrough

Walkthrough

Adds a new abstract middleware type and extensive test utilities for LangChain decorators, and pins LangChain-related dependencies to exact versions in two package.json files.

Changes

  • LangChain Dependency Pinning (core/langchain-decorator/package.json, plugin/langchain/package.json): Pinned @langchain/core -> 1.1.11, @langchain/langgraph -> 1.0.7, and langchain -> 1.1.6 (removed caret ranges).
  • Middleware type (core/langchain-decorator/src/type/middleware.ts): New exported abstract class TeggAgentMiddleware and a createMiddlewareParams alias matching the createMiddleware parameter shape; the constructor sets name.
  • Test utilities & mocks (core/langchain-decorator/test/utils.ts): Large new test-support module: message matchers, fake chat models (tool-calling, configurable), structured-output helpers, a MemorySaver immutability checker, a checkpointer factory, and a SearchAPI structured tool. Many exported classes/interfaces and complex test doubles added.

Sequence Diagram(s)

(omitted)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • fix: langchain version #384: Touches the same LangChain dependency entries across the same package.json files (dependency pinning vs range changes).
  • fix: langchain build bug #373: Alters public surface and re-exports related to the LangChain decorator module; potentially overlaps with new exports and test utilities.

Suggested reviewers

  • gxkl
  • killagu

Poem

🐇 I nudge the deps and tweak the test,

I give the middleware a cozy nest.
With mocks in paw and code in tune,
I hop—unit tests hum a happy tune.
🌿🥕

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'fix: lock langchain version' directly aligns with the primary change: pinning LangChain dependency versions from range specifiers to exact versions across two package.json files.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, above the required threshold of 80.00%.



@gemini-code-assist

Summary of Changes

Hello @akitaSummer, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on improving the stability and predictability of the project's langchain integrations. By transitioning key langchain dependencies from flexible version ranges to exact version specifications in the package.json files, the change ensures that the application consistently uses tested and known versions of these libraries, mitigating risks associated with automatic updates and promoting a more reliable development and deployment environment.

Highlights

  • Dependency Version Locking: The pull request updates package.json files in core/langchain-decorator and plugin/langchain to specify exact versions for several langchain related packages, moving from caret (^) ranges to fixed versions.
  • Enhanced Stability: By locking down specific versions for @langchain/core, @langchain/langgraph, and langchain, this change aims to prevent unexpected breaking changes or inconsistencies that might arise from minor or patch updates in these dependencies.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review: /gemini review. Performs a code review for the current pull request in its current state.
  • Pull Request Summary: /gemini summary. Provides a summary of the current pull request in its current state.
  • Comment: @gemini-code-assist. Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help: /gemini help. Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request aims to lock the versions of langchain dependencies to ensure stability. The changes correctly pin the versions for @langchain/core, @langchain/langgraph, and langchain in two package.json files. My review includes a critical fix for a typo in a version string that would break dependency installation. I've also included suggestions to improve consistency by pinning other related @langchain/* dependencies that are currently using version ranges. This would fully align the changes with the stated goal of the pull request.

"@langchain/community": "^1.0.0",
"@langchain/core": "^1.1.1",
"@langchain/langgraph": "^1.0.2",
"@langchain/core": "^.1.11",

critical

There seems to be a typo in the version for @langchain/core. The value ^.1.11 is not a valid semver range and will likely cause dependency installation to fail. It should probably be pinned to 1.1.11. Additionally, for consistency with the PR's goal of locking versions, consider also pinning other @langchain/* dependencies in this file, such as @langchain/community (line 68), @langchain/mcp-adapters (line 71), and @langchain/openai (line 72).

Suggested change
"@langchain/core": "^.1.11",
"@langchain/core": "1.1.11",

Comment on lines +44 to +47
"@langchain/core": "1.1.11",
"@langchain/langgraph": "1.0.7",
"@langchain/openai": "^1.1.0",
"langchain": "^1.1.2",
"langchain": "1.1.6",

medium

While you've pinned the versions for @langchain/core, @langchain/langgraph, and langchain, other langchain-related dependencies like @langchain/community (line 43) and @langchain/openai (line 46) are still using version ranges. For consistency and to fully achieve the goal of locking langchain versions, consider pinning these as well.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In @plugin/langchain/package.json:
- Line 69: Update the invalid semver for the dependency key "@langchain/core":
replace the broken value "^.1.11" with the exact version string "1.1.11" so the
package.json contains "@langchain/core": "1.1.11" (matching the version used in
core/langchain-decorator).
🧹 Nitpick comments (1)
core/langchain-decorator/package.json (1)

44-45: Consider the implications of exact version pinning.

The dependency updates correctly pin @langchain/core, @langchain/langgraph, and langchain to exact versions. This strategy eliminates automatic minor/patch updates and requires manual version bumps.

While exact pinning provides reproducibility and prevents unexpected breaking changes, it also means:

  • Security patches won't be automatically applied
  • Bug fixes in patch releases require manual updates
  • Maintenance overhead increases across the monorepo

Ensure your CI/CD pipeline includes automated dependency checks (e.g., Dependabot, Renovate) to monitor for updates and security advisories for these pinned versions.

Also applies to: 47-47
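For illustration, the two dependency styles side by side in a package.json fragment (the exact pin is one of the versions this PR sets; the caret entry is a hypothetical leftover range):

```json
{
  "dependencies": {
    "@langchain/core": "1.1.11",
    "@langchain/openai": "^1.1.0"
  }
}
```

An exact pin such as "1.1.11" resolves to that single version, while a caret range such as "^1.1.0" accepts any 1.x release at or above 1.1.0, so installs can drift between environments unless a lockfile is committed.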

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a160a46 and 2203e10.

📒 Files selected for processing (2)
  • core/langchain-decorator/package.json
  • plugin/langchain/package.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Socket Security: Pull Request Alerts
🔇 Additional comments (1)
plugin/langchain/package.json (1)

70-70: Exact version pins are correctly formatted and verified on npm.

The pinned versions exist on npm without security warnings: @langchain/langgraph@1.0.7, langchain@1.1.6, and @langchain/core@1.1.11.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Fix all issues with AI agents
In @core/langchain-decorator/src/type/middleware.ts:
- Around line 1-10: The class TeggAgentMiddleware should not implement
createMiddlewareParams; remove the "implements createMiddlewareParams" clause,
explicitly type the name property as "name: string" on the class, and add an
abstract method (e.g., getMiddlewareConfig(): createMiddlewareParams) that
subclasses must implement to return the actual middleware config object to be
passed into createMiddleware; keep the constructor that assigns this.name =
this.constructor.name and update any callers to use
instance.getMiddlewareConfig() when constructing middleware via
createMiddleware.

In @core/langchain-decorator/test/utils.ts:
- Around line 116-156: FakeConfigurableModel.bindTools currently unsafely casts
this._chatModel to FakeToolCallingChatModel and will throw at runtime if
fields.model doesn't implement bindTools; update the constructor signature to
constrain fields.model to an interface that includes bindTools (or accept a
union with that interface) and use that typed property when creating
modelWithTools, or if you cannot change the type, add an explicit runtime check
in bindTools (e.g., verify typeof this._chatModel.bindTools === 'function') and
throw a clear error referencing FakeConfigurableModel.bindTools when bindTools
is missing before calling it.
- Around line 286-326: In MemorySaverAssertImmutable.put, ensure you fail fast
if config.configurable?.thread_id is missing (throw a clear error instead of
using the "undefined" bucket) and use the actual Uint8Array byte comparison
rather than decoding to strings: locate the thread_id usage in
MemorySaverAssertImmutable.put, validate and throw when thread_id is falsy,
ensure this.storageForCopies[thread_id] is initialized, and when comparing saved
vs stored copies (after calling this.serde.dumpsTyped(saved) and
this.serde.dumpsTyped(checkpoint)) compare the Uint8Array contents directly
(byte-by-byte or via length+every-byte equality) instead of using TextDecoder.
- Around line 158-285: The inline comment in
FakeToolCallingChatModel.withStructuredOutput incorrectly states the function
returns "{ raw: BaseMessage, parsed: RunOutput }" while the implementation
returns only this.structuredResponse (RunOutput); update or remove that
misleading comment to reflect the actual return shape (RunOutput only) so the
comment matches the code in the withStructuredOutput method of
FakeToolCallingChatModel.
🧹 Nitpick comments (2)
core/langchain-decorator/test/utils.ts (2)

36-114: Exported _AnyId*Message is fine, but name the intent + tighten types a bit.
Since these are exported, consider dropping the leading _ (reads “private”) and add explicit return types for lc_id + AnyString matcher methods to reduce inference drift across TS configs.


350-498: FakeToolCallingModel is serviceable, but content building is a bit fragile.
The concatenation logic mixes multiple possible BaseMessage['content'] shapes; consider extracting it to a helper with explicit type guards to keep future changes safe (and easier to unit test).
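One possible shape for the suggested helper (hypothetical name and a simplified part type; the real BaseMessage content union is broader):

```typescript
// Simplified stand-in for BaseMessage['content']: either a plain string
// or an array of typed parts, only some of which carry text.
type MessageContent = string | Array<{ type: string; text?: string }>;

// Normalize content to plain text with explicit type guards, so future
// content shapes are handled in one place instead of scattered concatenations.
function contentToText(content: MessageContent): string {
  if (typeof content === 'string') {
    return content;
  }
  return content
    .filter((part): part is { type: 'text'; text: string } =>
      part.type === 'text' && typeof part.text === 'string')
    .map(part => part.text)
    .join('');
}

console.log(contentToText('hello'));                                              // "hello"
console.log(contentToText([{ type: 'text', text: 'a' }, { type: 'image_url' }])); // "a"
```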

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2203e10 and 91f7301.

📒 Files selected for processing (3)
  • core/langchain-decorator/src/type/middleware.ts
  • core/langchain-decorator/test/utils.ts
  • plugin/langchain/package.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Socket Security: Pull Request Alerts
  • GitHub Check: Runner-macos (18)
🔇 Additional comments (3)
core/langchain-decorator/test/utils.ts (2)

342-346: createCheckpointer() looks good for tests.
Simple and predictable default.


500-515: SearchAPI test tool is clean.
Clear schema + deterministic behavior.

plugin/langchain/package.json (1)

69-75: Dependency pinning is correctly applied. The exact versions for @langchain/core (1.1.11), @langchain/langgraph (1.0.7), and langchain (1.1.6) are properly pinned. Cross-workspace consistency is not a concern as these dependencies are isolated to the plugin/langchain workspace only. Note that the repository does not currently have a lockfile (no package-lock.json, pnpm-lock.yaml, or yarn.lock); if lockfile generation is desired for deterministic installs, that would be a separate repository-wide initiative.

Comment on lines +1 to +10
import { createMiddleware } from 'langchain';

type createMiddlewareParams = Parameters<typeof createMiddleware>['0'];

export abstract class TeggAgentMiddleware implements createMiddlewareParams {
  name;
  constructor() {
    this.name = this.constructor.name;
  }
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

langchain createMiddleware API TypeScript type definition

💡 Result:

Signature (TypeScript, simplified):

import { AgentMiddleware } from "langchain";

declare function createMiddleware<
  TSchema = undefined,
  TContextSchema = undefined
>(config: {
  name: string;
  stateSchema?: TSchema;
  contextSchema?: TContextSchema;
  beforeAgent?: BeforeAgentHook<TSchema, NormalizeContextSchema>;
  afterAgent?: AfterAgentHook<TSchema, NormalizeContextSchema>;
  beforeModel?: BeforeModelHook<TSchema, NormalizeContextSchema>;
  afterModel?: AfterModelHook<TSchema, NormalizeContextSchema>;
  wrapModelCall?: WrapModelCallHook<TSchema, NormalizeContextSchema>;
  wrapToolCall?: WrapToolCallHook<TSchema, NormalizeContextSchema>;
  tools?: any[];
}): AgentMiddleware<TSchema, TContextSchema, any>;

References: LangChain createMiddleware docs and AgentMiddleware/type refs. [1][2][3]

Sources
[1] createMiddleware docs. [2] AgentMiddleware interface/type. [3] middleware types (InferMiddlewareContext/State).


🏁 Script executed:

# Search the codebase for any imports or usage of createMiddleware
rg "createMiddleware" --type ts --type tsx -B 2 -A 2

# Look for langchain imports in the codebase
rg "from 'langchain'" --type ts --type tsx | head -20

# Check if there's a package.json to see langchain version
cat package.json | grep -A 5 -B 5 langchain

Repository: eggjs/tegg

Length of output: 164


TypeScript implementation error: class cannot safely implement createMiddlewareParams and name is implicitly any.

The config object expected by createMiddleware requires a name: string property along with optional hooks (beforeAgent, afterAgent, beforeModel, afterModel, wrapModelCall, wrapToolCall, tools). Implementing this type on the class doesn't make semantic sense—it suggests instances are themselves valid middleware config objects, which they are not. The untyped name property will also fail under noImplicitAny.

Remove the implements clause, explicitly type name: string, and have subclasses provide the actual config object via a method if middleware parameters need to be constructed:

Suggested approach
 import { createMiddleware } from 'langchain';

-type createMiddlewareParams = Parameters<typeof createMiddleware>['0'];
+export type CreateMiddlewareParams = Parameters<typeof createMiddleware>[0];

-export abstract class TeggAgentMiddleware implements createMiddlewareParams {
-  name;
+export abstract class TeggAgentMiddleware {
+  public readonly name: string;
   constructor() {
     this.name = this.constructor.name;
   }
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:
import { createMiddleware } from 'langchain';

type createMiddlewareParams = Parameters<typeof createMiddleware>['0'];

export abstract class TeggAgentMiddleware implements createMiddlewareParams {
  name;
  constructor() {
    this.name = this.constructor.name;
  }
}

After:
import { createMiddleware } from 'langchain';

export type CreateMiddlewareParams = Parameters<typeof createMiddleware>[0];

export abstract class TeggAgentMiddleware {
  public readonly name: string;
  constructor() {
    this.name = this.constructor.name;
  }
}
🤖 Prompt for AI Agents
In @core/langchain-decorator/src/type/middleware.ts around lines 1 - 10, The
class TeggAgentMiddleware should not implement createMiddlewareParams; remove
the "implements createMiddlewareParams" clause, explicitly type the name
property as "name: string" on the class, and add an abstract method (e.g.,
getMiddlewareConfig(): createMiddlewareParams) that subclasses must implement to
return the actual middleware config object to be passed into createMiddleware;
keep the constructor that assigns this.name = this.constructor.name and update
any callers to use instance.getMiddlewareConfig() when constructing middleware
via createMiddleware.
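As a self-contained sketch of the shape this prompt suggests (the subclass name is hypothetical, and createMiddlewareParams is stubbed locally rather than imported from langchain):

```typescript
// Stand-in for Parameters<typeof createMiddleware>[0]; stubbed here so
// the sketch compiles on its own (the real type comes from langchain).
interface CreateMiddlewareParams {
  name: string;
  beforeModel?: (...args: unknown[]) => unknown;
}

abstract class TeggAgentMiddleware {
  public readonly name: string;

  constructor() {
    // Derive the middleware name from the concrete subclass name.
    this.name = this.constructor.name;
  }

  // Subclasses return the actual config object handed to createMiddleware.
  abstract getMiddlewareConfig(): CreateMiddlewareParams;
}

// Hypothetical subclass to illustrate usage.
class LoggingMiddleware extends TeggAgentMiddleware {
  getMiddlewareConfig(): CreateMiddlewareParams {
    return { name: this.name };
  }
}

const mw = new LoggingMiddleware();
console.log(mw.name);                       // "LoggingMiddleware"
console.log(mw.getMiddlewareConfig().name); // "LoggingMiddleware"
```

With this shape, instances are no longer claimed to be valid middleware config objects; callers ask for the config explicitly.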

Comment on lines +116 to +156
export class FakeConfigurableModel extends BaseChatModel {
  _queuedMethodOperations: Record<string, any> = {};

  _chatModel: LanguageModelLike;

  constructor(
    fields: {
      model: LanguageModelLike;
    } & BaseChatModelParams,
  ) {
    super(fields);
    this._chatModel = fields.model;
  }

  _llmType() {
    return 'fake_configurable';
  }

  async _generate(
    // eslint-disable-next-line @typescript-eslint/no-unused-vars
    _messages: BaseMessage[],
    // eslint-disable-next-line @typescript-eslint/no-unused-vars
    _options: this['ParsedCallOptions'],
    // eslint-disable-next-line @typescript-eslint/no-unused-vars
    _runManager?: CallbackManagerForLLMRun,
  ): Promise<ChatResult> {
    throw new Error('Not implemented');
  }

  async _model() {
    return this._chatModel;
  }

  bindTools(tools: BindToolsInput[]) {
    const modelWithTools = new FakeConfigurableModel({
      model: (this._chatModel as FakeToolCallingChatModel).bindTools(tools),
    });
    modelWithTools._queuedMethodOperations.bindTools = tools;
    return modelWithTools;
  }
}

⚠️ Potential issue | 🟡 Minor

FakeConfigurableModel.bindTools unsafe cast can explode at runtime.
(this._chatModel as FakeToolCallingChatModel).bindTools(...) will throw if the wrapped model isn’t that fake. Consider constraining fields.model to an interface that has bindTools, or throw a clearer error when missing.

🤖 Prompt for AI Agents
In @core/langchain-decorator/test/utils.ts around lines 116 - 156,
FakeConfigurableModel.bindTools currently unsafely casts this._chatModel to
FakeToolCallingChatModel and will throw at runtime if fields.model doesn't
implement bindTools; update the constructor signature to constrain fields.model
to an interface that includes bindTools (or accept a union with that interface)
and use that typed property when creating modelWithTools, or if you cannot
change the type, add an explicit runtime check in bindTools (e.g., verify typeof
this._chatModel.bindTools === 'function') and throw a clear error referencing
FakeConfigurableModel.bindTools when bindTools is missing before calling it.
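A minimal sketch of the runtime-check variant (names are illustrative; the real fix would live inside FakeConfigurableModel.bindTools):

```typescript
// Minimal structural interface: anything exposing bindTools.
interface ToolBindable {
  bindTools(tools: unknown[]): unknown;
}

// Type guard so the call site narrows instead of casting.
function hasBindTools(model: unknown): model is ToolBindable {
  return typeof (model as ToolBindable | null)?.bindTools === 'function';
}

// Illustrative stand-in for the body of FakeConfigurableModel.bindTools.
function bindToolsSafely(model: unknown, tools: unknown[]): unknown {
  if (!hasBindTools(model)) {
    throw new Error(
      'FakeConfigurableModel.bindTools: wrapped model does not implement bindTools',
    );
  }
  return model.bindTools(tools);
}
```

With the guard in place, a wrapped model lacking bindTools fails with a descriptive message instead of an opaque TypeError from the unsafe cast.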

Comment on lines +158 to +285
export class FakeToolCallingChatModel extends BaseChatModel {
  sleep?: number = 50;

  responses?: BaseMessage[];

  thrownErrorString?: string;

  idx: number;

  toolStyle: 'openai' | 'anthropic' | 'bedrock' | 'google' = 'openai';

  structuredResponse?: Record<string, unknown>;

  // Track messages passed to structured output calls
  structuredOutputMessages: BaseMessage[][] = [];

  constructor(
    fields: {
      sleep?: number;
      responses?: BaseMessage[];
      thrownErrorString?: string;
      toolStyle?: 'openai' | 'anthropic' | 'bedrock' | 'google';
      structuredResponse?: Record<string, unknown>;
    } & BaseChatModelParams,
  ) {
    super(fields);
    this.sleep = fields.sleep ?? this.sleep;
    this.responses = fields.responses;
    this.thrownErrorString = fields.thrownErrorString;
    this.idx = 0;
    this.toolStyle = fields.toolStyle ?? this.toolStyle;
    this.structuredResponse = fields.structuredResponse;
    this.structuredOutputMessages = [];
  }

  _llmType() {
    return 'fake';
  }

  async _generate(
    messages: BaseMessage[],
    _options: this['ParsedCallOptions'],
    runManager?: CallbackManagerForLLMRun,
  ): Promise<ChatResult> {
    if (this.thrownErrorString) {
      throw new Error(this.thrownErrorString);
    }
    if (this.sleep !== undefined) {
      await new Promise(resolve => setTimeout(resolve, this.sleep));
    }
    const responses = this.responses?.length ? this.responses : messages;
    const msg = responses[this.idx % responses.length];
    const generation: ChatResult = {
      generations: [
        {
          text: '',
          message: msg,
        },
      ],
    };
    this.idx += 1;

    if (typeof msg.content === 'string') {
      await runManager?.handleLLMNewToken(msg.content);
    }
    return generation;
  }

  bindTools(tools: BindToolsInput[]): Runnable<any> {
    const toolDicts = [];
    const serverTools = [];
    for (const tool of tools) {
      if (!('name' in tool)) {
        serverTools.push(tool);
        continue;
      }

      // NOTE: this is a simplified tool spec for testing purposes only
      if (this.toolStyle === 'openai') {
        toolDicts.push({
          type: 'function',
          function: {
            name: tool.name,
          },
        });
      } else if ([ 'anthropic', 'google' ].includes(this.toolStyle)) {
        toolDicts.push({
          name: tool.name,
        });
      } else if (this.toolStyle === 'bedrock') {
        toolDicts.push({
          toolSpec: {
            name: tool.name,
          },
        });
      }
    }
    let toolsToBind: BindToolsInput[] = toolDicts;
    if (this.toolStyle === 'google') {
      toolsToBind = [{ functionDeclarations: toolDicts }];
    }
    return this.withConfig({
      tools: [ ...toolsToBind, ...serverTools ],
    } as BaseChatModelCallOptions);
  }

  withStructuredOutput<
    RunOutput extends Record<string, any> = Record<string, any>
    // eslint-disable-next-line @typescript-eslint/no-unused-vars
  >(_: unknown): Runnable<any> {
    if (!this.structuredResponse) {
      throw new Error('No structured response provided');
    }
    // Create a runnable that returns the proper structured format
    return RunnableLambda.from(async (messages: BaseMessage[]) => {
      if (this.sleep) {
        await new Promise(resolve => setTimeout(resolve, this.sleep));
      }

      // Store the messages that were sent to generate structured output
      this.structuredOutputMessages.push([ ...messages ]);

      // Return in the format expected: { raw: BaseMessage, parsed: RunOutput }
      return this.structuredResponse as RunOutput;
    });
  }
}


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

LangChain 1.1.6 BaseChatModel withStructuredOutput return shape documentation

💡 Result:

BaseChatModel.withStructuredOutput returns a Runnable whose output shape depends on the generic/config you pass:

  • If you use the simple form it returns the parsed structured value: RunOutput (i.e., your parsed JSON/Pydantic/Zod result). [1]
  • If you call the variant that requests the raw message, it returns an object { parsed: RunOutput; raw: BaseMessage } (the parsed output plus the raw model message). [1][2]

(Where RunOutput is the schema type you supplied — e.g., Record<string, any>, a Zod type, Pydantic model, or JSON Schema.) [1][2]

Sources: LangChain BaseChatModel / BaseChatOpenAI API docs. [1] https://v03.api.js.langchain.com/classes/_langchain_core.language_models_chat_models.BaseChatModel.html [2] https://v03.api.js.langchain.com/classes/_langchain_openai.BaseChatOpenAI.html


🏁 Script executed:

# Search the codebase for how withStructuredOutput is being used and what's expected
rg "withStructuredOutput" --type ts --type js -A 5 -B 2

Repository: eggjs/tegg

Length of output: 1183


Remove misleading comment from withStructuredOutput(...) return statement.
The code correctly returns only this.structuredResponse (matching LangChain's simple form), but the comment claims { raw: BaseMessage, parsed: RunOutput }. Either remove the comment or update it to match the actual return shape (RunOutput only).

🤖 Prompt for AI Agents
In @core/langchain-decorator/test/utils.ts around lines 158 - 285, The inline
comment in FakeToolCallingChatModel.withStructuredOutput incorrectly states the
function returns "{ raw: BaseMessage, parsed: RunOutput }" while the
implementation returns only this.structuredResponse (RunOutput); update or
remove that misleading comment to reflect the actual return shape (RunOutput
only) so the comment matches the code in the withStructuredOutput method of
FakeToolCallingChatModel.

Comment on lines +286 to +326
export class MemorySaverAssertImmutable extends MemorySaver {
  storageForCopies: Record<string, Record<string, Uint8Array>> = {};

  constructor() {
    super();
    this.storageForCopies = {};
  }

  async put(
    config: RunnableConfig,
    checkpoint: Checkpoint,
    metadata: CheckpointMetadata,
  ): Promise<RunnableConfig> {
    const thread_id = config.configurable?.thread_id;
    this.storageForCopies[thread_id] ??= {};

    // assert checkpoint hasn't been modified since last written
    const saved = await this.get(config);
    if (saved) {
      const savedId = saved.id;
      if (this.storageForCopies[thread_id][savedId]) {
        const [, serializedSaved] = await this.serde.dumpsTyped(saved);
        const serializedCopy = this.storageForCopies[thread_id][savedId];

        // Compare Uint8Array contents by converting to string
        const savedStr = new TextDecoder().decode(serializedSaved);
        const copyStr = new TextDecoder().decode(serializedCopy);
        if (savedStr !== copyStr) {
          throw new Error(
            `Checkpoint [${savedId}] has been modified since last written`,
          );
        }
      }
    }
    const [ , serializedCheckpoint ] = await this.serde.dumpsTyped(checkpoint);
    // save a copy of the checkpoint
    this.storageForCopies[thread_id][checkpoint.id] = serializedCheckpoint;

    return super.put(config, checkpoint, metadata);
  }
}

⚠️ Potential issue | 🟡 Minor

thread_id can be undefined and collapse all copies into one bucket.
If config.configurable?.thread_id is missing, everything writes under the "undefined" key, weakening the immutability assertion. Consider throwing when absent, and compare bytes directly (not TextDecoder strings) to avoid encoding artifacts.

Possible adjustment
   async put(
     config: RunnableConfig,
     checkpoint: Checkpoint,
     metadata: CheckpointMetadata,
   ): Promise<RunnableConfig> {
     const thread_id = config.configurable?.thread_id;
-    this.storageForCopies[thread_id] ??= {};
+    if (!thread_id) throw new Error('Missing configurable.thread_id');
+    this.storageForCopies[thread_id] ??= {};

@@
-        const savedStr = new TextDecoder().decode(serializedSaved);
-        const copyStr = new TextDecoder().decode(serializedCopy);
-        if (savedStr !== copyStr) {
+        if (
+          serializedSaved.length !== serializedCopy.length ||
+          serializedSaved.some((b, i) => b !== serializedCopy[i])
+        ) {
           throw new Error(
             `Checkpoint [${savedId}] has been modified since last written`,
           );
         }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:
export class MemorySaverAssertImmutable extends MemorySaver {
  storageForCopies: Record<string, Record<string, Uint8Array>> = {};

  constructor() {
    super();
    this.storageForCopies = {};
  }

  async put(
    config: RunnableConfig,
    checkpoint: Checkpoint,
    metadata: CheckpointMetadata,
  ): Promise<RunnableConfig> {
    const thread_id = config.configurable?.thread_id;
    this.storageForCopies[thread_id] ??= {};

    // assert checkpoint hasn't been modified since last written
    const saved = await this.get(config);
    if (saved) {
      const savedId = saved.id;
      if (this.storageForCopies[thread_id][savedId]) {
        const [, serializedSaved] = await this.serde.dumpsTyped(saved);
        const serializedCopy = this.storageForCopies[thread_id][savedId];

        // Compare Uint8Array contents by converting to string
        const savedStr = new TextDecoder().decode(serializedSaved);
        const copyStr = new TextDecoder().decode(serializedCopy);
        if (savedStr !== copyStr) {
          throw new Error(
            `Checkpoint [${savedId}] has been modified since last written`,
          );
        }
      }
    }
    const [ , serializedCheckpoint ] = await this.serde.dumpsTyped(checkpoint);
    // save a copy of the checkpoint
    this.storageForCopies[thread_id][checkpoint.id] = serializedCheckpoint;

    return super.put(config, checkpoint, metadata);
  }
}

After:
export class MemorySaverAssertImmutable extends MemorySaver {
  storageForCopies: Record<string, Record<string, Uint8Array>> = {};

  constructor() {
    super();
    this.storageForCopies = {};
  }

  async put(
    config: RunnableConfig,
    checkpoint: Checkpoint,
    metadata: CheckpointMetadata,
  ): Promise<RunnableConfig> {
    const thread_id = config.configurable?.thread_id;
    if (!thread_id) throw new Error('Missing configurable.thread_id');
    this.storageForCopies[thread_id] ??= {};

    // assert checkpoint hasn't been modified since last written
    const saved = await this.get(config);
    if (saved) {
      const savedId = saved.id;
      if (this.storageForCopies[thread_id][savedId]) {
        const [, serializedSaved] = await this.serde.dumpsTyped(saved);
        const serializedCopy = this.storageForCopies[thread_id][savedId];

        // Compare Uint8Array contents byte-by-byte
        if (
          serializedSaved.length !== serializedCopy.length ||
          serializedSaved.some((b, i) => b !== serializedCopy[i])
        ) {
          throw new Error(
            `Checkpoint [${savedId}] has been modified since last written`,
          );
        }
      }
    }
    const [ , serializedCheckpoint ] = await this.serde.dumpsTyped(checkpoint);
    // save a copy of the checkpoint
    this.storageForCopies[thread_id][checkpoint.id] = serializedCheckpoint;

    return super.put(config, checkpoint, metadata);
  }
}
🤖 Prompt for AI Agents
In @core/langchain-decorator/test/utils.ts around lines 286 - 326, In
MemorySaverAssertImmutable.put, ensure you fail fast if
config.configurable?.thread_id is missing (throw a clear error instead of using
the "undefined" bucket) and use the actual Uint8Array byte comparison rather
than decoding to strings: locate the thread_id usage in
MemorySaverAssertImmutable.put, validate and throw when thread_id is falsy,
ensure this.storageForCopies[thread_id] is initialized, and when comparing saved
vs stored copies (after calling this.serde.dumpsTyped(saved) and
this.serde.dumpsTyped(checkpoint)) compare the Uint8Array contents directly
(byte-by-byte or via length+every-byte equality) instead of using TextDecoder.
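The byte-level comparison the bot proposes can be factored into a small helper (a sketch, not part of the PR):

```typescript
// Compare two Uint8Array values byte-by-byte, avoiding TextDecoder
// round-trips that can mask differences through lossy decoding.
function bytesEqual(a: Uint8Array, b: Uint8Array): boolean {
  if (a.length !== b.length) return false;
  for (let i = 0; i < a.length; i++) {
    if (a[i] !== b[i]) return false;
  }
  return true;
}

console.log(bytesEqual(new Uint8Array([1, 2]), new Uint8Array([1, 2]))); // true
console.log(bytesEqual(new Uint8Array([1, 2]), new Uint8Array([1, 3]))); // false
```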
