
test(providers): add tests for workspace-provider #102

Draft
Dexploarer wants to merge 1 commit into develop from workspace-provider-tests-3082123732170428619

Conversation

@Dexploarer
Owner

This PR adds a comprehensive test suite for src/providers/workspace-provider.ts.

The workspace-provider is critical for injecting project context and coding agent status into the LLM prompt. These tests ensure that:

  1. Context building logic correctly handles file truncation and total size limits.
  2. Missing or empty files are skipped gracefully.
  3. Coding agent summaries are correctly formatted with errors, feedback, and command results.
  4. The provider's caching mechanism works as expected (verified via vi.useFakeTimers).
  5. Error handling is robust.

To support testing the module-level cache, a _resetCacheForTest helper was added and marked as internal. This allows tests to run in isolation without cache pollution.
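
The shape of that helper is roughly as follows; this is a minimal sketch assuming a module-level Map keyed by workspace directory, and every name other than `_resetCacheForTest` is illustrative rather than taken from the codebase:

```typescript
// Sketch of a module-level cache with a test-only reset helper.
// Only `_resetCacheForTest` is named in this PR; the rest is assumed.
interface WorkspaceBootstrapFile {
  path: string;
  content: string;
}

const CACHE_TTL_MS = 60_000;
const workspaceCache = new Map<string, { files: WorkspaceBootstrapFile[]; at: number }>();

function getCachedFiles(dir: string, now: number): WorkspaceBootstrapFile[] | undefined {
  const entry = workspaceCache.get(dir);
  if (!entry) return undefined;
  if (now - entry.at > CACHE_TTL_MS) {
    // Entry is stale; drop it so the caller reloads from disk.
    workspaceCache.delete(dir);
    return undefined;
  }
  return entry.files;
}

/** @internal Clears the module-level cache so tests run in isolation. */
function _resetCacheForTest(): void {
  workspaceCache.clear();
}
```

Without such a helper, one test's cached entries can leak into the next, which is why the PR exposes it despite it being internal-only.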


PR created automatically by Jules for task 3082123732170428619 started by @Dexploarer

- Created `src/providers/workspace-provider.test.ts` with 15 test cases covering:
  - `truncate` utility
  - `buildContext` logic (including truncation and missing file handling)
  - `buildCodingAgentSummary` formatting
  - `createWorkspaceProvider` integration (caching, error handling, metadata enrichment)
- Exported internal `_resetCacheForTest` helper in `src/providers/workspace-provider.ts` to facilitate reliable testing of the module-level cache.
- Ensured tests pass using `npx vitest`.
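
For reference, the `truncate` utility being exercised is of this general shape; the real signature and truncation marker in `workspace-provider.ts` may differ, so treat this as a sketch:

```typescript
// Illustrative truncate: cap a file's contribution to the prompt at
// maxChars, appending a marker so the model knows content was cut.
function truncate(text: string, maxChars: number): string {
  if (text.length <= maxChars) return text;
  return text.slice(0, maxChars) + "\n[truncated]";
}
```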
@google-labs-jules

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@coderabbitai

coderabbitai bot commented Feb 20, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions github-actions bot added the tests label Feb 20, 2026
@gemini-code-assist

Summary of Changes

Hello @Dexploarer, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a thorough test suite for the workspace-provider module, which is vital for constructing LLM prompts with relevant project context and coding agent status. The new tests validate core functionalities such as file content truncation, handling of missing or empty files, accurate formatting of coding agent summaries, and the integrity of the caching mechanism. These additions significantly enhance the robustness and maintainability of this critical component.

Highlights

  • Comprehensive Test Suite: Added a comprehensive test suite for workspace-provider.ts to ensure its reliability and correctness.
  • Context Building Logic: Verified that context building logic correctly handles file truncation and total size limits, and gracefully skips missing or empty files.
  • Coding Agent Summaries: Confirmed that coding agent summaries are correctly formatted, including errors, feedback, and recent command results.
  • Caching Mechanism: Validated the provider's caching mechanism, ensuring proper cache invalidation over time.
  • Testability Improvement: Improved testability by adding an internal _resetCacheForTest helper to clear the module-level cache, enabling isolated test runs.

🧠 New Feature in Public Preview: You can now enable Memory to help Gemini Code Assist learn from your team's feedback. This makes future code reviews more consistent and personalized to your project's style. Click here to enable Memory in your admin console.

Changelog
  • src/providers/workspace-provider.test.ts
    • Added tests for the truncate utility function.
    • Added tests for the buildContext function, covering file inclusion, skipping, truncation, and total size limits.
    • Added tests for the buildCodingAgentSummary function, verifying basic summary generation, error inclusion, feedback inclusion, and recent command display.
    • Added tests for the createWorkspaceProvider function, including workspace file loading, caching behavior, error handling, coding agent context enrichment, and subagent session filtering.
  • src/providers/workspace-provider.ts
    • Added an internal _resetCacheForTest function to clear the module's cache, facilitating isolated testing.
Activity
  • The pull request was automatically created by Jules for a task started by @Dexploarer.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Comment on lines +266 to +274
it("handles errors gracefully", async () => {
  (loadWorkspaceBootstrapFiles as any).mockRejectedValue(new Error("Disk error"));
  const provider = createWorkspaceProvider();

  const result = await provider.get({} as any, { metadata: {} } as any, {} as any);

  expect(result?.text).toContain("[Workspace context unavailable: Disk error]");
  expect(logger.warn).toHaveBeenCalledWith(expect.stringContaining("Failed to load workspace context"));
});


The test for error handling ("handles errors gracefully") only asserts that logger.warn is called with a string containing 'Failed to load workspace context', but does not verify that the actual error message (e.g., 'Disk error') is included in the log output. This could allow regressions where error details are omitted from logs, reducing observability and making debugging more difficult.

Recommended solution:
Extend the assertion to check that the logged warning includes the specific error message:

expect(logger.warn).toHaveBeenCalledWith(expect.stringContaining("Disk error"));

Comment on lines +246 to +264
it("caches files for CACHE_TTL_MS", async () => {
  vi.useFakeTimers();
  const provider = createWorkspaceProvider();

  // First call
  await provider.get({} as any, { metadata: {} } as any, {} as any);
  expect(loadWorkspaceBootstrapFiles).toHaveBeenCalledTimes(1);

  // Second call immediately
  await provider.get({} as any, { metadata: {} } as any, {} as any);
  expect(loadWorkspaceBootstrapFiles).toHaveBeenCalledTimes(1);

  // Advance time past TTL (60s)
  vi.advanceTimersByTime(61_000);

  // Third call
  await provider.get({} as any, { metadata: {} } as any, {} as any);
  expect(loadWorkspaceBootstrapFiles).toHaveBeenCalledTimes(2);
});


In the cache TTL test ("caches files for CACHE_TTL_MS"), the test only verifies the number of times loadWorkspaceBootstrapFiles is called, but does not check that the returned content is actually refreshed after the TTL expires. This could miss subtle bugs where the cache is not properly invalidated, and stale data is returned even after the TTL.

Recommended solution:
After advancing the timer and making the third call, also assert that the returned content reflects the new data (e.g., by changing the mock return value before the third call and verifying the result):

(loadWorkspaceBootstrapFiles as any).mockResolvedValueOnce(newFiles);
// ...
const result = await provider.get(...);
expect(result?.text).toContain("new content");
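
The point generalizes: asserting only call counts cannot distinguish a refreshed cache from a stale one. A minimal standalone sketch of the behavior worth asserting, using hypothetical names and an injectable clock in place of vi.useFakeTimers:

```typescript
// A TTL cache around a loader, with time supplied by the caller so the
// stale-vs-fresh boundary can be exercised deterministically.
type Loader = () => string;

function makeCachedLoader(load: Loader, ttlMs: number, now: () => number) {
  let cached: { value: string; at: number } | undefined;
  return () => {
    const t = now();
    // Within the TTL window, serve the cached value without reloading.
    if (cached && t - cached.at <= ttlMs) return cached.value;
    cached = { value: load(), at: t };
    return cached.value;
  };
}
```

Changing the loader's return value before the post-TTL call, then asserting on the returned content, catches an invalidation bug that a call-count assertion alone would miss.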

Comment on lines 33 to 35
string,
{ files: WorkspaceBootstrapFile[]; at: number }
>();


Potential Data Race in Cache Structure

The cache is defined as a module-level variable. Note that Node.js worker threads each load their own copy of a module, so a shared-memory data race on this Map is unlikely; the more realistic risk is interleaved async access within a single event loop leaving the cache in an inconsistent state. If concurrent access is a concern, consider a safer cache design or scoping the cache per agent instance.

Recommended solution:

  • If concurrency is expected, use synchronization primitives or a thread-safe cache library.
  • Otherwise, document that this module is not safe for concurrent access.
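
If per-instance scoping were chosen, it could look roughly like this; an illustrative sketch with assumed names, not the module's actual design:

```typescript
// Each provider instance owns its cache, so no state is shared across
// instances or leaked between tests.
class WorkspaceCache<T> {
  private entries = new Map<string, { value: T; at: number }>();
  constructor(private ttlMs: number) {}

  get(key: string, now: number): T | undefined {
    const entry = this.entries.get(key);
    // Misses and stale entries both report undefined to the caller.
    if (!entry || now - entry.at > this.ttlMs) return undefined;
    return entry.value;
  }

  set(key: string, value: T, now: number): void {
    this.entries.set(key, { value, at: now });
  }
}

function createProviderWithOwnCache() {
  // The cache lives in the instance, not at module scope.
  return { cache: new WorkspaceCache<string[]>(60_000) };
}
```

With this layout, `_resetCacheForTest` would become unnecessary: each test simply constructs a fresh provider.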

Comment on lines 43 to 44
const MAX_CACHE_ENTRIES = 20;


Cache Size Limitation May Impact Performance

The MAX_CACHE_ENTRIES constant is set to 20, which may be insufficient for environments with many workspace directories. This could cause frequent cache eviction and reduce the effectiveness of caching, leading to increased file loading and degraded performance.

Recommended solution:

  • Consider making MAX_CACHE_ENTRIES configurable based on expected workload.
  • Monitor cache hit/miss rates and adjust the value accordingly.
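
One way to make the bound configurable is an environment override with a validated fallback; a sketch in which the variable name `WORKSPACE_CACHE_MAX_ENTRIES` is an assumption, not part of the codebase:

```typescript
// Resolve the cache bound from an optional override, falling back to the
// current default of 20 when the value is missing or invalid.
const DEFAULT_MAX_CACHE_ENTRIES = 20;

function resolveMaxCacheEntries(raw: string | undefined): number {
  const parsed = Number(raw);
  return Number.isInteger(parsed) && parsed > 0 ? parsed : DEFAULT_MAX_CACHE_ENTRIES;
}

// Hypothetical env var; the cast avoids requiring Node type definitions here.
const MAX_CACHE_ENTRIES = resolveMaxCacheEntries(
  (globalThis as any).process?.env?.WORKSPACE_CACHE_MAX_ENTRIES,
);
```

Rejecting non-positive or non-numeric values keeps a misconfigured environment from silently disabling the cache.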


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

The pull request introduces a comprehensive test suite for the workspace-provider, covering critical logic such as context building, truncation, and coding agent summary generation. The tests are well-structured and use appropriate mocking and fake timers. The code adheres to standard TypeScript best practices. I've identified a few minor issues in the test file, including a redundant mock and some misleading comments/test names that could affect future maintainability.

Comment on lines +17 to +24
vi.mock("@elizaos/core", () => ({
  logger: {
    warn: vi.fn(),
    info: vi.fn(),
    error: vi.fn(),
  },
  isSubagentSessionKey: vi.fn(),
}));


medium

The mock for isSubagentSessionKey is redundant here. This function is not used by the workspace-provider directly, and since workspace.js is also mocked, its usage within the real filterBootstrapFilesForSession is not exercised in these tests.

vi.mock("@elizaos/core", () => ({
  logger: {
    warn: vi.fn(),
    info: vi.fn(),
    error: vi.fn(),
  },
}));

expect(result).not.toContain("EMPTY.md");
});

it("respects MAX_TOTAL_WORKSPACE_CHARS (soft limit check via truncation)", () => {


medium

The test name is slightly misleading. MAX_TOTAL_WORKSPACE_CHARS refers to the global hard limit for the entire context, whereas this test is verifying the maxChars parameter which acts as a per-file soft limit.

    it("respects maxCharsPerFile (soft limit check via truncation)", () => {

Comment on lines +188 to +190
// This one is before the iteration started (should be ignored? logic says: f.timestamp > lastIteration.startedAt)
// If it's old feedback, it might be ignored if we consider it "addressed".
// The code filters: return f.timestamp > lastIteration.startedAt;


medium

This comment is incorrect. The feedback timestamp (start + 100) is actually after the iteration started (start - 1000). The test correctly verifies that this feedback is included in the summary, but the comment suggests it should be ignored.

           // This feedback is submitted after the iteration started, so it should be included.
