
Conversation

@daniel-lxs (Member) commented Feb 9, 2026

Summary

Replace the direct `ollama` npm package usage in `NativeOllamaHandler` with the `ollama-ai-provider-v2` AI SDK community provider, following the same pattern as already-migrated providers such as DeepSeek and LM Studio.

Changes

Provider (src/api/providers/native-ollama.ts)

  • Use `createOllama` from `ollama-ai-provider-v2` instead of the `Ollama` class from the `ollama` package
  • Replace `client.chat({ stream: true })` with `streamText()` + `processAiSdkStreamPart()` (see the sketch after this list)
  • Replace `client.chat({ stream: false })` with `generateText()`
  • Use `convertToAiSdkMessages` instead of the custom `convertToOllamaMessages` function
  • Use the AI SDK's built-in tool support via `convertToolsForAiSdk` / `mapToolChoice`
  • Pass `num_ctx` via `providerOptions.ollama`
  • Preserve Ollama-specific error handling (ECONNREFUSED → "service not running", 404 → "model not found")
  • Override `isAiSdkProvider()` to return true
  • Remove the custom `convertToOllamaMessages` function (no longer needed)
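
For orientation, here is a minimal sketch of the migrated streaming path. The helper signatures are stubs inferred from how this PR names them, the base URL is an assumed default, and the nested `options` placement follows the review feedback below rather than the code as pushed:

```ts
import { streamText } from "ai"
import { createOllama } from "ollama-ai-provider-v2"

// Repo helpers referenced in this PR; signatures here are illustrative stubs.
declare function convertToAiSdkMessages(messages: unknown[]): any[]
declare function processAiSdkStreamPart(part: unknown): Generator<unknown>

const ollama = createOllama({ baseURL: "http://localhost:11434/api" }) // assumed default endpoint

async function* createMessageSketch(
	modelId: string,
	systemPrompt: string,
	messages: unknown[],
	numCtx?: number,
) {
	const result = streamText({
		model: ollama(modelId),
		system: systemPrompt,
		messages: convertToAiSdkMessages(messages),
		// Forward num_ctx through provider options; nesting per the review below.
		...(numCtx !== undefined && {
			providerOptions: { ollama: { options: { num_ctx: numCtx } } } as any,
		}),
	})
	// The handler iterates the AI SDK's unified stream and maps each part
	// (text deltas, tool calls, usage) to the extension's chunk format.
	for await (const part of result.fullStream) {
		yield* processAiSdkStreamPart(part)
	}
}
```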

Dependencies (src/package.json)

  • Add the `ollama-ai-provider-v2` package
  • Remove the `ollama` npm package (no longer used by any file)

Tests (src/api/providers/__tests__/native-ollama.spec.ts)

  • Rewrite all tests to mock the AI SDK (`streamText`, `generateText`) instead of the `ollama` package (a mock-setup sketch follows this list)
  • All 16 tests pass
  • Full test suite (5381 tests) passes with no regressions
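
A hedged sketch of that mock setup with vitest — the fixture shapes are illustrative (stream-part field names differ between AI SDK versions), not the exact fixtures in `native-ollama.spec.ts`:

```ts
import { vi } from "vitest"

vi.mock("ai", async (importOriginal) => {
	const actual = await importOriginal<typeof import("ai")>()
	return {
		...actual,
		// streamText returns an object whose fullStream the handler iterates.
		streamText: vi.fn(() => ({
			fullStream: (async function* () {
				yield { type: "text-delta", textDelta: "Hello" } // field names vary by AI SDK version
			})(),
		})),
		// generateText resolves to a plain result for the non-streaming path.
		generateText: vi.fn(async () => ({ text: "Hello" })),
	}
})
```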

Testing

  • `cd src && npx vitest run api/providers/__tests__/native-ollama.spec.ts` — 16/16 pass
  • Full pre-push hook passes (lint, type-check, all tests)

@dosubot added the `size:XL` label (this PR changes 500-999 lines, ignoring generated files) on Feb 9, 2026
@roomote (Contributor) commented Feb 9, 2026

Found 2 issues related to how `providerOptions.ollama` is structured for `ollama-ai-provider-v2`.

  • `num_ctx` is at the wrong nesting level -- it should be `providerOptions.ollama.options.num_ctx`, not `providerOptions.ollama.num_ctx` (affects both `createMessage` and `completePrompt`)
  • Missing `think: true` in `providerOptions.ollama` for DeepSeek R1 models -- without it, the TagMatcher removal causes reasoning to appear as raw `<think>` tags in the text output


```ts
tools: aiSdkTools,
toolChoice: mapToolChoice(metadata?.tool_choice),
...(this.options.ollamaNumCtx !== undefined && {
	providerOptions: { ollama: { num_ctx: this.options.ollamaNumCtx } } as any,
```

`num_ctx` is at the wrong nesting level. The `ollama-ai-provider-v2` schema (`ollamaProviderOptions`) expects `num_ctx` inside a nested `options` object, not directly on the `ollama` object. The schema is `{ think?: boolean, options?: { num_ctx?: number, ... } }`. As written, `num_ctx` will be stripped by Zod validation (or cause a validation error in Zod v4 strict mode), so the context-window override is silently lost. The same issue applies in `completePrompt` on line 126.

Suggested change:

```diff
- providerOptions: { ollama: { num_ctx: this.options.ollamaNumCtx } } as any,
+ providerOptions: { ollama: { options: { num_ctx: this.options.ollamaNumCtx } } } as any,
```


```diff
- const { id: modelId } = await this.fetchModel()
+ await this.fetchModel()
+ const { id: modelId } = this.getModel()
  const useR1Format = modelId.toLowerCase().includes("deepseek-r1")
```

`useR1Format` is computed here but only used for temperature. The old code used TagMatcher to parse `<think>...</think>` tags from the response content, which worked regardless of Ollama API settings. Since TagMatcher is removed, the provider now needs to emit reasoning-delta events -- but `ollama-ai-provider-v2` defaults `think` to false and only returns structured thinking data when `think: true` is explicitly passed via `providerOptions.ollama.think`. Without it, DeepSeek R1 reasoning will appear as raw `<think>` tags in plain-text output instead of being separated into reasoning chunks. This also affects `completePrompt`. The fix is to include `think: true` in `providerOptions.ollama` when `useR1Format` is true.
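
Putting both comments together, a sketch of the provider-options shape they imply — assuming the `ollamaProviderOptions` schema quoted above, with `ollamaNumCtx` standing in for `this.options.ollamaNumCtx`:

```ts
declare const modelId: string // e.g. "deepseek-r1:14b"
declare const ollamaNumCtx: number | undefined

const useR1Format = modelId.toLowerCase().includes("deepseek-r1")

const providerOptions = {
	ollama: {
		// Ask Ollama for structured thinking so reasoning arrives as reasoning deltas.
		...(useR1Format && { think: true }),
		// num_ctx belongs inside the nested options object, not on ollama directly.
		...(ollamaNumCtx !== undefined && { options: { num_ctx: ollamaNumCtx } }),
	},
}
```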

