refactor: migrate NativeOllamaHandler to AI SDK #11355
base: main
Conversation
Replace direct `ollama` npm package usage with `ollama-ai-provider-v2`
AI SDK community provider, following the same pattern as other migrated
providers (DeepSeek, LM Studio, etc.).
Changes:
- Use `createOllama` from `ollama-ai-provider-v2` instead of `Ollama` class
- Replace `client.chat({ stream: true })` with `streamText()` + `processAiSdkStreamPart()` (see the sketch after this list)
- Replace `client.chat({ stream: false })` with `generateText()`
- Use `convertToAiSdkMessages` instead of custom `convertToOllamaMessages`
- Use AI SDK's built-in tool support via `convertToolsForAiSdk`
- Pass `num_ctx` via `providerOptions.ollama`
- Preserve Ollama-specific error handling (ECONNREFUSED, 404)
- Override `isAiSdkProvider()` to return true
- Remove unused `convertToOllamaMessages` function
- Remove `ollama` npm package (no longer used)
- Add `ollama-ai-provider-v2` package
- Update all tests to mock AI SDK instead of ollama package
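For illustration, here is a minimal sketch of the streaming path described in the list above. It assumes the Vercel AI SDK's `streamText` API and `createOllama` from `ollama-ai-provider-v2`; the real handler additionally routes the message history through `convertToAiSdkMessages` and each stream part through `processAiSdkStreamPart`, repo helpers that are omitted here, and the `baseURL` value is illustrative.

```ts
import { streamText } from "ai"
import { createOllama } from "ollama-ai-provider-v2"

// Sketch of the migrated streaming path. In the real handler the Anthropic-shaped
// history is converted with convertToAiSdkMessages() and each stream part is mapped
// with processAiSdkStreamPart(); both are omitted here to keep the sketch self-contained.
async function* streamChatSketch(modelId: string, systemPrompt: string, userText: string) {
	// createOllama replaces the old `new Ollama({ host })` client.
	const ollama = createOllama({ baseURL: "http://localhost:11434/api" })

	const result = streamText({
		model: ollama(modelId),
		system: systemPrompt,
		messages: [{ role: "user", content: userText }],
	})

	// fullStream yields low-level parts (text deltas, reasoning, tool calls, usage);
	// the real handler translates each one into the app's own stream chunks.
	for await (const part of result.fullStream) {
		yield part
	}
}
```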
Found 2 issues related to how provider options are passed to `ollama-ai-provider-v2` (see the review comments below).
Mention @roomote in a comment to request specific changes to this pull request or fix all unresolved issues.
```ts
tools: aiSdkTools,
toolChoice: mapToolChoice(metadata?.tool_choice),
...(this.options.ollamaNumCtx !== undefined && {
	providerOptions: { ollama: { num_ctx: this.options.ollamaNumCtx } } as any,
```
`num_ctx` is at the wrong nesting level. The `ollama-ai-provider-v2` schema (`ollamaProviderOptions`) expects `num_ctx` inside a nested `options` object, not directly on the `ollama` object. The schema is: `{ think?: boolean, options?: { num_ctx?: number, ... } }`. As written, `num_ctx` will be stripped by Zod validation (or cause a validation error in Zod v4 strict mode), so the context window override is silently lost. The same issue applies in `completePrompt` on line 126.
Suggested change:
```diff
- providerOptions: { ollama: { num_ctx: this.options.ollamaNumCtx } } as any,
+ providerOptions: { ollama: { options: { num_ctx: this.options.ollamaNumCtx } } } as any,
```
Fix it with Roo Code or mention @roomote and request a fix.
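For completeness, one way to avoid repeating the nesting fix in both `createMessage` and `completePrompt` would be a small helper. This is a hypothetical sketch, not code from the PR; only the nested `options` shape comes from the schema quoted above.

```ts
// Hypothetical helper, not from the PR: centralizes the providerOptions nesting so
// createMessage and completePrompt (line 126) cannot drift apart.
function buildOllamaProviderOptions(numCtx?: number) {
	return numCtx !== undefined
		? { providerOptions: { ollama: { options: { num_ctx: numCtx } } } }
		: {}
}

// Usage (illustrative): spread into either call site.
// streamText({ model, messages, ...buildOllamaProviderOptions(this.options.ollamaNumCtx) })
// generateText({ model, prompt, ...buildOllamaProviderOptions(this.options.ollamaNumCtx) })
```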
```diff
- const { id: modelId } = await this.fetchModel()
+ await this.fetchModel()
+ const { id: modelId } = this.getModel()
  const useR1Format = modelId.toLowerCase().includes("deepseek-r1")
```
`useR1Format` is computed here but only used for temperature. The old code used `TagMatcher` to parse `<think>...</think>` tags from the response content, which worked regardless of Ollama API settings. Since `TagMatcher` is removed, the provider now needs to emit reasoning-delta events, but `ollama-ai-provider-v2` defaults `think` to `false` and only returns structured thinking data when `think: true` is explicitly passed via `providerOptions.ollama.think`. Without it, DeepSeek R1 reasoning will appear as raw `<think>` tags in plain text output instead of being separated into reasoning chunks. This also affects `completePrompt`. The fix is to include `think: true` in `providerOptions.ollama` when `useR1Format` is `true`.
Fix it with Roo Code or mention @roomote and request a fix.
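A hedged sketch of that fix, folding in the nested `num_ctx` shape from the previous comment: the `think` and `options` keys come from the `ollamaProviderOptions` schema quoted above, while the helper name and surrounding call shape are illustrative and not from the PR.

```ts
// Hypothetical helper, not from the PR: builds providerOptions.ollama for a model,
// enabling structured thinking for DeepSeek R1 and nesting num_ctx under `options`.
function ollamaProviderOptionsFor(modelId: string, numCtx?: number) {
	const useR1Format = modelId.toLowerCase().includes("deepseek-r1")
	const ollamaOptions = {
		...(useR1Format && { think: true }),
		...(numCtx !== undefined && { options: { num_ctx: numCtx } }),
	}
	return Object.keys(ollamaOptions).length > 0
		? { providerOptions: { ollama: ollamaOptions } }
		: {}
}

// Usage (illustrative):
// streamText({ model: ollama(modelId), messages, ...ollamaProviderOptionsFor(modelId, this.options.ollamaNumCtx) })
// With think: true, ollama-ai-provider-v2 should surface R1 reasoning as reasoning
// stream parts instead of inline <think> tags in the plain text output.
```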
Summary
Replace the direct `ollama` npm package usage in `NativeOllamaHandler` with the `ollama-ai-provider-v2` AI SDK community provider, following the same pattern as already-migrated providers like DeepSeek, LM Studio, etc.

Changes

Provider (`src/api/providers/native-ollama.ts`)
- Use `createOllama` from `ollama-ai-provider-v2` instead of the `Ollama` class from the `ollama` package
- Replace `client.chat({ stream: true })` with `streamText()` + `processAiSdkStreamPart()`
- Replace `client.chat({ stream: false })` with `generateText()`
- Use `convertToAiSdkMessages` instead of the custom `convertToOllamaMessages` function
- Convert tools with `convertToolsForAiSdk` / `mapToolChoice`
- Pass `num_ctx` via `providerOptions.ollama`
- Override `isAiSdkProvider()` to return `true`
- Remove the `convertToOllamaMessages` function (no longer needed)

Dependencies (`src/package.json`)
- Add the `ollama-ai-provider-v2` package
- Remove the `ollama` npm package (no longer used by any file)

Tests (`src/api/providers/__tests__/native-ollama.spec.ts`)
- Mock the AI SDK (`streamText`, `generateText`) instead of the `ollama` package

Testing

`cd src && npx vitest run api/providers/__tests__/native-ollama.spec.ts` passes 16/16.

Important

Refactor `NativeOllamaHandler` to use the `ollama-ai-provider-v2` AI SDK, updating provider logic, tests, and dependencies.

Provider (`native-ollama.ts`):
- Replace the `Ollama` class with `createOllama` from `ollama-ai-provider-v2`.
- Use `streamText` and `generateText` for text generation.
- Use `convertToAiSdkMessages` and `convertToolsForAiSdk` for message and tool conversion.
- Override `isAiSdkProvider()` to return `true`.

Dependencies (`package.json`):
- Add `ollama-ai-provider-v2`.
- Remove the `ollama` package.

Tests (`native-ollama.spec.ts`):
- Mock the AI SDK (`streamText`, `generateText`) instead of `ollama`.

This description was created by
for 6d36235. You can customize this summary. It will automatically update as commits are pushed.