
Conversation

@mik-tf (Contributor) commented Dec 20, 2025

Summary

  • Adds comprehensive Ollama integration for running AI models locally, providing users with a privacy-focused, zero-cost alternative to cloud APIs.
  • Closes #11 (Add Local AI Support)

Key Features

  • Multi-Provider Architecture: Switching between Google Gemini (cloud) and Ollama (local); see the provider-interface sketch after this list
  • Complete Ollama Integration: Automatic service detection, model management, and tool calling support
  • Enhanced UI: Provider selection with dynamic model lists and contextual settings
  • Robust Implementation: OpenAI-compatible API with error handling and conversation persistence
  • Comprehensive Documentation: Step-by-step setup guides for both native and Docker Ollama installations
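
For readers skimming the diff, here is a minimal sketch of how such a provider switch could look in Go. The interface, type, and function names are illustrative, not the PR's actual identifiers in app.go or ollama.go:

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// Message is one conversation turn.
type Message struct {
	Role    string `json:"role"`    // "system", "user", or "assistant"
	Content string `json:"content"`
}

// Provider abstracts a chat backend so the app can switch between
// Google Gemini (cloud) and Ollama (local).
type Provider interface {
	// Chat sends the prompt plus prior turns and returns the reply.
	Chat(ctx context.Context, messages []Message) (string, error)
	// Models lists the models this provider currently offers.
	Models(ctx context.Context) ([]string, error)
}

// ollamaProvider and geminiProvider stand in for the PR's real
// implementations; bodies are stubbed in this sketch.
type ollamaProvider struct{ baseURL string }
type geminiProvider struct{ apiKey string }

func (o *ollamaProvider) Chat(ctx context.Context, m []Message) (string, error) {
	return "", errors.New("not implemented in this sketch")
}
func (o *ollamaProvider) Models(ctx context.Context) ([]string, error) { return nil, nil }

func (g *geminiProvider) Chat(ctx context.Context, m []Message) (string, error) {
	return "", errors.New("not implemented in this sketch")
}
func (g *geminiProvider) Models(ctx context.Context) ([]string, error) { return nil, nil }

// newProvider picks a backend from a settings value such as "gemini" or "ollama".
func newProvider(name string) Provider {
	if name == "ollama" {
		return &ollamaProvider{baseURL: "http://localhost:11434"} // Ollama's default port
	}
	return &geminiProvider{} // cloud fallback
}

func main() {
	p := newProvider("ollama")
	fmt.Printf("selected backend: %T\n", p)
}
```

Keeping both backends behind one interface is what lets the Settings UI expose a single provider dropdown while the app swaps implementations underneath.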

Files Changed

  • app.go - Provider switching and Ollama initialization
  • ollama.go - Ollama provider implementation with tool calling; a request sketch follows this list
  • Settings.svelte - UI for provider/model selection
  • Docs.svelte - Complete Ollama setup documentation
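
As a point of reference for ollama.go, here is a self-contained sketch of the kind of request an OpenAI-compatible integration would send. The `/v1/chat/completions` path and port 11434 are Ollama's documented defaults; the struct names and the `llama3.2` model are assumptions for illustration:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// chatRequest and chatResponse mirror the subset of the OpenAI chat
// schema that Ollama's /v1/chat/completions endpoint accepts.
type chatRequest struct {
	Model    string        `json:"model"`
	Messages []chatMessage `json:"messages"`
}

type chatMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatResponse struct {
	Choices []struct {
		Message chatMessage `json:"message"`
	} `json:"choices"`
}

func main() {
	req := chatRequest{
		Model: "llama3.2", // any model already pulled locally
		Messages: []chatMessage{
			{Role: "user", Content: "Say hello in one sentence."},
		},
	}
	body, _ := json.Marshal(req)

	// Ollama listens on 11434 by default; /v1/... is its OpenAI-compatible surface.
	resp, err := http.Post("http://localhost:11434/v1/chat/completions",
		"application/json", bytes.NewReader(body))
	if err != nil {
		// In the PR, a failure like this is what automatic service
		// detection would surface to the UI.
		fmt.Println("Ollama not reachable:", err)
		return
	}
	defer resp.Body.Close()

	var out chatResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	if len(out.Choices) > 0 {
		fmt.Println(out.Choices[0].Message.Content)
	}
}
```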

Benefits

  • Zero API Costs: Unlimited local AI usage
  • Complete Privacy: Conversations never leave the machine
  • High Performance: Fast, low-latency responses from local processing (given sufficiently capable hardware)
  • Easy Setup: Simple installation with comprehensive guides

Tests

  • Basic tests only, as I don't have a strong AI-ready machine locally

@mik-tf (Contributor, Author) commented Dec 21, 2025

Converting this to a draft; I will open another PR without the merge conflicts.

mik-tf marked this pull request as draft on December 21, 2025 at 17:00