
# VaultSearch 🔒

Private, Local, Offline RAG (Retrieval Augmented Generation). Chat with your sensitive documents without data ever leaving your machine.


## 🌟 New in v1.1.0: The Cinematic Update

  • Cinematic UI: Complete frontend overhaul with "Aurora" ambient effects, glassmorphism cards, and interactive hover states.
  • Active Streaming: Custom React hooks to handle high-velocity token streaming without UI jitter.
  • Robust "Dragnet" Search: Backend logic is now structure-agnostic, capable of finding and deleting document vectors regardless of nested metadata schema.
  • Smart Context: Visual "Thinking..." indicators and instant Stop generation controls.
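
The structure-agnostic "Dragnet" delete described above can be sketched as a recursive payload walk. This is an illustrative sketch only, not the repository's actual code — the function names and payload shapes are assumptions:

```python
def payload_matches(payload, key, value):
    """Recursively walk an arbitrarily nested payload (dicts/lists) and
    report whether any field named `key` equals `value`. This makes the
    match structure-agnostic: it works whether the source filename lives
    at payload["source"], payload["metadata"]["source"], or deeper."""
    if isinstance(payload, dict):
        for k, v in payload.items():
            if k == key and v == value:
                return True
            if payload_matches(v, key, value):
                return True
    elif isinstance(payload, list):
        return any(payload_matches(item, key, value) for item in payload)
    return False

def ids_to_delete(points, filename):
    """Collect the ids of vectors whose payload mentions `filename`
    anywhere, regardless of the metadata schema that produced them."""
    return [p["id"] for p in points
            if payload_matches(p["payload"], "source", filename)]
```

Because the walk never assumes a fixed schema, documents ingested under older metadata layouts can still be found and deleted.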

## 🏗 Architecture

  • Brain: Llama 3 (via Ollama)
  • Memory: Qdrant (Vector Database) with Self-Healing Volume Logic
  • Backend: FastAPI + LangChain (Crash-proof startup)
  • Frontend: Next.js 14 + Tailwind (Glassmorphism UI)
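
Wired together with Docker Compose, the stack typically looks like the following. This is an illustrative sketch — the service names, build paths, ports, and volume name are assumptions, not the repository's actual `docker-compose.yml`:

```yaml
services:
  qdrant:
    image: qdrant/qdrant
    ports: ["6333:6333"]
    volumes:
      - qdrant_data:/qdrant/storage   # persistent vector storage
  backend:
    build: ./backend                  # FastAPI + LangChain
    ports: ["8000:8000"]
    depends_on: [qdrant]
  frontend:
    build: ./frontend                 # Next.js 14 + Tailwind
    ports: ["3000:3000"]
volumes:
  qdrant_data:                        # named volume backing the "self-healing" storage
```

Ollama runs on the host rather than in the stack, so the backend reaches it over the host network on port 11434.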

## 🚀 Quick Start

### Prerequisites

  1. Docker Desktop (Running)
  2. Ollama (Running locally on port 11434)
    • `ollama run llama3`
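
Before launching the stack, you can check that Ollama is reachable. A minimal sketch — `/api/tags` is Ollama's model-listing endpoint on its default port:

```python
import urllib.request
import urllib.error

def ollama_is_up(url: str = "http://localhost:11434/api/tags",
                 timeout: float = 2.0) -> bool:
    """Return True if a local Ollama server answers on its default port."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("Ollama is up" if ollama_is_up()
          else "Ollama not reachable on :11434")
```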

### Installation

1. Clone the repo

```shell
git clone https://github.com/vadhh/vaultsearch.git
cd vaultsearch
```

2. Launch the stack

```shell
docker-compose up --build
```

Then open http://localhost:3000.

### Usage

  • Upload a PDF via the "Knowledge Base" sidebar.

  • Ask questions.

  • The system strictly cites sources and page numbers in its answers.

## 📦 Release History

v1.1.0 - UI Overhaul, Robust "Dragnet" Delete Logic, Stop Button.

v1.0.0 - Initial Release. Dockerized RAG pipeline.

## 🤝 Contributing

Pull requests are welcome. For major changes, please open an issue first.
