Private, local, offline RAG (Retrieval-Augmented Generation). Chat with your sensitive documents without data ever leaving your machine.
- Cinematic UI: Complete frontend overhaul with "Aurora" ambient effects, glassmorphism cards, and interactive hover states.
- Active Streaming: Custom React hooks to handle high-velocity token streaming without UI jitter.
- Robust "Dragnet" Search: Backend logic is now structure-agnostic, capable of finding and deleting document vectors regardless of nested metadata schema.
- Smart Context: Visual "Thinking..." indicators and an instant "Stop generation" control.
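The "Dragnet" idea above can be sketched as a recursive payload walk: instead of assuming the document name lives at one fixed key, search every level of a point's metadata. This is an illustrative sketch, not the project's actual code; the function names and payload shapes (`source`, `metadata`) are assumptions.

```python
from typing import Any

def payload_contains(payload: Any, key: str, value: Any) -> bool:
    """Recursively check whether key == value appears anywhere in a
    point's payload, however deeply the metadata is nested."""
    if isinstance(payload, dict):
        if payload.get(key) == value:
            return True
        return any(payload_contains(v, key, value) for v in payload.values())
    if isinstance(payload, list):
        return any(payload_contains(item, key, value) for item in payload)
    return False

def find_doc_points(points: list[dict], filename: str) -> list[int]:
    """Collect the ids of every vector whose payload mentions the file,
    whether it sits at payload['source'], payload['metadata']['source'],
    or somewhere deeper."""
    return [
        p["id"] for p in points
        if payload_contains(p.get("payload", {}), "source", filename)
    ]
```

Against a real Qdrant collection you would page through points with the client's scroll API and pass the collected ids to a delete call; the schema-agnostic matching itself stays the same.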
- Brain: Llama 3 (via Ollama)
- Memory: Qdrant (Vector Database) with Self-Healing Volume Logic
- Backend: FastAPI + LangChain (Crash-proof startup)
- Frontend: Next.js 14 + Tailwind (Glassmorphism UI)
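A "crash-proof startup" in a Docker Compose stack usually means the API retries its vector-store connection instead of dying when Qdrant comes up a few seconds late. A minimal sketch, assuming a `connect` callable standing in for the real Qdrant client constructor:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def connect_with_retry(connect: Callable[[], T],
                       attempts: int = 5,
                       base_delay: float = 0.5) -> T:
    """Retry a flaky connection with exponential backoff so the API
    container survives the vector database starting after it does."""
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("unreachable")
```

In FastAPI this would typically run once during application startup (e.g. in a lifespan handler) before the first request is served.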
- Docker Desktop (Running)
- Ollama (Running locally on port 11434)
ollama run llama3
git clone https://github.com/vadhh/vaultsearch.git
cd vaultsearch
docker-compose up --build
- Open http://localhost:3000.
- Upload a PDF via the "Knowledge Base" sidebar.
- Ask questions.
- The system will strictly cite sources and page numbers.
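Strict citations come down to carrying source and page metadata through retrieval and rendering it next to the answer. A hypothetical sketch of that last step (the chunk shape with `source` and `page` keys is an assumption, not the project's actual schema):

```python
def format_citations(chunks: list[dict]) -> str:
    """Render '[source, p. N]' citations from retrieved chunks,
    de-duplicated and kept in retrieval order."""
    seen: list[tuple] = []
    for chunk in chunks:
        ref = (chunk["source"], chunk["page"])
        if ref not in seen:
            seen.append(ref)
    return "; ".join(f"[{src}, p. {page}]" for src, page in seen)
```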
v1.1.0 - UI Overhaul, Robust "Dragnet" Delete Logic, Stop Button.
v1.0.0 - Initial Release. Dockerized RAG pipeline.
Pull requests are welcome. For major changes, please open an issue first.