Local-first RAG for personal knowledge management.
LSM ingests local documents, builds embeddings, retrieves relevant context, and produces cited answers with configurable LLM providers.
Current version: 0.6.0
This project is maintained for personal use first.
- Pull requests are not accepted. See `CONTRIBUTING.md` for how to help.
- Bugs and feature requests are welcome as GitHub issues.
- Until v1.0.0, breaking changes can happen between releases, especially in the configuration schema and interfaces.
- Pin versions and review `docs/CHANGELOG.md` before upgrading (one way to pin is sketched below).
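Since LSM is installed from source, one way to pin is to check out a release tag before installing. The tag name below is an assumption about the repository's tagging scheme; verify against the actual tags first.

```bash
# ASSUMPTION: releases are tagged "vX.Y.Z"; confirm with `git tag` before using.
git checkout v0.6.0
pip install -e .
```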
Recent changes:

- Compact TUI layout with density modes (`auto`, `compact`, `comfortable`) for small terminals (80x24)
- Split modular CSS and refactored Settings screen with MVC architecture and ViewModel state
- Interactive agent approvals/replies with multi-agent runtime support and real-time log streaming
- TUI startup under 1 second with lazy background ML initialization
- Session-completed agent history, unified log formats, and refined agent screen UX
- Hardened TUI test infrastructure with fast/slow marker split and global state reset fixtures
Quickstart:

- Install: `pip install -e .`
- Copy config: `cp example_config.json config.json`
- Add API keys to `.env` (see `.env.example`).
- Build embeddings: `lsm ingest build`
- Start the TUI: `lsm`
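Put together, a first run might look like the session below. Copying `.env.example` to `.env` is just one convenient starting point; the key names it expects are defined in that file, not here.

```bash
# From the repository root: install LSM in editable mode
pip install -e .

# Create a local config from the shipped example, then edit it
cp example_config.json config.json

# Start .env from the template and fill in your provider API keys
# (the expected variable names are listed in .env.example)
cp .env.example .env

# Build the embedding index over the roots configured in config.json
lsm ingest build

# Launch the TUI
lsm
```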
Example `config.json`:

```json
{
"global": {
"global_folder": "C:/Users/You/Local Second Mind",
"embed_model": "sentence-transformers/all-MiniLM-L6-v2",
"device": "cpu",
"batch_size": 32
},
"ingest": {
"roots": ["C:/Users/You/Documents"]
},
"llms": {
"providers": [{ "provider_name": "openai" }],
"services": {
"default": { "provider": "openai", "model": "gpt-5.2" }
}
}
}
```

CLI commands:

- `lsm` - launch the TUI
- `lsm ingest build [--dry-run] [--force] [--skip-errors]`
- `lsm ingest tag [--max N]`
- `lsm ingest wipe --confirm`
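A few example invocations. The flag behaviors described in the comments are the conventional readings of these names, not confirmed here, so check the CLI's own help output before relying on them.

```bash
# Preview what a build would do without writing anything (conventional --dry-run)
lsm ingest build --dry-run

# Rebuild even if artifacts exist, and continue past files that fail to parse
lsm ingest build --force --skip-errors

# Tag at most 50 items in this pass
lsm ingest tag --max 50

# Delete the ingested index; --confirm is required as a safety check
lsm ingest wipe --confirm
```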
Global flags:
- `--config path/to/config.json`
- `--verbose`
- `--log-level DEBUG|INFO|WARNING|ERROR|CRITICAL`
- `--log-file path/to/lsm.log`
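Global flags combine with any subcommand. Placing them before the subcommand, as below, is an assumption about how the parser is wired; adjust the order if the CLI's help output says otherwise.

```bash
# Run a build with an explicit config, DEBUG logging, and a log file
lsm --config path/to/config.json --log-level DEBUG --log-file path/to/lsm.log ingest build
```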
Documentation:

- `docs/README.md`
- `docs/user-guide/GETTING_STARTED.md`
- `docs/user-guide/CONFIGURATION.md`
- `docs/user-guide/QUERY_MODES.md`
- `docs/user-guide/NOTES.md`
- `docs/user-guide/REMOTE_SOURCES.md`
- `.agents/docs/INDEX.md`
- `docs/CHANGELOG.md`
License: MIT (see `LICENSE`).