A framework for verifiable reasoning with language models.
Updated Feb 13, 2026 - Python
A framework for detecting hallucinations in LLM chain-of-thought reasoning. Features synthetic data corruption, transformer-based classifiers, a Streamlit UI, and a FastAPI backend.
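As a rough illustration of the synthetic-data-corruption idea mentioned above, one common approach is to perturb a single step in an otherwise valid chain of thought, yielding a labeled negative example for a classifier. The sketch below is hypothetical (the function name `corrupt_step` and the numeric-perturbation strategy are assumptions, not this repository's actual implementation):

```python
import random
import re

def corrupt_step(steps, rng=random.Random(0)):
    """Return (corrupted_steps, idx): a copy of the reasoning chain with
    one numeric value perturbed in step `idx`.

    Hypothetical sketch of synthetic CoT corruption; the repository's
    real corruption strategies may differ.
    """
    idx = rng.randrange(len(steps))
    corrupted = list(steps)

    def bump(match):
        # Shift the number by a small nonzero offset so the step is wrong.
        return str(int(match.group()) + rng.choice([-2, -1, 1, 2]))

    corrupted[idx] = re.sub(r"\d+", bump, steps[idx], count=1)
    return corrupted, idx

steps = ["2 + 2 = 4", "4 * 3 = 12", "12 - 5 = 7"]
corrupted, idx = corrupt_step(steps)
```

The corrupted chain, paired with the index of the broken step, could then serve as a training example for a step-level hallucination classifier.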