A dependency visualization and multi-checker verification tool for Lean 4 projects. LeanDepViz extracts declaration dependencies and provides a unified framework for running multiple independent verification tools, giving you defense-in-depth assurance for your Lean code.
- Multi-Checker Verification (v0.3.1): Run multiple independent verifiers (LeanParanoia, lean4checker, SafeVerify) with unified reporting
- Defense in Depth: Different checkers catch different issues - policy violations, kernel corruption, statement changes
- Interactive Viewer: Sortable table with per-tool columns, embedded dependency graph, failures sorted to top
- Smart Filtering: By default, keeps only declarations from your project's root modules, producing manageable graph sizes
- Multiple Output Formats: Generate DOT files, JSON for verification, or render directly to SVG/PNG via Graphviz
- Unified Report Format: Easy to integrate new verification tools with consistent JSON schema
Try the interactive viewer with real data:
- Verification Demo ⭐ Recommended
  - 17 declarations with 3 verification tool columns
  - Shows defense-in-depth: LeanParanoia, lean4checker, SafeVerify
  - Sortable columns, embedded graph, multi-tool results
  - Note: demonstration data showcasing UI capabilities (the SafeVerify column uses mock data)
- Exchangeability project
  - Real-world example: ~800 declarations from a probability theory formalization
  - Interactive dependency graph exploration
- Strong Prime Number Theorem formalization
  - 1129 declarations, spanning complex analysis and number theory
  - Demonstrates large-scale project visualization
- LeanParanoia test suite
  - Complete test suite: 14 declarations with 2-tool verification
  - Real verification data from LeanParanoia + lean4checker
  - Shows various exploit categories: custom axioms, sorry, unsafe code, partial functions
- Basic demo
  - Simple 3-declaration example
  - Good starting point for understanding the viewer
Browse all demos: https://cameronfreer.github.io/LeanDepViz/demos.html
Load your own files: https://cameronfreer.github.io/LeanDepViz/
The verification demo uses test files from LeanParanoia demonstrating various verification scenarios:
Interactive Table View:
17 declarations with 3 verification tool columns. Red ✗ = failed, Green ✓ = passed, — = not checked.
Dependency Graph:
Dependency relationships between test declarations from LeanParanoia's test suite.
Examples include:
- ✅ `good_theorem`, `simple_theorem` - Pass all checks
- ❌ `bad_axiom` - Custom axiom (caught by all 3 tools)
- ❌ `exploit_theorem` - Uses unsafe code (caught by all 3 tools)
- ❌ Various sorry, unsafe, and partial function violations
Important: This demo showcases the multi-checker UI with demonstration data. The verification results are based on known issues in the LeanParanoia test files, with SafeVerify results being mock data (SafeVerify requires comparing two versions, but these test files exist in only one version). The demo illustrates what a complete multi-tool verification workflow would look like.
Credit: Example declarations from LeanParanoia test suite
Add to your project's lakefile.lean:
```lean
require LeanDepViz from git
  "https://github.com/cameronfreer/LeanDepViz.git" @ "main"
```

Then:

```bash
lake update LeanDepViz
lake build depviz
```

Clone this repository and build:

```bash
git clone https://github.com/cameronfreer/LeanDepViz.git
cd LeanDepViz
lake build
```

From your project directory:
```bash
lake exe depviz --roots MyProject --json-out depgraph.json --dot-out depgraph.dot
```

This creates:

- `depgraph.json` - Machine-readable format for verification (for Table View)
- `depgraph.dot` - Graphviz format for visualization (for Graph View)
If you want to verify code quality with LeanParanoia:
```bash
# Install Python dependencies
pip install pyyaml

# Copy example policy
cp .lake/packages/LeanDepViz/examples/policy.yaml ./my-policy.yaml

# Edit my-policy.yaml to match your project structure

# Run checks
python .lake/packages/LeanDepViz/scripts/paranoia_runner.py \
  --depgraph depgraph.json \
  --policy my-policy.yaml \
  --out paranoia_report.json \
  --jobs 8
```

Open the interactive viewer:
```bash
# Copy viewer to your project (one-time)
cp .lake/packages/LeanDepViz/viewer/paranoia-viewer.html ./

# Open in browser
open paranoia-viewer.html

# Then load your files:
# - In Table View: load depgraph.json and (optionally) paranoia_report.json
# - In Graph View: load depgraph.dot for the visual graph
```

Want to see what LeanDepViz produces? Check out the examples:
Note: To view examples locally, use a local web server instead of opening files directly:
```bash
python serve.py  # Then open http://localhost:8000/docs/
```

This is required for the Graph View to work correctly (browser security restrictions).
examples/output/ - Complete output from the Exchangeability project (probability theory formalization with ~800 declarations):
- Interactive viewer: Live Demo
- All formats: JSON, DOT, SVG, PNG, and embedded HTML
- See `examples/output/README.md` for details about each format and file sizes
examples/leanparanoia-tests/ - Comprehensive demo with multiple verification tools:
🎯 Verification Demo (Recommended) - 12 declarations verified by 3 tools
Shows defense-in-depth verification with:
- LeanParanoia: Policy enforcement
- lean4checker: Kernel replay
- SafeVerify: Reference vs implementation
Results: ✅ 2 Pass (all tools) | ❌ 10 Fail (various exploits caught by multiple checkers)
Example Categories:
- 🔴 Custom Axioms | 🟡 Sorry Usage | 🟠 Unsafe Code | 🟣 Partial Functions | 🟢 Valid Code
Legacy Demos (single-tool):
- 📊 Basic Demo - 3 declarations (LeanParanoia only)
- 📈 All Examples - 12 declarations (LeanParanoia only)
See examples/leanparanoia-tests/README.md for details
Generate a filtered dependency graph:
```bash
lake exe depviz --roots MyProject --dot-out depgraph.dot
```

Include additional module prefixes:

```bash
lake exe depviz --roots MyProject --include-prefix Std,Init --dot-out depgraph.dot
```

Generate both DOT and JSON:

```bash
lake exe depviz --roots MyProject --dot-out depgraph.dot --json-out depgraph.json
```

Render directly to SVG/PNG (requires Graphviz):

```bash
lake exe depviz --roots MyProject --svg-out depgraph.svg --png-out depgraph.png
```

Options:

- `--roots <name>`: Project root name(s) for filtering (required)
- `--dot-out <file>`: Output DOT file path
- `--json-out <file>`: Output JSON file path (for LeanParanoia integration)
- `--svg-out <file>`: Output SVG file path (requires Graphviz)
- `--png-out <file>`: Output PNG file path (requires Graphviz)
- `--include-prefix <prefixes>`: Comma-separated list of additional module prefixes to include
- `--keep-all`: Disable filtering entirely (include all declarations)
For defense-in-depth verification, run multiple checkers and merge their results:
```bash
# 1. Extract dependency graph
lake exe depviz --roots MyProject --json-out depgraph.json --dot-out depgraph.dot

# 2. Run individual checkers
python scripts/paranoia_runner.py --depgraph depgraph.json --policy policy.yaml --out paranoia.json
python scripts/lean4checker_adapter.py --depgraph depgraph.json --out kernel.json
python scripts/safeverify_adapter.py --depgraph depgraph.json --target-dir /path/to/baseline --submit-dir .lake/build --out safeverify.json

# 3. Merge results
python scripts/merge_reports.py --reports paranoia.json kernel.json safeverify.json --out unified.json

# 4. View in interactive viewer
python scripts/embed_data.py --viewer viewer/paranoia-viewer.html --depgraph depgraph.json --dot depgraph.dot --report unified.json --output review.html
open review.html
```

See MULTI_CHECKER.md for complete documentation.
LeanParanoia enforces source-level policies: no sorry, approved axioms only, no unsafe/extern/partial functions.
Note: Currently has version compatibility constraints with Mathlib that are being addressed. Works well for standalone projects. See examples/leanparanoia-tests/ for examples.
What it checks:
- Sorry usage
- Custom axioms (only standard axioms allowed)
- Unsafe declarations
- Partial/non-terminating functions
- Extern implementations
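For orientation, here is a minimal sketch of Lean declarations that would trip each of these checks. The names echo declarations mentioned elsewhere in this README, but the exact bodies are illustrative and not taken from the test suite:

```lean
-- Illustrative only: small declarations that trigger each LeanParanoia check.

-- Custom axiom: rejected unless listed in allowed_axioms.
axiom bad_axiom : ∀ (p : Prop), p

-- Sorry: incomplete proof, flagged wherever "sorry" is forbidden.
theorem sorry_theorem : 1 + 1 = 2 := by sorry

-- Unsafe declaration: flagged by the unsafe check.
unsafe def unsafeAddImpl (a b : Nat) : Nat := a + b

-- Partial function: termination is not verified.
partial def spin (n : Nat) : Nat := spin n

-- Passes: standard axioms only, no sorry, no unsafe/partial.
theorem good_theorem : 2 + 2 = 4 := rfl
```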
lean4checker independently replays your proof environment in the Lean kernel to verify correctness.
What it checks:
- Environment integrity (detects corrupted `.olean` files)
- Kernel-level correctness (independent verification)
- Declaration validity (type-checks proof terms)

What it does NOT check:

- ⚠️ `sorry` usage (sorry is a valid kernel axiom, `sorryAx`)
- ⚠️ Custom axioms (axioms are valid kernel constructs)
- ⚠️ Policy violations (that's what LeanParanoia does)
Why it's valuable: Provides independent verification that your compiled proof environment is sound. If code compiles correctly, lean4checker will typically pass. Its value is in defense-in-depth - catching issues the main compiler might miss due to bugs or corruption.
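To make the defense-in-depth point concrete, here is a hypothetical snippet: the kernel replays it without complaint because `sorryAx` is a legitimate axiom from the kernel's perspective, while a LeanParanoia zone that forbids `sorry` rejects it.

```lean
-- Hypothetical example: passes lean4checker, fails a no-sorry policy.
theorem kernel_ok_policy_fail (n : Nat) : n + 0 = n := by
  sorry
```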
Usage:
```bash
python scripts/lean4checker_adapter.py --depgraph depgraph.json --out kernel.json
```

Use the `--fresh` flag for thorough checking including imports (slower but more comprehensive).
SafeVerify compares a reference/specification version against an implementation to ensure they match.
What it checks:
- Statement equality (theorems haven't changed)
- No extra axioms in implementation
- No unsafe/partial in implementation vs reference
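As a hypothetical illustration of the statement-equality check, suppose the reference and implementation builds contain these two versions of the same theorem. Both compile, but SafeVerify would report a mismatch because the statement was weakened:

```lean
-- Reference (baseline) version:
theorem sum_comm (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- Implementation version: same name, weaker statement, so SafeVerify flags it.
theorem sum_comm (a b : Nat) : a + b = a + b := rfl
```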
Usage:
```bash
# Build baseline reference
git checkout main && lake build
mv .lake/build /tmp/target_build

# Build implementation
git checkout feature-branch && lake build

# Compare
python scripts/safeverify_adapter.py \
  --depgraph depgraph.json \
  --target-dir /tmp/target_build \
  --submit-dir .lake/build \
  --out safeverify.json
```

Perfect for PR reviews and verifying AI-generated code.
LeanDepViz provides a Python wrapper for running LeanParanoia checks.
1. Add LeanParanoia to your project (in `lakefile.lean`):

   ```lean
   require paranoia from git "https://github.com/oOo0oOo/LeanParanoia.git" @ "main"
   ```

2. Update and build dependencies:

   ```bash
   lake update paranoia
   lake build paranoia
   ```

   Note: If you get toolchain mismatches, sync your `lean-toolchain` with Mathlib:

   ```bash
   cp .lake/packages/mathlib/lean-toolchain ./lean-toolchain
   lake clean
   lake build
   ```

3. Install Python dependencies:

   ```bash
   pip install pyyaml
   ```

4. Generate the dependency graph:

   ```bash
   lake exe depviz --roots MyProject --json-out depgraph.json
   ```

5. Create a policy file (copy and customize an example):

   ```bash
   cp .lake/packages/LeanDepViz/examples/policy.yaml ./my-policy.yaml
   ```

   Edit `my-policy.yaml` to define zones matching your project structure.

6. Run policy checks:

   ```bash
   # Recommended: Use --summary-only for large projects (smaller output)
   python .lake/packages/LeanDepViz/scripts/paranoia_runner.py \
     --depgraph depgraph.json \
     --policy my-policy.yaml \
     --out paranoia_report.json \
     --summary-only \
     --jobs 8

   # Or without --summary-only for detailed output (larger files)
   ```

   Performance tip: The `--summary-only` flag captures only error summaries instead of full output, reducing report size from gigabytes to megabytes for large projects.

7. View results in the interactive viewer:

   ```bash
   open paranoia-viewer.html
   # Load depgraph.json and paranoia_report.json in the browser
   ```
The repository includes three example policy configurations in the examples/ directory:
- `policy.yaml`: Balanced policy for production code with multiple zones
- `policy-strict.yaml`: Ultra-strict (constructive-only, no Classical.choice)
- `policy-dev.yaml`: Lenient policy for development (only tracks `sorry`)
Edit these files to define zones with different verification requirements for your project.
```yaml
zones:
  - name: "Zone Name"
    include: ["MyProject.Module.**"]      # Which modules to check
    exclude: ["MyProject.Module.Skip"]    # Exceptions
    allowed_axioms:                       # Whitelist of axioms
      - "propext"
      - "Classical.choice"
      - "Quot.sound"
    forbid:                               # What to forbid
      - "sorry"
      - "metavariables"
      - "unsafe"
    trust_modules:                        # Don't re-verify these
      - "Std"
      - "Mathlib"
```

For each declaration, LeanParanoia can check:
- Sorry/Admit: Incomplete proofs
- Axioms: Use of axioms beyond your whitelist
- Metavariables: Partially elaborated terms
- Unsafe: Unsafe constructs
- Extern: External (FFI) declarations
Each zone in your policy can have different rules.
The viewer/paranoia-viewer.html file provides a web-based interface with two viewing modes:
Table View:

- Browse all declarations with verification status
- Filter by pass/fail, zone, or search text
- See detailed error messages for failing checks
- Identify which axioms are used by each declaration
- Click any row for detailed information

Graph View:

- Visual dependency graph rendered from DOT files
- Interactive zoom and pan
- See the full structure of your project
- Load with: `lake exe depviz --roots YourProject --dot-out graph.dot`
No server required - pure client-side JavaScript that works offline.
Try it live: https://cameronfreer.github.io/LeanDepViz/
Visit the URL, load your JSON files, and explore your project's dependencies. All processing happens in your browser - no data is sent to any server.
Share your depgraph.json and paranoia_report.json files via GitHub Gist or email. Recipients can load them into the hosted viewer.
Create a self-contained HTML file with your data embedded:
```bash
python .lake/packages/LeanDepViz/scripts/embed_data.py \
  --depgraph depgraph.json \
  --dot depgraph.dot \
  --report paranoia_report.json \
  --output my-project-report.html
```

This creates a single HTML file that:
- Contains all your data embedded (JSON graph + DOT graph + verification report)
- Both Table View and Graph View work without file uploads
- Can be opened directly in any browser
- Can be hosted on GitHub Pages, Vercel, Netlify, or any static hosting
- Requires no server - works completely offline
Share the HTML file or host it anywhere!
Graphviz is required for SVG/PNG output formats.
```bash
brew install graphviz          # macOS
sudo apt-get install graphviz  # Debian/Ubuntu
sudo dnf install graphviz      # Fedora
sudo pacman -S graphviz        # Arch
choco install graphviz         # Windows
```

Or download the MSI installer from graphviz.org.
If you generate a .dot file, you can render it manually with Graphviz:
```bash
dot -Tsvg depgraph.dot -o depgraph.svg
dot -Tpng depgraph.dot -o depgraph.png
```

Or open it with visualization tools like:
- Graphviz Online
- VS Code extensions (e.g., Graphviz Preview)
- Desktop viewers (e.g., xdot, gvedit)
```bash
# 1. Add LeanDepViz to your project's lakefile.lean

# 2. Update dependencies
lake update LeanDepViz
lake build depviz

# 3. Generate graph
lake exe depviz --roots MyProject --json-out depgraph.json

# 4. (Optional) Set up verification
lake update paranoia
lake build paranoia
pip install pyyaml

# 5. Create and customize policy
cp .lake/packages/LeanDepViz/examples/policy.yaml ./my-policy.yaml
# Edit my-policy.yaml with your project's module structure

# 6. Run verification
python .lake/packages/LeanDepViz/scripts/paranoia_runner.py \
  --depgraph depgraph.json \
  --policy my-policy.yaml \
  --out report.json \
  --jobs 8

# 7. View results
cp .lake/packages/LeanDepViz/viewer/paranoia-viewer.html ./
open paranoia-viewer.html
```

Quick sorry check with the lenient dev policy:

```bash
python scripts/paranoia_runner.py \
  --policy examples/policy-dev.yaml \
  --depgraph depgraph.json \
  --out report.json
```

The exit code will be 1 if any sorries are found - perfect for CI.
```bash
python scripts/paranoia_runner.py \
  --policy examples/policy-strict.yaml \
  --depgraph depgraph.json \
  --out strict-report.json
```

See which theorems require Classical.choice.
Edit policy.yaml to define zones with different rules for core vs. experimental code.
- Parallel execution: Use `--jobs` equal to your CPU count
- Trust Mathlib/Std: Always include them in `trust_modules`
- Filter the graph: Only check relevant modules
- Use dev policy first: Quick sorry check before full verification
Build the executable:

```bash
lake build depviz
```

Toolchain mismatch? Sync with Mathlib:

```bash
cp .lake/packages/mathlib/lean-toolchain ./lean-toolchain
lake clean
lake build
```

Install PyYAML:

```bash
pip install pyyaml
```

If verification is slow:

- Use more parallel jobs: `--jobs 16`
- Ensure `trust_modules` includes Std and Mathlib
The filtering logic:
- Node Filtering: Keeps only declarations from modules matching your specified root prefix(es)
- Edge Filtering: Removes edges where either endpoint was filtered out
- Consistent Output: Ensures the resulting DOT/JSON references only existing nodes
Default behavior produces graphs with hundreds to thousands of nodes instead of millions, making them practical to visualize with standard tools.
The JSON output includes full declaration names, module paths, kinds (theorem/def), and metadata about sorry, axioms, and unsafe constructs - everything needed for policy-based verification.
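A minimal sketch of this prefix-based filtering in Lean (illustrative only; the names `keepDecl` and `keepEdge` are hypothetical and may not match the actual LeanDepViz implementation):

```lean
import Lean
open Lean

/-- Node filtering: keep a declaration when its module matches a root prefix. -/
def keepDecl (roots : List Name) (declModule : Name) : Bool :=
  roots.any (fun root => root.isPrefixOf declModule)

/-- Edge filtering: keep an edge only if both endpoints survived node filtering. -/
def keepEdge (kept : NameSet) (src tgt : Name) : Bool :=
  kept.contains src && kept.contains tgt

#eval keepDecl [`MyProject] `MyProject.Basic    -- true
#eval keepDecl [`MyProject] `Mathlib.Topology   -- false
```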
The demo files (like leanparanoia-examples-all.html) use real verification data generated from test projects. To regenerate the "All LeanParanoia Examples" demo:
Use the test-driven build script for reproducible generation:
```bash
./scripts/generate_all_examples.sh
```

This script:
- Creates a temporary test project with example files from `examples/leanparanoia-tests/`
- Builds the project with Lean 4.27.0-rc1
- Generates dependency graph (JSON + DOT formats)
- Runs LeanParanoia verification with policy checks
- Runs lean4checker for kernel replay verification
- Merges reports into unified JSON format
- Validates the unified report (structure, data integrity, expected results)
- Generates interactive HTML viewer with embedded data
- Copies outputs to `docs/` and `examples/leanparanoia-tests/`

Prerequisites:

- Lean 4.27.0-rc1 (via elan): `elan default leanprover/lean4:v4.27.0-rc1`
- Python 3: For adapter scripts and validation
- LeanDepViz built: `lake build` in this repository
If you want to understand or customize the process:
```bash
# 1. Create test project
mkdir /tmp/leanparanoia-test && cd /tmp/leanparanoia-test

# 2. Setup project files
cat > lakefile.lean <<EOF
import Lake
open Lake DSL

package LeanTestProject where
  version := v!"0.1.0"

require LeanDepViz from git "https://github.com/CameronFreer/LeanDepViz.git" @ "main"

@[default_target]
lean_lib LeanTestProject where
EOF

echo "leanprover/lean4:v4.27.0-rc1" > lean-toolchain

# 3. Copy example files
mkdir -p LeanTestProject
cp $REPO/examples/leanparanoia-tests/{Basic,ProveAnything,SorryDirect,PartialNonTerminating,UnsafeDefinition,ValidSimple}.lean LeanTestProject/
cp $REPO/examples/leanparanoia-tests/policy.yaml .

# 4. Build project
lake update && lake build

# 5. Generate dependency graph
lake exe depviz --roots LeanTestProject --json-out depgraph.json --dot-out depgraph.dot

# 6. Run verifiers
python3 $REPO/scripts/paranoia_runner.py \
  --depgraph depgraph.json \
  --policy policy.yaml \
  --out paranoia-report.json

python3 $REPO/scripts/lean4checker_adapter.py \
  --depgraph depgraph.json \
  --out lean4checker-report.json

# 7. Merge reports
python3 $REPO/scripts/merge_reports.py \
  --reports paranoia-report.json lean4checker-report.json \
  --out unified-report.json

# 8. Validate
python3 $REPO/scripts/validate_unified_report.py \
  --report unified-report.json \
  --expect-tests

# 9. Generate HTML
python3 $REPO/scripts/embed_data.py \
  --viewer $REPO/viewer/paranoia-viewer.html \
  --depgraph depgraph.json \
  --dot depgraph.dot \
  --report unified-report.json \
  --output $REPO/docs/leanparanoia-examples-all.html
```

The validate_unified_report.py script checks:
- Structure: Required fields, correct types
- Data Integrity: Declaration counts match across tools
- Expected Results: Known test cases produce correct pass/fail outcomes
- No Hardcoded Paths: No machine-specific paths in output
For the LeanTestProject test suite, validation ensures:
- 2 theorems pass all checks (`good_theorem`, `simple_theorem`)
- 8 declarations fail with expected errors:
  - 2 custom axiom violations (`bad_axiom`, `magic`)
  - 2 sorry violations (`sorry_theorem`, `partial_theorem`)
  - 4 unsafe code violations (`unsafeAddImpl`, `unsafeProof`, `seeminglySafeAdd`, `unsafe_theorem`)
- All 14 declarations pass lean4checker (kernel replay doesn't catch policy violations)
To add new example files:
- Add the `.lean` file to `examples/leanparanoia-tests/`
- Update `generate_all_examples.sh` to include the new file
- Run the generation script
- Commit the updated outputs
Contributions are welcome! Please:
- Open an issue for bugs or feature requests
- Submit pull requests with clear descriptions
- Follow the existing code style
- Add tests for new features
- LeanParanoia - Policy-driven verifier
- lean4checker - Environment replay tool
- SafeVerify - Reference vs implementation comparison
- Graphviz - Graph visualization software