A performance measurement tracking tool for Git repositories that stores metrics using git-notes.
git-perf provides a comprehensive solution for tracking and analyzing performance metrics in Git repositories. It stores measurements as git-notes, enabling version-controlled performance tracking alongside your codebase.
Live Example: See the example report for the master branch
- Installation
- Quick Start
- Key Features
- How git-perf Works: Storage and Workflow
- Audit System
- Understanding Audit Output
- Configuration
- Migration
- Remote Setup
- Frequently Asked Questions
- Development
- Documentation
For Linux and macOS:
```bash
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/kaihowl/git-perf/releases/latest/download/git-perf-installer.sh | sh
```

Or install with Cargo:

```bash
cargo install git-perf
```

Download pre-built tarballs for your platform from the latest release:
- Linux x86_64: `git-perf-x86_64-unknown-linux-gnu.tar.xz`
- Linux x86_64 (musl): `git-perf-x86_64-unknown-linux-musl.tar.xz`
- Linux ARM64: `git-perf-aarch64-unknown-linux-gnu.tar.xz`
- macOS ARM64 (Apple Silicon): `git-perf-aarch64-apple-darwin.tar.xz`
All tarballs include SHA256 checksums for verification.
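For example, a download could be verified like this (a sketch; it assumes each checksum is published alongside its tarball as `<tarball>.sha256`; adjust the filename for your platform):

```bash
# Fetch a tarball and its checksum file, then verify (illustrative)
base=https://github.com/kaihowl/git-perf/releases/latest/download
curl -LO "$base/git-perf-x86_64-unknown-linux-gnu.tar.xz"
curl -LO "$base/git-perf-x86_64-unknown-linux-gnu.tar.xz.sha256"
sha256sum -c git-perf-x86_64-unknown-linux-gnu.tar.xz.sha256
```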
```bash
git clone https://github.com/kaihowl/git-perf.git
cd git-perf
cargo install --path .
```

```bash
# Add a performance measurement
git perf add 42.5 -m build_time

# Verify measurement was stored (optional)
git notes --ref=refs/notes/perf-v3 list | head -1

# Audit for performance regressions
git perf audit -m build_time

# Push measurements to remote
git perf push
```

Track test execution times and benchmark results automatically:
```bash
# Import test results (JUnit XML format)
cargo nextest run --profile ci
git perf import junit target/nextest/ci/junit.xml

# Import benchmark results (Criterion JSON format)
cargo criterion --message-format json > bench-results.json
git perf import criterion-json bench-results.json

# Audit for performance regressions
git perf audit --measurement "test::*"
git perf audit --measurement "bench::*"
```

Supported formats:
- JUnit XML - Works with cargo-nextest, pytest, Jest, JUnit, and many other test frameworks
- Criterion JSON - Rust benchmark results with statistical data
See the Importing Measurements Guide for comprehensive documentation.
- Git-notes Integration: Store performance data alongside your Git history
- Statistical Analysis: Advanced audit system with configurable dispersion methods
- Regression Detection: Automated detection of performance changes
- Centralized Collection: Designed for centralized metric gathering (e.g., CI/CD)
- Multiple Formats: Support for data migration between format versions
git-perf uses git-notes, a Git feature for attaching metadata to commits without modifying commit SHAs. Performance measurements are stored in refs/notes/perf-v3 as line-by-line field-separated data.
Storage Format:
- Each measurement is one line in the git note
- Fields are concatenated with a special delimiter
- Format: `{epoch}{name}{timestamp}{value}{key=value pairs...}\n`
- Multiple measurements on the same commit = multiple lines in the note
Example raw format (the field delimiter is a non-printable character, so the fields below appear to run together):

```text
0build_time1702685461.234542.5
0memory_usage1702685461.567256.0
```
When multiple processes add measurements concurrently, git-perf uses the cat_sort_uniq merge strategy:
- All lines from both sides are concatenated
- Lines are sorted
- Duplicates are removed
- This ensures exact deduplication of identical measurements
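The same merge can be reproduced with plain git, as a sketch (the temporary ref name `perf-v3-incoming` is illustrative):

```bash
# Fetch the remote notes into a temporary ref, then merge with the
# same strategy git-perf uses
git fetch origin refs/notes/perf-v3:refs/notes/perf-v3-incoming
git notes --ref=refs/notes/perf-v3 merge --strategy=cat_sort_uniq refs/notes/perf-v3-incoming
```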
In GitHub PRs, git-perf uses first-parent traversal:
- GitHub creates a merge commit when PRs are merged
- git-perf stores measurements on the merge commit (HEAD)
- `--first-parent` follows back through the target branch for historical data
- This means PR measurements attach to the merge commit, not individual feature commits
- Historical lineage comes from the target branch
- Non-invasive: Measurements don't pollute commit history
- Retroactive: Add measurements after commits are made
- Independent sync: Push/pull measurements separately from code
- Centralized collection: CI can add measurements without creating commits
```bash
# List commits with measurements
git notes --ref=refs/notes/perf-v3 list

# View measurements for current commit
git log --show-notes=refs/notes/perf-v3 --oneline -1

# View raw measurement data for specific commit
git notes --ref=refs/notes/perf-v3 show <commit-sha>
```

By default, measurements are added to the current HEAD commit. However, you can target specific commits using the `--commit` flag or positional arguments:
Write Operations (use --commit flag):
```bash
# Add measurement to a specific commit
git perf add 100.5 -m build_time --commit abc123

# Measure at a specific commit
git perf measure -m test_time --commit HEAD~5 -- cargo test

# Import results to a specific commit
git perf import junit results.xml --commit v1.0.0
```

Read Operations (use positional argument):
```bash
# Generate report from a specific commit
git perf report HEAD~10 -o report.html

# Audit measurements at a specific commit
git perf audit v2.0.0 -m critical_test
```

Supported commit formats: full/short SHA, relative refs (`HEAD~5`), branch names, tag names, and symbolic refs.
Common use cases:
- Backfilling historical data: Import past CI results without changing HEAD
- Multi-branch development: Add measurements to feature branches without switching
- Release tracking: Measure performance at specific release points
- Historical analysis: Audit past commits to find when regressions were introduced
See the Non-HEAD Measurements Guide for detailed examples and best practices.
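As a sketch, backfilling might look like the following loop (the input file `historical-build-times.txt` and its `<sha> <value>` line layout are invented for illustration):

```bash
# Replay stored results onto past commits, one measurement per line
while read -r sha value; do
  git perf add "$value" -m build_time --commit "$sha"
done < historical-build-times.txt
```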
- Experimental Status: This tool is experimental and under active development
- Performance Impact: Repeated individual measurements are costly; prefer bulk additions when possible
- Centralized Design: Unlike Git's decentralized model, git-perf assumes centralized metric collection
Unlike Git itself, git-perf does not support decentralized collection of performance measurements. Instead, it assumes a single, central place for metric collection, usually your code forge, e.g., GitHub. As performance measurements become less relevant over time, git-perf allows metrics to be purged. Because a delete in Git still preserves the history before the deletion event, purging requires rewriting history. To make rewriting shared history safe, git-perf deliberately drops some basic ideas of decentralized version control and instead focuses on collecting metrics in a single central location.
git-perf provides helper scripts to migrate existing performance notes between format versions.
| From | To | Script | Target Ref |
|---|---|---|---|
| v1 | v2 | `./migration/to_v2.sh <path-to-your-repo>` | `refs/notes/perf-v2` |
| v2 | v3 | `./migration/to_v3.sh <path-to-your-repo>` | `refs/notes/perf-v3` |
The migration scripts:
- Clone the target repository into a temporary directory
- Transform the notes to the new format
- Commit the changes
- Push to the appropriate notes ref on `origin`
After migration, ensure consumers fetch the new notes ref:
```bash
git fetch origin refs/notes/perf-v3:refs/notes/perf-v3
```

`git perf push`/`git perf pull` automatically use a special remote called `git-perf-origin`. If this remote doesn't exist, git-perf will automatically create it using your `origin` remote's URL.
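The automatic setup is roughly equivalent to this plain-git sketch:

```bash
# Create git-perf-origin from origin's URL if it is missing (illustrative)
git remote get-url git-perf-origin 2>/dev/null \
  || git remote add git-perf-origin "$(git remote get-url origin)"
```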
To use a different remote for performance measurements:
```bash
# Option 1: Set the git-perf-origin remote to a different URL
git remote set-url git-perf-origin git@github.com:org/repo.git

# Option 2: Add a new remote and set git-perf-origin to use it
git remote add perf-upstream git@github.com:org/repo.git
git remote set-url git-perf-origin git@github.com:org/repo.git

# Now git-perf push/pull will use the new remote
git perf push
git perf pull
```

git-perf includes a powerful audit system for detecting performance regressions and improvements. The system uses statistical analysis to identify meaningful performance changes while filtering out noise.
Choose between two statistical methods for calculating dispersion:
| Method | Description | Best For |
|---|---|---|
| Standard Deviation (stddev) | Traditional method, sensitive to outliers | Normally distributed data, stable measurements |
| Median Absolute Deviation (MAD) | Robust method, less sensitive to outliers | Data with outliers, variable environments |
Standard Deviation is ideal when:
- ✅ Performance data is normally distributed
- ✅ You want to detect all changes, including outlier-caused ones
- ✅ You have consistent, stable measurement environments

MAD is recommended when:
- ✅ Performance data has occasional outliers or spikes
- ✅ You want to focus on typical performance changes
- ✅ You're measuring in environments with variable system load
- ✅ You need more robust regression detection
Epochs are boundaries in measurement history that allow you to accept expected performance changes without triggering audit failures.
- Each measurement includes an epoch identifier (default: `0`)
- Epochs are commit SHAs (first 8 characters) configured in `.gitperfconfig`
- When you bump an epoch, the audit system only compares against measurements in the current epoch
- Prior measurements (old epoch) are excluded from statistical comparison
Epochs are stored in .gitperfconfig (version controlled) rather than git notes because:
- Merge visibility: Changes become part of the PR and target branch
- Conflict detection: Multiple authors changing the same measurement's epoch provokes merge conflicts
- Team coordination: Forces explicit discussion when multiple people tune performance
- History tracking: Git history shows who approved which performance changes
This conflict behavior is intentional - it prevents silent overwrites of performance expectations and ensures team visibility.
Scenario: You refactored code and performance legitimately changed.
```bash
# Measurement fails audit (regression detected)
git perf audit -m build_time
# Output: ❌ 'build_time' - HEAD measurement 85.3 is outside acceptable range

# Accept this as expected change
git perf bump-epoch -m build_time

# This updates .gitperfconfig with current commit SHA:
# [measurement."build_time"]
# epoch = "abc12345"  # First 8 chars of HEAD commit

# Commit the configuration change
git add .gitperfconfig
git commit -m "Accept build_time increase from optimization changes"

# Add new measurement (will have the new epoch)
git perf add 85.3 -m build_time

# Now audit passes - only comparing against current epoch measurements
git perf audit -m build_time
# Output: ✅ 'build_time' - Within acceptable range
```

Bump multiple epochs at once:
```bash
git perf bump-epoch -m metric1 -m metric2 -m metric3
```

The audit system filters historical data by epoch:
- Fetch historical measurements from git notes
- Filter to only measurements with the same epoch as HEAD
- Calculate statistical baselines (mean, stddev/MAD) from same-epoch data
- Compare HEAD measurement against these baselines
This allows clean separation between:
- Statistical noise (handled within an epoch)
- Intentional changes (marked by epoch boundaries)
In `.gitperfconfig`:

```toml
[measurement]
epoch = "00000000"  # Global default

[measurement."build_time"]
epoch = "abc12345"  # Per-measurement epoch

[measurement."memory_usage"]
epoch = "def67890"  # Different measurements can have different epochs
```

The audit system requires historical data for statistical comparison:
- Minimum commits: At least `--min-measurements` commits with data (built-in default: 2, recommended: 10)
- Same epoch: Only measurements in the current epoch are used
- First measurement: Audit will skip with an informational message
Note: While the built-in default is 2 measurements, we recommend using at least 10 measurements (via CLI flag or configuration) for reliable statistical analysis.
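For example, to enforce the recommended floor explicitly on the command line:

```bash
# Require at least 10 historical measurements before auditing
git perf audit -m build_time --min-measurements 10
```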
Example workflow:
```bash
# First commit with measurement
git perf add 42.5 -m build_time
git perf audit -m build_time
# Output: ⏭️ 'build_time' - Only 1 measurement found. Less than min_measurements of 10.

# After 10+ commits with measurements in same epoch
git perf audit -m build_time
# Output: ✅ 'build_time' - Statistical comparison with historical data
```

Create a `.gitperfconfig` file in your repository root:
```toml
# Default settings for all measurements (parent table)
[measurement]
min_relative_deviation = 5.0
dispersion_method = "mad"  # Use MAD for all measurements by default
epoch = "00000000"         # Default epoch for performance changes

# Measurement-specific settings (override defaults)
[measurement."build_time"]
min_relative_deviation = 10.0
dispersion_method = "mad"  # Build times can have outliers, use MAD
epoch = "12345678"

[measurement."memory_usage"]
min_relative_deviation = 2.0
dispersion_method = "stddev"  # Memory usage is more stable, use stddev
epoch = "abcdef12"

[measurement."test_runtime"]
min_relative_deviation = 7.5
dispersion_method = "mad"  # Test times can vary significantly
```

git-perf supports specifying units for measurements in your configuration. Units are displayed in audit output, HTML reports, and CSV exports:
```toml
# Default unit for all measurements (optional)
[measurement]
unit = "ms"

# Measurement-specific units (override defaults)
[measurement."build_time"]
unit = "ms"

[measurement."memory_usage"]
unit = "MB"

[measurement."throughput"]
unit = "requests/sec"

[measurement."test_runtime"]
unit = "seconds"
```

How Units Work:
- Units are defined in configuration and applied at display time
- Units are not stored with measurement data
- Measurements without configured units display normally (backward compatible)
- Units appear in:
  - Audit output: `✅ build_time: 42.5 ms (within acceptable range)`
  - HTML reports: Legend entries and axis labels show units (e.g., "build_time (ms)")
  - CSV exports: Dedicated unit column populated from configuration
Example with units:
```bash
# Configure units in .gitperfconfig
cat >> .gitperfconfig << EOF
[measurement."build_time"]
unit = "ms"
EOF

# Add measurement (no CLI change needed)
git perf add 42.5 -m build_time

# Audit shows unit automatically
git perf audit -m build_time
# Output: ✅ build_time: 42.5 ms (within acceptable range)

# Reports and CSV exports automatically include units
git perf report -o report.html -m build_time
git perf report -o data.csv -m build_time
```
```bash
# Basic audit (uses configuration or defaults to stddev)
git perf audit -m build_time

# Audit multiple measurements
git perf audit -m build_time -m memory_usage

# Custom deviation threshold
git perf audit -m build_time -d 3.0

# Override dispersion method
git perf audit -m build_time --dispersion-method mad
git perf audit -m build_time -D stddev  # Short form
```

The audit compares the HEAD measurement against historical measurements:
- Z-score: Statistical significance based on chosen dispersion method (stddev or MAD)
- Relative deviation: Practical significance as percentage change from median
- Threshold: Configurable minimum relative deviation to filter noise
- Sparkline: Visualizes measurement range relative to tail median (historical measurements)
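As a rough sketch with made-up numbers (git-perf's internal estimators may differ, e.g. in how MAD is scaled), the two checks boil down to:

```bash
# z-score and relative deviation, computed by hand (illustrative values)
head=110.0; tail_mean=100.0; tail_sigma=4.0; tail_median=100.0
z=$(echo "($head - $tail_mean) / $tail_sigma" | bc -l)             # 2.5 sigmas up
rel=$(echo "($head - $tail_median) / $tail_median * 100" | bc -l)  # +10% vs median
echo "z=$z relative_deviation=${rel}%"
```

An audit only fails when both signals agree: the z-score exceeds the sigma threshold and the relative deviation exceeds `min_relative_deviation`.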
The dispersion method is determined in this order:
1. CLI option (`--dispersion-method` or `-D`) - highest priority
2. Measurement-specific config (`[measurement."name"].dispersion_method`)
3. Default config (`[measurement].dispersion_method`)
4. Built-in default (stddev) - lowest priority
```text
$ git perf audit -m build_time --dispersion-method mad
✅ 'build_time'
z-score (mad): ↑ 2.15
Head: μ: 110.0 σ: 0.0 MAD: 0.0 n: 1
Tail: μ: 101.7 σ: 45.8 MAD: 2.5 n: 3
[-1.0% → +96.0%] ▁▄█▁
```

When the relative deviation is below the threshold, the audit passes even if the z-score indicates statistical significance. This helps focus on meaningful performance changes while ignoring noise.
The audit system provides detailed statistical analysis of your performance measurements. Here's a complete example followed by a breakdown of each component:
```text
$ git perf audit -m build_time --dispersion-method mad
✅ 'build_time'
z-score (mad): ↑ 2.15
Head: μ: 110.0 σ: 0.0 MAD: 0.0 n: 1
Tail: μ: 101.7 σ: 45.8 MAD: 2.5 n: 3
[-1.0% → +96.0%] ▁▄█▁
```

This output shows a passing audit where the current build time (110.0) is being compared against 3 historical measurements. Let's break down each component:
```text
✅ 'build_time'
```

The first line shows the audit result:
- ✅ 'measurement_name' - Audit passed (no significant regression detected)
- ❌ 'measurement_name' - Audit failed (significant performance change detected)
- ⏭️ 'measurement_name' - Audit skipped (insufficient measurements)
```text
z-score (mad): ↑ 2.15
```

- z-score: `2.15` - Statistical measure of how many MADs (or standard deviations) the HEAD measurement is from the tail mean
  - Higher values indicate more significant deviations
  - Typically, z-scores above 4.0 (default sigma) indicate statistical significance
- Direction arrows:
  - ↑ - HEAD measurement is higher than tail average (potential regression for time metrics)
  - ↓ - HEAD measurement is lower than tail average (potential improvement for time metrics)
  - → - HEAD measurement is roughly equal to tail average
- Method indicator: `(mad)` - Shows which dispersion method was used (`stddev` or `mad`)
```text
Head: μ: 110.0 σ: 0.0 MAD: 0.0 n: 1
Tail: μ: 101.7 σ: 45.8 MAD: 2.5 n: 3
```

- Head: Statistics for the current commit's measurement(s)
  - In this example: single measurement of 110.0
- Tail: Statistics for historical measurements
  - In this example: 3 measurements with mean of 101.7
- μ (mu): Mean (average) value
- σ (sigma): Standard deviation (measure of variability)
- MAD: Median Absolute Deviation (robust measure of variability)
- n: Number of measurements used in the calculation
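As an illustration, here is a toy median/MAD computation over three invented tail samples that happens to reproduce the MAD of 2.5 shown above:

```bash
# Median = middle value; MAD = median of absolute deviations from the median
samples="100.0 102.5 97.5"
median=$(printf '%s\n' $samples | sort -n | sed -n '2p')   # -> 100.0
printf '%s\n' $samples \
  | awk -v m="$median" '{d = $1 - m; if (d < 0) d = -d; print d}' \
  | sort -n | sed -n '2p'                                  # -> 2.5 (the MAD)
```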
```text
[-1.0% → +96.0%] ▁▄█▁
```

- Percentage range: `[-1.0% → +96.0%]` - Shows min and max measurements relative to the tail median
  - `-1.0%` means the lowest measurement is 1% below the tail median
  - `+96.0%` means the highest measurement is 96% above the tail median
  - In this example, there's significant variation with one outlier
- Sparkline: `▁▄█▁` - Visual representation of all measurements (tail + head)
  - Each bar represents a measurement's relative magnitude
  - Bars range from ▁ (lowest) to █ (highest)
  - Here: low value, medium value, very high outlier, another low value
  - Helps quickly identify outliers and trends
When configured with min_relative_deviation, you may see:
Note: Passed due to relative deviation (3.2%) being below threshold (5.0%)
This indicates the audit passed because the performance change was below the configured threshold, even though it may have been statistically significant. This prevents false alarms from minor fluctuations.
When there aren't enough measurements:

```text
⏭️ 'build_time'
Only 3 measurements found. Less than requested min_measurements of 10. Skipping test.
[-2.5% → +5.1%] ▃▄▂▅
```

The audit is skipped but still shows the sparkline for available data. Adjust `--min-measurements` to change the requirement.
When a regression is detected:

```text
❌ 'build_time'
HEAD differs significantly from tail measurements.
z-score (stddev): ↑ 5.23
Head: μ: 250.0 σ: 0.0 MAD: 0.0 n: 1
Tail: μ: 100.0 σ: 15.2 MAD: 8.3 n: 10
[-12.5% → +150.0%] ▂▃▂▃▂▄▃▂▃▂█
```

This shows build time increased from ~100 to 250 (a 150% increase) with high statistical significance (z-score of 5.23).
Audit Passed (✅):
- Performance is stable or improved
- Any changes are within acceptable thresholds
- Safe to merge/deploy
Audit Failed (❌):
- Significant performance regression detected
- Review code changes that may have caused the regression
- Consider optimization or investigation before merging
Audit Skipped (⏭️):
- Not enough historical data for statistical analysis
- Continue collecting measurements
- Results will become more reliable over time
- Git: Version 2.43.0 or higher
- Platforms: Linux (x86_64, ARM64), macOS (ARM64/Apple Silicon)
- For building from source: Rust toolchain (latest stable)
Yes! git-perf works with both public and private repositories. For custom remote setups:
```bash
# Set up a custom remote for measurements
git remote add perf-upstream git@github.com:org/private-repo.git
git remote set-url git-perf-origin git@github.com:org/private-repo.git
```

See the Remote Setup section for details.
Measurement operations have minimal overhead:
- Adding measurements: Individual `add` commands are slower than bulk operations
- Recommendation: Use `git perf measure` for automatic timing or bulk additions when possible
- Push/pull: Operations are efficient and similar to git-notes operations
For CI/CD usage, the overhead is typically negligible compared to build times.
git-perf uses a configuration-only approach for units rather than storing them with each measurement. This design decision provides several advantages:
Benefits:
- Simplicity: No changes to data model or serialization format
- Zero risk: Perfect backward compatibility with existing measurements
- Centralized management: Single source of truth in `.gitperfconfig`
- Flexibility: Units can be updated without re-recording measurements
- No storage overhead: No additional bytes per measurement
Trade-offs:
- No per-measurement validation: Can't detect if measurements were recorded in different units
- Manual consistency: Users must ensure config units match actual measurement units
- User responsibility: Changing unit config doesn't change values - config must accurately reflect how measurements were recorded
Best practices:
- Choose appropriate units when starting measurements and document them in config
- Use consistent naming conventions (e.g., `build_time_ms` makes the unit clear)
- Keep unit configuration stable once established
This approach matches git-perf's configuration philosophy where display settings (like dispersion_method) are config-based. It provides 80% of the value (clear report display) with 20% of the complexity, and can be extended later if validation becomes important.
No migration needed! Simply add unit configuration to your .gitperfconfig:
[measurement."your_metric"]
unit = "ms"Existing measurements will automatically display with units in all output (audit, reports, CSV exports). The configuration is applied at display time, so it works retroactively with all historical measurements.
While technically possible by changing the configuration, this is not recommended. Units reflect how measurements were actually recorded. If you change from recording milliseconds to seconds, you should:
1. Create a new measurement name (e.g., `build_time_sec` instead of `build_time_ms`)
2. Update your configuration with the new unit
3. Use the new measurement name going forward
This ensures clarity and prevents confusion when analyzing historical data.
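A sketch of such a switch (the measurement names here are hypothetical):

```bash
# Introduce a new measurement name rather than re-labeling the old one
cat >> .gitperfconfig << 'EOF'
[measurement."build_time_sec"]
unit = "seconds"
EOF

# Record under the new name from now on
git perf measure -m build_time_sec -- cargo build
```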
If audits detect regressions for normal variations, try these solutions:
1. Increase relative deviation threshold in `.gitperfconfig`:

   ```toml
   [measurement."build_time"]
   min_relative_deviation = 10.0  # Percentage change required
   ```

2. Switch to MAD for more robust detection:

   ```toml
   [measurement."build_time"]
   dispersion_method = "mad"  # Less sensitive to outliers
   ```

3. Increase sigma threshold via CLI:

   ```bash
   git perf audit -m build_time -d 6.0  # Default is 4.0
   ```
Use Standard Deviation (stddev) when:
- Your performance data is normally distributed
- You want to detect all changes, including outlier-caused ones
- You have consistent, stable measurement environments
Use MAD (Median Absolute Deviation) when:
- Performance data has occasional outliers or spikes
- You want to focus on typical performance changes
- You're measuring in environments with variable system load
- You need more robust regression detection
See the Audit System section for complete details.
By default, git-perf requires at least 10 measurements (configurable with --min-measurements). With fewer measurements:
- Audits are skipped with a message
- Sparkline is still shown for available data
- Statistical analysis becomes more reliable as more data is collected
```bash
# Adjust minimum required measurements
git perf audit -m build_time --min-measurements 5
```

The sparkline (e.g., `▁▄█▁`) shows:
- Bars: Each represents a measurement's relative magnitude
- Height: From ▁ (lowest) to █ (highest)
- Percentage range: Shows min/max measurements relative to tail median
- Purpose: Quickly identify outliers, trends, and distribution

Example: `[-1.0% → +96.0%] ▁▄█▁` shows most values are low, with one significant outlier.
When running git perf size in a shallow clone, you'll see:
```text
⚠️ Shallow clone detected - measurement counts may be incomplete (see FAQ)
```
What this means:
- The measurement counts for commits in your current lineage are accurate
- However, measurements for commits outside your shallow clone's history are not counted
- The `git notes list` command only sees notes for locally available commits
Example: If your full repository has 1000 commits with 800 measurements, but your shallow clone only has 100 commits, you might see only ~80 measurements instead of the full 800.
Important distinction:
- ✅ Measurements in your lineage: Correctly captured and counted
- ❌ Measurements outside your lineage: Not visible in shallow clones (e.g., measurements on branches not in your history)
To see all measurements:

```bash
git fetch --unshallow
```

This converts your shallow clone to a full clone, allowing all measurements to be counted.
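If you are unsure whether a clone is shallow, you can check before interpreting the counts:

```bash
# Prints "true" in a shallow clone, "false" otherwise
git rev-parse --is-shallow-repository
```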
Recommended: At least 90 days of measurement data for meaningful trend analysis.
Configure cleanup retention:

```yaml
# In .github/workflows/cleanup-measurements.yml
- uses: kaihowl/git-perf/.github/actions/cleanup@master
  with:
    retention-days: 90  # Days to retain measurements
```

Adjust based on your needs:
- Active development: 90-180 days
- Stable projects: 180-365 days
- Long-term analysis: 365+ days
Yes! Use key-value pairs to track multi-environment measurements:
```bash
# Record with environment tag
git perf measure -m build_time -k env=dev -- cargo build
git perf measure -m build_time -k env=prod -- cargo build --release

# Filter by environment
git perf report -m build_time -k env=dev
git perf audit -m build_time -s env=prod
```

Common causes and solutions:
1. Missing permissions:

   ```yaml
   permissions:
     contents: write  # Required for git perf push
   ```

2. Insufficient fetch depth:

   ```yaml
   - uses: actions/checkout@v6
     with:
       fetch-depth: 0  # Fetch full history
   ```

3. Protected branch: Add exception for GitHub Actions in branch protection settings
Use the report action for automatic PR comments:
```yaml
- name: Generate report with audit
  uses: kaihowl/git-perf/.github/actions/report@master
  with:
    depth: 40
    audit-args: '-m build_time -m binary_size -d 4.0'
    github-token: ${{ secrets.GITHUB_TOKEN }}
```

The action will:
- Run audits automatically
- Comment on PRs with results
- Fail the workflow if regressions are detected
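If you prefer to wire the steps yourself rather than use the bundled action, the underlying sequence is roughly this sketch, built from the commands documented above:

```bash
# Fetch existing measurements so the audit has history
git perf pull

# Record a new data point and audit it (non-zero exit fails CI)
git perf measure -m build_time -- cargo build --release
git perf audit -m build_time

# Publish the new measurement
git perf push
```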
Yes, git-perf supports CSV export using the report command:
```bash
# Export to CSV file
git perf report -m build_time -o data.csv

# Export to stdout
git perf report -m build_time -o -

# Export with aggregation (e.g., mean)
git perf report -m build_time -a mean -o data.csv
```

Units configured in `.gitperfconfig` automatically appear in the CSV output.
Documentation Index - Start here for a comprehensive guide to all documentation
- Integration Tutorial - Complete GitHub Actions setup guide (recommended starting point)
- Quick Start - Get running in 5 minutes
- Importing Measurements - Import test and benchmark results
- JUnit XML (pytest, Jest, cargo-nextest, JUnit, etc.)
- Criterion JSON (Rust benchmarks)
- Configuration - Complete .gitperfconfig reference
- Audit System - Understand regression detection
- CLI Reference - All commands and options
- Config Example - Annotated configuration template
- FAQ - Common questions answered
- Contributing Guide - How to contribute
- Development Setup - Developer and agent instructions
- Evaluation Tools - Statistical analysis tools
- Live Example Report - See git-perf in action
- GitHub Discussions - Ask questions and share ideas
Documentation is automatically generated using clap_mangen and clap_markdown:
```bash
# Generate with default version (0.0.0)
./scripts/generate-manpages.sh

# Generate with custom version
GIT_PERF_VERSION=1.0.0 ./scripts/generate-manpages.sh
```

- Manpages: `target/man/man1/`
- Markdown: `docs/manpage.md`
The CI automatically validates that documentation stays current with CLI definitions.
Install required development dependencies:
```bash
# macOS
brew install libfaketime

# Ubuntu/Debian
sudo apt-get install libfaketime
```

For contributors, run the setup script to install development tools:

```bash
# Install cargo-nextest and other development tools
./scripts/setup-dev-tools.sh
```

This script will install cargo-nextest if not already present, enabling faster test execution in your local development environment.
This project uses nextest for faster, more reliable test execution.
```bash
# Development testing (recommended - skips slow tests)
cargo nextest run -- --skip slow

# Full test suite
cargo nextest run

# Run tests matching a name filter (positional filter)
cargo nextest run git_interop

# Verbose output
cargo nextest run --verbose

# Specific package
cargo nextest run -p git-perf
```

```bash
# All tests
cargo test

# Skip slow tests
cargo test -- --skip slow
```

Before submitting changes, ensure code quality:
```bash
# Format code
cargo fmt

# Run linter
cargo clippy
```