Automatically generate conventional commit messages based on your git diff using AI.
Committor is a Rust CLI tool that analyzes your staged git changes and generates conventional commit messages using AI models from OpenAI or Ollama. Say goodbye to writer's block when crafting commit messages!
✅ COMPLETE: Full implementation with AI-powered analysis and conventional commit generation!
This project successfully demonstrates a complete Rust application that:
- Integrates with multiple AI providers - OpenAI GPT models and Ollama local models
- Analyzes git diffs to understand code changes
- Generates conventional commit messages following industry standards
- Provides a CLI interface with multiple commands and options
- Includes comprehensive error handling and validation
- Features modular architecture with separate modules for different concerns
- Has extensive test coverage with unit and integration tests
- Supports multiple AI models and configuration options (see the provider sketch below)
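To make the modular design concrete, here is a minimal sketch of what the provider abstraction could look like. The trait and struct names are hypothetical illustrations rather than Committor's actual API, and the sketch assumes the `async-trait` and `anyhow` crates:

```rust
use async_trait::async_trait;

/// Hypothetical provider abstraction; names and signatures are illustrative,
/// not Committor's real internals.
#[async_trait]
pub trait CommitMessageProvider {
    /// Turn a staged diff into up to `count` conventional commit suggestions.
    async fn suggest(&self, diff: &str, count: usize) -> anyhow::Result<Vec<String>>;
}

/// Backed by the OpenAI API (API key, model name, ...).
pub struct OpenAiProvider {
    pub model: String,
}

/// Backed by a local Ollama server (base URL, model name, timeout, ...).
pub struct OllamaProvider {
    pub base_url: String,
    pub model: String,
}
```

Each concrete provider would implement `suggest` by sending the diff to its backend and parsing the suggestions out of the response.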
- 🤖 AI-Powered: Uses OpenAI GPT models or Ollama local models to analyze your code changes
- 📝 Conventional Commits: Generates messages following the conventional commit format
- 🎯 Multiple Options: Generate multiple commit message suggestions to choose from
- ⚡ Fast: Built in Rust for optimal performance
- 🔧 Flexible: Supports different providers, models and customization options
- 🏠 Local Support: Use Ollama for completely local AI processing
- 🎨 Beautiful Output: Colorized terminal output for better readability
- Rust 1.70+ (install from rustup.rs)
- Git
- One of the following:
- OpenAI API key (for OpenAI provider)
- Ollama installation (for local AI processing)
```sh
git clone https://github.com/simonhdickson/committor.git
cd committor
cargo install --path .
```

Set your OpenAI API key as an environment variable:

```sh
export OPENAI_API_KEY="your-api-key-here"
```

Or pass it directly using the `--api-key` flag.
- Install Ollama from ollama.ai
- Start the Ollama service:
  ```sh
  ollama serve
  ```

- Pull a model (e.g., llama2):

  ```sh
  ollama pull llama2
  ```

Basic usage:

- Stage your changes:

  ```sh
  git add .
  ```

- Generate commit messages with OpenAI (default):

  ```sh
  committor generate
  ```

- Or use Ollama for local processing:

  ```sh
  committor --provider ollama --model llama2 generate
  ```

- Generate and commit in one step:
  ```sh
  committor commit
  ```

Full CLI usage:

```
committor [OPTIONS] [COMMAND]

Commands:
  generate      Generate a commit message for staged changes
  commit        Generate and commit in one step
  diff          Show the current git diff
  models        List available models for the selected provider
  check-ollama  Check if Ollama is available (only for Ollama provider)

Options:
      --provider <PROVIDER>       AI provider to use [default: openai] [possible values: openai, ollama]
      --api-key <API_KEY>         OpenAI API key [env: OPENAI_API_KEY]
      --ollama-url <OLLAMA_URL>   Ollama base URL [default: http://localhost:11434]
      --ollama-timeout <TIMEOUT>  Timeout for Ollama requests in seconds [default: 30]
      --model <MODEL>             Model to use for generation [default: gpt-4]
      --count <COUNT>             Maximum number of commit message options to generate [default: 3]
  -y, --auto-commit               Automatically use the first generated commit message
      --show-diff                 Show the git diff before generating commit message
  -h, --help                      Print help
  -V, --version                   Print version
```

Generate multiple commit message options with OpenAI:
```sh
committor generate --count 5
```

Use Ollama with a local model:

```sh
committor --provider ollama --model llama2 generate
```

Use a different OpenAI model:

```sh
committor generate --model gpt-3.5-turbo
```

Auto-commit with the first suggestion:

```sh
committor commit --auto-commit
```

Show diff before generating:

```sh
committor generate --show-diff
```

List available models (shows your installed models):

```sh
committor models --provider ollama
```

Check Ollama availability:

```sh
committor check-ollama
```

Use custom Ollama URL:

```sh
committor --provider ollama --ollama-url http://localhost:11434 --model codellama generate
```

Committor generates messages following the Conventional Commits specification:

```
<type>(<scope>): <description>
```

Supported types:

- `feat`: A new feature
- `fix`: A bug fix
- `docs`: Documentation only changes
- `style`: Changes that do not affect the meaning of the code
- `refactor`: A code change that neither fixes a bug nor adds a feature
- `test`: Adding missing tests or correcting existing tests
- `chore`: Changes to the build process or auxiliary tools
- `perf`: A code change that improves performance
- `ci`: Changes to CI configuration files and scripts
- `build`: Changes that affect the build system or external dependencies

Examples:

- `feat(auth): add JWT token validation`
- `fix(database): resolve connection timeout issue`
- `docs(readme): update installation instructions`
- `refactor(utils): simplify string parsing logic`
- `test(api): add integration tests for user endpoints`
You can customize the behavior by setting environment variables:
```sh
# Set your OpenAI API key (for OpenAI provider)
export OPENAI_API_KEY="sk-..."

# Set default model (applies to both providers)
export COMMITOR_MODEL="gpt-4"

# Set default count
export COMMITOR_COUNT="3"
```

Popular models you can use with Ollama:
- `llama2`: General purpose model
- `codellama`: Optimized for code understanding
- `mistral`: Fast and efficient model
- `neural-chat`: Good for conversational tasks
- `deepseek-coder`: Specialized for coding tasks

Pull models using:

```sh
ollama pull <model-name>
```

Contributions are welcome! Please feel free to submit a Pull Request.
- Clone the repository
- Install dependencies:
  ```sh
  cargo build
  ```

- Run tests:

  ```sh
  cargo test
  ```

- Run the tool:

  ```sh
  cargo run -- generate
  ```

Run the full test suite with:

```sh
cargo test
```

This project is licensed under the MIT License - see the LICENSE file for details.
- Built with rig.rs for unified AI provider integration (OpenAI and Ollama)
- Ollama for local AI model support
- Inspired by the Conventional Commits specification
- Uses git2 for git operations
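As a rough illustration of the git2 side, collecting the staged diff as a patch string generally looks like the sketch below. This shows the crate's API, not necessarily Committor's exact code:

```rust
use git2::{DiffFormat, Repository};

/// Render the staged changes (HEAD tree vs. index) as a unified patch string.
fn staged_diff_text() -> Result<String, git2::Error> {
    // Find the repository that contains the current working directory.
    let repo = Repository::discover(".")?;

    // Staged changes are the difference between HEAD's tree and the index.
    let head_tree = repo.head()?.peel_to_tree()?;
    let diff = repo.diff_tree_to_index(Some(&head_tree), None, None)?;

    let mut patch = String::new();
    diff.print(DiffFormat::Patch, |_delta, _hunk, line| {
        // '+' / '-' / ' ' prefix content lines; file and hunk headers carry their own text.
        if matches!(line.origin(), '+' | '-' | ' ') {
            patch.push(line.origin());
        }
        patch.push_str(&String::from_utf8_lossy(line.content()));
        true
    })?;
    Ok(patch)
}
```

The resulting patch text is what a provider can then analyze to propose commit messages.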
"Not in a git repository"
- Make sure you're running the command inside a git repository
- Initialize a git repository with `git init` if needed
"No staged changes found"
- Stage your changes first with `git add <files>`
- Check staged changes with `git status`
"OpenAI API key not found" (OpenAI provider)
- Set the `OPENAI_API_KEY` environment variable
- Or use the `--api-key` flag
"Ollama is not available" (Ollama provider)
- Make sure Ollama is installed and running: `ollama serve`
- Check if Ollama is accessible: `committor check-ollama`
- Verify the URL is correct with `--ollama-url`
API rate limits (OpenAI provider)
- The tool respects OpenAI's rate limits
- If you hit limits, wait a moment and try again
Model not found (Ollama provider)
- Pull the model first: `ollama pull <model-name>`
- List your installed models: `committor models --provider ollama`
Run with debug logging:
```sh
RUST_LOG=debug committor generate
```

Core Features Implemented:
- ✅ OpenAI GPT integration using rig.rs
- ✅ Git diff analysis and parsing
- ✅ Conventional commit message generation
- ✅ CLI with multiple commands (generate, commit, diff)
- ✅ Environment variable and flag configuration
- ✅ Multiple commit message options
- ✅ Auto-commit functionality
- ✅ Diff display and validation
- ✅ Comprehensive error handling
- ✅ Unit and integration tests
- ✅ Modular library architecture
- ✅ Installation and usage scripts
Key Technical Achievements:
- Built with Rust for performance and safety
- Uses rig.rs for unified AI provider integration (OpenAI and Ollama); a sketch of this async flow follows this list
- Implements conventional commits specification
- Features async/await for non-blocking operations
- Includes colored terminal output for better UX
- Has comprehensive documentation and examples
- Supports multiple AI models across different providers
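To illustrate the async completion flow, the sketch below follows rig's published OpenAI quick-start; exact API details vary by rig version, and the prompt wording is illustrative rather than Committor's actual prompt:

```rust
use rig::{completion::Prompt, providers::openai};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // rig's OpenAI client reads OPENAI_API_KEY from the environment.
    let client = openai::Client::from_env();

    // Build an agent with a preamble geared toward conventional commits.
    let agent = client
        .agent("gpt-4")
        .preamble("Write a conventional commit message for the given git diff.")
        .build();

    // Non-blocking completion call; `diff` stands in for the staged patch text.
    let diff = "diff --git a/src/main.rs b/src/main.rs\n...";
    let message = agent.prompt(diff).await?;
    println!("{message}");
    Ok(())
}
```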
Future Roadmap:
- Support for more AI providers (e.g., Anthropic's Claude)
- Configuration file support
- Advanced git hooks integration
- Commit message templates
- Enhanced scope detection
- Batch processing for multiple commits
- Custom prompt templates