A command-line tool that generates commit messages using Large Language Models (LLMs) from various API providers.
It uses the Gemini, Mistral, or Ollama API to generate a commit message based on the changes made in your git repository.
- Python >= 3.8
- git
- uv (by Astral)
- Gemini API key (required only for the Gemini service)
- Mistral API key (required only for the Mistral service)
git clone https://github.com/trottomv/aicommit ~/aicommit

Edit ~/.bashrc and add:
export GEMINI_API_KEY=<gemini-api-key>
export MISTRAL_API_KEY=<mistral-api-key>
alias aicommit='uv run --no-project ~/aicommit/aicommit.py'

Remember to reload your shell configuration (source ~/.bashrc) or open a new terminal.
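Once the alias and keys are in place, a quick sanity check can confirm the prerequisites are visible to your shell. This is only a sketch; the variable names match the exports above, and either API key may legitimately be unset if you only use the other service (or Ollama):

```shell
# Sanity-check the prerequisites from the setup above.
# Each check records a result in a variable, then prints a summary line.
git_status=$(command -v git >/dev/null 2>&1 && echo found || echo missing)
uv_status=$(command -v uv >/dev/null 2>&1 && echo found || echo missing)
gemini_key=$([ -n "${GEMINI_API_KEY:-}" ] && echo set || echo "not set")
mistral_key=$([ -n "${MISTRAL_API_KEY:-}" ] && echo set || echo "not set")
echo "git: $git_status"
echo "uv: $uv_status"
echo "GEMINI_API_KEY: $gemini_key"
echo "MISTRAL_API_KEY: $mistral_key"
```

If either key shows "not set", the corresponding service will be unavailable until you export it and reload your shell.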
After setting up the alias, you can use aicommit in your project directory to generate a commit message based on the changes made.
By default, it uses the mistral model on the local Ollama service.
cd ~/projects/myproject
aicommit

By default, aicommit uses the mistral model.
You can select a different model by passing an argument.
The available options are:
| Argument | Model Used |
|---|---|
| gemini | gemini-2.5-flash |
| gemini-2 | gemini-2.0-flash |
| mistral-small | mistral-small-latest |
| mistral-large | mistral-large-latest |
| mistral | mistral |
| llama3.2 | llama3.2 |
| qwen3 | qwen3 |
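The table above amounts to a simple alias-to-model mapping. A hypothetical sketch of that mapping (`resolve_model` is an illustrative name, not a function in aicommit.py, and the fall-through behavior for unknown arguments is an assumption):

```shell
# Hypothetical mapping from the argument column to the model column above.
resolve_model() {
  case "$1" in
    gemini)        echo "gemini-2.5-flash" ;;
    gemini-2)      echo "gemini-2.0-flash" ;;
    mistral-small) echo "mistral-small-latest" ;;
    mistral-large) echo "mistral-large-latest" ;;
    # mistral, llama3.2, and qwen3 map to themselves.
    *)             echo "$1" ;;
  esac
}
resolve_model gemini-2   # prints gemini-2.0-flash
```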
By default, aicommit uses the ollama service.
You can select a different API service by passing an argument.
The available options are:
| Argument | API Service Used |
|---|---|
| ollama | ollama |
| gemini | gemini |
| mistral | mistral |
Examples:
# Use the gemini-2.0-flash model
aicommit gemini-2 gemini
# Use the mistral-large-latest model
aicommit mistral-large mistral
# Use the mistral model on the Ollama API service
aicommit mistral ollama
# Use the llama3.2 model on the Ollama API service
aicommit llama3.2 ollama

The commit message generated by aicommit will be opened in your default text editor.
Using vim, you can see the generated commit message before saving and committing:
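The editor hand-off can be reproduced with plain git: write the generated text to a file and pass it to git commit with -e, which opens it for review before committing. This is a sketch of the idea, not aicommit's actual code, and the message text is a made-up example:

```shell
# Sketch: hand a pre-generated message to git for review in your editor.
msg_file=$(mktemp)
printf 'feat: add user login endpoint\n' > "$msg_file"
cat "$msg_file"
# Inside a git repository you would then run:
#   git commit -e -F "$msg_file"
# which opens the message in your default editor before committing.
```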
