31 changes: 20 additions & 11 deletions README.md
@@ -8,17 +8,23 @@
This package uses the power of OpenAI's GPT-4o-mini model to understand your code changes and generate meaningful commit messages for you. Whether you're working on a solo project or collaborating with a team, AI-Commit makes it easy to keep your commit history organized and informative.

## Demo

![ai_commit_demo(1)(2)](https://github.com/JinoArch/ai-commit/assets/39610834/3002dfa2-737a-44b9-91c9-b43907f11144)

## How it Works

1. Install AI-Commit using `npm install -g ai-commit`
2. Generate an OpenAI API key [here](https://platform.openai.com/account/api-keys)
3. Set your `AI_COMMIT_API_KEY` environment variable to your API key
4. Set `PROVIDER` in your environment to `openai` or `gemini` (defaults to `openai`)
5. Make your code changes and stage them with `git add .`
6. Type `ai-commit` in your terminal
7. AI-Commit will analyze your changes and generate a commit message
8. Approve the commit message and AI-Commit will create the commit for you ✅
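Steps 3–4 above can be sketched in Node.js. This is a minimal illustration of how a CLI like this might read its configuration from the environment; the variable names come from the list above, but the fallback logic is an assumption, not the package's actual source:

```javascript
// Illustrative sketch of steps 3-4: read the API key and provider
// from the environment. PROVIDER falls back to "openai" by default.
const provider = process.env.PROVIDER || "openai";
const apiKey = process.env.AI_COMMIT_API_KEY;

if (!apiKey && provider !== "ollama") {
  console.error("AI_COMMIT_API_KEY is not set");
}
console.log(`Using provider: ${provider}`);
```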

## Gemini Note

For Gemini we use https://openrouter.ai/ with the model `google/gemini-2.0-flash-lite-preview-02-05:free`. OpenRouter supports many models, including free ones, so you can create an account and try your own key without paying anything.

## Using local model (ollama)

@@ -28,12 +34,13 @@ You can also use the local model for free with Ollama.
2. Install Ollama from https://ollama.ai/
3. Run `ollama run mistral` to fetch model for the first time
4. Set `PROVIDER` in your environment to `ollama`
5. Make your code changes and stage them with `git add .`
6. Type `ai-commit` in your terminal
7. AI-Commit will analyze your changes and generate a commit message
8. Approve the commit message and AI-Commit will create the commit for you ✅
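For a local provider, the request goes to the Ollama server instead of a hosted API. A rough sketch of such a call, using Ollama's `/api/generate` endpoint (the model name and prompt here are illustrative, and this is not the package's actual source):

```javascript
// Sketch of a request to a local Ollama server. The body shape follows
// Ollama's /api/generate API: model, prompt, and stream flag.
const body = {
  model: "mistral",
  prompt: "Write a git commit message for this diff: ...",
  stream: false, // return a single JSON response instead of a stream
};

// Uncomment when an Ollama server is running locally:
// const res = await fetch("http://localhost:11434/api/generate", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(body),
// });
// const data = await res.json();
// console.log(data.response);
console.log(JSON.stringify(body));
```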

## Options

`--list`: Select from a list of 5 generated messages (or regenerate the list)

`--force`: Automatically create a commit without being prompted to select a message (can't be used with `--list`)
@@ -51,6 +58,7 @@
`--commit-type`: Specify the type of commit to generate. This will be used as the type in the commit message e.g. `--commit-type feat`

## Contributing

We'd love for you to contribute to AI-Commit! Here's how:

1. Fork the repository
@@ -73,6 +81,7 @@ We'd love for you to contribute to AI-Commit! Here's how:
- [ ] Reverse commit message generation: Allow users to generate code changes from a commit message.

## License

AI-Commit is licensed under the MIT License.

## Happy coding 🚀
104 changes: 104 additions & 0 deletions gemini.js
@@ -0,0 +1,104 @@
import inquirer from "inquirer"; // used by the commented-out fee prompt below

const FEE_PER_1K_TOKENS = 0.0;
const MAX_TOKENS = 1_000_000;
const FEE_COMPLETION = 0.001;

const gemini = {
sendMessage: async (
input,
{ apiKey, model = "google/gemini-2.0-flash-lite-preview-02-05:free" }
) => {
console.log("prompting Gemini API...");
console.log("prompt: ", input);

const response = await fetch(
"https://openrouter.ai/api/v1/chat/completions",
{
method: "POST",
headers: {
Authorization: `Bearer ${apiKey}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
model,
messages: [
{
role: "user",
content: [{ type: "text", text: input }],
},
],
}),
}
);

const data = await response.json();
return data.choices?.[0]?.message?.content || "";
},

getPromptForSingleCommit: (
diff,
{ commitType, customMessageConvention, language }
) => {
return (
`Write a professional git commit message based on the diff below in ${language} language` +
(commitType ? ` with commit type '${commitType}'. ` : ". ") +
`${
customMessageConvention
? `Apply these JSON formatted rules: ${customMessageConvention}.`
: ""
}` +
"Do not preface the commit with anything, use the present tense, return the full sentence and also commit type." +
`\n\n${diff}`
);
},

getPromptForMultipleCommits: (
diff,
{ commitType, customMessageConvention, numOptions, language }
) => {
return (
`Write a professional git commit message based on the diff below in ${language} language` +
(commitType ? ` with commit type '${commitType}'. ` : ". ") +
`Generate ${numOptions} options separated by ";".` +
"For each option, use the present tense, return the full sentence and also commit type." +
`${
customMessageConvention
? ` Apply these JSON formatted rules: ${customMessageConvention}.`
: ""
}` +
`\n\n${diff}`
);
},

filterApi: async ({ prompt, numCompletion = 1, filterFee }) => {
const numTokens = prompt.split(" ").length; // Approximate token count
const fee =
(numTokens / 1000) * FEE_PER_1K_TOKENS + FEE_COMPLETION * numCompletion;

if (numTokens > MAX_TOKENS) {
      console.log(
        `The commit diff is too large for the Gemini API. Max ${MAX_TOKENS} tokens.`
      );
return false;
}

// if (filterFee) {
// console.log(`This will cost you ~$${fee.toFixed(3)} for using the API.`);
// const answer = await inquirer.prompt([
// {
// type: "confirm",
// name: "continue",
// message: "Do you want to continue 💸?",
// default: true,
// },
// ]);
// if (!answer.continue) return false;
// }

return true;
},
};

export default gemini;
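`filterApi` estimates token count by splitting the prompt on spaces, i.e. a word count rather than a real tokenizer. For comparison, a character-based rule of thumb (~4 characters per token) can be sketched alongside it; both are only approximations, and this sketch is illustrative, not part of gemini.js:

```javascript
// Two rough token estimators: the word-count approach used by filterApi,
// and a ~4-characters-per-token heuristic for comparison.
const approxTokensByWords = (text) => text.split(" ").length;
const approxTokensByChars = (text) => Math.ceil(text.length / 4);

const prompt = "Write a professional git commit message";
console.log(approxTokensByWords(prompt), approxTokensByChars(prompt));
```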