8 changes: 7 additions & 1 deletion .env.example
@@ -1,7 +1,7 @@
# InkOS Environment Configuration
# Copy to .env and fill in your values

# LLM Provider (openai, anthropic, custom)
# LLM Provider (openai, anthropic, minimax, custom)
INKOS_LLM_PROVIDER=openai

# API Base URL (OpenAI-compatible endpoint)
@@ -13,6 +13,12 @@ INKOS_LLM_API_KEY=sk-your-key-here
# Model name
INKOS_LLM_MODEL=gpt-4o

# --- MiniMax example ---
# INKOS_LLM_PROVIDER=minimax
# INKOS_LLM_BASE_URL=https://api.minimax.io/v1
# INKOS_LLM_API_KEY=your-minimax-api-key
# INKOS_LLM_MODEL=MiniMax-M2.7

# Notifications (optional)
INKOS_TELEGRAM_BOT_TOKEN=
INKOS_TELEGRAM_CHAT_ID=
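As an aside, a consumer of the `INKOS_LLM_*` variables above might read them roughly like this (a minimal TypeScript sketch; `readLLMConfig` and its defaults are illustrative assumptions, not the actual InkOS loader):

```typescript
// Hypothetical helper that maps the INKOS_LLM_* env vars to a config object.
// The fallback values are illustrative, not InkOS's real defaults.
interface LLMConfig {
  provider: "openai" | "anthropic" | "minimax" | "custom";
  baseUrl: string;
  apiKey: string;
  model: string;
}

function readLLMConfig(env: Record<string, string | undefined>): LLMConfig {
  return {
    provider: (env.INKOS_LLM_PROVIDER ?? "openai") as LLMConfig["provider"],
    baseUrl: env.INKOS_LLM_BASE_URL ?? "https://api.openai.com/v1",
    apiKey: env.INKOS_LLM_API_KEY ?? "",
    model: env.INKOS_LLM_MODEL ?? "gpt-4o",
  };
}
```

Uncommenting the MiniMax block in `.env.example` would then yield a config with `provider: "minimax"` and the MiniMax base URL.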
11 changes: 7 additions & 4 deletions README.en.md
@@ -49,19 +49,22 @@ Once installed, Claw can invoke InkOS atomic commands and control-surface operat
```bash
inkos config set-global \
--lang en \
--provider <openai|anthropic|custom> \
--provider <openai|anthropic|minimax|custom> \
--base-url <API endpoint> \
--api-key <your API key> \
--model <model name>

# provider: openai / anthropic / custom (use custom for OpenAI-compatible proxies)
# provider: openai / anthropic / minimax / custom (use custom for OpenAI-compatible proxies)
# base-url: your API provider URL
# api-key: your API key
# model: your model name
```

`--lang en` sets English as the default writing language for all projects. Saved to `~/.inkos/.env`. New projects just work without extra config.

> **MiniMax quick setup:** `inkos config set-global --lang en --provider minimax --base-url https://api.minimax.io/v1 --api-key <key> --model MiniMax-M2.7`
> Supported models: `MiniMax-M2.7` (latest, 204K context), `MiniMax-M2.7-highspeed` (204K context). Temperature is auto-clamped to (0, 1].
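The (0, 1] clamp mentioned in the note above could be sketched as follows (illustrative only; `clampTemperature` and the 0.01 floor are assumptions, not the actual InkOS implementation):

```typescript
// Clamp a temperature into the half-open interval (0, 1]:
// zero and negative values are lifted to a small positive floor,
// values above 1 are capped at 1.
function clampTemperature(t: number, epsilon = 0.01): number {
  if (!Number.isFinite(t) || t <= 0) return epsilon; // (0, 1] excludes 0
  return Math.min(t, 1);
}
```

For example, a configured temperature of 1.5 would be sent to the API as 1, and 0 would become the floor value.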

**Option 2: Per-project `.env`**

```bash
@@ -71,8 +74,8 @@ inkos init my-novel # Initialize project

```bash
# Required
INKOS_LLM_PROVIDER= # openai / anthropic / custom (use custom for any OpenAI-compatible API)
INKOS_LLM_BASE_URL= # API endpoint
INKOS_LLM_PROVIDER= # openai / anthropic / minimax / custom (use custom for any OpenAI-compatible API)
INKOS_LLM_BASE_URL= # API endpoint (supports MiniMax, proxies, etc.)
INKOS_LLM_API_KEY= # API Key
INKOS_LLM_MODEL= # Model name

11 changes: 7 additions & 4 deletions README.ja.md
@@ -49,19 +49,22 @@ Installed via npm, or the repository cloned
```bash
inkos config set-global \
--lang en \
--provider <openai|anthropic|custom> \
--provider <openai|anthropic|minimax|custom> \
--base-url <API endpoint> \
--api-key <your API key> \
--model <model name>

# provider: openai / anthropic / custom (use custom for OpenAI-compatible proxies)
# provider: openai / anthropic / minimax / custom (use custom for OpenAI-compatible proxies)
# base-url: your API provider URL
# api-key: your API key
# model: your model name
```

`--lang en` sets English as the default writing language for all projects. Saved to `~/.inkos/.env`. New projects just work without extra config.

> **MiniMax quick setup:** `inkos config set-global --provider minimax --base-url https://api.minimax.io/v1 --api-key <key> --model MiniMax-M2.7`
> Supported models: `MiniMax-M2.7` (latest, 204K context), `MiniMax-M2.7-highspeed` (204K context). Temperature is auto-clamped to (0, 1].

**Option 2: Per-project `.env`**

```bash
@@ -71,8 +74,8 @@ inkos init my-novel # Initialize project

```bash
# Required
INKOS_LLM_PROVIDER= # openai / anthropic / custom (use custom for any OpenAI-compatible API)
INKOS_LLM_BASE_URL= # API endpoint
INKOS_LLM_PROVIDER= # openai / anthropic / minimax / custom (use custom for any OpenAI-compatible API)
INKOS_LLM_BASE_URL= # API endpoint (supports MiniMax, proxies, etc.)
INKOS_LLM_API_KEY= # API key
INKOS_LLM_MODEL= # Model name

11 changes: 7 additions & 4 deletions README.md
@@ -48,19 +48,22 @@ clawhub install inkos # Install the InkOS Skill from ClawHub

```bash
inkos config set-global \
--provider <openai|anthropic|custom> \
--provider <openai|anthropic|minimax|custom> \
--base-url <API endpoint> \
--api-key <your API key> \
--model <model name>

# provider: openai / anthropic / custom (use custom for OpenAI-compatible relays)
# provider: openai / anthropic / minimax / custom (use custom for OpenAI-compatible relays)
# base-url: your API provider URL
# api-key: your API key
# model: your model name
```

Config is saved to `~/.inkos/.env` and shared by all projects. New projects need no further setup.

> **MiniMax quick setup:** `inkos config set-global --provider minimax --base-url https://api.minimax.io/v1 --api-key <key> --model MiniMax-M2.7`
> Supported models: `MiniMax-M2.7` (latest, 204K context), `MiniMax-M2.7-highspeed` (204K context). Temperature is auto-clamped to (0, 1].

**Option 2: Per-project `.env`**

```bash
@@ -70,8 +73,8 @@ inkos init my-novel # Initialize project

```bash
# Required
INKOS_LLM_PROVIDER= # openai / anthropic / custom (use custom for any OpenAI-compatible API)
INKOS_LLM_BASE_URL= # API endpoint (supports relays, Zhipu, Gemini, etc.)
INKOS_LLM_PROVIDER= # openai / anthropic / minimax / custom (use custom for any OpenAI-compatible API)
INKOS_LLM_BASE_URL= # API endpoint (supports relays, MiniMax, Zhipu, Gemini, etc.)
INKOS_LLM_API_KEY= # API key
INKOS_LLM_MODEL= # Model name

2 changes: 1 addition & 1 deletion packages/cli/src/commands/config.ts
@@ -87,7 +87,7 @@ configCommand
configCommand
.command("set-global")
.description("Set global LLM config (~/.inkos/.env), shared by all projects")
.requiredOption("--provider <provider>", "LLM provider (openai / anthropic)")
.requiredOption("--provider <provider>", "LLM provider (openai / anthropic / minimax)")
.requiredOption("--base-url <url>", "API base URL")
.requiredOption("--api-key <key>", "API key")
.requiredOption("--model <model>", "Model name")
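Since Commander's `requiredOption` only enforces that a value is present, not which values are allowed, the provider string could additionally be validated with a guard like this (a sketch; `assertProvider` is a hypothetical name, and the real CLI may validate differently or not at all):

```typescript
// Allowed provider values, matching the help text in the diff above.
const PROVIDERS = ["openai", "anthropic", "minimax", "custom"] as const;
type Provider = (typeof PROVIDERS)[number];

// Narrow an arbitrary CLI string to a known provider, or fail loudly.
function assertProvider(value: string): Provider {
  if (!(PROVIDERS as readonly string[]).includes(value)) {
    throw new Error(
      `Unknown provider "${value}". Expected one of: ${PROVIDERS.join(", ")}`,
    );
  }
  return value as Provider;
}
```

Commander also offers `new Option(...).choices([...])` for the same purpose, which would reject bad values before the action handler runs.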
7 changes: 6 additions & 1 deletion packages/cli/src/commands/init.ts
@@ -90,7 +90,7 @@ export const initCommand = new Command("init")
[
"# LLM Configuration",
"# Tip: Run 'inkos config set-global' to set once for all projects.",
"# Provider: openai (OpenAI / compatible proxy), anthropic (Anthropic native)",
"# Provider: openai (OpenAI / compatible proxy), anthropic (Anthropic native), minimax (MiniMax)",
"INKOS_LLM_PROVIDER=openai",
"INKOS_LLM_BASE_URL=",
"INKOS_LLM_API_KEY=",
@@ -110,6 +110,11 @@
"# INKOS_LLM_PROVIDER=anthropic",
"# INKOS_LLM_BASE_URL=",
"# INKOS_LLM_MODEL=",
"",
"# MiniMax example:",
"# INKOS_LLM_PROVIDER=minimax",
"# INKOS_LLM_BASE_URL=https://api.minimax.io/v1",
"# INKOS_LLM_MODEL=MiniMax-M2.7",
].join("\n"),
"utf-8",
);
57 changes: 57 additions & 0 deletions packages/core/src/__tests__/minimax-integration.test.ts
@@ -0,0 +1,57 @@
/**
* Integration tests for MiniMax LLM provider.
*
* These tests verify the MiniMax integration end-to-end against the real API.
* They require the MINIMAX_API_KEY environment variable to be set.
*
* Run with:
* MINIMAX_API_KEY=<your-key> npx vitest run src/__tests__/minimax-integration.test.ts
*/
import { describe, expect, it } from "vitest";
import { createLLMClient, chatCompletion } from "../llm/provider.js";
import { LLMConfigSchema } from "../models/project.js";

const MINIMAX_API_KEY = process.env.MINIMAX_API_KEY;

const describeIf = MINIMAX_API_KEY ? describe : describe.skip;

describeIf("MiniMax integration (real API)", () => {
const config = LLMConfigSchema.parse({
provider: "minimax",
baseUrl: "https://api.minimax.io/v1",
apiKey: MINIMAX_API_KEY ?? "",
model: "MiniMax-M2.7-highspeed",
temperature: 0.7,
maxTokens: 64,
});
const client = createLLMClient(config);

it("completes a simple chat request (sync)", async () => {
const syncClient = createLLMClient({ ...config, stream: false });
const result = await chatCompletion(syncClient, config.model, [
{ role: "user", content: "Say OK and nothing else." },
], { maxTokens: 16 });

expect(result.content).toBeTruthy();
expect(result.content.length).toBeGreaterThan(0);
expect(result.usage.totalTokens).toBeGreaterThan(0);
}, 30000);

it("completes a simple chat request (streaming)", async () => {
const result = await chatCompletion(client, config.model, [
{ role: "user", content: "Say OK and nothing else." },
], { maxTokens: 16 });

expect(result.content).toBeTruthy();
expect(result.content.length).toBeGreaterThan(0);
}, 30000);

it("handles system messages correctly", async () => {
const result = await chatCompletion(client, config.model, [
{ role: "system", content: "You are a helpful assistant. Always reply with exactly one word." },
{ role: "user", content: "What color is the sky?" },
], { maxTokens: 16 });

expect(result.content).toBeTruthy();
}, 30000);
});