4 changes: 2 additions & 2 deletions .env.example
@@ -31,12 +31,12 @@ REFRESH_INTERVAL_MINUTES=15

# === LLM Layer (optional) ===
# Enables AI-enhanced trade ideas and breaking news Telegram alerts.
-# Provider options: anthropic | openai | gemini | codex | openrouter | minimax
+# Provider options: anthropic | openai | gemini | codex | openrouter | minimax | mistral
LLM_PROVIDER=
# Not needed for codex (uses ~/.codex/auth.json)
LLM_API_KEY=
# Optional override. Each provider has a sensible default:
-# anthropic: claude-sonnet-4-6 | openai: gpt-5.4 | gemini: gemini-3.1-pro | codex: gpt-5.3-codex | openrouter: openrouter/auto | minimax: MiniMax-M2.5
+# anthropic: claude-sonnet-4-6 | openai: gpt-5.4 | gemini: gemini-3.1-pro | codex: gpt-5.3-codex | openrouter: openrouter/auto | minimax: MiniMax-M2.5 | mistral: mistral-small-latest
LLM_MODEL=

# === Telegram Alerts (optional, requires LLM) ===
12 changes: 7 additions & 5 deletions README.md
@@ -158,10 +158,10 @@ Alerts are delivered as rich embeds with color-coded sidebars: red for FLASH, ye
**Optional dependency:** The full bot requires `discord.js`. Install it with `npm install discord.js`. If it's not installed, Crucix automatically falls back to webhook-only mode.

### Optional LLM Layer
-Connect any of 6 LLM providers for enhanced analysis:
+Connect any of 7 LLM providers for enhanced analysis:
- **AI trade ideas** — quantitative analyst producing 5-8 actionable ideas citing specific data
- **Smarter alert evaluation** — LLM classifies signals into FLASH/PRIORITY/ROUTINE tiers with cross-domain correlation and confidence scoring
-- Providers: Anthropic Claude, OpenAI, Google Gemini, OpenRouter (Unified API), OpenAI Codex (ChatGPT subscription), MiniMax
+- Providers: Anthropic Claude, OpenAI, Google Gemini, OpenRouter (Unified API), OpenAI Codex (ChatGPT subscription), MiniMax, Mistral
- Graceful fallback — when LLM is unavailable, a rule-based engine takes over alert evaluation. LLM failures never crash the sweep cycle.

---
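The graceful-fallback behavior described above can be sketched as a small wrapper. The names below (`evaluateAlert`, `ruleBasedEvaluate`) are illustrative assumptions, not Crucix's actual internals:

```javascript
// Sketch of the "graceful fallback" pattern: LLM first, rules on failure.
function ruleBasedEvaluate(signal) {
  // Deterministic stand-in for the rule-based engine.
  return { tier: 'ROUTINE', source: 'rules', signal };
}

async function evaluateAlert(llm, signal) {
  if (llm && llm.isConfigured) {
    try {
      // Happy path: let the LLM classify the signal.
      return await llm.complete('You are an alert classifier.', JSON.stringify(signal));
    } catch (err) {
      // LLM failures must never crash the sweep cycle; fall back to rules.
      console.warn(`[LLM] ${err.message}; using rule-based fallback`);
    }
  }
  return ruleBasedEvaluate(signal);
}

// With no provider configured, the rule-based engine answers:
evaluateAlert(null, { type: 'demo' }).then((r) => console.log(r.source)); // → "rules"
```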
@@ -194,7 +194,7 @@ These three unlock the most valuable economic and satellite data. Each takes abo

### LLM Provider (optional, for AI-enhanced ideas)

-Set `LLM_PROVIDER` to one of: `anthropic`, `openai`, `gemini`, `codex`, `openrouter`, `minimax`
+Set `LLM_PROVIDER` to one of: `anthropic`, `openai`, `gemini`, `codex`, `openrouter`, `minimax`, `mistral`

| Provider | Key Required | Default Model |
|----------|-------------|---------------|
@@ -204,6 +204,7 @@ Set `LLM_PROVIDER` to one of: `anthropic`, `openai`, `gemini`, `codex`, `openrou
| `openrouter` | `LLM_API_KEY` | openrouter/auto |
| `codex` | None (uses `~/.codex/auth.json`) | gpt-5.3-codex |
| `minimax` | `LLM_API_KEY` | MiniMax-M2.5 |
+| `mistral` | `LLM_API_KEY` | mistral-small-latest |

For Codex, run `npx @openai/codex login` to authenticate via your ChatGPT subscription.

@@ -273,14 +273,15 @@ crucix/
│   └── jarvis.html          # Self-contained Jarvis HUD
├── lib/
-│   ├── llm/                 # LLM abstraction (5 providers, raw fetch, no SDKs)
+│   ├── llm/                 # LLM abstraction (7 providers, raw fetch, no SDKs)
│   │   ├── provider.mjs     # Base class
│   │   ├── anthropic.mjs    # Claude
│   │   ├── openai.mjs       # GPT
│   │   ├── gemini.mjs       # Gemini
│   │   ├── openrouter.mjs   # OpenRouter (Unified API)
│   │   ├── codex.mjs        # Codex (ChatGPT subscription)
│   │   ├── minimax.mjs      # MiniMax (M2.5, 204K context)
+│   │   ├── mistral.mjs      # Mistral (OpenAI-compatible, JSON mode)
│   │   ├── ideas.mjs        # LLM-powered trade idea generation
│   │   └── index.mjs        # Factory: createLLMProvider()
│   ├── delta/               # Change tracking between sweeps
@@ -382,7 +384,7 @@ All settings are in `.env` with sensible defaults:
|----------|---------|-------------|
| `PORT` | `3117` | Dashboard server port |
| `REFRESH_INTERVAL_MINUTES` | `15` | Auto-refresh interval |
-| `LLM_PROVIDER` | disabled | `anthropic`, `openai`, `gemini`, `codex`, `openrouter`, or `minimax` |
+| `LLM_PROVIDER` | disabled | `anthropic`, `openai`, `gemini`, `codex`, `openrouter`, `minimax`, or `mistral` |
| `LLM_API_KEY` | — | API key (not needed for codex) |
| `LLM_MODEL` | per-provider default | Override model selection |
| `TELEGRAM_BOT_TOKEN` | disabled | For Telegram alerts + bot commands |
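Putting the provider table together, a minimal `.env` for the new Mistral option might look like this (the key value is a placeholder):

```
LLM_PROVIDER=mistral
LLM_API_KEY=your-mistral-api-key
# Optional; omit to use the provider default, mistral-small-latest
LLM_MODEL=
```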
4 changes: 4 additions & 0 deletions lib/llm/index.mjs
@@ -6,6 +6,7 @@ import { OpenRouterProvider } from './openrouter.mjs';
import { GeminiProvider } from './gemini.mjs';
import { CodexProvider } from './codex.mjs';
import { MiniMaxProvider } from './minimax.mjs';
+import { MistralProvider } from './mistral.mjs';

export { LLMProvider } from './provider.mjs';
export { AnthropicProvider } from './anthropic.mjs';
@@ -14,6 +15,7 @@ export { OpenRouterProvider } from './openrouter.mjs';
export { GeminiProvider } from './gemini.mjs';
export { CodexProvider } from './codex.mjs';
export { MiniMaxProvider } from './minimax.mjs';
+export { MistralProvider } from './mistral.mjs';

/**
* Create an LLM provider based on config.
@@ -38,6 +40,8 @@ export function createLLMProvider(llmConfig) {
      return new CodexProvider({ model });
    case 'minimax':
      return new MiniMaxProvider({ apiKey, model });
+    case 'mistral':
+      return new MistralProvider({ apiKey, model });
    default:
      console.warn(`[LLM] Unknown provider "${provider}". LLM features disabled.`);
      return null;
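To illustrate the dispatch added to `createLLMProvider`, here is a self-contained sketch; the `MistralProvider` stub below only mirrors the constructor defaults and is not the real class from `lib/llm/mistral.mjs`:

```javascript
// Stub mirroring MistralProvider's constructor defaults (illustrative only).
class MistralProvider {
  constructor({ apiKey, model }) {
    this.name = 'mistral';
    this.apiKey = apiKey;
    this.model = model || 'mistral-small-latest';
  }
  get isConfigured() { return !!this.apiKey; }
}

// Simplified dispatch in the shape of lib/llm/index.mjs.
function createLLMProvider({ provider, apiKey, model }) {
  switch (provider) {
    case 'mistral':
      return new MistralProvider({ apiKey, model });
    default:
      return null; // the real factory also logs a warning here
  }
}

const llm = createLLMProvider({ provider: 'mistral', apiKey: 'sk-placeholder' });
console.log(llm.model);        // → "mistral-small-latest"
console.log(llm.isConfigured); // → true
```

Leaving `model` unset exercises the per-provider default promised in the README table.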
53 changes: 53 additions & 0 deletions lib/llm/mistral.mjs
@@ -0,0 +1,53 @@
+// Mistral Provider — raw fetch, no SDK
+// Uses Mistral's OpenAI-compatible Chat Completions API
+
+import { LLMProvider } from './provider.mjs';
+
+export class MistralProvider extends LLMProvider {
+  constructor(config) {
+    super(config);
+    this.name = 'mistral';
+    this.apiKey = config.apiKey;
+    this.model = config.model || 'mistral-small-latest';
+  }
+
+  get isConfigured() { return !!this.apiKey; }
+
+  async complete(systemPrompt, userMessage, opts = {}) {
+    const res = await fetch('https://api.mistral.ai/v1/chat/completions', {
+      method: 'POST',
+      headers: {
+        'Content-Type': 'application/json',
+        'Authorization': `Bearer ${this.apiKey}`,
+      },
+      body: JSON.stringify({
+        model: this.model,
+        max_tokens: opts.maxTokens || 4096,
+        messages: [
+          { role: 'system', content: systemPrompt },
+          { role: 'user', content: userMessage },
+        ],
+        // Enforce JSON output so callers can parse directly without markdown stripping
+        response_format: { type: 'json_object' },
+      }),
+      signal: AbortSignal.timeout(opts.timeout || 60000),
+    });
+
+    if (!res.ok) {
+      const err = await res.text().catch(() => '');
+      throw new Error(`Mistral API ${res.status}: ${err.substring(0, 200)}`);
+    }
+
+    const data = await res.json();
+    const text = data.choices?.[0]?.message?.content || '';
+
+    return {
+      text,
+      usage: {
+        inputTokens: data.usage?.prompt_tokens || 0,
+        outputTokens: data.usage?.completion_tokens || 0,
+      },
+      model: data.model || this.model,
+    };
+  }
+}
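Because the request body sets `response_format: { type: 'json_object' }`, the `text` that `complete()` returns is guaranteed to be parseable JSON. A self-contained sketch of the extraction logic against an illustrative (not real) response payload:

```javascript
// Illustrative Mistral-style chat-completions response (sample data, not a live call).
const data = {
  model: 'mistral-small-latest',
  choices: [{ message: { content: '{"tier":"ROUTINE","confidence":0.8}' } }],
  usage: { prompt_tokens: 120, completion_tokens: 18 },
};

// Same extraction shape as MistralProvider.complete():
const result = {
  text: data.choices?.[0]?.message?.content || '',
  usage: {
    inputTokens: data.usage?.prompt_tokens || 0,
    outputTokens: data.usage?.completion_tokens || 0,
  },
  model: data.model,
};

// JSON mode means callers can parse without stripping markdown fences:
const parsed = JSON.parse(result.text);
console.log(parsed.tier, result.usage.inputTokens); // → ROUTINE 120
```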
7 changes: 5 additions & 2 deletions package.json
@@ -27,10 +27,13 @@
    "npm": ">=10"
  },
  "dependencies": {
+    "@mistralai/mistralai": "^2.0.0",
    "express": "^5.1.0"
  },
  "optionalDependencies": {
-    "discord.js": "^14.25.1" },
+    "discord.js": "^14.25.1"
+  },
  "overrides": {
-    "undici": "^7.24.4" }
+    "undici": "^7.24.4"
+  }
}