3 changes: 3 additions & 0 deletions .env.example
@@ -38,6 +38,9 @@ LLM_API_KEY=
# Optional override. Each provider has a sensible default:
# anthropic: claude-sonnet-4-6 | openai: gpt-5.4 | gemini: gemini-3.1-pro | codex: gpt-5.3-codex | openrouter: openrouter/auto | minimax: MiniMax-M2.5
LLM_MODEL=
# Optional: custom base URL when using openai-compatible provider (LM Studio, Ollama, etc.)
# Example: http://localhost:1234/v1/chat/completions
LLM_BASE_URL=

# === Telegram Alerts (optional, requires LLM) ===
# Create a bot via @BotFather, get chat ID via @userinfobot
11 changes: 11 additions & 0 deletions .gitignore
@@ -44,5 +44,16 @@ npm-debug.log*
# Local maintainer notes
MAINTAINER_DECISIONS.local.md

# Local docs (personal reference, not for distribution)
docs/

# Local working folders
data/
input/
logs/
src/
temp/
tests/

# Local deploy config
dashboard/public/vercel.json
18 changes: 15 additions & 3 deletions README.md
@@ -163,10 +163,11 @@ Alerts are delivered as rich embeds with color-coded sidebars: red for FLASH, ye
**Optional dependency:** The full bot requires `discord.js`. Install it with `npm install discord.js`. If it's not installed, Crucix automatically falls back to webhook-only mode.

### Optional LLM Layer
Connect any of 6 LLM providers for enhanced analysis:
Connect any of 8 LLM providers — including local models — for enhanced analysis:
- **AI trade ideas** — quantitative analyst producing 5-8 actionable ideas citing specific data
- **Smarter alert evaluation** — LLM classifies signals into FLASH/PRIORITY/ROUTINE tiers with cross-domain correlation and confidence scoring
- Providers: Anthropic Claude, OpenAI, Google Gemini, OpenRouter (Unified API), OpenAI Codex (ChatGPT subscription), MiniMax, Mistral
- **Local LLMs** — use `openai-compatible` with `LLM_BASE_URL` to point at LM Studio, Ollama, or any OpenAI-compatible endpoint. No API key required.
- Graceful fallback — when LLM is unavailable, a rule-based engine takes over alert evaluation. LLM failures never crash the sweep cycle.
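The graceful-fallback behavior described above can be sketched roughly like this (a minimal illustration of the pattern; `evaluateSignals`, `evaluate`, and `ruleBasedEvaluation` are hypothetical stand-ins, not the project's actual function names):

```javascript
// Sketch of the graceful-fallback pattern: try the LLM first, but an LLM
// failure must never crash the sweep cycle, so errors fall through to a
// rule-based engine. Names here are illustrative, not the real API.
async function evaluateSignals(llmProvider, signals) {
  if (llmProvider && llmProvider.isConfigured) {
    try {
      return await llmProvider.evaluate(signals);
    } catch {
      // Swallow the LLM error and fall through to the rule engine.
    }
  }
  return ruleBasedEvaluation(signals);
}

function ruleBasedEvaluation(signals) {
  // Toy rule: alert on any critical signal; two or more escalate the tier.
  const critical = signals.filter((s) => s.severity === 'critical');
  return {
    shouldAlert: critical.length > 0,
    tier: critical.length >= 2 ? 'FLASH' : 'ROUTINE',
  };
}
```

Because the rule engine is pure and synchronous, it can always serve as the last resort regardless of network or provider state.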

---
@@ -199,12 +200,13 @@ These three unlock the most valuable economic and satellite data. Each takes abo

### LLM Provider (optional, for AI-enhanced ideas)

Set `LLM_PROVIDER` to one of: `anthropic`, `openai`, `gemini`, `codex`, `openrouter`, `minimax`, `mistral`
Set `LLM_PROVIDER` to one of: `anthropic`, `openai`, `openai-compatible`, `gemini`, `codex`, `openrouter`, `minimax`, `mistral`

| Provider | Key Required | Default Model |
|----------|-------------|---------------|
| `anthropic` | `LLM_API_KEY` | claude-sonnet-4-6 |
| `openai` | `LLM_API_KEY` | gpt-5.4 |
| `openai-compatible` | None (set `LLM_BASE_URL`) | — |
| `gemini` | `LLM_API_KEY` | gemini-3.1-pro |
| `openrouter` | `LLM_API_KEY` | openrouter/auto |
| `codex` | None (uses `~/.codex/auth.json`) | gpt-5.3-codex |
@@ -213,6 +215,14 @@ Set `LLM_PROVIDER` to one of: `anthropic`, `openai`, `gemini`, `codexwait it says

For Codex, run `npx @openai/codex login` to authenticate via your ChatGPT subscription.

**Local LLMs (LM Studio, Ollama, etc.):** Set `LLM_PROVIDER=openai-compatible` and point `LLM_BASE_URL` at any OpenAI-compatible endpoint. No API key needed.

```env
LLM_PROVIDER=openai-compatible
LLM_MODEL=local-model-name
LLM_BASE_URL=http://localhost:1234/v1/chat/completions
```
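The effect of this config can be sketched as follows (a simplified illustration of the endpoint and header resolution, not the exact implementation): when `LLM_BASE_URL` is set it takes precedence over the hosted default, and the `Authorization` header is only attached when a key is present.

```javascript
// Illustrative sketch: resolve the chat-completions endpoint and headers
// from config. A local baseUrl wins over the hosted default, and the API
// key is optional — matching the openai-compatible provider's behavior.
function resolveRequest({ apiKey = null, baseUrl = null } = {}) {
  const url = baseUrl || 'https://api.openai.com/v1/chat/completions';
  const headers = { 'Content-Type': 'application/json' };
  if (apiKey) headers['Authorization'] = `Bearer ${apiKey}`;
  return { url, headers };
}
```

For example, `resolveRequest({ baseUrl: 'http://localhost:1234/v1/chat/completions' })` yields the local URL with no `Authorization` header at all, which is why LM Studio and Ollama work without a key.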

### Telegram Bot + Alerts (optional)

| Key | How to Get |
@@ -378,6 +388,7 @@ crucix/
| `npm run inject` | `node dashboard/inject.mjs` | Inject latest data into static HTML |
| `npm run brief:save` | `node apis/save-briefing.mjs` | Run sweep + save timestamped JSON |
| `npm run diag` | `node diag.mjs` | Run diagnostics (Node version, imports, port check) |
| `npm test` | `node --test test/*.test.mjs` | Run full test suite (412 tests, 100% coverage) |

---

@@ -389,9 +400,10 @@ All settings are in `.env` with sensible defaults:
|----------|---------|-------------|
| `PORT` | `3117` | Dashboard server port |
| `REFRESH_INTERVAL_MINUTES` | `15` | Auto-refresh interval |
| `LLM_PROVIDER` | disabled | `anthropic`, `openai`, `gemini`, `codex`, `openrouter`, `minimax`, or `mistral` |
| `LLM_PROVIDER` | disabled | `anthropic`, `openai`, `openai-compatible`, `gemini`, `codex`, `openrouter`, `minimax`, or `mistral` |
| `LLM_API_KEY` | — | API key (not needed for codex) |
| `LLM_MODEL` | per-provider default | Override model selection |
| `LLM_BASE_URL` | — | Custom endpoint for local/OpenAI-compatible LLMs (LM Studio, Ollama, etc.) |
| `TELEGRAM_BOT_TOKEN` | disabled | For Telegram alerts + bot commands |
| `TELEGRAM_CHAT_ID` | — | Your Telegram chat ID |
| `TELEGRAM_CHANNELS` | — | Extra channel IDs to monitor (comma-separated) |
3 changes: 2 additions & 1 deletion crucix.config.mjs
@@ -7,9 +7,10 @@ export default {
refreshIntervalMinutes: parseInt(process.env.REFRESH_INTERVAL_MINUTES) || 15,

llm: {
provider: process.env.LLM_PROVIDER || null, // anthropic | openai | gemini | codex | openrouter | minimax | mistral
provider: process.env.LLM_PROVIDER || null, // anthropic | openai | openai-compatible | gemini | codex | openrouter | minimax | mistral
apiKey: process.env.LLM_API_KEY || null,
model: process.env.LLM_MODEL || null,
baseUrl: process.env.LLM_BASE_URL || null,
},

telegram: {
7 changes: 4 additions & 3 deletions lib/llm/index.mjs
@@ -19,19 +19,20 @@ export { MistralProvider } from './mistral.mjs';

/**
* Create an LLM provider based on config.
* @param {{ provider: string|null, apiKey: string|null, model: string|null }} llmConfig
* @param {{ provider: string|null, apiKey: string|null, model: string|null, baseUrl: string|null }} llmConfig
* @returns {LLMProvider|null}
*/
export function createLLMProvider(llmConfig) {
if (!llmConfig?.provider) return null;

const { provider, apiKey, model } = llmConfig;
const { provider, apiKey, model, baseUrl } = llmConfig;

switch (provider.toLowerCase()) {
case 'anthropic':
return new AnthropicProvider({ apiKey, model });
case 'openai':
return new OpenAIProvider({ apiKey, model });
case 'openai-compatible':
return new OpenAIProvider({ apiKey, model, baseUrl });
case 'openrouter':
return new OpenRouterProvider({ apiKey, model });
case 'gemini':
13 changes: 7 additions & 6 deletions lib/llm/openai.mjs
@@ -8,17 +8,18 @@ export class OpenAIProvider extends LLMProvider {
this.name = 'openai';
this.apiKey = config.apiKey;
this.model = config.model || 'gpt-5.4';
this.baseUrl = config.baseUrl || null;
}

get isConfigured() { return !!this.apiKey; }
get isConfigured() { return !!this.apiKey || !!this.baseUrl; }

async complete(systemPrompt, userMessage, opts = {}) {
const res = await fetch('https://api.openai.com/v1/chat/completions', {
const url = this.baseUrl || 'https://api.openai.com/v1/chat/completions';
const headers = { 'Content-Type': 'application/json' };
if (this.apiKey) headers['Authorization'] = `Bearer ${this.apiKey}`;
const res = await fetch(url, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${this.apiKey}`,
},
headers,
body: JSON.stringify({
model: this.model,
max_completion_tokens: opts.maxTokens || 4096,
3 changes: 2 additions & 1 deletion package.json
@@ -12,7 +12,8 @@
"brief:save": "node apis/save-briefing.mjs",
"diag": "node diag.mjs",
"clean": "node scripts/clean.mjs",
"fresh-start": "npm run clean && npm start"
"fresh-start": "npm run clean && npm start",
"test": "node --test test/*.test.mjs"
},
"keywords": [
"osint",
199 changes: 199 additions & 0 deletions test/alerts-discord.test.mjs
@@ -0,0 +1,199 @@
// DiscordAlerter — unit tests
// Uses Node.js built-in test runner (node:test) — no extra dependencies

import { describe, it } from 'node:test';
import assert from 'node:assert/strict';
import { DiscordAlerter } from '../lib/alerts/discord.mjs';

// ─── isConfigured ─────────────────────────────────────────────────────────────

describe('DiscordAlerter.isConfigured', () => {
it('returns true with botToken + channelId', () => {
const alerter = new DiscordAlerter({ botToken: 'tok', channelId: 'ch1', guildId: null, webhookUrl: null });
assert.equal(alerter.isConfigured, true);
});

it('returns true with webhookUrl only (no botToken)', () => {
const alerter = new DiscordAlerter({ botToken: null, channelId: null, guildId: null, webhookUrl: 'https://discord.com/api/webhooks/123/abc' });
assert.equal(alerter.isConfigured, true);
});

it('returns false with neither botToken/channelId nor webhookUrl', () => {
const alerter = new DiscordAlerter({ botToken: null, channelId: null, guildId: null, webhookUrl: null });
assert.equal(alerter.isConfigured, false);
});

it('returns false with botToken but no channelId and no webhookUrl', () => {
const alerter = new DiscordAlerter({ botToken: 'tok', channelId: null, guildId: null, webhookUrl: null });
assert.equal(alerter.isConfigured, false);
});
});

// ─── _sendWebhook ─────────────────────────────────────────────────────────────

describe('DiscordAlerter._sendWebhook', () => {
it('POSTs correct JSON payload to the webhookUrl', async () => {
const alerter = new DiscordAlerter({ botToken: null, channelId: null, guildId: null, webhookUrl: 'https://discord.com/api/webhooks/999/xyz' });
let capturedUrl, capturedOpts;
const originalFetch = globalThis.fetch;
globalThis.fetch = async (url, opts) => {
capturedUrl = url;
capturedOpts = opts;
return { ok: true, text: async () => '' };
};
try {
const result = await alerter._sendWebhook('https://discord.com/api/webhooks/999/xyz', 'hello world', []);
assert.equal(capturedUrl, 'https://discord.com/api/webhooks/999/xyz');
assert.equal(capturedOpts.method, 'POST');
assert.equal(capturedOpts.headers['Content-Type'], 'application/json');
const body = JSON.parse(capturedOpts.body);
assert.equal(body.content, 'hello world');
assert.equal(result, true);
} finally {
globalThis.fetch = originalFetch;
}
});

it('returns false when fetch throws (network error)', async () => {
const alerter = new DiscordAlerter({ botToken: null, channelId: null, guildId: null, webhookUrl: 'https://x' });
const originalFetch = globalThis.fetch;
globalThis.fetch = async () => { throw new Error('network error'); };
try {
const result = await alerter._sendWebhook('https://x', 'msg', []);
assert.equal(result, false);
} finally {
globalThis.fetch = originalFetch;
}
});

it('returns false when HTTP response is not ok', async () => {
const alerter = new DiscordAlerter({ botToken: null, channelId: null, guildId: null, webhookUrl: 'https://x' });
const originalFetch = globalThis.fetch;
globalThis.fetch = async () => ({ ok: false, status: 400, text: async () => 'Bad Request' });
try {
const result = await alerter._sendWebhook('https://x', 'msg', []);
assert.equal(result, false);
} finally {
globalThis.fetch = originalFetch;
}
});
});

// ─── _ruleBasedEvaluation ─────────────────────────────────────────────────────

describe('DiscordAlerter._ruleBasedEvaluation', () => {
const makeDelta = () => ({ summary: { direction: 'up', totalChanges: 5, criticalChanges: 1 } });

it('nuclear anomaly signal → FLASH', () => {
const alerter = new DiscordAlerter({ botToken: 'tok', channelId: 'ch', guildId: null, webhookUrl: null });
const signals = [{ key: 'nuke_anomaly', severity: 'critical', description: 'test' }];
const result = alerter._ruleBasedEvaluation(signals, makeDelta());
assert.equal(result.shouldAlert, true);
assert.equal(result.tier, 'FLASH');
});

it('2+ cross-domain critical signals → FLASH', () => {
const alerter = new DiscordAlerter({ botToken: 'tok', channelId: 'ch', guildId: null, webhookUrl: null });
const signals = [
{ key: 'vix', severity: 'critical', direction: 'up', label: 'VIX' },
{ key: 'conflict_events', severity: 'critical', direction: 'up', label: 'Conflict Events' },
];
const result = alerter._ruleBasedEvaluation(signals, makeDelta());
assert.equal(result.shouldAlert, true);
assert.equal(result.tier, 'FLASH');
});

it('2+ escalating high signals (direction=up) → PRIORITY', () => {
const alerter = new DiscordAlerter({ botToken: 'tok', channelId: 'ch', guildId: null, webhookUrl: null });
const signals = [
{ key: 'wti', severity: 'high', direction: 'up', label: 'WTI' },
{ key: 'hy_spread', severity: 'high', direction: 'up', label: 'HY Spread' },
];
const result = alerter._ruleBasedEvaluation(signals, makeDelta());
assert.equal(result.shouldAlert, true);
assert.equal(result.tier, 'PRIORITY');
});

it('5+ urgent OSINT posts → PRIORITY', () => {
const alerter = new DiscordAlerter({ botToken: 'tok', channelId: 'ch', guildId: null, webhookUrl: null });
const signals = Array.from({ length: 5 }, (_, i) => ({
key: `tg_urgent_${i}`, severity: 'low', text: `osint post ${i}`,
}));
const result = alerter._ruleBasedEvaluation(signals, makeDelta());
assert.equal(result.shouldAlert, true);
assert.equal(result.tier, 'PRIORITY');
});

it('single critical signal (no cross-domain) → ROUTINE', () => {
const alerter = new DiscordAlerter({ botToken: 'tok', channelId: 'ch', guildId: null, webhookUrl: null });
const signals = [{ key: 'vix', severity: 'critical', direction: 'down', label: 'VIX' }];
const result = alerter._ruleBasedEvaluation(signals, makeDelta());
assert.equal(result.shouldAlert, true);
assert.equal(result.tier, 'ROUTINE');
});

it('signals below threshold → shouldAlert=false', () => {
const alerter = new DiscordAlerter({ botToken: 'tok', channelId: 'ch', guildId: null, webhookUrl: null });
const signals = [{ key: 'misc', severity: 'low', direction: 'up', label: 'Misc' }];
const result = alerter._ruleBasedEvaluation(signals, makeDelta());
assert.equal(result.shouldAlert, false);
});
});

// ─── _checkRateLimit ──────────────────────────────────────────────────────────

describe('DiscordAlerter._checkRateLimit', () => {
it('allows first alert (empty history)', () => {
const alerter = new DiscordAlerter({ botToken: 'tok', channelId: 'ch', guildId: null, webhookUrl: null });
assert.equal(alerter._checkRateLimit('FLASH'), true);
});

it('blocks alert within cooldown period', () => {
const alerter = new DiscordAlerter({ botToken: 'tok', channelId: 'ch', guildId: null, webhookUrl: null });
alerter._alertHistory.push({ tier: 'FLASH', timestamp: Date.now() });
assert.equal(alerter._checkRateLimit('FLASH'), false);
});

it('allows alert after cooldown has elapsed', () => {
const alerter = new DiscordAlerter({ botToken: 'tok', channelId: 'ch', guildId: null, webhookUrl: null });
// FLASH cooldown = 5 min; record an alert 10 minutes ago
alerter._alertHistory.push({ tier: 'FLASH', timestamp: Date.now() - 10 * 60 * 1000 });
assert.equal(alerter._checkRateLimit('FLASH'), true);
});
});

// ─── _isMuted ─────────────────────────────────────────────────────────────────

describe('DiscordAlerter._isMuted', () => {
it('returns false initially', () => {
const alerter = new DiscordAlerter({ botToken: 'tok', channelId: 'ch', guildId: null, webhookUrl: null });
assert.equal(alerter._isMuted(), false);
});

it('returns true when muted until future time', () => {
const alerter = new DiscordAlerter({ botToken: 'tok', channelId: 'ch', guildId: null, webhookUrl: null });
alerter._muteUntil = Date.now() + 60 * 60 * 1000;
assert.equal(alerter._isMuted(), true);
});

it('returns false and clears mute when timestamp has expired', () => {
const alerter = new DiscordAlerter({ botToken: 'tok', channelId: 'ch', guildId: null, webhookUrl: null });
alerter._muteUntil = Date.now() - 1000;
assert.equal(alerter._isMuted(), false);
assert.equal(alerter._muteUntil, null);
});
});

// ─── _embed ───────────────────────────────────────────────────────────────────

describe('DiscordAlerter._embed', () => {
it('returns object with title field when no discord.js EmbedBuilder loaded', () => {
// _EmbedBuilder is not set by default (discord.js not imported in test env)
const alerter = new DiscordAlerter({ botToken: null, channelId: null, guildId: null, webhookUrl: 'https://x' });
const embed = alerter._embed('Test Title', 'Test description', 0xFF0000);
assert.equal(embed.title, 'Test Title');
assert.equal(embed.description, 'Test description');
assert.equal(embed.color, 0xFF0000);
assert.ok(embed.timestamp, 'embed should have a timestamp');
});
});