4 changes: 3 additions & 1 deletion .env.example
@@ -33,13 +33,15 @@ REFRESH_INTERVAL_MINUTES=15

# === LLM Layer (optional) ===
# Enables AI-enhanced trade ideas and breaking news Telegram alerts.
# Provider options: anthropic | openai | gemini | codex | openrouter | minimax | mistral | ollama
# Provider options: anthropic | openai | gemini | codex | openrouter | minimax | mistral | ollama | cursor
LLM_PROVIDER=
# Not needed for codex (uses ~/.codex/auth.json) or ollama (local)
LLM_API_KEY=
# Optional override. Each provider has a sensible default:
# anthropic: claude-sonnet-4-6 | openai: gpt-5.4 | gemini: gemini-3.1-pro | codex: gpt-5.3-codex | openrouter: openrouter/auto | minimax: MiniMax-M2.5 | mistral: mistral-large-latest | cursor: auto | ollama: llama3.1:8b
LLM_MODEL=
# For cursor: proxy URL (optional; default auto-starts on 127.0.0.1:8765)
LLM_CURSOR_BASE_URL=
# Ollama base URL (only needed if not using default http://localhost:11434)
OLLAMA_BASE_URL=

17 changes: 11 additions & 6 deletions README.md
@@ -1,4 +1,4 @@
<div align="center">

# Crucix

@@ -186,10 +186,10 @@ Alerts are delivered as rich embeds with color-coded sidebars: red for FLASH, ye
**Optional dependency:** The full bot requires `discord.js`. Install it with `npm install discord.js`. If it's not installed, Crucix automatically falls back to webhook-only mode.

### Optional LLM Layer
Connect any of 6 LLM providers for enhanced analysis:
Connect any of 8 LLM providers for enhanced analysis:
- **AI trade ideas** — quantitative analyst producing 5-8 actionable ideas citing specific data
- **Smarter alert evaluation** — LLM classifies signals into FLASH/PRIORITY/ROUTINE tiers with cross-domain correlation and confidence scoring
- Providers: Anthropic Claude, OpenAI, Google Gemini, OpenRouter (Unified API), OpenAI Codex (ChatGPT subscription), MiniMax, Mistral
- Providers: Anthropic Claude, OpenAI, Google Gemini, OpenRouter (Unified API), OpenAI Codex (ChatGPT subscription), MiniMax, Mistral, [Cursor (via cursor-api-proxy)](https://www.npmjs.com/package/cursor-api-proxy)
- Graceful fallback — when LLM is unavailable, a rule-based engine takes over alert evaluation. LLM failures never crash the sweep cycle.
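
The graceful-fallback behaviour described above can be sketched as follows; this is a minimal sketch with illustrative names (`evaluateAlert`, `ruleBasedTier`), not the actual Crucix internals:

```javascript
// Sketch of the graceful-fallback pattern. Names are illustrative.
const TIERS = ['FLASH', 'PRIORITY', 'ROUTINE'];

// Rule-based fallback: a crude severity heuristic.
function ruleBasedTier(signal) {
  if (signal.severity >= 8) return 'FLASH';
  if (signal.severity >= 5) return 'PRIORITY';
  return 'ROUTINE';
}

// LLM-first evaluation that never throws out of the sweep cycle.
async function evaluateAlert(provider, signal) {
  if (provider) {
    try {
      const { text } = await provider.complete(
        'Classify this signal as FLASH, PRIORITY, or ROUTINE.',
        JSON.stringify(signal)
      );
      const tier = TIERS.find((t) => text.includes(t));
      if (tier) return tier;
    } catch {
      // LLM unavailable or errored: fall through to the rule engine
    }
  }
  return ruleBasedTier(signal);
}
```

Either path returns a tier, so a dead provider degrades quality, never availability.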

---
@@ -222,18 +222,21 @@ These three unlock the most valuable economic and satellite data. Each takes abo

### LLM Provider (optional, for AI-enhanced ideas)

Set `LLM_PROVIDER` to one of: `anthropic`, `openai`, `gemini`, `codex`, `openrouter`, `minimax`, `mistral`
Set `LLM_PROVIDER` to one of: `anthropic`, `openai`, `gemini`, `codex`, `openrouter`, `minimax`, `mistral`, `cursor`

| Provider | Key Required | Default Model |
|----------|-------------|---------------|
| `anthropic` | `LLM_API_KEY` | claude-sonnet-4-6 |
| `openai` | `LLM_API_KEY` | gpt-5.4 |
| `gemini` | `LLM_API_KEY` | gemini-3.1-pro |
| `cursor` | Optional (if proxy uses `CURSOR_BRIDGE_API_KEY`) | auto (proxy auto-starts on first use) |
| `openrouter` | `LLM_API_KEY` | openrouter/auto |
| `codex` | None (uses `~/.codex/auth.json`) | gpt-5.3-codex |
| `minimax` | `LLM_API_KEY` | MiniMax-M2.5 |
| `mistral` | `LLM_API_KEY` | mistral-large-latest |

**Cursor setup:** Uses the [cursor-api-proxy](https://www.npmjs.com/package/cursor-api-proxy) dependency. (1) Install and log in to the Cursor agent CLI (`curl https://cursor.com/install -fsS | bash`, then `agent login`). (2) Set `LLM_PROVIDER=cursor`. (3) No need to run the proxy separately — it starts automatically on first use. (4) Optional: `LLM_CURSOR_BASE_URL` if you run the proxy elsewhere; `LLM_API_KEY` if the proxy requires auth; `LLM_MODEL` (e.g. `gpt-5.2`) to override the default model.
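
A minimal `.env` for the Cursor provider under the defaults above might look like this (the commented values are illustrative placeholders):

```shell
# Cursor via cursor-api-proxy; the proxy auto-starts, so no URL is required
LLM_PROVIDER=cursor
# Uncomment only if the proxy runs elsewhere or enforces auth:
# LLM_CURSOR_BASE_URL=http://127.0.0.1:8765
# LLM_API_KEY=your-bridge-key
# LLM_MODEL=gpt-5.2
```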

For Codex, run `npx @openai/codex login` to authenticate via your ChatGPT subscription.

### Telegram Bot + Alerts (optional)
@@ -311,6 +314,7 @@ crucix/
│ │ ├── codex.mjs # Codex (ChatGPT subscription)
│ │ ├── minimax.mjs # MiniMax (M2.5, 204K context)
│ │ ├── mistral.mjs # Mistral AI
│ │ ├── cursor.mjs # Cursor (cursor-api-proxy, auto-starts proxy)
│ │ ├── ideas.mjs # LLM-powered trade idea generation
│ │ └── index.mjs # Factory: createLLMProvider()
│ ├── delta/ # Change tracking between sweeps
@@ -328,7 +332,7 @@ crucix/

### Design Principles
- **Pure ESM** — every file is `.mjs` with explicit imports
- **Minimal dependencies** — Express is the only runtime dependency. `discord.js` is optional (for Discord bot). LLM providers use raw `fetch()`, no SDKs.
- **Minimal dependencies** — Express and `cursor-api-proxy` (for the Cursor provider) are the only runtime dependencies. `discord.js` is optional (for the Discord bot). LLM providers use raw `fetch()`, except Cursor, which uses the cursor-api-proxy SDK.
- **Parallel execution** — `Promise.allSettled()` fires all 27 sources simultaneously
- **Graceful degradation** — missing keys produce errors, not crashes. LLM failures don't kill sweeps.
- **Each source is standalone** — run `node apis/sources/gdelt.mjs` to test any source independently
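
The parallel-execution principle above can be sketched like this; the fetchers are stand-ins for the real modules in `apis/sources/`:

```javascript
// Sketch of the Promise.allSettled() sweep: fire every source at once
// and keep whichever settle successfully, so one failing source never
// aborts the whole sweep.
async function sweep(fetchers) {
  const settled = await Promise.allSettled(fetchers.map((fn) => fn()));
  const ok = settled
    .filter((r) => r.status === 'fulfilled')
    .map((r) => r.value);
  const failed = settled.filter((r) => r.status === 'rejected').length;
  return { ok, failed };
}
```

A rejected source costs one entry in `failed`, nothing more; the other results still arrive, in the original source order.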
@@ -412,9 +416,10 @@ All settings are in `.env` with sensible defaults:
|----------|---------|-------------|
| `PORT` | `3117` | Dashboard server port |
| `REFRESH_INTERVAL_MINUTES` | `15` | Auto-refresh interval |
| `LLM_PROVIDER` | disabled | `anthropic`, `openai`, `gemini`, `codex`, `openrouter`, `minimax`, or `mistral` |
| `LLM_PROVIDER` | disabled | `anthropic`, `openai`, `gemini`, `codex`, `openrouter`, `minimax`, `mistral`, or `cursor` |
| `LLM_API_KEY` | — | API key (not needed for codex) |
| `LLM_MODEL` | per-provider default | Override model selection |
| `LLM_CURSOR_BASE_URL` | — | For cursor: proxy URL (optional; default auto-starts on 127.0.0.1:8765) |
| `TELEGRAM_BOT_TOKEN` | disabled | For Telegram alerts + bot commands |
| `TELEGRAM_CHAT_ID` | — | Your Telegram chat ID |
| `TELEGRAM_CHANNELS` | — | Extra channel IDs to monitor (comma-separated) |
3 changes: 2 additions & 1 deletion crucix.config.mjs
@@ -7,9 +7,10 @@ export default {
  refreshIntervalMinutes: parseInt(process.env.REFRESH_INTERVAL_MINUTES) || 15,

  llm: {
    provider: process.env.LLM_PROVIDER || null, // anthropic | openai | gemini | codex | openrouter | minimax | mistral | ollama
    provider: process.env.LLM_PROVIDER || null, // anthropic | openai | gemini | codex | openrouter | minimax | mistral | cursor | ollama
    apiKey: process.env.LLM_API_KEY || null,
    model: process.env.LLM_MODEL || null,
    cursorBaseUrl: process.env.LLM_CURSOR_BASE_URL || null,
    baseUrl: process.env.OLLAMA_BASE_URL || null,
  },

60 changes: 60 additions & 0 deletions lib/llm/cursor.mjs
@@ -0,0 +1,60 @@
// Cursor Provider — uses cursor-api-proxy SDK (proxy auto-starts on first use)

import { LLMProvider } from './provider.mjs';
import { createCursorProxyClient } from 'cursor-api-proxy';

export class CursorProvider extends LLMProvider {
  constructor(config) {
    super(config);
    this.name = 'cursor';
    this.baseUrl = config.baseUrl ?? process.env.LLM_CURSOR_BASE_URL ?? undefined;
    this.apiKey = config.apiKey ?? process.env.LLM_API_KEY ?? undefined;
    this.model = config.model ?? process.env.LLM_MODEL ?? 'auto';
    this._client = null;
  }

  get isConfigured() {
    return true;
  }

  _getClient() {
    if (!this._client) {
      this._client = createCursorProxyClient({
        baseUrl: this.baseUrl,
        apiKey: this.apiKey,
        startProxy: !this.baseUrl,
      });
    }
    return this._client;
  }

  async complete(systemPrompt, userMessage, opts = {}) {
    const client = this._getClient();
    try {
      const data = await client.chatCompletionsCreate({
        model: this.model,
        messages: [
          { role: 'system', content: systemPrompt },
          { role: 'user', content: userMessage },
        ],
      });

      if (data.error?.message) {
        throw new Error(data.error.message);
      }

      const text = data.choices?.[0]?.message?.content ?? '';
      const usage = data.usage ?? {};
      return {
        text,
        usage: {
          inputTokens: usage.prompt_tokens ?? 0,
          outputTokens: usage.completion_tokens ?? 0,
        },
        model: data.model ?? this.model,
      };
    } catch (err) {
      throw new Error(`Cursor proxy: ${err.message}`);
    }
  }
}
10 changes: 7 additions & 3 deletions lib/llm/index.mjs
@@ -7,6 +7,7 @@ import { GeminiProvider } from "./gemini.mjs";
import { CodexProvider } from "./codex.mjs";
import { MiniMaxProvider } from "./minimax.mjs";
import { MistralProvider } from "./mistral.mjs";
import { CursorProvider } from "./cursor.mjs";
import { OllamaProvider } from "./ollama.mjs";

export { LLMProvider } from "./provider.mjs";
@@ -17,17 +18,18 @@ export { GeminiProvider } from "./gemini.mjs";
export { CodexProvider } from "./codex.mjs";
export { MiniMaxProvider } from "./minimax.mjs";
export { MistralProvider } from "./mistral.mjs";
export { CursorProvider } from "./cursor.mjs";
export { OllamaProvider } from "./ollama.mjs";

/**
 * Create an LLM provider based on config.
 * @param {{ provider: string|null, apiKey: string|null, model: string|null }} llmConfig
 * @param {{ provider: string|null, apiKey: string|null, model: string|null, cursorBaseUrl: string|null }} llmConfig
 * @returns {LLMProvider|null}
 */
export function createLLMProvider(llmConfig) {
  if (!llmConfig?.provider) return null;

  const { provider, apiKey, model } = llmConfig;
  const { provider, apiKey, model, cursorBaseUrl } = llmConfig;

  switch (provider.toLowerCase()) {
    case "anthropic":
@@ -40,7 +42,9 @@ return new GeminiProvider({ apiKey, model });
      return new GeminiProvider({ apiKey, model });
    case "codex":
      return new CodexProvider({ model });
    case "minimax":
    case "cursor":
      return new CursorProvider({ baseUrl: cursorBaseUrl, apiKey, model });
    case "minimax":
      return new MiniMaxProvider({ apiKey, model });
    case "mistral":
      return new MistralProvider({ apiKey, model });
5 changes: 4 additions & 1 deletion package.json
@@ -12,7 +12,9 @@
    "brief:save": "node apis/save-briefing.mjs",
    "diag": "node diag.mjs",
    "clean": "node scripts/clean.mjs",
    "fresh-start": "npm run clean && npm start"
    "fresh-start": "npm run clean && npm start",
    "test": "node --test test/",
    "test:cursor-integration": "node --test test/llm-cursor-integration.test.mjs"
  },
  "keywords": [
    "osint",
@@ -27,6 +29,7 @@
    "npm": ">=10"
  },
  "dependencies": {
    "cursor-api-proxy": "^0.4.0",
    "express": "^5.1.0"
  },
  "optionalDependencies": {
27 changes: 27 additions & 0 deletions test/llm-cursor-integration.test.mjs
@@ -0,0 +1,27 @@
// Integration test for Cursor LLM provider (requires Cursor CLI + proxy)

import test from 'node:test';
import assert from 'node:assert/strict';
import { createLLMProvider } from '../lib/llm/index.mjs';

const skip = process.env.LLM_PROVIDER !== 'cursor';

test('CursorProvider integration test', { skip }, async (t) => {
  await t.test('performs live API call', async () => {
    const provider = createLLMProvider({
      provider: 'cursor',
      apiKey: process.env.LLM_API_KEY || null,
      model: process.env.LLM_MODEL || 'auto',
    });

    const result = await provider.complete(
      'Reply with exactly "Hello".',
      'Hi'
    );
    assert.ok(result.text.length > 0, 'Should return text');
    assert.ok(
      result.usage.inputTokens >= 0 && result.usage.outputTokens >= 0,
      'Should return usage'
    );
  });
});