An AI-powered intelligence aggregation and refinement platform. The system polls data from multiple sources (CMS, APIs), refines it with AI agents, and presents it in a modern, interactive dashboard.
## Features

- Multi-Source Polling: Automatically fetches data from Payload CMS and other API sources.
- AI Refinement: Uses AgentScope and LLMs (DashScope) to analyze, summarize, and tag intelligence items.
- Real-time Updates: Live data streaming via Server-Sent Events (SSE).
- Intelligence Dashboard:
  - Virtualized scrolling for high performance with large datasets.
  - Advanced filtering (time range, hot topics, search).
  - Detail view comparing original vs. refined content.
- User Management:
  - Secure authentication (JWT).
  - Profile management and preferences.
  - Favorites system.
- Data Export: Support for exporting intelligence reports in CSV, JSON, and DOCX formats.
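The real-time updates above arrive over SSE, which is a simple line-oriented text protocol. As an illustration (not the project's actual client or event names), a minimal parser for the SSE wire format might look like:

```python
import io
import json

def parse_sse(stream):
    """Yield (event, data) tuples from a text stream in SSE wire format."""
    event, data_lines = "message", []
    for line in stream:
        line = line.rstrip("\n")
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:
            # A blank line terminates one event.
            yield event, "\n".join(data_lines)
            event, data_lines = "message", []

# Simulated stream; a real client would read the HTTP response line by line.
raw = 'event: intel\ndata: {"title": "New item"}\n\n'
for event, data in parse_sse(io.StringIO(raw)):
    print(event, json.loads(data)["title"])  # intel New item
```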
## Tech Stack

### Backend

- Framework: Python (FastAPI)
- Database: SQLite (SQLAlchemy ORM)
- AI/LLM: AgentScope, DashScope
- Async: asyncio, aiohttp
- Auth: OAuth2 with JWT (Passlib, python-jose)
### Frontend

- Framework: React 18 (Vite)
- Language: TypeScript
- Styling: Tailwind CSS
- UI Components: Lucide React, React Virtuoso (virtualized lists)
- State/API: Context API, Axios
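The backend's auth stack signs access tokens as HS256 JWTs (via Passlib and python-jose). Purely to illustrate what that signing scheme involves, here is a stdlib-only sketch of HS256 JWT creation and verification; it is not the project's actual code:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, secret: str) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: str) -> dict:
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    payload = json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))
    if payload.get("exp", float("inf")) < time.time():
        raise ValueError("token expired")
    return payload

token = make_jwt({"sub": "alice", "exp": time.time() + 1800}, "demo-secret")
print(verify_jwt(token, "demo-secret")["sub"])  # alice
```

In the real backend, python-jose handles this; the `ALGORITHM` and `ACCESS_TOKEN_EXPIRE_MINUTES` settings in `backend/.env` map to the `alg` header and the `exp` claim shown here.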
## Prerequisites

- Python 3.9+
- Node.js 16+
- DashScope API key (for AI features)
- Payload CMS credentials (optional, for CMS integration)
## Backend Setup

1. Navigate to the backend directory:

   ```bash
   cd backend
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r ../requirements.txt
   ```

4. Configure environment variables. Create or edit `backend/.env` with the following content:

   ```env
   # AI Provider
   DASHSCOPE_API_KEY=your_dashscope_api_key

   # CMS Integration (Optional)
   CMS_URL=https://your-cms-url.com
   CMS_COLLECTION=posts
   CMS_EMAIL=your_email
   CMS_PASSWORD=your_password
   CMS_USER_COLLECTION=users
   POLL_INTERVAL=60

   # Security
   SECRET_KEY=your_secret_key_generated_by_openssl
   ALGORITHM=HS256
   ACCESS_TOKEN_EXPIRE_MINUTES=30
   ```

5. Run the server:

   ```bash
   uvicorn app.main:app --host 0.0.0.0 --port 8001 --reload
   ```

   The API will be available at `http://localhost:8001`, with interactive API docs at `http://localhost:8001/docs`.
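How the backend consumes these variables is implementation-specific (commonly python-dotenv or pydantic settings). Purely for illustration, a minimal stdlib sketch of parsing a `.env` file of this shape — `parse_env` is a hypothetical helper, not the project's loader:

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """\
# Security
SECRET_KEY=change-me
ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=30
"""
config = parse_env(sample)
print(config["ALGORITHM"])  # HS256
```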
## Frontend Setup

1. Navigate to the frontend directory:

   ```bash
   cd frontend
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Run the development server:

   ```bash
   npm run dev
   ```

   The application will be available at `http://localhost:5173`.
## Deployment (PM2)

1. Ensure the backend virtualenv is created and dependencies are installed:

   ```bash
   cd backend
   python -m venv venv
   source venv/bin/activate
   pip install -r ../requirements.txt
   ```

2. Install PM2 and start the API:

   ```bash
   npm i -g pm2
   pm2 start backend/ecosystem.config.cjs
   pm2 status
   pm2 logs api
   ```

3. Build the frontend:

   ```bash
   cd frontend
   npm install
   npm run build
   ```

## Export API

- Endpoint: `POST /api/intel/export`
- Data source:
  - If `ids` are provided, the API exports items in the same order as `ids`.
  - If some `ids` are not found in the database, the API falls back to the hot-stream cache and persists those items to the database during export.
  - If `ids` are not provided, the API exports by filters (type/q/range) with `limit=1000`.
- DOCX layout (per item):
  - Proposed column (拟投栏目): `tag1 / tag2 / ...`
  - Event time (事件时间): `time`
  - Value points (价值点): `summary`
  - Title, centered and bold (标题): `title`
  - Body (正文): `content` (falls back to `summary` when empty)
  - Source info (来源信息): `来源 / 原标题 / 来源URL` (source / original title / source URL)
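The `ids` ordering and cache-fallback rules for export can be sketched in a few lines; the dict-based `db` and `hot_cache` here are hypothetical stand-ins for the real database and hot-stream cache:

```python
def resolve_export_items(ids, db, hot_cache):
    """Return items in the order of `ids`, falling back to the hot-stream
    cache for ids missing from the database and persisting those hits."""
    items = []
    for item_id in ids:
        item = db.get(item_id)
        if item is None:
            item = hot_cache.get(item_id)
            if item is not None:
                db[item_id] = item  # persist the cache hit during export
        if item is not None:
            items.append(item)
    return items

db = {1: {"id": 1, "title": "in db"}}
hot_cache = {2: {"id": 2, "title": "from cache"}}
result = resolve_export_items([2, 1], db, hot_cache)
print([item["title"] for item in result])  # ['from cache', 'in db']
```

Note that the output preserves the request order `[2, 1]`, not database order, matching the first rule above.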
## Tests

This repo uses runnable Python scripts under `tests/` for validation.

- DOCX export format test: generates a DOCX, reads it back, and asserts the paragraph order and content.

  ```bash
  python tests/test_export_docx_format.py
  ```

Some HTTP-flow tests hard-code `BASE_URL = "http://localhost:8000"` in the script; adjust it to your actual API base (e.g. `http://localhost:8001`) if needed.
## Project Structure

```
system_mvp/
├── requirements.txt
├── backend/
│   ├── app/
│   │   ├── agent/        # AI Agent logic (Orchestrator, Refiner)
│   │   ├── routes/       # API Endpoints (Auth, Intel, Agent)
│   │   ├── services/     # Business logic (Pollers, Auth Utils)
│   │   ├── crud.py       # Database operations
│   │   ├── models.py     # Pydantic models
│   │   ├── db_models.py  # SQLAlchemy models
│   │   └── main.py       # App entry point
│   ├── ecosystem.config.cjs
│   └── .env
├── frontend/
│   ├── src/
│   │   ├── components/   # Reusable UI components
│   │   ├── pages/        # Application pages
│   │   ├── context/      # React Context (Auth)
│   │   ├── hooks/        # Custom Hooks
│   │   ├── api.ts        # API Client
│   │   └── App.tsx
│   └── package.json
├── tests/
└── readme.md
```
## Core Concepts

- Poller: A background service (or scheduled job) that fetches raw data from external sources.
- Orchestrator: The central coordinator that manages data flow between the Poller, the database, and the AI agents.
- Refinement: The process of taking raw, potentially unstructured data and using LLMs to extract key information (summary, tags, sentiment) and standardize the format.
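The interaction between these components amounts to a poll → refine → store loop. A simplified asyncio sketch, with illustrative stand-in functions rather than the project's actual API:

```python
import asyncio

async def fetch_raw(source: str) -> list[dict]:
    # Stand-in for an aiohttp call to a CMS/API source (the Poller's job).
    await asyncio.sleep(0)
    return [{"source": source, "text": "raw item"}]

async def refine(item: dict) -> dict:
    # Stand-in for the LLM refinement step (summary/tags/sentiment).
    await asyncio.sleep(0)
    return {**item, "summary": item["text"][:50], "tags": ["demo"]}

async def poll_once(sources: list[str], store: list[dict]) -> None:
    # The Orchestrator's role: route raw data through refinement into storage.
    for source in sources:
        for raw in await fetch_raw(source):
            store.append(await refine(raw))

store: list[dict] = []
asyncio.run(poll_once(["payload-cms"], store))
print(store[0]["summary"])  # raw item
```

In a real deployment the loop would repeat with `await asyncio.sleep(POLL_INTERVAL)` between iterations, matching the `POLL_INTERVAL` setting in `backend/.env`.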
## Contributing

1. Fork the repository.
2. Create your feature branch (`git checkout -b feature/amazing-feature`).
3. Commit your changes (`git commit -m 'Add some amazing feature'`).
4. Push to the branch (`git push origin feature/amazing-feature`).
5. Open a Pull Request.