Ananta is a local-first OSINT analysis platform built around a simple idea: collect evidence from public sources, keep the workflow auditable, and turn technical signals into readable intelligence reports.
It combines a FastAPI backend, a static web interface, optional Celery workers for background scans, and an optional local LLM served through text-generation-webui for richer report synthesis. The platform is designed for analysts, security practitioners, and researchers who want structured reconnaissance and reporting without sending their workflow to a third-party cloud service.
| Main Console | Project View |
|---|---|
| *(screenshot)* | *(screenshot)* |
Ananta accepts a target such as a domain, IP address, URL, or general research query and builds a report from multiple OSINT sources. The system can run lightweight passive lookups, API-backed enrichments, and, when explicitly approved, more sensitive checks such as port and vulnerability scans. Results are stored, rendered in multiple views, and exported in several formats.
Core capabilities include:
- Local-first OSINT workflow with a browser-based interface.
- Layered execution model that separates passive, enriched, and sensitive tooling.
- Async background jobs with progress tracking and worker monitoring.
- Structured intelligence outputs such as graph data, exposures, timeline events, and diffs.
- Multi-format export to PDF, JSON, CSV, XML, Markdown, and XLSX.
- Audit logging for tool execution, approvals, and operational observability.
- Optional local LLM integration for better narrative reporting while keeping model execution on your machine.
Ananta classifies tools into three layers:
- Layer 1: passive, low-risk collection such as WHOIS, DNS, HTTP headers, robots.txt, TLS and email posture checks.
- Layer 2: enriched or aggregated sources such as Censys, crt.sh, Wayback Machine, Shodan, VirusTotal, SecurityTrails, and SpiderFoot when configured.
- Layer 3: sensitive actions such as active port scanning and vulnerability scanning, protected by explicit approval flows and audit logging.
This model is not just documentation. The application uses it to decide what can run automatically, what must be logged, and what requires human confirmation.
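As a rough sketch of how such a policy check can work (names here are hypothetical — the real classification and execution policy live in `tools/tool_registry.py` and may differ):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ToolPolicy:
    """Hypothetical, simplified shape of a registered tool."""
    name: str
    layer: int  # 1 = passive, 2 = enriched, 3 = sensitive


def can_auto_run(tool: ToolPolicy) -> bool:
    """Layer 1 and Layer 2 tools may run without human confirmation."""
    return tool.layer in (1, 2)


def requires_approval(tool: ToolPolicy) -> bool:
    """Layer 3 tools must pass through the approval workflow first."""
    return tool.layer == 3
```

The point of centralizing this check is that every execution path (direct, Celery, scheduled) consults the same gate before running a tool.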
Ananta is designed to produce usable intelligence rather than a list of disconnected outputs. A typical report combines:
- Raw tool evidence.
- Normalized and structured findings.
- Derived graph relationships.
- Exposure summaries.
- Timeline events and history.
- Optional LLM-written narrative synthesis.
If the local LLM is unavailable, the backend can still generate a fallback report path so the platform remains useful without a model.
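A minimal sketch of that degrade-gracefully shape (function names are hypothetical; the actual fallback lives in `backend_logic.py`):

```python
from typing import Callable, Optional


def build_report(findings: dict,
                 llm_synthesize: Optional[Callable[[dict], str]] = None) -> str:
    """Prefer an LLM-written narrative; fall back to a template otherwise.

    Hypothetical sketch: if the model callable is missing or raises
    (offline, timeout), a structured template report is produced instead.
    """
    if llm_synthesize is not None:
        try:
            return llm_synthesize(findings)
        except Exception:
            pass  # model unreachable or errored: degrade gracefully
    # Template-based fallback: still structured and readable, just less narrative.
    lines = [f"- {key}: {value}" for key, value in sorted(findings.items())]
    return "Findings\n" + "\n".join(lines)
```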
Longer scans can run in the background through Celery and Redis. The UI and API can track:
- Job status.
- Progress percentage.
- Result payloads.
- Worker availability.
- Monitoring logs and system health.
This allows the main interface to remain responsive while longer scans continue in the background.
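The tracked job state can be pictured roughly like this (a hypothetical in-memory sketch — the real state is a `ScanJob` row persisted through SQLAlchemy and updated by Celery workers):

```python
from dataclasses import dataclass, field


@dataclass
class TrackedJob:
    """Hypothetical shape of a background job as exposed to the UI/API."""
    job_id: str
    status: str = "pending"   # pending -> running -> done | failed
    progress: int = 0         # percent complete, 0..100
    result: dict = field(default_factory=dict)


def advance(job: TrackedJob, progress: int) -> TrackedJob:
    """Monotonically advance progress; mark the job done at 100%."""
    job.progress = max(job.progress, min(progress, 100))
    job.status = "done" if job.progress >= 100 else "running"
    return job
```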
Ananta includes a number of controls that matter for a public security-oriented project:
- Request IDs on every request.
- Rate limiting on expensive endpoints.
- Security headers and configurable CORS.
- Standardized error payloads.
- Full audit trail for tool execution.
- Approval workflow for sensitive actions.
- Health and worker status endpoints.
The project is organized around a few central components:
| Component | Role |
|---|---|
| `main.py` | FastAPI application setup, middleware, error handling, app metadata |
| `web_routes.py` | Main HTTP and WebSocket routes |
| `backend_logic.py` | Core orchestration, enrichment, normalization, reporting, and exports |
| `tasks.py` | Celery jobs for async scans, cleanup, scheduling, and parallel workflows |
| `celery_config.py` | Celery queues, routing, limits, and Redis-backed configuration |
| `database.py` | SQLAlchemy models, engine setup, session handling, DB fallback logic |
| `models.py` | Pydantic request and response schemas |
| `tools/tool_registry.py` | Tool classification, legal-risk metadata, and execution policy |
| `web/html` | Main UI pages |
| `web/javascript` | Frontend application logic, pages, service worker, and monitoring scripts |
| `web/css` | Main styling and mobile styling |
At a high level, a full scan looks like this:
- A user submits a target through the web UI or API.
- The backend determines whether the request is conversational, passive OSINT, enriched OSINT, or a sensitive workflow.
- Relevant tools run directly or are dispatched to Celery depending on scan mode.
- Results are normalized and stored in the database.
- Structured outputs such as graph, exposures, and timeline data are derived.
- A final report is generated, optionally with local LLM synthesis.
- The report becomes available through the UI, history views, exports, and comparison endpoints.
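The routing decision in the second step can be sketched as a simple classifier (a deliberately simplified, hypothetical version — the real logic in `backend_logic.py` also detects conversational and sensitive requests):

```python
import ipaddress
from urllib.parse import urlparse


def classify_target(query: str) -> str:
    """Sketch of target classification: IP, URL, domain, or free-form query."""
    candidate = query.strip()
    try:
        ipaddress.ip_address(candidate)
        return "ip"
    except ValueError:
        pass
    if candidate.startswith(("http://", "https://")) and urlparse(candidate).netloc:
        return "url"
    if "." in candidate and " " not in candidate:
        return "domain"
    return "query"  # treated as a conversational / research request
```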
The frontend is served directly by the FastAPI app and includes several focused pages:
| Page | Purpose |
|---|---|
| `/web/html/index.html` | Main analysis console |
| `/web/html/database.html` | Stored reports and history |
| `/web/html/monitoring.html` | Operational monitoring and logs |
| `/web/html/workers.html` | Celery worker visibility |
| `/web/html/timeline.html` | Timeline-based report history |
| `/web/html/comparison.html` | Diff and comparison views |
| `/web/html/scheduled.html` | Scheduled scan management |
| `/web/html/offline.html` | Offline fallback page for the PWA experience |
The frontend also includes:
- WebSocket job updates.
- Offline support through a service worker.
- Mobile styling.
- Theme support.
- Monitoring and scheduled-scan pages separated into dedicated scripts.
Ananta exposes a wide API surface. The most important endpoints are grouped below.
- `POST /agent/ask`
- `POST /agent/ask_async`
- `GET /jobs/{job_id}`
- `GET /jobs/`

- `GET /osint/search_smart/`
- `GET /osint/whois/`
- `GET /osint/dns/`
- `GET /osint/headers/`
- `GET /osint/censys/`
- `GET /osint/report/`
- `GET /osint/report/view`
- `GET /osint/history/`

- `GET /osint/graph`
- `GET /osint/structured`
- `GET /osint/exposures`
- `GET /osint/timeline_events`
- `GET /osint/timeline_summary`
- `GET /osint/timeline`
- `GET /osint/diff`
- `GET /osint/compare`

- `GET /osint/generate_pdf/`
- `GET /osint/export/json`
- `GET /osint/export/csv`
- `GET /osint/export/xml`
- `GET /osint/export/markdown`
- `GET /osint/export/xlsx`

- `GET /health`
- `GET /workers/status`
- `GET /monitoring/stats`
- `GET /monitoring/logs`
- `GET /cache/stats`
- `POST /cache/clear`
- `POST /api-keys/create`
- `GET /api-keys/list`
- `DELETE /api-keys/{key_id}`

- `POST /agent/request_approval`
- `POST /agent/approve/{approval_id}`
- `POST /agent/deny/{approval_id}`

- `GET /ws/jobs/{job_id}` (WebSocket)
Interactive API documentation is available at /docs when the server is running.
Some administrative or protected routes may require an API key through the `X-API-Key` header.
The async analysis endpoint supports several execution modes:
| Mode | Description |
|---|---|
| `fast` | Layer 1 only, optimized for quick passive analysis |
| `standard` | Layer 1 and Layer 2 sources |
| `full` | Main sequential report flow |
| `parallel` | Layer 1 and Layer 2 executed in parallel, then aggregated |
| `priority` | Urgent queue |
| `critical` | Layer 3 workflow with approved sensitive tools |
This gives you a practical tradeoff between speed, depth, and operational risk.
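The mode-to-layer mapping can be pictured as a small lookup table. This is a sketch: the layer sets shown for `full` and `priority` are assumptions, and the real dispatch lives in `tasks.py` and `backend_logic.py`:

```python
# Hypothetical mapping of scan modes to the tool layers they engage.
SCAN_MODES = {
    "fast":     {"layers": [1],       "parallel": False},
    "standard": {"layers": [1, 2],    "parallel": False},
    "full":     {"layers": [1, 2],    "parallel": False},  # assumed
    "parallel": {"layers": [1, 2],    "parallel": True},
    "priority": {"layers": [1, 2],    "parallel": False},  # assumed; urgent queue
    "critical": {"layers": [1, 2, 3], "parallel": False},  # Layer 3 needs approval
}


def layers_for(mode: str) -> list:
    """Return the tool layers a given scan mode engages."""
    return SCAN_MODES[mode]["layers"]
```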
The main persisted entities include:
- `EntityReport`: stored report and raw data for a target.
- `ScanJob`: async job state and progress.
- `ScanJobArchive`: historical archive for completed jobs.
- `ToolExecutionLog`: audit log for tool runs.
- `Entity`: normalized entities discovered during analysis.
- `Finding`: structured findings and evidence.
- `PendingApproval`: approval records for sensitive tools.
- `ScheduledScan`: recurring analysis definitions.
This structure is what enables history, exports, timeline generation, graph views, and auditability.
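As a loose illustration of how these entities relate (dataclasses used here purely as a sketch — the real models are SQLAlchemy classes in `database.py` with more columns and constraints):

```python
from dataclasses import dataclass, field


@dataclass
class Finding:
    """Hypothetical, minimal shape of a structured finding."""
    kind: str       # e.g. "tls", "dns", "exposure"
    evidence: str   # raw or normalized supporting evidence


@dataclass
class EntityReport:
    """Hypothetical, minimal shape of a stored report for one target."""
    target: str
    raw_data: dict = field(default_factory=dict)
    findings: list = field(default_factory=list)


report = EntityReport("example.com")
report.findings.append(Finding("tls", "certificate expires in 12 days"))
```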
- Python 3.10+
- pip
- A supported SQL database:
  - PostgreSQL is the main intended configuration.
  - SQLite fallback is supported automatically when `DATABASE_URL` is not set.
- Redis
- Celery worker process
- `text-generation-webui` submodule initialized
- A local model compatible with the configured endpoint

Install the project requirements from:

- `requirements.txt`
- `requirements-dev.txt`
```bash
git clone --recursive https://github.com/Akasha53/Ananta.git
cd Ananta
```

The `--recursive` flag is important because the repository uses `text-generation-webui` as a git submodule.

```bash
pip install -r requirements.txt -r requirements-dev.txt
```

Start from `.env.example` and create a local `.env`.
Example:

```env
DATABASE_URL=postgresql://user:password@localhost/ananta_db
REDIS_URL=redis://localhost:6379/0
LLM_API_URL=http://127.0.0.1:5000/v1/chat/completions
LLM_TIMEOUT=420

# Optional external enrichments
CENSYS_API_KEY=
VIRUSTOTAL_API_KEY=
SHODAN_API_KEY=
SECURITYTRAILS_API_URL=
SPIDERFOOT_API_URL=http://127.0.0.1:5001
SPIDERFOOT_API_KEY=

# Environment and web security
ENVIRONMENT=development
CORS_ORIGINS=http://localhost:8010
RATE_LIMIT_ENABLED=true
```

Important notes:

- If `DATABASE_URL` is omitted, Ananta falls back to `sqlite:///./ananta.db`.
- If optional provider keys are not configured, those enrichments are skipped cleanly.
- `LLM_API_URL` is configurable and does not have to be the default local endpoint.
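The SQLite fallback described above amounts to something like the following (a sketch — the actual logic lives in `database.py`):

```python
import os
from typing import Optional


def resolve_database_url(env: Optional[dict] = None) -> str:
    """Return DATABASE_URL when set; otherwise fall back to local SQLite."""
    env = os.environ if env is None else env
    return env.get("DATABASE_URL") or "sqlite:///./ananta.db"
```

This keeps the platform usable out of the box while letting production deployments point at PostgreSQL through a single environment variable.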
```bash
python -c "from database import init_db; init_db()"
alembic upgrade head
```

The repository includes helper scripts:
```bat
launch_all.bat
```

This starts:

- FastAPI on port `8010`
- the local LLM server on port `5000`
- a Celery worker

To stop services:

```bat
stop_all.bat
```

Terminal 1:

```bash
redis-server
```

Terminal 2:

```bash
python -m uvicorn main:app --host 127.0.0.1 --port 8010 --reload
```

Terminal 3:

```bash
cd text-generation-webui
python server.py --model mistralai_Mistral-7B-Instruct-v0.2 --api --nowebui --gpu-memory 7GiB --load-in-4bit
```

Terminal 4:

```bash
python -m celery -A tasks.app worker -Q default,osint_fast,osint_medium,osint_critical,priority,maintenance --loglevel=info
```

- Web UI: `http://localhost:8010/web/html/index.html`
- API docs: `http://localhost:8010/docs`
- Health check: `http://localhost:8010/health`
Open the main console and submit a target such as:

- `example.com`
- `8.8.8.8`
- `analyze example.com`
The application will decide whether to handle the request as chat, passive analysis, or a deeper OSINT workflow.
```bash
curl -X POST http://localhost:8010/agent/ask \
  -H "Content-Type: application/json" \
  -d "{\"query\": \"analyze example.com\"}"
```

Start a background scan:

```bash
curl -X POST http://localhost:8010/agent/ask_async \
  -H "Content-Type: application/json" \
  -d "{\"query\": \"analyze example.com\", \"scan_mode\": \"full\", \"language\": \"en\"}"
```

Supported report language codes in the async flow are `fr`, `en`, `es`, and `de`.

Check progress:

```bash
curl http://localhost:8010/jobs/{JOB_ID}
```

Check worker status:

```bash
curl http://localhost:8010/workers/status
```

Ananta stores report data so it can be revisited, compared, and exported later. Supported export formats include:
- JSON
- CSV
- XML
- Markdown
- XLSX
The PDF export is intended to produce a readable analyst-facing report with summary sections, technical annexes, and legal warning text.
Ananta is built for OSINT, research, and authorized security work. The project contains guardrails, but those guardrails do not replace operator responsibility.
Important principles:
- Passive collection is the default path.
- Sensitive actions must require explicit approval.
- Tool runs are auditable.
- The user remains responsible for legal authorization and use context.
Layer 3 functionality such as port scanning and vulnerability checks should only be used where you have clear permission to test the target.
The platform includes operational endpoints and UI pages for:
- application health
- cache statistics
- worker detection
- monitoring logs
- scheduled scans
- background cleanup
Middleware and runtime protections include:
- CORS configuration by environment
- CSP and other security headers
- per-endpoint rate limiting
- gzip compression
- request tracing with `X-Request-ID`
- standardized error payloads
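The request-tracing idea can be sketched in plain Python (a hypothetical stand-in — the real version is FastAPI/Starlette middleware operating on HTTP objects):

```python
import uuid


def with_request_id(handler):
    """Wrap a handler so every response carries an X-Request-ID header.

    Sketch only: reuses the client's ID when present, otherwise mints one,
    so log lines for one request can be correlated end to end.
    """
    def wrapped(request: dict) -> dict:
        rid = request.get("headers", {}).get("X-Request-ID") or uuid.uuid4().hex
        response = handler(request)
        response.setdefault("headers", {})["X-Request-ID"] = rid
        return response
    return wrapped
```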
Run the server directly:

```bash
python main.py
```

Run tests:

```bash
pytest tests/ -v
```

Inspect routes:

```bash
python -c "from main import app; print([r.path for r in app.routes])"
```

Check the configured database URL:

```bash
python -c "from database import engine; print(engine.url)"
```

Create a migration:

```bash
alembic revision --autogenerate -m "describe change"
```

Apply migrations:

```bash
alembic upgrade head
```

Rollback one step:

```bash
alembic downgrade -1
```

```
.
├── main.py
├── web_routes.py
├── backend_logic.py
├── tasks.py
├── celery_config.py
├── database.py
├── models.py
├── tools/
├── osint_tools/
├── web/
│   ├── html/
│   ├── javascript/
│   └── css/
├── docs/
├── tests/
├── alembic/
└── text-generation-webui/   # git submodule
```
- Redis is required for async jobs, worker monitoring, and scheduled scans.
- The local LLM is optional in architecture but strongly recommended for the full reporting experience.
- External provider enrichments depend on your own API keys and quotas.
- Windows uses `--pool=solo` for Celery compatibility.
- Some project documentation in `docs/` is still more detailed than the README for specific implementation areas.
The Ananta codebase is released under the MIT License. See LICENSE.
This repository also includes the optional text-generation-webui submodule, which remains under its own upstream AGPL-3.0 license. The MIT license at the root does not replace or override that upstream license.
Issues and pull requests are welcome. If you contribute to the project:
- keep changes aligned with the layered safety model
- avoid weakening auditability or approval flows for sensitive actions
- document any new environment variables, routes, or external provider requirements
Ananta is an actively evolving project with working core functionality around analysis, reporting, async execution, structured outputs, and operational tooling. The roadmap in docs/ROADMAP_ANANTA.md covers future work such as richer graphing, stronger correlation, better comparison workflows, and deeper reporting views.

