45 changes: 45 additions & 0 deletions .github/copilot-instructions.md
@@ -0,0 +1,45 @@
# Project Guidelines

## Code Style
- Keep compatibility with Jython 2.7 and Burp extension APIs.
- Prefer simple, defensive Python over modern Python 3-only features.
- Preserve existing naming and class structure in `silentchain_ai_community.py`.
- Keep UI changes EDT-safe: mutate Swing UI on `SwingUtilities.invokeLater`.
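The EDT-safety rule can be sketched as below; this is a minimal illustration with a hypothetical helper name, not code from the extension. In Jython the `javax.swing` import only resolves inside Burp's JVM, so the sketch falls back to a direct call when run elsewhere:

```python
# Hedged sketch of the EDT-safety convention; `update_status` is an
# illustrative helper, not an actual method of the extension.
try:
    from javax.swing import SwingUtilities  # resolves only under Jython/Burp
except ImportError:
    SwingUtilities = None  # plain CPython: no Swing available

def update_status(label, text):
    """Mutate a Swing component on the Event Dispatch Thread only."""
    def do_update():
        label.setText(text)
    if SwingUtilities is not None:
        SwingUtilities.invokeLater(do_update)  # defer to the EDT
    else:
        do_update()  # illustration-only fallback outside the JVM
```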

## Architecture
- Primary entry point is `silentchain_ai_community.py`.
- `BurpExtender` owns lifecycle, UI, settings, provider dispatch, caching, and scan orchestration.
- Passive analysis pipeline is `doPassiveScan/processHttpMessage -> AnalyzeTask -> analyze -> _perform_analysis -> ask_ai -> addScanIssue`.
- Threading model uses fixed thread pool (`Executors.newFixedThreadPool(5)`) and semaphores:
- global cap: 5 concurrent AI calls
- per-host cap: 2 concurrent calls
- Persistent files are in home directory:
- `~/.silentchain_config.json`
- `~/.silentchain_vuln_cache.json`
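The two concurrency caps above, and the host-before-global acquisition order called out under Pitfalls, can be sketched roughly as follows. This is a Jython 2.7-compatible stand-in; the function and attribute names are illustrative, not the extension's actual ones:

```python
# Hedged sketch of the semaphore layering; names are illustrative.
import threading

GLOBAL_CAP = 5    # max concurrent AI calls overall
PER_HOST_CAP = 2  # max concurrent AI calls per host

global_sem = threading.Semaphore(GLOBAL_CAP)
host_sems = {}
host_sems_lock = threading.Lock()

def _host_sem(host):
    # Lazily create one semaphore per host, guarded against races.
    with host_sems_lock:
        if host not in host_sems:
            host_sems[host] = threading.Semaphore(PER_HOST_CAP)
        return host_sems[host]

def run_analysis(host, work):
    # Acquire the narrow (per-host) semaphore before the wide (global)
    # one, so a thread never holds a scarce global slot while waiting
    # on a busy host.
    sem = _host_sem(host)
    sem.acquire()
    try:
        global_sem.acquire()
        try:
            return work()
        finally:
            global_sem.release()
    finally:
        sem.release()
```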

## Build and Test
- There is no local build/test harness; this is a Burp runtime extension.
- Main verification path is manual load in Burp:
1. `Extender -> Extensions -> Add -> Python`
2. Load `silentchain_ai_community.py`
3. Use Settings -> Test Connection
- Optional Azure env validation:
- `./tools/test_azure_env.sh ./.env`

## Conventions
- Prefer new work in `silentchain_ai_community.py` unless a task explicitly targets v2 variants.
- Keep exported CSV filename pattern unchanged: `SILENTCHAIN_Findings_YYYYMMDD_HHMMSS.csv`.
- Preserve confidence mapping and severity normalization behavior.
- Keep request signature logic stable unless intentional cache behavior change is requested.
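The CSV naming pattern above can be reproduced with a one-line `strftime`; the helper name here is illustrative, not the extension's:

```python
# Hedged sketch of the exported-CSV naming convention.
import datetime

def findings_csv_name(now=None):
    # Pattern: SILENTCHAIN_Findings_YYYYMMDD_HHMMSS.csv
    now = now or datetime.datetime.now()
    return "SILENTCHAIN_Findings_%s.csv" % now.strftime("%Y%m%d_%H%M%S")
```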

## Pitfalls
- Burp/Jython imports will appear unresolved outside Burp; this is expected.
- Avoid blocking calls on the UI thread.
- Keep semaphore acquisition order in `analyze` (host, then global) to avoid deadlock regressions.
- If changing config schema, update migration path and config versioning.

## Documentation
- Architecture internals: `docs/INTERNAL_WORKING.md`
- Developer flow and release checks: `docs/DEVELOPER_WORKFLOW.md`
- Setup and provider configuration: `docs/guides/QUICKSTART.md`, `docs/guides/INSTALLATION.md`
- Change history and optimization rationale: `CHANGELOG.md`, `docs/project/OPTIMIZATION_PLAN.md`
130 changes: 0 additions & 130 deletions BENCHMARK.md

This file was deleted.

95 changes: 86 additions & 9 deletions CHANGELOG.md
@@ -7,6 +7,59 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

### Fixed - Critical (P0)
- **CRITICAL: `doPassiveScan()` bypasses thread pool** - Raw threading spawned in passive scan ignored the thread pool entirely
- Changed from `threading.Thread(target=self.analyze, ...)` to thread pool submission
- Now properly queues all passive scan analysis through `AnalyzeTask` and thread pool
- Prevents resource exhaustion from unlimited thread spawning
- **CRITICAL: Semaphore deadlock risk** - Global and per-host semaphores acquired in wrong order
- Changed acquisition order: now acquires host semaphore before global (narrow before wide)
- Prevents threads holding global slots from blocking on host locks
- Eliminates silent hangs under concurrent load
- **CRITICAL: `_migrate_config()` crashes on startup** - Calls `save_config()` before the `stdout` wrapper is initialized
- Removed `save_config()` call from `_migrate_config()` — migration auto-persists on next settings save
- Prevents `AttributeError` on stdout access during initial config load
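The thread-pool fix described above can be sketched as below. The real extension submits an `AnalyzeTask` runnable to `java.util.concurrent` `Executors.newFixedThreadPool(5)`; this stand-in uses a plain Python worker pool so the pattern is runnable anywhere, and the class internals are illustrative, not the extension's actual code:

```python
# Hedged sketch: bounded-pool submission instead of raw threading.Thread.
try:
    import queue          # Python 3
except ImportError:
    import Queue as queue  # Jython 2.7

import threading

class AnalyzeTask(object):
    """Stand-in for the extension's AnalyzeTask runnable."""
    def __init__(self, analyze, message):
        self.analyze = analyze
        self.message = message
    def run(self):
        self.analyze(self.message)

class FixedPool(object):
    """Minimal fixed-size pool, mirroring Executors.newFixedThreadPool(n)."""
    def __init__(self, size=5):
        self.tasks = queue.Queue()
        for _ in range(size):
            t = threading.Thread(target=self._worker)
            t.daemon = True
            t.start()
    def _worker(self):
        while True:
            task = self.tasks.get()   # blocks; caps concurrency at pool size
            task.run()
            self.tasks.task_done()
    def submit(self, task):
        self.tasks.put(task)
    def join(self):
        self.tasks.join()
```

The key point is that every passive-scan message becomes a queued task, so concurrency is capped by the pool size rather than by how fast traffic arrives.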

### Fixed - High (P1)
- **`_store_cached_findings()` blocks analysis threads with synchronous disk writes** - Every finding triggered immediate file I/O
- Changed to set `_cache_dirty = True` flag only, letting async timer handle writes
- Removed `self.save_vuln_cache()` blocking call
- Analysis threads now proceed without waiting for disk I/O
- **`_async_save_cache()` race condition** - Cache dirty flag cleared before background write could complete
- Now clears flag optimistically before spawn (acceptable for async write)
- Background thread re-queues on failure by setting `_cache_dirty = True` again
- Added exception handling to prevent lost findings
- **Context menu analysis still uses raw threading** - `analyzeFromContextMenu()` spawned threads instead of using pool
- Created `ForcedAnalyzeTask` runnable class for context menu operations
- Now submits through thread pool like passive scan (after fix)
- **MD5 still used in 3 places** - Weak hash in request/finding signature generation
- `_get_url_hash()`: Changed MD5 → SHA-256 (took first 32 chars for compatibility)
- `_get_finding_hash()`: Changed MD5 → SHA-256 (full hash)
- `_analyzeFromContextMenuThread()`: Changed MD5 → SHA-256 for request hash
- Improves collision resistance and security posture
- **AI response `param` field ignored** - Prompt asks AI to identify vulnerable parameters but findings never displayed them
- Added extraction of `ai_param = item.get("param", "")` from AI findings
- Now displays as `<b>Vulnerable Parameter (AI):</b> <code>{param}</code>` in finding details
- Helps pentesters quickly identify the exact vulnerable parameter
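The MD5-to-SHA-256 swap described above amounts to the following; the helper names are illustrative stand-ins for `_get_url_hash()` and `_get_finding_hash()`, not the extension's exact code:

```python
# Hedged sketch of the hash swap; names are illustrative.
import hashlib

def url_hash(url):
    # Truncated to the first 32 hex chars so it matches the old
    # MD5-digest length, keeping existing cache entries addressable.
    return hashlib.sha256(url.encode("utf-8")).hexdigest()[:32]

def finding_hash(detail):
    # Full 64-char SHA-256 digest for finding signatures.
    return hashlib.sha256(detail.encode("utf-8")).hexdigest()
```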

### Added - Medium (P2)
- **Security header coverage** - Added AI prompt categories for common header misconfigurations
- New category: "Missing security headers - CSP, HSTS, X-Frame-Options, X-Content-Type-Options"
- New category: "Sensitive data in responses - PII, tokens, internal paths, debug info"
- New category: "API versioning issues - v1/v2 endpoints with different access controls"
- AI now checks response headers systematically
- **IDOR parameter detection** - Detects common IDOR-vulnerable parameter names
- Checks for patterns: `id`, `user_id`, `account_id`, `order_id`, `invoice_id`, `file_id`, `doc_id`, `record_id`, `item_id`, `uid`, `pid`, `customer_id`, `profile_id`, `token`, `ref`, `key`
- Generates IDOR signal when detected: `{"type": "idor_param_name", "name": "...", "value": "..."}`
- Complements numeric ID detection for better IDOR findings
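The name check above can be sketched as a small filter over request parameters. The signal schema is taken from the entry above; the helper name and parameter representation are illustrative assumptions:

```python
# Hedged sketch of the IDOR parameter-name check.
IDOR_PARAM_NAMES = set([
    "id", "user_id", "account_id", "order_id", "invoice_id", "file_id",
    "doc_id", "record_id", "item_id", "uid", "pid", "customer_id",
    "profile_id", "token", "ref", "key",
])

def idor_signals(params):
    """params: iterable of (name, value) request parameters."""
    signals = []
    for name, value in params:
        if name.lower() in IDOR_PARAM_NAMES:
            signals.append({"type": "idor_param_name",
                            "name": name, "value": value})
    return signals
```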

### Improved - Low (P3)
- **Claude connection test was fake** - `_test_claude_connection()` hardcoded success without verifying API
- Now sends actual test request: `{"model": "...", "max_tokens": 5, "messages": [{"role": "user", "content": "ping"}]}`
- Properly handles HTTP 429 (rate limited but reachable) vs actual failures
- Prints clear feedback: "OK Claude API verified" or specific error message
- Catches both connection and rate-limit conditions
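The payload and status handling described above can be sketched as follows. No network call is made here; the payload shape is taken from the entry above, while the helper names and exact message strings are illustrative:

```python
# Hedged sketch of the connection-test payload and status handling.
import json

def claude_test_payload(model):
    # Minimal test request: 5 tokens, single "ping" user message.
    return json.dumps({
        "model": model,
        "max_tokens": 5,
        "messages": [{"role": "user", "content": "ping"}],
    })

def classify_status(code):
    # 429 means the API is reachable but rate limited, which still
    # counts as a successful connectivity check.
    if code == 200:
        return "OK Claude API verified"
    if code == 429:
        return "OK Claude API reachable (rate limited)"
    return "Claude API error: HTTP %d" % code
```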

### Planned
- Stream AI responses for faster perceived performance
- Support for custom AI models (local fine-tuned models)
@@ -16,6 +69,30 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

---

## [1.1.4] - 2026-03-18

### Added
- **Azure Foundry provider support** - Added Azure AI Foundry (Azure OpenAI-compatible) as a first-class AI provider in Settings
- New provider option: `Azure Foundry`
- Default endpoint helper: `https://YOUR-RESOURCE.openai.azure.com`
- Added provider routing for connection tests and AI inference requests
- Supports deployment discovery via Azure deployments API in Test Connection
- Supports direct chat completion calls with API version handling
- **Azure .env validation script** - Added `tools/test_azure_env.sh` for safe local configuration checks
- Validates required `.env` keys
- Probes Azure endpoint and deployment chat completion reachability
- Reports clear `STATUS: VALID` or `STATUS: INVALID`
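The deployment routing above resolves to the Azure OpenAI-compatible chat-completions URL shape sketched below. The default `api-version` value here is an assumption for illustration, not necessarily what the extension pins:

```python
# Hedged sketch of Azure Foundry request routing; the api-version
# default is an illustrative assumption.
def azure_chat_url(endpoint, deployment, api_version="2024-02-01"):
    # endpoint: e.g. https://YOUR-RESOURCE.openai.azure.com
    # deployment: the deployment name configured in Azure AI Foundry
    return "%s/openai/deployments/%s/chat/completions?api-version=%s" % (
        endpoint.rstrip("/"), deployment, api_version)
```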

### Changed
- **Configuration help text expanded** - Settings now documents Azure Foundry endpoint and deployment-name usage
- **Documentation updated** - README, Quick Start, and Installation guides now include Azure Foundry setup

### User Impact
- Users can run SILENTCHAIN with Azure-hosted OpenAI deployments from Azure AI Foundry
- Existing providers (Ollama, OpenAI, Claude, Gemini) continue to work as before

---

## [1.1.3] - 2026-02-08

### Changed
@@ -484,9 +561,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Documentation
- Comprehensive README with quick start
- Detailed installation guide (INSTALLATION.md)
- 5-minute quick start guide (QUICKSTART.md)
- Contributing guidelines (CONTRIBUTING.md)
- Detailed installation guide (docs/guides/INSTALLATION.md)
- 5-minute quick start guide (docs/guides/QUICKSTART.md)
- Contributing guidelines (docs/project/CONTRIBUTING.md)
- Settings verification guide (SETTINGS_VERIFICATION.md)

### Known Limitations
@@ -529,9 +606,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
```
silentchain_ai_community.py # Main extension file (1549 lines)
README.md # Project documentation
INSTALLATION.md # Setup guide
QUICKSTART.md # 5-minute guide
CONTRIBUTING.md # Development guide
docs/guides/INSTALLATION.md # Setup guide
docs/guides/QUICKSTART.md # 5-minute guide
docs/project/CONTRIBUTING.md # Development guide
LICENSE # MIT License
CHANGELOG.md # This file
```
@@ -627,7 +704,7 @@ Visit https://silentchain.ai for upgrade options.

## Contributing

Found a bug? Have a feature request? See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
Found a bug? Have a feature request? See [docs/project/CONTRIBUTING.md](docs/project/CONTRIBUTING.md) for guidelines.

---

@@ -640,8 +717,8 @@ MIT License - See [LICENSE](LICENSE) file for details.
## Support

- **Community Support**: GitHub Issues
- **Documentation**: https://github.com/silentchainai/SILENTCHAIN
- **Professional Support**: support@silentchain.ai (Professional Edition only)
- **Documentation**: See repository docs in this fork
- **Support**: Use your fork's issue tracker

---

18 changes: 0 additions & 18 deletions CONTRIBUTING.md

This file was deleted.
