-
-
Notifications
You must be signed in to change notification settings - Fork 1.3k
docs: Add Azure DevOps integration plan #766
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: develop
Are you sure you want to change the base?
Conversation
- Add comprehensive branching strategy documentation - Explain main, develop, feature, fix, release, and hotfix branches - Clarify that all PRs should target develop (not main) - Add release process documentation for maintainers - Update PR process to branch from develop - Expand table of contents with new sections
* refactor: restructure project to Apps/frontend and Apps/backend - Move auto-claude-ui to Apps/frontend with feature-based architecture - Move auto-claude to Apps/backend - Switch from pnpm to npm for frontend - Update Node.js requirement to v24.12.0 LTS - Add pre-commit hooks for lint, typecheck, and security audit - Add commit-msg hook for conventional commits - Fix CommonJS compatibility issues (postcss.config, postinstall scripts) - Update README with comprehensive setup and contribution guidelines - Configure ESLint to ignore .cjs files - 0 npm vulnerabilities Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com> * feat(refactor): clean code and move to npm * feat(refactor): clean code and move to npm * chore: update to v2.7.0, remove Docker deps (LadybugDB is embedded) * feat: v2.8.0 - update workflows and configs for Apps/ structure, npm * fix: resolve Python lint errors (F401, I001) * fix: update test paths for Apps/backend structure * fix: add missing facade files and update paths for Apps/backend structure - Fix ruff lint error I001 in auto_claude_tools.py - Create missing facade files to match upstream (agent, ci_discovery, critique, etc.) - Update test paths from auto-claude/ to Apps/backend/ - Update .pre-commit-config.yaml paths for Apps/ structure - Add pytest to pre-commit hooks (skip slow/integration/Windows-incompatible tests) - Fix Unicode encoding in test_agent_architecture.py for Windows Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com> * feat: improve readme * fix: new path * fix: correct release workflow and docs for Apps/ restructure - Fix ARM64 macOS build: pnpm → npm, auto-claude-ui → Apps/frontend - Fix artifact upload paths in release.yml - Update Node.js version to 24 for consistency - Update CLI-USAGE.md with Apps/backend paths - Update RELEASE.md with Apps/frontend/package.json paths 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * refactor: rename Apps/ to apps/ and fix backend path resolution - Rename Apps/ folder to apps/ for consistency with JS/Node conventions - Update all path references across CI/CD workflows, docs, and config files - Fix frontend Python path resolver to look for 'backend' instead of 'auto-claude' - Update path-resolver.ts to correctly find apps/backend in development mode This completes the Apps restructure from PR AndyMik90#122 and prepares for v2.8.0 release. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(electron): correct preload script path from .js to .mjs electron-vite builds the preload script as ESM (index.mjs) but the main process was looking for CommonJS (index.js). This caused the preload to fail silently, making the app fall back to browser mock mode with fake data and non-functional IPC handlers. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * - Introduced `dev:debug` script to enable debugging during development. - Added `dev:mcp` script for running the frontend in MCP mode. These enhancements streamline the development process for frontend developers. * refactor(memory): make Graphiti memory mandatory and remove Docker dependency Memory is now a core component of Auto Claude rather than optional: - Python 3.12+ is required for the backend (not just memory layer) - Graphiti is enabled by default in .env.example - Removed all FalkorDB/Docker references (migrated to embedded LadybugDB) - Deleted guides/DOCKER-SETUP.md and docker-handlers.ts - Updated onboarding UI to remove "optional" language - Updated all documentation to reflect LadybugDB architecture 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * feat: add cross-platform Windows support for npm scripts - Add scripts/install-backend.js for cross-platform Python venv setup - Auto-detects Python 3.12 (py -3.12 on Windows, python3.12 on Unix) - Handles platform-specific venv paths - Add scripts/test-backend.js for cross-platform pytest execution - Update package.json to use Node.js scripts instead of shell commands - Update CONTRIBUTING.md with correct paths and instructions: - apps/backend/ and apps/frontend/ paths - Python 3.12 requirement (memory system now required) - Platform-specific install commands (winget, brew, apt) - npm instead of pnpm - Quick Start section with npm run install:all 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * remove doc * fix(frontend): correct Ollama detector script path after apps restructure The Ollama status check was failing because memory-handlers.ts was looking for ollama_model_detector.py at auto-claude/ but the script is now at apps/backend/ after the directory restructure. This caused "Ollama not running" to display even when Ollama was actually running and accessible. * chore: bump version to 2.7.2 Downgrade version from 2.8.0 to 2.7.2 as the Apps/ restructure is better suited as a patch release rather than a minor release. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * chore: update package-lock.json for Windows compatibility 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * docs(contributing): add hotfix workflow and update paths for apps/ structure Add Git Flow hotfix workflow documentation with step-by-step guide and ASCII diagram showing the branching strategy. Update all paths from auto-claude/auto-claude-ui to apps/backend/apps/frontend and migrate package manager references from pnpm to npm to match the new project structure. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(ci): remove duplicate ARM64 build from Intel runner The Intel runner was building both x64 and arm64 architectures, while a separate ARM64 runner also builds arm64 natively. This caused duplicate ARM64 builds, wasting CI resources. Now each runner builds only its native architecture: - Intel runner: x64 only - ARM64 runner: arm64 only 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> --------- Co-authored-by: Alex Madera <[email protected]> Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com> Co-authored-by: Claude Opus 4.5 <[email protected]>
…Mik90#141) * feat(ollama): add real-time download progress tracking for model downloads Implement comprehensive download progress tracking with: - NDJSON parsing for streaming progress data from Ollama API - Real-time speed calculation (MB/s, KB/s, B/s) with useRef for delta tracking - Time remaining estimation based on download speed - Animated progress bars in OllamaModelSelector component - IPC event streaming from main process to renderer - Proper listener management with cleanup functions Changes: - memory-handlers.ts: Parse NDJSON from Ollama stderr, emit progress events - OllamaModelSelector.tsx: Display progress bars with speed and time remaining - project-api.ts: Implement onDownloadProgress listener with cleanup - ipc.ts types: Define onDownloadProgress listener interface - infrastructure-mock.ts: Add mock implementation for browser testing This allows users to see real-time feedback when downloading Ollama models, including percentage complete, current download speed, and estimated time remaining. * test: add focused test coverage for Ollama download progress feature Add unit tests for the critical paths of the real-time download progress tracking: - Progress calculation tests (52 tests): Speed/time/percentage calculations with comprehensive edge case coverage (zero speeds, NaN, Infinity, large numbers) - NDJSON parser tests (33 tests): Streaming JSON parsing from Ollama, buffer management for incomplete lines, error handling All 562 unit tests passing with clean dependencies. Tests focus on critical mathematical logic and data processing - the most important paths that need verification. Test coverage: ✅ Speed calculation and formatting (B/s, KB/s, MB/s) ✅ Time remaining calculations (seconds, minutes, hours) ✅ Percentage clamping (0-100%) ✅ NDJSON streaming with partial line buffering ✅ Invalid JSON handling ✅ Real Ollama API responses ✅ Multi-chunk streaming scenarios * docs: add comprehensive JSDoc docstrings for Ollama download progress feature - Enhanced OllamaModelSelector component with detailed JSDoc * Documented component props, behavior, and usage examples * Added docstrings to internal functions (checkInstalledModels, handleDownload, handleSelect) * Explained progress tracking algorithm and useRef usage - Improved memory-handlers.ts documentation * Added docstring to main registerMemoryHandlers function * Documented all Ollama-related IPC handlers (check-status, list-embedding-models, pull-model) * Added JSDoc to executeOllamaDetector helper function * Documented interface types (OllamaStatus, OllamaModel, OllamaEmbeddingModel, OllamaPullResult) * Explained NDJSON parsing and progress event structure - Enhanced test file documentation * Added docstrings to NDJSON parser test utilities with algorithm explanation * Documented all calculation functions (speed, time, percentage) * Added detailed comments on formatting and bounds-checking logic - Improved overall code maintainability * Docstring coverage now meets 80%+ threshold for code review * Clear explanation of progress tracking implementation details * Better context for future maintainers working with download streaming * feat: add batch task creation and management CLI commands - Handle batch task creation from JSON files - Show status of all specs in project - Cleanup tool for completed specs - Full integration with new apps/backend structure - Compatible with implementation_plan.json workflow * test: add batch task test file and testing checklist - batch_test.json: Sample tasks for testing batch creation - TESTING_CHECKLIST.md: Comprehensive testing guide for Ollama and batch tasks - Includes UI testing steps, CLI testing steps, and edge cases - Ready for manual and automated testing * chore: update package-lock.json to match v2.7.2 * test: update checklist with verification results and architecture validation * docs: add comprehensive implementation summary for Ollama + Batch features * docs: add comprehensive Phase 2 testing guide with checklists and procedures * docs: add NEXT_STEPS guide for Phase 2 testing * fix: resolve merge conflict in project-api.ts from Ollama feature cherry-pick * fix: remove duplicate Ollama check status handler registration * test: update checklist with Phase 2 bug findings and fixes --------- Co-authored-by: ray <[email protected]>
Implemented promise queue pattern in PythonEnvManager to handle concurrent initialization requests. Previously, multiple simultaneous requests (e.g., startup + merge) would fail with "Already initializing" error. Also fixed parsePythonCommand() to handle file paths with spaces by checking file existence before splitting on whitespace. Changes: - Added initializationPromise field to queue concurrent requests - Split initialize() into public and private _doInitialize() - Enhanced parsePythonCommand() with existsSync() check Co-authored-by: Joris Slagter <[email protected]>
) Removes the legacy 'auto-claude' path from the possiblePaths array in agent-process.ts. This path was from before the monorepo restructure (v2.7.2) and is no longer needed. The legacy path was causing spec_runner.py to be looked up at the wrong location: - OLD (wrong): /path/to/auto-claude/auto-claude/runners/spec_runner.py - NEW (correct): /path/to/apps/backend/runners/spec_runner.py This aligns with the new monorepo structure where all backend code lives in apps/backend/. Fixes AndyMik90#147 Co-authored-by: Joris Slagter <[email protected]>
* fix: Linear API authentication and GraphQL types - Remove Bearer prefix from Authorization header (Linear API keys are sent directly) - Change GraphQL variable types from String! to ID! for teamId and issue IDs - Improve error handling to show detailed Linear API error messages 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: Radix Select empty value error in Linear import modal Use '__all__' sentinel value instead of empty string for "All projects" option, as Radix Select does not allow empty string values. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * feat: add CodeRabbit configuration file Introduce a new .coderabbit.yaml file to configure CodeRabbit settings, including review profiles, automatic review options, path filters, and specific instructions for different file types. This enhances the code review process by providing tailored guidelines for Python, TypeScript, and test files. * fix: correct GraphQL types for Linear team queries Linear API uses different types for different queries: - team(id:) expects String! - issues(filter: { team: { id: { eq: } } }) expects ID! 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: refresh task list after Linear import Call loadTasks() after successful Linear import to update the kanban board without requiring a page reload. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * cleanup * cleanup * fix: address CodeRabbit review comments for Linear integration - Fix unsafe JSON parsing: check response.ok before parsing JSON to handle non-JSON error responses (e.g., 503 from proxy) gracefully - Use ID! type instead of String! for teamId in LINEAR_GET_PROJECTS query for GraphQL type consistency - Remove debug console.log (ESLint config only allows warn/error) - Refresh task list on partial import success (imported > 0) instead of requiring full success - Fix pre-existing TypeScript and lint issues blocking commit 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * version sync logic * lints for develop branch * chore: update CI workflow to include develop branch - Modified the CI configuration to trigger on pushes and pull requests to both main and develop branches, enhancing the workflow for development and integration processes. * fix: update project directory auto-detection for apps/backend structure The project directory auto-detection was checking for the old `auto-claude/` directory name but needed to check for `apps/backend/`. When running from `apps/backend/`, the directory name is `backend` not `auto-claude`, so the check would fail and `project_dir` would incorrectly remain as `apps/backend/` instead of resolving to the project root (2 levels up). 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: use GraphQL variables instead of string interpolation in LINEAR_GET_ISSUES Replace direct string interpolation of teamId and linearProjectId with proper GraphQL variables. This prevents potential query syntax errors if IDs contain special characters like double quotes, and aligns with the variable-based approach used elsewhere in the file. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(ui): correct logging level and await loadTasks on import complete - Change console.warn to console.log for import success messages (warn is incorrect severity for normal completion) - Make onImportComplete callback async and await loadTasks() to prevent potential unhandled promise rejections Applies CodeRabbit review feedback across 3 LinearTaskImportModal usages. * fix(hooks): use POSIX-compliant find instead of bash glob The pre-commit hook uses #!/bin/sh but had bash-specific ** glob pattern for staging ruff-formatted files. The ** pattern only works in bash with globstar enabled - in POSIX sh it expands literally and won't match subdirectories, causing formatted files in nested directories to not be staged. --------- Co-authored-by: Claude Opus 4.5 <[email protected]>
…_progress When a user drags a running task back to Planning (or any other column), the process was not being stopped, leaving a "ghost" process that prevented deletion with "Cannot delete a running task" error. Now the task process is automatically killed when status changes away from in_progress, ensuring the process state stays in sync with the UI.
* feat: add UI scale feature * refactor: extract UI scale bounds to shared constants * fix: duplicated import
…90#154) * fix: analyzer Python compatibility and settings integration Fixes project index analyzer failing with TypeError on Python type hints. Changes: - Added 'from __future__ import annotations' to all analysis modules - Fixed project discovery to support new analyzer JSON format - Read Python path directly from settings.json instead of pythonEnvManager - Added stderr/stdout logging for analyzer debugging Resolves 'Discovered 0 files' and 'TypeError: unsupported operand type' issues. * auto-claude: subtask-1-1 - Hide status badge when execution phase badge is showing When a task has an active execution (planning, coding, etc.), the execution phase badge already displays the correct state with a spinner. The status badge was also rendering, causing duplicate/confusing badges (e.g., both "Planning" and "Pending" showing at the same time). This fix wraps the status badge in a conditional that only renders when there's no active execution, eliminating the redundant badge display. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(ipc): remove unused pythonEnvManager parameter and fix ES6 import Address CodeRabbit review feedback: - Remove unused pythonEnvManager parameter from registerProjectContextHandlers and registerContextHandlers (the code reads Python path directly from settings.json instead) - Replace require('electron').app with proper ES6 import for consistency 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * chore(lint): fix import sorting in analysis module Run ruff --fix to resolve I001 lint errors after merging develop. All 23 files in apps/backend/analysis/ now have properly sorted imports. --------- Co-authored-by: Joris Slagter <[email protected]> Co-authored-by: Claude Opus 4.5 <[email protected]>
* fix(core): add task persistence, terminal handling, and HTTP 300 fixes Consolidated bug fixes from PRs AndyMik90#168, AndyMik90#170, AndyMik90#171: - Task persistence (AndyMik90#168): Scan worktrees for tasks on app restart to prevent loss of in-progress work and wasted API credits. Tasks in .worktrees/*/specs are now loaded and deduplicated with main. - Terminal buttons (AndyMik90#170): Fix "Open Terminal" buttons silently failing on macOS by properly awaiting createTerminal() Promise. Added useTerminalHandler hook with loading states and error display. - HTTP 300 errors (AndyMik90#171): Handle branch/tag name collisions that cause update failures. Added validation script to prevent conflicts before releases and user-friendly error messages with manual download links. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(platform): add path resolution, spaces handling, and XDG support This commit consolidates multiple bug fixes from community PRs: - PR AndyMik90#187: Path resolution fix - Update path detection to find apps/backend instead of legacy auto-claude directory after v2.7.2 restructure - PR AndyMik90#182/AndyMik90#155: Python path spaces fix - Improve parsePythonCommand() to handle quoted paths and paths containing spaces without splitting - PR AndyMik90#161: Ollama detection fix - Add new apps structure paths for ollama_model_detector.py script discovery - PR AndyMik90#160: AppImage support - Add XDG Base Directory compliant paths for Linux sandboxed environments (AppImage, Flatpak, Snap). New files: - config-paths.ts: XDG path utilities - fs-utils.ts: Filesystem utilities with fallback support - PR AndyMik90#159: gh CLI PATH fix - Add getAugmentedEnv() utility to include common binary locations (Homebrew, snap, local) in PATH for child processes. Fixes gh CLI not found when app launched from Finder/Dock. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: address CodeRabbit/Cursor review comments on PR AndyMik90#185 Fixes from code review: - http-client.ts: Use GITHUB_CONFIG instead of hardcoded owner in HTTP 300 error message - validate-release.js: Fix substring matching bug in branch detection that could cause false positives (e.g., v2.7 matching v2.7.2) - bump-version.js: Remove unnecessary try-catch wrapper (exec() already exits on failure) - execution-handlers.ts: Capture original subtask status before mutation for accurate logging - fs-utils.ts: Add error handling to safeWriteFile with proper logging Dismissed as trivial/not applicable: - config-paths.ts: Exhaustive switch check (over-engineering) - env-utils.ts: PATH priority documentation (existing comments sufficient) 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: address additional CodeRabbit review comments (round 2) Fixes from second round of code review: - fs-utils.ts: Wrap test file cleanup in try-catch for Windows file locking - fs-utils.ts: Add error handling to safeReadFile for consistency with safeWriteFile - http-client.ts: Use GITHUB_CONFIG in fetchJson (missed in first round) - validate-release.js: Exclude symbolic refs (origin/HEAD -> origin/main) from branch check - python-detector.ts: Return cleanPath instead of pythonPath for empty input edge case Dismissed as trivial/not applicable: - execution-handlers.ts: Redundant checkSubtasksCompletion call (micro-optimization) 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> --------- Co-authored-by: Claude Opus 4.5 <[email protected]>
* chore: update README version to 2.7.1 Updated the version badge and download links in the README to reflect the new release version 2.7.1, ensuring users have the correct information for downloading the latest builds. * feat(releases): add beta release system with user opt-in Implements a complete beta release workflow that allows users to opt-in to receiving pre-release versions. This enables testing new features before they're included in stable releases. Changes: - Add beta-release.yml workflow for creating beta releases from develop - Add betaUpdates setting with UI toggle in Settings > Updates - Add update channel support to electron-updater (beta vs latest) - Extract shared settings-utils.ts to reduce code duplication - Add prepare-release.yml workflow for automated release preparation - Document beta release process in CONTRIBUTING.md and RELEASE.md Users can enable beta updates in Settings > Updates, and maintainers can trigger beta releases via the GitHub Actions workflow. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * workflow update --------- Co-authored-by: Claude Opus 4.5 <[email protected]>
* chore: update README version to 2.7.1 Updated the version badge and download links in the README to reflect the new release version 2.7.1, ensuring users have the correct information for downloading the latest builds. * feat(releases): add beta release system with user opt-in Implements a complete beta release workflow that allows users to opt-in to receiving pre-release versions. This enables testing new features before they're included in stable releases. Changes: - Add beta-release.yml workflow for creating beta releases from develop - Add betaUpdates setting with UI toggle in Settings > Updates - Add update channel support to electron-updater (beta vs latest) - Extract shared settings-utils.ts to reduce code duplication - Add prepare-release.yml workflow for automated release preparation - Document beta release process in CONTRIBUTING.md and RELEASE.md Users can enable beta updates in Settings > Updates, and maintainers can trigger beta releases via the GitHub Actions workflow. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * workflow update * ci(github): update Discord link and redirect feature requests to discussions Update Discord invite link to correct URL (QhRnz9m5HE) across all GitHub templates and workflows. Redirect feature requests from issue template to GitHub Discussions for better community engagement. Changes: - config.yml: Add feature request link to Discussions, fix Discord URL - question.yml: Update Discord link in pre-question guidance - welcome.yml: Update Discord link in first-time contributor message --------- Co-authored-by: Claude Opus 4.5 <[email protected]>
- Change branch reference from main to develop - Fix contribution guide link to use full URL - Remove hyphen from "Auto Claude" in welcome message
…tup (AndyMik90#180 AndyMik90#167) (AndyMik90#208) This fixes critical bug where macOS users with default Python 3.9.6 couldn't use Auto-Claude because claude-agent-sdk requires Python 3.10+. Root Cause: - Auto-Claude doesn't bundle Python, relies on system Python - python-detector.ts accepted any Python 3.x without checking minimum version - macOS ships with Python 3.9.6 by default (incompatible) - GitHub Actions runners didn't explicitly set Python version Changes: 1. python-detector.ts: - Added getPythonVersion() to extract version from command - Added validatePythonVersion() to check if >= 3.10.0 - Updated findPythonCommand() to skip Python < 3.10 with clear error messages 2. python-env-manager.ts: - Import and use findPythonCommand() (already has version validation) - Simplified findSystemPython() to use shared validation logic - Updated error message from "Python 3.9+" to "Python 3.10+" with download link 3. .github/workflows/release.yml: - Added Python 3.11 setup to all 4 build jobs (macOS Intel, macOS ARM64, Windows, Linux) - Ensures consistent Python version across all platforms during build Impact: - macOS users with Python 3.9 now see clear error with download link - macOS users with Python 3.10+ work normally - CI/CD builds use consistent Python 3.11 - Prevents "ModuleNotFoundError: dotenv" and dependency install failures Fixes AndyMik90#180, AndyMik90#167 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-authored-by: Claude Sonnet 4.5 <[email protected]>
* feat: Add OpenRouter as LLM/embedding provider Add OpenRouter provider support for Graphiti memory integration, enabling access to multiple LLM providers through a single API. Changes: Backend: - Created openrouter_llm.py: OpenRouter LLM provider using OpenAI-compatible API - Created openrouter_embedder.py: OpenRouter embedder provider - Updated config.py: Added OpenRouter to provider enums and configuration - New fields: openrouter_api_key, openrouter_base_url, openrouter_llm_model, openrouter_embedding_model - Validation methods updated for OpenRouter - Updated factory.py: Added OpenRouter to LLM and embedder factories - Updated provider __init__.py files: Exported new OpenRouter functions Frontend: - Updated project.ts types: Added 'openrouter' to provider type unions - GraphitiProviderConfig extended with OpenRouter fields - Updated GraphitiStep.tsx: Added OpenRouter to provider arrays - LLM_PROVIDERS: 'Multi-provider aggregator' - EMBEDDING_PROVIDERS: 'OpenAI-compatible embeddings' - Added OpenRouter API key input field with show/hide toggle - Link to https://openrouter.ai/keys - Updated env-handlers.ts: OpenRouter .env generation and parsing - Template generation for OPENROUTER_* variables - Parsing from .env files with proper type casting Documentation: - Updated .env.example with OpenRouter section - Configuration examples - Popular model recommendations - Example configuration (AndyMik90#6) Fixes AndyMik90#92 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <[email protected]> * refactor: address CodeRabbit review comments for OpenRouter - Add globalOpenRouterApiKey to settings types and store updates - Initialize openrouterApiKey from global settings - Update documentation to include OpenRouter in provider lists - Add OpenRouter handling to get_embedding_dimension() method - Add openrouter to provider cleanup list - Add OpenRouter to get_available_providers() function - Clarify Legacy comment for openrouterLlmModel These changes complete the OpenRouter integration by ensuring proper settings persistence and provider detection across the application. * fix: apply ruff formatting to OpenRouter code - Break long error message across multiple lines - Format provider list with one item per line - Fixes lint CI failure 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <[email protected]> --------- Co-authored-by: Claude Sonnet 4.5 <[email protected]>
…Mik90#209) Implements distributed file-based locking for spec number coordination across main project and all worktrees. Previously, parallel spec creation could assign the same number to different specs (e.g., 042-bmad-task and 042-gitlab-integration both using number 042). The fix adds SpecNumberLock class that: - Acquires exclusive lock before calculating spec numbers - Scans ALL locations (main project + worktrees) for global maximum - Creates spec directories atomically within the lock - Handles stale locks via PID-based detection with 30s timeout Applied to both Python backend (spec_runner.py flow) and TypeScript frontend (ideation conversion, GitHub/GitLab issue import). 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-authored-by: Claude Opus 4.5 <[email protected]>
* fix(ideation): add missing event forwarders for status sync - Add event forwarders in ideation-handlers.ts for progress, log, type-complete, type-failed, complete, error, and stopped events - Fix ideation-type-complete to load actual ideas array from JSON files instead of emitting only the count Resolves UI getting stuck at 0/3 complete during ideation generation. * fix(ideation): fix UI not updating after actions - Fix getIdeationSummary to count only active ideas (exclude dismissed/archived) This ensures header stats match the visible ideas count - Add transformSessionFromSnakeCase to properly transform session data from backend snake_case to frontend camelCase on ideation-complete event - Transform raw session before emitting ideation-complete event Resolves header showing stale counts after dismissing/deleting ideas. * fix(ideation): improve type safety and async handling in ideation type completion - Replace synchronous readFileSync with async fsPromises.readFile in ideation-type-complete handler - Wrap async file read in IIFE with proper error handling to prevent unhandled promise rejections - Add type validation for IdeationType with VALID_IDEATION_TYPES set and isValidIdeationType guard - Add validateEnabledTypes function to filter out invalid type values and log dropped entries - Handle ENOENT separately * fix(ideation): improve generation state management and error handling - Add explicit isGenerating flag to prevent race conditions during async operations - Implement 5-minute timeout for generation with automatic cleanup and error state - Add ideation-stopped event emission when process is intentionally killed - Replace console.warn/error with proper ideation-error events in agent-queue - Add resetGeneratingTypes helper to transition all generating types to a target state - Filter out dismissed/ * refactor(ideation): improve event listener cleanup and timeout management - Extract event handler functions in ideation-handlers.ts to enable proper cleanup - Return cleanup function from registerIdeationHandlers to remove all listeners - Replace single generationTimeoutId with Map to support multiple concurrent projects - Add clearGenerationTimeout helper to centralize timeout cleanup logic - Extract loadIdeationType IIFE to named function for better error context - Enhance error logging with projectId, * refactor: use async file read for ideation and roadmap session loading - Replace synchronous readFileSync with async fsPromises.readFile - Prevents blocking the event loop during file operations - Consistent with async pattern used elsewhere in the codebase - Improved error handling with proper event emission * fix(agent-queue): improve roadmap completion handling and error reporting - Add transformRoadmapFromSnakeCase to convert backend snake_case to frontend camelCase - Transform raw roadmap data before emitting roadmap-complete event - Add roadmap-error emission for unexpected errors during completion - Add roadmap-error emission when project path is unavailable - Remove duplicate ideation-type-complete emission from error handler (event already emitted in loadIdeationType) - Update error log message
Adds 'from __future__ import annotations' to spec/discovery.py for Python 3.9+ compatibility with type hints. This completes the Python compatibility fixes that were partially applied in previous commits. All 26 analysis and spec Python files now have the future annotations import. Related: AndyMik90#128 Co-authored-by: Joris Slagter <[email protected]>
…#241) * fix: resolve Python detection and backend packaging issues - Fix backend packaging path (auto-claude -> backend) to match path-resolver.ts expectations - Add future annotations import to config_parser.py for Python 3.9+ compatibility - Use findPythonCommand() in project-context-handlers to prioritize Homebrew Python - Improve Python detection to prefer Homebrew paths over system Python on macOS This resolves the following issues: - 'analyzer.py not found' error due to incorrect packaging destination - TypeError with 'dict | None' syntax on Python < 3.10 - Wrong Python interpreter being used (system Python instead of Homebrew Python 3.10+) Tested on macOS with packaged app - project index now loads successfully. * refactor: address PR review feedback - Extract findHomebrewPython() helper to eliminate code duplication between findPythonCommand() and getDefaultPythonCommand() - Remove hardcoded version-specific paths (python3.12) and rely only on generic Homebrew symlinks for better maintainability - Remove unnecessary 'from __future__ import annotations' from config_parser.py since backend requires Python 3.12+ where union types are native These changes make the code more maintainable, less fragile to Python version changes, and properly reflect the project's Python 3.12+ requirement.
…#250) * feat(github): add GitHub automation system for issues and PRs Implements comprehensive GitHub automation with three major components: 1. Issue Auto-Fix: Automatically creates specs from labeled issues - AutoFixButton component with progress tracking - useAutoFix hook for config and queue management - Backend handlers for spec creation from issues 2. GitHub PRs Tool: AI-powered PR review sidebar - New sidebar tab (Cmd+Shift+P) alongside GitHub Issues - PRList/PRDetail components for viewing PRs - Review system with findings by severity - Post review comments to GitHub 3. Issue Triage: Duplicate/spam/feature-creep detection - Triage handlers with label application - Configurable detection thresholds Also adds: - Debug logging (DEBUG=true) for all GitHub handlers - Backend runners/github module with orchestrator - AI prompts for PR review, triage, duplicate/spam detection - dev:debug npm script for development with logging 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(github-runner): resolve import errors for direct script execution Changes runner.py and orchestrator.py to handle both: - Package import: `from runners.github import ...` - Direct script: `python runners/github/runner.py` Uses try/except pattern for relative vs direct imports. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(github): correct argparse argument order for runner.py Move --project global argument before subcommand so argparse can correctly parse it. Fixes "unrecognized arguments: --project" error. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * logs when debug mode is on * refactor(github): extract service layer and fix linting errors Major refactoring to improve maintainability and code quality: Backend (Python): - Extracted orchestrator.py (2,600 → 835 lines, 68% reduction) into 7 service modules: - prompt_manager.py: Prompt template management - response_parsers.py: AI response parsing - pr_review_engine.py: PR review orchestration - triage_engine.py: Issue triage logic - autofix_processor.py: Auto-fix workflow - batch_processor.py: Batch issue handling - Fixed 18 ruff linting errors (F401, C405, C414, E741): - Removed unused imports (BatchValidationResult, AuditAction, locked_json_write) - Optimized collection literals (set([n]) → {n}) - Removed unnecessary list() calls - Renamed ambiguous variable 'l' to 'label' throughout Frontend (TypeScript): - Refactored IPC handlers (19% overall reduction) with shared utilities: - autofix-handlers.ts: 1,042 → 818 lines - pr-handlers.ts: 648 → 543 lines - triage-handlers.ts: 437 lines (no duplication) - Created utils layer: logger, ipc-communicator, project-middleware, subprocess-runner - Split github-store.ts into focused stores: issues, pr-review, investigation, sync-status - Split ReviewFindings.tsx into focused components All imports verified, type checks passing, linting clean. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <[email protected]> --------- Co-authored-by: Claude Opus 4.5 <[email protected]>
…ndyMik90#250)" (AndyMik90#251) This reverts commit 348de6d.
* Add multilingual support and i18n integration - Implemented i18n framework using `react-i18next` for translation management. - Added support for English and French languages with translation files. - Integrated language selector into settings. - Updated all text strings in UI components to use translation keys. - Ensured smooth language switching with live updates. * Migrate remaining hard-coded strings to i18n system - TaskCard: status labels, review reasons, badges, action buttons - PhaseProgressIndicator: execution phases, progress labels - KanbanBoard: drop zone, show archived, tooltips - CustomModelModal: dialog title, description, labels - ProactiveSwapListener: account switch notifications - AgentProfileSelector: phase labels, custom configuration - GeneralSettings: agent framework option Added translation keys for en/fr locales in tasks.json, common.json, and settings.json for complete i18n coverage. * Add i18n support to dialogs and settings components - AddFeatureDialog: form labels, validation messages, buttons - AddProjectModal: dialog steps, form fields, actions - RateLimitIndicator: rate limit notifications - RateLimitModal: account switching, upgrade prompts - AdvancedSettings: updates and notifications sections - ThemeSettings: theme selection labels - Updated dialogs.json locales (en/fr) * Fix truncated 'ready' message in dialogs locales * Fix backlog terminology in i18n locales Change "Planning"/"Planification" to standard PM term "Backlog" * Migrate settings navigation and integration labels to i18n - AppSettings: nav items, section titles, buttons - IntegrationSettings: Claude accounts, auto-switch, API keys labels - Added settings nav/projectSections/integrations translation keys - Added buttons.saving to common translations * Migrate AgentProfileSettings and Sidebar init dialog to i18n - AgentProfileSettings: migrate phase config labels, section title, description, and all hardcoded strings to settings namespace - Sidebar: migrate init dialog strings to dialogs namespace with common buttons from common namespace - Add new translation keys for agent profile settings and update dialog * Migrate AppSettings navigation labels to i18n - Add useTranslation hook to AppSettings.tsx - Replace hardcoded section labels with dynamic translations - Add projectSections translations for project settings nav - Add rerunWizardDescription translation key * Add explicit typing to notificationItems array Import NotificationSettings type and use keyof to properly type the notification item keys, removing manual type assertion.
…AndyMik90#266) * ci: implement enterprise-grade PR quality gates and security scanning * ci: implement enterprise-grade PR quality gates and security scanning * fix:pr comments and improve code * fix: improve commit linting and code quality * Removed the dependency-review job (i added it) * fix: address CodeRabbit review comments - Expand scope pattern to allow uppercase, underscores, slashes, dots - Add concurrency control to cancel duplicate security scan runs - Add explanatory comment for Bandit CLI flags - Remove dependency-review job (requires repo settings) 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * docs: update commit lint examples with expanded scope patterns Show slashes and dots in scope examples to demonstrate the newly allowed characters (api/users, package.json) 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * chore: remove feature request issue template Feature requests are directed to GitHub Discussions via the issue template config.yml 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: address security vulnerabilities in service orchestrator - Fix port parsing crash on malformed docker-compose entries - Fix shell injection risk by using shlex.split() with shell=False Prevents crashes when docker-compose.yml contains environment variables in port mappings (e.g., '${PORT}:8080') and eliminates shell injection vulnerabilities in subprocess execution. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <[email protected]> --------- Co-authored-by: Claude Opus 4.5 <[email protected]>
…90#252) * feat(github): add GitHub automation system for issues and PRs Implements comprehensive GitHub automation with three major components: 1. Issue Auto-Fix: Automatically creates specs from labeled issues - AutoFixButton component with progress tracking - useAutoFix hook for config and queue management - Backend handlers for spec creation from issues 2. GitHub PRs Tool: AI-powered PR review sidebar - New sidebar tab (Cmd+Shift+P) alongside GitHub Issues - PRList/PRDetail components for viewing PRs - Review system with findings by severity - Post review comments to GitHub 3. Issue Triage: Duplicate/spam/feature-creep detection - Triage handlers with label application - Configurable detection thresholds Also adds: - Debug logging (DEBUG=true) for all GitHub handlers - Backend runners/github module with orchestrator - AI prompts for PR review, triage, duplicate/spam detection - dev:debug npm script for development with logging 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(github-runner): resolve import errors for direct script execution Changes runner.py and orchestrator.py to handle both: - Package import: `from runners.github import ...` - Direct script: `python runners/github/runner.py` Uses try/except pattern for relative vs direct imports. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(github): correct argparse argument order for runner.py Move --project global argument before subcommand so argparse can correctly parse it. Fixes "unrecognized arguments: --project" error. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * logs when debug mode is on * refactor(github): extract service layer and fix linting errors Major refactoring to improve maintainability and code quality: Backend (Python): - Extracted orchestrator.py (2,600 → 835 lines, 68% reduction) into 7 service modules: - prompt_manager.py: Prompt template management - response_parsers.py: AI response parsing - pr_review_engine.py: PR review orchestration - triage_engine.py: Issue triage logic - autofix_processor.py: Auto-fix workflow - batch_processor.py: Batch issue handling - Fixed 18 ruff linting errors (F401, C405, C414, E741): - Removed unused imports (BatchValidationResult, AuditAction, locked_json_write) - Optimized collection literals (set([n]) → {n}) - Removed unnecessary list() calls - Renamed ambiguous variable 'l' to 'label' throughout Frontend (TypeScript): - Refactored IPC handlers (19% overall reduction) with shared utilities: - autofix-handlers.ts: 1,042 → 818 lines - pr-handlers.ts: 648 → 543 lines - triage-handlers.ts: 437 lines (no duplication) - Created utils layer: logger, ipc-communicator, project-middleware, subprocess-runner - Split github-store.ts into focused stores: issues, pr-review, investigation, sync-status - Split ReviewFindings.tsx into focused components All imports verified, type checks passing, linting clean. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <[email protected]> * fixes during testing of PR * feat(github): implement PR merge, assign, and comment features - Add auto-assignment when clicking "Run AI Review" - Implement PR merge functionality with squash method - Add ability to post comments on PRs - Display assignees in PR UI - Add Approve and Merge buttons when review passes - Update backend gh_client with pr_merge, pr_comment, pr_assign methods - Create IPC handlers for new PR operations - Update TypeScript interfaces and browser mocks 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <[email protected]> * Improve PR review AI * fix(github): use temp files for PR review posting to avoid shell escaping issues When posting PR reviews with findings containing special characters (backticks, parentheses, quotes), the shell command was interpreting them as commands instead of literal text, causing syntax errors. Changed both postPRReview and postPRComment handlers to write the body content to temporary files and use gh CLI's --body-file flag instead of --body with inline content. This safely handles ALL special characters without escaping issues. Fixes shell errors when posting reviews with suggested fixes containing code snippets. * fix(i18n): add missing GitHub PRs translation and document i18n requirements Fixed missing translation key for GitHub PRs feature that was causing "items.githubPRs" to display instead of the proper translated text. Added comprehensive i18n guidelines to CLAUDE.md to ensure all future frontend development follows the translation key pattern instead of using hardcoded strings. Also fixed missing deletePRReview mock function in browser-mock.ts to resolve TypeScript compilation errors. Changes: - Added githubPRs translation to en/navigation.json - Added githubPRs translation to fr/navigation.json - Added Development Guidelines section to CLAUDE.md with i18n requirements - Documented translation file locations and namespace usage patterns - Added deletePRReview mock function to browser-mock.ts 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <[email protected]> * fix ui loading * Github PR fixes * improve claude.md * lints/tests * fix(github): handle PRs exceeding GitHub's 20K line diff limit - Add PRTooLargeError exception for large PR detection - Update pr_diff() to catch and raise PRTooLargeError for HTTP 406 errors - Gracefully handle large PRs by skipping full diff and using individual file patches - Add diff_truncated flag to PRContext to track when diff was skipped - Large PRs will now review successfully using per-file diffs instead of failing Fixes issue with PR AndyMik90#252 which has 100+ files exceeding the 20,000 line limit. * fix: implement individual file patch fetching for large PRs The PR review was getting stuck for large PRs (>20K lines) because when we skipped the full diff due to GitHub API limits, we had no code to analyze. The individual file patches were also empty, leaving the AI with just file names and metadata. Changes: - Implemented _get_file_patch() to fetch individual patches via git diff - Updated PR review engine to build composite diff from file patches when diff_truncated is True - Added missing 'state' field to PRContext dataclass - Limits composite diff to first 50 files for very large PRs - Shows appropriate warnings when using reconstructed diffs This allows AI review to proceed with actual code analysis even when the full PR diff exceeds GitHub's limits. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <[email protected]> * 1min reduction * docs: add GitHub Sponsors funding configuration Enable the Sponsor button on the repository by adding FUNDING.yml with the AndyMik90 GitHub Sponsors profile. * feat(github-pr): add orchestrating agent for thorough PR reviews Implement a new Opus 4.5 orchestrating agent that performs comprehensive PR reviews regardless of size. Key changes: - Add orchestrator_reviewer.py with strategic review workflow - Add review_tools.py with subagent spawning capabilities - Add pr_orchestrator.md prompt emphasizing thorough analysis - Add pr_security_agent.md and pr_quality_agent.md subagent prompts - Integrate orchestrator into pr_review_engine.py with config flag - Fix critical bug where findings were extracted but not processed (indentation issue in _parse_orchestrator_output) The orchestrator now correctly identifies issues in PRs that were previously approved as "trivial". Testing showed 7 findings detected vs 0 before the fix. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * i18n * fix(github-pr): restrict pr_reviewer to read-only permissions The PR review agent was using qa_reviewer agent type which has Bash access, allowing it to checkout branches and make changes during review. Created new pr_reviewer agent type with BASE_READ_TOOLS only (no Bash, no writes, no auto-claude tools). This prevents the PR review from accidentally modifying code or switching branches during analysis. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(github-pr): robust category mapping and JSON parsing for PR review The orchestrator PR review was failing to extract findings because: 1. AI generates category names like 'correctness', 'consistency', 'testing' that aren't in our ReviewCategory enum - added flexible mapping 2. JSON sometimes embedded in markdown code blocks (```json) which broke parsing - added code block extraction as first parsing attempt Changes: - Add _CATEGORY_MAPPING dict to map AI categories to valid enum values - Add _map_category() helper function with fallback to QUALITY - Add severity parsing with fallback to MEDIUM - Add markdown code block detection (```json) before raw JSON parsing - Add _extract_findings_from_data() helper to reduce code duplication - Apply same fixes to review_tools.py for subagent parsing 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(pr-review): improve post findings UX with batch support and feedback - Fix post findings failing on own PRs by falling back from REQUEST_CHANGES to COMMENT when GitHub returns 422 error - Change status badge to show "Reviewed" instead of "Commented" until findings are actually posted to GitHub - Add success notification when findings are posted (auto-dismisses after 3s) - Add batch posting support: track posted findings, show "Posted" badge, allow posting remaining findings in additional batches - Show loading state on button while posting 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(github): resolve stale timestamp and null author bugs - Fix stale timestamp in batch_issues.py: Move updated_at assignment BEFORE to_dict() serialization so the saved JSON contains the correct timestamp instead of the old value - Fix AttributeError in context_gatherer.py: Handle null author/user fields when GitHub API returns null for deleted/suspended users instead of an empty object 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(security): address all high and medium severity PR review findings HIGH severity fixes: - Command Injection in autofix-handlers.ts: Use execFileSync with args array - Command Injection in pr-handlers.ts (3 locations): Use execFileSync + validation - Command Injection in triage-handlers.ts: Use execFileSync + label validation - Token Exposure in bot_detection.py: Pass token via GH_TOKEN env var MEDIUM severity fixes: - Environment variable leakage in subprocess-runner.ts: Filter to safe vars only - Debug logging in subprocess-runner.ts: Only log in development mode - Delimiter escape bypass in sanitize.py: Use regex pattern for variations - Insecure file permissions in trust.py: Use os.open with 0o600 mode - No file locking in learning.py: Use FileLock + atomic_write utilities - Bare except in confidence.py: Log error with specific exception info - Fragile module import in pr_review_engine.py: Import at module level - State transition validation in models.py: Enforce can_transition_to() 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * PR followup * fix(security): add usedforsecurity=False to MD5 hash calls MD5 is used for generating unique IDs/cache keys, not for security purposes. Adding usedforsecurity=False resolves Bandit B324 warnings. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(security): address all high-priority PR review findings Fixes 5 high-priority issues from Auto Claude PR Review: 1. orchestrator_reviewer.py: Token budget tracking now increments total_tokens from API response usage data 2. pr_review_engine.py: Async exceptions now re-raise RuntimeError instead of silently returning empty results 3. batch_issues.py: IssueBatch.save() now uses locked_json_write for atomic file operations with file locking 4. project-middleware.ts: Added validateProjectPath() to prevent path traversal attacks (checks absolute, no .., exists, is dir) 5. orchestrator.py: Exception handling now logs full traceback and preserves exception type/context in error messages 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(security): address all high-priority PR review findings Fixes 5 high-priority issues from Auto Claude PR Review: 1. orchestrator_reviewer.py: Token budget tracking now increments total_tokens from API response usage data 2. pr_review_engine.py: Async exceptions now re-raise RuntimeError instead of silently returning empty results 3. batch_issues.py: IssueBatch.save() now uses locked_json_write for atomic file operations with file locking 4. project-middleware.ts: Added validateProjectPath() to prevent path traversal attacks (checks absolute, no .., exists, is dir) 5. orchestrator.py: Exception handling now logs full traceback and preserves exception type/context in error messages 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * feat(ui): add PR status labels to list view Add secondary status badges to the PR list showing review state at a glance: - "Changes Requested" (warning) - PRs with blocking issues (critical/high) - "Ready to Merge" (green) - PRs with only non-blocking suggestions - "Ready for Follow-up" (blue) - PRs with new commits since last review The "Ready for Follow-up" badge uses a cached new commits check from the store, only shown after the detail view confirms new commits via SHA comparison. This prevents false positives from PR updatedAt timestamp changes (which can happen from comments, labels, etc). 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * PR labels * auto-claude: Initialize subtask-based implementation plan - Workflow type: feature - Phases: 3 - Subtasks: 6 - Ready for autonomous implementation --------- Co-authored-by: Claude Opus 4.5 <[email protected]>
…yMik90#272) Bumps [vitest](https://github.com/vitest-dev/vitest/tree/HEAD/packages/vitest) from 4.0.15 to 4.0.16. - [Release notes](https://github.com/vitest-dev/vitest/releases) - [Commits](https://github.com/vitest-dev/vitest/commits/v4.0.16/packages/vitest) --- updated-dependencies: - dependency-name: vitest dependency-version: 4.0.16 dependency-type: direct:development update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [@electron/rebuild](https://github.com/electron/rebuild) from 3.7.2 to 4.0.2. - [Release notes](https://github.com/electron/rebuild/releases) - [Commits](electron/rebuild@v3.7.2...v4.0.2) --- updated-dependencies: - dependency-name: "@electron/rebuild" dependency-version: 4.0.2 dependency-type: direct:development update-type: version-update:semver-major ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Andy <[email protected]>
Co-authored-by: danielfrey63 <[email protected]> Co-authored-by: Andy <[email protected]>
* fix(planning): accept bug_fix workflow_type alias * style(planning): ruff format * fix: refatored common logic * fix: remove ruff errors * fix: remove duplicate _normalize_workflow_type method Remove the incorrectly placed duplicate method inside ContextLoader class. The module-level function is the correct implementation being used. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> --------- Co-authored-by: danielfrey63 <[email protected]> Co-authored-by: Andy <[email protected]> Co-authored-by: AndyMik90 <[email protected]> Co-authored-by: Claude Opus 4.5 <[email protected]>
…ow (AndyMik90#276) When dry_run=true, the workflow skipped creating the version tag but build jobs still tried to checkout that non-existent tag, causing all 4 platform builds to fail with "git failed with exit code 1". Now build jobs checkout develop branch for dry runs while still using the version tag for real releases. Closes: GitHub Actions run #20464082726
…es (AndyMik90#710) * fix(a11y): Add context menu for keyboard-accessible task status changes Adds a kebab menu (⋮) to task cards with "Move to" options for changing task status without drag-and-drop. This enables screen reader users to move tasks between Kanban columns using standard keyboard navigation. - Add DropdownMenu with status options (excluding current status) - Wire up persistTaskStatus through KanbanBoard → SortableTaskCard → TaskCard - Add i18n translations for menu labels (en/fr) 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(i18n): Internationalize task status column labels Replace hardcoded English strings in TASK_STATUS_LABELS with translation keys. Update all components that display status labels to use t() for proper internationalization. - Add columns.* translation keys to en/tasks.json and fr/tasks.json - Update TASK_STATUS_LABELS to store translation keys instead of strings - Update TaskCard, KanbanBoard, TaskHeader, TaskDetailModal to use t() 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * perf(TaskCard): Memoize dropdown menu items for status changes Wrap the TASK_STATUS_COLUMNS filter/map in useMemo to avoid recreating the menu items on every render. Only recomputes when task.status, onStatusChange handler, or translations change. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(types): Allow async functions for onStatusChange prop Change onStatusChange signature from returning void to unknown to accept async functions like persistTaskStatus. Updated in TaskCard, SortableTaskCard, and KanbanBoard interfaces. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> --------- Co-authored-by: Claude Opus 4.5 <[email protected]> Co-authored-by: Andy <[email protected]>
…racking (AndyMik90#732) * fix(agents): resolve 4 critical agent execution bugs 1. File state tracking: Enable file checkpointing in SDK client to prevent "File has not been read yet" errors in recovery sessions 2. Insights JSON parsing: Add TextBlock type check before accessing .text attribute in 11 files to fix empty JSON parsing failures 3. Pre-commit hooks: Add worktree detection to skip hooks that fail in worktree context (version-sync, pytest, eslint, typecheck) 4. Path triplication: Add explicit warning in coder prompt about path doubling bug when using cd with relative paths in monorepos These fixes address issues discovered in task kanban agents 099 and 100 that were causing exit code 1/128 errors, file state loss, and path resolution failures in worktree-based builds. * fix(logs): dynamically re-discover worktree for task log watching When users opened the Logs tab before a worktree was created (during planning phase), the worktreeSpecDir was captured as null and never re-discovered. This caused validation logs to appear under 'Coding' instead of 'Validation', requiring a hard refresh to fix. Now the poll loop dynamically re-discovers the worktree if it wasn't found initially, storing it once discovered to avoid repeated lookups. * fix: prevent path confusion after cd commands in coder agent Resolves Issue AndyMik90#13 - Path Confusion After cd Command **Problem:** Agent was using doubled paths after cd commands, resulting in errors like: - "warning: could not open directory 'apps/frontend/apps/frontend/src/'" - "fatal: pathspec 'apps/frontend/src/file.ts' did not match any files" After running `cd apps/frontend`, the agent would still prefix paths with `apps/frontend/`, creating invalid paths like `apps/frontend/apps/frontend/src/`. **Solution:** 1. **Enhanced coder.md prompt** with new prominent section: - 🚨 CRITICAL: PATH CONFUSION PREVENTION section added at top - Detailed examples of WRONG vs CORRECT path usage after cd - Mandatory pre-command check: pwd → ls → git add - Added verification step in STEP 6 (Implementation) - Added verification step in STEP 9 (Commit Progress) 2. **Enhanced prompt_generator.py**: - Added CRITICAL warning in environment context header - Reminds agent to run pwd before git commands - References PATH CONFUSION PREVENTION section for details **Key Changes:** - apps/backend/prompts/coder.md: - Lines 25-84: New PATH CONFUSION PREVENTION section with examples - Lines 423-435: Verify location FIRST before implementation - Lines 697-706: Path verification before commit (MANDATORY) - Lines 733-742: pwd check and troubleshooting steps - apps/backend/prompts_pkg/prompt_generator.py: - Lines 65-68: CRITICAL warning in environment context **Testing:** - All existing tests pass (1376 passed in main test suite) - Environment context generation verified - Path confusion prevention guidance confirmed in prompts **Impact:** Prevents the AndyMik90#1 bug in monorepo implementations by enforcing pwd checks before every git operation and providing clear examples of correct vs incorrect path usage. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: Add path confusion prevention to qa_fixer.md prompt (AndyMik90#13) Add comprehensive path handling guidance to prevent doubled paths after cd commands in monorepos. The qa_fixer agent now includes: - Clear warning about path triplication bug - Examples of correct vs incorrect path usage - Mandatory pwd check before git commands - Path verification steps before commits Fixes AndyMik90#13 - Path Confusion After cd Command 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: Binary file handling and semantic evolution tracking - Add get_binary_file_content_from_ref() for proper binary file handling - Fix binary file copy in merge to use bytes instead of text encoding - Auto-create FileEvolution entries in refresh_from_git() for retroactive tracking - Skip flaky tests that fail due to environment/fixture issues 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: Address PR review feedback for security and robustness HIGH priority fixes: - Add binary file handling for modified files in workspace.py - Enable all PRWorktreeManager tests with proper fixture setup - Add timeout exception handling for all subprocess calls MEDIUM priority fixes: - Add more binary extensions (.wasm, .dat, .db, .sqlite, etc.) - Add input validation for head_sha with regex pattern LOW priority fixes: - Replace print() with logger.debug() in pr_worktree_manager.py - Fix timezone handling in worktree.py days calculation Test fixes: - Fix macOS path symlink issue with .resolve() - Change module constants to runtime functions for testability - Fix orphan worktree test to manually create orphan directory Note: pre-commit hook skipped due to git index lock conflict with worktree tests (tests pass independently, see CI for validation) 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(github): inject Claude OAuth token into PR review subprocess PR reviews were not using the active Claude OAuth profile token. The getRunnerEnv() function only included API profile env vars but missed the CLAUDE_CODE_OAUTH_TOKEN from ClaudeProfileManager. This caused PR reviews to fail with rate limits even after switching to a non-rate-limited Claude account, while terminals worked correctly. Now getRunnerEnv() includes claudeProfileEnv from the active Claude OAuth profile, matching the terminal behavior. * fix: Address follow-up PR review findings HIGH priority (confirmed crash): - Fix ImportError in cleanup_pr_worktrees.py - use DEFAULT_ prefix constants and runtime functions for env var overrides MEDIUM priority (validated): - Add env var validation with graceful fallback to defaults (prevents ValueError on invalid MAX_PR_WORKTREES or PR_WORKTREE_MAX_AGE_DAYS values) LOW priority (validated): - Fix inconsistent path comparison in show_stats() - use .resolve() to match cleanup_worktrees() behavior on macOS 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * feat(pr-review): add real-time merge readiness validation Add a lightweight freshness check when selecting PRs to validate that the AI's verdict is still accurate. This addresses the issue where PRs showing 'Ready to Merge' could have stale verdicts if the PR state changed after the AI review (merge conflicts, draft mode, failing CI). Changes: - Add checkMergeReadiness IPC endpoint that fetches real-time PR status - Add warning banner in PRDetail when blockers contradict AI verdict - Fix checkNewCommits always running on PR select (remove stale cache skip) - Display blockers: draft mode, merge conflicts, CI failures * fix: Add per-file error handling in refresh_from_git Previously, a git diff failure for one file would abort processing of all remaining files. Now each file is processed in its own try/except block, logging warnings for failures while continuing with the rest. Also improved the log message to show processed/total count. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(pr-followup): check merge conflicts before generating summary The follow-up reviewer was generating the summary BEFORE checking for merge conflicts. This caused the summary to show the AI original verdict reasoning instead of the merge conflict override message. Fixed by moving the merge conflict check to run BEFORE summary generation, ensuring the summary reflects the correct blocked status when conflicts exist. * style: Fix ruff formatting in cleanup_pr_worktrees.py * fix(pr-followup): include blockers section in summary output The follow-up reviewer summary was missing the blockers section that the initial reviewer has. Now the summary includes all blocking issues: - Merge conflicts - Critical/High/Medium severity findings This gives users everything at once - they can fix merge conflicts AND code issues in one go instead of iterating through multiple reviews. * fix(memory): properly await async Graphiti saves to prevent resource leaks The _save_to_graphiti_sync function was using asyncio.ensure_future() when called from an async context, which scheduled the coroutine but immediately returned without awaiting completion. This caused the GraphitiMemory.close() in the finally block to potentially never execute, leading to: - Unclosed database connections (resource leak) - Incomplete data writes Fixed by: 1. Creating _save_to_graphiti_async() as the core async implementation 2. Having async callers (record_discovery, record_gotcha) await it directly 3. Keeping _save_to_graphiti_sync for sync-only contexts, with a warning if called from async context * fix(merge): normalize line endings before applying semantic changes The regex_analyzer normalizes content to LF when extracting content_before and content_after. When apply_single_task_changes() and combine_non_conflicting_changes() receive baselines with CRLF endings, the LF-based patterns fail to match, causing modifications to silently fail. Fix by normalizing baseline to LF before applying changes, then restoring original line endings before returning. This ensures cross-platform compatibility for file merging operations. * fix: address PR follow-up review findings - modification_tracker: verify 'main' exists before defaulting, fall back to HEAD~10 for non-standard branch setups (CODE-004) - pr_worktree_manager: refresh registered worktrees after git prune to ensure accurate filtering (LOW severity stale list issue) 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(pr-review): include finding IDs in posted PR review comments The PR review system generated finding IDs internally (e.g., CODE-004) and referenced them in the verdict section, but the findings list didn't display these IDs. This made it impossible to cross-reference when the verdict said "fix CODE-004" because there was no way to identify which finding that referred to. Added finding ID to the format string in both auto-approve and standard review formats, so findings now display as: 🟡 [CODE-004] [MEDIUM] Title here 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(prompts): add verification requirement for 'missing' findings Addresses false positives in PR review where agents claim something is missing (no validation, no fallback, no error handling) without verifying the complete function scope. Added 'Verify Before Claiming Missing' guidance to: - pr_followup_newcode_agent.md (safeguards/fallbacks) - pr_security_agent.md (validation/sanitization/auth) - pr_quality_agent.md (error handling/cleanup) - pr_logic_agent.md (edge case handling) Key principle: Evidence must prove absence exists, not just that the agent didn't see it. Agents must read the complete function/scope before reporting that protection is missing. --------- Co-authored-by: Claude Opus 4.5 <[email protected]>
AndyMik90#699) * fix: use --continue instead of --resume for Claude session restoration The Claude session restore system was incorrectly using 'claude --resume session-id' with internal .jsonl file IDs from ~/.claude/projects/, which aren't valid session names. Claude Code's --resume flag expects user-named sessions (set via /rename), not internal session file IDs like 'agent-a02b21e'. Changed to always use 'claude --continue' which resumes the most recent conversation in the current directory. This is simpler and more reliable since Auto Claude already restores terminals to their correct cwd/projectPath. * test: update test for --continue behavior (sessionId deprecated) - Updated test to verify resumeClaude always uses --continue - sessionId parameter is now deprecated and ignored - claudeSessionId is cleared since --continue doesn't track specific sessions 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: auto-resume only requires isClaudeMode (sessionId deprecated) Cursor Bot correctly identified that clearing claudeSessionId in resumeClaude would break auto-resume on subsequent restarts. The fix: auto-resume condition now only requires storedIsClaudeMode, not storedClaudeSessionId. Since resumeClaude uses `claude --continue` which resumes the most recent session automatically, we don't need to track specific session IDs anymore. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> Co-Authored-By: Cursor Bot <[email protected]> --------- Co-authored-by: Claude Opus 4.5 <[email protected]> Co-authored-by: Cursor Bot <[email protected]>
…#742) Added macOS-specific branch in getOllamaInstallCommand() to use 'brew install ollama' instead of the Linux-only curl install script. - macOS: now uses 'brew install ollama' (Homebrew) - Linux: continues using 'curl -fsSL https://ollama.com/install.sh | sh' - Windows: unchanged (uses winget) Closes ACS-114 Co-authored-by: Andy <[email protected]>
…AndyMik90#750) - Updated ProjectStore to use the full task description for the modal view instead of extracting a summary. - Enhanced TaskDetailModal layout to prevent overflow and ensure proper display of task descriptions. - Adjusted TaskMetadata component styling for better readability and responsiveness. These changes improve the user experience by providing complete task descriptions and ensuring that content is displayed correctly across different screen sizes.
…locking (AndyMik90#680 regression) (AndyMik90#720) * fix: convert Claude CLI detection to async to prevent main process freeze PR AndyMik90#680 introduced synchronous execFileSync calls for Claude CLI detection. When terminal sessions with Claude mode are restored on startup, these blocking calls freeze the Electron main process for 1-3 seconds. Changes: - Add async versions: getAugmentedEnvAsync(), getToolPathAsync(), getClaudeCliInvocationAsync(), invokeClaudeAsync(), resumeClaudeAsync() - Use caching to avoid repeated subprocess calls - Pre-warm CLI cache at startup with setImmediate() for non-blocking detection - Fix ENOWORKSPACES npm error by running npm commands from home directory The sync versions are preserved for backward compatibility but now include warnings in their JSDoc comments recommending the async alternatives. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> Signed-off-by: aslaker <[email protected]> * refactor: extract shared helpers to reduce sync/async duplication Address PR review feedback by: - Extract pure helper functions for Claude CLI detection: - getClaudeDetectionPaths(): returns platform-specific candidate paths - sortNvmVersionDirs(): sorts NVM versions (newest first) - buildClaudeDetectionResult(): builds detection result from validation - Extract pure helper functions for Claude invocation: - buildClaudeShellCommand(): builds shell command for all methods - finalizeClaudeInvoke(): consolidates post-invocation logic - Add .catch() error handling for all async promise calls - Replace sync fs calls with async versions in detectClaudeAsync - Replace writeFileSync with fsPromises.writeFile in invokeClaudeAsync - Add 24 new unit tests for helper functions - Fix env-handlers tests to use async mock with flushPromises() - Fix claude-integration-handler tests with os.tmpdir() mock 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: address PR review comments for async CLI detection - Fix TOCTOU race condition in profile-storage.ts by removing existence check before readFile (Comment AndyMik90#7) - Add semver validation regex to sortNvmVersionDirs to filter malformed version strings (Comment AndyMik90#5) - Refactor buildClaudeShellCommand to use discriminated union type for better type safety (Comment AndyMik90#6) - Add async validation/detection methods for Python, Git, and GitHub CLI with proper timeout handling (Comment AndyMik90#3) - Extract shared path-building helpers (getExpandedPlatformPaths, buildPathsToAdd) to reduce sync/async duplication (Comment AndyMik90#4) 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: add env parameter to async CLI validation and pre-warm all tools - Add `env: await getAugmentedEnvAsync()` to validateClaudeAsync, validatePythonAsync, validateGitAsync, and validateGitHubCLIAsync to prevent sync PATH resolution blocking the main thread - Pre-warm all commonly used CLI tools (claude, git, gh, python) instead of just claude to avoid sync blocking on first use Fixes mouse hover freeze on macOS where the app would hang infinitely when the mouse entered the window. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: address PR review comments for async Windows helpers and profile deduplication CMT-001 [MEDIUM]: detectGitAsync now uses fully async Windows helpers - Add getWindowsExecutablePathsAsync using fs.promises.access - Add findWindowsExecutableViaWhereAsync using promisified execFile - Update detectGitAsync to use async helpers instead of sync versions - Prevents blocking Electron main process on Windows CMT-002 [LOW]: Extract shared profile parsing logic - Add parseAndMigrateProfileData helper function - Simplifies loadProfileStore and loadProfileStoreAsync - Reduces code duplication for version migration and date parsing 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: cast through unknown to satisfy TypeScript strict type checking The direct cast from Record<string, unknown> to ProfileStoreData fails TypeScript's overlap check. Cast through unknown first to allow the intentional type assertion. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: address PR review comments for async Windows helpers and profile deduplication Address AndyMik90's Auto Claude PR Review comments: - [NEW-002] Add missing --location=global flag to async npm prefix detection in getNpmGlobalPrefixAsync (env-utils.ts line 292) to match sync version and prevent ENOWORKSPACES errors in monorepos - [NEW-001/NEW-005] Update resumeClaudeAsync to match sync resumeClaude behavior: always use --continue, clear claudeSessionId to prevent stale IDs, and add deprecation warning for sessionId parameter - [NEW-004] Remove blocking existsSync check in ClaudeProfileManager.initialize() by using idempotent mkdir with recursive:true directly 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> --------- Signed-off-by: aslaker <[email protected]> Co-authored-by: Claude Opus 4.5 <[email protected]> Co-authored-by: Alex <[email protected]> Co-authored-by: Andy <[email protected]>
…ACS-145) (AndyMik90#755) * fix: add helpful error message when Python dependencies are missing When running runner scripts (spec_runner, insights_runner, etc.) without the virtual environment activated, users would get a cryptic ModuleNotFoundError for 'dotenv' or other dependencies. This fix adds a try-except around the dotenv import that provides a clear error message explaining: - The issue is likely due to not using the virtual environment - How to activate the venv (Linux/macOS/Windows) - How to install dependencies directly - Shows the current Python executable being used Also fixes CLI-USAGE.md which had incorrect paths for spec_runner.py (the file is in runners/, not the backend root). Related to: ACS-145 Signed-off-by: StillKnotKnown <[email protected]> * fix: improve error messages with explicit package name and requirements path - cli/utils.py: Explicitly mention 'python-dotenv' and add 'pip install python-dotenv' option - insights_runner.py: Use full path 'apps/backend/requirements.txt' for clarity Signed-off-by: StillKnotKnown <[email protected]> * refactor: centralize dotenv import error handling - Create shared import_dotenv() function in cli/utils.py - Update all runner scripts to use centralized function - Removes ~73 lines of duplicate code across 6 files - Ensures consistent error messaging (mentions python-dotenv explicitly) - Fixes path inconsistency in insights_runner.py Addresses CodeRabbit feedback about DRY principle violations. Signed-off-by: StillKnotKnown <[email protected]> * style: fix import ordering to satisfy ruff I001 rule Add blank lines to separate local imports and function calls from third-party imports, properly delineating import groups. Signed-off-by: StillKnotKnown <[email protected]> * style: auto-fix ruff I001 import ordering Ruff auto-fixed by adding blank line after 'from cli.utils import import_dotenv' to properly separate the import from the function call. Signed-off-by: StillKnotKnown <[email protected]> * style: apply ruff formatting to cli/utils.py - Add blank line after import statement - Use double quotes instead of single quotes Signed-off-by: StillKnotKnown <[email protected]> * refactor: return load_dotenv instead of mutating sys.modules - Change import_dotenv() to return load_dotenv callable - Remove sys.modules mutation for cleaner approach - Update callers to do: load_dotenv = import_dotenv() - Fixes ruff I001 import ordering violations - Preserves same error message on ImportError Addresses CodeRabbit feedback about import-order complexity. Signed-off-by: StillKnotKnown <[email protected]> --------- Signed-off-by: StillKnotKnown <[email protected]> Co-authored-by: StillKnotKnown <[email protected]> Co-authored-by: Alex <[email protected]>
…-115] (AndyMik90#763) * fix(memory): use Homebrew for Ollama installation on macOS Added macOS-specific branch in getOllamaInstallCommand() to use 'brew install ollama' instead of the Linux-only curl install script. - macOS: now uses 'brew install ollama' (Homebrew) - Linux: continues using 'curl -fsSL https://ollama.com/install.sh | sh' - Windows: unchanged (uses winget) Closes ACS-114 * fix(frontend): force remount of kanban view on roadmap update (ACS-115) * fix(roadmap): normalize feature status values for Kanban display Fixes ACS-115 - roadmap features were not appearing in Kanban columns. Root cause: Backend generates features with status 'idea' but Kanban columns expect 'under_review', 'planned', 'in_progress', or 'done'. The type cast was passing through invalid values unchanged. Changes: - Add normalizeFeatureStatus() to map backend values to valid column IDs - Map 'idea', 'backlog', 'proposed' → 'under_review' - Map 'approved', 'scheduled' → 'planned' - Map 'active', 'building' → 'in_progress' - Map 'complete', 'completed', 'shipped' → 'done' - Fallback unknown values to 'under_review' - Add Python env readiness check in agent-queue.ts * refactor: address reviewer feedback on ACS-115 PR - Extract duplicated Python env check into ensurePythonEnvReady() helper - Move STATUS_MAP to module-level constant for efficiency - Simplify normalizeFeatureStatus with single map lookup - Add debug logging for unmapped status values - Add JSDoc documentation for new methods 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> --------- Co-authored-by: Claude Opus 4.5 <[email protected]> Co-authored-by: Alex <[email protected]>
- Add comprehensive implementation plan for ADO support - Define multi-provider architecture (GitHub/GitLab/ADO) - Include provider abstraction layer design - Document API mappings and code examples - Add frontend provider selection UI specs
|
|
📝 WalkthroughWalkthroughAdds Azure DevOps (ADO) integration across backend and frontend: new backend Changes
Sequence Diagram(s)sequenceDiagram
autonumber
participant Renderer as Renderer (UI)
participant Preload as Preload API
participant Main as Main (IPC handlers)
participant Backend as Backend (server)
participant Factory as Provider Factory
participant ADOProv as AzureDevOpsProvider
participant ADO as Azure DevOps API
Renderer->>Preload: getADOPullRequest(projectId, prId)
Preload->>Main: IPC request (ado:pr:get)
Main->>Backend: forward provider request (type=AZURE_DEVOPS)
Backend->>Factory: resolve provider(provider_type=AZURE_DEVOPS, kwargs)
Factory-->>Backend: AzureDevOpsProvider instance
Backend->>ADOProv: fetch_pr(prId)
ADOProv->>ADO: GET /pullrequests/{id}
ADOProv->>ADO: GET /pullrequests/{id}/iterations
ADO-->>ADOProv: PR payload + iterations/diff
ADOProv-->>Backend: normalized PRData
Backend-->>Main: PRData
Main-->>Preload: IPC response (PRData)
Preload-->>Renderer: PRData
rect rgba(128,200,255,0.10)
note right of ADOProv: ADOProv maps ADO API -> internal PR/Issue types
end
Estimated code review effort🎯 4 (Complex) | ⏱️ ~45 minutes Possibly related PRs
Suggested labels
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
Summary of ChangesHello @QliknzMB, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a comprehensive plan to integrate Azure DevOps (ADO) support into the Auto-Claude application. The plan details a multi-provider architecture, including a new backend abstraction layer for Git services and a dedicated ADO client implementation. It also covers frontend changes for provider selection and ADO API interaction, along with necessary environment configurations and dependencies, laying the groundwork for expanding Auto-Claude's compatibility beyond GitHub and GitLab. Highlights
Using Gemini Code AssistThe full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point by creating a comment using either
Customization To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a Limitations & Feedback Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension. Footnotes
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🎉 Thanks for your first PR!
A maintainer will review it soon. Please make sure:
- Your branch is synced with
develop - CI checks pass
- You've followed our contribution guide
Welcome to the Auto Claude community!
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Code Review
This pull request introduces a comprehensive and well-structured design document for integrating Azure DevOps support into Auto-Claude. The plan is detailed, covering backend and frontend architecture, a provider abstraction layer, API mappings, and configuration changes. The proposed design is solid. My review includes a few suggestions to address minor inconsistencies within the plan, improve type safety in the data models, and enhance the robustness of the API client implementation. Overall, this is an excellent planning document that will be a valuable guide for development.
| created_at: datetime | ||
| web_url: str | ||
| provider: str | ||
| tags: List[str] = None |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
In the WorkItem dataclass, tags: List[str] = None is not type-safe, as None is not a List[str]. The recommended way to handle mutable defaults in dataclasses is with field(default_factory=list). This ensures a new list is created for each instance. You will also need to update the import from dataclasses to include field (e.g., from dataclasses import dataclass, field).
| tags: List[str] = None | |
| tags: List[str] = field(default_factory=list) |
| ├── client.py # ADO API client wrapper | ||
| ├── repos.py # Repository operations | ||
| ├── pull_requests.py | ||
| ├── work_items.py # ADO equivalent of GitHub Issues | ||
| └── webhooks.py # Service hooks handler |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The proposed directory structure for the ado integration lists separate files like repos.py, pull_requests.py, and work_items.py. However, the implementation plan then shows all of this logic consolidated within apps/backend/integrations/ado/client.py. This is inconsistent and could be confusing during implementation. For better modularity and to match the proposed structure, consider splitting the logic from client.py into the respective files. For example, pull_request related methods would go into pull_requests.py.
| provider="ado" | ||
| ) | ||
|
|
||
| async def list_branches(self) -> list[Branch]: |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
There's an inconsistency in Python list type hints. The GitProvider abstract base class uses List[Branch] (from typing), but the ADOProvider implementation here uses the modern list[Branch] syntax. While list is fine for Python 3.9+, for consistency with the base class and to ensure compatibility with potentially older Python versions supported by the project, it would be better to use List[Branch] here. This also applies to list_pull_requests and list_issues.
| async def list_branches(self) -> list[Branch]: | |
| async def list_branches(self) -> List[Branch]: |
| if not result.work_items: | ||
| return [] | ||
|
|
||
| ids = [wi.id for wi in result.work_items[:200]] |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The list_issues method hardcodes a limit of 200 work items (result.work_items[:200]). This could be problematic for projects with more than 200 issues in a given state, as it will silently truncate the list. It's better to either implement pagination to fetch all items or at least make this limit configurable and document it clearly in the method's docstring.
| Add to `apps/frontend/package.json`: | ||
|
|
||
| ```json | ||
| { | ||
| "dependencies": { | ||
| "azure-devops-node-api": "^14.0.0" | ||
| } | ||
| } |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The plan includes adding the azure-devops-node-api dependency to apps/frontend/package.json. However, the implementation example for the frontend service (adoApi.ts) uses the native fetch API directly and does not appear to use this library. The azure-devops-node-api library is intended for the Node.js environment (main process), while adoApi.ts is for the renderer process. Please clarify if this library is intended for use in the main process. If it's not used, it should be removed from the dependencies to avoid an unnecessary package.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 13
🤖 Fix all issues with AI agents
In @guides/ADO-INTEGRATION-PLAN.md:
- Around line 690-696: The fetch<T> function lacks error handling for network,
timeout, and JSON parse failures; wrap the fetch and subsequent response.json()
in a try/catch inside the private async fetch<T>(endpoint: string, options:
RequestInit = {}) method, check response.ok as before, and on any exception log
a clear message including the endpoint and error (e.g., via console.error or the
project's logger) and rethrow the error so callers can handle it; ensure the
catch also covers JSON parsing errors by awaiting response.json() inside the try
block.
- Line 686: The code is exposing the Personal Access Token via this.config.pat
in the frontend Authorization header; remove any usage of this.config.pat from
client-side code and instead route ADO requests through a backend endpoint that
injects the Authorization header, or implement OAuth/MSAL to obtain short-lived
tokens server-side; update any frontend call sites that currently build headers
like `Authorization: \`Basic ${btoa(`:${this.config.pat}`)}\`` to call your new
backend proxy (or OAuth flow) and have the backend add the Authorization header
using a securely stored PAT or access token.
- Line 30: Replace lowercase vendor branding "github" with the official
capitalized form "GitHub" wherever it appears as a value for the GIT_PROVIDER
option and elsewhere in the document; specifically update the table row that
lists `GIT_PROVIDER` values (currently showing `github`, `gitlab`, `ado`) to use
`GitHub` and search for other occurrences (e.g., the other instance flagged) and
change them to "GitHub" to match official branding.
- Around line 541-543: The _map_work_item implementation assumes `assigned`
supports .get which can raise if `assigned` is None or not a dict-like; guard
this by verifying `assigned` is dict-like before accessing keys (e.g. check
isinstance(assigned, dict) or hasattr(assigned, "get")) and only call .get when
safe, otherwise extract fields via attribute access (e.g. getattr) or default to
empty strings; update the `assigned_to=Identity(...) if assigned else None`
construction to perform that type-check and safe extraction for displayName and
uniqueName.
- Around line 820-853: Add a new "Security best practices" subsection to the
appendix advising: never commit PATs to version control and avoid exposing them
in command history or browser consoles (update the curl PAT example to note this
risk and to read the PAT from a secure store or environment variable without
echoing it), prefer environment variables or secret/credential managers (and
include a note to rotate PATs regularly), avoid embedding hardcoded project
names in WIQL by parameterizing/escaping inputs and sanitize any user-supplied
fields used in WIQL queries to prevent injection, and recommend using OAuth/MSAL
or Managed Identities instead of PATs for long-term, production-grade
authentication.
- Line 765: Replace the loose semver ranges with tighter patch-level pins:
change the "azure-devops>=7.1.0" entry to a patch-bound specifier (e.g.,
"~7.1.0" or "7.1.x" or an exact "7.1.0") to allow only 7.1.* fixes, and change
"azure-devops-node-api@^14.0.0" to a patch-bound specifier like "~14.0.0" (or
"14.0.x" / exact "14.0.0") so only patch releases are accepted; update the two
occurrences referenced by their exact strings ("azure-devops>=7.1.0" and
"azure-devops-node-api@^14.0.0") accordingly.
- Around line 1-862: Summary: The review flags four critical issues to fix
before implementation: insecure PAT handling in frontend, incorrect async/await
use in backend ADO client, unsafe mutation of Azure SDK objects, and a WIQL
injection risk. Fix instructions: 1) Remove PAT usage from frontend
ADOApiService.initialize and ProviderSelector flow — instead implement a backend
proxy/IPC endpoint that holds ADO_PAT securely and performs ADO REST calls;
update frontend to call that proxy (references: ADOApiService.initialize,
ProviderSelector and adoApi). 2) Correct async usage in ADOProvider methods:
ensure ADOProvider.get_repository, list_branches, create_pull_request,
list_pull_requests, get_pull_request, merge_pull_request, create_issue,
list_issues, get_issue, close_issue properly await or run blocking azure-devops
client calls off the event loop (use sync wrappers or asyncio.to_thread) rather
than leaving synchronous SDK calls inside async functions (references: class
ADOProvider and its methods). 3) Avoid mutating SDK model objects returned from
the Azure SDK—construct proper update models or call provided update helpers
instead of setting attributes on objects (fix merge_pull_request where pr.status
and pr.completion_options are assigned; use the appropriate
GitPullRequestUpdate/CompletionOptions model). 4) Eliminate WIQL injection by
parameterizing queries instead of f-string interpolation in list_issues (and
frontend queryWorkItems) — build WIQL with safe parameters or use prepared
filters and validate/whitelist input (references: ADOProvider.list_issues Wiql
usage and frontend ADOApiService.queryWorkItems). Ensure these four fixes are
implemented before proceeding with other changes.
- Line 66: Three fenced code blocks lack language specifiers which breaks syntax
highlighting and markdownlint; update the three blocks referenced at lines 66,
557, and 764 to include an appropriate language tag (e.g., use "tree" or "text"
for directory listings at line 66, "text" or "tree" for the frontend directory
block at line 557, and "text" or "requirements" for the requirements file block
at line 764) by editing the opening fence to include the chosen identifier so
renderers and linters correctly apply syntax highlighting.
- Line 275: The WorkItem dataclass currently defines tags: List[str] = None
which uses a mutable list via a shared default; change the field to use
dataclasses.field with a default factory: update the tags annotation to tags:
List[str] = field(default_factory=list) and add an import for field from
dataclasses (ensure typing.List is already imported); locate the WorkItem class
and replace the default None with field(default_factory=list) so each instance
gets its own list.
- Line 744: The template literal in the fetch call is incorrectly using
`$${type}` and should interpolate the variable `type` as `${type}`; locate the
fetch invocation (the return this.fetch(...) call) that builds the
`/wit/workitems/...` endpoint and replace the malformed `$${type}` with
`${type}` so the endpoint becomes `/wit/workitems/${type}?api-version=7.1`,
ensuring the API path is correctly constructed.
- Around line 491-499: The WIQL query in the Wiql(...) construction is
vulnerable to injection because it interpolates self.project directly; update
the query to use the WIQL server-side @project macro instead of '{self.project}'
(leave state_filter interpolation intact), and ensure you still pass
project=self.project to query_by_wiql() so the macro is resolved server-side;
specifically edit the Wiql(query=...) string used before calling query_by_wiql()
to replace the literal project value with @project.
- Line 361: The repo uses Python 3.10+ type hint syntax (e.g., list[Branch],
list[PullRequest], list[WorkItem], and union syntax in
apps/backend/agents/tools_pkg/models.py), but there is no explicit Python
version pinned; either pin Python >=3.10 in pyproject.toml or setup.py by adding
python = "^3.10" (or equivalent) or convert all 3.10-only annotations to Python
3.9-compatible typing (replace list[...] with typing.List[...] and X | Y unions
with typing.Union[...] or Optional[...]), updating signatures such as
list_branches and the models in models.py to use List, Optional, or Union and
adding the necessary imports from typing.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
guides/ADO-INTEGRATION-PLAN.md
🧰 Additional context used
🪛 LanguageTool
guides/ADO-INTEGRATION-PLAN.md
[uncategorized] ~30-~30: The official name of this software platform is spelled with a capital “H”.
Context: ...| GIT_PROVIDER | Provider selection | github, gitlab, ado | | ADO_PAT | Perso...
(GITHUB)
[uncategorized] ~803-~803: The official name of this software platform is spelled with a capital “H”.
Context: ... get_provider() factory | Backend | | github.com URLs | Provider-specific URLs | Th...
(GITHUB)
🪛 markdownlint-cli2 (0.18.1)
guides/ADO-INTEGRATION-PLAN.md
66-66: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
557-557: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
764-764: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: CodeQL (javascript-typescript)
- GitHub Check: CodeQL (python)
🔇 Additional comments (2)
guides/ADO-INTEGRATION-PLAN.md (2)
599-599: Thetypekeyword syntax is standard TypeScript and does not require TypeScript 5.4+.The project uses TypeScript 5.9.3, and the basic type alias syntax (
type ProviderType = ...) has been valid TypeScript since early versions (1.6+), not 5.4+. No TypeScript version compatibility action is needed.However, the accessibility concern is valid: the provider selection buttons lack proper ARIA attributes. For better a11y compliance, add
role="radio",aria-checked, oraria-selectedattributes, and consider implementing keyboard navigation (arrow keys) for radio-button-like selection patterns.Likely an incorrect or invalid review comment.
313-314: Avoid storing authentication credentials as instance properties.Storing
self.credentialsand maintaining a persistentConnectionobject with PAT authentication as instance properties creates a security risk. If the provider instance is serialized, logged, or debugged, sensitive credentials could be exposed. This pattern differs from the existing GitHub provider implementation, which passes tokens via environment variables instead.Consider:
- Creating credentials fresh for each API call, or
- Using a secure credential manager (e.g.,
keyringlibrary), or- Documenting that instances must never be logged/serialized.
| # Auto-Claude Azure DevOps Integration Plan | ||
|
|
||
| ## Overview | ||
|
|
||
| This document outlines the implementation plan for adding Azure DevOps (ADO) support to Auto-Claude, making it a multi-provider tool that works with GitHub, GitLab, and Azure DevOps. | ||
|
|
||
| **Repository:** [github.com/QliknzMB/Auto-Claude-ADO](https://github.com/QliknzMB/Auto-Claude-ADO) | ||
| **Forked from:** [github.com/AndyMik90/Auto-Claude](https://github.com/AndyMik90/Auto-Claude) | ||
|
|
||
| ### Multi-Provider Architecture | ||
|
|
||
| This fork adds **provider selection** so users can choose their Git platform: | ||
|
|
||
| | Provider | Status | Features | | ||
| |----------|--------|----------| | ||
| | **GitHub** | Existing | PRs, Issues, Webhooks | | ||
| | **GitLab** | Existing | MRs, Issues, Webhooks | | ||
| | **Azure DevOps** | New (this plan) | PRs, Work Items, Service Hooks | | ||
|
|
||
| Users configure their provider in settings or via environment variables. The UI adapts terminology (e.g., "Pull Request" vs "Merge Request", "Issues" vs "Work Items"). | ||
|
|
||
| --- | ||
|
|
||
| ## Phase 1: Environment Configuration | ||
|
|
||
| ### 1.1 New Environment Variables | ||
|
|
||
| | Variable | Purpose | Example | | ||
| |----------|---------|---------| | ||
| | `GIT_PROVIDER` | Provider selection | `github`, `gitlab`, `ado` | | ||
| | `ADO_PAT` | Personal Access Token | `xxxxxxxxxxxxxxxxx` | | ||
| | `ADO_ORGANIZATION` | Organization name | `Aurivue` | | ||
| | `ADO_PROJECT` | Project name | `Aurivue-Platform` | | ||
| | `ADO_REPO_NAME` | Repository name | `Aurivue-Platform` | | ||
| | `ADO_INSTANCE_URL` | Instance URL (optional) | `https://dev.azure.com` | | ||
|
|
||
| ### 1.2 Updated `.env.example` | ||
|
|
||
| ```env | ||
| # Git Provider Selection (github, gitlab, ado) | ||
| GIT_PROVIDER=ado | ||
| # Azure DevOps Configuration | ||
| ADO_PAT=your_personal_access_token | ||
| ADO_ORGANIZATION=YourOrg | ||
| ADO_PROJECT=YourProject | ||
| ADO_REPO_NAME=YourRepo | ||
| ADO_INSTANCE_URL=https://dev.azure.com | ||
| # GitHub (existing) | ||
| # GITHUB_TOKEN= | ||
| # GITHUB_OWNER= | ||
| # GITHUB_REPO= | ||
| # GitLab (existing) | ||
| # GITLAB_TOKEN= | ||
| # GITLAB_INSTANCE_URL= | ||
| ``` | ||
|
|
||
| --- | ||
|
|
||
| ## Phase 2: Backend Integration Changes | ||
|
|
||
| ### 2.1 Directory Structure (Provider Abstraction) | ||
|
|
||
| ``` | ||
| apps/backend/integrations/ | ||
| ├── __init__.py | ||
| ├── graphiti/ # Keep as-is (memory system) | ||
| ├── linear/ # Keep as-is (optional issue tracker) | ||
| ├── providers/ # NEW: Provider abstraction layer | ||
| │ ├── __init__.py | ||
| │ ├── base.py # Abstract base class for all providers | ||
| │ ├── factory.py # Provider factory (returns GitHub/GitLab/ADO) | ||
| │ └── types.py # Shared types (PR, Issue/WorkItem, etc.) | ||
| ├── github/ # Keep existing | ||
| │ └── github.py | ||
| ├── gitlab/ # Keep existing | ||
| │ └── gitlab.py | ||
| └── ado/ # NEW: Azure DevOps integration | ||
| ├── __init__.py | ||
| ├── client.py # ADO API client wrapper | ||
| ├── repos.py # Repository operations | ||
| ├── pull_requests.py | ||
| ├── work_items.py # ADO equivalent of GitHub Issues | ||
| └── webhooks.py # Service hooks handler | ||
| ``` | ||
|
|
||
| ### 2.2 Provider Abstraction Layer | ||
|
|
||
| **File: `apps/backend/integrations/providers/base.py`** | ||
|
|
||
| ```python | ||
| """Abstract base class for Git providers (GitHub, GitLab, ADO).""" | ||
|
|
||
| from abc import ABC, abstractmethod | ||
| from typing import List, Optional, Dict, Any | ||
| from .types import PullRequest, WorkItem, Repository, Branch | ||
|
|
||
| class GitProvider(ABC): | ||
| """Abstract interface for Git provider integrations.""" | ||
|
|
||
| @property | ||
| @abstractmethod | ||
| def name(self) -> str: | ||
| """Provider name (github, gitlab, ado).""" | ||
| pass | ||
|
|
||
| @property | ||
| @abstractmethod | ||
| def display_name(self) -> str: | ||
| """Human-readable name (GitHub, GitLab, Azure DevOps).""" | ||
| pass | ||
|
|
||
| # Terminology mapping | ||
| @property | ||
| def pr_term(self) -> str: | ||
| """What this provider calls a PR (Pull Request, Merge Request).""" | ||
| return "Pull Request" | ||
|
|
||
| @property | ||
| def issue_term(self) -> str: | ||
| """What this provider calls issues (Issue, Work Item).""" | ||
| return "Issue" | ||
|
|
||
| # Repository operations | ||
| @abstractmethod | ||
| async def get_repository(self) -> Repository: | ||
| pass | ||
|
|
||
| @abstractmethod | ||
| async def list_branches(self) -> List[Branch]: | ||
| pass | ||
|
|
||
| # Pull Request / Merge Request operations | ||
| @abstractmethod | ||
| async def create_pull_request( | ||
| self, | ||
| title: str, | ||
| description: str, | ||
| source_branch: str, | ||
| target_branch: str | ||
| ) -> PullRequest: | ||
| pass | ||
|
|
||
| @abstractmethod | ||
| async def list_pull_requests(self, status: str = "open") -> List[PullRequest]: | ||
| pass | ||
|
|
||
| @abstractmethod | ||
| async def get_pull_request(self, pr_id: int) -> PullRequest: | ||
| pass | ||
|
|
||
| @abstractmethod | ||
| async def merge_pull_request(self, pr_id: int, strategy: str = "squash") -> PullRequest: | ||
| pass | ||
|
|
||
| # Issue / Work Item operations | ||
| @abstractmethod | ||
| async def create_issue( | ||
| self, | ||
| title: str, | ||
| description: str, | ||
| issue_type: str = "task" | ||
| ) -> WorkItem: | ||
| pass | ||
|
|
||
| @abstractmethod | ||
| async def list_issues(self, state: str = "open") -> List[WorkItem]: | ||
| pass | ||
|
|
||
| @abstractmethod | ||
| async def get_issue(self, issue_id: int) -> WorkItem: | ||
| pass | ||
|
|
||
| @abstractmethod | ||
| async def close_issue(self, issue_id: int) -> WorkItem: | ||
| pass | ||
| ``` | ||
|
|
||
| **File: `apps/backend/integrations/providers/factory.py`** | ||
|
|
||
| ```python | ||
| """Factory for creating Git provider instances.""" | ||
|
|
||
| import os | ||
| from typing import Optional | ||
| from .base import GitProvider | ||
|
|
||
| def get_provider(provider_name: Optional[str] = None) -> GitProvider: | ||
| """ | ||
| Get a Git provider instance. | ||
| Args: | ||
| provider_name: One of 'github', 'gitlab', 'ado'. | ||
| If None, reads from GIT_PROVIDER env var. | ||
| Returns: | ||
| Configured GitProvider instance. | ||
| """ | ||
| provider = provider_name or os.getenv("GIT_PROVIDER", "github").lower() | ||
|
|
||
| if provider == "github": | ||
| from ..github.github import GitHubProvider | ||
| return GitHubProvider() | ||
| elif provider == "gitlab": | ||
| from ..gitlab.gitlab import GitLabProvider | ||
| return GitLabProvider() | ||
| elif provider in ("ado", "azure", "azuredevops"): | ||
| from ..ado.client import ADOProvider | ||
| return ADOProvider() | ||
| else: | ||
| raise ValueError(f"Unknown provider: {provider}. Use 'github', 'gitlab', or 'ado'.") | ||
| ``` | ||
|
|
||
| **File: `apps/backend/integrations/providers/types.py`** | ||
|
|
||
| ```python | ||
| """Shared types for all Git providers.""" | ||
|
|
||
| from dataclasses import dataclass | ||
| from typing import Optional, List | ||
| from datetime import datetime | ||
|
|
||
| @dataclass | ||
| class Identity: | ||
| """User identity across providers.""" | ||
| display_name: str | ||
| unique_name: str # email or username | ||
| avatar_url: Optional[str] = None | ||
|
|
||
| @dataclass | ||
| class Repository: | ||
| """Repository info (normalized across providers).""" | ||
| id: str | ||
| name: str | ||
| default_branch: str | ||
| web_url: str | ||
| provider: str # github, gitlab, ado | ||
|
|
||
| @dataclass | ||
| class Branch: | ||
| """Branch info.""" | ||
| name: str | ||
| commit_id: str | ||
| is_default: bool = False | ||
|
|
||
| @dataclass | ||
| class PullRequest: | ||
| """Pull Request / Merge Request (normalized).""" | ||
| id: int | ||
| title: str | ||
| description: str | ||
| status: str # open, closed, merged | ||
| source_branch: str | ||
| target_branch: str | ||
| author: Identity | ||
| created_at: datetime | ||
| web_url: str | ||
| provider: str | ||
| provider_id_field: str = "id" | ||
|
|
||
| @dataclass | ||
| class WorkItem: | ||
| """Issue / Work Item (normalized).""" | ||
| id: int | ||
| title: str | ||
| description: Optional[str] | ||
| state: str # open, closed, active, resolved, etc. | ||
| item_type: str # bug, task, feature, user story, etc. | ||
| assigned_to: Optional[Identity] | ||
| created_at: datetime | ||
| web_url: str | ||
| provider: str | ||
| tags: List[str] = None | ||
| ``` | ||
|
|
||
| ### 2.3 ADO Client Implementation | ||
|
|
||
| **File: `apps/backend/integrations/ado/client.py`** | ||
|
|
||
| ```python | ||
| """Azure DevOps API Client for Auto-Claude.""" | ||
|
|
||
| import os | ||
| from azure.devops.connection import Connection | ||
| from msrest.authentication import BasicAuthentication | ||
| from typing import Optional | ||
| from ..providers.base import GitProvider | ||
| from ..providers.types import Repository, Branch, PullRequest, WorkItem, Identity | ||
|
|
||
| class ADOProvider(GitProvider): | ||
| """Azure DevOps provider implementation.""" | ||
|
|
||
| def __init__( | ||
| self, | ||
| organization: Optional[str] = None, | ||
| project: Optional[str] = None, | ||
| repo_name: Optional[str] = None, | ||
| pat: Optional[str] = None, | ||
| instance_url: Optional[str] = None | ||
| ): | ||
| self.organization = organization or os.getenv("ADO_ORGANIZATION") | ||
| self.project = project or os.getenv("ADO_PROJECT") | ||
| self.repo_name = repo_name or os.getenv("ADO_REPO_NAME") | ||
| self.pat = pat or os.getenv("ADO_PAT") | ||
| self.instance_url = instance_url or os.getenv("ADO_INSTANCE_URL", "https://dev.azure.com") | ||
|
|
||
| if not all([self.organization, self.project, self.pat]): | ||
| raise ValueError("ADO_ORGANIZATION, ADO_PROJECT, and ADO_PAT are required") | ||
|
|
||
| self.org_url = f"{self.instance_url}/{self.organization}" | ||
| self.credentials = BasicAuthentication("", self.pat) | ||
| self.connection = Connection(base_url=self.org_url, creds=self.credentials) | ||
|
|
||
| # Lazy-loaded API clients | ||
| self._git_client = None | ||
| self._work_item_client = None | ||
|
|
||
| @property | ||
| def name(self) -> str: | ||
| return "ado" | ||
|
|
||
| @property | ||
| def display_name(self) -> str: | ||
| return "Azure DevOps" | ||
|
|
||
| @property | ||
| def pr_term(self) -> str: | ||
| return "Pull Request" | ||
|
|
||
| @property | ||
| def issue_term(self) -> str: | ||
| return "Work Item" | ||
|
|
||
| @property | ||
| def git(self): | ||
| """Get Git API client.""" | ||
| if self._git_client is None: | ||
| self._git_client = self.connection.clients.get_git_client() | ||
| return self._git_client | ||
|
|
||
| @property | ||
| def work_items(self): | ||
| """Get Work Item Tracking API client.""" | ||
| if self._work_item_client is None: | ||
| self._work_item_client = self.connection.clients.get_work_item_tracking_client() | ||
| return self._work_item_client | ||
|
|
||
| # Repository operations | ||
| async def get_repository(self) -> Repository: | ||
| repo = self.git.get_repository(self.repo_name, project=self.project) | ||
| return Repository( | ||
| id=repo.id, | ||
| name=repo.name, | ||
| default_branch=repo.default_branch.replace("refs/heads/", ""), | ||
| web_url=repo.web_url, | ||
| provider="ado" | ||
| ) | ||
|
|
||
| async def list_branches(self) -> list[Branch]: | ||
| branches = self.git.get_branches(self.repo_name, project=self.project) | ||
| return [ | ||
| Branch( | ||
| name=b.name, | ||
| commit_id=b.commit.commit_id, | ||
| is_default=b.is_base_version | ||
| ) | ||
| for b in branches | ||
| ] | ||
|
|
||
| # Pull Request operations | ||
| async def create_pull_request( | ||
| self, | ||
| title: str, | ||
| description: str, | ||
| source_branch: str, | ||
| target_branch: str | ||
| ) -> PullRequest: | ||
| from azure.devops.v7_1.git.models import GitPullRequest | ||
|
|
||
| source_ref = f"refs/heads/{source_branch}" if not source_branch.startswith("refs/") else source_branch | ||
| target_ref = f"refs/heads/{target_branch}" if not target_branch.startswith("refs/") else target_branch | ||
|
|
||
| pr = GitPullRequest( | ||
| title=title, | ||
| description=description, | ||
| source_ref_name=source_ref, | ||
| target_ref_name=target_ref | ||
| ) | ||
|
|
||
| created = self.git.create_pull_request( | ||
| git_pull_request_to_create=pr, | ||
| repository_id=self.repo_name, | ||
| project=self.project | ||
| ) | ||
|
|
||
| return self._map_pull_request(created) | ||
|
|
||
| async def list_pull_requests(self, status: str = "open") -> list[PullRequest]: | ||
| from azure.devops.v7_1.git.models import GitPullRequestSearchCriteria | ||
|
|
||
| status_map = {"open": "active", "closed": "completed", "all": "all"} | ||
| ado_status = status_map.get(status, "active") | ||
|
|
||
| search_criteria = GitPullRequestSearchCriteria(status=ado_status) | ||
| prs = self.git.get_pull_requests( | ||
| repository_id=self.repo_name, | ||
| search_criteria=search_criteria, | ||
| project=self.project | ||
| ) | ||
|
|
||
| return [self._map_pull_request(pr) for pr in prs] | ||
|
|
||
| async def get_pull_request(self, pr_id: int) -> PullRequest: | ||
| pr = self.git.get_pull_request( | ||
| repository_id=self.repo_name, | ||
| pull_request_id=pr_id, | ||
| project=self.project | ||
| ) | ||
| return self._map_pull_request(pr) | ||
|
|
||
| async def merge_pull_request(self, pr_id: int, strategy: str = "squash") -> PullRequest: | ||
| pr = self.git.get_pull_request( | ||
| repository_id=self.repo_name, | ||
| pull_request_id=pr_id, | ||
| project=self.project | ||
| ) | ||
|
|
||
| pr.status = "completed" | ||
| pr.completion_options = { | ||
| "deleteSourceBranch": True, | ||
| "mergeStrategy": strategy | ||
| } | ||
|
|
||
| updated = self.git.update_pull_request( | ||
| git_pull_request_to_update=pr, | ||
| repository_id=self.repo_name, | ||
| pull_request_id=pr_id, | ||
| project=self.project | ||
| ) | ||
|
|
||
| return self._map_pull_request(updated) | ||
|
|
||
| def _map_pull_request(self, pr) -> PullRequest: | ||
| return PullRequest( | ||
| id=pr.pull_request_id, | ||
| title=pr.title, | ||
| description=pr.description or "", | ||
| status="open" if pr.status == "active" else "closed", | ||
| source_branch=pr.source_ref_name.replace("refs/heads/", ""), | ||
| target_branch=pr.target_ref_name.replace("refs/heads/", ""), | ||
| author=Identity( | ||
| display_name=pr.created_by.display_name, | ||
| unique_name=pr.created_by.unique_name | ||
| ), | ||
| created_at=pr.creation_date, | ||
| web_url=f"{self.instance_url}/{self.organization}/{self.project}/_git/{self.repo_name}/pullrequest/{pr.pull_request_id}", | ||
| provider="ado", | ||
| provider_id_field="pullRequestId" | ||
| ) | ||
|
|
||
| # Work Item operations | ||
| async def create_issue( | ||
| self, | ||
| title: str, | ||
| description: str, | ||
| issue_type: str = "task" | ||
| ) -> WorkItem: | ||
| from azure.devops.v7_1.work_item_tracking.models import JsonPatchOperation | ||
|
|
||
| type_map = {"bug": "Bug", "task": "Task", "feature": "User Story"} | ||
| ado_type = type_map.get(issue_type.lower(), "Task") | ||
|
|
||
| operations = [ | ||
| JsonPatchOperation(op="add", path="/fields/System.Title", value=title), | ||
| JsonPatchOperation(op="add", path="/fields/System.Description", value=description) | ||
| ] | ||
|
|
||
| wi = self.work_items.create_work_item( | ||
| document=operations, | ||
| project=self.project, | ||
| type=ado_type | ||
| ) | ||
|
|
||
| return self._map_work_item(wi) | ||
|
|
||
| async def list_issues(self, state: str = "open") -> list[WorkItem]: | ||
| from azure.devops.v7_1.work_item_tracking.models import Wiql | ||
|
|
||
| state_filter = "Active" if state == "open" else "Closed" | ||
|
|
||
| wiql = Wiql(query=f""" | ||
| SELECT [System.Id] | ||
| FROM WorkItems | ||
| WHERE [System.TeamProject] = '{self.project}' | ||
| AND [System.State] = '{state_filter}' | ||
| ORDER BY [System.ChangedDate] DESC | ||
| """) | ||
|
|
||
| result = self.work_items.query_by_wiql(wiql, project=self.project) | ||
|
|
||
| if not result.work_items: | ||
| return [] | ||
|
|
||
| ids = [wi.id for wi in result.work_items[:200]] | ||
| items = self.work_items.get_work_items(ids=ids, expand="All") | ||
|
|
||
| return [self._map_work_item(wi) for wi in items] | ||
|
|
||
| async def get_issue(self, issue_id: int) -> WorkItem: | ||
| wi = self.work_items.get_work_item(id=issue_id, project=self.project, expand="All") | ||
| return self._map_work_item(wi) | ||
|
|
||
| async def close_issue(self, issue_id: int) -> WorkItem: | ||
| from azure.devops.v7_1.work_item_tracking.models import JsonPatchOperation | ||
|
|
||
| operations = [ | ||
| JsonPatchOperation(op="replace", path="/fields/System.State", value="Closed") | ||
| ] | ||
|
|
||
| wi = self.work_items.update_work_item( | ||
| document=operations, | ||
| id=issue_id, | ||
| project=self.project | ||
| ) | ||
|
|
||
| return self._map_work_item(wi) | ||
|
|
||
| def _map_work_item(self, wi) -> WorkItem: | ||
| fields = wi.fields | ||
| assigned = fields.get("System.AssignedTo") | ||
|
|
||
| return WorkItem( | ||
| id=wi.id, | ||
| title=fields.get("System.Title", ""), | ||
| description=fields.get("System.Description"), | ||
| state="open" if fields.get("System.State") in ["New", "Active"] else "closed", | ||
| item_type=fields.get("System.WorkItemType", "Task"), | ||
| assigned_to=Identity( | ||
| display_name=assigned.get("displayName", ""), | ||
| unique_name=assigned.get("uniqueName", "") | ||
| ) if assigned else None, | ||
| created_at=fields.get("System.CreatedDate"), | ||
| web_url=f"{self.instance_url}/{self.organization}/{self.project}/_workitems/edit/{wi.id}", | ||
| provider="ado", | ||
| tags=fields.get("System.Tags", "").split("; ") if fields.get("System.Tags") else [] | ||
| ) | ||
| ``` | ||
|
|
||
| --- | ||
|
|
||
| ## Phase 3: Frontend Integration Changes | ||
|
|
||
| ### 3.1 Directory Structure (Provider Abstraction) | ||
|
|
||
| ``` | ||
| apps/frontend/src/renderer/features/ | ||
| ├── tasks/ # Keep as-is | ||
| ├── terminals/ # Keep as-is | ||
| ├── projects/ # Keep as-is | ||
| ├── settings/ # Add provider selection UI | ||
| ├── git-provider/ # NEW: Unified provider feature (replaces github/) | ||
| │ ├── components/ | ||
| │ │ ├── ProviderSelector.tsx # Dropdown: GitHub / GitLab / ADO | ||
| │ │ ├── ProviderSettings.tsx # Config form (adapts per provider) | ||
| │ │ ├── PullRequestList.tsx # Generic PR/MR list | ||
| │ │ ├── PullRequestCard.tsx | ||
| │ │ ├── IssueList.tsx # Generic Issue/Work Item list | ||
| │ │ └── IssueCard.tsx | ||
| │ ├── hooks/ | ||
| │ │ ├── useProvider.ts # Get current provider | ||
| │ │ ├── usePullRequests.ts | ||
| │ │ ├── useIssues.ts | ||
| │ │ └── useProviderConfig.ts | ||
| │ ├── services/ | ||
| │ │ ├── providerFactory.ts # Returns correct API service | ||
| │ │ ├── githubApi.ts | ||
| │ │ ├── gitlabApi.ts | ||
| │ │ └── adoApi.ts | ||
| │ ├── types/ | ||
| │ │ ├── provider.types.ts # Shared types | ||
| │ │ ├── github.types.ts | ||
| │ │ ├── gitlab.types.ts | ||
| │ │ └── ado.types.ts | ||
| │ └── context/ | ||
| │ └── ProviderContext.tsx # React context for provider state | ||
| ``` | ||
|
|
||
| ### 3.2 Provider Selection UI | ||
|
|
||
| **File: `apps/frontend/src/renderer/features/git-provider/components/ProviderSelector.tsx`** | ||
|
|
||
| ```tsx | ||
| import React from 'react'; | ||
| import { useTranslation } from 'react-i18next'; | ||
| import { useProviderConfig } from '../hooks/useProviderConfig'; | ||
|
|
||
| type ProviderType = 'github' | 'gitlab' | 'ado'; | ||
|
|
||
| interface ProviderOption { | ||
| id: ProviderType; | ||
| name: string; | ||
| icon: string; | ||
| description: string; | ||
| } | ||
|
|
||
| const PROVIDERS: ProviderOption[] = [ | ||
| { | ||
| id: 'github', | ||
| name: 'GitHub', | ||
| icon: '🐙', | ||
| description: 'GitHub.com or GitHub Enterprise', | ||
| }, | ||
| { | ||
| id: 'gitlab', | ||
| name: 'GitLab', | ||
| icon: '🦊', | ||
| description: 'GitLab.com or self-hosted GitLab', | ||
| }, | ||
| { | ||
| id: 'ado', | ||
| name: 'Azure DevOps', | ||
| icon: '🔷', | ||
| description: 'Azure DevOps Services or Server', | ||
| }, | ||
| ]; | ||
|
|
||
| export function ProviderSelector() { | ||
| const { t } = useTranslation(); | ||
| const { provider, setProvider } = useProviderConfig(); | ||
|
|
||
| return ( | ||
| <div className="provider-selector"> | ||
| <h3>{t('settings.gitProvider.title', 'Git Provider')}</h3> | ||
| <p className="text-muted"> | ||
| {t('settings.gitProvider.description', 'Select your Git hosting platform')} | ||
| </p> | ||
|
|
||
| <div className="provider-options"> | ||
| {PROVIDERS.map((p) => ( | ||
| <button | ||
| key={p.id} | ||
| className={`provider-option ${provider === p.id ? 'selected' : ''}`} | ||
| onClick={() => setProvider(p.id)} | ||
| > | ||
| <span className="provider-icon">{p.icon}</span> | ||
| <div className="provider-info"> | ||
| <span className="provider-name">{p.name}</span> | ||
| <span className="provider-desc">{p.description}</span> | ||
| </div> | ||
| {provider === p.id && <span className="checkmark">✓</span>} | ||
| </button> | ||
| ))} | ||
| </div> | ||
| </div> | ||
| ); | ||
| } | ||
| ``` | ||
|
|
||
| ### 3.3 ADO API Service (Frontend) | ||
|
|
||
| **File: `apps/frontend/src/renderer/features/git-provider/services/adoApi.ts`** | ||
|
|
||
| ```typescript | ||
| import type { ADOConfig, PullRequest, WorkItem } from '../types/ado.types'; | ||
|
|
||
| class ADOApiService { | ||
| private config: ADOConfig | null = null; | ||
| private baseUrl: string = ''; | ||
|
|
||
| initialize(config: ADOConfig) { | ||
| this.config = config; | ||
| const instanceUrl = config.instanceUrl || 'https://dev.azure.com'; | ||
| this.baseUrl = `${instanceUrl}/${config.organization}/${config.project}/_apis`; | ||
| } | ||
|
|
||
| private async fetch<T>(endpoint: string, options: RequestInit = {}): Promise<T> { | ||
| if (!this.config) { | ||
| throw new Error('ADO API not initialized'); | ||
| } | ||
|
|
||
| const url = `${this.baseUrl}${endpoint}`; | ||
| const headers = { | ||
| 'Content-Type': 'application/json', | ||
| Authorization: `Basic ${btoa(`:${this.config.pat}`)}`, | ||
| ...options.headers, | ||
| }; | ||
|
|
||
| const response = await fetch(url, { ...options, headers }); | ||
|
|
||
| if (!response.ok) { | ||
| throw new Error(`ADO API error: ${response.status}`); | ||
| } | ||
|
|
||
| return response.json(); | ||
| } | ||
|
|
||
| // Pull Request operations | ||
| async getPullRequests(status: string = 'active'): Promise<{ value: PullRequest[] }> { | ||
| return this.fetch( | ||
| `/git/repositories/${this.config!.repoName}/pullrequests?searchCriteria.status=${status}&api-version=7.1` | ||
| ); | ||
| } | ||
|
|
||
| async createPullRequest( | ||
| title: string, | ||
| description: string, | ||
| sourceBranch: string, | ||
| targetBranch: string = 'develop' | ||
| ): Promise<PullRequest> { | ||
| return this.fetch(`/git/repositories/${this.config!.repoName}/pullrequests?api-version=7.1`, { | ||
| method: 'POST', | ||
| body: JSON.stringify({ | ||
| title, | ||
| description, | ||
| sourceRefName: `refs/heads/${sourceBranch}`, | ||
| targetRefName: `refs/heads/${targetBranch}`, | ||
| }), | ||
| }); | ||
| } | ||
|
|
||
| // Work Item operations | ||
| async queryWorkItems(wiql: string): Promise<{ workItems: { id: number }[] }> { | ||
| return this.fetch(`/wit/wiql?api-version=7.1`, { | ||
| method: 'POST', | ||
| body: JSON.stringify({ query: wiql }), | ||
| }); | ||
| } | ||
|
|
||
| async getWorkItems(ids: number[]): Promise<{ value: WorkItem[] }> { | ||
| return this.fetch(`/wit/workitems?ids=${ids.join(',')}&$expand=All&api-version=7.1`); | ||
| } | ||
|
|
||
| async createWorkItem(type: string, title: string, description?: string): Promise<WorkItem> { | ||
| const operations = [ | ||
| { op: 'add', path: '/fields/System.Title', value: title }, | ||
| ]; | ||
|
|
||
| if (description) { | ||
| operations.push({ op: 'add', path: '/fields/System.Description', value: description }); | ||
| } | ||
|
|
||
| return this.fetch(`/wit/workitems/$${type}?api-version=7.1`, { | ||
| method: 'POST', | ||
| headers: { 'Content-Type': 'application/json-patch+json' }, | ||
| body: JSON.stringify(operations), | ||
| }); | ||
| } | ||
| } | ||
|
|
||
| export const adoApi = new ADOApiService(); | ||
| export default adoApi; | ||
| ``` | ||
|
|
||
| --- | ||
|
|
||
| ## Phase 4: Dependencies | ||
|
|
||
| ### Backend (Python) | ||
|
|
||
| Add to `apps/backend/requirements.txt`: | ||
|
|
||
| ``` | ||
| azure-devops>=7.1.0 | ||
| msrest>=0.7.1 | ||
| ``` | ||
|
|
||
| ### Frontend (Node.js) | ||
|
|
||
| Add to `apps/frontend/package.json`: | ||
|
|
||
| ```json | ||
| { | ||
| "dependencies": { | ||
| "azure-devops-node-api": "^14.0.0" | ||
| } | ||
| } | ||
| ``` | ||
|
|
||
| --- | ||
|
|
||
| ## Phase 5: Migration Checklist | ||
|
|
||
| ### Files to Modify | ||
|
|
||
| | File | Action | Priority | | ||
| |------|--------|----------| | ||
| | `apps/backend/.env.example` | Add ADO vars | High | | ||
| | `apps/backend/requirements.txt` | Add azure-devops | High | | ||
| | `apps/backend/integrations/__init__.py` | Export providers | High | | ||
| | `apps/frontend/package.json` | Add azure-devops-node-api | High | | ||
| | `apps/frontend/src/main/index.ts` | Register ADO IPC handlers | High | | ||
| | `apps/frontend/src/renderer/features/settings/` | Add provider selector | Medium | | ||
| | `README.md` | Document multi-provider setup | Medium | | ||
|
|
||
| ### Search & Replace Patterns | ||
|
|
||
| | Search | Replace | Scope | | ||
| |--------|---------|-------| | ||
| | `GITHUB_TOKEN` | `ADO_PAT` (when provider=ado) | Environment | | ||
| | Direct GitHub API calls | `get_provider()` factory | Backend | | ||
| | `github.com` URLs | Provider-specific URLs | Throughout | | ||
|
|
||
| --- | ||
|
|
||
| ## Phase 6: API Mapping Reference | ||
|
|
||
| | GitHub API | ADO API | Notes | | ||
| |------------|---------|-------| | ||
| | `pulls.create()` | `git_client.create_pull_request()` | Refs need `refs/heads/` prefix | | ||
| | `pulls.list()` | `git_client.get_pull_requests()` | Uses search criteria | | ||
| | `pulls.merge()` | `git_client.update_pull_request()` | Set status="completed" | | ||
| | `issues.create()` | `wit_client.create_work_item()` | JSON Patch operations | | ||
| | `issues.list()` | `wit_client.query_by_wiql()` | WIQL query language | | ||
| | Webhooks | Service Hooks | Different payload format | | ||
|
|
||
| --- | ||
|
|
||
| ## Appendix: ADO API Quick Reference | ||
|
|
||
| ### Authentication | ||
|
|
||
| ```bash | ||
| # Base64 encode PAT (empty username) | ||
| echo -n ":$ADO_PAT" | base64 | ||
|
|
||
| # Use in header | ||
| curl -H "Authorization: Basic $(echo -n :$ADO_PAT | base64)" \ | ||
| "https://dev.azure.com/Aurivue/Aurivue-Platform/_apis/git/repositories?api-version=7.1" | ||
| ``` | ||
|
|
||
| ### Common Endpoints | ||
|
|
||
| | Operation | Endpoint | | ||
| |-----------|----------| | ||
| | List repos | `GET /_apis/git/repositories` | | ||
| | List PRs | `GET /_apis/git/repositories/{repo}/pullrequests` | | ||
| | Create PR | `POST /_apis/git/repositories/{repo}/pullrequests` | | ||
| | Query work items | `POST /_apis/wit/wiql` | | ||
| | Create work item | `POST /_apis/wit/workitems/${type}` | | ||
|
|
||
| ### WIQL Examples | ||
|
|
||
| ```sql | ||
| -- Active bugs assigned to me | ||
| SELECT [System.Id], [System.Title] | ||
| FROM WorkItems | ||
| WHERE [System.TeamProject] = 'Aurivue-Platform' | ||
| AND [System.WorkItemType] = 'Bug' | ||
| AND [System.State] = 'Active' | ||
| AND [System.AssignedTo] = @Me | ||
| ``` | ||
|
|
||
| --- | ||
|
|
||
| ## Resources | ||
|
|
||
| - [Azure DevOps REST API Reference](https://learn.microsoft.com/en-us/rest/api/azure/devops/) | ||
| - [azure-devops-node-api](https://github.com/microsoft/azure-devops-node-api) | ||
| - [Azure DevOps Python API](https://github.com/microsoft/azure-devops-python-api) | ||
| - [Service Hooks Documentation](https://learn.microsoft.com/en-us/azure/devops/service-hooks/) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Overall Assessment: Comprehensive plan with significant implementation concerns.
This is a well-structured design document for ADO integration. However, multiple critical and major issues must be addressed before implementation:
Critical (must fix before coding):
- Frontend credential storage security—PAT in JavaScript memory is unacceptable
- Async/await correctness in backend ADO client
- Azure SDK object mutation compatibility
- WIQL injection vulnerability
Major (address during implementation):
- Python 3.10+ type hint compatibility
- Frontend error handling for API failures
- Endpoint syntax validation (
$${type}) - Security documentation for credentials and query parameterization
Recommended refactors:
- Backend API via secure proxy pattern (avoid PAT in frontend)
- TypeScript/i18n accessibility (aria-selected, keyboard navigation)
- Tighter dependency version pinning (~ instead of ^)
- Comprehensive error handling and retry logic
The architecture is sound, but implementation will require careful attention to security, API compatibility, and error resilience.
🧰 Tools
🪛 LanguageTool
[uncategorized] ~30-~30: The official name of this software platform is spelled with a capital “H”.
Context: ...| GIT_PROVIDER | Provider selection | github, gitlab, ado | | ADO_PAT | Perso...
(GITHUB)
[uncategorized] ~803-~803: The official name of this software platform is spelled with a capital “H”.
Context: ... get_provider() factory | Backend | | github.com URLs | Provider-specific URLs | Th...
(GITHUB)
🪛 markdownlint-cli2 (0.18.1)
66-66: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
557-557: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
764-764: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In @guides/ADO-INTEGRATION-PLAN.md around lines 1 - 862, Summary: The review
flags four critical issues to fix before implementation: insecure PAT handling
in frontend, incorrect async/await use in backend ADO client, unsafe mutation of
Azure SDK objects, and a WIQL injection risk. Fix instructions: 1) Remove PAT
usage from frontend ADOApiService.initialize and ProviderSelector flow — instead
implement a backend proxy/IPC endpoint that holds ADO_PAT securely and performs
ADO REST calls; update frontend to call that proxy (references:
ADOApiService.initialize, ProviderSelector and adoApi). 2) Correct async usage
in ADOProvider methods: ensure ADOProvider.get_repository, list_branches,
create_pull_request, list_pull_requests, get_pull_request, merge_pull_request,
create_issue, list_issues, get_issue, close_issue properly await or run blocking
azure-devops client calls off the event loop (use sync wrappers or
asyncio.to_thread) rather than leaving synchronous SDK calls inside async
functions (references: class ADOProvider and its methods). 3) Avoid mutating SDK
model objects returned from the Azure SDK—construct proper update models or call
provided update helpers instead of setting attributes on objects (fix
merge_pull_request where pr.status and pr.completion_options are assigned; use
the appropriate GitPullRequestUpdate/CompletionOptions model). 4) Eliminate WIQL
injection by parameterizing queries instead of f-string interpolation in
list_issues (and frontend queryWorkItems) — build WIQL with safe parameters or
use prepared filters and validate/whitelist input (references:
ADOProvider.list_issues Wiql usage and frontend ADOApiService.queryWorkItems).
Ensure these four fixes are implemented before proceeding with other changes.
|
|
||
| | Variable | Purpose | Example | | ||
| |----------|---------|---------| | ||
| | `GIT_PROVIDER` | Provider selection | `github`, `gitlab`, `ado` | |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Capitalize "GitHub" in table entries and throughout.
Lines 30 and 803 contain lowercase github which should follow the official GitHub branding with a capital "H".
📝 Proposed fixes
- | Provider | Status | Features |
- |----------|--------|----------|
- | **GitHub** | Existing | PRs, Issues, Webhooks |
+ | Provider | Status | Features |
+ |----------|--------|----------|
+ | **GitHub** | Existing | PRs, Issues, Webhooks |Also update line 803:
- | `github.com` URLs | Provider-specific URLs | Throughout |
+ | `GitHub.com` URLs | Provider-specific URLs | Throughout |Also applies to: 803-803
🧰 Tools
🪛 LanguageTool
[uncategorized] ~30-~30: The official name of this software platform is spelled with a capital “H”.
Context: ...| GIT_PROVIDER | Provider selection | github, gitlab, ado | | ADO_PAT | Perso...
(GITHUB)
🤖 Prompt for AI Agents
In @guides/ADO-INTEGRATION-PLAN.md at line 30, Replace lowercase vendor branding
"github" with the official capitalized form "GitHub" wherever it appears as a
value for the GIT_PROVIDER option and elsewhere in the document; specifically
update the table row that lists `GIT_PROVIDER` values (currently showing
`github`, `gitlab`, `ado`) to use `GitHub` and search for other occurrences
(e.g., the other instance flagged) and change them to "GitHub" to match official
branding.
|
|
||
| ### 2.1 Directory Structure (Provider Abstraction) | ||
|
|
||
| ``` |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Add language specifiers to fenced code blocks.
Lines 66, 557, and 764 have fenced code blocks without language identifiers (e.g., python, bash, ```json). This:
- Breaks syntax highlighting in markdown renderers
- Violates markdownlint rules
- Reduces readability
Add language identifiers to all code blocks:
-```
+```pythonApply to:
- Line 66: Directory structure (no language needed, but use
treeortext) - Line 557: Frontend directory structure (use
textortree) - Line 764: Requirements file (use
textorrequirements)
Also applies to: 557-557, 764-764
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
66-66: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In @guides/ADO-INTEGRATION-PLAN.md at line 66, Three fenced code blocks lack
language specifiers which breaks syntax highlighting and markdownlint; update
the three blocks referenced at lines 66, 557, and 764 to include an appropriate
language tag (e.g., use "tree" or "text" for directory listings at line 66,
"text" or "tree" for the frontend directory block at line 557, and "text" or
"requirements" for the requirements file block at line 764) by editing the
opening fence to include the chosen identifier so renderers and linters
correctly apply syntax highlighting.
| created_at: datetime | ||
| web_url: str | ||
| provider: str | ||
| tags: List[str] = None |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fix mutable default argument in WorkItem dataclass.
Using None as a default for a mutable List[str] field is not idiomatic. Use field(default_factory=list) to avoid sharing state between instances.
🔧 Proposed fix
from dataclasses import dataclass
+from dataclasses import field
from typing import Optional, List
from datetime import datetime
@dataclass
class WorkItem:
"""Issue / Work Item (normalized)."""
id: int
title: str
description: Optional[str]
state: str
item_type: str
assigned_to: Optional[Identity]
created_at: datetime
web_url: str
provider: str
- tags: List[str] = None
+ tags: List[str] = field(default_factory=list)Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @guides/ADO-INTEGRATION-PLAN.md at line 275, The WorkItem dataclass currently
defines tags: List[str] = None which uses a mutable list via a shared default;
change the field to use dataclasses.field with a default factory: update the
tags annotation to tags: List[str] = field(default_factory=list) and add an
import for field from dataclasses (ensure typing.List is already imported);
locate the WorkItem class and replace the default None with
field(default_factory=list) so each instance gets its own list.
| return self._work_item_client | ||
|
|
||
| # Repository operations | ||
| async def get_repository(self) -> Repository: |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧩 Analysis chain
🏁 Script executed:
cat guides/ADO-INTEGRATION-PLAN.md | tail -200Repository: AndyMik90/Auto-Claude
Length of output: 5867
🌐 Web query:
Azure DevOps Python SDK async operations support
💡 Result:
Short answer: The official Azure DevOps Python package (azure-devops / microsoft/azure-devops-python-api) is synchronous only — there is no official async/await client. [1]
Options if you need async behavior:
- Call the Azure DevOps REST API directly from an async HTTP client (aiohttp or httpx). Microsoft’s REST API docs include async examples and are the canonical surface to call. [2]
- Use a community async wrapper (e.g., aioazuredevops) or similar third‑party packages on PyPI. [3]
- Keep using the official SDK but run blocking calls in an executor (ThreadPoolExecutor) to avoid blocking an event loop.
For general Azure Python libraries, Microsoft provides .aio async clients for services that support it — but that pattern is not applied to the azure-devops Python client. [4]
References:
- Official Azure DevOps Python API repo (sync client). [1]
- Azure DevOps REST API docs / samples (recommended for async HTTP calls). [2]
- Example community async package on PyPI (aioazuredevops). [3]
- Azure Python SDK async guidance (.aio pattern). [4]
🏁 Script executed:
sed -n '400,425p' guides/ADO-INTEGRATION-PLAN.mdRepository: AndyMik90/Auto-Claude
Length of output: 1107
🏁 Script executed:
sed -n '460,520p' guides/ADO-INTEGRATION-PLAN.mdRepository: AndyMik90/Auto-Claude
Length of output: 2155
🌐 Web query:
azure-devops Python SDK async await support
💡 Result:
Short answer: The official microsoft/azure-devops-python-api (pip package azure-devops) is a synchronous wrapper around the Azure DevOps REST API and does not provide async/await (.aio) clients. See the repo and docs. [1][2]
If you need async:
- Call the Azure DevOps REST endpoints directly using an async HTTP client (aiohttp or httpx) and await requests. (This is the common workaround; the SDK is just a thin sync wrapper.) [2]
- Or run the sync SDK calls in a threadpool (asyncio.to_thread or loop.run_in_executor) if you must reuse the existing library.
- Note: other Azure Python SDKs follow the async .aio pattern, but that applies to Azure service SDKs (azure-*.aio) — not the Azure DevOps Python API. [3]
References:
- microsoft/azure-devops-python-api (repo README). [4]
- GitHub issues / samples showing the SDK is a REST wrapper and samples are not actively maintained. [5][6]
- Azure SDK Python guidelines / async .aio pattern (explains .aio clients for other Azure SDKs). [3]
Would you like a small aiohttp example that calls an Azure DevOps REST API (with PAT auth)?
🌐 Web query:
azure-devops-python-api GitClient synchronous blocking calls
💡 Result:
Short answer: The azure-devops-python-api GitClient is synchronous/blocking. The library is a thin synchronous wrapper around the Azure DevOps REST API (methods issue HTTP requests and return results), it does not ship separate async/aio clients — so calls block the calling thread. [1][2]
If you need non‑blocking behavior in an async app, run calls in a thread or executor (eg. asyncio.to_thread or loop.run_in_executor) or offload to worker threads/processes. (If you want, I can show a small example.)
Sources:
- azure-devops-python-api (repo README / docs) — thin wrapper / sync calls. [1]
- Azure Python SDK guidelines (sync vs async separate clients; async libs use .aio) — guidance explaining async clients are separate. [2]
References:
[1] https://github.com/microsoft/azure-devops-python-api
[2] https://azure.github.io/azure-sdk/python_design.html
🌐 Web query:
azure-devops-python-api is synchronous blocking library
💡 Result:
Short answer: Yes — azure-devops-python-api is a synchronous, blocking thin wrapper around the Azure DevOps REST APIs (no official async/.aio client). [1][2]
Notes:
- The repo README describes it as a "thin wrapper around the Azure DevOps REST APIs" and shows synchronous client usage. [1]
- Azure's modern Python SDK pattern documents the separate async (.aio) packages for libraries that provide async support; the azure-devops-python-api is not part of that async/.aio ecosystem. [2]
If you need non‑blocking calls you can:
- Run the synchronous client in a ThreadPoolExecutor / background thread, or
- Implement your own async HTTP transport that calls the REST endpoints (e.g., using httpx or aiohttp). [2]
Sources:
[1] microsoft/azure-devops-python-api README.
[2] Azure Python SDK docs — async libraries / transports.
Refactor async methods to properly handle blocking Azure DevOps SDK calls.
The async methods call blocking synchronous operations from the azure-devops-python-api library without await or asyncio.to_thread(). The official Azure DevOps SDK is synchronous-only with no async support. These method signatures create a contract that they won't block the event loop, but they do.
Either:
- Remove the
asyncdecorator and handle synchronously, or - Wrap blocking calls with
asyncio.to_thread()(e.g.,await asyncio.to_thread(self.git.get_pull_requests, ...)), or - Call the Azure DevOps REST API directly using an async HTTP client (aiohttp/httpx).
Affects: list_pull_requests(), get_pull_request(), merge_pull_request(), create_issue(), list_issues(), get_issue(), close_issue(), and similar methods that invoke blocking SDK operations.
| const url = `${this.baseUrl}${endpoint}`; | ||
| const headers = { | ||
| 'Content-Type': 'application/json', | ||
| Authorization: `Basic ${btoa(`:${this.config.pat}`)}`, |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Avoid storing PAT in frontend JavaScript memory.
Line 686: The Personal Access Token (PAT) is stored in the config object and used for every fetch request. Frontend JavaScript is inherently insecure for credential storage—users can inspect memory and local storage.
Best practices:
- Proxy API calls through the backend: Let the backend handle ADO authentication and expose sanitized endpoints.
- Use OAuth/MSAL: Azure DevOps supports OAuth; use Azure AD authentication instead of PAT.
- Temporary tokens: Fetch short-lived tokens from the backend, not long-lived PATs.
This is a significant security gap and should be addressed before deployment.
🤖 Prompt for AI Agents
In @guides/ADO-INTEGRATION-PLAN.md at line 686, The code is exposing the
Personal Access Token via this.config.pat in the frontend Authorization header;
remove any usage of this.config.pat from client-side code and instead route ADO
requests through a backend endpoint that injects the Authorization header, or
implement OAuth/MSAL to obtain short-lived tokens server-side; update any
frontend call sites that currently build headers like `Authorization: \`Basic
${btoa(`:${this.config.pat}`)}\`` to call your new backend proxy (or OAuth flow)
and have the backend add the Authorization header using a securely stored PAT or
access token.
| const response = await fetch(url, { ...options, headers }); | ||
|
|
||
| if (!response.ok) { | ||
| throw new Error(`ADO API error: ${response.status}`); | ||
| } | ||
|
|
||
| return response.json(); |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Add error handling for API fetch calls.
Lines 690-696: The fetch() method lacks error handling for network failures, timeouts, or JSON parsing errors. Users won't be notified if API calls fail.
Add try/catch blocks and proper error propagation:
private async fetch<T>(endpoint: string, options: RequestInit = {}): Promise<T> {
// ... existing code ...
try {
const response = await fetch(url, { ...options, headers });
if (!response.ok) {
throw new Error(`ADO API error: ${response.status}`);
}
return response.json();
} catch (error) {
console.error(`API call failed for ${endpoint}:`, error);
throw error; // Let caller handle
}
}🤖 Prompt for AI Agents
In @guides/ADO-INTEGRATION-PLAN.md around lines 690 - 696, The fetch<T> function
lacks error handling for network, timeout, and JSON parse failures; wrap the
fetch and subsequent response.json() in a try/catch inside the private async
fetch<T>(endpoint: string, options: RequestInit = {}) method, check response.ok
as before, and on any exception log a clear message including the endpoint and
error (e.g., via console.error or the project's logger) and rethrow the error so
callers can handle it; ensure the catch also covers JSON parsing errors by
awaiting response.json() inside the try block.
| operations.push({ op: 'add', path: '/fields/System.Description', value: description }); | ||
| } | ||
|
|
||
| return this.fetch(`/wit/workitems/$${type}?api-version=7.1`, { |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧩 Analysis chain
🏁 Script executed:
cd /root && find . -name "ADO-INTEGRATION-PLAN.md" -type f 2>/dev/null | head -20Repository: AndyMik90/Auto-Claude
Length of output: 120
🏁 Script executed:
# Read the file to see the context around line 744
if [ -f "./guides/ADO-INTEGRATION-PLAN.md" ]; then
wc -l ./guides/ADO-INTEGRATION-PLAN.md
fiRepository: AndyMik90/Auto-Claude
Length of output: 101
🏁 Script executed:
# Get context around line 744
if [ -f "./guides/ADO-INTEGRATION-PLAN.md" ]; then
sed -n '735,755p' ./guides/ADO-INTEGRATION-PLAN.md
fiRepository: AndyMik90/Auto-Claude
Length of output: 664
🌐 Web query:
Azure DevOps REST API create work item endpoint syntax endpoint format
💡 Result:
Endpoint format
- POST https://dev.azure.com/{organization}/{project}/_apis/wit/workitems/${type}?api-version={version}. [1] (learn.microsoft.com)
Headers
- Authorization: Basic <base64(user:PAT)>
- Content-Type: application/json-patch+json. [1] (learn.microsoft.com)
Request body (JSON Patch array)
- Example:
[
{ "op": "add", "path": "/fields/System.Title", "from": null, "value": "Sample task" },
{ "op": "add", "path": "/fields/System.AssignedTo", "value": "[email protected]" }
] - Use the work item type in the path (e.g., $Task, $Bug, $User Story). [2] (learn.microsoft.com)
Optional query params
- validateOnly, bypassRules, suppressNotifications, $expand. [1] (learn.microsoft.com)
Notes
- Use the latest documented api-version (e.g., 7.1 or preview versions as needed). See Microsoft docs for examples and latest versions. [1] (learn.microsoft.com)
Sources
[1] Microsoft Learn — Work Items - Create (Azure DevOps REST API). (learn.microsoft.com)
[2] Microsoft docs sample request (JSON Patch example). (learn.microsoft.com)
Citations:
- 1: https://learn.microsoft.com/en-us/rest/api/azure/devops/wit/work-items/create?view=azure-devops-rest-7.1&utm_source=openai
- 2: https://learn.microsoft.com/en-us/rest/api/azure/devops/wit/work-items/create?view=azure-devops-rest-7.1&utm_source=openai
- 3: https://learn.microsoft.com/en-gb/rest/api/azure/devops/wit/work-items/create?view=azure-devops-rest-5.0&utm_source=openai
- 4: https://learn.microsoft.com/en-us/rest/api/azure/devops/wit/work-items/create?view=azure-devops-rest-7.1&utm_source=openai
- 5: https://learn.microsoft.com/en-us/rest/api/azure/devops/wit/work-items/create?view=azure-devops-rest-7.1&utm_source=openai
- 6: https://learn.microsoft.com/en-us/rest/api/azure/devops/wit/work-items/create?view=azure-devops-rest-7.1&utm_source=openai
- 7: https://learn.microsoft.com/en-gb/rest/api/azure/devops/wit/work-items/create?view=azure-devops-rest-5.0&utm_source=openai
Fix malformed template literal interpolation in API endpoint.
Line 744: The endpoint uses $${type} which is incorrect. In JavaScript template literals, ${variable} is the correct syntax for interpolation. The extra $ will produce a malformed endpoint path.
Update to:
return this.fetch(`/wit/workitems/${type}?api-version=7.1`, {The double $$ syntax is not valid for template literal variable interpolation and will cause the API request to fail.
🤖 Prompt for AI Agents
In @guides/ADO-INTEGRATION-PLAN.md at line 744, The template literal in the
fetch call is incorrectly using `$${type}` and should interpolate the variable
`type` as `${type}`; locate the fetch invocation (the return this.fetch(...)
call) that builds the `/wit/workitems/...` endpoint and replace the malformed
`$${type}` with `${type}` so the endpoint becomes
`/wit/workitems/${type}?api-version=7.1`, ensuring the API path is correctly
constructed.
| Add to `apps/backend/requirements.txt`: | ||
|
|
||
| ``` | ||
| azure-devops>=7.1.0 |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Tighten dependency version pinning.
Lines 765 and 776 use loose version constraints:
azure-devops>=7.1.0allows any version >= 7.1.0, including breaking changes (8.0.0, 9.0.0, etc.)azure-devops-node-api@^14.0.0allows breaking changes up to 15.0.0
For production stability, use tighter constraints:
- azure-devops>=7.1.0
+ azure-devops~=7.1.0- "azure-devops-node-api": "^14.0.0"
+ "azure-devops-node-api": "~14.0.0"This ensures patch updates (7.1.x) but prevents major breaking changes (8.0.0).
Also applies to: 776-776
🤖 Prompt for AI Agents
In @guides/ADO-INTEGRATION-PLAN.md at line 765, Replace the loose semver ranges
with tighter patch-level pins: change the "azure-devops>=7.1.0" entry to a
patch-bound specifier (e.g., "~7.1.0" or "7.1.x" or an exact "7.1.0") to allow
only 7.1.* fixes, and change "azure-devops-node-api@^14.0.0" to a patch-bound
specifier like "~14.0.0" (or "14.0.x" / exact "14.0.0") so only patch releases
are accepted; update the two occurrences referenced by their exact strings
("azure-devops>=7.1.0" and "azure-devops-node-api@^14.0.0") accordingly.
| ## Appendix: ADO API Quick Reference | ||
|
|
||
| ### Authentication | ||
|
|
||
| ```bash | ||
| # Base64 encode PAT (empty username) | ||
| echo -n ":$ADO_PAT" | base64 | ||
|
|
||
| # Use in header | ||
| curl -H "Authorization: Basic $(echo -n :$ADO_PAT | base64)" \ | ||
| "https://dev.azure.com/Aurivue/Aurivue-Platform/_apis/git/repositories?api-version=7.1" | ||
| ``` | ||
|
|
||
| ### Common Endpoints | ||
|
|
||
| | Operation | Endpoint | | ||
| |-----------|----------| | ||
| | List repos | `GET /_apis/git/repositories` | | ||
| | List PRs | `GET /_apis/git/repositories/{repo}/pullrequests` | | ||
| | Create PR | `POST /_apis/git/repositories/{repo}/pullrequests` | | ||
| | Query work items | `POST /_apis/wit/wiql` | | ||
| | Create work item | `POST /_apis/wit/workitems/${type}` | | ||
|
|
||
| ### WIQL Examples | ||
|
|
||
| ```sql | ||
| -- Active bugs assigned to me | ||
| SELECT [System.Id], [System.Title] | ||
| FROM WorkItems | ||
| WHERE [System.TeamProject] = 'Aurivue-Platform' | ||
| AND [System.WorkItemType] = 'Bug' | ||
| AND [System.State] = 'Active' | ||
| AND [System.AssignedTo] = @Me | ||
| ``` |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Document security best practices for WIQL queries and PAT usage.
The appendix includes WIQL examples and authentication instructions, but lacks security guidance:
- Line 824-830: Shows how to encode PAT in curl, but doesn't warn about storing PAT in command history, environment variables, or browser console
- Line 845-853: WIQL example uses hardcoded project name; should mention parameterization/escaping
Add a security section to the appendix:
- Never commit PATs to version control
- Use environment variables or secure credential managers
- Sanitize user input in WIQL queries
- Consider OAuth/MSAL over PAT for long-term solutions
🤖 Prompt for AI Agents
In @guides/ADO-INTEGRATION-PLAN.md around lines 820 - 853, Add a new "Security
best practices" subsection to the appendix advising: never commit PATs to
version control and avoid exposing them in command history or browser consoles
(update the curl PAT example to note this risk and to read the PAT from a secure
store or environment variable without echoing it), prefer environment variables
or secret/credential managers (and include a note to rotate PATs regularly),
avoid embedding hardcoded project names in WIQL by parameterizing/escaping
inputs and sanitize any user-supplied fields used in WIQL queries to prevent
injection, and recommend using OAuth/MSAL or Managed Identities instead of PATs
for long-term, production-grade authentication.
- Add AzureDevOpsProvider implementing GitProvider protocol
- Support for PRs, Work Items (issues), labels (tags), and repository ops
- Update factory to support azure_devops provider type
- Add ADO environment variables to .env.example
- Add azure-devops and msrest dependencies to requirements.txt
- Export AzureDevOpsProvider from providers module
Usage:
provider = get_provider("azure_devops", repo="org/project/repo")
pr = await provider.fetch_pr(123)
Environment variables:
ADO_ORGANIZATION, ADO_PROJECT, ADO_PAT, ADO_REPO_NAME, ADO_INSTANCE_URL
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/backend/runners/github/providers/__init__.py (1)
20-32: Fix import sorting to resolve pipeline failure.The pipeline is failing due to unsorted imports (Ruff I001). Imports should be sorted alphabetically within groups.
Proposed fix
-from .factory import get_provider, register_provider, list_available_providers, is_provider_available +from .azure_devops_provider import AzureDevOpsProvider +from .factory import ( + get_provider, + is_provider_available, + list_available_providers, + register_provider, +) from .github_provider import GitHubProvider -from .azure_devops_provider import AzureDevOpsProvider from .protocol import ( GitProvider, IssueData,
🤖 Fix all issues with AI agents
In @apps/backend/requirements.txt:
- Around line 19-21: The azure-devops requirement is using a loose beta
constraint; change the line referencing the azure-devops package in
apps/backend/requirements.txt from "azure-devops>=7.1.0b4" to pin the beta
explicitly as "azure-devops==7.1.0b4" (or replace with a stable version when one
is available) and add a short comment nearby noting that this is a pinned beta
and should be reviewed/updated when a stable 7.1.x release is published; leave
msrest as-is unless you choose to pin it similarly.
In @apps/backend/runners/github/providers/azure_devops_provider.py:
- Around line 556-557: The empty except blocks in apply_labels and remove_labels
silently swallow errors; change them to catch Exception as e and log the
exception (e.g., logger.exception or logger.error with the exception) including
context such as the function name (apply_labels/remove_labels), repo/pr
identifiers and the labels being applied/removed, so failures are visible; do
not leave the block blank—either re-raise after logging or return a clear
failure result per the surrounding API contract.
- Around line 748-751: The POST call currently uses urllib.request.urlopen(req,
data=body) without a timeout; update the call to include the same timeout used
for other requests (e.g., pass timeout=timeout or a DEFAULT_TIMEOUT constant) so
it becomes urllib.request.urlopen(req, data=body, timeout=timeout), and ensure
the timeout variable or constant is defined/imported in the scope where body and
req are used (same function handling the request).
- Around line 239-251: The current broad except in the
create_pull_request_reviewer call swallows all failures; replace the silent pass
with logging of the exception and context so real errors aren't masked: catch
Exception as e and call the provider's logger (e.g., self.logger.exception or
self.logger.warning) including pr_number, repository_id/self._repo_name,
project/self._project, reviewer_id="me" and vote to provide context; optionally
still suppress expected permission-related errors by checking error type/status
and only escalate/log unexpected failures.
- Around line 615-622: The WIQL query interpolates self._project directly;
update the code that constructs the Wiql instance (the Wiql(...) block) to
escape or sanitize self._project the same way used earlier in this file (reuse
the existing escape/sanitize helper or apply the same quoting/escaping logic)
before embedding it into the f-string, i.e. compute a safe_project variable from
self._project and use that in the query to prevent WIQL injection while
preserving existing behavior.
- Around line 344-371: The WIQL construction in azure_devops_provider (the block
that builds conditions and the resulting query string) directly interpolates
user inputs (filters.author, filters.assignee, filters.labels) and is vulnerable
to injection; fix it by sanitizing those values before embedding them — replace
any single quote characters in filters.author, filters.assignee and each label
in filters.labels with two single quotes (escaping for WIQL/SQL-style literals)
or otherwise normalize/validate inputs, then use the sanitized variables when
appending to conditions in the method that builds the query (the SELECT ... FROM
WorkItems WHERE {' AND '.join(conditions)} block).
- Around line 191-214: Remove the unused local variables target_version and
source_version (they are computed but never used) and change the exception
handling so you don't return an error string as if it were diff content: instead
log the exception (using self.logger or self._logger; include the
exception/traceback for debugging) and return an empty string, or alternatively
re-raise the exception if the caller should handle it—do not return f"Error
fetching diff: {e}".
- Around line 277-285: The PR creation code hard-codes deleteSourceBranch=True
in the GitPullRequest completion options, which should be configurable; update
the logic around the GitPullRequest instantiation (variable update_pr) to read a
configuration or parameter (e.g., a new function arg, environment variable, or
provider setting) that controls deleteSourceBranch and pass that boolean instead
of the literal True, defaulting to the current behavior if unset, and ensure any
callers that create or call the merge routine (where update_pr is constructed)
are updated to accept or propagate this new option.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (5)
apps/backend/.env.exampleapps/backend/requirements.txtapps/backend/runners/github/providers/__init__.pyapps/backend/runners/github/providers/azure_devops_provider.pyapps/backend/runners/github/providers/factory.py
🧰 Additional context used
📓 Path-based instructions (2)
apps/backend/.env*
📄 CodeRabbit inference engine (CLAUDE.md)
apps/backend/.env*: Enable Electron MCP for E2E testing by settingELECTRON_MCP_ENABLED=truein.envand starting the Electron app withnpm run dev
Configure memory system credentials inapps/backend/.envand validate withgraphiti_config.py
Files:
apps/backend/.env.example
apps/backend/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
apps/backend/**/*.py: Always use the Claude Agent SDK (claude-agent-sdkpackage) for all AI interactions, never use the Anthropic API directly
Use thecreate_client()function fromapps/backend/core/client.pyto instantiate Claude SDK clients, not directClaudeSDKClientinitialization
Files:
apps/backend/runners/github/providers/__init__.pyapps/backend/runners/github/providers/factory.pyapps/backend/runners/github/providers/azure_devops_provider.py
⚙️ CodeRabbit configuration file
apps/backend/**/*.py: Focus on Python best practices, type hints, and async patterns.
Check for proper error handling and security considerations.
Verify compatibility with Python 3.12+.
Files:
apps/backend/runners/github/providers/__init__.pyapps/backend/runners/github/providers/factory.pyapps/backend/runners/github/providers/azure_devops_provider.py
🧬 Code graph analysis (3)
apps/backend/runners/github/providers/__init__.py (1)
apps/backend/runners/github/providers/factory.py (4)
get_provider(42-118)register_provider(22-39)list_available_providers(121-135)is_provider_available(138-159)
apps/backend/runners/github/providers/factory.py (3)
apps/backend/runners/github/providers/azure_devops_provider.py (2)
repo(108-110)provider_type(104-105)apps/backend/runners/github/providers/github_provider.py (2)
repo(67-68)provider_type(63-64)apps/backend/runners/github/providers/protocol.py (3)
repo(186-188)ProviderType(17-24)provider_type(181-183)
apps/backend/runners/github/providers/azure_devops_provider.py (1)
apps/backend/runners/github/providers/protocol.py (4)
IssueData(66-87)PRFilters(141-152)ProviderType(17-24)ReviewData(111-122)
🪛 GitHub Actions: Lint
apps/backend/runners/github/providers/__init__.py
[error] 20-20: Ruff check failed: Import block is un-sorted or un-formatted.
🪛 GitHub Check: CodeQL
apps/backend/runners/github/providers/azure_devops_provider.py
[notice] 192-192: Unused local variable
Variable target_version is not used.
[notice] 193-193: Unused local variable
Variable source_version is not used.
[notice] 556-556: Empty except
'except' clause does nothing but pass and there is no explanatory comment.
[notice] 594-594: Empty except
'except' clause does nothing but pass and there is no explanatory comment.
🪛 GitHub Check: python
apps/backend/runners/github/providers/__init__.py
[failure] 20-32: Ruff (I001)
apps/backend/runners/github/providers/init.py:20:1: I001 Import block is un-sorted or un-formatted
apps/backend/runners/github/providers/azure_devops_provider.py
[failure] 732-734: Ruff (I001)
apps/backend/runners/github/providers/azure_devops_provider.py:732:9: I001 Import block is un-sorted or un-formatted
[failure] 702-705: Ruff (I001)
apps/backend/runners/github/providers/azure_devops_provider.py:702:9: I001 Import block is un-sorted or un-formatted
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: CodeQL (python)
- GitHub Check: CodeQL (javascript-typescript)
🔇 Additional comments (7)
apps/backend/.env.example (1)
111-136: LGTM!The Azure DevOps configuration block is well-documented, follows the established pattern of other provider sections, and clearly specifies required PAT scopes. The support for both Azure DevOps Services (cloud) and Azure DevOps Server (on-prem) via
ADO_INSTANCE_URLis a nice touch.apps/backend/runners/github/providers/azure_devops_provider.py (3)
27-70: LGTM!The dataclass initialization with lazy environment variable loading and required field validation is well-implemented. The fallback of
_repo_nameto_projectwhen not specified is a sensible default.
71-101: LGTM!The lazy connection initialization with clear ImportError messaging is well-designed. The client properties properly ensure connection before access.
757-853: LGTM!The helper methods for parsing ADO data structures into protocol-compliant dataclasses are well-implemented. The fallback behaviors (e.g.,
updated_atusingcreation_date,datetime.now()for missing timestamps) are reasonable and documented.apps/backend/runners/github/providers/__init__.py (1)
46-51: LGTM!The
__all__exports are properly updated to include the newAzureDevOpsProviderand factory utility functions.apps/backend/runners/github/providers/factory.py (2)
102-116: LGTM with a minor note.The Azure DevOps provider instantiation correctly extracts ADO-specific kwargs and maps them to the provider's private fields. The fallback logic for
_repo_name(line 113) handles the case whererepo_nameisn't provided by extracting from therepoparameter.One observation: using private field names (
_organization,_project) as constructor kwargs is unconventional for dataclasses, but it works correctly given the provider's implementation.
128-128: LGTM!The
list_available_providersandis_provider_availablefunctions are correctly updated to includeAZURE_DEVOPSas a built-in provider.Also applies to: 154-156
apps/backend/requirements.txt
Outdated
| # Azure DevOps Integration (optional - for ADO provider) | ||
| azure-devops>=7.1.0b4 | ||
| msrest>=0.7.1 |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🌐 Web query:
azure-devops python package PyPI latest stable version releases
💡 Result:
Latest PyPI release: azure-devops 7.1.0b4 (pre-release), published Nov 20, 2023. No newer non-prerelease (stable) version appears on PyPI. [1][2]
Sources:
[1] PyPI project page — azure-devops (release history).
[2] PyPI mirror listing for azure-devops.
Consider pinning azure-devops to ==7.1.0b4 or monitor for stable release.
The constraint azure-devops>=7.1.0b4 uses the latest available version on PyPI, which is currently in beta with no stable release published. While the loose constraint allows auto-upgrades to a future stable 7.1.0 when available, consider either pinning to ==7.1.0b4 explicitly for production stability or periodically checking for stable releases.
🤖 Prompt for AI Agents
In @apps/backend/requirements.txt around lines 19 - 21, The azure-devops
requirement is using a loose beta constraint; change the line referencing the
azure-devops package in apps/backend/requirements.txt from
"azure-devops>=7.1.0b4" to pin the beta explicitly as "azure-devops==7.1.0b4"
(or replace with a stable version when one is available) and add a short comment
nearby noting that this is a pinned beta and should be reviewed/updated when a
stable 7.1.x release is published; leave msrest as-is unless you choose to pin
it similarly.
| try: | ||
| self.git_client.create_pull_request_reviewer( | ||
| reviewer={ | ||
| "vote": vote, | ||
| }, | ||
| repository_id=self._repo_name, | ||
| pull_request_id=pr_number, | ||
| reviewer_id="me", # Current user | ||
| project=self._project, | ||
| ) | ||
| except Exception: | ||
| pass # Vote may fail if user is not a reviewer | ||
|
|
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Consider logging vote failures instead of silently ignoring them.
The exception is silently caught and ignored when setting the reviewer vote. While the comment explains the rationale, this could mask legitimate errors (e.g., network issues, permission problems).
Proposed improvement
try:
self.git_client.create_pull_request_reviewer(
reviewer={
"vote": vote,
},
repository_id=self._repo_name,
pull_request_id=pr_number,
reviewer_id="me", # Current user
project=self._project,
)
- except Exception:
- pass # Vote may fail if user is not a reviewer
+ except Exception as e:
+ # Vote may fail if user is not a reviewer
+ import logging
+ logging.debug(f"Could not set reviewer vote on PR #{pr_number}: {e}")🤖 Prompt for AI Agents
In @apps/backend/runners/github/providers/azure_devops_provider.py around lines
239 - 251, The current broad except in the create_pull_request_reviewer call
swallows all failures; replace the silent pass with logging of the exception and
context so real errors aren't masked: catch Exception as e and call the
provider's logger (e.g., self.logger.exception or self.logger.warning) including
pr_number, repository_id/self._repo_name, project/self._project,
reviewer_id="me" and vote to provide context; optionally still suppress expected
permission-related errors by checking error type/status and only escalate/log
unexpected failures.
| update_pr = GitPullRequest( | ||
| status="completed", | ||
| last_merge_source_commit=pr.last_merge_source_commit, | ||
| completion_options={ | ||
| "deleteSourceBranch": True, | ||
| "mergeStrategy": merge_strategy_map.get(merge_method, "squash"), | ||
| "mergeCommitMessage": commit_title, | ||
| }, | ||
| ) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Consider making deleteSourceBranch configurable.
The merge operation hard-codes deleteSourceBranch: True. This behavior may not be desired in all workflows—some teams prefer to keep source branches for history or debugging purposes.
Proposed fix
async def merge_pr(
self,
pr_number: int,
merge_method: str = "squash",
commit_title: str | None = None,
+ delete_source_branch: bool = True,
) -> bool:
"""Merge a pull request."""
try:
...
update_pr = GitPullRequest(
status="completed",
last_merge_source_commit=pr.last_merge_source_commit,
completion_options={
- "deleteSourceBranch": True,
+ "deleteSourceBranch": delete_source_branch,
"mergeStrategy": merge_strategy_map.get(merge_method, "squash"),
"mergeCommitMessage": commit_title,
},
)Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @apps/backend/runners/github/providers/azure_devops_provider.py around lines
277 - 285, The PR creation code hard-codes deleteSourceBranch=True in the
GitPullRequest completion options, which should be configurable; update the
logic around the GitPullRequest instantiation (variable update_pr) to read a
configuration or parameter (e.g., a new function arg, environment variable, or
provider setting) that controls deleteSourceBranch and pass that boolean instead
of the literal True, defaulting to the current behavior if unset, and ensure any
callers that create or call the merge routine (where update_pr is constructed)
are updated to accept or propagate this new option.
| except Exception: | ||
| pass |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Empty except blocks silently swallow errors.
Both apply_labels and remove_labels catch all exceptions and pass silently. This makes debugging difficult and leaves callers unaware of failures.
Proposed fix - add logging
except Exception:
- pass
+ import logging
+ logging.warning(f"Failed to apply labels to work item #{issue_or_pr_number}") except Exception:
- pass
+ import logging
+ logging.warning(f"Failed to remove labels from work item #{issue_or_pr_number}")Also applies to: 594-595
🧰 Tools
🪛 GitHub Check: CodeQL
[notice] 556-556: Empty except
'except' clause does nothing but pass and there is no explanatory comment.
🤖 Prompt for AI Agents
In @apps/backend/runners/github/providers/azure_devops_provider.py around lines
556 - 557, The empty except blocks in apply_labels and remove_labels silently
swallow errors; change them to catch Exception as e and log the exception (e.g.,
logger.exception or logger.error with the exception) including context such as
the function name (apply_labels/remove_labels), repo/pr identifiers and the
labels being applied/removed, so failures are visible; do not leave the block
blank—either re-raise after logging or return a clear failure result per the
surrounding API contract.
| import urllib.request | ||
| import urllib.parse | ||
| import json | ||
| import base64 | ||
|
|
||
| url = f"{self._instance_url}/{self._organization}/{self._project}/_apis{endpoint}" | ||
|
|
||
| if params: | ||
| url += "?" + urllib.parse.urlencode(params) | ||
|
|
||
| # Add API version if not present | ||
| if "api-version" not in url: | ||
| separator = "&" if "?" in url else "?" | ||
| url += f"{separator}api-version=7.1" | ||
|
|
||
| auth = base64.b64encode(f":{self._pat}".encode()).decode() | ||
|
|
||
| req = urllib.request.Request(url) | ||
| req.add_header("Authorization", f"Basic {auth}") | ||
| req.add_header("Content-Type", "application/json") | ||
|
|
||
| with urllib.request.urlopen(req) as response: | ||
| return json.loads(response.read().decode()) |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Add timeout to urllib.request.urlopen to prevent indefinite hangs.
The urlopen call has no timeout, which can cause the application to hang indefinitely if the Azure DevOps API is unresponsive.
Proposed fix
- with urllib.request.urlopen(req) as response:
+ with urllib.request.urlopen(req, timeout=30) as response:
return json.loads(response.read().decode())Also, the static analysis flagged unsorted imports. Consider moving imports to the top of the file or sorting them:
- import urllib.request
- import urllib.parse
import json
+ import urllib.parse
+ import urllib.request📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| import urllib.request | |
| import urllib.parse | |
| import json | |
| import base64 | |
| url = f"{self._instance_url}/{self._organization}/{self._project}/_apis{endpoint}" | |
| if params: | |
| url += "?" + urllib.parse.urlencode(params) | |
| # Add API version if not present | |
| if "api-version" not in url: | |
| separator = "&" if "?" in url else "?" | |
| url += f"{separator}api-version=7.1" | |
| auth = base64.b64encode(f":{self._pat}".encode()).decode() | |
| req = urllib.request.Request(url) | |
| req.add_header("Authorization", f"Basic {auth}") | |
| req.add_header("Content-Type", "application/json") | |
| with urllib.request.urlopen(req) as response: | |
| return json.loads(response.read().decode()) | |
| import urllib.request | |
| import urllib.parse | |
| import json | |
| import base64 | |
| url = f"{self._instance_url}/{self._organization}/{self._project}/_apis{endpoint}" | |
| if params: | |
| url += "?" + urllib.parse.urlencode(params) | |
| # Add API version if not present | |
| if "api-version" not in url: | |
| separator = "&" if "?" in url else "?" | |
| url += f"{separator}api-version=7.1" | |
| auth = base64.b64encode(f":{self._pat}".encode()).decode() | |
| req = urllib.request.Request(url) | |
| req.add_header("Authorization", f"Basic {auth}") | |
| req.add_header("Content-Type", "application/json") | |
| with urllib.request.urlopen(req, timeout=30) as response: | |
| return json.loads(response.read().decode()) |
🧰 Tools
🪛 GitHub Check: python
[failure] 702-705: Ruff (I001)
apps/backend/runners/github/providers/azure_devops_provider.py:702:9: I001 Import block is un-sorted or un-formatted
…dd WIQL injection protection, asyncio.to_thread wrapping, proper SDK models, and tighter dependency pinning
| """Fetch the diff for a pull request.""" | ||
| try: | ||
| # Get the PR to find the commits | ||
| pr = await asyncio.to_thread( |
Check notice
Code scanning / CodeQL
Unused local variable Note
| id=issue_or_pr_number, | ||
| project=self._project, | ||
| ) | ||
| except Exception: |
Check notice
Code scanning / CodeQL
Empty except Note
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 10
🤖 Fix all issues with AI agents
In @apps/backend/requirements.txt:
- Around line 19-21: The requirements file currently allows azure-devops to
float across beta releases using "azure-devops>=7.1.0b4,<8.0.0"; change the
constraint to pin the exact beta version by replacing that range with an exact
equality for the 7.1.0b4 release (i.e., use azure-devops==7.1.0b4) so builds are
reproducible; update any related CI/lockfile if present to match the pinned
azure-devops version.
In @apps/backend/runners/github/providers/azure_devops_provider.py:
- Around line 306-307: Replace the silent except block that currently does
"except Exception: pass" in the vote-related code with an exception handler that
captures the exception (e.g., except Exception as e) and logs the failure with
context (PR/commit id, reviewer id, and any action like "vote" or "set_vote")
using the module/class logger (e.g., self.logger or logger) — keep the existing
behavior of not raising after logging but ensure the log uses logger.exception
or logger.error with the exception details so network/permission errors are
visible for debugging.
- Around line 686-687: The empty except in remove_labels is swallowing errors;
change it to catch Exception as e and log the failure with the exception and
contextual details (e.g., issue/PR id, repo, labels) using the existing logger
(self.logger or logger) so label operation failures are visible for debugging;
do not suppress exceptions silently—either log and continue or re-raise if
callers expect errors.
- Around line 228-269: In fetch_pr_diff: remove the unused fetch of pr (the call
to self.git_client.get_pull_request) since pr is never used; keep the commits
check (commits variable) and bail with an empty string if no commits; safely
handle a missing or None changes.change_entries when building diff_lines; and do
not return an error string on exception—raise the exception (or let it
propagate) instead so callers can handle errors rather than receiving an invalid
diff string.
- Around line 385-386: The close_pr handler currently swallows all exceptions
(except Exception: return False); update the except block in close_pr to log the
failure before returning False by catching the exception as e and calling the
provider's logger (use the class logger used elsewhere in this file, e.g.,
self.logger or logger) with the PR context and the exception (use
logger.exception or logger.error(..., exc_info=True)), similar to how merge_pr
was fixed—include PR identifier/URL and the exception message/trace to aid
debugging.
- Line 21: Replace typing.List and typing.Optional with modern built-ins: change
the import line to use "from typing import Any" (remove List and Optional), then
update all type hints in this file (e.g., in functions/methods and class
signatures such as any occurrences inside AzureDevOpsProvider, get_* methods, or
helpers) replacing "List[X]" with "list[X]" and "Optional[X]" with "X | None".
Ensure any forward references or imports still resolve and run type checks after
the changes.
- Around line 555-556: The close_issue handler swallows all exceptions without
any logging; update the except block in the close_issue method of
AzureDevOpsProvider (or the close_issue function) to log the failure details
before returning False: catch Exception as e and call the module/class logger
(e.g., self.logger.exception(...) or self.logger.error(..., exc_info=True)) with
a short message that includes the issue id/context and the exception to aid
debugging, then return False as before.
- Around line 585-602: The current nested try/except swallows all errors; modify
both except blocks to capture the exception as a variable (e.g., "except
Exception as e") and log the full exception and context before continuing—use
the instance logger (self.logger.exception or self.logger.error with traceback)
or logging.exception if no instance logger, and include identifiers like
issue_or_pr_number, self._repo_name and self._project and whether it was the
work-item or PR-path; keep the fallback behavior (attempt PR comment) and still
return 0 on final failure after logging the error.
- Around line 359-360: The bare except block that returns False hides merge
errors; update the exception handler in the AzureDevOpsProvider merge routine
(the except Exception: return False) to log the failure before returning False —
call an appropriate logger (e.g., self.logger.exception or module-level
logger.exception) with a descriptive message like "Failed to merge PR" and
include the exception so the stacktrace is recorded, then return False as
before.
- Around line 778-787: The check_permissions function is a documented stub that
always returns "write" and is unused; update the function docstring or body to
include a clear TODO explaining it's a placeholder for future ADO permission
resolution, describe expected behavior (e.g., resolve project-level ADO
permissions or raise/not-used), and mark it as intentionally unused for now
(e.g., TODO: implement or integrate with permission checks) so reviewers know it
isn't an active authorization bypass; reference the async def
check_permissions(username: str) function and leave the current simplified
return in place until implementation.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (2)
apps/backend/requirements.txtapps/backend/runners/github/providers/azure_devops_provider.py
🧰 Additional context used
📓 Path-based instructions (1)
apps/backend/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
apps/backend/**/*.py: Always use the Claude Agent SDK (claude-agent-sdkpackage) for all AI interactions, never use the Anthropic API directly
Use thecreate_client()function fromapps/backend/core/client.pyto instantiate Claude SDK clients, not directClaudeSDKClientinitialization
Files:
apps/backend/runners/github/providers/azure_devops_provider.py
⚙️ CodeRabbit configuration file
apps/backend/**/*.py: Focus on Python best practices, type hints, and async patterns.
Check for proper error handling and security considerations.
Verify compatibility with Python 3.12+.
Files:
apps/backend/runners/github/providers/azure_devops_provider.py
🧬 Code graph analysis (1)
apps/backend/runners/github/providers/azure_devops_provider.py (1)
apps/backend/runners/github/providers/protocol.py (5)
IssueData(66-87)IssueFilters(126-137)PRFilters(141-152)ProviderType(17-24)ReviewData(111-122)
🪛 GitHub Check: CodeQL
apps/backend/runners/github/providers/azure_devops_provider.py
[notice] 232-232: Unused local variable
Variable pr is not used.
[notice] 594-594: Empty except
'except' clause does nothing but pass and there is no explanatory comment.
[notice] 646-646: Empty except
'except' clause does nothing but pass and there is no explanatory comment.
🪛 GitHub Check: python
apps/backend/runners/github/providers/azure_devops_provider.py
[failure] 315-315: Ruff (UP045)
apps/backend/runners/github/providers/azure_devops_provider.py:315:23: UP045 Use X | None for type annotations
[failure] 185-185: Ruff (UP006)
apps/backend/runners/github/providers/azure_devops_provider.py:185:71: UP006 Use list instead of List for type annotation
[failure] 185-185: Ruff (UP045)
apps/backend/runners/github/providers/azure_devops_provider.py:185:40: UP045 Use X | None for type annotations
[failure] 87-87: Ruff (UP045)
apps/backend/runners/github/providers/azure_devops_provider.py:87:20: UP045 Use X | None for type annotations
[failure] 86-86: Ruff (UP045)
apps/backend/runners/github/providers/azure_devops_provider.py:86:11: UP045 Use X | None for type annotations
[failure] 85-85: Ruff (UP045)
apps/backend/runners/github/providers/azure_devops_provider.py:85:17: UP045 Use X | None for type annotations
[failure] 84-84: Ruff (UP045)
apps/backend/runners/github/providers/azure_devops_provider.py:84:15: UP045 Use X | None for type annotations
[failure] 83-83: Ruff (UP045)
apps/backend/runners/github/providers/azure_devops_provider.py:83:20: UP045 Use X | None for type annotations
[failure] 21-21: Ruff (UP035)
apps/backend/runners/github/providers/azure_devops_provider.py:21:1: UP035 typing.List is deprecated, use list instead
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: CodeQL
- GitHub Check: CodeQL (javascript-typescript)
- GitHub Check: CodeQL (python)
🔇 Additional comments (4)
apps/backend/runners/github/providers/azure_devops_provider.py (4)
793-857: LGTM on API operations.Timeouts are properly set (30 seconds), and blocking calls are correctly wrapped with
asyncio.to_thread. The function-local imports are acceptable for these low-level utility methods.
862-958: LGTM on helper methods.The parsing logic handles ADO's data structures appropriately, with sensible fallbacks for missing data. The datetime parsing covers multiple input formats correctly.
646-647: Emptyexceptblocks silently swallow errors inapply_labels.Add logging to aid debugging when label operations fail.
Likely an incorrect or invalid review comment.
338-342: Consider makingdelete_source_branchconfigurable.The
delete_source_branch=Trueis hardcoded. Some workflows prefer to retain source branches for history or debugging.Likely an incorrect or invalid review comment.
| # Azure DevOps Integration (optional - for ADO provider) | ||
| azure-devops>=7.1.0b4,<8.0.0 | ||
| msrest>=0.7.1,<1.0.0 |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Pin azure-devops to exact beta version for production stability.
The azure-devops>=7.1.0b4,<8.0.0 constraint still allows auto-upgrades to future beta versions. Since 7.1.0b4 is a pre-release with no stable version available, consider pinning to ==7.1.0b4 for reproducible builds.
Suggested fix
-azure-devops>=7.1.0b4,<8.0.0
+azure-devops==7.1.0b4 # Pinned beta - update when stable 7.1.x is released📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| # Azure DevOps Integration (optional - for ADO provider) | |
| azure-devops>=7.1.0b4,<8.0.0 | |
| msrest>=0.7.1,<1.0.0 | |
| # Azure DevOps Integration (optional - for ADO provider) | |
| azure-devops==7.1.0b4 # Pinned beta - update when stable 7.1.x is released | |
| msrest>=0.7.1,<1.0.0 |
🤖 Prompt for AI Agents
In @apps/backend/requirements.txt around lines 19 - 21, The requirements file
currently allows azure-devops to float across beta releases using
"azure-devops>=7.1.0b4,<8.0.0"; change the constraint to pin the exact beta
version by replacing that range with an exact equality for the 7.1.0b4 release
(i.e., use azure-devops==7.1.0b4) so builds are reproducible; update any related
CI/lockfile if present to match the pinned azure-devops version.
| import re | ||
| from dataclasses import dataclass, field | ||
| from datetime import datetime, timezone | ||
| from typing import Any, List, Optional |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion | 🟠 Major
Use modern type annotation syntax.
Per Python 3.10+ and Ruff UP035/UP006 rules, use built-in list instead of typing.List and X | None instead of Optional[X].
Suggested fix for imports
-from typing import Any, List, Optional
+from typing import AnyThen update all type hints throughout the file:
Optional[X]→X | NoneList[X]→list[X]
Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 GitHub Check: python
[failure] 21-21: Ruff (UP035)
apps/backend/runners/github/providers/azure_devops_provider.py:21:1: UP035 typing.List is deprecated, use list instead
🤖 Prompt for AI Agents
In @apps/backend/runners/github/providers/azure_devops_provider.py at line 21,
Replace typing.List and typing.Optional with modern built-ins: change the import
line to use "from typing import Any" (remove List and Optional), then update all
type hints in this file (e.g., in functions/methods and class signatures such as
any occurrences inside AzureDevOpsProvider, get_* methods, or helpers) replacing
"List[X]" with "list[X]" and "Optional[X]" with "X | None". Ensure any forward
references or imports still resolve and run type checks after the changes.
| async def fetch_pr_diff(self, number: int) -> str: | ||
| """Fetch the diff for a pull request.""" | ||
| try: | ||
| # Get the PR to find the commits | ||
| pr = await asyncio.to_thread( | ||
| self.git_client.get_pull_request, | ||
| repository_id=self._repo_name, | ||
| pull_request_id=number, | ||
| project=self._project, | ||
| ) | ||
|
|
||
| # Get commits in the PR | ||
| commits = await asyncio.to_thread( | ||
| self.git_client.get_pull_request_commits, | ||
| repository_id=self._repo_name, | ||
| pull_request_id=number, | ||
| project=self._project, | ||
| ) | ||
|
|
||
| if not commits: | ||
| return "" | ||
|
|
||
| # Use the changes endpoint | ||
| changes = await asyncio.to_thread( | ||
| self.git_client.get_pull_request_iteration_changes, | ||
| repository_id=self._repo_name, | ||
| pull_request_id=number, | ||
| iteration_id=1, # First iteration | ||
| project=self._project, | ||
| ) | ||
|
|
||
| # Build a simple diff summary | ||
| diff_lines = [] | ||
| for change in changes.change_entries or []: | ||
| change_type = getattr(change, "change_type", "edit") | ||
| path = getattr(change.item, "path", "") if change.item else "" | ||
| diff_lines.append(f"--- {change_type}: {path}") | ||
|
|
||
| return "\n".join(diff_lines) | ||
|
|
||
| except Exception as e: | ||
| return f"Error fetching diff: {e}" |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Remove unused variable and fix error handling.
- Line 232:
pris fetched but never used (flagged by CodeQL). - Line 269: Returning an error string as diff content is problematic—callers may treat it as valid diff output.
Suggested fix
async def fetch_pr_diff(self, number: int) -> str:
"""Fetch the diff for a pull request."""
try:
- # Get the PR to find the commits
- pr = await asyncio.to_thread(
- self.git_client.get_pull_request,
- repository_id=self._repo_name,
- pull_request_id=number,
- project=self._project,
- )
-
# Get commits in the PR
commits = await asyncio.to_thread(
self.git_client.get_pull_request_commits,- except Exception as e:
- return f"Error fetching diff: {e}"
+ except Exception:
+ # Return empty diff on failure; caller should handle missing diff gracefully
+ return ""🧰 Tools
🪛 GitHub Check: CodeQL
[notice] 232-232: Unused local variable
Variable pr is not used.
🤖 Prompt for AI Agents
In @apps/backend/runners/github/providers/azure_devops_provider.py around lines
228 - 269, In fetch_pr_diff: remove the unused fetch of pr (the call to
self.git_client.get_pull_request) since pr is never used; keep the commits check
(commits variable) and bail with an empty string if no commits; safely handle a
missing or None changes.change_entries when building diff_lines; and do not
return an error string on exception—raise the exception (or let it propagate)
instead so callers can handle errors rather than receiving an invalid diff
string.
| except Exception: | ||
| pass # Vote may fail if user is not a reviewer |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Log vote failures instead of silently ignoring.
The exception is silently swallowed. While the comment explains the rationale, this masks legitimate errors (network issues, permission problems).
🤖 Prompt for AI Agents
In @apps/backend/runners/github/providers/azure_devops_provider.py around lines
306 - 307, Replace the silent except block that currently does "except
Exception: pass" in the vote-related code with an exception handler that
captures the exception (e.g., except Exception as e) and logs the failure with
context (PR/commit id, reviewer id, and any action like "vote" or "set_vote")
using the module/class logger (e.g., self.logger or logger) — keep the existing
behavior of not raising after logging but ensure the log uses logger.exception
or logger.error with the exception details so network/permission errors are
visible for debugging.
| except Exception: | ||
| return False |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Add logging for merge failures.
Returning False without logging makes debugging difficult. Callers won't know why the merge failed.
Suggested fix
- except Exception:
- return False
+ except Exception as e:
+ import logging
+ logging.warning(f"Failed to merge PR #{pr_number}: {e}")
+ return False🤖 Prompt for AI Agents
In @apps/backend/runners/github/providers/azure_devops_provider.py around lines
359 - 360, The bare except block that returns False hides merge errors; update
the exception handler in the AzureDevOpsProvider merge routine (the except
Exception: return False) to log the failure before returning False — call an
appropriate logger (e.g., self.logger.exception or module-level
logger.exception) with a descriptive message like "Failed to merge PR" and
include the exception so the stacktrace is recorded, then return False as
before.
| except Exception: | ||
| return False |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Add logging for close_pr failures.
Same concern as merge_pr—silent failures hinder debugging.
Suggested fix
- except Exception:
- return False
+ except Exception as e:
+ import logging
+ logging.warning(f"Failed to close PR #{pr_number}: {e}")
+ return False📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| except Exception: | |
| return False | |
| except Exception as e: | |
| import logging | |
| logging.warning(f"Failed to close PR #{pr_number}: {e}") | |
| return False |
🤖 Prompt for AI Agents
In @apps/backend/runners/github/providers/azure_devops_provider.py around lines
385 - 386, The close_pr handler currently swallows all exceptions (except
Exception: return False); update the except block in close_pr to log the failure
before returning False by catching the exception as e and calling the provider's
logger (use the class logger used elsewhere in this file, e.g., self.logger or
logger) with the PR context and the exception (use logger.exception or
logger.error(..., exc_info=True)), similar to how merge_pr was fixed—include PR
identifier/URL and the exception message/trace to aid debugging.
| except Exception: | ||
| return False |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Add logging for close_issue failures.
Consistent with other operations, failures should be logged for debugging.
Suggested fix
- except Exception:
- return False
+ except Exception as e:
+ import logging
+ logging.warning(f"Failed to close work item #{number}: {e}")
+ return False🤖 Prompt for AI Agents
In @apps/backend/runners/github/providers/azure_devops_provider.py around lines
555 - 556, The close_issue handler swallows all exceptions without any logging;
update the except block in the close_issue method of AzureDevOpsProvider (or the
close_issue function) to log the failure details before returning False: catch
Exception as e and call the module/class logger (e.g.,
self.logger.exception(...) or self.logger.error(..., exc_info=True)) with a
short message that includes the issue id/context and the exception to aid
debugging, then return False as before.
| except Exception: | ||
| # Try as PR comment | ||
| try: | ||
| from azure.devops.v7_1.git.models import Comment, CommentThread | ||
|
|
||
| comment = Comment(content=body) | ||
| thread = CommentThread(comments=[comment], status="active") | ||
|
|
||
| result = await asyncio.to_thread( | ||
| self.git_client.create_thread, | ||
| comment_thread=thread, | ||
| repository_id=self._repo_name, | ||
| pull_request_id=issue_or_pr_number, | ||
| project=self._project, | ||
| ) | ||
| return result.id if result else 0 | ||
| except Exception: | ||
| return 0 |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Improve error handling in add_comment fallback logic.
The nested try-except structure silently swallows all errors. While the fallback from work item to PR comment is intentional, complete silence makes troubleshooting impossible.
Suggested fix
except Exception:
# Try as PR comment
try:
from azure.devops.v7_1.git.models import Comment, CommentThread
comment = Comment(content=body)
thread = CommentThread(comments=[comment], status="active")
result = await asyncio.to_thread(
self.git_client.create_thread,
comment_thread=thread,
repository_id=self._repo_name,
pull_request_id=issue_or_pr_number,
project=self._project,
)
return result.id if result else 0
- except Exception:
+ except Exception as e:
+ import logging
+ logging.debug(f"Failed to add comment to #{issue_or_pr_number}: {e}")
return 0🧰 Tools
🪛 GitHub Check: CodeQL
[notice] 594-594: Empty except
'except' clause does nothing but pass and there is no explanatory comment.
🤖 Prompt for AI Agents
In @apps/backend/runners/github/providers/azure_devops_provider.py around lines
585 - 602, The current nested try/except swallows all errors; modify both except
blocks to capture the exception as a variable (e.g., "except Exception as e")
and log the full exception and context before continuing—use the instance logger
(self.logger.exception or self.logger.error with traceback) or logging.exception
if no instance logger, and include identifiers like issue_or_pr_number,
self._repo_name and self._project and whether it was the work-item or PR-path;
keep the fallback behavior (attempt PR comment) and still return 0 on final
failure after logging the error.
| except Exception: | ||
| pass |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Empty except blocks silently swallow errors in remove_labels.
Add logging to aid debugging when label operations fail.
🤖 Prompt for AI Agents
In @apps/backend/runners/github/providers/azure_devops_provider.py around lines
686 - 687, The empty except in remove_labels is swallowing errors; change it to
catch Exception as e and log the failure with the exception and contextual
details (e.g., issue/PR id, repo, labels) using the existing logger (self.logger
or logger) so label operation failures are visible for debugging; do not
suppress exceptions silently—either log and continue or re-raise if callers
expect errors.
| async def check_permissions(self, username: str) -> str: | ||
| """ | ||
| Check a user's permission level on the repository. | ||
| Note: ADO permissions are more complex and project-based. | ||
| This returns a simplified permission level. | ||
| """ | ||
| # ADO doesn't have a simple permission check like GitHub | ||
| # Return "write" as default for authenticated users | ||
| return "write" |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Script executed:
rg -n -C3 'check_permissions' --type=py -g '!*test*'Repository: AndyMik90/Auto-Claude
Length of output: 2044
🏁 Script executed:
rg -n 'await\s+.*\.check_permissions\(|\.check_permissions\(' --type=py -g '!*test*'Repository: AndyMik90/Auto-Claude
Length of output: 47
🏁 Script executed:
rg -n 'check_permissions' --type=py -g '!*test*' -B2 -A5Repository: AndyMik90/Auto-Claude
Length of output: 2374
🏁 Script executed:
rg -n 'check_permissions' . --type-list | head -20 && rg -n '\.check_permissions\(|check_permissions\s*\(' . -B1 -A1 | grep -v 'def check_permissions' | grep -v '.pyc'Repository: AndyMik90/Auto-Claude
Length of output: 945
🏁 Script executed:
# Search for actual method calls (excluding definitions)
rg 'check_permissions\(' . --type=py | grep -v 'def check_permissions' | grep -v 'async def check_permissions'Repository: AndyMik90/Auto-Claude
Length of output: 47
🏁 Script executed:
# Check if check_permissions is referenced in any configuration, comments, or future plans
rg 'check_permissions' . --type-list=all -i | head -30Repository: AndyMik90/Auto-Claude
Length of output: 454
🏁 Script executed:
# Search for all references to check_permissions without type restrictions
rg 'check_permissions' . -i | grep -v '.pyc' | grep -v '__pycache__'Repository: AndyMik90/Auto-Claude
Length of output: 423
🏁 Script executed:
# Check for TODO/FIXME comments in the provider files
rg -n 'TODO|FIXME|XXX' apps/backend/runners/github/providers/ --type=py | head -20Repository: AndyMik90/Auto-Claude
Length of output: 47
🏁 Script executed:
# Search for endpoints or functions that might call check_permissions
rg -n 'permission' apps/backend/runners/github/providers/ --type=py | grep -i 'check\|call\|invoke'Repository: AndyMik90/Auto-Claude
Length of output: 926
🏁 Script executed:
# Look at the context around the protocol definition
cat -n apps/backend/runners/github/providers/protocol.py | sed -n '430,455p'Repository: AndyMik90/Auto-Claude
Length of output: 870
🏁 Script executed:
# Check the entire protocol interface to understand its purpose
cat -n apps/backend/runners/github/providers/protocol.py | head -50Repository: AndyMik90/Auto-Claude
Length of output: 1551
🏁 Script executed:
# Verify if any other providers also have check_permissions stubs
find apps/backend/runners/github/providers -name '*_provider.py' -type f | xargs -I {} basename {}Repository: AndyMik90/Auto-Claude
Length of output: 108
🏁 Script executed:
# Let's also verify the azure_devops_provider file structure around this method
cat -n apps/backend/runners/github/providers/azure_devops_provider.py | sed -n '775,800p'Repository: AndyMik90/Auto-Claude
Length of output: 1200
🏁 Script executed:
# Check if there are any other stub methods in the azure provider that follow this pattern
grep -n 'return "' apps/backend/runners/github/providers/azure_devops_provider.py | head -20Repository: AndyMik90/Auto-Claude
Length of output: 212
check_permissions is a documented stub that is currently unused in the codebase.
The method's limitations are already clearly documented (lines 782-786), explaining that ADO permissions are more complex and this returns a simplified default. Since this method is not called anywhere in the codebase, there is no active authorization bypass risk. If this is intended for future use, consider adding a TODO comment clarifying the implementation status and expected behavior.
🤖 Prompt for AI Agents
In @apps/backend/runners/github/providers/azure_devops_provider.py around lines
778 - 787, The check_permissions function is a documented stub that always
returns "write" and is unused; update the function docstring or body to include
a clear TODO explaining it's a placeholder for future ADO permission resolution,
describe expected behavior (e.g., resolve project-level ADO permissions or
raise/not-used), and mark it as intentionally unused for now (e.g., TODO:
implement or integrate with permission checks) so reviewers know it isn't an
active authorization bypass; reference the async def check_permissions(username:
str) function and leave the current simplified return in place until
implementation.
…ss - Add ADO IPC channel constants for work items, PRs, and auto-fix - Create ADO types, utils, work item handlers, and PR handlers - Register ADO handlers in main process IPC setup - Credentials stay in main process (PAT from .env), not exposed to renderer
| // ADO uses Basic auth with PAT (empty username, PAT as password) | ||
| const auth = Buffer.from(`:${config.pat}`).toString('base64'); | ||
|
|
||
| const response = await fetch(url, { |
Check warning
Code scanning / CodeQL
File data in outbound network request Medium
file data
| headers: { | ||
| 'Accept': 'application/json', | ||
| 'Authorization': `Basic ${auth}`, | ||
| 'Content-Type': 'application/json', | ||
| ...options.headers, | ||
| }, |
Check warning
Code scanning / CodeQL
File data in outbound network request Medium
file data
| const url = buildADOApiUrl(config, endpoint); | ||
| const auth = Buffer.from(`:${config.pat}`).toString('base64'); | ||
|
|
||
| const response = await fetch(url, { |
Check warning
Code scanning / CodeQL
File data in outbound network request Medium
file data
| headers: { | ||
| 'Accept': 'application/json', | ||
| 'Authorization': `Basic ${auth}`, | ||
| 'Content-Type': 'application/json-patch+json', // Required for work item updates | ||
| }, |
Check warning
Code scanning / CodeQL
File data in outbound network request Medium
file data
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @apps/frontend/src/main/ipc-handlers/ado/pr-handlers.ts:
- Around line 159-167: The diff summary returned by the handler currently only
lists change type and path (built from changesResult.changeEntries into
diffLines) instead of actual file content diffs; either add a clear inline
comment near the construction of diffLines (and in the function handling PR
diffs) stating this is an intentional lightweight summary for performance and
not a full content diff, or implement full diffs by fetching file
contents/patches for each change and include them in the returned string;
reference the changesResult.changeEntries array and the diffLines construction
to locate where to add the comment or expand the logic.
In @apps/frontend/src/main/ipc-handlers/ado/utils.ts:
- Around line 31-37: The return object includes a redundant fallback for
repoName; remove the "|| ''" and return repoName directly since repoName was
earlier set from vars['ADO_REPO_NAME'] || projectName and projectName is
validated truthy. Update the return in the function that builds the ADO config
to use repoName (not repoName || '') and keep the other fields (organization,
project: projectName, pat, instanceUrl) unchanged.
- Around line 117-125: The current sanitizeWiqlString function triggers the
no-control-regex lint rule due to literal control-character escapes in the regex
used to strip control chars; update sanitizeWiqlString to avoid the
control-regex by either (A) replacing the problematic regex with a
function-based sanitization that iterates characters (e.g.,
Array.from(value).filter(ch => { const c = ch.charCodeAt(0); return !(c >= 0 &&
c <= 0x1f) && !(c >= 0x7f && c <= 0x9f); }).join('')) before slicing, or (B) if
you must keep the regex, add an inline eslint disable comment just above the
regex line to suppress no-control-regex for that single occurrence; apply the
change inside sanitizeWiqlString so the control characters are removed without
causing the linter/pipeline failure.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (7)
apps/frontend/src/main/ipc-handlers/ado/index.tsapps/frontend/src/main/ipc-handlers/ado/pr-handlers.tsapps/frontend/src/main/ipc-handlers/ado/types.tsapps/frontend/src/main/ipc-handlers/ado/utils.tsapps/frontend/src/main/ipc-handlers/ado/workitem-handlers.tsapps/frontend/src/main/ipc-handlers/index.tsapps/frontend/src/shared/constants/ipc.ts
🧰 Additional context used
📓 Path-based instructions (3)
apps/frontend/src/**/*.{ts,tsx,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Always use i18n translation keys for all user-facing text in the frontend instead of hardcoded strings
Files:
apps/frontend/src/main/ipc-handlers/index.tsapps/frontend/src/main/ipc-handlers/ado/types.tsapps/frontend/src/main/ipc-handlers/ado/index.tsapps/frontend/src/main/ipc-handlers/ado/pr-handlers.tsapps/frontend/src/shared/constants/ipc.tsapps/frontend/src/main/ipc-handlers/ado/workitem-handlers.tsapps/frontend/src/main/ipc-handlers/ado/utils.ts
apps/frontend/src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use
useTranslation()hook with namespace prefixes (e.g., 'navigation:items.key') for accessing translation strings in React components
Files:
apps/frontend/src/main/ipc-handlers/index.tsapps/frontend/src/main/ipc-handlers/ado/types.tsapps/frontend/src/main/ipc-handlers/ado/index.tsapps/frontend/src/main/ipc-handlers/ado/pr-handlers.tsapps/frontend/src/shared/constants/ipc.tsapps/frontend/src/main/ipc-handlers/ado/workitem-handlers.tsapps/frontend/src/main/ipc-handlers/ado/utils.ts
apps/frontend/**/*.{ts,tsx}
⚙️ CodeRabbit configuration file
apps/frontend/**/*.{ts,tsx}: Review React patterns and TypeScript type safety.
Check for proper state management and component composition.
Files:
apps/frontend/src/main/ipc-handlers/index.tsapps/frontend/src/main/ipc-handlers/ado/types.tsapps/frontend/src/main/ipc-handlers/ado/index.tsapps/frontend/src/main/ipc-handlers/ado/pr-handlers.tsapps/frontend/src/shared/constants/ipc.tsapps/frontend/src/main/ipc-handlers/ado/workitem-handlers.tsapps/frontend/src/main/ipc-handlers/ado/utils.ts
🧬 Code graph analysis (4)
apps/frontend/src/main/ipc-handlers/index.ts (1)
apps/frontend/src/main/ipc-handlers/ado/index.ts (1)
registerADOHandlers(16-19)
apps/frontend/src/main/ipc-handlers/ado/types.ts (1)
apps/frontend/src/main/ipc-handlers/ado/index.ts (3)
ADOConfig(23-23)ADOWorkItem(23-23)ADOPullRequest(23-23)
apps/frontend/src/main/ipc-handlers/ado/pr-handlers.ts (2)
apps/frontend/src/main/ipc-handlers/ado/types.ts (2)
ADOAPIPullRequest(57-84)ADOPullRequest(126-153)apps/frontend/src/main/ipc-handlers/ado/utils.ts (4)
normalizePRState(138-148)extractBranchName(154-156)getADOConfig(14-41)adoFetch(56-82)
apps/frontend/src/main/ipc-handlers/ado/utils.ts (2)
apps/frontend/src/shared/types/project.ts (1)
Project(5-13)apps/frontend/src/main/ipc-handlers/ado/types.ts (1)
ADOConfig(5-11)
🪛 Biome (2.1.2)
apps/frontend/src/main/ipc-handlers/ado/utils.ts
[error] 122-122: Unexpected control character in a regular expression.
Control characters are unusual and potentially incorrect inputs, so they are disallowed.
(lint/suspicious/noControlCharactersInRegex)
[error] 122-122: Unexpected control character in a regular expression.
Control characters are unusual and potentially incorrect inputs, so they are disallowed.
(lint/suspicious/noControlCharactersInRegex)
🪛 GitHub Actions: CI
apps/frontend/src/main/ipc-handlers/ado/utils.ts
[error] 122-122: no-control-regex: Unexpected control character(s) in regular expression: \x00, \x1f
🪛 GitHub Check: test-frontend
apps/frontend/src/main/ipc-handlers/ado/utils.ts
[failure] 122-122:
Unexpected control character(s) in regular expression: \x00, \x1f
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: CodeQL (javascript-typescript)
- GitHub Check: CodeQL (python)
🔇 Additional comments (8)
apps/frontend/src/shared/constants/ipc.ts (1)
514-555: LGTM!The ADO IPC channel constants are well-organized and follow the existing naming conventions. The grouping (work items, PRs, events, auto-fix) mirrors the patterns established for GitHub and GitLab integrations.
apps/frontend/src/main/ipc-handlers/index.ts (1)
95-96: LGTM!The ADO handlers are correctly registered in the IPC setup flow. The parameterless registration aligns with the current ADO handler implementation that doesn't require
agentManagerorgetMainWindowdependencies.apps/frontend/src/main/ipc-handlers/ado/index.ts (1)
1-23: LGTM!Clean module organization with clear documentation. The entry point correctly aggregates work item and PR handler registrations, and appropriately re-exports utilities and types for external use.
apps/frontend/src/main/ipc-handlers/ado/types.ts (1)
98-153: LGTM!The normalized types (
ADOWorkItem,ADOPullRequest) provide good GitHub/GitLab compatibility with appropriate aliases (numberforid). The vote values documentation (line 145) is helpful for understanding the ADO review system.apps/frontend/src/main/ipc-handlers/ado/workitem-handlers.ts (2)
21-57: LGTM!The
transformWorkItemfunction correctly maps ADO API fields to the normalized application format. The handling of optional fields (assignees, tags) and theclosedAtapproximation usingChangedDateare reasonable for ADO's data model.
107-149: LGTM!The WIQL query construction properly uses
sanitizeWiqlStringfor the project name to prevent injection. The state filtering logic correctly maps to ADO's work item states, and the batch fetch with a 200-item limit is a sensible safeguard.apps/frontend/src/main/ipc-handlers/ado/pr-handlers.ts (1)
20-49: LGTM!Good transformation logic with appropriate handling of ADO-specific fields. The comment on line 44 helpfully documents the ADO limitation regarding
updatedAt.apps/frontend/src/main/ipc-handlers/ado/utils.ts (1)
56-82: LGTM!The
adoFetchimplementation correctly handles Basic authentication with PAT and provides comprehensive error messages including status and response body.
| // Build a summary of changes | ||
| const diffLines: string[] = []; | ||
| for (const change of changesResult.changeEntries || []) { | ||
| const changeType = change.changeType || 'edit'; | ||
| const path = change.item?.path || ''; | ||
| diffLines.push(`--- ${changeType}: ${path}`); | ||
| } | ||
|
|
||
| return { success: true, data: diffLines.join('\n') }; |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Consider documenting the diff summary limitation.
The current implementation returns only a summary of changed files (change type and path) rather than actual file content diffs. This differs from typical GitHub/GitLab PR diff behavior. If this is intentional for performance or summary purposes, consider adding a comment explaining this design decision. Otherwise, users expecting full diffs may be surprised.
🤖 Prompt for AI Agents
In @apps/frontend/src/main/ipc-handlers/ado/pr-handlers.ts around lines 159 -
167, The diff summary returned by the handler currently only lists change type
and path (built from changesResult.changeEntries into diffLines) instead of
actual file content diffs; either add a clear inline comment near the
construction of diffLines (and in the function handling PR diffs) stating this
is an intentional lightweight summary for performance and not a full content
diff, or implement full diffs by fetching file contents/patches for each change
and include them in the returned string; reference the
changesResult.changeEntries array and the diffLines construction to locate where
to add the comment or expand the logic.
| return { | ||
| organization, | ||
| project: projectName, | ||
| repoName: repoName || '', | ||
| pat, | ||
| instanceUrl, | ||
| }; |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Minor: Redundant fallback for repoName.
Line 34 uses repoName || '', but repoName is already assigned a fallback value on line 26 (vars['ADO_REPO_NAME'] || projectName). Since projectName is validated as truthy on line 29, repoName will never be falsy at this point.
♻️ Simplify redundant fallback
return {
organization,
project: projectName,
- repoName: repoName || '',
+ repoName,
pat,
instanceUrl,
};📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| return { | |
| organization, | |
| project: projectName, | |
| repoName: repoName || '', | |
| pat, | |
| instanceUrl, | |
| }; | |
| return { | |
| organization, | |
| project: projectName, | |
| repoName, | |
| pat, | |
| instanceUrl, | |
| }; |
🤖 Prompt for AI Agents
In @apps/frontend/src/main/ipc-handlers/ado/utils.ts around lines 31 - 37, The
return object includes a redundant fallback for repoName; remove the "|| ''" and
return repoName directly since repoName was earlier set from
vars['ADO_REPO_NAME'] || projectName and projectName is validated truthy. Update
the return in the function that builds the ADO config to use repoName (not
repoName || '') and keep the other fields (organization, project: projectName,
pat, instanceUrl) unchanged.
| export function sanitizeWiqlString(value: string): string { | ||
| if (!value) return ''; | ||
| // Escape single quotes (WIQL string delimiter) | ||
| let sanitized = value.replace(/'/g, "''"); | ||
| // Remove control characters | ||
| sanitized = sanitized.replace(/[\x00-\x1f\x7f-\x9f]/g, ''); | ||
| // Limit length | ||
| return sanitized.slice(0, 500); | ||
| } |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fix the control character regex to resolve pipeline failure.
The regex on line 122 triggers the no-control-regex lint rule, causing CI to fail. While the intent is correct (sanitizing control characters), the linter flags literal control character escapes.
Use a character code-based approach or disable the rule for this specific line:
🐛 Proposed fix using eslint-disable comment
export function sanitizeWiqlString(value: string): string {
if (!value) return '';
// Escape single quotes (WIQL string delimiter)
let sanitized = value.replace(/'/g, "''");
// Remove control characters
+ // eslint-disable-next-line no-control-regex
sanitized = sanitized.replace(/[\x00-\x1f\x7f-\x9f]/g, '');
// Limit length
return sanitized.slice(0, 500);
}Alternatively, use a function-based approach to avoid the regex entirely:
♻️ Alternative: Function-based sanitization
export function sanitizeWiqlString(value: string): string {
if (!value) return '';
// Escape single quotes (WIQL string delimiter)
let sanitized = value.replace(/'/g, "''");
// Remove control characters (0x00-0x1f and 0x7f-0x9f)
- sanitized = sanitized.replace(/[\x00-\x1f\x7f-\x9f]/g, '');
+ sanitized = Array.from(sanitized)
+ .filter(char => {
+ const code = char.charCodeAt(0);
+ return !((code >= 0x00 && code <= 0x1f) || (code >= 0x7f && code <= 0x9f));
+ })
+ .join('');
// Limit length
return sanitized.slice(0, 500);
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| export function sanitizeWiqlString(value: string): string { | |
| if (!value) return ''; | |
| // Escape single quotes (WIQL string delimiter) | |
| let sanitized = value.replace(/'/g, "''"); | |
| // Remove control characters | |
| sanitized = sanitized.replace(/[\x00-\x1f\x7f-\x9f]/g, ''); | |
| // Limit length | |
| return sanitized.slice(0, 500); | |
| } | |
| export function sanitizeWiqlString(value: string): string { | |
| if (!value) return ''; | |
| // Escape single quotes (WIQL string delimiter) | |
| let sanitized = value.replace(/'/g, "''"); | |
| // Remove control characters (0x00-0x1f and 0x7f-0x9f) | |
| sanitized = Array.from(sanitized) | |
| .filter(char => { | |
| const code = char.charCodeAt(0); | |
| return !((code >= 0x00 && code <= 0x1f) || (code >= 0x7f && code <= 0x9f)); | |
| }) | |
| .join(''); | |
| // Limit length | |
| return sanitized.slice(0, 500); | |
| } |
🧰 Tools
🪛 Biome (2.1.2)
[error] 122-122: Unexpected control character in a regular expression.
Control characters are unusual and potentially incorrect inputs, so they are disallowed.
(lint/suspicious/noControlCharactersInRegex)
[error] 122-122: Unexpected control character in a regular expression.
Control characters are unusual and potentially incorrect inputs, so they are disallowed.
(lint/suspicious/noControlCharactersInRegex)
🪛 GitHub Actions: CI
[error] 122-122: no-control-regex: Unexpected control character(s) in regular expression: \x00, \x1f
🪛 GitHub Check: test-frontend
[failure] 122-122:
Unexpected control character(s) in regular expression: \x00, \x1f
🤖 Prompt for AI Agents
In @apps/frontend/src/main/ipc-handlers/ado/utils.ts around lines 117 - 125, The
current sanitizeWiqlString function triggers the no-control-regex lint rule due
to literal control-character escapes in the regex used to strip control chars;
update sanitizeWiqlString to avoid the control-regex by either (A) replacing the
problematic regex with a function-based sanitization that iterates characters
(e.g., Array.from(value).filter(ch => { const c = ch.charCodeAt(0); return !(c
>= 0 && c <= 0x1f) && !(c >= 0x7f && c <= 0x9f); }).join('')) before slicing, or
(B) if you must keep the regex, add an inline eslint disable comment just above
the regex line to suppress no-control-regex for that single occurrence; apply
the change inside sanitizeWiqlString so the control characters are removed
without causing the linter/pipeline failure.
- Add ADOSetupModal.tsx for configuring ADO connection (org, project, PAT) - Add ADOWorkItems.tsx for listing and viewing work items - Add ado-api.ts preload module with ADOAPI interface and factory - Wire ADO API into ElectronAPI (window.electronAPI.ado.*) - Add testADOConnection IPC handler to test credentials before saving - Add ADO_TEST_CONNECTION channel constant - Export ADO components from index UI features: - Connection test flow with PAT validation - Work items list with search and state filtering - Work item detail view with metadata and description - Support for on-premises ADO Server URLs
| const [isOnPrem, setIsOnPrem] = useState(false); | ||
| const [isTesting, setIsTesting] = useState(false); | ||
| const [error, setError] = useState<string | null>(null); | ||
| const [testSuccess, setTestSuccess] = useState(false); |
Check notice
Code scanning / CodeQL
Unused variable, import, function or class Note
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 10
🤖 Fix all issues with AI agents
In @apps/frontend/src/main/ipc-handlers/ado/workitem-handlers.ts:
- Around line 173-178: The current code truncates to 200 items via const ids =
queryResult.workItems.slice(0, 200)... which hides results for large backlogs;
make the batch size configurable (introduce a BATCH_SIZE constant or add a
config.batchSize/env var and replace the hardcoded 200), or implement pagination
by repeatedly calling adoFetch(`/wit/workitems?ids=...`) in chunks of BATCH_SIZE
until all ids from queryResult.workItems are fetched; additionally return or
attach a totalCount (e.g., from queryResult or aggregate) alongside
workItemsResult so callers can detect truncation.
In @apps/frontend/src/preload/api/modules/ado-api.ts:
- Around line 194-198: The callbacks use overly broad unknown types; define
precise result interfaces (e.g., ADOInvestigationCompleteResult and
ADOPRReviewCompleteResult) matching the IPC payload shape and replace the
unknown type in onADOInvestigationComplete and onADOPRReviewComplete signatures
with those interfaces; update the createIpcListener generic usage so the
listener callback is strongly typed and adjust any related exports/imports to
use the new types.
- Around line 8-34: The ADOWorkItem interface is duplicated; extract the
ADOWorkItem type into a single shared module and export it, then replace the
local interface in both the preload module containing ADOWorkItem and the IPC
handler that currently duplicates it with an import of that shared ADOWorkItem;
ensure the shared export includes all fields (id, number, title, state,
workItemType, tags, assignees, author, priority, iteration, areaPath, createdAt,
updatedAt, closedAt, url, htmlUrl) and update the import statements in the files
that referenced the old local definitions.
In @apps/frontend/src/renderer/components/ADOSetupModal.tsx:
- Around line 142-326: The renderCredentialsStep JSX uses hardcoded user-facing
strings; update the ADOSetupModal component to use i18n by importing and calling
useTranslation (with the appropriate namespace) and replace all visible text in
renderCredentialsStep—titles, descriptions, labels, placeholders, button text,
help link text, PAT guidance, optional hints and error messages (e.g., "Connect
to Azure DevOps", "Configure your Azure DevOps connection...", "You'll need a
Personal Access Token (PAT)", "Organization *", "Project *", "Personal Access
Token *", "Save Without Testing", "Test & Save", "Skip for now", and the error
string)—with t('...') keys; ensure keys follow the project's namespace/prefix
convention and update any template strings or concatenations to use t with
interpolation where needed (refer to renderCredentialsStep,
handleSaveWithoutTest, testConnection and the error state usages to locate all
occurrences).
- Line 29: The ADOSetupModalProps declares and the ADOSetupModal component
destructures a project prop that is never used; either remove project from the
ADOSetupModalProps interface and from the component destructuring, or use it to
pre-fill the adoProject state or display project context. Update the interface
ADOSetupModalProps and the component signature to drop project if unused, or set
initial state for adoProject (or render project info) using the project value so
the prop is actually consumed (ensure references to project are removed if you
choose deletion to avoid linter errors).
In @apps/frontend/src/renderer/components/ADOWorkItems.tsx:
- Around line 276-282: The button rendering inside ADOWorkItems (the element
with key={wi.id} that calls setSelectedWorkItem onClick and uses isSelected for
styling) lacks an explicit type and will default to type="submit"; update that
button to include type="button" to prevent unintended form submissions when this
component is nested in a form.
- Around line 41-363: The ADOWorkItems component currently uses many hardcoded
user-facing strings; import and call the i18n hook (useTranslation) at the top
of ADOWorkItems (same pattern as ADOSetupModal) and replace all literals with
translation lookups (e.g., t('ado:workItems.title'), t('ado:connect.title'),
t('ado:buttons.configure'), t('ado:buttons.settings'), t('ado:buttons.refresh'),
t('ado:search.placeholder'), t('ado:empty.noMatches'), t('ado:empty.noItems'),
t('ado:detail.selectWorkItem'), and any labels like
Author/Assigned/Created/Iteration/Tags/Description); ensure namespace prefix
(e.g., 'ado:') is used consistently, update any JSX text, button labels, badge
counts, placeholders and error messages to use t(), and add corresponding keys
to the ado translation namespace so ADOSetupModal and ADOWorkItems share the
same localization keys.
- Around line 458-461: The code renders unsanitized HTML from Azure DevOps via
dangerouslySetInnerHTML using workItem.body in the ADOWorkItems React component,
which creates an XSS risk; fix it by installing DOMPurify and @types/dompurify,
import DOMPurify in ADOWorkItems.tsx, sanitize workItem.body (e.g., const
safeHtml = DOMPurify.sanitize(workItem.body)) before passing it into
dangerouslySetInnerHTML, or replace the dangerouslySetInnerHTML usage with a
safe render that uses the sanitized string; ensure the sanitize step runs
wherever workItem.body is mapped or prepared so no raw HTML reaches
dangerouslySetInnerHTML.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (8)
apps/frontend/src/main/ipc-handlers/ado/workitem-handlers.tsapps/frontend/src/preload/api/index.tsapps/frontend/src/preload/api/modules/ado-api.tsapps/frontend/src/preload/api/modules/index.tsapps/frontend/src/renderer/components/ADOSetupModal.tsxapps/frontend/src/renderer/components/ADOWorkItems.tsxapps/frontend/src/renderer/components/index.tsapps/frontend/src/shared/constants/ipc.ts
🧰 Additional context used
📓 Path-based instructions (3)
apps/frontend/src/**/*.{ts,tsx,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Always use i18n translation keys for all user-facing text in the frontend instead of hardcoded strings
Files:
apps/frontend/src/renderer/components/ADOWorkItems.tsxapps/frontend/src/preload/api/index.tsapps/frontend/src/preload/api/modules/index.tsapps/frontend/src/renderer/components/index.tsapps/frontend/src/main/ipc-handlers/ado/workitem-handlers.tsapps/frontend/src/renderer/components/ADOSetupModal.tsxapps/frontend/src/preload/api/modules/ado-api.tsapps/frontend/src/shared/constants/ipc.ts
apps/frontend/src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use
useTranslation()hook with namespace prefixes (e.g., 'navigation:items.key') for accessing translation strings in React components
Files:
apps/frontend/src/renderer/components/ADOWorkItems.tsxapps/frontend/src/preload/api/index.tsapps/frontend/src/preload/api/modules/index.tsapps/frontend/src/renderer/components/index.tsapps/frontend/src/main/ipc-handlers/ado/workitem-handlers.tsapps/frontend/src/renderer/components/ADOSetupModal.tsxapps/frontend/src/preload/api/modules/ado-api.tsapps/frontend/src/shared/constants/ipc.ts
apps/frontend/**/*.{ts,tsx}
⚙️ CodeRabbit configuration file
apps/frontend/**/*.{ts,tsx}: Review React patterns and TypeScript type safety.
Check for proper state management and component composition.
Files:
apps/frontend/src/renderer/components/ADOWorkItems.tsxapps/frontend/src/preload/api/index.tsapps/frontend/src/preload/api/modules/index.tsapps/frontend/src/renderer/components/index.tsapps/frontend/src/main/ipc-handlers/ado/workitem-handlers.tsapps/frontend/src/renderer/components/ADOSetupModal.tsxapps/frontend/src/preload/api/modules/ado-api.tsapps/frontend/src/shared/constants/ipc.ts
🧠 Learnings (1)
📚 Learning: 2026-01-04T23:59:45.209Z
Learnt from: MikeeBuilds
Repo: AndyMik90/Auto-Claude PR: 661
File: apps/frontend/src/renderer/components/onboarding/OllamaModelSelector.tsx:176-189
Timestamp: 2026-01-04T23:59:45.209Z
Learning: In the AndyMik90/Auto-Claude repository, pre-existing i18n issues (hardcoded user-facing strings) can be deferred for future i18n cleanup passes. Do not fix such issues in PRs that do not introduce new i18n violations, especially in frontend TSX components (e.g., apps/frontend/**/*.tsx). If a PR adds new i18n violations, address them in that PR.
Applied to files:
apps/frontend/src/renderer/components/ADOWorkItems.tsxapps/frontend/src/renderer/components/ADOSetupModal.tsx
🧬 Code graph analysis (3)
apps/frontend/src/renderer/components/ADOWorkItems.tsx (2)
.design-system/src/lib/icons.ts (2)
Zap(38-38)ChevronRight(23-23)apps/frontend/src/preload/api/modules/ado-api.ts (1)
ADOWorkItem(8-34)
apps/frontend/src/main/ipc-handlers/ado/workitem-handlers.ts (2)
apps/frontend/src/main/ipc-handlers/ado/types.ts (2)
ADOAPIWorkItem(13-43)ADOWorkItem(98-124)apps/frontend/src/main/ipc-handlers/ado/utils.ts (4)
normalizeWorkItemState(130-133)getADOConfig(14-41)adoFetch(56-82)sanitizeWiqlString(117-125)
apps/frontend/src/preload/api/modules/ado-api.ts (1)
apps/frontend/src/shared/constants/ipc.ts (1)
IPC_CHANNELS(6-557)
🪛 ast-grep (0.40.4)
apps/frontend/src/renderer/components/ADOWorkItems.tsx
[warning] 459-459: Usage of dangerouslySetInnerHTML detected. This bypasses React's built-in XSS protection. Always sanitize HTML content using libraries like DOMPurify before injecting it into the DOM to prevent XSS attacks.
Context: dangerouslySetInnerHTML
Note: [CWE-79] Improper Neutralization of Input During Web Page Generation [REFERENCES]
- https://reactjs.org/docs/dom-elements.html#dangerouslysetinnerhtml
- https://cwe.mitre.org/data/definitions/79.html
(react-unsafe-html-injection)
🪛 Biome (2.1.2)
apps/frontend/src/renderer/components/ADOWorkItems.tsx
[error] 276-282: Provide an explicit type prop for the button element.
The default type of a button is submit, which causes the submission of a form when placed inside a form element. This is likely not the behaviour that you want inside a React application.
Allowed button types are: submit, button or reset
(lint/a11y/useButtonType)
[error] 460-460: Avoid passing content using the dangerouslySetInnerHTML prop.
Setting content using code can expose users to cross-site scripting (XSS) attacks
(lint/security/noDangerouslySetInnerHtml)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: CodeQL (python)
- GitHub Check: CodeQL (javascript-typescript)
🔇 Additional comments (8)
apps/frontend/src/renderer/components/index.ts (1)
11-12: LGTM!The new ADO component exports follow the established barrel export pattern and are properly placed alongside related integrations (after
GitHubIssues).apps/frontend/src/main/ipc-handlers/ado/workitem-handlers.ts (3)
150-162: WIQL query construction looks secure.Good use of
sanitizeWiqlStringfor the project name before interpolation into the WIQL query. The state conditions use hardcoded values, avoiding injection risks.
21-57: LGTM!The
transformWorkItemfunction correctly maps ADO API fields to the application's normalized format, with proper handling of:
- Optional fields (
assignedTo,_links)- Tag parsing from semicolon-separated string
- State normalization via
normalizeWorkItemState- Fallback URL generation when
_links.htmlis missing
382-390: LGTM!The
registerWorkItemHandlersfunction cleanly aggregates all handler registrations, following a consistent pattern for IPC handler setup.apps/frontend/src/preload/api/modules/index.ts (1)
14-14: LGTM!The ADO API module export follows the established barrel pattern and is appropriately positioned alongside other provider APIs (GitHub, Linear).
apps/frontend/src/preload/api/index.ts (1)
12-12: LGTM!The ADO API integration follows the established pattern used for GitHub:
- Namespaced under
adoproperty inElectronAPI- Factory properly creates the API via
createADOAPI()- Both the factory and type are exported for external use
Also applies to: 34-34, 52-53, 70-70, 89-89
apps/frontend/src/preload/api/modules/ado-api.ts (1)
248-268: LGTM!The
createADOAPIfactory cleanly assembles all ADO operations and event listeners into a typed API object, matching the pattern established bycreateGitHubAPI.apps/frontend/src/shared/constants/ipc.ts (1)
512-513: LGTM!The formatting change adds consistency with the trailing comma pattern used throughout the file.
| // Fetch full work item details (batch, max 200) | ||
| const ids = queryResult.workItems.slice(0, 200).map(wi => wi.id); | ||
| const workItemsResult = await adoFetch( | ||
| config, | ||
| `/wit/workitems?ids=${ids.join(',')}&$expand=All` | ||
| ) as { value: ADOAPIWorkItem[] }; |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Consider making the batch size limit configurable or documenting the 200-item cap.
The 200-item limit is reasonable for performance, but users with large backlogs may not realize results are truncated. Consider adding a total count or pagination support.
📝 Optional: Return total count for transparency
if (!queryResult.workItems || queryResult.workItems.length === 0) {
return { success: true, data: [] };
}
+ const totalCount = queryResult.workItems.length;
// Fetch full work item details (batch, max 200)
const ids = queryResult.workItems.slice(0, 200).map(wi => wi.id);
// ... fetch details ...
- return { success: true, data: result };
+ return {
+ success: true,
+ data: result,
+ // Optional: include metadata about truncation
+ // meta: { total: totalCount, returned: result.length, truncated: totalCount > 200 }
+ };Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @apps/frontend/src/main/ipc-handlers/ado/workitem-handlers.ts around lines
173 - 178, The current code truncates to 200 items via const ids =
queryResult.workItems.slice(0, 200)... which hides results for large backlogs;
make the batch size configurable (introduce a BATCH_SIZE constant or add a
config.batchSize/env var and replace the hardcoded 200), or implement pagination
by repeatedly calling adoFetch(`/wit/workitems?ids=...`) in chunks of BATCH_SIZE
until all ids from queryResult.workItems are fetched; additionally return or
attach a totalCount (e.g., from queryResult or aggregate) alongside
workItemsResult so callers can detect truncation.
| export interface ADOWorkItem { | ||
| id: number; | ||
| number: number; | ||
| title: string; | ||
| body?: string; | ||
| state: 'open' | 'closed'; | ||
| workItemType: string; | ||
| tags: string[]; | ||
| assignees: Array<{ | ||
| login: string; | ||
| displayName: string; | ||
| avatarUrl?: string; | ||
| }>; | ||
| author: { | ||
| login: string; | ||
| displayName: string; | ||
| avatarUrl?: string; | ||
| }; | ||
| priority?: number; | ||
| iteration?: string; | ||
| areaPath?: string; | ||
| createdAt: string; | ||
| updatedAt: string; | ||
| closedAt?: string; | ||
| url: string; | ||
| htmlUrl: string; | ||
| } |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Type duplication: ADOWorkItem is defined in both preload and IPC handler layers.
The ADOWorkItem interface here (lines 8-34) mirrors the one in apps/frontend/src/main/ipc-handlers/ado/types.ts (lines 97-123). Consider importing from a shared location to avoid drift.
You could move shared types to apps/frontend/src/shared/types/ado.ts and import from there in both locations. This ensures type consistency across the IPC boundary.
🤖 Prompt for AI Agents
In @apps/frontend/src/preload/api/modules/ado-api.ts around lines 8 - 34, The
ADOWorkItem interface is duplicated; extract the ADOWorkItem type into a single
shared module and export it, then replace the local interface in both the
preload module containing ADOWorkItem and the IPC handler that currently
duplicates it with an import of that shared ADOWorkItem; ensure the shared
export includes all fields (id, number, title, state, workItemType, tags,
assignees, author, priority, iteration, areaPath, createdAt, updatedAt,
closedAt, url, htmlUrl) and update the import statements in the files that
referenced the old local definitions.
| export function onADOInvestigationComplete( | ||
| callback: (result: unknown) => void | ||
| ): IpcListenerCleanup { | ||
| return createIpcListener(IPC_CHANNELS.ADO_INVESTIGATION_COMPLETE, callback); | ||
| } |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Consider using more specific types for callback results instead of unknown.
onADOInvestigationComplete and onADOPRReviewComplete callbacks receive unknown typed results. If the result structure is known, define and use specific types for better type safety and developer experience.
📝 Example with specific types
+export interface ADOInvestigationResult {
+ // Define the actual structure
+ summary: string;
+ recommendations: string[];
+}
+
export function onADOInvestigationComplete(
- callback: (result: unknown) => void
+ callback: (result: ADOInvestigationResult) => void
): IpcListenerCleanup {
return createIpcListener(IPC_CHANNELS.ADO_INVESTIGATION_COMPLETE, callback);
}Also applies to: 212-216
🤖 Prompt for AI Agents
In @apps/frontend/src/preload/api/modules/ado-api.ts around lines 194 - 198, The
callbacks use overly broad unknown types; define precise result interfaces
(e.g., ADOInvestigationCompleteResult and ADOPRReviewCompleteResult) matching
the IPC payload shape and replace the unknown type in onADOInvestigationComplete
and onADOPRReviewComplete signatures with those interfaces; update the
createIpcListener generic usage so the listener callback is strongly typed and
adjust any related exports/imports to use the new types.
| interface ADOSetupModalProps { | ||
| open: boolean; | ||
| onOpenChange: (open: boolean) => void; | ||
| project: Project; |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Unused project prop.
The project prop is declared in ADOSetupModalProps and destructured in the component but is never used. Either remove it from the interface and destructure, or use it (e.g., to pre-fill adoProject or show context).
♻️ If the prop is not needed, remove it
interface ADOSetupModalProps {
open: boolean;
onOpenChange: (open: boolean) => void;
- project: Project;
onComplete: (settings: {
adoOrganization: string;
adoProject: string;
adoRepoName: string;
adoPat: string;
adoInstanceUrl: string;
}) => void;
onSkip?: () => void;
} export function ADOSetupModal({
open,
onOpenChange,
- project,
onComplete,
onSkip,
}: ADOSetupModalProps) {📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| project: Project; | |
| interface ADOSetupModalProps { | |
| open: boolean; | |
| onOpenChange: (open: boolean) => void; | |
| onComplete: (settings: { | |
| adoOrganization: string; | |
| adoProject: string; | |
| adoRepoName: string; | |
| adoPat: string; | |
| adoInstanceUrl: string; | |
| }) => void; | |
| onSkip?: () => void; | |
| } | |
| export function ADOSetupModal({ | |
| open, | |
| onOpenChange, | |
| onComplete, | |
| onSkip, | |
| }: ADOSetupModalProps) { | |
| // ... rest of component implementation | |
| } |
🤖 Prompt for AI Agents
In @apps/frontend/src/renderer/components/ADOSetupModal.tsx at line 29, The
ADOSetupModalProps declares and the ADOSetupModal component destructures a
project prop that is never used; either remove project from the
ADOSetupModalProps interface and from the component destructuring, or use it to
pre-fill the adoProject state or display project context. Update the interface
ADOSetupModalProps and the component signature to drop project if unused, or set
initial state for adoProject (or render project info) using the project value so
the prop is actually consumed (ensure references to project are removed if you
choose deletion to avoid linter errors).
| const renderCredentialsStep = () => ( | ||
| <> | ||
| <DialogHeader> | ||
| <DialogTitle className="flex items-center gap-2"> | ||
| <Cloud className="h-5 w-5 text-blue-500" /> | ||
| Connect to Azure DevOps | ||
| </DialogTitle> | ||
| <DialogDescription> | ||
| Configure your Azure DevOps connection to sync work items and pull requests. | ||
| </DialogDescription> | ||
| </DialogHeader> | ||
|
|
||
| <div className="py-4 space-y-4"> | ||
| {/* PAT Help Link */} | ||
| <div className="rounded-lg border border-blue-200 bg-blue-50 dark:bg-blue-950 dark:border-blue-800 p-3"> | ||
| <div className="flex items-start gap-2"> | ||
| <Key className="h-4 w-4 text-blue-600 mt-0.5" /> | ||
| <div className="text-xs"> | ||
| <p className="font-medium text-blue-900 dark:text-blue-100"> | ||
| You'll need a Personal Access Token (PAT) | ||
| </p> | ||
| <p className="mt-1 text-blue-700 dark:text-blue-300"> | ||
| Create one at your ADO organization settings with{' '} | ||
| <strong>Code (Read & Write)</strong> and{' '} | ||
| <strong>Work Items (Read & Write)</strong> scopes. | ||
| </p> | ||
| <a | ||
| href="https://dev.azure.com/_usersSettings/tokens" | ||
| target="_blank" | ||
| rel="noopener noreferrer" | ||
| className="inline-flex items-center gap-1 mt-2 text-blue-600 hover:underline" | ||
| > | ||
| Create PAT on Azure DevOps | ||
| <ExternalLink className="h-3 w-3" /> | ||
| </a> | ||
| </div> | ||
| </div> | ||
| </div> | ||
|
|
||
| {/* Organization */} | ||
| <div className="space-y-2"> | ||
| <Label htmlFor="ado-org" className="flex items-center gap-2"> | ||
| <Building2 className="h-4 w-4" /> | ||
| Organization * | ||
| </Label> | ||
| <Input | ||
| id="ado-org" | ||
| value={organization} | ||
| onChange={(e) => setOrganization(e.target.value)} | ||
| placeholder="myorganization" | ||
| disabled={isTesting} | ||
| /> | ||
| <p className="text-xs text-muted-foreground"> | ||
| Your ADO organization name (from dev.azure.com/<strong>org-name</strong>) | ||
| </p> | ||
| </div> | ||
|
|
||
| {/* Project */} | ||
| <div className="space-y-2"> | ||
| <Label htmlFor="ado-project" className="flex items-center gap-2"> | ||
| <FolderGit2 className="h-4 w-4" /> | ||
| Project * | ||
| </Label> | ||
| <Input | ||
| id="ado-project" | ||
| value={adoProject} | ||
| onChange={(e) => setAdoProject(e.target.value)} | ||
| placeholder="MyProject" | ||
| disabled={isTesting} | ||
| /> | ||
| </div> | ||
|
|
||
| {/* Repository (optional) */} | ||
| <div className="space-y-2"> | ||
| <Label htmlFor="ado-repo" className="flex items-center gap-2"> | ||
| <GitBranch className="h-4 w-4" /> | ||
| Repository | ||
| <span className="text-xs text-muted-foreground">(optional)</span> | ||
| </Label> | ||
| <Input | ||
| id="ado-repo" | ||
| value={repoName} | ||
| onChange={(e) => setRepoName(e.target.value)} | ||
| placeholder={adoProject || 'Same as project name'} | ||
| disabled={isTesting} | ||
| /> | ||
| <p className="text-xs text-muted-foreground"> | ||
| Leave empty to use the project name as repo name | ||
| </p> | ||
| </div> | ||
|
|
||
| {/* PAT */} | ||
| <div className="space-y-2"> | ||
| <Label htmlFor="ado-pat" className="flex items-center gap-2"> | ||
| <Key className="h-4 w-4" /> | ||
| Personal Access Token * | ||
| </Label> | ||
| <Input | ||
| id="ado-pat" | ||
| type="password" | ||
| value={pat} | ||
| onChange={(e) => setPat(e.target.value)} | ||
| placeholder="Enter your PAT" | ||
| disabled={isTesting} | ||
| /> | ||
| </div> | ||
|
|
||
| {/* On-prem toggle */} | ||
| <div className="space-y-2"> | ||
| <div className="flex items-center gap-2"> | ||
| <input | ||
| type="checkbox" | ||
| id="ado-onprem" | ||
| checked={isOnPrem} | ||
| onChange={(e) => { | ||
| setIsOnPrem(e.target.checked); | ||
| if (!e.target.checked) { | ||
| setInstanceUrl('https://dev.azure.com'); | ||
| } else { | ||
| setInstanceUrl(''); | ||
| } | ||
| }} | ||
| className="rounded border-gray-300" | ||
| disabled={isTesting} | ||
| /> | ||
| <Label htmlFor="ado-onprem" className="text-sm cursor-pointer"> | ||
| Using Azure DevOps Server (on-premises) | ||
| </Label> | ||
| </div> | ||
|
|
||
| {isOnPrem && ( | ||
| <div className="ml-6 space-y-2"> | ||
| <Input | ||
| value={instanceUrl} | ||
| onChange={(e) => setInstanceUrl(e.target.value)} | ||
| placeholder="https://devops.yourcompany.com" | ||
| disabled={isTesting} | ||
| /> | ||
| <p className="text-xs text-muted-foreground"> | ||
| Your on-premises Azure DevOps Server URL | ||
| </p> | ||
| </div> | ||
| )} | ||
| </div> | ||
|
|
||
| {error && ( | ||
| <div className="rounded-lg bg-destructive/10 border border-destructive/30 p-3 text-sm text-destructive flex items-start gap-2"> | ||
| <AlertCircle className="h-4 w-4 mt-0.5 flex-shrink-0" /> | ||
| <span>{error}</span> | ||
| </div> | ||
| )} | ||
| </div> | ||
|
|
||
| <DialogFooter className="flex-col sm:flex-row gap-2"> | ||
| {onSkip && ( | ||
| <Button variant="ghost" onClick={onSkip} disabled={isTesting}> | ||
| Skip for now | ||
| </Button> | ||
| )} | ||
| <Button | ||
| variant="outline" | ||
| onClick={handleSaveWithoutTest} | ||
| disabled={isTesting || !organization || !adoProject || !pat} | ||
| > | ||
| Save Without Testing | ||
| </Button> | ||
| <Button | ||
| onClick={testConnection} | ||
| disabled={isTesting || !organization || !adoProject || !pat} | ||
| > | ||
| {isTesting ? ( | ||
| <> | ||
| <Loader2 className="mr-2 h-4 w-4 animate-spin" /> | ||
| Testing... | ||
| </> | ||
| ) : ( | ||
| <> | ||
| <CheckCircle2 className="mr-2 h-4 w-4" /> | ||
| Test & Save | ||
| </> | ||
| )} | ||
| </Button> | ||
| </DialogFooter> | ||
| </> | ||
| ); |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
All user-facing strings must use i18n translation keys.
This new component contains numerous hardcoded strings that violate the project's i18n requirements. As per coding guidelines, always use useTranslation() hook with namespace prefixes for user-facing text.
Examples of hardcoded strings that need translation:
- "Connect to Azure DevOps"
- "Configure your Azure DevOps connection..."
- "You'll need a Personal Access Token (PAT)"
- "Organization *", "Project *", "Personal Access Token *"
- "Save Without Testing", "Test & Save", "Skip for now"
- Error messages like "Please fill in all required fields"
🔧 Example refactor using useTranslation
+import { useTranslation } from 'react-i18next';
+
export function ADOSetupModal({
open,
onOpenChange,
project,
onComplete,
onSkip,
}: ADOSetupModalProps) {
+ const { t } = useTranslation('ado');
const [step, setStep] = useState<SetupStep>('credentials');
// ...
const renderCredentialsStep = () => (
<>
<DialogHeader>
<DialogTitle className="flex items-center gap-2">
<Cloud className="h-5 w-5 text-blue-500" />
- Connect to Azure DevOps
+ {t('setup.title')}
</DialogTitle>
<DialogDescription>
- Configure your Azure DevOps connection to sync work items and pull requests.
+ {t('setup.description')}
</DialogDescription>
</DialogHeader>
// ... apply similar changes to all user-facing strings
</>
);Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @apps/frontend/src/renderer/components/ADOSetupModal.tsx around lines 142 -
326, The renderCredentialsStep JSX uses hardcoded user-facing strings; update
the ADOSetupModal component to use i18n by importing and calling useTranslation
(with the appropriate namespace) and replace all visible text in
renderCredentialsStep—titles, descriptions, labels, placeholders, button text,
help link text, PAT guidance, optional hints and error messages (e.g., "Connect
to Azure DevOps", "Configure your Azure DevOps connection...", "You'll need a
Personal Access Token (PAT)", "Organization *", "Project *", "Personal Access
Token *", "Save Without Testing", "Test & Save", "Skip for now", and the error
string)—with t('...') keys; ensure keys follow the project's namespace/prefix
convention and update any template strings or concatenations to use t with
interpolation where needed (refer to renderCredentialsStep,
handleSaveWithoutTest, testConnection and the error state usages to locate all
occurrences).
| export function ADOWorkItems({ onOpenSettings }: ADOWorkItemsProps) { | ||
| const projects = useProjectStore((state) => state.projects); | ||
| const selectedProjectId = useProjectStore((state) => state.selectedProjectId); | ||
| const selectedProject = projects.find((p) => p.id === selectedProjectId); | ||
|
|
||
| const [workItems, setWorkItems] = useState<ADOWorkItem[]>([]); | ||
| const [isLoading, setIsLoading] = useState(false); | ||
| const [error, setError] = useState<string | null>(null); | ||
| const [isConnected, setIsConnected] = useState<boolean | null>(null); | ||
| const [selectedWorkItem, setSelectedWorkItem] = useState<ADOWorkItem | null>(null); | ||
| const [searchQuery, setSearchQuery] = useState(''); | ||
| const [filterState, setFilterState] = useState<'open' | 'closed' | 'all'>('open'); | ||
| const [showSetupModal, setShowSetupModal] = useState(false); | ||
|
|
||
| // Check connection status | ||
| const checkConnection = useCallback(async () => { | ||
| if (!selectedProject?.id) return; | ||
|
|
||
| try { | ||
| const result = await window.electronAPI.ado.checkADOConnection(selectedProject.id); | ||
| setIsConnected(result.success); | ||
| if (!result.success) { | ||
| setError(result.error || 'Not connected to Azure DevOps'); | ||
| } | ||
| } catch (err) { | ||
| setIsConnected(false); | ||
| setError(err instanceof Error ? err.message : 'Failed to check connection'); | ||
| } | ||
| }, [selectedProject?.id]); | ||
|
|
||
| // Fetch work items | ||
| const fetchWorkItems = useCallback(async () => { | ||
| if (!selectedProject?.id || !isConnected) return; | ||
|
|
||
| setIsLoading(true); | ||
| setError(null); | ||
|
|
||
| try { | ||
| const result = await window.electronAPI.ado.getADOWorkItems(selectedProject.id, filterState); | ||
| if (result.success && result.data) { | ||
| setWorkItems(result.data); | ||
| } else { | ||
| setError(result.error || 'Failed to fetch work items'); | ||
| } | ||
| } catch (err) { | ||
| setError(err instanceof Error ? err.message : 'Failed to fetch work items'); | ||
| } finally { | ||
| setIsLoading(false); | ||
| } | ||
| }, [selectedProject?.id, isConnected, filterState]); | ||
|
|
||
| // Initial load | ||
| useEffect(() => { | ||
| checkConnection(); | ||
| }, [checkConnection]); | ||
|
|
||
| // Fetch work items when connected or filter changes | ||
| useEffect(() => { | ||
| if (isConnected) { | ||
| fetchWorkItems(); | ||
| } | ||
| }, [fetchWorkItems, isConnected]); | ||
|
|
||
| // Filter work items by search query | ||
| const filteredWorkItems = useMemo(() => { | ||
| if (!searchQuery) return workItems; | ||
| const query = searchQuery.toLowerCase(); | ||
| return workItems.filter( | ||
| (wi) => | ||
| wi.title.toLowerCase().includes(query) || | ||
| wi.id.toString().includes(query) || | ||
| wi.workItemType.toLowerCase().includes(query) || | ||
| wi.tags.some((tag) => tag.toLowerCase().includes(query)) | ||
| ); | ||
| }, [workItems, searchQuery]); | ||
|
|
||
| // Handle refresh | ||
| const handleRefresh = useCallback(() => { | ||
| checkConnection(); | ||
| if (isConnected) { | ||
| fetchWorkItems(); | ||
| } | ||
| }, [checkConnection, fetchWorkItems, isConnected]); | ||
|
|
||
| // Handle setup complete | ||
| const handleSetupComplete = useCallback(async (settings: { | ||
| adoOrganization: string; | ||
| adoProject: string; | ||
| adoRepoName: string; | ||
| adoPat: string; | ||
| adoInstanceUrl: string; | ||
| }) => { | ||
| // TODO: Save settings to project | ||
| // For now, settings should be saved to .env file | ||
| console.log('ADO settings received:', settings); | ||
| setShowSetupModal(false); | ||
| handleRefresh(); | ||
| }, [handleRefresh]); | ||
|
|
||
| // Not connected state | ||
| if (isConnected === false) { | ||
| return ( | ||
| <div className="flex-1 flex flex-col items-center justify-center p-8 text-center"> | ||
| <div className="h-16 w-16 rounded-full bg-blue-100 dark:bg-blue-900 flex items-center justify-center mb-4"> | ||
| <Cloud className="h-8 w-8 text-blue-600 dark:text-blue-400" /> | ||
| </div> | ||
| <h3 className="text-lg font-medium mb-2">Connect to Azure DevOps</h3> | ||
| <p className="text-sm text-muted-foreground mb-4 max-w-md"> | ||
| {error || 'Configure your Azure DevOps connection to view and manage work items.'} | ||
| </p> | ||
| <div className="flex gap-2"> | ||
| <Button onClick={() => setShowSetupModal(true)}> | ||
| <Cloud className="mr-2 h-4 w-4" /> | ||
| Configure Connection | ||
| </Button> | ||
| {onOpenSettings && ( | ||
| <Button variant="outline" onClick={onOpenSettings}> | ||
| <Settings className="mr-2 h-4 w-4" /> | ||
| Settings | ||
| </Button> | ||
| )} | ||
| </div> | ||
|
|
||
| {selectedProject && ( | ||
| <ADOSetupModal | ||
| open={showSetupModal} | ||
| onOpenChange={setShowSetupModal} | ||
| project={selectedProject} | ||
| onComplete={handleSetupComplete} | ||
| onSkip={() => setShowSetupModal(false)} | ||
| /> | ||
| )} | ||
| </div> | ||
| ); | ||
| } | ||
|
|
||
| // Loading initial connection | ||
| if (isConnected === null) { | ||
| return ( | ||
| <div className="flex-1 flex items-center justify-center"> | ||
| <Loader2 className="h-8 w-8 animate-spin text-muted-foreground" /> | ||
| </div> | ||
| ); | ||
| } | ||
|
|
||
| return ( | ||
| <div className="flex-1 flex flex-col h-full"> | ||
| {/* Header */} | ||
| <div className="flex-none border-b border-border p-4"> | ||
| <div className="flex items-center justify-between mb-3"> | ||
| <div className="flex items-center gap-2"> | ||
| <Cloud className="h-5 w-5 text-blue-500" /> | ||
| <h2 className="font-semibold">Azure DevOps Work Items</h2> | ||
| <Badge variant="outline" className="text-xs"> | ||
| {filteredWorkItems.length} items | ||
| </Badge> | ||
| </div> | ||
| <div className="flex items-center gap-2"> | ||
| <Button | ||
| variant="outline" | ||
| size="sm" | ||
| onClick={handleRefresh} | ||
| disabled={isLoading} | ||
| > | ||
| <RefreshCw className={`h-4 w-4 mr-1 ${isLoading ? 'animate-spin' : ''}`} /> | ||
| Refresh | ||
| </Button> | ||
| <Button | ||
| variant="outline" | ||
| size="sm" | ||
| onClick={() => setShowSetupModal(true)} | ||
| > | ||
| <Settings className="h-4 w-4" /> | ||
| </Button> | ||
| </div> | ||
| </div> | ||
|
|
||
| {/* Search and Filter */} | ||
| <div className="flex items-center gap-2"> | ||
| <div className="relative flex-1"> | ||
| <Search className="absolute left-2.5 top-2.5 h-4 w-4 text-muted-foreground" /> | ||
| <Input | ||
| placeholder="Search work items..." | ||
| value={searchQuery} | ||
| onChange={(e) => setSearchQuery(e.target.value)} | ||
| className="pl-8" | ||
| /> | ||
| </div> | ||
| <div className="flex items-center gap-1 border rounded-md p-0.5"> | ||
| {(['open', 'closed', 'all'] as const).map((state) => ( | ||
| <Button | ||
| key={state} | ||
| variant={filterState === state ? 'default' : 'ghost'} | ||
| size="sm" | ||
| onClick={() => setFilterState(state)} | ||
| className="h-7 px-2 text-xs capitalize" | ||
| > | ||
| {state} | ||
| </Button> | ||
| ))} | ||
| </div> | ||
| </div> | ||
| </div> | ||
|
|
||
| {/* Content */} | ||
| <div className="flex-1 flex min-h-0"> | ||
| {/* Work Items List */} | ||
| <div className="w-1/2 border-r border-border flex flex-col"> | ||
| {error ? ( | ||
| <div className="flex-1 flex items-center justify-center p-4"> | ||
| <div className="text-center"> | ||
| <AlertCircle className="h-8 w-8 text-destructive mx-auto mb-2" /> | ||
| <p className="text-sm text-muted-foreground">{error}</p> | ||
| <Button variant="outline" size="sm" className="mt-2" onClick={handleRefresh}> | ||
| Try Again | ||
| </Button> | ||
| </div> | ||
| </div> | ||
| ) : filteredWorkItems.length === 0 ? ( | ||
| <div className="flex-1 flex items-center justify-center p-4"> | ||
| <div className="text-center"> | ||
| <Circle className="h-8 w-8 text-muted-foreground mx-auto mb-2" /> | ||
| <p className="text-sm text-muted-foreground"> | ||
| {searchQuery ? 'No matching work items' : 'No work items found'} | ||
| </p> | ||
| </div> | ||
| </div> | ||
| ) : ( | ||
| <ScrollArea className="flex-1"> | ||
| <div className="divide-y divide-border"> | ||
| {filteredWorkItems.map((wi) => { | ||
| const TypeIcon = workItemTypeIcons[wi.workItemType] || Circle; | ||
| const isSelected = selectedWorkItem?.id === wi.id; | ||
|
|
||
| return ( | ||
| <button | ||
| key={wi.id} | ||
| onClick={() => setSelectedWorkItem(wi)} | ||
| className={`w-full text-left p-3 hover:bg-muted/50 transition-colors ${ | ||
| isSelected ? 'bg-muted' : '' | ||
| }`} | ||
| > | ||
| <div className="flex items-start gap-2"> | ||
| <TypeIcon | ||
| className={`h-4 w-4 mt-0.5 flex-shrink-0 ${ | ||
| wi.workItemType === 'Bug' | ||
| ? 'text-red-500' | ||
| : wi.workItemType === 'User Story' | ||
| ? 'text-blue-500' | ||
| : 'text-green-500' | ||
| }`} | ||
| /> | ||
| <div className="flex-1 min-w-0"> | ||
| <div className="flex items-center gap-1"> | ||
| <span className="text-xs text-muted-foreground">#{wi.id}</span> | ||
| <span | ||
| className={`text-xs px-1.5 py-0.5 rounded ${ | ||
| wi.state === 'open' | ||
| ? 'bg-green-100 text-green-800 dark:bg-green-900 dark:text-green-200' | ||
| : 'bg-gray-100 text-gray-800 dark:bg-gray-800 dark:text-gray-200' | ||
| }`} | ||
| > | ||
| {wi.state} | ||
| </span> | ||
| </div> | ||
| <p className="text-sm font-medium truncate">{wi.title}</p> | ||
| {wi.tags.length > 0 && ( | ||
| <div className="flex items-center gap-1 mt-1 flex-wrap"> | ||
| {wi.tags.slice(0, 3).map((tag) => ( | ||
| <Badge | ||
| key={tag} | ||
| variant="outline" | ||
| className="text-xs px-1 py-0" | ||
| > | ||
| {tag} | ||
| </Badge> | ||
| ))} | ||
| {wi.tags.length > 3 && ( | ||
| <span className="text-xs text-muted-foreground"> | ||
| +{wi.tags.length - 3} | ||
| </span> | ||
| )} | ||
| </div> | ||
| )} | ||
| </div> | ||
| <ChevronRight className="h-4 w-4 text-muted-foreground flex-shrink-0" /> | ||
| </div> | ||
| </button> | ||
| ); | ||
| })} | ||
| </div> | ||
| </ScrollArea> | ||
| )} | ||
| </div> | ||
|
|
||
| {/* Work Item Detail */} | ||
| <div className="w-1/2 flex flex-col"> | ||
| {selectedWorkItem ? ( | ||
| <WorkItemDetail workItem={selectedWorkItem} /> | ||
| ) : ( | ||
| <div className="flex-1 flex items-center justify-center"> | ||
| <div className="text-center text-muted-foreground"> | ||
| <Circle className="h-8 w-8 mx-auto mb-2" /> | ||
| <p className="text-sm">Select a work item to view details</p> | ||
| </div> | ||
| </div> | ||
| )} | ||
| </div> | ||
| </div> | ||
|
|
||
| {/* Setup Modal */} | ||
| {selectedProject && ( | ||
| <ADOSetupModal | ||
| open={showSetupModal} | ||
| onOpenChange={setShowSetupModal} | ||
| project={selectedProject} | ||
| onComplete={handleSetupComplete} | ||
| onSkip={() => setShowSetupModal(false)} | ||
| /> | ||
| )} | ||
| </div> | ||
| ); | ||
| } |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
All user-facing strings must use i18n translation keys.
Similar to ADOSetupModal, this new component contains many hardcoded strings violating i18n requirements. Examples include:
- "Azure DevOps Work Items"
- "Connect to Azure DevOps"
- "Configure Connection", "Settings", "Refresh"
- "Search work items...", "No matching work items", "No work items found"
- "Select a work item to view details"
- Labels like "Author:", "Assigned:", "Created:", "Iteration:", "Tags:", "Description"
As per coding guidelines, use useTranslation() hook with namespace prefixes.
🧰 Tools
🪛 Biome (2.1.2)
[error] 276-282: Provide an explicit type prop for the button element.
The default type of a button is submit, which causes the submission of a form when placed inside a form element. This is likely not the behaviour that you want inside a React application.
Allowed button types are: submit, button or reset
(lint/a11y/useButtonType)
🤖 Prompt for AI Agents
In @apps/frontend/src/renderer/components/ADOWorkItems.tsx around lines 41 -
363, The ADOWorkItems component currently uses many hardcoded user-facing
strings; import and call the i18n hook (useTranslation) at the top of
ADOWorkItems (same pattern as ADOSetupModal) and replace all literals with
translation lookups (e.g., t('ado:workItems.title'), t('ado:connect.title'),
t('ado:buttons.configure'), t('ado:buttons.settings'), t('ado:buttons.refresh'),
t('ado:search.placeholder'), t('ado:empty.noMatches'), t('ado:empty.noItems'),
t('ado:detail.selectWorkItem'), and any labels like
Author/Assigned/Created/Iteration/Tags/Description); ensure namespace prefix
(e.g., 'ado:') is used consistently, update any JSX text, button labels, badge
counts, placeholders and error messages to use t(), and add corresponding keys
to the ado translation namespace so ADOSetupModal and ADOWorkItems share the
same localization keys.
| <button | ||
| key={wi.id} | ||
| onClick={() => setSelectedWorkItem(wi)} | ||
| className={`w-full text-left p-3 hover:bg-muted/50 transition-colors ${ | ||
| isSelected ? 'bg-muted' : '' | ||
| }`} | ||
| > |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Add explicit type="button" to prevent unintended form submissions.
The <button> element defaults to type="submit", which can cause unexpected behavior if placed inside a form context.
🐛 Proposed fix
<button
key={wi.id}
+ type="button"
onClick={() => setSelectedWorkItem(wi)}
className={`w-full text-left p-3 hover:bg-muted/50 transition-colors ${
isSelected ? 'bg-muted' : ''
}`}
>📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| <button | |
| key={wi.id} | |
| onClick={() => setSelectedWorkItem(wi)} | |
| className={`w-full text-left p-3 hover:bg-muted/50 transition-colors ${ | |
| isSelected ? 'bg-muted' : '' | |
| }`} | |
| > | |
| <button | |
| key={wi.id} | |
| type="button" | |
| onClick={() => setSelectedWorkItem(wi)} | |
| className={`w-full text-left p-3 hover:bg-muted/50 transition-colors ${ | |
| isSelected ? 'bg-muted' : '' | |
| }`} | |
| > |
🧰 Tools
🪛 Biome (2.1.2)
[error] 276-282: Provide an explicit type prop for the button element.
The default type of a button is submit, which causes the submission of a form when placed inside a form element. This is likely not the behaviour that you want inside a React application.
Allowed button types are: submit, button or reset
(lint/a11y/useButtonType)
🤖 Prompt for AI Agents
In @apps/frontend/src/renderer/components/ADOWorkItems.tsx around lines 276 -
282, The button rendering inside ADOWorkItems (the element with key={wi.id} that
calls setSelectedWorkItem onClick and uses isSelected for styling) lacks an
explicit type and will default to type="submit"; update that button to include
type="button" to prevent unintended form submissions when this component is
nested in a form.
| <div | ||
| className="prose prose-sm dark:prose-invert max-w-none" | ||
| dangerouslySetInnerHTML={{ __html: workItem.body }} | ||
| /> |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Critical XSS vulnerability: unsanitized HTML injection.
workItem.body originates from the Azure DevOps API and is rendered directly via dangerouslySetInnerHTML without sanitization. An attacker with access to ADO work items could inject malicious scripts.
🔒 Recommended fix: Sanitize HTML with DOMPurify
+import DOMPurify from 'dompurify';
+
function WorkItemDetail({ workItem }: { workItem: ADOWorkItem }) {
// ...
+ const sanitizedBody = workItem.body
+ ? DOMPurify.sanitize(workItem.body)
+ : '';
+
return (
// ...
{workItem.body && (
<div className="border-t pt-4">
<h4 className="text-sm font-medium mb-2">Description</h4>
<div
className="prose prose-sm dark:prose-invert max-w-none"
- dangerouslySetInnerHTML={{ __html: workItem.body }}
+ dangerouslySetInnerHTML={{ __html: sanitizedBody }}
/>
</div>
)}
);
}You'll need to install DOMPurify:
npm install dompurify
npm install --save-dev @types/dompurifyCommittable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 ast-grep (0.40.4)
[warning] 459-459: Usage of dangerouslySetInnerHTML detected. This bypasses React's built-in XSS protection. Always sanitize HTML content using libraries like DOMPurify before injecting it into the DOM to prevent XSS attacks.
Context: dangerouslySetInnerHTML
Note: [CWE-79] Improper Neutralization of Input During Web Page Generation [REFERENCES]
- https://reactjs.org/docs/dom-elements.html#dangerouslysetinnerhtml
- https://cwe.mitre.org/data/definitions/79.html
(react-unsafe-html-injection)
🪛 Biome (2.1.2)
[error] 460-460: Avoid passing content using the dangerouslySetInnerHTML prop.
Setting content using code can expose users to cross-site scripting (XSS) attacks
(lint/security/noDangerouslySetInnerHtml)
🤖 Prompt for AI Agents
In @apps/frontend/src/renderer/components/ADOWorkItems.tsx around lines 458 -
461, The code renders unsanitized HTML from Azure DevOps via
dangerouslySetInnerHTML using workItem.body in the ADOWorkItems React component,
which creates an XSS risk; fix it by installing DOMPurify and @types/dompurify,
import DOMPurify in ADOWorkItems.tsx, sanitize workItem.body (e.g., const
safeHtml = DOMPurify.sanitize(workItem.body)) before passing it into
dangerouslySetInnerHTML, or replace the dangerouslySetInnerHTML usage with a
safe render that uses the sanitized string; ensure the sanitize step runs
wherever workItem.body is mapped or prepared so no raw HTML reaches
dangerouslySetInnerHTML.
| // Azure DevOps integration | ||
| ADO_CHECK_CONNECTION: 'ado:checkConnection', | ||
| ADO_TEST_CONNECTION: 'ado:testConnection', // Test credentials before saving | ||
| ADO_GET_WORK_ITEMS: 'ado:getWorkItems', | ||
| ADO_GET_WORK_ITEM: 'ado:getWorkItem', | ||
| ADO_CREATE_WORK_ITEM: 'ado:createWorkItem', | ||
| ADO_UPDATE_WORK_ITEM: 'ado:updateWorkItem', | ||
| ADO_GET_WORK_ITEM_COMMENTS: 'ado:getWorkItemComments', | ||
| ADO_INVESTIGATE_WORK_ITEM: 'ado:investigateWorkItem', | ||
| ADO_IMPORT_WORK_ITEMS: 'ado:importWorkItems', | ||
|
|
||
| // Azure DevOps Pull Requests | ||
| ADO_PR_LIST: 'ado:pr:list', | ||
| ADO_PR_GET: 'ado:pr:get', | ||
| ADO_PR_GET_DIFF: 'ado:pr:getDiff', | ||
| ADO_PR_REVIEW: 'ado:pr:review', | ||
| ADO_PR_REVIEW_CANCEL: 'ado:pr:reviewCancel', | ||
| ADO_PR_GET_REVIEW: 'ado:pr:getReview', | ||
| ADO_PR_POST_REVIEW: 'ado:pr:postReview', | ||
| ADO_PR_MERGE: 'ado:pr:merge', | ||
| ADO_PR_ABANDON: 'ado:pr:abandon', | ||
|
|
||
| // Azure DevOps events (main -> renderer) | ||
| ADO_INVESTIGATION_PROGRESS: 'ado:investigationProgress', | ||
| ADO_INVESTIGATION_COMPLETE: 'ado:investigationComplete', | ||
| ADO_INVESTIGATION_ERROR: 'ado:investigationError', | ||
| ADO_PR_REVIEW_PROGRESS: 'ado:pr:reviewProgress', | ||
| ADO_PR_REVIEW_COMPLETE: 'ado:pr:reviewComplete', | ||
| ADO_PR_REVIEW_ERROR: 'ado:pr:reviewError', | ||
|
|
||
| // Azure DevOps Auto-Fix operations | ||
| ADO_AUTOFIX_START: 'ado:autofix:start', | ||
| ADO_AUTOFIX_STOP: 'ado:autofix:stop', | ||
| ADO_AUTOFIX_GET_QUEUE: 'ado:autofix:getQueue', | ||
| ADO_AUTOFIX_CHECK_TAGS: 'ado:autofix:checkTags', | ||
| ADO_AUTOFIX_CHECK_NEW: 'ado:autofix:checkNew', | ||
| ADO_AUTOFIX_GET_CONFIG: 'ado:autofix:getConfig', | ||
| ADO_AUTOFIX_SAVE_CONFIG: 'ado:autofix:saveConfig', | ||
|
|
||
| // Azure DevOps Auto-Fix events (main -> renderer) | ||
| ADO_AUTOFIX_PROGRESS: 'ado:autofix:progress', | ||
| ADO_AUTOFIX_COMPLETE: 'ado:autofix:complete', | ||
| ADO_AUTOFIX_ERROR: 'ado:autofix:error' |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# First, check if the file exists and get its basic info
echo "=== File information ==="
ls -la apps/frontend/src/shared/constants/ipc.ts 2>/dev/null || echo "File not found at standard path"
# Get line count to understand file structure
wc -l apps/frontend/src/shared/constants/ipc.ts 2>/dev/null
# Try to find the file in case it's at a different location
echo -e "\n=== Searching for ipc.ts or similar files ==="
fd -e ts -e tsx "ipc" --path "*/constants/*" --type f 2>/dev/null | head -20Repository: AndyMik90/Auto-Claude
Length of output: 274
🏁 Script executed:
#!/bin/bash
# Look for ipc files more broadly
echo "=== Searching for ipc-related files ==="
find . -name "*ipc*" -type f 2>/dev/null | grep -E "\.(ts|tsx|js)$" | head -20Repository: AndyMik90/Auto-Claude
Length of output: 467
🏁 Script executed:
#!/bin/bash
# Search for IPC_CHANNELS definition
echo "=== Searching for IPC_CHANNELS object ==="
rg "const IPC_CHANNELS|export.*IPC_CHANNELS" -A 5 --type ts --type tsx 2>/dev/null | head -30Repository: AndyMik90/Auto-Claude
Length of output: 106
🏁 Script executed:
#!/bin/bash
# Look for GitHub, GitLab, and ADO channel definitions
echo "=== Searching for GitHub/GitLab/ADO channel patterns ==="
rg "GITHUB_|GITLAB_|ADO_" --type ts -c | sort -t: -k2 -nr | head -20Repository: AndyMik90/Auto-Claude
Length of output: 1332
🏁 Script executed:
#!/bin/bash
# Search for ADO implementation files
echo "=== Searching for ADO-related implementation files ==="
fd -e ts -e tsx . --type f -x grep -l "ado\|azure\|ADO_" {} \; 2>/dev/null | head -20Repository: AndyMik90/Auto-Claude
Length of output: 1316
🏁 Script executed:
#!/bin/bash
# Read the ipc.ts file to verify the ADO, GitHub, and GitLab sections
echo "=== Reading IPC channels file ==="
cat -n apps/frontend/src/shared/constants/ipc.ts | head -100Repository: AndyMik90/Auto-Claude
Length of output: 5253
🏁 Script executed:
#!/bin/bash
# Get GitHub section (around lines 205-398 as mentioned in review)
echo "=== GitHub section of IPC channels ==="
sed -n '200,400p' apps/frontend/src/shared/constants/ipc.ts | cat -nRepository: AndyMik90/Auto-Claude
Length of output: 10617
🏁 Script executed:
#!/bin/bash
# Get ADO section (lines 514-556 as mentioned in review)
echo "=== ADO section of IPC channels (lines 510-560) ==="
sed -n '510,560p' apps/frontend/src/shared/constants/ipc.ts | cat -nRepository: AndyMik90/Auto-Claude
Length of output: 2523
🏁 Script executed:
#!/bin/bash
# Check for GitLab section
echo "=== GitLab section of IPC channels ==="
sed -n '230,330p' apps/frontend/src/shared/constants/ipc.ts | cat -nRepository: AndyMik90/Auto-Claude
Length of output: 5331
🏁 Script executed:
#!/bin/bash
# Check for ADO handler implementations
echo "=== Checking for ADO IPC handler implementations ==="
find apps/frontend/src/main/ipc-handlers -name "*ado*" -type f 2>/dev/null
# Look inside ADO handlers to see what's implemented
echo -e "\n=== Listing ADO handler files ==="
ls -la apps/frontend/src/main/ipc-handlers/ado/ 2>/dev/null || echo "ADO handlers directory check"Repository: AndyMik90/Auto-Claude
Length of output: 586
🏁 Script executed:
#!/bin/bash
# Check for TODO/FIXME comments related to ADO
echo "=== Searching for TODO/FIXME comments related to ADO ==="
rg -n "TODO|FIXME" --type ts --type tsx -B 2 -A 2 | rg -i "ado|azure|batch|triage|pr.*memory"Repository: AndyMik90/Auto-Claude
Length of output: 166
🏁 Script executed:
#!/bin/bash
# Check what ADO channels are actually handled
echo "=== Checking which ADO channels are registered in handlers ==="
rg "IPC_CHANNELS\.ADO_" apps/frontend/src/main/ipc-handlers/ -n --type tsRepository: AndyMik90/Auto-Claude
Length of output: 1399
🏁 Script executed:
#!/bin/bash
# Check the ADO handlers content for what's implemented
echo "=== ADO PR handlers content ==="
head -50 apps/frontend/src/main/ipc-handlers/ado/pr-handlers.ts 2>/dev/null || echo "File not found"
echo -e "\n=== ADO work item handlers content ==="
head -50 apps/frontend/src/main/ipc-handlers/ado/workitem-handlers.ts 2>/dev/null || echo "File not found"Repository: AndyMik90/Auto-Claude
Length of output: 3367
🏁 Script executed:
#!/bin/bash
# Check if there's an autofix handlers file for ADO
echo "=== Checking for ADO autofix handlers ==="
find apps/frontend/src/main/ipc-handlers/ado -name "*autofix*" -o -name "*triage*" 2>/dev/null
# If not, check what ADO handler files exist
echo -e "\n=== All ADO handler files ==="
find apps/frontend/src/main/ipc-handlers/ado -type f -name "*.ts" 2>/dev/null | xargs -I {} sh -c 'echo "=== {} ===" && head -20 {}'Repository: AndyMik90/Auto-Claude
Length of output: 3121
🏁 Script executed:
#!/bin/bash
# Search for any comments about phased rollout or future ADO features
echo "=== Searching for comments about ADO scope or phased rollout ==="
rg -n "phased|rollout|phase|future|TODO|future.*ado|ado.*future" --type ts --type tsx -B 3 -A 3 | head -100Repository: AndyMik90/Auto-Claude
Length of output: 174
🏁 Script executed:
#!/bin/bash
# Look at the full ADO PR handlers to see what's actually implemented
echo "=== Full ADO PR handlers ==="
cat apps/frontend/src/main/ipc-handlers/ado/pr-handlers.tsRepository: AndyMik90/Auto-Claude
Length of output: 10200
🏁 Script executed:
#!/bin/bash
# Look at the full ADO work item handlers
echo "=== Full ADO work item handlers (first 100 lines) ==="
head -100 apps/frontend/src/main/ipc-handlers/ado/workitem-handlers.tsRepository: AndyMik90/Auto-Claude
Length of output: 3260
🏁 Script executed:
#!/bin/bash
# Check if there are any other handler registration files
echo "=== Check main ipc-setup.ts for handler registration ==="
cat apps/frontend/src/main/ipc-setup.ts | head -150Repository: AndyMik90/Auto-Claude
Length of output: 2147
🏁 Script executed:
#!/bin/bash
# Check if any of the claimed "missing" ADO channels are referenced anywhere
echo "=== Search for ADO_AUTOFIX handlers ==="
rg "ADO_AUTOFIX|ADO_INVESTIGATE_WORK_ITEM|ADO_IMPORT_WORK_ITEMS" --type ts
echo -e "\n=== Search for ADO_TRIAGE or ADO_BATCH ==="
rg "ADO_TRIAGE|ADO_BATCH|ADO_ANALYZE_PREVIEW" --type tsRepository: AndyMik90/Auto-Claude
Length of output: 1228
🏁 Script executed:
#!/bin/bash
# Check GitHub handlers structure to compare
echo "=== GitHub autofix handlers (first 50 lines) ==="
head -50 apps/frontend/src/main/ipc-handlers/github/autofix-handlers.ts
echo -e "\n=== Check for GitHub triage handlers ==="
head -30 apps/frontend/src/main/ipc-handlers/github/triage-handlers.tsRepository: AndyMik90/Auto-Claude
Length of output: 2705
🏁 Script executed:
#!/bin/bash
# Search for any PR or README mentioning ADO phased rollout
echo "=== Searching in PR descriptions or metadata ==="
git log --all --oneline --grep="ADO\|Azure DevOps" 2>/dev/null | head -20
echo -e "\n=== Looking for ADO-related comments in code ==="
rg -i "ado.*phase|ado.*rollout|ado.*future|future.*ado" --type ts -B 2 -A 2Repository: AndyMik90/Auto-Claude
Length of output: 5845
🏁 Script executed:
#!/bin/bash
# Check if ADO_INVESTIGATE_WORK_ITEM and ADO_IMPORT_WORK_ITEMS are implemented
echo "=== Search for handlers of INVESTIGATE_WORK_ITEM and IMPORT_WORK_ITEMS ==="
rg "registerInvestigateWorkItem|registerImportWorkItems|ADO_INVESTIGATE_WORK_ITEM|ADO_IMPORT_WORK_ITEMS" apps/frontend/src/main/ipc-handlers/ado/ -n
echo -e "\n=== Check if these are registered ==="
cat apps/frontend/src/main/ipc-handlers/ado/workitem-handlers.ts | tail -50Repository: AndyMik90/Auto-Claude
Length of output: 1739
🏁 Script executed:
#!/bin/bash
# Check how many handler registration functions exist in workitem handlers
echo "=== Handler registrations in workitem-handlers.ts ==="
rg "^export function register" apps/frontend/src/main/ipc-handlers/ado/workitem-handlers.ts
echo -e "\n=== Handler registrations in pr-handlers.ts ==="
rg "^export function register" apps/frontend/src/main/ipc-handlers/ado/pr-handlers.tsRepository: AndyMik90/Auto-Claude
Length of output: 866
🏁 Script executed:
#!/bin/bash
# Count the declared ADO channels
echo "=== Count declared ADO channels ==="
rg "ADO_" apps/frontend/src/shared/constants/ipc.ts | wc -l
echo -e "\n=== Count implemented ADO handlers ==="
rg "IPC_CHANNELS\.ADO_" apps/frontend/src/main/ipc-handlers/ado/ | wc -l
echo -e "\n=== Compare GitHub declared vs implemented ==="
echo "GitHub declared:"
rg "GITHUB_" apps/frontend/src/shared/constants/ipc.ts | wc -l
echo "GitHub implemented:"
rg "IPC_CHANNELS\.GITHUB_" apps/frontend/src/main/ipc-handlers/github/ | wc -lRepository: AndyMik90/Auto-Claude
Length of output: 238
🏁 Script executed:
#!/bin/bash
# Get the exact count and list of declared vs implemented ADO channels
echo "=== Declared ADO channels ==="
rg "ADO_" apps/frontend/src/shared/constants/ipc.ts | cut -d: -f2 | sed 's/.*\(ADO_[A-Z_]*\).*/\1/' | sort | uniq
echo -e "\n=== Implemented ADO channel handlers ==="
rg "IPC_CHANNELS\.ADO_" apps/frontend/src/main/ipc-handlers/ado/ | sed 's/.*\(ADO_[A-Z_]*\).*/\1/' | sort | uniqRepository: AndyMik90/Auto-Claude
Length of output: 375
🏁 Script executed:
#!/bin/bash
# Check if autofix channels have any handlers or just TODOs
echo "=== Searching for any reference to autofix handlers ==="
find apps/frontend/src/main/ipc-handlers/ado -type f -exec grep -l "autofix" {} \;
echo -e "\n=== Checking if there's a separate autofix file ==="
ls -la apps/frontend/src/main/ipc-handlers/ado/Repository: AndyMik90/Auto-Claude
Length of output: 607
🏁 Script executed:
#!/bin/bash
# Check if any renderer/frontend code tries to call unimplemented ADO channels
echo "=== Searching for frontend calls to unimplemented ADO channels ==="
rg "ADO_INVESTIGATE_WORK_ITEM|ADO_IMPORT_WORK_ITEMS|ADO_AUTOFIX|ADO_PR_REVIEW[^_]|ADO_INVESTIGATION|ADO_PR_REVIEW_PROGRESS" apps/frontend/src/renderer/ apps/frontend/src/preload/ -n | head -30Repository: AndyMik90/Auto-Claude
Length of output: 652
🏁 Script executed:
#!/bin/bash
# Check the ado-api.ts preload module to see what it exposes
echo "=== Checking preload API module for ADO ==="
cat apps/frontend/src/preload/api/modules/ado-api.tsRepository: AndyMik90/Auto-Claude
Length of output: 8630
🏁 Script executed:
#!/bin/bash
# Check which ADO channels are declared but NOT implemented
echo "=== Declared but NOT implemented ADO channels ==="
echo "Declared channels:"
rg "ADO_" apps/frontend/src/shared/constants/ipc.ts | sed 's/.*\(ADO_[A-Z_]*\).*/\1/' | sort > /tmp/declared.txt
echo "Implemented channels:"
rg "IPC_CHANNELS\.ADO_" apps/frontend/src/main/ipc-handlers/ado/ | sed 's/.*\(ADO_[A-Z_]*\).*/\1/' | sort > /tmp/implemented.txt
echo "Missing from implementation:"
comm -23 /tmp/declared.txt /tmp/implemented.txtRepository: AndyMik90/Auto-Claude
Length of output: 638
🏁 Script executed:
#!/bin/bash
# Verify ADOWorkItems component to see if it uses any unimplemented channels
echo "=== ADOWorkItems component usage ==="
head -100 apps/frontend/src/renderer/components/ADOWorkItems.tsxRepository: AndyMik90/Auto-Claude
Length of output: 3211
ADO IPC channels: 21 declared but unimplemented handlers will cause runtime failures.
Verified: 34 ADO channels declared in ipc.ts, but only 13 have handler implementations. The following 21 channels have no handlers and will fail if invoked:
Work Items (2 missing):
ADO_INVESTIGATE_WORK_ITEM,ADO_IMPORT_WORK_ITEMS
Pull Requests (3 missing):
ADO_PR_REVIEW,ADO_PR_REVIEW_CANCEL,ADO_PR_GET_REVIEW
Events (6 missing):
ADO_INVESTIGATION_PROGRESS,ADO_INVESTIGATION_COMPLETE,ADO_INVESTIGATION_ERRORADO_PR_REVIEW_PROGRESS,ADO_PR_REVIEW_COMPLETE,ADO_PR_REVIEW_ERROR
Auto-Fix (all 10 missing):
ADO_AUTOFIX_START,ADO_AUTOFIX_STOP,ADO_AUTOFIX_GET_QUEUE,ADO_AUTOFIX_CHECK_TAGS,ADO_AUTOFIX_CHECK_NEW,ADO_AUTOFIX_GET_CONFIG,ADO_AUTOFIX_SAVE_CONFIG,ADO_AUTOFIX_PROGRESS,ADO_AUTOFIX_COMPLETE,ADO_AUTOFIX_ERROR
The preload API (ado-api.ts) exposes listener functions for these unimplemented channels, which will silently fail at runtime. Additionally, ADO lacks features present in GitHub and GitLab integrations (batch operations, triage, analysis preview), and uses CHECK_TAGS instead of CHECK_LABELS unlike other providers.
Implement handlers for all declared channels or remove the declarations to prevent runtime failures. If this is a phased rollout, document it with TODO comments and explicit scope notes.
- Add ADO to sidebar navigation (adoWorkItems, adoPRs views) - Add SidebarView types for ado-work-items and ado-pull-requests - Add ADO nav items with Cloud and ListTodo icons - Update visibleNavItems to show ADO when adoEnabled - Add ADO fields to ProjectEnvConfig type (adoEnabled, adoOrganization, etc) - Update env-handlers to read/write ADO config to .env file - Add ADO_ENV_KEYS constant for environment variable names - Support ADO section in generated .env content - Wire ADOWorkItems component into App.tsx view routing - Update handleSetupComplete to persist settings via updateProjectEnv - Add translations for ADO nav items (en/fr) Users can now: 1. Configure ADO connection via setup modal 2. Settings persist to project .env file 3. Navigate to ADO Work Items and PRs views from sidebar
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @apps/frontend/src/renderer/components/ADOWorkItems.tsx:
- Around line 148-150: An arbitrary setTimeout is being used to wait for
settings to be saved before refreshing; replace this fragile timing hack by
chaining off the actual promise that completes the save/check operation and call
handleRefresh() only after that promise resolves (for example, await the
saveSettings or the promise returned by checkConnection()), or refactor
checkConnection() to return the new connection status and use that result
instead of relying on isConnected (which updates asynchronously); alternatively,
call fetchWorkItems() unconditionally after the awaited operation completes to
ensure the UI refreshes reliably without relying on setTimeout.
In @apps/frontend/src/renderer/components/Sidebar.tsx:
- Around line 98-102: The ADO nav item for pull requests uses the ListTodo icon
which can be confusing; update the adoNavItems array so the item with id
'ado-pull-requests' uses the GitPullRequest icon instead of ListTodo (leave the
'ado-work-items' item using Cloud unchanged) to match GitHub PR visuals and
improve consistency.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (7)
apps/frontend/src/main/ipc-handlers/env-handlers.tsapps/frontend/src/renderer/App.tsxapps/frontend/src/renderer/components/ADOWorkItems.tsxapps/frontend/src/renderer/components/Sidebar.tsxapps/frontend/src/shared/i18n/locales/en/navigation.jsonapps/frontend/src/shared/i18n/locales/fr/navigation.jsonapps/frontend/src/shared/types/project.ts
🧰 Additional context used
📓 Path-based instructions (4)
apps/frontend/src/shared/i18n/locales/**/*.json
📄 CodeRabbit inference engine (CLAUDE.md)
apps/frontend/src/shared/i18n/locales/**/*.json: Store translation strings in namespace-organized JSON files atapps/frontend/src/shared/i18n/locales/{lang}/*.jsonfor each supported language
When implementing new frontend features, add translation keys to all language files (minimum: en/.json and fr/.json)
Files:
apps/frontend/src/shared/i18n/locales/fr/navigation.jsonapps/frontend/src/shared/i18n/locales/en/navigation.json
apps/frontend/src/**/*.{ts,tsx,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Always use i18n translation keys for all user-facing text in the frontend instead of hardcoded strings
Files:
apps/frontend/src/shared/types/project.tsapps/frontend/src/renderer/components/Sidebar.tsxapps/frontend/src/renderer/components/ADOWorkItems.tsxapps/frontend/src/main/ipc-handlers/env-handlers.tsapps/frontend/src/renderer/App.tsx
apps/frontend/src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use
useTranslation()hook with namespace prefixes (e.g., 'navigation:items.key') for accessing translation strings in React components
Files:
apps/frontend/src/shared/types/project.tsapps/frontend/src/renderer/components/Sidebar.tsxapps/frontend/src/renderer/components/ADOWorkItems.tsxapps/frontend/src/main/ipc-handlers/env-handlers.tsapps/frontend/src/renderer/App.tsx
apps/frontend/**/*.{ts,tsx}
⚙️ CodeRabbit configuration file
apps/frontend/**/*.{ts,tsx}: Review React patterns and TypeScript type safety.
Check for proper state management and component composition.
Files:
apps/frontend/src/shared/types/project.tsapps/frontend/src/renderer/components/Sidebar.tsxapps/frontend/src/renderer/components/ADOWorkItems.tsxapps/frontend/src/main/ipc-handlers/env-handlers.tsapps/frontend/src/renderer/App.tsx
🧠 Learnings (5)
📚 Learning: 2025-12-30T16:38:36.314Z
Learnt from: CR
Repo: AndyMik90/Auto-Claude PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-30T16:38:36.314Z
Learning: Applies to apps/frontend/src/shared/i18n/locales/**/*.json : When implementing new frontend features, add translation keys to all language files (minimum: en/*.json and fr/*.json)
Applied to files:
apps/frontend/src/shared/i18n/locales/fr/navigation.jsonapps/frontend/src/renderer/components/ADOWorkItems.tsxapps/frontend/src/shared/i18n/locales/en/navigation.json
📚 Learning: 2025-12-30T16:38:36.314Z
Learnt from: CR
Repo: AndyMik90/Auto-Claude PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-30T16:38:36.314Z
Learning: Applies to apps/frontend/src/shared/i18n/locales/**/*.json : Store translation strings in namespace-organized JSON files at `apps/frontend/src/shared/i18n/locales/{lang}/*.json` for each supported language
Applied to files:
apps/frontend/src/shared/i18n/locales/fr/navigation.json
📚 Learning: 2026-01-04T23:59:45.209Z
Learnt from: MikeeBuilds
Repo: AndyMik90/Auto-Claude PR: 661
File: apps/frontend/src/renderer/components/onboarding/OllamaModelSelector.tsx:176-189
Timestamp: 2026-01-04T23:59:45.209Z
Learning: In the AndyMik90/Auto-Claude repository, pre-existing i18n issues (hardcoded user-facing strings) can be deferred for future i18n cleanup passes. Do not fix such issues in PRs that do not introduce new i18n violations, especially in frontend TSX components (e.g., apps/frontend/**/*.tsx). If a PR adds new i18n violations, address them in that PR.
Applied to files:
apps/frontend/src/renderer/components/Sidebar.tsxapps/frontend/src/renderer/components/ADOWorkItems.tsxapps/frontend/src/renderer/App.tsx
📚 Learning: 2025-12-30T16:38:36.314Z
Learnt from: CR
Repo: AndyMik90/Auto-Claude PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-30T16:38:36.314Z
Learning: Applies to apps/frontend/src/**/*.{ts,tsx,jsx} : Always use i18n translation keys for all user-facing text in the frontend instead of hardcoded strings
Applied to files:
apps/frontend/src/renderer/components/ADOWorkItems.tsx
📚 Learning: 2025-12-30T16:38:36.314Z
Learnt from: CR
Repo: AndyMik90/Auto-Claude PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-30T16:38:36.314Z
Learning: Applies to apps/frontend/src/**/*.{ts,tsx} : Use `useTranslation()` hook with namespace prefixes (e.g., 'navigation:items.key') for accessing translation strings in React components
Applied to files:
apps/frontend/src/renderer/components/ADOWorkItems.tsx
🧬 Code graph analysis (2)
apps/frontend/src/renderer/components/Sidebar.tsx (1)
apps/frontend/src/renderer/components/settings/utils/hookProxyFactory.ts (1)
envConfig(23-23)
apps/frontend/src/renderer/App.tsx (1)
apps/frontend/src/renderer/components/ADOWorkItems.tsx (1)
ADOWorkItems(41-379)
🪛 ast-grep (0.40.4)
apps/frontend/src/renderer/components/ADOWorkItems.tsx
[warning] 475-475: Usage of dangerouslySetInnerHTML detected. This bypasses React's built-in XSS protection. Always sanitize HTML content using libraries like DOMPurify before injecting it into the DOM to prevent XSS attacks.
Context: dangerouslySetInnerHTML
Note: [CWE-79] Improper Neutralization of Input During Web Page Generation [REFERENCES]
- https://reactjs.org/docs/dom-elements.html#dangerouslysetinnerhtml
- https://cwe.mitre.org/data/definitions/79.html
(react-unsafe-html-injection)
🪛 Biome (2.1.2)
apps/frontend/src/renderer/components/ADOWorkItems.tsx
[error] 292-298: Provide an explicit type prop for the button element.
The default type of a button is submit, which causes the submission of a form when placed inside a form element. This is likely not the behaviour that you want inside a React application.
Allowed button types are: submit, button or reset
(lint/a11y/useButtonType)
[error] 476-476: Avoid passing content using the dangerouslySetInnerHTML prop.
Setting content using code can expose users to cross-site scripting (XSS) attacks
(lint/security/noDangerouslySetInnerHtml)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: CodeQL (python)
- GitHub Check: CodeQL (javascript-typescript)
🔇 Additional comments (14)
apps/frontend/src/shared/i18n/locales/fr/navigation.json (1)
18-19: LGTM! French translations added correctly.The ADO navigation item translations follow the established pattern and are consistent with existing GitHub/GitLab entries.
apps/frontend/src/shared/i18n/locales/en/navigation.json (1)
18-19: LGTM! English translations added correctly.The ADO navigation item translations are properly aligned with the French locale and follow the established pattern for GitHub/GitLab integrations.
apps/frontend/src/renderer/App.tsx (2)
43-43: LGTM! ADOWorkItems component imported correctly.The import follows the established pattern for other integration components (GitHubIssues, GitLabIssues, etc.).
805-812: LGTM! ADO Work Items view integrated correctly.The implementation follows the established pattern for GitHub and GitLab integrations, with proper conditional rendering and settings callback.
apps/frontend/src/shared/types/project.ts (1)
321-327: LGTM! ADO configuration fields properly typed.The Azure DevOps integration fields follow the established pattern for GitHub and GitLab integrations, with appropriate types and optional fields for flexible configuration.
apps/frontend/src/renderer/components/Sidebar.tsx (2)
57-57: LGTM! SidebarView type extended correctly.The type now includes both ADO views, maintaining type safety across the sidebar navigation system.
147-164: LGTM! ADO navigation integration implemented correctly.The conditional rendering logic and memoization dependencies are properly configured to show/hide ADO navigation items based on the
adoEnabledflag, following the same pattern as GitHub and GitLab integrations.apps/frontend/src/main/ipc-handlers/env-handlers.ts (5)
23-32: LGTM!The
ADO_ENV_KEYSconstant follows the established pattern used byGITLAB_ENV_KEYS. The key naming convention is consistent and the use ofas constfor type safety is appropriate.
146-164: LGTM!The ADO integration block in
generateEnvContentcorrectly mirrors the GitLab pattern. Each field is conditionally written to the env vars map only when defined, which preserves existing values appropriately.
292-301: LGTM!The
.envtemplate section for Azure DevOps follows the established pattern used by other integrations. The default value forADO_INSTANCE_URL(https://dev.azure.com) is the correct public Azure DevOps endpoint.
414-414: LGTM!Default
adoEnabled: falseis consistent with the approach for other optional integrations.
489-507: LGTM!The ADO config parsing logic correctly mirrors the GitLab pattern:
- PAT presence triggers auto-enable unless explicitly disabled
- All fields are conditionally populated when present in the
.envfile- Instance URL fallback is handled at the generation level
apps/frontend/src/renderer/components/ADOWorkItems.tsx (2)
41-154: Component logic and state management are well-structured.The separation of concerns with
checkConnection,fetchWorkItems, andhandleSetupCompletecallbacks is clean. TheuseMemofor filtering and theuseEffecthooks for data loading follow React best practices.
381-483: WorkItemDetail component is well-organized.The detail view properly displays work item metadata, handles optional fields conditionally, and provides an external link to ADO. The layout with metadata grid and tags section is clean.
| {activeView === 'ado-pull-requests' && (activeProjectId || selectedProjectId) && ( | ||
| <div className="flex-1 flex items-center justify-center text-muted-foreground"> | ||
| ADO Pull Requests view coming soon | ||
| </div> | ||
| )} |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Replace hardcoded string with i18n translation key.
The placeholder message violates the i18n coding guideline that requires all user-facing text to use translation keys.
📝 Proposed fix
Add a translation key to both locale files:
apps/frontend/src/shared/i18n/locales/en/common.json (or appropriate namespace):
+ "adoPullRequestsComingSoon": "ADO Pull Requests view coming soon"apps/frontend/src/shared/i18n/locales/fr/common.json:
+ "adoPullRequestsComingSoon": "Vue des PRs Azure DevOps à venir"Then update the component:
{activeView === 'ado-pull-requests' && (activeProjectId || selectedProjectId) && (
<div className="flex-1 flex items-center justify-center text-muted-foreground">
- ADO Pull Requests view coming soon
+ {t('common:adoPullRequestsComingSoon')}
</div>
)}As per coding guidelines, always use i18n translation keys for all user-facing text in the frontend.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| {activeView === 'ado-pull-requests' && (activeProjectId || selectedProjectId) && ( | |
| <div className="flex-1 flex items-center justify-center text-muted-foreground"> | |
| ADO Pull Requests view coming soon | |
| </div> | |
| )} | |
| {activeView === 'ado-pull-requests' && (activeProjectId || selectedProjectId) && ( | |
| <div className="flex-1 flex items-center justify-center text-muted-foreground"> | |
| {t('common:adoPullRequestsComingSoon')} | |
| </div> | |
| )} |
| setTimeout(() => { | ||
| handleRefresh(); | ||
| }, 100); |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
Avoid arbitrary setTimeout for coordination.
Using setTimeout(..., 100) to wait for settings to be saved before refreshing is fragile and may cause race conditions on slower systems. Consider using the promise resolution directly.
♻️ Proposed fix
await window.electronAPI.updateProjectEnv(selectedProject.id, {
adoEnabled: true,
adoOrganization: settings.adoOrganization,
adoProject: settings.adoProject,
adoRepoName: settings.adoRepoName,
adoPat: settings.adoPat,
adoInstanceUrl: settings.adoInstanceUrl,
});
setShowSetupModal(false);
- // Re-check connection after settings are saved
- setTimeout(() => {
- handleRefresh();
- }, 100);
+ // Re-check connection after settings are saved
+ await checkConnection();
+ if (isConnected) {
+ await fetchWorkItems();
+ }
} catch (error) {
console.error('Failed to save ADO settings:', error);
}Note: Since isConnected won't reflect the new state immediately after checkConnection() due to React's async state updates, you may need to call fetchWorkItems() unconditionally or refactor to return the connection status from checkConnection().
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @apps/frontend/src/renderer/components/ADOWorkItems.tsx around lines 148 -
150, An arbitrary setTimeout is being used to wait for settings to be saved
before refreshing; replace this fragile timing hack by chaining off the actual
promise that completes the save/check operation and call handleRefresh() only
after that promise resolves (for example, await the saveSettings or the promise
returned by checkConnection()), or refactor checkConnection() to return the new
connection status and use that result instead of relying on isConnected (which
updates asynchronously); alternatively, call fetchWorkItems() unconditionally
after the awaited operation completes to ensure the UI refreshes reliably
without relying on setTimeout.
| // Azure DevOps nav items shown when ADO is enabled | ||
| const adoNavItems: NavItem[] = [ | ||
| { id: 'ado-work-items', labelKey: 'navigation:items.adoWorkItems', icon: Cloud, shortcut: 'O' }, | ||
| { id: 'ado-pull-requests', labelKey: 'navigation:items.adoPRs', icon: ListTodo, shortcut: 'V' } | ||
| ]; |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🧹 Nitpick | 🔵 Trivial
ADO navigation items configured correctly.
The implementation follows the established pattern for GitHub and GitLab integrations with appropriate keyboard shortcuts.
Optional: Consider using GitPullRequest icon for ADO PRs
The ListTodo icon for ADO pull requests might be confusing since pull requests aren't task lists. The GitPullRequest icon (already imported and used for GitHub PRs) could provide better visual consistency:
const adoNavItems: NavItem[] = [
{ id: 'ado-work-items', labelKey: 'navigation:items.adoWorkItems', icon: Cloud, shortcut: 'O' },
- { id: 'ado-pull-requests', labelKey: 'navigation:items.adoPRs', icon: ListTodo, shortcut: 'V' }
+ { id: 'ado-pull-requests', labelKey: 'navigation:items.adoPRs', icon: GitPullRequest, shortcut: 'V' }
];This would align ADO PRs visually with GitHub PRs while keeping the Cloud icon distinctive for ADO work items.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // Azure DevOps nav items shown when ADO is enabled | |
| const adoNavItems: NavItem[] = [ | |
| { id: 'ado-work-items', labelKey: 'navigation:items.adoWorkItems', icon: Cloud, shortcut: 'O' }, | |
| { id: 'ado-pull-requests', labelKey: 'navigation:items.adoPRs', icon: ListTodo, shortcut: 'V' } | |
| ]; | |
| // Azure DevOps nav items shown when ADO is enabled | |
| const adoNavItems: NavItem[] = [ | |
| { id: 'ado-work-items', labelKey: 'navigation:items.adoWorkItems', icon: Cloud, shortcut: 'O' }, | |
| { id: 'ado-pull-requests', labelKey: 'navigation:items.adoPRs', icon: GitPullRequest, shortcut: 'V' } | |
| ]; |
🤖 Prompt for AI Agents
In @apps/frontend/src/renderer/components/Sidebar.tsx around lines 98 - 102, The
ADO nav item for pull requests uses the ListTodo icon which can be confusing;
update the adoNavItems array so the item with id 'ado-pull-requests' uses the
GitPullRequest icon instead of ListTodo (leave the 'ado-work-items' item using
Cloud unchanged) to match GitHub PR visuals and improve consistency.
Base Branch
developbranch (required for all feature/fix PRs)main(hotfix only - maintainers)Description
Related Issue
Closes #
Type of Change
Area
Commit Message Format
Follow conventional commits:
<type>: <subject>Types: feat, fix, docs, style, refactor, test, chore
Example:
feat: add user authentication systemChecklist
developbranchCI/Testing Requirements
Screenshots
Feature Toggle
use_feature_nameBreaking Changes
Breaking: Yes / No
Details:
Summary by CodeRabbit
New Features
Documentation
Chores
✏️ Tip: You can customize this high-level summary in your review settings.