feat(conda): add conda environment management with Windows compatibility #1052
Conversation
- Add comprehensive branching strategy documentation
- Explain main, develop, feature, fix, release, and hotfix branches
- Clarify that all PRs should target develop (not main)
- Add release process documentation for maintainers
- Update PR process to branch from develop
- Expand table of contents with new sections
* refactor: restructure project to Apps/frontend and Apps/backend - Move auto-claude-ui to Apps/frontend with feature-based architecture - Move auto-claude to Apps/backend - Switch from pnpm to npm for frontend - Update Node.js requirement to v24.12.0 LTS - Add pre-commit hooks for lint, typecheck, and security audit - Add commit-msg hook for conventional commits - Fix CommonJS compatibility issues (postcss.config, postinstall scripts) - Update README with comprehensive setup and contribution guidelines - Configure ESLint to ignore .cjs files - 0 npm vulnerabilities Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com> * feat(refactor): clean code and move to npm * feat(refactor): clean code and move to npm * chore: update to v2.7.0, remove Docker deps (LadybugDB is embedded) * feat: v2.8.0 - update workflows and configs for Apps/ structure, npm * fix: resolve Python lint errors (F401, I001) * fix: update test paths for Apps/backend structure * fix: add missing facade files and update paths for Apps/backend structure - Fix ruff lint error I001 in auto_claude_tools.py - Create missing facade files to match upstream (agent, ci_discovery, critique, etc.) - Update test paths from auto-claude/ to Apps/backend/ - Update .pre-commit-config.yaml paths for Apps/ structure - Add pytest to pre-commit hooks (skip slow/integration/Windows-incompatible tests) - Fix Unicode encoding in test_agent_architecture.py for Windows Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com> * feat: improve readme * fix: new path * fix: correct release workflow and docs for Apps/ restructure - Fix ARM64 macOS build: pnpm → npm, auto-claude-ui → Apps/frontend - Fix artifact upload paths in release.yml - Update Node.js version to 24 for consistency - Update CLI-USAGE.md with Apps/backend paths - Update RELEASE.md with Apps/frontend/package.json paths 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * refactor: rename Apps/ to apps/ and fix backend path resolution - Rename Apps/ folder to apps/ for consistency with JS/Node conventions - Update all path references across CI/CD workflows, docs, and config files - Fix frontend Python path resolver to look for 'backend' instead of 'auto-claude' - Update path-resolver.ts to correctly find apps/backend in development mode This completes the Apps restructure from PR AndyMik90#122 and prepares for v2.8.0 release. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(electron): correct preload script path from .js to .mjs electron-vite builds the preload script as ESM (index.mjs) but the main process was looking for CommonJS (index.js). This caused the preload to fail silently, making the app fall back to browser mock mode with fake data and non-functional IPC handlers. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * - Introduced `dev:debug` script to enable debugging during development. - Added `dev:mcp` script for running the frontend in MCP mode. These enhancements streamline the development process for frontend developers. 
* refactor(memory): make Graphiti memory mandatory and remove Docker dependency Memory is now a core component of Auto Claude rather than optional: - Python 3.12+ is required for the backend (not just memory layer) - Graphiti is enabled by default in .env.example - Removed all FalkorDB/Docker references (migrated to embedded LadybugDB) - Deleted guides/DOCKER-SETUP.md and docker-handlers.ts - Updated onboarding UI to remove "optional" language - Updated all documentation to reflect LadybugDB architecture 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * feat: add cross-platform Windows support for npm scripts - Add scripts/install-backend.js for cross-platform Python venv setup - Auto-detects Python 3.12 (py -3.12 on Windows, python3.12 on Unix) - Handles platform-specific venv paths - Add scripts/test-backend.js for cross-platform pytest execution - Update package.json to use Node.js scripts instead of shell commands - Update CONTRIBUTING.md with correct paths and instructions: - apps/backend/ and apps/frontend/ paths - Python 3.12 requirement (memory system now required) - Platform-specific install commands (winget, brew, apt) - npm instead of pnpm - Quick Start section with npm run install:all 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * remove doc * fix(frontend): correct Ollama detector script path after apps restructure The Ollama status check was failing because memory-handlers.ts was looking for ollama_model_detector.py at auto-claude/ but the script is now at apps/backend/ after the directory restructure. This caused "Ollama not running" to display even when Ollama was actually running and accessible. * chore: bump version to 2.7.2 Downgrade version from 2.8.0 to 2.7.2 as the Apps/ restructure is better suited as a patch release rather than a minor release. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * chore: update package-lock.json for Windows compatibility 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * docs(contributing): add hotfix workflow and update paths for apps/ structure Add Git Flow hotfix workflow documentation with step-by-step guide and ASCII diagram showing the branching strategy. Update all paths from auto-claude/auto-claude-ui to apps/backend/apps/frontend and migrate package manager references from pnpm to npm to match the new project structure. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(ci): remove duplicate ARM64 build from Intel runner The Intel runner was building both x64 and arm64 architectures, while a separate ARM64 runner also builds arm64 natively. This caused duplicate ARM64 builds, wasting CI resources. Now each runner builds only its native architecture: - Intel runner: x64 only - ARM64 runner: arm64 only 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> --------- Co-authored-by: Alex Madera <[email protected]> Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com> Co-authored-by: Claude Opus 4.5 <[email protected]>
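The cross-platform installer described above hinges on probing for a suitable interpreter before creating the venv. A minimal sketch of that detection, assuming hypothetical names (findPython312 and the candidate list) rather than the actual contents of scripts/install-backend.js:

```ts
// Hedged sketch: probe platform-specific commands for Python 3.12.
// Candidate order and names are assumptions, not the real install script.
import { spawnSync } from 'node:child_process';

interface Candidate { cmd: string; args: string[] }

const candidates: Candidate[] =
  process.platform === 'win32'
    ? [{ cmd: 'py', args: ['-3.12'] }, { cmd: 'python', args: [] }]
    : [{ cmd: 'python3.12', args: [] }, { cmd: 'python3', args: [] }];

function findPython312(): Candidate | null {
  for (const c of candidates) {
    // `--version` may print to stdout or stderr depending on the interpreter.
    const res = spawnSync(c.cmd, [...c.args, '--version'], { encoding: 'utf8' });
    if (res.status === 0 && /Python 3\.12\./.test(`${res.stdout}${res.stderr}`)) {
      return c;
    }
  }
  return null;
}
```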
…Mik90#141) * feat(ollama): add real-time download progress tracking for model downloads Implement comprehensive download progress tracking with: - NDJSON parsing for streaming progress data from Ollama API - Real-time speed calculation (MB/s, KB/s, B/s) with useRef for delta tracking - Time remaining estimation based on download speed - Animated progress bars in OllamaModelSelector component - IPC event streaming from main process to renderer - Proper listener management with cleanup functions Changes: - memory-handlers.ts: Parse NDJSON from Ollama stderr, emit progress events - OllamaModelSelector.tsx: Display progress bars with speed and time remaining - project-api.ts: Implement onDownloadProgress listener with cleanup - ipc.ts types: Define onDownloadProgress listener interface - infrastructure-mock.ts: Add mock implementation for browser testing This allows users to see real-time feedback when downloading Ollama models, including percentage complete, current download speed, and estimated time remaining. * test: add focused test coverage for Ollama download progress feature Add unit tests for the critical paths of the real-time download progress tracking: - Progress calculation tests (52 tests): Speed/time/percentage calculations with comprehensive edge case coverage (zero speeds, NaN, Infinity, large numbers) - NDJSON parser tests (33 tests): Streaming JSON parsing from Ollama, buffer management for incomplete lines, error handling All 562 unit tests passing with clean dependencies. Tests focus on critical mathematical logic and data processing - the most important paths that need verification. Test coverage: ✅ Speed calculation and formatting (B/s, KB/s, MB/s) ✅ Time remaining calculations (seconds, minutes, hours) ✅ Percentage clamping (0-100%) ✅ NDJSON streaming with partial line buffering ✅ Invalid JSON handling ✅ Real Ollama API responses ✅ Multi-chunk streaming scenarios * docs: add comprehensive JSDoc docstrings for Ollama download progress feature - Enhanced OllamaModelSelector component with detailed JSDoc * Documented component props, behavior, and usage examples * Added docstrings to internal functions (checkInstalledModels, handleDownload, handleSelect) * Explained progress tracking algorithm and useRef usage - Improved memory-handlers.ts documentation * Added docstring to main registerMemoryHandlers function * Documented all Ollama-related IPC handlers (check-status, list-embedding-models, pull-model) * Added JSDoc to executeOllamaDetector helper function * Documented interface types (OllamaStatus, OllamaModel, OllamaEmbeddingModel, OllamaPullResult) * Explained NDJSON parsing and progress event structure - Enhanced test file documentation * Added docstrings to NDJSON parser test utilities with algorithm explanation * Documented all calculation functions (speed, time, percentage) * Added detailed comments on formatting and bounds-checking logic - Improved overall code maintainability * Docstring coverage now meets 80%+ threshold for code review * Clear explanation of progress tracking implementation details * Better context for future maintainers working with download streaming * feat: add batch task creation and management CLI commands - Handle batch task creation from JSON files - Show status of all specs in project - Cleanup tool for completed specs - Full integration with new apps/backend structure - Compatible with implementation_plan.json workflow * test: add batch task test file and testing checklist - batch_test.json: Sample tasks for testing batch creation - 
TESTING_CHECKLIST.md: Comprehensive testing guide for Ollama and batch tasks - Includes UI testing steps, CLI testing steps, and edge cases - Ready for manual and automated testing * chore: update package-lock.json to match v2.7.2 * test: update checklist with verification results and architecture validation * docs: add comprehensive implementation summary for Ollama + Batch features * docs: add comprehensive Phase 2 testing guide with checklists and procedures * docs: add NEXT_STEPS guide for Phase 2 testing * fix: resolve merge conflict in project-api.ts from Ollama feature cherry-pick * fix: remove duplicate Ollama check status handler registration * test: update checklist with Phase 2 bug findings and fixes --------- Co-authored-by: ray <[email protected]>
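The download-progress feature above rests on two small pieces of logic: buffering NDJSON chunks from the Ollama stream and computing a speed from byte deltas. A hedged TypeScript sketch — parseNdjsonChunk, speedBytesPerSec, and the ProgressLine shape are illustrative names, not the real memory-handlers.ts code:

```ts
// Illustrative NDJSON buffering for streamed Ollama pull progress.
interface ProgressLine {
  status?: string;
  total?: number;
  completed?: number;
}

// Appends a raw chunk to the carry-over buffer, emits every complete line as
// parsed JSON, and returns the trailing partial line to buffer for next time.
function parseNdjsonChunk(
  buffer: string,
  chunk: string
): { events: ProgressLine[]; rest: string } {
  const lines = (buffer + chunk).split('\n');
  const rest = lines.pop() ?? ''; // last element may be an incomplete line
  const events: ProgressLine[] = [];
  for (const line of lines) {
    if (!line.trim()) continue;
    try {
      events.push(JSON.parse(line));
    } catch {
      // Invalid JSON lines are skipped rather than aborting the stream.
    }
  }
  return { events, rest };
}

// Speed is the byte delta between samples divided by elapsed time.
function speedBytesPerSec(prevCompleted: number, completed: number, elapsedMs: number): number {
  return elapsedMs > 0 ? ((completed - prevCompleted) / elapsedMs) * 1000 : 0;
}
```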
Implemented promise queue pattern in PythonEnvManager to handle concurrent initialization requests. Previously, multiple simultaneous requests (e.g., startup + merge) would fail with an "Already initializing" error. Also fixed parsePythonCommand() to handle file paths with spaces by checking file existence before splitting on whitespace.

Changes:
- Added initializationPromise field to queue concurrent requests
- Split initialize() into a public method and a private _doInitialize()
- Enhanced parsePythonCommand() with an existsSync() check

Co-authored-by: Joris Slagter <[email protected]>
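A minimal sketch of the two fixes described above — the promise-queue guard and the existence check before splitting — with bodies that are illustrative rather than the actual PythonEnvManager implementation:

```ts
import { existsSync } from 'node:fs';

class PythonEnvManager {
  private initializationPromise: Promise<void> | null = null;

  // Concurrent callers share the same in-flight promise instead of failing
  // with "Already initializing".
  initialize(): Promise<void> {
    if (!this.initializationPromise) {
      this.initializationPromise = this._doInitialize().catch((err) => {
        this.initializationPromise = null; // allow a later retry after failure
        throw err;
      });
    }
    return this.initializationPromise;
  }

  private async _doInitialize(): Promise<void> {
    // ... create the venv, install dependencies, etc. (omitted in this sketch)
  }
}

// If the configured value is an existing file path (possibly containing
// spaces), use it verbatim; otherwise split it into command + args.
function parsePythonCommand(value: string): { cmd: string; args: string[] } {
  if (existsSync(value)) return { cmd: value, args: [] };
  const [cmd, ...args] = value.trim().split(/\s+/);
  return { cmd, args };
}
```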
) Removes the legacy 'auto-claude' path from the possiblePaths array in agent-process.ts. This path was from before the monorepo restructure (v2.7.2) and is no longer needed. The legacy path was causing spec_runner.py to be looked up at the wrong location:
- OLD (wrong): /path/to/auto-claude/auto-claude/runners/spec_runner.py
- NEW (correct): /path/to/apps/backend/runners/spec_runner.py

This aligns with the new monorepo structure where all backend code lives in apps/backend/.

Fixes AndyMik90#147

Co-authored-by: Joris Slagter <[email protected]>
* fix: Linear API authentication and GraphQL types - Remove Bearer prefix from Authorization header (Linear API keys are sent directly) - Change GraphQL variable types from String! to ID! for teamId and issue IDs - Improve error handling to show detailed Linear API error messages 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: Radix Select empty value error in Linear import modal Use '__all__' sentinel value instead of empty string for "All projects" option, as Radix Select does not allow empty string values. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * feat: add CodeRabbit configuration file Introduce a new .coderabbit.yaml file to configure CodeRabbit settings, including review profiles, automatic review options, path filters, and specific instructions for different file types. This enhances the code review process by providing tailored guidelines for Python, TypeScript, and test files. * fix: correct GraphQL types for Linear team queries Linear API uses different types for different queries: - team(id:) expects String! - issues(filter: { team: { id: { eq: } } }) expects ID! 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: refresh task list after Linear import Call loadTasks() after successful Linear import to update the kanban board without requiring a page reload. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * cleanup * cleanup * fix: address CodeRabbit review comments for Linear integration - Fix unsafe JSON parsing: check response.ok before parsing JSON to handle non-JSON error responses (e.g., 503 from proxy) gracefully - Use ID! type instead of String! for teamId in LINEAR_GET_PROJECTS query for GraphQL type consistency - Remove debug console.log (ESLint config only allows warn/error) - Refresh task list on partial import success (imported > 0) instead of requiring full success - Fix pre-existing TypeScript and lint issues blocking commit 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * version sync logic * lints for develop branch * chore: update CI workflow to include develop branch - Modified the CI configuration to trigger on pushes and pull requests to both main and develop branches, enhancing the workflow for development and integration processes. * fix: update project directory auto-detection for apps/backend structure The project directory auto-detection was checking for the old `auto-claude/` directory name but needed to check for `apps/backend/`. When running from `apps/backend/`, the directory name is `backend` not `auto-claude`, so the check would fail and `project_dir` would incorrectly remain as `apps/backend/` instead of resolving to the project root (2 levels up). 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: use GraphQL variables instead of string interpolation in LINEAR_GET_ISSUES Replace direct string interpolation of teamId and linearProjectId with proper GraphQL variables. This prevents potential query syntax errors if IDs contain special characters like double quotes, and aligns with the variable-based approach used elsewhere in the file. 
🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(ui): correct logging level and await loadTasks on import complete - Change console.warn to console.log for import success messages (warn is incorrect severity for normal completion) - Make onImportComplete callback async and await loadTasks() to prevent potential unhandled promise rejections Applies CodeRabbit review feedback across 3 LinearTaskImportModal usages. * fix(hooks): use POSIX-compliant find instead of bash glob The pre-commit hook uses #!/bin/sh but had bash-specific ** glob pattern for staging ruff-formatted files. The ** pattern only works in bash with globstar enabled - in POSIX sh it expands literally and won't match subdirectories, causing formatted files in nested directories to not be staged. --------- Co-authored-by: Claude Opus 4.5 <[email protected]>
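For context on the Linear fixes in this entry, here is a hedged sketch of a variables-based request against the Linear GraphQL endpoint. The query text and field selection are assumptions, but the raw-API-key Authorization header, the ID! variable, and the response.ok check before JSON parsing mirror the commits above:

```ts
// Illustrative Linear GraphQL request; not the exact query from the PR.
const LINEAR_GET_ISSUES = `
  query Issues($teamId: ID!) {
    issues(filter: { team: { id: { eq: $teamId } } }) {
      nodes { id identifier title }
    }
  }
`;

async function fetchLinearIssues(apiKey: string, teamId: string) {
  const res = await fetch('https://api.linear.app/graphql', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Linear personal API keys are sent as-is, without a "Bearer " prefix.
      Authorization: apiKey,
    },
    body: JSON.stringify({ query: LINEAR_GET_ISSUES, variables: { teamId } }),
  });
  if (!res.ok) {
    // Check ok before parsing: proxies can return non-JSON bodies (e.g. a 503 page).
    throw new Error(`Linear API error: ${res.status} ${await res.text()}`);
  }
  return res.json();
}
```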
…_progress When a user drags a running task back to Planning (or any other column), the process was not being stopped, leaving a "ghost" process that prevented deletion with "Cannot delete a running task" error. Now the task process is automatically killed when status changes away from in_progress, ensuring the process state stays in sync with the UI.
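A small sketch of the guard this fix describes; the handler and process-manager names are hypothetical:

```ts
// Hedged sketch: kill the attached agent process when a task leaves in_progress.
type TaskStatus = 'backlog' | 'planning' | 'in_progress' | 'done';

interface ProcessManager {
  isRunning(taskId: string): boolean;
  kill(taskId: string): Promise<void>;
}

async function onTaskStatusChange(
  taskId: string,
  previous: TaskStatus,
  next: TaskStatus,
  agentProcessManager: ProcessManager
): Promise<void> {
  // Leaving in_progress while an agent process is still attached would orphan
  // it and later block deletion with "Cannot delete a running task".
  if (previous === 'in_progress' && next !== 'in_progress' && agentProcessManager.isRunning(taskId)) {
    await agentProcessManager.kill(taskId);
  }
}
```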
* feat: add UI scale feature
* refactor: extract UI scale bounds to shared constants
* fix: duplicated import
…90#154) * fix: analyzer Python compatibility and settings integration Fixes project index analyzer failing with TypeError on Python type hints. Changes: - Added 'from __future__ import annotations' to all analysis modules - Fixed project discovery to support new analyzer JSON format - Read Python path directly from settings.json instead of pythonEnvManager - Added stderr/stdout logging for analyzer debugging Resolves 'Discovered 0 files' and 'TypeError: unsupported operand type' issues. * auto-claude: subtask-1-1 - Hide status badge when execution phase badge is showing When a task has an active execution (planning, coding, etc.), the execution phase badge already displays the correct state with a spinner. The status badge was also rendering, causing duplicate/confusing badges (e.g., both "Planning" and "Pending" showing at the same time). This fix wraps the status badge in a conditional that only renders when there's no active execution, eliminating the redundant badge display. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(ipc): remove unused pythonEnvManager parameter and fix ES6 import Address CodeRabbit review feedback: - Remove unused pythonEnvManager parameter from registerProjectContextHandlers and registerContextHandlers (the code reads Python path directly from settings.json instead) - Replace require('electron').app with proper ES6 import for consistency 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * chore(lint): fix import sorting in analysis module Run ruff --fix to resolve I001 lint errors after merging develop. All 23 files in apps/backend/analysis/ now have properly sorted imports. --------- Co-authored-by: Joris Slagter <[email protected]> Co-authored-by: Claude Opus 4.5 <[email protected]>
* fix(core): add task persistence, terminal handling, and HTTP 300 fixes Consolidated bug fixes from PRs AndyMik90#168, AndyMik90#170, AndyMik90#171: - Task persistence (AndyMik90#168): Scan worktrees for tasks on app restart to prevent loss of in-progress work and wasted API credits. Tasks in .worktrees/*/specs are now loaded and deduplicated with main. - Terminal buttons (AndyMik90#170): Fix "Open Terminal" buttons silently failing on macOS by properly awaiting createTerminal() Promise. Added useTerminalHandler hook with loading states and error display. - HTTP 300 errors (AndyMik90#171): Handle branch/tag name collisions that cause update failures. Added validation script to prevent conflicts before releases and user-friendly error messages with manual download links. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(platform): add path resolution, spaces handling, and XDG support This commit consolidates multiple bug fixes from community PRs: - PR AndyMik90#187: Path resolution fix - Update path detection to find apps/backend instead of legacy auto-claude directory after v2.7.2 restructure - PR AndyMik90#182/AndyMik90#155: Python path spaces fix - Improve parsePythonCommand() to handle quoted paths and paths containing spaces without splitting - PR AndyMik90#161: Ollama detection fix - Add new apps structure paths for ollama_model_detector.py script discovery - PR AndyMik90#160: AppImage support - Add XDG Base Directory compliant paths for Linux sandboxed environments (AppImage, Flatpak, Snap). New files: - config-paths.ts: XDG path utilities - fs-utils.ts: Filesystem utilities with fallback support - PR AndyMik90#159: gh CLI PATH fix - Add getAugmentedEnv() utility to include common binary locations (Homebrew, snap, local) in PATH for child processes. Fixes gh CLI not found when app launched from Finder/Dock. 
🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: address CodeRabbit/Cursor review comments on PR AndyMik90#185 Fixes from code review: - http-client.ts: Use GITHUB_CONFIG instead of hardcoded owner in HTTP 300 error message - validate-release.js: Fix substring matching bug in branch detection that could cause false positives (e.g., v2.7 matching v2.7.2) - bump-version.js: Remove unnecessary try-catch wrapper (exec() already exits on failure) - execution-handlers.ts: Capture original subtask status before mutation for accurate logging - fs-utils.ts: Add error handling to safeWriteFile with proper logging Dismissed as trivial/not applicable: - config-paths.ts: Exhaustive switch check (over-engineering) - env-utils.ts: PATH priority documentation (existing comments sufficient) 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: address additional CodeRabbit review comments (round 2) Fixes from second round of code review: - fs-utils.ts: Wrap test file cleanup in try-catch for Windows file locking - fs-utils.ts: Add error handling to safeReadFile for consistency with safeWriteFile - http-client.ts: Use GITHUB_CONFIG in fetchJson (missed in first round) - validate-release.js: Exclude symbolic refs (origin/HEAD -> origin/main) from branch check - python-detector.ts: Return cleanPath instead of pythonPath for empty input edge case Dismissed as trivial/not applicable: - execution-handlers.ts: Redundant checkSubtasksCompletion call (micro-optimization) 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> --------- Co-authored-by: Claude Opus 4.5 <[email protected]>
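One of the consolidated fixes above, getAugmentedEnv(), amounts to appending well-known binary directories to PATH for child processes so tools like the gh CLI are found when the app is launched from Finder/Dock. An illustrative version (any directory beyond the Homebrew/snap/local locations named in the commit is an assumption):

```ts
// Hedged sketch of PATH augmentation for spawned child processes.
import path from 'node:path';

function getAugmentedEnv(): NodeJS.ProcessEnv {
  const extra = [
    '/opt/homebrew/bin',                                // Homebrew on Apple Silicon
    '/usr/local/bin',                                   // Homebrew on Intel macOS / common local installs
    '/snap/bin',                                        // snap packages on Linux
    path.join(process.env.HOME ?? '', '.local', 'bin'), // per-user local binaries
  ];
  const current = (process.env.PATH ?? '').split(path.delimiter);
  const merged = [...current, ...extra.filter((p) => !current.includes(p))];
  return { ...process.env, PATH: merged.join(path.delimiter) };
}
```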
* chore: update README version to 2.7.1 Updated the version badge and download links in the README to reflect the new release version 2.7.1, ensuring users have the correct information for downloading the latest builds. * feat(releases): add beta release system with user opt-in Implements a complete beta release workflow that allows users to opt-in to receiving pre-release versions. This enables testing new features before they're included in stable releases. Changes: - Add beta-release.yml workflow for creating beta releases from develop - Add betaUpdates setting with UI toggle in Settings > Updates - Add update channel support to electron-updater (beta vs latest) - Extract shared settings-utils.ts to reduce code duplication - Add prepare-release.yml workflow for automated release preparation - Document beta release process in CONTRIBUTING.md and RELEASE.md Users can enable beta updates in Settings > Updates, and maintainers can trigger beta releases via the GitHub Actions workflow. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * workflow update --------- Co-authored-by: Claude Opus 4.5 <[email protected]>
* chore: update README version to 2.7.1 Updated the version badge and download links in the README to reflect the new release version 2.7.1, ensuring users have the correct information for downloading the latest builds. * feat(releases): add beta release system with user opt-in Implements a complete beta release workflow that allows users to opt-in to receiving pre-release versions. This enables testing new features before they're included in stable releases. Changes: - Add beta-release.yml workflow for creating beta releases from develop - Add betaUpdates setting with UI toggle in Settings > Updates - Add update channel support to electron-updater (beta vs latest) - Extract shared settings-utils.ts to reduce code duplication - Add prepare-release.yml workflow for automated release preparation - Document beta release process in CONTRIBUTING.md and RELEASE.md Users can enable beta updates in Settings > Updates, and maintainers can trigger beta releases via the GitHub Actions workflow. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * workflow update * ci(github): update Discord link and redirect feature requests to discussions Update Discord invite link to correct URL (QhRnz9m5HE) across all GitHub templates and workflows. Redirect feature requests from issue template to GitHub Discussions for better community engagement. Changes: - config.yml: Add feature request link to Discussions, fix Discord URL - question.yml: Update Discord link in pre-question guidance - welcome.yml: Update Discord link in first-time contributor message --------- Co-authored-by: Claude Opus 4.5 <[email protected]>
- Change branch reference from main to develop
- Fix contribution guide link to use full URL
- Remove hyphen from "Auto Claude" in welcome message
…tup (AndyMik90#180 AndyMik90#167) (AndyMik90#208)

This fixes a critical bug where macOS users with the default Python 3.9.6 couldn't use Auto-Claude because claude-agent-sdk requires Python 3.10+.

Root Cause:
- Auto-Claude doesn't bundle Python; it relies on system Python
- python-detector.ts accepted any Python 3.x without checking a minimum version
- macOS ships with Python 3.9.6 by default (incompatible)
- GitHub Actions runners didn't explicitly set a Python version

Changes:
1. python-detector.ts:
   - Added getPythonVersion() to extract the version from a command
   - Added validatePythonVersion() to check if >= 3.10.0
   - Updated findPythonCommand() to skip Python < 3.10 with clear error messages
2. python-env-manager.ts:
   - Import and use findPythonCommand() (already has version validation)
   - Simplified findSystemPython() to use the shared validation logic
   - Updated error message from "Python 3.9+" to "Python 3.10+" with a download link
3. .github/workflows/release.yml:
   - Added Python 3.11 setup to all 4 build jobs (macOS Intel, macOS ARM64, Windows, Linux)
   - Ensures a consistent Python version across all platforms during build

Impact:
- macOS users with Python 3.9 now see a clear error with a download link
- macOS users with Python 3.10+ work normally
- CI/CD builds use a consistent Python 3.11
- Prevents "ModuleNotFoundError: dotenv" and dependency install failures

Fixes AndyMik90#180, AndyMik90#167

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Sonnet 4.5 <[email protected]>
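A hedged sketch of the version gate described above; getPythonVersion and validatePythonVersion are named in the commit, but these bodies are illustrative:

```ts
// Illustrative minimum-version check for a Python interpreter command.
import { spawnSync } from 'node:child_process';

function getPythonVersion(command: string): [number, number, number] | null {
  const res = spawnSync(command, ['--version'], { encoding: 'utf8' });
  const match = /Python (\d+)\.(\d+)\.(\d+)/.exec(`${res.stdout}${res.stderr}`);
  return match ? [Number(match[1]), Number(match[2]), Number(match[3])] : null;
}

function validatePythonVersion(
  version: [number, number, number],
  min: [number, number, number] = [3, 10, 0]
): boolean {
  for (let i = 0; i < 3; i++) {
    if (version[i] > min[i]) return true;
    if (version[i] < min[i]) return false;
  }
  return true; // exactly the minimum version
}
```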
* feat: Add OpenRouter as LLM/embedding provider Add OpenRouter provider support for Graphiti memory integration, enabling access to multiple LLM providers through a single API. Changes: Backend: - Created openrouter_llm.py: OpenRouter LLM provider using OpenAI-compatible API - Created openrouter_embedder.py: OpenRouter embedder provider - Updated config.py: Added OpenRouter to provider enums and configuration - New fields: openrouter_api_key, openrouter_base_url, openrouter_llm_model, openrouter_embedding_model - Validation methods updated for OpenRouter - Updated factory.py: Added OpenRouter to LLM and embedder factories - Updated provider __init__.py files: Exported new OpenRouter functions Frontend: - Updated project.ts types: Added 'openrouter' to provider type unions - GraphitiProviderConfig extended with OpenRouter fields - Updated GraphitiStep.tsx: Added OpenRouter to provider arrays - LLM_PROVIDERS: 'Multi-provider aggregator' - EMBEDDING_PROVIDERS: 'OpenAI-compatible embeddings' - Added OpenRouter API key input field with show/hide toggle - Link to https://openrouter.ai/keys - Updated env-handlers.ts: OpenRouter .env generation and parsing - Template generation for OPENROUTER_* variables - Parsing from .env files with proper type casting Documentation: - Updated .env.example with OpenRouter section - Configuration examples - Popular model recommendations - Example configuration (AndyMik90#6) Fixes AndyMik90#92 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <[email protected]> * refactor: address CodeRabbit review comments for OpenRouter - Add globalOpenRouterApiKey to settings types and store updates - Initialize openrouterApiKey from global settings - Update documentation to include OpenRouter in provider lists - Add OpenRouter handling to get_embedding_dimension() method - Add openrouter to provider cleanup list - Add OpenRouter to get_available_providers() function - Clarify Legacy comment for openrouterLlmModel These changes complete the OpenRouter integration by ensuring proper settings persistence and provider detection across the application. * fix: apply ruff formatting to OpenRouter code - Break long error message across multiple lines - Format provider list with one item per line - Fixes lint CI failure 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <[email protected]> --------- Co-authored-by: Claude Sonnet 4.5 <[email protected]>
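Since OpenRouter exposes an OpenAI-compatible API, the provider largely reduces to pointing an OpenAI-style client at a different base URL. A sketch using the OPENROUTER_* variables named above; the default model is only an example, and the helper names are assumptions:

```ts
// Hedged sketch of wiring an OpenAI-compatible client to OpenRouter.
interface OpenRouterConfig {
  apiKey: string;   // OPENROUTER_API_KEY
  baseUrl: string;  // OPENROUTER_BASE_URL
  llmModel: string; // OPENROUTER_LLM_MODEL
}

function openRouterConfigFromEnv(env: NodeJS.ProcessEnv): OpenRouterConfig {
  return {
    apiKey: env.OPENROUTER_API_KEY ?? '',
    baseUrl: env.OPENROUTER_BASE_URL ?? 'https://openrouter.ai/api/v1',
    llmModel: env.OPENROUTER_LLM_MODEL ?? 'anthropic/claude-3.5-sonnet', // example model
  };
}

async function chat(config: OpenRouterConfig, prompt: string) {
  // OpenRouter follows the OpenAI chat-completions wire format.
  const res = await fetch(`${config.baseUrl}/chat/completions`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${config.apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: config.llmModel,
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  return res.json();
}
```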
…Mik90#209) Implements distributed file-based locking for spec number coordination across main project and all worktrees. Previously, parallel spec creation could assign the same number to different specs (e.g., 042-bmad-task and 042-gitlab-integration both using number 042).

The fix adds SpecNumberLock class that:
- Acquires exclusive lock before calculating spec numbers
- Scans ALL locations (main project + worktrees) for global maximum
- Creates spec directories atomically within the lock
- Handles stale locks via PID-based detection with 30s timeout

Applied to both Python backend (spec_runner.py flow) and TypeScript frontend (ideation conversion, GitHub/GitLab issue import).

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.5 <[email protected]>
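The locking itself lives in the Python backend's SpecNumberLock; the TypeScript sketch below illustrates the same idea (exclusive lock file, PID-based staleness, 30s timeout) for the frontend-side paths mentioned above, with all names assumed:

```ts
// Illustrative exclusive lock file with PID-based stale-lock recovery.
import { openSync, closeSync, writeSync, readFileSync, statSync, unlinkSync } from 'node:fs';

const STALE_MS = 30_000;

function pidAlive(pid: number): boolean {
  try {
    process.kill(pid, 0); // signal 0 only checks existence/permission
    return true;
  } catch {
    return false;
  }
}

function acquireLock(lockPath: string): boolean {
  try {
    // 'wx' fails if the file already exists, giving us mutual exclusion.
    const fd = openSync(lockPath, 'wx');
    writeSync(fd, String(process.pid));
    closeSync(fd);
    return true;
  } catch {
    // Lock exists: treat it as stale if the owning PID is gone or it is old.
    try {
      const pid = Number(readFileSync(lockPath, 'utf8'));
      const ageMs = Date.now() - statSync(lockPath).mtimeMs;
      if (!pidAlive(pid) || ageMs > STALE_MS) {
        unlinkSync(lockPath);
        return acquireLock(lockPath); // retry after clearing the stale lock
      }
    } catch {
      /* another process may have cleaned it up already */
    }
    return false;
  }
}

function releaseLock(lockPath: string): void {
  try { unlinkSync(lockPath); } catch { /* already gone */ }
}
```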
* fix(ideation): add missing event forwarders for status sync - Add event forwarders in ideation-handlers.ts for progress, log, type-complete, type-failed, complete, error, and stopped events - Fix ideation-type-complete to load actual ideas array from JSON files instead of emitting only the count Resolves UI getting stuck at 0/3 complete during ideation generation. * fix(ideation): fix UI not updating after actions - Fix getIdeationSummary to count only active ideas (exclude dismissed/archived) This ensures header stats match the visible ideas count - Add transformSessionFromSnakeCase to properly transform session data from backend snake_case to frontend camelCase on ideation-complete event - Transform raw session before emitting ideation-complete event Resolves header showing stale counts after dismissing/deleting ideas. * fix(ideation): improve type safety and async handling in ideation type completion - Replace synchronous readFileSync with async fsPromises.readFile in ideation-type-complete handler - Wrap async file read in IIFE with proper error handling to prevent unhandled promise rejections - Add type validation for IdeationType with VALID_IDEATION_TYPES set and isValidIdeationType guard - Add validateEnabledTypes function to filter out invalid type values and log dropped entries - Handle ENOENT separately * fix(ideation): improve generation state management and error handling - Add explicit isGenerating flag to prevent race conditions during async operations - Implement 5-minute timeout for generation with automatic cleanup and error state - Add ideation-stopped event emission when process is intentionally killed - Replace console.warn/error with proper ideation-error events in agent-queue - Add resetGeneratingTypes helper to transition all generating types to a target state - Filter out dismissed/ * refactor(ideation): improve event listener cleanup and timeout management - Extract event handler functions in ideation-handlers.ts to enable proper cleanup - Return cleanup function from registerIdeationHandlers to remove all listeners - Replace single generationTimeoutId with Map to support multiple concurrent projects - Add clearGenerationTimeout helper to centralize timeout cleanup logic - Extract loadIdeationType IIFE to named function for better error context - Enhance error logging with projectId, * refactor: use async file read for ideation and roadmap session loading - Replace synchronous readFileSync with async fsPromises.readFile - Prevents blocking the event loop during file operations - Consistent with async pattern used elsewhere in the codebase - Improved error handling with proper event emission * fix(agent-queue): improve roadmap completion handling and error reporting - Add transformRoadmapFromSnakeCase to convert backend snake_case to frontend camelCase - Transform raw roadmap data before emitting roadmap-complete event - Add roadmap-error emission for unexpected errors during completion - Add roadmap-error emission when project path is unavailable - Remove duplicate ideation-type-complete emission from error handler (event already emitted in loadIdeationType) - Update error log message
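A minimal sketch of the type-validation guard described in this entry; the concrete ideation type names are assumptions:

```ts
// Hedged sketch of VALID_IDEATION_TYPES + isValidIdeationType + validateEnabledTypes.
type IdeationType = 'feature' | 'improvement' | 'refactor';

const VALID_IDEATION_TYPES = new Set<IdeationType>(['feature', 'improvement', 'refactor']);

function isValidIdeationType(value: string): value is IdeationType {
  return VALID_IDEATION_TYPES.has(value as IdeationType);
}

// Drops unknown values coming from persisted settings and logs what was dropped.
function validateEnabledTypes(values: string[]): IdeationType[] {
  const dropped = values.filter((v) => !isValidIdeationType(v));
  if (dropped.length > 0) {
    console.warn(`Ignoring invalid ideation types: ${dropped.join(', ')}`);
  }
  return values.filter(isValidIdeationType);
}
```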
Adds 'from __future__ import annotations' to spec/discovery.py for Python 3.9+ compatibility with type hints. This completes the Python compatibility fixes that were partially applied in previous commits. All 26 analysis and spec Python files now have the future annotations import. Related: AndyMik90#128 Co-authored-by: Joris Slagter <[email protected]>
…#241) * fix: resolve Python detection and backend packaging issues - Fix backend packaging path (auto-claude -> backend) to match path-resolver.ts expectations - Add future annotations import to config_parser.py for Python 3.9+ compatibility - Use findPythonCommand() in project-context-handlers to prioritize Homebrew Python - Improve Python detection to prefer Homebrew paths over system Python on macOS This resolves the following issues: - 'analyzer.py not found' error due to incorrect packaging destination - TypeError with 'dict | None' syntax on Python < 3.10 - Wrong Python interpreter being used (system Python instead of Homebrew Python 3.10+) Tested on macOS with packaged app - project index now loads successfully. * refactor: address PR review feedback - Extract findHomebrewPython() helper to eliminate code duplication between findPythonCommand() and getDefaultPythonCommand() - Remove hardcoded version-specific paths (python3.12) and rely only on generic Homebrew symlinks for better maintainability - Remove unnecessary 'from __future__ import annotations' from config_parser.py since backend requires Python 3.12+ where union types are native These changes make the code more maintainable, less fragile to Python version changes, and properly reflect the project's Python 3.12+ requirement.
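An illustrative findHomebrewPython() along the lines described above, probing only the generic Homebrew symlinks rather than version-specific paths:

```ts
// Hedged sketch: prefer Homebrew Python over system Python on macOS.
import { existsSync } from 'node:fs';

function findHomebrewPython(): string | null {
  const candidates = [
    '/opt/homebrew/bin/python3', // Homebrew prefix on Apple Silicon
    '/usr/local/bin/python3',    // Homebrew prefix on Intel macOS
  ];
  return candidates.find((p) => existsSync(p)) ?? null;
}
```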
…#250) * feat(github): add GitHub automation system for issues and PRs Implements comprehensive GitHub automation with three major components: 1. Issue Auto-Fix: Automatically creates specs from labeled issues - AutoFixButton component with progress tracking - useAutoFix hook for config and queue management - Backend handlers for spec creation from issues 2. GitHub PRs Tool: AI-powered PR review sidebar - New sidebar tab (Cmd+Shift+P) alongside GitHub Issues - PRList/PRDetail components for viewing PRs - Review system with findings by severity - Post review comments to GitHub 3. Issue Triage: Duplicate/spam/feature-creep detection - Triage handlers with label application - Configurable detection thresholds Also adds: - Debug logging (DEBUG=true) for all GitHub handlers - Backend runners/github module with orchestrator - AI prompts for PR review, triage, duplicate/spam detection - dev:debug npm script for development with logging 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(github-runner): resolve import errors for direct script execution Changes runner.py and orchestrator.py to handle both: - Package import: `from runners.github import ...` - Direct script: `python runners/github/runner.py` Uses try/except pattern for relative vs direct imports. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(github): correct argparse argument order for runner.py Move --project global argument before subcommand so argparse can correctly parse it. Fixes "unrecognized arguments: --project" error. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * logs when debug mode is on * refactor(github): extract service layer and fix linting errors Major refactoring to improve maintainability and code quality: Backend (Python): - Extracted orchestrator.py (2,600 → 835 lines, 68% reduction) into 7 service modules: - prompt_manager.py: Prompt template management - response_parsers.py: AI response parsing - pr_review_engine.py: PR review orchestration - triage_engine.py: Issue triage logic - autofix_processor.py: Auto-fix workflow - batch_processor.py: Batch issue handling - Fixed 18 ruff linting errors (F401, C405, C414, E741): - Removed unused imports (BatchValidationResult, AuditAction, locked_json_write) - Optimized collection literals (set([n]) → {n}) - Removed unnecessary list() calls - Renamed ambiguous variable 'l' to 'label' throughout Frontend (TypeScript): - Refactored IPC handlers (19% overall reduction) with shared utilities: - autofix-handlers.ts: 1,042 → 818 lines - pr-handlers.ts: 648 → 543 lines - triage-handlers.ts: 437 lines (no duplication) - Created utils layer: logger, ipc-communicator, project-middleware, subprocess-runner - Split github-store.ts into focused stores: issues, pr-review, investigation, sync-status - Split ReviewFindings.tsx into focused components All imports verified, type checks passing, linting clean. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <[email protected]> --------- Co-authored-by: Claude Opus 4.5 <[email protected]>
…ndyMik90#250)" (AndyMik90#251) This reverts commit 348de6d.
* Add multilingual support and i18n integration - Implemented i18n framework using `react-i18next` for translation management. - Added support for English and French languages with translation files. - Integrated language selector into settings. - Updated all text strings in UI components to use translation keys. - Ensured smooth language switching with live updates. * Migrate remaining hard-coded strings to i18n system - TaskCard: status labels, review reasons, badges, action buttons - PhaseProgressIndicator: execution phases, progress labels - KanbanBoard: drop zone, show archived, tooltips - CustomModelModal: dialog title, description, labels - ProactiveSwapListener: account switch notifications - AgentProfileSelector: phase labels, custom configuration - GeneralSettings: agent framework option Added translation keys for en/fr locales in tasks.json, common.json, and settings.json for complete i18n coverage. * Add i18n support to dialogs and settings components - AddFeatureDialog: form labels, validation messages, buttons - AddProjectModal: dialog steps, form fields, actions - RateLimitIndicator: rate limit notifications - RateLimitModal: account switching, upgrade prompts - AdvancedSettings: updates and notifications sections - ThemeSettings: theme selection labels - Updated dialogs.json locales (en/fr) * Fix truncated 'ready' message in dialogs locales * Fix backlog terminology in i18n locales Change "Planning"/"Planification" to standard PM term "Backlog" * Migrate settings navigation and integration labels to i18n - AppSettings: nav items, section titles, buttons - IntegrationSettings: Claude accounts, auto-switch, API keys labels - Added settings nav/projectSections/integrations translation keys - Added buttons.saving to common translations * Migrate AgentProfileSettings and Sidebar init dialog to i18n - AgentProfileSettings: migrate phase config labels, section title, description, and all hardcoded strings to settings namespace - Sidebar: migrate init dialog strings to dialogs namespace with common buttons from common namespace - Add new translation keys for agent profile settings and update dialog * Migrate AppSettings navigation labels to i18n - Add useTranslation hook to AppSettings.tsx - Replace hardcoded section labels with dynamic translations - Add projectSections translations for project settings nav - Add rerunWizardDescription translation key * Add explicit typing to notificationItems array Import NotificationSettings type and use keyof to properly type the notification item keys, removing manual type assertion.
…AndyMik90#266) * ci: implement enterprise-grade PR quality gates and security scanning * ci: implement enterprise-grade PR quality gates and security scanning * fix:pr comments and improve code * fix: improve commit linting and code quality * Removed the dependency-review job (i added it) * fix: address CodeRabbit review comments - Expand scope pattern to allow uppercase, underscores, slashes, dots - Add concurrency control to cancel duplicate security scan runs - Add explanatory comment for Bandit CLI flags - Remove dependency-review job (requires repo settings) 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * docs: update commit lint examples with expanded scope patterns Show slashes and dots in scope examples to demonstrate the newly allowed characters (api/users, package.json) 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * chore: remove feature request issue template Feature requests are directed to GitHub Discussions via the issue template config.yml 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix: address security vulnerabilities in service orchestrator - Fix port parsing crash on malformed docker-compose entries - Fix shell injection risk by using shlex.split() with shell=False Prevents crashes when docker-compose.yml contains environment variables in port mappings (e.g., '${PORT}:8080') and eliminates shell injection vulnerabilities in subprocess execution. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <[email protected]> --------- Co-authored-by: Claude Opus 4.5 <[email protected]>
…90#252) * feat(github): add GitHub automation system for issues and PRs Implements comprehensive GitHub automation with three major components: 1. Issue Auto-Fix: Automatically creates specs from labeled issues - AutoFixButton component with progress tracking - useAutoFix hook for config and queue management - Backend handlers for spec creation from issues 2. GitHub PRs Tool: AI-powered PR review sidebar - New sidebar tab (Cmd+Shift+P) alongside GitHub Issues - PRList/PRDetail components for viewing PRs - Review system with findings by severity - Post review comments to GitHub 3. Issue Triage: Duplicate/spam/feature-creep detection - Triage handlers with label application - Configurable detection thresholds Also adds: - Debug logging (DEBUG=true) for all GitHub handlers - Backend runners/github module with orchestrator - AI prompts for PR review, triage, duplicate/spam detection - dev:debug npm script for development with logging 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(github-runner): resolve import errors for direct script execution Changes runner.py and orchestrator.py to handle both: - Package import: `from runners.github import ...` - Direct script: `python runners/github/runner.py` Uses try/except pattern for relative vs direct imports. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(github): correct argparse argument order for runner.py Move --project global argument before subcommand so argparse can correctly parse it. Fixes "unrecognized arguments: --project" error. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * logs when debug mode is on * refactor(github): extract service layer and fix linting errors Major refactoring to improve maintainability and code quality: Backend (Python): - Extracted orchestrator.py (2,600 → 835 lines, 68% reduction) into 7 service modules: - prompt_manager.py: Prompt template management - response_parsers.py: AI response parsing - pr_review_engine.py: PR review orchestration - triage_engine.py: Issue triage logic - autofix_processor.py: Auto-fix workflow - batch_processor.py: Batch issue handling - Fixed 18 ruff linting errors (F401, C405, C414, E741): - Removed unused imports (BatchValidationResult, AuditAction, locked_json_write) - Optimized collection literals (set([n]) → {n}) - Removed unnecessary list() calls - Renamed ambiguous variable 'l' to 'label' throughout Frontend (TypeScript): - Refactored IPC handlers (19% overall reduction) with shared utilities: - autofix-handlers.ts: 1,042 → 818 lines - pr-handlers.ts: 648 → 543 lines - triage-handlers.ts: 437 lines (no duplication) - Created utils layer: logger, ipc-communicator, project-middleware, subprocess-runner - Split github-store.ts into focused stores: issues, pr-review, investigation, sync-status - Split ReviewFindings.tsx into focused components All imports verified, type checks passing, linting clean. 
🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <[email protected]> * fixes during testing of PR * feat(github): implement PR merge, assign, and comment features - Add auto-assignment when clicking "Run AI Review" - Implement PR merge functionality with squash method - Add ability to post comments on PRs - Display assignees in PR UI - Add Approve and Merge buttons when review passes - Update backend gh_client with pr_merge, pr_comment, pr_assign methods - Create IPC handlers for new PR operations - Update TypeScript interfaces and browser mocks 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <[email protected]> * Improve PR review AI * fix(github): use temp files for PR review posting to avoid shell escaping issues When posting PR reviews with findings containing special characters (backticks, parentheses, quotes), the shell command was interpreting them as commands instead of literal text, causing syntax errors. Changed both postPRReview and postPRComment handlers to write the body content to temporary files and use gh CLI's --body-file flag instead of --body with inline content. This safely handles ALL special characters without escaping issues. Fixes shell errors when posting reviews with suggested fixes containing code snippets. * fix(i18n): add missing GitHub PRs translation and document i18n requirements Fixed missing translation key for GitHub PRs feature that was causing "items.githubPRs" to display instead of the proper translated text. Added comprehensive i18n guidelines to CLAUDE.md to ensure all future frontend development follows the translation key pattern instead of using hardcoded strings. Also fixed missing deletePRReview mock function in browser-mock.ts to resolve TypeScript compilation errors. Changes: - Added githubPRs translation to en/navigation.json - Added githubPRs translation to fr/navigation.json - Added Development Guidelines section to CLAUDE.md with i18n requirements - Documented translation file locations and namespace usage patterns - Added deletePRReview mock function to browser-mock.ts 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <[email protected]> * fix ui loading * Github PR fixes * improve claude.md * lints/tests * fix(github): handle PRs exceeding GitHub's 20K line diff limit - Add PRTooLargeError exception for large PR detection - Update pr_diff() to catch and raise PRTooLargeError for HTTP 406 errors - Gracefully handle large PRs by skipping full diff and using individual file patches - Add diff_truncated flag to PRContext to track when diff was skipped - Large PRs will now review successfully using per-file diffs instead of failing Fixes issue with PR AndyMik90#252 which has 100+ files exceeding the 20,000 line limit. * fix: implement individual file patch fetching for large PRs The PR review was getting stuck for large PRs (>20K lines) because when we skipped the full diff due to GitHub API limits, we had no code to analyze. The individual file patches were also empty, leaving the AI with just file names and metadata. 
Changes: - Implemented _get_file_patch() to fetch individual patches via git diff - Updated PR review engine to build composite diff from file patches when diff_truncated is True - Added missing 'state' field to PRContext dataclass - Limits composite diff to first 50 files for very large PRs - Shows appropriate warnings when using reconstructed diffs This allows AI review to proceed with actual code analysis even when the full PR diff exceeds GitHub's limits. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <[email protected]> * 1min reduction * docs: add GitHub Sponsors funding configuration Enable the Sponsor button on the repository by adding FUNDING.yml with the AndyMik90 GitHub Sponsors profile. * feat(github-pr): add orchestrating agent for thorough PR reviews Implement a new Opus 4.5 orchestrating agent that performs comprehensive PR reviews regardless of size. Key changes: - Add orchestrator_reviewer.py with strategic review workflow - Add review_tools.py with subagent spawning capabilities - Add pr_orchestrator.md prompt emphasizing thorough analysis - Add pr_security_agent.md and pr_quality_agent.md subagent prompts - Integrate orchestrator into pr_review_engine.py with config flag - Fix critical bug where findings were extracted but not processed (indentation issue in _parse_orchestrator_output) The orchestrator now correctly identifies issues in PRs that were previously approved as "trivial". Testing showed 7 findings detected vs 0 before the fix. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * i18n * fix(github-pr): restrict pr_reviewer to read-only permissions The PR review agent was using qa_reviewer agent type which has Bash access, allowing it to checkout branches and make changes during review. Created new pr_reviewer agent type with BASE_READ_TOOLS only (no Bash, no writes, no auto-claude tools). This prevents the PR review from accidentally modifying code or switching branches during analysis. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(github-pr): robust category mapping and JSON parsing for PR review The orchestrator PR review was failing to extract findings because: 1. AI generates category names like 'correctness', 'consistency', 'testing' that aren't in our ReviewCategory enum - added flexible mapping 2. 
JSON sometimes embedded in markdown code blocks (```json) which broke parsing - added code block extraction as first parsing attempt Changes: - Add _CATEGORY_MAPPING dict to map AI categories to valid enum values - Add _map_category() helper function with fallback to QUALITY - Add severity parsing with fallback to MEDIUM - Add markdown code block detection (```json) before raw JSON parsing - Add _extract_findings_from_data() helper to reduce code duplication - Apply same fixes to review_tools.py for subagent parsing 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(pr-review): improve post findings UX with batch support and feedback - Fix post findings failing on own PRs by falling back from REQUEST_CHANGES to COMMENT when GitHub returns 422 error - Change status badge to show "Reviewed" instead of "Commented" until findings are actually posted to GitHub - Add success notification when findings are posted (auto-dismisses after 3s) - Add batch posting support: track posted findings, show "Posted" badge, allow posting remaining findings in additional batches - Show loading state on button while posting 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(github): resolve stale timestamp and null author bugs - Fix stale timestamp in batch_issues.py: Move updated_at assignment BEFORE to_dict() serialization so the saved JSON contains the correct timestamp instead of the old value - Fix AttributeError in context_gatherer.py: Handle null author/user fields when GitHub API returns null for deleted/suspended users instead of an empty object 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(security): address all high and medium severity PR review findings HIGH severity fixes: - Command Injection in autofix-handlers.ts: Use execFileSync with args array - Command Injection in pr-handlers.ts (3 locations): Use execFileSync + validation - Command Injection in triage-handlers.ts: Use execFileSync + label validation - Token Exposure in bot_detection.py: Pass token via GH_TOKEN env var MEDIUM severity fixes: - Environment variable leakage in subprocess-runner.ts: Filter to safe vars only - Debug logging in subprocess-runner.ts: Only log in development mode - Delimiter escape bypass in sanitize.py: Use regex pattern for variations - Insecure file permissions in trust.py: Use os.open with 0o600 mode - No file locking in learning.py: Use FileLock + atomic_write utilities - Bare except in confidence.py: Log error with specific exception info - Fragile module import in pr_review_engine.py: Import at module level - State transition validation in models.py: Enforce can_transition_to() 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * PR followup * fix(security): add usedforsecurity=False to MD5 hash calls MD5 is used for generating unique IDs/cache keys, not for security purposes. Adding usedforsecurity=False resolves Bandit B324 warnings. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(security): address all high-priority PR review findings Fixes 5 high-priority issues from Auto Claude PR Review: 1. orchestrator_reviewer.py: Token budget tracking now increments total_tokens from API response usage data 2. 
pr_review_engine.py: Async exceptions now re-raise RuntimeError instead of silently returning empty results 3. batch_issues.py: IssueBatch.save() now uses locked_json_write for atomic file operations with file locking 4. project-middleware.ts: Added validateProjectPath() to prevent path traversal attacks (checks absolute, no .., exists, is dir) 5. orchestrator.py: Exception handling now logs full traceback and preserves exception type/context in error messages 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * fix(security): address all high-priority PR review findings Fixes 5 high-priority issues from Auto Claude PR Review: 1. orchestrator_reviewer.py: Token budget tracking now increments total_tokens from API response usage data 2. pr_review_engine.py: Async exceptions now re-raise RuntimeError instead of silently returning empty results 3. batch_issues.py: IssueBatch.save() now uses locked_json_write for atomic file operations with file locking 4. project-middleware.ts: Added validateProjectPath() to prevent path traversal attacks (checks absolute, no .., exists, is dir) 5. orchestrator.py: Exception handling now logs full traceback and preserves exception type/context in error messages 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * feat(ui): add PR status labels to list view Add secondary status badges to the PR list showing review state at a glance: - "Changes Requested" (warning) - PRs with blocking issues (critical/high) - "Ready to Merge" (green) - PRs with only non-blocking suggestions - "Ready for Follow-up" (blue) - PRs with new commits since last review The "Ready for Follow-up" badge uses a cached new commits check from the store, only shown after the detail view confirms new commits via SHA comparison. This prevents false positives from PR updatedAt timestamp changes (which can happen from comments, labels, etc). 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> * PR labels * auto-claude: Initialize subtask-based implementation plan - Workflow type: feature - Phases: 3 - Subtasks: 6 - Ready for autonomous implementation --------- Co-authored-by: Claude Opus 4.5 <[email protected]>
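Two of the fixes in this entry — temp-file bodies passed via --body-file and execFileSync with an argument array — combine into a small pattern for posting PR comments safely. A hedged sketch (postPRComment is a hypothetical wrapper, not the actual handler):

```ts
// Illustrative safe gh CLI invocation: no shell, body passed via a temp file.
import { execFileSync } from 'node:child_process';
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { tmpdir } from 'node:os';
import path from 'node:path';

function postPRComment(repoPath: string, prNumber: number, body: string): void {
  const dir = mkdtempSync(path.join(tmpdir(), 'pr-comment-'));
  const bodyFile = path.join(dir, 'body.md');
  try {
    writeFileSync(bodyFile, body, 'utf8');
    // Backticks, quotes, and parentheses in `body` are passed through verbatim,
    // since gh reads the file instead of receiving the text on a shell line.
    execFileSync('gh', ['pr', 'comment', String(prNumber), '--body-file', bodyFile], {
      cwd: repoPath,
      stdio: 'inherit',
    });
  } finally {
    rmSync(dir, { recursive: true, force: true });
  }
}
```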
…yMik90#272) Bumps [vitest](https://github.com/vitest-dev/vitest/tree/HEAD/packages/vitest) from 4.0.15 to 4.0.16. - [Release notes](https://github.com/vitest-dev/vitest/releases) - [Commits](https://github.com/vitest-dev/vitest/commits/v4.0.16/packages/vitest) --- updated-dependencies: - dependency-name: vitest dependency-version: 4.0.16 dependency-type: direct:development update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [@electron/rebuild](https://github.com/electron/rebuild) from 3.7.2 to 4.0.2. - [Release notes](https://github.com/electron/rebuild/releases) - [Commits](electron/rebuild@v3.7.2...v4.0.2) --- updated-dependencies: - dependency-name: "@electron/rebuild" dependency-version: 4.0.2 dependency-type: direct:development update-type: version-update:semver-major ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Andy <[email protected]>
Co-authored-by: danielfrey63 <[email protected]> Co-authored-by: Andy <[email protected]>
* fix(planning): accept bug_fix workflow_type alias * style(planning): ruff format * fix: refactored common logic * fix: remove ruff errors * fix: remove duplicate _normalize_workflow_type method Remove the incorrectly placed duplicate method inside ContextLoader class. The module-level function is the correct implementation being used. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <[email protected]> --------- Co-authored-by: danielfrey63 <[email protected]> Co-authored-by: Andy <[email protected]> Co-authored-by: AndyMik90 <[email protected]> Co-authored-by: Claude Opus 4.5 <[email protected]>
…ow (AndyMik90#276) When dry_run=true, the workflow skipped creating the version tag but build jobs still tried to checkout that non-existent tag, causing all 4 platform builds to fail with "git failed with exit code 1". Now build jobs checkout develop branch for dry runs while still using the version tag for real releases. Closes: GitHub Actions run #20464082726
Replace separate GitHubIntegration and GitLabIntegration components with unified ProviderIntegration component: - Single component handles both GitHub and GitLab - Provider-specific behavior controlled by 'provider' prop - Reduces code duplication and maintains consistent UX - Simplifies future provider additions Both GitHub and GitLab settings now use the same component with provider-specific configuration from the factory pattern.
Add event-driven sidebar refresh when GitHub/GitLab settings are toggled: - useProjectSettings dispatches 'env-config-updated' event after saving changes - Sidebar listens for this event and reloads env config - Ensures sidebar nav items (GitHub PRs, GitLab Issues) appear/disappear immediately This eliminates the need for users to manually refresh or switch projects to see updated sidebar navigation.
Enhance GitLab CLI installation and authentication flow: - Auto-trigger OAuth authentication after successful glab CLI installation - Add comprehensive debug logging for OAuth flow troubleshooting - Verify installation completed before starting OAuth - Seamless UX: install CLI → auto-authenticate → ready to use This reduces manual steps: users no longer need to manually click 'Use OAuth' after installing glab CLI.
Only show OAuth flow if user explicitly chose OAuth previously: - Before: showed OAuth by default when no token exists - After: only shows OAuth if githubAuthMethod === 'oauth' This prevents forcing OAuth on users who prefer manual token entry and matches expected behavior for new project setups.
- Add automatic validation after planner session completes - Halt build with clear error message if no subtasks created - Show validation errors and suggested fixes to user - Display helpful troubleshooting guidance - Add prominent warning in planner prompt about mandatory subtasks - Include minimum acceptable plan example in prompt - Fix Unicode encoding issue in test_agent_architecture.py - Fix Python 3.10 compatibility in conftest.py (tomllib → tomli fallback) - Simplify test_review_approval.py to not require datetime mocking This fixes the issue where tasks created from ideation/insights would start with empty implementation plans and the planner would sometimes fail to create any subtasks, causing builds to appear 'complete' when they actually never started. Validation now catches this immediately after planning and provides actionable feedback to help users fix the spec or understand why planning failed. Co-Authored-By: Claude Sonnet 4.5 <[email protected]>
When the planner fails to create subtasks, explicitly write error status to implementation_plan.json so the frontend displays an error state instead of showing 'Task Incomplete' or moving to 'human_review' status. This ensures users see a clear error state in the UI when planning fails, rather than a confusing incomplete state. Co-Authored-By: Claude Sonnet 4.5 <[email protected]>
Frontend TaskStatus doesn't include 'error', so use 'human_review' with reviewReason='errors' instead. This ensures the task shows up correctly in the UI with an error state. Also added a detailed, formatted error_message that explains: - What went wrong (no subtasks created) - Why this usually happens (vague spec, errors, token limits) - Clear next steps to fix the issue Co-Authored-By: Claude Sonnet 4.5 <[email protected]>
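Taken together, the commits above amount to a small post-planning gate. A minimal sketch of that gate, assuming a helper name and plan-file layout that may differ from the actual backend code:

```python
import json
from pathlib import Path

def flag_empty_plan(spec_dir: Path) -> bool:
    """Return True if the plan has subtasks; otherwise mark it for human review (sketch only)."""
    plan_path = spec_dir / "implementation_plan.json"
    plan = json.loads(plan_path.read_text(encoding="utf-8")) if plan_path.exists() else {}

    subtasks = [
        subtask
        for phase in plan.get("phases", [])
        for subtask in phase.get("subtasks", [])
    ]
    if subtasks:
        return True

    # No subtasks: surface an actionable error instead of a silently "complete" build.
    plan["status"] = "human_review"      # frontend TaskStatus has no 'error' value
    plan["reviewReason"] = "errors"
    plan["error_message"] = (
        "Planning produced no subtasks. This usually means the spec was too vague "
        "or the planner hit an error or token limit. Refine the spec and re-run planning."
    )
    plan_path.write_text(json.dumps(plan, indent=2), encoding="utf-8")
    return False
```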
Task descriptions from ideation insights were failing to parse in spec_runner.py because Windows cmd.exe couldn't handle nested quotes in command-line arguments. Changed from passing --task "description with quotes, commas, etc" to writing the description to a temporary file and passing --task-file path. This completely bypasses command-line quoting issues and allows spec creation to work correctly for tasks created from insights. Co-Authored-By: Claude Sonnet 4.5 <[email protected]>
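A rough sketch of the temp-file handoff described in this commit; the helper names are illustrative, and the real flow is split between the Electron main process (which writes the file) and spec_runner.py (which reads --task-file):

```python
import argparse
import tempfile
from pathlib import Path

def write_task_file(task_description: str) -> str:
    """Caller side: persist the description so no quoting has to survive the command line."""
    with tempfile.NamedTemporaryFile(
        mode="w", suffix=".txt", delete=False, encoding="utf-8"
    ) as handle:
        handle.write(task_description)
        return handle.name

def read_task(args: argparse.Namespace) -> str:
    """Runner side: prefer --task-file and fall back to the legacy --task string."""
    if getattr(args, "task_file", None):
        return Path(args.task_file).read_text(encoding="utf-8")
    return args.task or ""
```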
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

This PR introduces comprehensive Conda environment management across frontend and backend, implements multi-provider Git support (GitHub/GitLab) with auto-detection, integrates user feedback into the planning process, enhances Windows/PowerShell compatibility, and investigates a subtask rendering issue with proposed solutions.

Changes
Sequence Diagrams

sequenceDiagram
participant User
participant Frontend as Frontend (Main)
participant Detector as Conda Detector
participant EnvMgr as Conda Env Manager
participant Workspace as Workspace Generator
participant Filesystem
User->>Frontend: Initiate Conda Setup for Project
Frontend->>Detector: detectCondaInstallations()
Detector->>Detector: Search OS-specific paths
Detector->>Filesystem: Validate conda executable
Detector-->>Frontend: CondaDetectionResult
Frontend->>EnvMgr: createEnvironment(config)
EnvMgr->>EnvMgr: Parse Python version
EnvMgr->>Filesystem: Create environment directory
EnvMgr->>EnvMgr: Run conda create command
EnvMgr->>EnvMgr: Verify environment
EnvMgr->>EnvMgr: Install dependencies
EnvMgr-->>Frontend: Progress events
Frontend->>Workspace: generateWorkspaceFiles()
Workspace->>Workspace: Detect project structure
Workspace->>Filesystem: Write VS Code workspace config
Workspace->>Filesystem: Generate activation scripts
Workspace-->>Frontend: Generated paths
Frontend-->>User: Setup complete
sequenceDiagram
participant User
participant Frontend as Frontend (UI)
participant WorktreeManager as Worktree Manager
participant GitProvider as Git Provider
participant GitHub as GitHub CLI
participant GitLab as GitLab CLI
User->>Frontend: Request PR/MR Creation
Frontend->>WorktreeManager: push_and_create_pr(spec, target_branch, provider_setting)
WorktreeManager->>GitProvider: get_provider_config(project_dir, provider_setting)
GitProvider-->>WorktreeManager: Provider (GITHUB | GITLAB)
WorktreeManager->>WorktreeManager: push_branch with retry
alt GitHub Provider
WorktreeManager->>GitHub: gh pr create --head --base --title
GitHub-->>WorktreeManager: PR URL
else GitLab Provider
WorktreeManager->>GitLab: glab mr create --source-branch --target-branch --title
GitLab-->>WorktreeManager: MR URL
end
WorktreeManager-->>Frontend: PushAndCreatePRResult
Frontend-->>User: PR/MR created (provider-specific messaging)
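A hedged sketch of the provider dispatch in the diagram above; the function name and error handling are assumptions, and the real commands carry more flags (title, body, and branch details come from the worktree manager):

```python
import subprocess

def create_review_request(provider: str, branch: str, target: str, title: str) -> str:
    """Create a GitHub PR or GitLab MR with the CLI shown in the diagram and return its URL."""
    if provider == "github":
        cmd = ["gh", "pr", "create", "--head", branch, "--base", target, "--title", title]
    elif provider == "gitlab":
        cmd = ["glab", "mr", "create", "--source-branch", branch, "--target-branch", target, "--title", title]
    else:
        raise ValueError(f"Unsupported provider: {provider}")
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()  # both CLIs print the created PR/MR URL
```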
sequenceDiagram
participant Agent as Coder Agent
participant Phase as Phase Config
participant Logger as Task Logger
participant Prompt as Prompt Builder
participant PlanValidator as Plan Validator
Agent->>Agent: Start main loop
Agent->>Phase: get_unread_feedback(spec_dir)
Phase->>Phase: Load task_metadata.json
Phase-->>Agent: Unread feedback list
Agent->>Logger: Log unread feedback detected
Agent->>Prompt: Append feedback_section to current prompt
alt Feedback exists
Prompt->>Agent: Enhanced prompt with user feedback context
else No feedback
Prompt->>Agent: Standard prompt
end
Agent->>Agent: Generate plan
Agent->>PlanValidator: Validate plan structure
alt Plan invalid
PlanValidator-->>Agent: Validation errors
Agent->>Logger: Log planning failure
Agent->>Agent: Set plan state to human_review
else Plan valid
Agent->>Phase: Sync plan to source
end
alt Completion check
Agent->>Phase: Check for unread feedback
alt Feedback remains
Agent->>Agent: Block completion with error
end
end
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

This PR introduces substantial new functionality across frontend and backend, including: a complete Conda environment management system (detection, creation, activation, workspace generation), a provider abstraction layer with GitHub/GitLab implementations, user feedback integration with planning validation, and cross-platform activation script support. The changes span 30+ new files and significant modifications to critical paths (worktree management, agent coder, process spawning). Multiple heterogeneous features demand separate reasoning for each subsystem despite some shared infrastructure patterns.

Possibly related PRs
Suggested labels
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Test User seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
Summary of Changes

Hello @jasonnator, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly upgrades the project's Python environment management by integrating Conda, ensuring cross-platform compatibility, especially for Windows users. It also refines core development workflows, including pre-commit checks, Git operations, and PR review processes, by introducing a new Git provider abstraction and enhancing AI-driven merge conflict resolution. These changes aim to improve developer experience, streamline automated tasks, and make the system more robust and extensible across different operating systems and Git hosting platforms.

Highlights
Ignored Files
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either
Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes
Code Review
This is an extensive and high-quality pull request that introduces comprehensive conda environment management and significantly improves Windows compatibility. The changes are well-thought-out, especially the enhancements to worktree management, git command handling, and security validation. The introduction of evidence-based validation for PR review findings and robust error handling in the agent lifecycle are particularly noteworthy improvements. My review includes a few suggestions for consistency and robustness, mainly around file I/O and refactoring duplicated logic.
Add comprehensive conda environment support for isolated Python environments: **Conda Environment Features:** - Conda installation detection across OS-specific locations - Application-level environment at ~/miniconda3/envs/auto-claude - Project-level self-contained environments in .envs/<project-name>/ - Automatic terminal activation for configured projects - Cross-platform activation scripts (CMD, PowerShell, Bash) - VS Code workspace file generation with conda terminal profiles - Python version detection from requirements.txt, pyproject.toml, environment.yml - Setup wizard with multi-step progress tracking **Windows/PowerShell Compatibility Fixes:** - Add windowsHide: true to all spawn() calls to prevent console popups - Fix PowerShell command syntax (semicolons, $env:PATH, & call operator) - Platform-aware shell escaping (PowerShell vs bash quote handling) - Add pathWasModified flag to skip unnecessary PATH modifications - PowerShell-specific conda activation via init scripts **New Files (11):** - conda-detector.ts, conda-env-manager.ts, conda-workspace-generator.ts - conda-project-structure.ts, conda-handlers.ts, conda-api.ts - PythonEnvSettings.tsx, CondaSetupWizard.tsx, CondaDetectionDisplay.tsx - useCondaSetup.ts, conda.ts (types) **Modified Files (48):** - Terminal integration, Claude CLI integration, shell utilities - Settings UI components, IPC handlers, type definitions - i18n translations (en + fr) Co-Authored-By: Claude Opus 4.5 <[email protected]>
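The Python-version detection described in this commit lives in the TypeScript conda services; as a language-neutral illustration only, the lookup order could look like the sketch below (the file markers and regexes are assumptions, not the shipped logic):

```python
import re
from pathlib import Path

def detect_python_version(project_dir: Path) -> str | None:
    """Check requirements.txt, pyproject.toml, then environment.yml for a Python version hint."""
    candidates = [
        (project_dir / "requirements.txt", r"python[_ ]?version[^0-9]*([0-9]+\.[0-9]+)"),
        (project_dir / "pyproject.toml", r'requires-python\s*=\s*"[^0-9]*([0-9]+\.[0-9]+)'),
        (project_dir / "environment.yml", r"python\s*=\s*([0-9]+\.[0-9]+)"),
    ]
    for path, pattern in candidates:
        if path.exists():
            match = re.search(pattern, path.read_text(encoding="utf-8"))
            if match:
                return match.group(1)
    return None  # caller falls back to the application-level default environment
```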
- GeneralSettings.tsx: Fix 6 hardcoded strings (validation messages, browse buttons, file dialog titles) - SectionRouter.tsx: Fix 2 hardcoded strings (section title/description) - python-env-manager.ts: Change console.log to console.warn for consistency - Add 7 new translation keys to EN and FR locale files Co-Authored-By: Claude Opus 4.5 <[email protected]>
3725e69 to d1caff3
Resolves all merge conflicts from 91 commits on develop since PR AndyMik90#1052 was opened. Key conflict resolutions: - Backend Core: Merged GitLab MR support with improved gh CLI detection (get_gh_executable) - Backend Runners: Added platform dependency validation and safe_print() from develop - Frontend Main: Merged conda features with ESM __dirname compatibility and platform utilities - Frontend Renderer: Combined conda UI with updated model names (claude-sonnet-4-5-20250929) - Frontend Shared: Merged i18n translations and type improvements - Tests: Added new test utilities and platform-specific validation Notable develop branch changes incorporated: - ACS-251: PYTHONPATH isolation fix for external Python projects - ACS-253: Platform dependency validation to prevent import chain issues - ACS-306/310: Platform-critical packages (pywin32, secretstorage) - Improved Windows compatibility (windowsHide: true on spawn calls) - Model name updates to Claude 4.5 versions - ESM compatibility fixes (fileURLToPath for __dirname) Fixes applied during merge: - Added missing subprocess import to workspace/setup.py - Removed duplicate const declarations in Terminal.tsx - Removed duplicate exports in hooks/index.ts - Removed duplicate object properties in browser-mock.ts - Fixed invalid TaskStatus checks for 'planning' in KanbanBoard and TaskConfiguration - Removed orphaned pty-manager-conda.test.ts (functions don't exist) Co-Authored-By: Claude Opus 4.5 <[email protected]>
Codecov Report

❌ Patch coverage is

📢 Thoughts on this report? Let us know!
from __future__ import annotations

import json

Check notice (Code scanning / CodeQL): Unused import

from ...github.providers.protocol import (
    IssueData,
    IssueFilters,
    LabelData,
    PRData,
    PRFilters,
    ProviderType,
    ReviewData,
    ReviewFinding,
)

Check notice (Code scanning / CodeQL): Unused import

Examples showing how to use the provider factory for different git hosts.
"""

import asyncio

Check notice (Code scanning / CodeQL): Unused import
const os = require('os');
const fs = require('fs');
const tempTaskFile = path.join(os.tmpdir(), `task-${Date.now()}-${Math.random().toString(36).substring(7)}.txt`);
fs.writeFileSync(tempTaskFile, taskDescription, 'utf-8');

Check failure (Code scanning / CodeQL): Insecure temporary file

// Append .envs/ entry
content += `${entry}\n`;
await fsPromises.writeFile(gitignorePath, content, 'utf-8');

Check failure (Code scanning / CodeQL): Potential file system race condition

result = 'error';
} else if (isComplete) {
  result = 'completed';
} else if (stepIndex < (currentStep as number)) {

Check warning (Code scanning / CodeQL): Useless comparison test

const stepIndex = 2;

let result: string;
if (hasError && stepIndex === (currentStep as number)) {

Check warning (Code scanning / CodeQL): Useless comparison test

const os = require('os');
const fs = require('fs');
const tempTaskFile = path.join(os.tmpdir(), `task-${Date.now()}-${Math.random().toString(36).substring(7)}.txt`);
fs.writeFileSync(tempTaskFile, taskDescription, 'utf-8');

Check warning (Code scanning / CodeQL): Network data written to file
import {
  RefreshCw,
  KeyRound,
  Loader2,
  CheckCircle2,
  AlertCircle,
  Terminal,
  ExternalLink,
  ChevronDown,
} from 'lucide-react';

Check notice (Code scanning / CodeQL): Unused variable, import, function or class

};

export function TaskConfiguration({ task }: TaskConfigurationProps) {
  const { t } = useTranslation(['tasks', 'common']);

Check notice (Code scanning / CodeQL): Unused variable, import, function or class

  : { name: 'Default', description: 'Using system defaults' };
};

const getModelLabel = (model: string): string => {

Check notice (Code scanning / CodeQL): Unused variable, import, function or class

}

export function TaskFeedback({ task }: TaskFeedbackProps) {
  const { t } = useTranslation(['tasks', 'common']);

Check notice (Code scanning / CodeQL): Unused variable, import, function or class

  createMockHookReturn({ step: 'detecting', isRunning: false })
);

const { rerender } = renderWizard({ open: true, onOpenChange });

Check notice (Code scanning / CodeQL): Unused variable, import, function or class
Actionable comments posted: 105
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (7)
apps/frontend/src/renderer/components/project-settings/hooks/useProjectSettings.ts (1)
363-385: Local state updated even when backend save fails.

When result.success is false (line 371) or an exception is caught (line 378), the function still calls setEnvConfig(newConfig) on line 383. This causes the UI to display configuration that wasn't persisted, leading to a state/persistence mismatch. Users may believe their changes were saved when they weren't.

Consider only updating local state on successful save, or reverting/notifying the user on failure.
🛠️ Suggested fix
  const updateEnvConfig = async (updates: Partial<ProjectEnvConfig>) => {
    if (envConfig) {
      const newConfig = { ...envConfig, ...updates };
      console.log('[useProjectSettings.updateEnvConfig] Updating config:', updates);

      // Save to backend FIRST so disk is updated before effects run
      try {
        const result = await window.electronAPI.updateProjectEnv(project.id, newConfig);
        if (!result.success) {
          console.error('[useProjectSettings] Failed to auto-save env config:', result.error);
+         setEnvError(result.error || 'Failed to save environment config');
+         return;
        } else {
          console.log('[useProjectSettings.updateEnvConfig] Successfully saved to backend');
          // Notify other components (like Sidebar) that env config changed
          window.dispatchEvent(new CustomEvent('env-config-updated'));
+         // Update local state only on success
+         setEnvConfig(newConfig);
        }
      } catch (err) {
        console.error('[useProjectSettings] Error auto-saving env config:', err);
+       setEnvError(err instanceof Error ? err.message : 'Unknown error');
+       return;
      }
-
-     // Then update local state (triggers effects that read from disk)
-     setEnvConfig(newConfig);
    }
  };

apps/frontend/src/main/claude-cli-utils.ts (1)
12-47: Replace process.platform checks with platform abstraction helpers.

Direct platform checks violate project guidelines and will fail static enforcement. Use the shared platform utilities for delimiter and Windows detection.

🛠️ Proposed fix

 import path from 'path';
 import { getAugmentedEnv, getAugmentedEnvAsync } from './env-utils';
 import { getToolPath, getToolPathAsync } from './cli-tool-manager';
+import { getPathDelimiter, isWindows } from './platform';
@@
-  const pathSeparator = process.platform === 'win32' ? ';' : ':';
+  const pathSeparator = getPathDelimiter();
@@
-  const hasCommandDir = process.platform === 'win32'
+  const hasCommandDir = isWindows()
     ? pathEntries
         .map((entry) => path.normalize(entry).toLowerCase())
         .includes(normalizedCommandDir.toLowerCase())
     : pathEntries
         .map((entry) => path.normalize(entry))
         .includes(normalizedCommandDir);

As per coding guidelines, ...

apps/frontend/src/main/ipc-handlers/claude-code-handlers.ts (2)
509-523: Consider propagating spawn errors before detaching.

The runWindowsCommand helper resolves after a 300ms delay regardless of whether the process started successfully. While fire-and-forget is intentional for detached terminals, early spawn failures (e.g., command not found) are silently swallowed. Consider attaching an error listener before the timeout:

Suggested improvement

 const runWindowsCommand = (cmdString: string): Promise<void> => {
-  return new Promise((resolve) => {
+  return new Promise((resolve, reject) => {
     console.log(`[Claude Code] Executing: ${cmdString}`);
     const child = exec(cmdString, { windowsHide: false });
+
+    child.on('error', (err) => {
+      reject(new Error(`Failed to start terminal: ${err.message}`));
+    });
+
     child.unref?.();
-    setTimeout(() => resolve(), 300);
+    setTimeout(() => resolve(), 300);
   });
 };
392-398: Direct process.platform check violates coding guidelines.

Import and use platform abstraction functions from the platform module (isWindows(), isMacOS(), isLinux()) instead of checking process.platform directly.

Suggested refactor

+import { isWindows, isMacOS } from '../platform';
+
 export async function openTerminalWithCommand(command: string, keepOpen: boolean = true): Promise<void> {
-  const platform = process.platform;
   const settings = readSettingsFile();
   const preferredTerminal = settings?.preferredTerminal as string | undefined;

-  console.log('[Claude Code] Platform:', platform);
+  console.log('[Claude Code] Platform:', process.platform);
   console.log('[Claude Code] Preferred terminal:', preferredTerminal);

-  if (platform === 'darwin') {
+  if (isMacOS()) {
     // macOS handling...
-  } else if (platform === 'win32') {
+  } else if (isWindows()) {
     // Windows handling...
   } else {
     // Linux handling...
   }
+import { isWindows, isMacOS } from '../platform'; + export async function openTerminalWithCommand(command: string, keepOpen: boolean = true): Promise<void> { - const platform = process.platform; const settings = readSettingsFile(); const preferredTerminal = settings?.preferredTerminal as string | undefined; - console.log('[Claude Code] Platform:', platform); + console.log('[Claude Code] Platform:', process.platform); console.log('[Claude Code] Preferred terminal:', preferredTerminal); - if (platform === 'darwin') { + if (isMacOS()) { // macOS handling... - } else if (platform === 'win32') { + } else if (isWindows()) { // Windows handling... } else { // Linux handling... }apps/frontend/src/renderer/components/settings/integrations/GitLabIntegration.tsx (1)
234-256: Guard OAuth logs to avoid leaking sensitive data in production.

These console.log statements bypass DEBUG gating and log the full OAuth response. If the payload includes auth URLs or tokens, this is a privacy/telemetry risk and adds production noise. Prefer debugLog and sanitize the logged fields.

🛠️ Suggested change

-    console.log('[GitLabIntegration] handleStartOAuth called');
+    debugLog('handleStartOAuth called');
     const hostname = envConfig?.gitlabInstanceUrl?.replace(/^https?:\/\//, '').replace(/\/$/, '');
-    console.log('[GitLabIntegration] Calling startGitLabAuth with hostname:', hostname);
+    debugLog('Calling startGitLabAuth with hostname:', hostname);
     const result = await window.electronAPI.startGitLabAuth(hostname);
-    console.log('[GitLabIntegration] startGitLabAuth result:', result);
+    debugLog('startGitLabAuth result:', { success: result.success, error: result.error });

     if (result.success) {
-      console.log('[GitLabIntegration] Auth started successfully, polling for completion...');
+      debugLog('Auth started successfully, polling for completion...');
       // Poll for auth completion
       const checkAuth = async () => {
         const authResult = await window.electronAPI.checkGitLabAuth(hostname);
         if (authResult.success && authResult.data?.authenticated) {
           handleOAuthSuccess();
         } else {
           // Retry after delay
           setTimeout(checkAuth, 2000);
         }
       };
       setTimeout(checkAuth, 3000);
     } else {
-      console.error('[GitLabIntegration] startGitLabAuth failed:', result.error);
+      console.error('[GitLabIntegration] startGitLabAuth failed');
+      debugLog('startGitLabAuth error detail:', result.error);
     }
395-423: Remove the duplicateMergeReadinessinterface.The interface is declared twice in the same file. TypeScript will merge them, but it’s redundant and can hide future divergences. Keep a single definition (or re-export from shared types).
♻️ Proposed fix
-/** - * Lightweight merge readiness check result - * Used for real-time validation of AI verdict freshness - */ -export interface MergeReadiness { - /** PR is in draft mode */ - isDraft: boolean; - /** GitHub's mergeable status */ - mergeable: 'MERGEABLE' | 'CONFLICTING' | 'UNKNOWN'; - /** Branch is behind base branch (out of date) */ - isBehind: boolean; - /** Simplified CI status */ - ciStatus: 'passing' | 'failing' | 'pending' | 'none'; - /** List of blockers that contradict a "ready to merge" verdict */ - blockers: string[]; -}apps/frontend/src/renderer/components/settings/GeneralSettings.tsx (1)
334-410: Localize file dialog filter labels.These names surface in the OS picker and are user-facing. Please move them to i18n keys.
✅ Suggested fix
- filters: isWindows - ? [{ name: 'Python Executable', extensions: ['exe'] }] + filters: isWindows + ? [{ name: t('general.pythonExecutableFilter'), extensions: ['exe'] }] : [] @@ - ? [ - { name: 'Activation Scripts', extensions: ['bat', 'cmd', 'ps1'] }, - { name: 'All Files', extensions: ['*'] } - ] - : [{ name: 'All Files', extensions: ['*'] }] + ? [ + { name: t('general.activationScriptsFilter'), extensions: ['bat', 'cmd', 'ps1'] }, + { name: t('general.allFilesFilter'), extensions: ['*'] } + ] + : [{ name: t('general.allFilesFilter'), extensions: ['*'] }]As per coding guidelines.
.husky/pre-commit
Outdated
# In worktree without dependencies - warn but allow commit
echo ""
echo "⚠️ WARNING: node_modules not available in this worktree."
echo "WARNING: node_modules not available in this worktree."
🧹 Nitpick | 🔵 Trivial
Consider maintaining emoji consistency with other scripts.
The scripts/bump-version.js uses ⚠️ emoji in its warning() function. Removing the emoji here creates an inconsistency. If this was intentional for terminal compatibility, consider documenting the rationale or applying the same pattern to other warning messages.
🤖 Prompt for AI Agents
In @.husky/pre-commit at line 198, The pre-commit hook's echo warning ("echo
\"WARNING: node_modules not available in this worktree.\"") is inconsistent with
the ⚠️ emoji used by the warning() in scripts/bump-version.js; update the
pre-commit warning to include the same ⚠️ emoji (or vice versa) to maintain
consistency, or add a short comment in the hook explaining
terminal-compatibility rationale if you intentionally omitted the emoji so
reviewers know why the styles differ.
# Windows-specific dependency for LadybugDB/Graphiti
# pywin32 provides Windows system bindings required by real_ladybug
pywin32>=306; sys_platform == "win32" and python_version >= "3.12"
Remove the redundant pywin32 dependency entry.
Lines 26-28 duplicate the pywin32 dependency already declared on line 24. Since line 24 installs pywin32>=306 on all Windows systems (any Python version), the additional constraint python_version >= "3.12" on line 28 is redundant—pip will already have installed pywin32 for Python 3.12+ systems via line 24.
If the intent is to document that pywin32 is required for both MCP (all versions) and LadybugDB/Graphiti (3.12+ only), clarify this in the comments on lines 21-24 rather than adding a duplicate dependency entry.
♻️ Proposed fix: Remove duplicate entry
# Windows-specific dependency for LadybugDB/Graphiti
# pywin32 provides Windows system bindings required by real_ladybug
# Required on all Python versions on Windows (ACS-306) - MCP library unconditionally imports win32api
+# Also required by real_ladybug/graphiti-core on Python 3.12+ (Windows)
pywin32>=306; sys_platform == "win32"
-# Windows-specific dependency for LadybugDB/Graphiti
-# pywin32 provides Windows system bindings required by real_ladybug
-pywin32>=306; sys_platform == "win32" and python_version >= "3.12"
-
# Google AI (optional - for Gemini LLM and embeddings)
 google-generativeai>=0.8.0

🤖 Prompt for AI Agents
In `@apps/backend/requirements.txt` around lines 26 - 28, Remove the duplicate
pywin32 requirement entry ("pywin32>=306; sys_platform == \"win32\" and
python_version >= \"3.12\"") from requirements.txt and keep only the existing
pywin32 entry ("pywin32>=306") to avoid redundancy; then update the comment
above the remaining pywin32 line to clarify that pywin32 is required for MCP on
all Windows versions and also required by LadybugDB/Graphiti for Python 3.12+,
instead of adding a second dependency line.
function normalizePath(p: string): string {
  // Use path.normalize first, then lowercase for Windows drive letters
  const normalized = path.normalize(p);
  // On Windows, normalize drive letters to lowercase for comparison
  if (process.platform === 'win32' && /^[A-Z]:/.test(normalized)) {
    return normalized.charAt(0).toLowerCase() + normalized.slice(1);
  }
  return normalized;
}

describe('conda-project-structure', () => {
  beforeEach(() => {
    vi.clearAllMocks();
  });

  afterEach(() => {
    vi.resetAllMocks();
  });
Avoid process.platform here and add OS-matrix tests.
The test helper checks process.platform directly and the suite doesn’t exercise Windows/macOS/Linux permutations. Please use the platform abstraction and parameterize tests to cover all three OSes. Based on learnings and as per coding guidelines, platform-specific behavior should be mocked and tested across Windows, macOS, and Linux.
💡 Example approach
-import path from 'path';
+import path from 'path';
+import { isWindows, isMacOS, isLinux } from '../platform';
+vi.mock('../platform', () => ({
+ isWindows: vi.fn(),
+ isMacOS: vi.fn(),
+ isLinux: vi.fn(),
+}));
const mockIsWindows = isWindows as ReturnType<typeof vi.fn>;
const mockIsMacOS = isMacOS as ReturnType<typeof vi.fn>;
const mockIsLinux = isLinux as ReturnType<typeof vi.fn>;
-describe('conda-project-structure', () => {
+describe.each([
+ { name: 'windows', win: true, mac: false, linux: false },
+ { name: 'macos', win: false, mac: true, linux: false },
+ { name: 'linux', win: false, mac: false, linux: true },
+])('conda-project-structure (%s)', ({ win, mac, linux }) => {
+ beforeEach(() => {
+ mockIsWindows.mockReturnValue(win);
+ mockIsMacOS.mockReturnValue(mac);
+ mockIsLinux.mockReturnValue(linux);
+ });🤖 Prompt for AI Agents
In `@apps/frontend/src/main/__tests__/conda-project-structure.test.ts` around
lines 37 - 55, The helper normalizePath currently reads process.platform
directly; change tests to mock the platform abstraction instead and parameterize
the suite across 'win32', 'darwin', and 'linux' so behavior for Windows, macOS
and Linux is verified. Replace direct uses of process.platform in the test file
with a mocked platform value via the existing platform abstraction (or create
one if missing), then write a parameterized describe/it block that runs
normalizePath under each mocked platform and asserts expected normalized outputs
(including drive-letter lowercasing for 'win32'); ensure you use vi.spyOn or
vi.stub to set the platform abstraction and restore it in afterEach to avoid
global leaks.
const os = require('os');
const fs = require('fs');
🛠️ Refactor suggestion | 🟠 Major
Move require() calls to ES module imports at the top of the file.
Using CommonJS require() inside the function body is inconsistent with the rest of this ES module file. The fs module is already partially imported at line 3. Move these to proper ES module imports.
Suggested fix
Add to imports at top of file:
import os from 'os';
import { existsSync, writeFileSync, unlinkSync } from 'fs';

Then remove the inline requires:
// spec_runner.py will auto-start run.py after spec creation completes
// Write task description to a temp file to avoid command-line quoting issues
- const os = require('os');
- const fs = require('fs');
- const tempTaskFile = path.join(os.tmpdir(), `task-${Date.now()}-${Math.random().toString(36).substring(7)}.txt`);
- fs.writeFileSync(tempTaskFile, taskDescription, 'utf-8');
+ const tempTaskFile = path.join(os.tmpdir(), `task-${Date.now()}-${Math.random().toString(36).substring(7)}.txt`);
+ writeFileSync(tempTaskFile, taskDescription, 'utf-8');🤖 Prompt for AI Agents
In `@apps/frontend/src/main/agent/agent-manager.ts` around lines 139 - 140, Move
the inline CommonJS requires into ES module imports at the top of the file:
replace the in-function "const os = require('os'); const fs = require('fs');"
with "import os from 'os';" and import the specific fs helpers used (e.g.,
existsSync, writeFileSync, unlinkSync) instead of requiring the whole module;
then remove the inline require statements and update references in functions
like the ones in agent-manager.ts that call os or fs methods to use the imported
symbols (os, existsSync, writeFileSync, unlinkSync).
const tempTaskFile = path.join(os.tmpdir(), `task-${Date.now()}-${Math.random().toString(36).substring(7)}.txt`);
fs.writeFileSync(tempTaskFile, taskDescription, 'utf-8');
const args = [specRunnerPath, '--task-file', tempTaskFile, '--project-dir', projectPath];
Temp file is never cleaned up, causing a resource leak.
The temp file created here is never deleted after the process completes. Over time, this will accumulate orphan files in the OS temp directory. Additionally, writeFileSync can throw if the temp directory is not writable or disk is full—this should be wrapped in a try-catch to emit a proper error.
Suggested approach
- Wrap the file write in try-catch to handle write failures gracefully.
- Register cleanup for the temp file when the process exits. Since
spawnProcess is called at line 184, you could listen for process exit and delete the file.
+ let tempTaskFile: string | undefined;
+ try {
+ tempTaskFile = path.join(os.tmpdir(), `task-${Date.now()}-${Math.random().toString(36).substring(7)}.txt`);
+ writeFileSync(tempTaskFile, taskDescription, 'utf-8');
+ } catch (error) {
+ this.emit('error', taskId, `Failed to write task description to temp file: ${error instanceof Error ? error.message : String(error)}`);
+ return;
+ }
 const args = [specRunnerPath, '--task-file', tempTaskFile, '--project-dir', projectPath];

For cleanup, consider adding a listener for this specific task's exit event to delete the temp file:
// After spawnProcess call
const cleanupTempFile = () => {
if (tempTaskFile) {
try { unlinkSync(tempTaskFile); } catch { /* ignore */ }
}
};
this.once('exit', (exitTaskId: string) => {
if (exitTaskId === taskId) cleanupTempFile();
});
this.once('error', (errorTaskId: string) => {
if (errorTaskId === taskId) cleanupTempFile();
});🤖 Prompt for AI Agents
In `@apps/frontend/src/main/agent/agent-manager.ts` around lines 141 - 143, The
temp file created as tempTaskFile via writeFileSync can leak and throw; wrap the
fs.writeFileSync(tempTaskFile, taskDescription, 'utf-8') in a try-catch and emit
a clear error (or throw) if the write fails, and register a cleanup that deletes
tempTaskFile after the spawned task completes or errors: after calling
spawnProcess (or wherever the task is started, e.g., in the same block that uses
taskId and spawnProcess), add listeners for the task’s exit/error (or
this.once('exit'/'error') keyed by taskId) that call fs.unlinkSync(tempTaskFile)
inside their own try-catch to ignore unlink failures so the temp file is removed
reliably.
```
Backend (Python)
  ↓
Creates implementation_plan.json
  ↓
Emits IPC event: 'task:progress' with plan data
  ↓
Frontend (Electron Renderer)
  ↓
useIpc.ts: onTaskProgress handler (batched)
  ↓
task-store.ts: updateTaskFromPlan(taskId, plan)
  ↓
Creates subtasks from plan.phases.flatMap(phase => phase.subtasks)
  ↓
UI: TaskSubtasks.tsx renders subtasks
```
Add language identifiers and spacing around fenced blocks.
markdownlint (MD040/MD031) will fail because fenced blocks lack a language tag and aren’t surrounded by blank lines (e.g., the architecture diagram here and the log snippets later). Add language identifiers (text/typescript/log) and blank lines around fences throughout.
🛠️ Example fix (apply across the document)
-### Current Architecture
-
-```
+### Current Architecture
+
+```text
Backend (Python)
↓
Creates implementation_plan.json
↓
Emits IPC event: 'task:progress' with plan data
↓
Frontend (Electron Renderer)
↓
useIpc.ts: onTaskProgress handler (batched)
↓
task-store.ts: updateTaskFromPlan(taskId, plan)
↓
Creates subtasks from plan.phases.flatMap(phase => phase.subtasks)
↓
UI: TaskSubtasks.tsx renders subtasks
-```
+```
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
11-11: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@INVESTIGATION.md` around lines 11 - 27, In INVESTIGATION.md, update all
fenced code/diagram blocks (e.g., the architecture diagram shown between
"Backend (Python)" and "UI: TaskSubtasks.tsx renders subtasks") to include a
language identifier (use text/typescript/log as appropriate) and ensure there is
a blank line before and after each triple-backtick fence so markdownlint
MD040/MD031 are satisfied; apply the same change to other fenced blocks such as
log snippets elsewhere in the document.
**A. No Plan Validation Before State Update**

Current code in `updateTaskFromPlan` immediately creates subtasks from whatever plan data it receives:

```typescript
const subtasks: Subtask[] = plan.phases.flatMap((phase) =>
  phase.subtasks.map((subtask) => ({ ... }))
);
```

**Problem:** No check if plan is "ready" or "complete" before updating state.

**B. Missing Reload Trigger After Spec Completion**

When spec creation completes and the full plan is written:
- The IPC event might not fire again
- Or the event fires but the batching mechanism drops it
- Frontend state remains stuck with empty subtasks

**C. Race Condition in Batch Update Queue**

In `useIpc.ts:92-112`, the batching mechanism queues updates:
Use proper headings instead of bold emphasis for subsections.
markdownlint (MD036/MD022) flags bold text used as headings. Convert these labels to real headings and ensure blank lines around them.
🛠️ Example fix
-**A. No Plan Validation Before State Update**
+#### A. No Plan Validation Before State Update🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
108-108: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
111-111: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
119-119: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
126-126: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
129-129: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🤖 Prompt for AI Agents
In `@INVESTIGATION.md` around lines 108 - 129, Replace the bold-section labels
with proper Markdown headings and ensure blank lines around them: change "**A.
No Plan Validation Before State Update**", "**B. Missing Reload Trigger After
Spec Completion**", and "**C. Race Condition in Batch Update Queue**" into
heading lines (e.g., "## A. No Plan Validation Before State Update") and add a
blank line before and after each heading so markdownlint rules MD036/MD022 are
satisfied and the sections render as real headings.
## Summary

This PR implements comprehensive Conda environment management for Auto Claude, enabling isolated Python environments at both application and project levels. It also includes significant Windows/PowerShell compatibility fixes to ensure reliable terminal integration across platforms.
Markdown formatting: missing top-level heading.
Per markdownlint, the first line should be a top-level heading (#). Consider adding a title.
Proposed fix
+# Conda Environment Management
+
## Summary
This PR implements comprehensive Conda environment management...
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
1-1: First line in a file should be a top-level heading
(MD041, first-line-heading, first-line-h1)
🤖 Prompt for AI Agents
In `@PR_DESCRIPTION.md` around lines 1 - 3, Add a top-level Markdown heading at
the top of PR_DESCRIPTION.md by inserting a single line starting with "# "
(e.g., "# Auto Claude Conda Environment Management") above the existing "##
Summary" header so the file begins with a level-1 title and satisfies
markdownlint.
## Files Changed

### New Files (11)
- `conda-detector.ts` - Conda installation detection service
Markdown formatting: missing blank line before heading.
Per markdownlint (MD022), headings should be surrounded by blank lines.
Proposed fix
- [ ] Settings Persistence**: Navigate away and back, verify all conda settings retained
## Files Changed
+
### New Files (11)
- `conda-detector.ts` - Conda installation detection service

Also add a blank line before line 49 (### Modified Files).
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
36-36: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
🤖 Prompt for AI Agents
In `@PR_DESCRIPTION.md` around lines 34 - 37, Add a blank line above the "## Files
Changed" heading and also add a blank line before the "### Modified Files"
heading in PR_DESCRIPTION.md to satisfy MD022; edit the block containing "##
Files Changed", "### New Files (11)" and the later "### Modified Files" headings
so each heading is preceded by an empty line.
Review continued from previous batch...
**Example MINIMUM acceptable plan:**

```json
{
  "feature": "Feature name",
  "workflow_type": "feature",
  "phases": [
    {
      "id": "phase-1",
      "name": "Implementation",
      "type": "implementation",
      "subtasks": [
        {
          "id": "1.1",
          "description": "Implement the core feature",
          "service": "backend",
          "status": "pending"
        }
      ]
    }
  ]
}
```
Add blank lines around the JSON fenced block (markdownlint MD031).
The JSON example should be surrounded by blank lines to satisfy markdownlint.
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
25-25: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🤖 Prompt for AI Agents
In `@apps/backend/prompts/planner.md` around lines 24 - 46, The markdown fenced
JSON example under the "Example MINIMUM acceptable plan:" heading lacks
surrounding blank lines which violates markdownlint MD031; fix it by inserting a
blank line immediately before the opening ```json fence and a blank line
immediately after the closing ``` fence so the JSON block is separated from
surrounding text; update the section containing the example JSON (the code block
showing the minimal plan) accordingly.
**After completing this phase, signal progress:**

```bash
echo "[ROADMAP_PROGRESS] 52 Discovered target audience and pain points"
```
Minor markdown formatting issue.
Static analysis indicates a blank line issue around line 146. Consider adding a blank line after the code block to satisfy markdown linting rules (MD022).
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
146-146: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
🤖 Prompt for AI Agents
In `@apps/backend/prompts/roadmap_discovery.md` around lines 142 - 146, Add a
single blank line after the fenced code block that contains echo
"[ROADMAP_PROGRESS] 52 Discovered target audience and pain points" so the code
block is followed by an empty line (to satisfy MD022); locate the closing ```
fence in the roadmap_discovery.md content and insert one blank line immediately
after it.
apps/frontend/src/renderer/components/task-detail/TaskConfiguration.tsx (outdated, resolved)
it('should not update state after unmount', async () => {
  const { unmount } = renderHook(() => useCondaSetup({ type: 'app' }));

  unmount();

  // Try to trigger progress after unmount - should not throw
  act(() => {
    if (progressCallback) {
      progressCallback({
        step: 'creating',
        message: 'Creating...',
      });
    }
  });

  // State should remain at initial values (component is unmounted)
  // This test ensures no errors are thrown when updating unmounted component
});
🧹 Nitpick | 🔵 Trivial
Consider strengthening the unmount safety test assertion.
The test "should not update state after unmount" only verifies that no error is thrown, but doesn't actually assert that state remained unchanged. This is a common pattern but consider adding a spy or using result.current before unmount to verify the callback no longer affects state.
💡 Suggested improvement
it('should not update state after unmount', async () => {
- const { unmount } = renderHook(() => useCondaSetup({ type: 'app' }));
+ const { result, unmount } = renderHook(() => useCondaSetup({ type: 'app' }));
+
+ // Capture initial state
+ const initialStep = result.current.step;
unmount();
// Try to trigger progress after unmount - should not throw
act(() => {
if (progressCallback) {
progressCallback({
step: 'creating',
message: 'Creating...',
});
}
});
- // State should remain at initial values (component is unmounted)
- // This test ensures no errors are thrown when updating unmounted component
+ // Note: After unmount, result.current still holds the last snapshot
+ // The key verification is that no React warning about updating unmounted component occurs
});
🤖 Prompt for AI Agents
In `@apps/frontend/src/renderer/hooks/__tests__/useCondaSetup.test.ts` around
lines 526 - 543, The test for useCondaSetup should assert state didn't change
after unmount rather than only ensuring no throw: before unmount capture the
hook's current state (use renderHook(() => useCondaSetup(...)) and read
result.current or spy on the internal updater), then unmount and invoke the
progressCallback inside act; finally assert that the previously captured state
equals the current state (or that the state setter spy wasn't called) to prove
no updates occurred. Reference useCondaSetup, progressCallback, renderHook,
result.current and unmount when locating where to add the pre-unmount snapshot
and the post-callback assertion.
*In `.env` file:*

```bash
# Set git provider (optional, defaults to auto-detect)
GIT_PROVIDER=auto  # auto | github | gitlab
```

**Example Usage:**

```bash
# Auto-detect provider from git remote (default)
python run.py --spec 001 --create-pr

# Works automatically with both:
# - [email protected]:user/repo.git → Creates GitHub PR using gh CLI
# - [email protected]:user/repo.git → Creates GitLab MR using glab CLI
```
Add blank lines around fenced blocks to satisfy markdownlint.
The new bash fences in this section need surrounding blank lines (MD031).
🧼 Suggested fix
-*In `.env` file:*
-```bash
+*In `.env` file:*
+
+```bash
 # Set git provider (optional, defaults to auto-detect)
 GIT_PROVIDER=auto  # auto | github | gitlab

 **Example Usage:**

-```bash
+
+```bash
 # Auto-detect provider from git remote (default)
 python run.py --spec 001 --create-pr
@@
-
+

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
284-284: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
287-287: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
287-287: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
289-289: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🤖 Prompt for AI Agents
In @CLAUDE.md around lines 284 - 299, add blank lines before and after the fenced code blocks to satisfy markdownlint rule MD031: insert an empty line above the bash fence that follows *In `.env` file:* and below its closing fence, and likewise add an empty line above the bash fence that precedes "python run.py --spec 001 --create-pr" and below its closing fence in the Example Usage section, so both fenced blocks are separated from the surrounding text.
Review continued from previous batch...
| def __post_init__(self): | ||
| if self._glab_client is None: | ||
| from pathlib import Path | ||
|
|
||
| project_dir = Path(self._project_dir) if self._project_dir else Path.cwd() | ||
| self._glab_client = GitLabClient( | ||
| project_dir=project_dir, | ||
| config=self._config, | ||
| ) | ||
|
|
||
| @property | ||
| def provider_type(self) -> ProviderType: | ||
| return ProviderType.GITLAB | ||
|
|
||
| @property | ||
| def repo(self) -> str: | ||
| """Get the project in group/project format.""" | ||
| return self._config.project | ||
|
|
||
| @property | ||
| def glab_client(self) -> GitLabClient: | ||
| """Get the underlying GitLabClient.""" | ||
| return self._glab_client |
🛠️ Refactor suggestion | 🟠 Major
Use platform abstraction for project_dir resolution.
Lines 59-62 use pathlib.Path/Path.cwd() directly. Backend guidelines require the platform abstraction module for path handling, so please switch to the project's platform path utility.
🤖 Prompt for AI Agents
In `@apps/backend/runners/gitlab/providers/gitlab_provider.py` around lines 57 -
79, Replace direct pathlib.Path usage in __post_init__ with the project's
platform path abstraction: instead of using pathlib.Path and Path.cwd(), import
and call the platform path utility (the platform module's Path/cwd or
equivalent) to resolve project_dir from self._project_dir or the platform's
current working directory, then pass that resolved project_dir into GitLabClient
(the _glab_client initialization). Update the import and references so
__post_init__ uses the platform abstraction rather than pathlib.Path.
| async def fetch_prs(self, filters: PRFilters | None = None) -> list[PRData]: | ||
| """ | ||
| Fetch merge requests with optional filters. | ||
| """ | ||
| # For now, return empty list - would need to implement list_mr in glab_client | ||
| # This is a placeholder for the full implementation | ||
| return [] | ||
|
|
||
| async def fetch_pr_diff(self, number: int) -> str: | ||
| """Fetch the diff for a merge request.""" | ||
| return self.glab_client.get_mr_diff(number) |
🧹 Nitpick | 🔵 Trivial
fetch_prs is a stub—make the behavior explicit to callers.
Lines 135-141 return an empty list, which can be misread as "no MRs" rather than "not implemented." Consider raising NotImplementedError or documenting the placeholder explicitly; fetch_pr_diff is fine as-is.
🤖 Prompt for AI Agents
In `@apps/backend/runners/gitlab/providers/gitlab_provider.py` around lines 135 -
145, Replace the stubbed behavior in fetch_prs so callers don't mistake an empty
list for "no MRs": update the async method fetch_prs to explicitly raise
NotImplementedError with a clear message (e.g., "fetch_prs not implemented for
GitLab provider") or alternatively return a Result/NotImplemented sentinel and
document it in the docstring; keep fetch_pr_diff unchanged. Ensure you reference
the async method name fetch_prs in the GitLab provider class so callers see the
explicit not-implemented signal.
| useEffect(() => { | ||
| // Skip validation if paths haven't changed | ||
| if (lastValidatedRef.current.pythonPath === pythonPath && | ||
| lastValidatedRef.current.activationScript === activationScript) { | ||
| return; | ||
| } | ||
|
|
||
| // Defer validation to allow UI to render first (prevents blocking) | ||
| const timeoutId = setTimeout(() => { | ||
| lastValidatedRef.current = { pythonPath, activationScript }; | ||
|
|
||
| if (activationScript) { | ||
| validateEnvironment(); | ||
| } else { | ||
| checkPackages(); | ||
| } | ||
| }, 100); // 100ms delay to let UI render | ||
|
|
||
| return () => clearTimeout(timeoutId); | ||
| }, [pythonPath, activationScript]); |
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# First, read the full PythonPackageValidator.tsx file to understand the context
wc -l apps/frontend/src/renderer/components/settings/PythonPackageValidator.tsx
Repository: AndyMik90/Auto-Claude
Length of output: 142
🏁 Script executed:
#!/bin/bash
# Read the file to see the full context of the component and function definitions
cat -n apps/frontend/src/renderer/components/settings/PythonPackageValidator.tsx | head -100
Repository: AndyMik90/Auto-Claude
Length of output: 4479
🏁 Script executed:
#!/bin/bash
# Check for eslint configuration files
find . -name ".eslintrc*" -o -name "eslint.config.*" | head -20
Repository: AndyMik90/Auto-Claude
Length of output: 98
🏁 Script executed:
#!/bin/bash
# Continue reading the file to find validateEnvironment function
cat -n apps/frontend/src/renderer/components/settings/PythonPackageValidator.tsx | sed -n '100,200p'
Repository: AndyMik90/Auto-Claude
Length of output: 3643
🏁 Script executed:
#!/bin/bash
# Look for validateEnvironment function definition
rg -n "validateEnvironment" apps/frontend/src/renderer/components/settings/PythonPackageValidator.tsx
Repository: AndyMik90/Auto-Claude
Length of output: 185
🏁 Script executed:
#!/bin/bash
# Check the eslint configuration to see if exhaustive-deps is disabled
cat apps/frontend/eslint.config.mjs
Repository: AndyMik90/Auto-Claude
Length of output: 2569
Add function dependencies to useEffect or wrap functions in useCallback.
The checkPackages() and validateEnvironment() functions are called within the effect but not included in the dependency array. ESLint's exhaustive-deps rule (configured as warn) will flag this. While the current pattern works due to the setTimeout deferral and ref check, wrap these functions with useCallback to satisfy the linter and prevent potential stale closure issues if the effect logic changes.
🤖 Prompt for AI Agents
In `@apps/frontend/src/renderer/components/settings/PythonPackageValidator.tsx`
around lines 39 - 58, The useEffect in PythonPackageValidator references
checkPackages and validateEnvironment but does not include them in its
dependency array; to satisfy exhaustive-deps and avoid stale closures, wrap
checkPackages and validateEnvironment in useCallback (exporting from the
component scope) so their identities are stable, then add them to the effect
dependencies (i.e., change the dependency array to [pythonPath,
activationScript, checkPackages, validateEnvironment]) or alternatively include
the callbacks directly if you prefer to memoize them where they are declared.
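A rough sketch of that shape, with the existing validation and package-check bodies elided (this sits inside the PythonPackageValidator component; `useCallback`/`useEffect` come from React):

```typescript
// Sketch only: bodies elided; dependency lists are illustrative and should
// match whatever state each function actually reads.
const validateEnvironment = useCallback(async () => {
  // ...existing environment validation logic...
}, [pythonPath, activationScript]);

const checkPackages = useCallback(async () => {
  // ...existing package check logic...
}, [pythonPath]);

useEffect(() => {
  // Skip validation if paths haven't changed
  if (
    lastValidatedRef.current.pythonPath === pythonPath &&
    lastValidatedRef.current.activationScript === activationScript
  ) {
    return;
  }

  // Defer validation so the UI can render first
  const timeoutId = setTimeout(() => {
    lastValidatedRef.current = { pythonPath, activationScript };
    if (activationScript) {
      validateEnvironment();
    } else {
      checkPackages();
    }
  }, 100);

  return () => clearTimeout(timeoutId);
}, [pythonPath, activationScript, validateEnvironment, checkPackages]);
```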
| const handleReinstallEnvironment = async () => { | ||
| if (!activationScript) { | ||
| setError('No activation script configured'); | ||
| return; | ||
| } | ||
|
|
||
| if (!confirm('⚠️ WARNING: This will DELETE the existing Python environment and reinstall Python 3.12.\n\nAll installed packages will be removed.\n\nContinue?')) { | ||
| return; | ||
| } | ||
|
|
||
| setReinstalling(true); | ||
| setError(''); | ||
| setReinstallProgress({ step: 'Starting...', completed: 0, total: 3 }); | ||
|
|
||
| const unsubscribe = window.electronAPI.onPythonReinstallProgress((progress) => { | ||
| setReinstallProgress(progress); | ||
| }); | ||
|
|
||
| const result = await window.electronAPI.reinstallPythonEnvironment({ | ||
| activationScript, | ||
| pythonVersion: '3.12' | ||
| }); | ||
|
|
||
| unsubscribe(); | ||
| setReinstalling(false); | ||
| setReinstallProgress(null); | ||
|
|
||
| if (result.success && result.data?.success) { | ||
| setError(''); | ||
| setEnvStatus('valid'); | ||
| setEnvValidation({ | ||
| valid: true, | ||
| pythonPath: result.data.environmentPath, | ||
| version: result.data.pythonVersion, | ||
| error: null, | ||
| status: 'valid' | ||
| }); | ||
| // Re-validate environment and packages after reinstall | ||
| await validateEnvironment(); | ||
| } else { | ||
| setError(result.data?.error || result.error || 'Environment reinstall failed'); | ||
| setEnvStatus('invalid'); | ||
| } | ||
| }; |
🧹 Nitpick | 🔵 Trivial
Replace native confirm() with a proper modal dialog.
Using the browser's native confirm() dialog (line 160) doesn't fit well with Electron app UX patterns. Consider using a proper modal component that matches the application's design system.
Additionally, the warning text hardcodes special characters (the ⚠️ emoji) inline; move this copy into i18n keys rather than embedding it in the component.
🤖 Prompt for AI Agents
In `@apps/frontend/src/renderer/components/settings/PythonPackageValidator.tsx`
around lines 154 - 197, In handleReinstallEnvironment replace the native
confirm() call with the app's modal dialog component (e.g., openConfirmModal or
ConfirmDialog) so the confirmation follows the Electron/React UX; present the
same warning text via i18n keys (do not use the ⚠️ emoji inline) and await the
modal result (promise/callback) before proceeding, only calling
window.electronAPI.reinstallPythonEnvironment when the user confirms; ensure the
modal exposes a clear destructive action label, cancel path, and proper
focus/ARIA so dismissing cancels the reinstall flow and the existing
setReinstalling/setError/state updates remain unchanged.
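One possible shape, assuming the AlertDialog primitives already used elsewhere in the frontend (e.g., TaskFeedback.tsx); the i18n keys and the `performReinstall` split are illustrative placeholders, not existing APIs:

```tsx
// Sketch only: AlertDialog* components and t() are assumed to be imported as in
// other settings components; key names are placeholders.
const [showReinstallDialog, setShowReinstallDialog] = useState(false);

const requestReinstall = () => {
  if (!activationScript) {
    setError(t('python.noActivationScript'));
    return;
  }
  setShowReinstallDialog(true);
};

const performReinstall = async () => {
  setShowReinstallDialog(false);
  // ...existing reinstall flow from handleReinstallEnvironment...
};

// Rendered alongside the validator UI:
const reinstallDialog = (
  <AlertDialog open={showReinstallDialog} onOpenChange={setShowReinstallDialog}>
    <AlertDialogContent>
      <AlertDialogHeader>
        <AlertDialogTitle>{t('python.reinstallTitle')}</AlertDialogTitle>
        <AlertDialogDescription>{t('python.reinstallWarning')}</AlertDialogDescription>
      </AlertDialogHeader>
      <AlertDialogFooter>
        <AlertDialogCancel>{t('common.cancel')}</AlertDialogCancel>
        <AlertDialogAction onClick={performReinstall}>
          {t('python.reinstallConfirm')}
        </AlertDialogAction>
      </AlertDialogFooter>
    </AlertDialogContent>
  </AlertDialog>
);
```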
| return ( | ||
| <div className="space-y-4 relative"> | ||
| {/* Loading Overlay - Show during initial validation */} | ||
| {isInitialValidation && ( | ||
| <div className="absolute inset-0 bg-background/80 backdrop-blur-sm z-10 flex flex-col items-center justify-center rounded-lg"> | ||
| <Loader2 className="h-8 w-8 animate-spin text-primary mb-3" /> | ||
| <p className="text-sm font-medium text-foreground">Validating Python environment...</p> | ||
| <p className="text-xs text-muted-foreground mt-1">This may take a few seconds</p> | ||
| </div> | ||
| )} | ||
|
|
||
| {/* Error Display */} | ||
| {error && ( | ||
| <div className="p-3 bg-destructive/10 border border-destructive/20 rounded-md"> | ||
| <p className="text-sm font-medium text-destructive mb-1">Error</p> | ||
| <p className="text-xs text-destructive/80">{error}</p> | ||
| </div> | ||
| )} | ||
|
|
Hardcoded user-facing strings violate i18n guidelines.
Multiple hardcoded strings in the JSX need translation keys. As per the coding guidelines, all user-facing text in frontend components must use i18n translation keys.
Strings requiring translation
Lines that need t() calls:
- Line 214: "Validating Python environment..."
- Line 215: "This may take a few seconds"
- Line 222: "Error"
- Line 234: "Validating Python environment..."
- Lines 245-247: Status messages ("Python not found", "Wrong version: ...", "Validation error")
- Lines 252-257: Guidance messages
- Line 267: "Reinstall Python Environment"
- Line 290: "Install location:"
- Line 325: "Missing packages"
- Line 348: "Installation Output:"
- Line 350: "Starting installation..."
Example fix for loading overlay
{isInitialValidation && (
<div className="absolute inset-0 bg-background/80 backdrop-blur-sm z-10 flex flex-col items-center justify-center rounded-lg">
<Loader2 className="h-8 w-8 animate-spin text-primary mb-3" />
- <p className="text-sm font-medium text-foreground">Validating Python environment...</p>
- <p className="text-xs text-muted-foreground mt-1">This may take a few seconds</p>
+ <p className="text-sm font-medium text-foreground">{t('python.validatingEnvironment')}</p>
+ <p className="text-xs text-muted-foreground mt-1">{t('python.validatingHint')}</p>
</div>
)}
🤖 Prompt for AI Agents
In `@apps/frontend/src/renderer/components/settings/PythonPackageValidator.tsx`
around lines 208 - 226, The JSX in PythonPackageValidator contains hardcoded
user-facing strings (e.g., the loading overlay text, error header/body, status
messages like "Python not found", "Wrong version: ...", guidance messages,
button label "Reinstall Python Environment", "Install location:", "Missing
packages", "Installation Output:", "Starting installation...") and must be
replaced with i18n keys using the t() function; update the component
(PythonPackageValidator) to import and use useTranslation()/t, replace each
literal string in elements controlled by isInitialValidation, error, status
display logic, guidance blocks, buttons, and labels with appropriate t('...')
keys (preserve dynamic interpolations like version values via t('key', { version
})) and ensure any new translation keys are added to the locale files.
| const saveFeedbackForNextSubtask = async () => { | ||
| if (!newFeedback.trim()) return; | ||
|
|
||
| setIsSaving(true); | ||
| try { | ||
| const feedbackEntry: FeedbackEntry = { | ||
| timestamp: new Date().toISOString(), | ||
| message: newFeedback.trim(), | ||
| read: false | ||
| }; | ||
|
|
||
| const updatedFeedback = [...feedbackHistory, feedbackEntry]; | ||
|
|
||
| await window.electronAPI.updateTaskMetadata(task.id, { | ||
| ...metadata, | ||
| feedback: updatedFeedback | ||
| }); | ||
|
|
||
| setFeedbackHistory(updatedFeedback); | ||
| setMetadata({ ...metadata, feedback: updatedFeedback }); | ||
| setNewFeedback(''); | ||
| } catch (error) { | ||
| console.error('Failed to save feedback:', error); | ||
| } finally { | ||
| setIsSaving(false); | ||
| } | ||
| }; | ||
|
|
||
| /** | ||
| * Shows confirmation dialog for immediate feedback application. | ||
| * Only available when task is actively running. | ||
| */ | ||
| const promptApplyFeedbackNow = () => { | ||
| if (!newFeedback.trim()) return; | ||
| setShowApplyNowDialog(true); | ||
| }; | ||
|
|
||
| /** | ||
| * Saves new feedback and immediately stops/restarts the running task. | ||
| * Use this for urgent corrections that cannot wait for the next subtask. | ||
| * | ||
| * Flow: | ||
| * 1. Save feedback to task_metadata.json | ||
| * 2. Stop the running task | ||
| * 3. Poll until task stops (max 10 seconds) | ||
| * 4. Restart task to pick up feedback immediately | ||
| */ | ||
| const saveFeedbackAndRestartTask = async () => { | ||
| setShowApplyNowDialog(false); | ||
|
|
||
| // Save feedback first | ||
| await saveFeedbackForNextSubtask(); | ||
|
|
||
| // Stop the task | ||
| stopTask(task.id); | ||
|
|
||
| // Poll for task to stop, then restart | ||
| let attempts = 0; | ||
| const maxAttempts = 20; // 10 seconds max | ||
| const checkInterval = 500; // Check every 500ms | ||
|
|
||
| const pollAndRestart = async () => { | ||
| try { | ||
| // Get fresh task status | ||
| const tasks = await window.electronAPI.getTasks(task.projectId); | ||
| const currentTask = tasks.data?.find((t: any) => t.id === task.id); | ||
|
|
||
| if (!currentTask) { | ||
| console.error('Task not found after stop'); | ||
| return; | ||
| } | ||
|
|
||
| // Check if task has stopped | ||
| const isStopped = currentTask.status !== 'in_progress'; | ||
|
|
||
| if (isStopped) { | ||
| // Task stopped successfully, restart it | ||
| console.log('Task stopped, restarting with feedback...'); | ||
| startTask(task.id); | ||
| } else if (attempts < maxAttempts) { | ||
| // Keep polling | ||
| attempts++; | ||
| setTimeout(pollAndRestart, checkInterval); | ||
| } else { | ||
| console.error('Task failed to stop after 10 seconds, attempting restart anyway'); | ||
| startTask(task.id); | ||
| } | ||
| } catch (error) { | ||
| console.error('Failed to check task status:', error); | ||
| // Try to restart anyway | ||
| startTask(task.id); | ||
| } | ||
| }; | ||
|
|
||
| // Start polling after initial delay | ||
| setTimeout(pollAndRestart, checkInterval); | ||
| }; |
Don’t restart a task if feedback fails to save.
saveFeedbackForNextSubtask catches errors and returns void, so saveFeedbackAndRestartTask will stop/restart even when the save failed—users lose feedback but see a restart. Make the save step return success (or throw) and abort the restart on failure.
🔧 Proposed fix
- const saveFeedbackForNextSubtask = async () => {
- if (!newFeedback.trim()) return;
+ const saveFeedbackForNextSubtask = async (): Promise<boolean> => {
+ if (!newFeedback.trim()) return false;
@@
- setNewFeedback('');
+ setNewFeedback('');
+ return true;
} catch (error) {
console.error('Failed to save feedback:', error);
+ return false;
} finally {
setIsSaving(false);
}
};
@@
- // Save feedback first
- await saveFeedbackForNextSubtask();
+ // Save feedback first; abort if it fails
+ const saved = await saveFeedbackForNextSubtask();
+ if (!saved) return;
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| const saveFeedbackForNextSubtask = async () => { | |
| if (!newFeedback.trim()) return; | |
| setIsSaving(true); | |
| try { | |
| const feedbackEntry: FeedbackEntry = { | |
| timestamp: new Date().toISOString(), | |
| message: newFeedback.trim(), | |
| read: false | |
| }; | |
| const updatedFeedback = [...feedbackHistory, feedbackEntry]; | |
| await window.electronAPI.updateTaskMetadata(task.id, { | |
| ...metadata, | |
| feedback: updatedFeedback | |
| }); | |
| setFeedbackHistory(updatedFeedback); | |
| setMetadata({ ...metadata, feedback: updatedFeedback }); | |
| setNewFeedback(''); | |
| } catch (error) { | |
| console.error('Failed to save feedback:', error); | |
| } finally { | |
| setIsSaving(false); | |
| } | |
| }; | |
| /** | |
| * Shows confirmation dialog for immediate feedback application. | |
| * Only available when task is actively running. | |
| */ | |
| const promptApplyFeedbackNow = () => { | |
| if (!newFeedback.trim()) return; | |
| setShowApplyNowDialog(true); | |
| }; | |
| /** | |
| * Saves new feedback and immediately stops/restarts the running task. | |
| * Use this for urgent corrections that cannot wait for the next subtask. | |
| * | |
| * Flow: | |
| * 1. Save feedback to task_metadata.json | |
| * 2. Stop the running task | |
| * 3. Poll until task stops (max 10 seconds) | |
| * 4. Restart task to pick up feedback immediately | |
| */ | |
| const saveFeedbackAndRestartTask = async () => { | |
| setShowApplyNowDialog(false); | |
| // Save feedback first | |
| await saveFeedbackForNextSubtask(); | |
| // Stop the task | |
| stopTask(task.id); | |
| // Poll for task to stop, then restart | |
| let attempts = 0; | |
| const maxAttempts = 20; // 10 seconds max | |
| const checkInterval = 500; // Check every 500ms | |
| const pollAndRestart = async () => { | |
| try { | |
| // Get fresh task status | |
| const tasks = await window.electronAPI.getTasks(task.projectId); | |
| const currentTask = tasks.data?.find((t: any) => t.id === task.id); | |
| if (!currentTask) { | |
| console.error('Task not found after stop'); | |
| return; | |
| } | |
| // Check if task has stopped | |
| const isStopped = currentTask.status !== 'in_progress'; | |
| if (isStopped) { | |
| // Task stopped successfully, restart it | |
| console.log('Task stopped, restarting with feedback...'); | |
| startTask(task.id); | |
| } else if (attempts < maxAttempts) { | |
| // Keep polling | |
| attempts++; | |
| setTimeout(pollAndRestart, checkInterval); | |
| } else { | |
| console.error('Task failed to stop after 10 seconds, attempting restart anyway'); | |
| startTask(task.id); | |
| } | |
| } catch (error) { | |
| console.error('Failed to check task status:', error); | |
| // Try to restart anyway | |
| startTask(task.id); | |
| } | |
| }; | |
| // Start polling after initial delay | |
| setTimeout(pollAndRestart, checkInterval); | |
| }; | |
| const saveFeedbackForNextSubtask = async (): Promise<boolean> => { | |
| if (!newFeedback.trim()) return false; | |
| setIsSaving(true); | |
| try { | |
| const feedbackEntry: FeedbackEntry = { | |
| timestamp: new Date().toISOString(), | |
| message: newFeedback.trim(), | |
| read: false | |
| }; | |
| const updatedFeedback = [...feedbackHistory, feedbackEntry]; | |
| await window.electronAPI.updateTaskMetadata(task.id, { | |
| ...metadata, | |
| feedback: updatedFeedback | |
| }); | |
| setFeedbackHistory(updatedFeedback); | |
| setMetadata({ ...metadata, feedback: updatedFeedback }); | |
| setNewFeedback(''); | |
| return true; | |
| } catch (error) { | |
| console.error('Failed to save feedback:', error); | |
| return false; | |
| } finally { | |
| setIsSaving(false); | |
| } | |
| }; | |
| /** | |
| * Shows confirmation dialog for immediate feedback application. | |
| * Only available when task is actively running. | |
| */ | |
| const promptApplyFeedbackNow = () => { | |
| if (!newFeedback.trim()) return; | |
| setShowApplyNowDialog(true); | |
| }; | |
| /** | |
| * Saves new feedback and immediately stops/restarts the running task. | |
| * Use this for urgent corrections that cannot wait for the next subtask. | |
| * | |
| * Flow: | |
| * 1. Save feedback to task_metadata.json | |
| * 2. Stop the running task | |
| * 3. Poll until task stops (max 10 seconds) | |
| * 4. Restart task to pick up feedback immediately | |
| */ | |
| const saveFeedbackAndRestartTask = async () => { | |
| setShowApplyNowDialog(false); | |
| // Save feedback first; abort if it fails | |
| const saved = await saveFeedbackForNextSubtask(); | |
| if (!saved) return; | |
| // Stop the task | |
| stopTask(task.id); | |
| // Poll for task to stop, then restart | |
| let attempts = 0; | |
| const maxAttempts = 20; // 10 seconds max | |
| const checkInterval = 500; // Check every 500ms | |
| const pollAndRestart = async () => { | |
| try { | |
| // Get fresh task status | |
| const tasks = await window.electronAPI.getTasks(task.projectId); | |
| const currentTask = tasks.data?.find((t: any) => t.id === task.id); | |
| if (!currentTask) { | |
| console.error('Task not found after stop'); | |
| return; | |
| } | |
| // Check if task has stopped | |
| const isStopped = currentTask.status !== 'in_progress'; | |
| if (isStopped) { | |
| // Task stopped successfully, restart it | |
| console.log('Task stopped, restarting with feedback...'); | |
| startTask(task.id); | |
| } else if (attempts < maxAttempts) { | |
| // Keep polling | |
| attempts++; | |
| setTimeout(pollAndRestart, checkInterval); | |
| } else { | |
| console.error('Task failed to stop after 10 seconds, attempting restart anyway'); | |
| startTask(task.id); | |
| } | |
| } catch (error) { | |
| console.error('Failed to check task status:', error); | |
| // Try to restart anyway | |
| startTask(task.id); | |
| } | |
| }; | |
| // Start polling after initial delay | |
| setTimeout(pollAndRestart, checkInterval); | |
| }; |
🤖 Prompt for AI Agents
In `@apps/frontend/src/renderer/components/task-detail/TaskFeedback.tsx` around
lines 85 - 181, The save step (saveFeedbackForNextSubtask) currently swallows
errors and returns void, so saveFeedbackAndRestartTask proceeds to stop/restart
even when saving failed; change saveFeedbackForNextSubtask to return a boolean
success or rethrow the error (propagate failure) by removing the silent catch or
returning true/false, and update saveFeedbackAndRestartTask to await the save,
check the result (or catch the thrown error) and abort the stop/restart flow if
saving failed (do not call stopTask/startTask and keep the apply dialog open or
surface the error); reference saveFeedbackForNextSubtask and
saveFeedbackAndRestartTask when applying these changes.
| const formatTimestamp = (timestamp: string): string => { | ||
| const date = new Date(timestamp); | ||
| const now = new Date(); | ||
| const diffMs = now.getTime() - date.getTime(); | ||
| const diffMins = Math.floor(diffMs / 60000); | ||
| const diffHours = Math.floor(diffMs / 3600000); | ||
| const diffDays = Math.floor(diffMs / 86400000); | ||
|
|
||
| if (diffMins < 1) return 'Just now'; | ||
| if (diffMins < 60) return `${diffMins}m ago`; | ||
| if (diffHours < 24) return `${diffHours}h ago`; | ||
| if (diffDays < 7) return `${diffDays}d ago`; | ||
|
|
||
| return date.toLocaleDateString(); | ||
| }; | ||
|
|
||
| if (isLoading) { | ||
| return ( | ||
| <div className="flex items-center justify-center h-full"> | ||
| <Loader2 className="h-6 w-6 animate-spin text-muted-foreground" /> | ||
| </div> | ||
| ); | ||
| } | ||
|
|
||
| return ( | ||
| <div className="h-full flex flex-col"> | ||
| <ScrollArea className="flex-1"> | ||
| <div className="p-6 space-y-6"> | ||
| {/* Header */} | ||
| <div className="space-y-2"> | ||
| <div className="flex items-center gap-2"> | ||
| <MessageSquarePlus className="h-5 w-5 text-muted-foreground" /> | ||
| <h3 className="text-lg font-semibold">Task Feedback</h3> | ||
| </div> | ||
| <p className="text-sm text-muted-foreground"> | ||
| Provide corrections, additional context, or instructions to guide the agent. | ||
| {isTaskRunning && ' The agent will automatically pick up your feedback before starting the next subtask.'} | ||
| </p> | ||
| </div> | ||
|
|
||
| {/* Running Task Info */} | ||
| {isTaskRunning && ( | ||
| <div className="rounded-lg border border-blue-200 bg-blue-50 dark:border-blue-900 dark:bg-blue-950 p-4"> | ||
| <div className="flex items-start gap-3"> | ||
| <AlertCircle className="h-5 w-5 text-blue-600 dark:text-blue-400 mt-0.5" /> | ||
| <div className="flex-1 space-y-2"> | ||
| <p className="text-sm font-medium text-blue-900 dark:text-blue-100"> | ||
| Task is Running | ||
| </p> | ||
| <div className="space-y-1"> | ||
| <p className="text-xs text-blue-700 dark:text-blue-300"> | ||
| <strong>Submit Feedback:</strong> Saves feedback and waits for next subtask (recommended - no interruption) | ||
| </p> | ||
| <p className="text-xs text-blue-700 dark:text-blue-300"> | ||
| <strong>Apply Now:</strong> Immediately stops and restarts task with feedback (use for urgent corrections) | ||
| </p> | ||
| </div> | ||
| </div> | ||
| </div> | ||
| </div> | ||
| )} | ||
|
|
||
| {/* Human Review with Unread Feedback */} | ||
| {canRestartWithFeedback && ( | ||
| <div className="rounded-lg border border-green-200 bg-green-50 dark:border-green-900 dark:bg-green-950 p-4"> | ||
| <div className="flex items-start gap-3"> | ||
| <CheckCircle2 className="h-5 w-5 text-green-600 dark:text-green-400 mt-0.5" /> | ||
| <div className="flex-1 space-y-2"> | ||
| <p className="text-sm font-medium text-green-900 dark:text-green-100"> | ||
| Unread Feedback Available | ||
| </p> | ||
| <p className="text-xs text-green-700 dark:text-green-300"> | ||
| You have {feedbackHistory.filter(f => !f.read).length} unread feedback {feedbackHistory.filter(f => !f.read).length === 1 ? 'entry' : 'entries'}. Restart the task to incorporate your corrections. | ||
| </p> | ||
| <Button | ||
| onClick={promptRestartWithFeedback} | ||
| size="sm" | ||
| className="mt-2 bg-green-600 hover:bg-green-700" | ||
| > | ||
| <Zap className="h-4 w-4 mr-2" /> | ||
| Restart Task with Feedback | ||
| </Button> | ||
| </div> | ||
| </div> | ||
| </div> | ||
| )} | ||
|
|
||
| {/* New Feedback Input */} | ||
| <div className="space-y-3"> | ||
| <Label htmlFor="feedback-input" className="text-sm font-medium"> | ||
| New Feedback / Instructions | ||
| </Label> | ||
| <Textarea | ||
| id="feedback-input" | ||
| placeholder={`Example:\n\nIMPORTANT CORRECTIONS:\n- Use .NET Core 8 (NOT .NET Framework 4.8)\n- Implement AssemblyResolver for plugin loading\n- Load dependencies from self-contained plugin folder`} | ||
| value={newFeedback} | ||
| onChange={(e) => setNewFeedback(e.target.value)} | ||
| className="min-h-[150px] font-mono text-sm" | ||
| disabled={isSaving} | ||
| /> | ||
| <div className="flex items-center justify-between"> | ||
| <p className="text-xs text-muted-foreground"> | ||
| {newFeedback.length} characters | ||
| </p> | ||
| <div className="flex items-center gap-2"> | ||
| <Button | ||
| onClick={saveFeedbackForNextSubtask} | ||
| disabled={isSaving || !newFeedback.trim()} | ||
| size="sm" | ||
| variant="outline" | ||
| > | ||
| {isSaving ? ( | ||
| <> | ||
| <Loader2 className="h-4 w-4 mr-2 animate-spin" /> | ||
| Saving... | ||
| </> | ||
| ) : ( | ||
| <> | ||
| <Send className="h-4 w-4 mr-2" /> | ||
| Submit Feedback | ||
| </> | ||
| )} | ||
| </Button> | ||
| {isTaskRunning && ( | ||
| <Button | ||
| onClick={promptApplyFeedbackNow} | ||
| disabled={isSaving || !newFeedback.trim()} | ||
| size="sm" | ||
| className="bg-orange-600 hover:bg-orange-700" | ||
| > | ||
| <Zap className="h-4 w-4 mr-2" /> | ||
| Apply Now | ||
| </Button> | ||
| )} | ||
| </div> | ||
| </div> | ||
| </div> | ||
|
|
||
| {/* Feedback History */} | ||
| {feedbackHistory.length > 0 && ( | ||
| <div className="space-y-3"> | ||
| <div className="flex items-center justify-between"> | ||
| <Label className="text-sm font-medium">Feedback History</Label> | ||
| {hasUnreadFeedback && ( | ||
| <span className="text-xs px-2 py-0.5 rounded-full bg-blue-100 dark:bg-blue-900 text-blue-700 dark:text-blue-300"> | ||
| Unread feedback | ||
| </span> | ||
| )} | ||
| </div> | ||
|
|
||
| <div className="space-y-3"> | ||
| {feedbackHistory.slice().reverse().map((entry, reversedIndex) => { | ||
| const actualIndex = feedbackHistory.length - 1 - reversedIndex; | ||
| const isEditing = editingIndex === actualIndex; | ||
|
|
||
| return ( | ||
| <div | ||
| key={`${entry.timestamp}-${actualIndex}`} | ||
| className={cn( | ||
| 'rounded-lg border p-4 space-y-2', | ||
| !entry.read && 'border-blue-200 bg-blue-50/50 dark:border-blue-900 dark:bg-blue-950/50' | ||
| )} | ||
| > | ||
| {/* Header */} | ||
| <div className="flex items-center justify-between"> | ||
| <div className="flex items-center gap-2"> | ||
| <Clock className="h-3.5 w-3.5 text-muted-foreground" /> | ||
| <span className="text-xs text-muted-foreground"> | ||
| {formatTimestamp(entry.timestamp)} | ||
| </span> | ||
| </div> | ||
| <div className="flex items-center gap-2"> | ||
| {entry.read ? ( | ||
| <div className="flex items-center gap-1.5 text-xs text-green-600 dark:text-green-400"> | ||
| <CheckCircle2 className="h-3.5 w-3.5" /> | ||
| <span>Read by agent</span> | ||
| </div> | ||
| ) : ( | ||
| <> | ||
| <div className="flex items-center gap-1.5 text-xs text-blue-600 dark:text-blue-400"> | ||
| <AlertCircle className="h-3.5 w-3.5" /> | ||
| <span>Pending</span> | ||
| </div> | ||
| {!isEditing && ( | ||
| <div className="flex items-center gap-1 ml-2"> | ||
| <Button | ||
| size="sm" | ||
| variant="ghost" | ||
| className="h-7 w-7 p-0" | ||
| onClick={() => startEditingFeedback(actualIndex)} | ||
| disabled={isSaving} | ||
| > | ||
| <Edit2 className="h-3.5 w-3.5" /> | ||
| </Button> | ||
| <Button | ||
| size="sm" | ||
| variant="ghost" | ||
| className="h-7 w-7 p-0 text-destructive hover:text-destructive" | ||
| onClick={() => deleteFeedbackEntry(actualIndex)} | ||
| disabled={isSaving} | ||
| > | ||
| <Trash2 className="h-3.5 w-3.5" /> | ||
| </Button> | ||
| </div> | ||
| )} | ||
| </> | ||
| )} | ||
| </div> | ||
| </div> | ||
|
|
||
| {/* Message or Edit Textarea */} | ||
| {isEditing ? ( | ||
| <div className="space-y-2"> | ||
| <Textarea | ||
| value={editText} | ||
| onChange={(e) => setEditText(e.target.value)} | ||
| className="min-h-[100px] font-mono text-sm" | ||
| disabled={isSaving} | ||
| /> | ||
| <div className="flex items-center justify-end gap-2"> | ||
| <Button | ||
| size="sm" | ||
| variant="outline" | ||
| onClick={cancelEditingFeedback} | ||
| disabled={isSaving} | ||
| > | ||
| <X className="h-3.5 w-3.5 mr-1" /> | ||
| Cancel | ||
| </Button> | ||
| <Button | ||
| size="sm" | ||
| onClick={() => saveEditedFeedbackWithAutoEscalation(actualIndex)} | ||
| disabled={isSaving || !editText.trim()} | ||
| > | ||
| {isSaving ? ( | ||
| <Loader2 className="h-3.5 w-3.5 mr-1 animate-spin" /> | ||
| ) : ( | ||
| <Check className="h-3.5 w-3.5 mr-1" /> | ||
| )} | ||
| Save | ||
| </Button> | ||
| </div> | ||
| </div> | ||
| ) : ( | ||
| <div className="text-sm whitespace-pre-wrap font-mono bg-background/50 rounded p-3 border"> | ||
| {entry.message} | ||
| </div> | ||
| )} | ||
| </div> | ||
| ); | ||
| })} | ||
| </div> | ||
| </div> | ||
| )} | ||
|
|
||
| {/* Empty State */} | ||
| {feedbackHistory.length === 0 && ( | ||
| <div className="text-center py-8 text-muted-foreground"> | ||
| <MessageSquarePlus className="h-12 w-12 mx-auto mb-3 opacity-50" /> | ||
| <p className="text-sm">No feedback provided yet</p> | ||
| <p className="text-xs mt-1"> | ||
| Use the form above to provide instructions or corrections | ||
| </p> | ||
| </div> | ||
| )} | ||
| </div> | ||
| </ScrollArea> | ||
|
|
||
| {/* Apply Now Confirmation Dialog */} | ||
| <AlertDialog open={showApplyNowDialog} onOpenChange={setShowApplyNowDialog}> | ||
| <AlertDialogContent> | ||
| <AlertDialogHeader> | ||
| <AlertDialogTitle>Apply Feedback Immediately?</AlertDialogTitle> | ||
| <AlertDialogDescription className="space-y-3"> | ||
| <p> | ||
| This will immediately stop the running task and restart it with your feedback applied. | ||
| </p> | ||
| <div className="rounded border border-orange-200 bg-orange-50 dark:border-orange-900 dark:bg-orange-950 p-3"> | ||
| <p className="text-sm font-medium text-orange-900 dark:text-orange-100 mb-2"> | ||
| Your Feedback: | ||
| </p> | ||
| <div className="text-sm font-mono whitespace-pre-wrap text-orange-800 dark:text-orange-200"> | ||
| {newFeedback} | ||
| </div> | ||
| </div> | ||
| <p className="text-xs text-muted-foreground"> | ||
| The current subtask will be interrupted. Alternatively, click "Cancel" and use "Submit Feedback" to wait for the next subtask. | ||
| </p> | ||
| </AlertDialogDescription> | ||
| </AlertDialogHeader> | ||
| <AlertDialogFooter> | ||
| <AlertDialogCancel>Cancel</AlertDialogCancel> | ||
| <AlertDialogAction | ||
| onClick={saveFeedbackAndRestartTask} | ||
| className="bg-orange-600 hover:bg-orange-700" | ||
| > | ||
| <Zap className="h-4 w-4 mr-2" /> | ||
| Apply Now & Restart | ||
| </AlertDialogAction> | ||
| </AlertDialogFooter> | ||
| </AlertDialogContent> | ||
| </AlertDialog> | ||
|
|
||
| {/* Restart from Human Review Dialog */} | ||
| <AlertDialog open={showRestartDialog} onOpenChange={setShowRestartDialog}> | ||
| <AlertDialogContent> | ||
| <AlertDialogHeader> | ||
| <AlertDialogTitle>Restart Task with Feedback?</AlertDialogTitle> | ||
| <AlertDialogDescription className="space-y-3"> | ||
| <p> | ||
| This will change the task status from <strong>Human Review</strong> to <strong>In Progress</strong> and restart the agent with your feedback. | ||
| </p> | ||
| <div className="rounded border border-green-200 bg-green-50 dark:border-green-900 dark:bg-green-950 p-3"> | ||
| <p className="text-sm font-medium text-green-900 dark:text-green-100 mb-2"> | ||
| Unread Feedback ({feedbackHistory.filter(f => !f.read).length}): | ||
| </p> | ||
| <div className="space-y-2 max-h-[200px] overflow-y-auto"> | ||
| {feedbackHistory.filter(f => !f.read).map((fb, idx) => ( | ||
| <div key={idx} className="text-sm font-mono whitespace-pre-wrap text-green-800 dark:text-green-200 border-l-2 border-green-300 dark:border-green-700 pl-2"> | ||
| {fb.message} | ||
| </div> | ||
| ))} | ||
| </div> | ||
| </div> | ||
| <p className="text-xs text-muted-foreground"> | ||
| The agent will incorporate these corrections and continue implementation. | ||
| </p> | ||
| </AlertDialogDescription> | ||
| </AlertDialogHeader> | ||
| <AlertDialogFooter> | ||
| <AlertDialogCancel>Cancel</AlertDialogCancel> | ||
| <AlertDialogAction | ||
| onClick={restartTaskWithFeedback} | ||
| className="bg-green-600 hover:bg-green-700" | ||
| > | ||
| <Zap className="h-4 w-4 mr-2" /> | ||
| Restart with Feedback | ||
| </AlertDialogAction> |
🧩 Analysis chain
🏁 Script executed:
# Check if the file exists and its recent history
git log --oneline -n 20 apps/frontend/src/renderer/components/task-detail/TaskFeedback.tsx 2>/dev/null | head -10
Repository: AndyMik90/Auto-Claude
Length of output: 134
🏁 Script executed:
# Check the actual file to see imports and hardcoded strings
head -50 apps/frontend/src/renderer/components/task-detail/TaskFeedback.tsx
Repository: AndyMik90/Auto-Claude
Length of output: 1994
🏁 Script executed:
# Look at the full file to check for i18n usage patterns
wc -l apps/frontend/src/renderer/components/task-detail/TaskFeedback.tsx
Repository: AndyMik90/Auto-Claude
Length of output: 135
🏁 Script executed:
# Check if react-i18next is imported and used in this file
grep -n "i18next\|useTranslation\|import.*t'" apps/frontend/src/renderer/components/task-detail/TaskFeedback.tsx
Repository: AndyMik90/Auto-Claude
Length of output: 348
🏁 Script executed:
# Find similar task-related components to see the i18n pattern they follow
find apps/frontend/src/renderer/components -name "*.tsx" -type f | head -10 | xargs grep -l "tasks:" | head -5
Repository: AndyMik90/Auto-Claude
Length of output: 47
🏁 Script executed:
# Check if the specific hardcoded strings from the review are actually using i18n or are hardcoded
grep -n "Task Feedback\|Just now\|Unread Feedback Available\|Submit Feedback" apps/frontend/src/renderer/components/task-detail/TaskFeedback.tsx | head -20
Repository: AndyMik90/Auto-Claude
Length of output: 557
🏁 Script executed:
# Look at the formatTimestamp function specifically
sed -n '304,320p' apps/frontend/src/renderer/components/task-detail/TaskFeedback.tsx
Repository: AndyMik90/Auto-Claude
Length of output: 650
🏁 Script executed:
# Check the actual usage of t() in the file
grep -n "t('tasks:\|t('common:" apps/frontend/src/renderer/components/task-detail/TaskFeedback.tsx | head -20
Repository: AndyMik90/Auto-Claude
Length of output: 47
🏁 Script executed:
# See how many hardcoded strings vs i18n calls exist
echo "=== Hardcoded strings (common literals) ===" && \
grep -o "'[A-Z][^']*'" apps/frontend/src/renderer/components/task-detail/TaskFeedback.tsx | sort | uniq -c | sort -rn | head -15 && \
echo "=== i18n calls ===" && \
grep -c "t('tasks:\|t('common:" apps/frontend/src/renderer/components/task-detail/TaskFeedback.tsx || echo "0 i18n calls found"
Repository: AndyMik90/Auto-Claude
Length of output: 565
🏁 Script executed:
# Check the file creation/addition date more thoroughly
git log --follow --diff-filter=A -- apps/frontend/src/renderer/components/task-detail/TaskFeedback.tsx
Repository: AndyMik90/Auto-Claude
Length of output: 1851
🏁 Script executed:
# Count ALL hardcoded UI strings that are user-facing
grep -nE "return .*'[A-Z]|<.*>.*'[A-Z]|placeholder=|className=.*>[^<]*</|{showApplyNowDialog|AlertDialogTitle>|Button.*>|Label.*>" apps/frontend/src/renderer/components/task-detail/TaskFeedback.tsx | grep -v "i18n\|t(\|import\|className" | wc -l
Repository: AndyMik90/Auto-Claude
Length of output: 67
🏁 Script executed:
# List all direct hardcoded UI text strings (not imports/comments)
sed -n '304,641p' apps/frontend/src/renderer/components/task-detail/TaskFeedback.tsx | grep -oE "'[A-Za-z][^']*'|\"[A-Za-z][^\"]*\"" | grep -v "^'[a-z]" | sort | uniq
Repository: AndyMik90/Auto-Claude
Length of output: 2477
🏁 Script executed:
# Double-check by looking at actual hardcoded UI text in the snippet range
sed -n '330,380p' apps/frontend/src/renderer/components/task-detail/TaskFeedback.tsx
Repository: AndyMik90/Auto-Claude
Length of output: 2869
🏁 Script executed:
# Verify the exact pattern: t() is imported and set up, but not actually used
echo "=== useTranslation setup ===" && \
sed -n '8p;39p' apps/frontend/src/renderer/components/task-detail/TaskFeedback.tsx && \
echo "=== Total t() function calls in file ===" && \
grep -c "t(" apps/frontend/src/renderer/components/task-detail/TaskFeedback.tsx || echo "0"
Repository: AndyMik90/Auto-Claude
Length of output: 237
🏁 Script executed:
# Check the translation files to see what keys SHOULD be used
find apps/frontend/src/shared/i18n/locales -name "tasks.json" -type f | head -1 | xargs head -100
Repository: AndyMik90/Auto-Claude
Length of output: 3225
Replace all hardcoded UI strings with i18n keys.
This new component introduces extensive user-facing text (headers, descriptions, button labels, status messages, dialog copy, placeholders, and relative-time strings) without localization. Although useTranslation is imported, it is effectively unused—only 7 t() calls exist in 647 lines. All text must use t() with the namespace:section.key format (e.g., tasks:feedback.title), with proper interpolation/pluralization for dynamic content like "entry" vs "entries" and relative timestamps ("Just now", "5m ago", etc.).
Example fixes
- <h3 className="text-lg font-semibold">Task Feedback</h3>
+ <h3 className="text-lg font-semibold">{t('tasks:feedback.title')}</h3>
- if (diffMins < 1) return 'Just now';
- if (diffMins < 60) return `${diffMins}m ago`;
+ if (diffMins < 1) return t('common:time.justNow');
+ if (diffMins < 60) return t('common:time.minutesAgo', { count: diffMins });
- You have {feedbackHistory.filter(f => !f.read).length} unread feedback {feedbackHistory.filter(f => !f.read).length === 1 ? 'entry' : 'entries'}.
+ {t('tasks:feedback.unreadCount', { count: unreadCount })}
🤖 Prompt for AI Agents
In `@apps/frontend/src/renderer/components/task-detail/TaskFeedback.tsx` around
lines 304 - 641, The component currently renders many hardcoded UI strings;
replace them with i18n keys using the existing useTranslation hook and
t('tasks:feedback.<key>') (or similar namespace) for all user-facing
text—identify and update strings in formatTimestamp (use t with
pluralization/relative interpolation for "Just now", "Xm ago", etc.), the New
Feedback placeholder, headers and descriptions (Task Feedback, Task is Running,
Unread Feedback Available, Submit Feedback, Apply Now, Restart Task with
Feedback, Feedback History, No feedback provided yet), all AlertDialog
titles/descriptions/actions (Apply Feedback Immediately?, Restart Task with
Feedback?, Cancel, Apply Now & Restart, Restart with Feedback), dynamic status
labels within the feedbackHistory map (Read by agent, Pending, Pending actions
tooltips), character count, button labels and aria text, and any inline computed
strings (like unread count plurals and unread feedback list). Use t() with
interpolation/pluralization for dynamic values (feedbackHistory.filter(f =>
!f.read).length, timestamp deltas) and ensure calls occur where
functions/handlers like formatTimestamp, saveFeedbackForNextSubtask,
promptApplyFeedbackNow, promptRestartWithFeedback, saveFeedbackAndRestartTask,
restartTaskWithFeedback, startEditingFeedback, deleteFeedbackEntry,
saveEditedFeedbackWithAutoEscalation, cancelEditingFeedback are referenced so
the UI text there uses the new keys.
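As a rough sketch of the relative-time piece only; the key names are illustrative and would need matching entries in the locale files:

```typescript
// Sketch only: assumes t() from useTranslation() in TaskFeedback.tsx; the
// common:time.* keys are placeholders, not existing locale entries.
const formatTimestamp = (timestamp: string): string => {
  const date = new Date(timestamp);
  const diffMs = Date.now() - date.getTime();
  const diffMins = Math.floor(diffMs / 60000);
  const diffHours = Math.floor(diffMs / 3600000);
  const diffDays = Math.floor(diffMs / 86400000);

  if (diffMins < 1) return t('common:time.justNow');
  if (diffMins < 60) return t('common:time.minutesAgo', { count: diffMins });
  if (diffHours < 24) return t('common:time.hoursAgo', { count: diffHours });
  if (diffDays < 7) return t('common:time.daysAgo', { count: diffDays });

  return date.toLocaleDateString();
};
```

With i18next, passing `count` selects the plural form, so each of these keys would typically carry `_one` and `_other` variants in the locale JSON (for example `"minutesAgo_one": "{{count}}m ago"`).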
Co-Authored-By: Claude Opus 4.5 <[email protected]>
6b2b89a to f50bf2a
Backend Python fixes:
- Add encoding="utf-8" to file operations in coder.py
- Guard task_logger before logging, remove debug prints in feedback.py
- Normalize provider_setting in git_provider.py
- Add platform-aware glab resolution in worktree.py
- Fix api_get params and add_comment docstring in gitlab_provider.py
- Sort imports in providers/__init__.py and factory.py

Frontend platform abstraction:
- Replace process.platform checks with isWindows()/getPathDelimiter()
- Add isMac(), isLinux() helpers to python-path-utils.ts
- Fix agent-queue.ts, agent-process.ts, conda-detector.ts
- Fix oauth-handlers, claude-code-handlers, version-suggester
- Add timeout and GLAB_NO_PROMPT to gitlab utils
- Convert metadata-handlers to async fs operations

Frontend i18n/UI fixes:
- Localize ProviderIntegration, TaskConfiguration, TaskDetailModal
- Localize CondaDetectionDisplay, IntegrationSettings, Insights
- Fix factory.ts visibility mapping and error i18n
- Add functional state updates to GeneralSettings
- Fix slider.tsx disabled styling (data-[disabled]:)
- Remove debug console.log statements

Security and quality:
- Add sanitizeShellPath for command injection prevention
- Guard OAuth logs in GitLabIntegration
- Add caching to detectProjectStructure
- Fix TOCTOU race condition in conda-workspace-generator
- Tighten types in factory.ts and task-api.ts

Bug fix:
- Add PowerShell (.ps1) support to getEnvironmentPathFromScript

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@apps/backend/agents/coder.py`:
- Around line 403-467: The feedback_section string (constructed when
feedback_text is present and appended to prompt) uses overly absolute language
(e.g., "MUST implement...", "NEVER wrong") that prevents reasonable technical
judgment; modify the text in feedback_section to soften mandates—replace "MUST"
with "should" or "strongly encouraged", change absolutes like "NEVER" to
"generally" or "in most cases", and add an explicit clause that allows the agent
to perform safety/sanity checks and document any deviations using the existing
update_subtask_status and mark_feedback_read workflow (refer to variables
feedback_section, feedback_text, and prompt for where to edit the content).
Ensure the revised wording still emphasizes user priority but explicitly permits
informed technical decisions and requires documenting/marking feedback when
changes are made.
- Around line 644-652: Wrap the call to ImplementationPlan.load(spec_dir /
"implementation_plan.json") in a try/except (or check for a None/invalid return)
to defensively handle load failures: catch exceptions from
ImplementationPlan.load, log or report the error via print_status (or
processLogger) with context, and exit or return early instead of continuing to
access plan.phases; ensure the code references ImplementationPlan.load and
plan.phases so you validate the loaded plan before computing total_phases and
total_subtasks.
In `@apps/backend/agents/tools_pkg/tools/feedback.py`:
- Around line 21-28: Update the create_feedback_tools docstring to reflect that
the second parameter is intentionally unused: change the Args section to
document _project_dir (not project_dir) and note it is unused or reserved (e.g.,
"_project_dir: unused; reserved for API compatibility"), and ensure the
parameter name in the docstring matches the function signature in
create_feedback_tools so the docs are accurate.
- Around line 85-134: The read-modify-write of task_metadata.json
(metadata_file) is not protected against concurrent access: wrap the whole
critical section that opens, loads into metadata, updates feedback_list based on
feedback_indices, and writes back into an in-process async lock (e.g., a
module-level asyncio.Lock named metadata_lock) and use "async with
metadata_lock:" around the block that reads, modifies, and json.dump()s to avoid
lost updates; if multiple processes may run concurrently, also add or suggest a
file lock (e.g., fcntl or portalocker) around the same critical section so both
metadata_lock and a process-level file lock protect the operation.
In `@apps/backend/core/git_provider.py`:
- Around line 35-41: The code currently calls subprocess.run(...) with ["git",
"config", "--get", "remote.origin.url"] in git_provider.py which bypasses the
platform-aware helper; replace that subprocess.run call with the platform-aware
run_git(...) helper (core.git_executable.run_git or the run_git import used in
the project), passing the same args, cwd=project_dir and the equivalent
capture/text/timeout options, and handle the run_git result the same way the
subprocess.CompletedProcess was handled (or adapt to run_git's return
type/errors) so git discovery works on Windows/custom installs.
In `@apps/backend/core/worktree.py`:
- Around line 1029-1062: The PR creation uses the hardcoded "gh" command in
gh_args and subprocess.run (e.g., in do_create_pr and the other call around the
later invocation), causing failures for non-standard installs; add a
platform-aware get_gh_executable() function (mirroring
get_git_executable/get_glab_executable) that resolves the gh binary via
shutil.which and common OS-specific install paths and falls back to "gh" if not
found, then replace the hardcoded "gh" in gh_args and any other subprocess.run
calls with the resolved executable from get_gh_executable(); ensure all
references (gh_args list construction and do_create_pr subprocess.run) use that
function so subprocess calls are robust across platforms.
In `@apps/backend/prompts/roadmap_discovery.md`:
- Around line 97-101: Several fenced bash blocks for progress signals (the lines
beginning with the heading "**After completing this phase, signal progress:**"
followed by ```bash and the echo "[ROADMAP_PROGRESS] ..." lines) are missing a
blank line before the fence causing MD031 warnings; fix by inserting a single
blank line between the preceding sentence/heading and each ```bash fence for all
occurrences (e.g., the blocks containing echo "[ROADMAP_PROGRESS] 42...", 47...,
57..., 62..., 67... and the other occurrences around lines 120-123, 176-179,
191-194, 206-209) so each fenced code block is preceded by one empty line.
♻️ Duplicate comments (6)
apps/backend/prompts/roadmap_discovery.md (1)

187-193: Add concrete competitor analysis steps before signaling completion. This section still signals completion without any instructions to read and merge `competitor_analysis.json` insights.

apps/backend/runners/providers/__init__.py (1)

9-9: Sort re-exported imports to satisfy Ruff I001.

🧹 Proposed fix
-from .factory import ProviderConfig, create_provider
+from .factory import create_provider, ProviderConfig

apps/backend/runners/gitlab/providers/gitlab_provider.py (4)

55-63: Use platform abstraction for project_dir resolution. Lines 55-60 use `pathlib.Path` directly; backend guidelines require the platform abstraction for path handling.

133-139: fetch_prs should not silently return an empty list. Lines 133-139 return `[]`, which reads as "no MRs" rather than "not implemented." Prefer an explicit `NotImplementedError`.

✅ Suggested fix
-    # For now, return empty list - would need to implement list_mr in glab_client
-    # This is a placeholder for the full implementation
-    return []
+    raise NotImplementedError("fetch_prs not implemented for GitLab provider")

213-226: close_pr reports failure even after posting a note. Lines 223-226 post a comment but always return `False`, which is misleading. Consider raising `NotImplementedError` until actual close support exists.

✅ Suggested fix
-    if comment:
-        self.glab_client.post_mr_note(pr_number, comment)
-    # TODO: Implement MR closing in glab_client
-    return False
+    if comment:
+        self.glab_client.post_mr_note(pr_number, comment)
+    raise NotImplementedError("close_pr not implemented for GitLab provider")

282-330: No-op label/repo methods can silently succeed. The `pass`/default returns in lines 282-330 can mislead callers into believing actions succeeded. Prefer `NotImplementedError` (or explicit unsupported responses) for label and repository operations.

✅ Suggested fix (apply similarly to the other stubs)
-    # TODO: Implement label operations in glab_client
-    pass
+    raise NotImplementedError("Label operations not implemented for GitLab provider")
| # Inject user feedback if available (HIGHEST PRIORITY) | ||
| if feedback_text: | ||
| feedback_section = f""" | ||
| ═══════════════════════════════════════════════════════════════════════════════ | ||
| 🚨 CRITICAL: USER FEEDBACK - MANDATORY IMPLEMENTATION REQUIRED | ||
| ═══════════════════════════════════════════════════════════════════════════════ | ||
| The user has provided important feedback/corrections that MUST be incorporated: | ||
| {feedback_text} | ||
| 🚨 MANDATORY INSTRUCTIONS - DO NOT QUESTION OR DISMISS: | ||
| - This feedback takes ABSOLUTE PRIORITY over ALL other considerations | ||
| - You MUST implement this feedback exactly as stated - do NOT evaluate whether it "makes sense" | ||
| - Do NOT dismiss feedback as "not applicable" or "for a different project" | ||
| - If the feedback contradicts the original spec, the SPEC IS WRONG - follow the user's feedback | ||
| - If the feedback seems unclear, implement it to the best of your understanding | ||
| - The user is the final authority - their feedback overrides everything including your own reasoning | ||
| - User feedback is NEVER wrong or misapplied - implement it without question | ||
| 🔄 REOPEN COMPLETED WORK WHEN NEEDED: | ||
| If feedback requires changing work from COMPLETED subtasks, you MUST: | ||
| 1. Use update_subtask_status to change those subtasks from "completed" back to "pending" | ||
| 2. This allows you to revisit and fix that work in future iterations | ||
| Example: Feedback says "Use .NET Core 8 instead of .NET Framework 4.8" | ||
| - If subtask "Setup C# project structure" is already completed with Framework 4.8 | ||
| - Call: update_subtask_status(subtask_id="1.1", status="pending", notes="User feedback: switch to .NET Core 8") | ||
| - The subtask will be picked up again and you can make the correction | ||
| 🎯 REQUIRED: After addressing feedback, you MUST call mark_feedback_read: | ||
| 1. Use mark_feedback_read to mark which feedback items you addressed | ||
| 2. Provide the 0-based indices as a comma-separated string (e.g., "0" or "0,1,2") | ||
| 3. "Addressing feedback" means ANY of these actions: | ||
| - Implementing the requested change | ||
| - Making an informed technical decision with clear justification | ||
| - Documenting why a suggestion cannot be implemented | ||
| - Reopening subtasks to address the feedback later | ||
| Example workflow: | ||
| User provided 3 feedback items (indices 0, 1, 2). | ||
| - Index 0: "Use .NET Core 8 if compatible, otherwise netstandard2.0" | ||
| → You analyze and choose netstandard2.0 with justification = ADDRESSED | ||
| - Index 1: "Add AssemblyResolver" | ||
| → You document it in README for consuming apps = ADDRESSED | ||
| - Index 2: "Use realistic test data" - requires reopening subtask 3.2 | ||
| → You reopen the subtask for later work = NOT ADDRESSED YET (wait until actually fixed) | ||
| Actions: | ||
| 1. Analyze feedback[0]: netstandard2.0 is REQUIRED for NT8 compatibility, document reasoning | ||
| 2. Address feedback[1]: Document AssemblyResolver pattern in README | ||
| 3. update_subtask_status(subtask_id="3.2", status="pending", notes="User feedback: realistic test data") | ||
| 4. mark_feedback_read(feedback_indices="0,1") # Mark 0 and 1 as addressed (comma-separated string) | ||
| CRITICAL: Mark feedback as read when you have FULLY ADDRESSED it in any way: | ||
| - If you implemented the change → mark as read | ||
| - If you made a technical decision with justification → mark as read | ||
| - If you documented why it cannot be done → mark as read | ||
| - If you ONLY reopened subtasks for later → do NOT mark as read yet (wait until fixed) | ||
| ═══════════════════════════════════════════════════════════════════════════════ | ||
| """ | ||
| prompt += feedback_section | ||
|
|
🧹 Nitpick | 🔵 Trivial
Consider softening the absolute language in feedback instructions.
The feedback section is comprehensive, but the language at lines 416-422 is extremely prescriptive ("You MUST implement this feedback exactly as stated", "User feedback is NEVER wrong"). This could cause the agent to implement harmful or mistaken feedback without appropriate judgment.
While the instructions at lines 437-441 provide an escape hatch for "informed technical decision with clear justification," the emphatic phrasing above may override this nuance.
💡 Suggested adjustment
Consider balancing the instructions to emphasize user authority while still allowing for sanity checks:
-🚨 MANDATORY INSTRUCTIONS - DO NOT QUESTION OR DISMISS:
-- This feedback takes ABSOLUTE PRIORITY over ALL other considerations
-- You MUST implement this feedback exactly as stated - do NOT evaluate whether it "makes sense"
-- Do NOT dismiss feedback as "not applicable" or "for a different project"
+🚨 HIGH PRIORITY INSTRUCTIONS:
+- User feedback takes priority over the original spec
+- Implement feedback as requested, or document a clear technical reason if you cannot
+- Do NOT dismiss feedback without careful consideration
🤖 Prompt for AI Agents
In `@apps/backend/agents/coder.py` around lines 403 - 467, The feedback_section
string (constructed when feedback_text is present and appended to prompt) uses
overly absolute language (e.g., "MUST implement...", "NEVER wrong") that
prevents reasonable technical judgment; modify the text in feedback_section to
soften mandates—replace "MUST" with "should" or "strongly encouraged", change
absolutes like "NEVER" to "generally" or "in most cases", and add an explicit
clause that allows the agent to perform safety/sanity checks and document any
deviations using the existing update_subtask_status and mark_feedback_read
workflow (refer to variables feedback_section, feedback_text, and prompt for
where to edit the content). Ensure the revised wording still emphasizes user
priority but explicitly permits informed technical decisions and requires
documenting/marking feedback when changes are made.
| from implementation_plan import ImplementationPlan | ||
|
|
||
| plan = ImplementationPlan.load(spec_dir / "implementation_plan.json") | ||
| total_phases = len(plan.phases) | ||
| total_subtasks = sum(len(p.subtasks) for p in plan.phases) | ||
| print_status( | ||
| f"Planning complete: {total_phases} phase(s), {total_subtasks} subtask(s)", | ||
| "success", | ||
| ) |
🧹 Nitpick | 🔵 Trivial
Consider adding a defensive check for ImplementationPlan.load() failure.
While the validator confirmed the plan is valid, ImplementationPlan.load() could still fail (e.g., the file was modified between validation and load, or the load method parses differently). A try/except would make this more robust.
💡 Suggested defensive pattern
# Show plan statistics
from implementation_plan import ImplementationPlan
- plan = ImplementationPlan.load(spec_dir / "implementation_plan.json")
- total_phases = len(plan.phases)
- total_subtasks = sum(len(p.subtasks) for p in plan.phases)
- print_status(
- f"Planning complete: {total_phases} phase(s), {total_subtasks} subtask(s)",
- "success",
- )
+ try:
+ plan = ImplementationPlan.load(spec_dir / "implementation_plan.json")
+ total_phases = len(plan.phases)
+ total_subtasks = sum(len(p.subtasks) for p in plan.phases)
+ print_status(
+ f"Planning complete: {total_phases} phase(s), {total_subtasks} subtask(s)",
+ "success",
+ )
+ except Exception as e:
+ logger.warning(f"Could not load plan statistics: {e}")
+ print_status("Planning complete", "success")
🤖 Prompt for AI Agents
In `@apps/backend/agents/coder.py` around lines 644 - 652, Wrap the call to
ImplementationPlan.load(spec_dir / "implementation_plan.json") in a try/except
(or check for a None/invalid return) to defensively handle load failures: catch
exceptions from ImplementationPlan.load, log or report the error via
print_status (or processLogger) with context, and exit or return early instead
of continuing to access plan.phases; ensure the code references
ImplementationPlan.load and plan.phases so you validate the loaded plan before
computing total_phases and total_subtasks.
| def create_feedback_tools(spec_dir: Path, _project_dir: Path) -> list: | ||
| """ | ||
| Create feedback management tools. | ||
| Args: | ||
| spec_dir: Path to the spec directory | ||
| project_dir: Path to the project root | ||
Update the docstring to reflect _project_dir being intentionally unused.
The argument docs still mention project_dir, which is misleading now that the parameter is _project_dir.
✏️ Proposed fix
- project_dir: Path to the project root
+ _project_dir: Path to the project root (unused; kept for signature compatibility)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| def create_feedback_tools(spec_dir: Path, _project_dir: Path) -> list: | |
| """ | |
| Create feedback management tools. | |
| Args: | |
| spec_dir: Path to the spec directory | |
| project_dir: Path to the project root | |
| def create_feedback_tools(spec_dir: Path, _project_dir: Path) -> list: | |
| """ | |
| Create feedback management tools. | |
| Args: | |
| spec_dir: Path to the spec directory | |
| _project_dir: Path to the project root (unused; kept for signature compatibility) | |
| Returns: | |
| List of feedback tool functions | |
| """ |
🤖 Prompt for AI Agents
In `@apps/backend/agents/tools_pkg/tools/feedback.py` around lines 21 - 28, Update
the create_feedback_tools docstring to reflect that the second parameter is
intentionally unused: change the Args section to document _project_dir (not
project_dir) and note it is unused or reserved (e.g., "_project_dir: unused;
reserved for API compatibility"), and ensure the parameter name in the docstring
matches the function signature in create_feedback_tools so the docs are
accurate.
| metadata_file = spec_dir / "task_metadata.json" | ||
| if not metadata_file.exists(): | ||
| return { | ||
| "content": [ | ||
| { | ||
| "type": "text", | ||
| "text": "Error: task_metadata.json not found", | ||
| } | ||
| ] | ||
| } | ||
|
|
||
| try: | ||
| with open(metadata_file) as f: | ||
| metadata = json.load(f) | ||
|
|
||
| feedback_list = metadata.get("feedback", []) | ||
| if not feedback_list: | ||
| return { | ||
| "content": [ | ||
| { | ||
| "type": "text", | ||
| "text": "No feedback entries found in task_metadata.json", | ||
| } | ||
| ] | ||
| } | ||
|
|
||
| # Mark specified feedback items as read | ||
| marked_count = 0 | ||
| invalid_indices = [] | ||
| for idx in feedback_indices: | ||
| if 0 <= idx < len(feedback_list): | ||
| feedback_list[idx]["read"] = True | ||
| marked_count += 1 | ||
| else: | ||
| invalid_indices.append(idx) | ||
|
|
||
| if invalid_indices: | ||
| return { | ||
| "content": [ | ||
| { | ||
| "type": "text", | ||
| "text": f"Error: Invalid feedback indices: {invalid_indices}. " | ||
| f"Valid range is 0-{len(feedback_list) - 1}", | ||
| } | ||
| ] | ||
| } | ||
|
|
||
| # Save updated metadata | ||
| with open(metadata_file, "w") as f: | ||
| json.dump(metadata, f, indent=2) |
Guard the read‑modify‑write to avoid lost updates under concurrency.
Concurrent tool calls can race and overwrite each other’s updates to task_metadata.json. Add an in‑process async lock around the critical section (and consider a file lock if multiple processes can run this).
🔒 Proposed fix (in‑process lock)
+import asyncio
import json
from pathlib import Path
from typing import Any
@@
+_feedback_lock = asyncio.Lock()
@@
- metadata_file = spec_dir / "task_metadata.json"
- if not metadata_file.exists():
- return {
- "content": [
- {
- "type": "text",
- "text": "Error: task_metadata.json not found",
- }
- ]
- }
-
- try:
- with open(metadata_file) as f:
- metadata = json.load(f)
+ metadata_file = spec_dir / "task_metadata.json"
+ try:
+ async with _feedback_lock:
+ if not metadata_file.exists():
+ return {
+ "content": [
+ {
+ "type": "text",
+ "text": "Error: task_metadata.json not found",
+ }
+ ]
+ }
+
+ with open(metadata_file) as f:
+ metadata = json.load(f)
@@
- with open(metadata_file, "w") as f:
- json.dump(metadata, f, indent=2)
+ with open(metadata_file, "w") as f:
+ json.dump(metadata, f, indent=2)
🤖 Prompt for AI Agents
In `@apps/backend/agents/tools_pkg/tools/feedback.py` around lines 85 - 134, The
read-modify-write of task_metadata.json (metadata_file) is not protected against
concurrent access: wrap the whole critical section that opens, loads into
metadata, updates feedback_list based on feedback_indices, and writes back into
an in-process async lock (e.g., a module-level asyncio.Lock named metadata_lock)
and use "async with metadata_lock:" around the block that reads, modifies, and
json.dump()s to avoid lost updates; if multiple processes may run concurrently,
also add or suggest a file lock (e.g., fcntl or portalocker) around the same
critical section so both metadata_lock and a process-level file lock protect the
operation.
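For illustration, a minimal sketch of how the in-process lock and a cross-process file lock could be combined (assuming a helper named mark_feedback_read_impl; fcntl is POSIX-only, so Windows would need something like portalocker instead):

```python
# Minimal sketch only - not the repository's actual implementation.
import asyncio
import fcntl  # POSIX-only advisory file locking; use portalocker on Windows
import json
from pathlib import Path

_feedback_lock = asyncio.Lock()  # serializes concurrent tool calls in this process


async def mark_feedback_read_impl(spec_dir: Path, indices: list[int]) -> dict:
    """Mark feedback entries as read with both in-process and file-level locking."""
    metadata_file = spec_dir / "task_metadata.json"
    async with _feedback_lock:
        # r+ keeps one descriptor open for the whole read-modify-write cycle
        with open(metadata_file, "r+", encoding="utf-8") as f:
            fcntl.flock(f, fcntl.LOCK_EX)  # block other processes until we finish
            try:
                metadata = json.load(f)
                for idx in indices:  # index validation omitted for brevity
                    metadata["feedback"][idx]["read"] = True
                f.seek(0)
                f.truncate()
                json.dump(metadata, f, indent=2)
            finally:
                fcntl.flock(f, fcntl.LOCK_UN)
    return {"content": [{"type": "text", "text": f"Marked {len(indices)} item(s) as read"}]}
```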
| result = subprocess.run( | ||
| ["git", "config", "--get", "remote.origin.url"], | ||
| cwd=project_dir, | ||
| capture_output=True, | ||
| text=True, | ||
| timeout=5, | ||
| ) |
🧩 Analysis chain
🏁 Script executed:
# Find the git_provider.py file
fd -t f "git_provider.py" apps/backend/Repository: AndyMik90/Auto-Claude
Length of output: 98
🏁 Script executed:
# Check if core.git_executable module exists
fd -t f "git_executable" apps/backend/Repository: AndyMik90/Auto-Claude
Length of output: 100
🏁 Script executed:
# List files in apps/backend/core/ to understand structure
ls -la apps/backend/core/ 2>/dev/null | head -20 || find apps/backend/core -type f -name "*.py" | head -20
Repository: AndyMik90/Auto-Claude
Length of output: 1147
🏁 Script executed:
# Read git_provider.py with line numbers, focus on lines 35-41
cat -n apps/backend/core/git_provider.py | head -50
Repository: AndyMik90/Auto-Claude
Length of output: 1589
🏁 Script executed:
# Read git_executable.py to check available functions
cat -n apps/backend/core/git_executable.py
Repository: AndyMik90/Auto-Claude
Length of output: 5608
Route git detection through platform-aware helpers.
Lines 35-41 invoke git directly via subprocess.run(), bypassing the platform abstraction provided by core.git_executable.run_git(). This risks failures on custom-path installs, Windows environments, or systems where git is not in PATH. Use run_git() instead, which handles executable discovery across platforms.
Suggested fix
-import subprocess
+from core.git_executable import run_git
@@
- result = subprocess.run(
- ["git", "config", "--get", "remote.origin.url"],
- cwd=project_dir,
- capture_output=True,
- text=True,
- timeout=5,
- )
+ result = run_git(
+ ["config", "--get", "remote.origin.url"],
+ cwd=project_dir,
+ timeout=5,
+ )
🤖 Prompt for AI Agents
In `@apps/backend/core/git_provider.py` around lines 35 - 41, The code currently
calls subprocess.run(...) with ["git", "config", "--get", "remote.origin.url"]
in git_provider.py which bypasses the platform-aware helper; replace that
subprocess.run call with the platform-aware run_git(...) helper
(core.git_executable.run_git or the run_git import used in the project), passing
the same args, cwd=project_dir and the equivalent capture/text/timeout options,
and handle the run_git result the same way the subprocess.CompletedProcess was
handled (or adapt to run_git's return type/errors) so git discovery works on
Windows/custom installs.
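For reference, a rough sketch of what such a platform-aware wrapper typically looks like; the real core.git_executable.run_git signature may differ, so treat the names and parameters here as assumptions:

```python
# Hypothetical sketch; core.git_executable may expose a different API.
import shutil
import subprocess
from pathlib import Path


def get_git_executable() -> str:
    """Resolve the git binary from PATH, falling back to the bare command name."""
    return shutil.which("git") or "git"


def run_git(args: list[str], cwd: Path, timeout: int = 30) -> subprocess.CompletedProcess:
    """Run a git subcommand with captured text output."""
    return subprocess.run(
        [get_git_executable(), *args],
        cwd=cwd,
        capture_output=True,
        text=True,
        timeout=timeout,
    )


# Usage mirroring the remote-URL lookup above:
# result = run_git(["config", "--get", "remote.origin.url"], cwd=project_dir, timeout=5)
# remote_url = result.stdout.strip() if result.returncode == 0 else None
```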
| gh_args = [ | ||
| "gh", | ||
| "pr", | ||
| "create", | ||
| "--base", | ||
| target, | ||
| "--head", | ||
| info.branch, | ||
| "--title", | ||
| title, | ||
| "--body", | ||
| body, | ||
| ] | ||
| if draft: | ||
| gh_args.append("--draft") | ||
|
|
||
| def is_pr_retryable(stderr: str) -> bool: | ||
| """Check if PR creation error is retryable (network or HTTP 5xx).""" | ||
| return _is_retryable_network_error(stderr) or _is_retryable_http_error( | ||
| stderr | ||
| ) | ||
|
|
||
| def do_create_pr() -> tuple[bool, PullRequestResult | None, str]: | ||
| """Execute PR creation for retry wrapper.""" | ||
| try: | ||
| result = subprocess.run( | ||
| gh_args, | ||
| cwd=info.path, | ||
| capture_output=True, | ||
| text=True, | ||
| encoding="utf-8", | ||
| errors="replace", | ||
| timeout=self.GH_CLI_TIMEOUT, | ||
| ) |
🧩 Analysis chain
🏁 Script executed:
# First, let's check the structure and find the relevant files
git ls-files apps/backend/core/ | grep -E "(worktree|gh_executable|git_executable)" | head -20
Repository: AndyMik90/Auto-Claude
Length of output: 130
🏁 Script executed:
# Check the specific lines mentioned in the review
sed -n '1025,1065p' apps/backend/core/worktree.py | cat -n
Repository: AndyMik90/Auto-Claude
Length of output: 1673
🏁 Script executed:
# Also check lines 1341-1349
sed -n '1337,1352p' apps/backend/core/worktree.py | cat -n
Repository: AndyMik90/Auto-Claude
Length of output: 713
🏁 Script executed:
# Search for any existing executable detection modules
fd -e py . apps/backend/core/ | xargs grep -l "get_.*_executable\|get_executable" 2>/dev/null
Repository: AndyMik90/Auto-Claude
Length of output: 171
🏁 Script executed:
# Check if gh_executable module exists
ls -la apps/backend/core/gh_executable.py 2>/dev/null || echo "Module not found"
Repository: AndyMik90/Auto-Claude
Length of output: 81
🏁 Script executed:
# Check the git_executable module to understand the pattern
cat apps/backend/core/git_executable.py
Repository: AndyMik90/Auto-Claude
Length of output: 4614
🏁 Script executed:
# Search for all "gh" hardcoded invocations in worktree.py
rg '"gh"' apps/backend/core/worktree.py -nRepository: AndyMik90/Auto-Claude
Length of output: 176
🏁 Script executed:
# Check the imports in worktree.py to see what's already imported
head -50 apps/backend/core/worktree.py | cat -n
Repository: AndyMik90/Auto-Claude
Length of output: 1760
🏁 Script executed:
# Check if there's a platform abstraction module
find apps/backend -name "*platform*" -type f
Repository: AndyMik90/Auto-Claude
Length of output: 47
🏁 Script executed:
# Check if get_glab_executable has a similar pattern we can follow
sed -n '39,80p' apps/backend/core/worktree.py | cat -n
Repository: AndyMik90/Auto-Claude
Length of output: 1854
🏁 Script executed:
# Search for any existing gh_executable or github executable finder
rg "get_gh_executable|gh_executable|github.*executable" apps/backend/ --type pyRepository: AndyMik90/Auto-Claude
Length of output: 47
🏁 Script executed:
# Check the full context around the gh usages to understand better
sed -n '1020,1075p' apps/backend/core/worktree.py | cat -n
Repository: AndyMik90/Auto-Claude
Length of output: 2371
Create platform-aware get_gh_executable() function instead of hardcoding "gh" invocations.
Lines 1030 and 1343 hardcode "gh" directly in subprocess calls, which fails on custom GitHub CLI installations and non-standard paths. The codebase already follows this pattern for other CLIs (e.g., get_git_executable(), get_glab_executable()). Add a similar executable discovery function that checks PATH and common installation locations before falling back to the bare command name.
This aligns with the coding guideline requiring platform-aware executable detection instead of hardcoded paths.
🤖 Prompt for AI Agents
In `@apps/backend/core/worktree.py` around lines 1029 - 1062, The PR creation uses
the hardcoded "gh" command in gh_args and subprocess.run (e.g., in do_create_pr
and the other call around the later invocation), causing failures for
non-standard installs; add a platform-aware get_gh_executable() function
(mirroring get_git_executable/get_glab_executable) that resolves the gh binary
via shutil.which and common OS-specific install paths and falls back to "gh" if
not found, then replace the hardcoded "gh" in gh_args and any other
subprocess.run calls with the resolved executable from get_gh_executable();
ensure all references (gh_args list construction and do_create_pr
subprocess.run) use that function so subprocess calls are robust across
platforms.
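A possible shape for that helper, sketched under the assumption that shutil.which plus a couple of common install locations is enough; the paths checked below are illustrative, not confirmed by the repository:

```python
# Illustrative sketch of a gh resolver; the install paths listed are assumptions.
import shutil
import sys
from pathlib import Path


def get_gh_executable() -> str:
    """Resolve the GitHub CLI binary, falling back to the bare command name."""
    found = shutil.which("gh")
    if found:
        return found
    if sys.platform == "win32":
        candidates = [Path(r"C:\Program Files\GitHub CLI\gh.exe")]
    else:
        candidates = [Path("/usr/local/bin/gh"), Path("/opt/homebrew/bin/gh")]
    for candidate in candidates:
        if candidate.exists():
            return str(candidate)
    return "gh"


# gh_args would then start with the resolved binary instead of the literal "gh":
# gh_args = [get_gh_executable(), "pr", "create", "--base", target, ...]
```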
| **After completing this phase, signal progress:** | ||
| ```bash | ||
| echo "[ROADMAP_PROGRESS] 42 Analyzed project structure and dependencies" | ||
| ``` | ||
|
|
Add blank lines before the progress fences (MD031).
Markdownlint requires a blank line before fenced blocks; these progress signal blocks still sit flush to the preceding sentence.
🧹 Proposed fix (add blank lines before fences)
-**After completing this phase, signal progress:**
-```bash
+**After completing this phase, signal progress:**
+
+```bash
echo "[ROADMAP_PROGRESS] 42 Analyzed project structure and dependencies"
@@
-**After completing this phase, signal progress:**
-```bash
+**After completing this phase, signal progress:**
+
+```bash
echo "[ROADMAP_PROGRESS] 47 Identified project purpose and value proposition"
@@
-**After completing this phase, signal progress:**
-```bash
+**After completing this phase, signal progress:**
+
+```bash
echo "[ROADMAP_PROGRESS] 57 Assessed project maturity and existing features"
@@
-**After completing this phase, signal progress:**
-```bash
+**After completing this phase, signal progress:**
+
+```bash
echo "[ROADMAP_PROGRESS] 62 Analyzed competitive landscape"
@@
-**After completing this phase, signal progress:**
-```bash
+**After completing this phase, signal progress:**
+
+```bash
echo "[ROADMAP_PROGRESS] 67 Identified technical and resource constraints"
Also applies to: 120-123, 176-179, 191-194, 206-209
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
98-98: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🤖 Prompt for AI Agents
In `@apps/backend/prompts/roadmap_discovery.md` around lines 97 - 101, Several
fenced bash blocks for progress signals (the lines beginning with the heading
"After completing this phase, signal progress:" followed by a bash code fence and the echo "[ROADMAP_PROGRESS] ..." lines) are missing a blank line before the fence, causing MD031 warnings; fix by inserting a single blank line between the preceding sentence/heading and each bash fence for all occurrences (e.g., the
blocks containing echo "[ROADMAP_PROGRESS] 42...", 47..., 57..., 62..., 67...
and the other occurrences around lines 120-123, 176-179, 191-194, 206-209) so
each fenced code block is preceded by one empty line.
</details>
|
Closing this PR in favor of the better implementation in #1229. |
Summary
Add comprehensive conda environment management for both app-level and project-level Python environments, with full Windows compatibility.
Key Features
Windows Compatibility Fixes
- Added `windowsHide: true` to all spawn calls (~20 files) to prevent console window popups
- Use PowerShell-compatible syntax on Windows (`;` instead of `&&`, `$env:PATH` instead of `PATH=`); an illustrative sketch appears near the end of this description
Files Changed
Test Plan
Manual Testing
Breaking Changes
None - this is a new feature with no changes to existing APIs.
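As a purely illustrative aside (not code from this PR), a small sketch of the shell-syntax difference called out under Windows Compatibility Fixes; build_conda_run_command is a hypothetical helper name:

```python
# Illustrative only: PowerShell chains statements with ';', POSIX shells use '&&'.
import platform


def build_conda_run_command(env_name: str, script: str) -> str:
    """Build a shell command that activates a conda env and runs a script."""
    if platform.system() == "Windows":
        # Windows PowerShell 5.1 has no '&&'; ';' always continues the chain
        return f"conda activate {env_name}; python {script}"
    # POSIX shells: '&&' stops the chain if activation fails
    return f"conda activate {env_name} && python {script}"


print(build_conda_run_command("myenv", "main.py"))
```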
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Bug Fixes
Documentation