feat: Memory system improvements + Custom Agents integration #1920
vitorafgomes wants to merge 49 commits into AndyMik90:develop from
Conversation
Add customer folder creation/selection with auto-initialization and
GitHub authentication. Customer folders skip git setup requirements
since they don't need git repos.
- Add type field ('project' | 'customer') to Project interface
- Rewrite AddCustomerModal with create new/open existing folder steps
- Auto-initialize .auto-claude/ on customer creation
- Skip git check and init dialog for customer-type projects
- Trigger GitHubSetupModal after customer folder is ready
- Add i18n keys for EN and FR
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The full initializeProject requires git, which customer folders don't have. Use createProjectFolder to create .auto-claude/ directly instead. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The project object passed to onCustomerAdded was stale (missing autoBuildPath). Now reads the updated project from the store after .auto-claude/ creation. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Customer folders don't have git, so the regular initializeProject fails. Add a dedicated IPC handler that creates .auto-claude/ and persists autoBuildPath in the main process project store without git checks. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When a Customer tab is active, GitHub Issues now shows issues from ALL child repositories. Includes repo filter dropdown, per-issue repo badge, and aggregated view sorted by update date. Token comes from Customer parent .env, each child repo contributes its own GITHUB_REPO. Also includes prior uncommitted work: Customer flow improvements, GitHub integration settings, i18n additions, and UI refinements. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Customer projects now always show GitHub Issues/PRs nav items since multi-repo aggregation pulls from all child repos. Removed the guard that was hiding GitHub nav when a child repo was selected via the Customer dropdown. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…opdown Use customerContext instead of isCustomerProject so GitHub nav stays visible both when the Customer itself is selected AND when a child repo is selected via the Customer dropdown. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Child repos added to a Customer often don't have .env with GITHUB_REPO. Now falls back to detecting the repo from git remote origin URL, which is always available for cloned repos. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds multi-repo PR support matching the existing multi-repo Issues pattern. When a Customer is active, PRs from all child repos are aggregated with repo badges and a repo filter dropdown. Review/merge features are read-only in multi-repo mode. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Read globally configured MCP servers from ~/.claude/settings.json (enabledPlugins and mcpServers) and display them as a read-only "Claude Code MCPs (Global)" section in the MCP Server Overview page. - Add ClaudeCodeMcpServerConfig type and enabledPlugins/mcpServers fields - Create claude-mcp-handlers.ts IPC handler to resolve plugin cache configs - Expose getGlobalMcps() via preload API - Add Global MCPs section with source badges (Plugin/Settings) and type labels - Add i18n keys for en and fr Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix MCP handler to read ~/.claude.json (main Claude Code config) in addition to ~/.claude/settings.json, resolving 32+ MCP servers that were previously invisible - Add new IPC handler for reading custom agents from ~/.claude/agents/ (11 categories, 147 agents) - Display custom agents section in MCP Overview with collapsible categories - Add 'claude-json' source badge to distinguish MCP origin - i18n keys added for EN and FR Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…i-repo Resolve child project ID from the selected issue/PR's repoFullName using syncStatus.repos mapping. This enables features that were previously disabled for Customer multi-repo mode: - Investigation: hook receives resolved child projectId instead of undefined - Auto-fix: per-issue auto-fix now functional with child project context - PR Review: full PRDetail with review/findings/fix replaces read-only view - IssueDetail: projectId, autoFixConfig, autoFixQueueItem now active Header-level batch operations (Analyze & Group, global auto-fix toggle) remain disabled for multi-repo as cross-repo batch is too complex. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…icrofrontend detection
Customer projects (multi-repo) were unable to generate a project index
because the analyzer ran against the parent folder (not a code repo),
producing an empty index. This broke the spec creation pipeline's
discovery and requirements phases.
## Customer project indexing (aggregation)
- CONTEXT_REFRESH_INDEX handler now detects `project.type === 'customer'`
and discovers child repos via projectStore
- For each child repo without an index, runs analyzer.py individually
- Aggregates all child indexes into a single customer-level index with
`project_type: 'customer'` and `child_repos` map
- Services are prefixed with repo name (e.g., `repo/main`) to avoid
key collisions across repos
- CONTEXT_GET handler also builds aggregated index on-the-fly from
existing child indexes when no persisted aggregate exists
- Accepts `force` parameter to re-run analyzer on all children
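The aggregation steps above can be sketched roughly as follows (illustrative Python; the actual handler is TypeScript, and the index shape here is simplified to a bare services map):

```python
def aggregate_child_indexes(child_indexes: dict[str, dict]) -> dict:
    """Merge per-repo index dicts into one customer-level index.

    child_indexes maps repo name -> that repo's index ({"services": {...}}).
    Service keys are prefixed with the repo name ("repo/service") so two
    repos that both expose e.g. a "main" service cannot collide.
    """
    aggregated = {"project_type": "customer", "child_repos": {}, "services": {}}
    for repo_name, index in child_indexes.items():
        aggregated["child_repos"][repo_name] = index
        for service_name, service in index.get("services", {}).items():
            aggregated["services"][f"{repo_name}/{service_name}"] = service
    return aggregated
```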
## Analyzer: .NET/C# detection
- Detects `.csproj` / `.sln` files via new glob-based service root
detection (`has_service_root()` helper in base.py)
- Identifies frameworks: ASP.NET Core, Blazor, WPF, MAUI, Worker
Services, gRPC
- Detects ORMs (Entity Framework, Dapper, Npgsql), task queues
(MassTransit, Hangfire, RabbitMQ), test frameworks (xUnit, NUnit,
MSTest)
- Parses .csproj XML for PackageReference and SDK type detection
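The glob-based service root detection might look like this sketch (the pattern table and exact signature are assumptions; only the `has_service_root()` name comes from the description above):

```python
from pathlib import Path

# Hypothetical marker-file table; the real helper in base.py may differ.
SERVICE_ROOT_PATTERNS = {
    "dotnet": ["*.csproj", "*.sln"],
    "node": ["package.json"],
    "python": ["pyproject.toml", "setup.py"],
}

def has_service_root(directory: Path, kind: str) -> bool:
    """Return True if `directory` contains a marker file for the given service kind."""
    return any(
        next(directory.glob(pattern), None) is not None
        for pattern in SERVICE_ROOT_PATTERNS.get(kind, [])
    )
```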
## Analyzer: documentation project detection
- Standalone detection: MkDocs/Material, Sphinx, mdBook
- Node.js-based: Docusaurus, VitePress, VuePress, Nextra, Storybook
- New `documentation` service type with dedicated icon (BookOpen) and
color (emerald) in the UI constants
## Analyzer: microfrontend detection
- Detects Module Federation, Native Federation, single-spa, Qiankun
from package.json dependencies
- Also checks webpack config for ModuleFederationPlugin
- Stores result in `microfrontend` field on service analysis
## UI: progress tracking & re-analyze
- New `CONTEXT_INDEX_PROGRESS` IPC channel for real-time progress events
- Handler sends progress via `webContents.send()` as each child repo is
analyzed (message + current/total count)
- New `indexProgress` / `indexProgressCurrent` / `indexProgressTotal`
state fields in context-store
- `useIndexProgress()` hook listens for IPC progress events
- ProjectIndexTab now shows:
- Progress bar with percentage and repo count during indexing
- "Re-analyze" button to force re-run analyzer on all repos
- "Repositories" section for customer projects showing each repo
with type and framework badges
- Inline progress banner when re-analyzing with existing data
## Type changes
- `ProjectIndex.project_type` now includes `'customer'`
- `ProjectIndex.child_repos` optional field for per-repo indexes
- `ServiceInfo.type` now includes `'documentation'`
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
feat: Customer module - Project indexing aggregation, enhanced codebase analysis & progress UI
- Remove unused SERVICE_ROOT_FILES import from project_analyzer_module.py
- Fix .csproj XML parsing to handle legacy files with XML namespaces
where tree.iter("PackageReference") silently returns nothing.
Now tries namespace-aware iteration and falls back to regex if
no packages found, not just on ParseError.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
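The namespace-tolerant parse with regex fallback can be sketched like this (illustrative; the function name and exact fallback regex are assumptions, not the analyzer's actual code):

```python
import re
import xml.etree.ElementTree as ET

def extract_package_references(csproj_text: str) -> list[str]:
    """Collect PackageReference names from a .csproj, tolerating legacy namespaces.

    Legacy .csproj files declare xmlns="http://schemas.microsoft.com/developer/msbuild/2003",
    which makes tree.iter("PackageReference") return nothing because every tag is
    namespace-qualified. Matching on the local tag name handles both styles, and a
    regex fallback covers files that fail to parse or yield no packages.
    """
    packages: list[str] = []
    try:
        root = ET.fromstring(csproj_text)
        for elem in root.iter():
            # elem.tag is "{namespace}PackageReference" in namespaced files
            if elem.tag.split("}")[-1] == "PackageReference":
                name = elem.get("Include")
                if name:
                    packages.append(name)
    except ET.ParseError:
        pass
    if not packages:
        packages = re.findall(r'<PackageReference\s+Include="([^"]+)"', csproj_text)
    return packages
```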
- Export transformIssue from issue-handlers.ts and import in
customer-github-handlers.ts instead of duplicating (DRY)
- Narrow except Exception to except (OSError, UnicodeDecodeError) in
framework_analyzer.py .csproj parsing
- Replace require('path')/require('fs') with ES import statements in
oauth-handlers.ts and project-handlers.ts
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…, logging

- CRITICAL: Fix port_detector undefined in framework_analyzer.py (NameError at runtime)
- CRITICAL: Fix command injection in oauth-handlers.ts (execSync → execFileSync with args array + isValidGitHubRepo validation)
- MAJOR: Replace console.log/error/warn with debugLog in project-context-handlers.ts
- MAJOR: Fix cross-platform path detection (startsWith → path.relative) in project-context-handlers.ts and customer-github-handlers.ts
- MAJOR: Add runtime type validation for PROJECT_ADD IPC handler
- MAJOR: Use @shared path aliases in AgentTools.tsx imports
- MINOR: Tighten Sphinx detection (remove overly broad "extensions" check)
- TRIVIAL: Remove redundant packages_lower variable in dotnet detection

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Apply ruff format to base.py and framework_analyzer.py (dict line length, regex quotes)
- Update smoke and ipc-bridge tests to expect third `undefined` argument in addProject IPC calls (matches new optional `type` parameter)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…nc, DRY

Backend:
- Remove unused re/ET imports in framework_analyzer.py
- Ruff format all modified analyzer files

MCP handlers:
- Enforce transport requirements (command or url) in sanitizeMcpServers
- Add runtime type validation in resolveInlineServers for untrusted JSON
- Respect CLAUDE_CONFIG_DIR env var for .claude.json lookup

Main process handlers:
- Convert execFileSync to async execFile with timeout in customer-github-handlers
- Extract duplicated keychain probe into hasKeychainClaudeToken() helper
- Use atomic file write for aggregated customer index

Renderer components:
- Add i18n for all hardcoded strings in ProjectIndexTab (en + fr)
- Localize error messages in EnvConfigModal with react-i18next
- Add debug logging to silent catch in AddCustomerModal
- Fix Windows path separator in AddCustomerModal folder preview
- Wrap loadRepos in useCallback and fix useEffect deps in CustomerReposModal
- Handle addProject registration failure in clone flow
- Add TODO for repo-aware issue selection in multi-repo mode

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…mpat, backend hardening

- Multi-repo selection: composite key (repo#number) for issues/PRs to prevent ambiguous selection across repos
- i18n: localize all hardcoded strings in GitHub issues/PRs/integration components (16 new translation keys)
- French translations: fix all missing diacritical marks (accents) across context, dialogs, settings
- Windows path compat: use path.relative() and normalize helpers instead of hardcoded '/' separator
- Path aliases: switch relative imports to @shared/* in 6 files (main, preload, renderer)
- Backend: fix Storybook devDependency misclassification, .NET sdk_type priority, Angular route regex
- Backend: add empty except comments, guard empty clean_type, remove appsettings.json from code refs
- Frontend: analyzer timeout (120s), atomic child key dedup, progress bar null checks
- Frontend: add type guard for PROJECT_INIT_CUSTOMER, repo validation for issue detail fetches
- Frontend: fix hasMore pagination, stale connection state reset, loading state for directory picker
- Frontend: simplify AgentTools ternary, add aria-expanded, use semantic colors, remove unused imports
- Frontend: guard customer onboarding against failed GitHub settings save
- Logging: consistent debugError for field sanitization warnings

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…on-vite build Main process and preload builds use Rollup SSR mode which doesn't resolve @shared/* tsconfig path aliases. Revert to relative imports for these two files while keeping @shared aliases in renderer files (which work fine). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…case sensitivity

- .NET: pick best .sln when multiple exist (most entry_points + libraries)
- .NET: label Worker SDK as ".NET Worker" instead of "Azure Functions"
- .NET: only skip monorepo detection for single .sln repos
- .NET: broaden csproj parse exception handling to catch all errors
- .NET: remove redundant role variable default assignment
- Routes: fix ASP.NET attribute regex and duplicate-slash normalization
- Routes: prevent Angular double-counting of nested child routes
- Case sensitivity: add lowercase properties/ path for launchSettings.json
- env_detector: continue processing remaining launchSettings files (break vs return)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…anual per-phase selection

Move Custom Agents catalog from MCP Overview to Agent Settings as an informational section (only visible when agents exist in ~/.claude/agents/). Remove manual per-phase agent selection — the system now injects a full catalog of all available specialist agents into the system prompt, letting Claude automatically pick the right specialist based on task context.

Backend:
- Add build_agents_catalog_prompt() to build concise catalog of all agents
- Change create_client() from custom_agent_prompt to agents_catalog_prompt
- Update coder, planner, and QA loop to use catalog instead of single agent

Frontend:
- Move agent catalog display to AgentProfileSettings (hidden if no agents)
- Remove per-phase custom agent dropdowns from phase configuration
- Remove phaseCustomAgents persistence from TaskCreationWizard/TaskEditDialog
- Remove Custom Agents section from AgentTools (MCP Overview)
- Update i18n keys (en/fr)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 56
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
apps/frontend/src/renderer/components/settings/integrations/LinearIntegration.tsx (1)
231-243: ⚠️ Potential issue | 🟡 Minor
Hardcoded placeholder strings violate i18n requirements.

The `placeholder` attributes on lines 232 and 240 contain hardcoded English strings "Auto-detected" and "Auto-created". These are user-facing and must use translation keys per the coding guidelines.

🌐 Proposed fix to localize placeholder strings

```diff
 <div className="space-y-2">
   <Label className="text-sm font-medium text-foreground">{t('linear.teamId')}</Label>
   <Input
-    placeholder="Auto-detected"
+    placeholder={t('linear.teamIdPlaceholder')}
     value={teamId}
     onChange={(e) => onTeamIdChange(e.target.value)}
   />
 </div>
 <div className="space-y-2">
   <Label className="text-sm font-medium text-foreground">{t('linear.projectId')}</Label>
   <Input
-    placeholder="Auto-created"
+    placeholder={t('linear.projectIdPlaceholder')}
     value={projectId}
     onChange={(e) => onProjectIdChange(e.target.value)}
   />
 </div>
```

Ensure matching keys are added to both en/*.json and fr/*.json translation files. As per coding guidelines: "All frontend user-facing text must use react-i18next translation keys. Never hardcode strings in JSX/TSX."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/settings/integrations/LinearIntegration.tsx` around lines 231 - 243, The placeholders on the Input components in LinearIntegration.tsx are hardcoded; replace the literal strings "Auto-detected" and "Auto-created" with i18n lookups using the component's translation function (t) — e.g. set placeholder={t('linear.autoDetected')} for the teamId Input (with onTeamIdChange) and placeholder={t('linear.autoCreated')} for the projectId Input (with onProjectIdChange); then add matching keys "linear.autoDetected" and "linear.autoCreated" to both en/*.json and fr/*.json translation files.

apps/frontend/src/renderer/components/settings/ThemeSelector.tsx (1)
130-131: ⚠️ Potential issue | 🟡 Minor
Theme names and descriptions remain untranslated.

While this PR adds i18n support to the component, `theme.name` and `theme.description` from `COLOR_THEMES` are still rendered directly without translation. For consistency, consider using dynamic translation keys:

```diff
- <p className="font-medium text-sm text-foreground">{theme.name}</p>
- <p className="text-xs text-muted-foreground line-clamp-2">{theme.description}</p>
+ <p className="font-medium text-sm text-foreground">{t(`themes.${theme.id}.name`)}</p>
+ <p className="text-xs text-muted-foreground line-clamp-2">{t(`themes.${theme.id}.description`)}</p>
```

Then add corresponding keys to en/*.json and fr/*.json for each theme ID in `COLOR_THEMES`. As per coding guidelines: "All frontend user-facing text must use react-i18next translation keys."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/settings/ThemeSelector.tsx` around lines 130 - 131, Theme names/descriptions are rendered directly from COLOR_THEMES (theme.name and theme.description) instead of using react-i18next; update the ThemeSelector component to call the i18n translator (useTranslation -> t) and replace theme.name with something like t(`themes.${theme.id}.name`) and theme.description with t(`themes.${theme.id}.description`), then add matching keys for each theme ID under themes.* in your en/*.json and fr/*.json locale files so all user-facing theme text is translated.

apps/frontend/src/renderer/components/TaskEditDialog.tsx (1)
212-225: ⚠️ Potential issue | 🟠 Major
`phaseCustomAgents` changes are not included in save-dirty detection.

At Line 227, `hasChanges` can be false even when custom-agent assignments differ, so the dialog closes without persisting the new config.

💡 Suggested fix

```diff
+  const configuredPhaseCustomAgents =
+    settings.phaseCustomAgents && Object.values(settings.phaseCustomAgents).some(Boolean)
+      ? settings.phaseCustomAgents
+      : undefined;
+
+  const hasPhaseCustomAgentChanges =
+    JSON.stringify(configuredPhaseCustomAgents ?? {}) !==
+    JSON.stringify(task.metadata?.phaseCustomAgents ?? {});
+
   const hasChanges =
     trimmedTitle !== task.title ||
     trimmedDescription !== task.description ||
     category !== (task.metadata?.category || '') ||
     priority !== (task.metadata?.priority || '') ||
     complexity !== (task.metadata?.complexity || '') ||
     impact !== (task.metadata?.impact || '') ||
     model !== (task.metadata?.model || '') ||
     thinkingLevel !== (task.metadata?.thinkingLevel || '') ||
     requireReviewBeforeCoding !== (task.metadata?.requireReviewBeforeCoding ?? false) ||
     fastMode !== (task.metadata?.fastMode ?? false) ||
     JSON.stringify(images) !== JSON.stringify(task.metadata?.attachedImages || []) ||
     JSON.stringify(phaseModels) !== JSON.stringify(task.metadata?.phaseModels || DEFAULT_PHASE_MODELS) ||
-    JSON.stringify(phaseThinking) !== JSON.stringify(task.metadata?.phaseThinking || DEFAULT_PHASE_THINKING);
+    JSON.stringify(phaseThinking) !== JSON.stringify(task.metadata?.phaseThinking || DEFAULT_PHASE_THINKING) ||
+    hasPhaseCustomAgentChanges;
@@
-  if (settings.phaseCustomAgents && Object.values(settings.phaseCustomAgents).some(Boolean)) {
-    metadataUpdates.phaseCustomAgents = settings.phaseCustomAgents;
+  if (configuredPhaseCustomAgents) {
+    metadataUpdates.phaseCustomAgents = configuredPhaseCustomAgents;
   }
```

Also applies to: 248-251

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/TaskEditDialog.tsx` around lines 212 - 225, The dirty-check (hasChanges) and the other save-detection spots are missing comparison for phaseCustomAgents, so changes to custom-agent assignments don't mark the dialog as dirty; add a comparison like JSON.stringify(phaseCustomAgents) !== JSON.stringify(task.metadata?.phaseCustomAgents || DEFAULT_PHASE_CUSTOM_AGENTS) to the existing hasChanges expression and the corresponding checks at the other occurrence (around where phaseModels/phaseThinking are compared) using the phaseCustomAgents state and the task.metadata?.phaseCustomAgents fallback constant (e.g., DEFAULT_PHASE_CUSTOM_AGENTS) so custom-agent edits are detected and persisted.

apps/frontend/src/renderer/components/settings/AgentProfileSettings.tsx (1)
87-101: ⚠️ Potential issue | 🟠 Major
Include `phaseCustomAgents` in customization detection.

Right now, custom-agent-only changes are treated as "not customized," so the customized badge and reset control can disappear even when overrides exist (see Lines 92-101 + Line 283 condition).

🔧 Proposed fix

```diff
   const hasCustomConfig = useMemo((): boolean => {
-    if (!settings.customPhaseModels && !settings.customPhaseThinking) {
+    if (!settings.customPhaseModels && !settings.customPhaseThinking && !settings.phaseCustomAgents) {
       return false; // No custom settings, using profile defaults
     }
-    return PHASE_KEYS.some(
+    return (
+      PHASE_KEYS.some(
         phase =>
           currentPhaseModels[phase] !== profilePhaseModels[phase] ||
           currentPhaseThinking[phase] !== profilePhaseThinking[phase]
-    );
-  }, [settings.customPhaseModels, settings.customPhaseThinking, currentPhaseModels, currentPhaseThinking, profilePhaseModels, profilePhaseThinking]);
+      ) ||
+      Object.values(currentPhaseCustomAgents).some(Boolean)
+    );
+  }, [settings.customPhaseModels, settings.customPhaseThinking, settings.phaseCustomAgents, currentPhaseModels, currentPhaseThinking, currentPhaseCustomAgents, profilePhaseModels, profilePhaseThinking]);
```

Also applies to: 283-295

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/settings/AgentProfileSettings.tsx` around lines 87 - 101, The memoized hasCustomConfig currently ignores phaseCustomAgents, so agent-only overrides aren't detected; update the useMemo in AgentProfileSettings to also check settings.phaseCustomAgents and compare currentPhaseCustomAgents[phase] against the profile's phaseCustomAgents (e.g., add a comparison like currentPhaseCustomAgents[phase] !== profilePhaseCustomAgents[phase] within the PHASE_KEYS.some), and add settings.phaseCustomAgents and currentPhaseCustomAgents (and profilePhaseCustomAgents if named differently) to the hook dependency array; apply the same inclusion of phaseCustomAgents detection to the similar condition block around lines 283-295 so the customized badge/reset control reflects agent overrides too.

apps/frontend/src/main/project-store.ts (1)
129-146: ⚠️ Potential issue | 🟠 Major
`addProject` ignores `type` when the project already exists.

If an existing entry is re-added as `'customer'`, the method returns early and does not persist the new type, which can break customer-only flows.

🔧 Proposed fix

```diff
     const existing = this.data.projects.find((p) => p.path === absolutePath);
     if (existing) {
+      if (type && existing.type !== type) {
+        existing.type = type;
+        existing.updatedAt = new Date();
+      }
+
       // Validate that .auto-claude folder still exists for existing project
       // If manually deleted, reset autoBuildPath so UI prompts for reinitialization
       if (existing.autoBuildPath && !isInitialized(existing.path)) {
         console.warn(`[ProjectStore] .auto-claude folder was deleted for project "${existing.name}" - resetting autoBuildPath`);
         existing.autoBuildPath = '';
         existing.updatedAt = new Date();
-        this.save();
       }
+      this.save();
       return existing;
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/main/project-store.ts` around lines 129 - 146, addProject currently returns the existing Project early without updating its type, so calls that re-add a path with type='customer' aren't persisted; in addProject (after computing absolutePath and finding existing via this.data.projects.find) check if a new type argument was passed and differs from existing.type, then set existing.type = type, update existing.updatedAt = new Date(), call this.save(), and log if desired before returning; keep the existing autoBuildPath/isInitialized reset logic and ensure you still return the existing project after persisting the type change.

apps/frontend/src/renderer/components/github-prs/components/PRList.tsx (1)
191-205: ⚠️ Potential issue | 🟡 Minor
Guard against future timestamps in relative-date rendering.

Clock skew or API time drift can yield negative values ("-N minutes ago"). Add a non-negative fallback before relative calculations.

💡 Proposed fix

```diff
 function formatRelativeDate(dateString: string, t: (key: string, options?: Record<string, unknown>) => string): string {
   const date = new Date(dateString);
   const now = new Date();
   const diffMs = now.getTime() - date.getTime();
+  if (diffMs < 0) {
+    return date.toLocaleDateString();
+  }
   const diffDays = Math.floor(diffMs / (1000 * 60 * 60 * 24));
```
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/github-prs/components/PRList.tsx` around lines 191 - 205, The relative-date calculation can produce negative values for future timestamps; clamp diffMs to non-negative before computing diffDays to avoid "-N minutes ago" outputs. In the block using now, date, diffMs and diffDays (in PRList.tsx), set diffMs = Math.max(0, now.getTime() - date.getTime()) (or early-return a localized "just now") so subsequent Math.floor(...) computations and translation keys t('time.minutesAgo'...), t('time.hoursAgo'...), etc., never receive negative counts.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/backend/agents/coder.py`:
- Around line 26-32: The import block from module phase_config (get_fast_mode,
get_phase_client_thinking_kwargs, get_phase_custom_agent, get_phase_model,
get_phase_model_betas) is not sorted according to Ruff rules; reorder imports
into proper Ruff groups (stdlib, third-party, local) and alphabetize names
within the local group and across lines as needed so the phase_config imports
are alphabetically ordered and consistent with Ruff's import-sorting
conventions; apply the same sorting fix to the other import group that includes
these symbols further down in the file (the imports around the second
occurrence) to make the whole file Ruff-compliant.
In `@apps/backend/agents/custom_agents.py`:
- Around line 114-145: The load_custom_agent function constructs paths using
f"{agent_id}.md" without validation, allowing path traversal; before building
agent_file or root_file (references: load_custom_agent, agent_file, root_file,
parse_agent_file), validate/sanitize agent_id to reject any path separators or
traversal tokens (e.g., "/" or "\" or "..") or restrict to a safe pattern like
alphanumeric plus limited chars (underscore, hyphen, dot); if validation fails,
return None or raise a clear error so files outside get_agents_dir cannot be
accessed.
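A minimal validation helper in the spirit of this suggestion (the function name and exact character whitelist are assumptions, not the module's actual code):

```python
import re

# Allow alphanumerics plus underscore, hyphen, and dot; must start alphanumeric.
_AGENT_ID_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9_.-]*$")

def is_safe_agent_id(agent_id: str) -> bool:
    """Reject agent IDs that could escape the agents directory.

    Disallows path separators and ".." so f"{agent_id}.md" can never
    resolve outside ~/.claude/agents/.
    """
    if ".." in agent_id:
        return False
    return bool(_AGENT_ID_RE.fullmatch(agent_id))
```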
- Around line 148-180: The parser in _parse_simple_yaml silently ignores YAML
block lists (key: newline followed by indented "- item" lines); modify
_parse_simple_yaml to iterate the lines with an index (e.g., while loop over
text.split("\n")) instead of a simple for loop so you can look ahead: when you
encounter a "key:" whose value is empty (or only whitespace), collect subsequent
lines that are indented and start with "-" (strip leading "-" and whitespace)
into a Python list and assign that to result[key]; keep existing inline-list
handling, quoted-string stripping, and comment/blank-line skipping behavior so
other keys remain unchanged. Ensure you stop collecting when you reach a
non-indented/non-dash line and continue parsing the rest.
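A self-contained sketch of the look-ahead parsing described above (simplified relative to the real `_parse_simple_yaml`; inline-list handling and other edge cases are omitted):

```python
def parse_simple_yaml(text: str) -> dict:
    """Tiny YAML-subset parser: scalar values plus block lists.

    When "key:" has no inline value, consume the following indented
    "- item" lines into a Python list; otherwise store the scalar.
    """
    result: dict = {}
    lines = text.split("\n")
    i = 0
    while i < len(lines):
        stripped = lines[i].strip()
        if not stripped or stripped.startswith("#") or ":" not in stripped:
            i += 1
            continue
        key, _, value = stripped.partition(":")
        key, value = key.strip(), value.strip()
        if value:
            result[key] = value.strip("'\"")
            i += 1
        else:
            # Block list: collect indented "- item" lines until the pattern breaks.
            items = []
            i += 1
            while i < len(lines):
                nxt = lines[i]
                if nxt.startswith((" ", "\t")) and nxt.strip().startswith("-"):
                    items.append(nxt.strip()[1:].strip().strip("'\""))
                    i += 1
                else:
                    break
            result[key] = items
    return result
```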
- Around line 79-83: The frontmatter regex in custom_agents.py (fm_match) is
fragile to CRLF; update the extraction so it handles Windows newlines by either
normalizing content (e.g., replace "\r\n" with "\n" before running the regex) or
by making the regex accept optional CRs (use "\r?\n" in the pattern used to set
fm_match), then continue to set fm_text = fm_match.group(1) and body =
fm_match.group(2).strip() as before; ensure you only change the input
normalization or the regex and not the subsequent fm_text/body handling.
In `@apps/backend/analysis/analyzers/context/env_detector.py`:
- Around line 45-52: The current order calls _parse_code_references(env_vars,
optional_vars) before _parse_appsettings(env_vars) and
_parse_launch_settings(env_vars), and those later parsers skip keys that already
exist which causes concrete values from appsettings/launchSettings to be
shadowed by code-reference placeholders (value=None); either move the calls so
_parse_appsettings and _parse_launch_settings run before _parse_code_references,
or change the appsettings/launchSettings parsing logic in _parse_appsettings and
_parse_launch_settings to allow overwriting existing entries only when the
existing entry's value is None or marked as a code-reference placeholder (i.e.,
update env_vars entries when env_vars[key].value is None or env_vars[key].source
== "code_reference").
In `@apps/backend/analysis/analyzers/database_detector.py`:
- Around line 472-486: The analyzer currently treats PascalCase reference types
(e.g., Customer) as scalar columns because only virtual/collection navigations
are filtered; update the property-type filtering logic around
raw_type/is_nullable/clean_type to also skip properties whose clean_type matches
a known entity name (use your detected entity names set, e.g., entity_names or
detected_entities) unless the property name indicates a FK scalar (property name
ends with "Id" or "ID"); locate the block using variables raw_type, clean_type,
csharp_types and the property name variable (e.g., prop_name or name) and add
the check to continue when clean_type is in the entity set and the property name
does not end with "Id"/"ID".
In `@apps/backend/analysis/analyzers/framework_analyzer.py`:
- Around line 734-740: project_rel_path from the .sln can contain parent
traversals which leads to storing invalid project paths; to fix, resolve
csproj_path before computing the relative path and guard the relative_to call in
a try/except that skips projects outside the analysis root: change the logic
around csproj_path/project_rel_path so you call csproj_path = (self.path /
project_rel_path).resolve(), then compute project_dir by calling
csproj_path.relative_to(self.path) inside a try block (catching ValueError) and
continue/skip when it fails; update any downstream use of project_dir
accordingly (references: csproj_path, project_rel_path, project_dir, self.path,
relative_to).
In `@apps/backend/analysis/analyzers/port_detector.py`:
- Around line 371-375: The commandLineArgs parsing only matches the
space-separated form and misses the equals form; update the regex used where
cmd_args = profile.get("commandLineArgs", "") and match = re.search(...) to
accept either "--port 7173" or "--port=7173" (e.g., use a pattern that allows
either '=' or whitespace between --port and the digits, and ensure you still
capture the port digits in group 1); modify the re.search call in the port
detection logic (the block handling cmd_args) to use the new pattern.
In `@apps/backend/analysis/analyzers/project_analyzer_module.py`:
- Around line 65-76: The early return when exactly one .sln is found (the
len(sln_files) == 1 branch) can hide mixed-repo cases; instead of immediately
returning, update the logic in the method that uses
sln_files/project_dir/src_dir to verify there are no other top-level service
directories (e.g., sibling folders like "services", "api", "worker", or any
non-dot, non-src top-level directories that would indicate additional services)
before skipping monorepo detection—replace the unconditional return with a check
that inspects project_dir for other root-level service folders and only treat it
as a single-solution repo when no such siblings exist.
In `@apps/backend/analysis/analyzers/route_detector.py`:
- Around line 516-519: The http_method_pattern regex (variable
http_method_pattern) currently requires a quoted route template inside the
attribute parentheses, so forms like [HttpGet(Name = "ListUsers")] are not
matched; update the pattern to allow parentheses with named arguments but no
quoted string by making the quoted-string capture optional (i.e., allow the
group that captures ["..."] to be optional while still matching the surrounding
parentheses and other parameters), so attributes like [HttpGet("route")] and
[HttpGet(Name = "X")] both match; adjust the compiled regex used in
route_detector.py accordingly.
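One way to make the quoted template optional (a sketch; the production pattern in route_detector.py may capture additional groups or verbs):

```python
import re

# Group 1 = HTTP verb, group 2 = route template (None when only named
# arguments are present). The quoted string inside the parentheses is
# optional, so [HttpGet(Name = "ListUsers")] matches too.
http_method_pattern = re.compile(
    r'\[Http(Get|Post|Put|Delete|Patch)'  # verb
    r'(?:\(\s*(?:"([^"]*)")?[^)]*\))?'    # optional (...) with optional "route"
    r'\]'
)
```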
- Around line 484-523: The code only extracts the first class
(class_match/class_name/pre_class_section/class_route_prefix/class_authorize)
but then applies http_method_pattern.finditer(content) across the whole file,
misattributing routes when multiple controllers exist; change the logic to first
find all class declarations (use re.finditer for r"class\s+(\w+)" to get class
spans and names), compute each class's controller_name, class_route_prefix and
class_authorize by inspecting the text immediately before each class span (like
pre_class_section), then for each class restrict the http method attribute
search to that class's text span (or associate each http_method_pattern match to
the nearest preceding class by comparing match.start() to class spans) so that
attributes are paired with the correct class context (refer to
class_match/class_name/class_route_prefix/pre_class_section/http_method_pattern
in the diff).
- Around line 543-549: The current regex in method_name_match only allows a
single-token return type and misses common C# modifiers/generic return types;
update the pattern used in route_detector.py (the re.search call that assigns
method_name_match) to allow optional modifiers like async/static (zero or more
word tokens) and generic/complex return types (tokens containing <> and dots)
before capturing the method name, then extract the method name from the correct
capture group (instead of the existing single-group assumption), lowercase it as
currently done and call full_path.replace("[action]", method_name) to ensure
[action] is substituted for async/generic/static method signatures.
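A more permissive signature pattern might look like this (a sketch that handles nested generics by scanning up to the opening parenthesis; the production pattern may need to be stricter):

```python
import re

# Zero or more modifiers, then a return type that may be generic/dotted,
# then the method name captured in group 1 just before "(".
method_name_pattern = re.compile(
    r"(?:(?:public|private|protected|internal|static|async|virtual|override)\s+)*"
    r"[\w.]+(?:<[^(]*>)?(?:\[\])?\??\s+"  # return type, e.g. Task<ActionResult<User>>
    r"(\w+)\s*\("                          # method name
)
```

The captured name would then be lowercased and substituted into `full_path.replace("[action]", method_name)` as the comment describes.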
- Around line 648-654: The current route-statement end detection (where stmt_end
= content.find(";", match.end())) can stop at semicolons inside lambda bodies
and miss chained calls like .RequireAuthorization(); implement a robust helper
_find_statement_end(self, content: str, start: int, lookahead: int = 1200) that
scans forward up to lookahead characters tracking parentheses, braces, brackets
and string literals and only treats a semicolon as statement terminator when not
nested (paren/brace/bracket counters are zero and not inside a string),
returning either the semicolon position+1 or the lookahead limit; then replace
the existing stmt_end calculation in the route detection logic to call
_find_statement_end(content, match.end()) so requires_auth checks see chained
.RequireAuthorization() even when lambdas contain internal semicolons.
In `@apps/backend/core/client.py`:
- Around line 908-918: The custom_agent_prompt is injected into base_prompt
without any length protection, which can exceed Windows command-line limits;
truncate custom_agent_prompt to the same safe maximum used for CLAUDE.md before
concatenation (or reuse the existing truncate utility if present), ensure the
truncated text is used when building base_prompt, and update the diagnostic
print to indicate when truncation occurred and the final length (reference
variables custom_agent_prompt and base_prompt and the CLAUDE.md truncation
logic).
In `@apps/backend/integrations/graphiti/config.py`:
- Around line 239-243: The code currently parses GRAPHITI_EPISODE_TTL_DAYS into
episode_ttl_days but accepts negative values; update the parsing in config.py so
that after converting to int you check for value < 0 and clamp it to 0, and emit
a warning (e.g., using logging.warning or the module's logger) indicating the
provided value was invalid and has been set to 0; preserve the existing
ValueError handling so non-integer input also results in episode_ttl_days = 0
with a warning. Ensure you reference GRAPHITI_EPISODE_TTL_DAYS and the
episode_ttl_days variable when logging.
In `@apps/backend/integrations/graphiti/queries_pkg/schema.py`:
- Line 29: The module-level int() conversion of
os.getenv("GRAPHITI_MAX_RESULTS", "10") can raise ValueError during import;
update the MAX_CONTEXT_RESULTS initialization in schema.py to safely parse
GRAPHITI_MAX_RESULTS by reading the env var string, attempting int() inside a
try/except (or using str.isdigit()/safe cast), falling back to the default 10 on
failure, and emit a warning or use the existing logger (reference
MAX_CONTEXT_RESULTS and GRAPHITI_MAX_RESULTS) so a non-numeric value does not
crash startup at import time.
In `@apps/backend/integrations/graphiti/queries_pkg/search.py`:
- Around line 78-79: The bare "except Exception: pass" swallows
embedding-dimension validation failures; replace it with explicit error handling
in the embedding-dimension validation code path: catch specific exceptions
(e.g., ValueError, TypeError) or use "except Exception as e", log the error with
context (include the invalid value and exception message) and then re-raise or
raise a new ValueError so config loading fails loudly; update the handler that
currently contains "except Exception: pass" to either validate and raise (e.g.,
in the embedding-dimension parsing/validation function) or call the existing
logger before re-raising to preserve diagnostics.
In `@apps/backend/integrations/graphiti/tests/test_episode_types.py`:
- Around line 27-48: The fixture mock_graphiti_core_nodes currently
unconditionally pops "graphiti_core" and "graphiti_core.nodes" from sys.modules;
instead, save any existing originals at the start (e.g., orig_graphiti_core =
sys.modules.get("graphiti_core") and orig_graphiti_core_nodes =
sys.modules.get("graphiti_core.nodes")), inject the mocks (mock_graphiti_core,
mock_nodes) as you already do, and in the finally block restore the originals
(set sys.modules["graphiti_core"]=orig_graphiti_core if not None, similarly for
"graphiti_core.nodes") or delete the keys only if the original was None; keep
the fixture name/mock symbols (mock_graphiti_core, mock_nodes,
mock_episode_type) unchanged.
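The save/restore pattern generalizes to a small context manager (a sketch; the fixtures would inline the same logic around their yield):

```python
import sys
from contextlib import contextmanager
from types import ModuleType


@contextmanager
def patched_modules(mods: dict[str, ModuleType]):
    """Inject fake modules and restore the prior sys.modules state on exit."""
    saved = {name: sys.modules.get(name) for name in mods}
    sys.modules.update(mods)
    try:
        yield
    finally:
        for name, original in saved.items():
            if original is None:
                sys.modules.pop(name, None)  # key did not exist before
            else:
                sys.modules[name] = original  # restore the real module
```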
In `@apps/backend/integrations/graphiti/tests/test_integration_graphiti.py`:
- Around line 31-33: The tests set pytestmark = [pytest.mark.integration] but an
autouse fixture is globally mocking graphiti_core, so these integration tests
never hit real LadybugDB/graphiti. Update the autouse mock (likely in
conftest.py) to skip mocking when the test has the "integration" marker—check
that request.node.get_closest_marker("integration") is not None to conditionally
avoid applying the mock—or alternatively remove the autouse mock for these
specific tests and apply the mock only in unit tests; ensure the symbols to
change are the global autouse fixture name and the pytestmark =
[pytest.mark.integration] in test_integration_graphiti.py.
- Around line 70-85: The fixture currently unconditionally pops sys.modules
entries for "graphiti_core", "graphiti_core.nodes", "graphiti_core.driver", and
"graphiti_core.driver.kuzu_driver" in the finally block; instead capture the
original values before inserting mocks (e.g., save prev_graphiti_core =
sys.modules.get("graphiti_core") etc.) and in the finally restore each entry: if
the saved value is None remove the key, otherwise set sys.modules[key] back to
the saved value. Update the finally block in the fixture that yields the
mock_graphiti_core/mock_nodes/mock_driver_module to use these saved variables to
restore prior sys.modules state.
In `@apps/backend/integrations/graphiti/tests/test_integration_ollama.py`:
- Around line 31-33: The module-level pytestmark variable incorrectly labels
these tests as integration; instead either remove or change pytestmark to a unit
marker and stop globally mocking graphiti_core/driver paths at module scope —
move the mocks into individual tests or a function-scoped fixture using
monkeypatch (or pytest.fixture(scope="function")) so only tests that should be
unit-level use the mocked graphiti_core and driver path imports; locate the
module-level pytestmark and the global mocks of graphiti_core/driver in this
test module and change their scope accordingly or adjust the marker to reflect
unit tests.
- Around line 64-79: The fixture unconditionally pops keys from sys.modules
("graphiti_core", "graphiti_core.nodes", "graphiti_core.driver",
"graphiti_core.driver.kuzu_driver") in the finally block which can remove
pre-existing modules; change the teardown to capture the original values before
injecting mocks and then restore them on exit (i.e., save originals in local
variables or a dict prior to assigning mock_graphiti_core and related entries
and in the finally block set sys.modules[...] back to the saved value or delete
only if it did not exist originally), updating the code around the sys.modules
assignments and the finally cleanup to use those saved originals.
In `@apps/backend/phase_config.py`:
- Around line 529-535: Guard access to phaseCustomAgents by ensuring it's a dict
before calling .get: in the block where metadata is read (variables
phase_agents, metadata, phase), replace the unconditional phase_agents =
metadata.get("phaseCustomAgents") and subsequent phase_agents.get(phase) with a
type check (e.g., temp = metadata.get("phaseCustomAgents"); if not
isinstance(temp, dict): return None) and only then use temp.get(phase) and the
existing string validation/strip logic; this prevents a crash when
phaseCustomAgents is malformed.
In `@apps/frontend/src/main/claude-code-settings/reader.ts`:
- Around line 103-104: The sanitizer functions (sanitizeEnv, sanitizeMcpServers,
sanitizeEnabledPlugins) are vulnerable to prototype-pollution because they write
untrusted JSON keys into plain {} maps (e.g., sanitized, mcpServers, headers);
change those map initializations to use Object.create(null) and, before
assigning any user key (e.g., key, hk, pluginKey), validate the key is not one
of ['__proto__','constructor','prototype'] (reject or skip such keys) so
assignments like sanitized[key] = ... and headers[hk] = ... cannot mutate
prototypes.
In `@apps/frontend/src/main/ipc-handlers/claude-agents-handlers.ts`:
- Around line 47-51: The ClaudeCustomAgent objects returned by the IPC handler
in claude-agents-handlers.ts currently include a filePath field that should be
removed; update the code that constructs the ClaudeCustomAgent response (the IPC
handler in this file that uses getAgentsDir()) to only return agentId,
agentName, categoryDir, and categoryName (the exact shape used by
AgentProfileSettings.tsx) and stop exposing local filesystem paths to the
renderer. If renderer code needs to read agent content, add a separate IPC
method (e.g., readAgentFile(agentId)) implemented in claude-agents-handlers.ts
that reads from getAgentsDir() on the main side and returns the content, rather
than sending filePath in ClaudeCustomAgent.
In `@apps/frontend/src/main/ipc-handlers/claude-mcp-handlers.ts`:
- Around line 187-189: The spread currently casts config.headers to
Record<string,string> without validating values; update the logic around the
spread (the expression using config.headers) to validate and sanitize entries:
when config.headers is a non-null object and not an array, iterate its own keys
and build a new headers object containing only entries whose values are strings
(or convert allowed primitive values via String(value)), skipping
undefined/null/objects/arrays, then spread that sanitized headers object instead
of blindly casting; reference the config.headers usage in the clause that
produces { headers: config.headers as Record<string, string> } to locate where
to replace with the validated headers map.
- Around line 10-12: The IPC handler path uses synchronous fs calls in functions
findLatestSubdir, resolvePluginServers, and readClaudeJsonMcpServers which can
block the Electron main thread; convert these functions to async and replace
existsSync/readdirSync/statSync/readFileSync with the async fs.promises
equivalents (or import from 'fs/promises') and use await; update their callers
(notably registerClaudeMcpHandlers and any ipcMain.handle callbacks) to await
the new async functions so work runs asynchronously and does not block the main
thread. Ensure all return types/promises are propagated and error handling is
preserved when switching to the async versions.
In `@apps/frontend/src/main/ipc-handlers/context/project-context-handlers.ts`:
- Around line 343-350: The IPC handler is sending hardcoded English progress
messages to the renderer; update sendIndexProgress calls (e.g., the ones inside
the loop that reference childProjects and the earlier "Discovering..." call) to
emit a stable progress key/code plus a params object instead of user-facing text
(for example { key: "progress.discovering_repos", params: { total } } and { key:
"progress.analyzing_project", params: { name: child.name, index: i+1, total }
}). Ensure every place using sendIndexProgress (including the later block around
lines 369-400) follows this pattern so the renderer can call
react-i18next.translate with the key and params; leave types for
sendIndexProgress/payload consistent with ProjectIndex usage and update any
typing accordingly.
In `@apps/frontend/src/main/ipc-handlers/github/customer-github-handlers.ts`:
- Around line 182-187: Validate and sanitize IPC inputs before building GitHub
REST endpoints: ensure the incoming "state" param is one of
'open'|'closed'|'all' (reject or default to 'open' if not), ensure "page" is an
integer >=1 (coerce or reject), and ensure any "issueNumber" used later is a
positive integer; if validation fails return a clear IPC error instead of
building the request. Update the handler that accepts (customerId, state, page)
and the other handler(s) referencing issueNumber (lines around 274-276) to
perform these checks/normalization early, and use the validated values when
constructing endpoints or queries.
In `@apps/frontend/src/main/ipc-handlers/memory-handlers.ts`:
- Around line 95-99: The heuristic fallback that returns guessed dims when
nameLower.includes('large'), 'base', or 'small' matches should be removed; instead, in the
function handling model dimension lookup (the code block using nameLower in
apps/frontend/src/main/ipc-handlers/memory-handlers.ts), fail fast for unknown
models by throwing a clear error or returning an explicit failure result (e.g.,
throw new Error('Unknown model embedding dimension') or return { error:
'unknown_model' }) so the caller will query the backend/detector for the exact
dimension; update any callers to handle this explicit failure path rather than
relying on guessed dimensions.
In `@apps/frontend/src/main/ipc-handlers/project-handlers.ts`:
- Around line 518-523: The code currently creates the .auto-claude directory
unconditionally which can silently recreate a missing or moved project root;
before constructing dotAutoClaude and calling mkdirSync, validate that the base
project path exists (e.g., check existsSync(project.path) or fs.statSync
handling) and fail fast (throw or return an error/log) if the base path is
missing, only then create path.join(project.path, '.auto-claude') with
mkdirSync({ recursive: true }); update the logic around the dotAutoClaude
variable and the existsSync/mkdirSync calls to reflect this validation and error
handling.
In `@apps/frontend/src/main/ipc-handlers/settings-handlers.ts`:
- Around line 780-797: The hasKeychainClaudeToken function currently treats
whitespace-only tokens as valid; update its fallback check (where it calls
getCredentialsFromKeychain()) to verify the token is non-empty after trimming
(e.g., confirm creds.token exists and creds.token.trim().length > 0) instead of
using !!creds.token, while keeping the initial profileManager.hasValidAuth()
attempt unchanged; this change affects hasKeychainClaudeToken and references
getCredentialsFromKeychain and profileManager.hasValidAuth.
- Around line 855-856: The AUTOBUILD_SOURCE_ENV_GET handler is using hasEnvToken
|| hasGlobalToken but AUTOBUILD_SOURCE_ENV_CHECK_TOKEN also includes
hasKeychainClaudeToken(), causing inconsistent UI vs runtime checks; update
AUTOBUILD_SOURCE_ENV_GET to use the same unified token-presence logic (e.g.,
compute hasToken = hasEnvToken || hasGlobalToken || hasKeychainClaudeToken()),
or extract a shared helper (e.g., getHasToken() or unifiedHasToken()) and use it
in both AUTOBUILD_SOURCE_ENV_GET and AUTOBUILD_SOURCE_ENV_CHECK_TOKEN so both
handlers report identical token sources and state.
In `@apps/frontend/src/renderer/App.tsx`:
- Around line 815-820: The finally block always closes the GitHub setup UI and
clears gitHubSetupProject even on save failure; change the flow so that
setShowGitHubSetup(false) and setGitHubSetupProject(null) are only called after
a successful save (e.g., move those calls out of the finally and into the try
block after the await/save operation and any success checks), leaving the catch
to log or surface the error and keep the modal open for retry; update the code
paths referencing setShowGitHubSetup and setGitHubSetupProject accordingly so
failures do not close the modal or clear gitHubSetupProject.
In `@apps/frontend/src/renderer/components/AddCustomerModal.tsx`:
- Around line 49-73: The code captures a stale Zustand snapshot by saving
useProjectStore.getState() to `store` and later reading `store.projects` after
`store.updateProject(...)`; fix by re-reading the up-to-date store state after
the update instead of using the old `store` reference — e.g. call
useProjectStore.getState() (or useProjectStore.getState().projects) when
computing `updatedProject` so the lookup uses the fresh state (replace the
`store.projects.find(...)` usage with a fresh getState() lookup), keeping
existing calls to store.updateProject, store.addProject, store.selectProject,
and onCustomerAdded intact.
- Around line 44-48: In registerAndInitCustomer, don't return silently when
window.electronAPI.addProject(path, 'customer') fails; instead explicitly handle
the error case by reading result.success/result.data and result.error, logging
the failure (console.error or process logger) and surfacing a user-facing error
(e.g., set a modal state like setRegistrationError or trigger a
toast/notification) so the UI shows the failure; update the failure branch
inside registerAndInitCustomer (after the addProject call) to include logging
the returned error details and updating the component state or throwing so
callers know the operation failed.
In `@apps/frontend/src/renderer/components/AgentTools.tsx`:
- Around line 1376-1377: Currently the two cards are fully hidden behind the
conditions allGlobalServers.length > 0 and agentsInfo.totalAgents > 0, which also
hide their refresh controls; change them to always render the card container
(keep the same rounded-lg border bg-card p-4 markup) and move the conditional
logic inside the card to show three states: a loading state (use existing
fetch/loading flags or derive one from the fetch promise/state, e.g.,
isLoadingGlobalServers / isLoadingAgents), an empty state (explicit message like
"No servers" or "No agents" when arrays/count === 0), and the populated state
that renders the list/table when data exists; ensure the refresh control
(existing refresh button/handler) stays visible in the card header so users can
retry even when empty or failed, and update references to allGlobalServers and
agentsInfo.totalAgents inside the card body accordingly.
In `@apps/frontend/src/renderer/components/context/ProjectIndexTab.tsx`:
- Around line 101-111: The progress-percentage and width calculations in
ProjectIndexTab use indexProgressCurrent / indexProgressTotal without guarding
against indexProgressTotal being 0; update the logic wherever used (the block
using indexProgressTotal and indexProgressCurrent around the progress bar and
the similar code at lines ~128-138) to compute percent as indexProgressTotal > 0
? Math.round((indexProgressCurrent / indexProgressTotal) * 100) : 0 (and
similarly for the inline style width using a conditional that yields "0%" when
total is 0 or null), and ensure you still handle null/undefined checks for
indexProgressTotal and indexProgressCurrent already present.
In `@apps/frontend/src/renderer/components/CustomerReposModal.tsx`:
- Around line 137-143: The clear-search icon button in the CustomerReposModal
component has no accessible name and renders no visible text; update the button
in CustomerReposModal to use the react-i18next key customerRepos.clearSearch for
its accessible label (e.g., pass aria-label or include visually hidden text
using t('customerRepos.clearSearch') from useTranslation) instead of a hardcoded
string, and add the customerRepos.clearSearch entry to both en/*.json and
fr/*.json translation files with appropriate translations; ensure you reference
the button that calls setSearch('') so the behavior remains unchanged.
- Line 148: The div in CustomerReposModal (the element with className "flex-1
overflow-y-auto min-h-0 space-y-2 py-2") uses an inline style maxHeight:
'400px'; remove the style prop and replace it with the Tailwind utility
max-h-[400px] (added to the className) to comply with Tailwind v4 usage; update
the className on that div in CustomerReposModal.tsx and remove the style
attribute.
In `@apps/frontend/src/renderer/components/EnvConfigModal.tsx`:
- Around line 160-194: The UI uses a different authenticated-profile predicate
here (in EnvConfigModal) than the one used when loading profiles, which can
cause inconsistent success states; replace the inline check "profile.configDir
|| profile.isDefault" with the same predicate used at profile-loading (extract
or import the existing helper used at line 109, e.g., the
isAuthenticatedProfile/isProfileAuthenticated function or rename the check into
a shared helper) and use that helper in EnvConfigModal's save flow (update
comments to reflect Keychain-based auth via the shared predicate). Ensure the
same symbol is referenced in both places so they remain consistent.
In
`@apps/frontend/src/renderer/components/github-issues/components/IssueListItem.tsx`:
- Around line 30-34: The repo badge (in IssueListItem component) can overflow in
narrow panes because long issue.repoFullName isn't constrained; update the Badge
rendering to constrain its width and enable truncation (e.g., add classes like a
fixed max-width, overflow-hidden and truncate) and include a title attribute
(title={issue.repoFullName}) so the full name is visible on hover; target the
Badge instance that’s conditional on showRepoBadge and issue.repoFullName and
apply these CSS class and title changes.
In
`@apps/frontend/src/renderer/components/github-issues/hooks/useMultiRepoGitHubIssues.ts`:
- Around line 55-84: The useEffect that defines checkConnection can let stale
async results overwrite state when customerId changes; update checkConnection to
capture the initiating customerId (e.g., const reqCustomer = customerId) or use
a cancelled flag/AbortController so that before calling setState you verify the
response belongs to the current customer (compare reqCustomer === customerId or
!cancelled), referencing the checkConnection function, useEffect, setState,
customerId, and window.electronAPI.github.checkMultiRepoConnection; apply the
same guard pattern to the other effect handling lines 87-124 to prevent older
responses from mutating state for a new customer.
- Line 4: Replace the relative import of FilterState in
useMultiRepoGitHubIssues.ts with the renderer path alias: change "import type {
FilterState } from '../types';" to use the "@/..." alias that maps to the
renderer source (import the same FilterState symbol via the @/* alias, consistent
with other frontend imports).
In `@apps/frontend/src/renderer/components/github-prs/GitHubPRs.tsx`:
- Around line 58-75: The function formatRelativeDate can produce negative
"minutes/hours ago" when diffMs is negative; modify formatRelativeDate to handle
future timestamps by clamping diffMs to zero or adding a dedicated future
branch: compute diffMs = Math.max(0, now.getTime() - date.getTime()) (or if you
prefer detect diffMs < 0 and return a localized future string via
t('time.inFuture') or similar). Update logic that derives
diffHours/diffMins/diffDays from diffMs so they never become negative and ensure
any i18n key used (e.g., 'time.inFuture') matches your translation entries.
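The clamping idea, shown here as a Python sketch since the component itself is TypeScript (function name illustrative):

```python
from datetime import datetime


def relative_age_minutes(then: datetime, now: datetime) -> int:
    """Minutes elapsed, clamped so future timestamps never yield negatives."""
    diff_ms = max(0.0, (now - then).total_seconds() * 1000)
    return int(diff_ms // 60_000)
```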
- Around line 491-516: The current render shows PRDetail whenever
resolvedChildProjectId && selectedPR are truthy, which can pair a stale
singleRepo.selectedPR (fullPRDetail) with a new resolvedChildProjectId; fix this
by additionally verifying the PR belongs to the resolved child project before
rendering. Update the guard around PRDetail (where resolvedChildProjectId and
selectedPR are checked) to also compare the PR's project identifier (e.g.,
selectedPR.projectId, selectedPR.childProjectId, or fullPRDetail.id depending on
your shape) to resolvedChildProjectId so the component only renders when
selectedPR is for the currently active multi-repo project; keep the rest of the
props and handlers (PRDetail, handleRunReview, handleCheckNewCommits, etc.)
unchanged.
In
`@apps/frontend/src/renderer/components/github-prs/hooks/useMultiRepoGitHubPRs.ts`:
- Around line 52-119: Reset component state when customerId becomes falsy and
guard both async effects against applying stale results by using a cancellation
flag or local copy of customerId inside each effect. Specifically, in the effect
that calls checkConnection (function checkConnection) return early and call
setState to clear prs/repos/syncStatus/error when customerId is undefined;
inside checkConnection capture const currentId = customerId (or use a let
cancelled = false and set cancelled = true in the cleanup) and before any
setState verify the request is not cancelled and the customerId still matches
currentId. Do the same in the loadPRs effect (function loadPRs): capture the
customer id or use a cancelled flag in the effect cleanup and only call setState
when not cancelled and the id matches; also ensure you reset
isLoading/error/prs/repos when customerId changes to avoid showing data from the
previous customer.
In `@apps/frontend/src/renderer/components/GitHubIssues.tsx`:
- Around line 34-40: The task linking logic currently maps and resolves
linkedTaskId using only issue.number which allows cross-repo collisions in
customer multi-repo mode; update the mapping and resolution to use a repo-scoped
key (e.g., `${repoFullName}#${issue.number}`) wherever issues are aggregated —
specifically adjust the hooks/consumers referenced by isCustomer, singleRepo,
and multiRepo and replace any plain-number keys (see the resolution logic around
linkedTaskId) with the repoFullName#issueNumber composite key (also apply the
same change to the code paths noted around lines 79-87) so that lookups and maps
are created and read with the repo-qualified key.
In `@apps/frontend/src/renderer/components/settings/AgentProfileSettings.tsx`:
- Around line 26-27: Update the deep-relative imports in
AgentProfileSettings.tsx to use the frontend path aliases: replace the import of
AgentProfile, PhaseModelConfig, PhaseThinkingConfig, PhaseCustomAgentsConfig,
ModelTypeShort, ThinkingLevel from '../../../shared/types/settings' with the
alias '@shared/types/settings', and replace ClaudeAgentsInfo and
ClaudeCustomAgent from '../../../shared/types/integrations' with
'@shared/types/integrations' so the renderer uses the configured @shared/* path
aliases.
In
`@apps/frontend/src/renderer/components/settings/integrations/GitHubIntegration.tsx`:
- Around line 510-514: Replace the template-literal Tailwind class composition
in the GitHubIntegration component with the cn() helper: locate the JSX element
that sets className using the template string that references status and
alreadyCloned, and call cn() passing the static classes ("flex items-center
gap-2 rounded-md border px-3 py-2 text-sm") plus a conditional object/ternary
for the dynamic classes so that when status === 'done' || alreadyCloned you
return "border-green-500/30 bg-green-500/5" else "border-border"; ensure you
import/use the existing cn helper used across the codebase and remove the
template literal to make the composition merge-safe.
In `@apps/frontend/src/renderer/components/Sidebar.tsx`:
- Line 127: The Sidebar defines showAddCustomerModal and setShowAddCustomerModal
but never sets it to true, so AddCustomerModal can’t open; add a control in the
Sidebar (e.g., a button or menu item) that calls setShowAddCustomerModal(true)
and pass show={showAddCustomerModal} and onClose={() =>
setShowAddCustomerModal(false)} (or equivalent handler) into the
AddCustomerModal component; update wherever similar modal flags are declared
(see the other modal flags around the 642-649 region) to follow the same pattern
so each modal can be opened and closed from the Sidebar.
- Around line 145-155: The path comparison logic in customerContext and
customerChildRepos uses startsWith(p.path + '/') which fails on Windows
backslashes; normalize both paths before comparing by converting backslashes to
forward slashes (e.g., replace all '\' with '/') and ensure the parent path ends
with a single '/' before calling startsWith; update the comparisons in the
customerContext useMemo and customerChildRepos useMemo (references:
selectedProject.path, p.path, customerContext, customerChildRepos) to use the
normalized paths for robust cross-platform detection.
In `@apps/frontend/src/renderer/components/SortableProjectTab.tsx`:
- Around line 95-103: The inactive drag handle is keyboard-focusable
(tabIndex={0}) while styled invisible (opacity-0), so change the tabIndex to be
conditional: set tabIndex to 0 only when the handle is visible/active (e.g.,
when isActive is true or otherwise visible) and set it to -1 when
inactive/hidden; update the SortableProjectTab JSX where tabIndex={0} is used
(the drag handle element with classes including 'opacity-0
group-hover:opacity-60 bg-muted-foreground') to tabIndex={isActive ? 0 : -1} and
keep or add any focus-visible styles for the active state so keyboard focus
remains visible.
In `@apps/frontend/src/shared/i18n/locales/en/navigation.json`:
- Line 30: The "repoCount" translation currently always renders "{{count}}
repos" and should handle singular/plural; update the localization for the
"repoCount" key to provide a singular form (e.g., use ICU plural syntax like
"{{count, plural, one {# repo} other {# repos}}}" or add i18next plural keys
"repoCount_one" and "repoCount_other") so that when count===1 it outputs "1
repo" and otherwise "N repos".
In `@apps/frontend/src/shared/i18n/locales/fr/navigation.json`:
- Line 30: The "repoCount" entry currently always uses the plural form and
yields "1 dépôts"; update the locale to use proper French pluralization for the
"repoCount" key by adding singular/plural variants supported by your i18n system
(e.g., replace the single "repoCount" value with language-specific plural keys
such as "repoCount_one": "{{count}} dépôt" and "repoCount_other": "{{count}}
dépôts" or use "repoCount": "{{count, plural, one {# dépôt} other {# dépôts}}}"
depending on your translator), ensuring count=1 renders "1 dépôt".
In `@apps/frontend/src/shared/types/ipc.ts`:
- Around line 458-459: The onIndexProgress event payload lacks project
identification, so change the onIndexProgress callback signature to include
projectId in its data object (e.g., callback: (data: { projectId: string;
message: string; current?: number; total?: number }) => void) and update any
related types/usages (the onIndexProgress declaration, any emitter/dispatcher
that calls it, and listeners that consume it) to pass and expect projectId;
ensure this aligns with refreshProjectIndex and IPCResult<ProjectIndex>
semantics so progress events can be scoped per project.
---
Outside diff comments:
In `@apps/frontend/src/main/project-store.ts`:
- Around line 129-146: addProject currently returns the existing Project early
without updating its type, so calls that re-add a path with type='customer'
aren't persisted; in addProject (after computing absolutePath and finding
existing via this.data.projects.find) check if a new type argument was passed
and differs from existing.type, then set existing.type = type, update
existing.updatedAt = new Date(), call this.save(), and log if desired before
returning; keep the existing autoBuildPath/isInitialized reset logic and ensure
you still return the existing project after persisting the type change.
In `@apps/frontend/src/renderer/components/github-prs/components/PRList.tsx`:
- Around line 191-205: The relative-date calculation can produce negative values
for future timestamps; clamp diffMs to non-negative before computing diffDays to
avoid "-N minutes ago" outputs. In the block using now, date, diffMs and
diffDays (in PRList.tsx), set diffMs = Math.max(0, now.getTime() -
date.getTime()) (or early-return a localized "just now") so subsequent
Math.floor(...) computations and translation keys t('time.minutesAgo'...),
t('time.hoursAgo'...), etc., never receive negative counts.
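The clamping fix is language-agnostic; the real code is TypeScript in PRList.tsx, but the same idea can be sketched in Python:

```python
from datetime import datetime, timedelta

def relative_age_minutes(now: datetime, then: datetime) -> int:
    """Minutes elapsed since `then`, clamped so future timestamps yield 0."""
    diff_s = max(0.0, (now - then).total_seconds())  # clamp to non-negative
    return int(diff_s // 60)

now = datetime(2024, 1, 1, 12, 0, 0)
future = now + timedelta(minutes=3)   # e.g. server clock ahead of local clock
past = now - timedelta(minutes=90)
```

A slightly-future timestamp now formats as "0 minutes ago" (or a localized "just now") instead of "-3 minutes ago".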
In `@apps/frontend/src/renderer/components/settings/AgentProfileSettings.tsx`:
- Around line 87-101: The memoized hasCustomConfig currently ignores
phaseCustomAgents, so agent-only overrides aren't detected; update the useMemo
in AgentProfileSettings to also check settings.phaseCustomAgents and compare
currentPhaseCustomAgents[phase] against the profile's phaseCustomAgents (e.g.,
add a comparison like currentPhaseCustomAgents[phase] !==
profilePhaseCustomAgents[phase] within the PHASE_KEYS.some), and add
settings.phaseCustomAgents and currentPhaseCustomAgents (and
profilePhaseCustomAgents if named differently) to the hook dependency array;
apply the same inclusion of phaseCustomAgents detection to the similar condition
block around lines 283-295 so the customized badge/reset control reflects agent
overrides too.
In
`@apps/frontend/src/renderer/components/settings/integrations/LinearIntegration.tsx`:
- Around line 231-243: The placeholders on the Input components in
LinearIntegration.tsx are hardcoded; replace the literal strings "Auto-detected"
and "Auto-created" with i18n lookups using the component's translation function
(t) — e.g. set placeholder={t('linear.autoDetected')} for the teamId Input (with
onTeamIdChange) and placeholder={t('linear.autoCreated')} for the projectId
Input (with onProjectIdChange); then add matching keys "linear.autoDetected" and
"linear.autoCreated" to both en/*.json and fr/*.json translation files.
In `@apps/frontend/src/renderer/components/settings/ThemeSelector.tsx`:
- Around line 130-131: Theme names/descriptions are rendered directly from
COLOR_THEMES (theme.name and theme.description) instead of using react-i18next;
update the ThemeSelector component to call the i18n translator (useTranslation
-> t) and replace theme.name with something like t(`themes.${theme.id}.name`)
and theme.description with t(`themes.${theme.id}.description`), then add
matching keys for each theme ID under themes.* in your en/*.json and fr/*.json
locale files so all user-facing theme text is translated.
In `@apps/frontend/src/renderer/components/TaskEditDialog.tsx`:
- Around line 212-225: The dirty-check (hasChanges) and the other save-detection
spots are missing comparison for phaseCustomAgents, so changes to custom-agent
assignments don't mark the dialog as dirty; add a comparison like
JSON.stringify(phaseCustomAgents) !==
JSON.stringify(task.metadata?.phaseCustomAgents || DEFAULT_PHASE_CUSTOM_AGENTS)
to the existing hasChanges expression and the corresponding checks at the other
occurrence (around where phaseModels/phaseThinking are compared) using the
phaseCustomAgents state and the task.metadata?.phaseCustomAgents fallback
constant (e.g., DEFAULT_PHASE_CUSTOM_AGENTS) so custom-agent edits are detected
and persisted.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 03e16508-4b22-4b2c-bbfd-7cd56888c1ba
⛔ Files ignored due to path filters (1)
package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (106)
apps/backend/__init__.py
apps/backend/agents/coder.py
apps/backend/agents/custom_agents.py
apps/backend/agents/planner.py
apps/backend/analysis/analyzers/base.py
apps/backend/analysis/analyzers/context/env_detector.py
apps/backend/analysis/analyzers/database_detector.py
apps/backend/analysis/analyzers/framework_analyzer.py
apps/backend/analysis/analyzers/port_detector.py
apps/backend/analysis/analyzers/project_analyzer_module.py
apps/backend/analysis/analyzers/route_detector.py
apps/backend/analysis/analyzers/service_analyzer.py
apps/backend/core/client.py
apps/backend/integrations/graphiti/config.py
apps/backend/integrations/graphiti/memory.py
apps/backend/integrations/graphiti/queries_pkg/graphiti.py
apps/backend/integrations/graphiti/queries_pkg/queries.py
apps/backend/integrations/graphiti/queries_pkg/schema.py
apps/backend/integrations/graphiti/queries_pkg/search.py
apps/backend/integrations/graphiti/tests/test_episode_types.py
apps/backend/integrations/graphiti/tests/test_integration_graphiti.py
apps/backend/integrations/graphiti/tests/test_integration_ollama.py
apps/backend/integrations/graphiti/tests/test_queries.py
apps/backend/phase_config.py
apps/backend/qa/loop.py
apps/frontend/package.json
apps/frontend/src/__tests__/e2e/smoke.test.ts
apps/frontend/src/__tests__/integration/ipc-bridge.test.ts
apps/frontend/src/main/claude-code-settings/reader.ts
apps/frontend/src/main/claude-code-settings/types.ts
apps/frontend/src/main/ipc-handlers/claude-agents-handlers.ts
apps/frontend/src/main/ipc-handlers/claude-mcp-handlers.ts
apps/frontend/src/main/ipc-handlers/context/project-context-handlers.ts
apps/frontend/src/main/ipc-handlers/github/customer-github-handlers.ts
apps/frontend/src/main/ipc-handlers/github/index.ts
apps/frontend/src/main/ipc-handlers/github/issue-handlers.ts
apps/frontend/src/main/ipc-handlers/github/oauth-handlers.ts
apps/frontend/src/main/ipc-handlers/index.ts
apps/frontend/src/main/ipc-handlers/memory-handlers.ts
apps/frontend/src/main/ipc-handlers/project-handlers.ts
apps/frontend/src/main/ipc-handlers/settings-handlers.ts
apps/frontend/src/main/project-store.ts
apps/frontend/src/preload/api/modules/github-api.ts
apps/frontend/src/preload/api/modules/mcp-api.ts
apps/frontend/src/preload/api/project-api.ts
apps/frontend/src/renderer/App.tsx
apps/frontend/src/renderer/components/AddCustomerModal.tsx
apps/frontend/src/renderer/components/AgentTools.tsx
apps/frontend/src/renderer/components/CustomerReposModal.tsx
apps/frontend/src/renderer/components/EnvConfigModal.tsx
apps/frontend/src/renderer/components/GitHubIssues.tsx
apps/frontend/src/renderer/components/GitHubSetupModal.tsx
apps/frontend/src/renderer/components/Insights.tsx
apps/frontend/src/renderer/components/Sidebar.tsx
apps/frontend/src/renderer/components/SortableProjectTab.tsx
apps/frontend/src/renderer/components/TaskCreationWizard.tsx
apps/frontend/src/renderer/components/TaskEditDialog.tsx
apps/frontend/src/renderer/components/context/Context.tsx
apps/frontend/src/renderer/components/context/MemoriesTab.tsx
apps/frontend/src/renderer/components/context/MemoryCard.tsx
apps/frontend/src/renderer/components/context/ProjectIndexTab.tsx
apps/frontend/src/renderer/components/context/constants.ts
apps/frontend/src/renderer/components/context/hooks.ts
apps/frontend/src/renderer/components/github-issues/components/IssueList.tsx
apps/frontend/src/renderer/components/github-issues/components/IssueListItem.tsx
apps/frontend/src/renderer/components/github-issues/components/RepoFilterDropdown.tsx
apps/frontend/src/renderer/components/github-issues/hooks/useMultiRepoGitHubIssues.ts
apps/frontend/src/renderer/components/github-issues/types/index.ts
apps/frontend/src/renderer/components/github-prs/GitHubPRs.tsx
apps/frontend/src/renderer/components/github-prs/components/PRList.tsx
apps/frontend/src/renderer/components/github-prs/hooks/useMultiRepoGitHubPRs.ts
apps/frontend/src/renderer/components/ideation/Ideation.tsx
apps/frontend/src/renderer/components/onboarding/OllamaModelSelector.tsx
apps/frontend/src/renderer/components/settings/AgentProfileSettings.tsx
apps/frontend/src/renderer/components/settings/DisplaySettings.tsx
apps/frontend/src/renderer/components/settings/ProjectSelector.tsx
apps/frontend/src/renderer/components/settings/ThemeSelector.tsx
apps/frontend/src/renderer/components/settings/integrations/GitHubIntegration.tsx
apps/frontend/src/renderer/components/settings/integrations/LinearIntegration.tsx
apps/frontend/src/renderer/components/settings/sections/SectionRouter.tsx
apps/frontend/src/renderer/components/task-detail/TaskProgress.tsx
apps/frontend/src/renderer/lib/browser-mock.ts
apps/frontend/src/renderer/lib/mocks/integration-mock.ts
apps/frontend/src/renderer/lib/mocks/project-mock.ts
apps/frontend/src/renderer/stores/context-store.ts
apps/frontend/src/shared/constants/ipc.ts
apps/frontend/src/shared/constants/models.ts
apps/frontend/src/shared/i18n/index.ts
apps/frontend/src/shared/i18n/locales/en/common.json
apps/frontend/src/shared/i18n/locales/en/context.json
apps/frontend/src/shared/i18n/locales/en/dialogs.json
apps/frontend/src/shared/i18n/locales/en/navigation.json
apps/frontend/src/shared/i18n/locales/en/settings.json
apps/frontend/src/shared/i18n/locales/en/tasks.json
apps/frontend/src/shared/i18n/locales/fr/common.json
apps/frontend/src/shared/i18n/locales/fr/context.json
apps/frontend/src/shared/i18n/locales/fr/dialogs.json
apps/frontend/src/shared/i18n/locales/fr/navigation.json
apps/frontend/src/shared/i18n/locales/fr/settings.json
apps/frontend/src/shared/i18n/locales/fr/tasks.json
apps/frontend/src/shared/types/integrations.ts
apps/frontend/src/shared/types/ipc.ts
apps/frontend/src/shared/types/project.ts
apps/frontend/src/shared/types/settings.ts
apps/frontend/src/shared/types/task.ts
package.json
Actionable comments posted: 5
♻️ Duplicate comments (5)
apps/frontend/src/renderer/components/settings/AgentProfileSettings.tsx (1)
26-27: 🛠️ Refactor suggestion | 🟠 Major
Use frontend @shared/* aliases for shared type imports.
Lines 26-27 still use deep-relative imports for shared types. Please switch to the configured aliases for consistency and path stability in renderer code.
♻️ Proposed fix
-import type { AgentProfile, PhaseModelConfig, PhaseThinkingConfig, ModelTypeShort, ThinkingLevel } from '../../../shared/types/settings';
-import type { ClaudeAgentsInfo } from '../../../shared/types/integrations';
+import type { AgentProfile, PhaseModelConfig, PhaseThinkingConfig, ModelTypeShort, ThinkingLevel } from '@shared/types/settings';
+import type { ClaudeAgentsInfo } from '@shared/types/integrations';
As per coding guidelines, "Use path aliases from tsconfig.json in frontend code: @/* for renderer, @shared/* for shared, @preload/* for preload, @features/* for features, @components/* for components, @hooks/* for hooks, @lib/* for lib."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/settings/AgentProfileSettings.tsx` around lines 26 - 27, Replace the deep-relative imports for shared types with the frontend path aliases: change the import sources that reference '../../../shared/types/settings' and '../../../shared/types/integrations' to use '@shared/types/settings' and '@shared/types/integrations' respectively so the imported symbols (AgentProfile, PhaseModelConfig, PhaseThinkingConfig, ModelTypeShort, ThinkingLevel, ClaudeAgentsInfo) use the configured tsconfig alias; update the import statements in AgentProfileSettings.tsx accordingly to avoid deep-relative paths.
apps/backend/agents/custom_agents.py (3)
226-258: ⚠️ Potential issue | 🟠 Major
YAML block-list syntax is silently ignored.
The _parse_simple_yaml function only handles inline lists (key: [item1, item2]), but standard YAML block-list syntax is commonly used:
tools:
  - Read
  - Write
This will parse tools as an empty string, silently losing the configuration.
🔧 Proposed fix to handle block lists
 def _parse_simple_yaml(text: str) -> dict:
     """
     Parse simple YAML-like frontmatter (key: value pairs).
     Handles:
     - key: value (strings)
     - key: [item1, item2] (inline lists)
+    - key: followed by indented - items (block lists)
     - key: (empty value)
     Does NOT handle nested structures or multi-line values.
     """
     result = {}
-    for line in text.split("\n"):
+    lines = text.split("\n")
+    i = 0
+    while i < len(lines):
+        line = lines[i]
         line = line.strip()
         if not line or line.startswith("#"):
+            i += 1
             continue
         if ":" not in line:
+            i += 1
             continue
         key, _, value = line.partition(":")
         key = key.strip()
         value = value.strip()
         # Parse inline list: [item1, item2]
         if value.startswith("[") and value.endswith("]"):
             items = value[1:-1].split(",")
             result[key] = [item.strip().strip("\"'") for item in items if item.strip()]
+        elif not value:
+            # Check for block list (indented - items)
+            block_items = []
+            j = i + 1
+            while j < len(lines):
+                next_line = lines[j]
+                stripped = next_line.lstrip()
+                if stripped.startswith("- "):
+                    block_items.append(stripped[2:].strip().strip("\"'"))
+                    j += 1
+                elif not stripped or stripped.startswith("#"):
+                    j += 1
+                else:
+                    break
+            if block_items:
+                result[key] = block_items
+                i = j
+                continue
+            else:
+                result[key] = ""
         elif value:
             # Strip quotes
             result[key] = value.strip("\"'")
-        else:
-            result[key] = ""
+        i += 1
     return result
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/agents/custom_agents.py` around lines 226 - 258, The _parse_simple_yaml function currently ignores YAML block-list syntax; update it so when a key is found with an empty value (result[key] = ""), the parser peeks at subsequent lines from the original text to detect an indented block of list items (lines starting with "-" possibly preceded by spaces), collect those items into a Python list (stripping "- " and quotes) and assign that list to result[key] instead of the empty string; ensure you respect the indentation level relative to the key and stop collecting when indentation decreases or a non-list line is encountered, and update any loop/control flow in _parse_simple_yaml to iterate with index or an iterator so you can consume block-list lines properly.
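A standalone, runnable version of the block-list handling described above (independent of the project's actual _parse_simple_yaml, which should be verified against the real file before applying):

```python
def parse_simple_yaml(text: str) -> dict:
    """Parse flat YAML-like frontmatter: scalars, inline lists, and block lists."""
    result: dict = {}
    lines = text.split("\n")
    i = 0
    while i < len(lines):
        line = lines[i].strip()
        if not line or line.startswith("#") or ":" not in line:
            i += 1
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if value.startswith("[") and value.endswith("]"):
            # Inline list: key: [item1, item2]
            result[key] = [v.strip().strip("\"'") for v in value[1:-1].split(",") if v.strip()]
        elif not value:
            # Empty value: look ahead for an indented block list of "- item" lines
            items, j = [], i + 1
            while j < len(lines):
                stripped = lines[j].lstrip()
                if stripped.startswith("- "):
                    items.append(stripped[2:].strip().strip("\"'"))
                elif stripped and not stripped.startswith("#"):
                    break  # first non-list, non-blank, non-comment line ends the block
                j += 1
            if items:
                result[key], i = items, j
                continue  # consumed the block-list lines; skip the i += 1 below
            result[key] = ""
        else:
            result[key] = value.strip("\"'")
        i += 1
    return result
```

A `tools:` key followed by indented `- Read` / `- Write` items now yields a list instead of an empty string.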
79-83: ⚠️ Potential issue | 🟠 Major
Frontmatter extraction fails on Windows CRLF files.
The regex on line 80 uses \n for line matching, but files edited on Windows may use \r\n. This causes frontmatter to either not be detected or leak into the system prompt.
🔧 Proposed fix
 agent_id = file_path.stem  # filename without .md
+# Normalize line endings for cross-platform compatibility
+content = content.replace("\r\n", "\n")
 frontmatter = {}
 body = content
 # Extract YAML frontmatter if present
-fm_match = re.match(r"^---\s*\n(.*?)\n---\s*\n(.*)", content, re.DOTALL)
+fm_match = re.match(r"^---[ \t]*\n(.*?)\n---[ \t]*\n(.*)", content, re.DOTALL)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/agents/custom_agents.py` around lines 79 - 83, The frontmatter extraction regex (fm_match) fails on Windows CRLF because it only matches '\n'; update the logic in the block that computes fm_match/fm_text/body to handle CRLF by either normalizing content line endings first (e.g., replace '\r\n' with '\n' before running the regex) or by making the regex accept optional '\r' (e.g., use '\r?\n' in the pattern) so the pattern ^---\s*\r?\n(.*?)\r?\n---\s*\r?\n(.*) will correctly capture frontmatter on Windows; apply the change where fm_match is defined and ensure body = fm_match.group(2).strip() still runs only when fm_match is truthy.
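The CRLF normalization can be checked in isolation. A self-contained sketch (the regex mirrors the proposed pattern; the helper name is invented):

```python
import re

# Proposed pattern: tolerate trailing spaces/tabs after the --- fences
FM_RE = re.compile(r"^---[ \t]*\n(.*?)\n---[ \t]*\n(.*)", re.DOTALL)

def split_frontmatter(content: str) -> tuple[str, str]:
    """Return (frontmatter, body); frontmatter is "" when none is present."""
    content = content.replace("\r\n", "\n")  # normalize CRLF -> LF first
    m = FM_RE.match(content)
    if not m:
        return "", content
    return m.group(1), m.group(2).strip()
```

The same file parses identically whether it was saved with LF or CRLF line endings.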
114-145: ⚠️ Potential issue | 🔴 Critical
Critical: Validate agent_id to prevent path traversal.
The agent_id parameter is used directly in path construction (f"{agent_id}.md") without validation. An attacker-controlled agent_id containing ../ or path separators could escape the agents directory and read arbitrary .md files.
🔒 Proposed fix
 def load_custom_agent(agent_id: str) -> CustomAgentConfig | None:
     """
     Load a custom agent by ID.
     ...
     """
+    # Validate agent_id to prevent path traversal
+    if not re.fullmatch(r"[A-Za-z0-9_-]+", agent_id):
+        logger.warning(f"Invalid custom agent id (unsafe characters): {agent_id}")
+        return None
+
     agents_dir = get_agents_dir()
     if not agents_dir.exists():
         return None
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/agents/custom_agents.py` around lines 114 - 145, The load_custom_agent function currently concatenates agent_id into paths directly; validate and sanitize agent_id before using it: reject or normalize any input containing path separators or traversal segments (e.g., "/" "\" or "..") or use a safe basename (e.g., Path(agent_id).name) so only a filename is allowed; after constructing agent_file or root_file, resolve both the target path and agents_dir (Path.resolve()) and enforce the target is inside agents_dir (e.g., target_path.resolve().is_relative_to(agents_dir.resolve()) or equivalent) before calling parse_agent_file; if validation fails, return None (or raise) to prevent path traversal. Ensure references: load_custom_agent, get_agents_dir, agents_dir, agent_file, root_file, parse_agent_file.
apps/backend/agents/coder.py (1)
85-87: ⚠️ Potential issue | 🔴 Critical
Fix import ordering to resolve Ruff lint failure.
The pipeline is failing with I001 Import block is un-sorted or un-formatted. The import from .custom_agents (line 86) should come before .memory_manager (line 85) alphabetically.
🧹 Proposed fix
-from .memory_manager import debug_memory_system_status, get_graphiti_context
 from .custom_agents import build_agents_catalog_prompt
+from .memory_manager import debug_memory_system_status, get_graphiti_context
 from .session import post_session_processing, run_agent_session
As per coding guidelines, "Lint backend code with Ruff and ensure all tests pass with pytest."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/agents/coder.py` around lines 85 - 87, The import block is out-of-order causing a Ruff I001 error: swap the two local imports so .custom_agents is imported before .memory_manager (i.e., place "from .custom_agents import build_agents_catalog_prompt" above "from .memory_manager import debug_memory_system_status, get_graphiti_context"); verify imported names remain correct and that the rest of the import block (e.g. "from .session import post_session_processing, run_agent_session") stays in its original position to satisfy Ruff's alphabetical ordering rules.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/backend/agents/custom_agents.py`:
- Around line 148-164: The load_all_agents() and build_agents_catalog_prompt
functions currently only iterate category subdirectories and thus omit
root-level agent files; update both to also glob for "*.md" files directly under
the agents root (get_agents_dir()) and parse them with parse_agent_file (same
logic used for category files), skipping README.md and deduplicating by ID if
necessary to match load_custom_agent() behavior; locate and modify
load_all_agents and build_agents_catalog_prompt to first collect root-level
agent files before or in addition to iterating category_dir.glob("*.md").
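The discovery-plus-dedup logic can be sketched as follows (a hypothetical helper; the real code would parse each discovered file with parse_agent_file, which is omitted here):

```python
from pathlib import Path

def discover_agent_files(agents_dir: Path) -> dict[str, Path]:
    """Collect agent .md files: root-level first, then one level of categories.

    Skips README.md and deduplicates by file stem so a root-level agent wins
    over a same-named category agent, mirroring load_custom_agent's lookup order.
    """
    found: dict[str, Path] = {}
    candidates = sorted(agents_dir.glob("*.md")) + sorted(agents_dir.glob("*/*.md"))
    for f in candidates:
        if f.name.lower() == "readme.md":
            continue
        found.setdefault(f.stem, f)  # first hit (root level) wins
    return found
```

Because root-level files are globbed before category files, `setdefault` implements the dedup rule without any extra bookkeeping.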
- Line 183: The inline transformation setting category_name from
category_dir.name is dense; extract that logic into a small helper (e.g., def
parse_category_name(raw_name) or format_category_name(name)) and replace the
inline expression (the assignment to category_name) with a call to that helper;
the helper should implement the same behavior: if "-" in raw_name split on the
first "-" and take the tail, replace "-" with " " and title-case the result,
otherwise return raw_name unchanged, and update all places that compute
category_name to call the new helper for readability and reuse.
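The helper described above, with the stated rule made explicit (assuming the segment before the first "-" is an ordering prefix such as "01-"; that assumption comes from the rule's wording, not from the project's code):

```python
def format_category_name(raw_name: str) -> str:
    """Humanize a category directory name, e.g. "01-code-review" -> "Code Review"."""
    if "-" in raw_name:
        tail = raw_name.split("-", 1)[1]  # drop the part before the first "-"
        return tail.replace("-", " ").title()
    return raw_name  # no dash: leave unchanged
```

Every place that computes category_name inline can then call this one helper.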
- Around line 98-102: Replace the hardcoded check for thinking levels with the
centralized sanitizer: import and call sanitize_thinking_level(thinking) from
phase_config instead of validating against ("low","medium","high"); assign its
return back to thinking and rely on its built-in handling/logging (it will map
legacy values like "ultrathink" → "high" and produce consistent warnings), and
ensure the surrounding context (e.g., file_path) still makes sense if you need
to add any extra log context.
In `@apps/backend/core/client.py`:
- Around line 908-914: The agents_catalog_prompt is concatenated into
base_prompt without any Windows command-line length protection; update the block
that appends agents_catalog_prompt so on Windows (os.name == 'nt') you either
truncate agents_catalog_prompt to the available command length (using the same
MAX/CALC logic used by the CLAUDE.md truncation code) or emit a clear warning
when len(agents_catalog_prompt) exceeds a safe threshold before appending;
ensure you update the injected print/log line to reflect truncation or the
warning and reference the same variables (base_prompt, agents_catalog_prompt) so
the behavior mirrors the CLAUDE.md truncation logic.
In `@apps/frontend/src/renderer/components/settings/AgentProfileSettings.tsx`:
- Around line 387-438: The agents catalog can grow beyond the viewport; update
the scroll container (the div rendering the panel around agentsInfo.categories —
currently the div with className "border-t border-border p-4 space-y-1") to use
a fixed max-height and internal scrolling (e.g., add a Tailwind class like
max-h-60 and overflow-y-auto or equivalent) so
expandedAgentCategories/agentsInfo.categories content scrolls internally instead
of overflowing the page; ensure nested expanded lists (the div showing category
agents) keep standard spacing but rely on the parent container's internal
scroll.
---
Duplicate comments:
In `@apps/backend/agents/coder.py`:
- Around line 85-87: The import block is out-of-order causing a Ruff I001 error:
swap the two local imports so .custom_agents is imported before .memory_manager
(i.e., place "from .custom_agents import build_agents_catalog_prompt" above
"from .memory_manager import debug_memory_system_status, get_graphiti_context");
verify imported names remain correct and that the rest of the import block (e.g.
"from .session import post_session_processing, run_agent_session") stays in its
original position to satisfy Ruff's alphabetical ordering rules.
In `@apps/backend/agents/custom_agents.py`:
- Around line 226-258: The _parse_simple_yaml function currently ignores YAML
block-list syntax; update it so when a key is found with an empty value
(result[key] = ""), the parser peeks at subsequent lines from the original text
to detect an indented block of list items (lines starting with "-" possibly
preceded by spaces), collect those items into a Python list (stripping "- " and
quotes) and assign that list to result[key] instead of the empty string; ensure
you respect the indentation level relative to the key and stop collecting when
indentation decreases or a non-list line is encountered, and update any
loop/control flow in _parse_simple_yaml to iterate with index or an iterator so
you can consume block-list lines properly.
- Around line 79-83: The frontmatter extraction regex (fm_match) fails on
Windows CRLF because it only matches '\n'; update the logic in the block that
computes fm_match/fm_text/body to handle CRLF by either normalizing content line
endings first (e.g., replace '\r\n' with '\n' before running the regex) or by
making the regex accept optional '\r' (e.g., use '\r?\n' in the pattern) so the
pattern ^---\s*\r?\n(.*?)\r?\n---\s*\r?\n(.*) will correctly capture frontmatter
on Windows; apply the change where fm_match is defined and ensure body =
fm_match.group(2).strip() still runs only when fm_match is truthy.
- Around line 114-145: The load_custom_agent function currently concatenates
agent_id into paths directly; validate and sanitize agent_id before using it:
reject or normalize any input containing path separators or traversal segments
(e.g., "/" "\" or "..") or use a safe basename (e.g., Path(agent_id).name) so
only a filename is allowed; after constructing agent_file or root_file, resolve
both the target path and agents_dir (Path.resolve()) and enforce the target is
inside agents_dir (e.g.,
target_path.resolve().is_relative_to(agents_dir.resolve()) or equivalent) before
calling parse_agent_file; if validation fails, return None (or raise) to prevent
path traversal. Ensure references: load_custom_agent, get_agents_dir,
agents_dir, agent_file, root_file, parse_agent_file.
In `@apps/frontend/src/renderer/components/settings/AgentProfileSettings.tsx`:
- Around line 26-27: Replace the deep-relative imports for shared types with the
frontend path aliases: change the import sources that reference
'../../../shared/types/settings' and '../../../shared/types/integrations' to use
'@shared/types/settings' and '@shared/types/integrations' respectively so the
imported symbols (AgentProfile, PhaseModelConfig, PhaseThinkingConfig,
ModelTypeShort, ThinkingLevel, ClaudeAgentsInfo) use the configured tsconfig
alias; update the import statements in AgentProfileSettings.tsx accordingly to
avoid deep-relative paths.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: a404dd2c-2dd5-417c-a426-502fdb1719f8
📒 Files selected for processing (9)
apps/backend/agents/coder.py
apps/backend/agents/custom_agents.py
apps/backend/agents/planner.py
apps/backend/core/client.py
apps/backend/qa/loop.py
apps/frontend/src/renderer/components/AgentTools.tsx
apps/frontend/src/renderer/components/settings/AgentProfileSettings.tsx
apps/frontend/src/shared/i18n/locales/en/settings.json
apps/frontend/src/shared/i18n/locales/fr/settings.json
Enhance the Claude Code MCPs (Global) section in MCP Overview: - Health status indicator (green/red/yellow/gray dot) per MCP server - "Check Health" button to probe all global MCPs in parallel - Response time display for HTTP servers - Pipeline phase toggle chips (Spec, Build, QA, Utility, Ideation) per MCP - Phase assignments persisted in settings as globalMcpPhases - Add GlobalMcpPhaseConfig type to settings types - i18n keys for EN and FR Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…list Global MCPs from Claude Code config (~/.claude.json, ~/.claude/settings.json) are a trusted source. The existing checkMcpHealth rejects commands not in the SAFE_COMMANDS allowlist (npx, npm, node, etc.), causing all global MCP command-based servers to show as "unhealthy" (red). Add checkGlobalMcpHealth IPC handler that verifies command existence via which/where.exe without the allowlist filter. Shows "Available — starts on demand" for found commands, matching how Claude Code actually uses them. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 3
♻️ Duplicate comments (1)
apps/frontend/src/renderer/components/AgentTools.tsx (1)
1426-1458: ⚠️ Potential issue | 🟠 Major
Always render the global MCP card container so retry actions remain available.
Line 1426 hides the whole section when allGlobalServers.length === 0, which also hides refresh and health-check controls after empty/failed loads.
Proposed fix direction
-      {allGlobalServers.length > 0 && (
+      {(isLoadingGlobalMcps || globalMcps !== null) && (
         <div className="rounded-lg border border-border bg-card p-4">
           <div className="flex items-center justify-between mb-4">
             ...
           </div>
-          <div className="space-y-2">
-            {allGlobalServers.map((server) => {
+          {isLoadingGlobalMcps ? (
+            <p className="text-xs text-muted-foreground">{t('settings:mcp.globalMcps.loading')}</p>
+          ) : allGlobalServers.length === 0 ? (
+            <p className="text-xs text-muted-foreground">{t('settings:mcp.globalMcps.empty')}</p>
+          ) : (
+            <div className="space-y-2">
+              {allGlobalServers.map((server) => {
             ...
-            })}
-          </div>
+              })}
+            </div>
+          )}
         </div>
       )}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/AgentTools.tsx` around lines 1426 - 1458, The MCP card is currently hidden when allGlobalServers.length === 0, removing the retry/refresh controls; change rendering so the outer container (the div with class "rounded-lg border...") always renders, while the list of servers inside is conditional — keep the header and the Buttons (onClick handlers checkGlobalMcpHealth and loadGlobalMcps, and states isCheckingGlobalHealth/isLoadingGlobalMcps) visible at all times and only hide or show the server list/content based on allGlobalServers.length; update the JSX to move the conditional from the wrapper to the inner server-list section so retry actions remain available when the list is empty.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/frontend/src/renderer/components/AgentTools.tsx`:
- Around line 749-753: Replace the hardcoded fallback message strings in
AgentTools.tsx with react-i18next translation keys: instead of 'Health check
failed' used when populating results[server.serverId] in the health-check path
(and the similar string at the other occurrence around line 761), call the i18n
translator (from useTranslation) and use a descriptive key like
t('agentTools.healthCheckFailed') so the tooltip/status reads the translated
text; ensure useTranslation is imported and the translation key is added to the
relevant locales.
- Around line 50-52: Replace relative imports with renderer path aliases: change
imports that reference '../stores/settings-store' and '../lib/utils' (symbols:
useSettingsStore, saveSettings, cn) and '../stores/project-store' (symbol:
useProjectStore) to use the configured `@/*` aliases (e.g.,
`@/stores/settings-store`, `@/lib/utils`, `@/stores/project-store`) so frontend
code follows tsconfig aliasing; update the import lines in AgentTools.tsx
accordingly.
In `@apps/frontend/src/shared/i18n/locales/en/settings.json`:
- Line 352: Translation for "projectSections.{id}.description" expects a name
interpolation but AppSettings.tsx currently calls
t(`projectSections.${item.id}.description`) without args; update the render in
apps/frontend/src/renderer/components/settings/AppSettings.tsx to pass the
project name (e.g., t(`projectSections.${item.id}.description`, { name:
item.name })) or a safe fallback (e.g., { name: item.name || '' }) so the
{{name}} token is replaced and users won't see the literal interpolation.
---
Duplicate comments:
In `@apps/frontend/src/renderer/components/AgentTools.tsx`:
- Around line 1426-1458: The MCP card is currently hidden when
allGlobalServers.length === 0, removing the retry/refresh controls; change
rendering so the outer container (the div with class "rounded-lg border...")
always renders, while the list of servers inside is conditional — keep the
header and the Buttons (onClick handlers checkGlobalMcpHealth and
loadGlobalMcps, and states isCheckingGlobalHealth/isLoadingGlobalMcps) visible
at all times and only hide or show the server list/content based on
allGlobalServers.length; update the JSX to move the conditional from the wrapper
to the inner server-list section so retry actions remain available when the list
is empty.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 5837d255-14dd-46d1-82a6-7ded19f79a6f
📒 Files selected for processing (4)
apps/frontend/src/renderer/components/AgentTools.tsx
apps/frontend/src/shared/i18n/locales/en/settings.json
apps/frontend/src/shared/i18n/locales/fr/settings.json
apps/frontend/src/shared/types/settings.ts
Actionable comments posted: 2
♻️ Duplicate comments (4)
apps/frontend/src/shared/types/ipc.ts (1)
458-459: ⚠️ Potential issue | 🟡 Minor
Include `projectId` in the `onIndexProgress` payload for event scoping.
The `onIndexProgress` callback payload lacks a `projectId` field. When multiple projects are being indexed (or rapidly switched), progress events cannot be reliably associated with the correct project, potentially causing UI updates to be misrouted.
🔧 Suggested fix
```diff
-onIndexProgress: (callback: (data: { message: string; current?: number; total?: number }) => void) => () => void;
+onIndexProgress: (callback: (data: { projectId: string; message: string; current?: number; total?: number }) => void) => () => void;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/shared/types/ipc.ts` around lines 458-459: the onIndexProgress callback payload is missing projectId, which prevents scoping progress events; update the IPC type signature for onIndexProgress to include projectId: string in the payload (so its callback becomes (data: { projectId: string; message: string; current?: number; total?: number }) => void) and then adjust any emitters/listeners that use onIndexProgress and refreshProjectIndex to pass and check projectId so progress events are correctly routed to the matching project.
apps/frontend/src/renderer/components/AgentTools.tsx (3)
1425-1427: ⚠️ Potential issue | 🟡 Minor
Don't hide the entire section when data is empty or on first load failure.
The condition `allGlobalServers.length > 0` hides the card completely when there are no servers or when the initial fetch fails. This also hides the refresh button, making recovery harder.
Suggested direction
```diff
-{allGlobalServers.length > 0 && (
+{(isLoadingGlobalMcps || globalMcps !== null) && (
   <div className="rounded-lg border border-border bg-card p-4">
```
Then render:
- A loading state when `isLoadingGlobalMcps` is true
- An empty state with messaging when `allGlobalServers.length === 0`
- The populated list otherwise
This keeps the refresh control accessible for retry.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/AgentTools.tsx` around lines 1425 - 1427, The current JSX hides the entire Claude Code Global MCPs card when allGlobalServers.length === 0, which also hides the refresh control; change the render logic around the card (the block currently gated by allGlobalServers.length > 0) to always render the card and inside it: show a loading state when isLoadingGlobalMcps is true, show an explicit empty state/message when allGlobalServers.length === 0 (with the refresh button visible), and otherwise render the populated list using allGlobalServers; ensure the refresh control remains outside the length check so it is always accessible for retries.
50-52: 🛠️ Refactor suggestion | 🟠 Major
Use renderer path aliases instead of relative imports.
Lines 50-51 use relative imports where `@/*` aliases are required per coding guidelines.
Proposed fix
```diff
-import { useSettingsStore, saveSettings } from '../stores/settings-store';
-import { cn } from '../lib/utils';
+import { useSettingsStore, saveSettings } from '@/stores/settings-store';
+import { cn } from '@/lib/utils';
```
As per coding guidelines: "Use path aliases from tsconfig.json in frontend code: `@/*` for renderer."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/AgentTools.tsx` around lines 50-52: the imports use relative paths instead of the renderer path alias; update the import statements that reference useSettingsStore, saveSettings, cn, and useProjectStore to use the `@/...` alias (e.g., import from '@/stores/settings-store', '@/lib/utils', and '@/stores/project-store') so they follow the project's tsconfig renderer alias convention; ensure all four imported symbols (useSettingsStore, saveSettings, cn, useProjectStore) are imported from their aliased module paths.
749-753: ⚠️ Potential issue | 🟡 Minor
Translate the fallback health-check message.
Line 752 uses a hardcoded English string 'Health check failed' that surfaces in status tooltips. This violates the i18n requirement.
Proposed fix
```diff
 } catch {
   results[server.serverId] = {
     serverId: server.serverId,
     status: 'unknown',
-    message: 'Health check failed',
+    message: t('settings:mcp.globalMcps.healthCheckFailed'),
     checkedAt: new Date().toISOString(),
   };
 }
```
And update the dependency array on line 761:
```diff
-}, [allGlobalServers]);
+}, [allGlobalServers, t]);
```
As per coding guidelines: "All frontend user-facing text must use react-i18next translation keys. Never hardcode strings in JSX/TSX."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/AgentTools.tsx` around lines 749-753: replace the hardcoded message 'Health check failed' in the results assignment (the object created at results[server.serverId]) with a translation call, e.g. use t('agent.healthCheckFailed') from react-i18next so the tooltip uses i18n; then ensure the hook that builds/returns these results (the enclosing useCallback/useEffect that references results) includes the translation function t in its dependency array so translations update correctly when locale changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/frontend/src/renderer/components/AgentTools.tsx`:
- Around line 763-768: The useEffect currently depends on
[allGlobalServers.length, checkGlobalMcpHealth], which causes health checks to
only run when the array length changes while the callback captures the full
array; update the effect to avoid this mismatch by either (A) depending directly
on allGlobalServers (i.e., useEffect(() => { if (allGlobalServers.length > 0)
checkGlobalMcpHealth(); }, [allGlobalServers])) so checks re-run on any server
change, or (B) depend only on allGlobalServers.length and read server details
via a ref inside checkGlobalMcpHealth; pick one approach and update the
useEffect dependency array and related refs/callbacks (checkGlobalMcpHealth,
allGlobalServers) accordingly.
In `@apps/frontend/src/renderer/lib/browser-mock.ts`:
- Around line 372-379: The getClaudeAgents mock has a redundant type assertion
on the success property; in the getClaudeAgents async function remove the
unnecessary "as const" from "success: true as const" so it simply returns
"success: true" (matching the style used in getGlobalMcps) and rely on
TypeScript's type inference for the boolean literal.
---
Duplicate comments:
In `@apps/frontend/src/renderer/components/AgentTools.tsx`:
- Around line 1425-1427: The current JSX hides the entire Claude Code Global
MCPs card when allGlobalServers.length === 0, which also hides the refresh
control; change the render logic around the card (the block currently gated by
allGlobalServers.length > 0) to always render the card and inside it: show a
loading state when isLoadingGlobalMcps is true, show an explicit empty
state/message when allGlobalServers.length === 0 (with the refresh button
visible), and otherwise render the populated list using allGlobalServers; ensure
the refresh control remains outside the length check so it is always accessible
for retries.
- Around line 50-52: The imports in AgentTools.tsx use relative paths instead of
the renderer path alias; update the import statements that reference
useSettingsStore, saveSettings, cn, and useProjectStore to use the `@/...` alias
(e.g., import from '@/stores/settings-store' and '@/lib/utils' and
'@/stores/project-store') so they follow the project's tsconfig renderer alias
convention; ensure all four imported symbols (useSettingsStore, saveSettings,
cn, useProjectStore) are imported from their aliased module paths.
- Around line 749-753: Replace the hardcoded message 'Health check failed' in
the results assignment (the object created at results[server.serverId]) with a
translation call, e.g. use t('agent.healthCheckFailed') from react-i18next so
the tooltip uses i18n; then ensure the hook that builds/returns these results
(the enclosing useCallback/useEffect that references results) includes the
translation function t in its dependency array so translations update correctly
when locale changes.
In `@apps/frontend/src/shared/types/ipc.ts`:
- Around line 458-459: The onIndexProgress callback payload is missing projectId
which prevents scoping progress events; update the IPC type signature for
onIndexProgress to include projectId: string in the payload (so its callback
becomes (data: { projectId: string; message: string; current?: number; total?:
number }) => void) and then adjust any emitters/listeners that use
onIndexProgress and refreshProjectIndex to pass and check projectId so progress
events are correctly routed to the matching project.
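A sketch of how a renderer-side listener could use the proposed `projectId` field to scope events. `scopedProgressListener` is a hypothetical helper, assuming only the payload shape suggested above:

```typescript
// Assumed payload shape, per the suggested fix above.
interface IndexProgress {
  projectId: string;
  message: string;
  current?: number;
  total?: number;
}

// Wraps a UI callback so only events for the matching project get through.
function scopedProgressListener(
  projectId: string,
  onProgress: (p: IndexProgress) => void
): (p: IndexProgress) => void {
  return (payload) => {
    if (payload.projectId !== projectId) return; // drop other projects' events
    onProgress(payload);
  };
}

// Example: two projects indexing concurrently, one listener per project.
const seen: string[] = [];
const listener = scopedProgressListener('proj-1', (p) => seen.push(p.message));
listener({ projectId: 'proj-1', message: 'Indexing src/', current: 1, total: 3 });
listener({ projectId: 'proj-2', message: 'Indexing lib/', current: 1, total: 5 });
console.log(seen); // only proj-1's message
```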
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 9d592a98-9ed9-440c-8a5e-682586349966
📒 Files selected for processing (6)
apps/frontend/src/main/ipc-handlers/mcp-handlers.ts
apps/frontend/src/preload/api/modules/mcp-api.ts
apps/frontend/src/renderer/components/AgentTools.tsx
apps/frontend/src/renderer/lib/browser-mock.ts
apps/frontend/src/shared/constants/ipc.ts
apps/frontend/src/shared/types/ipc.ts
Codecov Report: ❌ Patch coverage is
Address all 49 review comments from PR AndyMik90#1920:
Backend (Python):
- custom_agents.py: path traversal validation, CRLF normalization, YAML block-list parsing, type hints
- route_detector.py: fix .NET HTTP attribute regex, method modifier handling
- client.py: Windows command-line length protection for agent prompts, load global MCP servers from ~/.claude.json (Auto-Claude MCPs take priority)
- graphiti config/schema/search: non-negative TTL validation, safe env parsing, proper exception logging
- phase_config.py: guard phaseCustomAgents dict type
- env_detector.py: fix config value shadowing by reordering parsers
- database_detector.py: guard empty string indexing
- framework_analyzer.py: handle path traversal in .sln parsing
- Tests: fix sys.modules teardown, unused variables, clarify test markers
Frontend (TypeScript/React):
- customer-github-handlers.ts: move imports to top, validate IPC params
- settings-handlers.ts: normalize token checks, unify token-source logic
- project-handlers.ts: validate project path before mkdirSync
- memory-handlers.ts: warn on heuristic embedding dimension fallback
- project-context-handlers.ts: use i18n keys for progress messages
- CustomerReposModal.tsx: aria-label, Tailwind class instead of inline style
- ProjectIndexTab.tsx: guard division by zero in progress bars
- GitHubPRs.tsx: clamp negative timestamps, guard stale PR detail
- useMultiRepoGitHubIssues/PRs: async cancellation guards
- Sidebar.tsx: path normalization for Windows, wire AddCustomerModal trigger
- SortableProjectTab.tsx: fix invisible keyboard focus target
- i18n: singular/plural repo count (en + fr)
- ipc.ts: add projectId to onIndexProgress payload
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Backend:
- port_detector.py: handle --port=<value> equals form in regex
- project_analyzer_module.py: don't early-return on single .sln, check for sibling services
- custom_agents.py: use sanitize_thinking_level(), include root-level agents in catalog, extract _format_category_name() helper
- test_integration_graphiti.py: clarify these are unit-level tests with mocked deps
Frontend:
- claude-mcp-handlers.ts: convert sync fs calls to async, validate header values
- App.tsx: keep GitHub setup modal open on save failure
- AddCustomerModal.tsx: handle registration failures explicitly
- GitHubIssues.tsx: additional fixes
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- AgentTools.tsx: use path aliases, translate health-check message, simplify useEffect deps
- AgentProfileSettings.tsx: constrain catalog panel height with max-h and overflow
- GitHubIntegration.tsx: use cn() for conditional Tailwind class composition
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- AgentTools.tsx: use @/ path aliases, translate health-check fallback message, simplify useEffect dependency array
- browser-mock.ts: remove redundant `as const` assertion
- settings.json (en/fr): add healthCheckFailed translation key
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 31
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
apps/frontend/src/renderer/components/AgentTools.tsx (1)
1020-1029: ⚠️ Potential issue | 🟡 Minor
Translate hardcoded health-check fallback message.
Line 1026 contains the same hardcoded 'Health check failed' string that should use an i18n translation key for consistency.
Proposed fix
```diff
 } catch (_error) {
   setServerHealthStatus(prev => ({
     ...prev,
     [server.id]: {
       serverId: server.id,
       status: 'unknown',
-      message: 'Health check failed',
+      message: t('settings:mcp.healthCheckFailed'),
       checkedAt: new Date().toISOString(),
     }
   }));
 }
```
As per coding guidelines: "All frontend user-facing text must use react-i18next translation keys."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/AgentTools.tsx` around lines 1020-1029: the catch block that calls setServerHealthStatus currently sets message: 'Health check failed' as a hardcoded string; replace that literal with a react-i18next translation (e.g., t('health_check.failed')) and ensure the AgentTools component has access to the t function from useTranslation (import useTranslation and call const { t } = useTranslation() at component scope). Update the translation JSON keys (e.g., "health_check.failed") in your locales so the message is localized. Ensure server.id and other fields remain unchanged when setting state.
apps/frontend/src/main/ipc-handlers/context/project-context-handlers.ts (1)
412-463: 🧹 Nitpick | 🔵 Trivial
Consider extracting shared analyzer spawn logic.
The spawn/timeout pattern is duplicated between `refreshChildIndex` (lines 78-130) and this block. Extracting a shared helper would reduce duplication and ensure consistent behavior (including the cross-platform process termination fix).
♻️ Suggested extraction
```typescript
async function runAnalyzer(
  projectPath: string,
  projectName: string,
  analyzerPath: string,
  indexOutputPath: string,
  timeoutMs: number = 120_000
): Promise<void> {
  const pythonCmd = getConfiguredPythonPath();
  const [pythonCommand, pythonBaseArgs] = parsePythonCommand(pythonCmd);

  return new Promise<void>((resolve, reject) => {
    let stdout = '';
    let stderr = '';

    const proc = spawn(pythonCommand, [
      ...pythonBaseArgs,
      analyzerPath,
      '--project-dir', projectPath,
      '--output', indexOutputPath
    ], {
      cwd: projectPath,
      env: { ...getAugmentedEnv(), PYTHONIOENCODING: 'utf-8', PYTHONUTF8: '1' }
    });

    const timeout = setTimeout(() => {
      debugLog(`[project-context] Analyzer (${projectName}) timed out after ${timeoutMs}ms`);
      proc.kill('SIGTERM');
      reject(new Error(`Analyzer timed out after ${timeoutMs / 1000}s`));
    }, timeoutMs);

    proc.stdout?.on('data', (data) => { stdout += data.toString('utf-8'); });
    proc.stderr?.on('data', (data) => { stderr += data.toString('utf-8'); });

    proc.on('close', (code: number) => {
      clearTimeout(timeout);
      if (code === 0) {
        debugLog(`[project-context] Analyzer (${projectName}) stdout:`, stdout);
        resolve();
      } else {
        debugLog(`[project-context] Analyzer (${projectName}) failed with code`, code);
        reject(new Error(`Analyzer exited with code ${code}: ${stderr || stdout}`));
      }
    });

    proc.on('error', (err) => {
      clearTimeout(timeout);
      reject(err);
    });
  });
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/main/ipc-handlers/context/project-context-handlers.ts` around lines 412-463: the analyzer spawn/timeout logic is duplicated between refreshChildIndex and this block; extract it into a shared async helper (e.g., runAnalyzer) that accepts projectPath, projectName, analyzerPath, indexOutputPath, and timeoutMs, and internally calls getConfiguredPythonPath/parsePythonCommand, uses getAugmentedEnv, collects stdout/stderr, sets the timeout, and handles proc.on('close') and proc.on('error'); ensure the helper implements the same cross-platform termination behavior (use proc.kill('SIGTERM') and clearTimeout on all exits) and replace both refreshChildIndex and the existing inline Promise with calls to runAnalyzer to remove duplication and keep behavior consistent.
apps/frontend/src/renderer/components/SortableProjectTab.tsx (1)
37-44: ⚠️ Potential issue | 🟠 Major
Use `setActivatorNodeRef` on the drag handle and consolidate activator properties to prevent mobile drag regression.
Line 97 hides the drag handle ('hidden sm:block'), which is the only element with `listeners`, so below `sm` there's no DOM node to initiate drag. Additionally, `attributes` is on the wrapper (line 67) while `listeners` is on the handle (line 92), splitting the activator semantics. DnD Kit's drag handle pattern requires `setActivatorNodeRef`, `attributes`, and `listeners` all on the same element (the handle) to properly support keyboard focus restoration and touch/mouse activation.
Move `setActivatorNodeRef`, `attributes`, and `listeners` to the drag handle element, and remove the `hidden` class so the handle is always interactive (visibility can be controlled via opacity).
Suggested direction
```diff
 const {
   attributes,
   listeners,
   setNodeRef,
+  setActivatorNodeRef,
   transform,
   transition,
   isDragging
 } = useSortable({ id: project.id });

 return (
   <div
     ref={setNodeRef}
     style={style}
     className={cn(
       'group relative flex items-center min-w-0',
       isActive
         ? 'max-w-[180px] sm:max-w-[220px] md:max-w-[280px]'
         : 'max-w-[120px] sm:max-w-[160px] md:max-w-[200px]',
       'border-r border-border last:border-r-0',
       'touch-none transition-all duration-200',
       isDragging && 'opacity-60 scale-[0.98] shadow-lg'
     )}
-    {...attributes}
   >
     <Tooltip delayDuration={200}>
       <TooltipTrigger asChild>
         <div
           className={cn(
             'flex-1 flex items-center gap-1 sm:gap-2',
             'px-2 sm:px-3 md:px-4 py-2 sm:py-2.5',
             'text-xs sm:text-sm',
             'min-w-0 truncate hover:bg-muted/50 transition-colors',
             'border-b-2 border-transparent cursor-pointer',
             isActive && [
               'bg-background border-b-primary text-foreground',
               'hover:bg-background'
             ],
             !isActive && [
               'text-muted-foreground',
               'hover:text-foreground'
             ]
           )}
           onClick={onSelect}
         >
           {/* Drag handle */}
           <div
+            ref={setActivatorNodeRef}
+            {...attributes}
             {...listeners}
             role="button"
             aria-label={t('projectTab.dragHandle', { name: project.name })}
             tabIndex={isActive ? 0 : -1}
             className={cn(
-              'hidden sm:block',
+              'block',
               'cursor-grab active:cursor-grabbing',
               'w-1 h-4 rounded-full flex-shrink-0 transition-all',
               isActive
                 ? 'opacity-100 bg-primary'
-                : 'opacity-0 group-hover:opacity-60 bg-muted-foreground'
+                : 'opacity-0 sm:group-hover:opacity-60 bg-muted-foreground'
             )}
           />
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/SortableProjectTab.tsx` around lines 37-44: the drag handle currently misuses useSortable by placing attributes on the wrapper and listeners on a hidden handle, causing mobile/keyboard regressions; change the drag-handle element to receive setActivatorNodeRef, attributes, and listeners from useSortable (instead of setNodeRef/attributes on the wrapper) and remove the 'hidden sm:block' class so the activator exists on small screens (control visibility via opacity/CSS rather than display:none); keep setNodeRef on the root sortable item for overall positioning/transform and use setActivatorNodeRef on the visible handle to consolidate activator semantics for useSortable (id: project.id).
apps/frontend/src/renderer/components/Sidebar.tsx (1)
261-287: ⚠️ Potential issue | 🟠 Major
Reset the Git setup modal when the selected project no longer needs it.
Lines 265-267 clear only `gitStatus` for customer projects, and line 276 only ever opens the modal. If the user switches from a non-git repo (modal open) to a customer project, a repo with commits, or no project, the stale Git setup modal remains open. Additionally, line 287 only tracks `selectedProjectId`, so changing the selected project's `path` or `type` in place will not rerun the effect, leaving git state stale.
Proposed fix
```diff
 useEffect(() => {
+  let cancelled = false;
+
   const checkGit = async () => {
-    if (selectedProject) {
-      // Customer folders don't require git
-      if (selectedProject.type === 'customer') {
-        setGitStatus(null);
-        return;
-      }
-      try {
-        const result = await window.electronAPI.checkGitStatus(selectedProject.path);
-        if (result.success && result.data) {
-          setGitStatus(result.data);
-          // Show git setup modal if project is not a git repo or has no commits
-          // but only if user hasn't already dismissed it for this project
-          if ((!result.data.isGitRepo || !result.data.hasCommits) && !gitModalDismissedRef.current.has(selectedProject.id)) {
-            setShowGitSetupModal(true);
-          }
-        }
-      } catch (error) {
-        console.error('Failed to check git status:', error);
-      }
-    } else {
+    if (!selectedProject || selectedProject.type === 'customer') {
       setGitStatus(null);
+      setShowGitSetupModal(false);
+      return;
+    }
+
+    try {
+      const result = await window.electronAPI.checkGitStatus(selectedProject.path);
+      if (cancelled) return;
+
+      if (result.success && result.data) {
+        setGitStatus(result.data);
+        const needsGitSetup = !result.data.isGitRepo || !result.data.hasCommits;
+        setShowGitSetupModal(
+          needsGitSetup && !gitModalDismissedRef.current.has(selectedProject.id)
+        );
+      } else {
+        setShowGitSetupModal(false);
+      }
+    } catch (error) {
+      console.error('Failed to check git status:', error);
+      setShowGitSetupModal(false);
     }
   };
+
   checkGit();
-}, [selectedProjectId]);
+  return () => {
+    cancelled = true;
+  };
+}, [selectedProject?.id, selectedProject?.path, selectedProject?.type]);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/Sidebar.tsx` around lines 261 - 287, The effect that checks Git status (useEffect -> checkGit) only clears gitStatus for customer projects and may leave the Git setup modal open when switching to a customer project, a repo with commits, or no project; also the dependency array only tracks selectedProjectId so changes to selectedProject.path/type won't retrigger the check. Fix by ensuring when selectedProject is falsy or selectedProject.type === 'customer' you both setGitStatus(null) and setShowGitSetupModal(false) (and optionally clear any related state like gitModalDismissedRef if needed), and update the effect dependencies to include the selectedProject object (or at minimum selectedProject?.path and selectedProject?.type along with selectedProjectId) so the checkGit() runs whenever the project's path/type change; keep the existing logic that opens the modal only when (!result.data.isGitRepo || !result.data.hasCommits) and the project hasn't been dismissed.
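The cancellation-flag pattern used in the fix above can be sketched standalone. `startStatusCheck`, `fetchStatus`, and `setState` are hypothetical stand-ins for the effect body, the async IPC call, and React state respectively:

```typescript
// Standalone sketch of the cancellation-flag pattern: every state update
// after an await is guarded, and the returned cleanup flips the flag.
type GitCheck = { ok: boolean; error?: string };

function startStatusCheck(
  fetchStatus: () => Promise<GitCheck>,
  setState: (s: GitCheck) => void
): () => void {
  let cancelled = false;

  void (async () => {
    const result = await fetchStatus();
    if (cancelled) return; // guard EVERY branch that touches state
    setState(result);
  })();

  // Returned like a useEffect cleanup: runs on unmount or project change.
  return () => {
    cancelled = true;
  };
}
```

Calling the returned cleanup before the promise settles prevents the late `setState`, which is exactly what the effect needs when the selected project changes mid-flight.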
♻️ Duplicate comments (20)
apps/frontend/src/renderer/components/CustomerReposModal.tsx (1)
136-145: ⚠️ Potential issue | 🟡 Minor
Use an i18n translation key for the `aria-label` attribute.
The `aria-label="Clear search"` on line 140 is a hardcoded English string. Screen reader labels are user-facing text and must use translation keys per coding guidelines.
♿ Suggested fix
```diff
 {search && (
   <button
     type="button"
     onClick={() => setSearch('')}
-    aria-label="Clear search"
+    aria-label={t('customerRepos.clearSearch')}
     className="absolute right-3 top-1/2 -translate-y-1/2 text-muted-foreground hover:text-foreground"
   >
     <X className="h-4 w-4" />
   </button>
 )}
```
Ensure `customerRepos.clearSearch` is added to both `en/*.json` and `fr/*.json` translation files. As per coding guidelines, "All frontend user-facing text must use react-i18next translation keys."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/CustomerReposModal.tsx` around lines 136-145: replace the hardcoded aria-label in CustomerReposModal's clear button with a react-i18next translation key: use the useTranslation hook and call t('customerRepos.clearSearch') for the aria-label on the button (the component and element to change are in CustomerReposModal.tsx where the button with onClick={() => setSearch('')} and <X /> lives); also add the key "customerRepos.clearSearch" with appropriate English and French values to both en/*.json and fr/*.json translation files so the label is localized for screen readers.
apps/frontend/src/renderer/components/GitHubIssues.tsx (1)
257-257: ⚠️ Potential issue | 🔴 Critical
Map lookup uses wrong key type/format and will never match.
The `issueToTaskMap` uses string keys like "owner/repo#123" or "#123", but the lookup passes `selectedIssue.number` (a plain number). This causes both a TypeScript type mismatch and a runtime bug where linked tasks are never found.
🐛 Proposed fix to use repo-scoped key for lookup
```diff
-linkedTaskId={issueToTaskMap.get(selectedIssue.number)}
+linkedTaskId={issueToTaskMap.get(
+  selectedIssue.repoFullName
+    ? `${selectedIssue.repoFullName}#${selectedIssue.number}`
+    : `#${selectedIssue.number}`
+)}
```
Alternatively, extract the key computation into a helper to keep it DRY with lines 145-147:
```typescript
const getIssueTaskKey = (repo: string | undefined, issueNumber: number): string =>
  repo ? `${repo}#${issueNumber}` : `#${issueNumber}`;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/GitHubIssues.tsx` at line 257: the lookup into issueToTaskMap is using selectedIssue.number (a number) but the map keys are strings like "owner/repo#123" or "#123", so change the lookup to use the same string key format used when populating the map; add or reuse a helper (e.g., getIssueTaskKey(repo, issueNumber)) that returns repo ? `${repo}#${issueNumber}` : `#${issueNumber}` and call it where linkedTaskId is computed (replace issueToTaskMap.get(selectedIssue.number) with issueToTaskMap.get(getIssueTaskKey(selectedRepo, selectedIssue.number))), ensuring the helper is also used where the map is populated to keep keys consistent.
apps/frontend/src/main/claude-code-settings/reader.ts (1)
103-104: ⚠️ Potential issue | 🟠 Major
Block prototype-pollution keys in sanitized maps.
The sanitizer functions write untrusted JSON keys into plain `{}` objects without guarding against prototype-pollution keys (__proto__, constructor, prototype). This affects:
- sanitizeMcpServers() at Line 103 (mcpServers map) and Line 140 (headers map)
- sanitizeEnabledPlugins() at Line 200
Use `Object.create(null)` for all sanitized maps and validate keys before assignment.
🔒 Proposed hardening
```diff
+const FORBIDDEN_KEYS = new Set(['__proto__', 'constructor', 'prototype']);
+
+function isSafeKey(key: string): boolean {
+  return !FORBIDDEN_KEYS.has(key);
+}
+
 function sanitizeMcpServers(mcpServers: unknown): Record<string, ClaudeCodeMcpServerConfig> | undefined {
   if (!isPlainObject(mcpServers)) {
     return undefined;
   }

-  const sanitized: Record<string, ClaudeCodeMcpServerConfig> = {};
+  const sanitized: Record<string, ClaudeCodeMcpServerConfig> = Object.create(null);
   let hasValidEntries = false;

   for (const [key, value] of Object.entries(mcpServers)) {
+    if (!isSafeKey(key)) {
+      debugLog(`${LOG_PREFIX} Skipping unsafe mcpServers key:`, key);
+      continue;
+    }
     if (!isPlainObject(value)) {
```
Apply the same pattern to headers (Line 140) and sanitizeEnabledPlugins (Line 200).
Also applies to: 140-140, 200-200
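A runnable reduction of this hardening; `sanitizeMap` is a generic stand-in for the real sanitizers (which type their values as `ClaudeCodeMcpServerConfig`):

```typescript
// Minimal sketch: skip prototype-pollution keys and write into a
// null-prototype map so there is no prototype chain to pollute.
const FORBIDDEN_KEYS = new Set(['__proto__', 'constructor', 'prototype']);

function sanitizeMap(input: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = Object.create(null); // no prototype
  for (const [key, value] of Object.entries(input)) {
    if (FORBIDDEN_KEYS.has(key)) continue; // drop dangerous keys
    out[key] = value;
  }
  return out;
}

// JSON.parse defines "__proto__" as an ordinary own property, so it shows
// up in Object.entries and gets filtered here:
const untrusted = JSON.parse('{"__proto__": {"polluted": true}, "server-a": {"url": "x"}}');
const safe = sanitizeMap(untrusted);
console.log(Object.keys(safe)); // ["server-a"]
```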
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/main/claude-code-settings/reader.ts` around lines 103-104: the sanitized maps in sanitizeMcpServers() and sanitizeEnabledPlugins() currently use plain {} which allows prototype-pollution via keys like "__proto__", "constructor", or "prototype"; change the maps (the local variables currently named sanitized/mcpServers, headers, and enabledPlugins) to be created with Object.create(null) and, before assigning any untrusted key into these maps inside sanitizeMcpServers() and sanitizeEnabledPlugins(), validate the key against a deny-list (at minimum "__proto__", "constructor", "prototype") or a safe-key regex and skip/ignore any disallowed keys so they are never written into the prototype chain.
apps/frontend/src/renderer/components/AddCustomerModal.tsx (1)
44-48: ⚠️ Potential issue | 🟠 Major
Handle project registration failures explicitly.
When `window.electronAPI.addProject()` fails, `registerAndInitCustomer()` returns silently without surfacing any error to the user. Both handleOpenExisting and handleCreateFolder call this function, but neither will display an error if registration fails since no exception is thrown.
🔧 Suggested fix
```diff
 const registerAndInitCustomer = async (path: string) => {
   // Pass type: 'customer' through IPC so it's persisted to disk (projects.json)
   const result = await window.electronAPI.addProject(path, 'customer');
-  if (!result.success || !result.data) return;
+  if (!result.success || !result.data) {
+    throw new Error(result.error || t('addCustomer.failedToCreate'));
+  }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/AddCustomerModal.tsx` around lines 44-48: registerAndInitCustomer currently swallows failures from window.electronAPI.addProject, causing handleOpenExisting and handleCreateFolder to return silently; modify registerAndInitCustomer to explicitly handle unsuccessful results by either throwing an Error or returning a rejected Promise with a descriptive message, and ensure callers (handleOpenExisting and handleCreateFolder) catch that rejection and surface it to the user (e.g., via the existing UI error state / notification mechanism or by setting an error message in component state). Locate registerAndInitCustomer, window.electronAPI.addProject, handleOpenExisting, and handleCreateFolder and add explicit error propagation and user-facing reporting rather than a silent return.
apps/frontend/src/renderer/components/github-issues/hooks/useMultiRepoGitHubIssues.ts (1)
70-76: ⚠️ Potential issue | 🟡 Minor
Missing cancellation check in the else branch.
The `else` branch at line 70 sets state without first checking `if (cancelled) return`, which is inconsistent with the pattern used elsewhere in this effect (lines 62, 78). If the component unmounts or `customerId` changes during the async call, this branch could still update state.
🔧 Suggested fix
```diff
 } else {
+  if (cancelled) return;
   setState(prev => ({
     ...prev,
     syncStatus: { connected: false, repos: [], error: result.error },
     error: result.error || t('issues.multiRepo.failedToCheckConnection'),
   }));
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/github-issues/hooks/useMultiRepoGitHubIssues.ts` around lines 70-76: the else branch that calls setState(...) should guard against updates after unmount by checking the same cancellation flag used elsewhere; before calling setState in the else branch (the block handling a failed connection result inside the async effect in useMultiRepoGitHubIssues), add "if (cancelled) return" (or the existing cancellation variable check) to avoid updating state after the component is unmounted or customerId changes.
apps/frontend/src/renderer/components/github-prs/GitHubPRs.tsx (1)
491-517: ⚠️ Potential issue | 🟡 Minor — Strengthen the guard to verify the PR belongs to the resolved child project.
The current check `multiRepoSelectedNumber === selectedPR.number` only validates the PR number, but different repos can have PRs with the same number. During transitions when `resolvedChildProjectId` updates but `singleRepo.selectedPR` hasn't reloaded yet, this could briefly pair a stale PR from the wrong repo with the new project context.
🛡️ Proposed additional guard
   fullPRDetail={
-    resolvedChildProjectId && selectedPR && multiRepoSelectedNumber === selectedPR.number ? (
+    resolvedChildProjectId && selectedPR &&
+    multiRepoSelectedNumber === selectedPR.number &&
+    multiRepo.selectedPR?.repoFullName === selectedPR.repoFullName ? (
       <PRDetail
         pr={selectedPR}
         projectId={resolvedChildProjectId}

This adds a check that the selected PR's repo matches the multi-repo selection's repo, ensuring the detail view is only rendered when both contexts are in sync.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/github-prs/GitHubPRs.tsx` around lines 491 - 517, The conditional rendering for fullPRDetail in the PR list only checks PR number (multiRepoSelectedNumber === selectedPR.number) which can match across different repos; update the guard inside the fullPRDetail conditional (around resolvedChildProjectId, selectedPR and multiRepoSelectedNumber) to also verify the selected PR belongs to the same repo as the multi-repo selection — e.g., compare a unique repo identifier on selectedPR (repositoryId, repoFullName, or owner/name) against the multi-repo selected repo id/name (or resolvedChildProjectId if that maps to the repo) so the detail view is only shown when both PR number and repo match. Ensure you use the existing symbols selectedPR, resolvedChildProjectId and multiRepoSelectedNumber to locate and change the check.
apps/frontend/src/main/ipc-handlers/memory-handlers.ts (1)
74-112: ⚠️ Potential issue | 🟠 Major — Fail fast for unknown models instead of returning guessed dimensions.
The `'fallback'` path still lets unknown models succeed with a guessed vector size. That can surface later as embedding-dimension mismatches, and this concern was already raised in a prior review.
🛠️ Proposed fix
-function lookupEmbeddingDim(modelName: string): { dim: number; source: 'known' | 'fallback' } | null {
+function lookupEmbeddingDim(modelName: string): { dim: number; source: 'known' } | null {
   const nameLower = modelName.toLowerCase();
   ...
-  // Heuristic fallback based on name patterns.
-  // WARNING: These are guesses and may be incorrect for unknown models.
-  // The 'fallback' source flag allows callers to surface this uncertainty.
-  if (nameLower.includes('large')) {
-    console.warn(`[OllamaEmbedding] Using heuristic dimension guess (1024) for unknown model: ${modelName}`);
-    return { dim: 1024, source: 'fallback' };
-  }
-  if (nameLower.includes('base')) {
-    console.warn(`[OllamaEmbedding] Using heuristic dimension guess (768) for unknown model: ${modelName}`);
-    return { dim: 768, source: 'fallback' };
-  }
-  if (nameLower.includes('small') || nameLower.includes('mini')) {
-    console.warn(`[OllamaEmbedding] Using heuristic dimension guess (384) for unknown model: ${modelName}`);
-    return { dim: 384, source: 'fallback' };
-  }
   return null;
 }

Also narrow the IPC payload type at Line 959 to remove `'fallback'`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/main/ipc-handlers/memory-handlers.ts` around lines 74 - 112, Update lookupEmbeddingDim to fail fast for unknown models: remove the heuristic "fallback" branches and their console warnings so the function only returns known dimensions (source: 'known') or null; callers should handle null as an unknown/unsupported model. Also update the IPC payload type that currently allows a 'fallback' source (the IPC message type that carries lookupEmbeddingDim results) to exclude 'fallback' so its source can only be 'known' (or the payload allows null/unsupported), and adjust any call sites that assumed fallback to handle null/unsupported appropriately.
apps/backend/agents/custom_agents.py (1)
155-171: ⚠️ Potential issue | 🟡 Minor — Root-level agents are excluded from `load_all_agents()` but included in `load_custom_agent()`.
`load_custom_agent()` searches both category subdirectories (lines 139-144) and the root agents directory (lines 147-149), but `load_all_agents()` only iterates subdirectories. This means an agent at `~/.claude/agents/my-agent.md` can be loaded by ID but won't appear in the full agent list or catalog.
🔧 Suggested fix
 def load_all_agents() -> list[CustomAgentConfig]:
     """Load all custom agents from all categories in ~/.claude/agents/."""
     agents_dir = get_agents_dir()
     if not agents_dir.exists():
         return []

     agents = []
+    # Load root-level agents first
+    for agent_file in sorted(agents_dir.glob("*.md")):
+        if agent_file.name == "README.md":
+            continue
+        agent = parse_agent_file(agent_file)
+        if agent:
+            agents.append(agent)
+
+    # Then load from category subdirectories
     for category_dir in sorted(agents_dir.iterdir()):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/agents/custom_agents.py` around lines 155 - 171, load_all_agents currently only iterates category subdirectories and therefore omits root-level agent files that load_custom_agent can find; update load_all_agents (which uses get_agents_dir and parse_agent_file) to also scan agents_dir.glob("*.md") (skipping README.md) and parse/append those files before or after iterating category_dir files so root-level agents are included in the returned list.
apps/backend/analysis/analyzers/route_detector.py (2)
650-656: ⚠️ Potential issue | 🟠 Major — Minimal API auth detection can miss `.RequireAuthorization()` when lambdas contain internal semicolons.
Line 651 uses `content.find(";", match.end())` to find the statement end, but block lambdas like `() => { var a = 1; return a; }` contain internal semicolons. This causes the search to stop prematurely, missing chained `.RequireAuthorization()` calls.
🛠️ Suggested fix
Implement a bracket-aware statement end finder:
def _find_statement_end(self, content: str, start: int, lookahead: int = 1200) -> int:
    """Find statement end, accounting for nested braces/parens in lambdas."""
    end_limit = min(len(content), start + lookahead)
    paren = brace = bracket = 0
    in_str: str | None = None
    i = start
    while i < end_limit:
        ch = content[i]
        prev = content[i - 1] if i > 0 else ""
        if in_str:
            if ch == in_str and prev != "\\":
                in_str = None
        else:
            if ch in {"'", '"'}:
                in_str = ch
            elif ch == "(":
                paren += 1
            elif ch == ")":
                paren = max(0, paren - 1)
            elif ch == "{":
                brace += 1
            elif ch == "}":
                brace = max(0, brace - 1)
            elif ch == ";":
                if paren == 0 and brace == 0 and bracket == 0:
                    return i + 1
        i += 1
    return end_limit
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/analysis/analyzers/route_detector.py` around lines 650 - 656, The current detection in route_detector.py stops at the first semicolon after match (using content.find) which breaks when lambdas contain internal semicolons; replace that logic by implementing a bracket- and string-aware helper (e.g., _find_statement_end) that scans from match.end(), tracks paren/brace/bracket nesting and ignores semicolons inside strings or nested blocks, returns the correct statement end (or a lookahead limit) and then use it to build route_stmt and compute requires_auth (instead of the simple content.find result) so chained .RequireAuthorization(...) is reliably detected.
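To sanity-check the approach, here is a standalone, simplified version of the scanner above (no `self`, bracket counter dropped), exercised against a minimal-API registration whose lambda body contains internal semicolons. The C# sample string is illustrative only.

```python
def find_statement_end(content: str, start: int, lookahead: int = 1200) -> int:
    """Scan forward from `start`, returning the index just past the first
    semicolon that sits outside any string, paren, or brace nesting."""
    end_limit = min(len(content), start + lookahead)
    paren = brace = 0
    in_str = None
    for i in range(start, end_limit):
        ch = content[i]
        prev = content[i - 1] if i > 0 else ""
        if in_str:
            if ch == in_str and prev != "\\":
                in_str = None
        elif ch in {"'", '"'}:
            in_str = ch
        elif ch == "(":
            paren += 1
        elif ch == ")":
            paren = max(0, paren - 1)
        elif ch == "{":
            brace += 1
        elif ch == "}":
            brace = max(0, brace - 1)
        elif ch == ";" and paren == 0 and brace == 0:
            return i + 1
    return end_limit

source = 'app.MapGet("/a", () => { var a = 1; return a; }).RequireAuthorization();'
naive_end = source.find(";", 0) + 1       # stops at the semicolon inside the lambda
aware_end = find_statement_end(source, 0)  # reaches the real statement terminator
print(".RequireAuthorization" in source[:naive_end])  # False
print(".RequireAuthorization" in source[:aware_end])  # True
```

The naive `find(";")` slice misses the chained `.RequireAuthorization()` call; the nesting-aware scan includes it.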
484-589: ⚠️ Potential issue | 🟠 Major — Controller context is only extracted from the first class, causing route misattribution in multi-controller files.
Line 484 uses `re.search()`, which captures only the first `class` declaration, but line 521 iterates over all `[Http*]` attributes in the entire file. This means routes in the second and subsequent controllers will incorrectly use the first controller's route prefix and authorization rules.
🛠️ Suggested approach
Find all class declarations using `re.finditer()`, compute each class's span, then associate each `[Http*]` match with the nearest preceding class by comparing positions:

# Find all controller classes with their spans
class_pattern = re.compile(r"class\s+(\w+)")
classes = [(m.start(), m.group(1)) for m in class_pattern.finditer(content)]

# For each Http* match, find which class it belongs to
for match in http_method_pattern.finditer(content):
    # Find the class this method belongs to (largest start pos < match.start())
    owning_class = None
    for class_start, class_name in classes:
        if class_start < match.start():
            owning_class = (class_start, class_name)
    # Use owning_class context for route prefix and auth
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/analysis/analyzers/route_detector.py` around lines 484 - 589, The code currently uses a single class_match (class_match) and shared class_route_prefix/class_authorize while iterating all Http attributes (http_method_pattern), causing later controller methods to inherit the first class's context; fix by finding all classes with re.finditer (e.g., class_pattern = re.compile(r"class\s+(\w+)") and capture their start/end spans and names), then for each http_method_pattern.match determine the owning class by selecting the nearest preceding class start, compute that class's controller_name, class_route_prefix and class_authorize from the owning class's pre-class section (instead of the top-level variables), and then proceed to compute full_path, method_authorize and requires_auth for that specific class before appending to routes (keep usage of _normalize_aspnet_path and existing method_name search logic).apps/frontend/src/renderer/components/settings/AgentProfileSettings.tsx (1)
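A runnable sketch of the nearest-preceding-class association on a made-up two-controller file. The controller names and the simplified regexes below are illustrative, not the detector's actual patterns.

```python
import re

# Hypothetical C# file with two controllers; content is illustrative only.
content = """
[Route("api/users")]
public class UsersController {
    [HttpGet] public IActionResult List() => Ok();
}
[Route("api/orders")]
public class OrdersController {
    [HttpPost] public IActionResult Create() => Ok();
}
"""

class_pattern = re.compile(r"class\s+(\w+)")
http_pattern = re.compile(r"\[Http(Get|Post|Put|Delete)\]")

# Positions of every class declaration, in file order.
classes = [(m.start(), m.group(1)) for m in class_pattern.finditer(content)]

owners = []
for match in http_pattern.finditer(content):
    # Nearest preceding class: the last class whose start is before this attribute.
    owning = None
    for class_start, class_name in classes:
        if class_start < match.start():
            owning = class_name
    owners.append((match.group(1), owning))

print(owners)  # [('Get', 'UsersController'), ('Post', 'OrdersController')]
```

With a single `re.search()` both attributes would be attributed to `UsersController`; walking all class starts assigns `HttpPost` to `OrdersController` correctly.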
386-438: 🧹 Nitpick | 🔵 Trivial — Constrain catalog panel height to avoid viewport overflow.
The expanded agents catalog can become very tall with many categories/agents. Add fixed max-height with internal scrolling to match settings dropdown behavior in this area.
♻️ Proposed fix
 {showAgentsCatalog && (
-  <div className="border-t border-border p-4 space-y-1">
+  <div className="border-t border-border p-4 space-y-1 max-h-60 overflow-y-auto">
     <p className="text-xs text-muted-foreground mb-3">

Based on learnings, "Dropdown components in apps/frontend/src/renderer/components/settings use a fixed max-height (e.g., max-h-60) with internal overflow-y-auto scrolling... If you add new dropdowns in this area, follow the same fixed-height + internal-scroll approach for consistency."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/settings/AgentProfileSettings.tsx` around lines 386 - 438, The agents catalog panel can grow beyond the viewport; wrap the category list (the JSX block rendered when showAgentsCatalog is true — currently the div with className "border-t border-border p-4 space-y-1") or the inner container that maps agentsInfo.categories with a fixed max-height and internal scrolling (e.g., add a utility class like max-h-60 and overflow-y-auto) so the expandedAgentCategories toggle still works but the panel scrolls internally; update the element around agentsInfo.categories mapping in AgentProfileSettings (the section using expandedAgentCategories and setExpandedAgentCategories) to include the max-height + overflow-y-auto classes for consistent dropdown behavior.
apps/backend/analysis/analyzers/database_detector.py (1)
477-488: ⚠️ Potential issue | 🟠 Major — Still exclude entity-reference navigations before the PascalCase fallback.
This is the same gap raised earlier and it still looks unresolved: `Customer Customer { get; set; }` passes the uppercase-type filter and is emitted as a scalar field. That overstates columns for EF Core models with non-virtual reference navigations.
Proposed fix
         is_nullable = raw_type.endswith("?")
         clean_type = raw_type.rstrip("?")
         if not clean_type:
             continue
+        if clean_type in known_entities and not prop_name.endswith(
+            ("Id", "ID")
+        ):
+            continue
+
         # Only include properties with recognized types or common patterns
         if clean_type not in csharp_types and (
             not clean_type or not clean_type[0].isupper()
         ):
             continue

Read-only verification
#!/bin/bash
python - <<'PY'
from pathlib import Path
import re

excluded = {"bin", "obj", "node_modules", ".git", "TestResults"}
cs_files = [
    p for p in Path(".").rglob("*.cs")
    if not any(part in excluded for part in p.parts)
]

dbset_pattern = re.compile(r"DbSet<(\w+)>\s+(\w+)")
config_pattern = re.compile(r"IEntityTypeConfiguration<(\w+)>")
prop_pattern = re.compile(
    r"public\s+(virtual\s+)?"
    r"([\w<>,?\[\]\s]+?)\s+"
    r"(\w+)\s*\{\s*get;\s*set;\s*\}"
)

known_entities = set()
for path in cs_files:
    try:
        text = path.read_text(encoding="utf-8")
    except (OSError, UnicodeDecodeError):
        continue
    known_entities.update(m.group(1) for m in dbset_pattern.finditer(text))
    known_entities.update(m.group(1) for m in config_pattern.finditer(text))

for path in cs_files:
    try:
        text = path.read_text(encoding="utf-8")
    except (OSError, UnicodeDecodeError):
        continue
    for match in prop_pattern.finditer(text):
        is_virtual = match.group(1) is not None
        clean_type = match.group(2).strip().rstrip("?")
        prop_name = match.group(3)
        if not is_virtual and clean_type in known_entities and not prop_name.endswith(("Id", "ID")):
            line = text.count("\n", 0, match.start()) + 1
            print(f"{path}:{line}: public {clean_type} {prop_name} {{ get; set; }}")
PY

Expected result: any output is currently being misclassified as a scalar field.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/analysis/analyzers/database_detector.py` around lines 477 - 488, The parser currently treats uppercase type names like "Customer Customer { get; set; }" as scalar because the PascalCase fallback runs before excluding EF reference navigations; modify the logic around raw_type/is_nullable/clean_type (and before the PascalCase fallback using csharp_types) to skip non-virtual properties whose clean_type matches a discovered EF entity type (use the module's known entity set—e.g., known_entities or entity_type_names populated from DbSet<> and IEntityTypeConfiguration detections) and whose property name does not end with "Id" or "ID"; in short, add a guard that if not is_virtual and clean_type in known_entities and not prop_name.endswith(("Id","ID")) then continue so reference navigations are not emitted as scalar fields.apps/frontend/src/main/ipc-handlers/github/customer-github-handlers.ts (1)
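A self-contained illustration of the proposed guard, using made-up property tuples and entity names rather than the detector's real parsing state:

```python
# Sketch of the guard: skip non-virtual reference navigations whose type is a
# known EF entity, unless the property name looks like a foreign-key column.
known_entities = {"Customer", "Order"}
csharp_types = {"int", "string", "DateTime", "decimal"}

properties = [
    ("int", "Id"),
    ("string", "Name"),
    ("Customer", "Customer"),  # reference navigation -> should be skipped
    ("int", "CustomerId"),     # FK scalar -> should be kept
    ("DateTime?", "CreatedAt"),
]

columns = []
for raw_type, prop_name in properties:
    clean_type = raw_type.rstrip("?")
    if not clean_type:
        continue
    # New guard: entity-typed navigations are not scalar columns.
    if clean_type in known_entities and not prop_name.endswith(("Id", "ID")):
        continue
    # Existing PascalCase fallback: keep known C# types or uppercase-leading names.
    if clean_type not in csharp_types and not clean_type[0].isupper():
        continue
    columns.append(prop_name)

print(columns)  # ['Id', 'Name', 'CustomerId', 'CreatedAt']
```

Without the new guard, `Customer` would survive the PascalCase fallback and be emitted as a scalar column.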
193-195: ⚠️ Potential issue | 🟡 Minor — Fractional IPC inputs still pass validation.
Line 193 and Line 287 only check for finite positive numbers, so `page = 1.5` and `issueNumber = 7.2` still produce malformed GitHub endpoints. Tighten both paths to `Number.isInteger(...) && > 0` before building the request.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/main/ipc-handlers/github/customer-github-handlers.ts` around lines 193 - 195, The current validation allows fractional values like page = 1.5 and issueNumber = 7.2; update both checks so they only accept positive integers by replacing the loose finite checks with Number.isInteger(page) && page > 0 for the page validation and Number.isInteger(issueNumber) && issueNumber > 0 for the issueNumber validation (locate the checks that currently read typeof ... Number.isFinite(...) and tighten them in the handler that constructs the GitHub endpoints).apps/backend/analysis/analyzers/context/env_detector.py (1)
44-45: ⚠️ Potential issue | 🟠 Major — Placeholder entries still block `launchSettings.json` values.
Line 44/45 seeds `.env.example` and docker-compose placeholders before Line 51 reads `launchSettings.json`, and `_parse_launch_settings()` refuses to overwrite existing keys. If both declare something like `ASPNETCORE_ENVIRONMENT`, the detector keeps `value=None` instead of the concrete local value. Parse those placeholder sources later, or let `launchSettings` replace entries whose current value is `None`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/analysis/analyzers/context/env_detector.py` around lines 44 - 45, The detector seeds placeholders via _parse_env_example(env_vars, required_vars) and _parse_docker_compose(env_vars) before reading launchSettings with _parse_launch_settings(), causing launchSettings values (e.g., ASPNETCORE_ENVIRONMENT) to be blocked when placeholders set value=None; either move the call to _parse_launch_settings() to run before _parse_env_example/_parse_docker_compose or change _parse_launch_settings() to explicitly overwrite existing keys whose current value is None in the env_vars dict; update references to env_vars and required_vars accordingly so launchSettings wins for concrete local values.apps/backend/analysis/analyzers/framework_analyzer.py (1)
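A minimal sketch of the ordering problem and the overwrite-when-`None` fix. The helpers and keys below are simplified stand-ins for the detector's real parsers, not its actual API.

```python
def parse_env_example(env_vars):
    # .env.example only names the variable; its value is a placeholder (None).
    env_vars.setdefault("ASPNETCORE_ENVIRONMENT", None)

def parse_launch_settings(env_vars, *, overwrite_none=False):
    # launchSettings.json carries a concrete local value.
    concrete = {"ASPNETCORE_ENVIRONMENT": "Development"}
    for key, value in concrete.items():
        if key not in env_vars:
            env_vars[key] = value
        elif overwrite_none and env_vars[key] is None:
            env_vars[key] = value  # let concrete values replace placeholders

# Current behavior: the placeholder seeded first blocks the concrete value.
buggy = {}
parse_env_example(buggy)
parse_launch_settings(buggy)
print(buggy)  # {'ASPNETCORE_ENVIRONMENT': None}

# Fixed behavior: launchSettings may replace entries whose value is None.
fixed = {}
parse_env_example(fixed)
parse_launch_settings(fixed, overwrite_none=True)
print(fixed)  # {'ASPNETCORE_ENVIRONMENT': 'Development'}
```

Reordering the parser calls (launchSettings first) achieves the same result without the `overwrite_none` flag.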
733-743: ⚠️ Potential issue | 🔴 Critical — Resolve solution project paths before `relative_to()`.
`Path.relative_to()` is lexical. At Line 742, `self.path / "../Shared/Shared.csproj"` can still yield `../Shared` instead of being rejected, so projects outside the analyzed root get stored in `dotnet_solution` and can be reused by other analyzers. Resolve `csproj_path` first and compare against `self.path.resolve()`.
Suggested change
-            csproj_path = self.path / project_rel_path
+            root = self.path.resolve()
+            csproj_path = (self.path / project_rel_path).resolve()
             if not csproj_path.exists():
                 continue

             # Get the project directory path relative to solution root.
             # Skip projects that resolve outside the solution directory
             # (e.g., via ".." segments in the .sln reference).
             try:
-                project_dir = str(csproj_path.parent.relative_to(self.path))
+                project_dir = str(csproj_path.relative_to(root).parent)
             except ValueError:
                 continue

Run this to confirm the containment check. Expected result: `relative_to(root)` prints a path with `..`, while the resolved path is outside the root.

#!/bin/bash
set -euo pipefail
sed -n '733,743p' apps/backend/analysis/analyzers/framework_analyzer.py
python - <<'PY'
from pathlib import Path
import tempfile

with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    outside = root.parent / "outside-cr"
    outside.mkdir(exist_ok=True)
    csproj = root / ".." / "outside-cr" / "Shared.csproj"
    csproj.touch()
    print("exists:", csproj.exists())
    print("relative_to(root):", csproj.parent.relative_to(root))
    print("resolved inside root:", csproj.resolve().is_relative_to(root.resolve()))
PY
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/analysis/analyzers/framework_analyzer.py` around lines 733 - 743, The code uses csproj_path.parent.relative_to(self.path) which is lexical and can incorrectly accept paths with ".."; change to resolve the csproj path first and perform the containment check against the resolved solution root: compute csproj_resolved = (self.path / project_rel_path).resolve() (or resolve csproj_path then csproj_path.parent.resolve()), compute root_resolved = self.path.resolve(), and then use csproj_resolved.parent.relative_to(root_resolved) (or root_resolved in/is_relative_to) inside the try/except to skip projects that resolve outside the solution; update the references to csproj_path, project_dir and the existing except ValueError branch accordingly so external projects are not added to dotnet_solution.apps/backend/integrations/graphiti/config.py (1)
243-246: ⚠️ Potential issue | 🟡 Minor — Warn when `GRAPHITI_EPISODE_TTL_DAYS` is non-numeric.
A typo like `GRAPHITI_EPISODE_TTL_DAYS=seven` currently disables TTL cleanup silently by falling back to `0`. Please log the raw invalid value in the `except` path before defaulting.
Suggested change
-    try:
-        episode_ttl_days = int(os.environ.get("GRAPHITI_EPISODE_TTL_DAYS", "0"))
+    raw_episode_ttl_days = os.environ.get("GRAPHITI_EPISODE_TTL_DAYS", "0")
+    try:
+        episode_ttl_days = int(raw_episode_ttl_days)
     except ValueError:
+        logger.warning(
+            "Invalid GRAPHITI_EPISODE_TTL_DAYS=%r; defaulting episode_ttl_days to 0",
+            raw_episode_ttl_days,
+        )
         episode_ttl_days = 0

As per coding guidelines, "Check for proper error handling and security considerations."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/integrations/graphiti/config.py` around lines 243 - 246, The try/except around parsing GRAPHITI_EPISODE_TTL_DAYS quietly falls back to 0 on ValueError; update the except block for the episode_ttl_days parsing to log the raw invalid environment value (os.environ.get("GRAPHITI_EPISODE_TTL_DAYS")) and the exception before setting episode_ttl_days = 0 so the bad input is visible in logs; reference the episode_ttl_days variable and GRAPHITI_EPISODE_TTL_DAYS env key and use the module's logger (or logging.warning/error) to record the invalid value and the ValueError.apps/backend/integrations/graphiti/queries_pkg/schema.py (1)
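The suggested logging behavior can be sketched as a small helper; the function name and the injected environ mapping are illustrative, not the module's actual API:

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("graphiti.config")

def parse_ttl_days(environ) -> int:
    """Parse GRAPHITI_EPISODE_TTL_DAYS, logging the raw value on bad input."""
    raw = environ.get("GRAPHITI_EPISODE_TTL_DAYS", "0")
    try:
        return int(raw)
    except ValueError:
        # The raw value is logged so a typo is visible instead of silent.
        logger.warning(
            "Invalid GRAPHITI_EPISODE_TTL_DAYS=%r; defaulting episode_ttl_days to 0",
            raw,
        )
        return 0

print(parse_ttl_days({"GRAPHITI_EPISODE_TTL_DAYS": "7"}))      # 7
print(parse_ttl_days({"GRAPHITI_EPISODE_TTL_DAYS": "seven"}))  # 0, with a warning logged
```

The fallback value is unchanged; only the observability of the failure improves.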
29-32: ⚠️ Potential issue | 🟠 Major — Clamp `GRAPHITI_MAX_RESULTS` to a positive integer.
`0` and negative values still make it through this parser. Downstream, `min(num_results, MAX_CONTEXT_RESULTS)` can then pass a `0`/negative `num_results` into search and effectively disable context retrieval.
Suggested change
 try:
-    MAX_CONTEXT_RESULTS = int(os.getenv("GRAPHITI_MAX_RESULTS", "10"))
+    MAX_CONTEXT_RESULTS = max(1, int(os.getenv("GRAPHITI_MAX_RESULTS", "10")))
 except ValueError:
     MAX_CONTEXT_RESULTS = 10

As per coding guidelines, "Check for proper error handling and security considerations."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/integrations/graphiti/queries_pkg/schema.py` around lines 29 - 32, The env parsing for GRAPHITI_MAX_RESULTS must enforce a positive integer: when reading os.getenv("GRAPHITI_MAX_RESULTS", "10") and converting to int for MAX_CONTEXT_RESULTS, catch ValueError as before but also clamp the parsed value to at least 1 (e.g., MAX_CONTEXT_RESULTS = max(1, parsed_value)) so 0 or negative values cannot propagate; keep the existing default of 10 when parsing fails or env is absent and reference the MAX_CONTEXT_RESULTS symbol in your change.apps/backend/integrations/graphiti/tests/test_integration_graphiti.py (1)
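The clamping pattern in isolation (the helper name is illustrative; the real module computes a module-level constant instead):

```python
import os

def max_context_results(default: int = 10) -> int:
    """Parse GRAPHITI_MAX_RESULTS, clamping to a positive integer."""
    try:
        return max(1, int(os.getenv("GRAPHITI_MAX_RESULTS", str(default))))
    except ValueError:
        return default

os.environ["GRAPHITI_MAX_RESULTS"] = "0"
print(max_context_results())   # 1  (clamped; 0 would disable retrieval downstream)
os.environ["GRAPHITI_MAX_RESULTS"] = "-5"
print(max_context_results())   # 1
os.environ["GRAPHITI_MAX_RESULTS"] = "25"
print(max_context_results())   # 25
del os.environ["GRAPHITI_MAX_RESULTS"]
print(max_context_results())   # 10
```

`max(1, ...)` guarantees every downstream `min(num_results, MAX_CONTEXT_RESULTS)` stays at least 1.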
31-33: ⚠️ Potential issue | 🟠 Major — `integration` marker + autouse mocks leaves both gaps: no live coverage and skipped CI.
Every test in this module replaces `graphiti_core`, so the suite is unit-level. With `pytest.mark.integration`, the default CI selection can skip it entirely while the real LadybugDB/Graphiti path still goes untested.
Also applies to: 40-95
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/integrations/graphiti/tests/test_integration_graphiti.py` around lines 31 - 33, The tests are marked as integration via the module-level pytestmark but they autouse-mock graphiti_core, making them unit tests; remove or replace the module-level pytestmark = [pytest.mark.integration] with a more accurate marker (e.g., pytest.mark.unit or no marker) so CI doesn't skip live coverage, and/or adjust the autouse fixture that mocks graphiti_core (the autouse mock definition referring to graphiti_core) so that tests either actually run integration against LadybugDB/Graphiti or are correctly labeled as unit tests; update the marker declaration and/or the autouse mock fixture to keep test intent and CI selection consistent.apps/backend/integrations/graphiti/tests/test_integration_ollama.py (1)
31-36: ⚠️ Potential issue | 🟠 Major — Don't keep fully mocked tests under `pytest.mark.integration`.
With the current autouse mocks, this module never touches live Ollama/Graphiti behavior. Keeping the marker means the default `-m "not integration"` CI run skips unit-level coverage entirely while still not providing real integration coverage.
Also applies to: 43-93
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/integrations/graphiti/tests/test_integration_ollama.py` around lines 31 - 36, The module-level pytest marker pytestmark = [ pytest.mark.integration ] incorrectly labels fully mocked unit tests as integration tests; remove or change that marker so CI doesn't skip these tests (e.g., delete the pytestmark definition or replace it with pytest.mark.unit), and ensure any truly integration tests that require a live Ollama/Graphiti server are moved to a separate file or given the correct pytest.mark.integration marker (also update the same marker usage in the other block spanning the tests referenced at 43-93).apps/backend/integrations/graphiti/queries_pkg/search.py (1)
64-79: ⚠️ Potential issue | 🟠 Major — Verify the `GraphitiConfig` import here and only mark validation complete after success.
This block imports `graphiti_config`, while the config class is exposed as `integrations.graphiti.config` elsewhere in this PR. If that alias does not exist, Line 66 flips `_dimension_validated` before the `ImportError`, the exception is swallowed, and every later search skips the validator permanently.
#!/bin/bash
set -euo pipefail
fd -HI 'graphiti_config\.py$'
rg -n --type=py 'class GraphitiConfig|from graphiti_config import GraphitiConfig|from integrations\.graphiti\.config import GraphitiConfig|from \.\.?config import GraphitiConfig'

Expected result: either a real `graphiti_config.py` alias exists, or `GraphitiConfig` is only defined/imported under `apps/backend/integrations/graphiti/config.py` / `integrations.graphiti.config`, in which case this import should be updated and `_dimension_validated` should only be set after the lookup succeeds.
As per coding guidelines, "Check for proper error handling and security considerations."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/integrations/graphiti/queries_pkg/search.py` around lines 64 - 79, The code prematurely sets GraphitiSearch._dimension_validated = True before the import/validation can succeed and uses the wrong module path; move the assignment so it only runs after a successful validation, and import GraphitiConfig from the canonical integrations.graphiti.config module (i.e. replace from graphiti_config import GraphitiConfig with from integrations.graphiti.config import GraphitiConfig or the correct package alias), wrap the import and config.get_embedding_dimension() call in the try/except so only on success you set GraphitiSearch._dimension_validated = True, and keep the exception handling but ensure ImportError/AttributeError are logged at debug and do not flip the validated flag.
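The "only latch the flag after success" pattern can be sketched independently of the real `GraphitiConfig`; every name below is a stand-in, and the loader/config objects are fakes:

```python
import logging

logger = logging.getLogger("graphiti.search")

class GraphitiSearch:
    _dimension_validated = False

    @classmethod
    def ensure_dimension_validated(cls, load_config) -> bool:
        if cls._dimension_validated:
            return True
        try:
            config = load_config()            # may raise ImportError
            config.get_embedding_dimension()  # may raise AttributeError
        except (ImportError, AttributeError) as exc:
            logger.debug("Dimension validation skipped: %s", exc)
            return False                      # flag stays False; retried next call
        cls._dimension_validated = True       # latched only after success
        return True

class FakeConfig:
    def get_embedding_dimension(self):
        return 768

def broken_loader():
    raise ImportError("no module named graphiti_config")

print(GraphitiSearch.ensure_dimension_validated(broken_loader))  # False
print(GraphitiSearch._dimension_validated)                       # False (failure did not latch)
print(GraphitiSearch.ensure_dimension_validated(FakeConfig))     # True
print(GraphitiSearch._dimension_validated)                       # True
```

Contrast with the reviewed code, where the flag flips before the import can fail, so one bad import path disables the validator for the process lifetime.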
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: d252083d-ee65-4847-af54-ba54ead4f589
📒 Files selected for processing (37)
apps/backend/agents/custom_agents.py
apps/backend/analysis/analyzers/context/env_detector.py
apps/backend/analysis/analyzers/database_detector.py
apps/backend/analysis/analyzers/framework_analyzer.py
apps/backend/analysis/analyzers/route_detector.py
apps/backend/core/client.py
apps/backend/integrations/graphiti/config.py
apps/backend/integrations/graphiti/queries_pkg/schema.py
apps/backend/integrations/graphiti/queries_pkg/search.py
apps/backend/integrations/graphiti/tests/test_episode_types.py
apps/backend/integrations/graphiti/tests/test_integration_graphiti.py
apps/backend/integrations/graphiti/tests/test_integration_ollama.py
apps/backend/phase_config.py
apps/frontend/src/main/claude-code-settings/reader.ts
apps/frontend/src/main/claude-code-settings/types.ts
apps/frontend/src/main/ipc-handlers/claude-agents-handlers.ts
apps/frontend/src/main/ipc-handlers/context/project-context-handlers.ts
apps/frontend/src/main/ipc-handlers/github/customer-github-handlers.ts
apps/frontend/src/main/ipc-handlers/memory-handlers.ts
apps/frontend/src/main/ipc-handlers/project-handlers.ts
apps/frontend/src/main/ipc-handlers/settings-handlers.ts
apps/frontend/src/renderer/components/AddCustomerModal.tsx
apps/frontend/src/renderer/components/AgentTools.tsx
apps/frontend/src/renderer/components/CustomerReposModal.tsx
apps/frontend/src/renderer/components/EnvConfigModal.tsx
apps/frontend/src/renderer/components/GitHubIssues.tsx
apps/frontend/src/renderer/components/Sidebar.tsx
apps/frontend/src/renderer/components/SortableProjectTab.tsx
apps/frontend/src/renderer/components/context/ProjectIndexTab.tsx
apps/frontend/src/renderer/components/github-issues/components/IssueListItem.tsx
apps/frontend/src/renderer/components/github-issues/hooks/useMultiRepoGitHubIssues.ts
apps/frontend/src/renderer/components/github-prs/GitHubPRs.tsx
apps/frontend/src/renderer/components/github-prs/hooks/useMultiRepoGitHubPRs.ts
apps/frontend/src/renderer/components/settings/AgentProfileSettings.tsx
apps/frontend/src/shared/i18n/locales/en/navigation.json
apps/frontend/src/shared/i18n/locales/fr/navigation.json
apps/frontend/src/shared/types/ipc.ts
// Get filtered issues based on selected repo
// Note: state filtering is already done by the API via the `state` parameter
const getFilteredIssues = useCallback((): GitHubIssue[] => {
  const { issues, selectedRepo } = state;

  // Filter by repo
  if (selectedRepo !== 'all') {
    return issues.filter(issue => issue.repoFullName === selectedRepo);
  }

  return issues;
}, [state]);

const getOpenIssuesCount = useCallback((): number => {
  const { issues, selectedRepo } = state;
  let filtered = issues.filter(issue => issue.state === 'open');
  if (selectedRepo !== 'all') {
    filtered = filtered.filter(issue => issue.repoFullName === selectedRepo);
  }
  return filtered.length;
}, [state]);
🧹 Nitpick | 🔵 Trivial
Overly broad useCallback dependencies cause unnecessary re-creations.
Both getFilteredIssues and getOpenIssuesCount depend on the entire state object. This means they are recreated on every state change (including isLoading, error, etc.), defeating memoization benefits.
♻️ Suggested narrower dependencies
 const getFilteredIssues = useCallback((): GitHubIssue[] => {
-  const { issues, selectedRepo } = state;
-
   // Filter by repo
-  if (selectedRepo !== 'all') {
-    return issues.filter(issue => issue.repoFullName === selectedRepo);
+  if (state.selectedRepo !== 'all') {
+    return state.issues.filter(issue => issue.repoFullName === state.selectedRepo);
   }
-
-  return issues;
-}, [state]);
+  return state.issues;
+}, [state.issues, state.selectedRepo]);

 const getOpenIssuesCount = useCallback((): number => {
-  const { issues, selectedRepo } = state;
-  let filtered = issues.filter(issue => issue.state === 'open');
-  if (selectedRepo !== 'all') {
-    filtered = filtered.filter(issue => issue.repoFullName === selectedRepo);
+  let filtered = state.issues.filter(issue => issue.state === 'open');
+  if (state.selectedRepo !== 'all') {
+    filtered = filtered.filter(issue => issue.repoFullName === state.selectedRepo);
   }
   return filtered.length;
-}, [state]);
+}, [state.issues, state.selectedRepo]);
+ }, [state.issues, state.selectedRepo]);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/frontend/src/renderer/components/github-issues/hooks/useMultiRepoGitHubIssues.ts`
around lines 195 - 215, getFilteredIssues and getOpenIssuesCount are recreating
too often because their useCallback deps reference the whole state object;
narrow the dependencies to only the specific pieces used (state.issues and
state.selectedRepo) by replacing [state] with [state.issues, state.selectedRepo]
(or their local destructured equivalents) so getFilteredIssues and
getOpenIssuesCount only re-run when issues or selectedRepo change; update the
useCallback declarations for getFilteredIssues and getOpenIssuesCount
accordingly.
useEffect(() => {
  if (!isCustomer || !resolvedChildProjectId || !multiRepoSelectedNumber) return;
  if (prs.length > 0) {
    const prExists = prs.some(pr => pr.number === multiRepoSelectedNumber);
    if (prExists && selectedPRNumber !== multiRepoSelectedNumber) {
      selectPR(multiRepoSelectedNumber);
    }
  }
}, [isCustomer, resolvedChildProjectId, multiRepoSelectedNumber, prs, selectedPRNumber, selectPR]);
🧹 Nitpick | 🔵 Trivial
Effect runs on every prs array change.
This effect syncs selection from multi-repo to single-repo context. Including prs in the dependency array means it re-runs whenever the PR list updates (e.g., after refresh), potentially causing redundant selectPR calls when selectedPRNumber already equals multiRepoSelectedNumber.
The current guard selectedPRNumber !== multiRepoSelectedNumber prevents unnecessary state updates, so this is functionally correct but slightly inefficient.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/frontend/src/renderer/components/github-prs/GitHubPRs.tsx` around lines
396 - 404, The effect unnecessarily re-runs on any change to the prs array
reference; change the dependency to prs.length instead of prs so the effect only
re-runs when the list size changes (not when item identities change), and keep
the existing guards (isCustomer, resolvedChildProjectId,
multiRepoSelectedNumber, selectedPRNumber) and the
selectPR(multiRepoSelectedNumber) call in place; update the useEffect dependency
array reference to prs.length and retain selectPR, selectedPRNumber,
multiRepoSelectedNumber, isCustomer, and resolvedChildProjectId to locate the
correct code paths.
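The guard condition can also be isolated as a pure predicate (a hypothetical helper whose names mirror the hook's variables), which makes the "already selected" short-circuit easy to verify in isolation:

```typescript
// Hypothetical pure form of the effect's guard: sync only when the multi-repo
// selection exists in the loaded PR list and differs from the current selection.
function shouldSyncSelection(
  prNumbers: number[],
  multiRepoSelectedNumber: number | null,
  selectedPRNumber: number | null
): boolean {
  if (multiRepoSelectedNumber === null) return false;
  return (
    prNumbers.includes(multiRepoSelectedNumber) &&
    selectedPRNumber !== multiRepoSelectedNumber
  );
}
```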
apps/frontend/src/renderer/components/github-prs/hooks/useMultiRepoGitHubPRs.ts
```typescript
const handleRefresh = useCallback(() => {
  if (!customerId) return;

  const refresh = async () => {
    setState(prev => ({ ...prev, isLoading: true, error: null }));

    try {
      const connResult = await window.electronAPI.github.checkMultiRepoConnection(customerId);
      if (connResult.success && connResult.data) {
        const connData = connResult.data;
        setState(prev => ({
          ...prev,
          syncStatus: connData,
          repos: connData.repos.map(r => r.repoFullName),
        }));
      }

      const result = await window.electronAPI.github.getMultiRepoPRs(customerId);

      if (result.success && result.data) {
        const data = result.data;
        setState(prev => ({
          ...prev,
          prs: data.prs,
          repos: data.repos.length > 0 ? data.repos : prev.repos,
          isLoading: false,
        }));
      } else {
        setState(prev => ({
          ...prev,
          error: result.error || t('prReview.multiRepo.failedToRefreshPRs'),
          isLoading: false,
        }));
      }
    } catch (error) {
      setState(prev => ({
        ...prev,
        error: error instanceof Error ? error.message : t('prReview.multiRepo.unknownError'),
        isLoading: false,
      }));
    }
  };

  refresh();
}, [customerId, t]);
```
🧹 Nitpick | 🔵 Trivial
Consider adding cancellation logic to handleRefresh.
Unlike the useEffect hooks, handleRefresh doesn't track whether the request is still relevant. If a user triggers refresh and then quickly switches customers, stale results could be applied to the new customer context.
♻️ Proposed fix with ref-based cancellation
```diff
+import { useRef } from 'react';
+
 export function useMultiRepoGitHubPRs(customerId: string | undefined) {
   const { t } = useTranslation('common');
+  const refreshIdRef = useRef(0);
   // ... existing state ...

   const handleRefresh = useCallback(() => {
     if (!customerId) return;
+    const thisRefreshId = ++refreshIdRef.current;

     const refresh = async () => {
       setState(prev => ({ ...prev, isLoading: true, error: null }));

       try {
         const connResult = await window.electronAPI.github.checkMultiRepoConnection(customerId);
+        if (refreshIdRef.current !== thisRefreshId) return;
         if (connResult.success && connResult.data) {
           // ... existing logic ...
         }

         const result = await window.electronAPI.github.getMultiRepoPRs(customerId);
+        if (refreshIdRef.current !== thisRefreshId) return;
         // ... existing logic ...
       } catch (error) {
+        if (refreshIdRef.current !== thisRefreshId) return;
         // ... existing error handling ...
       }
     };

     refresh();
   }, [customerId, t]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/frontend/src/renderer/components/github-prs/hooks/useMultiRepoGitHubPRs.ts`
around lines 140 - 184, handleRefresh can apply stale results if a new customer
is selected before async calls complete; add cancellation logic by introducing a
ref-based request token (e.g., refreshRequestIdRef) or an AbortController stored
in a ref and increment/replace it each time handleRefresh is called, capture the
current token inside the inner async refresh, and before any setState or after
awaiting window.electronAPI.github.checkMultiRepoConnection / getMultiRepoPRs
verify the token still matches (or that the signal is not aborted); ensure you
abort/invalidates the previous token when starting a new refresh and also clean
up on unmount so setState in handleRefresh and refresh only runs for the latest
valid request.
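Stripped of React, the ref-based token the prompt describes is just a "latest request wins" guard. The sketch below (hypothetical names, a fake async fetch standing in for the IPC calls) shows a stale result being dropped:

```typescript
// A minimal "latest request wins" guard (hypothetical helper, not the hook itself).
// begin() invalidates all earlier tokens; isCurrent() tells an async continuation
// whether its result is still relevant.
function createRefreshGuard() {
  let current = 0;
  return {
    begin(): number {
      return ++current;
    },
    isCurrent(token: number): boolean {
      return token === current;
    },
  };
}

// Demo: a slow refresh started first loses to a fast refresh started second.
async function demo(): Promise<string[]> {
  const guard = createRefreshGuard();
  const applied: string[] = [];
  const fakeFetch = (label: string, ms: number) =>
    new Promise<string>(resolve => setTimeout(() => resolve(label), ms));

  const run = async (label: string, ms: number) => {
    const token = guard.begin();
    const result = await fakeFetch(label, ms);
    if (!guard.isCurrent(token)) return; // stale: a newer refresh has started
    applied.push(result);
  };

  await Promise.all([run('customer-A', 50), run('customer-B', 10)]);
  return applied;
}
```

In the hook, the same `isCurrent` check after each `await` makes every `setState` conditional on the request still being the latest.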
```typescript
// Get filtered PRs based on selected repo
const filteredPRs = useMemo((): MultiRepoPRData[] => {
  const { prs, selectedRepo } = state;
  if (selectedRepo === 'all') return prs;
  return prs.filter(pr => pr.repoFullName === selectedRepo);
}, [state.prs, state.selectedRepo]);
```
🧹 Nitpick | 🔵 Trivial
Minor: Simplify useMemo dependencies.
The destructuring inside the memo accesses state.prs and state.selectedRepo, but the dependency array already lists these explicitly. This works correctly but could be simplified for consistency.
♻️ Simplified version
```diff
 const filteredPRs = useMemo((): MultiRepoPRData[] => {
-  const { prs, selectedRepo } = state;
-  if (selectedRepo === 'all') return prs;
-  return prs.filter(pr => pr.repoFullName === selectedRepo);
+  if (state.selectedRepo === 'all') return state.prs;
+  return state.prs.filter(pr => pr.repoFullName === state.selectedRepo);
 }, [state.prs, state.selectedRepo]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/frontend/src/renderer/components/github-prs/hooks/useMultiRepoGitHubPRs.ts`
around lines 186 - 191, The useMemo for filteredPRs destructures state but lists
state.prs and state.selectedRepo in its dependency array; simplify by removing
the destructuring and reference state.prs and state.selectedRepo directly inside
the memo (or alternatively use const { prs, selectedRepo } = state and then
adjust the dependency array to [prs, selectedRepo]) so the dependencies match
the referenced identifiers; update the filteredPRs calculation (function name:
filteredPRs, hook: useMemo, types: MultiRepoPRData[], state properties: prs and
selectedRepo) accordingly for consistency.
- Add env field to GlobalMcpServerEntry.config and CustomMcpServer types - Parse and pass env from ~/.claude.json MCP server configs - Filter disabled MCPs (disabled: true) from the global MCP list - Pass env when converting GlobalMcpServerEntry to CustomMcpServer for health checks Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```python
body_for_check = route_body
if children_match:
    # Remove the children block from the body for target detection
    children_rel_start = children_match.start()
    children_rel_end = (
        children_match.end()
        - children_match.start()
        + (len(children_content) if children_content else 0)
```
… items) - Fix critical UnboundLocalError: global_mcp_added used before definition in security_settings dict (client.py). Pre-load global MCP servers and update settings file after population. - Add comment to empty except clause in custom_agents.py YAML fallback - Remove unused filePath variable in claude-agents-handlers.ts - Fix unused _mock_driver in test_integration_graphiti.py - Remove redundant "r" mode argument in open() call (Ruff UP015) - Guard parseIssueId/parsePRId against NaN from malformed input - Add aria-labels to Lock/Globe repo visibility icons (a11y) - Add i18n keys for private/public repo labels (en + fr) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Tests were failing because the real CLAUDE_CONFIG_DIR env var leaked into test fixtures, causing token resolution to find real credentials instead of using mocked values. Updated clear_env/clear_auth_env fixtures to also save/remove/restore CLAUDE_CONFIG_DIR. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace fs.access() + fs.readFile() with a single fs.readFile() call, handling ENOENT in the catch block. This removes the file system race condition flagged by CodeQL (file could change between check and read). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
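The pattern this commit describes — a single `readFile` with ENOENT handled in the catch — can be sketched as a standalone helper (`readConfigIfExists` is a hypothetical name for illustration, not the real function):

```typescript
import { readFile } from 'node:fs/promises';

// Read-then-catch instead of check-then-read: a single readFile call cannot
// race with a concurrent delete the way fs.access() + fs.readFile() can.
async function readConfigIfExists(path: string): Promise<string | null> {
  try {
    return await readFile(path, 'utf8');
  } catch (error) {
    const code = (error as NodeJS.ErrnoException).code;
    if (code === 'ENOENT') return null; // missing file is an expected case
    throw error; // permission errors and the like still surface
  }
}
```

Treating the error code as the existence signal is what removes the time-of-check/time-of-use window CodeQL flags.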
```typescript
additions: pr.additions ?? 0,
deletions: pr.deletions ?? 0,
changedFiles: pr.changed_files ?? 0,
```
Bug: The multi-repo PR handler uses a REST endpoint that doesn't provide diff stats, causing pr.additions and pr.deletions to always be 0 in the UI.
Severity: MEDIUM
Suggested Fix
To resolve this, either switch to using the GraphQL API for multi-repo PRs, which includes diff stats in the list response, or make additional REST API calls to the individual PR detail endpoint for each pull request to fetch the missing statistics. A third option is to remove the display of these fields from the multi-repo view if the data is not essential.
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.
Location:
apps/frontend/src/main/ipc-handlers/github/customer-github-handlers.ts#L378-L380
Potential issue: The code fetches pull requests for multiple repositories using the
GitHub REST API list endpoint. This endpoint does not include the `additions`,
`deletions`, or `changed_files` fields in its response. Consequently, when the code at
lines 378-380 accesses `pr.additions`, `pr.deletions`, and `pr.changed_files`, these
values are `undefined` and default to `0` due to the nullish coalescing operator. This
results in the UI for the multi-repo view always displaying `+0 -0` for diff statistics
and never showing the diff stat badge, which is a functional regression.
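If the per-PR detail calls are chosen, the enrichment step can stay testable by injecting the fetch function. This is a sketch under the assumption that the real handler would wrap `GET /repos/{owner}/{repo}/pulls/{pull_number}` (whose response, unlike the list endpoint's, does include the diff stats); the type names are hypothetical:

```typescript
// Hypothetical minimal shapes; the real handler maps full GitHub REST responses.
interface PRListItem {
  number: number;
  title: string;
}
interface PRDiffStats {
  additions: number;
  deletions: number;
  changedFiles: number;
}

// fetchDetail is injected so the enrichment stays testable; in production it
// would wrap the per-PR REST detail call that carries the missing stats.
async function enrichWithDiffStats(
  prs: PRListItem[],
  fetchDetail: (prNumber: number) => Promise<PRDiffStats>
): Promise<Array<PRListItem & PRDiffStats>> {
  return Promise.all(
    prs.map(async pr => {
      const stats = await fetchDetail(pr.number);
      return { ...pr, ...stats };
    })
  );
}
```

One detail call per PR adds N extra requests, which is why the review also mentions GraphQL (one query) or dropping the badge as alternatives.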
- fast_mode.py: use CLAUDE_CONFIG_DIR env var for settings path resolution instead of always defaulting to ~/.claude - reader.ts: export getUserConfigDir for reuse by other modules Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary
Consolidated PR combining three previously separate PRs (#1908, #1918, #1919) into a single cohesive change:
Graphiti Memory System Improvements
- `max_results` config, score-based filtering, deduplication
- `episode_ttl_days` config for automatic episode cleanup
- `resolveEmbeddingDim()` IPC helper

Custom Agents Integration
- `.md` agent files from `~/.claude/agents/` with YAML frontmatter support
- `get_phase_custom_agent()` resolves agent per phase from task metadata
- `create_client()` for planner, coder, and QA phases
- `task_metadata.json` via TaskCreationWizard and TaskEditDialog

Customer Module (already on develop)
Supersedes
Test plan
`npx tsc --noEmit`)
🤖 Generated with Claude Code
Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com
Summary by CodeRabbit
New Features
Improvements