feat: Memory system improvements + Custom Agents integration (#1920)
vitorafgomes wants to merge 49 commits into AndyMik90:develop from
Conversation
Add customer folder creation/selection with auto-initialization and
GitHub authentication. Customer folders skip git setup requirements
since they don't need git repos.
- Add type field ('project' | 'customer') to Project interface
- Rewrite AddCustomerModal with create new/open existing folder steps
- Auto-initialize .auto-claude/ on customer creation
- Skip git check and init dialog for customer-type projects
- Trigger GitHubSetupModal after customer folder is ready
- Add i18n keys for EN and FR
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The full initializeProject requires git, which customer folders don't have. Use createProjectFolder to create .auto-claude/ directly instead. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The project object passed to onCustomerAdded was stale (missing autoBuildPath). Now reads the updated project from the store after .auto-claude/ creation. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Customer folders don't have git, so the regular initializeProject fails. Add a dedicated IPC handler that creates .auto-claude/ and persists autoBuildPath in the main process project store without git checks. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
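A minimal sketch of that handler's core idea, with hypothetical names (initCustomerFolder, autoBuildPath are illustrative, not the PR's exact API): customer folders are plain directories, so the handler just creates .auto-claude/ and returns the path to persist, with no git gate at all.

```typescript
import * as fs from 'node:fs';
import * as path from 'node:path';

// Unlike the full initializeProject, there is no `git rev-parse` check here:
// a customer folder only needs its .auto-claude/ metadata directory.
function initCustomerFolder(projectPath: string): { autoBuildPath: string } {
  const autoBuildPath = path.join(projectPath, '.auto-claude');
  fs.mkdirSync(autoBuildPath, { recursive: true });
  return { autoBuildPath };
}
```

In the real handler the returned autoBuildPath would then be written into the main-process project store.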
When a Customer tab is active, GitHub Issues now shows issues from ALL child repositories. Includes repo filter dropdown, per-issue repo badge, and aggregated view sorted by update date. Token comes from Customer parent .env, each child repo contributes its own GITHUB_REPO. Also includes prior uncommitted work: Customer flow improvements, GitHub integration settings, i18n additions, and UI refinements. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Customer projects now always show GitHub Issues/PRs nav items since multi-repo aggregation pulls from all child repos. Removed the guard that was hiding GitHub nav when a child repo was selected via the Customer dropdown. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…opdown Use customerContext instead of isCustomerProject so GitHub nav stays visible both when the Customer itself is selected AND when a child repo is selected via the Customer dropdown. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Child repos added to a Customer often don't have .env with GITHUB_REPO. Now falls back to detecting the repo from git remote origin URL, which is always available for cloned repos. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
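The remote-URL fallback can be sketched as a pure function (name and regex are assumptions; the actual implementation may differ) that extracts "owner/repo" from either HTTPS or SSH GitHub remote forms:

```typescript
// Derive "owner/repo" from a git remote origin URL when the child repo
// has no .env with GITHUB_REPO. Handles both:
//   https://github.com/owner/repo.git
//   git@github.com:owner/repo.git
function repoFromRemoteUrl(remoteUrl: string): string | null {
  const match = remoteUrl.trim().match(/github\.com[\/:]([^\/]+)\/([^\/\s]+?)(?:\.git)?$/);
  return match ? `${match[1]}/${match[2]}` : null;
}
```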
Adds multi-repo PR support matching the existing multi-repo Issues pattern. When a Customer is active, PRs from all child repos are aggregated with repo badges and a repo filter dropdown. Review/merge features are read-only in multi-repo mode. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Read globally configured MCP servers from ~/.claude/settings.json (enabledPlugins and mcpServers) and display them as a read-only "Claude Code MCPs (Global)" section in the MCP Server Overview page. - Add ClaudeCodeMcpServerConfig type and enabledPlugins/mcpServers fields - Create claude-mcp-handlers.ts IPC handler to resolve plugin cache configs - Expose getGlobalMcps() via preload API - Add Global MCPs section with source badges (Plugin/Settings) and type labels - Add i18n keys for en and fr Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix MCP handler to read ~/.claude.json (main Claude Code config) in addition to ~/.claude/settings.json, resolving 32+ MCP servers that were previously invisible - Add new IPC handler for reading custom agents from ~/.claude/agents/ (11 categories, 147 agents) - Display custom agents section in MCP Overview with collapsible categories - Add 'claude-json' source badge to distinguish MCP origin - i18n keys added for EN and FR Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…i-repo Resolve child project ID from the selected issue/PR's repoFullName using syncStatus.repos mapping. This enables features that were previously disabled for Customer multi-repo mode: - Investigation: hook receives resolved child projectId instead of undefined - Auto-fix: per-issue auto-fix now functional with child project context - PR Review: full PRDetail with review/findings/fix replaces read-only view - IssueDetail: projectId, autoFixConfig, autoFixQueueItem now active Header-level batch operations (Analyze & Group, global auto-fix toggle) remain disabled for multi-repo as cross-repo batch is too complex. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…icrofrontend detection
Customer projects (multi-repo) were unable to generate a project index
because the analyzer ran against the parent folder (not a code repo),
producing an empty index. This broke the spec creation pipeline's
discovery and requirements phases.
## Customer project indexing (aggregation)
- CONTEXT_REFRESH_INDEX handler now detects `project.type === 'customer'`
and discovers child repos via projectStore
- For each child repo without an index, runs analyzer.py individually
- Aggregates all child indexes into a single customer-level index with
`project_type: 'customer'` and `child_repos` map
- Services are prefixed with repo name (e.g., `repo/main`) to avoid
key collisions across repos
- CONTEXT_GET handler also builds aggregated index on-the-fly from
existing child indexes when no persisted aggregate exists
- Accepts `force` parameter to re-run analyzer on all children
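The aggregation above can be sketched as follows. Field names (services, child_repos, project_type) mirror the bullets, but the real index schema may carry more fields; this shows only the repo-name prefixing that prevents key collisions.

```typescript
type ServiceMap = Record<string, unknown>;
interface ChildIndex { services: ServiceMap }

// Merge each child repo's index into one customer-level index, prefixing
// service keys with the repo name (e.g. "repo/main") so identical service
// names in different repos cannot collide.
function aggregateCustomerIndex(children: Record<string, ChildIndex>) {
  const services: ServiceMap = {};
  for (const [repoName, index] of Object.entries(children)) {
    for (const [serviceKey, svc] of Object.entries(index.services)) {
      services[`${repoName}/${serviceKey}`] = svc;
    }
  }
  return { project_type: 'customer' as const, child_repos: children, services };
}
```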
## Analyzer: .NET/C# detection
- Detects `.csproj` / `.sln` files via new glob-based service root
detection (`has_service_root()` helper in base.py)
- Identifies frameworks: ASP.NET Core, Blazor, WPF, MAUI, Worker
Services, gRPC
- Detects ORMs (Entity Framework, Dapper, Npgsql), task queues
(MassTransit, Hangfire, RabbitMQ), test frameworks (xUnit, NUnit,
MSTest)
- Parses .csproj XML for PackageReference and SDK type detection
## Analyzer: documentation project detection
- Standalone detection: MkDocs/Material, Sphinx, mdBook
- Node.js-based: Docusaurus, VitePress, VuePress, Nextra, Storybook
- New `documentation` service type with dedicated icon (BookOpen) and
color (emerald) in the UI constants
## Analyzer: microfrontend detection
- Detects Module Federation, Native Federation, single-spa, Qiankun
from package.json dependencies
- Also checks webpack config for ModuleFederationPlugin
- Stores result in `microfrontend` field on service analysis
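The dependency-based part of that detection can be sketched like this. The package-name → kind mapping below is an assumption for illustration, and the real analyzer (Python) additionally inspects webpack config for ModuleFederationPlugin:

```typescript
// Classify a microfrontend setup from package.json dependency names.
const MICROFRONTEND_DEPS: Record<string, string> = {
  '@module-federation/enhanced': 'module-federation',
  '@angular-architects/native-federation': 'native-federation',
  'single-spa': 'single-spa',
  'qiankun': 'qiankun',
};

function detectMicrofrontend(deps: Record<string, string>): string | null {
  for (const [pkg, kind] of Object.entries(MICROFRONTEND_DEPS)) {
    if (pkg in deps) return kind;
  }
  return null; // not a microfrontend host/remote as far as deps show
}
```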
## UI: progress tracking & re-analyze
- New `CONTEXT_INDEX_PROGRESS` IPC channel for real-time progress events
- Handler sends progress via `webContents.send()` as each child repo is
analyzed (message + current/total count)
- New `indexProgress` / `indexProgressCurrent` / `indexProgressTotal`
state fields in context-store
- `useIndexProgress()` hook listens for IPC progress events
- ProjectIndexTab now shows:
- Progress bar with percentage and repo count during indexing
- "Re-analyze" button to force re-run analyzer on all repos
- "Repositories" section for customer projects showing each repo
with type and framework badges
- Inline progress banner when re-analyzing with existing data
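The progress flow above can be sketched with Electron's `webContents.send()` stood in by a plain callback, so the event shape is testable outside the main process. The channel name matches CONTEXT_INDEX_PROGRESS; the function name and loop are illustrative only:

```typescript
interface IndexProgress { message: string; current?: number; total?: number }

// Emit one progress event per child repo as it is analyzed.
function analyzeChildRepos(
  repoNames: string[],
  send: (channel: string, data: IndexProgress) => void
): void {
  repoNames.forEach((repo, i) => {
    // In the real handler this brackets the per-repo analyzer.py run.
    send('CONTEXT_INDEX_PROGRESS', {
      message: `Analyzing ${repo}`,
      current: i + 1,
      total: repoNames.length,
    });
  });
}
```

On the renderer side, the `useIndexProgress()` hook would subscribe to the same channel and push each payload into the context-store fields listed above.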
## Type changes
- `ProjectIndex.project_type` now includes `'customer'`
- `ProjectIndex.child_repos` optional field for per-repo indexes
- `ServiceInfo.type` now includes `'documentation'`
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
feat: Customer module - Project indexing aggregation, enhanced codebase analysis & progress UI
- Remove unused SERVICE_ROOT_FILES import from project_analyzer_module.py
- Fix .csproj XML parsing to handle legacy files with XML namespaces
where tree.iter("PackageReference") silently returns nothing.
Now tries namespace-aware iteration and falls back to regex if
no packages found, not just on ParseError.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Export transformIssue from issue-handlers.ts and import in
customer-github-handlers.ts instead of duplicating (DRY)
- Narrow except Exception to except (OSError, UnicodeDecodeError) in
framework_analyzer.py .csproj parsing
- Replace require('path')/require('fs') with ES import statements in
oauth-handlers.ts and project-handlers.ts
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…, logging
- CRITICAL: Fix port_detector undefined in framework_analyzer.py (NameError at runtime)
- CRITICAL: Fix command injection in oauth-handlers.ts (execSync → execFileSync with args array + isValidGitHubRepo validation)
- MAJOR: Replace console.log/error/warn with debugLog in project-context-handlers.ts
- MAJOR: Fix cross-platform path detection (startsWith → path.relative) in project-context-handlers.ts and customer-github-handlers.ts
- MAJOR: Add runtime type validation for PROJECT_ADD IPC handler
- MAJOR: Use @shared path aliases in AgentTools.tsx imports
- MINOR: Tighten Sphinx detection (remove overly broad "extensions" check)
- TRIVIAL: Remove redundant packages_lower variable in dotnet detection
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
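The injection fix can be sketched as follows; the exact validation in oauth-handlers.ts may differ, but the idea is the same: validate untrusted input against a strict pattern, then pass it as a discrete argv element so it is never interpolated into a shell string.

```typescript
import { execFileSync } from 'node:child_process';

// Strict owner/repo pattern: letters, digits, dot, dash, underscore only.
function isValidGitHubRepo(repo: string): boolean {
  return /^[A-Za-z0-9_.-]+\/[A-Za-z0-9_.-]+$/.test(repo);
}

function cloneRepo(repo: string, dest: string): void {
  if (!isValidGitHubRepo(repo)) {
    throw new Error(`Invalid GitHub repo: ${repo}`);
  }
  // argv array + no shell: `repo` cannot break out the way `$(...)` or
  // `; rm -rf` could in an interpolated execSync(`git clone ...${repo}...`).
  execFileSync('git', ['clone', `https://github.com/${repo}.git`, dest]);
}
```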
- Apply ruff format to base.py and framework_analyzer.py (dict line length, regex quotes)
- Update smoke and ipc-bridge tests to expect third `undefined` argument in addProject IPC calls (matches new optional `type` parameter)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…nc, DRY
Backend:
- Remove unused re/ET imports in framework_analyzer.py
- Ruff format all modified analyzer files
MCP handlers:
- Enforce transport requirements (command or url) in sanitizeMcpServers
- Add runtime type validation in resolveInlineServers for untrusted JSON
- Respect CLAUDE_CONFIG_DIR env var for .claude.json lookup
Main process handlers:
- Convert execFileSync to async execFile with timeout in customer-github-handlers
- Extract duplicated keychain probe into hasKeychainClaudeToken() helper
- Use atomic file write for aggregated customer index
Renderer components:
- Add i18n for all hardcoded strings in ProjectIndexTab (en + fr)
- Localize error messages in EnvConfigModal with react-i18next
- Add debug logging to silent catch in AddCustomerModal
- Fix Windows path separator in AddCustomerModal folder preview
- Wrap loadRepos in useCallback and fix useEffect deps in CustomerReposModal
- Handle addProject registration failure in clone flow
- Add TODO for repo-aware issue selection in multi-repo mode
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…mpat, backend hardening
- Multi-repo selection: composite key (repo#number) for issues/PRs to prevent ambiguous selection across repos
- i18n: localize all hardcoded strings in GitHub issues/PRs/integration components (16 new translation keys)
- French translations: fix all missing diacritical marks (accents) across context, dialogs, settings
- Windows path compat: use path.relative() and normalize helpers instead of hardcoded '/' separator
- Path aliases: switch relative imports to @shared/* in 6 files (main, preload, renderer)
- Backend: fix Storybook devDependency misclassification, .NET sdk_type priority, Angular route regex
- Backend: add empty except comments, guard empty clean_type, remove appsettings.json from code refs
- Frontend: analyzer timeout (120s), atomic child key dedup, progress bar null checks
- Frontend: add type guard for PROJECT_INIT_CUSTOMER, repo validation for issue detail fetches
- Frontend: fix hasMore pagination, stale connection state reset, loading state for directory picker
- Frontend: simplify AgentTools ternary, add aria-expanded, use semantic colors, remove unused imports
- Frontend: guard customer onboarding against failed GitHub settings save
- Logging: consistent debugError for field sanitization warnings
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
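The composite-key idea can be sketched in a few lines (helper names are illustrative): "repo#number" uniquely identifies an issue or PR across the aggregated multi-repo view, where a bare number is ambiguous.

```typescript
// Build and parse composite selection keys for multi-repo issue/PR lists.
function issueKey(repoFullName: string, issueNumber: number): string {
  return `${repoFullName}#${issueNumber}`;
}

function parseIssueKey(key: string): { repoFullName: string; issueNumber: number } {
  const hash = key.lastIndexOf('#');
  return {
    repoFullName: key.slice(0, hash),
    issueNumber: Number(key.slice(hash + 1)),
  };
}
```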
…on-vite build Main process and preload builds use Rollup SSR mode which doesn't resolve @shared/* tsconfig path aliases. Revert to relative imports for these two files while keeping @shared aliases in renderer files (which work fine). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…case sensitivity
- .NET: pick best .sln when multiple exist (most entry_points + libraries)
- .NET: label Worker SDK as ".NET Worker" instead of "Azure Functions"
- .NET: only skip monorepo detection for single .sln repos
- .NET: broaden csproj parse exception handling to catch all errors
- .NET: remove redundant role variable default assignment
- Routes: fix ASP.NET attribute regex and duplicate-slash normalization
- Routes: prevent Angular double-counting of nested child routes
- Case sensitivity: add lowercase properties/ path for launchSettings.json
- env_detector: continue processing remaining launchSettings files (break vs return)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Enhance the Claude Code MCPs (Global) section in MCP Overview:
- Health status indicator (green/red/yellow/gray dot) per MCP server
- "Check Health" button to probe all global MCPs in parallel
- Response time display for HTTP servers
- Pipeline phase toggle chips (Spec, Build, QA, Utility, Ideation) per MCP
- Phase assignments persisted in settings as globalMcpPhases
- Add GlobalMcpPhaseConfig type to settings types
- i18n keys for EN and FR
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…list Global MCPs from Claude Code config (~/.claude.json, ~/.claude/settings.json) are a trusted source. The existing checkMcpHealth rejects commands not in the SAFE_COMMANDS allowlist (npx, npm, node, etc.), causing all global MCP command-based servers to show as "unhealthy" (red). Add checkGlobalMcpHealth IPC handler that verifies command existence via which/where.exe without the allowlist filter. Shows "Available — starts on demand" for found commands, matching how Claude Code actually uses them. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
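A sketch of that allowlist-free probe (function name is illustrative): instead of rejecting commands not in SAFE_COMMANDS, just ask the OS whether the binary resolves on PATH, via which on POSIX or where.exe on Windows.

```typescript
import { execFileSync } from 'node:child_process';

// Returns true when the command resolves on PATH, i.e. the MCP server can be
// started on demand, without launching it.
function commandExists(command: string): boolean {
  const probe = process.platform === 'win32' ? 'where.exe' : 'which';
  try {
    execFileSync(probe, [command], { stdio: 'ignore' });
    return true; // shown as "Available — starts on demand"
  } catch {
    return false;
  }
}
```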
Actionable comments posted: 3
♻️ Duplicate comments (1)
apps/frontend/src/renderer/components/AgentTools.tsx (1)
1426-1458: ⚠️ Potential issue | 🟠 Major: Always render the global MCP card container so retry actions remain available.
Line 1426 hides the whole section when allGlobalServers.length === 0, which also hides refresh and health-check controls after empty/failed loads.
Proposed fix direction:
```diff
- {allGlobalServers.length > 0 && (
+ {(isLoadingGlobalMcps || globalMcps !== null) && (
    <div className="rounded-lg border border-border bg-card p-4">
      <div className="flex items-center justify-between mb-4">
        ...
      </div>
-     <div className="space-y-2">
-       {allGlobalServers.map((server) => {
+     {isLoadingGlobalMcps ? (
+       <p className="text-xs text-muted-foreground">{t('settings:mcp.globalMcps.loading')}</p>
+     ) : allGlobalServers.length === 0 ? (
+       <p className="text-xs text-muted-foreground">{t('settings:mcp.globalMcps.empty')}</p>
+     ) : (
+       <div className="space-y-2">
+         {allGlobalServers.map((server) => {
            ...
-       })}
-     </div>
+         })}
+       </div>
+     )}
    </div>
  )}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/AgentTools.tsx` around lines 1426 - 1458, The MCP card is currently hidden when allGlobalServers.length === 0, removing the retry/refresh controls; change rendering so the outer container (the div with class "rounded-lg border...") always renders, while the list of servers inside is conditional — keep the header and the Buttons (onClick handlers checkGlobalMcpHealth and loadGlobalMcps, and states isCheckingGlobalHealth/isLoadingGlobalMcps) visible at all times and only hide or show the server list/content based on allGlobalServers.length; update the JSX to move the conditional from the wrapper to the inner server-list section so retry actions remain available when the list is empty.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/frontend/src/renderer/components/AgentTools.tsx`:
- Around line 749-753: Replace the hardcoded fallback message strings in
AgentTools.tsx with react-i18next translation keys: instead of 'Health check
failed' used when populating results[server.serverId] in the health-check path
(and the similar string at the other occurrence around line 761), call the i18n
translator (from useTranslation) and use a descriptive key like
t('agentTools.healthCheckFailed') so the tooltip/status reads the translated
text; ensure useTranslation is imported and the translation key is added to the
relevant locales.
- Around line 50-52: Replace relative imports with renderer path aliases: change
imports that reference '../stores/settings-store' and '../lib/utils' (symbols:
useSettingsStore, saveSettings, cn) and '../stores/project-store' (symbol:
useProjectStore) to use the configured `@/*` aliases (e.g.,
`@/stores/settings-store`, `@/lib/utils`, `@/stores/project-store`) so frontend
code follows tsconfig aliasing; update the import lines in AgentTools.tsx
accordingly.
In `@apps/frontend/src/shared/i18n/locales/en/settings.json`:
- Line 352: Translation for "projectSections.{id}.description" expects a name
interpolation but AppSettings.tsx currently calls
t(`projectSections.${item.id}.description`) without args; update the render in
apps/frontend/src/renderer/components/settings/AppSettings.tsx to pass the
project name (e.g., t(`projectSections.${item.id}.description`, { name:
item.name })) or a safe fallback (e.g., { name: item.name || '' }) so the
{{name}} token is replaced and users won't see the literal interpolation.
---
Duplicate comments:
In `@apps/frontend/src/renderer/components/AgentTools.tsx`:
- Around line 1426-1458: The MCP card is currently hidden when
allGlobalServers.length === 0, removing the retry/refresh controls; change
rendering so the outer container (the div with class "rounded-lg border...")
always renders, while the list of servers inside is conditional — keep the
header and the Buttons (onClick handlers checkGlobalMcpHealth and
loadGlobalMcps, and states isCheckingGlobalHealth/isLoadingGlobalMcps) visible
at all times and only hide or show the server list/content based on
allGlobalServers.length; update the JSX to move the conditional from the wrapper
to the inner server-list section so retry actions remain available when the list
is empty.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 5837d255-14dd-46d1-82a6-7ded19f79a6f
📒 Files selected for processing (4)
- apps/frontend/src/renderer/components/AgentTools.tsx
- apps/frontend/src/shared/i18n/locales/en/settings.json
- apps/frontend/src/shared/i18n/locales/fr/settings.json
- apps/frontend/src/shared/types/settings.ts
Actionable comments posted: 2
♻️ Duplicate comments (4)
apps/frontend/src/shared/types/ipc.ts (1)
458-459: ⚠️ Potential issue | 🟡 Minor: Include projectId in onIndexProgress payload for event scoping.
The onIndexProgress callback payload lacks a projectId field. When multiple projects are being indexed (or rapidly switched), progress events cannot be reliably associated with the correct project, potentially causing UI updates to be misrouted.
🔧 Suggested fix
```diff
- onIndexProgress: (callback: (data: { message: string; current?: number; total?: number }) => void) => () => void;
+ onIndexProgress: (callback: (data: { projectId: string; message: string; current?: number; total?: number }) => void) => () => void;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/shared/types/ipc.ts` around lines 458 - 459, The onIndexProgress callback payload is missing projectId which prevents scoping progress events; update the IPC type signature for onIndexProgress to include projectId: string in the payload (so its callback becomes (data: { projectId: string; message: string; current?: number; total?: number }) => void) and then adjust any emitters/listeners that use onIndexProgress and refreshProjectIndex to pass and check projectId so progress events are correctly routed to the matching project.
apps/frontend/src/renderer/components/AgentTools.tsx (3)
1425-1427: ⚠️ Potential issue | 🟡 Minor: Don't hide the entire section when data is empty or on first load failure.
The condition allGlobalServers.length > 0 hides the card completely when there are no servers or when the initial fetch fails. This also hides the refresh button, making recovery harder.
Suggested direction:
```diff
- {allGlobalServers.length > 0 && (
+ {(isLoadingGlobalMcps || globalMcps !== null) && (
    <div className="rounded-lg border border-border bg-card p-4">
```
Then render:
- A loading state when isLoadingGlobalMcps is true
- An empty state with messaging when allGlobalServers.length === 0
- The populated list otherwise
This keeps the refresh control accessible for retry.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/AgentTools.tsx` around lines 1425 - 1427, The current JSX hides the entire Claude Code Global MCPs card when allGlobalServers.length === 0, which also hides the refresh control; change the render logic around the card (the block currently gated by allGlobalServers.length > 0) to always render the card and inside it: show a loading state when isLoadingGlobalMcps is true, show an explicit empty state/message when allGlobalServers.length === 0 (with the refresh button visible), and otherwise render the populated list using allGlobalServers; ensure the refresh control remains outside the length check so it is always accessible for retries.
50-52: 🛠️ Refactor suggestion | 🟠 Major: Use renderer path aliases instead of relative imports.
Lines 50-51 use relative imports where @/* aliases are required per coding guidelines.
Proposed fix:
```diff
-import { useSettingsStore, saveSettings } from '../stores/settings-store';
-import { cn } from '../lib/utils';
+import { useSettingsStore, saveSettings } from '@/stores/settings-store';
+import { cn } from '@/lib/utils';
```
As per coding guidelines: "Use path aliases from tsconfig.json in frontend code: @/* for renderer."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/AgentTools.tsx` around lines 50 - 52, The imports in AgentTools.tsx use relative paths instead of the renderer path alias; update the import statements that reference useSettingsStore, saveSettings, cn, and useProjectStore to use the `@/...` alias (e.g., import from '@/stores/settings-store' and '@/lib/utils' and '@/stores/project-store') so they follow the project's tsconfig renderer alias convention; ensure all four imported symbols (useSettingsStore, saveSettings, cn, useProjectStore) are imported from their aliased module paths.
749-753: ⚠️ Potential issue | 🟡 Minor: Translate the fallback health-check message.
Line 752 uses a hardcoded English string 'Health check failed' that surfaces in status tooltips. This violates the i18n requirement.
Proposed fix:
```diff
  } catch {
    results[server.serverId] = {
      serverId: server.serverId,
      status: 'unknown',
-     message: 'Health check failed',
+     message: t('settings:mcp.globalMcps.healthCheckFailed'),
      checkedAt: new Date().toISOString(),
    };
  }
```
And update the dependency array on line 761:
```diff
- }, [allGlobalServers]);
+ }, [allGlobalServers, t]);
```
As per coding guidelines: "All frontend user-facing text must use react-i18next translation keys. Never hardcode strings in JSX/TSX."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/AgentTools.tsx` around lines 749 - 753, Replace the hardcoded message 'Health check failed' in the results assignment (the object created at results[server.serverId]) with a translation call, e.g. use t('agent.healthCheckFailed') from react-i18next so the tooltip uses i18n; then ensure the hook that builds/returns these results (the enclosing useCallback/useEffect that references results) includes the translation function t in its dependency array so translations update correctly when locale changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/frontend/src/renderer/components/AgentTools.tsx`:
- Around line 763-768: The useEffect currently depends on
[allGlobalServers.length, checkGlobalMcpHealth], which causes health checks to
only run when the array length changes while the callback captures the full
array; update the effect to avoid this mismatch by either (A) depending directly
on allGlobalServers (i.e., useEffect(() => { if (allGlobalServers.length > 0)
checkGlobalMcpHealth(); }, [allGlobalServers])) so checks re-run on any server
change, or (B) depend only on allGlobalServers.length and read server details
via a ref inside checkGlobalMcpHealth; pick one approach and update the
useEffect dependency array and related refs/callbacks (checkGlobalMcpHealth,
allGlobalServers) accordingly.
In `@apps/frontend/src/renderer/lib/browser-mock.ts`:
- Around line 372-379: The getClaudeAgents mock has a redundant type assertion
on the success property; in the getClaudeAgents async function remove the
unnecessary "as const" from "success: true as const" so it simply returns
"success: true" (matching the style used in getGlobalMcps) and rely on
TypeScript's type inference for the boolean literal.
---
Duplicate comments:
In `@apps/frontend/src/renderer/components/AgentTools.tsx`:
- Around line 1425-1427: The current JSX hides the entire Claude Code Global
MCPs card when allGlobalServers.length === 0, which also hides the refresh
control; change the render logic around the card (the block currently gated by
allGlobalServers.length > 0) to always render the card and inside it: show a
loading state when isLoadingGlobalMcps is true, show an explicit empty
state/message when allGlobalServers.length === 0 (with the refresh button
visible), and otherwise render the populated list using allGlobalServers; ensure
the refresh control remains outside the length check so it is always accessible
for retries.
- Around line 50-52: The imports in AgentTools.tsx use relative paths instead of
the renderer path alias; update the import statements that reference
useSettingsStore, saveSettings, cn, and useProjectStore to use the `@/...` alias
(e.g., import from '@/stores/settings-store' and '@/lib/utils' and
'@/stores/project-store') so they follow the project's tsconfig renderer alias
convention; ensure all four imported symbols (useSettingsStore, saveSettings,
cn, useProjectStore) are imported from their aliased module paths.
- Around line 749-753: Replace the hardcoded message 'Health check failed' in
the results assignment (the object created at results[server.serverId]) with a
translation call, e.g. use t('agent.healthCheckFailed') from react-i18next so
the tooltip uses i18n; then ensure the hook that builds/returns these results
(the enclosing useCallback/useEffect that references results) includes the
translation function t in its dependency array so translations update correctly
when locale changes.
In `@apps/frontend/src/shared/types/ipc.ts`:
- Around line 458-459: The onIndexProgress callback payload is missing projectId
which prevents scoping progress events; update the IPC type signature for
onIndexProgress to include projectId: string in the payload (so its callback
becomes (data: { projectId: string; message: string; current?: number; total?:
number }) => void) and then adjust any emitters/listeners that use
onIndexProgress and refreshProjectIndex to pass and check projectId so progress
events are correctly routed to the matching project.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 9d592a98-9ed9-440c-8a5e-682586349966
📒 Files selected for processing (6)
- apps/frontend/src/main/ipc-handlers/mcp-handlers.ts
- apps/frontend/src/preload/api/modules/mcp-api.ts
- apps/frontend/src/renderer/components/AgentTools.tsx
- apps/frontend/src/renderer/lib/browser-mock.ts
- apps/frontend/src/shared/constants/ipc.ts
- apps/frontend/src/shared/types/ipc.ts
Codecov Report: ❌ Patch coverage check failed.
Address all 49 review comments from PR AndyMik90#1920:
Backend (Python):
- custom_agents.py: path traversal validation, CRLF normalization, YAML block-list parsing, type hints
- route_detector.py: fix .NET HTTP attribute regex, method modifier handling
- client.py: Windows command-line length protection for agent prompts, load global MCP servers from ~/.claude.json (Auto-Claude MCPs take priority)
- graphiti config/schema/search: non-negative TTL validation, safe env parsing, proper exception logging
- phase_config.py: guard phaseCustomAgents dict type
- env_detector.py: fix config value shadowing by reordering parsers
- database_detector.py: guard empty string indexing
- framework_analyzer.py: handle path traversal in .sln parsing
- Tests: fix sys.modules teardown, unused variables, clarify test markers
Frontend (TypeScript/React):
- customer-github-handlers.ts: move imports to top, validate IPC params
- settings-handlers.ts: normalize token checks, unify token-source logic
- project-handlers.ts: validate project path before mkdirSync
- memory-handlers.ts: warn on heuristic embedding dimension fallback
- project-context-handlers.ts: use i18n keys for progress messages
- CustomerReposModal.tsx: aria-label, Tailwind class instead of inline style
- ProjectIndexTab.tsx: guard division by zero in progress bars
- GitHubPRs.tsx: clamp negative timestamps, guard stale PR detail
- useMultiRepoGitHubIssues/PRs: async cancellation guards
- Sidebar.tsx: path normalization for Windows, wire AddCustomerModal trigger
- SortableProjectTab.tsx: fix invisible keyboard focus target
- i18n: singular/plural repo count (en + fr)
- ipc.ts: add projectId to onIndexProgress payload
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
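The path-validation fixes mentioned for both custom_agents.py and project-handlers.ts follow the same containment pattern, sketched here in TypeScript (the function name is illustrative): resolve the candidate against the allowed root and reject anything that escapes it before any filesystem call.

```typescript
import * as path from 'node:path';

// True only when `candidate` resolves to a location strictly inside `root`.
// Empty rel means the root itself; a '..' prefix or absolute rel means escape.
function isInsideRoot(root: string, candidate: string): boolean {
  const resolvedRoot = path.resolve(root);
  const rel = path.relative(resolvedRoot, path.resolve(resolvedRoot, candidate));
  return rel !== '' && !rel.startsWith('..') && !path.isAbsolute(rel);
}
```

Using path.relative rather than a startsWith check also sidesteps the Windows separator issues the review comments call out elsewhere.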
Backend:
- port_detector.py: handle --port=<value> equals form in regex
- project_analyzer_module.py: don't early-return on single .sln, check for sibling services
- custom_agents.py: use sanitize_thinking_level(), include root-level agents in catalog, extract _format_category_name() helper
- test_integration_graphiti.py: clarify these are unit-level tests with mocked deps
Frontend:
- claude-mcp-handlers.ts: convert sync fs calls to async, validate header values
- App.tsx: keep GitHub setup modal open on save failure
- AddCustomerModal.tsx: handle registration failures explicitly
- GitHubIssues.tsx: additional fixes
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- AgentTools.tsx: use path aliases, translate health-check message, simplify useEffect deps
- AgentProfileSettings.tsx: constrain catalog panel height with max-h and overflow
- GitHubIntegration.tsx: use cn() for conditional Tailwind class composition
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- AgentTools.tsx: use @/ path aliases, translate health-check fallback message, simplify useEffect dependency array
- browser-mock.ts: remove redundant `as const` assertion
- settings.json (en/fr): add healthCheckFailed translation key
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 31
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
apps/frontend/src/renderer/components/AgentTools.tsx (1)
1020-1029: ⚠️ Potential issue | 🟡 Minor: Translate hardcoded health-check fallback message.
Line 1026 contains the same hardcoded 'Health check failed' string that should use an i18n translation key for consistency.
Proposed fix:
```diff
  } catch (_error) {
    setServerHealthStatus(prev => ({
      ...prev,
      [server.id]: {
        serverId: server.id,
        status: 'unknown',
-       message: 'Health check failed',
+       message: t('settings:mcp.healthCheckFailed'),
        checkedAt: new Date().toISOString(),
      }
    }));
  }
```
As per coding guidelines: "All frontend user-facing text must use react-i18next translation keys."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/AgentTools.tsx` around lines 1020 - 1029, The catch block that calls setServerHealthStatus in AgentTools currently sets message: 'Health check failed' as a hardcoded string; replace that literal with a react-i18next translation (e.g., t('health_check.failed')) and ensure the AgentTools component has access to the t function from useTranslation (import useTranslation and call const { t } = useTranslation() at component scope). Update the translation JSON keys (e.g., "health_check.failed") in your locales so the message is localized. Ensure server.id and other fields remain unchanged when setting state.
apps/frontend/src/main/ipc-handlers/context/project-context-handlers.ts (1)
412-463: 🧹 Nitpick | 🔵 Trivial
Consider extracting shared analyzer spawn logic.
The spawn/timeout pattern is duplicated between refreshChildIndex (lines 78-130) and this block. Extracting a shared helper would reduce duplication and ensure consistent behavior (including the cross-platform process termination fix).
♻️ Suggested extraction
async function runAnalyzer(
  projectPath: string,
  projectName: string,
  analyzerPath: string,
  indexOutputPath: string,
  timeoutMs: number = 120_000
): Promise<void> {
  const pythonCmd = getConfiguredPythonPath();
  const [pythonCommand, pythonBaseArgs] = parsePythonCommand(pythonCmd);

  return new Promise<void>((resolve, reject) => {
    let stdout = '';
    let stderr = '';

    const proc = spawn(pythonCommand, [
      ...pythonBaseArgs,
      analyzerPath,
      '--project-dir', projectPath,
      '--output', indexOutputPath
    ], {
      cwd: projectPath,
      env: { ...getAugmentedEnv(), PYTHONIOENCODING: 'utf-8', PYTHONUTF8: '1' }
    });

    const timeout = setTimeout(() => {
      debugLog(`[project-context] Analyzer (${projectName}) timed out after ${timeoutMs}ms`);
      proc.kill('SIGTERM');
      reject(new Error(`Analyzer timed out after ${timeoutMs / 1000}s`));
    }, timeoutMs);

    proc.stdout?.on('data', (data) => { stdout += data.toString('utf-8'); });
    proc.stderr?.on('data', (data) => { stderr += data.toString('utf-8'); });

    proc.on('close', (code: number) => {
      clearTimeout(timeout);
      if (code === 0) {
        debugLog(`[project-context] Analyzer (${projectName}) stdout:`, stdout);
        resolve();
      } else {
        debugLog(`[project-context] Analyzer (${projectName}) failed with code`, code);
        reject(new Error(`Analyzer exited with code ${code}: ${stderr || stdout}`));
      }
    });

    proc.on('error', (err) => {
      clearTimeout(timeout);
      reject(err);
    });
  });
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/main/ipc-handlers/context/project-context-handlers.ts` around lines 412 - 463, The analyzer spawn/timeout logic is duplicated between refreshChildIndex and the block in project-context-handlers; extract it into a shared async helper (e.g., runAnalyzer) that accepts projectPath, projectName, analyzerPath, indexOutputPath, and timeoutMs, and internally calls getConfiguredPythonPath/parsePythonCommand, uses getAugmentedEnv, collects stdout/stderr, sets the timeout, and handles proc.on('close') and proc.on('error'); ensure the helper implements the same cross-platform termination behavior (use proc.kill('SIGTERM') and clearTimeout on all exits) and replace both refreshChildIndex and the existing inline Promise with calls to runAnalyzer to remove duplication and keep behavior consistent.
apps/frontend/src/renderer/components/SortableProjectTab.tsx (1)
37-44: ⚠️ Potential issue | 🟠 Major
Use setActivatorNodeRef on the drag handle and consolidate activator properties to prevent mobile drag regression.
Line 97 hides the drag handle ('hidden sm:block'), which is the only element with listeners, so below sm there's no DOM node to initiate drag. Additionally, attributes is on the wrapper (line 67) while listeners is on the handle (line 92), splitting the activator semantics. DnD Kit's drag handle pattern requires setActivatorNodeRef, attributes, and listeners all on the same element (the handle) to properly support keyboard focus restoration and touch/mouse activation.
Move setActivatorNodeRef, attributes, and listeners to the drag handle element, and remove the hidden class so the handle is always interactive (visibility can be controlled via opacity).
Suggested direction
  const {
    attributes,
    listeners,
    setNodeRef,
+   setActivatorNodeRef,
    transform,
    transition,
    isDragging
  } = useSortable({ id: project.id });

  return (
    <div
      ref={setNodeRef}
      style={style}
      className={cn(
        'group relative flex items-center min-w-0',
        isActive
          ? 'max-w-[180px] sm:max-w-[220px] md:max-w-[280px]'
          : 'max-w-[120px] sm:max-w-[160px] md:max-w-[200px]',
        'border-r border-border last:border-r-0',
        'touch-none transition-all duration-200',
        isDragging && 'opacity-60 scale-[0.98] shadow-lg'
      )}
-     {...attributes}
    >
      <Tooltip delayDuration={200}>
        <TooltipTrigger asChild>
          <div
            className={cn(
              'flex-1 flex items-center gap-1 sm:gap-2',
              'px-2 sm:px-3 md:px-4 py-2 sm:py-2.5',
              'text-xs sm:text-sm',
              'min-w-0 truncate hover:bg-muted/50 transition-colors',
              'border-b-2 border-transparent cursor-pointer',
              isActive && [
                'bg-background border-b-primary text-foreground',
                'hover:bg-background'
              ],
              !isActive && [
                'text-muted-foreground',
                'hover:text-foreground'
              ]
            )}
            onClick={onSelect}
          >
            {/* Drag handle */}
            <div
+             ref={setActivatorNodeRef}
+             {...attributes}
              {...listeners}
              role="button"
              aria-label={t('projectTab.dragHandle', { name: project.name })}
              tabIndex={isActive ? 0 : -1}
              className={cn(
-               'hidden sm:block',
+               'block',
                'cursor-grab active:cursor-grabbing',
                'w-1 h-4 rounded-full flex-shrink-0 transition-all',
                isActive
                  ? 'opacity-100 bg-primary'
-                 : 'opacity-0 group-hover:opacity-60 bg-muted-foreground'
+                 : 'opacity-0 sm:group-hover:opacity-60 bg-muted-foreground'
              )}
            />
Also applies to: 55-67
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/SortableProjectTab.tsx` around lines 37 - 44, The drag handle currently misuses useSortable by placing attributes on the wrapper (useSortable destructured names) and listeners on a hidden handle, causing mobile/keyboard regressions; change the drag-handle element to receive setActivatorNodeRef, attributes, and listeners from useSortable (instead of setNodeRef/attributes on the wrapper) and remove the 'hidden sm:block' class so the activator exists on small screens (control visibility via opacity/CSS rather than display:none); keep setNodeRef on the root sortable item for overall positioning/transform and use setActivatorNodeRef on the visible handle to consolidate activator semantics for useSortable (id: project.id).
apps/frontend/src/renderer/components/Sidebar.tsx (1)
261-287: ⚠️ Potential issue | 🟠 Major
Reset the Git setup modal when the selected project no longer needs it.
Lines 265–267 clear only gitStatus for customer projects, and line 276 only ever opens the modal. If the user switches from a non-git repo (modal open) to a customer project, a repo with commits, or no project, the stale Git setup modal remains open. Additionally, line 287 only tracks selectedProjectId, so changing the selected project's path or type in place will not rerun the effect, leaving git state stale.
Proposed fix
  useEffect(() => {
+   let cancelled = false;
+
    const checkGit = async () => {
-     if (selectedProject) {
-       // Customer folders don't require git
-       if (selectedProject.type === 'customer') {
-         setGitStatus(null);
-         return;
-       }
-       try {
-         const result = await window.electronAPI.checkGitStatus(selectedProject.path);
-         if (result.success && result.data) {
-           setGitStatus(result.data);
-           // Show git setup modal if project is not a git repo or has no commits
-           // but only if user hasn't already dismissed it for this project
-           if ((!result.data.isGitRepo || !result.data.hasCommits) && !gitModalDismissedRef.current.has(selectedProject.id)) {
-             setShowGitSetupModal(true);
-           }
-         }
-       } catch (error) {
-         console.error('Failed to check git status:', error);
-       }
-     } else {
+     if (!selectedProject || selectedProject.type === 'customer') {
        setGitStatus(null);
+       setShowGitSetupModal(false);
+       return;
+     }
+
+     try {
+       const result = await window.electronAPI.checkGitStatus(selectedProject.path);
+       if (cancelled) return;
+
+       if (result.success && result.data) {
+         setGitStatus(result.data);
+         const needsGitSetup = !result.data.isGitRepo || !result.data.hasCommits;
+         setShowGitSetupModal(
+           needsGitSetup && !gitModalDismissedRef.current.has(selectedProject.id)
+         );
+       } else {
+         setShowGitSetupModal(false);
+       }
+     } catch (error) {
+       console.error('Failed to check git status:', error);
+       setShowGitSetupModal(false);
      }
    };
+
    checkGit();
-  }, [selectedProjectId]);
+
+   return () => {
+     cancelled = true;
+   };
+ }, [selectedProject?.id, selectedProject?.path, selectedProject?.type]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/Sidebar.tsx` around lines 261 - 287, The effect that checks Git status (useEffect -> checkGit) only clears gitStatus for customer projects and may leave the Git setup modal open when switching to a customer project, a repo with commits, or no project; also the dependency array only tracks selectedProjectId so changes to selectedProject.path/type won't retrigger the check. Fix by ensuring when selectedProject is falsy or selectedProject.type === 'customer' you both setGitStatus(null) and setShowGitSetupModal(false) (and optionally clear any related state like gitModalDismissedRef if needed), and update the effect dependencies to include the selectedProject object (or at minimum selectedProject?.path and selectedProject?.type along with selectedProjectId) so the checkGit() runs whenever the project's path/type change; keep the existing logic that opens the modal only when (!result.data.isGitRepo || !result.data.hasCommits) and the project hasn't been dismissed.
♻️ Duplicate comments (20)
apps/frontend/src/renderer/components/CustomerReposModal.tsx (1)
136-145: ⚠️ Potential issue | 🟡 Minor
Use i18n translation key for the aria-label attribute.
The aria-label="Clear search" on line 140 is a hardcoded English string. Screen reader labels are user-facing text and must use translation keys per coding guidelines.
♿ Suggested fix
  {search && (
    <button
      type="button"
      onClick={() => setSearch('')}
-     aria-label="Clear search"
+     aria-label={t('customerRepos.clearSearch')}
      className="absolute right-3 top-1/2 -translate-y-1/2 text-muted-foreground hover:text-foreground"
    >
      <X className="h-4 w-4" />
    </button>
  )}
Ensure customerRepos.clearSearch is added to both en/*.json and fr/*.json translation files. As per coding guidelines, "All frontend user-facing text must use react-i18next translation keys."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/CustomerReposModal.tsx` around lines 136 - 145, Replace the hardcoded aria-label in CustomerReposModal's clear button with a react-i18next translation key: use the useTranslation hook and call t('customerRepos.clearSearch') for the aria-label on the button (the component and element to change are in CustomerReposModal.tsx where the button with onClick={() => setSearch('')} and <X /> lives); also add the key "customerRepos.clearSearch" with appropriate English and French values to both en/*.json and fr/*.json translation files so the label is localized for screen readers.
apps/frontend/src/renderer/components/GitHubIssues.tsx (1)
257-257: ⚠️ Potential issue | 🔴 Critical
Map lookup uses wrong key type/format — will never match.
The issueToTaskMap uses string keys like "owner/repo#123" or "#123", but the lookup passes selectedIssue.number (a plain number). This causes both a TypeScript type mismatch and a runtime bug where linked tasks are never found.
🐛 Proposed fix to use repo-scoped key for lookup
-  linkedTaskId={issueToTaskMap.get(selectedIssue.number)}
+  linkedTaskId={issueToTaskMap.get(
+    selectedIssue.repoFullName
+      ? `${selectedIssue.repoFullName}#${selectedIssue.number}`
+      : `#${selectedIssue.number}`
+  )}
Alternatively, extract the key computation into a helper to keep it DRY with lines 145-147:
const getIssueTaskKey = (repo: string | undefined, issueNumber: number): string =>
  repo ? `${repo}#${issueNumber}` : `#${issueNumber}`;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/GitHubIssues.tsx` at line 257, The lookup into issueToTaskMap is using selectedIssue.number (a number) but the map keys are strings like "owner/repo#123" or "#123", so change the lookup to use the same string key format used when populating the map; add or reuse a helper (e.g., getIssueTaskKey(repo, issueNumber)) that returns repo ? `${repo}#${issueNumber}` : `#${issueNumber}` and call it where linkedTaskId is computed (replace issueToTaskMap.get(selectedIssue.number) with issueToTaskMap.get(getIssueTaskKey(selectedRepo, selectedIssue.number))), ensuring the helper is also used where the map is populated to keep keys consistent.
apps/frontend/src/main/claude-code-settings/reader.ts (1)
103-104: ⚠️ Potential issue | 🟠 Major
Block prototype-pollution keys in sanitized maps.
The sanitizer functions write untrusted JSON keys into plain {} objects without guarding against prototype-pollution keys (__proto__, constructor, prototype). This affects:
- sanitizeMcpServers() at Line 103 (mcpServers map) and Line 140 (headers map)
- sanitizeEnabledPlugins() at Line 200
Use Object.create(null) for all sanitized maps and validate keys before assignment.
🔒 Proposed hardening
+const FORBIDDEN_KEYS = new Set(['__proto__', 'constructor', 'prototype']);
+
+function isSafeKey(key: string): boolean {
+  return !FORBIDDEN_KEYS.has(key);
+}
+
 function sanitizeMcpServers(mcpServers: unknown): Record<string, ClaudeCodeMcpServerConfig> | undefined {
   if (!isPlainObject(mcpServers)) {
     return undefined;
   }
-  const sanitized: Record<string, ClaudeCodeMcpServerConfig> = {};
+  const sanitized: Record<string, ClaudeCodeMcpServerConfig> = Object.create(null);
   let hasValidEntries = false;
   for (const [key, value] of Object.entries(mcpServers)) {
+    if (!isSafeKey(key)) {
+      debugLog(`${LOG_PREFIX} Skipping unsafe mcpServers key:`, key);
+      continue;
+    }
     if (!isPlainObject(value)) {
sanitizeEnabledPlugins(Line 200).Also applies to: 140-140, 200-200
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/main/claude-code-settings/reader.ts` around lines 103 - 104, The sanitized maps in sanitizeMcpServers() and sanitizeEnabledPlugins() currently use plain {} which allows prototype-pollution via keys like "__proto__", "constructor", or "prototype"; change the maps (the local variables currently named sanitized/mcpServers, headers, and enabledPlugins) to be created with Object.create(null) and, before assigning any untrusted key into these maps inside sanitizeMcpServers() and sanitizeEnabledPlugins(), validate the key against a deny-list (at minimum "__proto__", "constructor", "prototype") or a safe-key regex and skip/ignore any disallowed keys so they are never written into the prototype chain.
apps/frontend/src/renderer/components/AddCustomerModal.tsx (1)
44-48: ⚠️ Potential issue | 🟠 Major
Handle project registration failures explicitly.
When window.electronAPI.addProject() fails, registerAndInitCustomer() returns silently without surfacing any error to the user. Both handleOpenExisting and handleCreateFolder call this function, but neither will display an error if registration fails since no exception is thrown.
🔧 Suggested fix
  const registerAndInitCustomer = async (path: string) => {
    // Pass type: 'customer' through IPC so it's persisted to disk (projects.json)
    const result = await window.electronAPI.addProject(path, 'customer');
-   if (!result.success || !result.data) return;
+   if (!result.success || !result.data) {
+     throw new Error(result.error || t('addCustomer.failedToCreate'));
+   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/AddCustomerModal.tsx` around lines 44 - 48, registerAndInitCustomer currently swallows failures from window.electronAPI.addProject, causing handleOpenExisting and handleCreateFolder to return silently; modify registerAndInitCustomer to explicitly handle unsuccessful results by either throwing an Error or returning a rejected Promise with a descriptive message, and ensure callers (handleOpenExisting and handleCreateFolder) catch that rejection and surface it to the user (e.g., via the existing UI error state / notification mechanism or by setting an error message in component state). Locate registerAndInitCustomer, window.electronAPI.addProject, handleOpenExisting, and handleCreateFolder and add explicit error propagation and user-facing reporting rather than a silent return.
apps/frontend/src/renderer/components/github-issues/hooks/useMultiRepoGitHubIssues.ts (1)
70-76: ⚠️ Potential issue | 🟡 Minor
Missing cancellation check in the else branch.
The else branch at line 70 sets state without first checking if (cancelled) return, which is inconsistent with the pattern used elsewhere in this effect (lines 62, 78). If the component unmounts or customerId changes during the async call, this branch could still update state.
🔧 Suggested fix
  } else {
+   if (cancelled) return;
    setState(prev => ({
      ...prev,
      syncStatus: { connected: false, repos: [], error: result.error },
      error: result.error || t('issues.multiRepo.failedToCheckConnection'),
    }));
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/github-issues/hooks/useMultiRepoGitHubIssues.ts` around lines 70 - 76, The else branch that calls setState(...) should guard against updates after unmount by checking the same cancellation flag used elsewhere; before calling setState in the else branch (the block handling a failed connection result inside the async effect in useMultiRepoGitHubIssues), add "if (cancelled) return" (or the existing cancellation variable check) to avoid updating state after the component is unmounted or customerId changes.
apps/frontend/src/renderer/components/github-prs/GitHubPRs.tsx (1)
491-517: ⚠️ Potential issue | 🟡 Minor
Strengthen the guard to verify PR belongs to the resolved child project.
The current check multiRepoSelectedNumber === selectedPR.number only validates the PR number, but different repos can have PRs with the same number. During transitions when resolvedChildProjectId updates but singleRepo.selectedPR hasn't reloaded yet, this could briefly pair a stale PR from the wrong repo with the new project context.
🛡️ Proposed additional guard
  fullPRDetail={
-   resolvedChildProjectId && selectedPR && multiRepoSelectedNumber === selectedPR.number ? (
+   resolvedChildProjectId && selectedPR &&
+   multiRepoSelectedNumber === selectedPR.number &&
+   multiRepo.selectedPR?.repoFullName === selectedPR.repoFullName ? (
      <PRDetail
        pr={selectedPR}
        projectId={resolvedChildProjectId}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/github-prs/GitHubPRs.tsx` around lines 491 - 517, The conditional rendering for fullPRDetail in the PR list only checks PR number (multiRepoSelectedNumber === selectedPR.number) which can match across different repos; update the guard inside the fullPRDetail conditional (around resolvedChildProjectId, selectedPR and multiRepoSelectedNumber) to also verify the selected PR belongs to the same repo as the multi-repo selection — e.g., compare a unique repo identifier on selectedPR (repositoryId, repoFullName, or owner/name) against the multi-repo selected repo id/name (or resolvedChildProjectId if that maps to the repo) so the detail view is only shown when both PR number and repo match. Ensure you use the existing symbols selectedPR, resolvedChildProjectId and multiRepoSelectedNumber to locate and change the check.
apps/frontend/src/main/ipc-handlers/memory-handlers.ts (1)
74-112: ⚠️ Potential issue | 🟠 Major
Fail fast for unknown models instead of returning guessed dimensions.
The "fallback" path still lets unknown models succeed with a guessed vector size. That can surface later as embedding-dimension mismatches, and this concern was already raised in a prior review.
🛠️ Proposed fix
-function lookupEmbeddingDim(modelName: string): { dim: number; source: 'known' | 'fallback' } | null {
+function lookupEmbeddingDim(modelName: string): { dim: number; source: 'known' } | null {
   const nameLower = modelName.toLowerCase();
   ...
-  // Heuristic fallback based on name patterns.
-  // WARNING: These are guesses and may be incorrect for unknown models.
-  // The 'fallback' source flag allows callers to surface this uncertainty.
-  if (nameLower.includes('large')) {
-    console.warn(`[OllamaEmbedding] Using heuristic dimension guess (1024) for unknown model: ${modelName}`);
-    return { dim: 1024, source: 'fallback' };
-  }
-  if (nameLower.includes('base')) {
-    console.warn(`[OllamaEmbedding] Using heuristic dimension guess (768) for unknown model: ${modelName}`);
-    return { dim: 768, source: 'fallback' };
-  }
-  if (nameLower.includes('small') || nameLower.includes('mini')) {
-    console.warn(`[OllamaEmbedding] Using heuristic dimension guess (384) for unknown model: ${modelName}`);
-    return { dim: 384, source: 'fallback' };
-  }
-  return null;
 }
Also narrow the IPC payload type at Line 959 to remove 'fallback'.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/main/ipc-handlers/memory-handlers.ts` around lines 74 - 112, Update lookupEmbeddingDim to fail fast for unknown models: remove the heuristic "fallback" branches and their console warnings so the function only returns known dimensions (source: 'known') or null; callers should handle null as an unknown/unsupported model. Also update the IPC payload type that currently allows a 'fallback' source (the IPC message type that carries lookupEmbeddingDim results) to exclude 'fallback' so its source can only be 'known' (or the payload allows null/unsupported), and adjust any call sites that assumed fallback to handle null/unsupported appropriately.
apps/backend/agents/custom_agents.py (1)
155-171: ⚠️ Potential issue | 🟡 Minor
Root-level agents are excluded from load_all_agents() but included in load_custom_agent().
load_custom_agent() searches both category subdirectories (lines 139-144) AND the root agents directory (lines 147-149), but load_all_agents() only iterates subdirectories. This means an agent at ~/.claude/agents/my-agent.md can be loaded by ID but won't appear in the full agent list or catalog.
🔧 Suggested fix
 def load_all_agents() -> list[CustomAgentConfig]:
     """Load all custom agents from all categories in ~/.claude/agents/."""
     agents_dir = get_agents_dir()
     if not agents_dir.exists():
         return []
     agents = []
+    # Load root-level agents first
+    for agent_file in sorted(agents_dir.glob("*.md")):
+        if agent_file.name == "README.md":
+            continue
+        agent = parse_agent_file(agent_file)
+        if agent:
+            agents.append(agent)
+
+    # Then load from category subdirectories
     for category_dir in sorted(agents_dir.iterdir()):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/agents/custom_agents.py` around lines 155 - 171, load_all_agents currently only iterates category subdirectories and therefore omits root-level agent files that load_custom_agent can find; update load_all_agents (which uses get_agents_dir and parse_agent_file) to also scan agents_dir.glob("*.md") (skipping README.md) and parse/append those files before or after iterating category_dir files so root-level agents are included in the returned list.
apps/backend/analysis/analyzers/route_detector.py (2)
650-656: ⚠️ Potential issue | 🟠 Major
Minimal API auth detection can miss .RequireAuthorization() when lambdas contain internal semicolons.
Line 651 uses content.find(";", match.end()) to find the statement end, but block lambdas like () => { var a = 1; return a; } contain internal semicolons. This causes the search to stop prematurely, missing chained .RequireAuthorization() calls.
🛠️ Suggested fix
Implement a bracket-aware statement end finder:
def _find_statement_end(self, content: str, start: int, lookahead: int = 1200) -> int:
    """Find statement end, accounting for nested braces/parens in lambdas."""
    end_limit = min(len(content), start + lookahead)
    paren = brace = bracket = 0
    in_str: str | None = None
    i = start
    while i < end_limit:
        ch = content[i]
        prev = content[i - 1] if i > 0 else ""
        if in_str:
            if ch == in_str and prev != "\\":
                in_str = None
        else:
            if ch in {"'", '"'}:
                in_str = ch
            elif ch == "(":
                paren += 1
            elif ch == ")":
                paren = max(0, paren - 1)
            elif ch == "{":
                brace += 1
            elif ch == "}":
                brace = max(0, brace - 1)
            elif ch == ";":
                if paren == 0 and brace == 0 and bracket == 0:
                    return i + 1
        i += 1
    return end_limit
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/analysis/analyzers/route_detector.py` around lines 650 - 656, The current detection in route_detector.py stops at the first semicolon after match (using content.find) which breaks when lambdas contain internal semicolons; replace that logic by implementing a bracket- and string-aware helper (e.g., _find_statement_end) that scans from match.end(), tracks paren/brace/bracket nesting and ignores semicolons inside strings or nested blocks, returns the correct statement end (or a lookahead limit) and then use it to build route_stmt and compute requires_auth (instead of the simple content.find result) so chained .RequireAuthorization(...) is reliably detected.
484-589: ⚠️ Potential issue | 🟠 Major
Controller context is only extracted from the first class, causing route misattribution in multi-controller files.
Line 484 uses re.search() which captures only the first class declaration, but line 521 iterates over all [Http*] attributes in the entire file. This means routes in the second and subsequent controllers will incorrectly use the first controller's route prefix and authorization rules.
🛠️ Suggested approach
Find all class declarations using re.finditer(), compute each class's span, then associate each [Http*] match with the nearest preceding class by comparing positions:
# Find all controller classes with their spans
class_pattern = re.compile(r"class\s+(\w+)")
classes = [(m.start(), m.group(1)) for m in class_pattern.finditer(content)]

# For each Http* match, find which class it belongs to
for match in http_method_pattern.finditer(content):
    # Find the class this method belongs to (largest start pos < match.start())
    owning_class = None
    for class_start, class_name in classes:
        if class_start < match.start():
            owning_class = (class_start, class_name)
    # Use owning_class context for route prefix and auth
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/analysis/analyzers/route_detector.py` around lines 484 - 589, The code currently uses a single class_match (class_match) and shared class_route_prefix/class_authorize while iterating all Http attributes (http_method_pattern), causing later controller methods to inherit the first class's context; fix by finding all classes with re.finditer (e.g., class_pattern = re.compile(r"class\s+(\w+)") and capture their start/end spans and names), then for each http_method_pattern.match determine the owning class by selecting the nearest preceding class start, compute that class's controller_name, class_route_prefix and class_authorize from the owning class's pre-class section (instead of the top-level variables), and then proceed to compute full_path, method_authorize and requires_auth for that specific class before appending to routes (keep usage of _normalize_aspnet_path and existing method_name search logic).
apps/frontend/src/renderer/components/settings/AgentProfileSettings.tsx (1)
386-438: 🧹 Nitpick | 🔵 Trivial
Constrain catalog panel height to avoid viewport overflow.
The expanded agents catalog can become very tall with many categories/agents. Add fixed max-height with internal scrolling to match settings dropdown behavior in this area.
♻️ Proposed fix
  {showAgentsCatalog && (
-   <div className="border-t border-border p-4 space-y-1">
+   <div className="border-t border-border p-4 space-y-1 max-h-60 overflow-y-auto">
      <p className="text-xs text-muted-foreground mb-3">
Based on learnings, "Dropdown components in apps/frontend/src/renderer/components/settings use a fixed max-height (e.g., max-h-60) with internal overflow-y-auto scrolling... If you add new dropdowns in this area, follow the same fixed-height + internal-scroll approach for consistency."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/renderer/components/settings/AgentProfileSettings.tsx` around lines 386 - 438, The agents catalog panel can grow beyond the viewport; wrap the category list (the JSX block rendered when showAgentsCatalog is true — currently the div with className "border-t border-border p-4 space-y-1") or the inner container that maps agentsInfo.categories with a fixed max-height and internal scrolling (e.g., add a utility class like max-h-60 and overflow-y-auto) so the expandedAgentCategories toggle still works but the panel scrolls internally; update the element around agentsInfo.categories mapping in AgentProfileSettings (the section using expandedAgentCategories and setExpandedAgentCategories) to include the max-height + overflow-y-auto classes for consistent dropdown behavior.
apps/backend/analysis/analyzers/database_detector.py (1)
477-488: ⚠️ Potential issue | 🟠 Major
Still exclude entity-reference navigations before the PascalCase fallback.
This is the same gap raised earlier and it still looks unresolved: Customer Customer { get; set; } passes the uppercase-type filter and is emitted as a scalar field. That overstates columns for EF Core models with non-virtual reference navigations.
Proposed fix
 is_nullable = raw_type.endswith("?")
 clean_type = raw_type.rstrip("?")
 if not clean_type:
     continue
+if clean_type in known_entities and not prop_name.endswith(
+    ("Id", "ID")
+):
+    continue
+
 # Only include properties with recognized types or common patterns
 if clean_type not in csharp_types and (
     not clean_type or not clean_type[0].isupper()
 ):
     continue
Read-only verification
#!/bin/bash
python - <<'PY'
from pathlib import Path
import re

excluded = {"bin", "obj", "node_modules", ".git", "TestResults"}
cs_files = [
    p for p in Path(".").rglob("*.cs")
    if not any(part in excluded for part in p.parts)
]

dbset_pattern = re.compile(r"DbSet<(\w+)>\s+(\w+)")
config_pattern = re.compile(r"IEntityTypeConfiguration<(\w+)>")
prop_pattern = re.compile(
    r"public\s+(virtual\s+)?"
    r"([\w<>,?\[\]\s]+?)\s+"
    r"(\w+)\s*\{\s*get;\s*set;\s*\}"
)

known_entities = set()
for path in cs_files:
    try:
        text = path.read_text(encoding="utf-8")
    except (OSError, UnicodeDecodeError):
        continue
    known_entities.update(m.group(1) for m in dbset_pattern.finditer(text))
    known_entities.update(m.group(1) for m in config_pattern.finditer(text))

for path in cs_files:
    try:
        text = path.read_text(encoding="utf-8")
    except (OSError, UnicodeDecodeError):
        continue
    for match in prop_pattern.finditer(text):
        is_virtual = match.group(1) is not None
        clean_type = match.group(2).strip().rstrip("?")
        prop_name = match.group(3)
        if not is_virtual and clean_type in known_entities and not prop_name.endswith(("Id", "ID")):
            line = text.count("\n", 0, match.start()) + 1
            print(f"{path}:{line}: public {clean_type} {prop_name} {{ get; set; }}")
PY
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/analysis/analyzers/database_detector.py` around lines 477 - 488, The parser currently treats uppercase type names like "Customer Customer { get; set; }" as scalar because the PascalCase fallback runs before excluding EF reference navigations; modify the logic around raw_type/is_nullable/clean_type (and before the PascalCase fallback using csharp_types) to skip non-virtual properties whose clean_type matches a discovered EF entity type (use the module's known entity set—e.g., known_entities or entity_type_names populated from DbSet<> and IEntityTypeConfiguration detections) and whose property name does not end with "Id" or "ID"; in short, add a guard that if not is_virtual and clean_type in known_entities and not prop_name.endswith(("Id","ID")) then continue so reference navigations are not emitted as scalar fields.

apps/frontend/src/main/ipc-handlers/github/customer-github-handlers.ts (1)
193-195: ⚠️ Potential issue | 🟡 Minor

Fractional IPC inputs still pass validation.

Line 193 and Line 287 only check for finite positive numbers, so `page = 1.5` and `issueNumber = 7.2` still produce malformed GitHub endpoints. Tighten both paths to `Number.isInteger(...) && > 0` before building the request.

Also applies to: 287-289
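A minimal sketch of the tightened check described above (the `isPositiveInteger` helper name is hypothetical, not part of the handler):

```typescript
// Hypothetical guard for IPC inputs such as `page` and `issueNumber`:
// Number.isFinite alone admits fractional values, so require an integer > 0
// before interpolating the value into a GitHub endpoint path.
function isPositiveInteger(value: unknown): value is number {
  return typeof value === 'number' && Number.isInteger(value) && value > 0;
}

console.log(isPositiveInteger(1));   // true
console.log(isPositiveInteger(1.5)); // false: would pass a bare Number.isFinite check
console.log(isPositiveInteger(-3));  // false
console.log(isPositiveInteger('7')); // false: non-numbers rejected outright
```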
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/frontend/src/main/ipc-handlers/github/customer-github-handlers.ts` around lines 193 - 195, The current validation allows fractional values like page = 1.5 and issueNumber = 7.2; update both checks so they only accept positive integers by replacing the loose finite checks with Number.isInteger(page) && page > 0 for the page validation and Number.isInteger(issueNumber) && issueNumber > 0 for the issueNumber validation (locate the checks that currently read typeof ... Number.isFinite(...) and tighten them in the handler that constructs the GitHub endpoints).

apps/backend/analysis/analyzers/context/env_detector.py (1)
44-45: ⚠️ Potential issue | 🟠 Major

Placeholder entries still block `launchSettings.json` values.

Line 44/45 seeds `.env.example` and docker-compose placeholders before Line 51 reads `launchSettings.json`, and `_parse_launch_settings()` refuses to overwrite existing keys. If both declare something like `ASPNETCORE_ENVIRONMENT`, the detector keeps `value=None` instead of the concrete local value. Parse those placeholder sources later, or let `launchSettings` replace entries whose current value is `None`.

Also applies to: 50-51
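The overwrite-`None` variant of the fix can be sketched as a small merge step (the `merge_env_source` helper is illustrative, not the module's actual function):

```python
def merge_env_source(env_vars: dict, source: dict) -> None:
    """Merge one env source into the accumulated mapping.

    Existing concrete values win, but placeholder entries (value None)
    may be replaced by a later concrete source such as launchSettings.json.
    """
    for key, value in source.items():
        if key not in env_vars:
            env_vars[key] = value
        elif env_vars[key] is None and value is not None:
            env_vars[key] = value


env = {}
merge_env_source(env, {"ASPNETCORE_ENVIRONMENT": None})           # .env.example placeholder
merge_env_source(env, {"ASPNETCORE_ENVIRONMENT": "Development"})  # launchSettings.json
print(env["ASPNETCORE_ENVIRONMENT"])  # Development, not None
```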
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/analysis/analyzers/context/env_detector.py` around lines 44 - 45, The detector seeds placeholders via _parse_env_example(env_vars, required_vars) and _parse_docker_compose(env_vars) before reading launchSettings with _parse_launch_settings(), causing launchSettings values (e.g., ASPNETCORE_ENVIRONMENT) to be blocked when placeholders set value=None; either move the call to _parse_launch_settings() to run before _parse_env_example/_parse_docker_compose or change _parse_launch_settings() to explicitly overwrite existing keys whose current value is None in the env_vars dict; update references to env_vars and required_vars accordingly so launchSettings wins for concrete local values.

apps/backend/analysis/analyzers/framework_analyzer.py (1)
733-743: ⚠️ Potential issue | 🔴 Critical

Resolve solution project paths before `relative_to()`.

`Path.relative_to()` is lexical. At Line 742, `self.path / "../Shared/Shared.csproj"` can still yield `../Shared` instead of being rejected, so projects outside the analyzed root get stored in `dotnet_solution` and can be reused by other analyzers. Resolve `csproj_path` first and compare against `self.path.resolve()`.

Suggested change

```diff
-csproj_path = self.path / project_rel_path
+root = self.path.resolve()
+csproj_path = (self.path / project_rel_path).resolve()
 if not csproj_path.exists():
     continue

 # Get the project directory path relative to solution root.
 # Skip projects that resolve outside the solution directory
 # (e.g., via ".." segments in the .sln reference).
 try:
-    project_dir = str(csproj_path.parent.relative_to(self.path))
+    project_dir = str(csproj_path.relative_to(root).parent)
 except ValueError:
     continue
```

Run this to confirm the containment check. Expected result: `relative_to(root)` prints a path with `..`, while the resolved path is outside the root.

```bash
#!/bin/bash
set -euo pipefail
sed -n '733,743p' apps/backend/analysis/analyzers/framework_analyzer.py
python - <<'PY'
from pathlib import Path
import tempfile

with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    outside = root.parent / "outside-cr"
    outside.mkdir(exist_ok=True)
    csproj = root / ".." / "outside-cr" / "Shared.csproj"
    csproj.touch()
    print("exists:", csproj.exists())
    print("relative_to(root):", csproj.parent.relative_to(root))
    print("resolved inside root:", csproj.resolve().is_relative_to(root.resolve()))
PY
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/analysis/analyzers/framework_analyzer.py` around lines 733 - 743, The code uses csproj_path.parent.relative_to(self.path) which is lexical and can incorrectly accept paths with ".."; change to resolve the csproj path first and perform the containment check against the resolved solution root: compute csproj_resolved = (self.path / project_rel_path).resolve() (or resolve csproj_path then csproj_path.parent.resolve()), compute root_resolved = self.path.resolve(), and then use csproj_resolved.parent.relative_to(root_resolved) (or root_resolved in/is_relative_to) inside the try/except to skip projects that resolve outside the solution; update the references to csproj_path, project_dir and the existing except ValueError branch accordingly so external projects are not added to dotnet_solution.

apps/backend/integrations/graphiti/config.py (1)
243-246: ⚠️ Potential issue | 🟡 Minor

Warn when `GRAPHITI_EPISODE_TTL_DAYS` is non-numeric.

A typo like `GRAPHITI_EPISODE_TTL_DAYS=seven` currently disables TTL cleanup silently by falling back to `0`. Please log the raw invalid value in the `except` path before defaulting.

Suggested change

```diff
-try:
-    episode_ttl_days = int(os.environ.get("GRAPHITI_EPISODE_TTL_DAYS", "0"))
+raw_episode_ttl_days = os.environ.get("GRAPHITI_EPISODE_TTL_DAYS", "0")
+try:
+    episode_ttl_days = int(raw_episode_ttl_days)
 except ValueError:
+    logger.warning(
+        "Invalid GRAPHITI_EPISODE_TTL_DAYS=%r; defaulting episode_ttl_days to 0",
+        raw_episode_ttl_days,
+    )
     episode_ttl_days = 0
```

As per coding guidelines, "Check for proper error handling and security considerations."
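Standalone, the suggested parse-and-log pattern looks roughly like this (the `parse_episode_ttl_days` wrapper is illustrative; the config module assigns the value inline):

```python
import logging
import os

logger = logging.getLogger(__name__)


def parse_episode_ttl_days() -> int:
    """Parse GRAPHITI_EPISODE_TTL_DAYS, logging the raw value on bad input."""
    raw = os.environ.get("GRAPHITI_EPISODE_TTL_DAYS", "0")
    try:
        return int(raw)
    except ValueError:
        # Surface the typo instead of silently disabling TTL cleanup.
        logger.warning(
            "Invalid GRAPHITI_EPISODE_TTL_DAYS=%r; defaulting episode_ttl_days to 0",
            raw,
        )
        return 0


os.environ["GRAPHITI_EPISODE_TTL_DAYS"] = "seven"
print(parse_episode_ttl_days())  # 0, with a warning emitted to the log
```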
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/integrations/graphiti/config.py` around lines 243 - 246, The try/except around parsing GRAPHITI_EPISODE_TTL_DAYS quietly falls back to 0 on ValueError; update the except block for the episode_ttl_days parsing to log the raw invalid environment value (os.environ.get("GRAPHITI_EPISODE_TTL_DAYS")) and the exception before setting episode_ttl_days = 0 so the bad input is visible in logs; reference the episode_ttl_days variable and GRAPHITI_EPISODE_TTL_DAYS env key and use the module's logger (or logging.warning/error) to record the invalid value and the ValueError.

apps/backend/integrations/graphiti/queries_pkg/schema.py (1)
29-32: ⚠️ Potential issue | 🟠 Major

Clamp `GRAPHITI_MAX_RESULTS` to a positive integer.

`0` and negative values still make it through this parser. Downstream `min(num_results, MAX_CONTEXT_RESULTS)` can then pass `0`/negative `num_results` into search and effectively disable context retrieval.

Suggested change

```diff
 try:
-    MAX_CONTEXT_RESULTS = int(os.getenv("GRAPHITI_MAX_RESULTS", "10"))
+    MAX_CONTEXT_RESULTS = max(1, int(os.getenv("GRAPHITI_MAX_RESULTS", "10")))
 except ValueError:
     MAX_CONTEXT_RESULTS = 10
```

As per coding guidelines, "Check for proper error handling and security considerations."
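As a self-contained sketch, the clamp behaves as follows (the `parse_max_results` wrapper is illustrative; the module assigns `MAX_CONTEXT_RESULTS` at import time):

```python
import os


def parse_max_results(default: int = 10) -> int:
    """Read GRAPHITI_MAX_RESULTS, falling back to the default on bad input
    and clamping to at least 1 so context retrieval is never disabled."""
    raw = os.environ.get("GRAPHITI_MAX_RESULTS", str(default))
    try:
        value = int(raw)
    except ValueError:
        return default
    return max(1, value)


os.environ["GRAPHITI_MAX_RESULTS"] = "0"
print(parse_max_results())  # 1, not 0
```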
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/integrations/graphiti/queries_pkg/schema.py` around lines 29 - 32, The env parsing for GRAPHITI_MAX_RESULTS must enforce a positive integer: when reading os.getenv("GRAPHITI_MAX_RESULTS", "10") and converting to int for MAX_CONTEXT_RESULTS, catch ValueError as before but also clamp the parsed value to at least 1 (e.g., MAX_CONTEXT_RESULTS = max(1, parsed_value)) so 0 or negative values cannot propagate; keep the existing default of 10 when parsing fails or env is absent and reference the MAX_CONTEXT_RESULTS symbol in your change.

apps/backend/integrations/graphiti/tests/test_integration_graphiti.py (1)
31-33: ⚠️ Potential issue | 🟠 Major

`integration` + autouse mocks leaves both gaps: no live coverage and skipped CI.

Every test in this module replaces `graphiti_core`, so the suite is unit-level. With `pytest.mark.integration`, the default CI selection can skip it entirely while the real LadybugDB/Graphiti path still goes untested.

Also applies to: 40-95
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/integrations/graphiti/tests/test_integration_graphiti.py` around lines 31 - 33, The tests are marked as integration via the module-level pytestmark but they autouse-mock graphiti_core, making them unit tests; remove or replace the module-level pytestmark = [pytest.mark.integration] with a more accurate marker (e.g., pytest.mark.unit or no marker) so CI doesn't skip live coverage, and/or adjust the autouse fixture that mocks graphiti_core (the autouse mock definition referring to graphiti_core) so that tests either actually run integration against LadybugDB/Graphiti or are correctly labeled as unit tests; update the marker declaration and/or the autouse mock fixture to keep test intent and CI selection consistent.

apps/backend/integrations/graphiti/tests/test_integration_ollama.py (1)
31-36: ⚠️ Potential issue | 🟠 Major

Don't keep fully mocked tests under `pytest.mark.integration`.

With the current autouse mocks, this module never touches live Ollama/Graphiti behavior. Keeping the marker means the default `-m "not integration"` CI run skips unit-level coverage entirely while still not providing real integration coverage.

Also applies to: 43-93
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/integrations/graphiti/tests/test_integration_ollama.py` around lines 31 - 36, The module-level pytest marker pytestmark = [ pytest.mark.integration ] incorrectly labels fully mocked unit tests as integration tests; remove or change that marker so CI doesn't skip these tests (e.g., delete the pytestmark definition or replace it with pytest.mark.unit), and ensure any truly integration tests that require a live Ollama/Graphiti server are moved to a separate file or given the correct pytest.mark.integration marker (also update the same marker usage in the other block spanning the tests referenced at 43-93).

apps/backend/integrations/graphiti/queries_pkg/search.py (1)
64-79: ⚠️ Potential issue | 🟠 Major

Verify the `GraphitiConfig` import here and only mark validation complete after success.

This block imports `graphiti_config`, while the config class is exposed as `integrations.graphiti.config` elsewhere in this PR. If that alias does not exist, Line 66 flips `_dimension_validated` before the `ImportError`, the exception is swallowed, and every later search skips the validator permanently.

Run this read-only check to confirm the module path:

```bash
#!/bin/bash
set -euo pipefail
fd -HI 'graphiti_config\.py$'
rg -n --type=py 'class GraphitiConfig|from graphiti_config import GraphitiConfig|from integrations\.graphiti\.config import GraphitiConfig|from \.\.?config import GraphitiConfig'
```

Expected result: either a real `graphiti_config.py` alias exists, or `GraphitiConfig` is only defined/imported under `apps/backend/integrations/graphiti/config.py` / `integrations.graphiti.config`, in which case this import should be updated and `_dimension_validated` should only be set after the lookup succeeds.

As per coding guidelines, "Check for proper error handling and security considerations."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/integrations/graphiti/queries_pkg/search.py` around lines 64 - 79, The code prematurely sets GraphitiSearch._dimension_validated = True before the import/validation can succeed and uses the wrong module path; move the assignment so it only runs after a successful validation, and import GraphitiConfig from the canonical integrations.graphiti.config module (i.e. replace from graphiti_config import GraphitiConfig with from integrations.graphiti.config import GraphitiConfig or the correct package alias), wrap the import and config.get_embedding_dimension() call in the try/except so only on success you set GraphitiSearch._dimension_validated = True, and keep the exception handling but ensure ImportError/AttributeError are logged at debug and do not flip the validated flag.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: d252083d-ee65-4847-af54-ba54ead4f589
📒 Files selected for processing (37)

- apps/backend/agents/custom_agents.py
- apps/backend/analysis/analyzers/context/env_detector.py
- apps/backend/analysis/analyzers/database_detector.py
- apps/backend/analysis/analyzers/framework_analyzer.py
- apps/backend/analysis/analyzers/route_detector.py
- apps/backend/core/client.py
- apps/backend/integrations/graphiti/config.py
- apps/backend/integrations/graphiti/queries_pkg/schema.py
- apps/backend/integrations/graphiti/queries_pkg/search.py
- apps/backend/integrations/graphiti/tests/test_episode_types.py
- apps/backend/integrations/graphiti/tests/test_integration_graphiti.py
- apps/backend/integrations/graphiti/tests/test_integration_ollama.py
- apps/backend/phase_config.py
- apps/frontend/src/main/claude-code-settings/reader.ts
- apps/frontend/src/main/claude-code-settings/types.ts
- apps/frontend/src/main/ipc-handlers/claude-agents-handlers.ts
- apps/frontend/src/main/ipc-handlers/context/project-context-handlers.ts
- apps/frontend/src/main/ipc-handlers/github/customer-github-handlers.ts
- apps/frontend/src/main/ipc-handlers/memory-handlers.ts
- apps/frontend/src/main/ipc-handlers/project-handlers.ts
- apps/frontend/src/main/ipc-handlers/settings-handlers.ts
- apps/frontend/src/renderer/components/AddCustomerModal.tsx
- apps/frontend/src/renderer/components/AgentTools.tsx
- apps/frontend/src/renderer/components/CustomerReposModal.tsx
- apps/frontend/src/renderer/components/EnvConfigModal.tsx
- apps/frontend/src/renderer/components/GitHubIssues.tsx
- apps/frontend/src/renderer/components/Sidebar.tsx
- apps/frontend/src/renderer/components/SortableProjectTab.tsx
- apps/frontend/src/renderer/components/context/ProjectIndexTab.tsx
- apps/frontend/src/renderer/components/github-issues/components/IssueListItem.tsx
- apps/frontend/src/renderer/components/github-issues/hooks/useMultiRepoGitHubIssues.ts
- apps/frontend/src/renderer/components/github-prs/GitHubPRs.tsx
- apps/frontend/src/renderer/components/github-prs/hooks/useMultiRepoGitHubPRs.ts
- apps/frontend/src/renderer/components/settings/AgentProfileSettings.tsx
- apps/frontend/src/shared/i18n/locales/en/navigation.json
- apps/frontend/src/shared/i18n/locales/fr/navigation.json
- apps/frontend/src/shared/types/ipc.ts
```typescript
// Get filtered issues based on selected repo
// Note: state filtering is already done by the API via the `state` parameter
const getFilteredIssues = useCallback((): GitHubIssue[] => {
  const { issues, selectedRepo } = state;

  // Filter by repo
  if (selectedRepo !== 'all') {
    return issues.filter(issue => issue.repoFullName === selectedRepo);
  }

  return issues;
}, [state]);

const getOpenIssuesCount = useCallback((): number => {
  const { issues, selectedRepo } = state;
  let filtered = issues.filter(issue => issue.state === 'open');
  if (selectedRepo !== 'all') {
    filtered = filtered.filter(issue => issue.repoFullName === selectedRepo);
  }
  return filtered.length;
}, [state]);
```
🧹 Nitpick | 🔵 Trivial
Overly broad useCallback dependencies cause unnecessary re-creations.
Both getFilteredIssues and getOpenIssuesCount depend on the entire state object. This means they are recreated on every state change (including isLoading, error, etc.), defeating memoization benefits.
♻️ Suggested narrower dependencies
```diff
 const getFilteredIssues = useCallback((): GitHubIssue[] => {
-  const { issues, selectedRepo } = state;
-
   // Filter by repo
-  if (selectedRepo !== 'all') {
-    return issues.filter(issue => issue.repoFullName === selectedRepo);
+  if (state.selectedRepo !== 'all') {
+    return state.issues.filter(issue => issue.repoFullName === state.selectedRepo);
   }
-
-  return issues;
-}, [state]);
+  return state.issues;
+}, [state.issues, state.selectedRepo]);

 const getOpenIssuesCount = useCallback((): number => {
-  const { issues, selectedRepo } = state;
-  let filtered = issues.filter(issue => issue.state === 'open');
-  if (selectedRepo !== 'all') {
-    filtered = filtered.filter(issue => issue.repoFullName === selectedRepo);
+  let filtered = state.issues.filter(issue => issue.state === 'open');
+  if (state.selectedRepo !== 'all') {
+    filtered = filtered.filter(issue => issue.repoFullName === state.selectedRepo);
   }
   return filtered.length;
-}, [state]);
+}, [state.issues, state.selectedRepo]);
```

🤖 Prompt for AI Agents
+ }, [state.issues, state.selectedRepo]);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/frontend/src/renderer/components/github-issues/hooks/useMultiRepoGitHubIssues.ts`
around lines 195 - 215, getFilteredIssues and getOpenIssuesCount are recreating
too often because their useCallback deps reference the whole state object;
narrow the dependencies to only the specific pieces used (state.issues and
state.selectedRepo) by replacing [state] with [state.issues, state.selectedRepo]
(or their local destructured equivalents) so getFilteredIssues and
getOpenIssuesCount only re-run when issues or selectedRepo change; update the
useCallback declarations for getFilteredIssues and getOpenIssuesCount
accordingly.
```typescript
useEffect(() => {
  if (!isCustomer || !resolvedChildProjectId || !multiRepoSelectedNumber) return;
  if (prs.length > 0) {
    const prExists = prs.some(pr => pr.number === multiRepoSelectedNumber);
    if (prExists && selectedPRNumber !== multiRepoSelectedNumber) {
      selectPR(multiRepoSelectedNumber);
    }
  }
}, [isCustomer, resolvedChildProjectId, multiRepoSelectedNumber, prs, selectedPRNumber, selectPR]);
```
🧹 Nitpick | 🔵 Trivial
Effect runs on every prs array change.
This effect syncs selection from multi-repo to single-repo context. Including prs in the dependency array means it re-runs whenever the PR list updates (e.g., after refresh), potentially causing redundant selectPR calls when selectedPRNumber already equals multiRepoSelectedNumber.
The current guard selectedPRNumber !== multiRepoSelectedNumber prevents unnecessary state updates, so this is functionally correct but slightly inefficient.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/frontend/src/renderer/components/github-prs/GitHubPRs.tsx` around lines
396 - 404, The effect unnecessarily re-runs on any change to the prs array
reference; change the dependency to prs.length instead of prs so the effect only
re-runs when the list size changes (not when item identities change), and keep
the existing guards (isCustomer, resolvedChildProjectId,
multiRepoSelectedNumber, selectedPRNumber) and the
selectPR(multiRepoSelectedNumber) call in place; update the useEffect dependency
array reference to prs.length and retain selectPR, selectedPRNumber,
multiRepoSelectedNumber, isCustomer, and resolvedChildProjectId to locate the
correct code paths.
apps/frontend/src/renderer/components/github-prs/hooks/useMultiRepoGitHubPRs.ts
```typescript
const handleRefresh = useCallback(() => {
  if (!customerId) return;

  const refresh = async () => {
    setState(prev => ({ ...prev, isLoading: true, error: null }));

    try {
      const connResult = await window.electronAPI.github.checkMultiRepoConnection(customerId);
      if (connResult.success && connResult.data) {
        const connData = connResult.data;
        setState(prev => ({
          ...prev,
          syncStatus: connData,
          repos: connData.repos.map(r => r.repoFullName),
        }));
      }

      const result = await window.electronAPI.github.getMultiRepoPRs(customerId);

      if (result.success && result.data) {
        const data = result.data;
        setState(prev => ({
          ...prev,
          prs: data.prs,
          repos: data.repos.length > 0 ? data.repos : prev.repos,
          isLoading: false,
        }));
      } else {
        setState(prev => ({
          ...prev,
          error: result.error || t('prReview.multiRepo.failedToRefreshPRs'),
          isLoading: false,
        }));
      }
    } catch (error) {
      setState(prev => ({
        ...prev,
        error: error instanceof Error ? error.message : t('prReview.multiRepo.unknownError'),
        isLoading: false,
      }));
    }
  };

  refresh();
}, [customerId, t]);
```
🧹 Nitpick | 🔵 Trivial
Consider adding cancellation logic to handleRefresh.
Unlike the useEffect hooks, handleRefresh doesn't track whether the request is still relevant. If a user triggers refresh and then quickly switches customers, stale results could be applied to the new customer context.
♻️ Proposed fix with ref-based cancellation
```diff
+import { useRef } from 'react';
+
 export function useMultiRepoGitHubPRs(customerId: string | undefined) {
   const { t } = useTranslation('common');
+  const refreshIdRef = useRef(0);
   // ... existing state ...

   const handleRefresh = useCallback(() => {
     if (!customerId) return;
+    const thisRefreshId = ++refreshIdRef.current;

     const refresh = async () => {
       setState(prev => ({ ...prev, isLoading: true, error: null }));

       try {
         const connResult = await window.electronAPI.github.checkMultiRepoConnection(customerId);
+        if (refreshIdRef.current !== thisRefreshId) return;
         if (connResult.success && connResult.data) {
           // ... existing logic ...
         }

         const result = await window.electronAPI.github.getMultiRepoPRs(customerId);
+        if (refreshIdRef.current !== thisRefreshId) return;
         // ... existing logic ...
       } catch (error) {
+        if (refreshIdRef.current !== thisRefreshId) return;
         // ... existing error handling ...
       }
     };

     refresh();
   }, [customerId, t]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/frontend/src/renderer/components/github-prs/hooks/useMultiRepoGitHubPRs.ts`
around lines 140 - 184, handleRefresh can apply stale results if a new customer
is selected before async calls complete; add cancellation logic by introducing a
ref-based request token (e.g., refreshRequestIdRef) or an AbortController stored
in a ref and increment/replace it each time handleRefresh is called, capture the
current token inside the inner async refresh, and before any setState or after
awaiting window.electronAPI.github.checkMultiRepoConnection / getMultiRepoPRs
verify the token still matches (or that the signal is not aborted); ensure you
abort/invalidates the previous token when starting a new refresh and also clean
up on unmount so setState in handleRefresh and refresh only runs for the latest
valid request.
```typescript
// Get filtered PRs based on selected repo
const filteredPRs = useMemo((): MultiRepoPRData[] => {
  const { prs, selectedRepo } = state;
  if (selectedRepo === 'all') return prs;
  return prs.filter(pr => pr.repoFullName === selectedRepo);
}, [state.prs, state.selectedRepo]);
```
🧹 Nitpick | 🔵 Trivial
Minor: Simplify useMemo dependencies.
The destructuring inside the memo accesses state.prs and state.selectedRepo, but the dependency array already lists these explicitly. This works correctly but could be simplified for consistency.
♻️ Simplified version
```diff
 const filteredPRs = useMemo((): MultiRepoPRData[] => {
-  const { prs, selectedRepo } = state;
-  if (selectedRepo === 'all') return prs;
-  return prs.filter(pr => pr.repoFullName === selectedRepo);
+  if (state.selectedRepo === 'all') return state.prs;
+  return state.prs.filter(pr => pr.repoFullName === state.selectedRepo);
 }, [state.prs, state.selectedRepo]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/frontend/src/renderer/components/github-prs/hooks/useMultiRepoGitHubPRs.ts`
around lines 186 - 191, The useMemo for filteredPRs destructures state but lists
state.prs and state.selectedRepo in its dependency array; simplify by removing
the destructuring and reference state.prs and state.selectedRepo directly inside
the memo (or alternatively use const { prs, selectedRepo } = state and then
adjust the dependency array to [prs, selectedRepo]) so the dependencies match
the referenced identifiers; update the filteredPRs calculation (function name:
filteredPRs, hook: useMemo, types: MultiRepoPRData[], state properties: prs and
selectedRepo) accordingly for consistency.
- Add env field to GlobalMcpServerEntry.config and CustomMcpServer types - Parse and pass env from ~/.claude.json MCP server configs - Filter disabled MCPs (disabled: true) from the global MCP list - Pass env when converting GlobalMcpServerEntry to CustomMcpServer for health checks Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```python
body_for_check = route_body
if children_match:
    # Remove the children block from the body for target detection
    children_rel_start = children_match.start()
    children_rel_end = (
        children_match.end()
        - children_match.start()
        + (len(children_content) if children_content else 0)
```
This comment was marked as outdated.
… items) - Fix critical UnboundLocalError: global_mcp_added used before definition in security_settings dict (client.py). Pre-load global MCP servers and update settings file after population. - Add comment to empty except clause in custom_agents.py YAML fallback - Remove unused filePath variable in claude-agents-handlers.ts - Fix unused _mock_driver in test_integration_graphiti.py - Remove redundant "r" mode argument in open() call (Ruff UP015) - Guard parseIssueId/parsePRId against NaN from malformed input - Add aria-labels to Lock/Globe repo visibility icons (a11y) - Add i18n keys for private/public repo labels (en + fr) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Tests were failing because the real CLAUDE_CONFIG_DIR env var leaked into test fixtures, causing token resolution to find real credentials instead of using mocked values. Updated clear_env/clear_auth_env fixtures to also save/remove/restore CLAUDE_CONFIG_DIR. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace fs.access() + fs.readFile() with a single fs.readFile() call, handling ENOENT in the catch block. This removes the file system race condition flagged by CodeQL (file could change between check and read). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
```typescript
additions: pr.additions ?? 0,
deletions: pr.deletions ?? 0,
changedFiles: pr.changed_files ?? 0,
```
Bug: The multi-repo PR handler uses a REST endpoint that doesn't provide diff stats, causing pr.additions and pr.deletions to always be 0 in the UI.
Severity: MEDIUM
Suggested Fix
To resolve this, either switch to using the GraphQL API for multi-repo PRs, which includes diff stats in the list response, or make additional REST API calls to the individual PR detail endpoint for each pull request to fetch the missing statistics. A third option is to remove the display of these fields from the multi-repo view if the data is not essential.
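If the per-PR detail calls are made, the enrichment reduces to a pure merge step. This is a hedged sketch: the snake_case field names follow the GitHub REST detail payload, but `mergeDiffStats` and the surrounding types are hypothetical, and the fetch layer is omitted entirely:

```typescript
// Shape of a PR from the list endpoint (no diff stats)...
interface ListPR { number: number; title: string; }
// ...and from the detail endpoint, which does include the stats.
interface DetailPR { number: number; additions: number; deletions: number; changed_files: number; }

// Hypothetical merge: enrich list results with per-PR detail stats,
// falling back to 0 only when no detail response is available.
function mergeDiffStats(listPrs: ListPR[], details: Map<number, DetailPR>) {
  return listPrs.map(pr => {
    const detail = details.get(pr.number);
    return {
      ...pr,
      additions: detail?.additions ?? 0,
      deletions: detail?.deletions ?? 0,
      changedFiles: detail?.changed_files ?? 0,
    };
  });
}

const details = new Map<number, DetailPR>([
  [7, { number: 7, additions: 12, deletions: 3, changed_files: 2 }],
]);
const merged = mergeDiffStats([{ number: 7, title: 'Fix env detector' }], details);
console.log(merged[0].additions, merged[0].deletions); // 12 3
```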
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.
Location:
apps/frontend/src/main/ipc-handlers/github/customer-github-handlers.ts#L378-L380
Potential issue: The code fetches pull requests for multiple repositories using the
GitHub REST API list endpoint. This endpoint does not include the `additions`,
`deletions`, or `changed_files` fields in its response. Consequently, when the code at
lines 378-380 accesses `pr.additions`, `pr.deletions`, and `pr.changed_files`, these
values are `undefined` and default to `0` due to the nullish coalescing operator. This
results in the UI for the multi-repo view always displaying `+0 -0` for diff statistics
and never showing the diff stat badge, which is a functional regression.
- fast_mode.py: use CLAUDE_CONFIG_DIR env var for settings path resolution instead of always defaulting to ~/.claude - reader.ts: export getUserConfigDir for reuse by other modules Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…itecture Port viable frontend changes from PR AndyMik90#1920 to the current apps/desktop/ structure after upstream restructured apps/frontend/ → apps/desktop/ and migrated backend from Python to TypeScript (Vercel AI SDK v6). New features ported: - Custom Agents: IPC handler to read ~/.claude/agents/ definitions, specialist agents catalog in AgentProfileSettings - Global MCP: IPC handler for Claude Code MCP config, health checks, pipeline phase assignment UI in AgentTools - Customer Projects: AddCustomerModal, CustomerReposModal, multi-repo GitHub hooks (issues + PRs), RepoFilterDropdown, Sidebar integration - Memory: Ollama embedding dimension lookup with 30+ model mappings, index progress tracking in context store - i18n: New context namespace, 100+ translation keys (en + fr) for settings, dialogs, navigation, tasks Also fixes code scanning issues from original PR: - Export getUserConfigDir from claude-code-settings/reader - Export transformIssue from github/issue-handlers - Add type annotations to eliminate implicit any errors Supersedes AndyMik90#1920 (closed due to unmergeable conflicts) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Superseded by #1955 — ported viable changes to the current apps/desktop/ architecture.
Summary
Consolidated PR combining three previously separate PRs (#1908, #1918, #1919) into a single cohesive change:
Graphiti Memory System Improvements
- `max_results` config, score-based filtering, deduplication
- `episode_ttl_days` config for automatic episode cleanup
- `resolveEmbeddingDim()` IPC helper

Custom Agents Integration

- `.md` agent files from `~/.claude/agents/` with YAML frontmatter support
- `get_phase_custom_agent()` resolves agent per phase from task metadata
- `create_client()` for planner, coder, and QA phases
- `task_metadata.json` via TaskCreationWizard and TaskEditDialog

Customer Module (already on develop)
Supersedes
Test plan
`npx tsc --noEmit`)

🤖 Generated with Claude Code
Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com
Summary by CodeRabbit
New Features
Improvements