
feat: add monorepo structure with core package, Electron shell, and w… (#616)

Open
ABAPPLO wants to merge 1 commit into breaking-brake:main from ABAPPLO:feat/monorepo-electron

Conversation


ABAPPLO commented Feb 28, 2026

…ebview bridge

  • Create packages/core with platform-agnostic interfaces (IFileSystem, IMessageTransport, IWorkflowProvider, ILogger, IDialogService)
  • Move shared types and validation utils to @cc-wf-studio/core
  • Refactor McpServerManager to support both UI mode (transport) and headless mode (workflow provider)
  • Add platform-agnostic FileService, NodeFileSystem, ConsoleLogger, and FileSystemWorkflowProvider
  • Create webview bridge abstraction (IHostBridge) with VSCode and Electron adapters
  • Refactor main.tsx to auto-detect VSCode/Electron/Dev environment
  • Add Electron desktop app shell with IPC handlers, theme variables, and preload script
  • Add standalone MCP server CLI entry point (cc-wf-mcp-server)
  • Create VSCode adapters (VSCodeFileSystem, VSCodeMessageTransport, VSCodeLogger)
  • Add npm workspaces configuration and tsconfig.base.json
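
The `IHostBridge` abstraction at the center of these changes might look roughly like the following sketch. Only the names `IHostBridge`, `postMessage`, `getState`, and the Electron/VSCode adapters appear in the PR; the exact signatures and the `createDevBridge` helper here are assumptions for illustration.

```typescript
// Message envelope shape inferred from the PR's IPC handlers
// (type, requestId, payload); exact fields are assumptions.
interface HostMessage {
  type: string;
  requestId?: string;
  payload?: unknown;
}

// Hypothetical sketch of the IHostBridge contract.
interface IHostBridge {
  postMessage(message: HostMessage): void;
  onMessage(handler: (message: HostMessage) => void): () => void;
  getState<T>(): T | undefined;
  setState<T>(state: T): void;
}

// Minimal in-memory implementation for the dev environment: echoes
// each posted message back with an _ACK suffix so the UI message loop
// can be exercised without a real VSCode or Electron host.
function createDevBridge(): IHostBridge {
  const handlers = new Set<(m: HostMessage) => void>();
  let state: unknown;
  return {
    postMessage(message: HostMessage): void {
      for (const h of handlers) {
        h({ type: `${message.type}_ACK`, payload: message.payload });
      }
    },
    onMessage(handler: (m: HostMessage) => void): () => void {
      handlers.add(handler);
      return () => { handlers.delete(handler); };
    },
    getState<T>(): T | undefined {
      return state as T | undefined;
    },
    setState<T>(s: T): void {
      state = s;
    },
  };
}
```

The Electron adapter would back `getState`/`setState` with localStorage and the VSCode adapter with the native webview API, per the bridge-layer description below.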

Summary by CodeRabbit

  • New Features

    • Added multi-platform support (VSCode extension, Electron app, and web)
    • Added internationalization with support for English, Japanese, Korean, and Chinese
    • Introduced headless MCP server mode for CLI environments
    • Added workflow validation and migration utilities
    • Integrated Slack workflow sharing and import capabilities with sensitive data detection
    • Added schema loading support with multiple format options
  • Chores

    • Restructured project with workspace configuration and layered build process


coderabbitai bot commented Feb 28, 2026

📝 Walkthrough

This PR introduces a monorepo architecture with a new @cc-wf-studio/core package containing shared interfaces, types, services, and adapters. It adds Electron and VSCode desktop platforms, establishes a platform-agnostic webview bridge layer, and introduces headless CLI and MCP server capabilities. It also includes comprehensive workflow validation, i18n support, and multiple adapter implementations for file systems, dialogs, and message transport.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Workspace & Build Configuration**<br>`.gitignore`, `package.json`, `tsconfig.base.json` | Monorepo setup with workspace configuration, private flag, layered build scripts (`build:core`, `build:vscode`, `build:electron`), and base TypeScript strict-mode config. |
| **Core Package Setup**<br>`packages/core/package.json`, `packages/core/tsconfig.json` | Package manifest for `@cc-wf-studio/core` with entry points, binary CLI, MCP/TOON/spawn dependencies, build/clean scripts, and TypeScript config. |
| **Core Interfaces**<br>`packages/core/src/interfaces/*.ts` | Interface contracts for `ILogger`, `IFileSystem`, `IDialogService`, `IMessageTransport`, and `IWorkflowProvider` enabling platform-agnostic abstraction. |
| **Core Types & Definitions**<br>`packages/core/src/types/workflow-definition.ts`, `packages/core/src/types/messages.ts`, `packages/core/src/types/mcp-node.ts`, `packages/core/src/types/slack-integration-types.ts`, `packages/core/src/types/ai-metrics.ts` | Comprehensive TypeScript contracts for workflows, nodes (Skill, MCP, SubAgentFlow, Codex, etc.), the messaging protocol, MCP configuration modes, Slack integration metadata, and AI generation metrics. |
| **Core Adapters**<br>`packages/core/src/adapters/console-logger.ts`, `packages/core/src/adapters/node-file-system.ts` | Console-based logger and Node.js file-system adapter implementing the core interfaces for headless/CLI modes. |
| **Core Services**<br>`packages/core/src/services/file-service.ts`, `packages/core/src/services/fs-workflow-provider.ts`, `packages/core/src/services/logger-holder.ts`, `packages/core/src/services/mcp-server-service.ts`, `packages/core/src/services/mcp-server-tools.ts`, `packages/core/src/services/schema-loader-service.ts` | File abstraction layer, filesystem-backed workflow provider, centralized logger facade, HTTP-based MCP server manager with UI/headless modes, MCP tool registration, and schema loading with caching. |
| **Core i18n System**<br>`packages/core/src/i18n/i18n-service.ts`, `packages/core/src/i18n/translation-keys.ts`, `packages/core/src/i18n/translations/*.ts` | Translation type definitions and implementations for English, Japanese, Korean, Simplified Chinese, and Traditional Chinese with locale detection and fallback logic. |
| **Core CLI & Entry Point**<br>`packages/core/src/cli/mcp-server-cli.ts`, `packages/core/src/index.ts` | Node.js headless MCP server CLI with graceful shutdown and a comprehensive barrel export of the public API surface. |
| **Core Utilities**<br>`packages/core/src/utils/migrate-workflow.ts`, `packages/core/src/utils/path-utils.ts`, `packages/core/src/utils/schema-parser.ts`, `packages/core/src/utils/validate-workflow.ts`, `packages/core/src/utils/workflow-validator.ts`, `packages/core/src/utils/sensitive-data-detector.ts`, `packages/core/src/utils/slack-error-handler.ts`, `packages/core/src/utils/slack-message-builder.ts` | Workflow migration/validation framework with node normalization, extensive path helpers (30+ functions) for tool-specific directories, JSON schema parsing with parameter validation, sensitive-data detection with masking, Slack error mapping, and message block building. |
| **Electron Package Setup & Adapters**<br>`packages/electron/package.json`, `packages/electron/tsconfig.main.json`, `packages/electron/tsconfig.preload.json`, `packages/electron/renderer/index.html`, `packages/electron/src/main/adapters/electron-dialog-service.ts`, `packages/electron/src/main/adapters/electron-message-transport.ts` | Electron package manifest with build/start/clean scripts, separate main/preload TypeScript configs, HTML entry point, and Electron-native dialog and IPC message transport adapters. |
| **Electron Main Process**<br>`packages/electron/src/main/main.ts`, `packages/electron/src/main/preload.ts`, `packages/electron/src/main/ipc/ipc-handlers.ts` | Main-process entry point with window lifecycle, theme injection, and IPC setup; preload script exposing a safe `electronAPI`; IPC router for WEBVIEW_READY, SAVE_WORKFLOW, LOAD_WORKFLOW_LIST, EXPORT_WORKFLOW, and MCP responses with error handling. |
| **VSCode Adapters**<br>`packages/vscode/src/adapters/vscode-file-system.ts`, `packages/vscode/src/adapters/vscode-logger.ts`, `packages/vscode/src/adapters/vscode-message-transport.ts` | VSCode-native file system, logger (via OutputChannel), and message transport adapters implementing the core interfaces. |
| **Webview Bridge Layer**<br>`src/webview/src/services/bridge.ts`, `src/webview/src/services/electron-bridge-adapter.ts`, `src/webview/src/services/vscode-bridge-adapter.ts`, `src/webview/src/main.tsx` | Platform-agnostic `IHostBridge` abstraction with factory functions for Electron (localStorage state) and VSCode (native API), integrated into `main.tsx` with bridge-selection logic. |
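
The bridge-selection logic in main.tsx could be sketched as below. The globals checked (`acquireVsCodeApi`, `window.electronAPI`) are the conventional markers for each host; whether the PR detects them in exactly this way is an assumption.

```typescript
type HostEnvironment = 'vscode' | 'electron' | 'dev';

// Sketch: pick the host environment from the globals the webview can see.
// acquireVsCodeApi is injected by VSCode webviews; electronAPI would be
// exposed by the PR's preload script. Falling back to 'dev' matches the
// described VSCode/Electron/Dev auto-detection.
function detectEnvironment(w: {
  acquireVsCodeApi?: unknown;
  electronAPI?: unknown;
}): HostEnvironment {
  if (typeof w.acquireVsCodeApi === 'function') return 'vscode';
  if (w.electronAPI !== undefined) return 'electron';
  return 'dev';
}
```

In the real `main.tsx`, the detected environment would decide which bridge factory (`createVSCodeBridge`, `createElectronBridge`, or a dev fallback) is passed to `setBridge`.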

Sequence Diagrams

sequenceDiagram
    participant User as User/CLI
    participant CLI as mcp-server-cli
    participant FS as NodeFileSystem
    participant FSProvider as FileSystemWorkflowProvider
    participant MCP as McpServerManager
    participant Tools as MCP Tools
    participant Client as External Client

    User->>CLI: Run CLI with workflow.json path
    CLI->>FS: readFile(workflow.json)
    FS-->>CLI: Workflow content
    CLI->>FSProvider: Initialize with workflow path
    CLI->>MCP: Create & start server
    MCP->>MCP: Create HTTP server on localhost:0
    MCP-->>CLI: Server running on port N
    CLI-->>User: MCP Server URL printed

    Client->>MCP: GET /tools
    MCP->>Tools: register_mcp_tools
    Tools->>MCP: Tool definitions registered
    MCP-->>Client: Tool list

    Client->>MCP: POST /messages (get_current_workflow)
    MCP->>FSProvider: getCurrentWorkflow()
    FSProvider->>FS: readFile(workflow.json)
    FS-->>FSProvider: Parsed workflow
    FSProvider-->>MCP: { workflow, isStale: false }
    MCP-->>Client: Workflow response

    Client->>MCP: POST /messages (apply_workflow)
    MCP->>Tools: validate & apply workflow
    Tools->>FSProvider: applyWorkflow(newWorkflow)
    FSProvider->>FS: writeFile(workflow.json, JSON)
    FS-->>FSProvider: Written
    FSProvider-->>MCP: Success
    MCP-->>Client: Applied response
sequenceDiagram
    participant Renderer as Renderer Process
    participant Preload as Preload Script
    participant Main as Main Process
    participant FileService as FileService
    participant IPC as ipcMain Handler
    participant MCP as McpServerManager
    participant FS as NodeFileSystem

    Renderer->>Preload: window.electronAPI.send('webview-message', {...})
    Preload->>Main: ipcRenderer.send('webview-message', msg)
    Main->>IPC: setupIpcHandlers receives message
    
    alt WEBVIEW_READY
        IPC->>IPC: Build INITIAL_STATE
        IPC-->>Renderer: reply(INITIAL_STATE)
    else SAVE_WORKFLOW
        IPC->>FileService: Initialize with workspace
        FileService->>FS: ensureWorkflowsDirectory()
        FS-->>FileService: Directory ready
        FileService->>FS: writeFile(workflow.json, content)
        FS-->>FileService: Written
        IPC-->>Renderer: reply(SAVE_SUCCESS)
    else LOAD_WORKFLOW_LIST
        FileService->>FS: listWorkflowFiles()
        FS-->>FileService: [workflow1.json, workflow2.json...]
        FileService-->>IPC: File list
        IPC-->>Renderer: reply(WORKFLOW_LIST)
    else GET_CURRENT_WORKFLOW_RESPONSE
        IPC->>MCP: handleWorkflowResponse(payload)
        MCP->>MCP: Resolve pending request
        MCP-->>IPC: Request resolved
    end
sequenceDiagram
    participant Webview as Webview/React
    participant Bridge as IHostBridge
    participant ElectronAdapter as Electron Adapter
    participant ElectronAPI as window.electronAPI
    participant MainProcess as Main Process

    Webview->>Bridge: setBridge(createElectronBridge())
    ElectronAdapter->>ElectronAPI: Check window.electronAPI exists
    ElectronAdapter-->>Bridge: Bridge ready

    Webview->>Bridge: postMessage({ type: 'SAVE_WORKFLOW' })
    Bridge->>ElectronAdapter: postMessage(msg)
    ElectronAdapter->>ElectronAPI: send('webview-message', msg)
    ElectronAPI->>MainProcess: IPC send

    MainProcess-->>ElectronAPI: send('host-message', response)
    ElectronAPI->>ElectronAdapter: onMessage callback fired
    ElectronAdapter->>Bridge: handler(event)
    Bridge->>Webview: Dispatch to handlers

    Webview->>Bridge: getState()
    ElectronAdapter->>ElectronAdapter: localStorage.getItem('app-state')
    ElectronAdapter-->>Webview: Parsed state object

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~110 minutes

Poem

🐰 A new architecture hops into view,
Core packages and bridges, shiny and new!
From Electron to VSCode, the paths align,
Workflows now flutter through types so fine.
With validation and i18n in place,
The studio finds its monorepo grace!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 59.55%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title 'feat: add monorepo structure with core package, Electron shell, and w…' accurately captures the main objective of adding a monorepo layout with a core package and Electron shell, which aligns with the PR's primary changes. |


coderabbitai bot left a comment

Actionable comments posted: 3

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

🟠 Major comments (22)
packages/electron/renderer/index.html-6-6 (1)

6-6: ⚠️ Potential issue | 🟠 Major

Harden renderer CSP to align with Electron security best practices.

The current policy with 'unsafe-inline' in script-src weakens XSS protection. Electron's official security guidance recommends restricting CSP to script-src 'self' and avoiding 'unsafe-inline'. Since the page loads only external module scripts, this change is safe and removes unnecessary attack surface.

🔒 Suggested CSP hardening
-  <meta http-equiv="Content-Security-Policy" content="default-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline';" />
+  <meta http-equiv="Content-Security-Policy" content="default-src 'self'; script-src 'self'; style-src 'self'; object-src 'none'; base-uri 'self'; frame-ancestors 'none';" />
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/electron/renderer/index.html` at line 6, Update the
Content-Security-Policy meta tag to remove 'unsafe-inline' from the script-src
directive (i.e., change the meta's content attribute so script-src is 'self'
only) to harden renderer CSP per Electron best practices; verify the page uses
only external module scripts and then also remove 'unsafe-inline' from style-src
if no inline styles are required, ensuring the meta tag
(Content-Security-Policy) reflects these stricter directives.
packages/core/src/services/schema-loader-service.ts-14-15 (1)

14-15: ⚠️ Potential issue | 🟠 Major

Cache is global, not path-scoped, so later loads can return stale schema data.

Current caching assumes a single schema path per process.

Suggested patch
-let cachedJsonSchema: unknown | undefined;
-let cachedToonSchema: string | undefined;
+const cachedJsonSchemas = new Map<string, unknown>();
+const cachedToonSchemas = new Map<string, string>();
@@
-  if (cachedJsonSchema !== undefined) {
+  const cachedJsonSchema = cachedJsonSchemas.get(schemaPath);
+  if (cachedJsonSchema !== undefined) {
@@
-    cachedJsonSchema = schema;
+    cachedJsonSchemas.set(schemaPath, schema);
@@
-  if (cachedToonSchema !== undefined) {
+  const cachedToonSchema = cachedToonSchemas.get(toonPath);
+  if (cachedToonSchema !== undefined) {
@@
-    cachedToonSchema = toonContent;
+    cachedToonSchemas.set(toonPath, toonContent);
@@
 export function clearSchemaCache(): void {
-  cachedJsonSchema = undefined;
-  cachedToonSchema = undefined;
+  cachedJsonSchemas.clear();
+  cachedToonSchemas.clear();
 }

Also applies to: 38-45, 115-122, 189-192

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/services/schema-loader-service.ts` around lines 14 - 15,
The current global caches cachedJsonSchema and cachedToonSchema cause stale
returns when multiple schema paths are used; change them to path-keyed caches
(e.g., Map<string, unknown> or Record<string, unknown> for cachedJsonSchema and
Map<string, string> for cachedToonSchema) and update all reads/writes to index
by the incoming schema path parameter (use the same schemaPath variable used in
loader functions) so each distinct path has its own cached entry; apply this
change to every location that references cachedJsonSchema/cachedToonSchema (the
other occurrences noted in the review) and ensure cache invalidation/assignment
uses the path key.
packages/core/src/services/file-service.ts-22-29 (1)

22-29: ⚠️ Potential issue | 🟠 Major

Avoid treating every filesystem exception as “missing”.

Both methods currently swallow all IO failures, which can hide permission or provider errors as normal states.

Suggested patch
   async ensureWorkflowsDirectory(): Promise<void> {
-    const exists = await this.fs
-      .stat(this.workflowsDirectory)
-      .then(() => true)
-      .catch(() => false);
+    const exists = await this.fs.fileExists(this.workflowsDirectory);
     if (!exists) {
       await this.fs.createDirectory(this.workflowsDirectory);
     }
   }
@@
   async listWorkflowFiles(): Promise<string[]> {
     try {
       const entries = await this.fs.readDirectory(this.workflowsDirectory);
       return entries
         .filter((entry) => entry.isFile && entry.name.endsWith('.json'))
         .map((entry) => entry.name.replace(/\.json$/, ''));
-    } catch {
-      return [];
+    } catch (error) {
+      if (!(await this.fs.fileExists(this.workflowsDirectory))) {
+        return [];
+      }
+      throw error;
     }
   }

Also applies to: 60-68

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/services/file-service.ts` around lines 22 - 29, The
ensureWorkflowsDirectory method (and the similar block at lines 60-68) currently
treats any fs.stat error as "missing" and always creates the directory; update
the error handling so you only create the directory when the error indicates
"not found" (e.g., ENOENT/NotFound), and rethrow or propagate other
IO/permission/provider errors instead of swallowing them. Concretely, wrap the
await this.fs.stat(this.workflowsDirectory) call in a try/catch, on catch
inspect the thrown error/code and only call
this.fs.createDirectory(this.workflowsDirectory) when the error signals absence;
for any other error rethrow or bubble it up so callers can handle
permission/provider failures. Ensure the same pattern is applied to the other
existence-check/create block referenced at lines 60-68.
packages/vscode/src/adapters/vscode-message-transport.ts-34-37 (1)

34-37: ⚠️ Potential issue | 🟠 Major

Isolate per-handler failures during message fan-out.

A single throwing handler currently interrupts dispatch for all remaining handlers.

Suggested patch
   handleIncomingMessage(message: { type: string; requestId?: string; payload?: unknown }): void {
     for (const handler of this.handlers) {
-      handler(message);
+      try {
+        handler(message);
+      } catch (error) {
+        console.error('[VSCodeMessageTransport] Message handler failed:', error);
+      }
     }
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/vscode/src/adapters/vscode-message-transport.ts` around lines 34 -
37, The message dispatch in handleIncomingMessage currently calls each handler
in this.handlers without protection, so a throwing handler aborts the fan-out;
fix by invoking each handler(message) inside a try/catch so one handler's
exception does not stop the loop, and on error log the failure (include handler
identity if available and message.requestId/payload context) using the module's
logger (or console.error if no logger) then continue to the next handler; update
handleIncomingMessage to catch and log per-handler errors while preserving
original behavior for successful handlers.
packages/electron/package.json-10-10 (1)

10-10: ⚠️ Potential issue | 🟠 Major

clean script is not cross-platform.

rm -rf will fail in many Windows setups; use a Node-based command instead.

Suggested patch
-    "clean": "rm -rf dist"
+    "clean": "node -e \"require('node:fs').rmSync('dist', { recursive: true, force: true })\""
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/electron/package.json` at line 10, The "clean" npm script currently
uses a non-cross-platform shell command ("rm -rf") which fails on Windows;
replace it with a cross-platform solution by using a Node-based tool like
rimraf: add rimraf as a devDependency and update the "clean" script entry named
"clean" in package.json to invoke rimraf on the dist folder so it works on all
platforms (alternatively, you can use a small node -e/fs.rmSync approach if you
prefer not to add a dependency).
packages/core/src/types/mcp-node.ts-287-291 (1)

287-291: ⚠️ Potential issue | 🟠 Major

Reject unknown mode strings during normalization.

The current cast allows arbitrary invalid values to pass as McpNodeMode.

Suggested patch
   const rawMode = (data.mode as string | undefined) ?? 'manualParameterConfig';
 
   // Map legacy mode to new mode, or use raw mode if already valid
-  const mode = legacyModeMap[rawMode] ?? (rawMode as McpNodeMode);
+  const validModes: readonly McpNodeMode[] = [
+    'manualParameterConfig',
+    'aiParameterConfig',
+    'aiToolSelection',
+  ];
+  const mode =
+    legacyModeMap[rawMode] ??
+    (validModes.includes(rawMode as McpNodeMode)
+      ? (rawMode as McpNodeMode)
+      : 'manualParameterConfig');
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/types/mcp-node.ts` around lines 287 - 291, The
normalization currently casts arbitrary rawMode to McpNodeMode, allowing invalid
values; update the logic around rawMode/mode/legacyModeMap to validate the
string: compute mapped = legacyModeMap[rawMode]; if mapped exists use it; else
check if rawMode is a valid McpNodeMode via
Object.values(McpNodeMode).includes(rawMode as McpNodeMode); if valid use
rawMode as McpNodeMode; otherwise reject by throwing a clear Error (e.g.,
`Unknown McpNodeMode: ${rawMode}`) or returning a validation failure so unknown
mode strings are not accepted.
packages/vscode/src/adapters/vscode-file-system.ts-23-30 (1)

23-30: ⚠️ Potential issue | 🟠 Major

Use bitwise checks for FileType and properly distinguish "not found" from other stat errors.

VS Code's FileType is a bitmask (e.g., File | SymbolicLink = 65), so strict equality checks like type === vscode.FileType.File will fail to correctly identify symlinked files and directories. Additionally, fileExists currently suppresses all stat failures as "not found", masking permission and access errors; it should only map FileNotFound to false and rethrow other filesystem errors.

Suggested patch
   async fileExists(filePath: string): Promise<boolean> {
     const uri = vscode.Uri.file(filePath);
     try {
       await vscode.workspace.fs.stat(uri);
       return true;
-    } catch {
-      return false;
+    } catch (error) {
+      if (error instanceof vscode.FileSystemError && error.code === 'FileNotFound') {
+        return false;
+      }
+      throw error;
     }
   }
     return entries.map(([name, type]) => ({
       name,
-      isFile: type === vscode.FileType.File,
-      isDirectory: type === vscode.FileType.Directory,
+      isFile: (type & vscode.FileType.File) === vscode.FileType.File,
+      isDirectory: (type & vscode.FileType.Directory) === vscode.FileType.Directory,
     }));
   }
     return {
-      isFile: stat.type === vscode.FileType.File,
-      isDirectory: stat.type === vscode.FileType.Directory,
+      isFile: (stat.type & vscode.FileType.File) === vscode.FileType.File,
+      isDirectory: (stat.type & vscode.FileType.Directory) === vscode.FileType.Directory,
     };
   }

Also applies to: lines 45-46, 54-55

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/vscode/src/adapters/vscode-file-system.ts` around lines 23 - 30, The
fileExists implementation incorrectly treats all stat failures as "not found"
and uses strict equality to check FileType (which is a bitmask), so update
fileExists to call vscode.workspace.fs.stat(uri), inspect the returned
FileStat.type using bitwise checks (e.g., (stat.type & vscode.FileType.File) !==
0 or (stat.type & vscode.FileType.Directory) !== 0 or include
FileType.SymbolicLink via bitmask), and only return false when the thrown error
is a FileSystemError with code 'FileNotFound' (rethrow other errors); apply the
same bitwise type checks and error-handling pattern to the other places in this
file that test vscode.FileType with === (the other similar checks noted in the
review).
packages/core/src/utils/schema-parser.ts-276-279 (1)

276-279: ⚠️ Potential issue | 🟠 Major

Avoid constructing regexes directly from schema input.

new RegExp(param.pattern) on schema-provided patterns can enable catastrophic backtracking (ReDoS) and lock the process.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/utils/schema-parser.ts` around lines 276 - 279, The code
constructs a RegExp directly from schema input (new RegExp(param.pattern)) which
can enable ReDoS; instead validate or sanitize param.pattern before using it:
run a safety check (e.g., use a safe-regex library or a custom heuristic) and
wrap RegExp construction and regex.test(value) in a try/catch; if the pattern is
unsafe or construction throws, reject the pattern (return { valid: false, error:
... }) or escape it so it’s treated as a literal; update the logic around
param.pattern, new RegExp(param.pattern), and regex.test(value) to perform these
checks and fail closed for unsafe patterns.
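
Since no patch is attached to this comment, here is a hedged sketch of the fail-closed approach it describes. The heuristic is illustrative only, not a complete ReDoS guard, and `compileSchemaPattern` is not the project's actual API.

```typescript
// Heuristic guard before compiling a schema-supplied pattern. This is a
// sketch: it caps pattern length and rejects the classic nested-quantifier
// shape ((a+)+, (a*)*, ...); a production check might use a vetted library
// such as safe-regex or run matching with a timeout instead.
function compileSchemaPattern(pattern: string): RegExp | null {
  if (pattern.length > 1000) return null;
  // A quantified group immediately followed by another quantifier.
  if (/[+*}]\)[+*?{]/.test(pattern)) return null;
  try {
    return new RegExp(pattern);
  } catch {
    return null; // syntactically invalid: fail closed
  }
}
```

The caller would treat a `null` result as a validation failure for that parameter rather than silently skipping the pattern check.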
packages/core/src/utils/validate-workflow.ts-211-220 (1)

211-220: ⚠️ Potential issue | 🟠 Major

The validator can throw on malformed node/connection entries.

The function accepts unknown, but validateNodes and validateConnections assume every array item is a valid object. null/primitive entries will throw before producing validation errors.

Also applies to: 241-245, 841-851

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/utils/validate-workflow.ts` around lines 211 - 220,
validateNodes and validateConnections assume each array item is an object and
directly access properties (e.g., node.id), which will throw for
null/primitives; add guards in validateNodes and validateConnections to treat
non-object/array items as validation failures instead of accessing properties —
for example, before using node.id in the duplicate-ID check (with nodeIds)
ensure typeof node === 'object' && node !== null and that 'id' in node is a
string/number, pushing a clear error (e.g., INVALID_NODE_ENTRY) for malformed
entries; do the same in validateConnections (check connection is an object and
has valid source/target fields) so malformed elements produce validation errors
rather than runtime exceptions.
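
A sketch of the entry guard this comment asks for; `guardNodeEntries` and the `INVALID_NODE_ENTRY` code are illustrative, not the project's actual API.

```typescript
interface ValidationError {
  code: string;
  message: string;
}

// Sketch: filter out malformed entries (null, primitives, arrays) before
// field-level validation, turning would-be runtime exceptions into
// ordinary validation errors.
function guardNodeEntries(nodes: unknown[]): {
  valid: Record<string, unknown>[];
  errors: ValidationError[];
} {
  const valid: Record<string, unknown>[] = [];
  const errors: ValidationError[] = [];
  nodes.forEach((node, index) => {
    if (typeof node !== 'object' || node === null || Array.isArray(node)) {
      errors.push({
        code: 'INVALID_NODE_ENTRY',
        message: `nodes[${index}] is not an object`,
      });
      return;
    }
    valid.push(node as Record<string, unknown>);
  });
  return { valid, errors };
}
```

`validateConnections` would apply the same pattern before touching `source`/`target` fields.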
packages/core/src/utils/validate-workflow.ts-426-433 (1)

426-433: ⚠️ Potential issue | 🟠 Major

validateSubAgentFlow skips connection integrity checks.

Sub-agent flow validation currently allows missing/invalid connections content, so structurally broken sub-flows can pass validation.

Also applies to: 478-479

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/utils/validate-workflow.ts` around lines 426 - 433, The
validateSubAgentFlow function currently returns early after checking
subAgentFlow.nodes and therefore skips validating the connections array and that
each connection references valid node IDs; update validateSubAgentFlow to also
verify subAgentFlow.connections exists and is an array, iterate over each
connection to ensure required fields (e.g., source/target or similar names used
in your code) are present and reference node IDs that exist in
subAgentFlow.nodes, and push meaningful errors (e.g.,
SUBAGENTFLOW_MISSING_CONNECTIONS, SUBAGENTFLOW_INVALID_CONNECTION,
SUBAGENTFLOW_UNKNOWN_NODE_REFERENCE) when checks fail instead of allowing
structurally broken sub-flows to pass. Ensure the same connection validation
logic is applied where the code currently returns early (around the nodes check
and the region referenced at lines ~478-479).
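
A sketch of the connection-integrity check described above; the field names `source`/`target` and the error codes are assumptions about the workflow schema.

```typescript
interface FlowConnection {
  source?: unknown;
  target?: unknown;
}

// Sketch: verify the connections value is an array and that every entry
// references node IDs that actually exist in the sub-flow.
function validateConnectionRefs(nodeIds: Set<string>, connections: unknown): string[] {
  if (!Array.isArray(connections)) return ['SUBAGENTFLOW_MISSING_CONNECTIONS'];
  const errors: string[] = [];
  connections.forEach((conn, i) => {
    if (typeof conn !== 'object' || conn === null) {
      errors.push(`SUBAGENTFLOW_INVALID_CONNECTION:${i}`);
      return;
    }
    const { source, target } = conn as FlowConnection;
    if (typeof source !== 'string' || !nodeIds.has(source)) {
      errors.push(`SUBAGENTFLOW_UNKNOWN_NODE_REFERENCE:${i}:source`);
    }
    if (typeof target !== 'string' || !nodeIds.has(target)) {
      errors.push(`SUBAGENTFLOW_UNKNOWN_NODE_REFERENCE:${i}:target`);
    }
  });
  return errors;
}
```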
packages/core/src/utils/schema-parser.ts-232-246 (1)

232-246: ⚠️ Potential issue | 🟠 Major

enum constraints are only applied to string parameters.

Right now, enum validation is enforced in validateStringValue only. Number, integer, boolean, array, and object parameters can bypass enum constraints entirely.

Also applies to: 260-263, 289-315, 320-326

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/utils/schema-parser.ts` around lines 232 - 246, The switch
in schema-parser.ts applies enum checks only for strings; move the enum
validation out of validateStringValue and enforce it for all types by checking
param.enum (or param.enums) before delegating to type-specific validators (the
switch handling param.type and functions validateStringValue,
validateNumberValue, validateBooleanValue, validateArrayValue,
validateObjectValue); either perform a unified enum membership check that
serializes/compares values appropriately for numbers, integers, booleans, arrays
and objects, or add equivalent enum checks inside each validator
(validateNumberValue, validateBooleanValue, validateArrayValue,
validateObjectValue) so enum constraints cannot be bypassed. Ensure the same
change is applied to the other switch sites referenced (lines ~260-263,
~289-315, ~320-326) so all enum enforcement is consistent.
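
A sketch of the unified, type-agnostic enum check suggested above. JSON-serialization equality is a simplification (it is key-order-sensitive for objects), and `checkEnum` is not the project's actual function name.

```typescript
// Sketch: run a single enum membership check before delegating to the
// per-type validators, so number, boolean, array, and object parameters
// cannot bypass enum constraints.
function checkEnum(
  value: unknown,
  allowed: unknown[] | undefined,
): { valid: boolean; error?: string } {
  if (!allowed || allowed.length === 0) return { valid: true };
  const key = JSON.stringify(value);
  const ok = allowed.some((candidate) => JSON.stringify(candidate) === key);
  return ok ? { valid: true } : { valid: false, error: `value not in enum: ${key}` };
}
```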
packages/core/src/utils/migrate-workflow.ts-47-66 (1)

47-66: ⚠️ Potential issue | 🟠 Major

Switch migration does not actually enforce a single default branch.

When multiple branches already have isDefault: true, this block only moves one branch to the end and preserves other defaults, leaving invalid state.

🔧 Proposed fix
     if (hasDefault) {
-      // Ensure default branch is last
-      const defaultIndex = branches.findIndex((b: SwitchCondition) => b.isDefault);
-      if (defaultIndex !== branches.length - 1) {
-        const defaultBranch = branches[defaultIndex];
-        const newBranches = [
-          ...branches.slice(0, defaultIndex),
-          ...branches.slice(defaultIndex + 1),
-          defaultBranch,
-        ];
-        return {
-          ...node,
-          data: {
-            ...switchData,
-            branches: newBranches,
-            outputPorts: newBranches.length,
-          },
-        } as WorkflowNode;
-      }
-      return node;
+      // Keep only one default branch and force it to the end
+      const defaultIndexes = branches
+        .map((b, i) => (b.isDefault ? i : -1))
+        .filter((i) => i >= 0);
+      const keepIndex = defaultIndexes[defaultIndexes.length - 1];
+      const normalized = branches.map((b, i) => ({ ...b, isDefault: i === keepIndex }));
+      const defaultBranch = normalized[keepIndex];
+      const newBranches = [...normalized.filter((_, i) => i !== keepIndex), defaultBranch];
+
+      return {
+        ...node,
+        data: {
+          ...switchData,
+          branches: newBranches,
+          outputPorts: newBranches.length,
+        },
+      } as WorkflowNode;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/utils/migrate-workflow.ts` around lines 47 - 66, The
current logic in migrate-workflow.ts only moves one branch when multiple
SwitchCondition branches have isDefault true; update the block handling
hasDefault to enforce a single default by locating all indexes where
branches[].isDefault is true, choose the last default index to retain (or
otherwise select one consistently), set isDefault = false on all other branches,
then reorder so the retained default branch is moved to the end and update
outputPorts accordingly (references: hasDefault, branches, SwitchCondition,
isDefault, defaultIndex, newBranches).
packages/core/src/types/workflow-definition.ts-562-563 (1)

562-563: ⚠️ Potential issue | 🟠 Major

Workflow timestamps are typed as Date but persisted/loaded as JSON strings.

The declared types do not match what filesystem persistence actually stores, which can cause runtime type errors when Date methods are called on loaded workflows.

🔧 Proposed fix
 export interface Workflow {
@@
-  createdAt: Date;
-  updatedAt: Date;
+  createdAt: string;
+  updatedAt: string;
@@
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/types/workflow-definition.ts` around lines 562 - 563, The
timestamps createdAt and updatedAt on the WorkflowDefinition type are declared
as Date but persisted/loaded as JSON strings; update their types to reflect
storage format by changing createdAt: Date; updatedAt: Date; to createdAt:
string | Date; updatedAt: string | Date; (or to string if you want strictness)
in the WorkflowDefinition (and any closely related type aliases) and update any
serializers/deserializers to parse ISO strings to Date only where Date methods
are needed (e.g., parse via new Date(value) in load/consume code paths).
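
The mismatch is easy to reproduce in isolation. A minimal sketch, not tied to the actual Workflow type:

```typescript
// JSON round-tripping a Date yields an ISO string, so a workflow loaded
// from disk carries string timestamps regardless of the declared type.
const saved = JSON.stringify({ createdAt: new Date('2026-02-28T00:00:00Z') });
const loaded = JSON.parse(saved) as { createdAt: unknown };

// After the round trip the value is a plain string; calling Date methods
// like .getTime() on it would throw at runtime.
console.log(typeof loaded.createdAt); // "string"
```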
packages/core/src/utils/path-utils.ts-122-125 (1)

122-125: ⚠️ Potential issue | 🟠 Major

Constrain project-scoped skill paths to workspaceRoot.

For scope === 'project', absolute skillPath values are returned as-is. That allows project-scoped entries to point outside the workspace boundary.

🔧 Proposed fix
 export function resolveSkillPathWithRoot(
   skillPath: string,
   scope: 'user' | 'project' | 'local',
   workspaceRoot?: string
 ): string {
   if (scope === 'user' || scope === 'local') {
     return skillPath;
   }
   if (!workspaceRoot) {
     throw new Error('No workspace folder found for project Skill resolution');
   }
-  if (path.isAbsolute(skillPath)) {
-    return skillPath;
-  }
-  return path.resolve(workspaceRoot, skillPath);
+  const resolved = path.isAbsolute(skillPath)
+    ? path.normalize(skillPath)
+    : path.resolve(workspaceRoot, skillPath);
+
+  const rel = path.relative(path.resolve(workspaceRoot), resolved);
+  if (rel.startsWith('..') || path.isAbsolute(rel)) {
+    throw new Error('Project skillPath must stay within workspaceRoot');
+  }
+
+  return resolved;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/utils/path-utils.ts` around lines 122 - 125, The current
logic returns absolute skillPath for project-scoped entries, letting them point
outside the workspace; change the behavior so that when scope === 'project' and
path.isAbsolute(skillPath) you do not return it as-is but instead resolve it
under workspaceRoot (use path.resolve(workspaceRoot,
<relative-from-root-of-skillPath>)) so project-scoped paths are constrained to
the workspace; update the branch that checks path.isAbsolute(skillPath) to
consider scope and call path.resolve(workspaceRoot, ...) for project scope
rather than returning skillPath directly.
packages/core/src/services/fs-workflow-provider.ts-19-22 (1)

19-22: ⚠️ Potential issue | 🟠 Major

Handle invalid workflow JSON explicitly.

A malformed workflow file will throw from JSON.parse and bubble as a runtime failure. This path should fail with a controlled error message that includes the file path.

🔧 Proposed fix
   async getCurrentWorkflow(): Promise<{ workflow: Workflow | null; isStale: boolean }> {
     if (await this.fs.fileExists(this.workflowPath)) {
       const content = await this.fs.readFile(this.workflowPath);
-      return { workflow: JSON.parse(content), isStale: false };
+      try {
+        return { workflow: JSON.parse(content) as Workflow, isStale: false };
+      } catch (error) {
+        const reason = error instanceof Error ? error.message : String(error);
+        throw new Error(`Invalid workflow JSON at ${this.workflowPath}: ${reason}`);
+      }
     }
     return { workflow: null, isStale: false };
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/services/fs-workflow-provider.ts` around lines 19 - 22,
When reading the workflow file (the block that checks
this.fs.fileExists(this.workflowPath) and calls this.fs.readFile), wrap
JSON.parse(content) in a try/catch and convert parse errors into a controlled
error that includes the workflow path; e.g. catch the error from JSON.parse and
throw a new Error like "Invalid workflow JSON in <this.workflowPath>:
<originalErrorMessage>" (or include the original error as a cause) so failures
from JSON.parse don't bubble as raw runtime exceptions.
packages/core/src/utils/validate-workflow.ts-149-153 (1)

149-153: ⚠️ Potential issue | 🟠 Major

Hooks are validated from the wrong field.

Validation reads workflow.hooks, but hooks are modeled under slashCommandOptions.hooks. Invalid hooks inside slash command options currently bypass this validator.

🔧 Proposed fix
-  const rawWf = workflow as Record<string, unknown>;
-  if (rawWf.hooks) {
-    const hooksErrors = validateHooks(rawWf.hooks as WorkflowHooks);
+  const rawWf = workflow as Record<string, unknown>;
+  const slashCommandOptions = rawWf.slashCommandOptions as
+    | { hooks?: WorkflowHooks }
+    | undefined;
+  if (slashCommandOptions?.hooks) {
+    const hooksErrors = validateHooks(slashCommandOptions.hooks);
     errors.push(...hooksErrors);
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/utils/validate-workflow.ts` around lines 149 - 153, The
validator is checking rawWf.hooks but hooks live under slashCommandOptions;
update the check in validate-workflow.ts to read hooks from
rawWf.slashCommandOptions (e.g., const scOpts = rawWf.slashCommandOptions as
Record<string, unknown> and use scOpts.hooks) and pass those to validateHooks
(keep using validateHooks and errors.push(...hooksErrors)); ensure you only call
validateHooks when slashCommandOptions.hooks is present and typed as
WorkflowHooks so invalid hooks inside slash command options are caught.
packages/electron/src/main/ipc/ipc-handlers.ts-101-105 (1)

101-105: 🛠️ Refactor suggestion | 🟠 Major

Use static import for shell instead of dynamic require.

The shell module should be imported statically at the top of the file alongside other Electron imports for consistency and better tree-shaking.

♻️ Proposed fix

Add to imports at top of file:

-import type { BrowserWindow, IpcMain } from 'electron';
+import type { BrowserWindow, IpcMain } from 'electron';
+import { shell } from 'electron';

Then simplify the handler:

         case 'OPEN_EXTERNAL_URL': {
-          const { shell } = require('electron');
           shell.openExternal(payload.url);
           break;
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/electron/src/main/ipc/ipc-handlers.ts` around lines 101 - 105,
Replace the dynamic require in the 'OPEN_EXTERNAL_URL' IPC handler with a static
import: add a top-of-file import for shell from 'electron' alongside other
Electron imports, then update the handler (case 'OPEN_EXTERNAL_URL') to call
shell.openExternal(payload.url) directly without require. Ensure the import is
used where the handler references shell.openExternal and remove the inline
require to keep imports consistent and tree-shakeable.
packages/electron/src/main/main.ts-28-34 (1)

28-34: ⚠️ Potential issue | 🟠 Major

Enable sandbox: true to strengthen renderer process security.

Setting sandbox: false disables Chromium's OS-level sandbox protection for the renderer, weakening security isolation. The preload script uses contextBridge with contextIsolation: true, which continues to work correctly with sandbox enabled—no changes needed to the preload implementation.

There is no functional reason preventing sandbox from being enabled. If a specific requirement exists, document it explicitly in a code comment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/electron/src/main/main.ts` around lines 28 - 34, The webPreferences
object for the BrowserWindow sets sandbox: false which reduces renderer
security; change sandbox to true in the BrowserWindow options (the
webPreferences object where preload is set and contextIsolation/nodeIntegration
are configured) to enable Chromium OS-level sandboxing, and if there is any
intentional reason to keep sandbox disabled, add a clear inline comment above
the webPreferences block explaining that exception; ensure the preload script
(referenced by path.join(__dirname, '../preload/preload.js')) still works with
sandbox enabled (no code changes expected because contextIsolation: true and
contextBridge are used).
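
A configuration sketch of the hardened options; the preload path is a placeholder, and the real values live in main.ts:

```typescript
// Hedged sketch of hardened webPreferences for the BrowserWindow.
// contextBridge-based preloads keep working with the sandbox enabled
// as long as contextIsolation stays true.
const webPreferences = {
  preload: '/absolute/path/to/preload.js', // placeholder path
  contextIsolation: true,
  nodeIntegration: false,
  sandbox: true, // re-enable Chromium's OS-level renderer sandbox
};
```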
packages/core/src/services/mcp-server-service.ts-129-154 (1)

129-154: ⚠️ Potential issue | 🟠 Major

Cancel and reject all pending requests during shutdown.

stop() closes the server but leaves the pendingWorkflowRequests / pendingApplyRequests timers alive. Those timers can leak and fire long after shutdown, rejecting or resolving stale promises.

Suggested fix
   async stop(): Promise<void> {
     this.writtenConfigs.clear();
     this.currentProvider = null;
+
+    for (const [id, pending] of this.pendingWorkflowRequests) {
+      clearTimeout(pending.timer);
+      pending.reject(new Error('MCP server stopped'));
+      this.pendingWorkflowRequests.delete(id);
+    }
+    for (const [id, pending] of this.pendingApplyRequests) {
+      clearTimeout(pending.timer);
+      pending.reject(new Error('MCP server stopped'));
+      this.pendingApplyRequests.delete(id);
+    }

     if (this.httpServer) {

Also applies to: 47-52

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/services/mcp-server-service.ts` around lines 129 - 154, The
stop() method currently closes the httpServer but leaves pendingWorkflowRequests
and pendingApplyRequests timers/promises alive; modify stop() (in
packages/core/src/services/mcp-server-service.ts) to iterate over
pendingWorkflowRequests and pendingApplyRequests, clear any associated timeouts
(clearTimeout), reject each pending promise with a clear shutdown error (e.g.
new Error('MCP Server stopped')), and then remove/clear those collections; do
this before or as part of the server shutdown flow so no timers or promise
callbacks remain after stop() completes, and ensure forceCloseTimer is still
cleared as currently implemented.
packages/core/src/types/messages.ts-1913-1929 (1)

1913-1929: ⚠️ Potential issue | 🟠 Major

Current message type guards are effectively no-ops.

Both guards accept any object with a string type, so malformed payloads and unknown message types are treated as valid ExtensionMessage/WebviewMessage.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/types/messages.ts` around lines 1913 - 1929, The current
guards isExtensionMessage and isWebviewMessage accept any object with a string
type; update them to validate the discriminant and payload shape by checking
that message.type is one of the known literal types for the respective union and
verifying required fields for that variant. Modify isExtensionMessage to assert
message.type is in the ExtensionMessageType set/enum (or the exact string
literals used by ExtensionMessage) and validate required properties for those
specific types; do the same for isWebviewMessage against the WebviewMessageType
set/enum (or exact literals) and their required fields. Use the existing type
constants or the union variant names from the types file to build the
allowed-type checks and add minimal shape checks per variant so malformed
payloads are rejected. Ensure you only change the implementations of
isExtensionMessage and isWebviewMessage.
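
A sketch of a stricter guard; the type literals below are illustrative placeholders, since the real ones live in messages.ts:

```typescript
// Illustrative message union; the actual literals come from messages.ts.
type ExtensionMessage =
  | { type: 'GET_CURRENT_WORKFLOW_REQUEST'; payload: { correlationId: string } }
  | { type: 'ERROR'; payload: { message: string } };

const EXTENSION_MESSAGE_TYPES = new Set<string>([
  'GET_CURRENT_WORKFLOW_REQUEST',
  'ERROR',
]);

function isExtensionMessage(value: unknown): value is ExtensionMessage {
  if (typeof value !== 'object' || value === null) return false;
  const msg = value as Record<string, unknown>;
  // Reject unknown discriminants and require a payload object,
  // instead of accepting any object that has a string `type`.
  return (
    typeof msg.type === 'string' &&
    EXTENSION_MESSAGE_TYPES.has(msg.type) &&
    typeof msg.payload === 'object' &&
    msg.payload !== null
  );
}
```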
packages/core/src/services/mcp-server-service.ts-224-229 (1)

224-229: ⚠️ Potential issue | 🟠 Major

Guard pending-map lifecycle when postMessage throws.

Pending entries are inserted before sending transport messages. If postMessage throws synchronously, timers remain and map entries leak.

Suggested fix
         this.pendingWorkflowRequests.set(correlationId, { resolve, reject, timer });

-        this.transport?.postMessage({
-          type: 'GET_CURRENT_WORKFLOW_REQUEST',
-          payload: { correlationId },
-        });
+        try {
+          this.transport?.postMessage({
+            type: 'GET_CURRENT_WORKFLOW_REQUEST',
+            payload: { correlationId },
+          });
+        } catch (error) {
+          clearTimeout(timer);
+          this.pendingWorkflowRequests.delete(correlationId);
+          reject(error instanceof Error ? error : new Error(String(error)));
+        }
       });
         this.pendingApplyRequests.set(correlationId, { resolve, reject, timer });

-        this.transport?.postMessage({
-          type: 'APPLY_WORKFLOW_FROM_MCP',
-          payload: { correlationId, workflow, requireConfirmation, description },
-        });
+        try {
+          this.transport?.postMessage({
+            type: 'APPLY_WORKFLOW_FROM_MCP',
+            payload: { correlationId, workflow, requireConfirmation, description },
+          });
+        } catch (error) {
+          clearTimeout(timer);
+          this.pendingApplyRequests.delete(correlationId);
+          reject(error instanceof Error ? error : new Error(String(error)));
+        }
       });

Also applies to: 260-265

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/services/mcp-server-service.ts` around lines 224 - 229, The
code adds entries to pendingWorkflowRequests before calling
transport.postMessage, which can throw and leak the map/timer; wrap the
postMessage call in a try/catch and only set pendingWorkflowRequests (and start
the timer) after postMessage succeeds, or if you must set before, immediately
clear the map entry, cancel the timer, and call reject inside the catch.
Specifically update the block that uses pendingWorkflowRequests.set(...)
together with transport.postMessage({ type: 'GET_CURRENT_WORKFLOW_REQUEST',
payload: { correlationId } }) to ensure timer cleanup and removal of the
correlationId on synchronous postMessage failure; apply the same guard/cleanup
pattern to the other similar block that uses pendingWorkflowRequests and
transport.postMessage in this file.
packages/core/src/services/mcp-server-service.ts-108-125 (1)

108-125: ⚠️ Potential issue | 🟠 Major

Reset manager state when server listen fails.

If httpServer.listen() emits error, this.httpServer remains set, so later start() calls throw “already running” even though startup failed.

Suggested fix
       httpServer.on('error', (error) => {
+        this.httpServer = null;
+        this.port = null;
         log('ERROR', 'MCP Server: HTTP server error', {
           error: error.message,
         });
         reject(error);
       });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/services/mcp-server-service.ts` around lines 108 - 125, If
server startup fails, reset the manager state so subsequent start() calls don't
think it is already running: in the block that creates/uses httpServer (the
Promise that calls httpServer.listen(0, '127.0.0.1', ...)), ensure that on
failure paths (the httpServer.on('error', ...) handler and the else branch where
address is invalid) you clean up by closing the httpServer if needed and
clearing this.httpServer and this.port (or setting them to undefined/null)
before rejecting; reference the httpServer variable, the httpServer.on('error')
handler, and the code path that checks address !== 'string' to apply the
cleanup.
🟡 Minor comments (9)
packages/core/src/utils/workflow-validator.ts-97-100 (1)

97-100: ⚠️ Potential issue | 🟡 Minor

Validation incomplete for Workflow type contract.

The cast to Workflow bypasses TypeScript's type safety. The Workflow interface requires createdAt and updatedAt as Date types, but these aren't validated. If the JSON contains string dates (common in serialized data), callers may encounter runtime issues when accessing these as Date objects.

🔧 Consider validating or transforming date fields
+  // Validate and transform date fields
+  if ('createdAt' in workflow) {
+    const createdAt = new Date(workflow.createdAt as string);
+    if (isNaN(createdAt.getTime())) {
+      errors.push('Field "createdAt" must be a valid date');
+    } else {
+      workflow.createdAt = createdAt;
+    }
+  }
+  if ('updatedAt' in workflow) {
+    const updatedAt = new Date(workflow.updatedAt as string);
+    if (isNaN(updatedAt.getTime())) {
+      errors.push('Field "updatedAt" must be a valid date');
+    } else {
+      workflow.updatedAt = updatedAt;
+    }
+  }
+
   return {
     valid: true,
     workflow: workflow as unknown as Workflow,
   };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/utils/workflow-validator.ts` around lines 97 - 100, The
current return in the workflow validator is casting the input to Workflow and
skipping validation of date fields; update the validator (the function that
returns { valid: true, workflow: ... }) to explicitly validate and/or transform
Workflow.createdAt and Workflow.updatedAt: if they are strings parse them into
Date objects (reject or mark invalid if parsing fails), and ensure that
createdAt/updatedAt are actual Date instances before returning the workflow;
keep the rest of the Workflow validation logic intact and only return a casted
Workflow after these fields are validated/transformed.
packages/core/src/services/schema-loader-service.ts-112-112 (1)

112-112: ⚠️ Potential issue | 🟡 Minor

TOON path derivation should only replace a trailing .json extension.

Current replacement can modify earlier .json occurrences in the path string.

Suggested patch
-  const toonPath = schemaPath.replace('.json', '.toon');
+  const toonPath = schemaPath.replace(/\.json$/i, '.toon');
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/services/schema-loader-service.ts` at line 112, The
derivation of toonPath from schemaPath currently does a blind replace of ".json"
which can affect earlier occurrences; update the logic that computes toonPath
(the schemaPath -> toonPath assignment) to only replace a trailing ".json"
extension (e.g. use a replacement that targets /\.json$/ or use path parsing to
swap the extname) so only the final file extension is changed to ".toon".
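
The difference shows up with a path that contains `.json` mid-string (the path below is illustrative):

```typescript
const schemaPath = '/schemas/v1.json.d/workflow.json';

// Blind replace rewrites the first occurrence, mangling the directory name.
const broken = schemaPath.replace('.json', '.toon');
// → '/schemas/v1.toon.d/workflow.json'

// Anchoring to the end of the string only swaps the trailing extension.
const fixed = schemaPath.replace(/\.json$/i, '.toon');
// → '/schemas/v1.json.d/workflow.toon'
```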
packages/core/src/i18n/i18n-service.ts-99-101 (1)

99-101: ⚠️ Potential issue | 🟡 Minor

Parameter interpolation currently replaces only the first occurrence.

If a token appears multiple times in one message, later occurrences remain unreplaced.

Suggested patch
   if (params) {
     for (const [paramKey, paramValue] of Object.entries(params)) {
-      text = text.replace(`{{${paramKey}}}`, String(paramValue));
+      text = text.replaceAll(`{{${paramKey}}}`, String(paramValue));
     }
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/i18n/i18n-service.ts` around lines 99 - 101, The parameter
interpolation in i18n-service.ts (the loop that iterates Object.entries(params)
and calls text.replace) only replaces the first occurrence of each token; change
the replacement in that loop inside the relevant function (the interpolation
routine in the i18n service) to replace all occurrences of `{{paramKey}}` (e.g.,
use a global /g regex or a replaceAll approach) while ensuring paramValue is
converted to string — this ensures every instance of the token in text is
replaced.
packages/core/src/cli/mcp-server-cli.ts-44-51 (1)

44-51: ⚠️ Potential issue | 🟡 Minor

Guard async signal shutdown against unhandled rejection.

process.on('SIGINT', shutdown) / process.on('SIGTERM', shutdown) attach an async handler directly. If manager.stop() rejects, it can surface as an unhandled rejection.

🔧 Proposed fix
-  process.on('SIGINT', shutdown);
-  process.on('SIGTERM', shutdown);
+  process.once('SIGINT', () => {
+    void shutdown().catch((error) => {
+      logger.error('Shutdown failed', error);
+      process.exit(1);
+    });
+  });
+  process.once('SIGTERM', () => {
+    void shutdown().catch((error) => {
+      logger.error('Shutdown failed', error);
+      process.exit(1);
+    });
+  });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/cli/mcp-server-cli.ts` around lines 44 - 51, The async
shutdown handler may reject and produce an unhandled rejection; change the
signal listeners to call the async shutdown and handle errors explicitly (e.g.,
process.on('SIGINT', () => shutdown().catch(err => { logger.error('Shutdown
failed', err); process.exit(1); }))). Ensure you reference the existing shutdown
function and manager.stop() call, and optionally guard re‑entrancy with a local
flag (e.g., shuttingDown) so repeated signals don't run shutdown concurrently.
packages/electron/src/main/ipc/ipc-handlers.ts-95-99 (1)

95-99: ⚠️ Potential issue | 🟡 Minor

STATE_UPDATE handler doesn't persist state as the comment suggests.

The comment mentions "store in localStorage via preload" but the handler only logs receipt. If state persistence is needed, implement the forwarding logic; otherwise, clarify the comment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/electron/src/main/ipc/ipc-handlers.ts` around lines 95 - 99, The
'STATE_UPDATE' case only logs receipt but does not persist state as the comment
says; capture the incoming payload in the 'STATE_UPDATE' branch and forward it
to the renderer/preload layer so the preload can write to localStorage (for
example, send the payload over your existing IPC channel from ipc-handlers.ts by
calling into the main window/webContents and emitting a dedicated channel like
'PERSIST_STATE' or 'persist-state' with the state object), or if you don't want
forwarding, remove/replace the misleading comment; update the branch that
currently references 'STATE_UPDATE' and deps.logger to either send the payload
to the preload (via deps.window?.webContents.send or your app's window
reference) or remove the persistence comment.
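
One possible forwarding shape, sketched with an injected send function standing in for webContents.send; the channel name is an illustrative assumption:

```typescript
type Send = (channel: string, payload: unknown) => void;

// Forward STATE_UPDATE to the renderer/preload layer so it can persist
// the state, instead of only logging receipt.
function handleStateUpdate(payload: unknown, send: Send): void {
  // 'PERSIST_STATE' is a placeholder channel name.
  send('PERSIST_STATE', payload);
}
```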
packages/core/src/utils/slack-message-builder.ts-34-49 (1)

34-49: ⚠️ Potential issue | 🟡 Minor

Buffer API is Node.js-specific and won't work in browser environments.

Since this file is in packages/core which aims to be platform-agnostic, using Buffer.from() will fail in browser contexts. Consider using a universal Base64 encoding approach.

🛡️ Proposed fix using btoa (browser) or a universal approach
   // Add workspace name as Base64-encoded parameter for display in error dialogs
   if (block.workspaceName) {
-    params.set('workspaceName', Buffer.from(block.workspaceName, 'utf-8').toString('base64'));
+    // Use btoa for browser compatibility (handles UTF-8 via encodeURIComponent)
+    const encoded = typeof Buffer !== 'undefined'
+      ? Buffer.from(block.workspaceName, 'utf-8').toString('base64')
+      : btoa(unescape(encodeURIComponent(block.workspaceName)));
+    params.set('workspaceName', encoded);
   }

Alternatively, if this module is strictly server-side, move it to a Node-specific location or document the constraint.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/utils/slack-message-builder.ts` around lines 34 - 49, The
use of Node's Buffer in buildImportUri (Buffer.from(...).toString('base64')) is
not browser-safe; replace it with a platform-agnostic Base64 encoder (e.g., a
small helper used by buildImportUri that uses globalThis.btoa for browsers and
globalThis.Buffer as a fallback, or a universal base64 utility) to encode
block.workspaceName before calling params.set('workspaceName', ...);
alternatively, if this module is server-only, move buildImportUri to a
Node-specific package and document the constraint. Ensure the helper is
referenced from buildImportUri and that workspaceName encoding no longer calls
Buffer.from directly.
src/webview/src/services/electron-bridge-adapter.ts-32-32 (1)

32-32: ⚠️ Potential issue | 🟡 Minor

Add try-catch for JSON.parse to handle corrupted localStorage data.

If localStorage contains malformed JSON (e.g., due to corruption or manual editing), JSON.parse will throw, crashing the bridge initialization.

🛡️ Proposed fix
-    getState: () => JSON.parse(localStorage.getItem('app-state') || 'null'),
+    getState: () => {
+      try {
+        return JSON.parse(localStorage.getItem('app-state') || 'null');
+      } catch {
+        console.warn('Failed to parse app-state from localStorage');
+        return null;
+      }
+    },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/webview/src/services/electron-bridge-adapter.ts` at line 32, The getState
function currently calls JSON.parse(localStorage.getItem('app-state') || 'null')
which will throw on malformed JSON; wrap that parsing in a try-catch inside
getState, catch and handle any SyntaxError by logging (or using console.error)
and returning a safe fallback (e.g., null or {}), and optionally remove or reset
the corrupted 'app-state' entry in localStorage so future reads don’t repeatedly
throw; update the getState implementation to perform these steps.
packages/electron/src/main/ipc/ipc-handlers.ts-48-59 (1)

48-59: ⚠️ Potential issue | 🟡 Minor

Add validation for payload properties before access.

Accessing payload.workflow without validation could cause runtime errors if the renderer sends malformed messages. Consider adding defensive checks.

🛡️ Proposed fix
         case 'SAVE_WORKFLOW': {
+          if (!payload?.workflow) {
+            reply('ERROR', { message: 'Missing workflow in payload' });
+            break;
+          }
           if (!fileService) {
             // Use current working directory as workspace root
             fileService = new FileService(deps.fs, process.cwd());
           }

Apply similar validation for other handlers accessing payload properties.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/electron/src/main/ipc/ipc-handlers.ts` around lines 48 - 59, The
SAVE_WORKFLOW IPC handler accesses payload.workflow without validation which can
throw if payload is missing or malformed; add defensive checks at the start of
the case (in the SAVE_WORKFLOW branch) to ensure payload and payload.workflow
exist and that workflow.name is a non-empty string before using
fileService.getWorkflowFilePath, ensureWorkflowsDirectory, or writeFile, and
return an error reply (e.g., reply('SAVE_ERROR')) when validation fails; reuse
the existing fileService initialization logic and validate types/shape of
workflow to avoid runtime exceptions in getWorkflowFilePath/writeFile.
packages/core/src/utils/slack-error-handler.ts-255-260 (1)

255-260: ⚠️ Potential issue | 🟡 Minor

getRetryDelay can exceed maxDelay.

The base delay is capped first, but jitter is added afterward, so the final delay can overshoot the cap.

Suggested fix
 export function getRetryDelay(attempt: number, maxDelay = 60): number {
   // Exponential backoff: 2^attempt seconds, capped at maxDelay
   const delay = Math.min(2 ** attempt, maxDelay);
   // Add jitter (random 0-20%)
   const jitter = delay * 0.2 * Math.random();
-  return delay + jitter;
+  return Math.min(delay + jitter, maxDelay);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/utils/slack-error-handler.ts` around lines 255 - 260, The
getRetryDelay function currently caps the exponential base then adds jitter,
which can push the final value above maxDelay; update getRetryDelay so the
jitter cannot cause an overshoot by either capping the returned value with
maxDelay or limiting the jitter to (maxDelay - baseDelay). Specifically, inside
getRetryDelay compute the baseDelay = Math.min(2 ** attempt, maxDelay), compute
jitter but limit it to Math.max(0, maxDelay - baseDelay) (or simply return
Math.min(baseDelay + jitter, maxDelay)), and return that capped result; this
change should be made in the getRetryDelay function to ensure the final delay
never exceeds maxDelay.
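
The worst case makes the overshoot concrete (20% jitter on top of an already-capped base):

```typescript
const maxDelay = 60;
const base = Math.min(2 ** 10, maxDelay); // capped base: 60

// Old formula: jitter is added after the cap, so the worst case overshoots.
const worstOld = base + base * 0.2; // 72 > maxDelay

// Fixed formula: cap the final value instead.
const worstFixed = Math.min(base + base * 0.2, maxDelay); // 60
```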

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c824118 and e082ad0.

⛔ Files ignored due to path filters (2)
  • package-lock.json is excluded by !**/package-lock.json
  • src/webview/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (56)
  • .gitignore
  • package.json
  • packages/core/package.json
  • packages/core/src/adapters/console-logger.ts
  • packages/core/src/adapters/node-file-system.ts
  • packages/core/src/cli/mcp-server-cli.ts
  • packages/core/src/i18n/i18n-service.ts
  • packages/core/src/i18n/translation-keys.ts
  • packages/core/src/i18n/translations/en.ts
  • packages/core/src/i18n/translations/ja.ts
  • packages/core/src/i18n/translations/ko.ts
  • packages/core/src/i18n/translations/zh-CN.ts
  • packages/core/src/i18n/translations/zh-TW.ts
  • packages/core/src/index.ts
  • packages/core/src/interfaces/dialog-service.ts
  • packages/core/src/interfaces/file-system.ts
  • packages/core/src/interfaces/logger.ts
  • packages/core/src/interfaces/message-transport.ts
  • packages/core/src/interfaces/workflow-provider.ts
  • packages/core/src/services/file-service.ts
  • packages/core/src/services/fs-workflow-provider.ts
  • packages/core/src/services/logger-holder.ts
  • packages/core/src/services/mcp-server-service.ts
  • packages/core/src/services/mcp-server-tools.ts
  • packages/core/src/services/schema-loader-service.ts
  • packages/core/src/types/ai-metrics.ts
  • packages/core/src/types/mcp-node.ts
  • packages/core/src/types/messages.ts
  • packages/core/src/types/slack-integration-types.ts
  • packages/core/src/types/workflow-definition.ts
  • packages/core/src/utils/migrate-workflow.ts
  • packages/core/src/utils/path-utils.ts
  • packages/core/src/utils/schema-parser.ts
  • packages/core/src/utils/sensitive-data-detector.ts
  • packages/core/src/utils/slack-error-handler.ts
  • packages/core/src/utils/slack-message-builder.ts
  • packages/core/src/utils/validate-workflow.ts
  • packages/core/src/utils/workflow-validator.ts
  • packages/core/tsconfig.json
  • packages/electron/package.json
  • packages/electron/renderer/index.html
  • packages/electron/src/main/adapters/electron-dialog-service.ts
  • packages/electron/src/main/adapters/electron-message-transport.ts
  • packages/electron/src/main/ipc/ipc-handlers.ts
  • packages/electron/src/main/main.ts
  • packages/electron/src/main/preload.ts
  • packages/electron/tsconfig.main.json
  • packages/electron/tsconfig.preload.json
  • packages/vscode/src/adapters/vscode-file-system.ts
  • packages/vscode/src/adapters/vscode-logger.ts
  • packages/vscode/src/adapters/vscode-message-transport.ts
  • src/webview/src/main.tsx
  • src/webview/src/services/bridge.ts
  • src/webview/src/services/electron-bridge-adapter.ts
  • src/webview/src/services/vscode-bridge-adapter.ts
  • tsconfig.base.json

Comment on lines +8 to +9
import * as vscode from 'vscode';
import type { TranslationKeys } from './translation-keys';
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Locate core package manifest ==="
CORE_PKG=$(fd '^package\.json$' packages/core 2>/dev/null | head -n1)
if [ -z "$CORE_PKG" ]; then
  echo "Core package.json not found. Listing directory structure:"
  fd -e json . packages/core | head -20
  exit 1
fi
echo "Found: $CORE_PKG"
echo

echo "=== Core package.json contents ==="
cat "$CORE_PKG"
echo

echo "=== Check vscode in dependencies ==="
jq '.dependencies,.devDependencies,.peerDependencies' "$CORE_PKG" 2>/dev/null | grep -i vscode || echo "No vscode dependency found"
echo

echo "=== Direct vscode imports in packages/core/src ==="
rg -n "from ['\"]vscode|import \* as vscode" packages/core/src --no-heading -B 1 -A 1 || echo "No direct vscode imports found"
echo

echo "=== Examine i18n-service.ts around lines 8-9 and 23-26 ==="
if [ -f "packages/core/src/i18n/i18n-service.ts" ]; then
  echo "Lines 1-35:"
  head -n 35 packages/core/src/i18n/i18n-service.ts | cat -n
else
  echo "File not found at expected location. Searching:"
  fd i18n-service.ts packages/core
fi

Repository: breaking-brake/cc-wf-studio

Length of output: 2572


packages/core must not import vscode directly—it creates a missing dependency and breaks platform portability.

The core package lists no vscode dependency but imports it at line 8. This will fail at runtime for non-VSCode consumers and is particularly problematic for the cc-wf-mcp-server CLI tool, which should work standalone.

Extract the VSCode-specific logic into the extension layer using dependency injection:

Suggested patch
-import * as vscode from 'vscode';
@@
+let localeProvider: (() => string) | undefined;
+
+export function setLocaleProvider(provider: () => string): void {
+  localeProvider = provider;
+}
+
 export function getCurrentLocale(): string {
-  // Get VSCode's display language
-  return vscode.env.language;
+  return localeProvider?.() ?? 'en';
 }

Then the VSCode extension layer calls setLocaleProvider(() => vscode.env.language).
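Assembled in one place, the suggested provider pattern might look like this (a sketch based on the patch above, not the final i18n-service.ts code; the 'en' fallback is the assumed default for headless consumers):

```typescript
// Host-agnostic locale lookup: no vscode import in core.
type LocaleProvider = () => string;

let localeProvider: LocaleProvider | undefined;

// Called once by the host layer (VSCode extension, Electron shell, ...).
export function setLocaleProvider(provider: LocaleProvider): void {
  localeProvider = provider;
}

export function getCurrentLocale(): string {
  // Fall back to English when no host registered a provider,
  // e.g. in the standalone cc-wf-mcp-server CLI.
  return localeProvider?.() ?? 'en';
}

// VSCode extension layer only:
// import * as vscode from 'vscode';
// setLocaleProvider(() => vscode.env.language);
```

This keeps the vscode module out of core's dependency graph while letting each host supply its own locale source.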

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/i18n/i18n-service.ts` around lines 8 - 9, The core package
must stop importing vscode: remove the top-level "import * as vscode" from
packages/core/src/i18n/i18n-service.ts and convert the VSCode-specific locale
lookup into an injected provider (e.g. add a setLocaleProvider(localeProvider:
() => string) or accept a localeProvider parameter on the I18n service
constructor); update internal calls to use that locale provider instead of
vscode.env.language; do not add a vscode dependency to core—have the extension
layer call setLocaleProvider(() => vscode.env.language) (or pass that function
into the constructor) so VSCode-specific logic lives only in the extension.

Comment on lines +56 to +57
getWorkflowFilePath(workflowName: string): string {
return path.join(this.workflowsDirectory, `${workflowName}.json`);
⚠️ Potential issue | 🔴 Critical

Validate workflowName before path join to prevent traversal.

workflowName can currently contain path separators or .. segments, letting callers escape the .vscode/workflows directory.

Suggested patch
   getWorkflowFilePath(workflowName: string): string {
+    if (!/^[^/\\]+$/.test(workflowName) || workflowName.includes('..')) {
+      throw new Error('Invalid workflow name');
+    }
     return path.join(this.workflowsDirectory, `${workflowName}.json`);
   }
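The whitelist approach from the AI-agent prompt below can be sketched as a standalone function (names and the /^[A-Za-z0-9._-]+$/ pattern follow the suggestion; this is illustrative, not the actual file-service.ts code):

```typescript
import * as path from 'node:path';

// Reject anything that is not a plain file-name-safe token.
function sanitizeWorkflowName(workflowName: string): string {
  if (!/^[A-Za-z0-9._-]+$/.test(workflowName) || workflowName.includes('..')) {
    throw new Error(`Invalid workflow name: ${workflowName}`);
  }
  return workflowName;
}

function getWorkflowFilePath(workflowsDirectory: string, workflowName: string): string {
  // Validation happens before the join, so "../" can never reach path.join.
  return path.join(workflowsDirectory, `${sanitizeWorkflowName(workflowName)}.json`);
}
```

Rejecting invalid input outright (rather than silently normalizing with path.basename) makes traversal attempts visible to the caller.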
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/services/file-service.ts` around lines 56 - 57, The
getWorkflowFilePath function currently joins workflowName into
workflowsDirectory allowing directory traversal; validate and sanitize
workflowName before joining (in getWorkflowFilePath) by rejecting any input
containing path separators or parent-segment tokens (e.g. '/' '\' or '..') or by
normalizing with path.basename and ensuring the basename equals the original;
enforce a restrictive whitelist (e.g. /^[A-Za-z0-9._-]+$/) or throw a clear
error if validation fails, then use path.join(this.workflowsDirectory,
`${sanitizedName}.json`) to build the path.

Comment on lines +122 to +130
function extractContext(content: string, position: number, contextLength = 50): string {
const start = Math.max(0, position - contextLength);
const end = Math.min(content.length, position + contextLength);

const contextBefore = content.substring(start, position);
const contextAfter = content.substring(position, end);

return `...${contextBefore}[REDACTED]${contextAfter}...`;
}
⚠️ Potential issue | 🔴 Critical

Context redaction currently leaks sensitive values.

extractContext() redacts at position but then includes content.substring(position, end), which still contains the matched secret. This defeats masking in security-sensitive output.

Suggested fix
-function extractContext(content: string, position: number, contextLength = 50): string {
+function extractContext(
+  content: string,
+  position: number,
+  matchLength: number,
+  contextLength = 50
+): string {
   const start = Math.max(0, position - contextLength);
-  const end = Math.min(content.length, position + contextLength);
+  const redactEnd = Math.min(content.length, position + matchLength);
+  const end = Math.min(content.length, redactEnd + contextLength);

   const contextBefore = content.substring(start, position);
-  const contextAfter = content.substring(position, end);
+  const contextAfter = content.substring(redactEnd, end);

   return `...${contextBefore}[REDACTED]${contextAfter}...`;
 }
-      const matchedValue = match[1] || match[0]; // Use capture group if available
-      const position = match.index;
+      const matchedValue = match[1] || match[0]; // Use capture group if available
+      const valueOffsetInMatch = match[0].indexOf(matchedValue);
+      const position =
+        valueOffsetInMatch >= 0 ? match.index + valueOffsetInMatch : match.index;

@@
-        context: extractContext(content, position),
+        context: extractContext(content, position, matchedValue.length),

Also applies to: 148-161
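Putting the fix together, a self-contained version of the corrected helper behaves like this (a sketch following the patch above; the four-argument signature is the suggested one, not necessarily what lands in the PR):

```typescript
// Redacts the matched region itself, then shows context on either side.
function extractContext(
  content: string,
  position: number,
  matchLength: number,
  contextLength = 50
): string {
  const start = Math.max(0, position - contextLength);
  // The secret spans position..position+matchLength; context resumes after it.
  const redactEnd = Math.min(content.length, position + matchLength);
  const end = Math.min(content.length, redactEnd + contextLength);

  const contextBefore = content.substring(start, position);
  const contextAfter = content.substring(redactEnd, end);

  return `...${contextBefore}[REDACTED]${contextAfter}...`;
}

// For 'token=abc123 rest' with the match at offset 6, length 6,
// the output is '...token=[REDACTED] rest...' — the secret never appears.
```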

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/utils/sensitive-data-detector.ts` around lines 122 - 130,
The current extractContext function leaks secrets because it redacts only at
position but still includes content.substring(position, end); update
extractContext(content: string, position: number, contextLength = 50,
matchLength = 0): string to treat the redacted region as
position..position+matchLength so compute contextAfter from position +
matchLength to end (and clamp bounds), returning
`...${contextBefore}[REDACTED]${contextAfter}...`; also update any callers (and
the similar implementation around lines 148-161) to pass the matched secret
length (or 0 when unknown) so the matched text is excluded from context.

@breaking-brake
Owner

Thanks for this big PR! Before diving into a full review, a couple of questions:

  1. What's the use case? Who would benefit from running this outside of VSCode (e.g., Electron standalone, Web application)?
  2. What's the roadmap? Could you share a rough outline of the next steps you have in mind? It would help us plan reviews.

Also, could you take a look at the 3 critical findings from CodeRabbit?

  • vscode import in core package (i18n-service.ts)
  • Path traversal in file-service.ts
  • Secret leakage in sensitive-data-detector.ts

Thanks!
