Releases · bitovi/superconnect
v0.2.10 - Windows PowerShell output visibility fix
Fixed
- Windows PowerShell output visibility: Child process output (Figma scan details, code generation progress) now displays correctly on Windows
- Root cause: `spawnSync` with `shell: true` spawned cmd.exe as an intermediary, which interfered with console output and ANSI color codes
- Solution: call Node.js scripts directly via `process.execPath` with argument arrays instead of shell command strings (see the sketch after this list)
- Eliminates shell quoting/escaping issues and works identically across all platforms
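A minimal sketch of the change, assuming a hypothetical child-script path and flag (the actual scripts superconnect invokes are not listed in these notes):

```ts
import { spawnSync } from "node:child_process";

// Hypothetical script path and flag, for illustration only.
const scriptPath = "./dist/scan-figma.js";

// Before (v0.2.9 and earlier): a shell command string run through cmd.exe on Windows,
// which swallowed child output and ANSI color codes.
// spawnSync(`node "${scriptPath}" --verbose`, { shell: true, stdio: "inherit" });

// After (v0.2.10): invoke the current Node binary directly with an argument array.
// No intermediary shell, no quoting/escaping, and child output streams straight through.
spawnSync(process.execPath, [scriptPath, "--verbose"], { stdio: "inherit" });
```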
v0.2.9 - Hybrid AST/regex validation
Changed
- Refactor validation layer to use hybrid AST/regex approach
- Replace regex-based `figma.*()` extraction with ts-morph AST traversal for better edge-case handling (a short sketch follows this list)
- Keep proven regex patterns for template/JSX validation
- Improves correctness (+43%) and maintainability (+33%) with minimal performance impact
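A minimal ts-morph sketch of the AST half of the hybrid approach, using an illustrative file name (the project's real traversal and validation rules are more involved):

```ts
import { Project, SyntaxKind } from "ts-morph";

// Illustrative file name; superconnect scans actual Code Connect files.
const project = new Project();
const source = project.addSourceFileAtPath("Button.figma.tsx");

// Walk the AST for call expressions whose callee starts with `figma.`,
// instead of matching the raw source text with a regex.
const figmaCalls = source
  .getDescendantsOfKind(SyntaxKind.CallExpression)
  .filter((call) => call.getExpression().getText().startsWith("figma."));

for (const call of figmaCalls) {
  console.log(`${call.getExpression().getText()} at line ${call.getStartLineNumber()}`);
}
```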
v0.2.8 - 4x faster Windows validation
Performance
- Replace npx with direct CLI invocation for Figma validation (4x faster on Windows)
- Resolve @figma/code-connect binary path directly from node_modules
- Use process.execPath (node) to invoke CLI instead of npx
- Eliminates ~30s npx overhead per CLI call on Windows
- Windows CI improved from 64s to 16s for unit tests
- Reduce timeout from 120s to 30s (no package download overhead)
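A sketch of the direct-invocation idea described in the bullets above; the node_modules layout, bin path, and CLI arguments are assumptions for illustration, not the exact paths superconnect uses:

```ts
import { spawnSync } from "node:child_process";
import path from "node:path";

// Assumed local-install layout for the @figma/code-connect bin entry point.
const figmaCli = path.join(
  process.cwd(),
  "node_modules", "@figma", "code-connect", "bin", "figma"
);

// Run the CLI with the current Node binary. With no npx resolution step,
// the ~30s per-call overhead on Windows disappears, so a 30s timeout is enough.
const result = spawnSync(process.execPath, [figmaCli, "--help"], {
  encoding: "utf8",
  timeout: 30_000,
});
console.log(result.status, result.stdout);
```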
Changed
- Trigger CI unit tests on version tags in addition to branch pushes
- Improve release process documentation with pre-flight checks and error summaries
- Remove global Figma CLI pre-install from CI workflow (no longer needed)
v0.2.7 - Windows compatibility fix
Fixed
- CRITICAL: Windows compatibility for validation - add `shell: true` to all npx spawn calls
- v0.2.6 used `npx.cmd` but still failed with EINVAL on Windows
- Now uses `shell: true`, which is the Node.js-recommended approach for cross-platform compatibility
- Removes platform-specific npx.cmd detection in favor of a simpler, more robust solution
v0.2.6 - Windows npx compatibility fix
Fixed
- CRITICAL: Windows npx compatibility - use npx.cmd on Windows to prevent ENOENT errors
- Previously caused 100% validation failure on Windows with "spawnSync npx ENOENT"
- Affects all Code Connect file validation attempts
- Now correctly detects Windows platform and uses npx.cmd
Changed
- Streamline agent documentation for clarity and reduced token usage
v0.2.5
🚨 CRITICAL BUG FIX
This release fixes a critical bug where @figma/code-connect was in devDependencies instead of dependencies, causing 100% validation failure for all global installs.
Fixed
- CRITICAL: Move @figma/code-connect from devDependencies to dependencies
- Previously, global npm installs (`npm install -g @bitovi/superconnect`) did not include the Figma CLI
- This caused 100% validation failure on all Code Connect files with "unknown error"
- Now the CLI is always installed with superconnect
- Add upfront check for Figma CLI availability with clear troubleshooting steps
- Improve validation error messages to show exit code, stdout, and stderr for debugging
- Fix Anthropic SDK 0.71.2 compatibility (a sketch follows this list):
  - Add explicit `stream: false` for non-streaming requests
  - Increase timeout to 20 minutes for long-running orientation tasks (the SDK default was 10 minutes)
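A minimal sketch of the two SDK tweaks; the model name and prompt are placeholders:

```ts
import Anthropic from "@anthropic-ai/sdk";

// Raise the client timeout to 20 minutes; the SDK default of 10 minutes
// cut off long-running orientation tasks.
const client = new Anthropic({ timeout: 20 * 60 * 1000 });

async function orient(prompt: string) {
  return client.messages.create({
    model: "claude-sonnet-4-5", // placeholder model name
    max_tokens: 4096,
    stream: false, // explicit non-streaming request for SDK 0.71.2
    messages: [{ role: "user", content: prompt }],
  });
}
```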
Changed
- Update dependencies to latest compatible versions:
- @anthropic-ai/sdk: 0.18.0 → 0.71.2
- commander: 12.1.0 → 14.0.2
- openai: 4.104.0 → 6.14.0
- undici: 6.18.1 → 7.16.0
- @figma/code-connect: 1.3.12 (already current)
- Keep chalk at 4.1.2 and p-limit at 3.1.0 (newer versions are ESM-only, incompatible with CommonJS)
Full Changelog: v0.2.4...v0.2.5
v0.2.4
Fixed
- Add helpful error messages when API returns "Invalid model name" errors (400 status)
- Shows current model being used
- Suggests common alternatives (gpt-4o, claude-sonnet-4-5, etc.)
- Explains how to set model via superconnect.toml or CLI flag
- Links to model documentation
- Warn users when using a custom `base_url` without explicitly setting a `model`
- Prevents confusion when the default model doesn't exist on custom endpoints
- Applies to LiteLLM, Azure OpenAI, vLLM, and other OpenAI-compatible proxies
v0.2.3
Fixed
- Windows compatibility: Use Node's native fetch instead of undici to fix "fetch failed" errors on Windows PowerShell with corporate networks
- Include CHANGELOG.md in npm package files array so users can see release notes
- Change CI workflow to use `npm ci` instead of `npm install` for consistency with the publish workflow
- Add .npmrc with engine-strict=true to enforce the Node >=22 requirement (previously only advisory)
- Remove redundant .npmignore file (files[] array in package.json already controls what's published)
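A sketch of the fetch change from the first bullet above, with an illustrative Figma REST endpoint and auth header (not necessarily the requests superconnect makes):

```ts
// Previously: import { fetch } from "undici";
// Now: rely on Node's built-in fetch (Node >= 22 per the engine requirement),
// which avoids the "fetch failed" errors seen on Windows behind corporate proxies.
async function checkFigmaToken(token: string): Promise<boolean> {
  const response = await fetch("https://api.figma.com/v1/me", {
    headers: { "X-Figma-Token": token }, // illustrative auth header
  });
  return response.ok;
}
```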
v0.2.2
v0.2.1
Added
- Include git SHA in npm package version output via prepublishOnly script
- E2E tests now run in parallel using GitHub Actions matrix strategy
Fixed
- Read api_key from superconnect.toml when validating the agent token (fixes an interactive-setup bug where custom API keys were written to the TOML file but not read back during validation)