Kai is a context-aware, AI-powered coding assistant designed to run locally and interact directly with your project's filesystem. It helps streamline development workflows through conversation-driven code generation, modification, and task execution.
- Conversation Mode: Engage in an interactive chat session with an AI (powered by Google Gemini by default). Kai automatically builds context from your project files to inform the AI's responses.
- Consolidation Mode: After a conversation, Kai can analyze the discussion and the current codebase to propose and apply consolidated code changes directly to your files, attempting to bring the project state in line with the conversation outcome.
- Context-Awareness:
  - Reads your project files (respecting `.gitignore` and `.kaiignore`) to provide relevant context to the AI.
  - Supports multiple context modes:
    - `full`: Includes all non-ignored text files (suitable for smaller projects).
    - `analysis_cache`: Uses a pre-generated summary of the project structure and file purposes (faster for large projects; requires an initial analysis).
    - `dynamic`: Uses the analysis cache plus the current query/history to let the AI select the most relevant files to load in full (balances context relevance against token limits).
  - Automatically determines the best mode on the first run, or allows manual selection.
- Project Analysis: Can analyze your project to generate a cache (`.kai/project_analysis.json`) containing file summaries, types, and sizes, enabling efficient context handling for large repositories.
- Direct Filesystem Interaction: Can create, modify, and delete files based on conversation analysis (Consolidation Mode) or direct instructions (future agentic modes).
- Iterative Compilation: After applying changes, Kai can run `tsc --noEmit` and feed any errors back to the AI for another pass.
- AI-Assisted Committing: Optionally generate a commit message with Gemini Flash and commit changes directly from Kai.
- Configurable: Uses a local `.kai/config.yaml` for settings like AI models, token limits, and directories.
- Editor Integration: Opens conversations in your default command-line editor (tested with Sublime Text's `subl --wait`; basic support for JetBrains IDEs such as WebStorm, CLion, and IntelliJ IDEA via their command-line launchers).
- Initialization: On first run in a project, Kai checks for a Git repository. If none exists (and the directory is safe, or the user confirms), it initializes Git and creates a `.kai` directory for logs and configuration.
- Context Mode: Determines the context mode (`full`, `analysis_cache`, or `dynamic`) based on project size (token estimation) or existing configuration. If `analysis_cache` or `dynamic` is selected and the cache doesn't exist, Kai runs the project analysis first.
- Main Menu: Presents options to:
  - Start/Continue Conversation: Loads existing history or starts a new conversation log (`.kai/logs/*.jsonl`), then opens your configured editor with the history, ready for your prompt. Context (based on the selected mode) is automatically prepended to your prompt before it is sent to the AI.
  - Consolidate Changes: Select a conversation; Kai analyzes the history since the last successful consolidation, compares it with the current code, generates proposed file changes (creations, modifications, deletions), and applies them directly to your filesystem. It's crucial to review these changes with Git tools (`git status`, `git diff`) before committing.
  - Re-run Project Analysis: Manually triggers the analysis process to update the `.kai/project_analysis.json` cache. Useful if you've made significant changes outside of Kai.
  - Change Context Mode: Lets you manually switch between `full`, `analysis_cache`, and `dynamic` modes and saves the setting to `.kai/config.yaml`.
  - Delete Conversation: Lets you select and remove conversation log files.
  - Scaffold New Project: Creates a fresh project directory with default Kai configuration and a basic TypeScript setup.
  - Generate .kaiignore: Asks the AI to suggest a fresh `.kaiignore` based on your current files.
After changes are applied, Kai guides you through committing them:
- Display of modified files: Prints the list from `git status --short` so you can review what changed.
- Prompt to generate a commit message: Kai asks whether to create a commit message with Gemini.
- Confirmation before committing: The proposed message is shown, and you must confirm before `git commit` runs.
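The modified-file list is plain `git status --short` (porcelain) output; extracting paths from it can be sketched as below. This is an illustration, not Kai's actual code, and it assumes the simple `XY <path>` line shape (renamed entries of the form `old -> new` are not handled):

```typescript
// Each `git status --short` line is two status characters, a space, then the
// path (e.g. " M src/index.ts" or "?? README.md"). This illustrative helper
// extracts just the paths.
function parseShortStatus(output: string): string[] {
  return output
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => line.slice(3));
}
```

For example, `parseShortStatus(" M src/index.ts\n?? README.md\n")` yields `["src/index.ts", "README.md"]`.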
Example session:
```text
Modified files:
- src/index.ts
- README.md

Generate commit message with Gemini and commit all changes? (Y/n) y

Proposed commit message:
Add greeting to CLI output

Use this commit message? (Y/n) y

[main abc1234] Add greeting to CLI output
 2 files changed, 5 insertions(+)
```
- Node.js: Version specified in `package.json` or higher.
- npm: Comes with Node.js.
- Git: Required for context building (`.gitignore` handling) and change tracking.
- Command-Line Editor:
  - Sublime Text (Recommended Default): Requires the `subl` command-line tool installed and on your system's PATH (usually configured during Sublime Text installation). Use the `--wait` flag.
  - JetBrains IDEs (Experimental): Requires the command-line launcher to be created (e.g., via Tools -> Create Command-line Launcher... in the IDE) and the launcher's directory added to your system's PATH. Kai attempts to detect `webstorm`, `clion`, `idea`, etc., on macOS.
  - Other editors may work if they provide a CLI command that waits for the file to be closed.
- Google Gemini API Key: Required for the AI interactions.
- Anthropic API Key (optional): Needed to use Anthropic Claude models.
1. Install Globally (Recommended for Users):

   ```bash
   npm install -g kai
   ```

2. Set Gemini API Key: Kai reads the API key from the `GEMINI_API_KEY` environment variable.

   ```bash
   # On Linux/macOS (add to ~/.bashrc, ~/.zshrc, etc. for persistence)
   export GEMINI_API_KEY='YOUR_API_KEY_HERE'

   # On Windows (Command Prompt - for current session only)
   set GEMINI_API_KEY=YOUR_API_KEY_HERE

   # On Windows (PowerShell - for current session only)
   $env:GEMINI_API_KEY = 'YOUR_API_KEY_HERE'
   ```

   Tip: Use tools like `dotenv` or your shell's profile configuration to manage environment variables easily.

3. Set Anthropic API Key (Optional): Kai reads the API key from the `ANTHROPIC_API_KEY` environment variable if you plan to use Anthropic Claude models.

   ```bash
   # On Linux/macOS
   export ANTHROPIC_API_KEY='YOUR_API_KEY_HERE'

   # On Windows (Command Prompt)
   set ANTHROPIC_API_KEY=YOUR_API_KEY_HERE

   # On Windows (PowerShell)
   $env:ANTHROPIC_API_KEY = 'YOUR_API_KEY_HERE'
   ```
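Before launching Kai, you can confirm which provider keys your shell actually exposes. The `key_status` helper below is purely illustrative (it is not part of Kai) and avoids printing the secret values themselves:

```shell
# Report whether an API key variable is set, without revealing its value.
key_status() {
  if [ -n "${2:-}" ]; then
    echo "$1: set"
  else
    echo "$1: NOT set"
  fi
}

key_status GEMINI_API_KEY "${GEMINI_API_KEY:-}"
key_status ANTHROPIC_API_KEY "${ANTHROPIC_API_KEY:-}"
```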
1. Clone the Repository:

   ```bash
   git clone https://github.com/nodesman/kai.git
   cd kai
   ```

2. Install Dependencies:

   ```bash
   npm install
   ```

3. Compile TypeScript (outputs to `bin/`):

   ```bash
   npm run build
   ```

4. Set API Key: See Step 2 above.

5. Run:

   ```bash
   node bin/kai.js
   ```

   (Optionally, use `npm link` to make the `kai` command available globally from your source directory.)
Before running TypeScript or Jest commands, make sure to install all project dependencies:

```bash
npm install
# or
yarn install
```

If dependencies are missing, `tsc` and `jest` will report errors about missing type definitions.
- Navigate to your project's root directory in your terminal.
- Run the `kai` command. (If running from source without `npm link`, use `node bin/kai.js`.)
- Follow the interactive prompts to select a mode (Start/Continue Conversation, Consolidate Changes, etc.).
Important Notes:
- Consolidation is direct: Changes made during Consolidation Mode are applied directly to your files. Always review them with `git status`, `git diff`, or a Git GUI before committing.
- Optional Auto Commit: If uncommitted changes are detected, Kai can generate a commit message and commit them for you.
- Context Limits: Be mindful of your AI model's token limits. For large projects, use the `analysis_cache` or `dynamic` context modes, and re-run the analysis if needed.
- Editor Behavior: Kai relies on the editor's command-line tool supporting a "wait" flag (like `subl -w`) to pause execution until you close the file. If your editor doesn't wait, the conversation loop may proceed prematurely.
Kai uses a configuration file located at `.kai/config.yaml` within your project directory. If it doesn't exist on the first run, a default one will be created.
Key settings include:
- `project.chats_dir`: Location for conversation logs (default: `.kai/logs`).
- `analysis.cache_file_path`: Location for the analysis cache (default: `.kai/project_analysis.json`).
- `context.mode`: One of `full`, `analysis_cache`, or `dynamic`. Often set automatically, but can be overridden.
- `gemini.model_name`: Primary Gemini model to use.
- `gemini.subsequent_chat_model_name`: Faster/cheaper Gemini model for subsequent turns (if configured).
- `anthropic.api_key`: API key for Anthropic Claude models (loaded from the `ANTHROPIC_API_KEY` environment variable).
- `anthropic.model_name`: Claude model to use for Anthropic requests (default: `claude-opus-4-20250514`).
- `gemini.max_output_tokens`: Max tokens for the AI's response.
- `gemini.max_prompt_tokens`: Max tokens for the input prompt (context limit).
- `gemini.generation_max_retries`: Retries for the file generation step in consolidation.
- `gemini.generation_retry_base_delay_ms`: Base delay for generation retries.
- `gemini.interactive_prompt_review`: Set to `true` to review/edit prompts in Sublime Text before sending them to Gemini Pro models during chat.
- `project.typescript_autofix`: If `true`, run `tsc --noEmit` after each consolidation pass.
- `project.autofix_iterations`: How many times Kai will attempt to re-run generation after compilation errors (default: 3).
- `project.coverage_iterations`: Maximum loops to generate tests and rerun coverage reports (default: 3).
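For orientation, a `.kai/config.yaml` combining some of these keys might look like the sketch below. Every value shown is a placeholder chosen for illustration, not a shipped default; consult `src/lib/config_defaults.ts` for the real defaults.

```yaml
# Illustrative example only; all values are placeholders.
project:
  chats_dir: .kai/logs
  typescript_autofix: true
  autofix_iterations: 3
  coverage_iterations: 3
context:
  mode: dynamic
gemini:
  model_name: gemini-1.5-pro   # example model name
  max_output_tokens: 8192
```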
(Refer to `src/lib/config_defaults.ts` for default values.)
When the TypeScript feedback loop is enabled, Kai runs `npx tsc --noEmit` after applying generated changes. Any compiler errors are appended to the conversation and the generation step is retried. The process repeats up to `project.autofix_iterations` times.
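The shape of that loop can be sketched abstractly. This is an illustration, not Kai's actual implementation: `check` stands in for running `npx tsc --noEmit` and returning its diagnostics text, and `regenerate` stands in for asking the AI for another pass with those diagnostics.

```typescript
// Illustrative retry loop mirroring project.autofix_iterations.
// `check` returns compiler diagnostics ("" when the project compiles cleanly);
// `regenerate` attempts a fix using the diagnostics text.
function autofixLoop(
  check: () => string,
  regenerate: (errors: string) => void,
  maxIterations = 3
): boolean {
  for (let i = 0; i < maxIterations; i++) {
    const errors = check();
    if (errors === "") return true; // clean compile, stop early
    regenerate(errors);             // feed errors back for another attempt
  }
  return check() === ""; // final verdict after the last regeneration
}
```

In Kai itself, `check` would shell out to the TypeScript compiler; factoring it out as a parameter just keeps the sketch self-contained.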
Kai can also raise your test coverage automatically. The `TestCoverageRaiser` utility runs your Jest suite with coverage enabled, identifies the file with the lowest coverage, and asks the AI to write a new test for it. Launch it by running `kai` and choosing Harden from the main menu (select the desired test framework). Kai will iterate up to `project.coverage_iterations` times, re-running coverage and generating tests until coverage improves. See `docs/100coverageplay.md` for a phased approach to reaching 100%. During hardening you can pick which Gemini model to use, mirroring the options available for conversations and consolidation.
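Selecting the weakest file can be illustrated against the shape of Jest's `json-summary` coverage report, which maps file paths (plus a `total` entry) to per-metric percentages. The `lowestCoverageFile` helper below is hypothetical, not Kai's API:

```typescript
// Simplified shape of an entry in Jest's json-summary coverage report.
interface CoverageEntry {
  statements: { pct: number };
}

// Return the file with the lowest statement coverage, skipping the aggregate
// "total" entry; returns null for an empty report.
function lowestCoverageFile(summary: Record<string, CoverageEntry>): string | null {
  let worst: string | null = null;
  let worstPct = Infinity;
  for (const [file, entry] of Object.entries(summary)) {
    if (file === "total") continue;
    if (entry.statements.pct < worstPct) {
      worstPct = entry.statements.pct;
      worst = file;
    }
  }
  return worst;
}
```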
- Build: `npm run build` (compiles TypeScript from `src/` to `bin/`)
- Test: `npm test` (runs Jest tests)
- Run Locally: `npm start` or `node bin/kai.js`
This project uses `npm version` for semantic versioning. The `preversion` script runs tests and checks for a clean Git status, and `postversion` pushes the commit and tag to the remote.
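Such lifecycle hooks are conventionally wired up in `package.json` roughly as below. This is an illustrative sketch of the pattern, not this repository's actual script definitions; consult the project's `package.json` for those.

```json
{
  "scripts": {
    "preversion": "npm test && git diff-index --quiet HEAD --",
    "postversion": "git push && git push --tags"
  }
}
```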
- Patch: `npm version patch -m "Upgrade to %s for [reason]"`
- Minor: `npm version minor -m "Upgrade to %s for [feature]"`
- Major: `npm version major -m "Upgrade to %s for [breaking change]"`
Contributions are welcome! Please feel free to open an issue or submit a pull request.
This project is licensed under the MIT License.