Add --no-context flag to disable context awareness (#52)
Conversation
Add an option to disable context awareness (sending the window title and app name to the AI) for privacy.
- CLI: the `koe --no-context` flag disables it at startup
- Config: `[ai] context_enabled = false` disables it persistently
- The CLI flag overrides the config setting

Closes #42
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
📝 Walkthrough
Adds an AI context toggle (config + CLI override) to stop sending active-window info, and enforces configurable byte-size limits for API responses, IPC messages, and file reads across processors, the IPC server, and file-backed loaders; constructors/signatures and call sites are updated accordingly.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 5 passed
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c09e7fc6bb
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/ui/settings_window.rs (1)
78-95: ⚠️ Potential issue | 🟠 Major — Hardcoded `context_enabled: true` will overwrite the user's config setting.

When users save settings through this UI, `context_enabled` is always set to `true`, overwriting any `context_enabled = false` the user may have set in their `config.toml`. This could unexpectedly re-enable context awareness for users who disabled it for privacy. Consider one of:
- Add a toggle widget to the AI settings page to let users control this setting.
- Preserve the existing config value when saving (read the current `context_enabled` value from the loaded config).

🛡️ Minimal fix to preserve existing config value
Add a `context_enabled` field to `Widgets` and read it from the loaded config:

```diff
 struct Widgets {
     // ... existing fields ...
+    context_enabled: bool,
 }
```

In `build()`, capture the loaded value:

```diff
 let widgets = Rc::new(Widgets {
     // ... existing fields ...
+    context_enabled: config.ai.context_enabled,
 });
```

In `read_config()`:

```diff
 ai: AiConfig {
     engine: ai_engine,
-    context_enabled: true,
+    context_enabled: self.context_enabled,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ui/settings_window.rs` around lines 78 - 95, the `AiConfig` is hardcoding `context_enabled: true` in settings_window.rs, which overwrites the user config; modify the UI to preserve or expose that setting by adding a `context_enabled` field to `Widgets`, wiring a checkbox/toggle in `build()` to capture the current UI state (or initialize it from the loaded config in `read_config()`), and then set `AiConfig.context_enabled` from that widget instead of `true`; update `read_config()` to populate the new widget from the existing config and use the widget value when constructing `AiConfig` so existing `false` values are not overwritten.

src/ipc/server.rs (1)
22-24: ⚠️ Potential issue | 🟠 Major — The IPC message limit is hardcoded, so `limits.max_ipc_message_bytes` is ignored.

The server always uses `DEFAULT_MAX_IPC_MESSAGE_BYTES`; user-configured IPC limits never take effect. Wire the limit through `start(...)` and pass it from the daemon config.

Suggested wiring change
```diff
-pub async fn start(
-    mut shutdown_rx: tokio::sync::watch::Receiver<bool>,
-) -> Result<mpsc::Receiver<IpcRequest>> {
+pub async fn start(
+    mut shutdown_rx: tokio::sync::watch::Receiver<bool>,
+    max_message_bytes: usize,
+) -> Result<mpsc::Receiver<IpcRequest>> {
     ...
-    if let Err(e) = handle_connection(stream, tx).await {
+    if let Err(e) =
+        handle_connection_with_limit(stream, tx, max_message_bytes).await
+    {
         tracing::error!("IPC connection error: {}", e);
     }
```

```rust
// caller side (src/daemon.rs)
let ipc_rx = ipc::server::start(shutdown_rx.clone(), config.limits.max_ipc_message_bytes)
    .await
    .context("starting IPC server")?;
```

Also applies to: 91-99
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ipc/server.rs` around lines 22 - 24, start currently hardcodes DEFAULT_MAX_IPC_MESSAGE_BYTES so user-configured limits are ignored; change start to accept a max_ipc_message_bytes parameter (e.g. u32/usize) and use that instead of DEFAULT_MAX_IPC_MESSAGE_BYTES when creating the framed reader/codec and any other IPC reader/codec instantiation (references: start, DEFAULT_MAX_IPC_MESSAGE_BYTES); update the caller in daemon to pass config.limits.max_ipc_message_bytes into ipc::server::start and likewise replace the hardcoded constant in the other occurrences noted (around the 91-99 area) to use the new parameter so the config value is threaded through all IPC server construction sites.
🧹 Nitpick comments (1)
src/history/mod.rs (1)
48-53: Add a history-specific oversized-file test for the new limit path.
`History::load` now enforces `max_file_bytes`, but the current updates only pass `u64::MAX` in tests, so this behavior isn't validated in this module. Please add a regression test that writes `history.jsonl` above the limit and asserts that load fails.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/history/mod.rs` around lines 48 - 53, Tests don't currently validate the new max_file_bytes enforcement in History::load; add a unit test that creates a temporary directory, writes a history.jsonl file whose size exceeds a chosen small max_file_bytes, then calls History::load(dir, _, max_file_bytes) and asserts it returns an Err; use the same History::load function and the underlying read_to_string_limited behavior to trigger the failure (do not pass u64::MAX) and clean up the temp file after the assertion.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/ai/claude.rs`:
- Around line 15-18: The function read_response_json is currently private but is
called from a sibling module (src/ai/ollama.rs); change its visibility to
pub(super) so modules under the parent ai module can call it — update the
function signature for read_response_json(response: reqwest::Response,
max_bytes: usize) -> Result<serde_json::Value> to be pub(super) async fn
read_response_json(...) and keep the body unchanged so the call from
super::claude::read_response_json(...) in ollama.rs compiles.
- Around line 28-39: The current code calls response.bytes().await which buffers
the entire body into memory after the Content-Length check, so a
missing/incorrect header can still cause OOM; replace the bytes() call with a
streaming read over response.bytes_stream() (or an equivalent size-limited
reader) that accumulates chunks into a buffer up to max_bytes and returns an
error if the limit is exceeded while streaming, then pass the collected buffer
to serde_json::from_slice; keep the existing content_length check but enforce
the max_bytes limit during the streaming read of response (referencing
response.bytes(), response.bytes_stream(), max_bytes, and
serde_json::from_slice).
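The chunked accumulation this prompt describes can be sketched independently of reqwest. This is a hedged stand-in: the `Vec<Vec<u8>>` iterator and the `String` error type are illustrative substitutes for reqwest's `chunk()` stream and the project's `anyhow` errors.

```rust
// Size-capped accumulation of body chunks, as the review suggests.
// The chunk iterator stands in for reqwest::Response::chunk() results.
fn accumulate_limited<I>(chunks: I, max_bytes: usize) -> Result<Vec<u8>, String>
where
    I: IntoIterator<Item = Vec<u8>>,
{
    let mut bytes = Vec::new();
    for chunk in chunks {
        // Reject before copying, so memory use never exceeds max_bytes.
        if bytes.len().saturating_add(chunk.len()) > max_bytes {
            return Err(format!("response too large (more than {} bytes)", max_bytes));
        }
        bytes.extend_from_slice(&chunk);
    }
    Ok(bytes)
}
```

The key property is that the check runs before each append, so a response with a missing or lying `Content-Length` header can never allocate past the limit.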
In `@src/ai/mod.rs`:
- Line 8: Remove the unused import LimitsConfig from the use statement in
src/ai/mod.rs: update the line that currently reads use
crate::config::{AiConfig, AiEngine, LimitsConfig}; to only import the symbols
actually used (AiConfig and AiEngine) so that LimitsConfig is no longer
referenced; this eliminates the unused import warning without changing any other
logic in the module.
In `@src/config.rs`:
- Around line 265-277: The current read_to_string_limited function does a
metadata size check then calls std::fs::read_to_string, which creates a TOCTOU
window; to fix it open the file with std::fs::File::open and create a reader
that enforces the limit with std::io::Read::take(max_bytes) (e.g., let mut
limited = file.take(max_bytes);) then read from that reader into a String and
detect if the file exceeded the limit (if the reader returns more data or you
hit the exact limit and more bytes are available), removing the unsafe reliance
on the prior metadata check; update read_to_string_limited to use File::open,
.take(max_bytes), and read_to_string-like logic to enforce the size at
read-time.
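A self-contained sketch of the `take`-based approach this prompt describes, using `std::io::Error` in place of the project's `anyhow::Result`:

```rust
use std::io::Read;
use std::path::Path;

// Reading at most max_bytes + 1 lets us detect an oversized file at
// read time, without relying on a racy metadata check and without
// buffering more than one byte past the limit.
fn read_to_string_limited(path: &Path, max_bytes: u64) -> std::io::Result<String> {
    let file = std::fs::File::open(path)?;
    let mut bytes = Vec::new();
    file.take(max_bytes.saturating_add(1)).read_to_end(&mut bytes)?;
    if bytes.len() as u64 > max_bytes {
        return Err(std::io::Error::new(
            std::io::ErrorKind::InvalidData,
            format!("{} is too large (limit {} bytes)", path.display(), max_bytes),
        ));
    }
    String::from_utf8(bytes)
        .map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidData, e))
}
```

Because `take` caps the reader itself, the TOCTOU window of a metadata-then-read sequence disappears: whatever the file grows to between open and read, at most `max_bytes + 1` bytes ever reach memory.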
In `@src/ipc/server.rs`:
- Around line 110-124: The current loop uses reader.read_line(&mut line) and
checks line.len() only after the full line is read, allowing an attacker to send
a very long/no-newline payload and grow memory unbounded; replace the unbounded
read with a bounded framing approach (e.g., wrap the underlying stream with
tokio_util::codec::Framed or FramedRead using tokio_util::codec::LinesCodec with
max_frame_length = max_message_bytes) or implement a manual bounded reader that
rejects/returns an error as soon as the accumulation exceeds max_message_bytes;
adjust the loop to read framed lines (or fall back to the bounded read) and
continue using IpcResponse::Error and writer.write_all for the error reply when
the limit is exceeded.
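The manual bounded reader this prompt mentions can be sketched synchronously with `std::io::BufRead`; the real server would use tokio's `AsyncBufRead`, but the `fill_buf`/`consume` pattern is the same. Names here are illustrative, not the project's actual API.

```rust
use std::io::{BufRead, Error, ErrorKind};

// Read one newline-terminated message, rejecting as soon as the
// accumulated bytes exceed max_bytes — before a newline-free payload
// can grow memory unbounded.
fn read_line_limited<R: BufRead>(
    reader: &mut R,
    max_bytes: usize,
) -> std::io::Result<Option<String>> {
    let mut line: Vec<u8> = Vec::new();
    loop {
        let (done, used) = {
            let buf = reader.fill_buf()?;
            if buf.is_empty() {
                // EOF: return a final unterminated line, or None if nothing was read.
                return Ok(if line.is_empty() {
                    None
                } else {
                    Some(String::from_utf8_lossy(&line).into_owned())
                });
            }
            match buf.iter().position(|&b| b == b'\n') {
                Some(pos) => {
                    if line.len() + pos > max_bytes {
                        return Err(Error::new(ErrorKind::InvalidData, "message too large"));
                    }
                    line.extend_from_slice(&buf[..pos]);
                    (true, pos + 1) // consume the newline too
                }
                None => {
                    // No newline yet: enforce the cap before buffering this chunk.
                    if line.len() + buf.len() > max_bytes {
                        return Err(Error::new(ErrorKind::InvalidData, "message too large"));
                    }
                    line.extend_from_slice(buf);
                    (false, buf.len())
                }
            }
        };
        reader.consume(used);
        if done {
            return Ok(Some(String::from_utf8_lossy(&line).into_owned()));
        }
    }
}
```

This is equivalent in effect to `tokio_util::codec::LinesCodec::new_with_max_length(max_bytes)`: both fail the frame during accumulation rather than after a full `read_line`.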
---
Outside diff comments:
In `@src/ipc/server.rs`:
- Around line 22-24: start currently hardcodes DEFAULT_MAX_IPC_MESSAGE_BYTES so
user-configured limits are ignored; change start to accept a
max_ipc_message_bytes parameter (e.g. u32/usize) and use that instead of
DEFAULT_MAX_IPC_MESSAGE_BYTES when creating the framed reader/codec and any
other IPC reader/codec instantiation (references: start,
DEFAULT_MAX_IPC_MESSAGE_BYTES); update the caller in daemon to pass
config.limits.max_ipc_message_bytes into ipc::server::start and likewise replace
the hardcoded constant in the other occurrences noted (around the 91-99 area) to
use the new parameter so the config value is threaded through all IPC server
construction sites.
In `@src/ui/settings_window.rs`:
- Around line 78-95: The AiConfig is hardcoding context_enabled: true in
settings_window.rs which overwrites user config; modify the UI to preserve or
expose that setting by adding a context_enabled field to Widgets, wiring a
checkbox/toggle in build() to capture the current UI state (or initialize it
from the loaded config in read_config()), and then set AiConfig.context_enabled
from that widget instead of true; update read_config() to populate the new
widget from the existing config and use the widget value when constructing
AiConfig so existing false values are not overwritten.
---
Nitpick comments:
In `@src/history/mod.rs`:
- Around line 48-53: Tests don't currently validate the new max_file_bytes
enforcement in History::load; add a unit test that creates a temporary
directory, writes a history.jsonl file whose size exceeds a chosen small
max_file_bytes, then calls History::load(dir, _, max_file_bytes) and asserts it
returns an Err; use the same History::load function and the underlying
read_to_string_limited behavior to trigger the failure (do not pass u64::MAX)
and clean up the temp file after the assertion.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7be9fe8d-82a6-408c-8dc2-4f0a21cd3114
📒 Files selected for processing (13)
config.toml, src/ai/claude.rs, src/ai/mod.rs, src/ai/ollama.rs, src/config.rs, src/daemon.rs, src/dictionary.rs, src/history/mod.rs, src/ipc/server.rs, src/main.rs, src/memory/mod.rs, src/ui/history_page.rs, src/ui/settings_window.rs
- [P0] Pass max_response_bytes into create_processor
- [P0] Make read_response_json pub(crate) so it can be called from ollama.rs
- [P1] Preserve the CLI --no-context override across config reloads
- [P1] Have settings_window preserve context_enabled/limits from the existing config
- [Critical] Fix the TOCTOU in read_to_string_limited (changed to read → size check)
- [Critical] Switch the IPC read_line to a fill_buf-based size-limited read

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/daemon.rs (1)
273-345: ⚠️ Potential issue | 🟠 Major — `ReloadConfig` needs to rebuild disabled/optional persisted state too.

This path updates `config` without fully reinitializing the runtime state. `history` is never reloaded/cleared, and `mem` is only touched when the new config keeps memory enabled. After a GUI save + reload, disabling history can still keep `hist.add_entry(...)` writing to disk, and disabling memory can keep previously learned context / Whisper hints active because the old `mem` instance is still used later.

💡 Suggested fix shape
```diff
 Ok(mut new_config) => {
     // Preserve CLI --no-context override
     if cli_no_context {
         new_config.ai.context_enabled = false;
     }
+    let previous_history = history.take();
+    history = if new_config.history.enabled {
+        match History::load(
+            &new_config.history_dir(),
+            new_config.history.max_entries,
+            new_config.limits.max_file_size_bytes as u64,
+        ) {
+            Ok(new_history) => Some(new_history),
+            Err(e) => {
+                tracing::warn!(
+                    "Failed to reload history (keeping current instance): {}",
+                    e
+                );
+                previous_history
+            }
+        }
+    } else {
+        None
+    };
+
     // Reload memory
     if new_config.memory.enabled {
         let new_memory_dir = new_config.memory_dir();
         match memory::Memory::load(&new_memory_dir, new_config.limits.max_file_size_bytes as u64) {
             Ok(new_mem) => {
                 mem = new_mem;
                 tracing::info!("Memory reloaded");
             }
             Err(e) => {
                 tracing::error!("Failed to reload memory: {}", e);
             }
         }
+    } else {
+        mem = memory::Memory::default();
+        let empty_hint = String::new();
+        recognizer.set_prompt_hint(&empty_hint);
     }
+    last_consolidation_entry_count = mem.total_entries();
     config = new_config;
     tracing::info!("Config reloaded successfully");
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/daemon.rs` around lines 273 - 345, The reload path must rebuild or clear optional persisted state when the new config disables features: after obtaining new_config, if new_config.history.enabled is false replace or clear the existing history instance (e.g., call history.clear() or reinitialize history to an empty History) and if history is enabled but path changed reload via history.load(...); likewise, if new_config.memory.enabled is false replace mem with an empty Memory (or call mem.clear()) and remove any Whisper hint from recognizer (e.g., recognizer.set_prompt_hint("") or reset prompt hint), otherwise continue to call memory::Memory::load(...) when enabled; ensure these adjustments occur before assigning config = new_config so subsequent logic uses the correct runtime state.
♻️ Duplicate comments (2)
src/config.rs (1)
268-280: ⚠️ Potential issue | 🟠 Major — Enforce `max_bytes` during the read.

This helper still allocates the whole file before rejecting it. A very large config/dictionary/memory file will be fully buffered by `std::fs::read`, so `max_file_size_bytes` does not actually protect startup or reload from memory spikes.

💡 Suggested fix
```diff
 pub fn read_to_string_limited(path: &Path, max_bytes: u64) -> Result<String> {
-    let bytes =
-        std::fs::read(path).with_context(|| format!("reading {}", path.display()))?;
+    use std::io::Read;
+
+    let mut file =
+        std::fs::File::open(path).with_context(|| format!("opening {}", path.display()))?;
+    let mut bytes = Vec::new();
+    file.take(max_bytes.saturating_add(1))
+        .read_to_end(&mut bytes)
+        .with_context(|| format!("reading {}", path.display()))?;
     if bytes.len() as u64 > max_bytes {
         anyhow::bail!(
-            "file {} is too large ({} bytes, limit {} bytes)",
+            "file {} is too large (more than {} bytes)",
             path.display(),
-            bytes.len(),
             max_bytes,
         );
     }
     String::from_utf8(bytes)
         .map_err(|e| anyhow::anyhow!("file {} is not valid UTF-8: {}", path.display(), e))
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config.rs` around lines 268 - 280, the function `read_to_string_limited` currently uses `std::fs::read`, which reads the whole file into memory before checking `max_bytes`; change it to open the file (`std::fs::File`) and read with a bounded reader (e.g., `std::io::Read` or `BufReader` combined with `.take(max_bytes + 1)`) into a buffer, then check whether the buffer length exceeds `max_bytes` and bail if so; finally convert the buffered bytes to UTF-8 and return the string. Update references in `read_to_string_limited` to use `File::open`, a bounded reader (`.take`), and only allocate up to `max_bytes + 1` to enforce the limit during the read rather than after.

src/ai/claude.rs (1)
15-39: ⚠️ Potential issue | 🔴 Critical — Stream the body instead of calling `bytes()`.

`reqwest::Response::bytes()` gets the full body, so this guard only runs after the allocation already happened. If `Content-Length` is missing or wrong, a large Anthropic/Ollama response can still exhaust memory before `max_response_bytes` is enforced. Read incrementally with `chunk()` and abort once the accumulated size crosses the limit. (docs.rs)

💡 Suggested fix
```diff
 pub(crate) async fn read_response_json(
     response: reqwest::Response,
     max_bytes: usize,
 ) -> Result<serde_json::Value> {
-    if let Some(len) = response.content_length() {
+    let mut response = response;
+    if let Some(len) = response.content_length() {
         if len > max_bytes as u64 {
             anyhow::bail!(
                 "API response too large (Content-Length: {} bytes, limit: {} bytes)",
                 len,
                 max_bytes
             );
         }
     }
-    let bytes = response
-        .bytes()
-        .await
-        .context("reading API response body")?;
-    if bytes.len() > max_bytes {
-        anyhow::bail!(
-            "API response too large ({} bytes, limit: {} bytes)",
-            bytes.len(),
-            max_bytes
-        );
+    let mut bytes = Vec::new();
+    while let Some(chunk) = response
+        .chunk()
+        .await
+        .context("reading API response body")?
+    {
+        if bytes.len().saturating_add(chunk.len()) > max_bytes {
+            anyhow::bail!("API response too large (more than {} bytes)", max_bytes);
+        }
+        bytes.extend_from_slice(&chunk);
     }
     serde_json::from_slice(&bytes).context("parsing API response JSON")
 }
```
Verify each finding against the current code and only fix it if needed. In `@src/ai/claude.rs` around lines 15 - 39, read_response_json currently calls response.bytes() which allocates the entire body before enforcing max_bytes; change it to stream the body in chunks (e.g., using response.chunk() or response.bytes_stream() + StreamExt::next) and incrementally append each chunk to a Vec<u8>, checking after each append that accumulated.len() <= max_bytes and bailing with the same error if exceeded; keep the existing Content-Length pre-check, propagate chunk read errors with context ("reading API response body"), and at the end call serde_json::from_slice(&accumulated) to parse JSON (function: read_response_json).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/ui/settings_window.rs`:
- Around line 170-177: The preservation step in save_from_widgets currently
calls Config::load(), which may read a project-local config instead of the
actual file written by save_from_widgets; change it to load the existing config
from the exact target path (the path returned by Config::config_path()) before
copying hidden fields. Concretely, replace the Config::load() call with a
load-from-path operation (e.g. a Config::load_from_path, Config::from_path, or
parse the file at the path variable) so you read the same file you will
overwrite; keep the same fallback semantics (only copy ai.context_enabled and
limits when the file exists and parses).
---
Outside diff comments:
In `@src/daemon.rs`:
- Around line 273-345: The reload path must rebuild or clear optional persisted
state when the new config disables features: after obtaining new_config, if
new_config.history.enabled is false replace or clear the existing history
instance (e.g., call history.clear() or reinitialize history to an empty
History) and if history is enabled but path changed reload via
history.load(...); likewise, if new_config.memory.enabled is false replace mem
with an empty Memory (or call mem.clear()) and remove any Whisper hint from
recognizer (e.g., recognizer.set_prompt_hint("") or reset prompt hint),
otherwise continue to call memory::Memory::load(...) when enabled; ensure these
adjustments occur before assigning config = new_config so subsequent logic uses
the correct runtime state.
---
Duplicate comments:
In `@src/ai/claude.rs`:
- Around line 15-39: read_response_json currently calls response.bytes() which
allocates the entire body before enforcing max_bytes; change it to stream the
body in chunks (e.g., using response.chunk() or response.bytes_stream() +
StreamExt::next) and incrementally append each chunk to a Vec<u8>, checking
after each append that accumulated.len() <= max_bytes and bailing with the same
error if exceeded; keep the existing Content-Length pre-check, propagate chunk
read errors with context ("reading API response body"), and at the end call
serde_json::from_slice(&accumulated) to parse JSON (function:
read_response_json).
In `@src/config.rs`:
- Around line 268-280: The function read_to_string_limited currently uses
std::fs::read which reads the whole file into memory before checking max_bytes;
change it to open the file (std::fs::File) and read with a bounded reader (e.g.,
std::io::Read or BufReader combined with .take(max_bytes + 1)) into a buffer,
then check if the buffer length exceeds max_bytes and bail if so; finally
convert the buffered bytes to UTF-8 and return the string. Update references in
read_to_string_limited to use File::open, a bounded reader (.take) and only
allocate up to max_bytes+1 to enforce the limit during read rather than after.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8a69b0eb-4567-4ea5-b4b4-2888e300ec98
📒 Files selected for processing (7)
src/ai/claude.rs, src/ai/mod.rs, src/config.rs, src/daemon.rs, src/ipc/server.rs, src/main.rs, src/ui/settings_window.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- src/ipc/server.rs
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/config.rs`:
- Around line 183-185: The config field max_ipc_message_bytes (and its
default_default_max_ipc_message_bytes) is not being used; update the IPC server
startup to pass the configured value into handle_connection instead of the
hardcoded DEFAULT_MAX_IPC_MESSAGE_BYTES. Locate where the server is spawned (the
call that currently uses DEFAULT_MAX_IPC_MESSAGE_BYTES) and change it to accept
the max_ipc_message_bytes from your loaded Config, propagate that value into the
server initialization signature if needed, and ensure handle_connection(...,
max_ipc_message_bytes) is called so the configured limit is enforced.
- Around line 264-280: read_to_string_limited currently uses std::fs::read which
allocates the whole file before checking size; change it to open the file and
stream-read with a hard cap so oversized files are never fully loaded.
Specifically, replace the std::fs::read call in read_to_string_limited with
File::open(path) and use file.take(max_bytes + 1).read_to_end(&mut buf) (or read
in chunks accumulating into a Vec<u8>) then check if buf.len() as u64 >
max_bytes and bail as before; preserve the with_context error on opening/reading
and keep the UTF-8 conversion/map_err logic for the final String::from_utf8
error message.
In `@src/daemon.rs`:
- Around line 333-336: The reload branch that calls memory::Memory::load must
also handle the case when new_config.memory.enabled is false by clearing the
existing mem and resetting the recognizer hint: replace or reset mem to an empty
Memory (e.g., Memory::default() or call mem.clear()) so mem.format_for_prompt()
yields nothing, reset the recognizer's Whisper hint to None/empty (the same
field set in the enabled branch), and ensure the processor is not passed stale
mem.format_for_prompt() when memory is disabled; apply the same fix to the other
reload site around the second load (lines 347–352).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: bba9f1dc-9cf9-40fc-bfe1-79495d273f72
📒 Files selected for processing (2)
src/config.rs, src/daemon.rs
Summary
- Add the `koe --no-context` CLI flag and the `[ai] context_enabled = false` config option
- When disabled, skip `get_active_window_context()` and do not send the window title or app name to the AI

Motivation

When using a cloud AI backend, window titles can contain sensitive information such as file paths, URL tokens, and email subjects. Give users control over whether context is sent.

Changes

- Add `context_enabled: bool` (default `true`) to `AiConfig`
- Add the `--no-context` flag, which overrides the config
- Use `WindowContext::default()` when `context_enabled == false`
- Add a test that with `WindowContext::default()` the prompt contains no context
- Add tests for the default value of `context_enabled` and for explicit disabling

Test plan

- `cargo test`: all 94 tests pass
- Test that `context_enabled` defaults to `true`
- Test that `context_enabled = false` correctly disables context
- Test that with a default `WindowContext` the system prompt contains no context

Closes #42
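As a quick reference, the persistent opt-out described above would look like the following in the config file (the comment and file location are illustrative; the table and field names are from this PR):

```toml
# config.toml — disable context awareness persistently.
# The `koe --no-context` CLI flag overrides this value at startup.
[ai]
context_enabled = false
```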
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Improvements
Tests