
Add --no-context flag to disable context awareness #52

Merged
shuhei0866 merged 4 commits into main from feat/no-context-flag
Apr 4, 2026

Conversation


@shuhei0866 shuhei0866 commented Apr 4, 2026

Summary

  • Add a `koe --no-context` CLI flag and an `[ai] context_enabled = false` config option
  • When disabled, skip get_active_window_context() so the window title and app name are never sent to the AI
  • The CLI flag overrides the config setting (convenient for temporarily disabling context)

Motivation

When using a cloud AI backend, window titles can contain sensitive information such as file paths, URL tokens, and email subjects. This gives users control over whether context is sent.

Changes

  • src/config.rs: Add context_enabled: bool (default true) to AiConfig
  • src/main.rs: Add a --no-context CLI flag that overrides the config
  • src/daemon.rs: Use WindowContext::default() when context_enabled == false
  • config.toml: Add an example setting
  • src/ai/mod.rs: Add a test that WindowContext::default() produces a prompt without context
  • src/config.rs: Add tests for the context_enabled default and explicit disabling
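The default-true behavior added to AiConfig can be modeled without serde as follows. This is a minimal std-only sketch: the real code uses a `#[serde(default)]`-style attribute during TOML deserialization, while here an `Option<bool>` stands in for "key present or absent in config.toml".

```rust
// Std-only model of the config default: a missing key behaves as `true`,
// an explicit `context_enabled = false` disables context.
// `from_parsed` is a hypothetical helper, not the real deserializer.
struct AiConfig {
    context_enabled: bool,
}

impl AiConfig {
    // `parsed` is None when the key is absent from config.toml.
    fn from_parsed(parsed: Option<bool>) -> Self {
        AiConfig {
            context_enabled: parsed.unwrap_or(true),
        }
    }
}

fn main() {
    assert!(AiConfig::from_parsed(None).context_enabled); // default: enabled
    assert!(AiConfig::from_parsed(Some(true)).context_enabled); // explicit on
    assert!(!AiConfig::from_parsed(Some(false)).context_enabled); // explicit off
    println!("ok");
}
```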

Test plan

  • cargo test: all 94 tests pass
  • Test that context_enabled defaults to true
  • Test that context_enabled = false disables context correctly
  • Test that an empty WindowContext yields a system prompt with no context
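The empty-context test in the plan above can be sketched like this. WindowContext and build_system_prompt are simplified stand-ins; the real types and signatures live in src/ai/mod.rs and may differ.

```rust
// Hypothetical stand-in for the real WindowContext in src/ai/mod.rs.
#[derive(Default)]
struct WindowContext {
    app_name: String,
    window_title: String,
}

// Only append context lines when there is something to send; an empty
// (default) context leaves the base prompt untouched.
fn build_system_prompt(base: &str, ctx: &WindowContext) -> String {
    let mut prompt = base.to_string();
    if !ctx.app_name.is_empty() {
        prompt.push_str(&format!("\nActive app: {}", ctx.app_name));
    }
    if !ctx.window_title.is_empty() {
        prompt.push_str(&format!("\nWindow title: {}", ctx.window_title));
    }
    prompt
}

fn main() {
    // Empty context: no window info reaches the prompt.
    let prompt = build_system_prompt("You are a transcription assistant.", &WindowContext::default());
    assert!(!prompt.contains("Active app"));
    assert!(!prompt.contains("Window title"));

    // Populated context: window info is included.
    let ctx = WindowContext {
        app_name: "Firefox".into(),
        window_title: "Inbox - Mail".into(),
    };
    assert!(build_system_prompt("You are a transcription assistant.", &ctx).contains("Firefox"));
    println!("ok");
}
```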

Closes #42

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Toggle window/application context via config (enabled by default)
    • New --no-context CLI flag to disable context in daemon mode
    • Configurable size limits for API responses, IPC messages, and data files
  • Improvements

    • Enforced size limits when reading files, loading resources, and parsing API/IPC responses
    • IPC now rejects oversized messages with a clear error response
    • UI preserves non-exposed config fields when saving
  • Tests

    • Added unit tests for prompt/context behavior and limits/defaults

Add an option to disable context awareness (sending the window title and
app name to the AI) to protect privacy.

- CLI: disable at startup with the `koe --no-context` flag
- Config: disable persistently with `[ai] context_enabled = false`
- The CLI flag overrides the config setting

Closes #42

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

coderabbitai bot commented Apr 4, 2026

📝 Walkthrough


Adds an AI context toggle (config + CLI override) to stop sending active-window info, and enforces configurable byte-size limits for API responses, IPC messages, and file reads across processors, IPC server, and file-backed loaders; constructors/signatures and call sites are updated accordingly.

Changes

Cohort / File(s) Summary
Config & UI
config.toml, src/config.rs, src/ui/settings_window.rs
Adds ai.context_enabled: bool (serde-default true), new limits: LimitsConfig with byte-size fields and defaults, read_to_string_limited(path, max_bytes) helper, and wiring in settings UI to initialize/preserve these fields.
CLI / Daemon Entrypoints
src/main.rs, src/daemon.rs
Adds --no-context CLI flag (Cli.no_context), run_daemon(..., cli_no_context: bool) parameter, and applies CLI override to config.ai.context_enabled at startup and on reload.
AI Module Factory & Tests
src/ai/mod.rs
create_processor(config, max_response_bytes: usize) updated to accept and thread max_response_bytes to engine constructors; adds #[cfg(test)] unit tests for prompt building with/without window context.
Claude Processor
src/ai/claude.rs
Adds read_response_json(response, max_bytes) helper enforcing Content-Length and post-read byte limits and better error context; ClaudeProcessor stores max_response_bytes and constructor signature changed to accept it.
Ollama Processor
src/ai/ollama.rs
Adds max_response_bytes field and constructor param; switches JSON parsing to use shared read_response_json helper.
IPC Server
src/ipc/server.rs
Adds DEFAULT_MAX_IPC_MESSAGE_BYTES, read_line_limited and handle_connection_with_limit to enforce inbound message size; oversized frames are drained and cause immediate IpcResponse::Error replies.
File-backed Loaders
src/memory/mod.rs, src/history/mod.rs, src/dictionary.rs, src/ui/history_page.rs
Loader APIs extended to accept max_file_bytes and use read_to_string_limited for bounded reads; call sites and tests updated to pass configured limits.
Config Loading & Boot
src/config.rs, src/daemon.rs (reload)
Config::load uses read_to_string_limited with default max file size; daemon enforces limits.max_file_size_bytes and limits.max_api_response_bytes when loading resources and creating AI processors.
Tests
src/config.rs (tests), src/ai/mod.rs (tests), other updated tests
Adds/updates unit tests for ai.context_enabled behavior, LimitsConfig defaults and parsing, read_to_string_limited edge cases, and prompt-building with/without window context.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant IPC as "IPC Server"
    participant Daemon
    participant AI as "AI Processor"
    participant Ext as "External AI API"
    rect rgba(200,230,255,0.5)
        Client->>IPC: send newline-terminated message
        IPC->>IPC: read_line_limited(max_message_bytes)
        alt message too large
            IPC-->>Client: IpcResponse::Error ("message too large")
        else within limit
            IPC->>Daemon: forward parsed request
            Daemon->>Daemon: if config.ai.context_enabled -> get_active_window_context()
            Daemon->>AI: process(request + optional context)
            AI->>Ext: HTTP request
            Ext-->>AI: HTTP response (bounded by max_response_bytes)
            AI->>Daemon: processed text
            Daemon->>IPC: send response to client
            IPC-->>Client: response
        end
    end
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I nibble bytes, both small and neat,
I guard the windows where secrets meet,
A hop, a toggle—context kept or spared,
Messages bounded, safely paired,
Hooray for tidy code and private sleep!

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly and directly summarizes the main change: adding a --no-context CLI flag to disable context awareness, which aligns with the primary objective.
Linked Issues check ✅ Passed The PR fully implements all coding requirements from issue #42: context_enabled config option defaulting to true, CLI --no-context flag, conditional get_active_window_context() calls, and proper tests validating the behavior.
Out of Scope Changes check ✅ Passed All changes are directly scoped to implementing context awareness control; infrastructure changes (message size limits, file reading limits) support the main feature but are not out-of-scope.
Docstring Coverage ✅ Passed Docstring coverage is 96.51% which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: c09e7fc6bb


Comment thread src/ai/claude.rs
Comment thread src/ai/ollama.rs
Comment thread src/main.rs
Comment thread src/ui/settings_window.rs

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/ui/settings_window.rs (1)

78-95: ⚠️ Potential issue | 🟠 Major

Hardcoded context_enabled: true will overwrite user's config setting.

When users save settings through this UI, context_enabled is always set to true, overwriting any context_enabled = false the user may have set in their config.toml. This could unexpectedly re-enable context awareness for users who disabled it for privacy.

Consider one of:

  1. Add a toggle widget to the AI settings page to let users control this setting.
  2. Preserve the existing config value when saving (read the current context_enabled value from the loaded config).
🛡️ Minimal fix to preserve existing config value

Add context_enabled field to Widgets and read from loaded config:

 struct Widgets {
     // ... existing fields ...
+    context_enabled: bool,
 }

In build(), capture the loaded value:

 let widgets = Rc::new(Widgets {
     // ... existing fields ...
+    context_enabled: config.ai.context_enabled,
 });

In read_config():

             ai: AiConfig {
                 engine: ai_engine,
-                context_enabled: true,
+                context_enabled: self.context_enabled,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/ui/settings_window.rs` around lines 78 - 95, The AiConfig is hardcoding
context_enabled: true in settings_window.rs which overwrites user config; modify
the UI to preserve or expose that setting by adding a context_enabled field to
Widgets, wiring a checkbox/toggle in build() to capture the current UI state (or
initialize it from the loaded config in read_config()), and then set
AiConfig.context_enabled from that widget instead of true; update read_config()
to populate the new widget from the existing config and use the widget value
when constructing AiConfig so existing false values are not overwritten.
src/ipc/server.rs (1)

22-24: ⚠️ Potential issue | 🟠 Major

IPC message limit is hardcoded, so limits.max_ipc_message_bytes is ignored.

The server always uses DEFAULT_MAX_IPC_MESSAGE_BYTES; user-configured IPC limits never take effect. Wire the limit through start(...) and pass it from daemon config.

Suggested wiring change
-pub async fn start(
-    mut shutdown_rx: tokio::sync::watch::Receiver<bool>,
-) -> Result<mpsc::Receiver<IpcRequest>> {
+pub async fn start(
+    mut shutdown_rx: tokio::sync::watch::Receiver<bool>,
+    max_message_bytes: usize,
+) -> Result<mpsc::Receiver<IpcRequest>> {
...
-                                if let Err(e) = handle_connection(stream, tx).await {
+                                if let Err(e) =
+                                    handle_connection_with_limit(stream, tx, max_message_bytes).await
+                                {
                                     tracing::error!("IPC connection error: {}", e);
                                 }
// caller side (src/daemon.rs)
let ipc_rx = ipc::server::start(shutdown_rx.clone(), config.limits.max_ipc_message_bytes)
    .await
    .context("starting IPC server")?;

Also applies to: 91-99

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/ipc/server.rs` around lines 22 - 24, start currently hardcodes
DEFAULT_MAX_IPC_MESSAGE_BYTES so user-configured limits are ignored; change
start to accept a max_ipc_message_bytes parameter (e.g. u32/usize) and use that
instead of DEFAULT_MAX_IPC_MESSAGE_BYTES when creating the framed reader/codec
and any other IPC reader/codec instantiation (references: start,
DEFAULT_MAX_IPC_MESSAGE_BYTES); update the caller in daemon to pass
config.limits.max_ipc_message_bytes into ipc::server::start and likewise replace
the hardcoded constant in the other occurrences noted (around the 91-99 area) to
use the new parameter so the config value is threaded through all IPC server
construction sites.
🧹 Nitpick comments (1)
src/history/mod.rs (1)

48-53: Add a history-specific oversized-file test for the new limit path.

History::load now enforces max_file_bytes, but current updates only pass u64::MAX in tests, so this behavior isn’t validated in this module. Please add a regression test that writes history.jsonl above the limit and asserts load fails.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/history/mod.rs` around lines 48 - 53, Tests don't currently validate the
new max_file_bytes enforcement in History::load; add a unit test that creates a
temporary directory, writes a history.jsonl file whose size exceeds a chosen
small max_file_bytes, then calls History::load(dir, _, max_file_bytes) and
asserts it returns an Err; use the same History::load function and the
underlying read_to_string_limited behavior to trigger the failure (do not pass
u64::MAX) and clean up the temp file after the assertion.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/ai/claude.rs`:
- Around line 15-18: The function read_response_json is currently private but is
called from a sibling module (src/ai/ollama.rs); change its visibility to
pub(super) so modules under the parent ai module can call it — update the
function signature for read_response_json(response: reqwest::Response,
max_bytes: usize) -> Result<serde_json::Value> to be pub(super) async fn
read_response_json(...) and keep the body unchanged so the call from
super::claude::read_response_json(...) in ollama.rs compiles.
- Around line 28-39: The current code calls response.bytes().await which buffers
the entire body into memory after the Content-Length check, so a
missing/incorrect header can still cause OOM; replace the bytes() call with a
streaming read over response.bytes_stream() (or an equivalent size-limited
reader) that accumulates chunks into a buffer up to max_bytes and returns an
error if the limit is exceeded while streaming, then pass the collected buffer
to serde_json::from_slice; keep the existing content_length check but enforce
the max_bytes limit during the streaming read of response (referencing
response.bytes(), response.bytes_stream(), max_bytes, and
serde_json::from_slice).

In `@src/ai/mod.rs`:
- Line 8: Remove the unused import LimitsConfig from the use statement in
src/ai/mod.rs: update the line that currently reads use
crate::config::{AiConfig, AiEngine, LimitsConfig}; to only import the symbols
actually used (AiConfig and AiEngine) so that LimitsConfig is no longer
referenced; this eliminates the unused import warning without changing any other
logic in the module.

In `@src/config.rs`:
- Around line 265-277: The current read_to_string_limited function does a
metadata size check then calls std::fs::read_to_string, which creates a TOCTOU
window; to fix it open the file with std::fs::File::open and create a reader
that enforces the limit with std::io::Read::take(max_bytes) (e.g., let mut
limited = file.take(max_bytes);) then read from that reader into a String and
detect if the file exceeded the limit (if the reader returns more data or you
hit the exact limit and more bytes are available), removing the unsafe reliance
on the prior metadata check; update read_to_string_limited to use File::open,
.take(max_bytes), and read_to_string-like logic to enforce the size at
read-time.

In `@src/ipc/server.rs`:
- Around line 110-124: The current loop uses reader.read_line(&mut line) and
checks line.len() only after the full line is read, allowing an attacker to send
a very long/no-newline payload and grow memory unbounded; replace the unbounded
read with a bounded framing approach (e.g., wrap the underlying stream with
tokio_util::codec::Framed or FramedRead using tokio_util::codec::LinesCodec with
max_frame_length = max_message_bytes) or implement a manual bounded reader that
rejects/returns an error as soon as the accumulation exceeds max_message_bytes;
adjust the loop to read framed lines (or fall back to the bounded read) and
continue using IpcResponse::Error and writer.write_all for the error reply when
the limit is exceeded.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7be9fe8d-82a6-408c-8dc2-4f0a21cd3114

📥 Commits

Reviewing files that changed from the base of the PR and between 7c988c4 and c09e7fc.

📒 Files selected for processing (13)
  • config.toml
  • src/ai/claude.rs
  • src/ai/mod.rs
  • src/ai/ollama.rs
  • src/config.rs
  • src/daemon.rs
  • src/dictionary.rs
  • src/history/mod.rs
  • src/ipc/server.rs
  • src/main.rs
  • src/memory/mod.rs
  • src/ui/history_page.rs
  • src/ui/settings_window.rs

Comment thread src/ai/claude.rs Outdated
Comment thread src/ai/claude.rs
Comment thread src/ai/mod.rs Outdated
Comment thread src/config.rs
Comment thread src/ipc/server.rs Outdated
- [P0] Fix create_processor to pass max_response_bytes
- [P0] Make read_response_json pub(crate) so ollama.rs can call it
- [P1] Preserve the CLI --no-context override on config reload
- [P1] Preserve context_enabled/limits from the existing config in settings_window
- [Critical] Fix the TOCTOU in read_to_string_limited (check size during the read instead of beforehand)
- [Critical] Switch the IPC read_line to a fill_buf-based size-limited read
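The fill_buf-based bounded line read mentioned in the last bullet can be sketched synchronously as follows. The real daemon code is async (tokio's AsyncBufRead); this std::io::BufRead version models the same idea: consume buffered bytes up to a newline and fail as soon as the accumulated length would exceed the limit, instead of buffering an unbounded line first.

```rust
use std::io::{BufRead, Cursor};

// Read one newline-terminated line, erroring out as soon as the
// accumulated bytes would exceed max_bytes (+1 allows the '\n' itself).
fn read_line_limited<R: BufRead>(reader: &mut R, max_bytes: usize) -> std::io::Result<String> {
    let mut line = Vec::new();
    loop {
        let buf = reader.fill_buf()?;
        if buf.is_empty() {
            break; // EOF
        }
        let newline_pos = buf.iter().position(|&b| b == b'\n');
        let take = newline_pos.map(|p| p + 1).unwrap_or(buf.len());
        if line.len() + take > max_bytes + 1 {
            return Err(std::io::Error::new(
                std::io::ErrorKind::InvalidData,
                "message too large",
            ));
        }
        line.extend_from_slice(&buf[..take]);
        reader.consume(take);
        if newline_pos.is_some() {
            break;
        }
    }
    // Strip the trailing newline (and any carriage return).
    while line.last() == Some(&b'\n') || line.last() == Some(&b'\r') {
        line.pop();
    }
    String::from_utf8(line)
        .map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidData, e))
}

fn main() {
    let mut ok = Cursor::new(b"hello\n".to_vec());
    assert_eq!(read_line_limited(&mut ok, 64).unwrap(), "hello");

    // A long payload with no newline is rejected early instead of buffered.
    let mut big = Cursor::new(vec![b'x'; 1024]);
    assert!(read_line_limited(&mut big, 16).is_err());
    println!("ok");
}
```

A limitation worth noting: the check runs per buffered chunk, so with a large underlying buffer the rejection granularity is one chunk, which is still bounded by the reader's buffer size rather than the attacker's payload.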

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/daemon.rs (1)

273-345: ⚠️ Potential issue | 🟠 Major

ReloadConfig needs to rebuild disabled/optional persisted state too.

This path updates config without fully reinitializing the runtime state. history is never reloaded/cleared, and mem is only touched when the new config keeps memory enabled. After a GUI save + reload, disabling history can still keep hist.add_entry(...) writing to disk, and disabling memory can keep previously learned context / Whisper hints active because the old mem instance is still used later.

💡 Suggested fix shape
                         Ok(mut new_config) => {
                             // Preserve CLI --no-context override
                             if cli_no_context {
                                 new_config.ai.context_enabled = false;
                             }

+                            let previous_history = history.take();
+                            history = if new_config.history.enabled {
+                                match History::load(
+                                    &new_config.history_dir(),
+                                    new_config.history.max_entries,
+                                    new_config.limits.max_file_size_bytes as u64,
+                                ) {
+                                    Ok(new_history) => Some(new_history),
+                                    Err(e) => {
+                                        tracing::warn!(
+                                            "Failed to reload history (keeping current instance): {}",
+                                            e
+                                        );
+                                        previous_history
+                                    }
+                                }
+                            } else {
+                                None
+                            };
+
                             // Reload memory
                             if new_config.memory.enabled {
                                 let new_memory_dir = new_config.memory_dir();
                                 match memory::Memory::load(&new_memory_dir, new_config.limits.max_file_size_bytes as u64) {
                                     Ok(new_mem) => {
                                         mem = new_mem;
                                         tracing::info!("Memory reloaded");
                                     }
                                     Err(e) => {
                                         tracing::error!("Failed to reload memory: {}", e);
                                     }
                                 }
+                            } else {
+                                mem = memory::Memory::default();
+                                let empty_hint = String::new();
+                                recognizer.set_prompt_hint(&empty_hint);
                             }
+                            last_consolidation_entry_count = mem.total_entries();

                             config = new_config;
                             tracing::info!("Config reloaded successfully");
                         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/daemon.rs` around lines 273 - 345, The reload path must rebuild or clear
optional persisted state when the new config disables features: after obtaining
new_config, if new_config.history.enabled is false replace or clear the existing
history instance (e.g., call history.clear() or reinitialize history to an empty
History) and if history is enabled but path changed reload via
history.load(...); likewise, if new_config.memory.enabled is false replace mem
with an empty Memory (or call mem.clear()) and remove any Whisper hint from
recognizer (e.g., recognizer.set_prompt_hint("") or reset prompt hint),
otherwise continue to call memory::Memory::load(...) when enabled; ensure these
adjustments occur before assigning config = new_config so subsequent logic uses
the correct runtime state.
♻️ Duplicate comments (2)
src/config.rs (1)

268-280: ⚠️ Potential issue | 🟠 Major

Enforce max_bytes during the read.

This helper still allocates the whole file before rejecting it. A very large config/dictionary/memory file will be fully buffered by std::fs::read, so max_file_size_bytes does not actually protect startup or reload from memory spikes.

💡 Suggested fix
 pub fn read_to_string_limited(path: &Path, max_bytes: u64) -> Result<String> {
-    let bytes =
-        std::fs::read(path).with_context(|| format!("reading {}", path.display()))?;
+    use std::io::Read;
+
+    let mut file =
+        std::fs::File::open(path).with_context(|| format!("opening {}", path.display()))?;
+    let mut bytes = Vec::new();
+    file.take(max_bytes.saturating_add(1))
+        .read_to_end(&mut bytes)
+        .with_context(|| format!("reading {}", path.display()))?;
     if bytes.len() as u64 > max_bytes {
         anyhow::bail!(
-            "file {} is too large ({} bytes, limit {} bytes)",
+            "file {} is too large (more than {} bytes)",
             path.display(),
-            bytes.len(),
             max_bytes,
         );
     }
     String::from_utf8(bytes)
         .map_err(|e| anyhow::anyhow!("file {} is not valid UTF-8: {}", path.display(), e))
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/config.rs` around lines 268 - 280, The function read_to_string_limited
currently uses std::fs::read which reads the whole file into memory before
checking max_bytes; change it to open the file (std::fs::File) and read with a
bounded reader (e.g., std::io::Read or BufReader combined with .take(max_bytes +
1)) into a buffer, then check if the buffer length exceeds max_bytes and bail if
so; finally convert the buffered bytes to UTF-8 and return the string. Update
references in read_to_string_limited to use File::open, a bounded reader (.take)
and only allocate up to max_bytes+1 to enforce the limit during read rather than
after.
src/ai/claude.rs (1)

15-39: ⚠️ Potential issue | 🔴 Critical

Stream the body instead of calling bytes().

reqwest::Response::bytes() gets the full body, so this guard only runs after the allocation already happened. If Content-Length is missing or wrong, a large Anthropic/Ollama response can still exhaust memory before max_response_bytes is enforced. Read incrementally with chunk() and abort once the accumulated size crosses the limit. (docs.rs)

💡 Suggested fix
 pub(crate) async fn read_response_json(
     response: reqwest::Response,
     max_bytes: usize,
 ) -> Result<serde_json::Value> {
-    if let Some(len) = response.content_length() {
+    let mut response = response;
+    if let Some(len) = response.content_length() {
         if len > max_bytes as u64 {
             anyhow::bail!(
                 "API response too large (Content-Length: {} bytes, limit: {} bytes)",
                 len,
                 max_bytes
             );
         }
     }
-    let bytes = response
-        .bytes()
-        .await
-        .context("reading API response body")?;
-    if bytes.len() > max_bytes {
-        anyhow::bail!(
-            "API response too large ({} bytes, limit: {} bytes)",
-            bytes.len(),
-            max_bytes
-        );
+    let mut bytes = Vec::new();
+    while let Some(chunk) = response
+        .chunk()
+        .await
+        .context("reading API response body")?
+    {
+        if bytes.len().saturating_add(chunk.len()) > max_bytes {
+            anyhow::bail!("API response too large (more than {} bytes)", max_bytes);
+        }
+        bytes.extend_from_slice(&chunk);
     }
     serde_json::from_slice(&bytes).context("parsing API response JSON")
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/ai/claude.rs` around lines 15 - 39, read_response_json currently calls
response.bytes() which allocates the entire body before enforcing max_bytes;
change it to stream the body in chunks (e.g., using response.chunk() or
response.bytes_stream() + StreamExt::next) and incrementally append each chunk
to a Vec<u8>, checking after each append that accumulated.len() <= max_bytes and
bailing with the same error if exceeded; keep the existing Content-Length
pre-check, propagate chunk read errors with context ("reading API response
body"), and at the end call serde_json::from_slice(&accumulated) to parse JSON
(function: read_response_json).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/ui/settings_window.rs`:
- Around line 170-177: The preservation step in save_from_widgets currently
calls Config::load(), which may read a project-local config instead of the
actual file written by save_from_widgets; change it to load the existing config
from the exact target path (the path returned by Config::config_path()) before
copying hidden fields. Concretely, replace the Config::load() call with a
load-from-path operation (e.g. a Config::load_from_path, Config::from_path, or
parse the file at the path variable) so you read the same file you will
overwrite; keep the same fallback semantics (only copy ai.context_enabled and
limits when the file exists and parses).


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 8a69b0eb-4567-4ea5-b4b4-2888e300ec98

📥 Commits

Reviewing files that changed from the base of the PR and between c09e7fc and 2fafb21.

📒 Files selected for processing (7)
  • src/ai/claude.rs
  • src/ai/mod.rs
  • src/config.rs
  • src/daemon.rs
  • src/ipc/server.rs
  • src/main.rs
  • src/ui/settings_window.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/ipc/server.rs

Comment thread src/ui/settings_window.rs
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/config.rs`:
- Around line 183-185: The config field max_ipc_message_bytes (and its
default_default_max_ipc_message_bytes) is not being used; update the IPC server
startup to pass the configured value into handle_connection instead of the
hardcoded DEFAULT_MAX_IPC_MESSAGE_BYTES. Locate where the server is spawned (the
call that currently uses DEFAULT_MAX_IPC_MESSAGE_BYTES) and change it to accept
the max_ipc_message_bytes from your loaded Config, propagate that value into the
server initialization signature if needed, and ensure handle_connection(...,
max_ipc_message_bytes) is called so the configured limit is enforced.
- Around line 264-280: read_to_string_limited currently uses std::fs::read which
allocates the whole file before checking size; change it to open the file and
stream-read with a hard cap so oversized files are never fully loaded.
Specifically, replace the std::fs::read call in read_to_string_limited with
File::open(path) and use file.take(max_bytes + 1).read_to_end(&mut buf) (or read
in chunks accumulating into a Vec<u8>) then check if buf.len() as u64 >
max_bytes and bail as before; preserve the with_context error on opening/reading
and keep the UTF-8 conversion/map_err logic for the final String::from_utf8
error message.

In `@src/daemon.rs`:
- Around line 333-336: The reload branch that calls memory::Memory::load must
also handle the case when new_config.memory.enabled is false by clearing the
existing mem and resetting the recognizer hint: replace or reset mem to an empty
Memory (e.g., Memory::default() or call mem.clear()) so mem.format_for_prompt()
yields nothing, reset the recognizer's Whisper hint to None/empty (the same
field set in the enabled branch), and ensure the processor is not passed stale
mem.format_for_prompt() when memory is disabled; apply the same fix to the other
reload site around the second load (lines 347–352).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: bba9f1dc-9cf9-40fc-bfe1-79495d273f72

📥 Commits

Reviewing files that changed from the base of the PR and between 2fafb21 and af18825.

📒 Files selected for processing (2)
  • src/config.rs
  • src/daemon.rs

Comment thread src/config.rs
Comment thread src/config.rs
Comment thread src/daemon.rs
@shuhei0866 shuhei0866 merged commit d65a5c3 into main Apr 4, 2026
7 checks passed
@shuhei0866 shuhei0866 deleted the feat/no-context-flag branch April 4, 2026 13:24
shuhei0866 added a commit that referenced this pull request Apr 4, 2026
Merge the changes from both PR #52 (--no-context flag) and PR #53 (input size limits).
claude.rs adopts the chunk-based streaming implementation, config.rs keeps both sets of tests,
and server.rs deduplicates the redundant function definitions.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Development

Successfully merging this pull request may close these issues.

Add option to disable context awareness for privacy

1 participant