feat(iii-worker): support multiple workers in add and remove commands #1440

andersonleal merged 2 commits into main
Conversation
📝 Walkthrough

Add/Remove CLI subcommands now accept multiple worker names; per-worker runtime/address/port options were removed. Managed handlers were extended for batch add/remove.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant CLI as "CLI (user)"
    participant App as "app.rs (arg parse)"
    participant Managed as "managed.rs (batch handlers)"
    participant Registry as "Registry / OCI puller"
    participant FS as "Config file / Worker store"
    CLI->>App: invoke `add` with multiple worker_names
    App->>Managed: call handle_managed_add_many(worker_names)
    Managed->>Registry: optionally fetch registry (once) / resolve images
    Registry-->>Managed: return metadata / manifests
    loop per worker
        Managed->>Registry: pull OCI image or use binary routing (if needed)
        Registry-->>Managed: pull result
        Managed->>FS: write worker config / persist entry
        FS-->>Managed: ack
    end
    Managed->>App: exit code / summary
    App->>CLI: prints batch summary
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~40 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
🧹 Nitpick comments (1)
crates/iii-worker/src/cli/managed.rs (1)
348-381: Consider extracting shared batch-loop logic.

`handle_managed_remove_many` follows the same pattern as `handle_managed_add_many`: iterate with progress, aggregate failures, print summary, return exit code. While not a blocking issue, consider extracting a generic batch executor to reduce duplication if more batch operations are added in the future.

♻️ Optional: Extract shared batch logic
```rust
// A helper could reduce duplication:
async fn run_batch<F, Fut>(
    names: &[String],
    verb_present: &str, // "Adding" / "Removing"
    verb_past: &str,    // "Added" / "Removed"
    mut handler: F,
) -> i32
where
    F: FnMut(&str, bool) -> Fut,
    Fut: std::future::Future<Output = i32>,
{
    let total = names.len();
    let brief = total > 1;
    let mut fail_count = 0;
    for (i, name) in names.iter().enumerate() {
        if brief {
            eprintln!("  [{}/{}] {}...", i + 1, total, name);
        }
        if handler(name, brief).await != 0 {
            fail_count += 1;
        }
    }
    if total > 1 {
        let succeeded = total - fail_count;
        // Print summary...
    }
    if fail_count == 0 { 0 } else { 1 }
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/iii-worker/src/cli/managed.rs` around lines 348 - 381, handle_managed_remove_many duplicates the batch-loop, progress and summary logic already used by handle_managed_add_many; extract that shared logic into a generic async helper (e.g., run_batch) that accepts names: &[String], verb_present: &str, verb_past: &str and a handler: FnMut(&str, bool) -> Fut where Fut: Future<Output = i32>, move the iteration, brief/progress printing and final summary into run_batch, then replace handle_managed_remove_many and handle_managed_add_many to call run_batch with the appropriate verbs and their existing handlers (handle_managed_remove and handle_managed_add) so both functions become thin wrappers that return the helper's exit code.
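The sketch above can be distilled into a runnable, synchronous form to illustrate the continue-on-failure and exit-code aggregation semantics. The `run_batch` name and the closure signature here are hypothetical stand-ins; the real handlers in managed.rs are async.

```rust
// Synchronous sketch of the batch-loop semantics: iterate all names,
// count failures, print a summary in multi-worker mode, and return a
// single aggregated exit code. Names and signatures are illustrative.
fn run_batch<F>(names: &[&str], verb_present: &str, mut handler: F) -> i32
where
    F: FnMut(&str, bool) -> i32, // (name, brief) -> per-worker exit code
{
    let total = names.len();
    let brief = total > 1; // compact progress only in multi-worker mode
    let mut fail_count = 0;
    for (i, name) in names.iter().enumerate() {
        if brief {
            eprintln!("  [{}/{}] {} {}...", i + 1, total, verb_present, name);
        }
        if handler(name, brief) != 0 {
            fail_count += 1;
        }
    }
    if total > 1 {
        eprintln!("  {}/{} succeeded", total - fail_count, total);
    }
    if fail_count == 0 { 0 } else { 1 }
}

fn main() {
    // Continue-on-failure: "bad" fails, but "w2" is still processed.
    let code = run_batch(&["w1", "bad", "w2"], "Adding", |name, _brief| {
        if name == "bad" { 1 } else { 0 }
    });
    assert_eq!(code, 1);
    // Single success: exit code 0, no progress lines (brief is false).
    assert_eq!(run_batch(&["w1"], "Adding", |_, _| 0), 0);
    println!("ok");
}
```

With this helper, both `handle_managed_add_many` and `handle_managed_remove_many` would become thin wrappers that pass their existing per-worker handlers.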
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 2556d7e4-0199-4f73-b2a7-38d144203b31
📒 Files selected for processing (5)
- crates/iii-worker/src/cli/app.rs
- crates/iii-worker/src/cli/managed.rs
- crates/iii-worker/src/main.rs
- crates/iii-worker/tests/config_file_integration.rs
- crates/iii-worker/tests/worker_integration.rs
🧹 Nitpick comments (2)
crates/iii-worker/src/cli/managed.rs (2)
203-220: Minor: Flatten nested conditionals for readability.

The `else { if already_exists { ... } else { ... } }` pattern can be simplified to `else if` / `else` for cleaner structure.

♻️ Suggested refactor
```diff
         }
-    } else {
-        if already_exists {
-            eprintln!(
-                "\n  {} Worker {} updated in {} (merged with builtin defaults)",
-                "✓".green(),
-                image_or_name.bold(),
-                "config.yaml".dimmed(),
-            );
-        } else {
-            eprintln!(
-                "\n  {} Worker {} added to {}",
-                "✓".green(),
-                image_or_name.bold(),
-                "config.yaml".dimmed(),
-            );
-        }
-        eprintln!("  Start the engine to run it, or edit config.yaml to customize.");
+    } else if already_exists {
+        eprintln!(
+            "\n  {} Worker {} updated in {} (merged with builtin defaults)",
+            "✓".green(),
+            image_or_name.bold(),
+            "config.yaml".dimmed(),
+        );
+        eprintln!("  Start the engine to run it, or edit config.yaml to customize.");
+    } else {
+        eprintln!(
+            "\n  {} Worker {} added to {}",
+            "✓".green(),
+            image_or_name.bold(),
+            "config.yaml".dimmed(),
+        );
+        eprintln!("  Start the engine to run it, or edit config.yaml to customize.");
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/iii-worker/src/cli/managed.rs` around lines 203 - 220, The nested conditional in the worker add/update notification block should be flattened: replace the current "else { if already_exists { ... } else { ... } }" structure with "else if already_exists { ... } else { ... }" to improve readability; update the branch that prints success messages using the existing variables image_or_name and already_exists and keep the eprintln!(" Start the engine to run it, or edit config.yaml to customize."); line unchanged.
157-158: Consider logging a warning when registry pre-fetch fails.

When `fetch_registry()` fails, the error is silently discarded with `.ok()`. While the continue-on-failure semantics are intentional and each worker will retry individually, users see N separate error messages without understanding the root cause (e.g., registry unavailable). A `tracing::warn!` here would aid debugging.

💡 Suggested improvement
```diff
     // Pre-fetch registry once for all workers (avoids N HTTP roundtrips).
-    let registry = fetch_registry().await.ok();
+    let registry = match fetch_registry().await {
+        Ok(r) => Some(r),
+        Err(e) => {
+            tracing::warn!("Registry pre-fetch failed, workers will retry individually: {}", e);
+            None
+        }
+    };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/iii-worker/src/cli/managed.rs` around lines 157 - 158, The current pre-fetch silently discards errors by calling fetch_registry().await.ok(), losing root-cause context; change this to capture the Result from fetch_registry() and, on Err(e), emit a tracing::warn!(error = %e, "pre-fetch registry failed, workers will retry individually") while still allowing code to continue (set registry to None on error). Locate the fetch_registry() call and the registry binding in managed.rs and replace the .ok() usage with a match or map_err that logs the error and preserves the intended continue-on-failure semantics.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: c40ea619-137a-4d3e-87fc-592e9056ec33
📒 Files selected for processing (2)
- crates/iii-worker/src/cli/managed.rs
- crates/iii-worker/tests/config_file_integration.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/iii-worker/tests/config_file_integration.rs
Both `iii worker add` and `iii worker remove` now accept multiple worker names (e.g., `add w1 w2 w3`, `remove w1 w2`). Each worker is processed sequentially with continue-on-failure semantics. Multi-mode uses compact progress output with a summary at the end. Removes unused --runtime/--address/--port flags from both commands.
Fetch the registry a single time in handle_managed_add_many and pass it through to handle_managed_add and handle_binary_add, eliminating N-1 redundant HTTP roundtrips when adding multiple workers. Also gates all verbose OCI/resolve output on !brief so multi-mode shows only compact progress lines, matching the binary worker path.
7ab180a to b86a912
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/iii-worker/src/cli/app.rs`:
- Around line 28-32: The help text for the Remove CLI variant currently claims
it "stops and removes containers" but the handler in managed.rs only deletes
entries from config.yaml; update the Remove variant docstring in the enum (the
Remove variant in app.rs) to accurately reflect behavior (e.g., "Remove one or
more workers from configuration (does not stop running containers)") so CLI help
matches managed.rs, or alternatively implement container cleanup by enhancing
the existing handler in managed.rs (where the remove logic lives) to call your
stop/remove container functions (invoke the container stop + delete APIs you
already use elsewhere) before updating config.yaml; adjust tests/docs
accordingly.
In `@crates/iii-worker/src/cli/managed.rs`:
- Around line 226-248: The fast-path currently discards a user-specified version
because parse_worker_input's second value is ignored; change the code to capture
the parsed version (e.g., let (name, parsed_version) =
parse_worker_input(image_or_name)) and, when resolving an OCI worker in registry
(inspect registry.workers -> entry.image and entry.latest), prefer the
user-supplied parsed_version if present (use it to build image_ref as
"{image}:{parsed_version}") otherwise fall back to entry.latest; then call
handle_oci_pull_and_add(&name, &image_ref, brief). Ensure binary handling via
handle_binary_add remains unchanged.
- Around line 152-165: The code eagerly calls fetch_registry() before processing
any workers; change this to a lazy cached fetch so you only hit the registry
when a worker actually needs it. Replace the eager let registry =
fetch_registry().await.ok(); with let mut registry = None; inside the loop
detect if the current name needs registry access (e.g. implement a small helper
needs_registry(name) that returns false for builtin names and explicit OCI refs
like "oci://..." or similar), and if needs_registry(name) and registry.is_none()
then set registry = fetch_registry().await.ok(); finally call
handle_managed_add(name, brief, registry.as_ref()) as before so the registry
roundtrip happens only when required.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 18680007-26e6-4f95-a27b-4ce91d67c2c5
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (5)
- crates/iii-worker/src/cli/app.rs
- crates/iii-worker/src/cli/managed.rs
- crates/iii-worker/src/main.rs
- crates/iii-worker/tests/config_file_integration.rs
- crates/iii-worker/tests/worker_integration.rs
🚧 Files skipped from review as they are similar to previous changes (3)
- crates/iii-worker/tests/worker_integration.rs
- crates/iii-worker/tests/config_file_integration.rs
- crates/iii-worker/src/main.rs
Summary
- `iii worker add` and `iii worker remove` now accept multiple worker names in a single invocation
- Multi-worker mode uses compact progress output (`[1/3] Adding...` / `✓ worker-name`); verbose output preserved for single-worker usage
- Removes unused `--runtime`, `--address`, `--port` flags from both commands

Usage
Multi-mode output
Test plan
- `cargo test -p iii-worker` — 198 tests pass
- `cargo clippy -p iii-worker` — no new warnings
- `<WORKER[@VERSION]>...` for add and `<WORKER>...` for remove

Summary by CodeRabbit
New Features
Tests