
feat(iii-worker): support multiple workers in add and remove commands #1440

Merged
andersonleal merged 2 commits into main from feat/multi-worker-add-remove on Apr 8, 2026

Conversation


@andersonleal andersonleal commented Apr 8, 2026

Summary

  • iii worker add and iii worker remove now accept multiple worker names in a single invocation
  • Continue-on-failure: if one worker fails, the rest are still processed
  • Compact progress output in multi-mode ([1/3] Adding... / ✓ worker-name), verbose output preserved for single-worker usage
  • Removes unused --runtime, --address, --port flags from both commands
  • 198 tests passing (2 new for multi-remove, 3 new for multi-add)

Usage

iii worker add iii-http iii-state iii-cron
iii worker remove iii-http iii-state

Multi-mode output

  [1/2] Adding iii-http...
        ✓ iii-http
  [2/2] Adding iii-state...
        ✓ iii-state

  Added 2/2 workers.
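As a rough model of the batch semantics above (a simplified synchronous sketch; `run_batch` and its handler are hypothetical stand-ins for the real async handlers in managed.rs), the loop counts failures instead of aborting and returns exit code 1 if any worker failed:

```rust
// Simplified sketch of multi-mode add: compact progress lines, a
// continue-on-failure loop, and a final "Added N/M workers." summary.
fn run_batch(names: &[&str], mut handler: impl FnMut(&str) -> bool) -> i32 {
    let total = names.len();
    let brief = total > 1; // compact output only when adding several workers
    let mut failed = 0;

    for (i, &name) in names.iter().enumerate() {
        if brief {
            eprintln!("[{}/{}] Adding {}...", i + 1, total, name);
        }
        if handler(name) {
            if brief {
                eprintln!("      ✓ {}", name);
            }
        } else {
            failed += 1; // keep processing the remaining workers
        }
    }

    if brief {
        eprintln!("\nAdded {}/{} workers.", total - failed, total);
    }
    if failed == 0 { 0 } else { 1 } // partial failure still exits non-zero
}

fn main() {
    // All succeed: exit code 0.
    assert_eq!(run_batch(&["iii-http", "iii-state"], |_| true), 0);
    // One failure: the other worker is still processed, exit code 1.
    assert_eq!(run_batch(&["iii-http", "missing"], |n| n != "missing"), 1);
}
```

With a single worker, `brief` is false and none of the compact lines print, which matches the PR's preserved verbose single-worker path.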

Test plan

  • cargo test -p iii-worker — 198 tests pass
  • cargo clippy -p iii-worker — no new warnings
  • Single-worker add/remove still works with verbose output
  • Multi-worker add with partial failure returns exit code 1, successful workers still added
  • Multi-worker remove with missing worker returns exit code 1, valid workers still removed
  • CLI help shows <WORKER[@VERSION]>... for add and <WORKER>... for remove

Summary by CodeRabbit

  • New Features

    • Add/remove subcommands now accept multiple worker names in a single command.
    • Batch operations automatically use a concise, single-line summary output for multi-worker runs.
  • Tests

    • Added and updated integration tests validating multi-worker add/remove behavior, mixed-success handling, and config persistence across batch operations.


vercel bot commented Apr 8, 2026

The latest updates on your projects.

Project      Deployment  Actions           Updated (UTC)
iii-website  Ready       Preview, Comment  Apr 8, 2026 7:58pm
motia-docs   Ready       Preview, Comment  Apr 8, 2026 7:58pm



coderabbitai bot commented Apr 8, 2026

📝 Walkthrough


Add/Remove CLI subcommands now accept multiple worker names; per-worker runtime/address/port options removed. Managed handlers were extended for batch add/remove with a brief mode and optional cached registry usage; main dispatch and tests updated accordingly.

Changes

  • CLI Definition (crates/iii-worker/src/cli/app.rs): Add/Remove changed to worker_names: Vec<String> (required, num_args = 1..); removed per-command runtime, address, port options.
  • Managed Handlers (crates/iii-worker/src/cli/managed.rs): signatures updated to accept brief: bool and cached_registry: Option<&RegistryV2>; added handle_managed_add_many and handle_managed_remove_many; batch iteration, aggregated failure reporting, and gated verbose output for OCI/binary flows.
  • CLI Dispatch (crates/iii-worker/src/main.rs): match arms updated to call handle_managed_add_many(&worker_names).await and handle_managed_remove_many(&worker_names).await instead of the single-target handlers.
  • Integration Tests (crates/iii-worker/tests/config_file_integration.rs, crates/iii-worker/tests/worker_integration.rs): tests updated for the new Add shape and handler signatures; added tests for multi-worker parsing and multi-add/remove behavior, including partial-failure persistence assertions.
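The dispatch change in main.rs can be sketched with a simplified command enum. The variant and handler names mirror the PR description, but the handlers here are synchronous stubs standing in for the real async handle_managed_add_many / handle_managed_remove_many:

```rust
// Simplified model of the CLI dispatch after this PR: both Add and Remove
// carry a Vec<String> of worker names and route to the *_many handlers.
enum Command {
    Add { worker_names: Vec<String> },
    Remove { worker_names: Vec<String> },
}

// Stub handlers: the real ones are async and return a process exit code.
fn handle_managed_add_many(names: &[String]) -> i32 {
    names.iter().for_each(|n| println!("add {}", n));
    0
}

fn handle_managed_remove_many(names: &[String]) -> i32 {
    names.iter().for_each(|n| println!("remove {}", n));
    0
}

fn dispatch(cmd: Command) -> i32 {
    match cmd {
        Command::Add { worker_names } => handle_managed_add_many(&worker_names),
        Command::Remove { worker_names } => handle_managed_remove_many(&worker_names),
    }
}

fn main() {
    let code = dispatch(Command::Add {
        worker_names: vec!["iii-http".into(), "iii-state".into()],
    });
    assert_eq!(code, 0);
}
```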

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant CLI as "CLI (user)"
  participant App as "app.rs (arg parse)"
  participant Managed as "managed.rs (batch handlers)"
  participant Registry as "Registry / OCI puller"
  participant FS as "Config file / Worker store"

  CLI->>App: invoke `add` with multiple worker_names
  App->>Managed: call handle_managed_add_many(worker_names)
  Managed->>Registry: optionally fetch registry (once) / resolve images
  Registry-->>Managed: return metadata / manifests
  loop per worker
    Managed->>Registry: pull OCI image or use binary routing (if needed)
    Registry-->>Managed: pull result
    Managed->>FS: write worker config / persist entry
    FS-->>Managed: ack
  end
  Managed->>App: exit code / summary
  App->>CLI: prints batch summary

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~40 minutes

Suggested reviewers

  • ytallo
  • sergiofilhowz

Poem

🐰 I nibble code and tidy the lair,

Names in a bundle now leap through the air,
Quiet hops when I work in brief,
Batch by batch I plant each leaf,
A happy hop — the workers pair.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 57.14%, below the required 80.00% threshold. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title check ✅ Passed: the title clearly and concisely summarizes the main change, extending the add and remove commands to support multiple workers in a single invocation.



@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (1)
crates/iii-worker/src/cli/managed.rs (1)

348-381: Consider extracting shared batch-loop logic.

handle_managed_remove_many follows the same pattern as handle_managed_add_many: iterate with progress, aggregate failures, print summary, return exit code. While not a blocking issue, consider extracting a generic batch executor to reduce duplication if more batch operations are added in the future.

♻️ Optional: Extract shared batch logic
// A helper could reduce duplication:
async fn run_batch<F, Fut>(
    names: &[String],
    verb_present: &str,  // "Adding" / "Removing"
    verb_past: &str,     // "Added" / "Removed"
    mut handler: F,
) -> i32
where
    F: FnMut(&str, bool) -> Fut,
    Fut: std::future::Future<Output = i32>,
{
    let total = names.len();
    let brief = total > 1;
    let mut fail_count = 0;

    for (i, name) in names.iter().enumerate() {
        if brief {
            eprintln!("  [{}/{}] {} {}...", i + 1, total, verb_present, name);
        }
        if handler(name, brief).await != 0 {
            fail_count += 1;
        }
    }

    if total > 1 {
        let succeeded = total - fail_count;
        eprintln!("\n  {} {}/{} workers.", verb_past, succeeded, total);
    }

    if fail_count == 0 { 0 } else { 1 }
}

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 2556d7e4-0199-4f73-b2a7-38d144203b31

📥 Commits

Reviewing files that changed from the base of the PR and between ac32ecf and b13a209.

📒 Files selected for processing (5)
  • crates/iii-worker/src/cli/app.rs
  • crates/iii-worker/src/cli/managed.rs
  • crates/iii-worker/src/main.rs
  • crates/iii-worker/tests/config_file_integration.rs
  • crates/iii-worker/tests/worker_integration.rs

@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (2)
crates/iii-worker/src/cli/managed.rs (2)

203-220: Minor: Flatten nested conditionals for readability.

The else { if already_exists { ... } else { ... } } pattern can be simplified to else if / else for cleaner structure.

♻️ Suggested refactor
         }
-        } else {
-            if already_exists {
-                eprintln!(
-                    "\n  {} Worker {} updated in {} (merged with builtin defaults)",
-                    "✓".green(),
-                    image_or_name.bold(),
-                    "config.yaml".dimmed(),
-                );
-            } else {
-                eprintln!(
-                    "\n  {} Worker {} added to {}",
-                    "✓".green(),
-                    image_or_name.bold(),
-                    "config.yaml".dimmed(),
-                );
-            }
-            eprintln!("  Start the engine to run it, or edit config.yaml to customize.");
+        } else if already_exists {
+            eprintln!(
+                "\n  {} Worker {} updated in {} (merged with builtin defaults)",
+                "✓".green(),
+                image_or_name.bold(),
+                "config.yaml".dimmed(),
+            );
+            eprintln!("  Start the engine to run it, or edit config.yaml to customize.");
+        } else {
+            eprintln!(
+                "\n  {} Worker {} added to {}",
+                "✓".green(),
+                image_or_name.bold(),
+                "config.yaml".dimmed(),
+            );
+            eprintln!("  Start the engine to run it, or edit config.yaml to customize.");
         }

157-158: Consider logging a warning when registry pre-fetch fails.

When fetch_registry() fails, the error is silently discarded with .ok(). While the continue-on-failure semantics are intentional and each worker will retry individually, users see N separate error messages without understanding the root cause (e.g., registry unavailable). A tracing::warn! here would aid debugging.

💡 Suggested improvement
     // Pre-fetch registry once for all workers (avoids N HTTP roundtrips).
-    let registry = fetch_registry().await.ok();
+    let registry = match fetch_registry().await {
+        Ok(r) => Some(r),
+        Err(e) => {
+            tracing::warn!("Registry pre-fetch failed, workers will retry individually: {}", e);
+            None
+        }
+    };

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: c40ea619-137a-4d3e-87fc-592e9056ec33

📥 Commits

Reviewing files that changed from the base of the PR and between b13a209 and 7ab180a.

📒 Files selected for processing (2)
  • crates/iii-worker/src/cli/managed.rs
  • crates/iii-worker/tests/config_file_integration.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • crates/iii-worker/tests/config_file_integration.rs

Both `iii worker add` and `iii worker remove` now accept multiple
worker names (e.g., `add w1 w2 w3`, `remove w1 w2`). Each worker
is processed sequentially with continue-on-failure semantics.
Multi-mode uses compact progress output with a summary at the end.
Removes unused --runtime/--address/--port flags from both commands.
Fetch the registry a single time in handle_managed_add_many and pass
it through to handle_managed_add and handle_binary_add, eliminating
N-1 redundant HTTP roundtrips when adding multiple workers.

Also gates all verbose OCI/resolve output on !brief so multi-mode
shows only compact progress lines, matching the binary worker path.
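The fetch-once optimization described in the commit message can be modeled in a few lines of plain Rust. This is a hypothetical stand-in: the real fetch_registry is an async HTTP call returning a RegistryV2, simulated here by a counting closure to show that the roundtrip happens at most once per batch:

```rust
// Placeholder for the real RegistryV2 payload.
struct Registry;

// Fetch the registry lazily on first use, then reuse the cached copy for
// every remaining worker in the batch.
fn add_many(names: &[&str], mut fetch_registry: impl FnMut() -> Registry) {
    let mut cached: Option<Registry> = None;
    for _name in names {
        let _registry = cached.get_or_insert_with(|| fetch_registry());
        // ... resolve and add the worker using `_registry` ...
    }
}

fn main() {
    let mut fetches = 0;
    add_many(&["iii-http", "iii-state", "iii-cron"], || {
        fetches += 1;
        Registry
    });
    // Three workers, one roundtrip.
    assert_eq!(fetches, 1);
}
```

An empty batch never touches the registry at all, which is the lazy variant CodeRabbit's third review asks for below the eager pre-fetch.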
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/iii-worker/src/cli/app.rs`:
- Around line 28-32: The help text for the Remove CLI variant currently claims
it "stops and removes containers" but the handler in managed.rs only deletes
entries from config.yaml; update the Remove variant docstring in the enum (the
Remove variant in app.rs) to accurately reflect behavior (e.g., "Remove one or
more workers from configuration (does not stop running containers)") so CLI help
matches managed.rs, or alternatively implement container cleanup by enhancing
the existing handler in managed.rs (where the remove logic lives) to call your
stop/remove container functions (invoke the container stop + delete APIs you
already use elsewhere) before updating config.yaml; adjust tests/docs
accordingly.

In `@crates/iii-worker/src/cli/managed.rs`:
- Around line 226-248: The fast-path currently discards a user-specified version
because parse_worker_input's second value is ignored; change the code to capture
the parsed version (e.g., let (name, parsed_version) =
parse_worker_input(image_or_name)) and, when resolving an OCI worker in registry
(inspect registry.workers -> entry.image and entry.latest), prefer the
user-supplied parsed_version if present (use it to build image_ref as
"{image}:{parsed_version}") otherwise fall back to entry.latest; then call
handle_oci_pull_and_add(&name, &image_ref, brief). Ensure binary handling via
handle_binary_add remains unchanged.
- Around line 152-165: The code eagerly calls fetch_registry() before processing
any workers; change this to a lazy cached fetch so you only hit the registry
when a worker actually needs it. Replace the eager let registry =
fetch_registry().await.ok(); with let mut registry = None; inside the loop
detect if the current name needs registry access (e.g. implement a small helper
needs_registry(name) that returns false for builtin names and explicit OCI refs
like "oci://..." or similar), and if needs_registry(name) and registry.is_none()
then set registry = fetch_registry().await.ok(); finally call
handle_managed_add(name, brief, registry.as_ref()) as before so the registry
roundtrip happens only when required.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 18680007-26e6-4f95-a27b-4ce91d67c2c5

📥 Commits

Reviewing files that changed from the base of the PR and between 7ab180a and b86a912.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (5)
  • crates/iii-worker/src/cli/app.rs
  • crates/iii-worker/src/cli/managed.rs
  • crates/iii-worker/src/main.rs
  • crates/iii-worker/tests/config_file_integration.rs
  • crates/iii-worker/tests/worker_integration.rs
🚧 Files skipped from review as they are similar to previous changes (3)
  • crates/iii-worker/tests/worker_integration.rs
  • crates/iii-worker/tests/config_file_integration.rs
  • crates/iii-worker/src/main.rs

@andersonleal andersonleal merged commit bb94236 into main Apr 8, 2026
26 checks passed
@andersonleal andersonleal deleted the feat/multi-worker-add-remove branch April 8, 2026 22:55