
feat: add iii-mcp and iii-a2a protocol workers#4

Open
rohitg00 wants to merge 9 commits into main from feat/mcp-a2a-workers

Conversation


@rohitg00 rohitg00 commented Mar 30, 2026

Summary

  • Move MCP and A2A protocol workers from iii-hq/iii#1374 (feat: add iii-mcp and iii-a2a protocol workers) into the standalone workers repo
  • iii-mcp — MCP protocol worker (stdio + Streamable HTTP, 6 builtin tools, 4 resources, 4 prompts)
  • iii-a2a — A2A protocol worker (agent card at /.well-known/agent-card.json, JSON-RPC at /a2a)
  • Both use iii-sdk = "0.10.0" from crates.io (no monorepo path deps)
  • Updated registry/index.json with both workers at v0.3.0
  • Updated README with full usage docs

Usage

iii-mcp                     # MCP stdio (Claude Desktop, Cursor)
iii-mcp --no-stdio          # MCP HTTP only (POST /mcp)
iii-a2a                     # A2A HTTP (POST /a2a + GET /.well-known/agent-card.json)

Metadata filtering

Functions need metadata tags to appear as tools/skills:

iii.registerFunction({
  id: 'orders::process',
  metadata: { "mcp.expose": true, "a2a.expose": true }
}, handler)

--expose-all disables filtering.
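The filtering rule can be pictured as a small predicate over a function's metadata. This is only a sketch: `is_exposed` and the plain `HashMap<String, bool>` metadata shape are illustrative stand-ins, not the SDK's actual types.

```rust
use std::collections::HashMap;

// Illustrative exposure gate: a function is listed as a tool/skill only
// when its metadata carries the relevant flag, unless --expose-all is set.
fn is_exposed(metadata: &HashMap<String, bool>, flag: &str, expose_all: bool) -> bool {
    expose_all || metadata.get(flag).copied().unwrap_or(false)
}

fn main() {
    let mut meta = HashMap::new();
    meta.insert("mcp.expose".to_string(), true);
    assert!(is_exposed(&meta, "mcp.expose", false)); // tagged: visible as a tool
    assert!(!is_exposed(&meta, "a2a.expose", false)); // untagged: hidden
    assert!(is_exposed(&meta, "a2a.expose", true)); // --expose-all overrides
    println!("ok");
}
```

The same check applies to both workers, keyed on `mcp.expose` and `a2a.expose` respectively.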

Test plan

  • cd mcp && cargo build --release compiles
  • cd a2a && cargo build --release compiles
  • MCP stdio: initialize → tools/list → tools/call round-trip
  • MCP HTTP: POST /mcp tools/list + tools/call
  • A2A: agent card, message/send, tasks/get, tasks/cancel, tasks/list
  • Registry entries resolve correctly via iii worker add mcp / iii worker add a2a

Summary by CodeRabbit

  • New Features

    • MCP worker: stdio + HTTP transports, tool listing/call, prompts, and spawn/stop of Node/Python workers.
    • A2A worker: agent-discovery endpoint, HTTP JSON-RPC messaging, persistent task lifecycle APIs (create/get/list/cancel).
  • Documentation

    • README updated with mcp and a2a module docs, examples, build/usage instructions, and exposure metadata notes.
  • Chores

    • Worker registry added entries for mcp and a2a.
  • Tests

    • Manifest test now uses the compile-time package version.

Move MCP and A2A protocol workers from iii-hq/iii#1374 into
the standalone workers repo. Each is its own Rust crate with
crates.io iii-sdk dependency (no monorepo path refs).

- iii-mcp: MCP protocol worker (stdio + Streamable HTTP)
- iii-a2a: A2A protocol worker (agent card + JSON-RPC)
- Updated registry/index.json with both workers
- Updated README with usage docs

coderabbitai bot commented Mar 30, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Walkthrough

Adds two new worker crates: an MCP worker (stdio + HTTP MCP JSON‑RPC, tools/resources/prompts, worker spawn/trigger management) and an A2A worker (HTTP agent-card discovery, JSON‑RPC A2A methods, task lifecycle persisted to engine KV), plus registry entries and README documentation updates.

Changes

  • Documentation (README.md): Extended Modules section documenting the new mcp and a2a workers: transports, exposure metadata (mcp.expose / a2a.expose), builtins, prompts/resources (MCP), A2A types/endpoints/state storage, examples, and build/usage commands.
  • MCP crate metadata & entry (mcp/Cargo.toml, mcp/src/main.rs): New binary crate iii-mcp (CLI flags: engine_url, debug, no_stdio, expose_all); initializes tracing, registers the worker, and sets up the HTTP handler and/or stdio transport.
  • MCP core: handler & HTTP (mcp/src/handler.rs): Implements the MCP JSON-RPC handler/router (initialize, ping, tools/list/call, resources/*, prompts/*), built-in tools for worker/trigger lifecycle, an internal trigger registry, a notification channel, and register_http for POST /mcp.
  • MCP runtime: transport & worker manager (mcp/src/transport.rs, mcp/src/worker_manager.rs): Stdio newline-delimited JSON transport that drains notifications and emits JSON-RPC errors; WorkerManager to generate, spawn, and stop temporary Node/Python workers, manage temp dirs/children, and return create/stop results.
  • MCP prompts (mcp/src/prompts.rs): Prompt listing and templating API (list and get) producing chat-style prompt payloads with basic argument validation.
  • A2A crate metadata & entry (a2a/Cargo.toml, a2a/src/main.rs): New binary crate iii-a2a (CLI flags: engine_url, debug, expose_all, base_url); initializes tracing, registers the worker, and registers the A2A endpoints.
  • A2A handler & types (a2a/src/handler.rs, a2a/src/types.rs): A2A request/response types and a router exposing GET /.well-known/agent-card.json and POST /a2a; handles message/send and the task APIs (tasks/get/list/cancel), persists tasks under a2a:tasks, resolves function targets, and enforces the exposure gate.
  • Registry (registry/index.json): Added workers.mcp and workers.a2a entries (version 0.3.0) with default config (engine_url, expose_all).
  • Minor test fix (image-resize/src/manifest.rs): Test updated to assert the manifest version against env!("CARGO_PKG_VERSION") instead of a hard-coded string.

Sequence Diagrams

sequenceDiagram
    participant Client
    participant MCPHandler
    participant WorkerManager
    participant Engine
    Client->>MCPHandler: POST /mcp (JSON-RPC tools/call iii_worker_register)
    MCPHandler->>WorkerManager: create_worker(language, code, function_name)
    WorkerManager->>Engine: spawn child process and register worker (engine_url)
    Engine-->>WorkerManager: registration confirmed
    WorkerManager-->>MCPHandler: WorkerCreateResult (id)
    MCPHandler-->>Client: JSON-RPC success response (worker id)
sequenceDiagram
    participant Client
    participant A2AHandler
    participant StateBackend
    participant Engine
    participant TargetFunction
    Client->>A2AHandler: POST /a2a (message/send)
    A2AHandler->>StateBackend: set task (a2a:tasks) -> Working
    StateBackend-->>A2AHandler: ack
    A2AHandler->>Engine: trigger target function (30s timeout)
    Engine->>TargetFunction: deliver invocation
    TargetFunction-->>Engine: result / error
    Engine-->>A2AHandler: invocation result
    A2AHandler->>StateBackend: update task -> Completed/Failed (artifact or error)
    A2AHandler-->>Client: A2A JSON-RPC response (task id + status)

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 I dug a burrow in bytes and code,
I planted endpoints where messages flowed,
MCP spawns helpers, prompts and tools in tow,
A2A keeps tasks and stamps their status slow,
Hop on—workers bustle, and integrations grow!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 2.78%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed: check skipped; CodeRabbit's high-level summary is enabled.
  • Title check — ✅ Passed: the title clearly summarizes the main change: adding two new protocol workers (iii-mcp and iii-a2a).

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


The test was asserting "0.1.0" but Cargo.toml is at 0.1.2,
causing CI failure on every PR.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 11

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@a2a/src/handler.rs`:
- Around line 304-331: After resolving function_id (via resolve_function) but
before calling iii.trigger, enforce the a2a.expose metadata check (respecting
expose_all) inside handle_a2a_request/handle_send: lookup the target function's
exposure metadata for fn_name (or the resolved function_id) and if the function
is not exposed to this caller/context, set task.status to Failed with an
appropriate Message and return the A2AResponse instead of invoking iii.trigger;
ensure this gating logic is invoked on this code path (the branch that currently
calls iii.trigger) so non-exposed functions are rejected prior to any
iii.trigger(...) call.
- Around line 126-139: AgentCard currently hard-codes the interface URL
("http://localhost:3111") in supported_interfaces which breaks non-local
deployments; change AgentCard construction (the supported_interfaces /
AgentInterface.url field) to read from configuration or derive from the incoming
request base URL instead of a literal string—e.g., use your application's
config/public URL value (or an env var) and format the interface URL from that
when building AgentCard so deployed instances advertise a reachable endpoint.
- Around line 276-302: Load the existing task by task_id (e.g., call
get_task/get_task_by_id or the existing task retrieval helper) instead of
instantiating a new Task; then update its status (set TaskStatus.state =
TaskState::Working and set the in-progress Message), append params.message to
its history (preserving prior history), merge or preserve artifacts, and
preserve terminal states (Completed/Failed/Canceled) by early-returning or
skipping the overwrite if the stored task is already terminal; finally call
store_task(iii, &task) with the merged/updated task. Apply the same pattern to
the other creation site referenced (lines ~325-372) so repeated message/send
does not wipe history/artifacts or race with tasks/cancel.

In `@mcp/src/handler.rs`:
- Around line 418-452: dispatch_http currently returns only tools from
iii.list_functions() and doesn't include the builtin management tools or the
iii_worker_/iii_trigger_ built-ins that McpHandler::tools_list() and
tools_call() expose; update dispatch_http to mirror those behaviors by
prepending builtin_tools() to the tools list (as in McpHandler::tools_list())
and ensure tools/call routing resolves and routes iii_worker_* and iii_trigger_*
built-ins the same way tools_call() does (e.g., normalize function ids and
forward calls to iii.trigger or the builtin handler), so HTTP clients can both
discover and invoke the same built-in management tools as the stdio surface.
- Around line 239-255: When handling tools/call, don't just translate
params.name ("__" -> "::") and invoke self.iii.trigger; first enforce the same
mcp.expose visibility checks used by tools/list so hidden functions cannot be
invoked directly. Specifically, after computing function_id (and before calling
self.iii.trigger in the handler that builds TriggerRequest) lookup the
function's metadata or call the existing expose check logic (the same
function/path used by tools/list) to reject non-exposed functions and return a
tool_error if not allowed. Apply the identical guard to the other call-path
referenced (the handler around lines 439-451) so both places validate expose
permissions before triggering functions.
- Around line 135-151: The code currently maps every Err from the result match
to METHOD_NOT_FOUND when building the JsonRpcResponse; instead propagate the
actual JSON-RPC error code produced by parse()/called methods (e.g., tools_call,
resources_read, prompts::get) into the response. Change the result error type to
carry a JSON-RPC error code and message (or use an existing JsonRpcError
struct), return Err((code, message)) from callers where parse() can produce
"Invalid params", and when constructing the response from match result use
JsonRpcResponse::error(id, code, message) instead of always METHOD_NOT_FOUND so
invalid-params (-32602) and other codes are preserved.

In `@mcp/src/prompts.rs`:
- Around line 6-16: The prompt definitions declare arguments with "required":
true for commands like "register-function", "build-api", and "setup-cron", but
the later code that fills missing values (the logic currently silently
substituting defaults) must instead validate presence and fail-fast; update the
code that fills/injects missing prompt arguments to check the "required" flag
for each argument and return an error or log+abort the prompt construction when
a required argument is absent (do not substitute a default), adjust the handler
that currently does silent substitution to perform this validation, and
add/update tests to assert that missing required arguments produce an explicit
validation error rather than a defaulted value.

In `@mcp/src/transport.rs`:
- Around line 17-21: The protocol loop currently swallows write/flush errors (in
the notification loop using handler.take_notification().await and the other
write sites including write_json), which can silently drop responses; change the
loop and the write paths to propagate or handle write errors instead of
discarding them—check the Result from writer.write_all and writer.flush (and
from write_json), and on Err either return Err from the surrounding async
function or break the loop and close the connection so failures stop further
message processing; locate the notification loop that calls
writer.write_all(notification.as_bytes())/writer.flush(), the other write
occurrences around the write_json calls, and update them to handle errors
deterministically (propagate with ? or explicitly log+break) rather than
ignoring the Result.

In `@mcp/src/worker_manager.rs`:
- Around line 87-124: The code reports success immediately after spawn_worker
returns (in spawn_worker(&params.language, &temp_dir, file_name) -> child) but
does not verify the child actually stays up or captures stderr, so early exits
are stored and hidden; modify the flow in the worker creation path to wait for a
short readiness/handshake or at least poll the child process for an immediate
exit and read its stderr/stdout before inserting into self.workers and returning
WorkerCreateResult: after obtaining child from spawn_worker, attach/read its
stderr (and stdout) asynchronously, await a small ready signal or a brief
delay/poll to see if child has exited (using child.try_wait()/await child.wait
with timeout), on early failure remove_dir_all(&temp_dir), collect and include
stderr in the returned Err (or log it) and do not insert into self.workers; only
insert SpawnedWorker and return success after the readiness check passes.
- Around line 150-185: The generated JS bootstrap in generate_node_worker
injects raw values (engine_url, params.code, function_name, description) with
brittle manual escaping; replace the manual replace calls with a proper string
literal encoder (e.g. serde_json::to_string or a dedicated JS string escape
helper) and embed the encoded result into the format! output so each injected
value becomes a safe JS string literal (use the JSON-encoded string directly
rather than adding your own quotes/escapes). Update generate_node_worker to call
the encoder for self.engine_url, params.code, params.function_name, and
params.description (or their defaults) and use those encoded strings in the
format! invocation to avoid invalid JS when values contain quotes, newlines, or
other special chars.

In `@README.md`:
- Line 15: Update the typo in the README where the phrase "6 builtin tools:
worker register/stop, trigger register/unregister/void/enqueue" appears: change
"builtin" to the correct hyphenated form "built-in" so the line reads "6
built-in tools: worker register/stop, trigger register/unregister/void/enqueue".
Ensure the rest of the wording and punctuation remain unchanged.
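The worker_manager comment above recommends serde_json::to_string for safe embedding of values into the generated JS bootstrap. A simplified stand-in encoder shows the idea; it covers only the common escapes, whereas serde_json handles the full JSON escape set.

```rust
// Encode a Rust string as a JS string literal so injected values
// (code, engine_url, descriptions) can't break the generated bootstrap.
// Sketch only: handles the common escapes, not the full JSON escape set.
fn js_string_literal(s: &str) -> String {
    let mut out = String::from("\"");
    for c in s.chars() {
        match c {
            '"' => out.push_str("\\\""),
            '\\' => out.push_str("\\\\"),
            '\n' => out.push_str("\\n"),
            '\r' => out.push_str("\\r"),
            '\t' => out.push_str("\\t"),
            _ => out.push(c),
        }
    }
    out.push('"');
    out
}

fn main() {
    // A value containing quotes and a newline stays a single valid JS literal.
    assert_eq!(js_string_literal("say \"hi\"\n"), "\"say \\\"hi\\\"\\n\"");
    println!("ok");
}
```

The encoded result is embedded directly (it already carries its own quotes), so the format! template never adds quoting of its own.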

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: b101925f-127d-4e7f-8262-6757ffe17427

📥 Commits

Reviewing files that changed from the base of the PR and between 2c8376d and 03ca455.

📒 Files selected for processing (12)
  • README.md
  • a2a/Cargo.toml
  • a2a/src/handler.rs
  • a2a/src/main.rs
  • a2a/src/types.rs
  • mcp/Cargo.toml
  • mcp/src/handler.rs
  • mcp/src/main.rs
  • mcp/src/prompts.rs
  • mcp/src/transport.rs
  • mcp/src/worker_manager.rs
  • registry/index.json

Comment on lines +135 to +151
let result = match method {
"initialize" => Ok(initialize_result()),
"ping" => Ok(json!({})),
"tools/list" => self.tools_list().await,
"tools/call" => self.tools_call(params).await,
"resources/list" => Ok(self.resources_list()),
"resources/read" => self.resources_read(params).await,
"resources/templates/list" => Ok(json!({ "resourceTemplates": [] })),
"prompts/list" => Ok(prompts::list()),
"prompts/get" => Ok(prompts::get(params)),
_ => Err(format!("Unknown method: {}", method)),
};

json!(match result {
Ok(value) => JsonRpcResponse::success(id, value),
Err(err) => JsonRpcResponse::error(id, METHOD_NOT_FOUND, err),
})

⚠️ Potential issue | 🟠 Major

Preserve the actual JSON-RPC error code here.

parse() can fail with "Invalid params", but this branch wraps every failure as METHOD_NOT_FOUND. On the stdio path, malformed tools/call / resources/read / prompts/get requests therefore come back as -32601 instead of -32602, and future non-method errors will be misclassified too.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mcp/src/handler.rs` around lines 135 - 151, The code currently maps every Err
from the result match to METHOD_NOT_FOUND when building the JsonRpcResponse;
instead propagate the actual JSON-RPC error code produced by parse()/called
methods (e.g., tools_call, resources_read, prompts::get) into the response.
Change the result error type to carry a JSON-RPC error code and message (or use
an existing JsonRpcError struct), return Err((code, message)) from callers where
parse() can produce "Invalid params", and when constructing the response from
match result use JsonRpcResponse::error(id, code, message) instead of always
METHOD_NOT_FOUND so invalid-params (-32602) and other codes are preserved.
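A minimal shape for the fix this comment describes, condensed into a standalone sketch: names and the `(code, message)` tuple error are illustrative, while the real handler dispatches to async methods and serializes through JsonRpcResponse.

```rust
const METHOD_NOT_FOUND: i64 = -32601;
const INVALID_PARAMS: i64 = -32602;

// Carry the JSON-RPC error code in the Err variant so the response
// builder no longer collapses every failure into METHOD_NOT_FOUND.
fn dispatch(method: &str, params_valid: bool) -> Result<String, (i64, String)> {
    match method {
        "ping" => Ok("{}".to_string()),
        "tools/call" if !params_valid => Err((INVALID_PARAMS, "Invalid params".to_string())),
        "tools/call" => Ok("{\"content\":[]}".to_string()),
        _ => Err((METHOD_NOT_FOUND, format!("Unknown method: {}", method))),
    }
}

fn main() {
    // Unknown method is still -32601...
    assert_eq!(dispatch("nope", true).unwrap_err().0, METHOD_NOT_FOUND);
    // ...but malformed params now surface as -32602 instead of -32601.
    assert_eq!(dispatch("tools/call", false).unwrap_err().0, INVALID_PARAMS);
    assert!(dispatch("ping", true).is_ok());
    println!("ok");
}
```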

Critical:
- Enforce a2a.expose metadata check before triggering functions (a2a handler)
- Enforce mcp.expose metadata check on tools/call, not just tools/list (both stdio and HTTP)

Major:
- Agent card URL now configurable via --base-url flag instead of hardcoded localhost
- Load existing task on repeated message/send to preserve history/artifacts
- Re-check task state after trigger to avoid overwriting cancel
- Preserve actual JSON-RPC error codes (INVALID_PARAMS vs METHOD_NOT_FOUND)
- HTTP dispatch now includes built-in tools matching stdio surface
- Propagate stdout write errors in transport instead of swallowing
- Verify spawned worker stays up (200ms check) before reporting success
- Use serde_json::to_string for safe string literal encoding in generated worker code
- Capture stderr from spawned workers for debugging

Minor:
- Validate required prompt arguments instead of silent defaults
- Fix "builtin" → "built-in" typo in README

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (2)
mcp/src/worker_manager.rs (2)

226-228: Document or validate the implicit handler requirement.

Line 228 references handler, which must be defined by the user-provided code (line 226). If the code doesn't define handler, the worker fails at runtime with a NameError.

Consider either:

  1. Adding a doc comment on WorkerCreateParams::code specifying this requirement, or
  2. Validating the generated code contains handler before writing.
📝 Example documentation addition to WorkerCreateParams
#[derive(Debug, Clone, Deserialize)]
pub struct WorkerCreateParams {
    pub language: String,
    /// The function code to execute.
    /// - For JavaScript: Must be a valid expression evaluating to a function, e.g., `async (ctx) => { ... }`
    /// - For Python: Must define a `handler` function, e.g., `async def handler(ctx): ...`
    pub code: String,
    pub function_name: String,
    pub description: Option<String>,
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mcp/src/worker_manager.rs` around lines 226 - 228, The runtime failure is
caused by user-supplied WorkerCreateParams::code not guaranteeing a handler
symbol expected by iii.register_function; either document the requirement on
WorkerCreateParams::code (e.g., specify that Python code must define a `handler`
function and JS must evaluate to a function) or add a validation step in the
worker creation flow (before calling iii.register_function) that inspects the
provided code string for the expected handler declaration (for Python look for
`def handler`/`async def handler`, for JS ensure the code is a function
expression or contains a `handler` identifier) and return a clear validation
error if missing; update error messages to mention `handler` and reference
WorkerCreateParams::code and the call site of iii.register_function so users
know how to fix their code.

102-116: Consider capturing stderr output on early worker exit.

The health check correctly detects immediate worker failures. However, stderr is piped (line 267) but not read here, so the error message lacks diagnostic details about why the worker failed (syntax errors, missing imports, etc.).

♻️ Suggested improvement to capture stderr
         tokio::time::sleep(std::time::Duration::from_millis(200)).await;
         match child.try_wait() {
             Ok(Some(status)) => {
+                let mut stderr_output = String::new();
+                if let Some(mut stderr) = child.stderr.take() {
+                    use tokio::io::AsyncReadExt;
+                    let _ = stderr.read_to_string(&mut stderr_output).await;
+                }
                 let _ = tokio::fs::remove_dir_all(&temp_dir).await;
-                return Err(format!(
-                    "Worker exited immediately with status: {}",
-                    status
-                ));
+                return Err(format!(
+                    "Worker exited immediately with status: {}{}",
+                    status,
+                    if stderr_output.is_empty() {
+                        String::new()
+                    } else {
+                        format!("\nstderr: {}", stderr_output.trim())
+                    }
+                ));
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mcp/src/worker_manager.rs` around lines 102 - 116, When detecting an
immediate worker exit in the match on child.try_wait() (the Ok(Some(status))
branch) capture and include the worker's stderr output in the error returned:
take the piped stderr from the Child (use child.stderr.take()), asynchronously
read it to a String (tokio::io::AsyncReadExt::read_to_string or similar), await
that read, then include the stderr content alongside the exit status in the
Err(...) message before removing temp_dir; do the same for the Err(e) branch
where process.wait check fails to provide diagnostic stderr when available.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@a2a/src/handler.rs`:
- Around line 16-25: is_function_exposed currently swallows registry errors
(iii.list_functions()) and returns false, which misleadingly treats lookup
failures as "not exposed"; change is_function_exposed to propagate the error
instead of returning false (e.g., change signature to return Result<bool, E> and
on Err(e) return Err(e)), update all call sites (notably handle_send) to handle
the Result: when is_function_exposed returns Err, log/return a distinct registry
lookup failure error rather than treating it as a policy denial, and only
interpret Ok(false) as the a2a.expose absence; ensure you reference the
iii.list_functions() failure branch and adjust error types/await usage
accordingly.
- Around line 533-540: The current branch treats any message containing "::" as
a shorthand invocation because it uses text.contains("::") globally; instead,
split the input on whitespace first to get the first token (the variable text or
parts[0]) and only interpret shorthand if that first token contains "::". If the
first token contains "::", split that token on "::" (into function id and
optional inline JSON), parse the JSON into payload (falling back to json!({
"input": ... }) if parse fails) and return (function_id, payload); otherwise
fall back to returning (text.to_string(), json!({})). Use the existing variables
(text, parts, payload) and ensure you only treat the first token as a function
id.

In `@mcp/src/prompts.rs`:
- Around line 45-57: The current logic treats any non-"python" value as Node.js;
change the validation so only "python" or "node" are accepted: after obtaining
lang via require_arg(&args, "language") (and fid via require_arg(&args,
"function_id")), explicitly branch for lang == "python" and lang == "node" and
return an explicit error (e.g., Err(...) with a clear validation message) for
any other value instead of falling through to the Node.js template; update the
code paths that build the formatted prompt (the two format! blocks) to remain
unchanged for the allowed values.
- Around line 53-56: The prompt examples for registering a worker function (the
Python `iii.register_function('{fid}', handler)` and the Node.js
`iii.registerFunction({ id: '{fid}' }, handler)`) omit the required MCP exposure
metadata; update both formatted samples in prompts.rs so the registration call
includes the metadata/opts that set mcp.expose true (e.g., pass a
metadata/options object with "mcp.expose": true) so the function appears in
tools/list without --expose-all—look for the format! strings that build the
Python and Node.js snippets and add the metadata argument to the registration
invocation for fid.

In `@README.md`:
- Around line 64-69: Update the Usage section to document the new --base-url
flag: explain that the flag configures the public origin used to build the
agent-card URL (not the full /a2a path), give a concrete example invocation like
"iii-a2a --base-url https://example.com" and note that the agent-card will be
served at <base-url>/a2a and the /.well-known/agent-card.json will reference
that origin so non-local deployments can advertise a reachable interface;
reference the CLI invocation shown (iii-a2a) and the --base-url flag so readers
can find the relevant behavior.

---

Nitpick comments:
In `@mcp/src/worker_manager.rs`:
- Around line 226-228: The runtime failure is caused by user-supplied
WorkerCreateParams::code not guaranteeing a handler symbol expected by
iii.register_function; either document the requirement on
WorkerCreateParams::code (e.g., specify that Python code must define a `handler`
function and JS must evaluate to a function) or add a validation step in the
worker creation flow (before calling iii.register_function) that inspects the
provided code string for the expected handler declaration (for Python look for
`def handler`/`async def handler`, for JS ensure the code is a function
expression or contains a `handler` identifier) and return a clear validation
error if missing; update error messages to mention `handler` and reference
WorkerCreateParams::code and the call site of iii.register_function so users
know how to fix their code.
- Around line 102-116: When detecting an immediate worker exit in the match on
child.try_wait() (the Ok(Some(status)) branch) capture and include the worker's
stderr output in the error returned: take the piped stderr from the Child (use
child.stderr.take()), asynchronously read it to a String
(tokio::io::AsyncReadExt::read_to_string or similar), await that read, then
include the stderr content alongside the exit status in the Err(...) message
before removing temp_dir; do the same for the Err(e) branch where process.wait
check fails to provide diagnostic stderr when available.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 24297408-fdcb-4866-8754-82a129cbe06b

📥 Commits

Reviewing files that changed from the base of the PR and between 1ff864c and c350791.

📒 Files selected for processing (7)
  • README.md
  • a2a/src/handler.rs
  • a2a/src/main.rs
  • mcp/src/handler.rs
  • mcp/src/prompts.rs
  • mcp/src/transport.rs
  • mcp/src/worker_manager.rs
🚧 Files skipped from review as they are similar to previous changes (3)
  • a2a/src/main.rs
  • mcp/src/transport.rs
  • mcp/src/handler.rs

Comment on lines +16 to +25
async fn is_function_exposed(iii: &III, function_id: &str, expose_all: bool) -> bool {
if expose_all {
return true;
}
match iii.list_functions().await {
Ok(fns) => fns
.iter()
.any(|f| f.function_id == function_id && has_metadata_flag(f, "a2a.expose")),
Err(_) => false,
}

⚠️ Potential issue | 🟠 Major

Don't turn registry lookup failures into exposure denials.

When iii.list_functions() fails, this returns false, and handle_send() reports "not exposed via a2a.expose metadata" even though the worker could not determine exposure at all. Bubble that error up, or at least log it distinctly, so engine outages don't look like policy failures.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@a2a/src/handler.rs` around lines 16 - 25, is_function_exposed currently
swallows registry errors (iii.list_functions()) and returns false, which
misleadingly treats lookup failures as "not exposed"; change is_function_exposed
to propagate the error instead of returning false (e.g., change signature to
return Result<bool, E> and on Err(e) return Err(e)), update all call sites
(notably handle_send) to handle the Result: when is_function_exposed returns
Err, log/return a distinct registry lookup failure error rather than treating it
as a policy denial, and only interpret Ok(false) as the a2a.expose absence;
ensure you reference the iii.list_functions() failure branch and adjust error
types/await usage accordingly.
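The shape of that fix, sketched with a stubbed registry: the Result-returning signature and the String error type are assumptions, and the `registry` parameter stands in for the async `iii.list_functions().await` call.

```rust
// Propagate lookup failures instead of mapping them to "not exposed".
fn is_function_exposed(
    registry: Result<Vec<(String, bool)>, String>, // stub for iii.list_functions()
    function_id: &str,
    expose_all: bool,
) -> Result<bool, String> {
    if expose_all {
        return Ok(true);
    }
    let fns = registry?; // an engine outage surfaces as Err, not Ok(false)
    Ok(fns.iter().any(|(id, exposed)| id == function_id && *exposed))
}

fn main() {
    let up = Ok(vec![("orders::process".to_string(), true)]);
    assert_eq!(is_function_exposed(up, "orders::process", false), Ok(true));

    let down: Result<Vec<(String, bool)>, String> = Err("engine unreachable".to_string());
    // Callers can now report a registry failure distinctly from a policy denial.
    assert!(is_function_exposed(down, "orders::process", false).is_err());
    println!("ok");
}
```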

Comment on lines +533 to +540
let text = text.trim();
if text.contains("::") {
let parts: Vec<&str> = text.splitn(2, char::is_whitespace).collect();
if parts.len() == 2 {
let payload = serde_json::from_str(parts[1]).unwrap_or(json!({ "input": parts[1] }));
return (parts[0].to_string(), payload);
}
return (text.to_string(), json!({}));

⚠️ Potential issue | 🟡 Minor

Only parse the shorthand when the first token is a function id.

This branch activates whenever the message contains :: anywhere, so "please run orders::process" resolves to function_id "please". Check the first token for :: before treating free-form text as a direct function invocation.

Suggested fix
-    let text = text.trim();
-    if text.contains("::") {
-        let parts: Vec<&str> = text.splitn(2, char::is_whitespace).collect();
-        if parts.len() == 2 {
-            let payload = serde_json::from_str(parts[1]).unwrap_or(json!({ "input": parts[1] }));
-            return (parts[0].to_string(), payload);
-        }
-        return (text.to_string(), json!({}));
-    }
+    let text = text.trim();
+    let parts: Vec<&str> = text.splitn(2, char::is_whitespace).collect();
+    if let Some(function_id) = parts.first().filter(|fid| fid.contains("::")) {
+        if parts.len() == 2 {
+            let payload = serde_json::from_str(parts[1]).unwrap_or(json!({ "input": parts[1] }));
+            return ((*function_id).to_string(), payload);
+        }
+        return ((*function_id).to_string(), json!({}));
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@a2a/src/handler.rs` around lines 533 - 540, The current branch treats any
message containing "::" as a shorthand invocation because it uses
text.contains("::") globally; instead, split the input on whitespace first to
get the first token (the variable text or parts[0]) and only interpret shorthand
if that first token contains "::". If the first token contains "::", split that
token on "::" (into function id and optional inline JSON), parse the JSON into
payload (falling back to json!({ "input": ... }) if parse fails) and return
(function_id, payload); otherwise fall back to returning (text.to_string(),
json!({})). Use the existing variables (text, parts, payload) and ensure you
only treat the first token as a function id.
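The behavioural difference the fix produces, as a minimal sketch; the payload is kept as a raw string here for brevity, whereas the real handler parses it as JSON:

```rust
// Returns Some((function_id, raw_payload)) only when the FIRST
// whitespace-separated token contains "::"; free-form text yields None.
fn parse_shorthand(text: &str) -> Option<(String, String)> {
    let text = text.trim();
    let mut parts = text.splitn(2, char::is_whitespace);
    let first = parts.next()?;
    if first.contains("::") {
        Some((first.to_string(), parts.next().unwrap_or("").to_string()))
    } else {
        None // not a direct invocation; fall through to free-form handling
    }
}

fn main() {
    // Direct invocation: the first token is a function id.
    assert_eq!(
        parse_shorthand("orders::process {\"n\":1}"),
        Some(("orders::process".to_string(), "{\"n\":1}".to_string()))
    );
    // Free-form text mentioning "::" later is NOT an invocation.
    assert_eq!(parse_shorthand("please run orders::process"), None);
    // Bare function id with no payload.
    assert_eq!(parse_shorthand("orders::process").unwrap().1, "");
}
```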

Comment on lines +45 to +57
let lang = match require_arg(&args, "language") {
Ok(v) => v,
Err(e) => return e,
};
let fid = match require_arg(&args, "function_id") {
Ok(v) => v,
Err(e) => return e,
};
if lang == "python" {
format!("Register Python function `{fid}`:\n1. `iii_worker_register` with language='python'\n2. Code: `async def handler(input): ...`\n3. Wire trigger via `iii_trigger_register`\n\n```python\nfrom iii_sdk import register_worker, Logger\niii = register_worker('ws://localhost:49134')\niii.register_function('{fid}', handler)\n```")
} else {
format!("Register Node.js function `{fid}`:\n1. `iii_worker_register` with language='node'\n2. Code: `async (input) => {{ ... }}`\n3. Wire trigger via `iii_trigger_register`\n\n```js\nimport {{ registerWorker, Logger }} from 'iii-sdk'\nconst iii = registerWorker('ws://localhost:49134')\niii.registerFunction({{ id: '{fid}' }}, handler)\n```")
}

⚠️ Potential issue | 🟡 Minor

Reject unsupported language values instead of defaulting to Node.js.

Anything other than "python" falls through to the Node.js template today, so invalid inputs like "rust" generate a plausible but wrong prompt. Return an explicit validation message unless the value is "node" or "python".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mcp/src/prompts.rs` around lines 45 - 57, The current logic treats any
non-"python" value as Node.js; change the validation so only "python" or "node"
are accepted: after obtaining lang via require_arg(&args, "language") (and fid
via require_arg(&args, "function_id")), explicitly branch for lang == "python"
and lang == "node" and return an explicit error (e.g., Err(...) with a clear
validation message) for any other value instead of falling through to the
Node.js template; update the code paths that build the formatted prompt (the two
format! blocks) to remain unchanged for the allowed values.
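A sketch of the strict validation branch; the Result signature is illustrative (the worker's prompt helpers return the message string directly), and the prompt bodies are elided:

```rust
// Only "node" and "python" are accepted; anything else returns a
// validation message instead of silently falling back to the Node.js
// template.
fn registration_prompt(lang: &str, fid: &str) -> Result<String, String> {
    match lang {
        "python" => Ok(format!("Register Python function `{fid}`: ...")),
        "node" => Ok(format!("Register Node.js function `{fid}`: ...")),
        other => Err(format!(
            "Unsupported language '{other}': expected 'node' or 'python'"
        )),
    }
}

fn main() {
    assert!(registration_prompt("node", "orders::process").is_ok());
    assert!(registration_prompt("python", "orders::process").is_ok());
    // "rust" no longer produces a plausible-but-wrong Node.js prompt.
    assert!(registration_prompt("rust", "orders::process").is_err());
}
```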

Comment on lines +53 to +56
if lang == "python" {
format!("Register Python function `{fid}`:\n1. `iii_worker_register` with language='python'\n2. Code: `async def handler(input): ...`\n3. Wire trigger via `iii_trigger_register`\n\n```python\nfrom iii_sdk import register_worker, Logger\niii = register_worker('ws://localhost:49134')\niii.register_function('{fid}', handler)\n```")
} else {
format!("Register Node.js function `{fid}`:\n1. `iii_worker_register` with language='node'\n2. Code: `async (input) => {{ ... }}`\n3. Wire trigger via `iii_trigger_register`\n\n```js\nimport {{ registerWorker, Logger }} from 'iii-sdk'\nconst iii = registerWorker('ws://localhost:49134')\niii.registerFunction({{ id: '{fid}' }}, handler)\n```")

⚠️ Potential issue | 🟠 Major

The register-function prompt omits the metadata required for MCP exposure.

This worker only exposes functions tagged with mcp.expose: true by default, but both sample registrations here omit that step. Following this prompt will register a function that never appears in tools/list unless the server is started with --expose-all.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mcp/src/prompts.rs` around lines 53 - 56, The prompt examples for registering
a worker function (the Python `iii.register_function('{fid}', handler)` and the
Node.js `iii.registerFunction({ id: '{fid}' }, handler)`) omit the required MCP
exposure metadata; update both formatted samples in prompts.rs so the
registration call includes the metadata/opts that set mcp.expose true (e.g.,
pass a metadata/options object with "mcp.expose": true) so the function appears
in tools/list without --expose-all—look for the format! strings that build the
Python and Node.js snippets and add the metadata argument to the registration
invocation for fid.

Comment on lines +64 to +69
#### Usage

```bash
iii-a2a # A2A HTTP (POST /a2a + GET /.well-known/agent-card.json)
iii-a2a --expose-all # show all functions as skills
```

⚠️ Potential issue | 🟡 Minor

Document the public URL flag in the A2A usage section.

The PR adds a configurable agent-card URL, but this block only shows local/default invocations. Please document --base-url here and clarify whether it expects the public origin or the full /a2a endpoint; otherwise non-local deployments have no guidance for advertising a reachable interface.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 64 - 69, Update the Usage section to document the new
--base-url flag: explain that the flag configures the public origin used to
build the agent-card URL (not the full /a2a path), give a concrete example
invocation like "iii-a2a --base-url https://example.com" and note that the
agent-card will be served at <base-url>/a2a and the /.well-known/agent-card.json
will reference that origin so non-local deployments can advertise a reachable
interface; reference the CLI invocation shown (iii-a2a) and the --base-url flag
so readers can find the relevant behavior.

rohitg00 added 2 commits April 2, 2026 11:08
Generated Node.js and Python workers via iii_worker_register were
missing the mcp.expose metadata tag, causing registered functions to be
silently filtered from tools/list. Also fixes Python template passing
description as positional arg when it's keyword-only.
Covers web UI, CLI mode, and full end-to-end test setup with
config file, tool listing, tool calling, and metadata filtering.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@mcp/src/worker_manager.rs`:
- Around line 69-89: The code creates temp_dir but doesn't remove it on early
returns (unsupported language or write failures); update the function containing
temp_dir (use the temp_dir, params.language, generate_node_worker,
generate_python_worker, file_path identifiers) to always clean up the created
directory before returning an Err—either by wrapping temp_dir in a scope guard
that calls tokio::fs::remove_dir_all(&temp_dir).await on drop, or by explicitly
calling tokio::fs::remove_dir_all(&temp_dir).await in each error branch (the
unsupported-language branch and in the map_err for file write) so the "iii-*"
temp directory is removed on every early exit.
- Around line 66-67: The worker ID generation truncates Uuid::new_v4() to 8
chars (worker_id) which gives only 32 bits of entropy and can cause collisions
that replace an existing Child in the HashMap and accidentally drop/kill the
wrong process (kill_on_drop(true)). Fix by generating a collision-safe worker
identifier: either use the full UUID (or full hex/numeric representation) for
worker_id, or loop/retry generating Uuid::new_v4() until the produced id is not
present in the workers HashMap before inserting; update all occurrences where
worker_id is constructed (the Uuid::new_v4() usage at the noted site and the
same pattern around lines 128–131) so no truncated IDs are used.
- Around line 91-131: The child process's stderr is piped
(stderr(Stdio::piped()) in spawn_worker) but never read, which can block the
worker when the pipe fills; fix by draining or redirecting stderr: update
spawn_worker (the function that sets stderr(Stdio::piped())) to either set
stderr to inherit/null or return a Child with piped stderr and then, where the
child is stored (the variable child used when building SpawnedWorker and
inserting into self.workers), immediately take child.stderr
(child.stderr.take()) and spawn a background tokio task that continuously reads
and discards the stderr stream; ensure the drain task is started before
inserting the child into self.workers so the pipe is always consumed.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 59d72878-bd53-46ea-9a43-eaf8a2675505

📥 Commits

Reviewing files that changed from the base of the PR and between c350791 and c08eacb.

📒 Files selected for processing (1)
  • mcp/src/worker_manager.rs

Comment on lines +66 to +67
let uuid = Uuid::new_v4().to_string();
let worker_id = format!("worker-{}", &uuid[..8]);

⚠️ Potential issue | 🟠 Major

Use a collision-safe worker ID.

Truncating the UUID to 8 hex chars leaves only 32 bits of entropy. When that collides, HashMap::insert replaces the existing entry, and the dropped Child can kill the unrelated worker because kill_on_drop(true) is enabled.

Possible fix
-        let uuid = Uuid::new_v4().to_string();
-        let worker_id = format!("worker-{}", &uuid[..8]);
+        let worker_id = format!("worker-{}", Uuid::new_v4());

Also applies to: 128-131

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mcp/src/worker_manager.rs` around lines 66 - 67, The worker ID generation
truncates Uuid::new_v4() to 8 chars (worker_id) which gives only 32 bits of
entropy and can cause collisions that replace an existing Child in the HashMap
and accidentally drop/kill the wrong process (kill_on_drop(true)). Fix by
generating a collision-safe worker identifier: either use the full UUID (or full
hex/numeric representation) for worker_id, or loop/retry generating
Uuid::new_v4() until the produced id is not present in the workers HashMap
before inserting; update all occurrences where worker_id is constructed (the
Uuid::new_v4() usage at the noted site and the same pattern around lines
128–131) so no truncated IDs are used.
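If retry-until-unique is preferred over using the full UUID, the insertion loop can be sketched like this, with a toy gen_id closure standing in for Uuid::new_v4 so the collision path is observable:

```rust
use std::collections::HashMap;

// Retry-until-unique insertion: never replaces an existing entry, so no
// stored Child is dropped (and killed via kill_on_drop) by accident.
fn insert_unique<V>(
    workers: &mut HashMap<String, V>,
    mut gen_id: impl FnMut() -> String,
    value: V,
) -> String {
    loop {
        let id = gen_id();
        if !workers.contains_key(&id) {
            workers.insert(id.clone(), value);
            return id;
        }
        // collision: generate a fresh id and try again
    }
}

fn main() {
    let mut workers: HashMap<String, u32> = HashMap::new();
    workers.insert("worker-a".into(), 1);

    // Generator that collides once ("worker-a") before yielding "worker-b".
    let mut pool = vec!["worker-b".to_string(), "worker-a".to_string()];
    let id = insert_unique(&mut workers, || pool.pop().unwrap(), 2);

    assert_eq!(id, "worker-b");
    assert_eq!(workers["worker-a"], 1); // original entry untouched
}
```

Using the full UUID (the review's one-line fix) is simpler; the loop only matters if short, human-readable IDs are a hard requirement.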

Comment on lines +69 to +89
let temp_dir = std::env::temp_dir().join(format!("iii-{}", &worker_id));
tokio::fs::create_dir_all(&temp_dir)
.await
.map_err(|e| format!("Failed to create temp dir: {}", e))?;

let (file_name, code) = match params.language.as_str() {
"node" | "javascript" | "js" => {
let code = self.generate_node_worker(&params);
("index.mjs", code)
}
"python" | "py" => {
let code = self.generate_python_worker(&params);
("main.py", code)
}
_ => return Err(format!("Unsupported language: {}", params.language)),
};

let file_path = temp_dir.join(file_name);
tokio::fs::write(&file_path, &code)
.await
.map_err(|e| format!("Failed to write worker file: {}", e))?;

⚠️ Potential issue | 🟡 Minor

Clean up temp_dir on every early return.

temp_dir is created before language validation and before the file-write failure path. Unsupported language values and write errors currently leave iii-* directories behind under the system temp dir, and mcp/src/handler.rs passes request language values straight into this method.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mcp/src/worker_manager.rs` around lines 69 - 89, The code creates temp_dir
but doesn't remove it on early returns (unsupported language or write failures);
update the function containing temp_dir (use the temp_dir, params.language,
generate_node_worker, generate_python_worker, file_path identifiers) to always
clean up the created directory before returning an Err—either by wrapping
temp_dir in a scope guard that calls tokio::fs::remove_dir_all(&temp_dir).await
on drop, or by explicitly calling tokio::fs::remove_dir_all(&temp_dir).await in
each error branch (the unsupported-language branch and in the map_err for file
write) so the "iii-*" temp directory is removed on every early exit.
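One way to guarantee the cleanup is a small Drop guard. This sketch uses synchronous std::fs for brevity; the worker's async code would need an async-aware variant, for example spawning the tokio::fs::remove_dir_all from Drop:

```rust
use std::fs;
use std::path::PathBuf;

// Removes the directory on drop unless `disarm` is called after a
// successful spawn, so every early `return Err(...)` cleans up.
struct TempDirGuard {
    path: PathBuf,
    armed: bool,
}

impl TempDirGuard {
    fn new(path: PathBuf) -> std::io::Result<Self> {
        fs::create_dir_all(&path)?;
        Ok(Self { path, armed: true })
    }
    fn disarm(&mut self) {
        self.armed = false; // spawn succeeded: keep the directory
    }
}

impl Drop for TempDirGuard {
    fn drop(&mut self) {
        if self.armed {
            let _ = fs::remove_dir_all(&self.path);
        }
    }
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("iii-guard-demo");

    // Error path: guard dropped while armed, directory removed.
    {
        let _guard = TempDirGuard::new(path.clone())?;
        // a `return Err(...)` here would still clean up
    }
    assert!(!path.exists());

    // Success path: disarm keeps the directory for the running worker.
    let mut guard = TempDirGuard::new(path.clone())?;
    guard.disarm();
    drop(guard);
    assert!(path.exists());
    fs::remove_dir_all(&path)?; // tidy up the demo
    Ok(())
}
```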

Comment on lines +91 to +131
let mut child = match self
.spawn_worker(&params.language, &temp_dir, file_name)
.await
{
Ok(c) => c,
Err(e) => {
let _ = tokio::fs::remove_dir_all(&temp_dir).await;
return Err(e);
}
};

tokio::time::sleep(std::time::Duration::from_millis(200)).await;
match child.try_wait() {
Ok(Some(status)) => {
let _ = tokio::fs::remove_dir_all(&temp_dir).await;
return Err(format!(
"Worker exited immediately with status: {}",
status
));
}
Err(e) => {
let _ = tokio::fs::remove_dir_all(&temp_dir).await;
return Err(format!("Failed to check worker status: {}", e));
}
Ok(None) => {}
}

let pid = child.id().unwrap_or(0);

let spawned = SpawnedWorker {
id: worker_id.clone(),
language: params.language.clone(),
function_name: params.function_name.clone(),
temp_dir: temp_dir.to_string_lossy().to_string(),
pid,
};

self.workers
.lock()
.await
.insert(worker_id.clone(), (spawned, child));

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain: verification scripts against mcp/src/worker_manager.rs confirmed that spawn_worker sets stderr(Stdio::piped()) and that the file contains no corresponding stderr read, drain task, or tokio::spawn call.

Drain piped stderr to prevent worker processes from blocking.

spawn_worker sets stderr(Stdio::piped()) but the spawned child's stderr is never read. Once a worker writes enough to fill the pipe buffer, further stderr writes block and the process appears hung. Either inherit stderr, redirect it to null, or start a background task that drains the stream.

Affects both the storage point (lines 91-131) and the spawn point (lines 263-269).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mcp/src/worker_manager.rs` around lines 91 - 131, The child process's stderr
is piped (stderr(Stdio::piped()) in spawn_worker) but never read, which can
block the worker when the pipe fills; fix by draining or redirecting stderr:
update spawn_worker (the function that sets stderr(Stdio::piped())) to either
set stderr to inherit/null or return a Child with piped stderr and then, where
the child is stored (the variable child used when building SpawnedWorker and
inserting into self.workers), immediately take child.stderr
(child.stderr.take()) and spawn a background tokio task that continuously reads
and discards the stderr stream; ensure the drain task is started before
inserting the child into self.workers so the pipe is always consumed.
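The drain pattern, sketched with std::process and a thread to keep the example portable; the worker's tokio code would instead take child.stderr and read it in a tokio::spawn task with AsyncReadExt. A Unix shell is assumed for the noisy child process:

```rust
use std::io::Read;
use std::process::{Command, Stdio};
use std::thread;

// Spawns a child that writes 200 KB to stderr, more than a typical 64 KB
// pipe buffer, and drains the pipe in a background thread so the child
// never blocks mid-write. Returns (exit_ok, bytes_drained).
fn spawn_and_drain() -> (bool, usize) {
    let mut child = Command::new("sh")
        .args(["-c", "yes error-line | head -c 200000 1>&2"])
        .stderr(Stdio::piped())
        .spawn()
        .expect("spawn failed");

    // Take stderr BEFORE waiting, and consume it concurrently.
    let mut stderr = child.stderr.take().expect("stderr was piped");
    let drain = thread::spawn(move || {
        let mut sink = Vec::new();
        let _ = stderr.read_to_end(&mut sink);
        sink.len()
    });

    let status = child.wait().expect("wait failed");
    (status.success(), drain.join().expect("drain thread panicked"))
}

fn main() {
    let (ok, drained) = spawn_and_drain();
    assert!(ok);
    assert_eq!(drained, 200_000);
}
```

Without the drain thread, the same child would stall once the pipe buffer fills, which is exactly the hang the review describes.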

Comment on lines +147 to +157
pub async fn stop_worker(&self, params: WorkerStopParams) -> Result<WorkerStopResult, String> {
let mut workers = self.workers.lock().await;

if let Some((info, mut child)) = workers.remove(&params.id) {
if let Err(e) = child.kill().await {
tracing::warn!(worker_id = %params.id, error = %e, "Failed to kill worker process");
}

if let Err(e) = tokio::fs::remove_dir_all(&info.temp_dir).await {
tracing::warn!(worker_id = %params.id, error = %e, "Failed to remove temp dir");
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain: verification scripts against mcp/src/worker_manager.rs confirmed that stop_worker holds the self.workers mutex across child.kill().await and tokio::fs::remove_dir_all().await, and that create_worker contends for the same lock.

Release the mutex before awaiting cleanup operations.

stop_worker acquires self.workers and holds it through child.kill().await and tokio::fs::remove_dir_all().await. Since the worker entry is removed from the map immediately upon lock acquisition, the cleanup operations do not require the mutex. Holding the lock across these I/O operations blocks concurrent create_worker and stop_worker requests from acquiring self.workers.

Suggested fix
-        let mut workers = self.workers.lock().await;
-
-        if let Some((info, mut child)) = workers.remove(&params.id) {
+        let worker = {
+            let mut workers = self.workers.lock().await;
+            workers.remove(&params.id)
+        };
+
+        if let Some((info, mut child)) = worker {
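The narrowed critical section, sketched with std::sync::Mutex in place of tokio::sync::Mutex; the map value is a plain String standing in for the (info, child) tuple:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

struct Manager {
    workers: Mutex<HashMap<String, String>>, // value stands in for (info, child)
}

impl Manager {
    fn stop_worker(&self, id: &str) -> bool {
        // Narrow critical section: just the map mutation.
        let entry = {
            let mut workers = self.workers.lock().unwrap();
            workers.remove(id)
        }; // lock released here

        match entry {
            Some(_worker) => {
                // Slow cleanup (kill + remove_dir_all in the real code)
                // runs here without holding the map lock.
                true
            }
            None => false,
        }
    }
}

fn main() {
    let mgr = Manager { workers: Mutex::new(HashMap::new()) };
    mgr.workers.lock().unwrap().insert("worker-1".into(), "child".into());

    assert!(mgr.stop_worker("worker-1"));
    assert!(!mgr.stop_worker("worker-1")); // already removed
    assert!(mgr.workers.lock().unwrap().is_empty());
}
```

The same block-scoping works with tokio::sync::Mutex because the guard drops at the end of the inner block, before any .await in the cleanup path.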


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
README.md (1)

145-150: ⚠️ Potential issue | 🟡 Minor

Please add --base-url usage guidance for A2A deployments.

Lines 145-150 still only document local/default invocations. For non-local deployments, document iii-a2a --base-url <public-origin> and clarify that it expects the public origin (not a full /a2a path), so agent-card discovery advertises a reachable endpoint.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 145 - 150, Update the Usage docs to include guidance
for non-local A2A deployments by adding an example and note for the --base-url
flag: show the CLI invocation "iii-a2a --base-url <public-origin>" and clarify
that the argument should be the public origin (e.g., https://example.com), not a
full /a2a path, so the agent-card discovery advertises a reachable endpoint;
reference the existing local examples (iii-a2a and iii-a2a --expose-all) and
append the new line explaining expected input and its effect on agent-card
discovery.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 47ad463a-2aa7-470c-98fe-4cc1d93220f7

📥 Commits

Reviewing files that changed from the base of the PR and between c08eacb and bcdc5a1.

📒 Files selected for processing (1)
  • README.md

rohitg00 added 4 commits April 2, 2026 11:38
dispatch_http was skipping builtin tool routing (iii_trigger_void,
iii_trigger_enqueue, etc.) and directly triggering the tool name as
a function ID, which always failed with function_not_found.

Now routes iii_trigger_void and iii_trigger_enqueue correctly, and
returns a clear error for stdio-only tools (worker/trigger management).
- Use iii-sdk from git main (published 0.10.0 is missing
  RegisterTriggerInput.metadata field from #1378)
- Fix api_path: "mcp" not "/mcp" (engine prepends / in route
  registration, double-slash caused 404)
- Add metadata: None to all RegisterTriggerInput constructors

Tested against source-built engine (f5ac6934) — all transports working.