
Fix: Improve reporting of user metrics to Amplitude #1390

Merged
anthonyiscoding merged 21 commits into main from fix/amplitude-telemetry-ids on Apr 2, 2026
Conversation

anthonyiscoding (Contributor) commented Apr 1, 2026

Summary by CodeRabbit

  • New Features

    • Added a top-level "cloud" CLI command.
    • CLI includes a hidden install mode to emit install lifecycle events.
  • Bug Fixes

    • Default framework names now applied for Node, Python, and Rust worker telemetry.
  • Chores

    • Telemetry persistence migrated to a YAML-backed schema with more stable device IDs.
    • Installer now delegates telemetry emission to the installed engine binary.
    • Container/Docker compose now sets an explicit execution context; host UID injection removed.

Impacts: iii-hq/cli-tooling#12

ytallo and others added 9 commits March 28, 2026 17:54
- Migrate device identity and execution context to ~/.iii/telemetry.yaml
  with schema version 2. Legacy telemetry.toml is ignored (not migrated).
- Use machineid-rs HMAC-SHA256 output directly as device_id without
  redundant second SHA256 pass.
- Route install/upgrade lifecycle telemetry through the iii binary via
  --install-only-generate-ids, --install-event-type, and
  --install-event-properties flags.
- Differentiate execution contexts: kubernetes, docker, container, ci,
  user. Unknown values normalize to "unknown".
- Set III_EXECUTION_CONTEXT=docker in Dockerfile and docker-compose files.
- Delay all telemetry (first_run, boot heartbeat) to 2 minutes after
  engine start to allow workers/functions to register and to filter out
  noise from ephemeral containers.
- Record first_run_sent in telemetry.yaml on first actual engine run
  (not during install-only ID generation).
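The execution-context differentiation described in the commit above (kubernetes, docker, container, ci, user, with unknown values normalized to "unknown") can be sketched roughly as below. The function and variable names here are our own stand-ins, not the PR's actual code:

```rust
/// Normalize a raw context string to one of the known values; anything
/// else collapses to "unknown", as the commit message describes.
fn normalize_execution_context(raw: &str) -> &'static str {
    match raw.trim().to_ascii_lowercase().as_str() {
        "kubernetes" => "kubernetes",
        "docker" => "docker",
        "container" => "container",
        "ci" => "ci",
        "user" => "user",
        _ => "unknown",
    }
}

/// Resolve the context: an explicit III_EXECUTION_CONTEXT env var wins
/// (this is what the Dockerfile/compose changes set), a CI marker is a
/// fallback, and "user" is the default. The precedence order is an
/// assumption on our part.
fn resolve_execution_context() -> &'static str {
    if let Ok(v) = std::env::var("III_EXECUTION_CONTEXT") {
        if !v.trim().is_empty() {
            return normalize_execution_context(&v);
        }
    }
    if std::env::var("CI").ok().as_deref() == Some("true") {
        return "ci";
    }
    "user"
}
```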
…t_background_tasks

Extract EngineSnapshot, build_base_properties, and DeltaAccumulator/DeltaSnapshot
helpers to eliminate repeated property maps and delta counter boilerplate across
the boot heartbeat, periodic heartbeat, and engine_stopped events.
…ss engine and SDKs

Replace hardcoded motia/non-motia worker counts with a dynamic
worker_count_by_framework map. Rename iii-js to iii-node and default
the framework field in all SDKs so Amplitude events always carry a
framework identifier. Remove unused preferences/dev_optout config,
add platform tag to install-script events, and track to_version on
upgrades.
Remove redundant `or "iii-py"` fallback for the framework telemetry
field — the ternary already handles the None case. Black formatting
applied across the file (line wrapping, multi-line imports).
Install lifecycle events are sent only when the downloaded iii binary
supports --install-only-generate-ids. Older releases skip telemetry
silently instead of emitting low-value curl-based events.

Include to_version in upgrade_failed payloads (same release target as
upgrade_started).
@anthonyiscoding anthonyiscoding requested a review from ytallo April 1, 2026 22:33
vercel bot (Contributor) commented Apr 1, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

  • iii-website: Ready (Preview, Comment), updated Apr 2, 2026 6:03pm UTC
  • motia-docs: Ready (Preview, Comment), updated Apr 2, 2026 6:03pm UTC


coderabbitai bot (Contributor) commented Apr 1, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Telemetry identity and lifecycle emission were moved into the installed binary; telemetry persistence migrated from TOML→YAML v2 using machineid-rs; execution-context resolution was added; install script and CLI received install-event wiring; iii-cloud registry entry and SDK telemetry framework defaults were introduced.

Changes

Build & infra
engine/Cargo.toml, engine/Dockerfile, engine/docker-compose.yml, engine/docker-compose.prod.yml
Added machineid-rs dependency; renamed env var III_CONTAINER → III_EXECUTION_CONTEXT; removed host-user-id injection and set III_EXECUTION_CONTEXT=docker in compose.

Install script
engine/install.sh
Removed embedded Amplitude plumbing; added iii_emit_event() to call the installed binary with --install-only-generate-ids / event flags; changed event timing, payload shapes, release_version derivation, and error handling.

Telemetry environment & core
engine/src/modules/telemetry/environment.rs, engine/src/modules/telemetry/mod.rs, engine/src/cli/telemetry.rs
Persistence migrated to telemetry.yaml v2 with migration; added get_or_create_device_id() (machineid-rs + UUID fallback); added resolve_execution_context() and iii_execution_context; removed local host/container id plumbing; refactored event building, heartbeat timing, worker framework counts, and the install lifecycle API.

CLI & Registry
engine/src/cli/registry.rs, engine/src/main.rs
Added an iii-cloud BinarySpec mapping cloud → iii-cloud; added hidden flags --install-only-generate-ids, --install-event-type, and --install-event-properties, plus an early-exit branch that generates IDs and optionally emits install events; tests updated.

SDKs — worker telemetry defaults & formatting
sdk/packages/node/iii/src/iii.ts, sdk/packages/python/iii/src/iii/iii.py, sdk/packages/rust/iii/src/iii.rs
Ensure worker telemetry framework defaults to trimmed non-empty values (iii-node, iii-py, iii-rust); Python changes are mostly formatting; Rust sets the default framework before serialization.
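The trimmed non-empty default described in the SDK row above can be sketched as a small helper. The function name and signature are our own illustration, not the SDKs' actual code:

```rust
/// Fall back to the SDK's default framework name when the caller-supplied
/// value is missing, empty, or whitespace-only; otherwise use the trimmed
/// caller value. This mirrors the behavior described for iii-node,
/// iii-py, and iii-rust.
fn effective_framework(framework: Option<&str>, default: &str) -> String {
    match framework.map(str::trim) {
        Some(f) if !f.is_empty() => f.to_string(),
        _ => default.to_string(),
    }
}
```

This is the pattern the review asked for in place of `??` in the Node SDK, since `??` only catches null/undefined and lets empty strings through.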

Sequence Diagram(s)

sequenceDiagram
    participant InstallScript as Install Script
    participant Binary as Installed Binary
    participant Telemetry as Engine Telemetry Module
    participant Config as telemetry.yaml (FS)

    InstallScript->>Binary: invoke --install-only-generate-ids (optional event args)
    Binary->>Telemetry: get_or_create_device_id()
    Telemetry->>Config: read/create/migrate telemetry.yaml (v2)
    Config-->>Telemetry: return device_id / iii_execution_context
    Telemetry-->>Binary: device_id + iii_execution_context
    InstallScript->>Binary: iii_emit_event(event_type, properties)
    Binary->>Telemetry: send_install_lifecycle_event(platform="install-script", props)
    Telemetry->>Telemetry: build payload and send (async)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


Suggested reviewers

  • guibeira
  • sergiofilhowz

Poem

🐰 I nibble YAML bytes and tap the binary door,

device IDs hum where host hashes lived before.
I hop an event to the tool that knows to send,
iii-cloud peeks out as defaults snugly mend.
Telemetry hops on—new context, new friend.

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

Title check ⚠️ Warning: The title mentions improving Amplitude metrics reporting, but the changeset makes extensive structural changes to telemetry identity generation (device ID, execution context) and install-script event emission, going well beyond metric-reporting improvements. Resolution: revise the title to reflect the core change, e.g. "refactor: Centralize device identity and execution context generation" or "refactor: Update telemetry to use YAML-backed device IDs and execution context".

Docstring Coverage ⚠️ Warning: Docstring coverage is 74.19%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (1 passed)

Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
engine/src/modules/telemetry/environment.rs (1)

331-338: ⚠️ Potential issue | 🟠 Major

Restore the broader opt-out env vars here.

Both the CLI and engine telemetry paths now delegate to is_dev_optout(), but this implementation only honors III_TELEMETRY_DEV and the sentinel file. If a user sets III_TELEMETRY_OPTOUT=true or DO_NOT_TRACK=1, telemetry stays enabled. That's a privacy regression.

Possible fix
 pub fn is_dev_optout() -> bool {
     if std::env::var("III_TELEMETRY_DEV").ok().as_deref() == Some("true") {
         return true;
     }
+    if std::env::var("III_TELEMETRY_OPTOUT").ok().as_deref() == Some("true") {
+        return true;
+    }
+    if std::env::var("DO_NOT_TRACK").ok().as_deref() == Some("1") {
+        return true;
+    }
 
     let base_dir = dirs::home_dir().unwrap_or_else(std::env::temp_dir);
     base_dir.join(".iii").join("telemetry_dev_optout").exists()
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@engine/src/modules/telemetry/environment.rs` around lines 331 - 338, Update
is_dev_optout() to honor the broader opt-out env vars in addition to
III_TELEMETRY_DEV and the sentinel file: check for III_TELEMETRY_OPTOUT being
set to "true" (or truthy values like "1") and DO_NOT_TRACK being set to "1" (or
other common truthy values) before returning false, and return true if any of
those env vars indicate opt-out; keep the existing sentinel-file check
(base_dir.join(".iii").join("telemetry_dev_optout").exists()) and the current
III_TELEMETRY_DEV check in the same function so all four signals
(III_TELEMETRY_DEV, III_TELEMETRY_OPTOUT, DO_NOT_TRACK, and the sentinel file)
are considered.
engine/src/modules/telemetry/mod.rs (1)

684-724: ⚠️ Potential issue | 🟠 Major

Send first_run before the 120-second warm-up.

Because check_and_mark_first_run() now runs after the 2-minute sleep, any engine session shorter than 120 seconds exits without ever sending first_run or the boot heartbeat. That's common in local/dev flows and will undercount new engines.

Possible fix
         tokio::spawn(async move {
-            tokio::time::sleep(std::time::Duration::from_secs(120)).await;
-
-            let snap = collect_engine_snapshot(&engine_for_started);
+            let initial_snap = collect_engine_snapshot(&engine_for_started);
 
             if check_and_mark_first_run() {
                 let first_run_event = ctx_for_started.build_event(
                     "first_run",
                     serde_json::json!({
@@
                         "os": std::env::consts::OS,
                         "arch": std::env::consts::ARCH,
                         "install_method": environment::detect_install_method(),
                     }),
-                    snap.wd.sdk_telemetry.as_ref(),
+                    initial_snap.wd.sdk_telemetry.as_ref(),
                 );
                 let _ = client_for_started.send_event(first_run_event).await;
             }
+
+            tokio::time::sleep(std::time::Duration::from_secs(120)).await;
+            let snap = collect_engine_snapshot(&engine_for_started);
 
             let mut props = build_base_properties(&snap);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@engine/src/modules/telemetry/mod.rs` around lines 684 - 724, The current
tokio::spawn block delays calling check_and_mark_first_run() (and sending the
first_run event) until after a 120s sleep, causing short-lived sessions to miss
first_run and the boot heartbeat; move the call to check_and_mark_first_run()
and the construction/sending of the first_run event (using
ctx_for_started.build_event and client_for_started.send_event) to occur
immediately after collecting the snapshot
(collect_engine_snapshot(&engine_for_started)) and before the
tokio::time::sleep, and likewise send the boot_heartbeat (built by
ctx_for_started.build_event into boot_heartbeat and sent via
client_for_started.send_event) before the sleep so both events are emitted for
short sessions while keeping the later delayed work intact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@engine/install.sh`:
- Around line 15-23: The current payloads interpolate shell variables into JSON
(using tr on quotes) which fails for backslashes, newlines, tabs, etc.; replace
this with proper JSON encoding: escape each dynamic field
(error_message/_err_msg, _stage, from_version, release_version, BIN_NAME) using
a JSON-serializer (e.g., jq -Rs or a small python -c json.dumps) to build a
valid JSON object string, store that escaped JSON in a single variable (e.g.,
payload) and pass payload to iii_emit_event for both the "upgrade_failed" and
"install_failed" branches (and the other occurrences noted around lines 293-318)
so the emitter receives valid JSON.

In `@engine/src/cli/telemetry.rs`:
- Around line 48-59: build_user_properties currently derives install_method from
detect_install_method/current_exe which gets set to "manual" for install-script
lifecycle events; change build_user_properties to accept an optional override
(e.g., install_method_override: Option<&str> or Option<String>) and when
constructing the JSON set "install_method" to that override if Some(...) else
environment::detect_install_method(); then update build_event to pass
event_properties.install_method (or Some("sh")) through to build_user_properties
so user_properties.install_method matches event_properties.install_method for
install-script events; touch functions build_user_properties and build_event and
the field name "install_method" to implement this override.

In `@sdk/packages/node/iii/src/iii.ts`:
- Line 556: The telemetry "framework" assignment currently uses
"telemetryOpts?.framework ?? 'iii-node'" which still lets empty strings through;
update the assignment that builds the telemetry object in iii.ts to treat
undefined, null, empty, or whitespace-only telemetryOpts.framework as missing
and fall back to 'iii-node' (e.g., trim and use a truthy check), referencing the
telemetryOpts object and its framework property so engine aggregation always
receives a non-blank "framework" value.

In `@sdk/packages/python/iii/src/iii/iii.py`:
- Around line 1288-1291: The "framework" value currently uses
"telemetry_opts.framework if telemetry_opts else 'iii-py'", which lets a present
telemetry_opts with a falsy framework emit an empty/None value; change the
expression so it falls back to "iii-py" when telemetry_opts.framework is falsy
(e.g., use the telemetry_opts.framework or "iii-py" pattern) so the "framework"
dict key always defaults to "iii-py" unless a non-empty value is provided;
update the code where the dict is constructed (the "framework" entry in the
telemetry options block referencing telemetry_opts) accordingly.

---

Outside diff comments:
In `@engine/src/modules/telemetry/environment.rs`:
- Around line 331-338: Update is_dev_optout() to honor the broader opt-out env
vars in addition to III_TELEMETRY_DEV and the sentinel file: check for
III_TELEMETRY_OPTOUT being set to "true" (or truthy values like "1") and
DO_NOT_TRACK being set to "1" (or other common truthy values) before returning
false, and return true if any of those env vars indicate opt-out; keep the
existing sentinel-file check
(base_dir.join(".iii").join("telemetry_dev_optout").exists()) and the current
III_TELEMETRY_DEV check in the same function so all four signals
(III_TELEMETRY_DEV, III_TELEMETRY_OPTOUT, DO_NOT_TRACK, and the sentinel file)
are considered.

In `@engine/src/modules/telemetry/mod.rs`:
- Around line 684-724: The current tokio::spawn block delays calling
check_and_mark_first_run() (and sending the first_run event) until after a 120s
sleep, causing short-lived sessions to miss first_run and the boot heartbeat;
move the call to check_and_mark_first_run() and the construction/sending of the
first_run event (using ctx_for_started.build_event and
client_for_started.send_event) to occur immediately after collecting the
snapshot (collect_engine_snapshot(&engine_for_started)) and before the
tokio::time::sleep, and likewise send the boot_heartbeat (built by
ctx_for_started.build_event into boot_heartbeat and sent via
client_for_started.send_event) before the sleep so both events are emitted for
short sessions while keeping the later delayed work intact.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: a3b8b082-2f43-4e9b-9ce6-ec6f97fa3ac2

📥 Commits

Reviewing files that changed from the base of the PR and between f5ac693 and a247fe2.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (13)
  • engine/Cargo.toml
  • engine/Dockerfile
  • engine/docker-compose.prod.yml
  • engine/docker-compose.yml
  • engine/install.sh
  • engine/src/cli/registry.rs
  • engine/src/cli/telemetry.rs
  • engine/src/main.rs
  • engine/src/modules/telemetry/environment.rs
  • engine/src/modules/telemetry/mod.rs
  • sdk/packages/node/iii/src/iii.ts
  • sdk/packages/python/iii/src/iii/iii.py
  • sdk/packages/rust/iii/src/iii.rs

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
engine/install.sh (1)

15-23: ⚠️ Potential issue | 🟠 Major

These event payloads still need real JSON encoding.

Line 16 only normalizes the `"` character in one field, and Lines 19, 22, 339, 342, 359, and 362 still interpolate shell variables directly into JSON. engine/src/main.rs parses --install-event-properties with serde_json::from_str, so a backslash, newline, tab, or quote in error_message, from_version, release_version, installed_version, or BIN_NAME will make the payload invalid and the event will be dropped. Please build these objects with a serializer/helper instead of manual string concatenation.

Also applies to: 338-343, 358-362

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@engine/install.sh` around lines 15 - 23, The event payloads are built by
interpolating shell variables (e.g., error_message/_err_msg, from_version,
release_version, installed_version, BIN_NAME, _stage) directly into JSON strings
before calling iii_emit_event; replace that fragile concatenation with a proper
JSON-encoding helper and pass its output to iii_emit_event. Add a small function
(e.g., encode_json or json_props) that takes key/value pairs and returns a safe
JSON string by calling a reliable serializer (jq -n --arg ... '{...}' or python
-c 'import json,sys; print(json.dumps({...}))'), then use that helper when
constructing both the "upgrade_failed" and "install_failed" payloads (and the
other occurrences around the same block) so all variables are JSON-escaped
instead of being interpolated raw. Ensure iii_emit_event is called with the
helper's output as the --install-event-properties value.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@engine/install.sh`:
- Around line 336-343: Initialize telemetry_emitter to the existing binary
(bin_dir/$BIN_NAME) before the download/extract steps and then reset
telemetry_emitter to "$bin_file" after successful extraction so failures before
the new binary exists still emit telemetry; wrap all failure-prone calls (curl,
tar, unzip, the install/cp/chmod sequence) in a small executor that traps errors
and calls err() (or add a trap 'ERR' handler) so set -e won't bypass
err()—update references to telemetry_emitter, iii_emit_event, err(), and the
download/install command blocks to use this executor and to reassign
telemetry_emitter after extraction.

---

Duplicate comments:
In `@engine/install.sh`:
- Around line 15-23: The event payloads are built by interpolating shell
variables (e.g., error_message/_err_msg, from_version, release_version,
installed_version, BIN_NAME, _stage) directly into JSON strings before calling
iii_emit_event; replace that fragile concatenation with a proper JSON-encoding
helper and pass its output to iii_emit_event. Add a small function (e.g.,
encode_json or json_props) that takes key/value pairs and returns a safe JSON
string by calling a reliable serializer (jq -n --arg ... '{...}' or python -c
'import json,sys; print(json.dumps({...}))'), then use that helper when
constructing both the "upgrade_failed" and "install_failed" payloads (and the
other occurrences around the same block) so all variables are JSON-escaped
instead of being interpolated raw. Ensure iii_emit_event is called with the
helper's output as the --install-event-properties value.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 64e80dd2-5e7c-4d61-89f5-1b2288b130aa

📥 Commits

Reviewing files that changed from the base of the PR and between a247fe2 and ad6770e.

📒 Files selected for processing (7)
  • engine/Cargo.toml
  • engine/install.sh
  • engine/src/main.rs
  • engine/src/modules/telemetry/mod.rs
  • sdk/packages/node/iii/src/iii.ts
  • sdk/packages/python/iii/src/iii/iii.py
  • sdk/packages/rust/iii/src/iii.rs
✅ Files skipped from review due to trivial changes (2)
  • engine/Cargo.toml
  • sdk/packages/python/iii/src/iii/iii.py
🚧 Files skipped from review as they are similar to previous changes (3)
  • sdk/packages/node/iii/src/iii.ts
  • engine/src/main.rs
  • engine/src/modules/telemetry/mod.rs

Address CodeRabbit findings: the Node SDK used `??` which only catches
null/undefined, and the Python SDK used a conditional that passed through
falsy framework values. Both now fall back to their defaults when
framework is empty or whitespace-only.
…and install_method consistency

Address CodeRabbit findings across the install lifecycle telemetry pipeline:

- install.sh: replace tr-based quote escaping with a json_str() helper
  (jq -Rs with awk fallback) for all dynamic JSON fields
- install.sh: seed telemetry_emitter from the existing binary on upgrades
  so failures before extraction can still emit telemetry
- install.sh: add || err guards on curl, tar, unzip, and install/cp so
  set -e doesn't bypass err() and telemetry emission
- cli/telemetry.rs: add install_method_override param to
  build_user_properties/build_event so install-script lifecycle events
  carry consistent install_method in both event and user properties
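The json_str() helper described in this commit can be sketched as below. This assumes jq is available (the commit mentions an awk fallback for systems without it), and the payload field names are illustrative:

```shell
#!/bin/sh
# json_str: emit its argument as a fully escaped JSON string literal.
# jq -R reads raw input and -s slurps it into a single string, so quotes,
# backslashes, newlines, and tabs all survive encoding.
json_str() {
  printf '%s' "$1" | jq -Rs .
}

# Build a payload the way the install script's failure branches would,
# with a deliberately hostile error message.
err_msg='tar failed: unexpected "EOF" in \archive'
payload="{\"event_type\":\"install_failed\",\"error_message\":$(json_str "$err_msg")}"

# The result round-trips through a JSON parser.
printf '%s' "$payload" | jq -e .error_message >/dev/null && echo "valid JSON"
```

Because the engine parses --install-event-properties with serde_json::from_str, emitting anything that is not strict JSON silently drops the event, which is why the raw interpolation had to go.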
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@engine/install.sh`:
- Around line 385-396: The script prints success and calls iii_emit_event
("install_succeeded"/"upgrade_succeeded") even when the installed binary is
missing or unusable because installed_version can be empty; update the logic
after computing installed_version to actually validate the installed binary runs
(e.g., invoke "$bin_dir/$BIN_NAME" --version or a lightweight health check and
confirm non-empty output/exit code), only then emit iii_emit_event and print the
success message; if the check fails, emit a failure event (or avoid emitting
*_succeeded) and print/exit with an error. Use the existing symbols
installed_version, BIN_NAME, iii_emit_event and install_event_prefix to locate
and change the relevant block.

In `@engine/src/cli/telemetry.rs`:
- Around line 76-82: The device-id creation is currently racy; modify
environment::get_or_create_device_id() to serialize the read/check/write of the
telemetry YAML so concurrent processes cannot mint different IDs. Concretely:
add a cross-process lock (e.g., advisory file lock or similar) around the
existing telemetry YAML read -> if missing generate UUID -> write -> release
sequence inside get_or_create_device_id(), re-reading the file after acquiring
the lock to avoid TOCTOU. Ensure build_event() (which calls
get_or_create_device_id()) and other callers need no changes because the
serialization is contained inside get_or_create_device_id().
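One way to close the TOCTOU window described above, sketched with the standard library only: instead of the advisory file lock the review suggests, this uses atomic create_new (O_EXCL semantics) so exactly one process mints the ID and losers re-read the winner's value. The function shape, path handling, and the ID placeholder are our assumptions, not the PR's code:

```rust
use std::fs;
use std::io::{ErrorKind, Write};
use std::path::Path;

/// Race-safe get-or-create: the first reader returns the existing ID;
/// otherwise create_new guarantees only one writer succeeds, and the
/// AlreadyExists loser adopts the winner's ID instead of minting its own.
fn get_or_create_device_id(path: &Path) -> std::io::Result<String> {
    if let Ok(existing) = fs::read_to_string(path) {
        if !existing.trim().is_empty() {
            return Ok(existing.trim().to_string());
        }
    }
    // Stand-in for a real UUID / machineid-rs derivation.
    let candidate = format!("device-{:x}", std::process::id());
    match fs::OpenOptions::new().write(true).create_new(true).open(path) {
        Ok(mut f) => {
            f.write_all(candidate.as_bytes())?;
            Ok(candidate)
        }
        // Another process created the file between our read and write.
        Err(e) if e.kind() == ErrorKind::AlreadyExists => {
            Ok(fs::read_to_string(path)?.trim().to_string())
        }
        Err(e) => Err(e),
    }
}
```

Note this only serializes creation of the ID itself; a full telemetry.yaml read-modify-write would still want the file lock the review describes.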

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 5b945ac7-2712-410b-a2cb-feb72c95c4ca

📥 Commits

Reviewing files that changed from the base of the PR and between ad6770e and 2f87517.

📒 Files selected for processing (4)
  • engine/install.sh
  • engine/src/cli/telemetry.rs
  • sdk/packages/node/iii/src/iii.ts
  • sdk/packages/python/iii/src/iii/iii.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • sdk/packages/python/iii/src/iii/iii.py

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@engine/src/main.rs`:
- Around line 197-210: The calls to
iii::modules::telemetry::environment::get_or_create_device_id() and
resolve_execution_context() run even when telemetry is disabled, causing
telemetry.yaml and a device id to be created; wrap those initializers in the
same telemetry opt-out check used by build_event()/is_telemetry_disabled() or
move them into send_install_lifecycle_event() so initialization only occurs when
telemetry is enabled. Concretely, check the telemetry disabled flag (via
is_telemetry_disabled() or the same helper used by build_event()) before calling
get_or_create_device_id() and resolve_execution_context(), or remove those calls
here and ensure send_install_lifecycle_event() invokes them when needed.
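The gating suggested above can be sketched as a pure function so the opt-out decision is testable without touching real env vars. The names and the tuple returned are stand-ins for get_or_create_device_id() and resolve_execution_context(), not the PR's code:

```rust
/// Opt-out decision from the two env-var values the review discusses;
/// callers pass std::env::var(...).ok().as_deref() results in.
fn is_telemetry_disabled(optout: Option<&str>, do_not_track: Option<&str>) -> bool {
    optout.map(str::trim) == Some("true") || do_not_track.map(str::trim) == Some("1")
}

/// Only mint IDs when telemetry is enabled, so telemetry.yaml is never
/// created on disk for opted-out users.
fn init_install_telemetry(optout: Option<&str>, dnt: Option<&str>) -> Option<(String, String)> {
    if is_telemetry_disabled(optout, dnt) {
        return None; // no ID minting, no file writes
    }
    // Stand-ins for the real device-id and execution-context calls.
    Some(("device-id".to_string(), "user".to_string()))
}
```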

In `@engine/src/modules/telemetry/environment.rs`:
- Around line 515-523: The test
test_environment_info_collect_returns_valid_fields calls
EnvironmentInfo::collect() which writes telemetry.yaml into the user's home
(~/.iii); isolate it by creating a temporary directory (e.g., via
tempfile::tempdir()), set the HOME (or the specific env var the collector reads,
if different) to that temp dir before calling EnvironmentInfo::collect(), run
the assertions, and then restore the original env or let the tempdir drop to
avoid polluting the real home; update the test to use this temp HOME setup
around the call to EnvironmentInfo::collect().
🪄 Autofix (Beta): ❌ failed

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: b2c395c5-c666-46f7-965c-28a274f8bc8e

📥 Commits

Reviewing files that changed from the base of the PR and between 2f87517 and 15963c7.

📒 Files selected for processing (2)
  • engine/src/main.rs
  • engine/src/modules/telemetry/environment.rs

coderabbitai bot (Contributor) commented Apr 2, 2026

Note

Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it.

An unexpected error occurred while generating fixes: 13 INTERNAL: Received RST_STREAM with code 2 (Internal server error)

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
engine/src/modules/telemetry/environment.rs (1)

529-537: ⚠️ Potential issue | 🟡 Minor

Isolate EnvironmentInfo::collect() from the real home directory.

Line 530 now creates or updates telemetry.yaml. Because this test is still not #[serial] and does not redirect HOME, it can mutate the runner's real ~/.iii and race with the temp-home tests in this module.

🧪 Suggested isolation
     #[test]
+    #[serial]
     fn test_environment_info_collect_returns_valid_fields() {
+        let dir = tempfile::tempdir().unwrap();
+        unsafe {
+            env::set_var("HOME", dir.path());
+        }
         let info = EnvironmentInfo::collect();
         assert!(!info.machine_id.is_empty());
         assert!(!info.iii_execution_context.is_empty());
         assert!(info.cpu_cores >= 1);
         assert!(!info.os.is_empty());
         assert!(!info.arch.is_empty());
         assert!(!info.timezone.is_empty());
+        unsafe {
+            env::remove_var("HOME");
+        }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@engine/src/modules/telemetry/environment.rs` around lines 529 - 537, The test
test_environment_info_collect_returns_valid_fields calls
EnvironmentInfo::collect which writes to ~/.iii/telemetry.yaml; isolate it by
creating a TempDir (e.g., tempfile::TempDir) and setting the HOME env var for
the test to that temp dir before calling EnvironmentInfo::collect, then restore
the original HOME (or rely on TempDir drop) so the real home is not mutated;
update the test to set std::env::set_var("HOME", tmpdir.path()) (or equivalent)
around the call to EnvironmentInfo::collect to prevent races with other tests
that expect a clean ~/.iii.
🧹 Nitpick comments (1)
engine/src/modules/telemetry/environment.rs (1)

561-567: This assertion no longer checks the “none on host” behavior.

At Line 562 you're accepting every literal detect_container_runtime() currently returns, so the test passes even if the host branch regresses. Either pin a no-container environment and assert "none", or rename this to a simple return-domain test.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@engine/src/modules/telemetry/environment.rs` around lines 561 - 567: the
assertion around detect_container_runtime() is too permissive and masks
regressions in the host/no-container branch. Update the test so it either pins a
no-container environment and asserts detect_container_runtime() == "none" (to
validate the host branch explicitly), or verifies the return value is within the
allowed domain (e.g., one of ["none", "docker", "container", "kubernetes"])
rather than accepting whatever literal it currently returns. Reference
detect_container_runtime() and the existing assert block when making the change
so the host behavior is explicitly covered.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@engine/src/modules/telemetry/environment.rs`:
- Around line 487-495: the test
test_resolve_execution_context_prefers_explicit_env should neutralize ambient CI
markers before asserting "docker". Clear common CI environment variables (e.g.,
CI, GITHUB_ACTIONS, GITHUB_RUN_ID, or any vars your resolve_execution_context()
checks), or save and restore them around the test. Then set EXECUTION_CONTEXT_ENV
to "docker", call resolve_execution_context(), assert it equals "docker", and
finally restore/clean up both EXECUTION_CONTEXT_ENV and the CI vars. Use the test
name test_resolve_execution_context_prefers_explicit_env and the
resolve_execution_context()/EXECUTION_CONTEXT_ENV symbols to find the location.

📥 Commits

Reviewing files that changed from the base of the PR and between 15963c7 and 6dd6481.

📒 Files selected for processing (1)
  • engine/src/modules/telemetry/environment.rs

sergiofilhowz previously approved these changes Apr 2, 2026
Detect container runtimes and derive device IDs from project.ini,
host user ID, or hostname with salted SHA-256 hashing. Set
III_EXECUTION_CONTEXT and III_ENV in the debug Dockerfile.
…_user_id field

Wrap test_environment_info_collect_returns_valid_fields with a temp HOME
so it no longer writes telemetry.yaml into the real ~/.iii, and add
#[serial] to match the rest of the test suite. Also add the missing
host_user_id field to all EnvironmentInfo test initializers in mod.rs
so the crate compiles after the struct gained that field.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@anthonyiscoding anthonyiscoding merged commit 946eca4 into main Apr 2, 2026
23 checks passed
@anthonyiscoding anthonyiscoding deleted the fix/amplitude-telemetry-ids branch April 2, 2026 18:12