Fix: Improve reporting of user metrics to amplitude#1390
anthonyiscoding merged 21 commits into main.
Conversation
- Migrate device identity and execution context to ~/.iii/telemetry.yaml with schema version 2. Legacy telemetry.toml is ignored (not migrated).
- Use machineid-rs HMAC-SHA256 output directly as device_id, without a redundant second SHA-256 pass.
- Route install/upgrade lifecycle telemetry through the iii binary via the --install-only-generate-ids, --install-event-type, and --install-event-properties flags.
- Differentiate execution contexts: kubernetes, docker, container, ci, user. Unknown values normalize to "unknown".
- Set III_EXECUTION_CONTEXT=docker in the Dockerfile and docker-compose files.
- Delay all telemetry (first_run, boot heartbeat) to 2 minutes after engine start to allow workers/functions to register and to filter out noise from ephemeral containers.
- Record first_run_sent in telemetry.yaml on the first actual engine run (not during install-only ID generation).
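The execution-context rules in the summary above (an explicit env var wins, only known values pass through, everything else normalizes to "unknown") can be sketched in Python. This is an illustrative analogue of the Rust logic, not the actual implementation; the ambient-detection probes (`KUBERNETES_SERVICE_HOST`, `/.dockerenv`, `CI`) are plausible assumptions rather than details taken from the PR.

```python
import os

# The five contexts the PR differentiates; anything else becomes "unknown".
KNOWN_CONTEXTS = {"kubernetes", "docker", "container", "ci", "user"}

def resolve_execution_context() -> str:
    # An explicit III_EXECUTION_CONTEXT wins over ambient detection.
    explicit = os.environ.get("III_EXECUTION_CONTEXT", "").strip().lower()
    if explicit:
        return explicit if explicit in KNOWN_CONTEXTS else "unknown"
    # Hypothetical ambient probes, in rough specificity order.
    if os.environ.get("KUBERNETES_SERVICE_HOST"):
        return "kubernetes"
    if os.path.exists("/.dockerenv"):
        return "docker"
    if os.environ.get("CI"):
        return "ci"
    return "user"
```

Setting III_EXECUTION_CONTEXT=docker in the Dockerfiles, as this PR does, short-circuits the ambient probes entirely.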
…t_background_tasks

Extract EngineSnapshot, build_base_properties, and DeltaAccumulator/DeltaSnapshot helpers to eliminate repeated property maps and delta-counter boilerplate across the boot heartbeat, periodic heartbeat, and engine_stopped events.
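The DeltaAccumulator idea can be reduced to a minimal Python sketch. The real helpers are Rust and track engine counters; the dict-of-counters shape here is an assumption for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class DeltaAccumulator:
    """Remembers the last snapshot and reports only what changed since."""
    last: dict = field(default_factory=dict)

    def delta(self, snapshot: dict) -> dict:
        # Each counter's delta is its current value minus the last seen value.
        out = {k: v - self.last.get(k, 0) for k, v in snapshot.items()}
        self.last = dict(snapshot)
        return out
```

Each telemetry event (boot heartbeat, periodic heartbeat, engine_stopped) would call `delta()` with the current counters instead of hand-maintaining per-event bookkeeping.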
…ss engine and SDKs

Replace hardcoded motia/non-motia worker counts with a dynamic worker_count_by_framework map. Rename iii-js to iii-node and default the framework field in all SDKs so Amplitude events always carry a framework identifier. Remove the unused preferences/dev_optout config, add a platform tag to install-script events, and track to_version on upgrades.
Remove redundant `or "iii-py"` fallback for the framework telemetry field — the ternary already handles the None case. Black formatting applied across the file (line wrapping, multi-line imports).
Install lifecycle events are sent only when the downloaded iii binary supports --install-only-generate-ids. Older releases skip telemetry silently instead of emitting low-value curl-based events. Include to_version in upgrade_failed payloads (same release target as upgrade_started).
📝 Walkthrough

Telemetry identity and lifecycle emission were moved into the installed binary; telemetry persistence migrated from TOML to YAML v2 using machineid-rs; execution-context resolution was added; the install script and CLI received install-event wiring.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant InstallScript as Install Script
    participant Binary as Installed Binary
    participant Telemetry as Engine Telemetry Module
    participant Config as telemetry.yaml (FS)
    InstallScript->>Binary: invoke --install-only-generate-ids (optional event args)
    Binary->>Telemetry: get_or_create_device_id()
    Telemetry->>Config: read/create/migrate telemetry.yaml (v2)
    Config-->>Telemetry: return device_id / iii_execution_context
    Telemetry-->>Binary: device_id + iii_execution_context
    InstallScript->>Binary: iii_emit_event(event_type, properties)
    Binary->>Telemetry: send_install_lifecycle_event(platform="install-script", props)
    Telemetry->>Telemetry: build payload and send (async)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (2 warnings)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 5
Caution: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
engine/src/modules/telemetry/environment.rs (1)
331-338: ⚠️ Potential issue | 🟠 Major — Restore the broader opt-out env vars here.

Both the CLI and engine telemetry paths now delegate to `is_dev_optout()`, but this implementation only honors `III_TELEMETRY_DEV` and the sentinel file. If a user sets `III_TELEMETRY_OPTOUT=true` or `DO_NOT_TRACK=1`, telemetry stays enabled. That's a privacy regression.

Possible fix:
```diff
 pub fn is_dev_optout() -> bool {
     if std::env::var("III_TELEMETRY_DEV").ok().as_deref() == Some("true") {
         return true;
     }
+    if std::env::var("III_TELEMETRY_OPTOUT").ok().as_deref() == Some("true") {
+        return true;
+    }
+    if std::env::var("DO_NOT_TRACK").ok().as_deref() == Some("1") {
+        return true;
+    }
     let base_dir = dirs::home_dir().unwrap_or_else(std::env::temp_dir);
     base_dir.join(".iii").join("telemetry_dev_optout").exists()
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@engine/src/modules/telemetry/environment.rs` around lines 331-338: Update is_dev_optout() to honor the broader opt-out env vars in addition to III_TELEMETRY_DEV and the sentinel file: check for III_TELEMETRY_OPTOUT being set to "true" (or truthy values like "1") and DO_NOT_TRACK being set to "1" (or other common truthy values) before returning false, and return true if any of those env vars indicate opt-out; keep the existing sentinel-file check (base_dir.join(".iii").join("telemetry_dev_optout").exists()) and the current III_TELEMETRY_DEV check in the same function so all four signals (III_TELEMETRY_DEV, III_TELEMETRY_OPTOUT, DO_NOT_TRACK, and the sentinel file) are considered.

engine/src/modules/telemetry/mod.rs (1)
684-724: ⚠️ Potential issue | 🟠 Major — Send `first_run` before the 120-second warm-up.

Because `check_and_mark_first_run()` now runs after the 2-minute sleep, any engine session shorter than 120 seconds exits without ever sending `first_run` or the boot heartbeat. That's common in local/dev flows and will undercount new engines.

Possible fix:
```diff
 tokio::spawn(async move {
-    tokio::time::sleep(std::time::Duration::from_secs(120)).await;
-
-    let snap = collect_engine_snapshot(&engine_for_started);
+    let initial_snap = collect_engine_snapshot(&engine_for_started);
     if check_and_mark_first_run() {
         let first_run_event = ctx_for_started.build_event(
             "first_run",
             serde_json::json!({
@@
                 "os": std::env::consts::OS,
                 "arch": std::env::consts::ARCH,
                 "install_method": environment::detect_install_method(),
             }),
-            snap.wd.sdk_telemetry.as_ref(),
+            initial_snap.wd.sdk_telemetry.as_ref(),
         );
         let _ = client_for_started.send_event(first_run_event).await;
     }
+
+    tokio::time::sleep(std::time::Duration::from_secs(120)).await;
+    let snap = collect_engine_snapshot(&engine_for_started);
     let mut props = build_base_properties(&snap);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@engine/src/modules/telemetry/mod.rs` around lines 684 - 724, The current tokio::spawn block delays calling check_and_mark_first_run() (and sending the first_run event) until after a 120s sleep, causing short-lived sessions to miss first_run and the boot heartbeat; move the call to check_and_mark_first_run() and the construction/sending of the first_run event (using ctx_for_started.build_event and client_for_started.send_event) to occur immediately after collecting the snapshot (collect_engine_snapshot(&engine_for_started)) and before the tokio::time::sleep, and likewise send the boot_heartbeat (built by ctx_for_started.build_event into boot_heartbeat and sent via client_for_started.send_event) before the sleep so both events are emitted for short sessions while keeping the later delayed work intact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@engine/install.sh`:
- Around line 15-23: The current payloads interpolate shell variables into JSON
(using tr on quotes) which fails for backslashes, newlines, tabs, etc.; replace
this with proper JSON encoding: escape each dynamic field
(error_message/_err_msg, _stage, from_version, release_version, BIN_NAME) using
a JSON-serializer (e.g., jq -Rs or a small python -c json.dumps) to build a
valid JSON object string, store that escaped JSON in a single variable (e.g.,
payload) and pass payload to iii_emit_event for both the "upgrade_failed" and
"install_failed" branches (and the other occurrences noted around lines 293-318)
so the emitter receives valid JSON.
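The review offers `jq -Rs` or a `python -c json.dumps` one-liner as serializers. A sketch of the Python variant, with illustrative field names matching the payloads discussed above:

```python
import json

def build_event_properties(error_message: str, stage: str,
                           from_version: str, release_version: str) -> str:
    # json.dumps escapes backslashes, quotes, newlines, and tabs, so the
    # result is always a valid JSON object string.
    return json.dumps({
        "error_message": error_message,
        "stage": stage,
        "from_version": from_version,
        "to_version": release_version,
    })
```

The shell script would pass this string as the `--install-event-properties` value, where `serde_json::from_str` on the Rust side can parse it unconditionally.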
In `@engine/src/cli/telemetry.rs`:
- Around line 48-59: build_user_properties currently derives install_method from
detect_install_method/current_exe which gets set to "manual" for install-script
lifecycle events; change build_user_properties to accept an optional override
(e.g., install_method_override: Option<&str> or Option<String>) and when
constructing the JSON set "install_method" to that override if Some(...) else
environment::detect_install_method(); then update build_event to pass
event_properties.install_method (or Some("sh")) through to build_user_properties
so user_properties.install_method matches event_properties.install_method for
install-script events; touch functions build_user_properties and build_event and
the field name "install_method" to implement this override.
In `@sdk/packages/node/iii/src/iii.ts`:
- Line 556: The telemetry "framework" assignment currently uses
"telemetryOpts?.framework ?? 'iii-node'" which still lets empty strings through;
update the assignment that builds the telemetry object in iii.ts to treat
undefined, null, empty, or whitespace-only telemetryOpts.framework as missing
and fall back to 'iii-node' (e.g., trim and use a truthy check), referencing the
telemetryOpts object and its framework property so engine aggregation always
receives a non-blank "framework" value.
In `@sdk/packages/python/iii/src/iii/iii.py`:
- Around line 1288-1291: The "framework" value currently uses
"telemetry_opts.framework if telemetry_opts else 'iii-py'", which lets a present
telemetry_opts with a falsy framework emit an empty/None value; change the
expression so it falls back to "iii-py" when telemetry_opts.framework is falsy
(e.g., use the telemetry_opts.framework or "iii-py" pattern) so the "framework"
dict key always defaults to "iii-py" unless a non-empty value is provided;
update the code where the dict is constructed (the "framework" entry in the
telemetry options block referencing telemetry_opts) accordingly.
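The fallback pattern both SDK comments converge on, sketched in Python (the function name and default are illustrative):

```python
def resolve_framework(framework, default="iii-py"):
    # Treat None, empty, and whitespace-only values as missing, so the
    # telemetry "framework" field is never blank.
    if framework and framework.strip():
        return framework.strip()
    return default
```

A truthy check plus `strip()` covers falsy and whitespace-only values; the Node equivalent needs more than `??`, which only catches null/undefined.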
---
Outside diff comments:
In `@engine/src/modules/telemetry/environment.rs`:
- Around line 331-338: Update is_dev_optout() to honor the broader opt-out env
vars in addition to III_TELEMETRY_DEV and the sentinel file: check for
III_TELEMETRY_OPTOUT being set to "true" (or truthy values like "1") and
DO_NOT_TRACK being set to "1" (or other common truthy values) before returning
false, and return true if any of those env vars indicate opt-out; keep the
existing sentinel-file check
(base_dir.join(".iii").join("telemetry_dev_optout").exists()) and the current
III_TELEMETRY_DEV check in the same function so all four signals
(III_TELEMETRY_DEV, III_TELEMETRY_OPTOUT, DO_NOT_TRACK, and the sentinel file)
are considered.
In `@engine/src/modules/telemetry/mod.rs`:
- Around line 684-724: The current tokio::spawn block delays calling
check_and_mark_first_run() (and sending the first_run event) until after a 120s
sleep, causing short-lived sessions to miss first_run and the boot heartbeat;
move the call to check_and_mark_first_run() and the construction/sending of the
first_run event (using ctx_for_started.build_event and
client_for_started.send_event) to occur immediately after collecting the
snapshot (collect_engine_snapshot(&engine_for_started)) and before the
tokio::time::sleep, and likewise send the boot_heartbeat (built by
ctx_for_started.build_event into boot_heartbeat and sent via
client_for_started.send_event) before the sleep so both events are emitted for
short sessions while keeping the later delayed work intact.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: a3b8b082-2f43-4e9b-9ce6-ec6f97fa3ac2
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (13)
- engine/Cargo.toml
- engine/Dockerfile
- engine/docker-compose.prod.yml
- engine/docker-compose.yml
- engine/install.sh
- engine/src/cli/registry.rs
- engine/src/cli/telemetry.rs
- engine/src/main.rs
- engine/src/modules/telemetry/environment.rs
- engine/src/modules/telemetry/mod.rs
- sdk/packages/node/iii/src/iii.ts
- sdk/packages/python/iii/src/iii/iii.py
- sdk/packages/rust/iii/src/iii.rs
Actionable comments posted: 1
♻️ Duplicate comments (1)
engine/install.sh (1)
15-23: ⚠️ Potential issue | 🟠 Major — These event payloads still need real JSON encoding.

Line 16 only normalizes `"` in one field, and lines 19, 22, 339, 342, 359, and 362 still interpolate shell variables directly into JSON. `engine/src/main.rs` parses `--install-event-properties` with `serde_json::from_str`, so a backslash, newline, tab, or quote in `error_message`, `from_version`, `release_version`, `installed_version`, or `BIN_NAME` will make the payload invalid and the event will be dropped. Please build these objects with a serializer/helper instead of manual string concatenation.

Also applies to: 338-343, 358-362
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@engine/install.sh` around lines 15 - 23, The event payloads are built by interpolating shell variables (e.g., error_message/_err_msg, from_version, release_version, installed_version, BIN_NAME, _stage) directly into JSON strings before calling iii_emit_event; replace that fragile concatenation with a proper JSON-encoding helper and pass its output to iii_emit_event. Add a small function (e.g., encode_json or json_props) that takes key/value pairs and returns a safe JSON string by calling a reliable serializer (jq -n --arg ... '{...}' or python -c 'import json,sys; print(json.dumps({...}))'), then use that helper when constructing both the "upgrade_failed" and "install_failed" payloads (and the other occurrences around the same block) so all variables are JSON-escaped instead of being interpolated raw. Ensure iii_emit_event is called with the helper's output as the --install-event-properties value.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@engine/install.sh`:
- Around line 336-343: Initialize telemetry_emitter to the existing binary
(bin_dir/$BIN_NAME) before the download/extract steps and then reset
telemetry_emitter to "$bin_file" after successful extraction so failures before
the new binary exists still emit telemetry; wrap all failure-prone calls (curl,
tar, unzip, the install/cp/chmod sequence) in a small executor that traps errors
and calls err() (or add a trap 'ERR' handler) so set -e won't bypass
err()—update references to telemetry_emitter, iii_emit_event, err(), and the
download/install command blocks to use this executor and to reassign
telemetry_emitter after extraction.
---
Duplicate comments:
In `@engine/install.sh`:
- Around line 15-23: The event payloads are built by interpolating shell
variables (e.g., error_message/_err_msg, from_version, release_version,
installed_version, BIN_NAME, _stage) directly into JSON strings before calling
iii_emit_event; replace that fragile concatenation with a proper JSON-encoding
helper and pass its output to iii_emit_event. Add a small function (e.g.,
encode_json or json_props) that takes key/value pairs and returns a safe JSON
string by calling a reliable serializer (jq -n --arg ... '{...}' or python -c
'import json,sys; print(json.dumps({...}))'), then use that helper when
constructing both the "upgrade_failed" and "install_failed" payloads (and the
other occurrences around the same block) so all variables are JSON-escaped
instead of being interpolated raw. Ensure iii_emit_event is called with the
helper's output as the --install-event-properties value.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 64e80dd2-5e7c-4d61-89f5-1b2288b130aa
📒 Files selected for processing (7)
- engine/Cargo.toml
- engine/install.sh
- engine/src/main.rs
- engine/src/modules/telemetry/mod.rs
- sdk/packages/node/iii/src/iii.ts
- sdk/packages/python/iii/src/iii/iii.py
- sdk/packages/rust/iii/src/iii.rs
✅ Files skipped from review due to trivial changes (2)
- engine/Cargo.toml
- sdk/packages/python/iii/src/iii/iii.py
🚧 Files skipped from review as they are similar to previous changes (3)
- sdk/packages/node/iii/src/iii.ts
- engine/src/main.rs
- engine/src/modules/telemetry/mod.rs
Address CodeRabbit findings: the Node SDK used `??` which only catches null/undefined, and the Python SDK used a conditional that passed through falsy framework values. Both now fall back to their defaults when framework is empty or whitespace-only.
…and install_method consistency

Address CodeRabbit findings across the install lifecycle telemetry pipeline:
- install.sh: replace tr-based quote escaping with a json_str() helper (jq -Rs with awk fallback) for all dynamic JSON fields
- install.sh: seed telemetry_emitter from the existing binary on upgrades so failures before extraction can still emit telemetry
- install.sh: add || err guards on curl, tar, unzip, and install/cp so set -e doesn't bypass err() and telemetry emission
- cli/telemetry.rs: add install_method_override param to build_user_properties/build_event so install-script lifecycle events carry consistent install_method in both event and user properties
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@engine/install.sh`:
- Around line 385-396: The script prints success and calls iii_emit_event
("install_succeeded"/"upgrade_succeeded") even when the installed binary is
missing or unusable because installed_version can be empty; update the logic
after computing installed_version to actually validate the installed binary runs
(e.g., invoke "$bin_dir/$BIN_NAME" --version or a lightweight health check and
confirm non-empty output/exit code), only then emit iii_emit_event and print the
success message; if the check fails, emit a failure event (or avoid emitting
*_succeeded) and print/exit with an error. Use the existing symbols
installed_version, BIN_NAME, iii_emit_event and install_event_prefix to locate
and change the relevant block.
In `@engine/src/cli/telemetry.rs`:
- Around line 76-82: The device-id creation is currently racy; modify
environment::get_or_create_device_id() to serialize the read/check/write of the
telemetry YAML so concurrent processes cannot mint different IDs. Concretely:
add a cross-process lock (e.g., advisory file lock or similar) around the
existing telemetry YAML read -> if missing generate UUID -> write -> release
sequence inside get_or_create_device_id(), re-reading the file after acquiring
the lock to avoid TOCTOU. Ensure build_event() (which calls
get_or_create_device_id()) and other callers need no changes because the
serialization is contained inside get_or_create_device_id().
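A Python analogue of the advisory-lock pattern the review describes (the real fix would use a Rust file-locking crate; `fcntl.flock` is POSIX-only, and the function signature and sidecar lock path are illustrative):

```python
import fcntl
import os
import uuid

def get_or_create_device_id(path: str) -> str:
    # Hold an exclusive lock on a sidecar file for the whole
    # read -> check -> write sequence.
    with open(path + ".lock", "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)
        # Re-read under the lock: another process may have won the race
        # between our first check and lock acquisition (TOCTOU).
        if os.path.exists(path):
            with open(path) as f:
                existing = f.read().strip()
            if existing:
                return existing
        device_id = uuid.uuid4().hex
        with open(path, "w") as f:
            f.write(device_id)
        return device_id
```

The lock is released when the `with` block closes the sidecar file, so every concurrent caller observes the same minted ID.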
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 5b945ac7-2712-410b-a2cb-feb72c95c4ca
📒 Files selected for processing (4)
- engine/install.sh
- engine/src/cli/telemetry.rs
- sdk/packages/node/iii/src/iii.ts
- sdk/packages/python/iii/src/iii/iii.py
🚧 Files skipped from review as they are similar to previous changes (1)
- sdk/packages/python/iii/src/iii/iii.py
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@engine/src/main.rs`:
- Around line 197-210: The calls to
iii::modules::telemetry::environment::get_or_create_device_id() and
resolve_execution_context() run even when telemetry is disabled, causing
telemetry.yaml and a device id to be created; wrap those initializers in the
same telemetry opt-out check used by build_event()/is_telemetry_disabled() or
move them into send_install_lifecycle_event() so initialization only occurs when
telemetry is enabled. Concretely, check the telemetry disabled flag (via
is_telemetry_disabled() or the same helper used by build_event()) before calling
get_or_create_device_id() and resolve_execution_context(), or remove those calls
here and ensure send_install_lifecycle_event() invokes them when needed.
In `@engine/src/modules/telemetry/environment.rs`:
- Around line 515-523: The test
test_environment_info_collect_returns_valid_fields calls
EnvironmentInfo::collect() which writes telemetry.yaml into the user's home
(~/.iii); isolate it by creating a temporary directory (e.g., via
tempfile::tempdir()), set the HOME (or the specific env var the collector reads,
if different) to that temp dir before calling EnvironmentInfo::collect(), run
the assertions, and then restore the original env or let the tempdir drop to
avoid polluting the real home; update the test to use this temp HOME setup
around the call to EnvironmentInfo::collect().
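The isolation recipe is the same in any language: swap HOME for a temp dir, run the collector, restore. A Python context-manager sketch of that pattern:

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def temp_home():
    # Point HOME at a throwaway directory so nothing under test can
    # touch the real ~/.iii, then restore the original value.
    original = os.environ.get("HOME")
    with tempfile.TemporaryDirectory() as tmp:
        os.environ["HOME"] = tmp
        try:
            yield tmp
        finally:
            if original is None:
                os.environ.pop("HOME", None)
            else:
                os.environ["HOME"] = original
```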
🪄 Autofix (Beta)
❌ Autofix failed (check again to retry)
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: b2c395c5-c666-46f7-965c-28a274f8bc8e
📒 Files selected for processing (2)
- engine/src/main.rs
- engine/src/modules/telemetry/environment.rs
Note: Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it. An unexpected error occurred while generating fixes: 13 INTERNAL: Received RST_STREAM with code 2 (Internal server error)
Actionable comments posted: 1
♻️ Duplicate comments (1)
engine/src/modules/telemetry/environment.rs (1)
529-537: ⚠️ Potential issue | 🟡 Minor — Isolate `EnvironmentInfo::collect()` from the real home directory.

Line 530 now creates or updates telemetry.yaml. Because this test is still not `#[serial]` and does not redirect `HOME`, it can mutate the runner's real `~/.iii` and race with the temp-home tests in this module.

🧪 Suggested isolation:
```diff
     #[test]
+    #[serial]
     fn test_environment_info_collect_returns_valid_fields() {
+        let dir = tempfile::tempdir().unwrap();
+        unsafe {
+            env::set_var("HOME", dir.path());
+        }
         let info = EnvironmentInfo::collect();
         assert!(!info.machine_id.is_empty());
         assert!(!info.iii_execution_context.is_empty());
         assert!(info.cpu_cores >= 1);
         assert!(!info.os.is_empty());
         assert!(!info.arch.is_empty());
         assert!(!info.timezone.is_empty());
+        unsafe {
+            env::remove_var("HOME");
+        }
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@engine/src/modules/telemetry/environment.rs` around lines 529 - 537, The test test_environment_info_collect_returns_valid_fields calls EnvironmentInfo::collect which writes to ~/.iii/telemetry.yaml; isolate it by creating a TempDir (e.g., tempfile::TempDir) and setting the HOME env var for the test to that temp dir before calling EnvironmentInfo::collect, then restore the original HOME (or rely on TempDir drop) so the real home is not mutated; update the test to set std::env::set_var("HOME", tmpdir.path()) (or equivalent) around the call to EnvironmentInfo::collect to prevent races with other tests that expect a clean ~/.iii.
🧹 Nitpick comments (1)
engine/src/modules/telemetry/environment.rs (1)
561-567: This assertion no longer checks the “none on host” behavior.

At line 562 you're accepting every literal `detect_container_runtime()` currently returns, so the test passes even if the host branch regresses. Either pin a no-container environment and assert `"none"`, or rename this to a simple return-domain test.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@engine/src/modules/telemetry/environment.rs` around lines 561 - 567, The assertion around detect_container_runtime() is too permissive and masks regressions in the host/no-container branch; update the test so it either pins a no-container environment and asserts detect_container_runtime() == "none" (to validate the host branch explicitly) or change the assertion to verify the return is within the allowed domain (e.g., match/contains one of ["none","docker","container","kubernetes"]) rather than accepting whatever literal it currently returns; reference detect_container_runtime() and the existing assert block when making the change so the host behavior is explicitly covered.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@engine/src/modules/telemetry/environment.rs`:
- Around line 487-495: The test
test_resolve_execution_context_prefers_explicit_env should neutralize ambient CI
markers before asserting "docker": clear common CI environment variables (e.g.,
CI, GITHUB_ACTIONS, GITHUB_RUN_ID or any vars your resolve_execution_context()
checks) or save and restore them around the test, then set EXECUTION_CONTEXT_ENV
to "docker", call resolve_execution_context(), assert it equals "docker", and
finally restore/cleanup both EXECUTION_CONTEXT_ENV and the CI vars; update the
test to use the test name test_resolve_execution_context_prefers_explicit_env
and the resolve_execution_context()/EXECUTION_CONTEXT_ENV symbols to find the
location.
---
Duplicate comments:
In `@engine/src/modules/telemetry/environment.rs`:
- Around line 529-537: The test
test_environment_info_collect_returns_valid_fields calls
EnvironmentInfo::collect which writes to ~/.iii/telemetry.yaml; isolate it by
creating a TempDir (e.g., tempfile::TempDir) and setting the HOME env var for
the test to that temp dir before calling EnvironmentInfo::collect, then restore
the original HOME (or rely on TempDir drop) so the real home is not mutated;
update the test to set std::env::set_var("HOME", tmpdir.path()) (or equivalent)
around the call to EnvironmentInfo::collect to prevent races with other tests
that expect a clean ~/.iii.
---
Nitpick comments:
In `@engine/src/modules/telemetry/environment.rs`:
- Around line 561-567: The assertion around detect_container_runtime() is too
permissive and masks regressions in the host/no-container branch; update the
test so it either pins a no-container environment and asserts
detect_container_runtime() == "none" (to validate the host branch explicitly) or
change the assertion to verify the return is within the allowed domain (e.g.,
match/contains one of ["none","docker","container","kubernetes"]) rather than
accepting whatever literal it currently returns; reference
detect_container_runtime() and the existing assert block when making the change
so the host behavior is explicitly covered.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 2c34d313-dadf-4005-9a89-b6c6c2a441b8
📒 Files selected for processing (1)
engine/src/modules/telemetry/environment.rs
Detect container runtimes and derive device IDs from project.ini, host user ID, or hostname with salted SHA-256 hashing. Set III_EXECUTION_CONTEXT and III_ENV in the debug Dockerfile.
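The fallback-and-hash scheme in this commit can be sketched as follows; the salt constant and function signature are illustrative, not the real implementation:

```python
import hashlib

SALT = "iii-device-id"  # illustrative; the real salt is internal to the engine

def derive_device_id(project_id=None, host_user_id=None, hostname=None) -> str:
    # Prefer the most stable identifier available, then hash it with a salt
    # so the raw identifier never appears in telemetry.
    source = project_id or host_user_id or hostname or "unknown"
    return hashlib.sha256((SALT + ":" + str(source)).encode()).hexdigest()
```

Salting before hashing means two installs with the same hostname map to the same ID for this product, while the hash cannot be trivially reversed to the hostname by joining against a public rainbow table.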
…_user_id field

Wrap test_environment_info_collect_returns_valid_fields with a temp HOME so it no longer writes telemetry.yaml into the real ~/.iii, and add #[serial] to match the rest of the test suite. Also add the missing host_user_id field to all EnvironmentInfo test initializers in mod.rs so the crate compiles after the struct gained that field.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Impacts: iii-hq/cli-tooling#12