refactor: modernize toolchain, migrate to ESM, and remove axios/lodash (#30)
Phase 1 — Tooling: remove SWC and Biome; adopt oxlint for lint; drop dead config and unused devDeps.

Phase 2 — ESM: set `"type": "module"` and `engines.node >= 24.0.0`; add `.js` extensions to all local imports; replace bare node builtins with the `node:` protocol; swap `__dirname` for `import.meta.dirname`.

Phase 3 — Errors and logs: centralize all error paths through `errorHandler` (exit 1, `[ERROR]` on stderr); remove redundant `process.exit()` calls next to `errorHandler`; standardize the `successMSG` prefix to `[OK]` and route logs through `messages.ts`.

Phase 4 — Output: structured `key=value` output on mutating commands (change-network, change-bucket-type, duplicate-analysis, analysis-set-mode, copy-tab, deploy); canonical prefixes `[OK]`, `[INFO]`, `[ERROR]`.

Phase 5 — Dependencies:
- Bump to latest stable: ora 5→9 (CJS→ESM; 13 files updated for the `ora.Ora` namespace breaking change), typescript 5.9→6.0 (4 files updated for stricter catch-variable typing and type narrowing), @types/node 24→25, plus minor bumps for dotenv, vitest, oxlint, commander, and @types/lodash.
- Replace axios with native fetch in 10 production files across 4 patterns (JSON GET, JSON POST, ArrayBuffer GET, stream GET via `Readable.fromWeb(response.body)`); simplify `handleBackupError` now that axios-shaped errors are gone; drop axios from dependencies.
- Remove unused deps: lodash → `structuredClone`; drop @types/lodash and the redundant @types/eventsource (eventsource v5 ships its own types).
- Keep async (14 files use `queue`; reimplementing it would add LOC with no bundle benefit for a CLI).

Test infrastructure for both PRs lands here: `src/test-utils/` (Proxy-based Account mock, env-config factory, output capture, prompt reset, fetch mock helper). Existing tests for run-analysis, data-get, collect-ids, and run-buttons-export are updated to use the new helpers and ESM-compliant imports. The full expanded test suite lands in the cascading PR #2.
Result: 192 packages (down from 213), 2 vulnerabilities (transitive from @tago-io/sdk → qs, not fixable here), oxlint 0/0 on 170 files, build clean, CLI functional.
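Of the four fetch patterns, the stream GET is the least obvious. A hedged sketch of the shape described above; `downloadToFile` is an illustrative name, not the CLI's actual helper:

```typescript
import { createWriteStream } from "node:fs";
import { Readable } from "node:stream";
import { pipeline } from "node:stream/promises";
import type { ReadableStream as WebReadableStream } from "node:stream/web";

// Sketch of the "stream GET" pattern: bridge the WHATWG stream that fetch
// returns to a Node Readable, then pipe it to disk with backpressure.
async function downloadToFile(url: string, dest: string): Promise<void> {
  const response = await fetch(url);
  if (!response.ok || !response.body) {
    throw new Error(`Download failed: ${response.status} ${response.statusText}`);
  }
  await pipeline(
    Readable.fromWeb(response.body as WebReadableStream<Uint8Array>),
    createWriteStream(dest),
  );
}
```

The cast is needed because `@types/node` types `response.body` with the DOM-style `ReadableStream` while `Readable.fromWeb` expects the `node:stream/web` one; they are the same object at runtime.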
`errorHandler` already terminates the process, so the `throw false` lines after it are unreachable. Aligns with the Phase 3 cleanup applied to the rest of the codebase.
Expands the test suite from 35 to 358 passing tests, lifting line
coverage from ~45% to 80.16% (above the SPEC threshold of 80%).
Coverage per layer:
- lib/: 89.8%
- prompt/: 95.5%
- commands/: 74.3%
- profile/backup/resources/: 84.5%
Patterns standardized across the suite:
- Proxy-based Account/Resources mock via makeAccount (shared helper)
- makeEnvironmentConfig factory for config stubs
- resetInjectedPrompts between tests to avoid prompts.inject() leakage
- installFetchMock + makeFetchResponse/ArrayBuffer/Stream helpers
- Dynamic `await import("./foo.js")` so vi.mock() factories hoist cleanly
- Fake timers (vi.useFakeTimers) for async.queue resources with
setTimeout-based pacing
Notes on known gaps:
- `src/commands/profile/backup/restore.ts` (9% covered) exercises axios
streams + unzipper in ways that are not easily mocked without a real
zip fixture; left as-is.
- `src/commands/start-config.ts` (21%) has deeply interactive branches
that would need heavy prompt scripting to cover fully.
- One latent `async.queue.drain()` hang when the queue never receives
items is documented in fixtures (all test fixtures force at least one
item to avoid triggering it).
CI impact: `npm run linter` → 0/0 on 170 files; `npm test` → 358/358.
Expands existing test files across the suite to lift branch coverage from 73.9% to 80.01%, closing the last remaining SPEC threshold.

Coverage per metric (final):
- Lines 86.63%
- Statements 86.49%
- Functions 82.50%
- Branches 80.01%

All four thresholds ≥ 80%, matching the SPEC acceptance criteria.

Files expanded (targeting highest-ROI branch gaps):
- backup/resources/*: add granularItem selection flows (10 files)
- profile/export/services: collect-ids, widgets-export, dashboards-export, devices-export, analysis-export, actions-export
- profile/export/export.ts: per-entity isolation tests that exercise the "collect dependencies first" branches that single-entity runs trigger (devices alone, dashboards alone, access alone, etc.)
- profile/backup/create.ts: non-ok POST, custom API base, null-backup poll continue
- profile/backup/lib.ts: 2FA-cancel and pin-cancel paths
- commands: login (URL helper branches + deploy URL derivation), deploy (analysis.info reject, upload failure, default paths), duplicate-analysis (pick cancellation, fetch non-ok), change-network (both prompts empty, partial override), copy-tab (prompt for dashboard id, tab pick cancel, no arrangement)
- lib: token (empty folder branch), config-file (create-on-missing, unparseable file, writeConfigFileEnv, writeToConfigFile, setDefault, resolveCLIPath), notify-update (fetch helper + getLatestVersion)
- devices: data-post (prompt path, fallback to Device lookup, silent-fail branch), analysis-console (scriptObj unresolved, onmessage, onopen, onerror)

restore.ts (9% coverage) and start-config.ts (21%) are intentionally left as-is:
- restore.ts: streams + unzipper + disk I/O + ~13 cascaded SDK calls across every backup resource would require a real zip fixture and is fragile to mock.
- start-config.ts: deeply interactive prompt chains with tmpdir filesystem walks; extensive prompt scripting would be brittle.
CI: switch `code-quality.yml` from `npm test` to `npm run test:coverage` so the 80% threshold is actually enforced, and bump `actions/checkout` and `actions/setup-node` to v6.
Adds `--all` to deploy every analysis in tagoconfig.json without any prompt, and `-t/--token` to supply a profile token at invocation time (bypassing the lock file, which doesn't exist in CI runners). Also rejects the legacy "deploy all" positional with a helpful pointer to `--all`. The "all" positional has never worked as the help text suggested — it opened an interactive prompt — so dropping it is a bug fix, not a breaking change.

Help examples updated: remove the misleading "deploy all" line and add pipeline-friendly usages covering `--all`, `--all` + stage-env, and the full CI combination (`--all -e prod -t $TAGOIO_TOKEN --silent`).

Surgical scope: no changes to init, login, or any shared infrastructure. Only deploy.ts and analysis/index.ts are touched.
Documents the deploy flow for CI runners using --all + -e + -t + --silent, including the install step for @tago-io/cli and @tago-io/builder (deploy shells out to analysis-builder, so the builder needs to be on PATH). Notes that -t accepts either a profile token or an external-analysis token with the proper Access Management permissions, and that a pre-configured tagoconfig.json in the repository is required.
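Under stated assumptions (GitHub Actions, a repository secret named `TAGOIO_TOKEN`, a committed tagoconfig.json), the documented CI flow might look like this; the job and secret names are illustrative:

```yaml
# Hypothetical CI job sketch — job name, secret name, and node version are assumptions.
deploy-analyses:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v6
    - uses: actions/setup-node@v6
      with:
        node-version: 24
    # deploy shells out to analysis-builder, so the builder must be on PATH
    - run: npm install -g @tago-io/cli @tago-io/builder
    - run: tagoio deploy --all -e prod -t "$TAGOIO_TOKEN" --silent
      env:
        TAGOIO_TOKEN: ${{ secrets.TAGOIO_TOKEN }}
```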
Adds coverage for the four new behaviors:
- legacy positional "all" is rejected with a pointer to --all
- --all deploys every analysis with no prompt
- -t/--token overrides the lock-file token for the run
- --all + -t end-to-end works with no lock file (CI flow)
Also updates the existing "env missing" test to reflect the new
error message ("No profile token found" instead of "Environment not
found") and introduces a defaultOptions() helper to DRY up the test
setup now that IDeployOptions has 5 required fields.
Introduces oxfmt as the project formatter, mirroring the settings the Biome formatter used before Phase 1 removed it:
- `useTabs: false`, `tabWidth: 2` — 2-space indentation
- `endOfLine: "lf"` — LF line endings
- `printWidth: 160` — 160-column print width
- `singleAttributePerLine: false` — auto attribute position

Oxfmt uses Prettier-style JSON keys, not Biome-style — a Biome-shaped config (e.g. `lineWidth`) is silently ignored and the defaults kick in. This config was verified end-to-end: with `printWidth: 160`, only 18 files in src/ need reformatting, versus 166 under the default of 100.

Adds two npm scripts, matching the `linter`/`linter:fix` pair:
- `format` → `oxfmt --check` (CI-friendly, no writes)
- `format:fix` → `oxfmt` (formats in place)

Does NOT run a bulk `oxfmt` over src/ — that comes as a separate one-shot reformat commit so the churn is isolated and easy to bisect/ignore with `git blame --ignore-rev`.
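With Prettier-style keys, the settings above would land in `.oxfmtrc.json` roughly as:

```json
{
  "useTabs": false,
  "tabWidth": 2,
  "endOfLine": "lf",
  "printWidth": 160,
  "singleAttributePerLine": false
}
```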
Ships the workspace-level editor config so every contributor formats TypeScript the same way on save, driven by the oxc-vscode extension reading `.oxfmtrc.json`. Settings:
- `editor.defaultFormatter: "oxc.oxc-vscode"` — use the oxc extension
- `editor.formatOnSave: true` — format on every save
- `editor.formatOnSaveMode: "file"` — format the whole file, not just edited regions (keeps whole-file invariants consistent)

Scoped to `[typescript]` only to avoid overriding user defaults for JSON, markdown, etc. Widen later if other file types need it.
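The workspace config (presumably `.vscode/settings.json`) is small; a sketch matching the settings listed:

```json
{
  "[typescript]": {
    "editor.defaultFormatter": "oxc.oxc-vscode",
    "editor.formatOnSave": true,
    "editor.formatOnSaveMode": "file"
  }
}
```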
Mirrors the rule surface the old biome.json enforced, using the closest equivalents available in oxlint. Most of Biome's correctness category is already on by default via oxlint's own correctness category — this commit only adds the rules that are off by default but were explicitly enabled in biome.json.

Added as error:
- no-empty, no-fallthrough, no-prototype-builtins, no-redeclare, getter-return, curly
- typescript/no-namespace
- typescript/no-unnecessary-type-constraint

Added as warn:
- typescript/no-empty-object-type
- unicorn/no-array-for-each
- unicorn/prefer-array-flat-map
- import/no-default-export

Changed typescript/no-explicit-any from "off" to "warn" to match biome.json (which had it as warn). Keeping it as warn rather than error because 46 existing uses in src/ are intentional (errorHandler, SDK narrowing, etc.) and fixing them all is out of scope here.

Biome rules without direct oxlint equivalents — documented here, not ported:
- complexity.noAdjacentSpacesInRegex
- style.useAsConstAssertion
- suspicious.noMisleadingInstantiator
- suspicious.useIsArray

Scripts: `linter` and `linter:fix` gain `--quiet`, which suppresses warning output but keeps errors and the final totals. Net effect: the CI script prints only actionable errors, and warnings do not inflate exit codes or console noise.

Result: `npm run linter` → 0 errors, 52 warnings (all no-explicit-any), exits 0.
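A sketch of the corresponding config, assuming oxlint's ESLint-style `rules` map in `.oxlintrc.json`; the exact file layout may differ:

```json
{
  "rules": {
    "no-empty": "error",
    "no-fallthrough": "error",
    "no-prototype-builtins": "error",
    "no-redeclare": "error",
    "getter-return": "error",
    "curly": "error",
    "typescript/no-namespace": "error",
    "typescript/no-unnecessary-type-constraint": "error",
    "typescript/no-explicit-any": "warn",
    "typescript/no-empty-object-type": "warn",
    "unicorn/no-array-for-each": "warn",
    "unicorn/prefer-array-flat-map": "warn",
    "import/no-default-export": "warn"
  }
}
```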
Enables `allowUnreachableCode: false` in tsconfig.json so the TypeScript compiler flags any statement placed after a call to a function typed `: never` (i.e. after `errorHandler(...)`).

With the flag flipped, the build surfaced 54 dead `return;` (and a few `throw false`) statements written right after `errorHandler(...)` across 26 command files. Every one of them was unreachable — `errorHandler` calls `process.exit(1)` and is typed `: never`, so control never returns. Removed all of them.

Also documents the contract on `errorHandler` itself with an expanded JSDoc explaining the `: never` return type and the no-statement-after invariant that `allowUnreachableCode: false` enforces. From here on, adding a `return;` (or any statement) after `errorHandler(...)` fails `tsc --build` with TS7027, so the pattern can't creep back in without being caught.

No behavior change — the removed statements were dead before and are dead now; the compiler just enforces it explicitly.
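A minimal sketch of the enforced contract, assuming nothing about the real implementation beyond what this commit describes:

```typescript
// Sketch only: the real errorHandler's body may differ, but the contract is
// "[ERROR]" on stderr, process.exit(1), and a `: never` return type.
function errorHandler(message: string): never {
  process.stderr.write(`[ERROR] ${message}\n`);
  process.exit(1);
}

function example(id?: string): void {
  if (!id) {
    errorHandler("Missing analysis ID");
    // return;  <- with allowUnreachableCode: false, tsc flags this line (TS7027)
  }
}
```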
…p test assertions
…gister

`tagoio run <name>` was broken on every Node run: the command called `node -r <swc-register-path>` via `resolveCLIPath`, but that package was removed during Phase 1 when SWC was ripped out. Every invocation failed with `Cannot find module '.../node_modules/@swc-node/register/index'`.

Switches the Node path in `_buildCMD` to `node --experimental-strip-types --watch`. Node 24+ strips TypeScript type annotations natively, so no external loader is required. The flag is stable on Node 24 and does not emit a runtime warning (unlike `--experimental-transform-types`, which does). Limitation: strip-types does not handle `enum` or `namespace` — neither is used in customer analyses today.

Also:
- Drops the now-unused `resolveCLIPath` import from run-analysis.ts.
- Updates the three `_buildCMD` tests that asserted the old SWC path.
- README: removes the `.swcrc` / `.swcrc.example` note from the Analysis Runner section (the debugger no longer needs source-map config — Node's strip-types preserves line numbers).
- README (CI/CD): documents the Access Management rule required when using an external-analysis token in `-t, --token` (Access Analysis + Edit Analysis + Upload Analysis Script), so least-privilege pipelines don't hit Authorization Denied at deploy time.
- `analysis/index.ts`: spells out `--env` instead of `-e` in the help examples for consistency with the option name shown above the list.
`tagoio run <name>` was broken for legacy analysis projects that use
`"module": "CommonJS"` + `"moduleResolution": "Node"` in their tsconfig
and import subpaths without a `.js` extension (e.g.
`import { Data } from "@tago-io/sdk/lib/types"`). Commit 60c5792 had
swapped the runner to `node --experimental-transform-types --watch`, but
Node's built-in TS execution routes those files through the ESM loader,
which rejects extensionless specifiers with `ERR_MODULE_NOT_FOUND`.
Every legacy analysis failed at import time.
Switches `_buildCMD` to invoke tsx's CLI via the cached binary shipped
inside the CLI's own node_modules:
`node <cli-path>/node_modules/tsx/dist/cli.mjs watch <script>`. tsx
pairs esbuild (TS transpile) with oxc-resolver and CJS-aware module
resolution, so the same classic imports resolve cleanly. Watch mode is
provided by `tsx watch` instead of `node --watch`, and `--inspect` /
`--clear` continue to work.
Why tsx over reviving `@swc-node/register`:
- Install footprint: ~12 MB per platform (esbuild single native binary)
vs ~25-36 MB for @swc/core + oxc-resolver
- One native binary to ship, not two
- Watch + REPL ship built-in
- Same resolver internally (oxc-resolver), so no correctness difference
- 10x the weekly npm downloads and stronger maintenance signal
Also:
- Adds `tsx@^4.21.0` to dependencies.
- Reintroduces the `resolveCLIPath` import removed in 60c5792.
- Updates the three `_buildCMD` tests that asserted the Node native
flags to assert the tsx CLI path and `watch` subcommand instead.
The `--tsnd` and `--deno` flags are unchanged.
Device restore was only recreating the device record itself. Two pieces of state that users expect to survive a backup/restore cycle were silently dropped:
- Configuration parameters (`device.params`) — these carry per-device runtime settings and are required by most integrations.
- Tokens tied to a physical device by `serie_number` — without these, the device cannot communicate after restore.

Adds three helpers in `devices.ts`:
- `stripDeviceFields` centralizes the list of fields the TagoIO API rejects on create/edit (ids, timestamps, profile, and the two restored-separately fields `params` and `tokens`). Both `processCreateTask` and `processEditTask` now delegate to it, removing the previously duplicated destructuring.
- `restoreDeviceParams` replays the backup's params via the dedicated `paramSet` endpoint.
- `restoreDeviceTokens` recreates only tokens with a `serie_number` (the identifying attribute for physical devices); ephemeral tokens without one are intentionally skipped. On the edit path it fetches the current token list first and skips serials already present, so re-runs are idempotent. Token values cannot be preserved because the backup stores them masked — callers must update integrations that relied on the old value.

Tests cover: param restoration on create/edit, serial-token recreation, skipping tokens without a serial, idempotency when a serial already exists on the target device, and graceful handling of a failed `tokenCreate` (logged, restore continues).
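A hedged sketch of `stripDeviceFields`' shape; only the field groups named above are grounded, and the exact timestamp field names are assumptions:

```typescript
// Sketch of stripDeviceFields as described: drop the fields the API rejects
// on create/edit, plus params/tokens, which are restored separately.
// Timestamp field names (created_at/updated_at) are assumptions here.
type BackupDevice = Record<string, unknown>;

function stripDeviceFields(device: BackupDevice): BackupDevice {
  const { id, created_at, updated_at, profile, params, tokens, ...editable } = device;
  return editable;
}
```

Centralizing the list this way means `processCreateTask` and `processEditTask` cannot drift apart when the API's rejected-field list changes.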
feat(cli): CI/CD flags, fix legacy TS runner, restore device params and tokens (cascades on #31)
test: add unit test suite reaching 80% coverage on all metrics (cascades on #30)
Move infoMSG, successMSG, errorHandler, displayWarning, and the update-available banner to stderr so stdout stays reserved for command data. Sweep blank-line spacers across backup commands and the widgets-export queue error handler. Update the messages and notify-update tests to spy on process.stderr.write rather than console.info / console.log.
Missing analysis-builder produced a raw Node stack trace; wrap execSync to route ENOENT/exit-127 through errorHandler with a concrete install hint. Replace three console.log runtime banners with infoMSG so deploy output follows the same prefix scheme as the rest of the CLI.
The new-mode assignment was reading options.filterMode (the filter flag) instead of options.mode (the set-mode flag), so -m external/-m tago was silently ignored and users were always prompted. Read the correct option.
The analysis-found template literal had a stray closing brace and printed an empty bracket pair when the analysis had no token. Match the parens and only render the token suffix when a token is present.
--json was rendering with console.dir (JS-literal syntax, not parseable JSON) and the bogus default:true in commander made the bare invocation always hit the json branch, leaving console.table as dead code. Switch to JSON.stringify (compact for --json, pretty for --stringify) and drop the default-true so bare runs return a real table. Same default-true removal applied to device-info and data so they behave consistently.
Both commands were calling console.dir for --json which produces JS literal syntax (single quotes, unquoted keys), failing JSON.parse. Switch to process.stdout.write with JSON.stringify; compact for --json, pretty (indent 2) for --stringify. Update unit tests to assert that the emitted bytes actually parse as JSON.
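The fix reduces to one rule: emit strings that `JSON.parse` accepts. A sketch with an illustrative helper name (`renderJSON` is not the CLI's actual function):

```typescript
// Compact JSON for --json (machine consumption), pretty with indent 2 for
// --stringify. Both are valid JSON, unlike console.dir's JS-literal syntax.
function renderJSON(data: unknown, pretty = false): string {
  return pretty ? JSON.stringify(data, null, 2) : JSON.stringify(data);
}

const result = [{ id: "abc", name: "my-device" }];
process.stdout.write(renderJSON(result) + "\n");       // --json
process.stdout.write(renderJSON(result, true) + "\n"); // --stringify
```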
The buckets.info(id) lookup was unguarded so an invalid ID escaped as UnhandledPromiseRejection with a raw Node stack trace. Wrap in .catch and route through errorHandler with an actionable message that includes the ID and the SDK reason.
Local backup printed "Backup stored on <name>.json" but actually wrote ./<id>.json, leaving users hunting for the wrong filename. Show the real path; keep the human name in parens. Also append a trailing newline so the file is POSIX-compliant and cat output renders cleanly.
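The newline half of the fix in sketch form; the function name and call shape are illustrative:

```typescript
import { writeFileSync } from "node:fs";

// Write the backup JSON with a trailing "\n" so the file is a valid POSIX
// text file and `cat` output renders cleanly. Names here are illustrative.
function writeLocalBackup(filePath: string, backup: unknown): void {
  writeFileSync(filePath, JSON.stringify(backup, null, 2) + "\n");
}
```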
`tokenList(deviceID)` without a fields filter returned tokens without `serie_number`, so the recreate loop silently dropped every device's serial on every network change. Request the `serie_number` field explicitly so serials survive the swap and the success line reports them correctly, as the originating issue requires.
copy-tab printed [OK] Dashboard tab copied even when the source tab had no widgets matching the tab key (typical of arrangement entries with tab=null). Return the copied count from copyWidgetsFromTab and errorHandler when copied is zero so users see an actionable message instead of a lying success. Add the count to the success line as widgets=N for visibility.
Both SDK calls in the export-devices loop were unguarded, so a missing connector or network on the target profile escaped as UnhandledPromiseRejection. Wrap each call in .catch and route through errorHandler with a message that names the device and surfaces the SDK reason.
CI flagged a branch-coverage regression after the deploy and devices- export error-handling fixes (79.62% < 80% threshold). Add three deploy tests for the execSync ENOENT, exit-127, and generic-failure paths, plus two devices-export tests for the create- and edit-rejection branches. Branch coverage returns to 80.56% with 443/443 tests green.
PR 30 Manual Test Report
| Category | Count |
|---|---|
| Test IDs executed | 36 |
| Passed | 35 |
| Pass with note | 1 (T-5.copy-tab — zero-copy guard fixed; deeper data-model follow-up only) |
| Failed and not fixed | 0 |
| Issues found and fixed during session | 12 (one has an additional data-model follow-up — see issue 10) |
| Unit tests passing after all fixes | 443 / 443 |
| Line coverage after all fixes | 86.91% |
| Branch coverage after all fixes | 80.56 % |
Test environment
| Item | Value |
|---|---|
| OS | macOS (darwin 25.2.0) |
| Node version | v24.14.1 |
| npm version | 11.11.0 |
| oxlint clean | yes (0 errors, 52 warnings — warnings don't block CI) |
| npm test green | yes — 76 files, 443 tests |
| npm run build clean | yes — tsc silent, ./build/index.js produced and made executable |
| Coverage (line / branch / fn / stmt) | 86.91% / 80.56% / 82.7% / 86.91% — clears 80% threshold |
| tagoio --version | 3.2.0 |
Design reference for all fixes
All fixes applied during this test session follow the Command Line Interface Guidelines (clig.dev) — the canonical, vendor-neutral reference for modern CLI design. The principles enforced:
- stdout is for data only. Tables, JSON, scalar output a script consumes. Anything else (status, progress, success confirmations, errors, banners) goes to stderr so `cmd | jq` and `cmd > file` work without filtering.
- Errors are actionable. Tell the user what failed and how to recover. Always to stderr, always exit non-zero.
- Flags do what they say. A `--json` flag must produce valid JSON; a boolean flag like `--mode` must take effect when set without forcing a redundant prompt.
- Defaults are sensible. Boolean flags default to `false`. Bare commands produce human-friendly output; machine-readable formats are opt-in via flag.
- Dependencies fail loudly with a fix path. Missing peer-deps print a single `[ERROR] <tool> not found. Install it with: <command>` line — never a raw stack trace.
Each issue logged below maps back to one or more of these principles, and each fix was rebuilt and re-verified before moving on. The unit-test suite was updated alongside each fix.
Result legend
- ✅ Pass — actual matches expected
- ⚠️ Pass with note — passed but something worth flagging
- ❌ Fail — actual differs from expected
T-1 — Pre-flight
| ID | Description | Result | Evidence / notes |
|---|---|---|---|
| T-1.versions | Node 24+ / npm 10+ | ✅ | Node v24.14.1 / npm 11.11.0 |
| T-1.install-and-build | `npm ci`, linter, test, build, `--version` | ✅ | npm ci clean (2 pre-existing transitive vulnerabilities are out of scope). Linter exits 0 with 0 errors. Tests fully green. Build silent. --version returns the expected string. |
| T-1.coverage | `npm run test:coverage` ≥ 80% | ✅ | 86.91% lines / 80.56% branches across all files. Two interactive flows pull the average down (the backup-restore orchestrator and the init walkthrough — both prompt-driven and hard to unit test). Manual coverage in T-2.init and T-6.backup.restore compensates. Otherwise every command/lib/prompt module sits ≥ 76% with most ≥ 95%. |
| T-1.format-check | `oxfmt --check` reports only the expected unformatted files | ✅ | oxfmt --check flags the expected files. Reformatting is intentionally deferred to a post-merge one-shot commit per PR description so the formatting churn stays out of git blame. No action needed in this PR. |
| T-1.linked-binary | `readlink -f` resolves into this repo's build/ | ✅ | The linked global tagoio binary resolves into the repo's build/index.js; tagoio --version prints the expected version. |
| T-1.dependency-spot-check | no axios / lodash / Biome; modern majors for everything else | ✅ | dependencies: no axios, no lodash; ora ^9, tsx ^4. devDependencies: no @biomejs/biome; oxlint, oxfmt, typescript ^6, @types/node ^25. All eight expected items present/absent as specified. |
T-2 — Auth and config
| ID | Description | Result | Evidence / notes |
|---|---|---|---|
| T-2.init | `tagoio init` writes tagoconfig.ts, ends in [OK] | ✅ | Walked through login confirm → API endpoint → email → password → 2FA prompt → profile picker → analysis picker → file association. tagoconfig.ts was written and the final [OK] line printed. Output prefixes correct. |
| T-2.login.A | Interactive `tagoio login <env>` | ✅ | Walked through the prompts (API endpoint → email → password → 2FA → profile picker). Final line: [OK] Token successfully written to the environment `<env>`. |
| T-2.login.B | `tagoio login <env> -t <token>` | ✅ | Token short-circuit skipped credential prompts as expected. Final [OK] confirmation. |
| T-2.login.C | `tagoio login <env> -u <email> -p <password>` (incl. OTP if enabled) | ✅ | Skipped the email/password prompts; correctly detected 2FA and prompted for the OTP only. Final [OK] confirmation. |
| T-2.set-env | set-env updates the default; list-env reflects it | ✅ | set-env `<other-env>` flipped the Default column; set-env `<original>` flipped it back. Round-trip verified. |
| T-2.stream-routing | [OK] / [INFO] on stdout vs [ERROR] on stderr (verified via redirect) | ✅ | Success command: status lines on stdout, stderr empty, exit 0. Forced failure: [INFO] on stdout, [ERROR] on stderr, exit 1, no stack trace. Routing matches the PR's standardization claim. (Routing was later refined in issue 5 — see issues table.) |
T-3 — Analysis namespace
| ID | Description | Result | Evidence / notes |
|---|---|---|---|
| T-3.deploy.interactive | Picker + confirmation + [OK] Script uploaded. | ✅ | First run surfaced issue 1 (raw Node stack trace when the analysis builder peer-dep wasn't installed). After fix and rebuild, both branches verified: without the builder → graceful [ERROR] with install hint and exit 1; with the builder installed → bundle pass-through, then [OK] Script uploaded. script=... analysis=..., exit 0. |
| T-3.deploy.silent | No prompt; [OK] line emitted | ✅ | --silent skipped the picker entirely and went straight to upload. |
| T-3.deploy.all-positional (neg) | `tagoio deploy all` rejected with helpful [ERROR] | ✅ | stdout empty; stderr exactly: [ERROR] Did you mean "tagoio deploy --all"? The "all" positional argument is no longer supported.; exit 1. |
| T-3.run-analysis | tsx loader works under ESM; clean Ctrl-C | ✅ | Process started clean: no ERR_REQUIRE_ESM, no loader errors. Connected to the platform, listened for triggers, exited cleanly on Ctrl-C. The tsx-loader regression fix is confirmed working under ESM. The success line had a cosmetic typo (issue 2) which was caught and fixed during this run. |
| T-3.trigger | analysis-trigger returns [OK] | ✅ | Trigger fired, success line printed, exit 0. |
| T-3.console | analysis-console streams the triggered run via SSE | ✅ | SSE connection clean ([INFO] + [OK] + Waiting for logs... lines). With a background-delayed trigger, a real analysis log line (Starting analysis.) streamed through the console after the trigger fired. Ctrl-C exited cleanly. |
| T-3.duplicate | New analysis on platform; script bytes match (Pattern C) | ✅ | Pattern C (ArrayBuffer fetch + gunzip) verified end-to-end. Output is a textbook example of the issue's stdout-describes-changes requirement (key=value backup trail). |
| T-3.duplicate (neg) | Invalid ID → [ERROR] on stderr, exit 1 | ✅ | Clean error path through errorHandler. No stack trace. |
| T-3.mode | -m external and -m tago both apply, both [OK] | ✅ | Surfaced issue 3 (the -m flag was silently ignored). After fix and rebuild: both directions of the round-trip succeeded with the expected [OK] lines and no extra mode-prompt. |
T-4 — Device namespace
| ID | Description | Result | Evidence / notes |
|---|---|---|---|
| T-4.list | Bare list, -k/-v filter, --json | ✅ | Surfaced issues 4 and 5. After the fixes and the broader stdout/stderr refactor, end-to-end split verified: bare → table on stdout, status on stderr (zero pollution of stdout). -k/-v filters narrow correctly. --json → compact JSON on stdout that parses cleanly. --stringify → pretty-printed JSON on stdout. Pipes (`device-list \| …`) |
| T-4.info | device-info bare, with tokens flag, with --json (Pattern A) | ✅ | Bare prints a table on stdout with status on stderr. --json initially produced JS-literal output (issue 6); after fix, it emits parseable JSON. Pattern A (JSON GET via fetch) verified. |
| T-4.data.read | data `<id>` --qty N returns formatted output | ✅ | Bare: empty table on stdout (no data on the test device), status on stderr. --json: empty array on stdout, parseable, status on stderr. Stream split confirmed. |
| T-4.data.post | data -p '[...]' returns [OK] (Pattern B) | ✅ | Round-trip verified: posted a data point, then read it back via the variable filter and the value matched. Pattern B (JSON POST via fetch) confirmed end-to-end. |
| T-4.device-network | Output includes device=, serial=, network=, connector= (issue requirement) | ✅ | First live run uncovered issue 9 — the underlying SDK call was dropping the serial field. After fix and a fresh serial-bearing token: [OK] line includes serial=... correctly, and the recreated tokens preserve their serials through the network swap. The originating issue's headline requirement is met both in format (output line) and in data preservation (recreated tokens carry serials). |
| T-4.device-network (neg) | Same network/connector → [ERROR] already set..., exit 1 | ✅ | stdout empty (no data leak); stderr has the env line then the actionable error; exit 1; no stack trace. |
| T-4.device-type | mutable → immutable conversion, settings echoed | ✅ | Round-trip verified. immutable → mutable: chunk fields correctly omitted for mutable target. mutable → immutable: full key=value backup trail (device=... type=... chunk_period=... chunk_retention=...). All status on stderr, prompts on stdin/tty, no stdout pollution. |
| T-4.device-type (neg) | Forced 4xx → [ERROR] `<message>`, exit 1 | ✅ | First run surfaced issue 7 (an unguarded SDK call escaping as an unhandled rejection). After fix and rebuild: stderr ends in a clean actionable [ERROR] line, exit 1, no stack trace. |
| T-4.device-backup --local | Writes ./`<id>`.json; restore reads it back | ✅ | Backup writes the file, restore reads it. Surfaced issue 8 (cosmetic: success line referenced the device name but the file was named after the ID, plus the JSON file lacked a trailing newline). Fix applied and verified: success line now shows the actual path; file ends in \n. |
| T-4.device-backup (neg) | Bad URL / invalid id → [ERROR] on stderr, no unhandled rejection | ✅ | Clean error path through errorHandler — actionable message, exit 1, no stack trace. |
| T-4.device-copy | Data copy from source to target | ✅ | UX fix applied: the source/target announcement now lives inside the confirm prompt (Copy data from X to Y?), and the [INFO] Copying data... line only fires after the user agrees. Cleaner sequencing — no more implication that the action has already started before confirmation. |
| T-4.device-inspector | Live SSE stream; clean Ctrl-C | ✅ | Connection clean, live request streamed in (real-time HTTP request captured with payload parsers running), Ctrl-C exited cleanly. Streamed log lines correctly went to stdout; status lines went to stderr per the new split. |
T-5 — Dashboard namespace
| ID | Description | Result | Evidence / notes |
|---|---|---|---|
| T-5.copy-tab | Confirmation prompt; target replaced; cancel does nothing | ⚠️ | Surfaced issue 10 — silent zero-copy false success on a dashboard whose arrangement entries had tab: null. Probed the dashboard via the SDK after the run: target tab still had 0 widgets even though the CLI printed [OK]. Immediate fix applied: zero-copy now triggers an actionable [ERROR]. After the source widget was deleted-and-re-added (forcing a proper tab key on it), a follow-up run produced a normal success with widgets=1 and the target tab received a copy. The deeper data-model fix (handling tab: null semantics in modern dashboards) is a follow-up. |
T-6 — Profile namespace
| ID | Description | Result | Evidence / notes |
|---|---|---|---|
| T-6.export.run | -e devices; per-entity progress; final [OK] | ✅ | First run surfaced issue 11 — UnhandledPromiseRejection: Invalid connector (raw stack trace). Root cause: unguarded SDK calls in the device-export path. After fix and a TagoIO-side configuration adjustment (sharing the connector across both profiles), re-ran clean: env lines, picker → devices, export tag, confirm yes, per-device progress, final [OK] ====Exporting ended with success====. New device created on target profile. The error path itself wasn't directly re-triggered live, but the code change is sound: any future Invalid-connector / Invalid-network on devices.create or devices.edit now routes through errorHandler. |
| T-6.backup.create | Confirmation, polling, [OK]; appears in backup list | ✅ | Spinner polled until the backup completed; final [OK] line with the new backup ID. |
| T-6.backup.list | Tabular output | ✅ | console.table with backups (IDs, status, created_at, size). Status on stderr, table on stdout. |
| T-6.backup.download | Pattern D; zip lands in profile-backup-download/; size matches | ✅ | Picker, password + 2FA, download URL, ora spinner, final [OK]. Resulting zip size matched the API-reported size. Pattern D (Readable.fromWeb + pipeline() + createWriteStream) verified end-to-end. |
| T-6.backup.restore -r | Resource-mode prompts work; per-resource progress | ✅ | After creating a backup on the throwaway profile, ran resource-mode restore against it. Full flow clean: picker, warning box (correctly on stderr after the refactor), password + OTP, download via ora + extract, backup-contents table, resource picker, confirm yes, per-resource progress, final summary [OK]. |
| T-6.backup.restore -i | Item-mode autocomplete works; final [OK] | ✅ | Same flow as -r but with item-mode autocomplete-multiselect. Picked a single item; restoration touched only that item, others untouched. Granular targeting confirmed. |
T-7 — Output prefix matrix
Reflects the clig.dev contract enforced by issue 5's refactor: stdout carries data only (tables, JSON, streamed log lines, scalar values); stderr carries every prefix line ([OK], [INFO], [ERROR]) and progress.
| Command | stdout (data) | stderr (status) | Verified in | Result |
|---|---|---|---|---|
| tagoio init | (none) | [OK], [INFO], prompts | T-2.init | ✅ |
| tagoio login | (none) | [OK], [INFO], prompts | T-2.login.A/B/C | ✅ |
| tagoio list-env | table | [INFO] | T-2.set-env, T-2.stream-routing | ✅ |
| tagoio analysis-deploy | builder bundle output passes through | [OK], [INFO], [ERROR] | T-3.deploy.* | ✅ |
| tagoio analysis-trigger | (none) | [OK], [INFO] | T-3.trigger | ✅ |
| tagoio analysis-console | live SSE log lines | [OK], [INFO] | T-3.console | ✅ |
| tagoio analysis-duplicate | (none) | [OK], [ERROR] (neg) | T-3.duplicate, T-3.duplicate (neg) | ✅ |
| tagoio analysis-mode | (none) | [OK], [INFO] | T-3.mode | ✅ |
| tagoio device-list | table or JSON (--json) | [INFO], [OK] N devices found. | T-4.list | ✅ |
| tagoio device-info | table or JSON (--json) | [INFO] device-found line | T-4.info | ✅ |
| tagoio data (read) | table / JSON | [INFO], [OK] N data found. | T-4.data.read | ✅ |
| tagoio data -p (post) | (none) | [OK] 1 Data Added | T-4.data.post | ✅ |
| tagoio device-network | (none) | [OK] with device=, serial=, network=, connector= | T-4.device-network | ✅ |
| tagoio device-type | (none) | [OK] with device=, type=, chunk_period=, chunk_retention=; [ERROR] on neg | T-4.device-type, T-4.device-type (neg) | ✅ |
| tagoio device-backup | (none; writes file) | [INFO], per-month [OK], final [OK]; [ERROR] on neg | T-4.device-backup, T-4.device-backup (neg) | ✅ |
| tagoio device-copy | (none) | [INFO], prompts | T-4.device-copy | ✅ |
| tagoio device-inspector | live SSE traffic lines | [OK], [INFO] | T-4.device-inspector | ✅ |
| tagoio copy-tab | (none) | [OK] widgets=N, [ERROR] on zero-copy | T-5.copy-tab | ✅ |
| tagoio export | (none) | [INFO] per-step, [OK] summary, [ERROR] on per-device fail | T-6.export.run | ✅ |
| tagoio backup create | (none) | [INFO], polling status, [OK] Backup ID: | T-6.backup.create | ✅ |
| tagoio backup list | table | [INFO] | T-6.backup.list | ✅ |
| tagoio backup download | (none; writes zip) | [OK], [INFO], ora spinners | T-6.backup.download | ✅ |
| tagoio backup restore (-r / -i) | "Backup Contents" table (informational) | [OK], [INFO], warning box | T-6.backup.restore -r, -i | ✅ |
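A minimal sketch of the split this matrix verifies, assuming the helper names from messages.ts described in the report (the bodies here are illustrative, not the real implementations):

```typescript
// Status lines always go to stderr so pipes stay clean.
const infoMSG = (msg: string): void => {
  process.stderr.write(`[INFO] ${msg}\n`);
};
const successMSG = (msg: string): void => {
  process.stderr.write(`[OK] ${msg}\n`);
};

// Only data (tables, JSON, streamed log lines, scalar values) goes to stdout.
const emitData = (payload: unknown): void => {
  process.stdout.write(`${JSON.stringify(payload)}\n`);
};
```

With this routing, a pipeline such as `tagoio device-list --json | jq .` sees nothing but the JSON payload on stdin while the operator still sees status lines on the terminal.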
T-8 — Regression guardrails (axios → fetch parity)
| Pattern | Verified by | Result | Notes |
|---|---|---|---|
| A — JSON GET | T-4.info, T-6.backup.list | ✅ | Parseable payloads via fetch + .json() in both commands. |
| B — JSON POST | T-4.data.post, T-4.device-type | ✅ | Round-trip data post; bucket-type conversion (both directions). |
| C — ArrayBuffer GET | T-3.duplicate | ✅ | Source script fetched as arrayBuffer(), gunzipped, re-uploaded as a new analysis. |
| D — Stream GET | T-6.backup.download | ✅ | Backup zip downloaded via Readable.fromWeb(response.body) + pipeline() + createWriteStream; size matched the API-reported size. |
| structuredClone parity | T-6.export.run | ✅ | Export flow's deep-clone path (post-lodash removal) ran cleanly. |
| ESM smoke | T-3.run-analysis | ✅ | Process started clean under tsx loader, no ERR_REQUIRE_ESM / ERR_MODULE_NOT_FOUND / loader errors. |
| oxfmt no surprise drift | T-1.format-check | ✅ | Reformatting intentionally deferred to a post-merge commit per PR description. |
T-9 — Spec compliance (originating issue)
| Issue requirement | Verified by | Pass | Notes |
|---|---|---|---|
| Node 24, OXC stack, no Biome | T-1.versions, T-1.dependency-spot-check | ✅ | Node v24.14.1; oxlint + oxfmt present; no @biomejs/biome in devDeps. |
| Packages updated | T-1.dependency-spot-check | ✅ | ora 9.x, typescript 6.x, @types/node 25.x, commander 14.x, tsx 4.x, vitest 4.x. |
| ESM migration (must-have) | T-3.run-analysis, T-1.dependency-spot-check | ✅ | tagoio analysis-run starts clean under tsx loader. package.json declares "type": "module", engines.node >= 24.0.0. |
| stdout describes changes (change-network must emit serial) | T-4.device-network | ✅ | Surfaced issue 9 (the underlying SDK call wasn't requesting the serial field). After fix: full key=value output line including serial=, and recreated tokens preserve their serials. The headline requirement is met both in format and in data preservation. Same key=value pattern verified on device-type, analysis-duplicate, and data -p. |
| Graceful, actionable errors | T-3.duplicate (neg), T-4.device-network (neg), T-4.device-type (neg), T-4.device-backup (neg), T-3.deploy.interactive (no builder) | ✅ | Surfaced and fixed five unhandled-rejection / silent-failure bugs. After fixes, every error path goes through errorHandler: clean [ERROR] <actionable-message> on stderr, exit 1, no Node stack trace. |
| Test coverage ~80% via Vitest | T-1.coverage | ✅ | 86.91% lines, 80.56% branches overall (target 80%). |
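The key=value contract checked above can be sketched as a tiny formatter. `okLine` is a hypothetical helper invented for this example; the real commands may build the line inline:

```typescript
// Hypothetical formatter for the structured-output contract.
function okLine(action: string, fields: Record<string, string | number>): string {
  const pairs = Object.entries(fields)
    .map(([key, value]) => `${key}=${value}`)
    .join(" ");
  return `[OK] ${action} ${pairs}`;
}

// What a change-network success line might look like (values illustrative):
// okLine("Network changed.", { device: "dev-1", serial: "A1B2C3", network: "net-1" })
```

A single machine-parseable line per mutation keeps the output greppable without forcing JSON on interactive users.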
Issues found during testing
| # | Surfaced in | File:Line | Severity | Description | Repro | Status |
|---|---|---|---|---|---|---|
| 1 | T-3.deploy.interactive | src/commands/analysis/deploy.ts | HIGH | execSync for the analysis builder (and the deno bundler) was not wrapped in try/catch. When the binary was missing the call threw and the error escaped to Node's default handler — raw stack trace, no [ERROR] prefix, no install hint. Violated the PR's "all errors converge on errorHandler" claim. Fix: wrap execSync in try/catch; ENOENT/exit-127 routes to errorHandler with a tool-specific install hint; other failures route to a generic build-failed message. Also routed three console.log deploy banners through infoMSG. | tagoio deploy on a system without the analysis builder installed | fixed |
| 2 | T-3.run-analysis | src/commands/analysis/run-analysis.ts | LOW | Cosmetic typo in the success template literal (a stray closing brace) plus an empty bracket pair when the analysis had no token. Fix: corrected the stray brace; conditionally render the token suffix only when the token is non-empty. | tagoio analysis-run <name> | fixed |
| 3 | T-3.mode | src/commands/analysis/analysis-set-mode.ts | MED | The -m / --mode flag was silently ignored: the assignment read options.filterMode (the filter flag) instead of options.mode, so users were always prompted to pick a mode regardless of the flag. Fix: read options.mode. The filter flag continues to filter which analyses are shown. | tagoio analysis-mode <name> -m external (pre-fix prompts; post-fix runs through) | fixed |
| 4 | T-4.list | src/commands/devices/device-list.ts, src/commands/devices/index.ts | MED | tagoio device-list --json produced output that wasn't valid JSON for three reasons: (a) the env banner was emitted on stdout before the data; (b) the data was rendered with console.dir (JS-literal syntax with single quotes and unquoted keys), not JSON.stringify; (c) --json was declared in commander with default true on device-info, device-list, and data, so bare invocations always hit the json branch — console.table was effectively dead code. Fix: emit real JSON via JSON.stringify (compact for --json, pretty for --stringify); restore the bare-table path; remove the bogus default: true. The banner-on-stdout part was later subsumed by issue 5's broader refactor. | `tagoio device-list --json \|`; `tagoio device-list` (no flag) | fixed |
| 5 | architectural — surfaced during T-4.list | src/lib/messages.ts, src/lib/config-file.ts, src/lib/display-warning.ts, src/lib/notify-update.ts, src/commands/profile/backup/**, src/commands/profile/export/services/widgets-export.ts | MED | The PR's stdout/stderr split sent [OK] and [INFO] to stdout, breaking pipes. Diverges from clig.dev's "only data on stdout, everything else on stderr" rule. Fix: route infoMSG / successMSG / errorHandler and displayWarning to stderr via process.stderr.write. Sweep blank-line spacers and stray console.log status calls (the update banner, the export queue error) to stderr. Streaming live output (analysis console, device live inspector) intentionally remains on stdout because that is the data. Unit tests updated. | npm test; `tagoio device-list \|` (no banner pollution after fix) | fixed |
| 6 | T-4.info | src/commands/devices/device-info.ts, src/commands/devices/data-get.ts | MED | Same JSON-output bug as issue 4 in two more commands. --json and --stringify were calling console.dir which produces JS-literal syntax, not parseable JSON. Fix: emit real JSON via process.stdout.write(JSON.stringify(...)) — compact for --json, pretty for --stringify. Unit tests updated to assert the bytes actually parse. | `tagoio device-info --json \|` | fixed |
| 7 | T-4.device-type (neg) | src/commands/devices/change-bucket-type.ts | HIGH | tagoio device-type <invalid-id> produced an unhandled-rejection stack trace. The upstream SDK lookup for the bucket info was not guarded with .catch(), so any rejection escaped to Node's default handler. Fix: wrap the call in .catch and route through errorHandler with an actionable message. | tagoio device-type <invalid-id> | fixed |
| 8 | T-4.device-backup --local | src/commands/devices/device-bkp.ts | LOW | Two cosmetic issues in the local-backup output: (a) the success line referenced the device name but the file was named after the device id — misleading; (b) the JSON file had no trailing newline (POSIX text-file convention). Fix: show the real path in the success line (with the human-friendly name in parens); append a trailing newline to the JSON output. | tagoio device-backup <id> --local | fixed |
| 9 | T-4.device-network | src/commands/devices/change-network.ts | HIGH | change-network was silently destroying device serial numbers on every run. The SDK token-list call wasn't requesting the serial field; the recreate loop then read undefined for every serial and created tokens without them. The PR's own unit tests mocked tokenList to return tokens with the field populated, so the test mocks bypassed the bug. This directly defeated the originating issue's headline requirement. Fix: add an explicit fields filter on the SDK call so the serial field is returned. | Add a token with a serial number to a test device; run tagoio device-network <id> -n <other> -c <other>; pre-fix: serial silently dropped from output and from recreated tokens. Post-fix: serial preserved in both. | fixed |
| 10 | T-5.copy-tab | src/commands/dashboard/copy-tab.ts | HIGH | tagoio copy-tab reported [OK] Dashboard tab copied. even when zero widgets were actually copied. Probed the dashboard via the SDK after a run: the target tab still had 0 widgets. The arrangement entries on this dashboard had tab: null rather than the tab key the code filtered on, so the filter returned an empty list and the copy loop was a no-op — but the success message printed regardless. Immediate fix: track the copied count and route through errorHandler if it's zero, so the false-success becomes a clear [ERROR]. The success line also reports the count as widgets=N. Follow-up needed: handle the modern arrangement model where tab can be null (likely meaning "first/default tab" or unassigned). Requires SDK/backend domain knowledge. | tagoio copy-tab <dashboard-id> against a dashboard whose arrangement entries have tab: null | fixed with follow-up |
| 11 | T-6.export.run | src/commands/profile/export/services/devices-export.ts | HIGH | tagoio export produced an unhandled-rejection stack trace when the target profile lacked the source's connector or network. Two unguarded SDK calls (devices.create and devices.edit) escaped to Node's default handler. Fix: wrap both calls in .catch and route through errorHandler with a message that names the device and surfaces the SDK reason. | tagoio export --from <a> --to <b> -e devices where the target lacks the source's connector or network | fixed |
| 12 | T-4.device-copy | src/commands/devices/copy-data.ts | LOW | The [INFO] Copying data from X to Y... line printed before the confirm prompt, implying the action had already started before the user agreed. Misleading sequencing. Fix: carry the source/target context inside the confirm prompt itself (Copy data from X to Y?), and only emit the [INFO] Copying... line after the user confirms — so announcement follows agreement. | tagoio device-copy --from <src> --to <tgt> | fixed |
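Most of the HIGH entries above share one fix shape. A minimal sketch, assuming the errorHandler contract described in this report ([ERROR] on stderr, exit 1); the SDK call shape and messages are illustrative, not the real code:

```typescript
// Assumed errorHandler contract from the report.
function errorHandler(message: string): never {
  process.stderr.write(`[ERROR] ${message}\n`);
  process.exit(1);
}

// Before (the anti-pattern behind issues 1, 7, and 11): an unguarded await
// lets any rejection escape as an UnhandledPromiseRejection stack trace.
//   const info = await account.buckets.info(bucketId);

// After: the rejection is routed through errorHandler with an actionable message.
async function getBucketInfo(
  account: { buckets: { info(id: string): Promise<unknown> } },
  bucketId: string,
): Promise<unknown> {
  return account.buckets.info(bucketId).catch((error: unknown) => {
    const reason = error instanceof Error ? error.message : String(error);
    return errorHandler(`Could not load bucket ${bucketId}: ${reason}`);
  });
}
```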
Recommendation to project lead
The PR delivers on every stated goal — Node 24 + OXC stack, ESM migration, axios → fetch (all four patterns A/B/C/D verified live), package upgrades, 86.91% line coverage, error-handler unification, output prefix scheme — but live testing surfaced 12 distinct issues that the unit-test suite did not catch. Most stemmed from the same anti-pattern (SDK or execSync calls awaited without .catch(errorHandler), escaping as raw unhandled rejections). One was a silent data-loss bug that defeated the originating issue's headline requirement. One was a silent false-success in copy-tab. One was an architectural mismatch with clig.dev. All 12 were fixed in this branch during testing; 443/443 unit tests still pass and the binary was rebuilt and re-verified after every change.
Recommendation: merge after the team reviews and accepts the in-session fixes. All fixes are on the same branch (feat/cli-refactoring-deps) and contained to the files listed in the issues table. The deeper data-model fix referenced in issue 10 (handling arrangement[i].tab === null) and a final live run of change-network against any LoRaWAN device with a serial-bearing token should land before the next release.
- All sections T-1 through T-9 executed
- Every failure has an entry in "Issues found"
Summary
Core refactoring across 5 phases: remove legacy tooling (SWC, Biome), migrate to ESM, centralize errors and output, bump all deps to latest stable, and drop `axios` and `lodash` in favor of native platform APIs. A cascading PR with the expanded unit-test suite (71 new test files) will follow on top of this branch.
Phase 1 — Tooling
- Adds `oxfmt` as the project formatter, replacing Biome's formatting role. Config in `.oxfmtrc.json` mirrors the old Biome settings (`useTabs: false`, `tabWidth: 2`, `endOfLine: "lf"`, `printWidth: 160`). Adds `format` and `format:fix` npm scripts.
- Does not bulk-reformat existing files — that lands in a separate one-shot commit after merge so the churn stays isolated (and can be added to `.git-blame-ignore-revs`).
- `.vscode/settings.json` gives every contributor format-on-save through the `oxc.oxc-vscode` extension (scoped to TypeScript).

Phase 2 — ESM migration

- `"type": "module"` + `engines.node >= 24.0.0`.
- `.js` extensions on every local import (~200 imports across 88 files).
- `node:` protocol for built-in modules.
- `import.meta.dirname` replaces `__dirname`.

Phase 3 — Errors and logs

- All error paths converge on `errorHandler` (exit 1, `[ERROR]` on stderr).
- Redundant `process.exit()` calls next to `errorHandler` removed.
- `successMSG` prefix standardized to `[OK]`; logs routed through `messages.ts`.

Phase 4 — Output standardization

- Structured `key=value` output on mutating commands (change-network, change-bucket-type, duplicate-analysis, analysis-set-mode, copy-tab, deploy).
- Canonical prefixes `[OK]`, `[INFO]`, `[ERROR]`.

Phase 5 — Dependencies

- `ora` 5 → 9 (CJS → ESM): 13 files updated for the removed `ora.Ora` namespace (now imported as `type Ora`).
- `typescript` 5.9 → 6.0: 4 files updated for stricter `unknown` catch variables and tighter `AnalysisListItem` narrowing.
- `@types/node` 24 → 25; minor bumps for `dotenv`, `vitest`, `oxlint`, `commander`, `@types/lodash`.
- `axios` replaced with native `fetch` in 10 production files across 4 patterns: JSON GET, JSON POST, ArrayBuffer GET (`Buffer.from(await response.arrayBuffer())`), stream GET (`Readable.fromWeb(response.body)` piped to `createWriteStream`). `axios` removed from dependencies.
- `lodash` → `structuredClone` (native); drop `@types/lodash` and the redundant `@types/eventsource` (eventsource v5 ships its own types).
- Keep `async`: 14 files use `queue`; reimplementing it in-house would add LOC with no real benefit for a CLI.

Test infrastructure

`src/test-utils/` ships in this PR (Proxy-based Account mock, env-config factory, output capture, prompt reset, fetch mock helper) because 4 existing test files (run-analysis, data-get, collect-ids, run-buttons-export) already depend on it after being updated for ESM + the SDK/deps changes. The full expanded test suite lands in the cascading PR.

Result

- `@tago-io/sdk` → `qs`, not fixable here.
- `oxlint`: 0 warnings / 0 errors on 170 files.
- `oxfmt --check`: reports 18 files that need reformatting (to be applied in the post-merge formatting commit).
- `npm test`: 35/35 passing.
- `npm run build` → `node ./build/index.js --version` → 3.2.0.

Test plan

- `npm run linter`, `npm test`, `npm run build`.
- `node ./build/index.js --version` returns the expected version.
- Run `tagoio set-env <name>` and confirm `[OK]`/`[INFO]`/`[ERROR]` prefixes appear on the correct streams (stdout vs stderr).
- `tagoio device-info <id>` or backup list.
- `tagoio change-bucket-type`.
- `tagoio analysis duplicate <id>`.
- `tagoio profile backup download`.
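The Proxy-based Account mock mentioned under Test infrastructure can be sketched roughly as follows. This is an assumption modeled on the description only; the real helper in `src/test-utils/` may differ in shape and naming:

```typescript
// Hypothetical sketch: any namespace/method access on the mock resolves to a
// stub that returns an empty object, unless an override is provided.
function makeAccountMock(overrides: Record<string, unknown> = {}) {
  return new Proxy({}, {
    get(_target, namespace: string) {
      if (namespace in overrides) return overrides[namespace];
      // Unspecified namespaces yield a proxy whose every method resolves to {}.
      return new Proxy({}, {
        get: () => async () => ({}),
      });
    },
  });
}
```

The appeal of the Proxy approach is that tests only spell out the SDK calls they care about while everything else silently no-ops.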