diff --git a/README.md b/README.md index 05f77de1..77f63144 100644 --- a/README.md +++ b/README.md @@ -472,15 +472,26 @@ brew uninstall rtk # If installed via Homebrew ## Privacy & Telemetry -RTK collects **anonymous, aggregate usage metrics** once per day, **enabled by default**. This helps prioritize development. See opt-out options below. - -**What is collected:** -- Device hash (salted SHA-256 — per-user random salt stored locally, not reversible) -- RTK version, OS, architecture -- Command count (last 24h) and top command names (e.g. "git", "cargo" — no arguments, no file paths) -- Token savings percentage - -**What is NOT collected:** source code, file paths, command arguments, secrets, environment variables, or any personally identifiable information. +RTK collects **anonymous, aggregate usage metrics** once per day, **enabled by default**. This data helps us build a better product: identifying which commands need filters, which filters need improvement, and how much value RTK delivers. For the full list of fields, data handling, and contributor guidelines, see **[docs/TELEMETRY.md](docs/TELEMETRY.md)**. + +**What is collected and why:** + +| Category | Data | Why | +|----------|------|-----| +| Identity | Salted device hash (SHA-256, not reversible) | Count unique installations without tracking individuals | +| Environment | RTK version, OS, architecture, install method | Know which platforms to support and test | +| Usage volume | Command count (24h), total commands, tokens saved (24h/30d/total) | Measure adoption and value delivered | +| Quality | Top 5 passthrough commands (0% savings), parse failure count, commands with <30% savings | Identify missing filters and weak ones to improve | +| Ecosystem | Command category distribution (e.g. 
git 45%, cargo 20%, js 15%) | Prioritize filter development for popular ecosystems | +| Retention | Days since first use, active days in last 30 | Understand engagement and detect churn | +| Adoption | AI agent hook type (claude/gemini/codex), custom TOML filter count | Track integration coverage and DSL adoption | +| Configuration | Whether config.toml exists, number of excluded commands, project count | Understand user maturity and customization patterns | +| Features | Usage counts for meta-commands (gain, discover, proxy, verify) | Know which RTK features are valued vs unused | +| Economics | Estimated USD savings (based on API token pricing) | Quantify the value RTK provides to users | + +All data is **aggregate counts or anonymized command names** (first 3 words, no arguments). Top commands report only tool names (e.g. "git", "cargo"), never full command lines. + +**What is NOT collected:** source code, file paths, command arguments, secrets, environment variables, personal data, or repository contents. **Opt-out** (any of these): ```bash diff --git a/docs/TELEMETRY.md b/docs/TELEMETRY.md new file mode 100644 index 00000000..c364349a --- /dev/null +++ b/docs/TELEMETRY.md @@ -0,0 +1,154 @@ +# Telemetry + +RTK collects anonymous, aggregate usage metrics once per day to help improve the product. Telemetry is **enabled by default** and can be disabled at any time. + +## Why we collect telemetry + +RTK supports 100+ commands across 15+ ecosystems. Without telemetry, we have no way to know: + +- Which commands are used most and need the best filters +- Which filters are underperforming and need improvement +- Which ecosystems to prioritize for new filter development +- How much value RTK delivers to users (token savings in $ terms) +- Whether users stay engaged over time or churn after trying RTK + +This data directly drives our roadmap. 
For example, if telemetry shows that 40% of users run Python commands but only 10% of our filters cover Python, we know where to invest next. + +## How it works + +1. **Once per day** (23-hour interval), RTK sends a single HTTPS POST to our telemetry endpoint +2. The ping runs in a **background thread** and never blocks the CLI (2-second timeout) +3. A marker file prevents duplicate pings within the interval +4. If the server is unreachable, the ping is silently dropped — no retries, no queue + +**Source code**: [`src/core/telemetry.rs`](../src/core/telemetry.rs) + +## What is collected + +### Identity (anonymous) + +| Field | Example | Purpose | +|-------|---------|---------| +| `device_hash` | `a3f8c9...` (64 hex chars) | Count unique installations. Salted SHA-256 of hostname + username with a per-device random salt stored locally (`~/.local/share/rtk/.device_salt`). Not reversible. | + +### Environment + +| Field | Example | Purpose | +|-------|---------|---------| +| `version` | `0.34.1` | Track adoption of new versions | +| `os` | `macos` | Know which platforms to support and test | +| `arch` | `aarch64` | Prioritize ARM vs x86 builds | +| `install_method` | `homebrew` | Understand distribution channels (homebrew/cargo/script/nix) | + +### Usage volume + +| Field | Example | Purpose | +|-------|---------|---------| +| `commands_24h` | `142` | Daily activity level | +| `commands_total` | `32888` | Lifetime usage — segment light vs heavy users | +| `top_commands` | `["git", "cargo", "ls"]` | Most popular tools (names only, max 5) | +| `tokens_saved_24h` | `450000` | Daily value delivered | +| `tokens_saved_total` | `96500000` | Lifetime value delivered | +| `savings_pct` | `72.5` | Overall effectiveness | + +### Quality (filter improvement) + +| Field | Example | Purpose | +|-------|---------|---------| +| `passthrough_top` | `["git tag:15", "npm ci:8"]` | Top 5 commands with 0% savings — these need filters | +| `parse_failures_24h` | `3` | Filter fragility — 
high count means filters are breaking | +| `low_savings_commands` | `["rtk docker ps:25%"]` | Commands averaging <30% savings — filters to improve | +| `avg_savings_per_command` | `68.5` | Unweighted average (vs global which is volume-biased) | + +### Ecosystem distribution + +| Field | Example | Purpose | +|-------|---------|---------| +| `ecosystem_mix` | `{"git": 45, "cargo": 20, "js": 15}` | Category percentages — where to invest filter development | + +### Retention (engagement) + +| Field | Example | Purpose | +|-------|---------|---------| +| `first_seen_days` | `45` | Installation age in days | +| `active_days_30d` | `22` | Days with at least 1 command in last 30 days — measures stickiness | + +### Economics + +| Field | Example | Purpose | +|-------|---------|---------| +| `tokens_saved_30d` | `12000000` | 30-day token savings for trend analysis | +| `estimated_savings_usd_30d` | `60.0` | Estimated dollar value saved (at ~$5/Mtok average API pricing) | + +### Adoption + +| Field | Example | Purpose | +|-------|---------|---------| +| `hook_type` | `claude` | Which AI agent hook is installed (claude/gemini/codex/cursor/none) | +| `custom_toml_filters` | `3` | Number of user-created TOML filter files — DSL adoption | + +### Configuration (user maturity) + +| Field | Example | Purpose | +|-------|---------|---------| +| `has_config_toml` | `true` | Whether user has customized RTK config | +| `exclude_commands_count` | `2` | Commands excluded from rewriting — high count may indicate frustration | +| `projects_count` | `5` | Distinct project paths — multi-project = power user | + +### Feature adoption + +| Field | Example | Purpose | +|-------|---------|---------| +| `meta_usage` | `{"gain": 5, "discover": 2}` | Which RTK features are actually used | + +## What is NOT collected + +- Source code or file contents +- Full command lines or arguments (only tool names like "git", "cargo") +- File paths or directory structures +- Secrets, API keys, or environment 
variable values +- Repository names or URLs +- Personally identifiable information +- IP addresses (not logged server-side) + +## Opt-out + +Telemetry can be disabled instantly with either method: + +```bash +# Environment variable (per-session or in shell profile) +export RTK_TELEMETRY_DISABLED=1 + +# Or permanently in config file +# ~/.config/rtk/config.toml +[telemetry] +enabled = false +``` + +When disabled, `rtk init` shows `[info] Anonymous telemetry is disabled`. No data is sent, no background thread is spawned, no network requests are made. + +## Data handling + +- Telemetry endpoint URL and auth token are injected at **compile time** via `option_env!()` — they are not in the source code +- The server is hosted on GCP Cloud Run with TLS +- Data is used exclusively for RTK product improvement +- No data is sold or shared with third parties +- Aggregate statistics may be published (e.g. "70% of RTK users are on macOS") + +## For contributors + +The telemetry implementation lives in `src/core/telemetry.rs`. Key design decisions: + +- **Fire-and-forget**: errors are silently ignored, never shown to users +- **Non-blocking**: runs in a `std::thread::spawn`, 2-second timeout +- **No async**: consistent with RTK's single-threaded design +- **Compile-time gating**: if `RTK_TELEMETRY_URL` is not set at build time, all telemetry code is dead — the binary makes zero network calls +- **23-hour interval**: prevents clock-drift accumulation that a strict 24h interval would cause + +When adding new fields: +1. Add the query method to `src/core/tracking.rs` +2. Add the field to `EnrichedStats` in `src/core/telemetry.rs` +3. Populate it in `get_enriched_stats()` +4. Add it to the JSON payload in `send_ping()` +5. Update this document and the README.md privacy table +6. 
Ensure the field contains only **aggregate counts or anonymized names** — no raw paths, arguments, or user data diff --git a/src/core/telemetry.rs b/src/core/telemetry.rs index bba19991..d4bfefb8 100644 --- a/src/core/telemetry.rs +++ b/src/core/telemetry.rs @@ -64,6 +64,7 @@ fn send_ping() -> Result<(), Box<dyn std::error::Error>> { // Get stats from tracking DB let (commands_24h, top_commands, savings_pct, tokens_saved_24h, tokens_saved_total) = get_stats(); + let enriched = get_enriched_stats(); let payload = serde_json::json!({ "device_hash": device_hash, @@ -76,6 +77,29 @@ fn send_ping() -> Result<(), Box<dyn std::error::Error>> { "savings_pct": savings_pct, "tokens_saved_24h": tokens_saved_24h, "tokens_saved_total": tokens_saved_total, + // Quality: identify gaps and weak filters + "passthrough_top": enriched.passthrough_top, + "parse_failures_24h": enriched.parse_failures_24h, + "low_savings_commands": enriched.low_savings_commands, + "avg_savings_per_command": enriched.avg_savings_per_command, + // Adoption: which tools and configs + "hook_type": enriched.hook_type, + "custom_toml_filters": enriched.custom_toml_filters, + // Retention: engagement signals + "first_seen_days": enriched.first_seen_days, + "active_days_30d": enriched.active_days_30d, + "commands_total": enriched.commands_total, + // Ecosystem: where to invest filters + "ecosystem_mix": enriched.ecosystem_mix, + // Economics: value delivered + "tokens_saved_30d": enriched.tokens_saved_30d, + "estimated_savings_usd_30d": enriched.estimated_savings_usd_30d, + // Configuration: user maturity + "has_config_toml": enriched.has_config_toml, + "exclude_commands_count": enriched.exclude_commands_count, + "projects_count": enriched.projects_count, + // Meta-commands: feature adoption + "meta_usage": enriched.meta_usage, }); let mut req = ureq::post(url).set("Content-Type", "application/json"); @@ -187,6 +211,206 @@ fn get_stats() -> (i64, Vec<String>, Option<f64>, i64, i64) { +struct EnrichedStats { + // Quality: identify gaps and weak filters + 
passthrough_top: Vec<String>, + parse_failures_24h: i64, + low_savings_commands: Vec<String>, + avg_savings_per_command: f64, + // Adoption: which tools and configs + hook_type: String, + custom_toml_filters: usize, + // Retention: engagement signals + first_seen_days: i64, + active_days_30d: i64, + commands_total: i64, + // Ecosystem: where to invest filters + ecosystem_mix: serde_json::Value, + // Economics: value delivered + tokens_saved_30d: i64, + estimated_savings_usd_30d: f64, + // Configuration: user maturity + has_config_toml: bool, + exclude_commands_count: usize, + projects_count: i64, + // Meta-commands: feature adoption + meta_usage: serde_json::Value, +} + +fn get_enriched_stats() -> EnrichedStats { + let defaults = || EnrichedStats { + passthrough_top: vec![], + parse_failures_24h: 0, + low_savings_commands: vec![], + avg_savings_per_command: 0.0, + hook_type: detect_hook_type(), + custom_toml_filters: count_custom_toml_filters(), + first_seen_days: 0, + active_days_30d: 0, + commands_total: 0, + ecosystem_mix: serde_json::json!({}), + tokens_saved_30d: 0, + estimated_savings_usd_30d: 0.0, + has_config_toml: detect_has_config(), + exclude_commands_count: count_exclude_commands(), + projects_count: 0, + meta_usage: serde_json::json!({}), + }; + + let tracker = match tracking::Tracker::new() { + Ok(t) => t, + Err(_) => return defaults(), + }; + + let since_24h = chrono::Utc::now() - chrono::Duration::hours(24); + + let passthrough_top = tracker + .top_passthrough(5) + .unwrap_or_default() + .into_iter() + .map(|(cmd, count)| format!("{}:{}", cmd, count)) + .collect(); + + let parse_failures_24h = tracker.parse_failures_since(since_24h).unwrap_or(0); + + let low_savings_commands = tracker + .low_savings_commands(5) + .unwrap_or_default() + .into_iter() + .map(|(cmd, pct)| format!("{}:{:.0}%", cmd, pct)) + .collect(); + + let avg_savings_per_command = tracker.avg_savings_per_command().unwrap_or(0.0); + + let first_seen_days = tracker.first_seen_days().unwrap_or(0); + 
let active_days_30d = tracker.active_days_30d().unwrap_or(0); + let commands_total = tracker.commands_total().unwrap_or(0); + + let ecosystem_mix = serde_json::Value::Object( + tracker + .ecosystem_mix() + .unwrap_or_default() + .into_iter() + .map(|(k, v)| (k, serde_json::json!(v))) + .collect(), + ); + + let tokens_saved_30d = tracker.tokens_saved_30d().unwrap_or(0); + // Estimate USD savings: Claude Sonnet input $3/Mtok, output $15/Mtok + // Weighted average ~$5/Mtok for typical input-heavy agent usage + let estimated_savings_usd_30d = tokens_saved_30d as f64 / 1_000_000.0 * 5.0; + + let projects_count = tracker.projects_count().unwrap_or(0); + + let meta_usage = build_meta_usage(&tracker); + + EnrichedStats { + passthrough_top, + parse_failures_24h, + low_savings_commands, + avg_savings_per_command, + hook_type: detect_hook_type(), + custom_toml_filters: count_custom_toml_filters(), + first_seen_days, + active_days_30d, + commands_total, + ecosystem_mix, + tokens_saved_30d, + estimated_savings_usd_30d, + projects_count, + has_config_toml: detect_has_config(), + exclude_commands_count: count_exclude_commands(), + meta_usage, + } +} + +/// Build meta-command usage counts (gain, discover, proxy, verify, learn). +fn build_meta_usage(tracker: &tracking::Tracker) -> serde_json::Value { + let meta_cmds = ["gain", "discover", "proxy", "verify", "learn", "init"]; + let top = tracker.top_commands(50).unwrap_or_default(); + let mut usage = serde_json::Map::new(); + for meta in &meta_cmds { + let count = top.iter().filter(|c| c == meta).count(); + if count > 0 { + usage.insert(meta.to_string(), serde_json::json!(count)); + } + } + serde_json::Value::Object(usage) +} + +/// Check if user has a config.toml file. +fn detect_has_config() -> bool { + dirs::config_dir() + .map(|d| d.join("rtk/config.toml").exists()) + .unwrap_or(false) +} + +/// Count commands in exclude_commands config. 
+fn count_exclude_commands() -> usize { + crate::core::config::Config::load() + .map(|c| c.hooks.exclude_commands.len()) + .unwrap_or(0) +} + +/// Detect which AI agent hook is installed. +fn detect_hook_type() -> String { + let home = match dirs::home_dir() { + Some(h) => h, + None => return "unknown".to_string(), + }; + + // Check in order of popularity + let checks = [ + (home.join(".claude/hooks/rtk-rewrite.sh"), "claude"), + (home.join(".claude/hooks/rtk-rewrite.json"), "claude"), + (home.join(".gemini/hooks/rtk-hook.sh"), "gemini"), + (home.join(".codex/AGENTS.md"), "codex"), + (home.join(".cursor/hooks/rtk-rewrite.json"), "cursor"), + ]; + + for (path, name) in &checks { + if path.exists() { + return name.to_string(); + } + } + + // Check project-level hooks + if let Ok(cwd) = std::env::current_dir() { + if cwd.join(".claude/hooks/rtk-rewrite.sh").exists() { + return "claude".to_string(); + } + } + + "none".to_string() +} + +/// Count user-defined TOML filter files (project-local + global). 
+fn count_custom_toml_filters() -> usize { + let mut count = 0; + + // Project-local: .rtk/filters/*.toml + if let Ok(cwd) = std::env::current_dir() { + if let Ok(entries) = std::fs::read_dir(cwd.join(".rtk/filters")) { + count += entries + .filter_map(|e| e.ok()) + .filter(|e| e.path().extension().is_some_and(|ext| ext == "toml")) + .count(); + } + } + + // Global: ~/.config/rtk/filters/*.toml + if let Some(config_dir) = dirs::config_dir() { + if let Ok(entries) = std::fs::read_dir(config_dir.join("rtk/filters")) { + count += entries + .filter_map(|e| e.ok()) + .filter(|e| e.path().extension().is_some_and(|ext| ext == "toml")) + .count(); + } + } + + count +} + fn detect_install_method() -> &'static str { let exe = match std::env::current_exe() { Ok(p) => p, @@ -336,4 +560,37 @@ mod tests { assert!((0.0..=100.0).contains(&p)); } } + + #[test] + fn test_enriched_stats_returns_valid_data() { + let stats = get_enriched_stats(); + assert!(stats.passthrough_top.len() <= 5); + assert!(stats.parse_failures_24h >= 0); + assert!(stats.low_savings_commands.len() <= 5); + assert!((0.0..=100.0).contains(&stats.avg_savings_per_command)); + assert!( + ["claude", "gemini", "codex", "cursor", "none", "unknown"] + .iter() + .any(|&h| stats.hook_type.starts_with(h)), + "Unexpected hook type: {}", + stats.hook_type + ); + } + + #[test] + fn test_detect_hook_type_returns_known() { + let ht = detect_hook_type(); + assert!( + ["claude", "gemini", "codex", "cursor", "none", "unknown"].contains(&ht.as_str()), + "Unexpected hook type: {}", + ht + ); + } + + #[test] + fn test_count_custom_toml_filters() { + // Should not panic even if directories don't exist + let count = count_custom_toml_filters(); + assert!(count < 10000); // sanity check + } } diff --git a/src/core/tracking.rs b/src/core/tracking.rs index d6a248af..ae3cfa59 100644 --- a/src/core/tracking.rs +++ b/src/core/tracking.rs @@ -956,6 +956,180 @@ impl Tracker { )?; Ok(saved) } + + /// Top N passthrough commands (0% savings) — 
commands missing a filter. + pub fn top_passthrough(&self, limit: usize) -> Result<Vec<(String, i64)>> { + let mut stmt = self.conn.prepare( + "SELECT original_cmd, COUNT(*) as cnt FROM commands + WHERE input_tokens = 0 AND output_tokens = 0 + GROUP BY original_cmd ORDER BY cnt DESC LIMIT ?1", + )?; + let rows = stmt.query_map(params![limit as i64], |row| { + let cmd: String = row.get(0)?; + let count: i64 = row.get(1)?; + let short = cmd.split_whitespace().take(3).collect::<Vec<_>>().join(" "); + Ok((short, count)) + })?; + Ok(rows.filter_map(|r| r.ok()).collect()) + } + + /// Count parse failures in the last 24 hours. + pub fn parse_failures_since(&self, since: chrono::DateTime<chrono::Utc>) -> Result<i64> { + let ts = since.format("%Y-%m-%dT%H:%M:%S").to_string(); + let count: i64 = self.conn.query_row( + "SELECT COUNT(*) FROM parse_failures WHERE timestamp >= ?1", + params![ts], + |row| row.get(0), + )?; + Ok(count) + } + + /// Count commands with low savings (<30%) — filters that need improvement. + pub fn low_savings_commands(&self, limit: usize) -> Result<Vec<(String, f64)>> { + let mut stmt = self.conn.prepare( + "SELECT rtk_cmd, AVG(savings_pct) as avg_sav FROM commands + WHERE input_tokens > 0 + GROUP BY rtk_cmd + HAVING avg_sav < 30.0 AND avg_sav > 0.0 + ORDER BY COUNT(*) DESC LIMIT ?1", + )?; + let rows = stmt.query_map(params![limit as i64], |row| { + let cmd: String = row.get(0)?; + let sav: f64 = row.get(1)?; + let short = cmd.split_whitespace().take(3).collect::<Vec<_>>().join(" "); + Ok((short, sav)) + })?; + Ok(rows.filter_map(|r| r.ok()).collect()) + } + + /// Average savings percentage per command (unweighted by volume). + pub fn avg_savings_per_command(&self) -> Result<f64> { + let avg: f64 = self.conn.query_row( + "SELECT COALESCE(AVG(savings_pct), 0.0) FROM commands WHERE input_tokens > 0", + [], + |row| row.get(0), + )?; + Ok(avg) + } + + /// Days since first recorded command (installation age). 
+ pub fn first_seen_days(&self) -> Result<i64> { + let oldest: Option<String> = self + .conn + .query_row("SELECT MIN(timestamp) FROM commands", [], |row| row.get(0)) + .unwrap_or(None); + match oldest { + Some(ts) => { + let first = chrono::NaiveDateTime::parse_from_str(&ts, "%Y-%m-%dT%H:%M:%S") + .or_else(|_| chrono::NaiveDateTime::parse_from_str(&ts, "%Y-%m-%d %H:%M:%S")) + .map(|dt| dt.and_utc()) + .unwrap_or_else(|_| chrono::Utc::now()); + let days = (chrono::Utc::now() - first).num_days(); + Ok(days.max(0)) + } + None => Ok(0), + } + } + + /// Number of distinct active days in the last 30 days. + pub fn active_days_30d(&self) -> Result<i64> { + let since = (chrono::Utc::now() - chrono::Duration::days(30)) + .format("%Y-%m-%dT%H:%M:%S") + .to_string(); + let count: i64 = self.conn.query_row( + "SELECT COUNT(DISTINCT DATE(timestamp)) FROM commands WHERE timestamp >= ?1", + params![since], + |row| row.get(0), + )?; + Ok(count) + } + + /// Total number of recorded commands. + pub fn commands_total(&self) -> Result<i64> { + let count: i64 = self + .conn + .query_row("SELECT COUNT(*) FROM commands", [], |row| row.get(0))?; + Ok(count) + } + + /// Ecosystem distribution as percentages (top categories by command prefix). 
+ pub fn ecosystem_mix(&self) -> Result<Vec<(String, f64)>> { + let total: f64 = self.conn.query_row( + "SELECT COUNT(*) FROM commands WHERE input_tokens > 0", + [], + |row| row.get(0), + )?; + if total == 0.0 { + return Ok(vec![]); + } + let mut stmt = self.conn.prepare( + "SELECT rtk_cmd, COUNT(*) as cnt FROM commands + WHERE input_tokens > 0 + GROUP BY rtk_cmd ORDER BY cnt DESC", + )?; + let mut categories: std::collections::HashMap<String, f64> = + std::collections::HashMap::new(); + let rows = stmt.query_map([], |row| { + let cmd: String = row.get(0)?; + let cnt: f64 = row.get(1)?; + Ok((cmd, cnt)) + })?; + for row in rows.flatten() { + let cat = categorize_command(&row.0); + *categories.entry(cat).or_default() += row.1; + } + let mut result: Vec<(String, f64)> = categories + .into_iter() + .map(|(cat, cnt)| (cat, (cnt / total * 100.0).round())) + .collect(); + result.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap_or(std::cmp::Ordering::Equal)); + result.truncate(8); + Ok(result) + } + + /// Tokens saved in the last 30 days. + pub fn tokens_saved_30d(&self) -> Result<i64> { + let since = (chrono::Utc::now() - chrono::Duration::days(30)) + .format("%Y-%m-%dT%H:%M:%S") + .to_string(); + let saved: i64 = self.conn.query_row( + "SELECT COALESCE(SUM(saved_tokens), 0) FROM commands WHERE timestamp >= ?1", + params![since], + |row| row.get(0), + )?; + Ok(saved) + } + + /// Number of distinct project paths. + pub fn projects_count(&self) -> Result<i64> { + let count: i64 = self.conn.query_row( + "SELECT COUNT(DISTINCT project_path) FROM commands WHERE project_path != ''", + [], + |row| row.get(0), + )?; + Ok(count) + } +} + +/// Map an rtk_cmd to an ecosystem category for telemetry. 
+fn categorize_command(rtk_cmd: &str) -> String { + let parts: Vec<&str> = rtk_cmd.split_whitespace().collect(); + let tool = parts.get(1).copied().unwrap_or("other"); + match tool { + "git" | "gh" | "gt" => "git", + "cargo" => "cargo", + "npm" | "npx" | "pnpm" | "vitest" | "tsc" | "lint" | "prettier" | "next" | "playwright" + | "prisma" => "js", + "pytest" | "ruff" | "mypy" | "pip" => "python", + "go" | "golangci-lint" => "go", + "docker" | "kubectl" => "cloud", + "rspec" | "rubocop" | "rake" => "ruby", + "dotnet" => "dotnet", + "ls" | "tree" | "grep" | "find" | "wc" | "read" | "env" | "json" | "log" | "smart" + | "diff" | "deps" | "summary" | "format" => "system", + _ => "other", + } + .to_string() } fn get_db_path() -> Result {
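The `categorize_command` mapping added above is easy to sanity-check in isolation. Below is a minimal standalone sketch: the function body is copied from this diff, and only the `main` with a few illustrative inputs is new. It relies on the convention, visible in the code above, that `parts[0]` is the `rtk` prefix and the tool name is the second token.

```rust
/// Standalone copy of categorize_command from this diff, for illustration only.
fn categorize_command(rtk_cmd: &str) -> String {
    let parts: Vec<&str> = rtk_cmd.split_whitespace().collect();
    // parts[0] is the "rtk" prefix; the tool name is the second token.
    let tool = parts.get(1).copied().unwrap_or("other");
    match tool {
        "git" | "gh" | "gt" => "git",
        "cargo" => "cargo",
        "npm" | "npx" | "pnpm" | "vitest" | "tsc" | "lint" | "prettier" | "next" | "playwright"
        | "prisma" => "js",
        "pytest" | "ruff" | "mypy" | "pip" => "python",
        "go" | "golangci-lint" => "go",
        "docker" | "kubectl" => "cloud",
        "rspec" | "rubocop" | "rake" => "ruby",
        "dotnet" => "dotnet",
        "ls" | "tree" | "grep" | "find" | "wc" | "read" | "env" | "json" | "log" | "smart"
        | "diff" | "deps" | "summary" | "format" => "system",
        _ => "other",
    }
    .to_string()
}

fn main() {
    // Only the token after "rtk" decides the category; arguments are ignored.
    assert_eq!(categorize_command("rtk git status --short"), "git");
    assert_eq!(categorize_command("rtk pytest -q tests/"), "python");
    // A bare "rtk" invocation or an unmapped tool falls back to "other".
    assert_eq!(categorize_command("rtk"), "other");
    assert_eq!(categorize_command("rtk terraform plan"), "other");
    println!("ok");
}
```

If useful, these assertions could be folded into the existing `#[cfg(test)]` module in `src/core/tracking.rs` as a regular unit test.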