
feat: claude code skill for automated i18n translations #11985

Merged

premiumjibles merged 11 commits into develop from agent-translations on Feb 25, 2026

Conversation


@premiumjibles premiumjibles commented Feb 23, 2026

Description

Adds a new Claude Code /translate skill that automatically translates new/changed English UI strings into all 9 supported non-English languages (de, es, fr, ja, pt, ru, tr, uk, zh) using a translate-review-refine pipeline.

Skill components:

  • /translate skill definition with full orchestration logic
  • /benchmark-translate skill for quality regression testing
  • Per-locale translation rules and terminology glossary
  • Validation scripts for placeholder integrity, glossary compliance, length ratios, and script detection
  • Merge/diff tooling for incremental translation updates (merge.js only adds new keys, never overwrites existing translations)
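
The non-destructive merge behavior called out above can be sketched as follows. This is an illustrative sketch only, assuming nested JSON translation objects; the function name mergeNewKeys is hypothetical and not taken from the actual merge.js:

```javascript
// Hypothetical sketch of "only add new keys, never overwrite" merge
// semantics, as described for merge.js. Not the actual implementation.
function mergeNewKeys(existing, incoming) {
  const merged = { ...existing };
  for (const [key, value] of Object.entries(incoming)) {
    if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
      // Recurse into nested translation namespaces.
      const current =
        merged[key] !== null && typeof merged[key] === 'object' ? merged[key] : {};
      merged[key] = mergeNewKeys(current, value);
    } else if (!(key in merged)) {
      // Add only keys that are missing; existing translations stay untouched.
      merged[key] = value;
    }
  }
  return merged;
}
```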

Initial translation run included: 11 missing strings translated across all 9 languages — RFOX FAQ entries, action center failure messages, and yield cooldown notices.

Locale guide hardening:

  • French elision rule for dynamic placeholders (l'%{placeholder} not le %{placeholder})
  • Ukrainian preposition alternation rules — use "у" (not "в") and "з" (not "із"/"зі") before %{placeholder} since runtime values are unknown at translation time
  • Turkish vowel harmony rules — avoid attaching case suffixes directly to %{placeholder}, restructure with postpositions (için, üzerinde) since crypto symbols span all 4 vowel classes
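
To illustrate the placeholder-integrity validation mentioned above, here is a minimal sketch of comparing %{...} placeholders between the English source and a translation (hypothetical helper names; not copied from validate.js):

```javascript
// Sketch of a placeholder-integrity check: the translation must contain
// exactly the same %{name} placeholders as the English source.
// Illustrative only; the real validate.js may differ.
function extractPlaceholders(text) {
  return new Set([...text.matchAll(/%\{(\w+)\}/g)].map(m => m[1]));
}

function placeholdersMatch(source, target) {
  const src = extractPlaceholders(source);
  const tgt = extractPlaceholders(target);
  if (src.size !== tgt.size) return false;
  for (const name of src) {
    if (!tgt.has(name)) return false;
  }
  return true;
}
```

Placeholder order may legitimately differ between languages, which is why a set comparison rather than positional comparison is the natural check here.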

Issue (if applicable)

closes #11935
https://linear.app/shapeshift-dao/issue/SS-5524/claude-code-skill-automated-i18n-translations

Risk

Low risk. This PR adds new skill files under .claude/skills/ and translation infrastructure. The only production-impacting changes are 11 new translation keys added to each non-English main.json file and the new glossary.json. No existing translations are modified. English source strings are unchanged. The translation fallback mechanism means any issues would simply show English text.

No protocols, transaction types, wallets, or contract interactions are affected.

Testing

Engineering

  1. Skill invocation: Run /translate in Claude Code — it should detect no new untranslated strings and exit early with "Translations are up to date"
  2. Adding new strings: Add a test key to en/main.json, run /translate, verify it appears translated in all 9 locale files
  3. Validation: Run node .claude/skills/translate/scripts/validate-file.js <locale> for any locale to verify file integrity
  4. Benchmark: Run /benchmark-translate to measure translation quality against stratified test keys

Operations

  • 🏁 My feature is behind a flag and doesn't require operations testing (yet)

Translation changes are immediately live but non-breaking — the app's fallback mechanism means missing or incorrect translations simply show English text.

Screenshots (if applicable)

N/A — no UI changes, developer tooling only.

Summary by CodeRabbit

  • New Features

    • Expanded translations across 9 locales with richer staking/unstaking/yield copy, cooldown notices, and explicit failure-state messages for deposits, withdrawals, and claims.
    • Added a translation glossary to enforce consistent DeFi terminology.
    • Added translation tooling to support key selection, translation/merge/restore workflows, validation, and report compilation for benchmarks.
  • Documentation

    • New locale-specific style guides per language (de, es, fr, ja, pt, ru, tr, uk, zh).
    • Added end-to-end translation benchmark and QA workflow documentation.


coderabbitai bot commented Feb 23, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds a translation management suite: benchmark docs and scripts, a translate skill with many helper scripts and locale guides, a DeFi glossary and marker SHA, small git config tweaks, and expanded localized content across nine locales for staking/yield flows.

Changes

  • PR Tracking Metadata (.beads/pr-context.jsonl): Replaced in-progress PR-tracking entries with closed/finished regeneration/merge records documenting regen, merge resolutions, and asset index updates.
  • Benchmarking Skill & Scripts (.claude/skills/benchmark-translate/SKILL.md, .claude/skills/benchmark-translate/scripts/*): Adds benchmark documentation and tooling for translation QA: key selection, setup, restore, and report compilation (select-keys.js, setup.js, restore.js, compile-report.js).
  • Translate Skill Documentation (.claude/skills/translate/SKILL.md): New, detailed translate pipeline spec covering diff detection, batching, multi-agent translate/validate/review/refine, glossary rules, and merge semantics.
  • Translate Scripts & Utils (.claude/skills/translate/scripts/*): Adds many CLI scripts and a utils module for diffing, preparing bundles, term-context extraction, validation, file validation, merging, missing-key detection, and locale prep (diff.js, missing-keys.js, prepare-locale.js, term-context.js, validate.js, validate-file.js, merge.js, script-utils.js).
  • Locale Guides (.claude/skills/translate/locales/*): Adds per-locale guidance files for de, es, fr, ja, pt, ru, tr, uk, zh covering register, terminology, placeholder rules, and style.
  • Glossary & Config (src/assets/translations/glossary.json, src/assets/translations/.last-translation-sha, .gitattributes, .gitignore): Adds a DeFi glossary JSON resource, a marker SHA for diffing, a beads merge attribute, and a gitignore rule for benchmark outputs.
  • Localized Content Updates (src/assets/translations/*/main.json for de, es, fr, ja, pt, ru, tr, uk, zh): Expands translations: adds bodies for why/stake/unstake/connect, yield lifecycle keys (unstakeAvailableIn, unstakeReady, unstakeClaimed), cooldownNotice, deposit/withdrawal/claim .failed variants, and structural normalizations (explicit empty objects).

Sequence Diagram(s)

sequenceDiagram
    rect rgba(200,230,255,0.5)
    participant Dev as Developer/Operator
    participant Repo as Git Repo
    participant Diff as diff.js
    end
    rect rgba(200,255,200,0.5)
    participant Select as select-keys.js
    participant Prepare as prepare-locale.js
    participant Agent as Translation Agents
    participant Validate as validate.js
    participant Merge as merge.js
    end

    Dev->>Diff: run (reads `.last-translation-sha`)
    Diff->>Repo: git show old `en/main.json`
    Repo-->>Diff: returns old `en/main.json`
    Diff-->>Dev: list of changed/new key paths

    Dev->>Select: build `testKeys.json`
    Select->>Repo: read `en` & locales, write test keys

    Dev->>Prepare: create per-locale bundles (glossary, term-context, batches)
    Prepare->>Agent: send bundles per-locale

    Agent->>Validate: submit translations for validation
    Validate-->>Agent: rejected / flagged / passed feedback
    Agent->>Merge: pass approved translations
    Merge->>Repo: backup & write locale `main.json`, update ordering
    Merge-->>Repo: optionally update `.last-translation-sha`
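
The diff step in the diagram boils down to flattening the old and new en/main.json and comparing values per key path. A minimal illustration (hypothetical helper names; the real diff.js additionally reads .last-translation-sha and uses git show to obtain the old file):

```javascript
// Flatten nested translation JSON into dot-separated key paths.
// Illustrative sketch; not the actual diff.js.
function flattenJson(obj, prefix = '') {
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (value !== null && typeof value === 'object') {
      Object.assign(out, flattenJson(value, path));
    } else {
      out[path] = value;
    }
  }
  return out;
}

// Key paths that are new, or whose English value changed since the last run.
function changedKeyPaths(oldEn, newEn) {
  const oldFlat = flattenJson(oldEn);
  const newFlat = flattenJson(newEn);
  return Object.keys(newFlat).filter(path => oldFlat[path] !== newFlat[path]);
}

// Usage sketch: the old file would come from git, e.g.
//   git show <sha>:src/assets/translations/en/main.json
// with <sha> read from src/assets/translations/.last-translation-sha.
```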

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • 0xApotheosis
  • kaladinlight

Poem

"I hopped through keys both old and new,
I nudged the glossary, stitched each glue,
Nine tongues now hum the staking song,
Agents judged and scripts ran long,
The rabbit's warren reads along." 🐇✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'feat: claude code skill for automated i18n translations' accurately summarizes the main change: adding a Claude Code skill for automated translation across 9 locales. It is concise, specific, and directly reflects the primary objective.
  • Linked Issues Check: ✅ Passed. The PR comprehensively addresses all major requirements from issue #11935: implements the /translate skill with a detect → translate → validate → review → refine pipeline, adds glossary.json, includes per-locale guides, provides validation scripts, adds merge tooling with JSON ordering preservation, supports initial backfill, and produces console summaries with skipped items.
  • Out of Scope Changes Check: ✅ Passed. All changes are directly aligned with the translation skill implementation scope. Core additions include skill definitions, locale guides, validation/merge scripts, the glossary, and translation data files. Minor additions (.gitattributes, .gitignore, the .last-translation-sha marker) support the translation workflow; no unrelated code changes were detected.



@premiumjibles premiumjibles marked this pull request as ready for review February 23, 2026 23:39
@premiumjibles premiumjibles requested a review from a team as a code owner February 23, 2026 23:39

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

🧹 Nitpick comments (5)
.claude/skills/translate/scripts/term-context.js (1)

36-47: Local flatten duplicates flattenJson from script-utils.js.

validate-file.js already imports flattenJson from ./script-utils. This local re-implementation creates a divergence risk if script-utils.js evolves.

♻️ Proposed refactor
+const { flattenJson } = require('./script-utils');
+
 const fs = require('fs');
 // ... (keep existing code, remove the local flatten function)
-function flatten(obj, prefix) {
-  const result = {};
-  for (const [key, value] of Object.entries(obj)) {
-    const path = prefix ? `${prefix}.${key}` : key;
-    if (typeof value === 'string') {
-      result[path] = value;
-    } else if (typeof value === 'object' && value !== null) {
-      Object.assign(result, flatten(value, path));
-    }
-  }
-  return result;
-}
-
-const flatEn = flatten(en, '');
-const flatLocale = flatten(localeData, '');
+const flatEn = flattenJson(en, '');
+const flatLocale = flattenJson(localeData, '');
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/scripts/term-context.js around lines 36 - 47, The
local flatten function duplicates flattenJson from script-utils and should be
removed to avoid drift; delete the flatten implementation in term-context.js and
replace any local calls to flatten(...) with the imported flattenJson(...)
symbol, ensuring you import flattenJson from ./script-utils at the top of the
file and preserve the same call signature (pass the same object/prefix
arguments) so validate-file.js’s use of flattenJson remains the single source of
truth.
.claude/skills/benchmark-translate/scripts/select-keys.js (2)

148-176: Consider wrapping file reads in try/catch for friendlier errors.

Lines 168, 174, and 190 all use JSON.parse(fs.readFileSync(...)) without error handling. If en/main.json or any locale file is malformed or missing, the script will crash with a raw Node stack trace. For a developer-facing CLI tool this is tolerable, but a quick try/catch with a descriptive message would improve DX.

Example for the locale loading loop
   const localeData = {}
   for (const locale of TEST_LOCALES) {
-    localeData[locale] = JSON.parse(fs.readFileSync(path.join(TRANSLATIONS_DIR, `${locale}/main.json`), 'utf8'))
+    const localePath = path.join(TRANSLATIONS_DIR, `${locale}/main.json`)
+    try {
+      localeData[locale] = JSON.parse(fs.readFileSync(localePath, 'utf8'))
+    } catch (err) {
+      console.error(`Error reading ${localePath}: ${err.message}`)
+      process.exit(1)
+    }
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/benchmark-translate/scripts/select-keys.js around lines 148 -
176, The file reads and JSON.parse calls in main (specifically reading
TRANSLATIONS_DIR + 'en/main.json' into enData and the loop that reads each
`${locale}/main.json` into localeData for TEST_LOCALES) should be wrapped in
try/catch blocks: catch filesystem or JSON errors, log a clear descriptive error
including the file path and error.message (via console.error or processLogger),
and exit with process.exit(1); do this for the en/main.json read and for each
locale read so malformed or missing files produce friendly messages instead of
raw stack traces.

35-75: Minor: category name single-word matches strings up to 3 words.

The name is slightly misleading. Consider renaming to short-label or short-phrase to better reflect the wordCount <= 3 condition on Line 61. Very minor — won't affect correctness.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/benchmark-translate/scripts/select-keys.js around lines 35 -
75, Rename the misleading category value "single-word" in the categorize
function to a clearer name like "short-label" (or "short-phrase"): update the
return value in categorize where it currently returns 'single-word' (the branch
that checks wordCount <= 3 and !value.includes('%{')) and also update any
consumers, tests, or mappings that rely on the 'single-word' string to use the
new name so behavior stays consistent across the codebase.
.claude/skills/translate/scripts/prepare-locale.js (2)

78-80: Deduplicate tagKeys to avoid repeated entries across batches.

A Set keeps the tag list stable and avoids downstream duplication.

💡 Suggested change
-const tagKeys = batchArray.flatMap(batch => Object.keys(batch).filter(k => k.includes('.tags.')));
+const tagKeys = Array.from(new Set(
+  batchArray.flatMap(batch => Object.keys(batch).filter(k => k.includes('.tags.')))
+));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/scripts/prepare-locale.js around lines 78 - 80, The
tagKeys array is built by flatMapping keys across batches which can produce
duplicate entries; update the creation of tagKeys (variable name: tagKeys,
derived from batchArray) to deduplicate results (e.g., convert to a Set and back
to an array) so downstream code receives a stable, unique list of tag keys
across all batches.

35-66: Add a clearer error when the batches file path is invalid.

If --batches points to a missing file, readFileSync will throw a stack trace. A quick existsSync guard keeps failures user‑friendly.

💡 Suggested change
 if (!batchesPath) {
   console.error('Error: --batches=<file> is required');
   process.exit(1);
 }
+
+if (!fs.existsSync(batchesPath)) {
+  console.error('Error: batches file not found at ' + batchesPath);
+  process.exit(1);
+}
 
 const meta = LOCALE_META[locale];
@@
-const batches = JSON.parse(fs.readFileSync(batchesPath, 'utf8'));
+const batches = JSON.parse(fs.readFileSync(batchesPath, 'utf8'));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/scripts/prepare-locale.js around lines 35 - 66, The
script reads the batchesPath with JSON.parse(fs.readFileSync(batchesPath,
'utf8')) without checking the file exists, which causes a stack trace when the
path is invalid; add a guard using fs.existsSync(batchesPath) before attempting
to read/parse, log a clear error like "Error: batches file not found at <path>"
via console.error, and call process.exit(1) if missing so the code that
constructs batches is only executed when the file is present (refer to
batchesPath, fs.existsSync, fs.readFileSync, batches).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.beads/pr-context.jsonl:
- Around line 1-4: The commit includes ephemeral .beads task-tracking JSONL
entries (e.g., .beads/pr-context.jsonl) that should not be committed; remove
these files from the PR and stop tracking them by removing them from the index,
add the .beads/*.json or .beads/*.jsonl pattern to the repo ignore rules, and
re-commit so only permanent source files remain (verify the PR no longer
contains .beads/pr-context.jsonl and related .beads entries before pushing).

In @.claude/skills/benchmark-translate/scripts/compile-report.js:
- Around line 27-39: The glossaryKey values in GLOSSARY_TARGET_TERMS do not
match the keys in the translations glossary, so checkGlossaryCorrectness never
finds them; update each entry in the GLOSSARY_TARGET_TERMS array to use the
exact key strings that exist in src/assets/translations/glossary.json (or
alternatively change checkGlossaryCorrectness to look up by englishPattern),
e.g., ensure entries referenced by glossaryKey in the GLOSSARY_TARGET_TERMS
constant match the exact property names used by checkGlossaryCorrectness so the
validation can run properly.

In @.claude/skills/translate/locales/ru.md:
- Around line 1-3: The header and guidance text use a mixed-script form "vы"
(Latin v + Cyrillic ы); replace all occurrences in this file of the header "#
Russian (ru) — Formal (vы)" and the guidance line "Use formal \"vы\" address
consistently" with the correct all‑Cyrillic Russian form "вы" (e.g., "# Russian
(ru) — Formal (вы)" and "Use formal \"вы\" address consistently"). Note that
"ви" is the Ukrainian pronoun used in uk.md; the Russian formal pronoun is "вы".

In @.claude/skills/translate/scripts/merge.js:
- Around line 14-22: The file-path heuristic in merge.js incorrectly treats any
translationsArg containing '/' as a filepath (causing File not found) and also
crashes when translationsArg is missing; update the logic around translationsArg
and newTranslations to first check whether translationsArg is provided, and then
use fs.existsSync(translationsArg) alone to decide if it should be read from
disk (mirroring term-context.js) instead of translationsArg.includes('/') or
.endsWith('.json'); if translationsArg is absent, print a clear usage message
and exit, and if fs.existsSync returns true read and JSON.parse the file safely
(catch JSON parse errors) otherwise attempt to JSON.parse(translationsArg) and
handle/report parse errors before calling process.exit(1).

In @.claude/skills/translate/scripts/missing-keys.js:
- Around line 21-25: The loop that reads and parses locale files (the for...of
over locales using fs.readFileSync and JSON.parse) lacks error handling and will
crash on missing or malformed files; wrap the read+parse+findMissing steps for
each locale in a try/catch, and in the catch log the locale and error
(console.error or process.stderr) and continue to the next locale instead of
letting the exception abort the script; optionally record the failure in result
(e.g., result[locale] = { error: error.message }) so the caller/Claude still
receives JSON output.

In @.claude/skills/translate/scripts/term-context.js:
- Line 106: The loop currently skips entries using a falsy check `if
(!flatLocale[path]) continue;` which incorrectly drops valid empty-string
translations; change the guard to an explicit undefined test (e.g., `if
(flatLocale[path] === undefined) continue;`) or use `if (!(path in flatLocale))
continue;` so empty string values are preserved; update the code surrounding the
`flatLocale` lookup in term-context generation accordingly.

In @.claude/skills/translate/scripts/validate-file.js:
- Around line 28-44: The script currently calls process.exit(0) in error catch
blocks (the JSON parse catch that pushes `JSON parse error: ${e.message}` and
the English file read catch that pushes `Cannot read English file:
${e.message}`), which masks failures; change those process.exit(0) calls to
process.exit(1) so errors return a non‑zero exit code, and verify any final exit
after printing the summary (the code that prints JSON.stringify({ valid: ...,
errors, warnings })) also uses process.exit(1) when errors.length > 0 to
maintain consistent failure signaling; update the calls near the
`errors.push(...)` lines and any final successful/failed exit handling
accordingly.

In @.claude/skills/translate/scripts/validate.js:
- Around line 62-107: The loop assumes src and tgt are strings but may be
objects; before calling extractPlaceholders, .length, or .split(), add type
guards: if typeof src !== 'string' or src.trim() === '' then push rejected with
reason 'empty source' (or 'non-string') and continue; if typeof tgt !== 'string'
then push rejected with reason 'non-string' and continue. Ensure
extractPlaceholders(src) and extractPlaceholders(tgt) are only invoked after
these checks, and adjust the untranslated/wordCount check to compute wordCount
only when src is a non-empty string. Use the existing symbols (source, target,
rejected, rejectedPaths, extractPlaceholders, flags, locale, CJK_LOCALES) to
locate and implement these guards.

---

Nitpick comments:
In @.claude/skills/benchmark-translate/scripts/select-keys.js:
- Around line 148-176: The file reads and JSON.parse calls in main (specifically
reading TRANSLATIONS_DIR + 'en/main.json' into enData and the loop that reads
each `${locale}/main.json` into localeData for TEST_LOCALES) should be wrapped
in try/catch blocks: catch filesystem or JSON errors, log a clear descriptive
error including the file path and error.message (via console.error or
processLogger), and exit with process.exit(1); do this for the en/main.json read
and for each locale read so malformed or missing files produce friendly messages
instead of raw stack traces.
- Around line 35-75: Rename the misleading category value "single-word" in the
categorize function to a clearer name like "short-label" (or "short-phrase"):
update the return value in categorize where it currently returns 'single-word'
(the branch that checks wordCount <= 3 and !value.includes('%{')) and also
update any consumers, tests, or mappings that rely on the 'single-word' string
to use the new name so behavior stays consistent across the codebase.

In @.claude/skills/translate/scripts/prepare-locale.js:
- Around line 78-80: The tagKeys array is built by flatMapping keys across
batches which can produce duplicate entries; update the creation of tagKeys
(variable name: tagKeys, derived from batchArray) to deduplicate results (e.g.,
convert to a Set and back to an array) so downstream code receives a stable,
unique list of tag keys across all batches.
- Around line 35-66: The script reads the batchesPath with
JSON.parse(fs.readFileSync(batchesPath, 'utf8')) without checking the file
exists, which causes a stack trace when the path is invalid; add a guard using
fs.existsSync(batchesPath) before attempting to read/parse, log a clear error
like "Error: batches file not found at <path>" via console.error, and call
process.exit(1) if missing so the code that constructs batches is only executed
when the file is present (refer to batchesPath, fs.existsSync, fs.readFileSync,
batches).

In @.claude/skills/translate/scripts/term-context.js:
- Around line 36-47: The local flatten function duplicates flattenJson from
script-utils and should be removed to avoid drift; delete the flatten
implementation in term-context.js and replace any local calls to flatten(...)
with the imported flattenJson(...) symbol, ensuring you import flattenJson from
./script-utils at the top of the file and preserve the same call signature (pass
the same object/prefix arguments) so validate-file.js’s use of flattenJson
remains the single source of truth.

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 9d73634 and 212e1d1.

📒 Files selected for processing (37)
  • .beads/pr-context.jsonl
  • .claude/skills/benchmark-translate/SKILL.md
  • .claude/skills/benchmark-translate/scripts/compile-report.js
  • .claude/skills/benchmark-translate/scripts/restore.js
  • .claude/skills/benchmark-translate/scripts/select-keys.js
  • .claude/skills/benchmark-translate/scripts/setup.js
  • .claude/skills/translate/SKILL.md
  • .claude/skills/translate/locales/de.md
  • .claude/skills/translate/locales/es.md
  • .claude/skills/translate/locales/fr.md
  • .claude/skills/translate/locales/ja.md
  • .claude/skills/translate/locales/pt.md
  • .claude/skills/translate/locales/ru.md
  • .claude/skills/translate/locales/tr.md
  • .claude/skills/translate/locales/uk.md
  • .claude/skills/translate/locales/zh.md
  • .claude/skills/translate/scripts/diff.js
  • .claude/skills/translate/scripts/merge.js
  • .claude/skills/translate/scripts/missing-keys.js
  • .claude/skills/translate/scripts/prepare-locale.js
  • .claude/skills/translate/scripts/script-utils.js
  • .claude/skills/translate/scripts/term-context.js
  • .claude/skills/translate/scripts/validate-file.js
  • .claude/skills/translate/scripts/validate.js
  • .gitattributes
  • .gitignore
  • src/assets/translations/.last-translation-sha
  • src/assets/translations/de/main.json
  • src/assets/translations/es/main.json
  • src/assets/translations/fr/main.json
  • src/assets/translations/glossary.json
  • src/assets/translations/ja/main.json
  • src/assets/translations/pt/main.json
  • src/assets/translations/ru/main.json
  • src/assets/translations/tr/main.json
  • src/assets/translations/uk/main.json
  • src/assets/translations/zh/main.json

premiumjibles and others added 5 commits February 24, 2026 07:34
Refactor translation pipeline so each per-language sub-agent owns its
full lifecycle (translate → validate → retry → review → refine → merge
→ verify) instead of the orchestrator managing all steps across 9
languages. Reduces orchestrator to a lightweight coordinator that spawns
agents and reads status files.

- Extract shared script-detection utilities into script-utils.js
- Refactor validate.js to import from script-utils.js (no behavior change)
- Add validate-file.js for post-merge full-file validation (JSON validity,
  key completeness, aggregate script ratio, regression detection)
- Simplify merge.js: remove duplicate script-validation, add pre-merge
  backup for rollback support
- Rewrite SKILL.md Steps 5-8 for self-contained language agent architecture

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Translate 11 missing English strings into de, es, fr, ja, pt, ru, tr,
uk, zh using the new /translate Claude Code skill. Covers RFOX FAQ
entries, action center failure messages, and yield cooldown notices.

Also fixes merge.js to only add new keys by default, never overwriting
existing translations. A --force flag is available for intentional
re-translation of changed English strings.

Co-Authored-By: Claude Opus 4.6 <[email protected]>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

♻️ Duplicate comments (7)
.beads/pr-context.jsonl (1)

1-4: ⚠️ Potential issue | 🟡 Minor

Ephemeral .beads task-tracking file still committed (previously flagged).

This file was already flagged and marked as addressed in commits 888534f–6e87928, but new closed entries have been committed again. These remain ephemeral operational notes that should not be in the repo. Please remove the file, ensure .beads/*.jsonl (in addition to .beads/*.json) is covered by .gitignore, and avoid tracking it going forward.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.beads/pr-context.jsonl around lines 1 - 4, Remove the ephemeral
.beads/pr-context.jsonl from the repository, stop tracking it (git rm --cached
.beads/pr-context.jsonl and commit the deletion), and add ignore rules for
.beads/*.jsonl and .beads/*.json to .gitignore so future entries aren’t
committed; verify no other .beads JSON/JSONL files are staged or tracked and
commit the updated .gitignore so the ephemeral task-tracking file is excluded
going forward.
.claude/skills/translate/scripts/validate-file.js (1)

28-44: process.exit(0) on error paths and missing exit code on failure — already flagged.

Lines 31, 44, and the implicit 0 exit after line 82 are covered by the existing review comment on lines 28–44.

Also applies to: 81-82

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/scripts/validate-file.js around lines 28 - 44, The
error-handling catch blocks in validate-file.js (the JSON parse catch that
pushes to errors and the enData file-read catch, both logging { valid: false,
errors, warnings }) currently call process.exit(0) which signals success; change
those calls to a non-zero exit (e.g., process.exit(1)) and ensure any other
failure paths (including the implicit exit after the final invalid log around
the flattenJson/validation flow) also exit with a non-zero code; locate the
catch blocks that reference errors, warnings, localeData, flattenJson and enData
and replace process.exit(0) with process.exit(1), and ensure the only
process.exit(0) is used on the true success path.
.claude/skills/translate/scripts/term-context.js (1)

99-107: ⚠️ Potential issue | 🟡 Minor

Preserve empty-string locale entries when building term context.
Line 106’s falsy check skips valid empty translations; only undefined should be excluded.

🔧 Proposed fix
-    if (!flatLocale[path]) continue;
+    if (flatLocale[path] === undefined) continue;
#!/bin/bash
# Locate the falsy check and confirm its context
rg -n "flatLocale\\[path\\]" .claude/skills/translate/scripts/term-context.js -C 2
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/scripts/term-context.js around lines 99 - 107, The
loop building term context wrongly skips valid empty-string translations because
it uses a falsy check `if (!flatLocale[path]) continue;`; instead only undefined
should be excluded—update the condition in the term iteration (the block using
termSet, escapedTerm, regex, flatEn, flatLocale, newStringPaths, path, enValue)
to continue only when flatLocale[path] is strictly undefined (e.g.
`flatLocale[path] === undefined`) so empty string values are preserved for
matching.
.claude/skills/benchmark-translate/scripts/compile-report.js (1)

27-39: Verify GLOSSARY_TARGET_TERMS keys actually exist in glossary.json.

The previous review flagged that keys like 'dust (crypto)' and 'claim (DeFi rewards)' wouldn't match the glossary, causing checkGlossaryCorrectness to silently skip all validation. That comment is marked as addressed — but the current code is unchanged. Please confirm whether glossary.json was updated to carry these descriptive key names, or whether this fix still needs to be applied here.

#!/bin/bash
# Verify top-level keys in glossary.json and check for descriptive key names
rg -n '"dust \(crypto\)"|"claim \(DeFi rewards\)"|"trade \(action\)"|"approve \(on-chain\)"|"deposit \(funds\)"' src/assets/translations/glossary.json
# Also dump all top-level keys for comparison
python3 -c "
import json, sys
g = json.load(open('src/assets/translations/glossary.json'))
print([k for k in g.keys() if k != '_meta'])
"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/benchmark-translate/scripts/compile-report.js around lines 27
- 39, GLOSSARY_TARGET_TERMS contains descriptive keys like 'dust (crypto)' and
'claim (DeFi rewards)' that likely don't match top-level keys in glossary.json,
causing checkGlossaryCorrectness to skip validation; inspect
src/assets/translations/glossary.json and either rename those glossary.json keys
to match the descriptive glossaryKey values or change the glossaryKey entries in
GLOSSARY_TARGET_TERMS to the actual top-level keys present in glossary.json
(preserve englishPattern and isNeverTranslate), then run the provided
grep/python checks to confirm checkGlossaryCorrectness will find and validate
each term.
.claude/skills/translate/scripts/merge.js (1)

14-22: Fragile file-path heuristic still present (follow-up on prior review).

The translationsArg.includes('/') branch is still unresolved: any JSON value containing a URL triggers the file-path branch and exits with "File not found". Additionally, omitting process.argv[3] causes JSON.parse(undefined) to throw an uncaught SyntaxError.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/scripts/merge.js around lines 14 - 22,
translationsArg can be undefined and the current includes('/') heuristic falsely
treats JSON strings with slashes as file paths; update the logic around
translationsArg and newTranslations to first validate presence (if
translationsArg is undefined or empty, log an error and exit), then try to
JSON.parse(translationsArg) inside a try/catch and, if parsing succeeds, assign
to newTranslations; if parsing fails, treat translationsArg as a file path:
check fs.existsSync(translationsArg) and read+JSON.parse the file (with error
handling for read/parse errors) so you no longer rely on
translationsArg.includes('/') to decide file vs JSON.
.claude/skills/translate/scripts/validate.js (2)

62-108: ⚠️ Potential issue | 🔴 Critical

Missing type guards before string operations on src / tgt — previously flagged, still unresolved

The loop at Line 65–66 assigns src = source[path] and tgt = target[path] without verifying both are strings. The empty-check at Line 69 only guards tgt against undefined, null, and blank strings — an object value (e.g., from un-flattened nested JSON) passes straight through to:

  • Line 75–76: extractPlaceholders(src) / extractPlaceholders(tgt) — calls .matchAll(), which throws TypeError on a non-string.
  • Line 95: tgt.length / src.length → undefined / undefined = NaN; comparisons with NaN are always false, silently skipping the length-ratio check.
  • Line 104: src.split(/\s+/) — throws on a non-string.
🛡️ Proposed fix (same as prior suggestion)
   const src = source[path];
   const tgt = target[path];

+  if (typeof src !== 'string' || typeof tgt !== 'string') {
+    rejected.push({
+      path,
+      reason: 'non-string',
+      details: `Expected string values but got ${typeof src}/${typeof tgt}`,
+    });
+    continue;
+  }
+
   // Check 3: Empty/whitespace
   if (tgt === undefined || tgt === null || (typeof tgt === 'string' && tgt.trim() === '')) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/scripts/validate.js around lines 62 - 108, src and
tgt are used as strings but not type-checked, causing extractPlaceholders,
.length, and .split to throw or produce NaN for non-strings; add explicit type
guards before doing any string operations (replace the empty-check with a guard
that verifies typeof tgt === 'string' && tgt.trim() !== ''), and before calling
extractPlaceholders(src/tgt) ensure typeof src === 'string' and typeof tgt ===
'string' (otherwise push {path, reason: 'invalid type', details: 'source/target
not a string'} to rejected and continue); only compute ratio if both are strings
and src.length > 0 to avoid NaN, and only call src.split(...) when src is a
string. Ensure you update the logic around extractPlaceholders, ratio
calculation, and untranslated detection to use these guards.

95-101: ⚠️ Potential issue | 🟡 Minor

Division by zero when src is an empty string

If src.length === 0, ratio becomes Infinity, which exceeds maxRatio and produces a false-positive 'length ratio' flag for every non-empty translation of an empty source string. Add a guard:

🛡️ Proposed fix
+  if (src.length === 0) {
+    // Nothing meaningful to ratio-check; skip
+    passed.push(path);
+    continue;
+  }
+
   // Check 2: Length ratio
   const ratio = tgt.length / src.length;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/scripts/validate.js around lines 95 - 101, The
length-ratio check divides by src.length and will yield Infinity when src is
empty, causing false-positive 'length ratio' flags; update the logic around
ratio/CJK_LOCALES so you first handle the empty-source case (e.g., if src.length
=== 0) — decide whether to flag as an "empty source" / "unexpected translation"
(push a distinct flags entry with path and reason) or skip the ratio check
entirely when both src and tgt are empty, and only compute ratio = tgt.length /
src.length when src.length > 0; adjust setting of isFlagged and use the same
flags array (symbols: ratio, src, tgt, CJK_LOCALES, flags, isFlagged, path) so
behavior is correct for empty-source inputs.
🧹 Nitpick comments (5)
.claude/skills/translate/SKILL.md (1)

43-45: Add language identifiers to fenced code blocks (MD040).

markdownlint flags these blocks as missing a language. Please add language tags (e.g., bash, json, text) to the fences.

Also applies to: 171-173, 236-238, 268-270, 329-331, 359-361

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/SKILL.md around lines 43 - 45, Several fenced code
blocks in SKILL.md are missing language identifiers (markdownlint MD040); open
the file and add appropriate language tags to each triple-backtick fence (for
example use bash for shell commands like `Read
src/assets/translations/.last-translation-sha`, json for JSON snippets, or text
when generic), updating the fences near the shown occurrences (the block
containing "Read src/assets/translations/.last-translation-sha" and the other
blocks around the reported regions) so every ``` fence becomes e.g. ```bash,
```json or ```text as appropriate.
.claude/skills/translate/scripts/validate-file.js (1)

30-30: Inconsistent JSON output format between error-path exits and final output.

Lines 30 and 43 emit compact JSON (JSON.stringify(...) with no indent), while line 82 emits pretty-printed JSON (JSON.stringify(..., null, 2)). Any caller that reads stdout will see different whitespace depending on the exit path. Aligning to the same format (preferably pretty-printed for human-readable CI logs) avoids the inconsistency.

♻️ Proposed fix
-  console.log(JSON.stringify({ valid: false, errors, warnings }));
+  console.log(JSON.stringify({ valid: false, errors, warnings }, null, 2));
   process.exit(0);

(apply the same change at both lines 30 and 43)

Also applies to: 43-43

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/scripts/validate-file.js at line 30, The script
emits compact JSON on error paths (the console.log that prints { valid: false,
errors, warnings } and the similar error-path log referenced around the other
exit) but prints pretty JSON later; make the error-path outputs use the same
pretty-printed format as the final output by calling JSON.stringify with the
indentation parameter (null, 2) so both error and success stdout use consistent,
human-readable JSON; update the console.log calls that print errors/warnings to
use JSON.stringify(..., null, 2).
.claude/skills/benchmark-translate/scripts/compile-report.js (1)

75-76: hasJudgeScores is declared and mutated but never consumed.

The variable is set to false when a locale's judge-score file is absent, but nothing downstream reads it — the report proceeds identically either way. Safe to remove or replace with a one-off check where needed.

♻️ Proposed cleanup
-let hasJudgeScores = true
 for (const locale of TEST_LOCALES) {
   const scorePath = `/tmp/${locale}-judge-scores.json`
   if (!fs.existsSync(scorePath)) {
     console.error(`Warning: ${scorePath} not found — skipping judge scores for ${locale}`)
-    hasJudgeScores = false
     judgeScores[locale] = new Map()
     continue
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/benchmark-translate/scripts/compile-report.js around lines 75
- 76, The local variable hasJudgeScores is declared and mutated inside the loop
over TEST_LOCALES but never used later; remove the hasJudgeScores declaration
and any assignments that set it (look for "hasJudgeScores" and the for (const
locale of TEST_LOCALES) loop) and either delete that dead logic entirely or
replace it with an inline/one-off existence check where the presence of a
judge-score file is actually needed (e.g., compute fs.existsSync or similar at
the point you consume that information instead of mutating hasJudgeScores).
.claude/skills/benchmark-translate/scripts/select-keys.js (1)

114-122: O(n) category count inside the selection loop

selected.filter(s => s.category === category).length at Line 120 re-scans the entire selected array on every iteration. Replace with a per-category counter for O(1) tracking:

♻️ Suggested refactor
   function selectFrom(entries, category, target) {
     const shuffled = shuffleArray(entries)
+    let categoryCount = selected.filter(s => s.category === category).length
     for (const entry of shuffled) {
       if (usedPaths.has(entry.dottedPath)) continue
       selected.push({ dottedPath: entry.dottedPath, category })
       usedPaths.add(entry.dottedPath)
-      if (selected.filter(s => s.category === category).length >= target) break
+      categoryCount++
+      if (categoryCount >= target) break
     }
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/benchmark-translate/scripts/select-keys.js around lines 114 -
122, The loop in selectFrom performs an O(n) scan each iteration by calling
selected.filter(...).length; replace that with a per-category counter (e.g., a
local map/object like categoryCounts) tracked inside selectFrom or passed in so
you can increment when you push to selected and check categoryCounts[category]
>= target to break, and decrement if you ever undo selection; update/selectFrom,
selected push, and usedPaths.add lines to maintain this counter for O(1) checks.
.claude/skills/translate/scripts/validate.js (1)

88-92: Dead condition — && srcSet === tgtSet is always true here

If srcSet !== tgtSet, the continue at Line 82 exits the loop iteration. Reaching Line 89 implies srcSet === tgtSet already. The second operand is unreachable dead code.

♻️ Suggested simplification
-  if (srcPlaceholders.join(',') !== tgtPlaceholders.join(',') && srcSet === tgtSet) {
+  if (srcPlaceholders.join(',') !== tgtPlaceholders.join(',')) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/scripts/validate.js around lines 88 - 92, The
condition in the placeholder-order check contains a dead operand: remove the
redundant "&& srcSet === tgtSet" because srcSet and tgtSet are already
guaranteed equal by the earlier continue, so change the if to only compare
placeholder order (i.e. compare srcPlaceholders.join(',') !==
tgtPlaceholders.join(',') ) and keep the existing body that pushes to flags and
sets isFlagged (references: srcPlaceholders, tgtPlaceholders, srcSet, tgtSet,
flags.push, isFlagged).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.claude/skills/benchmark-translate/scripts/compile-report.js:
- Line 59: Add an existsSync guard before reading GLOSSARY_PATH so missing
glossary.json yields a clear error instead of an uncaught ENOENT; check
fs.existsSync(GLOSSARY_PATH) and if false log or throw a helpful message and
skip/abort as appropriate, then only call
JSON.parse(fs.readFileSync(GLOSSARY_PATH, 'utf8')) to populate the glossary
variable; mirror the pattern used for the ground-truth guard (the nearby check
around lines that handle ground-truth) to keep behavior consistent.

In @.claude/skills/benchmark-translate/scripts/select-keys.js:
- Around line 189-219: The stale-key pruning updates coreKeys in memory but
never sets coreCreated, so the write guarded by if
(!fs.existsSync(CORE_KEYS_PATH) || coreCreated) is skipped; change the logic in
the CORE_KEYS_PATH load block to mark coreCreated (or otherwise trigger a write)
when loaded.length !== validated.length (i.e., staleCount > 0) so the pruned
validated array is persisted—update the block that computes
loaded/validated/staleCount and set coreCreated = true (or set a flag like
needsWrite) so the subsequent writeFileSync writes coreKeys back to
CORE_KEYS_PATH.

In @.claude/skills/translate/scripts/diff.js:
- Around line 14-15: Wrap the JSON.parse/fs.readFileSync calls that set
oldStrings and newStrings in a try/catch so malformed JSON doesn't throw a stack
trace; on error, mirror the earlier error handling by writing a machine-readable
JSON error to stdout/stderr (e.g., an object with an "error" message and
details) and exiting non‑zero. Specifically, guard the code that assigns
oldStrings = JSON.parse(oldContent) and newStrings =
JSON.parse(fs.readFileSync(...)) and ensure the catch formats and emits a JSON
error payload consistent with the existing error block.
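The error-payload shape described above can be sketched as follows (helper name and payload fields are assumptions, not diff.js's actual code):

```javascript
// Emit a machine-readable JSON error instead of a raw stack trace
// when a translations file fails to parse.
function parseJsonOrExit(label, raw) {
  try {
    return JSON.parse(raw)
  } catch (err) {
    console.error(JSON.stringify({ error: `Malformed JSON in ${label}`, details: err.message }))
    process.exit(1)
  }
}
```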
- Around line 23-25: The ternary uses a truthy check (previousValue ? 'modified'
: 'new') which treats existing empty-string values as new; change the existence
test to check for undefined instead (e.g. use previousValue !== undefined ?
'modified' : 'new') so keys with previousValue == "" are classified as
'modified'; update the expression in the block where currentValue,
previousValue, results and currentPath are used.
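The distinction matters because `''` is falsy yet still an existing value; a sketch under the variable names the prompt assumes:

```javascript
// `previousValue !== undefined` treats an existing empty string as a prior
// value, so '' -> 'new text' is correctly reported as 'modified'.
function classifyChange(currentValue, previousValue) {
  return previousValue !== undefined ? 'modified' : 'new'
}
```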

In @.claude/skills/translate/scripts/merge.js:
- Around line 75-83: The merge loop over Object.entries(newTranslations) assumes
flat dot-path keys and silently skips nested objects; add input validation
before the loop to detect nested JSON values (e.g., if any value in
newTranslations is an object/array rather than a string/primitive) and throw or
log a clear error unless forceOverwrite is explicitly set. Use the
newTranslations variable and the existing getByPath/setByPath logic to validate:
iterate newTranslations keys and if typeof value === 'object' (and not null)
then raise an error indicating nested JSON detected for that key (mention the
key name) so the caller can convert to dot-path format or set forceOverwrite to
proceed. Ensure the validation runs prior to the for (const [path, value] of
Object.entries(newTranslations)) loop so no silent skips occur.
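One way to sketch the pre-loop validation (assuming flat dot-path keys and the forceOverwrite flag named above; the helper itself is hypothetical):

```javascript
// Reject nested values before merging, unless explicitly forced.
function assertFlatTranslations(newTranslations, forceOverwrite = false) {
  for (const [path, value] of Object.entries(newTranslations)) {
    if (typeof value === 'object' && value !== null && !forceOverwrite) {
      throw new Error(
        `Nested JSON detected at "${path}": expected flat dot-path keys ` +
        '(convert to dot-path format or set forceOverwrite)',
      )
    }
  }
}
```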

In @.claude/skills/translate/scripts/validate-file.js:
- Around line 58-76: The comment claims detection of "deleted or corrupted" keys
but the code only checks existence; update the check to also detect value
changes by comparing flatPreMerge[key] to flatLocale[key] (both produced by
flattenJson) and collect mismatches into a separate corruptedKeys array (handle
undefined/null and stringify non-primitive values consistently), then include
corruptedKeys in the errors message similar to deletedKeys; reference
variables/functions: preMergePath, preMergeData, flattenJson, flatPreMerge,
flatLocale, deletedKeys.

In `@src/assets/translations/es/main.json`:
- Line 2612: The title string with key "title" currently uses the masculine past
participle "quemado"; change it to the feminine "quemada" so it agrees with the
feminine noun "cantidad" (i.e., update the value for "title" in
src/assets/translations/es/main.json from "¿Cómo se relacionan las tarifas
totales recaudadas, total Staking FOX, emisiones y cantidad quemado?" to use
"cantidad quemada").
- Around line 2843-2855: The `failed` translations for deposit, withdrawal, and
claim use informal "Tu" and an inconsistent noun ("reclamación"); update the
keys "deposit.failed", "withdrawal.failed", and "claim.failed" to use the formal
register "Su" to match existing "pending" and "complete" messages and change
"reclamación" in "claim.failed" to "reclamo" so the term matches "claim.pending"
("reclamo") for consistency across the claim messages.

In `@src/assets/translations/uk/main.json`:
- Line 2612: The FAQ title value for the translation key "connect.title" starts
with a lowercase "як" and should be capitalized to match other entries; update
the string value so the first character is uppercase ("Як ...") in
src/assets/translations/uk/main.json for the "connect.title" entry to keep
heading capitalization consistent across the section.

---

Duplicate comments:
In @.beads/pr-context.jsonl:
- Around line 1-4: Remove the ephemeral .beads/pr-context.jsonl from the
repository, stop tracking it (git rm --cached .beads/pr-context.jsonl and commit
the deletion), and add ignore rules for .beads/*.jsonl and .beads/*.json to
.gitignore so future entries aren’t committed; verify no other .beads JSON/JSONL
files are staged or tracked and commit the updated .gitignore so the ephemeral
task-tracking file is excluded going forward.

In @.claude/skills/benchmark-translate/scripts/compile-report.js:
- Around line 27-39: GLOSSARY_TARGET_TERMS contains descriptive keys like 'dust
(crypto)' and 'claim (DeFi rewards)' that likely don't match top-level keys in
glossary.json, causing checkGlossaryCorrectness to skip validation; inspect
src/assets/translations/glossary.json and either rename those glossary.json keys
to match the descriptive glossaryKey values or change the glossaryKey entries in
GLOSSARY_TARGET_TERMS to the actual top-level keys present in glossary.json
(preserve englishPattern and isNeverTranslate), then run the provided
grep/python checks to confirm checkGlossaryCorrectness will find and validate
each term.

In @.claude/skills/translate/scripts/merge.js:
- Around line 14-22: translationsArg can be undefined and the current
includes('/') heuristic falsely treats JSON strings with slashes as file paths;
update the logic around translationsArg and newTranslations to first validate
presence (if translationsArg is undefined or empty, log an error and exit), then
try to JSON.parse(translationsArg) inside a try/catch and, if parsing succeeds,
assign to newTranslations; if parsing fails, treat translationsArg as a file
path: check fs.existsSync(translationsArg) and read+JSON.parse the file (with
error handling for read/parse errors) so you no longer rely on
translationsArg.includes('/') to decide file vs JSON.

In @.claude/skills/translate/scripts/term-context.js:
- Around line 99-107: The loop building term context wrongly skips valid
empty-string translations because it uses a falsy check `if (!flatLocale[path])
continue;`; instead only undefined should be excluded—update the condition in
the term iteration (the block using termSet, escapedTerm, regex, flatEn,
flatLocale, newStringPaths, path, enValue) to continue only when
flatLocale[path] is strictly undefined (e.g. `flatLocale[path] === undefined`)
so empty string values are preserved for matching.

In @.claude/skills/translate/scripts/validate-file.js:
- Around line 28-44: The error-handling catch blocks in validate-file.js (the
JSON parse catch that pushes to errors and the enData file-read catch, both
logging { valid: false, errors, warnings }) currently call process.exit(0) which
signals success; change those calls to a non-zero exit (e.g., process.exit(1))
and ensure any other failure paths (including the implicit exit after the final
invalid log around the flattenJson/validation flow) also exit with a non-zero
code; locate the catch blocks that reference errors, warnings, localeData,
flattenJson and enData and replace process.exit(0) with process.exit(1), and
ensure the only process.exit(0) is used on the true success path.

In @.claude/skills/translate/scripts/validate.js:
- Around line 62-108: src and tgt are used as strings but not type-checked,
causing extractPlaceholders, .length, and .split to throw or produce NaN for
non-strings; add explicit type guards before doing any string operations
(replace the empty-check with a guard that verifies typeof tgt === 'string' &&
tgt.trim() !== ''), and before calling extractPlaceholders(src/tgt) ensure
typeof src === 'string' and typeof tgt === 'string' (otherwise push {path,
reason: 'invalid type', details: 'source/target not a string'} to rejected and
continue); only compute ratio if both are strings and src.length > 0 to avoid
NaN, and only call src.split(...) when src is a string. Ensure you update the
logic around extractPlaceholders, ratio calculation, and untranslated detection
to use these guards.
- Around line 95-101: The length-ratio check divides by src.length and will
yield Infinity when src is empty, causing false-positive 'length ratio' flags;
update the logic around ratio/CJK_LOCALES so you first handle the empty-source
case (e.g., if src.length === 0) — decide whether to flag as an "empty source" /
"unexpected translation" (push a distinct flags entry with path and reason) or
skip the ratio check entirely when both src and tgt are empty, and only compute
ratio = tgt.length / src.length when src.length > 0; adjust setting of isFlagged
and use the same flags array (symbols: ratio, src, tgt, CJK_LOCALES, flags,
isFlagged, path) so behavior is correct for empty-source inputs.

---

Nitpick comments:
In @.claude/skills/benchmark-translate/scripts/compile-report.js:
- Around line 75-76: The local variable hasJudgeScores is declared and mutated
inside the loop over TEST_LOCALES but never used later; remove the
hasJudgeScores declaration and any assignments that set it (look for
"hasJudgeScores" and the for (const locale of TEST_LOCALES) loop) and either
delete that dead logic entirely or replace it with an inline/one-off existence
check where the presence of a judge-score file is actually needed (e.g., compute
fs.existsSync or similar at the point you consume that information instead of
mutating hasJudgeScores).

In @.claude/skills/benchmark-translate/scripts/select-keys.js:
- Around line 114-122: The loop in selectFrom performs an O(n) scan each
iteration by calling selected.filter(...).length; replace that with a
per-category counter (e.g., a local map/object like categoryCounts) tracked
inside selectFrom or passed in so you can increment when you push to selected
and check categoryCounts[category] >= target to break, and decrement if you ever
undo selection; update/selectFrom, selected push, and usedPaths.add lines to
maintain this counter for O(1) checks.

In @.claude/skills/translate/scripts/validate-file.js:
- Line 30: The script emits compact JSON on error paths (the console.log that
prints { valid: false, errors, warnings } and the similar error-path log
referenced around the other exit) but prints pretty JSON later; make the
error-path outputs use the same pretty-printed format as the final output by
calling JSON.stringify with the indentation parameter (null, 2) so both error
and success stdout use consistent, human-readable JSON; update the console.log
calls that print errors/warnings to use JSON.stringify(..., null, 2).

In @.claude/skills/translate/scripts/validate.js:
- Around line 88-92: The condition in the placeholder-order check contains a
dead operand: remove the redundant "&& srcSet === tgtSet" because srcSet and
tgtSet are already guaranteed equal by the earlier continue, so change the if to
only compare placeholder order (i.e. compare srcPlaceholders.join(',') !==
tgtPlaceholders.join(',') ) and keep the existing body that pushes to flags and
sets isFlagged (references: srcPlaceholders, tgtPlaceholders, srcSet, tgtSet,
flags.push, isFlagged).

In @.claude/skills/translate/SKILL.md:
- Around line 43-45: Several fenced code blocks in SKILL.md are missing language
identifiers (markdownlint MD040); open the file and add appropriate language
tags to each triple-backtick fence (for example use bash for shell commands like
`Read src/assets/translations/.last-translation-sha`, json for JSON snippets, or
text when generic), updating the fences near the shown occurrences (the block
containing "Read src/assets/translations/.last-translation-sha" and the other
blocks around the reported regions) so every ``` fence becomes e.g. ```bash,
```json or ```text as appropriate.

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 212e1d1 and 6e87928.

📒 Files selected for processing (37)
  • .beads/pr-context.jsonl
  • .claude/skills/benchmark-translate/SKILL.md
  • .claude/skills/benchmark-translate/scripts/compile-report.js
  • .claude/skills/benchmark-translate/scripts/restore.js
  • .claude/skills/benchmark-translate/scripts/select-keys.js
  • .claude/skills/benchmark-translate/scripts/setup.js
  • .claude/skills/translate/SKILL.md
  • .claude/skills/translate/locales/de.md
  • .claude/skills/translate/locales/es.md
  • .claude/skills/translate/locales/fr.md
  • .claude/skills/translate/locales/ja.md
  • .claude/skills/translate/locales/pt.md
  • .claude/skills/translate/locales/ru.md
  • .claude/skills/translate/locales/tr.md
  • .claude/skills/translate/locales/uk.md
  • .claude/skills/translate/locales/zh.md
  • .claude/skills/translate/scripts/diff.js
  • .claude/skills/translate/scripts/merge.js
  • .claude/skills/translate/scripts/missing-keys.js
  • .claude/skills/translate/scripts/prepare-locale.js
  • .claude/skills/translate/scripts/script-utils.js
  • .claude/skills/translate/scripts/term-context.js
  • .claude/skills/translate/scripts/validate-file.js
  • .claude/skills/translate/scripts/validate.js
  • .gitattributes
  • .gitignore
  • src/assets/translations/.last-translation-sha
  • src/assets/translations/de/main.json
  • src/assets/translations/es/main.json
  • src/assets/translations/fr/main.json
  • src/assets/translations/glossary.json
  • src/assets/translations/ja/main.json
  • src/assets/translations/pt/main.json
  • src/assets/translations/ru/main.json
  • src/assets/translations/tr/main.json
  • src/assets/translations/uk/main.json
  • src/assets/translations/zh/main.json
✅ Files skipped from review due to trivial changes (1)
  • src/assets/translations/.last-translation-sha
🚧 Files skipped from review as they are similar to previous changes (14)
  • .claude/skills/benchmark-translate/SKILL.md
  • .claude/skills/translate/locales/ru.md
  • .gitattributes
  • .claude/skills/translate/scripts/missing-keys.js
  • .gitignore
  • .claude/skills/translate/scripts/script-utils.js
  • src/assets/translations/glossary.json
  • .claude/skills/translate/locales/fr.md
  • src/assets/translations/ru/main.json
  • .claude/skills/benchmark-translate/scripts/restore.js
  • .claude/skills/translate/locales/uk.md
  • .claude/skills/benchmark-translate/scripts/setup.js
  • src/assets/translations/tr/main.json
  • .claude/skills/translate/locales/pt.md

- Fix glossary key mismatch in compile-report.js (disambiguated keys
  didn't match actual glossary.json keys, silently skipping 4 checks)
- Fix mixed Latin/Cyrillic in ru.md locale guide (vы → вы)
- Fix fragile file-path detection in merge.js (use fs.existsSync instead
  of includes('/'), add missing-arg guard and JSON.parse try/catch)
- Add try/catch in missing-keys.js for corrupt/missing locale files
- Add French elision rule to fr.md: use "de" when numeric %{amount}
  buffers the symbol, use "en" when symbol placeholder is directly
  after the preposition (avoids runtime elision ambiguity)
- Retranslate French yield/unstake strings applying the new rule:
  "déstaking de %{symbol}" → "déstake en %{symbol}"

Co-Authored-By: Claude Opus 4.6 <[email protected]>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/assets/translations/fr/main.json (1)

2592-2613: ⚠️ Potential issue | 🟡 Minor

Double-check reward cadence wording consistency (“par époque” vs “par bloc”).

The RFOX FAQ copy mixes “par époque” (Line 2593) with “par bloc” (Line 2597). If rewards are not actually per-block, this could mislead users—worth verifying against the English source / product behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/assets/translations/fr/main.json` around lines 2592 - 2613, The French
FAQ mixes reward cadence wording: the "body" under the "Pourquoi staker vos FOX
?" entry uses "par époque" while the "body" under the "stake" entry uses "par
bloc"; verify the English source/product behavior and make both strings
consistent (either change "par époque" to "par bloc" or vice versa) so the
"Pourquoi staker vos FOX ?" body and the "stake" -> "body" use the same cadence
term.
♻️ Duplicate comments (1)
.claude/skills/benchmark-translate/scripts/compile-report.js (1)

53-60: ⚠️ Potential issue | 🟡 Minor

Add an explicit glossary.json existence guard (currently crashes with ENOENT).

You already guard ground-truth.json; glossary.json is read unconditionally, so a missing file yields an unhelpful stack trace.

Proposed diff
 if (!fs.existsSync(GT_PATH)) {
   console.error('Error: ground-truth.json not found. Run setup.js first.')
   process.exit(1)
 }
 
+if (!fs.existsSync(GLOSSARY_PATH)) {
+  console.error('Error: glossary.json not found at', GLOSSARY_PATH)
+  process.exit(1)
+}
+
 const gt = JSON.parse(fs.readFileSync(GT_PATH, 'utf8'))
 const glossary = JSON.parse(fs.readFileSync(GLOSSARY_PATH, 'utf8'))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/benchmark-translate/scripts/compile-report.js around lines 53
- 60, The script reads GLOSSARY_PATH without checking existence causing an
ENOENT crash; add an existence guard like the GT_PATH check: use
fs.existsSync(GLOSSARY_PATH) and if false call console.error with a clear
message ('Error: glossary.json not found. Run setup.js first.') and
process.exit(1) before attempting JSON.parse, ensuring the code references
GLOSSARY_PATH and mirrors the existing GT_PATH/gt parsing flow.
🧹 Nitpick comments (2)
.claude/skills/benchmark-translate/scripts/compile-report.js (2)

157-173: Fail fast with a clear message if ground-truth is missing keys/locales.

Right now, a missing gt.english[tk.dottedPath] (or missing locale in gt.groundTruth) will crash later (e.g., english.trim() in validate) with a less actionable error. Adding explicit checks here keeps failures easy to diagnose.

Proposed diff
   for (const tk of gt.testKeys) {
     const english = gt.english[tk.dottedPath]
-    const human = gt.groundTruth[locale][tk.dottedPath]
+    const human = gt.groundTruth?.[locale]?.[tk.dottedPath]
+    if (typeof english !== 'string') {
+      throw new Error(`Missing gt.english for key: ${tk.dottedPath}`)
+    }
+    if (typeof human !== 'string') {
+      throw new Error(`Missing gt.groundTruth[${locale}] for key: ${tk.dottedPath}`)
+    }
     const skill = getVal(localeData, tk.dottedPath) || ''
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/benchmark-translate/scripts/compile-report.js around lines
157 - 173, The loop can later crash with opaque errors when ground-truth is
missing; add explicit checks before using gt values: for each iteration over
TEST_LOCALES and tk, verify gt.english[tk.dottedPath] exists and
gt.groundTruth[locale] exists and gt.groundTruth[locale][tk.dottedPath] exists,
and throw a clear error (including locale and dottedPath) if any is missing so
failures are fail-fast and actionable; keep other logic (getVal, validate,
judgeScores lookup) unchanged but only run them after these validations.

233-258: Make “valid” consistent across overall/category/locale summaries (or rename it).

Overall “Validation pass rate” uses placeholder integrity + length ratio + glossary compliance + script correctness + non-empty + non-untranslated. But the per-category/per-locale “valid” counts omit glossary/script checks, so the numbers aren’t comparable.

Proposed diff (align definitions)
-  const cv = cr.filter(r => r.validation.placeholderIntegrity && r.validation.lengthRatioOk && !r.validation.isEmpty && !r.validation.isUntranslated).length
+  const cv = cr.filter(r =>
+    r.validation.placeholderIntegrity && r.validation.lengthRatioOk &&
+    r.validation.glossaryCompliance && r.validation.scriptCorrect &&
+    !r.validation.isEmpty && !r.validation.isUntranslated
+  ).length
-  const lv = lr.filter(r => r.validation.placeholderIntegrity && r.validation.lengthRatioOk && !r.validation.isEmpty && !r.validation.isUntranslated).length
+  const lv = lr.filter(r =>
+    r.validation.placeholderIntegrity && r.validation.lengthRatioOk &&
+    r.validation.glossaryCompliance && r.validation.scriptCorrect &&
+    !r.validation.isEmpty && !r.validation.isUntranslated
+  ).length
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/benchmark-translate/scripts/compile-report.js around lines
233 - 258, The per-category cv and per-locale lv counts use a narrower
validation check than the overall "Validation pass rate"; update the cv and lv
filters to use the same full set of validation predicates used by the overall
metric (include placeholderIntegrity, lengthRatioOk, glossary compliance flag,
script correctness flag, and !isEmpty and !isUntranslated) so the "valid" counts
are comparable, i.e. change the filter expressions that compute cv (in the
category loop) and lv (in the locale loop) to mirror the overall validation
boolean used elsewhere (reference variables: results, cv, lv, TEST_LOCALES,
r.validation, judgeScore).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.claude/skills/benchmark-translate/scripts/compile-report.js:
- Around line 391-397: The "Skill win rate" currently divides bs.skillWins and
overall.skillWins by bs.totalComparisons and overall.total which under-reports
when some comparisons are unjudged; change the denominators to the
judged-comparisons counters (e.g., bs.judgedComparisons and
overall.judgedComparisons or whatever the codebase uses for "judged" counts)
when computing bsSwr and r3swr, keep the same rounding/formatting and guard
against zero (null) denominators exactly as the existing zero checks do so you
don't divide by zero or show misleading percentages; update references to
bs.totalComparisons and overall.total in the bsSwr and r3swr calculations to use
the judged counts instead.

In `@src/assets/translations/fr/main.json`:
- Around line 2908-2911: Remove the unused empty "actions": {} object under the
"yield" translation block (the "yield.actions" key) from both en/main.json and
fr/main.json; verify there are no calls to translate('yield.actions') or
references to yield.actions in components like YieldForm.tsx or
YieldActionModal.tsx, and then delete the key to avoid unused translation
entries.

---

Outside diff comments:
In `@src/assets/translations/fr/main.json`:
- Around line 2592-2613: The French FAQ mixes reward cadence wording: the "body"
under the "Pourquoi staker vos FOX ?" entry uses "par époque" while the "body"
under the "stake" entry uses "par bloc"; verify the English source/product
behavior and make both strings consistent (either change "par époque" to "par
bloc" or vice versa) so the "Pourquoi staker vos FOX ?" body and the "stake" ->
"body" use the same cadence term.

---

Duplicate comments:
In @.claude/skills/benchmark-translate/scripts/compile-report.js:
- Around line 53-60: The script reads GLOSSARY_PATH without checking existence
causing an ENOENT crash; add an existence guard like the GT_PATH check: use
fs.existsSync(GLOSSARY_PATH) and if false call console.error with a clear
message ('Error: glossary.json not found. Run setup.js first.') and
process.exit(1) before attempting JSON.parse, ensuring the code references
GLOSSARY_PATH and mirrors the existing GT_PATH/gt parsing flow.

---

Nitpick comments:
In @.claude/skills/benchmark-translate/scripts/compile-report.js:
- Around line 157-173: The loop can later crash with opaque errors when
ground-truth is missing; add explicit checks before using gt values: for each
iteration over TEST_LOCALES and tk, verify gt.english[tk.dottedPath] exists and
gt.groundTruth[locale] exists and gt.groundTruth[locale][tk.dottedPath] exists,
and throw a clear error (including locale and dottedPath) if any is missing so
failures are fail-fast and actionable; keep other logic (getVal, validate,
judgeScores lookup) unchanged but only run them after these validations.
- Around line 233-258: The per-category cv and per-locale lv counts use a
narrower validation check than the overall "Validation pass rate"; update the cv
and lv filters to use the same full set of validation predicates used by the
overall metric (include placeholderIntegrity, lengthRatioOk, glossary compliance
flag, script correctness flag, and !isEmpty and !isUntranslated) so the "valid"
counts are comparable, i.e. change the filter expressions that compute cv (in
the category loop) and lv (in the locale loop) to mirror the overall validation
boolean used elsewhere (reference variables: results, cv, lv, TEST_LOCALES,
r.validation, judgeScore).

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 6e87928 and 41fc67d.

📒 Files selected for processing (6)
  • .claude/skills/benchmark-translate/scripts/compile-report.js
  • .claude/skills/translate/locales/fr.md
  • .claude/skills/translate/locales/ru.md
  • .claude/skills/translate/scripts/merge.js
  • .claude/skills/translate/scripts/missing-keys.js
  • src/assets/translations/fr/main.json
🚧 Files skipped from review as they are similar to previous changes (3)
  • .claude/skills/translate/scripts/merge.js
  • .claude/skills/translate/scripts/missing-keys.js
  • .claude/skills/translate/locales/ru.md

premiumjibles and others added 2 commits February 24, 2026 11:02
Ukrainian: в/у and з/із/зі preposition alternation rules for dynamic
placeholders where runtime values are unknown at translation time.

Turkish: vowel harmony rules for dynamic placeholders — prefer
postpositions over direct suffixes on placeholders since crypto symbols
span all vowel classes.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
- Add register examples to all 9 locale files (de, es, fr, ja, pt, ru, tr, uk, zh) with correct/incorrect pairs for non-pronoun register markers
- Add register consistency as 6th reviewer focus in SKILL.md
- Add "Multichain Snap" and "Snap" to glossary never-translate list
- Fix 2 broken community translations across all 9 locales (stale multiChain.body, missing %{symbol} in getAssets.about)
- Update compile-report.js to use stemMatch instead of raw .includes() for glossary metrics
- Improve stemMatch with language-aware morphological matching (suffix stripping, Levenshtein distance, CJK character overlap)

Co-Authored-By: Claude Opus 4.6 <[email protected]>
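The improved `stemMatch` mentioned in the commit message is not shown in this thread. A minimal sketch of the suffix-stripping plus edit-distance approach might look like the following — function names, the minimum-stem-length guard, and the distance threshold are illustrative assumptions, not the actual implementation in `script-utils.js`:

```javascript
// Classic dynamic-programming Levenshtein distance between two strings.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i, ...Array(b.length).fill(0)]);
  for (let j = 0; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,
        dp[i][j - 1] + 1,
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1),
      );
    }
  }
  return dp[a.length][b.length];
}

// Strip the longest matching locale suffix, keeping at least a 3-char stem.
function stripSuffix(word, suffixes) {
  for (const s of [...suffixes].sort((a, b) => b.length - a.length)) {
    if (word.endsWith(s) && word.length - s.length >= 3) return word.slice(0, -s.length);
  }
  return word;
}

// True if any word in `text` shares a stem with `term` (exact or within edit distance 1),
// so inflected forms like "Belohnungen" still count as a glossary hit for "Belohnung".
function stemMatch(term, text, suffixes) {
  const stem = stripSuffix(term.toLowerCase(), suffixes);
  return text.toLowerCase().split(/\s+/).some(w => {
    const ws = stripSuffix(w.replace(/[^\p{L}]/gu, ''), suffixes);
    return ws === stem || levenshtein(ws, stem) <= 1;
  });
}
```

The CJK-character-overlap path mentioned in the commit would sit alongside this for locales where whitespace tokenization does not apply.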
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/assets/translations/ja/main.json (1)

2592-2613: ⚠️ Potential issue | 🟡 Minor

Align reward cadence wording (ブロックごと vs エポックごと).

Line 2597 says rewards are paid “ブロックごと” while Line 2593 mentions “エポックごと”. Please align these to avoid conflicting cadence.

✏️ Proposed fix
-        "body": "FOXをステーキングするには、ステーキングインターフェースを使用してFOXトークンをステーキングコントラクトにロックします。ステーキング後、ステーキング特典の一環としてブロックごとにUSDC報酬の獲得が始まります。"
+        "body": "FOXをステーキングするには、ステーキングインターフェースを使用してFOXトークンをステーキングコントラクトにロックします。ステーキング後、ステーキング特典の一環としてエポックごとにUSDC報酬の獲得が始まります。"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/assets/translations/ja/main.json` around lines 2592 - 2613, The
translation contains conflicting reward cadence wording: the earlier FAQ body
uses "エポックごと" while the "stake" object's body uses "ブロックごと"; update the "stake"
-> "body" string to use the same cadence ("エポックごと") so both sections
consistently state rewards are paid per epoch, ensuring the strings in the
top-level FAQ paragraph (which mentions "エポックごと") and the "stake" entry match.
src/assets/translations/fr/main.json (1)

2592-2613: ⚠️ Potential issue | 🟡 Minor

Align reward cadence wording (par bloc vs par époque).

Line 2597 says rewards are paid “par bloc” while Line 2593 states payouts are “par époque”. That mixed cadence is confusing—please align these to the same unit.

✏️ Proposed fix
-        "body": "Pour staker votre FOX, utilisez l'interface de staking afin de verrouiller vos jetons FOX dans le contrat de staking. Une fois staké, vous commencez à gagner des récompenses en USDC par bloc dans le cadre de vos avantages de staking."
+        "body": "Pour staker votre FOX, utilisez l'interface de staking afin de verrouiller vos jetons FOX dans le contrat de staking. Une fois staké, vous commencez à gagner des récompenses en USDC par époque dans le cadre de vos avantages de staking."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/assets/translations/fr/main.json` around lines 2592 - 2613, The
translation uses mixed cadence terms: "par époque" in the first FAQ body and
"par bloc" in the "stake.body" entry; choose one consistent term (either "par
époque" or "par bloc") and update both occurrences so they match; specifically
edit the string in the first FAQ body (the paragraph containing "flux constant
de revenus passifs en USDC par époque") and the "stake.body" value (the
paragraph containing "commencez à gagner des récompenses en USDC par bloc") to
use the same cadence word and ensure grammar/agreement is correct.
♻️ Duplicate comments (8)
src/assets/translations/ja/main.json (1)

2910-2910: Duplicate: yieldXYZ.actions empty object already flagged.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/assets/translations/ja/main.json` at line 2910, The JSON contains a
duplicate empty "actions" object for the key path yieldXYZ.actions; remove the
redundant entry (or merge if the other duplicate has content) so there's only
one "actions" object for yieldXYZ, ensuring you keep any non-empty values from
the other occurrence and remove this empty `"actions": {}` duplicate.
src/assets/translations/fr/main.json (1)

2910-2910: Duplicate: yieldXYZ.actions empty object already flagged.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/assets/translations/fr/main.json` at line 2910, Remove the duplicate
empty "actions": {} entry for the yieldXYZ translation object — locate the
yieldXYZ object that contains "actions" and either delete this redundant
"actions" key or consolidate it with the existing one so only a single "actions"
property remains; ensure no other translation objects reference this duplicate
key before committing.
src/assets/translations/es/main.json (2)

2612-2613: Gender agreement in “cantidad quemado” (already noted).
This is the same issue previously flagged for this segment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/assets/translations/es/main.json` around lines 2612 - 2613, The Spanish
translation has incorrect gender agreement in the phrase "cantidad quemado"
inside the JSON entry (check the "title" / "body" strings for this FAQ item);
change "cantidad quemado" to "cantidad quemada" and verify any other occurrences
in the same "body" or nearby keys use the feminine form to match "cantidad" so
the sentence reads "...la cantidad de quema de FOX es la porción..." or use
"cantidad quemada" where appropriate for grammatical correctness and
consistency.

2843-2855: Register/terminology consistency in failed messages (already noted).
Same issue as earlier review comment for this segment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/assets/translations/es/main.json` around lines 2843 - 2855, The
failed-message strings under the "deposit", "withdrawal", and "claim" keys are
inconsistent in register/terminology (they use "Tu" and "reclamación" while
related keys use "Su" and "reclamo"); update deposit.failed, withdrawal.failed
and claim.failed to match the same pronoun and noun usage as their corresponding
pending/complete entries (e.g., use "Su" and the same term used in
"claim.complete"/"deposit.complete"/"withdrawal.complete") and ensure the verb
phrasing mirrors the other statuses (e.g., "Su depósito de %{amount} %{symbol}
ha fallado.").
src/assets/translations/uk/main.json (1)

2612-2613: Capitalize the FAQ title to match other headings.
Same issue as previously flagged — the title still starts lowercase.

✏️ Proposed fix
-        "title": "як пов'язані між собою загальна сума зібраних комісій, загальна сума в стейкінгу FOX, пул монет та кількість спалених FOX?",
+        "title": "Як пов'язані між собою загальна сума зібраних комісій, загальна сума в стейкінгу FOX, пул монет та кількість спалених FOX?",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/assets/translations/uk/main.json` around lines 2612 - 2613, Update the
JSON "title" value so its heading is capitalized to match other FAQ headings:
locate the entry with the "title" key whose value currently begins "як
пов'язані..." and change it to start with an uppercase "Як..." (preserve the
rest of the string and the "body" unchanged).
.claude/skills/benchmark-translate/scripts/compile-report.js (2)

55-61: ⚠️ Potential issue | 🟡 Minor

Guard missing glossary.json read with existsSync.

glossary.json is read unconditionally; a missing file will throw ENOENT. Mirror the ground-truth guard so the error is explicit and actionable.

🛡️ Proposed fix
 if (!fs.existsSync(GT_PATH)) {
   console.error('Error: ground-truth.json not found. Run setup.js first.')
   process.exit(1)
 }
 
 const gt = JSON.parse(fs.readFileSync(GT_PATH, 'utf8'))
+if (!fs.existsSync(GLOSSARY_PATH)) {
+  console.error('Error: glossary.json not found at', GLOSSARY_PATH)
+  process.exit(1)
+}
 const glossary = JSON.parse(fs.readFileSync(GLOSSARY_PATH, 'utf8'))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/benchmark-translate/scripts/compile-report.js around lines 55
- 61, Add the same existence guard for GLOSSARY_PATH as is used for GT_PATH:
before reading glossary.json check fs.existsSync(GLOSSARY_PATH), and if false
call console.error with a clear message (e.g., "Error: glossary.json not found.
Run setup.js first.") and process.exit(1); then safely call
JSON.parse(fs.readFileSync(GLOSSARY_PATH, 'utf8')) to assign glossary. This
change touches compile-report.js and the symbols GT_PATH, GLOSSARY_PATH,
glossary, and uses fs.existsSync to mirror the ground-truth guard.

395-398: ⚠️ Potential issue | 🟠 Major

Skill win rate denominator should use judged comparisons.

skillWins is computed only for judged items, but the denominator uses total comparisons, which under-reports win rate when judge scores are partial/missing.

🔧 Proposed fix
-  if (bs.skillWins !== null && bs.totalComparisons > 0) {
-    const bsSwr = Math.round((bs.skillWins / bs.totalComparisons) * 1000) / 10
-    const r3swr = overall.total > 0 ? Math.round((overall.skillWins / overall.total) * 1000) / 10 : null
-    if (r3swr !== null) {
+  if (bs.skillWins !== null) {
+    const bsJudgedCount =
+      bs.humanWins !== null && bs.skillWins !== null && bs.ties !== null
+        ? bs.humanWins + bs.skillWins + bs.ties
+        : null
+    const curJudgedCount = overall.judged.length
+    const bsSwr = bsJudgedCount && bsJudgedCount > 0
+      ? Math.round((bs.skillWins / bsJudgedCount) * 1000) / 10
+      : null
+    const r3swr = curJudgedCount > 0
+      ? Math.round((overall.skillWins / curJudgedCount) * 1000) / 10
+      : null
+    if (bsSwr !== null && r3swr !== null) {
       console.log(`  Skill win rate: ${bsSwr}% → ${r3swr}% (${r3swr >= bsSwr ? '+' : ''}${(r3swr - bsSwr).toFixed(1)}pp)`)
     }
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/benchmark-translate/scripts/compile-report.js around lines
395 - 398, The skill win rate currently divides bs.skillWins by
bs.totalComparisons, but skillWins is computed only over judged comparisons;
change the denominator to the judged-comparisons count used when computing
skillWins (e.g., replace bs.totalComparisons with the corresponding property
like bs.judgedComparisons) and do the same for the overall rate (replace
overall.total with overall.judgedComparisons or the matching judged count field)
so both r3swr and bsSwr use judged comparison counts consistently (refer to
bs.skillWins, bs.totalComparisons, overall.skillWins, overall.total in the
snippet).
.claude/skills/translate/scripts/validate.js (1)

2-26: ⚠️ Potential issue | 🔴 Critical

Prevent runtime crashes by flattening inputs + rejecting non-string/empty source values.

Right now the script assumes source[path]/target[path] are strings and will crash (or miscompute) if it receives nested locale JSON (top-level values are objects) or any non-string leaf. Since script-utils.js already ships flattenJson(), pulling it in here makes validate.js resilient to both nested and already-flattened inputs.

Proposed fix (flatten + type/empty guards)
 const {
   CJK_LOCALES,
   LOCALE_CONFIGS,
   extractPlaceholders,
   stemMatch,
   loadGlossary,
+  flattenJson,
 } = require('./script-utils');

 // ...

-const source = loadJsonArg(sourceArg);
-const target = loadJsonArg(targetArg);
+const source = flattenJson(loadJsonArg(sourceArg));
+const target = flattenJson(loadJsonArg(targetArg));

 // ...

 for (const path of Object.keys(source)) {
   if (rejectedPaths.has(path)) continue;

   const src = source[path];
   const tgt = target[path];

+  if (typeof src !== 'string' || src.trim() === '') {
+    rejected.push({ path, reason: 'empty/non-string source', details: `Expected non-empty string but got ${typeof src}` });
+    continue;
+  }
+
+  if (typeof tgt !== 'string') {
+    rejected.push({ path, reason: 'non-string target', details: `Expected string but got ${typeof tgt}` });
+    continue;
+  }
+
   // Check 3: Empty/whitespace
-  if (tgt === undefined || tgt === null || (typeof tgt === 'string' && tgt.trim() === '')) {
+  if (tgt.trim() === '') {
     rejected.push({ path, reason: 'empty', details: 'Translated value is empty or whitespace-only' });
     continue;
   }

To confirm what shape validate.js receives today (nested vs flattened), you can quickly inspect call sites:

#!/bin/bash
set -euo pipefail

rg -n -C2 --type=js --type=ts --type=sh --type=json "node .*validate\\.js\\b|validate\\.js\\b" || true
rg -n -C2 --type=js "flattenJson\\(|flattenJson\\s*\\(" .claude/skills/translate/scripts || true

Also applies to: 63-110

🧹 Nitpick comments (9)
.claude/skills/translate/SKILL.md (2)

80-86: Language agent has no reliable signal to decide when to pass --force to merge.js.

Step 2 specifies batch items as { dottedPath, englishValue } with no isChanged flag. Step 5 instructs agents to pass --force "only when re-translating changed English strings," but neither the locale bundle schema (Step 4b) nor the agent prompt exposes which strings come from Step 1b (diff-based, potentially changed) versus Step 1c (missing-key detection, always new). Without this metadata, agents cannot decide correctly — universally omitting --force leaves stale translations in place; universally adding it discards valid existing translations.

Consider enriching the batch item schema with an origin field ("new" | "changed") so prepare-locale.js can surface it and the agent can apply --force selectively per string (or per batch when all items are changed).

Also applies to: 296-302

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/SKILL.md around lines 80 - 86, Batch items
currently lack metadata to tell agents which strings are re-translations vs
newly-missing, so add an origin flag and surface it through the pipeline: change
the batch item schema (used by prepare-locale.js) from { dottedPath,
englishValue } to { dottedPath, englishValue, origin: "new" | "changed" },
update prepare-locale.js to include origin for each item it emits, update any
agent prompt templates to show origin per item (or summarize per-batch), and
have agents use origin to decide whether to call merge.js with --force
(per-string or set per-batch when all items are "changed").

43-43: Add language identifiers to fenced code blocks (MD040).

Six fenced code blocks lack a language specifier. These trigger MD040 warnings from markdownlint-cli2. Use text for path snippets and agent prompt templates, bash for shell commands, and json for JSON samples.

Also applies to: 171-171, 236-236, 269-269, 330-330, 360-360

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/SKILL.md at line 43, Several fenced code blocks are
missing language specifiers causing MD040 errors; update each fenced block by
adding the appropriate language after the opening backticks: use "text" for path
snippets and agent prompt templates, "bash" for shell/cli commands, and "json"
for JSON samples. Locate the six offending blocks (the ones containing path
examples, agent prompt templates, shell commands, and JSON examples) and edit
their opening fences to include the correct language identifier so
markdownlint-cli2 no longer emits MD040 warnings.
.claude/skills/translate/scripts/prepare-locale.js (2)

58-64: approvedTerms drops alternates when glossary provides an array.

If the glossary uses arrays to represent multiple acceptable translations, keeping the full array in the bundle improves downstream validation/prompting fidelity.

Option (preserve arrays)
   if (term === '_meta') continue;
   if (typeof value === 'object' && value !== null && value[locale]) {
-    approvedTerms[term] = Array.isArray(value[locale]) ? value[locale][0] : value[locale];
+    approvedTerms[term] = value[locale];
   }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/scripts/prepare-locale.js around lines 58 - 64, The
approvedTerms build currently collapses arrays to their first element, losing
alternates: in prepare-locale.js change the assignment in the loop that fills
approvedTerms (the block referencing glossary, term, value, and locale) so it
preserves the full value[locale] (whether array or scalar) instead of taking
value[locale][0]; keep the existing guards (skip '_meta', null/object check) but
assign approvedTerms[term] = value[locale] so arrays of alternates are retained
for downstream validation/prompting.

26-33: Use path/os.tmpdir() instead of hardcoding /tmp and manual path joins.

This makes the script work consistently on non-Unix environments and avoids path-separator issues.

Proposed refactor (portable paths)
 const fs = require('fs');
+const os = require('os');
+const path = require('path');
 
 // ...
 let batchesPath;
 let termContextPath;
 let fewShotPath;
-let outputPath = `/tmp/translate-${locale}.json`;
+let outputPath = path.join(os.tmpdir(), `translate-${locale}.json`);
 
 // ...
-const localeRulesPath = `${__dirname}/../locales/${locale}.md`;
+const localeRulesPath = path.join(__dirname, '..', 'locales', `${locale}.md`);
 
-const glossaryPath = 'src/assets/translations/glossary.json';
+const glossaryPath = path.resolve('src/assets/translations/glossary.json');

Also applies to: 42-47, 120-121

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/scripts/prepare-locale.js around lines 26 - 33,
Replace hardcoded "/tmp" usages by constructing temp-file paths with the OS temp
directory and path.join: import or require the path and os modules and set
outputPath = path.join(os.tmpdir(), `translate-${locale}.json`) instead of
`/tmp/translate-${locale}.json`; likewise change other manual joins that build
temp file names (the assignments manipulating batchesPath, termContextPath,
fewShotPath and any later occurrences around the blocks referenced at lines
~42-47 and ~120-121) to use path.join(os.tmpdir(), ...) or path.join(baseDir,
...) so path separators are correct across platforms, and update any code that
slices process.argv the same way but assigns into these new portable paths
(preserve the existing flag parsing logic and variable names like outputPath,
batchesPath, termContextPath, fewShotPath).
.claude/skills/translate/scripts/validate.js (1)

131-146: Term-consistency check is case-sensitive on the target.

If established differs only by case (or locale-specific casing), !tgt.includes(established) may produce a false negative. A simple case-insensitive comparison would make this check less brittle.

Possible tweak
-    if (termRegex.test(src) && !tgt.includes(established)) {
+    if (termRegex.test(src) && !tgt.toLowerCase().includes(established.toLowerCase())) {
       flags.push({ path, reason: 'term consistency', details: `"${term}" is typically "${established}" but not found in translation` });
       isFlagged = true;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/scripts/validate.js around lines 131 - 146, The
term-consistency check compares established to tgt case-sensitively; change the
check to a case-insensitive, locale-aware comparison so matches differing only
by case don't trigger a flag: when computing the containment check in the loop
over termContext (symbols: termContext, translations, established, termRegex,
tgt, flags, isFlagged, locale), use locale-aware lowercasing on both tgt and
established (e.g., toLocaleLowerCase(locale)) before calling includes (or use an
Intl.Collator configured for base sensitivity) and keep the rest of the logic
the same so flags.push and isFlagged behave unchanged.
.claude/skills/translate/scripts/script-utils.js (4)

5-7: Make extractPlaceholders() failure mode explicit (non-string input).

If any caller passes a non-string, the current implementation throws with a generic matchAll error. Consider guarding to either (a) return [], or (b) throw a clearer TypeError so the pipeline can surface a targeted message.

Proposed tweak (non-string → empty placeholders)
 function extractPlaceholders(str) {
+  if (typeof str !== 'string') return [];
   return [...str.matchAll(/%\{(\w+)\}/g)].map(m => m[1]);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/scripts/script-utils.js around lines 5 - 7, The
extractPlaceholders function should handle non-string inputs explicitly: at the
start of extractPlaceholders(str) check typeof str === 'string' and if not,
return an empty array ([]) instead of calling matchAll which throws; keep the
existing regex-based extraction for valid strings so callers receive [] for
non-strings rather than a generic matchAll error.

100-103: Don’t silently swallow a missing/invalid glossary unless callers opt in.

loadGlossary() returning {} on missing file is convenient, but it can also hide a mis-typed --glossary path and accidentally disable glossary validation. Consider adding an optional required flag or at least wrapping parse errors with path context.

Possible direction (better parse error context)
 function loadGlossary(glossaryPath) {
   if (!fs.existsSync(glossaryPath)) return {};
-  return JSON.parse(fs.readFileSync(glossaryPath, 'utf8'));
+  try {
+    return JSON.parse(fs.readFileSync(glossaryPath, 'utf8'));
+  } catch (e) {
+    throw new Error(`Failed to parse glossary JSON at "${glossaryPath}": ${e.message}`);
+  }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/scripts/script-utils.js around lines 100 - 103, The
loadGlossary function currently swallows missing/invalid glossaries; change
loadGlossary(glossaryPath) to accept an optional boolean parameter (e.g.,
required = false) so callers can opt-in to hard failures; if required is true
and fs.existsSync(glossaryPath) is false, throw a clear Error indicating the
missing path, and when reading/parsing with fs.readFileSync/JSON.parse catch any
parse errors and rethrow or wrap them with the glossaryPath context (include the
path in the error message) so callers see why parsing failed instead of getting
an empty object.

109-120: Consider explicit array handling in flattenJson() (today it flattens arrays implicitly).

Right now arrays are treated as generic objects (numeric keys). If arrays are not expected in translation payloads, explicitly skipping them (or handling them intentionally) reduces surprising output.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/scripts/script-utils.js around lines 109 - 120, The
flattenJson function currently treats arrays as plain objects causing numeric
keys to be flattened unexpectedly; update flattenJson to explicitly detect
arrays (Array.isArray(value)) and either skip them (continue) if arrays are not
valid translation nodes, or intentionally handle them by iterating values and
using index-aware keys (e.g., `${path}[${i}]`) before recursing; modify the
branch that currently checks typeof value === 'object' to check for arrays first
and then proceed with the chosen behavior so Object.assign(flattenJson(...)) is
only used for real object recursion.

9-65: Ensure suffix stripping prefers longest matches (avoid early short-suffix hits).

stripSuffix() returns on the first matching suffix, so correctness depends on suffixes being ordered longest→shortest. It looks mostly ordered already, but it’s easy to regress later—sorting once defensively avoids subtle false matches.

Proposed tweak (sort suffixes once)
 const LOCALE_CONFIGS = {
   de: {
     stemRatio: 0.75,
     suffixes: [
       'ungen', 'keit', 'lich', 'isch', 'ung', 'ten', 'tet', 'en', 'er', 'es', 'em', 'te', 'st', 'e',
     ],
   },
   // ...
 };
+
+for (const config of Object.values(LOCALE_CONFIGS)) {
+  config.suffixes.sort((a, b) => b.length - a.length);
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/translate/scripts/script-utils.js around lines 9-65:
stripSuffix currently returns on the first matching suffix which can produce
wrong roots if shorter suffixes appear before longer ones; ensure suffix lists
are processed longest-first by sorting each locale's suffix array once at module
load (e.g., transform LOCALE_CONFIGS.*.suffixes into a sorted copy ordered by
descending length) and then have stripSuffix iterate those pre-sorted arrays;
update references to LOCALE_CONFIGS and the stripSuffix function to use the
sorted suffix lists so future edits won't regress ordering.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.claude/skills/translate/locales/ja.md:
- Line 12: Replace the imprecise wording in the ja.md rule that reads "Avoid
mixing katakana and Japanese" with "Avoid mixing katakana and kanji" so the
contrast is clearly between scripts; update the sentence fragment "Avoid mixing
katakana and Japanese in a single compound where it reads unnaturally (e.g.,
ペンディング中の → 保留中の)" to use "kanji" instead of "Japanese" in the locales/ja.md
content.

In @.claude/skills/translate/scripts/prepare-locale.js:
- Around line 23-38: The script currently reads and JSON.parses batchesPath (and
later termContextPath/fewShotPath) directly which throws on missing file or
invalid JSON; update the code around the uses of batchesPath and the similar
reads at the termContextPath/fewShotPath usage (the
JSON.parse(fs.readFileSync(...)) calls) to first check fs.existsSync(path) and
then wrap fs.readFileSync + JSON.parse in a try/catch that prints a friendly
error (including the file path and the parse error message) to console.error and
exits with process.exit(1); ensure you reference the existing variable names
batchesPath, termContextPath, fewShotPath and the parse locations so the guard
is applied to all three reads.

In @.claude/skills/translate/scripts/validate.js:
- Around line 111-153: The glossary “info” flags are never emitted because flags
are only pushed when isFlagged is true; change the end-of-entry logic to always
push all collected flags into flagged (flagged.push(...flags)) and use isFlagged
solely to decide whether to add path to flagged vs passed (if isFlagged then
keep path out of passed otherwise add to passed); also remove the global "g"
flag from the RegExp constructions named termRegex (both in the glossary loop
and the termContext loop) since you only call .test() / .includes() to avoid
lastIndex issues.
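The lastIndex pitfall behind the second part of this comment can be demonstrated in isolation (illustrative strings, not the validator's actual terms):

```javascript
// A /g regex is stateful: .test() advances lastIndex after each hit,
// so repeated calls on the same instance can silently skip matches.
const sticky = new RegExp('handel', 'gi')
const results = []
for (let i = 0; i < 3; i++) results.push(sticky.test('Handelsvolumen'))
// results alternates because the second call searches from index 6.

// Dropping the g flag makes .test() stateless and always correct here.
const stateless = new RegExp('handel', 'i')
const stable = []
for (let i = 0; i < 3; i++) stable.push(stateless.test('Handelsvolumen'))
```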

In `@src/assets/translations/pt/main.json`:
- Line 3002: The translation key "cooldownNotice" already correctly preserves
the placeholder %{cooldownDuration} and reads naturally in PT-BR, so leave the
value for cooldownNotice unchanged; ensure any automated translation tooling
does not alter the %{cooldownDuration} token. Additionally, as a follow-up
cleanup (not required for this PR), review the "successUnstaked" string and
replace the mixed English/loan-word phrase with a proper Portuguese translation
(e.g., fix "You unstakou com sucesso %{amount} %{symbol}" to a consistent PT-BR
version).

In `@src/assets/translations/zh/main.json`:
- Line 3002: The translation for the key "cooldownNotice" uses the inconsistent
term "提款"; update the value for "cooldownNotice" to use "提现" instead (e.g.,
change "您的提款将在 %{cooldownDuration} 后可用。" to use "提现") so it matches neighboring
yield strings and maintains consistent wording across translations.

---

Outside diff comments:
In `@src/assets/translations/fr/main.json`:
- Around line 2592-2613: The translation uses mixed cadence terms: "par époque"
in the first FAQ body and "par bloc" in the "stake.body" entry; choose one
consistent term (either "par époque" or "par bloc") and update both occurrences
so they match; specifically edit the string in the first FAQ body (the paragraph
containing "flux constant de revenus passifs en USDC par époque") and the
"stake.body" value (the paragraph containing "commencez à gagner des récompenses
en USDC par bloc") to use the same cadence word and ensure grammar/agreement is
correct.

In `@src/assets/translations/ja/main.json`:
- Around line 2592-2613: The translation contains conflicting reward cadence
wording: the earlier FAQ body uses "エポックごと" while the "stake" object's body uses
"ブロックごと"; update the "stake" -> "body" string to use the same cadence ("エポックごと")
so both sections consistently state rewards are paid per epoch, ensuring the
strings in the top-level FAQ paragraph (which mentions "エポックごと") and the "stake"
entry match.

---

Duplicate comments:
In @.claude/skills/benchmark-translate/scripts/compile-report.js:
- Around line 55-61: Add the same existence guard for GLOSSARY_PATH as is used
for GT_PATH: before reading glossary.json check fs.existsSync(GLOSSARY_PATH),
and if false call console.error with a clear message (e.g., "Error:
glossary.json not found. Run setup.js first.") and process.exit(1); then safely
call JSON.parse(fs.readFileSync(GLOSSARY_PATH, 'utf8')) to assign glossary. This
change touches compile-report.js and the symbols GT_PATH, GLOSSARY_PATH,
glossary, and uses fs.existsSync to mirror the ground-truth guard.
- Around line 395-398: The skill win rate currently divides bs.skillWins by
bs.totalComparisons, but skillWins is computed only over judged comparisons;
change the denominator to the judged-comparisons count used when computing
skillWins (e.g., replace bs.totalComparisons with the corresponding property
like bs.judgedComparisons) and do the same for the overall rate (replace
overall.total with overall.judgedComparisons or the matching judged count field)
so both r3swr and bsSwr use judged comparison counts consistently (refer to
bs.skillWins, bs.totalComparisons, overall.skillWins, overall.total in the
snippet).
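A minimal illustration of the denominator fix in the second bullet; the property names follow the review's suggestion and may not match compile-report.js exactly:

```javascript
// Win rate over judged comparisons only, since skillWins is counted
// only for comparisons a judge actually scored.
const winRate = ({ skillWins, judgedComparisons }) =>
  judgedComparisons === 0 ? 0 : skillWins / judgedComparisons
```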

In `@src/assets/translations/es/main.json`:
- Around line 2612-2613: The Spanish translation has incorrect gender agreement
in the phrase "cantidad quemado" inside the JSON entry (check the "title" /
"body" strings for this FAQ item); change "cantidad quemado" to "cantidad
quemada" and verify any other occurrences in the same "body" or nearby keys use
the feminine form to match "cantidad" so the sentence reads "...la cantidad de
quema de FOX es la porción..." or use "cantidad quemada" where appropriate for
grammatical correctness and consistency.
- Around line 2843-2855: The failed-message strings under the "deposit",
"withdrawal", and "claim" keys are inconsistent in register/terminology (they
use "Tu" and "reclamación" while related keys use "Su" and "reclamo"); update
deposit.failed, withdrawal.failed and claim.failed to match the same pronoun and
noun usage as their corresponding pending/complete entries (e.g., use "Su" and
the same term used in "claim.complete"/"deposit.complete"/"withdrawal.complete")
and ensure the verb phrasing mirrors the other statuses (e.g., "Su depósito de
%{amount} %{symbol} ha fallado.").

In `@src/assets/translations/fr/main.json`:
- Line 2910: Remove the duplicate empty "actions": {} entry for the yieldXYZ
translation object — locate the yieldXYZ object that contains "actions" and
either delete this redundant "actions" key or consolidate it with the existing
one so only a single "actions" property remains; ensure no other translation
objects reference this duplicate key before committing.

In `@src/assets/translations/ja/main.json`:
- Line 2910: The JSON contains a duplicate empty "actions" object for the key
path yieldXYZ.actions; remove the redundant entry (or merge if the other
duplicate has content) so there's only one "actions" object for yieldXYZ,
ensuring you keep any non-empty values from the other occurrence and remove this
empty `"actions": {}` duplicate.

In `@src/assets/translations/uk/main.json`:
- Around line 2612-2613: Update the JSON "title" value so its heading is
capitalized to match other FAQ headings: locate the entry with the "title" key
whose value currently begins "як пов'язані..." and change it to start with an
uppercase "Як..." (preserve the rest of the string and the "body" unchanged).

---

Nitpick comments:
In @.claude/skills/translate/scripts/prepare-locale.js:
- Around line 58-64: The approvedTerms build currently collapses arrays to their
first element, losing alternates: in prepare-locale.js change the assignment in
the loop that fills approvedTerms (the block referencing glossary, term, value,
and locale) so it preserves the full value[locale] (whether array or scalar)
instead of taking value[locale][0]; keep the existing guards (skip '_meta',
null/object check) but assign approvedTerms[term] = value[locale] so arrays of
alternates are retained for downstream validation/prompting.
- Around line 26-33: Replace hardcoded "/tmp" usages by constructing temp-file
paths with the OS temp directory and path.join: import or require the path and
os modules and set outputPath = path.join(os.tmpdir(),
`translate-${locale}.json`) instead of `/tmp/translate-${locale}.json`; likewise
change other manual joins that build temp file names (the assignments
manipulating batchesPath, termContextPath, fewShotPath and any later occurrences
around the blocks referenced at lines ~42-47 and ~120-121) to use
path.join(os.tmpdir(), ...) or path.join(baseDir, ...) so path separators are
correct across platforms, and update any code that slices process.argv the same
way but assigns into these new portable paths (preserve the existing flag
parsing logic and variable names like outputPath, batchesPath, termContextPath,
fewShotPath).

In @.claude/skills/translate/scripts/script-utils.js:
- Around line 5-7: The extractPlaceholders function should handle non-string
inputs explicitly: at the start of extractPlaceholders(str) check typeof str ===
'string' and if not, return an empty array ([]) instead of calling matchAll
which throws; keep the existing regex-based extraction for valid strings so
callers receive [] for non-strings rather than a generic matchAll error.
- Around line 100-103: The loadGlossary function currently swallows
missing/invalid glossaries; change loadGlossary(glossaryPath) to accept an
optional boolean parameter (e.g., required = false) so callers can opt-in to
hard failures; if required is true and fs.existsSync(glossaryPath) is false,
throw a clear Error indicating the missing path, and when reading/parsing with
fs.readFileSync/JSON.parse catch any parse errors and rethrow or wrap them with
the glossaryPath context (include the path in the error message) so callers see
why parsing failed instead of getting an empty object.
- Around line 109-120: The flattenJson function currently treats arrays as plain
objects causing numeric keys to be flattened unexpectedly; update flattenJson to
explicitly detect arrays (Array.isArray(value)) and either skip them (continue)
if arrays are not valid translation nodes, or intentionally handle them by
iterating values and using index-aware keys (e.g., `${path}[${i}]`) before
recursing; modify the branch that currently checks typeof value === 'object' to
check for arrays first and then proceed with the chosen behavior so
Object.assign(flattenJson(...)) is only used for real object recursion.
- Around line 9-65: stripSuffix currently returns on the first matching suffix
which can produce wrong roots if shorter suffixes appear before longer ones;
ensure suffix lists are processed longest-first by sorting each locale's suffix
array once at module load (e.g., transform LOCALE_CONFIGS.*.suffixes into a
sorted copy ordered by descending length) and then have stripSuffix iterate
those pre-sorted arrays; update references to LOCALE_CONFIGS and the stripSuffix
function to use the sorted suffix lists so future edits won't regress ordering.
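The non-string guard from the first nitpick above can be sketched as follows; the `%{name}` regex mirrors the project's placeholder style but may differ from the actual constant in script-utils.js:

```javascript
// Return [] for non-string inputs instead of letting matchAll throw.
const extractPlaceholders = str => {
  if (typeof str !== 'string') return []
  return [...str.matchAll(/%\{(\w+)\}/g)].map(m => m[0])
}
```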

In @.claude/skills/translate/scripts/validate.js:
- Around line 131-146: The term-consistency check compares established to tgt
case-sensitively; change the check to a case-insensitive, locale-aware
comparison so matches differing only by case don't trigger a flag: when
computing the containment check in the loop over termContext (symbols:
termContext, translations, established, termRegex, tgt, flags, isFlagged,
locale), use locale-aware lowercasing on both tgt and established (e.g.,
toLocaleLowerCase(locale)) before calling includes (or use an Intl.Collator
configured for base sensitivity) and keep the rest of the logic the same so
flags.push and isFlagged behave unchanged.
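A sketch of the locale-aware containment check; Turkish dotted/dotless I is the classic case that plain toLowerCase() gets wrong:

```javascript
// Case-insensitive, locale-aware "does tgt contain established term".
// toLocaleLowerCase('tr') maps İ→i correctly, unlike toLowerCase().
const containsTerm = (text, term, locale) =>
  text.toLocaleLowerCase(locale).includes(term.toLocaleLowerCase(locale))
```

An Intl.Collator with base sensitivity is an equivalent option when whole-string equality rather than containment is needed.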

In @.claude/skills/translate/SKILL.md:
- Around line 80-86: Batch items currently lack metadata to tell agents which
strings are re-translations vs newly-missing, so add an origin flag and surface
it through the pipeline: change the batch item schema (used by
prepare-locale.js) from { dottedPath, englishValue } to { dottedPath,
englishValue, origin: "new" | "changed" }, update prepare-locale.js to include
origin for each item it emits, update any agent prompt templates to show origin
per item (or summarize per-batch), and have agents use origin to decide whether
to call merge.js with --force (per-string or set per-batch when all items are
"changed").
- Line 43: Several fenced code blocks are missing language specifiers causing
MD040 errors; update each fenced block by adding the appropriate language after
the opening backticks: use "text" for path snippets and agent prompt templates,
"bash" for shell/cli commands, and "json" for JSON samples. Locate the six
offending blocks (the ones containing path examples, agent prompt templates,
shell commands, and JSON examples) and edit their opening fences to include the
correct language identifier so markdownlint-cli2 no longer emits MD040 warnings.
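The origin flag proposed in the first SKILL.md bullet could look like this; the field values are illustrative, and only dottedPath/englishValue exist in the current batch schema:

```javascript
// Assumed batch-item shape once `origin` is added.
const batchItem = {
  dottedPath: 'RFOX.cooldownNotice', // hypothetical key path
  englishValue: 'Your funds will be available in %{cooldownDuration}.',
  origin: 'changed', // 'new' | 'changed'; drives merge.js --force
}

// Per-batch decision: --force only when every item is a re-translation.
const needsForce = batch => batch.every(item => item.origin === 'changed')
```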

ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between e7f7ef3 and 71dbb57.

📒 Files selected for processing (24)
  • .claude/skills/benchmark-translate/scripts/compile-report.js
  • .claude/skills/translate/SKILL.md
  • .claude/skills/translate/locales/de.md
  • .claude/skills/translate/locales/es.md
  • .claude/skills/translate/locales/fr.md
  • .claude/skills/translate/locales/ja.md
  • .claude/skills/translate/locales/pt.md
  • .claude/skills/translate/locales/ru.md
  • .claude/skills/translate/locales/tr.md
  • .claude/skills/translate/locales/uk.md
  • .claude/skills/translate/locales/zh.md
  • .claude/skills/translate/scripts/prepare-locale.js
  • .claude/skills/translate/scripts/script-utils.js
  • .claude/skills/translate/scripts/validate.js
  • src/assets/translations/de/main.json
  • src/assets/translations/es/main.json
  • src/assets/translations/fr/main.json
  • src/assets/translations/glossary.json
  • src/assets/translations/ja/main.json
  • src/assets/translations/pt/main.json
  • src/assets/translations/ru/main.json
  • src/assets/translations/tr/main.json
  • src/assets/translations/uk/main.json
  • src/assets/translations/zh/main.json
🚧 Files skipped from review as they are similar to previous changes (5)
  • src/assets/translations/glossary.json
  • .claude/skills/translate/locales/fr.md
  • .claude/skills/translate/locales/es.md
  • .claude/skills/translate/locales/ru.md
  • .claude/skills/translate/locales/de.md

Member

@0xApotheosis 0xApotheosis left a comment

Given the surface area of this guy I have no issue with getting it in and then iterating on it. Great PoC @premiumjibles.

premiumjibles and others added 3 commits February 25, 2026 10:23
… İ/i, add "stake" glossary

Strip %{placeholder} tokens before glossary term matching to eliminate
false positives (e.g. %{tradeFeeSource} triggering "trade" match).
Add locale-aware toLower() for Turkish İ→i case mapping in stemMatch.
Add "stake" glossary entry for root-form matches across locales.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
- Fix Spanish gender agreement: "cantidad quemado" → "cantidad quemada"
- Fix Spanish register inconsistency: align failed messages to formal "Su"
- Fix Ukrainian FAQ title capitalization: "як" → "Як"
- Fix Chinese withdrawal term consistency: "提款" → "提现"
- Fix ja.md wording: "Japanese" → "kanji" for script contrast
- Fix validate.js: emit info-severity glossary flags instead of dropping them
- Fix select-keys.js: persist stale key pruning to disk

Co-Authored-By: Claude Opus 4.6 <[email protected]>
@premiumjibles premiumjibles enabled auto-merge (squash) February 25, 2026 22:16
@premiumjibles premiumjibles merged commit 6a4d0ee into develop Feb 25, 2026
4 checks passed
@premiumjibles premiumjibles deleted the agent-translations branch February 25, 2026 22:29
"volume24h": "Volume sur 24h",
"volMcap": "Vol/Cap",
"about": "Informations sur",
"about": "À propos de %{symbol}",
Contributor

@premiumjibles @0xApotheosis For your information, this will suffer from the "de" elision issue ("de ETH", which is incorrect vs. "d'ETH").
The human translation was deliberately not literal to avoid this issue and used "sur" (which like "en" doesn't need elisions before vowels or silent h's, but "en" in this context would be wrong).

The placeholder %{symbol} in English has been added by myself in #11927 (to allow more flexibility in translations for languages that have another order for words) and was supposed to get added/updated in the next sync in the translations.

As it's uncertain if/when there will be a next sync, it's good that it gets updated... but clearly, if more translations are going to be updated this way before the rules/glossary improve substantially, we will degrade translation quality across all languages.

Member

@premiumjibles do we have a plan to address @firebomb1's comment above?

Contributor

@firebomb1 firebomb1 Mar 2, 2026

For some context, we've started the work to build better rules for this skill in #12028 and I've gathered (unpaid) feedback about the AI suggestions from our bounty translators... since the amount of work is substantial and should be compensated, my own plan is to include this in the upcoming governance proposal for translations.

But in the meantime my suggestion would be to refrain from changing translations without human review; better to let them fall back to English (and maybe use this script to invalidate older translations if needed).



Development

Successfully merging this pull request may close these issues.

Claude Code skill: Automated i18n translations

3 participants