Conversation
📝 Walkthrough

Refactored the DeFi L2 data storage layer: new file-based caching for historical data, DB-side daily aggregation, enhanced pricing verification including chain-specific validation, dryRun mode support, and improved Stellar supply deduplication. Added a new verification module and a file-cache module.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Application
    participant DB as PostgreSQL
    participant FileCache as File Cache
    participant R2 as External Storage
    App->>DB: storeHistoricalToDB (write event)
    activate DB
    DB-->>App: Write confirmed
    deactivate DB
    App->>App: precomputeHistoricalCache()
    activate App
    App->>DB: Read chainassets2
    activate DB
    DB-->>App: Return latest data
    deactivate DB
    App->>FileCache: storeChainHistory(chain, data)
    activate FileCache
    FileCache-->>App: Cache stored
    deactivate FileCache
    App->>FileCache: storeAllChainsHistory(data)
    activate FileCache
    FileCache-->>App: All-chains cache stored
    deactivate FileCache
    deactivate App
    App->>FileCache: fetchHistoricalFromDB(chain, breakdown=false)
    activate FileCache
    FileCache->>FileCache: readChainHistory(chain)
    alt Cache Hit
        FileCache-->>App: Return cached data
    else Cache Miss
        deactivate FileCache
        App->>DB: fetchHistoricalFromDBDirect()
        activate DB
        DB-->>App: Return DB data
        deactivate DB
        App->>FileCache: Warm cache with fetched data
        activate FileCache
        FileCache-->>App: Return fetched data
        deactivate FileCache
    end
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~65 minutes

🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@defi/l2/v2/storeToDb.ts`:
- Line 39: The call to precomputeHistoricalCache() in storeHistoricalToDB() must
be treated as best-effort so a cache rebuild failure does not reject the DB
persistence; wrap the precomputeHistoricalCache() invocation in a try/catch
inside storeHistoricalToDB() (catch and log the error with context but do not
rethrow) so the DB write succeeds even if cache rebuild fails, and adjust the
cache-read logic referenced around the acceptance-of-any-non-empty-cache (the
code at the acceptance check used when serving cached reads) to gate
freshness—add a simple freshness check (e.g., timestamp or size+age) before
returning the cached file and fall back to serve the persisted DB row if the
cache is stale or empty. Ensure you update references in the same function
(storeHistoricalToDB) and the cache-read/acceptance logic (the non-empty-cache
check) so failures are non-fatal and stale caches aren’t served.
- Around line 45-50: The daily bucketing uses date_trunc('day',
to_timestamp(timestamp)) which respects the session timezone; update all three
queries in precomputeHistoricalCache() and fetchHistoricalFromDBDirect() to
bucket in UTC by applying AT TIME ZONE 'UTC' to the timestamp expression (e.g.
replace date_trunc('day', to_timestamp(timestamp)) with date_trunc('day',
to_timestamp(timestamp) AT TIME ZONE 'UTC') and use the same expression in the
corresponding ORDER BY/distinct on clauses) so the Unix epoch seconds are
grouped by UTC days consistently.
In `@defi/src/getChainAssetsChart.ts`:
- Around line 8-9: The code calls chainParam.replace(...) without checking
chainParam exists, causing a TypeError for malformed requests; update the
handler to guard against a missing path param by checking
event.pathParameters?.chain (or chainParam) before normalizing, and if missing
return/throw a clear validation error; specifically modify the logic around
chainParam and the call to getChainIdFromDisplayName to only call
chainParam.replace when chainParam is a non-empty string and otherwise handle
the missing param with a proper validation response.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 924828c4-f637-4816-be8b-0cb451ac0f6d
📒 Files selected for processing (3)
- defi/l2/v2/storeToDb.ts
- defi/src/getChainAssets.ts
- defi/src/getChainAssetsChart.ts
```ts
// delete old hourly datas here??
sql.end();

await precomputeHistoricalCache();
```
Treat cache rebuild failures as non-fatal.
By Line 39 the DB row is already persisted, but precomputeHistoricalCache() can still reject storeHistoricalToDB(). In defi/l2/v2/index.ts this runs inside Promise.all(...), so a file-cache failure can make the publish look failed after persistence, while the concurrent chainAssets2 upload may already have completed. Because Lines 95-99 accept any non-empty cache, the old file can then keep being served until the next successful rebuild.
Make the cache refresh best-effort, or freshness-gate cached reads before returning them.
Also applies to: 95-99
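The best-effort pattern the comment asks for can be sketched as follows. The helper names and the 6h freshness window are assumptions for illustration, not the real storeToDb.ts signatures:

```typescript
// Assumed freshness window; the review only asks for "a simple freshness check".
const MAX_CACHE_AGE_MS = 6 * 60 * 60 * 1000;

// Persist first; a cache-rebuild failure is logged but never rethrown,
// so the publish succeeds as long as the DB write did.
async function persistThenRebuild(
  writeToDb: () => Promise<void>,
  rebuildCache: () => Promise<void>,
): Promise<void> {
  await writeToDb();
  try {
    await rebuildCache();
  } catch (e) {
    console.error("cache rebuild failed (non-fatal):", (e as Error).message);
  }
}

// Freshness gate for the read path: reject non-empty but stale cache files.
function isCacheUsable(sizeBytes: number, mtimeMs: number, now = Date.now()): boolean {
  return sizeBytes > 0 && now - mtimeMs <= MAX_CACHE_AGE_MS;
}
```

With this split, a rebuild failure degrades to a stale-but-gated cache rather than a failed publish.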
```ts
const rows = await queryPostgresWithRetry(
  sql`
    select distinct on (date_trunc('day', to_timestamp(timestamp)))
      timestamp, value
    from chainassets2
    order by date_trunc('day', to_timestamp(timestamp)) asc, timestamp desc
```
🧩 Analysis chain
🌐 Web query:
In PostgreSQL, does date_trunc('day', to_timestamp(epoch_seconds)) use the current session time zone, and what SQL pattern should be used to bucket Unix epoch timestamps by UTC day?
💡 Result:
Yes, date_trunc('day', to_timestamp(epoch_seconds)) uses the current session time zone. to_timestamp(double precision) returns timestamp with time zone (timestamptz), representing the UTC instant from Unix epoch seconds since 1970-01-01 00:00:00+00. However, date_trunc('day', timestamptz) truncates to the start of the day in the session's TimeZone: it converts the UTC instant to session local time, truncates to midnight local, and returns the equivalent UTC instant. For example, with TimeZone 'America/New_York', a timestamp near UTC midnight may truncate to the previous calendar day in local time. To bucket Unix epoch timestamps by UTC day reliably, use date_trunc('day', to_timestamp(epoch_seconds), 'UTC'). This truncates explicitly in UTC to 00:00:00+00 of the UTC day. Alternatively, cast to date after explicit UTC conversion: date_trunc('day', to_timestamp(epoch_seconds) AT TIME ZONE 'UTC')::date, which returns a timezone-agnostic date.
🏁 Scripts executed against DefiLlama/defillama-server:

```shell
find . -name "storeToDb.ts" -type f
wc -l ./defi/l2/v2/storeToDb.ts
cat -n ./defi/l2/v2/storeToDb.ts | sed -n '40,55p'
cat -n ./defi/l2/v2/storeToDb.ts | sed -n '110,135p'
```
Bucket the daily snapshots in UTC.
The queries use `date_trunc('day', to_timestamp(timestamp))`, which buckets by the session's configured time zone, not UTC. Since `timestamp` is a Unix epoch in seconds, a non-UTC session will select the last row of the local day instead of the UTC day, shifting data points around midnight. Apply the fix to all three affected queries:

- Lines 47, 50 in `precomputeHistoricalCache()`
- Lines 117, 120 in `fetchHistoricalFromDBDirect()` (first query branch)
- Lines 128, 131 in `fetchHistoricalFromDBDirect()` (second query branch)
Suggested query change

```diff
-select distinct on (date_trunc('day', to_timestamp(timestamp)))
+select distinct on (date_trunc('day', to_timestamp(timestamp), 'UTC'))
   timestamp, value
 from chainassets2
-order by date_trunc('day', to_timestamp(timestamp)) asc, timestamp desc
+order by date_trunc('day', to_timestamp(timestamp), 'UTC') asc, timestamp desc
```

Alternatively, use AT TIME ZONE 'UTC' instead of the third parameter.
Actionable comments posted: 1
🧹 Nitpick comments (1)
defi/l2/v2/file-cache.ts (1)
27-35: Consider atomic writes to avoid serving partial JSON.
`writeFile` is not atomic: a crash or concurrent overlap can leave a truncated file on disk. Readers would recover via the `catch` in `readData` (returning `null`), so it's not data-corrupting, but a cached day could silently disappear until the next refresh. Writing to a temp path and `rename`-ing is cheap insurance here.

♻️ Proposed change

```diff
 async function storeData(subPath: string, data: any): Promise<void> {
   const filePath = path.join(VERSIONED_CACHE_DIR, subPath);
   await ensureDirExists(path.dirname(filePath));
   try {
-    await fs.promises.writeFile(filePath, JSON.stringify(data));
+    const tmpPath = `${filePath}.${process.pid}.${Date.now()}.tmp`;
+    await fs.promises.writeFile(tmpPath, JSON.stringify(data));
+    await fs.promises.rename(tmpPath, filePath);
   } catch (e) {
     console.error(`Error storing cache ${filePath}:`, (e as any)?.message);
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@defi/l2/v2/file-cache.ts` around lines 27 - 35, storeData currently writes JSON directly with fs.promises.writeFile which can leave a truncated file on crash; to make writes atomic, write the JSON to a temporary file in the same directory (e.g., derive a temp name from filePath like filePath + `.tmp-${process.pid}-${Date.now()}`), fs.promises.writeFile the temp, then fs.promises.rename(tempPath, filePath) to atomically replace the target; update storeData to perform this temp-write-and-rename sequence and keep the existing error handling (and ensure ensureDirExists is still called on path.dirname(filePath)).
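The temp-write-and-rename pattern reduces to the following self-contained sketch; `storeDataAtomic` is an illustrative name, not the module's real export:

```typescript
import * as fs from "fs";
import * as path from "path";

// Write to a sibling temp file, then rename over the target. rename() within a
// single filesystem replaces the file atomically, so readers never observe a
// half-written JSON document.
async function storeDataAtomic(filePath: string, data: unknown): Promise<void> {
  await fs.promises.mkdir(path.dirname(filePath), { recursive: true });
  const tmpPath = `${filePath}.${process.pid}.${Date.now()}.tmp`;
  await fs.promises.writeFile(tmpPath, JSON.stringify(data));
  await fs.promises.rename(tmpPath, filePath);
}
```

The pid+timestamp suffix keeps concurrent writers from clobbering each other's temp files; the last rename wins with a complete document either way.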
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@defi/l2/v2/file-cache.ts`:
- Around line 47-53: Sanitize the chain parameter before using it in file paths
in storeChainHistory and readChainHistory by normalizing it the same way as
fileNameNormalizer in defi/src/api2/cache/file-cache.ts: remove characters
matching /[^a-zA-Z0-9\/\.]/g, convert to lowercase, and fall back to a safe
default (e.g., "unknown") if the result is empty; apply this normalizedChain
when building the path (`history/${normalizedChain}.json`) and update both
functions accordingly (or import and reuse the existing normalizer if
available).
```ts
export async function storeChainHistory(chain: string, data: any[]): Promise<void> {
  await storeData(`history/${chain}.json`, data);
}

export async function readChainHistory(chain: string): Promise<any[] | null> {
  return readData(`history/${chain}.json`);
}
```
🧩 Analysis chain

🏁 Scripts executed against DefiLlama/defillama-server:

```shell
#!/bin/bash
# Confirm all call sites of storeChainHistory/readChainHistory pass safe identifiers.
rg -nP -C2 '\b(storeChainHistory|readChainHistory)\s*\('

# Examine storeToDb.ts to understand where chain comes from
head -100 defi/l2/v2/storeToDb.ts | tail -50

# Check what storeData and readData do in the same file
grep -n "function storeData\|function readData" defi/l2/v2/file-cache.ts

# Look at actual chain identifiers - search for common chain names
rg -o '\b(ethereum|polygon|arbitrum|optimism|avalanche|fantom|bsc|gnosis)\b' --max-count=5

# Compare with defi/src/api2/cache/file-cache.ts to see the normalizer pattern
cat -n defi/src/api2/cache/file-cache.ts | head -80
cat -n defi/l2/v2/file-cache.ts

# Check if there are similar pattern normalizers elsewhere in the l2 directory
rg -n "fileNameNormalizer|toLowerCase|replace" defi/l2/v2/file-cache.ts
```
Sanitize chain before using it as a path segment.
chain is interpolated directly into the file path without validation. While current callers pass known identifiers (database keys from Object.keys(perChainTotals)), a guard is prudent for a public module export.
However, the proposed regex /^[a-zA-Z0-9_.-]+$/ diverges from the normalization pattern already established in defi/src/api2/cache/file-cache.ts, which uses /[^a-zA-Z0-9\/\.]/g — removing hyphens and underscores entirely rather than allowing them. For consistency, align the guard with the existing fileNameNormalizer approach: strip invalid characters and lowercase, rather than rejecting on presence.
Consistent guard pattern

```diff
+function normalizeChainKey(chain: string): string {
+  return chain.replace(/[^a-zA-Z0-9_.-]/g, '').toLowerCase();
+}
+
 export async function storeChainHistory(chain: string, data: any[]): Promise<void> {
-  await storeData(`history/${chain}.json`, data);
+  await storeData(`history/${normalizeChainKey(chain)}.json`, data);
 }

 export async function readChainHistory(chain: string): Promise<any[] | null> {
-  return readData(`history/${chain}.json`);
+  return readData(`history/${normalizeChainKey(chain)}.json`);
 }
```

This also avoids case-collision issues on case-insensitive filesystems (macOS, some deployments) vs. case-sensitive Linux in prod — a pattern already used in the repo's other file-cache normalizers.
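As a standalone, testable version of the guard: the character class and the "unknown" fallback follow the review's proposal, not any existing repo export:

```typescript
// Strip anything outside [a-zA-Z0-9_.-], lowercase, and fall back to a safe
// default so an all-invalid input cannot produce an empty path segment.
function normalizeChainKey(chain: string): string {
  const normalized = chain.replace(/[^a-zA-Z0-9_.-]/g, "").toLowerCase();
  return normalized.length > 0 ? normalized : "unknown";
}
```

Removing `/` (and `\`) is what neutralizes path traversal; lowercasing handles the case-insensitive-filesystem collisions mentioned above.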
… verify & cache
- stellar: uppercase asset_issuer in getStellarSupplies (Horizon is case-sensitive
on issuer); dedupe by (code, issuer) so versioned ES variants don't double-count;
add stellar to chainsWithCaseSensitiveDataProviders.
Result: stellar data restored (was total=0); USDC $240M matches Horizon supply.
- aptos: fall back to public fullnode when APTOS_RPC is unset so per-token
supply fetches succeed; recovers ~$1.69B of aptos TVL that was dropping
to zero.
- canonical / protocol bridges (noble, xion, echelon_initia, inertia):
after fetchOutgoingAmountsFromDB, batch-resolve any bridge-emitted keys
missing from allPrices via coins.getPrices. Adds a symbol-keyed fallback
for adapters that emit plain symbols. Lowercase source-chain key in
fetchOutgoingAmountsFromDB so sourceChainAmounts aligns with allChainKeys.
- verify: new verifyChangesV2 that compares against the latest chainassets2
row (instead of the V1 /chain-assets/chains API), adds a "regressed to
zero" check with a $100k floor, and skips the drift check when prev is
>6h stale. V1 verifyChanges kept unchanged for V1.
- read-path: warm the file cache on DB-direct hits (25s cold -> <50ms warm),
add a null-chain guard to fetchHistoricalFromDBDirect, and fix a
d[c]/d[chain] alias in the single-chain breakdown branch.
- yields/stablecoins endpoints: prefer pro-api.llama.fi/${INTERNAL_API_KEY}
when the key is set, with graceful degradation when upstream returns a
non-array (e.g. 402 paywall).
- storeChainAssetsV2 now takes an optional dryRun flag that skips all prod
writes; used for local end-to-end verification.
Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
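The (code, issuer) dedupe described in the stellar bullet above could look like this sketch; the types and issuer strings are illustrative, not the actual getStellarSupplies code:

```typescript
// Illustrative shapes; the real code works on Horizon asset records.
type StellarAsset = { code: string; issuer: string; amount: number };

// Collapse versioned classic-asset variants onto one (code, issuer) key so the
// same asset is not counted twice. Issuers are upper-cased once, since Horizon
// treats issuer ids as case-sensitive and expects the canonical uppercase form.
function dedupeByCodeIssuer(assets: StellarAsset[]): StellarAsset[] {
  const seen = new Map<string, StellarAsset>();
  for (const a of assets) {
    const key = `${a.code}-${a.issuer.toUpperCase()}`;
    if (!seen.has(key)) seen.set(key, a); // keep the first occurrence
  }
  return [...seen.values()];
}
```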
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
defi/l2/v2/index.ts (1)
432-490: ⚠️ Potential issue | 🟠 Major

Store symbol-fallback amounts under the resolved price key.

The fallback resolves `coinData` by symbol, but Line 490 still stores the amount under the unresolved key like `coingecko:axlusdc`. That key usually has no `allMcaps` entry and no `allPrices` entry, so the amount is dropped later or breaks symbol mapping.

Suggested fix

```diff
-const symbolToPrice: { [symbol: string]: CoinsApiData } = {};
-Object.values(allPrices).forEach((p) => {
+const symbolToPrice: { [symbol: string]: { key: string; data: CoinsApiData } } = {};
+Object.entries(allPrices).forEach(([priceKey, p]) => {
   const sym = p?.symbol?.toUpperCase();
   if (!sym) return;
   // Prefer the first resolution; symbol collisions are ambiguous by construction.
-  if (!symbolToPrice[sym]) symbolToPrice[sym] = p;
+  if (!symbolToPrice[sym]) symbolToPrice[sym] = { key: priceKey, data: p };
 });
@@
 let coinData = allPrices[key];
+let outputKey = key;
 // When the adapter was symbol-keyed, `key` looks like "coingecko:axlusdc" but
 // there is no such coingecko id. Fall back to a symbol reverse lookup.
@@
 const symbol = key.slice("coingecko:".length).toUpperCase();
 const resolved = symbolToPrice[symbol];
-if (resolved?.price) {
-  coinData = resolved;
+if (resolved?.data?.price) {
+  coinData = resolved.data;
+  outputKey = resolved.key;
   isWholeTokenAmount = true;
 }
@@
-nativeDataAfterDeductions[chain][key] = usdAmount;
+nativeDataAfterDeductions[chain][outputKey] = usdAmount;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@defi/l2/v2/index.ts` around lines 432 - 490, The symbol-fallback branch currently uses the original unresolved key (e.g., "coingecko:axlusdc") when writing into nativeDataAfterDeductions, which causes those amounts to be lost; update the destination handling so that when you resolve coinData via symbolToPrice (the resolved variable), compute and use the resolved price key (e.g., the normalized key for the resolved CoinGecko id) instead of the original key, keep the isWholeTokenAmount logic for divisor calculation, and write nativeDataAfterDeductions[chain][resolvedKey] = usdAmount (or nativeDataAfterDeductions[chain][key] = usdAmount only when no resolution happened); refer to symbolToPrice, key, resolved, coinData, isWholeTokenAmount, destinationChainAmount and nativeDataAfterDeductions to locate and change the assignment.
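The key-preserving reverse lookup at the heart of that fix reduces to the following runnable sketch; `CoinData` and the sample ids are illustrative stand-ins for the real `CoinsApiData`:

```typescript
// Illustrative type; the real module's CoinsApiData has more fields.
type CoinData = { symbol?: string; price?: number };

// Index prices by upper-cased symbol, keeping the original price key so callers
// can write amounts under a key that actually exists in allPrices/allMcaps.
function buildSymbolIndex(
  allPrices: Record<string, CoinData>,
): Map<string, { key: string; data: CoinData }> {
  const index = new Map<string, { key: string; data: CoinData }>();
  for (const [key, data] of Object.entries(allPrices)) {
    const sym = data.symbol?.toUpperCase();
    if (sym && !index.has(sym)) index.set(sym, { key, data }); // first resolution wins
  }
  return index;
}
```

Keeping the key alongside the data is what lets the caller redirect the write to `outputKey` instead of the unresolved symbol-shaped key.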
♻️ Duplicate comments (2)
defi/l2/v2/storeToDb.ts (2)
47-50: ⚠️ Potential issue | 🟠 Major

Bucket daily snapshots in UTC.

These `date_trunc('day', to_timestamp(timestamp))` expressions use the Postgres session timezone. Since `timestamp` is Unix seconds, bucket explicitly in UTC in all three daily queries.

Suggested query change

```diff
-select distinct on (date_trunc('day', to_timestamp(timestamp)))
+select distinct on (date_trunc('day', to_timestamp(timestamp) AT TIME ZONE 'UTC'))
   timestamp, value
 from chainassets2
-order by date_trunc('day', to_timestamp(timestamp)) asc, timestamp desc
+order by date_trunc('day', to_timestamp(timestamp) AT TIME ZONE 'UTC') asc, timestamp desc
```

```shell
#!/bin/bash
# Verify all day buckets are explicitly UTC-based.
rg -n "date_trunc\('day', to_timestamp\(timestamp\)(?!\s+AT\s+TIME\s+ZONE\s+'UTC')" --pcre2 --iglob '*.ts'
```

Also applies to: 129-132, 140-143
39-39:⚠️ Potential issue | 🟠 MajorKeep cache rebuild/read freshness from affecting persisted writes.
precomputeHistoricalCache()can still reject after the DB insert succeeds, while the read path accepts any non-empty cached file. This can report a failed publish after persistence and keep serving stale cache data.Also applies to: 95-99
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@defi/l2/utils.ts`:
- Around line 315-318: The regex that normalizes versioned asset keys is off by
one and only captures 55 issuer chars; update the pattern used around
classicKey/canonicalKey to match the full 56-character Stellar issuer (change
the quantifier from {55} to {56}) so keys like "USDC-G...56chars-1" strip the
"-1" correctly; locate the normalization using classicKey.match(...) that
assigns canonicalKey and replace the pattern accordingly, then run or add a unit
test that verifies "CODE-G<56chars>-<version>" normalizes to "CODE-G<56chars>".
- Around line 335-338: getSorobanTokenSupply() returns number | null but the
code in the isSorobanContractId branch compares it to BigInt(0); change the
comparison to use the number domain (e.g., compare supply > 0) and keep the null
guard, so in the isSorobanContractId branch check that supply !== null && supply
> 0 before assigning supplies[`stellar:${rawToken}`] = Number(supply); this
ensures consistent numeric types for isSorobanContractId, getSorobanTokenSupply,
supplies, and rawToken.
In `@defi/l2/v2/index.ts`:
- Around line 407-430: When filling missing bridge/protocol keys (allBridgeKeys
→ missingBridgeKeys) you only merge prices into allPrices but later filtering
(nativeDataAfterMcaps) drops keys missing in allMcaps; fix by also fetching and
merging market caps for those same missingBridgeKeys. After the coins.getPrices
call (or in the same branch) call the corresponding mcaps API (e.g.
coins.getMcaps or coins.getMarketCaps) with missingBridgeKeys and timestamp,
then merge results into allMcaps (same key semantics and any decimals handling)
so keys resolved here are preserved by nativeDataAfterMcaps; apply the same
change to the similar block around lines 495-503.
- Around line 574-577: The symbol map R2 write is started before validation, so
move the invocation of storeR2JSONString into the post-validation path: don't
create symbolMapPromise (or call storeR2JSONString) until after
verifyChangesV2() has completed successfully and only when dryRun is false;
ensure symbolMapPromise is assigned inside the success branch (and
resolved/no-op when dryRun) so that the symbol map is written only if
verifyChangesV2 passes (apply the same change to the other similar block
referencing chainassets2).
In `@defi/l2/v2/storeToDb.ts`:
- Around line 44-54: The DB connection opened by iniDbConnection() around the
queryPostgresWithRetry call must be closed in a finally block to avoid leaking
when the query throws; wrap the query and any subsequent use of the sql client
(including the allData query in fetchHistoricalFromDBDirect()) in try { ... }
finally { sql.end() } similar to the pattern in verifyChangesV2(), ensuring
sql.end() is always called even on error.
- Around line 194-197: The breakdown assignment currently overwrites values when
multiple asset keys map to the same symbol; update the loop that iterates
Object.keys(d[chain][section].breakdown) to aggregate (e.g., sum) values into
symbolEntry.data[section].breakdown[symbolMap[asset]] instead of assigning,
handling missing keys by initializing to zero before adding the incoming amount
from d[chain][section].breakdown[asset]; apply the identical aggregation fix to
the analogous block around the other occurrence (lines referenced in the
comment) so both places use summation rather than direct assignment.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9ca3362d-24e7-4f64-a3a8-1d9670c6371e
📒 Files selected for processing (4)
- defi/l2/utils.ts
- defi/l2/v2/index.ts
- defi/l2/v2/storeToDb.ts
- defi/l2/v2/verifyChanges.ts
  // base (code, issuer); issuer is the trailing 56-char G-prefixed string;
  // anything after that (e.g. "-1", "-2") is a versioning artifact.
  const m = classicKey.match(/^(.+-[A-Z2-7]{55})(?:-\d+)?$/);
  canonicalKey = m ? m[1] : classicKey;
Use the full 56-character Stellar issuer when stripping version suffixes.
Line 317 only matches 55 issuer characters after the dash, so canonical assets like USDC-G...56chars-1 won’t normalize to the unversioned key and can still be double-counted.
Suggested fix
- const m = classicKey.match(/^(.+-[A-Z2-7]{55})(?:-\d+)?$/);
+ const m = classicKey.match(/^(.+-G[A-Z2-7]{55})(?:-\d+)?$/);

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

  // base (code, issuer); issuer is the trailing 56-char G-prefixed string;
  // anything after that (e.g. "-1", "-2") is a versioning artifact.
  const m = classicKey.match(/^(.+-[A-Z2-7]{55})(?:-\d+)?$/);
  canonicalKey = m ? m[1] : classicKey;

After:

  // base (code, issuer); issuer is the trailing 56-char G-prefixed string;
  // anything after that (e.g. "-1", "-2") is a versioning artifact.
  const m = classicKey.match(/^(.+-G[A-Z2-7]{55})(?:-\d+)?$/);
  canonicalKey = m ? m[1] : classicKey;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@defi/l2/utils.ts` around lines 315 - 318, The regex that normalizes versioned
asset keys is off by one and only captures 55 issuer chars; update the pattern
used around classicKey/canonicalKey to match the full 56-character Stellar
issuer (change the quantifier from {55} to {56}) so keys like
"USDC-G...56chars-1" strip the "-1" correctly; locate the normalization using
classicKey.match(...) that assigns canonicalKey and replace the pattern
accordingly, then run or add a unit test that verifies
"CODE-G<56chars>-<version>" normalizes to "CODE-G<56chars>".
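The normalization the suggestion describes can be sketched as a small standalone helper. This is a sketch, not the repo's actual function: `canonicalStellarKey` is a hypothetical name, and the pattern follows the suggested fix (a Stellar issuer is "G" plus 55 base32 characters, 56 in total, so anchoring on `G` before 55 more characters captures the full issuer before stripping a `-<n>` version suffix):

```typescript
// Sketch of the issuer-suffix normalization discussed above.
// A Stellar issuer is "G" followed by 55 base32 chars (56 total), so the
// pattern anchors on the full 56-char issuer before stripping "-<n>".
function canonicalStellarKey(classicKey: string): string {
  const m = classicKey.match(/^(.+-G[A-Z2-7]{55})(?:-\d+)?$/);
  return m ? m[1] : classicKey;
}
```

Keys without a version suffix pass through unchanged, so the helper is safe to apply to every classic asset key.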
  if (isSorobanContractId(token)) {
    const supply = await getSorobanTokenSupply(token);
-   if (supply != null && supply > BigInt(0)) supplies[`stellar:${token}`] = Number(supply);
+   if (supply != null && supply > BigInt(0)) supplies[`stellar:${rawToken}`] = Number(supply);
    return;
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify the mixed number/bigint comparison is removed.
rg -n -C2 'getSorobanTokenSupply|supply\s*>\s*BigInt\(0\)' --iglob '*.ts'

Repository: DefiLlama/defillama-server
Length of output: 954
Compare Soroban supplies as numbers.
getSorobanTokenSupply() returns number | null, but line 337 compares it to BigInt(0). Keep the comparison in the same numeric domain.
Suggested fix
- if (supply != null && supply > BigInt(0)) supplies[`stellar:${rawToken}`] = Number(supply);
+ if (supply != null && supply > 0) supplies[`stellar:${rawToken}`] = supply;

📝 Committable suggestion
Before:

  if (isSorobanContractId(token)) {
    const supply = await getSorobanTokenSupply(token);
    if (supply != null && supply > BigInt(0)) supplies[`stellar:${token}`] = Number(supply);
    if (supply != null && supply > BigInt(0)) supplies[`stellar:${rawToken}`] = Number(supply);
    return;

After:

  if (isSorobanContractId(token)) {
    const supply = await getSorobanTokenSupply(token);
    if (supply != null && supply > 0) supplies[`stellar:${rawToken}`] = supply;
    return;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@defi/l2/utils.ts` around lines 335 - 338, getSorobanTokenSupply() returns
number | null but the code in the isSorobanContractId branch compares it to
BigInt(0); change the comparison to use the number domain (e.g., compare supply
> 0) and keep the null guard, so in the isSorobanContractId branch check that
supply !== null && supply > 0 before assigning supplies[`stellar:${rawToken}`] =
Number(supply); this ensures consistent numeric types for isSorobanContractId,
getSorobanTokenSupply, supplies, and rawToken.
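The single-domain check can be isolated into a small helper to see the behavior in one place. This is a sketch under the review's stated assumption that the supply arrives as `number | null`; `recordSupply` is a hypothetical name, not the repo's API:

```typescript
// Sketch of keeping the supply check in one numeric domain, as suggested
// above. The supply is assumed to arrive as number | null, matching the
// described return type of getSorobanTokenSupply().
function recordSupply(
  supplies: Record<string, number>,
  rawToken: string,
  supply: number | null
): void {
  // compare as numbers; mixing a number with BigInt(0) trips the TS checker
  if (supply != null && supply > 0) supplies[`stellar:${rawToken}`] = supply;
}
```

Zero and null supplies are skipped, so only tokens with a positive supply get an entry.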
  // Bridge / protocol-bridge adapters (noble, xion, echelon_initia, inertia, ...)
  // emit tokens keyed by CoinGecko slug ("usd-coin", "xion-2", "celestia", "initia")
  // or sometimes by bare symbol ("osmo", "axl"). `fetchNativeAndMcaps` only prices
  // tokens it saw on per-chain native lists, so keys like `coingecko:usd-coin`
  // are absent from allPrices. Collect everything the bridge-fetch step produced,
  // pick out whatever allPrices is missing, and resolve it in one batch.
  const allBridgeKeys = new Set<string>();
  Object.values(destinationChainAmounts).forEach((m) => Object.keys(m).forEach((k) => allBridgeKeys.add(k)));
  Object.values(protocolAmounts).forEach((m) => Object.keys(m).forEach((k) => allBridgeKeys.add(k)));
  const missingBridgeKeys = [...allBridgeKeys].filter((k) => !allPrices[k]);
  if (missingBridgeKeys.length) {
    console.log(`[bridge-price-fill] resolving ${missingBridgeKeys.length} missing bridge/protocol keys via coins API`);
    try {
      const extra = await coins.getPrices(missingBridgeKeys, timestamp);
      const got = Object.keys(extra).length;
      console.log(`[bridge-price-fill] resolved ${got}/${missingBridgeKeys.length} (sample:`, Object.keys(extra).slice(0, 5), ")");
      Object.keys(extra).forEach((k) => {
        if (k.startsWith("coingecko:")) (extra[k] as any).decimals = 0; // match native-side convention
        allPrices[k] = extra[k];
      });
    } catch (e: any) {
      console.warn("[bridge-price-fill] failed:", e?.message);
    }
  }
Fetch mcaps for bridge price-fill keys too.
Lines 420-426 add missing prices to allPrices, but nativeDataAfterMcaps later drops any key whose allMcaps[key]?.mcap is missing. Bridge assets resolved only by this new fill can still disappear before classification.
Suggested fix
const extra = await coins.getPrices(missingBridgeKeys, timestamp);
+ const extraMcaps = await coins.getMcaps(Object.keys(extra), timestamp);
const got = Object.keys(extra).length;
console.log(`[bridge-price-fill] resolved ${got}/${missingBridgeKeys.length} (sample:`, Object.keys(extra).slice(0, 5), ")");
Object.keys(extra).forEach((k) => {
if (k.startsWith("coingecko:")) (extra[k] as any).decimals = 0; // match native-side convention
allPrices[k] = extra[k];
});
+ Object.keys(extraMcaps).forEach((k) => {
+ allMcaps[k] = extraMcaps[k];
+ });

Also applies to: 495-503
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@defi/l2/v2/index.ts` around lines 407 - 430, When filling missing
bridge/protocol keys (allBridgeKeys → missingBridgeKeys) you only merge prices
into allPrices but later filtering (nativeDataAfterMcaps) drops keys missing in
allMcaps; fix by also fetching and merging market caps for those same
missingBridgeKeys. After the coins.getPrices call (or in the same branch) call
the corresponding mcaps API (e.g. coins.getMcaps or coins.getMarketCaps) with
missingBridgeKeys and timestamp, then merge results into allMcaps (same key
semantics and any decimals handling) so keys resolved here are preserved by
nativeDataAfterMcaps; apply the same change to the similar block around lines
495-503.
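The shape of that fix can be sketched in isolation: backfill prices and market caps for the same set of missing keys, so the later mcap filter keeps every freshly priced bridge asset. Everything here is a hypothetical stand-in (`backfillBridgeKeys`, the fetcher signatures, and the map types are not the repo's actual API):

```typescript
// Sketch of the price + mcap backfill described above. Both maps are
// filled for the same keys so a later "has mcap" filter does not drop
// freshly priced bridge assets.
type PriceMap = Record<string, { price: number }>;
type McapMap = Record<string, { mcap: number }>;

async function backfillBridgeKeys(
  missing: string[],
  allPrices: PriceMap,
  allMcaps: McapMap,
  fetchPrices: (keys: string[]) => Promise<PriceMap>,
  fetchMcaps: (keys: string[]) => Promise<McapMap>
): Promise<void> {
  const extra = await fetchPrices(missing);
  // fetch mcaps for exactly the keys that actually resolved to a price
  const extraMcaps = await fetchMcaps(Object.keys(extra));
  Object.keys(extra).forEach((k) => (allPrices[k] = extra[k]));
  Object.keys(extraMcaps).forEach((k) => (allMcaps[k] = extraMcaps[k]));
}
```

Passing `Object.keys(extra)` rather than `missing` to the mcap fetch avoids querying mcaps for keys that never resolved to a price.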
  // create symbol key data — only actually issue the R2 write when not in dry-run mode
  const symbolMapPromise = dryRun
    ? Promise.resolve()
    : storeR2JSONString("chainAssetsSymbolMap", JSON.stringify(symbolMap));
Start the symbol-map write only after validation passes.
storeR2JSONString("chainAssetsSymbolMap", ...) is invoked before verifyChangesV2(). If validation fails, the symbol map can still be updated while chainassets2 DB/R2 writes are skipped.
Suggested fix
- // create symbol key data — only actually issue the R2 write when not in dry-run mode
- const symbolMapPromise = dryRun
- ? Promise.resolve()
- : storeR2JSONString("chainAssetsSymbolMap", JSON.stringify(symbolMap));
+ // create symbol key data after validation, so failed runs do not mutate R2

  if (!override) await verifyChangesV2(symbolData);
if (dryRun) {
console.log("[dryRun] skipping prod writes (chainAssetsSymbolMap R2, chainassets2 DB, chainAssets2 R2)");
return { rawData, symbolData };
}
await Promise.all([
- symbolMapPromise,
+ storeR2JSONString("chainAssetsSymbolMap", JSON.stringify(symbolMap)),
storeHistoricalToDB({ timestamp: getCurrentUnixTimestamp(), value: rawData }),
storeR2JSONString("chainAssets2", JSON.stringify({ timestamp: getCurrentUnixTimestamp(), value: symbolData })),
  ]);

Also applies to: 615-626
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@defi/l2/v2/index.ts` around lines 574 - 577, The symbol map R2 write is
started before validation, so move the invocation of storeR2JSONString into the
post-validation path: don't create symbolMapPromise (or call storeR2JSONString)
until after verifyChangesV2() has completed successfully and only when dryRun is
false; ensure symbolMapPromise is assigned inside the success branch (and
resolved/no-op when dryRun) so that the symbol map is written only if
verifyChangesV2 passes (apply the same change to the other similar block
referencing chainassets2).
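The intended ordering can be sketched as a generic publish flow: validate first, then run all production writes together, so a failed validation leaves every store untouched. All names here are hypothetical stand-ins for the review's `verifyChangesV2` / R2 / DB writes:

```typescript
// Sketch of the write ordering the comment asks for: validation runs
// before any store is touched; dry-run short-circuits before the writes.
async function publish(
  validate: () => Promise<void>,
  writes: Array<() => Promise<void>>,
  dryRun: boolean
): Promise<"skipped" | "written"> {
  await validate(); // throws -> nothing below runs, no store is mutated
  if (dryRun) return "skipped";
  await Promise.all(writes.map((w) => w()));
  return "written";
}
```

Because the writes are only started after `validate()` resolves, there is no window in which one store is updated while the others are rolled back.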
  const sql = await iniDbConnection();

- const timestamps = await queryPostgresWithRetry(sql`select timestamp from chainassets2`, sql);
- const dailyEntries = findDailyTimestamps(timestamps);
- const dailyData = await queryPostgresWithRetry(
-   sql`select * from chainassets2 where timestamp in ${sql(dailyEntries)}`,
+ const rows = await queryPostgresWithRetry(
+   sql`
+     select distinct on (date_trunc('day', to_timestamp(timestamp)))
+       timestamp, value
+     from chainassets2
+     order by date_trunc('day', to_timestamp(timestamp)) asc, timestamp desc
+   `,
    sql
  );
  sql.end();
Close Postgres connections with finally.
If either query throws, sql.end() is skipped and the connection can leak. Mirror the try/finally pattern used in verifyChangesV2().
Suggested fix pattern
const sql = await iniDbConnection();
- const rows = await queryPostgresWithRetry(
- sql`
- select distinct on (date_trunc('day', to_timestamp(timestamp)))
- timestamp, value
- from chainassets2
- order by date_trunc('day', to_timestamp(timestamp)) asc, timestamp desc
- `,
- sql
- );
- sql.end();
+ let rows: any[];
+ try {
+ rows = await queryPostgresWithRetry(
+ sql`
+ select distinct on (date_trunc('day', to_timestamp(timestamp)))
+ timestamp, value
+ from chainassets2
+ order by date_trunc('day', to_timestamp(timestamp)) asc, timestamp desc
+ `,
+ sql
+ );
+ } finally {
+ sql.end();
+ }

Apply the same pattern around the allData query in fetchHistoricalFromDBDirect().
Also applies to: 122-147
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@defi/l2/v2/storeToDb.ts` around lines 44 - 54, The DB connection opened by
iniDbConnection() around the queryPostgresWithRetry call must be closed in a
finally block to avoid leaking when the query throws; wrap the query and any
subsequent use of the sql client (including the allData query in
fetchHistoricalFromDBDirect()) in try { ... } finally { sql.end() } similar to
the pattern in verifyChangesV2(), ensuring sql.end() is always called even on
error.
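The cleanup pattern generalizes to a small wrapper, sketched here with a minimal stand-in for the postgres client so the shape is testable (`Closable` and `queryWithCleanup` are illustrative names, not the repo's API):

```typescript
// Sketch of the try/finally cleanup recommended above: the connection is
// closed whether the query resolves or throws.
interface Closable { end(): void; closed: boolean }

async function queryWithCleanup<T>(
  connect: () => Promise<Closable>,
  run: (sql: Closable) => Promise<T>
): Promise<T> {
  const sql = await connect();
  try {
    return await run(sql); // may throw
  } finally {
    sql.end(); // always runs, so the connection never leaks
  }
}
```

Note that `connect()` sits outside the try block on purpose: if the connection itself fails to open, there is nothing to close.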
  Object.keys(d[chain][section].breakdown ?? {}).forEach((asset: string) => {
    if (!symbolMap[asset]) return;
    symbolEntry.data[section].breakdown[symbolMap[asset]] = d[chain][section].breakdown[asset];
  });
Aggregate duplicate symbols in breakdown output.
Multiple asset keys can map to the same symbol; the current assignment overwrites earlier values, underreporting historical breakdowns for duplicated symbols.
Suggested fix
  import postgres from "postgres";
+ import BigNumber from "bignumber.js";

  Object.keys(d[chain][section].breakdown ?? {}).forEach((asset: string) => {
    if (!symbolMap[asset]) return;
-   symbolEntry.data[section].breakdown[symbolMap[asset]] = d[chain][section].breakdown[asset];
+   const symbol = symbolMap[asset];
+   symbolEntry.data[section].breakdown[symbol] = BigNumber(
+     symbolEntry.data[section].breakdown[symbol] ?? 0
+   )
+     .plus(d[chain][section].breakdown[asset])
+     .toString();
  });

  Object.keys(d[c][section].breakdown ?? {}).forEach((asset: string) => {
    if (!symbolMap[asset]) return;
-   symbolEntry.data[readableChain][section].breakdown[symbolMap[asset]] = d[c][section].breakdown[asset];
+   const symbol = symbolMap[asset];
+   symbolEntry.data[readableChain][section].breakdown[symbol] = BigNumber(
+     symbolEntry.data[readableChain][section].breakdown[symbol] ?? 0
+   )
+     .plus(d[c][section].breakdown[asset])
+     .toString();
  });

Also applies to: 205-208
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@defi/l2/v2/storeToDb.ts` around lines 194 - 197, The breakdown assignment
currently overwrites values when multiple asset keys map to the same symbol;
update the loop that iterates Object.keys(d[chain][section].breakdown) to
aggregate (e.g., sum) values into
symbolEntry.data[section].breakdown[symbolMap[asset]] instead of assigning,
handling missing keys by initializing to zero before adding the incoming amount
from d[chain][section].breakdown[asset]; apply the identical aggregation fix to
the analogous block around the other occurrence (lines referenced in the
comment) so both places use summation rather than direct assignment.
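The aggregation can be sketched with plain numbers to show the sum-instead-of-overwrite behavior; the suggested fix above uses BigNumber because the real amounts are decimal strings. `aggregateBreakdown` is an illustrative helper, not the repo's function:

```typescript
// Sketch of the aggregation the comment asks for: when several asset keys
// map to one symbol, sum into the existing entry instead of overwriting.
function aggregateBreakdown(
  breakdown: Record<string, number>,
  symbolMap: Record<string, string>
): Record<string, number> {
  const out: Record<string, number> = {};
  for (const asset of Object.keys(breakdown)) {
    const symbol = symbolMap[asset];
    if (!symbol) continue; // unmapped assets are skipped, matching the original guard
    out[symbol] = (out[symbol] ?? 0) + breakdown[asset]; // sum, don't overwrite
  }
  return out;
}
```

With a direct assignment, the last asset key mapping to a given symbol would silently win; summing preserves the total across duplicates.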
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Chores