feat: add token metadata proxy endpoint #1265
Conversation
Warning: Rate limit exceeded

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source, and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro

⛔ Files ignored due to path filters (1)
📒 Files selected for processing (1)
📝 Walkthrough
Adds a new GET /api/v1/tokens/metadata endpoint and a TokenMetadata handler that validates query params, dispatches requests by chainId to the appropriate Alchemy RPC endpoint (EVM or Solana), fetches and parses upstream token metadata, applies caching headers, and maps upstream errors to appropriate HTTP responses.
Sequence Diagram(s)
sequenceDiagram
participant Client
participant Handler as TokenMetadata Handler
participant AlchemyEVM as Alchemy EVM RPC
participant AlchemySol as Alchemy Solana RPC
Client->>Handler: GET /api/v1/tokens/metadata?chainId=...&tokenAddress=...
Handler->>Handler: Validate chainId and tokenAddress
alt Unsupported chainId
Handler->>Client: 422 Unprocessable Entity
else EVM chain
Handler->>AlchemyEVM: alchemy_getTokenMetadata (RPC POST with ALCHEMY_API_KEY)
AlchemyEVM-->>Handler: metadata / error
Handler->>Client: 200 metadata or 404/502/500 (with X-Cache)
else Solana chain
Handler->>AlchemySol: getAsset (with ALCHEMY_API_KEY)
AlchemySol-->>Handler: metadata / error
Handler->>Client: 200 metadata or 404/502/500 (with X-Cache)
end
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@node/proxy/api/src/tokenMetadata.ts`:
- Around line 82-101: The in-memory requestsByIp map in isRateLimited can grow
unbounded because expired buckets aren't removed; update isRateLimited (or add a
helper it calls) to evict stale entries by iterating requestsByIp and deleting
any entry whose resetAt <= now before inserting or counting a request (use the
same now timestamp and retain existing behavior for count/reset logic),
referencing requestsByIp, isRateLimited, windowMs, and maxRequestsPerWindow so
stale IP buckets are removed and memory stays bounded.
- Line 84: The rate-limiting using req.ip in tokenMetadata.ts will see the
proxy's IP unless Express is configured with trust proxy; update the app
initialization to call app.set('trust proxy', <appropriate value>) (for example
'loopback', 1, or a custom function) in the module that creates the Express app
(where the Express instance is named app) so req.ip returns the client IP behind
your ingress/proxy and per-IP rate limits work correctly.
- Line 23: Replace the current regex-only validator in isValidSolanaAddress with
a lightweight Base58 decoder that decodes the address and verifies the resulting
byte array is exactly 32 bytes (the size of a Solana public key); keep the
function name isValidSolanaAddress and ensure it returns false on decode errors
or wrong length, avoiding adding `@solana/web3.js` as a dependency. This change
will reject malformed Base58 strings that happen to match the regex and yield a
proper 422 validation failure before calling the RPC.
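The eviction described in the first comment could look like the sketch below; the bucket shape and constant names mirror the identifiers the comment references but are assumptions, not the proxy's actual code.

```typescript
// Hypothetical sketch: evict expired buckets before counting so the
// in-memory map stays bounded. Shapes are assumed from the review text.
type Bucket = { count: number; resetAt: number }

const windowMs = 60_000
const maxRequestsPerWindow = 3
const requestsByIp = new Map<string, Bucket>()

const isRateLimited = (ip: string, now = Date.now()): boolean => {
  // drop stale entries first so memory stays bounded
  for (const [key, bucket] of requestsByIp) {
    if (bucket.resetAt <= now) requestsByIp.delete(key)
  }
  const bucket = requestsByIp.get(ip)
  if (!bucket) {
    requestsByIp.set(ip, { count: 1, resetAt: now + windowMs })
    return false
  }
  bucket.count += 1
  return bucket.count > maxRequestsPerWindow
}
```

Sweeping on every call is O(tracked IPs); a periodic sweep is cheaper under load, but either way stale buckets no longer accumulate.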
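For the Base58 suggestion, a dependency-free validator might look like the following sketch (illustrative only, not the project's implementation); it decodes the address and checks for exactly 32 bytes, as the comment asks.

```typescript
// Hypothetical sketch: decode Base58 and require exactly 32 bytes
// (the size of a Solana public key). No external dependency needed.
const BASE58_ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

const decodeBase58 = (input: string): Uint8Array | null => {
  const bytes: number[] = [] // little-endian accumulator
  for (const char of input) {
    const value = BASE58_ALPHABET.indexOf(char)
    if (value === -1) return null // not a Base58 character
    let carry = value
    for (let i = 0; i < bytes.length; i++) {
      carry += bytes[i] * 58
      bytes[i] = carry & 0xff
      carry >>= 8
    }
    while (carry > 0) {
      bytes.push(carry & 0xff)
      carry >>= 8
    }
  }
  // each leading '1' encodes a leading zero byte
  for (const char of input) {
    if (char !== '1') break
    bytes.push(0)
  }
  return Uint8Array.from(bytes.reverse())
}

const isValidSolanaAddress = (address: string): boolean => {
  const decoded = decodeBase58(address)
  return decoded !== null && decoded.length === 32
}
```

This rejects strings containing non-Base58 characters ('0', 'O', 'I', 'l') as well as well-formed Base58 that decodes to the wrong length, so malformed input fails validation with a 422 instead of reaching the RPC.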
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- node/proxy/api/src/app.ts
- node/proxy/api/src/tokenMetadata.ts
Addressed the CodeRabbit feedback in this branch:
Validation:
Actionable comments posted: 1
🧹 Nitpick comments (1)
node/proxy/api/src/tokenMetadata.ts (1)
123-154: Rate limiter is process-local; consider shared enforcement for multi-replica deploys. This implementation works per instance, but limits can be bypassed (or uneven) behind load balancing. Consider Redis or ingress-level rate limiting for consistent global behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/proxy/api/src/tokenMetadata.ts` around lines 123 - 154, The current isRateLimited method (and its in-memory requestsByIp map, nextCleanupAt, cleanupIntervalMs, windowMs, maxRequestsPerWindow) enforces limits only per process which fails for multi-replica deployments; replace the in-process logic with a shared store or external rate-limiter: migrate the counter/reset logic to a Redis-backed implementation (e.g., use INCR with EXPIRE or a Lua script for atomic sliding-window semantics) or delegate to ingress/edge rate limiting, updating isRateLimited to call the shared Redis helper (or external API) instead of reading/writing requestsByIp and remove the cleanup loop. Ensure keys are derived from req.ip (or X-Forwarded-For) to maintain the same behavior.
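The Redis-backed pattern this prompt describes can be sketched as a fixed-window counter behind a small store interface, so a Redis client (or an in-memory stand-in) can be swapped in; all names here are illustrative assumptions, not the proxy's actual code.

```typescript
// Hypothetical sketch of the fixed-window pattern (Redis INCR + EXPIRE
// semantics) behind a tiny store interface. A Redis client would
// implement CounterStore; MemoryStore is a stand-in for illustration.
interface CounterStore {
  // increment the key, setting a TTL when the key is first created,
  // and return the new count
  incr(key: string, ttlMs: number): Promise<number>
}

class MemoryStore implements CounterStore {
  private counts = new Map<string, { count: number; expiresAt: number }>()

  async incr(key: string, ttlMs: number): Promise<number> {
    const now = Date.now()
    const entry = this.counts.get(key)
    if (!entry || entry.expiresAt <= now) {
      this.counts.set(key, { count: 1, expiresAt: now + ttlMs })
      return 1
    }
    entry.count += 1
    return entry.count
  }
}

const WINDOW_MS = 60_000
const MAX_REQUESTS_PER_WINDOW = 5

// Keys include the window index, so counters roll over automatically.
const isRateLimited = async (store: CounterStore, ip: string, now = Date.now()): Promise<boolean> => {
  const windowKey = `rl:${ip}:${Math.floor(now / WINDOW_MS)}`
  const count = await store.incr(windowKey, WINDOW_MS)
  return count > MAX_REQUESTS_PER_WINDOW
}
```

With Redis, incr maps to INCR followed by EXPIRE on the first increment (or a small Lua script for atomicity), giving all replicas a shared view of per-IP counts.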
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@node/proxy/api/src/tokenMetadata.ts`:
- Around line 181-191: The code currently accepts any chainId starting with
"solana:" but always fetches mainnet metadata; update the chainId validation in
the block that handles chainId.startsWith('solana:') to explicitly allow only
supported Solana identifiers (e.g. 'solana:mainnet' or whatever supported list
you maintain) before calling isValidSolanaAddress or getSolanaTokenMetadata, and
if the chainId is unsupported call sendValidationError to return a 422-style
validation error; modify the logic around isValidSolanaAddress,
getSolanaTokenMetadata, and sendValidationError to first check the exact chainId
membership and only proceed to address validation and getSolanaTokenMetadata for
supported chain IDs.
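A minimal version of the exact-membership check this comment asks for; the constant and function names are assumptions, and the mainnet identifier is the CAIP-2 genesis-hash form that appears in the smoke tests on this PR.

```typescript
// Hypothetical names: an exact-membership check so only supported
// Solana networks pass validation, instead of any 'solana:*' prefix.
const SUPPORTED_SOLANA_CHAIN_IDS = new Set(['solana:5eykt4UsFv8P8NJdTREpY1vzqKqZKvdp'])

const isSupportedSolanaChainId = (chainId: string): boolean =>
  SUPPORTED_SOLANA_CHAIN_IDS.has(chainId)
```

Any unsupported identifier (e.g. a devnet genesis hash) then takes the 422 validation path before address validation or the upstream call.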
---
Nitpick comments:
In `@node/proxy/api/src/tokenMetadata.ts`:
- Around line 123-154: The current isRateLimited method (and its in-memory
requestsByIp map, nextCleanupAt, cleanupIntervalMs, windowMs,
maxRequestsPerWindow) enforces limits only per process which fails for
multi-replica deployments; replace the in-process logic with a shared store or
external rate-limiter: migrate the counter/reset logic to a Redis-backed
implementation (e.g., use INCR with EXPIRE or a Lua script for atomic
sliding-window semantics) or delegate to ingress/edge rate limiting, updating
isRateLimited to call the shared Redis helper (or external API) instead of
reading/writing requestsByIp and remove the cleanup loop. Ensure keys are
derived from req.ip (or X-Forwarded-For) to maintain the same behavior.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- node/proxy/api/src/app.ts
- node/proxy/api/src/tokenMetadata.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- node/proxy/api/src/app.ts
Remove hand-rolled Base58 decoder, custom IP rate limiter, viem address
validation, error sanitization helpers, and response envelope. The proxy
now receives chainId + tokenAddress, injects the Alchemy API key,
forwards to the correct endpoint, normalizes the Solana response, and
returns a flat { chainId, tokenAddress, name, symbol, decimals, logo }.
Co-Authored-By: Claude Opus 4.6 <[email protected]>
Warn instead of crashing when ALCHEMY_API_KEY is missing, returning 503 from the handler. Add Cache-Control: public, max-age=86400 to successful responses since token metadata is essentially immutable.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Solana URL follows the same ALCHEMY_API_KEY pattern as EVM chains.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Actionable comments posted: 1
♻️ Duplicate comments (1)
node/proxy/api/src/tokenMetadata.ts (1)
35-60: ⚠️ Potential issue | 🟠 Major
Validate tokenAddress per chain before posting upstream. Line 38 and Line 56 send whatever string the caller provides. Invalid EVM or Solana addresses will now hit Alchemy and come back as upstream failures instead of the intended 422 validation response.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/proxy/api/src/tokenMetadata.ts` around lines 35 - 60, The handler currently forwards whatever tokenAddress the caller provides to Alchemy (when ALCHEMY_NETWORK_BY_CHAIN_ID[chainId] is truthy or chainId === SOLANA_CHAIN_ID) causing upstream failures instead of returning a 422; before any axios.post call, validate tokenAddress for the target chain (for EVM chains use a proper EVM address check such as ethers.utils.isAddress or an equivalent isValidEvmAddress(tokenAddress), and for Solana use a Solana PublicKey/Solana validator like isValidSolanaAddress(tokenAddress)); if validation fails, respond with res.status(422).json({ error: 'Invalid token address' }) and return — apply this check around the branches that reference tokenAddress/ALCHEMY_NETWORK_BY_CHAIN_ID and the SOLANA_CHAIN_ID branch so invalid inputs never reach the Alchemy requests.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@node/proxy/api/src/tokenMetadata.ts`:
- Around line 20-34: The handler function currently forwards requests to Alchemy
without applying the per-IP throttle; add a check at the top of async
handler(req: Request, res: Response) (before any upstream call or use of
ALCHEMY_API_KEY) that enforces the route-level per-IP rate limit using your
existing throttle store/utility (use req.ip or X-Forwarded-For to identify
client). If the throttle indicates the client is over the limit, immediately
return res.status(429).json({ error: 'Too many requests' }) and do not proceed
to call Alchemy; otherwise decrement/record the usage and continue with the
existing logic. Ensure the check is placed before any network call or use of
ALCHEMY_API_KEY so paid upstream cannot be hit by bursts.
---
Duplicate comments:
In `@node/proxy/api/src/tokenMetadata.ts`:
- Around line 35-60: The handler currently forwards whatever tokenAddress the
caller provides to Alchemy (when ALCHEMY_NETWORK_BY_CHAIN_ID[chainId] is truthy
or chainId === SOLANA_CHAIN_ID) causing upstream failures instead of returning a
422; before any axios.post call, validate tokenAddress for the target chain (for
EVM chains use a proper EVM address check such as ethers.utils.isAddress or an
equivalent isValidEvmAddress(tokenAddress), and for Solana use a Solana
PublicKey/Solana validator like isValidSolanaAddress(tokenAddress)); if
validation fails, respond with res.status(422).json({ error: 'Invalid token
address' }) and return — apply this check around the branches that reference
tokenAddress/ALCHEMY_NETWORK_BY_CHAIN_ID and the SOLANA_CHAIN_ID branch so
invalid inputs never reach the Alchemy requests.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: a96cb24c-adc5-4e8f-a260-2cf604411c52
📒 Files selected for processing (3)
- node/proxy/api/src/app.ts
- node/proxy/api/src/tokenMetadata.ts
- node/proxy/sample.env
✅ Files skipped from review due to trivial changes (1)
- node/proxy/sample.env
gomesalexandre
left a comment
Smoke tested locally
Cloned, checked out the branch, ran the proxy on port 3999 with a real Alchemy key. Built a hurl test collection covering all paths - 20/20 pass.
node/proxy/token-metadata.hurl
# ============================================================
# Token Metadata Proxy - endpoint smoke tests
# Run: hurl --test token-metadata.hurl
# ============================================================
# --- Validation / error paths ---
# Missing both params
GET http://127.0.0.1:3999/api/v1/tokens/metadata
HTTP 400
[Asserts]
jsonpath "$.error" contains "required"
# Missing tokenAddress
GET http://127.0.0.1:3999/api/v1/tokens/metadata?chainId=eip155:1
HTTP 400
# Missing chainId
GET http://127.0.0.1:3999/api/v1/tokens/metadata?tokenAddress=0xdeadbeef
HTTP 400
# Unsupported chainId
GET http://127.0.0.1:3999/api/v1/tokens/metadata?chainId=eip155:9999&tokenAddress=0xdeadbeef
HTTP 422
[Asserts]
jsonpath "$.error" contains "Unsupported"
# Empty string params
GET http://127.0.0.1:3999/api/v1/tokens/metadata?chainId=&tokenAddress=
HTTP 400
# --- EVM: Ethereum mainnet (eip155:1) ---
# USDC on Ethereum
GET http://127.0.0.1:3999/api/v1/tokens/metadata?chainId=eip155:1&tokenAddress=0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48
HTTP 200
[Asserts]
jsonpath "$.symbol" == "USDC"
jsonpath "$.decimals" == 6
jsonpath "$.name" == "USDC"
jsonpath "$.chainId" == "eip155:1"
jsonpath "$.tokenAddress" == "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48"
jsonpath "$.logo" != null
header "Cache-Control" contains "max-age=86400"
# WETH on Ethereum
GET http://127.0.0.1:3999/api/v1/tokens/metadata?chainId=eip155:1&tokenAddress=0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2
HTTP 200
[Asserts]
jsonpath "$.symbol" == "WETH"
jsonpath "$.decimals" == 18
# DAI on Ethereum
GET http://127.0.0.1:3999/api/v1/tokens/metadata?chainId=eip155:1&tokenAddress=0x6b175474e89094c44da98b954eedeac495271d0f
HTTP 200
[Asserts]
jsonpath "$.symbol" == "DAI"
jsonpath "$.decimals" == 18
# Invalid EVM address - Alchemy returns 400, proxy forwards status
GET http://127.0.0.1:3999/api/v1/tokens/metadata?chainId=eip155:1&tokenAddress=notanaddress
HTTP 400
# Non-existent token (valid address format, not a token contract)
# BUG: currently returns 200 with empty name/symbol - should arguably 404
GET http://127.0.0.1:3999/api/v1/tokens/metadata?chainId=eip155:1&tokenAddress=0x0000000000000000000000000000000000000001
HTTP 200
[Asserts]
jsonpath "$.name" == ""
jsonpath "$.symbol" == ""
jsonpath "$.decimals" == null
# --- EVM: Optimism (eip155:10) ---
GET http://127.0.0.1:3999/api/v1/tokens/metadata?chainId=eip155:10&tokenAddress=0x0b2c639c533813f4aa9d7837caf62653d097ff85
HTTP 200
[Asserts]
jsonpath "$.decimals" == 6
jsonpath "$.chainId" == "eip155:10"
# --- EVM: Polygon (eip155:137) ---
GET http://127.0.0.1:3999/api/v1/tokens/metadata?chainId=eip155:137&tokenAddress=0x3c499c542cef5e3811e1192ce70d8cc03d5c3359
HTTP 200
[Asserts]
jsonpath "$.decimals" == 6
jsonpath "$.chainId" == "eip155:137"
# --- EVM: Base (eip155:8453) ---
GET http://127.0.0.1:3999/api/v1/tokens/metadata?chainId=eip155:8453&tokenAddress=0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913
HTTP 200
[Asserts]
jsonpath "$.decimals" == 6
jsonpath "$.chainId" == "eip155:8453"
# --- EVM: Arbitrum (eip155:42161) ---
GET http://127.0.0.1:3999/api/v1/tokens/metadata?chainId=eip155:42161&tokenAddress=0xaf88d065e77c8cC2239327C5EDb3A432268e5831
HTTP 200
[Asserts]
jsonpath "$.decimals" == 6
jsonpath "$.chainId" == "eip155:42161"
# --- Solana ---
# USDC on Solana
GET http://127.0.0.1:3999/api/v1/tokens/metadata?chainId=solana:5eykt4UsFv8P8NJdTREpY1vzqKqZKvdp&tokenAddress=EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v
HTTP 200
[Asserts]
jsonpath "$.symbol" == "USDC"
jsonpath "$.decimals" == 6
jsonpath "$.chainId" == "solana:5eykt4UsFv8P8NJdTREpY1vzqKqZKvdp"
header "Cache-Control" contains "max-age=86400"
# Bonk on Solana
GET http://127.0.0.1:3999/api/v1/tokens/metadata?chainId=solana:5eykt4UsFv8P8NJdTREpY1vzqKqZKvdp&tokenAddress=DezXAZ8z7PnrnRJjz3wXBoRgixCa6xjnB7YaB1pPB263
HTTP 200
[Asserts]
jsonpath "$.symbol" == "Bonk"
jsonpath "$.name" exists
# Invalid Solana address
GET http://127.0.0.1:3999/api/v1/tokens/metadata?chainId=solana:5eykt4UsFv8P8NJdTREpY1vzqKqZKvdp&tokenAddress=notasolanaaddress
HTTP 502
# Wrong Solana chainId (devnet genesis hash) - should 422
GET http://127.0.0.1:3999/api/v1/tokens/metadata?chainId=solana:EtWTRABZaYq6iMfeYKouRu166VU2xqa1&tokenAddress=EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v
HTTP 422
# --- Cross-cutting ---
# Duplicate chainId params (express parses as array, handler should reject)
GET http://127.0.0.1:3999/api/v1/tokens/metadata?chainId=eip155:1&chainId=eip155:10&tokenAddress=0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48
HTTP 400
# Very long chainId (potential abuse)
GET http://127.0.0.1:3999/api/v1/tokens/metadata?chainId=eip155:99999999999999999999999999999999&tokenAddress=0xdeadbeef
HTTP 422

$ hurl --test node/proxy/token-metadata.hurl
Success node/proxy/token-metadata.hurl (20 request(s) in 2377 ms)
Executed requests: 20 (8.4/s)
Succeeded files: 1 (100.0%)
Failed files: 0 (0.0%)
q:
Graceful vs hard failure pattern - this handler console.warns when ALCHEMY_API_KEY is missing and returns 503 at runtime. Every other handler in the proxy (coingecko.ts, zerion.ts, zrx.ts, portals.ts) throws new Error() and crashes the whole process on boot. This is better design - intentional?
Missing chains - Avalanche (eip155:43114), BSC (eip155:56), Gnosis (eip155:100) are all on Alchemy. Scope decision or plan to add later?
CodeRabbit comments status
The rabbit flagged rate limiting (3x) and input validation (Base58, EVM address format) across 3 review rounds. These were addressed in intermediate commits then simplified back to a pure pass-through. The current diff deliberately punts validation to Alchemy - valid proxy design, but finding #1 below is a direct consequence: Alchemy "validates" by returning empty data, not by erroring.
tl;dr
Clean PR, 91 lines of focused code, good REST semantics (GET, idempotent, cacheable, CAIP-2 chain IDs). SSRF is mitigated by the chain ID allowlist, CSRF is N/A (stateless GET). The real bug is #1 in inline comments - non-existent tokens return 200 with empty data and get cached for 24h. Error path leakiness (#2, #3) is worth cleaning up. Rest is hardening.
lol @ requested changes, meant to comment, not stamp nor request changes
Actionable comments posted: 1
🧹 Nitpick comments (3)
node/proxy/api/src/tokenMetadata.ts (3)
43-46: Consider defensive parsing for upstream response. The cast r as EvmResult assumes the upstream response matches exactly. If Alchemy changes their response shape, this could silently return undefined values. Low risk for a proxy, but worth noting.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/proxy/api/src/tokenMetadata.ts` around lines 43 - 46, The parseEvm function currently casts r as EvmResult which can hide malformed upstream shapes; update parseEvm to defensively validate the incoming object before using its fields by checking that r is an object and that name and symbol are strings, decimals is a number (or coercible) and logo is a string (or null), then return a TokenMetadataPayload with safe defaults (e.g., nulls or empty strings) or normalized values instead of trusting the cast; reference parseEvm, EvmResult and TokenMetadataPayload when making these validations and ensure no runtime exceptions occur if properties are missing or of the wrong type.
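A hedged sketch of such a defensive parse; parseEvm here is illustrative rather than the file's actual implementation, and the TokenMetadataPayload shape is inferred from the flat response described in this PR's summary.

```typescript
// Hypothetical sketch: validate the upstream shape instead of casting.
// Field names follow the flat payload described in this PR's summary.
type TokenMetadataPayload = {
  name: string
  symbol: string
  decimals: number | null
  logo: string | null
}

const parseEvm = (r: unknown): TokenMetadataPayload => {
  // tolerate null/undefined/non-object upstream results
  const obj = (typeof r === 'object' && r !== null ? r : {}) as Record<string, unknown>
  return {
    name: typeof obj.name === 'string' ? obj.name : '',
    symbol: typeof obj.symbol === 'string' ? obj.symbol : '',
    decimals: typeof obj.decimals === 'number' ? obj.decimals : null,
    logo: typeof obj.logo === 'string' ? obj.logo : null,
  }
}
```

Missing or wrongly typed fields degrade to safe defaults instead of leaking undefined into the response.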
166-167: Timer accumulation with high-cardinality traffic. Each cache entry spawns its own setTimeout. Under sustained traffic with many unique tokens, this creates many pending timers. Consider a periodic sweep pattern (as mentioned in past reviews) if this becomes a concern in production.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/proxy/api/src/tokenMetadata.ts` around lines 166 - 167, The current per-entry setTimeout call (setTimeout(() => delete this.requestCache[cacheKey], CACHE_TTL_MS)) causes timer accumulation under high-cardinality traffic; remove the per-entry setTimeout and instead record a timestamp on the stored metadata in this.requestCache[cacheKey], then add a single periodic sweeper (e.g., start a single setInterval in the class constructor or an init method) that iterates over this.requestCache and deletes entries older than CACHE_TTL_MS; update places that read requestCache to expect the timestamped metadata and ensure the sweeper is cleared on shutdown.
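The single-sweeper pattern the prompt describes might be sketched as follows; CacheEntry and sweepExpired are illustrative names, not the file's actual identifiers.

```typescript
// Hypothetical sketch: one periodic sweep instead of one timer per entry.
type CacheEntry<T> = { value: T; storedAt: number }

const CACHE_TTL_MS = 24 * 60 * 60 * 1000

const sweepExpired = <T>(cache: Map<string, CacheEntry<T>>, now = Date.now()): void => {
  for (const [key, entry] of cache) {
    if (now - entry.storedAt >= CACHE_TTL_MS) cache.delete(key)
  }
}

// A single interval serves the whole cache; unref it so it never keeps
// the process alive, and clear it on shutdown, e.g.:
// const sweeper = setInterval(() => sweepExpired(requestCache), 60_000)
```

One timer regardless of cardinality, at the cost of entries lingering up to one sweep interval past their TTL.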
170-178: Add Cache-Control: no-store on error responses. Per past review feedback, error responses without cache directives may be cached by intermediaries. Setting no-store prevents a 404 or 502 from being cached for hours.
🔧 Proposed fix

  } catch (err) {
    if (isAxiosError(err)) {
-     res.status(502).json({ error: err.message || 'Upstream request failed' })
+     res.status(502).set('Cache-Control', 'no-store').json({ error: err.message || 'Upstream request failed' })
    } else if (err instanceof Error) {
-     res.status(500).json({ error: err.message || 'Internal server error' })
+     res.status(500).set('Cache-Control', 'no-store').json({ error: err.message || 'Internal server error' })
    } else {
-     res.status(500).json({ error: 'Internal server error' })
+     res.status(500).set('Cache-Control', 'no-store').json({ error: 'Internal server error' })
    }
  }

Also apply to the 502 at line 156 and 404 at line 162.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/proxy/api/src/tokenMetadata.ts` around lines 170 - 178, The error-handling branches need to include a Cache-Control: no-store header to prevent intermediaries from caching error responses; update the catch block branches that use isAxiosError and the generic Error/else branches (the res.status(502)..., res.status(500)... responses) to set Cache-Control: no-store before sending JSON, and also add the same header to the earlier res.status(502)... and res.status(404)... error responses so all non-success responses include Cache-Control: no-store.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@node/proxy/api/src/tokenMetadata.ts`:
- Around line 1-8: The import of PublicKey from `@solana/web3.js` in
tokenMetadata.ts means that package must be present in the service dependencies;
add "@solana/web3.js" to the node/proxy/api package.json dependencies (choose a
compatible version) and run your package manager to install and update the
lockfile so the import PublicKey resolves at runtime; ensure the dependency is
saved under "dependencies" (not devDependencies) so production builds include
it.
---
Nitpick comments:
In `@node/proxy/api/src/tokenMetadata.ts`:
- Around line 43-46: The parseEvm function currently casts r as EvmResult which
can hide malformed upstream shapes; update parseEvm to defensively validate the
incoming object before using its fields by checking that r is an object and that
name and symbol are strings, decimals is a number (or coercible) and logo is a
string (or null), then return a TokenMetadataPayload with safe defaults (e.g.,
nulls or empty strings) or normalized values instead of trusting the cast;
reference parseEvm, EvmResult and TokenMetadataPayload when making these
validations and ensure no runtime exceptions occur if properties are missing or
of the wrong type.
- Around line 166-167: The current per-entry setTimeout call (setTimeout(() =>
delete this.requestCache[cacheKey], CACHE_TTL_MS)) causes timer accumulation
under high-cardinality traffic; remove the per-entry setTimeout and instead
record a timestamp on the stored metadata in this.requestCache[cacheKey], then
add a single periodic sweeper (e.g., start a single setInterval in the class
constructor or an init method) that iterates over this.requestCache and deletes
entries older than CACHE_TTL_MS; update places that read requestCache to expect
the timestamped metadata and ensure the sweeper is cleared on shutdown.
- Around line 170-178: The error-handling branches need to include a
Cache-Control: no-store header to prevent intermediaries from caching error
responses; update the catch block branches that use isAxiosError and the generic
Error/else branches (the res.status(502)..., res.status(500)... responses) to
set Cache-Control: no-store before sending JSON, and also add the same header to
the earlier res.status(502)... and res.status(404)... error responses so all
non-success responses include Cache-Control: no-store.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b49287a4-a44b-416c-bb4e-999bf9101d98
📒 Files selected for processing (6)
- .prettierignore
- go/coinstacks/cosmos/api/railway.json
- go/coinstacks/mayachain/api/railway.json
- go/coinstacks/thorchain-v1/api/railway.json
- go/coinstacks/thorchain/api/railway.json
- node/proxy/api/src/tokenMetadata.ts
✅ Files skipped from review due to trivial changes (4)
- go/coinstacks/cosmos/api/railway.json
- go/coinstacks/mayachain/api/railway.json
- go/coinstacks/thorchain-v1/api/railway.json
- go/coinstacks/thorchain/api/railway.json
Summary
- Adds a /api/v1/tokens/metadata pass-through proxy endpoint for token metadata lookups via Alchemy
- EVM chains via alchemy_getTokenMetadata, Solana mainnet via getAsset
- Takes chainId + tokenAddress query params, injects the Alchemy API key, and returns a flat normalized response: { chainId, tokenAddress, name, symbol, decimals, logo }
- Successful responses get a Cache-Control: public, max-age=86400 header
- Returns 503 when ALCHEMY_API_KEY is not configured
- Adds ALCHEMY_API_KEY to sample.env

Testing
- yarn eslint node/proxy/api/src/tokenMetadata.ts node/proxy/api/src/app.ts — passes
- Smoke tested locally against the running proxy (/health, /api/v1/tokens/metadata)