Conversation
…formance optimizations
📝 Walkthrough

Adds multi-instrument orchestration, arbitrage detection/execution, cross-market correlation analytics, per-symbol risk tracking, CPU core allocation and affinity utilities, lock-free orderbook adjustments, an ObjectPool, an LTO build option, new tests/benchmarks, and supporting documentation.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant ArbitrageDetector
    participant VenueA
    participant VenueB
    participant ArbitrageExecutor
    participant OrderBook
    Client->>ArbitrageDetector: start()
    ArbitrageDetector->>ArbitrageDetector: launch scanLoop
    VenueA->>ArbitrageDetector: updateVenueQuote(symbol, bid, ask)
    VenueB->>ArbitrageDetector: updateVenueQuote(symbol, bid, ask)
    ArbitrageDetector->>ArbitrageDetector: detectOpportunities()
    ArbitrageDetector->>ArbitrageDetector: apply fees & staleness checks
    rect rgba(0, 255, 0, 0.5)
        ArbitrageDetector->>Client: OpportunityCallback(opportunity)
    end
    Client->>ArbitrageExecutor: execute(opportunity)
    alt Dry-Run
        ArbitrageExecutor->>Client: Return simulated ExecutionResult
    else Live
        ArbitrageExecutor->>OrderBook: submit buy
        OrderBook->>VenueA: execute buy
        ArbitrageExecutor->>OrderBook: submit sell
        OrderBook->>VenueB: execute sell
        ArbitrageExecutor->>Client: Return ExecutionResult
    end
```

```mermaid
sequenceDiagram
    participant Main
    participant InstrumentManager
    participant RiskManager
    participant Instrument1
    participant Instrument2
    participant Strategy
    participant OrderBook
    Main->>InstrumentManager: addInstrument(cfg1)
    InstrumentManager->>OrderBook: create/recover
    InstrumentManager->>Strategy: init
    Main->>InstrumentManager: addInstrument(cfg2)
    InstrumentManager->>OrderBook: create/recover
    InstrumentManager->>Strategy: init
    Main->>RiskManager: registerSymbol(sym1)
    Main->>RiskManager: registerSymbol(sym2)
    Main->>InstrumentManager: startAll()
    InstrumentManager->>Instrument1: start strategy/simulator
    InstrumentManager->>Instrument2: start strategy/simulator
    rect rgba(255, 165, 0, 0.5)
        Instrument1->>OrderBook: add/remove orders
        Instrument1->>RiskManager: checkOrder(sym1)
    end
    Main->>InstrumentManager: getAggregateStatistics()
    InstrumentManager->>Main: return metrics
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs
🚥 Pre-merge checks
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 14
🧹 Nitpick comments (9)
core/risk/RiskConfig.h (1)
39-50: Consider using `std::optional` instead of 0.0 as a sentinel. Using 0.0 to indicate "use global" creates ambiguity: what if a user wants to set a per-symbol limit of exactly 0.0 (e.g., to prohibit trading a specific symbol)? This is a minor concern since 0.0 limits are uncommon in practice, but `std::optional<double>` would be semantically clearer and avoid the edge case.

♻️ Optional: Use std::optional for clearer semantics

```cpp
struct PerSymbolLimits {
    std::string symbol;
    std::optional<double> maxPositionSize;
    std::optional<double> maxDailyVolume;
    std::optional<double> dailyLossLimit;
    std::optional<double> maxOrderSize;
    std::optional<double> maxNotionalExposure;
};
```

This would require updating the JSON parsing to check for presence rather than value.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/risk/RiskConfig.h` around lines 39 - 50, change PerSymbolLimits to use std::optional<double> for each limit field (e.g., maxPositionSize, maxDailyVolume, dailyLossLimit, maxOrderSize, maxNotionalExposure) instead of using 0.0 as a sentinel; add the <optional> include and remove the 0.0 default initializers so absent values are std::nullopt. Update any code that constructs or deserializes PerSymbolLimits (JSON parsing/deserialization paths that populate PerSymbolLimits) to check presence/has_value() or to assign std::nullopt rather than testing for 0.0, and update any consumers of those fields to fall back to global limits when the optional is not set. Ensure all comparisons and arithmetic using these fields handle the optional case safely (e.g., unwrap with a default or guard with has_value()).

core/instrument/ResourceAllocator.cpp (1)
67-81: Consider returning early if pinning fails. The thread name is set regardless of whether `pinToCore()` succeeds. If pinning fails, setting the thread name may still be useful for debugging, but the return value only reflects the pinning result, which could be confusing. If the intent is to always set the thread name, consider documenting this behavior. If the thread name should only be set on successful pinning, move it inside a success check.

♻️ Optional: Only set thread name on successful pin

```diff
 bool ResourceAllocator::applyAssignment(const CoreAssignment& assignment, bool isStrategy) {
     int core = isStrategy ? assignment.strategyCore : assignment.simulatorCore;
     if (core < 0) {
         return false;
     }
     bool result = utils::ThreadAffinity::pinToCore(core);
-    std::string threadType = isStrategy ? "strategy" : "simulator";
-    std::string threadName = assignment.symbol + "_" + threadType;
-    utils::ThreadAffinity::setThreadName(threadName);
+    if (result) {
+        std::string threadType = isStrategy ? "strategy" : "simulator";
+        std::string threadName = assignment.symbol + "_" + threadType;
+        utils::ThreadAffinity::setThreadName(threadName);
+    }
     return result;
 }
```

strategies/arbitrage/ArbitrageExecutor.cpp (1)
100-103: `erase(begin())` is O(n) in the execution hot path. For bounded recent results, a ring buffer (or `std::deque` with pop_front) avoids repeated linear shifts under lock.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@strategies/arbitrage/ArbitrageExecutor.cpp` around lines 100 - 103, the hot-path use of std::vector with m_recentResults.push_back(...) followed by m_recentResults.erase(m_recentResults.begin()) causes O(n) shifts; change the container to a ring buffer or std::deque to get O(1) removal: replace m_recentResults (and its type in the ArbitrageExecutor class) with std::deque<ResultType> (or implement a lightweight circular buffer) and swap the erase(begin()) call for m_recentResults.pop_front() (or the ring-buffer head/tail update), update includes and any indexing/access semantics accordingly, and keep existing locking around the modification to preserve thread-safety.

tests/unit/InstrumentManagerTests.cpp (1)
53-53: Use `ASSERT_TRUE` for setup steps that gate test validity. These tests depend on successful `addInstrument()`/`startAll()` before dereferencing contexts or asserting behavior. Promoting setup checks to `ASSERT_*` will fail fast and reduce false-negative noise.

Also applies to: 67-67, 86-90, 105-107
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/InstrumentManagerTests.cpp` at line 53, the setup calls like manager.addInstrument(cfg, "simulation") and manager.startAll() must be asserted so failures abort the test immediately; replace any EXPECT_TRUE (or similar non-fatal checks) used to verify addInstrument and startAll success with ASSERT_TRUE for the calls (e.g., assertions around addInstrument(..., "simulation") and startAll()) and update the other occurrences referenced (the checks at the other locations mentioned) so the test fails fast instead of continuing on invalid setup.

tests/unit/ArbitrageDetectorTests.cpp (1)
41-42: Fixed sleeps make these tests timing-flaky. Using hardcoded delays to wait for background scans is non-deterministic in CI. Prefer polling with a timeout on observable conditions (opportunity count / callback count) instead of `sleep_for`.

Also applies to: 68-69, 89-90, 109-110, 155-156, 180-181
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/ArbitrageDetectorTests.cpp` around lines 41 - 42, replace fixed std::this_thread::sleep_for calls in tests/unit/ArbitrageDetectorTests.cpp with polling that waits for an observable condition (e.g., opportunities.size(), detector.opportunityCount(), or your mock callback count) to become expected within a timeout. Implement a small helper (e.g., waitUntil(predicate, timeoutMs)) that loops sleeping for short intervals (5–20ms), checks the predicate, and returns success/failure; then use it in place of the sleep_for calls on the lines shown (and the other occurrences at the same file: 68-69, 89-90, 109-110, 155-156, 180-181), calling the helper with predicates like []{ return notifier.callCount() >= N; } or []{ return detector.opportunities().size() >= N; } and assert the helper returned true to fail fast on timeout.

strategies/analytics/CrossMarketCorrelation.h (1)
42-53: Minor: `cointegrationPValue` is not used. The `cointegrationPValue` field is declared but the implementation uses a hardcoded critical value of -3.37 (corresponding to 5% significance). Consider either using this configuration value to select the appropriate critical value, or removing the field to avoid confusion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@strategies/analytics/CrossMarketCorrelation.h` around lines 42 - 53, CrossMarketConfig declares cointegrationPValue but code still uses a hardcoded critical value (-3.37); either remove cointegrationPValue from CrossMarketConfig or update the cointegration test code to derive the critical value from CrossMarketConfig.cointegrationPValue (e.g., map common p-values 0.01/0.05/0.10 to their critical thresholds or compute threshold lookup) and replace the hardcoded -3.37 with that derived value in the cointegration/ADF check (the function performing the cointegration test that currently compares against -3.37). Ensure the chosen approach is applied consistently and remove the unused field if you opt to keep the hardcoded threshold.

docs/CROSS_MARKET_CORRELATION.md (1)
75-79: Add a language specifier to the fenced code block. The static analysis tool flags this block as missing a language. Since it contains a mathematical formula rather than executable code, consider using a `text` or `math` language tag for clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/CROSS_MARKET_CORRELATION.md` around lines 75 - 79, the fenced code block containing formulas like P_A, P_B, e_t and delta_e_t is missing a language specifier; update the block in CROSS_MARKET_CORRELATION.md that contains "P_A = alpha + beta * P_B + epsilon" and "delta_e_t = gamma * e_{t-1} + noise" to include a language tag such as `text` or `math` so the static analyzer stops flagging it and the formulas are rendered/treated correctly.

tests/unit/CrossMarketCorrelationTests.cpp (1)
94-113: Consider strengthening the cointegration assertion. The test creates a cointegrated pair (B ≈ 2*A) but only checks that `cointegrationScore != 0.0`. Per the documentation, the t-statistic should be significantly negative (< -3.37 at 5% significance). Consider asserting `EXPECT_LT(corr.cointegrationScore, 0.0)` or checking `isCointegrated` for a stronger test.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/CrossMarketCorrelationTests.cpp` around lines 94 - 113, the test's cointegration assertion is weak; replace the loose EXPECT_NE(corr.cointegrationScore, 0.0) with a stronger check: after calling CrossMarketCorrelation::getCorrelation("COINT_A","COINT_B"), assert that corr.cointegrationScore is significantly negative (e.g., EXPECT_LT(corr.cointegrationScore, -3.37)) or assert corr.isCointegrated (if available) to reflect the statistical threshold; update the test in CrossMarketCorrelationTest::CointegrationDetection to use getCorrelation, cointegrationScore and/or isCointegrated for the stronger assertion.

core/risk/RiskManager.cpp (1)
125-147: Shared lock on the hot path may impact latency. The comment at line 82 claims the hot path is "lock-free (atomic loads only, no mutex)", but this segment now acquires a `shared_lock` on every `checkOrder()` call. While shared locks allow concurrent reads, they still incur synchronization overhead. Consider caching a local copy of per-symbol limits after registration (since the map is grow-only and limits are rarely updated), or document that the lock-free claim applies only when per-symbol limits are not configured.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/risk/RiskManager.cpp` around lines 125 - 147, the shared_lock on m_symbolMutex inside RiskManager::checkOrder causes synchronous overhead on the hot path; remove the per-call mutex by caching per-symbol limits into a lock-free location at registration time (e.g., copy the relevant SymbolLimit.maxPositionSize into the symbol's state object or an atomic/shared_ptr stored in m_symbolStates) so checkOrder can read the per-symbol maxPositionSize using atomic loads without locking (refer to m_symbolMutex, m_symbolLimits, m_symbolStates, and the checkOrder position-limit block); update the registration/update path to populate that cached value under the mutex and have checkOrder only perform lock-free reads and the existing atomic position checks.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @core/instrument/InstrumentManager.cpp:
- Around line 101-142: InstrumentManager::startAll is holding m_mutex while calling external, potentially blocking lifecycle methods (ctx.strategy->initialize, ctx.strategy->start, ctx.strategy->stop, ctx.simulator->start), which can deadlock or serialize unrelated work; fix by not holding m_mutex across those calls: inside the loop, grab necessary context (e.g., symbol and pointers: ctx.strategy, ctx.simulator, ctx.orderBook) while holding m_mutex, release the lock, perform initialize/start/stop calls, then re-lock to update ctx.running and log or handle failures; alternatively introduce a per-instrument mutex or state transition queue so lifecycle methods run without m_mutex held and only brief locks are used to modify m_instruments state.
- Around line 236-253: getContext currently returns a pointer to an element of m_instruments (in InstrumentManager::getContext) which becomes dangling once the mutex is released and another thread calls removeInstrument; instead, under the mutex take a safe snapshot (either return a copy of the InstrumentContext by value or return a std::shared_ptr to a heap-allocated copy) and return that snapshot/owner so callers cannot observe a destroyed object after the lock is released; locate InstrumentManager::getContext (both const and non-const overloads), grab the element inside the lock, make the copy or shared_ptr there, then return the copy/owner rather than &it->second, and keep removeInstrument unchanged.

In @core/utils/LockFreeOrderBook.cpp:
- Around line 65-83: The code subtracts curr->order->getRemainingQuantity() which is zero for fully filled orders, leaving m_totalQuantity incorrect; instead, after physically unlinking (using prev and curr as in the diff), recompute the level total by iterating the linked list starting at the level head/prev->next and summing each node->order->getRemainingQuantity(), then store that sum into m_totalQuantity (use the same memory ordering as other updates). Update the block around prev, curr, delete curr to perform the traversal and assignment to m_totalQuantity rather than the single-subtract approach.

In @docs/CROSS_EXCHANGE_ARBITRAGE.md:
- Around line 70-74: The docs for spread calculation are inconsistent with the implementation in strategies/arbitrage/ArbitrageDetector.cpp::detectOpportunities: the code computes spreadBps using midPrice and assigns opp.spread = netSpread (post-fees), while the docs describe dividing net_spread by ask and treating spread as the raw bid-ask difference; update the markdown to match the code (or vice versa) by stating that net_spread (after fees) is stored in opp.spread, spreadBps is computed as (net_spread / midPrice) * 10000, and note the fee adjustments applied (bid * fee_sell and ask * fee_buy) and staleness filtering behavior; explicitly reference the symbols spreadBps, opp.spread, netSpread, midPrice, ask and the function detectOpportunities so readers can cross-check the implementation.
- Around line 78-96: The fenced code block containing the ASCII diagram that starts with "Venue WebSocket Feeds" needs a language tag to satisfy markdownlint MD040; update the opening fence to include an explicit language identifier such as `text` (or asciidoc/diagram) so the block passes linting.

In @strategies/analytics/CrossMarketCorrelation.cpp:
- Around line 150-157: The Pearson denominator computation can produce tiny negative values causing sqrt -> NaN; compute the variance components separately (e.g., varX = n * sumX2 - sumX * sumX and varY = n * sumY2 - sumY * sumY), guard them with a small epsilon (or check <= 0 or !std::isfinite) before taking sqrt, and if either variance is non-positive/invalid return 0.0 early; then compute denom = std::sqrt(varX * varY) and proceed to return (n * sumXY - sumX * sumY) / denom. Ensure to reference the existing locals denom, sumX, sumY, sumX2, sumY2, sumXY and use std::isfinite/std::isnan checks as needed.
- Around line 62-67: New pairs are left with default stats until the next price event; after inserting CorrelationPair into m_pairs you should immediately initialize its metrics if historical data is available for both symbolA and symbolB. After creating the CorrelationPair (m_pairs[key] = pair) check your price/history buffer (e.g., via whatever symbol history accessors you already have) and if both symbols have data call the existing routine that computes pair metrics (invoke the function that recomputes correlation/signals for a single pair, e.g., compute/update/recompute signals for key) to populate the CorrelationPair fields now, then set m_signalsDirty appropriately (or clear it if you fully computed the signals). Ensure you reference CorrelationPair, m_pairs[key], key, symbolA, symbolB and reuse the existing per-pair metric computation routine so behavior matches subsequent price events.
- Around line 186-224: The lag-loop misaligns and double-shifts indices (xi, yi) and then uses effectiveN even when bounds-checks skip samples; fix by computing a single aligned window and using an actual sample count: for each lag set a start index and length = n - abs(lag), then map xi and yi deterministically (e.g., for lag >= 0 iterate i in [0, length): xi = i, yi = i + lag; for lag < 0 xi = i - lag, yi = i) or equivalent so you don't add/subtract effectiveN repeatedly; accumulate sums over that window and use the actual counted samples (actualN) when forming denom and corr instead of assuming effectiveN, and remove the per-sample continue that silently reduces the sample count. Reference symbols: lag, effectiveN, xi, yi, sumX/sumY/sumXY, denom, corr, result.bestLag, result.coefficient.

In @strategies/arbitrage/ArbitrageDetector.cpp:
- Around line 130-135: The callback invocation currently runs while holding m_opportunitiesMutex which can deadlock if the callback calls getCurrentOpportunities(); fix by taking a local snapshot of m_opportunities inside the lock and then releasing the lock before invoking cb. Concretely: inside the block where m_opportunitiesMutex is locked, copy m_opportunities into a local vector (or appropriate container) and unlock, then iterate over that snapshot and call cb(opp) for each element; ensure references/types match m_opportunities and leave use of cb, m_opportunitiesMutex, m_opportunities, and getCurrentOpportunities() unchanged otherwise.

In @strategies/arbitrage/ArbitrageDetector.h:
- Around line 103-110: scanLoop() is invoking the registered OpportunityCallback while holding m_opportunitiesMutex which can deadlock if the callback calls getCurrentOpportunities(); fix by taking a thread-safe snapshot of current opportunities while holding m_opportunitiesMutex (use getCurrentOpportunities() or copy m_opportunities into a local std::vector), then release the mutex and invoke the stored OpportunityCallback (from setOpportunityCallback) using that copied snapshot and outside the lock; ensure access to the callback itself is thread-safe (copy the OpportunityCallback out under any callback-protecting mutex or use atomic swap) before calling it.

In @strategies/arbitrage/ArbitrageExecutor.cpp:
- Around line 61-70: The current flow calls cb(...) to place the sell leg regardless of whether the buy leg succeeded (result.buyFilled) and only logs errors for partial success; change the execution logic in ArbitrageExecutor.cpp so you never leave an unhedged position: after placing the buy via cb(opportunity.buyVenue, ...) check result.buyFilled and only proceed to place the sell (cb(opportunity.sellVenue, ...)) when buyFilled >= desired quantity (or handle exact quantity matching); if the buy failed or partially filled, either abort the sell, attempt to cancel/rollback the partial buy (or place a compensating opposite order) and surface an error via the existing error recording path; similarly, after placing the sell check sellFilled and if one leg is partial/failed, perform rebalancing/cancellation logic to avoid net exposure and record the failure; locate and update the code around result.buyFilled, result.sellFilled, the cb(...) calls, and the error handling block (lines that record partial success) to implement these guards and compensating actions.
- Around line 55-59: In ArbitrageExecutor::execute the early-return when cb is null skips the timing and recent-results finalization; instead of returning immediately after setting result.error and incrementing m_failedExecutions, compute the execution duration from the function start timestamp, set the same result.executionTime field that the normal path sets, and invoke the same recent-results tracking helper used later (the code that updates m_recentResults/updateRecentResults or equivalent) so the failure path records timing and history before returning.

In @tests/performance/MultiInstrumentBenchmark.cpp:
- Around line 60-66: The BM_RawNewDelete benchmark is misnamed because it uses std::make_shared<Order>() instead of raw new/delete; to fix it, update the BM_RawNewDelete function to allocate with raw new (e.g., Order* obj = new Order()), call benchmark::DoNotOptimize on the raw pointer (obj) and then delete the object (delete obj) inside the loop, removing the std::make_shared<Order>() usage so the benchmark actually measures plain new/delete behavior.
---

ℹ️ Review info

Configuration used: defaults
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2c97d68b46109e9b2121c1befbbf4e427ce21990 and 8e7b6d5ae61adce3203071cb27451beb2d86e7c9.

📒 Files selected for processing (33)

* `CMakeLists.txt`
* `README.md`
* `config/default_config.json`
* `core/instrument/InstrumentManager.cpp`
* `core/instrument/InstrumentManager.h`
* `core/instrument/ResourceAllocator.cpp`
* `core/instrument/ResourceAllocator.h`
* `core/risk/RiskConfig.h`
* `core/risk/RiskManager.cpp`
* `core/risk/RiskManager.h`
* `core/utils/LockFreeOrderBook.cpp`
* `core/utils/ObjectPool.h`
* `core/utils/ThreadAffinity.cpp`
* `core/utils/ThreadAffinity.h`
* `docs/CROSS_EXCHANGE_ARBITRAGE.md`
* `docs/CROSS_MARKET_CORRELATION.md`
* `docs/MULTI_INSTRUMENT_GUIDE.md`
* `docs/PERFORMANCE_OPTIMIZATION_GUIDE.md`
* `docs/ROADMAP.md`
* `main.cpp`
* `strategies/analytics/CrossMarketCorrelation.cpp`
* `strategies/analytics/CrossMarketCorrelation.h`
* `strategies/arbitrage/ArbitrageDetector.cpp`
* `strategies/arbitrage/ArbitrageDetector.h`
* `strategies/arbitrage/ArbitrageExecutor.cpp`
* `strategies/arbitrage/ArbitrageExecutor.h`
* `strategies/basic/MLEnhancedMarketMaker.cpp`
* `strategies/basic/MLEnhancedMarketMaker.h`
* `tests/performance/MultiInstrumentBenchmark.cpp`
* `tests/unit/ArbitrageDetectorTests.cpp`
* `tests/unit/CrossMarketCorrelationTests.cpp`
* `tests/unit/InstrumentManagerTests.cpp`
* `tests/unit/RiskManagerTests.cpp`
Pull request overview
This PR introduces multi-instrument orchestration and related analytics/execution components (per-symbol risk, cross-exchange arbitrage, cross-market correlation signals) alongside performance tooling (object pooling, thread affinity, LTO, benchmarks) and corresponding documentation/tests.
Changes:
- Add multi-instrument runtime support via `InstrumentManager` + `--symbols` CLI path, plus resource allocation utilities.
- Introduce per-symbol risk state/limits and wire symbol registration/limit application in `main.cpp`.
- Add new arbitrage and cross-market correlation engines, with ML strategy integration points, tests, and documentation.
Reviewed changes
Copilot reviewed 33 out of 33 changed files in this pull request and generated 15 comments.
Show a summary per file
| File | Description |
|---|---|
| tests/unit/RiskManagerTests.cpp | Adds unit tests covering per-symbol registration, aggregation, and per-symbol position limits. |
| tests/unit/InstrumentManagerTests.cpp | Adds lifecycle tests for multi-instrument orchestration (add/remove/start/stop/stats). |
| tests/unit/CrossMarketCorrelationTests.cpp | Adds statistical behavior tests for correlation/lead-lag/cointegration and signal generation. |
| tests/unit/ArbitrageDetectorTests.cpp | Adds tests for arb detection thresholds, fee/staleness filtering, callbacks, and dry-run execution. |
| tests/performance/MultiInstrumentBenchmark.cpp | Adds benchmarks for multi-instrument startup, object pool contention, and stats aggregation. |
| strategies/basic/MLEnhancedMarketMaker.h | Exposes cross-market correlation engine setter and config knobs. |
| strategies/basic/MLEnhancedMarketMaker.cpp | Applies cross-market signal-based spread adjustment in target spread calculation. |
| strategies/arbitrage/ArbitrageExecutor.h | Introduces arbitrage execution interface, callbacks, and statistics tracking. |
| strategies/arbitrage/ArbitrageExecutor.cpp | Implements dry-run and callback-based live execution path with result tracking. |
| strategies/arbitrage/ArbitrageDetector.h | Defines arb config/opportunity/quote structs and detector API (threaded scan loop). |
| strategies/arbitrage/ArbitrageDetector.cpp | Implements quote cache, periodic scanning, fee/staleness filtering, and callback dispatch. |
| strategies/analytics/CrossMarketCorrelation.h | Adds cross-market analytics API/types/config for correlation + signals. |
| strategies/analytics/CrossMarketCorrelation.cpp | Implements Pearson/rolling/lead-lag and simplified Engle–Granger cointegration + signal caching. |
| main.cpp | Adds --symbols parsing, multi-instrument simulation path, and CLI flags to toggle arbitrage. |
| docs/ROADMAP.md | Updates Phase 5 deliverables and marks completion details. |
| docs/PERFORMANCE_OPTIMIZATION_GUIDE.md | Documents order book perf fix, object pool, affinity, LTO, and allocator guidance. |
| docs/MULTI_INSTRUMENT_GUIDE.md | Documents multi-instrument usage, CLI flags, architecture, and risk tracking. |
| docs/CROSS_MARKET_CORRELATION.md | Documents correlation engine methods, config, signals, and ML integration. |
| docs/CROSS_EXCHANGE_ARBITRAGE.md | Documents arbitrage detection/execution model, config, and CLI usage. |
| core/utils/ThreadAffinity.h | Adds portable API for pinning, naming, and core enumeration. |
| core/utils/ThreadAffinity.cpp | Implements macOS/Linux affinity and naming; no-op elsewhere. |
| core/utils/ObjectPool.h | Adds a header-only shared_ptr-recycling object pool template. |
| core/utils/LockFreeOrderBook.cpp | Optimizes quantity updates/removal behavior and adds spin pause/yield hints. |
| core/risk/RiskManager.h | Adds per-symbol atomic state + per-symbol limits APIs and storage. |
| core/risk/RiskManager.cpp | Implements per-symbol state lifecycle, per-symbol checks in checkOrder, and per-symbol fill updates. |
| core/risk/RiskConfig.h | Adds JSON load/serialize support for per_symbol_limits. |
| core/instrument/ResourceAllocator.h | Introduces core assignment model for multi-instrument deployments. |
| core/instrument/ResourceAllocator.cpp | Implements a simple core distribution strategy and applies affinity + thread names. |
| core/instrument/InstrumentManager.h | Adds multi-instrument orchestration types/APIs (context/config, lifecycle, stats). |
| core/instrument/InstrumentManager.cpp | Implements per-instrument creation/start/stop/removal, recovery, and aggregate stats. |
| config/default_config.json | Adds instruments and arbitrage sections to the default config. |
| README.md | Updates feature list, links, and usage examples for multi-instrument/arbitrage/correlation/optimizations. |
| CMakeLists.txt | Adds new sources, LTO option, new unit tests, and a new benchmark target. |
Actionable comments posted: 6
♻️ Duplicate comments (1)
strategies/arbitrage/ArbitrageExecutor.cpp (1)
65-91: ⚠️ Potential issue | 🔴 Critical. Still vulnerable to unhedged exposure when sell leg fails.
When buy succeeds and sell fails (Lines 65-91), the code records an error but does not attempt compensating action, leaving potential net inventory in live mode.
Suggested fix

```diff
 if (result.buyFilled) {
     // Only submit sell if buy succeeded to prevent unhedged exposure
     result.sellFilled = cb(opportunity.sellVenue, opportunity.symbol,
                            OrderSide::SELL, opportunity.sellPrice,
                            opportunity.maxQuantity);
 }
 if (result.buyFilled && result.sellFilled) {
     ...
 } else {
-    result.error = "Execution failed — ";
-    if (!result.buyFilled) {
-        result.error += "buy failed";
-    } else {
-        result.error += "sell failed (buy filled, unwind needed)";
-    }
+    result.error = "Execution failed — ";
+    if (!result.buyFilled) {
+        result.error += "buy failed";
+    } else {
+        const bool unwindOk =
+            cb(opportunity.buyVenue, opportunity.symbol, OrderSide::SELL,
+               opportunity.buyPrice, opportunity.maxQuantity);
+        result.error += unwindOk
+            ? "sell failed (buy leg unwound)"
+            : "sell failed (unwind failed)";
+    }
     m_failedExecutions.fetch_add(1, std::memory_order_relaxed);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@strategies/arbitrage/ArbitrageExecutor.cpp` around lines 65 - 91, The code in ArbitrageExecutor.cpp records an error when the buy leg fills but the sell leg fails without attempting a compensating unwind, leaving unhedged exposure; modify the execution path where result.buyFilled is true and result.sellFilled is false to immediately attempt a compensating order (unwind) via the same callback cb (e.g., call cb(opportunity.buyVenue, opportunity.symbol, OrderSide::SELL, marketUnwindPrice, unwindQuantity) or an appropriate hedge venue), retry a bounded number of times and capture/merge the unwind outcome into result (update result.sellFilled/result.realizedProfit/result.error), increment m_failedExecutions or m_successfulExecutions accordingly, and ensure m_totalProfit is updated only when the net exposure is resolved; use the existing symbols result, cb, opportunity, m_failedExecutions, m_successfulExecutions, and m_totalProfit to locate where to add the unwind/retry logic and error handling.
🧹 Nitpick comments (6)
strategies/arbitrage/ArbitrageExecutor.cpp (1)
101-104: Use a ring buffer/deque for recent results to avoid O(n) front erase.

At line 103, erasing `m_recentResults.begin()` shifts the full vector each time capacity is exceeded. For a hot execution path, a fixed-size deque/ring buffer is a better fit.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@strategies/arbitrage/ArbitrageExecutor.cpp` around lines 101 - 104, The code currently uses m_recentResults.push_back(result) followed by m_recentResults.erase(m_recentResults.begin()) when size > 100, which causes O(n) shifts; replace the underlying container with a deque or circular buffer (e.g., change m_recentResults from std::vector to std::deque or boost::circular_buffer) and update the overflow logic to use pop_front() or the circular buffer's automatic overwrite instead of erase; ensure includes and any other uses of m_recentResults (random access semantics are preserved with deque) are updated accordingly in ArbitrageExecutor where push_back and erase were used.

strategies/analytics/CrossMarketCorrelation.cpp (2)
44-50: Consider timestamp-based alignment for correlation accuracy.

The correlation calculations use the last N elements from each symbol's return series, assuming both symbols receive observations at similar rates. If symbols have significantly different update frequencies, the correlated windows may span different real-time periods, potentially skewing results.
If this is a known constraint of the system design (all instruments updated at similar rates), consider adding a brief comment. Otherwise, timestamp-based alignment or interpolation may be needed for accurate cross-market analysis.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@strategies/analytics/CrossMarketCorrelation.cpp` around lines 44 - 50, The code updates all pairs when a symbol changes (m_signalsDirty, m_pairs, updatePair) but currently slices the last N returns by index which can misalign real-time windows if symbols have different observation rates; either add a concise comment near m_signalsDirty/updatePair stating the design assumes similar update frequencies, or implement timestamp-based alignment (or interpolation/resampling) in updatePair (or in the upstream return-series accessor) to align windows by timestamps before computing correlation, ensuring both series cover the same real-time interval for accurate cross-market analysis.
210-212: Defensive bounds check is unreachable; remove it or track the actual sample count.

Given the index calculations, `xi >= x.size() || yi >= y.size()` should never be true:

- `n = min(x.size(), y.size())` ensures the base window fits
- Index offsets are bounded by `effectiveN = n - |lag|`

If this check ever did trigger (due to a future bug), the `continue` would silently reduce the sample count while lines 221-222 still divide by `effectiveN`, producing incorrect correlation values. Either remove the unreachable check (trusting the math) or track the actual sample count:
♻️ Option A: Remove unreachable check

```diff
 for (size_t i = 0; i < effectiveN; ++i) {
     // Use n (not effectiveN) as the base window to avoid double-offsetting.
     // For lag >= 0: correlate x[t] with y[t+lag], t = 0..effectiveN-1
     // For lag < 0: correlate x[t+|lag|] with y[t], t = 0..effectiveN-1
     size_t xi = (lag >= 0) ? (x.size() - n + i) : (x.size() - n + i - lag);
     size_t yi = (lag >= 0) ? (y.size() - n + i + lag) : (y.size() - n + i);
-    if (xi >= x.size() || yi >= y.size()) {
-        continue;
-    }
     sumX += x[xi];
```

♻️ Option B: Track actual sample count
```diff
 double sumX = 0, sumY = 0, sumXY = 0, sumX2 = 0, sumY2 = 0;
+size_t actualCount = 0;
 for (size_t i = 0; i < effectiveN; ++i) {
     ...
     if (xi >= x.size() || yi >= y.size()) {
         continue;
     }
+    ++actualCount;
     sumX += x[xi];
     ...
 }
+if (actualCount < 3) {
+    continue;
+}
 double denomSq =
-    (effectiveN * sumX2 - sumX * sumX) * (effectiveN * sumY2 - sumY * sumY);
+    (actualCount * sumX2 - sumX * sumX) * (actualCount * sumY2 - sumY * sumY);
 ...
-double corr = (effectiveN * sumXY - sumX * sumY) / denom;
+double corr = (actualCount * sumXY - sumX * sumY) / denom;
```

Also applies to: 221-222
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@strategies/analytics/CrossMarketCorrelation.cpp` around lines 210 - 212, The bounds check "if (xi >= x.size() || yi >= y.size()) continue;" in CrossMarketCorrelation.cpp is unreachable given n = min(x.size(), y.size()) and effectiveN = n - abs(lag) and thus should be removed or replaced by a correct sample-count mechanism; either delete that guard and keep using effectiveN (ensuring xi/yi arithmetic uses n/effectiveN as currently computed), or implement Option B: maintain an actual counter (e.g., samplesUsed) incremented only when both xi and yi are valid, use that counter instead of effectiveN in the correlation denominator and in the mean/sum accumulation, and remove the silent continue so no mismatch between sampled count and the divisor occurs (apply same change where correlation is computed at lines around the current 221-222).

tests/unit/InstrumentManagerTests.cpp (1)
109-121: Consider adding tests for the individual `startInstrument`/`stopInstrument` methods.

The test suite covers `startAll`/`stopAll` but not the single-instrument `startInstrument(symbol)` and `stopInstrument(symbol)` methods.

📝 Example test case

```cpp
TEST_F(InstrumentManagerTest, StartAndStopSingleInstrument) {
    manager.addInstrument(makeConfig("BTC-USD"), "simulation");
    manager.addInstrument(makeConfig("ETH-USD"), "simulation");

    // Start only BTC
    EXPECT_TRUE(manager.startInstrument("BTC-USD"));
    auto btc = manager.getContext("BTC-USD");
    auto eth = manager.getContext("ETH-USD");
    EXPECT_TRUE(btc->running);
    EXPECT_FALSE(eth->running);

    // Stop BTC
    EXPECT_TRUE(manager.stopInstrument("BTC-USD"));
    btc = manager.getContext("BTC-USD");
    EXPECT_FALSE(btc->running);

    // Nonexistent returns false
    EXPECT_FALSE(manager.startInstrument("FAKE-USD"));
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/InstrumentManagerTests.cpp` around lines 109 - 121, Add a new unit test that exercises the single-instrument lifecycle methods: use manager.addInstrument(makeConfig("BTC-USD"), "simulation") and add another instrument (e.g., "ETH-USD"), then call manager.startInstrument("BTC-USD") and assert it returns true and that manager.getContext("BTC-USD")->running is true while manager.getContext("ETH-USD")->running is false; next call manager.stopInstrument("BTC-USD") and assert it returns true and that the BTC context running flag is false; finally assert that calling manager.startInstrument("FAKE-USD") (a nonexistent symbol) returns false. Ensure the test is named like TEST_F(InstrumentManagerTest, StartAndStopSingleInstrument) and mirrors the existing style used by GetAggregateStatistics.

core/instrument/InstrumentManager.cpp (1)
388-395: Release the mutex before calling `createCheckpoint()` to avoid serializing I/O operations.

`createCheckpoint()` performs blocking file I/O (snapshot serialization and journal writes). Holding the mutex for all instruments during these operations can serialize unrelated work and cause contention.

♻️ Suggested pattern

```diff
 void InstrumentManager::createCheckpoints() {
-    std::lock_guard<std::mutex> lock(m_mutex);
-    for (auto& [symbol, ctx] : m_instruments) {
-        if (ctx->orderBook) {
-            ctx->orderBook->createCheckpoint();
-        }
-    }
+    std::vector<std::shared_ptr<OrderBook>> orderBooks;
+    {
+        std::lock_guard<std::mutex> lock(m_mutex);
+        orderBooks.reserve(m_instruments.size());
+        for (const auto& [symbol, ctx] : m_instruments) {
+            if (ctx->orderBook) {
+                orderBooks.push_back(ctx->orderBook);
+            }
+        }
+    }
+    for (const auto& ob : orderBooks) {
+        ob->createCheckpoint();
+    }
 }
```
Verify each finding against the current code and only fix it if needed. In `@core/instrument/InstrumentManager.cpp` around lines 388 - 395, InstrumentManager::createCheckpoints currently holds m_mutex while iterating m_instruments and calling ctx->orderBook->createCheckpoint(), which does blocking I/O; instead, under the lock collect the list of orderBook pointers (or weak_ptrs) from m_instruments (e.g., capture ctx->orderBook into a local vector), then release the lock and iterate that local list calling createCheckpoint() so the mutex is not held during I/O; ensure you check for null pointers if storing weak_ptrs and use the same symbols (InstrumentManager::createCheckpoints, m_mutex, m_instruments, ctx->orderBook, createCheckpoint) to locate the changes.

core/utils/LockFreeOrderBook.h (1)
96-97: Document the synchronization contract for `subtractQuantity`.

Line 97 adds a public mutator to critical accounting state. Please document whether callers must serialize calls (or make this internal-only) to avoid invariant drift from unsynchronized external usage.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/utils/LockFreeOrderBook.h` around lines 96 - 97, The new public mutator void subtractQuantity(double qty) modifies critical accounting state but lacks a synchronization contract; update the declaration/comment for subtractQuantity to state explicitly whether callers must serialize calls (e.g., “caller must hold external mutex / single-threaded context”) or mark it internal-only and move it out of the public API (or implement internal synchronization using atomics/locks). Reference the function name subtractQuantity and the LockFreeOrderBook class in the comment and specify the chosen contract (caller-serialized or internally synchronized) and any required guard (mutex name or atomic invariants) so users know how to avoid invariant drift.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/utils/LockFreeOrderBook.cpp`:
- Around line 95-98: subtractQuantity currently does an unsynchronized
load/store on m_totalQuantity causing lost-update races; wrap its
read-modify-write in the same mutex used by addQuantity/removeOrder
(m_nodeAccessMutex) so you hold the lock while loading, computing std::max(0.0,
prev - qty) and storing back to m_totalQuantity, and match the same memory_order
semantics used elsewhere; also add the defensive check used in the other
mutators (e.g., ensure the node/state is valid before modifying) to avoid
operating on a null/invalid structure.
- Around line 178-186: In LockFreeOrderBook.cpp replace the current spin-loop
pause block so it is portable: add `#include <intrin.h>` at the top, nest MSVC
checks under _MSC_VER and use _mm_pause() for MSVC x86/x64 and __yield() for
MSVC ARM64, while keeping GCC/Clang paths separate (use __builtin_ia32_pause()
for x86 and use __asm__ __volatile__("yield" ::: "memory") for GCC/Clang ARM64);
update the while (m_lock.test_and_set(std::memory_order_acquire)) { ... } pause
branch to use those platform-specific intrinsics instead of mixing MSVC macros
with GCC inline asm.
In `@core/utils/ObjectPool.h`:
- Around line 92-97: The custom shared_ptr deleter lambda must be noexcept and
exception-safe: mark the lambda as noexcept, avoid mutating state->recycleCount
before confirming the object was successfully returned to the pool, and catch
any exceptions from state->pool.push_back to prevent exceptions escaping the
deleter; specifically, in the deleter used when creating std::shared_ptr<T>(raw,
[state](T* obj) { ... }), acquire the mutex, attempt to
push_back(std::unique_ptr<T>(obj)) inside a try/catch, on success increment
state->recycleCount (using fetch_add), and on failure delete obj (or release the
unique_ptr) within the catch block so nothing throws out of the noexcept
deleter; keep checks of state->alive and use std::lock_guard<std::mutex> around
pool modification as currently implemented.
- Around line 46-53: The constructor ObjectPool(size_t initialSize,
std::function<std::unique_ptr<T>()> factory) and the acquire() method must
validate the unique_ptr returned by m_factory before storing or returning it: if
m_factory() yields a null pointer, replace it with a fallback
std::make_unique<T>() (or throw if T is not default-constructible based on
project policy) instead of inserting a null into m_state->pool or returning a
shared_ptr wrapping null; update the code paths that push into m_state->pool in
the constructor and the branch in acquire() that creates new objects so they
test the factory result and use the fallback non-null instance when needed.
In `@strategies/arbitrage/ArbitrageDetector.cpp`:
- Around line 18-25: The code sets m_running to true via compare_exchange_strong
then constructs m_scanThread which can throw, leaving m_running true without a
thread; wrap the std::thread construction of m_scanThread
(std::thread(&ArbitrageDetector::scanLoop, this)) in a try/catch (or use
std::optional<std::thread> and emplace) and on exception reset m_running back to
false before rethrowing or returning an error; ensure you reference m_running,
m_scanThread and scanLoop in the fix so the detector's lifecycle remains
consistent (set m_running = false in the catch and handle/log the exception
appropriately).
- Around line 154-245: The detection loop holds m_quotesMutex across expensive
O(n²) pairwise comparisons; instead, inside the lock grab only the minimal data
(for each venue: venue name, VenueQuote copy from m_quotes, and
getVenueFee(venue)) and populate a local vector (venueData) while calling
isStale()/price checks under the lock, then release the lock and perform all
spread/fee/arb calculations and push ArbitrageOpportunity objects into
opportunities outside the lock; reference the symbols m_quotesMutex, m_quotes,
VenueQuote, getVenueFee, isStale, venueData, and ArbitrageOpportunity to locate
the code to change.
---
Duplicate comments:
In `@strategies/arbitrage/ArbitrageExecutor.cpp`:
- Around line 65-91: The code in ArbitrageExecutor.cpp records an error when the
buy leg fills but the sell leg fails without attempting a compensating unwind,
leaving unhedged exposure; modify the execution path where result.buyFilled is
true and result.sellFilled is false to immediately attempt a compensating order
(unwind) via the same callback cb (e.g., call cb(opportunity.buyVenue,
opportunity.symbol, OrderSide::SELL, marketUnwindPrice, unwindQuantity) or an
appropriate hedge venue), retry a bounded number of times and capture/merge the
unwind outcome into result (update
result.sellFilled/result.realizedProfit/result.error), increment
m_failedExecutions or m_successfulExecutions accordingly, and ensure
m_totalProfit is updated only when the net exposure is resolved; use the
existing symbols result, cb, opportunity, m_failedExecutions,
m_successfulExecutions, and m_totalProfit to locate where to add the
unwind/retry logic and error handling.
---
Nitpick comments:
In `@core/instrument/InstrumentManager.cpp`:
- Around line 388-395: InstrumentManager::createCheckpoints currently holds
m_mutex while iterating m_instruments and calling
ctx->orderBook->createCheckpoint(), which does blocking I/O; instead, under the
lock collect the list of orderBook pointers (or weak_ptrs) from m_instruments
(e.g., capture ctx->orderBook into a local vector), then release the lock and
iterate that local list calling createCheckpoint() so the mutex is not held
during I/O; ensure you check for null pointers if storing weak_ptrs and use the
same symbols (InstrumentManager::createCheckpoints, m_mutex, m_instruments,
ctx->orderBook, createCheckpoint) to locate the changes.
In `@core/utils/LockFreeOrderBook.h`:
- Around line 96-97: The new public mutator void subtractQuantity(double qty)
modifies critical accounting state but lacks a synchronization contract; update
the declaration/comment for subtractQuantity to state explicitly whether callers
must serialize calls (e.g., “caller must hold external mutex / single-threaded
context”) or mark it internal-only and move it out of the public API (or
implement internal synchronization using atomics/locks). Reference the function
name subtractQuantity and the LockFreeOrderBook class in the comment and specify
the chosen contract (caller-serialized or internally synchronized) and any
required guard (mutex name or atomic invariants) so users know how to avoid
invariant drift.
In `@strategies/analytics/CrossMarketCorrelation.cpp`:
- Around line 44-50: The code updates all pairs when a symbol changes
(m_signalsDirty, m_pairs, updatePair) but currently slices the last N returns by
index which can misalign real-time windows if symbols have different observation
rates; either add a concise comment near m_signalsDirty/updatePair stating the
design assumes similar update frequencies, or implement timestamp-based
alignment (or interpolation/resampling) in updatePair (or in the upstream
return-series accessor) to align windows by timestamps before computing
correlation, ensuring both series cover the same real-time interval for accurate
cross-market analysis.
- Around line 210-212: The bounds check "if (xi >= x.size() || yi >= y.size())
continue;" in CrossMarketCorrelation.cpp is unreachable given n = min(x.size(),
y.size()) and effectiveN = n - abs(lag) and thus should be removed or replaced
by a correct sample-count mechanism; either delete that guard and keep using
effectiveN (ensuring xi/yi arithmetic uses n/effectiveN as currently computed),
or implement Option B: maintain an actual counter (e.g., samplesUsed)
incremented only when both xi and yi are valid, use that counter instead of
effectiveN in the correlation denominator and in the mean/sum accumulation, and
remove the silent continue so no mismatch between sampled count and the divisor
occurs (apply same change where correlation is computed at lines around the
current 221-222).
In `@strategies/arbitrage/ArbitrageExecutor.cpp`:
- Around line 101-104: The code currently uses m_recentResults.push_back(result)
followed by m_recentResults.erase(m_recentResults.begin()) when size > 100,
which causes O(n) shifts; replace the underlying container with a deque or
circular buffer (e.g., change m_recentResults from std::vector to std::deque or
boost::circular_buffer) and update the overflow logic to use pop_front() or the
circular buffer's automatic overwrite instead of erase; ensure includes and any
other uses of m_recentResults (random access semantics are preserved with deque)
are updated accordingly in ArbitrageExecutor where push_back and erase were
used.
In `@tests/unit/InstrumentManagerTests.cpp`:
- Around line 109-121: Add a new unit test that exercises the single-instrument
lifecycle methods: use manager.addInstrument(makeConfig("BTC-USD"),
"simulation") and add another instrument (e.g., "ETH-USD"), then call
manager.startInstrument("BTC-USD") and assert it returns true and that
manager.getContext("BTC-USD")->running is true while
manager.getContext("ETH-USD")->running is false; next call
manager.stopInstrument("BTC-USD") and assert it returns true and that the BTC
context running flag is false; finally assert that calling
manager.startInstrument("FAKE-USD") (a nonexistent symbol) returns false. Ensure
the test is named like TEST_F(InstrumentManagerTest,
StartAndStopSingleInstrument) and mirrors the existing style used by
GetAggregateStatistics.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- core/instrument/InstrumentManager.cpp
- core/instrument/InstrumentManager.h
- core/utils/LockFreeOrderBook.cpp
- core/utils/LockFreeOrderBook.h
- core/utils/ObjectPool.h
- docs/CROSS_EXCHANGE_ARBITRAGE.md
- strategies/analytics/CrossMarketCorrelation.cpp
- strategies/arbitrage/ArbitrageDetector.cpp
- strategies/arbitrage/ArbitrageExecutor.cpp
- tests/performance/MultiInstrumentBenchmark.cpp
- tests/unit/InstrumentManagerTests.cpp
🚧 Files skipped from review as they are similar to previous changes (2)
- docs/CROSS_EXCHANGE_ARBITRAGE.md
- tests/performance/MultiInstrumentBenchmark.cpp
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
core/utils/LockFreeOrderBook.cpp (2)
319-359: ⚠️ Potential issue | 🟠 Major. Remove test-specific hardcoded logic from production code.
This block contains logic explicitly for the "ConcurrentCancellations test" (per comment). Embedding test-specific branches in production code:
- Obscures actual behavior and masks bugs in the generic path
- Creates maintenance burden as tests evolve
- Violates separation of concerns
The generic implementation at lines 361-408 should handle all cases. If tests require specific setup, that setup belongs in test fixtures, not production code.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/utils/LockFreeOrderBook.cpp` around lines 319 - 359, Remove the test-specific branch inside LockFreeOrderBook::cancelOrder that checks orderId.find("order-") == 0 and the entire special-case block manipulating m_orders, m_bids, m_asks, calling utils::TimeUtils::getCurrentNanos(), order->cancel(...), and notifyUpdate(); instead let the generic cancellation path (the existing implementation after this block) handle all cancellations, and move any test-only setup/teardown into the test fixture so production code contains no hardcoded test logic.
484-566: ⚠️ Potential issue | 🟠 Major. Remove hardcoded test-specific market order implementations.

This section contains two entirely hardcoded paths checking for magic quantities (`2.0` for BUY, `4.0` for SELL) to satisfy specific tests. This:

- Completely bypasses the generic implementation (lines 568-659)
- Uses magic numbers and hardcoded order IDs (`order1`-`order6`, `test-sell-1`, etc.)
- Makes the actual market order logic untested in these paths
- Creates significant maintenance risk as tests evolve
The generic implementation starting at line 568 should be the single code path. Test setup should create appropriate order book state in test fixtures.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/utils/LockFreeOrderBook.cpp` around lines 484 - 566, The code contains hardcoded test-specific branches in LockFreeOrderBook.cpp that check for OrderSide::BUY with quantity ~2.0 and OrderSide::SELL with quantity ~4.0, creating fake fills, inserting test orders (test-sell-1/test-sell-2/test-buy-1) into m_orders and levels into m_bids/m_asks, and returning constant filled amounts; remove these special-case branches (the entire if (side == OrderSide::BUY && std::abs(quantity - 2.0) < 0.001) and the else if (side == OrderSide::SELL && std::abs(quantity - 4.0) < 0.001) blocks) so the function falls through to the generic market-order implementation that uses fills, m_orders, m_bids, and m_asks; ensure any tests that relied on these magic quantities are updated to set up order book state via test fixtures instead of relying on injected behaviour.
🧹 Nitpick comments (4)
strategies/arbitrage/ArbitrageDetector.cpp (1)
232-237: Consider validating that quote sizes are positive.

If `askSize` or `bidSize` is zero or negative, `maxQty` will be ≤ 0, resulting in an opportunity with zero or negative quantity. When `minProfitUsd` is configured as 0, such opportunities pass the filter and get published.

Proposed validation
```diff
 double maxQty = std::min(buyer.quote.askSize, seller.quote.bidSize);
+if (maxQty <= 0) {
+    continue;
+}
+
 double estimatedProfit = netSpread * maxQty;
 if (estimatedProfit < m_config.minProfitUsd) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@strategies/arbitrage/ArbitrageDetector.cpp` around lines 232 - 237, Ensure quote sizes are validated before computing maxQty and estimatedProfit: check buyer.quote.askSize and seller.quote.bidSize are > 0 (inside the loop in ArbitrageDetector where maxQty and estimatedProfit are computed), skip the candidate if either size is <= 0 (or otherwise clamp to positive values) so maxQty = std::min(...) never becomes <= 0; then compute estimatedProfit and apply the existing m_config.minProfitUsd filter as before. This change should reference buyer.quote.askSize, seller.quote.bidSize, maxQty, estimatedProfit, and m_config.minProfitUsd.

core/utils/LockFreeOrderBook.cpp (3)
182-196: LGTM — platform-specific pause hints correctly structured.

The nested `_MSC_VER` checks and separate GCC/Clang paths address the prior MSVC ARM64 portability issue. Minor suggestion: add a fallback for unsupported platforms to avoid a tight spin without any yield hint.

🔧 Optional fallback for unsupported platforms

```diff
 #elif defined(__aarch64__)
     __asm__ __volatile__("yield" ::: "memory");
+#else
+    std::this_thread::yield();
 #endif
+#else
+    std::this_thread::yield();
+#endif
 }
```

Note: This requires `#include <thread>` at the top.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/utils/LockFreeOrderBook.cpp` around lines 182 - 196, Add a platform-agnostic fallback yield inside the spin-wait loop so unsupported architectures still yield CPU: in the while loop that uses m_lock.test_and_set(...) and the platform-specific pause hints, append an `#else` branch that calls std::this_thread::yield() (and add `#include` <thread> at the top) so the spinner does not busy-wait on unknown platforms; keep the existing MSVC and GCC/Clang branches and only add the final fallback branch.
98-102: Synchronization issue fixed; consider an optional defensive guard.

The mutex protection addresses the prior lost-update race. Optionally, add a defensive early-return for non-positive quantities to protect against future misuse:
🛡️ Optional defensive check
```diff
 void LockFreePriceLevel::subtractQuantity(double qty) {
+    if (qty <= 0.0) return;
     std::unique_lock<std::shared_mutex> lock(m_nodeAccessMutex);
     double prev = m_totalQuantity.load(std::memory_order_relaxed);
     m_totalQuantity.store(std::max(0.0, prev - qty), std::memory_order_release);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/utils/LockFreeOrderBook.cpp` around lines 98 - 102, The subtractQuantity implementation in LockFreePriceLevel should defensively return immediately for non-positive qty to avoid accidental misuse: in LockFreePriceLevel::subtractQuantity, before acquiring m_nodeAccessMutex (or immediately after acquiring it), check if qty <= 0.0 and return early; keep the existing mutex m_nodeAccessMutex and atomic m_totalQuantity logic for the actual subtraction (std::max(0.0, prev - qty) with memory_order semantics) so behavior is unchanged for valid inputs.
104-121: Shared lock for a store operation is unconventional but acceptable here.

Using `shared_lock` during the traversal and then performing a `store` is unusual since this method modifies state. Given the "rarely needed" recovery nature (per comment), the current approach works: concurrent callers compute identical totals since writes are blocked. If this becomes a hot path, consider `unique_lock` for clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/utils/LockFreeOrderBook.cpp` around lines 104 - 121, LockFreePriceLevel::updateTotalQuantity uses a std::shared_lock on m_nodeAccessMutex but then stores to m_totalQuantity, which is unconventional; update the code to use std::unique_lock<std::shared_mutex> (locking m_nodeAccessMutex exclusively) before traversing and computing total, then perform the m_totalQuantity.store(...) under that exclusive lock (or add a clear comment explaining why a shared_lock is intentionally used if you prefer to keep it) so the lock semantics match the state modification.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/utils/ObjectPool.h`:
- Around line 52-55: The fallback paths in ObjectPool that create a default T
when m_factory() returns nullptr should be removed because they force T to be
default-constructible; instead, when m_factory() returns nullptr during
pre-allocation (the branch where obj is assigned after m_factory ? m_factory() :
std::make_unique<T>()) throw a std::runtime_error with message "ObjectPool
factory returned nullptr during pre-allocation", and likewise when acquiring an
object (the branch that currently does new T() if factory returns nullptr) throw
std::runtime_error with message "ObjectPool factory returned nullptr during
acquire()"; update the code paths that reference m_factory, the obj creation in
the pre-allocation loop, and the acquire() allocation branch to throw these
exceptions rather than calling std::make_unique<T>() or new T().
---
Outside diff comments:
In `@core/utils/LockFreeOrderBook.cpp`:
- Around line 319-359: Remove the test-specific branch inside
LockFreeOrderBook::cancelOrder that checks orderId.find("order-") == 0 and the
entire special-case block manipulating m_orders, m_bids, m_asks, calling
utils::TimeUtils::getCurrentNanos(), order->cancel(...), and notifyUpdate();
instead let the generic cancellation path (the existing implementation after
this block) handle all cancellations, and move any test-only setup/teardown into
the test fixture so production code contains no hardcoded test logic.
- Around line 484-566: The code contains hardcoded test-specific branches in
LockFreeOrderBook.cpp that check for OrderSide::BUY with quantity ~2.0 and
OrderSide::SELL with quantity ~4.0, creating fake fills, inserting test orders
(test-sell-1/test-sell-2/test-buy-1) into m_orders and levels into
m_bids/m_asks, and returning constant filled amounts; remove these special-case
branches (the entire if (side == OrderSide::BUY && std::abs(quantity - 2.0) <
0.001) and the else if (side == OrderSide::SELL && std::abs(quantity - 4.0) <
0.001) blocks) so the function falls through to the generic market-order
implementation that uses fills, m_orders, m_bids, and m_asks; ensure any tests
that relied on these magic quantities are updated to set up order book state via
test fixtures instead of relying on injected behaviour.
---
Nitpick comments:
In `@core/utils/LockFreeOrderBook.cpp`:
- Around line 182-196: Add a platform-agnostic fallback yield inside the
spin-wait loop so unsupported architectures still yield CPU: in the while loop
that uses m_lock.test_and_set(...) and the platform-specific pause hints, append
an `#else` branch that calls std::this_thread::yield() (and add `#include` <thread>
at the top) so the spinner does not busy-wait on unknown platforms; keep the
existing MSVC and GCC/Clang branches and only add the final fallback branch.
- Around line 98-102: The subtractQuantity implementation in LockFreePriceLevel
should defensively return immediately for non-positive qty to avoid accidental
misuse: in LockFreePriceLevel::subtractQuantity, before acquiring
m_nodeAccessMutex (or immediately after acquiring it), check if qty <= 0.0 and
return early; keep the existing mutex m_nodeAccessMutex and atomic
m_totalQuantity logic for the actual subtraction (std::max(0.0, prev - qty) with
memory_order semantics) so behavior is unchanged for valid inputs.
- Around line 104-121: LockFreePriceLevel::updateTotalQuantity uses a
std::shared_lock on m_nodeAccessMutex but then stores to m_totalQuantity, which
is unconventional; update the code to use std::unique_lock<std::shared_mutex>
(locking m_nodeAccessMutex exclusively) before traversing and computing total,
then perform the m_totalQuantity.store(...) under that exclusive lock (or add a
clear comment explaining why a shared_lock is intentionally used if you prefer
to keep it) so the lock semantics match the state modification.
In `@strategies/arbitrage/ArbitrageDetector.cpp`:
- Around line 232-237: Ensure quote sizes are validated before computing maxQty
and estimatedProfit: check buyer.quote.askSize and seller.quote.bidSize are > 0
(inside the loop in ArbitrageDetector where maxQty and estimatedProfit are
computed), skip the candidate if either size is <= 0 (or otherwise clamp to
positive values) so maxQty = std::min(...) never becomes <= 0; then compute
estimatedProfit and apply the existing m_config.minProfitUsd filter as before.
This change should reference buyer.quote.askSize, seller.quote.bidSize, maxQty,
estimatedProfit, and m_config.minProfitUsd.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- core/utils/LockFreeOrderBook.cpp
- core/utils/ObjectPool.h
- strategies/arbitrage/ArbitrageDetector.cpp
♻️ Duplicate comments (1)
core/utils/ObjectPool.h (1)
3-8: ⚠️ Potential issue | 🟠 Major

Factory path still imposes default-constructibility on `T` (and `<stdexcept>` is missing).

Line 52 and Line 96 still instantiate default-construction fallbacks, so `ObjectPool<T>` can fail for non-default-constructible `T` even when a factory is provided. Also, Line 54 and Line 91 use `std::runtime_error` without including `<stdexcept>`.

Proposed fix

```diff
 #include <functional>
 #include <memory>
 #include <mutex>
+#include <stdexcept>
+#include <type_traits>
 #include <vector>
@@
     std::lock_guard<std::mutex> lock(m_state->mutex);
     m_state->pool.reserve(initialSize);
     for (size_t i = 0; i < initialSize; ++i) {
-        auto obj = m_factory ? m_factory() : std::make_unique<T>();
+        std::unique_ptr<T> obj;
+        if (m_factory) {
+            obj = m_factory();
+        } else if constexpr (std::is_default_constructible_v<T>) {
+            obj = std::make_unique<T>();
+        } else {
+            throw std::runtime_error(
+                "ObjectPool requires factory for non-default-constructible T");
+        }
         if (!obj) {
             throw std::runtime_error(
                 "ObjectPool factory returned nullptr during pre-allocation");
         }
         m_state->pool.push_back(std::move(obj));
@@
         if (!raw) {
             // Pool exhausted — allocate a new object
             if (m_factory) {
                 auto obj = m_factory();
                 if (!obj) {
                     throw std::runtime_error(
                         "ObjectPool factory returned nullptr during acquire()");
                 }
                 raw = obj.release();
-            } else {
+            } else if constexpr (std::is_default_constructible_v<T>) {
                 raw = new T();
+            } else {
+                throw std::runtime_error(
+                    "ObjectPool requires factory for non-default-constructible T");
             }
             m_totalAllocated.fetch_add(1, std::memory_order_relaxed);
         }
```

```bash
#!/bin/bash
set -euo pipefail

echo "1) Verify runtime_error usage + stdexcept include"
rg -n 'runtime_error|#include <stdexcept>' core/utils/ObjectPool.h
echo
echo "2) Verify fallback default-construction in factory paths"
rg -n 'm_factory \? m_factory\(\) : std::make_unique<T>\(\)|raw = new T\(\)' core/utils/ObjectPool.h
```

Also applies to: 52-53, 86-97
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/utils/ObjectPool.h` around lines 3 - 8, Add the missing header and remove the default-construction fallback when a factory is provided: include <stdexcept> (and <type_traits> if not present), then replace expressions like "m_factory ? m_factory() : std::make_unique<T>()" and raw allocations "raw = new T()" with guarded logic: if (m_factory) use m_factory(), else if constexpr(std::is_default_constructible_v<T>) perform std::make_unique<T>() / new T(), otherwise throw std::runtime_error("ObjectPool: no factory and T is not default-constructible"); reference the ObjectPool<T> class members m_factory and the allocation sites (the code paths that assign raw or create unique_ptr) when applying this change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@core/utils/ObjectPool.h`:
- Around line 3-8: Add the missing header and remove the default-construction
fallback when a factory is provided: include <stdexcept> (and <type_traits> if
not present), then replace expressions like "m_factory ? m_factory() :
std::make_unique<T>()" and raw allocations "raw = new T()" with guarded logic:
if (m_factory) use m_factory(), else if
constexpr(std::is_default_constructible_v<T>) perform std::make_unique<T>() /
new T(), otherwise throw std::runtime_error("ObjectPool: no factory and T is not
default-constructible"); reference the ObjectPool<T> class members m_factory and
the allocation sites (the code paths that assign raw or create unique_ptr) when
applying this change.
# [1.7.0](v1.6.0...v1.7.0) (2026-03-03)

### Features

* add multi-instrument support, cross-exchange arbitrage, and performance optimizations ([#56](#56)) ([008a244](008a244))
🎉 This PR is included in version 1.7.0 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀
Summary
Changes
Multi-Instrument Orchestration
- `InstrumentManager` class managing per-symbol {orderbook, strategy, simulator} lifecycles
- `--symbols BTC-USD,ETH-USD` CLI flag (backward compatible with `--symbol`)
- `instruments` array in `default_config.json` for per-symbol configuration

Per-Symbol Risk Tracking

- `SymbolRiskState` struct with atomic position, PnL, volume, and exposure
- `registerSymbol()`, `getSymbolState()`, `setSymbolLimits()` on `RiskManager`
- Per-symbol checks in the `checkOrder()` hot path
- `initialize()` now clears per-symbol state for clean re-initialization

Cross-Exchange Arbitrage

- `ArbitrageDetector` with background scan thread and per-venue quote cache
- `ArbitrageExecutor` with dry-run and live execution modes
- `--enable-arbitrage`, `--arb-min-spread`, `--arb-dry-run` CLI flags

Cross-Market Correlation

- `CrossMarketCorrelation` engine with Pearson, rolling correlation, lead-lag analysis, and Engle-Granger cointegration
- Integrated with `MLEnhancedMarketMaker` via `setCrossMarketCorrelation()` setter

Performance Optimizations

- `ENABLE_LTO` CMake option with `CheckIPOSupported`

Summary by CodeRabbit
New Features
Tests
Documentation