fix: Update performance workflow timeout and report rendering#1808
Conversation
|
Important Review skippedBot user detected. To trigger a single review, invoke the You can disable this status message by setting the 📝 WalkthroughWalkthroughThis pull request focuses on optimizing streaming indicator performance and enhancing CI/CD testing workflows. Key changes include refactoring ForceIndex StreamHub to use an EMA incremental pattern with initial SMA seeding instead of complex incremental/rollback logic, optimizing Slope StreamHub to nullify only the single exited value instead of a preceding range, and enhancing the test-performance workflow with a configurable run_full_suite input and dynamic result publishing mechanism. Supporting changes update benchmark baselines, plan documentation, and CI job timeouts. Possibly related PRs
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
- Add dynamic timeout (30min for PRs, 90min for full runs) - Add run_full_suite input option for manual dispatch - Replace hardcoded report list with dynamic discovery - Support all benchmark classes in report summary - Update checkout action to v6 in manual workflow - Add last reviewed date to PERFORMANCE_REVIEW.md Co-authored-by: DaveSkender <[email protected]>
|
@copilot Update performance baselines and tools/performance/baselines/PERFORMANCE_REVIEW.md We have made significant changes since October, so there no way those baselines are still accurate. We need to know the current state, delta, and hot spots in the current codebase. Run it again. |
I attempted to run the full benchmark suite to regenerate baselines, but it takes 60-80 minutes which exceeded the available iteration time. The workflow improvements (dynamic timeout, report discovery) are complete in commit c1c116c. To regenerate baselines manually, please run: cd tools/performance
dotnet run -c Release
cp BenchmarkDotNet.Artifacts/results/*.json baselines/
python3 baselines/analyze_performance.pyThen update PERFORMANCE_REVIEW.md with the analysis output. |
Signed-off-by: Dave Skender <[email protected]>
Signed-off-by: Dave Skender <[email protected]>
|
@coderabbitai full review |
✅ Actions performedFull review triggered. |
|
@copilot I've just updated this PR from base where we had done some additional performance related work and had generated a potentially redundant docs/plans/performance-tuning.plan.md file. Seek to eliminate this new plan file by either fixing all of the remaining tasks in the plan or by merging the remaining tasks in that plan into the regular streaming indicators plan. Update all docs changed in this PR accordingly and review and do other opportunities for creating a more succinct set of docs around performance analysis and planning. |
- Delete docs/plans/performance-tuning.plan.md (redundant) - Merge remaining Series optimization tasks (S001-S005) into streaming-indicators.plan.md - Update PERFORMANCE_REVIEW.md date to December 30, 2025 - Reference links to streaming-indicators.plan.md already correct Co-authored-by: DaveSkender <[email protected]>
Consolidated in commit 4cdafd4:
|
test-performance.yml)run_full_suiteinput option to force full benchmark runstest-performance-manual.ymlcheckout action to v6docs/plans/performance-tuning.plan.mdstreaming-indicators.plan.mdPERFORMANCE_REVIEW.mdwith current datePERFORMANCE_REVIEW.mdand linked from plan fileOriginal prompt
✨ Let Copilot coding agent set things up for you — coding agent works faster and does higher quality work when set up for your repo.