Prototype benchmark code #333
Conversation
|
I was thinking that, instead of / in addition to creating new artificial tests, we could just grab the timing information in the DefaultTestSet structure. We could either do this at the top level only, or recursively for every single testset.
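A rough sketch of that idea (not code from this PR): on Julia ≥ 1.8 a `DefaultTestSet` records `time_start`/`time_end`, so something like the following could walk the testset returned by `@testset` and collect elapsed times per nested testset.

```julia
using Test

# Hedged sketch: recursively collect elapsed times (in seconds) from a DefaultTestSet.
# Assumes the time_start/time_end fields Julia >= 1.8 records on DefaultTestSet.
function collect_timings!(out, ts::Test.DefaultTestSet, prefix = "")
    name = isempty(prefix) ? ts.description : string(prefix, "/", ts.description)
    if ts.time_end !== nothing
        out[name] = ts.time_end - ts.time_start
    end
    for r in ts.results
        r isa Test.DefaultTestSet && collect_timings!(out, r, name)
    end
    return out
end

# Usage: @testset returns the testset object, so the whole suite can be wrapped once.
ts = @testset "suite" begin
    @testset "inner" begin
        @test 1 + 1 == 2
    end
end
timings = collect_timings!(Dict{String,Float64}(), ts)
```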
|
@miguelbiron do you mean that we'd collect/report benchmarking results for the tests in our test suite? I'm not sure there will be a tonne of overlap between the things we want to benchmark and the things we want to use for unit testing.
|
That's true... Also, there's no alloc data in the testset.
|
@miguelbiron @alexandrebouchard I've added the prototype code for the workflows. A brief explanation:
One issue we should fix before merging: I am not entirely sure how we do environment setup in the workflow files.
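For concreteness, a hedged sketch of one way the benchmark job could do its environment setup (the "benchmarks" directory name is illustrative, not necessarily this PR's layout):

```julia
using Pkg

Pkg.activate("benchmarks")   # keep benchmark-only deps out of the main Pigeons env
Pkg.develop(path = ".")      # benchmark the checked-out Pigeons, not a registered release
Pkg.instantiate()            # install the exact dependency versions from the manifest
Pkg.status()                 # print the environment so runs can be compared later
```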
|
I also had an idea of something neat we could do: we could include a plot of benchmarking results as a function of time in the Pigeons documentation. There's a git command that will spit out a list of commit hashes corresponding to the commits we'd want to benchmark. Thoughts?
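A hedged sketch of what the docs-plot step could look like, assuming each benchmark run appends a row (commit, date, seconds) to a hypothetical benchmark_history.csv kept somewhere in the repo:

```julia
using CSV, DataFrames, Plots

# Load the accumulated benchmark history and plot runtime as a function of commit date.
df = CSV.read("benchmark_history.csv", DataFrame)
sort!(df, :date)
plot(df.date, df.seconds;
     xlabel = "commit date", ylabel = "runtime (s)",
     label = "mvn1000", marker = :circle)
savefig("benchmark_history.png")   # the docs build could then embed this image
```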
|
Ah, one more thing @alexandrebouchard: we should probably modify the number of trials we run per benchmark (at least on mvn1000, the results seem to regularly vary by around 10% for a single trial).
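One way to average out that single-trial noise is to let BenchmarkTools take many samples and report the median; a hedged sketch, where run_mvn1000_target() is a hypothetical stand-in for the real benchmark target:

```julia
using BenchmarkTools, Statistics

b = @benchmarkable run_mvn1000_target()
t = run(b; samples = 20, seconds = 120)   # stop at 20 samples or a 120 s wall-clock budget
median_time_s = median(t).time / 1e9      # BenchmarkTools reports times in nanoseconds
median_allocs = median(t).allocs          # allocation count of the median estimate
```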
|
Ditto the last comment, and also we probably need some basic metadata about the environment. At the very least we should store the exact Julia version used.
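A minimal sketch of the kind of metadata that could be stored alongside each result (the field names are just suggestions; GITHUB_SHA is the commit hash GitHub Actions exposes to the job):

```julia
# Environment metadata recorded once per benchmark run.
metadata = Dict(
    "julia_version" => string(VERSION),
    "commit"        => get(ENV, "GITHUB_SHA", "unknown"),
    "cpu"           => Sys.CPU_NAME,
    "nthreads"      => Threads.nthreads(),
    "os"            => string(Sys.KERNEL),
    "word_size"     => Sys.WORD_SIZE,
)
```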
Codecov Report
✅ All modified and coverable lines are covered by tests.

Coverage diff (main vs. #333):
- Coverage: 87.40% → 87.82% (+0.42%)
- Files: 107 (unchanged)
- Lines: 2660 → 2654 (-6)
- Hits: 2325 → 2331 (+6)
- Misses: 335 → 323 (-12)

☔ View full report in Codecov by Sentry.
|
|
@alexandrebouchard @trevorcampbell the failing docs are due to the old nonreproducible errors that seem to have been fixed with the new DPPL version. We should just merge this PR so that we can then merge #328, which doesn't suffer from this issue.
|
OK, I think I'm giving up on reducing precompilation time for now. Let's just push that off to a later PR and get this one sorted. Seems like the last TODOs here are:
|
|
OK @alexandrebouchard this should be all sorted now. Example PR thread message with diff below:
|
|
Ah, actually @alexandrebouchard, one thing I'd like your eye on before we merge is how packages are activated/installed/used in the benchmarking workflow. You'll see what I mean when you look, but once that's sorted we can merge and I'll get the runner set up properly for this repo.
|
This looks really great and will be super useful. I'll add more targets very shortly; I'm building a collection.
|
@alexandrebouchard FYI I replaced

