
Conversation

Contributor

@FedericoCeratto FedericoCeratto commented May 20, 2023

Implement basic reproducible benchmarking

See https://pythonspeed.com/articles/consistent-benchmarking-in-ci/
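The approach from the linked article can be sketched as follows (a sketch only; the package path is illustrative and this assumes valgrind is available on the CI runner):

```shell
# Build the Go benchmark binary once, then run it under cachegrind.
# Cachegrind's simulated instruction count ("I refs") is far more stable
# across runs than wall-clock time, which makes it suitable for noisy CI
# machines where timing-based benchmarks are not reproducible.
go test -c -o bench.test ./internal/somepkg   # package path is illustrative
valgrind --tool=cachegrind --cachegrind-out-file=cachegrind.out \
    ./bench.test -test.bench=. -test.benchtime=1x
cg_annotate cachegrind.out                    # summarize instruction counts
```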

@FedericoCeratto FedericoCeratto marked this pull request as ready for review May 20, 2023 14:11
# Runs quality assurance checks
name: "qa"
on:
pull_request:

We don't want to run this workflow on every pull request because it is slow and would therefore slow down development. We usually only run QA tests on release branches.


@bassosimone bassosimone left a comment


I don't think we should conflate the scripts we use for QA with cachegrind runs: that would most likely make the QA scripts (which are already slow and flaky) even slower and more annoying to run during releases.

Go contains an extensive framework for benchmarking, but it is unclear to me whether there is an equivalent to using cachegrind. Could you check? (I'd really rather use native Go tools for benchmarking than external ones.)
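For reference, Go's built-in benchmark harness looks like this (a minimal sketch; the benchmark name and workload are illustrative, not taken from this repository). Benchmarks normally live in `*_test.go` files and run with `go test -bench=.`, but `testing.Benchmark` also lets you invoke one programmatically:

```go
package main

// Minimal sketch of Go's native benchmarking support via the testing
// package. The benchmarked workload here is a stand-in.

import (
	"fmt"
	"strings"
	"testing"
)

// BenchmarkJoin measures strings.Join; the harness picks b.N so that
// the loop runs long enough to produce a stable ns/op figure.
func BenchmarkJoin(b *testing.B) {
	parts := []string{"ooni", "probe", "cli"}
	for i := 0; i < b.N; i++ {
		_ = strings.Join(parts, "/")
	}
}

func main() {
	// testing.Benchmark runs the function as `go test -bench` would.
	r := testing.Benchmark(BenchmarkJoin)
	fmt.Printf("N=%d ns/op=%d\n", r.N, r.NsPerOp())
}
```

Note that ns/op from this harness is wall-clock based, so it inherits the CI-noise problem that cachegrind-style instruction counting avoids.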

What were you trying to benchmark? Perhaps, I can help by adding support to miniooni for generating performance traces that we can analyze with Go tools.

