Conversation

@jodavies
Collaborator

@jodavies jodavies commented Oct 9, 2025

This improves performance for things like `gcd_(1-x^20,1-x^1000000)`, since FLINT's poly type has a dense representation whereas mpoly is sparse. "Real-life" benchmarks such as forcer or minceex are unchanged; their polynomials are dense.

Any thoughts on this? I determined the threshold "experimentally" by benchmarking `gcd_(1-x^20,1-x^N)` for a range of N. Of course, different computations probably have a different optimal threshold... this kind of heuristic usually has tricky cases.
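To illustrate the kind of heuristic under discussion, here is a minimal sketch (not FORM's actual code) of a density-based choice between a dense and a sparse polynomial representation. The threshold value and the function names are hypothetical; only the underlying idea — few terms relative to the degree favors a sparse term list, many terms favor a dense coefficient array — comes from the discussion above.

```python
# Hypothetical sketch of a density heuristic for choosing a polynomial
# representation. 1 - x^1000000 has only 2 terms but degree 10^6: a dense
# representation stores ~10^6 coefficients, a sparse one just 2 terms.

SPARSE_THRESHOLD = 0.01  # illustrative value, not FORM's actual setting

def density(num_terms: int, max_degree: int) -> float:
    """Fraction of possible coefficients (degrees 0..max_degree) that are nonzero."""
    return num_terms / (max_degree + 1)

def pick_representation(num_terms: int, max_degree: int,
                        threshold: float = SPARSE_THRESHOLD) -> str:
    """Choose a dense (FLINT poly-style) or sparse (mpoly-style) representation."""
    return "dense" if density(num_terms, max_degree) >= threshold else "sparse"

# The second argument of gcd_(1-x^20, 1-x^1000000) is extremely sparse:
print(pick_representation(2, 1000000))  # → sparse
# A polynomial with all 21 coefficients up to degree 20 nonzero:
print(pick_representation(21, 20))      # → dense
```

As the thread notes, a single cutoff will not be optimal for every operation or input, which is why exposing it as a setup parameter is attractive.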

@coveralls

coveralls commented Oct 9, 2025

Coverage Status: 56.759% (-0.02%) from 56.779%, when pulling b79cc4c on jodavies:flint-sparse into 2aba64b on form-dev:master.

@tueda
Collaborator

tueda commented Oct 9, 2025

I'm not sure whether users should be able to choose the threshold, but in principle it could be a setup parameter that they can adjust if needed.

@jodavies
Collaborator Author

jodavies commented Oct 9, 2025

Right, this is an option.

For the record, forcer and minceex are ~25% slower if the use of mpoly is forced unconditionally.

@jodavies
Collaborator Author

Updated; the term counting was in fact not correct.

I also ran a few benchmarks for `mul_` and `div_`. The optimal threshold for those is not exactly the same (for the tested polynomials) as for `gcd_`, but the value I chose before is broadly OK.
