Add optimizers from nevergrad #591
base: main
Conversation
Hi @gauravmanmode, thanks for the PR. I definitely like the idea. Regarding the Executor: @r3kste argued that it would be better to use the low-level ask-and-tell interface if we want to support parallelism. While I still think the solution with the custom Executor could be made to work, the ask-and-tell interface is simpler and more readable for this.
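To make the suggestion concrete, here is a minimal sketch of the ask-and-tell pattern using a toy random-search optimizer (not nevergrad itself; nevergrad exposes the same shape via `optimizer.ask()` and `optimizer.tell(candidate, loss)`, with candidates as objects rather than plain lists). All class and function names below are illustrative, not part of either library:

```python
# Sketch of the ask-and-tell interface pattern (illustrative, not nevergrad's code).
import random

class RandomSearchAskTell:
    """Toy random-search optimizer illustrating the ask-and-tell interface."""

    def __init__(self, dim, budget, seed=0):
        self.dim = dim
        self.budget = budget
        self.rng = random.Random(seed)
        self.best_x = None
        self.best_loss = float("inf")

    def ask(self):
        # Propose a candidate point; nevergrad returns a Candidate object instead.
        return [self.rng.uniform(-5, 5) for _ in range(self.dim)]

    def tell(self, x, loss):
        # Report the loss for a candidate back to the optimizer.
        if loss < self.best_loss:
            self.best_x, self.best_loss = x, loss

def sphere(x):
    return sum(xi ** 2 for xi in x)

opt = RandomSearchAskTell(dim=2, budget=200)
for _ in range(opt.budget):
    candidate = opt.ask()     # several ask() calls could be batched and the
    loss = sphere(candidate)  # evaluations dispatched to workers in parallel
    opt.tell(candidate, loss)

print(opt.best_loss)
```

The point of the pattern is that the caller, not the optimizer, owns the evaluation loop, so batching and parallel dispatch of criterion evaluations stay under optimagic's control.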
Currently your tests fail because nevergrad is not compatible with numpy 2.0 and higher. You can pin numpy in the environment file for now.
Or better: install nevergrad via pip instead of conda. The conda version is outdated; then you don't need to pin any numpy version.
Hi @janosg, here is the list of parameter names I have used, referring to nevergrad_cmaes.
What kind of tests should I write for the internal helper function?
Hi @janosg,
Hi @gauravmanmode, yes, please go ahead and refactor the code for pso as well. I would stick to approach one, i.e. passing the configured optimizer object to the internal function. It is more in line with the design philosophy shown here.
About the names:
I would mainly add a name for stopping_maxfun. Other convergence criteria are super hard to test. If you cannot get a loss out of nevergrad for some optimizers, you can evaluate problem.fun at the solution for now and create an issue with a minimal example at nevergrad to get feedback. I wouldn't frame it as a bug report (unless you are absolutely sure) but rather as a question about whether you are using the library correctly.
Update: I have updated the PR description. I have made a few changes to nonlinear constraints and test_history_collection to work with the added optimizers, but they can be reverted.
- `nevergrad_randomsearch`: RandomSearch
- `nevergrad_samplingsearch`: SamplingSearch
- `nevergrad_tbpsa`: TBPSA
- `nevergrad_emna`: EMNA
- `nevergrad_bayes_optim`: BayesOptim
- `nevergrad_de`: Differential Evolution (DE)
tests/optimagic/optimization/test_with_nonlinear_constraints.py
Can you quickly explain why you removed SPSA?
SPSA was not accurate and was failing the tests.
This is something we always need to discuss before we decide to drop an algorithm. Often it is possible to tune the parameters to make algorithms more precise; in extreme cases we can also relax the required precision for algorithms before we drop them. I merged main into your branch. Now tests are failing due to the changes in #610, but this will be a quick fix.
Sorry, I missed a discussion on this.
PR Description
This PR adds support for the following optimizers from the nevergrad optimization library.
Two nevergrad optimizers, SPSA and AXP, are not wrapped because they are either slow or imprecise.
Features:
Helper functions:
- `_nevergrad_internal`: handles the optimization loop and returns an `InternalOptimizeResult`.
- `_process_nonlinear_constraints`: flattens a vector constraint into a list of scalar constraints for use with nevergrad.
- `_get_constraint_evaluations`: returns a list of constraint evaluations at `x`.
- `_batch_constraint_evaluations`: batched version of `_get_constraint_evaluations`.
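As a rough illustration of what the constraint flattening could look like: the sketch below splits a vector-valued constraint into scalar functions, since nevergrad registers constraints one scalar at a time. The function and variable names here are hypothetical, not the PR's actual implementation:

```python
# Illustrative sketch of flattening a vector constraint into scalar constraints.
import numpy as np

def process_nonlinear_constraints(constraint_fun, n_outputs):
    """Split fun: R^n -> R^m into m scalar constraint functions."""
    def make_scalar(i):
        def scalar_constraint(x):
            # Evaluate the full vector constraint and pick out component i.
            return float(np.atleast_1d(constraint_fun(x))[i])
        return scalar_constraint
    return [make_scalar(i) for i in range(n_outputs)]

# Example: a 2-output inequality constraint g(x) >= 0.
def g(x):
    return np.array([x[0] - 1.0, 4.0 - x[0] ** 2 - x[1] ** 2])

scalars = process_nonlinear_constraints(g, n_outputs=2)
x = np.array([1.5, 0.5])
print([f(x) for f in scalars])  # [0.5, 1.5]
```

Each scalar function could then be handed to nevergrad individually, at the cost of re-evaluating the vector constraint once per component unless results are cached.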
Test suite:
Note:
Nonlinear constraints are on hold until their handling is improved.
Changes to `optimize.py`: currently, `None` bounds are transformed to arrays of `np.inf`. Handle this case if the optimizer does not support infinite bounds.
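A minimal sketch of the kind of check this implies (the helper name is hypothetical, not code from the PR): detect bounds that were filled with infinities and pass `None` instead to an optimizer that rejects infinite bounds.

```python
# Illustrative sketch: map all-infinite bound arrays back to None.
import numpy as np

def bounds_or_none(lower, upper):
    """Return (lower, upper), replacing all-infinite arrays with None."""
    lower = None if lower is None or np.all(np.isinf(lower)) else lower
    upper = None if upper is None or np.all(np.isinf(upper)) else upper
    return lower, upper

lb = np.array([-np.inf, -np.inf])
ub = np.array([np.inf, np.inf])
print(bounds_or_none(lb, ub))  # (None, None)
```

A per-entry variant (replacing only infinite entries) would be needed for partially bounded problems; this sketch only covers the fully unbounded case.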
Added test `test_no_bounds_with_nevergrad` in `test_infinite_and_incomplete_bounds.py`: it should pass when no bounds are provided to nevergrad optimizers.