
Add optimizers from nevergrad #591


Open · wants to merge 48 commits into main

Conversation

gauravmanmode
Collaborator

@gauravmanmode gauravmanmode commented Apr 23, 2025

PR Description

This PR adds support for the following optimizers from the nevergrad optimization library.

  • PSO
  • CMAES
  • ONEPLUSONE
  • RANDOMSEARCH
  • SAMPLINGSEARCH
  • DE
  • BO
  • EDA
  • TBPSA
  • EMNA
  • NGOPT OPTIMIZERS
  • META OPTIMIZERS

Two optimizers from nevergrad, SPSA and AXP, are not wrapped because they are either slow or imprecise.

Features:

  • Parallelization
  • [ ] Support for nonlinear constraints

Helper functions:

_nevergrad_internal:
Handles the optimization loop and returns an InternalOptimizeResult.

_process_nonlinear_constraints:
Flattens a vector constraint into a list of scalar constraints for use with nevergrad (a rough sketch follows below).

_get_constraint_evaluations:
Returns a list of constraint evaluations at x.

_batch_constraint_evaluations:
Batched version of _get_constraint_evaluations.

Test suite:

  • test_process_nonlinear_constraints
  • test_get_constraint_evaluations
  • test_batch_constraint_evaluations
  • test_meta_optimizers_are_valid
  • test_ngopt_optimizers_are_valid

Note:
Nonlinear constraint support is on hold until the handling is improved.

Changes to optimize.py:
Currently, None bounds are transformed into arrays of np.inf. This case is now handled for optimizers that do not support infinite bounds (see the sketch below).

Added test test_infinite_and_incomplete_bounds.py:
test_no_bounds_with_nevergrad
This test should pass when no bounds are provided to nevergrad optimizers.
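
A minimal sketch of how the infinite-bounds case might be detected when building the nevergrad parametrization (function and variable names here are illustrative, not the PR's actual code):

```python
import numpy as np
import nevergrad as ng

def build_parametrization(x0, lower, upper):
    # Hypothetical sketch: optimagic encodes "no bounds" as arrays of +/- np.inf,
    # so bounds are only attached when every entry is finite.
    param = ng.p.Array(init=np.asarray(x0, dtype=float))
    if np.all(np.isfinite(lower)) and np.all(np.isfinite(upper)):
        param.set_bounds(lower=lower, upper=upper)
    return param

# With infinite bounds the parametrization stays unbounded.
param = build_parametrization([0.1, 0.2], np.full(2, -np.inf), np.full(2, np.inf))
```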

@janosg
Member

janosg commented Apr 28, 2025

Hi @gauravmanmode, thanks for the PR.

I definitely like the idea of your nevergrad_internal function. We currently have several independent nevergrad PRs open, and a function like this helps avoid code duplication.

Regarding the Executor: there was an argument brought forward by @r3kste suggesting it would be better to use the low-level ask-and-tell interface if we want to support parallelism. While I still think the solution with the custom Executor can be made to work, the ask-and-tell interface is simpler and more readable for this.
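
For reference, a minimal sketch of an ask-and-tell loop with batched evaluations, using nevergrad's public ask/tell API (the optimizer choice and objective here are purely illustrative):

```python
import nevergrad as ng

def sphere(x):
    return float((x ** 2).sum())

optimizer = ng.optimizers.CMA(parametrization=3, budget=100, num_workers=4)
while optimizer.num_ask < optimizer.budget:
    # Ask for a batch of candidates, evaluate them (possibly in parallel),
    # then report the losses back so the optimizer can update its state.
    candidates = [optimizer.ask() for _ in range(optimizer.num_workers)]
    losses = [sphere(c.value) for c in candidates]
    for candidate, loss in zip(candidates, losses):
        optimizer.tell(candidate, loss)

recommendation = optimizer.provide_recommendation()
```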

@janosg
Member

janosg commented Apr 28, 2025

Currently your tests fail because nevergrad is not compatible with numpy 2.0 and higher. You can pin numpy in the environment file for now.

@janosg
Member

janosg commented Apr 28, 2025

Or better: Install nevergrad via pip instead of conda. The conda version is outdated. Then you don't need to pin any numpy versions.


codecov bot commented Apr 30, 2025

Codecov Report

Attention: Patch coverage is 97.85276% with 14 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/optimagic/optimizers/nevergrad_optimizers.py | 94.98% | 13 Missing ⚠️ |
| src/optimagic/optimization/optimize.py | 87.50% | 1 Missing ⚠️ |

| Files with missing lines | Coverage Δ |
|---|---|
| src/optimagic/algorithms.py | 87.63% <100.00%> (+1.69%) ⬆️ |
| src/optimagic/optimization/optimize.py | 92.17% <87.50%> (+0.35%) ⬆️ |
| src/optimagic/optimizers/nevergrad_optimizers.py | 95.47% <94.98%> (-2.21%) ⬇️ |

... and 1 file with indirect coverage changes


@gauravmanmode
Collaborator Author

gauravmanmode commented May 5, 2025

Hi, @janosg ,
Installing nevergrad with pip solved the failing tests.

Here is the list of parameter names I have referred to.

nevergrad_cmaes

| Old Name | Proposed Name | From optimizer in optimagic |
|---|---|---|
| tolx | xtol | scipy |
| tolfun | ftol | scipy |
| budget | stopping_maxfun | scipy |
| CMA_rankmu | learning_rate_rank_mu_update | pygmo_cmaes |
| CMA_rankone | learning_rate_rank_one_update | pygmo_cmaes |
| popsize | population_size | pygmo_cmaes |
| fcmaes | use_fast_implementation | needs review |
| diagonal | diagonal | needs review |
| elitist | elitist | needs review |
| seed | seed | |
| scale | scale | needs review |
| num_workers | n_cores | optimagic |
| high_speed | high_speed | needs review |

What kind of tests should I have for the internal helper function?
Should I have tests for ftol and stopping_maxfun?
Also, in nevergrad, recommendation.loss returns None for some optimizers like CMA. Is this a nevergrad issue, or am I missing something?
[screenshot]
For reference, I have attached a notebook I used while exploring here.

@gauravmanmode
Collaborator Author

Hi @janosg,
I am thinking of refactoring the code for the already added nevergrad_pso optimizer and nevergrad_cmaes in this PR.
Does this sound good?
Also, I would like your thoughts on this.

  1. Currently I am passing the optimizer object to the helper function _nevergrad_internal.
  2. Another approach is to pass the optimizer name as a string, as in pygmo.

What would be a better choice? (A rough sketch of both options is shown below.)
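
For concreteness, a rough sketch of the two options (the helper signatures here are illustrative, not the actual code from the screenshots):

```python
import nevergrad as ng

def make_optimizer_from_object(configured_optimizer, dim, budget, n_cores):
    # Approach 1: the algorithm class builds a configured optimizer object
    # (e.g. ng.optimizers.CMA) and the helper only instantiates it with the
    # problem-specific settings.
    return configured_optimizer(parametrization=dim, budget=budget, num_workers=n_cores)

def make_optimizer_from_name(name, dim, budget, n_cores):
    # Approach 2: pass the optimizer name as a string and look it up in
    # nevergrad's registry, similar to how the pygmo wrappers work.
    return ng.optimizers.registry[name](parametrization=dim, budget=budget, num_workers=n_cores)

cma = make_optimizer_from_object(ng.optimizers.CMA, dim=2, budget=100, n_cores=1)
pso = make_optimizer_from_name("PSO", dim=2, budget=100, n_cores=1)
```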

@janosg
Member

janosg commented May 10, 2025

Hi @gauravmanmode, yes please go ahead and refactor the code for pso as well.

I would stick to approach one, i.e. passing the configured optimizer object to the internal function. It is more in line with the design philosophy shown here.

@janosg
Member

janosg commented May 10, 2025

> Installing nevergrad with pip solved the failing tests. Here is the list of parameter names I have referred to [...] What kind of tests should I have for the internal helper function? Should I have tests for ftol, stopping_maxfun? Also, in nevergrad, recommendation.loss returns None for some optimizers like CMA. Is this a nevergrad issue or am I missing something?

About the names:

  • xtol and ftol are convergence criteria, so the name would be convergence_xtol. Ideally you would also find out whether this is an absolute or relative tolerance and add the corresponding abbreviation (e.g. convergence_xtol_rel). You can find examples of the naming scheme here.
  • The other names are good.

I would mainly add a name for stopping_maxfun. Other convergence criteria are super hard to test.

If you cannot get a loss out of nevergrad for some optimizers you can evaluate problem.fun at the solution for now and create an issue with a minimal example at nevergrad to get feedback. I wouldn't frame it as a bug report (unless you are absolutely sure) but rather frame it as a question whether you are using the library correctly.
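
A small sketch of that fallback (the function name is hypothetical and fun stands in for problem.fun):

```python
def extract_solution(optimizer, fun):
    # Some nevergrad optimizers (e.g. CMA) return None for recommendation.loss,
    # so re-evaluate the objective at the recommended point in that case.
    recommendation = optimizer.provide_recommendation()
    best_x = recommendation.value
    best_loss = recommendation.loss if recommendation.loss is not None else fun(best_x)
    return best_x, best_loss
```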

gauravmanmode changed the title from "Add CMAES optimizer from nevergrad" to "Add CMAES optimizer from nevergrad and refactor existing code" on May 22, 2025
gauravmanmode changed the title from "Add CMAES optimizer from nevergrad and refactor existing code" to "Add optimizers from nevergrad" on Jun 25, 2025
@gauravmanmode
Collaborator Author

Update

I have updated the PR description.
These are the parameter names for the additional optimizers added.

I have made a few changes to nonlinear constraints and test_history_collection to work with the added optimizers, but they can be reverted.
nonlinear_constraints #606
As there is no stopping_maxiter option for optimizers in nevergrad, stopping_maxfun would be a good way to limit the number of entries in test_history_collection.

nevergrad_randomsearch RandomSearch

| Old Name | New Name |
|---|---|
| stupid | baseline |
| middle_point | init_zero |
| opposition_mode | mirror_sampling |
| sampler | sampling_method |
| scale | scale |
| recommendation_rule | recommendation_rule |
| stopping_maxfun | stopping_maxfun |
| n_cores | n_cores |
| seed | seed |
| sigma | sigma |

nevergrad_samplingsearch SamplingSearch

| Old Name | New Name |
|---|---|
| sampler | sampling_method |
| scrambled | scrambled |
| middle_point | init_zero |
| cauchy | cauchy |
| scale | scale |
| rescaled | rescaled |
| recommendation_rule | recommendation_rule |
| stopping_maxfun | stopping_maxfun |
| n_cores | n_cores |
| seed | seed |
| sigma | sigma |

nevergrad_tbpsa TBPSA

| Old Name | New Name |
|---|---|
| naive | naive |
| initial_popsize | initial_popsize |
| stopping_maxfun | stopping_maxfun |
| n_cores | n_cores |
| seed | seed |
| sigma | sigma |

nevergrad_emna EMNA

| Old Name | New Name |
|---|---|
| isotropic | isotropic |
| naive | naive |
| population_size_adaptation | population_size_adaptation |
| initial_popsize | initial_popsize |
| stopping_maxfun | stopping_maxfun |
| n_cores | n_cores |
| seed | seed |
| sigma | sigma |

nevergrad_bayes_optim BayesOptim

| Old Name | New Name |
|---|---|
| init_budget | init_budget |
| pca | pca |
| n_components | n_components |
| prop_doe_factor | prop_doe_factor |
| stopping_maxfun | stopping_maxfun |
| n_cores | n_cores |
| seed | seed |
| sigma | sigma |

nevergrad_de DE (Differential Evolution)

| Old Name | New Name |
|---|---|
| initialization | initialization |
| scale | scale |
| recommendation | recommendation |
| crossover | crossover |
| F1 | F1 |
| F2 | F2 |
| population_size | population_size |
| high_speed | high_speed |
| stopping_maxfun | stopping_maxfun |
| n_cores | n_cores |
| seed | seed |
| sigma | sigma |


@janosg
Member

janosg commented Jul 16, 2025

Can you quickly explain why you removed SPSA?

@gauravmanmode
Collaborator Author

SPSA was not accurate and was failing the tests.

@janosg
Member

janosg commented Jul 16, 2025

This is something we always need to discuss before we decide to drop an algorithm. Often it is possible to tune the parameters to make algorithms more precise; in extreme cases we can also relax the required precision for algorithms before we drop them.

I merged main into your branch. Now tests are failing due to the changes in #610, but this will be a quick fix.

@gauravmanmode
Collaborator Author

Sorry, I missed a discussion on this.
But the implementation of SPSA in nevergrad was a WIP (many TODOs listed) and no tuning parameters were exposed, so I decided to skip it.
