Add support for partially defined objective functions #248

Open
mirai-computing opened this issue Feb 7, 2025 · 1 comment

@mirai-computing

When an objective function is not defined everywhere in the search domain, there should be a way to tell CMA-ES that the function cannot be evaluated at the requested point and that another random point must be drawn from the sampling distribution. Currently the only way to do this that I'm aware of is to return some very high value and hope that the optimizer will exclude these points, but this is not a correct way of doing things; it is a dirty hack that may degrade the solution to an unknown degree.

There are at least three ways to implement this that come to mind right away:

  1. Re-evaluate FitFunc whenever it returns NaN (or even inf) (see the sketch after this list);
  2. Make the FitFunc API take an [optional] boolean return parameter (true = the return value is valid; false = the function cannot be evaluated at this point and the return value is not valid), and re-evaluate FitFunc whenever it returns false;
  3. Call an optional companion function like CanEvaluate()/IsDefinedAt()/... before calling FitFunc itself; keep drawing new points and calling the companion function until it returns true, then call FitFunc.

If the re-evaluation counter runs out at some point, there may be no solution better than the previous one, or no solution at all.
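
For illustration, here is a minimal sketch of option 1 as a standalone helper (the names `Objective`, `Sampler`, and `sample_valid` are hypothetical, not part of any existing API; NaN/inf is used as the "cannot evaluate" signal):

```cpp
#include <cmath>
#include <functional>
#include <optional>
#include <utility>
#include <vector>

// Draw candidates from the current sampling distribution until the objective
// returns a finite value, up to a fixed retry budget.
using Objective = std::function<double(const std::vector<double>&)>;
using Sampler   = std::function<std::vector<double>()>;

std::optional<std::pair<std::vector<double>, double>>
sample_valid(const Objective& f, const Sampler& draw, int max_retries)
{
  for (int attempt = 0; attempt < max_retries; ++attempt)
  {
    std::vector<double> x = draw();
    const double fx = f(x);
    if (std::isfinite(fx))        // NaN/inf signals "undefined at this point"
      return std::make_pair(std::move(x), fx);
  }
  return std::nullopt;            // budget exhausted: no valid point found
}
```

Returning `std::nullopt` when the budget runs out makes the "no usable candidate" case explicit, which is exactly where the caller must decide whether to keep the previous best or give up.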

@nikohansen (Collaborator)

I could be mistaken, but it seems this is where the ask-and-tell interface comes in handy, as it gives the freedom to do any of these and more without the requirement to design or abide by a specific interface.
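
For example, here is a sketch loosely following the ask/eval/tell loop from the libcmaes samples (exact calls may differ between versions; `fpartial` and the NaN convention are assumptions made for this illustration):

```cpp
#include "cmaes.h"
#include <cmath>
#include <limits>
#include <vector>

using namespace libcmaes;

// Assumed partially defined objective: undefined (NaN) inside the unit ball,
// a shifted sphere everywhere else.
FitFunc fpartial = [](const double *x, const int N)
{
  double r2 = 0.0, val = 0.0;
  for (int i = 0; i < N; ++i)
  {
    r2  += x[i] * x[i];
    val += (x[i] - 2.0) * (x[i] - 2.0);
  }
  if (r2 < 1.0)
    return std::numeric_limits<double>::quiet_NaN(); // cannot evaluate here
  return val;
};

int main()
{
  const int dim = 10;
  std::vector<double> x0(dim, 10.0);
  CMAParameters<> cmaparams(x0, 0.1);
  ESOptimizer<CMAStrategy<CovarianceUpdate>, CMAParameters<>> optim(fpartial, cmaparams);
  while (!optim.stop())
  {
    dMat candidates = optim.ask();
    // Between ask() and tell(), the caller is free to inspect each candidate
    // column, repair or replace points where the objective is undefined, or
    // substitute surrogate f-values; none of this requires a new interface.
    optim.eval(candidates);
    optim.tell();
    optim.inc_iter();
  }
  return 0;
}
```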

> Currently the only way to do this that I'm aware of is to return some very high value and hope that the optimizer will exclude these points, but this is not a correct way of doing things; it is a dirty hack that may degrade the solution to an unknown degree.

Generally, I'd prefer this [returning a surrogate value] over rejection sampling, and both need some additional safeguards when there are too many failures. A useful surrogate value may account for the distance (or the Mahalanobis distance) to the current distribution mean.
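
A minimal sketch of such a surrogate, assuming access to the current distribution mean and covariance (Eigen is used since libcmaes builds on it; the function name and the offset scheme are illustrative):

```cpp
#include <Eigen/Dense>

// Surrogate f-value for a candidate x that could not be evaluated: offset by
// the worst valid f-value seen in the current population, plus the squared
// Mahalanobis distance of x to the distribution mean m under covariance C.
double surrogate_fvalue(const Eigen::VectorXd &x,
                        const Eigen::VectorXd &m,
                        const Eigen::MatrixXd &C,
                        double worst_valid_f)
{
  const Eigen::VectorXd d = x - m;
  // Solve C*y = d rather than forming C^{-1} explicitly (cheaper, more stable).
  const double maha2 = d.dot(C.ldlt().solve(d));
  return worst_valid_f + maha2;
}
```

The offset keeps every unevaluable candidate ranked behind every valid one, while the Mahalanobis term preserves an ordering among the failures that tends to push the sampling distribution away from the undefined region.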
