
Conversation

spline2hg (Collaborator)

This PR introduces the following optimizers from PySwarms:

  • pyswarms_global_best
  • pyswarms_local_best
  • pyswarms_general
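
As a hedged usage sketch (the algorithm strings come from this PR; the exact optimagic call signature is assumed from its public `minimize` API):

```python
import numpy as np
import optimagic as om

def sphere(x):
    return x @ x

res = om.minimize(
    fun=sphere,
    params=np.arange(3, dtype=float),
    algorithm="pyswarms_global_best",  # name registered by this PR
    bounds=om.Bounds(lower=np.full(3, -5.0), upper=np.full(3, 5.0)),
)
print(res.params)
```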

codecov bot commented Aug 27, 2025

Codecov Report

❌ Patch coverage is 98.59155% with 4 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/optimagic/optimizers/pyswarms_optimizers.py | 97.84% | 4 Missing ⚠️ |

| Files with missing lines | Coverage Δ |
|---|---|
| src/optimagic/algorithms.py | 88.51% <100.00%> (+0.33%) ⬆️ |
| src/optimagic/config.py | 100.00% <100.00%> (ø) |
| src/optimagic/optimization/algo_options.py | 100.00% <ø> (ø) |
| src/optimagic/optimizers/pyswarms_optimizers.py | 97.84% <97.84%> (ø) |

spline2hg (Collaborator, Author)

@janosg could you review this?

A few points:

  1. Could you check the failing run on Python 3.11? I’m not sure what’s causing the issue.
  2. PySwarms has stochastic algorithms but doesn’t expose a seed parameter. Because of that, they were being treated as deterministic in the tests. I’ve added a seed parameter with a random number generator so they are treated as stochastic (see the sketch below).
  3. The default value CONVERGENCE_FTOL_REL = 2e-9 is causing the algorithm to converge earlier than expected (how should we handle this?).
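
A minimal sketch of how the seed could be wired to initial particle positions via PySwarms' `init_pos` argument (the `seed` parameter name is from this PR; the particle count, bounds, and PSO options here are just illustrative):

```python
import numpy as np
import pyswarms as ps

seed = 42
n_particles, dim = 20, 3
lb, ub = np.full(dim, -5.0), np.full(dim, 5.0)

# Draw reproducible initial particle positions from a seeded generator.
rng = np.random.default_rng(seed)
init_pos = rng.uniform(lb, ub, size=(n_particles, dim))

optimizer = ps.single.GlobalBestPSO(
    n_particles=n_particles,
    dimensions=dim,
    options={"c1": 0.5, "c2": 0.3, "w": 0.9},
    bounds=(lb, ub),
    init_pos=init_pos,
)
# PySwarms objectives map an (n_particles, dim) array to (n_particles,) costs.
cost, pos = optimizer.optimize(lambda x: (x**2).sum(axis=1), iters=100)
```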

janosg (Member) commented Aug 30, 2025

> 1. Could you check the failing run on Python 3.11? I’m not sure what’s causing the issue.

I also don’t see the reason, but it seems completely unrelated to your PR, so you can ignore it for now.

> 2. PySwarms has stochastic algorithms but doesn’t expose a seed parameter. Because of that, they were being treated as deterministic in the tests. I’ve added a seed parameter with a random number generator so they are treated as stochastic.

If I see correctly, you only use the rng for initial positions. Is this the only stochastic part of the algorithm, or are there other stochastic parts that make the optimization non-deterministic even if a seed is set?

> 3. The default value CONVERGENCE_FTOL_REL = 2e-9 is causing the algorithm to converge earlier than expected (how should we handle this?).

You can deviate from this default if there are good reasons. Especially for a global optimizer, I don’t think it is problematic to set this to 0, because global optimizers are usually expected to run until maxiter or maxfun is reached. Does setting convergence_ftol_rel=0 fix the problem?
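
For example, at the user level (a sketch; the `algo_options` spelling is assumed):

```python
import numpy as np
import optimagic as om

res = om.minimize(
    fun=lambda x: x @ x,
    params=np.arange(3, dtype=float),
    algorithm="pyswarms_global_best",
    bounds=om.Bounds(lower=np.full(3, -5.0), upper=np.full(3, 5.0)),
    # Setting the relative ftol criterion to 0 disables early convergence,
    # so the optimizer runs until maxiter/maxfun is reached.
    algo_options={"convergence_ftol_rel": 0},
)
```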

janosg (Member) left a comment

I don't have time to completely finish the review, but this should give you some pointers to continue working on the PR.

spline2hg (Collaborator, Author)

> If I see correctly, you only use the rng for initial positions. Is this the only stochastic part of the algorithm, or are there other stochastic parts that make the optimization non-deterministic even if a seed is set?

Stochastic parts in PySwarms:

  1. Random topology and random bounds handling strategy.
  2. Random components (r1, r2) in the velocity update equation, making the movement inherently stochastic.
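
For context, r1 and r2 enter the canonical PSO velocity update (PySwarms' `c1`, `c2`, and `w` options map onto the coefficients below):

$$
v_{t+1} = w\, v_t + c_1 r_1 \odot (p_{\text{best}} - x_t) + c_2 r_2 \odot (g_{\text{best}} - x_t),
\qquad r_1, r_2 \sim U(0,1)^d
$$

Because fresh r1, r2 are drawn every iteration, seeding only the initial positions cannot make a run reproducible.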

> You can deviate from this default if there are good reasons. Especially for a global optimizer, I don’t think it is problematic to set this to 0, because global optimizers are usually expected to run until maxiter or maxfun is reached. Does setting convergence_ftol_rel=0 fix the problem?

If we set convergence_ftol_rel to 0, it disables early convergence and forces the algorithm to run for STOPPING_MAXITER iterations by default if stopping_maxiter is not set. However, running the optimization for STOPPING_MAXITER (i.e. 1_000_000) iterations takes a long time.

janosg (Member) commented Sep 3, 2025

> Stochastic parts in PySwarms:
>
> 1. Random topology and random bounds handling strategy.
> 2. Random components (r1, r2) in the velocity update equation, making the movement inherently stochastic.

Then we need a very big warning that the seed does not make the algorithm deterministic. Would setting a global numpy seed help, i.e. np.random.seed(123)? If so, the warning should explain this workaround, but we would not want to set a global seed in optimagic.

> > You can deviate from this default if there are good reasons. Especially for a global optimizer, I don’t think it is problematic to set this to 0, because global optimizers are usually expected to run until maxiter or maxfun is reached. Does setting convergence_ftol_rel=0 fix the problem?
>
> If we set convergence_ftol_rel to 0, it disables early convergence and forces the algorithm to run for STOPPING_MAXITER iterations by default if stopping_maxiter is not set. However, running the optimization for STOPPING_MAXITER (i.e. 1_000_000) iterations takes a long time.

As I said, you can deviate from defaults where necessary. Of course, we would not leave maxiter at a million if this is a bad default. Maybe 1000 would be a good idea? We just can't have a variable called "STOPPING_MAXITER_GLOBAL" (analogous to "STOPPING_MAXFUN_GLOBAL") in the algo options because what an iteration is changes between optimizers.

janosg (Member) commented Sep 3, 2025

The failing test should be fixed after you update the branch.

spline2hg (Collaborator, Author) commented Sep 3, 2025

> Then we need a very big warning that the seed does not make the algorithm deterministic. Would setting a global numpy seed help, i.e. np.random.seed(123)? If so, the warning should explain this workaround, but we would not want to set a global seed in optimagic.

I meant that there are stochastic parts, but setting the global seed does make the runs deterministic. Yes, setting a global seed works. I will add a warning when a seed is set.
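
A minimal sketch of the workaround (this assumes PySwarms draws its random numbers from NumPy's legacy global RandomState, which is what makes the global seed effective):

```python
import numpy as np
import pyswarms as ps

# Seeding NumPy's global legacy RandomState fixes r1/r2 and the other
# random draws inside PySwarms, so repeated runs give identical results.
np.random.seed(123)

optimizer = ps.single.GlobalBestPSO(
    n_particles=20,
    dimensions=3,
    options={"c1": 0.5, "c2": 0.3, "w": 0.9},
)
cost, pos = optimizer.optimize(lambda x: (x**2).sum(axis=1), iters=50)
```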

> As I said, you can deviate from defaults where necessary. Of course, we would not leave maxiter at a million if this is a bad default. Maybe 1000 would be a good idea? We just can't have a variable called "STOPPING_MAXITER_GLOBAL" (analogous to "STOPPING_MAXFUN_GLOBAL") in the algo options because what an iteration is changes between optimizers.

Yes, 1000 is a reasonable number for PySwarms, so I had introduced STOPPING_MAXITER_GLOBAL earlier, but I will remove it and just default to 1000 iterations.
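
A sketch of how those defaults might look in the algorithm definition (the dataclass shape and field names mirror optimagic's algorithm configs but are assumed, not taken from the PR):

```python
from dataclasses import dataclass

# Hypothetical PySwarms-specific default instead of a library-wide
# STOPPING_MAXITER_GLOBAL constant.
PYSWARMS_STOPPING_MAXITER = 1_000

@dataclass(frozen=True)
class PyswarmsGlobalBest:
    # 0 disables early ftol-based convergence; global optimizers are
    # usually expected to run until maxiter/maxfun is reached.
    convergence_ftol_rel: float = 0.0
    # Deviates from the library-wide default of 1_000_000 iterations,
    # which is far too many for PSO.
    stopping_maxiter: int = PYSWARMS_STOPPING_MAXITER
```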

@spline2hg spline2hg requested a review from janosg September 10, 2025 12:18