madhavansingh

Describe your change:

  • Add an algorithm
  • Fix a bug or typo in an existing algorithm
  • Add or change doctests
  • Documentation change

This PR adds the Stochastic Gradient Descent (SGD) optimizer under
machine_learning/neural_network/optimizers/sgd.py. SGD is a fundamental optimizer used for training neural networks and deep learning models.

Key features included in this PR:

  • Clear, educational implementation suitable for learners
  • Type hints for all parameters and return values
  • Doctest included to demonstrate usage
  • Unit test in test_sgd.py to validate functionality

This PR is the first step in adding a sequence of neural network optimizers:
Momentum SGD, Nesterov Accelerated Gradient (NAG), Adagrad, Adam, and Muon.

This implementation provides a reference-quality example and lays the foundation for future contributions.
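
For reference, a minimal sketch of what `sgd.py` might look like is below. The class name, method signature, and doctest values here are illustrative assumptions for discussion, not the exact contents of this PR:

```python
"""
Stochastic Gradient Descent (SGD) optimizer (illustrative sketch).

Reference: https://en.wikipedia.org/wiki/Stochastic_gradient_descent
"""


class SGD:
    """
    Vanilla SGD update rule: parameter -= learning_rate * gradient.

    >>> optimizer = SGD(learning_rate=0.5)
    >>> optimizer.update([1.0, 2.0], [0.5, 1.0])
    [0.75, 1.5]
    """

    def __init__(self, learning_rate: float = 0.01) -> None:
        if learning_rate <= 0:
            raise ValueError("learning_rate must be positive")
        self.learning_rate = learning_rate

    def update(self, params: list[float], gradients: list[float]) -> list[float]:
        # Step each parameter against its gradient, scaled by the learning rate.
        return [p - self.learning_rate * g for p, g in zip(params, gradients)]


if __name__ == "__main__":
    import doctest

    doctest.testmod()
```

Keeping the update rule as a pure function over plain Python lists (rather than NumPy arrays) keeps the example dependency-free and easy to doctest, which suits an educational implementation.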

Checklist:

  • I have read CONTRIBUTING.md.
  • This pull request is all my own work -- I have not plagiarized.
  • I know that pull requests will not be merged if they fail the automated tests.
  • This PR only changes one algorithm file.
  • All new Python files are placed inside an existing directory.
  • All filenames are in all lowercase characters with no spaces or dashes.
  • All functions and variable names follow Python naming conventions.
  • All function parameters and return values are annotated with Python type hints.
  • All functions have doctests that pass the automated testing.
  • All new algorithms include at least one URL that points to Wikipedia or another similar explanation.

Fixes #13662

The algorithms-keeper bot added the "awaiting reviews" label (This PR is ready to be reviewed) on Oct 22, 2025.
madhavansingh (Author)

Hi @Adhithya-Laxman,

I’ve added the first optimizer (Stochastic Gradient Descent, SGD) in the new module
machine_learning/neural_network/optimizers/sgd.py along with a unit test and doctest.

This is the first step toward implementing the full sequence of optimizers (Momentum SGD, NAG, Adagrad, Adam, Muon).
Feedback is welcome, and I’ll continue adding the remaining optimizers in subsequent PRs.

Thanks!
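
For context, a unit test for this module might look roughly like the following; the test function name, import path, and tolerance handling are assumptions rather than the actual contents of test_sgd.py:

```python
import math

from sgd import SGD  # assumed import path; adjust to the module's actual location


def test_sgd_single_update() -> None:
    # One update step should move each parameter by learning_rate * gradient.
    optimizer = SGD(learning_rate=0.1)
    updated = optimizer.update([1.0, -2.0], [0.5, -0.5])
    expected = [1.0 - 0.1 * 0.5, -2.0 - 0.1 * (-0.5)]
    assert all(math.isclose(u, e) for u, e in zip(updated, expected))
```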

Adhithya-Laxman

Hi @madhavansingh,
I have implementations for the Muon, Adam, and Momentum optimizers and will raise PRs for those. We can collaborate on the rest. :)

Thanks for your contribution.


Labels

awaiting reviews (This PR is ready to be reviewed), tests are failing (Do not merge until tests pass)


Development

Successfully merging this pull request may close the issue: Add neural network optimizers module to enhance training capabilities
