A toy neural network repository, for better understanding the theory and playing around.
The code is based on Michael Nielsen's www.neuralnetworksanddeeplearning.com, but I've changed the structure a bit, added type hints (to the extent that I understand the concept), and added some things here and there. I plan to keep elaborating on it. So far it contains only a simple feedforward network.
I haven't tested it with the MNIST dataset (as in the book). I have tested it with addition, which does work for the ultra-simple case of two inputs and one output neuron with linear activation.
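For reference, the addition setup can be sketched roughly like this (a minimal standalone NumPy version, not the repo's actual code: a single output neuron with linear activation, trained by gradient descent on mean squared error; all names here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: pairs (x1, x2) with target x1 + x2.
X = rng.uniform(0.0, 1.0, size=(200, 2))
t = X.sum(axis=1, keepdims=True)

# One output neuron with linear (identity) activation.
w = rng.normal(size=(2, 1))
b = np.zeros(1)
lr = 0.5

for _ in range(2000):
    y = X @ w + b            # forward pass: y = Xw + b
    grad = (y - t) / len(X)  # gradient of MSE w.r.t. y (up to a constant)
    w -= lr * (X.T @ grad)   # backprop through the linear layer
    b -= lr * grad.sum(axis=0)

print(w.ravel(), b)  # weights converge to roughly [1, 1], bias to roughly 0
```

Since addition is a linear function of the inputs, this network can represent it exactly, and gradient descent recovers weights of 1 and a zero bias.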
Oh, and there's also a Jupyter notebook demonstrating the addition example, plus some experiments with where it goes haywire on a more complicated network.