This repo contains personal implementations of neural networks, written as practice in implementing backpropagation and optimization from scratch (using only numpy). Currently, all networks are sequential.
To install as a Python package, run:

    python setup.py install

To compile the Cython modules for testing, run:

    python setup.py build_ext --inplace
For development, push to the `develop` branch, then merge into `master` when ready.
The `neural_nets_dsr` package contains the following subpackages and modules:

- The `activations` package contains activation functions for network layers, along with an `ActivationFunc` class in the `base.py` module for creating new ones.
- The `cost_functions` package contains several cost functions that can be used to train networks, as well as a `base.py` module with a class that allows the creation of new cost functions.
- The `layers` package contains several layer implementations that can be used to construct networks.
- The `optim` package contains optimization algorithms for training.
- The `network.py` module contains the class that represents a network.
- Finally, the `utils.py` module is for miscellaneous utility functions and classes.
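For orientation, here is a hypothetical usage sketch of how these pieces might fit together. Every imported name and call signature in it (`Network`, `DenseLayer`, `sigmoid`, `mse`, `GradientDescent`, `train`, `predict`) is an assumption chosen to mirror the layout above, not the package's verified API; adapt the names to the actual code.

```python
import numpy as np

# HYPOTHETICAL sketch: the imports and signatures below are assumptions
# based on the package layout described above, not verified against the code.
from neural_nets_dsr.network import Network
from neural_nets_dsr.layers import DenseLayer
from neural_nets_dsr.activations import sigmoid
from neural_nets_dsr.cost_functions import mse
from neural_nets_dsr.optim import GradientDescent

# Toy problem: learn XOR with a small sequential network
# (columns are examples, rows are features).
x = np.array([[0, 0, 1, 1],
              [0, 1, 0, 1]], dtype=float)
y = np.array([[0, 1, 1, 0]], dtype=float)

net = Network([DenseLayer(2, 4, activation=sigmoid),
               DenseLayer(4, 1, activation=sigmoid)])
net.train(x, y, cost=mse, optimizer=GradientDescent(learning_rate=0.5), epochs=5000)
print(net.predict(x))
```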
The design follows a few simple principles (a minimal sketch of this pattern follows the list):

- Every activation and cost function should 'know' how to compute its own gradient.
- Each layer should know how to forward- and back-propagate through itself.
- Every optimizer should know how to perform updates on weights and biases.
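The following is a minimal, self-contained numpy sketch of those three responsibilities. The class names (`Sigmoid`, `Dense`, `GradientDescent`) are illustrative only, not the package's actual classes.

```python
import numpy as np

class Sigmoid:
    """An activation that knows how to compute its own gradient."""

    def __call__(self, z):
        return 1.0 / (1.0 + np.exp(-z))

    def grad(self, z):
        s = self(z)
        return s * (1.0 - s)


class Dense:
    """A layer that knows how to forward- and back-propagate through itself."""

    def __init__(self, n_in, n_out, activation):
        self.w = np.random.randn(n_out, n_in) * 0.01
        self.b = np.zeros((n_out, 1))
        self.act = activation

    def forward(self, x):
        self.x = x                      # cache input for the backward pass
        self.z = self.w @ x + self.b
        return self.act(self.z)

    def backward(self, da):
        dz = da * self.act.grad(self.z)
        m = self.x.shape[1]             # number of examples (columns)
        self.dw = dz @ self.x.T / m
        self.db = dz.mean(axis=1, keepdims=True)
        return self.w.T @ dz            # gradient w.r.t. this layer's input


class GradientDescent:
    """An optimizer that knows how to update weights and biases."""

    def __init__(self, lr=0.1):
        self.lr = lr

    def update(self, layer):
        layer.w -= self.lr * layer.dw
        layer.b -= self.lr * layer.db


# One illustrative training step on toy data:
layer = Dense(2, 1, Sigmoid())
opt = GradientDescent(lr=0.5)
x = np.array([[0.0, 1.0], [1.0, 0.0]])
y = np.array([[1.0, 0.0]])
a = layer.forward(x)
layer.backward(a - y)   # (a - y) stands in for the cost's gradient w.r.t. a
opt.update(layer)
```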
Version numbers follow these conventions:

- Tiny version change (`0.0.x`): bugfix or minor change in implementation.
- Minor version change (`0.x.0`): new feature added, but still compatible with previous versions.
- Major version change (`x.0.0`): major refactoring or changes that break compatibility with previous versions.
Known issues and open questions:

- Doubts about the batchnorm derivative computation (see the gradient check sketched below).
- Numerical stability.
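One standard way to settle doubts about a batchnorm derivative is a finite-difference gradient check. The sketch below is self-contained numpy, not the package's actual batchnorm code: it implements the textbook backward pass for `y = gamma * xhat + beta` with `xhat = (x - mu) / sqrt(var + eps)`, then compares the analytic gradient against central differences. The `eps` term is also the usual guard for numerical stability here.

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Normalize across the batch axis (columns are examples).
    mu = x.mean(axis=1, keepdims=True)
    var = x.var(axis=1, keepdims=True)
    xhat = (x - mu) / np.sqrt(var + eps)
    return gamma * xhat + beta, xhat, var

def batchnorm_backward(dy, xhat, var, gamma, eps=1e-5):
    # Textbook result of differentiating through the mean and variance:
    # dx = (dxhat - mean(dxhat) - xhat * mean(dxhat * xhat)) / sqrt(var + eps)
    dgamma = (dy * xhat).sum(axis=1, keepdims=True)
    dbeta = dy.sum(axis=1, keepdims=True)
    dxhat = dy * gamma
    dx = (dxhat - dxhat.mean(axis=1, keepdims=True)
          - xhat * (dxhat * xhat).mean(axis=1, keepdims=True)) / np.sqrt(var + eps)
    return dx, dgamma, dbeta

# Finite-difference check of dx on random data, using the scalar
# loss L = sum(y * dy) so that dL/dx is exactly the backward pass.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 5))
gamma, beta = np.ones((3, 1)), np.zeros((3, 1))
dy = rng.standard_normal((3, 5))

_, xhat, var = batchnorm_forward(x, gamma, beta)
dx, _, _ = batchnorm_backward(dy, xhat, var, gamma)

h = 1e-5
dx_num = np.zeros_like(x)
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        xp, xm = x.copy(), x.copy()
        xp[i, j] += h
        xm[i, j] -= h
        yp, _, _ = batchnorm_forward(xp, gamma, beta)
        ym, _, _ = batchnorm_forward(xm, gamma, beta)
        dx_num[i, j] = ((yp - ym) * dy).sum() / (2 * h)

# Should print a very small number if the analytic gradient is right.
print(np.abs(dx - dx_num).max())
```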