This repository contains a simple implementation of a neural network in Python from scratch. The neural network is designed to perform binary classification based on a small training dataset.
The neural network implemented here is a basic feedforward neural network with one hidden layer. It is trained using a backpropagation algorithm to minimize the error between predicted and target outputs. The network uses a sigmoid activation function in the hidden layer for non-linearity.
To use the neural network, follow these steps:
- Clone the repository to your local machine.
- Ensure you have Python installed.
- Open a terminal and navigate to the repository directory.
- Run the `neural_network.py` file (e.g. `python neural_network.py`).

The repository contains the following files:

- `neural_network.py`: Contains the implementation of the neural network.
- `README.md`: This file, providing an overview of the repository and usage instructions.
The training data used by the neural network is hard-coded within the script. It consists of four samples, each with three input features and a corresponding binary output.
The `NeuralNetwork` class is the core of this implementation. It includes methods for initialization, forward propagation (`think`), training, and utility functions for the sigmoid activation and its gradient.
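The actual implementation lives in `neural_network.py`; the sketch below is only a minimal illustration of that interface, assuming a single layer of weights with no bias term (matching the single `weights` attribute used in the example below) and assuming `training_data` is packed as an `(inputs, targets)` tuple. The repository's real network, with its hidden layer, may differ.

```python
import numpy as np

class NeuralNetwork:
    def __init__(self):
        np.random.seed(1)                                 # reproducible runs
        self.weights = 2 * np.random.random((3, 1)) - 1   # 3 inputs -> 1 output

    def __sigmoid(self, x):
        # Squash the weighted sum into the (0, 1) range.
        return 1 / (1 + np.exp(-x))

    def __sigmoid_gradient(self, x):
        # Sigmoid derivative, written in terms of the sigmoid's output.
        return x * (1 - x)

    def think(self, inputs):
        # Forward propagation: weighted sum passed through the sigmoid.
        return self.__sigmoid(np.dot(inputs, self.weights))

    def train(self, training_data, number_of_iterations):
        inputs, targets = training_data
        for _ in range(number_of_iterations):
            output = self.think(inputs)
            error = targets - output
            # Adjust weights in proportion to the input, the error,
            # and the slope of the sigmoid at the current output.
            self.weights += np.dot(inputs.T, error * self.__sigmoid_gradient(output))
```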
```python
import numpy as np

# Illustrative training data: four samples, three input features, one binary
# label each (the actual values are hard-coded in neural_network.py).
training_inputs = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
training_outputs = np.array([[0, 1, 1, 0]]).T
training_data = (training_inputs, training_outputs)

neural_network = NeuralNetwork()
print("Random starting weights:", neural_network.weights)

# Train the neural network
neural_network.train(training_data, number_of_iterations=10000)

# Print the trained weights
print("Updated weights:", neural_network.weights)

# Make a prediction on new data and print it
new_data = np.array([0, 1, 0])
prediction = neural_network.think(new_data)
print("New data prediction:", prediction)
```
For a neuron with inputs `X`, weights `W`, and bias `B`, the output is:

```
WeightedSum = Sum(X * W) + B
Output = 1 / (1 + exp(-WeightedSum))
```
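A minimal sketch of this computation in Python (the example weights and bias are arbitrary placeholders):

```python
import numpy as np

def neuron_output(inputs, weights, bias):
    weighted_sum = np.dot(inputs, weights) + bias  # Sum(X * W) + B
    return 1 / (1 + np.exp(-weighted_sum))         # sigmoid activation

# Arbitrary example values, for illustration only.
print(neuron_output(np.array([0, 1, 0]), np.array([0.5, -0.2, 0.1]), 0.3))
```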
First, we assign random numbers to our weights.
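For example, a common initialization (the `[-1, 1)` range and the fixed seed are assumptions, not necessarily what the script uses):

```python
import numpy as np

np.random.seed(1)                           # seed so runs are reproducible
weights = 2 * np.random.random((3, 1)) - 1  # 3x1 weights drawn from [-1, 1)
print(weights)
```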
```
Error = 1/2 * (Target - Output)^2
```

Note:

- Error: the total error of the network.
- Target: the correct label for the sample.
- Output: the label predicted by the network.
- LearningRate: scales the size of each weight update; gradient descent finds the minimum of the error by taking steps proportional to the negative of the gradient.
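As a quick worked example of this cost for a single prediction (the values are illustrative):

```python
def squared_error(target, output):
    # Half the squared difference between the label and the prediction.
    return 0.5 * (target - output) ** 2

print(squared_error(1.0, 0.72))  # 0.5 * 0.28^2 = 0.0392
```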
```
ErrorCostGradient = -Input * ErrorInOutput * SigmoidCurveGradient
WeightAdjustment  =  Input * ErrorInOutput * SigmoidCurveGradient
```

where

```
SigmoidCurveGradient = neuronOutput * (1 - neuronOutput)
```
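A short sketch of how these formulas combine into a single update step (the function and its `(inputs, targets, weights)` signature are illustrative, not the repository's code):

```python
import numpy as np

def weight_adjustment(inputs, targets, weights):
    output = 1 / (1 + np.exp(-np.dot(inputs, weights)))  # forward pass
    error_in_output = targets - output                   # ErrorInOutput
    sigmoid_gradient = output * (1 - output)             # SigmoidCurveGradient
    # Input * ErrorInOutput * SigmoidCurveGradient, summed over the samples.
    return np.dot(inputs.T, error_in_output * sigmoid_gradient)

inputs = np.array([[0, 0, 1], [1, 1, 1]])  # two illustrative samples
targets = np.array([[0], [1]])
weights = np.zeros((3, 1))                 # placeholder starting weights
weights += weight_adjustment(inputs, targets, weights)
print(weights)
```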