This project implements a simple neural network with one hidden layer that uses the Tanh activation function. The network consists of:
- Input Layer: 2 neurons
- Hidden Layer: 2 neurons
- Output Layer: 1 neuron
It initializes random weights, performs forward propagation, computes the final output, and calculates the error between the predicted and target values.
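Concretely, the forward pass can be written as follows (a sketch assuming no bias terms, since the description mentions only weights; here $x$ is the input vector, $W_h$ and $W_o$ are the hidden and output weight matrices, and $t$ is the target):

$$
h = \tanh(W_h x), \qquad \hat{y} = \tanh(W_o h), \qquad \text{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(t_i - \hat{y}_i\right)^2
$$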
- Uses Tanh Activation Function.
- Random weight initialization.
- Computes Mean Squared Error (MSE).
- Visualizes the Tanh function using matplotlib.
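For reference, Tanh is defined as

$$
\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}
$$

and squashes any real input into the range (-1, 1).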
Make sure you have Python and the following libraries installed:

```bash
pip install numpy matplotlib
```

The script:
- Initializes weights randomly within the range (-0.5, 0.5).
- Computes activations for the hidden and output layers using tanh.
- Computes the error using the Mean Squared Error (MSE) formula.
- Plots the Tanh activation function for visualization.
The script defines three helper functions:
- tanh_activation(x): computes the Tanh activation.
- initialize_weights(shape): initializes random weights in the range (-0.5, 0.5).
- compute_error(target, output): computes the MSE between the target and the output (see the sketch below).
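A minimal sketch of these three helpers, assuming NumPy and using the (-0.5, 0.5) range and MSE formula described above (the script's actual implementation may differ in detail):

```python
import numpy as np

def tanh_activation(x):
    # Element-wise hyperbolic tangent; outputs lie in (-1, 1).
    return np.tanh(x)

def initialize_weights(shape):
    # Weights drawn uniformly at random from (-0.5, 0.5).
    return np.random.uniform(-0.5, 0.5, size=shape)

def compute_error(target, output):
    # Mean Squared Error between target and predicted output.
    return np.mean((target - output) ** 2)
```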
- Forward pass (sketched below):
  - Computes the hidden layer activations.
  - Computes the final output.
- Prints the results and plots the Tanh Activation Function.
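Putting the pieces together, the forward pass might look like the following sketch. It reuses the helpers above; the input and target values are illustrative, and the 2x2 and 1x2 weight shapes follow the 2-2-1 architecture listed earlier:

```python
import numpy as np  # tanh_activation, initialize_weights, compute_error as sketched above

x = np.array([0.5, -0.3])   # illustrative input for the 2-neuron input layer
target = np.array([0.4])    # illustrative target value

w_hidden = initialize_weights((2, 2))  # input -> hidden weights
w_output = initialize_weights((1, 2))  # hidden -> output weights

hidden_input = w_hidden @ x                   # pre-activation of the hidden layer
hidden_output = tanh_activation(hidden_input)

output_input = w_output @ hidden_output       # pre-activation of the output layer
final_output = tanh_activation(output_input)

error = compute_error(target, final_output)

print("Hidden Layer Input:", hidden_input)
print("Hidden Layer Output:", hidden_output)
print("Output Layer Input:", output_input)
print("Final Output:", final_output)
print("Error:", error)
```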
Example output (the exact values vary from run to run because the weights are random):

```
Hidden Layer Input: [ 0.2 -0.1]
Hidden Layer Output: [ 0.197 -0.099]
Output Layer Input: [0.45]
Final Output: [0.42]
Error: [0.0035]
```
The script generates a plot of the Tanh Activation Function to illustrate its shape and properties.
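A sketch of that visualization step (the plotting range and styling are assumptions, not necessarily what the script uses):

```python
import numpy as np
import matplotlib.pyplot as plt

# Evaluate tanh on a symmetric range around zero.
x = np.linspace(-5, 5, 200)
y = np.tanh(x)

plt.plot(x, y, label="tanh(x)")
plt.axhline(0, color="gray", linewidth=0.5)  # axis guide lines
plt.axvline(0, color="gray", linewidth=0.5)
plt.title("Tanh Activation Function")
plt.xlabel("x")
plt.ylabel("tanh(x)")
plt.legend()
plt.grid(True)
plt.show()
```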
This project is open-source. Feel free to modify and use it for educational purposes.
