This repo contains a blueprint implementation (WIP) of a neural network architecture using TL-Verilog, an evolving transaction-level design methodology.
-
" nn.tlv "
- Complete neuron macro implementation with variable pipeline depths (from 0 to 5).
- Complete layer macro implementation, which instantiates the specified number of neurons.
- Complete memory (ROM) for weights and biases (currently holding identical values, not per-neuron data).
-
" gen_wb.py "
- Python script for generating weights and biases using TensorFlow.
-
Currently, the weight and bias values are hardcoded and identical for all neurons in memory. Hence, we need to import the necessary parameters for each neuron from an external mem file, according to the network configuration.
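As a sketch of how such an external mem file could be produced, the snippet below converts floating-point weights and biases to two's-complement fixed-point hex words, one per line, in the style consumed by $readmemh-like loaders. The word width, fraction bits, file name, and helper names are illustrative assumptions, not the repo's actual configuration.

```python
# Sketch: export one neuron's weights/bias as a hex mem file.
# WIDTH and FRAC are assumed values, not the repo's actual configuration.

WIDTH = 16   # total bits of the fixed-point word (assumption)
FRAC = 8     # fractional bits (assumption)

def to_fixed_hex(value, width=WIDTH, frac=FRAC):
    """Quantize a float to two's-complement fixed point; return a hex string."""
    scaled = int(round(value * (1 << frac)))
    lo, hi = -(1 << (width - 1)), (1 << (width - 1)) - 1
    scaled = max(lo, min(hi, scaled))          # saturate on overflow
    return format(scaled & ((1 << width) - 1), "0{}x".format(width // 4))

def write_mem_file(path, values):
    """One hex word per line, loadable with a $readmemh-style reader."""
    with open(path, "w") as f:
        for v in values:
            f.write(to_fixed_hex(v) + "\n")

# Example: three weights followed by the bias for a hypothetical neuron 0.
write_mem_file("neuron0.mem", [0.5, -0.25, 1.0, 0.125])
```

One such file per neuron would let each neuron load its own parameters instead of sharing one hardcoded set.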
-
The logic/code for the I/O interface and the connection of layers is not parameterized using M4 (unlike the neuron and layer macros); at present, it is fixed.
-
Add write ports to the memory so that software can also modify the weight and bias values.
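A behavioral sketch of that idea, written in Python as a reference model rather than TL-Verilog (the class, port names, and depth are hypothetical): reads stay as before, and a new write port lets software overwrite entries when write-enable is asserted.

```python
# Behavioral reference model of a weight/bias memory with an added
# write port. Names and sizes are illustrative assumptions only.

class WeightMemory:
    def __init__(self, depth, init=0):
        self.mem = [init] * depth

    def read(self, addr):
        """Read port: look up the stored word (as the ROM does today)."""
        return self.mem[addr]

    def write(self, addr, data, we=True):
        """Write port: software updates a weight/bias when we is high."""
        if we:
            self.mem[addr] = data

mem = WeightMemory(depth=8)
mem.write(3, 0x0080)   # software retrains / patches a weight at runtime
```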
-
Add support for different activation functions (e.g., sigmoid, tanh, linear, softsign).
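For reference, these candidate activations are defined as follows (floating-point sketch; a hardware version would typically approximate them with a LUT or piecewise-linear logic):

```python
import math

# Floating-point reference definitions of the candidate activations.
# Hardware would approximate these (LUT or piecewise-linear).

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    return math.tanh(x)

def linear(x):
    return x

def softsign(x):
    return x / (1.0 + abs(x))
```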
-
Currently, the MAC (multiply-accumulate) operation uses a fixed-point representation. We can add logic to support floating-point, posits, etc.
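A sketch of the fixed-point MAC behavior in Python (the fraction-bit count is an assumption): multiplying two words with `frac` fractional bits yields `2*frac` fractional bits, so the product is shifted right by `frac` before accumulating. Supporting floating-point or posits would mean replacing this quantized arithmetic.

```python
FRAC = 8  # fractional bits of the fixed-point format (assumption)

def to_fixed(x, frac=FRAC):
    """Quantize a float to an integer with `frac` fractional bits."""
    return int(round(x * (1 << frac)))

def fixed_mac(acc, w, x, frac=FRAC):
    """acc + w*x in fixed point: the raw product carries 2*frac
    fractional bits, so realign by shifting right by frac."""
    return acc + ((w * x) >> frac)

# 0.5 * 0.25 + 0.125 = 0.25, entirely in fixed point:
acc = fixed_mac(to_fixed(0.125), to_fixed(0.5), to_fixed(0.25))
```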
-
Add test cases for verification of the design.
This repo is a WIP (work in progress) and may be moved elsewhere in later stages. It contains a draft architecture and approach for neural-network generators using TL-Verilog.
The Python script is taken from the following references.
Any feedback or suggestions are highly appreciated.