diff --git a/assets/tutorials/bitstream/code/output/ex10.res b/assets/tutorials/bitstream/code/output/ex10.res
index 7537403..5e18ba7 100644
--- a/assets/tutorials/bitstream/code/output/ex10.res
+++ b/assets/tutorials/bitstream/code/output/ex10.res
@@ -1 +1 @@
-0.008000000000000007
\ No newline at end of file
+0.0040000000000000036
\ No newline at end of file
diff --git a/assets/tutorials/bitstream/code/output/ex7.res b/assets/tutorials/bitstream/code/output/ex7.res
index 0643e21..e84e32b 100644
--- a/assets/tutorials/bitstream/code/output/ex7.res
+++ b/assets/tutorials/bitstream/code/output/ex7.res
@@ -1 +1 @@
-0.00259481037924153
\ No newline at end of file
+0.0005988023952095967
\ No newline at end of file
diff --git a/assets/tutorials/pruning/code/output/ex7.out b/assets/tutorials/pruning/code/output/ex7.out
index 71cf1c6..3e33968 100644
--- a/assets/tutorials/pruning/code/output/ex7.out
+++ b/assets/tutorials/pruning/code/output/ex7.out
@@ -1 +1 @@
-Propagated MobileNet Mults 4919881 Adds 4820043
+Propagated MobileNet Mults 4992057 Adds 4887057
diff --git a/assets/tutorials/pruning/code/output/ex8.out b/assets/tutorials/pruning/code/output/ex8.out
index aba4ec2..66f77ca 100644
--- a/assets/tutorials/pruning/code/output/ex8.out
+++ b/assets/tutorials/pruning/code/output/ex8.out
@@ -1 +1 @@
-Resized MobileNet Mults 3568722 Adds 3420021
+Resized MobileNet Mults 3671590 Adds 3518013
diff --git a/index.html b/index.html
index bc2c942..b505c98 100644
--- a/index.html
+++ b/index.html
@@ -1 +1 @@
- UW-Madison Bitstream Computing Hackathon

Bitstream Computing Hackathon at UW-Madison

Welcome to BCH@UW!

Watch the Introduction video or browse the PPTX slides at your convenience.

This hackathon will give you a chance to learn about ultra-low-power neural networks, how they are designed, how they are programmed or trained, and how they are used to process sensory data from the real world.

When you participate, you will learn:

  • a new programming language (Julia)

  • about low-cost, ultra-low-power computing using Bitstreams as a data type

  • how to train a powerful neural network called MobileNet

  • how to prune/tune/quantize this network to make it energy efficient

The BCH@UW Hackathon kicks off on Sun 11/12/2023 at 1pm in EH2261. In the meantime, you can work through the tutorials on this website to get started with the tools and optimization flows.

Participation rules

Goal: Prune a pre-trained MobileNetv1 model to optimize for energy efficiency without compromising accuracy.

Your entry will be evaluated in three categories:

  • Accuracy: how well can your model classify input images?

  • Area: how big is the hardware circuit implementation of your model?

  • Energy: how energy efficient is your hardware?

Follow the instructions in the submission guide to evaluate your result.

Information and support

Join the mailing list for updates/questions/support:

NOTE: You need a Gmail/Google account to join the mailing list. You can always create one with your existing email and delete it once you are done by following the instructions found here: https://support.google.com/accounts/answer/27441

Hackathon flyer

🚧 Site under construction 🚧


CC BY-SA 4.0 UW-Madison PHARM Group. Last modified: November 12, 2023. Website built with Franklin.jl and the Julia programming language.
\ No newline at end of file
+ UW-Madison Bitstream Computing Hackathon

Bitstream Computing Hackathon at UW-Madison

Welcome to BCH@UW!

The hackathon is now live. You can work through the materials offline, on your own schedule. Please join the mailing list to ask any questions! Submissions are due in two weeks (don't be afraid to ask for an extension if you need one).

Watch the Introduction video or browse the PPTX slides at your convenience.

This hackathon will give you a chance to learn about ultra-low-power neural networks, how they are designed, how they are programmed or trained, and how they are used to process sensory data from the real world.

When you participate, you will learn:

  • a new programming language (Julia)

  • about low-cost, ultra-low-power computing using Bitstreams as a data type

  • how to train a powerful neural network called MobileNet

  • how to prune/tune/quantize this network to make it energy efficient

The BCH@UW Hackathon kicks off on Sun 11/12/2023 at 1pm in EH2261. In the meantime, you can work through the tutorials on this website to get started with the tools and optimization flows.

Participation rules

Goal: Prune a pre-trained MobileNetv1 model to optimize for energy efficiency without compromising accuracy.

Your entry will be evaluated in three categories:

  • Accuracy: how well can your model classify input images?

  • Area: how big is the hardware circuit implementation of your model?

  • Energy: how energy efficient is your hardware?

Follow the instructions in the submission guide to evaluate your result.

Information and support

Join the mailing list for updates/questions/support:

NOTE: You need a Gmail/Google account to join the mailing list. You can always create one with your existing email and delete it once you are done by following the instructions found here: https://support.google.com/accounts/answer/27441

Hackathon flyer

🚧 Site under construction 🚧


CC BY-SA 4.0 UW-Madison PHARM Group. Last modified: November 12, 2023. Website built with Franklin.jl and the Julia programming language.
\ No newline at end of file
diff --git a/tutorials/bitstream/index.html b/tutorials/bitstream/index.html
index 3a8c9c1..948223b 100644
--- a/tutorials/bitstream/index.html
+++ b/tutorials/bitstream/index.html
@@ -25,7 +25,7 @@ SBitstream{Float64}(value = 0.3) with 1002 bits.

Finally, we can see that the empirical average over the SBits in the queue matches the encoded value quite closely.

-abs(estimate(x) - float(x))
-0.00259481037924153
+abs(estimate(x) - float(x))
+0.0005988023952095967
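For intuition about why this works, here is a minimal plain-Julia sketch of unipolar stochastic encoding (it does not use BitSAD, and the names are illustrative): a value p in [0, 1] is represented by random bits that are 1 with probability p, so the sample mean estimates p. The stream length matches the 1002-bit example above.

using Random, Statistics

Random.seed!(0)
p = 0.3               # value to encode, as in SBitstream(0.3)
n = 1002              # stream length, matching the example above
bits = rand(n) .< p   # each bit is 1 with probability p
abs(mean(bits) - p)   # estimation error shrinks like 1/sqrt(n)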

Operations on SBitstreams

So far, we have not computed any meaningful results with BitSAD. Let's go back to the multiplication example and try to multiply two SBitstreams.

y = SBitstream(0.5)
@@ -51,7 +51,7 @@ 

 end
-abs(estimate(z) - float(z))
-0.008000000000000007
+abs(estimate(z) - float(z))
+0.0040000000000000036

We used a helper function, multiply_sbit, to multiply the positive and negative channels of each SBit separately. This resulted in a new SBit, zbit, which we pushed onto z. When we take the empirical average of all these zbits, we see that it is close to the true mean of z.

Hopefully, you can now see why stochastic computing can be so resource efficient. Each channel of multiply_sbit only needed to multiply two 1-bit numbers. This can be done with a single AND gate.
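To see the AND-gate fact concretely, here is a hedged plain-Julia sketch (not BitSAD's implementation): for independent unipolar streams with P(a = 1) = x and P(b = 1) = y, the bitwise AND satisfies P(a & b = 1) = x * y, so ANDing the streams multiplies the encoded values.

using Random, Statistics

Random.seed!(0)
x, y, n = 0.3, 0.5, 100_000
a = rand(n) .< x   # stream encoding x
b = rand(n) .< y   # independent stream encoding y
z = a .& b         # one AND gate per pair of bits
mean(z)            # ≈ x * y = 0.15

Since each SBit carries a positive and a negative channel, multiply_sbit applies this trick once per channel.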

diff --git a/tutorials/pruning/index.html b/tutorials/pruning/index.html
index e26470a..90a4942 100644
--- a/tutorials/pruning/index.html
+++ b/tutorials/pruning/index.html
@@ -24,13 +24,13 @@

 m_pruned = keepprune(m_ch_pruned)
 m_prop = prune_propagate(m_pruned)
 mults, adds, output_size = compute_dot_prods(m_prop, (96, 96, 3, 1))
-println("Propagated MobileNet Mults ", mults, " Adds ", adds)
-Propagated MobileNet Mults 4919881 Adds 4820043
+println("Propagated MobileNet Mults ", mults, " Adds ", adds)
+Propagated MobileNet Mults 4992057 Adds 4887057
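As a rough picture of the bookkeeping behind these numbers (a hypothetical cost model, not the actual compute_dot_prods implementation): a standard convolution producing an H × W × C_out output from C_in input channels with a k × k kernel performs one multiply per kernel tap and one fewer add per dot product.

# Hypothetical per-layer cost model, for illustration only:
conv_mults(H, W, k, cin, cout) = H * W * cout * (k * k * cin)
conv_adds(H, W, k, cin, cout)  = H * W * cout * (k * k * cin - 1)

conv_mults(48, 48, 3, 3, 8)    # e.g. a small 3×3 layer: 497664 multiplies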
 

Resizing the propagated model

If enough nodes get pruned out, there may be slices of the model that accomplish nothing computationally. Instead of wasting resources on passing these all-zero kernels around, we can eliminate them from the structure of our model, as the sketch after the code below illustrates.

 m_resized = resize(m_prop)
 mults, adds, output_size = compute_dot_prods(m_resized, (96, 96, 3, 1))
-println("Resized MobileNet Mults ", mults, " Adds ", adds)
-Resized MobileNet Mults 3568722 Adds 3420021
+println("Resized MobileNet Mults ", mults, " Adds ", adds)
+Resized MobileNet Mults 3671590 Adds 3518013
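Conceptually, the resize step can be pictured with the following hypothetical sketch: scan each layer's weights for output channels that are entirely zero and slice them out (Flux stores conv kernels as kH × kW × Cin × Cout).

W = randn(Float32, 3, 3, 8, 16)   # kH × kW × Cin × Cout
W[:, :, :, 5] .= 0                # suppose pruning zeroed out channel 5
keep = [any(!iszero, W[:, :, :, c]) for c in 1:size(W, 4)]
W_resized = W[:, :, :, keep]      # 3×3×8×15: the dead channel is gone

In the real model, dropping an output channel also means removing the corresponding input channel from the next layer, which is why resize operates on the whole network rather than one layer at a time.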
 

Pruning and finetuning pipeline

Now that we've seen how to prune our model, let's try to finetune it to recover some of the accuracy we lost. A basic training template is provided by the trainer function and can be used as a starting point for your own training methodology.
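If you want to go beyond the provided template, a minimal finetuning loop in Flux might look like the sketch below; model, train_loader, the learning rate, and the epoch count are assumptions for illustration, not the hackathon defaults.

using Flux

# `model` and `train_loader` are assumed to be defined elsewhere.
opt_state = Flux.setup(Adam(1f-4), model)   # small learning rate keeps finetuning gentle
for epoch in 1:5
    for (x, y) in train_loader
        loss, grads = Flux.withgradient(m -> Flux.logitcrossentropy(m(x), y), model)
        Flux.update!(opt_state, model, grads[1])
    end
end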