From a8f249400d7dde145507d40ddbac2b9ba3d051f8 Mon Sep 17 00:00:00 2001
From: mlipasti
Date: Tue, 7 Nov 2023 01:48:22 +0000
Subject: [PATCH] Deploying to gh-pages from @ ad8b7737a069417930a0f54b1f61ae9534b118c2 🚀
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 assets/tutorials/bitstream/code/output/ex10.res | 2 +-
 assets/tutorials/bitstream/code/output/ex4.res  | 2 +-
 assets/tutorials/bitstream/code/output/ex7.res  | 2 +-
 assets/tutorials/pruning/code/output/ex7.out    | 2 +-
 assets/tutorials/pruning/code/output/ex8.out    | 2 +-
 index.html                                      | 2 +-
 tutorials/bitstream/index.html                  | 6 +++---
 tutorials/pruning/index.html                    | 4 ++--
 8 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/assets/tutorials/bitstream/code/output/ex10.res b/assets/tutorials/bitstream/code/output/ex10.res
index cf1c8de..e901cdf 100644
--- a/assets/tutorials/bitstream/code/output/ex10.res
+++ b/assets/tutorials/bitstream/code/output/ex10.res
@@ -1 +1 @@
-0.0030000000000000027
\ No newline at end of file
+0.007000000000000006
\ No newline at end of file

diff --git a/assets/tutorials/bitstream/code/output/ex4.res b/assets/tutorials/bitstream/code/output/ex4.res
index b2204e8..ae0c5c5 100644
--- a/assets/tutorials/bitstream/code/output/ex4.res
+++ b/assets/tutorials/bitstream/code/output/ex4.res
@@ -1 +1 @@
-SBit(pos = true, neg = false)
\ No newline at end of file
+SBit(pos = false, neg = false)
\ No newline at end of file

diff --git a/assets/tutorials/bitstream/code/output/ex7.res b/assets/tutorials/bitstream/code/output/ex7.res
index c2e5097..03b6bc8 100644
--- a/assets/tutorials/bitstream/code/output/ex7.res
+++ b/assets/tutorials/bitstream/code/output/ex7.res
@@ -1 +1 @@
-0.0033932135728543256
\ No newline at end of file
+0.006586826347305397
\ No newline at end of file

diff --git a/assets/tutorials/pruning/code/output/ex7.out b/assets/tutorials/pruning/code/output/ex7.out
index d8314ae..f937d7f 100644
--- a/assets/tutorials/pruning/code/output/ex7.out
+++ b/assets/tutorials/pruning/code/output/ex7.out
@@ -1 +1 @@
-Propagated MobileNet Mults 5093506 Adds 4984905
+Propagated MobileNet Mults 4977772 Adds 4875654

diff --git a/assets/tutorials/pruning/code/output/ex8.out b/assets/tutorials/pruning/code/output/ex8.out
index 87685f9..7e8a050 100644
--- a/assets/tutorials/pruning/code/output/ex8.out
+++ b/assets/tutorials/pruning/code/output/ex8.out
@@ -1 +1 @@
-Resized MobileNet Mults 3779404 Adds 3622566
+Resized MobileNet Mults 3645231 Adds 3494586

diff --git a/index.html b/index.html
index 396d2e2..d1609c4 100644
--- a/index.html
+++ b/index.html
@@ -1 +1 @@
- UW-Madison Bitstream Computing Hackathon

\ No newline at end of file + UW-Madison Bitstream Computing Hackathon

Bitstream Computing Hackathon at UW-Madison

Welcome to BCH@UW!

Watch the Introduction video (improved audio) or browse the PPTX slides at your convenience.

This hackathon will give you a chance to learn about ultra-low-power neural networks: how they are designed, how they are programmed or trained, and how they are used to process sensory data from the real world.

When you participate, you will learn:

  • a new programming language (Julia)

  • about low-cost, ultra-low-power computing using Bitstreams as a data type

  • how to train a powerful neural network called MobileNet

  • how to prune/tune/quantize this network to make it energy efficient

The BCH@UW Hackathon kicks off on Sun 11/12/2023 at 1pm in EH2261. In the meantime, you can work through the tutorials on this website to get started with the tools and optimization flows.

Participation rules

Goal: Prune a pre-trained MobileNetv1 model to optimize for energy efficiency without compromising accuracy.

Your entry will be evaluated on three categories:

  • Accuracy: how well can your model classify input images?

  • Area: how big is the hardware circuit implementation of your model?

  • Energy: how energy efficient is your hardware?

Follow the instructions in the submission guide to evaluate your result.

Information and support

Join the mailing list for updates/questions/support:

NOTE: You need a Gmail/Google account to join the mailing list. You can create one with your existing email address and delete it when you are done by following the instructions here: https://support.google.com/accounts/answer/27441

Hackathon flyer

🚧 Site under construction 🚧


CC BY-SA 4.0 UW-Madison PHARM Group. Last modified: November 07, 2023. Website built with Franklin.jl and the Julia programming language.
\ No newline at end of file
diff --git a/tutorials/bitstream/index.html b/tutorials/bitstream/index.html
index e07d5b0..b9b2f4c 100644
--- a/tutorials/bitstream/index.html
+++ b/tutorials/bitstream/index.html
@@ -3,7 +3,7 @@ x = SBitstream(0.3)
SBitstream{Float64}(value = 0.3)
     with 0 bits.

Here, we created an SBitstream (the type in BitSAD for stochastic bitstreams) encoding the real value 0.3. The SBitstream keeps track of the mean of the Bernoulli distribution, which we can recover with float.

float(x)
0.3
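
To make the encoding concrete, here is a minimal sketch of the idea, written by us for illustration (it is not BitSAD's implementation): a value p in [0, 1] becomes a random bit sequence, and averaging the bits recovers p.

using Statistics

# Illustrative only: encode p in [0, 1] as n random bits with P(bit = 1) = p.
encode(p, n) = [rand() < p for _ in 1:n]

# Decode by averaging; the error shrinks like 1/sqrt(n).
decode(bits) = mean(bits)

decode(encode(0.3, 10_000))   # ≈ 0.3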

You'll also notice that there were "0 bits" enqueued in x. This refers to the fact that the bitstream, x, is a sequence of samples. Currently, we have not drawn any samples from x. We can try that now:

-
xt = pop!(x)
SBit(pos = true, neg = false)
+
xt = pop!(x)
SBit(pos = false, neg = false)

Now, we have a single sample, xt, which is of type SBit. An SBit is a "stochastic bit", which is just a convenient alias for a NamedTuple with two parts: the positive part (pos) and the negative part (neg).

Wait, I thought stochastic bitstreams were a single bit sequence?
– You (probably)
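
As a hedged illustration of why two channels help: a signed value in [-1, 1] can ride on two unipolar channels, assuming the encoded value is P(pos) - P(neg). The sketch below mirrors the SBit shown above, but it is our own illustration, not BitSAD's source.

using Statistics

# Assumption for this sketch: value = P(pos) - P(neg).
sample_sbit(v) = (pos = rand() < max(v, 0.0), neg = rand() < max(-v, 0.0))

samples = [sample_sbit(0.3) for _ in 1:10_000]
mean(s.pos - s.neg for s in samples)   # ≈ 0.3; neg only fires for negative values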

@@ -25,7 +25,7 @@ SBitstream{Float64}(value = 0.3) with 1002 bits.

Finally, we can see that the empirical average over the SBits in the queue matches the encoded value quite closely. As with any Monte Carlo estimate, the error shrinks like 1/√N as more bits are drawn.

-
abs(estimate(x) - float(x))
0.0033932135728543256
+
abs(estimate(x) - float(x))
0.006586826347305397

Operations on SBitstreams

So far, we have not computed any meaningful results with BitSAD. Let's go back to the multiplication example and try to multiply two SBitstreams.

y = SBitstream(0.5)
@@ -51,7 +51,7 @@ 

end
-abs(estimate(z) - float(z))

0.0030000000000000027
+abs(estimate(z) - float(z))
0.007000000000000006

We used a helper function, multiply_sbit, to multiply the positive and negative channels of each SBit separately. This resulted in a new SBit, zbit, which we pushed onto z. When we take the empirical average of all these zbits, we see that it is close to the true mean of z.

Hopefully, you can now see why stochastic computing can be so resource efficient. Each channel of multiply_sbit only needed to multiply two 1-bit numbers. This can be done with a single AND gate.
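
For reference, a multiply_sbit-style helper could look like the sketch below. The name mirrors the tutorial, but the body is our illustration of the channel-wise AND described above, not BitSAD's source.

# Channel-wise 1-bit multiplication; each channel is a single AND gate.
multiply_sbit(a, b) = (pos = a.pos & b.pos, neg = a.neg & b.neg)

multiply_sbit((pos = true, neg = false), (pos = true, neg = false))   # (pos = true, neg = false)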

diff --git a/tutorials/pruning/index.html b/tutorials/pruning/index.html
index 8b1184f..1d4ffe1 100644
--- a/tutorials/pruning/index.html
+++ b/tutorials/pruning/index.html
@@ -24,13 +24,13 @@

m_pruned = keepprune(m_ch_pruned)
m_prop = prune_propagate(m_pruned)
mults, adds, output_size = compute_dot_prods(m_prop, (96, 96, 3, 1))
-println("Propagated MobileNet Mults ", mults, " Adds ", adds)
Propagated MobileNet Mults 5093506 Adds 4984905
+println("Propagated MobileNet Mults ", mults, " Adds ", adds)
Propagated MobileNet Mults 4977772 Adds 4875654
 

Resizing the propagated model

If enough nodes are pruned out, there may be slices in the model that accomplish nothing computationally. Instead of wasting resources passing these all-zero kernels around, we can eliminate them from the structure of our model.

m_resized = resize(m_prop)
 mults, adds, output_size = compute_dot_prods(m_resized, (96, 96, 3, 1))
-println("Resized MobileNet Mults ", mults, " Adds ", adds)
Resized MobileNet Mults 3779404 Adds 3622566
+println("Resized MobileNet Mults ", mults, " Adds ", adds)
Resized MobileNet Mults 3645231 Adds 3494586
 

Pruning and Finetuning pipeline

Now that we've seen how to prune our model, let's try to finetune it to recover some of the accuracy we lost. A basic template for training the model is provided by the trainer function, and it can be used as a starting point for your own training methodology.
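
If you would rather roll your own loop, a rough sketch of a Flux finetuning loop might look like the following; train_loader and every hyperparameter here are placeholders, and the loop is our sketch rather than the provided trainer.

using Flux

# m_pruned comes from the pruning steps above; a real pipeline would also
# re-apply the pruning mask after each update so pruned weights stay zero.
opt_state = Flux.setup(Adam(1e-4), m_pruned)   # small learning rate for finetuning

for epoch in 1:5
    for (x, y) in train_loader                 # placeholder data iterator
        loss, grads = Flux.withgradient(m_pruned) do m
            Flux.logitcrossentropy(m(x), y)
        end
        Flux.update!(opt_state, m_pruned, grads[1])
    end
end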