From 3e49533f965c898d70a82f193071e74cd2fddbe2 Mon Sep 17 00:00:00 2001
From: Stephen Brennan
Date: Tue, 6 Oct 2015 22:05:39 -0400
Subject: [PATCH] W06: Cop out solution for problem 5.

---
 w06/w06.tex | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/w06/w06.tex b/w06/w06.tex
index bd75c30..bad1010 100644
--- a/w06/w06.tex
+++ b/w06/w06.tex
@@ -217,9 +217,16 @@
 \begin{question}
   When learning the weights for the perceptron, we dropped the $sign()$
   activation function to make the objective smooth. Show that the same
-  strategy does not work for an arbitrary ANN. (Hint: consider the shape of
-  the decision boundary if we did this.) (10 points)
+  strategy does not work for an arbitrary ANN. (Hint: consider the shape
+  of the decision boundary if we did this.) (10 points)
 \end{question}
+
+  When we train a single perceptron, we drop the $sign()$ function because it
+  is not differentiable, and dropping the sign function will give us the same
+  result for classification. However, in an ANN, the output of a unit will be
+  used by units in the next layer. It is important that the activation
+  function retains the properties of the $sign()$ function so that these units
+  receive consistent input.
 \end{problem}
 
 \newpage
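One way to make the hint in the question concrete, as a sketch only (the two-layer network and the names $W_1$, $W_2$, $b_1$, $b_2$ are assumptions for illustration, not notation from the assignment): with the $sign()$ activations removed from every unit, consecutive layers compose into a single affine map,

% Sketch under assumed notation: a two-layer network with activations dropped.
% W_1, W_2 are layer weight matrices; b_1, b_2 are bias vectors.
\[
  f(x) = W_2 \bigl( W_1 x + b_1 \bigr) + b_2
       = (W_2 W_1)\, x + (W_2 b_1 + b_2),
\]
% i.e. one effective weight matrix and one bias vector, so the decision
% boundary f(x) = 0 remains a single hyperplane regardless of depth.

Read this way, the smoothed network is no more expressive than a single perceptron without its $sign()$, so the weights learned from the smooth objective do not correspond to the original ANN once the nonlinear activations are restored.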