
W06: Cop out solution for problem 5.
Stephen Brennan committed Oct 7, 2015
1 parent 64e2ba3 commit 3e49533
Showing 1 changed file with 9 additions and 2 deletions.
11 changes: 9 additions & 2 deletions w06/w06.tex
@@ -217,9 +217,16 @@
\begin{question}
When learning the weights for the perceptron, we dropped the $sign()$
activation function to make the objective smooth. Show that the same
strategy does not work for an arbitrary ANN. (Hint: consider the shape of
the decision boundary if we did this.) (10 points)
strategy does not work for an arbitrary ANN. (Hint: consider the shape of
the decision boundary if we did this.) (10 points)
\end{question}

When we train a single perceptron, we drop the $sign()$ function because it
is not differentiable, and thresholding the learned linear function afterwards
still gives the same classification. However, in an ANN the output of each
unit is used by units in the next layer, so the activation function must
retain the thresholding behavior of $sign()$ for those units to receive
consistent input. If we dropped it everywhere, every layer would compute a
linear function of its input; a composition of linear functions is again
linear, so the whole network would collapse to a single linear unit and its
decision boundary would remain a hyperplane.
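Concretely, if we write two consecutive layers with illustrative weight
matrices $W_1, W_2$ and bias vectors $b_1, b_2$ (notation chosen for this
sketch, not taken from the assignment), dropping the activations gives
\[
  W_2 (W_1 x + b_1) + b_2 = (W_2 W_1) x + (W_2 b_1 + b_2),
\]
which is again a single linear map of the form $W x + b$, so no matter how
many layers we stack, the decision boundary stays a hyperplane.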
\end{problem}

\newpage
