38 changes: 19 additions & 19 deletions assignment/a1/answers
@@ -20,66 +20,66 @@
# ------------------------------------------------------------------

# Question 1.1 (/1): What are the dimensions of W? (Hint... don't change the dimensionality of the answer.)
- neural_network_basics_a_1_1: [d0]
+ neural_network_basics_a_1_1: [1]

# Question 1.2 (/1): What are the dimensions of b? (Hint... don't change the dimensionality of the answer.)
- neural_network_basics_a_1_2: [d0]
+ neural_network_basics_a_1_2: [1]


# ------------------------------------------------------------------
# | Section (B): Batching (4 points) |
# ------------------------------------------------------------------

# Question 1 (/1): What are the dimensions of W?
- neural_network_basics_b_1: [d0]
+ neural_network_basics_b_1: [11,1]

# Question 2 (/1): What are the dimensions of b?
- neural_network_basics_b_2: [d0]
+ neural_network_basics_b_2: [1,1]

# Question 3 (/1): What are the dimensions of x?
- neural_network_basics_b_3: [d0, d1]
+ neural_network_basics_b_3: [30,11]

# Question 4 (/1): What are the dimensions of z?
- neural_network_basics_b_4: [d0]
+ neural_network_basics_b_4: [30,1]
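The Section (B) answers compose as the affine transform z = xW + b. A minimal NumPy sketch (assuming the row-major batch convention `x @ W` and the shapes given above) confirms the dimensions are mutually consistent:

```python
import numpy as np

# Shapes taken from the answers above: a batch of 30 examples, 11 features each.
x = np.zeros((30, 11))   # input batch
W = np.zeros((11, 1))    # weight matrix
b = np.zeros((1, 1))     # bias, broadcast across the batch

z = x @ W + b            # affine transform
print(z.shape)           # (30, 1)
```

NumPy broadcasting stretches the (1, 1) bias across all 30 rows, which is why b does not need a batch dimension.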


# ------------------------------------------------------------------
# | Section (C): Logistic Regression NumPy Implementation (4 points) |
# ------------------------------------------------------------------

# Question 1 (/2): What is the probability of the positive class for [0, 0, 0, 0, 5]?
- neural_network_basics_c_1: 0.00000
+ neural_network_basics_c_1: 0.50000

# Question 2 (/2): What is the cross entropy loss (Base 2) if the second example is positive?
- neural_network_basics_c_2: 0
+ neural_network_basics_c_2: 1.0000
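The Section (C) answers follow directly from the sigmoid. The weights W and b live in the assignment notebook, not in this diff, so this sketch only assumes what the answer 0.5 implies: the logit for [0, 0, 0, 0, 5] is exactly 0.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Answer 0.5 implies the logit for [0, 0, 0, 0, 5] is 0 under the notebook's W, b.
p = sigmoid(0.0)
print(p)           # 0.5

# Cross-entropy loss (base 2) for a positive example predicted at p = 0.5:
loss = -np.log2(p)
print(loss)        # 1.0
```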


# ------------------------------------------------------------------
# | Section (D): NumPy Feed Forward Neural Network (4 points) |
# ------------------------------------------------------------------

# Question 1 (/2): What is the probability of the third example in the batch?
- neural_network_basics_d_1: 0.00000
+ neural_network_basics_d_1: 0.36920

# Question 2 (/2): What is the cross-entropy loss if its label is negative?
- neural_network_basics_d_2: 0.00000
+ neural_network_basics_d_2: 0.66500
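The two Section (D) answers are linked: when the true label is negative (y = 0), the cross-entropy is computed on 1 − p. A quick check, assuming base-2 logs as in the earlier questions:

```python
import numpy as np

# p is the network's probability for the third example (Question 1 above).
p = 0.3692

# Cross-entropy (base 2) when the true label is negative (y = 0):
loss = -np.log2(1.0 - p)
print(loss)  # ≈ 0.665
```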


# ------------------------------------------------------------------
# | Section (E): Softmax (8 points) |
# ------------------------------------------------------------------

# Question 1 (/2): What is the probability of the middle class?
- neural_network_basics_e_1: 0.00000
+ neural_network_basics_e_1: 0.11730

# Question 2 (/2): What is the cross-entropy loss (log base 2) if the correct class is the last (z=8)?
- neural_network_basics_e_2: 0.00000
+ neural_network_basics_e_2: 0.20620

# Question 3.1 (/2): What are the dimensions of W3 above if it were a three class problem instead of a binary one?
- neural_network_basics_e_3_1: [d0, d1]
+ neural_network_basics_e_3_1: [10,3]

# Question 3.2 (/2): What is the dimension of b3 above if it were a three class problem?
- neural_network_basics_e_3_2: [d0]
+ neural_network_basics_e_3_2: [3]
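The softmax answers in Section (E) can be cross-checked. The logits are defined in the notebook, not in this diff; z = [4, 6, 8] is an assumption, but it reproduces both answers above (the question only states that the last logit is 8):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # shift by the max for numerical stability
    return e / e.sum()

p = softmax(np.array([4.0, 6.0, 8.0]))
print(np.round(p[1], 4))       # middle class: 0.1173

loss = -np.log2(p[2])          # correct class is the last (z = 8)
print(np.round(loss, 4))       # 0.2062
```

Subtracting the max before exponentiating leaves the probabilities unchanged but avoids overflow for large logits.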



@@ -98,19 +98,19 @@ neural_network_basics_e_3_2: [d0]
tensorflow_1_1: 0

# Question 2 (/2): What's the derivative of relu(z) with respect to z if z = 5
- tensorflow_1_2: 0
+ tensorflow_1_2: 1
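Since relu(z) = max(0, z), its slope is 1 everywhere z > 0 and 0 everywhere z < 0, so the derivative at z = 5 is 1 (not 5). A one-line sketch:

```python
# d/dz relu(z) = 1 for z > 0, 0 for z < 0 (undefined at z = 0).
def relu_grad(z):
    return 1.0 if z > 0 else 0.0

print(relu_grad(5.0))  # 1.0
```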

# Question 3 (/2): Why do you still use a sigmoid at the top of the binary classification network?
# (This question is multiple choice. Delete all but the correct answer).
tensorflow_1_3:
- Its range matches what is allowed for a probability.


# Question 4 (/2): For the sequential model, what is the minimum number of hidden layers with the same number of neurons in each (n where n > 5) you can get away with and still achieve the desired loss on the training set?
- tensorflow_1_4: 0
+ tensorflow_1_4: 1

# Question 5 (/2): What is the smallest number of neurons (n) you can use in a layer in the network with a large number of layers (layers > 3) and still get the desired loss on the training set? (Assume all layers have the same number of neurons.)
- tensorflow_1_5: 0
+ tensorflow_1_5: 8

# Question 6 (/2): What is the accuracy score you get after training the functional model for 10 epochs?
- tensorflow_1_6: 0.00000
+ tensorflow_1_6: 0.99650