Changes from all commits (90 commits)
99dff6a
Update user id
nimrossum Feb 28, 2024
0ff61ed
Solve numpy_entropy
nimrossum Mar 3, 2024
cd61dcd
Add pull.sh script to automate upstream pull
nimrossum Mar 3, 2024
ff1adfc
Fix reshape and compute covariance matrix in pca_first.keras.py and p…
nimrossum Mar 4, 2024
8acc24d
Add .gitignore, pull.ps1, and setup.ps1 files
nimrossum Mar 4, 2024
a15b402
Update team description
nimrossum Mar 4, 2024
6abc32e
Solve pca_first.keras.py
nimrossum Mar 4, 2024
3d43f48
Specify encoding
nimrossum Mar 4, 2024
4b560bd
Add Lisa's solution
nimrossum Mar 4, 2024
aebe52d
Use matrix multiplication instead of element-wise multiplication
nimrossum Mar 4, 2024
5d8b091
Fix test script
nimrossum Mar 4, 2024
741ecfb
Solve mnist_layers_activations.py
nimrossum Mar 5, 2024
f1a3abc
Add team description to all files
nimrossum Mar 5, 2024
3bf89ba
Update repo setup
nimrossum Mar 12, 2024
1bb7ee5
Solve sgd_backpropagation
nimrossum Mar 5, 2024
7b5b10d
The average score was 423.7.
nimrossum Mar 12, 2024
376a1e8
The average score was 457.23.
nimrossum Mar 12, 2024
218ca64
The average score was 465.86.
nimrossum Mar 12, 2024
6aad8db
The average score was 490.01.
nimrossum Mar 12, 2024
d698a61
The average score was 491.41.
nimrossum Mar 12, 2024
ac5533d
Add test script
nimrossum Mar 12, 2024
d90e244
The average score was 498.73.
nimrossum Mar 12, 2024
0bcf0e5
Refactor loss calculation in Model class
nimrossum Mar 12, 2024
79188d1
Add .venv/Include to .gitignore
nimrossum Mar 12, 2024
ac9722d
Update user id
nimrossum Feb 28, 2024
d63ed53
Solve numpy_entropy
nimrossum Mar 3, 2024
2d332d7
Add pull.sh script to automate upstream pull
nimrossum Mar 3, 2024
56c53d0
Fix reshape and compute covariance matrix in pca_first.keras.py and p…
nimrossum Mar 4, 2024
ffae03c
Add .gitignore, pull.ps1, and setup.ps1 files
nimrossum Mar 4, 2024
ebbc6ea
Update team description
nimrossum Mar 4, 2024
8e13a47
Specify encoding
nimrossum Mar 4, 2024
7b16fa8
Add Lisa's solution
nimrossum Mar 4, 2024
4b6d23d
Use matrix multiplication instead of element-wise multiplication
nimrossum Mar 4, 2024
5d8da61
Fix test script
nimrossum Mar 4, 2024
09ac428
Update repo setup
nimrossum Mar 12, 2024
06a9887
task2,3
lizawang Mar 16, 2024
97bcd62
my solution so far
lizawang Mar 11, 2024
abde70f
update
lizawang Mar 12, 2024
37ec3d8
update
lizawang Mar 12, 2024
0fbc474
third commit
lizawang Mar 12, 2024
c6e6977
final
lizawang Mar 12, 2024
5d298e6
final
lizawang Mar 12, 2024
e2e01d6
fixed
lizawang Mar 16, 2024
971d069
Remove unnecessary entries from .gitignore
nimrossum Mar 18, 2024
9767497
Solve numpy_entropy
nimrossum Mar 3, 2024
2f892aa
Fix reshape and compute covariance matrix in pca_first.keras.py and p…
nimrossum Mar 4, 2024
eeb9797
Add .gitignore, pull.ps1, and setup.ps1 files
nimrossum Mar 4, 2024
ec92984
Add Lisa's solution
nimrossum Mar 4, 2024
c272072
Update .gitignore
nimrossum Mar 18, 2024
83b4417
Solve mnist_regularization
nimrossum Mar 18, 2024
c343955
Solve mnist_ensemble
nimrossum Mar 21, 2024
f93989a
Broken uppercase
nimrossum Mar 21, 2024
f4b0e12
Update cnn_manual description to mention PyTorch instead of TensorFlow
davidruda Mar 23, 2024
807ee66
Remove obsolete sentence.
foxik Mar 23, 2024
7b308da
Add missing torch submodule import to cifar10.py
akumm2k Mar 23, 2024
ee9fe3c
Fix cifar10 `Dataset` type hint
akumm2k Mar 23, 2024
1dce322
Remove unnecessary annotation.
foxik Mar 23, 2024
ae63b89
Reflow line longer than 119 characters.
foxik Mar 23, 2024
ef4d124
Add a question about CutMix.
foxik Mar 25, 2024
88bfd8e
Add lecture 6 slides.
foxik Mar 25, 2024
17228f9
Add link to the recording of Czech lecture 6.
foxik Mar 25, 2024
9f9dd39
Fix link to R-CNN paper.
foxik Mar 25, 2024
4e1f053
Add link to the recording of English lecture 6.
foxik Mar 26, 2024
7a8c6b6
Fix error in mnist_ensemble task description.
foxik Mar 26, 2024
0f3c971
Move Group Normalization to the list of required topics.
foxik Mar 26, 2024
3d2526c
Add batch size in RPN training.
foxik Mar 26, 2024
219963e
Fix a type (the types are evaluated in global scope).
foxik Mar 26, 2024
54c62fc
Provide `transform` also in the `TransformedDataset`.
foxik Mar 26, 2024
f1ec251
Improve loading of empty values.
foxik Mar 26, 2024
db9213d
Update the ignore file.
foxik Mar 27, 2024
3be95e4
Add assignments of the lecture 6.
foxik Mar 27, 2024
c5757d5
Mention that next Monday is Easter Monday.
foxik Mar 27, 2024
d3d6402
Add link to the recordings of practicals 6.
foxik Mar 27, 2024
08072f8
Use correct metric name.
foxik Mar 27, 2024
00e15c7
Fix the huber_loss figure.
foxik Mar 27, 2024
c2aac07
Reformulate the conditions for obtaining regular and bonus points.
foxik Mar 27, 2024
0244b99
Solve mnist_cnn.py
nimrossum Mar 21, 2024
baa8a8b
Fix dropout getting rounded
nimrossum Mar 24, 2024
24671e5
Add test script
nimrossum Mar 24, 2024
7776939
Fix issue with CB layers
nimrossum Mar 24, 2024
acfab5e
mnist_cnn.py passes 1-5
nimrossum Mar 24, 2024
4368a88
Refactor and simplify solution to mnist_cnn.py
nimrossum Mar 27, 2024
c432b9d
Solve mnist_multiple.py
nimrossum Mar 28, 2024
422cb93
Improve test output
nimrossum Mar 28, 2024
d1adbc6
Solve torch_dataset
nimrossum Mar 28, 2024
57e5d35
Solve cifar_competition
nimrossum Apr 1, 2024
82f8d75
Jonas Homework 5
nimrossum Apr 2, 2024
c0004c1
Progress on cags_classification
nimrossum Apr 3, 2024
edf5ea9
Solve cags_segmentation
nimrossum Apr 9, 2024
a809ff5
Remove tensorflow dep
nimrossum Apr 9, 2024
4 changes: 4 additions & 0 deletions .gitignore
@@ -0,0 +1,4 @@
**/.venv/
logs/
mnist.npz
*.zip
3 changes: 3 additions & 0 deletions .venv/pyvenv.cfg
@@ -0,0 +1,3 @@
home = C:\Python310
include-system-site-packages = false
version = 3.10.7
3 changes: 3 additions & 0 deletions .vscode/settings.json
@@ -0,0 +1,3 @@
{
"python.analysis.typeCheckingMode": "basic"
}
31 changes: 31 additions & 0 deletions exam/questions.md
@@ -108,6 +108,8 @@

- Compare Cutout and DropBlock. [5]

- Describe in detail how CutMix is performed. [5]

- Describe Squeeze and Excitation applied to a ResNet block. [5]

- Draw the Mobile inverted bottleneck block (including explanation of separable
@@ -119,3 +121,32 @@
channels. Write down (or derive) the equation of transposed convolution
(or equivalently backpropagation through a convolution to its inputs). [5]

#### Questions@:, Lecture 6 Questions
- Describe the differences among semantic segmentation, image classification,
object detection, and instance segmentation, and write down which metrics
are used for these tasks. [5]

- Write down how $\mathit{AP}_{50}$ is computed. [5]

- Considering a Fast-RCNN architecture, draw the overall network architecture,
explain what a RoI-pooling layer is, show how the network parametrizes
bounding boxes and write down the loss. Finally, describe non-maximum
suppression and how the Fast-RCNN prediction is performed. [10]

- Considering a Faster-RCNN architecture, describe the region proposal network
(what are anchors, architecture including both heads, how are the coordinates
of proposals parametrized, what does the loss look like). [10]

- Considering a Mask-RCNN architecture, describe the additions to a Faster-RCNN
architecture (the RoI-Align layer, the new mask-producing head). [5]

- Write down the focal loss with class weighting, including the commonly used
hyperparameter values. [5]

- Draw the overall architecture of RetinaNet (the computation of
$C_1, \ldots, C_7$, the FPN architecture computing $P_1, \ldots, P_7$
including the block combining feature maps of different resolutions; the
classification and bounding box generation heads, including their output
size). Write down the losses for both heads. [10]

- Describe GroupNorm, and compare it to BatchNorm and LayerNorm. [5]
2 changes: 1 addition & 1 deletion labs/.gitignore
@@ -3,5 +3,5 @@ logs/
*.h5
*.keras
*.npz
*.pickle
*.tfrecord
*.zip
39 changes: 39 additions & 0 deletions labs/01/expected.txt
@@ -0,0 +1,39 @@
python3 mnist_layers_activations.py --hidden_layers=0 --activation=none
Epoch 1/10 accuracy: 0.7801 - loss: 0.8405 - val_accuracy: 0.9300 - val_loss: 0.2716
Epoch 5/10 accuracy: 0.9222 - loss: 0.2792 - val_accuracy: 0.9406 - val_loss: 0.2203
Epoch 10/10 accuracy: 0.9304 - loss: 0.2515 - val_accuracy: 0.9432 - val_loss: 0.2159

python3 mnist_layers_activations.py --hidden_layers=1 --activation=none
Epoch 1/10 accuracy: 0.8483 - loss: 0.5230 - val_accuracy: 0.9352 - val_loss: 0.2422
Epoch 5/10 accuracy: 0.9236 - loss: 0.2758 - val_accuracy: 0.9360 - val_loss: 0.2325
Epoch 10/10 accuracy: 0.9298 - loss: 0.2517 - val_accuracy: 0.9354 - val_loss: 0.2439

python3 mnist_layers_activations.py --hidden_layers=1 --activation=relu
Epoch 1/10 accuracy: 0.8503 - loss: 0.5286 - val_accuracy: 0.9604 - val_loss: 0.1432
Epoch 5/10 accuracy: 0.9824 - loss: 0.0613 - val_accuracy: 0.9808 - val_loss: 0.0740
Epoch 10/10 accuracy: 0.9948 - loss: 0.0202 - val_accuracy: 0.9788 - val_loss: 0.0821

python3 mnist_layers_activations.py --hidden_layers=1 --activation=tanh
Epoch 1/10 accuracy: 0.8529 - loss: 0.5183 - val_accuracy: 0.9564 - val_loss: 0.1632
Epoch 5/10 accuracy: 0.9800 - loss: 0.0728 - val_accuracy: 0.9740 - val_loss: 0.0853
Epoch 10/10 accuracy: 0.9948 - loss: 0.0244 - val_accuracy: 0.9782 - val_loss: 0.0772

python3 mnist_layers_activations.py --hidden_layers=1 --activation=sigmoid
Epoch 1/10 accuracy: 0.7851 - loss: 0.8650 - val_accuracy: 0.9414 - val_loss: 0.2196
Epoch 5/10 accuracy: 0.9647 - loss: 0.1270 - val_accuracy: 0.9704 - val_loss: 0.1079
Epoch 10/10 accuracy: 0.9852 - loss: 0.0583 - val_accuracy: 0.9756 - val_loss: 0.0837

python3 mnist_layers_activations.py --hidden_layers=3 --activation=relu
Epoch 1/10 accuracy: 0.8497 - loss: 0.5011 - val_accuracy: 0.9664 - val_loss: 0.1225
Epoch 5/10 accuracy: 0.9862 - loss: 0.0438 - val_accuracy: 0.9734 - val_loss: 0.1026
Epoch 10/10 accuracy: 0.9932 - loss: 0.0202 - val_accuracy: 0.9818 - val_loss: 0.0865

python3 mnist_layers_activations.py --hidden_layers=10 --activation=relu
Epoch 1/10 accuracy: 0.7710 - loss: 0.6793 - val_accuracy: 0.9570 - val_loss: 0.1479
Epoch 5/10 accuracy: 0.9780 - loss: 0.0783 - val_accuracy: 0.9786 - val_loss: 0.0808
Epoch 10/10 accuracy: 0.9869 - loss: 0.0481 - val_accuracy: 0.9724 - val_loss: 0.1163

python3 mnist_layers_activations.py --hidden_layers=10 --activation=sigmoid
Epoch 1/10 accuracy: 0.1072 - loss: 2.3068 - val_accuracy: 0.1784 - val_loss: 2.1247
Epoch 5/10 accuracy: 0.8825 - loss: 0.4776 - val_accuracy: 0.9164 - val_loss: 0.3686
Epoch 10/10 accuracy: 0.9294 - loss: 0.2994 - val_accuracy: 0.9386 - val_loss: 0.2671
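
The first two runs above illustrate that with `--activation=none`, adding a hidden layer barely changes accuracy (about 93% either way). A minimal NumPy sketch of why, assuming weight matrices only (biases omitted): composing linear layers collapses into a single linear layer, so no representational power is gained.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 784))     # a small batch of flattened MNIST-sized inputs
W1 = rng.normal(size=(784, 100))  # "hidden" layer weights, no activation in between
W2 = rng.normal(size=(100, 10))   # output layer weights

two_layers = x @ W1 @ W2          # hidden linear layer followed by the output layer
one_layer = x @ (W1 @ W2)         # a single layer with the composed weight matrix
print(np.allclose(two_layers, one_layer))  # True: the composition is still linear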
24 changes: 24 additions & 0 deletions labs/01/mnist.ps1
@@ -0,0 +1,24 @@
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=0 --activation=none"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=0 --activation=none
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=1 --activation=none"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=1 --activation=none
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=1 --activation=relu"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=1 --activation=relu
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=1 --activation=tanh"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=1 --activation=tanh
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=1 --activation=sigmoid"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=1 --activation=sigmoid
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=3 --activation=relu"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=3 --activation=relu
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=10 --activation=relu"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=10 --activation=relu
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=10 --activation=sigmoid"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=10 --activation=sigmoid
# Write-Output ""
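
mnist.ps1 above is Windows-specific (it invokes the venv's `Scripts\python` directly). A hedged, cross-platform sketch of the same test sweep, assuming it is saved next to mnist_layers_activations.py and launched with the venv's interpreter:

import subprocess
import sys

# The same (hidden_layers, activation) configurations exercised by mnist.ps1.
CONFIGS = [
    ("0", "none"), ("1", "none"), ("1", "relu"), ("1", "tanh"),
    ("1", "sigmoid"), ("3", "relu"), ("10", "relu"), ("10", "sigmoid"),
]

for hidden_layers, activation in CONFIGS:
    print(f"python3 mnist_layers_activations.py --hidden_layers={hidden_layers} --activation={activation}")
    subprocess.run(
        [sys.executable, "mnist_layers_activations.py",
         f"--hidden_layers={hidden_layers}", f"--activation={activation}"],
        check=True,
    )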
15 changes: 14 additions & 1 deletion labs/01/mnist_layers_activations.py
@@ -10,6 +10,11 @@

from mnist import MNIST

# Jonas Glerup Røssum <jglr@itu.dk>
# 31a0a96a-c590-4486-b194-f72765b2ce25
# Xiao Wang <xiao.wang@student.uni-tuebingen.de>
# 91d4d1d7-b800-4765-96b9-df098ac36a66

parser = argparse.ArgumentParser()
# These arguments will be set appropriately by ReCodEx, even if you change them.
parser.add_argument("--activation", default="none", choices=["none", "relu", "tanh", "sigmoid"], help="Activation.")
@@ -68,14 +73,22 @@ def main(args: argparse.Namespace) -> dict[str, float]:
# Create the model
model = keras.Sequential()
model.add(keras.Input([MNIST.H, MNIST.W, MNIST.C]))
# TODO: Finish the model. Namely:
# Finish the model. Namely:
# - start by adding a `keras.layers.Rescaling(1 / 255)` layer;
# - then add a `keras.layers.Flatten()` layer;
# - add `args.hidden_layers` number of fully connected hidden layers
# `keras.layers.Dense()` with `args.hidden_layer` neurons, using activation
# from `args.activation`, allowing "none", "relu", "tanh", "sigmoid";
# - finally, add an output fully connected layer with `MNIST.LABELS` units
# and `softmax` activation.
model.add(keras.layers.Rescaling(1 / 255))
model.add(keras.layers.Flatten())

for _ in range(args.hidden_layers):
activation = None if args.activation == "none" else args.activation
model.add(keras.layers.Dense(args.hidden_layer, activation=activation))

model.add(keras.layers.Dense(MNIST.LABELS, activation="softmax"))

model.compile(
optimizer=keras.optimizers.Adam(),
53 changes: 34 additions & 19 deletions labs/01/numpy_entropy.py
@@ -1,4 +1,10 @@
#!/usr/bin/env python3

# Jonas Glerup Røssum <jglr@itu.dk>
# 31a0a96a-c590-4486-b194-f72765b2ce25
# Xiao Wang <xiao.wang@student.uni-tuebingen.de>
# 91d4d1d7-b800-4765-96b9-df098ac36a66

import argparse

import numpy as np
@@ -12,42 +18,51 @@


def main(args: argparse.Namespace) -> tuple[float, float, float]:
# TODO: Load data distribution, each line containing a datapoint -- a string.
with open(args.data_path, "r") as data:
# Load data distribution, each line containing a datapoint -- a string.
data_map = {}

# Load data distribution, each line containing a datapoint -- a string.
with open(args.data_path, "r", encoding="utf-8") as data:
for line in data:
line = line.rstrip("\n")
# TODO: Process the line, aggregating data with built-in Python

# Process the line, aggregating data with built-in Python
# data structures (not NumPy, which is not suitable for incremental
# addition and string mapping).
if line in data_map:
data_map[line] += 1
else:
data_map[line] = 1

# TODO: Create a NumPy array containing the data distribution. The
# Create a NumPy array containing the data distribution. The
# NumPy array should contain only data, not any mapping. Alternatively,
# the NumPy array might be created after loading the model distribution.
data_dist = np.array(list(data_map.values())) / sum(data_map.values())

# Load model distribution, each line `string \t probability`.
model_map = {}

# TODO: Load model distribution, each line `string \t probability`.
with open(args.model_path, "r") as model:
for line in model:
line = line.rstrip("\n")
# TODO: Process the line, aggregating using Python data structures.
key, value = line.split("\t")
model_map[key] = float(value)

# TODO: Create a NumPy array containing the model distribution.
# Create a NumPy array containing the model distribution.
model_dist = np.array([model_map[key] if key in model_map else np.inf for key in data_map.keys()])

# TODO: Compute the entropy H(data distribution). You should not use
# manual for/while cycles, but instead use the fact that most NumPy methods
# operate on all elements (for example `*` is vector element-wise multiplication).
entropy = ...
# Compute the entropy H(data distribution).
entropy = -np.sum(data_dist * np.log(data_dist))

# TODO: Compute cross-entropy H(data distribution, model distribution).
# When some data distribution elements are missing in the model distribution,
# return `np.inf`.
crossentropy = ...
# Compute cross-entropy H(data distribution, model distribution).
crossentropy = -np.sum(data_dist * np.log(model_dist))

# TODO: Compute KL-divergence D_KL(data distribution, model_distribution),
# again using `np.inf` when needed.
kl_divergence = ...
# Compute KL-divergence D_KL(data distribution, model_distribution).
kl_divergence = crossentropy - entropy
# kl_divergence = np.where(np.isinf(kl_divergence), np.inf, kl_divergence)

# Return the computed values for ReCodEx to validate.
return entropy, crossentropy, kl_divergence
return entropy, crossentropy if np.isfinite(crossentropy) else np.inf, kl_divergence if np.isfinite(kl_divergence) else np.inf


if __name__ == "__main__":
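
As a sanity check of the entropy, cross-entropy, and KL-divergence expressions in the diff above, a tiny worked example with a hypothetical three-symbol alphabet (all values in nats):

import numpy as np

data_dist = np.array([0.5, 0.25, 0.25])   # empirical distribution of the data
model_dist = np.array([0.25, 0.5, 0.25])  # model distribution over the same symbols

entropy = -np.sum(data_dist * np.log(data_dist))        # ~1.0397
crossentropy = -np.sum(data_dist * np.log(model_dist))  # ~1.2130
kl_divergence = crossentropy - entropy                  # ~0.1733
print(entropy, crossentropy, kl_divergence)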