Changes from all commits
Commits
Show all changes
124 commits
b1cc00c
Update user id
nimrossum Feb 28, 2024
2e4403f
Solve numpy_entropy
nimrossum Mar 3, 2024
e912cb1
Add pull.sh script to automate upstream pull
nimrossum Mar 3, 2024
ada791e
Fix reshape and compute covariance matrix in pca_first.keras.py and p…
nimrossum Mar 4, 2024
5b296c9
Add .gitignore, pull.ps1, and setup.ps1 files
nimrossum Mar 4, 2024
22b8213
Update team description
nimrossum Mar 4, 2024
d67d119
Solve pca_first.keras.py
nimrossum Mar 4, 2024
232ee1c
Specify encoding
nimrossum Mar 4, 2024
c218afb
Add Lisa's solution
nimrossum Mar 4, 2024
dd4c484
Use matrix multiplication instead of element-wise multiplication
nimrossum Mar 4, 2024
01a9ad8
Fix test script
nimrossum Mar 4, 2024
618c830
Solve mnist_layers_activations.py
nimrossum Mar 5, 2024
e8c9096
Add team description to all files
nimrossum Mar 5, 2024
4e62352
Fix the model distribution in numpy_entropy.
foxik Mar 3, 2024
2b9afd3
Improve typography by making outer parentheses bigger.
foxik Mar 3, 2024
fa888ae
Remove the Nesterov momentum from the questions.
foxik Mar 3, 2024
11592ae
Add syllabus of the third lecture.
foxik Mar 3, 2024
8f2e2d9
Add slides for lecture 3.
foxik Mar 4, 2024
8eba671
Incremental display of Softmax MLE loss.
foxik Mar 4, 2024
66ff9c3
Add link to the Czech lecture 3 recording.
foxik Mar 4, 2024
8f662b1
Use #θ to denote number of parameters instead of |θ|.
foxik Mar 5, 2024
46b41ff
Fix "derived by MSE" -> MLE
petrkasp Mar 5, 2024
4c6ed61
Add link to the recording of English lecture 3.
foxik Mar 5, 2024
8ab0aa9
my solution so far
lizawang Mar 11, 2024
9b37aa7
Use Python 3.9 typing features.
foxik Mar 5, 2024
edc4649
Rename the MNIST.Datasplit to MNIST.Dataset.
foxik Mar 6, 2024
b3adec3
Add lecture 3 assignments.
foxik Mar 6, 2024
2fa29c9
Add links to the recordings from the third practicals.
foxik Mar 6, 2024
775a4ad
Remove the backticks from the recording title.
foxik Mar 6, 2024
900488c
Update title for designing NNs in slides/03
akumm2k Mar 5, 2024
ba5d2d2
Fix a typo on the list of possible weight decays.
foxik Mar 7, 2024
26acc73
Reformulate gradient clipping API description.
foxik Mar 7, 2024
1a2f651
Make sure thetas are present where it is appropriate.
foxik Mar 7, 2024
f51f2b3
Set the number of threads only when `args.threads` is nonzero.
foxik Mar 8, 2024
7b88728
Correctly use MNIST.Dataset after renaming from MNIST.Datasplit.
foxik Mar 8, 2024
8993624
Add MNIST.Datasplit alias for MNIST.Dataset.
foxik Mar 10, 2024
9691d71
Check that if alphabet is passed, it contains "<pad>" and "<unk>".
foxik Mar 10, 2024
42aa28e
Add slides for lecture 4.
foxik Mar 11, 2024
176f5a1
Add link to Czech lecture 4 recording.
foxik Mar 11, 2024
6df2af5
Raise error when args.batch_size or args.epochs is still an ellipsis.
foxik Mar 11, 2024
2ffc7e3
Update repo setup
nimrossum Mar 12, 2024
5bf8a9c
update
lizawang Mar 12, 2024
9d6372b
update
lizawang Mar 12, 2024
63b6188
third commit
lizawang Mar 12, 2024
9a55b19
final
lizawang Mar 12, 2024
3ceaecc
final
lizawang Mar 12, 2024
095223e
fixed
lizawang Mar 16, 2024
866dc6c
task2,3
lizawang Mar 16, 2024
67df8ac
Merge branch 'hw2_lisa' into DL_lisa
lizawang Mar 16, 2024
bb292a5
barely passed the baseline
lizawang Mar 19, 2024
6017b7b
barely passed the baseline
lizawang Mar 19, 2024
99dff6a
Update user id
nimrossum Feb 28, 2024
0ff61ed
Solve numpy_entropy
nimrossum Mar 3, 2024
cd61dcd
Add pull.sh script to automate upstream pull
nimrossum Mar 3, 2024
ff1adfc
Fix reshape and compute covariance matrix in pca_first.keras.py and p…
nimrossum Mar 4, 2024
8acc24d
Add .gitignore, pull.ps1, and setup.ps1 files
nimrossum Mar 4, 2024
a15b402
Update team description
nimrossum Mar 4, 2024
6abc32e
Solve pca_first.keras.py
nimrossum Mar 4, 2024
3d43f48
Specify encoding
nimrossum Mar 4, 2024
4b560bd
Add Lisa's solution
nimrossum Mar 4, 2024
aebe52d
Use matrix multiplication instead of element-wise multiplication
nimrossum Mar 4, 2024
5d8b091
Fix test script
nimrossum Mar 4, 2024
741ecfb
Solve mnist_layers_activations.py
nimrossum Mar 5, 2024
f1a3abc
Add team description to all files
nimrossum Mar 5, 2024
3bf89ba
Update repo setup
nimrossum Mar 12, 2024
1bb7ee5
Solve sgd_backpropagation
nimrossum Mar 5, 2024
7b5b10d
The average score was 423.7.
nimrossum Mar 12, 2024
376a1e8
The average score was 457.23.
nimrossum Mar 12, 2024
218ca64
The average score was 465.86.
nimrossum Mar 12, 2024
6aad8db
The average score was 490.01.
nimrossum Mar 12, 2024
d698a61
The average score was 491.41.
nimrossum Mar 12, 2024
ac5533d
Add test script
nimrossum Mar 12, 2024
d90e244
The average score was 498.73.
nimrossum Mar 12, 2024
0bcf0e5
Refactor loss calculation in Model class
nimrossum Mar 12, 2024
79188d1
Add .venv/Include to .gitignore
nimrossum Mar 12, 2024
ac9722d
Update user id
nimrossum Feb 28, 2024
d63ed53
Solve numpy_entropy
nimrossum Mar 3, 2024
2d332d7
Add pull.sh script to automate upstream pull
nimrossum Mar 3, 2024
56c53d0
Fix reshape and compute covariance matrix in pca_first.keras.py and p…
nimrossum Mar 4, 2024
ffae03c
Add .gitignore, pull.ps1, and setup.ps1 files
nimrossum Mar 4, 2024
ebbc6ea
Update team description
nimrossum Mar 4, 2024
8e13a47
Specify encoding
nimrossum Mar 4, 2024
7b16fa8
Add Lisa's solution
nimrossum Mar 4, 2024
4b6d23d
Use matrix multiplication instead of element-wise multiplication
nimrossum Mar 4, 2024
5d8da61
Fix test script
nimrossum Mar 4, 2024
09ac428
Update repo setup
nimrossum Mar 12, 2024
06a9887
task2,3
lizawang Mar 16, 2024
97bcd62
my solution so far
lizawang Mar 11, 2024
abde70f
update
lizawang Mar 12, 2024
37ec3d8
update
lizawang Mar 12, 2024
0fbc474
third commit
lizawang Mar 12, 2024
c6e6977
final
lizawang Mar 12, 2024
5d298e6
final
lizawang Mar 12, 2024
e2e01d6
fixed
lizawang Mar 16, 2024
971d069
Remove unnecessary entries from .gitignore
nimrossum Mar 18, 2024
9767497
Solve numpy_entropy
nimrossum Mar 3, 2024
2f892aa
Fix reshape and compute covariance matrix in pca_first.keras.py and p…
nimrossum Mar 4, 2024
eeb9797
Add .gitignore, pull.ps1, and setup.ps1 files
nimrossum Mar 4, 2024
ec92984
Add Lisa's solution
nimrossum Mar 4, 2024
c272072
Update .gitignore
nimrossum Mar 18, 2024
83b4417
Solve mnist_regularization
nimrossum Mar 18, 2024
c343955
Solve mnist_ensemble
nimrossum Mar 21, 2024
f93989a
Broken uppercase
nimrossum Mar 21, 2024
9a07c3b
commit
lizawang Mar 23, 2024
de44e6f
Merge branch 'DL_lisa' of https://github.com/joglr/npfl138 into DL_lisa
lizawang Mar 23, 2024
2c52149
Merge branch 'master' of https://github.com/joglr/npfl138 into DL_lisa
lizawang Apr 9, 2024
d9b76c3
Update
lizawang Apr 9, 2024
3a761e1
Add results
lizawang Apr 9, 2024
7d4388d
finally solved
lizawang Apr 12, 2024
10dee40
Merge branch 'master' of https://github.com/joglr/npfl138 into DL_lisa
lizawang Apr 16, 2024
b0719d2
Add files via upload
lizawang Apr 16, 2024
5571e3a
Merge branch 'DL_lisa' of github.com:joglr/npfl138 into DL_lisa
lizawang Apr 16, 2024
2895410
Merge branch 'master' of github.com:joglr/npfl138 into DL_lisa
lizawang Apr 18, 2024
a1ee4a5
update
lizawang Apr 18, 2024
fcd1110
update
lizawang Apr 21, 2024
c2715c5
svhn update
lizawang Apr 22, 2024
7a5fce1
update
lizawang Apr 23, 2024
19db759
update
lizawang Apr 23, 2024
bcef215
update
lizawang Apr 23, 2024
1b904c0
Add tensorboard callback
nimrossum Apr 23, 2024
65a1358
update
lizawang Apr 28, 2024
0e1a0cf
Merge branch 'DL_lisa' of github.com:joglr/npfl138 into DL_lisa
lizawang Apr 28, 2024
11c1645
solved tagger_cle
lizawang Apr 28, 2024
ac4393e
Add tagger_competition
nimrossum Apr 30, 2024
8 changes: 8 additions & 0 deletions .gitignore
@@ -0,0 +1,8 @@
**/.venv/Lib
**/.venv/Scripts
**/.venv/share
**/.venv/
logs/
mnist.npz
*.zip
*.tfrecord
3 changes: 3 additions & 0 deletions .venv/pyvenv.cfg
@@ -0,0 +1,3 @@
home = C:\Python310
include-system-site-packages = false
version = 3.10.7
39 changes: 39 additions & 0 deletions labs/01/expected.txt
@@ -0,0 +1,39 @@
python3 mnist_layers_activations.py --hidden_layers=0 --activation=none
Epoch 1/10 accuracy: 0.7801 - loss: 0.8405 - val_accuracy: 0.9300 - val_loss: 0.2716
Epoch 5/10 accuracy: 0.9222 - loss: 0.2792 - val_accuracy: 0.9406 - val_loss: 0.2203
Epoch 10/10 accuracy: 0.9304 - loss: 0.2515 - val_accuracy: 0.9432 - val_loss: 0.2159

python3 mnist_layers_activations.py --hidden_layers=1 --activation=none
Epoch 1/10 accuracy: 0.8483 - loss: 0.5230 - val_accuracy: 0.9352 - val_loss: 0.2422
Epoch 5/10 accuracy: 0.9236 - loss: 0.2758 - val_accuracy: 0.9360 - val_loss: 0.2325
Epoch 10/10 accuracy: 0.9298 - loss: 0.2517 - val_accuracy: 0.9354 - val_loss: 0.2439

python3 mnist_layers_activations.py --hidden_layers=1 --activation=relu
Epoch 1/10 accuracy: 0.8503 - loss: 0.5286 - val_accuracy: 0.9604 - val_loss: 0.1432
Epoch 5/10 accuracy: 0.9824 - loss: 0.0613 - val_accuracy: 0.9808 - val_loss: 0.0740
Epoch 10/10 accuracy: 0.9948 - loss: 0.0202 - val_accuracy: 0.9788 - val_loss: 0.0821

python3 mnist_layers_activations.py --hidden_layers=1 --activation=tanh
Epoch 1/10 accuracy: 0.8529 - loss: 0.5183 - val_accuracy: 0.9564 - val_loss: 0.1632
Epoch 5/10 accuracy: 0.9800 - loss: 0.0728 - val_accuracy: 0.9740 - val_loss: 0.0853
Epoch 10/10 accuracy: 0.9948 - loss: 0.0244 - val_accuracy: 0.9782 - val_loss: 0.0772

python3 mnist_layers_activations.py --hidden_layers=1 --activation=sigmoid
Epoch 1/10 accuracy: 0.7851 - loss: 0.8650 - val_accuracy: 0.9414 - val_loss: 0.2196
Epoch 5/10 accuracy: 0.9647 - loss: 0.1270 - val_accuracy: 0.9704 - val_loss: 0.1079
Epoch 10/10 accuracy: 0.9852 - loss: 0.0583 - val_accuracy: 0.9756 - val_loss: 0.0837

python3 mnist_layers_activations.py --hidden_layers=3 --activation=relu
Epoch 1/10 accuracy: 0.8497 - loss: 0.5011 - val_accuracy: 0.9664 - val_loss: 0.1225
Epoch 5/10 accuracy: 0.9862 - loss: 0.0438 - val_accuracy: 0.9734 - val_loss: 0.1026
Epoch 10/10 accuracy: 0.9932 - loss: 0.0202 - val_accuracy: 0.9818 - val_loss: 0.0865

python3 mnist_layers_activations.py --hidden_layers=10 --activation=relu
Epoch 1/10 accuracy: 0.7710 - loss: 0.6793 - val_accuracy: 0.9570 - val_loss: 0.1479
Epoch 5/10 accuracy: 0.9780 - loss: 0.0783 - val_accuracy: 0.9786 - val_loss: 0.0808
Epoch 10/10 accuracy: 0.9869 - loss: 0.0481 - val_accuracy: 0.9724 - val_loss: 0.1163

python3 mnist_layers_activations.py --hidden_layers=10 --activation=sigmoid
Epoch 1/10 accuracy: 0.1072 - loss: 2.3068 - val_accuracy: 0.1784 - val_loss: 2.1247
Epoch 5/10 accuracy: 0.8825 - loss: 0.4776 - val_accuracy: 0.9164 - val_loss: 0.3686
Epoch 10/10 accuracy: 0.9294 - loss: 0.2994 - val_accuracy: 0.9386 - val_loss: 0.2671
24 changes: 24 additions & 0 deletions labs/01/mnist.ps1
@@ -0,0 +1,24 @@
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=0 --activation=none"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=0 --activation=none
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=1 --activation=none"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=1 --activation=none
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=1 --activation=relu"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=1 --activation=relu
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=1 --activation=tanh"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=1 --activation=tanh
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=1 --activation=sigmoid"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=1 --activation=sigmoid
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=3 --activation=relu"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=3 --activation=relu
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=10 --activation=relu"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=10 --activation=relu
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=10 --activation=sigmoid"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=10 --activation=sigmoid
# Write-Output ""
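The repetitive runner above could equally be driven by a loop. A minimal sketch in POSIX shell (not part of the PR, which uses PowerShell; the `run_all` name and the echo-only behavior are illustrative):

```shell
#!/bin/sh
# Echo the command for every (hidden_layers, activation) configuration
# that mnist.ps1 runs, in the same order as expected.txt.
run_all() {
    for config in "0 none" "1 none" "1 relu" "1 tanh" "1 sigmoid" \
                  "3 relu" "10 relu" "10 sigmoid"; do
        # Intentional word splitting: $1 = hidden_layers, $2 = activation.
        set -- $config
        echo "python3 mnist_layers_activations.py --hidden_layers=$1 --activation=$2"
        # Uncomment to actually run each configuration:
        # python3 mnist_layers_activations.py --hidden_layers="$1" --activation="$2"
    done
}
run_all
```

This keeps the configuration list in one place instead of eight copied command lines.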
15 changes: 14 additions & 1 deletion labs/01/mnist_layers_activations.py
@@ -10,6 +10,11 @@

 from mnist import MNIST

+# Jonas Glerup Røssum <jglr@itu.dk>
+# 31a0a96a-c590-4486-b194-f72765b2ce25
+# Xiao Wang <xiao.wang@student.uni-tuebingen.de>
+# 91d4d1d7-b800-4765-96b9-df098ac36a66
+
 parser = argparse.ArgumentParser()
 # These arguments will be set appropriately by ReCodEx, even if you change them.
 parser.add_argument("--activation", default="none", choices=["none", "relu", "tanh", "sigmoid"], help="Activation.")
@@ -68,14 +73,22 @@ def main(args: argparse.Namespace) -> dict[str, float]:
     # Create the model
     model = keras.Sequential()
     model.add(keras.Input([MNIST.H, MNIST.W, MNIST.C]))
-    # TODO: Finish the model. Namely:
+    # Finish the model. Namely:
     # - start by adding a `keras.layers.Rescaling(1 / 255)` layer;
     # - then add a `keras.layers.Flatten()` layer;
     # - add `args.hidden_layers` number of fully connected hidden layers
     #   `keras.layers.Dense()` with `args.hidden_layer` neurons, using activation
     #   from `args.activation`, allowing "none", "relu", "tanh", "sigmoid";
     # - finally, add an output fully connected layer with `MNIST.LABELS` units
     #   and `softmax` activation.
+    model.add(keras.layers.Rescaling(1 / 255))
+    model.add(keras.layers.Flatten())
+
+    for _ in range(args.hidden_layers):
+        activation = None if args.activation == "none" else args.activation
+        model.add(keras.layers.Dense(args.hidden_layer, activation=activation))
+
+    model.add(keras.layers.Dense(MNIST.LABELS, activation="softmax"))
+
     model.compile(
         optimizer=keras.optimizers.Adam(),
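The layer stack added above (Rescaling → Flatten → Dense hidden layers → softmax output) can be mirrored as a plain-NumPy forward pass. A hedged sketch, not the assignment's Keras code; `mlp_forward` and the weight shapes are illustrative:

```python
import numpy as np


def mlp_forward(images: np.ndarray, weights: list[np.ndarray], biases: list[np.ndarray]) -> np.ndarray:
    """Forward pass matching the stack above: Rescaling(1/255) -> Flatten -> Dense(relu)* -> Dense(softmax)."""
    # Rescaling(1 / 255) followed by Flatten.
    x = images.reshape(images.shape[0], -1) / 255.0
    # Hidden Dense layers: matrix multiplication (not element-wise), then ReLU.
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ W + b, 0.0)
    # Output Dense layer with a numerically stable softmax.
    logits = x @ weights[-1] + biases[-1]
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)
```

Each output row is a probability distribution over the ten MNIST labels, which is what the `softmax` activation in the diff guarantees.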
87 changes: 68 additions & 19 deletions labs/01/numpy_entropy.py
@@ -1,4 +1,16 @@
 #!/usr/bin/env python3

+# Jonas Glerup Røssum <jglr@itu.dk>
+# 31a0a96a-c590-4486-b194-f72765b2ce25
+# Xiao Wang <xiao.wang@student.uni-tuebingen.de>
+# 91d4d1d7-b800-4765-96b9-df098ac36a66
+
+
+# Jonas Glerup Røssum <jglr@itu.dk>
+# 31a0a96a-c590-4486-b194-f72765b2ce25
+# Xiao Wang <xiao.wang@student.uni-tuebingen.de>
+# 91d4d1d7-b800-4765-96b9-df098ac36a66
+
 import argparse

 import numpy as np
@@ -12,42 +24,79 @@


 def main(args: argparse.Namespace) -> tuple[float, float, float]:
-    # TODO: Load data distribution, each line containing a datapoint -- a string.
-    with open(args.data_path, "r") as data:
+    # Load data distribution, each line containing a datapoint -- a string.
+    data_map = {}
+    with open(args.data_path, "r", encoding="utf-8") as data:
         for line in data:
             line = line.rstrip("\n")
-            # TODO: Process the line, aggregating data with built-in Python
+            # Process the line, aggregating data with built-in Python
             # data structures (not NumPy, which is not suitable for incremental
             # addition and string mapping).
+            if line in data_map:
+                data_map[line] += 1
+            else:
+                data_map[line] = 1

-    # TODO: Create a NumPy array containing the data distribution. The
+    # Create a NumPy array containing the data distribution. The
     # NumPy array should contain only data, not any mapping. Alternatively,
     # the NumPy array might be created after loading the model distribution.
+    data_dist = np.array(list(data_map.values())) / sum(data_map.values())

-    # TODO: Load model distribution, each line `string \t probability`.
+    # Load model distribution, each line `string \t probability`.
+    model_map = {}
     with open(args.model_path, "r") as model:
         for line in model:
             line = line.rstrip("\n")
-            # TODO: Process the line, aggregating using Python data structures.
+            key, value = line.split("\t")
+            model_map[key] = float(value)

-    # TODO: Create a NumPy array containing the model distribution.
+    # Create a NumPy array containing the model distribution.
+    model_dist = np.array([model_map[key] if key in model_map else np.inf for key in data_map.keys()])

-    # TODO: Compute the entropy H(data distribution). You should not use
-    # manual for/while cycles, but instead use the fact that most NumPy methods
-    # operate on all elements (for example `*` is vector element-wise multiplication).
-    entropy = ...
+    # Compute the entropy H(data distribution).
+    entropy = -np.sum(data_dist * np.log(data_dist))

-    # TODO: Compute cross-entropy H(data distribution, model distribution).
-    # When some data distribution elements are missing in the model distribution,
-    # return `np.inf`.
-    crossentropy = ...
+    # Compute cross-entropy H(data distribution, model distribution).
+    crossentropy = -np.sum(data_dist * np.log(model_dist))

-    # TODO: Compute KL-divergence D_KL(data distribution, model_distribution),
-    # again using `np.inf` when needed.
-    kl_divergence = ...
+    # Compute KL-divergence D_KL(data distribution, model_distribution).
+    kl_divergence = crossentropy - entropy
+    # kl_divergence = np.where(np.isinf(kl_divergence), np.inf, kl_divergence)

     # Return the computed values for ReCodEx to validate.
-    return entropy, crossentropy, kl_divergence
+    return entropy, crossentropy if np.isfinite(crossentropy) else np.inf, kl_divergence if np.isfinite(kl_divergence) else np.inf


if __name__ == "__main__":
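The entropy, cross-entropy, and KL-divergence logic added above can be exercised in isolation. A minimal sketch; `distribution_stats` and its dict arguments are illustrative names, not part of the assignment API:

```python
import numpy as np


def distribution_stats(data_counts: dict[str, int], model_probs: dict[str, float]) -> tuple[float, float, float]:
    """Entropy H(p), cross-entropy H(p, q), and D_KL(p || q) for count/probability maps."""
    data_dist = np.array(list(data_counts.values())) / sum(data_counts.values())
    # As in the solution above, a symbol missing from the model gets np.inf as
    # a placeholder; log(inf) = inf makes the sums non-finite, and the final
    # clamp maps them to +inf as the assignment requires.
    model_dist = np.array([model_probs.get(key, np.inf) for key in data_counts])

    entropy = -np.sum(data_dist * np.log(data_dist))
    crossentropy = -np.sum(data_dist * np.log(model_dist))
    kl_divergence = crossentropy - entropy
    return (entropy,
            crossentropy if np.isfinite(crossentropy) else np.inf,
            kl_divergence if np.isfinite(kl_divergence) else np.inf)
```

For a data distribution matching the model exactly, cross-entropy equals entropy and the KL-divergence is zero; a data symbol absent from the model drives both cross-entropy and KL-divergence to infinity.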
167 changes: 167 additions & 0 deletions labs/01/output.txt
@@ -0,0 +1,167 @@
Epoch 1/10
1100/1100 14s 12ms/step - accuracy: 0.7761 - loss: 0.8442 - val_accuracy: 0.9298 - val_loss: 0.2730
Epoch 2/10
1100/1100 12s 11ms/step - accuracy: 0.9057 - loss: 0.3428 - val_accuracy: 0.9336 - val_loss: 0.2418
Epoch 3/10
1100/1100 11s 10ms/step - accuracy: 0.9177 - loss: 0.2945 - val_accuracy: 0.9366 - val_loss: 0.2284
Epoch 4/10
1100/1100 12s 10ms/step - accuracy: 0.9193 - loss: 0.2839 - val_accuracy: 0.9384 - val_loss: 0.2267
Epoch 5/10
1100/1100 11s 10ms/step - accuracy: 0.9228 - loss: 0.2790 - val_accuracy: 0.9392 - val_loss: 0.2208
Epoch 6/10
1100/1100 12s 11ms/step - accuracy: 0.9244 - loss: 0.2713 - val_accuracy: 0.9440 - val_loss: 0.2162
Epoch 7/10
1100/1100 13s 12ms/step - accuracy: 0.9252 - loss: 0.2662 - val_accuracy: 0.9398 - val_loss: 0.2178
Epoch 8/10
1100/1100 14s 12ms/step - accuracy: 0.9269 - loss: 0.2626 - val_accuracy: 0.9398 - val_loss: 0.2169
Epoch 9/10
1100/1100 13s 12ms/step - accuracy: 0.9286 - loss: 0.2612 - val_accuracy: 0.9458 - val_loss: 0.2128
Epoch 10/10
1100/1100 13s 12ms/step - accuracy: 0.9307 - loss: 0.2515 - val_accuracy: 0.9438 - val_loss: 0.2161

Epoch 1/10
1100/1100 15s 13ms/step - accuracy: 0.8422 - loss: 0.5383 - val_accuracy: 0.9346 - val_loss: 0.2400
Epoch 2/10
1100/1100 18s 17ms/step - accuracy: 0.9120 - loss: 0.3102 - val_accuracy: 0.9364 - val_loss: 0.2372
Epoch 3/10
1100/1100 16s 15ms/step - accuracy: 0.9233 - loss: 0.2774 - val_accuracy: 0.9352 - val_loss: 0.2342
Epoch 4/10
1100/1100 16s 14ms/step - accuracy: 0.9225 - loss: 0.2736 - val_accuracy: 0.9366 - val_loss: 0.2336
Epoch 5/10
1100/1100 15s 13ms/step - accuracy: 0.9233 - loss: 0.2760 - val_accuracy: 0.9344 - val_loss: 0.2331
Epoch 6/10
1100/1100 22s 20ms/step - accuracy: 0.9251 - loss: 0.2683 - val_accuracy: 0.9382 - val_loss: 0.2247
Epoch 7/10
1100/1100 15s 14ms/step - accuracy: 0.9261 - loss: 0.2658 - val_accuracy: 0.9356 - val_loss: 0.2367
Epoch 8/10
1100/1100 15s 14ms/step - accuracy: 0.9256 - loss: 0.2635 - val_accuracy: 0.9364 - val_loss: 0.2308
Epoch 9/10
1100/1100 15s 13ms/step - accuracy: 0.9253 - loss: 0.2625 - val_accuracy: 0.9386 - val_loss: 0.2277
Epoch 10/10
1100/1100 15s 13ms/step - accuracy: 0.9301 - loss: 0.2515 - val_accuracy: 0.9358 - val_loss: 0.2441

Epoch 1/10
1100/1100 16s 13ms/step - accuracy: 0.8499 - loss: 0.5317 - val_accuracy: 0.9618 - val_loss: 0.1400
Epoch 2/10
1100/1100 15s 13ms/step - accuracy: 0.9517 - loss: 0.1637 - val_accuracy: 0.9682 - val_loss: 0.1153
Epoch 3/10
1100/1100 14s 13ms/step - accuracy: 0.9700 - loss: 0.1021 - val_accuracy: 0.9730 - val_loss: 0.0897
Epoch 4/10
1100/1100 13s 12ms/step - accuracy: 0.9774 - loss: 0.0757 - val_accuracy: 0.9754 - val_loss: 0.0835
Epoch 5/10
1100/1100 13s 12ms/step - accuracy: 0.9824 - loss: 0.0603 - val_accuracy: 0.9772 - val_loss: 0.0766
Epoch 6/10
1100/1100 14s 12ms/step - accuracy: 0.9855 - loss: 0.0486 - val_accuracy: 0.9762 - val_loss: 0.0850
Epoch 7/10
1100/1100 14s 13ms/step - accuracy: 0.9889 - loss: 0.0374 - val_accuracy: 0.9776 - val_loss: 0.0774
Epoch 8/10
1100/1100 13s 12ms/step - accuracy: 0.9901 - loss: 0.0318 - val_accuracy: 0.9786 - val_loss: 0.0765
Epoch 9/10
1100/1100 13s 12ms/step - accuracy: 0.9928 - loss: 0.0267 - val_accuracy: 0.9804 - val_loss: 0.0766
Epoch 10/10
1100/1100 14s 12ms/step - accuracy: 0.9944 - loss: 0.0208 - val_accuracy: 0.9792 - val_loss: 0.0801

Epoch 1/10
1100/1100 14s 12ms/step - accuracy: 0.8468 - loss: 0.5308 - val_accuracy: 0.9594 - val_loss: 0.1591
Epoch 2/10
1100/1100 13s 12ms/step - accuracy: 0.9433 - loss: 0.1909 - val_accuracy: 0.9646 - val_loss: 0.1300
Epoch 3/10
1100/1100 13s 12ms/step - accuracy: 0.9658 - loss: 0.1235 - val_accuracy: 0.9726 - val_loss: 0.0973
Epoch 4/10
1100/1100 13s 12ms/step - accuracy: 0.9744 - loss: 0.0909 - val_accuracy: 0.9732 - val_loss: 0.0876
Epoch 5/10
1100/1100 13s 12ms/step - accuracy: 0.9798 - loss: 0.0747 - val_accuracy: 0.9788 - val_loss: 0.0770
Epoch 6/10
1100/1100 13s 12ms/step - accuracy: 0.9832 - loss: 0.0606 - val_accuracy: 0.9766 - val_loss: 0.0801
Epoch 7/10
1100/1100 13s 12ms/step - accuracy: 0.9881 - loss: 0.0460 - val_accuracy: 0.9792 - val_loss: 0.0714
Epoch 8/10
1100/1100 13s 12ms/step - accuracy: 0.9894 - loss: 0.0397 - val_accuracy: 0.9768 - val_loss: 0.0741
Epoch 9/10
1100/1100 13s 12ms/step - accuracy: 0.9923 - loss: 0.0312 - val_accuracy: 0.9796 - val_loss: 0.0709
Epoch 10/10
1100/1100 14s 12ms/step - accuracy: 0.9940 - loss: 0.0257 - val_accuracy: 0.9802 - val_loss: 0.0720

Epoch 1/10
1100/1100 15s 13ms/step - accuracy: 0.8072 - loss: 0.8138 - val_accuracy: 0.9452 - val_loss: 0.2121
Epoch 2/10
1100/1100 15s 14ms/step - accuracy: 0.9241 - loss: 0.2602 - val_accuracy: 0.9570 - val_loss: 0.1663
Epoch 3/10
1100/1100 15s 14ms/step - accuracy: 0.9476 - loss: 0.1863 - val_accuracy: 0.9648 - val_loss: 0.1322
Epoch 4/10
1100/1100 14s 13ms/step - accuracy: 0.9583 - loss: 0.1490 - val_accuracy: 0.9670 - val_loss: 0.1168
Epoch 5/10
1100/1100 14s 13ms/step - accuracy: 0.9658 - loss: 0.1243 - val_accuracy: 0.9696 - val_loss: 0.1047
Epoch 6/10
1100/1100 14s 12ms/step - accuracy: 0.9706 - loss: 0.1065 - val_accuracy: 0.9718 - val_loss: 0.0975
Epoch 7/10
1100/1100 13s 12ms/step - accuracy: 0.9758 - loss: 0.0891 - val_accuracy: 0.9740 - val_loss: 0.0918
Epoch 8/10
1100/1100 13s 12ms/step - accuracy: 0.9779 - loss: 0.0792 - val_accuracy: 0.9758 - val_loss: 0.0885
Epoch 9/10
1100/1100 14s 13ms/step - accuracy: 0.9816 - loss: 0.0681 - val_accuracy: 0.9776 - val_loss: 0.0825
Epoch 10/10
1100/1100 14s 12ms/step - accuracy: 0.9852 - loss: 0.0583 - val_accuracy: 0.9766 - val_loss: 0.0831

Epoch 1/10
1100/1100 16s 14ms/step - accuracy: 0.8483 - loss: 0.5002 - val_accuracy: 0.9650 - val_loss: 0.1189
Epoch 2/10
1100/1100 16s 14ms/step - accuracy: 0.9609 - loss: 0.1262 - val_accuracy: 0.9718 - val_loss: 0.0971
Epoch 3/10
1100/1100 16s 14ms/step - accuracy: 0.9759 - loss: 0.0783 - val_accuracy: 0.9772 - val_loss: 0.0690
Epoch 4/10
1100/1100 16s 14ms/step - accuracy: 0.9810 - loss: 0.0597 - val_accuracy: 0.9788 - val_loss: 0.0752
Epoch 5/10
1100/1100 15s 14ms/step - accuracy: 0.9855 - loss: 0.0468 - val_accuracy: 0.9748 - val_loss: 0.0817
Epoch 6/10
1100/1100 16s 14ms/step - accuracy: 0.9884 - loss: 0.0398 - val_accuracy: 0.9758 - val_loss: 0.0909
Epoch 7/10
1100/1100 15s 14ms/step - accuracy: 0.9898 - loss: 0.0318 - val_accuracy: 0.9724 - val_loss: 0.0998
Epoch 8/10
1100/1100 16s 14ms/step - accuracy: 0.9892 - loss: 0.0305 - val_accuracy: 0.9778 - val_loss: 0.0952
Epoch 9/10
1100/1100 16s 14ms/step - accuracy: 0.9914 - loss: 0.0267 - val_accuracy: 0.9756 - val_loss: 0.0878
Epoch 10/10
1100/1100 16s 15ms/step - accuracy: 0.9935 - loss: 0.0203 - val_accuracy: 0.9770 - val_loss: 0.0974

Epoch 1/10
1100/1100 24s 21ms/step - accuracy: 0.7772 - loss: 0.6657 - val_accuracy: 0.9524 - val_loss: 0.1752
Epoch 2/10
1100/1100 24s 22ms/step - accuracy: 0.9525 - loss: 0.1705 - val_accuracy: 0.9682 - val_loss: 0.1261
Epoch 3/10
1100/1100 22s 20ms/step - accuracy: 0.9675 - loss: 0.1162 - val_accuracy: 0.9750 - val_loss: 0.0945
Epoch 4/10
1100/1100 22s 20ms/step - accuracy: 0.9735 - loss: 0.0929 - val_accuracy: 0.9720 - val_loss: 0.1018
Epoch 5/10
1100/1100 22s 20ms/step - accuracy: 0.9789 - loss: 0.0794 - val_accuracy: 0.9762 - val_loss: 0.0888
Epoch 6/10
1100/1100 22s 20ms/step - accuracy: 0.9806 - loss: 0.0729 - val_accuracy: 0.9760 - val_loss: 0.0961
Epoch 7/10
1100/1100 22s 20ms/step - accuracy: 0.9847 - loss: 0.0578 - val_accuracy: 0.9810 - val_loss: 0.0932
Epoch 8/10
1100/1100 22s 20ms/step - accuracy: 0.9824 - loss: 0.0643 - val_accuracy: 0.9786 - val_loss: 0.0854
Epoch 9/10
1100/1100 22s 20ms/step - accuracy: 0.9864 - loss: 0.0487 - val_accuracy: 0.9764 - val_loss: 0.1054
Epoch 10/10
1100/1100 22s 20ms/step - accuracy: 0.9864 - loss: 0.0493 - val_accuracy: 0.9780 - val_loss: 0.1108

Epoch 1/10
1100/1100 23s 20ms/step - accuracy: 0.1052 - loss: 2.3130 - val_accuracy: 0.1808 - val_loss: 1.9383
Epoch 2/10
1100/1100 22s 20ms/step - accuracy: 0.2002 - loss: 1.9364 - val_accuracy: 0.2168 - val_loss: 1.8587
Epoch 3/10
1100/1100 23s 20ms/step - accuracy: 0.2161 - loss: 1.8392 - val_accuracy: 0.5588 - val_loss: 1.2106
Epoch 4/10
1100/1100 22s 20ms/step - accuracy: 0.5594 - loss: 1.1159 - val_accuracy: 0.8168 - val_loss: 0.7119
Epoch 5/10
1100/1100 22s 20ms/step - accuracy: 0.8359 - loss: 0.6312 - val_accuracy: 0.8994 - val_loss: 0.4360
Epoch 6/10
1100/1100 22s 20ms/step - accuracy: 0.8827 - loss: 0.4854 - val_accuracy: 0.9066 - val_loss: 0.4053
Epoch 7/10
1100/1100 22s 20ms/step - accuracy: 0.9007 - loss: 0.4218 - val_accuracy: 0.9166 - val_loss: 0.3660
Epoch 8/10
1100/1100 22s 20ms/step - accuracy: 0.9075 - loss: 0.3940 - val_accuracy: 0.9204 - val_loss: 0.3552
Epoch 9/10
1100/1100 22s 20ms/step - accuracy: 0.9090 - loss: 0.3922 - val_accuracy: 0.9242 - val_loss: 0.3356
Epoch 10/10
1100/1100 24s 22ms/step - accuracy: 0.9191 - loss: 0.3534 - val_accuracy: 0.9270 - val_loss: 0.3286