64 commits
9645fe9
Update user id
nimrossum Feb 28, 2024
d7c8844
Solve numpy_entropy
nimrossum Mar 3, 2024
372885d
Add pull.sh script to automate upstream pull
nimrossum Mar 3, 2024
161e5c9
Fix reshape and compute covariance matrix in pca_first.keras.py and p…
nimrossum Mar 4, 2024
bfa91ea
Add .gitignore, pull.ps1, and setup.ps1 files
nimrossum Mar 4, 2024
cba46bc
Update team description
nimrossum Mar 4, 2024
76d549e
Solve pca_first.keras.py
nimrossum Mar 4, 2024
eda8cab
Specify encoding
nimrossum Mar 4, 2024
c84f9a3
Add Lisa's solution
nimrossum Mar 4, 2024
4d12eab
Use matrix multiplication instead of element-wise multiplication
nimrossum Mar 4, 2024
ca7e4bd
Fix test script
nimrossum Mar 4, 2024
3f6bde2
Solve mnist_layers_activations.py
nimrossum Mar 5, 2024
451cb9e
Add team description to all files
nimrossum Mar 5, 2024
558f9f4
Update repo setup
nimrossum Mar 12, 2024
55c07e2
Solve sgd_backpropagation
nimrossum Mar 5, 2024
732d7d5
The average score was 423.7.
nimrossum Mar 12, 2024
7d36248
The average score was 457.23.
nimrossum Mar 12, 2024
ed8a2a0
The average score was 465.86.
nimrossum Mar 12, 2024
be32730
The average score was 490.01.
nimrossum Mar 12, 2024
01f0bde
The average score was 491.41.
nimrossum Mar 12, 2024
b57af98
Add test script
nimrossum Mar 12, 2024
d0ad9b9
The average score was 498.73.
nimrossum Mar 12, 2024
83f390a
Refactor loss calculation in Model class
nimrossum Mar 12, 2024
df4da95
Add .venv/Include to .gitignore
nimrossum Mar 12, 2024
80063fd
Update user id
nimrossum Feb 28, 2024
3efc547
Solve numpy_entropy
nimrossum Mar 3, 2024
db32724
Add pull.sh script to automate upstream pull
nimrossum Mar 3, 2024
abb4320
Fix reshape and compute covariance matrix in pca_first.keras.py and p…
nimrossum Mar 4, 2024
63db7e3
Add .gitignore, pull.ps1, and setup.ps1 files
nimrossum Mar 4, 2024
0aa170f
Update team description
nimrossum Mar 4, 2024
90fe7a3
Specify encoding
nimrossum Mar 4, 2024
4b84b2f
Add Lisa's solution
nimrossum Mar 4, 2024
7955c92
Use matrix multiplication instead of element-wise multiplication
nimrossum Mar 4, 2024
68a5439
Fix test script
nimrossum Mar 4, 2024
6c10a61
Update repo setup
nimrossum Mar 12, 2024
3dcc91e
task2,3
lizawang Mar 16, 2024
2f0852a
my solution so far
lizawang Mar 11, 2024
2543504
update
lizawang Mar 12, 2024
a76ee80
update
lizawang Mar 12, 2024
8f66c58
third commit
lizawang Mar 12, 2024
986032a
final
lizawang Mar 12, 2024
8226764
final
lizawang Mar 12, 2024
7af654e
fixed
lizawang Mar 16, 2024
aaa30ef
Remove unnecessary entries from .gitignore
nimrossum Mar 18, 2024
ed5c3be
Solve numpy_entropy
nimrossum Mar 3, 2024
6659f8e
Fix reshape and compute covariance matrix in pca_first.keras.py and p…
nimrossum Mar 4, 2024
ded51d8
Add .gitignore, pull.ps1, and setup.ps1 files
nimrossum Mar 4, 2024
ae7c7a2
Add Lisa's solution
nimrossum Mar 4, 2024
28b0519
Update .gitignore
nimrossum Mar 18, 2024
329b154
Solve mnist_regularization
nimrossum Mar 18, 2024
93fd87b
Solve mnist_ensemble
nimrossum Mar 21, 2024
6ff7487
Broken uppercase
nimrossum Mar 21, 2024
8b21116
Add missing torch submodule import to cifar10.py
akumm2k Mar 23, 2024
5d2751d
Remove unnecessary annotation.
foxik Mar 23, 2024
9679da6
Solve mnist_cnn.py
nimrossum Mar 21, 2024
19b1dda
Fix dropout getting rounded
nimrossum Mar 24, 2024
50952bc
Add test script
nimrossum Mar 24, 2024
3dafa1c
Fix issue with CB layers
nimrossum Mar 24, 2024
cbf57e6
mnist_cnn.py passes 1-5
nimrossum Mar 24, 2024
a2bd060
Refactor and simplify solution to mnist_cnn.py
nimrossum Mar 27, 2024
3cdadd1
Solve mnist_multiple.py
nimrossum Mar 28, 2024
7f52f3c
Improve test output
nimrossum Mar 28, 2024
ca5827f
Solve torch_dataset
nimrossum Mar 28, 2024
0bb34e1
Solve cifar_competition
nimrossum Apr 1, 2024
4 changes: 4 additions & 0 deletions .gitignore
@@ -0,0 +1,4 @@
**/.venv/
logs/
mnist.npz
*.zip
3 changes: 3 additions & 0 deletions .venv/pyvenv.cfg
@@ -0,0 +1,3 @@
home = C:\Python310
include-system-site-packages = false
version = 3.10.7
3 changes: 3 additions & 0 deletions .vscode/settings.json
@@ -0,0 +1,3 @@
{
    "python.analysis.typeCheckingMode": "basic"
}
39 changes: 39 additions & 0 deletions labs/01/expected.txt
@@ -0,0 +1,39 @@
python3 mnist_layers_activations.py --hidden_layers=0 --activation=none
Epoch 1/10 accuracy: 0.7801 - loss: 0.8405 - val_accuracy: 0.9300 - val_loss: 0.2716
Epoch 5/10 accuracy: 0.9222 - loss: 0.2792 - val_accuracy: 0.9406 - val_loss: 0.2203
Epoch 10/10 accuracy: 0.9304 - loss: 0.2515 - val_accuracy: 0.9432 - val_loss: 0.2159

python3 mnist_layers_activations.py --hidden_layers=1 --activation=none
Epoch 1/10 accuracy: 0.8483 - loss: 0.5230 - val_accuracy: 0.9352 - val_loss: 0.2422
Epoch 5/10 accuracy: 0.9236 - loss: 0.2758 - val_accuracy: 0.9360 - val_loss: 0.2325
Epoch 10/10 accuracy: 0.9298 - loss: 0.2517 - val_accuracy: 0.9354 - val_loss: 0.2439

python3 mnist_layers_activations.py --hidden_layers=1 --activation=relu
Epoch 1/10 accuracy: 0.8503 - loss: 0.5286 - val_accuracy: 0.9604 - val_loss: 0.1432
Epoch 5/10 accuracy: 0.9824 - loss: 0.0613 - val_accuracy: 0.9808 - val_loss: 0.0740
Epoch 10/10 accuracy: 0.9948 - loss: 0.0202 - val_accuracy: 0.9788 - val_loss: 0.0821

python3 mnist_layers_activations.py --hidden_layers=1 --activation=tanh
Epoch 1/10 accuracy: 0.8529 - loss: 0.5183 - val_accuracy: 0.9564 - val_loss: 0.1632
Epoch 5/10 accuracy: 0.9800 - loss: 0.0728 - val_accuracy: 0.9740 - val_loss: 0.0853
Epoch 10/10 accuracy: 0.9948 - loss: 0.0244 - val_accuracy: 0.9782 - val_loss: 0.0772

python3 mnist_layers_activations.py --hidden_layers=1 --activation=sigmoid
Epoch 1/10 accuracy: 0.7851 - loss: 0.8650 - val_accuracy: 0.9414 - val_loss: 0.2196
Epoch 5/10 accuracy: 0.9647 - loss: 0.1270 - val_accuracy: 0.9704 - val_loss: 0.1079
Epoch 10/10 accuracy: 0.9852 - loss: 0.0583 - val_accuracy: 0.9756 - val_loss: 0.0837

python3 mnist_layers_activations.py --hidden_layers=3 --activation=relu
Epoch 1/10 accuracy: 0.8497 - loss: 0.5011 - val_accuracy: 0.9664 - val_loss: 0.1225
Epoch 5/10 accuracy: 0.9862 - loss: 0.0438 - val_accuracy: 0.9734 - val_loss: 0.1026
Epoch 10/10 accuracy: 0.9932 - loss: 0.0202 - val_accuracy: 0.9818 - val_loss: 0.0865

python3 mnist_layers_activations.py --hidden_layers=10 --activation=relu
Epoch 1/10 accuracy: 0.7710 - loss: 0.6793 - val_accuracy: 0.9570 - val_loss: 0.1479
Epoch 5/10 accuracy: 0.9780 - loss: 0.0783 - val_accuracy: 0.9786 - val_loss: 0.0808
Epoch 10/10 accuracy: 0.9869 - loss: 0.0481 - val_accuracy: 0.9724 - val_loss: 0.1163

python3 mnist_layers_activations.py --hidden_layers=10 --activation=sigmoid
Epoch 1/10 accuracy: 0.1072 - loss: 2.3068 - val_accuracy: 0.1784 - val_loss: 2.1247
Epoch 5/10 accuracy: 0.8825 - loss: 0.4776 - val_accuracy: 0.9164 - val_loss: 0.3686
Epoch 10/10 accuracy: 0.9294 - loss: 0.2994 - val_accuracy: 0.9386 - val_loss: 0.2671
24 changes: 24 additions & 0 deletions labs/01/mnist.ps1
@@ -0,0 +1,24 @@
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=0 --activation=none"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=0 --activation=none
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=1 --activation=none"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=1 --activation=none
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=1 --activation=relu"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=1 --activation=relu
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=1 --activation=tanh"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=1 --activation=tanh
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=1 --activation=sigmoid"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=1 --activation=sigmoid
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=3 --activation=relu"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=3 --activation=relu
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=10 --activation=relu"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=10 --activation=relu
# Write-Output ""
# Write-Output "python3 mnist_layers_activations.py --hidden_layers=10 --activation=sigmoid"
..\..\.venv\Scripts\python mnist_layers_activations.py --hidden_layers=10 --activation=sigmoid
# Write-Output ""
15 changes: 14 additions & 1 deletion labs/01/mnist_layers_activations.py
@@ -10,6 +10,11 @@
 
 from mnist import MNIST
 
+# Jonas Glerup Røssum <jglr@itu.dk>
+# 31a0a96a-c590-4486-b194-f72765b2ce25
+# Xiao Wang <xiao.wang@student.uni-tuebingen.de>
+# 91d4d1d7-b800-4765-96b9-df098ac36a66
+
 parser = argparse.ArgumentParser()
 # These arguments will be set appropriately by ReCodEx, even if you change them.
 parser.add_argument("--activation", default="none", choices=["none", "relu", "tanh", "sigmoid"], help="Activation.")
@@ -68,14 +73,22 @@ def main(args: argparse.Namespace) -> dict[str, float]:
     # Create the model
     model = keras.Sequential()
     model.add(keras.Input([MNIST.H, MNIST.W, MNIST.C]))
-    # TODO: Finish the model. Namely:
+    # Finish the model. Namely:
     # - start by adding a `keras.layers.Rescaling(1 / 255)` layer;
     # - then add a `keras.layers.Flatten()` layer;
     # - add `args.hidden_layers` number of fully connected hidden layers
     #   `keras.layers.Dense()` with `args.hidden_layer` neurons, using activation
     #   from `args.activation`, allowing "none", "relu", "tanh", "sigmoid";
     # - finally, add an output fully connected layer with `MNIST.LABELS` units
     #   and `softmax` activation.
+    model.add(keras.layers.Rescaling(1 / 255))
+    model.add(keras.layers.Flatten())
+
+    for _ in range(args.hidden_layers):
+        activation = None if args.activation == "none" else args.activation
+        model.add(keras.layers.Dense(args.hidden_layer, activation=activation))
+
+    model.add(keras.layers.Dense(MNIST.LABELS, activation="softmax"))
 
     model.compile(
         optimizer=keras.optimizers.Adam(),
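Note: for readers skimming the diff, the finished network is Rescaling -> Flatten -> args.hidden_layers Dense layers -> softmax output. A minimal standalone sketch of the same construction (assuming Keras 3 with a backend installed; the MNIST constants 28x28x1 and 10 labels are inlined, and hidden_layers=1, hidden_layer=100 are illustrative stand-ins for the CLI flags, whose defaults are not shown in this diff):

    import keras

    def build_model(hidden_layers: int = 1, hidden_layer: int = 100,
                    activation: str = "relu") -> keras.Sequential:
        model = keras.Sequential()
        model.add(keras.Input([28, 28, 1]))
        model.add(keras.layers.Rescaling(1 / 255))  # uint8 0..255 -> floats 0..1
        model.add(keras.layers.Flatten())           # 28x28x1 -> 784
        for _ in range(hidden_layers):
            # Keras expects activation=None for a linear layer, hence the
            # mapping from the CLI value "none".
            model.add(keras.layers.Dense(
                hidden_layer, activation=None if activation == "none" else activation))
        model.add(keras.layers.Dense(10, activation="softmax"))
        return model

    build_model().summary()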
53 changes: 34 additions & 19 deletions labs/01/numpy_entropy.py
@@ -1,4 +1,10 @@
 #!/usr/bin/env python3
+
+# Jonas Glerup Røssum <jglr@itu.dk>
+# 31a0a96a-c590-4486-b194-f72765b2ce25
+# Xiao Wang <xiao.wang@student.uni-tuebingen.de>
+# 91d4d1d7-b800-4765-96b9-df098ac36a66
+
 import argparse
 
 import numpy as np
@@ -12,42 +18,51 @@
 
 
 def main(args: argparse.Namespace) -> tuple[float, float, float]:
-    # TODO: Load data distribution, each line containing a datapoint -- a string.
-    with open(args.data_path, "r") as data:
+    # Load data distribution, each line containing a datapoint -- a string.
+    data_map = {}
+
+    # Load data distribution, each line containing a datapoint -- a string.
+    with open(args.data_path, "r", encoding="utf-8") as data:
         for line in data:
             line = line.rstrip("\n")
-            # TODO: Process the line, aggregating data with built-in Python
+
+            # Process the line, aggregating data with built-in Python
             # data structures (not NumPy, which is not suitable for incremental
             # addition and string mapping).
+            if line in data_map:
+                data_map[line] += 1
+            else:
+                data_map[line] = 1
 
-    # TODO: Create a NumPy array containing the data distribution. The
+    # Create a NumPy array containing the data distribution. The
     # NumPy array should contain only data, not any mapping. Alternatively,
     # the NumPy array might be created after loading the model distribution.
+    data_dist = np.array(list(data_map.values())) / sum(data_map.values())
+
+    # Load model distribution, each line `string \t probability`.
+    model_map = {}
 
-    # TODO: Load model distribution, each line `string \t probability`.
     with open(args.model_path, "r") as model:
         for line in model:
             line = line.rstrip("\n")
-            # TODO: Process the line, aggregating using Python data structures.
+            key, value = line.split("\t")
+            model_map[key] = float(value)
 
-    # TODO: Create a NumPy array containing the model distribution.
+    # Create a NumPy array containing the model distribution.
+    model_dist = np.array([model_map[key] if key in model_map else np.inf for key in data_map.keys()])
 
-    # TODO: Compute the entropy H(data distribution). You should not use
-    # manual for/while cycles, but instead use the fact that most NumPy methods
-    # operate on all elements (for example `*` is vector element-wise multiplication).
-    entropy = ...
+    # Compute the entropy H(data distribution).
+    entropy = -np.sum(data_dist * np.log(data_dist))
 
-    # TODO: Compute cross-entropy H(data distribution, model distribution).
-    # When some data distribution elements are missing in the model distribution,
-    # return `np.inf`.
-    crossentropy = ...
+    # Compute cross-entropy H(data distribution, model distribution).
+    crossentropy = -np.sum(data_dist * np.log(model_dist))
 
-    # TODO: Compute KL-divergence D_KL(data distribution, model_distribution),
-    # again using `np.inf` when needed.
-    kl_divergence = ...
+    # Compute KL-divergence D_KL(data distribution, model_distribution).
+    kl_divergence = crossentropy - entropy
+    # kl_divergence = np.where(np.isinf(kl_divergence), np.inf, kl_divergence)
 
     # Return the computed values for ReCodEx to validate.
-    return entropy, crossentropy, kl_divergence
+    return entropy, crossentropy if np.isfinite(crossentropy) else np.inf, kl_divergence if np.isfinite(kl_divergence) else np.inf


if __name__ == "__main__":
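Note: the three quantities computed above are the entropy H(p) = -sum_x p(x) log p(x), the cross-entropy H(p, q) = -sum_x p(x) log q(x), and the KL-divergence D_KL(p || q) = H(p, q) - H(p). A tiny worked check of these identities (the two distributions here are invented for illustration):

    import numpy as np

    data_dist = np.array([0.5, 0.25, 0.25])   # empirical distribution p
    model_dist = np.array([0.4, 0.4, 0.2])    # model distribution q

    entropy = -np.sum(data_dist * np.log(data_dist))        # H(p)    ~ 1.040
    crossentropy = -np.sum(data_dist * np.log(model_dist))  # H(p, q) ~ 1.090
    kl_divergence = crossentropy - entropy                  # D_KL    ~ 0.050, always >= 0

    # A datapoint missing from the model distribution drives q(x) to zero, so
    # both H(p, q) and D_KL diverge; the solution signals this with np.inf.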
167 changes: 167 additions & 0 deletions labs/01/output.txt
@@ -0,0 +1,167 @@
Epoch 1/10
1100/1100 14s 12ms/step - accuracy: 0.7761 - loss: 0.8442 - val_accuracy: 0.9298 - val_loss: 0.2730
Epoch 2/10
1100/1100 12s 11ms/step - accuracy: 0.9057 - loss: 0.3428 - val_accuracy: 0.9336 - val_loss: 0.2418
Epoch 3/10
1100/1100 11s 10ms/step - accuracy: 0.9177 - loss: 0.2945 - val_accuracy: 0.9366 - val_loss: 0.2284
Epoch 4/10
1100/1100 12s 10ms/step - accuracy: 0.9193 - loss: 0.2839 - val_accuracy: 0.9384 - val_loss: 0.2267
Epoch 5/10
1100/1100 11s 10ms/step - accuracy: 0.9228 - loss: 0.2790 - val_accuracy: 0.9392 - val_loss: 0.2208
Epoch 6/10
1100/1100 12s 11ms/step - accuracy: 0.9244 - loss: 0.2713 - val_accuracy: 0.9440 - val_loss: 0.2162
Epoch 7/10
1100/1100 13s 12ms/step - accuracy: 0.9252 - loss: 0.2662 - val_accuracy: 0.9398 - val_loss: 0.2178
Epoch 8/10
1100/1100 14s 12ms/step - accuracy: 0.9269 - loss: 0.2626 - val_accuracy: 0.9398 - val_loss: 0.2169
Epoch 9/10
1100/1100 13s 12ms/step - accuracy: 0.9286 - loss: 0.2612 - val_accuracy: 0.9458 - val_loss: 0.2128
Epoch 10/10
1100/1100 13s 12ms/step - accuracy: 0.9307 - loss: 0.2515 - val_accuracy: 0.9438 - val_loss: 0.2161

Epoch 1/10
1100/1100 15s 13ms/step - accuracy: 0.8422 - loss: 0.5383 - val_accuracy: 0.9346 - val_loss: 0.2400
Epoch 2/10
1100/1100 18s 17ms/step - accuracy: 0.9120 - loss: 0.3102 - val_accuracy: 0.9364 - val_loss: 0.2372
Epoch 3/10
1100/1100 16s 15ms/step - accuracy: 0.9233 - loss: 0.2774 - val_accuracy: 0.9352 - val_loss: 0.2342
Epoch 4/10
1100/1100 16s 14ms/step - accuracy: 0.9225 - loss: 0.2736 - val_accuracy: 0.9366 - val_loss: 0.2336
Epoch 5/10
1100/1100 15s 13ms/step - accuracy: 0.9233 - loss: 0.2760 - val_accuracy: 0.9344 - val_loss: 0.2331
Epoch 6/10
1100/1100 22s 20ms/step - accuracy: 0.9251 - loss: 0.2683 - val_accuracy: 0.9382 - val_loss: 0.2247
Epoch 7/10
1100/1100 15s 14ms/step - accuracy: 0.9261 - loss: 0.2658 - val_accuracy: 0.9356 - val_loss: 0.2367
Epoch 8/10
1100/1100 15s 14ms/step - accuracy: 0.9256 - loss: 0.2635 - val_accuracy: 0.9364 - val_loss: 0.2308
Epoch 9/10
1100/1100 15s 13ms/step - accuracy: 0.9253 - loss: 0.2625 - val_accuracy: 0.9386 - val_loss: 0.2277
Epoch 10/10
1100/1100 15s 13ms/step - accuracy: 0.9301 - loss: 0.2515 - val_accuracy: 0.9358 - val_loss: 0.2441

Epoch 1/10
1100/1100 16s 13ms/step - accuracy: 0.8499 - loss: 0.5317 - val_accuracy: 0.9618 - val_loss: 0.1400
Epoch 2/10
1100/1100 15s 13ms/step - accuracy: 0.9517 - loss: 0.1637 - val_accuracy: 0.9682 - val_loss: 0.1153
Epoch 3/10
1100/1100 14s 13ms/step - accuracy: 0.9700 - loss: 0.1021 - val_accuracy: 0.9730 - val_loss: 0.0897
Epoch 4/10
1100/1100 13s 12ms/step - accuracy: 0.9774 - loss: 0.0757 - val_accuracy: 0.9754 - val_loss: 0.0835
Epoch 5/10
1100/1100 13s 12ms/step - accuracy: 0.9824 - loss: 0.0603 - val_accuracy: 0.9772 - val_loss: 0.0766
Epoch 6/10
1100/1100 14s 12ms/step - accuracy: 0.9855 - loss: 0.0486 - val_accuracy: 0.9762 - val_loss: 0.0850
Epoch 7/10
1100/1100 14s 13ms/step - accuracy: 0.9889 - loss: 0.0374 - val_accuracy: 0.9776 - val_loss: 0.0774
Epoch 8/10
1100/1100 13s 12ms/step - accuracy: 0.9901 - loss: 0.0318 - val_accuracy: 0.9786 - val_loss: 0.0765
Epoch 9/10
1100/1100 13s 12ms/step - accuracy: 0.9928 - loss: 0.0267 - val_accuracy: 0.9804 - val_loss: 0.0766
Epoch 10/10
1100/1100 14s 12ms/step - accuracy: 0.9944 - loss: 0.0208 - val_accuracy: 0.9792 - val_loss: 0.0801

Epoch 1/10
1100/1100 14s 12ms/step - accuracy: 0.8468 - loss: 0.5308 - val_accuracy: 0.9594 - val_loss: 0.1591
Epoch 2/10
1100/1100 13s 12ms/step - accuracy: 0.9433 - loss: 0.1909 - val_accuracy: 0.9646 - val_loss: 0.1300
Epoch 3/10
1100/1100 13s 12ms/step - accuracy: 0.9658 - loss: 0.1235 - val_accuracy: 0.9726 - val_loss: 0.0973
Epoch 4/10
1100/1100 13s 12ms/step - accuracy: 0.9744 - loss: 0.0909 - val_accuracy: 0.9732 - val_loss: 0.0876
Epoch 5/10
1100/1100 13s 12ms/step - accuracy: 0.9798 - loss: 0.0747 - val_accuracy: 0.9788 - val_loss: 0.0770
Epoch 6/10
1100/1100 13s 12ms/step - accuracy: 0.9832 - loss: 0.0606 - val_accuracy: 0.9766 - val_loss: 0.0801
Epoch 7/10
1100/1100 13s 12ms/step - accuracy: 0.9881 - loss: 0.0460 - val_accuracy: 0.9792 - val_loss: 0.0714
Epoch 8/10
1100/1100 13s 12ms/step - accuracy: 0.9894 - loss: 0.0397 - val_accuracy: 0.9768 - val_loss: 0.0741
Epoch 9/10
1100/1100 13s 12ms/step - accuracy: 0.9923 - loss: 0.0312 - val_accuracy: 0.9796 - val_loss: 0.0709
Epoch 10/10
1100/1100 14s 12ms/step - accuracy: 0.9940 - loss: 0.0257 - val_accuracy: 0.9802 - val_loss: 0.0720

Epoch 1/10
1100/1100 15s 13ms/step - accuracy: 0.8072 - loss: 0.8138 - val_accuracy: 0.9452 - val_loss: 0.2121
Epoch 2/10
1100/1100 15s 14ms/step - accuracy: 0.9241 - loss: 0.2602 - val_accuracy: 0.9570 - val_loss: 0.1663
Epoch 3/10
1100/1100 15s 14ms/step - accuracy: 0.9476 - loss: 0.1863 - val_accuracy: 0.9648 - val_loss: 0.1322
Epoch 4/10
1100/1100 14s 13ms/step - accuracy: 0.9583 - loss: 0.1490 - val_accuracy: 0.9670 - val_loss: 0.1168
Epoch 5/10
1100/1100 14s 13ms/step - accuracy: 0.9658 - loss: 0.1243 - val_accuracy: 0.9696 - val_loss: 0.1047
Epoch 6/10
1100/1100 14s 12ms/step - accuracy: 0.9706 - loss: 0.1065 - val_accuracy: 0.9718 - val_loss: 0.0975
Epoch 7/10
1100/1100 13s 12ms/step - accuracy: 0.9758 - loss: 0.0891 - val_accuracy: 0.9740 - val_loss: 0.0918
Epoch 8/10
1100/1100 13s 12ms/step - accuracy: 0.9779 - loss: 0.0792 - val_accuracy: 0.9758 - val_loss: 0.0885
Epoch 9/10
1100/1100 14s 13ms/step - accuracy: 0.9816 - loss: 0.0681 - val_accuracy: 0.9776 - val_loss: 0.0825
Epoch 10/10
1100/1100 14s 12ms/step - accuracy: 0.9852 - loss: 0.0583 - val_accuracy: 0.9766 - val_loss: 0.0831

Epoch 1/10
1100/1100 16s 14ms/step - accuracy: 0.8483 - loss: 0.5002 - val_accuracy: 0.9650 - val_loss: 0.1189
Epoch 2/10
1100/1100 16s 14ms/step - accuracy: 0.9609 - loss: 0.1262 - val_accuracy: 0.9718 - val_loss: 0.0971
Epoch 3/10
1100/1100 16s 14ms/step - accuracy: 0.9759 - loss: 0.0783 - val_accuracy: 0.9772 - val_loss: 0.0690
Epoch 4/10
1100/1100 16s 14ms/step - accuracy: 0.9810 - loss: 0.0597 - val_accuracy: 0.9788 - val_loss: 0.0752
Epoch 5/10
1100/1100 15s 14ms/step - accuracy: 0.9855 - loss: 0.0468 - val_accuracy: 0.9748 - val_loss: 0.0817
Epoch 6/10
1100/1100 16s 14ms/step - accuracy: 0.9884 - loss: 0.0398 - val_accuracy: 0.9758 - val_loss: 0.0909
Epoch 7/10
1100/1100 15s 14ms/step - accuracy: 0.9898 - loss: 0.0318 - val_accuracy: 0.9724 - val_loss: 0.0998
Epoch 8/10
1100/1100 16s 14ms/step - accuracy: 0.9892 - loss: 0.0305 - val_accuracy: 0.9778 - val_loss: 0.0952
Epoch 9/10
1100/1100 16s 14ms/step - accuracy: 0.9914 - loss: 0.0267 - val_accuracy: 0.9756 - val_loss: 0.0878
Epoch 10/10
1100/1100 16s 15ms/step - accuracy: 0.9935 - loss: 0.0203 - val_accuracy: 0.9770 - val_loss: 0.0974

Epoch 1/10
1100/1100 24s 21ms/step - accuracy: 0.7772 - loss: 0.6657 - val_accuracy: 0.9524 - val_loss: 0.1752
Epoch 2/10
1100/1100 24s 22ms/step - accuracy: 0.9525 - loss: 0.1705 - val_accuracy: 0.9682 - val_loss: 0.1261
Epoch 3/10
1100/1100 22s 20ms/step - accuracy: 0.9675 - loss: 0.1162 - val_accuracy: 0.9750 - val_loss: 0.0945
Epoch 4/10
1100/1100 22s 20ms/step - accuracy: 0.9735 - loss: 0.0929 - val_accuracy: 0.9720 - val_loss: 0.1018
Epoch 5/10
1100/1100 22s 20ms/step - accuracy: 0.9789 - loss: 0.0794 - val_accuracy: 0.9762 - val_loss: 0.0888
Epoch 6/10
1100/1100 22s 20ms/step - accuracy: 0.9806 - loss: 0.0729 - val_accuracy: 0.9760 - val_loss: 0.0961
Epoch 7/10
1100/1100 22s 20ms/step - accuracy: 0.9847 - loss: 0.0578 - val_accuracy: 0.9810 - val_loss: 0.0932
Epoch 8/10
1100/1100 22s 20ms/step - accuracy: 0.9824 - loss: 0.0643 - val_accuracy: 0.9786 - val_loss: 0.0854
Epoch 9/10
1100/1100 22s 20ms/step - accuracy: 0.9864 - loss: 0.0487 - val_accuracy: 0.9764 - val_loss: 0.1054
Epoch 10/10
1100/1100 22s 20ms/step - accuracy: 0.9864 - loss: 0.0493 - val_accuracy: 0.9780 - val_loss: 0.1108

Epoch 1/10
1100/1100 23s 20ms/step - accuracy: 0.1052 - loss: 2.3130 - val_accuracy: 0.1808 - val_loss: 1.9383
Epoch 2/10
1100/1100 22s 20ms/step - accuracy: 0.2002 - loss: 1.9364 - val_accuracy: 0.2168 - val_loss: 1.8587
Epoch 3/10
1100/1100 23s 20ms/step - accuracy: 0.2161 - loss: 1.8392 - val_accuracy: 0.5588 - val_loss: 1.2106
Epoch 4/10
1100/1100 22s 20ms/step - accuracy: 0.5594 - loss: 1.1159 - val_accuracy: 0.8168 - val_loss: 0.7119
Epoch 5/10
1100/1100 22s 20ms/step - accuracy: 0.8359 - loss: 0.6312 - val_accuracy: 0.8994 - val_loss: 0.4360
Epoch 6/10
1100/1100 22s 20ms/step - accuracy: 0.8827 - loss: 0.4854 - val_accuracy: 0.9066 - val_loss: 0.4053
Epoch 7/10
1100/1100 22s 20ms/step - accuracy: 0.9007 - loss: 0.4218 - val_accuracy: 0.9166 - val_loss: 0.3660
Epoch 8/10
1100/1100 22s 20ms/step - accuracy: 0.9075 - loss: 0.3940 - val_accuracy: 0.9204 - val_loss: 0.3552
Epoch 9/10
1100/1100 22s 20ms/step - accuracy: 0.9090 - loss: 0.3922 - val_accuracy: 0.9242 - val_loss: 0.3356
Epoch 10/10
1100/1100 24s 22ms/step - accuracy: 0.9191 - loss: 0.3534 - val_accuracy: 0.9270 - val_loss: 0.3286