
Commit e63cda5

Update to ML 2.0

1 parent ce10f64 commit e63cda5

4 files changed: 17 additions & 16 deletions

LICENSE

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 MIT License
 
-Copyright (c) 2021 Andrew DalPino
+Copyright (c) 2022 Andrew DalPino
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

README.md

Lines changed: 5 additions & 5 deletions

@@ -55,7 +55,7 @@ $dataset = new Labeled($samples, $labels);
 We're going to use a transformer [Pipeline](https://docs.rubixml.com/latest/pipeline.html) to shape the dataset into the correct format for our learner. We know that the size of each sample image in the MNIST dataset is 28 x 28 pixels, but just to make sure that future samples are always the correct input size we'll add an [Image Resizer](https://docs.rubixml.com/latest/transformers/image-resizer.html). Then, to convert the image into raw pixel data we'll use the [Image Vectorizer](https://docs.rubixml.com/latest/transformers/image-vectorizer.html) which extracts continuous raw color channel values from the image. Since the sample images are black and white, we only need to use 1 color channel per pixel. At the end of the pipeline we'll center and scale the dataset using the [Z Scale Standardizer](https://docs.rubixml.com/latest/transformers/z-scale-standardizer.html) to help speed up the convergence of the neural network.
 
 ### Instantiating the Learner
-Now, we'll go ahead and instantiate our [Multilayer Perceptron](https://docs.rubixml.com/latest/classifiers/multilayer-perceptron.html) classifier. Let's consider a neural network architecture suited for the MNIST problem consisting of 3 groups of [Dense](https://docs.rubixml.com/latest/neural-network/hidden-layers/dense.html) neuronal layers, followed by a [Leaky ReLU](https://docs.rubixml.com/latest/neural-network/activation-functions/leaky-relu.html) activation layer, and then a mild [Dropout](https://docs.rubixml.com/latest/neural-network/hidden-layers/dropout.html) layer to act as a regularizer. The output layer adds an additional layer of neurons with a [Softmax](https://docs.rubixml.com/latest/neural-network/activation-functions/softmax.html) activation making this particular network architecture 4 layers deep.
+Now, we'll go ahead and instantiate our [Multilayer Perceptron](https://docs.rubixml.com/latest/classifiers/multilayer-perceptron.html) classifier. Let's consider a neural network architecture suited for the MNIST problem consisting of 3 groups of [Dense](https://docs.rubixml.com/latest/neural-network/hidden-layers/dense.html) neuronal layers, followed by a [ReLU](https://docs.rubixml.com/latest/neural-network/activation-functions/relu.html) activation layer, and then a mild [Dropout](https://docs.rubixml.com/latest/neural-network/hidden-layers/dropout.html) layer to act as a regularizer. The output layer adds an additional layer of neurons with a [Softmax](https://docs.rubixml.com/latest/neural-network/activation-functions/softmax.html) activation making this particular network architecture 4 layers deep.
 
 Next, we'll set the batch size to 256. The batch size is the number of samples sent through the network at a time. We'll also specify an optimizer and learning rate which determines the update step of the Gradient Descent algorithm. The [Adam](https://docs.rubixml.com/latest/neural-network/optimizers/adam.html) optimizer uses a combination of [Momentum](https://docs.rubixml.com/latest/neural-network/optimizers/momentum.html) and [RMS Prop](https://docs.rubixml.com/latest/neural-network/optimizers/rms-prop.html) to make its updates and usually converges faster than standard *stochastic* Gradient Descent. It uses a global learning rate to control the magnitude of the step which we'll set to 0.0001 for this example.
 
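For reference, here is how the transformer pipeline described in that paragraph fits together. This is a minimal sketch, not code from the commit: the ImageResizer dimensions are assumed from the 28 x 28 pixel size mentioned above, ImageVectorizer(true) matches the one-channel note and the train.php diff below, and $mlp stands in for the Multilayer Perceptron instantiated in the next step.

use Rubix\ML\Pipeline;
use Rubix\ML\Transformers\ImageResizer;
use Rubix\ML\Transformers\ImageVectorizer;
use Rubix\ML\Transformers\ZScaleStandardizer;

$pipeline = new Pipeline([
    new ImageResizer(28, 28),  // enforce the expected 28 x 28 input size (assumed dimensions)
    new ImageVectorizer(true), // extract a single greyscale channel per pixel
    new ZScaleStandardizer(),  // center and scale to speed up convergence
], $mlp);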
@@ -69,7 +69,7 @@ use Rubix\ML\Classifiers\MultiLayerPerceptron;
 use Rubix\ML\NeuralNet\Layers\Dense;
 use Rubix\ML\NeuralNet\Layers\Dropout;
 use Rubix\ML\NeuralNet\Layers\Activation;
-use Rubix\ML\NeuralNet\ActivationFunctions\LeakyReLU;
+use Rubix\ML\NeuralNet\ActivationFunctions\ReLU;
 use Rubix\ML\NeuralNet\Optimizers\Adam;
 use Rubix\ML\Persisters\Filesystem;
 
@@ -80,13 +80,13 @@ $estimator = new PersistentModel(
         new ZScaleStandardizer(),
     ], new MultiLayerPerceptron([
         new Dense(100),
-        new Activation(new LeakyReLU()),
+        new Activation(new ReLU()),
         new Dropout(0.2),
         new Dense(100),
-        new Activation(new LeakyReLU()),
+        new Activation(new ReLU()),
         new Dropout(0.2),
         new Dense(100),
-        new Activation(new LeakyReLU()),
+        new Activation(new ReLU()),
         new Dropout(0.2),
     ], 256, new Adam(0.0001))),
     new Filesystem('mnist.rbx', true)
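Once instantiated, the PersistentModel wrapper trains like any other estimator and persists itself through the configured Filesystem. A minimal sketch of the two calls, assuming $dataset is the Labeled dataset built earlier:

$estimator->train($dataset); // fit the pipeline transformers and the network

$estimator->save(); // write the trained model to mnist.rbx via the Filesystem persister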

composer.json

Lines changed: 2 additions & 2 deletions

@@ -6,7 +6,7 @@
     "license": "MIT",
     "keywords": [
         "classification", "cross validation", "dataset", "data science", "dropout", "example project",
-        "feed forward", "image recognition", "image classification", "leaky relu", "machine learning",
+        "feed forward", "image recognition", "image classification", "relu", "machine learning",
         "ml", "mnist", "multilayer perceptron", "neural network", "php", "php ml", "relu", "rubix ml",
         "rubixml", "tutorial"
     ],
@@ -20,7 +20,7 @@
     "require": {
         "php": ">=7.4",
         "ext-gd": "*",
-        "rubix/ml": "^1.0"
+        "rubix/ml": "^2.0"
     },
     "scripts": {
         "train": "@php train.php",

train.php

Lines changed: 9 additions & 8 deletions

@@ -13,7 +13,7 @@
 use Rubix\ML\NeuralNet\Layers\Dense;
 use Rubix\ML\NeuralNet\Layers\Dropout;
 use Rubix\ML\NeuralNet\Layers\Activation;
-use Rubix\ML\NeuralNet\ActivationFunctions\LeakyReLU;
+use Rubix\ML\NeuralNet\ActivationFunctions\ReLU;
 use Rubix\ML\NeuralNet\Optimizers\Adam;
 use Rubix\ML\Persisters\Filesystem;
 use Rubix\ML\Extractors\CSV;
@@ -30,7 +30,8 @@
     foreach (glob("training/$label/*.png") as $file) {
         $samples[] = [imagecreatefrompng($file)];
         $labels[] = "#$label";
-    }
+    }
+
 }
 
 $dataset = new Labeled($samples, $labels);
@@ -41,14 +42,14 @@
         new ImageVectorizer(true),
         new ZScaleStandardizer(),
     ], new MultilayerPerceptron([
-        new Dense(100),
-        new Activation(new LeakyReLU()),
+        new Dense(128),
+        new Activation(new ReLU()),
         new Dropout(0.2),
-        new Dense(100),
-        new Activation(new LeakyReLU()),
+        new Dense(128),
+        new Activation(new ReLU()),
         new Dropout(0.2),
-        new Dense(100),
-        new Activation(new LeakyReLU()),
+        new Dense(128),
+        new Activation(new ReLU()),
         new Dropout(0.2),
     ], 256, new Adam(0.0001))),
     new Filesystem('mnist.rbx', true)
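After train.php runs, the model saved to mnist.rbx can be loaded back for inference. A minimal sketch: PersistentModel::load(), Filesystem, and Unlabeled are standard Rubix ML, while the test image path is hypothetical; wrapping each GD image resource in its own array mirrors the training loop above.

use Rubix\ML\PersistentModel;
use Rubix\ML\Persisters\Filesystem;
use Rubix\ML\Datasets\Unlabeled;

// Restore the trained pipeline + Multilayer Perceptron from disk.
$estimator = PersistentModel::load(new Filesystem('mnist.rbx'));

// Build an unlabeled dataset from a raw PNG, same sample shape as training.
$dataset = new Unlabeled([
    [imagecreatefrompng('testing/4/example.png')], // hypothetical path
]);

$predictions = $estimator->predict($dataset);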
