In this lab, you'll explore the classic MNIST dataset of handwritten digits. While not as large as the facial image recognition dataset from the previous lesson, it still provides 64 features per image, making it ripe for dimensionality reduction.
In this lab you will:
- Use PCA to discover the principal components of an image dataset
- Use the principal components of a dataset as features in a machine learning model
- Calculate the time savings and performance gains of layering in PCA as a preprocessing step in machine learning pipelines
Load the `load_digits` dataset from the `datasets` module of scikit-learn.
# Load the dataset
data = None
print(data.data.shape, data.target.shape)
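One way to complete the cell above (a minimal sketch, assuming scikit-learn is installed):

```python
from sklearn.datasets import load_digits

# Load the 8x8 handwritten digit images and their labels
data = load_digits()
print(data.data.shape, data.target.shape)
```

This should print `(1797, 64) (1797,)`: 1,797 images, each flattened into 64 pixel intensities.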
Now that the dataset is loaded, display the first 20 images.
# Display the first 20 images
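A possible sketch using matplotlib, assuming `data` holds the result of `load_digits()` from the previous cell:

```python
import matplotlib.pyplot as plt

# Plot the first 20 digits in a 4x5 grid, labeling each with its target class
fig, axes = plt.subplots(nrows=4, ncols=5, figsize=(10, 8))
for i, ax in enumerate(axes.flat):
    ax.imshow(data.images[i], cmap='gray')
    ax.set_title(data.target[i])
    ax.axis('off')
plt.show()
```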
Now it's time to fit an initial baseline model.
- Split the data into training and test sets. Set `random_state=22`
- Fit a support vector machine to the dataset. Set `gamma='auto'`
- Record the training time
- Print the training and test accuracy of the model
# Split the data
X = None
y = None
X_train, X_test, y_train, y_test = None
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
# Fit a naive model
clf = None
# Training and test accuracy
train_acc = None
test_acc = None
print('Training Accuracy: {}\nTesting Accuracy: {}'.format(train_acc, test_acc))
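One way the cells above might look; the `SVC` settings follow the instructions, while the use of `time.time()` for timing is just one convenient choice:

```python
import time
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Split the data
X = data.data
y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=22)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)

# Fit a naive model, recording how long training takes
start = time.time()
clf = SVC(gamma='auto')
clf.fit(X_train, y_train)
print('Training time: {:.2f} seconds'.format(time.time() - start))

# Training and test accuracy
train_acc = clf.score(X_train, y_train)
test_acc = clf.score(X_test, y_test)
print('Training Accuracy: {}\nTesting Accuracy: {}'.format(train_acc, test_acc))
```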
Refine the initial model by performing a grid search to tune the hyperparameters. The two most important parameters to adjust are `C` and `gamma`. Once again, be sure to record the training time as well as the training and test accuracy.
# Your code here
# ⏰ Your code may take several minutes to run
# Print the best parameters
# Print the training and test accuracy
train_acc = None
test_acc = None
print('Training Accuracy: {}\tTesting Accuracy: {}'.format(train_acc, test_acc))
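A possible grid search sketch. The candidate values for `C` and `gamma` below are illustrative assumptions, not values prescribed by the lab, and the 5-fold cross-validation setting is likewise a choice:

```python
import time
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Candidate hyperparameter values (illustrative, not prescribed by the lab)
param_grid = {'C': [0.1, 1, 10, 100],
              'gamma': [0.0001, 0.001, 0.01, 0.1]}

# Grid search with 5-fold cross-validation, recording the training time
start = time.time()
grid = GridSearchCV(SVC(), param_grid, cv=5)
grid.fit(X_train, y_train)
print('Training time: {:.2f} seconds'.format(time.time() - start))

# Print the best parameters
print(grid.best_params_)

# Print the training and test accuracy of the best estimator
train_acc = grid.best_estimator_.score(X_train, y_train)
test_acc = grid.best_estimator_.score(X_test, y_test)
print('Training Accuracy: {}\tTesting Accuracy: {}'.format(train_acc, test_acc))
```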
Now that you've fit a baseline classifier, it's time to explore the impacts of using PCA as a preprocessing technique. To start, perform PCA on `X_train`. (Be sure to only fit PCA to `X_train`; you don't want to leak any information from the test set.) Also, don't reduce the number of features quite yet. You'll determine the number of features needed to account for 95% of the overall variance momentarily.
# Your code here
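A minimal sketch: fit PCA to the training data only and keep every component for now:

```python
from sklearn.decomposition import PCA

# Fit PCA to the training data only, keeping all 64 components for now
pca = PCA()
pca.fit(X_train)
```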
In order to determine the number of features you wish to reduce the dataset to, it is sensible to plot the cumulative variance accounted for by the first n principal components.
# Your code here
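One way to plot this, assuming the full `pca` fit from the previous step:

```python
import numpy as np
import matplotlib.pyplot as plt

# Cumulative share of the total variance explained by the first n components
cumulative_variance = np.cumsum(pca.explained_variance_ratio_)

plt.plot(range(1, len(cumulative_variance) + 1), cumulative_variance)
plt.axhline(y=0.95, color='red', linestyle='--')
plt.xlabel('Number of principal components')
plt.ylabel('Cumulative explained variance')
plt.show()
```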
Great! Now determine the number of features needed to capture 95% of the dataset's overall variance.
# Your code here
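A sketch building on the `cumulative_variance` array computed above:

```python
import numpy as np

# Smallest number of components whose cumulative variance reaches 95%
n_components_95 = np.argmax(cumulative_variance >= 0.95) + 1
print(n_components_95)
```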
Use this number of components to reproject the dataset into a lower-dimensional space using PCA.
# Your code here
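One possible projection, refitting PCA with the component count found above; the variable names `n_components_95`, `X_pca_train`, and `X_pca_test` are illustrative:

```python
from sklearn.decomposition import PCA

# Refit PCA with the chosen number of components, fit on the training set only
pca = PCA(n_components=n_components_95)
X_pca_train = pca.fit_transform(X_train)
X_pca_test = pca.transform(X_test)
print(X_pca_train.shape, X_pca_test.shape)
```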
Now, refit a classification model to the compressed dataset. Be sure to record the training time, as well as the training and test accuracy.
# Your code here
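A sketch mirroring the baseline model, run on the compressed features from the previous step:

```python
import time
from sklearn.svm import SVC

# Fit the same baseline SVM on the PCA-compressed features
start = time.time()
clf_pca = SVC(gamma='auto')
clf_pca.fit(X_pca_train, y_train)
print('Training time: {:.2f} seconds'.format(time.time() - start))

# Training and test accuracy on the compressed data
train_acc = clf_pca.score(X_pca_train, y_train)
test_acc = clf_pca.score(X_pca_test, y_test)
print('Training Accuracy: {}\nTesting Accuracy: {}'.format(train_acc, test_acc))
```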
Finally, use grid search to find optimal hyperparameters for the classifier on the reduced dataset. Be sure to record the time required to fit the model, the optimal hyperparameters, and the training and test accuracy of the resulting model.
# Your code here
# ⏰ Your code may take several minutes to run
# Print the best parameters
# Print the training and test accuracy
train_acc = None
test_acc = None
print('Training Accuracy: {}\tTesting Accuracy: {}'.format(train_acc, test_acc))
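The same illustrative grid as before, refit on the compressed features:

```python
import time
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# The same illustrative candidate values as before
param_grid = {'C': [0.1, 1, 10, 100],
              'gamma': [0.0001, 0.001, 0.01, 0.1]}

# Grid search on the compressed features, recording the training time
start = time.time()
grid_pca = GridSearchCV(SVC(), param_grid, cv=5)
grid_pca.fit(X_pca_train, y_train)
print('Training time: {:.2f} seconds'.format(time.time() - start))

# Print the best parameters
print(grid_pca.best_params_)

# Print the training and test accuracy of the best estimator
train_acc = grid_pca.best_estimator_.score(X_pca_train, y_train)
test_acc = grid_pca.best_estimator_.score(X_pca_test, y_test)
print('Training Accuracy: {}\tTesting Accuracy: {}'.format(train_acc, test_acc))
```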
Well done! In this lab, you employed PCA to reduce a high-dimensional dataset. In doing so, you observed the potential savings in training time, as well as the effect of dimensionality reduction on the model's performance.