Add OneDNN-based MNIST neural network implementation for optimized performance #6
This PR introduces an alternate implementation of the MNIST neural network training problem that leverages Intel's OneDNN (Deep Neural Network Library) for optimized CPU performance.
## Overview
The new implementation provides the same `OptimizationProblem` interface as the existing Candle-based MNIST implementation, but uses Intel's OneDNN library for highly optimized matrix operations and neural network primitives.

## Key Features
### Performance Optimizations

### Implementation Details
- Implements the same `OptimizationProblem` trait, making it a drop-in replacement for benchmarking
- Gated behind an `onednn` feature flag to avoid requiring a OneDNN installation

## Usage Example
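The original usage example did not survive extraction. The self-contained sketch below illustrates the drop-in pattern the bullets describe; note that the `OptimizationProblem` methods and the `MnistOneDnn` type shown here are illustrative stand-ins, not the crate's actual API.

```rust
/// Minimal stand-in for the framework's `OptimizationProblem` trait
/// (method names are illustrative, not the crate's real signatures).
trait OptimizationProblem {
    fn dimension(&self) -> usize;
    fn evaluate(&self, params: &[f64]) -> f64;
}

/// Hypothetical OneDNN-backed MNIST problem. The real type would run a
/// OneDNN forward pass; here a toy quadratic loss stands in for it.
struct MnistOneDnn {
    n_params: usize,
}

impl OptimizationProblem for MnistOneDnn {
    fn dimension(&self) -> usize {
        self.n_params
    }

    fn evaluate(&self, params: &[f64]) -> f64 {
        // Placeholder loss: sum of squares stands in for the network's
        // OneDNN-backed forward pass plus cross-entropy.
        params.iter().map(|p| p * p).sum()
    }
}

/// Generic driver: accepts any `OptimizationProblem`, which is exactly
/// what makes the OneDNN problem a drop-in replacement for the Candle one.
fn evaluate_at_zero<P: OptimizationProblem>(problem: &P) -> f64 {
    problem.evaluate(&vec![0.0; problem.dimension()])
}

fn main() {
    let problem = MnistOneDnn { n_params: 4 };
    println!("loss at zero = {}", evaluate_at_zero(&problem));
}
```

Because benchmarking code only depends on the trait, swapping the Candle-backed problem for the OneDNN-backed one requires no changes to the driver.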
## Benchmarking Support

The implementation includes benchmarking tools for comparing the OneDNN and Candle backends (see `examples/benchmark_comparison.rs`).
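In the spirit of the PR's comparison tool, a backend comparison can be sketched with a small timing harness; the helper below is illustrative and not the actual code in `examples/benchmark_comparison.rs`.

```rust
use std::time::Instant;

/// Time a closure over several iterations and return the mean in ms.
/// (Illustrative harness; the PR's real tool may measure differently.)
fn mean_time_ms<F: FnMut()>(iters: u32, mut f: F) -> f64 {
    let start = Instant::now();
    for _ in 0..iters {
        f();
    }
    start.elapsed().as_secs_f64() * 1000.0 / f64::from(iters)
}

fn main() {
    // Stand-ins for one training step on each backend; a real comparison
    // would invoke the Candle and OneDNN implementations here.
    let candle_step = || {
        let _ = (0..1_000).map(|i| i as f64).sum::<f64>();
    };
    let onednn_step = || {
        let _ = (0..1_000).map(|i| (i as f64).sqrt()).sum::<f64>();
    };

    println!("candle stand-in: {:.4} ms/step", mean_time_ms(100, candle_step));
    println!("onednn stand-in: {:.4} ms/step", mean_time_ms(100, onednn_step));
}
```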
## Installation
OneDNN must be installed separately. The PR includes:
- An automated installation script (`install_onednn.py`) for Ubuntu/Debian systems
- Setup documentation in `docs/onednn_mnist.md`

## Files Added
- `src/benchmarks/mnist_onednn.rs` - Core OneDNN implementation
- `docs/onednn_mnist.md` - Comprehensive documentation and usage guide
- `examples/onednn_mnist.rs` - Basic usage example
- `examples/benchmark_comparison.rs` - Performance comparison tool
- `install_onednn.py` - Automated OneDNN installation script

## Compatibility
- Fully optional: gated behind the `onednn` feature

This implementation enables researchers and practitioners to leverage Intel's highly optimized neural network primitives while maintaining full compatibility with the existing QQN optimization framework.
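Putting the pieces together, getting started might look like the following; the script and example names come from this PR, but exact flags and paths may differ on your system.

```shell
# Install OneDNN via the bundled helper script (Ubuntu/Debian; needs
# Python 3 and typically root privileges for system-wide installation).
python3 install_onednn.py

# Build and run the OneDNN-backed MNIST example with the feature enabled.
cargo run --example onednn_mnist --features onednn

# Compare the OneDNN and Candle backends.
cargo run --example benchmark_comparison --features onednn
```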