ONNX 1.6 compatibility - operator support for all opset 11 ops on CPU, including Sequence ops.
Free dimension override: Added the ability to override free dimensions in the inputs of a model. Free dimensions are tensor dimensions whose sizes aren't statically known at model authoring time and must be provided at runtime. They are most often used for the batch size of a model's inputs, allowing the batch size to be customized at runtime. Overriding a free dimension enables certain optimizations, since the shape is then known a priori.
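As an illustration, the sketch below pins a symbolic batch dimension through the Python API. The add_free_dimension_override_by_name method is the current Python binding for this feature (this release also exposes it through the C API), and "batch" is a hypothetical dimension name that would need to match the model:

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Pin the symbolic "batch" dimension to a fixed size so the full input
# shape is known before execution, enabling shape-dependent optimizations.
# ("batch" is a placeholder; use the dimension name from your model.)
so.add_free_dimension_override_by_name("batch", 1)

sess = ort.InferenceSession("model.onnx", sess_options=so)
```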
Performance improvements to further accelerate model inferencing latency on CPU and GPU. Notable updates include:
Additional CUDA operators added to support Object Detection and BERT models. Note: CUDA operator coverage is still limited and performance will vary significantly depending on the model and operator usage.
Improved parallelism for operators that use GEMM and MatMul
New implementation of 64-bit MatMul on x86_64 CPUs
Added the ability to set the number of threads used for intra- and inter-operator parallelism, allowing optimal configuration for both sequential and concurrent inferencing scenarios (see the configuration sketch after the threading updates below)
Gelu fusion optimizer
Threading updates:
Eigen ThreadPool is now the default (previously there were two thread pool implementations, TaskThreadPool and Eigen ThreadPool)
Ability to disable multithreading by setting the thread pool size to 1 and onnxruntime_USE_OPENMP to OFF.
MLAS now uses the number of thread pool threads plus one as the parallelism level (e.g. on a machine with 4 CPUs, set the thread pool size to 3 so that there is only one thread per CPU)
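As a minimal sketch of configuring both thread pools from Python: the intra_op_num_threads and inter_op_num_threads attribute names are taken from the current SessionOptions binding and may differ slightly in this release.

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Threads used to parallelize work within a single operator (e.g. GEMM).
so.intra_op_num_threads = 4
# Threads used to run independent nodes of the graph concurrently.
so.inter_op_num_threads = 2

sess = ort.InferenceSession("model.onnx", sess_options=so)
```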
Support for CentOS 6 and 7 for Python, C, and C++. Most of the code is now C++11 compliant (previously required C++14). C# .NET Core compatibility coming soon.
Telemetry - component level logging through Trace Logging for Windows builds. Data collection is limited and used strictly to identify areas for improvement. You can read more about the data collected and how to manage these settings here.
Bug fixes to address various issues filed on Github and other channels
API updates
Updates to the C API for clarity of usage. The 1.0 version of the API is now stable and will maintain backwards compatibility. Versioning is supported to accommodate future updates.
The C API is ABI compatible and follows Semantic Versioning. Programs linked against the current version of the ONNX Runtime library will continue to work with subsequent releases without updating any client code or re-linking.
New session option available for serializing optimized ONNX models
Enabled some new capabilities through the Python and C# APIs for feature parity, including registration of execution providers in Python and setting additional run options in C# (both are sketched after this list)
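A minimal sketch of optimized-model serialization and provider registration through the Python API, assuming the current attribute and method names (optimized_model_filepath, set_providers); the model paths are placeholders:

```python
import onnxruntime as ort

so = ort.SessionOptions()
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
# Serialize the graph, as rewritten by the enabled optimizations, to disk.
so.optimized_model_filepath = "model.optimized.onnx"

sess = ort.InferenceSession("model.onnx", sess_options=so)
# Restrict execution to specific registered providers, in priority order.
sess.set_providers(["CPUExecutionProvider"])
```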
MKL-DNN EP updated from 0.18.1 to 1.0.2 for an average of 5-10% (up to 50%) performance improvement on ONNX Model Zoo model latency
nGraph EP updated from 0.18 to 0.26, with support of new operators for quantization and performance improvements on LSTM ops (without peephole) and Pad op
TensorRT EP updated to the latest TensorRT 6.0 libraries
Android DNNLibrary version update
New EP support
[Preview] NUPHAR (Neural-network Unified Preprocessing Heterogeneous ARchitecture): a TVM- and LLVM-based EP offering model acceleration by compiling nodes in subgraphs into optimized functions via JIT
[Preview] DirectML: a high-performance, hardware-accelerated DirectX 12 library for machine learning on Windows, providing GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers
[Preview] ARM Compute Library (ACL) Execution Provider: targets ARM CPUs and GPUs for optimized execution of ONNX operators using ACL's low-level libraries
Build updates
Three new cmake options: onnxruntime_USE_GEMMLOWP, onnxruntime_USE_AUTOML, and onnxruntime_USE_DML
Removed two cmake options: onnxruntime_USE_MLAS and onnxruntime_USE_EIGEN_THREADPOOL. These are now always ON.
The minimum supported gcc version is 4.8.2
Tooling
Availability of the ONNX Go Live tool, which automates the process of shipping ONNX models by combining model conversion, correctness testing, and performance tuning into a single pipeline, delivered as a series of Docker images.
Quantization tool updates (see the sketch after this list):
Selective quantization of specific nodes instead of all possible nodes
Bias quantization for Conv nodes
Node fusion for dynamic quantization
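A sketch of selective quantization, written against the quantization API that ships with current releases (the tool in this release was a standalone script, so names may differ); the listed node names are hypothetical:

```python
from onnxruntime.quantization import quantize_dynamic

# Quantize only the named nodes rather than every eligible node.
quantize_dynamic(
    "model.onnx",
    "model.quant.onnx",
    nodes_to_quantize=["Conv_0", "MatMul_12"],  # hypothetical node names
)
```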
onnxruntime_perf_test usage updates:
New option "-y" for controlling inter_op_num_threads
The max optimization level is now 99, and 3 is now an invalid value. In most cases, this tool should be run with "-o 99" (an example invocation follows)
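For illustration only, a possible invocation combining these flags; the positional model path and result file arguments are assumptions about the tool's usage, and both paths are placeholders:

```
onnxruntime_perf_test -o 99 -y 2 model.onnx perf_result.txt
```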
Other Dependency Updates
Replaced gsl with gsl-lite to be compatible with C++11
Added NVIDIA cub
Added WIL (Windows Implementation Libraries) for the DirectML execution provider
Pybind11 updated from 2.2.4 to 2.4.0 to fix a compatibility issue with Baidu PaddlePaddle and some other Python modules that also depend on Pybind11