Releases: intel/onnxruntime
OpenVINO™ Execution Provider for ONNXRuntime 5.5.1
Description:
OpenVINO™ Execution Provider for ONNXRuntime v5.5.1 release, based on the latest OpenVINO™ 2024.5 release and the ONNXRuntime 1.20.0 release.
For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html
This release supports ONNXRuntime 1.20.0 with the latest OpenVINO™ 2024.5.0 release.
Please refer to the OpenVINO™ Execution Provider for ONNXRuntime build instructions for information on system prerequisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino
Modifications:
Supports OpenVINO 2024.5.0
OpenVINO™ Execution Provider for ONNXRuntime 5.5
Description:
OpenVINO™ Execution Provider for ONNXRuntime v5.5 release, based on the latest OpenVINO™ 2024.4 release and the ONNXRuntime 1.20.0 release.
For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html
This release supports ONNXRuntime 1.20.0 with the latest OpenVINO™ 2024.4 release.
Please refer to the OpenVINO™ Execution Provider for ONNXRuntime build instructions for information on system prerequisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino
Modifications:
Supports OpenVINO 2024.4.0
Supports loading an OV config (see the sketch after this list).
Supports the Remote Tensor feature for direct memory access during inference on NPU.
Memory optimizations.
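A minimal sketch (Python) of the OV config loading feature above, assuming it is exposed through the load_config provider option pointing at a JSON file of OpenVINO runtime properties. The model path, config file name, and device choice are illustrative placeholders, not values from this release.

import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",                         # placeholder model path
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{
        "device_type": "NPU",             # target device (assumption)
        "load_config": "ov_config.json",  # OpenVINO runtime properties as JSON (placeholder name)
    }],
)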
Samples:
https://github.com/microsoft/onnxruntime-inference-examples
Python Package:
https://pypi.org/project/onnxruntime-openvino/
Installation and usage instructions on Windows:
pip install onnxruntime-openvino
# If using the OpenVINO Python package to set up the OpenVINO runtime environment:
pip install openvino==2024.4.0
# Then add these two lines to the application code:
import onnxruntime.tools.add_openvino_win_libs as utils
utils.add_openvino_libs_to_path()
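Once the environment is set up as above, a minimal sketch of running a model with the OpenVINO EP; the model path and device choice are placeholders:

import onnxruntime.tools.add_openvino_win_libs as utils
utils.add_openvino_libs_to_path()

import onnxruntime as ort
print(ort.get_available_providers())  # expect "OpenVINOExecutionProvider" in the list

sess = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{"device_type": "CPU"}],
)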
C# Package:
Download the Microsoft.ML.OnnxRuntime.Managed NuGet package from the link below and use it with the
Microsoft.ML.OnnxRuntime.OpenVino NuGet package attached here.
https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Managed/1.20.0
ONNXRuntime APIs usage:
Please refer to the link below for Python/C++ APIs:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options
OpenVINO™ Execution Provider for ONNXRuntime 5.4
Description:
OpenVINO™ Execution Provider for ONNXRuntime v5.4 release, based on the latest OpenVINO™ 2024.3 release and the ONNXRuntime 1.19.0 release.
For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html
This release supports ONNXRuntime 1.19.0 with the latest OpenVINO™ 2024.3 release.
Please refer to the OpenVINO™ Execution Provider for ONNXRuntime build instructions for information on system prerequisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino
Modifications:
- Supports OpenVINO 2024.3.
- Support for disabling NPU-to-OV-CPU fallback at build time, and for disabling MLAS fallback and OV CPU fallback at run time.
- Supports ep_context as a session option instead of a provider option (see the sketch after this list).
- Added a QDQ optimization feature for the NPU device and QDQ models.
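A minimal sketch of passing ep_context settings through session options, assuming the standard ONNX Runtime EPContext session config keys (ep.context_enable, ep.context_file_path); the file names are placeholders:

import onnxruntime as ort

so = ort.SessionOptions()
so.add_session_config_entry("ep.context_enable", "1")                  # dump an EPContext model
so.add_session_config_entry("ep.context_file_path", "model_ctx.onnx")  # placeholder output path

sess = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    sess_options=so,
    providers=["OpenVINOExecutionProvider"],
)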
Samples:
https://github.com/microsoft/onnxruntime-inference-examples
Python Package:
https://pypi.org/project/onnxruntime-openvino/
Installation and usage instructions on Windows:
pip install onnxruntime-openvino
# If using the OpenVINO Python package to set up the OpenVINO runtime environment:
pip install openvino==2024.3.0
# Then add these two lines to the application code:
import onnxruntime.tools.add_openvino_win_libs as utils
utils.add_openvino_libs_to_path()
C# Package:
Download the Microsoft.ML.OnnxRuntime.Managed NuGet package from the link below and use it with the
Microsoft.ML.OnnxRuntime.OpenVino NuGet package attached here.
https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Managed/1.19.0
ONNXRuntime APIs usage:
Please refer to the link below for Python/C++ APIs:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options
OpenVINO™ Execution Provider for ONNXRuntime 5.3.1
Description:
OpenVINO™ Execution Provider for ONNXRuntime v5.3.1 release, based on the latest OpenVINO™ 2024.3 release.
For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html
Announcements:
OpenVINO™ version upgraded to 2024.3. This release also provides functional bug fixes.
Please refer to the OpenVINO™ Execution Provider for ONNXRuntime build instructions for information on system prerequisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino
Modifications:
- Supports OpenVINO 2024.3.
- Fixed setting precision with the AUTO plugin.
- Ensured fast compile is used for the model path, but not for AUTO:GPU,CPU.
- Updated the fix for setting the cache with the AUTO plugin.
- Device update: GPU.1 is now also accepted at runtime with AUTO.
OpenVINO™ Execution Provider for ONNXRuntime 5.3
Description:
OpenVINO™ Execution Provider for ONNXRuntime v5.3 release, based on the latest OpenVINO™ 2024.1 release and the ONNXRuntime 1.18.0 release.
For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html
Announcements:
OpenVINO™ version upgraded to 2024.1. This provides functional bug fixes and new features over the previous release.
This release supports ONNXRuntime 1.18.0 with the latest OpenVINO™ 2024.1 release.
Please refer to the OpenVINO™ Execution Provider for ONNXRuntime build instructions for information on system prerequisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino
Modifications:
- Supports OpenVINO 2024.1.
- Supports NPU as a device option.
- Separates device and precision: the device is set to CPU, GPU, or NPU, and the inference precision is set as a provider option (see the sketch after this list). The CPU_FP32 and GPU_FP32 options are deprecated.
- Supports importing precompiled blobs into OpenVINO.
- OVEP Windows logging support for NPU: NPU profiling information can be obtained from a debug build of OpenVINO.
- Packages support NPU on Windows.
- Supports priority through a runtime provider option.
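A minimal sketch of the separated device/precision style that replaces the deprecated CPU_FP32/GPU_FP32 options; the model path and option values are illustrative:

import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{
        "device_type": "GPU",  # CPU, GPU, or NPU
        "precision": "FP16",   # inference precision, now its own provider option
    }],
)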
Samples:
https://github.com/microsoft/onnxruntime-inference-examples
Python Package:
https://pypi.org/project/onnxruntime-openvino/
Installation and usage instructions on Windows:
pip install onnxruntime-openvino
# If using the OpenVINO Python package to set up the OpenVINO runtime environment:
pip install openvino==2024.1.0
# Then add these two lines to the application code:
import onnxruntime.tools.add_openvino_win_libs as utils
utils.add_openvino_libs_to_path()
ONNXRuntime APIs usage:
Please refer to the link below for Python/C++ APIs:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options
OpenVINO™ Execution Provider for ONNXRuntime 5.2.1
Description:
OpenVINO™ Execution Provider for ONNXRuntime v5.2.1 release, based on the latest OpenVINO™ 2024.0 release.
For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html
Announcements:
OpenVINO™ version upgraded to 2024.0. This provides NuGet packages aligned with the OpenVINO™ 2024.0 release.
Please refer to the OpenVINO™ Execution Provider for ONNXRuntime build instructions for information on system prerequisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino
OpenVINO™ Execution Provider for ONNXRuntime 5.2
Description:
OpenVINO™ Execution Provider for ONNXRuntime v5.2 release, based on the latest OpenVINO™ 2023.3 release and the ONNXRuntime 1.17.1 release.
For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html
Announcements:
OpenVINO™ version upgraded to 2023.3. This provides functional bug fixes and capability changes from the previous 2022.3.3 release.
This release supports ONNXRuntime 1.17.1 with the latest OpenVINO™ 2023.3 release.
Please refer to the OpenVINO™ Execution Provider for ONNXRuntime build instructions for information on system prerequisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino
Modifications:
- Use the provider option disable_dynamic_shapes to infer only with static inputs (see the sketch after this list). The default behavior is to attempt to compile and infer with symbolic shapes.
- The provider option enable_dynamic_shapes is deprecated and will be removed in the next release.
- Introduced the AppendExecutionProvider_OpenVINO_V2 API and support for OV 2023.3.
- Added support for the OpenVINO 2023.3 official release only.
- Logging in Debug mode now includes the runtime properties set for devices.
- Fixed an issue with using external weights through OpenVINO with the read_model API: microsoft#17499
- The NuGet package only contains ONNXRuntime libs. Please set up the OpenVINO environment when running the .NET application.
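A minimal sketch of the disable_dynamic_shapes provider option described above; the model path is a placeholder:

import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{
        "device_type": "CPU",
        "disable_dynamic_shapes": "True",  # infer only with static input shapes
    }],
)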
Samples:
https://github.com/microsoft/onnxruntime-inference-examples
Python Package:
https://pypi.org/project/onnxruntime-openvino/
Installation and usage instructions on Windows:
pip install onnxruntime-openvino
# If using the OpenVINO Python package to set up the OpenVINO runtime environment:
pip install openvino==2023.3.0
# Then add these two lines to the application code:
import onnxruntime.tools.add_openvino_win_libs as utils
utils.add_openvino_libs_to_path()
ONNXRuntime APIs usage:
Please refer to the link below for Python/C++ APIs:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options
OpenVINO™ Execution Provider for ONNXRuntime 5.1
Description:
OpenVINO™ Execution Provider for ONNXRuntime v5.1 release, based on the latest OpenVINO™ 2023.1 release and the ONNXRuntime 1.16 release.
For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html
Announcements:
OpenVINO™ version upgraded to 2023.1. This provides functional bug fixes and capability changes from the previous 2022.3.1 release.
This release supports ONNXRuntime 1.16 with the latest OpenVINO™ 2023.1 release.
Please refer to the OpenVINO™ Execution Provider for ONNXRuntime build instructions for information on system prerequisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino
New extensible API added for better backward compatibility.
Num streams support added (see the sketch below).
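A minimal sketch of the num_streams provider option added in this release; the model path and stream count are illustrative:

import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{
        "device_type": "CPU",
        "num_streams": "4",  # number of parallel inference streams
    }],
)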
Samples:
https://github.com/microsoft/onnxruntime-inference-examples
Python Package:
https://pypi.org/project/onnxruntime-openvino/
Installation and usage instructions on Windows:
pip install onnxruntime-openvino
pip install openvino
# Then add these two lines to the application code:
import onnxruntime.tools.add_openvino_win_libs as utils
utils.add_openvino_libs_to_path()
ONNXRuntime APIs usage:
Please refer to the link below for Python/C++ APIs:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options
Custom Release: OpenVINO™ Execution Provider for ONNXRuntime 1.15
We are releasing a custom OpenVINO™ Execution Provider for ONNXRuntime 1.15, deprecating the OpenVINO 1.0 API and increasing operator coverage. This release is based on OpenVINO™ 2023.1.
For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html
Announcements:
- OpenVINO™ version upgraded to 2023.1.0. This provides functional bug fixes and capability changes from the previous 2023.0.0 release.
- Improved FIL with a custom OpenVINO API for model loading across CPU and GPU accelerators.
- Added bug fixes for the model caching feature.
- Operator coverage compliant with OV 2023.1.
Please refer to the OpenVINO™ Execution Provider for ONNXRuntime build instructions for information on system prerequisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino
ONNXRuntime APIs usage:
Please refer to the link below for Python/C++ APIs:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options
OpenVINO™ Execution Provider for ONNXRuntime 5.0
Description:
OpenVINO™ Execution Provider for ONNXRuntime v5.0 release, based on the latest OpenVINO™ 2023.0 release and the ONNXRuntime 1.15 release.
For all the latest information, refer to our official documentation:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html
Announcements:
- OpenVINO™ version upgraded to 2023.0.0. This provides functional bug fixes and capability changes from the previous 2022.3.0 release.
- This release supports ONNXRuntime 1.15 with the latest OpenVINO™ 2023.0 release.
- Hassle-free user experience for OVEP Python developers on the Windows platform: a pip install is all that is required on Windows now.
- Complete full-model support for Stable Diffusion with dynamic shapes on CPU/GPU.
- Improved FIL with a custom OpenVINO API for model loading.
- Model caching is now generic across all accelerators (see the sketch after this list). Kernel caching is enabled for partially supported models.
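A minimal sketch of enabling the generic model/kernel cache via the cache_dir provider option; the cache directory and model path are placeholders:

import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{
        "device_type": "GPU",
        "cache_dir": "./ov_cache",  # compiled blobs / kernels are cached here
    }],
)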
Please refer to the OpenVINO™ Execution Provider for ONNXRuntime build instructions for information on system prerequisites as well as instructions to build from source.
https://onnxruntime.ai/docs/build/eps.html#openvino
Samples:
https://github.com/microsoft/onnxruntime-inference-examples
Python Package:
https://pypi.org/project/onnxruntime-openvino/
Installation and usage instructions on Windows:
pip install onnxruntime-openvino
pip install openvino
# Then add these two lines to the application code:
import onnxruntime.tools.add_openvino_win_libs as utils
utils.add_openvino_libs_to_path()
ONNXRuntime APIs usage:
Please refer to the link below for Python/C++ APIs:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options