Commit 8322d25

Merge branch 'master' into github_actions/local_caches

mryzhov committed Mar 5, 2024
2 parents 1a7868c + b6b4bda

Showing 91 changed files with 2,465 additions and 1,368 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/build_doc.yml

@@ -21,7 +21,7 @@ jobs:
          lfs: 'true'

      - name: Install apt-get dependencies
-        uses: awalsh128/cache-apt-pkgs-action@v1.4.1
+        uses: awalsh128/cache-apt-pkgs-action@v1.4.2
        with:
          packages: graphviz texlive liblua5.2-0 libclang1-9 libclang-cpp9
          version: 3.0
2 changes: 1 addition & 1 deletion .github/workflows/code_snippets.yml

@@ -30,7 +30,7 @@ jobs:
          submodules: 'true'

      - name: Install OpenCL
-        uses: awalsh128/cache-apt-pkgs-action@v1.4.1
+        uses: awalsh128/cache-apt-pkgs-action@v1.4.2
        if: runner.os == 'Linux'
        with:
          packages: ocl-icd-opencl-dev opencl-headers
16 changes: 14 additions & 2 deletions .github/workflows/job_pytorch_models_tests.yml

@@ -106,6 +106,8 @@ jobs:

      - name: Install OpenVINO Python wheels
        run: |
+          # To enable pytest parallel features
+          python3 -m pip install pytest-xdist[psutil]
          python3 -m pip install ${INSTALL_DIR}/tools/openvino-*
          python3 -m pip install ${INSTALL_DIR}/openvino_tokenizers-*
@@ -118,10 +120,20 @@
        env:
          CPLUS_INCLUDE_PATH: ${{ env.Python_ROOT_DIR }}/include/python${{ env.PYTHON_VERSION }}

-      - name: PyTorch Models Tests
+      - name: PyTorch Models Tests Timm and Torchvision
        run: |
          export PYTHONPATH=${MODEL_HUB_TESTS_INSTALL_DIR}:$PYTHONPATH
-          python3 -m pytest ${MODEL_HUB_TESTS_INSTALL_DIR}/pytorch -m ${TYPE} --html=${INSTALL_TEST_DIR}/TEST-torch_model_tests.html --self-contained-html -v
+          python3 -m pytest ${MODEL_HUB_TESTS_INSTALL_DIR}/pytorch/ -m ${TYPE} --html=${INSTALL_TEST_DIR}/TEST-torch_model_timm_tv_tests.html --self-contained-html -v -n 4 -k "TestTimmConvertModel or TestTorchHubConvertModel"
        env:
          TYPE: ${{ inputs.event == 'schedule' && 'nightly' || 'precommit'}}
          TEST_DEVICE: CPU
+          OP_REPORT_FILE: ${{ env.INSTALL_TEST_DIR }}/TEST-torch_unsupported_ops.log
+
+      - name: PyTorch Models Tests Not Timm or Torchvision
+        if: always()
+        run: |
+          export PYTHONPATH=${MODEL_HUB_TESTS_INSTALL_DIR}:$PYTHONPATH
+          python3 -m pytest ${MODEL_HUB_TESTS_INSTALL_DIR}/pytorch -m ${TYPE} --html=${INSTALL_TEST_DIR}/TEST-torch_model_tests.html --self-contained-html -v -k "not (TestTimmConvertModel or TestTorchHubConvertModel)"
+        env:
+          TYPE: ${{ inputs.event == 'schedule' && 'nightly' || 'precommit'}}
+          TEST_DEVICE: CPU
21 changes: 13 additions & 8 deletions .github/workflows/linux.yml

@@ -318,7 +318,7 @@ jobs:

  Conformance:
    needs: [ Build, Smart_CI ]
-    timeout-minutes: ${{ matrix.TEST_TYPE == 'API' && 5 || 30 }}
+    timeout-minutes: ${{ matrix.TEST_TYPE == 'API' && 5 || 20 }}
    defaults:
      run:
        shell: bash
@@ -511,18 +511,23 @@ jobs:
      runner: 'ubuntu-20.04-8-cores'
      model_scope: 'precommit'

-  TensorFlow_Models_Tests_Nightly:
-    name: TensorFlow Models tests
+  TensorFlow_Models_Tests_Nightly_TF_HUB:
+    name: TensorFlow TF Hub Models tests
    if: ${{ github.event_name == 'schedule' }}
    needs: [ Build, Smart_CI, Openvino_tokenizers ]
+    uses: ./.github/workflows/job_tensorflow_models_tests.yml
+    with:
+      runner: 'ubuntu-20.04-16-cores'
+      model_scope: 'nightly_tf_hub'
+
+  TensorFlow_Models_Tests_Nightly_HF:
+    name: TensorFlow Hugging Face Models tests
+    if: ${{ github.event_name == 'schedule' }}
+    needs: [ Build, Smart_CI, Openvino_tokenizers ]
-    strategy:
-      max-parallel: 2
-      matrix:
-        MODEL_SCOPE: ['nightly_hf', 'nightly_tf_hub']
    uses: ./.github/workflows/job_tensorflow_models_tests.yml
    with:
      runner: 'ubuntu-20.04-16-cores'
-      model_scope: ${{ matrix.MODEL_SCOPE }}
+      model_scope: 'nightly_hf'

  # TODO: Switch back to self-hosted runners
  # container:
@@ -5,8 +5,8 @@ Install OpenVINO™ Runtime on Linux


.. meta::
-   :description: Learn how to install OpenVINO™ Runtime on Linux operating system.
-                 You can use an archive, a PyPi package, APT, YUM, Conda Forge,
+   :description: Learn how to install OpenVINO™ Runtime on Linux operating system.
+                 You can use an archive, a PyPi package, npm package, APT, YUM, Conda Forge,
                 Homebrew or a Docker image.


@@ -23,6 +23,7 @@ Install OpenVINO™ Runtime on Linux
   Use Homebrew <openvino_docs_install_guides_installing_openvino_brew>
   Use Conan <openvino_docs_install_guides_installing_openvino_conan>
   Use Docker <openvino_docs_install_guides_installing_openvino_docker>
+   Use npm <openvino_docs_install_guides_installing_openvino_npm>


If you want to install OpenVINO™ Runtime on Linux, you have the following options:
@@ -36,8 +37,5 @@ If you want to install OpenVINO™ Runtime on Linux, you have the following opti
* :doc:`Install OpenVINO using Homebrew <openvino_docs_install_guides_installing_openvino_brew>`
* :doc:`Install OpenVINO using Docker <openvino_docs_install_guides_installing_openvino_docker>`
* :doc:`Install OpenVINO using Conan Package Manager <openvino_docs_install_guides_installing_openvino_conan>`
-
-
-
-
+* :doc:`Install OpenVINO using npm <openvino_docs_install_guides_installing_openvino_npm>`

@@ -5,8 +5,8 @@ Install OpenVINO™ Runtime for macOS


.. meta::
-   :description: Learn how to install OpenVINO™ Runtime on macOS operating
-                 system. You can use an archive, a PyPi package, Conda Forge
+   :description: Learn how to install OpenVINO™ Runtime on macOS operating
+                 system. You can use an archive, a PyPi package, npm package, Conda Forge
                 or Homebrew.


@@ -20,7 +20,7 @@ Install OpenVINO™ Runtime for macOS
   Use Conda Forge <openvino_docs_install_guides_installing_openvino_conda>
   Use vcpkg <openvino_docs_install_guides_installing_openvino_vcpkg>
   Use Conan <openvino_docs_install_guides_installing_openvino_conan>
-
+   Use npm <openvino_docs_install_guides_installing_openvino_npm>

If you want to install OpenVINO™ Runtime on macOS, you have the following options:

@@ -31,6 +31,5 @@ If you want to install OpenVINO™ Runtime on macOS, you have the following opti
* :doc:`Install OpenVINO using Homebrew <openvino_docs_install_guides_installing_openvino_brew>`
* :doc:`Install OpenVINO using vcpkg <openvino_docs_install_guides_installing_openvino_vcpkg>`
* :doc:`Install OpenVINO using Conan Package Manager <openvino_docs_install_guides_installing_openvino_conan>`
-
-
+* :doc:`Install OpenVINO using npm <openvino_docs_install_guides_installing_openvino_npm>`

@@ -0,0 +1,61 @@
.. {#openvino_docs_install_guides_installing_openvino_npm}

Install Intel® Distribution of OpenVINO™ Toolkit from npm Registry
==================================================================

.. meta::
   :description: Learn how to install OpenVINO™ Runtime on Windows, Linux, and
                 macOS operating systems, using the npm registry.


.. note::

   Note that the npm distribution:

   * offers the JavaScript API only
   * supports all major operating systems: Windows, Linux, and macOS
     (x86_64 and arm64 architectures)
   * supports only CPU inference on macOS

.. tab-set::

   .. tab-item:: System Requirements
      :sync: system-requirements

      - Windows, Linux, macOS
      - x86, ARM (Windows ARM not supported)

   .. tab-item:: Software Requirements
      :sync: software-requirements

      `Node.js version 20.5.1 and higher <https://nodejs.org/en/download/>`__


Installing OpenVINO Node.js
###########################

1. Make sure that you have installed `Node.js and npm <https://nodejs.org/en/download>`__
   on your system.
2. Navigate to your project directory and run the following command in the terminal:

   .. code-block:: sh

      npm install openvino-node

.. note::

   The *openvino-node* npm package runs in a Node.js environment only and provides
   a subset of the :doc:`OpenVINO Runtime C++ API <../../api/c_cpp_api/group__ov__cpp__api>`.

What's Next?
####################

Now that you've installed the OpenVINO npm package, you're ready to run your own machine
learning applications! Explore the :doc:`OpenVINO Node.js API <../../api/nodejs_api/nodejs_api>`
to learn more about how to integrate a model in Node.js applications.
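
A minimal sketch of such an application is shown below. It assumes an OpenVINO IR
file ``model.xml`` (with its ``model.bin``) in the working directory; the class and
method names mirror the C++ API (``Core``, ``readModel``, ``compileModel``), but
check the Node.js API reference for the exact signatures.

.. code-block:: js

   // Minimal sketch: load a model and compile it for CPU.
   // Assumes model.xml and model.bin in the working directory.
   const { addon: ov } = require('openvino-node');

   async function main() {
     const core = new ov.Core();
     const model = await core.readModel('model.xml');
     const compiledModel = await core.compileModel(model, 'CPU');
     const inferRequest = compiledModel.createInferRequest();
     // Fill an input tensor with your data, then run inference, e.g.:
     // const results = await inferRequest.inferAsync([inputTensor]);
   }

   main();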

Additional Resources
####################

- Intel® Distribution of OpenVINO™ toolkit home page: https://software.intel.com/en-us/openvino-toolkit
- For IoT Libraries & Code Samples, see `Intel® IoT Developer Kit <https://github.com/intel-iot-devkit>`__.
@@ -5,8 +5,8 @@ Install OpenVINO™ Runtime on Windows


.. meta::
-   :description: Learn how to install OpenVINO™ Runtime on Windows operating
-                 system. You can use an archive, a PyPi package, Conda Forge,
+   :description: Learn how to install OpenVINO™ Runtime on Windows operating
+                 system. You can use an archive, a PyPi package, npm package, Conda Forge,
                 or a Docker image.


@@ -20,6 +20,7 @@ Install OpenVINO™ Runtime on Windows
   Use vcpkg <openvino_docs_install_guides_installing_openvino_vcpkg>
   Use Docker <openvino_docs_install_guides_installing_openvino_docker>
   Use Conan <openvino_docs_install_guides_installing_openvino_conan>
+   Use npm <openvino_docs_install_guides_installing_openvino_npm>



@@ -31,5 +32,5 @@ If you want to install OpenVINO™ Runtime on Windows, you have the following op
* :doc:`Install OpenVINO using vcpkg <openvino_docs_install_guides_installing_openvino_vcpkg>`
* :doc:`Install OpenVINO using Docker <openvino_docs_install_guides_installing_openvino_docker>`
* :doc:`Install OpenVINO using Conan Package Manager <openvino_docs_install_guides_installing_openvino_conan>`
-
+* :doc:`Install OpenVINO using npm <openvino_docs_install_guides_installing_openvino_npm>`

@@ -14,7 +14,7 @@ This is the advanced quantization flow that allows to apply 8-bit quantization t
* Since accuracy validation is run several times during the quantization process, quantization with accuracy control can take more time than the :doc:`Basic 8-bit quantization <basic_quantization_flow>` flow.
* The resulting model can provide a smaller performance improvement than the :doc:`Basic 8-bit quantization <basic_quantization_flow>` flow, because some of the operations are kept in the original precision.

-.. note:: Currently, 8-bit quantization with accuracy control is available only for models in OpenVINO representation.
+.. note:: Currently, 8-bit quantization with accuracy control is available only for models in OpenVINO and onnx.ModelProto representation.

The steps for the quantization with accuracy control are described below.

@@ -38,10 +38,18 @@ This step is similar to the :doc:`Basic 8-bit quantization <basic_quantization_f
         :language: python
         :fragment: [dataset]

+   .. tab-item:: ONNX
+      :sync: onnx
+
+      .. doxygensnippet:: docs/optimization_guide/nncf/ptq/code/ptq_aa_onnx.py
+         :language: python
+         :fragment: [dataset]

Prepare validation function
############################

-Validation function receives ``openvino.CompiledModel`` object and validation dataset and returns accuracy metric value. The following code snippet shows an example of validation function for OpenVINO model:
+The validation function takes two arguments, a model object and a validation dataset, and returns the accuracy metric value. The type of the model object varies across frameworks: in OpenVINO it is an ``openvino.CompiledModel``, in ONNX an ``onnx.ModelProto``.
+The following code snippet shows an example of a validation function for the OpenVINO and ONNX frameworks:

.. tab-set::

@@ -52,10 +60,17 @@
         :language: python
         :fragment: [validation]

+   .. tab-item:: ONNX
+      :sync: onnx
+
+      .. doxygensnippet:: docs/optimization_guide/nncf/ptq/code/ptq_aa_onnx.py
+         :language: python
+         :fragment: [validation]

Run quantization with accuracy control
#######################################

-``nncf.quantize_with_accuracy_control()`` function is used to run the quantization with accuracy control. The following code snippet shows an example of quantization with accuracy control for OpenVINO model:
+The ``nncf.quantize_with_accuracy_control()`` function runs quantization with accuracy control. The following code snippet shows an example of quantization with accuracy control for the OpenVINO and ONNX frameworks:

.. tab-set::

@@ -66,6 +81,13 @@ Run quantization with accuracy control
         :language: python
         :fragment: [quantization]

+   .. tab-item:: ONNX
+      :sync: onnx
+
+      .. doxygensnippet:: docs/optimization_guide/nncf/ptq/code/ptq_aa_onnx.py
+         :language: python
+         :fragment: [quantization]

* ``max_drop`` defines the accuracy drop threshold. The quantization process stops when the degradation of the accuracy metric on the validation dataset is less than ``max_drop``. The default value is 0.01. NNCF will stop the quantization and report an error if the ``max_drop`` value can't be reached.

* ``drop_type`` defines how the accuracy drop is calculated: ``ABSOLUTE`` (used by default) or ``RELATIVE``. The sketch below illustrates the difference.
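
To make the two drop types concrete, the following sketch (plain Python, not part
of the NNCF API) computes the accuracy threshold each one implies, assuming a
hypothetical baseline accuracy of 0.80 and ``max_drop=0.01``:

.. code-block:: python

   # Hypothetical numbers, for illustration only.
   baseline_accuracy = 0.80
   max_drop = 0.01

   # ABSOLUTE (default): the quantized model may lose at most max_drop accuracy points.
   absolute_threshold = baseline_accuracy - max_drop        # 0.79

   # RELATIVE: the allowed loss is a fraction of the baseline accuracy.
   relative_threshold = baseline_accuracy * (1 - max_drop)  # 0.792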
@@ -81,6 +103,13 @@ After that the model can be compiled and run with OpenVINO:
         :language: python
         :fragment: [inference]

+   .. tab-item:: ONNX
+      :sync: onnx
+
+      .. doxygensnippet:: docs/optimization_guide/nncf/ptq/code/ptq_aa_onnx.py
+         :language: python
+         :fragment: [inference]

To save the model in the OpenVINO Intermediate Representation (IR), use ``openvino.save_model()``. When dealing with an original model in FP32 precision, it is advisable to preserve FP32 precision in the most impactful model operations that were reverted from INT8 to FP32. To do this, consider using ``compress_to_fp16=False`` during the saving process, as shown below. This recommendation is based on the default behavior of ``openvino.save_model()``, which saves models in FP16, potentially impacting accuracy through this conversion.

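A minimal sketch of such a save call, assuming the ``ov_quantized_model`` object
from the previous step and a hypothetical output path:

.. code-block:: python

   import openvino as ov

   # Keep the reverted FP32 operations in FP32 instead of compressing them to FP16.
   ov.save_model(ov_quantized_model, "quantized_model.xml", compress_to_fp16=False)
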
.. tab-set::
@@ -101,6 +130,7 @@ Examples of NNCF post-training quantization with control of accuracy metric:

* `Post-Training Quantization of Anomaly Classification OpenVINO model with control of accuracy metric <https://github.com/openvinotoolkit/nncf/blob/develop/examples/post_training_quantization/openvino/anomaly_stfpm_quantize_with_accuracy_control>`__
* `Post-Training Quantization of YOLOv8 OpenVINO Model with control of accuracy metric <https://github.com/openvinotoolkit/nncf/blob/develop/examples/post_training_quantization/openvino/yolov8_quantize_with_accuracy_control>`__
+* `Post-Training Quantization of YOLOv8 ONNX Model with control of accuracy metric <https://github.com/openvinotoolkit/nncf/blob/develop/examples/post_training_quantization/onnx/yolov8_quantize_with_accuracy_control>`__

See also
####################
75 changes: 75 additions & 0 deletions docs/optimization_guide/nncf/ptq/code/ptq_aa_onnx.py

@@ -0,0 +1,75 @@
# Copyright (C) 2018-2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

#! [dataset]
import nncf
import torch

calibration_loader = torch.utils.data.DataLoader(...)

def transform_fn(data_item):
    images, _ = data_item
    # `input_name` should be taken from the model, e.g. model.graph.input[0].name
    return {input_name: images.numpy()}

calibration_dataset = nncf.Dataset(calibration_loader, transform_fn)
validation_dataset = nncf.Dataset(calibration_loader, transform_fn)
#! [dataset]
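
# NOTE: `input_name` used in transform_fn above must be defined before the
# dataset is iterated. It can be taken from the loaded model, for example:
#   input_name = onnx.load("model_path").graph.input[0].name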

#! [validation]
import numpy as np
import torch
from sklearn.metrics import accuracy_score

import onnx
import onnxruntime


def validate(model: onnx.ModelProto,
             validation_loader: torch.utils.data.DataLoader) -> float:
    predictions = []
    references = []

    input_name = model.graph.input[0].name
    serialized_model = model.SerializeToString()
    session = onnxruntime.InferenceSession(serialized_model, providers=["CPUExecutionProvider"])
    output_names = [output.name for output in session.get_outputs()]

    for images, target in validation_loader:
        pred = session.run(output_names, input_feed={input_name: images.numpy()})[0]
        predictions.append(np.argmax(pred, axis=1))
        references.append(target)

    predictions = np.concatenate(predictions, axis=0)
    references = np.concatenate(references, axis=0)
    return accuracy_score(predictions, references)
#! [validation]

#! [quantization]
import onnx

model = onnx.load("model_path")

quantized_model = nncf.quantize_with_accuracy_control(
    model,
    calibration_dataset=calibration_dataset,
    validation_dataset=validation_dataset,
    validation_fn=validate,
    max_drop=0.01,
    drop_type=nncf.DropType.ABSOLUTE,
)
#! [quantization]

#! [inference]
import openvino as ov

# convert ONNX model to OpenVINO model
ov_quantized_model = ov.convert_model(quantized_model)

# compile the model to transform quantized operations to int8
model_int8 = ov.compile_model(ov_quantized_model)

input_fp32 = ... # FP32 model input
res = model_int8(input_fp32)

#! [inference]