[DOCS] OVC/convert_model Documentation (openvinotoolkit#19555)
* Added OVC and ov.convert_model() description.

* Minor corrections.

* Small correction.

* Include page to toctree.

* WIP: Model Preparation

* Forked OVC/ov.convert_model documentation sub-directory; reworked model_introduction.md

* Reverted ovc-related changes in old MO_DG documentation

* State explicitly that MO is considered legacy API

* Reduced ovc description in model preparation part; added TF Hub example (via file)

* Grammar check; removed obsolete parts not relevant to ovc; better wording

* Removed a duplicate of mo-to-ovc transition

* Fixed links and some other errors found in documentation build

* Resolved XYZ placeholder to the transition guide

* Fixed technical issues with links

* Up-to-date link to PTQ chapter (instead of obsolete POT)

* Fixed strong text ending

* Update docs/OV_Converter_UG/prepare_model/convert_model/MO_OVC_transition.md

Co-authored-by: Anastasiia Pnevskaia <[email protected]>

* Update docs/OV_Converter_UG/prepare_model/convert_model/MO_OVC_transition.md

Co-authored-by: Anastasiia Pnevskaia <[email protected]>

* Update docs/OV_Converter_UG/prepare_model/convert_model/MO_OVC_transition.md

Co-authored-by: Anastasiia Pnevskaia <[email protected]>

* Renamed Legacy conversion guides

* Fixed links and styles for inlined code

* Fixed style for code references

* Fixing technical syntax errors in docs

* Another attempt to fix docs

* Removed all unreferenced images

* Better content for Additional Resources in model preparation introduction

* MO to OVC transition guide. (#127)

* Examples code correction.

* Change format of example.

* Conflict fix.

* Remove wrong change.

* Added input_shapes example.

* batch example.

* Examples format changed.

* List item removed.

* Remove list for all examples.

* Corrected batch example.

* Transform example.

* Text corrections.

* Text correction.

* Example correction.

* Small correction.

* Small correction.

* Small correction.

* Small correction.

* Text corrections.

* Links corrected.

* Text corrections (#128)

* Text corrections.

* Example corrected.

* Update docs/install_guides/pypi-openvino-dev.md

Co-authored-by: Sergey Lyalin <[email protected]>

---------

Co-authored-by: Sergey Lyalin <[email protected]>

* Many technical fixes, description of recursive flattening of lists and tuples

* Reorganized structure of Model Conversion toc tree. Removed fp16 dedicated page, merged to Conversion Parameters.

* Update docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md

Co-authored-by: Roman Kazantsev <[email protected]>

* Update docs/Documentation/model_introduction.md

Co-authored-by: Maciej Smyk <[email protected]>

* Fixed example from tf hub. Removed input_shape references

* Update docs/Documentation/model_introduction.md

Co-authored-by: Tatiana Savina <[email protected]>

* Update docs/Documentation/model_introduction.md

Co-authored-by: Tatiana Savina <[email protected]>

* Update docs/Documentation/model_introduction.md

Co-authored-by: Tatiana Savina <[email protected]>

* Removed

* Update docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_ONNX.md

Co-authored-by: Tatiana Savina <[email protected]>

* Update docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_ONNX.md

Co-authored-by: Tatiana Savina <[email protected]>

* Update docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_ONNX.md

Co-authored-by: Tatiana Savina <[email protected]>

* Update docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_ONNX.md

Co-authored-by: Tatiana Savina <[email protected]>

* Fixed links

* Removed TODO for model flow

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <[email protected]>

* Restored lost code-blocks that led to wrong rendering of the code snippets in some places

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <[email protected]>

* Update docs/Documentation/model_introduction.md

* Fixed links to notebooks

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <[email protected]>

---------

Co-authored-by: Anastasiia Pnevskaia <[email protected]>
Co-authored-by: Karol Blaszczak <[email protected]>
Co-authored-by: Roman Kazantsev <[email protected]>
Co-authored-by: Maciej Smyk <[email protected]>
Co-authored-by: Tatiana Savina <[email protected]>
6 people authored Sep 12, 2023
1 parent 0675d9f commit adf7a24
Showing 18 changed files with 2,197 additions and 331 deletions.
225 changes: 197 additions & 28 deletions docs/Documentation/model_introduction.md

Large diffs are not rendered by default.

9 changes: 6 additions & 3 deletions docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
@@ -1,4 +1,4 @@
- # Convert a Model {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}
+ # Legacy Conversion API {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}

@sphinxdirective

@@ -14,12 +14,15 @@
openvino_docs_MO_DG_FP16_Compression
openvino_docs_MO_DG_Python_API
openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ
Supported_Model_Formats_MO_DG

.. meta::
   :description: Model conversion (MO) furthers the transition between training and
                 deployment environments; it adjusts deep learning models for
                 optimal execution on target devices.

.. note::
   This part of the documentation describes a legacy approach to model conversion. Starting with OpenVINO 2023.1, a simpler alternative API is available: ``openvino.convert_model`` and the OpenVINO Model Converter ``ovc`` CLI tool. Refer to :doc:`Model preparation <openvino_docs_model_processing_introduction>` for more details. If you are still using ``openvino.tools.mo.convert_model`` or the ``mo`` CLI tool, this documentation remains applicable. However, consider checking the :doc:`transition guide <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>` to learn how to migrate from the legacy conversion API to the new one. Depending on the model topology, the new API can be a better option for you.
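
For illustration, a minimal sketch of the legacy and the new Python APIs side by side, assuming an ONNX file named ``model.onnx`` (the file name is a placeholder):

   # Legacy API, still available:
   from openvino.tools.mo import convert_model
   ov_model = convert_model("model.onnx")

   # New API, OpenVINO 2023.1 and later:
   import openvino as ov
   ov_model = ov.convert_model("model.onnx")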

To convert a model to the OpenVINO model format (``ov.Model``), use the following command:
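
A representative invocation of the legacy CLI, consistent with the usage shown later in this commit (``INPUT_MODEL`` is a placeholder for the model path):

   mo --input_model INPUT_MODEL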

10 changes: 5 additions & 5 deletions docs/MO_DG/prepare_model/FP16_Compression.md
@@ -3,7 +3,7 @@
@sphinxdirective

By default, when the IR is saved, all relevant floating-point weights are compressed to the ``FP16`` data type during model conversion.
This results in a "compressed ``FP16`` model", which occupies about half of
the original space in the file system. The compression may introduce a minor drop in accuracy,
but it is negligible for most models.
If the accuracy drop is significant, you can disable the compression explicitly.
@@ -29,20 +29,20 @@ To disable compression, use the ``compress_to_fp16=False`` option:
mo --input_model INPUT_MODEL --compress_to_fp16=False
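
The legacy Python API exposes the same switch; a minimal sketch, assuming an ONNX input file (the file name is a placeholder):

   from openvino.tools.mo import convert_model

   # Keep the weights in their original precision instead of compressing to FP16:
   ov_model = convert_model("model.onnx", compress_to_fp16=False)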


For details on how plugins handle compressed ``FP16`` models, see
:doc:`Working with devices <openvino_docs_OV_UG_Working_with_devices>`.

.. note::

``FP16`` compression is sometimes used as the initial step for ``INT8`` quantization.
- Refer to the :doc:`Post-training optimization <pot_introduction>` guide for more
+ Refer to the :doc:`Post-training optimization <ptq_introduction>` guide for more
information about that.
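
As a rough sketch of that ``INT8`` flow with NNCF (``data_source`` and ``transform_fn`` are hypothetical stand-ins; see the post-training quantization guide for the real pipeline):

   import nncf
   import openvino as ov

   ov_model = ov.Core().read_model("model.xml")   # path is a placeholder
   # data_source: any iterable of calibration samples; transform_fn maps an
   # item to the model input — both are hypothetical stand-ins here.
   calibration_dataset = nncf.Dataset(data_source, transform_fn)
   quantized_model = nncf.quantize(ov_model, calibration_dataset)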


.. note::

Some large models (larger than a few GB), when compressed to ``FP16``, may consume an overly large amount of RAM
during the loading phase of inference. If that is the case for your model, try to convert it without compression:
``convert_model(INPUT_MODEL, compress_to_fp16=False)`` or ``convert_model(INPUT_MODEL)``


62 changes: 31 additions & 31 deletions docs/MO_DG/prepare_model/convert_model/supported_model_formats.md
@@ -1,4 +1,4 @@
- # Supported Model Formats {#Supported_Model_Formats}
+ # Supported Model Formats {#Supported_Model_Formats_MO_DG}

@sphinxdirective

@@ -17,7 +17,7 @@
:description: Learn about supported model formats and the methods used to convert, read, and compile them in OpenVINO™.


**OpenVINO IR (Intermediate Representation)** - the proprietary and default format of OpenVINO, benefiting from the full extent of its features. All other supported model formats, as listed below, are converted to :doc:`OpenVINO IR <openvino_ir>` to enable inference. Consider storing your model in this format to minimize first-inference latency, perform model optimization, and, in some cases, save space on your drive.

**PyTorch, TensorFlow, ONNX, and PaddlePaddle** - can be used with the OpenVINO Runtime API directly,
which means you do not need to save them as OpenVINO IR before including them in your application.
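
As a sketch of the "store as OpenVINO IR" recommendation above (file names are placeholders):

   import openvino as ov

   ov_model = ov.convert_model("model.onnx")
   ov.save_model(ov_model, "model.xml")   # writes model.xml + model.bin; weights compressed to FP16 by default
   compiled_model = ov.Core().compile_model("model.xml", "AUTO")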
@@ -62,9 +62,9 @@ Here are code examples of how to use these methods with different model formats:
ov_model = convert_model(model)
compiled_model = core.compile_model(ov_model, "AUTO")

For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/102-pytorch-onnx-to-openvino-with-output.html>`__
on this topic.
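
A self-contained variant of the PyTorch flow above, sketched with a torchvision model (the model choice and input shape are assumptions):

   import torch
   import torchvision
   import openvino as ov

   model = torchvision.models.resnet18(weights="DEFAULT").eval()
   ov_model = ov.convert_model(model, example_input=torch.rand(1, 3, 224, 224))
   compiled_model = ov.Core().compile_model(ov_model, "AUTO")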

.. tab-item:: TensorFlow
@@ -104,10 +104,10 @@ Here are code examples of how to use these methods with different model formats:
ov_model = convert_model("saved_model.pb")
compiled_model = core.compile_model(ov_model, "AUTO")

For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/101-tensorflow-to-openvino-with-output.html>`__
on this topic.
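
A self-contained sketch of the same flow with a Keras model (the specific architecture is an assumption):

   import tensorflow as tf
   import openvino as ov

   model = tf.keras.applications.MobileNetV2(weights=None)   # weights=None avoids a download
   ov_model = ov.convert_model(model)
   compiled_model = ov.Core().compile_model(ov_model, "AUTO")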

* The ``read_model()`` and ``compile_model()`` methods:

@@ -125,8 +125,8 @@ Here are code examples of how to use these methods with different model formats:
ov_model = read_model("saved_model.pb")
compiled_model = core.compile_model(ov_model, "AUTO")

For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.
For TensorFlow format, see :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`.

.. tab-item:: C++
@@ -146,7 +146,7 @@

ov::CompiledModel compiled_model = core.compile_model("saved_model.pb", "AUTO");

For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.

.. tab-item:: C
@@ -167,7 +167,7 @@ Here are code examples of how to use these methods with different model formats:
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "saved_model.pb", "AUTO", 0, &compiled_model);

For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.

.. tab-item:: CLI
@@ -206,9 +206,9 @@ Here are code examples of how to use these methods with different model formats:
ov_model = convert_model("<INPUT_MODEL>.tflite")
compiled_model = core.compile_model(ov_model, "AUTO")

For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/119-tflite-to-openvino-with-output.html>`__
on this topic.


@@ -239,7 +239,7 @@ Here are code examples of how to use these methods with different model formats:

compiled_model = core.compile_model("<INPUT_MODEL>.tflite", "AUTO")

For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.


@@ -258,7 +258,7 @@ Here are code examples of how to use these methods with different model formats:

ov::CompiledModel compiled_model = core.compile_model("<INPUT_MODEL>.tflite", "AUTO");

For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.

.. tab-item:: C
@@ -277,7 +277,7 @@ Here are code examples of how to use these methods with different model formats:
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "<INPUT_MODEL>.tflite", "AUTO", 0, &compiled_model);

For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.

.. tab-item:: CLI
@@ -297,7 +297,7 @@ Here are code examples of how to use these methods with different model formats:

mo --input_model <INPUT_MODEL>.tflite

For details on the conversion, refer to the
:doc:`article <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow_Lite>`.

.. tab-item:: ONNX
@@ -324,9 +324,9 @@ Here are code examples of how to use these methods with different model formats:
ov_model = convert_model("<INPUT_MODEL>.onnx")
compiled_model = core.compile_model(ov_model, "AUTO")

For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/102-pytorch-onnx-to-openvino-with-output.html>`__
on this topic.
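
To extend the ONNX example above with an inference call, a sketch assuming a single-input model of shape ``[1, 3, 224, 224]`` (the shape is an assumption):

   import numpy as np
   import openvino as ov

   core = ov.Core()
   ov_model = ov.convert_model("<INPUT_MODEL>.onnx")
   compiled_model = core.compile_model(ov_model, "AUTO")
   results = compiled_model(np.zeros((1, 3, 224, 224), dtype=np.float32))  # maps model outputs to ndarrays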


@@ -445,9 +445,9 @@ Here are code examples of how to use these methods with different model formats:
ov_model = convert_model("<INPUT_MODEL>.pdmodel")
compiled_model = core.compile_model(ov_model, "AUTO")

For more details on conversion, refer to the
:doc:`guide <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle>`
and an example `tutorial <https://docs.openvino.ai/nightly/notebooks/103-paddle-to-openvino-classification-with-output.html>`__
on this topic.

* The ``read_model()`` method:
@@ -477,7 +477,7 @@ Here are code examples of how to use these methods with different model formats:

compiled_model = core.compile_model("<INPUT_MODEL>.pdmodel", "AUTO")

For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.

.. tab-item:: C++
@@ -495,7 +495,7 @@ Here are code examples of how to use these methods with different model formats:

ov::CompiledModel compiled_model = core.compile_model("<INPUT_MODEL>.pdmodel", "AUTO");

For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.

.. tab-item:: C
@@ -514,7 +514,7 @@ Here are code examples of how to use these methods with different model formats:
ov_compiled_model_t* compiled_model = NULL;
ov_core_compile_model_from_file(core, "<INPUT_MODEL>.pdmodel", "AUTO", 0, &compiled_model);

For a guide on how to run inference, see how to
:doc:`Integrate OpenVINO™ with Your Application <openvino_docs_OV_UG_Integrate_OV_with_your_application>`.

.. tab-item:: CLI
@@ -538,8 +538,8 @@ Here are code examples of how to use these methods with different model formats:
:doc:`article <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle>`.


**MXNet, Caffe, and Kaldi** are legacy formats that need to be converted explicitly to OpenVINO IR or ONNX before running inference.
As OpenVINO is **deprecating these formats** and will **remove their support entirely in the future**,
converting them to ONNX for use with OpenVINO should be considered the default path.
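
Once such a model has been exported to ONNX with its framework's own tooling, the remaining step is the standard conversion; a sketch (the exported file name is hypothetical):

   import openvino as ov

   ov_model = ov.convert_model("exported_from_mxnet.onnx")   # hypothetical ONNX export
   ov.save_model(ov_model, "model.xml")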

.. note::