From 99ce7c045a2d83e75a2646841d95b0d194a3051f Mon Sep 17 00:00:00 2001
From: Sebastian Golebiewski
Date: Tue, 12 Mar 2024 14:41:37 +0100
Subject: [PATCH] [DOCS] Fixing formatting and reference issues (#23405)

Fixing formatting and reference issues in docs.
Porting: https://github.com/openvinotoolkit/openvino/pull/23333
Fixing directives for linking and cross-reference.
Porting: https://github.com/openvinotoolkit/openvino/pull/23300
---
 ...gacy]-convert-models-as-python-objects.rst | 12 +-
 .../convert-tensorflow-gnmt.rst | 6 +-
 ...egacy]-graph-transformation-extensions.rst | 2 +-
 .../openvino-security-add-on.rst | 4 +-
 .../custom-gpu-operations.rst | 3 +-
 .../custom-openvino-operations.rst | 2 +-
 .../step3-main.rst | 2 +-
 .../operation-specs/internal/augru-cell.md | 135 -------------
 .../operation-specs/internal/augru-cell.rst | 157 +++++++++++++++
 .../internal/augru-sequence.md | 161 ---------------
 .../internal/augru-sequence.rst | 190 ++++++++++++++++++
 .../movement/depth-to-space-1.rst | 15 +-
 .../movement/space-to-depth-1.rst | 30 ++-
 .../configurations-intel-gpu.rst | 4 +-
 .../install-openvino-archive-macos.rst | 4 +-
 .../install-openvino-archive-windows.rst | 4 +-
 .../changing-input-shape.rst | 2 +-
 .../gpu-device.rst | 10 +-
 ...tegrate-openvino-with-your-application.rst | 2 +
 .../model-representation.rst | 6 +-
 .../running-inference/optimize-inference.rst | 2 +-
 .../optimize-inference/optimizing-latency.rst | 2 +-
 .../running-inference/stateful-models.rst | 4 +-
 .../obtaining-stateful-openvino-model.rst | 4 +-
 docs/sphinx_setup/api/nodejs_api/addon.rst | 12 +-
 .../openvino-node/enums/element.rst | 20 +-
 .../openvino-node/enums/resizeAlgorithm.rst | 6 +-
 .../interfaces/CompiledModel.rst | 16 +-
 .../openvino-node/interfaces/Core.rst | 26 +--
 .../interfaces/CoreConstructor.rst | 4 +-
 .../openvino-node/interfaces/InferRequest.rst | 36 ++--
 .../openvino-node/interfaces/InputInfo.rst | 8 +-
 .../interfaces/InputModelInfo.rst | 4 +-
 .../interfaces/InputTensorInfo.rst | 8 +-
 .../openvino-node/interfaces/Model.rst | 16 +-
 .../openvino-node/interfaces/Output.rst | 18 +-
 .../openvino-node/interfaces/OutputInfo.rst | 4 +-
 .../interfaces/OutputTensorInfo.rst | 6 +-
 .../openvino-node/interfaces/PartialShape.rst | 10 +-
 .../interfaces/PartialShapeConstructor.rst | 4 +-
 .../interfaces/PrePostProcessor.rst | 12 +-
 .../PrePostProcessorConstructor.rst | 4 +-
 .../interfaces/PreProcessSteps.rst | 4 +-
 .../openvino-node/interfaces/Tensor.rst | 10 +-
 .../interfaces/TensorConstructor.rst | 6 +-
 .../openvino-node/types/Dimension.rst | 2 +-
 .../types/SupportedTypedArray.rst | 2 +-
 .../openvino-node/types/elementTypeString.rst | 2 +-
 48 files changed, 535 insertions(+), 468 deletions(-)
 delete mode 100644 docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/internal/augru-cell.md
 create mode 100644 docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/internal/augru-cell.rst
 delete mode 100644 docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/internal/augru-sequence.md
 create mode 100644 docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/internal/augru-sequence.rst

diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-convert-models-as-python-objects.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-convert-models-as-python-objects.rst
index
7749ef4f5fe10d..212aea1cf5790f 100644 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-convert-models-as-python-objects.rst +++ b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-convert-models-as-python-objects.rst @@ -7,7 +7,7 @@ The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. - This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Model Preparation <../../../../openvino-workflow/model-preparation>` article. + This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Model Preparation <../../../../openvino-workflow/model-preparation>` article. Model conversion API is represented by ``convert_model()`` method in openvino.tools.mo namespace. ``convert_model()`` is compatible with types from openvino.runtime, like PartialShape, Layout, Type, etc. @@ -32,8 +32,8 @@ Example of converting a PyTorch model directly from memory: The following types are supported as an input model for ``convert_model()``: -* PyTorch - ``torch.nn.Module``, ``torch.jit.ScriptModule``, ``torch.jit.ScriptFunction``. Refer to the :doc:`Converting a PyTorch Model<[legacy]-supported-model-formats/[legacy]-convert-pytorch>` article for more details. -* TensorFlow / TensorFlow 2 / Keras - ``tf.keras.Model``, ``tf.keras.layers.Layer``, ``tf.compat.v1.Graph``, ``tf.compat.v1.GraphDef``, ``tf.Module``, ``tf.function``, ``tf.compat.v1.session``, ``tf.train.checkpoint``. Refer to the :doc:`Converting a TensorFlow Model<[legacy]-supported-model-formats/[legacy]-convert-tensorflow>` article for more details. +* PyTorch - ``torch.nn.Module``, ``torch.jit.ScriptModule``, ``torch.jit.ScriptFunction``. Refer to the :doc:`Converting a PyTorch Model <[legacy]-supported-model-formats/[legacy]-convert-pytorch>` article for more details. +* TensorFlow / TensorFlow 2 / Keras - ``tf.keras.Model``, ``tf.keras.layers.Layer``, ``tf.compat.v1.Graph``, ``tf.compat.v1.GraphDef``, ``tf.Module``, ``tf.function``, ``tf.compat.v1.session``, ``tf.train.checkpoint``. Refer to the :doc:`Converting a TensorFlow Model <[legacy]-supported-model-formats/[legacy]-convert-tensorflow>` article for more details. ``convert_model()`` accepts all parameters available in the MO command-line tool. Parameters can be specified by Python classes or string analogs, similar to the command-line tool. @@ -64,7 +64,7 @@ Example of using a tuple in the ``input`` parameter to cut a model: ov_model = convert_model(model, input=("input_name", [3], np.float32)) -For complex cases, when a value needs to be set in the ``input`` parameter, the ``InputCutInfo`` class can be used. ``InputCutInfo`` accepts four parameters: ``name``, ``shape``, ``type``, and ``value``. +For complex cases, when a value needs to be set in the ``input`` parameter, the ``InputCutInfo`` class can be used. ``InputCutInfo`` accepts four parameters: ``name``, ``shape``, ``type``, and ``value``. ``InputCutInfo("input_name", [3], np.float32, [0.5, 2.1, 3.4])`` is equivalent of ``InputCutInfo(name="input_name", shape=[3], type=np.float32, value=[0.5, 2.1, 3.4])``. 
@@ -85,11 +85,11 @@ Example of using ``InputCutInfo`` to freeze an input with value: ov_model = convert_model(model, input=InputCutInfo("input_name", [3], np.float32, [0.5, 2.1, 3.4])) To set parameters for models with multiple inputs, use ``list`` of parameters. -Parameters supporting ``list``: +Parameters supporting ``list``: * input * input_shape -* layout +* layout * source_layout * dest_layout * mean_values diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-gnmt.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-gnmt.rst index f0fd88ccfca948..ac5b43d55feb7f 100644 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-gnmt.rst +++ b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-conversion-api/[legacy]-supported-model-formats/[legacy]-conversion-tutorials/convert-tensorflow-gnmt.rst @@ -5,7 +5,7 @@ Converting a TensorFlow GNMT Model .. meta:: - :description: Learn how to convert a GNMT model + :description: Learn how to convert a GNMT model from TensorFlow to the OpenVINO Intermediate Representation. .. danger:: @@ -13,7 +13,7 @@ Converting a TensorFlow GNMT Model The code described here has been **deprecated!** Do not use it to avoid working with a legacy solution. It will be kept for some time to ensure backwards compatibility, but **you should not use** it in contemporary applications. This guide describes a deprecated conversion method. The guide on the new and recommended method can be found in the :doc:`Python tutorials <../../../../../../learn-openvino/interactive-tutorials-python>`. - + This tutorial explains how to convert Google Neural Machine Translation (GNMT) model to the Intermediate Representation (IR). There are several public versions of TensorFlow GNMT model implementation available on GitHub. This tutorial explains how to convert the GNMT model from the `TensorFlow Neural Machine Translation (NMT) repository `__ to the IR. @@ -26,7 +26,7 @@ Before converting the model, you need to create a patch file for the repository. 1. Go to a writable directory and create a ``GNMT_inference.patch`` file. 2. Copy the following diff code to the file: - .. code-block:: cpp + .. 
code-block:: py diff --git a/nmt/inference.py b/nmt/inference.py index 2cbef07..e185490 100644 diff --git a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions/[legacy]-graph-transformation-extensions.rst b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions/[legacy]-graph-transformation-extensions.rst index 3e18a780aab93b..39162e5c6fc78a 100644 --- a/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions/[legacy]-graph-transformation-extensions.rst +++ b/docs/articles_en/documentation/legacy-features/transition-legacy-conversion-api/legacy-model-optimizer-extensibility/[legacy]-model-optimizer-extensions/[legacy]-graph-transformation-extensions.rst @@ -457,7 +457,7 @@ For other examples of transformations with points, refer to the Generic Front Phase Transformations Enabled with Transformations Configuration File ################################################################################### -This type of transformation works similarly to the :ref:`Generic Front Phase Transformations ` but require a JSON configuration file to enable it similarly to :ref:`Node Name Pattern Front Phase Transformations ` and :ref:`Front Phase Transformations Using Start and End Points `. diff --git a/docs/articles_en/documentation/openvino-ecosystem/openvino-security-add-on.rst b/docs/articles_en/documentation/openvino-ecosystem/openvino-security-add-on.rst index b840228f19c09b..2365f721da1edb 100644 --- a/docs/articles_en/documentation/openvino-ecosystem/openvino-security-add-on.rst +++ b/docs/articles_en/documentation/openvino-ecosystem/openvino-security-add-on.rst @@ -734,9 +734,9 @@ The Model Hosting components install the OpenVINO™ Security Add-on Runtime Doc How to Use the OpenVINO™ Security Add-on ######################################## -This section requires interactions between the Model Developer/Independent Software vendor and the User. All roles must complete all applicable :ref:`set up steps ` and :ref:`installation steps ` before beginning this section. +This section requires interactions between the Model Developer/Independent Software vendor and the User. All roles must complete all applicable :ref:`set up steps ` and :ref:`installation steps ` before beginning this section. -This document uses the :ref:`face-detection-retail-0004 <../../omz_models_model_face_detection_retail_0044>` model as an example. +This document uses the :doc:`face-detection-retail-0004 <../../omz_models_model_face_detection_retail_0044>` model as an example. The following figure describes the interactions between the Model Developer, Independent Software Vendor, and User. diff --git a/docs/articles_en/documentation/openvino-extensibility/custom-gpu-operations.rst b/docs/articles_en/documentation/openvino-extensibility/custom-gpu-operations.rst index e9ff4af6a319cf..8e7f6f34e197da 100644 --- a/docs/articles_en/documentation/openvino-extensibility/custom-gpu-operations.rst +++ b/docs/articles_en/documentation/openvino-extensibility/custom-gpu-operations.rst @@ -350,7 +350,8 @@ Example Kernel Debugging Tips ############## -**Using ``printf`` in the OpenCL™ Kernels**. +**Using** ``printf`` **in the OpenCL™ Kernels**. + To debug the specific values, use ``printf`` in your kernels. 
However, be careful not to output excessively, which could generate too much data. The ``printf`` output is typical, so diff --git a/docs/articles_en/documentation/openvino-extensibility/custom-openvino-operations.rst b/docs/articles_en/documentation/openvino-extensibility/custom-openvino-operations.rst index 01d46c73447636..aafcfe23538281 100644 --- a/docs/articles_en/documentation/openvino-extensibility/custom-openvino-operations.rst +++ b/docs/articles_en/documentation/openvino-extensibility/custom-openvino-operations.rst @@ -9,7 +9,7 @@ Custom OpenVINO Operations custom operations to support models with operations not supported by OpenVINO. -OpenVINO™ Extension API allows you to register custom operations to support models with operations which OpenVINO™ does not support out-of-the-box. This capability requires writing code in C++, so if you are using Python to develop your application you need to build a separate shared library implemented in C++ first and load it in Python using ``add_extension`` API. Please refer to :ref:`Create library with extensions ` for more details on library creation and usage. The remining part of this document describes how to implement an operation class. +OpenVINO™ Extension API allows you to register custom operations to support models with operations which OpenVINO™ does not support out-of-the-box. This capability requires writing code in C++, so if you are using Python to develop your application you need to build a separate shared library implemented in C++ first and load it in Python using ``add_extension`` API. Please refer to :ref:`Create library with extensions ` for more details on library creation and usage. The remaining part of this document describes how to implement an operation class. Operation Class ############### diff --git a/docs/articles_en/documentation/openvino-extensibility/openvino-plugin-library/advanced-guides/low-precision-transformations/step3-main.rst b/docs/articles_en/documentation/openvino-extensibility/openvino-plugin-library/advanced-guides/low-precision-transformations/step3-main.rst index cf4961502f10e8..66c46124e1c1a2 100644 --- a/docs/articles_en/documentation/openvino-extensibility/openvino-plugin-library/advanced-guides/low-precision-transformations/step3-main.rst +++ b/docs/articles_en/documentation/openvino-extensibility/openvino-plugin-library/advanced-guides/low-precision-transformations/step3-main.rst @@ -69,7 +69,7 @@ Main transformations are the majority of low precision transformations. Transfor * :doc:`MultiplyPartialTransformation ` * :doc:`MVNTransformation ` * :doc:`NormalizeL2Transformation ` -* :doc:`PadTransformation` +* :doc:`PadTransformation ` * :doc:`PReluTransformation ` * :doc:`ReduceMaxTransformation ` * :doc:`ReduceMeanTransformation ` diff --git a/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/internal/augru-cell.md b/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/internal/augru-cell.md deleted file mode 100644 index ed980d826dbb34..00000000000000 --- a/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/internal/augru-cell.md +++ /dev/null @@ -1,135 +0,0 @@ -# AUGRUCell - -**Versioned name**: *AUAUGRUCell* - -**Category**: *Sequence processing* - -**Short description**: *AUGRUCell* represents a single AUGRU Cell (GRU with attentional update gate). 
- -**Detailed description**: The main difference between *AUGRUCell* and [GRUCell](../../../docs/ops/sequence/GRUCell_3.md) is the additional attention score input `A`, which is a multiplier for the update gate. -The AUGRU formula is based on the [paper arXiv:1809.03672](https://arxiv.org/abs/1809.03672). - -``` -AUGRU formula: - * - matrix multiplication - (.) - Hadamard product (element-wise) - - f, g - activation functions - z - update gate, r - reset gate, h - hidden gate - a - attention score - - rt = f(Xt*(Wr^T) + Ht-1*(Rr^T) + Wbr + Rbr) - zt = f(Xt*(Wz^T) + Ht-1*(Rz^T) + Wbz + Rbz) - ht = g(Xt*(Wh^T) + (rt (.) Ht-1)*(Rh^T) + Rbh + Wbh) # 'linear_before_reset' is False - - zt' = (1 - at) (.) zt # multiplication by attention score - - Ht = (1 - zt') (.) ht + zt' (.) Ht-1 -``` - -**Attributes** - -* *hidden_size* - - * **Description**: *hidden_size* specifies hidden state size. - * **Range of values**: a positive integer - * **Type**: `int` - * **Required**: *yes* - -* *activations* - - * **Description**: activation functions for gates - * **Range of values**: *sigmoid*, *tanh* - * **Type**: a list of strings - * **Default value**: *sigmoid* for f, *tanh* for g - * **Required**: *no* - -* *activations_alpha, activations_beta* - - * **Description**: *activations_alpha, activations_beta* attributes of functions; applicability and meaning of these attributes depends on chosen activation functions - * **Range of values**: [] - * **Type**: `float[]` - * **Default value**: [] - * **Required**: *no* - -* *clip* - - * **Description**: *clip* specifies bound values *[-C, C]* for tensor clipping. Clipping is performed before activations. - * **Range of values**: `0.` - * **Type**: `float` - * **Default value**: `0.` that means the clipping is not applied - * **Required**: *no* - -* *linear_before_reset* - - * **Description**: *linear_before_reset* flag denotes, if the output of hidden gate is multiplied by the reset gate before or after linear transformation. - * **Range of values**: False - * **Type**: `boolean` - * **Default value**: False - * **Required**: *no*. - -**Inputs** - -* **1**: `X` - 2D tensor of type *T* and shape `[batch_size, input_size]`, input data. **Required.** - -* **2**: `H_t` - 2D tensor of type *T* and shape `[batch_size, hidden_size]`. Input with initial hidden state data. **Required.** - -* **3**: `W` - 2D tensor of type *T* and shape `[3 * hidden_size, input_size]`. The weights for matrix multiplication, gate order: zrh. **Required.** - -* **4**: `R` - 2D tensor of type *T* and shape `[3 * hidden_size, hidden_size]`. The recurrence weights for matrix multiplication, gate order: zrh. **Required.** - -* **5**: `B` - 2D tensor of type *T*. The biases. If *linear_before_reset* is set to `False`, then the shape is `[3 * hidden_size]`, gate order: zrh. Otherwise the shape is `[4 * hidden_size]` - the sum of biases for z and r gates (weights and recurrence weights), the biases for h gate are placed separately. **Required.** - -* **6**: `A` - 2D tensor of type *T* and shape `[batch_size, 1]`, the attention score. **Required.** - - -**Outputs** - -* **1**: `Ho` - 2D tensor of type *T* `[batch_size, hidden_size]`, the last output value of hidden state. - -**Types** - -* *T*: any supported floating-point type. 
- -**Example** -```xml - - - - - 1 - 16 - - - 1 - 128 - - - 384 - 16 - - - 384 - 128 - - - 384 - - - 1 - 1 - - - - - 1 - 4 - 128 - - - 1 - 128 - - - -``` diff --git a/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/internal/augru-cell.rst b/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/internal/augru-cell.rst new file mode 100644 index 00000000000000..f7d6d4010e816f --- /dev/null +++ b/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/internal/augru-cell.rst @@ -0,0 +1,157 @@ +.. {#openvino_docs_ops_internal_AUGRUCell} + +AUGRUCell +========= + +**Versioned name**: *AUAUGRUCell* + +**Category**: *Sequence processing* + +**Short description**: *AUGRUCell* represents a single AUGRU Cell (GRU with attentional +update gate). + +**Detailed description**: The main difference between *AUGRUCell* and +:doc:`GRUCell <../sequence/gru-cell-3>` is the additional attention score +input ``A``, which is a multiplier for the update gate. +The AUGRU formula is based on the `paper arXiv:1809.03672 `__. + +.. code-block:: py + + AUGRU formula: + * - matrix multiplication + (.) - Hadamard product (element-wise) + + f, g - activation functions + z - update gate, r - reset gate, h - hidden gate + a - attention score + + rt = f(Xt*(Wr^T) + Ht-1*(Rr^T) + Wbr + Rbr) + zt = f(Xt*(Wz^T) + Ht-1*(Rz^T) + Wbz + Rbz) + ht = g(Xt*(Wh^T) + (rt (.) Ht-1)*(Rh^T) + Rbh + Wbh) # 'linear_before_reset' is False + + zt' = (1 - at) (.) zt # multiplication by attention score + + Ht = (1 - zt') (.) ht + zt' (.) Ht-1 + + +**Attributes** + +* *hidden_size* + + * **Description**: *hidden_size* specifies hidden state size. + * **Range of values**: a positive integer + * **Type**: ``int`` + * **Required**: *yes* + +* *activations* + + * **Description**: activation functions for gates + * **Range of values**: *sigmoid*, *tanh* + * **Type**: a list of strings + * **Default value**: *sigmoid* for f, *tanh* for g + * **Required**: *no* + +* *activations_alpha, activations_beta* + + * **Description**: *activations_alpha, activations_beta* attributes of functions; + applicability and meaning of these attributes depends on chosen activation functions + * **Range of values**: [] + * **Type**: ``float[]`` + * **Default value**: [] + * **Required**: *no* + +* *clip* + + * **Description**: *clip* specifies bound values *[-C, C]* for tensor clipping. + Clipping is performed before activations. + * **Range of values**: ``0.`` + * **Type**: ``float`` + * **Default value**: ``0.`` that means the clipping is not applied + * **Required**: *no* + +* *linear_before_reset* + + * **Description**: *linear_before_reset* flag denotes, if the output of hidden gate + is multiplied by the reset gate before or after linear transformation. + * **Range of values**: False + * **Type**: ``boolean`` + * **Default value**: False + * **Required**: *no*. + +**Inputs** + +* **1**: ``X`` - 2D tensor of type *T* and shape ``[batch_size, input_size]``, input + data. **Required.** + +* **2**: ``H_t`` - 2D tensor of type *T* and shape ``[batch_size, hidden_size]``. + Input with initial hidden state data. **Required.** + +* **3**: ``W`` - 2D tensor of type *T* and shape ``[3 * hidden_size, input_size]``. + The weights for matrix multiplication, gate order: zrh. **Required.** + +* **4**: ``R`` - 2D tensor of type *T* and shape ``[3 * hidden_size, hidden_size]``. + The recurrence weights for matrix multiplication, gate order: zrh. 
**Required.** + +* **5**: ``B`` - 2D tensor of type *T*. The biases. If *linear_before_reset* is set + to ``False``, then the shape is ``[3 * hidden_size]``, gate order: zrh. Otherwise + the shape is ``[4 * hidden_size]`` - the sum of biases for z and r gates (weights and + recurrence weights), the biases for h gate are placed separately. **Required.** + +* **6**: ``A`` - 2D tensor of type *T* and shape ``[batch_size, 1]``, the attention + score. **Required.** + + +**Outputs** + +* **1**: ``Ho`` - 2D tensor of type *T* ``[batch_size, hidden_size]``, the last output + value of hidden state. + +**Types** + +* *T*: any supported floating-point type. + +**Example** + +.. code-block:: xml + :force: + + + + + + 1 + 16 + + + 1 + 128 + + + 384 + 16 + + + 384 + 128 + + + 384 + + + 1 + 1 + + + + + 1 + 4 + 128 + + + 1 + 128 + + + + diff --git a/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/internal/augru-sequence.md b/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/internal/augru-sequence.md deleted file mode 100644 index bb4f38b27a28e0..00000000000000 --- a/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/internal/augru-sequence.md +++ /dev/null @@ -1,161 +0,0 @@ -# AUGRUSequence - -**Versioned name**: *AUGRUSequence* - -**Category**: *Sequence processing* - -**Short description**: *AUGRUSequence* operation represents a series of AUGRU cells (GRU with attentional update gate). - -**Detailed description**: The main difference between *AUGRUSequence* and [GRUSequence](../../../docs/ops/sequence/GRUSequence_5.md) is the additional attention score input `A`, which is a multiplier for the update gate. -The AUGRU formula is based on the [paper arXiv:1809.03672](https://arxiv.org/abs/1809.03672). - -``` -AUGRU formula: - * - matrix multiplication - (.) - Hadamard product (element-wise) - - f, g - activation functions - z - update gate, r - reset gate, h - hidden gate - a - attention score - - rt = f(Xt*(Wr^T) + Ht-1*(Rr^T) + Wbr + Rbr) - zt = f(Xt*(Wz^T) + Ht-1*(Rz^T) + Wbz + Rbz) - ht = g(Xt*(Wh^T) + (rt (.) Ht-1)*(Rh^T) + Rbh + Wbh) # 'linear_before_reset' is False - - zt' = (1 - at) (.) zt # multiplication by attention score - - Ht = (1 - zt') (.) ht + zt' (.) Ht-1 -``` - -Activation functions for gates: *sigmoid* for f, *tanh* for g. -Only `forward` direction is supported, so `num_directions` is always equal to `1`. - -**Attributes** - -* *hidden_size* - - * **Description**: *hidden_size* specifies hidden state size. - * **Range of values**: a positive integer - * **Type**: `int` - * **Required**: *yes* - -* *activations* - - * **Description**: activation functions for gates - * **Range of values**: *sigmoid*, *tanh* - * **Type**: a list of strings - * **Default value**: *sigmoid* for f, *tanh* for g - * **Required**: *no* - -* *activations_alpha, activations_beta* - - * **Description**: *activations_alpha, activations_beta* attributes of functions; applicability and meaning of these attributes depends on chosen activation functions - * **Range of values**: [] - * **Type**: `float[]` - * **Default value**: [] - * **Required**: *no* - -* *clip* - - * **Description**: *clip* specifies bound values *[-C, C]* for tensor clipping. Clipping is performed before activations. - * **Range of values**: `0.` - * **Type**: `float` - * **Default value**: `0.` that means the clipping is not applied - * **Required**: *no* - -* *direction* - - * **Description**: Specify if the RNN is forward, reverse, or bidirectional. 
If it is one of *forward* or *reverse* then `num_directions = 1`, if it is *bidirectional*, then `num_directions = 2`. This `num_directions` value specifies input/output shape requirements. - * **Range of values**: *forward* - * **Type**: `string` - * **Default value**: *forward* - * **Required**: *no* - -* *linear_before_reset* - - * **Description**: *linear_before_reset* flag denotes, if the output of hidden gate is multiplied by the reset gate before or after linear transformation. - * **Range of values**: False - * **Type**: `boolean` - * **Default value**: False - * **Required**: *no* - -**Inputs** - -* **1**: `X` - 3D tensor of type *T1* `[batch_size, seq_length, input_size]`, input data. **Required.** - -* **2**: `H_t` - 3D tensor of type *T1* and shape `[batch_size, num_directions, hidden_size]`. Input with initial hidden state data. **Required.** - -* **3**: `sequence_lengths` - 1D tensor of type *T2* and shape `[batch_size]`. Specifies real sequence lengths for each batch element. **Required.** - -* **4**: `W` - 3D tensor of type *T1* and shape `[num_directions, 3 * hidden_size, input_size]`. The weights for matrix multiplication, gate order: zrh. **Required.** - -* **5**: `R` - 3D tensor of type *T1* and shape `[num_directions, 3 * hidden_size, hidden_size]`. The recurrence weights for matrix multiplication, gate order: zrh. **Required.** - -* **6**: `B` - 2D tensor of type *T1*. The biases. If *linear_before_reset* is set to `False`, then the shape is `[num_directions, 3 * hidden_size]`, gate order: zrh. Otherwise the shape is `[num_directions, 4 * hidden_size]` - the sum of biases for z and r gates (weights and recurrence weights), the biases for h gate are placed separately. **Required.** - -* **7**: `A` - 3D tensor of type *T1* `[batch_size, seq_length, 1]`, the attention score. **Required.** - -**Outputs** - -* **1**: `Y` - 4D tensor of type *T1* `[batch_size, num_directions, seq_length, hidden_size]`, concatenation of all the intermediate output values of the hidden. - -* **2**: `Ho` - 3D tensor of type *T1* `[batch_size, num_directions, hidden_size]`, the last output value of hidden state. - -**Types** - -* *T1*: any supported floating-point type. -* *T2*: any supported integer type. - -**Example** -```xml - - - - - 1 - 4 - 16 - - - 1 - 1 - 128 - - - 1 - - - 1 - 384 - 16 - - - 1 - 384 - 128 - - - 1 - 384 - - - 1 - 4 - 1 - - - - - 1 - 1 - 4 - 128 - - - 1 - 1 - 128 - - - -``` diff --git a/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/internal/augru-sequence.rst b/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/internal/augru-sequence.rst new file mode 100644 index 00000000000000..55035aab1e9908 --- /dev/null +++ b/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/internal/augru-sequence.rst @@ -0,0 +1,190 @@ +.. {#openvino_docs_ops_internal_AUGRUSequence} + +AUGRUSequence +============= + +**Versioned name**: *AUGRUSequence* + +**Category**: *Sequence processing* + +**Short description**: *AUGRUSequence* operation represents a series of AUGRU cells +(GRU with attentional update gate). + +**Detailed description**: The main difference between *AUGRUSequence* and +:doc:`GRUSequence <../sequence/gru-sequence-5>` is the additional attention score +input ``A``, which is a multiplier for the update gate. The AUGRU formula is based on +the `paper arXiv:1809.03672 `__. + +.. code-block:: py + + AUGRU formula: + * - matrix multiplication + (.) 
- Hadamard product (element-wise) + + f, g - activation functions + z - update gate, r - reset gate, h - hidden gate + a - attention score + + rt = f(Xt*(Wr^T) + Ht-1*(Rr^T) + Wbr + Rbr) + zt = f(Xt*(Wz^T) + Ht-1*(Rz^T) + Wbz + Rbz) + ht = g(Xt*(Wh^T) + (rt (.) Ht-1)*(Rh^T) + Rbh + Wbh) # 'linear_before_reset' is False + + zt' = (1 - at) (.) zt # multiplication by attention score + + Ht = (1 - zt') (.) ht + zt' (.) Ht-1 + + +Activation functions for gates: *sigmoid* for f, *tanh* for g. +Only ``forward`` direction is supported, so ``num_directions`` is always equal to ``1``. + +**Attributes** + +* *hidden_size* + + * **Description**: *hidden_size* specifies hidden state size. + * **Range of values**: a positive integer + * **Type**: ``int`` + * **Required**: *yes* + +* *activations* + + * **Description**: activation functions for gates + * **Range of values**: *sigmoid*, *tanh* + * **Type**: a list of strings + * **Default value**: *sigmoid* for f, *tanh* for g + * **Required**: *no* + +* *activations_alpha, activations_beta* + + * **Description**: *activations_alpha, activations_beta* attributes of functions; + applicability and meaning of these attributes depends on chosen activation functions + * **Range of values**: [] + * **Type**: ``float[]`` + * **Default value**: [] + * **Required**: *no* + +* *clip* + + * **Description**: *clip* specifies bound values *[-C, C]* for tensor clipping. + Clipping is performed before activations. + * **Range of values**: ``0.`` + * **Type**: ``float`` + * **Default value**: ``0.`` that means the clipping is not applied + * **Required**: *no* + +* *direction* + + * **Description**: Specify if the RNN is forward, reverse, or bidirectional. If it is + one of *forward* or *reverse* then ``num_directions = 1``, if it is *bidirectional*, + then ``num_directions = 2``. This ``num_directions`` value specifies input/output + shape requirements. + * **Range of values**: *forward* + * **Type**: ``string`` + * **Default value**: *forward* + * **Required**: *no* + +* *linear_before_reset* + + * **Description**: *linear_before_reset* flag denotes, if the output of hidden gate is + multiplied by the reset gate before or after linear transformation. + * **Range of values**: False + * **Type**: ``boolean`` + * **Default value**: False + * **Required**: *no* + +**Inputs** + +* **1**: ``X`` - 3D tensor of type *T1* ``[batch_size, seq_length, input_size]``, + input data. **Required.** + +* **2**: ``H_t`` - 3D tensor of type *T1* and shape ``[batch_size, num_directions, + hidden_size]``. Input with initial hidden state data. **Required.** + +* **3**: ``sequence_lengths`` - 1D tensor of type *T2* and shape ``[batch_size]``. + Specifies real sequence lengths for each batch element. **Required.** + +* **4**: ``W`` - 3D tensor of type *T1* and shape ``[num_directions, 3 * hidden_size, + input_size]``. The weights for matrix multiplication, gate order: zrh. **Required.** + +* **5**: ``R`` - 3D tensor of type *T1* and shape ``[num_directions, 3 * hidden_size, + hidden_size]``. The recurrence weights for matrix multiplication, + gate order: zrh. **Required.** + +* **6**: ``B`` - 2D tensor of type *T1*. The biases. If *linear_before_reset* is set + to ``False``, then the shape is ``[num_directions, 3 * hidden_size]``, + gate order: zrh. Otherwise the shape is ``[num_directions, 4 * hidden_size]`` - the sum of + biases for z and r gates (weights and recurrence weights), the biases for h gate are + placed separately. 
**Required.** + +* **7**: ``A`` - 3D tensor of type *T1* ``[batch_size, seq_length, 1]``, + the attention score. **Required.** + +**Outputs** + +* **1**: ``Y`` - 4D tensor of type *T1* ``[batch_size, num_directions, seq_length, + hidden_size]``, concatenation of all the intermediate output values of the hidden. + +* **2**: ``Ho`` - 3D tensor of type *T1* ``[batch_size, num_directions, hidden_size]``, + the last output value of hidden state. + +**Types** + +* *T1*: any supported floating-point type. +* *T2*: any supported integer type. + +**Example** + +.. code-block:: xml + :force: + + + + + + 1 + 4 + 16 + + + 1 + 1 + 128 + + + 1 + + + 1 + 384 + 16 + + + 1 + 384 + 128 + + + 1 + 384 + + + 1 + 4 + 1 + + + + + 1 + 1 + 4 + 128 + + + 1 + 1 + 128 + + + + diff --git a/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/movement/depth-to-space-1.rst b/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/movement/depth-to-space-1.rst index 1df751ac0c5f68..1f8a380b5bde3b 100644 --- a/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/movement/depth-to-space-1.rst +++ b/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/movement/depth-to-space-1.rst @@ -12,15 +12,20 @@ DepthToSpace **Category**: *Data movement* -**Short description**: *DepthToSpace* operation rearranges data from the depth dimension of the input tensor into spatial dimensions of the output tensor. +**Short description**: *DepthToSpace* operation rearranges data from the depth dimension +of the input tensor into spatial dimensions of the output tensor. **Detailed description** -*DepthToSpace* operation permutes elements from the input tensor with shape ``[N, C, D1, D2, ..., DK]``, to the output tensor where values from the input depth dimension (features) ``C`` are moved to spatial blocks in ``D1``, ..., ``DK``. +*DepthToSpace* operation permutes elements from the input tensor with shape ``[N, C, D1, +D2, ..., DK]``, to the output tensor where values from the input depth dimension +(features) ``C`` are moved to spatial blocks in ``D1``, ..., ``DK``. -The operation is equivalent to the following transformation of the input tensor ``data`` with ``K`` spatial dimensions of shape ``[N, C, D1, D2, ..., DK]`` to *Y* output tensor. If ``mode = blocks_first``: +The operation is equivalent to the following transformation of the input tensor ``data`` +with ``K`` spatial dimensions of shape ``[N, C, D1, D2, ..., DK]`` to *Y* output tensor. +If ``mode = blocks_first``: -.. code-block:: cpp +.. code-block:: py x' = reshape(data, [N, block_size, block_size, ..., block_size, C / (block_size ^ K), D1, D2, ..., DK]) x'' = transpose(x', [0, K + 1, K + 2, 1, K + 3, 2, K + 4, 3, ..., K + (K + 1), K]) @@ -28,7 +33,7 @@ The operation is equivalent to the following transformation of the input tensor If ``mode = depth_first``: -.. code-block:: cpp +.. 
code-block:: py x' = reshape(data, [N, C / (block_size ^ K), block_size, block_size, ..., block_size, D1, D2, ..., DK]) x'' = transpose(x', [0, 1, K + 2, 2, K + 3, 3, K + 4, 4, ..., K + (K + 1), K + 1]) diff --git a/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/movement/space-to-depth-1.rst b/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/movement/space-to-depth-1.rst index 47ac3bf3fe3c98..599b638c970454 100644 --- a/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/movement/space-to-depth-1.rst +++ b/docs/articles_en/documentation/openvino-ir-format/operation-sets/operation-specs/movement/space-to-depth-1.rst @@ -5,23 +5,28 @@ SpaceToDepth .. meta:: - :description: Learn about SpaceToDepth-1 - a data movement operation, + :description: Learn about SpaceToDepth-1 - a data movement operation, which can be performed on a single input tensor. **Versioned name**: *SpaceToDepth-1* **Category**: *Data movement* -**Short description**: *SpaceToDepth* operation rearranges data from the spatial dimensions of the input tensor into depth dimension of the output tensor. +**Short description**: *SpaceToDepth* operation rearranges data from the spatial dimensions +of the input tensor into depth dimension of the output tensor. **Detailed description** -*SpaceToDepth* operation permutes element from the input tensor with shape ``[N, C, D1, D2, ..., DK]``, to the output tensor where values from the input spatial dimensions ``D1, D2, ..., DK`` are moved to the new depth dimension. +*SpaceToDepth* operation permutes element from the input tensor with shape ``[N, C, D1, D2, +..., DK]``, to the output tensor where values from the input spatial dimensions ``D1, D2, +..., DK`` are moved to the new depth dimension. -The operation is equivalent to the following transformation of the input tensor ``data`` with ``K`` spatial dimensions of shape ``[N, C, D1, D2, ..., DK]`` to *Y* output tensor. If ``mode = blocks_first``: +The operation is equivalent to the following transformation of the input tensor ``data`` +with ``K`` spatial dimensions of shape ``[N, C, D1, D2, ..., DK]`` to *Y* output tensor. +If ``mode = blocks_first``: -.. code-block:: cpp +.. code-block:: py x' = reshape(data, [N, C, D1 / block_size, block_size, D2 / block_size, block_size, ... , DK / block_size, block_size]) @@ -31,7 +36,7 @@ The operation is equivalent to the following transformation of the input tensor If ``mode = depth_first``: -.. code-block:: cpp +.. code-block:: py x' = reshape(data, [N, C, D1 / block_size, block_size, D2 / block_size, block_size, ..., DK / block_size, block_size]) @@ -43,7 +48,8 @@ If ``mode = depth_first``: * *block_size* - * **Description**: specifies the size of the value block to be moved. The spatial dimensions must be evenly divided by ``block_size``. + * **Description**: specifies the size of the value block to be moved. The spatial + dimensions must be evenly divided by ``block_size``. * **Range of values**: a positive integer * **Type**: ``int`` * **Default value**: 1 @@ -51,9 +57,10 @@ If ``mode = depth_first``: * *mode* - * **Description**: specifies how the output depth dimension is gathered from block coordinates and the old depth dimension. + * **Description**: specifies how the output depth dimension is gathered from block + coordinates and the old depth dimension. 
* **Range of values**: - + * *blocks_first*: the output depth is gathered from ``[block_size, ..., block_size, C]`` * *depth_first*: the output depth is gathered from ``[C, block_size, ..., block_size]`` * **Type**: ``string`` @@ -61,11 +68,12 @@ If ``mode = depth_first``: **Inputs** -* **1**: ``data`` - input tensor of type *T* with rank >= 3. **Required.** +* **1**: ``data`` - input tensor of type *T* with rank >= 3. **Required.** **Outputs** -* **1**: permuted tensor of type *T* and shape ``[N, C * (block_size ^ K), D1 / block_size, D2 / block_size, ..., DK / block_size]``. +* **1**: permuted tensor of type *T* and shape ``[N, C * (block_size ^ K), D1 / block_size, + D2 / block_size, ..., DK / block_size]``. **Types** diff --git a/docs/articles_en/get-started/configurations/configurations-intel-gpu.rst b/docs/articles_en/get-started/configurations/configurations-intel-gpu.rst index 88205f18135298..e6d8b3a4170d04 100644 --- a/docs/articles_en/get-started/configurations/configurations-intel-gpu.rst +++ b/docs/articles_en/get-started/configurations/configurations-intel-gpu.rst @@ -98,7 +98,7 @@ To check if the driver has been installed: Your device driver has been updated and is now ready to use your GPU. -.. _wsl-install: +.. _wsl_install: Windows Subsystem for Linux (WSL) ################################# @@ -111,7 +111,7 @@ WSL allows developers to run a GNU/Linux development environment for the Windows Below are the required steps to make it work with OpenVINO: -- Install the GPU drivers as described :ref:`above `. +- Install the GPU drivers as described :ref:`above `. - Run the following commands in PowerShell to view the latest version of WSL2: .. code-block:: sh diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-archive-macos.rst b/docs/articles_en/get-started/install-openvino/install-openvino-archive-macos.rst index 07b87d20b48611..31526767b89fac 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-archive-macos.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-archive-macos.rst @@ -203,8 +203,8 @@ Additional Resources #################### * :doc:`Troubleshooting Guide for OpenVINO Installation & Configuration <../install-openvino>` -* Converting models for use with OpenVINO™: :ref:`Model Optimizer User Guide ` -* Writing your own OpenVINO™ applications: :ref:`OpenVINO™ Runtime User Guide ` +* :doc:`Convert models for use with OpenVINO™ <../../../openvino-workflow/model-preparation/convert-model-to-ir>` +* :doc:`Write your own OpenVINO™ applications <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` * Sample applications: :doc:`OpenVINO™ Toolkit Samples Overview <../../../learn-openvino/openvino-samples>` * Pre-trained deep learning models: :doc:`Overview of OpenVINO™ Toolkit Pre-Trained Models <../../../documentation/legacy-features/model-zoo>` * IoT libraries and code samples in the GitHUB repository: `Intel® IoT Developer Kit `__ diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-archive-windows.rst b/docs/articles_en/get-started/install-openvino/install-openvino-archive-windows.rst index 710b927bb7bddc..db50ff234cb7ed 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-archive-windows.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-archive-windows.rst @@ -248,8 +248,8 @@ Additional Resources #################### * :doc:`Troubleshooting Guide for OpenVINO Installation & Configuration 
<../install-openvino>` -* Converting models for use with OpenVINO™: :ref:`Model Optimizer Developer Guide ` -* Writing your own OpenVINO™ applications: :ref:`OpenVINO™ Runtime User Guide ` +* :doc:`Convert models for use with OpenVINO™ <../../../openvino-workflow/model-preparation/convert-model-to-ir>` +* :doc:`Write your own OpenVINO™ applications <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` * Sample applications: :doc:`OpenVINO™ Toolkit Samples Overview <../../../learn-openvino/openvino-samples>` * Pre-trained deep learning models: :doc:`Overview of OpenVINO™ Toolkit Pre-Trained Models <../../../documentation/legacy-features/model-zoo>` * IoT libraries and code samples in the GitHUB repository: `Intel® IoT Developer Kit `__ diff --git a/docs/articles_en/openvino-workflow/running-inference/changing-input-shape.rst b/docs/articles_en/openvino-workflow/running-inference/changing-input-shape.rst index c2be99d7f475c0..4ddfa5d8b7b863 100644 --- a/docs/articles_en/openvino-workflow/running-inference/changing-input-shape.rst +++ b/docs/articles_en/openvino-workflow/running-inference/changing-input-shape.rst @@ -191,7 +191,7 @@ Once you set the input shape of the model, call the ``compile_model`` method to get a ``CompiledModel`` object for inference with updated shapes. There are other approaches to change model input shapes during the stage of -:ref:`IR generation ` or :ref:`model representation ` in OpenVINO Runtime. +:doc:`IR generation <../model-preparation/setting-input-shapes>` or :doc:`model representation <./integrate-openvino-with-your-application/model-representation>` in OpenVINO Runtime. .. important:: diff --git a/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.rst b/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.rst index 126051473c79b6..023b2d1f189b4e 100644 --- a/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.rst +++ b/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device.rst @@ -213,7 +213,7 @@ Alternatively, it can be enabled explicitly via the device notion, for example ` :fragment: compile_model_auto_batch -For more details, see the :doc:`Automatic batching`. +For more details, see the :doc:`Automatic batching `. Multi-stream Execution +++++++++++++++++++++++++++++++++++++++ @@ -230,7 +230,7 @@ which means that the incoming infer requests can be processed simultaneously. When multiple inferences of the same model need to be executed in parallel, the multi-stream feature is preferred to multiple instances of the model or application. The reason for this is that the implementation of streams in the GPU plugin supports weight memory sharing across streams, thus, memory consumption may be lower, compared to the other approaches. -For more details, see the :doc:`optimization guide<../optimize-inference>`. +For more details, see the :doc:`optimization guide <../optimize-inference>`. Dynamic Shapes +++++++++++++++++++++++++++++++++++++++ @@ -365,9 +365,9 @@ The GPU plugin has the following additional preprocessing options: With such preprocessing, GPU plugin will expect ``ov::intel_gpu::ocl::ClImage2DTensor`` (or derived) to be passed for each NV12 plane via ``ov::InferRequest::set_tensor()`` or ``ov::InferRequest::set_tensors()`` methods. -For usage examples, refer to the :doc:`RemoteTensor API`. +For usage examples, refer to the :doc:`RemoteTensor API `. 
-For more details, see the :doc:`preprocessing API<../optimize-inference/optimize-preprocessing>`. +For more details, see the :doc:`preprocessing API <../optimize-inference/optimize-preprocessing>`. Model Caching +++++++++++++++++++++++++++++++++++++++ @@ -465,7 +465,7 @@ GPU Performance Checklist: Summary Since OpenVINO relies on the OpenCL kernels for the GPU implementation, many general OpenCL tips apply: -- Prefer ``FP16`` inference precision over ``FP32``, as Model Conversion API can generate both variants, and the ``FP32`` is the default. To learn about optimization options, see :doc:`Optimization Guide<../../model-optimization>`. +- Prefer ``FP16`` inference precision over ``FP32``, as Model Conversion API can generate both variants, and the ``FP32`` is the default. To learn about optimization options, see :doc:`Optimization Guide <../../model-optimization>`. - Try to group individual infer jobs by using :doc:`automatic batching `. - Consider :doc:`caching <../optimize-inference/optimizing-latency/model-caching-overview>` to minimize model load time. - If your application performs inference on the CPU alongside the GPU, or otherwise loads the host heavily, make sure that the OpenCL driver threads do not starve. :doc:`CPU configuration options ` can be used to limit the number of inference threads for the CPU plugin. diff --git a/docs/articles_en/openvino-workflow/running-inference/integrate-openvino-with-your-application.rst b/docs/articles_en/openvino-workflow/running-inference/integrate-openvino-with-your-application.rst index ff8c49d06f2107..9cdc6cae6460fc 100644 --- a/docs/articles_en/openvino-workflow/running-inference/integrate-openvino-with-your-application.rst +++ b/docs/articles_en/openvino-workflow/running-inference/integrate-openvino-with-your-application.rst @@ -394,6 +394,7 @@ Create Structure for project: .. doxygensnippet:: docs/snippets/src/main.cpp :language: cpp :fragment: [part7] + :force: .. tab-item:: C :sync: c @@ -401,6 +402,7 @@ Create Structure for project: .. doxygensnippet:: docs/snippets/src/main.c :language: cpp :fragment: [part7] + :force: Create Cmake Script diff --git a/docs/articles_en/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.rst b/docs/articles_en/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.rst index 776bd0f37f986c..8fdc0d851631c2 100644 --- a/docs/articles_en/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.rst +++ b/docs/articles_en/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.rst @@ -77,7 +77,7 @@ the conversion will throw an exception. For example: :sync: py .. doxygensnippet:: docs/snippets/ov_model_snippets.py - :language: cpp + :language: python :fragment: [ov:partial_shape] .. tab-item:: C++ @@ -191,7 +191,7 @@ OpenVINO™ provides several debug capabilities: :sync: py .. doxygensnippet:: docs/snippets/ov_model_snippets.py - :language: cpp + :language: python :fragment: [ov:visualize] .. tab-item:: C++ @@ -227,7 +227,7 @@ OpenVINO™ provides several debug capabilities: :sync: py .. doxygensnippet:: docs/snippets/ov_model_snippets.py - :language: cpp + :language: python :fragment: [ov:serialize] .. 
tab-item:: C++ diff --git a/docs/articles_en/openvino-workflow/running-inference/optimize-inference.rst b/docs/articles_en/openvino-workflow/running-inference/optimize-inference.rst index 5dd499e03c5aeb..55555ac83a37de 100644 --- a/docs/articles_en/openvino-workflow/running-inference/optimize-inference.rst +++ b/docs/articles_en/openvino-workflow/running-inference/optimize-inference.rst @@ -51,7 +51,7 @@ Although inference performed in OpenVINO Runtime can be configured with a multit Secondly, such optimization may not translate well to other device-model combinations. In other words, one set of execution parameters is likely to result in different performance when used under different conditions. For example: -* both the CPU and GPU support the notion of :ref:`streams `, yet they deduce their optimal number very differently. +* both the CPU and GPU support the notion of :doc:`streams <./optimize-inference/optimizing-throughput/advanced_throughput_options>`, yet they deduce their optimal number very differently. * Even among devices of the same type, different execution configurations can be considered optimal, as in the case of instruction sets or the number of cores for the CPU and the batch size for the GPU. * Different models have different optimal parameter configurations, considering factors such as compute vs memory-bandwidth, inference precision, and possible model quantization. * Execution "scheduling" impacts performance strongly and is highly device-specific, for example, GPU-oriented optimizations like batching, combining multiple inputs to achieve the optimal throughput, :doc:`do not always map well to the CPU `. diff --git a/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency.rst b/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency.rst index b612de199079f3..c4db50b827f0d6 100644 --- a/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency.rst +++ b/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency.rst @@ -31,7 +31,7 @@ Typically, human expertise is required to get more "throughput" out of the devic :doc:`OpenVINO performance hints ` is a recommended way for performance configuration, which is both device-agnostic and future-proof. -**When multiple models are to be used simultaneously**, consider running inference on separate devices for each of them. Finally, when multiple models are executed in parallel on a device, using additional ``ov::hint::model_priority`` may help to define relative priorities of the models. Refer to the documentation on the :ref:`OpenVINO feature support for devices <../../../about-openvino/compatibility-and-support/supported-devices>` to check if your device supports the feature. +**When multiple models are to be used simultaneously**, consider running inference on separate devices for each of them. Finally, when multiple models are executed in parallel on a device, using additional ``ov::hint::model_priority`` may help to define relative priorities of the models. Refer to the documentation on the :doc:`OpenVINO feature support for devices <../../../../about-openvino/compatibility-and-support/supported-devices>` to check if your device supports the feature. 
**First-Inference Latency and Model Load/Compile Time** diff --git a/docs/articles_en/openvino-workflow/running-inference/stateful-models.rst b/docs/articles_en/openvino-workflow/running-inference/stateful-models.rst index 3b8d289438aef8..735f30b07ddc1a 100644 --- a/docs/articles_en/openvino-workflow/running-inference/stateful-models.rst +++ b/docs/articles_en/openvino-workflow/running-inference/stateful-models.rst @@ -72,11 +72,11 @@ There are three methods of turning an OpenVINO model into a stateful one: are recognized and applied automatically. The drawback is, the tool does not work with all models. -* :ref:`MakeStateful transformation.` - enables the user to choose which +* :ref:`MakeStateful transformation ` - enables the user to choose which pairs of Parameter and Result to replace, as long as the paired operations are of the same shape and element type. -* :ref:`LowLatency2 transformation.` - automatically detects and replaces +* :ref:`LowLatency2 transformation ` - automatically detects and replaces Parameter and Result pairs connected to hidden and cell state inputs of LSTM/RNN/GRU operations or Loop/TensorIterator operations. diff --git a/docs/articles_en/openvino-workflow/running-inference/stateful-models/obtaining-stateful-openvino-model.rst b/docs/articles_en/openvino-workflow/running-inference/stateful-models/obtaining-stateful-openvino-model.rst index 307c385ab5b555..5ac716aca8f607 100644 --- a/docs/articles_en/openvino-workflow/running-inference/stateful-models/obtaining-stateful-openvino-model.rst +++ b/docs/articles_en/openvino-workflow/running-inference/stateful-models/obtaining-stateful-openvino-model.rst @@ -100,7 +100,7 @@ input, as shown in the picture above. These inputs should set the initial value initialization of ReadValue operations. However, such initialization is not supported in the current State API implementation. Input values are ignored, and the initial values for the ReadValue operations are set to zeros unless the user specifies otherwise via -:ref:`State API `. +:doc:`State API <../stateful-models>`. Applying LowLatency2 Transformation ++++++++++++++++++++++++++++++++++++ @@ -181,7 +181,7 @@ Applying LowLatency2 Transformation :fragment: [ov:low_latency_2] -4. Use state API. See sections :ref:`OpenVINO State API `, +4. Use state API. See sections :doc:`OpenVINO State API <../stateful-models>`, :ref:`Stateful Model Inference `. .. image:: ../../../_static/images/low_latency_limitation_2.svg diff --git a/docs/sphinx_setup/api/nodejs_api/addon.rst b/docs/sphinx_setup/api/nodejs_api/addon.rst index a3a3b9722e1837..27542e0b7be1be 100644 --- a/docs/sphinx_setup/api/nodejs_api/addon.rst +++ b/docs/sphinx_setup/api/nodejs_api/addon.rst @@ -33,7 +33,7 @@ Property addon The **openvino-node** package exports ``addon`` which contains the following properties: -.. code-block:: json +.. code-block:: ts interface NodeAddon { Core: CoreConstructor; @@ -55,7 +55,7 @@ Properties .. rubric:: Core -.. code-block:: json +.. code-block:: ts Core: CoreConstructor @@ -70,7 +70,7 @@ Properties -.. code-block:: json +.. code-block:: ts PartialShape: PartialShapeConstructor @@ -83,7 +83,7 @@ Properties .. rubric:: Tensor -.. code-block:: json +.. code-block:: ts Tensor: TensorConstructor @@ -98,7 +98,7 @@ Properties -.. code-block:: json +.. code-block:: ts element: typeof element @@ -112,7 +112,7 @@ Properties -.. code-block:: json +.. 
code-block:: ts preprocess: { PrePostProcessor: PrePostProcessorConstructor; diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/enums/element.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/enums/element.rst index 15ba79369280b3..b35430cbc645cd 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/enums/element.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/enums/element.rst @@ -5,7 +5,7 @@ Enumeration element -.. code-block:: json +.. code-block:: ts f32: number @@ -16,7 +16,7 @@ Enumeration element .. rubric:: f64 -.. code-block:: json +.. code-block:: ts f64: number @@ -27,7 +27,7 @@ Enumeration element .. rubric:: i16 -.. code-block:: json +.. code-block:: ts i16: number @@ -39,7 +39,7 @@ Enumeration element -.. code-block:: json +.. code-block:: ts i32: number @@ -51,7 +51,7 @@ Enumeration element -.. code-block:: json +.. code-block:: ts i64: number @@ -63,7 +63,7 @@ Enumeration element -.. code-block:: json +.. code-block:: ts i8: number @@ -75,7 +75,7 @@ Enumeration element -.. code-block:: json +.. code-block:: ts u16: number @@ -87,7 +87,7 @@ Enumeration element -.. code-block:: json +.. code-block:: ts u32: number @@ -99,7 +99,7 @@ Enumeration element -.. code-block:: json +.. code-block:: ts u64: number @@ -111,7 +111,7 @@ Enumeration element -.. code-block:: json +.. code-block:: ts u8: number diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/enums/resizeAlgorithm.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/enums/resizeAlgorithm.rst index 340e5afe81668b..ca615462a779c0 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/enums/resizeAlgorithm.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/enums/resizeAlgorithm.rst @@ -5,7 +5,7 @@ Enumeration resizeAlgorithm -.. code-block:: json +.. code-block:: ts RESIZE_CUBIC: number @@ -17,7 +17,7 @@ Enumeration resizeAlgorithm -.. code-block:: json +.. code-block:: ts RESIZE_LINEAR: number @@ -29,7 +29,7 @@ Enumeration resizeAlgorithm -.. code-block:: json +.. code-block:: ts RESIZE_NEAREST: number diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/CompiledModel.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/CompiledModel.rst index 4156012a67c1df..ce151327f985a3 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/CompiledModel.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/CompiledModel.rst @@ -1,7 +1,7 @@ Interface CompiledModel ======================= -.. code-block:: json +.. code-block:: ts interface CompiledModel {     inputs: Output[]; @@ -22,7 +22,7 @@ Properties -.. code-block:: json +.. code-block:: ts inputs: Output [] @@ -35,7 +35,7 @@ Properties -.. code-block:: json +.. code-block:: ts outputs: Output [] @@ -50,7 +50,7 @@ Methods .. rubric:: createInferRequest -.. code-block:: json +.. code-block:: ts createInferRequest(): InferRequest @@ -65,7 +65,7 @@ Methods -.. code-block:: json +.. code-block:: ts input(nameOrId?): Output @@ -74,7 +74,7 @@ Methods - ``Optional`` - .. code-block:: json + .. code-block:: ts nameOrId: string|number @@ -88,13 +88,13 @@ Methods .. rubric:: output -.. code-block:: json +.. code-block:: ts output(nameOrId?): Output - ``Optional`` - .. code-block:: json + .. 
code-block:: ts nameOrId: string|number diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/Core.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/Core.rst index 152d88d0ba6274..3d411edf48e552 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/Core.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/Core.rst @@ -1,7 +1,7 @@ Interface Core ============== -.. code-block:: json +.. code-block:: ts interface Core {     compileModel(model, device, config?): Promise; @@ -23,7 +23,7 @@ Methods .. rubric:: compileModel -.. code-block:: json +.. code-block:: ts compileModel(model, device, config?): Promise @@ -35,7 +35,7 @@ Methods - device: string - ``Optional`` - .. code-block:: json + .. code-block:: ts config: {     [option: string]: string; @@ -54,7 +54,7 @@ Methods .. rubric:: compileModelSync -.. code-block:: json +.. code-block:: ts compileModelSync(model, device, config?): CompiledModel @@ -65,7 +65,7 @@ Methods - device: string - ``Optional`` - .. code-block:: json + .. code-block:: ts config: {     [option: string]: string; @@ -84,7 +84,7 @@ Methods .. rubric:: readModel -.. code-block:: json +.. code-block:: ts readModel(modelPath, weightsPath?): Promise @@ -94,7 +94,7 @@ Methods - modelPath: string - ``Optional`` - .. code-block:: json + .. code-block:: ts weightsPath: string @@ -104,7 +104,7 @@ Methods - Defined in `addon.ts:34 `__ -.. code-block:: json +.. code-block:: ts readModel(modelBuffer, weightsBuffer?): Promise @@ -113,7 +113,7 @@ Methods - modelBuffer: Uint8Array - ``Optional`` - .. code-block:: json + .. code-block:: ts weightsBuffer: Uint8Array @@ -127,7 +127,7 @@ Methods .. rubric:: readModelSync -.. code-block:: json +.. code-block:: ts readModelSync(modelPath, weightsPath?): Model @@ -137,7 +137,7 @@ Methods - modelPath: string - ``Optional`` - .. code-block:: json + .. code-block:: ts weightsPath: string @@ -146,7 +146,7 @@ Methods - Defined in `addon.ts:37 `__ -.. code-block:: json +.. code-block:: ts readModelSync(modelBuffer, weightsBuffer?): Model @@ -156,7 +156,7 @@ Methods - modelBuffer: Uint8Array - ``Optional`` - .. code-block:: json + .. code-block:: ts weightsBuffer: Uint8Array diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/CoreConstructor.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/CoreConstructor.rst index 4f8051a17e022c..d4a8bdb3809f65 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/CoreConstructor.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/CoreConstructor.rst @@ -1,7 +1,7 @@ Interface CoreConstructor ========================= -.. code-block:: json +.. code-block:: ts interface CoreConstructor { new Core(): Core; @@ -12,7 +12,7 @@ Interface CoreConstructor .. rubric:: constructor -.. code-block:: json +.. code-block:: ts new Core(): Core diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/InferRequest.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/InferRequest.rst index 1468c47d24dc85..e3ad4f67bb9fc0 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/InferRequest.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/InferRequest.rst @@ -4,7 +4,7 @@ InferRequest .. rubric:: Interface InferRequest -.. code-block:: json +.. code-block:: ts interface InferRequest { getCompiledModel(): CompiledModel; @@ -31,7 +31,7 @@ Methods .. rubric:: getCompiledModel -.. code-block:: json +.. 
code-block:: ts getCompiledModel(): CompiledModel @@ -43,7 +43,7 @@ Methods .. rubric:: getInputTensor -.. code-block:: json +.. code-block:: ts getInputTensor(idx?): Tensor @@ -52,7 +52,7 @@ Methods - ``Optional`` - .. code-block:: json + .. code-block:: ts idx: number @@ -64,7 +64,7 @@ Methods .. rubric:: getOutputTensor -.. code-block:: json +.. code-block:: ts getOutputTensor(idx?): Tensor @@ -73,7 +73,7 @@ Methods - ``Optional`` - .. code-block:: json + .. code-block:: ts idx: number @@ -85,7 +85,7 @@ Methods .. rubric:: getTensor -.. code-block:: json +.. code-block:: ts getTensor(nameOrOutput): Tensor @@ -101,7 +101,7 @@ Methods .. rubric:: infer -.. code-block:: json +.. code-block:: ts infer(inputData?): { [outputName: string]: Tensor; @@ -112,7 +112,7 @@ Methods - ``Optional`` - .. code-block:: json + .. code-block:: ts inputData: { [inputName: string]: Tensor | SupportedTypedArray; @@ -120,7 +120,7 @@ Methods **Returns** -.. code-block:: json +.. code-block:: ts { [outputName: string]: Tensor; @@ -135,7 +135,7 @@ Methods .. rubric:: inferAsync -.. code-block:: json +.. code-block:: ts inferAsync(inputData): Promise<{ [outputName: string]: Tensor; @@ -145,7 +145,7 @@ Methods - - .. code-block:: json + .. code-block:: ts inputData: Tensor[] | { [inputName: string]: Tensor; @@ -153,7 +153,7 @@ Methods **Returns** -.. code-block:: json +.. code-block:: ts Promise<{ [outputName: string]: Tensor; @@ -165,7 +165,7 @@ Methods .. rubric:: setInputTensor -.. code-block:: json +.. code-block:: ts setInputTensor(idxOrTensor, tensor?): void @@ -176,7 +176,7 @@ Methods - ``Optional`` - .. code-block:: json + .. code-block:: ts tensor: Tensor @@ -189,7 +189,7 @@ Methods .. rubric:: setOutputTensor -.. code-block:: json +.. code-block:: ts setOutputTensor(idxOrTensor, tensor?): void @@ -199,7 +199,7 @@ Methods - idxOrTensor: number| :doc:`Tensor ` - ``Optional`` - .. code-block:: json + .. code-block:: ts tensor: Tensor @@ -212,7 +212,7 @@ Methods .. rubric:: setTensor -.. code-block:: json +.. code-block:: ts setTensor(name, tensor): void diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/InputInfo.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/InputInfo.rst index 0262d3b17efc77..4cf0f72b6f0d62 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/InputInfo.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/InputInfo.rst @@ -1,7 +1,7 @@ Interface InputInfo =================== -.. code-block:: json +.. code-block:: ts interface InputInfo { model(): InputModelInfo; @@ -19,7 +19,7 @@ Methods -.. code-block:: json +.. code-block:: ts model(): InputModelInfo @@ -32,7 +32,7 @@ Methods .. rubric:: preprocess -.. code-block:: json +.. code-block:: ts preprocess(): PreProcessSteps @@ -45,7 +45,7 @@ Methods .. rubric:: tensor -.. code-block:: json +.. code-block:: ts tensor(): InputTensorInfo diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/InputModelInfo.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/InputModelInfo.rst index 2a17dcc7840bf9..8ec81609754743 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/InputModelInfo.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/InputModelInfo.rst @@ -2,7 +2,7 @@ Interface InputModelInfo ======================== -.. code-block:: json +.. code-block:: ts interface InputModelInfo { setLayout(layout): InputModelInfo; @@ -18,7 +18,7 @@ Methods -.. code-block:: json +.. 
code-block:: ts setLayout(layout): InputModelInfo diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/InputTensorInfo.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/InputTensorInfo.rst index 4d3d8e0c0be29b..54f74c7deaaef9 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/InputTensorInfo.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/InputTensorInfo.rst @@ -2,7 +2,7 @@ Interface InputTensorInfo ========================= -.. code-block:: json +.. code-block:: ts interface InputTensorInfo { setElementType(elementType): InputTensorInfo; @@ -20,7 +20,7 @@ Methods -.. code-block:: json +.. code-block:: ts setElementType(elementType): InputTensorInfo @@ -42,7 +42,7 @@ Methods -.. code-block:: json +.. code-block:: ts setLayout(layout): InputTensorInfo @@ -59,7 +59,7 @@ Methods .. rubric:: setShape -.. code-block:: json +.. code-block:: ts setShape(shape): InputTensorInfo diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/Model.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/Model.rst index c7e96a6b4cc2af..de8e253fda0281 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/Model.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/Model.rst @@ -4,7 +4,7 @@ Interface Model .. rubric:: Interface Model -.. code-block:: json +.. code-block:: ts interface Model { inputs: Output[]; @@ -26,7 +26,7 @@ Properties -.. code-block:: json +.. code-block:: ts inputs: Output[] @@ -37,7 +37,7 @@ Properties -.. code-block:: json +.. code-block:: ts outputs: Output[] @@ -52,7 +52,7 @@ Methods .. rubric:: getName -.. code-block:: json +.. code-block:: ts getName(): string @@ -66,7 +66,7 @@ Methods .. rubric:: input -.. code-block:: json +.. code-block:: ts input(nameOrId?): Output @@ -76,7 +76,7 @@ Methods - ``Optional`` - .. code-block:: json + .. code-block:: ts nameOrId: string|number @@ -91,7 +91,7 @@ Methods .. rubric:: output -.. code-block:: json +.. code-block:: ts output(nameOrId?): Output @@ -100,7 +100,7 @@ Methods - ``Optional`` - .. code-block:: json + .. code-block:: ts nameOrId: string|number diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/Output.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/Output.rst index 6f7f59ef3b3367..ab9ce353babb38 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/Output.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/Output.rst @@ -2,7 +2,7 @@ Interface Output ================ -.. code-block:: json +.. code-block:: ts interface Output { anyName: string; @@ -23,7 +23,7 @@ Properties -.. code-block:: json +.. code-block:: ts anyName: string @@ -36,7 +36,7 @@ Properties -.. code-block:: json +.. code-block:: ts shape: number[] @@ -51,7 +51,7 @@ Methods .. rubric:: getAnyName -.. code-block:: json +.. code-block:: ts getAnyName(): string @@ -64,7 +64,7 @@ Methods .. rubric:: getPartialShape -.. code-block:: json +.. code-block:: ts getPartialShape(): PartialShape @@ -77,13 +77,13 @@ Methods .. rubric:: getShape -.. code-block:: json +.. code-block:: ts getShape(): number[] **Returns** -.. code-block:: json +.. code-block:: ts number[] @@ -93,13 +93,13 @@ Methods .. rubric:: toString -.. code-block:: json +.. code-block:: ts toString(): string **Returns** -.. code-block:: json +.. 
code-block:: ts string diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/OutputInfo.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/OutputInfo.rst index 90f5780942b2cc..2bb3a8f189dc5b 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/OutputInfo.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/OutputInfo.rst @@ -1,7 +1,7 @@ Interface OutputInfo ==================== -.. code-block:: json +.. code-block:: ts interfaceOutputInfo {     tensor(): OutputTensorInfo; @@ -18,7 +18,7 @@ Methods .. rubric:: tensor -.. code-block:: json +.. code-block:: ts tensor(): OutputTensorInfo diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/OutputTensorInfo.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/OutputTensorInfo.rst index d1187a0aec068f..97d5534b926839 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/OutputTensorInfo.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/OutputTensorInfo.rst @@ -1,7 +1,7 @@ Interface OutputTensorInfo ========================== -.. code-block:: json +.. code-block:: ts interface OutputTensorInfo { setElementType(elementType): InputTensorInfo; @@ -17,7 +17,7 @@ Methods .. rubric:: setElementType -.. code-block:: json +.. code-block:: ts setElementType(elementType): InputTensorInfo @@ -33,7 +33,7 @@ Methods .. rubric:: setLayout -.. code-block:: json +.. code-block:: ts setLayout(layout): InputTensorInfo diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PartialShape.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PartialShape.rst index 39446bb3438dff..17fc0da717f7e0 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PartialShape.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PartialShape.rst @@ -1,7 +1,7 @@ Interface PartialShape ====================== -.. code-block:: json +.. code-block:: ts interface PartialShape { getDimensions(): Dimension[]; @@ -19,7 +19,7 @@ Methods .. rubric:: getDimensions -.. code-block:: json +.. code-block:: ts getDimensions(): Dimension @@ -32,7 +32,7 @@ Methods .. rubric:: isDynamic -.. code-block:: json +.. code-block:: ts isDynamic(): boolean @@ -46,7 +46,7 @@ Methods -.. code-block:: json +.. code-block:: ts isStatic(): boolean @@ -60,7 +60,7 @@ Methods .. rubric:: toString -.. code-block:: json +.. code-block:: ts toString(): string diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PartialShapeConstructor.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PartialShapeConstructor.rst index 2b1645dc8571bd..884e3651eb12b3 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PartialShapeConstructor.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PartialShapeConstructor.rst @@ -1,7 +1,7 @@ Interface PartialShapeConstructor ================================= -.. code-block:: json +.. code-block:: ts interface PartialShapeConstructor { new PartialShape(shape): PartialShape; @@ -15,7 +15,7 @@ Interface PartialShapeConstructor -.. code-block:: json +.. 
code-block:: ts new PartialShape(shape): PartialShape diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PrePostProcessor.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PrePostProcessor.rst index cb1a8f49ae9e3c..d4cc808fa7b48a 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PrePostProcessor.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PrePostProcessor.rst @@ -1,7 +1,7 @@ Interface PrePostProcessor ========================== -.. code-block:: json +.. code-block:: ts interface PrePostProcessor { build(): PrePostProcessor; @@ -18,7 +18,7 @@ Methods .. rubric:: build -.. code-block:: json +.. code-block:: ts build(): PrePostProcessor @@ -31,7 +31,7 @@ Methods -.. code-block:: json +.. code-block:: ts input(idxOrTensorName?): InputInfo @@ -40,7 +40,7 @@ Methods - ``Optional`` -.. code-block:: json +.. code-block:: ts idxOrTensorName: string|number @@ -53,7 +53,7 @@ Methods .. rubric:: output -.. code-block:: json +.. code-block:: ts output(idxOrTensorName?): OutputInfo @@ -62,7 +62,7 @@ Methods - ``Optional`` - .. code-block:: json + .. code-block:: ts idxOrTensorName: string|number diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PrePostProcessorConstructor.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PrePostProcessorConstructor.rst index 3d7ea4424df5c3..aded5070d56544 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PrePostProcessorConstructor.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PrePostProcessorConstructor.rst @@ -2,7 +2,7 @@ Interface PrePostProcessorConstructor ===================================== -.. code-block:: json +.. code-block:: ts interface PrePostProcessorConstructor { new PrePostProcessor(model): PrePostProcessor; @@ -14,7 +14,7 @@ Interface PrePostProcessorConstructor .. rubric:: constructor -.. code-block:: json +.. code-block:: ts new PrePostProcessor(model): PrePostProcessor diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PreProcessSteps.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PreProcessSteps.rst index c519210ea657c4..a808132a307ce9 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PreProcessSteps.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/PreProcessSteps.rst @@ -2,7 +2,7 @@ Interface PreProcessSteps ========================= -.. code-block:: json +.. code-block:: ts interface PreProcessSteps { resize(algorithm): PreProcessSteps; @@ -17,7 +17,7 @@ Methods .. rubric:: resize -.. code-block:: json +.. code-block:: ts resize(algorithm): PreProcessSteps diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/Tensor.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/Tensor.rst index ea04031e8cc30c..91fdf5d007606f 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/Tensor.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/Tensor.rst @@ -4,7 +4,7 @@ Interface Tensor .. rubric:: Interface Tensor -.. code-block:: json +.. code-block:: ts interface Tensor { data: number[]; @@ -23,7 +23,7 @@ Properties -.. code-block:: json +.. code-block:: ts data: number[] @@ -38,7 +38,7 @@ Methods .. rubric:: getData -.. code-block:: json +.. code-block:: ts getData(): number[] @@ -52,7 +52,7 @@ Methods .. rubric:: getElementType -.. code-block:: json +.. code-block:: ts getElementType(): element @@ -64,7 +64,7 @@ Methods .. rubric:: getShape -.. code-block:: json +.. 
code-block:: ts getShape(): number[] diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/TensorConstructor.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/TensorConstructor.rst index 5bf387b3bdf7bf..652eaea31db503 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/TensorConstructor.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/interfaces/TensorConstructor.rst @@ -4,7 +4,7 @@ Interface TensorConstructor .. rubric:: Interface TensorConstructor -.. code-block:: json +.. code-block:: ts interface TensorConstructor { new Tensor(type, shape, tensorData?): Tensor; @@ -17,7 +17,7 @@ Interface TensorConstructor -.. code-block:: json +.. code-block:: ts new Tensor(type, shape, tensorData?): Tensor @@ -27,7 +27,7 @@ Interface TensorConstructor - shape: number[] - ``Optional`` - .. code-block:: json + .. code-block:: ts tensorData: number[]|SupportedTypedArray diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/types/Dimension.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/types/Dimension.rst index 19fefc2922b265..863ae0a242198b 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/types/Dimension.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/types/Dimension.rst @@ -1,7 +1,7 @@ Type alias Dimension ==================== -.. code-block:: json +.. code-block:: ts Dimension: number|[number,number] diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/types/SupportedTypedArray.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/types/SupportedTypedArray.rst index e097f90f83ad88..85655f1d61c152 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/types/SupportedTypedArray.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/types/SupportedTypedArray.rst @@ -2,7 +2,7 @@ Type alias SupportedTypedArray ============================== -.. code-block:: json +.. code-block:: ts SupportedTypedArray: Int8Array | Uint8Array | Int16Array | Uint16Array | Int32Array | Uint32Array | Float32Array | Float64Array diff --git a/docs/sphinx_setup/api/nodejs_api/openvino-node/types/elementTypeString.rst b/docs/sphinx_setup/api/nodejs_api/openvino-node/types/elementTypeString.rst index e83e9a183b41e8..b5babf1cb4859a 100644 --- a/docs/sphinx_setup/api/nodejs_api/openvino-node/types/elementTypeString.rst +++ b/docs/sphinx_setup/api/nodejs_api/openvino-node/types/elementTypeString.rst @@ -1,7 +1,7 @@ Type alias elementTypeString ============================ -.. code-block:: json +.. code-block:: ts elementTypeString: "u8" | "u32" | "u16" | "u64" | "i8" | "i64" | "i32" | "i16" | "f64" | "f32"
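
The performance-hint guidance touched in the ``optimize-inference`` and ``optimizing-latency`` hunks above (letting the device derive stream counts and relative model priorities instead of hand-tuning them per device) can also be exercised from the Node.js bindings documented in this patch, through the optional ``config`` argument of ``compileModel``. The sketch below assumes that the standard OpenVINO property names ``PERFORMANCE_HINT`` and ``MODEL_PRIORITY`` are accepted there as plain string keys; neither those property names nor the ``model.xml`` path come from this patch.

.. code-block:: ts

   // Minimal sketch: request the LATENCY hint via the `config` parameter of
   // compileModel(model, device, config?) from the Core interface above.
   // PERFORMANCE_HINT / MODEL_PRIORITY are standard OpenVINO property names,
   // but passing them through this config object is an assumption.
   const { addon: ov } = require('openvino-node');

   async function compileForLatency(modelPath: string) {
     const core = new ov.Core();
     const model = await core.readModel(modelPath);
     // Let the device derive stream/thread settings from the hint instead of
     // tuning low-level, device-specific parameters by hand.
     return core.compileModel(model, 'CPU', {
       PERFORMANCE_HINT: 'LATENCY',
       // When several models share one device, a relative priority may help
       // (ov::hint::model_priority); 'MEDIUM' here is purely illustrative.
       MODEL_PRIORITY: 'MEDIUM',
     });
   }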
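
The Node.js interfaces whose snippets are retyped above (``Core``, ``Model``, ``CompiledModel``, ``InferRequest``, ``Tensor``, ``Output``) combine into the following minimal inference sketch. The model path, input shape, and device name are illustrative assumptions only.

.. code-block:: ts

   const { addon: ov } = require('openvino-node');

   async function run() {
     const core = new ov.Core();                       // CoreConstructor: new Core()
     const model = await core.readModel('model.xml');  // readModel(modelPath, weightsPath?)
     const compiled = await core.compileModel(model, 'CPU');
     const request = compiled.createInferRequest();    // CompiledModel.createInferRequest()

     // TensorConstructor: new Tensor(type, shape, tensorData?)
     const shape = [1, 3, 224, 224];
     const data = new Float32Array(shape.reduce((a, b) => a * b, 1));
     const input = new ov.Tensor(ov.element.f32, shape, data);

     // inferAsync accepts Tensor[] or { [inputName: string]: Tensor }
     const results = await request.inferAsync([input]);
     // Assumes the result map is keyed by the output's any-name (Output.anyName).
     const output = results[compiled.output(0).anyName];
     console.log(output.getShape(), output.getData());
   }

   run().catch(console.error);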
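
Similarly, the preprocessing interfaces covered above (``PrePostProcessor``, ``InputInfo``, ``InputTensorInfo``, ``PreProcessSteps``, ``InputModelInfo``, ``OutputInfo``) chain as sketched below. The ``NHWC``/``NCHW`` layouts, the ``u8`` input type, and the exposure of ``resizeAlgorithm`` under ``ov.preprocess`` are assumptions about a hypothetical image model, not statements from this patch.

.. code-block:: ts

   const { addon: ov } = require('openvino-node');

   // Sketch of the preprocessing chain documented above; layouts and the u8
   // input type are assumptions about a hypothetical image model.
   function addPreprocessing(model: any) {
     const ppp = new ov.preprocess.PrePostProcessor(model); // PrePostProcessorConstructor
     ppp.input()                                            // InputInfo
       .tensor()                                            // InputTensorInfo
       .setElementType(ov.element.u8)
       .setLayout('NHWC')
       .setShape([1, 224, 224, 3]);
     ppp.input()
       .preprocess()                                        // PreProcessSteps
       .resize(ov.preprocess.resizeAlgorithm.RESIZE_LINEAR);
     ppp.input().model().setLayout('NCHW');                 // InputModelInfo
     ppp.output().tensor().setElementType(ov.element.f32);  // OutputInfo / OutputTensorInfo
     return ppp.build();                                    // build(): PrePostProcessor
   }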