diff --git a/tools/visual-pipeline-and-platform-evaluation-tool/README.md b/tools/visual-pipeline-and-platform-evaluation-tool/README.md index 9ad9aa72a7..a29be1094f 100644 --- a/tools/visual-pipeline-and-platform-evaluation-tool/README.md +++ b/tools/visual-pipeline-and-platform-evaluation-tool/README.md @@ -1,85 +1,220 @@ # Visual Pipeline and Platform Evaluation Tool - -Assess Intel® hardware options, benchmark performance, and analyze key metrics to optimize hardware selection for AI workloads. - + +Assess Intel® hardware options, benchmark performance, and analyze key metrics to optimize hardware selection for AI +workloads. ## Overview The Visual Pipeline and Platform Evaluation Tool simplifies hardware selection for AI workloads by enabling -configuration of workload parameters, performance benchmarking, and analysis of key metrics such as throughput, -CPU usage, and GPU usage. With its intuitive interface, the tool provides actionable insights that support -optimized hardware selection and performance tuning. +configuration of workload parameters, performance benchmarking, and analysis of key metrics such as throughput, CPU +usage, and GPU usage. With its intuitive interface, the tool provides actionable insights that support optimized +hardware selection and performance tuning. ### Use Cases - - -**Evaluating Hardware for AI Workloads**: Intel® hardware options can be assessed to balance cost, performance, -and efficiency. AI workloads can be benchmarked under real-world conditions by adjusting pipeline parameters -and comparing performance metrics. +**Evaluating Hardware for AI Workloads**: Intel® hardware options can be assessed to balance cost, performance, and +efficiency. AI workloads can be benchmarked under real-world conditions by adjusting pipeline parameters and comparing +performance metrics. **Performance Benchmarking for AI Models**: Model performance targets and KPIs can be validated by testing AI inference pipelines with different accelerators to measure throughput, latency, and resource utilization. ### Key Features - - -**Optimized for Intel® AI Edge Systems**: Pipelines can be run directly on target devices for seamless Intel® -hardware integration. - -**Comprehensive Hardware Evaluation**: Metrics such as CPU frequency, GPU power usage, and memory utilization -are available for detailed analysis. +**Optimized for Intel® AI Edge Systems**: Pipelines can be run directly on target devices for seamless Intel® hardware +integration. -**Configurable AI Pipelines**: Parameters such as input channels, object detection models, and inference engines -can be adjusted to create tailored performance tests. +**Comprehensive Hardware Evaluation**: Metrics such as CPU frequency, GPU power usage, and memory utilization are +available for detailed analysis. -**Automated Video Generation**: Synthetic test videos can be generated to evaluate system performance under -controlled conditions. +**Configurable AI Pipelines**: Parameters such as input channels, object detection models, and inference engines can be +adjusted to create tailored performance tests. -## How It Works +**Automated Video Generation**: Synthetic test videos can be generated to evaluate system performance under controlled +conditions. - +### How It Works The Visual Pipeline and Platform Evaluation Tool integrates with AI-based video processing pipelines to support hardware performance evaluation. 
-![System Architecture Diagram](docs/user-guide/_assets/architecture.png) - -### **Workflow Overview** - -**Data Ingestion**: Video streams from live cameras or recorded files are provided and pipeline parameters are -configured to match evaluation needs. - -**AI Processing**: AI inference is applied using OpenVINO™ models to detect objects in the video streams. - -**Performance Evaluation**: Hardware performance metrics are collected, including CPU/GPU usage and power consumption. +#### Workflow Overview -**Visualization & Analysis**: Real-time performance metrics are displayed on the dashboard to enable comparison of -configurations and optimization of settings. +- **Data Ingestion**: Video streams from live cameras or recorded files are provided and pipeline parameters are + configured to match evaluation needs. +- **AI Processing**: AI inference is applied using OpenVINO™ models to detect objects in the video streams. +- **Performance Evaluation**: Hardware performance metrics are collected, including CPU/GPU usage and power consumption. +- **Visualization & Analysis**: Real-time performance metrics are displayed on the dashboard to enable comparison of + configurations and optimization of settings. + +### Disclaimers + +#### Video Generator Images + +Intel provides six images for demo purposes only. You must provide your own images to run the video generator or to +create videos. + +#### Human Rights + +Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel's Global Human +Rights Principles. Intel's products and software are intended only to be used in applications that do not cause or +contribute to a violation of an internationally recognized human right. + +#### Models Licensing + +[ssdlite_mobilenet_v2_INT8](https://github.com/dlstreamer/pipeline-zoo-models/tree/main/storage/ssdlite_mobilenet_v2_INT8) +(Apache 2.0) + +[resnet-50-tf_INT8](https://github.com/dlstreamer/pipeline-zoo-models/tree/main/storage/resnet-50-tf_INT8) +(Apache 2.0) + +[efficientnet-b0_INT8](https://github.com/dlstreamer/pipeline-zoo-models/tree/main/storage/efficientnet-b0_INT8) +(Apache 2.0) + +[yolov5s-416_INT8](https://github.com/dlstreamer/pipeline-zoo-models/tree/main/storage/yolov5s-416_INT8) (GPL v3) + +Dataset Used: [Intel IoT DevKit Sample Videos](https://github.com/intel-iot-devkit/sample-videos?tab=readme-ov-file) +(CC-BY-4.0)* + +#### Data Transparency + +Refer to Model cards included in this folder for more information on the models and their usage in the Visual Pipeline +and Platform Evaluation tool. + +## Release Notes + +Details about the changes, improvements, and known issues in this release of the application. + +### Current Release: Version 2025.2.0 + +**Release Date**: 2025-12-10 + +#### New Features (2025.2.0) + +- **New graphical user interface (GUI)**: A visual representation of the underlying `gst-launch` pipeline graph is + provided, presenting elements, links, and branches in an interactive view. Pipeline parameters (such as sources, + models, and performance-related options) can be inspected and modified graphically, with changes propagated to the + underlying configuration. +- **Pipeline import and export**: Pipelines can be imported from and exported to configuration files, enabling sharing + of configurations between environments and easier version control. Exported definitions capture both topology and key + parameters, allowing reproducible pipeline setups. 
+- **Backend and frontend separation**: The application is now structured as a separate backend and frontend, allowing + independent development and deployment of each part. A fully functional REST API is exposed by the backend, which can + be accessed directly by automation scripts or indirectly through the UI. +- **Extensible architecture for dynamic pipelines**: The internal architecture has been evolved to support dynamic + registration and loading of pipelines. New pipeline types can be added without modifying core components, enabling + easier experimentation with custom topologies. +- **POSE model support**: POSE estimation model is now supported as part of the pipeline configuration. +- **DLStreamer Optimizer integration**: Integration with the DLStreamer Optimizer has been added to simplify + configuration of GStreamer-based pipelines. Optimized elements and parameters can be applied automatically, improving + performance and reducing manual tuning. + +#### Improvements (2025.2.0) + +- **Model management enhancements**: Supported models can now be added and removed directly through the application. + The model manager updates available models in a centralized configuration, ensuring that only selected models are + downloaded, stored, and exposed in the UI and API. + +#### Known Issues and Limitations (2025.2.0) + +- **Pipelines failing or missing bounding boxes when multiple devices/codecs are involved**: ViPPET lets you select the + `device` for inference elements such as `gvadetect` and `gvaclassify`. However, in the current implementation there + is no integrated mechanism to also update the DLStreamer codec and post-processing elements for multi-GPU or + mixed-device pipelines. This means that you can change the `device` property on AI elements (for example, to run + detection on another GPU), but the corresponding DLStreamer elements for decoding, post-processing, and encoding may + remain bound to a different GPU or to a default device. In such cases a pipeline can fail to start, error out during + caps negotiation, or run but produce an output video with no bounding boxes rendered, even though inference is + executed. +- **DLSOptimizer takes a long time or causes the application to restart**: When using DLSOptimizer from within ViPPET, + optimization runs can be long-running. It may take 5-6 minutes (or more, depending on pipeline complexity and + hardware) for DLSOptimizer to explore variants and return an optimized pipeline. In the current implementation, it + can also happen that while DLSOptimizer is searching for an optimized pipeline, the ViPPET application is restarted. +- **NPU metrics are not visible in the UI**: ViPPET currently does not support displaying NPU-related metrics. NPU + utilization, throughput, and latency are not exposed in the ViPPET UI. +- **Occasional "Connection lost" message in the UI**: The ViPPET UI is a web application that communicates with backend + services. Under transient network interruptions or short service unavailability, the UI may show a "Connection lost" + message. If this message appears occasionally, refresh the browser page to re-establish the connection to the + backend. +- **Application restart removes user-created pipelines and jobs**: In the current release, restarting the ViPPET + application removes all pipelines created by the user, and all types of jobs (tests, optimization runs, validation + runs, and similar). After a restart, only predefined pipelines remain available. 
+- **Support limited to DLStreamer 2025.2.0 pipelines and models**: ViPPET currently supports only pipelines and models + that are supported by DLStreamer 2025.2.0. For the full list of supported models, elements, and other details, see + the [DLStreamer release + notes](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/libraries/dl-streamer/RELEASE_NOTES.md). +- **Limited metrics in the ViPPET UI**: At this stage, the ViPPET UI shows only a limited set of metrics: current CPU + utilization, current utilization of a single GPU, and the most recently measured FPS. +- **Limited validation scope**: Validation and testing in this release focused mainly on sanity checks for predefined + pipelines. For custom pipelines, their behavior in ViPPET is less explored and may vary. However, if a custom + pipeline is supported and works correctly with DLStreamer 2025.2.0, it is expected to behave similarly when run via + ViPPET. +- **No live preview video for running pipelines**: Live preview of the video from a running pipeline is not supported + in this release. As a workaround, you can enable the "Save output" option. After the pipeline finishes, inspect the + generated output video file. +- **Recommended to run only one operation at a time**: Currently, it is recommended to run a single operation at a time + from the following set: tests, optimization, validation. In this release, new jobs are not rejected or queued when + another job is already running. Starting more than one job at the same time launches multiple GStreamer instances. + This can significantly distort performance results (for example, CPU/GPU utilization and FPS). +- **Some GStreamer / DLStreamer elements may not be displayed correctly in the UI**: Some GStreamer or DLStreamer + elements used in a pipeline may not be displayed correctly by the ViPPET UI. Even if some elements are not shown as + expected in the UI, the underlying pipeline is still expected to run. +- **Supported models list is limited and extending it is not guaranteed to work**: ViPPET currently supports only + models defined in + [supported_models.yaml](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/shared/models/supported_models.yaml). + A user can try to extend this file with new models whose `source` is either `public` or `pipeline-zoo-models`, but + there is no guarantee that such models will work out of the box. +- **Pipelines cannot depend on files other than models or videos**: Currently, ViPPET does not support pipelines that + require additional files beyond model files and video files. Pipelines that depend on other external artifacts (for + example, configuration files, custom resources, etc.) are not supported in this release. + +### Version 1.2 + +**Release Date**: 2025-08-20 + +#### New Features (v1.2) + +- **Feature 1**: Simple Video Structurization Pipeline: The Simple Video Structurization (D-T-C) pipeline is a + versatile, use case-agnostic solution that supports license plate recognition, vehicle detection with attribute + classification, and other object detection and classification tasks, adaptable based on the selected model. +- **Feature 2**: Live pipeline output preview: The pipeline now supports live output, allowing users to view real-time + results directly in the UI. This feature enhances the user experience by providing immediate feedback on video + processing tasks. 
+- **Feature 3**: New pre-trained models: The release includes new pre-trained models for object detection + (`YOLO v8 License Plate Detector`) and classification (`PaddleOCR`, `Vehicle Attributes Recognition Barrier 0039`), + expanding the range of supported use cases and improving accuracy for specific tasks. + +#### Known Issues (v1.2) + +- **Issue**: Metrics are displayed only for the last GPU when the system has multiple discrete GPUs. + +### Version 1.0.0 + +**Release Date**: 2025-03-31 + +#### New Features (v1.0.0) + +- **Feature 1**: Pre-trained Models Optimized for Specific Use Cases: Visual Pipeline and Platform Evaluation Tool + includes pre-trained models that are optimized for specific use cases, such as object detection for Smart NVR + pipeline. These models can be easily integrated into the pipeline, allowing users to quickly evaluate their + performance on different Intel® platforms. +- **Feature 2**: Metrics Collection with Turbostat tool and Qmassa tool: Visual Pipeline and Platform Evaluation Tool + collects real-time CPU and GPU performance metrics using Turbostat tool and Qmassa tool. The collector agent runs in + a dedicated collector container, gathering CPU and GPU metrics. Users can access and analyze these metrics via + intuitive UI, enabling efficient system monitoring and optimization. +- **Feature 3**: Smart NVR Pipeline Integration: The Smart NVR Proxy Pipeline is seamlessly integrated into the tool, + providing a structured video recorder architecture. It enables video analytics by supporting AI inference on selected + input channels while maintaining efficient media processing. The pipeline includes multi-view composition, media + encoding, and metadata extraction for insights. + +#### Known Issues (v1.0.0) + +- **Issue**: The Visual Pipeline and Platform Evaluation Tool container fails to start the analysis when the "Run" + button is clicked in the UI, specifically for systems without GPU. + - **Workaround**: Consider upgrading the hardware to meet the required specifications for optimal performance. ## Learn More -- [System Requirements](docs/user-guide/system-requirements.md) -- [Get Started](docs/user-guide/get-started.md) -- [How to Build Source](docs/user-guide/how-to-build-source.md) -- [How to use Video Generator](docs/user-guide/how-to-use-video-generator.md) -- [Release Notes](docs/user-guide/release-notes.md) +- [Installation](docs/user-guide/installation.md) +- [Usage](docs/user-guide/usage.md) +- [API Reference](docs/user-guide/api-reference.md) diff --git a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/Overview.md b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/Overview.md deleted file mode 100644 index 4962876b8b..0000000000 --- a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/Overview.md +++ /dev/null @@ -1,85 +0,0 @@ -# Visual Pipeline and Platform Evaluation Tool - -Assess Intel® hardware options, benchmark performance, and analyze key metrics to optimize hardware selection for AI workloads. - - - -## Overview - -The Visual Pipeline and Platform Evaluation Tool simplifies hardware selection for AI workloads by enabling -configuration of workload parameters, performance benchmarking, and analysis of key metrics such as throughput, -CPU usage, and GPU usage. With its intuitive interface, the tool provides actionable insights that support -optimized hardware selection and performance tuning. 
- -### Use Cases - - - -**Evaluating Hardware for AI Workloads**: Intel® hardware options can be assessed to balance cost, performance, -and efficiency. AI workloads can be benchmarked under real-world conditions by adjusting pipeline parameters -and comparing performance metrics. - -**Performance Benchmarking for AI Models**: Model performance targets and KPIs can be validated by testing AI -inference pipelines with different accelerators to measure throughput, latency, and resource utilization. - -### Key Features - - - -**Optimized for Intel® AI Edge Systems**: Pipelines can be run directly on target devices for seamless Intel® -hardware integration. - -**Comprehensive Hardware Evaluation**: Metrics such as CPU frequency, GPU power usage, and memory utilization -are available for detailed analysis. - -**Configurable AI Pipelines**: Parameters such as input channels, object detection models, and inference engines -can be adjusted to create tailored performance tests. - -**Automated Video Generation**: Synthetic test videos can be generated to evaluate system performance under -controlled conditions. - -## How It Works - - - -The Visual Pipeline and Platform Evaluation Tool integrates with AI-based video processing pipelines to support -hardware performance evaluation. - -![System Architecture Diagram](_assets/architecture.png) - -### **Workflow Overview** - -**Data Ingestion**: Video streams from live cameras or recorded files are provided and pipeline parameters are -configured to match evaluation needs. - -**AI Processing**: AI inference is applied using OpenVINO™ models to detect objects in the video streams. - -**Performance Evaluation**: Hardware performance metrics are collected, including CPU/GPU usage and power consumption. - -**Visualization & Analysis**: Real-time performance metrics are displayed on the dashboard to enable comparison of -configurations and optimization of settings. - -## Learn More - -- [System Requirements](system-requirements.md) -- [Get Started](get-started.md) -- [How to Build Source](how-to-build-source.md) -- [How to use Video Generator](how-to-use-video-generator.md) -- [Release Notes](release-notes.md) diff --git a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/_assets/architecture.png b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/_assets/architecture.png deleted file mode 100644 index e64820dc90..0000000000 Binary files a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/_assets/architecture.png and /dev/null differ diff --git a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/api-reference.md b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/api-reference.md index 43741b6697..e8dd0aa25a 100644 --- a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/api-reference.md +++ b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/api-reference.md @@ -1,5 +1,18 @@ # API Reference +The Visual Pipeline and Platform Evaluation Tool exposes a REST API that can be accessed directly by automation scripts +or through the UI. The API documentation is available via Swagger UI. + +## Accessing the API + +Once the application is running, access the API documentation at: + +- `http://localhost/api/v1/docs` + +The Swagger UI provides an interactive interface for exploring and testing the API endpoints. 
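+
+As a quick sanity check, the documentation endpoint can also be probed from a terminal. This is a minimal sketch that
+assumes the application is already running on the same host; it only verifies that the endpoint responds and does not
+document the response body:
+
+```bash
+# Expect an HTTP 200 status code when the backend and its Swagger UI are up.
+curl -s -o /dev/null -w "%{http_code}\n" http://localhost/api/v1/docs
+```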
+ +## API Documentation Format + diff --git a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/disclaimers.md b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/disclaimers.md deleted file mode 100644 index bfc606440d..0000000000 --- a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/disclaimers.md +++ /dev/null @@ -1,34 +0,0 @@ -# Disclaimers - -## Video Generator Images - -Intel provides six images for demo purposes only. You must provide your own images to run the video generator or -to create videos. - -## Human Rights - -Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel's Global -Human Rights Principles. Intel's products and software are intended only to be used in applications that do not -cause or contribute to a violation of an internationally recognized human right. - -## Models Licensing - -[ssdlite_mobilenet_v2_INT8](https://github.com/dlstreamer/pipeline-zoo-models/tree/main/storage/ssdlite_mobilenet_v2_INT8) -(Apache 2.0) - -[resnet-50-tf_INT8](https://github.com/dlstreamer/pipeline-zoo-models/tree/main/storage/resnet-50-tf_INT8) -(Apache 2.0) - -[efficientnet-b0_INT8](https://github.com/dlstreamer/pipeline-zoo-models/tree/main/storage/efficientnet-b0_INT8) -(Apache 2.0) - -[yolov5s-416_INT8](https://github.com/dlstreamer/pipeline-zoo-models/tree/main/storage/yolov5s-416_INT8) -(GPL v3) - -Dataset Used: [Intel IoT DevKit Sample Videos](https://github.com/intel-iot-devkit/sample-videos?tab=readme-ov-file) -(CC-BY-4.0)* - -## Data Transparency - -Refer to Model cards included in this folder for more information on the models and their usage in the Visual Pipeline and -Platform Evaluation tool. diff --git a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/get-started.md b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/get-started.md deleted file mode 100644 index 05733dd7b2..0000000000 --- a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/get-started.md +++ /dev/null @@ -1,107 +0,0 @@ -# Get Started - -The **Visual Pipeline and Platform Evaluation Tool** helps hardware decision-makers and software developers -select the optimal Intel® platform by adjusting workload parameters and analyzing performance metrics. -Through an intuitive web-based interface, the Smart NVR pipeline can be executed and key metrics such as -throughput and CPU/GPU utilization can be evaluated to assess platform performance and determine appropriate -system sizing. - -By following this guide, the following tasks can be completed: - -- **Set up the sample application**: Use the Docker Compose tool to quickly deploy the application in a target environment. -- **Run a predefined pipeline**: Execute the Smart NVR pipeline and observe metrics. - -## Prerequisites - -Before starting, ensure the following: - -- **System requirements**: The system meets the [minimum requirements](./system-requirements.md). -- **Docker platform**: Docker is installed. For details, see the [Docker installation guide](https://docs.docker.com/get-docker/). -- **Dependencies installed**: - - **Git**: [Install Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git). - - **Make**: Standard build tool, typically provided by the `build-essential` (or equivalent) package on Linux. - - **curl**: Command-line tool for transferring data with URLs, typically provided by the `curl` package on Linux. - -For GPU and/or NPU usage, appropriate drivers must be installed. 
The recommended method is to use the DLS installation -script, which detects available devices and installs the required drivers. Follow the **Prerequisites** section in: - -- [Install Guide Ubuntu – Prerequisites](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-1.2.0/libraries/dl-streamer/docs/source/get_started/install/install_guide_ubuntu.md#prerequisites) - -This guide assumes basic familiarity with Git commands and terminal usage. For more information, see: - -- [Git Documentation](https://git-scm.com/doc) - -## Set up and First Use - -1. **Set up the working directory**: - - ```bash - mkdir -p visual-pipeline-and-platform-evaluation-tool/models - mkdir -p visual-pipeline-and-platform-evaluation-tool/shared/models - mkdir -p visual-pipeline-and-platform-evaluation-tool/shared/videos - cd visual-pipeline-and-platform-evaluation-tool - ``` - -2. **Download all required files**: - - ```bash - curl -LO "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/setup_env.sh" - curl -LO "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/compose.yml" - curl -LO "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/compose.cpu.yml" - curl -LO "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/compose.gpu.yml" - curl -LO "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/compose.npu.yml" - curl -LO "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/Makefile" - curl -Lo models/Dockerfile "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/models/Dockerfile" - curl -Lo models/model_manager.sh "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/models/model_manager.sh" - curl -Lo shared/videos/default_recordings.yaml "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/shared/videos/default_recordings.yaml" - curl -Lo shared/models/supported_models.yaml "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/shared/models/supported_models.yaml" - chmod +x models/model_manager.sh - chmod +x setup_env.sh - ``` - -3. **Start the application**: - - ```bash - make build-models run - ``` - -4. **Verify that the application is running**: - - ```bash - docker compose ps - ``` - -5. **Access the application API documentation**: - - - Open a browser and navigate to `http://localhost:7860/docs` to access the Swagger UI. - -## Validation - -1. **Verify build success**: - - Check the logs and look for confirmation messages indicating that the microservice has started successfully. 
- -## Advanced Setup Options - -For alternative ways to set up the sample application, refer to: - -- [How to Build from Source](./how-to-build-source.md) - -### Model Installation and Management - -When the Visual Pipeline and Platform Evaluation Tool is launched for the first time, -a prompt is displayed to select and install the models to be used. -This step allows installation of only the models relevant to the intended pipelines. - -To manage the installed models again, run the following command: - -```bash -make install-models-force -``` - -## Known issues, limitations and troubleshooting - -- Refer to [Known issues, limitations and troubleshooting](known-issues.md). - -## Supporting Resources - -- [Docker Compose Documentation](https://docs.docker.com/compose/) diff --git a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/getting-started.md b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/getting-started.md new file mode 100644 index 0000000000..bd9e2e28cd --- /dev/null +++ b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/getting-started.md @@ -0,0 +1,220 @@ +# Visual Pipeline and Platform Evaluation Tool + + +Assess Intel® hardware options, benchmark performance, and analyze key metrics to optimize hardware selection for AI +workloads. + +## Overview + +The Visual Pipeline and Platform Evaluation Tool simplifies hardware selection for AI workloads by enabling +configuration of workload parameters, performance benchmarking, and analysis of key metrics such as throughput, CPU +usage, and GPU usage. With its intuitive interface, the tool provides actionable insights that support optimized +hardware selection and performance tuning. + +### Use Cases + +**Evaluating Hardware for AI Workloads**: Intel® hardware options can be assessed to balance cost, performance, and +efficiency. AI workloads can be benchmarked under real-world conditions by adjusting pipeline parameters and comparing +performance metrics. + +**Performance Benchmarking for AI Models**: Model performance targets and KPIs can be validated by testing AI +inference pipelines with different accelerators to measure throughput, latency, and resource utilization. + +### Key Features + +**Optimized for Intel® AI Edge Systems**: Pipelines can be run directly on target devices for seamless Intel® hardware +integration. + +**Comprehensive Hardware Evaluation**: Metrics such as CPU frequency, GPU power usage, and memory utilization are +available for detailed analysis. + +**Configurable AI Pipelines**: Parameters such as input channels, object detection models, and inference engines can be +adjusted to create tailored performance tests. + +**Automated Video Generation**: Synthetic test videos can be generated to evaluate system performance under controlled +conditions. + +### How It Works + +The Visual Pipeline and Platform Evaluation Tool integrates with AI-based video processing pipelines to support +hardware performance evaluation. + +#### Workflow Overview + +- **Data Ingestion**: Video streams from live cameras or recorded files are provided and pipeline parameters are + configured to match evaluation needs. +- **AI Processing**: AI inference is applied using OpenVINO™ models to detect objects in the video streams. +- **Performance Evaluation**: Hardware performance metrics are collected, including CPU/GPU usage and power consumption. 
+- **Visualization & Analysis**: Real-time performance metrics are displayed on the dashboard to enable comparison of + configurations and optimization of settings. + +### Disclaimers + +#### Video Generator Images + +Intel provides six images for demo purposes only. You must provide your own images to run the video generator or to +create videos. + +#### Human Rights + +Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel's Global Human +Rights Principles. Intel's products and software are intended only to be used in applications that do not cause or +contribute to a violation of an internationally recognized human right. + +#### Models Licensing + +[ssdlite_mobilenet_v2_INT8](https://github.com/dlstreamer/pipeline-zoo-models/tree/main/storage/ssdlite_mobilenet_v2_INT8) +(Apache 2.0) + +[resnet-50-tf_INT8](https://github.com/dlstreamer/pipeline-zoo-models/tree/main/storage/resnet-50-tf_INT8) +(Apache 2.0) + +[efficientnet-b0_INT8](https://github.com/dlstreamer/pipeline-zoo-models/tree/main/storage/efficientnet-b0_INT8) +(Apache 2.0) + +[yolov5s-416_INT8](https://github.com/dlstreamer/pipeline-zoo-models/tree/main/storage/yolov5s-416_INT8) (GPL v3) + +Dataset Used: [Intel IoT DevKit Sample Videos](https://github.com/intel-iot-devkit/sample-videos?tab=readme-ov-file) +(CC-BY-4.0)* + +#### Data Transparency + +Refer to Model cards included in this folder for more information on the models and their usage in the Visual Pipeline +and Platform Evaluation tool. + +## Release Notes + +Details about the changes, improvements, and known issues in this release of the application. + +### Current Release: Version 2025.2.0 + +**Release Date**: 2025-12-10 + +#### New Features (2025.2.0) + +- **New graphical user interface (GUI)**: A visual representation of the underlying `gst-launch` pipeline graph is + provided, presenting elements, links, and branches in an interactive view. Pipeline parameters (such as sources, + models, and performance-related options) can be inspected and modified graphically, with changes propagated to the + underlying configuration. +- **Pipeline import and export**: Pipelines can be imported from and exported to configuration files, enabling sharing + of configurations between environments and easier version control. Exported definitions capture both topology and key + parameters, allowing reproducible pipeline setups. +- **Backend and frontend separation**: The application is now structured as a separate backend and frontend, allowing + independent development and deployment of each part. A fully functional REST API is exposed by the backend, which can + be accessed directly by automation scripts or indirectly through the UI. +- **Extensible architecture for dynamic pipelines**: The internal architecture has been evolved to support dynamic + registration and loading of pipelines. New pipeline types can be added without modifying core components, enabling + easier experimentation with custom topologies. +- **POSE model support**: POSE estimation model is now supported as part of the pipeline configuration. +- **DLStreamer Optimizer integration**: Integration with the DLStreamer Optimizer has been added to simplify + configuration of GStreamer-based pipelines. Optimized elements and parameters can be applied automatically, improving + performance and reducing manual tuning. + +#### Improvements (2025.2.0) + +- **Model management enhancements**: Supported models can now be added and removed directly through the application. 
+ The model manager updates available models in a centralized configuration, ensuring that only selected models are + downloaded, stored, and exposed in the UI and API. + +#### Known Issues and Limitations (2025.2.0) + +- **Pipelines failing or missing bounding boxes when multiple devices/codecs are involved**: ViPPET lets you select the + `device` for inference elements such as `gvadetect` and `gvaclassify`. However, in the current implementation there + is no integrated mechanism to also update the DLStreamer codec and post-processing elements for multi-GPU or + mixed-device pipelines. This means that you can change the `device` property on AI elements (for example, to run + detection on another GPU), but the corresponding DLStreamer elements for decoding, post-processing, and encoding may + remain bound to a different GPU or to a default device. In such cases a pipeline can fail to start, error out during + caps negotiation, or run but produce an output video with no bounding boxes rendered, even though inference is + executed. +- **DLSOptimizer takes a long time or causes the application to restart**: When using DLSOptimizer from within ViPPET, + optimization runs can be long-running. It may take 5-6 minutes (or more, depending on pipeline complexity and + hardware) for DLSOptimizer to explore variants and return an optimized pipeline. In the current implementation, it + can also happen that while DLSOptimizer is searching for an optimized pipeline, the ViPPET application is restarted. +- **NPU metrics are not visible in the UI**: ViPPET currently does not support displaying NPU-related metrics. NPU + utilization, throughput, and latency are not exposed in the ViPPET UI. +- **Occasional "Connection lost" message in the UI**: The ViPPET UI is a web application that communicates with backend + services. Under transient network interruptions or short service unavailability, the UI may show a "Connection lost" + message. If this message appears occasionally, refresh the browser page to re-establish the connection to the + backend. +- **Application restart removes user-created pipelines and jobs**: In the current release, restarting the ViPPET + application removes all pipelines created by the user, and all types of jobs (tests, optimization runs, validation + runs, and similar). After a restart, only predefined pipelines remain available. +- **Support limited to DLStreamer 2025.2.0 pipelines and models**: ViPPET currently supports only pipelines and models + that are supported by DLStreamer 2025.2.0. For the full list of supported models, elements, and other details, see + the [DLStreamer release + notes](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/libraries/dl-streamer/RELEASE_NOTES.md). +- **Limited metrics in the ViPPET UI**: At this stage, the ViPPET UI shows only a limited set of metrics: current CPU + utilization, current utilization of a single GPU, and the most recently measured FPS. +- **Limited validation scope**: Validation and testing in this release focused mainly on sanity checks for predefined + pipelines. For custom pipelines, their behavior in ViPPET is less explored and may vary. However, if a custom + pipeline is supported and works correctly with DLStreamer 2025.2.0, it is expected to behave similarly when run via + ViPPET. +- **No live preview video for running pipelines**: Live preview of the video from a running pipeline is not supported + in this release. As a workaround, you can enable the "Save output" option. 
After the pipeline finishes, inspect the + generated output video file. +- **Recommended to run only one operation at a time**: Currently, it is recommended to run a single operation at a time + from the following set: tests, optimization, validation. In this release, new jobs are not rejected or queued when + another job is already running. Starting more than one job at the same time launches multiple GStreamer instances. + This can significantly distort performance results (for example, CPU/GPU utilization and FPS). +- **Some GStreamer / DLStreamer elements may not be displayed correctly in the UI**: Some GStreamer or DLStreamer + elements used in a pipeline may not be displayed correctly by the ViPPET UI. Even if some elements are not shown as + expected in the UI, the underlying pipeline is still expected to run. +- **Supported models list is limited and extending it is not guaranteed to work**: ViPPET currently supports only + models defined in + [supported_models.yaml](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/shared/models/supported_models.yaml). + A user can try to extend this file with new models whose `source` is either `public` or `pipeline-zoo-models`, but + there is no guarantee that such models will work out of the box. +- **Pipelines cannot depend on files other than models or videos**: Currently, ViPPET does not support pipelines that + require additional files beyond model files and video files. Pipelines that depend on other external artifacts (for + example, configuration files, custom resources, etc.) are not supported in this release. + +### Version 1.2 + +**Release Date**: 2025-08-20 + +#### New Features (v1.2) + +- **Feature 1**: Simple Video Structurization Pipeline: The Simple Video Structurization (D-T-C) pipeline is a + versatile, use case-agnostic solution that supports license plate recognition, vehicle detection with attribute + classification, and other object detection and classification tasks, adaptable based on the selected model. +- **Feature 2**: Live pipeline output preview: The pipeline now supports live output, allowing users to view real-time + results directly in the UI. This feature enhances the user experience by providing immediate feedback on video + processing tasks. +- **Feature 3**: New pre-trained models: The release includes new pre-trained models for object detection + (`YOLO v8 License Plate Detector`) and classification (`PaddleOCR`, `Vehicle Attributes Recognition Barrier 0039`), + expanding the range of supported use cases and improving accuracy for specific tasks. + +#### Known Issues (v1.2) + +- **Issue**: Metrics are displayed only for the last GPU when the system has multiple discrete GPUs. + +### Version 1.0.0 + +**Release Date**: 2025-03-31 + +#### New Features (v1.0.0) + +- **Feature 1**: Pre-trained Models Optimized for Specific Use Cases: Visual Pipeline and Platform Evaluation Tool + includes pre-trained models that are optimized for specific use cases, such as object detection for Smart NVR + pipeline. These models can be easily integrated into the pipeline, allowing users to quickly evaluate their + performance on different Intel® platforms. +- **Feature 2**: Metrics Collection with Turbostat tool and Qmassa tool: Visual Pipeline and Platform Evaluation Tool + collects real-time CPU and GPU performance metrics using Turbostat tool and Qmassa tool. 
The collector agent runs in + a dedicated collector container, gathering CPU and GPU metrics. Users can access and analyze these metrics via + intuitive UI, enabling efficient system monitoring and optimization. +- **Feature 3**: Smart NVR Pipeline Integration: The Smart NVR Proxy Pipeline is seamlessly integrated into the tool, + providing a structured video recorder architecture. It enables video analytics by supporting AI inference on selected + input channels while maintaining efficient media processing. The pipeline includes multi-view composition, media + encoding, and metadata extraction for insights. + +#### Known Issues (v1.0.0) + +- **Issue**: The Visual Pipeline and Platform Evaluation Tool container fails to start the analysis when the "Run" + button is clicked in the UI, specifically for systems without GPU. + - **Workaround**: Consider upgrading the hardware to meet the required specifications for optimal performance. + +## Learn More + +- [Installation](installation.md) +- [Usage](usage.md) +- [API Reference](api-reference.md) diff --git a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/how-to-build-source.md b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/how-to-build-source.md deleted file mode 100644 index c6cd1cbd1f..0000000000 --- a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/how-to-build-source.md +++ /dev/null @@ -1,55 +0,0 @@ -# Build from Source - -Build the Visual Pipeline and Platform Evaluation Tool from source to customize, debug, or extend its -functionality. In this guide, the following tasks are covered: - -- Setting up the development environment. -- Compiling the source code and resolving dependencies. -- Generating a runnable build for local testing or deployment. - -This guide is intended for developers working directly with the source code. - -## Prerequisites - -Before starting, ensure the following: - -- **System requirements**: The system meets the [minimum requirements](./system-requirements.md). -- **Docker platform**: Docker is installed. For details, see the [Docker installation guide](https://docs.docker.com/get-docker/). -- **Dependencies installed**: - - **Git**: [Install Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git). - - **Make**: Standard build tool, typically provided by the `build-essential` (or equivalent) package on Linux. - -For GPU and/or NPU usage, appropriate drivers must be installed. The recommended method is to use the DLS installation -script, which detects available devices and installs the required drivers. Follow the **Prerequisites** section in: - -- [Install Guide Ubuntu – Prerequisites](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-1.2.0/libraries/dl-streamer/docs/source/get_started/install/install_guide_ubuntu.md#prerequisites) - -This guide assumes basic familiarity with Git commands and terminal usage. For more information, see: - -- [Git Documentation](https://git-scm.com/doc) - -## Steps to Build - -1. **Clone the repository**: - - ```bash - git clone https://github.com/open-edge-platform/edge-ai-libraries.git - cd ./edge-ai-libraries/tools/visual-pipeline-and-platform-evaluation-tool - ``` - -2. **Build and start the application**: - - ```bash - make build run - ``` - -## Validation - -1. **Verify build success**: - - Logs should be checked for confirmation messages indicating that the microservice has started successfully. -2. 
**Access the application API documentation**: - - Open a browser and navigate to `http://localhost:7860/docs` to access the Swagger UI. - -## Known issues, limitations and troubleshooting - -- Refer to [Known issues, limitations and troubleshooting](known-issues.md). diff --git a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/how-to-use-video-generator.md b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/how-to-use-video-generator.md deleted file mode 100644 index e4a3e7d429..0000000000 --- a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/how-to-use-video-generator.md +++ /dev/null @@ -1,132 +0,0 @@ -# Build and Use Video Generator - -The Visual Pipeline and Platform Evaluation Tool includes a video generator that creates composite videos from images -stored in subdirectories. - -This guide is intended for developers working directly with the source code. - -**Build and start the tool**: - -```bash -make run-videogenerator -``` - -## Make Changes - -1. **Change input images**: - - Custom images can be used instead of the default sample images as follows: - - - Navigate to the `images` folder and create subfolders for new image categories, then place the relevant - images inside those subfolders. - - - Open the `config.json` file located at `video_generator/config.json`. - - - Update the `object_counts` section to reference the new image folders. Existing categories (for example, `cars` - or `persons`) should be replaced with the names of the new categories defined in the `images` folder: - - ```json - { - "background_file": "/usr/src/app/background.gif", - "base_image_dir": "/usr/src/app/images", - "output_file": "output_file", - "target_resolution": [1920, 1080], - "frame_count": 300, - "frame_rate": 30, - "swap_percentage": 20, - "object_counts": { - "cars": 3, - "persons": 3 - }, - "object_rotation_rate": 0.25, - "object_scale_rate": 0.25, - "object_scale_range": [0.25, 1], - "encoding": "H264", - "bitrate": 20000, - "swap_rate": 1 - } - ``` - -2. **Configure parameters**: - - The program uses a `config.json` file to customize the video generation process. Below is an example configuration: - - ```json - { - "background_file": "/usr/src/app/background.gif", - "base_image_dir": "/usr/src/app/images", - "output_file": "output_file", - "target_resolution": [1920, 1080], - "frame_count": 300, - "frame_rate": 30, - "swap_percentage": 20, - "object_counts": { - "cars": 3, - "persons": 3 - }, - "object_rotation_rate": 0.25, - "object_scale_rate": 0.25, - "object_scale_range": [0.25, 1], - "encoding": "H264", - "bitrate": 20000, - "swap_rate": 1 - } - ``` - - Parameters in the `config.json` file can be configured as follows: - - - **`background_file`**: Path to a background image (GIF, PNG, and so on) used in composite frames. - - - **`base_image_dir`**: Path to the root directory containing categorized image subdirectories. - - - **`output_file`**: Base name for the generated video file. It is recommended not to provide a file extension and - not to include `.` in the filename (for example, `output_file`). - - - **`target_resolution`**: Resolution of the output video in `[width, height]` format. - - - **`duration`**: Total duration of the generated video in seconds. - - - **`frame_count`**: Total number of frames in the generated video. - - - **`swap_percentage`**: Percentage of images that are swapped between frames. - - - **`object_counts`**: Dictionary specifying the number of images per category in each frame. 
- - - **`object_rotation_rate`**: Rate at which objects rotate per frame - (for example, `0.25` means a quarter rotation per frame). - - - **`object_scale_rate`**: Rate at which the size of objects changes per frame (for example, `0.25` means the - object size changes by 25% per frame). - - - **`object_scale_range`**: List specifying the minimum and maximum scale factors for the objects (for example, - `[0.25, 1]` means objects can scale between 25% and 100% of their original size). - - - **`encoding`**: Video encoding format (for example, `H264`). - - - **`bitrate`**: Bitrate for video encoding, measured in kbps. - - - **`swap_interval`**: Frequency of image swapping within frames, in seconds. - - - **Supported encodings and video formats**: - - | **Encoding** | **Video Format** | - |--------------|------------------| - | **H264** | .mp4 | - | **HEVC** | .mp4 | - | **VP8** | .webm | - | **VP9** | .webm | - | **AV1** | .mkv | - | **MPEG4** | .avi | - | **ProRes** | .mov | - | **Theora** | .ogg | - -## Validation - -1. **Verify build success**: - - Logs should be checked for confirmation messages indicating that the microservice started successfully: - - ```bash - docker compose logs videogenerator -f - ``` - -- Expected result: An MP4 file is created under the `shared/videos/video-generator` folder. diff --git a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/index.rst b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/index.rst index c1d1db832d..73b3e17c9a 100644 --- a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/index.rst +++ b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/index.rst @@ -1,68 +1,176 @@ Visual Pipeline and Platform Evaluation Tool ============================================ -Assess Intel® hardware options, benchmark performance, and analyze key metrics to optimize hardware selection for AI workloads. +Assess Intel® hardware options, benchmark performance, and analyze key metrics to optimize hardware selection for AI +workloads. Overview ######## -The Visual Pipeline and Platform Evaluation Tool simplifies hardware selection for AI workloads by enabling configuration of workload parameters, performance benchmarking, and analysis of key metrics such as throughput, CPU usage, and GPU usage. With its intuitive interface, the tool provides actionable insights that support optimized hardware selection and performance tuning. +The Visual Pipeline and Platform Evaluation Tool simplifies hardware selection for AI workloads by enabling +configuration of workload parameters, performance benchmarking, and analysis of key metrics such as throughput, CPU +usage, and GPU usage. With its intuitive interface, the tool provides actionable insights that support optimized +hardware selection and performance tuning. Use Cases ######### -**Evaluating Hardware for AI Workloads**: Intel® hardware options can be assessed to balance cost, performance, and efficiency. AI workloads can be benchmarked under real-world conditions by adjusting pipeline parameters and comparing performance metrics. +**Evaluating Hardware for AI Workloads**: Intel® hardware options can be assessed to balance cost, performance, and +efficiency. AI workloads can be benchmarked under real-world conditions by adjusting pipeline parameters and comparing +performance metrics. 
-**Performance Benchmarking for AI Models**: Model performance targets and KPIs can be validated by testing AI inference pipelines with different accelerators to measure throughput, latency, and resource utilization. +**Performance Benchmarking for AI Models**: Model performance targets and KPIs can be validated by testing AI +inference pipelines with different accelerators to measure throughput, latency, and resource utilization. Key Features ############ -**Optimized for Intel® AI Edge Systems**: Pipelines can be run directly on target devices for seamless Intel® hardware integration. +**Optimized for Intel® AI Edge Systems**: Pipelines can be run directly on target devices for seamless Intel® hardware +integration. -**Comprehensive Hardware Evaluation**: Metrics such as CPU frequency, GPU power usage, and memory utilization are available for detailed analysis. +**Comprehensive Hardware Evaluation**: Metrics such as CPU frequency, GPU power usage, and memory utilization are +available for detailed analysis. -**Configurable AI Pipelines**: Parameters such as input channels, object detection models, and inference engines can be adjusted to create tailored performance tests. +**Configurable AI Pipelines**: Parameters such as input channels, object detection models, and inference engines can be +adjusted to create tailored performance tests. -**Automated Video Generation**: Synthetic test videos can be generated to evaluate system performance under controlled conditions. +**Automated Video Generation**: Synthetic test videos can be generated to evaluate system performance under controlled +conditions. How It Works ############ -The Visual Pipeline and Platform Evaluation Tool integrates with AI-based video processing pipelines to support hardware performance evaluation. +The Visual Pipeline and Platform Evaluation Tool integrates with AI-based video processing pipelines to support +hardware performance evaluation. -.. image:: ./_assets/architecture.png - :alt: System Architecture Diagram +Workflow Overview +***************** -### **Workflow Overview** - -**Data Ingestion**: Video streams from live cameras or recorded files are provided and pipeline parameters are configured to match evaluation needs. - -**AI Processing**: AI inference is applied using OpenVINO™ models to detect objects in the video streams. - -**Performance Evaluation**: Hardware performance metrics are collected, including CPU/GPU usage and power consumption. - -**Visualization & Analysis**: Real-time performance metrics are displayed on the dashboard to enable comparison of configurations and optimization of settings. +- **Data Ingestion**: Video streams from live cameras or recorded files are provided and pipeline parameters are + configured to match evaluation needs. +- **AI Processing**: AI inference is applied using OpenVINO™ models to detect objects in the video streams. +- **Performance Evaluation**: Hardware performance metrics are collected, including CPU/GPU usage and power consumption. +- **Visualization & Analysis**: Real-time performance metrics are displayed on the dashboard to enable comparison of + configurations and optimization of settings. + +Disclaimers +########### + +Video Generator Images +********************** + +Intel provides six images for demo purposes only. You must provide your own images to run the video generator or to +create videos. + +Human Rights +************ + +Intel is committed to respecting human rights and avoiding complicity in human rights abuses. 
See Intel's Global Human
+Rights Principles. Intel's products and software are intended only to be used in applications that do not cause or
+contribute to a violation of an internationally recognized human right.
+
+Models Licensing
+****************
+
+`ssdlite_mobilenet_v2_INT8 <https://github.com/dlstreamer/pipeline-zoo-models/tree/main/storage/ssdlite_mobilenet_v2_INT8>`_
+(Apache 2.0)
+
+`resnet-50-tf_INT8 <https://github.com/dlstreamer/pipeline-zoo-models/tree/main/storage/resnet-50-tf_INT8>`_
+(Apache 2.0)
+
+`efficientnet-b0_INT8 <https://github.com/dlstreamer/pipeline-zoo-models/tree/main/storage/efficientnet-b0_INT8>`_
+(Apache 2.0)
+
+`yolov5s-416_INT8 <https://github.com/dlstreamer/pipeline-zoo-models/tree/main/storage/yolov5s-416_INT8>`_ (GPL v3)
+
+Dataset Used: `Intel IoT DevKit Sample Videos <https://github.com/intel-iot-devkit/sample-videos?tab=readme-ov-file>`_
+(CC-BY-4.0)*
+
+Data Transparency
+*****************
+
+Refer to Model cards included in this folder for more information on the models and their usage in the Visual Pipeline
+and Platform Evaluation tool.
+
+Release Notes
+#############
+
+Details about the changes, improvements, and known issues in this release of the application.
+
+Current Release: Version 2025.2.0
+*********************************
+
+**Release Date**: 2025-12-10
+
+New Features (v2025.2.0)
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+- **New graphical user interface (GUI)**: A visual representation of the underlying ``gst-launch`` pipeline graph is
+  provided, presenting elements, links, and branches in an interactive view. Pipeline parameters (such as sources,
+  models, and performance-related options) can be inspected and modified graphically, with changes propagated to the
+  underlying configuration.
+- **Pipeline import and export**: Pipelines can be imported from and exported to configuration files, enabling sharing
+  of configurations between environments and easier version control. Exported definitions capture both topology and key
+  parameters, allowing reproducible pipeline setups.
+- **Backend and frontend separation**: The application is now structured as a separate backend and frontend, allowing
+  independent development and deployment of each part. A fully functional REST API is exposed by the backend, which can
+  be accessed directly by automation scripts or indirectly through the UI.
+- **Extensible architecture for dynamic pipelines**: The internal architecture has been evolved to support dynamic
+  registration and loading of pipelines. New pipeline types can be added without modifying core components, enabling
+  easier experimentation with custom topologies.
+- **POSE model support**: POSE estimation model is now supported as part of the pipeline configuration.
+- **DLStreamer Optimizer integration**: Integration with the DLStreamer Optimizer has been added to simplify
+  configuration of GStreamer-based pipelines. Optimized elements and parameters can be applied automatically, improving
+  performance and reducing manual tuning.
+
+Improvements (v2025.2.0)
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+- **Model management enhancements**: Supported models can now be added and removed directly through the application.
+  The model manager updates available models in a centralized configuration, ensuring that only selected models are
+  downloaded, stored, and exposed in the UI and API.
+
+Known Issues and Limitations (v2025.2.0)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+- **Pipelines failing or missing bounding boxes when multiple devices/codecs are involved**: ViPPET lets you select the
+  ``device`` for inference elements such as ``gvadetect`` and ``gvaclassify``. However, in the current implementation
+  there is no integrated mechanism to also update the DLStreamer codec and post-processing elements for multi-GPU or
+  mixed-device pipelines.
+- **DLSOptimizer takes a long time or causes the application to restart**: When using DLSOptimizer from within ViPPET, + optimization runs can be long-running. It may take 5-6 minutes (or more, depending on pipeline complexity and + hardware) for DLSOptimizer to explore variants and return an optimized pipeline. +- **NPU metrics are not visible in the UI**: ViPPET currently does not support displaying NPU-related metrics. +- **Occasional "Connection lost" message in the UI**: The ViPPET UI may show a "Connection lost" message under + transient network interruptions. +- **Application restart removes user-created pipelines and jobs**: In the current release, restarting the ViPPET + application removes all pipelines created by the user, and all types of jobs. +- **Support limited to DLStreamer 2025.2.0 pipelines and models**: ViPPET currently supports only pipelines and models + that are supported by DLStreamer 2025.2.0. +- **Limited metrics in the ViPPET UI**: At this stage, the ViPPET UI shows only a limited set of metrics. +- **Limited validation scope**: Validation and testing in this release focused mainly on sanity checks for predefined + pipelines. +- **No live preview video for running pipelines**: Live preview of the video from a running pipeline is not supported + in this release. +- **Recommended to run only one operation at a time**: Currently, it is recommended to run a single operation at a time. +- **Some GStreamer / DLStreamer elements may not be displayed correctly in the UI**: Some elements may not be displayed + correctly by the ViPPET UI. +- **Supported models list is limited**: ViPPET currently supports only models defined in the configuration. +- **Pipelines cannot depend on files other than models or videos**: ViPPET does not support pipelines that require + additional files beyond model files and video files. Learn More ########## -- :doc:`System Requirements <./system-requirements>` -- :doc:`Get Started <./get-started>` -- :doc:`How to Build Source <./how-to-build-source>` -- :doc:`How to use Video Generator <./how-to-use-video-generator>` -- :doc:`Release Notes <./release-notes>` +- :doc:`Installation <./installation>` +- :doc:`Usage <./usage>` +- :doc:`API Reference <./api-reference>` .. toctree:: :hidden: :maxdepth: 2 - system-requirements - get-started - release-notes - how-to-build-source - how-to-use-video-generator + installation + usage api-reference - disclaimers - known-issues GitHub diff --git a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/installation.md b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/installation.md new file mode 100644 index 0000000000..14337258c2 --- /dev/null +++ b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/installation.md @@ -0,0 +1,330 @@ +# Installation + +This guide covers installation and setup of the Visual Pipeline and Platform Evaluation Tool. Choose from Docker +Compose deployment or building from source, and learn how to configure your environment for optimal performance. + +## Prerequisites + +Before starting, ensure the following: + +- **System requirements**: The system meets the minimum requirements specified below. +- **Docker platform**: Docker is installed. For details, see the + [Docker installation guide](https://docs.docker.com/get-docker/). +- **Dependencies installed**: + - **Git**: [Install Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git). 
+ - **Make**: Standard build tool, typically provided by the `build-essential` (or equivalent) package on Linux. + - **curl**: Command-line tool for transferring data with URLs, typically provided by the `curl` package on Linux. + +For GPU and/or NPU usage, appropriate drivers must be installed. The recommended method is to use the DLS installation +script, which detects available devices and installs the required drivers. Follow the **Prerequisites** section in: + +- [Install Guide Ubuntu – + Prerequisites](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/libraries/dl-streamer/docs/source/get_started/install/install_guide_ubuntu.md#prerequisites) + +This guide assumes basic familiarity with Git commands and terminal usage. For more information, see: + +- [Git Documentation](https://git-scm.com/doc) + +### Supported Platforms + +#### Operating Systems + +- Ubuntu 24.04.1 LTS + +### Minimum Requirements + +| **Component** | **Minimum** | **Recommended** | +|---------------------|---------------------------------|--------------------------------------| +| **Processor** | 11th Gen Intel® Core™ Processor | Intel® Core™ Ultra 7 Processor 155H | +| **Memory** | 8 GB | 8 GB | +| **Disk Space** | 256 GB SSD | 256 GB SSD | +| **GPU/Accelerator** | Intel® UHD Graphics | Intel® Arc™ Graphics | + +### Software Requirements + +- Docker Engine version 20.10 or higher. +- For GPU and/or NPU usage, appropriate drivers must be installed. The recommended method is to use the DLS + installation script, which detects available devices and installs the required drivers. Follow the + **Prerequisites** section in: [Install Guide Ubuntu – + Prerequisites](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/libraries/dl-streamer/docs/source/get_started/install/install_guide_ubuntu.md#prerequisites) + +### Compatibility Notes + +**Known Limitations**: + +- GPU compute engine utilization metrics require Intel® Graphics. + +## Option 1: Docker Compose Deployment + +The **Visual Pipeline and Platform Evaluation Tool** helps hardware decision-makers and software developers select the +optimal Intel® platform by adjusting workload parameters and analyzing performance metrics. Through an intuitive +web-based interface, the Smart NVR pipeline can be executed and key metrics such as throughput and CPU/GPU utilization +can be evaluated to assess platform performance and determine appropriate system sizing. + +By following this guide, the following tasks can be completed: + +- **Set up the sample application**: Use the Docker Compose tool to quickly deploy the application in a target + environment. +- **Run a predefined pipeline**: Execute the Smart NVR pipeline and observe metrics. + +### Set up and First Use + +1. **Set up the working directory**: + + ```bash + mkdir -p visual-pipeline-and-platform-evaluation-tool/models + mkdir -p visual-pipeline-and-platform-evaluation-tool/shared/models + mkdir -p visual-pipeline-and-platform-evaluation-tool/shared/videos + cd visual-pipeline-and-platform-evaluation-tool + ``` + +2. 
**Download all required files**: + + ```bash + curl -LO "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/setup_env.sh" + curl -LO "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/compose.yml" + curl -LO "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/compose.cpu.yml" + curl -LO "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/compose.gpu.yml" + curl -LO "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/compose.npu.yml" + curl -LO "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/Makefile" + curl -Lo models/Dockerfile "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/models/Dockerfile" + curl -Lo models/model_manager.sh "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/models/model_manager.sh" + curl -Lo shared/videos/default_recordings.yaml "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/shared/videos/default_recordings.yaml" + curl -Lo shared/models/supported_models.yaml "https://github.com/open-edge-platform/edge-ai-libraries/raw/refs/heads/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/shared/models/supported_models.yaml" + chmod +x models/model_manager.sh + chmod +x setup_env.sh + ``` + +3. **Start the application**: + + ```bash + make build-models run + ``` + +4. **Verify that the application is running**: + + ```bash + docker compose ps + ``` + +5. Access the application UI: + + - Open a browser and go to `http://:` (e.g. http://localhost if the default port is used). + +### Validation + +1. **Verify build success**: + - Check the logs and look for confirmation messages indicating that the microservice has started successfully. + +### Model Installation and Management + +When the Visual Pipeline and Platform Evaluation Tool is launched for the first time, a prompt is displayed to select +and install the models to be used. This step allows installation of only the models relevant to the intended pipelines. + +To manage the installed models again, run the following command: + +```bash +make install-models-force +``` + +## Option 2: Building from source + +Build the Visual Pipeline and Platform Evaluation Tool from source to customize, debug, or extend its functionality. In +this guide, the following tasks are covered: + +- Setting up the development environment. +- Compiling the source code and resolving dependencies. +- Generating a runnable build for local testing or deployment. + +This guide is intended for developers working directly with the source code. + +### Steps to Build + +1. **Clone the repository**: + + ```bash + git clone https://github.com/open-edge-platform/edge-ai-libraries.git + cd ./edge-ai-libraries/tools/visual-pipeline-and-platform-evaluation-tool + ``` + +2. 
**Build and start the application**: + + ```bash + make build run + ``` + +### Validation + +1. **Verify build success**: + - Logs should be checked for confirmation messages indicating that the microservice has started successfully. +2. **Access the application UI**: + - Open a browser and go to `http://: (e.g. http://localhost if the default port is used). + +## Optional: Building Video Generator + +The Visual Pipeline and Platform Evaluation Tool includes a video generator that creates composite videos from images +stored in subdirectories. + +This guide is intended for developers working directly with the source code. + +**Build and start the tool**: + +```bash +make run-videogenerator +``` + +### Make Changes + +1. **Change input images**: + + Custom images can be used instead of the default sample images as follows: + + - Navigate to the `images` folder and create subfolders for new image categories, then place the relevant images + inside those subfolders. + - Open the `config.json` file located at `video_generator/config.json`. + - Update the `object_counts` section to reference the new image folders. Existing categories (for example, `cars` or + `persons`) should be replaced with the names of the new categories defined in the `images` folder: + + ```json + { + "background_file": "/usr/src/app/background.gif", + "base_image_dir": "/usr/src/app/images", + "output_file": "output_file", + "target_resolution": [1920, 1080], + "frame_count": 300, + "frame_rate": 30, + "swap_percentage": 20, + "object_counts": { + "cars": 3, + "persons": 3 + }, + "object_rotation_rate": 0.25, + "object_scale_rate": 0.25, + "object_scale_range": [0.25, 1], + "encoding": "H264", + "bitrate": 20000, + "swap_rate": 1 + } + ``` + +2. **Configure parameters**: + + The program uses a `config.json` file to customize the video generation process. Below is an example configuration: + + ```json + { + "background_file": "/usr/src/app/background.gif", + "base_image_dir": "/usr/src/app/images", + "output_file": "output_file", + "target_resolution": [1920, 1080], + "frame_count": 300, + "frame_rate": 30, + "swap_percentage": 20, + "object_counts": { + "cars": 3, + "persons": 3 + }, + "object_rotation_rate": 0.25, + "object_scale_rate": 0.25, + "object_scale_range": [0.25, 1], + "encoding": "H264", + "bitrate": 20000, + "swap_rate": 1 + } + ``` + + Parameters in the `config.json` file can be configured as follows: + + - **`background_file`**: Path to a background image (GIF, PNG, and so on) used in composite frames. + - **`base_image_dir`**: Path to the root directory containing categorized image subdirectories. + - **`output_file`**: Base name for the generated video file. It is recommended not to provide a file extension and + not to include `.` in the filename (for example, `output_file`). + - **`target_resolution`**: Resolution of the output video in `[width, height]` format. + - **`duration`**: Total duration of the generated video in seconds. + - **`frame_count`**: Total number of frames in the generated video. + - **`swap_percentage`**: Percentage of images that are swapped between frames. + - **`object_counts`**: Dictionary specifying the number of images per category in each frame. + - **`object_rotation_rate`**: Rate at which objects rotate per frame (for example, `0.25` means a quarter rotation + per frame). + - **`object_scale_rate`**: Rate at which the size of objects changes per frame (for example, `0.25` means the object + size changes by 25% per frame). 
+ - **`object_scale_range`**: List specifying the minimum and maximum scale factors for the objects (for example, + `[0.25, 1]` means objects can scale between 25% and 100% of their original size). + - **`encoding`**: Video encoding format (for example, `H264`). + - **`bitrate`**: Bitrate for video encoding, measured in kbps. + - **`swap_interval`**: Frequency of image swapping within frames, in seconds. + - **Supported encodings and video formats**: + + | **Encoding** | **Video Format** | + |--------------|------------------| + | **H264** | .mp4 | + | **HEVC** | .mp4 | + | **VP8** | .webm | + | **VP9** | .webm | + | **AV1** | .mkv | + | **MPEG4** | .avi | + | **ProRes** | .mov | + | **Theora** | .ogg | + +### Validation + +1. **Verify build success**: + - Logs should be checked for confirmation messages indicating that the microservice started successfully: + + ```bash + docker compose logs videogenerator -f + ``` + + - Expected result: An MP4 file is created under the `shared/videos/video-generator` folder. + +## Installation troubleshooting + +### Application containers fail to start + +In some environments, ViPPET services may fail to start correctly and the UI may not be reachable. + +#### Troubleshooting steps + +- Check container logs: + + ```bash + docker compose logs + ``` + +- Restart the stack using the provided Makefile: + + ```bash + make stop run + ``` + +This stops currently running containers and starts them again with the default configuration. + +### Port conflicts for `vippet-ui` + +If the `vippet-ui` service cannot be accessed in the browser, it may be caused by a port conflict on the host. + +#### Troubleshooting steps + +- In the Compose file (`compose.yml`), find the `vippet-ui` service and its `ports` section: + + ```yaml + services: + vippet-ui: + ports: + - "80:80" + ``` + +- Change the **host port** (left side) to an available one, for example: + + ```yaml + services: + vippet-ui: + ports: + - "8081:80" + ``` + +- Restart the stack and access ViPPET using the new port, e.g. `http://localhost:8081`. + +## Supporting Resources + +- [Docker Compose Documentation](https://docs.docker.com/compose/) diff --git a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/known-issues.md b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/known-issues.md deleted file mode 100644 index be24277719..0000000000 --- a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/known-issues.md +++ /dev/null @@ -1,348 +0,0 @@ -# Known issues, limitations and troubleshooting - -## Known issues - -### 1. Pipelines failing or missing bounding boxes when multiple devices/codecs are involved - -ViPPET lets you select the `device` for inference elements such as `gvadetect` and `gvaclassify`. However, in -the current implementation there is no integrated mechanism to also update the DLStreamer codec and post‑processing -elements for multi‑GPU or mixed‑device pipelines. - -This means that: - -- You can change the `device` property on AI elements (for example, to run detection on another GPU), -- But the corresponding DLStreamer elements for **decoding**, **post‑processing**, and **encoding** may remain bound - to a different GPU or to a default device. - -In such cases a pipeline can: - -- Fail to start, -- Error out during caps negotiation, -- Or run but produce an output video with no bounding boxes rendered, even though inference is executed. 
- -The relevant DLStreamer elements include: - -- **Decoder elements**, such as: - - `vah264dec` (for GPU.0, or simply `GPU` on single-GPU systems) - - `varenderD129h264dec` (for GPU.1) - - `varenderD130h264dec` (for GPU.2) -- **Post‑processing elements**, such as: - - `vapostproc` (for GPU.0, or simply `GPU` on single-GPU systems) - - `varenderD129postproc` (for GPU.1) - - `varenderD130postproc` (for GPU.2) -- **Encoder elements**, such as: - - `vah264enc`, `vah264lpenc` (for GPU.0, or simply `GPU` on single-GPU systems) - - `varenderD129h264enc` (for GPU.1) - - `varenderD130h264enc` (for GPU.2) - -> **GPU.0 note:** In systems with only one GPU, it appears as just `GPU` and uses the generic elements above -> (`vah264dec`, `vapostproc`, `vah264enc`, `vah264lpenc`). -> Only on multi-GPU systems will elements for `GPU.1`, `GPU.2` etc. (`varenderD129*`, `varenderD130*`, etc.) appear. - -#### Workaround - -If you see that the pipeline fails or runs without expected bounding boxes: - -1. Export or re‑create the pipeline description. -2. Manually adjust the DLStreamer decoder, post‑processing, and encoder elements so they are explicitly bound to the - GPU/device consistent with the `device` used by `gvadetect` / `gvaclassify`. -3. Import this modified pipeline into ViPPET as a custom pipeline and run it with the corrected static - device assignments. - -Elements with suffixes like `D129`, `D130`, etc. are typically mapped to specific GPU indices (for example -`GPU.1`, `GPU.2`). The exact mapping between `varenderD129*` / `varenderD130*` elements and `GPU.X` devices depends on -your platform configuration and DLStreamer’s GPU selection rules. For details on how these IDs map to GPU devices and -how to choose the correct elements for each GPU, see the DLStreamer documentation on GPU device selection: -[GPU device selection in DLStreamer](https://docs.openedgeplatform.intel.com/2025.2/edge-ai-libraries/dl-streamer/dev_guide/gpu_device_selection.html). - ---- - -### 2. DLSOptimizer takes a long time or causes the application to restart - -When using DLSOptimizer from within ViPPET, optimization runs can be **long‑running**: - -- It may take **5–6 minutes** (or more, depending on pipeline complexity and hardware) for DLSOptimizer to explore - variants and return an optimized pipeline. - -In the current implementation, it can also happen that while DLSOptimizer is searching for an optimized pipeline, -the ViPPET application is **restarted**. - -For more information about DLSOptimizer behavior and limitations, see the DLSOptimizer limitations section in the -DLStreamer repository: -[DLSOptimizer limitations](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/libraries/dl-streamer/scripts/optimizer/README.md#limitations). - -#### Risks related to application restart during optimization - -If ViPPET is restarted while DLSOptimizer is running: - -- Any **in‑progress optimization job** is interrupted and its results are lost. -- In the current release, an application restart **removes all user‑created pipelines and all types of jobs** - (tests, optimization runs, validation runs). Only predefined pipelines remain available after restart. -- You may need to **recreate or reimport** your custom pipelines and re‑run your jobs after the application comes back. 
- -### Recommendations - -- If this behavior is problematic in your environment (for example, it disrupts interactive work or automated - workflows), avoid using pipeline optimization and instead: - - Use baseline, hand‑tuned pipelines. - - Adjust parameters manually rather than relying on DLSOptimizer. - ---- - -### 3. NPU metrics are not visible in the UI - -ViPPET currently does **not** support displaying NPU‑related metrics: - -- NPU utilization, throughput, and latency are not exposed in the ViPPET UI. -- Metrics and visualizations are limited to what is currently integrated for other devices. - -As a result, even if pipelines use an NPU, you will not see NPU‑specific telemetry in ViPPET. - ---- - -### 4. Occasional “Connection lost” message in the UI - -The ViPPET UI is a web application that communicates with backend services. Under transient network -interruptions or short service unavailability, the UI may show a **“Connection lost”** message. - -#### Characteristics - -- It typically appears **sporadically**. -- It is often related to short‑lived connectivity issues between the browser and the backend. - -#### Workaround - -- If the **“Connection lost”** message appears occasionally: - - **Refresh the browser page** to re‑establish the connection to the backend. - ---- - -### 5. Choosing the encoding device for “Save output” and mapping devices to GPU indices - -When you enable the **“Save output”** option in ViPPET: - -- ViPPET records the output video to a file. -- You are asked to select a **device** that will be used for encoding. - -The current implementation does not automatically infer the best encoding device from the existing pipeline. To avoid -confusion and potential issues, use the following guidelines. - -#### How to choose the encoding device - -- Prefer the **same device that is already used by the downstream video elements** in your pipeline. -- In most cases, the most reliable choice is: - - The **device used by the element that is closest to the final `*sink`** in the pipeline (for example, the last `va*` - encoder or post‑processing element before a sink). -- Using a different device for encoding than the one used by the rest of the downstream path can: - - Introduce unnecessary copies between devices, - - Or, in some environments, cause pipeline negotiation or stability issues. - -#### Mapping devices (`GPU.X`) to DLStreamer elements - -DLStreamer maps logical GPU devices (`GPU.0`, `GPU.1`, `GPU.2`, …) to specific element variants as follows: - -- **`GPU.0`** (or `GPU` in a single-GPU system) maps to the generic VA‑API elements: - - Decoders: `vah264dec` - - Post‑processing: `vapostproc` - - Encoders: `vah264enc`, `vah264lpenc` -- **`GPU.1`, `GPU.2`, …** map to per‑GPU elements whose names encode the GPU index, for example: - - For `GPU.1`: elements like `varenderD129h264dec`, `varenderD129postproc`, `varenderD129h264enc` - - For `GPU.2`: elements like `varenderD130h264dec`, `varenderD130postproc`, `varenderD130h264enc` - - And so on for additional GPUs. - -> **Note:** On systems with only one GPU, the device will be listed as simply `GPU` (not `GPU.0`) and you should always -> use the generic elements above (`vah264dec`, `vapostproc`, `vah264enc`, `vah264lpenc`). - -#### Practical guidance - -When selecting the encoding device in the **“Save output”** dialog: - -- If your pipeline uses **`vah264dec` / `vapostproc` / `vah264enc` / `vah264lpenc`** near the end of the pipeline, - it is typically running on **`GPU.0`** (or just `GPU` on a single-GPU system). 
- → In this case, choose **`GPU.0`** (or `GPU`) for encoding. -- If your pipeline uses elements like **`varenderD129*`**, **`varenderD130*`**, etc. near the end of the pipeline, - those typically correspond to **`GPU.1`**, **`GPU.2`**, and so on. - → In this case, choose the `GPU.X` device that matches the `varenderDXXX*` elements used by the final encoder or - post‑processing stage. - -For the precise and up‑to‑date mapping between `GPU.X` devices and `varenderDXXX*` elements on your platform, -as well as additional examples, see the DLStreamer GPU device selection guide: -[GPU device selection in DLStreamer](https://docs.openedgeplatform.intel.com/2025.2/edge-ai-libraries/dl-streamer/dev_guide/gpu_device_selection.html). - ---- - -## Limitations - -### 1. Application restart removes user-created pipelines and jobs - -In the current release, restarting the ViPPET application removes: - -- All **pipelines created by the user**, and -- All types of **jobs** (tests, optimization runs, validation runs, and similar). - -After a restart, only **predefined pipelines** remain available. -If a restart happens during a long‑running operation (for example, during DLSOptimizer runs), the in‑progress job is -lost, and you need to recreate or reimport your custom pipelines and rerun the jobs. - ---- - -### 2. Support limited to DLStreamer 2025.2.0 pipelines and models - -ViPPET currently supports only pipelines and models that are supported by **DLStreamer 2025.2.0**. - -For the full list of supported models, elements, and other details, see the DLStreamer release notes: -[DLStreamer release notes](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/libraries/dl-streamer/RELEASE_NOTES.md) - -If a custom pipeline works correctly with DLStreamer 2025.2.0, it is expected to also work in ViPPET (see also the -“Limited validation scope” limitation below). - ---- - -### 3. Limited metrics in the ViPPET UI - -At this stage, the ViPPET UI shows only a **limited set of metrics**: - -- Current **CPU utilization**, -- Current **utilization of a single GPU**, and -- The **most recently measured FPS**. - -More metrics (including timeline‑based charts) are planned for future releases. - ---- - -### 4. Limited validation scope - -Validation and testing in this release focused mainly on **sanity checks for predefined pipelines**. - -For **custom pipelines**: - -- Their behavior in ViPPET is less explored and may vary. -- However, if a custom pipeline is supported and works correctly with **DLStreamer 2025.2.0**, it is expected to behave - similarly when run via ViPPET (see also “Support limited to DLStreamer 2025.2.0 pipelines and models” above). - ---- - -### 5. No live preview video for running pipelines - -Live preview of the video from a running pipeline is **not supported** in this release. - -As a workaround, you can: - -- Enable the **“Save output”** option. -- After the pipeline finishes, inspect the generated **output video file**. - ---- - -### 6. Recommended to run only one operation at a time - -Currently, it is recommended to run **a single operation at a time** from the following set: - -- Tests, -- Optimization, -- Validation. - -In this release: - -- New jobs are **not rejected or queued** when another job is already running. -- Starting more than one job at the same time launches **multiple GStreamer instances**. -- This can significantly **distort performance results** (for example, CPU/GPU utilization and FPS). 
- -For accurate and repeatable measurements, run these operations **one by one**. - ---- - -### 7. Some GStreamer / DLStreamer elements may not be displayed correctly in the UI - -Some GStreamer or DLStreamer elements used in a pipeline may **not be displayed correctly** by the ViPPET UI. - -Even if some elements are not shown as expected in the UI, the underlying **pipeline is still expected to run**. - ---- - -### 8. Supported models list is limited and extending it is not guaranteed to work - -ViPPET currently supports only models defined in: - -- [supported_models.yaml](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/tools/visual-pipeline-and-platform-evaluation-tool/shared/models/supported_models.yaml) - -A user can try to extend this file with new models whose `source` is either `public` or `pipeline-zoo-models`, but -there is **no guarantee** that such models will work out of the box. - -- Models with `source: public` must be supported by the following script: - [download_public_models.sh](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/libraries/dl-streamer/samples/download_public_models.sh) -- Models with `source: pipeline-zoo-models` must already exist in this repository: - [pipeline-zoo-models](https://github.com/dlstreamer/pipeline-zoo-models) - -After adding new models to `supported_models.yaml`, you must: - -```bash -make stop -make install-models-force -make run -``` - -Only then will ViPPET rescan and manage the updated model set. - ---- - -### 9. Pipelines cannot depend on files other than models or videos - -Currently, ViPPET does **not** support pipelines that require additional files beyond: - -- **Model files**, and -- **Video files**. - -Pipelines that depend on other external artifacts (for example, configuration files, custom resources, etc.) -are not supported in this release. - ---- - -## Troubleshooting - -### 1. Application containers fail to start - -In some environments, ViPPET services may fail to start correctly and the UI may not be reachable. - -#### Troubleshooting steps - -- Check container logs: - - ```bash - docker compose logs - ``` - -- Restart the stack using the provided Makefile: - - ```bash - make stop run - ``` - -This stops currently running containers and starts them again with the default configuration. - ---- - -### 2. Port conflicts for `vippet-ui` - -If the `vippet-ui` service cannot be accessed in the browser, it may be caused by a port conflict on the host. - -#### Troubleshooting steps - -- In the Compose file (`compose.yml`), find the `vippet-ui` service and its `ports` section: - - ```yaml - services: - vippet-ui: - ports: - - "80:80" - ``` - -- Change the **host port** (left side) to an available one, for example: - - ```yaml - services: - vippet-ui: - ports: - - "8081:80" - ``` - -- Restart the stack and access ViPPET using the new port, e.g. `http://localhost:8081`. diff --git a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/release-notes.md b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/release-notes.md deleted file mode 100644 index dcd962ad8f..0000000000 --- a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/release-notes.md +++ /dev/null @@ -1,132 +0,0 @@ - - -# Release Notes - -Details about the changes, improvements, and known issues in this release of the application. 
- -## Current Release: [Version 2025.2] - -**Release Date**: [2025-12-10] - -### New Features (v2025.2) - -- **New graphical user interface (GUI)**: - - A visual representation of the underlying `gst-launch` pipeline graph is provided, presenting elements, links, and - branches in an interactive view. - - Pipeline parameters (such as sources, models, and performance-related options) can be inspected and - modified graphically, with changes propagated to the underlying configuration. - -- **Pipeline import and export**: - - Pipelines can be imported from and exported to configuration files, enabling sharing of configurations between - environments and easier version control. - - Exported definitions capture both topology and key parameters, allowing reproducible pipeline setups. - -- **Backend and frontend separation**: - - The application is now structured as a separate backend and frontend, allowing independent development and - deployment of each part. - - A fully functional REST API is exposed by the backend, which can be accessed directly by automation scripts or - indirectly through the UI. - -- **Extensible architecture for dynamic pipelines**: - - The internal architecture has been evolved to support dynamic registration and loading of pipelines. - - New pipeline types can be added without modifying core components, enabling easier experimentation with - custom topologies. - -- **POSE model support**: - - POSE estimation model is now supported as part of the pipeline configuration. - -- **DLStreamer Optimizer integration**: - - Integration with the DLStreamer Optimizer has been added to simplify configuration of GStreamer-based pipelines. - - Optimized elements and parameters can be applied automatically, improving performance and reducing manual tuning. - -### Improvements (v2025.2) - -- **Model management enhancements**: - - Supported models can now be added and removed directly through the application. - - The model manager updates available models in a centralized configuration, ensuring that only selected models are - downloaded, stored, and exposed in the UI and API. - ---- - -## Current Release: [Version 1.2] - -**Release Date**: [2025-08-20] - -### New Features (v1.2) - -- **Feature 1**: Simple Video Structurization Pipeline: The Simple Video Structurization (D-T-C) pipeline is a versatile, - use case-agnostic solution that supports license plate recognition, vehicle detection with attribute classification, - and other object detection and classification tasks, adaptable based on the selected model. -- **Feature 2**: Live pipeline output preview: The pipeline now supports live output, allowing users to view real-time results - directly in the UI. This feature enhances the user experience by providing immediate feedback on video processing tasks. -- **Feature 3**: New pre-trained models: The release includes new pre-trained models for object detection - (`YOLO v8 License Plate Detector`) and classification (`PaddleOCR`, `Vehicle Attributes Recognition Barrier 0039`), - expanding the range of supported use cases and improving accuracy for specific tasks. - -### Known Issues (v1.2) - -- **Issue**: Metrics are displayed only for the last GPU when the system has multiple discrete GPUs. 
- -## Version 1.0.0 - -**Release Date**: [2025-03-31] - -### New Features (v1.0.0) - - -- **Feature 1**: Pre-trained Models Optimized for Specific Use Cases: Visual Pipeline and Platform Evaluation Tool - includes pre-trained models that are optimized for specific use cases, such as object detection for Smart NVR - pipeline. These models can be easily integrated into the pipeline, allowing users to quickly evaluate their - performance on different Intel® platforms. -- **Feature 2**: Metrics Collection with Turbostat tool and Qmassa tool: Visual Pipeline and Platform Evaluation Tool - collects real-time CPU and GPU performance metrics using Turbostat tool and Qmassa tool. The collector agent runs - in a dedicated collector container, gathering CPU and GPU metrics. Users can access and analyze these metrics via - intuitive UI, enabling efficient system monitoring and optimization. -- **Feature 3**: Smart NVR Pipeline Integration: The Smart NVR Proxy Pipeline is seamlessly integrated into the tool, - providing a structured video recorder architecture. It enables video analytics by supporting AI inference on - selected input channels while maintaining efficient media processing. The pipeline includes multi-view composition, - media encoding, and metadata extraction for insights. - -### Known Issues (v1.0.0) - -- **Issue**: The Visual Pipeline and Platform Evaluation Tool container fails to start the analysis when the "Run" - button is clicked in the UI, specifically for systems without GPU. - - **Workaround**: Consider upgrading the hardware to meet the required specifications for optimal performance. diff --git a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/system-requirements.md b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/system-requirements.md deleted file mode 100644 index 442d4a1052..0000000000 --- a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/system-requirements.md +++ /dev/null @@ -1,64 +0,0 @@ -# System Requirements - -This page provides detailed hardware, software, and platform requirements to help set up and run the application -efficiently. - - - -## Supported Platforms - - - -### Operating Systems - -- Ubuntu 24.04.1 LTS - - - -## Minimum Requirements - -| **Component** | **Minimum** | **Recommended** | -|---------------------|---------------------------------|--------------------------------------| -| **Processor** | 11th Gen Intel® Core™ Processor | Intel® Core™ Ultra 7 Processor 155H | -| **Memory** | 8 GB | 8 GB | -| **Disk Space** | 256 GB SSD | 256 GB SSD | -| **GPU/Accelerator** | Intel® UHD Graphics | Intel® Arc™ Graphics | - -## Software Requirements - -- Docker Engine version 20.10 or higher. -- For GPU and/or NPU usage, appropriate drivers must be installed. The recommended method is to use the DLS installation -script, which detects available devices and installs the required drivers. Follow the **Prerequisites** section in: -[Install Guide Ubuntu – Prerequisites](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-1.2.0/libraries/dl-streamer/docs/source/get_started/install/install_guide_ubuntu.md#prerequisites) - -## Compatibility Notes - - - -**Known Limitations**: - -- GPU compute engine utilization metrics require Intel® Graphics. - -## Validation - -- Ensure all dependencies are installed and configured before proceeding to [Get Started](./get-started.md). 
diff --git a/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/usage.md b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/usage.md new file mode 100644 index 0000000000..5fca0c21f2 --- /dev/null +++ b/tools/visual-pipeline-and-platform-evaluation-tool/docs/user-guide/usage.md @@ -0,0 +1,180 @@ +# Usage + +This guide covers how to use the Visual Pipeline and Platform Evaluation Tool to edit pipelines, run performance tests, +and execute density tests. + +## Editing pipelines in Pipeline Builder + +The Visual Pipeline and Platform Evaluation Tool provides a graphical interface for building and editing pipelines. +Pipeline parameters such as sources, models, and performance-related options can be inspected and modified graphically, +with changes propagated to the underlying configuration. + +### Pipeline Import and Export + +Pipelines can be imported from and exported to configuration files, enabling sharing of configurations between +environments and easier version control. Exported definitions capture both topology and key parameters, allowing +reproducible pipeline setups. + +### Device Selection for Inference Elements + +The Visual Pipeline and Platform Evaluation Tool (ViPPET) lets you select the `device` for inference elements such as +`gvadetect` and `gvaclassify`. However, in the current implementation there is no integrated mechanism to also update +the DLStreamer codec and post-processing elements for multi-GPU or mixed-device pipelines. + +This means that: + +- You can change the `device` property on AI elements (for example, to run detection on another GPU), +- But the corresponding DLStreamer elements for **decoding**, **post-processing**, and **encoding** may remain bound to + a different GPU or to a default device. + +In such cases a pipeline can: + +- Fail to start, +- Error out during caps negotiation, +- Or run but produce an output video with no bounding boxes rendered, even though inference is executed. + +The relevant DLStreamer elements include: + +- **Decoder elements**, such as: + - `vah264dec` (for GPU.0, or simply `GPU` on single-GPU systems) + - `varenderD129h264dec` (for GPU.1) + - `varenderD130h264dec` (for GPU.2) +- **Post-processing elements**, such as: + - `vapostproc` (for GPU.0, or simply `GPU` on single-GPU systems) + - `varenderD129postproc` (for GPU.1) + - `varenderD130postproc` (for GPU.2) +- **Encoder elements**, such as: + - `vah264enc`, `vah264lpenc` (for GPU.0, or simply `GPU` on single-GPU systems) + - `varenderD129h264enc` (for GPU.1) + - `varenderD130h264enc` (for GPU.2) + +> **GPU.0 note:** In systems with only one GPU, it appears as just `GPU` and uses the generic elements above +> (`vah264dec`, `vapostproc`, `vah264enc`, `vah264lpenc`). Only on multi-GPU systems will elements for `GPU.1`, +> `GPU.2` etc. (`varenderD129*`, `varenderD130*`, etc.) appear. + +#### Workaround + +If you see that the pipeline fails or runs without expected bounding boxes: + +1. Export or re-create the pipeline description. +2. Manually adjust the DLStreamer decoder, post-processing, and encoder elements so they are explicitly bound to the + GPU/device consistent with the `device` used by `gvadetect` / `gvaclassify`. +3. Import this modified pipeline into ViPPET as a custom pipeline and run it with the corrected static device + assignments. + +Elements with suffixes like `D129`, `D130`, etc. are typically mapped to specific GPU indices (for example `GPU.1`, +`GPU.2`). 
The exact mapping between `varenderD129*` / `varenderD130*` elements and `GPU.X` devices depends on your +platform configuration and DLStreamer's GPU selection rules. For details on how these IDs map to GPU devices and how to +choose the correct elements for each GPU, see the DLStreamer documentation on GPU device selection: [GPU device +selection in +DLStreamer](https://docs.openedgeplatform.intel.com/2025.2/edge-ai-libraries/dl-streamer/dev_guide/gpu_device_selection.html). + +### Using DLSOptimizer + +When using the DLStreamer Optimizer (DLSOptimizer) from within ViPPET, optimization runs can be **long-running**: + +- It may take **5-6 minutes** (or more, depending on pipeline complexity and hardware) for DLSOptimizer to explore + variants and return an optimized pipeline. + +In the current implementation, it can also happen that while DLSOptimizer is searching for an optimized pipeline, the +ViPPET application is **restarted**. + +For more information about DLSOptimizer behavior and limitations, see the DLSOptimizer limitations section in the +DLStreamer repository: [DLSOptimizer +limitations](https://github.com/open-edge-platform/edge-ai-libraries/blob/release-2025.2.0/libraries/dl-streamer/scripts/optimizer/README.md#limitations). + +#### Risks related to application restart during optimization + +If ViPPET is restarted while DLSOptimizer is running: + +- Any **in-progress optimization job** is interrupted and its results are lost. +- In the current release, an application restart **removes all user-created pipelines and all types of jobs** (tests, + optimization runs, validation runs). Only predefined pipelines remain available after restart. +- You may need to **recreate or reimport** your custom pipelines and re-run your jobs after the application comes back. + +#### Recommendations + +- If this behavior is problematic in your environment (for example, it disrupts interactive work or automated + workflows), avoid using pipeline optimization and instead: + - Use baseline, hand-tuned pipelines. + - Adjust parameters manually rather than relying on DLSOptimizer. + +### Choosing the Encoding Device for "Save Output" + +When you enable the **"Save output"** option in ViPPET: + +- ViPPET records the output video to a file. +- You are asked to select a **device** that will be used for encoding. + +The current implementation does not automatically infer the best encoding device from the existing pipeline. To avoid +confusion and potential issues, use the following guidelines. + +#### How to choose the encoding device + +- Prefer the **same device that is already used by the downstream video elements** in your pipeline. +- In most cases, the most reliable choice is: + - The **device used by the element that is closest to the final `*sink`** in the pipeline (for example, the last + `va*` encoder or post-processing element before a sink). +- Using a different device for encoding than the one used by the rest of the downstream path can: + - Introduce unnecessary copies between devices, + - Or, in some environments, cause pipeline negotiation or stability issues. 
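+
+As a minimal illustration of this guidance (the file paths and model name below are placeholders, and the exact
+element set depends on your pipeline), a sketch of a pipeline where decoding, inference, post-processing, and encoding
+are all kept on the same device (`GPU.1`) could look like this:
+
+```bash
+# Illustrative sketch only: every stage is bound to GPU.1, so the encoder used for
+# "Save output" matches the device of the downstream elements and of gvadetect.
+gst-launch-1.0 filesrc location=input.mp4 ! qtdemux ! h264parse ! \
+  varenderD129h264dec ! \
+  gvadetect model=/models/detector.xml device=GPU.1 ! \
+  varenderD129postproc ! \
+  varenderD129h264enc ! h264parse ! mp4mux ! \
+  filesink location=output.mp4
+```
+
+Keeping the decoder, post-processing, and encoder variants aligned with the `device` property of the inference
+elements avoids the cross-device copies and negotiation problems described above.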
+ +#### Mapping devices (`GPU.X`) to DLStreamer elements + +DLStreamer maps logical GPU devices (`GPU.0`, `GPU.1`, `GPU.2`, …) to specific element variants as follows: + +- **`GPU.0`** (or `GPU` in a single-GPU system) maps to the generic VA-API elements: + - Decoders: `vah264dec` + - Post-processing: `vapostproc` + - Encoders: `vah264enc`, `vah264lpenc` +- **`GPU.1`, `GPU.2`, …** map to per-GPU elements whose names encode the GPU index, for example: + - For `GPU.1`: elements like `varenderD129h264dec`, `varenderD129postproc`, `varenderD129h264enc` + - For `GPU.2`: elements like `varenderD130h264dec`, `varenderD130postproc`, `varenderD130h264enc` + - And so on for additional GPUs. + +> **Note:** On systems with only one GPU, the device will be listed as simply `GPU` (not `GPU.0`) and you should always +> use the generic elements above (`vah264dec`, `vapostproc`, `vah264enc`, `vah264lpenc`). + +#### Practical guidance + +When selecting the encoding device in the **"Save output"** dialog: + +- If your pipeline uses **`vah264dec` / `vapostproc` / `vah264enc` / `vah264lpenc`** near the end of the pipeline, it + is typically running on **`GPU.0`** (or just `GPU` on a single-GPU system). → In this case, choose **`GPU.0`** (or + `GPU`) for encoding. +- If your pipeline uses elements like **`varenderD129*`**, **`varenderD130*`**, etc. near the end of the pipeline, + those typically correspond to **`GPU.1`**, **`GPU.2`**, and so on. → In this case, choose the `GPU.X` device that + matches the `varenderDXXX*` elements used by the final encoder or post-processing stage. + +For the precise and up-to-date mapping between `GPU.X` devices and `varenderDXXX*` elements on your platform, as well +as additional examples, see the DLStreamer GPU device selection guide: [GPU device selection in +DLStreamer](https://docs.openedgeplatform.intel.com/2025.2/edge-ai-libraries/dl-streamer/dev_guide/gpu_device_selection.html). + +## Running performance tests + +The Visual Pipeline and Platform Evaluation Tool enables performance testing of AI pipelines on Intel® hardware. Key +metrics such as CPU usage, GPU usage, and throughput can be collected and analyzed to optimize hardware selection. + +### Current Limitations + +- **NPU metrics are not visible in the UI**: ViPPET currently does not support displaying NPU-related metrics. NPU + utilization, throughput, and latency are not exposed in the ViPPET UI. +- **Limited metrics in the ViPPET UI**: At this stage, the ViPPET UI shows only a limited set of metrics: current CPU + utilization, current utilization of a single GPU, and the most recently measured FPS. More metrics (including + timeline-based charts) are planned for future releases. +- **No live preview video for running pipelines**: Live preview of the video from a running pipeline is not supported + in this release. As a workaround, you can enable the "Save output" option. After the pipeline finishes, inspect the + generated output video file. + +## Running density tests + +Currently, it is recommended to run **a single operation at a time** from the following set: tests, optimization, +validation. + +In this release: + +- New jobs are **not rejected or queued** when another job is already running. +- Starting more than one job at the same time launches **multiple GStreamer instances**. +- This can significantly **distort performance results** (for example, CPU/GPU utilization and FPS). + +For accurate and repeatable measurements, run these operations **one by one**.
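+
+If sequential runs need to be scripted, a simple host-side guard can help avoid overlapping jobs. The sketch below
+assumes that pipelines are executed as `gst-launch-1.0` processes and that container processes are visible from the
+host PID namespace; the process name is an assumption and may need to be adapted to your deployment:
+
+```bash
+# Illustrative sketch: wait until no GStreamer pipeline process remains before
+# starting the next job, so concurrent runs do not distort the measurements.
+while pgrep -f gst-launch-1.0 > /dev/null; do
+  echo "A pipeline is still running; waiting before starting the next job..."
+  sleep 10
+done
+echo "No pipeline is running; it is safe to start the next test."
+```
+
+A guard like this only reduces the chance of overlapping jobs; the recommendation above to start operations one at a
+time still applies.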