diff --git a/education-ai-suite/smart-classroom/docs/user-guide/how-it-works.md b/education-ai-suite/smart-classroom/docs/user-guide/how-it-works.md index 08c4ee34d..9a07a6770 100644 --- a/education-ai-suite/smart-classroom/docs/user-guide/how-it-works.md +++ b/education-ai-suite/smart-classroom/docs/user-guide/how-it-works.md @@ -40,7 +40,7 @@ The uploaded audio is passed to the Backend API, which acts as the gateway to th - **Pipeline Service** -The Pipeline Service manages multiple DLStreamer-based pipelines: +The Pipeline Service manages multiple DL Streamer-based pipelines: - Front Video Pipeline for front camera streams - Back Video Pipeline for back camera streams @@ -54,10 +54,8 @@ A Media Server (MediaMTX) supports streaming and distribution of processed video - Performance metrics (e.g., utilisation, model efficiency) are displayed for monitoring. - Localisation ensures outputs are available in multiple languages (English/Chinese). - ## Learn More - [System Requirements](system-requirements.md): Check the hardware and software requirements for deploying the application. - [Get Started](get-started.md): Follow step-by-step instructions to set up the application. - [Application Flow](application-flow.md): Check the flow of application. - diff --git a/manufacturing-ai-suite/README.md b/manufacturing-ai-suite/README.md index 59913aef6..aecce7815 100644 --- a/manufacturing-ai-suite/README.md +++ b/manufacturing-ai-suite/README.md @@ -28,7 +28,7 @@ The Manufacturing AI Suite helps you develop solutions for: | | | |:-------------|:------------| -| [Deep Learning Streamer](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/libraries/dl-streamer) | A framework for building optimized media analytics pipelines powered by OpenVINO™ toolkit. | +| [Deep Learning Streamer](https://github.com/open-edge-platform/dlstreamer/tree/master) | A framework for building optimized media analytics pipelines powered by OpenVINO™ toolkit. | | [Deep Learning Streamer Pipeline Server](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/microservices/dlstreamer-pipeline-server) | A containerized microservice, built on top of GStreamer, for development and deployment of video analytics pipelines. | | [Model Registry](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/microservices/model-registry) | Providing capabilities to manage the lifecycle of an AI model. | | [Time Series Analytics Microservice](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/microservices/time-series-analytics) | Built on top of **Kapacitor**, a containerized microservice for development and deployment of time series analytics capabilities | diff --git a/manufacturing-ai-suite/hmi-augmented-worker/README.md b/manufacturing-ai-suite/hmi-augmented-worker/README.md index 4a94c4968..107ccc312 100644 --- a/manufacturing-ai-suite/hmi-augmented-worker/README.md +++ b/manufacturing-ai-suite/hmi-augmented-worker/README.md @@ -2,7 +2,7 @@ GenAI is transforming Human Machine Interfaces (HMI) by enabling more intuitive, conversational, and context-aware interactions between operators and industrial systems. By leveraging advanced language models and retrieval-augmented generation, GenAI enhances decision-making, streamlines troubleshooting, and delivers real-time, actionable insights directly within the HMI environment. This leads to improved operator efficiency, reduced downtime, and safer manufacturing operations. 
-The `HMI Augmented Worker` sample application show cases how RAG pipelines can be integrated with HMI application. Besides RAG, the key feature of this sample application is that it executes in a Hypervisor based setup where HMI application executes on Windows® OS based VM while the RAG application runs in native Ubuntu or EMT based setup. This enables running this application on Intel® Core™ portfolio.
+The `HMI Augmented Worker` sample application showcases how RAG pipelines can be integrated with an HMI application. Besides RAG, the key feature of this sample application is that it executes in a hypervisor-based setup where the HMI application runs on a Windows® OS based VM while the RAG application runs in a native Ubuntu or Edge Microvisor Toolkit based setup. This enables running this application on the Intel® Core™ portfolio.

## Documentation

@@ -16,7 +16,7 @@ The `HMI Augmented Worker` sample application show cases how RAG pipelines can b
 - [System Requirements](./docs/user-guide/system-requirements.md): Requirements include hardware and software to deploy the sample application.

- **Advanced**
-  - [Build From Source](./docs/user-guide/how-to-build-from-source.md): Guide to build the file watcher service on Windows® OS and how it can be interfaced with RAG pipeline that executes on the Ubuntu or EMT side.
+  - [Build From Source](./docs/user-guide/how-to-build-from-source.md): Guide to build the file watcher service on Windows® OS and how it can be interfaced with the RAG pipeline that executes on the Ubuntu or Edge Microvisor Toolkit side.

- **Release Notes**
  - [Release Notes](./docs/user-guide/release-notes.md): Notes on the latest releases, updates, improvements, and bug fixes.

diff --git a/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/get-started.md b/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/get-started.md
index bc641bc88..e4cb1bd58 100644
--- a/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/get-started.md
+++ b/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/get-started.md
@@ -1,12 +1,12 @@
 # Get Started

-The `Get Started` guide explains how the HMI Augmented Worker application can be setup on a Type-2 hypervisor with the HMI on a Windows® VM while deploying the RAG pipeline natively on the Hypervisor host (EMT).
+The `Get Started` guide explains how the HMI Augmented Worker application can be set up on a Type-2 hypervisor with the HMI on a Windows® VM while deploying the RAG pipeline natively on the hypervisor host (Edge Microvisor Toolkit).

## Prerequisites

The sample application has mandatory prerequisites that are covered in other documentation. The user is required to refer to the respective documentation for the details. The prerequisites listed below cover such dependencies.

-- Set up EMT based Type-2 Hypervisor host on target hardware. EMT is a reference hypervisor which has been used for validation. Other Type-2 hypervisors can also be used as per user preference. Reference documentation link for EMT as VM host is provided in [Other Documentation](#other-documentation) section. The reader is advised to contact Intel representatives for further details on configuring EMT host VM and instructions on hosting the Windows® Guest OS.
+- Set up an Edge Microvisor Toolkit based Type-2 hypervisor host on target hardware. Edge Microvisor Toolkit is a reference hypervisor that has been used for validation. Other Type-2 hypervisors can also be used as per user preference.
The reference documentation link for Edge Microvisor Toolkit as a VM host is provided in the [Other Documentation](#other-documentation) section. The reader is advised to contact Intel representatives for further details on configuring an Edge Microvisor Toolkit host VM and instructions on hosting the Windows® Guest OS.

- The `HMI Augmented Worker` sample application utilizes `Chat Question and Answer Core` for the RAG pipeline. The [documentation](#other-documentation) available with `Chat Question-and-Answer Core` covers the details of how to set up the RAG pipeline, deploy it, and consume the application. Follow the instructions provided and set up the RAG pipeline.

@@ -68,8 +68,8 @@ To use the application effectively, make sure that all the steps mentioned in th

## Other Documentation

-- [EMT Main Page](https://github.com/open-edge-platform/edge-microvisor-toolkit)
-- [Create EMT bootable USB drive using source code](https://github.com/open-edge-platform/edge-microvisor-toolkit-standalone-node/blob/main/standalone-node/docs/user-guide/get-started-guide.md#create-a-bootable-usb-drive-using-source-code)
-- [Desktop Virtualization on EMT](https://github.com/open-edge-platform/edge-microvisor-toolkit-standalone-node/blob/main/standalone-node/docs/user-guide/desktop-virtualization-image-guide.md)
-- [EMT Documentation](https://github.com/open-edge-platform/edge-microvisor-toolkit/tree/3.0/docs/developer-guide)
+- [Edge Microvisor Toolkit Main Page](https://github.com/open-edge-platform/edge-microvisor-toolkit)
+- [Create Edge Microvisor Toolkit bootable USB drive using source code](https://github.com/open-edge-platform/edge-microvisor-toolkit-standalone-node/blob/main/standalone-node/docs/user-guide/get-started-guide.md#create-a-bootable-usb-drive-using-source-code)
+- [Desktop Virtualization on Edge Microvisor Toolkit](https://github.com/open-edge-platform/edge-microvisor-toolkit-standalone-node/blob/main/standalone-node/docs/user-guide/desktop-virtualization-image-guide.md)
+- [Edge Microvisor Toolkit Documentation](https://github.com/open-edge-platform/edge-microvisor-toolkit/tree/3.0/docs/developer-guide)
- [Chat Question and Answer Core Main Page](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/sample-applications/chat-question-and-answer-core)

diff --git a/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/index.md b/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/index.md
index 0eb530372..a39901734 100644
--- a/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/index.md
+++ b/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/index.md
@@ -18,9 +18,9 @@ a single physical machine.

In this architecture, the HMI application operates within a Windows® virtual machine
managed by a Type-2 hypervisor such as
-[EMT](https://github.com/open-edge-platform/edge-microvisor-toolkit).
+[Edge Microvisor Toolkit](https://github.com/open-edge-platform/edge-microvisor-toolkit).
The Retrieval-Augmented Generation (RAG) pipeline and supporting AI services are deployed
-natively on a host system, which is EMT in this implementation.
+natively on a host system, which is the Edge Microvisor Toolkit in this implementation.
[Chat Question-and-Answer Core](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/sample-applications/chat-question-and-answer-core) provides the RAG capability.
This separation ensures robust isolation between the HMI and AI components, enabling

diff --git a/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/overview.md b/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/overview.md
index 5acfad8c4..5aeec4f6c 100644
--- a/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/overview.md
+++ b/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/overview.md
@@ -1,9 +1,9 @@
 # Overview

-The HMI Augmented Worker is a RAG enabled HMI application deployed on Type-2 hypervisors. Deploying RAG-enabled HMI applications in a Type-2 hypervisor setup allows flexible and efficient resource utilization by running multiple operating systems on a single physical machine.
+The HMI Augmented Worker is a RAG-enabled HMI application deployed on Type-2 hypervisors. Deploying RAG-enabled HMI applications in a Type-2 hypervisor setup allows flexible and efficient resource utilization by running multiple operating systems on a single physical machine.

-In this architecture, the HMI application operates within a Windows® virtual machine managed by a Type-2 hypervisor such as [EMT](https://github.com/open-edge-platform/edge-microvisor-toolkit). The Retrieval-Augmented Generation (RAG) pipeline and supporting AI services are deployed natively on a host system, which is EMT in this implementation. [Chat Question-and-Answer Core](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/sample-applications/chat-question-and-answer-core) provides the RAG capability. This separation ensures robust isolation between the HMI and AI components, enabling independent scaling, maintenance, and updates. The setup leverages the strengths of both environments, providing a seamless integration that enhances operator experience while maintaining system reliability and security.
+In this architecture, the HMI application operates within a Windows® virtual machine managed by a Type-2 hypervisor such as [Edge Microvisor Toolkit](https://github.com/open-edge-platform/edge-microvisor-toolkit). The Retrieval-Augmented Generation (RAG) pipeline and supporting AI services are deployed natively on a host system, which is the Edge Microvisor Toolkit in this implementation. [Chat Question-and-Answer Core](https://github.com/open-edge-platform/edge-ai-libraries/tree/main/sample-applications/chat-question-and-answer-core) provides the RAG capability. This separation ensures robust isolation between the HMI and AI components, enabling independent scaling, maintenance, and updates. The setup leverages the strengths of both environments, providing a seamless integration that enhances operator experience while maintaining system reliability and security.

-RAG-enabled HMI applications offer a substantial opportunity to enhance the capabilities of manufacturing machine operators, especially those who are less experienced. RAG enabled LLM applications deliver a user-friendly interface for troubleshooting advice, data summarization, and planning, utilizing a knowledge base tailored to specific deployments, including telemetry data, support logs, machine manuals, and production plans. This document details the use cases, architectures, and requirements for implementing RAG LLMs in HMI systems to improve operational efficiency, decision-making, and overall productivity for machine operators. In this sample application, the focus is on providing an RAG pipeline in a Type-2 Hypervisor-based setup.
There is no reference HMI used and the user is expected to do the HMI integration using the RAG pipeline APIs provided.
+RAG-enabled HMI applications offer a substantial opportunity to enhance the capabilities of manufacturing machine operators, especially those who are less experienced. RAG-enabled LLM applications deliver a user-friendly interface for troubleshooting advice, data summarization, and planning, utilizing a knowledge base tailored to specific deployments, including telemetry data, support logs, machine manuals, and production plans. This document details the use cases, architectures, and requirements for implementing RAG LLMs in HMI systems to improve operational efficiency, decision-making, and overall productivity for machine operators. In this sample application, the focus is on providing a RAG pipeline in a Type-2 hypervisor-based setup. There is no reference HMI used, and the user is expected to do the HMI integration using the RAG pipeline APIs provided.

## How it works

This section highlights the high-level architecture of the sample application.

diff --git a/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/release-notes.md b/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/release-notes.md
index 2890d0ee3..b7151d317 100644
--- a/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/release-notes.md
+++ b/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/release-notes.md
@@ -3,27 +3,26 @@

## Current Release

**Version**: RC1 \
-**Release Date**: 14 July 2025 
+**Release Date**: 14 July 2025

**Key Features and Improvements:**

- **HMI Augmented Worker Use Case:** First drop of the sample application implementing the documented features.
- 
+

**Development Testing:**

-Intel® Core™ i7-14700 based systems with EMT and Windows® 11 based Guest VM.
- 
+Intel® Core™ i7-14700 based systems with Edge Microvisor Toolkit and a Windows® 11 based Guest VM.
+

**Documentation:**

-Documentation is **completed**. [README.md](../../README.md) is updated with installation steps and reference documents. 
- 
+Documentation is **complete**. [README.md](../../README.md) is updated with installation steps and reference documents.
+

**Known Limitations and Issues:**

-- EMF Deployment package is not applicable to this sample application.
-- EMT as VM host setup documentation is dependent on what is available in EMT documentation.
+- Edge Manageability Framework Deployment package is not applicable to this sample application.
+- Edge Microvisor Toolkit as VM host setup documentation is dependent on what is available in the Edge Microvisor Toolkit documentation.
- Windows® Guest VM setup is not documented. Users are requested to contact Intel representatives for the same.
- 
+

## Previous releases

None
-

diff --git a/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/system-requirements.md b/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/system-requirements.md
index 8871a14a4..6f79fdc89 100644
--- a/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/system-requirements.md
+++ b/manufacturing-ai-suite/hmi-augmented-worker/docs/user-guide/system-requirements.md
@@ -3,17 +3,19 @@

This page provides detailed hardware, software, and platform requirements to help you set up and run the application efficiently.
## Hardware Platforms used for validation
+
The `Chat question and answer Core` sample application system requirements are documented in [this](https://github.com/open-edge-platform/edge-ai-libraries/blob/main/sample-applications/chat-question-and-answer-core/docs/user-guide/system-requirements.md) page. The Intel® Core™ portfolio mentioned in this page is applicable for the `HMI Augmented Worker` sample application too. The delta configurations supported are further described in this page.

-The `HMI Augmented worker` sample application has been validated on Intel® Core™ i7-14700 based systems. The memory configuration used was 32GB which is the recommended minimum configuration. This machine hosts EMT based host together with Windows VM.
+The `HMI Augmented Worker` sample application has been validated on Intel® Core™ i7-14700 based systems. The memory configuration used was 32GB, which is the recommended minimum configuration. This machine runs an Edge Microvisor Toolkit based host together with a Windows VM.

## Software Requirements

Required Software:

- Python 3.10
-- [EMT](https://github.com/open-edge-platform/edge-microvisor-toolkit) configuration requirements are documented in the [system requirement](https://github.com/open-edge-platform/edge-microvisor-toolkit/blob/3.0/docs/developer-guide/emt-system-requirements.md) page.
+- [Edge Microvisor Toolkit](https://github.com/open-edge-platform/edge-microvisor-toolkit) configuration requirements are documented in the [system requirements](https://github.com/open-edge-platform/edge-microvisor-toolkit/blob/3.0/docs/developer-guide/emt-system-requirements.md) page.

## Supporting Resources
-* [Overview](./overview.md)
-* [Get Started Guide](./get-started.md)
+
+- [Overview](./overview.md)
+- [Get Started Guide](./get-started.md)

diff --git a/manufacturing-ai-suite/industrial-edge-insights-multimodal/docs/user-guide/how-to-guides/how-to-configure-alerts.md b/manufacturing-ai-suite/industrial-edge-insights-multimodal/docs/user-guide/how-to-guides/how-to-configure-alerts.md
index 493d11a0b..0966b70c9 100644
--- a/manufacturing-ai-suite/industrial-edge-insights-multimodal/docs/user-guide/how-to-guides/how-to-configure-alerts.md
+++ b/manufacturing-ai-suite/industrial-edge-insights-multimodal/docs/user-guide/how-to-guides/how-to-configure-alerts.md
@@ -54,7 +54,7 @@ docker exec -ti ia-mqtt-broker mosquitto_sub -h localhost -v -t '#' -p 1883
docker exec -ti ia-mqtt-broker mosquitto_sub -h localhost -v -t alerts/weld_defect_detection -p 1883
```

-#### Docker - Subscribing to DLStreamer Pipeline Server Results
+#### Docker - Subscribing to DL Streamer Pipeline Server Results

```sh
docker exec -ti ia-mqtt-broker mosquitto_sub -h localhost -v -t vision_weld_defect_classification -p 1883
```

diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/get-started.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/get-started.md
index 4a50060ac..6727b3c0e 100644
--- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/get-started.md
+++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/get-started.md
@@ -27,7 +27,7 @@ If not, follow the [installation guide for docker engine](https://docs.docker.co

3. Edit the below mentioned environment variables in the `.env` file as follows:

   ```bash
-   HOST_IP= # IP address of server where DLStreamer Pipeline Server is running.
+   HOST_IP= # IP address of server where DL Streamer Pipeline Server is running.

   MR_PSQL_PASSWORD= #PostgreSQL service & client adapter e.g. intel1234

@@ -108,7 +108,7 @@ If not, follow the [installation guide for docker engine](https://docs.docker.co
   ./sample_start.sh -p pallet_defect_detection
   ```

-   This command will look for the payload for the pipeline specified in the `-p` argument above, inside the `payload.json` file and launch a pipeline instance in DLStreamer Pipeline Server. Refer to the table, to learn about different available options.
+   This command will look for the payload for the pipeline specified in the `-p` argument above, inside the `payload.json` file and launch a pipeline instance in DL Streamer Pipeline Server. Refer to the table to learn about the different available options.

   > **IMPORTANT**: Before you run `sample_start.sh` script, make sure that
   > `jq` is installed on your system. See the

diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/how-to-deploy-using-helm-charts.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/how-to-deploy-using-helm-charts.md
index a73db0e11..be641bf97 100644
--- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/how-to-deploy-using-helm-charts.md
+++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/how-to-deploy-using-helm-charts.md
@@ -35,7 +35,7 @@
   `rm -rf helm && mv pallet-defect-detection-reference-implementation helm`
4. Edit the HOST_IP, proxy and other environment variables in `helm/values.yaml` as follows
   ```yaml
-   env: 
+   env:
     HOST_IP: # host IP address
     MINIO_ACCESS_KEY: # example: minioadmin
     MINIO_SECRET_KEY: # example: minioadmin
@@ -72,7 +72,7 @@
2. Copy the resources such as video and model from local directory to the `dlstreamer-pipeline-server` pod to make them available for application while launching pipelines.
   ```sh
   # Below is an example for Pallet Defect Detection. Please adjust the source path of models and videos appropriately for other sample applications.
-    
+
   POD_NAME=$(kubectl get pods -n apps -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep deployment-dlstreamer-pipeline-server | head -n 1)

   kubectl cp resources/pallet-defect-detection/videos/warehouse.avi $POD_NAME:/home/pipeline-server/resources/videos/ -c dlstreamer-pipeline-server -n apps

@@ -83,8 +83,8 @@
   ```sh
   ./sample_list.sh helm
   ```
-   This lists the pipeline loaded in DLStreamer Pipeline Server. 
-    
+   This lists the pipelines loaded in DL Streamer Pipeline Server.
+
   Output:
   ```sh
   # Example output for Pallet Defect Detection
@@ -119,8 +119,8 @@
   ```sh
   ./sample_start.sh helm -p pallet_defect_detection
   ```
-   This command would look for the payload for the pipeline specified in `-p` argument above, inside the `payload.json` file and launch the a pipeline instance in DLStreamer Pipeline Server. Refer to the table, to learn about different options available. 
-    
+   This command would look for the payload for the pipeline specified in the `-p` argument above, inside the `payload.json` file and launch a pipeline instance in DL Streamer Pipeline Server. Refer to the table to learn about the different options available.
+
   Output:
   ```sh
   # Example output for Pallet Defect Detection
@@ -145,7 +145,7 @@
   ./sample_status.sh helm
   ```
   This command lists the status of pipeline instances launched during the lifetime of the sample application.
-    
+
   Output:
   ```sh
   # Example output for Pallet Defect Detection
@@ -168,7 +168,7 @@
   ./sample_stop.sh helm
   ```
   This command will stop all instances that are currently in `RUNNING` state and respond with the last status.
-    
+
   Output:
   ```sh
   # Example output for Pallet Defect Detection
@@ -189,7 +189,7 @@
       "state": "RUNNING"
   }
   ```
-   If you wish to stop a specific instance, you can provide it with an `--id` argument to the command. 
+   If you wish to stop a specific instance, you can provide an `--id` argument to the command.
   For example, `./sample_stop.sh helm --id 99ac50d852b511f09f7c2242868ff651`

7. Uninstall the helm chart.
@@ -200,9 +200,9 @@

## Storing frames to S3 storage

-Applications can take advantage of S3 publish feature from DLStreamer Pipeline Server and use it to save frames to an S3 compatible storage.
+Applications can take advantage of the S3 publish feature of DL Streamer Pipeline Server and use it to save frames to S3-compatible storage.

-1. Run all the steps mentioned in above [section](./how-to-deploy-using-helm-charts.md#setup-the-application) to setup the application. 
+1. Run all the steps mentioned in the above [section](./how-to-deploy-using-helm-charts.md#setup-the-application) to set up the application.

2. Install the helm chart

   ```sh
@@ -212,7 +212,7 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S
3. Copy the resources such as video and model from local directory to the `dlstreamer-pipeline-server` pod to make them available for application while launching pipelines.
   ```sh
   # Below is an example for Pallet Defect Detection. Please adjust the source path of models and videos appropriately for other sample applications.
-    
+
   POD_NAME=$(kubectl get pods -n apps -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep deployment-dlstreamer-pipeline-server | head -n 1)

   kubectl cp resources/pallet-defect-detection/videos/warehouse.avi $POD_NAME:/home/pipeline-server/resources/videos/ -c dlstreamer-pipeline-server -n apps

@@ -221,14 +221,14 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S
   ```

4. Install the package `boto3` in your python environment if not installed.
-    
+
   It is recommended to create a virtual environment and install it there. You can run the following commands to add the necessary dependencies as well as create and activate the environment.
-    
+
   ```sh
   sudo apt update && \
   sudo apt install -y python3 python3-pip python3-venv
   ```
-   ```sh 
+   ```sh
   python3 -m venv venv && \
   source venv/bin/activate
   ```
@@ -238,9 +238,9 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S
   pip3 install --upgrade pip && \
   pip3 install boto3==1.36.17
   ```
-   > **Note** DLStreamer Pipeline Server expects the bucket to be already present in the database. The next step will help you create one.
+   > **Note** DL Streamer Pipeline Server expects the bucket to be already present in the database. The next step will help you create one.

-5. Create a S3 bucket using the following script. 
+5. Create an S3 bucket using the following script.
   Update the `HOST_IP` and credentials with those of the running MinIO server. Name the file as `create_bucket.py`.
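   For orientation, a bucket-creation script of this kind is typically only a few lines of `boto3`. A minimal sketch, assuming the MinIO credentials configured in `values.yaml` (the port and bucket name below are placeholders; the `create_bucket.py` shipped with the sample application is authoritative):

   ```python
   import boto3

   HOST_IP = "<HOST_IP>"              # host running the MinIO server
   MINIO_ACCESS_KEY = "<minioadmin>"  # use the credentials set in values.yaml
   MINIO_SECRET_KEY = "<minioadmin>"

   # Connect to MinIO through its S3-compatible API.
   s3 = boto3.client(
       "s3",
       endpoint_url=f"http://{HOST_IP}:<minio-port>",
       aws_access_key_id=MINIO_ACCESS_KEY,
       aws_secret_access_key=MINIO_SECRET_KEY,
   )

   # DL Streamer Pipeline Server expects this bucket to exist before it publishes frames.
   s3.create_bucket(Bucket="<bucket-name>")
   print("Bucket created.")
   ```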
@@ -301,7 +301,7 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S

## MLOps using Model Registry

-1. Run all the steps mentioned in above [section](./how-to-deploy-using-helm-charts.md#setup-the-application) to setup the application.
+1. Run all the steps mentioned in the above [section](./how-to-deploy-using-helm-charts.md#setup-the-application) to set up the application.

2. Install the helm chart

   ```sh
@@ -311,7 +311,7 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S
3. Copy the resources such as video and model from local directory to the `dlstreamer-pipeline-server` pod to make them available for application while launching pipelines.
   ```sh
   # Below is an example for Pallet Defect Detection. Please adjust the source path of models and videos appropriately for other sample applications.
-    
+
   POD_NAME=$(kubectl get pods -n apps -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep deployment-dlstreamer-pipeline-server | head -n 1)

   kubectl cp resources/pallet-defect-detection/videos/warehouse.avi $POD_NAME:/home/pipeline-server/resources/videos/ -c dlstreamer-pipeline-server -n apps

@@ -354,11 +354,11 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S
6. Download and prepare the model.
   ```sh
   export MODEL_URL='https://github.com/open-edge-platform/edge-ai-resources/raw/a7c9522f5f936c47de8922046db7d7add13f93a0/models/INT8/pallet_defect_detection.zip'
-    
+
   curl -L "$MODEL_URL" -o "$(basename $MODEL_URL)"
   ```

-7. Run the following curl command to upload the local model. 
+7. Run the following curl command to upload the local model.
   ```sh
   curl -k -L -X POST "https://:30443/registry/models" \
   -H 'Content-Type: multipart/form-data' \
@@ -409,4 +409,4 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S
![WebRTC streaming](./images/webrtc-streaming.png)

## Troubleshooting
-- [Troubleshooting Guide](troubleshooting-guide.md)
\ No newline at end of file
+- [Troubleshooting Guide](troubleshooting-guide.md)

diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/how-to-enable-mlops.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/how-to-enable-mlops.md
index 1f7de1870..48938ee96 100644
--- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/how-to-enable-mlops.md
+++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/how-to-enable-mlops.md
@@ -4,7 +4,7 @@ With this feature, during runtime, you can download a new model from the registr

## Contents

-### Launch a pipeline in DLStreamer Pipeline Server
+### Launch a pipeline in DL Streamer Pipeline Server

1. Set up the sample application to start a pipeline. A pipeline named `pallet_defect_detection_mlops` is already provided in the `pipeline-server-config.json` for this demonstration with the pallet defect detection sample app.
   > Ensure that the pipeline inference element such as gvadetect/gvaclassify/gvainference should not have a `model-instance-id` property set. If set, this would not allow the new model to be run with the same value provided in the model-instance-id.
@@ -68,21 +68,21 @@ With this feature, during runtime, you can download a new model from the registr
   ./sample_start.sh -p pallet_defect_detection_mlops
   ```

-    
+

### Upload a model to Model Registry

-   > The following section assumes Model Registry microservice is up and running. 
+   > The following section assumes the Model Registry microservice is up and running.
   > For this demonstration we will be using Geti trained pallet defect detection model.
Usually, the newer model is the same as the older one architecture-wise, but is retrained for better performance. We will use the same model and call it a different version.

1. Download and prepare the model.
   ```sh
   export MODEL_URL='https://github.com/open-edge-platform/edge-ai-resources/raw/a7c9522f5f936c47de8922046db7d7add13f93a0/models/INT8/pallet_defect_detection.zip'
-    
+
   curl -L "$MODEL_URL" -o "$(basename $MODEL_URL)"
   ```

-2. Run the following curl command to upload the local model. 
+2. Run the following curl command to upload the local model.
   ```sh
   curl -k -L -X POST "https:///registry/models" \
   -H 'Content-Type: multipart/form-data' \

diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/how-to-run-multiple-ai-pipelines.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/how-to-run-multiple-ai-pipelines.md
index 0e8f3cf3e..3ff283420 100644
--- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/how-to-run-multiple-ai-pipelines.md
+++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/how-to-run-multiple-ai-pipelines.md
@@ -1,8 +1,8 @@
 # Run multiple AI pipelines

-In a typical deployment, multiple cameras deliver video streams that are connected to AI pipelines to improve the detection and recognition accuracy. 
+In a typical deployment, multiple cameras deliver video streams that are connected to AI pipelines to improve detection and recognition accuracy.

-The DLStreamer Pipeline Server config supports multiple pipelines that you can use to launch pipeline instances. The sample application has been provided with such a config i.e. `pipeline-server-config.json`. We will use the same to demonstrate launching multiple AI pipelines.
+The DL Streamer Pipeline Server config supports multiple pipelines that you can use to launch pipeline instances. The sample application has been provided with such a config, i.e., `pipeline-server-config.json`. We will use it to demonstrate launching multiple AI pipelines, as sketched below.
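A quick preview of what this looks like in practice, using pipeline names that this sample application already provides (the steps below walk through the details):

```sh
# Launch two pipeline instances, then list the status of both.
./sample_start.sh -p pallet_defect_detection
./sample_start.sh -p pallet_defect_detection_mlops
./sample_status.sh
```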
## Steps

diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/how-to-run-store-frames-in-s3.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/how-to-run-store-frames-in-s3.md
index cac82c7e9..e39f766e8 100644
--- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/how-to-run-store-frames-in-s3.md
+++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/how-to-run-store-frames-in-s3.md
@@ -1,6 +1,6 @@
 # Storing frames to S3 storage

-Applications can take advantage of S3 publish feature from DLStreamer Pipeline Server and use it to save frames to an S3 compatible storage.
+Applications can take advantage of the S3 publish feature of DL Streamer Pipeline Server and use it to save frames to S3-compatible storage.

## Steps

@@ -14,14 +14,14 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S
   ```
3. Install the package `boto3` in your python environment if not installed.
-    
+
   It is recommended to create a virtual environment and install it there. You can run the following commands to add the necessary dependencies as well as create and activate the environment.
-    
+
   ```sh
   sudo apt update && \
   sudo apt install -y python3 python3-pip python3-venv
   ```
-   ```sh 
+   ```sh
   python3 -m venv venv && \
   source venv/bin/activate
   ```
@@ -31,9 +31,9 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S
   pip3 install --upgrade pip && \
   pip3 install boto3==1.36.17
   ```
-   > **Note** DLStreamer Pipeline Server expects the bucket to be already present in the database. The next step will help you create one.
+   > **Note** DL Streamer Pipeline Server expects the bucket to be already present in the database. The next step will help you create one.

-4. Create a S3 bucket using the following script. 
+4. Create an S3 bucket using the following script.
   Update the `HOST_IP` and credentials with those of the running MinIO server. Name the file as `create_bucket.py`.

diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/troubleshooting-guide.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/troubleshooting-guide.md
index ea690f5f9..8bc6a595a 100644
--- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/troubleshooting-guide.md
+++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pallet-defect-detection/troubleshooting-guide.md
@@ -136,4 +136,4 @@
sudo apt install unzip
```

To install `jq`, refer to the following
-[instructions](#unable-to-parse-json-payload-due-to-missing-jq-package).
\ No newline at end of file
+[instructions](#unable-to-parse-json-payload-due-to-missing-jq-package).

diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/get-started.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/get-started.md
index 0db28aee6..6b4d2e7ee 100644
--- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/get-started.md
+++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/get-started.md
@@ -28,7 +28,7 @@ If not, follow the [installation guide for docker engine](https://docs.docker.co

3. Edit the below mentioned environment variables in the `.env` file as follows:

   ```bash
-   HOST_IP= # IP address of server where DLStreamer Pipeline Server is running.
+   HOST_IP= # IP address of server where DL Streamer Pipeline Server is running.

   MR_PSQL_PASSWORD= #PostgreSQL service & client adapter e.g. intel1234

@@ -104,7 +104,7 @@ If not, follow the [installation guide for docker engine](https://docs.docker.co
   ./sample_start.sh -p pcb_anomaly_detection
   ```

-   This command will look for the payload for the pipeline specified in the `-p` argument above, inside the `payload.json` file and launch a pipeline instance in DLStreamer Pipeline Server. Refer to the table, to learn about different available options.
+   This command will look for the payload for the pipeline specified in the `-p` argument above, inside the `payload.json` file and launch a pipeline instance in DL Streamer Pipeline Server. Refer to the table to learn about the different available options.
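   For reference, each entry in `payload.json` pairs a pipeline name with the request body that is posted to DL Streamer Pipeline Server. An illustrative sketch of the shape, with placeholder values (the `payload.json` shipped with the sample application defines the exact fields):

   ```json
   [
       {
           "pipeline": "pcb_anomaly_detection",
           "payload": {
               "source": {
                   "uri": "file:///home/pipeline-server/resources/videos/anomalib_pcb_test.avi",
                   "type": "uri"
               },
               "destination": {
                   "frame": {
                       "type": "webrtc",
                       "peer-id": "<stream-id>"
                   }
               }
           }
       }
   ]
   ```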
   > **IMPORTANT**: Before you run `sample_start.sh` script, make sure that
   > `jq` is installed on your system. See the

diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/how-to-deploy-using-helm-charts.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/how-to-deploy-using-helm-charts.md
index 52adf21ce..4c750c960 100644
--- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/how-to-deploy-using-helm-charts.md
+++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/how-to-deploy-using-helm-charts.md
@@ -16,7 +16,7 @@
   ```sh
   git clone https://github.com/open-edge-platform/edge-ai-suites.git
   cd edge-ai-suites/manufacturing-ai-suite/industrial-edge-insights-vision/
-   ``` 
+   ```
2. Set app specific values.yaml file.
   ```sh
   cp helm/values_pcb_anomaly_detection.yaml helm/values.yaml
@@ -35,7 +35,7 @@
   `rm -rf helm && mv pcb-anomaly-detection helm`
4. Edit the HOST_IP, proxy and other environment variables in `helm/values.yaml` as follows
   ```yaml
-   env: 
+   env:
     HOST_IP: # host IP address
     MINIO_ACCESS_KEY: # example: minioadmin
     MINIO_SECRET_KEY: # example: minioadmin
@@ -72,19 +72,19 @@
2. Copy the resources such as video and model from local directory to the `dlstreamer-pipeline-server` pod to make them available for application while launching pipelines.
   ```sh
   # Below is an example for PCB Anomaly Detection. Please adjust the source path of models and videos appropriately for other sample applications.
-    
+
   POD_NAME=$(kubectl get pods -n apps -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep deployment-dlstreamer-pipeline-server | head -n 1)

   kubectl cp resources/pcb-anomaly-detection/videos/anomalib_pcb_test.avi $POD_NAME:/home/pipeline-server/resources/videos/ -c dlstreamer-pipeline-server -n apps
-    
-   kubectl cp resources/pcb-anomaly-detection/models/* $POD_NAME:/home/pipeline-server/resources/models/ -c dlstreamer-pipeline-server -n apps 
+
+   kubectl cp resources/pcb-anomaly-detection/models/* $POD_NAME:/home/pipeline-server/resources/models/ -c dlstreamer-pipeline-server -n apps
   ```
3. Fetch the list of pipelines loaded and available to launch
   ```sh
   ./sample_list.sh helm
   ```
-   This lists the pipeline loaded in DLStreamer Pipeline Server. 
-    
+   This lists the pipelines loaded in DL Streamer Pipeline Server.
+
   Output:
   ```sh
   # Example output for PCB Anomaly Detection
@@ -119,8 +119,8 @@
   ```sh
   ./sample_start.sh helm -p pcb_anomaly_detection
   ```
-   This command would look for the payload for the pipeline specified in `-p` argument above, inside the `payload.json` file and launch the a pipeline instance in DLStreamer Pipeline Server. Refer to the table, to learn about different options available. 
-    
+   This command would look for the payload for the pipeline specified in the `-p` argument above, inside the `payload.json` file and launch a pipeline instance in DL Streamer Pipeline Server. Refer to the table to learn about the different options available.
+
   Output:
   ```sh
   # Example output for PCB Anomaly Detection
@@ -145,7 +145,7 @@
   ./sample_status.sh helm
   ```
   This command lists the status of pipeline instances launched during the lifetime of the sample application.
-    
+
   Output:
   ```sh
   # Example output for PCB Anomaly Detection
@@ -168,7 +168,7 @@
   ./sample_stop.sh helm
   ```
   This command will stop all instances that are currently in `RUNNING` state and respond with the last status.
-    
+
   Output:
   ```sh
   # Example output for PCB Anomaly Detection
@@ -189,7 +189,7 @@
       "state": "RUNNING"
   }
   ```
-   If you wish to stop a specific instance, you can provide it with an `--id` argument to the command. 
+   If you wish to stop a specific instance, you can provide an `--id` argument to the command.
   For example, `./sample_stop.sh helm --id f0c0b5aa5d4911f0bca7023bb629a486`

7. Uninstall the helm chart.
@@ -199,9 +199,9 @@

## Storing frames to S3 storage

-Applications can take advantage of S3 publish feature from DLStreamer Pipeline Server and use it to save frames to an S3 compatible storage.
+Applications can take advantage of the S3 publish feature of DL Streamer Pipeline Server and use it to save frames to S3-compatible storage.

-1. Run all the steps mentioned in above [section](./how-to-deploy-using-helm-charts.md#setup-the-application) to setup the application. 
+1. Run all the steps mentioned in the above [section](./how-to-deploy-using-helm-charts.md#setup-the-application) to set up the application.

2. Install the helm chart

   ```sh
@@ -211,23 +211,23 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S
3. Copy the resources such as video and model from local directory to the `dlstreamer-pipeline-server` pod to make them available for application while launching pipelines.
   ```sh
   # Below is an example for PCB Anomaly Detection. Please adjust the source path of models and videos appropriately for other sample applications.
-    
+
   POD_NAME=$(kubectl get pods -n apps -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep deployment-dlstreamer-pipeline-server | head -n 1)

   kubectl cp resources/pcb-anomaly-detection/videos/anomalib_pcb_test.avi $POD_NAME:/home/pipeline-server/resources/videos/ -c dlstreamer-pipeline-server -n apps
-    
+
   kubectl cp resources/pcb-anomaly-detection/models/* $POD_NAME:/home/pipeline-server/resources/models/ -c dlstreamer-pipeline-server -n apps
   ```

4. Install the package `boto3` in your python environment if not installed.
-    
+
   It is recommended to create a virtual environment and install it there. You can run the following commands to add the necessary dependencies as well as create and activate the environment.
-    
+
   ```sh
   sudo apt update && \
   sudo apt install -y python3 python3-pip python3-venv
   ```
-   ```sh 
+   ```sh
   python3 -m venv venv && \
   source venv/bin/activate
   ```
@@ -237,9 +237,9 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S
   pip3 install --upgrade pip && \
   pip3 install boto3==1.36.17
   ```
-   > **Note** DLStreamer Pipeline Server expects the bucket to be already present in the database. The next step will help you create one.
+   > **Note** DL Streamer Pipeline Server expects the bucket to be already present in the database. The next step will help you create one.

-5. Create a S3 bucket using the following script. 
+5. Create an S3 bucket using the following script.
   Update the `HOST_IP` and credentials with those of the running MinIO server. Name the file as `create_bucket.py`.

@@ -300,7 +300,7 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S

## MLOps using Model Registry

-1. Run all the steps mentioned in above [section](./how-to-deploy-using-helm-charts.md#setup-the-application) to setup the application. 
+1. Run all the steps mentioned in the above [section](./how-to-deploy-using-helm-charts.md#setup-the-application) to set up the application.

2.
Install the helm chart

   ```sh
@@ -310,11 +310,11 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S
3. Copy the resources such as video and model from local directory to the `dlstreamer-pipeline-server` pod to make them available for application while launching pipelines.
   ```sh
   # Below is an example for PCB Anomaly Detection. Please adjust the source path of models and videos appropriately for other sample applications.
-    
+
   POD_NAME=$(kubectl get pods -n apps -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep deployment-dlstreamer-pipeline-server | head -n 1)

   kubectl cp resources/pcb-anomaly-detection/videos/anomalib_pcb_test.avi $POD_NAME:/home/pipeline-server/resources/videos/ -c dlstreamer-pipeline-server -n apps
-    
+
   kubectl cp resources/pcb-anomaly-detection/models/* $POD_NAME:/home/pipeline-server/resources/models/ -c dlstreamer-pipeline-server -n apps
   ```

@@ -353,11 +353,11 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S
6. Download and prepare the model.
   ```sh
   export MODEL_URL='https://github.com/open-edge-platform/edge-ai-resources/raw/a7c9522f5f936c47de8922046db7d7add13f93a0/models/FP16/pcb-anomaly-detection.zip'
-    
+
   curl -L "$MODEL_URL" -o "$(basename $MODEL_URL)"
   ```

-7. Run the following curl command to upload the local model. 
+7. Run the following curl command to upload the local model.
   ```sh
   curl -k -L -X POST "https://:30443/registry/models" \
   -H 'Content-Type: multipart/form-data' \
@@ -408,4 +408,4 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S
![WebRTC streaming](./images/webrtc-streaming.png)

## Troubleshooting
-- [Troubleshooting Guide](troubleshooting-guide.md)
\ No newline at end of file
+- [Troubleshooting Guide](troubleshooting-guide.md)

diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/how-to-enable-mlops.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/how-to-enable-mlops.md
index ae6d5dd38..a863886b5 100644
--- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/how-to-enable-mlops.md
+++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/how-to-enable-mlops.md
@@ -4,7 +4,7 @@ With this feature, during runtime, you can download a new model from the registr

## Contents

-### Launch a pipeline in DLStreamer Pipeline Server
+### Launch a pipeline in DL Streamer Pipeline Server

1. Set up the sample application to start a pipeline. A pipeline named `pcb_anomaly_detection_mlops` is already provided in the `pipeline-server-config.json` for this demonstration with the PCB anomaly detection sample app.
   > Ensure that the pipeline inference element such as gvadetect/gvaclassify/gvainference should not have a `model-instance-id` property set. If set, this would not allow the new model to be run with the same value provided in the model-instance-id.
@@ -68,21 +68,21 @@ With this feature, during runtime, you can download a new model from the registr
   ./sample_start.sh -p pcb_anomaly_detection_mlops
   ```

-    
+

### Upload a model to Model Registry

-   > The following section assumes Model Registry microservice is up and running. 
+   > The following section assumes the Model Registry microservice is up and running.
   > For this demonstration we will be using Geti trained PCB anomaly detection model.
Usually, the newer model is the same as the older one architecture-wise, but is retrained for better performance. We will use the same model and call it a different version.

1. Download and prepare the model.
   ```sh
   export MODEL_URL='https://github.com/open-edge-platform/edge-ai-resources/raw/a7c9522f5f936c47de8922046db7d7add13f93a0/models/FP16/pcb-anomaly-detection.zip'
-    
+
   curl -L "$MODEL_URL" -o "$(basename $MODEL_URL)"
   ```

-2. Run the following curl command to upload the local model. 
+2. Run the following curl command to upload the local model.
   ```sh
   curl -k -L -X POST "https:///registry/models" \
   -H 'Content-Type: multipart/form-data' \

diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/how-to-run-multiple-ai-pipelines.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/how-to-run-multiple-ai-pipelines.md
index 2e32e31d8..e638eb280 100644
--- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/how-to-run-multiple-ai-pipelines.md
+++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/how-to-run-multiple-ai-pipelines.md
@@ -1,8 +1,8 @@
 # Run multiple AI pipelines

-In a typical deployment, multiple cameras deliver video streams that are connected to AI pipelines to improve the detection and recognition accuracy. 
+In a typical deployment, multiple cameras deliver video streams that are connected to AI pipelines to improve detection and recognition accuracy.

-The DLStreamer Pipeline Server config supports multiple pipelines that you can use to launch pipeline instances. The sample application has been provided with such a config i.e. `pipeline-server-config.json`. We will use the same to demonstrate launching multiple AI pipelines.
+The DL Streamer Pipeline Server config supports multiple pipelines that you can use to launch pipeline instances. The sample application has been provided with such a config, i.e., `pipeline-server-config.json`. We will use it to demonstrate launching multiple AI pipelines.

## Steps

diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/how-to-run-store-frames-in-s3.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/how-to-run-store-frames-in-s3.md
index 7fbb8c4b1..058790f4b 100644
--- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/how-to-run-store-frames-in-s3.md
+++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/pcb-anomaly-detection/how-to-run-store-frames-in-s3.md
@@ -1,6 +1,6 @@
 # Storing frames to S3 storage

-Applications can take advantage of S3 publish feature from DLStreamer Pipeline Server and use it to save frames to an S3 compatible storage.
+Applications can take advantage of the S3 publish feature of DL Streamer Pipeline Server and use it to save frames to S3-compatible storage.

## Steps

@@ -14,14 +14,14 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S
   ```
3. Install the package `boto3` in your python environment if not installed.
-    
+
   It is recommended to create a virtual environment and install it there. You can run the following commands to add the necessary dependencies as well as create and activate the environment.
-    
+
   ```sh
   sudo apt update && \
   sudo apt install -y python3 python3-pip python3-venv
   ```
-   ```sh 
+   ```sh
   python3 -m venv venv && \
   source venv/bin/activate
   ```
@@ -31,9 +31,9 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S
   pip3 install --upgrade pip && \
   pip3 install boto3==1.36.17
   ```
-   > **Note** DLStreamer Pipeline Server expects the bucket to be already present in the database. The next step will help you create one.
+   > **Note** DL Streamer Pipeline Server expects the bucket to be already present in the database. The next step will help you create one.

-4. Create a S3 bucket using the following script. 
+4. Create an S3 bucket using the following script.
   Update the `HOST_IP` and credentials with those of the running MinIO server. Name the file as `create_bucket.py`.

diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/get-started.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/get-started.md
index 00ba716ba..94968195f 100644
--- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/get-started.md
+++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/get-started.md
@@ -1,7 +1,7 @@
 # Get Started

-- **Time to Complete:** 30 minutes 
-- **Programming Language:** Python 3 
+- **Time to Complete:** 30 minutes
+- **Programming Language:** Python 3

## Prerequisites

@@ -28,7 +28,7 @@ If not, follow the [installation guide for docker engine](https://docs.docker.co

3. Edit the below mentioned environment variables in `.env` file, as follows:

   ```bash
-   HOST_IP= # IP address of server where DLStreamer Pipeline Server is running.
+   HOST_IP= # IP address of server where DL Streamer Pipeline Server is running.

   MR_PSQL_PASSWORD= #PostgreSQL service & client adapter e.g. intel1234

@@ -109,7 +109,7 @@ If not, follow the [installation guide for docker engine](https://docs.docker.co
   ./sample_start.sh -p weld_porosity_classification
   ```

-   This command will look for the payload for the pipeline specified in `-p` argument above, inside the `payload.json` file and launch the a pipeline instance in DLStreamer Pipeline Server. Refer to the table, to learn about different options available.
+   This command will look for the payload for the pipeline specified in the `-p` argument above, inside the `payload.json` file and launch a pipeline instance in DL Streamer Pipeline Server. Refer to the table to learn about the different options available.

   > **IMPORTANT**: Before you run `sample_start.sh` script, make sure that
   > `jq` is installed on your system. See the

diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/how-to-deploy-using-helm-charts.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/how-to-deploy-using-helm-charts.md
index 5c7473322..4104aa145 100644
--- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/how-to-deploy-using-helm-charts.md
+++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/how-to-deploy-using-helm-charts.md
@@ -35,7 +35,7 @@
   `rm -rf helm && mv weld-porosity-sample-application helm`
4. Edit the HOST_IP, proxy and other environment variables in `helm/values.yaml` as follows
   ```yaml
-   env: 
+   env:
     HOST_IP: # host IP address
     MINIO_ACCESS_KEY: # example: minioadmin
     MINIO_SECRET_KEY: # example: minioadmin
@@ -72,7 +72,7 @@

2.
Copy the resources such as video and model from local directory to the `dlstreamer-pipeline-server` pod to make them available for application while launching pipelines.
   ```sh
   # Below is an example for Weld Porosity Classification. Please adjust the source path of models and videos appropriately for other sample applications.
-    
+
   POD_NAME=$(kubectl get pods -n apps -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep deployment-dlstreamer-pipeline-server | head -n 1)

   kubectl cp resources/weld-porosity/videos/welding.avi $POD_NAME:/home/pipeline-server/resources/videos/ -c dlstreamer-pipeline-server -n apps

@@ -83,8 +83,8 @@
   ```sh
   ./sample_list.sh helm
   ```
-   This lists the pipeline loaded in DLStreamer Pipeline Server. 
-    
+   This lists the pipelines loaded in DL Streamer Pipeline Server.
+
   Output:
   ```sh
   # Example output for Weld Porosity Classification
@@ -119,8 +119,8 @@
   ```sh
   ./sample_start.sh helm -p weld_porosity_classification
   ```
-   This command would look for the payload for the pipeline specified in `-p` argument above, inside the `payload.json` file and launch the a pipeline instance in DLStreamer Pipeline Server. Refer to the table, to learn about different options available. 
-    
+   This command would look for the payload for the pipeline specified in the `-p` argument above, inside the `payload.json` file and launch a pipeline instance in DL Streamer Pipeline Server. Refer to the table to learn about the different options available.
+
   Output:
   ```sh
   # Example output for Weld Porosity Classification
@@ -145,7 +145,7 @@
   ./sample_status.sh helm
   ```
   This command lists the status of pipeline instances launched during the lifetime of the sample application.
-    
+
   Output:
   ```sh
   # Example output for Weld Porosity Classification
@@ -168,7 +168,7 @@
   ./sample_stop.sh helm
   ```
   This command will stop all instances that are currently in `RUNNING` state and respond with the last status.
-    
+
   Output:
   ```sh
   # Example output for Weld Porosity Classification
@@ -190,7 +190,7 @@
   }
   ```

-   If you wish to stop a specific instance, you can provide it with an `--id` argument to the command. 
+   If you wish to stop a specific instance, you can provide an `--id` argument to the command.
   For example, `./sample_stop.sh helm --id 895130405c8e11f08b78029627ef9c6b`

7. Uninstall the helm chart.
@@ -200,9 +200,9 @@

## Storing frames to S3 storage

-Applications can take advantage of S3 publish feature from DLStreamer Pipeline Server and use it to save frames to an S3 compatible storage.
+Applications can take advantage of the S3 publish feature of DL Streamer Pipeline Server and use it to save frames to S3-compatible storage.

-1. Run all the steps mentioned in above [section](./how-to-deploy-using-helm-charts.md#setup-the-application) to setup the application. 
+1. Run all the steps mentioned in the above [section](./how-to-deploy-using-helm-charts.md#setup-the-application) to set up the application.

2. Install the helm chart

   ```sh
@@ -221,14 +221,14 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S
   ```

4. Install the package `boto3` in your python environment if not installed.
-    
+
   It is recommended to create a virtual environment and install it there. You can run the following commands to add the necessary dependencies as well as create and activate the environment.
- + ```sh sudo apt update && \ sudo apt install -y python3 python3-pip python3-venv ``` - ```sh + ```sh python3 -m venv venv && \ source venv/bin/activate ``` @@ -238,9 +238,9 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S pip3 install --upgrade pip && \ pip3 install boto3==1.36.17 ``` - > **Note** DLStreamer Pipeline Server expects the bucket to be already present in the database. The next step will help you create one. + > **Note** DL Streamer Pipeline Server expects the bucket to be already present in the database. The next step will help you create one. -5. Create a S3 bucket using the following script. +5. Create an S3 bucket using the following script. Update the `HOST_IP` and credentials to match those of the running MinIO server. Name the file `create_bucket.py`. @@ -301,7 +301,7 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S ## MLOps using Model Registry -1. Run all the steps mentioned in above [section](./how-to-deploy-using-helm-charts.md#setup-the-application) to setup the application. +1. Run all the steps mentioned in the above [section](./how-to-deploy-using-helm-charts.md#setup-the-application) to set up the application. 2. Install the helm chart ```sh @@ -350,11 +350,11 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S 6. Download and prepare the model. ```sh export MODEL_URL='https://github.com/open-edge-platform/edge-ai-resources/raw/a7c9522f5f936c47de8922046db7d7add13f93a0/models/FP16/weld_porosity_classification.zip' - + curl -L "$MODEL_URL" -o "$(basename $MODEL_URL)" ``` -7. Run the following curl command to upload the local model. +7. Run the following curl command to upload the local model. ```sh curl -k -L -X POST "https://:30443/registry/models" \ -H 'Content-Type: multipart/form-data' \ diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/how-to-enable-mlops.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/how-to-enable-mlops.md index 0cac05cb4..f0df9cb35 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/how-to-enable-mlops.md +++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/how-to-enable-mlops.md @@ -4,7 +4,7 @@ With this feature, during runtime, you can download a new model from the registr ## Contents -### Launch a pipeline in DLStreamer Pipeline Server +### Launch a pipeline in DL Streamer Pipeline Server 1. Set up the sample application to start a pipeline. A pipeline named `weld_porosity_classification_mlops` is already provided in the `pipeline-server-config.json` for this demonstration with the Weld Porosity classification sample app. > Ensure that pipeline inference elements such as gvadetect/gvaclassify/gvainference do not have a `model-instance-id` property set. If set, the new model cannot be run under the same model-instance-id value. @@ -64,17 +64,17 @@ With this feature, during runtime, you can download a new model from the registr ./sample_start.sh -p weld_porosity_classification_mlops ``` - + ### Upload a model to Model Registry - > The following section assumes Model Registry microservice is up and running. + > The following section assumes the Model Registry microservice is up and running. > For this demonstration we will be using a Geti-trained weld porosity model.
Usually, the newer model is architecturally the same as the old one but retrained for better performance. We will use the same model and call it a different version. 1. Download and prepare the model. ```sh export MODEL_URL='https://github.com/open-edge-platform/edge-ai-resources/raw/a7c9522f5f936c47de8922046db7d7add13f93a0/models/FP16/weld_porosity_classification.zip' - + curl -L "$MODEL_URL" -o "$(basename $MODEL_URL)" ``` diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/how-to-run-multiple-ai-pipelines.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/how-to-run-multiple-ai-pipelines.md index 31381a14a..55f87dbd8 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/how-to-run-multiple-ai-pipelines.md +++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/how-to-run-multiple-ai-pipelines.md @@ -1,8 +1,8 @@ # Run multiple AI pipelines -In a typical deployment, multiple cameras deliver video streams that are connected to AI pipelines to improve the detection and recognition accuracy. +In a typical deployment, multiple cameras deliver video streams that are connected to AI pipelines to improve detection and recognition accuracy. -The DLStreamer Pipeline Server config supports multiple pipelines that you can use to launch pipeline instances. The sample application has been provided with such a config i.e. `pipeline-server-config.json`. We will use the same to demonstrate launching multiple AI pipelines. +The DL Streamer Pipeline Server config supports multiple pipelines that you can use to launch pipeline instances. The sample application ships with such a config, i.e. `pipeline-server-config.json`. We will use it to demonstrate launching multiple AI pipelines. ## Steps diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/how-to-run-store-frames-in-s3.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/how-to-run-store-frames-in-s3.md index 9b6f8dbfa..c5ab5968f 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/how-to-run-store-frames-in-s3.md +++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/weld-porosity/how-to-run-store-frames-in-s3.md @@ -1,6 +1,6 @@ # Storing frames to S3 storage -Applications can take advantage of S3 publish feature from DLStreamer Pipeline Server and use it to save frames to an S3 compatible storage. +Applications can take advantage of the S3 publish feature of DL Streamer Pipeline Server and use it to save frames to S3-compatible storage. ## Steps @@ -14,14 +14,14 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S ``` 3. Install the package `boto3` in your Python environment if not installed. - + It is recommended to create a virtual environment and install it there. You can run the following commands to add the necessary dependencies as well as create and activate the environment.
- + ```sh sudo apt update && \ sudo apt install -y python3 python3-pip python3-venv ``` - ```sh + ```sh python3 -m venv venv && \ source venv/bin/activate ``` @@ -31,9 +31,9 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S pip3 install --upgrade pip && \ pip3 install boto3==1.36.17 ``` - > **Note** DLStreamer Pipeline Server expects the bucket to be already present in the database. The next step will help you create one. + > **Note** DL Streamer Pipeline Server expects the bucket to be already present in the database. The next step will help you create one. -4. Create a S3 bucket using the following script. +4. Create an S3 bucket using the following script. Update the `HOST_IP` and credentials to match those of the running MinIO server. Name the file `create_bucket.py`. diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/get-started.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/get-started.md index 0f89f14f9..33fcfdad9 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/get-started.md +++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/get-started.md @@ -28,7 +28,7 @@ If not, follow the [installation guide for docker engine](https://docs.docker.co 3. Edit the below mentioned environment variables in the `.env` file as follows: ```bash - HOST_IP= # IP address of server where DLStreamer Pipeline Server is running. + HOST_IP= # IP address of server where DL Streamer Pipeline Server is running. MR_PSQL_PASSWORD= #PostgreSQL service & client adapter e.g. intel1234 @@ -103,7 +103,7 @@ If not, follow the [installation guide for docker engine](https://docs.docker.co ./sample_start.sh -p worker_safety_gear_detection ``` - This command will look for the payload for the pipeline specified in the `-p` argument above, inside the `payload.json` file and launch a pipeline instance in DLStreamer Pipeline Server. Refer to the table, to learn about different available options. + This command will look for the payload for the pipeline specified in the `-p` argument above, inside the `payload.json` file, and launch a pipeline instance in DL Streamer Pipeline Server. Refer to the table to learn about the different available options. > **IMPORTANT**: Before you run `sample_start.sh` script, make sure that > `jq` is installed on your system. See the diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/how-to-deploy-using-helm-charts.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/how-to-deploy-using-helm-charts.md index 59a6432f3..f893f9feb 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/how-to-deploy-using-helm-charts.md +++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/how-to-deploy-using-helm-charts.md @@ -35,7 +35,7 @@ `rm -rf helm && mv worker-safety-gear-detection helm` 4. Edit the HOST_IP, proxy and other environment variables in `helm/values.yaml` as follows ```yaml - env: + env: HOST_IP: # host IP address MINIO_ACCESS_KEY: # example: minioadmin MINIO_SECRET_KEY: # example: minioadmin @@ -72,19 +72,19 @@ 2.
Copy resources such as videos and models from the local directory to the `dlstreamer-pipeline-server` pod to make them available to the application when launching pipelines. ```sh # Below is an example for Worker safety gear detection. Please adjust the source path of models and videos appropriately for other sample applications. - + POD_NAME=$(kubectl get pods -n apps -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep deployment-dlstreamer-pipeline-server | head -n 1) kubectl cp resources/worker-safety-gear-detection/videos/Safety_Full_Hat_and_Vest.avi $POD_NAME:/home/pipeline-server/resources/videos/ -c dlstreamer-pipeline-server -n apps - + kubectl cp resources/worker-safety-gear-detection/models/* $POD_NAME:/home/pipeline-server/resources/models/ -c dlstreamer-pipeline-server -n apps ``` 3. Fetch the list of pipelines available to launch ```sh ./sample_list.sh helm ``` - This lists the pipeline loaded in DLStreamer Pipeline Server. - + This lists the pipelines loaded in DL Streamer Pipeline Server. + Output: ```sh # Example output for Worker Safety gear detection @@ -119,8 +119,8 @@ ```sh ./sample_start.sh helm -p worker_safety_gear_detection ``` - This command would look for the payload for the pipeline specified in `-p` argument above, inside the `payload.json` file and launch the a pipeline instance in DLStreamer Pipeline Server. Refer to the table, to learn about different options available. - + This command will look for the payload for the pipeline specified in the `-p` argument above, inside the `payload.json` file, and launch a pipeline instance in DL Streamer Pipeline Server. Refer to the table to learn about the different options available. + Output: ```sh # Example output for Worker Safety gear detection @@ -145,7 +145,7 @@ ./sample_status.sh helm ``` This command lists the status of pipeline instances launched during the lifetime of the sample application. - + Output: ```sh # Example output for Worker Safety gear detection @@ -168,7 +168,7 @@ ./sample_stop.sh helm ``` This command will stop all instances that are currently in the `RUNNING` state and respond with the last status. - + Output: ```sh # Example output for Worker Safety gear detection @@ -189,7 +189,7 @@ "state": "RUNNING" } ``` - If you wish to stop a specific instance, you can provide it with an `--id` argument to the command. + If you wish to stop a specific instance, you can pass the `--id` argument to the command. For example, `./sample_stop.sh helm --id 784b87b45d1511f08ab0da88aa49c01e` 7. Uninstall the helm chart. @@ -199,9 +199,9 @@ ## Storing frames to S3 storage -Applications can take advantage of S3 publish feature from DLStreamer Pipeline Server and use it to save frames to an S3 compatible storage. +Applications can take advantage of the S3 publish feature of DL Streamer Pipeline Server and use it to save frames to S3-compatible storage. -1. Run all the steps mentioned in above [section](./how-to-deploy-using-helm-charts.md#setup-the-application) to setup the application. +1. Run all the steps mentioned in the above [section](./how-to-deploy-using-helm-charts.md#setup-the-application) to set up the application. 2. Install the helm chart ```sh @@ -220,14 +220,14 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S ``` 4. Install the package `boto3` in your Python environment if not installed. - + It is recommended to create a virtual environment and install it there.
You can run the following commands to add the necessary dependencies as well as create and activate the environment. - + ```sh sudo apt update && \ sudo apt install -y python3 python3-pip python3-venv ``` - ```sh + ```sh python3 -m venv venv && \ source venv/bin/activate ``` @@ -237,9 +237,9 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S pip3 install --upgrade pip && \ pip3 install boto3==1.36.17 ``` - > **Note** DLStreamer Pipeline Server expects the bucket to be already present in the database. The next step will help you create one. + > **Note** DL Streamer Pipeline Server expects the bucket to be already present in the database. The next step will help you create one. -5. Create a S3 bucket using the following script. +5. Create an S3 bucket using the following script. Update the `HOST_IP` and credentials to match those of the running MinIO server. Name the file `create_bucket.py`. @@ -300,7 +300,7 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S ## MLOps using Model Registry -1. Run all the steps mentioned in above [section](./how-to-deploy-using-helm-charts.md#setup-the-application) to setup the application. +1. Run all the steps mentioned in the above [section](./how-to-deploy-using-helm-charts.md#setup-the-application) to set up the application. 2. Install the helm chart ```sh @@ -353,11 +353,11 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S 6. Download and prepare the model. ```sh export MODEL_URL='https://github.com/open-edge-platform/edge-ai-resources/raw/a7c9522f5f936c47de8922046db7d7add13f93a0/models/INT8/worker-safety-gear-detection.zip' - + curl -L "$MODEL_URL" -o "$(basename $MODEL_URL)" ``` -7. Run the following curl command to upload the local model. +7. Run the following curl command to upload the local model. ```sh curl -k -L -X POST "https://:30443/registry/models" \ -H 'Content-Type: multipart/form-data' \ @@ -405,7 +405,7 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S 11. View the WebRTC streaming on `https://:30443/mediamtx//` by replacing `` with the value used in the original cURL command to start the pipeline. - ![WebRTC streaming](./images/webrtc-streaming.png) + ![WebRTC streaming](./images/webrtc-streaming.png) ## Troubleshooting - [Troubleshooting Guide](troubleshooting-guide.md) \ No newline at end of file diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/how-to-enable-mlops.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/how-to-enable-mlops.md index 09fcdcd27..1221436ca 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/how-to-enable-mlops.md +++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/how-to-enable-mlops.md @@ -4,7 +4,7 @@ With this feature, during runtime, you can download a new model from the registr ## Contents -### Launch a pipeline in DLStreamer Pipeline Server +### Launch a pipeline in DL Streamer Pipeline Server 1. Set up the sample application to start a pipeline. A pipeline named `worker_safety_gear_detection_mlops` is already provided in the `pipeline-server-config.json` for this demonstration with the Worker Safety Gear Detection sample app.
> Ensure that pipeline inference elements such as gvadetect/gvaclassify/gvainference do not have a `model-instance-id` property set. If set, the new model cannot be run under the same model-instance-id value. @@ -68,21 +68,21 @@ With this feature, during runtime, you can download a new model from the registr ./sample_start.sh -p worker_safety_gear_detection_mlops ``` - + ### Upload a model to Model Registry - > The following section assumes Model Registry microservice is up and running. + > The following section assumes the Model Registry microservice is up and running. > For this demonstration we will be using a Geti-trained worker safety gear detection model. Usually, the newer model is architecturally the same as the old one but retrained for better performance. We will use the same model and call it a different version. 1. Download and prepare the model. ```sh export MODEL_URL='https://github.com/open-edge-platform/edge-ai-resources/raw/a7c9522f5f936c47de8922046db7d7add13f93a0/models/INT8/worker-safety-gear-detection.zip' - + curl -L "$MODEL_URL" -o "$(basename $MODEL_URL)" ``` -2. Run the following curl command to upload the local model. +2. Run the following curl command to upload the local model. ```sh curl -k -L -X POST "https:///registry/models" \ -H 'Content-Type: multipart/form-data' \ @@ -144,4 +144,3 @@ With this feature, during runtime, you can download a new model from the registr ```sh curl -k --location -X DELETE https:///api/pipelines/{instance_id} ``` - diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/how-to-run-multiple-ai-pipelines.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/how-to-run-multiple-ai-pipelines.md index 264220fdb..a13196aaa 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/how-to-run-multiple-ai-pipelines.md +++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/how-to-run-multiple-ai-pipelines.md @@ -1,8 +1,8 @@ # Run multiple AI pipelines -In a typical deployment, multiple cameras deliver video streams that are connected to AI pipelines to improve the detection and recognition accuracy. +In a typical deployment, multiple cameras deliver video streams that are connected to AI pipelines to improve detection and recognition accuracy. -The DLStreamer Pipeline Server config supports multiple pipelines that you can use to launch pipeline instances. The sample application has been provided with such a config i.e. `pipeline-server-config.json`. We will use the same to demonstrate launching multiple AI pipelines. +The DL Streamer Pipeline Server config supports multiple pipelines that you can use to launch pipeline instances. The sample application ships with such a config, i.e. `pipeline-server-config.json`. We will use it to demonstrate launching multiple AI pipelines; a quick sketch follows.
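As a minimal sketch of the flow the steps below walk through, the same helper scripts used earlier can launch several pipeline instances side by side. The pipeline name here is the one used elsewhere in this guide; the set actually available on your system depends on `pipeline-server-config.json`.

```sh
# Launch two instances of the same pipeline back to back; each call to
# sample_start.sh creates one new pipeline instance in the Pipeline Server.
./sample_start.sh -p worker_safety_gear_detection
./sample_start.sh -p worker_safety_gear_detection

# List all instances and confirm both report a RUNNING state.
./sample_status.sh
```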
## Steps diff --git a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/how-to-run-store-frames-in-s3.md b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/how-to-run-store-frames-in-s3.md index 6f5816cf8..4b70b6e61 100644 --- a/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/how-to-run-store-frames-in-s3.md +++ b/manufacturing-ai-suite/industrial-edge-insights-vision/docs/user-guide/worker-safety-gear-detection/how-to-run-store-frames-in-s3.md @@ -1,6 +1,6 @@ # Storing frames to S3 storage -Applications can take advantage of S3 publish feature from DLStreamer Pipeline Server and use it to save frames to an S3 compatible storage. +Applications can take advantage of the S3 publish feature of DL Streamer Pipeline Server and use it to save frames to S3-compatible storage. ## Steps @@ -14,14 +14,14 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S ``` 3. Install the package `boto3` in your Python environment if not installed. - + It is recommended to create a virtual environment and install it there. You can run the following commands to add the necessary dependencies as well as create and activate the environment. - + ```sh sudo apt update && \ sudo apt install -y python3 python3-pip python3-venv ``` - ```sh + ```sh python3 -m venv venv && \ source venv/bin/activate ``` @@ -31,9 +31,9 @@ Applications can take advantage of S3 publish feature from DLStreamer Pipeline S pip3 install --upgrade pip && \ pip3 install boto3==1.36.17 ``` - > **Note** DLStreamer Pipeline Server expects the bucket to be already present in the database. The next step will help you create one. + > **Note** DL Streamer Pipeline Server expects the bucket to be already present in the database. The next step will help you create one. -4. Create a S3 bucket using the following script. +4. Create an S3 bucket using the following script. Update the `HOST_IP` and credentials to match those of the running MinIO server. Name the file `create_bucket.py`. diff --git a/metro-ai-suite/metro-sdk-manager/docs/user-guide/metro-vision-ai-sdk/get-started.md b/metro-ai-suite/metro-sdk-manager/docs/user-guide/metro-vision-ai-sdk/get-started.md index efecc9fca..2f75c84ba 100644 --- a/metro-ai-suite/metro-sdk-manager/docs/user-guide/metro-vision-ai-sdk/get-started.md +++ b/metro-ai-suite/metro-sdk-manager/docs/user-guide/metro-vision-ai-sdk/get-started.md @@ -2,7 +2,7 @@ ## Overview -The Metro Vision AI SDK provides a comprehensive development environment for computer vision applications using Intel's optimized tools and frameworks. This guide demonstrates the installation process and provides a practical object detection implementation using DLStreamer and OpenVINO. +The Metro Vision AI SDK provides a comprehensive development environment for computer vision applications using Intel's optimized tools and frameworks. This guide demonstrates the installation process and provides a practical object detection implementation using DL Streamer and OpenVINO.
## Learning Objectives @@ -34,7 +34,7 @@ curl https://raw.githubusercontent.com/open-edge-platform/edge-ai-suites/refs/he The installation process configures the following components: - Docker containerization platform -- Intel DLStreamer video analytics framework +- DL Streamer video analytics framework - OpenVINO inference optimization toolkit - Pre-trained model repositories and sample implementations @@ -110,9 +110,9 @@ The resulting output displays the original video content with overlaid detection ## Technology Framework Overview -### DLStreamer Framework +### DL Streamer Framework -DLStreamer provides a comprehensive video analytics framework built on GStreamer technology. Key capabilities include: +DL Streamer provides a comprehensive video analytics framework built on GStreamer technology. Key capabilities include: - Multi-format video input support (files, network streams, camera devices) - Real-time inference execution on video frame sequences @@ -156,9 +156,9 @@ Profiling and monitoring performance of Metro Vision AI workloads using command- ### Technical Documentation -- [DLStreamer](http://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/dl-streamer/index.html) +- [DL Streamer](http://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/dl-streamer/index.html) \- Comprehensive documentation for Intel's GStreamer-based video analytics framework -- [DLStreamer Pipeline Server](https://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/dlstreamer-pipeline-server/index.html) +- [DL Streamer Pipeline Server](https://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/dlstreamer-pipeline-server/index.html) \- RESTful microservice architecture documentation for scalable video analytics deployment - [OpenVINO](https://docs.openvino.ai/2025/get-started.html) \- Complete reference for Intel's cross-platform inference optimization toolkit @@ -184,4 +184,4 @@ tutorial-3 tutorial-4 tutorial-5 ::: -hide_directive--> \ No newline at end of file +hide_directive--> diff --git a/metro-ai-suite/metro-sdk-manager/docs/user-guide/metro-vision-ai-sdk/tutorial-1.md b/metro-ai-suite/metro-sdk-manager/docs/user-guide/metro-vision-ai-sdk/tutorial-1.md index c91fc69c3..17ada442c 100644 --- a/metro-ai-suite/metro-sdk-manager/docs/user-guide/metro-vision-ai-sdk/tutorial-1.md +++ b/metro-ai-suite/metro-sdk-manager/docs/user-guide/metro-vision-ai-sdk/tutorial-1.md @@ -53,10 +53,10 @@ wget -O bottle-detection.mp4 https://storage.openvinotoolkit.org/test_data/video ### Step 2: Download Pre-trained Model -Download the YOLOv10s model using the DLStreamer container: +Download the YOLOv10s model using the DL Streamer container: ```bash -# Download YOLOv10s model using DLStreamer +# Download YOLOv10s model using DL Streamer docker run --rm --user=root \ -e http_proxy -e https_proxy -e no_proxy \ -v "${PWD}:/home/dlstreamer/" \ diff --git a/metro-ai-suite/metro-sdk-manager/docs/user-guide/metro-vision-ai-sdk/tutorial-3.md b/metro-ai-suite/metro-sdk-manager/docs/user-guide/metro-vision-ai-sdk/tutorial-3.md index 64808990d..8eebed87a 100644 --- a/metro-ai-suite/metro-sdk-manager/docs/user-guide/metro-vision-ai-sdk/tutorial-3.md +++ b/metro-ai-suite/metro-sdk-manager/docs/user-guide/metro-vision-ai-sdk/tutorial-3.md @@ -49,12 +49,12 @@ This tutorial requires **Ubuntu Desktop** with a physical display and active gra - Ubuntu Server (no GUI) - Remote SSH sessions without X11 forwarding - Headless systems - + You must be logged in to a local desktop session with a connected monitor or Remote Desktop/VNC 
connection for the video output to display correctly. ## Tutorial Steps -### Step 1: Create Working Directory +### Step 1: Create Working Directory Set up your workspace and download a sample city intersection video @@ -75,7 +75,7 @@ This sample video shows a busy city intersection with vehicles, pedestrians, and Download the YOLOv10s object detection model and convert it to OpenVINO format: ```bash -# Download YOLOv10s model using DLStreamer container +# Download YOLOv10s model using DL Streamer container docker run --rm --user=root \ -e http_proxy -e https_proxy -e no_proxy \ -v "${PWD}:/home/dlstreamer/" \ @@ -158,37 +158,37 @@ def postprocess(frame, results): # YOLOv10 output shape: [1, 300, 6] where 6 = [x1, y1, x2, y2, conf, class_id] detections = np.squeeze(results) # Remove batch dimension ih, iw, _ = frame.shape - + print(f"Detections shape: {detections.shape}") # Debug info - + for det in detections: conf = det[4] if conf < conf_threshold: continue - + # YOLOv10 output: [x1, y1, x2, y2, conf, class_id] x1, y1, x2, y2 = det[:4] class_id = int(det[5]) - + # Coordinates are normalized to input size (640x640) # Scale to original frame size x1 = int(x1 * iw / w) y1 = int(y1 * ih / h) x2 = int(x2 * iw / w) y2 = int(y2 * ih / h) - + # Ensure coordinates are within frame bounds x1 = max(0, min(x1, iw)) y1 = max(0, min(y1, ih)) x2 = max(0, min(x2, iw)) y2 = max(0, min(y2, ih)) - + color = colors[class_id % len(colors)] cv2.rectangle(frame, (x1, y1), (x2, y2), color, thickness=3) label = class_names[class_id] if class_id < len(class_names) else f"ID:{class_id}" cv2.putText(frame, label, (x1, y1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 1.5, color, thickness=3) - + print(f"Detection: class={class_id}, conf={conf:.2f}, box=[{x1},{y1},{x2},{y2}]") # Debug return frame @@ -250,7 +250,7 @@ docker run -it --rm \ --env https_proxy=$https_proxy \ --env no_proxy=$no_proxy \ --user root \ - openvino/ubuntu24_dev:2025.3.0 + openvino/ubuntu24_dev:2025.3.0 ``` ```bash apt update @@ -259,7 +259,7 @@ pip install opencv-python "numpy<2" ``` ```bash -python3 /home/openvino/inference.py +python3 /home/openvino/inference.py ``` **Expected Console Output:** @@ -282,7 +282,7 @@ docker run -it --rm \ --env https_proxy=$https_proxy \ --env no_proxy=$no_proxy \ --user root \ - openvino/ubuntu24_dev:2025.3.0 + openvino/ubuntu24_dev:2025.3.0 ``` ```bash apt update @@ -290,7 +290,7 @@ apt install -y libgtk2.0-dev pkg-config libcanberra-gtk-module libcanberra-gtk3- pip install opencv-python "numpy<2" ``` ```bash -python3 /home/openvino/inference.py +python3 /home/openvino/inference.py ``` **Custom Thresholds:** @@ -353,4 +353,4 @@ The model can detect 80 different object classes from the COCO dataset. 
In the c - **FP16 Precision**: Reduced memory usage and faster computation - **Batch Processing**: Single frame inference optimized for real-time performance - **Pipeline Parallelism**: Overlapped preprocessing and inference operations -- **Efficient NMS**: Optimized Non-Maximum Suppression implementation \ No newline at end of file +- **Efficient NMS**: Optimized Non-Maximum Suppression implementation diff --git a/metro-ai-suite/metro-sdk-manager/docs/user-guide/metro-vision-ai-sdk/tutorial-5.md b/metro-ai-suite/metro-sdk-manager/docs/user-guide/metro-vision-ai-sdk/tutorial-5.md index d32c87ebb..456a951da 100644 --- a/metro-ai-suite/metro-sdk-manager/docs/user-guide/metro-vision-ai-sdk/tutorial-5.md +++ b/metro-ai-suite/metro-sdk-manager/docs/user-guide/metro-vision-ai-sdk/tutorial-5.md @@ -1,6 +1,6 @@ # Metro Vision AI SDK - Tutorial 5 -This tutorial will guide you through profiling and monitoring performance of Metro Vision AI workloads using command-line tools. You'll learn to use `perf`, `htop`, and `intel_gpu_top` to analyze system performance while running DLStreamer Pipeline Server or OpenVINO applications. +This tutorial will guide you through profiling and monitoring performance of Metro Vision AI workloads using command-line tools. You'll learn to use `perf`, `htop`, and `intel_gpu_top` to analyze system performance while running DL Streamer Pipeline Server or OpenVINO applications. ## Prerequisites @@ -54,7 +54,7 @@ ls -la /dev/dri/ ## Step 3: Start Your Metro Vision AI Workload -Create and start a DLStreamer pipeline that continuously runs in the background for profiling: +Create and start a DL Streamer pipeline that continuously runs in the background for profiling: ```bash mkdir -p ~/metro/metro-vision-tutorial-5 @@ -63,14 +63,14 @@ cd ~/metro/metro-vision-tutorial-5 # Download sample video for object detection wget -O bottle-detection.mp4 https://storage.openvinotoolkit.org/test_data/videos/bottle-detection.mp4 -# Download YOLOv10s model using DLStreamer container +# Download YOLOv10s model using DL Streamer container docker run --rm --user=root \ -e http_proxy -e https_proxy -e no_proxy \ -v "${PWD}:/home/dlstreamer/" \ intel/dlstreamer:2025.1.2-ubuntu24 \ bash -c "export MODELS_PATH=/home/dlstreamer && /opt/intel/dlstreamer/samples/download_public_models.sh yolov10s" -# Create a continuous DLStreamer pipeline script +# Create a continuous DL Streamer pipeline script cat > metro_vision_pipeline.sh << 'EOF' #!/bin/bash @@ -131,7 +131,7 @@ echo "Metro Vision AI pipeline started with PID: $PIPELINE_PID" echo "Use 'kill $PIPELINE_PID' to stop the pipeline when done profiling" ``` -**Note**: This creates a continuously running Docker-based DLStreamer pipeline that processes real video using the YOLOv10s object detection model, providing a realistic AI workload for performance profiling. The pipeline runs in a Docker container with access to Intel GPU hardware. +**Note**: This creates a continuously running Docker-based DL Streamer pipeline that processes real video using the YOLOv10s object detection model, providing a realistic AI workload for performance profiling. The pipeline runs in a Docker container with access to Intel GPU hardware. 
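Before moving on to the profiling steps, it can help to confirm the workload is actually alive. A quick check along these lines (assuming the `metro_vision_pipeline.sh` script and the `intel/dlstreamer` image tag used above) is:

```sh
# Confirm the pipeline script is still running in the background.
pgrep -af metro_vision_pipeline.sh

# Confirm the DL Streamer container it launched is up.
docker ps --filter ancestor=intel/dlstreamer:2025.1.2-ubuntu24
```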
## Step 4: Monitor Overall System Performance with htop @@ -176,7 +176,7 @@ sudo intel_gpu_top When you're done profiling, stop the background pipeline: ```bash -# Stop the background DLStreamer pipeline +# Stop the background DL Streamer pipeline pkill -9 -f metro_vision_pipeline.sh ``` diff --git a/metro-ai-suite/metro-sdk-manager/docs/user-guide/visual-ai-demo-kit/get-started.md b/metro-ai-suite/metro-sdk-manager/docs/user-guide/visual-ai-demo-kit/get-started.md index 4980ddaf3..18a5739b7 100644 --- a/metro-ai-suite/metro-sdk-manager/docs/user-guide/visual-ai-demo-kit/get-started.md +++ b/metro-ai-suite/metro-sdk-manager/docs/user-guide/visual-ai-demo-kit/get-started.md @@ -2,7 +2,7 @@ ## Overview -The Visual AI Demo Kit provides a comprehensive demonstration environment for computer vision applications using Intel's optimized tools and frameworks. This guide demonstrates the installation process and provides practical AI application implementations including smart parking, smart intersection, and other visual AI use cases using DLStreamer and OpenVINO. +The Visual AI Demo Kit provides a comprehensive demonstration environment for computer vision applications using Intel's optimized tools and frameworks. This guide demonstrates the installation process and provides practical AI application implementations including smart parking, smart intersection, and other visual AI use cases using DL Streamer and OpenVINO. ## Learning Objectives @@ -36,7 +36,7 @@ curl https://raw.githubusercontent.com/open-edge-platform/edge-ai-suites/refs/he The installation process configures the following components: - Docker containerization platform -- Intel DLStreamer video analytics framework +- DL Streamer video analytics framework - OpenVINO inference optimization toolkit - Grafana dashboard for monitoring - MQTT Broker for messaging @@ -86,8 +86,8 @@ docker ps - Grafana Dashboard - DL Streamer Pipeline Server - MQTT Broker -- Node-RED (for applications without Scenescape) -- Scenescape services (for Smart Intersection only) +- Node-RED (for applications without Intel® SceneScape) +- Intel® SceneScape services (for Smart Intersection only) @@ -121,7 +121,7 @@ docker compose down The Visual AI Demo Kit implements a microservice architecture with the following components: -1. **DLStreamer Pipeline Server**: Handles video analytics and AI inference processing +1. **DL Streamer Pipeline Server**: Handles video analytics and AI inference processing 2. **Grafana Dashboard**: Provides real-time visualization and monitoring 3. **MQTT Broker**: Manages message communication between services 4. **Node-RED**: Orchestrates workflow automation and data processing @@ -135,7 +135,7 @@ The resulting application provides a complete visual AI solution with real-time The Visual AI Demo Kit integrates multiple technologies to provide a comprehensive demonstration environment: -- DLStreamer Pipeline Server +- DL Streamer Pipeline Server - Grafana Dashboard - MQTT Broker - Node-RED @@ -175,9 +175,9 @@ Create compelling visualization experiences for your AI applications. 
This tutor ### Technical Documentation -- [DLStreamer](http://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/dl-streamer/index.html) +- [DL Streamer](http://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/dl-streamer/index.html) \- Comprehensive documentation for Intel's GStreamer-based video analytics framework -- [DLStreamer Pipeline Server](https://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/dlstreamer-pipeline-server/index.html) +- [DL Streamer Pipeline Server](https://docs.openedgeplatform.intel.com/dev/edge-ai-libraries/dlstreamer-pipeline-server/index.html) \- RESTful microservice architecture documentation for scalable video analytics deployment - [OpenVINO](https://docs.openvino.ai/2025/get-started.html) \- Complete reference for Intel's cross-platform inference optimization toolkit diff --git a/metro-ai-suite/metro-sdk-manager/docs/user-guide/visual-ai-demo-kit/tutorial-1.md b/metro-ai-suite/metro-sdk-manager/docs/user-guide/visual-ai-demo-kit/tutorial-1.md index 3a653dd0d..c5964cd08 100644 --- a/metro-ai-suite/metro-sdk-manager/docs/user-guide/visual-ai-demo-kit/tutorial-1.md +++ b/metro-ai-suite/metro-sdk-manager/docs/user-guide/visual-ai-demo-kit/tutorial-1.md @@ -4,7 +4,7 @@ **Sample Description**: This tutorial demonstrates how to build an intelligent tolling system using edge AI technologies for real-time vehicle detection, license plate recognition, and vehicle attribute analysis. --> -This tutorial walks you through creating an AI-powered tolling system that automatically detects vehicles, recognizes license plates, and analyzes vehicle attributes in real-time. The system leverages Intel's DLStreamer framework with pre-trained AI models to process video streams from toll booth cameras, enabling automated toll collection and traffic monitoring. +This tutorial walks you through creating an AI-powered tolling system that automatically detects vehicles, recognizes license plates, and analyzes vehicle attributes in real-time. The system leverages Intel's Deep Learning Streamer (DL Streamer) framework with pre-trained AI models to process video streams from toll booth cameras, enabling automated toll collection and traffic monitoring. - -
[Figure: architecture diagram (draw.io-exported SVG; markup omitted). The diagram shows the User and the Backend/Frontend tiers with Nginx, Grafana, WebRTC Server, MQTT Broker, NodeRed, and DLStreamer Pipeline Server, plus the optional Intel® SceneScape services (Scene Management UI*, InfluxDB*, Scene Controller*, Scene DB*); data flows are labeled "WebRTC frames" and "MQTT", and the legend distinguishes Intel® microservices from 3rd party microservices. The diff's textual change renames the diagram footnote from "optional SceneScape services are marked with "*"" to "optional Intel® SceneScape services are marked with "*"".]
\ No newline at end of file diff --git a/metro-ai-suite/metro-vision-ai-app-recipe/docs/user-guide/index.md b/metro-ai-suite/metro-vision-ai-app-recipe/docs/user-guide/index.md index 8fe257eaf..4bbdd6b64 100644 --- a/metro-ai-suite/metro-vision-ai-app-recipe/docs/user-guide/index.md +++ b/metro-ai-suite/metro-vision-ai-app-recipe/docs/user-guide/index.md @@ -11,7 +11,7 @@ set up the applications, system requirements, and best practices for deployment. **Available Sample Applications:** -- **Smart Intersection Management** (includes optional Scenescape components) - AI-driven traffic flow optimization and intersection monitoring +- **Smart Intersection Management** (includes optional Intel® SceneScape components) - AI-driven traffic flow optimization and intersection monitoring - **Loitering Detection** - Real-time detection of loitering behavior in transportation hubs - **Smart Parking** - Automated parking space monitoring and management @@ -56,9 +56,9 @@ insights for traffic management. ### Optional Components -- **Scenescape Management UI:** A web-based user interface for advanced scene configuration, camera calibration, and visual rule setup. Provides intuitive tools for defining detection zones, traffic lanes, and monitoring areas through a graphical interface. -- **Scenescape Controller:** The backend service that manages scene configurations, processes spatial analytics, and coordinates between the Management UI and the video analytics pipeline. Handles complex scene understanding and geometric transformations. -- **Scenescape Database (PostgreSQL):** A robust relational database that stores scene configurations, camera metadata, calibration parameters, and historical analytics data. Ensures data persistence and enables complex queries for reporting and analysis. +- **Intel® SceneScape Management UI:** A web-based user interface for advanced scene configuration, camera calibration, and visual rule setup. Provides intuitive tools for defining detection zones, traffic lanes, and monitoring areas through a graphical interface. +- **Intel® SceneScape Controller:** The backend service that manages scene configurations, processes spatial analytics, and coordinates between the Management UI and the video analytics pipeline. Handles complex scene understanding and geometric transformations. +- **Intel® SceneScape Database (PostgreSQL):** A robust relational database that stores scene configurations, camera metadata, calibration parameters, and historical analytics data. Ensures data persistence and enables complex queries for reporting and analysis. - **InfluxDB:** A time-series database optimized for storing and querying high-frequency transportation metrics such as vehicle counts, traffic flow rates, speed measurements, and system performance data. Enables efficient historical analysis and trend monitoring. diff --git a/metro-ai-suite/metro-vision-ai-app-recipe/docs/user-guide/tutorial-1.md b/metro-ai-suite/metro-vision-ai-app-recipe/docs/user-guide/tutorial-1.md index bcd399a1e..344ae6108 100644 --- a/metro-ai-suite/metro-vision-ai-app-recipe/docs/user-guide/tutorial-1.md +++ b/metro-ai-suite/metro-vision-ai-app-recipe/docs/user-guide/tutorial-1.md @@ -4,7 +4,7 @@ **Sample Description**: This tutorial demonstrates how to build an intelligent tolling system using edge AI technologies for real-time vehicle detection, license plate recognition, and vehicle attribute analysis. 
--> -This tutorial walks you through creating an AI-powered tolling system that automatically detects vehicles, recognizes license plates, and analyzes vehicle attributes in real-time. The system leverages Intel's DLStreamer framework with pre-trained AI models to process video streams from toll booth cameras, enabling automated toll collection and traffic monitoring. +This tutorial walks you through creating an AI-powered tolling system that automatically detects vehicles, recognizes license plates, and analyzes vehicle attributes in real-time. The system leverages Intel's Deep Learning Streamer (DL Streamer) framework with pre-trained AI models to process video streams from toll booth cameras, enabling automated toll collection and traffic monitoring.
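To make the idea concrete, a minimal DL Streamer pipeline of the kind such a system builds on looks roughly like the sketch below. The input video and model XML paths are placeholders for illustration, not files shipped with this tutorial.

```sh
# Illustrative single-stream sketch: detect vehicles, classify their
# attributes, draw overlays, and render locally. Replace the video and
# model XML paths with your own assets.
gst-launch-1.0 filesrc location=tollbooth.mp4 ! decodebin ! \
  gvadetect model=vehicle-detection.xml device=CPU ! \
  gvaclassify model=vehicle-attributes.xml device=CPU ! \
  gvawatermark ! videoconvert ! autovideosink
```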