@@ -59,3 +59,4 @@ OPCUA_SERVER_PASSWORD=

# Sample App, name should match the app dir name
SAMPLE_APP=pallet-defect-detection
DEVICE=CPU # Possible values: CPU, GPU, NPU
@@ -59,3 +59,4 @@ OPCUA_SERVER_PASSWORD=

# Sample App, name should match the app dir name
SAMPLE_APP=pcb-anomaly-detection
DEVICE=CPU # Possible values: CPU, GPU, NPU
@@ -59,3 +59,4 @@ OPCUA_SERVER_PASSWORD=

# Sample App, name should match the app dir name
SAMPLE_APP=weld-porosity
DEVICE=CPU # Possible values: CPU, GPU, NPU
@@ -59,3 +59,4 @@ OPCUA_SERVER_PASSWORD=

# Sample App, name should match the app dir name
SAMPLE_APP=worker-safety-gear-detection
DEVICE=CPU # Possible values: CPU, GPU, NPU
@@ -36,6 +36,7 @@ Following directory structure consisting of generic deployment code as well as p
pipeline-server-config.json
setup.sh
payload.json
payload_gpu.json
helm/
apps/
application_name/
@@ -67,6 +68,8 @@ Following directory structure consisting of generic deployment code as well as p
pre-requisite installer to set up envs and download artifacts such as models/videos to the `resources/` directory. It also sets executable permissions for scripts.
- *payload.json*:
A JSON array file containing one or more requests to be sent to DLStreamer Pipeline Server to launch GStreamer pipeline(s). The payload data is associated with the *configs/pipeline-server-config.json* provided for that application. Each JSON object in the array has two keys, `pipeline` and `payload`, which name the pipeline it belongs to and the payload used to launch an instance of that pipeline (a jq lookup sketch illustrating this layout follows this list).
- *payload_gpu.json*:
A JSON array file with the same structure as *payload.json*, but with the inference device in each request set to GPU. It is used by `sample_start.sh` when `DEVICE=GPU` is set in the `.env` file.

- **helm**: contains helm charts and application-specific pre-requisite installers, configurations and runtime data. The configs and data within it are similar to **apps** but are kept here for easy packaging.

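As a concrete illustration of this layout, the sketch below pulls the request body for one pipeline out of a payload file with `jq`, which is what `sample_start.sh` does when `jq` is available. The file path and pipeline name are examples taken from the GPU payload added in this change.

```bash
# Sketch: extract the request body for a single pipeline from a payload file.
# Assumes jq is installed; the path and pipeline name come from the pallet
# defect detection GPU example in this change.
PAYLOAD_FILE=apps/pallet-defect-detection/payload_gpu.json
PIPELINE=pallet_defect_detection_gpu

# Each array element carries "pipeline" (the pipeline name) and "payload"
# (the request body posted to DLStreamer Pipeline Server).
jq --arg p "$PIPELINE" '.[] | select(.pipeline == $p) | .payload' "$PAYLOAD_FILE"
```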
@@ -98,7 +101,7 @@ Please ensure that you have the correct version of the DL Streamer Pipeline Serv

General instructions for Docker-based deployment are as follows (a condensed command sketch follows this list).

1. Prepare the `.env` file for compose to source during deployment. This chosen env file defines the application you would be running. Set the device type (CPU, GPU, or NPU) here via the `DEVICE` variable.
2. Run `setup.sh` to setup pre-requisites, download artifacts, etc.
3. Bring the services up with `docker compose up`.
4. Run `sample_start.sh` to start the pipeline. This sends a curl request with a pre-defined payload to the running DLStreamer Pipeline Server.
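A condensed sketch of these four steps for one sample application is shown below. The working directory and script locations are assumptions based on the repository layout above, and `-d` simply runs compose detached.

```bash
# Sketch of the documented flow, assumed to be run from the
# industrial-edge-insights-vision directory with the chosen app's .env in place.

# 1. Select the inference device in the .env (CPU, GPU, or NPU)
sed -i 's/^DEVICE=.*/DEVICE=GPU/' .env

# 2. Install pre-requisites and download models/videos into resources/
./setup.sh

# 3. Bring the services up (detached)
docker compose up -d

# 4. Start a pipeline; -p must match a pipeline name in the payload file
./sample_start.sh -p pallet_defect_detection_gpu
```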
@@ -127,6 +127,32 @@ If not, follow the [installation guide for docker engine](https://docs.docker.co
Payload for pipeline 'pallet_defect_detection' {"source":{"uri":"file:///home/pipeline-server/resources/videos/warehouse.avi","type":"uri"},"destination":{"frame":{"type":"webrtc","peer-id":"pdd"}},"parameters":{"detection-properties":{"model":"/home/pipeline-server/resources/models/pallet-defect-detection/model.xml","device":"CPU"}}}
Posting payload to REST server at https://<HOST_IP>/api/pipelines/user_defined_pipelines/pallet_defect_detection
Payload for pipeline 'pallet_defect_detection' posted successfully. Response: "4b36b3ce52ad11f0ad60863f511204e2"
```

```bash
./sample_start.sh -p pallet_defect_detection_gpu
```

This command looks up the payload for the pipeline specified with the `-p` argument in the `payload_gpu.json` file and launches a pipeline instance in DLStreamer Pipeline Server. Refer to the table to learn about the different options available.

Output:

```bash
# Example output for Pallet Defect Detection
Environment variables loaded from /home/intel/OEP/edge-ai-suites/manufacturing-ai-suite/industrial-edge-insights-vision/.env
Running sample app: pallet-defect-detection
Checking status of dlstreamer-pipeline-server...
Server reachable. HTTP Status Code: 200
Using GPU payload file based on DEVICE=GPU
Loading payload from /home/intel/OEP/edge-ai-suites/manufacturing-ai-suite/industrial-edge-insights-vision/apps/pallet-defect-detection/payload_gpu.json
Payload loaded successfully.
Starting pipeline: pallet_defect_detection_gpu
Launching pipeline: pallet_defect_detection_gpu
Extracting payload for pipeline: pallet_defect_detection_gpu
Found 1 payload(s) for pipeline: pallet_defect_detection_gpu
Payload for pipeline 'pallet_defect_detection_gpu' {"source":{"uri":"file:///home/pipeline-server/resources/videos/warehouse.avi","type":"uri"},"destination":{"frame":{"type":"webrtc","peer-id":"pdd"}},"parameters":{"detection-properties":{"model":"/home/pipeline-server/resources/models/pallet-defect-detection/deployment/Detection/model/model.xml","device":"GPU"}}}
Posting payload to REST server at https://10.107.248.78/api/pipelines/user_defined_pipelines/pallet_defect_detection_gpu
Payload for pipeline 'pallet_defect_detection_gpu' posted successfully. Response: "32d21dc2a5bf11f0bdcd3e9cdb54fa68"
```

> **NOTE:** This will start the pipeline. To view the inference stream on WebRTC, open a browser and navigate to https://<HOST_IP>/mediamtx/pdd/ for Pallet Defect Detection
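To confirm the instance is running after the POST, you can query DL Streamer Pipeline Server's pipeline status endpoint through the same proxy. The sketch below reuses the host and `/api` prefix from the POST URL shown in the output above; the exact status path and the self-signed TLS (`-k`) are assumptions, so check the DLStreamer Pipeline Server REST API documentation for your version.

```bash
# Sketch: list pipeline instance statuses through the same HTTPS proxy.
# <HOST_IP> and the /api prefix follow the POST URL in the output above;
# -k skips certificate verification (assumed self-signed TLS).
curl -k "https://<HOST_IP>/api/pipelines/status" | jq .
```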
@@ -0,0 +1,23 @@
[
{
"pipeline": "pallet_defect_detection_gpu",
"payload":{
"source": {
"uri": "file:///home/pipeline-server/resources/videos/warehouse.avi",
"type": "uri"
},
"destination": {
"frame": {
"type": "webrtc",
"peer-id": "pddgpu"
}
},
"parameters": {
"detection-properties": {
"model": "/home/pipeline-server/resources/models/pallet-defect-detection/deployment/Detection/model/model.xml",
"device": "GPU"
}
}
}
}
]
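The GPU payload above references a model under `deployment/Detection/model/`, unlike the CPU example earlier which points at `model.xml` directly. If you want to confirm that path exists before starting the pipeline, a quick check inside the pipeline server container is sketched below; the container name is an assumption based on the service name printed in the sample output.

```bash
# Sketch: verify the model path referenced by the GPU payload inside the
# container (container name assumed to match the dlstreamer-pipeline-server service).
docker exec dlstreamer-pipeline-server \
  ls -l /home/pipeline-server/resources/models/pallet-defect-detection/deployment/Detection/model/model.xml
```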
@@ -0,0 +1,23 @@
[
{
"pipeline": "pcb_anomaly_detection_gpu",
"payload":{
"source": {
"uri": "file:///home/pipeline-server/resources/videos/anomalib_pcb_test.avi",
"type": "uri"
},
"destination": {
"frame": {
"type": "webrtc",
"peer-id": "anomalygpu"
}
},
"parameters": {
"classification-properties": {
"model": "/home/pipeline-server/resources/models/pcb-anomaly-detection/deployment/Anomaly classification/model/model.xml",
"device": "GPU"
}
}
}
}
]
@@ -131,6 +131,32 @@ If not, follow the [installation guide for docker engine](https://docs.docker.co

```

```bash
./sample_start.sh -p weld_porosity_classification_gpu
```

This command looks up the payload for the pipeline specified with the `-p` argument in the `payload_gpu.json` file and launches a pipeline instance in DLStreamer Pipeline Server. Refer to the table to learn about the different options available.

Output:

```bash

Environment variables loaded from /home/intel/OEP/new/edge-ai-suites/manufacturing-ai-suite/industrial-edge-insights-vision/.env
Running sample app: weld-porosity
Checking status of dlstreamer-pipeline-server...
Server reachable. HTTP Status Code: 200
Using GPU payload file based on DEVICE=GPU
Loading payload from /home/intel/OEP/new/edge-ai-suites/manufacturing-ai-suite/industrial-edge-insights-vision/apps/weld-porosity/payload_gpu.json
Payload loaded successfully.
Starting pipeline: weld_porosity_classification_gpu
Launching pipeline: weld_porosity_classification_gpu
Extracting payload for pipeline: weld_porosity_classification_gpu
Found 1 payload(s) for pipeline: weld_porosity_classification_gpu
Payload for pipeline 'weld_porosity_classification_gpu' {"source":{"uri":"file:///home/pipeline-server/resources/videos/welding.avi","type":"uri"},"destination":{"frame":{"type":"webrtc","peer-id":"weld"}},"parameters":{"classification-properties":{"model":"/home/pipeline-server/resources/models/weld-porosity/deployment/Classification/model/model.xml","device":"GPU"}}}
Posting payload to REST server at https://10.107.248.78/api/pipelines/user_defined_pipelines/weld_porosity_classification_gpu
Payload for pipeline 'weld_porosity_classification_gpu' posted successfully. Response: "e978a766a5c511f0b5ed5abd6f584899"
```

> **NOTE:** This will start the pipeline. The inference stream can be viewed on WebRTC, in a browser, at the following url:

```bash
@@ -0,0 +1,23 @@
[
{
"pipeline": "weld_porosity_classification_gpu",
"payload": {
"source": {
"uri": "file:///home/pipeline-server/resources/videos/welding.avi",
"type": "uri"
},
"destination": {
"frame": {
"type": "webrtc",
"peer-id": "weldgpu"
}
},
"parameters": {
"classification-properties": {
"model": "/home/pipeline-server/resources/models/weld-porosity/deployment/Classification/model/model.xml",
"device": "GPU"
}
}
}
}
]
@@ -124,6 +124,33 @@ If not, follow the [installation guide for docker engine](https://docs.docker.co
Payload for pipeline 'worker_safety_gear_detection' posted successfully. Response: "784b87b45d1511f08ab0da88aa49c01e"
```

```bash
./sample_start.sh -p worker_safety_gear_detection_gpu
```

This command looks up the payload for the pipeline specified with the `-p` argument in the `payload_gpu.json` file and launches a pipeline instance in DLStreamer Pipeline Server. Refer to the table to learn about the different options available.

Output:

```bash
# Example output for Worker Safety gear detection
Environment variables loaded from [WORKDIR]/manufacturing-ai-suite/industrial-edge-insights-vision/.env
Running sample app: worker-safety-gear-detection
Checking status of dlstreamer-pipeline-server...
Server reachable. HTTP Status Code: 200
Using GPU payload file based on DEVICE=GPU
Loading payload from [WORKDIR]/manufacturing-ai-suite/industrial-edge-insights-vision/apps/worker-safety-gear-detection/payload_gpu.json
Payload loaded successfully.
Starting pipeline: worker_safety_gear_detection_gpu
Launching pipeline: worker_safety_gear_detection_gpu
Extracting payload for pipeline: worker_safety_gear_detection_gpu
Found 1 payload(s) for pipeline: worker_safety_gear_detection_gpu
Payload for pipeline 'worker_safety_gear_detection_gpu' {"source":{"uri":"file:///home/pipeline-server/resources/videos/Safety_Full_Hat_and_Vest.avi","type":"uri"},"destination":{"frame":{"type":"webrtc","peer-id":"worker_safety"}},"parameters":{"detection-properties":{"model":"/home/pipeline-server/resources/models/worker-safety-gear-detection/deployment/Detection/model/model.xml","device":"GPU"}}}
Posting payload to REST server at https://10.107.248.78/api/pipelines/user_defined_pipelines/worker_safety_gear_detection_gpu
Payload for pipeline 'worker_safety_gear_detection_gpu' posted successfully. Response: "04fca2f8a5cb11f0bfae5a85c03cd2f6"
```


> **NOTE:** This will start the pipeline. The inference stream can be viewed on WebRTC, in a browser, at the following URL:

```sh
@@ -0,0 +1,23 @@
[
{
"pipeline": "worker_safety_gear_detection_gpu",
"payload":{
"source": {
"uri": "file:///home/pipeline-server/resources/videos/Safety_Full_Hat_and_Vest.avi",
"type": "uri"
},
"destination": {
"frame": {
"type": "webrtc",
"peer-id": "worker_safety_gpu"
}
},
"parameters": {
"detection-properties": {
"model": "/home/pipeline-server/resources/models/worker-safety-gear-detection/deployment/Detection/model/model.xml",
"device": "GPU"
}
}
}
}
]
@@ -42,8 +42,16 @@ init() {
}

load_payload() {
# Load all pipelines payload
PAYLOAD_FILE="$APP_DIR/payload.json"
# Load all pipelines payload based on DEVICE environment variable
# Use payload_gpu.json if DEVICE=GPU, otherwise use payload.json (default for CPU)
if [[ "$DEVICE" == "GPU" ]]; then
PAYLOAD_FILE="$APP_DIR/payload_gpu.json"
echo "Using GPU payload file based on DEVICE=$DEVICE"
else
PAYLOAD_FILE="$APP_DIR/payload.json"
echo "Using CPU payload file (DEVICE=${DEVICE:-CPU})"
fi

if [[ -f "$PAYLOAD_FILE" ]]; then
echo "Loading payload from $PAYLOAD_FILE"
if command -v jq &>/dev/null; then
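With this change, only `DEVICE=GPU` selects `payload_gpu.json`; `CPU`, `NPU`, or an unset `DEVICE` all fall back to `payload.json`. The loop below is only an illustration of that branch, using the pallet defect detection app directory, and is not part of the script:

```bash
# Illustration only: which payload file load_payload() would pick per DEVICE value.
APP_DIR=apps/pallet-defect-detection
for DEVICE in CPU GPU NPU ""; do
    if [[ "$DEVICE" == "GPU" ]]; then
        PAYLOAD_FILE="$APP_DIR/payload_gpu.json"
    else
        PAYLOAD_FILE="$APP_DIR/payload.json"
    fi
    echo "DEVICE=${DEVICE:-<unset>} -> $PAYLOAD_FILE"
done
```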