Merge pull request #353 from dtischler/main
Jetson Nano Quality Control image fixes
dtischler authored Jan 17, 2024
2 parents 1ee9199 + a15c5b6 commit 36591a9
Showing 19 changed files with 16 additions and 16 deletions.
32 changes: 16 additions & 16 deletions image-projects/quality-control-jetson-nano.md
@@ -12,7 +12,7 @@ Public Project Link: [https://studio.edgeimpulse.com/public/320746/latest](https
GitHub Repo: [https://github.com/Jallson/PizzaQC_Conveyor_Belt](https://github.com/Jallson/PizzaQC_Conveyor_Belt)


- ![](../.gitbook/assets/quality-control-jetson-nano/Photo01.jpg)
+ ![](../.gitbook/assets/quality-control-jetson-nano/Photo01.png)

## Problem Statement

@@ -26,7 +26,7 @@ A computer vision system for quality/quantity inspection of product manufacturin

This project uses Edge Impulse's FOMO (Faster Objects, More Objects) algorithm, which can quickly detect objects and use them as a quality/quantity check for products on a running conveyor belt. FOMO's ability to report the number and coordinate positions of objects is the basis of this system. This project will explore the capability of the Nvidia Jetson Nano's GPU to handle color video (RGB) at a higher resolution (320x320) than some other TinyML projects, while still maintaining a high inference speed. The machine learning model (`model.eim`) will be deployed with the TensorRT library, which will be compiled with optimizations for the GPU and set up via the Linux C++ SDK. Once the model can identify different pizza toppings, an additional Python program will be added to check each pizza for a standard quantity of pepperoni, mushrooms, and paprikas. This project is a proof-of-concept that can be widely applied in the product manufacturing and food production industries to perform quality checks based on a required quantity of parts in a product.
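
To make the quantity-check idea concrete, below is a minimal sketch (not the project's actual code) of how a single frame's FOMO detections could be compared against a required topping count. The label names and `REQUIRED` quantities are illustrative assumptions, not values taken from the project.

```python
# Minimal sketch: compare FOMO detections in one frame against a required
# topping count. Labels and required quantities are illustrative assumptions.
from collections import Counter

REQUIRED = {"pepperoni": 3, "mushroom": 3, "paprika": 3}  # hypothetical QC standard

def check_frame(bounding_boxes):
    """bounding_boxes: a list of dicts with a 'label' key, as returned by a
    FOMO object detection model (e.g. via the Edge Impulse Linux SDK)."""
    counts = Counter(box["label"] for box in bounding_boxes)
    return all(counts.get(label, 0) == qty for label, qty in REQUIRED.items())

# Example: 3 pepperoni, 3 mushrooms, but only 2 paprikas -> "BAD"
frame = [{"label": "pepperoni"}] * 3 + [{"label": "mushroom"}] * 3 + [{"label": "paprika"}] * 2
print("OK" if check_frame(frame) else "BAD")
```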

- ![](../.gitbook/assets/quality-control-jetson-nano/Photo02.jpg)
+ ![](../.gitbook/assets/quality-control-jetson-nano/Photo02.png)

### Hardware Components

@@ -51,7 +51,7 @@ This project uses Edge Impulse's FOMO (Faster Objects, More Objects) algorithm,

In this project, for ease of use, we can use a camera (webcam) connected to a PC/laptop to capture the images for data collection. Take pictures of your pizza components from above, with slightly different angles and lighting conditions, to ensure that the model can work under different conditions (and to prevent overfitting). When using FOMO, object size is a crucial factor in the model's performance. You must keep the camera distance from objects consistent, because significant differences in object size will confuse the FOMO algorithm.

- ![](../.gitbook/assets/quality-control-jetson-nano/Photo03.jpg)
+ ![](../.gitbook/assets/quality-control-jetson-nano/Photo03.png)


### 2. Data Acquisition and Labeling
@@ -64,17 +64,17 @@ For Developer (free) accounts, next click on the **Labelling queue** tab, then s

For Enterprise (paid) accounts, you will instead click on **Auto-Labeler** in _Data Acquisition_. This auto-labeling segmentation / cluster process will save a lot of time over the manual process above. Set the min/max object pixels and sim threshold (0.9 - 0.999) to adjust the sensitivity of cluster detection, then click **Run**. Next, you can label each cluster result as desired. If something doesn’t match or if there is additional data, labeling can be done manually as well.

- ![](../.gitbook/assets/quality-control-jetson-nano/Photo04.jpg)
+ ![](../.gitbook/assets/quality-control-jetson-nano/Photo04.png)

- ![](../.gitbook/assets/quality-control-jetson-nano/Photo05.jpg)
+ ![](../.gitbook/assets/quality-control-jetson-nano/Photo05.png)

- ![](../.gitbook/assets/quality-control-jetson-nano/Photo06.jpg)
+ ![](../.gitbook/assets/quality-control-jetson-nano/Photo06.png)

- ![](../.gitbook/assets/quality-control-jetson-nano/Photo07.jpg)
+ ![](../.gitbook/assets/quality-control-jetson-nano/Photo07.png)

- ![](../.gitbook/assets/quality-control-jetson-nano/Photo08.jpg)
+ ![](../.gitbook/assets/quality-control-jetson-nano/Photo08.png)

- ![](../.gitbook/assets/quality-control-jetson-nano/Photo09.jpg)
+ ![](../.gitbook/assets/quality-control-jetson-nano/Photo09.png)

### 3. Train and Build Model using FOMO Object Detection

@@ -84,21 +84,21 @@ In the _Image_ parameters section, set the color depth as **RGB** then press **S

If everything is OK, the training job will finish in a short while, then we can test the model. Go to the **Model Testing** section and click **Classify all**. Our result is above 90%, so we can move on to the next step — Deployment.

- ![](../.gitbook/assets/quality-control-jetson-nano/Photo10.jpg)
+ ![](../.gitbook/assets/quality-control-jetson-nano/Photo10.png)

- ![](../.gitbook/assets/quality-control-jetson-nano/Photo11.jpg)
+ ![](../.gitbook/assets/quality-control-jetson-nano/Photo11.png)

- ![](../.gitbook/assets/quality-control-jetson-nano/Photo12.jpg)
+ ![](../.gitbook/assets/quality-control-jetson-nano/Photo12.png)

- ![](../.gitbook/assets/quality-control-jetson-nano/Photo13.jpg)
+ ![](../.gitbook/assets/quality-control-jetson-nano/Photo13.png)

- ![](../.gitbook/assets/quality-control-jetson-nano/Photo14.jpg)
+ ![](../.gitbook/assets/quality-control-jetson-nano/Photo14.png)

### 4. Deploy Model to NVIDIA Jetson Nano GPU

Click on the **Deployment** tab, search for _TensorRT_, select _Float32_, and click **Build**. This will build an Nvidia TensorRT library for running inference, targeting the Jetson Nano's GPU. Once the build finishes and the file is downloaded, open the `.zip` file; then we're ready for model deployment with the Edge Impulse C++ SDK on the Jetson Nano side.

- ![](../.gitbook/assets/quality-control-jetson-nano/Photo15.jpg)
+ ![](../.gitbook/assets/quality-control-jetson-nano/Photo15.png)

On the Jetson Nano, there are several things that need to be done. Flash the Nvidia-provided Ubuntu OS with JetPack (which can be downloaded from the Nvidia Jetson website) to an SD Card. Insert the SD Card and power on the board, go through the setup process to finish the OS configuration, and connect the board to your local network. Then `ssh` in from your PC/laptop and install the Edge Impulse tooling via the terminal:

@@ -167,7 +167,7 @@ Because we'll use Python, we need to install the Edge Impulse Python SDK and clo

The program I made (`topping.py`) is a modification of Edge Impulse's `classify.py` in the `examples/image` folder from the `linux-python-sdk` directory.

- ![](../.gitbook/assets/quality-control-jetson-nano/Photo16.jpg)
+ ![](../.gitbook/assets/quality-control-jetson-nano/Photo16.png)
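
For reference, here is a minimal sketch of the kind of loop `classify.py` implements and that `topping.py` builds on: load the `.eim` model with the Edge Impulse Linux Python SDK, grab a frame from the webcam, run inference, and count the detected toppings per label. The model path, camera index, and result handling are illustrative assumptions, and the exact SDK API may differ between versions.

```python
# Minimal sketch based on the Edge Impulse Linux Python SDK image example
# (examples/image/classify.py). The model path, camera index, and label
# handling are illustrative assumptions; check the SDK version you installed.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "model.eim"   # TensorRT model downloaded from Edge Impulse Studio
CAMERA_ID = 0              # USB webcam index

with ImageImpulseRunner(MODEL_PATH) as runner:
    model_info = runner.init()   # loads the model and returns project metadata
    print("Loaded model for project:", model_info["project"]["name"])

    cap = cv2.VideoCapture(CAMERA_ID)
    ret, frame = cap.read()
    cap.release()

    if ret:
        # OpenCV captures BGR; the impulse expects RGB features
        img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        features, cropped = runner.get_features_from_image(img)
        res = runner.classify(features)

        # FOMO returns object centroids as bounding boxes; count them per label
        counts = {}
        for box in res["result"]["bounding_boxes"]:
            counts[box["label"]] = counts.get(box["label"], 0) + 1
        print("Detections per label:", counts)
```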

My program takes the stream of per-frame object counts from the model file (`model.eim`) as input, for example: `0 0 2 3 3 1 0 1 3 3 3 2 0 0 0 2 3 3 2 0 0 2 5 5 1 0 0 2 3 3 1 0 0 1 2 2 0 0`. It treats **0** as the sequence separator and records the peak value in each sequence. As an example, if the correct number of toppings on a pizza (per quality control standards) is **3**, and we know that a 0 is a separator, and anything other than 3 is bad...then `0 3 0 3 0 3 0 5 0 3 0 2 0` evaluates to: `OK OK OK BAD OK BAD`
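
A minimal sketch of that separator/peak logic (not the actual `topping.py` source; the standard count of 3 is just the example above) could look like this:

```python
# Minimal sketch of the separator/peak logic described above: 0 separates
# sequences, the peak count within each sequence is the topping count for
# that pizza, and anything other than the standard count is flagged "BAD".
STANDARD = 3  # required topping count, from the example above

def judge(counts, standard=STANDARD):
    results = []
    peak = 0
    for c in counts:
        if c == 0:                  # separator: close out the current sequence
            if peak > 0:
                results.append("OK" if peak == standard else "BAD")
            peak = 0
        else:
            peak = max(peak, c)     # track the peak count within the sequence
    if peak > 0:                    # handle a trailing sequence with no final 0
        results.append("OK" if peak == standard else "BAD")
    return results

print(judge([0, 3, 0, 3, 0, 3, 0, 5, 0, 3, 0, 2, 0]))
# -> ['OK', 'OK', 'OK', 'BAD', 'OK', 'BAD']
```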

