more link fixes #296

Merged · 1 commit · Aug 14, 2023
File renamed without changes
File renamed without changes
2 changes: 1 addition & 1 deletion esd-protection-using-computer-vision.md
@@ -70,6 +70,6 @@ This project was a lot of fun. I have worked with the Jetson Nano for a little o

The memory limitations were unfortunate and took up a bit of my time, and I hope that gets resolved in the future. I even had to do my development off the Nano because I didn't have enough space to install VS Code (my IDE of choice). Not a show-stopper by any means, this is still a very capable piece of hardware.

I think this project could be further expanded in the future. You could add a Twilio interface to text a supervisor if an ESD risk is present. Different types of objects could be classified (maybe ensuring an ESD smock is worn?), and what I'm more excited about is [FOMO-AD](https://mobile.twitter.com/janjongboom/status/1575530285814362112?cxt=HHwWgMCtrdGctN0rAAAA) (the AD stands for Anomaly Detection), announced at [Edge Impulse Imagine 2022](https://edgeimpulse.com/imagine). It won't be ready until 2023, but I think there is a lot of opportunity to use that technology for recognizing what is right and what is not right in an image. I'm excited to test its capabilities!
I think this project could be further expanded in the future. You could add a Twilio interface to text a supervisor if an ESD risk is present. Different types of objects could be classified (maybe ensuring an ESD smock is worn?), and what I'm more excited about is [FOMO-AD](https://mobile.twitter.com/janjongboom/status/1575530285814362112) (the AD stands for Anomaly Detection), announced at [Edge Impulse Imagine 2022](https://edgeimpulse.com/imagine). It won't be ready until 2023, but I think there is a lot of opportunity to use that technology for recognizing what is right and what is not right in an image. I'm excited to test its capabilities!
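
As a rough sketch of that Twilio idea, a notification hook could look something like the snippet below. This is illustrative only and not part of the original project: the `twilio` package, the credentials, the phone numbers, and the `esd_risk_detected` flag are all assumptions.

```python
# Sketch only: text a supervisor when an ESD risk is detected.
# The twilio package, credentials, and phone numbers below are placeholders.
from twilio.rest import Client

def notify_supervisor(esd_risk_detected: bool) -> None:
    """Send an SMS alert if the vision model flagged an ESD risk."""
    if not esd_risk_detected:
        return
    client = Client("ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "your_auth_token")
    client.messages.create(
        body="ESD risk detected on the assembly bench - please check.",
        from_="+15005550006",  # your Twilio number
        to="+15551234567",     # supervisor's number
    )

# Example: call it with the model's output after each classified frame.
# notify_supervisor(esd_risk_detected=True)
```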

Thank you again to Seeed Studio for providing the hardware for me to work on. I hope to do more projects with this equipment in the future. Happy coding!
2 changes: 1 addition & 1 deletion food-irradiation-detection.md
@@ -1193,7 +1193,7 @@ After generating training and testing samples successfully, I uploaded them to m

![image](.gitbook/assets/food-irradiation/edge_set_2.png)

![image](.gitbook/assets/food-irradiation/edge_set_3.PNG)
![image](.gitbook/assets/food-irradiation/edge_set_3.png)

:hash: Then, choose the data category (training or testing) and select *Infer from filename* under *Label* to deduce labels from file names automatically.
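
If you would rather script the upload than use the Studio uploader, the sketch below shows roughly how the Edge Impulse ingestion service could be used, deriving the label from the file name to mirror *Infer from filename*. The API key, file names, MIME type, and label convention are assumptions for illustration.

```python
# Sketch: upload generated samples to an Edge Impulse project via the
# ingestion service, taking the label from the file name prefix.
# EI_API_KEY, the MIME type, and the label convention are assumptions.
import os
import requests

EI_API_KEY = "ei_..."  # your project's API key
INGESTION_URL = "https://ingestion.edgeimpulse.com/api/training/files"  # use /api/testing/files for test data

def upload_sample(path: str) -> None:
    label = os.path.basename(path).split(".")[0]  # e.g. "irradiated.sample1.csv" -> "irradiated"
    with open(path, "rb") as f:
        res = requests.post(
            INGESTION_URL,
            headers={"x-api-key": EI_API_KEY, "x-label": label},
            files={"data": (os.path.basename(path), f, "application/octet-stream")},
        )
    res.raise_for_status()

# upload_sample("irradiated.sample1.csv")
```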

2 changes: 1 addition & 1 deletion gas-detection-thingy-91.md
@@ -106,7 +106,7 @@ If you are going to be using a Linux computer for this application, make sure to

```
sudo apt install screen
```

Afterwards, download the official [Edge Impulse Nordic Thingy:91 firmware](https://cdn.edgeimpulse.com/firmware/nordic-thingy91.zip) and extract it.
Afterwards, download the official [Edge Impulse Nordic Thingy:91 firmware](https://cdn.edgeimpulse.com/firmware/thingy91.zip) and extract it.
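
If you prefer to script this step, a minimal sketch of downloading and extracting the archive is shown below; it assumes the `requests` package is installed, and the output folder name is arbitrary.

```python
# Sketch: fetch the Thingy:91 firmware archive and unpack it.
# Assumes the requests package is available; the target folder is arbitrary.
import io
import zipfile
import requests

FIRMWARE_URL = "https://cdn.edgeimpulse.com/firmware/thingy91.zip"

resp = requests.get(FIRMWARE_URL, timeout=60)
resp.raise_for_status()
with zipfile.ZipFile(io.BytesIO(resp.content)) as archive:
    archive.extractall("thingy91-firmware")
print("Firmware extracted to ./thingy91-firmware")
```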

Next up, make sure the board is turned off and connect it to your computer. Put the board in MCUboot mode by pressing the multi-function button placed in the middle of the device and with the button pressed, turn the board on.

14 changes: 7 additions & 7 deletions indoor-co2-level-estimation-using-tinyml.md
@@ -10,7 +10,7 @@ Swapnil Verma
Public Project Link:
[https://studio.edgeimpulse.com/public/93652/latest](https://studio.edgeimpulse.com/public/93652/latest)

![Indoor CO2](.gitbook/assets/indoor-co2.jpg)
![Indoor CO2](.gitbook/assets/indoor-co2/indoor-co2.jpg)

### Problem Overview

@@ -43,25 +43,25 @@ In this project, the dataset I am using is a subset of the PIROPO database \[3].

The dataset contains multiple sequences recorded in the two indoor rooms using a perspective camera.

![Indoor Environment 1](.gitbook/assets/indoor-1.jpg)
![Indoor Environment 1](.gitbook/assets/indoor-co2/indoor-1.jpg)

![Indoor Environment 2](.gitbook/assets/indoor-2.jpg)
![Indoor Environment 2](.gitbook/assets/indoor-co2/indoor-2.jpg)

The original PIROPO database contains perspective as well as omnidirectional camera images.

I imported the subset of the PIROPO database to the Edge Impulse via the [data acquisition](https://docs.edgeimpulse.com/docs/edge-impulse-cli/cli-uploader#upload-data-from-the-studio) tab. This tab has a cool feature called [labelling queue](https://www.edgeimpulse.com/blog/3-ways-to-do-ai-assisted-labeling-for-object-detection), which uses YOLO to label an object in the image automatically for you.

![Automatically label data using the labelling queue feature](.gitbook/assets/ezgif\_com\_gif\_maker\_3\_2924bbe7c1.gif)
![Automatically label data using the labelling queue feature](.gitbook/assets/indoor-co2/ezgif_2924bbe7c1.gif)

I used this feature to label _people_ in the PIROPO images. I then divided the data into _training_ and _test_ sets using the _train/test split_ feature. While training, the Edge Impulse automatically divides the training dataset into _training_ and _validation_ datasets.

#### **2. Training and Testing**

Training and testing are done using above mentioned PIROPO dataset. I used the [FOMO](https://www.edgeimpulse.com/blog/announcing-fomo-faster-objects-more-objects) architecture by the Edge Impulse to train this model. To prepare a model using FOMO, please follow this [link](https://docs.edgeimpulse.com/docs/tutorials/object-detection/detect-objects-using-fomo).

![Training statistics](.gitbook/assets/training.jpg)
![Training statistics](.gitbook/assets/indoor-co2/training.jpg)

![Model testing results](.gitbook/assets/testing.jpg)
![Model testing results](.gitbook/assets/indoor-co2/testing.jpg)

The training F1 score of my model is 91.6%, and the testing accuracy is 86.42%. For live testing, I deployed the model by building OpenMV firmware and flashing that firmware using the OpenMV IDE. A video of live testing performed on the Arduino Portenta H7 is attached in the Demo section below.

@@ -75,7 +75,7 @@ This section contains a step-by-step guide to downloading and running the softwa

### How does it work?

![System overview](.gitbook/assets/how-it-works.jpg)
![System overview](.gitbook/assets/indoor-co2/how-it-works.jpg)

This system is quite simple. The Vision shield (or any camera) captures a 240x240 image of the environment and passes it to the FOMO model prepared using Edge Impulse. This model then identifies the people in the image and passes the number of people to the CO2 level estimation function every minute. The function then estimates the amount of CO2 using the below formula.
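
The project's actual formula appears further down in the original guide; the sketch below only illustrates the shape of such a function. The per-person CO2 generation rate, room volume, and outdoor baseline are placeholder assumptions, not the values used here.

```python
# Illustrative sketch only: estimate indoor CO2 from a people count.
# The generation rate, room volume, and baseline are assumed placeholders;
# the project applies its own formula, shown later in the guide.
CO2_PER_PERSON_LPM = 0.3    # litres of CO2 exhaled per person per minute (assumed)
ROOM_VOLUME_L = 50_000      # a 50 m^3 room expressed in litres (assumed)
OUTDOOR_BASELINE_PPM = 420  # assumed outdoor concentration

def estimate_co2(people_count: int, minutes: int,
                 current_ppm: float = OUTDOOR_BASELINE_PPM) -> float:
    """Return an estimated CO2 concentration (ppm) after `minutes` of occupancy."""
    added_litres = people_count * CO2_PER_PERSON_LPM * minutes
    added_ppm = (added_litres / ROOM_VOLUME_L) * 1_000_000
    return current_ppm + added_ppm

# Example: 4 people detected, accumulating over 30 minutes (ventilation ignored).
# print(estimate_co2(people_count=4, minutes=30))
```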

2 changes: 1 addition & 1 deletion ml-knob-eye.md
@@ -71,7 +71,7 @@ To start the training, a good number of pictures with variations of the knob in

You can download a sample data acquisition script, and the recording script from:

[https://github.com/ronibandini/MlKnobReading](https://github.com/ronibandini/MlKnobReading)
[https://github.com/ronibandini/MLAnalogKnobReading](https://github.com/ronibandini/MLAnalogKnobReading)

Place the camera in the 3d printed arm, around 10cm over the knob, with good lighting. Place the knob in the "Minimum" (low) position. Then on the Raspberry Pi, run:

4 changes: 2 additions & 2 deletions nvidia-omniverse-synthetic-data.md
@@ -195,7 +195,7 @@ def dome_lights(num=1):

```
rep.randomizer.register(dome_lights)
```

For more information about using lights with Replicator, you can check out the [NVIDIA documentation](https://docs.omniverse.nvidia.com/app_code/prod_materials-and-rendering/lighting.html).
For more information about using lights with Replicator, you can check out the [NVIDIA documentation](https://docs.omniverse.nvidia.com/materials-and-rendering/latest/lighting.html).
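
For readers who cannot see the full `dome_lights` body above this hunk, a generic Replicator light randomizer typically looks like the sketch below. This is not the project's code; the rotation range and intensity distribution are assumptions.

```python
# Generic sketch of a Replicator dome-light randomizer (not the project's
# dome_lights body). Rotation range and intensity values are assumptions.
import omni.replicator.core as rep

def random_dome_light():
    lights = rep.create.light(
        light_type="Dome",
        rotation=rep.distribution.uniform((0, 0, 0), (0, 0, 360)),
        intensity=rep.distribution.normal(1000, 250),
    )
    return lights.node

rep.randomizer.register(random_dome_light)
```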

### Fruits

@@ -245,7 +245,7 @@ camera2 = rep.create.camera(

```
render_product2 = rep.create.render_product(camera2, (512, 512))
```

For more information about using cameras with Replicator, you can check out the [NVIDIA documentation](https://docs.omniverse.nvidia.com/app_isaacsim/prod_materials-and-rendering/cameras.html).
For more information about using cameras with Replicator, you can check out the [NVIDIA documentation](https://docs.omniverse.nvidia.com/materials-and-rendering/latest/cameras.html).
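
To tie the randomizers and render products together, Replicator scripts usually invoke the registered randomizers inside a frame trigger; a rough sketch is below. The frame count is arbitrary, it assumes the `dome_lights` randomizer registered earlier in the script, and attaching a writer to the render products is covered in the Basic Writer section that follows.

```python
# Sketch: fire the registered randomizers on every generated frame.
# The frame count is arbitrary; rep.randomizer.dome_lights() assumes the
# randomizer was registered under that name earlier in the script.
import omni.replicator.core as rep

with rep.trigger.on_frame(num_frames=200):
    rep.randomizer.dome_lights()
```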

### Basic Writer

2 changes: 1 addition & 1 deletion occupancy-sensing-with-silabs.md
Original file line number Diff line number Diff line change
@@ -52,7 +52,7 @@ A very important mention concerning privacy is that we will use the microphones

- [Simplicity Commander](https://community.silabs.com/s/article/simplicity-commander?language=en_US) - a utility that provides command line and GUI access to the debug features of EFM32 devices. It enables us to flash the firmware on the device.
- The [Edge Impulse CLI](https://docs.edgeimpulse.com/docs/edge-impulse-cli/cli-installation) - A suite of tools that will enable you to control the xG24 Kit without being connected to the internet and ultimately, collect raw data and trigger in-system inferences
- The [base firmware image provided by Edge Impulse](https://cdn.edgeimpulse.com/firmware/silabs-xg24-devkit.bin) - enables you to connect your SiLabs kit to your project and do data acquisition straight from the online platform.
- The [base firmware image provided by Edge Impulse](https://cdn.edgeimpulse.com/firmware/silabs-xg24.zip) - enables you to connect your SiLabs kit to your project and do data acquisition straight from the online platform.

## Hardware Setup

2 changes: 1 addition & 1 deletion renesas-ra6m5-getting-started.md
Original file line number Diff line number Diff line change
@@ -84,7 +84,7 @@ To begin, you'll need to create an Edge Impulse account and a project in the Edg
The next step is connecting our Renesas CK-RA6M5 board to the Edge Impulse Studio, so we can ingest sensor data for the machine learning model. Please follow the below steps to do so:

- Connect the Renesas CK-RA6M5 board to the computer by following the steps mentioned in the _Quick Start_ section.
- Open a terminal or command prompt and type `edge-impulse-daemon`. The [Edge Impulse daemon](https://docs.edgeimpulse.com/docs/Edge Impulse-cli/cli-daemon) will start and prompt for user credentials.
- Open a terminal or command prompt and type `edge-impulse-daemon`. The [Edge Impulse daemon](https://docs.edgeimpulse.com/docs/edge-impulse-cli/cli-daemon) will start and prompt for user credentials.
- After providing user credentials, it will prompt you to select an Edge Impulse project. Please navigate and select the project created in the previous steps, using the arrow keys.

![Daemon](.gitbook/assets/renesas-ra6m5-getting-started/daemon.jpg)
2 changes: 1 addition & 1 deletion renesas-rzv2l-pose-detection.md
Original file line number Diff line number Diff line change
@@ -177,7 +177,7 @@ The 2 stage pipeline runs sequentially and the more objects detected the more cl

While this pipeline can be deployed to any Linux board that supports EIM, it can be used with DRP-AI on the Renesas RZ/V2L Eval kit or RZ/Board leveraging the highly performant and low power DRP-AI by selecting these options in Edge Impulse Studio as shown earlier. By deploying to the RZ/V2L you will achieve the lowest power consumption vs framerate against any of the other supported platforms. YOLO Object Detection also ensures you get the level of performance needed for demanding applications.

The application consists of two files [app.py](http://app.py) which contains the main 2 stage pipeline and web server and [eim.py](http://eim.py) which is a custom Python SDK for using EIM’s in your own application
The application consists of two files: `app.py`, which contains the main 2 stage pipeline and web server, and `eim.py`, which is a custom Python SDK for using EIMs in your own application.
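
For context on what `eim.py` wraps, the snippet below shows the rough shape of running a downloaded `.eim` model using the official `edge_impulse_linux` Python package rather than the project's custom SDK; the model path, image file, and OpenCV dependency are assumptions for illustration.

```python
# Sketch: run a .eim model with the official edge_impulse_linux package
# (the project ships its own eim.py wrapper; this is only an illustration).
# The model path, image file, and opencv-python dependency are assumptions.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"  # .eim built for your target

with ImageImpulseRunner(MODEL_PATH) as runner:
    model_info = runner.init()
    print("Loaded model:", model_info["project"]["name"])

    img = cv2.imread("frame.jpg")               # frame from disk or a camera
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # the runner expects RGB input
    features, cropped = runner.get_features_from_image(img)
    result = runner.classify(features)
    print(result["result"])
```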

To configure the application, various configuration options are available in the Application Configuration Options section near the top of the application:

2 changes: 1 addition & 1 deletion renesas-rzv2l-product-quality-inspection.md
Original file line number Diff line number Diff line change
@@ -110,7 +110,7 @@ ssh root@smarc-rzv2l

Note: if the `smarc-rzv2l` hostname is not identified on your network, you can use the board's local IP address instead.

![RZ/V2L with camera](.gitbook/assets/renesas-rzv2l-product-quality-inspection/img10-RZ_V2L-with-camera.JPG)
![RZ/V2L with camera](.gitbook/assets/renesas-rzv2l-product-quality-inspection/img10-RZ-V2L-with-camera.JPG)

To run the model locally on the RZ/V2L we can run the command `edge-impulse-linux-runner` which lets us log in to our Edge Impulse account and select a project. The model will be downloaded and inference will start automatically.

2 changes: 1 addition & 1 deletion ros2-part2-microros.md
Original file line number Diff line number Diff line change
@@ -87,7 +87,7 @@ To add it to your MicroROS environment, navigate to the MicroROS Arduino library
~/Arduino/libraries/micro_ros_arduino-2.0.5-humble/extras/library_generation/extra_packages
```

Paste the directory there, **return to the main** `micro_ros_arduino-2.0.5-humble` **directory,** and use the docker commands from [this part](micro_ros_arduino-2.0.5-humble) of the MicroROS Arduino readme:
Paste the directory there, **return to the main** `micro_ros_arduino-2.0.5-humble` **directory,** and use the docker commands from [this part](https://github.com/micro-ROS/micro_ros_arduino) of the MicroROS Arduino readme:

```
docker pull microros/micro_ros_static_library_builder:humble
```