In this exercise, you will set up your camera and deploy the get-started vision module with the guidance of the OOBE (Out-Of-Box Experience) webpages.
- Connect a USB Type-C cable from your workstation to the AI Vision Developer Kit. This single cable provides both power and data transfer.
- Make sure the device LEDs are blinking RED. From your workstation, scan for Wi-Fi networks and you will see an MSIOT_xxxxxx network (printed on the label at the bottom of the device), where xxxxxx is the last 6 characters of the device's Wi-Fi MAC address; for example, MSIOT_BD097D. Double-check this value so that you don't connect to another attendee's camera. Connect to that Soft Access Point broadcast by Peabody.
The default passphrase is printed on a label at the bottom of the camera, for example: password: 88888888. If this line is not on the label, use 12345678 as the passphrase.
- Once you have connected to the above Wi-Fi network, the OOBE setup page should pop up automatically. If it does not, launch a browser and visit http://setupaicamera.ms.
- Follow the setup page and fill in the necessary information. You can use the default configuration. For the Wi-Fi SSID on the third page, select an access point that can reach the internet (not MSIOT_xxxxxx). The Wi-Fi profile is stored on Peabody so that it automatically reconnects to the same Wi-Fi network after a reboot.
Tip: The Wi-Fi connection in the lab venue may be slow and congested because many devices share limited bandwidth. Feel free to use your own cell phone hotspot if you encounter issues.
- Follow the steps in the webpages to create the corresponding Azure resources and deploy the get-started vision module from Azure Marketplace (a CLI equivalent is sketched after this list).
- You will see the inferencing output over HDMI on completion.
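For reference, the resources that the OOBE webpages create for you can also be created from a terminal. This is a minimal sketch, assuming the Azure CLI with the azure-iot extension; the names HardHatLabRG, HardHatLabHub, and my-ai-camera are placeholders, not values from the lab.

```
# Sketch only: the OOBE webpages do all of this for you; names are placeholders.
az extension add --name azure-iot            # IoT commands live in this extension

az group create --name HardHatLabRG --location westus2
az iot hub create --name HardHatLabHub --resource-group HardHatLabRG --sku S1

# Create the IoT Edge device identity that gets associated with the camera
az iot hub device-identity create --hub-name HardHatLabHub \
  --device-id my-ai-camera --edge-enabled
```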
In this exercise, you set up the Vision AI Dev Kit so that it connects to the internet and to IoT Hub. When you went through the OOBE webpages to create the IoT Edge device, that identity was also associated with your camera in the background. Finally, the get-started vision module was deployed dynamically from Azure Marketplace, and you were able to see the inferencing output on HDMI.
This is the system architecture.
We'll build our own AI model (Image Classification) to detect when someone is wearing a hard hat. You will share a hard hat with other attendees to validate your model built with Custom Vision.
- Log in to the Azure Custom Vision Service (Preview) at https://www.customvision.ai/.
- Create a new project with these recommended settings (a REST sketch of the same settings follows this list):
  - Name: something like Simulated HardHat Detector
  - Project Type: [Classification]
  - Resource Group: create a new one
  - Classification Type: [Multiclass (Single tag per image)]
  - Domain: [General(compact)]
  - Export Capabilities: Vision AI Dev Kit
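If you want to script project creation instead, the sketch below is a hedged approximation using the Custom Vision v3.0 training REST API; the endpoint and training key are placeholders from your own Custom Vision resource, and the "General (compact)" domain id must be looked up first.

```
# Hedged sketch, assuming the Custom Vision v3.0 training REST API.
ENDPOINT="https://<your-region>.api.cognitive.microsoft.com"   # placeholder
TRAINING_KEY="<your-training-key>"                             # placeholder

# List domains to find the id of "General (compact)" (required for device export)
curl -s -H "Training-Key: $TRAINING_KEY" \
  "$ENDPOINT/customvision/v3.0/training/domains"

# Create the project with the recommended settings
curl -s -X POST -H "Training-Key: $TRAINING_KEY" \
  "$ENDPOINT/customvision/v3.0/training/projects?name=Simulated%20HardHat%20Detector&domainId=<general-compact-id>&classificationType=Multiclass"
```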
Some training images have already been collected for you for the hard hat use case. Download them and upload them to your Custom Vision project:
- Download the .zip file at this location: https://1drv.ms/u/s!AkzLzaBpSgoMo9hXX4NPjd8QrfhQLA?e=M3ehCL
- Decompress it
- Upload the images to Custom Vision one tag at a time (HardHat/NoHardHat) and tag them appropriately during upload; upload all pictures with similar names (like HardHat) at the same time.
- To train the model with your new training data, go to your Custom Vision project and click on Train.
- To export it, select the Performance tab and click the Export button. Choose the Vision AI Dev Kit option, right-click the download button, and copy the link (see the screenshot).
- To confirm the link really points to a zip file, visit it in a browser and verify that it asks you to download a zip (a curl check is also shown in the sketch after this list). If so, keep this link for the next step.
- Log in to the Azure portal at https://portal.azure.com and go to your IoT Hub resource.
- Click on the IoT Edge tab and then on your camera's device name.
- Click on the AIVisionDevKitGetStartedModule module name and then click on Module Identity Twin.
- Update ModelZipUrl to point to your new URL and hit Save (the sketch after this list shows the same update from the command line).
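The link check and twin update can also be done from a terminal. A minimal sketch, assuming the Azure CLI with the azure-iot extension; the hub and device names (HardHatLabHub, my-ai-camera) are the placeholders from the earlier sketch, and MODEL_URL is the link you copied from the Export button.

```
MODEL_URL="<your-exported-model-zip-url>"   # placeholder: the link you copied

# Confirm the link serves a zip (expect a zip/octet-stream content type)
curl -sIL "$MODEL_URL" | grep -i "content-type"

# Update the desired ModelZipUrl property on the module twin
az iot hub module-twin update --hub-name HardHatLabHub --device-id my-ai-camera \
  --module-id AIVisionDevKitGetStartedModule \
  --set properties.desired.ModelZipUrl="$MODEL_URL"
```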
Within a few minutes, your device should be running your custom model! You can check the progress locally from the device; for details, see the next exercise.
- Put your hard hat on and smile at the camera!
- Verify that the camera picks it up and correctly classifies you as wearing the hard hat.
In this exercise, you trained your own Image Classification model on the Microsoft Custom Vision Service and obtained the DLC files from its export output. By using the Module Identity Twin, you replaced the model on the local device directly from cloud storage within a few minutes.
As a next step, you could reuse the same training images but build an object detection model instead of the image classification model from exercise 2. Again, use the Custom Vision Service at https://www.customvision.ai/ and its labeling tool.
The device provides a shell for direct access to local resources. Here are a few commands you may find useful; use a command prompt to get started.
Install the USB driver (QUD) first. After installing it, launch a command prompt, navigate to Desktop > Lab Room > Platform tools, and then start using the following commands.
Commands | Descriptions |
---|---|
adb devices | Check that your workstation detects the camera correctly |
adb shell | Enter the device's local shell |
ifconfig | Check the network configuration |
docker ps | List the containers running on Docker |
docker images | List the Docker images downloaded to the local device |
docker logs -f edgeAgent | Follow the log output from the edgeAgent container; Ctrl + C to exit |
docker logs -f AIVisionDevKitGetStartedModule | Follow the log output from the vision module |
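Putting these together, a typical check looks like the sketch below. Note that ifconfig and the docker commands run inside the device shell you enter with adb shell; the session assumes the QUD driver is installed and the camera is connected over USB.

```
# Sketch of a typical session, run from the Platform tools folder.
adb devices        # the camera should appear in the list of attached devices
adb shell          # drop into the device's local shell

# From inside the device shell:
ifconfig           # confirm the device has an IP address on your Wi-Fi network
docker ps          # edgeAgent and the vision module containers should be running
docker logs -f AIVisionDevKitGetStartedModule   # watch the module pick up your model; Ctrl + C to exit
```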
To learn more about this Vision AI Dev Kit, visit https://azure.github.io/Vision-AI-DevKit-Pages/docs/Get_Started/