Releases: microsoft/azure-percept-advanced-development

2112-1

15 Dec 18:15
8ed49d3

Release notes:

  • Updated azureeye base image to include a patch for the Eye SoM firmware. This fix may increase the stability of the azureeyemodule, which has been having stability issues since the firmware was upgraded in 2108-1. Not all firmware stability issues have been fixed by this patch, and we are actively working with Intel to increase the stability of the firmware.
  • Fixed a bug where horizontal gray lines could be seen in some images uploaded as part of data collection for the retraining loop.
  • Updated the azureeyemodule to use a non-root user by default. This is a security best practice, and was required by our security team.

2108-1

01 Sep 21:56
335c819

This release does the following things:

  1. Updates the firmware in the base image to the latest one from Intel (May release).
  2. Enables UVC (USB Video Class) camera as input source instead of the packaged camera.
  3. Fixes a bug where connecting a client to the H.264 stream would crash the azureeyemodule after about 7 minutes.
  4. Adds the ability to turn off the H.264 stream by setting "H264Stream": false in the module twin.
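A module twin fragment that disables the stream might look like the following (a sketch only: the `H264Stream` property name comes from the note above, while the surrounding desired-properties structure is the standard IoT Edge module twin shape and is our assumption about where the setting lives):

```json
{
  "properties": {
    "desired": {
      "H264Stream": false
    }
  }
}
```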

2106-2

04 Aug 17:42
ac8c54d

This release adds support for time-aligning the inferences of slow neural networks with the video streams. This will add latency into the video stream equal to approximately the latency of the neural network, but will result in the inferences (bounding boxes for example) being drawn over the video in the appropriate locations.

To enable this feature, add "TimeAlignRTSP": true to your module twin in the Azure portal.
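As a module twin fragment, the setting above might look like this (a sketch: the `TimeAlignRTSP` property name is from the note above, and the surrounding desired-properties structure is the standard IoT Edge module twin shape, assumed here):

```json
{
  "properties": {
    "desired": {
      "TimeAlignRTSP": true
    }
  }
}
```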

2104-2

11 May 20:04
cfcf1d3

This release adds some bug fixes:

  • Fix the IoT Hub message format to be UTF-8 encoded JSON (previously it was Base64-encoded and largely unusable)
  • Fix a bug with the Custom Vision classifier (previously, Custom Vision classifier models interpreted the wrong dimension of the output tensor as the class confidences, which led to always predicting a single class, regardless of confidence)
  • Update H.264 to use TCP instead of UDP, which is a requirement for LVA integration
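The classifier bug can be sketched in plain Python (a hypothetical illustration; the variable names and the 1×3 output tensor are ours, not the module's):

```python
# A classifier output tensor typically has shape (batch, num_classes), e.g. 1 x 3.
output = [[0.10, 0.75, 0.15]]  # one image, three class confidences

# Buggy interpretation: treat the batch dimension (length 1) as the class axis.
# The argmax over a length-1 axis is always index 0, regardless of confidence.
buggy_scores = [row[0] for row in output]
buggy_prediction = max(range(len(buggy_scores)), key=buggy_scores.__getitem__)

# Fixed interpretation: take the argmax over the class axis of the batch row,
# which correctly selects the highest-confidence class.
fixed_prediction = max(range(len(output[0])), key=output[0].__getitem__)

print(buggy_prediction, fixed_prediction)  # -> 0 1
```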