Validating the Performance of Edge AI Workloads on Intel Processors with Edge Microvisor Toolkit #497
stevenhoenisch
started this conversation in
General
Replies: 1 comment
@stevenhoenisch that's a great starting point for anyone looking into enabling low-latency, cost-effective, and scalable AI deployments that would otherwise be infeasible with cloud-only pipelines. It would be useful to understand where developers might see the biggest performance gains when selecting hardware for similar edge AI workloads.
Deploying edge AI applications can quickly rack up cost, complexity, overhead, and, perhaps worst of all, latency. For engineers working to deploy edge AI solutions, a critical question quickly materializes: How can you validate the performance of the silicon underlying an edge AI workload and identify a solution that cost-effectively meets the workload's performance requirements?
Solutions from Intel answer that critical question with a combination of edge AI suites, libraries, and applications that can run on Edge Microvisor Toolkit to demonstrate the performance and capabilities of Intel processors and partner technology for artificial intelligence. Edge Microvisor Toolkit lets you rapidly deploy and easily evaluate edge AI solutions on Intel platforms. A Linux kernel maintained by Intel with the latest optimizations and patches streamlines integration for operating system vendors and other technology partners.
The toolkit's immutable and mutable versions -- including a standalone node prepared for partner evaluation and a real-time developer node built on the PREEMPT_RT Linux kernel for predictable performance -- result in a reference Linux operating system primed to demonstrate how Intel processors can cost-effectively minimize latency and maximize security for edge AI workloads.
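If you want to confirm that the image you booted is in fact the real-time one, a quick check of the running kernel helps. The following is a minimal sketch, assuming the kernel exposes the usual PREEMPT_RT markers; exact paths and version strings can vary by build.

```python
# rt_check.py -- a minimal sketch to confirm you booted a real-time kernel.
# Assumes the standard PREEMPT_RT markers; paths may differ on your image.
import pathlib
import platform

def is_preempt_rt() -> bool:
    """Return True if the running kernel appears to be a PREEMPT_RT kernel."""
    rt_flag = pathlib.Path("/sys/kernel/realtime")
    if rt_flag.exists():
        return rt_flag.read_text().strip() == "1"
    # Fall back to the kernel version string, which RT kernels usually tag.
    return "PREEMPT_RT" in platform.uname().version

if __name__ == "__main__":
    print("PREEMPT_RT kernel detected" if is_preempt_rt() else "Standard (non-RT) kernel")
```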
In addition, the recent launch of Open Edge Platform 2025.1 makes Intel® Arc™ B-Series Graphics and other components discoverable to containerized applications and VMs with pass-through mode to deliver processing power to distributed applications at the edge -- processing power that can be fine-tuned to minimize latency.
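One quick way to verify that a passed-through GPU is actually visible to a containerized workload is to look for DRM render nodes under /dev/dri. The sketch below assumes the host has exposed those device nodes to the container; node names such as renderD128 vary by system.

```python
# gpu_visibility.py -- a quick sketch to confirm a passed-through GPU is visible
# inside a container. Device node names are assumptions; adjust for your setup.
import os

DRI_DIR = "/dev/dri"

def list_render_nodes() -> list[str]:
    """Return the DRM render nodes (e.g. renderD128) visible from here."""
    if not os.path.isdir(DRI_DIR):
        return []
    return sorted(n for n in os.listdir(DRI_DIR) if n.startswith("renderD"))

if __name__ == "__main__":
    nodes = list_render_nodes()
    if nodes:
        print("GPU render nodes visible:", ", ".join(nodes))
    else:
        print("No /dev/dri render nodes found -- check your pass-through configuration.")
```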
Sample Edge AI Applications for Benchmarking
Running easy-to-benchmark sample applications for smart traffic intersections and wind turbine predictive maintenance on Edge Microvisor Toolkit can help you to quickly prove out the power and performance of running edge AI workloads on Intel processors.
Pre-configured access to GPU acceleration in Edge Microvisor Toolkit speeds everything up, and a range of Intel processor families -- such as Intel® Atom® X Series processors, Intel® Core™ processors, and Intel® Xeon® processors -- deliver strong performance for edge AI workloads such as wind turbine anomaly detection and smart traffic intersection analytics.
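If you want a quick, back-of-the-envelope latency number on your own hardware before diving into the full sample applications, a few lines of the OpenVINO Runtime Python API can compare the devices it can see. This is a rough sketch rather than part of the sample applications: the model path is a placeholder for any OpenVINO IR model with static input shapes, and the iteration count is arbitrary.

```python
# latency_probe.py -- a rough latency probe using the OpenVINO Runtime Python API.
# "model.xml" is a placeholder; substitute any OpenVINO IR model with static shapes.
import time
import numpy as np
import openvino as ov

MODEL_PATH = "model.xml"   # hypothetical path -- substitute your own IR model
ITERATIONS = 100

core = ov.Core()
print("Devices visible to OpenVINO:", core.available_devices)  # e.g. ['CPU', 'GPU']

model = core.read_model(MODEL_PATH)

for device in core.available_devices:
    compiled = core.compile_model(model, device)
    request = compiled.create_infer_request()
    # Build random inputs matching the model's (static) input shapes.
    inputs = {
        inp.get_any_name(): np.random.rand(*[int(d) for d in inp.shape]).astype(np.float32)
        for inp in compiled.inputs
    }
    request.infer(inputs)                      # warm-up run
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        request.infer(inputs)
    avg_ms = (time.perf_counter() - start) / ITERATIONS * 1000
    print(f"{device}: {avg_ms:.2f} ms average latency over {ITERATIONS} runs")
```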
The Wind Turbine Anomaly Detection sample application, part of the Manufacturing AI Suite, demonstrates how predictive maintenance works: it applies anomaly detection to time series data to monitor patterns and flag changes. The application uses a customizable pipeline to ingest, store, process, and visualize time series data, and integrations with MQTT and OPC UA make data ingestion straightforward. It is complemented by a Time Series Analytics microservice that tracks changes over time for predictive maintenance and defect detection.
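To get a feel for the ingestion side, you can publish simulated turbine telemetry to an MQTT broker and let a pipeline pick it up. The sketch below is illustrative only: the broker address, topic name, and field names are assumptions, and the sample application's documentation defines the actual ingestion endpoints and payload schema. It assumes paho-mqtt 2.x.

```python
# publish_telemetry.py -- a minimal sketch of feeding simulated wind turbine
# telemetry into an MQTT broker. Broker address, topic, and fields are assumptions;
# see the sample application's docs for its real ingestion topics and schema.
import json
import random
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "localhost"          # assumed broker address
BROKER_PORT = 1883
TOPIC = "windturbine/telemetry"    # hypothetical topic name

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt 2.x constructor
client.connect(BROKER_HOST, BROKER_PORT)
client.loop_start()                 # handle network traffic in the background

try:
    while True:
        sample = {
            "timestamp": time.time(),
            "wind_speed_mps": round(random.uniform(3.0, 25.0), 2),
            "grid_power_kw": round(random.uniform(0.0, 2000.0), 1),
        }
        client.publish(TOPIC, json.dumps(sample))
        time.sleep(1)               # one reading per second
except KeyboardInterrupt:
    client.loop_stop()
    client.disconnect()
```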
The Smart Traffic Intersection sample application, part of the Metro AI Suite, combines analytics from multiple traffic cameras to provide a unified intersection view, enabling advanced use cases such as object tracking across multiple viewpoints, motion vector analysis (e.g., speed and heading), and understanding object interactions in three-dimensional space.
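To make the motion vector idea concrete, the small sketch below derives speed and heading from two successive positions of a tracked object in a shared ground-plane coordinate system. The coordinates, frame interval, and function name are hypothetical; the sample application computes these values inside its own pipeline.

```python
# motion_vector.py -- a small sketch of the motion vector analysis described above:
# deriving speed and heading from two successive tracked positions (in meters)
# on a shared ground plane. All values below are made-up illustrative numbers.
import math

def speed_and_heading(p1: tuple[float, float], p2: tuple[float, float],
                      dt: float) -> tuple[float, float]:
    """Return (speed in m/s, heading in degrees clockwise from north)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    speed = math.hypot(dx, dy) / dt
    heading = math.degrees(math.atan2(dx, dy)) % 360  # 0 deg = north (+y axis)
    return speed, heading

if __name__ == "__main__":
    # A vehicle tracked across two frames captured 0.5 s apart (hypothetical).
    speed, heading = speed_and_heading((12.0, 48.5), (14.1, 52.3), dt=0.5)
    print(f"speed: {speed:.1f} m/s, heading: {heading:.0f} deg")
```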
Give It a Whirl
Here are some resources to help you give these new sample applications a whirl on Edge Microvisor Toolkit so you can discover the power and performance of Intel processors for edge AI workloads:
Read the get started guide for the Smart Traffic Intersection sample application.
Read the get started guide for deploying the Wind Turbine Anomaly Detection sample application with Docker Compose. Alternatively, you can deploy it on a Kubernetes cluster by using Helm.
Download the ISO directly or download the latest weekly release of Edge Microvisor Toolkit and view its documentation website.
After you give a sample application a try on Edge Microvisor Toolkit, let us know your thoughts on the discussions page.