20 changes: 9 additions & 11 deletions README.md
@@ -4,28 +4,22 @@

Edge Microvisor Toolkit is a reference Linux operating system that demonstrates the full
capabilities of Intel® platforms for Edge AI workloads. Built on Azure Linux, it features an
[Intel®-maintained Linux Kernel](./docs/developer-guide/emt-architecture-overview.md#next-kernel),
incorporating all the latest patches that have not yet been
upstreamed. These patches optimize performance and enhance other capabilities for Intel®
silicon, streamlining integration for operating system vendors and technology partners.

Edge Microvisor Toolkit is [published in several versions](./docs/developer-guide/get-started/emt-versions.md),
both immutable and mutable.
It may be used to quickly deploy, validate, and benchmark edge AI workloads, including those
requiring real-time processing. You can also use the toolkit's flexible build infrastructure
to create custom images from a large set of pre-provisioned packages.

Here are the published versions:

* [Edge Microvisor Toolkit Standalone Node (immutable)](https://github.com/open-edge-platform/edge-microvisor-toolkit-standalone-node)
* [Edge Microvisor Toolkit Developer Node with or without real-time extensions (mutable)](./docs/developer-guide/emt-architecture-overview.md#developer-node-mutable-iso-image)
* [Edge Microvisor Toolkit (mutable or immutable) for use with Edge Manageability Framework](./docs/developer-guide/emt-deployment-edge-orchestrator.md)
* [Edge Microvisor Bootkit](./docs/developer-guide/emt-bootkit.md)

Edge Microvisor Toolkit has undergone extensive validation across the Intel® Xeon®,
Intel® Core Ultra™, Intel Core™, and Intel® Atom® processor families. It provides robust
support for integrated NPU as well as a
[selection of discrete GPU cards](./docs/developer-guide/emt-system-requirements.md#hardware-requirements).


You can either build Edge Microvisor Toolkit by following step-by-step instructions or
download it directly. Both the build system and Edge Microvisor Toolkit are available as open
source.
@@ -44,14 +38,18 @@ If you're interested in the most up-to-date versions, check out the
and
[CVE](https://github.com/open-edge-platform/edge-microvisor-toolkit/discussions?discussions_q=is%3Aopen+cve+) releases.


**Demos on YouTube**

* [Standalone Edge Microvisor Toolkit (EMT-S) integration with Edge Microvisor Bootkit](https://www.youtube.com/watch?v=rmgmWYi6OpE):
USB Device Preparation, Provisioning Process, System Readiness, and Final Boot with the cluster starting successfully.
* [Edge Microvisor Toolkit Standalone Node 3.0](https://www.youtube.com/watch?v=j_4EX_wggSI):
a brief walkthrough of Edge Microvisor Toolkit Standalone Node for the 3.0 release, covering various use cases.

You can also try out the
[OS Image Composer](http://github.com/open-edge-platform/os-image-composer) -
a *new* project in the Open Edge Platform family that allows you to compose
custom OS images from popular distributions using pre-built artifacts.

## Get Help or Contribute

If you want to participate in the GitHub community for Edge Microvisor Toolkit, you can
121 changes: 73 additions & 48 deletions docs/developer-guide/emt-architecture-overview.md
@@ -7,7 +7,8 @@ architectural details of the OS itself.

## Edge Microvisor Toolkit

Edge Microvisor Toolkit is produced and maintained in
[several editions](./get-started/emt-versions.md), in both immutable and
mutable images. It enables you to quickly deploy and validate workloads on Intel®
platforms to demonstrate the full capabilities of Intel silicon for various scenarios. There are several options for deploying the toolkit:

@@ -208,62 +209,63 @@ real-time performance.
To configure kernel command line arguments, add them in the `"ExtraCommandLine"` parameter
inside the imageconfig file, as shown in [edge-image](https://github.com/open-edge-platform/edge-microvisor-toolkit/blob/e22a8f4e72d0edc652f1aacd514d0b5bf5de8b80/toolkit/imageconfigs/edge-image.json#L107).
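For illustration only, the sketch below shows what such a fragment might contain. The field name follows the linked `edge-image.json`; the kernel parameter values are placeholders, not recommendations:

```bash
# Illustrative only: a JSON fragment mirroring the "ExtraCommandLine" field
# of an imageconfig file. The kernel parameters shown are placeholder values.
extra_cmdline_json='{
  "KernelCommandLine": {
    "ExtraCommandLine": "idle=poll isolcpus=1-3 nohz_full=1-3 rcu_nocbs=1-3"
  }
}'
echo "$extra_cmdline_json"
```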

##### **idle=poll**
Forces the CPU to actively poll for work when idle, rather than entering low-power idle
states. In RT systems, this can reduce latency by ensuring the CPU is always ready to
handle high-priority tasks immediately, at the cost of higher power consumption.
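On a running system you can check whether a parameter such as `idle=poll` actually took effect by inspecting `/proc/cmdline`. A small sketch; the helper function below is ours, not part of the toolkit:

```bash
# has_param CMDLINE PARAM: true if PARAM appears as a whole word in CMDLINE.
has_param() {
  case " $1 " in
    *" $2 "*) return 0 ;;
    *) return 1 ;;
  esac
}

# On a live Linux system, the booted kernel command line is in /proc/cmdline.
if [ -r /proc/cmdline ] && has_param "$(cat /proc/cmdline)" "idle=poll"; then
  echo "idle=poll is active"
fi
```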

> **Note:**
> It is currently not possible to directly modify the kernel command-line parameters once
> a build has been generated, as it is packaged inside the signed UKI. Modifying the kernel
> command line would invalidate the signature. The mechanism to enable customization of the
> kernel command line will be added in future releases.

##### **isolcpus=\<list>**

Isolates specific CPU cores from the general scheduler, preventing non-RT tasks from
being scheduled on those cores. This ensures that designated cores are available solely
for RT tasks. This way, for example, the workloads can be shifted between efficient
and performance cores. The parameter takes lists as values:

- isolcpus=\<cpu core number>,...,\<cpu core number>

  ```bash
  isolcpus=1,2,3
  ```

- isolcpus=\<cpu core number>-\<cpu core number>

  ```bash
  isolcpus=1-3
  ```

- isolcpus=\<cpu core number>,...,\<cpu core number>-\<cpu core number>

  ```bash
  isolcpus=1,4-5
  ```
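The list syntax above can be sanity-checked with a short script. The sketch below is our own helper (not part of the toolkit); it expands an `isolcpus`-style list into the individual CPU numbers it covers, mirroring how the kernel interprets it:

```bash
# expand_cpulist LIST: expand a kernel-style CPU list such as "1,4-5"
# into the individual CPU numbers it covers.
expand_cpulist() {
  echo "$1" | tr ',' '\n' | while IFS=- read -r lo hi; do
    seq "$lo" "${hi:-$lo}"
  done | tr '\n' ' ' | sed 's/ $//'
}

expand_cpulist "1,4-5"   # prints: 1 4 5
```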

##### **nohz_full=\<list>**
Enables full tickless (nohz) mode on specified cores, reducing periodic timer interrupts
that could introduce latency on cores dedicated to RT workloads.

##### **rcu_nocbs=\<list>**
Offloads RCU (Read-Copy-Update) callbacks from the specified CPUs, reducing interference
on cores that need to be as responsive as possible.

##### **threadirqs**
Forces interrupts to be handled by dedicated threads rather than in interrupt context,
which can improve the predictability and granularity of scheduling RT tasks.

##### **nosmt**
Disables simultaneous multi-threading (hyperthreading). This can prevent contention between
sibling threads that share the same physical core, leading to more predictable performance.

##### **numa_balancing=0**
Disables automatic NUMA balancing. While NUMA awareness is important, automatic migration
of processes can introduce latency. Disabling it helps maintain predictable memory locality.

##### **intel_idle.max_cstate=0**
Limits deep idle states on Intel® CPUs, reducing wake-up latencies that can adversely
affect RT performance.
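Putting the parameters above together, an RT-oriented command line might be composed as follows. This is a sketch only: the core list `2-5` is an example value, and the right parameter set depends on your workload:

```bash
# Sketch: compose an RT-oriented "ExtraCommandLine" value from the
# parameters described in this section. RT_CPUS is an example value.
RT_CPUS="2-5"
EXTRA_CMDLINE="idle=poll isolcpus=${RT_CPUS} nohz_full=${RT_CPUS} rcu_nocbs=${RT_CPUS} threadirqs nosmt numa_balancing=0 intel_idle.max_cstate=0"
echo "$EXTRA_CMDLINE"
```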

@@ -279,44 +281,67 @@ the used image configuration. The artifacts come with associated `sha256` files.
- Image in VHD format.
- Signing key.

## "Next" Kernel

We are excited to announce the EMT "Next" `v6.19` kernel, which will converge
to the next LTS kernel for EMT by the 2026.1 (mid-2026) release. The stable `v6.12` kernel
continues to be maintained and is recommended for most users, unless you have newer Intel
platforms that require an earlier move to the "Next" kernel.

| Intel Platform | Recommended EMT | Support           |
| -------------- | --------------- | ----------------- |
| ARL-U/H        | EMT Stable      | Supported         |
| ARL-S          | EMT Stable      | Supported         |
| ASL            | EMT Stable      | Supported         |
| TWL            | EMT Stable      | Supported         |
| MTL-U/H        | EMT Stable      | Supported         |
| MTL-PS         | EMT Stable      | Supported         |
| BTL-S hybrid   | EMT Stable      | Supported         |
| BTL-S 12P      | EMT Stable      | Supported         |
| PTL            | EMT Next        | Preview           |
| WCL            | EMT Next        | Not yet supported |
| NVL            | EMT Next        | Not yet supported |

## K3s Extensions

Deploying Edge Microvisor Toolkit with Lightweight Kubernetes (K3s)
requires additional extensions, which are downloaded as Docker images. Below is
a list of components essential for scaled deployment of the toolkit.

### [Multus CNI](https://github.com/k8snetworkplumbingwg/multus-cni)

A Container Network Interface (CNI) plugin for Kubernetes that enables you to
attach multiple network interfaces to Kubernetes pods, which usually have only
one network interface.

### [Intel Device Plugins for Kubernetes](https://github.com/intel/intel-device-plugins-for-kubernetes)

A collection of Kubernetes device plugins that advertise Intel hardware
resources, such as GPUs, to the cluster so that workloads can request
and use them.
[GPU Plugin](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/main/cmd/gpu_plugin/README.md)

The Intel GPU plugin provides access to discrete and integrated Intel GPU
devices supported by the host kernel. It enables offloading compute operations
of Kubernetes workloads to GPU devices, which can benefit use cases such as
media transcoding and analytics, cloud gaming, and AI training and inference.

### [Calico](https://github.com/projectcalico/calico)

- [CNI Plugin](https://github.com/projectcalico/calico/tree/master/cni-plugin)
\- as [docker image](https://hub.docker.com/r/calico/cni).

A plugin that enables you to use Calico for deployments based on Container
Network Interface (CNI).

- [Node](https://github.com/projectcalico/calico/tree/master/node)
\- as [docker image](https://hub.docker.com/r/calico/node/).

A CNI plugin that enables you to create a Layer 3 network for Kubernetes
pods and assign a unique IP address for each.

- [Kube controllers](https://github.com/projectcalico/calico/tree/master/kube-controllers)
\- as [docker image](https://hub.docker.com/r/calico/kube-controllers).

A set of controllers that monitor the resources in the Kubernetes API (network,
policies, nodes) and adjust Calico's CNI configuration.
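As a quick sanity check for a scaled deployment, the sketch below reports which of the extension images listed above are missing from `crictl images` output. The helper and the image-name substrings are our own illustrative choices; check your deployment manifests for the exact image references:

```bash
# missing_images IMAGES_OUTPUT: given the output of `crictl images`, print
# the required extension images (illustrative name substrings) not found.
required="multus calico/cni calico/node calico/kube-controllers intel-gpu-plugin"

missing_images() {
  for img in $required; do
    echo "$1" | grep -q "$img" || echo "$img"
  done
}

# On a live K3s node you might run (assuming K3s's bundled crictl):
#   missing_images "$(k3s crictl images)"
```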

## Packaging

Expand Down
6 changes: 5 additions & 1 deletion docs/developer-guide/emt-get-started.md
@@ -10,7 +10,9 @@ technology partners.

## Usage Scenarios

Choose the [version of Edge Microvisor Toolkit](./get-started/emt-versions.md) that
best suits your workflow. You can validate Edge AI workloads on Intel silicon by
deploying the toolkit as a
[standalone edge node](./get-started/emt-build-and-deploy.md) or with Edge Manageability
Framework, a complete integrated system for edge devices with full lifecycle management,
including remote deployment and management of applications orchestrated by Kubernetes.
@@ -32,6 +34,8 @@ including remote deployment and management of applications orchestrated by Kubernetes.

<!--hide_directive
:::{toctree}

./get-started/emt-versions.md
./get-started/emt-building-howto.md
./get-started/emt-build-and-deploy.md
./get-started/emt-installation-howto.md
4 changes: 2 additions & 2 deletions docs/developer-guide/emt-system-requirements.md
@@ -20,15 +20,15 @@ Edge Microvisor Toolkit is designed to support all Intel® platforms with the latest
Intel® kernel to provide all available features for applications
and workloads. It has been validated on the following platforms:

### CPU

| Atom | Core™ | Xeon® |
| ----------------------| ----------------------------- | ----------------------- |
| Intel® Atom® X Series | 12th Gen Intel® Core™ | 5th Gen Intel® Xeon® SP |
| | 13th Gen Intel® Core™ | 4th Gen Intel® Xeon® SP |
| | Intel® Core™ Ultra (Series 1) | 3rd Gen Intel® Xeon® SP |

### Discrete GPU

| Intel® | NVIDIA® |
|-----------------------|-------------------------------|
36 changes: 36 additions & 0 deletions docs/developer-guide/get-started/emt-versions.md
@@ -0,0 +1,36 @@
# Edge Microvisor Toolkit Versions

Edge Microvisor Toolkit is available in several pre-configured versions that
serve different purposes. Some are published as binaries, others are available
from a custom build. This document will help you select the version that best
suits your needs. To do so, check out:

1. How to select the right EMT.

The diagram below will help you select the toolkit version that is right
for your workflow.

![emt-version-deployment](../assets/emt-version-deployment.drawio.svg)

2. How EMT differs between versions.

| Version | Real Time | Stable Kernel | [Next Kernel](../emt-architecture-overview.md#next-kernel) |
|--------|---------|-------------|------|
| [**Standalone (Immutable)**](https://github.com/open-edge-platform/edge-microvisor-toolkit-standalone-node) | Available for opt-in | ✓ | ✓ |
| [**Developer Node (Mutable)**](../emt-architecture-overview.md#developer-node-mutable-iso-image) | Optional | ✓ | ✓ |
| [**EMT for EMF**](../emt-deployment-edge-orchestrator.md) | Available for opt-in | ✓ | ✓ |
| [**Bootkit**](../emt-bootkit.md) | – | ✓ | – |

3. How usage scenarios affect EMT setup.

| Scenario | Description | Primary outcomes | Technology areas |
|---|---|---|---|
| Real-time & deterministic workloads | Run latency-sensitive workloads with guaranteed bounded jitter and repeatable execution timelines across one or more hosts, maintainable under steady-state and failure-recovery conditions | <br> - Bounded end-to-end latency & jitter <br> - Repeatable scheduling windows under load <br> - Cross-host timing consistency for distributed stages <br> - Fast, predictable recovery without violating SLOs | <br> - [PREEMPT_RT kernel](../emt-architecture-overview.md#preempt-rt-kernel) <br> - [Resource Director Technologies](../emt-architecture-overview.md#resource-director-technology) <br> - [Intel GPU RT](../emt-architecture-overview.md#intel-device-plugins-for-kubernetes) <br> - [CPU & Scheduler Isolation](../emt-architecture-overview.md#isolcpuslist) <br> - [Memory Determinism](../emt-architecture-overview.md#preempt-rt-kernel) <br> - Time & Clocks <br> - [Network Determinism (TSN)](../emt-architecture-overview.md#time-sensitive-networking-support) |
| VM-based workloads on Kubernetes with shared GPUs | Run multiple virtual machines on Kubernetes that concurrently share one or more physical GPUs, with predictable fairness, isolation, and policy-driven placement, using a KubeVirt stack extended for GPU sharing | <br> - Stable, repeatable GPU performance per VM under contention <br> - Hard/soft sharing policies (fair-share, priority tiers, or quotas) <br> - Safe isolation between tenants/VMs (memory, contexts, resets) <br> - Schedulable resources with clear admission signals (no surprise fails) <br> - Operational guardrails: health checks, graceful drain/eviction, rollback | <br> - [SR-IOV](./deployment/emt-vm-host.md) <br> - [Intel GPU](../emt-system-requirements.md#discrete-gpu) <br> - [KubeVirt](https://github.com/open-edge-platform/edge-microvisor-toolkit-standalone-node/blob/main/standalone-node/docs/user-guide/desktop-virtualization-image-guide.md) <br> - [Host virtualization](./deployment/emt-vm-host.md) <br> - [Intel GPU device plugin](../emt-architecture-overview.md#intel-device-plugins-for-kubernetes) |
| AI & Vision workloads | Enable AI inference and computer-vision workloads on edge nodes using Intel GPU and NPU acceleration, exposing unified hardware-assisted pipelines through standard APIs and user-space libraries | <br> - Efficient execution of deep-learning and vision inference on-device without cloud dependency <br> - Unified GPU/NPU compute abstraction for developers (OpenVINO backend, media pipelines) <br> - Deterministic frame-rate and latency for multi-stream analytics workloads (e.g., camera ingest) <br> - Seamless integration with containers or pods, including dynamic device discovery and sharing <br> - Stable ABI/API interface across [OS updates](../architecture/emt-updates.md) and driver versions | <br> - [Edge AI packages](https://eci.intel.com/docs/3.3/packages_list.html) <br> - [OpenVINO](https://docs.openvino.ai) <br> - [Intel GPU and NPU drivers](https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes.html) <br> - [Intel GPU device plugin](../emt-architecture-overview.md#intel-device-plugins-for-kubernetes) |

4. How to build your own version of EMT.

You can create your own custom version of Edge Microvisor Toolkit by following
[the guide](./emt-building-howto.md). You can also learn how to
[build your own solution and deploy it on the edge](./emt-build-and-deploy.md).
1 change: 1 addition & 0 deletions docs/developer-guide/index.md
Expand Up @@ -53,6 +53,7 @@ document.

<!--hide_directive
:::{toctree}
:hidden:

emt-get-started
emt-architecture-overview