diff --git a/README.md b/README.md
index ecb7f21523..700b11fa31 100644
--- a/README.md
+++ b/README.md
@@ -4,28 +4,22 @@
Edge Microvisor Toolkit is a reference Linux operating system that demonstrates the full capabilities of Intel® platforms for Edge AI workloads. Built on Azure Linux, it features an
-Intel®-maintained Linux Kernel, incorporating all the latest patches that have not yet been
+[Intel®-maintained Linux Kernel](./docs/developer-guide/emt-architecture-overview.md#next-kernel),
+incorporating all the latest patches that have not yet been
upstreamed. These patches optimize performance and enhance other capabilities for Intel® silicon, streamlining integration for operating system vendors and technology partners.
-Edge Microvisor Toolkit is published in several versions, both immutable and mutable.
+Edge Microvisor Toolkit is [published in several versions](./docs/developer-guide/get-started/emt-versions.md),
+both immutable and mutable.
It may be used to quickly deploy, validate, and benchmark edge AI workloads, including those requiring real-time processing. You can also use the toolkit's flexible build infrastructure to create custom images from a large set of pre-provisioned packages.
-Here are the published versions:
-
-* [Edge Microvisor Toolkit Standalone Node (immutable)](https://github.com/open-edge-platform/edge-microvisor-toolkit-standalone-node)
-* [Edge Microvisor Toolkit Developer Node with or without real-time extensions (mutable)](./docs/developer-guide/emt-architecture-overview.md#developer-node-mutable-iso-image)
-* [Edge Microvisor Toolkit (mutuable or immutable) for use with Edge Manageability Framework](./docs/developer-guide/emt-deployment-edge-orchestrator.md)
-* [Edge Microvisor Bootkit](./docs/developer-guide/emt-bootkit.md)
-
Edge Microvisor Toolkit has undergone extensive validation across the Intel® Xeon®, Intel® Core Ultra™, Intel Core™, and Intel® Atom® processor families. It provides robust support for integrated NPU as well as a [selection of discrete GPU cards](./docs/developer-guide/emt-system-requirements.md#hardware-requirements).
-
You can either build Edge Microvisor Toolkit by following step-by-step instructions or download it directly. Both the build system and Edge Microvisor Toolkit are available as open source.
@@ -44,7 +38,6 @@
If you're interested in most up-to-date versions, check out the and [CVE](https://github.com/open-edge-platform/edge-microvisor-toolkit/discussions?discussions_q=is%3Aopen+cve+) releases.
-
**Demos on YouTube**
* [Standalone Edge Microvisor Toolkit (EMT-S) integration with Edge Microvisor Bootkit](https://www.youtube.com/watch?v=rmgmWYi6OpE):
@@ -52,6 +45,11 @@
and
* [Edge Microvisor Toolkit Standalone Node 3.0](https://www.youtube.com/watch?v=j_4EX_wggSI): a brief walkthrough of Edge Microvisor Toolkit Standalone Node for the 3.0 release, covering various use cases.
+You can also try out the
+[OS Image Composer](http://github.com/open-edge-platform/os-image-composer) -
+a *new* project in the Open Edge platform family that allows you to compose
+custom OS images from popular distributions using pre-built artifacts.
+
## Get Help or Contribute
If you want to participate in the GitHub community for Edge Microvisor Toolkit, you can
diff --git a/docs/developer-guide/assets/emt-version-deployment.drawio.svg b/docs/developer-guide/assets/emt-version-deployment.drawio.svg
new file mode 100644
index 0000000000..cd70846273
--- /dev/null
+++ b/docs/developer-guide/assets/emt-version-deployment.drawio.svg
@@ -0,0 +1,4 @@
+[SVG diagram (text labels only): a decision tree for choosing an Edge Microvisor Toolkit version. "Provisioning only?" leads to EMT Bootkit, a minimal iPXE provisioning image; "Need full developer toolchain?" leads to EMT-D Developer, with the full toolchain and optional RT; otherwise a Real Time or Non Real Time solution is deployed either standalone on EMT Standalone Node (minimal runtime, immutable rootfs) or through the Edge Manageability Framework for EMF-integrated orchestrated deployments.]
\ No newline at end of file
diff --git a/docs/developer-guide/emt-architecture-overview.md b/docs/developer-guide/emt-architecture-overview.md
index 011da56d92..3d21ae4424 100644
--- a/docs/developer-guide/emt-architecture-overview.md
+++ b/docs/developer-guide/emt-architecture-overview.md
@@ -7,7 +7,8 @@
architectural details of the OS itself.
## Edge Microvisor Toolkit
-Edge Microvisor Toolkit is produced and maintained in several editions, in both immutable and
+Edge Microvisor Toolkit is produced and maintained in
+[several editions](./get-started/emt-versions.md), in both immutable and
mutable images. It enables you to quickly deploy and validate workloads on Intel® platforms in order to demonstrate the full capabilities of Intel silicon for various scenarios. There are several options for deploying the toolkit:
@@ -208,62 +209,63 @@
real-time performance. To configure kernel command line arguments, add them in the `"ExtraCommandLine"` parameter inside the imageconfig file, as shown in [edge-image](https://github.com/open-edge-platform/edge-microvisor-toolkit/blob/e22a8f4e72d0edc652f1aacd514d0b5bf5de8b80/toolkit/imageconfigs/edge-image.json#L107).
-- **idle=poll**
+#### **idle=poll**
Forces the CPU to actively poll for work when idle, rather than entering low-power idle states. In RT systems, this can reduce latency by ensuring the CPU is always ready to handle high-priority tasks immediately, at the cost of higher power consumption.
> **Note:**
- It is currently not possible to directly modify the kernel command-line parameters once
- a build has been generated, as it is packaged inside the signed UKI. Modifying the kernel
- command line would invalidate the signature. The mechanism to enable customization of the
- kernel command line will be added in future releases.
+> It is currently not possible to directly modify the kernel command-line parameters once
+> a build has been generated, as it is packaged inside the signed UKI. Modifying the kernel
+> command line would invalidate the signature. The mechanism to enable customization of the
+> kernel command line will be added in future releases.
+
+#### **isolcpus=\<cpu_list\>**
-- **isolcpus=**
Isolates specific CPU cores from the general scheduler, preventing non-RT tasks from being scheduled on those cores. This ensures that designated cores are available solely for RT tasks. This way, for example, the workloads can be shifted between efficient and performance cores. The parameter takes lists as values:
- - isolcpus=\<cpu_number\>,...,\<cpu_number\>
+- isolcpus=\<cpu_number\>,...,\<cpu_number\>
- ```bash
- isolcpus=1,2,3
- ```
+ ```bash
+ isolcpus=1,2,3
+ ```
- - isolcpus=\<cpu_number\>-\<cpu_number\>
+- isolcpus=\<cpu_number\>-\<cpu_number\>
- ```bash
- isolcpus=1-3
- ```
+ ```bash
+ isolcpus=1-3
+ ```
- - isolcpus=\<cpu_number\>,...,\<cpu_number\>-\<cpu_number\>
+- isolcpus=\<cpu_number\>,...,\<cpu_number\>-\<cpu_number\>
- ```bash
- isolcpus=1,4-5
- ```
+ ```bash
+ isolcpus=1,4-5
+ ```
-- **nohz_full=**
+#### **nohz_full=\<cpu_list\>**
Enables full tickless (nohz) mode on specified cores, reducing periodic timer interrupts that could introduce latency on cores dedicated to RT workloads.
-- **rcu_nocbs=**
+#### **rcu_nocbs=\<cpu_list\>**
Offloads RCU (Read-Copy-Update) callbacks from the specified CPUs, reducing interference on cores that need to be as responsive as possible.
-- **threadirqs**
+#### **threadirqs**
Forces interrupts to be handled by dedicated threads rather than in interrupt context, which can improve the predictability and granularity of scheduling RT tasks.
-- **nosmt**
+#### **nosmt**
Disables simultaneous multi-threading (hyperthreading).
This can prevent contention between sibling threads that share the same physical core, leading to more predictable performance.
-- **numa_balancing=0**
+#### **numa_balancing=0**
Disables automatic NUMA balancing. While NUMA awareness is important, automatic migration of processes can introduce latency. Disabling it helps maintain predictable memory locality.
-- **intel_idle.max_cstate=0**
+#### **intel_idle.max_cstate=0**
Limits deep idle states on Intel® CPUs, reducing wake-up latencies that can adversely affect RT performance.
@@ -279,44 +281,67 @@
the used image configuration. The artifacts come with associated `sha256` files.
- Image in VHD format.
- Signing key.
+## "Next" Kernel
+
+We are excited to announce the EMT "Next" `v6.19` kernel, which will converge
+to the next LTS kernel for EMT by the 2026.1 (mid-2026) release. The stable `v6.12` kernel
+continues to be maintained and recommended for most users, unless you have newer Intel
+platforms that require an earlier move to the "Next" kernel.
+
+| Intel Platform | Recommended EMT | Support |
| -------------- | ------------------| ------- |
| ARL-U/H | EMT Stable | Supported|
| ARL-S | EMT Stable | Supported|
| ASL | EMT Stable | Supported|
| TWL | EMT Stable | Supported|
| MTL-U/H | EMT Stable | Supported|
| MTL-PS | EMT Stable | Supported|
| BTL-S hybrid | EMT Stable | Supported|
| BTL-S 12P | EMT Stable | Supported|
| PTL | EMT Next | Preview |
| WCL | EMT Next | Not yet supported|
| NVL | EMT Next | Not yet supported|
+
## K3s Extensions
Deploying of Edge Microvisor Toolkit with Lightweight Kubernetes (K3s) requires additional extensions which are downloaded as docker images. Below is a list of components essential for scaled deployment of the toolkit.
-- [Multus CNI](https://github.com/k8snetworkplumbingwg/multus-cni)
+### [Multus CNI](https://github.com/k8snetworkplumbingwg/multus-cni)
+
+A Container Network Interface (CNI) plugin for Kubernetes that enables you to
+attach multiple network interfaces to Kubernetes pods, which usually have only
+one network interface.
+
+### [Intel Device Plugins for Kubernetes](https://github.com/intel/intel-device-plugins-for-kubernetes)
- A Container Network Interface (CNI) plugin for Kubernetes that enables you to
- attach multiple network interfaces to Kubernetes pods, which usually have only
- one network interface.
+[GPU Plugin](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/main/cmd/gpu_plugin/README.md)
-- [Intel Device Plugins for Kubernetes](https://github.com/intel/intel-device-plugins-for-kubernetes)
- - [GPU Plugin](https://github.com/intel/intel-device-plugins-for-kubernetes/blob/main/cmd/gpu_plugin/README.md)
+Intel GPU plugin provides access to discrete and integrated Intel GPU devices
+supported by the host kernel. It enables offloading compute operations of
+Kubernetes workload to GPU devices. It may be beneficial in such use cases as
+media transcoding and analytics, cloud gaming, AI training and inference.
- Intel GPU plugin provides access to discrete and integrated Intel GPU devices
- supported by the host kernel. It enables offloading compute operations of
- Kubernetes workload to GPU devices. It may be beneficial in such use cases as
- media transcoding and analytics, cloud gaming, AI training and inference.
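To illustrate how a workload would consume the resources the GPU plugin advertises, here is a minimal sketch of a pod requesting one Intel GPU. It assumes the plugin is already deployed and exposes the `gpu.intel.com/i915` extended resource; the pod name, image, and command are placeholder choices for illustration, not anything defined by the toolkit documentation.

```bash
# Minimal sketch: schedule a pod onto a node with a free Intel GPU slot.
# Assumes the Intel GPU device plugin is running and advertises gpu.intel.com/i915.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-check              # placeholder name
spec:
  restartPolicy: Never
  containers:
    - name: gpu-check
      image: busybox:1.36      # placeholder image; just lists the device nodes handed to the pod
      command: ["sh", "-c", "ls -l /dev/dri"]
      resources:
        limits:
          gpu.intel.com/i915: 1   # one GPU slot exposed by the Intel GPU plugin
EOF
```

If the request can be satisfied, the scheduler places the pod only on nodes that report a free `gpu.intel.com/i915` slot, and the allocated `/dev/dri` device nodes appear inside the container.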
+### [Calico](https://github.com/projectcalico/calico)
-- [Calico](https://github.com/projectcalico/calico)
- - [CNI Plugin](https://github.com/projectcalico/calico/tree/master/cni-plugin)
- \- as [docker image](https://hub.docker.com/r/calico/cni).
+- [CNI Plugin](https://github.com/projectcalico/calico/tree/master/cni-plugin)
+\- as [docker image](https://hub.docker.com/r/calico/cni).
- A plugin that enables you to use Calico for deployments based on Container
- Network Interface (CNI).
+ A plugin that enables you to use Calico for deployments based on Container
+ Network Interface (CNI).
- - [Node](https://github.com/projectcalico/calico/tree/master/node)
- \- as [docker image](https://hub.docker.com/r/calico/node/).
+- [Node](https://github.com/projectcalico/calico/tree/master/node)
+ \- as [docker image](https://hub.docker.com/r/calico/node/).
- A CNI plugin that enables you to create a Layer 3 network for Kubernetes
- pods and assign a unique IP address for each.
+ A CNI plugin that enables you to create a Layer 3 network for Kubernetes
+ pods and assign a unique IP address for each.
- - [Kube controllers](https://github.com/projectcalico/calico/tree/master/kube-controllers)
- \- as [docker image](https://hub.docker.com/r/calico/kube-controllers).
+- [Kube controllers](https://github.com/projectcalico/calico/tree/master/kube-controllers)
+ \- as [docker image](https://hub.docker.com/r/calico/kube-controllers).
- A set of controllers that monitor the resources in the Kubernetes API (network,
- policies, nodes) and adjust Calico's CNI configuration.
+ A set of controllers that monitor the resources in the Kubernetes API (network,
+ policies, nodes) and adjust Calico's CNI configuration.
## Packaging
diff --git a/docs/developer-guide/emt-get-started.md b/docs/developer-guide/emt-get-started.md
index b779c9fdfb..81d2c1093e 100644
--- a/docs/developer-guide/emt-get-started.md
+++ b/docs/developer-guide/emt-get-started.md
@@ -10,7 +10,9 @@
technology partners.
## Usage Scenarios
-To validate workloads on Intel silicon, you can deploy Edge Microvisor Toolkit as a
+Choose a [version of the Edge Microvisor Toolkit](./get-started/emt-versions.md) that
+best suits your workflow. You can validate Edge AI workloads on Intel silicon by
+deploying the toolkit as a
[standalone edge node](./get-started/emt-build-and-deploy.md) or with Edge Manageability Framework, a complete integrated system for edge devices with full lifecycle management, including remote deployment and management of applications orchestrated by Kubernetes.
@@ -32,6 +34,8 @@
including remote deployment and management of applications orchestrated by Kuber