Fixed many (not all) accessibility issues. #22002

Merged 4 commits · Sep 10, 2024
Changes from 1 commit
Fixed many (not all) accessibility issues. Likely will need to change HLJS theme for the rest.
MaanavD committed Sep 5, 2024
commit f94a059c42eaac6409d523ab46abd7e440322c83
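Note: the HLJS theme mentioned in the commit message is the highlight.js stylesheet behind the <Highlight> components that render code snippets in the blog pages below; those colors come from a theme CSS file rather than the Tailwind classes touched here. A minimal sketch of that likely follow-up (a11y-dark is a real highlight.js theme, but the import site is an assumption, not part of this commit):

<script lang="ts">
  // Hypothetical follow-up, not in this commit: swap the highlight.js
  // stylesheet for one designed to meet WCAG contrast requirements.
  import 'highlight.js/styles/a11y-dark.css';
</script>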
42 changes: 20 additions & 22 deletions docs/tutorials/on-device-training/android-app.md
@@ -12,7 +12,7 @@ In this tutorial, we will explore how to build an Android application that incor

Here is what the application will look like at the end of this tutorial:

<img src="../../../images/on-device-training-application-prediction-tom.jpg" width="30%" height="30%">
<img src="../../../images/on-device-training-application-prediction-tom.jpg" alt="an image classification app with Tom Cruise in the middle." width="30%" height="30%">

## Introduction

@@ -26,24 +26,22 @@ In this tutorial, we will use data to learn to:

## Contents

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Offline Phase - Building the training artifacts](#offline-phase---building-the-training-artifacts)
- [Export the model to ONNX](#op1)
- [Define the trainable and non trainable parameters](#op2)
- [Generate the training artifacts](#op3)
- [Training Phase - Android application development](#training-phase---android-application-development)
- [Setting up the project in Android Studio](#tp1)
- [Adding the ONNX Runtime dependency](#tp2)
- [Packaging the Prebuilt Training Artifacts and Dataset](#tp3)
- [Interfacing with ONNX Runtime - C++ Code](#tp4)
- [Image Preprocessing](#tp5)
- [Application Frontend](#tp6)
- [Training Phase - Running the application on a device](#training-phase---running-the-application-on-a-device)
- [Running the application on a device](#tp7)
- [Training with a pre-loaded dataset - Animals](#tp8)
- [Training with a custom dataset - Celebrities](#tp9)
- [Conclusion](#conclusion)
- [On-Device Training: Building an Android Application](#on-device-training-building-an-android-application)
- [Introduction](#introduction)
- [Contents](#contents)
- [Prerequisites](#prerequisites)
- [Offline Phase - Building the training artifacts](#offline-phase---building-the-training-artifacts)
- [The original model is trained on imagenet which has 1000 classes.](#the-original-model-is-trained-on-imagenet-which-has-1000-classes)
- [For our image classification scenario, we need to classify among 4 categories.](#for-our-image-classification-scenario-we-need-to-classify-among-4-categories)
- [So we need to change the last layer of the model to have 4 outputs.](#so-we-need-to-change-the-last-layer-of-the-model-to-have-4-outputs)
- [Export the model to ONNX.](#export-the-model-to-onnx)
- [Load the onnx model.](#load-the-onnx-model)
- [Define the parameters that require their gradients to be computed](#define-the-parameters-that-require-their-gradients-to-be-computed)
- [(trainable parameters) and those that do not (frozen/non trainable parameters).](#trainable-parameters-and-those-that-do-not-frozennon-trainable-parameters)
- [Generate the training artifacts.](#generate-the-training-artifacts)
- [Training Phase - Android application development](#training-phase---android-application-development)
- [Training Phase - Running the application on a device](#training-phase---running-the-application-on-a-device)
- [Conclusion](#conclusion)

## Prerequisites

@@ -791,7 +789,7 @@ To follow this tutorial, you should have a basic understanding of Android app de

b. Launching the application on the device should look like this:

<img src="../../../images/on-device-training-application-landing-page.jpg" width="30%" height="30%">
<img src="../../../images/on-device-training-application-landing-page.jpg" alt="Barebones ORT Personalize app" width="30%" height="30%">

2. <a name="tp8"></a>Training with a pre-loaded dataset - Animals

@@ -805,7 +803,7 @@ To follow this tutorial, you should have a basic understanding of Android app de

e. Use any animal image from your library for inferencing now.

<img src="../../../images/on-device-training-application-prediction-cow.jpg" width="30%" height="30%">
<img src="../../../images/on-device-training-application-prediction-cow.jpg" alt="ORT Personalize app with an image of a cow" width="30%" height="30%">

As can be seen from the image above, the model correctly predicted `Cow`.

@@ -825,7 +823,7 @@ To follow this tutorial, you should have a basic understanding of Android app de

g. That's it! Hopefully the application classified the image correctly.

<img src="../../../images/on-device-training-application-prediction-tom.jpg" width="30%" height="30%">
<img src="../../../images/on-device-training-application-prediction-tom.jpg" alt="an image classification app with Tom Cruise in the middle." width="30%" height="30%">


## Conclusion
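The alt-text pattern applied across these docs pages is the standard one: informative images get a short description of what they show. For completeness, a sketch that is not part of this diff: purely decorative images would instead take an explicitly empty alt so screen readers skip them.

<!-- informative: describe the content, as done throughout this PR -->
<img src="../../../images/on-device-training-application-prediction-cow.jpg" alt="ORT Personalize app with an image of a cow" width="30%" height="30%">

<!-- decorative (hypothetical example): empty alt hides it from assistive tech -->
<img src="divider.png" alt="">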
10 changes: 5 additions & 5 deletions docs/tutorials/on-device-training/ios-app.md
@@ -947,27 +947,27 @@ Now, we are ready to run the application. You can run the application on the sim

a. Now, when you run the application, you should see the following screen:

<img src="../../../images/iOS_speaker_identification_app.png" width="30%" height="30%">
<img src="../../../images/iOS_speaker_identification_app.png" alt="My Voice application with Train and Infer buttons" width="30%" height="30%">


b. Next, click on the `Train` button to navigate to the `TrainView`. The `TrainView` will prompt you to record your voice. You will need to record your voice `kNumRecordings` times.

<img src="../../../images/iOS_speaker_identification_training_screen.jpg" width="30%" height="30%">
<img src="../../../images/iOS_speaker_identification_training_screen.jpg" alt="My Voice application with words to record" width="30%" height="30%">


c. Once all the recordings are complete, the application will train the model on the given data. You will see the progress bar indicating the progress of the training.

<img src="../../../images/iOS_speaker_identification_training_progress_screen.jpg" width="30%" height="30%">
<img src="../../../images/iOS_speaker_identification_training_progress_screen.jpg" alt="Loading bar while the app is training" width="30%" height="30%">


d. Once the training is complete, you will see the following screen:

<img src="../../../images/iOS_speaker_identification_training_complete_screen.jpg" width="30%" height="30%">
<img src="../../../images/iOS_speaker_identification_training_complete_screen.jpg" alt="The app informs you training finished successfully!" width="30%" height="30%">


e. Now, click on the `Infer` button to navigate to the `InferView`. The `InferView` will prompt you to record your voice. Once the recording is complete, it will perform inference with the trained model and display the result of the inference.

<img src="../../../images/iOS_speaker_identification_infer_screen.jpg" width="30%" height="30%">
<img src="../../../images/iOS_speaker_identification_infer_screen.jpg" alt="My Voice application allows you to record and infer whether it's you or not." width="30%" height="30%">


That's it! Hopefully, it identified your voice correctly.
32 changes: 16 additions & 16 deletions src/routes/blogs/pytorch-on-the-edge/+page.svelte
@@ -179,9 +179,9 @@ fun run(audioTensor: OnnxTensor): Result {
<div class="container mx-auto px-4 md:px-8 lg:px-48 pt-8">
<h1 class="text-5xl pb-2">Run PyTorch models on the edge</h1>
<p class="text-neutral">
By: <a href="https://www.linkedin.com/in/natkershaw/" class="text-blue-700">Natalie Kershaw</a>
By: <a href="https://www.linkedin.com/in/natkershaw/" class="dark:text-blue-300 text-blue-800 underline">Natalie Kershaw</a>
and
<a href="https://www.linkedin.com/in/prasanthpulavarthi/" class="text-blue-700"
<a href="https://www.linkedin.com/in/prasanthpulavarthi/" class="dark:text-blue-300 text-blue-800 underline"
>Prasanth Pulavarthi</a
>
</p>
@@ -217,12 +217,12 @@ fun run(audioTensor: OnnxTensor): Result {
anywhere that is outside of the cloud, ranging from large, well-resourced personal computers
to small footprint devices such as mobile phones. This has been a challenging task to
accomplish in the past, but new advances in model optimization and software like
<a href="https://onnxruntime.ai/pytorch" class="text-blue-700">ONNX Runtime</a>
<a href="https://onnxruntime.ai/pytorch" class="dark:text-blue-300 text-blue-800 underline">ONNX Runtime</a>
make it more feasible - even for new generative AI and large language models like Stable Diffusion,
Whisper, and Llama2.
</p>

<h2 class="text-blue-700 text-3xl mb-4">Considerations for PyTorch models on the edge</h2>
<h2 class="dark:text-blue-300 text-blue-800 underline text-3xl mb-4">Considerations for PyTorch models on the edge</h2>

<p class="mb-4">
There are several factors to keep in mind when thinking about running a PyTorch model on the
@@ -292,7 +292,7 @@ fun run(audioTensor: OnnxTensor): Result {
</li>
</ul>

<h2 class="text-blue-700 text-3xl mb-4">Tools for PyTorch models on the edge</h2>
<h2 class="dark:text-blue-300 text-blue-800 underline text-3xl mb-4">Tools for PyTorch models on the edge</h2>

<p class="mb-4">
We mentioned ONNX Runtime several times above. ONNX Runtime is a compact, standards-based
@@ -305,7 +305,7 @@ fun run(audioTensor: OnnxTensor): Result {
format that doesn't require the PyTorch framework and its gigabytes of dependencies. PyTorch
has thought about this and includes an API that enables exactly this - <a
href="https://pytorch.org/docs/stable/onnx.html"
class="text-blue-700">torch.onnx</a
class="dark:text-blue-300 text-blue-800 underline">torch.onnx</a
>. <a href="https://onnx.ai/">ONNX</a> is an open standard that defines the operators that make
up models. The PyTorch ONNX APIs take the Pythonic PyTorch code and turn it into a functional
graph that captures the operators that are needed to run the model without Python. As with everything
@@ -318,7 +318,7 @@ fun run(audioTensor: OnnxTensor): Result {
The popular Hugging Face library also has APIs that build on top of this torch.onnx
functionality to export models to the ONNX format. Over <a
href="https://huggingface.co/blog/ort-accelerating-hf-models"
class="text-blue-700">130,000 models</a
class="dark:text-blue-300 text-blue-800 underline">130,000 models</a
> are supported, making it very likely that the model you care about is one of them.
</p>

@@ -328,7 +328,7 @@ fun run(audioTensor: OnnxTensor): Result {
and web browsers) via various languages (from C# to JavaScript to Swift).
</p>

<h2 class="text-blue-700 text-3xl mb-4">Examples of PyTorch models on the edge</h2>
<h2 class="dark:text-blue-300 text-blue-800 underline text-3xl mb-4">Examples of PyTorch models on the edge</h2>

<h3 class=" text-2xl mb-2">Stable Diffusion on Windows</h3>

@@ -345,15 +345,15 @@ fun run(audioTensor: OnnxTensor): Result {
<p class="mb-4">
You don't have to export the fifth model, ClipTokenizer, as it is available in <a
href="https://onnxruntime.ai/docs/extensions"
class="text-blue-700">ONNX Runtime extensions</a
class="dark:text-blue-300 text-blue-800 underline">ONNX Runtime extensions</a
>, a library for pre and post processing PyTorch models.
</p>

<p class="mb-4">
To run this pipeline of models as a .NET application, we build the pipeline code in C#. This
code can be run on CPU, GPU, or NPU, if they are available on your machine, using ONNX
Runtime's device-specific hardware accelerators. This is configured with the <code
class="bg-gray-200 p-1 rounded">ExecutionProviderTarget</code
class="bg-gray-200 dark:bg-gray-700 p-1 rounded">ExecutionProviderTarget</code
> below.
</p>
<Highlight language={csharp} code={dotnetcode} />
@@ -366,15 +366,15 @@ fun run(audioTensor: OnnxTensor): Result {
<p class="mb-4">
You can build the application and run it on Windows with the detailed steps shown in this <a
href="https://onnxruntime.ai/docs/tutorials/csharp/stable-diffusion-csharp.html"
class="text-blue-700">tutorial</a
class="dark:text-blue-300 text-blue-800 underline">tutorial</a
>.
</p>

<h3 class=" text-2xl mb-2">Text generation in the browser</h3>

<p class="mb-4">
Running a PyTorch model locally in the browser is not only possible but super simple with
the <a href="https://huggingface.co/docs/transformers.js/index" class="text-blue-700"
the <a href="https://huggingface.co/docs/transformers.js/index" class="dark:text-blue-300 text-blue-800 underline"
>transformers.js</a
> library. Transformers.js uses ONNX Runtime Web as its backend. Many models are already converted
to ONNX and served by the transformers.js CDN, making inference in the browser a matter of writing
@@ -407,7 +407,7 @@ fun run(audioTensor: OnnxTensor): Result {
All components of the Whisper Tiny model (audio decoder, encoder, decoder, and text sequence
generation) can be composed and exported to a single ONNX model using the <a
href="https://github.com/microsoft/Olive/tree/main/examples/whisper"
class="text-blue-700">Olive framework</a
class="dark:text-blue-300 text-blue-800 underline">Olive framework</a
>. To run this model as part of a mobile application, you can use ONNX Runtime Mobile, which
supports Android, iOS, react-native, and MAUI/Xamarin.
</p>
@@ -420,7 +420,7 @@ fun run(audioTensor: OnnxTensor): Result {
<p class="mb-4">
The relevant snippet of an example <a
href="https://github.com/microsoft/onnxruntime-inference-examples/tree/main/mobile/examples/speech_recognition"
class="text-blue-700">Android mobile app</a
class="dark:text-blue-300 text-blue-800 underline">Android mobile app</a
> that performs speech transcription on short samples of audio is shown below:
</p>
<Highlight language={kotlin} code={mobilecode} />
@@ -476,11 +476,11 @@ fun run(audioTensor: OnnxTensor): Result {
<p class="mb-4">
You can read the full <a
href="https://onnxruntime.ai/docs/tutorials/on-device-training/ios-app.html"
class="text-blue-700">Speaker Verification tutorial</a
class="dark:text-blue-300 text-blue-800 underline">Speaker Verification tutorial</a
>, and
<a
href="https://github.com/microsoft/onnxruntime-training-examples/tree/master/on_device_training/mobile/ios"
class="text-blue-700">build and run the application from source</a
class="dark:text-blue-300 text-blue-800 underline">build and run the application from source</a
>.
</p>
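
Every anchor in this file now carries the identical class string dark:text-blue-300 text-blue-800 underline. A possible way to keep future contrast tweaks in one place is a small wrapper component; this is a sketch only, and ExtLink.svelte is hypothetical, not part of this PR:

<!-- ExtLink.svelte (hypothetical) -->
<script lang="ts">
  export let href: string;
</script>

<a {href} class="dark:text-blue-300 text-blue-800 underline">
  <slot />
</a>

Usage would then be <ExtLink href="https://onnxruntime.ai/pytorch">ONNX Runtime</ExtLink>.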

6 changes: 3 additions & 3 deletions src/routes/components/footer.svelte
@@ -9,7 +9,7 @@
<footer class="footer p-10 mt-10 text-base-content z-40 border-top border-t">
<div>
<p>ONNX Runtime<br />Copyright © Microsoft. All rights reserved.</p>
<span class="footer-title">Follow us at:</span>
<span class="dark:text-blue-200 footer-title">Follow us at:</span>
<div class="grid grid-flow-col gap-4">
<a aria-label="youtube" href="https://www.youtube.com/onnxruntime" target="_blank"
><div class="w-8 h-8 pt-0.5 hover:text-primary"><FaYoutube /></div></a
@@ -24,12 +24,12 @@
</div>
<div />
<div>
<span class="footer-title text-bold ">Get Started</span>
<span class="dark:text-blue-200 footer-title text-bold">Get Started</span>
<a href={pathvar + '/getting-started'} class="link link-hover">Install</a>
<a href={pathvar + '/pytorch'} class="link link-hover">PyTorch</a>
</div>
<div>
<span class="footer-title">Resources</span>
<span class="dark:text-blue-200 footer-title">Resources</span>
<a href={pathvar + '/blogs'} class="link link-hover">Blogs</a>
<a rel="external" href={pathvar + '/docs/tutorials'} class="link link-hover">Tutorials</a>
<a rel="external" href={pathvar + '/docs/api/'} class="link link-hover">APIs</a>
4 changes: 2 additions & 2 deletions src/routes/events/+page.svelte
@@ -20,8 +20,7 @@
}
],
image: converttoort,
imagealt:
'Slide detailing how to convert from various frameworks to ONNX, then deploy anywhere using ORT'
imagealt: 'Slide detailing how to convert from various frameworks to ONNX, then deploy anywhere using ORT'
}
];

@@ -74,6 +73,7 @@
date={event.date}
linkarr={event.linkarr}
image={event.image}
imagealt={event.imagealt}
/>
{/each}
</div>
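
This hunk forwards imagealt into the event-post component, which (as the next file shows) already renders alt={imagealt}; the alt text was defined in the page data but never passed down. The component side presumably declares the prop along these lines, a sketch only, since the script section is not part of this diff:

<!-- event-post.svelte, script section (assumed) -->
<script lang="ts">
  export let image: string;
  export let imagealt: string = '';
</script>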
4 changes: 2 additions & 2 deletions src/routes/events/event-post.svelte
@@ -33,7 +33,7 @@
<div class="card-body col-span-3 md:col-span-2">
<h2 class="card-title">{title}</h2>
<p>{description}</p>
<p class="text-blue-700 text-right">
<p class="text-blue-800 text-right">
{date}
</p>
<div class="card-actions">
@@ -43,7 +43,7 @@
</div>
</div>
<div class="card-image col-span-1 m-auto hidden md:flex">
<img class="" src={image} alt={imagealt} />
<img src={image} alt={imagealt} />
</div>
</div>
</a>
6 changes: 3 additions & 3 deletions src/routes/getting-started/+page.svelte
@@ -34,7 +34,7 @@
<p class="pt-4">
For more in-depth installation instructions, check out the <a
href="https://onnxruntime.ai/docs/tutorials/"
class="text-blue-700">ONNX Runtime documentation</a
class="dark:text-blue-300 text-blue-800 underline">ONNX Runtime documentation</a
>.
</p>
</div>
@@ -45,9 +45,9 @@
If you are interested in joining the ONNX Runtime open source community, you might want to join
us on GitHub where you can interact with other users and developers, participate in<a
href="https://github.com/microsoft/onnxruntime/discussions"
class="text-blue-700">discussions</a
class="dark:text-blue-300 text-blue-800 underline">discussions</a
>, and get help with any
<a href="https://github.com/microsoft/onnxruntime/issues" class="text-blue-700">issues</a> you
<a href="https://github.com/microsoft/onnxruntime/issues" class="dark:text-blue-300 text-blue-800 underline">issues</a> you
encounter. You can also contribute to the project by reporting bugs, suggesting features, or
submitting pull requests.
<div class="py-4">
32 changes: 16 additions & 16 deletions src/routes/huggingface/+page.svelte
@@ -1,4 +1,4 @@
<script lang="ts">
scenario<script lang="ts">
import LandingHero from '../components/landing-hero.svelte';
import ImagesHf1 from '../../images/undraw/image_HF1.svelte';
import ImageHf2 from '../../images/undraw/image_HF2.svelte';
@@ -81,28 +81,28 @@
<p class="pb-4">
The top 30 most popular model architectures on Hugging Face are all supported by ONNX
Runtime, and over 80 Hugging Face model architectures in total boast ORT support. This list
includes <a href="https://huggingface.co/models?other=bert" class="text-blue-700">BERT</a>,
<a href="https://huggingface.co/models?other=gpt2" class="text-blue-700">GPT2</a>,
<a href="https://huggingface.co/models?other=t5" class="text-blue-700">T5</a>,
<a href="https://huggingface.co/models?other=stable-diffusion" class="text-blue-700"
includes <a href="https://huggingface.co/models?other=bert" class="dark:text-blue-300 text-blue-800 underline">BERT</a>,
<a href="https://huggingface.co/models?other=gpt2" class="dark:text-blue-300 text-blue-800 underline">GPT2</a>,
<a href="https://huggingface.co/models?other=t5" class="dark:text-blue-300 text-blue-800 underline">T5</a>,
<a href="https://huggingface.co/models?other=stable-diffusion" class="dark:text-blue-300 text-blue-800 underline"
>Stable Diffusion</a
>,
<a href="https://huggingface.co/models?other=whisper" class="text-blue-700">Whisper</a>, and
<a href="https://huggingface.co/models?other=whisper" class="dark:text-blue-300 text-blue-800 underline">Whisper</a>, and
many more.
</p>
<p class="pb-4">
ONNX models can be found directly from the Hugging Face Model Hub in its <a
href="https://huggingface.co/models?library=onnx"
class="text-blue-700">ONNX model library</a
class="dark:text-blue-300 text-blue-800 underline">ONNX model library</a
>.
</p>
<p class="pb-4">
Hugging Face also provides ONNX support for a variety of other models not listed in the ONNX
model library. With <a
href="https://huggingface.co/docs/optimum/exporters/onnx/overview"
class="text-blue-700">Hugging Face Optimum</a
class="dark:text-blue-300 text-blue-800 underline">Hugging Face Optimum</a
>, you can easily convert pretrained models to ONNX, and
<a href="https://huggingface.co/docs/transformers.js/index" class="text-blue-700"
<a href="https://huggingface.co/docs/transformers.js/index" class="dark:text-blue-300 text-blue-800 underline"
>Transformers.js</a
> lets you run Hugging Face Transformers directly from your browser!
</p>
@@ -119,16 +119,16 @@
ONNX Runtime also supports many increasingly popular large language model (LLM)
architectures, including <a
href="https://huggingface.co/models?other=llama"
class="text-blue-700">LLaMA</a
class="dark:text-blue-300 text-blue-800 underline">LLaMA</a
>,
<a href="https://huggingface.co/models?other=gpt_neo" class="text-blue-700">GPT Neo</a>,
<a href="https://huggingface.co/models?other=bloom" class="text-blue-700">BLOOM</a>, and
<a href="https://huggingface.co/models?other=gpt_neo" class="dark:text-blue-300 text-blue-800 underline">GPT Neo</a>,
<a href="https://huggingface.co/models?other=bloom" class="dark:text-blue-300 text-blue-800 underline">BLOOM</a>, and
many more.
</p>
<p>
Hugging Face also provides an <a
href="https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard"
class="text-blue-700">Open LLM Leaderboard</a
class="dark:text-blue-300 text-blue-800 underline">Open LLM Leaderboard</a
> with more detailed tracking and evaluation of recently released LLMs from the community.
</p>
</div>
@@ -149,7 +149,7 @@
and designs responsible AI solutions.
</p>
<p>
<a href="https://ml.azure.com/" class="text-blue-700">Azure Machine Learning</a> publishes a
<a href="https://ml.azure.com/" class="dark:text-blue-300 text-blue-800 underline">Azure Machine Learning</a> publishes a
curated model list that is updated regularly and includes the most popular models. You can run
the vast majority of the models on the curated list with ONNX Runtime, using HuggingFace Optimum.
</p>
@@ -166,12 +166,12 @@
<div>
<h1 class="text-3xl pb-4">Transformers.js + ONNX Runtime Web</h1>
<p class="pb-4">
<a href="https://huggingface.co/docs/transformers.js/index" class="text-blue-700"
<a href="https://huggingface.co/docs/transformers.js/index" class="dark:text-blue-300 text-blue-800 underline"
>Transformers.js</a
>
is an amazing tool to run transformers on the web, designed to be functionally equivalent to
Hugging Face’s
<a href="https://github.com/huggingface/transformers" class="text-blue-700">transformers</a>
<a href="https://github.com/huggingface/transformers" class="dark:text-blue-300 text-blue-800 underline">transformers</a>
python library.
</p>
<p class="pb-4">
4 changes: 2 additions & 2 deletions src/routes/testimonials/testimonial-card.svelte
@@ -24,7 +24,7 @@
<article
on:mouseenter={handleEnter}
on:mouseleave={handleLeave}
class="max-w-md mx-auto bg-blue-300 text-slate-50 rounded-sm overflow-hidden md:max-w-2xl"
class="max-w-md mx-auto bg-blue-300 text-primary-content rounded-sm overflow-hidden md:max-w-2xl"
id={title}
>
<div class="md:flex">
@@ -35,7 +35,7 @@
<p class="block mt-1 leading-tight font-bold text-lg">{title}</p>
<p class="mt-2">{description}</p>
<br />
<p class="text-blue-700 text-right">-{author}</p>
<p class="text-right">-{author}</p>
</div>
</div>
</article>
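
Replacing the hard-coded text-slate-50 with text-primary-content moves the card text onto DaisyUI's semantic color system (the codebase already uses DaisyUI classes such as card-title and footer-title), so it tracks the active theme rather than staying fixed. The usual DaisyUI pairing looks like this, a generic sketch rather than this card's exact markup:

<!-- DaisyUI semantic pairs: *-content colors are meant to be readable on the matching background -->
<div class="bg-primary text-primary-content">Readable in light and dark themes</div>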
18 changes: 9 additions & 9 deletions src/routes/training/+page.svelte
@@ -48,8 +48,8 @@
<br />
<div class="bg-white w-100 md:w-1/2 p-4">
<code>
<span class="text-red-500">- model = build_model() # User's PyTorch model</span><br />
<span class="text-green-500">+ model = ORTModule(build_model())</span>
<span class="text-red-600">- model = build_model() # User's PyTorch model</span><br />
<span class="text-green-700">+ model = ORTModule(build_model())</span>
</code>
</div>
<br /><br />
@@ -87,12 +87,12 @@
<h2 class="card-title">Part of the PyTorch ecosystem</h2>
<p>
ONNX Runtime Training is available via the <a
class="text-blue-700"
class="dark:text-blue-300 text-blue-800 underline"
href="https://pytorch.org/ort/">torch-ort</a
>
package as part of the
<a
class="text-blue-700"
class="dark:text-blue-300 text-blue-800 underline"
href="https://learn.microsoft.com/en-us/azure/machine-learning/resource-azure-container-for-pytorch?view=azureml-api-2"
>Azure Container for PyTorch (ACPT)</a
> and seamlessly integrates with existing training pipelines for PyTorch models.
@@ -103,11 +103,11 @@
<div class="card-body items-center text-center">
<h2 class="card-title">Composable with popular acceleration systems</h2>
<p>
Compose with <a href="https://github.com/microsoft/DeepSpeed" class="text-blue-700"
Compose with <a href="https://github.com/microsoft/DeepSpeed" class="dark:text-blue-300 text-blue-800 underline"
>DeepSpeed</a
>,
<a href="https://github.com/facebookresearch/fairscale" class="text-blue-700">FairScale</a
>, <a href="https://github.com/NVIDIA/Megatron-LM" class="text-blue-700">Megatron</a>, and
<a href="https://github.com/facebookresearch/fairscale" class="dark:text-blue-300 text-blue-800 underline">FairScale</a
>, <a href="https://github.com/NVIDIA/Megatron-LM" class="dark:text-blue-300 text-blue-800 underline">Megatron</a>, and
more for even faster and more efficient training.
</p>
</div>
@@ -118,7 +118,7 @@
<p>
ORT Training is turned on for curated models in the <a
href="https://ml.azure.com/"
class="text-blue-700">Azure AI | Machine Learning Studio</a
class="dark:text-blue-300 text-blue-800 underline">Azure AI | Machine Learning Studio</a
> model catalog.
</p>
</div>
@@ -129,7 +129,7 @@
<p>
ORT Training can be used to accelerate Hugging Face models like Llama-2-7b through <a
href="https://github.com/huggingface/optimum/blob/main/examples/onnxruntime/training/text-classification/README.md#onnx-runtime-training"
class="text-blue-700">these scripts</a
class="dark:text-blue-300 text-blue-800 underline">these scripts</a
>.
</p>
</div>
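
The red-500 to red-600 and green-500 to green-700 swaps above are contrast fixes: against the white card, Tailwind's 500 shades fall short of WCAG AA's 4.5:1 ratio for normal text (red-500 #ef4444 is roughly 3.8:1), while the darker replacements clear it (red-600 #dc2626 is roughly 4.8:1). A self-contained sketch for checking a color pair with the WCAG 2.x formulas, an illustration rather than code from this repo:

<script lang="ts">
  // WCAG 2.x relative luminance of a '#rrggbb' color.
  function luminance(hex: string): number {
    const lin = (c: number) => {
      const s = c / 255;
      return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
    };
    const n = parseInt(hex.slice(1), 16);
    return 0.2126 * lin((n >> 16) & 255) + 0.7152 * lin((n >> 8) & 255) + 0.0722 * lin(n & 255);
  }

  // Contrast ratio (L1 + 0.05) / (L2 + 0.05); AA requires at least 4.5 for normal text.
  function contrast(a: string, b: string): number {
    const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
    return (hi + 0.05) / (lo + 0.05);
  }

  console.log(contrast('#ef4444', '#ffffff').toFixed(2)); // ≈ 3.76, fails AA
  console.log(contrast('#dc2626', '#ffffff').toFixed(2)); // ≈ 4.83, passes AA
</script>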
6 changes: 3 additions & 3 deletions src/routes/windows/+page.svelte
@@ -92,18 +92,18 @@
<h2 class="card-title">Windows ML Samples Gallery</h2>
<p>
This gallery demonstrates different machine learning scenarios and features using <a
class="text-blue-700"
class="dark:text-blue-300 text-blue-800 underline"
href="https://docs.microsoft.com/en-us/windows/ai/windows-ml/">Windows ML</a
>
in an interactive format. The app is an interactive companion that shows the integration of
<a
class="text-blue-700"
class="dark:text-blue-300 text-blue-800 underline"
href="https://docs.microsoft.com/en-us/uwp/api/windows.ai.machinelearning"
>Windows Machine Learning Library APIs</a
>
into a desktop
<a
class="text-blue-700"
class="dark:text-blue-300 text-blue-800 underline"
href="https://docs.microsoft.com/en-us/uwp/api/windows.ai.machinelearning">WinUI 3</a
> application.
</p>
1 change: 1 addition & 0 deletions tailwind.config.js
@@ -4,6 +4,7 @@ import flattenColorPalette from 'tailwindcss/lib/util/flattenColorPalette';

/** @type {import('tailwindcss').Config} */
export default {
darkMode: ['selector', '[data-theme="darkmode"]'],
content: ['./src/**/*.{html,svelte,js,ts}'],
theme: {
extend: {
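
The new darkMode entry switches Tailwind from media-query dark mode to selector-based dark mode: the dark: variants added throughout this PR now apply only when an ancestor matches the [data-theme="darkmode"] selector (the two-element ['selector', customSelector] form is how Tailwind 3.4+ customizes that selector). A minimal toggle sketch, assuming the attribute lives on <html> and that 'light' is the other theme name; neither detail is confirmed by this diff:

<script lang="ts">
  // Flip the attribute that Tailwind's selector-based dark mode keys off.
  // The value must exactly match the configured selector ('darkmode').
  function toggleTheme(): void {
    const root = document.documentElement;
    const isDark = root.getAttribute('data-theme') === 'darkmode';
    root.setAttribute('data-theme', isDark ? 'light' : 'darkmode');
  }
</script>

<button on:click={toggleTheme}>Toggle theme</button>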