Updating code and image for accessibility reasons. (#21778)
Fixes issues: #20602, #21294, #21637, #21639.

Test site available here: https://maanavd.github.io/onnxruntime/
MaanavD authored Aug 16, 2024
1 parent 2920978 commit 0321041
Showing 4 changed files with 101 additions and 37 deletions.
88 changes: 79 additions & 9 deletions _sass/color_schemes/onnxruntime.scss
@@ -2,12 +2,82 @@ $link-color: #226aca;
$btn-primary-color: #226aca;

// Code is too light in default theme //
.highlight .n {
color: #555 !important;
}
.highlight .nn {
color: #555 !important;
}
.highlight .c1 {
color: #188616 !important;
}
// .highlight .n {
// color: #555 !important;
// }
// .highlight .nn {
// color: #555 !important;
// }
// .highlight .c1 {
// color: #188616 !important;
// }

.highlight .hll { background-color: #ffffcc; }
.highlight { background: #ffffff; }
.highlight .c { color: #767676; }
.highlight .err { background-color: #FFAAAA; color: #a00000; }
.highlight .k { color: #008800; font-weight: bold; }
.highlight .o { color: #333333; }
.highlight .ch { color: #767676; }
.highlight .cm { color: #767676; }
.highlight .cp { color: #557799; }
.highlight .cpf { color: #767676; }
.highlight .c1 { color: #767676; }
.highlight .cs { color: #cc0000; font-weight: bold; }
.highlight .gd { color: #A00000; }
.highlight .ge { font-style: italic; }
.highlight .gr { color: #eb0000; }
.highlight .gh { color: #000080; font-weight: bold; }
.highlight .gi { color: #008700; }
.highlight .go { color: #767676; }
.highlight .gp { font-weight: bold; color: #bc5909; }
.highlight .gs { font-weight: bold; }
.highlight .gu { color: #800080; font-weight: bold; }
.highlight .gt { color: #0044DD; }
.highlight .kc { color: #008800; font-weight: bold; }
.highlight .kd { color: #008800; font-weight: bold; }
.highlight .kn { color: #008800; font-weight: bold; }
.highlight .kp { color: #003388; font-weight: bold; }
.highlight .kr { color: #008800; font-weight: bold; }
.highlight .kt { color: #333399; font-weight: bold; }
.highlight .m { color: #6600EE; font-weight: bold; }
.highlight .s { background-color: #fff0f0; }
.highlight .na { color: #0000CC; }
.highlight .nb { color: #007020; }
.highlight .nc { color: #BB0066; font-weight: bold; }
.highlight .no { color: #003366; font-weight: bold; }
.highlight .nd { color: #555555; font-weight: bold; }
.highlight .ni { color: #880000; font-weight: bold; }
.highlight .ne { font-weight: bold; color: #eb0000; }
.highlight .nf { color: #0066BB; font-weight: bold; }
.highlight .nl { font-weight: bold; color: #8f6f00; }
.highlight .nn { font-weight: bold; color: #0e7eab; }
.highlight .nt { color: #007700; }
.highlight .nv { color: #996633; }
.highlight .ow { color: #000000; font-weight: bold; }
.highlight .w { color: #767676; }
.highlight .mb { color: #6600EE; font-weight: bold; }
.highlight .mf { color: #6600EE; font-weight: bold; }
.highlight .mh { color: #005588; font-weight: bold; }
.highlight .mi { color: #0000DD; font-weight: bold; }
.highlight .mo { color: #4400EE; font-weight: bold; }
.highlight .sa { background-color: #fff0f0; }
.highlight .sb { background-color: #fff0f0; }
.highlight .sc { color: #0044DD; }
.highlight .dl { background-color: #fff0f0; }
.highlight .sd { color: #d54220; }
.highlight .s2 { background-color: #fff0f0; }
.highlight .se { color: #666666; font-weight: bold; background-color: #fff0f0; }
.highlight .sh { background-color: #fff0f0; }
.highlight .si { background-color: #eeeeee; }
.highlight .sx { background-color: #fff0f0; color: #d82100; }
.highlight .sr { color: #000000; background-color: #fff0ff; }
.highlight .s1 { background-color: #fff0f0; }
.highlight .ss { color: #AA6600; }
.highlight .bp { color: #007020; }
.highlight .fm { color: #0066BB; font-weight: bold; }
.highlight .vc { color: #336699; }
.highlight .vg { font-weight: bold; color: #b55f00; }
.highlight .vi { color: #3333BB; }
.highlight .vm { color: #996633; }
.highlight .il { color: #0000DD; font-weight: bold; }
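
The replacement rules above trade the three overrides that are now commented out for a full highlight palette with darker token colors — comments, for instance, become #767676 on the #ffffff code background — addressing the "code is too light in default theme" note. As an illustrative aside (not part of this commit), a palette like this can be sanity-checked by computing WCAG 2.x contrast ratios; in the sketch below the hex values are copied from the rules above, and 4.5:1 is the AA threshold for normal-size text.

```typescript
// Illustrative WCAG 2.x contrast check for the highlight palette above.
// Not part of the commit; hex values are copied from the SCSS rules.

/** Convert one sRGB channel (0-255) to its linear-light value. */
function channelToLinear(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

/** Relative luminance of a "#rrggbb" color per WCAG 2.x. */
function relativeLuminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => parseInt(hex.slice(i, i + 2), 16));
  return 0.2126 * channelToLinear(r) + 0.7152 * channelToLinear(g) + 0.0722 * channelToLinear(b);
}

/** WCAG contrast ratio between two colors (always >= 1). */
function contrastRatio(fg: string, bg: string): number {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Comment color (.c, .c1) against the .highlight background:
console.log(contrastRatio("#767676", "#ffffff").toFixed(2)); // ~4.54, above the 4.5:1 AA minimum
```
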
24 changes: 9 additions & 15 deletions docs/tutorials/on-device-training/ios-app.md
@@ -15,7 +15,7 @@ In this tutorial, we will build a simple speaker identification app that learns
Here is what the application will look like:


<img src="../../../images/iOS_speaker_identification_app.png" width="30%" height="30%">
<img src="../../../images/iOS_speaker_identification_app.png" alt="application demo, with buttons for voice, train, and infer." width="30%" height="30%">

## Introduction
We will guide you through the process of building an iOS application that can train a simple audio classification model using on-device training techniques. The tutorial showcases the `transfer learning` technique where knowledge gained from training a model on one task is leveraged to improve the performance of a model on a different but related task. Instead of starting the learning process from scratch, transfer learning allows us to transfer the knowledge or features learned by a pre-trained model to a new task.
@@ -30,28 +30,22 @@ In the tutorial, we will:


## Contents
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Generating the training artifacts](#generating-the-training-artifacts)
- [Export the model to ONNX](#export-the-model-to-onnx)
- [Define the trainable and non trainable parameters](#define-the-trainable-and-non-trainable-parameters)
- [Generate the training artifacts](#generate-the-training-artifacts)

- [Building the iOS application](#building-the-ios-application)
- [Building an iOS Application](#building-an-ios-application)
- [Introduction](#introduction)
- [Contents](#contents)
- [Prerequisites](#prerequisites)
- [Generating the training artifacts](#generating-the-training-artifacts)
- [Building the iOS application](#building-the-ios-application)
- [Xcode Setup](#xcode-setup)
- [Application Overview](#application-overview)
- [Training the model](#training-the-model)
- [Loading the training artifacts and initializing training session](#loading-the-training-artifacts-and-initializing-training-session)
- [Training the model](#training-the-model-1)
- [Exporting the trained model](#exporting-the-trained-model)

- [Inference with the trained model](#inference-with-the-trained-model)
- [Recording Audio](#recording-audio)
- [Train View](#train-view)
- [Infer View](#infer-view)
- [ContentView](#contentview)
- [Running the iOS application](#running-the-ios-application)
- [Conclusion](#conclusion)
- [Running the iOS application](#running-the-ios-application)
- [Conclusion](#conclusion)


## Prerequisites
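
In the tutorial page above, the screenshot gains a descriptive alt attribute and the table of contents is restructured. As an illustrative aside (not part of this commit), missing alt text is easy to catch mechanically; a small DOM sweep like the sketch below lists every rendered <img> on a page that lacks a non-empty alt attribute. Purely decorative images may legitimately carry an empty alt, so matches still warrant a human look.

```typescript
// Illustrative accessibility spot-check (not part of this commit):
// list every rendered <img> that is missing a non-empty alt attribute.
// Runs in a browser console or any DOM environment.

function findImagesMissingAlt(root: Document | Element = document): HTMLImageElement[] {
  return Array.from(root.querySelectorAll("img")).filter(
    (img) => !img.hasAttribute("alt") || img.getAttribute("alt")!.trim() === ""
  );
}

for (const img of findImagesMissingAlt()) {
  console.warn("Image missing alt text:", img.currentSrc || img.src);
}
```
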
22 changes: 11 additions & 11 deletions src/routes/blogs/accelerating-llama-2/+page.svelte
@@ -45,11 +45,11 @@
<div class="container mx-auto px-4 md:px-8 lg:px-48 pt-8">
<h1 class="text-5xl pb-2">Accelerating LLaMA-2 Inference with ONNX Runtime</h1>
<p class="text-neutral">
By: <a href="https://www.linkedin.com/in/kunal-v-16315b94" class="text-blue-700"
By: <a href="https://www.linkedin.com/in/kunal-v-16315b94" class="text-blue-700 underline"
>Kunal Vaishnavi</a
>
and
<a href="https://www.linkedin.com/in/parinitaparinita/" class="text-blue-700">Parinita Rahi</a>
<a href="https://www.linkedin.com/in/parinitaparinita/" class="text-blue-700 underline">Parinita Rahi</a>
</p>
<p class="text-neutral">
14TH NOVEMBER, 2023 <span class="italic text-stone-500">(Updated 22nd November)</span>
@@ -76,7 +76,7 @@
Llama2 is a state-of-the-art open source LLM from Meta ranging in scale from 7B to 70B
parameters (7B, 13B, 70B). Microsoft and Meta <a
href="https://blogs.microsoft.com/blog/2023/07/18/microsoft-and-meta-expand-their-ai-partnership-with-llama-2-on-azure-and-windows/"
class="text-blue-700">announced</a
class="text-blue-700 underline">announced</a
> their AI on Azure and Windows collaboration in July 2023. As part of the announcement, Llama2
was added to the Azure AI model catalog, which serves as a hub of foundation models that empower
developers and machine learning (ML) professionals to easily discover, evaluate, customize, and
@@ -152,7 +152,7 @@
<p class="mb-4">
More details on these metrics can be found <a
href="https://github.com/microsoft/onnxruntime-inference-examples/blob/main/python/models/llama/README.md"
class="text-blue-700">here</a
class="text-blue-700 underline">here</a
>.
</p>

@@ -165,7 +165,7 @@
</p>

<p class="mb-4">
ONNX Runtime applied <a href="https://arxiv.org/pdf/1909.08053.pdf" class="text-blue-700"
ONNX Runtime applied <a href="https://arxiv.org/pdf/1909.08053.pdf" class="text-blue-700 underline"
>Megatron-LM</a
>
Tensor Parallelism on the 70B model to split the original model weight onto different GPUs. Megatron
@@ -176,7 +176,7 @@
You can find additional example scripts
<a
href="https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers/models/llama/"
class="text-blue-700">here</a
class="text-blue-700 underline">here</a
>.
</p>

@@ -252,19 +252,19 @@
calculate the rotary embeddings more efficiently with less memory usage. The rotary embedding
compute kernels also support interleaved and non-interleaved formats to support both the <a
href="https://github.com/microsoft/Llama-2-Onnx"
class="text-blue-700">Microsoft version of LLaMA-2</a
class="text-blue-700 underline">Microsoft version of LLaMA-2</a
>
and the Hugging Face version of LLaMA-2 respectively while sharing the same calculations.
</p>

<p class="mb-4">
The optimizations work for the <a
href="https://huggingface.co/meta-llama"
class="text-blue-700">Hugging Face versions</a
class="text-blue-700 underline">Hugging Face versions</a
>
(models ending with <i>-hf</i>) and the Microsoft versions. You can download the optimized HF
versions from
<a href="https://github.com/microsoft/Llama-2-Onnx/tree/main-CUDA_CPU" class="text-blue-700"
<a href="https://github.com/microsoft/Llama-2-Onnx/tree/main-CUDA_CPU" class="text-blue-700 underline"
>Microsoft's LLaMA-2 ONNX repository</a
>. Stay tuned for newer Microsoft versions coming soon!
</p>
@@ -281,7 +281,7 @@
<p class="mb-4">
Here is an example of <a
href="https://github.com/microsoft/Olive/tree/main/examples/llama2"
class="text-blue-700">Llama2 optimization with Olive</a
class="text-blue-700 underline">Llama2 optimization with Olive</a
>, which harnesses ONNX Runtime optimizations highlighted in this blog. Distinct optimization
flows cater to various requirements. For instance, you have the flexibility to choose
different data types for quantization in CPU and GPU inference, based on your accuracy
@@ -294,7 +294,7 @@
<p class="mb-4">
Here is a <a
href="https://github.com/microsoft/onnxruntime-inference-examples/blob/main/python/models/llama/LLaMA-2%20E2E%20Notebook.ipynb"
class="text-blue-700">sample notebook</a
class="text-blue-700 underline">sample notebook</a
> that shows you an end-to-end example of how you can use the above ONNX Runtime optimizations
in your application.
</p>
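
Each edit in this file appends the `underline` utility class to the existing `text-blue-700` links, so in-text links are no longer distinguished by color alone (WCAG 1.4.1, "Use of Color"). As an illustrative aside (not part of this commit), a computed-style sweep like the sketch below can flag any remaining in-paragraph links that render without an underline. Applying the utility per link keeps the change scoped to this page; a site-wide rule for in-text anchors would be the alternative if the same treatment were wanted everywhere.

```typescript
// Illustrative check (not part of this commit): flag links inside paragraphs
// that are not underlined and therefore likely rely on color alone.

function findColorOnlyLinks(): HTMLAnchorElement[] {
  return Array.from(document.querySelectorAll<HTMLAnchorElement>("p a[href]")).filter((a) => {
    const decoration = getComputedStyle(a).textDecorationLine;
    return !decoration.includes("underline");
  });
}

for (const link of findColorOnlyLinks()) {
  console.warn("Link may rely on color alone:", link.href, link.textContent?.trim());
}
```
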
4 changes: 2 additions & 2 deletions src/routes/training/+page.svelte
@@ -221,8 +221,8 @@
<span class="font-bold">Personalization tasks</span> where the model needs to be trained on
the user's data
</h2>
Examples:
<ul class="list-disc list-inside">
Examples:
<li>Image / Audio classification</li>
<li>Text Prediction</li>
</ul>
@@ -237,8 +237,8 @@
<span class="font-bold">Federated learning tasks</span> where the model is locally trained
on data distributed across multiple devices to build a more robust aggregated global model
</h2>
Examples:
<ul class="list-disc list-inside">
Examples:
<li>Medical research</li>
<li>Autonomous vehicles</li>
<li>Robotics</li>
