Commit

Remove mention of NPU
MaanavD authored Nov 13, 2024
1 parent e51b992 commit fa90b64
Showing 1 changed file with 1 addition and 2 deletions.
3 changes: 1 addition & 2 deletions src/routes/blogs/olive-cli/+page.svx
@@ -37,7 +37,7 @@ At [Build 2023 Microsoft announced Olive (**O**NNX **Live**)](https://opensource
  <div class="m-auto w55">
  <img src="./olive-flow.png" alt="Olive workflow.">

- <i>High-Level Olive Workflow. These hardware targets can include various AI accelerators (NPU, GPU, CPU) provided by major hardware vendors such as Qualcomm, AMD, Nvidia, and Intel</i>
+ <i>High-Level Olive Workflow. These hardware targets can include various AI accelerators (GPU, CPU) provided by major hardware vendors such as Qualcomm, AMD, Nvidia, and Intel</i>
  </div>
  <br/>

@@ -86,7 +86,6 @@ The command to run automatic optimizer for the Llama-3.2-1B-Instruct model on CP
  > **Tip:** If want to target:
  > - CUDA GPU, then update `--device` to `gpu` and `--provider` to `CUDAExecutionProvider`.
  > - Windows DirectML, then update `--device` to `gpu` and `--provider` to `DmlExecutionProvider`.
- > - Qualcomm NPU, then update `--device` to `npu` and `--provider` to `QNNExecutionProvider`.
  >
  > Olive will apply the optimizations specific to the device and provider.

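The tip retained in the second hunk amounts to swapping two flags on the same `olive auto-opt` invocation. As a sketch of the CUDA GPU variant the tip describes (the model identifier and output directory below are illustrative assumptions, not taken from the commit):

```shell
# Hypothetical invocation adapting the post's CPU command to a CUDA GPU target.
# Model path and output directory are placeholder assumptions.
olive auto-opt \
    --model_name_or_path meta-llama/Llama-3.2-1B-Instruct \
    --device gpu \
    --provider CUDAExecutionProvider \
    --output_path models/llama-3.2-1b-gpu
```

For the Windows DirectML case in the tip, only `--provider` changes (to `DmlExecutionProvider`); `--device` stays `gpu`.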

0 comments on commit fa90b64
