diff --git a/src/routes/blogs/olive-cli/+page.svx b/src/routes/blogs/olive-cli/+page.svx
index 8bf6c7ee1142a..b9102e0330033 100644
--- a/src/routes/blogs/olive-cli/+page.svx
+++ b/src/routes/blogs/olive-cli/+page.svx
@@ -37,7 +37,7 @@ At [Build 2023 Microsoft announced Olive (**O**NNX **Live**)](https://opensource
 Olive workflow.
-High-Level Olive Workflow. These hardware targets can include various AI accelerators (NPU, GPU, CPU) provided by major hardware vendors such as Qualcomm, AMD, Nvidia, and Intel
+High-Level Olive Workflow. These hardware targets can include various AI accelerators (GPU, CPU) provided by major hardware vendors such as Qualcomm, AMD, Nvidia, and Intel

@@ -86,7 +86,6 @@ The command to run automatic optimizer for the Llama-3.2-1B-Instruct model on CP
 > **Tip:** If want to target:
 > - CUDA GPU, then update `--device` to `gpu` and `--provider` to `CUDAExecutionProvider`.
 > - Windows DirectML, then update `--device` to `gpu` and `--provider` to `DmlExecutionProvider`.
-> - Qualcomm NPU, then update `--device` to `npu` and `--provider` to `QNNExecutionProvider`.
 >
 > Olive will apply the optimizations specific to the device and provider.
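
For context, the CPU command that the tip above modifies would look roughly like the following. This is a sketch, assuming Olive's `auto-opt` subcommand and the `--device`/`--provider` flags named in the diff; the model identifier and output path are illustrative, and exact flag names may vary across Olive versions:

```shell
# Hypothetical baseline: run Olive's automatic optimizer for CPU.
# Per the tip, retargeting is a matter of swapping two flags:
#   CUDA GPU:         --device gpu --provider CUDAExecutionProvider
#   Windows DirectML: --device gpu --provider DmlExecutionProvider
olive auto-opt \
    --model_name_or_path meta-llama/Llama-3.2-1B-Instruct \
    --output_path models/llama-3.2-1b-instruct \
    --device cpu \
    --provider CPUExecutionProvider
```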