diff --git a/src/routes/components/training-and-inference.svelte b/src/routes/components/training-and-inference.svelte
index 3f341be059353..5400e17969fd9 100644
--- a/src/routes/components/training-and-inference.svelte
+++ b/src/routes/components/training-and-inference.svelte
@@ -9,17 +9,17 @@
- ONNX Runtime is the same tech that powers AI in Microsoft products like Office, Azure, and Bing,
- as well as in thousands of other projects across the world.
+ ONNX Runtime powers AI in Microsoft products including Windows, Office, Azure Cognitive Services, and Bing,
+ as well as in thousands of other projects across the world. ONNX Runtime is cross-platform, supporting cloud, edge, web, and mobile experiences.
-
+ Learn more about ONNX Runtime Inferencing →
- Run PyTorch and other ML models locally in the web browser with the cross-platform ONNX
+ Run PyTorch and other ML models in the web browser with ONNX Runtime Web.
- Infuse your Android and iOS mobile apps with AI and take advantage of ML accelerator
- hardware with ONNX Runtime Mobile.
+ Infuse your Android and iOS mobile apps with AI using ONNX Runtime Mobile.
- ORT Training can be used to accelerate training for a large number of popular models,
+ Accelerate training of popular models,
  including Hugging Face models like Llama-2-7b and curated models from the Azure AI | Machine Learning Studio model catalog.
diff --git a/src/routes/inference/+page.svelte b/src/routes/inference/+page.svelte
index cfdec06c5a47e..64811516ce070 100644
--- a/src/routes/inference/+page.svelte
+++ b/src/routes/inference/+page.svelte
@@ -6,7 +6,7 @@
 	const title = 'ONNX Runtime for Inferencing';
 	const description =
-		'ONNX Runtime mobile runs models on mobile devices using the same API used for cloud-based inferencing. Developers can use their mobile language and development environment of choice to add AI to Android, iOS, react-native, MAUI/Xamarin applications in Swift, Objective-C, Java, Kotlin, JavaScript, C, and C++.';
+		'ONNX Runtime provides a performant solution to inference models from varying source frameworks (PyTorch, Hugging Face, TensorFlow) on different software and hardware stacks. ONNX Runtime Inference takes advantage of hardware accelerators, supports APIs in multiple languages (Python, C++, C#, C, Java, and more), and works on cloud servers, edge and mobile devices, and in web browsers.';
 	const imgsrc = 'onnxruntimelogo';
 	const imgalt = 'ONNX Runtime Logo';
@@ -42,68 +42,33 @@