diff --git a/manifests/rhoai/shared/apps/nvidia-nim/nvidia-nim-app.yaml b/manifests/rhoai/shared/apps/nvidia-nim/nvidia-nim-app.yaml
index 08d450cdea..f7d6819e75 100644
--- a/manifests/rhoai/shared/apps/nvidia-nim/nvidia-nim-app.yaml
+++ b/manifests/rhoai/shared/apps/nvidia-nim/nvidia-nim-app.yaml
@@ -8,787 +8,34 @@ spec:
displayName: NVIDIA NIM
provider: NVIDIA
description: |-
- NVIDIA NIM, part of NVIDIA AI Enterprise, is a set of easy-to-use microservices designed for secure, reliable deployment of high performance AI model inferencing across clouds, data centers and workstations.
+ NVIDIA NIM is a set of easy-to-use microservices designed for secure, reliable deployment of high-performance AI model inferencing.
kfdefApplications: []
route: ''
- img: >-
-
-
+ img: |-
+
category: Self-managed
support: third party support
docsLink: https://developer.nvidia.com/nim
quickStart: ''
getStartedLink: 'https://developer.nvidia.com/nim'
enable:
- title: Enter NVIDIA AI Enterprise (NVAIE) license key
+ title: Enter NVIDIA AI Enterprise license key
actionLabel: Submit
description: ''
variables:
api_key: password
variableDisplayText:
- api_key: NVAIE license key
+ api_key: NVIDIA AI Enterprise license key
variableHelpText:
api_key: This key is given to you by NVIDIA
validationJob: nvidia-nim-periodic-validator
validationSecret: nvidia-nim-access
validationConfigMap: nvidia-nim-validation-result
- getStartedMarkDown: >-
+ getStartedMarkDown: |-
# **NVIDIA NIM**
-
NVIDIA NIM, part of NVIDIA AI Enterprise, is a set of easy-to-use
- microservices designed for secure, reliable deployment of high performance
+ microservices designed for secure, reliable deployment of high-performance
AI model inferencing across the cloud, data center and workstations.
Supporting a wide range of AI models, including open-source community and
NVIDIA AI Foundation models, it ensures seamless, scalable AI inferencing,
on-premises or in the cloud, leveraging industry standard APIs.
-
- ## **Key Benefits**
-
- ### Performance and Scale
-
- * Improve TCO with low latency, high throughput AI inference that scales
- with cloud– The Llama 3 70B NIM delivers up to 5X higher throughput compared
- to off the shelf deployment on H100 systems.
-
- * Achieve best accuracy with support for fine-tuned models out of the box
-
- ### Ease of Use
-
- * Speed time to market with prebuilt, cloud-native microservices that are
- continuously maintained to deliver optimized inference on NVIDIA accelerated
- infrastructure
-
- * Empower enterprise developers with industry standard APIs and tools
- tailored for enterprise environments
-
- ### Security and Manageability
-
- * Maintain security and control of generative AI applications and data with
- self-hosted deployment of the latest AI models in your choice of
- infrastructure, on-premises or in the cloud
-
- * Leverage enterprise-grade software with dedicated feature branches,
- rigorous validation processes, and support including direct access to NVIDIA
- AI experts and defined service-level agreements
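
The `enable` block in this manifest wires the license-key prompt to a validation flow: the value entered in the `api_key` field is stored in the `nvidia-nim-access` secret named by `validationSecret`, the `nvidia-nim-periodic-validator` job presumably checks it, and the outcome lands in the `nvidia-nim-validation-result` config map. As a rough illustration only, a minimal sketch of the secret this flow is expected to populate might look like the following; the secret name comes from `validationSecret` above, while the namespace and the `api_key` data key are assumptions for illustration, not taken from this manifest.

```yaml
# Hypothetical sketch of the secret the enable flow is expected to populate.
# The name matches validationSecret above; the namespace and the api_key data
# key are assumptions, not confirmed by this manifest.
apiVersion: v1
kind: Secret
metadata:
  name: nvidia-nim-access
  namespace: redhat-ods-applications  # assumption: the RHOAI applications namespace
type: Opaque
stringData:
  api_key: <NVIDIA AI Enterprise license key>  # value entered in the enablement dialog
```

The validation job and config map referenced by `validationJob` and `validationConfigMap` would then, presumably, consume this secret and record the result that the dashboard reads back.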