From f072c660d24b14342917ac5a31d0479b0d2efa73 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Wed, 5 Feb 2025 00:05:01 +0000
Subject: [PATCH] [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci
---
 docs/source/users/index.md | 2 +-
 docs/source/users/vllm.md  | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/source/users/index.md b/docs/source/users/index.md
index 4191b96b7..7042d6803 100644
--- a/docs/source/users/index.md
+++ b/docs/source/users/index.md
@@ -443,7 +443,7 @@ To get started, follow the instructions on the [Ollama website](https://ollama.c
 
 ### vLLM usage
 
-`vLLM` is a fast and easy-to-use library for LLM inference and serving. The [vLLM website](https://docs.vllm.ai/en/latest/) explains installation and usage. To use `vLLM` in Jupyter AI, please see the dedicated documentation page on using [vLLM in Jupyter AI](vllm.md). 
+`vLLM` is a fast and easy-to-use library for LLM inference and serving. The [vLLM website](https://docs.vllm.ai/en/latest/) explains installation and usage. To use `vLLM` in Jupyter AI, please see the dedicated documentation page on using [vLLM in Jupyter AI](vllm.md).
 
 ### Asking about something in your notebook
 
diff --git a/docs/source/users/vllm.md b/docs/source/users/vllm.md
index 2e912633e..2e24f10b3 100644
--- a/docs/source/users/vllm.md
+++ b/docs/source/users/vllm.md
@@ -4,7 +4,7 @@
 `vLLM` is a fast and easy-to-use library for LLM inference and serving. The [vLLM website](https://docs.vllm.ai/en/latest/) explains installation and usage.
 
-Depending on your hardware set up you will install `vLLM` using these [instructions](https://docs.vllm.ai/en/latest/getting_started/installation/index.html). It is best to install it in a dedicated python environment. 
+Depending on your hardware set up you will install `vLLM` using these [instructions](https://docs.vllm.ai/en/latest/getting_started/installation/index.html). It is best to install it in a dedicated python environment.
 
 Once it is installed you may start serving any model with the command:
 
 ```python
@@ -24,7 +24,7 @@ Start up Jupyter AI and update the AI Settings as follows (notice that we are us
     alt="Screen shot of AI setting for using vllm."
     class="screenshot" width="75%"/>
 
-Since vLLM may be addressed using OpenAI's API, you can test if the model is available using the API call as shown: 
+Since vLLM may be addressed using OpenAI's API, you can test if the model is available using the API call as shown: