Warning

work in progress

serving a mini model on cpu only with ocp ai through vllm

We want to test openshift ai, focusing on networking rather than gpu, and we do not have an nvidia gpu. So we need a small llm model, serve it on cpu only, and try to expose it over http.

We will use DistilGPT-2. It is a very small model and its output quality is really bad, but that does not matter: it can at least produce a response.

deploy ocp ai

We will deploy ocp ai. To expose plain http, we keep kserve in raw deployment mode, which does not involve service mesh or serverless; those are unnecessary for our use case.

We also need minio to provide s3 storage; we will set it up below and embed the model into its image.

download model

First, we need to download the llm model itself.

# on vultr

# download the model first
dnf install -y conda


mkdir -p /data/env/
conda create -y -p /data/env/hg_cli python=3.11

conda init bash

conda activate /data/env/hg_cli
# conda deactivate

# python -m pip install --upgrade pip setuptools wheel

pip install --upgrade huggingface_hub


# for distilbert/distilgpt2
VAR_NAME=distilbert/distilgpt2

VAR_NAME_FULL=${VAR_NAME//\//-}
echo $VAR_NAME_FULL
# distilbert-distilgpt2

mkdir -p /data/huggingface/${VAR_NAME_FULL}
cd /data/huggingface/${VAR_NAME_FULL}

while true; do
    huggingface-cli download --repo-type model --revision main --cache-dir /data/huggingface/cache --local-dir ./ --local-dir-use-symlinks False --exclude "*.pt"  --resume-download ${VAR_NAME} 
    if [ $? -eq 0 ]; then
        break
    fi
    sleep 1  # Optional: waits for 1 second before trying again
done
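
After the download finishes, it is worth a quick sanity check that the weights and tokenizer files actually landed locally before baking them into an image (a small sketch; the path comes from the variables above):

# sanity check the downloaded snapshot
ls -lh /data/huggingface/${VAR_NAME_FULL}/
# expect config.json, tokenizer files and the *.safetensors (or *.bin) weights here
du -sh /data/huggingface/${VAR_NAME_FULL}/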

create minio image with llm model

We need an s3 service, which we will provide with minio. We embed the llm model into the minio image, so that once minio starts, the model is already there.

# create the model image
mkdir -p /data/workspace/llm
cd /data/workspace/llm

rsync -avz /data/huggingface/${VAR_NAME_FULL}/ /data/workspace/llm/models/

cat << 'EOF' > Dockerfile
FROM docker.io/minio/minio:RELEASE.2021-06-17T00-10-46Z.hotfix.35a0912ff as minio-examples

EXPOSE 9000

ARG MODEL_DIR=/data1/

USER root

RUN useradd -u 1000 -g 0 modelmesh
RUN mkdir -p ${MODEL_DIR}
RUN chown -R 1000:0 /data1 && \
    chgrp -R 0 /data1 && \
    chmod -R g=u /data1

# COPY --chown=1000:0 models ${MODEL_DIR}/models/models
COPY models ${MODEL_DIR}/models/models

USER 1000

EOF

podman build -t quay.io/wangzheng422/qimgs:minio-distilgpt2-v03 -f Dockerfile .

podman push quay.io/wangzheng422/qimgs:minio-distilgpt2-v03
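
Before using the image in the cluster, it can be smoke-tested locally. A hedged sketch; it assumes port 9000 is free on the build host and mirrors the args/env used in the Deployment below:

# run the freshly built image and check that minio comes up
podman run --rm -d --name minio-test -p 9000:9000 \
  -e MINIO_ACCESS_KEY=admin -e MINIO_SECRET_KEY=password \
  quay.io/wangzheng422/qimgs:minio-distilgpt2-v03 server /data1

# 200 means the server is alive; the model files sit in the "models" bucket
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9000/minio/health/live

podman stop minio-test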

deploy the minio

We deploy the minio.

# on helper
S3_NAME='distilgpt2'
S3_NS='demo-llm'
S3_IMAGE='quay.io/wangzheng422/qimgs:minio-distilgpt2-v03'

oc new-project ${S3_NS}
oc label --overwrite ns ${S3_NS} \
   pod-security.kubernetes.io/enforce=privileged

cat << EOF > ${BASE_DIR}/data/install/s3-codellama.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: minio-${S3_NAME}
spec:
  ports:
    - name: minio-client-port
      port: 9000
      protocol: TCP
      targetPort: 9000
    - name: minio-webui-port
      port: 9001
      protocol: TCP
      targetPort: 9001
  selector:
    app: minio-${S3_NAME}

---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: s3-${S3_NAME}
spec:
  to:
    kind: Service
    name: minio-${S3_NAME}
  port:
    targetPort: 9000

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio-${S3_NAME}
  labels:
    app: minio-${S3_NAME}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio-${S3_NAME}
  template:
    metadata:
      labels:
        app: minio-${S3_NAME}
    spec:
      containers:
        - args:
            - server
            - /data1
          env:
            - name: MINIO_ACCESS_KEY
              value:  admin
            - name: MINIO_SECRET_KEY
              value: password
          image: ${S3_IMAGE}
          imagePullPolicy: IfNotPresent
          name: minio
        #   nodeSelector:
        #     kubernetes.io/hostname: "worker-01-demo"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
                drop:
                - ALL
            runAsNonRoot: true
            seccompProfile:
                type: RuntimeDefault

---
apiVersion: v1
kind: Secret
metadata:
  annotations:
    serving.kserve.io/s3-endpoint: minio-${S3_NAME}.${S3_NS}.svc:9000 # replace with your s3 endpoint e.g minio-service.kubeflow:9000
    serving.kserve.io/s3-usehttps: "0" # by default 1, if testing with minio you can set to 0
    serving.kserve.io/s3-region: "us-east-2"
    serving.kserve.io/s3-useanoncredential: "false" # omitting this is the same as false, if true will ignore provided credential and use anonymous credentials
  name: storage-config-${S3_NAME}
stringData:
  "AWS_ACCESS_KEY_ID": "admin"
  "AWS_SECRET_ACCESS_KEY": "password"

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-${S3_NAME}
secrets:
- name: storage-config-${S3_NAME}

EOF

oc apply -n ${S3_NS} -f ${BASE_DIR}/data/install/s3-codellama.yaml

# oc delete -n ${S3_NS} -f ${BASE_DIR}/data/install/s3-codellama.yaml

# open in browser to check; the route host will look like
# http://s3-distilgpt2-demo-llm.apps.<your-cluster-domain>/minio/models/
# http://s3-distilgpt2-demo-llm.apps.<your-cluster-domain>/minio/models/models/
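
Once applied, a quick check that the pod, service and route came up (a small sketch reusing the variables above):

oc -n ${S3_NS} get pod,svc,route
# the route host can then be opened in a browser and logged into with admin/password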

create a serving runtime

We need to create a vllm serving runtime for cpu only, because the default one in ocp ai only works with nvidia gpu.

Here is the reference.

apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  annotations:
    openshift.io/display-name: "vLLM for CPU"
  labels:
    opendatahub.io/dashboard: "true"
  name: vllm-runtime-cpu
spec:
  annotations:
    prometheus.io/path: /metrics
    prometheus.io/port: "8080"
  builtInAdapter:
    modelLoadingTimeoutMillis: 90000
  containers:
    - args:
        - --model
        - /mnt/models/
        - --download-dir
        - /models-cache
        - --port
        - "8080"
        - --max-model-len
        - "1024"
      image: quay.io/rh-aiservices-bu/vllm-cpu-openai-ubi9:0.2
      command:
        - python
        - -m
        - vllm.entrypoints.openai.api_server
      name: kserve-container
      ports:
        - containerPort: 8080
          name: http1
          protocol: TCP
  multiModel: false
  supportedModelFormats:
    - autoSelect: true
      name: vLLM
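
The runtime can be added from the ocp ai dashboard, or applied directly with oc. A small sketch, assuming the YAML above is saved to a file (the file name here is arbitrary):

# save the ServingRuntime above as vllm-runtime-cpu.yaml, then:
oc apply -n ${S3_NS} -f vllm-runtime-cpu.yaml
oc get servingruntime -n ${S3_NS}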

create model serving

You can use the upstream runtime, i.e. the ServingRuntime shown in the previous section.

Or, you can modify the pre-defined, built-in one, which uses quay.io/modh/vllm:rhoai-2.13-cuda, based on:

apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  annotations:
    opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
    openshift.io/display-name: CPU of vLLM ServingRuntime for KServe
  labels:
    opendatahub.io/dashboard: "true"
  name: vllm-runtime-cpu-ocpai
spec:
  annotations:
    prometheus.io/path: /metrics
    prometheus.io/port: "8080"
  containers:
    - args:
        - --port=8080
        - --model=/mnt/models
        - --served-model-name={{.Name}}
        - --distributed-executor-backend=mp
        - --device=cpu
      command:
        - python
        - -m
        - vllm.entrypoints.openai.api_server
      env:
        - name: HF_HOME
          value: /tmp/hf_home
      image: quay.io/modh/vllm:rhoai-2.13-cuda
      name: kserve-container
      ports:
        - containerPort: 8080
          protocol: TCP
  multiModel: false
  supportedModelFormats:
    - autoSelect: true
      name: vLLM
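
With a runtime in place, the model serving itself can be created from the dashboard, or declared as a KServe InferenceService. The following is only a sketch, not the original deployment: it assumes the runtime name above, the storage-config-distilgpt2 secret and sa-distilgpt2 service account created earlier, and that the model sits in the models bucket under the models/ prefix:

cat << EOF > ${BASE_DIR}/data/install/is-distilgpt2.yaml
---
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: distilgpt2
  annotations:
    # keep kserve in raw deployment mode, no service mesh / serverless
    serving.kserve.io/deploymentMode: RawDeployment
  labels:
    opendatahub.io/dashboard: "true"
spec:
  predictor:
    serviceAccountName: sa-distilgpt2
    model:
      modelFormat:
        name: vLLM
      runtime: vllm-runtime-cpu-ocpai
      storageUri: s3://models/models
EOF

oc apply -n ${S3_NS} -f ${BASE_DIR}/data/install/is-distilgpt2.yaml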

expose http route

After the model serving is created, it creates a pod and a service; we expose the service with a route.

kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: llm
spec:
  host: llm-wzh-ai-01.apps.demo-01-rhsys.wzhlab.top
  to:
    kind: Service
    name: distilgpt2-predictor
    weight: 100
  port:
    targetPort: http1
  wildcardPolicy: None
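
A sketch of applying it; the project and host above are from this particular lab, so adjust them to your own environment:

# the predictor service created by kserve should already exist
oc get svc -n ${S3_NS} | grep predictor
# save the Route above as llm-route.yaml, then:
oc apply -n ${S3_NS} -f llm-route.yaml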

curl to test

Test with curl; we can see the service is exposed over plain http.

curl -v http://llm-wzh-ai-01.apps.demo-01-rhsys.wzhlab.top/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model":"/mnt/models/",
     "messages":
        [{"role":"user",
          "content":
             [{"type":"text", "text":"give me example"
              }
             ]
         }
        ]
    }'
# *   Trying 192.168.50.23:80...
# * Connected to llm-wzh-ai-01.apps.demo-01-rhsys.wzhlab.top (192.168.50.23) port 80 (#0)
# > POST /v1/chat/completions HTTP/1.1
# > Host: llm-wzh-ai-01.apps.demo-01-rhsys.wzhlab.top
# > User-Agent: curl/7.76.1
# > Accept: */*
# > Content-Type: application/json
# > Content-Length: 200
# >
# * Mark bundle as not supporting multiuse
# < HTTP/1.1 200 OK
# < date: Fri, 06 Dec 2024 15:13:25 GMT
# < server: uvicorn
# < content-length: 1427
# < content-type: application/json
# < set-cookie: fb2e7f517fef0f3147f969b6e5c6592e=ccc239021d83b5fedb39f97374356160; path=/; HttpOnly
# <
# {"id":"cmpl-bd4205e5a732496d928283dc467cf0b5","object":"chat.completion","created":1733498006,"model":"/mnt/models/","choices":[{"index":0,"message":{"role":"assistant","content":"As a result, we have released a new version of the P-N-A-R-R for Android. This is the first release of P-N-A-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-R-","tool_calls":[]},"logprobs":null,"finish_reason":"length","stop_reason":null}],"usage":{"prompt_tokens":4,"total_tokens":1024,"completion_tokens":1020}}

ray to run distributed training

If you want to run llama-factory on multiple nodes, you can use ray, by referencing:

It will create an empty ray cluster, and you then submit different rayjobs to the cluster.

debug

The vllm serving runtime is, in fact, just a container image.

A16 1/4gpu

quay.io/modh/vllm@sha256:3c56d4c2a5a9565e8b07ba17a6624290c4fb39ac9097b99b946326c09a8b40c8
quay.io/modh/vllm:rhoai-2.13-cuda

# https://github.com/red-hat-data-services/vllm/blob/main/Dockerfile
quay.io/modh/vllm:rhoai-2.16-cuda


podman run --privileged --gpus all --rm -it \
--entrypoint bash \
quay.io/modh/vllm:rhoai-2.13-cuda

docker run --gpus all --privileged --rm -it \
-e 'HF_HUB_OFFLINE=false' \
quay.io/modh/vllm:rhoai-2.13-cuda

docker run --gpus all --privileged --rm -it \
-e 'HF_HUB_OFFLINE=false' \
--entrypoint bash \
quay.io/modh/vllm:rhoai-2.13-cuda

python3 -m vllm.entrypoints.openai.api_server --device=cpu --help
usage: api_server.py [-h] [--host HOST] [--port PORT] [--uvicorn-log-level {debug,info,warning,error,critical,trace}] [--allow-credentials] [--allowed-origins ALLOWED_ORIGINS]
                     [--allowed-methods ALLOWED_METHODS] [--allowed-headers ALLOWED_HEADERS] [--api-key API_KEY] [--lora-modules LORA_MODULES [LORA_MODULES ...]]
                     [--prompt-adapters PROMPT_ADAPTERS [PROMPT_ADAPTERS ...]] [--chat-template CHAT_TEMPLATE] [--response-role RESPONSE_ROLE] [--ssl-keyfile SSL_KEYFILE]
                     [--ssl-certfile SSL_CERTFILE] [--ssl-ca-certs SSL_CA_CERTS] [--ssl-cert-reqs SSL_CERT_REQS] [--root-path ROOT_PATH] [--middleware MIDDLEWARE] [--return-tokens-as-token-ids]
                     [--disable-frontend-multiprocessing] [--enable-auto-tool-choice] [--tool-call-parser {mistral,hermes}] [--model MODEL] [--tokenizer TOKENIZER] [--skip-tokenizer-init]
                     [--revision REVISION] [--code-revision CODE_REVISION] [--tokenizer-revision TOKENIZER_REVISION] [--tokenizer-mode {auto,slow,mistral}] [--trust-remote-code]
                     [--download-dir DOWNLOAD_DIR] [--load-format {auto,pt,safetensors,npcache,dummy,tensorizer,sharded_state,gguf,bitsandbytes,mistral}] [--config-format {auto,hf,mistral}]
                     [--dtype {auto,half,float16,bfloat16,float,float32}] [--kv-cache-dtype {auto,fp8,fp8_e5m2,fp8_e4m3}] [--quantization-param-path QUANTIZATION_PARAM_PATH]
                     [--max-model-len MAX_MODEL_LEN] [--guided-decoding-backend {outlines,lm-format-enforcer}] [--distributed-executor-backend {ray,mp}] [--worker-use-ray]
                     [--pipeline-parallel-size PIPELINE_PARALLEL_SIZE] [--tensor-parallel-size TENSOR_PARALLEL_SIZE] [--max-parallel-loading-workers MAX_PARALLEL_LOADING_WORKERS]
                     [--ray-workers-use-nsight] [--block-size {8,16,32}] [--enable-prefix-caching] [--disable-sliding-window] [--use-v2-block-manager] [--num-lookahead-slots NUM_LOOKAHEAD_SLOTS]
                     [--seed SEED] [--swap-space SWAP_SPACE] [--cpu-offload-gb CPU_OFFLOAD_GB] [--gpu-memory-utilization GPU_MEMORY_UTILIZATION] [--num-gpu-blocks-override NUM_GPU_BLOCKS_OVERRIDE]
                     [--max-num-batched-tokens MAX_NUM_BATCHED_TOKENS] [--max-num-seqs MAX_NUM_SEQS] [--max-logprobs MAX_LOGPROBS] [--disable-log-stats]
                     [--quantization {aqlm,awq,deepspeedfp,tpu_int8,fp8,fbgemm_fp8,modelopt,marlin,gguf,gptq_marlin_24,gptq_marlin,awq_marlin,gptq,compressed-tensors,bitsandbytes,qqq,experts_int8,neuron_quant,None}]
                     [--rope-scaling ROPE_SCALING] [--rope-theta ROPE_THETA] [--enforce-eager] [--max-context-len-to-capture MAX_CONTEXT_LEN_TO_CAPTURE]
                     [--max-seq-len-to-capture MAX_SEQ_LEN_TO_CAPTURE] [--disable-custom-all-reduce] [--tokenizer-pool-size TOKENIZER_POOL_SIZE] [--tokenizer-pool-type TOKENIZER_POOL_TYPE]
                     [--tokenizer-pool-extra-config TOKENIZER_POOL_EXTRA_CONFIG] [--limit-mm-per-prompt LIMIT_MM_PER_PROMPT] [--mm-processor-kwargs MM_PROCESSOR_KWARGS] [--enable-lora]
                     [--max-loras MAX_LORAS] [--max-lora-rank MAX_LORA_RANK] [--lora-extra-vocab-size LORA_EXTRA_VOCAB_SIZE] [--lora-dtype {auto,float16,bfloat16,float32}]
                     [--long-lora-scaling-factors LONG_LORA_SCALING_FACTORS] [--max-cpu-loras MAX_CPU_LORAS] [--fully-sharded-loras] [--enable-prompt-adapter]
                     [--max-prompt-adapters MAX_PROMPT_ADAPTERS] [--max-prompt-adapter-token MAX_PROMPT_ADAPTER_TOKEN] [--device {auto,cuda,neuron,cpu,openvino,tpu,xpu}]
                     [--num-scheduler-steps NUM_SCHEDULER_STEPS] [--multi-step-stream-outputs] [--scheduler-delay-factor SCHEDULER_DELAY_FACTOR] [--enable-chunked-prefill [ENABLE_CHUNKED_PREFILL]]
                     [--speculative-model SPECULATIVE_MODEL]
                     [--speculative-model-quantization {aqlm,awq,deepspeedfp,tpu_int8,fp8,fbgemm_fp8,modelopt,marlin,gguf,gptq_marlin_24,gptq_marlin,awq_marlin,gptq,compressed-tensors,bitsandbytes,qqq,experts_int8,neuron_quant,None}]
                     [--num-speculative-tokens NUM_SPECULATIVE_TOKENS] [--speculative-draft-tensor-parallel-size SPECULATIVE_DRAFT_TENSOR_PARALLEL_SIZE]
                     [--speculative-max-model-len SPECULATIVE_MAX_MODEL_LEN] [--speculative-disable-by-batch-size SPECULATIVE_DISABLE_BY_BATCH_SIZE]
                     [--ngram-prompt-lookup-max NGRAM_PROMPT_LOOKUP_MAX] [--ngram-prompt-lookup-min NGRAM_PROMPT_LOOKUP_MIN]
                     [--spec-decoding-acceptance-method {rejection_sampler,typical_acceptance_sampler}]
                     [--typical-acceptance-sampler-posterior-threshold TYPICAL_ACCEPTANCE_SAMPLER_POSTERIOR_THRESHOLD]
                     [--typical-acceptance-sampler-posterior-alpha TYPICAL_ACCEPTANCE_SAMPLER_POSTERIOR_ALPHA] [--disable-logprobs-during-spec-decoding [DISABLE_LOGPROBS_DURING_SPEC_DECODING]]
                     [--model-loader-extra-config MODEL_LOADER_EXTRA_CONFIG] [--ignore-patterns IGNORE_PATTERNS] [--preemption-mode PREEMPTION_MODE]
                     [--served-model-name SERVED_MODEL_NAME [SERVED_MODEL_NAME ...]] [--qlora-adapter-name-or-path QLORA_ADAPTER_NAME_OR_PATH] [--otlp-traces-endpoint OTLP_TRACES_ENDPOINT]
                     [--collect-detailed-traces COLLECT_DETAILED_TRACES] [--disable-async-output-proc] [--override-neuron-config OVERRIDE_NEURON_CONFIG] [--disable-log-requests]
                     [--max-log-len MAX_LOG_LEN] [--disable-fastapi-docs]

vLLM OpenAI-Compatible RESTful API server.

options:
  -h, --help            show this help message and exit
  --host HOST           host name
  --port PORT           port number
  --uvicorn-log-level {debug,info,warning,error,critical,trace}
                        log level for uvicorn
  --allow-credentials   allow credentials
  --allowed-origins ALLOWED_ORIGINS
                        allowed origins
  --allowed-methods ALLOWED_METHODS
                        allowed methods
  --allowed-headers ALLOWED_HEADERS
                        allowed headers
  --api-key API_KEY     If provided, the server will require this key to be presented in the header.
  --lora-modules LORA_MODULES [LORA_MODULES ...]
                        LoRA module configurations in either 'name=path' formator JSON format. Example (old format): 'name=path' Example (new format): '{"name": "name", "local_path": "path",
                        "base_model_name": "id"}'
  --prompt-adapters PROMPT_ADAPTERS [PROMPT_ADAPTERS ...]
                        Prompt adapter configurations in the format name=path. Multiple adapters can be specified.
  --chat-template CHAT_TEMPLATE
                        The file path to the chat template, or the template in single-line form for the specified model
  --response-role RESPONSE_ROLE
                        The role name to return if `request.add_generation_prompt=true`.
  --ssl-keyfile SSL_KEYFILE
                        The file path to the SSL key file
  --ssl-certfile SSL_CERTFILE
                        The file path to the SSL cert file
  --ssl-ca-certs SSL_CA_CERTS
                        The CA certificates file
  --ssl-cert-reqs SSL_CERT_REQS
                        Whether client certificate is required (see stdlib ssl module's)
  --root-path ROOT_PATH
                        FastAPI root_path when app is behind a path based routing proxy
  --middleware MIDDLEWARE
                        Additional ASGI middleware to apply to the app. We accept multiple --middleware arguments. The value should be an import path. If a function is provided, vLLM will add it
                        to the server using @app.middleware('http'). If a class is provided, vLLM will add it to the server using app.add_middleware().
  --return-tokens-as-token-ids
                        When --max-logprobs is specified, represents single tokens as strings of the form 'token_id:{token_id}' so that tokens that are not JSON-encodable can be identified.
  --disable-frontend-multiprocessing
                        If specified, will run the OpenAI frontend server in the same process as the model serving engine.
  --enable-auto-tool-choice
                        Enable auto tool choice for supported models. Use --tool-call-parserto specify which parser to use
  --tool-call-parser {mistral,hermes}
                        Select the tool call parser depending on the model that you're using. This is used to parse the model-generated tool call into OpenAI API format. Required for --enable-
                        auto-tool-choice.
  --model MODEL         Name or path of the huggingface model to use.
  --tokenizer TOKENIZER
                        Name or path of the huggingface tokenizer to use. If unspecified, model name or path will be used.
  --skip-tokenizer-init
                        Skip initialization of tokenizer and detokenizer
  --revision REVISION   The specific model version to use. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.
  --code-revision CODE_REVISION
                        The specific revision to use for the model code on Hugging Face Hub. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.
  --tokenizer-revision TOKENIZER_REVISION
                        Revision of the huggingface tokenizer to use. It can be a branch name, a tag name, or a commit id. If unspecified, will use the default version.
  --tokenizer-mode {auto,slow,mistral}
                        The tokenizer mode. * "auto" will use the fast tokenizer if available. * "slow" will always use the slow tokenizer. * "mistral" will always use the `mistral_common`
                        tokenizer.
  --trust-remote-code   Trust remote code from huggingface.
  --download-dir DOWNLOAD_DIR
                        Directory to download and load the weights, default to the default cache dir of huggingface.
  --load-format {auto,pt,safetensors,npcache,dummy,tensorizer,sharded_state,gguf,bitsandbytes,mistral}
                        The format of the model weights to load. * "auto" will try to load the weights in the safetensors format and fall back to the pytorch bin format if safetensors format is
                        not available. * "pt" will load the weights in the pytorch bin format. * "safetensors" will load the weights in the safetensors format. * "npcache" will load the weights in
                        pytorch format and store a numpy cache to speed up the loading. * "dummy" will initialize the weights with random values, which is mainly for profiling. * "tensorizer" will
                        load the weights using tensorizer from CoreWeave. See the Tensorize vLLM Model script in the Examples section for more information. * "bitsandbytes" will load the weights
                        using bitsandbytes quantization.
  --config-format {auto,hf,mistral}
                        The format of the model config to load. * "auto" will try to load the config in hf format if available else it will try to load in mistral format
  --dtype {auto,half,float16,bfloat16,float,float32}
                        Data type for model weights and activations. * "auto" will use FP16 precision for FP32 and FP16 models, and BF16 precision for BF16 models. * "half" for FP16. Recommended
                        for AWQ quantization. * "float16" is the same as "half". * "bfloat16" for a balance between precision and range. * "float" is shorthand for FP32 precision. * "float32" for
                        FP32 precision.
  --kv-cache-dtype {auto,fp8,fp8_e5m2,fp8_e4m3}
                        Data type for kv cache storage. If "auto", will use model data type. CUDA 11.8+ supports fp8 (=fp8_e4m3) and fp8_e5m2. ROCm (AMD GPU) supports fp8 (=fp8_e4m3)
  --quantization-param-path QUANTIZATION_PARAM_PATH
                        Path to the JSON file containing the KV cache scaling factors. This should generally be supplied, when KV cache dtype is FP8. Otherwise, KV cache scaling factors default to
                        1.0, which may cause accuracy issues. FP8_E5M2 (without scaling) is only supported on cuda versiongreater than 11.8. On ROCm (AMD GPU), FP8_E4M3 is instead supported for
                        common inference criteria.
  --max-model-len MAX_MODEL_LEN
                        Model context length. If unspecified, will be automatically derived from the model config.
  --guided-decoding-backend {outlines,lm-format-enforcer}
                        Which engine will be used for guided decoding (JSON schema / regex etc) by default. Currently support https://github.com/outlines-dev/outlines and
                        https://github.com/noamgat/lm-format-enforcer. Can be overridden per request via guided_decoding_backend parameter.
  --distributed-executor-backend {ray,mp}
                        Backend to use for distributed serving. When more than 1 GPU is used, will be automatically set to "ray" if installed or "mp" (multiprocessing) otherwise.
  --worker-use-ray      Deprecated, use --distributed-executor-backend=ray.
  --pipeline-parallel-size PIPELINE_PARALLEL_SIZE, -pp PIPELINE_PARALLEL_SIZE
                        Number of pipeline stages.
  --tensor-parallel-size TENSOR_PARALLEL_SIZE, -tp TENSOR_PARALLEL_SIZE
                        Number of tensor parallel replicas.
  --max-parallel-loading-workers MAX_PARALLEL_LOADING_WORKERS
                        Load model sequentially in multiple batches, to avoid RAM OOM when using tensor parallel and large models.
  --ray-workers-use-nsight
                        If specified, use nsight to profile Ray workers.
  --block-size {8,16,32}
                        Token block size for contiguous chunks of tokens. This is ignored on neuron devices and set to max-model-len
  --enable-prefix-caching
                        Enables automatic prefix caching.
  --disable-sliding-window
                        Disables sliding window, capping to sliding window size
  --use-v2-block-manager
                        Use BlockSpaceMangerV2.
  --num-lookahead-slots NUM_LOOKAHEAD_SLOTS
                        Experimental scheduling config necessary for speculative decoding. This will be replaced by speculative config in the future; it is present to enable correctness tests
                        until then.
  --seed SEED           Random seed for operations.
  --swap-space SWAP_SPACE
                        CPU swap space size (GiB) per GPU.
  --cpu-offload-gb CPU_OFFLOAD_GB
                        The space in GiB to offload to CPU, per GPU. Default is 0, which means no offloading. Intuitively, this argument can be seen as a virtual way to increase the GPU memory
                        size. For example, if you have one 24 GB GPU and set this to 10, virtually you can think of it as a 34 GB GPU. Then you can load a 13B model with BF16 weight,which requires
                        at least 26GB GPU memory. Note that this requires fast CPU-GPU interconnect, as part of the model isloaded from CPU memory to GPU memory on the fly in each model forward
                        pass.
  --gpu-memory-utilization GPU_MEMORY_UTILIZATION
                        The fraction of GPU memory to be used for the model executor, which can range from 0 to 1. For example, a value of 0.5 would imply 50% GPU memory utilization. If
                        unspecified, will use the default value of 0.9.
  --num-gpu-blocks-override NUM_GPU_BLOCKS_OVERRIDE
                        If specified, ignore GPU profiling result and use this numberof GPU blocks. Used for testing preemption.
  --max-num-batched-tokens MAX_NUM_BATCHED_TOKENS
                        Maximum number of batched tokens per iteration.
  --max-num-seqs MAX_NUM_SEQS
                        Maximum number of sequences per iteration.
  --max-logprobs MAX_LOGPROBS
                        Max number of log probs to return logprobs is specified in SamplingParams.
  --disable-log-stats   Disable logging statistics.
  --quantization {aqlm,awq,deepspeedfp,tpu_int8,fp8,fbgemm_fp8,modelopt,marlin,gguf,gptq_marlin_24,gptq_marlin,awq_marlin,gptq,compressed-tensors,bitsandbytes,qqq,experts_int8,neuron_quant,None}, -q {aqlm,awq,deepspeedfp,tpu_int8,fp8,fbgemm_fp8,modelopt,marlin,gguf,gptq_marlin_24,gptq_marlin,awq_marlin,gptq,compressed-tensors,bitsandbytes,qqq,experts_int8,neuron_quant,None}
                        Method used to quantize the weights. If None, we first check the `quantization_config` attribute in the model config file. If that is None, we assume the model weights are
                        not quantized and use `dtype` to determine the data type of the weights.
  --rope-scaling ROPE_SCALING
                        RoPE scaling configuration in JSON format. For example, {"type":"dynamic","factor":2.0}
  --rope-theta ROPE_THETA
                        RoPE theta. Use with `rope_scaling`. In some cases, changing the RoPE theta improves the performance of the scaled model.
  --enforce-eager       Always use eager-mode PyTorch. If False, will use eager mode and CUDA graph in hybrid for maximal performance and flexibility.
  --max-context-len-to-capture MAX_CONTEXT_LEN_TO_CAPTURE
                        Maximum context length covered by CUDA graphs. When a sequence has context length larger than this, we fall back to eager mode. (DEPRECATED. Use --max-seq-len-to-capture
                        instead)
  --max-seq-len-to-capture MAX_SEQ_LEN_TO_CAPTURE
                        Maximum sequence length covered by CUDA graphs. When a sequence has context length larger than this, we fall back to eager mode. Additionally for encoder-decoder models, if
                        the sequence length of the encoder input is larger than this, we fall back to the eager mode.
  --disable-custom-all-reduce
                        See ParallelConfig.
  --tokenizer-pool-size TOKENIZER_POOL_SIZE
                        Size of tokenizer pool to use for asynchronous tokenization. If 0, will use synchronous tokenization.
  --tokenizer-pool-type TOKENIZER_POOL_TYPE
                        Type of tokenizer pool to use for asynchronous tokenization. Ignored if tokenizer_pool_size is 0.
  --tokenizer-pool-extra-config TOKENIZER_POOL_EXTRA_CONFIG
                        Extra config for tokenizer pool. This should be a JSON string that will be parsed into a dictionary. Ignored if tokenizer_pool_size is 0.
  --limit-mm-per-prompt LIMIT_MM_PER_PROMPT
                        For each multimodal plugin, limit how many input instances to allow for each prompt. Expects a comma-separated list of items, e.g.: `image=16,video=2` allows a maximum of
                        16 images and 2 videos per prompt. Defaults to 1 for each modality.
  --mm-processor-kwargs MM_PROCESSOR_KWARGS
                        Overrides for the multimodal input mapping/processing,e.g., image processor. For example: {"num_crops": 4}.
  --enable-lora         If True, enable handling of LoRA adapters.
  --max-loras MAX_LORAS
                        Max number of LoRAs in a single batch.
  --max-lora-rank MAX_LORA_RANK
                        Max LoRA rank.
  --lora-extra-vocab-size LORA_EXTRA_VOCAB_SIZE
                        Maximum size of extra vocabulary that can be present in a LoRA adapter (added to the base model vocabulary).
  --lora-dtype {auto,float16,bfloat16,float32}
                        Data type for LoRA. If auto, will default to base model dtype.
  --long-lora-scaling-factors LONG_LORA_SCALING_FACTORS
                        Specify multiple scaling factors (which can be different from base model scaling factor - see eg. Long LoRA) to allow for multiple LoRA adapters trained with those scaling
                        factors to be used at the same time. If not specified, only adapters trained with the base model scaling factor are allowed.
  --max-cpu-loras MAX_CPU_LORAS
                        Maximum number of LoRAs to store in CPU memory. Must be >= than max_num_seqs. Defaults to max_num_seqs.
  --fully-sharded-loras
                        By default, only half of the LoRA computation is sharded with tensor parallelism. Enabling this will use the fully sharded layers. At high sequence length, max rank or
                        tensor parallel size, this is likely faster.
  --enable-prompt-adapter
                        If True, enable handling of PromptAdapters.
  --max-prompt-adapters MAX_PROMPT_ADAPTERS
                        Max number of PromptAdapters in a batch.
  --max-prompt-adapter-token MAX_PROMPT_ADAPTER_TOKEN
                        Max number of PromptAdapters tokens
  --device {auto,cuda,neuron,cpu,openvino,tpu,xpu}
                        Device type for vLLM execution.
  --num-scheduler-steps NUM_SCHEDULER_STEPS
                        Maximum number of forward steps per scheduler call.
  --multi-step-stream-outputs
                        If True, then multi-step will stream outputs for every step
  --scheduler-delay-factor SCHEDULER_DELAY_FACTOR
                        Apply a delay (of delay factor multiplied by previousprompt latency) before scheduling next prompt.
  --enable-chunked-prefill [ENABLE_CHUNKED_PREFILL]
                        If set, the prefill requests can be chunked based on the max_num_batched_tokens.
  --speculative-model SPECULATIVE_MODEL
                        The name of the draft model to be used in speculative decoding.
  --speculative-model-quantization {aqlm,awq,deepspeedfp,tpu_int8,fp8,fbgemm_fp8,modelopt,marlin,gguf,gptq_marlin_24,gptq_marlin,awq_marlin,gptq,compressed-tensors,bitsandbytes,qqq,experts_int8,neuron_quant,None}
                        Method used to quantize the weights of speculative model.If None, we first check the `quantization_config` attribute in the model config file. If that is None, we assume
                        the model weights are not quantized and use `dtype` to determine the data type of the weights.
  --num-speculative-tokens NUM_SPECULATIVE_TOKENS
                        The number of speculative tokens to sample from the draft model in speculative decoding.
  --speculative-draft-tensor-parallel-size SPECULATIVE_DRAFT_TENSOR_PARALLEL_SIZE, -spec-draft-tp SPECULATIVE_DRAFT_TENSOR_PARALLEL_SIZE
                        Number of tensor parallel replicas for the draft model in speculative decoding.
  --speculative-max-model-len SPECULATIVE_MAX_MODEL_LEN
                        The maximum sequence length supported by the draft model. Sequences over this length will skip speculation.
  --speculative-disable-by-batch-size SPECULATIVE_DISABLE_BY_BATCH_SIZE
                        Disable speculative decoding for new incoming requests if the number of enqueue requests is larger than this value.
  --ngram-prompt-lookup-max NGRAM_PROMPT_LOOKUP_MAX
                        Max size of window for ngram prompt lookup in speculative decoding.
  --ngram-prompt-lookup-min NGRAM_PROMPT_LOOKUP_MIN
                        Min size of window for ngram prompt lookup in speculative decoding.
  --spec-decoding-acceptance-method {rejection_sampler,typical_acceptance_sampler}
                        Specify the acceptance method to use during draft token verification in speculative decoding. Two types of acceptance routines are supported: 1) RejectionSampler which does
                        not allow changing the acceptance rate of draft tokens, 2) TypicalAcceptanceSampler which is configurable, allowing for a higher acceptance rate at the cost of lower
                        quality, and vice versa.
  --typical-acceptance-sampler-posterior-threshold TYPICAL_ACCEPTANCE_SAMPLER_POSTERIOR_THRESHOLD
                        Set the lower bound threshold for the posterior probability of a token to be accepted. This threshold is used by the TypicalAcceptanceSampler to make sampling decisions
                        during speculative decoding. Defaults to 0.09
  --typical-acceptance-sampler-posterior-alpha TYPICAL_ACCEPTANCE_SAMPLER_POSTERIOR_ALPHA
                        A scaling factor for the entropy-based threshold for token acceptance in the TypicalAcceptanceSampler. Typically defaults to sqrt of --typical-acceptance-sampler-posterior-
                        threshold i.e. 0.3
  --disable-logprobs-during-spec-decoding [DISABLE_LOGPROBS_DURING_SPEC_DECODING]
                        If set to True, token log probabilities are not returned during speculative decoding. If set to False, log probabilities are returned according to the settings in
                        SamplingParams. If not specified, it defaults to True. Disabling log probabilities during speculative decoding reduces latency by skipping logprob calculation in proposal
                        sampling, target sampling, and after accepted tokens are determined.
  --model-loader-extra-config MODEL_LOADER_EXTRA_CONFIG
                        Extra config for model loader. This will be passed to the model loader corresponding to the chosen load_format. This should be a JSON string that will be parsed into a
                        dictionary.
  --ignore-patterns IGNORE_PATTERNS
                        The pattern(s) to ignore when loading the model.Default to 'original/**/*' to avoid repeated loading of llama's checkpoints.
  --preemption-mode PREEMPTION_MODE
                        If 'recompute', the engine performs preemption by recomputing; If 'swap', the engine performs preemption by block swapping.
  --served-model-name SERVED_MODEL_NAME [SERVED_MODEL_NAME ...]
                        The model name(s) used in the API. If multiple names are provided, the server will respond to any of the provided names. The model name in the model field of a response
                        will be the first name in this list. If not specified, the model name will be the same as the `--model` argument. Noted that this name(s)will also be used in `model_name`
                        tag content of prometheus metrics, if multiple names provided, metricstag will take the first one.
  --qlora-adapter-name-or-path QLORA_ADAPTER_NAME_OR_PATH
                        Name or path of the QLoRA adapter.
  --otlp-traces-endpoint OTLP_TRACES_ENDPOINT
                        Target URL to which OpenTelemetry traces will be sent.
  --collect-detailed-traces COLLECT_DETAILED_TRACES
                        Valid choices are model,worker,all. It makes sense to set this only if --otlp-traces-endpoint is set. If set, it will collect detailed traces for the specified modules.
                        This involves use of possibly costly and or blocking operations and hence might have a performance impact.
  --disable-async-output-proc
                        Disable async output processing. This may result in lower performance.
  --override-neuron-config OVERRIDE_NEURON_CONFIG
                        override or set neuron device configuration.
  --disable-log-requests
                        Disable logging requests.
  --max-log-len MAX_LOG_LEN
                        Max number of prompt characters or prompt ID numbers being printed in log. Default: Unlimited
  --disable-fastapi-docs
                        Disable FastAPI's OpenAPI schema, Swagger UI, and ReDoc endpoint

try to run it in docker, and simulate the http call

# in the docker shell terminal
python3 -m vllm.entrypoints.openai.api_server --port=8080 --model=TinyLlama/TinyLlama-1.1B-Chat-v1.0

# in another terminal
curl -v http://127.0.0.1:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
     "model":"TinyLlama/TinyLlama-1.1B-Chat-v1.0",
     "messages":[
        {
          "role":"user",
          "content":"give me example"
        }
     ]
    }'
# *   Trying 127.0.0.1:8080...
# * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
# > POST /v1/chat/completions HTTP/1.1
# > Host: 127.0.0.1:8080
# > User-Agent: curl/7.76.1
# > Accept: */*
# > Content-Type: application/json
# > Content-Length: 166
# >
# * Mark bundle as not supporting multiuse
# < HTTP/1.1 200 OK
# < date: Sun, 08 Dec 2024 05:59:48 GMT
# < server: uvicorn
# < content-length: 1309
# < content-type: application/json
# <
# {"id":"chat-eeca7520eaa94c49b0693e844abfbe29","object":"chat.completion","created":1733637592,"model":"TinyLlama/TinyLlama-1.1B-Chat-v1.0","choices":[{"index":0,"message":{"role":"assistant","content":"Example:\n\nImagine a world where all food is delivered by drones. In this future, we have the convenience of ordering food for delivery when we're out and about. The meals are prepared at a centralized location, and the drones transport them to our doorstep, ensuring that our diet is always fresh and healthy. We don't have to worry about cooking or grocery shopping, as these tasks are now handled automatically. In this world, we have the ability to customize our meals to our individual tastes and preferences, making grocery shopping a thing of the past. We can also communicate with our food delivery service to track their progress and ensure that our orders are on time and fresh.\n\nThis scenario highlights the benefits of a future where food delivery is automated and personalized. It demonstrates how technolo* Connection #0 to host 127.0.0.1 left intact
# gy has the potential to transform our food supply chain, making it more efficient, convenient, and sustainable.","tool_calls":[]},"logprobs":null,"finish_reason":"stop","stop_reason":null}],"usage":{"prompt_tokens":20,"total_tokens":229,"completion_tokens":209},"prompt_logprobs":null}

If run on a cpu-only host, it gives the following error, because the nvidia container toolkit is not installed and libcuda.so.1 is not available inside the container.

WARNING 12-08 04:21:42 _core_ext.py:180] Failed to import from vllm._core_C with ImportError('libcuda.so.1: cannot open shared object file: No such file or directory')
Traceback (most recent call last):
  File "<frozen runpy>", line 189, in _run_module_as_main
  File "<frozen runpy>", line 112, in _get_module_details
  File "/opt/vllm/lib64/python3.12/site-packages/vllm/__init__.py", line 3, in <module>
    from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
  File "/opt/vllm/lib64/python3.12/site-packages/vllm/engine/arg_utils.py", line 11, in <module>
    from vllm.config import (CacheConfig, ConfigFormat, DecodingConfig,
  File "/opt/vllm/lib64/python3.12/site-packages/vllm/config.py", line 12, in <module>
    from vllm.model_executor.layers.quantization import QUANTIZATION_METHODS
  File "/opt/vllm/lib64/python3.12/site-packages/vllm/model_executor/layers/quantization/__init__.py", line 3, in <module>
    from vllm.model_executor.layers.quantization.aqlm import AQLMConfig
  File "/opt/vllm/lib64/python3.12/site-packages/vllm/model_executor/layers/quantization/aqlm.py", line 11, in <module>
    from vllm import _custom_ops as ops
  File "/opt/vllm/lib64/python3.12/site-packages/vllm/_custom_ops.py", line 8, in <module>
    from vllm._core_ext import ScalarType
  File "/opt/vllm/lib64/python3.12/site-packages/vllm/_core_ext.py", line 182, in <module>
    ScalarType = torch.classes._core_C.ScalarType
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/vllm/lib64/python3.12/site-packages/torch/_classes.py", line 13, in __getattr__
    proxy = torch._C._get_custom_class_python_wrapper(self.name, attr)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Tried to instantiate class '_core_C.ScalarType', but it does not exist! Ensure that it is registered via torch::class_
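
To try the CPU code path locally, use the CPU build of vllm instead of the cuda image. A hedged sketch, assuming the CPU image referenced in the ServingRuntime above accepts the same entrypoint override as in that runtime and can reach huggingface to pull the model:

docker run --rm -it -p 8080:8080 \
  --entrypoint python \
  quay.io/rh-aiservices-bu/vllm-cpu-openai-ubi9:0.2 \
  -m vllm.entrypoints.openai.api_server \
  --port 8080 --device cpu --model distilbert/distilgpt2 --max-model-len 1024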

end