
[RFC]: Fixing the ViT Backend especially ROCm #75

@tjtanaa

Motivation.

Right now the ViT attention backend handling is very messy, and the AMD CI is broken.

The ViT changes also do not take the other model.py files into account; they only modify qwen2_5_vl.py, potentially breaking all the other model files:

vllm/model_executor/models/dots_ocr.py
vllm/model_executor/models/ernie45_vl.py
vllm/model_executor/models/glm4_1v.py
vllm/model_executor/models/qwen2_vl.py
vllm/model_executor/models/siglip2navit.py

vLLM has recently been refactored to introduce --mm-encoder-attn-backend for selecting the multimodal encoder attention backend.
The refactor is vllm-project#27061, with a follow-up bugfix in vllm-project#27124.

Since torch.compile was introduced into the ViT (currently only for the Qwen VL models, starting with PR vllm-project#23207), the AMD ViT code paths have been broken. Multiple bugfix PR attempts have not resolved this:

  1. [BUGFIX][ROCM] ViT FlashAttention on ROCm (no GFX9) and contiguous on qwen3vl ROCm TORCH_SDPA vllm-project/vllm#27190 attempts to fix a torch SDPA accuracy issue.
  2. [FIXBUG] Qwen3VL hallucinations without Contiguous on Torch.SDPA vllm-project/vllm#27744 attempts to fix a torch SDPA accuracy issue.

At a high level (detailed under Proposed Change below), we should:

  • Shrink the _Backend registry by introducing a separate _MHA_Backend registry.
  • Make ViT attention backend selection platform-specific: determine and override the backend in the platform interface rather than in the model.py files, and return only an _MHA_Backend value there; the attention functions should only be obtained through maybe_get_vit_flash_attn_backend.
  • Honor --mm-encoder-attn-backend so that unit tests can cover all backends. AMD Instinct GPUs can exercise every backend; Radeon GPUs can only use the TORCH_SDPA code path.

Proposed Change.

Changes

  1. First, we should shrink down the _Backend registry (see https://github.com/vllm-project/vllm/pull/27061/files#r2443909604) by introducing a separate _MHA_Backend registry, as sketched below (after item 7).

  2. Make sure that ViT attention backend selection is platform-specific. The backend should be determined through the platform interface, and any overrides should also be performed there; we should avoid doing this in the model.py files.

  3. get_vit_attn_backend in the platform interface has to be able to access the --mm-encoder-attn-backend setting.

  4. We need to deprecate this line: https://github.com/vllm-project/vllm/blob/33a0ea5f3264b5b2f571b8a53357e10efcc94670/vllm/model_executor/models/vision.py#L96. It uses VLLM_ATTENTION_BACKEND, which is meant for the text backbone; the ViT should not use this environment variable.

  5. In the platform interface, we should only return an _MHA_Backend value, not the attention functions. The functions should only be returned through maybe_get_vit_flash_attn_backend.

  6. Add a logger.info_once so that users know which _MHA_Backend is ultimately selected.

  7. Clean up the CUDA code path. Since vllm.vllm_flash_attn is just a wrapper around the flash_attn library, on CUDA we should always use vllm.vllm_flash_attn instead of flash_attn (see https://github.com/vllm-project/vllm/blob/ba33e8830dceb32e9b03508bbff435e3082759b8/vllm/attention/layer.py#L120-L125).
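
To make the items above concrete, here is a minimal sketch of how the separate registry and the platform-level selection could look. It is illustrative only: the _MHA_Backend members, the Platform method names other than get_vit_attn_backend, the maybe_get_vit_flash_attn_backend signature, and the flash-attention import path are assumptions for discussion, not the final API.

```python
import enum
from typing import Callable, Optional

from vllm.logger import init_logger

logger = init_logger(__name__)


class _MHA_Backend(enum.Enum):
    """Hypothetical encoder/ViT-only registry, kept separate from the
    text-backbone _Backend registry (member names are placeholders)."""
    FLASH_ATTN = enum.auto()
    TORCH_SDPA = enum.auto()
    XFORMERS = enum.auto()


class Platform:
    """Sketch of the platform interface: it decides *which* backend to use
    and never hands out attention functions (items 2, 3, 5, 6)."""

    @classmethod
    def get_vit_attn_backend(
        cls, mm_encoder_attn_backend: Optional[str]
    ) -> _MHA_Backend:
        if mm_encoder_attn_backend is not None:
            # Honor --mm-encoder-attn-backend; never consult
            # VLLM_ATTENTION_BACKEND here (item 4).
            backend = _MHA_Backend[mm_encoder_attn_backend]
        else:
            backend = cls._default_vit_attn_backend()
        # Item 6: tell the user which backend was ultimately selected.
        logger.info_once("Multimodal encoder attention backend: %s", backend.name)
        return backend

    @classmethod
    def _default_vit_attn_backend(cls) -> _MHA_Backend:
        # Platform-specific override point, e.g. TORCH_SDPA on Radeon GPUs.
        return _MHA_Backend.TORCH_SDPA


def maybe_get_vit_flash_attn_backend(backend: _MHA_Backend) -> Optional[Callable]:
    """Items 5 and 7: the only place that returns attention callables. On CUDA
    we prefer vllm.vllm_flash_attn, which wraps the flash_attn library."""
    if backend != _MHA_Backend.FLASH_ATTN:
        return None
    from vllm.vllm_flash_attn import flash_attn_varlen_func  # assumed import path
    return flash_attn_varlen_func
```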

  8. Write unit tests that cover all the different backends (sketched below). Since some of the models are large, the tests should check the available VRAM and only run when there is enough. Providing such unit tests lets developers run them locally.
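
A minimal sketch of what such a test could look like, assuming a pytest fixture similar to vLLM's existing vllm_runner and that --mm-encoder-attn-backend is reachable as an mm_encoder_attn_backend engine argument; the model name, backend list, and VRAM threshold are illustrative placeholders:

```python
import pytest
import torch

# Placeholder threshold and backend names; the real test would derive the
# threshold from the model under test and reuse the _MHA_Backend registry.
MIN_FREE_VRAM_GIB = 24
BACKENDS = ["FLASH_ATTN", "TORCH_SDPA", "XFORMERS"]


def has_enough_vram(min_gib: int = MIN_FREE_VRAM_GIB) -> bool:
    """Gate the heavy multimodal tests on available GPU memory."""
    if not torch.cuda.is_available():
        return False
    free_bytes, _total_bytes = torch.cuda.mem_get_info()
    return free_bytes >= min_gib * 1024**3


@pytest.mark.skipif(not has_enough_vram(), reason="not enough free VRAM")
@pytest.mark.parametrize("backend", BACKENDS)
def test_mm_encoder_attn_backend(vllm_runner, backend: str):
    # `vllm_runner` stands in for however the suite constructs an engine; the
    # key point is that the ViT backend is chosen via --mm-encoder-attn-backend
    # (mm_encoder_attn_backend here), never via VLLM_ATTENTION_BACKEND.
    with vllm_runner("Qwen/Qwen2.5-VL-3B-Instruct",
                     mm_encoder_attn_backend=backend) as llm:
        outputs = llm.generate_greedy(["Describe a cat."], max_tokens=8)
    assert outputs, f"backend {backend} produced no output"
```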

Feedback Period.

No response

CC List.

No response

Any Other Things.

No response
