[Model] Add Flux2 support #302
base: main
Conversation
Signed-off-by: Prajwal A <[email protected]>
cc: @ZJY0516
  - Qwen2.5-Omni: user_guide/examples/offline_inference/qwen2_5_omni.md
  - Qwen3-Omni: user_guide/examples/offline_inference/qwen3_omni.md
  - Text-To-Image: user_guide/examples/offline_inference/text_to_image.md
+ - FLUX 2: user_guide/examples/offline_inference/flux2.md
can this model use text_to_image.md as an example? #274
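For reference, the unified example the comment points at boils down to a standard diffusers-style generation call. A minimal sketch of what a FLUX 2 entry there might look like, assuming the diffusers==0.36.0 backend this PR pins; the checkpoint id, step count, and guidance value are placeholders, not taken from the PR:

```python
# Minimal FLUX 2 text-to-image sketch against stock diffusers (>=0.36.0).
# The checkpoint id and generation parameters below are assumed placeholders.
import torch
from diffusers import Flux2Pipeline  # shipped with diffusers 0.36.0 per this PR

pipe = Flux2Pipeline.from_pretrained(
    "black-forest-labs/FLUX.2-dev",   # assumed model id
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    prompt="a watercolor painting of a lighthouse at dawn",
    num_inference_steps=28,           # assumed
    guidance_scale=4.0,               # assumed
).images[0]
image.save("flux2_example.png")
```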
@@ -0,0 +1,53 @@
+ # FLUX 2 Offline Inference
I think this is no longer necessary after text_to_image.md.
| "librosa>=0.11.0", | ||
| "resampy>=0.4.3", | ||
| "diffusers==0.35.2", | ||
| "diffusers==0.36.0", |
ZJY0516 left a comment:
ValueError: There is no module or parameter named 'transformer.transformer_blocks.0.attn.add_k_proj' in Flux2Pipeline
Loading safetensors checkpoint shards: 14% Completed | 1/7 [00:01<00:11, 1.98s/it]
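The missing parameter suggests a name-mapping gap: the diffusers checkpoint keeps separate add_q_proj / add_k_proj / add_v_proj weights for the context stream of the joint attention, while a vLLM-optimized attention module typically fuses them into one projection, so load_weights has to translate checkpoint names before the parameter lookup. A rough sketch of that common vLLM-style remapping; the fused parameter names and shard ids here are illustrative, not the PR's actual code:

```python
# Illustrative vLLM-style name remapping: the diffusers checkpoint stores
# separate q/k/v (and add_q/add_k/add_v) projections, while fused linear
# layers expose a single parameter, so checkpoint names must be rewritten.
stacked_params_mapping = [
    # (fused param name, checkpoint param name, shard id) -- names assumed
    ("qkv_proj", "to_q", "q"),
    ("qkv_proj", "to_k", "k"),
    ("qkv_proj", "to_v", "v"),
    ("added_qkv_proj", "add_q_proj", "q"),
    ("added_qkv_proj", "add_k_proj", "k"),
    ("added_qkv_proj", "add_v_proj", "v"),
]

def remap(name: str) -> tuple[str, str | None]:
    """Map a diffusers checkpoint name to the fused parameter name."""
    for fused, ckpt, shard_id in stacked_params_mapping:
        if ckpt in name:
            return name.replace(ckpt, fused), shard_id
    return name, None  # unmapped names fall through to a direct lookup
```

If a name like add_k_proj has no entry here (or the fused module is registered under a different prefix), the loader falls back to a direct named-parameter lookup and raises exactly the ValueError reported above.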
Add FLUX 2 diffusion model support
Ref: vllm-project/vllm-omni#153

Summary
Adds support for FLUX 2 text-to-image diffusion with a dual-stream + single-stream transformer architecture.
What’s included
- Flux2Transformer2DModel: dual-stream (8 blocks) + single-stream (48 blocks), 4D RoPE, vLLM-optimized linear/norm layers.
- Flux2Pipeline: Mistral3-based prompt embeddings, 128-channel latents with 2×2 patch packing (sketched below), FlowMatch Euler scheduler, VAE decode handling.
- Flux2Pipeline added to DiffusionModelRegistry and the post-process function registry.
- diffusers==0.36.0 (includes pipelines.flux2 + AutoencoderKLFlux2); no custom diffusers path / monkey patch.
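A rough sketch of the 2×2 patch packing mentioned above: each 2×2 spatial patch of the VAE latent grid is folded into one transformer token, and the decode path unfolds it again before the VAE. The 32-to-128 channel split is an assumption chosen only to match the "128-channel latents" wording; the real layout may differ:

```python
# Sketch of FLUX-style 2x2 patch packing: fold each 2x2 spatial patch of the
# latent grid into one token, and unfold before VAE decode. Channel counts
# (32 VAE channels -> 128 packed channels) are assumptions for illustration.
import torch

def pack_latents(latents: torch.Tensor) -> torch.Tensor:
    """(B, C, H, W) -> (B, (H//2)*(W//2), C*4) token sequence."""
    b, c, h, w = latents.shape
    latents = latents.view(b, c, h // 2, 2, w // 2, 2)
    latents = latents.permute(0, 2, 4, 1, 3, 5)       # B, H/2, W/2, C, 2, 2
    return latents.reshape(b, (h // 2) * (w // 2), c * 4)

def unpack_latents(tokens: torch.Tensor, h: int, w: int) -> torch.Tensor:
    """Inverse of pack_latents, back to (B, C, H, W) for VAE decode."""
    b, _, packed_c = tokens.shape
    c = packed_c // 4
    tokens = tokens.view(b, h // 2, w // 2, c, 2, 2)
    tokens = tokens.permute(0, 3, 1, 4, 2, 5)         # B, C, H/2, 2, W/2, 2
    return tokens.reshape(b, c, h, w)

x = torch.randn(1, 32, 64, 64)    # assumed 32-channel VAE latent
tokens = pack_latents(x)          # -> (1, 1024, 128) packed tokens
assert torch.allclose(unpack_latents(tokens, 64, 64), x)
```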
Files changed

- vllm_omni/diffusion/models/flux2/ (pipeline_flux2.py, flux2_transformer.py, __init__.py)
- vllm_omni/diffusion/registry.py
- pyproject.toml (diffusers pin)
- docs/user_guide/examples/offline_inference/flux2.md
- docs/.nav.yml
- docs/models/supported_models.md

Test plan
Test results
Checklist
supported_models.md