
Conversation

@linyueqian
Contributor


Purpose

Add support for Stable Audio Open (stabilityai/stable-audio-open-1.0) for text-to-audio generation in vLLM-Omni.

Test Plan

python examples/offline_inference/text_to_audio/text_to_audio.py --model stabilityai/stable-audio-open-1.0 --prompt "The sound of a dog barking" --output dog_barking.wav

Test Result

dog_barking.wav


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing a test command.
  • The test results, such as pasting a before/after comparison or e2e results.
  • (Optional) Any necessary documentation updates, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user-facing, please update the release notes draft.


Signed-off-by: linyueqian <[email protected]>

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines +390 to +392
num_inference_steps = req.num_inference_steps or num_inference_steps
guidance_scale = req.guidance_scale if req.guidance_scale > 1.0 else guidance_scale


P1: Honor guidance_scale ≤ 1 from requests

guidance_scale from the request is only applied when it exceeds 1.0; otherwise the pipeline falls back to the default argument (7.0). This prevents callers from disabling classifier-free guidance (CFG) or using a lower scale (e.g., requesting 0 or 1 via Omni.generate): the model always runs CFG at scale 7 regardless of what was requested, making unconditional or low-guidance Stable Audio generation impossible.
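One way to fix this is to treat an *unset* field, rather than a ≤ 1 value, as the signal to fall back to the pipeline default. A minimal sketch below, assuming the request fields default to `None` when unspecified; the `AudioRequest` shape, `resolve_params` helper, and the default of 100 inference steps are illustrative placeholders, not the actual vLLM-Omni code:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AudioRequest:
    # Hypothetical request shape: None means "not specified",
    # so explicit low values (0.0, 1.0) are still honored.
    num_inference_steps: Optional[int] = None
    guidance_scale: Optional[float] = None


def resolve_params(req: AudioRequest,
                   num_inference_steps: int = 100,
                   guidance_scale: float = 7.0) -> tuple:
    """Fall back to the defaults only when the request leaves a field unset."""
    if req.num_inference_steps is not None:
        num_inference_steps = req.num_inference_steps
    if req.guidance_scale is not None:  # honors 0.0 / 1.0 (CFG disabled)
        guidance_scale = req.guidance_scale
    return num_inference_steps, guidance_scale
```

With this check, a request that explicitly asks for `guidance_scale=1.0` gets scale 1.0 (CFG effectively off), while a request that omits the field still gets the default of 7.0.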


@david6666666 david6666666 linked an issue Dec 16, 2025 that may be closed by this pull request

Development

Successfully merging this pull request may close these issues.

[New Model]: stabilityai/stable-audio-open-1.0
