
[Bug]: [V1] New v1 engine does not support n>1? #12584

Open
1 task done
m-harmonic opened this issue Jan 30, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@m-harmonic

Your current environment

vLLM version: 0.7.0

Model Input Dumps

No response

🐛 Describe the bug

When using the V1 engine, LLM.generate() returns only one CompletionOutput per prompt even when SamplingParams sets n>1.

Is this expected to work, or is n>1 not yet supported in V1? If it is not supported yet, are there plans to add it?
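
A minimal sketch of how I am calling it. The model name and prompt are just stand-ins, and I am assuming VLLM_USE_V1=1 is the right way to opt into the V1 engine on 0.7.0:

```python
import os

# Assumption: VLLM_USE_V1=1 enables the V1 engine on vLLM 0.7.0.
os.environ["VLLM_USE_V1"] = "1"

from vllm import LLM, SamplingParams

# Placeholder model and prompt, used only to illustrate the call pattern.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(n=4, temperature=0.8, max_tokens=32)

outputs = llm.generate(["The capital of France is"], params)
for request_output in outputs:
    # Expected: 4 CompletionOutput objects (one per sample with n=4).
    # Observed with the V1 engine: only 1.
    print(len(request_output.outputs))
```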

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
m-harmonic added the bug (Something isn't working) label on Jan 30, 2025
@robertgshaw2-redhat
Collaborator

Thanks, we are aware and working on it.
