[Feature]: Support speculative decoding #3945
base: main
Conversation
Force-pushed from 28ddcc1 to 2c18726
logger.warning(f'Overriding HF config with {hf_overrides}')
override_hf_config(model_config.hf_config, hf_overrides)

# for serialization of transformers modules
This might not work
It works in the TP case on a single node, but it has not been tested in the DP case on multiple nodes.
inputs: ModelInputs,
cache_engine: CacheEngine,
stream: torch.cuda.Stream = None,
output_position_ids: bool = False,
Outputting position_ids is cheap; we could always output it.
Yes, but only the speculative model reuses position_ids. For long inputs, always outputting position_ids seems inefficient.
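A minimal sketch of the gating being discussed, assuming a simplified forward wrapper (the `ModelInputs` fields and the returned dict layout are illustrative, not the actual patch):

```python
from dataclasses import dataclass

import torch


@dataclass
class ModelInputs:
    input_ids: torch.Tensor     # (1, num_tokens)
    position_ids: torch.Tensor  # (1, num_tokens)


def model_forward(model: torch.nn.Module,
                  inputs: ModelInputs,
                  stream: torch.cuda.Stream = None,
                  output_position_ids: bool = False) -> dict:
    """Run one step and attach position_ids only when the caller asks for it.

    Only the draft (speculative) model reuses position_ids, so for long
    prefill inputs the extra output is skipped unless explicitly requested.
    """
    with torch.cuda.stream(stream):  # no-op when stream is None
        hidden_states = model(inputs.input_ids)
    output = dict(hidden_states=hidden_states)
    if output_position_ids:
        output['position_ids'] = inputs.position_ids
    return output
```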
input_buffers['position_ids'] = torch.zeros((1, max_tokens), dtype=torch.int64, device=device)
if getattr(self.config, 'use_flash_mla', False) is True:
    import flash_mla
seqlens_dtype = torch.int64
When would we need int64?
The default is int64, while MLA and FA3 need int32.
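A small sketch of the dtype selection described in the reply (the `use_flash_mla` flag appears in the diff; the `use_fa3` flag name is an assumption):

```python
import torch


def make_seqlens_buffer(config, max_batches: int, device: str = 'cuda') -> torch.Tensor:
    """Allocate the q_seqlens buffer with the dtype the attention kernel expects.

    The default stays int64; flash MLA and FA3 kernels take int32 sequence
    lengths, so the buffer dtype is narrowed only for those backends.
    """
    seqlens_dtype = torch.int64  # default
    if getattr(config, 'use_flash_mla', False) or getattr(config, 'use_fa3', False):
        seqlens_dtype = torch.int32  # MLA / FA3 kernels require int32 seqlens
    return torch.zeros((max_batches, ), dtype=seqlens_dtype, device=device)
```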
| """Get max tokens.""" | ||
| num_tokens = input_ids.size(1) | ||
| orig_batch = q_seqlens.size(0) | ||
| if num_tokens == orig_batch: |
I do not think passing a tensor here is a good idea.
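One way to read the suggestion: extract the sizes on the caller side and pass plain ints, so the helper never receives tensors. The function name and return values below are placeholders, not the actual patch:

```python
def get_max_tokens(num_tokens: int, orig_batch: int) -> int:
    """Sketch: compute a token budget from plain ints instead of tensors.

    The caller reads the shapes once and passes scalars, e.g.
    get_max_tokens(input_ids.size(1), q_seqlens.size(0)).
    """
    if num_tokens == orig_batch:
        # pure decode step: one query token per sequence in the batch
        return orig_batch
    # prefill / mixed step: the budget is the total number of input tokens
    return num_tokens
```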
def get_logits(self, hidden_states: torch.Tensor):
    """Get logits of model output."""
    draft_model = self.model
    if not isinstance(draft_model, torch.nn.Module):
graph_runner already exposes the model's get_logits.
Yes, but EAGLE does not have get_logits while EAGLE-3 does. Based on graph_runner's get_logits method alone, we cannot tell the two apart. That is why we check here whether the original model has get_logits.
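A sketch of the fallback logic described above: EAGLE-3 drafts define their own `get_logits`, plain EAGLE drafts do not, so the check inspects the wrapped module. The unwrap call and the target lm_head fallback are assumptions for illustration:

```python
import torch


class DraftLogitsHelper:
    """Dispatch get_logits depending on what the draft model provides."""

    def __init__(self, model, target_lm_head: torch.nn.Module):
        self.model = model                    # plain nn.Module or a graph runner
        self.target_lm_head = target_lm_head  # lm_head of the target model

    def get_logits(self, hidden_states: torch.Tensor) -> torch.Tensor:
        draft_model = self.model
        if not isinstance(draft_model, torch.nn.Module):
            # graph-runner case: the runner forwards get_logits either way,
            # so unwrap to the original module before checking (hypothetical call)
            draft_model = draft_model.get_model()
        if hasattr(type(draft_model), 'get_logits'):
            # EAGLE-3 style drafts ship their own output head
            return draft_model.get_logits(hidden_states)
        # plain EAGLE drafts reuse the target model's lm_head
        return self.target_lm_head(hidden_states)
```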
Motivation
Support speculative decoding
Examples
pipeline
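A pipeline sketch; the `speculative_config` field and its keys are assumptions about how this PR exposes the feature, not a confirmed API:

```python
from lmdeploy import PytorchEngineConfig, pipeline

# hypothetical option names: draft model path and number of draft tokens per step
backend_config = PytorchEngineConfig(
    tp=1,
    speculative_config=dict(model='path/to/draft_model', num_speculative_tokens=3),
)
pipe = pipeline('path/to/target_model', backend_config=backend_config)
print(pipe(['Please introduce speculative decoding.']))
```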
serving
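On the serving side, speculative decoding should be transparent to clients; the server startup flags are not shown here, and the request below just uses the existing OpenAI-compatible endpoint of `lmdeploy serve api_server`:

```python
from openai import OpenAI

# default api_server address; adjust to your deployment
client = OpenAI(base_url='http://0.0.0.0:23333/v1', api_key='none')
resp = client.chat.completions.create(
    model='path/to/target_model',
    messages=[{'role': 'user', 'content': 'Please introduce speculative decoding.'}],
)
print(resp.choices[0].message.content)
```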
BC-breaking (Optional)
Does the modification introduce changes that break the backward-compatibility of the downstream repositories?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.
Use cases (Optional)
If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.
Checklist