Refactor token healing initialization. #330

Open

bjj wants to merge 3 commits into master
Conversation

bjj commented Feb 10, 2024

begin_stream now leaves the stream in a state where the first call to stream will generate the healed token without needing a special case.

This is just the first step in making it possible to feed the streaming generator logits (from batch processing) rather than having it pull logits from the model. The heal_token case made the number of model.forward calls per stream() call variable, and this change makes it constant.
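For illustration, a minimal sketch of the driver loop this is aiming at, assuming exllamav2's streaming generator API (begin_stream / stream); the keyword arguments and return shape shown here are illustrative and may not match the actual signatures:

```python
# Hypothetical driver loop: after this change, every stream() call maps to a
# constant number of model.forward() calls, including the call that emits the
# healed token, so an external scheduler can count forwards reliably.

generator.begin_stream(input_ids, settings, token_healing = True)

text = ""
while True:
    chunk, eos, _ = generator.stream()   # healed token comes out of the first call
    text += chunk
    if eos:
        break
```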

bjj added 2 commits February 10, 2024 15:01
begin_stream now leaves the stream in a state where the first call to stream will generate the healed token without needing a special case
Callers can take advantage of all the streaming logic while batching logit generation outside of this class.
bjj (Author) commented Feb 10, 2024

Added a second patch with .append_logits()
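For context, a rough sketch of how a caller might combine the proposed .append_logits() with its own batched forward pass; everything apart from the append_logits()/stream() pairing (the batching call, cache handling, and shapes) is schematic rather than existing exllamav2 API:

```python
# One batched forward pass for all active sequences, driven by the caller.
logits = model.forward(next_token_ids, cache = batched_cache)   # (batch, 1, vocab)

for i, gen in enumerate(stream_generators):
    gen.append_logits(logits[i:i + 1])    # hand this stream its row of logits
    chunk, eos, _ = gen.stream()          # streaming decode / stop logic, no model pull
    outputs[i] += chunk
    if eos:
        finished.add(i)
```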

turboderp (Owner) commented Feb 11, 2024

I don't understand why you'd want to send logits to the generator? The whole point of the generator is to pull logits from the model until a stop condition is met. If you just want to sample from logits you've produced with model.forward(), why not call the sampler directly?

Also, the rationale for not making the token healing a separate iteration of stream() is to make sure that every call to stream() uses exactly one token of the available context length.

bjj (Author) commented Feb 11, 2024

> If you just want to sample from logits you've produced with model.forward(), why not call the sampler directly?

The sampler isn't really the problem. It's all the token decode logic in the streaming generator that I want to re-use. I did start out calling sample/decode, and then I ran into all the issues with that (sentencepiece, partial UTF-8 strings, multi-token stop strings, etc.).

> The whole point of the generator is to pull logits from the model until a stop condition is met.

But "pull logits from the model" is about one line out of several hundred (not counting speculative generation). I wanted to refactor the whole class so that the stream-decode part could be re-used. The best place to start seemed to be moving toward a model where init leaves the state "ready for the next model.forward" and stream could have a variant like stream(logits, ...).

> Also, the rationale for not making the token healing a separate iteration of stream() is to make sure that every call to stream() uses exactly one token of the available context length.

If that's important, I think it can be restored easily by having an explicit stream(logits, ...) variant (without that guarantee) and invoking _stream() twice on startup in the healing case for regular callers.
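As a rough sketch of that suggestion (not the actual exllamav2 internals; _stream(), heal_pending, append_logits() and stream_with_logits() are illustrative names):

```python
import torch

def stream(self):
    # Regular path: preserve the "exactly one context token per stream() call"
    # behaviour by folding the pending healed token into the first call.
    chunk, eos, tokens = self._stream()
    if self.heal_pending and not eos:
        self.heal_pending = False
        next_chunk, eos, next_tokens = self._stream()
        chunk += next_chunk
        tokens = torch.cat([tokens, next_tokens], dim = -1)
    return chunk, eos, tokens

def stream_with_logits(self, logits):
    # Explicit variant without that guarantee: consume logits supplied by the
    # caller (e.g. from a batched forward pass) instead of pulling them from
    # the model.
    self.append_logits(logits)
    return self._stream()
```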

bjj (Author) commented Mar 4, 2024

Do you have any further feedback on this PR? Can you spell out how you envision a server that supports continuous batching and streaming would use the facilities in exllamav2?
