[BUG] lmformatenforcer integration seems to be broken on new versions #696

Open · 3 tasks done
hvico opened this issue Dec 11, 2024 · 0 comments
Labels: bug (Something isn't working)

hvico commented Dec 11, 2024

OS

Linux

GPU Library

CUDA 12.x

Python version

3.10

Pytorch version

2.5

Model

No response

Describe the bug

When running the lmformatenforcer integration for JSON output I get:

Error during execution: 'ExLlamaV2TokenEnforcerFilter' object has no attribute 'background_drop'

If I comment out that call in the generator code, it fails again with another missing attribute related to the logit mask.

Reproduction steps

Try to enforce JSON output with lm-format-enforcer on v0.2.6; a minimal sketch of the setup is below.
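
For reference, this is roughly how the filter is wired up. It is a from-memory sketch based on the usual lm-format-enforcer ExLlamaV2 example rather than my exact code; the model path, schema, and prompt are placeholders.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

from lmformatenforcer import JsonSchemaParser
from lmformatenforcer.integrations.exllamav2 import (
    ExLlamaV2TokenEnforcerFilter,
    build_token_enforcer_tokenizer_data,
)

model_dir = "/models/my-exl2-model"  # placeholder path

# Standard ExLlamaV2 model / cache / tokenizer setup
config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

# Build the lm-format-enforcer filter for a trivial JSON schema
schema = {"type": "object", "properties": {"answer": {"type": "string"}}}
tokenizer_data = build_token_enforcer_tokenizer_data(tokenizer)
json_filter = ExLlamaV2TokenEnforcerFilter(JsonSchemaParser(schema), tokenizer_data)

# On v0.2.6 the generation call raises:
#   AttributeError: 'ExLlamaV2TokenEnforcerFilter' object has no attribute 'background_drop'
output = generator.generate(
    prompt="Answer in JSON: ",
    max_new_tokens=200,
    filters=[json_filter],
)
print(output)
```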

Expected behavior

The JSON-enforcement integration works as it did on earlier versions.

Logs

No response

Additional context

No response

Acknowledgements

  • I have looked for similar issues before submitting this one.
  • I understand that the developers have lives and my issue will be answered when possible.
  • I understand the developers of this program are human, and I will ask my questions politely.