
Support InfiniAI Megrez 3b #10893

Open · wants to merge 2 commits into master from megrez
Conversation


@dixyes dixyes commented Dec 19, 2024

This PR adds InfiniAI Megrez support to llama.cpp.

The model as of commit 58f1df16523cb2a9acb225aa808146e052f2b5b2 seems to have a wrong eos_token set in its tokenizer_config.json: the chat template uses `<|turn_end|>`, but the JSON has `<|turn_end>`. Not sure if this is on purpose. Also mentioned here.

As a result, the converted model will not stop generating in chat mode. Changing the value to `<|turn_end|>` in tokenizer_config.json before conversion produces a working GGUF.
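The workaround above can be sketched as a small pre-conversion patch. This is a hypothetical helper, not part of this PR, and it assumes `eos_token` is stored as a plain string in tokenizer_config.json (in some HF configs it is an AddedToken object instead):

```python
# Hypothetical helper: patch the mistyped eos_token in tokenizer_config.json
# before running the HF -> GGUF conversion script.
import json
from pathlib import Path


def fix_eos_token(model_dir: str) -> bool:
    """Replace the malformed '<|turn_end>' eos_token with '<|turn_end|>'.

    Returns True if the file was patched, False if no fix was needed.
    Assumes eos_token is a plain string (not an AddedToken dict).
    """
    cfg_path = Path(model_dir) / "tokenizer_config.json"
    cfg = json.loads(cfg_path.read_text(encoding="utf-8"))
    if cfg.get("eos_token") == "<|turn_end>":  # missing trailing '|'
        cfg["eos_token"] = "<|turn_end|>"      # token the chat template emits
        cfg_path.write_text(
            json.dumps(cfg, ensure_ascii=False, indent=2), encoding="utf-8"
        )
        return True
    return False
```

Run once against the downloaded model directory before invoking the converter; on an already-fixed checkout it is a no-op.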

@github-actions github-actions bot added testing Everything test related python python script changes labels Dec 19, 2024
Collaborator

@ngxson ngxson left a comment

I'm not sure about the tokenizer_pre == "megrez" part (if other collaborators know, please feel free to review this PR).

The template part looks good to me.
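For context on the `tokenizer_pre == "megrez"` question: the conversion script picks the pre-tokenizer name by tokenizing a fixed test string, hashing the resulting token ids, and looking the digest up in a table of known tokenizers, so supporting a new model usually means adding one table entry. A minimal sketch of that lookup pattern follows; the digest values and the mapping below are placeholders, not the real table:

```python
# Sketch of a hash-based pre-tokenizer lookup, as used during HF -> GGUF
# conversion. The digests here are placeholders, not real values.
import hashlib

# Maps sha256(repr(token ids of a fixed test string)) -> tokenizer_pre name.
KNOWN_PRE_TOKENIZERS = {
    "deadbeef" * 8: "llama-bpe",  # placeholder digest
    "cafebabe" * 8: "megrez",     # placeholder digest
}


def guess_tokenizer_pre(token_ids: list[int]) -> str:
    """Return the tokenizer_pre name for a tokenization result, or 'unknown'."""
    chkhsh = hashlib.sha256(str(token_ids).encode("utf-8")).hexdigest()
    return KNOWN_PRE_TOKENIZERS.get(chkhsh, "unknown")
```

An unrecognized digest falls through to "unknown", which is why a reviewer familiar with the real hash table needs to confirm the new entry.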

Contributor

@arch-btw

Thanks for doing this; I was trying it myself but didn't finish. Just so you know, they fixed the eos_token 30 minutes ago.

src/llama.cpp — review thread (outdated, resolved)
@dixyes dixyes force-pushed the megrez branch 2 times, most recently from 048d345 to 73f3d01 on December 22, 2024 at 06:55
Labels
python — python script changes
testing — Everything test related
4 participants