Gemma integration #4

Open
Isaina opened this issue Feb 22, 2024 · 4 comments

Comments


Isaina commented Feb 22, 2024

Hi,

Could you also implement the Gemma model, so it can be compared with Llama?

Best regards

AbeEstrada commented

@Isaina Open query_vdb.py and replace line 36:

- model, tokenizer = load("mlx-community/NeuralBeagle14-7B-4bit-mlx")
+ model, tokenizer = load("mlx-community/quantized-gemma-2b-it")
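
For anyone trying this, a minimal sketch of how that load call fits into an mlx-lm script. Only the model swap above comes from this thread; the prompt and generation parameters below are assumptions for illustration, and the vector-DB retrieval that query_vdb.py performs is omitted:

```python
# Minimal sketch using the mlx-lm package (pip install mlx-lm).
# The retrieval and prompt-building logic of query_vdb.py is omitted here.
from mlx_lm import load, generate

# Swapping this model identifier is the only change needed to try Gemma.
model, tokenizer = load("mlx-community/quantized-gemma-2b-it")

prompt = "Summarize the retrieved context."  # hypothetical prompt
response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```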


Isaina commented Apr 5, 2024

Can you also support MLX on the Intel macOS platform?


AbeEstrada commented Apr 5, 2024

@Isaina No, MLX was created for Apple silicon. If you want to run this on an Intel Mac, try llama.cpp instead.
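
For reference, a minimal sketch of running a comparable GGUF model on Intel hardware through the llama-cpp-python bindings; the model path, prompt, and parameters are hypothetical, not taken from this repository:

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python),
# which runs llama.cpp on CPU and therefore works on Intel Macs.
from llama_cpp import Llama

# Hypothetical path: download a quantized GGUF model separately.
llm = Llama(model_path="./models/gemma-2b-it.Q4_K_M.gguf")

output = llm("Summarize the retrieved context.", max_tokens=256)
print(output["choices"][0]["text"])
```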


Isaina commented Apr 5, 2024 via email
