Gemma integration #4
@Isaina proposed changing the model loaded in the code:

- model, tokenizer = load("mlx-community/NeuralBeagle14-7B-4bit-mlx")
+ model, tokenizer = load("mlx-community/quantized-gemma-2b-it")
@Isaina: Can you also support MLX on the Intel macOS platform?
Abe Estrada (5 April 2024): @Isaina no, MLX was created for Apple silicon.

@Isaina: OK, thanks for the Gemma information.
Original issue text from @Isaina:

Hi, can you also implement the Gemma model to compare with Llama? Best regards.