Feature request
I am trying to generate text embeddings for a Mistral-based model using Sentence Transformers, but I am hitting memory issues: the complete model is loaded onto a single Neuron core, and since the Mistral model needs about 16 GB and one Neuron core has 16 GB of memory, it fails with memory constraint errors. I would like an argument in optimum-neuron that activates multiple cores so the model can be spread across them for generation.
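For illustration, here is a rough sketch of the kind of argument I have in mind. It assumes optimum-neuron's `NeuronModelForSentenceTransformers` class with its usual `export`/`batch_size`/`sequence_length` options; the `tensor_parallel_size` argument and the example model id are hypothetical and are exactly what this request is about, not existing API.

```python
# Hypothetical sketch of the requested feature -- the `tensor_parallel_size`
# argument does NOT exist today for NeuronModelForSentenceTransformers; it is
# the option this issue asks for.
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForSentenceTransformers

# Example Mistral-based embedding checkpoint (placeholder for my actual model).
model_id = "intfloat/e5-mistral-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Requested behaviour: shard the model across several NeuronCores instead of
# trying to fit the whole thing on one 16 GB core.
model = NeuronModelForSentenceTransformers.from_pretrained(
    model_id,
    export=True,
    batch_size=1,
    sequence_length=512,
    tensor_parallel_size=2,  # <-- hypothetical argument, not in optimum-neuron yet
)

inputs = tokenizer("example sentence", return_tensors="pt")
outputs = model(**inputs)  # sentence embeddings for downstream use
```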
Motivation
I need to be able to activate multiple cores, and also to run two models in parallel on different cores.
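For the second part, this is a minimal sketch of the kind of setup I mean, assuming the Neuron Runtime's `NEURON_RT_VISIBLE_CORES` environment variable can pin each process to a disjoint set of cores (the script name, core assignments, and model ids below are placeholders, and this is only a workaround, not the argument-based API I am asking for):

```python
# Sketch: run two embedding models in parallel by pinning each process to its
# own NeuronCore via NEURON_RT_VISIBLE_CORES (set before the runtime starts),
# e.g. launch twice as `python serve.py 0` and `python serve.py 1`.
import os
import sys

core_id = sys.argv[1]  # "0" for the first model, "1" for the second
os.environ["NEURON_RT_VISIBLE_CORES"] = core_id

# Import only after the environment variable is set so the runtime picks it up.
from optimum.neuron import NeuronModelForSentenceTransformers

# Placeholder model ids for two pre-compiled Neuron models.
model_id = "my-org/model-a-neuron" if core_id == "0" else "my-org/model-b-neuron"
model = NeuronModelForSentenceTransformers.from_pretrained(model_id)
```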
Your contribution
I was able to run smaller models, but I am facing these issues with larger models.