Best way to use an OpenAI-compatible embedding API #11809
Replies: 4 comments 4 replies
-
Hey there @BeautyyuYanli! Good to see you back in the world of embeddings 🌐🦙

Based on the code and context you've provided, it seems like you're on the right track. Using the `model_name` keyword argument is a valid way to point `OpenAIEmbedding` at a custom OpenAI-compatible API: in the constructor, `model_name` is popped from `kwargs` and assigned directly to both the query and text engines, bypassing the lookup that validates `model` against the known OpenAI models. Here's the relevant part of the code from `llama_index/embeddings/openai/base.py`:

```python
class OpenAIEmbedding(BaseEmbedding):
    ...
    def __init__(
        ...
        self._query_engine = get_engine(mode, model, _QUERY_MODE_MODEL_DICT)
        self._text_engine = get_engine(mode, model, _TEXT_MODE_MODEL_DICT)
        if "model_name" in kwargs:
            model_name = kwargs.pop("model_name")
            self._query_engine = self._text_engine = model_name
        else:
            model_name = model
```
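For example, you could point it at your own endpoint like this (a sketch; the endpoint URL, key, and model name are placeholders, not values from your setup):

```python
from llama_index.embeddings.openai import OpenAIEmbedding

# Passing model="my-custom-model" would fail: the get_engine() lookup above
# raises for any model name that isn't in the built-in mode/model dicts.
# Passing model_name instead skips that lookup, per the snippet above.
embed_model = OpenAIEmbedding(
    model_name="my-custom-model",         # placeholder: your model's name
    api_base="http://localhost:8000/v1",  # placeholder: your endpoint
    api_key="sk-placeholder",             # many self-hosted servers ignore this
)
```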
As for the documentation, I agree with you. It would be beneficial to have more detailed documentation about the usage of `model_name` with custom OpenAI-compatible endpoints.

I hope this helps! If you have any more questions, feel free to ask.

Sources
-
Greetings! Did you find a way to solve this issue? Thank you in advance!
-
What if I created an OpenAI-like web application with Hugging Face models? Thank you in advance!
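Something like this minimal sketch is what I have in mind, mirroring OpenAI's `/v1/embeddings` response shape (FastAPI and sentence-transformers are assumptions here; the model is just an example):

```python
from typing import List, Union

from fastapi import FastAPI
from pydantic import BaseModel
from sentence_transformers import SentenceTransformer

app = FastAPI()
model = SentenceTransformer("BAAI/bge-small-en-v1.5")  # any Hugging Face embedding model

class EmbeddingRequest(BaseModel):
    input: Union[List[str], str]
    model: str = "bge-small-en-v1.5"

@app.post("/v1/embeddings")
def create_embeddings(req: EmbeddingRequest):
    texts = [req.input] if isinstance(req.input, str) else req.input
    vectors = model.encode(texts).tolist()
    # Mirror the OpenAI embeddings response schema so OpenAI-style clients
    # (such as OpenAIEmbedding above) can consume it unchanged.
    return {
        "object": "list",
        "model": req.model,
        "data": [
            {"object": "embedding", "index": i, "embedding": vec}
            for i, vec in enumerate(vectors)
        ],
        "usage": {"prompt_tokens": 0, "total_tokens": 0},
    }
```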
-
Did anyone find a solution to this problem?
-
Hello everyone!
I'm using my own OpenAI-compatible embedding API; the runnable code looks roughly like this:
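(A minimal sketch; the endpoint URL, API key, and model name below are placeholders.)

```python
from llama_index.embeddings.openai import OpenAIEmbedding

embed_model = OpenAIEmbedding(
    model_name="my-embedding-model",      # placeholder: whatever the server exposes
    api_base="http://localhost:8000/v1",  # placeholder: my OpenAI-compatible endpoint
    api_key="sk-unused",                  # placeholder: the server ignores the key
)

print(embed_model.get_text_embedding("hello world")[:5])
```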
But if I use the arg `model` instead of `model_name`, it will raise an error. I checked the code in `llama_index/embeddings/openai/base.py`, so I'm not sure whether it is a good idea to set the `model_name` arg to use my custom OpenAI-compatible API. If not, is there a better way? If yes, maybe we need to add some documentation?