
[Bug] LlamaIndex embeddings using wrong method #515


Description

@logan-markewich

The LlamaIndex embeddings are being generated with a private method:

embed_vector = embeddings._get_query_embedding(input.text)

It's not clear to me whether the embedding service is meant to handle query embeddings or document embeddings, but either way we should be using the public API: embed_model.get_text_embedding(text) or embed_model.get_query_embedding(query).
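A minimal sketch of what the public-API call could look like. The embed_model / input names are just placeholders carried over from the snippet above, and the import path assumes the llama_index.core layout of newer releases:

from llama_index.core.embeddings import BaseEmbedding

def embed(embed_model: BaseEmbedding, text: str, is_query: bool) -> list[float]:
    # Public API: query embeddings for search-time queries,
    # text embeddings for documents being indexed.
    if is_query:
        return embed_model.get_query_embedding(text)
    return embed_model.get_text_embedding(text)

embed_vector = embed(embeddings, input.text, is_query=False)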

Likely we should have two endpoints: one for queries and one for normal text documents.

We might also want to consider using get_text_embedding_batch() instead of processing one document at a time, but again, that depends on how we want to define our embedding endpoints.
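If a batch endpoint is added, it could look roughly like the sketch below. The request shape (input.documents, doc.text) is hypothetical; only get_text_embedding_batch itself comes from the LlamaIndex API:

# Hypothetical batch handler: `texts` would come from the endpoint's request body.
texts = [doc.text for doc in input.documents]  # hypothetical field names
embed_vectors = embed_model.get_text_embedding_batch(texts, show_progress=False)
# embed_vectors is a list of embeddings, one per input text.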
