Replies: 3 comments 22 replies
-
🤖 Hey @OmriLevyTau! I'm Dosu, a friendly bot here to help you navigate LlamaIndex while you wait for a human maintainer to assist you. I'm here to help with bugs, answer your questions, and guide you on becoming a contributor. Let's dive into your issue together! Based on the information provided, it seems that the in-memory document store and index store are always created as part of the default `StorageContext` initialization. However, there are a couple of ways you might be able to work around this. One option is to modify `StorageContext.from_defaults` to accept an `in_memory` flag:
```python
@classmethod
def from_defaults(
    cls,
    docstore: Optional[BaseDocumentStore] = None,
    index_store: Optional[BaseIndexStore] = None,
    vector_store: Optional[Union[VectorStore, BasePydanticVectorStore]] = None,
    image_store: Optional[VectorStore] = None,
    vector_stores: Optional[
        Dict[str, Union[VectorStore, BasePydanticVectorStore]]
    ] = None,
    graph_store: Optional[GraphStore] = None,
    persist_dir: Optional[str] = None,
    fs: Optional[fsspec.AbstractFileSystem] = None,
    in_memory: bool = False,  # new parameter
) -> "StorageContext":
    """Create a StorageContext from defaults.

    Args:
        docstore (Optional[BaseDocumentStore]): document store
        index_store (Optional[BaseIndexStore]): index store
        vector_store (Optional[VectorStore]): vector store
        graph_store (Optional[GraphStore]): graph store
        image_store (Optional[VectorStore]): image store
        in_memory (bool): whether to create in-memory stores or not
    """
    if in_memory or persist_dir is None:
        docstore = docstore or SimpleDocumentStore()
        index_store = index_store or SimpleIndexStore()
        graph_store = graph_store or SimpleGraphStore()
        if vector_store:
            vector_stores = {DEFAULT_VECTOR_STORE: vector_store}
        else:
            vector_stores = vector_stores or {
                DEFAULT_VECTOR_STORE: SimpleVectorStore()
            }
        if image_store:
            # append image store to vector stores
            vector_stores[IMAGE_VECTOR_STORE_NAMESPACE] = image_store
    else:
        # existing persist_dir-based loading code
        ...
```

In this modification, if `in_memory` is set to `True` (or no `persist_dir` is given), the simple in-memory stores are created explicitly; otherwise the existing persistence logic runs.
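To make the control flow concrete, here is a minimal, self-contained sketch of the same default-factory idiom (`x = x or Default()`) in plain Python. `SimpleStore` and `StorageContextSketch` are stand-in names for illustration, not real LlamaIndex classes:

```python
from dataclasses import dataclass
from typing import Dict, Optional

DEFAULT_VECTOR_STORE = "default"

class SimpleStore:
    """Stand-in for SimpleDocumentStore / SimpleIndexStore / SimpleVectorStore."""
    def __init__(self) -> None:
        self.data: Dict[str, str] = {}

@dataclass
class StorageContextSketch:
    docstore: SimpleStore
    index_store: SimpleStore
    vector_stores: Dict[str, SimpleStore]

    @classmethod
    def from_defaults(
        cls,
        docstore: Optional[SimpleStore] = None,
        index_store: Optional[SimpleStore] = None,
        vector_store: Optional[SimpleStore] = None,
    ) -> "StorageContextSketch":
        # Only fall back to a fresh in-memory store when the caller
        # did not supply one; a caller-supplied store is reused as-is.
        docstore = docstore or SimpleStore()
        index_store = index_store or SimpleStore()
        vector_stores = {DEFAULT_VECTOR_STORE: vector_store or SimpleStore()}
        return cls(docstore, index_store, vector_stores)

# A caller-supplied vector store is reused, not replaced:
my_vs = SimpleStore()
ctx = StorageContextSketch.from_defaults(vector_store=my_vs)
assert ctx.vector_stores[DEFAULT_VECTOR_STORE] is my_vs
```

The key point of the pattern is that the in-memory fallbacks are still created for any store the caller omits, which is exactly the behaviour the question is asking to avoid.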
```python
client = OpensearchVectorClient(
    endpoint, idx, 1536, embedding_field=embedding_field, text_field=text_field
)
# initialize vector store
vector_store = OpensearchVectorStore(client)
```

And start using that. Note that we changed our API in the latest version (0.6.0). Also, you may need to pass the query embedding explicitly, like this:

```python
from llama_index.vector_stores import OpensearchVectorStore, OpensearchVectorClient
from llama_index.vector_stores.types import VectorStoreQuery
from llama_index.embeddings.openai import OpenAIEmbedding

embed_model = OpenAIEmbedding()
auth = {"user": "admin", "pass": "admin"}
client = OpensearchVectorClient(
    endpoint, idx, 1536, embedding_field=embedding_field, text_field=text_field, auth=auth
)
vector_store = OpensearchVectorStore(client)

query_str = "which projects work on supermarkets?"
res = vector_store.query(
    VectorStoreQuery(
        query_str=query_str,
        query_embedding=embed_model.get_text_embedding(query_str),
    )
)
print(res.nodes[0].extra_info["content"])
```

Please note that such modifications might have other implications for the overall functionality of LlamaIndex, so it is recommended to test the system thoroughly after such changes. I hope this helps! If you have any other questions or need further clarification, feel free to ask.
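For intuition, the retrieval pattern the question is after (raw text stored right next to its embedding, with no separate docstore) can be sketched without any vector database at all. Everything below is a toy stand-in using brute-force cosine similarity, not LlamaIndex, OpenSearch, or Milvus API:

```python
import math
from typing import Dict, List

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Each record keeps the raw text alongside its embedding, so retrieval
# needs no separate document store: the vector store is the only storage.
records: List[Dict] = [
    {"content": "project on supermarket logistics", "embedding": [1.0, 0.0]},
    {"content": "project on wind turbines", "embedding": [0.0, 1.0]},
]

def query(query_embedding: List[float], top_k: int = 1) -> List[Dict]:
    # Rank all records by similarity to the query embedding.
    scored = sorted(
        records,
        key=lambda r: cosine(query_embedding, r["embedding"]),
        reverse=True,
    )
    return scored[:top_k]

hits = query([0.9, 0.1])
print(hits[0]["content"])  # the supermarket record scores highest
```

A real store like Milvus does the same thing at scale with approximate nearest-neighbour indexes, but the storage shape (text co-located with the vector) is the same.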
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
-
This is critical. Why does the library need to create these in-memory stores by default?
-
@OmriLevyTau How did you fix this?
-
Hi,
I'm trying to build a simple RAG application using LlamaIndex with Milvus as the vector storage.
The issue is, I don't want to allocate any local or in-memory storage for indices or document storage: I want to store the document texts in Milvus (alongside their embeddings) and use that text at the retrieval stage.
When I use the following simple code, it seems `VectorStoreIndex.from_vector_store` creates an in-memory `BaseIndexStore` and `BaseDocumentStore` behind the scenes (while initializing the default `StorageContext`). Can I avoid this behaviour?