Use a RouterQueryEngine as a chat #11907
-
I'm working on a project that has multiple data sources: I have several vector indexes (one per downloaded file) plus a SQL database, and a `RouterQueryEngine` that routes each question to the right engine. This works well for single questions. But now I would like to use the same system as a chat, where the chat keeps a memory of the previous messages. Is this something possible? How can I achieve it? I tried multiple things, without success so far. Here is my (simplified) code below:

```python
from __future__ import annotations

from dotenv import load_dotenv
from llama_index.core import Settings, SimpleDirectoryReader, SQLDatabase, VectorStoreIndex
from llama_index.core.query_engine import NLSQLTableQueryEngine, RouterQueryEngine
from llama_index.core.selectors import LLMSingleSelector
from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding
from llama_index.llms.azure_openai import AzureOpenAI
from llama_index.agent.openai import OpenAIAgent

files = []
files_base_path = "./data/downloaded"

if __name__ == '__main__':
    load_dotenv()

    # Init LLM and store it globally
    Settings.llm = AzureOpenAI()

    # Init embedding model and store it globally
    Settings.embed_model = AzureOpenAIEmbedding()

    # List of tables that will be used
    # (`table_name` and `engine` are defined elsewhere in the real project)
    sql_tables = [table_name]

    # SQL query engine that generates SQL queries
    sql_query_engine = NLSQLTableQueryEngine(
        sql_database=SQLDatabase(
            engine=engine,
            include_tables=sql_tables,
        ),
        tables=sql_tables,
    )

    # Vector indexes
    vector_stores: list[VectorStoreIndex] = []

    # Load downloaded files
    documents = SimpleDirectoryReader(
        input_dir=files_base_path,
    ).load_data()

    # For each document found, generate a vector index
    for document in documents:
        # Create and automatically save the vector index
        vector_store = VectorStoreIndex.from_documents(
            documents=[document],
        )
        vector_stores.append(vector_store)

    # Vector query engines from the vector indexes
    vector_query_engines = [
        index.as_query_engine()
        for index in vector_stores
    ]

    # Query engine tools from the vector query engines
    vector_query_engine_tools = [
        QueryEngineTool.from_defaults(
            query_engine=query_engine,
            name=f"..._engine",
            description=f"...",
        )
        for file, query_engine in zip(files, vector_query_engines)
    ]

    # SQL query engine tool from the SQL query engine
    sql_query_engine_tool = QueryEngineTool(
        query_engine=sql_query_engine,
        metadata=ToolMetadata(
            name="sql_query_engine",
            description="...",
        ),
    )

    # Router query engine based on all the query engine tools
    router_query_engine = RouterQueryEngine(
        selector=LLMSingleSelector.from_defaults(),
        query_engine_tools=(
            [sql_query_engine_tool] +
            vector_query_engine_tools
        ),
        verbose=True,
    )

    while True:
        question = input("> ")
        if question == "exit":
            break
        response = router_query_engine.query(question)  # TODO question here
        print(str(response))
```
Replies: 5 comments 7 replies
-
Hey @KirianCaumes, back for more adventures in coding, I see! 🚀

Based on your requirements, it seems like you want to maintain a history of previous messages in your chat system. This can be achieved by using a memory buffer that keeps track of the conversation history. In your current setup, you're using the `RouterQueryEngine`, which treats each query independently and does not retain any conversation state.

Here's a simplified example of how you can modify your code to use `SimpleChatEngine` with a `ChatMemoryBuffer`:

```python
from llama_index.core.chat_engine.simple import SimpleChatEngine
from llama_index.core.memory import ChatMemoryBuffer

# Initialize memory buffer
memory = ChatMemoryBuffer.from_defaults()

# Initialize chat engine
chat_engine = SimpleChatEngine(llm=Settings.llm, memory=memory)

while True:
    question = input("> ")
    if question == "exit":
        break
    response = chat_engine.chat(question)
    print(str(response))
```

In this example, the `SimpleChatEngine` uses a `ChatMemoryBuffer` to keep track of the conversation history and includes it with each request to the LLM. Please note that this is a simplified example and you might need to adjust it according to your project's structure and requirements.

I hope this helps! If you have any further questions or need more clarification, feel free to ask.
-
Hey @KirianCaumes! I'm also looking into this requirement: I need the RouterQueryEngine to keep track of the chat history, or alternatively a way to switch between tools within the chat engine modes. Let me know if you've found a solution for this yet!
-
Hey @KirianCaumes, I found an alternative solution. I'm not sure it can be applied to your problem, nonetheless: instead of creating a RouterQueryEngine with a chat mode, I create several `RetrieverTool`s and a `RouterRetriever`, then pass this new retriever to a `ContextChatEngine` object. Hope it can help.
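For reference, the approach described above might look roughly like this (a sketch, not tested; `vector_index` and the tool description are placeholders for your own components):

```python
# Sketch of the RouterRetriever + ContextChatEngine approach described above.
# `vector_index` is a placeholder for one of your VectorStoreIndex objects.
from llama_index.core.chat_engine import ContextChatEngine
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.core.retrievers import RouterRetriever
from llama_index.core.selectors import LLMSingleSelector
from llama_index.core.tools import RetrieverTool

retriever_tools = [
    RetrieverTool.from_defaults(
        retriever=vector_index.as_retriever(),
        description="Useful for questions about the downloaded documents.",
    ),
    # ... one RetrieverTool per data source
]

# The router picks the most relevant retriever for each message
router_retriever = RouterRetriever(
    selector=LLMSingleSelector.from_defaults(),
    retriever_tools=retriever_tools,
)

# ContextChatEngine fetches context through the router on every turn
# and keeps the conversation history in its memory buffer
chat_engine = ContextChatEngine.from_defaults(
    retriever=router_retriever,
    memory=ChatMemoryBuffer.from_defaults(),
)

response = chat_engine.chat("What do the documents say about X?")
print(str(response))
```

The trade-off versus a RouterQueryEngine is that routing happens at the retrieval level rather than the query-engine level, so an NLSQLTableQueryEngine can't be plugged in directly this way.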
-
Hello @KirianCaumes,
-
In the following function I'm not able to save the conversation history, and the prompt template is not working; I get answers outside of the given context. Can anyone help me out with this?

```python
def assistant(verbose: bool = True):
    # ... (function body truncated in the original post)
    )

query_engine = assistant()
```