Follow-up prompts #430
Is there a workflow to just return the graph?
Also, what testing has been done on passing raw source HTML to the source parameter?
Hi @jesse-lane-ai, one way to achieve this is to use the cache attribute to store the contents of a website. That way the site is only fetched once, and the language model is called on the cached contents once per question, instead of re-running the whole pipeline each time (this is handled automatically under the hood). See the cache in the graph config's additional parameters section of the documentation.
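For reference, a minimal sketch of the approach described above, assuming a ScrapeGraphAI-style `SmartScraperGraph` and a cache option in the graph config; the exact config key used here (`cache_path`) is an assumption and may differ between versions:

```python
# Sketch: reuse cached page contents across several prompts.
# "cache_path" is an assumed name for the cache attribute in the graph config.
from scrapegraphai.graphs import SmartScraperGraph

graph_config = {
    "llm": {
        "api_key": "YOUR_OPENAI_API_KEY",  # placeholder
        "model": "gpt-3.5-turbo",
    },
    "cache_path": "./scrape_cache",  # assumed cache attribute
    "verbose": True,
}

questions = [
    "What products are listed on the page?",
    "What is the contact email?",
    "Summarize the pricing information.",
]

# The site is fetched once and cached; each run only calls the
# language model on the stored contents, once per question.
for question in questions:
    graph = SmartScraperGraph(
        prompt=question,
        source="https://example.com",
        config=graph_config,
    )
    print(graph.run())
```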
That should always be possible in the fetch node; any graph using it should work just fine.
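A minimal sketch of passing raw HTML directly as the `source` parameter, assuming the fetch node treats a non-URL string as already-fetched page content (same hypothetical config as the example above):

```python
# Sketch: pass raw HTML instead of a URL as the source.
from scrapegraphai.graphs import SmartScraperGraph

raw_html = """
<html>
  <body>
    <h1>Acme Widgets</h1>
    <ul>
      <li>Widget A - $10</li>
      <li>Widget B - $15</li>
    </ul>
  </body>
</html>
"""

graph = SmartScraperGraph(
    prompt="List the products and their prices.",
    source=raw_html,  # raw HTML string rather than a URL
    config={
        "llm": {
            "api_key": "YOUR_OPENAI_API_KEY",  # placeholder
            "model": "gpt-3.5-turbo",
        },
    },
)
print(graph.run())
```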
You mean returning the vector store / KG? Not yet, but we have already had some requests for it. @VinciGit00
It's not apparent how to run a series of prompts against the knowledge graph, for example if I wanted to ask several questions in a row. I don't want to make multiple API calls on the same website. Maybe I'm not understanding something.
How do I save the knowledge graph and then iterate prompt requests on it without actually calling the website again?