Draft: torch_geometric.nn.nlp.TXT2KG and examples/hotpot_qa.py for recall/precision eval #9728
base: zacks-pr-in-mainfork
Conversation
How difficult would it be to upgrade from Llama 2.0 to 3.1?
I'm not sure what you mean. I don't use either for this PR. If you're talking about the default model for the g_retriever.py example, it should be trivial for you to swap in any LLM or GNN; that's the whole point of the framework.
Needs rebase onto master.
Support for Llama 3.3 70B is out today; it performs as well as Llama 3.1 405B for much less money. Would be awesome!
Many RAG Q+A datasets do not have existing KGs to work with, and KG creation is an essential step in real-world pipelines. This is a simplistic approach; in the future we hope to replace it with more refined ones.
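For context, LLM-driven KG construction typically means prompting a model to emit (subject, relation, object) triples and parsing them back out. A minimal sketch of the parsing step, assuming the LLM is prompted to emit one parenthesized triple per line (the actual TXT2KG implementation may use a different format):

```python
import re
from typing import List, Tuple

Triple = Tuple[str, str, str]

def parse_triples(llm_output: str) -> List[Triple]:
    # Hypothetical helper: extract '(subject, relation, object)' spans
    # from raw LLM text and normalize whitespace in each component.
    triples = []
    for match in re.finditer(r"\(([^,()]+),([^,()]+),([^,()]+)\)", llm_output):
        s, r, o = (part.strip() for part in match.groups())
        triples.append((s, r, o))
    return triples

raw = "(Scott Derrickson, nationality, American)\n(Ed Wood, occupation, filmmaker)"
print(parse_triples(raw))
# → [('Scott Derrickson', 'nationality', 'American'), ('Ed Wood', 'occupation', 'filmmaker')]
```

Robust parsing matters here because LLM output is noisy; a regex over balanced parentheses tolerates surrounding chatter from the model.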
Deliverable: an example of converting a standard RAG benchmark such as HotpotQA into a KG, and of measuring the precision/recall of retrieval.
This PR is based on a copy of #9666; its base will be changed to master once that work is merged.
It now works at small scale; testing at full scale (10% of HotpotQA) is coming soon. The goal is >= 0.5 precision/recall.
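The 0.5 target above can be stated as set-based retrieval metrics over gold vs. retrieved triples. A minimal sketch (the PR's actual eval in examples/hotpot_qa.py may score differently, e.g. with fuzzy matching):

```python
from typing import Set, Tuple

Triple = Tuple[str, str, str]

def precision_recall(retrieved: Set[Triple], relevant: Set[Triple]):
    # precision = |retrieved ∩ relevant| / |retrieved|
    # recall    = |retrieved ∩ relevant| / |relevant|
    if not retrieved or not relevant:
        return 0.0, 0.0
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)

gold = {("Ed Wood", "directed_by", "Tim Burton"),
        ("Scott Derrickson", "nationality", "American")}
got = {("Scott Derrickson", "nationality", "American"),
       ("Ed Wood", "released_in", "1994")}
p, r = precision_recall(got, gold)
print(p, r)  # → 0.5 0.5
```

Exact-match scoring like this is a lower bound: triples that are semantically correct but worded differently count as misses, which is one reason fuzzier matching may be preferable at full scale.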