First, thanks for the help with my previous question (#1023)! I also found #685 very helpful for my needs; much appreciated for referencing it in my issue.
On the note of #685: I'm able to programmatically evaluate a set of prompts from a JSON file using:
```python
from tests.utils import make_user_path_test


def run_eval(cpu=False, bits=None, base_model='h2oai/h2ogpt-oig-oasst1-512-6_9b',
             eval_filename=None, eval_prompts_only_num=1):
    from src.gen import main
    user_path = make_user_path_test()
    kwargs = dict(stream_output=False,
                  langchain_mode='UserData',
                  langchain_modes=['UserData'],
                  )
    eval_out_filename = main(base_model=base_model,
                             eval=True, gradio=False,
                             eval_filename=eval_filename,
                             eval_prompts_only_num=eval_prompts_only_num,
                             eval_as_output=False,
                             eval_prompts_only_seed=123456,
                             # !! Added so sources are appended
                             answer_with_sources=True,
                             append_sources_to_answer=True,
                             append_sources_to_chat=False,
                             # !! Added so sources are appended
                             user_path='src/user_path',
                             show_link_in_sources=True,
                             **kwargs)
    return eval_out_filename


eval_filename = 'my_prompts.json'
nprompts = 2
bits = 8
cpu = False
base_model = 'h2oai/h2ogpt-4096-llama2-7b-chat'
eval_out_filename = run_eval(cpu=cpu, bits=bits, base_model=base_model,
                             eval_filename=eval_filename,
                             eval_prompts_only_num=nprompts)
```
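Once `run_eval` returns, the sources appended to each answer still have to be separated back out. Below is a minimal sketch of how I'd do that; the marker line (`"Sources"` here) is an assumption about what h2oGPT appends when `append_sources_to_answer=True`, so adjust it to whatever your version actually emits:

```python
def split_answer_and_sources(answer, marker="Sources"):
    """Split one response string into (answer_text, sources_text).

    Assumes sources were appended after a line beginning with `marker`;
    the exact marker is an assumption, not a documented h2oGPT constant.
    """
    head, sep, tail = answer.partition("\n" + marker)
    if not sep:
        # No marker found: treat the whole string as the answer.
        return answer.strip(), ""
    return head.strip(), (marker + tail).strip()
```

For example, `split_answer_and_sources("Answer text.\nSources\n- doc1.pdf")` yields the answer and the sources block as two separate strings.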
This code was streamlined from the suggested source, h2ogpt/tests/test_eval.py. When running it, however, I noticed that the model never sets `langchain_mode` to `UserData` (it always stays as `'langchain_mode': None`), and I'm never able to programmatically receive the citations/sources from the database along with the prompts.
Notably, the following code generates a gradio user interface through which `langchain_mode` is correctly set to `UserData` and wherein I receive all citations/sources/tokens correctly (but only interactively):
I attempted an alternative call as well, but `langchain_mode` remains set to `None` regardless of my inputs. I need to be able to programmatically receive the responses to my custom prompts along with the paired sources/citations. Am I missing something simple here?
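One route I'm considering is driving the gradio server itself programmatically. The sketch below follows the general pattern of h2oGPT's client examples, but the endpoint name (`/submit_nochat_api`) and the payload/response keys are assumptions to verify against src/client_test.py in your checkout:

```python
import ast


def build_payload(prompt, langchain_mode='UserData'):
    """Build the kwargs dict for h2oGPT's nochat API.

    Key names here are assumptions taken from h2oGPT client examples;
    verify them against src/client_test.py in your version.
    """
    return dict(instruction_nochat=prompt,
                langchain_mode=langchain_mode,
                stream_output=False)


def query_server(prompt, url="http://localhost:7860"):
    """Send one prompt to a running h2oGPT gradio server.

    Requires `pip install gradio_client` and a running server;
    the api_name below is an assumption.
    """
    from gradio_client import Client  # imported here so the sketch loads without it
    client = Client(url)
    res = client.predict(str(build_payload(prompt)),
                         api_name='/submit_nochat_api')
    # The server returns str(dict); 'response'/'sources' keys are assumptions.
    out = ast.literal_eval(res)
    return out.get('response'), out.get('sources')
```

If this works, the sources would come back as a structured field rather than text appended to the answer, which is closer to what I'm after.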
Thanks again!
Hi there—does anyone else have trouble setting the Langchain mode and receiving source information when interacting programmatically with a model? Am I missing a flag somewhere, or is there something else happening?