
Question for the replug_parallel_reader #48

Open
szhang42 opened this issue Apr 24, 2024 · 2 comments

Comments

szhang42 commented Apr 24, 2024

Hello,

Thanks for this fantastic repo! I have a question about replug_parallel_reader.ipynb. I followed the same code and settings as in replug_parallel_reader.ipynb, but for the question "Who is the main villain in Lord of the Rings?", my answer output is "Roman Empire". This seems incorrect compared to the answer ("Sauron") shown in the notebook.

If possible, could you please let me know what could be potential reasons for this? Thanks!

Author

szhang42 commented Apr 26, 2024

Hello @peteriz @danielfleischer! I also have a follow-up question: if I want to remove the REPLUG part from the replug_parallel_reader notebook, should I just comment out the invocation_layer_class argument, as below? Thanks!

PrompterModel = PromptModel(
    model_name_or_path="meta-llama/Llama-2-7b-chat-hf",
    use_gpu=True,
    # invocation_layer_class=ReplugHFLocalInvocationLayer,
    model_kwargs=dict(
        max_new_tokens=10,
        model_kwargs=dict(
            # device_map="auto",
            torch_dtype=torch.bfloat16,
        ),
        generation_kwargs=dict(do_sample=True),
    ),
)

@danielfleischer
Contributor

We refactored the library to be compatible with Haystack v2; Generators replace the invocation-layer concept. Please try it out and let us know if there are any issues.
