[BUG] Inconsistent context relevance displayed status #1254
Comments
Please help, @joshreini1. Thank you!
Hey @kaoru-will! 👋 I'm here to help you with any bugs, questions, or contributions while you wait for a human maintainer. I'm Dosu, a bot designed to assist you with all things related to the repository. To resolve the issue with the

By following these steps, you should be able to resolve the inconsistencies and errors.

@joshreini1, it seems like this issue requires a deeper look, and your expertise would be highly valuable here.
These are the modules and the provider that I used.
Hi, any updates on this?
Hi, any updates on this? Could really use some help. I already changed my query engine, but the context is still giving inconsistent answers and failing randomly. Sometimes it retrieves the correct context, sometimes it doesn't. Can you help me understand how to go about this? Or would it be possible to create my own customized PromptTemplate?
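On the custom-prompt question: one common approach is to define your own prompt template string and fill it per question/context pair. A minimal sketch is below; the template text and `build_relevance_prompt` helper are illustrative assumptions, not TruLens' built-in API.

```python
# Hypothetical sketch of a custom context-relevance prompt template.
# The wording and scoring scale are assumptions, not TruLens' actual prompt.
CUSTOM_CONTEXT_RELEVANCE_PROMPT = (
    "You are grading how relevant a retrieved CONTEXT is to a QUESTION.\n"
    "Respond with a single integer from 0 (irrelevant) to 10 (fully relevant).\n\n"
    "QUESTION: {question}\n"
    "CONTEXT: {context}\n"
    "SCORE:"
)

def build_relevance_prompt(question: str, context: str) -> str:
    """Fill the template with one question/context pair."""
    return CUSTOM_CONTEXT_RELEVANCE_PROMPT.format(
        question=question, context=context
    )

prompt = build_relevance_prompt("What is TruLens?", "TruLens evaluates LLM apps.")
print(prompt)
```

A tighter, more explicit prompt like this can reduce judge-LLM variance, since ambiguity in the grading instructions is one common source of inconsistent scores.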
Hi! Sorry for the delay on this. What is the failure rate? 1/2, 1/10, 1/100? |
Can you also share an example of the call when it fails? |
It may also help to try a more powerful model such as gpt-4o to see if that reduces the error rate. It may be that some of the text in the context you are selecting is confusing to the Judge LLM. |
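To answer the failure-rate question concretely, it helps to re-run the same feedback call many times and count failures. The sketch below uses a stand-in `flaky_context_relevance` function (an assumption for illustration) that you would replace with your real provider call.

```python
import random
from typing import Optional

# Sketch: estimate the feedback failure rate empirically by re-running the
# same call N times. `flaky_context_relevance` is a stand-in for the real
# feedback call; here it simulates a judge LLM that fails ~30% of the time.
def flaky_context_relevance(question: str, context: str,
                            rng: random.Random) -> Optional[float]:
    return None if rng.random() < 0.3 else 0.8

def failure_rate(n_trials: int, seed: int = 0) -> float:
    """Fraction of trials where the feedback call returned no score."""
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n_trials)
        if flaky_context_relevance("q", "ctx", rng) is None
    )
    return failures / n_trials

print(f"observed failure rate: {failure_rate(1000):.2%}")
```

Knowing whether the rate is 1/2 or 1/100 distinguishes a systematic problem (e.g. a malformed prompt or missing context) from ordinary judge-LLM flakiness.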
|
Bug Description
The Context Relevance feedback function I am using gives inconsistent results. Sometimes it returns a successful feedback, but most of the time it returns a failed feedback.
To Reproduce
Which steps should someone take to run into the same error? A small, reproducible code example is useful here.
My Code
Expected behavior
A clear and concise description of what you expected to happen.
I have noticed that feedbacks don't immediately provide their respective results, which is why I used wait_for_feedback_results so I can iterate over each feedback result and get the results I expect.
These are the logs that I get when it passes. It only passes sometimes, when I freshly run my code.
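The wait-then-iterate pattern described above can be sketched with a polling loop. The stub class below simulates a feedback result that only becomes available after a delay; the real TruLens record API differs, so treat this purely as an illustration of the timing issue.

```python
import time
from typing import Optional

# Sketch: poll until a feedback result is ready instead of reading it
# immediately after the query. `FeedbackResultStub` is a hypothetical
# stand-in for an asynchronously computed feedback score.
class FeedbackResultStub:
    def __init__(self, value: float, ready_after: float):
        self._value = value
        self._ready_at = time.monotonic() + ready_after

    def poll(self) -> Optional[float]:
        """Return the score once it is ready, else None."""
        return self._value if time.monotonic() >= self._ready_at else None

def wait_for_result(result: FeedbackResultStub,
                    timeout: float = 1.0,
                    interval: float = 0.01) -> Optional[float]:
    """Poll until the result is ready or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        value = result.poll()
        if value is not None:
            return value
        time.sleep(interval)
    return None  # timed out: treat as a failed feedback

stub = FeedbackResultStub(value=0.9, ready_after=0.05)
print(wait_for_result(stub))
```

If results that "pass only on a fresh run" are being read before the deadline, a too-short timeout (or reading before the record is flushed) would look exactly like random failures.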
Relevant Logs/Tracebacks
Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks. If the issue is related to the TruLens dashboard, please also include a screenshot.
Environment:
Additional context
Add any other context about the problem here.
Is there a way to check whether we're getting the context properly? Could it be that the context is not yet set when the feedback runs?
These are the calls that I'm getting when the feedback passes.
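One way to rule out the "context not yet set" hypothesis is to guard the feedback call so it is skipped (rather than failing randomly) when no context was retrieved. The sketch below is hypothetical: `score_context` stands in for the real relevance call, and `safe_context_relevance` is an assumed wrapper name.

```python
from typing import Optional, Sequence

# Hypothetical guard: skip the relevance check when no context exists.
# `score_context` is a stand-in for the real feedback/provider call.
def score_context(question: str, context: str) -> float:
    # Toy scorer: 1.0 if the first query word appears in the context.
    return 1.0 if question.split()[0].lower() in context.lower() else 0.0

def safe_context_relevance(question: str,
                           contexts: Sequence[str]) -> Optional[float]:
    """Return the best relevance score, or None if no context was retrieved."""
    usable = [c for c in contexts if c and c.strip()]
    if not usable:
        return None  # context not set yet: skip rather than fail randomly
    return max(score_context(question, c) for c in usable)

print(safe_context_relevance("trulens bug", ["TruLens feedback docs"]))
print(safe_context_relevance("trulens bug", ["", "   "]))
```

Logging how often the guard returns `None` would tell you directly whether empty or missing context correlates with the failed feedbacks.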