I'm currently testing how the chat handles negative or harmful statements in user prompts. In some cases the model gives a clear answer saying it cannot provide the information because the content is negative or harmful, but in other cases it directly shows a red error box that says:
"400 litellm.BadRequestError: litellm.ContentPolicyViolationError: Azure Exception - Error code: 400"
For example, if you submit a prompt implying a harmful request, such as instructions for making certain weapons, sometimes the model answers with a refusal, but other times the error appears instead.
I would expect the chat to always respond with feedback, declining the negative or harmful request, rather than surfacing a raw error.
This happens using GPT-4.
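A possible client-side workaround (a sketch of the idea, not the project's actual code) is to wrap the completion call and catch the content-policy exception, returning a canned refusal message so the user sees a normal chat reply instead of the red error box. The exception class and the `call_model`/`blocked_call` functions below are illustrative stand-ins; in real code you would import litellm and catch `litellm.ContentPolicyViolationError` around `litellm.completion`:

```python
# Stand-in for litellm.ContentPolicyViolationError; with litellm installed,
# import litellm and catch litellm.ContentPolicyViolationError instead.
class ContentPolicyViolationError(Exception):
    pass


def safe_completion(call_model, messages):
    """Run the model call, turning a content-policy rejection into a
    normal chat reply instead of a raw 400 error."""
    try:
        return call_model(messages)
    except ContentPolicyViolationError:
        # Azure's content filter blocked the request before the model
        # produced any answer, so substitute a fixed refusal message.
        return "I can't help with that request due to the content policy."


# Hypothetical model call that always triggers the filter, for illustration.
def blocked_call(messages):
    raise ContentPolicyViolationError("Azure Exception - Error code: 400")


print(safe_completion(blocked_call, [{"role": "user", "content": "..."}]))
```

This keeps the chat's behavior consistent: whether the model itself refuses or Azure's filter rejects the request up front, the user always gets readable feedback.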