Oathkeeper bombards Ory Network with requests after upgrade to 40.x #1161
Comments
Thank you for the report! Can you pinpoint which version introduced this regression? It would make the search for the regression much easier!
I'm not sure if I understand you correctly. As I wrote in the description, we upgraded from 0.39.4 to 0.40.7, or do you mean something else?
It's worth mentioning that this is our second attempt at the upgrade to 0.40.x. The first time we tried with 0.40.6 and had the same effect.
Since there are a couple of versions between 0.39.4 and 0.40.6, I wanted to know whether you can pinpoint which version exactly introduced the issue, making it easier to find the root cause.
Unfortunately not, we've only tried these two versions :(
@aeneasr we've managed to pinpoint the exact version that introduces this issue. It happens between v0.39.4 and v0.40.0. Hope this helps, please let me know if you need anything else.
I think your Try playing around with that value to see if it has an impact.
Fix for some of the config values: 2373057
Basically, before the fix we were using the internal cost, which is, I think, the key length plus the cost function. Since your max_cost is around 100, the cache probably ran out of space after one or two keys, so it's constantly evicting your values. The fix ignores the internal cost, so you actually get 1 cost = 1 token instead of 1 token = 1 cost + cost of key.
Soooo, with the help of @Demonsthere, we checked the following values of the
I hope this helps. For the time being we will go back to 0.39.4 and wait for further updates. Please let us know if you need anything else.
So what you're saying is that it doesn't have an effect?
So, I must admit that I made a mistake, which I realised just after posting the last comment. Unfortunately, I didn't keep an eye on the pods after deployment, and it turned out that they were not restarting after the change.
Hot reloading only works for things that can be changed during runtime. Caches, unfortunately, are large memory objects that are allocated at process start and cannot be changed at runtime.
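In practice that means cache-related settings like the ones below only take effect after the Oathkeeper pods are restarted. This is a sketch of the relevant fragment of an Oathkeeper config; the exact key names and values are assumptions based on the discussion above (max_cost is the value mentioned in this thread), not copied from the reporter's actual configuration:

```yaml
authenticators:
  oauth2_introspection:
    enabled: true
    config:
      introspection_url: https://vibrant-dubinsky-6d1qtx5k0i.projects.oryapis.com/admin/oauth2/introspect
      cache:
        enabled: true
        ttl: 60s       # how long introspection results are cached
        max_cost: 100  # cache size budget; changing this requires a pod restart
```

With Helm, a config change alone is not enough; a rollout restart (e.g. `kubectl rollout restart deployment/<oathkeeper-deployment>`) is needed for the new cache size to be allocated.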
So, I have redone the tests (now making sure that after each update the pods are restarted correctly) and here are the results. Values same as before:
Results (thanks to the courtesy of @tricky42): Also, we suspected that maybe the Oathkeeper containers were running out of memory, but we confirmed that this is not the case:
OK, so increasing the cache size fixes the problem?
Preflight checklist
Ory Network Project
https://vibrant-dubinsky-6d1qtx5k0i.projects.oryapis.com
Describe the bug
After upgrading from 0.39.4 to 0.40.7, @tricky42 notified us about a huge number of requests made from Oathkeeper to Ory Network's /introspect endpoint, as can be seen in the first screenshot (the upgrade was performed on 23.04 at ~15:58). From our perspective we don't see any issues in the logs or metrics. In the second screenshot you can see the Oathkeeper traffic (the spike is from the moment of the upgrade). We didn't change any configuration between the versions (except the naming of the log level), and we suspect that the cache might not be working properly.
Screenshot 1:
Screenshot 2:
Reproducing the bug
Relevant log output
No response
Relevant configuration
Version
0.40.7
On which operating system are you observing this issue?
None
In which environment are you deploying?
Kubernetes with Helm
Additional Context
No response