We are seeing consistent throttling in the sleeper container #2161
Comments
I tried uploading the image with my Grafana dashboard showing the throttling, but the image is not appearing.
Hi @trouphaz, thanks for reporting this! The cpu request and limit for that pod are both hardcoded here. As you've noticed, the cpu request is quite low.

What do your metrics show as the actual CPU usage of the pod? That pod is typically running sleep, which should consume very little CPU. In my experiments, it's usually around 0m-2m. Occasionally the Concierge pods automatically wake up and use the Kubernetes exec API to run a command inside that pod. The only way that I can get the pod's CPU metric to rise close to 20m is to manually exec into it repeatedly.

Looking at the controller which will exec into the pod, I think it will run again whenever one of the resources it watches changes, or every ~3 minutes when nothing is changing.
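For context on what "hardcoded" means here, below is a minimal Go sketch of a fixed request/limit pair like the one described above. The 20m CPU figure is only inferred from the "close to 20m" observation in this comment, and the memory values are placeholders; neither is copied from the actual Pinniped source.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// A fixed request/limit pair of the kind described above. These values are
	// illustrative placeholders, not the actual ones from the Pinniped source.
	sleeperResources := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("20m"),
			corev1.ResourceMemory: resource.MustParse("32Mi"),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("20m"),
			corev1.ResourceMemory: resource.MustParse("32Mi"),
		},
	}

	// Print the CPU values to show how small the ceiling is.
	fmt.Printf("cpu request=%s, cpu limit=%s\n",
		sleeperResources.Requests.Cpu(), sleeperResources.Limits.Cpu())
}
```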
I wonder if this is happening more often than expected on your cluster? Do you have Kubernetes audit logging enabled? Can you see how often that exec is happening?
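On the audit-log question above, here is a rough sketch of how one might count those exec requests, assuming audit logging is enabled and writes JSON-lines events to a file. The file path is passed on the command line, and the pod name prefix is taken from the pod name mentioned in this issue; adjust both for your cluster.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// auditEvent holds only the audit.k8s.io/v1 Event fields this sketch needs.
type auditEvent struct {
	RequestReceivedTimestamp string `json:"requestReceivedTimestamp"`
	ObjectRef                struct {
		Resource    string `json:"resource"`
		Subresource string `json:"subresource"`
		Namespace   string `json:"namespace"`
		Name        string `json:"name"`
	} `json:"objectRef"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: count-agent-execs <audit-log-file>")
		os.Exit(1)
	}
	f, err := os.Open(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	count := 0
	scanner := bufio.NewScanner(f)
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // audit lines can be long
	for scanner.Scan() {
		var ev auditEvent
		if json.Unmarshal(scanner.Bytes(), &ev) != nil {
			continue // skip anything that isn't a JSON audit event
		}
		// Count exec requests against the kube-cert-agent pod.
		if ev.ObjectRef.Resource == "pods" &&
			ev.ObjectRef.Subresource == "exec" &&
			strings.HasPrefix(ev.ObjectRef.Name, "pinniped-concierge-kube-cert-agent") {
			count++
			fmt.Printf("%s exec into %s/%s\n",
				ev.RequestReceivedTimestamp, ev.ObjectRef.Namespace, ev.ObjectRef.Name)
		}
	}
	fmt.Printf("total exec requests: %d\n", count)
}
```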
@trouphaz Any thoughts about the above?
What happened?
The sleeper container in the pinniped-concierge-kube-cert-agent pod is showing consistent, regular throttling because its CPU limit is set so low. This does not appear to affect the performance of the container itself, since it is just running a sleep loop, but it is triggering our monitoring for throttling of critical platform workloads.
What did you expect to happen?
It would be good either to make these resource requests and limits configurable by the end user, or to set the CPU limit high enough that the workload isn't throttled.
What is the simplest way to reproduce this behavior?
Just look at your workload throttling metrics. Yours is likely throttling too.
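As one way to check this reproduction step, here is a small Go sketch that asks Prometheus for the CFS throttling rate of the sleeper container. The Prometheus URL and the pod/container label names are assumptions about a typical cAdvisor-based scrape setup, not something defined by this issue.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"os"
)

func main() {
	// Hypothetical in-cluster Prometheus endpoint; change this for your environment.
	promURL := "http://prometheus.monitoring.svc:9090/api/v1/query"

	// cAdvisor exposes CFS throttling counters per container; a sustained
	// nonzero rate here means the container is being throttled.
	query := `rate(container_cpu_cfs_throttled_periods_total{container="sleeper"}[5m])`

	resp, err := http.Get(promURL + "?query=" + url.QueryEscape(query))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// Decode just enough of the Prometheus instant-query response to print results.
	var result struct {
		Data struct {
			Result []struct {
				Metric map[string]string `json:"metric"`
				Value  []interface{}     `json:"value"`
			} `json:"result"`
		} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, r := range result.Data.Result {
		if len(r.Value) == 2 {
			fmt.Printf("pod=%s throttled-period-rate=%v\n", r.Metric["pod"], r.Value[1])
		}
	}
}
```

Any pod that shows a persistent nonzero rate here is hitting its CFS quota, which is the condition the monitoring alert in this report keys on.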
In what environment did you see this bug?
- `kubectl version`: v1.28.2
- `kubeadm version`: v1.26.15
- `cat /etc/os-release`:
- `uname -a`:

What else is there to know about this bug?