consider reducing TerminationGracePeriodSeconds for spin-apps deployment/pod spec #118
It needs to be >= the length of the longest request you expect to receive, to allow in-flight requests to drain safely. If it's not shutting down after draining in-flight requests and instead waits for a SIGKILL, that sounds like a bug in spin or the shim.
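To make the relationship concrete, here is a minimal Deployment sketch. The name, image, and the 60s figure are illustrative assumptions, not values from this issue; the rule of thumb is to pick a grace period at least as long as your slowest expected request:

```yaml
# Illustrative Deployment fragment (name, image, and values are hypothetical).
# If the longest request you expect is ~60s, the grace period should be at
# least that long so in-flight requests can drain before SIGKILL arrives.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-spin-app                  # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-spin-app
  template:
    metadata:
      labels:
        app: my-spin-app
    spec:
      terminationGracePeriodSeconds: 60   # >= longest expected request
      containers:
        - name: spin-app
          image: ghcr.io/example/my-spin-app:latest  # hypothetical image
```

Kubernetes sends SIGTERM at the start of the grace period and SIGKILL only when it expires, so a well-behaved shim should exit as soon as in-flight work finishes rather than waiting out the full period.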
You are right. I looked into the containerd logs:
Oh, I think this is the same as deislabs/containerd-wasm-shims#207.
This turns out to be due to OS signal handling in the containerd shim.
I was trying to understand why scaling down spin apps (after manually editing the number of replicas) takes so long. It is likely due to the default 30s value of `TerminationGracePeriodSeconds` when creating the spin-app pods.

I reduced `TerminationGracePeriodSeconds` to 2s on my local setup (via a custom build of `spin-operator`), after which scale-down is quite fast. I believe this change will also help with `HPA`- or `Keda`-based scale-down.

We should consider setting a sensible default and possibly making it configurable on the SpinApp CRD, e.g. as sketched below.
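For illustration only, a minimal sketch of what a configurable field could look like on a SpinApp manifest. This is an assumption about the proposal, not the CRD's current schema: the `terminationGracePeriodSeconds` field, the API group/version, and the image are all hypothetical.

```yaml
# Hypothetical sketch: the SpinApp CRD does not necessarily expose this
# field today; this is the shape the proposal above suggests.
apiVersion: core.spinoperator.dev/v1alpha1   # API group/version assumed
kind: SpinApp
metadata:
  name: my-spin-app
spec:
  image: ghcr.io/example/my-spin-app:latest  # hypothetical image
  replicas: 2
  terminationGracePeriodSeconds: 2           # proposed configurable knob
```

The operator would then copy this value through to the `terminationGracePeriodSeconds` of the pod spec it generates, with the CRD default taking effect when the field is omitted.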