Which component are you using?:
cluster-autoscaler
What version of the component are you using?:
Component version:1.31
What k8s version are you using (kubectl version)?:
v1.30.2
What environment is this in?: GKE
What did you expect to happen?: MaxNodesProcessor limits the number of nodes to scale down according to the passed maxCount (currently set to math.MaxInt in the planner). Together, this processor and AtomicResizeFilteringProcessor should keep as many removable nodes as possible, up to maxCount.
What happened instead?:
We call MaxNodesProcessor before AtomicResizeFilteringProcessor, which can truncate candidate nodes from the end of the list and thereby block AtomicResizeFilteringProcessor from removing other nodes in the same node group.
How to reproduce it (as minimally and precisely as possible):
1. Set maxCount in the planner to a low value, for example 5.
2. Create 2 node groups, ng1 and ng2, each of size 3.
3. Enable ZeroOrMaxNodeScaling for ng1.
4. Assume all nodes are removable, and put the ng2 nodes first in the passed candidates.
5. MaxNodesProcessor excludes one node from ng1 to get down to 5 nodes.
6. AtomicResizeFilteringProcessor then excludes the other 2 nodes from ng1 because of ZeroOrMaxNodeScaling.
7. Scale down happens for only the 3 nodes from ng2.
8. We could have removed all 3 nodes from ng1 and 2 from ng2 instead.
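The steps above can be sketched as a small Go simulation. The types, names, and processor bodies below are simplified stand-ins for illustration, not the actual cluster-autoscaler code; only the processor names and the ordering come from the report:

```go
package main

import "fmt"

// node is a minimal stand-in for a scale-down candidate (names are assumptions).
type node struct {
	name  string
	group string
}

// ng1 has ZeroOrMaxNodeScaling enabled, so it must be removed all-or-nothing.
var atomicGroups = map[string]bool{"ng1": true}
var groupSize = map[string]int{"ng1": 3, "ng2": 3}

// maxNodesProcessor truncates the candidate list to maxCount (runs first today).
func maxNodesProcessor(candidates []node, maxCount int) []node {
	if len(candidates) > maxCount {
		return candidates[:maxCount]
	}
	return candidates
}

// atomicResizeFilteringProcessor drops nodes belonging to an atomic group
// that is not fully present in the candidate list.
func atomicResizeFilteringProcessor(candidates []node) []node {
	count := map[string]int{}
	for _, n := range candidates {
		count[n.group]++
	}
	var out []node
	for _, n := range candidates {
		if atomicGroups[n.group] && count[n.group] < groupSize[n.group] {
			continue // partial atomic group: cannot scale it down
		}
		out = append(out, n)
	}
	return out
}

func main() {
	// Step 4: ng2 nodes come first in the candidate list.
	candidates := []node{
		{"ng2-a", "ng2"}, {"ng2-b", "ng2"}, {"ng2-c", "ng2"},
		{"ng1-a", "ng1"}, {"ng1-b", "ng1"}, {"ng1-c", "ng1"},
	}
	// Current order: MaxNodesProcessor first drops ng1-c (step 5), then the
	// atomic filter drops ng1-a and ng1-b as well (step 6).
	result := atomicResizeFilteringProcessor(maxNodesProcessor(candidates, 5))
	fmt.Println(len(result)) // 3 — only the ng2 nodes survive
}
```

Running this prints 3, matching step 7: only the three ng2 nodes are scaled down even though 5 removals were allowed.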
Anything else we need to know?:
I suggest calling AtomicResizeFilteringProcessor first, and having MaxNodesProcessor prioritize full node group deletions over single nodes, so that we maximize the number of nodes deleted within maxCount.
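A hypothetical sketch of that suggestion: after the atomic filter has run on the full candidate list, the maxCount limiter keeps whole atomic groups first (all-or-nothing) and fills the remaining budget with individual nodes. All names and types here are illustrative assumptions, not the real processor code:

```go
package main

import "fmt"

// node is a minimal stand-in for a scale-down candidate.
type node struct {
	name   string
	group  string
	atomic bool // ZeroOrMaxNodeScaling enabled on the node's group
}

// limitPreferringFullGroups keeps each whole atomic group only if the entire
// group fits in the remaining budget, then fills up with non-atomic nodes
// until maxCount is reached.
func limitPreferringFullGroups(candidates []node, maxCount int) []node {
	byGroup := map[string][]node{}
	var order []string
	for _, n := range candidates {
		if _, seen := byGroup[n.group]; !seen {
			order = append(order, n.group)
		}
		byGroup[n.group] = append(byGroup[n.group], n)
	}
	var out []node
	// First pass: whole atomic groups, all-or-nothing.
	for _, g := range order {
		ns := byGroup[g]
		if ns[0].atomic && len(out)+len(ns) <= maxCount {
			out = append(out, ns...)
		}
	}
	// Second pass: individual non-atomic nodes fill the remaining budget.
	for _, g := range order {
		for _, n := range byGroup[g] {
			if !n.atomic && len(out) < maxCount {
				out = append(out, n)
			}
		}
	}
	return out
}

func main() {
	// Same scenario as the reproduction: ng2 nodes first, ng1 atomic, maxCount = 5.
	candidates := []node{
		{"ng2-a", "ng2", false}, {"ng2-b", "ng2", false}, {"ng2-c", "ng2", false},
		{"ng1-a", "ng1", true}, {"ng1-b", "ng1", true}, {"ng1-c", "ng1", true},
	}
	result := limitPreferringFullGroups(candidates, 5)
	fmt.Println(len(result)) // 5 — all of ng1 plus 2 nodes from ng2
}
```

With this ordering the same scenario removes 5 nodes (all of ng1 plus 2 from ng2) instead of 3.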
#7307 removes MaxNodesProcessor now, so I guess it is safe to close this.
/close