multi-cluster autoscaler #94
Hi, this is an interesting idea. For the issues, you could use server-side apply in the ManifestWork, so the autoscaler in the clusters will update replicas while the work agent will not. See the "Resource Race and Adoption" section here: https://open-cluster-management.io/concepts/manifestwork/

We currently do not have ongoing work to provide an autoscaler yet, but it makes a lot of sense to me.
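A minimal sketch of that approach, with hypothetical names (`nginx-work`, the `nginx` Deployment, the `cluster1` namespace): the ManifestWork applies the Deployment with the ServerSideApply update strategy and deliberately omits `spec.replicas`, so an in-cluster HPA can own the replica count without the work agent reverting it.

```yaml
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: nginx-work        # hypothetical name
  namespace: cluster1     # the managed cluster's namespace on the hub
spec:
  workload:
    manifests:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: nginx
          namespace: default
        spec:
          # spec.replicas is intentionally omitted so the in-cluster
          # autoscaler can own that field under server-side apply
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
                - name: nginx
                  image: nginx:1.25
  manifestConfigs:
    - resourceIdentifier:
        group: apps
        resource: deployments
        namespace: default
        name: nginx
      updateStrategy:
        type: ServerSideApply   # apply with SSA instead of update
```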
Thanks for your reply, and sorry for not explaining myself well.

Yes, ManifestWorkReplicaSet is great work, but it doesn't solve my problem. My other idea is for Placement (or maybe the next version of ManifestWorkReplicaSet) to implement the scale subresource.
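To make the scale-subresource idea concrete, here is a hypothetical sketch (this is not part of OCM today) of what exposing it on the Placement CRD could look like, reusing Placement's existing `status.numberOfSelectedClusters` field as the status replica count.

```yaml
# Hypothetical excerpt of a CRD version entry for Placement
subresources:
  scale:
    specReplicasPath: .spec.numberOfClusters
    statusReplicasPath: .status.numberOfSelectedClusters
```

With that wired up, generic tooling such as `kubectl scale` (or an autoscaler built on the scale client) could drive the number of selected clusters.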
> Both multi-cluster autoscalers and in-cluster autoscalers (HPA, KEDA, etc.) would increase or decrease resources.

I would think the multi-cluster autoscaler scales the number of clusters of the related Placement, while HPA scales the actual replicas of the Deployment in a certain cluster.

> If there is ManifestWorkReplicaSet A and ManifestWorkReplicaSet B that reference the same Placement A, the scaling of ManifestWorkReplicaSet A will affect ManifestWorkReplicaSet B.

On the other hand, is that a valid case? You can bundle multiple ManifestWorkReplicaSets as one "scaling group", so scaling the Placement will scale the workloads in all related ManifestWorkReplicaSets (see the sketch below).

> And my other idea is for Placement (or maybe the next version of ManifestWorkReplicaSet) to implement the scale subresource.
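A sketch of that "scaling group" idea with hypothetical names, assuming the current v1alpha1 ManifestWorkReplicaSet API: two ManifestWorkReplicaSets reference the same Placement, so resizing that one Placement scales both workloads together.

```yaml
apiVersion: work.open-cluster-management.io/v1alpha1
kind: ManifestWorkReplicaSet
metadata:
  name: app-a                  # hypothetical
  namespace: default
spec:
  placementRefs:
    - name: shared-placement   # same Placement as app-b below
  manifestWorkTemplate:
    workload:
      manifests: []            # app A's manifests go here
---
apiVersion: work.open-cluster-management.io/v1alpha1
kind: ManifestWorkReplicaSet
metadata:
  name: app-b                  # hypothetical
  namespace: default
spec:
  placementRefs:
    - name: shared-placement   # the shared "scaling group" handle
  manifestWorkTemplate:
    workload:
      manifests: []            # app B's manifests go here
```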
You're correct, but I want "smart scaling" as in the following scenarios:

We can address this by using the resource-usage-collect-addon to detect resource shortages, or by detecting Pod Pending events and having the multi-cluster autoscaler act after HPA (see the sketch below).

This is not the problem I am facing, and it may be a "non-existent" problem.

The "scaling group" is a case I hadn't considered. Thinking about it, it is reasonable for Placement to have numberOfClusters and to accept "Placement per application".
👍
I did some further research on the scale subresource.
Hi, are there any plans to develop a multi-cluster autoscaler?

Perhaps I can build it myself: adjust `spec.numberOfClusters` of the Placement if the threshold is exceeded. There are several possible problems, and I would like to know whether there is OCM community work on this.
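For concreteness, this is the field the proposed autoscaler would patch (the placement name and cluster set below are placeholders):

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: app-placement          # hypothetical
  namespace: default
spec:
  # the proposed autoscaler would raise or lower this value
  # whenever the metric threshold is crossed
  numberOfClusters: 2
  clusterSets:
    - global
```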