What problem are you facing?

Hi Upbound, thanks for your awesome Control Plane and Providers!
My team has found itself needing to implement certain resources using provider-terraform to 'fill the gaps' left by crossplane/upjet#346.
Our cluster has multiple tenant teams creating Claims against our Compositions, each of which may include one or more Workspaces.
We would like to offer our tenants consistent expectations for reconciliation time, but this could be impacted if one tenant applies several Claims at once, blocking other tenants' reconciliation (if `--max-reconcile-rate=1`).
I understand we can attempt to increase `--max-reconcile-rate`, as discussed in many other issues, but if one tenant applies many Workspaces at once this will still delay other tenants' Workspaces.
How could Official Terraform Provider help solve your problem?
provider-terraform could support horizontal scaling by sharding Workspaces on a label, similar to how Flux can horizontally scale.
The implementation looks relatively innocuous: adding a label selector to the client cache options, which could be a simpler approach than the other suggestions in crossplane/crossplane-runtime#739 and #189.
This approach would fit our use case well: we could shard each tenant to 'guarantee' a certain level of performance, while our less important or less friendly tenants could still use a 'shared' provider.
Our provider configuration could then look like this:
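A sketch of what this might look like, and of the Workspace our Compositions would then produce for a given tenant. Note the `--watch-label-selector` flag and the `sharding.crossplane.io/key` label are hypothetical here, mirroring Flux's flag of the same name; the DeploymentRuntimeConfig and Workspace kinds themselves are real:

```yaml
# Hypothetical: provider-terraform does not accept this flag today;
# it mirrors Flux's --watch-label-selector sharding flag.
apiVersion: pkg.crossplane.io/v1beta1
kind: DeploymentRuntimeConfig
metadata:
  name: shard-tenant-a
spec:
  deploymentTemplate:
    spec:
      selector: {}
      template:
        spec:
          containers:
            - name: package-runtime
              args:
                - --watch-label-selector=sharding.crossplane.io/key in (tenant-a)
---
# A Workspace composed for tenant-a, labelled with its shard key so only
# the tenant-a provider instance reconciles it.
apiVersion: tf.upbound.io/v1beta1
kind: Workspace
metadata:
  name: tenant-a-example
  labels:
    sharding.crossplane.io/key: tenant-a
spec:
  forProvider:
    source: Inline
    module: |
      output "greeting" {
        value = "hello from tenant-a"
      }
```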
I am not sure whether this issue belongs here in provider-terraform or is something for crossplane-runtime. From the current implementation it looks like this would be implemented per provider when setting up the manager, although I'm not sure whether that code is generated?
I think this is definitely something that would need to be handled by crossplane-runtime. Similar discussions are in #189 and crossplane/crossplane-runtime#739.
The Flux implementation is interesting, but I don't think we would want to expose the resource scheduling to the user. A better solution might be a webhook that is aware of how many provider instances are running and assigns labels to incoming resources as they are created, distributing them across the available controllers. Additional work would be required to allow multiple instances of a controller to run simultaneously, since the existing locking mechanism only allows for one.
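To illustrate the webhook idea, here is a minimal sketch of the label-assignment step, assuming the webhook knows the current replica count. The `shardFor` helper and the `shard-N` label values are hypothetical, not part of crossplane-runtime:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor deterministically maps a resource name to one of n shards.
// A mutating webhook could use this to set a sharding label on incoming
// Workspaces, so each provider replica only watches its own shard.
func shardFor(name string, n uint32) string {
	h := fnv.New32a()
	h.Write([]byte(name))
	return fmt.Sprintf("shard-%d", h.Sum32()%n)
}

func main() {
	// The same name always lands on the same shard, so repeated
	// admission of the same resource is stable across webhook calls.
	for _, name := range []string{"tenant-a-vpc", "tenant-b-db", "tenant-c-dns"} {
		fmt.Println(name, "->", shardFor(name, 3))
	}
}
```

Hashing by name keeps the assignment stateless; a production webhook would likely also need to rebalance (or at least pin existing assignments) when the replica count changes.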