The issue seems to be that KubeControlRequireer.scope is set to GLOBAL, meaning it expects only ever one kubernetes-master remote unit; if there are multiple, they all share the same conversation. I think this is partly a symptom of the conversation metaphor being inadequate and confusing (there are plans to replace it with something simpler), but if the charms are to support scaling the master, the scope needs to change, and both the interface layer and the charms need to be updated to handle multiple masters being present.
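For context, here is a rough sketch of how a GLOBAL-scoped requires side ends up clearing the shared state as soon as any single master departs. This is not the actual interface code, and the guard in `departed()` is only one possible fix, but it shows the shape of the problem:

```python
# Illustrative sketch only -- simplified from the shape of a charms.reactive
# interface layer; the guard in departed() is a possible fix, not current code.
from charms.reactive import RelationBase, scopes, hook
from charmhelpers.core import hookenv


class KubeControlRequireer(RelationBase):
    # GLOBAL scope: all remote kubernetes-master units share one conversation,
    # so any state set or removed here is shared across every master.
    scope = scopes.GLOBAL

    @hook('{requires:kube-control}-relation-{joined,changed}')
    def joined_or_changed(self):
        self.conversation().set_state('{relation_name}.connected')

    @hook('{requires:kube-control}-relation-{departed}')
    def departed(self):
        # Today, any single master departing clears the shared state, which
        # is what blocks the workers during scale back. One option is to
        # only drop the state once no other master units remain related.
        remaining = [u for u in hookenv.related_units()
                     if u != hookenv.remote_unit()]
        if not remaining:
            self.conversation().remove_state('{relation_name}.connected')
```

Switching to a non-GLOBAL scope would instead give each remote unit (or service) its own conversation, but that requires the consuming charms to iterate `self.conversations()` and aggregate data from multiple masters.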
@johnsca Thanks for the research. Multi-master is definitely a thing, so we'll need to change the scope. Pretty sure this was a bug in the old kube-dns interface too; we just never noticed.
As per the subject, the kube-control.connected state is removed incorrectly during scale back.
To replicate, start with a fully deployed k8s bundle and scale back kubernetes-master (remove one of its units).
Result:
kubernetes-worker units eventually end up in a blocked state because the kube-control.connected state is missing (see the sketch after the expected result).
Expected result:
The kube-control.connected state is NOT removed, since the relation is still in place.
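For reference, the blocked status on the workers comes from a reactive handler along these lines (an illustrative sketch; the handler name and message only approximate the kubernetes-worker charm). Once kube-control.connected disappears, every remaining worker reports blocked until the state comes back:

```python
# Illustrative sketch of the worker-side handler that produces the blocked
# status; names and message are approximations of the kubernetes-worker charm.
from charms.reactive import when_not
from charmhelpers.core import hookenv


@when_not('kube-control.connected')
def missing_kube_control():
    # With the scope bug described above, this fires on every remaining
    # worker as soon as a single master departs, even though the
    # kube-control relation itself is still joined.
    hookenv.status_set(
        'blocked',
        'Relate {}:kube-control kubernetes-master:kube-control'.format(
            hookenv.service_name()))
```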