
Master Assigned to Worker Nodes #9

Closed
lemongarbage opened this issue Jan 11, 2022 · 10 comments · May be fixed by #10

Comments

@lemongarbage

lemongarbage commented Jan 11, 2022

It seems that in some cases, the VIP is given to worker nodes. Is there a way to make it so that it's only assigned to control-plane nodes?

The warning "[WARNING] Non-preferred master advertising: reasserting control of VIP with another gratuitous arp" is displayed on multiple nodes, and the VIP is assigned to multiple nodes as well.

Some searching indicates this could be due to a time sync issue (I am not running an NTP server), so it's likely their clocks are not in sync.
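In case clock drift really is the culprit, a quick way to check on each node (assuming systemd-based hosts; the chronyc line only applies if chrony is the NTP client):

# Shows whether the system clock is currently NTP-synchronized
timedatectl status | grep -i synchronized

# If chrony is the time client, this prints the current offset from the NTP source
chronyc tracking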

@immanuelfodor
Owner

Yes, you can definitely do it with the Helm chart!

Edit the values.yaml here: https://github.com/immanuelfodor/kube-karp/blob/master/helm/values.yaml

nodeSelector:
    kubernetes.io/role: master

The tolerations already in place allow the pods to run on master nodes; the node selector then restricts the pods to the master nodes only.

I only have clusters with master nodes, so I can't test this; please report back if it doesn't work.

Reference: https://stackoverflow.com/questions/60404630/kubernetes-daemonset-only-on-master-nodes
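To verify the result, something like the following should work (the kube-karp namespace is an assumption here; adjust it to wherever the chart is installed):

# Show which nodes actually carry the role label used in the nodeSelector
kubectl get nodes -L kubernetes.io/role

# After installing the chart, confirm the DaemonSet pods only landed on master nodes
kubectl -n kube-karp get pods -o wide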

@lemongarbage
Author

lemongarbage commented Jan 11, 2022

Thanks for responding swiftly!!

I know that's possible via nodeSelector, but I was under the impression that CARP must be active on all nodes, no? I should have clarified that in the original post, sorry.

@lemongarbage
Author

That label doesn't seem to work anymore. I've used it once before but when I try installing the Helm chart with that label for nodeSelector, no pods are spawned.

Taking a look at the master node's labels, it seems that one has been replaced with these:
[screenshot of the master node's labels]

It looks like their values are blank. I'm still trying to get it working.

@immanuelfodor
Owner

immanuelfodor commented Jan 11, 2022

Yes, it seems my cluster doesn't have that either; this is what happens when blindly trusting online resources 😃

I have node-role.kubernetes.io/controlplane: "true" labels that I could use in my cluster.

Check what labels and values you have with kubectl get node NODENAME -o yaml. Check master and worker nodes as well; if you find a label that is only present on master nodes, you can use that. You can also label any node with anything and then use that in the selector.
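For the last option, a minimal sketch; the kube-karp-vip=true label is arbitrary and only for illustration:

# List every node with its full label set to spot one unique to the masters
kubectl get nodes --show-labels

# Or apply an arbitrary label to each control-plane node yourself
kubectl label node NODENAME kube-karp-vip=true

and then in values.yaml:

nodeSelector:
  kube-karp-vip: "true"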

@lemongarbage
Author

lemongarbage commented Jan 12, 2022

Thanks! I ended up just assigning an arbitrary label (color=red) and using that for nodeSelector. It seems this is somewhat of a common issue. I tried using the operator but that didn't work.

However, now my 3 master nodes are all assigned the VIP and are all in MASTER state... is that expected behavior?

@immanuelfodor
Owner

Hmmm, that shouldn't happen; VIP management is not even related to k8s labels. Are the master nodes all within the same subnet? Can the master nodes communicate over the general multicast IP addresses to advertise themselves? See the MULTICAST IP SELECTION section of https://github.com/lorf/UCarp for more info. This issue seems to be network related: the wrapped ucarp binary is probably not able to communicate with the other pods' binaries, so they all think they are alone, they all elect themselves as master, and they all assign the VIP.

@lemongarbage
Author

lemongarbage commented Jan 12, 2022

Yes, I agree with you; it's likely a networking issue somehow. They are all in the same subnet. Thanks for linking and pointing that out. I will continue trying to solve this and post back.

@lemongarbage
Author

lemongarbage commented Jan 12, 2022

Is tcpdump -n net 224.0.0.0/24 a surefire way to tell if the host can communicate with the multicast IPs?

If so, no packets are captured, even when not using nodeSelector.
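A slightly narrower capture, assuming ucarp's defaults (advertisements to the VRRP multicast group 224.0.0.18 over IP protocol 112) and that eth0 is the node's primary interface, would be:

# Capture only CARP/VRRP-style advertisements on the host interface
tcpdump -ni eth0 'host 224.0.0.18 or ip proto 112'

If nothing shows up while the kube-karp pods are running, it may also be worth checking the host firewall for rules dropping multicast or protocol 112 traffic.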

@immanuelfodor
Owner

Honestly, I never needed to debug multicast packets.

@tht

tht commented Mar 11, 2022

I had the same issue. Adding this to the end of values.yaml solved it for me:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "node-role.kubernetes.io/master"
          operator: Exists
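One caveat: on newer Kubernetes versions (kubeadm 1.24 and later), master nodes only carry the node-role.kubernetes.io/control-plane label and no longer have node-role.kubernetes.io/master. A variant that matches either label (nodeSelectorTerms are ORed) would be:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "node-role.kubernetes.io/master"
          operator: Exists
      - matchExpressions:
        - key: "node-role.kubernetes.io/control-plane"
          operator: Exists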
