
Cluster port: expose 0.0.0.0 #347

Open
vsoch opened this issue Dec 5, 2024 · 4 comments

Comments

@vsoch
Contributor

vsoch commented Dec 5, 2024

Paired with #346, and as a follow-up: we're thinking about the case of deploying Usernetes on a VM (on AWS) and then exposing the control plane so it can be connected to externally from other nodes that aren't part of the cluster. My first thought was to test adding 0.0.0.0 here in the kubeadm-config.yaml:

  certSANs:
    - localhost
    - 127.0.0.1
    - "${U7S_NODE_NAME}"
    - "${U7S_HOST_IP}"

And then try to bring up a vanilla node somewhere else that issues the same join command, but perhaps outside of Docker. Has anyone done this?

@vsoch
Contributor Author

vsoch commented Dec 5, 2024

I got this working! What I needed to do was not add 0.0.0.0, but rather add the extra address (the public EC2 VM address) to the kubeadm-config.yaml template. Then I replaced the 127.0.0.1 address in the kubeconfig with that address (on my machine outside of AWS), and it worked!

[screenshot]
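Concretely, a rough sketch of the two changes (203.0.113.10 is a placeholder for the public EC2 address, and the kubeconfig path is just an example):

  certSANs:
    - localhost
    - 127.0.0.1
    - "${U7S_NODE_NAME}"
    - "${U7S_HOST_IP}"
    - "203.0.113.10"    # placeholder for the public EC2 address

# on the machine outside of AWS, swap the loopback address in the kubeconfig
sed -i 's/127.0.0.1/203.0.113.10/g' kubeconfig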

And then for a cloud, you'd need to update security groups (or more generally, firewalls) to allow ingress on port 6443 (or whatever port is chosen); we assume most egress is already open.
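For example, with the AWS CLI that might look something like this (the security group ID and source CIDR are placeholders; scoping the CIDR to the joining nodes is safer than opening it to 0.0.0.0/0):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 6443 \
  --cidr 203.0.113.0/24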

At a high level, I think we could handle the custom tweak of making a second kubeconfig with the updated server field, and for the setup here, what would make sense is to allow adding another hostname to that certSANs list. What do you think?

@AkihiroSuda
Member

> custom tweak

This can be accomplished just with yq?

- name: "Relax disk pressure limit"
run: |
set -x
sudo snap install yq
yq -i 'select(.kind=="KubeletConfiguration").evictionHard."imagefs.available"="3Gi"' kubeadm-config.yaml
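For example, something along these lines might work for the two tweaks (an untested sketch; it assumes the certSANs live under apiServer in the ClusterConfiguration document of kubeadm-config.yaml, and my.public.host plus the kubeconfig path are placeholders):

# add an extra SAN to kubeadm-config.yaml
yq -i 'select(.kind=="ClusterConfiguration").apiServer.certSANs += ["my.public.host"]' kubeadm-config.yaml
# point a copy of the kubeconfig at the public address
yq -i '.clusters[].cluster.server = "https://my.public.host:6443"' kubeconfig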

@vsoch
Contributor Author

vsoch commented Dec 5, 2024

I'm testing https://docs.github.com/en/actions/sharing-automations/reusing-workflows so we don't have to repeat logic, but can just define variables for different multi-node runs with a few lines in the main file. I've never done this before, so I won't be quick with it!

(Sorry, context of this is for #345)

@vsoch
Contributor Author

vsoch commented Dec 5, 2024

Oh wow - this is super cool! This is a test from my branch (to this one) and I won't merge until everything is working. But I wanted to share some of the logic (I really like it for avoiding replication of CI code). We are using these "reusable workflows", which render like this:

[screenshot]

And the groups created by the templates collapse into a little accordion with arrow:

[screenshot]

I'm fairly sure they have even more isolation than a set of jobs in the same file, because the KUBECONFIG defined at the top of main.yaml needed to be added to the template to be seen. I think that makes sense, because these workflows can be provided by other repositories and used elsewhere, and you would not want the environment to leak in.

I haven't merged it here because I still need to test the change of port (this is just moving the default setup into the template), but it moves the multi-node job logic into its own file, reusable-multi-node.yaml, and we can define whatever variables we want in the top section:

name: Multi Node
on:  
  workflow_call:
    # allow reuse of this workflow in other files here
    inputs:
      kube_apiserver_port:
        description: Kubernetes API server port
        # Using string, might be bug with number
        # https://github.com/orgs/community/discussions/67182
        type: string
        default: "6443"

And then the entire workflow for multi-node in main.yaml is just:

  # This uses the reusable-multi-node.yaml template
  multi-node:
    name: "Multi node with defaults"
    uses: ./.github/workflows/reusable-multi-node.yaml
    with:
      # This is the default (you could remove it and it would still work, but I'm leaving it here for developers to see)
      kube_apiserver_port: "6443"

So I'll add another block like that with a custom port! And we can customize anything else we like. 🥳
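For example, a second block with a custom port could be as small as this (the port value is just an illustration):

  multi-node-custom-port:
    name: "Multi node with custom port"
    uses: ./.github/workflows/reusable-multi-node.yaml
    with:
      kube_apiserver_port: "8443"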

If you like the setup (and want to extend the single node setups) I could follow up with a PR for that too. It still requires additional runners (and each
