Cluster port: expose 0.0.0.0 #347
I got this working! What I needed to do was not `0.0.0.0`, but to add the extra address (the public EC2 VM address) to the kubeadm-config.yaml template. Then I replaced the 127.0.0.1 address in the kubeconfig with that address (on my machine outside of AWS), and it worked! For a cloud, you'd also need to update security groups (or more generally, firewalls) to allow ingress on port 6443 (or the chosen port); we assume most egress is already open. High level, I think we could handle the custom tweak to make a second kubeconfig with the updated address.
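For reference, a minimal sketch of the kind of change being described, not the actual Usernetes template: `apiServer.certSANs` is the standard kubeadm ClusterConfiguration field for extra names or addresses the API server certificate should be valid for, and the public address below is a hypothetical placeholder.

```yaml
# A minimal sketch, assuming a standard kubeadm ClusterConfiguration;
# the Usernetes template may be shaped differently.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs:
    - "127.0.0.1"
    - "<public-ec2-address>"  # hypothetical: the VM's public address
```

The kubeconfig rewrite could then be as simple as pointing the cluster entry at the public address, e.g. `kubectl config set-cluster <cluster-name> --server=https://<public-ec2-address>:6443`, though the right mechanics depend on how the kubeconfig is generated.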
This can be accomplished just with yq? (See usernetes/.github/workflows/main.yaml, lines 116 to 120 at 31887a8.)
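If it helps, here is a sketch of what that could look like. This is not the referenced workflow snippet; it assumes mikefarah's yq v4 syntax, and the step name, the `public_address` input, and the file path are all hypothetical.

```yaml
# Hypothetical step: append the public address to the ClusterConfiguration's
# certSANs list with yq (mikefarah v4 syntax assumed).
- name: Add public address to kubeadm config
  env:
    PUBLIC_ADDRESS: ${{ inputs.public_address }}  # hypothetical input
  run: |
    # the (select | path) += form updates only the matching document
    # if kubeadm-config.yaml contains multiple YAML documents
    yq -i '(select(.kind == "ClusterConfiguration") | .apiServer.certSANs) += [strenv(PUBLIC_ADDRESS)]' kubeadm-config.yaml
```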
I'm testing https://docs.github.com/en/actions/sharing-automations/reusing-workflows so we don't have to repeat logic, but can just define variables for different multi-node runs with a few lines in the main file. I've never done this before, so I won't be quick on it! (Sorry, the context of this is #345.)
Oh wow, this is super cool! This is a test from my branch (to this one) and I won't merge until everything is working. But I wanted to share some of the logic (I really like it for avoiding replication of CI code). We are using these "reusable workflows", which render like this (screenshot omitted). And the groups created by the templates collapse into a little accordion with an arrow (screenshot omitted). I'm fairly sure they have even more isolation than a set of jobs defined together, because the … I haven't merged into here because I still need to test the change of port (this is just moving the default setup into this template), but it moves the multi-node job logic into its own file:

```yaml
name: Multi Node

on:
  workflow_call:
    # allow reuse of this workflow in other files here
    inputs:
      kube_apiserver_port:
        description: Kubernetes API server port
        # Using string, might be a bug with number
        # https://github.com/orgs/community/discussions/67182
        type: string
        default: "6443"
```

And then the entire workflow for multi-node in main.yaml is just:

```yaml
# This uses the reusable-multi-node.yaml template
multi-node:
  name: "Multi node with defaults"
  uses: ./.github/workflows/reusable-multi-node.yaml
  with:
    # This is the default (you could remove it, and it would work,
    # but I'm leaving it for the developer user to see)
    kube_apiserver_port: "6443"
```

So I'll add another block like that with a custom port! And we can customize anything else we like. 🥳 If you like the setup (and want to extend the single node setups) I could follow up with a PR for that too. It still requires additional runners (and each …)
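For illustration, a hedged sketch of the extra block described above; the job id, display name, and port value are hypothetical:

```yaml
# Hypothetical second caller: the same reusable workflow with a non-default port
multi-node-custom-port:
  name: "Multi node with custom port"
  uses: ./.github/workflows/reusable-multi-node.yaml
  with:
    kube_apiserver_port: "8443"  # hypothetical non-default port
```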
Paired with #346, and as a follow-up: we're thinking about the case of deploying Usernetes on a VM (on AWS) and then exposing the control plane so it can be connected to externally, from other nodes that aren't part of the cluster. My first thought was to test adding `0.0.0.0` here in the kubeadm-config.yaml, and then trying to bring up a vanilla node somewhere else that issues the same join command, but perhaps outside of Docker. Has anyone done this?
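For context, a hedged sketch of the external join being asked about, if it were scripted as a step; the upstream `kubeadm join` command and its flags are standard, but the address, token, and hash are all placeholders, and a manual "vanilla node" would run the `kubeadm join` line directly.

```yaml
# A hedged sketch (all values hypothetical): joining an external node
# against the control plane at the VM's public address.
- name: Join external node to the cluster
  run: |
    sudo kubeadm join <public-ec2-address>:6443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<ca-cert-hash>
```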