
feat: remove hetzner load balancer in favour of metallb #29

Merged
merged 2 commits into main from sje/metallb on Nov 16, 2024

Conversation

mrsimonemms (Owner)

This gives much greater flexibility for UDP ingress, which is required to run the UniFi controller in the cluster.
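For context, MetalLB lets a plain Kubernetes Service of type LoadBalancer expose UDP ports directly on the node IPs it advertises, which is what the UniFi controller needs. Below is a minimal sketch of such a Service; the name, namespace, selector and port numbers are illustrative assumptions rather than manifests from this repository, and only the "nodes" address pool name matches the IPAddressPool created in the plan output further down.

apiVersion: v1
kind: Service
metadata:
  name: unifi-udp                 # illustrative name, not defined in this repo
  namespace: unifi                # illustrative namespace
  annotations:
    # MetalLB pool to allocate from; "nodes" matches the IPAddressPool in the plan below
    metallb.universe.tf/address-pool: nodes
spec:
  type: LoadBalancer
  selector:
    app: unifi                    # illustrative selector
  ports:
    - name: stun                  # UniFi's usual STUN port
      protocol: UDP
      port: 3478
      targetPort: 3478
    - name: discovery             # UniFi's usual device-discovery port
      protocol: UDP
      port: 10001
      targetPort: 10001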

Description

Related Issue(s)

Fixes #

How to test


Execution result of "run-all plan" in "stacks/prod"
time=2024-11-16T15:00:11Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner

Group 2
- Module /github/workspace/stacks/prod/kubernetes


time=2024-11-16T15:00:11Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner] 
Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of loafoe/ssh from the dependency lock file
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has been successfully initialized!
hcloud_network.network: Refreshing state... [id=10354605]
hcloud_placement_group.workers["pool1"]: Refreshing state... [id=420427]
hcloud_ssh_key.server: Refreshing state... [id=24578647]
hcloud_network_subnet.subnet: Refreshing state... [id=10354605-10.0.0.0/16]
hcloud_firewall.firewall: Refreshing state... [id=1737826]
hcloud_server.workers[1]: Refreshing state... [id=55669911]
hcloud_server.workers[0]: Refreshing state... [id=55669909]
hcloud_server.manager[0]: Refreshing state... [id=55669910]
ssh_resource.workers_ready[1]: Refreshing state... [id=1098784051203831727]
ssh_resource.workers_ready[0]: Refreshing state... [id=1336259097138580029]
ssh_resource.manager_ready[0]: Refreshing state... [id=8069563676244191045]
module.k3s.ssh_resource.initial_manager: Refreshing state... [id=3120909857419828295]
module.k3s.ssh_sensitive_resource.join_token: Refreshing state... [id=394639511006271317]
module.k3s.ssh_sensitive_resource.kubeconfig: Refreshing state... [id=886816048740501130]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"]: Refreshing state... [id=5776348885119509656]
local_sensitive_file.kubeconfig: Refreshing state... [id=6ba83af72489e176d3a1e32ef8f4f3066930917b]
module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"]: Refreshing state... [id=1069508492166298813]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"]: Refreshing state... [id=7071232614457278460]
module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"]: Refreshing state... [id=5138077515926708158]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create
  ~ update in-place

Terraform will perform the following actions:

  # hcloud_firewall.firewall will be updated in-place
  ~ resource "hcloud_firewall" "firewall" {
        id     = "1737826"
        name   = "prod-k3s-firewall"
        # (1 unchanged attribute hidden)

      - rule {
          - description     = "Allow ICMP (ping)" -> null
          - destination_ips = [] -> null
          - direction       = "in" -> null
          - protocol        = "icmp" -> null
          - source_ips      = [
              - "0.0.0.0/0",
              - "::/0",
            ] -> null
            # (1 unchanged attribute hidden)
        }
      + rule {
          + description     = "Allow ICMP (ping)"
          + destination_ips = []
          + direction       = "in"
          + protocol        = "icmp"
          + source_ips      = [
              + "0.0.0.0/0",
              + "::/0",
            ]
        }
      + rule {
          + description     = "Allow TCP access to port 443"
          + destination_ips = []
          + direction       = "in"
          + port            = "443"
          + protocol        = "tcp"
          + source_ips      = [
              + "0.0.0.0/0",
              + "::/0",
            ]
        }
      + rule {
          + description     = "Allow TCP access to port 80"
          + destination_ips = []
          + direction       = "in"
          + port            = "80"
          + protocol        = "tcp"
          + source_ips      = [
              + "0.0.0.0/0",
              + "::/0",
            ]
        }

        # (5 unchanged blocks hidden)
    }

  # local_sensitive_file.kubeconfig will be created
  + resource "local_sensitive_file" "kubeconfig" {
      + content              = (sensitive value)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0755"
      + file_permission      = "0600"
      + filename             = "/github/workspace/.kubeconfig"
      + id                   = (known after apply)
    }

Plan: 1 to add, 1 to change, 0 to destroy.
time=2024-11-16T15:00:19Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes] 
Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)

Terraform has been successfully initialized!
random_integer.ingress_load_balancer_id: Refreshing state... [id=6299]
helm_release.ingress_nginx: Refreshing state... [id=ingress-nginx]
kubernetes_namespace_v1.argocd: Refreshing state... [id=argocd]
kubernetes_namespace_v1.external_secrets: Refreshing state... [id=external-secrets]
kubernetes_secret_v1.hcloud: Refreshing state... [id=kube-system/hcloud]
kubernetes_secret_v1.infisical: Refreshing state... [id=external-secrets/infisical]
kubernetes_secret_v1.github_secret: Refreshing state... [id=argocd/github-oidc]
helm_release.hcloud_ccm: Refreshing state... [id=hccm]
helm_release.hcloud_csi: Refreshing state... [id=hcsi]
helm_release.argocd: Refreshing state... [id=argocd]
data.kubernetes_nodes.cluster: Reading...
data.kubernetes_nodes.cluster: Read complete after 0s [id=31c6e98c945ebf38fa47f9ae8216ec0976637baa057ff63ea1aafa84da73101a]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create
  - destroy

Terraform will perform the following actions:

  # helm_release.ingress_nginx will be destroyed
  # (because helm_release.ingress_nginx is not in configuration)
  - resource "helm_release" "ingress_nginx" {
      - atomic                     = true -> null
      - chart                      = "ingress-nginx" -> null
      - cleanup_on_fail            = true -> null
      - create_namespace           = true -> null
      - dependency_update          = false -> null
      - disable_crd_hooks          = false -> null
      - disable_openapi_validation = false -> null
      - disable_webhooks           = false -> null
      - force_update               = false -> null
      - id                         = "ingress-nginx" -> null
      - lint                       = false -> null
      - max_history                = 0 -> null
      - metadata                   = [
          - {
              - app_version    = "1.11.3"
              - chart          = "ingress-nginx"
              - first_deployed = 1731683963
              - last_deployed  = 1731683963
              - name           = "ingress-nginx"
              - namespace      = "ingress-nginx"
              - notes          = <<-EOT
                    The ingress-nginx controller has been installed.
                    It may take a few minutes for the load balancer IP to be available.
                    You can watch the status by running 'kubectl get service --namespace ingress-nginx ingress-nginx-controller --output wide --watch'
                    
                    An example Ingress that makes use of the controller:
                      apiVersion: networking.k8s.io/v1
                      kind: Ingress
                      metadata:
                        name: example
                        namespace: foo
                      spec:
                        ingressClassName: nginx
                        rules:
                          - host: www.example.com
                            http:
                              paths:
                                - pathType: Prefix
                                  backend:
                                    service:
                                      name: exampleService
                                      port:
                                        number: 80
                                  path: /
                        # This section is only required if TLS is to be enabled for the Ingress
                        tls:
                          - hosts:
                            - www.example.com
                            secretName: example-tls
                    
                    If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
                    
                      apiVersion: v1
                      kind: Secret
                      metadata:
                        name: example-tls
                        namespace: foo
                      data:
                        tls.crt: <base64 encoded cert>
                        tls.key: <base64 encoded key>
                      type: kubernetes.io/tls
                EOT
              - revision       = 1
              - values         = jsonencode(
                    {
                      - controller = {
                          - config    = {
                              - use-proxy-protocol = false
                            }
                          - extraArgs = {
                              - enable-ssl-passthrough = true
                            }
                          - kind      = "DaemonSet"
                          - service   = {
                              - annotations = {
                                  - "load-balancer.hetzner.cloud/disable-private-ingress" = true
                                  - "load-balancer.hetzner.cloud/location"                = "nbg1"
                                  - "load-balancer.hetzner.cloud/name"                    = "k3s-6299"
                                  - "load-balancer.hetzner.cloud/type"                    = "lb11"
                                  - "load-balancer.hetzner.cloud/use-private-ip"          = true
                                  - "load-balancer.hetzner.cloud/uses-proxyprotocol"      = false
                                }
                            }
                        }
                    }
                )
              - version        = "4.11.3"
            },
        ] -> null
      - name                       = "ingress-nginx" -> null
      - namespace                  = "ingress-nginx" -> null
      - pass_credentials           = false -> null
      - recreate_pods              = false -> null
      - render_subchart_notes      = true -> null
      - replace                    = false -> null
      - repository                 = "https://kubernetes.github.io/ingress-nginx" -> null
      - reset_values               = true -> null
      - reuse_values               = false -> null
      - skip_crds                  = false -> null
      - status                     = "deployed" -> null
      - timeout                    = 300 -> null
      - values                     = [
          - <<-EOT
                # The proxy protocol settings conflict with cert-manager
                # @link https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner/issues/354
                controller:
                  kind: DaemonSet
                  config:
                    use-proxy-protocol: false
                  extraArgs:
                    enable-ssl-passthrough: true
                  service:
                    annotations:
                      load-balancer.hetzner.cloud/name: "k3s-6299"
                      load-balancer.hetzner.cloud/location: "nbg1"
                      load-balancer.hetzner.cloud/type: "lb11"
                      load-balancer.hetzner.cloud/disable-private-ingress: true
                      load-balancer.hetzner.cloud/use-private-ip: true
                      load-balancer.hetzner.cloud/uses-proxyprotocol: false
            EOT,
        ] -> null
      - verify                     = false -> null
      - version                    = "4.11.3" -> null
      - wait                       = true -> null
      - wait_for_jobs              = false -> null
    }

  # kubernetes_config_map_v1.metallb will be created
  + resource "kubernetes_config_map_v1" "metallb" {
      + data      = {
          + "resource" = <<-EOT
                "apiVersion": "metallb.io/v1beta1"
                "kind": "IPAddressPool"
                "metadata":
                  "name": "nodes"
                  "namespace": "metallb-system"
                "spec":
                  "addresses":
                  - "5.75.184.75/32"
                  - "94.130.96.243/32"
                  - "159.69.5.219/32"
            EOT
        }
      + id        = (known after apply)
      + immutable = false

      + metadata {
          + generation       = (known after apply)
          + name             = "nodes"
          + namespace        = "metallb-system"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # kubernetes_namespace_v1.metallb will be created
  + resource "kubernetes_namespace_v1" "metallb" {
      + id                               = (known after apply)
      + wait_for_default_service_account = false

      + metadata {
          + generation       = (known after apply)
          + name             = "metallb-system"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # random_integer.ingress_load_balancer_id will be destroyed
  # (because random_integer.ingress_load_balancer_id is not in configuration)
  - resource "random_integer" "ingress_load_balancer_id" {
      - id     = "6299" -> null
      - max    = 9999 -> null
      - min    = 1000 -> null
      - result = 6299 -> null
    }

Plan: 2 to add, 0 to change, 2 to destroy.

mrsimonemms marked this pull request as ready for review on November 16, 2024 at 17:26
mrsimonemms merged commit 21cf9bd into main on Nov 16, 2024
2 checks passed
mrsimonemms deleted the sje/metallb branch on November 16, 2024 at 17:26