Sje/cluster refactor #17

Closed
wants to merge 5 commits into from
Conversation

mrsimonemms
Owner

Description

Related Issue(s)

Fixes #

How to test
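
A minimal sketch of how this could be verified locally, assuming Terragrunt and Terraform are installed and the Hetzner token plus remote backend credentials are configured (the CI bot below runs the equivalent "run-all plan"):

```sh
# From the repository root, plan both stacks without applying anything.
# Terragrunt resolves the dependency order: hetzner first, then kubernetes.
cd stacks/prod
terragrunt run-all plan
```

The output should match the bot comments below: 19 resources to add in the hetzner stack and 6 in the kubernetes stack, with nothing changed or destroyed.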


github-actions bot commented Nov 8, 2024

Execution result of "run-all plan" in "stacks/prod"
time=2024-11-08T11:59:39Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner

Group 2
- Module /github/workspace/stacks/prod/kubernetes


time=2024-11-08T11:59:39Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner] 
Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of loafoe/ssh from the dependency lock file
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has been successfully initialized!

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # hcloud_firewall.firewall will be created
  + resource "hcloud_firewall" "firewall" {
      + id     = (known after apply)
      + labels = {
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/workspace"   = "prod"
        }
      + name   = "prod-k3s-firewall"

      + apply_to {
          + label_selector = "simonemms.com/project=k3s,simonemms.com/provisioner=terraform,simonemms.com/workspace=prod"
          + server         = (known after apply)
        }

      + rule {
          + description     = "Allow ICMP (ping)"
          + destination_ips = []
          + direction       = "in"
          + protocol        = "icmp"
          + source_ips      = [
              + "0.0.0.0/0",
              + "::/0",
            ]
            # (1 unchanged attribute hidden)
        }
      + rule {
          + description     = "Allow access to Kubernetes API"
          + destination_ips = []
          + direction       = "in"
          + port            = "6443"
          + protocol        = "tcp"
          + source_ips      = [
              + "0.0.0.0/0",
              + "::/0",
            ]
        }
      + rule {
          + description     = "Allow all TCP traffic on private network"
          + destination_ips = []
          + direction       = "in"
          + port            = "any"
          + protocol        = "tcp"
          + source_ips      = [
              + "10.0.0.0/16",
            ]
        }
      + rule {
          + description     = "Allow all UDP traffic on private network"
          + destination_ips = []
          + direction       = "in"
          + port            = "any"
          + protocol        = "udp"
          + source_ips      = [
              + "10.0.0.0/16",
            ]
        }
      + rule {
          + description     = "SSH port"
          + destination_ips = []
          + direction       = "in"
          + port            = "2244"
          + protocol        = "tcp"
          + source_ips      = [
              + "0.0.0.0/0",
              + "::/0",
            ]
        }
    }

  # hcloud_network.network will be created
  + resource "hcloud_network" "network" {
      + delete_protection        = false
      + expose_routes_to_vswitch = false
      + id                       = (known after apply)
      + ip_range                 = "10.0.0.0/16"
      + labels                   = {
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/workspace"   = "prod"
        }
      + name                     = "prod-k3s-network"
    }

  # hcloud_network_subnet.subnet will be created
  + resource "hcloud_network_subnet" "subnet" {
      + gateway      = (known after apply)
      + id           = (known after apply)
      + ip_range     = "10.0.0.0/16"
      + network_id   = (known after apply)
      + network_zone = "eu-central"
      + type         = "cloud"
    }

  # hcloud_placement_group.workers["pool1"] will be created
  + resource "hcloud_placement_group" "workers" {
      + id      = (known after apply)
      + labels  = {
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/type"        = "worker"
          + "simonemms.com/workspace"   = "prod"
        }
      + name    = "prod-k3s-pool1"
      + servers = (known after apply)
      + type    = "spread"
    }

  # hcloud_server.manager[0] will be created
  + resource "hcloud_server" "manager" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-24.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/type"        = "manager"
          + "simonemms.com/workspace"   = "prod"
        }
      + location                   = "nbg1"
      + name                       = "prod-k3s-manager-0"
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cx22"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)
      + user_data                  = "klLG1jO14ZIPyCjeBE5aD7/BenA="

      + network {
          + alias_ips   = []
          + ip          = (known after apply)
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = true
        }
    }

  # hcloud_server.workers[0] will be created
  + resource "hcloud_server" "workers" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-24.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "simonemms.com/pool"        = "pool1"
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/type"        = "worker"
          + "simonemms.com/workspace"   = "prod"
        }
      + location                   = "nbg1"
      + name                       = "prod-k3s-pool1-0"
      + placement_group_id         = (known after apply)
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cx22"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)
      + user_data                  = "klLG1jO14ZIPyCjeBE5aD7/BenA="

      + network {
          + alias_ips   = []
          + ip          = (known after apply)
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = true
        }
    }

  # hcloud_server.workers[1] will be created
  + resource "hcloud_server" "workers" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-24.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "simonemms.com/pool"        = "pool1"
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/type"        = "worker"
          + "simonemms.com/workspace"   = "prod"
        }
      + location                   = "nbg1"
      + name                       = "prod-k3s-pool1-1"
      + placement_group_id         = (known after apply)
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cx22"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)
      + user_data                  = "klLG1jO14ZIPyCjeBE5aD7/BenA="

      + network {
          + alias_ips   = []
          + ip          = (known after apply)
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = true
        }
    }

  # hcloud_ssh_key.server will be created
  + resource "hcloud_ssh_key" "server" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + labels      = {
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/type"        = "manager"
          + "simonemms.com/workspace"   = "prod"
        }
      + name        = "prod-k3s-ssh_key"
      + public_key  = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGPjfqG/QomY6qu9pWp+/ioQ98QGGDh+rYlHEgrgHOQr homelab"
    }

  # local_sensitive_file.kubeconfig will be created
  + resource "local_sensitive_file" "kubeconfig" {
      + content              = (sensitive value)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0755"
      + file_permission      = "0600"
      + filename             = "/github/home/.kube/config"
      + id                   = (known after apply)
    }

  # ssh_resource.manager_ready[0] will be created
  + resource "ssh_resource" "manager_ready" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "cloud-init status | grep \"status: done\"",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "5s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

  # ssh_resource.workers_ready[0] will be created
  + resource "ssh_resource" "workers_ready" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "cloud-init status | grep \"status: done\"",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "5s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

  # ssh_resource.workers_ready[1] will be created
  + resource "ssh_resource" "workers_ready" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "cloud-init status | grep \"status: done\"",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "5s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

  # module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"] will be created
  + resource "ssh_resource" "drain_workers" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo kubectl cordon prod-k3s-pool1-0",
          + "sudo kubectl drain prod-k3s-pool1-0 --delete-emptydir-data --force --ignore-daemonsets --timeout=30s",
          + "sudo kubectl delete node prod-k3s-pool1-0 --force --timeout=30s",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "node_name" = "prod-k3s-pool1-0"
        }
      + user                               = "k3smanager"
      + when                               = "destroy"
    }

  # module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"] will be created
  + resource "ssh_resource" "drain_workers" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo kubectl cordon prod-k3s-pool1-1",
          + "sudo kubectl drain prod-k3s-pool1-1 --delete-emptydir-data --force --ignore-daemonsets --timeout=30s",
          + "sudo kubectl delete node prod-k3s-pool1-1 --force --timeout=30s",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "node_name" = "prod-k3s-pool1-1"
        }
      + user                               = "k3smanager"
      + when                               = "destroy"
    }

  # module.k3s.ssh_resource.initial_manager will be created
  + resource "ssh_resource" "initial_manager" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo mkdir -p /etc/rancher/k3s/config.yaml.d",
          + "sudo mv /tmp/k3sconfig.yaml /etc/rancher/k3s/config.yaml",
          + "echo \"flannel-iface: $(ip route get 10.0.0.0 | awk -F \"dev \" 'NR==1{split($2, a, \" \"); print a[1]}')\" | sudo tee -a /etc/rancher/k3s/config.yaml.d/flannel.yaml",
          + "curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable sh -",
          + "sudo systemctl start k3s",
          + "until sudo kubectl get node prod-k3s-manager-0; do sleep 1; done",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "channel" = "stable"
        }
      + user                               = "k3smanager"
      + when                               = "create"

      + file {
          + content     = (known after apply)
          + destination = "/tmp/k3sconfig.yaml"
            # (4 unchanged attributes hidden)
        }
    }

  # module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"] will be created
  + resource "ssh_resource" "install_workers" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo mkdir -p /etc/rancher/k3s/config.yaml.d",
          + "sudo mv /tmp/k3sconfig.yaml /etc/rancher/k3s/config.yaml",
          + "curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"agent\" sh -",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "channel" = "stable"
        }
      + user                               = "k3smanager"
      + when                               = "create"

      + file {
          + content     = (known after apply)
          + destination = "/tmp/k3sconfig.yaml"
            # (4 unchanged attributes hidden)
        }
    }

  # module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"] will be created
  + resource "ssh_resource" "install_workers" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo mkdir -p /etc/rancher/k3s/config.yaml.d",
          + "sudo mv /tmp/k3sconfig.yaml /etc/rancher/k3s/config.yaml",
          + "curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"agent\" sh -",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "channel" = "stable"
        }
      + user                               = "k3smanager"
      + when                               = "create"

      + file {
          + content     = (known after apply)
          + destination = "/tmp/k3sconfig.yaml"
            # (4 unchanged attributes hidden)
        }
    }

  # module.k3s.ssh_sensitive_resource.join_token will be created
  + resource "ssh_sensitive_resource" "join_token" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo cat /var/lib/rancher/k3s/server/token",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (sensitive value)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

  # module.k3s.ssh_sensitive_resource.kubeconfig will be created
  + resource "ssh_sensitive_resource" "kubeconfig" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = (known after apply)
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (sensitive value)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

Plan: 19 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + hcloud_network_name = "prod-k3s-network"
  + k3s_cluster_cidr    = "10.42.0.0/16"
  + kube_api_server     = (known after apply)
  + kubeconfig          = (sensitive value)
  + location            = "nbg1"
  + network_name        = "prod-k3s-network"
  + pools               = (sensitive value)
  + region              = "eu-central"
  + ssh_port            = 2244
  + ssh_user            = "k3smanager"
time=2024-11-08T11:59:44Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes] 
Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)

Terraform has been successfully initialized!

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # helm_release.argocd will be created
  + resource "helm_release" "argocd" {
      + atomic                     = true
      + chart                      = "argo-cd"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "argocd"
      + namespace                  = "argocd"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://argoproj.github.io/argo-helm"
      + reset_values               = true
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 600
      + values                     = [
          + <<-EOT
                global:
                  domain: argocd.prod.simonemms.com
                redis-ha:
                  enabled: true
                repoServer:
                  autoscaling:
                    enabled: true
                    minReplicas: 2
                server:
                  autoscaling:
                    enabled: true
                    minReplicas: 2
                  ingress:
                    enabled: true
                    ingressClassName: nginx
                    annotations:
                      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
                      nginx.ingress.kubernetes.io/backend-protocol: HTTP
                      kubernetes.io/tls-acme: "true"
                      cert-manager.io/cluster-issuer: letsencrypt
                    tls: true
                    extraTLS:
                      - hosts:
                          - argocd.prod.simonemms.com
                        secretName: argocd-tls
                configs:
                  params:
                    server.insecure: true
            EOT,
        ]
      + verify                     = false
      + version                    = "7.7.0"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # helm_release.hcloud_ccm will be created
  + resource "helm_release" "hcloud_ccm" {
      + atomic                     = true
      + chart                      = "hcloud-cloud-controller-manager"
      + cleanup_on_fail            = true
      + create_namespace           = false
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "hccm"
      + namespace                  = "kube-system"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.hetzner.cloud"
      + reset_values               = true
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "1.20.0"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
    }

  # helm_release.hcloud_csi will be created
  + resource "helm_release" "hcloud_csi" {
      + atomic                     = true
      + chart                      = "hcloud-csi"
      + cleanup_on_fail            = true
      + create_namespace           = false
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "hcsi"
      + namespace                  = "kube-system"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.hetzner.cloud"
      + reset_values               = true
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "2.10.0"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
    }

  # helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = true
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = true
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = (known after apply)
      + verify                     = false
      + version                    = "4.11.3"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # kubernetes_secret_v1.hcloud will be created
  + resource "kubernetes_secret_v1" "hcloud" {
      + data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + name             = "hcloud"
          + namespace        = "kube-system"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # random_integer.ingress_load_balancer_id will be created
  + resource "random_integer" "ingress_load_balancer_id" {
      + id     = (known after apply)
      + max    = 9999
      + min    = 1000
      + result = (known after apply)
    }

Plan: 6 to add, 0 to change, 0 to destroy.


github-actions bot commented Nov 8, 2024

Execution result of "run-all plan" in "stacks/prod"
time=2024-11-08T12:02:23Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner

Group 2
- Module /github/workspace/stacks/prod/kubernetes


time=2024-11-08T12:02:23Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner] 
Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of loafoe/ssh from the dependency lock file
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has been successfully initialized!

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # hcloud_firewall.firewall will be created
  + resource "hcloud_firewall" "firewall" {
      + id     = (known after apply)
      + labels = {
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/workspace"   = "prod"
        }
      + name   = "prod-k3s-firewall"

      + apply_to {
          + label_selector = "simonemms.com/project=k3s,simonemms.com/provisioner=terraform,simonemms.com/workspace=prod"
          + server         = (known after apply)
        }

      + rule {
          + description     = "Allow ICMP (ping)"
          + destination_ips = []
          + direction       = "in"
          + protocol        = "icmp"
          + source_ips      = [
              + "0.0.0.0/0",
              + "::/0",
            ]
            # (1 unchanged attribute hidden)
        }
      + rule {
          + description     = "Allow access to Kubernetes API"
          + destination_ips = []
          + direction       = "in"
          + port            = "6443"
          + protocol        = "tcp"
          + source_ips      = [
              + "0.0.0.0/0",
              + "::/0",
            ]
        }
      + rule {
          + description     = "Allow all TCP traffic on private network"
          + destination_ips = []
          + direction       = "in"
          + port            = "any"
          + protocol        = "tcp"
          + source_ips      = [
              + "10.0.0.0/16",
            ]
        }
      + rule {
          + description     = "Allow all UDP traffic on private network"
          + destination_ips = []
          + direction       = "in"
          + port            = "any"
          + protocol        = "udp"
          + source_ips      = [
              + "10.0.0.0/16",
            ]
        }
      + rule {
          + description     = "SSH port"
          + destination_ips = []
          + direction       = "in"
          + port            = "2244"
          + protocol        = "tcp"
          + source_ips      = [
              + "0.0.0.0/0",
              + "::/0",
            ]
        }
    }

  # hcloud_network.network will be created
  + resource "hcloud_network" "network" {
      + delete_protection        = false
      + expose_routes_to_vswitch = false
      + id                       = (known after apply)
      + ip_range                 = "10.0.0.0/16"
      + labels                   = {
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/workspace"   = "prod"
        }
      + name                     = "prod-k3s-network"
    }

  # hcloud_network_subnet.subnet will be created
  + resource "hcloud_network_subnet" "subnet" {
      + gateway      = (known after apply)
      + id           = (known after apply)
      + ip_range     = "10.0.0.0/16"
      + network_id   = (known after apply)
      + network_zone = "eu-central"
      + type         = "cloud"
    }

  # hcloud_placement_group.workers["pool1"] will be created
  + resource "hcloud_placement_group" "workers" {
      + id      = (known after apply)
      + labels  = {
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/type"        = "worker"
          + "simonemms.com/workspace"   = "prod"
        }
      + name    = "prod-k3s-pool1"
      + servers = (known after apply)
      + type    = "spread"
    }

  # hcloud_server.manager[0] will be created
  + resource "hcloud_server" "manager" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-24.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/type"        = "manager"
          + "simonemms.com/workspace"   = "prod"
        }
      + location                   = "nbg1"
      + name                       = "prod-k3s-manager-0"
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cx22"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)
      + user_data                  = "klLG1jO14ZIPyCjeBE5aD7/BenA="

      + network {
          + alias_ips   = []
          + ip          = (known after apply)
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = true
        }
    }

  # hcloud_server.workers[0] will be created
  + resource "hcloud_server" "workers" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-24.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "simonemms.com/pool"        = "pool1"
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/type"        = "worker"
          + "simonemms.com/workspace"   = "prod"
        }
      + location                   = "nbg1"
      + name                       = "prod-k3s-pool1-0"
      + placement_group_id         = (known after apply)
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cx22"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)
      + user_data                  = "klLG1jO14ZIPyCjeBE5aD7/BenA="

      + network {
          + alias_ips   = []
          + ip          = (known after apply)
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = true
        }
    }

  # hcloud_server.workers[1] will be created
  + resource "hcloud_server" "workers" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-24.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "simonemms.com/pool"        = "pool1"
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/type"        = "worker"
          + "simonemms.com/workspace"   = "prod"
        }
      + location                   = "nbg1"
      + name                       = "prod-k3s-pool1-1"
      + placement_group_id         = (known after apply)
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cx22"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)
      + user_data                  = "klLG1jO14ZIPyCjeBE5aD7/BenA="

      + network {
          + alias_ips   = []
          + ip          = (known after apply)
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = true
        }
    }

  # hcloud_ssh_key.server will be created
  + resource "hcloud_ssh_key" "server" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + labels      = {
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/type"        = "manager"
          + "simonemms.com/workspace"   = "prod"
        }
      + name        = "prod-k3s-ssh_key"
      + public_key  = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGPjfqG/QomY6qu9pWp+/ioQ98QGGDh+rYlHEgrgHOQr homelab"
    }

  # local_sensitive_file.kubeconfig will be created
  + resource "local_sensitive_file" "kubeconfig" {
      + content              = (sensitive value)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0755"
      + file_permission      = "0600"
      + filename             = "/github/home/.kube/config"
      + id                   = (known after apply)
    }

  # ssh_resource.manager_ready[0] will be created
  + resource "ssh_resource" "manager_ready" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "cloud-init status | grep \"status: done\"",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "5s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

  # ssh_resource.workers_ready[0] will be created
  + resource "ssh_resource" "workers_ready" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "cloud-init status | grep \"status: done\"",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "5s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

  # ssh_resource.workers_ready[1] will be created
  + resource "ssh_resource" "workers_ready" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "cloud-init status | grep \"status: done\"",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "5s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

  # module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"] will be created
  + resource "ssh_resource" "drain_workers" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo kubectl cordon prod-k3s-pool1-0",
          + "sudo kubectl drain prod-k3s-pool1-0 --delete-emptydir-data --force --ignore-daemonsets --timeout=30s",
          + "sudo kubectl delete node prod-k3s-pool1-0 --force --timeout=30s",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "node_name" = "prod-k3s-pool1-0"
        }
      + user                               = "k3smanager"
      + when                               = "destroy"
    }

  # module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"] will be created
  + resource "ssh_resource" "drain_workers" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo kubectl cordon prod-k3s-pool1-1",
          + "sudo kubectl drain prod-k3s-pool1-1 --delete-emptydir-data --force --ignore-daemonsets --timeout=30s",
          + "sudo kubectl delete node prod-k3s-pool1-1 --force --timeout=30s",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "node_name" = "prod-k3s-pool1-1"
        }
      + user                               = "k3smanager"
      + when                               = "destroy"
    }

  # module.k3s.ssh_resource.initial_manager will be created
  + resource "ssh_resource" "initial_manager" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo mkdir -p /etc/rancher/k3s/config.yaml.d",
          + "sudo mv /tmp/k3sconfig.yaml /etc/rancher/k3s/config.yaml",
          + "echo \"flannel-iface: $(ip route get 10.0.0.0 | awk -F \"dev \" 'NR==1{split($2, a, \" \"); print a[1]}')\" | sudo tee -a /etc/rancher/k3s/config.yaml.d/flannel.yaml",
          + "curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable sh -",
          + "sudo systemctl start k3s",
          + "until sudo kubectl get node prod-k3s-manager-0; do sleep 1; done",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "channel" = "stable"
        }
      + user                               = "k3smanager"
      + when                               = "create"

      + file {
          + content     = (known after apply)
          + destination = "/tmp/k3sconfig.yaml"
            # (4 unchanged attributes hidden)
        }
    }

  # module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"] will be created
  + resource "ssh_resource" "install_workers" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo mkdir -p /etc/rancher/k3s/config.yaml.d",
          + "sudo mv /tmp/k3sconfig.yaml /etc/rancher/k3s/config.yaml",
          + "curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"agent\" sh -",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "channel" = "stable"
        }
      + user                               = "k3smanager"
      + when                               = "create"

      + file {
          + content     = (known after apply)
          + destination = "/tmp/k3sconfig.yaml"
            # (4 unchanged attributes hidden)
        }
    }

  # module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"] will be created
  + resource "ssh_resource" "install_workers" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo mkdir -p /etc/rancher/k3s/config.yaml.d",
          + "sudo mv /tmp/k3sconfig.yaml /etc/rancher/k3s/config.yaml",
          + "curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"agent\" sh -",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "channel" = "stable"
        }
      + user                               = "k3smanager"
      + when                               = "create"

      + file {
          + content     = (known after apply)
          + destination = "/tmp/k3sconfig.yaml"
            # (4 unchanged attributes hidden)
        }
    }
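
  # The install_workers instances above are keyed by node name, which points to a
  # for_each over the worker pool. A sketch of that pattern follows; the agent config
  # wiring (server URL and join token) is an assumption based on how k3s agents
  # normally join, not a copy of the module's code.

  resource "ssh_resource" "install_workers" {
    for_each = { for w in var.workers : w.name => w } # illustrative input

    host        = each.value.ipv4_address
    user        = var.ssh_user
    port        = var.ssh_port
    private_key = var.ssh_private_key

    file {
      # Standard k3s agent config keys; the join_token resource appears below.
      content = yamlencode({
        server = "https://${var.manager_private_ip}:6443"
        token  = trimspace(ssh_sensitive_resource.join_token.result)
      })
      destination = "/tmp/k3sconfig.yaml"
    }

    commands = [
      "sudo mkdir -p /etc/rancher/k3s/config.yaml.d",
      "sudo mv /tmp/k3sconfig.yaml /etc/rancher/k3s/config.yaml",
      "curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"agent\" sh -",
    ]

    triggers = {
      channel = var.k3s_channel
    }
  }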

  # module.k3s.ssh_sensitive_resource.join_token will be created
  + resource "ssh_sensitive_resource" "join_token" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo cat /var/lib/rancher/k3s/server/token",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (sensitive value)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

  # module.k3s.ssh_sensitive_resource.kubeconfig will be created
  + resource "ssh_sensitive_resource" "kubeconfig" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = (known after apply)
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (sensitive value)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }
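
  # The kubeconfig commands are (known after apply) because they interpolate values
  # computed during the run, typically the manager's public address. A common pattern,
  # offered here only as an assumption about what the module does, is to read the
  # default k3s kubeconfig and rewrite its server field on the way out:

  resource "ssh_sensitive_resource" "kubeconfig" {
    host        = var.manager_ip # illustrative
    user        = var.ssh_user
    port        = var.ssh_port
    private_key = var.ssh_private_key

    commands = [
      # /etc/rancher/k3s/k3s.yaml is the default k3s kubeconfig location;
      # var.kube_api_host is an illustrative input for the public address.
      "sudo sed 's/127.0.0.1/${var.kube_api_host}/g' /etc/rancher/k3s/k3s.yaml",
    ]
  }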

Plan: 19 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + hcloud_network_name = "prod-k3s-network"
  + k3s_cluster_cidr    = "10.42.0.0/16"
  + kube_api_server     = (known after apply)
  + kubeconfig          = (sensitive value)
  + location            = "nbg1"
  + network_name        = "prod-k3s-network"
  + pools               = (sensitive value)
  + region              = "eu-central"
  + ssh_port            = 2244
  + ssh_user            = "k3smanager"
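
  # These outputs are what the Group 2 kubernetes module consumes. With Terragrunt
  # that wiring normally lives in a dependency block; a plausible sketch of
  # stacks/prod/kubernetes/terragrunt.hcl (not the repository's actual file):

  dependency "hetzner" {
    config_path = "../hetzner"
  }

  inputs = {
    kubeconfig      = dependency.hetzner.outputs.kubeconfig
    kube_api_server = dependency.hetzner.outputs.kube_api_server
    network_name    = dependency.hetzner.outputs.network_name
    location        = dependency.hetzner.outputs.location
    ssh_port        = dependency.hetzner.outputs.ssh_port
    ssh_user        = dependency.hetzner.outputs.ssh_user
  }
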
time=2024-11-08T12:02:28Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes] 
Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/helm from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)

Terraform has been successfully initialized!

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # helm_release.argocd will be created
  + resource "helm_release" "argocd" {
      + atomic                     = true
      + chart                      = "argo-cd"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "argocd"
      + namespace                  = "argocd"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://argoproj.github.io/argo-helm"
      + reset_values               = true
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 600
      + values                     = [
          + <<-EOT
                global:
                  domain: argocd.prod.simonemms.com
                redis-ha:
                  enabled: true
                repoServer:
                  autoscaling:
                    enabled: true
                    minReplicas: 2
                server:
                  autoscaling:
                    enabled: true
                    minReplicas: 2
                  ingress:
                    enabled: true
                    ingressClassName: nginx
                    annotations:
                      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
                      nginx.ingress.kubernetes.io/backend-protocol: HTTP
                      kubernetes.io/tls-acme: "true"
                      cert-manager.io/cluster-issuer: letsencrypt
                    tls: true
                    extraTLS:
                      - hosts:
                          - argocd.prod.simonemms.com
                        secretName: argocd-tls
                configs:
                  params:
                    server.insecure: true
            EOT,
        ]
      + verify                     = false
      + version                    = "7.7.0"
      + wait                       = true
      + wait_for_jobs              = false
    }
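
  # The values above arrive as one YAML document; in the module this is commonly a
  # heredoc or a yamlencode() over module inputs. An abridged, illustrative sketch
  # (var.domain is an assumed input and only a subset of the values is shown):

  resource "helm_release" "argocd" {
    name             = "argocd"
    namespace        = "argocd"
    create_namespace = true
    repository       = "https://argoproj.github.io/argo-helm"
    chart            = "argo-cd"
    version          = "7.7.0"
    atomic           = true
    cleanup_on_fail  = true
    timeout          = 600

    values = [
      yamlencode({
        global = {
          domain = "argocd.${var.domain}" # argocd.prod.simonemms.com in this plan
        }
        configs = {
          params = {
            "server.insecure" = true # TLS terminates at the nginx ingress
          }
        }
      })
    ]
  }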

  # helm_release.hcloud_ccm will be created
  + resource "helm_release" "hcloud_ccm" {
      + atomic                     = true
      + chart                      = "hcloud-cloud-controller-manager"
      + cleanup_on_fail            = true
      + create_namespace           = false
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "hccm"
      + namespace                  = "kube-system"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.hetzner.cloud"
      + reset_values               = true
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "1.20.0"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
    }
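
  # The set blocks above are masked because at least one value in each is sensitive.
  # For the Hetzner cloud-controller-manager they typically carry the private network
  # and cluster CIDR wiring; the value paths below are assumptions, since the plan
  # hides the real ones.

  resource "helm_release" "hcloud_ccm" {
    name       = "hccm"
    namespace  = "kube-system"
    repository = "https://charts.hetzner.cloud"
    chart      = "hcloud-cloud-controller-manager"
    version    = "1.20.0"
    atomic     = true

    set {
      name  = "networking.enabled" # hypothetical key
      value = "true"
    }

    set {
      name  = "networking.clusterCIDR" # hypothetical key
      value = var.cluster_cidr # the hetzner stack exports k3s_cluster_cidr = 10.42.0.0/16
    }
  }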

  # helm_release.hcloud_csi will be created
  + resource "helm_release" "hcloud_csi" {
      + atomic                     = true
      + chart                      = "hcloud-csi"
      + cleanup_on_fail            = true
      + create_namespace           = false
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "hcsi"
      + namespace                  = "kube-system"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.hetzner.cloud"
      + reset_values               = true
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "2.10.0"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
    }

  # helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = true
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = true
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = (known after apply)
      + verify                     = false
      + version                    = "4.11.3"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # kubernetes_secret_v1.hcloud will be created
  + resource "kubernetes_secret_v1" "hcloud" {
      + data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + name             = "hcloud"
          + namespace        = "kube-system"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }
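
  # The hcloud secret is the conventional hand-off point for the Hetzner CCM and CSI
  # driver: both read the API token (and, for the CCM, the network) from a secret
  # named "hcloud" in kube-system. A sketch with illustrative input names:

  resource "kubernetes_secret_v1" "hcloud" {
    metadata {
      name      = "hcloud"
      namespace = "kube-system"
    }

    data = {
      token   = var.hcloud_token
      network = var.network_name
    }

    type = "Opaque"
  }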

  # random_integer.ingress_load_balancer_id will be created
  + resource "random_integer" "ingress_load_balancer_id" {
      + id     = (known after apply)
      + max    = 9999
      + min    = 1000
      + result = (known after apply)
    }
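
  # random_integer.ingress_load_balancer_id most plausibly feeds the ingress-nginx
  # values (shown as known-after-apply above), for example to give the Hetzner load
  # balancer a stable generated name. The annotation keys below are the hcloud CCM's
  # real ones; wiring them to the random id is an assumption.

  resource "random_integer" "ingress_load_balancer_id" {
    min = 1000
    max = 9999
  }

  resource "helm_release" "ingress_nginx" {
    name             = "ingress-nginx"
    namespace        = "ingress-nginx"
    create_namespace = true
    repository       = "https://kubernetes.github.io/ingress-nginx"
    chart            = "ingress-nginx"
    version          = "4.11.3"

    values = [
      yamlencode({
        controller = {
          service = {
            annotations = {
              "load-balancer.hetzner.cloud/name"     = "lb-${random_integer.ingress_load_balancer_id.result}"
              "load-balancer.hetzner.cloud/location" = var.location
            }
          }
        }
      })
    ]
  }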

Plan: 6 to add, 0 to change, 0 to destroy.

          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/type"        = "manager"
          + "simonemms.com/workspace"   = "prod"
        }
      + name        = "prod-k3s-ssh_key"
      + public_key  = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGPjfqG/QomY6qu9pWp+/ioQ98QGGDh+rYlHEgrgHOQr homelab"
    }

  # local_sensitive_file.kubeconfig will be created
  + resource "local_sensitive_file" "kubeconfig" {
      + content              = (sensitive value)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0755"
      + file_permission      = "0600"
      + filename             = "/github/home/.kube/config"
      + id                   = (known after apply)
    }

  # ssh_resource.manager_ready[0] will be created
  + resource "ssh_resource" "manager_ready" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "cloud-init status | grep \"status: done\"",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "5s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

  # ssh_resource.workers_ready[0] will be created
  + resource "ssh_resource" "workers_ready" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "cloud-init status | grep \"status: done\"",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "5s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

  # ssh_resource.workers_ready[1] will be created
  + resource "ssh_resource" "workers_ready" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "cloud-init status | grep \"status: done\"",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "5s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

  # module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"] will be created
  + resource "ssh_resource" "drain_workers" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo kubectl cordon prod-k3s-pool1-0",
          + "sudo kubectl drain prod-k3s-pool1-0 --delete-emptydir-data --force --ignore-daemonsets --timeout=30s",
          + "sudo kubectl delete node prod-k3s-pool1-0 --force --timeout=30s",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "node_name" = "prod-k3s-pool1-0"
        }
      + user                               = "k3smanager"
      + when                               = "destroy"
    }

  # module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"] will be created
  + resource "ssh_resource" "drain_workers" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo kubectl cordon prod-k3s-pool1-1",
          + "sudo kubectl drain prod-k3s-pool1-1 --delete-emptydir-data --force --ignore-daemonsets --timeout=30s",
          + "sudo kubectl delete node prod-k3s-pool1-1 --force --timeout=30s",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "node_name" = "prod-k3s-pool1-1"
        }
      + user                               = "k3smanager"
      + when                               = "destroy"
    }

  # module.k3s.ssh_resource.initial_manager will be created
  + resource "ssh_resource" "initial_manager" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo mkdir -p /etc/rancher/k3s/config.yaml.d",
          + "sudo mv /tmp/k3sconfig.yaml /etc/rancher/k3s/config.yaml",
          + "echo \"flannel-iface: $(ip route get 10.0.0.0 | awk -F \"dev \" 'NR==1{split($2, a, \" \"); print a[1]}')\" | sudo tee -a /etc/rancher/k3s/config.yaml.d/flannel.yaml",
          + "curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable sh -",
          + "sudo systemctl start k3s",
          + "until sudo kubectl get node prod-k3s-manager-0; do sleep 1; done",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "channel" = "stable"
        }
      + user                               = "k3smanager"
      + when                               = "create"

      + file {
          + content     = (known after apply)
          + destination = "/tmp/k3sconfig.yaml"
            # (4 unchanged attributes hidden)
        }
    }

  # module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"] will be created
  + resource "ssh_resource" "install_workers" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo mkdir -p /etc/rancher/k3s/config.yaml.d",
          + "sudo mv /tmp/k3sconfig.yaml /etc/rancher/k3s/config.yaml",
          + "curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"agent\" sh -",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "channel" = "stable"
        }
      + user                               = "k3smanager"
      + when                               = "create"

      + file {
          + content     = (known after apply)
          + destination = "/tmp/k3sconfig.yaml"
            # (4 unchanged attributes hidden)
        }
    }

  # module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"] will be created
  + resource "ssh_resource" "install_workers" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo mkdir -p /etc/rancher/k3s/config.yaml.d",
          + "sudo mv /tmp/k3sconfig.yaml /etc/rancher/k3s/config.yaml",
          + "curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"agent\" sh -",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "channel" = "stable"
        }
      + user                               = "k3smanager"
      + when                               = "create"

      + file {
          + content     = (known after apply)
          + destination = "/tmp/k3sconfig.yaml"
            # (4 unchanged attributes hidden)
        }
    }

  # module.k3s.ssh_sensitive_resource.join_token will be created
  + resource "ssh_sensitive_resource" "join_token" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo cat /var/lib/rancher/k3s/server/token",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (sensitive value)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

  # module.k3s.ssh_sensitive_resource.kubeconfig will be created
  + resource "ssh_sensitive_resource" "kubeconfig" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = (known after apply)
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (sensitive value)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

Plan: 19 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + hcloud_network_name = "prod-k3s-network"
  + k3s_cluster_cidr    = "10.42.0.0/16"
  + kube_api_server     = (known after apply)
  + kubeconfig          = (sensitive value)
  + location            = "nbg1"
  + network_name        = "prod-k3s-network"
  + pools               = (sensitive value)
  + region              = "eu-central"
  + ssh_port            = 2244
  + ssh_user            = "k3smanager"
time=2024-11-08T12:11:23Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes] 
Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/helm from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)

Terraform has been successfully initialized!

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # helm_release.argocd will be created
  + resource "helm_release" "argocd" {
      + atomic                     = true
      + chart                      = "argo-cd"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "argocd"
      + namespace                  = "argocd"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://argoproj.github.io/argo-helm"
      + reset_values               = true
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 600
      + values                     = [
          + <<-EOT
                global:
                  domain: argocd.prod.simonemms.com
                redis-ha:
                  enabled: true
                repoServer:
                  autoscaling:
                    enabled: true
                    minReplicas: 2
                server:
                  autoscaling:
                    enabled: true
                    minReplicas: 2
                  ingress:
                    enabled: true
                    ingressClassName: nginx
                    annotations:
                      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
                      nginx.ingress.kubernetes.io/backend-protocol: HTTP
                      kubernetes.io/tls-acme: "true"
                      cert-manager.io/cluster-issuer: letsencrypt
                    tls: true
                    extraTLS:
                      - hosts:
                          - argocd.prod.simonemms.com
                        secretName: argocd-tls
                configs:
                  params:
                    server.insecure: true
            EOT,
        ]
      + verify                     = false
      + version                    = "7.7.0"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # helm_release.hcloud_ccm will be created
  + resource "helm_release" "hcloud_ccm" {
      + atomic                     = true
      + chart                      = "hcloud-cloud-controller-manager"
      + cleanup_on_fail            = true
      + create_namespace           = false
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "hccm"
      + namespace                  = "kube-system"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.hetzner.cloud"
      + reset_values               = true
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "1.20.0"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
    }

  # helm_release.hcloud_csi will be created
  + resource "helm_release" "hcloud_csi" {
      + atomic                     = true
      + chart                      = "hcloud-csi"
      + cleanup_on_fail            = true
      + create_namespace           = false
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "hcsi"
      + namespace                  = "kube-system"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.hetzner.cloud"
      + reset_values               = true
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "2.10.0"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
    }

  # helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = true
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = true
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = (known after apply)
      + verify                     = false
      + version                    = "4.11.3"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # kubernetes_secret_v1.hcloud will be created
  + resource "kubernetes_secret_v1" "hcloud" {
      + data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + name             = "hcloud"
          + namespace        = "kube-system"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # random_integer.ingress_load_balancer_id will be created
  + resource "random_integer" "ingress_load_balancer_id" {
      + id     = (known after apply)
      + max    = 9999
      + min    = 1000
      + result = (known after apply)
    }

Plan: 6 to add, 0 to change, 0 to destroy.
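
For local verification, the aggregated plan above can be reproduced from the repository with Terragrunt; a minimal sketch, assuming terragrunt and terraform are installed, credentials for the remote Terraform backend are configured, and the Hetzner Cloud API token is exported as HCLOUD_TOKEN (how the modules actually wire in the token is an assumption, not confirmed here):

    # Reproduce the CI "run-all plan" for the prod stack locally.
    # HCLOUD_TOKEN is an assumed way of authenticating the hcloud provider.
    export HCLOUD_TOKEN="<hetzner-api-token>"
    cd stacks/prod
    terragrunt run-all plan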


github-actions bot commented Nov 8, 2024

Execution result of "run-all plan" in "stacks/prod"
time=2024-11-08T13:16:45Z level=info msg=The stack at /github/workspace/stacks/prod will be processed in the following order for command plan:
Group 1
- Module /github/workspace/stacks/prod/hetzner

Group 2
- Module /github/workspace/stacks/prod/kubernetes


time=2024-11-08T13:16:45Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/hetzner into /github/workspace/stacks/prod/hetzner/.terragrunt-cache/7n5v_ZVOv4gLIvn-SLBHuU7F7OI/B-HSI5LUu0nLTnyopQYP4SLEkoU prefix=[/github/workspace/stacks/prod/hetzner] 
Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Downloading git::https://github.com/mrsimonemms/terraform-module-k3s.git for k3s...
- k3s in .terraform/modules/k3s
Initializing provider plugins...
- Reusing previous version of hetznercloud/hcloud from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of loafoe/ssh from the dependency lock file
- Installing hetznercloud/hcloud v1.48.0...
- Installed hetznercloud/hcloud v1.48.0 (signed by a HashiCorp partner, key ID 5219EACB3A77198B)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing loafoe/ssh v2.7.0...
- Installed loafoe/ssh v2.7.0 (self-signed, key ID C0E4EB79E9E6A23D)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has been successfully initialized!

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # hcloud_firewall.firewall will be created
  + resource "hcloud_firewall" "firewall" {
      + id     = (known after apply)
      + labels = {
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/workspace"   = "prod"
        }
      + name   = "prod-k3s-firewall"

      + apply_to {
          + label_selector = "simonemms.com/project=k3s,simonemms.com/provisioner=terraform,simonemms.com/workspace=prod"
          + server         = (known after apply)
        }

      + rule {
          + description     = "Allow ICMP (ping)"
          + destination_ips = []
          + direction       = "in"
          + protocol        = "icmp"
          + source_ips      = [
              + "0.0.0.0/0",
              + "::/0",
            ]
            # (1 unchanged attribute hidden)
        }
      + rule {
          + description     = "Allow access to Kubernetes API"
          + destination_ips = []
          + direction       = "in"
          + port            = "6443"
          + protocol        = "tcp"
          + source_ips      = [
              + "0.0.0.0/0",
              + "::/0",
            ]
        }
      + rule {
          + description     = "Allow all TCP traffic on private network"
          + destination_ips = []
          + direction       = "in"
          + port            = "any"
          + protocol        = "tcp"
          + source_ips      = [
              + "10.0.0.0/16",
            ]
        }
      + rule {
          + description     = "Allow all UDP traffic on private network"
          + destination_ips = []
          + direction       = "in"
          + port            = "any"
          + protocol        = "udp"
          + source_ips      = [
              + "10.0.0.0/16",
            ]
        }
      + rule {
          + description     = "SSH port"
          + destination_ips = []
          + direction       = "in"
          + port            = "2244"
          + protocol        = "tcp"
          + source_ips      = [
              + "0.0.0.0/0",
              + "::/0",
            ]
        }
    }

  # hcloud_network.network will be created
  + resource "hcloud_network" "network" {
      + delete_protection        = false
      + expose_routes_to_vswitch = false
      + id                       = (known after apply)
      + ip_range                 = "10.0.0.0/16"
      + labels                   = {
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/workspace"   = "prod"
        }
      + name                     = "prod-k3s-network"
    }

  # hcloud_network_subnet.subnet will be created
  + resource "hcloud_network_subnet" "subnet" {
      + gateway      = (known after apply)
      + id           = (known after apply)
      + ip_range     = "10.0.0.0/16"
      + network_id   = (known after apply)
      + network_zone = "eu-central"
      + type         = "cloud"
    }

  # hcloud_placement_group.workers["pool1"] will be created
  + resource "hcloud_placement_group" "workers" {
      + id      = (known after apply)
      + labels  = {
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/type"        = "worker"
          + "simonemms.com/workspace"   = "prod"
        }
      + name    = "prod-k3s-pool1"
      + servers = (known after apply)
      + type    = "spread"
    }

  # hcloud_server.manager[0] will be created
  + resource "hcloud_server" "manager" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-24.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/type"        = "manager"
          + "simonemms.com/workspace"   = "prod"
        }
      + location                   = "nbg1"
      + name                       = "prod-k3s-manager-0"
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cx22"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)
      + user_data                  = "klLG1jO14ZIPyCjeBE5aD7/BenA="

      + network {
          + alias_ips   = []
          + ip          = (known after apply)
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = true
        }
    }

  # hcloud_server.workers[0] will be created
  + resource "hcloud_server" "workers" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-24.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "simonemms.com/pool"        = "pool1"
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/type"        = "worker"
          + "simonemms.com/workspace"   = "prod"
        }
      + location                   = "nbg1"
      + name                       = "prod-k3s-pool1-0"
      + placement_group_id         = (known after apply)
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cx22"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)
      + user_data                  = "klLG1jO14ZIPyCjeBE5aD7/BenA="

      + network {
          + alias_ips   = []
          + ip          = (known after apply)
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = true
        }
    }

  # hcloud_server.workers[1] will be created
  + resource "hcloud_server" "workers" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-24.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "simonemms.com/pool"        = "pool1"
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/type"        = "worker"
          + "simonemms.com/workspace"   = "prod"
        }
      + location                   = "nbg1"
      + name                       = "prod-k3s-pool1-1"
      + placement_group_id         = (known after apply)
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cx22"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)
      + user_data                  = "klLG1jO14ZIPyCjeBE5aD7/BenA="

      + network {
          + alias_ips   = []
          + ip          = (known after apply)
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = true
        }
    }

  # hcloud_ssh_key.server will be created
  + resource "hcloud_ssh_key" "server" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + labels      = {
          + "simonemms.com/project"     = "k3s"
          + "simonemms.com/provisioner" = "terraform"
          + "simonemms.com/type"        = "manager"
          + "simonemms.com/workspace"   = "prod"
        }
      + name        = "prod-k3s-ssh_key"
      + public_key  = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGPjfqG/QomY6qu9pWp+/ioQ98QGGDh+rYlHEgrgHOQr homelab"
    }

  # local_sensitive_file.kubeconfig will be created
  + resource "local_sensitive_file" "kubeconfig" {
      + content              = (sensitive value)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0755"
      + file_permission      = "0600"
      + filename             = "/github/home/.kube/config"
      + id                   = (known after apply)
    }

  # ssh_resource.manager_ready[0] will be created
  + resource "ssh_resource" "manager_ready" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "cloud-init status | grep \"status: done\"",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "5s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

  # ssh_resource.workers_ready[0] will be created
  + resource "ssh_resource" "workers_ready" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "cloud-init status | grep \"status: done\"",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "5s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

  # ssh_resource.workers_ready[1] will be created
  + resource "ssh_resource" "workers_ready" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "cloud-init status | grep \"status: done\"",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "5s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

  # module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-0"] will be created
  + resource "ssh_resource" "drain_workers" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo kubectl cordon prod-k3s-pool1-0",
          + "sudo kubectl drain prod-k3s-pool1-0 --delete-emptydir-data --force --ignore-daemonsets --timeout=30s",
          + "sudo kubectl delete node prod-k3s-pool1-0 --force --timeout=30s",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "node_name" = "prod-k3s-pool1-0"
        }
      + user                               = "k3smanager"
      + when                               = "destroy"
    }

  # module.k3s.ssh_resource.drain_workers["prod-k3s-pool1-1"] will be created
  + resource "ssh_resource" "drain_workers" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo kubectl cordon prod-k3s-pool1-1",
          + "sudo kubectl drain prod-k3s-pool1-1 --delete-emptydir-data --force --ignore-daemonsets --timeout=30s",
          + "sudo kubectl delete node prod-k3s-pool1-1 --force --timeout=30s",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "node_name" = "prod-k3s-pool1-1"
        }
      + user                               = "k3smanager"
      + when                               = "destroy"
    }

  # module.k3s.ssh_resource.initial_manager will be created
  + resource "ssh_resource" "initial_manager" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo mkdir -p /etc/rancher/k3s/config.yaml.d",
          + "sudo mv /tmp/k3sconfig.yaml /etc/rancher/k3s/config.yaml",
          + "echo \"flannel-iface: $(ip route get 10.0.0.0 | awk -F \"dev \" 'NR==1{split($2, a, \" \"); print a[1]}')\" | sudo tee -a /etc/rancher/k3s/config.yaml.d/flannel.yaml",
          + "curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable sh -",
          + "sudo systemctl start k3s",
          + "until sudo kubectl get node prod-k3s-manager-0; do sleep 1; done",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "channel" = "stable"
        }
      + user                               = "k3smanager"
      + when                               = "create"

      + file {
          + content     = (known after apply)
          + destination = "/tmp/k3sconfig.yaml"
            # (4 unchanged attributes hidden)
        }
    }

  # module.k3s.ssh_resource.install_workers["prod-k3s-pool1-0"] will be created
  + resource "ssh_resource" "install_workers" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo mkdir -p /etc/rancher/k3s/config.yaml.d",
          + "sudo mv /tmp/k3sconfig.yaml /etc/rancher/k3s/config.yaml",
          + "curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"agent\" sh -",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "channel" = "stable"
        }
      + user                               = "k3smanager"
      + when                               = "create"

      + file {
          + content     = (known after apply)
          + destination = "/tmp/k3sconfig.yaml"
            # (4 unchanged attributes hidden)
        }
    }

  # module.k3s.ssh_resource.install_workers["prod-k3s-pool1-1"] will be created
  + resource "ssh_resource" "install_workers" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo mkdir -p /etc/rancher/k3s/config.yaml.d",
          + "sudo mv /tmp/k3sconfig.yaml /etc/rancher/k3s/config.yaml",
          + "curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=\"agent\" sh -",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (known after apply)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + triggers                           = {
          + "channel" = "stable"
        }
      + user                               = "k3smanager"
      + when                               = "create"

      + file {
          + content     = (known after apply)
          + destination = "/tmp/k3sconfig.yaml"
            # (4 unchanged attributes hidden)
        }
    }

  # module.k3s.ssh_sensitive_resource.join_token will be created
  + resource "ssh_sensitive_resource" "join_token" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = [
          + "sudo cat /var/lib/rancher/k3s/server/token",
        ]
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (sensitive value)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

  # module.k3s.ssh_sensitive_resource.kubeconfig will be created
  + resource "ssh_sensitive_resource" "kubeconfig" {
      + agent                              = false
      + bastion_port                       = "22"
      + commands                           = (known after apply)
      + commands_after_file_changes        = true
      + host                               = (known after apply)
      + id                                 = (known after apply)
      + ignore_no_supported_methods_remain = false
      + port                               = "2244"
      + private_key                        = (sensitive value)
      + result                             = (sensitive value)
      + retry_delay                        = "10s"
      + timeout                            = "5m"
      + user                               = "k3smanager"
      + when                               = "create"
    }

Plan: 19 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + hcloud_network_name = "prod-k3s-network"
  + k3s_cluster_cidr    = "10.42.0.0/16"
  + kube_api_server     = (known after apply)
  + kubeconfig          = (sensitive value)
  + location            = "nbg1"
  + network_name        = "prod-k3s-network"
  + pools               = (sensitive value)
  + region              = "eu-central"
  + ssh_port            = 2244
  + ssh_user            = "k3smanager"
time=2024-11-08T13:16:50Z level=info msg=Downloading Terraform configurations from file:///github/workspace/modules/kubernetes into /github/workspace/stacks/prod/kubernetes/.terragrunt-cache/-HGVTuUtXSFDQCN7IIesL6CulRY/z4vfL_CY3720zQ-fo9fFtb8YbxA prefix=[/github/workspace/stacks/prod/kubernetes] 
Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/helm from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Installing hashicorp/kubernetes v2.31.0...
- Installed hashicorp/kubernetes v2.31.0 (signed by HashiCorp)
- Installing hashicorp/random v3.6.2...
- Installed hashicorp/random v3.6.2 (signed by HashiCorp)
- Installing hashicorp/helm v2.14.1...
- Installed hashicorp/helm v2.14.1 (signed by HashiCorp)

Terraform has been successfully initialized!

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # helm_release.argocd will be created
  + resource "helm_release" "argocd" {
      + atomic                     = true
      + chart                      = "argo-cd"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "argocd"
      + namespace                  = "argocd"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://argoproj.github.io/argo-helm"
      + reset_values               = true
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 600
      + values                     = [
          + <<-EOT
                global:
                  domain: argocd.prod.simonemms.com
                redis-ha:
                  enabled: true
                repoServer:
                  autoscaling:
                    enabled: true
                    minReplicas: 2
                server:
                  autoscaling:
                    enabled: true
                    minReplicas: 2
                  ingress:
                    enabled: true
                    ingressClassName: nginx
                    annotations:
                      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
                      nginx.ingress.kubernetes.io/backend-protocol: HTTP
                      kubernetes.io/tls-acme: "true"
                      cert-manager.io/cluster-issuer: letsencrypt
                    tls: true
                    extraTLS:
                      - hosts:
                          - argocd.prod.simonemms.com
                        secretName: argocd-tls
                configs:
                  params:
                    server.insecure: true
            EOT,
        ]
      + verify                     = false
      + version                    = "7.7.0"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # helm_release.hcloud_ccm will be created
  + resource "helm_release" "hcloud_ccm" {
      + atomic                     = true
      + chart                      = "hcloud-cloud-controller-manager"
      + cleanup_on_fail            = true
      + create_namespace           = false
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "hccm"
      + namespace                  = "kube-system"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.hetzner.cloud"
      + reset_values               = true
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "1.20.0"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
    }

  # helm_release.hcloud_csi will be created
  + resource "helm_release" "hcloud_csi" {
      + atomic                     = true
      + chart                      = "hcloud-csi"
      + cleanup_on_fail            = true
      + create_namespace           = false
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "hcsi"
      + namespace                  = "kube-system"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.hetzner.cloud"
      + reset_values               = true
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + verify                     = false
      + version                    = "2.10.0"
      + wait                       = true
      + wait_for_jobs              = false

      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
    }

  # helm_release.ingress_nginx will be created
  + resource "helm_release" "ingress_nginx" {
      + atomic                     = true
      + chart                      = "ingress-nginx"
      + cleanup_on_fail            = true
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "ingress-nginx"
      + namespace                  = "ingress-nginx"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://kubernetes.github.io/ingress-nginx"
      + reset_values               = true
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = (known after apply)
      + verify                     = false
      + version                    = "4.11.3"
      + wait                       = true
      + wait_for_jobs              = false
    }

  # kubernetes_secret_v1.hcloud will be created
  + resource "kubernetes_secret_v1" "hcloud" {
      + data                           = (sensitive value)
      + id                             = (known after apply)
      + type                           = "Opaque"
      + wait_for_service_account_token = true

      + metadata {
          + generation       = (known after apply)
          + name             = "hcloud"
          + namespace        = "kube-system"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # random_integer.ingress_load_balancer_id will be created
  + resource "random_integer" "ingress_load_balancer_id" {
      + id     = (known after apply)
      + max    = 9999
      + min    = 1000
      + result = (known after apply)
    }

Plan: 6 to add, 0 to change, 0 to destroy.

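For reviewers who prefer to see the change in context rather than only the rendered plan, here is a minimal sketch of the kind of Terraform that would produce the argocd release above. The chart name "argo-cd", the trimmed values, and the assumption that the helm provider is already configured against the new cluster are mine and are not taken from the repository:

  resource "helm_release" "argocd" {
    name       = "argocd"
    namespace  = "argocd"
    repository = "https://argoproj.github.io/argo-helm"
    chart      = "argo-cd" # assumption: the standard chart name in the argo-helm repository
    version    = "7.7.0"

    atomic          = true
    cleanup_on_fail = true
    timeout         = 600

    # values trimmed for brevity; the full set (autoscaling, TLS annotations)
    # is shown verbatim in the plan output above
    values = [
      <<-EOT
        global:
          domain: argocd.prod.simonemms.com
        redis-ha:
          enabled: true
        server:
          ingress:
            enabled: true
            ingressClassName: nginx
        configs:
          params:
            server.insecure: true
      EOT
    ]
  }

The hcloud_ccm and hcloud_csi releases presumably follow the same pattern, with the Hetzner API token supplied through the kubernetes_secret_v1.hcloud resource and the sensitive set blocks that the plan redacts above.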
@mrsimonemms deleted the sje/cluster-refactor branch November 10, 2024 18:14