Re-creating kube_cluster_yaml on each terraform plan/apply #410

Open

viko97-a12 opened this issue Aug 3, 2023 · 2 comments
Labels: area/terraform, bug, good first issue, team/area2

@viko97-a12

Terraform: v1.5.3
RKE Provider: 1.4.2
RKE Cluster: v1.26.4-rancher2-1

I have a Terraform module for creating an RKE cluster. The initial creation works fine, but every subsequent plan says it is going to re-create the kube_cluster_yaml local_file (a simplified sketch of the module wiring follows the plan output) ->

-/+ resource "local_file" "kube_cluster_yaml" {
      ~ content              = (sensitive value) # forces replacement
      ~ content_base64sha256 = "O5/gq0ppoyYX6PaB61JZMMK7QDruPOxHAhblDDBocG8=" -> (known after apply)
      ~ content_base64sha512 = "/d7UFN+ZZvRhHwqI+NVVHI9G0Un2Ct3o1JnmT/Or85lI0wCk0zASmlPlVJt9VpQMs1PlUm34B7FPkGzAnOb1rg==" -> (known after apply)
      ~ content_md5          = "3551bcb3590523e5d065f889abe6ae3e" -> (known after apply)
      ~ content_sha1         = "032b78ac94907ea24655ca5602e66170dc2928a6" -> (known after apply)
      ~ content_sha256       = "3b9fe0ab4a69a32617e8f681eb525930c2bb403aee3cec470216e50c3068706f" -> (known after apply)
      ~ content_sha512       = "fdded414df9966f4611f0a88f8d5551c8f46d149f60adde8d499e64ff3abf39948d300a4d330129a53e5549b7d56940cb353e5526df807b14f906cc09ce6f5ae" -> (known after apply)
      ~ id                   = "032b78ac94907ea24655ca5602e66170dc2928a6" -> (known after apply)
        # (3 unchanged attributes hidden)
    }

  # module.rke.rke_cluster.cluster will be updated in-place
  ~ resource "rke_cluster" "cluster" {
      ~ cluster_cidr              = "10.42.0.0/16" -> (known after apply)
      ~ cluster_dns_server        = "10.43.0.10" -> (known after apply)
      ~ cluster_domain            = "cluster.local" -> (known after apply)
        id                        = "92fa3c9c-8cc2-4f16-8779-659db5433548"
      ~ kube_config_yaml          = (sensitive value)
      ~ rke_cluster_yaml          = (sensitive value)
      ~ rke_state                 = (sensitive value)
        # (22 unchanged attributes hidden)

        # (12 unchanged blocks hidden)
    }

Plan: 1 to add, 1 to change, 1 to destroy.
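
For context, the relevant wiring in my module looks roughly like this (a simplified sketch; the filename and the omitted rke_cluster arguments are placeholders, not my exact configuration):

    resource "rke_cluster" "cluster" {
      # nodes, networking, services, etc. omitted for brevity
    }

    resource "local_file" "kube_cluster_yaml" {
      # Write the kubeconfig produced by the RKE provider to disk. Because
      # kube_config_yaml is sensitive, the plan above only shows
      # "(sensitive value) # forces replacement" when this content changes.
      content  = rke_cluster.cluster.kube_config_yaml
      filename = "${path.module}/kube_config_cluster.yml"
    }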

I saw that there are other issues related to this, but none of them solves my problem.

This is the debug output from the plan ->

time="2023-08-03T09:48:05+03:00" level=info msg="Reading RKE cluster 92fa3c9c-8cc2-4f16-8779-659db5433548 ..."
time="2023-08-03T09:48:05+03:00" level=debug msg="audit log policy found in cluster.yml"
time="2023-08-03T09:48:05+03:00" level=debug msg="Checking if cluster version [1.26.4-rancher2-1] needs to have kube-api audit log enabled"
time="2023-08-03T09:48:05+03:00" level=debug msg="Cluster version [1.26.4-rancher2-1] needs to have kube-api audit log enabled"
time="2023-08-03T09:48:05+03:00" level=debug msg="Enabling kube-api audit log for cluster version [v1.26.4-rancher2-1]"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.11 has role: controlplane"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.11 has role: worker"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.11 has role: etcd"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.12 has role: controlplane"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.12 has role: worker"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.12 has role: etcd"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.13 has role: controlplane"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.13 has role: worker"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.13 has role: etcd"
time="2023-08-03T09:48:05+03:00" level=debug msg="Checking cri-dockerd for cluster version [v1.26.4-rancher2-1]"
time="2023-08-03T09:48:05+03:00" level=debug msg="cri-dockerd is enabled for cluster version [v1.26.4-rancher2-1]"
time="2023-08-03T09:48:05+03:00" level=debug msg="Checking PodSecurityPolicy for cluster version [v1.26.4-rancher2-1]"
time="2023-08-03T09:48:05+03:00" level=debug msg="Checking PodSecurity for cluster version [v1.26.4-rancher2-1]"
time="2023-08-03T09:48:05+03:00" level=debug msg="Checking if cluster version [1.26.4-rancher2-1] needs to have kube-api audit log enabled"
time="2023-08-03T09:48:05+03:00" level=debug msg="Cluster version [1.26.4-rancher2-1] needs to have kube-api audit log enabled"
time="2023-08-03T09:48:05+03:00" level=debug msg="Enabling kube-api audit log for cluster version [v1.26.4-rancher2-1]"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.11 has role: controlplane"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.11 has role: worker"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.11 has role: etcd"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.12 has role: controlplane"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.12 has role: worker"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.12 has role: etcd"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.13 has role: controlplane"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.13 has role: worker"
time="2023-08-03T09:48:05+03:00" level=debug msg="Host: 172.16.16.13 has role: etcd"
time="2023-08-03T09:48:05+03:00" level=debug msg="Checking cri-dockerd for cluster version [v1.26.4-rancher2-1]"
time="2023-08-03T09:48:05+03:00" level=debug msg="cri-dockerd is enabled for cluster version [v1.26.4-rancher2-1]"
time="2023-08-03T09:48:05+03:00" level=debug msg="Checking PodSecurityPolicy for cluster version [v1.26.4-rancher2-1]"
time="2023-08-03T09:48:05+03:00" level=debug msg="Checking PodSecurity for cluster version [v1.26.4-rancher2-1]"

Please, can you advise where the problem is, or whether this is a bug?
Thanks!

@a-blender added the bug, good first issue, [zube]: Next Up, and area/terraform labels on Oct 30, 2023
@a-blender added this to the v1.4.4 milestone on Oct 30, 2023
@zube bot removed the [zube]: Next Up label on Nov 2, 2023
@thatmidwesterncoder

Hello, I'm looking into this a bit and was only able to reproduce it after editing the kubeconfig (e.g. with kubectx/kubens) or moving the file out of the expected path.

It seems to only re-create the file when the local file doesn't match the state. So if the file gets modified and/or moved out of the directory, Terraform re-creates it, which is expected behavior and not a bug (sketched below).

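For reference, this is just the drift behavior of the standard local_file pattern (minimal sketch; the filename is an assumed example, not taken from the reporter's module):

    resource "local_file" "kube_cluster_yaml" {
      content  = rke_cluster.cluster.kube_config_yaml
      filename = "${path.module}/kube_config_cluster.yml"

      # The local provider records this file's content in state. If the file
      # on disk is edited (for example by a tool rewriting the kubeconfig) or
      # moved/deleted, the next plan detects the mismatch and proposes
      # destroying and re-creating the resource, which is the
      # "# forces replacement" shown in the plan above.
    }
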
Does that sound like what you're running into? Or is it for some reason always re-downloading the local file after consecutive terraform apply runs?

@snasovich
Collaborator

@viko97-a12, wondering if you could circle back to @thatmidwesterncoder on this?
