Upgrade is not updating KubeadmConfigSpec.ClusterConfiguration.KubernetesVersion #11344
This issue is currently awaiting triage. If CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Or will this field never get updated during upgrade, since kubeadm init happened during deployment and the field is only used while initializing the cluster?
CAPI does have this:
but it's only updated if the value is "".
Searching the code base, I'd say it's not updated continuously.
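For reference, both fields can be inspected directly on the KCP object; a quick sketch (the resource and namespace names are taken from the output later in this issue, adjust for your setup):

```sh
# The version pinned inside ClusterConfiguration (the field under discussion).
kubectl get kubeadmcontrolplane ssp-cluster -n ssp-cluster \
  -o jsonpath='{.spec.kubeadmConfigSpec.clusterConfiguration.kubernetesVersion}'

# The top-level version that CAPI reconciles during upgrades.
kubectl get kubeadmcontrolplane ssp-cluster -n ssp-cluster \
  -o jsonpath='{.spec.version}'
```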
This question was also posted in at least two Slack channels. @gandhisagar can you please de-duplicate? It's not very efficient for folks trying to help.
@neolit123 We are automating the upgrade using CAPI/CAPV in our enterprise product. During the upgrade we change spec.version as described in the documentation, but KubeadmConfigSpec.ClusterConfiguration.KubernetesVersion remains the value used during installation. I am trying to understand the implication here: if it keeps the old value, will production be affected in any way?
@sbueringer Sure, the response there was cold, so I will watch it for a day and then delete it from the Slack channel.
Echoing what I answered in the Slack channel (and please stop duplicating the request; it doesn't help you solve your problem and it makes everyone else's life more complicated): you should not set the KubeadmConfigSpec.ClusterConfiguration.KubernetesVersion field. If you leave it empty, CABPK will use the top-level version and upgrades will just work. Note: we should probably also remove the field from the API, but we are blocked from refactoring this API by a few other ongoing discussions.
@fabriziopandini This is an upgrade, not a deployment. The field is populated during deployment but is not changed during upgrade. By upgrade I mean we are upgrading the template from 1.29 to 1.30; it is an in-place upgrade, not blue-green. How do we make it blank during upgrade? As you may have noticed, I already deleted the message.
I think the only way to unset this field on a KCP object that already has it is to disable the KCP validation webhook, unset the field and then enable the webhook again. |
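Written out as concrete commands, that workaround might look like the sketch below (untested; it assumes the webhook configuration name that clusterctl installs by default, which may differ per setup):

```sh
# 1. Back up and temporarily remove the KCP validating webhook.
kubectl get validatingwebhookconfiguration \
  capi-kubeadm-control-plane-validating-webhook-configuration \
  -o yaml > kcp-webhook-backup.yaml
kubectl delete validatingwebhookconfiguration \
  capi-kubeadm-control-plane-validating-webhook-configuration

# 2. Unset the pinned version while the webhook is gone.
kubectl patch kubeadmcontrolplane ssp-cluster -n ssp-cluster --type=json \
  -p='[{"op":"remove","path":"/spec/kubeadmConfigSpec/clusterConfiguration/kubernetesVersion"}]'

# 3. Restore the webhook from the backup (you may need to strip
# server-populated metadata such as resourceVersion and uid first).
kubectl apply -f kcp-webhook-backup.yaml
```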
@sbueringer Is there a procedure I can follow, or do you recommend not doing that in production? We are fine keeping the old value; I am just trying to see if there is any impact from this field staying old in production. Appreciate the help.
Not sure what the impact is. As far as I can tell this kubernetesVersion gets passed from the KCP object to the KubeadmConfigs, from there onto Machines, and is then used by kubeadm (but maybe I'm misreading our code). I would probably try to verify which version effectively ends up in the config file used by kubeadm when creating new Nodes.

If the overall result of that investigation is that it's problematic, the only way to handle it right now is the workaround above: disable the KCP validation webhook, unset the field, and then enable the webhook again. We don't have any further documentation.

What we could maybe consider is allowing folks to unset the kubernetesVersion field within ClusterConfiguration (but this requires a code change). I assume the validating webhook on KCP blocks unsetting the version today? (Based on what you wrote above.)
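One way to do that verification (a sketch, with namespaces as used in this issue): kubeadm persists the ClusterConfiguration it actually used in the kubeadm-config ConfigMap on the workload cluster, and the rendered bootstrap data can be cross-checked via the KubeadmConfig objects on the management cluster:

```sh
# On the workload cluster: the ClusterConfiguration kubeadm recorded,
# including its kubernetesVersion.
kubectl get configmap kubeadm-config -n kube-system \
  -o jsonpath='{.data.ClusterConfiguration}'

# On the management cluster: what CABPK rendered per machine.
kubectl get kubeadmconfigs -n ssp-cluster -o yaml | grep -i kubernetesversion
```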
What steps did you take and what happened?
We are upgrading a Kubernetes cluster deployed using Cluster API (CAPV, on vSphere infrastructure).
As part of the upgrade, we are applying the following changes:
- Applying clusterctl upgrade plan
- Changing pre- and post-kubeadm commands
- Changing spec.version, e.g. from 1.29.3 to 1.30.4 (one way to do this is sketched below)
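For context, one way to apply the version change from that last step (a sketch using the names from this report; the same edit can be made with kubectl edit or via GitOps):

```sh
# Bump the control plane version; CAPI then rolls out new machines.
kubectl patch kubeadmcontrolplane ssp-cluster -n ssp-cluster \
  --type merge -p '{"spec":{"version":"v1.30.4"}}'
```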
The cluster is upgraded successfully and we can see all nodes are at 1.30.4, but KubeadmConfigSpec.ClusterConfiguration.KubernetesVersion is not updated automatically.
The KubeadmControlPlane instance reports v1.30.4, and the Machine object (spec.version) is also at 1.30.4.
We are following this guide: https://cluster-api.sigs.k8s.io/tasks/upgrading-clusters#how-to-upgrade-the-kubernetes-control-plane-version
When we tried to update the field manually, the request was rejected as it is forbidden to update this field.
Is there any suggestion, or a specific upgrade step we are missing?
So far, that manual update is the only workaround we have tried.
What did you expect to happen?
We were expecting that if KubeadmConfigSpec.ClusterConfiguration.KubernetesVersion is not modifiable, it would automatically be updated to 1.30.4 after the upgrade.
Cluster API version
clusterctl version: &version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.3", GitCommit:"965ffa1d94230b8127245df750a99f09eab9dd97", GitTreeState:"clean", BuildDate:"2024-03-12T17:15:08Z", GoVersion:"go1.21.8", Compiler:"gc", Platform:"linux/amd64"}
bootstrap-kubeadm: v1.7.1
cert-manager: v1.14.2
cluster-api: v1.7.1
control-plane-kubeadm: v1.7.1
infrastructure-vsphere: v1.10.0
ipam-incluster: v0.1.0
Kubernetes version
1.29.3 -> 1.30.4 Upgrade
Anything else you would like to add?
```sh
root@sspi-test:/image/VMware-SSP-Installer-5.0.0.0.0.80589143/phoenix# kubectl get kubeadmcontrolplane ssp-cluster -n ssp-cluster
NAME          CLUSTER       INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE   VERSION
ssp-cluster   ssp-cluster   true          true                   1          1       1         0             63m   v1.30.4

root@sspi-test:/image/VMware-SSP-Installer-5.0.0.0.0.80589143/phoenix# kubectl get cluster -A
NAMESPACE     NAME          CLUSTERCLASS   PHASE         AGE   VERSION
ssp-cluster   ssp-cluster                  Provisioned   63m
```
Label(s) to be applied
/kind bug
One or more /area labels. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.