chore: add upgrade docs for 1.33 #1520
Conversation
force-pushed from f04c93d to fd0a8e5
Signed-off-by: Reza Abbasalipour <[email protected]>

force-pushed from 12caebf to c7809f5
Signed-off-by: Reza Abbasalipour <[email protected]>
Thanks Reza! I have provided a few suggestions to make this page set up for future versions. Great work 😸
A few notes:
- This is probably a follow-up card, but is this also necessary when upgrading the charm?
- We need to update the page docs/canonicalk8s/snap/howto/upgrades.md to point to this new page. We need to add an extra step in the minor-version upgrade instructions asking the user to check for any version-specific configuration changes they should be aware of before upgrading the channel version.
- When we have release notes for 1.33 we should link to this page with upgrade instructions. (More of a reminder to myself.)
> If this is not satisfied, the Cilium agent pods will fail to start.
>
> For each node in the cluster, perform the following steps:
>
> - Edit the kubelet configuration:
Suggested change, replacing "Edit the kubelet configuration:" with:

- Update the `--node-ip` flag in the kubelet configuration file `/var/snap/k8s/common/args/kubelet` to include both the IPv4 and IPv6 addresses (comma-separated) from the same interface:

  ```
  --node-ip=<IPv4>,<IPv6>
  ```
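As a rough sketch of how the flag value could be assembled on a node (the interface name `eth0` is a placeholder assumption; substitute the interface that actually carries your node IPs):

```shell
# Assumption: eth0 is the interface carrying the node IPs;
# replace it with the correct interface on your nodes.
IFACE=eth0
IPV4=$(ip -4 -o addr show "$IFACE" | awk '{print $4}' | cut -d/ -f1 | head -n1)
IPV6=$(ip -6 -o addr show "$IFACE" scope global | awk '{print $4}' | cut -d/ -f1 | head -n1)

# The resulting flag value, to be set in /var/snap/k8s/common/args/kubelet:
echo "--node-ip=${IPV4},${IPV6}"
```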
If you choose to go with my suggestion, lines 26-38 could be deleted.
Does the "same interface" here mean the cluster interface that has the cluster IP?
It refers to the network interface with the attached IP(s) that we use as node IP(s)
I would clarify this then. Instead of saying the same interface, I would say something like "include both the IPv4 and IPv6 addresses from the network interface that is used for our node IPs". Does that make sense?
> ```
> sudo k8s kubectl rollout restart daemonset cilium -n kube-system
> ```
>
> Now you can run the snap `refresh` command to perform the upgrade.
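For reference, the refresh invocation might look like the following; the channel name `1.33-classic/stable` is an assumption, so confirm the channels actually published with `snap info k8s` before refreshing:

```shell
# Assumption: "1.33-classic/stable" is the target channel; verify
# the available channels with `snap info k8s` first.
sudo snap refresh k8s --channel=1.33-classic/stable
```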
Do we need to add a verify step? For example, checking that the version has updated and the cluster is ready?

Something like:

> Verify the upgrade
>
> Check that the `k8s` snap version has been updated and the cluster is back in the `Ready` state:
>
> ```
> snap info k8s
> sudo k8s status --wait-ready
> ```
Great idea, I added it with a separate heading to emphasize the steps for all the users regardless of their networking environment. WDYT?
Looks good!
Signed-off-by: Reza Abbasalipour <[email protected]>
@nhennigan, about your first note: yes, we should have similar upgrade steps for …
Signed-off-by: Reza Abbasalipour <[email protected]>
I think this is almost ready to go - just one comment on the interface and we should be good to merge. I'll make sure we have a card to also apply this to the charm when that release happens.
Thanks a lot @rapour! Looks great. However, I'm wondering whether or not we should instruct the user to check for the upgrade custom resource to validate the upgrade.
Signed-off-by: Reza Abbasalipour <[email protected]>
We should backport these to 1.32 and 1.33 but other than that I think it looks good. 😸
Signed-off-by: Reza Abbasalipour <[email protected]> (cherry picked from commit 4a4c582)
Successfully created backport PR for …

Signed-off-by: Reza Abbasalipour <[email protected]> Co-authored-by: Reza Alipour <[email protected]> (cherry picked from commit 4a4c582)
Successfully created backport PR for …
Description
This PR documents manual steps for upgrading the snap package from 1.32 to 1.33.