[Bug]: On environment variable update, the new pod of zot that gets created goes into crashloopbackoff #2733
Comments
v1.4.x -> v2.x.x is a major version upgrade path and we don't guarantee backward compatibility in this case. That said, the best approach would be to set up a v2.x.x zot, set up sync/mirror from v1.4.3, and then do the rolling upgrades thereafter.
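A minimal sketch of such a migration setup, assuming the old v1.4.3 registry is reachable at a hypothetical address like http://zot-v1:5000; the new v2.x.x instance would enable the sync extension roughly along these lines:

```json
{
  "distSpecVersion": "1.1.0",
  "storage": { "rootDirectory": "/var/lib/registry" },
  "http": { "address": "0.0.0.0", "port": "5000" },
  "extensions": {
    "sync": {
      "enable": true,
      "registries": [
        {
          "urls": ["http://zot-v1:5000"],
          "onDemand": false,
          "pollInterval": "6h",
          "tlsVerify": false,
          "content": [{ "prefix": "**" }]
        }
      ]
    }
  }
}
```

The idea would be that the v2.x.x instance periodically mirrors everything from the old registry and traffic is cut over once it has caught up; the exact field set should be checked against the sync extension docs for the version in use.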
@kiransripada22, do you have anything specific in the configuration that is shared between the zot instances? Maybe shared storage? Are you using zot or zot-minimal? Do you have any specific extensions enabled? Do you use authentication?
@rchincha Sorry if I was not clear, but I am facing this issue with a fresh installation of zot v2.1.1. It was a completely new v2.1.1 installation, and in that cluster, when we do a rolling update, the new pod fails to init the controller.
Note: I also found that it works when we do the Kubernetes update with the Recreate deployment strategy, but the scenario we are using needs a rolling update.
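For reference, a minimal sketch of the Recreate workaround mentioned above, using standard Kubernetes Deployment fields (the name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zot                 # placeholder name
spec:
  replicas: 1
  strategy:
    type: Recreate          # terminate the old pod before creating the new one,
                            # so two zot pods never run at the same time
  selector:
    matchLabels:
      app: zot
  template:
    metadata:
      labels:
        app: zot
    spec:
      containers:
        - name: zot
          image: ghcr.io/project-zot/zot-linux-amd64:v2.1.1   # example image
          ports:
            - containerPort: 5000
```

The Deployment default is RollingUpdate, which starts the replacement pod while the old one is still running; Recreate tears the old pod down first, which matches the observation that the crash loop only happens during rolling updates.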
Wondering if you need this: #2730
@rchincha I think that may not fully fix it, because if we delete the existing pod first, the controller has no issue initialising. So this could be a resource availability issue during the rolling update.
Hi, I get the same error message if I try to start two zot instances with the same "meta.db".
So the issue is: to achieve "continuous" uptime, we need a single, mutually exclusive db shared between two instances?
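If the timeout really is two pods contending for the same exclusively locked meta.db (an assumption, not something confirmed in this thread), one partial workaround is a RollingUpdate that never runs two zot pods at once, at the cost of a brief gap in availability:

```yaml
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0           # do not create the new pod until an old one is gone
      maxUnavailable: 1     # allow the single old pod to be taken down first
```

This avoids the lock contention but is not truly continuous uptime; for that, the metadata store would have to be safely shareable between instances, which is exactly the question raised above.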
zot version
v2.1.1
Describe the bug
Hi,
We have configured zot as a Kubernetes Deployment and added a Flux CD controller to track any changes to this deployment and update the clusters accordingly.
We used to run zot v1.4.3, which never had any issue with this rolling update whenever something changed in the zot deployment.
But we started facing issues once we upgraded to zot v2.1.1.
After the upgrade, whenever we update any environment variable, instead of a new pod replacing the old running pod, we now get a new pod that keeps going into CrashLoopBackOff while the old pod stays running.
We have to manually delete the old pod for the crash loop to stop.
We checked the logs and below is what we found in the zot container:
{"level":"error","error":"timeout","goroutine":1,"caller":"zotregistry.dev/zot/pkg/cli/server/root.go:76","time":"2024-10-16T11:39:06.856027479Z","message":"failed to init controller"}
Error: timeout
To reproduce
Expected behavior
A new zot pod should be created that replaces the old one.
Screenshots
No response
Additional context
No response