Not coming back from scale-to-zero (KEDA) with latest image #878

Open
black-snow opened this issue Oct 1, 2024 · 0 comments
Labels
bug Something isn't working

Describe the bug
I'm not quite sure this is really a bug - it's perhaps unexpected but obvious if you know how image-updater works.
I'll report it anyway because it surprised me and went against my naive expectation of what should happen.

I have some deployments that scale to zero. I run KEDA with some trigger, so there is always a Deployment and a ReplicaSet, but a pod only gets spawned when there is actual work to do.
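For context, a scale-to-zero setup like this can be sketched roughly as follows (the names, the RabbitMQ trigger, and all values are hypothetical - any KEDA scaler works):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-worker-scaler        # hypothetical name
spec:
  scaleTargetRef:
    name: my-worker             # the Deployment that scales to zero
  minReplicaCount: 0            # Deployment and ReplicaSet remain; pods disappear
  maxReplicaCount: 5
  triggers:
    - type: rabbitmq            # hypothetical trigger
      metadata:
        queueName: work-queue
        mode: QueueLength
        value: "10"
```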

Now what I noticed is that when I push a new image version to the registry and a pod gets spawned, it is scheduled with the digest it was last scheduled with. It then runs for a moment, gets killed, and is replaced with the new image version.

I have set:

argocd-image-updater.argoproj.io/backend.update-strategy: digest
argocd-image-updater.argoproj.io/write-back-method: git
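For reference, these annotations live on the Argo CD Application resource. A minimal sketch of where they sit (the application name, image alias `backend`, and registry URL are assumptions; the alias must match the prefix used in the strategy annotation):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                  # hypothetical name
  namespace: argocd
  annotations:
    argocd-image-updater.argoproj.io/image-list: backend=registry.example.com/backend
    argocd-image-updater.argoproj.io/backend.update-strategy: digest
    argocd-image-updater.argoproj.io/write-back-method: git
```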

I pretty much get why this happens, I think, and I do see the write-backs happen in my Helm repo, but I wish the behaviour were different. With always-on workloads the delay between a registry push and the image update in Kubernetes is quite short, but in this scenario it can be quite surprising: Kubernetes might schedule a pod that is way outdated. The best case is probably that it just crashes because it's too old - the worst case is that it does wrong and unexpected things until it gets killed :/

To Reproduce

  • have a workload that scales to zero replicas (it doesn't have to be KEDA - you can also scale down manually)
  • set update strategy and write-back
  • push a new image version
  • get replicas to >0 (via the trigger or manually)
  • watch the scheduled pod - its digest is not the newest one

Expected behavior
I'd want image-updater not to lag behind for scaled-down deployments. It would be nice if it looked at the Deployment or ReplicaSet, as those are always there.

Additional context
Does imagePullPolicy: Always "fix" this issue? Probably. Is there currently a better way to achieve this?

Version
0.11.0

Logs
N/A

@black-snow black-snow added the bug Something isn't working label Oct 1, 2024