backupRepository (restic) can become stale if velero deployment is not running to observe bsl update/create #8279
Comments
Looks like in your test, the backupRepository CR exists and is not removed. How did this happen? Is this a valid operation? Anyway, this problem falls under the situation where the BSL is modified but the backupRepository CR is not invalidated. IMO, whether the Velero server is running or not at the time the BSL is modified, it should go with the same solution.
I mean the velero deployment isn't running or was deleted, not via the uninstall command.
The backup repository was generated by the first successful backup, before the BSL modification that happened while the velero server was absent due to the deployment deletion.
OK. Then see my comment above: I think we should find a unified way to solve the problem caused by BSL modification, whether the velero server is on or off.
Agree. Will check. Thanks!
Invalidate if
Potentially if there is a validation func (which causes
What steps did you take and what happened:
Extension of #7292.
We want to invalidate backupRepositories on server startup for all pre-existing BSLs.
Red Hat QE found that after installing velero and running a successful backup with kopia,
scaling the velero deployment down to 0, deleting/recreating the BSL with a different prefix, then scaling velero back to 1 replica,
creating another kopia backup results in a failed kopia backup,
and further, the BackupRepository was still pointing to the old resticIdentifier.
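The failure mode above can be sketched in a few lines of Go. The types and the identifier format here are illustrative assumptions, not Velero's actual API: the point is only that the BackupRepository records an identifier derived from the BSL's bucket/prefix, so a prefix change made while the server is down leaves the CR stale.

```go
package main

import "fmt"

// Hypothetical, simplified types; field names are illustrative, not Velero's API.
type BSLSpec struct {
	Bucket string
	Prefix string
}

type BackupRepository struct {
	ResticIdentifier string
}

// identifierFor builds a repo identifier embedding the BSL's bucket and prefix
// (the exact format is an assumption), so any prefix change changes the identifier.
func identifierFor(bsl BSLSpec) string {
	return fmt.Sprintf("s3:s3.amazonaws.com/%s/%s/restic", bsl.Bucket, bsl.Prefix)
}

// isStale reports whether a pre-existing BackupRepository no longer matches the
// current BSL spec, e.g. after the BSL was recreated with a new prefix while
// the velero deployment was scaled to 0.
func isStale(repo BackupRepository, bsl BSLSpec) bool {
	return repo.ResticIdentifier != identifierFor(bsl)
}

func main() {
	oldBSL := BSLSpec{Bucket: "velero-bucket", Prefix: "old-prefix"}
	repo := BackupRepository{ResticIdentifier: identifierFor(oldBSL)}

	// BSL recreated with a different prefix while the server was not running.
	newBSL := BSLSpec{Bucket: "velero-bucket", Prefix: "new-prefix"}

	fmt.Println(isStale(repo, oldBSL)) // false: identifier still matches
	fmt.Println(isStale(repo, newBSL)) // true: CR is stale and should be invalidated
}
```

A startup-time check along these lines (comparing each pre-existing BackupRepository against its current BSL and invalidating on mismatch) would cover BSL edits that happened while the server was off, which a watch-based reconciler alone cannot observe.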
What did you expect to happen:
The following information will help us better understand what's going on:
If you are using velero v1.7.0+:
Please use
velero debug --backup <backupname> --restore <restorename>
to generate the support bundle and attach it to this issue. For more options, please refer to velero debug --help
If you are using earlier versions:
Please provide the output of the following commands (Pasting long output into a GitHub gist or other pastebin is fine.)
kubectl logs deployment/velero -n velero
velero backup describe <backupname>
or kubectl get backup/<backupname> -n velero -o yaml
velero backup logs <backupname>
velero restore describe <restorename>
or kubectl get restore/<restorename> -n velero -o yaml
velero restore logs <restorename>
Anything else you would like to add:
Environment:
Velero version (use velero version):
Velero features (use velero client config get features):
Kubernetes version (use kubectl version):
OS (e.g. from /etc/os-release):
Vote on this issue!
This is an invitation to the Velero community to vote on issues; you can see the project's top voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.