
Velero Backup Partially Failed - backup repository is not ready: error to init backup repo #6431

Closed
jack1234-cloud opened this issue Jun 28, 2023 · 4 comments
Labels: Kopia, Needs info (Waiting for information)

Comments

@jack1234-cloud

jack1234-cloud commented Jun 28, 2023

What steps did you take and what happened:
I took a backup using "velero backup create <backup-name> --include-namespaces <namespace>" and it finished PartiallyFailed with this error:

Error:
Velero: name: /pod-name error: /failed to wait BackupRepository: backup repository is not ready: error to init backup repo: error to connect to storage: unable to determine if bucket "bucket_name" exists: Access Denied.

What did you expect to happen:
We expected the backup to complete successfully.
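The "Access Denied" on the bucket-existence check usually points at the credentials the backup repository is using rather than at Velero itself. As a first step, you can reproduce the same check outside Velero (a sketch, assuming the AWS CLI is configured with the same credentials Velero uses, and that your install uses the default "cloud-credentials" secret name):

```shell
# head-bucket is the same "does this bucket exist and can I see it" probe:
# exit code 0 means accessible, 403 means Access Denied, 404 means Not Found.
aws s3api head-bucket --bucket <bucket_name>

# Inspect the credentials actually mounted for Velero
# ("cloud-credentials" is the default secret name; adjust for your install):
kubectl -n velero get secret cloud-credentials -o jsonpath='{.data.cloud}' | base64 -d
```

If head-bucket fails with 403 under the same credentials, the fix is on the IAM side (e.g. s3:ListBucket and s3:GetBucketLocation on the bucket), not in Velero.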

The following information will help us better understand what's going on:

If you are using velero v1.7.0+: Velero Version (Client & Server) Version: v1.11.0
Please use "velero debug --backup <backupname> --restore <restorename>" to generate the support bundle and attach it to this issue; for more options, see "velero debug --help".

If you are using earlier versions:
Please provide the output of the following commands (Pasting long output into a GitHub gist or other pastebin is fine.)

  • kubectl logs deployment/velero -n velero
  • velero backup describe <backupname> or kubectl get backup/<backupname> -n velero -o yaml
  • velero backup logs <backupname>
  • velero restore describe <restorename> or kubectl get restore/<restorename> -n velero -o yaml
  • velero restore logs <restorename>

Anything else you would like to add:
We changed the image tags from v1.11.0 to main, which allowed us to progress further, but we then hit a missing "DataUpload CRD under v1alpha1" error that caused the node agent pods to restart many times. This resulted in many failures in the Velero backup.
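If you hit the missing-CRD symptom above, it can be confirmed directly (a sketch; the CRD name follows Velero's velero.io API group, and the node-agent label selector is assumed from a default install):

```shell
# Check whether the DataUpload CRD is installed, and which API
# versions it serves:
kubectl get crd datauploads.velero.io \
  -o jsonpath='{range .spec.versions[*]}{.name}{"\n"}{end}'

# Node agent restarts caused by the missing CRD show up here:
kubectl -n velero get pods -l name=node-agent
kubectl -n velero logs -l name=node-agent --previous
```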

Environment:

  • Velero version (use velero version): Version: v1.11.0
  • Velero features (use velero client config get features): features NOT SET
  • Kubernetes version (use kubectl version): 1.21
  • Kubernetes installer & version: AWS EKS & 1.21
  • Cloud provider or hardware configuration: aws
  • OS (e.g. from /etc/os-release): Amazon Linux

Vote on this issue!

This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" at the top right of this comment to vote.

  • 👍 for "I would like to see this bug fixed as soon as possible"
  • 👎 for "There are more important bugs to focus on right now"
@Lyndon-Li
Contributor

This is a known issue, see this Velero issue and this Kopia issue

@Lyndon-Li
Contributor

We changed the image tags from v1.11.0 to main, this allowed us to progress further but ran into issue of "DataUpload CRD under v1alpha1" was missing

To use the main image, you also need to use a Velero client binary built from the main branch; otherwise the CRDs will mismatch.
However, even if you do that, it won't help with the current issue, since the problem happens on the main branch as well.
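One way to keep the server image and the cluster's CRDs in sync, as described above, is to regenerate the CRDs from the client binary itself (assuming the client binary was built from the same commit as the image you deployed):

```shell
# Emit the CRDs bundled in this client binary and apply them, so the
# CRDs in the cluster come from the same source as the server image:
velero install --crds-only --dry-run -o yaml | kubectl apply -f -
```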

@reasonerjt reasonerjt added Needs info Waiting for information Kopia labels Jul 3, 2023
@reasonerjt
Contributor

@jack1234-cloud could you confirm this is a dup as @Lyndon-Li thinks?

@jack1234-cloud
Author

@reasonerjt - we can close this issue.
