
Fix OADP-4623: OpenShift on IBMCloud setup for OADP #1482

Merged

Conversation

@shubham-pampattiwar (Member) commented Aug 2, 2024

Why the changes were made

With the default hostpath /var/lib/kubelet/pods, the node-agent cannot find PersistentVolumeClaims with volumeMode: Block on the host.
The correct hostpath for OpenShift on IBM Cloud is /var/data/kubelet/pods.

Similarly, for host-plugins the correct hostpath is /var/data/kubelet/plugins.

This PR adds a check for the OpenShift infrastructure platform and updates the host-pods and host-plugins paths on the node-agent daemonset when running OpenShift on IBM Cloud.

Note: A hostpath set via the operator CSV env vars (RESTIC_PV_HOSTPATH and FS_PV_HOSTPATH) takes precedence.

Similarly, for host-plugins the CSV env var (PLUGINS_HOSTPATH) takes precedence.
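For reference, a minimal sketch of the precedence logic described above, assuming a helper named getFsPvHostPath and a lookup of the cluster-scoped Infrastructure CR through a controller-runtime client. The constants and env var names come from this PR; the function shape, package placement, and env var check order are illustrative, not the PR's literal code:

package controllers // assumption: actual package/placement may differ

import (
	"context"
	"os"

	configv1 "github.com/openshift/api/config/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// Cluster, IBMCloudPlatform, GenericPVHostPath, and IBMCloudPVHostPath are the
// constants added by this PR (see the review thread below).

// getFsPvHostPath resolves the host-pods path for the node-agent daemonset.
func getFsPvHostPath(ctx context.Context, c client.Client) (string, error) {
	// CSV env vars take precedence regardless of platform (order illustrative).
	for _, env := range []string{"FS_PV_HOSTPATH", "RESTIC_PV_HOSTPATH"} {
		if p := os.Getenv(env); p != "" {
			return p, nil
		}
	}
	// Otherwise inspect the cluster-scoped Infrastructure CR named "cluster".
	infra := &configv1.Infrastructure{}
	if err := c.Get(ctx, types.NamespacedName{Name: Cluster}, infra); err != nil {
		return "", err
	}
	if infra.Status.PlatformStatus != nil &&
		string(infra.Status.PlatformStatus.Type) == IBMCloudPlatform {
		return IBMCloudPVHostPath, nil
	}
	return GenericPVHostPath, nil
}

The same resolution would apply to the host-plugins path, with PLUGINS_HOSTPATH as the overriding env var.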

How to test the changes made

  • Install the OADP Operator from this PR on an IBM Cloud cluster as well as an AWS cluster.
  • In both cases, check the host-pods hostpath on the node-agent daemonset:
    • For IBM Cloud: it should be /var/data/kubelet/pods
    • For AWS: it should be /var/lib/kubelet/pods
  • Also, in both cases, verify that an env var value set on the CSV takes precedence, irrespective of the OpenShift cloud infrastructure. (Once the CSV is updated, the operator pod restarts, and you may have to re-create the DPA to see the changes on the node-agent daemonset.)

Follow the same steps as above for the host-plugins path (a verification sketch follows this list):

  • For IBM Cloud: the volume path and volumeMount mountPath should be /var/data/kubelet/plugins
  • For AWS: the volume path and volumeMount mountPath should be /var/lib/kubelet/plugins
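A small verification sketch, assuming the daemonset is named node-agent in the openshift-adp namespace and that cluster access is configured via the default kubeconfig; it prints every hostPath-backed volume so the values can be compared against the expectations above:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: default kubeconfig location; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Assumption: the node-agent daemonset lives in openshift-adp.
	ds, err := cs.AppsV1().DaemonSets("openshift-adp").
		Get(context.TODO(), "node-agent", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Print each hostPath volume name and its path on the host.
	for _, v := range ds.Spec.Template.Spec.Volumes {
		if v.HostPath != nil {
			fmt.Printf("%s -> %s\n", v.Name, v.HostPath.Path)
		}
	}
}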

@openshift-ci-robot commented Aug 2, 2024

@shubham-pampattiwar: This pull request references OADP-4623 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the bug to target the "4.17.0" version, but no target version was set.


@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Aug 2, 2024
@shubham-pampattiwar (Member Author):

/cherry-pick oadp-1.4

@openshift-cherrypick-robot (Contributor):

@shubham-pampattiwar: once the present PR merges, I will cherry-pick it on top of oadp-1.4 in a new PR and assign it to you.


@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Aug 2, 2024
@mateusoliveira43 (Contributor):

Is running the block e2e test on IBM a valid test scenario (it never worked for me)? Ref: https://github.com/openshift/oadp-operator/blob/master/tests/e2e/backup_restore_suite_test.go#L396

@shubham-pampattiwar (Member Author):

@mateusoliveira43 yes, this should help with the block e2e on IBM Cloud. Testing a block-volume app on IBM Cloud would certainly help here.

@kaovilai (Member) left a comment

@shubham-pampattiwar (Member Author):

/hold PR needs an update, refer: vmware-tanzu/velero#8077

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 2, 2024
@openshift-ci-robot commented Aug 7, 2024

@shubham-pampattiwar: This pull request references OADP-4623 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the bug to target the "4.17.0" version, but no target version was set.


@shubham-pampattiwar (Member Author) commented Aug 7, 2024

PR updated with the host-plugins path changes for node-agent. PTAL!

const (
	Cluster            = "cluster"                // name of the cluster-scoped Infrastructure CR
	IBMCloudPlatform   = "IBMCloud"               // platform type reported for OpenShift on IBM Cloud
	GenericPVHostPath  = "/var/lib/kubelet/pods"  // default host-pods path
	IBMCloudPVHostPath = "/var/data/kubelet/pods" // host-pods path on IBM Cloud
)
Contributor:

I think this should be deleted.

What IBM changes is just the host-plugins path, no?

Member Author:

I think yesterday in scrum we discussed that we need both.
cc: @sseago

Contributor:

I recall both are needed to allow a symlink to work: pod and host, IIUC.

Contributor:

What I remember is that we need to mount both /var/lib/kubelet and /var/data/kubelet on the pod.

I tested only changing the host-plugins path, and the e2e block datamover worked for me on IBM (it had never worked before).

If we need both, then the Velero docs are still wrong: https://velero.io/docs/v1.14/csi-snapshot-data-movement/#configure-node-agent-daemonset-spec

@shubham-pampattiwar (Member Author) commented Aug 9, 2024:

Yeah, we might need to update the docs upstream; I will do that once this PR is completed.

weshayutin previously approved these changes Aug 9, 2024

@msfrucht commented Aug 9, 2024:

@weshayutin I will build and try the changes.

@weshayutin (Contributor):

/retest

@weshayutin (Contributor):

> @weshayutin I will build and try the changes.

thank you sir!

kaovilai previously approved these changes Aug 9, 2024
controllers/nodeagent.go (outdated review thread, resolved)
@msfrucht commented Aug 9, 2024:

notes are here: https://hackmd.io/hlsqVLK7SC-F080tYh3H0A

Changes look good. node-agent config is as expected on IBM Cloud.

      volumes:
        - name: host-pods
          hostPath:
            path: /var/data/kubelet/pods
            type: ''
        - name: host-plugins
          hostPath:
            path: /var/data/kubelet/plugins
            type: ''
        - name: scratch
          emptyDir: {}
        - name: certs
          emptyDir: {}
      volumeMounts:
        - name: host-pods
          mountPath: /host_pods
          mountPropagation: HostToContainer
        - name: host-plugins
          mountPath: /var/data/kubelet/plugins
          mountPropagation: HostToContainer
        - name: scratch
          mountPath: /scratch
        - name: certs
          mountPath: /etc/ssl/certs

Status is healthy.

status:
  currentNumberScheduled: 4
  numberMisscheduled: 0
  desiredNumberScheduled: 4
  numberReady: 4
  observedGeneration: 21
  updatedNumberScheduled: 4
  numberAvailable: 4

And node-agent pods have access to the volumes directly.

sh-5.1# pwd 
/var/data/kubelet/plugins/kubernetes.io/csi/openshift-storage.rbd.csi.ceph.com
sh-5.1# ls
30c8b8b1fd328c19e18737bafe540ed9c397f01e7114fa3735380c72183d56e1  9cbc2343a690e136f4d9c9897b113e6b95a795a2521b19bb68a2319651fd40e5
8583e3e77154813d689cd7902199ec40a1ba3bde05e363278756e2aaa210fa82  b0a190569d8d17934ccb3e154529ec3ce9d77f7387d712f8c7d43ad052ac9844
9965647b2453c57dfb8722bf9f649e936e3091169ba093488b8be2ed9f753992  dc81f1949091f76168fedfe3ee986070c3b7000ebe69f7822ab16f1a9dc50d8a

DataUpload successful

apiVersion: velero.io/v2alpha1
kind: DataUpload
metadata:
  generateName: oadp-4263-test-
  resourceVersion: '183152751'
  name: oadp-4263-test-vmr7h
  uid: bb094bb8-2a17-4838-9e7f-4400cc03458f
  creationTimestamp: '2024-08-09T22:29:24Z'
  generation: 8
  namespace: ibm-backup-restore
  labels:
    velero.io/accepted-by: 10.240.0.7
spec:
  backupStorageLocation: aws
  csiSnapshot:
    snapshotClass: ocs-storagecluster-rbdplugin-snapclass
    storageClass: ocs-storagecluster-ceph-rbd
    volumeSnapshot: anu-block-snapshot
  operationTimeout: 10m0s
  snapshotType: CSI
  sourceNamespace: anu-block
  sourcePVC: anu-block
status:
  completionTimestamp: '2024-08-09T22:30:09Z'
  node: 10.240.0.7
  path: >-
    /host_pods/467ee3a8-cfa0-4997-9d66-b2fea4e725be/volumeDevices/kubernetes.io~csi/pvc-39e5d76a-d49f-4801-a50d-fc9ebb9ad4c8
  phase: Completed
  progress:
    bytesDone: 5368709120
    totalBytes: 5368709120
  snapshotID: 676d78bbf288b0b1523a545cb089afd0
  startTimestamp: '2024-08-09T22:29:24Z'

For as little as it matters, you have my approval for this change.

@@ -2898,6 +3032,13 @@ func TestDPAReconciler_buildNodeAgentDaemonset(t *testing.T) {
},
},
wantErr: false,
clientObjects: []client.Object{
Contributor:

Are we adding this to all tests?

If yes, I would append it to the test clientObjects inside t.Run.

Member Author:

I think this is fine. Having a clientObjects arg for each test case makes this future-proof: we might write specific test cases later that need object mocking on a per-test-case basis, and having the clientObjects array per case helps in that scenario.
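Illustrative only: the two placements discussed in this thread, in a generic table-driven shape (the test name, field names, and helper names are assumptions, not the PR's code). Per-case clientObjects keep mocking local to each case; the append-inside-t.Run alternative is shown as a comment:

package controllers_test // assumption: placement differs in the real test file

import (
	"testing"

	configv1 "github.com/openshift/api/config/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

func TestClientObjectsPerCase(t *testing.T) {
	scheme := runtime.NewScheme()
	_ = configv1.AddToScheme(scheme) // Infrastructure type must be registered

	tests := []struct {
		name          string
		clientObjects []client.Object
	}{
		{
			name: "IBMCloud infrastructure",
			// Per-case mocking: each case carries exactly the objects it needs.
			clientObjects: []client.Object{
				&configv1.Infrastructure{
					ObjectMeta: metav1.ObjectMeta{Name: "cluster"},
					Status: configv1.InfrastructureStatus{
						PlatformStatus: &configv1.PlatformStatus{
							Type: configv1.IBMCloudPlatformType,
						},
					},
				},
			},
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			// The alternative raised in review: append shared objects here,
			// e.g. tt.clientObjects = append(tt.clientObjects, sharedInfra).
			c := fake.NewClientBuilder().
				WithScheme(scheme).
				WithObjects(tt.clientObjects...).
				Build()
			_ = c // the real test would build the daemonset and assert paths
		})
	}
}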

@weshayutin (Contributor) left a comment:

/LGTM

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Aug 12, 2024
@shubham-pampattiwar (Member Author):

/retest

@weshayutin (Contributor):

/retest

openshift-ci bot commented Aug 13, 2024

@shubham-pampattiwar: all tests passed!


@shubham-pampattiwar (Member Author):

Test case details:
Platform: IBM Cloud, block application backup/restore (B/R)
Block PVC:

spampatt@spampatt-mac /Users/spampatt/oadp-operator [fix-oadp-4263]$ oc get pvc mongo -oyaml -n mongo-persistent                  
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: vpc.block.csi.ibm.io
    volume.kubernetes.io/storage-provisioner: vpc.block.csi.ibm.io
  creationTimestamp: "2024-08-13T17:04:38Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: mongo
  name: mongo
  namespace: mongo-persistent
  resourceVersion: "16686547"
  uid: cc85de73-69bc-49c1-ae4d-98e829e85545
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ibmc-vpc-block-10iops-tier
  volumeMode: Block
  volumeName: pvc-cc85de73-69bc-49c1-ae4d-98e829e85545
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  phase: Bound

Completed Backup:

Name:         mongo-persistent
Namespace:    openshift-adp
Labels:       velero.io/storage-location=sample-dpa-1
Annotations:  velero.io/resource-timeout=10m0s
              velero.io/source-cluster-k8s-gitversion=v1.28.11+add48d0
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=28

Phase:  Completed


Namespaces:
  Included:  mongo-persistent
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>

Or label selector:  <none>

Storage Location:  sample-dpa-1

Velero-Native Snapshot PVs:  auto
Snapshot Move Data:          true
Data Mover:                  velero

TTL:  720h0m0s

CSISnapshotTimeout:    10m0s
ItemOperationTimeout:  4h0m0s

Hooks:  <none>

Backup Format Version:  1.1.0

Started:    2024-08-13 10:21:55 -0700 PDT
Completed:  2024-08-13 10:24:17 -0700 PDT

Expiration:  2024-09-12 10:21:54 -0700 PDT

Total items to be backed up:  85
Items backed up:              85

Backup Item Operations:
  Operation for persistentvolumeclaims mongo-persistent/mongo:
    Backup Item Action Plugin:  velero.io/csi-pvc-backupper
    Operation ID:               du-890610ca-26f9-451f-b855-a057d2950351.cc85de73-69bc-49c72363e
    Items to Update:
                           datauploads.velero.io openshift-adp/mongo-persistent-jfchd
    Phase:                 Completed
    Progress:              10737418240 of 10737418240 complete (Bytes)
    Progress description:  Completed
    Created:               2024-08-13 10:22:02 -0700 PDT
    Started:               2024-08-13 10:23:15 -0700 PDT
    Updated:               2024-08-13 10:24:11 -0700 PDT
Resource List:
  apps.openshift.io/v1/DeploymentConfig:
    - mongo-persistent/todolist
  apps/v1/Deployment:
    - mongo-persistent/mongo
  apps/v1/ReplicaSet:
    - mongo-persistent/mongo-6d9c596768
  authorization.openshift.io/v1/RoleBinding:
    - mongo-persistent/system:deployers
    - mongo-persistent/system:image-builders
    - mongo-persistent/system:image-pullers
  discovery.k8s.io/v1/EndpointSlice:
    - mongo-persistent/mongo-mmsvv
    - mongo-persistent/todolist-7pswf
  image.openshift.io/v1/ImageStream:
    - mongo-persistent/todolist-mongo-go
  rbac.authorization.k8s.io/v1/RoleBinding:
    - mongo-persistent/system:deployers
    - mongo-persistent/system:image-builders
    - mongo-persistent/system:image-pullers
  route.openshift.io/v1/Route:
    - mongo-persistent/todolist-route
  security.openshift.io/v1/SecurityContextConstraints:
    - mongo-persistent-scc
  v1/ConfigMap:
    - mongo-persistent/kube-root-ca.crt
    - mongo-persistent/openshift-service-ca.crt
  v1/Endpoints:
    - mongo-persistent/mongo
    - mongo-persistent/todolist
  v1/Event:
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb58894bffb837
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb588bf884b03a
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb588de467d995
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb5894c0b65db7
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb5896a47de743
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb5898fb14a5cc
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb589ac2599862
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb589ae1c57a96
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb589ae1c5f206
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb589b0d135d47
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb589b1906cc7f
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb589e6d457065
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb589e78746864
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb589e79f4e217
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb589e92b6fb48
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb589e9d717f23
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb589ea79fc105
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb589ea8d7346d
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb589ea8f475ac
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb589eb2984632
    - mongo-persistent/mongo-6d9c596768-j7xkf.17eb589eb41e8dd4
    - mongo-persistent/mongo-6d9c596768.17eb58894badb477
    - mongo-persistent/mongo.17eb588949cdf5fd
    - mongo-persistent/mongo.17eb588bf8678d9a
    - mongo-persistent/mongo.17eb5894c2dacf8f
    - mongo-persistent/mongo.17eb5894c2df7686
    - mongo-persistent/mongo.17eb589898eb4cbc
    - mongo-persistent/todolist-1-deploy.17eb588965bcd2fd
    - mongo-persistent/todolist-1-deploy.17eb5889a1cec3e1
    - mongo-persistent/todolist-1-deploy.17eb5889aea8e1c9
    - mongo-persistent/todolist-1-deploy.17eb5889bb20bee5
    - mongo-persistent/todolist-1-deploy.17eb5889bf7944f5
    - mongo-persistent/todolist-1-dgp5w.17eb588a17c20d34
    - mongo-persistent/todolist-1-dgp5w.17eb588a40d9e4d7
    - mongo-persistent/todolist-1-dgp5w.17eb588a4c75612c
    - mongo-persistent/todolist-1-dgp5w.17eb588a574efe38
    - mongo-persistent/todolist-1-dgp5w.17eb588a58d4b7f2
    - mongo-persistent/todolist-1-dgp5w.17eb58a340b25814
    - mongo-persistent/todolist-1-dgp5w.17eb58a38d5b1f44
    - mongo-persistent/todolist-1-dgp5w.17eb58a39866d880
    - mongo-persistent/todolist-1-dgp5w.17eb58a399d61201
    - mongo-persistent/todolist-1.17eb588a15c87e05
    - mongo-persistent/todolist.17eb58895cc44bb0
    - mongo-persistent/velero-mongo-mjdcc.17eb58c91b8d7af2
    - mongo-persistent/velero-mongo-mjdcc.17eb58ca0a2fba8d
    - mongo-persistent/velero-mongo-mjdcc.17eb58cf4c1d2ff4
  v1/Namespace:
    - mongo-persistent
  v1/PersistentVolume:
    - pvc-cc85de73-69bc-49c1-ae4d-98e829e85545
  v1/PersistentVolumeClaim:
    - mongo-persistent/mongo
  v1/Pod:
    - mongo-persistent/mongo-6d9c596768-j7xkf
    - mongo-persistent/todolist-1-deploy
    - mongo-persistent/todolist-1-dgp5w
  v1/ReplicationController:
    - mongo-persistent/todolist-1
  v1/Secret:
    - mongo-persistent/builder-dockercfg-jb8m2
    - mongo-persistent/builder-token-ztjf6
    - mongo-persistent/default-dockercfg-pdh69
    - mongo-persistent/default-token-24vsz
    - mongo-persistent/deployer-dockercfg-sc9lz
    - mongo-persistent/deployer-token-psfp2
    - mongo-persistent/mongo-persistent-sa-dockercfg-nqql4
    - mongo-persistent/mongo-persistent-sa-token-dhf8n
  v1/Service:
    - mongo-persistent/mongo
    - mongo-persistent/todolist
  v1/ServiceAccount:
    - mongo-persistent/builder
    - mongo-persistent/default
    - mongo-persistent/deployer
    - mongo-persistent/mongo-persistent-sa

Backup Volumes:
  Velero-Native Snapshots: <none included>

  CSI Snapshots:
    mongo-persistent/mongo:
      Data Movement:
        Operation ID: du-890610ca-26f9-451f-b855-a057d2950351.cc85de73-69bc-49c72363e
        Data Mover: velero
        Uploader Type: kopia
        Moved data Size (bytes): 10737418240

  Pod Volume Backups: <none included>

HooksAttempted:  0
HooksFailed:     0

Completed DataUpload:

apiVersion: velero.io/v2alpha1
kind: DataUpload
metadata:
  creationTimestamp: "2024-08-13T17:22:02Z"
  generateName: mongo-persistent-
  generation: 11
  labels:
    velero.io/accepted-by: 10.241.0.6
    velero.io/async-operation-id: du-890610ca-26f9-451f-b855-a057d2950351.cc85de73-69bc-49c72363e
    velero.io/backup-name: mongo-persistent
    velero.io/backup-uid: 890610ca-26f9-451f-b855-a057d2950351
    velero.io/pvc-uid: cc85de73-69bc-49c1-ae4d-98e829e85545
  name: mongo-persistent-jfchd
  namespace: openshift-adp
  ownerReferences:
  - apiVersion: velero.io/v1
    controller: true
    kind: Backup
    name: mongo-persistent
    uid: 890610ca-26f9-451f-b855-a057d2950351
  resourceVersion: "16688246"
  uid: 74e46f91-36cb-4924-badc-b4a5b0501479
spec:
  backupStorageLocation: sample-dpa-1
  csiSnapshot:
    snapshotClass: ibmc-vpcblock-snapshot
    storageClass: ibmc-vpc-block-10iops-tier
    volumeSnapshot: velero-mongo-x64kh
  operationTimeout: 10m0s
  snapshotType: CSI
  sourceNamespace: mongo-persistent
  sourcePVC: mongo
status:
  completionTimestamp: "2024-08-13T17:24:11Z"
  node: 10.241.0.4
  path: /host_pods/4833cdb4-41e4-4818-bc4a-46fec660d570/volumeDevices/kubernetes.io~csi/pvc-ec1cc924-c30a-401f-b850-6ebeebb081f5
  phase: Completed
  progress:
    bytesDone: 10737418240
    totalBytes: 10737418240
  snapshotID: f40671a31902bf955b39ce0b90b54284
  startTimestamp: "2024-08-13T17:23:15Z"

Completed Restore:

Name:         mongo-persistent
Namespace:    openshift-adp
Labels:       <none>
Annotations:  <none>

Phase:                       Completed
Total items to be restored:  39
Items restored:              39

Started:    2024-08-13 10:34:35 -0700 PDT
Completed:  2024-08-13 10:36:57 -0700 PDT

Warnings:
  Velero:     <none>
  Cluster:  could not restore, SecurityContextConstraints "mongo-persistent-scc" already exists. Warning: the in-cluster version is different than the backed-up version
  Namespaces:
    mongo-persistent:  could not restore, ConfigMap "kube-root-ca.crt" already exists. Warning: the in-cluster version is different than the backed-up version
                       could not restore, ConfigMap "openshift-service-ca.crt" already exists. Warning: the in-cluster version is different than the backed-up version
                       could not restore, RoleBinding "system:deployers" already exists. Warning: the in-cluster version is different than the backed-up version
                       could not restore, RoleBinding "system:image-builders" already exists. Warning: the in-cluster version is different than the backed-up version
                       could not restore, RoleBinding "system:image-pullers" already exists. Warning: the in-cluster version is different than the backed-up version

Backup:  mongo-persistent

Namespaces:
  Included:  all namespaces found in the backup
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io, csinodes.storage.k8s.io, volumeattachments.storage.k8s.io, backuprepositories.velero.io
  Cluster-scoped:  auto

Namespace mappings:  <none>

Label selector:  <none>

Or label selector:  <none>

Restore PVs:  true

CSI Snapshot Restores:
  mongo-persistent/mongo:
    Data Movement:
      Operation ID: dd-bb5d38a5-6a30-4e2c-83b5-9a28f973fec7.cc85de73-69bc-49c1d78d3
      Data Mover: velero
      Uploader Type: kopia

Existing Resource Policy:   <none>
ItemOperationTimeout:       4h0m0s

Preserve Service NodePorts:  auto

Restore Item Operations:
  Operation for persistentvolumeclaims mongo-persistent/mongo:
    Restore Item Action Plugin:  velero.io/csi-pvc-restorer
    Operation ID:                dd-bb5d38a5-6a30-4e2c-83b5-9a28f973fec7.cc85de73-69bc-49c1d78d3
    Phase:                       Completed
    Progress:                    10737418240 of 10737418240 complete (Bytes)
    Progress description:        Completed
    Created:                     2024-08-13 10:34:36 -0700 PDT
    Started:                     2024-08-13 10:35:04 -0700 PDT
    Updated:                     2024-08-13 10:36:38 -0700 PDT

HooksAttempted:   0
HooksFailed:      0

Resource List:
  apps.openshift.io/v1/DeploymentConfig:
    - mongo-persistent/todolist(created)
  apps/v1/Deployment:
    - mongo-persistent/mongo(created)
  apps/v1/ReplicaSet:
    - mongo-persistent/mongo-6d9c596768(created)
  authorization.openshift.io/v1/RoleBinding:
    - mongo-persistent/system:deployers(failed)
    - mongo-persistent/system:image-builders(failed)
    - mongo-persistent/system:image-pullers(failed)
  discovery.k8s.io/v1/EndpointSlice:
    - mongo-persistent/mongo-mmsvv(created)
    - mongo-persistent/todolist-7pswf(created)
  image.openshift.io/v1/ImageStream:
    - mongo-persistent/todolist-mongo-go(created)
  rbac.authorization.k8s.io/v1/RoleBinding:
    - mongo-persistent/system:deployers(created)
    - mongo-persistent/system:image-builders(created)
    - mongo-persistent/system:image-pullers(created)
  route.openshift.io/v1/Route:
    - mongo-persistent/todolist-route(created)
  security.openshift.io/v1/SecurityContextConstraints:
    - mongo-persistent-scc(failed)
  v1/ConfigMap:
    - mongo-persistent/kube-root-ca.crt(failed)
    - mongo-persistent/openshift-service-ca.crt(failed)
  v1/Endpoints:
    - mongo-persistent/mongo(created)
    - mongo-persistent/todolist(created)
  v1/Namespace:
    - mongo-persistent(created)
  v1/PersistentVolume:
    - pvc-cc85de73-69bc-49c1-ae4d-98e829e85545(skipped)
  v1/PersistentVolumeClaim:
    - mongo-persistent/mongo(created)
  v1/Pod:
    - mongo-persistent/mongo-6d9c596768-j7xkf(created)
    - mongo-persistent/todolist-1-dgp5w(skipped)
  v1/ReplicationController:
    - mongo-persistent/todolist-1(skipped)
  v1/Secret:
    - mongo-persistent/builder-dockercfg-jb8m2(created)
    - mongo-persistent/builder-token-ztjf6(skipped)
    - mongo-persistent/default-dockercfg-pdh69(created)
    - mongo-persistent/default-token-24vsz(skipped)
    - mongo-persistent/deployer-dockercfg-sc9lz(created)
    - mongo-persistent/deployer-token-psfp2(skipped)
    - mongo-persistent/mongo-persistent-sa-dockercfg-nqql4(created)
    - mongo-persistent/mongo-persistent-sa-token-dhf8n(skipped)
  v1/Service:
    - mongo-persistent/mongo(created)
    - mongo-persistent/todolist(created)
  v1/ServiceAccount:
    - mongo-persistent/builder(skipped)
    - mongo-persistent/default(skipped)
    - mongo-persistent/deployer(skipped)
    - mongo-persistent/mongo-persistent-sa(created)
  velero.io/v2alpha1/DataUpload:
    - openshift-adp/mongo-persistent-jfchd(skipped)

Completed DataDownload:

apiVersion: velero.io/v2alpha1
kind: DataDownload
metadata:
  creationTimestamp: "2024-08-13T17:34:36Z"
  generateName: mongo-persistent-
  generation: 6
  labels:
    velero.io/accepted-by: 10.241.0.5
    velero.io/async-operation-id: dd-bb5d38a5-6a30-4e2c-83b5-9a28f973fec7.cc85de73-69bc-49c1d78d3
    velero.io/restore-name: mongo-persistent
    velero.io/restore-uid: bb5d38a5-6a30-4e2c-83b5-9a28f973fec7
  name: mongo-persistent-v4sfb
  namespace: openshift-adp
  ownerReferences:
  - apiVersion: velero.io/v1
    controller: true
    kind: Restore
    name: mongo-persistent
    uid: bb5d38a5-6a30-4e2c-83b5-9a28f973fec7
  resourceVersion: "16700484"
  uid: 1ccf648a-359e-4782-9f0c-5f1f6bd1d41b
spec:
  backupStorageLocation: sample-dpa-1
  operationTimeout: 10m0s
  snapshotID: f40671a31902bf955b39ce0b90b54284
  sourceNamespace: mongo-persistent
  targetVolume:
    namespace: mongo-persistent
    pv: ""
    pvc: mongo
status:
  completionTimestamp: "2024-08-13T17:36:38Z"
  node: 10.241.0.4
  phase: Completed
  progress:
    bytesDone: 10737418240
    totalBytes: 10737418240
  startTimestamp: "2024-08-13T17:35:04Z"

Data in the app is also restored. [Screenshot: 2024-08-13, 10:42 AM]

@mateusoliveira43 (Contributor) left a comment:

/unhold
Ran E2E backup/restore on my IBM cluster; all 10 tests passed.

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 13, 2024
openshift-ci bot commented Aug 13, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: kaovilai, mateusoliveira43, shubham-pampattiwar, weshayutin


Needs approval from an approver in each of these files:
  • OWNERS [kaovilai,mateusoliveira43,shubham-pampattiwar]


@openshift-merge-bot openshift-merge-bot bot merged commit b8fb9dd into openshift:master Aug 13, 2024
15 checks passed
@openshift-cherrypick-robot (Contributor):

@shubham-pampattiwar: #1482 failed to apply on top of branch "oadp-1.4":

Applying: Fix OADP-4623: OpenShift on IBMCLoud setup for OADP
Using index info to reconstruct a base tree...
M	bundle/manifests/oadp-operator.clusterserviceversion.yaml
M	config/manager/manager.yaml
M	controllers/bsl_test.go
M	controllers/nodeagent.go
M	controllers/nodeagent_test.go
M	main.go
Falling back to patching base and 3-way merge...
Auto-merging main.go
CONFLICT (content): Merge conflict in main.go
Auto-merging controllers/nodeagent_test.go
CONFLICT (content): Merge conflict in controllers/nodeagent_test.go
Auto-merging controllers/nodeagent.go
CONFLICT (content): Merge conflict in controllers/nodeagent.go
Auto-merging controllers/bsl_test.go
CONFLICT (content): Merge conflict in controllers/bsl_test.go
Auto-merging config/manager/manager.yaml
Auto-merging bundle/manifests/oadp-operator.clusterserviceversion.yaml
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0001 Fix OADP-4623: OpenShift on IBMCLoud setup for OADP
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".


shubham-pampattiwar added a commit to shubham-pampattiwar/oadp-operator that referenced this pull request Aug 13, 2024
* Fix OADP-4623: OpenShift on IBMCLoud setup for OADP

* fix unit tests and minor updates

* add updates for host-plugins host path

* fix unit tests

* lint fix

(cherry picked from commit b8fb9dd)
openshift-merge-bot bot pushed a commit that referenced this pull request Aug 14, 2024
* Fix OADP-4623: OpenShift on IBMCLoud setup for OADP

* fix unit tests and minor updates

* add updates for host-plugins host path

* fix unit tests

* lint fix

(cherry picked from commit b8fb9dd)