CSI Driver failed to mount PV to a Pod!!! #380
Comments
This is usually a mismatch in protocols. Does the
Hi,
Below is the YAML for the StorageClass:
=============================
kind: StorageClass
=============================
regards
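For context, a minimal StorageClass for the HPE CSI Driver looks roughly like the sketch below. This is based on the SCOD documentation for the 3PAR/Primera CSP; the secret name, namespace, and `accessProtocol` value are assumptions that have to match your own install and array setup:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard
# The HPE CSI Driver's provisioner name
provisioner: csi.hpe.com
parameters:
  # Secret holding the array credentials (name/namespace assumed from a default install)
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  # "iscsi" or "fc" -- must match ports that are actually configured on the array
  accessProtocol: iscsi
reclaimPolicy: Delete
allowVolumeExpansion: true
```

The `accessProtocol` parameter is the usual place a protocol mismatch comes from: if it says `iscsi` but the array has no iSCSI ports (or `fc` but the hosts aren't zoned), attach will fail.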
If it's iSCSI then my best guess is that you don't have any iSCSI ports on the array. If you check the logs of the CSP, it may reveal more clues.
Thanks, what is the exact command to check the logs? "k logs -nhpe-storage deploy/primera3par-csp"?
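That command is essentially it. A sketch of the relevant log commands, assuming the default `hpe-storage` namespace and the workload names from a standard install (adjust to what `kubectl get all -n hpe-storage` shows on your cluster):

```shell
# Container Storage Provider (CSP) logs -- array-side errors (ports, VLUNs, ACLs) show up here
kubectl logs -n hpe-storage deploy/primera3par-csp

# Node plugin logs on the workers -- mount/attach errors on the host side
# (daemonset and container names assumed from a default install)
kubectl logs -n hpe-storage ds/hpe-csi-node -c hpe-csi-driver
```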
Doesn't look like the CSP can find any iSCSI ports.
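For reference, the port inventory can also be checked from the array side over the Primera/3PAR CLI (an SSH session to the array); a sketch, output shapes will vary by firmware:

```shell
# List all ports on the array; usable target ports show State "ready"
showport

# List iSCSI ports specifically -- empty output means no iSCSI ports are configured
showport -iscsi
```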
Just now deployed a PV with the FC protocol, but mounting failed again. Attached YAML for FC:
kind: StorageClass
Is the worker node zoned (since you're switching to FC) with the array? |
Yes, all are zoned. Red Hat OCP: 3 controller & 2 worker nodes (all VMs from the oVirt Engine infra).
This won't work, you have to use the oVirt CSI driver for persistent storage if the workers are VMs. The HPE CSI Driver only works on virtual machines when using iSCSI. |
"The HPE CSI Driver only works on virtual machines when using iSCSI" -- while using iSCSI, as you mentioned, it is unable to find any iSCSI port on the array. The backend storage is HPE Primera.
The array needs iSCSI cards, and those need to be configured. You also need two vNICs on the oVirt VMs, ideally in two different subnets, shared with your array's iSCSI interfaces.
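Once iSCSI ports exist on the array, the data path from a worker VM can be sanity-checked with standard open-iscsi tooling; a sketch (the portal IP below is a placeholder, substitute your array's iSCSI interface address):

```shell
# Basic reachability of the array's iSCSI interface (placeholder IP)
ping -c 3 192.0.2.10

# Discover the iSCSI targets advertised by the array portal (default port 3260)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
```

If discovery returns nothing, the driver will not be able to attach volumes over iSCSI either.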
@datamattsson : Thanks for the support. We are checking on the iSCSI part. As of now, we don't see any iSCSI port on the HPE Primera.
Hi, we have logged a case with HPE for the Primera, so they are checking the iSCSI part. I am confused here: nowhere in the documents is it mentioned that, to use the iSCSI access protocol, the Primera storage must itself be configured for iSCSI, nor how to configure all these steps. Please help with respect to the CSI driver.
The documentation for the 3PAR CSP (Alletra 9K/MP, Primera etc): https://scod.hpedev.io/container_storage_provider/hpe_alletra_9000/#storageclass_parameters |
FC should work, as we have FC ports available in the physical storage, and by default the CSI driver will use the available FC ports.
The HPE CSI Driver does not support virtual FC HPAs (NPIV). Run OpenShift on bare-metal instead and FC will work. |
Can you please help us with a link to the oVirt CSI operator, as we are unable to find any good resource on that? We appreciate your support here.
Hi Team,
While trying to mount an existing PV to a deployment, we are getting the below error.
Please help us find where we are making a mistake.
================================================
17m Warning FailedAttachVolume pod/nginx-arup-7c7d84bf74-gwz5g AttachVolume.Attach failed for volume "pvc-ee5428bb-bae9-42d1-a739-e3d4e499503f" : rpc error: code = Internal desc = Failed to add ACL to volume pvc-ee5428bb-bae9-42d1-a739-e3d4e499503f for node &{ ocp-cp3.cluster.ocp.aspire.infra de78e75c-94e5-21de-7f15-cebdacc7d26f [0xc000920b80] [0xc000920bc0 0xc000920bd0 0xc000920be0 0xc000920bf0] [] } via CSP, err: Request failed with status code 500 and errors Error code (Internal Server Error) and message (VLUN creation error: failed to find any ready target port on array 192.168.236.192)
regards
Arup