
PVC volume appears to mount correctly in pod but writes to mountpoint inside the pod go to the node's local filesystem #762

Open
mattjwarren opened this issue Sep 4, 2024 · 2 comments



mattjwarren commented Sep 4, 2024

What happened:
A PVC used as a volume mount inside a test pod appears to be correctly mounted per the kubelet and csi-nfs-node logs, but writes to the mountpoint land on the node's local filesystem under the kubelet/..pod../.../mount directory.

What you expected to happen:
Writes ultimately land in the .../shares/pvc-..../ directory on the machine hosting the NFS shares.

How to reproduce it:
Create a dynamic PVC, then create a simple pod YAML referencing the PVC as a volume mount in a container; a sketch follows below.
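For reference, a minimal reproduction along these lines might look like the following (the resource names, storage class name, and image are hypothetical, not taken from the report):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc                # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-csi         # assumes a csi-driver-nfs storage class
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pod                # hypothetical name
spec:
  containers:
    - name: writer
      image: busybox                # any image with a shell will do
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-nfs-pvc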

Anything else we need to know?:

kubelet logs on the node hosting the pod appear to show the volume mount succeeding:

Sep 03 13:54:29 statler.ssd.hursley.ibm.com kubelet[3710]: I0903 13:54:29.157854    3710 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3d5ca3d0-bec1-4266-af75-70c62d545286\" (UniqueName: \"kubernetes.io/csi/nfs.csi.k8s.io^host.dns.name#path/to/shares#pvc-3d5ca3d0-bec1-4266-af75-70c62d545286##\") pod \"user-info-san-mattsb1-cephbase6\" (UID: \"d665bb74-6a65-4b85-a58a-21a8b4349921\") " pod="sandboxer/user-info-san-mattsb1-cephbase6"
Sep 03 13:54:29 statler.ssd.hursley.ibm.com kubelet[3710]: I0903 13:54:29.262092    3710 csi_attacher.go:359] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Sep 03 13:54:29 statler.ssd.hursley.ibm.com kubelet[3710]: I0903 13:54:29.262153    3710 operation_generator.go:658] "MountVolume.MountDevice succeeded for volume \"pvc-3d5ca3d0-bec1-4266-af75-70c62d545286\" (UniqueName: \"kubernetes.io/csi/nfs.csi.k8s.io^host.dns.name#path/to/shares#pvc-3d5ca3d0-bec1-4266-af75-70c62d545286##\") pod \"user-info-san-mattsb1-cephbase6\" (UID: \"d665bb74-6a65-4b85-a58a-21a8b4349921\") device mount path \"/kubelet/root/plugins/kubernetes.io/csi/nfs.csi.k8s.io/aa7abf5636712c06ecf961324a368786a69a6c251b7b78791a38a9384cb1d841/globalmount\"" pod="sandboxer/user-info-san-mattsb1-cephbase6"

csi-nfs-node logs on the node also appear to show the mount succeeding:

nfs I0903 12:54:29.264903       1 utils.go:109] GRPC call: /csi.v1.Node/NodePublishVolume                                                               

nfs I0903 12:54:29.264929       1 utils.go:110] GRPC request: {"target_path":"/kubelet/root/pods/d665bb74-6a65-4b85-a58a-21a8b4349921/volumes/kubernetes.io~csi/pvc-3d5ca3d0-bec1-4266-af75-70c62d545286/mount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["nfsvers=4.1"]}},"access_mode":{"mode":5}},"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-3d5ca3d0-bec1-4266-af75-70c62d545286","csi.storage.k8s.io/pvc/name":"user-info-nfs-matthm1","csi.storage.k8s.io/pvc/namespace":"sandboxer","server":"host.dns.name","share":"/path/to/shares","storage.kubernetes.io/csiProvisionerIdentity":"1724402785882-5888-nfs.csi.k8s.io","subdir":"pvc-3d5ca3d0-bec1-4266-af75-70c62d545286"},"volume_id":"host.dns.name#path/to/shares#pvc-3d5ca3d0-bec1-4266-af75-70c62d545286##"}

nfs I0903 12:54:29.265346       1 nodeserver.go:132] NodePublishVolume: volumeID(host.dns.name#path/to/shares#pvc-3d5ca3d0-bec1-4266-af75-70c62d545286##) source(host.dns.name:/path/to/shares/pvc-3d5ca3d0-bec1-4266-af75-70c62d545286) targetPath(/kubelet/root/pods/d665bb74-6a65-4b85-a58a-21a8b4349921/volumes/kubernetes.io~csi/pvc-3d5ca3d0-bec1-4266-af75-70c62d545286/mount) mountflags([nfsvers=4.1])

nfs I0903 12:54:29.265372       1 mount_linux.go:218] Mounting cmd (mount) with arguments (-t nfs -o nfsvers=4.1 host.dns.name:/path/to/shares/pvc-3d5ca3d0-bec1-4266-af75-70c62d545286 /kubelet/root/pods/d665bb74-6a65-4b85-a58a-21a8b4349921/volumes/kubernetes.io~csi/pvc-3d5ca3d0-bec1-4266-af75-70c62d545286/mount)
    
nfs I0903 12:54:29.629603       1 nodeserver.go:149] skip chmod on targetPath(/kubelet/root/pods/d665bb74-6a65-4b85-a58a-21a8b4349921/volumes/kubernetes.io~csi/pvc-3d5ca3d0-bec1-4266-af75-70c62d545286/mount) since mountPermissions is set as 0

nfs I0903 12:54:29.629642       1 nodeserver.go:151] volume(host.dns.name#path/to/shares#pvc-3d5ca3d0-bec1-4266-af75-70c62d545286##) mount host.dns.name:/path/to/shares/pvc-3d5ca3d0-bec1-4266-af75-70c62d545286 on /kubelet/root/pods/d665bb74-6a65-4b85-a58a-21a8b4349921/volumes/kubernetes.io~csi/pvc-3d5ca3d0-bec1-4266-af75-70c62d545286/mount succeeded                                 

nfs I0903 12:54:29.629664       1 utils.go:116] GRPC response: {}   

Manually mounting the NFS share from the NFS server onto the node hosting the test pod works as expected (see the sketch below).
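A sketch of that manual check, reusing the server, share, and mount options from the driver logs above (the local mount point is illustrative):

# On the node host; server, share, and options as in the driver logs above
mkdir -p /mnt/nfs-test
mount -t nfs -o nfsvers=4.1 host.dns.name:/path/to/shares /mnt/nfs-test
ls /mnt/nfs-test          # the pvc-... subdirectories should be visible
umount /mnt/nfs-test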

Running 'mount | grep nfs' from inside the csi-nfs-node pod shows the mount as expected.
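This combination of symptoms suggests the mount exists only inside the csi-nfs-node container's mount namespace and never propagates to the host, so kubelet hands the pod a plain local directory. One way to confirm (a diagnostic sketch, assuming shell access to the node; the target path is taken from the NodePublishVolume log above):

# Run on the node host, not inside the csi-nfs-node pod.
# If this prints nothing, no filesystem is mounted at the target path in
# the host mount namespace, and pod writes fall through to the local disk.
findmnt /kubelet/root/pods/d665bb74-6a65-4b85-a58a-21a8b4349921/volumes/kubernetes.io~csi/pvc-3d5ca3d0-bec1-4266-af75-70c62d545286/mount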

Environment:

  • CSI Driver version: v4.8.0
  • Kubernetes version (use kubectl version): 1.25.16
  • OS (e.g. from /etc/os-release): RHEL 8.10
  • Kernel (e.g. uname -a): 4.18.0-553.16.1.el8_10.x86_64
  • Install tools:
  • Others:
@andyzhangx
Member

Seems your kubelet dir is not under /var/lib/kubelet; you need to specify --set kubeletDir="..." when installing the helm chart. Follow the guide here: https://github.com/kubernetes-csi/csi-driver-nfs/tree/master/charts#tips
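For reference, a sketch of that install; the kubeletDir value /kubelet/root is inferred from the target paths in the logs above, and the release name and namespace are illustrative:

helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm repo update
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs \
  --namespace kube-system \
  --set kubeletDir="/kubelet/root"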

@mattjwarren
Author

That turned out to be the issue, thank you. Installing via helm and setting kubeletDir, rather than installing from the local manifests, fixed it.

Is it worth raising an issue for this situation to be flagged up? The current logging produced by the driver implies the mount succeeded, whereas a message about the requirement to set kubeletDir, or indicating a mismatch with the kubelet root's location, would have been very helpful.
