Tweak nfs mount options for performance #1429
    "noresvport",   // use a non-privileged source port
    "resvport",     // use a privileged source port when communicating with the NFS server
    "retrans=3",    // retry three times before performing recovery actions
    "rsize=524288", // receive 512 KB per read request
Would tweaking this to be in sync with our ca. 4 MiB reads make sense?
Turns out that either our OS or the remote server is forcing us back down to 512 KB, so any increase gets ignored.
Interesting, any idea why?
@tomassrnka Any quick ideas?
The docs say that the server defines a maximum. Different Linux distros and NFS implementations use different values; this just happens to be Google Filestore's max.
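The negotiation described above can be sketched as a simple clamp: the client requests an `rsize`, and the server caps it at its own maximum (real NFS negotiation also rounds to server-preferred block sizes, so this is a simplification; `524288` here reflects the Filestore cap observed in this thread):

```python
def negotiated_rsize(client_request: int, server_max: int) -> int:
    """Effective rsize is the smaller of what the client asks for
    and what the server is willing to serve."""
    return min(client_request, server_max)

# A 1 MiB request against Filestore's observed 512 KiB cap gets clamped.
print(negotiated_rsize(1048576, 524288))  # → 524288
# A request below the cap is honored as-is.
print(negotiated_rsize(262144, 524288))   # → 262144
```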
Can this have any negatives if we have a lot of client nodes?
Approving; I'd just like to resolve why the read/write sizes are being modified to the lower value.
Google gives us 8,000 connections per TB, which is far more than we're currently thinking of using. Given that, I suspect there's not much of a downside here. Worth keeping in mind if we deploy this and we start seeing latency go up.
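A back-of-the-envelope check of that budget, assuming `nconnect=7` means each client node opens 7 TCP connections and using the 8,000-connections-per-TB figure from the comment above (the capacity numbers are illustrative assumptions, not from the PR):

```python
NCONNECT = 7          # connections per client node with nconnect=7
BUDGET_PER_TB = 8000  # Filestore connection budget per TB (per comment above)

def max_client_nodes(capacity_tb: float) -> int:
    """Roughly how many client nodes fit in the connection budget."""
    return int(capacity_tb * BUDGET_PER_TB // NCONNECT)

print(max_client_nodes(1))  # → 1142, i.e. >1,000 nodes even on a 1 TB share
```

So even a small Filestore instance leaves ample headroom at the cluster sizes discussed here.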
Note

Sets `locals.nfs_mount_opts` in `iac/provider-gcp/nomad-cluster/main.tf` to optimize NFS behavior and improve throughput and latency: `actimeo=600,async,hard,lookupcache=positive,nconnect=7,nocto,noresvport,retrans=2,rsize=1048576,wsize=1048576,timeo=600,sec=sys`, replacing `lookupcache=none`, `noac`, and `tcp`. This should reduce network round trips and increase connection parallelism. `nfsvers`, `noacl`, and `nolock` are unchanged.

Written by Cursor Bugbot for commit 2942fde. This will update automatically on new commits.
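For reference, the final option string can be assembled from the list in the summary above; a minimal sketch of the join as it would feed into `nfs_mount_opts`:

```python
# Option list taken from the bot summary above.
opts = [
    "actimeo=600", "async", "hard", "lookupcache=positive",
    "nconnect=7", "nocto", "noresvport", "retrans=2",
    "rsize=1048576", "wsize=1048576", "timeo=600", "sec=sys",
]
print(",".join(opts))
```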