21 changes: 16 additions & 5 deletions iac/provider-gcp/nomad-cluster/main.tf
@@ -4,12 +4,23 @@ locals {
nfs_mount_path = "/orchestrator/shared-store"
nfs_mount_subdir = "chunks-cache"
nfs_mount_opts = join(",", [ // for more docs, see https://linux.die.net/man/5/nfs
"tcp", // docs say to avoid it on highspeed connections
format("nfsvers=%s", var.filestore_cache_enabled ? (module.filestore[0].nfs_version == "NFS_V3" ? "3" : "4") : ""),
"lookupcache=none", // do not cache file handles
"noac", // do not use attribute caching
"noacl", // do not use an acl
"nolock", // do not use locking

"actimeo=600", // cache attributes for 60 seconds
"async", // delay writes until certain conditions are met
"hard", // retry nfs requests indefinitely until they succeed, never fail
"lookupcache=positive", // cache successful file handle lookups
"nconnect=7", // use multiple connections
"noacl", // do not use an acl
"nocto", // skip "close-to-open" attribute checks
"nolock", // do not use locking
"noresvport", // use a non-privileged source port
"resvport", // use a privileged source port when communicating with the NFS server
"retrans=3", // retry three times before performing recovery actions
"rsize=524288", // receive 512 KB per read request
Member
Does tweaking this to be in sync with our ~4 MiB reads make sense?

Contributor Author

Turns out that either our OS or the remote server is forcing us back down to 512 KB, so any increase gets ignored.

Member

Interesting, any idea why?

Member

@tomassrnka Any quick ideas?

Contributor Author

The docs say that the server defines a maximum. Different Linux distros and NFS implementations use different values; this just happens to be Google Filestore's max.
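
To double-check what the client actually negotiated, the effective options of a mounted NFS filesystem can be read back on the client, e.g. with nfsstat -m. A minimal Terraform sketch (the resource name is hypothetical, and it assumes nfsstat from nfs-utils is installed where the command runs):

# Sketch only: print the rsize/wsize the kernel actually negotiated at mount
# time. If the server caps rsize, the capped value (524288 for Filestore)
# shows up here regardless of what was requested in the mount options.
resource "null_resource" "show_negotiated_nfs_opts" {
  provisioner "local-exec" {
    command = "nfsstat -m | grep -Eo '(rsize|wsize)=[0-9]+' | sort -u"
  }
}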

"sec=sys", // use AUTH_SYS for all requests
"timeo=600", // wait 60 seconds (measured in deci-seconds) before retrying a failed request
"wsize=524288", // receive 512 KB per write request
])

file_hash = {
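
For context, here is a minimal sketch (not part of this PR) of how the option string assembled in nfs_mount_opts above might be consumed when mounting the share; the Filestore IP 10.0.0.2 and the export name /share are hypothetical placeholders:

# Sketch only: mount the Filestore export with the options assembled above.
# With the list in this diff, local.nfs_mount_opts renders to a comma-joined
# string such as "actimeo=600,async,hard,lookupcache=positive,nconnect=7,...".
resource "null_resource" "mount_shared_store" {
  provisioner "local-exec" {
    command = "mount -t nfs -o ${local.nfs_mount_opts} 10.0.0.2:/share ${local.nfs_mount_path}"
  }
}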