Pass the availability zone the Kubernetes node is in when mounting EFS filesystem #1347
base: master
Conversation
…and use that when mounting the EFS filesystem (if requested via 'discoverAzName')
Welcome @multimac!
Hi @multimac. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: multimac. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
Is this a bug fix or adding new feature?
New feature
What is this PR about? / Why do we need it?
We noticed a problem in our clusters where EFS filesystems were being mounted across AZs. We traced this back to the way our DNS servers are set up: they are not guaranteed to be in the same availability zone as the server making the DNS request. This can lead to the CSI driver resolving the wrong IP address for a mount target, as it will get the IP address of the mount target in the AZ of the DNS server that receives the request.
We tried several approaches to solve the problem, such as ensuring our DNS requests stayed in the same AZ as the server making the request (tricky, as we wanted inter-AZ DNS requests for redundancy) and passing an explicit AZ in the `StorageClass` (which would have made our storage classes too restrictive). Ultimately, we found the easiest approach was to have the EFS CSI driver detect the availability zone the Kubernetes node is in and pass that information along when mounting the filesystem.
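To illustrate the idea (this is a hedged sketch, not the PR's actual implementation, and the helper names `discoverAZ` and `azMountTargetDNS` are hypothetical): the node's zone can be read from the EC2 instance metadata service, and EFS publishes AZ-qualified DNS names of the form `{az}.{fs-id}.efs.{region}.amazonaws.com` that resolve only to the mount target in that zone, which avoids the cross-AZ answers described above.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// discoverAZ queries the EC2 instance metadata service (IMDSv1 path shown for
// brevity) for the availability zone of the node the driver is running on.
// Hypothetical helper; the PR performs this discovery inside the driver when
// enabled via 'discoverAzName'.
func discoverAZ() (string, error) {
	resp, err := http.Get("http://169.254.169.254/latest/meta-data/placement/availability-zone")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

// azMountTargetDNS builds the AZ-specific EFS DNS name, which resolves only to
// the mount target in the given zone regardless of which resolver answers.
func azMountTargetDNS(az, fsID, region string) string {
	return fmt.Sprintf("%s.%s.efs.%s.amazonaws.com", az, fsID, region)
}

func main() {
	// Example with a placeholder filesystem ID.
	fmt.Println(azMountTargetDNS("us-east-1a", "fs-0123456789abcdef0", "us-east-1"))
}
```

With a name like this in hand, the driver can mount the filesystem against the in-zone mount target instead of whatever IP the regional DNS name happens to return.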
What testing is done?
We have been running a forked version of the EFS CSI driver that includes this change for about 4 weeks now. Since including the change, we have seen all inter-AZ traffic related to EFS disappear.