Set different values for resource requests/limits for NFS resources #50
Comments
Yes, the base StorageClass parameters applicable to all CSPs using the HPE CSI Driver can be found here. What you're looking for is |
I'm aware of the |
Ok, I see what you're saying now; I somehow mixed this up with your other question. I'm not sure why this was left out of the implementation, to be quite honest. It's clearly not complete without requests, and it obviously has an impact on systems with scarce resources. I can file a JIRA with HPE, because that's clearly where the change needs to come from. If you're handy with golang, you can add the necessary stanzas here in the meantime: https://github.com/hpe-storage/csi-driver/blob/8874fe968539c6f42c2d7fa6eccd52b584cdf0a8/pkg/flavor/kubernetes/nfs.go#L489 (or, easier, remove the resource limit code and add defaults in a LimitRange applied to the Namespace).
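For anyone who wants to try the golang route, here is a minimal sketch of the kind of stanza that could be added near the linked code, assuming the NFS deployment spec uses the standard k8s.io/api/core/v1 types. The function name and all quantities below are illustrative placeholders, not the driver's actual values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// nfsResourceRequirements builds a ResourceRequirements block in which the
// requests are set independently of (and lower than) the limits, instead of
// reusing the limit values as requests. All quantities are placeholders.
func nfsResourceRequirements() corev1.ResourceRequirements {
	return corev1.ResourceRequirements{
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("1000m"),
			corev1.ResourceMemory: resource.MustParse("2Gi"),
		},
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("200m"),
			corev1.ResourceMemory: resource.MustParse("150Mi"),
		},
	}
}

func main() {
	fmt.Printf("%+v\n", nfsResourceRequirements())
}
```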
I can file an issue on HPE's GitHub if that helps.
I have opened an issue with HPE on GitHub.
Fixed in hpe-storage/csi-driver#397
I'm encountering issues with NFS resources pre-allocating excessive CPU and memory in my Kubernetes cluster. For instance, I have several NFS mounts: most of the NFS pods consume around 200m of CPU and 150 MiB of memory, but two NFS pods are pre-allocated 1 CPU and 2 GiB. The default resource request/limit values are set by nfsResourceLimitsMemoryMi (2 GiB) and nfsResourceLimitsCpuM (1 CPU).
How can I configure different values for NFS resource requests and limits? Currently, the same value is set for both, leading to resource over-allocation and potential CPU throttling for other workloads. Any advice would be appreciated. Thanks!
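For reference, the LimitRange workaround suggested above could look roughly like the sketch below, assuming the hardcoded limits are removed from the NFS deployment and the NFS server pods run in a dedicated namespace. The namespace name (hpe-nfs), the LimitRange name, and all quantities are placeholders:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig; the path and namespace are placeholders.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// A LimitRange that gives containers in the namespace default requests
	// lower than the default limits. Defaults only apply to containers that
	// do not set these fields themselves, which is why the hardcoded limits
	// in the driver would have to be removed first.
	lr := &corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "nfs-defaults", Namespace: "hpe-nfs"},
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				Default: corev1.ResourceList{
					corev1.ResourceCPU:    resource.MustParse("1000m"),
					corev1.ResourceMemory: resource.MustParse("2Gi"),
				},
				DefaultRequest: corev1.ResourceList{
					corev1.ResourceCPU:    resource.MustParse("200m"),
					corev1.ResourceMemory: resource.MustParse("150Mi"),
				},
			}},
		},
	}

	created, err := client.CoreV1().LimitRanges("hpe-nfs").Create(context.TODO(), lr, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created LimitRange", created.Name)
}
```

The same object could of course be applied as a plain manifest with kubectl; the Go version is only meant to show the relevant fields (Default vs. DefaultRequest) explicitly.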