
Set different values for resource requests/limits for NFS resources #50

Closed
kingnarmer opened this issue Dec 4, 2023 · 6 comments
Labels: next release (This will be closed in the next release), upstream (Something is broken elsewhere)

Comments


kingnarmer commented Dec 4, 2023

I'm encountering issues with NFS resources pre-allocating excessive CPU and memory in my Kubernetes cluster. For instance, I have several NFS mounts: most of the NFS pods consume around 200m CPU and 150MiB of memory, but two NFS pods are pre-allocated 1 CPU and 2GiB. The defaults for the request/limit values are set by nfsResourceLimitsMemoryMi (2GiB) and nfsResourceLimitsCpuM (1 CPU).

How can I configure different values for NFS resource requests and limits? Currently, the same value is set for both, leading to resource over-allocation and potential CPU throttling for other workloads. Any advice would be appreciated. Thanks!

@datamattsson
Collaborator

How to configure nfsResource requests to be different from resource limits?

Yes, the base StorageClass parameters applicable to all CSPs using the HPE CSI Driver can be found here.

What you're looking for is nfsResourceLimitsCpuM and nfsResourceLimitsMemoryMi.
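For reference, a minimal sketch of a StorageClass carrying these parameters (provisioner name per the HPE CSI Driver; the CSP secret references are omitted and the name and values here are illustrative assumptions, not documented defaults):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-nfs-example                # hypothetical name
provisioner: csi.hpe.com
parameters:
  accessProtocol: iscsi                # assumption: pick the protocol for your CSP
  nfsResources: "true"                 # provision an NFS server pod for RWX volumes
  nfsResourceLimitsCpuM: "500m"        # per this thread, sets both the CPU request and limit
  nfsResourceLimitsMemoryMi: "500Mi"   # per this thread, sets both the memory request and limit
reclaimPolicy: Delete
allowVolumeExpansion: true
```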


kingnarmer commented Dec 4, 2023

I'm aware of the nfsResourceLimitsMemoryMi/nfsResourceLimitsCpuM settings. My problem is that a single parameter sets the values for both requests and limits. I need to configure the requests to be significantly smaller than the limits, so that the various pods can consume resources as needed without causing CPU throttling for other workloads. For instance, with 6 NFS mounts, the cluster pre-allocates 6 CPUs and 12GiB of memory, whether the pods are actively using these resources or not.
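In plain Kubernetes terms, the desired outcome is an NFS server pod whose resources stanza has requests well below the limits, along these lines (values are illustrative only):

```yaml
resources:
  requests:        # what the scheduler reserves up front
    cpu: 100m
    memory: 256Mi
  limits:          # what the pod may burst up to
    cpu: "1"
    memory: 2Gi
```

With the current single parameter, both blocks end up identical, which is what drives the 6 CPU / 12GiB pre-allocation described above.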

@datamattsson
Collaborator

OK, I see what you're saying now; I somehow mixed this up with your other question. I'm not sure why this was left out of the implementation, to be quite honest; it's clearly not complete without requests, and it obviously has an impact on systems with scarce resources.

I can file a JIRA with HPE because that's clearly where the change needs to come from. If you're handy with Go, you can add the necessary stanzas here in the meantime: https://github.com/hpe-storage/csi-driver/blob/8874fe968539c6f42c2d7fa6eccd52b584cdf0a8/pkg/flavor/kubernetes/nfs.go#L489 (or, more easily, remove the resource limit code and add defaults in a LimitRange applied to the Namespace, as sketched below).
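For the LimitRange route, a minimal sketch might look like the following; the hpe-nfs namespace and the values are assumptions (use whatever namespace your NFS server pods are deployed into). Note that a LimitRange only injects defaults into containers that don't declare requests/limits themselves, hence the suggestion to strip the resource limit code from the driver first:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: nfs-server-defaults   # hypothetical name
  namespace: hpe-nfs          # assumption: namespace hosting the NFS server pods
spec:
  limits:
  - type: Container
    defaultRequest:           # injected when a container declares no requests
      cpu: 100m
      memory: 256Mi
    default:                  # injected when a container declares no limits
      cpu: "1"
      memory: 2Gi
```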


kingnarmer commented Dec 5, 2023

I can file an issue on the HPE GitHub if that helps.

@kingnarmer
Author

I opened an issue with HPE on GitHub.

hpe-storage/csi-driver#366

@datamattsson added the upstream (Something is broken elsewhere) label on Dec 28, 2023
@datamattsson
Collaborator

Fixed in hpe-storage/csi-driver#397

@datamattsson added the next release (This will be closed in the next release) label on May 6, 2024