
Question: Off-heap native memory usage (Lucene) by Elasticsearch running in Kubernetes #200

Misterhex opened this issue Jun 19, 2018 · 2 comments

Comments

Misterhex commented Jun 19, 2018

As far as I understand, Lucene will use as much memory as the operating system will give it, which is referred to as off-heap native memory.
https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html
https://discuss.elastic.co/t/understanding-off-heap-usage/97176
https://stackoverflow.com/a/35232221

Based on this understanding, does it mean we have to run the Elasticsearch pods on dedicated Kubernetes nodes? Since the ES pods would just keep grabbing as much memory as they can, they could cause memory and disk pressure on the kubelet and get other pods running on the same node evicted.

For example, suppose we have a node with 64 GB of memory, and for our Elasticsearch pods we set the resource request and limit to 8 GB and ES_HEAP_SIZE to 3 GB. Would Lucene use up the remaining ~60 GB on the node, or would it be confined to the remaining 5 GB by the cgroup limit?
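To make the scenario concrete, here is a minimal sketch of what such a container spec might look like. This is only an illustration, not a manifest from this repo; the image tag and the heap variable (ES_HEAP_SIZE vs. ES_JAVA_OPTS "-Xms3g -Xmx3g") depend on the image and Elasticsearch version you run.

```yaml
# Hypothetical container spec for the example above:
# 3 GB JVM heap inside an 8 GiB cgroup limit on a 64 GB node.
containers:
  - name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.0   # example image/tag
    env:
      - name: ES_HEAP_SIZE            # some images use ES_JAVA_OPTS: "-Xms3g -Xmx3g" instead
        value: "3g"
    resources:
      requests:
        memory: "8Gi"
      limits:
        memory: "8Gi"                 # cgroup limit: JVM heap + off-heap/native memory together
```

In other words, the question is whether the filesystem-cache pages Lucene relies on are charged against this container's 8 Gi memory limit, or only against the node's 64 GB total.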

Thanks!

pires (owner) commented Jun 19, 2018

IIRC the Java heap limits should be enough. If you don't trust those, you can define pod resource limits, and Kubernetes will kill the pod if it goes above them.

rewt commented Jun 25, 2018

I have the same question, plus whether or not swap is disabled by default. Any idea how to verify that the Java heap limits are taking care of this?
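Not an authoritative answer, but a few hand checks that might help, assuming a pod named es-data-0 and the default port 9200 (adjust to your deployment). Note that free and curl are only available if the image ships them, and kubectl top needs a metrics add-on (heapster/metrics-server).

```sh
# Swap should report 0 total if the node has swap disabled
kubectl exec es-data-0 -- free -m

# Configured vs. used JVM heap, straight from the node stats API
kubectl exec es-data-0 -- curl -s 'localhost:9200/_nodes/stats/jvm?pretty' \
  | grep -E 'heap_(used|max)_in_bytes'

# Whether the ES process locked its memory (bootstrap.memory_lock), which prevents swapping
kubectl exec es-data-0 -- curl -s 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'

# Total pod memory (heap + off-heap) vs. the cgroup limit
kubectl top pod es-data-0
```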
