[🐛 Bug]: Memory leak on hub and nodes when using google kubernetes (v4.26) #2476
Comments
@NicoIodice, thank you for creating this issue. We will troubleshoot it as soon as we can.

Info for maintainers: triage this issue by using labels.
- If information is missing, add a helpful comment and then the corresponding label.
- If the issue is a question, add the corresponding label.
- If the issue is valid but there is no time to troubleshoot it, consider adding the corresponding label.
- If the issue requires changes or fixes from an external project (e.g., ChromeDriver, GeckoDriver, MSEdgeDriver, W3C), add the applicable label.

After troubleshooting the issue, please add the corresponding label. Thank you!
@joerg1985, do you have any idea about the memory consumption?
This might be related to the chrome processes not being terminated properly, e.g. related to the --no-sandbox arg or another chromedriver issue. @NicoIodice Could you switch to firefox to confirm this is related to chrome/edge? Since the driver / browser is a child process of the server, the overview in the screenshot might roll it up into the server's memory consumption.
@joerg1985 Thank you for your answer and suggestions. Regarding the latter, we will try to configure firefox so we have values and behaviors to compare against. Regarding the driver, we make sure, and have logs to confirm, that the web driver is correctly quit, and on dynatrace (screenshots attached to the issue description) we can validate that the processes are no longer there. During the test execution we can clearly see the chrome driver process instances running, but it is guaranteed that they quit correctly at the end of the test scenario. Additionally, the driver has the --no-sandbox setting. I can leave here the current chrome web driver settings that we use: Is there any tool to verify whether any instance related to the web driver, or something related to it, is still around?
The chromedriver only spawns chrome processes, so you could search for chrome inside the running processes. To confirm it is a Java leak, you could create a memory histogram with jmap before running tests and after running tests.
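A minimal sketch of both checks, assuming the node pods live in a namespace called `selenium`; the pod name is hypothetical, and it is an assumption that `jmap` is available inside the image (it ships with a JDK, not with a JRE-only image):

```bash
# Hypothetical pod name and namespace; adjust to your cluster.
POD=selenium-node-chrome-0
NS=selenium

# 1. Look for leftover chrome/chromedriver processes inside the node pod.
kubectl -n "$NS" exec "$POD" -- ps aux | grep -i chrome

# 2. Find the PID of the Selenium server Java process.
kubectl -n "$NS" exec "$POD" -- pgrep -f selenium-server

# 3. Take a class histogram of live objects before and after a test run, then compare.
kubectl -n "$NS" exec "$POD" -- jmap -histo:live <PID> > histo-before.txt
# ... run the test suite ...
kubectl -n "$NS" exec "$POD" -- jmap -histo:live <PID> > histo-after.txt
diff histo-before.txt histo-after.txt | head -n 50
```

If the histograms keep growing between runs while the grid is idle, that points at a leak in the JVM; if they stay flat but the pod's memory still climbs, leftover browser processes are the more likely culprit.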
What happened?
### After executing some tests with parallelization, the resource baseline increases and never drops back to the previous baseline.
For example purposes: we start the hub and nodes, and the average memory used is around 500MB. After running tests a first time, resources reach their peak (2GB) and the new resource baseline settles near 700MB. On the next run the new baseline is 800MB, and so on. Obviously these numbers are approximate, but it is visible that the baseline memory value keeps increasing, which makes the OOMKilled event trigger faster on the nodes in the middle of a test execution.
On our automation test Google Cloud infrastructure, we use Google Kubernetes Engine (GKE) to host the Selenium Hub and Selenium Nodes in different pods of the same namespace: one replica of Selenium Hub and 5 replicas of Selenium Node (chrome), with 8 max-sessions each (a total of 40 threads).
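For clarity, a rough sketch of what that topology corresponds to on the Kubernetes side; the deployment names, the namespace, and the use of the SE_NODE_* environment variables are assumptions, not the actual manifests from this setup:

```bash
# Hypothetical deployment names and namespace.
kubectl -n selenium scale deployment selenium-hub --replicas=1
kubectl -n selenium scale deployment selenium-node-chrome --replicas=5

# docker-selenium nodes read their slot count from environment variables.
kubectl -n selenium set env deployment/selenium-node-chrome \
  SE_NODE_MAX_SESSIONS=8 \
  SE_NODE_OVERRIDE_MAX_SESSIONS=true
```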
We used to run the tests sequentially and can't say for sure whether this problem occurred with that setup, but we remember that we sometimes had problems and needed to force a restart. Either way, with parallelization enabled this problem is more persistent and led us to increase the resources in order to have a better buffer and minimize this occurrence.
The current resource configuration of the chrome nodes is the following:
The current node configuration (collected from the UI):
We have identified that the selenium-server.jar process is the main cause of the resource consumption, as can be seen in the following images.
Before running tests and after a restart:
After running tests:
Questions:
I've read somewhere that in order to minimize this effect we can use the following parameter:
**--drain-after-session-count** to drain and shut down the Node after X sessions have been executed. Useful for environments like Kubernetes. A value higher than zero enables this feature.
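A hedged sketch of how that flag could be applied, either directly on the server jar or through docker-selenium's SE_OPTS environment variable; the session count of 10 and the deployment name are illustrative assumptions:

```bash
# Plain Selenium Grid: drain and shut the node down after 10 sessions (value is illustrative).
java -jar selenium-server.jar node --drain-after-session-count 10

# docker-selenium on Kubernetes: pass the same flag via SE_OPTS so the node process
# exits after N sessions and Kubernetes restarts the pod with a fresh JVM.
kubectl -n selenium set env deployment/selenium-node-chrome \
  SE_OPTS="--drain-after-session-count 10"
```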
Command used to start Selenium Grid with Docker (or Kubernetes)
Relevant log output
Operating System
Kubernetes (GKE)
Docker Selenium version (image tag)
4.26.0 (revision 69f9e5e)
Selenium Grid chart version (chart version)
No response