There are too many Spark sessions and the session timeout parameter is not effective #6855
Comments
you should read the docs carefully: the docs of
and the default value of
while the screenshot shows you are using the USER share level.
for your case, maybe you should change
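For context, a minimal illustrative sketch (the exact keys the maintainer linked are not visible in this extract, so the selection of properties and the values below are assumptions, not the maintainer's exact recommendation): these timeouts are server-side settings in kyuubi-defaults.conf.

```properties
# kyuubi-defaults.conf -- illustrative sketch, not an official recommendation

# Engine share level (the screenshot reportedly shows USER)
kyuubi.engine.share.level=USER

# Server-side Kyuubi session idle timeout (default PT6H)
kyuubi.session.idle.timeout=PT30M

# Idle timeout after which an unused engine self-terminates (default PT30M)
kyuubi.session.engine.idle.timeout=PT30M

# Timeout related to user-isolated Spark sessions inside a shared engine
# (see the docs for its exact semantics; default PT6H)
kyuubi.engine.user.isolated.spark.session.idle.timeout=PT30M
```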
Thank you very much. Can I understand it this way? When using the USER share level,
Can kyuubi.session.idle.timeout be put into the REST request that creates a session, so it can be set for one specific session?
@holiday-zj not yet. It's a server-side configuration that the client cannot override, but the behavior can be discussed; on a quick thought, we can make changes to support that.
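To make the question concrete, here is a sketch of what such a per-session override would look like, assuming the REST open-session endpoint (POST /api/v1/sessions, default REST port 10099) accepts a configs map; the request schema varies across Kyuubi versions, and per the reply above this particular key is currently not honored when passed from the client.

```bash
# Illustrative only: per the maintainer's reply, kyuubi.session.idle.timeout set here
# is currently ignored because it is a server-side configuration.
# The request body shape is an assumption and may differ by Kyuubi version.
curl -X POST http://<kyuubi-server>:10099/api/v1/sessions \
  -H 'Content-Type: application/json' \
  -d '{
        "configs": {
          "kyuubi.session.idle.timeout": "PT30M"
        }
      }'
```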
Do we support resetting the idle timeout? It seems lastAccessTime and lastIdleTime are only refreshed when certain operations are executed.
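For illustration, here is a simplified Scala sketch of the idle-tracking pattern being discussed; it is not the actual Kyuubi source, and only the field names lastAccessTime and lastIdleTime are taken from the comment above. The point is that running an operation refreshes the timestamps, and an explicit "reset idle timeout" API would essentially expose the same refresh.

```scala
// Simplified sketch of idle-time bookkeeping -- NOT the actual Kyuubi implementation;
// only the timestamp field names mirror the comment above.
class IdleTrackedSession(idleTimeoutMs: Long) {
  @volatile private var lastAccessTime: Long = System.currentTimeMillis()
  @volatile private var lastIdleTime: Long = lastAccessTime

  // Called when an operation runs; this is what currently refreshes the timestamps.
  def touch(): Unit = {
    lastAccessTime = System.currentTimeMillis()
    lastIdleTime = lastAccessTime
  }

  // Periodic check a cleanup thread could use to decide whether to close the session.
  def isTimedOut(now: Long = System.currentTimeMillis()): Boolean =
    now - lastIdleTime > idleTimeoutMs

  // A hypothetical explicit "reset idle timeout" entry point would just call touch().
  def resetIdleTimeout(): Unit = touch()
}
```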
Code of Conduct
Search before asking
Describe the bug
When using Kyuubi to start a Spark SQL on K8s cluster, the number of internal sessions keeps increasing. The parameter
kyuubi.engine.user.isolated.spark.session.idle.timeout=PT30M
is documented with a default value of 6H. After changing it to 30M, there are still sessions older than 30 minutes that are not closed, even though the SQL of those sessions has finished executing. I would like to ask whether a session is created per SQL statement, or how sessions are created: there are not that many SQL statements in my cluster, yet there are many sessions, and I don't know where they come from.
Affects Version(s)
1.9.1
Kyuubi Server Log Output
No response
Kyuubi Engine Log Output
No response
Kyuubi Server Configurations
No response
Kyuubi Engine Configurations
No response
Additional context
No response
Are you willing to submit PR?