Is your feature request related to a problem? Please describe.
I've noticed that SMF connections time out when left idle for a while. I've also noticed that RPCs time out at the same rate, which leaves few options for supporting long-lived persistent connections while keeping relatively short request timeouts.
They are both currently configured using this value in rpc_client_opts (rpc_client.h):
/// \brief The default timeout PER connection body. After we
/// parse the header of the connection we need to
/// make sure that we at some point receive some
/// bytes or expire the connection. Effectively infinite
typename seastar::timer<>::duration recv_timeout = std::chrono::minutes(1);
Which maps to this in rpc_connection_limits (rpc_connection_limits.h):
const timer_duration_t max_body_parsing_duration;
I'm not sure if separating these is viable because, at least according to my limited understanding, an RPC slot is effectively lost forever if an RPC never returns.
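For context, this is roughly how a client ends up adjusting that shared value today. It's a minimal sketch based only on the field quoted above; the include path and namespace are assumptions, and the rest of the client setup is elided:

```cpp
#include <chrono>
#include "smf/rpc_client.h"  // assumed include path

// Sketch only: stretching the single shared knob. Raising recv_timeout keeps
// idle connections alive longer, but it also means a hung RPC takes just as
// long to fail, which is exactly the coupling this issue is about.
smf::rpc_client_opts make_opts() {
  smf::rpc_client_opts opts;
  opts.recv_timeout = std::chrono::minutes(30);  // one value drives both behaviors
  return opts;
}
```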
Describe the solution you'd like
Some way to independently configure connection and RPC timeouts.
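For illustration only, something along these lines is what I have in mind. The member names below are hypothetical and do not exist in smf today; the point is just that the two durations become separate knobs:

```cpp
#include <chrono>
#include <seastar/core/timer.hh>

// Hypothetical sketch of decoupled settings for rpc_client_opts.
// Neither member exists in smf today.
struct rpc_timeouts_sketch {
  // How long a connection may sit idle before it is torn down.
  // Could be effectively infinite for long-lived persistent connections.
  seastar::timer<>::duration connection_idle_timeout = std::chrono::hours(24);

  // How long a single in-flight RPC may take before it fails,
  // independent of the connection's lifetime.
  seastar::timer<>::duration rpc_recv_timeout = std::chrono::seconds(30);
};
```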
Describe alternatives you've considered
None of these are really satisfactory IMO, as they add an additional burden to the client software:

- Something that periodically checks whether the connection is alive, and reopens it if it isn't. This would block requests while waiting for a TCP connection to come up, and force extra complexity onto requests that are waiting for the RPC connection to be restarted.
- Reopening the connection on demand whenever it has been closed. This has the same problems as the previous one.
- A periodic "heartbeat" sent from the client to the server (see the sketch after this list). This is another possibility, but it may not be easy to implement, and it wouldn't really solve the problem without a way to configure connection closure timeouts, although that burden could be pushed onto the developer using the library. As it currently stands (and as I understand it), this would require a custom client and server RPC, and (AFAICT) it would need to be done for every RPC connection to keep it active.
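To make the heartbeat alternative concrete, here is a minimal sketch of the client-side loop, assuming the application defines its own cheap no-op RPC (smf ships nothing like this, so `send_heartbeat` below is whatever callable the user wires up to that RPC):

```cpp
#include <chrono>
#include <seastar/core/future.hh>
#include <seastar/core/future-util.hh>  // seastar::keep_doing (older Seastar layout)
#include <seastar/core/sleep.hh>        // seastar::sleep

// Hypothetical keep-alive loop: `send_heartbeat` must return seastar::future<>
// and issue a user-defined no-op RPC on the connection we want to keep warm.
// One instance of this loop would be needed per connection.
template <typename HeartbeatFn>
seastar::future<> keep_alive(HeartbeatFn send_heartbeat,
                             std::chrono::seconds period = std::chrono::seconds(20)) {
  return seastar::keep_doing([send_heartbeat, period]() mutable {
    return seastar::sleep(period).then([send_heartbeat]() mutable {
      // The reply itself is irrelevant; the traffic just resets the
      // recv_timeout / max_body_parsing_duration clock.
      return send_heartbeat();
    });
  });
}
```

Even then, the period has to be tuned by hand to stay comfortably below recv_timeout, which is exactly the kind of burden on client code I'd like to avoid.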
Thoughts on this would be greatly appreciated; this is by no means a trivial problem.