Hi! We're using ESP for authentication before proxying traffic to WebSocket servers. Our connections are particularly long-lived; most of them stay open for days.
With the current Envoy cluster configuration, the default circuit breakers apply to the backend cluster, namely max_requests and max_connections both set to 1024 (https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/cluster/v3/circuit_breaker.proto). Essentially, this means we have to run 50 backend containers to support 50K simultaneous connections.
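As a back-of-the-envelope sketch of where that container count comes from (the 1024 figure is Envoy's documented default; the rest is just arithmetic):

```python
import math

# Envoy's default circuit-breaker thresholds cap a cluster at 1024
# connections per proxy instance, so each backend container fronted
# by its own ESP can serve at most 1024 simultaneous connections.
DEFAULT_MAX_CONNECTIONS = 1024
target_connections = 50_000

containers_needed = math.ceil(target_connections / DEFAULT_MAX_CONNECTIONS)
print(containers_needed)  # 49 — i.e. roughly the 50 containers mentioned above
```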
I looked briefly but I haven't found any way to influence the dynamic configuration Envoy gets. It seems that a workaround is possible, hacking the configuration generation, but it feels cumbersome and unnecessary. Have I missed anything? Maybe if Envoy has a way to set default circuit breakers values, that could be exposed as a startup parameter for endpoints runtime? Would appreciate any help.
To be explicit, these are the settings I'm talking about (this config is already hacked to allow for 1536 connections):
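(Reconstructed sketch of the relevant section; the cluster name is illustrative and the actual generated config may differ in shape.)

```yaml
clusters:
  - name: backend-cluster
    circuit_breakers:
      thresholds:
        - max_connections: 1536
          max_requests: 1536
```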
Normally, the backend cluster configuration arrives at Envoy dynamically without a `circuit_breakers` section at all, so the defaults of 1024 effectively apply.
> Maybe if Envoy has a way to set default circuit breakers values, that could be exposed as a startup parameter for endpoints runtime?
Yeah, I understand your use case and agree this is one way to solve the problem. We can expose flags to configure such internal Envoy settings. We've done this in the past, e.g. #480 for extra headers, #344 for buffer limits, #760 for transcoding, and many others.
My concern is that this approach is not scalable. Every few months, a customer brings up a new requirement that is configured at the Envoy level. Then we need to implement extra flags to pass the data through, expose and document the flags, and cut a release. That creates a real maintenance burden for us.
> It seems that a workaround is possible, hacking the configuration generation, but it feels cumbersome and unnecessary.
@shuoyang2016 or @TAOXUY , is it worth considering some alternative approach, such as letting customer patch the generated Envoy config?
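As an illustration of what letting a customer patch the generated config might look like, here's a minimal sketch. The cluster name, threshold values, and the overall shape of the config dict are assumptions for the example, not ESP's actual output:

```python
import json

# Hypothetical post-processing step: take the Envoy cluster config that
# the config generator produced and raise the circuit-breaker limits on
# the backend cluster before Envoy loads it. All names here are examples.
RAISED_THRESHOLDS = {
    "thresholds": [
        {"max_connections": 10000, "max_requests": 10000},
    ]
}

def patch_circuit_breakers(config: dict, cluster_name: str) -> dict:
    """Set explicit circuit_breakers on the named cluster, in place."""
    for cluster in config.get("static_resources", {}).get("clusters", []):
        if cluster.get("name") == cluster_name:
            cluster["circuit_breakers"] = RAISED_THRESHOLDS
    return config

# Example usage with an inline config instead of a real generated file:
cfg = {"static_resources": {"clusters": [{"name": "backend-cluster"}]}}
patched = patch_circuit_breakers(cfg, "backend-cluster")
print(json.dumps(patched["static_resources"]["clusters"][0]["circuit_breakers"]))
```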
Yes, exposing flags for these settings is the only available option today, and it doesn't scale. ESP is in maintenance mode and we don't release new features anymore. Feel free to make the change yourself and we can help review the PR.