GRPCRoute timeout - GEP-3139 #3219
Does this sentence have a typo? It says both "the timer only starts after the request is finished" and "the timer is never started."
Regardless, the intent behind my comment was that, if this is implemented the naive way for an Envoy data plane, the timer is never started in the case of a bidi stream because the stream never reaches the half-closed state.
In addition to proxies, gRPC should be taken into consideration as a data plane on its own.
Open to suggestions on this; I have received feedback that this wording sits oddly.
I think that having this field settable only to zero for disabling streaming is a strange user experience. This is closely related to the GEP goals: I think we should address bidirectional streaming as well so that such a field becomes meaningful.
Agreed - darn, this will likely not be in for the next release. But it makes sense; trying to define this field felt bizarre.
Following this for any implications for timeouts/retries on HTTP streaming, which I likewise feel we don't really have a good grasp on yet...
I think there needs to be some work to state the semantics in terms of gRPC-native concepts. For example, you might say:

"`request` specifies the maximum duration for the peer to respond to a gRPC request. This timeout is relative to when the client application initiates the RPC or, in the case of a proxy, when the proxy first receives the stream. If the stream has not entered the closed state this long after the timer has started, the RPC MUST be terminated with gRPC status 4 (DEADLINE_EXCEEDED)."

This takes into consideration protocol differences between HTTP and gRPC.
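To make the suggestion concrete, here is a rough sketch of how that wording might sit as field documentation, assuming the field ends up shaped like the existing HTTPRouteTimeouts. The package, type, and field names below are placeholders for illustration, not the API proposed by the GEP.

```go
// Illustrative sketch only; names and placement are assumptions.
package sketch

// Duration mirrors the Gateway API duration string format (e.g. "10s").
type Duration string

// GRPCRouteTimeouts is a hypothetical container for timeout fields on a
// GRPCRoute rule.
type GRPCRouteTimeouts struct {
	// Request specifies the maximum duration for the peer to respond to a
	// gRPC request. The timeout is relative to when the client application
	// initiates the RPC or, in the case of a proxy, when the proxy first
	// receives the stream. If the stream has not entered the closed state
	// this long after the timer has started, the RPC MUST be terminated
	// with gRPC status 4 (DEADLINE_EXCEEDED).
	//
	// +optional
	Request *Duration `json:"request,omitempty"`
}
```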
For a data plane timeout semantic, I would expect to see definitions for when the timer starts and when it ends.
In paragraph 1, I see "If the gateway has not been able to respond before this deadline is met", which can be interpreted either as fixing the endpoint at the moment the client (or gateway) receives the first byte of the response, or as the moment the stream reaches a fully closed state.
In the example in paragraph 2, I see "will cause a timeout if a client request is taking longer than 10 seconds to complete". This sounds like the start of the timer is the start of the call and the end is when the client finishes sending the request? This is a very odd semantic.
In paragraph 3, I see " This timeout is intended to cover as close to the whole request-response transaction as possible". This sounds like the timer starts at stream start and ends at stream close. (this is the semantic I would suggest for all arities).
But then, in the same paragraph, I see "although an implementation MAY choose to start the timeout after the entire request stream has been received instead of immediately after the transaction is initiated by the client."
So overall, I'm really not sure what the proposal is here. Can you help me understand the intent?
gRPC propagates timeouts from the client to the server, and onward to further servers. This is used on the server side to cancel RPCs that surpass their timeout. Since the client will not be awaiting the result any longer, it doesn't make sense for the server to continue processing the request past the timeout.
This is communicated from the client to the server via the "grpc-timeout" metadata key. If a gateway or service mesh implementation is enforcing a stricter timeout than the client itself, it makes sense to rewrite this metadata element with the shorter of the two timeouts. For example, Envoy already provides knobs to do this.
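For illustration, a minimal grpc-go client sketch of the mechanism described above, assuming the stock RouteGuide example service as a placeholder: the deadline the client sets on its context is what travels on the wire as the grpc-timeout header, which an intermediary enforcing a stricter route-level timeout could rewrite downward.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"

	pb "google.golang.org/grpc/examples/route_guide/routeguide" // placeholder service
)

func main() {
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	client := pb.NewRouteGuideClient(conn)

	// The deadline on this context is transmitted as the grpc-timeout
	// request header (roughly "10S"). A proxy enforcing a stricter
	// route-level timeout could rewrite that header to the shorter value.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	_, err = client.GetFeature(ctx, &pb.Point{Latitude: 407838351, Longitude: -746143763})
	if status.Code(err) == codes.DeadlineExceeded { // gRPC status 4
		log.Println("RPC terminated: deadline exceeded before the stream closed")
	}
}
```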
I think it would be good to add this as an optional feature, perhaps with a boolean that, if set to true on an implementation that does not support it, will fail validation.
makes sense, thank you for the links! 👍
Continuing on the theme of support for the gRPC library as a data plane, this field doesn't seem to make sense in that context. I think I'm fine with having a field that only applies to implementations with proxies, but we need to specify what happens when a Gateway API implementation that does not support this field (because there is no gateway) receives it.
Why would only an infinite timeout be allowed? It certainly makes sense to limit the max duration of a streaming RPC.
How is the determination that an RPC is streaming vs unary made? I don't think this is possible either for gRPC library implementations or for proxies. From the perspective of the data plane, all RPCs of any arity are just streams. The only difference is that unary RPCs enter the half-closed state after the client sends a single message and the fully closed state after the server sends a single response. This scenario may also happen under any of the three other arities. The only way for a data plane to make this determination would be by having access to the schema (specifically this portion) or some processed form of the schema (such as a `FileDescriptorSet`). I can see three ways that this could be delivered, but none of them are currently implemented, all of them require significant effort, and all of them result in a diminished UX for users of the Gateway API:

Plumbed through the gRPC Library
gRPC library implementations do retain information that tracks an RPC method's arity at the highest layer of generated code, but it quickly hits a generic streaming layer that throws away the arity information, because all four arities are just special cases of bidirectional streaming.
So, in general, gRPC implementations simply do not have access to this information at runtime in the places in the code that count, and neither do proxies unless they are pre-loaded with the protobuf schema.
Delivered to a Proxy via Bundled DescriptorSets
You could bundle the schema information with the proxy and have the proxy look up the arity of individual URIs from the DescriptorSet. But this only works for a set of RPCs that must be determined ahead of time.
You would also need to orchestrate mounting the DescriptorSet into your proxy container. Depending on the Gateway API implementation, this could be quite hard.
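As a rough sketch of what the bundled-DescriptorSet approach implies: once the proxy (or its control plane) has a serialized FileDescriptorSet, the arity lookup itself is simple; the hard part is producing, distributing, and mounting that file, as noted above. The file name below is hypothetical.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/descriptorpb"
)

func main() {
	// "descriptors.pb" is a hypothetical FileDescriptorSet produced with
	// `protoc --descriptor_set_out=...` and mounted into the proxy container.
	raw, err := os.ReadFile("descriptors.pb")
	if err != nil {
		log.Fatal(err)
	}
	var fds descriptorpb.FileDescriptorSet
	if err := proto.Unmarshal(raw, &fds); err != nil {
		log.Fatal(err)
	}

	// The schema is the only place arity is recorded; on the wire,
	// every RPC of any arity is just a stream.
	for _, file := range fds.GetFile() {
		for _, svc := range file.GetService() {
			for _, m := range svc.GetMethod() {
				fmt.Printf("/%s.%s/%s client_streaming=%t server_streaming=%t\n",
					file.GetPackage(), svc.GetName(), m.GetName(),
					m.GetClientStreaming(), m.GetServerStreaming())
			}
		}
	}
}
```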
Delivered to a Proxy via gRPC Reflection
The gRPC reflection API offers a better mechanism for delivering the structured type information than bundling a processed form: the proxy would make a network call to a reflection server. However, this injects additional latency (though it could be reduced by caching results), and it would require that every RPC that might possibly be routed have its type information stored on a single network-accessible reflection server.
The proxy would of course have to be augmented with this functionality.