[grpc] Configurable grpc.ClientConn sharing #3188

Open
chrismoran-mica opened this issue Jul 12, 2023 · 1 comment

chrismoran-mica commented Jul 12, 2023

Feature Description

Connections in gRPC can be shared across multiple executing contexts (goroutines, threads, etc.). This sharing remains safe until the number of active streams on a single HTTP/2 connection reaches the server-configured maximum number of concurrent streams, which is typically 100.
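For reference, in grpc-go this stream cap is a server-side option. A minimal configuration sketch, assuming the google.golang.org/grpc package (the 100 here is illustrative, matching the typical value mentioned above):

```go
package main

import "google.golang.org/grpc"

func main() {
	// Cap each HTTP/2 connection at 100 concurrent streams; clients
	// sharing one connection queue additional RPCs past this limit.
	srv := grpc.NewServer(grpc.MaxConcurrentStreams(100))
	_ = srv
}
```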

Currently, k6 creates a gRPC Client for every VU. The Client struct wraps a grpc.ClientConn. In a typical test, VUs communicate with a small number of endpoints, and these are often the same endpoint multiplexing several gRPC services.

k6 should support an option that lets test writers decide whether VUs within a test share gRPC connections to the same endpoint. One benefit of connection sharing is that it does not matter if an endpoint serves two different services: the service invoked is a property of the call, not of the gRPC connection itself. Effectively, multiple VUs can use a single gRPC connection as long as that endpoint multiplexes all the gRPC services those VUs need.

A small change to the core gRPC connection logic can implement this in a safe and concurrent way.

(I've made this change in my personal fork and tested it in our ecosystem, with materially lower resource consumption.)

One of the key reasons I want this feature is that it substantially reduces the per-VU overhead of using a gRPC Client. Instead of 50 VUs opening 50 gRPC connections, those same 50 VUs can safely share a single open connection.

Suggested Solution (optional)

My solution:

  1. Add ConnectionSharing to connectParams
  2. Conditionally call grpcext.Dial or grpcext.DialShared (a new function, so the original is untouched)
    a. One way to do this:
	// Pick the appropriate dial function based on the option
	dialFn := grpcext.Dial
	if p.ConnectionSharing {
		dialFn = grpcext.DialShared
	}

	c.conn, err = dialFn(ctx, addr, opts...)
  3. Add supporting logic to the grpcext package with limited requirements:
    a. Must be safe for concurrent access
    b. Create the grpc.ClientConn once and store the reference for reuse when ConnectionSharing is enabled
    c. Clean up the reference to the grpc.ClientConn upon Close
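The supporting logic in step 3 could be sketched as a reference-counted connection cache. This is a hedged, self-contained illustration, not k6's actual grpcext code: connCache, sharedConn, and the dial parameter are hypothetical names, and a real implementation would hold a *grpc.ClientConn where this uses any.

```go
package main

import (
	"fmt"
	"sync"
)

// sharedConn pairs a connection with a reference count so that Close
// only tears down the underlying connection when the last user leaves.
type sharedConn struct {
	conn any // would be *grpc.ClientConn in k6's grpcext package
	refs int
}

// connCache is a concurrency-safe cache of shared connections keyed by
// target address (requirements 3a and 3b above).
type connCache struct {
	mu    sync.Mutex
	conns map[string]*sharedConn
}

func newConnCache() *connCache {
	return &connCache{conns: make(map[string]*sharedConn)}
}

// DialShared returns the cached connection for addr, dialing only on
// first use. dial stands in for the real grpcext.Dial.
func (c *connCache) DialShared(addr string, dial func(string) (any, error)) (any, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if sc, ok := c.conns[addr]; ok {
		sc.refs++
		return sc.conn, nil
	}
	conn, err := dial(addr)
	if err != nil {
		return nil, err
	}
	c.conns[addr] = &sharedConn{conn: conn, refs: 1}
	return conn, nil
}

// Close releases one reference; the entry is removed (and the real
// connection would be closed) only when no users remain (requirement 3c).
func (c *connCache) Close(addr string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	sc, ok := c.conns[addr]
	if !ok {
		return false
	}
	sc.refs--
	if sc.refs == 0 {
		delete(c.conns, addr)
		return true // last reference: underlying conn torn down here
	}
	return false
}

func main() {
	dials := 0
	dial := func(addr string) (any, error) {
		dials++
		return "conn-to-" + addr, nil
	}
	cache := newConnCache()
	a, _ := cache.DialShared("host:50051", dial)
	b, _ := cache.DialShared("host:50051", dial)
	fmt.Println(dials, a == b)             // 1 true: one real dial, shared conn
	fmt.Println(cache.Close("host:50051")) // false: one user still remains
	fmt.Println(cache.Close("host:50051")) // true: last ref released
}
```

The mutex makes the cache safe for concurrent VUs; a per-endpoint refcount avoids closing a connection out from under another VU.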

Implementation (with tests): https://github.com/chrismoran-mica/k6/tree/feature/cross-vu-shared-grpc-clientconn

Already existing or connected issues / PRs (optional)

This is my PR for this very feature.

#3179

@olegbespalov

Hey @chrismoran-mica !

Sorry for the delay in answering

Thanks for reporting the issue and opening PRs with the proposed solution 🙇

The main concern around the feature is that sharing connections across VUs could affect the test run results: with a shared connection, we can't say the system under test behaves the same way it would if multiple clients each opened their own connection and called procedures on it.

However, I see cases where you just want to increase the RPC load without paying the connection costs. If you have more cases, feel free to list them here 🙌 So making this opt-in (and disabled by default) also sounds like a good thing.

Still, I believe the feature could be implemented first in the experimental gRPC module (github.com/grafana/xk6-grpc) and, after a few releases and some consideration, backported.

How does that sound?

olegbespalov removed their assignment Apr 19, 2024