chore: connection pipeline cache does not shrink #4491

Open · wants to merge 1 commit into base: main
57 changes: 57 additions & 0 deletions tests/dragonfly/connection_test.py
@@ -1051,3 +1051,60 @@ async def test_hiredis(df_factory):
    server.start()
    client = base_redis.Redis(port=server.port, protocol=3, cache_config=CacheConfig())
    client.ping()


@dfly_args({"proactor_threads": 1})
async def test_pipeline_cache_size(df_factory):
server = df_factory.create(proactor_threads=1)
server.start()

# Start 1 client.
good_client = server.client()

await good_client.execute_command("set foo bar")

bad_actor_client = server.client()

info = await bad_actor_client.info()

# Cache is empty.
assert info["pipeline_cache_bytes"] == 0
assert info["dispatch_queue_bytes"] == 0

async def push_pipeline(bad_actor_client):
# Fill cache.
p = bad_actor_client.pipeline(transaction=True)
for i in range(1):
p.lpush(str(i), "V")
await p.execute()

await push_pipeline(bad_actor_client)
info = await good_client.info()

old_pipeline_cache_bytes = info["pipeline_cache_bytes"]
assert old_pipeline_cache_bytes > 0
assert info["dispatch_queue_bytes"] == 0

# Whoops, total pipeline_cache_bytes haven't changed. If a workload aggregates a bunch
# pipeline_cache_bytes because it recycled too many messages, they won't gradually be released
# if one command (one connection out of `n` connections) dispatches async. Only 1 command out of
# n connections must be dispatched async and the pipeline won't gradually be relesed.
for i in range(30):
Contributor Author: We can drain the pipeline cache bytes once we stop dispatching async. But with a large pool of connections, only one command needs to dispatch async and the counter is reset internally. If this pattern continues, the size of the cache will remain constant and will not be released gradually.

        await push_pipeline(bad_actor_client)
        await good_client.execute_command(f"set foo{i} bar")

    info = await good_client.info()

    # Pipeline cache bytes remained constant :(
    assert old_pipeline_cache_bytes == info["pipeline_cache_bytes"]
    assert info["dispatch_queue_bytes"] == 0

    # Now drain it.
    for i in range(30):
        await good_client.execute_command(f"set foo{i} bar")

    info = await good_client.info()

    # Drained.
    assert info["pipeline_cache_bytes"] == 0
Contributor Author: Drained completely.

assert info["dispatch_queue_bytes"] == 0