Grabbed tokens are lost when preempting requests #2591

Open

xemul (Contributor) opened this issue Dec 20, 2024

This place in fair_queue

auto fair_queue::grab_pending_capacity(const fair_queue_entry& ent) noexcept -> grab_result {
    _group.maybe_replenish_capacity(_group_replenish);

    // Not enough tokens replenished yet for the pending head -- keep waiting.
    if (_group.capacity_deficiency(_pending->head)) {
        return grab_result::pending;
    }

    capacity_t cap = ent._capacity;
    if (cap > _pending->cap) {
        return grab_result::cant_preempt;
    }

    // Preemption succeeds: the pending record is dropped wholesale.
    _pending.reset();
    return grab_result::grabbed;
}

is buggy. When a request whose cap is less than _pending->cap is dispatched (i.e. when a thin request preempts a fat one), the capacity that was grabbed by the fat request (and due to which the queue was put into pending state) is partially lost. More precisely, the dispatching request consumes cap tokens, and the remainder, _pending->cap - cap, is lost.

This was indirectly introduced by 557017f (#1766). Prior to it, the unused capacity was returned to the replenish bucket.

A possible fix:

@@ -235,7 +237,13 @@ auto fair_queue::grab_pending_capacity(const fair_queue_entry& ent) noexcept ->
         return grab_result::cant_preempt;
     }
 
-    _pending.reset();
+    if (_pending->cap > cap) {
+        _group.grab_capacity(cap);
+        _pending->head += cap;
+    } else {
+        _pending.reset();
+    }
+
     return grab_result::grabbed;
 }
 

The patch shows good results with io_tester. Job file:

- name: writes
  shards: all
  type: seqwrite
  shard_info:
    parallelism: $wp
    reqsize: 256kB
    shares: 100

- name: reads
  shards: all
  type: randread
  data_size: 1GB
  shard_info:
    parallelism: $rp
    reqsize: 512
    shares: 1000

IOPS before the patch:

rp:wp    reads    writes
1:1       5500        15
10:10    50000        70
1:10      5500        15
10:1     50000        70

IOPS after the patch:

rp:wp    reads    writes
1:1       4500      5500
10:10    40000      5000
1:10      4500      5500
10:1     60000      4500