Currently the RPC project comes with a built-in load generator that collects latency statistics only. It also reports average bytes per second per core and a few other minor metrics. We need a new framework for extending the statistics.
Facebook's Treadmill https://github.com/facebook/treadmill/blob/master/ContinuousStatistic.cpp
has an interesting design we should study and see what could be ported in a way that is:

- Seastar friendly (no locks, atomics, synchronization)
- Does not add too much overhead to the measured system
Our histograms are already very friendly here: we preallocate 158KB per core for all latency metrics, so updating them is very cheap.
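As a rough illustration of what a Seastar-friendly, preallocated statistic could look like, here is a minimal C++ sketch. Names like `per_core_latency_histogram` are hypothetical and not the existing smf or Treadmill code: each core owns its own instance, the bucket array is allocated once up front, and recording a sample is a plain increment with no locks or atomics.

```cpp
#include <array>
#include <chrono>
#include <cstdint>

// Hypothetical sketch: one instance lives on each core, so recording needs
// no locks or atomics, and the bucket array is preallocated so the hot path
// is a single increment.
class per_core_latency_histogram {
 public:
  // Record one latency sample; O(1), no allocation, no synchronization.
  void record(std::chrono::microseconds latency) {
    const uint64_t us = static_cast<uint64_t>(latency.count());
    _buckets[bucket_index(us)]++;
    _count++;
    _sum_us += us;
  }

  uint64_t count() const { return _count; }

 private:
  // Power-of-two bucketing: bucket i roughly covers [2^i, 2^(i+1)) microseconds.
  static size_t bucket_index(uint64_t us) {
    size_t i = 0;
    while (us > 1 && i + 1 < kBuckets) { us >>= 1; ++i; }
    return i;
  }

  static constexpr size_t kBuckets = 64;
  std::array<uint64_t, kBuckets> _buckets{};  // preallocated, zero-initialized
  uint64_t _count = 0;
  uint64_t _sum_us = 0;
};
```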
Future improvements, borrowed from Iago (Twitter's load tester): support different load distributions (exponential, sustained, spiky, etc.), along with a warm-up step.
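A minimal sketch of how the different arrival patterns could be generated, assuming a hypothetical `arrival_schedule` helper (this is not Iago's API or part of the current load generator); the load generator would wait for `next_gap()` between requests:

```cpp
#include <chrono>
#include <cstdint>
#include <random>

enum class load_shape { sustained, exponential, spiky };

// Hypothetical helper that shapes the request schedule by drawing
// inter-arrival gaps from the chosen distribution.
class arrival_schedule {
 public:
  arrival_schedule(load_shape shape, double target_rps)
    : _shape(shape), _mean_gap_us(1'000'000.0 / target_rps) {}

  // Time to wait before issuing the next request.
  std::chrono::microseconds next_gap() {
    switch (_shape) {
    case load_shape::sustained:
      // Constant pacing: one request every mean gap.
      return std::chrono::microseconds(static_cast<int64_t>(_mean_gap_us));
    case load_shape::exponential: {
      // Poisson arrivals: exponentially distributed gaps around the mean.
      std::exponential_distribution<double> d(1.0 / _mean_gap_us);
      return std::chrono::microseconds(static_cast<int64_t>(d(_rng)));
    }
    case load_shape::spiky: {
      // Crude on/off model: bursts at 10x the rate, then idle stretches.
      _tick++;
      const bool burst = (_tick % 20) < 10;
      const double gap = burst ? _mean_gap_us / 10.0 : _mean_gap_us * 10.0;
      return std::chrono::microseconds(static_cast<int64_t>(gap));
    }
    }
    return std::chrono::microseconds(static_cast<int64_t>(_mean_gap_us));
  }

 private:
  load_shape _shape;
  double _mean_gap_us;
  uint64_t _tick = 0;
  std::mt19937_64 _rng{std::random_device{}()};
};
```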
Currently we measure latencies of ALL requests. This is problematic because the first connection is usually very costly, since we lazily initialize most things, including cache fetching, etc.
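One possible way to exclude those costly first requests is to gate recording behind a warm-up window. The sketch below assumes the hypothetical histogram above and uses a time-based gate (a request-count threshold would work just as well):

```cpp
#include <chrono>

// Hypothetical sketch: discard samples taken during a warm-up window so
// lazily initialized connections and cold caches do not skew percentiles.
class warmup_gate {
 public:
  explicit warmup_gate(std::chrono::steady_clock::duration warmup)
    : _end(std::chrono::steady_clock::now() + warmup) {}

  // True once the warm-up window has elapsed and samples should be recorded.
  bool should_record() const {
    return std::chrono::steady_clock::now() >= _end;
  }

 private:
  std::chrono::steady_clock::time_point _end;
};

// Usage sketch:
//   warmup_gate gate(std::chrono::seconds(5));
//   ...
//   if (gate.should_record()) { hist.record(latency); }
```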