Adding netperf send size params (-m/-M) #195
Very nice find! Hmm, we quite recently added the ability to set UDP packet size (in 00eda5b), and I guess this is similar. So maybe a global switch that sets `-m` for netperf, and also sets the default UDP send size (overridable by the test parameter)? And maybe make `--send-size` also allow comma-separated values for per-flow setting?
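For illustration, a minimal sketch of how such comma-separated per-flow parsing could work (hypothetical code and names, not flent's actual implementation):

```python
def parse_send_sizes(value, num_flows):
    """Parse a --send-size value like '8192' or '8192,16384,4096'.

    A single value applies to every flow; a comma-separated list
    assigns one size per flow, repeating the last value if the
    list is shorter than the number of flows.
    """
    sizes = [int(v) for v in value.split(",")]
    if len(sizes) == 1:
        return sizes * num_flows
    # Pad with the last value so every flow gets a size.
    sizes += [sizes[-1]] * (num_flows - len(sizes))
    return sizes[:num_flows]

# Example: three TCP flows, two explicit sizes.
print(parse_send_sizes("8192,16384", 3))  # [8192, 16384, 16384]
```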
Ok, it sounds good allowing comma-separated values for setting it per-flow, but I'm not sure about sharing it with the UDP setting. People usually specify a UDP packet size that's something below the MTU, right? That would cause pretty high CPU on the stream test. I expect people lowering this setting will use something like 4-32k or so.
Pete Heist <[email protected]> writes:
> Ok, it sounds good allowing comma-separated values for setting it
> per-flow, but I'm not sure about sharing it with the UDP setting.
> People usually specify a UDP packet size that's something below the
> MTU, right? That would cause pretty high CPU on the stream test. I
> expect people lowering this setting will use something like 4-32k or
> so.
Well, we don't have any tests that have both UDP and TCP flows, so for
now that won't be an issue. It may be if we add some, I suppose, but
there's always the test parameter to override the UDP size? We can also
just make it TCP-only, though...
Ah ok, so we could combine them for now, and there can always be separate flags later if needed. Plus it's good that the UDP one can be overridden with the test param. I'll see if I can make this change...
Sent a pull request but only for setting it globally for TCP so far, as setting it per-flow and for the UDP packet size was more than I bargained for. :)

The reason I'm probably seeing this more than most people is because I have non-default settings for `net.ipv4.tcp_wmem` for doing tests on high BDPs. Being able to set this send size as a test parameter would be useful, because when I do lower and higher bandwidth tests in the same batch, a setting that works well at low bandwidths may be too expensive at higher bandwidths. Also why it would be nice if this "just worked" in netperf.

I'll also be trying plots with delivery rate instead, since that comes from the kernel, so I may add some more plots to flent with that in it.
Pete Heist <[email protected]> writes:
> Sent a pull request but only for setting it globally for TCP so far,
> as setting it per-flow and for the UDP packet size was more than I
> bargained for. :)
No worries, I can add that while merging :)
> The reason I'm probably seeing this more than most people is because I
> have non-default settings for `net.ipv4.tcp_wmem` for doing tests on
> high BDPs. Being able to set this send size as a test parameter would
> be useful, because when I do lower and higher bandwidth tests in the
> same batch, a setting that works well at low bandwidths may be too
> expensive at higher bandwidths. Also why it would be nice if this
> "just worked" in netperf.
Yeah, netperf auto-tuning would be better, of course. But I guess we can't have it all...
> I'll also be trying plots with delivery rate instead, since that comes
> from the kernel, so I may add some more plots to flent with that in
> it.
Sure, go right ahead!
-Toke
Sometimes you see periodic dips in throughput plots, like this:

[plot: TCP throughput with periodic dips]
The period of the dips can be short, depending on test conditions and parameters:

[plot: the same dips with a shorter period]
It becomes less and less of a problem at higher bandwidths.
The reason for it is that netperf defaults to sending buffers the size of the socket buffer (128K in my case), and in its demo mode, those buffers are either included in full in the demo interval's throughput calculation or not (`units_received` in `calc_thruput_interval` in the netperf code), so we're looking at a sort of beat frequency between the demo interval and the buffer send interval. I think the right fix would be in netperf: either emit updates on send buffer intervals instead of real time, or, if possible, figure out from the kernel on the demo interval what has actually been acked so far.
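To make the beat-frequency effect concrete, here's a rough back-of-the-envelope calculation (the 10 Mbit/s rate and 0.2 s demo interval are illustrative assumptions, not measurements from these tests):

```python
# Illustrative numbers: a 128 KiB send buffer draining at 10 Mbit/s,
# sampled by netperf's demo mode every 0.2 s.
buffer_bytes = 128 * 1024
link_rate_bps = 10e6
demo_interval_s = 0.2

# Time to transmit one full send buffer at the link rate.
buffer_time_s = buffer_bytes * 8 / link_rate_bps  # ~0.105 s

# Buffers completed per demo interval is not an integer (~1.9 here),
# so successive intervals alternate between counting 1 and 2 whole
# buffers, making the reported rate oscillate around the true rate.
buffers_per_interval = demo_interval_s / buffer_time_s
low = 1 * buffer_bytes * 8 / demo_interval_s / 1e6   # ~5.2 Mbit/s
high = 2 * buffer_bytes * 8 / demo_interval_s / 1e6  # ~10.5 Mbit/s
print(f"{buffers_per_interval:.2f} buffers/interval; "
      f"reported rate swings {low:.1f}-{high:.1f} Mbit/s")
```

This also matches the observation above that it's less of a problem at higher bandwidths: more buffers complete per demo interval, so the ±1-buffer quantization error shrinks relative to the total.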
But it can be mitigated by using the `-m` flag to netperf to reduce the send size, which increases CPU usage but improves the fidelity of the throughput data. Here are the same two plots with `-m 8192` hacked in, which raises netperf's CPU usage from ~1% to ~3%, but...

[plots: the same two tests with `-m 8192`, dips gone]

So, it would be helpful if we could specify netperf's send size to flent. I'm willing to make this change, but am not sure where it belongs. Maybe as a single `--send-size` global flag, which is passed to netperf with both `-m` and `-M`?

I'll be glad for this, as for years I've at times thought either the code or my test rig was flawed. :)
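For illustration, here's roughly how such a flag could be threaded into the netperf command line (a sketch with assumed names, not flent's actual option plumbing; `-m`, `-M`, and the `--` separator for test-specific options are standard netperf usage):

```python
def netperf_stream_args(host, length, send_size=None):
    """Build a netperf TCP_STREAM command line, optionally pinning
    the send size with -m (local) and -M (remote)."""
    args = ["netperf", "-H", host, "-t", "TCP_STREAM",
            "-l", str(length),
            # Demo mode: interim results every 0.2 s; a negative
            # interval asks netperf to use fine-grained timestamps.
            "-D", "-0.2",
            "-f", "m"]  # report in 10^6 bits/s
    test_opts = []
    if send_size is not None:
        test_opts += ["-m", str(send_size), "-M", str(send_size)]
    if test_opts:
        args += ["--"] + test_opts  # test-specific options go after --
    return args

# Hypothetical host name, for illustration only.
print(" ".join(netperf_stream_args("netperf.example.com", 60, 8192)))
```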