
Adding netperf send size params (-m/-M) #195

Open
heistp opened this issue Feb 6, 2020 · 6 comments

@heistp
Contributor

heistp commented Feb 6, 2020

Sometimes you see periodic dips in throughput plots, like this:

[Plot: netperf_send_size]

The period of the dips can be short, depending on test conditions / parameters:

[Plot: netperf_send_size_highfreq]

It becomes less and less of a problem at higher bandwidths.

The reason is that netperf defaults to sending buffers the size of the socket buffer (128K in my case). In demo mode, each of those buffers is either counted in full in a demo interval's throughput calculation or not at all (units_received in calc_thruput_interval in the netperf code), so we're essentially looking at a beat frequency between the demo interval and the buffer send interval.
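
To make that concrete, here's a rough sketch (Python, not netperf or flent code; the rate, send size and interval below are just example numbers) of how crediting whole buffers to the demo interval in which they complete makes the per-interval samples oscillate around the true rate:

```python
# Rough sketch, not netperf or flent code: shows how crediting each send
# buffer in full to whichever demo interval it completes in makes the
# per-interval throughput wobble around the true rate (a beat between the
# demo interval and the buffer send interval). Numbers below are examples.

def demo_samples(rate_bps, buffer_bytes, demo_interval_s, duration_s):
    """Per-interval throughput (bits/s) when whole buffers are credited."""
    samples = []
    send_time = buffer_bytes * 8 / rate_bps  # time to transmit one buffer
    next_done = send_time                    # completion time of next buffer
    t = demo_interval_s
    while t <= duration_s:
        done_bytes = 0
        while next_done <= t:                # buffers finished in this interval
            done_bytes += buffer_bytes       # counted in full or not at all
            next_done += send_time
        samples.append(done_bytes * 8 / demo_interval_s)
        t += demo_interval_s
    return samples

if __name__ == "__main__":
    # 50 Mbit/s, 0.2 s demo interval: with 128K sends the samples alternate
    # around 50 Mbit/s, with 8K sends the quantization error mostly vanishes.
    for size in (128 * 1024, 8192):
        vals = demo_samples(50e6, size, 0.2, 2.0)
        print(size, ["%.1f" % (v / 1e6) for v in vals])
```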

I think the right fix would be in netperf: either emit updates on send-buffer boundaries instead of in real time, or, if possible, query the kernel at each demo interval for what has actually been acked so far.

But it can be mitigated by using netperf's -m flag to reduce the send size, which increases CPU usage but improves the fidelity of the throughput data. Here are the same two plots with -m 8192 hacked in, which raises netperf's CPU usage from ~1% to ~3% but removes the dips:

[Plot: netperf_send_size_dashm]

[Plot: netperf_send_size_dashm_highfreq]

So it would be helpful if we could specify netperf's send size through flent. I'm willing to make this change, but I'm not sure where it belongs; maybe a single global --send-size flag that is passed to netperf as both -m and -M?
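
As a sketch of what I mean (the helper below is hypothetical, not flent's actual runner code), the flag would only need to append the -m/-M test-specific options to whatever netperf command flent already builds:

```python
# Hypothetical sketch only, not flent's actual runner code: how a global
# --send-size value could be appended to the netperf command line flent
# already constructs. The helper name is made up; the netperf options
# themselves are real: -m (local send size) and -M (remote receive size)
# are test-specific options and go after the "--" separator.

def append_send_size(netperf_cmd, send_size=None):
    """Return netperf_cmd with -m/-M test-specific options appended."""
    cmd = list(netperf_cmd)
    if send_size is None:
        return cmd
    if "--" not in cmd:
        cmd.append("--")            # test-specific options follow "--"
    return cmd + ["-m", str(send_size), "-M", str(send_size)]

# e.g. append_send_size(["netperf", "-H", "192.168.1.2", "-t", "TCP_STREAM",
#                        "-l", "60"], 8192)
# -> [..., '-l', '60', '--', '-m', '8192', '-M', '8192']
```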

I'll be glad to have this sorted out, as over the years I've occasionally thought either the code or my test rig was flawed. :)

@tohojo
Owner

tohojo commented Feb 6, 2020

Very nice find! Hmm, we quite recently added the ability to set UDP packet size (in 00eda5b), and I guess this is similar.

So maybe a global switch that sets -m for netperf, and also sets the default UDP send size (overridable by the test parameter)? And maybe make --send-size also allow comma-separated values for per-flow setting?
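
Something along these lines, say (purely illustrative, not actual flent option handling):

```python
# Purely illustrative, not actual flent code: turning a comma-separated
# --send-size value into per-flow sizes, cycling the list if there are
# more flows than values.

def per_flow_send_sizes(value, n_flows):
    """'8192' -> [8192, 8192, ...]; '8192,16384' -> alternating sizes."""
    if not value:
        return [None] * n_flows          # fall back to netperf's default
    sizes = [int(v) for v in value.split(",")]
    return [sizes[i % len(sizes)] for i in range(n_flows)]

# per_flow_send_sizes("8192,16384", 4) -> [8192, 16384, 8192, 16384]
```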

@heistp
Contributor Author

heistp commented Feb 6, 2020

OK, allowing comma-separated values to set it per-flow sounds good, but I'm not sure about sharing it with the UDP setting. People usually specify a UDP packet size somewhere below the MTU, right? That would cause pretty high CPU usage on the stream test. I expect that people lowering this setting will use something like 4-32K or so.

@tohojo
Owner

tohojo commented Feb 6, 2020 via email

@heistp
Contributor Author

heistp commented Feb 6, 2020

Ah, OK, so we could combine them for now, and there can always be separate flags later if needed; it's also good that the UDP one can be overridden with the test parameter. I'll see if I can make this change...

@heistp
Contributor Author

heistp commented Feb 6, 2020

Sent a pull request, but only for setting it globally for TCP so far; setting it per-flow and for the UDP packet size was more than I bargained for. :)

The reason I'm probably seeing this more than most people is that I use non-default net.ipv4.tcp_wmem settings for tests at high BDPs. Being able to set the send size as a test parameter would be useful, because when I run lower- and higher-bandwidth tests in the same batch, a setting that works well at low bandwidths may be too expensive at higher bandwidths. That's also why it would be nice if this "just worked" in netperf.

I'll also try plots using the delivery rate instead, since that comes from the kernel, so I may add some more plots of it to flent.

@tohojo
Owner

tohojo commented Feb 6, 2020 via email
