http: optimize IncomingMessage._dump #59747
Conversation
lgtm
Force-pushed from d3ff5f6 to e86d740
Failed to start CI. ⚠ Commits were pushed since the last approving review: http: optimize IncomingMessage._dump. ✘ Refusing to run CI on potentially unsafe PR: https://github.com/nodejs/node/actions/runs/17445334596
Force-pushed from 197ba2c to 60e12ac
I don't know how to deal with the testing of this. Ideally, I would like to run all http tests both with and without the option.
@RafaelGSS any idea why the benchmark CI failed?
Why should we run all HTTP tests with this option on and off? What makes this option so special that it requires running that many tests?
I think this is one of those changes that can cause various regressions.
@ronag Benchmark CI: https://ci.nodejs.org/view/Node.js%20benchmark/job/benchmark-node-micro-benchmarks/1728/ You forgot to include the commit SHA.
@RafaelGSS why did you close?
It was a mistake, sorry @ronag
doc/api/http.md
Outdated
@@ -3632,6 +3635,10 @@ changes:
* `rejectNonStandardBodyWrites` {boolean} If set to `true`, an error is thrown
  when writing to an HTTP response which does not have a body.
  **Default:** `false`.
* `dumpGETBody` {boolean} If set to `true`, the request body for `HEAD` and `GET`
  requests will be immediately dumped and the `end` and `close` events for
  `IncomingMessage` will be emitted before the server emits `'request'`. This
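A minimal sketch of how the documented option might be used, assuming it is passed to `http.createServer` like the other server options in that list; only the option name `dumpGETBody` comes from the diff, the rest is illustrative:

```js
const http = require('http');

// Illustrative only: `dumpGETBody` is the option documented in the diff
// above; everything else here is an assumed usage pattern, not the patch.
const server = http.createServer({ dumpGETBody: true }, (req, res) => {
  // With the option enabled, GET/HEAD request bodies would already have
  // been dumped, so `req` may have emitted 'end'/'close' before this runs.
  res.end('ok');
});

server.listen(9000);
```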
I understand that this is behind an option but I find it odd. It breaks the invariant where `'end'` / `'close'` are always emitted.
This is a valid point. I don't think it's documented anywhere that it must emit the lifecycle events, though it is indeed breaking. It is totally valid from the streams API, since you can receive an already closed stream. It will work with `stream.finished` and `stream.pipeline`.
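For what it's worth, a small illustration of that point, assuming nothing beyond the public streams API: `stream.finished` still invokes its callback when handed a readable that has already ended.

```js
const { Readable, finished } = require('stream');

// Illustrative only: a readable that has already ended still satisfies
// stream.finished(), which is why an already-dumped IncomingMessage can
// still be consumed through finished()/pipeline().
const r = Readable.from([]);
r.resume(); // nothing to read, so the stream ends right away

r.once('end', () => {
  finished(r, (err) => {
    console.log('finished callback fired, err =', err ?? null);
  });
});
```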
Force-pushed from d9d8fbc to fc3f493
@mcollina @RafaelGSS benchmark CI is not meaningful, I added my own autocannon benchmark #59747 (comment):

const http = require('http');
http.createServer((req, res) => {
  res.end();
}).listen(9000);
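Presumably the server above was then driven with autocannon from another terminal, e.g. something like `npx autocannon -c 100 -d 10 http://localhost:9000`; the exact flags used are not shown in the thread.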
Can we implement this so that if someone is listening for 'end' or 'close', it follows the usual path, while if they are not, we avoid emitting them? Alternatively, we could have _dump actually emit them.
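A rough sketch of the first idea — only skip the normal event flow when nothing is observing the body — assuming the check would sit somewhere in the internal request setup; `maybeDump` and its placement are hypothetical, not part of the actual patch:

```js
// Illustrative only: `maybeDump` is a hypothetical helper, not the patch.
// It dumps the body eagerly only when nobody is observing it.
function maybeDump(req) {
  const observed =
    req.listenerCount('data') > 0 ||
    req.listenerCount('end') > 0 ||
    req.listenerCount('close') > 0;

  if (observed) {
    // Someone is watching: follow the usual path and let the stream
    // emit 'end'/'close' as it is consumed.
    return;
  }

  // Nobody is watching: discard the body up front.
  req._dump();
}
```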
With fastify:
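The fastify snippet and its numbers did not survive extraction; a minimal fastify equivalent of the plain `http` server above would presumably look something like this (illustrative, not the code from the comment):

```js
// Illustrative only: assumed fastify equivalent of the benchmark target.
const fastify = require('fastify')();

fastify.get('/', async () => '');

fastify.listen({ port: 9000 });
```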
Benchmark result on my tests: so, turns out my computer was too fast and it was delivering the same amount of req/sec regardless of the binary. In other words, I was being limited by wrk2 itself. Now I have configured proper values, and the latency (which is the most important metric in terms of wrk2) has shown the following distribution:
node v24.5.0
And autocannon reported: node v24.5.0
versus
PRE:
POST: