stream: add fast paths for webstreams read and pipeTo #61807

nodejs-github-bot merged 1 commit into nodejs:main
Conversation
Codecov Report

Additional details and impacted files:

@@            Coverage Diff             @@
##             main   #61807      +/-   ##
==========================================
- Coverage   89.75%   89.71%   -0.05%
==========================================
  Files         674      675       +1
  Lines      204416   204884     +468
  Branches    39285    39377      +92
==========================================
+ Hits       183472   183807     +335
- Misses      13227    13338     +111
- Partials     7717     7739      +22
Add internal fast paths to improve webstreams performance without changing the public API or breaking spec compliance.

1. ReadableStreamDefaultReader.read() fast path: When data is already buffered in the controller's queue, return PromiseResolve() directly without creating a DefaultReadRequest object. This is spec-compliant because read() returns a Promise, and resolved promises still run callbacks in the microtask queue.

2. pipeTo() batch read fast path: When data is buffered, batch reads directly from the controller queue up to highWaterMark without creating PipeToReadableStreamReadRequest objects per chunk. Respects backpressure by checking desiredSize after each write.

Benchmark results:
- pipeTo: ~11% faster (***)
- buffered read(): ~17-20% faster (***)

Co-Authored-By: Malte Ubl <malte@vercel.com>
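To illustrate the first fast path, here is a minimal, self-contained sketch. TinyReader is invented for this example and does not reflect the actual node webstreams internals; it only shows the idea of returning an already-resolved promise when a chunk is buffered, versus parking a resolver the way the read-request machinery does.

'use strict';
// Illustrative sketch only, not the node internals.
class TinyReader {
  constructor() {
    this.queue = [];         // chunks buffered ahead of any read()
    this.pendingReads = [];  // parked resolvers waiting for data
  }
  enqueue(chunk) {
    const pending = this.pendingReads.shift();
    if (pending !== undefined) {
      pending({ value: chunk, done: false });  // fulfil a waiting read()
    } else {
      this.queue.push(chunk);
    }
  }
  read() {
    if (this.queue.length > 0) {
      // Fast path: data is already buffered, so hand back an already-resolved
      // promise instead of allocating a read-request object. Callbacks still
      // run on the microtask queue, so observable ordering is unchanged.
      return Promise.resolve({ value: this.queue.shift(), done: false });
    }
    // Slow path: no data yet, park a resolver until enqueue() delivers a chunk
    // (requires a runtime with Promise.withResolvers, e.g. Node 22+).
    const { promise, resolve } = Promise.withResolvers();
    this.pendingReads.push(resolve);
    return promise;
  }
}

const reader = new TinyReader();
reader.enqueue('already buffered');
reader.read().then(({ value }) => console.log('fast path:', value));
reader.read().then(({ value }) => console.log('slow path:', value));
reader.enqueue('arrives later');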
Landed in 199daab
  readableStreamDefaultControllerCallPullIfNeeded(controller);
}
return PromiseResolve({ value: chunk, done: false });
It looks like the only difference between the fast and slow paths is that the fast path calls PromiseResolve() while the slow path calls [kChunk], which calls PromiseWithResolvers().resolve(). The rest of the code is just a copy-paste of readableStreamDefaultControllerPullSteps.
Comparing the specifications for PromiseResolve and Promise.withResolvers, they seem identical? The first calls promiseCapability.[[Resolve]] immediately, while the second exposes that same promiseCapability.[[Resolve]] as a function on the resulting object. So there shouldn't be a difference in the number of microtasks: both resolve the promise immediately.
So where is the speed up coming from?
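Not from the PR, but a quick check of that reasoning: both forms hand back an already-settled promise, and their .then callbacks go through the microtask queue the same way, which suggests the gap is allocation and call overhead rather than scheduling.

// Both promises are already settled here; the .then callbacks run on the
// microtask queue in registration order either way.
const viaResolve = Promise.resolve('Promise.resolve');
const { promise: viaWithResolvers, resolve } = Promise.withResolvers();
resolve('Promise.withResolvers');

viaResolve.then(console.log);                            // 1st microtask
viaWithResolvers.then(console.log);                      // 2nd microtask
queueMicrotask(() => console.log('queued afterwards'));  // 3rd microtask
console.log('synchronous code still runs first');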
Okay, so Promise.withResolvers is really just slower... Huh. 🤔
'use strict';
const common = require('../common.js');
const bench = common.createBenchmark(main, {
kind: ['promise-resolve', 'with-resolvers', 'new-promise'],
n: [1e5, 1e6, 1e7],
});
const PromiseResolve = Promise.resolve.bind(Promise);
const PromiseWithResolvers = Promise.withResolvers.bind(Promise);
async function main({kind, n}) {
switch (kind) {
case 'promise-resolve': {
bench.start();
for (let i = 0; i < n; ++i) {
await PromiseResolve({value: 'a', done: false});
}
bench.end(n);
break;
}
case 'with-resolvers': {
bench.start();
for (let i = 0; i < n; ++i) {
const resolvers = PromiseWithResolvers();
resolvers.resolve({value: 'a', done: false});
await resolvers.promise;
}
bench.end(n);
break;
}
case 'new-promise': {
bench.start();
for (let i = 0; i < n; ++i) {
await new Promise(resolve => resolve({value: 'a', done: false}));
}
bench.end(n);
break;
}
}
}
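(Node's benchmark files run standalone, so assuming this were saved somewhere under benchmark/, say as benchmark/es/promise-resolve-vs-with-resolvers.js, it could be run directly with node to produce output like the results below; the directory and filename here are illustrative.)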
Results:
promise-resolve-vs-with-resolvers.js n=100000 kind="promise-resolve": 20,311,167.079660397
promise-resolve-vs-with-resolvers.js n=1000000 kind="promise-resolve": 29,780,074.152384643
promise-resolve-vs-with-resolvers.js n=10000000 kind="promise-resolve": 32,482,982.977292772
promise-resolve-vs-with-resolvers.js n=100000 kind="with-resolvers": 14,021,705.600269217
promise-resolve-vs-with-resolvers.js n=1000000 kind="with-resolvers": 18,045,948.594310835
promise-resolve-vs-with-resolvers.js n=10000000 kind="with-resolvers": 21,435,400.134614315
promise-resolve-vs-with-resolvers.js n=100000 kind="new-promise": 14,197,285.479016412
promise-resolve-vs-with-resolvers.js n=1000000 kind="new-promise": 21,022,403.5754904
promise-resolve-vs-with-resolvers.js n=10000000 kind="new-promise": 24,266,573.159471426
This was done in partnership with Vercel to improve the performance of React and Next.js, and grew out of a conversation on X with @cramforce.