This benchmark runs the Babel transformation logic using the es2015 preset on a 196KiB ES2015 module containing the untranspiled Vue bundle. Note that this explicitly excludes the Babylon parser and only measures the throughput of the actual transformations. The parser is tested separately by the babylon benchmark below.
Right now it only tests an ES2015 module (albeit a 194kb one 👍), but that may not be representative of future workloads, so we should think about possible changes to this benchmark:
Since we have deprecated the yearly presets like preset-es2015, we should run with @babel/preset-env now
We might also want to think about different compilation targets (targets: default/IE, current Node, current Chrome, etc.), though a more modern target just runs less of Babel, so I'm not sure how useful that is for a benchmark?
Similarly, we could add a test for an ES3/ES5 file as a baseline measure of the cost of going through the whole program. (The shortcut Babel could take is to print the file back out unchanged if it doesn't find anything to transform, kind of like how engines cheat, but we won't do that.)
I just realized we could run Babel on the output of the original benchmark, since that will be ES5 anyway?
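For the preset-env migration, the benchmark's Babel options could look something like the sketch below. The variant names and target values are placeholder assumptions, not recommendations; the point is that each `targets` setting transpiles a different amount of the input, which is the trade-off discussed above.

```javascript
// Hypothetical option variants for the benchmark after moving off preset-es2015.
// Newer targets make @babel/preset-env enable fewer plugins, so the benchmark
// would run less of Babel; an IE-style target keeps most transforms active.
const optionVariants = {
  legacy: { presets: [["@babel/preset-env", { targets: { ie: 11 } }]] },
  node:   { presets: [["@babel/preset-env", { targets: { node: "current" } }]] },
  modern: { presets: [["@babel/preset-env", { targets: { esmodules: true } }]] },
};

module.exports = optionVariants;
```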
The payload should test other kinds of code that people are writing/using with Babel, like:
ES2017+ and Stage x proposals (we could use Babel itself for this if we bundled all of it untranspiled, but there are probably other projects we could use)
JSX/Flow/TypeScript
There are other things like compiling a minified source but people shouldn't be doing that?
Babel operates per file, so realistically it compiles a lot of smaller files
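One way to organize the suggestions above (purely illustrative; the payload names, file names, and preset choices are assumptions, not the benchmark's actual contents): a registry of payload descriptors the suite can iterate over, instead of a single large Vue bundle.

```javascript
// Hypothetical payload registry covering the code flavors discussed above.
// Each entry pairs an input file with the presets needed to compile it;
// several smaller per-file payloads would better reflect how Babel is
// actually invoked than one 194kb bundle.
const payloads = [
  { name: "vue-es2015",   file: "vue.esm.js",   presets: ["@babel/preset-env"] },
  { name: "jsx",          file: "app.jsx",      presets: ["@babel/preset-env", "@babel/preset-react"] },
  { name: "typescript",   file: "store.ts",     presets: ["@babel/preset-env", "@babel/preset-typescript"] },
  { name: "es5-baseline", file: "transpiled.js", presets: ["@babel/preset-env"] },
];

module.exports = payloads;
```

Running each descriptor through the same harness would give per-payload throughput numbers rather than a single aggregate.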
hzoo changed the title from "Babel: suggestions for benchmark payloads" to "Babel: thinking about how to make the benchmark more representative" on Nov 16, 2017
Those are great suggestions, Henry, thanks a lot! The current version is mostly a one-shot prototype. Ideally the benchmark payloads would be created and driven by experts like you, who know what a representative workload for Babel looks like.
Ref #24 (comment)
The current benchmark: web-tooling-benchmark/src/babel-benchmark.js (line 11 in 21ca9e9)