---
title: "Turbopack Performance Benchmarks"
date: 2022/10/31
description: "Benchmarking Turbopack performance against Vite and webpack."
tag: "web development"
ogImage: "/images/blog/turbopack-benchmarks/twitter-card.png"
---

import { DEFAULT_BARS, HMR_BARS } from '../../components/pages/pack-home/PackBenchmarks'
import { DocsBenchmarksGraph } from '../../components/pages/pack-home/DocsBenchmarksGraph';
import { Tabs, Tab } from '../../components/Tabs'
import { ThemedImageFigure } from '../../components/image/ThemedImageFigure'
import { Authors } from '../../components/Authors'
import Callout from '../../components/Callout'
import Date from "../../components/blog/Date";

# Turbopack Performance Benchmarks

<Date update={<>Thursday, December 22nd, 2022</>}>
  Monday, October 31st, 2022
</Date>

<Authors authors={[ 'tobiaskoppers', 'alexkirsz' ]} />

<p className="text-gray-500 mt-6 uppercase text-sm tracking-wider">Summary</p>

- We are thankful for the work of the entire OSS ecosystem and the incredible interest and reception from the [Turbopack release](https://vercel.com/blog/turbopack). We look forward to continuing our collaboration with and integration into the broader Web ecosystem of tooling and frameworks.
- In this article, you will find our methodology and documentation supporting the benchmarks that show **Turbopack is [much faster](#bench) than existing non-incremental approaches.**
- **Turbopack** and [**Next.js 13.0.1**](https://github.com/vercel/next.js/releases/tag/v13.0.1) are out, addressing a regression that snuck in prior to the public release and after the initial benchmarks were taken. We also fixed an incorrect rounding bug on our website (`0.01s` → `15ms`). We appreciate [Evan You](https://twitter.com/youyuxi)'s work, which helped us identify and [correct this](https://github.com/vercel/turbo/pull/2516).
- We are excited to continue to evolve the incremental build architecture of Turbopack. We believe that there are still significant performance wins on the table.

<hr className="mt-8 w-full border-gray-400 authors border-opacity-20"/>

At [Next.js Conf](https://nextjs.org), [we announced](https://www.youtube.com/watch?v=NiknNI_0J48) our latest open-source project: Turbopack, an incremental bundler and build system optimized for JavaScript and TypeScript, written in Rust.

The project began as an exploration to improve webpack's performance and create ways for it to more easily integrate with tooling moving forward. In doing so, the team realized that a greater effort was necessary. While we saw opportunities for better performance, the premise of a new architecture that could scale to the largest projects in the world was inspiring.

In this post, we will explore why Turbopack is so fast, how its incremental engine works, and benchmark it against existing approaches.

## Why is Turbopack _blazing_ fast?

Turbopack's speed comes from its incremental computation engine. Similar to trends we have seen in frontend state libraries, computational work is split into reactive functions that enable Turbopack to apply updates to an existing compilation without going through a full graph recomputation and bundling lifecycle.

This does not work like traditional caching, where you look up a result from a cache before an operation and then decide whether or not to use it. That would be too slow.

Instead, Turbopack skips work altogether for cached results and only recomputes the affected parts of its internal dependency graph of functions. This makes updates independent of the size of the whole compilation and eliminates the usual overhead of traditional caching.
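To make this concrete, here is a minimal sketch of such a reactive computation graph in TypeScript. It illustrates the general technique only, not Turbopack's actual engine (which is written in Rust and far more sophisticated); the `Computation`, `read`, and `invalidate` names are invented for this illustration.

```ts
// A minimal sketch of an incremental computation graph (illustrative only).
type Computation<T = unknown> = {
  run: () => T;                 // the wrapped unit of work
  value?: T;                    // memoized result of the last run
  dirty: boolean;               // true if the node must be recomputed
  dependents: Set<Computation>; // nodes that consumed this node's value
};

let currentConsumer: Computation | null = null;

function createComputation<T>(run: () => T): Computation<T> {
  return { run, dirty: true, dependents: new Set() };
}

// Read a node's value, recomputing it only if it has been invalidated.
function read<T>(node: Computation<T>): T {
  // Record that the currently running computation depends on this node.
  if (currentConsumer !== null) node.dependents.add(currentConsumer);
  if (node.dirty) {
    const previous = currentConsumer;
    currentConsumer = node;
    node.value = node.run(); // reads inside run() register as dependents
    node.dirty = false;
    currentConsumer = previous;
  }
  return node.value as T;
}

// Mark a node and everything downstream of it as dirty. Untouched parts
// of the graph keep their memoized values and are never re-run.
function invalidate(node: Computation): void {
  if (node.dirty) return;
  node.dirty = true;
  node.dependents.forEach(invalidate);
}

// Usage: editing one module only recomputes its own chain of consumers.
let source = "export const answer = 42;";
const parsed = createComputation(() => ({ code: source }));
const bundled = createComputation(() => `/* chunk */ ${read(parsed).code}`);

read(bundled);      // first build: runs parsed, then bundled
read(bundled);      // no-op: both results are memoized
source = "export const answer = 43;";
invalidate(parsed); // a file change dirties parsed and bundled only
read(bundled);      // rebuild touches just the affected nodes
```

This is the sense in which updates are independent of total compilation size: the cost of re-reading the result after an edit is proportional to the dirtied subgraph, not to the number of modules.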
## Benchmarking Turbopack, webpack, and Vite

We created a test generator that makes an application with a variable number of modules to benchmark cold startup and file updating tasks. This generated app includes entries for these tools:

- Next.js 11
- Next.js 12
- Next.js 13 with Turbopack
- Vite

As the current state of the art, we are including [Vite](https://vitejs.dev/) along with webpack-based [Next.js](https://nextjs.org) solutions. All of these toolchains point to the same generated component tree, assembling a [Sierpiński triangle](https://en.wikipedia.org/wiki/Sierpi%C5%84ski_triangle) in the browser, where every triangle is a separate module.

<ThemedImageFigure
  borderRadius={false}
  dark={{
    source: '/images/blog/turbopack-benchmarks/triangle-dark.png',
    height: 600,
    width: 1200
  }}
  light={{
    source: '/images/blog/turbopack-benchmarks/triangle-light.png',
    height: 600,
    width: 1200
  }}
  captionSpacing={-12}
  caption="This image is a screenshot of the test application we run our benchmarks on. It depicts a Sierpiński triangle where each single triangle is its own component, separated in its own file. In this example, there are 3,000 triangles being rendered to the screen."
/>
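To make the shape of the test app concrete, a generated module might look roughly like the following (a hypothetical reconstruction; the actual generator lives in the [`vercel/turbo`](https://github.com/vercel/turbo) repository). Each triangle component sits in its own file and imports the three triangles nested below it, while leaf triangles import nothing:

```tsx
// Hypothetical shape of one generated module in the benchmark's component
// tree. Every triangle renders its three sub-triangles, so each module
// sits at a well-defined depth in the dependency graph.
import Triangle0 from "./triangle_0";
import Triangle1 from "./triangle_1";
import Triangle2 from "./triangle_2";

export default function Triangle() {
  return (
    <>
      <Triangle0 />
      <Triangle1 />
      <Triangle2 />
    </>
  );
}
```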
### Cold startup time

This test measures how fast a local development server starts up on an application of various sizes. We measure this as the time from startup (without cache) until the app is rendered in the browser. We do not wait for the app to be interactive or hydrated in the browser for this dataset.

Based on feedback and collaboration with the Vite team, we used the [SWC plugin](https://github.com/vitejs/vite-plugin-react-swc) with Vite in place of the [default Babel plugin](https://github.com/vitejs/vite-plugin-react) for improved performance in this benchmark.

<DocsBenchmarksGraph category="cold" bars={DEFAULT_BARS} />

<ThemedImageFigure
  borderRadius={true}
  dark={{
    source: '/images/blog/turbopack-benchmarks/bench_startup_dark.svg',
    height: 720,
    width: 1960
  }}
  light={{
    source: '/images/blog/turbopack-benchmarks/bench_startup_light.svg',
    height: 720,
    width: 1960
  }}
  captionSpacing={24}
  caption="Startup time by module count. Benchmark data generated from a 16″ MacBook Pro 2021, M1 Max, 32GB RAM, macOS 13.0.1 (22A400)."
/>

#### Data

To run this benchmark yourself, clone [`vercel/turbo`](https://github.com/vercel/turbo) and then use this command from the root:

```sh
TURBOPACK_BENCH_COUNTS=1000,5000,10000,30000 TURBOPACK_BENCH_BUNDLERS=all cargo bench -p next-dev "startup/(Turbopack SSR|Next.js 12 SSR|Next.js 11 SSR|Vite SWC CSR)."
```

Here are the numbers we were able to produce on a 16″ MacBook Pro 2021, M1 Max, 32GB RAM, macOS 13.0.1 (22A400):

```sh
bench_startup/Next.js 11 SSR/1000 modules 9.2±0.04s
bench_startup/Next.js 11 SSR/5000 modules 32.9±0.67s
bench_startup/Next.js 11 SSR/10000 modules 71.8±2.57s
bench_startup/Next.js 11 SSR/30000 modules 237.6±6.43s
bench_startup/Next.js 12 SSR/1000 modules 3.6±0.02s
bench_startup/Next.js 12 SSR/5000 modules 12.1±0.32s
bench_startup/Next.js 12 SSR/10000 modules 23.3±0.32s
bench_startup/Next.js 12 SSR/30000 modules 89.1±0.21s
bench_startup/Turbopack SSR/1000 modules 1381.9±5.62ms
bench_startup/Turbopack SSR/5000 modules 4.0±0.04s
bench_startup/Turbopack SSR/10000 modules 7.3±0.07s
bench_startup/Turbopack SSR/30000 modules 22.0±0.32s
bench_startup/Vite SWC CSR/1000 modules 4.2±0.02s
bench_startup/Vite SWC CSR/5000 modules 16.6±0.08s
bench_startup/Vite SWC CSR/10000 modules 32.3±0.12s
bench_startup/Vite SWC CSR/30000 modules 97.7±1.53s
```

### File updates (HMR)

We also measure how quickly the development server works from when an update is applied to a source file to when the corresponding change is re-rendered in the browser.

For Hot Module Reloading (HMR) benchmarks, we first start the dev server on a fresh installation with the test application. We wait for the HMR server to boot up by running updates until one succeeds. We then run ten changes to warm up the tooling. This step is important as it prevents discrepancies that can arise with cold processes.

Once our tooling is warmed up, we run a series of updates to a list of modules within the test application. Modules are sampled randomly with a distribution that ensures we test a uniform number of modules per module depth. The depth of a module is its distance from the entry module in the dependency graph. For instance, if the entry module A imports module B, which imports modules C and D, the depth of the entry module A will be 0, that of module B will be 1, and that of modules C and D will be 2. Modules A and B will have an equal probability of being sampled, but modules C and D will each have only half that probability.

We report the linear regression slope of the data points as the target metric. This is an estimate of the average time it takes for the tooling to apply an update to the application.
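The following sketch illustrates both of these pieces as hypothetical helpers, written in TypeScript for readability; the actual harness is part of the Rust benchmark suite in `vercel/turbo`, and the exact aggregation of samples is an assumption here.

```ts
// Hypothetical helpers illustrating the methodology described above.

// Pick a module such that every depth is equally likely, and modules within
// a depth are equally likely. With depths [0, 1, 2, 2] (entry A, then B,
// then C and D), A and B each get probability 1/3; C and D each get 1/6.
function sampleModule(moduleDepths: number[]): number {
  const byDepth = new Map<number, number[]>();
  moduleDepths.forEach((depth, index) => {
    const bucket = byDepth.get(depth) ?? [];
    bucket.push(index);
    byDepth.set(depth, bucket);
  });
  const depths = [...byDepth.keys()];
  const chosenDepth = depths[Math.floor(Math.random() * depths.length)];
  const modules = byDepth.get(chosenDepth)!;
  return modules[Math.floor(Math.random() * modules.length)];
}

// Ordinary least-squares slope, assuming cumulative elapsed time is
// regressed against the update count: an estimate of the average
// per-update cost that, unlike a plain mean, is not skewed by a constant
// one-time offset such as warmup.
function regressionSlope(updateCounts: number[], elapsed: number[]): number {
  const n = updateCounts.length;
  const meanX = updateCounts.reduce((a, b) => a + b, 0) / n;
  const meanY = elapsed.reduce((a, b) => a + b, 0) / n;
  let covXY = 0;
  let varX = 0;
  for (let i = 0; i < n; i++) {
    covXY += (updateCounts[i] - meanX) * (elapsed[i] - meanY);
    varX += (updateCounts[i] - meanX) ** 2;
  }
  return covXY / varX;
}
```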
<DocsBenchmarksGraph category="file_change" bars={HMR_BARS} />

<ThemedImageFigure
  borderRadius={true}
  dark={{
    source: '/images/blog/turbopack-benchmarks/bench_hmr_to_commit_dark.svg',
    height: 720,
    width: 1960
  }}
  light={{
    source: '/images/blog/turbopack-benchmarks/bench_hmr_to_commit_light.svg',
    height: 720,
    width: 1960
  }}
  captionSpacing={24}
  caption="Turbopack, Next.js (webpack), and Vite HMR by module count. Benchmark data generated from a 16″ MacBook Pro 2021, M1 Max, 32GB RAM, macOS 13.0.1 (22A400)."
/>

<a id="bench"/>

<ThemedImageFigure
  borderRadius={true}
  dark={{
    source: '/images/blog/turbopack-benchmarks/bench_hmr_to_commit_turbopack_vite_dark.svg',
    height: 720,
    width: 1960
  }}
  light={{
    source: '/images/blog/turbopack-benchmarks/bench_hmr_to_commit_turbopack_vite_light.svg',
    height: 720,
    width: 1960
  }}
  captionSpacing={24}
  caption="Turbopack and Vite HMR by module count. Benchmark data generated from a 16″ MacBook Pro 2021, M1 Max, 32GB RAM, macOS 13.0.1 (22A400)."
/>

The takeaway: Turbopack performance is a function of **the size of an update**, not the size of an application.

For more info, visit the comparison docs for [Vite](/pack/docs/comparisons/vite) and [webpack](/pack/docs/comparisons/webpack).

#### Data

To run this benchmark yourself, clone [`vercel/turbo`](https://github.com/vercel/turbo) and then use this command from the root:

```sh
TURBOPACK_BENCH_COUNTS=1000,5000,10000,30000 TURBOPACK_BENCH_BUNDLERS=all cargo bench -p next-dev "hmr_to_commit/(Turbopack SSR|Next.js 12 SSR|Next.js 11 SSR|Vite SWC CSR)"
```

Here are the numbers we were able to produce on a 16″ MacBook Pro 2021, M1 Max, 32GB RAM, macOS 13.0.1 (22A400):

```sh
bench_hmr_to_commit/Next.js 11 SSR/1000 modules 211.6±1.14ms
bench_hmr_to_commit/Next.js 11 SSR/5000 modules 866.0±34.44ms
bench_hmr_to_commit/Next.js 11 SSR/10000 modules 2.4±0.13s
bench_hmr_to_commit/Next.js 11 SSR/30000 modules 9.5±3.12s
bench_hmr_to_commit/Next.js 12 SSR/1000 modules 146.2±2.17ms
bench_hmr_to_commit/Next.js 12 SSR/5000 modules 494.7±25.13ms
bench_hmr_to_commit/Next.js 12 SSR/10000 modules 1151.9±280.68ms
bench_hmr_to_commit/Next.js 12 SSR/30000 modules 6.4±2.29s
bench_hmr_to_commit/Turbopack SSR/1000 modules 18.9±2.92ms
bench_hmr_to_commit/Turbopack SSR/5000 modules 23.8±0.31ms
bench_hmr_to_commit/Turbopack SSR/10000 modules 23.0±0.35ms
bench_hmr_to_commit/Turbopack SSR/30000 modules 22.5±0.88ms
bench_hmr_to_commit/Vite SWC CSR/1000 modules 104.8±1.52ms
bench_hmr_to_commit/Vite SWC CSR/5000 modules 109.6±3.94ms
bench_hmr_to_commit/Vite SWC CSR/10000 modules 113.0±1.20ms
bench_hmr_to_commit/Vite SWC CSR/30000 modules 133.3±23.65ms
```

As a reminder, Vite is using the official [SWC plugin](https://github.com/vitejs/vite-plugin-react-swc) for these benchmarks, which is not the default configuration.

Visit the [Turbopack benchmark documentation](/pack/docs/benchmarks) to run the benchmarks yourself. If you have questions about the benchmark, please open an [issue on GitHub](https://github.com/vercel/turbo/issues).

## The future of the open-source Web

Our team has taken the lessons from 10 years of webpack, combined with the innovations in incremental computation from [Turborepo](/repo) and Google's Bazel, and created an architecture ready to support the coming decades of computing.

Our goal is to create a system of open-source tooling that helps to build the future of the Web, powered by Turbopack. We are creating a reusable piece of architecture that will make both development and warm production builds faster for everyone.

For Turbopack's alpha, we are including it in Next.js 13. But, in time, [we hope that Turbopack will power other frameworks and builders](https://twitter.com/youyuxi/status/1585040276447690752?s=20&t=YV0ASkHl5twCWQvJF5jpwg) as a seamless, low-level, incremental engine to build great developer experiences with.

We look forward to being a part of the community bringing developers better tooling so that they can continue to deliver better experiences to end users. If you would like to learn more about Turbopack benchmarks, visit [turbo.build](https://turbo.build/). To try out Turbopack in Next.js 13, visit [nextjs.org](https://nextjs.org/docs/advanced-features/turbopack).

---

## Update (2022/12/22)

When we first released Turbopack, we made some claims about its performance relative to previous Next.js versions (11 and 12), and relative to Vite. These numbers were computed with our benchmark suite, which was publicly available on the [turbo repository](https://github.com/vercel/turbo), but we hadn't written up much about them, nor had we provided clear instructions on how to run them.

After collaborating with Vite's core contributor [Evan You](https://github.com/yyx990803), we released this blog post explaining our methodology and updated our website to provide instructions on how to run the benchmarks.

Based on the outcome of our collaboration with Vite, here are some clarifications we have made to the benchmarks above and to our testing methodology:

### Raw HMR vs. React Refresh

In the numbers we initially published, we were measuring the time between a file change and the update being run in the browser, but not the time it takes for React Refresh to re-render the update (`hmr_to_eval`).

We had another benchmark which included React Refresh (`hmr_to_commit`), which we elected not to use because we thought it mostly accounted for React overhead: an additional 30ms. However, this assumption turned out to be wrong, and the issue was [within Next.js' update handling code](https://github.com/vercel/next.js/pull/42350).

On the other hand, Vite's `hmr_to_eval` and `hmr_to_commit` numbers were much closer together (no 30ms difference), and [this raised the suspicion that our benchmark methodology was flawed and that we weren't measuring the right thing](https://github.com/yyx990803/vite-vs-next-turbo-hmr/discussions/8).

This blog post has been updated to include React Refresh numbers.
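As a sketch of the distinction between the two measurement points (hypothetical probe code, not the actual harness, which timestamps from the moment the file is written):

```ts
// Hypothetical browser-side probes for the two HMR measurement points.
// `updateSentAt` stands in for the harness's timestamp of the file write.
import { useEffect } from "react";

declare const updateSentAt: number; // assumed to be injected by the harness

// hmr_to_eval: logged as a side effect when the patched module re-evaluates.
console.log("hmr_to_eval:", performance.now() - updateSentAt, "ms");

// hmr_to_commit: logged only after React Refresh has re-rendered and
// committed the change; this is the metric the post now reports.
export function CommitProbe(): null {
  useEffect(() => {
    console.log("hmr_to_commit:", performance.now() - updateSentAt, "ms");
  });
  return null;
}
```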
### Root vs. Leaf

[Evan You's benchmark application](https://github.com/yyx990803/vite-vs-next-turbo-hmr) is composed of a single, very large file with a thousand imports, and a thousand very small files. The shape of our benchmark is different: it represents a tree of files, with each file importing 0 or 3 other files, trying to mimic an average application. Evan helped us find a regression when editing large, root files in his benchmark. The cause was [quickly identified and fixed by Tobias](https://github.com/vercel/turbo/commit/a11422fdf6b1b3cde9072d90aab6d9eebfacb591) and released in Next.js 13.0.1.

We have adapted our HMR benchmarks to sample modules uniformly at all depths, and we have updated this blog post and our documentation to include more details about this process.

### SWC vs. Babel

When we initially released the benchmarks, we were using the official Vite React plugin, which uses Babel under the hood. Turbopack itself uses SWC, [which is much faster than Babel](https://swc.rs/blog/perf-swc-vs-babel). Evan You suggested that, for a more accurate comparison, we should change Vite's benchmark to use the SWC plugin instead of the default Babel experience.

While SWC does improve Vite's startup performance significantly, it only shows a small difference in HMR updates (\<10%). It should be noted that the React Refresh implementations differ between the plugins, so this difference might not be measuring SWC's effect but some other implementation detail.

We have updated our benchmarks to run Vite with the official SWC plugin.

### File Watcher Differences

Every OS provides its own APIs for watching files. On macOS, Turbopack uses [FSEvents](https://developer.apple.com/documentation/coreservices/file_system_events), which has been shown to have [~12ms latency](https://twitter.com/devongovett/status/1586599130494746625) for reporting updates. We have considered using [kqueue](https://www.freebsd.org/cgi/man.cgi?query=kqueue&sektion=2) instead, which has much lower latency. However, since it is not a drop-in replacement and brings its own set of drawbacks, this is still in the exploratory stage and not a priority.

Expect to see different numbers on Linux, where [inotify](https://man7.org/linux/man-pages/man7/inotify.7.html) is the standard monitoring mechanism.
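You can observe this kind of latency yourself with a few lines of Node.js, whose `fs.watch` is backed by FSEvents on macOS and inotify on Linux (a rough sketch; absolute numbers will vary by machine and platform):

```ts
// Rough watcher-latency probe: time from writing a file to receiving the
// corresponding watch event.
import { watch, writeFileSync } from "node:fs";

const file = "./watch-probe.txt";
writeFileSync(file, "initial\n");

const watcher = watch(file, () => {
  console.log(`change reported after ${(performance.now() - wroteAt).toFixed(1)}ms`);
  watcher.close();
});

const wroteAt = performance.now();
writeFileSync(file, "changed\n");
```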
### Improving our Methodology

Our benchmarks originally sampled only 1 HMR update from 10 different instances of each bundler running in isolation, for a total of 10 sampled updates, and reported the mean time. Before sampling each update, bundler instances were warmed up by running 5 updates.

Sampling so few updates meant high variance from one measurement to the next, and this was particularly significant in Vite's HMR benchmark, where single updates could take anywhere between 80 and 300ms.

We have since refactored our benchmarking architecture to measure a variable number of updates per bundler instance, which results in anywhere between 400 and 4,000 sampled updates per bundler. We have also increased the number of warmup updates to 10.

Finally, instead of measuring the mean of all samples, we are measuring the slope of the linear regression line, as described above.

This change has improved the soundness of our results. It also brings Vite's numbers down to an **almost constant 100ms at any application size**, while we were previously measuring updates of 200+ms for larger applications of over 10,000 modules.

### Conclusion

After collaborating with the Vite team on the benchmarks, we are now measuring Turbopack to be consistently 5x faster than Vite for HMR across all application sizes. We have updated all of our benchmarks to reflect our new methodology.
