Support our deep-dive testing! Get 10% off any GN Modmats, Solder Mats, or tools during the sale: https://store.gamersnexus.net/
This review and benchmark of ...
I wish more tech outlets knew how to benchmark developer workloads; Chromium compile time is such an irrelevant benchmark. Timing a clean release build of an incredibly large C++ codebase, especially one with tons of engineering dedicated to making the build as parallel as possible, isn't at all representative of what 99% of programmers do day to day. I work on a large C++ codebase every day and it's been months since I've done a clean release build on my local machine.
A substantially better methodology would be to check out a commit 30 or so back, do a clean build and run the test suite to populate caches, then time the total duration of building and running the test suite at each commit until you land back on the latest one. Most programmers don't do clean builds unless they absolutely have to, and they'll have populated build caches. Do this for an array of languages with projects of varying sizes and you'll have a benchmark actually worth looking at.
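The replay methodology above can be sketched as a small harness. This is a minimal illustration, not anyone's actual benchmark: the function names (`replay_timings`, `make_demo_repo`) are made up for this sketch, it assumes `git` is on PATH, and the `build_and_test` callable is a placeholder where a real harness would invoke the project's incremental build and test runner.

```python
import subprocess
import tempfile
import time
from pathlib import Path

def run(args, cwd):
    """Run a command, raising on failure; stdout/stderr captured."""
    return subprocess.run(args, cwd=cwd, check=True, capture_output=True, text=True)

def replay_timings(repo, n_back, build_and_test):
    """Time `build_and_test` at each of the last `n_back` commits, oldest first.

    The commit *before* the window is built untimed first, so every timed
    run starts from a warm, populated cache -- a developer's typical state.
    """
    newest_first = run(["git", "rev-list", "--max-count", str(n_back + 1), "HEAD"],
                       cwd=repo).stdout.split()
    head = newest_first[0]
    oldest_first = list(reversed(newest_first))
    # Warm-up at the oldest commit, untimed, to populate build caches.
    run(["git", "checkout", "--quiet", oldest_first[0]], cwd=repo)
    build_and_test(repo)
    timings = []
    for sha in oldest_first[1:]:
        run(["git", "checkout", "--quiet", sha], cwd=repo)
        t0 = time.perf_counter()
        build_and_test(repo)  # incremental build + test suite goes here
        timings.append((sha[:8], time.perf_counter() - t0))
    run(["git", "checkout", "--quiet", head], cwd=repo)  # restore latest commit
    return timings

def make_demo_repo(n_commits=4):
    """Build a throwaway repo with a short linear history (illustration only)."""
    repo = tempfile.mkdtemp()
    run(["git", "init", "--quiet"], cwd=repo)
    for i in range(n_commits):
        Path(repo, "src.c").write_text(f"int version = {i};\n")
        run(["git", "add", "src.c"], cwd=repo)
        run(["git", "-c", "user.email=bench@example.com", "-c", "user.name=bench",
             "commit", "--quiet", "-m", f"commit {i}"], cwd=repo)
    return repo

if __name__ == "__main__":
    repo = make_demo_repo()
    # time.sleep stands in for a real "ninja && ctest"-style step.
    for sha, secs in replay_timings(repo, 3, lambda r: time.sleep(0.01)):
        print(f"{sha}  {secs:.3f}s")
```

A real version of this would vary `n_back`, run several repetitions per commit, and report the distribution rather than single timings, since incremental build times are noisy.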
Wait, your devops infrastructure does run builds centrally and/or in the cloud but on the other hand does run tests locally?
Not trying to be difficult here, genuinely asking.
I mean, unless you are pushing out a release, local testing is way better because cloud runners are expensive and that's way too long a feedback loop to find out what went wrong. I can compile my stuff locally and get results immediately.
I don’t think you understand what that benchmark is trying to achieve.
Most programmers aren’t using Threadrippers, and if your use-case isn’t parallelised, why would you be in the market for a high-core count CPU?
The Chromium compile times are actually perfect for our gamedev workload, since our build machines compile binaries without FASTBuild or other caching when producing builds to submit to partners. And we can reasonably expect the performance scaling they see to apply to our general compile times too, though those are fast enough on developer machines that you can barely feel the difference lol
GN has asked for development benchmark suggestions on various platforms, most recently and specifically for TRX50 (and perhaps WRX90). Try reaching out to his team.