idoubtit 2 hours ago

I've contributed a few optimisations to some implementations in these benchmarks, but as I read the code of many other implementations (and some frameworks), I lost most of my trust in the benchmarks themselves.

I knew that once a benchmark becomes famous, people start optimising for it or even gaming it, but I didn't realise how meaningless that made the results. Some frameworks were simply not production ready, or had shortcuts built in just for a benchmark case. Some implementations were supposed to use a framework, but the code was skewed in an unrealistic way. And sometimes the algorithm itself was different (IIRC, one implementation converted the "multiple SQL updates" requirement into a single complex UPDATE using CASE).
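To make that last trick concrete, here is a rough sketch of the pattern I mean, not the actual benchmark code. The table and column names (`world`, `randomnumber`) are assumptions, and I'm using Python's stdlib sqlite3 purely for illustration:

```python
import sqlite3

# Toy table standing in for the benchmark's database (names are made up).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE world (id INTEGER PRIMARY KEY, randomnumber INTEGER)")
conn.executemany("INSERT INTO world VALUES (?, ?)", [(i, 0) for i in range(1, 6)])

updates = {1: 42, 3: 7, 5: 99}  # id -> new value

# What the requirement intends: one UPDATE statement per row.
for row_id, value in updates.items():
    conn.execute("UPDATE world SET randomnumber = ? WHERE id = ?", (value, row_id))

# The gamed version: collapse everything into a single UPDATE with a CASE
# expression, turning N round-trips into one.
cases = " ".join(f"WHEN {i} THEN {v}" for i, v in updates.items())
ids = ",".join(str(i) for i in updates)
conn.execute(
    f"UPDATE world SET randomnumber = CASE id {cases} END WHERE id IN ({ids})"
)

rows = dict(conn.execute("SELECT id, randomnumber FROM world"))
print(rows)  # {1: 42, 2: 0, 3: 7, 4: 0, 5: 99}
```

Both versions produce the same final rows, which is exactly why the trick is tempting: it passes the correctness check while measuring something quite different from what the benchmark rules ask for.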

I would ignore the results in most cases, especially for emerging software, but the benchmarks did at least suggest orders of magnitude in a few cases: e.g. the speed of JSON serialization across languages, or that PHP Laravel was roughly twice as slow as PHP Symfony, which in turn could be twice as slow as Rails.