joaohaas 10 hours ago

With the recent barrage of AI-slop 'speedup' posts, the first thing I always do to see if a post is worth reading is Ctrl+F "benchmark" and check whether the benchmark makes any fucking sense.

99% of the time (such as in this article), it doesn't. What do you mean 'cloneBare + findCommit + checkout: ~10x win'? Does that mean running those commands back to back results in a 10x win over the original? Does it mean there's a specific function that calls these 3 operations, and that's the improvement of the overall function? What's the baseline we're talking about, and is it relevant at all?
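To make the ambiguity concrete, here is a minimal sketch of what an unambiguous benchmark report would look like: it times each operation separately and the combined back-to-back path, so a headline like '10x win' can say exactly which number it refers to. The three functions are hypothetical stand-ins; the article does not show the real implementations.

```python
import time

# Hypothetical stand-ins for the three git operations named in the post.
def clone_bare():
    time.sleep(0.001)

def find_commit():
    time.sleep(0.001)

def checkout():
    time.sleep(0.001)

def bench(ops):
    """Time each operation individually AND the combined path,
    so it's clear whether a speedup claim covers one op or all three."""
    report = {}
    for op in ops:
        start = time.perf_counter()
        op()
        report[op.__name__] = time.perf_counter() - start
    start = time.perf_counter()
    for op in ops:
        op()
    report["combined"] = time.perf_counter() - start
    return report

report = bench([clone_bare, find_commit, checkout])
```

A report shaped like this, measured once for the old path and once for the new one, would answer every question above with a single table.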

Those questions are partially answered on the much better benchmark page[1], but for some reason it compares against the git CLI instead of a git library.

[1] https://github.com/hdresearch/ziggit/blob/5d3deb361f03d4aefe...

yevbar 9 hours ago | parent | next [-]

The reason is that Bun actually tested both the git CLI and libgit2. Across the board, the C library was 3x slower than just spawning calls to the git CLI.

Under the hood, Bun calls these operations when doing a `bun install`, and these are the places where integrating the library directly gives the biggest boost. As more git deps are included in a project, these gains pile up.
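One factor in a CLI-vs-library comparison is that every CLI call pays a fixed process-spawn cost, while an in-process library call does not. A rough sketch of measuring that per-call spawn overhead (using a no-op Python child process as a portable stand-in for spawning `git`; the numbers are illustrative, not Bun's):

```python
import subprocess
import sys
import time

def time_spawns(n=10):
    """Average wall-clock cost of spawning one child process per call --
    the fixed overhead a CLI-based approach pays on every git operation."""
    start = time.perf_counter()
    for _ in range(n):
        # A do-nothing child process; stands in for one `git ...` invocation.
        subprocess.run([sys.executable, "-c", "pass"], check=True)
    return (time.perf_counter() - start) / n

per_call = time_spawns()
```

If this overhead is small relative to the work each git operation does, spawning the CLI can still win overall, which is consistent with what Bun reportedly measured.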

However, the results are closer to 1x parity once network time (i.e. the round trip to GitHub) is accounted for.
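That shrinkage is just Amdahl's law: if only the local git portion gets faster and the network round trips are unchanged, the end-to-end speedup is capped by how much of the total time was git work in the first place. A quick sketch with illustrative numbers (the 300 ms / 100 ms split and 3x factor are assumptions, not figures from the thread):

```python
def overall_speedup(network_s, git_s, git_speedup):
    """End-to-end speedup when only the git portion gets faster
    and network time stays fixed (Amdahl's law)."""
    before = network_s + git_s
    after = network_s + git_s / git_speedup
    return before / after

# 300 ms of network round trips, 100 ms of local git work, 3x git speedup:
print(round(overall_speedup(0.3, 0.1, 3.0), 2))  # → 1.2
```

So a 3x win on the git portion can easily show up as only ~1.2x end to end, which is why the results look close to parity once the GitHub round trip dominates.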

hrmtst93837 9 hours ago | parent | prev [-]

If they pulled off a 10x gain across sequential git ops, the post should show flame graphs or profiler output and spell out whether that label means one code path or three separate calls run back to back; 'cloneBare + findCommit + checkout' says almost nothing on its own. I read it as marketing copy.