throwaway7783 3 days ago

Java now has FFM, which is far better and simpler than JNI, FWIW. ChatGPT says:

| Language/API    | Call Overhead (no-op C) | Notes                                         |
|-----------------|-------------------------|-----------------------------------------------|
| Go (cgo)        | ~40–60 ns               | Stack switch + thread pinning                 |
| Java FFM        | ~50 ns (downcall)       | Similar to JNI; can be ~30 ns with isTrivial() |
| Java FFM (leaf) | ~30–40 ns               | Optimized (isTrivial=true)                    |
| JNI             | ~50–60 ns               | Slightly slower than FFM                      |
| Rust (unsafe)   | ~5–20 ns                | Near-zero overhead                            |
| C# (P/Invoke)   | ~20–50 ns               | Depends on marshaling                         |
| Python (cffi)   | ~1,000–10,000 ns        | Orders of magnitude slower                    |
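For context on what the "isTrivial" rows in the table refer to: a minimal FFM downcall sketch, assuming JDK 22+, where the preview-era `Linker.Option.isTrivial()` was renamed to `Linker.Option.critical(...)`. It binds C's `strlen` and marks the call as a short leaf call so the JVM can skip the thread-state transition.

```java
import java.lang.foreign.*;
import java.lang.invoke.MethodHandle;

public class FfmDemo {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();

        // Look up strlen in the default (libc) lookup.
        MemorySegment strlenAddr =
            linker.defaultLookup().find("strlen").orElseThrow();

        // Bind it as long strlen(const char*). critical(false) is the
        // JDK 22+ spelling of the isTrivial() option from the table:
        // it tells the JVM the call is short-lived, allowing a cheaper
        // downcall path (no heap segments are passed here).
        MethodHandle strlen = linker.downcallHandle(
            strlenAddr,
            FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS),
            Linker.Option.critical(false));

        try (Arena arena = Arena.ofConfined()) {
            // Allocate a NUL-terminated C string in native memory.
            MemorySegment cstr = arena.allocateFrom("hello");
            long len = (long) strlen.invokeExact(cstr);
            System.out.println(len); // prints 5
        }
    }
}
```

The `critical` option is only safe for calls that return quickly and never call back into Java, which is exactly the no-op-style call the table is measuring.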

billywhizz 2 days ago | parent | next

I can't see how these numbers can be anywhere near correct (nor the ones above). In JavaScript on an old Core i5, the overhead of a simple FFI call is on the order of 5 nanoseconds; on a recent x64/arm64 CPU it's more like 2 nanoseconds.

You can verify this easily with Deno FFI, which is pretty much optimal for JS runtimes. Also, from everything I have seen and read, LuaJIT should have even lower overhead than this.

You really shouldn't be asking ChatGPT questions like this, IMO. These are facts that need to be proven, not just vibes.
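In that spirit, a rough way to measure the per-call overhead yourself, sketched in Java with FFM (assuming JDK 22+). It times a tight loop over C's `abs`, which is about as close to a no-op native call as libc offers. This is a crude nanoTime loop, not a rigorous benchmark (a real measurement would use JMH); treat the number as an order-of-magnitude check only.

```java
import java.lang.foreign.*;
import java.lang.invoke.MethodHandle;

public class FfiBench {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();

        // Bind int abs(int) from libc: a near-zero-work native call.
        MethodHandle abs = linker.downcallHandle(
            linker.defaultLookup().find("abs").orElseThrow(),
            FunctionDescriptor.of(ValueLayout.JAVA_INT, ValueLayout.JAVA_INT));

        // Warm up so the JIT compiles the downcall path.
        int sink = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sink += (int) abs.invokeExact(-i);
        }

        final int n = 10_000_000;
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            sink += (int) abs.invokeExact(-i); // accumulate so the call isn't dead code
        }
        long perCall = (System.nanoTime() - t0) / n;

        System.out.println("~" + perCall + " ns/call");
        if (sink == 42) System.out.print(""); // keep sink live
    }
}
```

The same loop shape works for comparing against the `Linker.Option.critical(false)` variant, or against a JNI binding of the same function.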

throwaway7783 a day ago | parent

I agree. I was just following the parent's pattern to make it work for me :)

johnisgood 3 days ago | parent | prev

Thanks, I added it to the list. Keep in mind that the numbers may be off (both yours and mine), so I would not take them at face value. It is interesting that in yours JNI is still pretty good. Also, Rust is "~5–20 ns" in yours, so I assume "0" is the baseline.

throwaway7783 a day ago | parent

This is ChatGPT, not my own benchmark, so it is probably hallucinating.