vlovich123 | 3 days ago
That would be a compelling counter if and only if languages like Java actually beat other languages in throughput. In practice that doesn't seem to be the case, and the reasons seem to be:

* Languages like C++ and Rust simply don't allocate as much as Java, instead using value types. Even C# is better here, with value types being better integrated.

* Languages like C++ and Rust do not force atomic reference counting. Rust even offers non-atomic ref counting in the standard library (see the sketch below). You also only need an atomic increment/decrement when ownership is being transferred to another thread, and that isn't all that common, depending on the structure of your code. Even Swift doesn't do too badly here, thanks to the combination of the compiler being able to prove that reference counting can be elided altogether and escape-hatch data types that don't need it.

* C++, Rust, and Swift can access lower-level capabilities (e.g. SIMD and atomics) that let them get significantly higher throughput.

* Java's memory model implies and requires the JVM to insert atomic accesses all over the place you wouldn't expect (e.g. reading an integer field of a class is an atomic read and writing it is an atomic write). This is going to absolutely swamp any advantage of the GC. Additionally, a lot of Java code declares methods synchronized, which requires taking a "global" lock on the object; that is expensive and pessimistic for performance as compared with the fine-grained access other languages offer.

* There's a lot of research into making atomic reference counting cheaper (biased RC), which can transparently and safely skip the atomic operation when certain conditions are met.

I've yet to see a Java program that actually gets higher throughput than Rust, so the theoretical performance advantage you claim doesn't appear to manifest in practice.
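A minimal Rust sketch of the non-atomic vs atomic ref counting distinction mentioned above (illustrative only, not a benchmark; the Vec payload is just a placeholder):

    use std::rc::Rc;     // non-atomic reference count, single-threaded only (!Send)
    use std::sync::Arc;  // atomic reference count, shareable across threads
    use std::thread;

    fn main() {
        // Rc::clone bumps a plain integer counter; no atomic instruction is
        // needed, and the compiler rejects sending an Rc to another thread.
        let local = Rc::new(vec![1, 2, 3]);
        let local2 = Rc::clone(&local);
        println!("rc count = {}", Rc::strong_count(&local2));

        // Arc::clone uses an atomic increment; you pay for it only where the
        // value is actually shared across threads.
        let shared = Arc::new(vec![1, 2, 3]);
        let shared2 = Arc::clone(&shared);
        println!("arc count = {}", Arc::strong_count(&shared));
        let handle = thread::spawn(move || shared2.len());
        println!("len from other thread = {}", handle.join().unwrap());
    }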
gf000 | 3 days ago
The main topic here was Swift vs Android's Java. Of course with manual memory management you may be able to write more efficient programs, though it is not a given, and it comes at the price of a more complicated and less flexible programming model. At least with Rust it is actually memory safe, unlike C++.

- Ref counting still has worse throughput than a tracing GC, even when it is single-threaded and doesn't have to use atomic instructions. This may or may not matter (I'm not claiming the program as a whole ends up slower), especially when ref counting is used as rarely as it is in typical C++/Rust programs.

> You also only need an atomic increment/decrement when ownership is being transferred to another thread

Java can also do on-stack replacement... sometimes.

- Regarding lower-level capabilities: Java does have an experimental Vector API for SIMD, and atomics are readily available in the language.

- Java's memory model only requires 32-bit writes to be "atomic" (though in actuality the only requirement is that they not tear - there is no happens-before relation in the general case, and that is what is expensive; see the sketch below), and in practice 64-bit writes are atomic as well. Both are free on modern hardware. Field access is no different from what Rust or C++ does, AFAIK, in the general case. And `synchronized` is only used when needed - it's just syntactic convenience. This depends on the algorithm at hand; there is no difference between the same algorithm written in Rust/C++ vs Java from this perspective. If it's lockless, it will be lockless in Java as well. If it's not, then all of them will have to add a lock.

The point is not that manual memory management can't be faster or more efficient. It's that it is not free, and it comes with non-trivial extra effort on the developer's side, which is not even a one-time thing but applies for the lifetime of the program.
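To illustrate the "doesn't tear" vs "happens-before" distinction, here is a minimal sketch using Rust atomics (only because Rust spells the orderings out; the thread and values are placeholders, and this is not Java-specific):

    use std::sync::atomic::{AtomicU32, Ordering};
    use std::sync::Arc;
    use std::thread;

    fn main() {
        let flag = Arc::new(AtomicU32::new(0));
        let data = Arc::new(AtomicU32::new(0));

        let (f, d) = (Arc::clone(&flag), Arc::clone(&data));
        let writer = thread::spawn(move || {
            // Relaxed: only guarantees the 32-bit store is not torn. On x86-64
            // this compiles to an ordinary store.
            d.store(42, Ordering::Relaxed);
            // Release: additionally establishes a happens-before edge with any
            // thread that Acquire-loads the flag; this ordering, not the store
            // width, is where extra fences show up on weaker architectures.
            f.store(1, Ordering::Release);
        });

        // Acquire pairs with the Release above, so data = 42 is guaranteed
        // to be visible once the flag is observed as 1.
        while flag.load(Ordering::Acquire) == 0 {}
        println!("data = {}", data.load(Ordering::Relaxed));
        writer.join().unwrap();
    }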
vips7L | 3 days ago
> Java's memory model implies and requires the JVM to insert atomic accesses all over the place you wouldn't expect (e.g. reading an integer field of a class is an atomic read and writing it is an atomic write).

AFAIK that doesn't really happen. They won't insert atomic accesses anywhere on real hardware, because the CPU is capable of doing those loads and stores atomically anyway.

> Additionally, a lot of Java code declares methods synchronized, which requires taking a "global" lock on the object; that is expensive and pessimistic for performance as compared with the fine-grained access other languages offer.

What does this have to do with anything? Concurrency requires locks. Arc<T> is a global lock on references. "A lot" of Java objects don't use synchronized; I'd even bet that 95-99% of them don't.
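A minimal Rust sketch of the coarse-lock vs fine-grained distinction being debated here (the stats type and its fields are hypothetical; Java offers the same choice between synchronized and java.util.concurrent.atomic):

    use std::sync::atomic::{AtomicU64, Ordering};
    use std::sync::Mutex;

    // Rough analogue of a class whose methods are all synchronized:
    // every operation takes the same object-wide lock.
    struct CoarseStats {
        inner: Mutex<(u64, u64)>, // (hits, misses) guarded together
    }

    impl CoarseStats {
        fn record_hit(&self) {
            self.inner.lock().unwrap().0 += 1;
        }
        fn record_miss(&self) {
            self.inner.lock().unwrap().1 += 1;
        }
    }

    // Fine-grained alternative: independent atomic fields, no shared lock.
    struct FineStats {
        hits: AtomicU64,
        misses: AtomicU64,
    }

    impl FineStats {
        fn record_hit(&self) {
            self.hits.fetch_add(1, Ordering::Relaxed);
        }
        fn record_miss(&self) {
            self.misses.fetch_add(1, Ordering::Relaxed);
        }
    }

    fn main() {
        let coarse = CoarseStats { inner: Mutex::new((0, 0)) };
        coarse.record_hit();
        coarse.record_miss();
        println!("coarse = {:?}", *coarse.inner.lock().unwrap());

        let fine = FineStats { hits: AtomicU64::new(0), misses: AtomicU64::new(0) };
        fine.record_hit();
        fine.record_miss();
        println!("fine hits = {}", fine.hits.load(Ordering::Relaxed));
    }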