Not an L1/L2/... cache flush, but a store buffer flush, at least on x86. This is true for LOCK-prefixed instructions. Loads and stores (again on x86) are always acquire/release, so they don't need additional fences if you don't need seq-cst. However, seq-cst atomic stores in C++ lower to XCHG (which is implicitly locked), so you do get a fence.
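To make the store-side difference concrete, here is a minimal sketch (the codegen notes describe what GCC and Clang typically emit for x86-64; exact output depends on the compiler and version):

```cpp
#include <atomic>

std::atomic<int> x{0};

void store_release(int v) {
    // Plain x86 stores already have release semantics under TSO,
    // so this typically compiles to an ordinary `mov`.
    x.store(v, std::memory_order_release);
}

void store_seq_cst(int v) {
    // Typically compiles to `xchg` (implicitly locked) or `mov` + `mfence`,
    // which drains the store buffer before later loads can execute.
    x.store(v, std::memory_order_seq_cst);
}
```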
ibraheemdev 5 days ago:

> There is no way the shared_ptr<T> is using the expensive sequentially consistent atomic operations.

All RMW operations have sequentially consistent semantics on x86. It's not exactly a store buffer flush, but any subsequent loads in the pipeline will stall until the store has completed.
Kranar 5 days ago:

It's a common misconception to reason about memory models strictly in terms of hardware. Sequential consistency is a property of a programming language's semantics and cannot simply be inferred from the hardware. It is possible for every hardware operation to be SC while the compiler still provides weaker memory orderings through compiler-specific optimizations.
ibraheemdev 5 days ago:

I'm referring to the performance implications of the hardware instruction, not the programming language semantics. Incrementing or decrementing the reference count is going to require an RMW instruction, which is expensive on x86 regardless of the ordering.
Kranar 5 days ago:

The concept of sequential consistency only exists within the context of a programming language's memory model. It makes no sense to speak about the performance of sequentially consistent operations without reference to the semantics of a programming language.
ibraheemdev 5 days ago:

Yes, what I meant was that the compiler generates the same instruction regardless of whether the RMW operation is performed with relaxed or sequentially consistent ordering, because that instruction is strong enough in terms of hardware semantics to enforce C++'s definition of sequential consistency. There is a pretty clear mapping from C++ atomic operations to hardware instructions, and while the C++ memory model is not defined in terms of instruction reordering, that mapping is still useful for talking about performance. Sequential consistency is also a pretty broadly accepted concept outside of the C++ memory model; I think you're being a little too nitpicky on terminology.
Kranar 5 days ago:

The presentation you are making is both incorrect and highly misleading. There are algorithms whose correctness depends on sequential consistency and that cannot be implemented on x86 without explicit barriers, for example Dekker's algorithm. What x86 provides is TSO semantics, not sequential consistency.
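The store-buffering pattern at the core of Dekker's algorithm illustrates the gap between TSO and sequential consistency. A minimal sketch in C++ (variable names are illustrative):

```cpp
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> flag0{0}, flag1{0};
int r0 = -1, r1 = -1;

int main() {
    std::thread t0([] {
        flag0.store(1, std::memory_order_seq_cst);
        r0 = flag1.load(std::memory_order_seq_cst);
    });
    std::thread t1([] {
        flag1.store(1, std::memory_order_seq_cst);
        r1 = flag0.load(std::memory_order_seq_cst);
    });
    t0.join();
    t1.join();
    // With seq_cst, at least one thread must observe the other's store,
    // so r0 == 0 && r1 == 0 is impossible. With release stores and
    // acquire loads (or plain x86 stores/loads with no fence), the
    // store->load reordering allowed by TSO permits both to be 0,
    // which is exactly what breaks Dekker-style mutual exclusion.
    assert(!(r0 == 0 && r1 == 0));
}
```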
ibraheemdev 5 days ago:

I did not claim that x86 provides sequential consistency in general; I made that claim only for RMW operations. Sequentially consistent stores are typically lowered to an XCHG instruction on x86 without an explicit barrier. From the Intel SDM:

> Synchronization mechanisms in multiple-processor systems may depend upon a strong memory-ordering model. Here, a program can use a locking instruction such as the XCHG instruction or the LOCK prefix to ensure that a read-modify-write operation on memory is carried out atomically. Locking operations typically operate like I/O operations in that they wait for all previous instructions to complete and for all buffered writes to drain to memory (see Section 8.1.2, “Bus Locking”).
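For the RMW case, the point can be seen directly: both orderings below typically compile to the same locked instruction on x86-64 (function names are illustrative):

```cpp
#include <atomic>

std::atomic<long> refcount{1};

void incr_relaxed() {
    // Typically `lock add` on x86-64.
    refcount.fetch_add(1, std::memory_order_relaxed);
}

void incr_seq_cst() {
    // Same `lock add`: the LOCK prefix already makes the RMW a full
    // barrier, so no additional fence is emitted for seq_cst.
    refcount.fetch_add(1, std::memory_order_seq_cst);
}
```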
ot 4 days ago:

I'm not sure which comment you're responding to, because I'm not talking about shared_ptr, but about how atomic operations in general are implemented on x86. I don't believe that shared_ptr uses seq-cst because I can just look at the source code, and I know that inc ref is relaxed and dec ref is acq-rel, as they should be. However, none of this makes a difference on x86, where RMW atomic operations all lower to the same instructions (like LOCK ADD). Loads also do not care about memory order, and stores sometimes do, and that was what my comment was about.
tialaramex 3 days ago:

This thread is wondering why the MP shared_ptr is slower than the SP shared_ptr, or, in Rust where this distinction isn't compiler magic, why Arc is slower than Rc. Hence the sequentially consistent ordering doesn't come into the picture.

And yeah, no, you don't get the sequentially consistent ordering for free on x86. x86 has total store order, but firstly that's not quite enough to deliver sequentially consistent semantics in the machine on its own, and secondly the compiler has barriers during optimisation and those are impacted too. So if you insist on this ordering (which, to be clear again, you almost never should; the fact it's the default in C++ is IMO a mistake) it does make a difference on x86.
loeg 5 days ago:

> when you do that analysis your answer is going to be acquire-release and only for some edge cases, in many places the relaxed atomic ordering is fine.

Why would shared_ptr refcounting need anything other than relaxed? Acq/rel are for implementing multi-variable atomic protocols, and shared_ptr refcounting simply doesn't have other variables.
dataflow 5 days ago:

It's because you're not solely managing the refcount here. Other memory locations depend on the refcount, given that you're also deleting the object after the refcount reaches zero. That means all writes to the object need to have completed by that point, and the deleting thread needs to observe them. Otherwise you might destroy an object while it's in an invalid state, or you might release the memory while another thread is accessing it.
tialaramex 5 days ago:

It's extremely difficult to see in real C++ standard library source because of the layers of obfuscating compiler workaround hacks, but they are in fact using acquire-release ordering, and only for decrementing the reference count. Does that help you figure out why we want acquire-release, or do you need more help?
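A sketch of the decrement path being described, in the shape of the well-known Boost.Atomic reference-counting example (the `ControlBlock` type and method names are illustrative, not the actual libstdc++ or libc++ source):

```cpp
#include <atomic>

struct ControlBlock {
    std::atomic<long> refcount{1};

    void add_ref() {
        // Incrementing needs atomicity but no ordering: taking another
        // reference neither publishes nor consumes any other data.
        refcount.fetch_add(1, std::memory_order_relaxed);
    }

    void release() {
        // The release decrement ensures this thread's earlier writes to the
        // managed object happen-before the count reaching zero; the acquire
        // fence ensures the deleting thread sees all of those writes before
        // running the destructor. A fetch_sub with acq_rel ordering is an
        // equivalent alternative.
        if (refcount.fetch_sub(1, std::memory_order_release) == 1) {
            std::atomic_thread_fence(std::memory_order_acquire);
            delete this;  // safe only for heap-allocated control blocks
        }
    }
};
```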
Kranar 5 days ago:

You need it to avoid a use-after-free.