Dylan16807 4 hours ago

> as long as all threads are only doing atomic_max operations on the memory location, which an implementation can't assume.

What assumes that?

If your early read gives you a higher number, quitting out immediately is the same as doing the max that same nanosecond. You avoid setting a variable to the same value it already is. Doing or not doing that write shouldn't affect other atomics users, should it?

In general, I should be able to add or remove as many atomic(x=x) operations as I want without changing the result, right?

And if your early read is lower, then you just do the max, and having an extra read is harmless.

The only case I can think of that could go wrong is the read (and elided max) happening too early relative to accesses to other variables, but we're assuming relaxed memory order here, so that's explicitly acceptable.

gpderetta 3 hours ago | parent

Yes, you are probably right: a load that finds a larger value is equivalent to a max. Since the max wouldn't store any value in this case, it also wouldn't introduce any synchronization edge.

A load that finds a smaller value is trickier to analyze, but I think you are free to simply ignore it and proceed with the atomic max. An underlying LL/SC loop implementing a max operation might spuriously fail anyway.

edit: here is another argument in favour: if your only atomic RMW is a CAS, you would implement X.atomic_max(new) as:

  1: expected <- X
  2: if new <= expected: done
  3: else if X.cas(expected, new): done
     else goto 2 # expected implicitly refreshed by the failed cas
A CAS loop thus naturally implements the same optimization (unless it starts with a random `expected`), so the race is benign.