jasonwatkinspdx 3 days ago

No, ULID has a "monotonic" feature: if it detects the same millisecond timestamp in back-to-back calls, it just increments the 80-bit "random" portion. This means it has convoying behavior. If two machines are generating ids independently and happen to choose initial random positions near each other, the probability of collision is much higher than the basic birthday bound.
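For the record, here's a rough sketch of that rule (simplified, not the reference implementation, and the function name is mine):

    import os, time

    _last_ts = None
    _last_rand = None

    def monotonic_ulid_parts():
        # ULID "monotonic" rule, simplified: same millisecond as the previous
        # call -> increment the previous randomness by 1 instead of drawing a
        # fresh value. (The spec treats overflow as an error; ignored here.)
        global _last_ts, _last_rand
        ts = int(time.time() * 1000)
        if ts == _last_ts:
            _last_rand = (_last_rand + 1) % (1 << 80)
        else:
            _last_rand = int.from_bytes(os.urandom(10), "big")  # fresh 80-bit value
        _last_ts = ts
        return ts, _last_rand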

I think this "sort of monotonic but not really" is the worst of both worlds, to be honest. It tempts you to treat it like an invariant when it isn't.

If you want monotonicity with independent generation, just use a composite key that's a Lamport clock plus a random nonce. Or, if you want to be even snazzier, use Hybrid Logical Clocks or similar.
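A minimal sketch of that composite-key idea, assuming a plain Lamport counter plus an 80-bit nonce (the class and method names are mine, not any library's API):

    import os

    class LamportIdGen:
        def __init__(self):
            self.counter = 0  # Lamport clock: bumped on every local event

        def observe(self, remote_counter):
            # Merge a counter seen from another node so later local ids sort after it.
            self.counter = max(self.counter, remote_counter)

        def next_id(self):
            self.counter += 1
            nonce = int.from_bytes(os.urandom(10), "big")  # fresh 80-bit nonce every call
            return (self.counter, nonce)  # sort by counter first; the nonce only breaks ties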

Dylan16807 3 days ago | parent | next [-]

> If two machines are generating ids independently and happen to choose initial random positions near each other, the probability of collision is much higher than the basic birthday bound.

But the chance of the initial random positions being near each other is very very low.

If you pick a billion random numbers in an 80 bit space, the chance you have a collision is one in a million. (2^80 / (2^30)^2)

If you pick a thousand random starting points and generate a million sequential numbers each, the chance your starting points are sufficiently close to each other to cause an overlap is one in a trillion. ((2^80 / 2^20) / (2^10)^2)

In that one in a trillion case, you'll likely end up with half a million collisions, which might matter to you. But if you care about 0 collisions versus 1+ collisions, pick the monotonic version.
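A rough back-of-the-envelope check of both estimates (birthday approximation, constant factors dropped):

    # Collision chance among n uniform picks from a space of size N is
    # roughly n**2 / (2*N); the factor of 2 is dropped here, as above.

    N = 2**80

    # a billion fully random 80-bit values
    n = 2**30
    print(n**2 / N)        # ~1e-6, about one in a million

    # a thousand random starting points, each followed by 2**20 sequential values:
    # two runs can only overlap if their starts land in the same slot of width 2**20
    slots = N // 2**20
    k = 2**10
    print(k**2 / slots)    # ~1e-12, about one in a trillion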

jasonwatkinspdx 3 days ago | parent [-]

Right, but the point is there's no reason to accept this limitation. Likewise, why hardcode millisecond-scale timestamps in a world where billions of inserts per second are practical on a single server?

Or if what you want is monotonic distributed timestamps, again, HLC is how you do that properly.

So you're embracing these weird limitations for no real benefit.

And as you can see in the rest of this comment thread, a lot of people simply don't know about this behavior and assume the 80-bit portion is always random. Which is my whole point: a not-really-an-invariant invariant is fundamentally a bad way to go.

Edit: just to reply to the below since I can't do so directly: I understand the arithmetic here. What I'm saying is there's zero reason to choose this weird scheme over something that just has the full birthday bound and that you never have to think about again.

As another comment points out: just consider neighbor-guessing attacks. This 80-bit random-but-not-random field is a footgun for anyone who just assumes they can treat it as truly random.

listenallyall 3 days ago | parent | next [-]

It's not a "limitation", he's saying there is a much, much, much lower chance of having any collisions with ULIDs: one in a trillion rather than one in a million.

Dylan16807 3 days ago | parent | prev [-]

> why hardcode millisecond scale timestamps in a world where billions of inserts per second are practical on a single server?

> Or if what you want is monotonic distributed timestamps, again, HLC is how you do that properly.

Why not just 64-bit timestamps stapled to a random number? You can be collision-proof and monotonic without doing anything fancy.
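Something like this sketch is all I mean (the name and field sizes are mine, not any particular spec):

    import os, time

    def ts_plus_random_id():
        # 64-bit nanosecond timestamp followed by 64 fresh random bits: ids from a
        # single process sort by time, and the random half keeps the plain
        # birthday bound across independent generators.
        ts = time.time_ns() & ((1 << 64) - 1)
        rand = int.from_bytes(os.urandom(8), "big")
        return (ts << 64) | rand  # 128-bit value, comparable as an integer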

> this weird scheme vs something that's just the full birthday bound and you never think about it again

But the weird scheme gives you better odds than the birthday bound.

listenallyall 3 days ago | parent | prev [-]

I don't think this holds up. Define "near each other" in an 80-bit random space. Further, the likelihood of a potential conflict is offset by the fact that you have far fewer "initial random positions" instead of every single element defining its own random position. And the extra random bits (over UUID7) reduce conflict possibilities by orders of magnitude.

I concede I'm no mathematician and I could be wrong here, but your analysis feels similar to assuming 10-11-12-13-14-15 is less likely to be a winning lottery ticket because the odds against consecutive numbers are so massive.

jasonwatkinspdx 3 days ago | parent [-]

No, the calculation is straightforward, and I'm not making the fallacious assumption you attribute to me at the end about a magical lottery-ticket number.

My basic point is that the probability of collision is lower than the birthday bound, but there's no need for this, and as comments in this thread make clear, people don't even realize this limitation exists in the specification.

listenallyall 3 days ago | parent [-]

> the calculation is straightforward

Ok then, make it easy - your requirement is to independently pick 4 numbers from the range 0 to 9, without resulting in any duplicates. Which is more likely to be successful:

- pick 4 random digits independently

- pick a random digit, which will automatically be followed by the next consecutive digit as pick #2 (i.e. if you pick 5, then 6 will automatically be your second digit; if you pick 9, 0 will be your second digit). Then pick once more on the same terms.

The math here is easy: in scenario 1 you have a 0.9 x 0.8 x 0.7 = 0.504 likelihood of success; in scenario 2 it's simply 0.7.
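A quick simulation, if anyone wants to sanity-check those numbers:

    import random

    def trial_independent():
        picks = [random.randrange(10) for _ in range(4)]
        return len(set(picks)) == 4

    def trial_consecutive():
        a = random.randrange(10)
        b = random.randrange(10)
        picks = [a, (a + 1) % 10, b, (b + 1) % 10]
        return len(set(picks)) == 4

    trials = 1_000_000
    print(sum(trial_independent() for _ in range(trials)) / trials)  # ~0.504
    print(sum(trial_consecutive() for _ in range(trials)) / trials)  # ~0.70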