agentultra 7 days ago

I really like TLA+ and am glad it exists.

But every time I suggest to a team I’m working with that we should try modelling the problem with it… people are less than enthusiastic.

These are all great tips. Especially starting with the simplest, most abstract model and extending it with refinement if you need to.

That basically means that you write your first spec as abstractly and simply as possible with as little detail about the actual implementation as you can.

If details matter, you import the first spec as a module and write a new spec with the added details. Refinement then amounts to an implication: if the new spec holds, the first one holds too.
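To make that concrete, here’s a minimal sketch of what I mean (module names made up; it’s essentially the clock example from Specifying Systems). The abstract spec:

```
---------------------------- MODULE HourClock ----------------------------
EXTENDS Naturals
VARIABLE hr

Init == hr \in 1..12
Next == hr' = IF hr = 12 THEN 1 ELSE hr + 1
Spec == Init /\ [][Next]_hr
===========================================================================
```

And a more detailed spec that refines it:

```
------------------------- MODULE HourMinuteClock -------------------------
EXTENDS Naturals
VARIABLES hr, min

Init == hr \in 1..12 /\ min \in 0..59
Tick ==
    /\ min' = (min + 1) % 60
    /\ hr' = IF min = 59 THEN (IF hr = 12 THEN 1 ELSE hr + 1) ELSE hr
Spec == Init /\ [][Tick]_<<hr, min>>

HC == INSTANCE HourClock                 \* reuse the abstract spec as a module
THEOREM Refinement == Spec => HC!Spec    \* the "implication": detailed implies abstract
===========================================================================
```

In practice you’d usually define something like `AbstractSpec == HC!Spec` and have TLC check it as a temporal property of the detailed spec; TLAPS is what you’d reach for if you want to actually prove the theorem.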

Anyway, I was kind of mad when I learned TLA+ that it and systems like it aren’t part of the standard curriculum and common practice.

“We built this distributed database that guarantees quorum if you have at least 3 nodes!” someone will excitedly claim. And when you ask how, you get an informal paper and 100k lines of C++ code. Wouldn’t it be nicer if you just had 1k lines of plain old pure maths?

And pure maths that can be exhaustively checked by the computer? It’s not as perfect as a proof but it’s great for a lot of practical applications.

For me it’s nicer I guess.

pron 7 days ago | parent | next [-]

I agree with everything you've said, and about refinements in particular. TLA+ newcomers often think of refinement as an advanced tool, but it's one of the most pragmatic and effective ways to specify.

But I'd like to add some nuance to this:

> And pure maths that can be exhaustively checked by the computer? It’s not as perfect as a proof but it’s great for a lot of practical applications.

1. A run of a model checker is exactly as perfect as a proof for the checked instance (which, indeed, is a weaker theorem than one that extends to an unbounded set of instances). In fact, it is a proof of the theorem when applied to the instance.

2. A proof is also not perfect for real software. A proof can perfectly convince us of the correctness of an algorithm, which is a mathematical construct, but a software system is not an algorithm; it's ultimately a physical system of silicon and metal elements that carry charge, and a physical system, unlike an algorithm, cannot be proven correct for the simple reason that it's not a mathematical object. Virtually all correctness proofs assume that the hardware behaves correctly, but it only behaves correctly with high probability. My point is only that, unlike for algorithms or abstract theorems, there can be no perfect correctness in physical systems. Increasing the confidence in the algorithm beyond the confidence in the hardware (or human operators) may not actually increase the confidence in the system.

agentultra 7 days ago | parent [-]

Absolutely, I was waving my hands a lot there. Thanks for adding that.

And I’d also add the small scope hypothesis from Software Abstractions: if there is an error in your spec in a large instance, it is likely also going to appear in a small one. Most errors can be caught by small instances of a model.

I only mention it because the common retort, when people find out that the model checker only exhaustively checks a limited instance, is that software is complex and you can’t model everything.

It’s true. But we don’t need models that account for the gravitational waves in our local region of space to get value from checking them. And neither do most programmers when they use unit tests or type checkers.
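Concretely, “check a small instance” just means giving TLC small constants and, if the state space is unbounded, a state constraint. A toy sketch (names made up):

```
------------------------------ MODULE SmallModel ------------------------------
EXTENDS Naturals
VARIABLE pending        \* e.g. messages in flight; unbounded in the real system

Init    == pending = 0
Send    == pending' = pending + 1
Deliver == pending > 0 /\ pending' = pending - 1
Next    == Send \/ Deliver
Spec    == Init /\ [][Next]_pending

TypeOK == pending \in Nat        \* the invariant TLC checks exhaustively

\* Used as a state constraint in the TLC model: only explore states with a
\* handful of messages in flight, which keeps the checked instance small.
SmallInstance == pending <= 4
================================================================================
```

You put SmallInstance under the model’s state constraint (CONSTRAINT SmallInstance in the .cfg) and check TypeOK; the bet is that most real bugs would already show up within that small bound.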

And proofs… I think I might disagree, but I don’t have the professional experience to debate it fully. I do know from a colleague who does proof engineering on a real system written in C++ that it’s possible, however… and likely more difficult than model checking.

goostavos 7 days ago | parent | prev [-]

I find the same. Even those who are interested in it in theory hit a pretty unforgiving wall when they try to put it into practice. Learning TLA+ is way harder than learning another programming language. I failed repeatedly while trying to "program" via PlusCal. To use TLA you have to (re)learn some high-school math and you have to learn to use that math to think abstractly. It takes time and a lot (a lot!) of effort.

Now is a great time to dive in, though. LLMs take a lot of the syntactical pain out of the learning experience. Hallucinations are annoying, but you can formally prove they're wrong with the model checker ^_^

I think it's going to be a "learn these tools or fall behind" thing in the age of AI.

hwayne 7 days ago | parent [-]

I think the "high school math" slogan is untrue and ultimately scares people away from TLA+, by making it sound like it's their fault for not understanding a tough tool. I don't think you could show an AP calculus student the formula `<>[](ENABLED <<A>>_v) => []<><<A>>_v` and have them immediately go "ah yes, I understand how that's only weak fairness".

goostavos 5 days ago | parent [-]

Oh, hey -- you're that guy. I learned a lot of what I know about TLA from your writings ^_^

Consider my behavior changed. I thought the "high school math" was an encouraging way to sell it (i.e. "if you can get past the syntax and new way of thinking, the 'math' is ultimately straightforward"), but I can see your point, and how the perception would be poor when they hit that initial wall.