hamiecod 3 days ago

I recently started reading "Specifying Systems: The TLA+ Language and Tools for Hardware and Software Engineers" by Leslie Lamport[0]. It is a good starting point for learning how to specify systems with mathematical rigor.

Since specifications are the new code in the age of AI, learning how to specify systems mathematically is a huge advantage: English is extremely ambiguous.
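To see what that precision buys, here's a toy spec (the module and variable names are my own invention, not from the book). The English sentence "increment the counter" leaves room for doubt; the TLA+ action does not:

    ---- MODULE Counter ----
    EXTENDS Naturals
    VARIABLE count
    Init == count = 0
    \* "Increment the counter" made exact: the new value is the old
    \* value plus one, and no other state is involved.
    Next == count' = count + 1
    Spec == Init /\ [][Next]_count
    ====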

[0]: https://lamport.azurewebsites.net/tla/book-02-08-08.pdf

gmfawcett 2 days ago

I've had the same intuition, though my results in this area have been mixed; I'm certainly no expert. Recently I wanted to formalize a model of a runbook for a tricky system migration, to help me reason through some alternatives. I ended up writing a TLA+ spec, then generating some visualizations, and finally some prototype constraint models in MiniZinc. All three (spec, visualizations, CP models) were vibe-coded in different sessions, roughly in that order, though most of my personal effort went into the spec.

While the AIs in later sessions quickly understood many aspects of the spec, they struggled with certain constraints whose intuitive meaning was concealed behind too much math. Matters I had assumed were completely settled, because a precise constraint existed in the spec, had to be re-explained to the AI after implementation errors were found. Eventually I added more spec comments explaining the motivation for some of the constraints, which helped somewhat. (An untested idea: my next step was going to be to capture traces of the TLA+ spec being checked against some toy models, and to include those traces as inputs when producing the implementations, e.g. to construct unit tests. Reasoning about traces seemed to be a fairly strong suit for the AI helper.)
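Concretely, the kind of motivation comment I mean looks something like this (every name here is made up for illustration; this is not from my actual spec):

    ---- MODULE MigrationInvariant ----
    EXTENDS Naturals, FiniteSets
    CONSTANT Nodes
    VARIABLE role
    \* WHY: two nodes both believing they are primary during cutover
    \* is the failure mode the whole runbook exists to prevent.
    \* The formula alone doesn't tell the reader (or the AI) that.
    AtMostOnePrimary ==
        Cardinality({n \in Nodes : role[n] = "primary"}) <= 1
    ====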

In hindsight, I feel I set my sights a little too high. A human reader would have had similar comprehension problems with my spec, and they probably would have taken longer to prime themselves than the AI did. Perhaps my takeaway is that TLA+ is a great way to model certain systems mathematically, because precision of meaning is a real asset; but you still have to show some sympathy for your reader.

hamiecod 2 days ago

Was the TLA+ spec hard to understand just for the AI reader, or for a human peer as well?

gmfawcett a day ago

That's a fair question, but I didn't have another person read it. (If I'm being honest, I think my colleagues would have looked at me funny. We don't exactly have a culture of using TLA+ to describe sysadmin processes here, and it felt a bit like a sledgehammer swatting a fly, even by my own standards! They thought the visualizations were cool, though...)

Many of my rules seemed pretty basic to me -- a lot of things like, "there is no target-node candidate whose ordinal is higher than the current target's and also satisfies the Good_Match predicate." But if I had been writing it for a human reader, rather than just to document constraints, I would have put more effort into explaining why the constraints existed in the first place (what's a Good match? Are there Poor matches? Why do we care? etc.). I didn't skip this step altogether, but I didn't invest much time in it.
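In TLA+, that rule came out roughly like this (Good_Match is real; the supporting names Candidates, ordinal, and target are stand-ins here):

    ---- MODULE TargetChoice ----
    EXTENDS Naturals
    CONSTANTS Candidates, ordinal, Good_Match(_)
    VARIABLE target
    \* No candidate both outranks the current target and is a good
    \* match, i.e. the target is maximal among good matches.
    NoBetterCandidate ==
        ~\E c \in Candidates :
            /\ ordinal[c] > ordinal[target]
            /\ Good_Match(c)
    ====

Perfectly precise, and still opaque about why anyone would want it.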

I did take care to separate "physics" rules from "strategy" rules (i.e., explicitly separating core actions and limits from objectives and approaches). That seemed to help the AI, and I'm sure it would have helped people too.
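Schematically, the separation looks like this (toy names, not my actual spec):

    ---- MODULE RunbookSketch ----
    EXTENDS Naturals
    VARIABLE load
    Init == load = 0
    \* "Physics": every move the system can make at all.
    Shift == load' \in 0..10
    \* "Strategy": which of those moves this particular plan permits.
    Gentle == load' <= load + 1
    \* Conjoin them; swap Gentle for another strategy to compare plans.
    Next == Shift /\ Gentle
    Spec == Init /\ [][Next]_load
    ====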