▲ hamiecod | 3 days ago
I recently started reading "Specifying Systems: The TLA+ Language and Tools for Hardware and Software Engineers" by Leslie Lamport[0]. It is a good starting point for learning how to specify systems with mathematical rigor. Since specifications are the new code in the age of AI, learning how to specify systems mathematically is a huge advantage, because English is extremely ambiguous.

[0]: https://lamport.azurewebsites.net/tla/book-02-08-08.pdf
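For a taste of what such a mathematical specification looks like, the book opens with a spec of a simple hour clock, along these lines (reproduced from memory, so treat it as a sketch rather than a verbatim quote):

```tla
---------------------- MODULE HourClock ----------------------
EXTENDS Naturals
VARIABLE hr

\* Initial condition: the clock reads some hour from 1 to 12.
HCini == hr \in (1 .. 12)

\* Next-state relation: the hour advances, wrapping from 12 to 1.
HCnxt == hr' = IF hr # 12 THEN hr + 1 ELSE 1

\* The full spec: start in a valid state, then only ever take HCnxt steps
\* (or steps that leave hr unchanged).
HC == HCini /\ [][HCnxt]_hr
==============================================================
```

The whole behavior of the system is pinned down in three short definitions, with none of the ambiguity an English description would carry.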
▲ gmfawcett | 2 days ago | parent
I've had the same intuition, and mixed results in this area, although I'm certainly no expert.

Recently I wanted to formalize a model of a runbook for a tricky system migration, to help me reason through some alternatives. I ended up writing a TLA+ spec before generating some visualizations, and also some possible prototypes in MiniZinc. All three (spec, visualizations, CP models) were vibe-coded in different sessions, in about that order, though most of my personal effort went into the spec.

While the later AIs quickly understood many aspects of the spec, they struggled with certain constraints whose intuitive meaning was concealed behind too much math. Matters I had assumed were completely settled, because a precise constraint existed in the spec, had to be re-explained to the AI after implementation errors were found. Eventually I added more spec comments explaining the motivation for some of the constraints, which helped somewhat.

(While it's an untested idea, my next step was going to be to capture traces of the TLA+ spec being tested against some toy models, and to include those traces as inputs when producing the implementations, e.g. to construct unit tests. Reasoning about traces seemed to be a fairly strong suit for the AI helper.)

In hindsight, I feel I set my sights a little too high. A human reader would have had similar comprehension problems with my spec, and they probably would have taken longer to prime themselves than the AI did. Perhaps my takeaway is that TLA+ is a great way to model certain systems mathematically, because precision in meaning is a great quality; but you still have to show sympathy to your reader.
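The trace-to-unit-test idea might look roughly like the sketch below. Everything here is invented for illustration (the `Migration` class, the trace format as a list of state dicts); a real trace would come out of the TLC model checker in its own format and would need parsing first.

```python
# Hypothetical sketch: replaying a model-checker behavior trace as a unit test.
# Each trace entry is the state after one step of the spec; here a single
# variable `phase` tracks a toy migration state machine.

TRACE = [
    {"phase": "idle"},
    {"phase": "draining"},
    {"phase": "migrating"},
    {"phase": "done"},
]

class Migration:
    """Toy implementation whose steps should mirror the spec's next-state relation."""

    def __init__(self):
        self.phase = "idle"

    def step(self):
        order = ["idle", "draining", "migrating", "done"]
        i = order.index(self.phase)
        if i < len(order) - 1:
            self.phase = order[i + 1]

def replay(trace):
    """Drive the implementation through the trace, asserting each state matches."""
    impl = Migration()
    assert {"phase": impl.phase} == trace[0], "initial state mismatch"
    for expected in trace[1:]:
        impl.step()
        assert {"phase": impl.phase} == expected, f"diverged at {expected}"
    return True

replay(TRACE)
```

The appeal is that the spec, not the implementer, is the source of truth for the expected states, so a divergence points at exactly the kind of misunderstood constraint described above.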