| ▲ | pvillano a day ago |
| Is anyone working on applying these techniques to formal verification of software? My limited understanding of Rust is that it applies a fixed set of rules to guarantee memory safety. The rules are somewhat simple and limiting, for ease of understanding and implementation, but also because of undecidability. Programmers run into situations where they know their code won't cause memory errors, but it doesn't follow the rules. Wouldn't it be cool if something like Aristotle were integrated into the compiler? Any code for which a proof of correctness could be written would pass/compile, without having to add more and more rules. |
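A minimal sketch of the situation described above (the specific example is mine, not the commenter's): a swap through two mutable indices is memory-safe whenever the indices differ, but the borrow checker's fixed rules reject it; `split_at_mut` is the rule-following workaround that makes the disjointness structurally evident.

```rust
fn main() {
    let mut v = vec![1, 2, 3, 4];

    // Memory-safe when the indices differ, but rejected by the fixed rules:
    //   let a = &mut v[0];
    //   let b = &mut v[3]; // error[E0499]: cannot borrow `v` as mutable twice
    //   std::mem::swap(a, b);

    // The accepted version encodes the disjointness of the two borrows
    // structurally, by splitting the slice first:
    let (head, tail) = v.split_at_mut(1);
    std::mem::swap(&mut head[0], &mut tail[2]);

    assert_eq!(v, vec![4, 2, 3, 1]);
    println!("{:?}", v);
}
```

A prover integrated into the compiler could, in principle, accept the commented-out version directly by proving the indices distinct, instead of requiring the rewrite.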
|
| ▲ | maxwells-daemon a day ago | parent | next [-] |
| We are! We very recently announced some results on formally proving the correctness of programs: https://harmonic.fun/news#blog-post-verina-bench-sota Formal methods are cool because, by contrast to tools like the borrow checker, you can prove some very "nonlocal" properties: this system does not deadlock, or it makes progress at least every N steps, etc. |
| |
| ▲ | vatsachak 8 hours ago | parent | next [-] | | Does Aristotle produce TLA+ output? For example can it read rust async code and prove that there are no deadlocks in TLA+, or some equivalent in Lean? | | |
| ▲ | zozbot234 8 hours ago | parent [-] | | TLA+ is generally used to specify a "toy model" of some complex distributed system. It's not intended for end-to-end proof, for that you'd just use Coq/Rocq or Lean itself. Lean is certainly expressive enough, but you'll have to translate the time and non-determinism modalities of TLA+ as part of the Lean development. |
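One way to translate TLA+'s modalities into Lean, as the comment above suggests, is to make non-determinism a step *relation* and time an explicit infinite trace. This is a hypothetical sketch (the names `Spec`, `Behavior`, `DeadlockFree` are mine, not from any library):

```lean
-- A TLA+-style spec: a nondeterministic transition system.
structure Spec (σ : Type) where
  Init : σ → Prop
  Step : σ → σ → Prop   -- a relation, not a function: nondeterminism

-- A behavior is an infinite trace satisfying Init and Step;
-- this is the explicit encoding of TLA+'s temporal structure.
def Behavior {σ : Type} (s : Spec σ) (tr : Nat → σ) : Prop :=
  s.Init (tr 0) ∧ ∀ n, s.Step (tr n) (tr (n + 1))

-- Deadlock-freedom: every reachable state has a successor.
def DeadlockFree {σ : Type} (s : Spec σ) : Prop :=
  ∀ st, (∃ tr n, Behavior s tr ∧ tr n = st) → ∃ st', s.Step st st'
```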
| |
| ▲ | NetMageSCW 7 hours ago | parent | prev [-] | | How is “this system doesn’t deadlock” not the same as the halting problem? |
|
|
| ▲ | teiferer 19 hours ago | parent | prev | next [-] |
An issue with this approach is that it may not be robust. That is, you could run into a case where a minor modification of your program is suddenly not provable anymore, even though it is still correct. The heuristic (AI or otherwise) necessarily has limits, and if you are close to the "edge" of its capabilities then a minor change could push it across. If the proof is rooted in the understanding of a programmer who can give proof hints to the prover, then any modification of the program can be accompanied by a modification of the hints, still allowing automatic proofs. But if the human has no clue, then the automatic system can get stuck without the human having a chance to help it along. |
| |
| ▲ | amelius 13 hours ago | parent [-] | | The same is true for optimization. One small change and the compiler's optimizer no longer knows how to optimize the code, and your code is now slow. And there is no way for a programmer to fix it except by rolling back their changes or by inspecting the assembly output. |
|
|
| ▲ | zozbot234 a day ago | parent | prev [-] |
| Formal verification of program correctness is also (for obvious reasons) key to unlocking AI-driven synthesis (i.e. 'vibe' coding) of "correct" programs that will verifiably meet the given spec. |
| |
| ▲ | eru a day ago | parent | next [-] | | Not all aspects of a spec can be formally encoded. But even half-way houses are good. Eg you can give the vague spec 'build me a todo list app', but you can still formally prove that everything your app does finishes, or even that it finishes in reasonable time. | | |
| ▲ | f1shy 21 hours ago | parent [-] | | Yes. For sure we will never be able to 100% automate the whole SWE process. As you say, the first input is a human wish, and there comes the joke of the genie that always screws up the wishes by leaving out something obvious, because it wasn't explicitly specified. Also I think at some point the halting problem will make some programs impossible to test. But it would be so great: programming in a loose syntax, but with more safety than Rust and Ada together | | |
| ▲ | eru 27 minutes ago | parent [-] | | > Also I think at some point the halting problem will make some programs impossible to test. No, not at all. The halting problem isn't much of a problem here. To elaborate: yes, it's pretty much impossible to decide whether an arbitrary programme will halt. But we aren't dealing with arbitrary programmes; you carefully have your agent craft programmes that are easy to prove correct. There are languages available today whose type systems only let you write terminating programmes. See eg https://news.ycombinator.com/item?id=32102203 the Dhall language. Or Agda or Lean itself (unless you specifically opt out via the 'partial' keyword; but it's trivial to check whether someone used 'partial'). If your agent writes a programme that's not easy to prove terminating, you don't try harder to prove it. You just flag that as an error and have the agent try again. Just like as a human code reviewer you reject Pull Requests that are too complicated to understand: you don't even bother figuring out whether they are technically correct or not. |
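The distinction above can be sketched in Lean itself (the function names are mine, chosen for illustration): structurally recursive definitions pass the termination checker with no annotation, while general recursion must be opted into with `partial`, which is exactly what makes it trivially detectable.

```lean
-- Structural recursion on `n`: the termination checker accepts this
-- as total, so it typechecks without any annotation.
def sumTo : Nat → Nat
  | 0 => 0
  | n + 1 => (n + 1) + sumTo n

-- General (possibly non-terminating) recursion must be marked `partial`;
-- scanning for that keyword is the trivial check described above.
partial def spin (n : Nat) : Nat := spin (n + 1)
```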
|
| |
| ▲ | tomjen3 17 hours ago | parent | prev [-] | | It will certainly help - but it's an extremely high bar. Almost all formal verification of software today is "does this pass the typechecker?". Now this captures some errors, but it doesn't really capture high-level ones (whether a program is guaranteed not to deadlock is a hard one), and it doesn't capture the one that is important for business purposes (does this do what the customer wants). That requirement is more important than correctness (witness all the software that is described as "crap" but is nonetheless widely used). I don't think this is a required key to unlocking vibe coding. That seems to be easy: does this provide business value? And there the answer seems roughly to be "yes". |
|