readthenotes1, 3 hours ago:
I recall an experiment in proving software correct from the 1990s that found more errors in the final proof annotations than in the software being proved correct. Beyond that, I foresee two other obstacles, one of which may disappear:

1. Actually knowing what the software is supposed to do is hard. Understanding what the users actually want to do and what the customers are paying to do -- which aren't necessarily the same thing -- is complex.

2. The quality of the work of many software developers is abysmal, and I don't know why they would be better at a truth language than they are at Java or some other language.

Objection 2 may disappear, replaced by AI systems with the attention to do what needs to be done. So perhaps things will change enough to make it worthwhile...
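(A minimal sketch of the spec-error problem above, in Python rather than a real proof language: the names `spec_max` and `bogus_max` are hypothetical. An under-specified contract can be satisfied by a clearly wrong implementation, so the error lives in the annotation, not the code it "verifies".)

```python
def spec_max(a, b, result):
    # Intended spec for max(a, b) -- but it forgets to require that
    # the result is one of the two inputs.
    return result >= a and result >= b

def bogus_max(a, b):
    # Wrong implementation that nevertheless satisfies spec_max.
    return a + b if a > 0 and b > 0 else max(a, b)

assert spec_max(2, 3, bogus_max(2, 3))   # the spec is satisfied...
assert bogus_max(2, 3) != max(2, 3)      # ...yet the "verified" code is wrong
```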
Groxx, an hour ago:
I think the hope for 2 is that those programmers would be forced into inaction by the language's safety, rather than being allowed to cause problems. I don't really think that works either, because there are endless ways to add complication even if you can't worsen behavior (assuming that's even possible). At best they might be caught eventually... but anyone who has worked at a large tech company knows at least a few people who are somehow still employed years later, despite constant ineptitude. Play The Game well enough and it's probably always possible.
jsmorph, 3 hours ago:
Re 1: Deciding on and guiding toward the desirable theorems for general-purpose programs has been a major challenge for us. Proofs for their own sake (bad?) vs. glorious general results (good but hard?). Actual human guidance can be critical there, at least for now.
cayley_graph, 3 hours ago:
Mind linking the experiment? Sounds interesting.