virgilp 5 hours ago
Actually, no. We always needed good checks - that's why you have techniques like automated canary analysis, extensive testing, and coverage checking - these are forms of "executable oracles". If you wanted to be able to do continuous deployment, you had to be very thorough in your validation. LLMs just take this to the extreme. You can no longer rely on human code reviews (well, you can, but then you give away all the LLM advantages), so if you take "human judgement" out *of validation*[1], you have to resort to very sophisticated automated validation. This is it - it's not about "inventing a new language", it's about being much more thorough (and innovative, and efficient) in the validation process.

[1] Never out of design or specification - you shouldn't outsource that to AI. I don't think we're close to an AI that can do that even moderately effectively without human help.
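(For what the comment calls an "executable oracle", here's a toy sketch of an automated canary gate in Python - the function name, thresholds, and inputs are all made up for illustration; real systems like Spinnaker's Kayenta run proper statistical tests over many metrics.)

```python
def canary_passes(baseline_errors, baseline_total, canary_errors, canary_total,
                  max_relative_increase=0.5):
    """Toy automated-canary-analysis gate.

    Promote the new version only if the canary's observed error rate does
    not exceed the baseline's error rate by more than `max_relative_increase`
    (here 50%). No human judgement involved - the check itself is the oracle.
    """
    baseline_rate = baseline_errors / baseline_total
    canary_rate = canary_errors / canary_total
    return canary_rate <= baseline_rate * (1 + max_relative_increase)

# Healthy canary: 1.1% errors vs 1.0% baseline -> promote
print(canary_passes(100, 10000, 11, 1000))   # True
# Regressed canary: 3.0% errors vs 1.0% baseline -> roll back
print(canary_passes(100, 10000, 30, 1000))   # False
```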
whattheheckheck 20 minutes ago
LLM boys discover the halting problem!
nitwit005 4 hours ago
If the LLM generates code exactly matching a specification, the specification becomes a conventional programming language. The LLM is just transforming from one language to another.