Someone 3 hours ago:
> That is not true, and the proof is that LLMs _can_ reliably generate (relatively small amounts of) working code from relatively terse descriptions.

LLMs can generate (relatively small amounts of) working code from relatively terse descriptions, but I don't think they can do so _reliably_. They're more reliable the shorter the code fragment and the more common the code, but they break down for complex descriptions. For example, try tweaking the description of a widely-known algorithm just a little bit and see how well the generated code follows the spec.

> Sometimes the interpolated detail is wrong (and indeterministic), so, if reliable result is to be achieved

Seems you agree they _cannot_ reliably generate (relatively small amounts of) working code from relatively terse descriptions.
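To make the "tweaked spec" point concrete, here is a hypothetical example of what I mean (the spec and function name are my own illustration, not from any benchmark): standard binary search, except the function must return the index of the _last_ element strictly less than the target, or -1 if none. A model asked for this often hands back ordinary bisect behavior, and the edge cases below are where that shows up.

```python
# Hypothetical tweaked spec: binary search, but return the index of the LAST
# element strictly less than `target` (-1 if no such element), not the index
# of the target itself. `xs` is assumed sorted ascending.
from typing import List

def last_index_less_than(xs: List[int], target: int) -> int:
    lo, hi = 0, len(xs)          # search the half-open range [lo, hi)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1         # xs[mid] qualifies; answer is at mid or to the right
        else:
            hi = mid             # xs[mid] >= target; answer must be to the left
    return lo - 1                # lo ends up as the count of elements < target

# Edge cases a generated version of the tweaked spec frequently gets wrong:
assert last_index_less_than([1, 3, 3, 5], 3) == 0   # duplicates of the target
assert last_index_less_than([1, 3, 5], 0) == -1     # nothing smaller
assert last_index_less_than([1, 3, 5], 9) == 2      # everything smaller
```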
mike_hearn 2 hours ago (reply):
Neither can humans, but the industry has decades of experience with how to instruct and guide human developer teams using specs.