readitalready 7 hours ago
That's like saying "the programmers are so terrible you have to think ahead of them so they don't make mistakes". | ||||||||||||||
avereveard 7 hours ago
Eh, good programmers are goal-oriented, while today's SOTA models still mostly need step-by-step guidance, so there's a gap. The AGENTS.md pieces that pin specific tool-call shapes or force chain-of-thought before action are coping mechanisms that age out, same lifecycle as the retry-with-different-prompt loops and chain-of-thought prompts most stacks shipped in 2024 to compensate for brittle instruction-following. Not quite there yet, but it's nice to see these files get shorter with each model release, until the basics are peeled away by the march of progress and one day only the invariants are left.
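(To make "pinning a tool-call shape" concrete: the kind of AGENTS.md guidance being described looks roughly like the sketch below. The tool names, schema, and wording are made up for illustration, not taken from any real project's file.)

    ## Tool usage (hypothetical example)
    - Before editing a file, ALWAYS call `read_file` on it first.
    - Call `run_tests` with exactly this argument shape:
        {"tool": "run_tests", "args": {"path": "tests/", "timeout_s": 120}}
    - Write a short step-by-step plan before the first tool call.

Instructions like these exist only because the model won't reliably do it unaided; as instruction-following improves, each line becomes dead weight and gets deleted.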
Rekindle8090 7 hours ago
No, it's not actually anything like that whatsoever. Programmers are objectively, infinitely more capable than LLMs. Stop anthropomorphizing algorithms.