sarchertech 3 days ago:
Dev teams are much less non-deterministic than LLMs. If you ask the same dev team to build the same product multiple times, they'll eventually converge on producing the same product. The 2nd time it will likely be pretty different, because they'll use what they learned to build it better. The 3rd time will be better still, but each time after that it will be essentially the same product. An LLM will never converge. It certainly won't learn from each subsequent iteration.

Human devs are also a lot more resilient to slight changes in requirements and wording. A slight change in language that wouldn't affect a human at all will cause an LLM to produce completely different output.
ako 3 days ago:
An LLM within the right context/environment can also converge: just like with humans, you need to provide guidelines, rules, and protocols that instruct how to implement something. Just like with humans, I've used the approach you describe: generate something, iterate until it works the way you want, then ask it to document the insights, patterns, and rules, and for the next project instruct it to follow the rules you persisted. The result will be more or less the same project.

Humans are very non-deterministic: if you ask me to solve a problem today, the solution will be different from last week, last year, or 10 years ago. We've learned to deal with that, and we can also control the non-determinism of LLMs.

And humans are also very prone to hallucinations: remember the 3000+ gods we've created to explain the world, or the many religions that are completely incompatible with each other? Even if some are true, most of them must be hallucinations just by being incompatible with the others.
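On controlling the non-determinism of LLMs: the randomness lives in the decoding step, so pinning the sampling parameters pins the output. A minimal toy sketch (no real model API; the `sample_token` function and the logits are invented for illustration) showing that temperature 0 collapses sampling to a deterministic argmax, and that a fixed RNG seed makes even non-zero-temperature sampling reproducible:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Pick a token index from raw logits.

    temperature -> 0 means greedy argmax (fully deterministic);
    higher temperatures flatten the distribution before sampling.
    """
    rng = rng or random.Random()
    if temperature <= 1e-6:
        # Greedy decoding: always the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled logits (shifted for stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling driven by the (seedable) RNG.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(logits) - 1

logits = [2.0, 1.0, 0.5]

# Temperature 0: ten different seeds all produce the same (argmax) token.
greedy = {sample_token(logits, temperature=0.0, rng=random.Random(s))
          for s in range(10)}

# Temperature 1 with a fixed seed: two runs give identical samples.
a = sample_token(logits, temperature=1.0, rng=random.Random(42))
b = sample_token(logits, temperature=1.0, rng=random.Random(42))
```

Real inference stacks add complications (batching and floating-point reduction order can still leak non-determinism), but this is the basic knob people mean when they say LLM output can be made repeatable.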