ako 3 days ago |
An LLM within the right context/environment can also converge: just like with humans, you need to provide guidelines, rules, and protocols that instruct it how to implement something. Just like with humans, I’ve used the approach you describe: generate something until it works the way you want, then ask the model to document the insights, patterns, and rules, and for the next project instruct it to follow the rules you persisted. That will result in more or less the same project.

Humans are very non-deterministic too: if you ask me to solve a problem today, the solution will be different from last week, last year, or 10 years ago. We’ve learnt to deal with that, and we can also control the non-determinism of LLMs.

And humans are also very prone to hallucinations: remember the 3000+ gods we invented to explain the world, or the many religions that are mutually incompatible? Even if some of them are true, most of them must be hallucinations simply by virtue of contradicting the others.
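A minimal sketch of the "persist rules, then reuse them" workflow described above, assuming an OpenAI-compatible client; the RULES.md file and the model name are illustrative assumptions, not anything from the original comment:

```python
# Sketch of the workflow: rules documented at the end of the previous
# project are fed back in as constraints for the next one.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical file: the insights/patterns/rules the model was asked to
# document after the previous project converged on something that worked.
rules = Path("RULES.md").read_text()


def generate(task: str) -> str:
    """Ask the model to implement a task while following the persisted rules."""
    response = client.chat.completions.create(
        model="gpt-4o",   # illustrative model name
        temperature=0,    # reduces (but does not eliminate) run-to-run variation
        messages=[
            {"role": "system",
             "content": "Follow these project rules exactly:\n" + rules},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(generate("Implement the user-registration endpoint as specified."))
```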
sarchertech 3 days ago | parent |
That only works with very small projects, to the point where the specification document is a very large percentage of the total code.

If you are very experienced, you won’t solve the same problem differently from day to day. You probably would with a 10-year gap, but you won’t ever be running the same model 10 years out (even if the technology matures), so there’s no point in that comparison. Solving the same problem under the same constraints in radically different ways from one day to the next comes from inexperience (unless you’re exploring and doing it on purpose).

Calling what LLMs do hallucinations and comparing it to human mythology is stretching the analogy into absurdity.