petra | 17 hours ago
Can an LLM detect a lack of precision and point it out to you?
TheOtherHobbes | 16 hours ago
Sometimes, yes. Reliably, no. LLMs don't have enough of a model of the world to understand anything.

There was a paper floating around recently about someone training an ML system on orbital dynamics. The result was a system that could calculate orbits correctly, but it completely failed to extract the underlying - simple - math. Instead it basically frankensteined together its own system of epicycles, which solved a very narrow range of problems but lacked any generality.

Coding has the same problem. Sometimes you get lucky, sometimes you don't. And if you strap on an emulator and a test rig and let the machine flail around inside it, sometimes working code falls out. But there's no abstracted model of software development as a process in there, either in theory or in practice. And no understanding of vague goals with constraints and requirements that can be inferred creatively from outside the training data.
staunton | 16 hours ago
An LLM can even ignore a lack of precision and simply guess what you wanted, usually correctly, unless what you want is very unusual.
TeMPOraL | 16 hours ago
It can! Though you might need to ask for it explicitly; otherwise it may take what it thinks you mean and run off with it, and you'll discover the lack of precision only later, when the LLM gets confused or the result is nothing like what you actually expected.
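As a minimal sketch of what "asking for it" might look like (assuming the OpenAI Python SDK; the model name, prompt wording, and example request are illustrative choices, not anything from this thread):

    # Hedged sketch: explicitly ask the model to surface ambiguities before
    # it writes any code. Assumes the OpenAI Python SDK (>= 1.0); the model
    # name, prompt wording, and example request are illustrative.
    from openai import OpenAI

    client = OpenAI()

    vague_request = "Write a script that syncs my files to the server every so often."

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model
        messages=[
            {
                "role": "system",
                "content": (
                    "Before producing any code, list every ambiguity or missing "
                    "detail in the user's request as numbered questions. "
                    "Do not guess at answers and do not write code yet."
                ),
            },
            {"role": "user", "content": vague_request},
        ],
    )

    # Prints the model's clarifying questions, e.g. which files, which server,
    # how often, and what "sync" should do on conflicts.
    print(response.choices[0].message.content)

Without an instruction like this, the model will usually just pick plausible answers to those questions on its own, which is exactly where the hidden imprecision comes from.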