petra 17 hours ago

Can an LLM detect a lack of precision and point it out to you?

TheOtherHobbes 16 hours ago | parent | next [-]

Sometimes, yes. Reliably, no.

LLMs don't have enough of a model of the world to understand anything. There was a paper floating around recently about how someone trained an ML system on orbital dynamics. The result was a system that could calculate orbits correctly, but it completely failed to extract the simple underlying math. Instead it basically frankensteined together its own system of epicycles, which solved a very narrow range of problems but lacked any generality.

Coding with LLMs has the same problems. Sometimes you get lucky, sometimes you don't. And if you strap on an emulator and a test rig and allow the machine to flail around inside it, sometimes working code falls out.
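
(The emulator-and-test-rig loop being described here is roughly the shape below; a minimal sketch, with the LLM call passed in as a hypothetical generate_code callable rather than any particular API:)

    import subprocess
    from typing import Callable, Optional

    def flail(prompt: str, generate_code: Callable[[str], str], max_tries: int = 5) -> Optional[str]:
        """Generate a candidate, run the tests, feed the failures back, repeat."""
        feedback = ""
        for _ in range(max_tries):
            candidate = generate_code(prompt + feedback)   # hypothetical LLM call
            with open("candidate.py", "w") as f:
                f.write(candidate)
            run = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
            if run.returncode == 0:
                return candidate                           # tests pass: working code fell out
            feedback = "\n\nThe tests failed with:\n" + run.stdout + run.stderr
        return None                                        # got unlucky this time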

But there's no abstracted model of software development as a process in there, either in theory or in practice. And no understanding of vague goals with constraints and requirements that can be inferred creatively from outside the training data.

antonvs 5 hours ago | parent [-]

> LLMs don't have enough of a model of the world to understand anything.

This is binary thinking, and it's fallacious.

For your orbital mechanics example, sure, it's difficult for LLMs to develop good models of the physical world, in large part because they aren't able to interact with the world directly and have to rely on human texts to describe it to them.

For your software development example, you're making a similar mistake: the fact that their strongest suit is not producing fully working systems doesn't mean that they have no world model, or that their successes are as random as you seem to think ("Sometimes you get lucky, sometimes you don't"; "sometimes working code falls out").

But if you try, for example, asking an LLM to identify a bug in a program, or asking it questions about how a program works, you'll find that, from a functional perspective, they exhibit excellent understanding that strongly implies a good world model. You may be taking your own thought processes for granted too much to realize how good they are at this. The idea that "there's no abstracted model of software development as a process in there" is hard to reconcile with the often superhuman responses they're capable of when you use them in the scenarios they're most effective at.
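
(To make the bug-finding case concrete: a minimal sketch using the OpenAI Python client, where the model name, the review prompt, and the buggy snippet are all placeholders; any chat-style completion API would do the same job.)

    # Minimal sketch: ask a model to locate the bug in a short function.
    from openai import OpenAI

    buggy_code = '''
    def median(xs):
        xs.sort()                # side effect: mutates the caller's list
        return xs[len(xs) // 2]  # wrong for even-length lists
    '''

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a code reviewer. Identify bugs and explain them."},
            {"role": "user", "content": "What's wrong with this function?\n" + buggy_code},
        ],
    )
    print(resp.choices[0].message.content)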

staunton 16 hours ago | parent | prev | next [-]

An LLM can even ignore a lack of precision and just guess what you wanted, usually correctly, unless what you want is very unusual.

TeMPOraL 16 hours ago | parent | prev [-]

It can! Though you might need to ask for it; otherwise it may take what it thinks you mean and run off with it, and you'll discover the lack of precision only later, when the LLM gets confused or the result is nothing like what you actually expected.
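
(Concretely, "asking for it" can be as simple as a system prompt that tells the model to surface ambiguities before acting. A minimal sketch, using the OpenAI Python client purely as a stand-in, with a placeholder model name and prompt wording:)

    # Minimal sketch: make the model flag imprecision instead of silently guessing.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    vague_request = "Write a script that cleans up old files in the data directory."

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Before writing any code, list every ambiguity or missing "
                           "constraint in the request and ask clarifying questions. "
                           "Do not guess, and do not produce code until they are answered.",
            },
            {"role": "user", "content": vague_request},
        ],
    )
    # Expect questions back (how old is "old"? which directory? delete or archive?) rather than code.
    print(resp.choices[0].message.content)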