Tade0 6 hours ago

This goes both ways: you can't assume capabilities based on impressions, especially with LLMs, which are purpose-built to give the impression of producing language.

Also, designers of these systems appear to agree: when it was shown that LLMs can't actually do calculations, tool calls were introduced.
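
To make that concrete, here is a minimal sketch of the tool-call idea: the model emits a structured request instead of doing the arithmetic itself, and the host executes it. The tool name, argument shape, and numbers are made up for illustration; no particular vendor API is assumed.

    import json

    def calculator(expression: str) -> str:
        # Host-side tool: evaluate a restricted arithmetic expression exactly,
        # rather than letting the model "predict" the digits.
        if not set(expression) <= set("0123456789+-*/(). "):
            raise ValueError("unsupported characters in expression")
        return str(eval(expression))  # acceptable here: input is filtered, purely illustrative

    # What a model might emit instead of answering in free text:
    tool_call = {"name": "calculator",
                 "arguments": json.dumps({"expression": "1234 * 5678"})}

    args = json.loads(tool_call["arguments"])
    print(calculator(args["expression"]))  # 7006652, computed by the tool, not predicted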

AlecSchueler 6 hours ago

It's true that they only give plausible-sounding answers. But say we ask a simple question like "What's the sum of two and two?" The only plausible-sounding answer to that is "four." It doesn't need any fancy internal understanding, or anything beyond prediction, to give what is in practice the correct answer.

The same goes for a lot of bugs in code: the most plausible prediction is often the correct one, namely pointing out the error. Whether it can "actually find" the bugs (whatever that means) matters less than whether its answer is correct.

Tade0 5 hours ago

It becomes important the moment your particular bug looks typical but has an atypical cause. In such cases you'll get nonsense that you need to ignore.
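
A hypothetical illustration of that mismatch (the code is invented for this example, not taken from the thread): the symptom below looks like a routine "forgot to reset the accumulator" mistake, which is the diagnosis a pattern-matcher is likely to offer, but the actual cause is Python's once-evaluated mutable default argument.

    def merge_tags(new_tags, tags=[]):   # real cause: tags=[] is evaluated once and shared across calls
        for t in new_tags:
            if t not in tags:
                tags.append(t)
        return tags

    print(merge_tags(["a", "b"]))  # ['a', 'b']
    print(merge_tags(["c"]))       # ['a', 'b', 'c'] -- state unexpectedly carries over

The point is only that the plausible explanation and the true one can diverge, and a confident-sounding diagnosis of the former tells you nothing about the latter.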

Again: they're very useful, because they give good answers based on someone else's knowledge and vague questions on the part of the user, but you have to remain vigilant and keep in mind that this is just text presented to look as believable as possible. There's no real promise of correctness or, more importantly, of critical thinking.

AlecSchueler 3 hours ago

100%. They're not infallible, but that's a different argument from "they can't find bugs in your code."