Tade0 6 hours ago
This goes both ways: you can't assume capabilities based on impressions, especially with LLMs, which are purpose-built to give the impression of producing language. Also, the designers of these systems appear to agree: when it was shown that LLMs can't actually do calculations, tool calls were introduced.
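Roughly, the shape of that fix looks like the sketch below. Everything in it is made up for illustration (the "calculator" tool name, the response dict, the dispatcher), not any particular vendor's API: the model emits a structured request and ordinary code does the arithmetic.

    # Toy sketch of the "tool call" idea: instead of predicting digits, the
    # model emits a structured request and ordinary code does the arithmetic.
    # The response dict, tool name and dispatcher below are all hypothetical.
    import ast
    import operator

    # A hypothetical structured output from the model.
    model_response = {"tool": "calculator", "arguments": {"expression": "2 + 2"}}

    _OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
            ast.Mult: operator.mul, ast.Div: operator.truediv}

    def safe_eval(expression: str):
        """Evaluate simple arithmetic without eval(), via the ast module."""
        def walk(node):
            if isinstance(node, ast.Expression):
                return walk(node.body)
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
                return _OPS[type(node.op)](walk(node.left), walk(node.right))
            raise ValueError("unsupported expression")
        return walk(ast.parse(expression, mode="eval"))

    def dispatch(response: dict):
        """Route the model's tool call to real code rather than trusting its arithmetic."""
        if response.get("tool") == "calculator":
            return safe_eval(response["arguments"]["expression"])
        raise ValueError("unknown tool: %s" % response.get("tool"))

    print(dispatch(model_response))  # 4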
AlecSchueler 6 hours ago | parent
It's true that they only give plausible-sounding answers. But say we ask a simple question like "What's the sum of two and two?" The only plausible-sounding answer to that is "four." The model doesn't need any fancy internal understanding, or anything beyond prediction, to give what is in effect the same answer a system that truly understood would give. The same goes for a lot of bugs in code: the best prediction is often the correct answer, namely pointing out the error. Whether it can "actually find" the bugs (whatever that means) isn't really as important as whether or not it's correct.