number6 6 hours ago

But can it count the R's in strawberry?

Paradigma11 5 hours ago | parent | next [-]

That question is equivalent to asking a human to add the wavelengths of those two colors and divide the result by 3.

snovv_crash 5 hours ago | parent | next [-]

Unless you're aware of hyperspectral image adapters for LLMs, they aren't capable of that either.

szszrk 5 hours ago | parent | prev | next [-]

Unfair - the human beats the AI in this comparison, as a human will instantly answer "I don't know" instead of yelling a random number.

Or at best "I don't know, but maybe I can find out" and proceed to find out. But he is unlikely to shout "6" just because he heard that number once when someone talked about light.

koliber 5 hours ago | parent [-]

> human will instantly answer "I don't know" instead of yelling a random number.

Seems that you never worked with Accenture consultants?

szszrk 41 minutes ago | parent [-]

Fair.

Yet this can be filtered with fixed rules, like "output produced by corporate structures is untrusted random data".

thegabriele 3 hours ago | parent | prev [-]

Why is that?

3 hours ago | parent | next [-]
[deleted]
Paradigma11 2 hours ago | parent | prev [-]

Because LLMs don't have a textual representation of any text they consume. It's just vectors to them. That's also why they are so good at ignoring typos: the vector distance is so small it makes no difference to them.
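
Rough sketch of what that means in practice, using OpenAI's tiktoken library (assuming the cl100k_base encoding here; other models use different vocabularies):

    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode("strawberry")
    print(ids)                             # a handful of integer token IDs, not letters
    print([enc.decode([t]) for t in ids])  # subword pieces, typically not single characters
    # The model only ever sees those IDs (mapped to embedding vectors);
    # the letter 'r' never shows up as a symbol of its own in the input.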

Aditya_Garg 6 hours ago | parent | prev [-]

Yes, it's ridiculously good at stuff like that now. I dare you to try and trick it.

frizlab 5 hours ago | parent [-]

https://news.ycombinator.com/item?id=47495568

thedatamonger 5 hours ago | parent [-]

What bothers me is not this issue, which will certainly disappear now that it has been identified, but that we have yet to identify the category of these "stupid" bugs ...

sigmoid10 5 hours ago | parent [-]

We already know exactly what causes these bugs. They are not a fundamental problem of LLMs; they are a problem of tokenizers. The actual model simply doesn't get to see the same text that you see. It can only infer this stuff from related info it was trained on. It's as if someone asked you how many 1s there are in the binary representation of this text: you'd have to convert it first to think it through, or use an external tool, even though binary is all your computer ever sees.
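
To make that analogy concrete, a quick sketch (assuming UTF-8 bytes as the "binary representation"):

    # Counting the 1-bits in the binary (UTF-8) form of a sentence: all the
    # information is right there, but you can't do it by looking at the text;
    # you have to convert it to a representation you don't normally perceive.
    # That's roughly the position an LLM is in when asked to count letters.
    text = "how many r's are in strawberry?"
    ones = sum(bin(b).count("1") for b in text.encode("utf-8"))
    print(ones)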

datsci_est_2015 3 hours ago | parent [-]

Okay, genuinely not an expert on the latest with LLMs, but isn't tokenization an inherent part of LLM construction? Kind of like support vectors in SVMs, or nodes in neural networks? Once we remove tokenization from the equation, aren't we no longer talking about LLMs?

fenomas 2 hours ago | parent [-]

It's not a side effect of tokenization per se, but of the tokenizers people use in actual practice. If somebody really wanted an LLM that could flawlessly count letters in words, they could train one with a naive tokenizer (like just ASCII characters). But the resulting model would be very bad (for its size) at language or reasoning tasks.

Basically it's an engineering tradeoff. There is more demand for LLMs that can solve open math problems, but can't count the Rs in strawberry, than there is for models that can count letters but are bad at everything else.
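
Toy sketch of that naive-tokenizer idea (the char_tokenize helper is made up purely for illustration, not any production tokenizer): one token per character makes the letters directly visible to the model, at the cost of much longer sequences than subword tokenization.

    def char_tokenize(text: str) -> list[int]:
        # one token per character (here just the Unicode code point)
        return [ord(c) for c in text]

    tokens = char_tokenize("strawberry")
    print(len(tokens))             # 10 tokens for one 10-letter word
    print(tokens.count(ord("r")))  # 3 -- the count is trivially recoverable
    # A subword tokenizer typically emits far fewer tokens here, which is why
    # real models prefer it: shorter sequences, better use of context and compute.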