energy123 7 days ago

Performance is proportional to the number of reasoning tokens. How do you reconcile that with your opinion that they are "random words"?

kelipso 6 days ago | parent | next [-]

Technically, "random" can still have probabilities associated with it. In casual speech, random means all outcomes are equally likely, or that we don't know the probabilities. But for LLM token output, the model does estimate the probabilities.
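
A quick Python sketch of the difference (the distribution is invented purely for illustration):

    import random

    # Toy next-token distribution estimated by a model (made-up numbers).
    probs = {"the": 0.60, "a": 0.25, "an": 0.10, "banana": 0.05}

    # "Random" in the casual sense: every candidate equally likely.
    casual = random.choice(list(probs))

    # What an LLM sampler does: draw weighted by the estimated
    # probabilities, so "banana" is possible but rare.
    sampled = random.choices(list(probs), weights=list(probs.values()))[0]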

energy123 6 days ago | parent [-]

Greedy decoding isn't random.
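
A sketch of why: greedy decoding is just an argmax over the model's estimated distribution, so the same distribution yields the same token every time.

    def greedy_decode(probs: dict[str, float]) -> str:
        # Greedy decoding: always emit the single highest-probability
        # token. No sampling occurs, so the choice is deterministic.
        return max(probs, key=probs.get)

    assert greedy_decode({"the": 0.60, "a": 0.25, "an": 0.10}) == "the"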

blargey 6 days ago | parent | prev [-]

s/random/statistically-likely/g

Reducing the distance of each statistical leap improves "performance", since it avoids the failure modes specific to the largest leaps, but it doesn't change the underlying mechanism. Reasoning models still "hallucinate" spectacularly even with "shorter" gaps.
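
One way to make "the distance of each statistical leap" concrete (my framing, not a standard term) is the surprisal of each emitted token; low-probability picks are the long leaps:

    import math

    def surprisal(prob: float) -> float:
        # Surprisal in bits: how "far" a leap the sampler took when it
        # emitted a token with this estimated probability.
        return -math.log2(prob)

    print(surprisal(0.95))  # ~0.07 bits: a short, safe hop
    print(surprisal(0.05))  # ~4.32 bits: a long leap, where failures concentrate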

ikari_pl 6 days ago | parent [-]

What's wrong with "statistically likely"?

If I ask you what's 2+2, there's a single answer I consider much more likely than others.

Sometimes, words are likely because they are grounded in ideas and facts they represent.
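
You can check this directly with a small open model; here's a sketch using Hugging Face's transformers library and GPT-2 (the exact top tokens depend on the model, so treat the output as illustrative):

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    ids = tok("2 + 2 =", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # scores over the next token
    probs = torch.softmax(logits, dim=-1)

    # If the model is grounded on basic arithmetic, the distribution
    # should be sharply peaked on the correct continuation.
    top = torch.topk(probs, 5)
    for p, i in zip(top.values, top.indices):
        print(f"{tok.decode(int(i))!r}: {float(p):.3f}")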

blargey 6 days ago | parent [-]

> Sometimes, words are likely because they are grounded in ideas and facts they represent.

Yes, and other times they are not. I think the failure modes of a statistical model of (what is itself only a communicative model of) thought are unintuitive enough without adding layers of anthropomorphization on top, so there remains some value in pointing out the distinction.