shakna 4 days ago

Thinking is better understood than you seem to believe.

We don't just study it in humans. We look at it in trees [0], for example. Trees have distributed systems that ingest data from their surroundings and use it to make choices, yet that isn't usually considered intelligence.

Organizational complexity is one of the requirements for intelligence, and LLMs do not reach that threshold. They have vast amounts of data, but organizationally they are still simple - hence "AI slop".

[0] https://www.cell.com/trends/plant-science/abstract/S1360-138...

chpatrick 4 days ago | parent

Who decides what degree of complexity is enough? That seems like deferring the problem to some other mystical arbiter.

In my opinion, AI slop is slop not because AIs are basic but because the prompt is minimal. A human puts minimal effort into making something with an AI and posts it online, producing slop, because the actual informational content is very low.

shakna 4 days ago | parent

> In my opinion AI slop is slop not because AIs are basic but because the prompt is minimal

And you'd be disagreeing with a large body of research into AI. [0]

> Moreover, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget.

[0] https://machinelearning.apple.com/research/illusion-of-think...

chpatrick 4 days ago | parent

This article doesn't mention "slop" at all.

shakna 4 days ago | parent

But it does show that prompt complexity is not what determines the quality of the output.

It also shows that there is a maximal complexity that LLMs can handle - which leads us back to: intelligence requires an organizational complexity that LLMs are not capable of.