gsleblanc 4 hours ago

It's looking increasingly naive to assume that scaling LLMs is all you need to get to full white-collar worker replacement. The attention mechanism / Hopfield network is fundamentally modeling only a small subset of the full human brain, and the sustained hype around bolted-on solutions for "agentic memory" is, in my opinion, glaring evidence that SOTA transformers alone aren't sufficient even when you limit the space to text. Maybe I'm just parroting Yann LeCun.

aerhardt 3 hours ago | parent | next [-]

> you just limit the space to text

And even then... why can't they write a novel? Or, lowering the bar, let's say a novella like Death in Venice, Candide, The Metamorphosis, Breakfast at Tiffany's...?

Every book's in the training corpus...

Is it just a matter of someone not having spent a hundred grand in tokens to do it?

voxl 3 hours ago | parent | next [-]

I know someone who spends basically every day writing personal fan fiction with every model you can find. She doesn't want to share it, and she complains about it a lot; it seems maintaining consistency across something, say, 100 pages long is difficult.

zozbot234 2 hours ago | parent | prev | next [-]

Never mind novels, it can't even write a good Reddit-style or HN-style comment. agentalcove.ai has an archive of AI models chatting to one another in "forum" style, and even though it's a good showcase of the models' overall knowledge, the AIisms are quite glaring.

mh- an hour ago | parent [-]

They definitely can, and do.

It's just that the ones that manage to suppress all the AI writing "tells" go unnoticed as AI. This is a type of survivorship bias, though I feel there must be a better term for it that eludes me.

conception 3 hours ago | parent | prev | next [-]

I don’t understand - there are hundreds/thousands of AI written books available now.

aerhardt 3 hours ago | parent [-]

I've skimmed a few, and one can immediately tell they don't meet the average writing level you'd see in a local writers' workshop, much less that of Mann or Capote.

colechristensen 3 hours ago | parent | prev [-]

Who says they can't? What's your bar that needs to be passed in order for "written a novella" to be achieved?

There's a lot of bad writing out there, I can't imagine nobody has used an LLM to write a bad novella.

aerhardt 3 hours ago | parent [-]

> What's your bar that needs to be passed

I provide four examples in my comment...

colechristensen 3 hours ago | parent [-]

Your qualification for whether an LLM can write a novella is that it has to be as good as The Metamorphosis?

Yes, those are examples of novellas, but surely you believe an LLM could write a bad novella? I'm not sure what your point is. Either you think it can't string the words together at that length, or your standard is that it can't write a foundational piece of literature that stays relevant for generations. I'm not sure which.

aerhardt 3 hours ago | parent [-]

I don't think it can write something that's even a fraction of the quality of Kafka.

But GP's argument ("limit the space to text") could be taken to imply - and it seems to be a common implication these days - that LLMs have mastered the text medium, or that they will very soon.

> it can't write a foundational piece of literature

Why not, if this is a purely textual medium, the corpus includes all the great stories ever written, and possibly many writing workshops and great literature courses?

colechristensen 2 hours ago | parent [-]

I don't know what to tell you. It's more than a little absurd to set the bar for being able to do something at the output being considered a great work of art for generations.

aerhardt 2 hours ago | parent [-]

I agree that the argument starts from a reductio ad absurdum.

So at least we can agree that AI hasn't mastered the text medium, without further qualification?

And what about my argument, further qualified, which is that I don't think it could even write as well as a good professional writer - not necessarily a generational one?

ACCount37 3 hours ago | parent | prev [-]

You probably are.

The "small subset" argument is profoundly unconvincing, and inconsistent with both neurobiology of the human brain and the actual performance of LLMs.

The transformer architecture is incredibly universal and highly expressive. Transformers power LLMs, video generator models, audio generator models, SLAM models, entire VLAs and more. It's not a 1:1 copy of the human brain, but that doesn't mean it's incapable of reaching functional equivalence. The human brain isn't the only way to implement general intelligence - just the one that was easiest for evolution to put together out of what it had.

LeCun's arguments about "LLMs can't do X" keep being proven wrong empirically. Even on ARC-AGI-3, a benchmark specifically designed to be adversarial to off-the-shelf LLMs and to target their weakest capabilities, there is no class of AI that beats LLMs.

bigyabai 2 hours ago | parent [-]

> Human brain isn't the only way to implement general intelligence - just the one that was the easiest for evolution to put together out of what it had.

The human brain is not a pretrained system. It's objectively more flexible than transformers and capable of self-modulation in ways that no ML architecture can replicate (that I'm aware of).

ACCount37 2 hours ago | parent [-]

The human brain's "pre-training" is evolution cramming way too much structure into it. It "learns from scratch" the way it does because it doesn't actually learn from scratch.

I've seen plenty of wacky test-time training schemes used in ML nowadays, which is probably the closest to how the human brain learns. None are stable enough to go into the frontier LLMs, where in-context learning still reigns supreme. In-context learning is a "good enough" continuous learning approximation, it seems.

bigyabai 2 hours ago | parent [-]

> In-context learning is a "good enough" continuous learning approximation, it seems.

"It seems" is doing herculean work holding your argument up in this statement. Say, how many "R"s are in Strawberry?

ACCount37 2 hours ago | parent [-]

If you think that "strawberry" is some kind of own, I don't know what to tell you. It takes deep and profound ignorance of both the technical basics of modern AIs and the current SOTA to do this kind of thing.

LLMs get better release to release. Unfortunately, the quality of humans in LLM capability discussions is consistently abysmal. I wouldn't be seeing the same "LLMs are FUNDAMENTALLY FLAWED because I SAY SO" repeated ad nauseam otherwise.

bigyabai 2 hours ago | parent [-]

I can ask a nine-year-old human brain to solve that problem with a box of Crayola and a sheet of A4 printer paper.

In-context learning is evidently not "good enough" to approximate even a child's continuous learning.

ACCount37 an hour ago | parent [-]

You're absolutely wrong!

You can also ask an LLM to solve that problem by spelling the word out first. It'll then count the letters successfully, at a success rate similar to actual nine-year-olds'.

There's a technical explanation for why that works, but to you, it might as well be black magic.

And if you could get a modern agentic LLM that somehow still fails that test? Chances are, it would solve it with no instructions - just one "you're wrong".

1. The LLM makes a mistake

2. User says "you're wrong"

3. The LLM re-checks by spelling the word out and gives a correct answer

4. The LLM then keeps re-checking itself using the same method for any similar inquiry within that context

In-context learning isn't replaced by anything better because it's so powerful that finding "anything better" is incredibly hard. It's the bread and butter of how modern LLM workflows function.
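The spell-it-out trick in steps 3-4 amounts to forcing character-level processing, which sidesteps the usual failure mode: the model operates on sub-word tokens, not individual letters. A rough Python sketch (purely illustrative, not how any model is implemented) of the computation the model is being walked through:

```python
word = "strawberry"

# Step 1 of the workaround: spell the word out one character at a time,
# so each letter becomes an explicit item rather than part of a token.
spelled = list(word)  # ['s', 't', 'r', 'a', 'w', 'b', 'e', 'r', 'r', 'y']

# Step 2: count occurrences of the target letter over the spelled-out form.
count = sum(1 for ch in spelled if ch == "r")

print(count)  # 3
```

Once the letters are enumerated explicitly in the context, the counting itself is trivial; that's the sense in which "spell it out first" reliably fixes the strawberry question.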