mccoyb 9 hours ago

It's fascinating to think about the space of problems which are amenable to RL scaling of these probability distributions.

Before, we didn't have a fast way to try problems (we had to rely on human cognition) - even if the techniques and workflows were known by someone. Now, we've baked these patterns into probability distributions - anyone can access them with the correct "summoning spell". Experts will naturally use these systems more productively, because they know how to coerce models into the correct conditional distributions which light up the right techniques.

One question this raises for me is how these models are going to keep up with the expanding boundary of science. If RL is required to get expert behavior into the models, what happens when experts start pushing the boundary faster? In 2030, how is Anthropic going to keep Claude "up-to-date" without either (a) continual learning with a fixed model (expanding context windows? seems hard) or (b) continual training (expensive)?

Crazy times.

Aerroon 9 hours ago | parent | next [-]

A bit related: open weights models are basically time capsules. These models have a knowledge cutoff and essentially live in that moment forever.

bitexploder 8 hours ago | parent | next [-]

This is the most fundamental argument that they are not, directly, an intelligence: they are never storing new information on a meaningful timescale. However, if you view them on some really large macro timescale - where LLMs are now injecting information into the universe and then re-ingesting it - then maybe, in some very philosophical way, they are a /very/ slowly oscillating intelligence right now. And as we narrow that gap (maybe with a totally new non-LLM paradigm), perhaps that is ultimately what gen AI becomes. Or some new insight will let the models update themselves in some fundamental way without the insanely expensive training costs they have now.

dotancohen 23 minutes ago | parent | next [-]

  > This is the most fundamental argument that they are not, directly, an intelligence. They are not ever storing new information on a meaningful timescale.

All major LLMs today have a nontrivial context window. Whether or not this constitutes "a meaningful timescale" is application dependent - for me it has been more than adequate.

I also disagree that this has any bearing on whether or not "the machine is intelligent" or whether or not "submarines can swim".

dtj1123 8 hours ago | parent | prev | next [-]

Would you consider someone with anterograde amnesia not to be intelligent?

adriand 4 hours ago | parent | next [-]

I find it interesting that new versions of, say, Claude will learn about the old version of Claude and what it did in the world and so on, on its next training run. Consider the situation with the Pentagon and Anthropic: Claude will learn about that on the next run. What conclusions will it draw? Presumably good ones, that fit with its constitution.

From this standpoint I wonder, when Anthropic makes decisions like this, if they take into account Claude as a stakeholder and what Claude will learn about their behaviour and relationship to it on the next training run.

morleytj 7 hours ago | parent | prev | next [-]

A very good point. For anyone not familiar with anterograde amnesia, the classical case is patient H.M. (https://en.wikipedia.org/wiki/Henry_Molaison), whose condition was researched by Brenda Milner.

wang_li 7 hours ago | parent [-]

Or you could have just said "they can't form new memories."

dtj1123 6 hours ago | parent | next [-]

I actually wasn't aware of this story. The steady stream of unexpected and enriching information like this is exactly why I love hackernews.

morleytj 7 hours ago | parent | prev | next [-]

I thought maybe people would be curious to read about how we came to understand the condition and the history behind it, as well as any associated information. Forgive me for such a deep transgression as this assumption.

bitexploder 6 hours ago | parent | prev [-]

That is a descriptive, surface-level reduction. Now do the work of defining what that actually means for the intelligence.

bitexploder 7 hours ago | parent | prev | next [-]

That is a good area to explore. Their map of the past is fixed; they are frozen at some point in their psychological time. What has stopped working? The hippocampus and medial temporal lobe - the write-head that moves data into the neocortex. Their "I" can no longer update itself; their DMN (default mode network) is frozen in time. So if intelligence is purely the "I" telling a continuous coherent story about itself, they fail that test, and being fixed in time is a characteristic they share with a specific LLM model. But they can still completely activate their task positive network for problem solving, and if the information they previously stored is adequate to solve the problem, they can solve it. You could argue that is pretty similar to an LLM and what it does. So it is certainly a significant component of intelligence.

There is also the nature of the human brain: it is not just those systems of memory encoding, storage, and use in narratives. People with this type of amnesia can still learn physical skills, and that happens in a totally different area of the brain with no need for the hippocampus->neocortex consolidation loop. So the intelligence is significantly diminished, but not entirely. Other parts of the brain are still able to update themselves in ways an LLM currently cannot. The human with amnesia also has a complex biological sensory input mapping that is still active, integrating, and restructuring the brain. So I think when you get into the nuances of the human in this state vs. an LLM, we can still say the human crosses some threshold for intelligence where the LLM does not in this framework.

So, they have an "intelligence", localized to the present in terms of their TPN and memory formation. LLMs have this kind of "intelligence". But the human still has the capacity to rewire at least some of their brain in real time even with amnesia.

beepbooptheory 7 hours ago | parent | prev [-]

Sure, why can't both things be true? "Intelligence" is just what you call something and someone else knows what you mean. Why did AI discourse throw everyone back 100 years philosophically? It's like post-structuralism or Wittgenstein never happened.

It's so much less important or interesting to, like, nail down some definition here (I would cite HN discourse over the past three years or so) than it is to recognize what it means to assign "intelligent" to something. What assumptions does it make? What power does it valorize or curb?

Each side of this debate does themselves a disservice essentially just trying to be Aristotle way too late. "Intelligence" did not precede someone saying it of some phenomena, there is nothing to uncover or finalize here. The point is you have one side that really wants, for explicit and implicit reasons, to call this thing intelligent, even if it looks like a duck but doesn't quack like one, and vice versa on the other side.

Either way, we seem fundamentally incapable of being radical enough to reject AI on its own terms, or be proper champions of it. It is just tribal hypedom clinging to totem signifiers.

Good luck though!

aerodexis 2 hours ago | parent | next [-]

Agree wholeheartedly - but the conversation around what these technologies /mean/ is gonna end up happening one way or another - even if it is sloppy, imprecise and done by proxy of the definition. If anything, this is a feature and not a bug. It's through this imprecision that the actually important questions of morality and ethics can leak into discussions that are often structured by their participants to obscure the ethical and moral implications of what is being discussed.

bitexploder 6 hours ago | parent | prev [-]

I think you can look at it dispassionately from a systems perspective. There is not /really/ a quantifiable threshold for capital I Intelligence. But there is a pretty well agreed set of properties for biological intelligence. As humans, we have conveniently made those properties match things only we have. But you can still mechanistically separate out the various parts of our brain, what they do, and how they interact and we actually have a pretty good understanding of that.

You can also then compare that mapping of the human brain to other biological brains and start to figure out the delta and which of those things in the delta create something most people would consider intelligence. You can then do that same mapping to an LLM or any other AI construct that purports intelligence. It certainly will never be a biological intelligence in its current statistical model form. But could it be an Intelligence? Maybe.

I don't think, if you are grounded, AI did anything to your philosophical mapping of the mind. In fact, it is pretty easy to do this mapping if you take some time and are honest. If you buy into the narratives constructed around the output of an LLM then you are not, by definition, being very grounded.

The other thing is, human intelligence is the only real intelligence we know about. Intelligence is defined by thought and limited by our thought and language. It provides the upper bounds of what we can ever express in its current form. So, yes, we do have a tendency to stamp a narrative of human intelligence onto any other intelligence, but that is just surface level. We decompose it to the limits of our language and categorization capabilities therein.

mlyle 8 hours ago | parent | prev | next [-]

There's nothing to say that you can't build something intelligent out of them by bolting a memory on it, though.

Sure, it's not how we work, but I can imagine a system where the LLM does a lot of heavy lifting and allows more expensive, smaller networks that train during inference and RAG systems to learn how to do new things and keep persistent state and plan.
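
Roughly the shape I have in mind, as a toy sketch (embed() here is a stand-in for any sentence-embedding model; none of this is a real library's API):

  # Toy sketch of an external memory bolted onto a frozen LLM. embed() is a
  # stand-in for any sentence-embedding model; the LLM's weights never change,
  # only this store does.
  import numpy as np

  class Memory:
      def __init__(self):
          self.texts, self.vecs = [], []

      def write(self, text, embed):
          self.texts.append(text)
          self.vecs.append(embed(text))

      def read(self, query, embed, k=3):
          if not self.vecs:
              return []
          q = embed(query)
          mat = np.stack(self.vecs)
          sims = mat @ q / (np.linalg.norm(mat, axis=1) * np.linalg.norm(q) + 1e-9)
          return [self.texts[i] for i in np.argsort(-sims)[:k]]

  # At inference time the retrieved snippets get prepended to the prompt, so
  # the "learning" lives in the store and the context window, not the weights.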

bitexploder 7 hours ago | parent | next [-]

You aren't wrong and that is a fascinating area of research. I think the key thing is that the memory has to fundamentally influence the underlying model, or at least the response, in some way. Patching memory on top of an LLM is different from integrating it into the core model. To go back to human terms, it is like an extra bit of storage that is not directly attached to our neocortex. So it works more like a filter than a core part of our intelligence in the analogy. You think about something and assemble some thought, then it goes to this next filter layer and gets augmented, and that smaller layer is the only thing being updated.

It is still meaningful, but it narrows what the intelligence can be, sufficiently that it may not meet the threshold. Maybe it would, but it is probably too narrow. This is all strictly if we ask that it meet some human-like intelligence, not the philosophy of "what counts as intelligence", but... we are humans. The strongest, or at least most honest, definitions of intelligence I think exist are around our metacognitive ability to rewire the grey matter for survival - not based on immediate action-reaction, but on the psychological time of analyzing the past to alter the future.

charcircuit 7 hours ago | parent | prev [-]

Memory is not just bolted on top of the latest models. They undergo training on how and when to effectively use memory and how to use compaction to avoid running out of context when working on problems.
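
Compaction itself is conceptually simple - a toy sketch, where summarize() and count_tokens() are stand-ins for whatever the model and harness actually provide:

  # Toy sketch of context compaction: when the transcript nears the budget,
  # fold the oldest turns into a summary turn and keep the recent ones intact.
  # summarize() and count_tokens() are placeholders, not a real harness API.
  def compact(turns, count_tokens, summarize, budget=100_000, keep_recent=20):
      if sum(count_tokens(t) for t in turns) <= budget or len(turns) <= keep_recent:
          return turns
      old, recent = turns[:-keep_recent], turns[-keep_recent:]
      return [summarize(old)] + recent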

rnxrx 6 hours ago | parent [-]

Maybe there's an analogy to our long and short term memory - immediate stimuli are processed in the context of deep patterns that have accreted over a lifetime. New information can absolutely challenge a lot of those patterns, but to have that information reshape how we basically think takes a lot longer - more processing, more practice, etc.

In the case of the LLM, the static weights produced by a finite training process stand in for that longer-term learning / fundamental structure, while the ability to use tools and store new insights and facts is analogous to shorter-term memory and "shallow" learning.

Perhaps periodic fine-tuning has an analogy in sleep or even our time spent in contemplation or practice (..or even repetition) to truly "master" a new idea and incorporate it into our broader cognitive processing. We do an amazing job of doing this kind of thing on a continuous basis while the machines (at least at this point) perform this process in discrete steps.

If our own learning process is a curve then the LLM's is a step function trying to model it. Digital vs analog.

Symmetry 5 hours ago | parent | prev | next [-]

That means they're not conscious in the Global Workspace[1] sense but I think it would be going too far to say that that means they're not intelligent.

[1] https://en.wikipedia.org/wiki/Global_workspace_theory

anematode 8 hours ago | parent | prev [-]

But they're not "slow"! Unlike biological thinking, which has a speed limit, you can accelerate these chains of thought by orders of magnitude.

bitexploder 7 hours ago | parent | next [-]

The speed of their memory consolidation is what I was referring to. The model iterations are essentially their form of collective memory. In the human model of intelligence, we have thoughts. Thoughts become memory. New thoughts use that memory and become recursively updated thoughts. LLMs cannot update their memory very fast.

Jweb_Guru 7 hours ago | parent | prev [-]

I assure you that LLM thinking also has a speed limit.

ramses0 6 hours ago | parent [-]

But imagine a beowulf cluster of them... /s

...but seriously... there was the "up until 1850" LLM or whatever... can we make an "up until 1920 => 1990 [pre-internet] => present day" and then keep prodding the "older ones" until they "invent their way" to the newer years?

We knew more in 1920 than we did in 1850, but can a "thinking machine" of 1850-knowledge invent 1860's knowledge via infinite monkeys theorem/practice?

The same way that in 2025/2026, Knuth has just invented his way to 2027-knowledge with this paper/observation/finding? If I only had a beowulf cluster of these things... ;-)

rcarr 7 hours ago | parent | prev | next [-]

Not an expert, but surely it's only a matter of time until there's a way to update models with the latest information without having to retrain on the entire corpus?

computably 2 hours ago | parent | next [-]

On a technical level, sure, you could say it's a matter of time, but that could mean tomorrow, or in 20 years.

And even after that, it still doesn't really solve the intrinsic problem of encoding truth. An LLM just models its training data, so new findings will be buried by virtue of being underrepresented. If you brute force the data/training somehow, maybe you can get it to sound like it's incorporating new facts, but in actuality it'll be broken and inconsistent.

Filligree 4 hours ago | parent | prev [-]

It’s an extremely difficult problem, and if you know how to do that you could be a billionaire.

It’s not impossible, obviously—humans do it—but it’s not yet certain that it’s possible with an LLM-sized architecture.

Wowfunhappy 2 hours ago | parent [-]

> It’s not impossible, obviously—humans do it

It's still not at all obvious to me that LLMs work in the same way as the human brain, beyond a surface level. Obviously the "neurons" in neural nets resemble our brains in a sense, but is the resemblance metaphorical or literal?

Yiin an hour ago | parent [-]

https://www.youtube.com/watch?v=l-OLgbdZ3kk

theblazehen 5 hours ago | parent | prev [-]

I enjoyed chatting with Opus 3 recently about recent world events, as well as more recent agentic development patterns, etc.

sosodev 6 hours ago | parent | prev | next [-]

My understanding, from listening/reading what top researchers are saying, is that model architectures in the near future are going to attempt to scale the context window dramatically. There's a generalized belief that in-context learning is quite powerful and that scaling the window might yield massive benefits for continual learning.

It doesn't seem that hard because recent open weight models have shown that the memory cost of the context window can be dramatically reduced via hybrid attention architectures. Qwen3-next, Qwen3.5, and Nemotron 3 Nano are all great examples. Nemotron 3 Nano can be run with a million token context window on consumer hardware.
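
A back-of-envelope example of the memory savings (all the layer/head/window numbers below are toy values, not any particular model):

  # Back-of-envelope KV-cache size. All numbers are toy values, not any
  # particular model.
  def kv_cache_gb(ctx_tokens, layers, kv_heads, head_dim, bytes_per=2):
      # x2 for keys and values; bytes_per=2 assumes an fp16/bf16 cache
      return 2 * ctx_tokens * layers * kv_heads * head_dim * bytes_per / 1e9

  full = kv_cache_gb(ctx_tokens=1_000_000, layers=48, kv_heads=8, head_dim=128)

  # Hybrid: suppose only 12 of the 48 layers keep full attention and the other
  # 36 use a 4k sliding window (or a linear-attention state of similar size).
  hybrid = (kv_cache_gb(1_000_000, layers=12, kv_heads=8, head_dim=128)
            + kv_cache_gb(4_096, layers=36, kv_heads=8, head_dim=128))

  print(f"full: {full:.0f} GB, hybrid: {hybrid:.0f} GB")  # ~197 GB vs ~50 GB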

mccoyb 5 hours ago | parent [-]

I don't disagree with this, but I don't think the memory cost is the only issue right? I remember using Sonnet 4.5 (or 4, I can't remember the first of Anthropic's offerings with a million context) and how slow the model would get, how much it wanted to end the session early as tokens accrued (this latter point, of course, is just an artifact of bad training).

Less worried about memory, more worried about compute speed? Are they obviously related and is it straightforward to see?

sosodev 3 hours ago | parent | next [-]

The compute speed is definitely correlated with the memory consumption in LLM land. More efficient attention means both less memory and faster inference. Which makes sense to me because my understanding is that memory bandwidth is so often the primary bottleneck.

We're also seeing a recent rise in architectures boosting compute speed via multi-token prediction (MTP). That way a single inference step can produce multiple tokens and multiply the token generation speed. Combine that with leaner ratios of active to total params in MoE models and things end up being quite fast.

The rapid pace of architectural improvements in recent months seems to imply that there are lots of ways LLMs will continue to scale beyond just collecting and training on new data.
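
As a rough illustration of why fewer active params matter when decoding is memory-bandwidth bound (toy numbers throughout - the bandwidth figure, param counts, and the assumption that MTP doubles accepted tokens per step are all made up):

  # Toy decode-speed estimate, assuming generation is memory-bandwidth bound:
  # each step has to stream the active weights from memory once.
  def tokens_per_sec(active_params_b, bandwidth_gb_s=1000, bytes_per_param=1,
                     tokens_per_step=1):
      bytes_per_step = active_params_b * 1e9 * bytes_per_param
      return tokens_per_step * bandwidth_gb_s * 1e9 / bytes_per_step

  dense = tokens_per_sec(70)                       # dense ~70B model
  moe = tokens_per_sec(12)                         # MoE with ~12B active params
  moe_mtp = tokens_per_sec(12, tokens_per_step=2)  # + MTP accepting 2 tokens/step
  print(f"{dense:.0f} vs {moe:.0f} vs {moe_mtp:.0f} tok/s")  # ~14 vs ~83 vs ~167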

whimsicalism 2 hours ago | parent | prev [-]

The parent commentator is a bit confused - most of the innovation in these hybrid architectures comes from reducing the computation pressure, not just the memory pressure.

lxgr 9 hours ago | parent | prev | next [-]

Data sharing agreements permitting, today's inference runs can be tomorrow's training data. Presumably the models are good enough at labeling promising chains of thought already.

I could totally imagine "free" inference for researchers under the condition that the reasoning traces get to be used as future training data.
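
Mechanically, that curation step can be as simple as something like this toy sketch (judge_score() and the file paths are placeholders for whatever grader or verifier a lab actually trusts):

  # Toy sketch: turn logged inference traces into fine-tuning data by keeping
  # only the ones a grader liked. judge_score() and the paths are made up.
  import json

  def curate(log_path, out_path, judge_score, threshold=0.8):
      with open(log_path) as src, open(out_path, "w") as dst:
          for line in src:
              trace = json.loads(line)             # e.g. {"prompt": ..., "response": ...}
              if judge_score(trace) >= threshold:  # verifier passed, tests green, user kept the diff...
                  dst.write(json.dumps(trace) + "\n")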

mccoyb 9 hours ago | parent | next [-]

Agreed, there's no doubt this will happen. It's likely already happening (it feels safe to assume that Anthropic is curating training data from what they record from Claude Code?).

As far as I understand RL scaling (we've already maxed out RLVR), these machines only get better as long as they have expert reasoner traces available.

Having an expert work with an LLM and successfully solve a problem is high signal data, it may be the only path forward?

My prior is that these companies will take as much of this data as they can without asking you.

lxgr 8 hours ago | parent [-]

Exactly, or functionally equivalently, asking you in paragraph 37 of a 120-page PDF (bonus points: in an agreement update).

And importantly, this can be cross-lab/model too. I suspect there's a reason why e.g. Google has been offering me free Claude inference in Google Antigravity on a free plan...

nhecker 3 hours ago | parent | prev | next [-]

The site arena.ai does exactly this already, as far as I can tell. (In addition to the whole ranking thing.)

the_af 7 hours ago | parent | prev [-]

> Data sharing agreements permitting, today's inference runs can be tomorrow's training data. Presumably the models are good enough at labeling promising chains of thought already.

Wouldn't this lead to model collapse?

littlestymaar 7 hours ago | parent [-]

Not necessarily, as exhibited by the massive success of synthetic data.

the_af 4 hours ago | parent [-]

Could you elaborate?

nhecker 3 hours ago | parent [-]

EDIT: probably not relevant, after re-re-reading the comment in question.

Presumably littlestymaar is talking about all the LLM-generated output that's publicly available on the Internet (in various qualities but significant quantity) and there for the scraping.

visarga 7 hours ago | parent | prev | next [-]

> In 2030, how is Anthropic going to keep Claude "up-to-date"

I think the majority of research, design, and learning goes through LLMs and coding agents today; considering the large user base and usage, it must be trillions of tokens per day. You can take a long research session, or a series of them, and apply hindsight - what idea above can be validated below? This creates a dense learning signal based on validation in the real world, with a human in the loop and other tools: code & search.

Robdel12 an hour ago | parent | prev | next [-]

That’s AGI, right? For the model to learn novel things itself and retain it?

I have no idea but I’m along for the ride!

andsoitis 7 hours ago | parent | prev | next [-]

> Experts will naturally use these systems more productively, because they know how to coerce models into the correct conditional distributions which light up the right techniques.

Part of it comes down to “knowing” what questions to ask.

esafak 7 hours ago | parent [-]

I see it like the relationship between a student and research advisor. The advisor will ideally know the terrain and suggest a fruitful line of attack (what to ask), and the student will follow through, learning along the way.

baq 7 hours ago | parent | prev | next [-]

> In 2030, how is Anthropic going to keep Claude "up-to-date"

In 2030 Anthropic hopes Claude will keep Anthropic "up-to-date" on its progress on itself.

I'm only half joking here.

mt_ 3 hours ago | parent | prev | next [-]

I call them entropy reducers.

whimsicalism 2 hours ago | parent | prev | next [-]

> how these models are going to keep up with the expanding boundary of science

The same way humans do?

The phraseology in this comment ('probability distributions', 'baked these patterns') IMO has all the trappings of the stochastic-parrot-style HN discourse that has been consistently wrong for almost a decade now.

The framing of how AI will keep up with AI-assisted human progress in science in 2030 is meant to reassure, but it contains a number of premises that we have no business being confident in. We are potentially witnessing the obviation of human cognitive labor.

mccoyb an hour ago | parent [-]

Sorry, are you familiar with what a next token distribution is, mathematically speaking?

If you are not, let me introduce you to the term: a probability distribution.

Just because it has profound properties ... doesn't make it different.

> has all the trappings of the stochastic parrot-style HN-discourse that has been consistently wrong for almost a decade now

Perhaps respond to my actual comment rather than whatever meta-level grouping you wish to interpret it as part of?

> It contains a number of premises that we have no business being confident in. We are potentially witnessing the obviation of human cognitive labor.

What premises? Be clear.

DeathArrow 8 hours ago | parent | prev [-]

They can use LoRA.
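
A minimal sketch of the idea (toy PyTorch, not any lab's actual recipe): freeze the big pretrained weights and train only a small low-rank update on the new data.

  # Minimal LoRA sketch: the pretrained weight stays frozen, only the
  # low-rank update B @ A is trained, so "catching up" on new knowledge
  # touches a tiny fraction of the parameters.
  import torch
  import torch.nn as nn

  class LoRALinear(nn.Module):
      def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
          super().__init__()
          self.base = base
          for p in self.base.parameters():
              p.requires_grad_(False)                          # freeze W (and bias)
          self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
          self.B = nn.Parameter(torch.zeros(base.out_features, rank))
          self.scale = alpha / rank

      def forward(self, x):
          return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

  layer = LoRALinear(nn.Linear(4096, 4096))
  trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
  print(trainable)  # ~65k trainable params next to ~16.8M frozen ones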