charcircuit 4 hours ago

It can learn. When my agents make a mistake, they update their memories and avoid making the same mistake in the future.

>Reinforcement learning, on the other hand, can do that, on a human timescale. But you can't make money quickly from it.

Tools like Claude Code and Codex have used RL to train their models to use the harness, and they make a ton of money.

kelnos 2 hours ago | parent | next [-]

That's not learning, though. That's just taking new information and stacking it on top of the trained model. And that new information consumes space in the context window. So sure, it can "learn" a limited number of things, but once you wipe context, that new information is gone. You can keep loading that "memory" back in, but before too long you'll have too little context left to do anything useful.

That kind of capability is not going to lead to AGI, not even close.

charcircuit an hour ago | parent [-]

>but before too long you'll have too little context left to do anything useful.

One of the biggest boosts in LLM utility and knowledge was hooking them up to search engines. Giving them the ability to query a gigantic bank of information already has made them much more useful. The idea that it can't similarly maintain its own set of information is shortsighted in my opinion.

Dansvidania 2 hours ago | parent | prev | next [-]

That’s not learning. That’s carrying over context that you are trusting is correctly summarised over from one conversation to the next.

otabdeveloper4 2 hours ago | parent | prev [-]

> they update their memories

Their contexts, not their memories. An LLM context is like 100k tokens. That's a fruit fly, not AGI.

charcircuit an hour ago | parent [-]

A human can't keep 100k tokens active in their mind at the same time. We just need a place to store them and tools to query it. You could have exabytes of memories that the AI could use.
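The "place to store them and tools to query it" idea can be sketched minimally. All names below are hypothetical illustrations, not any real agent framework, and the keyword-overlap ranking is a stand-in for what would really be an embedding index or search engine:

```python
# Minimal sketch of an external memory store an agent could query,
# so only the few relevant "memories" enter the context window
# rather than the whole (potentially enormous) store.
# Hypothetical names; not a real framework.

class MemoryStore:
    def __init__(self):
        self.memories = []  # could be exabytes on disk; a list for the sketch

    def remember(self, text):
        self.memories.append(text)

    def recall(self, query, k=2):
        # Naive keyword-overlap ranking; a real system would use
        # embeddings or a proper search index instead.
        q = set(query.lower().split())
        scored = sorted(
            self.memories,
            key=lambda m: len(q & set(m.lower().split())),
            reverse=True,
        )
        return scored[:k]

store = MemoryStore()
store.remember("Build failed when tests ran without the --release flag")
store.remember("The deploy script requires the staging environment variable")
store.remember("User prefers tabs over spaces in Python files")

# Only the top-k relevant memories go into the prompt, not the whole store.
relevant = store.recall("why did the build fail during tests?")
```

The point of the sketch: the store can grow without bound, while the context cost per turn stays fixed at `k` retrieved snippets.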