redeux a day ago

All that was described here is learning from a mistake, which is something I hope all humans are capable of.

dragonwriter a day ago

No, what was described was specifically reporting to an external party the neural connections involved in the mistake and the source in past training data that caused them, as well as learning from new data.

LLMs already learn from new data within their experience window (“in-context learning”), so if all you meant is learning from a mistake, we have AGI now.

Jensson a day ago

> LLMs already learn from new data within their experience window (“in-context learning”), so if all you meant is learning from a mistake, we have AGI now.

They don't learn from the mistake, though; they mostly just repeat it.

hnuser123456 a day ago

Yes, thank you, that's exactly what I was getting at. It's obviously a huge technical challenge on top of just training a coherent LLM in the first place, yet it's something humans do every day to stay adaptive.