godshatter 2 hours ago
Do LLMs even learn? The companies that build them train new models partly on the conversations older models have had with people, but do the models incorporate knowledge into their neural nets as they go along? Can an LLM decide, without a prompt or an API call, to text someone, go read about something, or do anything at all other than wait for the next prompt? Do LLMs have any conceptual understanding of what they output? Do they even have a mechanism for conceptual understanding?

LLMs are incredibly useful and I'm having a lot of fun working with them, but as far as I can tell they are a long way from any kind of general intelligence.
CamperBob2 2 hours ago
Yes, to all of your questions. You need to use a recent LLM in an agentic harness. Tell it to take notes, and it will. After a bit of further refinement, we'll start to call that process "learning." Eventually the question of who owns the notes, who gets to update them, and how, will become a huge, huge deal.
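To make that concrete, here's a minimal sketch of what "an agentic harness that takes notes" can mean. Nothing here is a particular vendor's API: call_model() is a placeholder for whatever LLM provider you wire up, and notes.json and the NOTE: convention are just illustrative choices. The point is that the harness, not the model weights, supplies the persistence.

```python
# Minimal sketch of an agentic harness with persistent notes.
# Assumptions: call_model() is a hypothetical wrapper around your LLM
# provider; the model signals a memory it wants saved by emitting a
# line starting with "NOTE:". Crude, but it is a form of learning
# that survives across sessions.
import json
from pathlib import Path

NOTES = Path("notes.json")  # persistent store shared by all sessions

def load_notes() -> list[str]:
    return json.loads(NOTES.read_text()) if NOTES.exists() else []

def save_note(note: str) -> None:
    notes = load_notes()
    notes.append(note)
    NOTES.write_text(json.dumps(notes, indent=2))

def call_model(system: str, user: str) -> str:
    """Placeholder: swap in a real LLM call (OpenAI, Anthropic, local...)."""
    raise NotImplementedError("wire up your LLM provider here")

def run_turn(user_msg: str) -> str:
    # Feed every saved note back into the context on every turn, and
    # tell the model how to add new ones.
    system = (
        "You are an assistant with persistent memory. Your saved notes:\n"
        + "\n".join(f"- {n}" for n in load_notes())
        + "\nIf you learn something worth remembering, emit a line "
          "starting with NOTE: and it will be saved for future sessions."
    )
    reply = call_model(system, user_msg)
    for line in reply.splitlines():
        if line.startswith("NOTE:"):
            save_note(line[len("NOTE:"):].strip())
    return reply
```

Every real agent product does some more sophisticated version of this (summarization, retrieval, deduplication), but the shape is the same: the notes file is the part that learns.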