MangoToupe 2 hours ago

Also, agents have no capacity to learn.

bradfa 2 hours ago | parent | next [-]

They have a capacity to "learn", it's just WAY MORE INVOLVED than how humans learn.

With a human, you give them feedback or advice, and generally by the 2nd or 3rd time the same kind of thing happens they can figure it out and improve. With an LLM, you have to specifically set up a convoluted (and potentially expensive, both financially and in electrical power) system to provide MANY MORE examples of how to improve via fine-tuning or other training actions.

ethmarks 2 hours ago | parent | next [-]

Depending on your definition of "learn", you can also use something akin to ChatGPT's Memory feature. When you teach it something, just have it take notes on how to do that thing and include its notes in the system prompt for next time. Much cheaper than fine-tuning. But still obviously far less efficient and effective than human learning.
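A minimal sketch of that notes-as-memory idea: persist the lessons the model writes down, then prepend them to the system prompt on the next session. All names here (`MemoryStore`, `remember`, `build_system_prompt`, the `memory.json` file) are hypothetical, not from any real agent framework.

```python
import json
from pathlib import Path

class MemoryStore:
    """Hypothetical note store: lessons survive across sessions as JSON."""

    def __init__(self, path: str = "memory.json"):
        self.path = Path(path)
        # Load previous sessions' notes if the file exists.
        self.notes = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, note: str) -> None:
        """Save a lesson the model (or user) articulated this session."""
        self.notes.append(note)
        self.path.write_text(json.dumps(self.notes))

    def build_system_prompt(self, base: str) -> str:
        """Prepend accumulated lessons to the base system prompt."""
        if not self.notes:
            return base
        lessons = "\n".join(f"- {n}" for n in self.notes)
        return f"{base}\n\nLessons from earlier sessions:\n{lessons}"
```

The point of the comparison above: this is much cheaper than fine-tuning because no weights change, but the "learning" only reaches as far as the context window.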

OtherShrezzing 2 hours ago | parent | prev [-]

I think it’s reasonable to say that different approaches to learning sit on some kind of spectrum, but that contemporary fine-tuning isn’t on that spectrum at all.

jstummbillig 2 hours ago | parent | prev [-]

Hold that thought.