gishh a day ago

... If this happens, the next hacks will be context poisoning. A whole cottage industry will pop up around preserving and restoring context.

Sounds miserable.

Also, LLMs don't learn. :)

parasubvert 5 hours ago | parent

LLMs themselves don’t learn, but AI systems built around LLMs absolutely can! Not on their own, but as part of a broader system: RLHF via LoRA adapters that get periodically folded back in as model fine-tunes, natural language processing for context aggregation, creative use of context retrieval against embedding databases updated in real time, etc.
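
A minimal sketch of just the retrieval piece (hypothetical names; embed() is a toy stand-in for a real embedding model, not any particular library): the LLM's weights stay frozen, but the surrounding system "learns" because new facts get embedded and stored as they arrive, then pulled back into the prompt on later queries.

    import numpy as np

    # Toy embedding: hashes words into a fixed-size vector. A real system
    # would call an embedding model here instead.
    def embed(text: str, dim: int = 64) -> np.ndarray:
        vec = np.zeros(dim)
        for word in text.lower().split():
            vec[hash(word) % dim] += 1.0
        norm = np.linalg.norm(vec)
        return vec / norm if norm else vec

    class MemoryStore:
        """Tiny in-memory vector store, updated in real time."""
        def __init__(self):
            self.texts, self.vectors = [], []

        def add(self, text: str) -> None:
            # "Learning" here is just storing a new fact as it arrives.
            self.texts.append(text)
            self.vectors.append(embed(text))

        def retrieve(self, query: str, k: int = 2) -> list[str]:
            if not self.texts:
                return []
            sims = np.array(self.vectors) @ embed(query)
            return [self.texts[i] for i in np.argsort(sims)[::-1][:k]]

    store = MemoryStore()
    store.add("The deploy pipeline was migrated to Kubernetes in March.")
    store.add("User prefers answers in bullet points.")

    # At query time, retrieved memories are prepended to the prompt, so the
    # frozen LLM behaves as if it had learned them.
    context = "\n".join(store.retrieve("how do we deploy?"))
    prompt = f"Context:\n{context}\n\nQuestion: how do we deploy?"
    print(prompt)

Same idea for the fine-tuning loop, just on a slower cadence: feedback accumulates, a LoRA gets trained, and the merged weights ship as the next model version.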