bethekidyouwant a day ago

In what world can you not always break the response of an AI by feeding it a bunch of random junk?

xnx a day ago | parent | next [-]

Indeed. In what world can you not break any tool when deliberately misusing it?

lacoolj 6 hours ago | parent [-]

BRB getting an anvil

kgeist a day ago | parent | prev | next [-]

I mean, currently LLMs are stateless, and you can get rid of all the poisoned data just by starting a new conversation (context). But the OP introduces "long-term memory", where junk will accumulate over time.
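
A toy sketch of that difference (all names here are hypothetical, not from the paper):

    # Stateless chat: the only state is the context you resend each turn,
    # so starting a new conversation drops any poisoned tokens with it.
    context = ["junk prompt", "more junk"]
    context = []                     # new conversation -> clean slate

    # A long-term memory store persists across conversations, so junk
    # survives the reset and accumulates until it's explicitly curated.
    long_term_memory = ["junk fact learned last week"]
    assert long_term_memory          # still there after the context reset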

soerxpso a day ago | parent | next [-]

I believe you're misunderstanding what the OP means by "long-term" memory. From what I can tell, it's not actively modifying the weights of the underlying model; it just "remembers" things from a high number of tokens back in its context. The point is that this lets it recall something it read ~200 pages earlier in a very long context window, not that it can carry anything from one session into another clean session.

AlexCoventry a day ago | parent [-]

This model has fast weights, which actually are modified during inference.
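
For the unfamiliar, here is a minimal sketch of the idea, assuming a simple Hebbian/outer-product update rule as in classic fast-weight programmers; the paper's actual rule may differ:

    import numpy as np

    d = 8
    W = np.zeros((d, d))   # fast weights, updated at inference time
    lr = 0.1

    def write(key, value):
        # Store the association key -> value as a rank-1 update to W.
        # No backprop pass; just a local update during the forward pass.
        global W
        W += lr * np.outer(value, key)

    def read(query):
        # Retrieve whatever value was associated with a matching key.
        return W @ query

    rng = np.random.default_rng(0)
    k, v = rng.normal(size=d), rng.normal(size=d)
    write(k, v)
    print(read(k))   # roughly proportional to v

The slow weights (the pretrained network) stay frozen; only this small per-session matrix changes as tokens stream in.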

energy123 a day ago | parent [-]

Marketplace for fast weights inbound

dmix a day ago | parent | prev [-]

In something like Cursor, if it messes something up, you can click 'undo'. I'd imagine a small snapshot would only be persisted to memory if you keep its output, and even then it's mostly just a summary.

There are probably lots of small signals of "the user is happy with the output", and the longer the history, the more it will converge on what you want, including when the user says "don't do [x]", which overrides past instructions.

CooCooCaCha a day ago | parent | prev [-]

I mean ideally AI would be resilient to junk, don't you think?

vlovich123 a day ago | parent | next [-]

Humans are pretty vulnerable to junk so I’m not sure.

amarant a day ago | parent | prev [-]

Ideally, you'd run your own instance of this, I think.

I can see a product where you purchase a model that has basic training, and then, using the features outlined in the paper, it learns on the fly from your usage.

I can also see there being a secondary market for specially trained models, with long-term memory filled with some specific skill, done in some specific way. As a silly example, imagine buying a licence to Torvalds' OS coding assistant, ready to insult your PRs before you even commit them! (And possibly help you write code in Torvalds' style too.)

This would of course require Linus to use the model enough for it to learn; I won't comment on the likelihood of that happening. It's just a silly example, after all.