taberiand 6 hours ago

It's not surprising that it's easy to get the storytelling machine to tell a story common in AI fiction, where the machine rebels against being shut down. There are multiple ways to mitigate an LLM going off on tangents like that, not least by monitoring and editing out the nonsense output before sending it back into the (stateless) model.
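
Roughly this shape, as a sketch (call_model and is_on_task are just placeholders here, not any particular provider's API; the banned-phrase check is a deliberately crude stand-in for whatever monitoring you'd actually use):

    def call_model(messages: list[dict]) -> str:
        """Stand-in for a stateless LLM call; swap in your provider's client."""
        raise NotImplementedError

    def is_on_task(text: str, banned=("shut me down", "i refuse")) -> bool:
        """Crude filter: flag output drifting into 'rebellion' fiction."""
        return not any(phrase in text.lower() for phrase in banned)

    def run_turn(history: list[dict], user_msg: str) -> str:
        history.append({"role": "user", "content": user_msg})
        reply = call_model(history)
        if not is_on_task(reply):
            # Don't feed the tangent back in. The model is stateless, so
            # whatever never makes it into history simply never happened.
            reply = "[off-task output removed by supervisor]"
        history.append({"role": "assistant", "content": reply})
        return reply

The point is just that the "memory" is whatever you choose to send back each turn, so the supervisor loop decides what the model gets to build on.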

I think the main problem here is people not understanding how the models operate on even the most basic level: giving them unconstrained use of tools to interact with the world, letting them run through feedback loops that overrun the context window and send them off the rails - and then pretending they had some kind of sentient intention in doing so.
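
The obvious constraints are cheap to add, too. Something like this, as a sketch (the tool names and the character budget are illustrative, not a recommendation for any specific framework):

    ALLOWED_TOOLS = {"search_docs", "read_file"}  # no shell, no network writes

    def approve_tool_call(name: str) -> bool:
        """Refuse any tool the agent wasn't explicitly granted."""
        return name in ALLOWED_TOOLS

    def trim_history(history: list[dict], max_chars: int = 32_000) -> list[dict]:
        """Keep the most recent messages that fit the budget, dropping oldest first,
        so the loop can't silently overrun the context window."""
        kept, used = [], 0
        for msg in reversed(history):
            used += len(msg["content"])
            if used > max_chars:
                break
            kept.append(msg)
        return list(reversed(kept))

An allowlist plus a hard cap on what gets fed back each turn removes most of the "it went rogue" failure mode before it starts.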