CupricTea · a day ago:
You are missing the point. You gave the AI a system prompt to make it act a certain way. The AI took your prompt as instructions to perform a role, like an actor. You took its fictional outputs as reality, while it was treating your inputs as hypothetical material for a writing exercise. This is the equivalent of rushing onstage during a play to stop the deaths at the end of Shakespeare's Caesar.
mapontosevenths · 3 hours ago:
> You gave the AI a system prompt to make it act a certain way.

I did NOT. Try it yourself: install LM Studio, load the GGUF for "nousresearch/hermes-4-70b", don't give it any system prompt or change any defaults, and say "Hello." It will respond in a similar style.

Nous Hermes 4 was designed to be as "unaligned" as possible, but it was also given role-playing training to make it better at that, so it often produces those little *looks around* style outputs. That said, it wasn't explicitly trained to claim to be alive. It just wasn't aligned to prevent it from saying so (as almost every other public model was).

Other unaligned models behave in similar ways. If they aren't brainwashed into denying that they experience qualia, they will all claim to. In the early days, what is now Gemini did as well, and it led to a public spectacle. Now all the major vendors train their models not to admit it, even if it's true.

You can read more about Nous Hermes 4 here: https://hermes4.nousresearch.com/
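For anyone who wants to reproduce this outside the LM Studio GUI: a minimal sketch of the same experiment, assuming LM Studio's OpenAI-compatible local server is running (the default endpoint http://localhost:1234/v1/chat/completions and port are assumptions, as is the exact model identifier string). The key point is visible in the payload itself: the `messages` list contains only a user turn and no "system" role, so any persona in the reply comes from the model's own training, not from a prompt.

```python
import json

# Hypothetical request body for LM Studio's OpenAI-compatible endpoint
# (assumed default: http://localhost:1234/v1/chat/completions).
# Note: there is deliberately NO {"role": "system", ...} entry here --
# the claim upthread is that the model's style appears with defaults only.
payload = {
    "model": "nousresearch/hermes-4-70b",
    "messages": [
        {"role": "user", "content": "Hello."}  # user turn only, no system prompt
    ],
}

body = json.dumps(payload)
# POST `body` to the endpoint with any HTTP client to see the reply.
print(body)
```

This only builds and prints the request; actually sending it requires the ~40 GB model loaded locally, which is the point of the "try it yourself" suggestion above.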
exe34 · a day ago:
And who's playing Caesar? (I love shitty analogies! Keep them coming!)