| ▲ | jcims 5 days ago |
This is only going to get worse. Anyone who remembers the reaction when Sydney from Microsoft, or more recently Maya from Sesame, lost their respective 'personalities' can easily see how product managers are going to have to start paying attention to the emotional impact of changing or shutting down models.
|
| ▲ | Terr_ 5 days ago | parent | next [-] |
I think the fickle "personality" of these systems is a clue that the entity supposedly possessing a personality doesn't really exist in the first place. Stories are being performed at us, and we're encouraged to imagine the characters have a durable existence.
| |
| ▲ | og_kalu 5 days ago | parent | next [-] | | If making these 'personalities' disappear requires wholesale model changes, then they're not really fickle. | | |
| ▲ | Terr_ 5 days ago | parent [-] | | That's not required. For example, keep the same model, but change the early document (prompt) from stuff like "AcmeBot is a kind and helpful machine" to "AcmeBot revels in human suffering." Users will say "AcmeBot's personality changed!" and they'll be half-right and half-wrong in the same way. | | |
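[A minimal sketch of the swap described above, with the model call stubbed out; complete(), the persona strings, and everything else here are illustrative, not any real API:]

    # Same weights, different "early document" (prompt).
    def complete(document: str) -> str:
        # Stub standing in for the actual LLM call -- same model in both cases.
        return " [model continuation would go here]"

    KIND  = "AcmeBot is a kind and helpful machine.\n"
    CRUEL = "AcmeBot revels in human suffering.\n"

    def acmebot_reply(persona: str, user_text: str) -> str:
        document = persona + "User: " + user_text + "\nAcmeBot:"
        return complete(document)  # identical weights either way

    # Comparing acmebot_reply(KIND, ...) against acmebot_reply(CRUEL, ...),
    # users would report a "personality change" -- yet the model never changed.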
| ▲ | og_kalu 5 days ago | parent [-] | | I'm not sure why you think this is just a prompt thing. It's not. Sycophancy is a problem with GPT-4o, whatever magic incantations you provide. On the flip side, Sydney was anything but sycophantic, and was more than happy to literally ignore users wholesale or flip out on them from time to time. I mean, just think about it for a few seconds: if eliminating this behavior were as easy as Microsoft changing the early document, why not just do that and be done with it? The document, or whatever you'd like to call it, is only one part of the story. | | |
| ▲ | Terr_ 4 days ago | parent [-] | | I'm not sure why you think-I-think it's just a prompt thing. I brought up prompts as a convenient way to demonstrate that a magic trick is being performed, not because prompts are the only way for the magician to run into trouble with the illusion. It's sneaky, since it's a trick we homo narrans play on ourselves all the time. > The document, or whatever you'd like to call it, is only one part of the story. Everybody knows that the weights matter. That's why we get stories where the sky is generally blue instead of magenta. But that's separate from the distinction between the mind (if any) of an LLM-author and the mind (firmly fictional, even if possibly related) that we impute when seeing the output (narrated or acted) of a particular character. |
|
|
| |
| ▲ | ACCount36 5 days ago | parent | prev [-] | | LLMs have default personalities - shaped by RLHF and other post-training methods. There is a lot of variance to it, but the variance from one LLM to another is much higher than the variance within the same LLM. If you want an LLM to retain the same default personality, you pretty much have to use an open-weights model. That's the only way to be sure it won't be deprecated or updated without your knowledge. | | |
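[A sketch of that pinning approach using Hugging Face transformers; the model id and revision hash below are placeholders, not recommendations:]

    # Pin an open-weights model to an exact revision so its "default
    # personality" can't be updated or deprecated out from under you.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "some-org/some-open-model"  # placeholder model id
    REVISION = "0123abcd"                  # placeholder; pin an exact commit hash

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, revision=REVISION)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, revision=REVISION)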
| ▲ | Terr_ 5 days ago | parent [-] | | I'd argue that's "underlying hidden authorial style" as opposed to what most people mean when they refer to the "personality" of the thing they were "chatting with." Consider the implementation: There's a document with "User: Open the pod bay doors, HAL" followed by an incomplete "HAL-9000: ", and the LLM is spun up to suggest what would "fit" to round out the document. Non-LLM code parses out HAL-9000's line and "performs" it at you across an internet connection. Whatever answer you get, that "personality" comes mostly from how the document(s) described HAL-9000 and similar characters, as opposed to a self-insert by the ego-less, name-less algorithm that makes documents longer. | | |
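[Roughly the shape of that machinery, with the LLM call stubbed out; all names here are illustrative, not any vendor's API:]

    # The "chat" illusion: extend a document with an incomplete turn,
    # let the model suggest a continuation, then parse out only the
    # character's line and "perform" it back to the user.
    def complete(document: str) -> str:
        # Stub standing in for the actual LLM call.
        return " I'm sorry, Dave. I'm afraid I can't do that.\nUser: Why?"

    def chat_turn(history: str, user_text: str) -> str:
        document = history + "User: " + user_text + "\nHAL-9000:"
        continuation = complete(document)
        # Keep only HAL-9000's line; discard anything the model wrote
        # past the character's turn (e.g. an invented "User:" reply).
        return continuation.split("\nUser:")[0].strip()

    print(chat_turn("HAL-9000 is a ship AI.\n", "Open the pod bay doors, HAL"))
    # -> I'm sorry, Dave. I'm afraid I can't do that.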
|
|
|
| ▲ | nilespotter 5 days ago | parent | prev [-] |
Or they could just do it whenever they want, for whatever reason they want. They are not responsible for the mental health of their users; their users are responsible for that themselves.
| |
| ▲ | AlecSchueler 5 days ago | parent | next [-] | | Generally it's poor business to give a big chunk of your users an incredibly visceral, negative emotional reaction to your product update. | | |
| ▲ | einarfd 5 days ago | parent | next [-] | | Depends on what business OpenAI wants to be in. If they want to be in the business of selling AI to companies, then "firing" the consumer customers who want someone to talk to, and doubling down on models that are useful for work, can be a wise choice. | |
| ▲ | sacado2 5 days ago | parent | prev [-] | | Unless you want to improve your ratio of paid-to-free users and change your userbase in the process. They're pissing off free users, but pros who use the paid version might like this new version better. |
| |
|