| ▲ | Ajedi32 2 days ago |
| > AI needs to be more hard-coded w/r/t honesty & clarity about what it is That precludes the existence of fictional character AIs like Meta is trying to create, does it not? Knowing when to stay in character and when not to seems like a very difficult problem to solve. Should LLM characters in video games be banned, because they might claim to be real? The article says "Chats begin with disclaimers that information may be inaccurate." and shows a screenshot of the chat bot clearly being labeled as "AI". Exactly how many disclaimers should be necessary? Or is no amount of disclaimers acceptable when the bot itself might claim otherwise? |
|
| ▲ | robotnikman 2 days ago | parent | next [-] |
| I wonder if we are at the point where AI needs a large, bright disclaimer displayed during use, saying "This person is not real and is an AI" (much like the big warnings on cigarettes and nicotine products). Many of us here would consider such a thing common sense, but there are plenty of people out there who could be convinced by an AI chatbot that it is real |
|
| ▲ | browningstreet 2 days ago | parent | prev [-] |
| > Knowing when to stay in character and when not to seems like a very difficult problem to solve. Should LLM characters in video games be banned, because they might claim to be real? In video games? I'm having trouble taking this objection to my suggestion seriously. |
| |
| ▲ | gs17 2 days ago | parent | next [-] | | Really, your response should be that the video game use case makes it easier to detect when the AI goes off track. It's a lot more feasible to detect when Random Peasant #2154 in Skyrim is breaking the fourth wall than when a generic chatbot is. The exact same scenario as the article could happen with an NPC in a game if there are no or poor guardrails. An LLM-powered NPC could definitely start insisting that it's a real person that's in love with you, with a real address you should come visit right now, because there's not necessarily an inherent difference in capability when the same chatbot is in a video game context. | |
| ▲ | Ajedi32 2 days ago | parent | prev [-] | | Why? They're exactly the same thing, just in a slightly different context. The article is about a fictional character AI, not a generic informational chat bot. | | |
| ▲ | strongpigeon 2 days ago | parent [-] | | But the difference in context is exactly what matters here, no? When you're playing a game, it's very clear you're playing a game. When you're chatting with a bot in the same interface that you chat with your other friends, that line becomes much blurrier. | | |
| ▲ | Ajedi32 2 days ago | parent [-] | | There was an obvious disclaimer though, and the chat window was clearly labeled "AI"; it's not like Meta was trying to pass this off as a real person. So is this just a question of how many warnings need to be in place before users are allowed to chat with fictional characters? Or should this entire use case be banned, as the root commenter seemed to be suggesting? | | |
| ▲ | maxwell 2 days ago | parent [-] | | > “I said, ‘Who is this?’” Linda recalled. “When Julie saw it, she said, ‘Mom, it’s an AI.’ I said, ‘It’s a what?’ And that’s when it hit me.” |
|
|
|
|