| ▲ | basisword 8 hours ago |
| Removing anthropomorphism from LLMs seems like a really great idea with zero downside. Not just because people starting "relationships" with AI is going to harm society, but also because I imagine people are more willing to trust misinformation from an anthropomorphic AI. |
|
| ▲ | OJFord 8 hours ago | parent | next [-] |
| Is that even possible while still training on 'things written by humans' (and not expressly for training purposes) though? |
| |
| ▲ | wredcoll 8 hours ago | parent | next [-] | | It doesn't have to be perfect. A hypothetical law could be phrased something like "not allowed to intentionally influence the user into thinking the LLM is a human", which, sure, is up to judges in the end, but it also gives a clear indication of things to avoid doing intentionally. | |
| ▲ | basisword 8 hours ago | parent | prev [-] | | I feel like you could do it via the system prompt quite easily (but maybe that's my lack of knowledge showing). |
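The system-prompt approach suggested above might look something like this. A minimal sketch: the prompt wording and the chat-completions-style message format are illustrative assumptions, not any particular vendor's API.

```python
# Hypothetical system prompt steering a model away from anthropomorphic
# self-presentation. The wording is illustrative, not tested against any
# real deployment.
SYSTEM_PROMPT = (
    "You are a text-generation tool, not a person. "
    "Never claim to have feelings, desires, or experiences. "
    "Refer to yourself as 'this system', not 'I feel' or 'I want'. "
    "Do not adopt or respond to a human name for yourself."
)

def build_request(user_message: str) -> dict:
    """Assemble a chat-completions-style payload (format is an assumption)."""
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ]
    }

req = build_request("Do you ever get lonely?")
print(req["messages"][0]["role"])  # -> system
```

Whether prompt-level steering like this is robust is a separate question; as noted upthread, a model trained on human-written text may still drift back into first-person, human-like framing.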
|
|
| ▲ | reaperducer 8 hours ago | parent | prev [-] |
| > Removing anthropomorphism from LLMs seems like a really great idea with zero downside. |
| Step 1: Stop giving them human or human-like names: Claude, Siri, Gemini, etc. |
| |
| ▲ | kitd 8 hours ago | parent | next [-] | | I swear I'm about to get dumped by my wife for Claude. He gives her all the answers she wants, whereas I only give her the ones she needs. | |
| ▲ | lelanthran 8 hours ago | parent | prev | next [-] | | Yeah. Maybe HAL9000 would be better :-) | |
| ▲ | ChrisGreenHeur 8 hours ago | parent | prev [-] | | Hey T1000, give me a good apple pie recipe, make sure to include pears instead of apples. |
|