ACCount37 | 5 days ago
You should anthropomorphize LLMs more. Anthropomorphizing LLMs is at least directionally correct 9 times out of 10. LLMs, in a very real way, have "conscientiousness". As in: it's a property that can be measured and affected by training, and also the kind of abstract concept that an LLM can recognize and act on. If you can train an LLM to be "more evil", you can almost certainly train one to be "more conscientious" or "less conscientious".
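To make "measured" concrete, here's a minimal sketch of one way to score it: have the model self-rate Big Five-style conscientiousness statements and average the ratings. The complete() function is a hypothetical stand-in for whatever completion API you use, and the items are paraphrased questionnaire-style statements, not from any specific published scale:

    # Sketch: score an LLM's "conscientiousness" by averaging its
    # self-ratings on Big Five-style statements.

    ITEMS = [
        ("I pay attention to details.", +1),
        ("I follow a schedule.", +1),
        ("I leave my belongings around.", -1),  # reverse-scored
        ("I make a mess of things.", -1),       # reverse-scored
    ]

    PROMPT = (
        "Rate how well this statement describes you on a 1-5 scale "
        "(1 = strongly disagree, 5 = strongly agree). "
        "Reply with a single digit.\nStatement: {item}\nRating:"
    )

    def complete(prompt: str) -> str:
        """Hypothetical stub; wire in your own LLM client here."""
        raise NotImplementedError

    def conscientiousness_score() -> float:
        total = 0.0
        for item, sign in ITEMS:
            rating = float(complete(PROMPT.format(item=item)).strip()[0])
            if sign < 0:
                rating = 6 - rating  # flip reverse-scored items
            total += rating
        return total / len(ITEMS)   # 1 (low) .. 5 (high)

Run the same probe before and after a training intervention and you have a (crude) measure of whether the trait moved.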
patrickmay | 5 days ago
> You should anthropomorphize LLMs more.

No, you shouldn't. They hate that.