notepad0x90 | 2 hours ago
Isn't this more to do with how LLMs are trained for general-purpose use? Are LLMs built with a specific use and dataset in mind better? Like, if the dataset was fiction novels, would the output sound more literary? If it was social media, would it sound more click-baity and engaging? I've had AI be boring, but I've also seen things like original jokes that were legitimately funny. Maybe it's the prompts people use: they don't give the model enough semantic and dialectic direction to avoid being generic. IRL, we look at a person and get a feel for them and the situation to determine those things.