andrepd 16 hours ago:
It's not just that it's word salad, it's also that it's exactly the same. There's a multi-trillion dollar attempt to replace your individuality with bland amorphous slop """content""". This doesn't bother you in the slightest?

pmg101 16 hours ago:
I now have a visceral reaction to being told that I'm ABSOLUTELY RIGHT!, for example. It seemed an innocuous phrase before -- rather like em dashes -- but has now become grating and meaningless. Robotic and no longer human.

flir 15 hours ago:
For what most of us are using it for (generating code), that's not a bad outcome. This audience might have less of a problem with it than the general population. Whether we have the discipline to limit our use of the tool to its strengths... well, I doubt it. Just look at how social media turned out.

(Idle thought: I wonder if a model fine-tuned on one specific author would give more "original" titles.)

cantor_S_drug 16 hours ago:
This is the default setting. The true test would be if LLMs CAN'T produce distinct outputs. I think this problem can be solved by prompt engineering. Has anyone tried this with Kimi K2?
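
A rough sketch of what that prompt engineering might look like, assuming an OpenAI-compatible chat completions endpoint. The base URL, model name, and system prompt below are placeholders, not confirmed Kimi K2 settings; the idea is simply higher temperature plus a presence penalty and an explicit style instruction to push sampling away from the default house voice.

    # Hypothetical sketch, not a verified recipe: assumes an OpenAI-compatible
    # chat endpoint. base_url and model are placeholders, not real Kimi K2 values.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://example-provider.invalid/v1",  # placeholder endpoint
        api_key="YOUR_API_KEY",
    )

    resp = client.chat.completions.create(
        model="kimi-k2-placeholder",   # placeholder model name
        temperature=1.0,               # more sampling variety than conservative defaults
        presence_penalty=0.6,          # nudge the model away from stock phrasing
        messages=[
            {"role": "system",
             "content": ("Write in a terse, idiosyncratic voice. Avoid stock openers "
                         "like 'You're absolutely right', and avoid generic titles.")},
            {"role": "user",
             "content": "Suggest five titles for a post about LLM prose all sounding the same."},
        ],
    )

    print(resp.choices[0].message.content)

Whether that actually yields distinct outputs, rather than the same slop in a different costume, is exactly the open question.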