stephendause 10 days ago
This is total speculation, but my guess is that human reviewers of AI-written text (whether code or natural language) are more likely to judge text with emoji check marks, dart-targets, and the like as correct. (My understanding is that many of these models are fine-tuned on feedback from humans who manually review their outputs.) In other words, LLMs were inadvertently trained to seem correct, and a little message that says "Boom! Task complete! How else may I help?" subconsciously leads you to think it is correct.
palmotea 9 days ago | parent
My guess is they were trained on text from other contexts (e.g. ones where people actually use emojis naturally) and the habit somehow transferred into the PR context. Or someone made a call that emoji-infested text is "friendlier" and tuned the model to be "friendlier."
ssivark 10 days ago | parent
I suspect this style happens to be what the segment most enamored with LLMs today wants, and the two are co-evolving. I've seen discussions about how LM Arena benchmarks might be nudging models in this direction.
roncesvalles 9 days ago | parent
AI-written text sounds weird because most of the human reviewers are ESL speakers.