internet_points 2 hours ago
I just sent TFA to a colleague of mine who was experimenting with LLMs for auto-correcting human-written text, since she noticed the same phenomenon: it would not only correct mistakes, but also slightly nudge words towards more common synonyms. It would often lose important nuances, so "shun" would be corrected to "avoid", "divulge" would become "disclose", etc.