CamperBob2 4 days ago
The irony isn't that the GP is right, because they are. "AI does not understand anything: it just predicts the next token based on previous tokens and statistics." The irony is that we've recently learned that "just predicting the next token" is good enough to hack code, compose music and poetry, write stories, win math competitions -- and yes, "give synonyms matching a particular metrical pattern" (good luck composing music and poetry without doing that) -- and the GP doesn't appreciate what an earthshaking discovery that is. They are too busy thumping their chest to assert dominance over a computer, just as any lesser primate could be expected to do.
Telemakhos 3 days ago
AI fails miserably at writing poetry in Latin and Greek, precisely because it cannot apply metrical rules. Predicting the next token does not produce correct verse. Perhaps it works for stress-based songs in English, but it does not in languages with quantitative (moraic) meter, nor can AI scan meter correctly. Schoolchildren can and do: it's a well-understood domain with simple rules, if you can apply rules.
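To illustrate the kind of rule application being described, here is a minimal, hypothetical sketch of the two core quantity rules a schoolchild learns for Latin scansion (long by nature, long by position). It assumes the verse is already split into syllables with macrons marking vowels long by nature, and it ignores elision, muta cum liquida, and other refinements; the function name and example are illustrative only, not anyone's actual implementation.

    # Toy Latin syllable-quantity marker: long by nature or long by position.
    # Assumes pre-syllabified input with macrons on vowels long by nature.
    LONG_VOWELS = set("āēīōūȳ")
    DIPHTHONGS = {"ae", "au", "ei", "eu", "oe", "ui"}
    VOWELS = set("aeiouyāēīōūȳ")

    def syllable_quantities(syllables):
        """Return '-' (long) or 'u' (short) for each syllable in a line."""
        marks = []
        for i, syl in enumerate(syllables):
            nucleus = [c for c in syl if c in VOWELS]
            long_by_nature = (
                any(c in LONG_VOWELS for c in nucleus)
                or any(d in syl for d in DIPHTHONGS)
            )
            # Long by position: vowel followed by two or more consonants,
            # counting the onset of the following syllable.
            tail = syl + (syllables[i + 1] if i + 1 < len(syllables) else "")
            last_vowel = max(idx for idx, c in enumerate(syl) if c in VOWELS)
            consonants = 0
            for c in tail[last_vowel + 1:]:
                if c in VOWELS:
                    break
                consonants += 1
            marks.append("-" if long_by_nature or consonants >= 2 else "u")
        return marks

    # Opening of the Aeneid: "Arma virumque canō..."
    # Expected: ['-', 'u', 'u', '-', 'u', 'u', '-'] (dactyl, dactyl, long)
    print(syllable_quantities(["ar", "ma", "vi", "rum", "que", "ca", "nō"]))

The point of the sketch is that the rules themselves are short, deterministic, and checkable, which is exactly why failure to apply them is so conspicuous.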