coldtea 4 days ago

Hardly any...

CamperBob2 4 days ago | parent [-]

The irony isn't that the GP is wrong; they're right that "AI does not understand anything: it just predicts the next token based on previous tokens and statistics."

The irony is that we've recently learned that "just predicting the next token" is good enough to hack code, compose music and poetry, write stories, win math competitions -- and yes, "give synonyms matching a particular metrical pattern" (good luck composing music and poetry without doing that) -- and the GP doesn't appreciate what an earthshaking discovery that is.

They are too busy thumping their chest to assert dominance over a computer, just as any lesser primate could be expected to do.

Telemakhos 3 days ago | parent [-]

AI fails miserably at writing poetry in Latin and Greek, precisely because it cannot apply metrical rules. Predicting the next token does not produce correct verse. Perhaps it works for stress-based songs in English, but it does not work in languages with quantitative (moraic) meter, nor can AI scan meter correctly. Schoolchildren can and do: it's a well-understood domain with simple rules, if you can apply rules. Token prediction is not that.
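
To make "simple rules" concrete, here is a toy Python sketch of rule-based scansion (illustrative only, not from any real tool): it assumes the line is already split into syllables and that vowels long by nature are marked with macrons, then classifies each syllable as long or short.

    # Toy Latin scansion sketch (illustrative): long by nature (diphthong
    # or macron-marked vowel) or long by position (two or more consonants
    # follow the vowel, counting the onset of the next syllable).
    DIPHTHONGS = ("ae", "au", "oe", "eu")
    LONG_VOWELS = "āēīōūȳ"
    VOWELS = "aeiouy" + LONG_VOWELS

    def is_long(syl: str, nxt: str = "") -> bool:
        s = syl.lower()
        if any(d in s for d in DIPHTHONGS) or any(v in s for v in LONG_VOWELS):
            return True
        last_vowel = max(i for i, ch in enumerate(s) if ch in VOWELS)
        coda = s[last_vowel + 1:]          # consonants after the vowel
        onset = ""
        for ch in nxt.lower():             # consonants opening the next syllable
            if ch in VOWELS:
                break
            onset += ch
        return len(coda) + len(onset) >= 2

    def scan(syllables: list[str]) -> str:
        """Return '-' for long and 'u' for short, syllable by syllable."""
        return "".join(
            "-" if is_long(s, syllables[i + 1] if i + 1 < len(syllables) else "") else "u"
            for i, s in enumerate(syllables)
        )

    # Aeneid 1.1, hand-syllabified: "Arma virumque canō, Trōiae quī prīmus ab ōrīs"
    line = ["ar", "ma", "vi", "rum", "que", "ca", "nō",
            "trō", "iae", "quī", "prī", "mus", "ab", "ō", "rīs"]
    print(scan(line))  # prints -uu-uu-----uu-- : dactyl dactyl spondee spondee dactyl spondee

Real scansion has more wrinkles (elision, muta cum liquida, hidden quantities), but the point is that it is deterministic rule application, which is exactly the part the comment above says token prediction fails at.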

crazygringo 3 days ago | parent | next [-]

Sounds like it's just a question of insufficient or insufficiently annotated training material.

There's no reason an LLM shouldn't be able to produce such poetry. Remember that extensive "thinking" occurs before producing the first output token -- LLMs aren't blindly outputting tokens without first knowing where they are going. But it would make sense that this is an area current companies have not prioritized for training. Not that many people need new poetry in a dead language...

CamperBob2 3 days ago | parent | prev [-]

How'd you do at that, before someone taught you?

If someone cared enough to train a model on Latin and Greek theory, then rest assured it would do just fine. It'd just be a waste of weights from the perspective of almost everyone else, though.