Telemakhos | 3 days ago
AI fails miserably at writing poetry in Latin and Greek, precisely because it cannot apply metrical rules. Predicting the next token does not produce correct verse. Perhaps it works for stress-based songs in English, but it does not in languages with quantitative (moraic) meter, nor can AI scan meter correctly. Schoolchildren can and do: it's a well-understood domain with simple rules, if you can apply rules. Token prediction is not that.
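Those rules really can be stated mechanically. A minimal sketch (my own illustration, not anything from this thread) of Latin syllable quantity: a syllable is long "by nature" (long vowel or diphthong) or long "by position" (its vowel followed by two or more consonants, counting across syllable boundaries). It deliberately ignores elision, muta cum liquida, and other refinements:

```python
# Hypothetical sketch of Latin syllable-quantity rules.
# Simplifications: no elision, no muta cum liquida, 'qu' counts as
# one consonant, and macrons mark vowels long by nature.

DIPHTHONGS = {"ae", "au", "oe", "ei", "eu", "ui"}
LONG_VOWELS = set("āēīōū")
VOWELS = set("aeiouāēīōū")

def nucleus(syl):
    """Return (start, end) indices of the vowel nucleus in a syllable."""
    i = 0
    while i < len(syl):
        # the 'u' in 'qu' belongs to the consonant, not the nucleus
        if syl[i] in VOWELS and not (syl[i] == "u" and i > 0 and syl[i-1] == "q"):
            if syl[i:i+2] in DIPHTHONGS:
                return i, i + 2
            return i, i + 1
        i += 1
    raise ValueError(f"no vowel in syllable {syl!r}")

def scan(syllables):
    """Mark each syllable long ('-') or short ('u')."""
    marks = []
    for idx, syl in enumerate(syllables):
        start, end = nucleus(syl)
        if set(syl[start:end]) & LONG_VOWELS or syl[start:end] in DIPHTHONGS:
            marks.append("-")  # long by nature
            continue
        # consonants after the nucleus: this syllable's coda + next onset
        following = syl[end:]
        if idx + 1 < len(syllables):
            nxt = syllables[idx + 1]
            n_start, _ = nucleus(nxt)
            following += nxt[:n_start]
        following = following.replace("qu", "q")  # 'qu' counts once
        marks.append("-" if len(following) >= 2 else "u")  # long by position
    return "".join(marks)

# Opening of the Aeneid, "arma virumque canō": - u u | - u u | -
print(scan(["ar", "ma", "vi", "rum", "que", "ca", "nō"]))  # → -uu-uu-
```

Even this toy version gets the Aeneid's first hemistich right, which is the point: the rules are deterministic and checkable, so scansion is verifiable in a way next-token sampling is not.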
crazygringo | 3 days ago
Sounds like it's just a question of insufficient training material, or training material that is insufficiently annotated. There's no reason an LLM shouldn't be able to produce such poetry. Remember that extensive "thinking" occurs before producing the first output token -- LLMs aren't blindly outputting tokens without first knowing where they are going. But it would make sense that this is an area current companies have not prioritized for training. Not that many people need new poetry in a dead language...
CamperBob2 | 3 days ago
How'd you do at that, before someone taught you? If someone cared enough to train a model on Latin and Greek theory, then rest assured it would do just fine. It'd just be a waste of weights from the perspective of almost everyone else, though.