jrowen 2 days ago
> If we assume that large language models are being used to generate these texts, and if those models are able to faithfully and believably parody our long-standing assumptions of what those texts are expected to sound like, then does it call into question the entire practice of intellectualizing an artist's work in their unique voice?

I would say no. Authenticity is always in question. If the artist pasted LLM output wholesale, that was the choice they made to represent their work. Maybe they felt they expressed themselves in the prompt. What if they used a thesaurus, or a ghostwriter, or plagiarized something, or overheard someone say something they liked? It's up to the viewer to decide whether they find it meaningful or resonant. That's the beauty of art. Intent matters, in that it can affect the interpretation, but ultimately any interpretation is valid.