codexon · 6 hours ago
AI music from Suno sounds indistinguishable from non-AI-generated music to me. In terms of how well it works, the quality of AI music is far better than that of AI art or AI code. In art there are noticeable glitches like extra fingers. Code can call non-existent functions, fail to do what it is supposed to do, or have security issues or memory leaks. From what I can tell, there is no such deal-breaker for AI music.
yellowapple · 19 minutes ago
> Code can call non-existent functions, fail to do what it is supposed to do, or have security issues or memory leaks.

I guess what I'm getting at is that, since programmers are typically more inclined than the average person to understand how AI works, programmers are ahead of the curve when it comes to understanding those pitfalls and structuring their workflows to minimize them, i.e. to play to the strengths and weaknesses of LLMs. A "fancy" autocomplete vs. a "fancy" linter vs. something pretending to be a junior programmer are all going to have very different rates of success.

The issue hindering art and music is that most people using generative AI for art and music are using it analogously to the "something pretending to be a junior programmer" role rather than the "fancy autocomplete" or "fancy linter" roles. That is: they're typically using AI to generate works end-to-end, whereas (non-vibe-coder) programmers are typically using AI in far narrower scopes, with more direct control over the final output.

I think the quality of AI-based art and music will improve as more narrowly scoped AI-driven workflows catch on among actually-skilled artists and musicians, and the result will be works that are genuinely different from existing works, rather than works that only cheaply imitate some statistical average of existing works.
GoatInGrey · 3 hours ago
The tells in music are there. The most common:

- Vocals have a subtle constant hiss to them.
- Voices and instruments sound different in the second half than they did in the first.
- The hiss filter gets more prominent and affects all instruments toward the end of the song.
- Auditory artifacts like volume jumps or random notes/noises near transitions.

More subjective tells: drums are hissy and weak; lyrics are generic or weird, like "Went to the grocery store to buy coffee beans for my sadness"; loudness and density are weirdly uniform from start to finish; drops/climaxes are underwhelming; and (if you've listened to enough of them) there's a general uncanny feel to them.

I've generated about 70 hours of AI music and have listened to all of the songs at least once, so it's become intuitive for me to pick them out.

Some examples for listening for the hiss filter:

https://suno.com/s/qvUKLxVV6HDifknq (easiest to hear at 0:00, with the inhale)
https://suno.com/s/QZx1t0aii0HVZYGx (really strong at 0:09)

Some examples for more hiss and other (subjective) tells like weak drums:
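For what it's worth, two of the tells above (the constant hiss and the uniform loudness) lend themselves to a rough numeric check. Below is a minimal Python sketch using librosa. It is a heuristic under stated assumptions, not a validated detector: the 10 kHz band edge and the use of RMS spread as a proxy for uniform loudness are illustrative choices, not values measured from Suno output.

    # Illustrative heuristic only -- not a validated AI-music detector.
    # Assumed knobs: the 10 kHz "hiss" band edge and frame-wise RMS spread
    # as a proxy for uniform loudness are arbitrary illustrative choices.
    import librosa
    import numpy as np

    def loudness_spread_db(path):
        # Standard deviation of frame-wise RMS loudness in dB. Human-mixed
        # tracks usually show more dynamic variation; a very low spread is
        # one crude signal of "weirdly uniform loudness and density".
        y, sr = librosa.load(path, sr=None, mono=True)
        rms = librosa.feature.rms(y=y, frame_length=2048, hop_length=512)[0]
        db = 20.0 * np.log10(rms + 1e-10)
        return float(np.std(db))

    def high_band_energy_ratio(path, cutoff_hz=10000):
        # Fraction of total spectral energy above cutoff_hz. A persistently
        # high, steady value across the whole track is one rough proxy for
        # the constant background hiss described above.
        y, sr = librosa.load(path, sr=None, mono=True)
        S = np.abs(librosa.stft(y)) ** 2          # power spectrogram
        freqs = librosa.fft_frequencies(sr=sr)    # bin centers, default n_fft
        hi = S[freqs >= cutoff_hz].sum()
        return float(hi / (S.sum() + 1e-10))

Neither number is meaningful as an absolute threshold; comparing a known-human track against a suspected AI track from the same genre is the more sensible use.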