bitbuilder 9 hours ago
As someone who's both a software engineer and a music producer, it's been really interesting to watch the parallel progression of AI advancements in both areas, as well as my own sentiments toward those advancements.

I've always been pretty bullish about using LLMs to help with coding. My comment history here should verify that. I just thought it was insanely cool tech, and I found I was rediscovering the joy of programming when I could delegate all the tedious crap I could never get myself to do. Side projects that had languished for years were finally getting somewhere near "done". I also found that in many cases it was actually smarter than me, finding cleaner and more elegant solutions to problems than I ever would have. And that was great!

The first time I played with Suno, though, it was all so different. I felt deflated. What was the point in making music if a robot could do it better? Yes, the first models were crap, but the writing was on the wall. Music is all about pattern recognition and repetition, after all; why wouldn't the robots be great at it? I suddenly knew how all those visual artists felt once the image models started rapidly improving.

So it's been interesting trying to dissect why I thought coding models were great, and creative models... well, depressing. Part of it, in my case, was that being the guy who made cool music was always part of my identity, a part of me I was proud of. Coding, not so much. Don't get me wrong, I've had some pretty big career achievements that I'm very proud of, and I love coding. But at the end of the day, the lines of code were just a means to an end: making something cool.

Which brings me to the other reason I think I was so much more positive about LLMs: the application I was bringing to life with the help of an LLM still very much felt like "mine". Yes, maybe the tedium of writing a bunch of boilerplate was being delegated, but the idea, the architecture, the UX were still all mine. So I was building something I could still feel proud of. But typing a prompt into a box and getting a song back? Nah, that's not really mine. It's no different than shouting an idea to an improv musician and getting a song back. Maybe you gave them a cool idea, but the song isn't yours.

Which brings me full circle back to this new Suno DAW announcement: this is absolutely incredible. I've only skimmed the announcement so far, but I feel like it brings AI song generation firmly back into the court of how I'm using LLMs to code: letting the AI take care of the boring shit and letting me focus on the composition. I've only ever produced various flavors of electronic music, and I have so many uncompleted songs withering on the vine because I wasted a week flipping endlessly through patches, tweaking them, trying to find "that sound" in my head that I could never bring to life, then eventually getting sick of the song and saying fuck it. And in my experiments with Suno, I found it was actually crazy good at matching a "sound" I described; I just wished I had a way to compose my own song based on the sounds it generated. And now here it is.
BrokenCogs 6 hours ago
It's human nature to want to feel like we've accomplished something. AI generators like Suno, where all you have to do is type in a prompt and you get the final result, take that sense of accomplishment away from us. However, if we work on a project where we're merely assisted by AI (say, a game where the sprites or the background music are AI-generated, but the overall game is still directed by humans), that sense of accomplishment stays. But at some point we're going to reach the stage where an entire game can be generated at high quality, at the same level as human work. What then?