@Tamasg @tardis Here's how I think about this, having been a musician / producer for most of my life, as well as someone with a lot of experience with AI models. I'll preface by saying I haven't listened to the podcast yet, but I'm honestly not sure we're as close as some people think.

Let's start with services like Suno and Udio. Even with Suno's latest models, there are still noticeable artifacts, high-frequency noise, a lack of variety, and similar patterns repeated over time. The average listener probably won't pick up on these right away, but repeated listening should make it clear that this is not human-created music. One thing that really stands out to me with Suno especially is the lack of variety. You'll get the same vocalist, or something very similar, much of the time, and after a while it's extremely obvious to me when a song has been generated with Suno.

All these music generation models need to be trained, and they are only as good as their training data. They can't adapt to new genres of music without being completely retrained or fine-tuned, and even then the models will be limited and have a specific "sonic signature." I'm going to set aside the question of stolen music, since both services now have deals with record labels, so that's no longer an issue. But assuming both services completely retrain their models from scratch for the next versions, I expect the quality to drop as the training dataset gets smaller.

To be clear, AI will absolutely have a big impact on the music industry, but I really believe the people who will actually benefit the most are musicians / producers. Maybe in a few years we'll see more and more AI music topping the charts, but I don't think that's going to replace real musicians' work or even be a viable long-term strategy.
All of these AI models have artifacts; that's not a problem that's been completely solved yet. I think you're more likely to see musicians using AI models to augment what they're already doing, rather than just typing in a prompt and getting a song to put on Spotify. Some DAWs like Logic already have AI-based Session Players that can take a chord progression and lay down drum, bass, or piano parts. It's all MIDI data, which means you have complete control over the performance down to the note level. That's something Suno or Udio currently cannot provide due to the nature of the models they use.

Looking ahead, I'm really excited about what generative audio models will bring to the table: for example, taking a recording of one instrument and making it sound like it was played on a completely different one, with all the little details you would expect. We already have technology that can do this, but as far as I know it's not using any generative models. It's simply tone transfer, taking the frequency spectrum of one sound and mapping it onto another. I think we could push this much further, and when that day comes, it will be a revolution.
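To make the tone-transfer idea concrete, here's a minimal sketch of classic cross-synthesis, which is my own simplified take on what such tools do, not any product's actual algorithm: keep the source's pitch and phase structure, but impose the target's per-frame magnitude spectrum. The function name and parameters are mine.

```python
import numpy as np

def tone_transfer(source, target, frame=1024, hop=256):
    """Crude cross-synthesis sketch: keep the source signal's phase
    (pitch / timing), impose the target's per-frame magnitude spectrum
    (timbre). Overlap-add with a Hann window."""
    n = min(len(source), len(target))
    out = np.zeros(n)
    win = np.hanning(frame)
    for start in range(0, n - frame, hop):
        s = np.fft.rfft(source[start:start + frame] * win)
        t = np.fft.rfft(target[start:start + frame] * win)
        # swap in the target's magnitude, keep the source's phase
        hybrid = np.abs(t) * np.exp(1j * np.angle(s))
        out[start:start + frame] += np.fft.irfft(hybrid) * win
    return out
```

A real system would estimate a smooth spectral envelope rather than copying raw magnitudes frame by frame, but the mapping idea is the same; a generative model could go further and invent the fine detail the mapping loses.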
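As for the note-level control I mentioned with MIDI session players: the point is that the output is editable event data, not rendered audio. Here's a toy sketch of that idea; the chord names, note map, and tick values are my own simplified stand-ins, not Logic's actual API.

```python
# Sketch: derive a note-level bass part from a chord progression.
# ROOTS maps a few chord names to MIDI note numbers (my own stand-in).
ROOTS = {"C": 48, "F": 53, "G": 55, "Am": 57}

def bass_part(progression, ticks_per_bar=1920):
    """One root note per beat in 4/4; returns a list of
    (note, start_tick, length_ticks, velocity) tuples."""
    events = []
    for bar, chord in enumerate(progression):
        for beat in range(4):
            start = bar * ticks_per_bar + beat * (ticks_per_bar // 4)
            events.append((ROOTS[chord], start, ticks_per_bar // 4, 96))
    return events

part = bass_part(["C", "Am", "F", "G"])
# Every event is editable after the fact -- e.g. soften one note's velocity.
part[0] = part[0][:3] + (64,)
```

With audio from Suno or Udio you only get the finished waveform; with event data like this you can reshape any single note, which is exactly the control producers won't want to give up.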