AI in the Music Industry
Maya Hauptmann
11/13/2025
Back in 2020 I remember watching a video from Adam Neely with the group Dadabots explaining how they worked together to make a procedurally generated, infinite bass solo. He recorded himself playing bass guitar for two hours and then the group took the resulting audio and used it to train a neural network that would generate a bass solo until they told it to stop. Adam remarked on how the generated solo sounded vaguely like his playing but not really. He seemed entirely confused by what he was listening to, and I felt the same way. This was the first time I was exposed to the idea of using AI to make music, and it didn't really seem that important to me because frankly I saw no commercial or creative application for it. It was just... neat, I guess.
Importantly, Adam and Dadabots were interested in working together to make this thing, and they were very transparent with what they needed from each other and what they were attempting to achieve. Nobody else was involved with the project, as far as I'm aware, and nobody else's music was used. I saw no reason to be offended by this project, and very little reason to be worried about it.
Fast forward to today, and we have projects as large as ChatGPT that are trained on an entire internet's worth of material and can generate reasonably convincing outputs mimicking what artists and writers do. These models have been trained on content plucked indiscriminately from people who did not consent to their work being used. AI models have made several companies heaps of money generating all kinds of media, while the artists from whom they gathered training materials get nothing. This practice, carried out by incredibly valuable companies like Meta and OpenAI, is some of the most flamboyant disregard and condescension I have ever seen directed at artists as a whole.
The difference between Dadabots' infinite bass solo and something like Suno AI should be pretty obvious. Dadabots were transparent with Adam, they used only materials they were specifically allowed to use, and they were partaking in a creative and explorative endeavor first and foremost. AI companies nowadays are opaque and indiscriminate, they exploit gaps in outdated copyright law, and they are incentivized primarily by profit over all else. I and a lot of other artists wouldn't have much of an issue with projects like the infinite bass solo, but we are deeply concerned about generative AI because the companies making it do not have our best interests at heart.
I believe the moment this all became fundamentally problematic was when AI research and development shifted from being about boundary exploration and building cool machines to being about generating value and padding pockets. Dadabots were not trying to exploit Adam for revenue. They were trying to make something new and interesting.
Funnily enough, generative AI has the potential to shoot itself in the foot. If AI generated content gets dispersed widely enough across the internet, new AI models will accidentally be fed wholly or partially AI generated materials during training, which degrades the quality of the resulting model. The rush to implement AI everywhere it can go as quickly as possible, be it articles, advertisements, music or whatever, could very easily cause the industry to cannibalize itself and massively stunt its growth.
Artificial intelligence has actually been a pretty big, unobjectionable part of the music industry for a little while now. There are tools for spectral analysis, stem separation, noise reduction, mastering, etc., that use AI in a way that's augmentative or empowering. I believe it's the 12th version of iZotope Ozone that features a "mastering assistant" which will analyze a piece of audio and make suggestions, based on conventions, on how to master it. These tools don't steal anything from anybody, nor do they prompt a data center to suck up natural resources every time you use them.
There have been a couple of fake bands appearing on streaming services lately. Notably, one AI generated indie act called The Velvet Sundown has accumulated millions of listeners. This, to me, is an entirely regrettable outcome (and an ironic one, considering this AI indie act is more popular than the vast majority of independent musicians I know), and it plainly illuminates the need for full and obvious disclosure whenever media is AI generated, as I imagine many people would not be listening to this music if they were aware it was generated by AI.
As a musician, I have some fairly strong opinions about AI in the music industry. I believe audio generated by AI should be required to carry obvious audible watermarks and stay completely off of streaming services, period. Companies that develop AI need to be entirely clear about the materials they use to train their models, and when they use things without express permission from the owners, they need to be punished. Creators whose work was stolen to train these models need to be compensated heavily. We need strict regulations on AI companies and AI usage, both because the internet and media as a whole should remain relevant to humanity, and because of the absurd resource cost of the AI boom (I might write an article about that topic). We need to be investing in things that serve the interests of creators broadly, not the interests of AI companies and their executives.
Thanks for reading.
-Maya