Here’s what we learned from PRS Explores: Composing with AI

Members' Day London saw a panel of experts discuss the responsible use of assistive AI in music creation.

  • By Sam Harteam Moore
  • 11 Jun 2024

Bringing last week’s Members’ Day London (4 June) to a close was the latest PRS Explores panel, Composing with AI.

Featuring a trio of experts — Rachel Lyske, CEO of DAACI; Declan McGlynn, Director of Communications at Voice-Swap; artist, producer and creative director Ilā Kamalagharan — and moderated by PRS for Music’s Chief Strategy, Communications & Public Affairs Officer John Mottram, the session featured a wide-ranging discussion about the potential of assistive artificial intelligence (AI) tools in responsibly aiding and enhancing human music creation.

From the importance of consent and control to protecting music creators' rights amid the rise of AI, here’s what we learned from the PRS Explores: Composing with AI panel.

Consent, control and creativity are key when it comes to AI in music

Having 'taken over 30 years to research and develop a system like this,’ assistive AI composition platform DAACI (which stands for Definable Aleatoric Artificial Composition Intelligence) now has ‘over 75 patents in the [AI] space’, its CEO Rachel Lyske told the PRS Explores audience.

‘But all of it has come from the perspective of musicians and writers,’ she continued. ‘What are we looking at? What do we need to be focusing on? Who can we be empowering? That's all part of our process: [it’s] not about replacing the composition process itself.’

Noting that ‘everyone writes music in very different ways: it's not a one-size-fits-all approach’, Rachel then expounded on one of DAACI’s assistive AI co-pilot plugins, ‘Natural Drums’.

‘By using some very simple inputs it's a way to speed up and enhance the process of getting drumbeats that aren't just loops going around, they compose and adapt to what's happening. That's one small element of a wider system.’

Voice-Swap, meanwhile, focuses more on assistive vocal technology. Co-founded by Dan Stein, aka DJ Fresh, the venture came about, as Director of Communications Declan McGlynn explained, because Dan ‘felt that there had to be a way to create something ethical around AI that respects artists and pays them royalties for the use of their talent in the AI training data’.

'We’re a platform that allows anyone to convert their singing voice into the voice of another singer using AI,’ he continued. ‘But what makes it slightly different is that we pay royalties to the singers every time their voice is converted: we give them a 50/50 split at the cost of the conversion, at the point of inference.’

‘Everyone writes music in very different ways: it's not a one-size-fits-all approach.' - Rachel Lyske 

Voice-Swap is based, Declan explained, on ‘three pillars: consent, control and creativity’.

‘Everything that we do is with consent: we're not scraping any data, we're not stealing any voices and we're not creating things from anything we don't own,’ he said. ‘We work with every singer to create the training data from scratch. We work with their teams to make sure it's the same microphone, same sound card, same cable: all these things go into making the output as high quality as possible.

'The second pillar is control. We use content moderation at the upload stage, so if someone tries to say some hate speech or inappropriate content it immediately gets flagged and isn't allowed to be converted into their voice. We also have another layer of filtering where we can give artists the opportunity to prevent words and phrases that they might be uncomfortable saying themselves.

‘We also use watermarking at the download stage to be able to track and trace back to the user at the time the stem went up. All of these things are there to give the artist comfort.’

Declan added, however, that 'none of this is a silver bullet'.

'I'm not saying we've solved AI attribution, but what we're trying to do is allow artists to have more control over their AI identities. Once those pillars of control and consent are addressed, then the final pillar, creativity, can happen. We can then stop worrying so much about the potential issues and try to enjoy the creative aspects of this.'

Music creators can use AI ‘to extend what they do’

Asked for their view on assistive AI music tools, Ilā said ‘the most exciting way to use AI is to do all the things we can't do ourselves’.

‘Often our fear [about AI] as composers is that we're being replaced, and that we don't feel comfortable doing something that we could do ourselves already,’ they said. ‘But how about all the exciting experiments and ideas that we could do with both things? What would it sound like if we did a hybrid model of both of our voices together? That's where my curiosity as a composer takes me, to the places that are perhaps a little bit more otherworldly or unusual. How can I use AI in ways that extend what I can do?’

Ilā’s advice to creators who are interested in exploring AI? 'Dive in and try things. You don't have to understand exactly how everything works: you can use a tool like DAACI or Voice-Swap quite instinctively. I've tried DAACI myself, and it's just like any other plugin in the sense that it's very usable. If no one told you it was AI, you wouldn't know any different: you'd think it was just a sophisticated sample library. I think sometimes it's to do with the language we use that tells us whether we want to go there or not. There's nothing dark or weird going on in a lot of these tools.'

‘The most exciting way to use AI is to do all the things we can't do ourselves.' - Ilā Kamalagharan

If you’re unsure about AI, lean on your creative community

With the AI landscape constantly evolving, keeping on top of the latest developments in the space can prove to be a tricky task.

By nature, music creators are ‘really generous with sharing information’, Ilā remarked. ‘There’s so many great places you can look, like forums and open-source software that you can trial. There are a lot of amazing things in Beta that are available for people to try for free as well. When something is behind a paywall, I just ask around to see if anyone has tried any of these tools. I'm quite happy as well, as a writer and producer, to tell people how something is: I know some people are a bit guarded. It's really good to find out from other creators what's working for them and how they're using it.'

Asked for her view, Rachel replied: ‘As a music creator, I would look for tools like DAACI that are an open Beta community. We're building all this stuff, but is it useful for you? What do you want us to build next? There's a massive system here, but what's the most useful thing for you as writers? That's why we're going for this composer-first approach, because we know how to write music.’

Declan advised the audience not to get ‘overwhelmed by the fact that these big, bad language model-based music tools are here, because they are a novelty in my mind’.

‘I think it's all about agency. Start with an idea and work backwards: don't start with the tool and force yourself to learn it.'

The protection of music creators’ rights is as vital as ever

In the case of Voice-Swap, ‘we create a model of your voice, and you can then monetise it on your terms’, said Declan.

‘I don't believe in the rhetoric of “get on the train or you'll be left behind”, because I don't think that's constructive,’ he added. ‘There's a lot of people who would say that [AI] is happening either way, so we can either change legislation in a way that's positive to others, which is what all of us are trying to fight for, or we can just let the big tech companies run away with our content and intellectual property.

‘I think one of the ways that Voice-Swap is trying to change the narrative is by creating this path we can point to when legislators and big tech say it's impossible to pay artists or know what was generated with what content, or what was put into the training data. We'll say, "Well, we've created a way to show that it is possible to pay artists". There are risks here, but we have to create a better future because this is happening with or without us.’

Citing Voice-Swap’s recent link-up with BMAT to establish a certification programme for AI music models, Declan said: 'We've just agreed a new partnership with BMAT to run through all of our training data to ensure there's no unauthorised copyrighted content inside. As we scale up, we're going to make that part of all the models that we use.'

'We have to create a better future, because this is happening with or without us.’ - Declan McGlynn

Rachel agreed, saying that ‘there's no need to be railroaded by big tech saying there's no possible way that attribution can happen, or there's no possible way that artists and composers can be compensated. There is. You've seen it here, there is a way to do it.’

How far is AI going to go?

Rounding off the panel, our experts were asked by John to try and envision the future relationship between AI and music. Here’s what they had to say:

Rachel: 'We've obviously got a very human instinct to protect what we feel is human. Technology in terms of the music industry is a series of advancements: singers couldn't be heard at a gig if they didn't have a microphone; they couldn't be heard on a record if they didn't record with a mic. We’ve always used technology to be heard and communicate. If we can use [AI] technology to amplify and be heard more, we would. I'd love to be able to sing to my grandkids when I die; I would love to be able to write a specific piece of music to enhance a gaming experience. Technology could take us where we can imagine it can go. It's up to us to imagine that.'

Ilā: ‘We're all listeners of music as well as artists and writers. I think if there is a new piece of technology that really inspires people and they get really excited by it, then go for it. I've worked with Holly Herndon and Imogen Heap, phenomenal musicians who have used [AI] technology.

‘Maybe it's hopelessly romantic, but I think we'll always want to connect through music in a very immediate and visceral way. I think we'll always want to dance around and bang drums around the fire. I actually think one of the reactions to AI-generative music will be a resurgence of really raw, very immediate rough-sounding music, because people will want and crave that. You can never predict the future. How far are we willing to go? I think that's not just a question for AI, I think that's a question for society. It's a much bigger question than just AI.'

Declan: 'I think we'll start to see a rise in the appreciation of live performance again, like people wanting to see you actually playing the instrument or that you can sing the song, because everything can be automated.

‘But I think a great example of humans connecting using this technology is a translation project we did with a singer called Ruth Royall. She sent her song to songwriters in Mexico, Japan, China and someone in London who speaks Arabic. They each translated the song, and then we put those a cappellas back through these AI models so that Ruth was able to sing in those languages that she couldn't speak. It was a way of connecting to a global audience. But the great thing is that all those artists got paid for their translation and the session they recorded: we’re not trying to cut them out of any revenue streams. It allows people to connect across the globe in a way that just wasn't possible before.'