Artists Demand Consent for Generative AI Training

Generative AI is evolving fast, and it’s raising big questions—especially when it comes to the rights of creators. Should companies be able to use human-made art, writing, and music to train AI without asking first?

Over 11,500 creative professionals, including actor Julianne Moore and musician Thom Yorke, have now spoken out, signing an open letter that calls for explicit consent before creative work is used to train AI.

“The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted,” the letter states.

Creators argue that scraping art without permission is digital theft and undervalues their craft. With AI’s capabilities growing daily, the demand for transparency and respect for creators’ rights is reaching a boiling point.


Why Creators Want a Say in AI Training

Generative AI technology relies on vast datasets to learn how to produce images, text, or music. In many cases, these datasets are scraped from the internet and include copyrighted work used without permission, which many artists argue amounts to copyright infringement.

Ed Newton-Rex, a composer and founder of Fairly Trained, a nonprofit that certifies ethical AI models, emphasizes that “lots of generative AI companies train on creators’ work without a license to do so.” Calling this a “major issue” for creatives worldwide is hard to dismiss: AI models can now replicate styles, voices, and likenesses with increasing accuracy.

In a widely reported case earlier this year, the rapper Drake had to remove a track that allegedly used an AI-generated version of Tupac Shakur’s voice without authorization. Similarly, public figures like Taylor Swift and Morgan Freeman have found their images and voices used in AI-generated advertisements without their consent.


Where We’re Headed: AI Regulations

The United States currently lacks comprehensive federal regulation of generative AI, although some states are taking steps to address issues like deepfakes and AI impersonation.

For instance, California recently passed laws aiming to protect performers from unauthorized digital replicas. However, without nationwide guidelines, there’s still considerable leeway for companies using unlicensed creative content in their models.

The European Union, on the other hand, has taken significant steps toward regulating AI by introducing the AI Act, whose key provisions come into effect in 2026. The Act mandates transparency for AI-generated content, particularly deepfakes, requiring that any content produced or manipulated by AI be clearly labeled as such.

Additionally, the AI Act pushes for greater accountability from AI developers, particularly around copyright and data sourcing, setting a strong precedent that many hope will influence global standards.


A Fair Future for AI and Creators

With mounting pressure, some AI companies are making changes. Partnerships with major media outlets, like those between Condé Nast and OpenAI, are paving the way for a consent-based model where data is licensed, and creators are compensated.

And some startups are building “ethically trained” models using public domain data or licensed content only, so users know exactly where their AI-generated content comes from.

Will Permission Become the New Normal?

As more voices join the call for consent in AI training, the push for fairness is hard to ignore. Think about cookies: just a few years ago, websites didn’t need to ask permission to track our data. Now, every site politely (or not-so-politely) asks for consent before storing our info.

For now, creators and advocates will keep pushing for a digital landscape that respects the rights—and hard work—of those who bring art, music, and words to life.

