When illustrator Sarah Andersen saw her distinctive comic style replicated by Stable Diffusion — sometimes with fragments of her signature visible in the generated images — she did what artists have always done when their work is stolen: she called a lawyer. The class-action lawsuit she joined, alongside artists Kelly McKernan and Karla Ortiz, against Stability AI, Midjourney, and DeviantArt became the first major legal challenge to AI image generation. It won't be the last.
The Core Legal Question
The lawsuits boil down to one question: does training an AI model on copyrighted images constitute copyright infringement?
The AI companies say no. They argue that training is "transformative use" — similar to how a human artist studies other artists' work to develop their own style. The training data is transformed into mathematical weights, not stored as copies. You can't extract the original images from the model.
The artists say yes. They argue that the models are, functionally, sophisticated copying machines. When you can prompt Midjourney to produce art "in the style of [specific living artist]," the model is commercially exploiting that artist's life's work without permission or compensation.
The Legal Landscape Is Shifting
Early rulings were mixed. A federal court initially dismissed some of the artists' claims but allowed others to proceed. The key ruling: while individual training images might be fair use, the systematic scraping of an artist's entire portfolio to create a commercial product that directly competes with that artist raises different questions.
In early 2026, the landscape shifted significantly:
- The New York Times v. OpenAI lawsuit survived a motion to dismiss, with the judge finding that verbatim reproduction of copyrighted text in model outputs was not fair use
- The Getty Images v. Stability AI case in the UK resulted in a ruling that mass scraping without license was copyright infringement under UK law
- The EU AI Act required companies to disclose training data, giving rights holders the information they need to bring claims
- Japan, initially permissive toward AI training, revised its copyright guidelines to give artists more protection
The Music Industry Parallel
The music industry has been here before. When Napster enabled mass copying of music, the industry's initial response was lawsuits. Eventually, the market found an equilibrium: streaming services that compensated rights holders (imperfectly) while giving consumers what they wanted.
A similar equilibrium for AI seems likely but not inevitable. Some proposals include:
Licensing models — Companies like Shutterstock and Adobe have launched "ethically trained" AI tools that only use licensed images and share revenue with creators.
Opt-out registries — Artists can register works they don't want used for training. But opt-out puts the burden on creators, and enforcement is difficult.
Collective licensing — Similar to how ASCAP and BMI collect royalties for musicians, a collective could negotiate blanket licenses for visual artists.
The Human Cost
While the legal battles play out, the economic damage is already done. Illustrators report that freelance rates have plummeted. Concept art studios have laid off staff. Stock photography revenue has cratered. The artists whose work built the creative foundation these models exploit are the same ones now losing their livelihoods to them.
The legal system may eventually establish fair compensation. But for many working artists, "eventually" is too late.
