Artists have filed a class action lawsuit against Stability AI, Midjourney, and DeviantArt, companies that have released artificial intelligence-powered image generators capable of transforming simple text prompts into convincingly rendered images.
In a complaint filed last week with the United States District Court for the Northern District of California, artists Karla Ortiz, Kelly McKernan, and Sarah Andersen, represented by the Joseph Saveri Law Firm, claim that the companies violated copyright law by using their images, along with those of tens of thousands of other artists, to train their image generators and produce derivative works. The plaintiffs allege that the companies have infringed on 17 U.S. Code § 106, which grants exclusive rights in copyrighted works, violated the Digital Millennium Copyright Act, and run afoul of unfair competition law.
“Though the rapid success of Stable Diffusion has been partly reliant on a great leap forward in computer science, it has been even more reliant on a great leap forward in appropriating copyrighted images,” the complaint reads.
All three companies named in the suit have built their AI image generators on a software library called Stable Diffusion, which was developed by Stability AI. The model relies on a process called “diffusion,” in which the program is first trained to reconstruct the images it has been fed, and can then generate new images when a prompt is input.
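To make that two-step process concrete, here is a deliberately simplified Python sketch of the idea: noise is progressively added to an image during training, and generation runs the process in reverse, starting from pure noise. The function names and the stand-in “denoiser” are invented for illustration; this is a toy, not Stability AI’s code.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(image, t, num_steps=1000):
    """Forward diffusion (training phase): blend an image with Gaussian noise.
    The larger t is, the less of the original image survives."""
    alpha = 1.0 - t / num_steps                # fraction of the image kept
    noise = rng.standard_normal(image.shape)
    return np.sqrt(alpha) * image + np.sqrt(1.0 - alpha) * noise, noise

def toy_denoiser(noisy, t, prompt_embedding):
    """Stand-in for the trained neural network, which would predict the noise
    present in `noisy`. Here it just returns small random values."""
    return 0.01 * rng.standard_normal(noisy.shape)

def generate(prompt_embedding, shape=(64, 64), num_steps=50):
    """Reverse diffusion (generation phase): start from pure noise and repeatedly
    subtract the model's noise estimate. No training image is consulted here."""
    x = rng.standard_normal(shape)             # pure noise, not a stored image
    for t in reversed(range(num_steps)):
        x = x - toy_denoiser(x, t, prompt_embedding)
    return x

# Training would show the model many (noisy image, true noise) pairs:
noisy_example, true_noise = add_noise(np.ones((64, 64)), t=500)

# Generation starts from random noise plus a (dummy) text embedding:
sample = generate(prompt_embedding=np.zeros(77))
print(sample.shape)  # (64, 64)
```

In a real system the stand-in is replaced by a neural network trained on a very large set of image-and-caption pairs, which is where the legal dispute begins.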
“The primary goal of a diffusion model is to reconstruct copies of the training data with maximum accuracy and fidelity to the Training Image,” the complaint reads. “It is meant to be a duplicate.”
The plaintiffs claim that these copied images are then used to create “derivative works,” defined by Cornell Law School’s Legal Information Institute as works that “incorporate[] enough of the original work that it obviously stems from the original.” The image generators, the plaintiffs argue, are nothing more than a “21st century collage tool” built on protected works, with the potential to do great damage to artistic industries.
The complaint also alleges that these image generators have empowered users to create what it calls “fakes.” For example, after the Korean illustrator Kim Jung Gi died, a software developer who goes by the username 5you used Stable Diffusion to create a model that could produce images in Kim’s style. Other artists have reported similar examples of users creating works in their styles.
There are several issues with the lawsuit as constructed, according to experts.
First, only specific images, not styles, are protected by copyright. Meanwhile, collage is a protected medium under “fair use,” a legal doctrine that creates exceptions to copyright law “for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research” and “transformative” creative production.
Whether or not the lawsuit accurately characterizes diffusion is also in question.
“It is a common misconception that a machine learning model is just a storage of images that then generates a collage,” Dr. Andres Guadamuz, a reader in intellectual property law at the University of Sussex, wrote in a blog post on the complaint Sunday.
Instead, Guadamuz argues that the technology is too complex to be generalized in this way. Stable Diffusion, according to Guadamuz, does not store copies of works even during training. Rather, it stores data on how space and color relate to one another when representing certain objects, drawing on what it “learned” from studying the training set.
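For readers who want to see what using the model looks like in practice, the sketch below uses Hugging Face’s open-source diffusers library, one common way of running Stable Diffusion; the model identifier and prompt here are examples chosen for illustration, and exact package details may differ between releases.

```python
# Minimal, illustrative use of Stable Diffusion via the `diffusers` library
# (assumes `pip install diffusers transformers torch` and a CUDA-capable GPU).
import torch
from diffusers import StableDiffusionPipeline

# Downloads the trained weights: a few gigabytes of learned parameters,
# not an archive of the training images themselves.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example model identifier
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generation begins from random noise plus the text prompt; no training
# image is retrieved or pasted together at this point.
image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```

Whether what those weights encode amounts to copying, as the complaint asserts, or to learned statistical relationships, as Guadamuz suggests, is precisely what the court will have to sort out.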
This complex process, when explained by experts at trial, he wrote, might undermine the argument in the complaint. It will then be up to a judge, or a jury, to decide whether those complexities are meaningfully distant from our understanding of plagiarism, or whether these models truly constitute fair use of copyrighted materials.