{"id":26208,"date":"2023-12-06T14:26:22","date_gmt":"2023-12-06T14:26:22","guid":{"rendered":"https:\/\/nftandcrypto-news.com\/crypto\/artists-aim-to-thwart-ai-with-data-poisoning-software-and-legal-action\/"},"modified":"2023-12-06T14:26:24","modified_gmt":"2023-12-06T14:26:24","slug":"artists-aim-to-thwart-ai-with-data-poisoning-software-and-legal-action","status":"publish","type":"post","link":"https:\/\/nftandcrypto-news.com\/crypto\/artists-aim-to-thwart-ai-with-data-poisoning-software-and-legal-action\/","title":{"rendered":"Artists aim to thwart AI with data-poisoning software and legal action"},"content":{"rendered":"
As the use of artificial intelligence (AI) has permeated the creative media space \u2014 especially art and design \u2014 the definition of intellectual property (IP) seems to be evolving in real time as it becomes increasingly difficult to understand what constitutes plagiarism.<\/p>\n
Over the past year, AI-driven art platforms have pushed the limits of IP rights by utilizing extensive data sets for training, often without the explicit permission of the artists who crafted the original works. <\/p>\n
For instance, platforms like OpenAI\u2019s DALL-E and Midjourney\u2019s service offer subscription models, indirectly monetizing the copyrighted material that constitutes their training data sets.<\/p>\n
In this regard, an important question has emerged: \u201cDo these platforms work within the norms established by the \u2018fair use\u2019 doctrine, which in its current iteration allows for copyrighted work to be used for criticism, comment, news reporting, teaching and research purposes?\u201d <\/p>\n
Recently, Getty Images, a major supplier of stock photos, initiated lawsuits against Stability AI in both the United States and the United Kingdom. Getty has accused Stability AI\u2019s visual-generating program, Stable Diffusion, of infringing on copyright and trademark laws by using images from its catalog without authorization, particularly those with its watermarks.<\/p>\n
However, the plaintiffs must present more comprehensive proof to support their claims, which might prove challenging given that Stable Diffusion\u2019s AI has been trained on an enormous cache of more than 12 billion compressed images.<\/p>\n
In another related case, artists Sarah Andersen, Kelly McKernan and Karla Ortiz initiated legal proceedings against Stability AI, Midjourney and the online art community DeviantArt in January, accusing the organizations of infringing the rights of \u201cmillions of artists\u201d by training their AI tools on five billion images scraped from the web \u201cwithout the consent of the original artists.\u201d<\/p>\n
Responding to the complaints of artists whose works were plagiarized by AI, researchers at the University of Chicago recently released a tool called Nightshade, which enables artists to integrate undetectable alterations into their artwork.<\/p>\n
These modifications, while invisible to the human eye, poison AI training data: the subtle pixel changes disrupt a model\u2019s learning process, leading to incorrect labeling and recognition. <\/p>\n
Even a handful of poisoned images can corrupt a model\u2019s training. In one recent experiment, introducing a few dozen poisoned images was enough to significantly impair Stable Diffusion\u2019s output.<\/p>\n
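Nightshade\u2019s actual perturbations are produced by a targeted optimization process; the details are beyond this article. As a rough, hypothetical illustration of the underlying idea \u2014 tiny, bounded pixel changes that leave an image visually unchanged to a person \u2014 here is a minimal NumPy sketch (the function name, `epsilon` bound and random noise are illustrative assumptions, not the tool\u2019s method):

```python
import numpy as np

def poison_image(image: np.ndarray, epsilon: int = 8) -> np.ndarray:
    """Apply a small, bounded per-pixel perturbation to an image.

    image:   uint8 array of shape (H, W, 3)
    epsilon: maximum per-pixel change; small values are imperceptible

    NOTE: real poisoning tools compute a targeted perturbation, not
    random noise; this only illustrates the "bounded change" concept.
    """
    noise = np.random.randint(-epsilon, epsilon + 1, size=image.shape)
    poisoned = np.clip(image.astype(int) + noise, 0, 255)
    return poisoned.astype(np.uint8)

# A poisoned sample looks identical to a viewer, but when many such
# samples (paired with misleading labels) enter a training set, the
# model learns incorrect associations.
clean = np.full((64, 64, 3), 128, dtype=np.uint8)
poisoned = poison_image(clean)
max_change = np.abs(poisoned.astype(int) - clean.astype(int)).max()
```

Because every pixel moves by at most `epsilon` intensity levels out of 255, the altered image is indistinguishable from the original to a human viewer while still differing numerically from what the model expects.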
The University of Chicago team had previously developed its own tool called Glaze, which was meant to mask an artist\u2019s style from AI detection. Their new offering, Nightshade, is slated for integration with Glaze, expanding its capabilities further. <\/p>\n
In a recent interview, Ben Zhao, lead developer for Nightshade, said that tools like his will help nudge companies toward more ethical practices. \u201cI think right now there\u2019s very little incentive for companies to change the way that they have been operating \u2014 which is to say, \u2018Everything under the sun is ours, and there\u2019s nothing you can do about it.\u2019 I guess we\u2019re just sort of giving them a little bit more nudge toward the ethical front, and we\u2019ll see if it actually happens,\u201d he added.<\/p>\n