Artists aim to disrupt AI with data poisoning software and legal action.

As artificial intelligence (AI) transforms the creative media space—especially art and design—the boundaries of intellectual property (IP) are becoming increasingly difficult to define.

Last year, AI-driven art platforms pushed the boundaries of IP rights by using vast data sets for training, often without the express permission of the artists who created the original works.

For example, platforms like OpenAI's DALL-E and Midjourney offer subscription models that indirectly monetize the copyrighted material in their training datasets.

This raises an important question: Do these platforms operate within the bounds of the "fair use" doctrine, which in its current form permits the use of copyrighted work for criticism, commentary, news reporting, teaching and research purposes?


Recently, Getty Images, a major provider of stock photos, filed lawsuits against Stability AI in both the United States and the United Kingdom. Getty accused Stability AI's image-generation model, Stable Diffusion, of violating copyright and trademark laws by using images from its catalog without permission, particularly those bearing its watermarks.

However, the plaintiffs will need to provide comprehensive evidence to support their claims, which could prove challenging given that Stable Diffusion's AI was trained on a cache of more than 12 billion compressed images.

In a related case, artists Sarah Andersen, Kelly McKernan and Karla Ortiz filed a lawsuit in January against Stability AI, Midjourney and online art community DeviantArt, alleging that the companies violated the rights of "millions of artists" by training their AI tools on five billion images scraped from the web "without the consent of the original artists."

AI poisoning software

In response to complaints from artists whose works have been scraped to train AI models, researchers at the University of Chicago recently developed a tool called Nightshade that allows artists to embed imperceptible changes into their artwork.

Although these modifications are invisible to the human eye, they can poison an AI's training data: the subtle pixel changes disrupt the learning process of AI models, leading them to mislabel and misinterpret images.

Even a small number of these images can corrupt a model's learning process. For example, a recent experiment showed that introducing just a few dozen poisoned images was enough to significantly degrade Stable Diffusion's output.
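To make the idea concrete, the sketch below shows what a bounded, near-imperceptible pixel perturbation looks like in code. It is only an illustration under simplifying assumptions: Nightshade's actual method optimizes the perturbation against a text-to-image model's feature space rather than adding random noise, and the file names and perturbation budget here are hypothetical.

```python
# Illustrative sketch only -- NOT Nightshade's algorithm. It simply applies a
# small, bounded change to every pixel so the edit stays hard to see while the
# underlying pixel values no longer match the original image.
import numpy as np
from PIL import Image

EPSILON = 4  # hypothetical per-channel budget out of 255; small enough to be near-invisible


def perturb(path_in: str, path_out: str, seed: int = 0) -> None:
    """Add a bounded random perturbation to each pixel and save the result."""
    rng = np.random.default_rng(seed)
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)

    # Draw a perturbation in [-EPSILON, EPSILON] for every pixel and channel.
    noise = rng.integers(-EPSILON, EPSILON + 1, size=img.shape, dtype=np.int16)

    # Apply the perturbation and clip back into the valid 8-bit range.
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)


if __name__ == "__main__":
    # Hypothetical file names for illustration.
    perturb("artwork.png", "artwork_shaded.png")
```

In an actual poisoning attack, the perturbation would be crafted to push the image toward a different concept in the model's embedding space, which is what produces the mislabeling and misinterpretation described above.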

The same University of Chicago team previously developed a tool called Glaze, designed to mask an artist's personal style from AI mimicry. Nightshade, their newest offering, is slated to be integrated into Glaze, further expanding its capabilities.

In a recent interview, Nightshade lead developer Ben Zhao said tools like his help move companies toward more ethical practices. “I think there's very little incentive for companies to change the way they've been doing things these days — which is, ‘Everything under the sun is ours, and you can't do anything about it.' I guess we're pushing them a little bit more on the moral front, and we'll see if that actually happens,” he added.

Example of Nightshade poisoning art datasets. Source: Hyperallergic

While Nightshade has the potential to protect future artwork, Zhao says the tool cannot undo the effects on art already used to train older AI models. There is also a risk that the software could be misused for malicious purposes, such as deliberately contaminating large-scale digital image generators.

However, Zhao is confident that this latter use case would be difficult to pull off, as it would require thousands of poisoned samples.


While independent artist Autumn Beverly believes tools like Nightshade and Glaze have empowered her to share her work online without fear of misuse, Marian Mazzone, an expert with the Art and Artificial Intelligence Laboratory at Rutgers University, thinks such tools may not offer a permanent fix, suggesting that artists should also pursue legal reforms to address the ongoing issues surrounding AI-generated imagery.

The CEO of Artifi, a Web3 artwork investment solution, told Cointelegraph that artists' use of data poisoning tools against AI is challenging traditional notions of ownership and authorship, prompting a re-evaluation of copyright and creative control:

“The use of data poisoning tools is raising legal and ethical questions about AI training on publicly available digital artworks, with ongoing debate around issues such as copyright, fair use and respect for the rights of original creators. Meanwhile, AI companies affected by data poisoning tools like Nightshade and Glaze are working on a variety of strategies to address their impact on machine learning models, including strengthening their defenses, improving data validation and developing more robust algorithms to detect and mitigate pixel-poisoning patterns.”

Yubo Ruan, founder of ParaX, a Web3 platform powered by account abstraction and zero-knowledge virtual machines, told Cointelegraph that as artists continue to adopt such tools, there is a need to rethink what digital art means and how its ownership and originality are determined.

“We need to reassess today's intellectual property frameworks to accommodate the complexities introduced by these technologies. The use of data poisoning tools raises legal questions around licensing and copyright infringement, as well as ethical concerns about the fair use of publicly available artworks without proper compensation for, or acknowledgment of, the original owners.”

Stretching IP laws to their limits

Beyond digital art, the impact of generative AI is also being felt in other domains, including academia and video-based content. In July, comedian Sarah Silverman, along with authors Christopher Golden and Richard Kadrey, filed legal action against OpenAI and Meta in US district court, accusing the tech giants of copyright infringement.

The lawsuits allege that both OpenAI's ChatGPT and Meta's Llama were trained on datasets obtained from illicit "shadow library" sites containing the plaintiffs' copyrighted works. They point to instances where ChatGPT summarized the authors' books without including copyright management information, citing Silverman's The Bedwetter, Golden's Ararat and Kadrey's Sandman Slim as key examples.

Separately, the lawsuit against Meta alleges that the company's Llama models were trained using data from the same dubious sources, specifically citing EleutherAI's The Pile dataset, which includes content from the private tracker Bibliotik.


The authors say they never consented to their works being used in this way and are seeking damages and restitution.

As the world moves into an AI-driven future, many companies appear to be grappling with the enormity of the challenges presented by this evolving paradigm.

Companies like Adobe have started using markers to identify AI-generated content, while Google and Microsoft have said they are willing to shoulder any legal heat if customers are accused of copyright infringement while using their generative AI products.
