Google, Meta, OpenAI Join Other Industry Giants Against AI Child Abuse Images

To combat the spread of child sexual abuse material (CSAM), a coalition of top AI developers, including Google, Meta, and OpenAI, has pledged to implement safeguards around the emerging technology.

The group was brought together by two nonprofit organizations: children's tech group Thorn and New York-based All Tech Is Human. Formerly known as the DNA Foundation, Thorn was launched in 2012 by actors Demi Moore and Ashton Kutcher.

The joint commitment was announced Tuesday alongside a new Thorn report calling for a “safety by design” approach to generative AI development that would prevent the creation of child sexual abuse material (CSAM) across a model's entire lifecycle.

“We urge all companies developing, deploying, maintaining, and using generative AI technologies and products to adopt these safety by design principles and demonstrate their commitment to preventing the creation and spread of CSAM, AIG-CSAM, and other acts of child sexual abuse and exploitation,” Thorn said in a statement.


AIG-CSAM is AI-generated CSAM, which the report shows can be relatively easy to create.

Image: Thorn

Thorn develops tools and resources focused on defending children from sexual abuse and exploitation. In its 2022 Impact Report, the organization said more than 824,466 files containing child abuse material were found. Last year, Thorn reported that more than 104 million files of suspected CSAM were reported in the US alone.

Already a problem online, deepfake child pornography skyrocketed after generative AI models became publicly available, with standalone AI models circulated on dark web forums.

Generative AI, Thorn said, makes creating volumes of content easier now than ever before. A single child predator could potentially create massive volumes of child sexual abuse material (CSAM), including adapting original images and videos into new content.

“A flood of AIG-CSAM poses significant risks to an already taxed child safety ecosystem, exacerbating the challenges faced by law enforcement in identifying and rescuing existing victims of abuse, and scaling new victimization of more children,” Thorn said.

Thorn's report lays out a series of principles that generative AI developers should follow to prevent their technology from being used to create child pornography, including responsibly sourcing training datasets, incorporating feedback loops and stress-testing strategies, employing content history or “provenance” with adversarial misuse in mind, and responsibly hosting their respective AI models.

Other signatories to the pledge include Microsoft, Anthropic, Mistral AI, Amazon, Stability AI, Civitai, and Metaphysic, each issuing separate statements today.

“Our ethos at Metaphysic is responsible development in an AI world, right, it's about empowerment, but it's about responsibility,” Alejandro López, chief marketing officer of Metaphysic, told Decrypt. “To start and to develop, that means safeguarding the most vulnerable in our society: children. We quickly recognized that, unfortunately, the darkest end of this technology is its use for child sexual abuse material in the form of deepfake pornography, and that has already happened.”

Launched in 2021, Metaphysic gained notoriety last year after it was revealed that several Hollywood stars, including Tom Hanks, Octavia Spencer, and Anne Hathaway, were using Metaphysic Pro technology to digitize their likenesses in an effort to retain ownership of the traits needed to train an AI model.

OpenAI declined to comment further on the initiative, instead pointing Decrypt to a public statement from its child safety lead, Chelsea Carlson.

“We care deeply about the safety and responsible use of our tools, which is why we've built strong guardrails and safety measures into ChatGPT and DALL-E,” Carlson said in a statement. “We are committed to working alongside Thorn, All Tech Is Human, and the broader tech community to uphold safety by design principles and continue our work in mitigating potential harms to children.”

Decrypt also reached out to other members of the coalition but did not immediately hear back.

“At Meta, we've spent over a decade working to keep people safe online. In that time, we've developed numerous tools and features to help prevent and combat potential harm, and as predators have adapted to try to evade our protections, we've continued to adapt too,” Meta said in a prepared statement.

“We proactively remove CSAE material on our products through a combination of hash-matching technology, artificial intelligence classifiers, and human review,” Susan Jasper, Google's vice president of trust and safety solutions, wrote in a post. “Our policies and protections are designed to detect all kinds of CSAE, including AI-generated CSAM. When we identify exploitative content, we remove it and take the appropriate action, which may include reporting it to the National Center for Missing & Exploited Children (NCMEC).”

In October, UK-based internet watchdog the Internet Watch Foundation warned that AI-generated child abuse material could “overwhelm” the internet.

Edited by Ryan Ozawa.
