AI undressing: Deepfake services skyrocket in popularity

The scourge of malicious deepfaking is well established in the realm of celebrities and public figures, and a new report on non-consensual intimate imagery (NCII) finds the practice will only grow as image generators become more sophisticated and widespread.

Social media analytics firm Graphika said in a report on Friday that “AI undressing” is on the rise, describing the practice as using fine-tuned generative AI tools to remove clothing from images uploaded by users.

The gaming and Twitch streaming communities were rocked earlier this year after popular streamer Brandon ‘Atrioc' Ewing accidentally revealed that he had been viewing deepfake AI-generated porn featuring female streamers he called his friends, according to a report by Kotaku.

Ewing returned to the platform in March, contrite and reporting on weeks of work to repair the damage he had caused. But the incident threw open the floodgates for an entire online community.

Graphika's report shows that the incident was just a drop in the bucket.

“Using data provided by Meltwater, we measured the number of comments and posts on Reddit and X that contained referral links to 34 websites and 52 Telegram channels,” wrote Graphika intelligence analyst Santiago Lakatos. “These totaled 1,280 in 2022, compared to over 32,100 so far this year, representing a year-on-year increase of 2,408%.”
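The 2,408% figure follows directly from the two counts Lakatos cites. A minimal sketch of the arithmetic, in Python (the variable names are ours, for illustration, not the report's):

    posts_2022 = 1_280   # referral-link posts and comments counted in 2022
    posts_2023 = 32_100  # the "over 32,100" figure cited for this year

    # Percentage increase relative to the 2022 baseline
    increase_pct = (posts_2023 - posts_2022) / posts_2022 * 100
    print(f"{increase_pct:.0f}%")  # prints 2408%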

New York-based Graphika says the explosion in NCII shows the tools have moved from niche discussion boards to a cottage industry.

“These models allow a larger number of providers to easily and cheaply create photorealistic NCII at scale,” said Graphika. “Without such providers, customers would need to host, maintain, and run their own custom image diffusion models, a time-consuming and sometimes expensive process.”

Graphika warns that the rising popularity of AI undressing tools could lead not only to fake pornographic material but also to targeted harassment, sextortion, and child sexual abuse material (CSAM).

According to Graphika, developers of AI undressing tools advertise on social media to direct potential users to their websites, private Telegram chats, or Discord servers where the tools are available.

“Some providers are overt in their activities, stating that they offer an ‘undressing' service and posting photos of people they claim to have ‘undressed' as proof,” Graphika wrote. “Others are less explicit, presenting themselves as AI art services or Web3 photo galleries while including key terms associated with synthetic NCII in their profiles and posts.”

While undressing AIs typically focus on still images, AI has also been used to create video deepfakes using the likeness of celebrities, including YouTube personality Mr. Beast and famed Hollywood actor Tom Hanks.

Some actors, such as Scarlett Johansson and Indian actor Anil Kapoor, are turning to the legal system to fight the ongoing threat of AI deepfakes. But while mainstream entertainers may get plenty of media attention, adult entertainers say their voices are rarely heard.

“It's really difficult,” Tanya Tate, the veteran adult performer and head of Star Factory PR, told Decrypt earlier. “If someone is in the mainstream, I'm sure it's a lot easier.”

Even before the rise of AI and deepfake technology, Tate says, social media was already flooded with fake accounts exploiting her likeness and content. Not helping matters is the stigma sex workers face, which forces them and their fans to remain in the shadows.

In October, the Internet Watch Foundation (IWF), a UK-based internet watchdog, said in a separate report that more than 20,254 child abuse images had been found on a single dark web forum in just one month. The IWF warned that AI-generated child pornography could “overwhelm” the internet.

Thanks to advances in generative AI imaging, the IWF warns, deepfake pornography has reached the point where distinguishing AI-generated images from authentic ones is increasingly difficult, leaving law enforcement chasing online phantoms instead of actual abuse victims.

“So there's this ongoing issue where you can't trust whether things are real or not,” Dan Sexton, CTO of the Internet Watch Foundation, told Decrypt.

As for Ewing, Kotaku reports that the streamer has been working with reporters, technologists, researchers, and women affected by the incident since his transgression in January. Ewing also sent funds to Ryan Morrison's Los Angeles-based law firm, Morrison Cooper, to provide legal services to any woman on Twitch who wants its help in serving takedown notices to sites publishing images of them.

Ewing added that research into the depths of the deepfake issue came from reclusive deepfake researcher Genevieve Oh.

“I tried to find the ‘bright spots' in the fight against this type of content,” Ewing said.

Edited by Ryan Ozawa.
