From Low Accuracy to 99% Success: Can OpenAI's New Tool Detect Deepfakes?



OpenAI, a pioneer in the field of generative AI, is taking on the challenge of identifying deepfake images amid the proliferation of misleading content on social media. At the Wall Street Journal's Tech Live conference in Laguna Beach, California, the company's chief technology officer, Mira Murati, recently unveiled a new deepfake detector.

Murati said OpenAI's new tool boasts "99% reliability" in determining whether an image was produced using AI.

AI-generated images can include everything from lighthearted creations like Pope Francis in a puffy Balenciaga coat to deceptive images that can cause financial trouble. The potential and the pitfalls of AI are both clear. As these tools become more sophisticated, telling AI-generated content apart from the real thing is becoming increasingly challenging.

While the tool's release date remains under wraps, the announcement has generated considerable interest, especially in light of OpenAI's past efforts.


In January 2022, the company introduced a text classifier that purported to distinguish human writing from machine-generated text produced by models like ChatGPT. But in July, OpenAI quietly shut the tool down, posting an update that cited its unacceptably high error rate: the classifier incorrectly labeled genuine human writing as AI-generated 9% of the time.

If Murati's claims hold up, this would be a significant moment for the industry, since current methods of identifying AI-generated images are typically not automated. Enthusiasts often rely on gut feeling and focus on well-known tells such as hands, teeth, and repeating patterns. The distinction between AI-generated images and AI-edited images remains blurry, especially when one tries to use AI to detect AI.

OpenAI isn't just working on detecting harmful AI images; it is also setting guardrails for its own models that go beyond what is officially stated in its content guidelines.

As Decrypt found, OpenAI's Dall-E tool appears to be configured to modify prompts without notice and to silently throw errors when asked to generate certain outputs, even when those requests follow the published guidelines and avoid sensitive content involving specific names, artist styles, and nationalities.

Part of a Dall-E 3 prompt in ChatGPT. Source: Decrypt

Detecting deepfakes isn't OpenAI's endeavor alone. One company developing this capability is DeepMedia, which works specifically with government clients.

Big names like Microsoft and Adobe are also rolling up their sleeves. They have introduced a system called "AI watermarking." This approach, led by the Coalition for Content Provenance and Authenticity (C2PA), embeds a distinct "cr" symbol inside a speech bubble to flag AI-generated content. The symbol is intended as a transparency mark that lets users identify the origin of the content.

But like any technology, it is not foolproof. There is a loophole: the metadata that carries this symbol can simply be stripped out. As a remedy, Adobe has come up with a cloud service that can restore the lost metadata, thereby confirming the symbol's presence. It, too, is not hard to circumvent.
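To make that loophole concrete, here is a minimal sketch in Python, assuming the Pillow imaging library and hypothetical file names; it is not the actual C2PA or Adobe tooling, only an illustration of how a plain re-encode of an image discards the metadata that a provenance mark depends on.

```python
# A minimal sketch, not OpenAI's or Adobe's actual tooling, of the loophole
# described above: content credentials travel in an image's metadata, and a
# plain re-encode of the pixels leaves that metadata behind.
# Assumptions: Pillow is installed; the file names are hypothetical.
from PIL import Image

src = Image.open("credentialed_image.jpg")
# Metadata blocks Pillow exposes (e.g. 'exif', 'icc_profile', 'xmp') show up here;
# C2PA manifests sit in dedicated JPEG segments that generic re-encoders don't keep.
print("metadata keys before:", list(src.info.keys()))

# Saving only the pixel data produces a visually identical file without the
# provenance-carrying metadata.
src.save("stripped_image.jpg", quality=95)

copy = Image.open("stripped_image.jpg")
print("metadata keys after:", list(copy.info.keys()))
```

This is why recovery services matter: once the credential is detached from the file, only an external record can re-associate the image with its origin.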

With regulators inching toward criminalizing deepfakes, these innovations are not just technological feats but societal necessities. Recent initiatives by OpenAI and the likes of Microsoft and Adobe underscore a collective effort to ensure authenticity in the digital age. Although these tools are being developed to achieve high accuracy, their effectiveness hinges on widespread adoption: not only by the tech giants, but also by content creators, social media platforms, and end users.

With the rapid development of generative AI, detectors continue to struggle to verify authenticity in text, images, and audio. For now, human judgment and vigilance are our best defenses against AI misuse. But people are not infallible. Lasting solutions will require technology leaders, legislators, and the public to work together to navigate this complex new frontier.


