Meta to fight AI-generated fake news with ‘invisible watermarks’



Social media giant Meta, formerly known as Facebook, is taking steps to prevent misuse of its technology by adding an invisible watermark to all images it creates using artificial intelligence (AI).

In a Dec. 6 report detailing improvements to Meta AI, the company's virtual assistant, Meta said it will soon add an invisible watermark to all AI-generated images created with the "Meta AI experience." Like many other AI chatbots, Meta AI generates images and content based on user queries, but the company aims to stop bad actors from treating the service as just another tool for fraud.

The new watermarking feature is designed to make the watermark far more difficult to remove.

"In the coming weeks, we will be adding an invisible watermark to images created with the Meta AI experience for greater transparency and traceability," the company said.

Meta said it uses a deep learning model to apply watermarks that are invisible to the human eye to images created with its AI tools. The invisible watermark can, however, be detected with a corresponding model.
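Meta has not published the details of its watermarking model, but the encoder-plus-detector pattern it describes is well established in the research literature. The sketch below is a hypothetical, untrained PyTorch illustration of that pattern, not Meta's implementation: one network adds a low-amplitude, imperceptible residual that encodes a bit string, and a matching network tries to read the bits back. All names, layer sizes and constants are assumptions made for illustration.

```python
# Conceptual sketch of encoder/detector-style invisible watermarking.
# NOT Meta's model or API; architecture, names and sizes are illustrative assumptions.

import torch
import torch.nn as nn

MESSAGE_BITS = 32   # assumed payload length (hypothetical)
IMAGE_SIZE = 128    # assumed working resolution (hypothetical)

class WatermarkEncoder(nn.Module):
    """Embeds a bit string into an image as a low-amplitude residual."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + MESSAGE_BITS, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, image, message):
        # Broadcast the message bits over every pixel location, then predict
        # a residual that is added to the original image at low amplitude.
        b, _, h, w = image.shape
        msg_planes = message.view(b, MESSAGE_BITS, 1, 1).expand(b, MESSAGE_BITS, h, w)
        residual = self.net(torch.cat([image, msg_planes], dim=1))
        return (image + 0.01 * residual).clamp(0, 1)  # keep the change imperceptible

class WatermarkDetector(nn.Module):
    """The matching model: recovers the embedded bits from a (possibly edited) image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, MESSAGE_BITS),
        )

    def forward(self, image):
        return torch.sigmoid(self.net(image))  # per-bit probabilities

if __name__ == "__main__":
    encoder, detector = WatermarkEncoder(), WatermarkDetector()
    image = torch.rand(1, 3, IMAGE_SIZE, IMAGE_SIZE)          # stand-in for a generated image
    message = torch.randint(0, 2, (1, MESSAGE_BITS)).float()  # hidden payload
    marked = encoder(image, message)
    bits = (detector(marked) > 0.5).float()  # untrained here, so recovered bits are meaningless
    print("max pixel change:", (marked - image).abs().max().item())
```

In practice, a pair like this is trained jointly, with simulated edits (cropping, brightness and contrast shifts, screenshot-style recompression) applied between the encoder and the detector, which is what allows the recovered bits to survive the kinds of manipulations the company describes.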


Unlike traditional watermarks, the invisible watermarks from the "Imagine with Meta AI" experience are "resistant to common image manipulations such as cropping, color changes (brightness, contrast, etc.), screenshots, and more," the company said. The watermarking will initially roll out for images generated through Meta AI, but the company plans to bring it to other Meta services that use AI-generated images.

In the latest update, Meta AI introduced the "reimagine" feature for Facebook Messenger and Instagram, which allows users to send and receive AI-generated images. As a result, both messaging services will also get the invisible watermark feature.

RELATED: Tom Hanks, MrBeast and other celebrities warn about AI deepfake scams

AI services such as Dall-E and Midjourney already allow traditional watermarks to be added to published content. However, such watermarks can be removed simply by cropping the edge of the image. In addition, some AI tools can automatically strip watermarks from images, something Meta says is not possible with its invisible watermark.

Since generative AI tools became widely available, several entrepreneurs and celebrities have warned about AI-powered scam campaigns. Fraudsters use readily available tools to create fake videos, audio and images of popular figures and spread them across the internet.

In May, an AI-generated image of an explosion near the Pentagon, the headquarters of the US Department of Defense, caused the stock market to dip briefly.

The fake image was picked up and spread by other news outlets, creating a snowball effect, before local officials, including the Pentagon Force Protection Agency, which oversees the building's security, said they were aware of the reports and confirmed that "there was no explosion or incident."

In the same month, human rights group Amnesty International used AI-generated images of police brutality in a campaign against the authorities.

AI-generated image from Amnesty International. Source: Twitter

"We removed the images from social media posts because we didn't want the criticism of the use of AI-generated images to distract from the main message of support for the victims and their demand for justice in Colombia," said Erika Guevara-Rosas, Amnesty's Americas director.

Magazine: Legislators' fear and skepticism fuel proposed crypto regulations in the US.


