Meta to fight AI-generated fake news with ‘invisible watermarks’
Social media giant Meta, formerly known as Facebook, is taking steps to prevent misuse of its technology by adding an invisible watermark to all images it creates using artificial intelligence (AI).
In a Dec. 6 report detailing improvements to Meta AI — Meta's virtual assistant — the company said it will soon add an invisible watermark to all AI-generated images created with the “Meta AI experience.” Like many other AI chatbots, Meta AI generates images and content based on user queries. However, Meta aims to prevent bad actors from treating the service as just another tool for defrauding the public.
The new watermarking feature is designed to make the watermark far more difficult to remove than traditional alternatives.
“In the coming weeks, we will be adding an invisible watermark to the image in the Meta AI experience for greater transparency and traceability,” the report said.
Meta said it uses a deep-learning model to apply watermarks, invisible to the human eye, to images created with its AI tools. The invisible watermark can, however, be detected with a corresponding model.
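Meta has not published how its learned watermark works, so the mechanism cannot be reproduced here. As a rough illustration of the general idea — a mark imperceptible to the eye but recoverable by a matching detector — the toy sketch below hides bits in pixel least-significant bits. This is a classic LSB scheme, not Meta's method, and all function names are invented for this example.

```python
# Toy sketch only: NOT Meta's technique. Meta applies a deep-learning-based
# watermark; this LSB example merely illustrates "invisible but detectable".
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write each bit into the least significant bit of one pixel value."""
    out = pixels.copy().ravel()
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the LSB, then set it to the bit
    return out.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bits: int) -> list[int]:
    """Read the LSBs back; the 'matching detector' of this toy scheme."""
    return [int(v & 1) for v in pixels.ravel()[:n_bits]]

image = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_watermark(image, mark)

# Each pixel changes by at most 1 out of 255, invisible to the eye...
assert int(np.max(np.abs(stamped.astype(int) - image.astype(int)))) <= 1
# ...yet the mark is fully recoverable by the matching extractor.
assert extract_watermark(stamped, len(mark)) == mark
```

Unlike this naive LSB example, which any crop or brightness change would destroy, Meta claims its learned watermark survives exactly those manipulations — which is the point of using a trained model rather than a fixed embedding rule.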
Unlike traditional watermarks, Meta says the invisible watermarks on images from its Imagine with Meta AI generator are “resistant to common image manipulations such as cropping, color changes (brightness, contrast, etc.), screenshots, and more.” The watermarking service will initially roll out for images generated through Meta AI, but the company plans to bring it to other Meta services that use AI-generated images.
In its latest update, Meta AI also introduced a “reimagine” feature for Facebook Messenger and Instagram, which lets users send and request AI-generated images. As a result, images shared through both messaging services will also carry the invisible watermark.
RELATED: Tom Hanks, MrBeast and other celebrities warn against AI deepfakes
AI services such as DALL-E and Midjourney already allow traditional watermarks to be added to generated content. However, such watermarks can be removed simply by cropping the edges of the image. In addition, some AI tools can automatically strip watermarks from images, something Meta says is not possible with its invisible watermark.
Since the rise of generative AI tools, several entrepreneurs and celebrities have raised the alarm about AI-powered scam campaigns. Fraudsters use readily available tools to create and spread fake videos, audio clips and images of well-known figures on the internet.
In May, an AI-generated image of an explosion near the Pentagon, the headquarters of the United States Department of Defense, caused the stock market to dip briefly.
This account, which tweeted a (fake, likely AI-generated) first-hand look at an explosion at the Pentagon, was styled to resemble a Bloomberg News feed. pic.twitter.com/SThErCln0p
— Andy Campbell (@AndyBCampbell) May 22, 2023
The fake image was later picked up and spread by other news outlets, causing a snowball effect. However, local officials, including the Pentagon Force Protection Agency, which oversees the building's security, said they were aware of the reports and confirmed that there was “no explosion or incident.”
@PFPAOfficial and ACFD are aware of social media reports circulating online about the explosion near the Pentagon. There are no explosions or incidents at or near the Pentagon, and there is no immediate threat or danger to the public. pic.twitter.com/uznY0s7deL
— Arlington Fire & EMS (@ArlingtonVaFD) May 22, 2023
In the same month, human rights group Amnesty International drew criticism for using AI-generated images of police brutality in its campaigns against authorities in Colombia.
“We removed the images from social media posts because we didn't want the criticism of the use of AI-generated images to distract from the main message of support for the victims and their demand for justice in Colombia,” said Erika Guevara-Rosas, Amnesty's Americas director.
Magazine: Legislators' fear and skepticism fuel proposed crypto regulations in the US.