Is That Joe Biden or an AI Deepfake? The White House Plans to Label Authentic Content
9 months ago Benito Santiago
The administration is ramping up efforts to verify official communications from the White House after an elaborate AI-generated hoax impersonating U.S. President Joe Biden was used to deceive voters in the New Hampshire primary last month.
With AI's Pandora's box wide open, technologists and ethicists alike are scrambling for ways to combat deepfakes. Now the White House says it is exploring cryptographic technology to verify authentic content.
White House special adviser Ben Buchanan told Business Insider earlier this month, “We recognize that advancing technology makes it easier to do things like clone audio or fake videos. We want to make sure we manage some of the risks and not stifle the creativity that can come from having more powerful tools.”
According to Buchanan, after the Biden administration met with more than a dozen AI developers in 2023, the companies agreed to embed watermarks in their products before releasing them to the public. However, watermarks can be manipulated or removed entirely, making the reverse approach, verifying authentic content rather than flagging fake content, attractive.
“We're in the process of developing watermarking standards through the new AI Safety Institute and the Department of Commerce to make sure we have clear rules of the road and guidelines on the government side for how we handle some of these watermarking and content provenance issues,” Buchanan told Yahoo Finance.
A White House spokesperson did not immediately respond to a request for comment from Decrypt.
In December, in an initiative to combat AI-generated images, digital imaging giants Nikon, Sony, and Canon announced a partnership to embed digital signatures in images captured by their respective cameras.
Last week, the Biden administration announced the launch of the U.S. AI Safety Institute, with participants including OpenAI, Microsoft, Google, Apple, and Amazon. The administration said the institute was born out of Biden's executive order on the AI industry announced in October.
Developing AI to detect deepfakes has become a cottage industry. But some oppose the approach, saying it will make the problem worse.
“Using AI to find AI doesn't work. Instead, it creates an endless arms race. The bad guys always win,” Rod Boothby, co-founder and CEO of an identity verification firm, told Decrypt. “The solution is to reverse the problem and enable people to verify their identity online. Using a bank ID is an obvious solution to the problem.”
Boothby advocates “continuous verification” to ensure that the person in an otherwise anonymous internet session is who they say they are.
For cybersecurity and legal scholar Starr Cashman, protecting ourselves from deepfakes comes down to awareness.
“Raising awareness, especially of robocalls and AI-powered phone scams, can prevent a lot of harm,” Cashman told Decrypt in an email. “If your family is aware of a common AI voice phone scam, in which fraudsters pretend to have kidnapped a family member and use AI to mimic that family member's voice, the person receiving the call will know to verify with that relative before paying a fake ransom.”
Advances in generative AI have made it easier to scam and deceive the general public, as evidenced by a robocall campaign that attempted to suppress Democratic turnout in New Hampshire's primary last month.
The Biden robocall was traced to a Texas-based telecom company. After issuing a cease-and-desist order to Lingo Telecom LLC, the U.S. Federal Communications Commission issued new rules making robocalls that use AI-generated voices illegal.
Cashman says awareness is still the best way to avoid being fooled by AI deepfakes, but acknowledges that the threat calls for government intervention.
“Awareness of deepfakes does not prevent individuals from creating them,” Cashman said. “However, awareness can add pressure on the government to pass laws making deepfakes created without consent federally illegal.”