If artificial intelligence is used to create political ads, it must be disclosed, according to a new proposal by the US Federal Communications Commission. The FCC notice published Wednesday comes nearly three months after an AI-generated robocall targeted New Hampshire voters.
Under the FCC's proposal, political ads would require on-air disclosure and written disclosure by broadcasters whenever AI-generated content is added.
“As artificial intelligence tools become more accessible, the Commission wants to ensure consumers are fully informed when the technology is used,” the FCC said of the content voters see or hear.
The disclosure rules would apply to both candidate and issue advertisements, and to entities that offer “origination programming,” that is, programming produced or obtained under license for transmission to subscribers, including cable, satellite TV, and radio providers.
Notably, the proposed policy would not ban AI-generated content outright, though the agency has taken stronger steps in the past.
In February, the FCC banned AI-generated robocalls after a faked voice of U.S. President Joe Biden was used in an attempt to discourage New Hampshire residents from voting in the state's primary election. Already the subject of AI-generated deepfakes, Biden called for a ban on AI voice impersonation during his State of the Union address in March.
But while Biden has called for a ban on AI voice impersonation, Ohio 7th District congressional candidate Matt Diemer has partnered with AI developer Civox to engage voters with the technology.
“A system like Civox allows me to make my voice available to people,” Diemer previously told Decrypt. “That would be more than 730,000 citizens across the state.”
“It's no different than sending blogs, emails, text messages, TikToks or tweets,” he said. “This is another way for people to connect with me and make more of a connection.”
Diemer, current host of Decrypt's daily Gm podcast, previously distinguished his candidacy by supporting crypto, and is now adding AI to his toolbox of new technologies.
Developers of generative AI models, including Microsoft, OpenAI, Meta, Anthropic, and Google, have restricted or banned the use of their large language model platforms for political ads.
“In preparation for the many elections around the world in 2024, and out of an abundance of caution, we're limiting the election-related queries that Gemini can respond to,” a Google spokesperson previously told Decrypt.
Looking ahead to the US elections this fall and beyond, the FCC has emphasized the importance of vigilance against deceptive AI-generated deepfakes.
“The use of AI is expected to play a major role in the creation of political ads in 2024 and beyond, but the use of AI-generated content in political ads also creates the potential to mislead voters, in particular through ‘deepfakes’: altered images, videos, or audio recordings that depict people doing or saying things they did not actually do or say, or events that did not actually occur,” the agency said.
The FCC did not immediately respond to Decrypt's request for comment.
Edited by Ryan Ozawa.