AI misinformation could sway the 2024 elections—here’s how OpenAI plans to fight it




As the threat artificial intelligence poses to democracy remains a top concern for policymakers and voters worldwide, OpenAI released its plan on Monday to help ensure transparency in AI-generated content and improve access to reliable voting information ahead of the 2024 elections.

Since the launch of GPT-4 in March, generative AI and its potential misuse, including AI-generated misinformation, have become a central part of the conversation surrounding AI's meteoric rise in 2023. In 2024, we could see serious consequences from AI-fueled misinformation spreading during high-profile elections, including the U.S. presidential race.

“As we prepare for elections in 2024 across the world's largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency,” OpenAI said in a blog post.

OpenAI added that experts from its safety systems, threat intelligence, legal, engineering, and policy teams are working to rapidly investigate and address potential abuse.


In August, the U.S. Federal Election Commission said it would move forward with considering a petition to ban AI-generated campaign ads, with FEC Commissioner Allen Dickerson saying, “There are serious First Amendment concerns lurking in the background of this effort.”

For ChatGPT users in the U.S., OpenAI said it will direct users to CanIVote.org, a nonpartisan voter-information site, when they ask “certain procedural election related questions.” The company says lessons from implementing these changes will inform its approach globally.

“We look forward to continuing to learn and work with partners to anticipate and prevent misuse of our tools ahead of this year's global elections,” the company added.

Under its usage policies, OpenAI says it prohibits developers from creating chatbots that impersonate real people or institutions, such as government officials and offices. Also impermissible, OpenAI said, are applications intended to deter people from voting, including those that discourage participation or misrepresent who is eligible to vote.

AI-generated deepfakes, fake images, videos, and audio created using generative AI, went viral over the past year, with several featuring U.S. President Joe Biden, former President Donald Trump, and even Pope Francis shared on social media.

To prevent its DALL-E 3 image generator from being used in deepfake campaigns, OpenAI says it will implement content credentials that attach a mark or “icon” to AI-generated images, signaling their provenance and authenticity.

“We are also experimenting with a provenance classifier, a new tool for detecting images generated by DALL-E,” OpenAI said, adding that early internal testing has shown promising results, even when images have been subjected to common types of modifications.

Last month, Pope Francis called on world leaders to create a binding global agreement to regulate AI.

“The inherent dignity of each person and the fraternity that binds us together as members of the one human family must serve as indisputable criteria for developing and evaluating new technologies before they are employed, so that digital progress can respect justice and contribute to the cause of peace,” Francis said.

To curb misinformation, OpenAI said ChatGPT will begin providing real-time news reporting globally, including attribution and links.

“Transparency around the origin of information and balance in news sources can help voters better assess information and decide for themselves what they can trust,” the company said.

Last summer, OpenAI committed $5 million to the American Journalism Project. Last week, OpenAI struck a deal with The Associated Press that gives the AI developer access to the global news outlet's archive of news stories.

OpenAI's emphasis on transparency in news reporting comes as the company faces several copyright lawsuits, including one from The New York Times. In December, the Times sued OpenAI and OpenAI's largest investor, Microsoft, alleging that millions of its articles were used to train ChatGPT without permission.

“OpenAI and Microsoft have built a business valued in the tens of billions of dollars by taking the combined works of humanity without permission,” the suit says, adding that in training their models, the defendants reproduced copyrighted material to exploit precisely what copyright law was designed to protect: the elements of protectable expression, such as the writing, word choice, and the arrangement and presentation of facts.

OpenAI has called the New York Times lawsuit without merit, saying the publication manipulated its prompts to get the chatbot to generate responses resembling Times articles.

Edited by Andrew Hayward.
