Can Voters Spot AI Deepfakes Before the 2024 Presidential Election?




The United States is gearing up for a major election cycle in 2024. With artificial intelligence (AI) tools now widely available to the public, political deepfakes are becoming increasingly common, and voters will need new skills to tell real content from fake.

On February 27, Senate Intelligence Committee Chairman Mark Warner said the United States is “less prepared” for election fraud in the upcoming 2024 election than it was in 2020.

This is due in large part to the rise of AI-generated deepfakes in the US over the past year. According to data from identity verification service SumSub, the number of deepfakes detected worldwide increased tenfold in 2023, with North America among the regions seeing the sharpest growth.

Between January 20 and 21, robocalls impersonating US President Joe Biden circulated, telling New Hampshire citizens not to vote in the state's primary election on January 23.


A week later, the incident prompted US regulators to ban the use of AI-generated voices in robocalls, making them illegal under US telemarketing laws.

But, as with all scams, where there's a will, there's a way, regardless of the laws. As the United States prepares for Super Tuesday on March 5 — when many US states hold presidential primary elections — the threat of AI-generated misinformation and hoaxes looms large.

Cointelegraph spoke with Pavel Goldman-Kalaydin, head of AI and machine learning at SumSub, to better understand how voters can spot deepfakes and handle deepfake-driven identity fraud.

How to identify a deepfake

Kalaydin said that while the number of deepfakes worldwide has already increased tenfold, he expects it to grow further as countries enter election season.

He stressed that voters should be aware of two types of deepfakes: those produced by “tech-savvy teams” using advanced technology and hardware, such as high-end graphics processing units and generative AI models, which can be harder to detect, and those produced by lower-level fraudsters using tools commonly found on consumer computers.

“It's important that voters are proactive in checking the content in their feeds and stay wary of video or audio content,” he said.

“Individuals should prioritize verifying the source of information, distinguishing between trusted, reliable media and content from anonymous users.”

According to the AI specialist, there are several telltale signs to look for in a deepfake:

“Look out for the following features: unnatural hand or lip movements, artificial backgrounds, uneven motion, lighting changes, skin tone variations, unusual blinking patterns, poor synchronization of lip movements with speech, or digital artifacts — if any are present, the content is likely generated.”
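As an illustration only, the checklist above can be thought of as a simple weighted scorecard. The sketch below is hypothetical — the sign names, weights, and threshold are assumptions, and real deepfake detection relies on trained models rather than manual flags:

```python
# Toy scorecard for the deepfake warning signs listed above.
# All names and weights are illustrative assumptions, not a real detector.

DEEPFAKE_SIGNS = {
    "unnatural_hand_or_lip_movement": 2,
    "artificial_background": 1,
    "uneven_motion": 1,
    "lighting_changes": 1,
    "skin_tone_variation": 1,
    "unusual_blinking": 2,
    "lip_sync_mismatch": 3,
    "digital_artifacts": 2,
}

def suspicion_score(observed_signs: set) -> int:
    """Sum the weights of the warning signs observed in a clip."""
    return sum(w for sign, w in DEEPFAKE_SIGNS.items() if sign in observed_signs)

def likely_generated(observed_signs: set, threshold: int = 3) -> bool:
    """Flag a clip once enough weighted signs accumulate."""
    return suspicion_score(observed_signs) >= threshold
```

The weighting simply reflects that some signs (such as lip-sync mismatch) are stronger giveaways than others; in practice, no single checklist is reliable on its own.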

However, Kalaydin cautioned that the technology will continue to “advance rapidly,” noting that soon “the human eye would not be able to detect deepfakes without the technology.”

Deepfake generation, distribution and solutions

The real problem, he said, lies in the generation of deepfakes and their subsequent distribution. The accessibility of AI has opened doors of opportunity for many, but that same accessibility is also responsible for the rise of fraudulent content. He added:

“The democratization of AI technology has made face-swapping applications widely accessible and given many the ability to manipulate content to construct false narratives.”

The distribution of this deepfake content follows, with a lack of clear legal guidelines and policies making it easy to spread such misinformation online.

“This leaves voters less informed, which increases the risk of them making decisions based on false information,” Kalaydin warned.

Possible solutions include mandatory labeling of AI-generated or deepfake content on social media platforms to keep users informed.

Platforms should use deepfake and visual detection technologies to verify content authenticity, protecting users from misinformation and outright fakes.
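A minimal sketch of such a labeling policy might look like the following. Everything here is an assumption for illustration — the label text, the confidence threshold, and the function names are hypothetical, not any platform's actual implementation:

```python
# Hypothetical labeling rule: if a platform's deepfake detector reports
# confidence above a threshold, attach an informational label to the post.
# The label text and 0.8 threshold are illustrative assumptions.

AI_LABEL = "Suspected AI-generated content"

def label_post(post: dict, detector_confidence: float, threshold: float = 0.8) -> dict:
    """Return a copy of the post, labeled when the detector is confident enough."""
    labeled = dict(post)
    if detector_confidence >= threshold:
        labeled["label"] = AI_LABEL
    return labeled
```

The key design point is that the post is always shown; the label only adds context, leaving the final judgment to the user.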

Another suggested method involves employing user authentication, “where authenticated users bear responsibility for the accuracy of the content displayed, while reminding unauthenticated users to exercise caution in trusting the content.”

This volatile climate has led governments around the world to begin considering countermeasures. India has issued an advisory to domestic tech companies, stating that new, potentially unreliable AI tools need government approval before public release ahead of its own 2024 elections.

In Europe, the European Commission has created AI misinformation guidelines for platforms operating in the region ahead of its many upcoming election cycles. Shortly after, Meta — the parent company of Facebook and Instagram — released its own strategy for the EU to combat the misuse of generative AI on its platforms.

