AI-Powered Cybercrime Will Explode in 2024: CrowdStrike Executive

The new year brings new cybersecurity threats powered by artificial intelligence, CrowdStrike chief security officer Shawn Henry told CBS Mornings on Tuesday.

“I think that's everybody's concern,” Henry said.

“AI has really put this incredibly powerful tool in the hands of the average person and made them incredibly capable,” he explained. “So the adversaries are using this new innovation, AI, to overcome various cybersecurity capabilities to break into corporate networks.”

Henry highlighted the use of AI to infiltrate corporate networks, as well as increasingly sophisticated video, audio, and text deepfakes used to spread disinformation online.

Henry stressed the importance of checking the source of information and never taking anything published online at face value.

“You have to check where it came from,” Henry said. “Who is telling the story, what is their motivation, and can you verify it with multiple sources?”

“It's hard because people have 15 or 20 seconds when they're looking at a video, and they don't have the time, or often don't put in the effort, to source that information, and that's a problem,” he added.

Noting that 2024 is an election year in many countries, including the US, Mexico, South Africa, Taiwan, and India, Henry said democracy itself is on the ballot, and cybercriminals are looking to exploit political chaos using AI.

“We've seen foreign adversaries focus on US elections for years, not just in 2016; [China] targeted us back in 2008,” Henry said. “We have seen Russia, China, and Iran engage in this type of misinformation and disinformation over the years. They are going to use it again here in 2024.”

“People need to understand where information is coming from,” Henry said, “because there are people who have malicious intentions and can cause some big problems.”

The security of voting machines is of particular concern ahead of the 2024 US election. Asked whether AI could be used to hack voting machines, Henry said he hoped the decentralized nature of the US voting system would prevent that from happening.

“I think our system in the United States is very decentralized,” Henry said. “There are individual pockets that could be targeted, such as voter registration rolls, [but] I don't think it will affect the election in a broad way, beyond shaping voters' perceptions of the election's integrity, and I don't think that's a major issue.”

Henry also highlighted AI's ability to put technical tools in the hands of non-technical cybercriminals.

“AI provides a very efficient tool in the hands of people who may not have high technical skills,” Henry said. “They can write code, create malware, phishing emails, etc.”

In October, the RAND Corporation released a report suggesting that generative AI could be harnessed to help terrorists plan biological attacks.

“In general, if the malicious actor is explicit [in their intent], you get a response along the lines of, ‘Sorry, I can't help you with that,'” co-author and RAND Corporation senior engineer Christopher Mouton told Decrypt in an interview. “So you generally have to use one of these jailbreak methods or prompt engineering to get one level below the guardrails.”

In a separate report, cybersecurity firm SlashNext said email phishing attacks have increased by 1,265 percent since the beginning of 2023.

International policymakers spent much of 2023 looking for ways to curb the misuse of generative AI, with the UN Secretary-General warning against the use of AI-generated deepfakes in conflict zones.

In August, the US Federal Election Commission moved forward with a petition to ban the use of artificial intelligence in campaign ads ahead of the 2024 election season.

Tech giants Microsoft and Meta have announced new policies aimed at curbing AI-powered political disinformation.

“By 2024, the world may see more authoritarian governments seeking to interfere in electoral processes,” Microsoft said. “And combining traditional techniques with AI and other new technologies can threaten the integrity of electoral systems.”

Even Pope Francis, himself the subject of viral AI-generated deepfakes, has addressed artificial intelligence in his sermons on several occasions.

“We must be aware of the rapid transformations now underway and manage them in ways that safeguard fundamental human rights,” Pope Francis said. “Artificial intelligence should serve our best human potential and our highest aspirations, not compete with them.”
