AI Advances Likely to Escalate Cyber Threats, Especially Ransomware: NCSC Report

Artificial intelligence (AI) is set to significantly increase cyber threats, particularly ransomware attacks, over the next two years, according to a report by the UK's National Cyber Security Centre (NCSC).

In the report, the NCSC notes that AI's impact on the threat landscape can be offset by using AI to strengthen cybersecurity through improved detection and system design. However, it recommends further research to measure how far advances in AI-based security will actually reduce the impact of these threats.

A ransomware attack is a cyber attack in which malicious software is deployed to encrypt a victim's files or entire system. The attackers then demand a ransom, typically in cryptocurrency, to provide the victim with a decryption key or tools to restore their data.

AI's impact on cyber threats is expected to be uneven, with the most sophisticated AI-driven cyber operations remaining within reach of advanced state actors. The report highlights social engineering as the area where AI offers the biggest uplift, making phishing attacks more convincing and harder to detect.

A chart showing the extent of the capability uplift AI is expected to give threat actors over the next two years. Source: NCSC

The NCSC report states that AI will primarily enhance threat actors' social engineering capabilities. Generative AI can already produce convincing interactions and lure documents free of the spelling and grammatical errors that often give phishing away, and this trend is expected to grow over the next two years.

The National Crime Agency's Director of Threats, James Babbage, echoed this assessment:

“AI services will lower barriers to entry, increase the number of cybercriminals, and boost their capabilities by improving the scale, speed and effectiveness of existing attack methods. Fraud and child sexual abuse are likely to be particularly affected.”

Related: AI fools voters and politicians ahead of 2024 US election – ‘I thought it was real'

The NCSC review also points to the challenges that generative AI and large language models (LLMs) pose for cyber resilience. These models make it harder to verify whether emails and password reset requests are legitimate, while the shrinking window between the release of security updates and the exploitation of vulnerabilities makes it difficult for network administrators to patch in time.

Using advanced AI in cyber operations requires expertise, resources and access to quality data, so highly capable state actors are best placed to harness AI's potential. Other state actors and commercial companies will see only a moderate capability uplift over the next 18 months, the report said.

While the NCSC recognizes that skills, tools, time and money are currently needed to use advanced AI in cyber operations, it says these barriers will matter less as capable AI models become more widespread. Access to AI-enabled cyber tools is predicted to grow as criminal groups monetize them, putting enhanced capabilities in the hands of anyone willing to pay.

The report notes that the volume, complexity and impact of cyber operations will increase as threat actors learn to use AI effectively. NCSC chief executive Lindy Cameron said the government must harness AI's potential while managing its risks:

“We must both ensure that we use AI technology to its full potential and manage its risks – including its implications for cyber threats.”

To address this evolving threat, the UK government has invested £2.6 billion under its 2022 Cyber Security Strategy to improve the country's resilience.

Magazine: Crypto+AI token picks, AGI will take ‘longer', Galaxy AI up to 100M phones: AI Eye
