OpenAI and Microsoft join forces to fight against government-related cyberattacks.

OpenAI, the developer behind the artificial intelligence (AI) chatbot ChatGPT, has partnered with its major investor, Microsoft, to disrupt five cyberattacks linked to various malicious actors.

According to a report released Wednesday, Microsoft has tracked hacking groups with ties to Russian military intelligence, Iran's Revolutionary Guard, and the governments of China and North Korea.

Large language models (LLMs), such as those behind ChatGPT, use vast text data sets to generate human-like responses.

According to OpenAI, two of the five cyberattacks came from groups linked to China: Charcoal Typhoon and Salmon Typhoon. The remaining attacks were attributed to Crimson Sandstorm, linked to Iran; Emerald Sleet, linked to North Korea; and Forest Blizzard, linked to Russia.


The groups attempted to use GPT-4 to research companies and cybersecurity tools, debug code, create scripts, generate content for phishing campaigns, translate technical papers, evade malware detection, and study satellite communications and radar technology, according to OpenAI. Once detected, the accounts were deactivated.

The company revealed the discovery as it implemented a blanket ban on state-sponsored hacking groups using its AI products. While OpenAI was able to disrupt these incidents, it acknowledged the challenge of preventing every malicious use of its programs.

Related: OpenAI Gives ChatGPT Memory: No More Goldfish Brains?

After the launch of ChatGPT, policymakers began to scrutinize AI developers following the proliferation of AI-generated deepfakes and fraud. In June 2023, OpenAI announced a $1 million cybersecurity grant program to advance and measure the impact of AI-based cybersecurity technologies.

Although OpenAI prioritizes cybersecurity and implements safeguards to prevent ChatGPT from generating harmful or inappropriate responses, hackers have found ways to bypass these measures and trick the chatbot into producing such content.

More than 200 entities, including OpenAI, Microsoft, Anthropic and Google, recently partnered with the Biden administration to form the U.S. AI Safety Institute and the U.S. AI Safety Institute Consortium (AISIC). The groups were formed in response to President Joe Biden's October 2023 executive order on AI safety, which aims to promote the safe development of AI, combat AI-generated vulnerabilities and address cybersecurity issues.

Magazine: ChatGPT trigger happy with nukes, SEGA's 80s AI, TAO up 90%: AI Eye
