Microsoft Temporarily Bans ChatGPT: Security Test Results

Microsoft Inadvertently Blocks Employees From ChatGPT While Testing Security Systems


Microsoft, a significant investor in OpenAI, has temporarily banned its employees from accessing OpenAI's popular AI tool, ChatGPT.

The ban was later identified as an unintended consequence of security system tests for large language models (LLMs).

Microsoft briefly banned the use of ChatGPT

The move came to light when employees of the software giant found they could no longer access ChatGPT. In an internal announcement, Microsoft stated:


“Many AI tools are not available for employees to use due to security and data risks.”

The ban was not limited to ChatGPT but extended to other external AI services such as Midjourney and Replika.

Despite the initial confusion, Microsoft restored access to ChatGPT after discovering the error. A spokesperson later clarified:

“We were testing endpoint monitoring systems for LLMs and accidentally turned them on for all employees.”

Microsoft has encouraged its employees and customers to use services such as Bing Chat Enterprise and ChatGPT Enterprise, emphasizing their superior privacy and security protections.

The relationship between Microsoft and OpenAI is a close one. Microsoft has been integrating OpenAI's services into its Windows operating system and Office applications, and those services run on Microsoft's Azure cloud infrastructure.

ChatGPT, a service with over 100 million users, is known for its human-like responses to text prompts.

Change in ChatGPT website visitors since launch. Source: ToolTester

Chatbots like ChatGPT are trained on a wide range of internet data, which has led some companies to restrict their use to prevent employees from sharing sensitive information. Microsoft's update recommends its own Bing Chat tool, which also relies on OpenAI's artificial intelligence models.

Risk assessment

Amid these events, OpenAI, the company behind ChatGPT, has launched a preparedness team. The group, led by Aleksander Madry, director of the Massachusetts Institute of Technology's Center for Deployable Machine Learning, aims to assess and manage the risks posed by artificial intelligence models.

These risks include individualized persuasion, cybersecurity, and disinformation threats.

Read more: 11 Best ChatGPT Chrome Extensions to Check Out in 2023

The OpenAI initiative comes as the world grapples with the potential threats of frontier AI:

“High-capacity general-purpose AI models can perform a wide variety of tasks and match or exceed the capabilities of today's most advanced models.”

As AI continues to evolve and shape our world, companies like Microsoft and OpenAI are pushing the boundaries of what AI can achieve and working to ensure its safe and responsible use.

Disclaimer

Adhering to the Trust Project guidelines, BeInCrypto is committed to unbiased, transparent reporting. This news report aims to provide accurate and up-to-date information. However, readers are advised to independently verify the facts and consult with experts before making any decisions based on this content. It operates without personal beliefs, emotions or biases, providing data-driven content. A human editor carefully reviews, edits, and approves the article for publication to ensure relevance, accuracy, and compliance with BeInCrypto's editorial standards.
