OpenAI Exec Who Quit Says Safety ‘Took a Backseat to Shiny Products’

Jan Leike, head of alignment at OpenAI and co-lead of its “Superalignment” initiative, took to Twitter (aka X) on Friday to explain why he left the AI developer. In the thread, Leike cited a lack of resources and an insufficient focus on safety as the reasons for his decision to leave the ChatGPT maker.

OpenAI’s alignment, or Superalignment, team is responsible for safety and for creating more human-centric AI models.

Leike’s departure is the third high-profile exit from OpenAI since February. On Tuesday, OpenAI co-founder and former chief scientist Ilya Sutskever also announced that he is leaving the company.

“Stepping away from this job has been one of the hardest things I have ever done,” Leike wrote, “because we urgently need to figure out how to steer and control AI systems much smarter than us.”

Although Leike believed OpenAI would be the best place to do research on artificial intelligence, he said he hasn’t always seen eye to eye with the company’s leadership.

“Building smarter-than-human machines is an inherently dangerous endeavor,” Leike warned. “But over the past years, safety culture and processes have taken a backseat to shiny products.”

Citing the risks of artificial general intelligence (AGI), Leike said OpenAI carries an “enormous responsibility,” but that the company is more focused on achieving AGI than on safety, leaving his team “sailing against the wind” and struggling to secure compute resources.

Also known as the singularity, artificial general intelligence refers to an AI model capable of solving problems across many domains as a human would, including the ability to teach itself and solve problems it was never trained for.

On Monday, OpenAI unveiled several new updates to its flagship AI product, ChatGPT, including the faster, smarter GPT-4o model. According to Leike, his former team at OpenAI is working on several projects related to smarter AI models.

Before joining OpenAI, Leike worked as an alignment researcher at Google DeepMind.

“It's been such a wild ride over the past ~3 years,” Leike wrote. “My team launched the first-ever [Reinforcement Learning from Human Feedback] LLM with InstructGPT, published the first scalable oversight on LLMs, [and] pioneered automated interpretability and weak-to-strong generalization. More exciting stuff is coming out soon.”

According to Leike, a serious conversation about the implications of achieving AGI is long overdue.

“We must prioritize preparing for them as best we can,” Leike continued. “Only then can we ensure AGI benefits all of humanity.”

While Leike didn’t share his next plans in the thread, he encouraged OpenAI to prepare for when AGI becomes a reality.

“Learn to feel the AGI,” he said. “Act with the gravitas appropriate for what you're building. I believe you can ‘ship’ the cultural change that's needed.”

“I'm counting on you,” he concluded. “The world is counting on you.”

Leike did not immediately respond to a request for comment from Decrypt.

Edited by Andrew Hayward.
