OpenAI loses two more leaders – what does that mean for AI safety?

The same week OpenAI commanded global attention with the release of its latest AI model, prominent executives Ilya Sutskever and Jan Leike announced they were leaving the company. Coupled with February's departure of co-founder Andrej Karpathy, the exits suggest that the people most committed to safe, human-centered AI development have left the company.

Has the biggest name in AI lost its brake pedal in the fierce race to dominate a highly disruptive industry?

The failed coup

It was in late 2023 that Sutskever, co-founder and former chief scientist, was rumored to be the driving force behind the controversial move to oust CEO Sam Altman, reportedly over concerns that Altman was sidelining AI safety protocols. His dramatic removal led to breathless headlines and rumors, but Altman was back at work about a week later.

Sutskever soon apologized and stepped down from OpenAI's board—and hasn't made any public statements or appearances since.

Indeed, when OpenAI presented its much-hyped product update on Monday, Sutskever was notably absent.

Sutskever announced his official departure two days later. His resignation statement drew a gracious public acknowledgment from Altman.

“After almost a decade, I have made the decision to leave OpenAI,” Sutskever wrote. “The company's trajectory has been nothing short of miraculous, and I'm confident that OpenAI will build AGI that is both safe and beneficial under the leadership of (Sam Altman, Greg Brockman, and Mira Murati).”

“It was an honor and a privilege to have worked together, and I will miss everyone dearly,” he continued, adding that he is moving on to focus on a project that is personally meaningful to him.

“This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend,” said Sam Altman. “OpenAI would not be what it is without him.”

Shortly after, OpenAI announced that Jakub Pachocki would fill Sutskever's position. Pachocki previously served as director of research and had a more technical focus, working primarily on scaling AI.

But there may be more to the changing of the guard. When Sutskever announced his resignation, he followed only a handful of Twitter accounts, including OpenAI and a dachshund meme account. Adam Sulik, a self-described “AI scientist,” replied to the announcement, urging Sutskever to share insider information on behalf of humanity and to disregard any non-disclosure agreements he might have signed.

Sutskever followed Sulik back after the comment… then unfollowed him a few hours later.

As it turns out, Sutskever wasn't alone on his way out. A few hours later, Jan Leike, who co-led OpenAI's “Superalignment” team with Sutskever to ensure ethical, long-term AI development, resigned with a frosty tweet: “I resigned.”

No pleasantries, no thanks to OpenAI, and no response from OpenAI execs.

Before joining OpenAI, Leike worked at Google DeepMind. At OpenAI, the Superalignment team focused on ensuring that superintelligent systems, AIs expected to meet or exceed human intelligence, remain aligned with human values and goals.

However, not much is known about this AI safety team. Beyond its two leads, OpenAI has not provided additional information about the unit, other than saying it comprised “researchers and engineers from [its] previous alignment team, as well as researchers and engineers from other teams across the company.”

Sulik commented on Leike's resignation, sharing his concerns about the timing of the move.

“Seeing Jan leave right after Ilya is not a good sign for humanity's safe path forward,” he tweeted.

OpenAI did not immediately respond to a request for comment from Decrypt.

These departures come months after another OpenAI co-founder, Andrej Karpathy, left the company in February, saying he wanted to pursue personal projects. Karpathy had worked as a research scientist at OpenAI and was involved in projects ranging from computer vision applications to AI assistants to key developments in the training of ChatGPT. He spent five years away from the company, between 2017 and 2022, leading AI development at Tesla.

With these three departures, OpenAI has lost some of the most prominent minds pursuing an ethical approach to AGI.

Guardrails out, new deals in

There are signs that the failed attempt to oust Altman cleared away internal resistance to more lucrative but ethically murky opportunities.

For example, shortly after Altman's reinstatement, OpenAI loosened its usage guidelines around potentially harmful applications such as weapons development, activities it had previously banned outright.

In addition to striking a deal with the Pentagon, OpenAI also launched a marketplace that lets anyone design and share personalized AI assistants, effectively diluting its direct oversight. Most recently, the company began “exploring” the creation of adult content.

The waning influence of ethics advocates and guidelines extends beyond OpenAI. Microsoft axed its entire ethics and society team in January, and Google sidelined its responsible AI task force the same month. Meta, meanwhile, disbanded its responsible AI team. All three tech giants are in a frenzy to dominate the AI market.

There is obvious cause for concern. AI products are already a mass mainstream social phenomenon, with billions of interactions per day. There appears to be a growing potential for misaligned AI to negatively influence how future generations develop in business, political, societal, and family matters.

The rise of philosophical movements such as “effective accelerationism,” which prizes rapid development over ethical caution, only heightens these worries.

For now, the main remaining safeguards appear to be a mix of open-source development, voluntary participation in AI safety consortia such as the Frontier Model Forum and MLCommons, and direct government regulation, ranging from the UK's AI legislation to the G7's AI code of conduct.
