AI safety researchers leave OpenAI as safety loses priority.

The entire OpenAI team focused on the existential risks of AI has reportedly either resigned or joined other research groups.

Days after OpenAI's chief scientist and co-founder Ilya Sutskever announced his resignation, Jan Leike, a former DeepMind researcher and the other co-leader of OpenAI's Superalignment team, posted on X that he had also resigned.

According to Leike, he left the company over concerns about its priorities, which he believed had shifted toward product development at the expense of AI safety.

Source: Jan Leike

In a series of posts, Leike argued that OpenAI's leadership had chosen the wrong priorities and that the company should emphasize safety and preparedness as artificial general intelligence (AGI) development moves forward.


AGI is the term for a hypothetical artificial intelligence that can perform as well as or better than humans across a wide variety of tasks.

After three years at OpenAI, Leike criticized the company for prioritizing flashy products over nurturing a strong AI safety culture and processes. He stressed the need for proper resource allocation, particularly computing power, to support his team's safety research, which he said had been neglected.

“…I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point. Over the past few months my team has been sailing against the wind…”

OpenAI established the Superalignment research team in July last year to work on steering and controlling future AI systems that could become smarter than their creators. Sutskever was appointed co-leader of the new team, which was allocated 20% of OpenAI's computing resources.

Related: Reddit shares jump after hours on OpenAI data sharing deal

Following these recent departures, OpenAI has reportedly chosen to disband the Superalignment team and fold its work into other research projects within the organization. The decision is said to be the result of an ongoing internal restructuring that began during the November 2023 governance crisis.

Sutskever was part of the effort in which the OpenAI board briefly ousted Altman as CEO in November of last year, before Altman was reinstated following a backlash from employees.

According to The Information, Sutskever informed employees that the board's decision to remove Sam Altman was in line with its responsibility to ensure OpenAI develops AGI that benefits all of humanity. As one of six board members, Sutskever emphasized the board's commitment to aligning OpenAI's goals with the greater good.

Magazine: How to get better crypto predictions from ChatGPT, Humane AI pin slammed: AI Eye
