Former OpenAI chief scientist Ilya Sutskever starts SSI to focus on AI safety



OpenAI co-founder and former chief scientist Ilya Sutskever and former OpenAI engineer Daniel Levy have joined forces with Daniel Gross, an investor and former partner at startup accelerator Y Combinator, to create Safe Superintelligence Inc. (SSI). The new company's goal and product are evident from its name.

SSI is an American company with offices in Palo Alto and Tel Aviv. In an online announcement on June 19, 2024, the founders said the company will advance artificial intelligence (AI) by developing safety and capabilities in tandem.

“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

Sutskever and Gross were already worried about AI safety

Sutskever left OpenAI on May 14. He was involved in the ouster of CEO Sam Altman and played an ambiguous role at the company after stepping down from the board upon Altman's return. Daniel Levy was among the researchers who left OpenAI a few days after Sutskever.


Related: OpenAI makes ChatGPT ‘less verbose', blurs writer-AI distinction

Sutskever and Jan Leike were the leaders of OpenAI's Superalignment team, created in July 2023 to “steer and control AI systems much smarter than us”. Such systems are referred to as artificial general intelligence (AGI). OpenAI dedicated 20% of its computing power to the Superalignment team at its creation.

Leike left OpenAI in May and is now a team lead at Amazon-backed AI startup Anthropic. OpenAI defended its safety-related precautions in a lengthy X post by company president Greg Brockman, but it disbanded the Superalignment team after the researchers' resignations in May.

Other high-tech figures are also concerned

The former OpenAI researchers are among many scientists concerned about the future direction of AI. Ethereum founder Vitalik Buterin called AGI “risky” amid the staff departures at OpenAI. “Such models are far less of a doom risk than corporate megalomania and militaries,” he added.

Source: Ilya Sutskever

Tesla CEO Elon Musk, once a backer of OpenAI, and Apple co-founder Steve Wozniak joined more than 2,600 tech leaders and researchers in urging a six-month halt to the training of AI systems, citing the “profound risks” they pose to humanity.

The SSI advertisement notes that the company is hiring engineers and researchers.

Magazine: How to get better crypto predictions than ChatGPT, Humane AI pin slammed: AI Eye
