OpenAI’s New AI Shows Steps Towards ‘Biological Weapons Hazards’, Ex-Worker Warns Senate

OpenAI's new GPT-o1 AI model is the first to demonstrate capabilities that can help experts reproduce known — and new — biological threats, a former company insider told U.S. senators this week.

William Saunders, a former member of the technical staff at OpenAI, told the Senate Judiciary Subcommittee on Privacy, Technology, and the Law that “OpenAI's new AI system can assist experts in planning to reproduce a known biological threat and is the first system to demonstrate measures of biological weapons risk.”

He warned that this capability could cause “serious damage” if AGI systems are developed without proper safeguards.

The experts also testified that artificial intelligence is developing rapidly and that a pivotal milestone known as artificial general intelligence (AGI) is on the near horizon. At the AGI level, AI systems can match human intelligence across a wide range of cognitive tasks and learn autonomously. If a publicly available system can understand biology and develop new tools without proper oversight, the potential for malicious users to cause significant harm increases greatly.

“AI companies are making rapid progress in building AGI,” Saunders told the Senate committee. “It is plausible that an AGI system could be built in as little as three years.”

Helen Toner—who was an OpenAI board member and voted to fire co-founder and CEO Sam Altman—also expects to see AGI sooner rather than later. “Even if short-term predictions turn out to be wrong, the idea of building human-level AI in the next decade or two should now be seen as a real possibility that requires major preparatory action,” she testified.

Saunders, who worked at OpenAI for three years, highlighted the company's recently announced GPT-o1 AI system, which he said has “passed significant milestones” in its capabilities. As reported by Decrypt, OpenAI said it decided to move away from the traditional numerical increments of GPT versions because this model is not just an update but an evolution: a new kind of model with a different set of capabilities.

Saunders also voiced concern about the lack of adequate safety measures and oversight in AGI development. “Nobody knows if AI systems are safe and secure,” he pointed out, criticizing what he described as OpenAI's approach to safe AI development, which he said prioritizes profitability over safety.

“While OpenAI has pioneered aspects of this testing, they have repeatedly prioritized deployment over rigor,” he warned. “I believe there is a real risk that they will miss important dangerous capabilities in future AI systems.”

The testimony also revealed internal challenges at OpenAI, particularly those that came to light after Altman's ouster. “The team at OpenAI tasked with developing approaches to controlling AGI no longer exists. Its leaders and many key researchers resigned after struggling to get the resources they needed,” he said.

His words add another brick to the wall of complaints and warnings that AI safety experts have been leveling against OpenAI's approach. Ilya Sutskever, who co-founded OpenAI and played a key role in ousting Altman, left after the launch of GPT-4o to found Safe Superintelligence Inc.

OpenAI co-founder John Schulman and alignment lead Jan Leike also left the company to join rival Anthropic, with Leike saying that under Altman's leadership, safety “took a backseat to shiny products.”

Likewise, former OpenAI board members Toner and Tasha McCauley wrote an op-ed published in The Economist, arguing that Sam Altman is prioritizing profit over responsible AI development, hiding key developments from the board and fostering a toxic environment within the company.

In his statement, Saunders called for urgent regulatory action, emphasizing the need for clear safety measures in AI development, not only from companies but also from independent oversight bodies. He also stressed the importance of whistleblower protections in the tech industry.

The former OpenAI employee highlighted the broader implications of AGI development, including the potential to entrench existing inequities and facilitate manipulation and misinformation. Saunders also warned that “the loss of control of autonomous AI systems” could lead to the “extinction of humanity”.

Edited by Josh Quittner and Andrew Hayward.
