The OpenAI board defends CEO Sam Altman amid claims of a ‘toxic culture’
By Benito Santiago
Days after OpenAI announced the formation of its new security committee, former board members Helen Toner and Tasha McCauley accused CEO Sam Altman of putting profits ahead of responsible AI development, hiding key developments from the board and creating a toxic environment at the company.
But current OpenAI board members Bret Taylor and Larry Summers fired back today with a staunch defense of Altman, saying Toner and McCauley were trying to reopen a closed case. The exchange played out in a pair of op-eds published in The Economist.
The former board members fired the opening salvo, arguing that the OpenAI board had proven unable to hold its CEO accountable.
“Last November, in an effort to salvage this self-governing structure, OpenAI's board dismissed its CEO,” Toner and McCauley, who played a role in Altman's ouster last year, wrote on May 26. “In OpenAI's specific case, given the board's duty to provide independent oversight and protect the company's public-interest mission, we stand by the board's action.”
In their published response, Bret Taylor and Larry Summers — who joined OpenAI's board after Toner and McCauley departed — defended Altman, denying the claims and affirming his commitment to safety and governance.
“We reject the claims made by Ms. Toner and Ms. McCauley regarding events at OpenAI,” they wrote. “We regret that Ms. Toner continues to revisit issues that were thoroughly examined by the WilmerHale-led review rather than moving forward.”
While Toner and McCauley did not mention the company's new safety and security committee by name, their op-ed echoed concerns that OpenAI cannot police itself and its CEO with integrity.
“Based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives,” they wrote. “Developments since his return to the company — including his reinstatement to the board and the departure of senior safety-focused talent — bode ill for OpenAI's experiment in self-governance.”
The former board members said Altman's “long-standing patterns of behavior” left the board unable to properly oversee “key decisions and internal safety protocols.” Altman's current colleagues, however, pointed to the conclusions of an independent review of the conflict commissioned by the company.
“The review's findings refute the idea that any AI safety concerns warranted Mr. Altman's replacement,” they wrote, adding that the review found the prior board's decision did not stem from concerns about product safety, the pace of development, or OpenAI's statements to investors, customers, or business partners.
Perhaps more troubling, Toner and McCauley also accused Altman of fostering a toxic company culture.
“Multiple senior leaders had privately shared grave concerns with the board,” they wrote, saying those leaders believed Altman had cultivated a “toxic culture of lying” and engaged in behavior “[that] can be characterized as psychological abuse.”
But Taylor and Summers denied their claims, saying Altman was held in high esteem by his employees.
“In six months of nearly daily contact with the company, we have found Mr. Altman highly forthcoming on all relevant issues and consistently collegial with his management team,” they wrote.
Taylor and Summers also said Altman is committed to working with the government to address concerns about AI development.
The public back-and-forth comes amid a tumultuous era for OpenAI that began with Altman's brief dismissal. Just this month, the company's former head of alignment joined rival Anthropic after making similar allegations against Altman. OpenAI also had to walk back a voice model that sounded similar to actress Scarlett Johansson after failing to secure her permission. The company disbanded its superalignment team, and restrictive NDAs reportedly prevented former employees from criticizing the company.
OpenAI has secured agreements with the Department of Defense to use GPT technology for military applications. Major OpenAI investor Microsoft has reportedly pursued a similar arrangement involving ChatGPT.
The claims shared by Toner and McCauley appear consistent with statements from former OpenAI researcher Jan Leike, who said upon leaving the company that “over the past years, safety culture and processes have taken a backseat to shiny products” [at OpenAI] and that his team had been “sailing against the wind.”
Taylor and Summers addressed these concerns in part in their column, noting the new safety and security committee and its mandate to “make recommendations to the full board on critical safety and security decisions for all OpenAI projects.”
Toner recently doubled down on her claims about Altman's lack of transparency.
“To give a sense of what I'm talking about, the board was not informed in advance when ChatGPT came out in November 2022,” she said on the TED AI Show podcast earlier this week. “We learned about ChatGPT on Twitter.”
She said the OpenAI board did not know that Altman owned the OpenAI Startup Fund, even though he had claimed to have no financial stake in OpenAI. Without the board's knowledge, the fund invested millions raised from partners like Microsoft in other businesses. Altman relinquished ownership of the fund in April.
OpenAI did not respond to a request for comment from Decrypt.
Edited by Ryan Ozawa.