This week, two of tech's most influential voices offered opposing visions of artificial intelligence development, highlighting the growing tension between innovation and safety.
In a Sunday night blog post reflecting on his company's trajectory, OpenAI CEO Sam Altman said OpenAI has tripled its user base to over 300 million weekly active users as it races toward artificial general intelligence (AGI).
“We are now confident we know how to build AGI as we have traditionally understood it,” Altman wrote, adding that in 2025 AI agents may “join the workforce” and “materially change the output of companies.”
Altman said OpenAI is looking beyond AI agents and AGI, writing that the company has begun turning its aim toward “superintelligence in the true sense of the word.”
Altman offered no timeline for delivering AGI or superintelligence. OpenAI did not immediately respond to a request for comment.
Just hours earlier on Sunday, Ethereum co-founder Vitalik Buterin proposed using blockchain technology to create global fail-safe mechanisms for advanced AI systems, including a “soft pause” capability that could temporarily restrict industrial-scale AI operations if warning signs emerge.
Crypto security for AI safety
Buterin calls his approach “d/acc,” or decentralized/defensive acceleration. Simply put, d/acc is a variation on e/acc, or effective accelerationism, a philosophical movement championed by prominent Silicon Valley figures such as a16z's Marc Andreessen.
Buterin's d/acc supports technological progress but prioritizes developments that improve safety and human agency. Unlike effective accelerationism, which takes a “growth at any cost” approach, d/acc focuses on building resilience first.
“D/acc is an extension of the fundamental values of crypto (decentralization, censorship resistance, open global economy and society) to other areas of technology,” Buterin wrote.
Reflecting on how d/acc has evolved over the past year, Buterin wrote that the transition to AGI and superintelligence could be made safer by using crypto mechanisms such as zero-knowledge proofs.
Under Buterin's proposal, major industrial-scale AI hardware would need weekly sign-off from three international bodies to keep running.
“The signatures would be device-independent (if desired, we could even require a zero-knowledge proof that they were published on a blockchain), so it would be all-or-nothing: there would be no practical way to authorize one device to keep running without authorizing all other devices,” Buterin explained.
The system would work like a master switch where either all approved computers run or none do, preventing anyone from carrying out selective enforcement.
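For illustration only, here is a minimal sketch of that all-or-nothing scheme in Python. Everything in it is an assumption made for the example: the three signer keys, the weekly message format, and the use of Ed25519 signatures via the third-party cryptography package. Buterin's post describes the property, not an implementation.

```python
# Hypothetical sketch of the all-or-nothing weekly authorization described
# above; names, message format, and key handling are illustrative
# assumptions, not Buterin's specification.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-ins for the three international bodies holding signing keys.
signer_keys = [Ed25519PrivateKey.generate() for _ in range(3)]
public_keys = [key.public_key() for key in signer_keys]

def weekly_message(week: int) -> bytes:
    # Every device verifies the SAME message for a given week, so one
    # published set of signatures authorizes all devices or none of them.
    return f"ai-hardware-continue:week={week}".encode()

def publish_authorization(week: int) -> list[bytes]:
    # Each body signs the shared weekly message; per the proposal, the
    # signatures could be published on a blockchain for anyone to verify.
    return [key.sign(weekly_message(week)) for key in signer_keys]

def device_may_run(week: int, signatures: list[bytes]) -> bool:
    # A device keeps running only if all three signatures verify;
    # otherwise it "soft pauses."
    if len(signatures) != len(public_keys):
        return False
    message = weekly_message(week)
    for public_key, signature in zip(public_keys, signatures):
        try:
            public_key.verify(signature, message)
        except InvalidSignature:
            return False
    return True

week_1_sigs = publish_authorization(week=1)
print(device_may_run(1, week_1_sigs))  # True: every device is authorized
print(device_may_run(2, week_1_sigs))  # False: no week-2 signatures, so pause
```

Because every device checks the identical weekly message, no signature can authorize one machine but not another, which is the all-or-nothing property Buterin highlights.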
“Until such a critical moment happens, merely having the capability to soft-pause would cause little harm to developers,” Buterin wrote, describing the system as a form of insurance against catastrophic scenarios.
Meanwhile, OpenAI's explosive growth, from 100 million to 300 million weekly users since 2023, shows how quickly AI adoption is accelerating.
Altman acknowledged the challenges of growing OpenAI from an independent research lab into a major technology company, including building “an entire company from scratch around this new technology.”
The proposals reflect broader industry debates over how to manage AI development. Proponents have previously argued that any global regulatory regime would require unprecedented cooperation among major AI developers, governments, and the crypto sector.
“One year of ‘wartime mode' can easily be worth a hundred years of work under conditions of complacency,” Buterin wrote. “If we have to restrict people, it seems better to restrict everyone equally and try to cooperate to organize that rather than one party trying to control everyone.”
Edited by Sebastian Sinclair.