Ethereum creator Vitalik Buterin has warned against rushing into ‘very dangerous’ super-intelligent AI.

Vitalik Buterin, the mastermind behind Ethereum, is calling for a more cautious approach to AI research, warning that the current rush toward superintelligence is “very dangerous.” Responding to a critique of OpenAI and its leadership by Ryan Selkis, CEO of crypto intelligence firm Messari, the world's second most influential blockchain developer laid out his views on AI alignment: the core principles that should guide development.

“Superintelligent AI is very dangerous, and we should not rush into it, and we should push back against people who try,” Buterin said.

Superintelligent AI is, in theory, an artificial intelligence that surpasses human intelligence across almost all domains. While many see artificial general intelligence (AGI) as the penultimate realization of the emerging technology's full potential, superintelligent models would be the step beyond it. Today's state-of-the-art AI systems have yet to reach these levels, but advances in machine learning, neural networks, and other AI-related technologies continue apace, stirring excitement and alarm by turns.

“AGI is too important for us to entrust to another soft-spoken narcissist,” Selkis tweeted. For his part, Buterin stressed the need for diverse AI ecosystems to avoid a world in which the enormous value captured by AI is owned and controlled by a very few people.

“A strong ecosystem of open models running on consumer hardware [is] an important hedge against a future where the value captured by AI is hyper-concentrated and most human thought is read and mediated by a few central servers controlled by a few people,” he wrote. “Such models are also far lower in terms of doom risk than both corporate megalomania and militaries.”

The Ethereum creator has been following the AI scene closely, recently praising the open-source LLM Llama3. Additionally, one study suggested that OpenAI's GPT-4o multimodal LLM could pass the Turing test, with AI-generated responses indistinguishable from those of a human.

Buterin also explained that AI models fall into “small” and “big” categories, and that focusing regulation on controlling the “big” models is a logical priority. However, he expressed concern that many current proposals could end up classifying more and more models as “big” over time.

Buterin's comments come amid heated debate over the resignations of key figures from OpenAI's alignment and superalignment research teams. Ilya Sutskever and Jan Leike both left the company, with Leike accusing OpenAI CEO Sam Altman of prioritizing “shiny products” over responsible AI development.

OpenAI was separately revealed to have strict non-disclosure agreements (NDAs) that prevent employees from discussing the company after they leave.

Long-running, high-level debates about superintelligence are becoming more urgent, with experts voicing concerns and offering widely divergent recommendations.

Paul Christiano, who previously led the language model alignment team at OpenAI, has founded the Alignment Research Center, dedicated to aligning AI and machine learning systems with “human interests.” As reported by Decrypt, Christiano suggested there could be a “50/50 chance of disaster shortly after we get human-level systems.”

Yann LeCun, Meta's chief AI scientist, on the other hand, believes such a disaster is highly unlikely. He tweeted in April 2023 that a “hard takeoff” scenario was “absolutely impossible.” LeCun argues that near-term AI developments will shape the technology's long-term trajectory, profoundly influencing how AI is built.

Buterin, instead, sees himself somewhere in the middle. In a 2023 essay, which he endorsed again today, he wrote that “it seems really hard to have a ‘friendly’ superintelligent-AI-dominated world where humans are anything other than pets,” but he also argued that “it is often the case that version N of our civilization's technology causes a problem, and version N+1 fixes it. However, this does not happen automatically, and requires deliberate human effort.” In other words, if superintelligence becomes the problem, humans will probably find a way to deal with it.

The departure of OpenAI's more cautious alignment researchers, along with changes to the company's security policies, has fueled broader, more mainstream concern about a lack of focus on ethical AI development among major AI startups. Indeed, Google, Meta, and Microsoft have all reportedly disbanded the teams responsible for ensuring that AI is developed safely.

Edited by Ryan Ozawa.
