Meta's Chief AI Scientist and Turing Award winner Yann LeCun has been a defining figure in artificial intelligence. Over the past year, his work has not only pushed the boundaries of AI research but also sparked critical discussions about how society should approach the opportunities and risks posed by this transformative technology.
Born in 1960 in Soisy-sous-Montmorency, France, LeCun has been a driving force in AI innovation. From serving as founding director of New York University's Center for Data Science in 2012 and co-founding Meta AI in 2013, to shaping the future of open-source artificial intelligence, LeCun's hands-on perspective makes him a Person of the Year.
“Technically, he's a visionary. There are only a couple of people you can really talk about in those terms,” New York University computer science professor Rob Fergus said in an interview. “More recently, his support for open source and open research has been critical to the Cambrian explosion of startups and people building on these large language models.”
Fergus is an American computer scientist specializing in machine learning, deep learning, and generative models. A professor at NYU's Courant Institute and a Google DeepMind researcher, he co-founded Meta AI (formerly Facebook Artificial Intelligence Research) with Yann LeCun in September 2013.
LeCun's influence on AI stretches back decades, including his groundbreaking work in machine learning and neural networks. A longtime New York University professor, he has advocated self-supervised learning, a mechanism modeled on how people learn from their environment. By 2024, this vision has led to work on artificial intelligence systems capable of perceiving, reasoning, and planning much like living beings.
“Around 2015, reinforcement learning was seen as the stepping stone to AGI. He had that cake analogy: unsupervised learning is the bulk of the cake, supervised learning is the icing, and reinforcement learning is the cherry on top,” Professor Fergus recalled. “Many scoffed at it at the time, but it has proven true. Modern LLMs are trained primarily with unsupervised learning, fine-tuned with a small amount of supervised data, and then refined using reinforcement learning based on human preferences.”
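One reason unsupervised (self-supervised) learning forms the bulk of the cake is that raw text supplies its own training signal: every position in a document is a free next-token prediction example, with no human labeling required. A toy sketch (illustrative only, not anyone's actual training code) of how such pairs are extracted:

```python
def next_token_pairs(text):
    """Self-supervised signal: each position in raw text yields a
    (context, next-token) training example, with no human labels."""
    tokens = text.split()
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

corpus = "the cat sat on the mat"
pairs = next_token_pairs(corpus)
print(len(pairs))   # 5 training examples from 6 tokens, labeled "for free"
print(pairs[0])     # (['the'], 'cat')
```

Supervised fine-tuning and preference data, by contrast, require humans to write or rank outputs, which is why those later stages use far less data than pretraining.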
Whether through Meta's open-source large language models, his advocacy of objective-driven AI, or his engagement with AI's ethical and regulatory challenges, LeCun has become a central figure in the global debate about the role of artificial intelligence.
“It was great to see him up close and see the amazing things he's done,” Professor Fergus said. “More people need to hear him.”
AI regulation
One of LeCun's most controversial positions this year has been his staunch opposition to regulation of foundational AI models.
“He told me he doesn't think AI regulation is necessary or the right thing to do,” NYU math professor Russel Caflisch told Decrypt. “I believe he is optimistic and sees all the good things that can come from AI.”
Caflisch, director of the Courant Institute of Mathematical Sciences at New York University, has known LeCun since 2008 and has witnessed the development of modern machine learning firsthand.
In June, LeCun took to X to argue that regulating foundational models would stifle innovation and hinder technological progress.
“Holding technology developers liable for the bad uses of products built from their technology will simply stop technology development,” LeCun said. “It will certainly stop the distribution of open-source AI platforms, which will kill the entire AI ecosystem, not just startups but also academic research.”
Rather than regulating the underlying technology, LeCun argued, regulation should focus on applications, where risks are more context-specific and manageable.
“He did the fundamental work that made today's AI successful,” Caflisch said. “His contemporary value is as an approachable, articulate visionary for advancing AI toward artificial general intelligence.”
Criticism of fears about AI
LeCun has been vocal in his opposition to what he sees as excessive fear surrounding the potential dangers of AI.
“He doesn't give in to fear, and he's optimistic about AI, but he's not reckless either,” Caflisch said. “He has introduced a way to improve AI through robotics, by gathering information from the physical world.”
On Lex Fridman's podcast in April, LeCun dismissed dire predictions of runaway superintelligence or uncontrollable AI systems.
“AI doomers imagine all kinds of catastrophe scenarios of how AI can escape or take over and basically kill us all, and those rest on assumptions that are mostly false,” LeCun said. “The first assumption is that the emergence of superintelligence will be an event: at some point we're going to figure out the secret and turn on a superintelligent machine, and because we've never done it before, it's going to take over the world and kill us all. That is false.”
Since ChatGPT launched in November 2022, the world has entered what many call an AI arms race. Primed by a century of Hollywood movies predicting a coming robot apocalypse, and by news that AI developers are working with the US government and its allies to integrate AI into their operations, many believe AI superintelligence will come to rule the world.
But LeCun does not share these views, arguing that today's most intelligent AI has only the intelligence of a small animal, nothing like the world-dominating hive mind of The Matrix.
“It's not going to happen. Systems as intelligent as a cat would have all the characteristics of human-level intelligence, but at the level of a cat or a parrot,” LeCun continued. “Then we work on making those things smarter. As we make them more intelligent, we put safeguards in them.”
As for hypothetical doomsday scenarios in which rogue AI systems emerge, LeCun suggests that even if developers can't agree on how to control AI and one system turns malicious, “good” AI could be deployed to fight the rogue ones.
Yann LeCun says AI language models can't reason or plan – even models like OpenAI's o1 – and are not a path to human-level intelligence pic.twitter.com/wQb4pVaRpX
— Tsarathustra (@tsarnick) October 23, 2024
The way forward for AI
As an advocate of what he calls “objective-driven AI,” LeCun argues that AI systems shouldn't just predict sequences or generate content, but should be driven by goals and able to understand, predict, and interact with the world much as living organisms do. This approach involves building AI systems that develop “world models”: internal representations of how things work that support causal reasoning and the ability to plan and adapt strategies in real time.
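The planning loop described above can be sketched in a few lines. This is a toy illustration under stated assumptions (the function names and the trivially simple "model" are hypothetical, not LeCun's architecture): an agent uses an internal world model to simulate candidate actions and picks the one whose predicted outcome best satisfies its objective.

```python
def world_model(state, action):
    """Toy internal model of the world: predicts the next state
    that would result from taking an action. A real world model
    would be learned from observation, not hand-coded."""
    return state + action

def plan(state, goal, actions):
    """Objective-driven choice: simulate each candidate action with
    the world model and pick the one predicted to land closest to
    the goal -- planning by imagined rollouts, not trial and error."""
    return min(actions, key=lambda a: abs(world_model(state, a) - goal))

best = plan(state=0, goal=3, actions=[-1, 1, 2])
print(best)  # 2: the model predicts this action lands nearest the goal
```

The key design point is that the action is evaluated inside the model before it is ever executed, which is what distinguishes goal-driven planning from purely predictive sequence generation.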
LeCun has been a proponent of self-supervised learning as a means of advancing AI toward more autonomous and general intelligence. He envisions AI that learns at different levels of perception, reasoning, and planning, allowing it to learn from large amounts of unlabeled data, similar to how humans learn from their environment.
“The true AI revolution has yet to arrive,” LeCun said in a speech at the 2024 K-Science and Technology Global Forum in Seoul. “In the near future, every interaction we have with the digital world will be powered by AI assistants.”
Yann LeCun's contributions to AI in 2024 reflect both technological innovation and practical foresight. His challenges to heavy-handed AI regulation and his rejection of alarmist AI narratives underscore his commitment to moving the field forward. As AI continues to evolve, LeCun's influence will help ensure that it remains a force for technological advancement.