According to Fortune, Yann LeCun, the 65-year-old NYU professor and Turing Award winner who developed convolutional neural networks in the late 1980s, is planning to leave Meta, which he joined in December 2013 as founding director of FAIR (Facebook AI Research). His departure follows Meta’s June reorganization, in which the company invested $14.3 billion in Scale AI and put 28-year-old Scale CEO Alexandr Wang in charge of a new Meta Superintelligence Labs division, changing LeCun’s reporting line. The move comes after Meta cut approximately 600 AI positions in October, and after more than half the authors of the original Llama research paper left within months of its publication. LeCun is now in early discussions to raise funding for a startup focused on “world models” – AI systems that learn from video and spatial data rather than text.
The great AI divide
Here’s the thing: this isn’t just another executive departure. This represents a fundamental philosophical split in how to approach artificial intelligence. Mark Zuckerberg has clearly pivoted toward rapid deployment of large language models after Meta’s Llama 4 model fell short against competitors like OpenAI and Google. But LeCun has been openly skeptical of LLMs, arguing they’ll never achieve human-level reasoning. Basically, it’s the classic research-versus-product tension, but playing out at massive scale with billions of dollars at stake.
From check reading to AGI
LeCun’s credentials are absolutely legendary in AI circles. He developed convolutional neural networks back in the late 80s – specifically the LeNet architecture that could recognize handwritten digits. That work was so effective that by the mid-1990s, NCR was using it to process 10-20% of all checks in the U.S. Think about that – his algorithms were literally reading your bank checks before most people even had email. He also created DjVu image compression that helped digitize libraries. And in 2019, he shared the ACM Turing Award with Geoffrey Hinton and Yoshua Bengio for making deep neural networks practical.
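To see why that check-reading system worked, here’s a minimal numpy sketch (illustrative only, not LeNet itself) of the core convolution operation: a single small kernel of shared weights slides across the image, so the same learned feature detector fires wherever the feature appears – exactly what you want for digits that can show up anywhere on a check.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2-D convolution: one shared kernel slides over the
    image, so the same weights detect the same pattern at every location."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Toy input: a 5x5 image that is dark on the left, bright on the right.
image = np.zeros((5, 5))
image[:, 2:] = 1.0

# A hand-crafted 3x2 vertical-edge kernel (in a real CNN these weights
# are learned, not hand-written).
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0],
                   [-1.0, 1.0]])

out = conv2d(image, kernel)
# The response peaks (3.0) exactly at the dark-to-bright edge column
# and is 0.0 everywhere else.
```

A real LeNet stacks several such convolution layers with pooling and learns the kernels by backpropagation, but the weight-sharing idea above is the whole trick.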
What are world models?
So what exactly is LeCun planning to build? He’s talking about “world models” – AI systems that develop an internal understanding of their environment by learning from video and spatial data rather than just text. Current LLMs are basically super-powered autocomplete engines trained on internet text. LeCun’s approach would create AI that actually understands cause and effect, that can predict outcomes in physical environments. He’s said this might take about a decade to mature, which tells you he’s playing the long game. This is the kind of fundamental research that FAIR was originally created to pursue, but apparently Meta’s patience for decade-long research timelines has worn thin.
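To make the contrast with text prediction concrete, here’s a toy sketch of the world-model idea – everything in it (the spring-like “environment,” the linear least-squares learner) is an illustrative assumption, not LeCun’s actual architecture. The point is the workflow: watch an environment’s state transitions, fit a model of its dynamics, then predict outcomes for states the system has never observed.

```python
import numpy as np

# Hidden "physics" of a toy environment: a discretized spring with a
# (position, velocity) state. The learner never sees this matrix.
A_true = np.array([[1.0,  0.1],
                   [-0.1, 0.99]])

def env_step(state):
    return state @ A_true

# Watch a rollout and record (state, next_state) observation pairs.
s = np.array([1.0, 0.0])
states, nexts = [], []
for _ in range(50):
    s_next = env_step(s)
    states.append(s)
    nexts.append(s_next)
    s = s_next
X, Y = np.array(states), np.array(nexts)

# "World model": fit the dynamics s' ≈ s @ A from observations alone.
A_learned, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The model can now roll the world forward from a state it never saw.
test = np.array([0.3, -0.5])
pred = test @ A_learned
true = env_step(test)
# pred matches the real physics to numerical precision.
```

The real research problem is doing this from raw video with messy, nonlinear, partially observable dynamics – which is why LeCun talks in decade-long timescales – but the cause-and-effect prediction loop is the same.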
Research exodus
The writing has been on the wall for a while. Former employees told Fortune that FAIR has been “dying a slow death” as Meta prioritized commercially focused AI teams. When you combine the 600 AI layoffs, the mass exodus of Llama paper authors, and now LeCun’s departure, it paints a pretty clear picture. Meta is betting big on immediate product deployment while one of the godfathers of modern AI is heading for the exits to pursue his vision elsewhere. It makes you wonder – in the race to AGI, are we sacrificing the fundamental research that might actually get us there for short-term competitive gains?
