
Beyond LLMs: AMI Labs' $1B Bet on World Models

Yann LeCun's AMI Labs raised $1.03B to build world models using JEPA — a direct challenge to LLM dominance that could reshape how AI understands physical reality.

When Yann LeCun — Turing Award winner, co-inventor of convolutional neural networks, and one of the most influential researchers in the history of AI — bets $1.03 billion against the dominant paradigm of the field he helped build, it is worth paying close attention. On March 10, 2026, AMI Labs officially launched with the largest seed round ever raised by a European startup, and a founding thesis that directly challenges the assumption powering every major AI lab in Silicon Valley: that large language models are the path to general intelligence.

LeCun disagrees. He has said so publicly, repeatedly, and with increasing specificity. His argument is not that LLMs are useless — they have proven remarkably capable for language tasks — but that they are the wrong architecture for AI that needs to reason about and operate in the physical world. Text prediction, no matter how sophisticated, does not teach an AI how objects fall, how fluids behave, or how a robot should move through uncertain terrain.

The AMI Labs thesis is that the next decade of transformative AI will not come from scaling LLMs further. It will come from world models — AI systems trained to understand and predict the causal structure of physical environments. This is a direct challenge to the incumbent architecture, and a $1 billion seed round backed by Bezos Expeditions, Eric Schmidt, and a roster of elite investors is betting that LeCun is right.


What Is a World Model, and Why Does It Matter?

A world model is an AI system that builds an internal representation of how environments work — learning the physical rules that govern how states change over time — so it can plan, predict, and act accordingly.

Unlike language models that predict the next token in a sequence, world models learn to simulate the dynamics of their environment. Given the current state of a system (a robot arm, a vehicle, a fluid container), a world model predicts what that system will look like in the future and uses that prediction to plan actions.
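This predict-then-plan loop can be sketched with a toy example. Everything below is illustrative: the hand-coded `dynamics` function stands in for what would be a learned neural network, and the random-shooting planner is just one simple way to use a world model for control — none of it reflects AMI Labs' actual implementation.

```python
import numpy as np

def dynamics(state, action):
    """Toy 'world model' for a point mass on a line.
    A real world model would be a trained network; this is hand-coded."""
    position, velocity = state
    velocity = velocity + action          # treat the action as an acceleration
    position = position + velocity
    return np.array([position, velocity])

def plan(state, goal, horizon=5, n_candidates=256, seed=0):
    """Random-shooting planner: imagine many rollouts inside the model
    and keep the action sequence whose final position lands nearest the goal."""
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon))
    best_cost, best_seq = np.inf, None
    for seq in candidates:
        s = state.copy()
        for a in seq:                     # roll the model forward in imagination
            s = dynamics(s, a)
        cost = abs(s[0] - goal)           # how far from the goal did we end up?
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq, best_cost

state = np.array([0.0, 0.0])              # start at rest at the origin
actions, cost = plan(state, goal=3.0)
```

The essential point is that planning happens entirely inside the model's imagination: no real-world trial is needed to evaluate a candidate action sequence.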

| Feature | Large Language Model | World Model (JEPA) |
|---|---|---|
| Input domain | Text tokens | Video, sensor data, physical observations |
| Prediction target | Next token in sequence | Future state in latent representation space |
| Primary strength | Language, reasoning, coding | Physical reasoning, embodied planning |
| Training data | Internet-scale text | Real-world video and sensor streams |
| Sample efficiency | Low — requires trillions of tokens | Higher — V-JEPA 2 achieves robot planning with 62 hours of data |
| Key limitation | No physical grounding | Not suited for open-ended language generation |

The key insight behind AMI Labs’ approach is that generative models — whether they generate text or pixels — are inherently imprecise because they attempt to reconstruct the full complexity of the world. JEPA sidesteps this by learning in abstract representation space, predicting what matters about how the world changes rather than trying to reproduce every irrelevant surface detail.


What Is JEPA and How Does It Work?

JEPA (Joint Embedding Predictive Architecture), introduced by LeCun in a 2022 paper, is the core architecture underpinning AMI Labs. It is a self-supervised learning framework designed to learn rich, abstract representations of the world from unlabeled data.

In standard generative models, the system is trained to reconstruct its input — predicting every pixel of a masked image or every token of a masked sentence. JEPA instead trains a predictor to anticipate the embedding (abstract representation) of a future state, given the embedding of the current state. The model never tries to reconstruct raw inputs; it only learns relationships between representations.
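A minimal numerical sketch of this idea follows, with linear maps standing in for the deep networks JEPA actually uses. The EMA target-encoder setup (as in BYOL and I-JEPA-style training) and all names here are illustrative assumptions, not AMI Labs' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy JEPA-style components: all linear maps for illustration.
# In practice each would be a deep network (e.g. a vision transformer).
dim_in, dim_emb = 8, 4
encoder = rng.normal(size=(dim_emb, dim_in))    # online encoder
target_encoder = encoder.copy()                  # slow-moving copy of encoder
predictor = np.eye(dim_emb)                      # maps context emb -> target emb

def jepa_loss(context, target):
    """Predict the *embedding* of the target view from the context view.
    Nothing is reconstructed in pixel space; the loss lives entirely
    in representation space."""
    z_context = encoder @ context
    z_target = target_encoder @ target           # treated as stop-gradient
    z_pred = predictor @ z_context
    return float(np.mean((z_pred - z_target) ** 2))

def ema_update(online, target, tau=0.99):
    """Exponential-moving-average update of the target encoder,
    a common trick to keep the prediction target stable."""
    return tau * target + (1 - tau) * online

x_context = rng.normal(size=dim_in)   # e.g. visible patches / current frame
x_target = rng.normal(size=dim_in)    # e.g. masked patches / future frame
loss = jepa_loss(x_context, x_target)
target_encoder = ema_update(encoder, target_encoder)
```

Note what is absent: there is no decoder and no pixel-reconstruction term. The model is only penalized for mispredicting abstract representations, which is exactly the design choice that lets it ignore irrelevant surface detail.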

This architecture has a critical practical advantage: sample efficiency. Because JEPA does not waste capacity reconstructing irrelevant details, it learns meaningful representations much faster. The V-JEPA 2 prototype, developed at Meta under LeCun's direction, demonstrated zero-shot robot planning after training on just 62 hours of robot interaction data, a fraction of what comparable systems require.


Why Is This the Right Moment for World Models?

The timing of AMI Labs’ launch is not accidental. Several structural forces have converged to make 2026 a plausible inflection point for world models:

Hardware maturity. Inference-optimized chips (including Meta’s own MTIA lineup announced the same week as AMI Labs’ launch) are making it practical to run complex world models at the edge, inside robots and vehicles, without dependence on cloud round-trips.

Data abundance. The robotics and autonomous vehicle industries have spent five years building instrumented fleets that generate massive amounts of real-world video and sensor data — exactly the training signal world models need.

LLM plateau signals. Multiple researchers have noted that scaling LLMs is producing diminishing returns on common-sense physical reasoning benchmarks. The gap between language capability and physical understanding remains vast.


Who Is Backing AMI Labs?

The $1.03 billion seed round — the largest in European startup history — reflects not just confidence in LeCun’s vision but a strategic calculation by its investors that the LLM paradigm has a structural ceiling for physical-world applications.

| Investor | Category | Notable Significance |
|---|---|---|
| Bezos Expeditions | Family office | Jeff Bezos personally backing world model thesis |
| Cathay Innovation | Deep tech VC | Co-lead; strong focus on hardware-software integration |
| Greycroft | US VC | Cross-stage generalist with deep enterprise AI portfolio |
| Hiro Capital | European deep tech | Robotics and gaming AI specialist |
| HV Capital | European VC | Co-lead; historically contrarian bets on infrastructure |
| Eric Schmidt | Individual | Former Google CEO; strategic signal from AI establishment |
| Mark Cuban | Individual | High-profile endorsement of physical AI thesis |
| Tim & Rosemary Berners-Lee | Individual | Web inventor bet on next-generation intelligence layer |

The participation of Eric Schmidt and Tim Berners-Lee is particularly notable. Both have been at the center of previous paradigm shifts in computing, and their backing carries an implicit message: this is not fringe research.


How Does AMI Labs Compare to the Incumbent LLM Labs?

AMI Labs is not competing head-to-head with OpenAI or Anthropic for language tasks. Its architecture, training data, and target applications sit in a different domain entirely — embodied intelligence versus text generation. The strategic divergence is stark across every key dimension.

| Dimension | OpenAI / Anthropic / Google | AMI Labs |
|---|---|---|
| Core architecture | Transformer-based LLMs | JEPA world models |
| Primary modality | Text, then multimodal | Video, sensor, physical data |
| Intelligence theory | Scaling hypothesis | Physical grounding hypothesis |
| Target application | Language, reasoning, code | Embodied AI, robotics, autonomy |
| Training paradigm | Supervised + RLHF | Self-supervised world prediction |
| Valuation | $300B+ (OpenAI) | $3.5B pre-money (seed stage) |

This is not a head-to-head competition for the same market — at least not yet. LLMs and world models may coexist for years, with LLMs dominating text-centric tasks while world models enable a new generation of physically intelligent systems. The longer-term question is whether a system with genuine physical understanding will eventually outperform text-only models even on abstract reasoning tasks — because it has learned from richer, causally structured training signal.

LeCun’s bet is that the answer is yes.


What Does This Mean for the AI Industry?

AMI Labs’ launch is a signal, not just a funding event. It marks the beginning of a credible, well-resourced challenge to LLM orthodoxy. The implications are worth taking seriously:

For enterprise buyers, world models could unlock AI applications that LLMs genuinely cannot do: reliably operating robots in dynamic factory floors, planning routes for autonomous vehicles in novel conditions, predicting equipment failures in real time.

For the AI research community, $1.03 billion in seed funding buys significant compute and talent. AMI Labs will attract researchers who believe the next frontier is physical intelligence, accelerating the field in directions that large LLM labs are not prioritizing.

For investors, the AMI Labs round creates a new benchmark for pre-product AI funding and establishes world models as a legitimate institutional investment category — not just an academic curiosity.

The LLM era is not over. But it may no longer be the only game in town.


FAQ

What is AMI Labs and what does it do? AMI Labs is a Paris-based AI startup co-founded by Turing Award winner Yann LeCun. It builds world models using JEPA (Joint Embedding Predictive Architecture) — AI systems that learn how the physical world works, rather than predicting text tokens. Its first product, AMI Video, targets drones, robotaxis, and industrial autonomous systems.

How is a world model different from a large language model? LLMs predict the next token in a sequence and learn from text. World models learn to represent and predict how environments change over time, operating in a compressed latent space rather than reconstructing every pixel or word. They are designed for physical reasoning and embodied tasks, not text generation.

What is JEPA and why does LeCun believe it beats LLMs? JEPA (Joint Embedding Predictive Architecture) trains AI to predict the future state of the world in abstract representation space, not in raw pixel or token space. LeCun argues this makes learning more sample-efficient and robust, since it focuses on what matters rather than reconstructing irrelevant surface detail. V-JEPA 2 achieved zero-shot robot planning with just 62 hours of training data.

Who funded AMI Labs and what is its valuation? AMI Labs raised $1.03 billion in a seed round co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. The round values the company at $3.5 billion pre-money — making it Europe’s largest seed round on record. Backers include Eric Schmidt, Mark Cuban, Jim Breyer, and Tim Berners-Lee.

What industries will world models from AMI Labs target? AMI Labs is targeting industries that operate complex physical systems where AI errors carry high real-world consequences: manufacturing, aerospace, biomedical, and pharmaceutical sectors. Consumer-facing robotics, autonomous vehicles, and drone logistics are also core use cases.

Does AMI Labs see LLMs as a dead end? LeCun has long argued that LLMs cannot achieve human-level intelligence because they lack grounding in physical reality. AMI Labs does not predict LLMs will disappear, but contends that for embodied, physical-world tasks, world models based on JEPA are fundamentally better architectures. The two approaches may coexist for different application domains.

