

Imagine an autonomous vehicle that can anticipate anything the world throws at it before driving a single mile. This is the promise of World Foundation Models (WFMs): generative AI systems that simulate highly realistic virtual environments and scenarios, offering a direct answer to the severe shortage of diverse, high-quality real-world data. WFMs enhance Physical AI training, testing, and validation, drastically reducing costs and speeding up development.
With over $10B already invested in companies building world models, this is a once-in-a-generation opportunity, comparable to the rise of Large Language Models (LLMs), as the race accelerates toward AI models that can see, reason about, and predict the physical world in motion.
Today, we’re thrilled to announce a strategic partnership with Valeo, one of the world’s leading automotive technology companies and a pioneer in autonomous systems, to build one of the largest open-source, multi-camera World Foundation Models. This collaboration brings together Valeo’s pioneering generative world models with NATIX’s global 360° real-world driving data, creating a foundation that will redefine how autonomous systems and Physical AI learn, adapt, and act.
Over the past few years, LLMs have transformed digital intelligence. They taught machines to understand words, ideas, and context — to predict the next sentence, summarize, or even reason in language. But language is only one part of intelligence. To operate in the physical world, machines need to grasp space, motion, and time. That’s where World Foundation Models (WFMs) come in.
If an LLM predicts the next word in a sentence, a WFM predicts the next moment in reality. It learns from video, imagery, and sensor data to understand how the world behaves — how a pedestrian crosses the street, how light reflects off wet asphalt, or how traffic flows through a complex intersection. In simple terms, a WFM is an LLM for the real world — forecasting how a scene evolves and simulating complex environments with incredible accuracy.
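To make the analogy concrete, here is a minimal sketch in PyTorch. It shows that an LLM and a WFM can share the same autoregressive skeleton; only the “token” changes, from a word embedding to a frame latent. The module names and sizes are our own illustrative assumptions, not any production architecture.

```python
# Minimal sketch of the LLM-vs-WFM analogy: both are autoregressive
# next-step predictors; they differ only in what a "token" is.
# All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class NextStepPredictor(nn.Module):
    """Predicts the next element of a sequence from the elements so far."""
    def __init__(self, dim: int):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, dim)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # Causal mask so each step only attends to the past, as in an LLM.
        t = seq.size(1)
        mask = nn.Transformer.generate_square_subsequent_mask(t)
        return self.head(self.backbone(seq, mask=mask))

# An LLM's sequence holds word-token embeddings. A WFM's sequence holds
# frame embeddings (e.g. latents of camera images at t = 0, 1, 2, ...).
frames = torch.randn(1, 16, 256)   # batch of 1, 16 past frames, latent dim 256
model = NextStepPredictor(dim=256)
predicted = model(frames)[:, -1]   # latent of the predicted *next* moment
print(predicted.shape)             # torch.Size([1, 256])
```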
WFMs can train autonomous systems in simulated environments — teaching them to drive, navigate, or interact without ever needing to test in the real world. MIT’s “LucidSim” research recently demonstrated a similar concept, allowing robots to learn complex physical skills by “dreaming” inside AI-generated worlds. WFMs can bring that same power to cars, drones, and robots: machines that learn from experience before ever leaving simulation.
WFMs are becoming the hottest frontier in Physical AI because they represent a fundamental shift: they teach machines how to reason about the real world rather than merely react to it. But achieving that level of intelligence requires enormous amounts of diverse data spanning many regions and jurisdictions, something few companies can provide.
This is where NATIX stands apart. Our VX360 network captures comprehensive 360° driving footage across several continents, spanning diverse weather conditions and rare edge cases, giving WFMs the diversity, realism, and scale they need to learn how the world truly works.

Valeo is a global leader in perception and driving intelligence, powering hundreds of millions of vehicles. Its research team is behind VaViM (Video Autoregressive Model) and VaVAM (Video-Action Model), two groundbreaking open-source models that offer a complete perception-to-action digital pipeline. VaViM forecasts future driving scenes by predicting how visual frames evolve over time, while VaVAM translates that understanding into physical driving actions such as steering, braking, or accelerating — behaviors that emerge naturally from learned experience.
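As a rough illustration of how such a perception-to-action pipeline composes, the sketch below pairs a toy world model with a toy action head. Every interface here is a simplified assumption of ours, not Valeo’s published API; the real VaViM and VaVAM are far larger models available through Valeo’s open-source releases.

```python
# Schematic sketch of a perception-to-action pipeline in the spirit of
# VaViM + VaVAM. The interfaces below are illustrative assumptions,
# not Valeo's actual code.
import torch
import torch.nn as nn

class VideoWorldModel(nn.Module):
    """VaViM-style idea: predicts the next scene latent from past frames."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.dynamics = nn.GRU(dim, dim, batch_first=True)

    def forward(self, past_latents: torch.Tensor) -> torch.Tensor:
        out, _ = self.dynamics(past_latents)
        return out[:, -1]  # latent of the predicted next scene

class ActionModel(nn.Module):
    """VaVAM-style idea: maps a scene latent to driving commands."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.policy = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, latent: torch.Tensor) -> torch.Tensor:
        return self.policy(latent)  # e.g. [steering, acceleration]

past = torch.randn(1, 8, 256)          # 8 past frame latents
next_scene = VideoWorldModel()(past)   # perception: what happens next?
action = ActionModel()(next_scene)     # action: how should the car respond?
print(action.shape)                    # torch.Size([1, 2])
```

The key design point this mirrors is the separation of concerns: the world model learns how scenes evolve, and the action model learns behavior on top of that understanding.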
These models were trained primarily on video from single front-facing cameras, including large-scale online datasets. This gave Valeo a strong foundation for world modeling, but the real world is not a single view. That’s where NATIX comes in.
Through our VX360 multi-camera network, which has collected over 80,000 hours of driving in just six months (14x more than what L2D, the world’s largest open-source autonomous driving dataset, gathered in three years), we capture synchronized 360° footage from vehicles around the world — across the US, Europe, and Asia — representing the complexity of real-world motion in every direction. This data extends world models from front-view to multi-camera input, giving AI the same complete spatial perception that autonomous systems and robots rely on in practice.
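To give a feel for what synchronized 360° footage means at the data level, here is an illustrative sample layout in Python. The field names and the six-camera rig are assumptions made for the example, not NATIX’s actual schema.

```python
# Illustrative sketch of a synchronized multi-camera training sample.
# The camera layout and field names are assumptions for illustration,
# not NATIX's actual data format.
from dataclasses import dataclass
import torch

CAMERAS = ["front", "front_left", "front_right", "rear", "rear_left", "rear_right"]

@dataclass
class MultiCamSample:
    timestamp_us: int                  # shared capture time across all views
    frames: dict[str, torch.Tensor]    # camera name -> (3, H, W) image tensor

    def as_tensor(self) -> torch.Tensor:
        # Stack the views into a (num_cams, 3, H, W) tensor so a world
        # model can reason across all directions at once.
        return torch.stack([self.frames[c] for c in CAMERAS])

sample = MultiCamSample(
    timestamp_us=1_700_000_000_000,
    frames={c: torch.zeros(3, 224, 224) for c in CAMERAS},
)
print(sample.as_tensor().shape)  # torch.Size([6, 3, 224, 224])
```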
This partnership unites Valeo’s expertise in world modeling with NATIX’s ability to capture diverse, multi-camera real-world data at scale. Together, we are creating the largest truly open, multi-camera WFM — one that learns from the world as it actually is, generates infinite scenarios based on that real-world knowledge, and opens its doors to the global Physical AI community.
The first iteration of this collaboration will launch with 5,000 hours of multi-camera driving data, forming the foundation of the model. This will allow the WFM to fully simulate the world around a vehicle from experience — not just from a front viewpoint, but through a complete multi-camera understanding of its environment.
Unlike world models limited to front-facing cameras, this one integrates the full surround view, capturing the nuances of real driving: a cyclist overtaking from the right, a pedestrian crossing from behind, or a merging car emerging from a blind spot. This design enables the model to simulate complex, real-world interactions that front-view systems often overlook.
Subsequent stages of this model will expand the dataset dramatically — adding tens of thousands of hours of driving data from new geographies, weather conditions, and traffic cultures. The result will be a model that can generalize across continents and contexts, pushing the boundaries of what open foundation models can achieve.
By open-sourcing the WFM, we ensure that innovation moves forward collectively. Researchers, OEMs, and developers will be able to build on shared foundations rather than starting from scratch — creating a virtuous cycle of improvement that benefits the entire ecosystem.
At the core of this effort lies NATIX’s VX360, which makes this scale possible. VX360 transforms everyday vehicles into high-quality 360° data collectors, continuously gathering real-world footage at a global scale. NATIX’s data network is built as a decentralized physical infrastructure network, coordinated on Solana (the home for DePIN and DePAI) to enable scalable data collection, validation, and open participation. Combined with automated anonymization pipelines, this decentralized network ensures that data remains both abundant and ethically sourced, forming the lifeblood of the open-source WFM.
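For flavor, here is a minimal sketch of one step such an anonymization pipeline might include: detecting and blurring faces in a frame before the footage is used. It relies on OpenCV’s bundled face detector purely for illustration and is not NATIX’s production pipeline.

```python
# Minimal sketch of one anonymization step: blur detected faces in a
# dashcam frame. Uses OpenCV's bundled Haar cascade for illustration;
# this is an assumption, not NATIX's actual anonymization stack.
import cv2

def blur_faces(frame):
    """Return a copy of `frame` (BGR ndarray) with detected faces blurred."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    out = frame.copy()
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        roi = out[y:y + h, x:x + w]
        out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return out

frame = cv2.imread("dashcam_frame.jpg")  # hypothetical input file
if frame is not None:
    cv2.imwrite("dashcam_frame_anon.jpg", blur_faces(frame))
```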
This effort also aligns naturally with Solana’s role as the coordination layer for DePIN and data-driven Physical AI networks. As the home of many large-scale data curation and compute networks, Solana provides the infrastructure needed to support open, globally distributed datasets like NATIX’s multi-camera video data at scale.

WFMs are rapidly becoming the cornerstone of autonomous driving and broader robotics. By learning how the world behaves, they provide the perfect platform for autonomous machines to experience, learn, adapt, and anticipate outcomes before they happen.
WFMs can simulate millions of real-world scenarios for training, testing, and validation — from routine traffic flows to rare and dangerous edge cases — dramatically accelerating development and improving safety and reliability.
But open source is what will define the next generation of Physical AI. Progress in the physical world can’t thrive behind closed doors — it depends on shared knowledge, reproducibility, and collective improvement. By open-sourcing the world’s largest multi-camera WFM, NATIX and Valeo are ensuring that Physical AI develops on transparent foundations that anyone can build upon.
This philosophy mirrors how innovation in software and language models accelerated once foundational systems were made open. When the same approach reaches the physical world, every contributor — from researchers to startups — becomes part of a global effort to make autonomous technology safer, smarter, and universally accessible.
The implications extend far beyond self-driving cars. WFMs represent the underlying intelligence layer for Physical AI, enabling robots, drones, and even AR/VR systems to perceive, reason, and act in the real world with human-like foresight. By capturing the richness of reality through NATIX’s decentralized data and Valeo’s advanced modeling, this partnership lays the groundwork for machines that truly understand the dynamics of the physical world before they are ever deployed.
This partnership represents more than a technical milestone — it’s about accelerating the path to safe, open, and transparent Physical AI. By pairing Valeo’s world-modeling expertise with NATIX’s 360° data engine, we’re creating the foundation for how machines will learn from the world around them.
Together, we are building more than a model — we’re shaping the blueprint for the next era of AI: open, responsible, and grounded in the real world.
As we enter the age of Physical AI, openness and collaboration will be the driving forces of progress — ensuring the next leap in AI doesn’t happen behind closed doors, but in the open, for everyone.