

Computer vision models are progressing faster than ever, but one bottleneck remains unchanged: the lack of large-scale, high-variety, high-quality video data. Building models that can truly understand motion, interaction, and temporal behavior requires more than images. It requires video. It requires diversity. It requires realism. And it requires scale.
This is why we’re proud to announce our partnership with Nomadic ML. Securing Nomadic ML as a video data customer brings NATIX’s multi-camera footage into one of the most advanced video-native AI platforms in the industry. In parallel, NATIX will use Nomadic ML’s video understanding models to make its own multi-camera dataset searchable and interactive. The result is a two-sided collaboration that strengthens both companies and directly accelerates progress in Physical AI.

Nomadic ML has built a video analysis platform that converts raw footage into structured datasets for autonomy development. Their system automatically labels events, detects behaviors, and extracts meaningful patterns from large-scale driving video, giving engineers the ability to search, curate, and analyze footage without manual review bottlenecks.
The platform is widely used across driving, robotics, and infrastructure monitoring because it replaces slow, reviewer-driven annotation with AI-driven analysis. Physical AI teams use Nomadic ML to prototype detection models, mine for edge cases, and process full datasets overnight. This makes it possible to surface critical events, create training sets, and explore video archives through natural language search.
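To make the idea concrete, here is a minimal sketch of what AI-driven event labeling over driving footage can look like. It is not Nomadic ML’s actual API; the DetectedEvent record, the label_events helper, the clip IDs, and the confidence threshold are all hypothetical stand-ins for whatever the platform really produces.

```python
from dataclasses import dataclass
from typing import Iterable, List, Tuple


@dataclass
class DetectedEvent:
    """One structured record derived from raw driving footage (illustrative only)."""
    clip_id: str
    start_s: float    # event start, seconds from clip start
    end_s: float      # event end
    label: str        # e.g. "pedestrian_crossing", "hard_brake"
    confidence: float


def label_events(clip_id: str,
                 frame_scores: Iterable[Tuple[float, str, float]],
                 threshold: float = 0.8) -> List[DetectedEvent]:
    """Turn per-frame detector outputs (timestamp, label, score) into
    structured events by keeping only high-confidence detections."""
    events: List[DetectedEvent] = []
    for t, label, score in frame_scores:
        if score >= threshold:
            # Each kept detection becomes a one-second event window around the timestamp.
            events.append(DetectedEvent(clip_id, max(t - 0.5, 0.0), t + 0.5, label, score))
    return events


# Toy per-frame outputs standing in for a real detection model.
raw = [(12.3, "pedestrian_crossing", 0.91), (45.0, "hard_brake", 0.62)]
print(label_events("vx360_clip_0042", raw))  # only the 0.91 detection passes the threshold
```

Structured records like these are what make it possible to search, curate, and analyze footage without a human reviewing every minute of video.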
Nomadic ML’s engineering team has deep experience in computer vision and large-scale ML infrastructure, with backgrounds at major tech and autonomy companies. Their focus is simple: turn unstructured video into usable training data so autonomy developers can move faster and train better models. For Physical AI development, this is critical.
Video models are only as strong as the diversity of the data that trains them. Nomadic ML requires multi-camera, real-world, uncurated driving footage that captures the world as it is, not as a simulation engine imagines it. NATIX provides exactly this at a global scale.
Our VX360 network captures real-world 360° driving video across different countries, traffic styles, weather conditions, and edge-case scenarios. This gives Nomadic ML the data diversity required to push video intelligence to the next level.
Nomadic ML will use NATIX data to strengthen its video models. Training on this footage also ensures that their platform learns from real-world complexity rather than narrow, curated datasets.
NATIX has also integrated Nomadic ML’s video intelligence tools to make the VX360 dataset searchable in a natural, intuitive way. Instead of manually combing through long recordings, teams will be able to find complex sequences in seconds by describing them in plain language, with a vision-language model (VLM) matching the description to the footage.
If the sequence exists, the model can find it. This is especially valuable for Physical AI and AV developers who rely on difficult-to-source edge cases for training, testing, and validation.
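As an illustration of the retrieval pattern behind this kind of search, here is a minimal sketch. It assumes clips have already been embedded by a VLM and that a text query is embedded into the same space; the search function, the toy vectors, and the clip IDs are hypothetical, not the actual NATIX or Nomadic ML integration.

```python
import numpy as np


def cosine_sim(query: np.ndarray, clips: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a matrix of clip vectors."""
    query = query / np.linalg.norm(query)
    clips = clips / np.linalg.norm(clips, axis=1, keepdims=True)
    return clips @ query


def search(query_vec: np.ndarray, clip_vecs: np.ndarray, clip_ids: list, k: int = 5):
    """Return the k clips whose embeddings best match the query embedding."""
    scores = cosine_sim(query_vec, clip_vecs)
    top = np.argsort(scores)[::-1][:k]
    return [(clip_ids[i], float(scores[i])) for i in top]


# Toy vectors standing in for real VLM embeddings of the query and of each clip.
rng = np.random.default_rng(0)
clip_ids = [f"vx360_clip_{i:04d}" for i in range(100)]
clip_vecs = rng.normal(size=(100, 512))   # precomputed per-clip video embeddings
query_vec = rng.normal(size=512)          # embedding of "cyclist runs a red light in the rain"
print(search(query_vec, clip_vecs, clip_ids, k=3))
```

The key design point is that the heavy lifting happens once, at indexing time: clips are embedded ahead of time, so each new query is just a fast vector comparison rather than another pass over the footage.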
The shift to Physical AI and world-model-driven autonomy requires datasets that reflect the true dynamics of the world. It also requires tools that can extract insight from those datasets quickly and reliably. NATIX and Nomadic ML together address both challenges.
NATIX brings the global, multi-camera data that next-generation video models need. Nomadic ML brings the video intelligence engine that makes this data easier to search, understand, and augment. The result is a stronger pipeline for training Physical AI systems across robotics, autonomy, simulation, and advanced mobility research.
Nomadic ML builds advanced video AI models. NATIX provides the global multi-camera dataset that makes those models stronger. By combining NATIX’s decentralized data network with Nomadic ML’s video intelligence platform, we are helping accelerate the development of high-performance video models that can see, understand, and reason about the real world.
This collaboration marks another step forward in NATIX’s mission to power the future of Physical AI with diverse, high-quality real-world data.