

We’re already rushing towards the end of the first quarter of 2026, but we had to take a moment to review what February looked like for the NATIX Network. We announced another great partnership, published more guides and informational materials, and made sure to connect with the community.
Despite being a relatively short month, February was still rife with news, network growth, and special activities. Let’s dive into the highlights.

The NATIX Network keeps expanding rapidly. With over 270K drivers in the network who have covered over 247M km, we’re closing in on the next Network Laps milestone of 250M km of road covered. We expect to hit that milestone next month as we continue to be one of the largest DePIN networks in the world!
Additionally, we’ve now collected over 141K hours of multi-camera footage, which is critical for Physical AI development. We’re getting closer to the 150K-hour mark, and the more we collect, the deeper our foothold in the industry becomes. NATIX is a force to be reckoned with in the ecosystem.

Computer vision models are progressing faster than ever, but one bottleneck remains unchanged: the lack of large-scale, high-variety, high-quality video data. Luckily, we announced our partnership with Nomadic ML, bringing NATIX’s multi-camera video data into one of the most advanced video-native AI platforms in the industry. The result is a two-sided collaboration that strengthens both companies and directly accelerates progress in Physical AI.
As part of this partnership, Nomadic ML will use NATIX data to train its video-based foundation models, improve edge case recognition, and enhance video generation and augmentation capabilities. Meanwhile, Nomadic ML’s video intelligence tools make the VX360 dataset searchable, which is especially valuable for Physical AI and AV developers who rely on difficult-to-source edge cases for training, testing, and validation.

Autonomous driving is entering a different phase: teaching machines how the physical world behaves. Unlike humans, who approach driving with a lifetime of prior knowledge about motion, cause and effect, danger, and social behavior, Physical AI has to learn it from data, and that reality is reshaping how AI training data for autonomous driving is being built.
Video is becoming the primary learning signal, and a new pipeline is emerging in which Vision-Language Models (VLMs), World Foundation Models (WFMs), Vision-Language-Action (VLA) models, and End-to-End (E2E) driving systems work together to treat driving as something learned from experience. In this neat article, we break down all the steps that lead to how autonomous driving is being developed today, and why video data is the key to unlocking the next step in autonomy.
VX360 is the cornerstone of multi-camera data collection. We wanted to make sure the earning mechanism for VX360 is simple and understandable, which is why we wrote a guide that explains how the VX360 reward system works, how monthly Cycles and Tiers determine rewards, how in-app currencies are structured, and how you can maximize your earnings over time.
The bottom line is that the more you drive and the more footage you upload, the higher you climb through the Tiers, and the more you earn. However, if you’re driving a Tesla with a VX360 and still not sure how to make the most out of your device, we recommend reading this dedicated guide.
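To make the Cycle-and-Tier idea concrete, here is a minimal sketch of how a tiered reward scheme of this shape could work. The Tier names, km thresholds, base rate, and multipliers below are invented for illustration only; they are not NATIX’s actual VX360 parameters, so refer to the guide for the real mechanics.

```python
# Hypothetical sketch of a tiered reward scheme. All numbers and Tier
# names are made-up examples, NOT NATIX's actual VX360 parameters.

TIERS = [
    # (Tier name, minimum km driven in a monthly Cycle, reward multiplier)
    ("Bronze", 0, 1.0),
    ("Silver", 500, 1.25),
    ("Gold", 1500, 1.5),
]

def tier_for(km_driven: float) -> tuple[str, float]:
    """Return the highest Tier whose km threshold the driver has reached."""
    name, mult = TIERS[0][0], TIERS[0][2]
    for tier_name, threshold, multiplier in TIERS:
        if km_driven >= threshold:
            name, mult = tier_name, multiplier
    return name, mult

def cycle_rewards(km_driven: float, base_rate: float = 0.1) -> float:
    """Rewards for one Cycle: a base rate per km, scaled by the Tier multiplier."""
    _, multiplier = tier_for(km_driven)
    return km_driven * base_rate * multiplier

print(tier_for(600))       # ('Silver', 1.25)
print(cycle_rewards(600))  # 600 * 0.1 * 1.25 = 75.0
```

The takeaway matches the text above: more km driven in a Cycle means a higher Tier, and the higher Tier multiplies everything you earn.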
Following our Nomadic ML partnership announcement, we held another special AMA where we demonstrated the power of video-based VLMs. Video-based VLMs are a powerful tool for finding the long tail within hours of recorded footage, and edge case extraction at scale is a significant step in the development of autonomous systems. CEO Alireza showed exactly how it’s done, so make sure you don’t miss this broadcast.
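Conceptually, this kind of edge-case search works by embedding video clips and a text query into a shared vector space and ranking clips by similarity. The sketch below uses hand-made toy vectors and invented clip names to show the ranking step; a real system would use a trained VLM encoder to produce the embeddings.

```python
# Toy illustration of VLM-style edge-case retrieval: rank clips by
# cosine similarity between their embeddings and a text-query embedding.
# Vectors and clip names are invented; a real VLM encoder would produce them.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy clip embeddings (in practice, output of the VLM's video encoder)
clips = {
    "clip_001": [0.9, 0.1, 0.0],  # e.g. ordinary highway driving
    "clip_002": [0.1, 0.9, 0.4],  # e.g. pedestrian stepping into the road
    "clip_003": [0.1, 0.8, 0.5],  # e.g. cyclist running a red light
}

# Toy embedding of a text query such as "pedestrian crossing unexpectedly"
query = [0.1, 0.9, 0.4]

# Most relevant clips first: the rare, query-matching clips surface on top
ranked = sorted(clips, key=lambda c: cosine(clips[c], query), reverse=True)
print(ranked)  # ['clip_002', 'clip_003', 'clip_001']
```

The point is the workflow, not the math: instead of scrubbing through hours of footage, a developer describes the edge case in plain language and lets the model surface the matching clips.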
We also joined a live AMA hosted by XMAQUINA, all about the next generation of Physical AI. CEO Alireza sat down with some of the top Web3 projects leading the space to discuss the autonomous machines of tomorrow.
We still have more news to share before we close out the first quarter of the year. We’re perfectly positioned to help Physical AI projects level up and push autonomy to its next stage. The need for massive amounts of multi-camera data keeps growing, and NATIX offers a solution that scales.
As always, make sure to follow our Twitter @NATIXNetwork to stay up-to-date with our announcements and releases.