At NeurIPS 2025, NVIDIA introduced a comprehensive suite of open-source models, datasets, and tools across autonomous driving, speech AI, and safety research, further solidifying its leadership in open digital and physical AI innovation. Among the releases is the AR1 model, which analyzes driving environments step by step, evaluates multiple trajectory options, and uses contextual data to make more accurate routing decisions.
The company was also highlighted in Artificial Analysis’ new Openness Index, where NVIDIA’s Nemotron family was ranked among the most transparent and open model ecosystems.
NVIDIA introduced DRIVE Alpamayo-R1 (AR1), which it calls “the world’s first open reasoning VLA model for autonomous driving,” marking a significant step forward in transparent and explainable self-driving AI.
Bryan Catanzaro, NVIDIA’s vice president of applied deep learning research, noted that the model combines chain-of-thought reasoning with advanced path planning, enabling deeper research into complex road conditions and supporting the development of Level 4 autonomous driving systems.
NVIDIA says AR1 analyzes driving scenes step by step, evaluates potential trajectories, and leverages contextual information to select the safest route. A portion of its training data is available through NVIDIA’s Physical AI Open Datasets, and the model itself will be released on GitHub and Hugging Face.
Built on NVIDIA Cosmos Reason, AR1 can be customized for non-commercial research applications. The company noted that reinforcement learning proved highly effective during post-training, significantly enhancing the model’s reasoning capabilities compared to its pretrained version. NVIDIA also introduced AlpaSim, an open evaluation framework designed to benchmark AR1’s performance.
NVIDIA further expanded the Cosmos ecosystem by adding new tools and workflows to the Cosmos Cookbook, providing step-by-step guidance for model post-training, synthetic data generation, and performance evaluation.
New Cosmos-based systems introduced by NVIDIA include LidarGen, a world model for generating lidar data; Omniverse NuRec Fixer, a tool for correcting artifacts in neural reconstructions; Cosmos Policy, which converts video models into robot control policies; and ProtoMotions3, a framework for training physically simulated digital humans and robots.
In digital AI, NVIDIA expanded its Nemotron and NeMo portfolios with new models and datasets. Among them are MultiTalker Parakeet, a speech recognition model designed for overlapping multi-speaker scenarios; Sortformer, a next-generation diarization model; and Nemotron Content Safety Reasoning, which applies domain-specific safety rules using structured reasoning.
NVIDIA also released the Nemotron Content Safety Audio Dataset, designed to help identify unsafe audio content. The company introduced additional tools for synthetic data generation and reinforcement learning, including NeMo Gym for building RL environments and the NeMo Data Designer Library, which is now fully open-sourced under the Apache 2.0 license.
CrowdStrike, Palantir, and ServiceNow are among the leading partners adopting Nemotron and NeMo tools to power specialized agentic AI workflows.
NVIDIA researchers are also showcasing more than 70 papers and sessions at NeurIPS, underscoring the company’s growing influence across AI research domains.