
Editor’s note: This post is part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advancements in OpenUSD and NVIDIA Omniverse.
Open source has become essential for driving innovation in robotics and autonomy. By providing access to critical infrastructure – from simulation frameworks to AI models – NVIDIA is enabling collaborative growth that accelerates the path to safer, more capable autonomous systems.
At CES earlier this month, NVIDIA introduced a new suite of open physical AI models and frameworks to accelerate the development of humanoids, autonomous vehicles and other physical AI embodiments. These tools span the entire robotics development lifecycle – from high-fidelity world simulation and synthetic data generation to cloud-native orchestration and edge deployment – giving developers a modular toolkit to build autonomous systems that can reason, learn and act in the real world.
OpenUSD provides the common framework that standardizes how 3D data is shared across these physical AI tools, enabling developers to build accurate digital twins and reuse them seamlessly from simulation to deployment. NVIDIA Omniverse libraries, built on OpenUSD, serve as the source of ground-truth simulation that feeds the entire stack.
At CES 2026, developers brought the NVIDIA physical AI stack out of the lab and onto the show floor, debuting machines ranging from heavy equipment and factory assistants to social and service robots.
The stack taps into NVIDIA Cosmos world models; NVIDIA Isaac technologies, including the new Isaac Lab-Arena open source framework for policy evaluation; the NVIDIA Alpamayo open portfolio of AI models, simulation frameworks and physical AI datasets for autonomous vehicles; and the NVIDIA OSMO framework to orchestrate training across compute environments.

