This is Ryuta Hamamoto from TIMEWELL Corporation.
In January 2026, Jensen Huang took the CES stage and made a declaration: "The ChatGPT moment for robotics has arrived." This framing matters. When ChatGPT launched, it made language AI accessible to general users and triggered a wave of deployment across industries. NVIDIA is betting the same inflection point is now happening for Physical AI — AI that operates not just in digital environments but in the physical world.
NVIDIA in 2026: Key Facts
| Item | Detail |
|---|---|
| CES 2026 | January 5-9 (Las Vegas) |
| Rubin mass production | Shipments begin H2 2026 |
| GTC 2026 | Scheduled March 16-19, 2026 |
| Physical AI declaration | "ChatGPT moment for robotics" |
| New models | Cosmos 2.5, Alpamayo announced |
| Roadmap | Rubin Ultra (2027), Feynman (2028+) |
CES 2026: Physical AI Moves from Concept to Product
Jensen Huang's declaration
The core claim: just as ChatGPT normalized language AI for general use, Physical AI is now at the point where it can be broadly deployed in robotics and autonomous vehicles. NVIDIA is positioning itself as the infrastructure provider for this transition.
Physical AI is defined by three capabilities: perceiving environments, reasoning about them, and adapting behavior in response. Unlike digital AI, Physical AI must operate reliably in an unpredictable physical world — which requires both more sophisticated models and purpose-built simulation infrastructure to train them safely.
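The perceive-reason-adapt loop described above can be sketched as a minimal control cycle. Everything in this sketch is illustrative — a toy world and made-up function names, not an NVIDIA API:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    obstacle_distance_m: float  # stand-in for a range-sensor reading

def perceive(world_state: float) -> Observation:
    # In a real system this would be camera/LiDAR inference; here it is a stub.
    return Observation(obstacle_distance_m=world_state)

def reason(obs: Observation) -> str:
    # Decide an action from the perceived state.
    return "stop" if obs.obstacle_distance_m < 0.5 else "advance"

def act(action: str, world_state: float) -> float:
    # Adapt behavior: advancing closes the distance to the obstacle.
    return world_state - 0.2 if action == "advance" else world_state

# Run a few perceive-reason-act cycles against the toy world.
state = 1.0
actions = []
for _ in range(4):
    action = reason(perceive(state))
    actions.append(action)
    state = act(action, state)
print(actions)  # ['advance', 'advance', 'advance', 'stop']
```

The hard part in the physical world is exactly what this toy omits: perception is noisy, the world changes while the loop runs, and a wrong action has real consequences — hence NVIDIA's emphasis on simulation infrastructure for training and evaluation.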
Rubin platform enters mass production
CES 2026 confirmed that the Rubin platform has moved from development into mass production, with customer shipments planned for H2 2026.
Rubin configuration:
- GPU: Rubin (3nm process, HBM4 memory)
- CPU: Vera
- Six new chips across the platform

AWS, Google Cloud, Microsoft Azure, and OCI have all announced plans to offer Rubin-based instances in 2026.
Rubin vs. Blackwell: Performance Comparison
| Metric | Blackwell B300 | Rubin NVL144 | Improvement |
|---|---|---|---|
| FP4 Dense | 1.1 EFLOPS | 3.6 EFLOPS | 3.3x |
| FP8 Training | 0.36 EFLOPS | 1.2 EFLOPS | 3.3x |
| Memory bandwidth | 8 TB/s | 13 TB/s | 1.6x |
| Memory capacity | 288GB | 288GB | — |
| Process node | 4nm | 3nm | — |
| Memory type | HBM3e | HBM4 | New generation |
Additional Rubin improvements:
- Inference token cost: 1/10th of Blackwell
- GPUs required for MoE training: 1/4 of Blackwell
- FP4 peak inference: 50 PFLOPS per GPU (vs. 20 PFLOPS for Blackwell)
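The headline multipliers in the comparison table can be reproduced directly from its raw figures — a quick sanity check (values taken from the table above):

```python
# Spot-check the Rubin vs. Blackwell multipliers quoted in the table.
blackwell = {"fp4_dense_eflops": 1.1, "fp8_train_eflops": 0.36, "mem_bw_tbs": 8}
rubin     = {"fp4_dense_eflops": 3.6, "fp8_train_eflops": 1.2,  "mem_bw_tbs": 13}

# Divide Rubin by Blackwell, metric by metric, rounded to one decimal place.
ratios = {k: round(rubin[k] / blackwell[k], 1) for k in blackwell}
print(ratios)  # {'fp4_dense_eflops': 3.3, 'fp8_train_eflops': 3.3, 'mem_bw_tbs': 1.6}
```

Note that the EFLOPS rows are rack-scale (NVL144) figures, while the 50 PFLOPS inference number is per GPU — the two sets of numbers are not directly comparable.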
Architecture Roadmap
| Architecture | Timeline | Key Specs |
|---|---|---|
| Blackwell | 2024–2025 | Current generation, 4nm |
| Blackwell Ultra | H2 2025 | 1.5x Blackwell performance |
| Rubin | H2 2026 | 3nm, HBM4, 50 PFLOPs |
| Rubin Ultra | H2 2027 | 100 PFLOPs, NVL576 |
| Feynman | 2028+ | Next-next generation |
Open Physical AI Models
NVIDIA's open-source strategy
At CES 2026, NVIDIA released multiple Physical AI models on Hugging Face:
Cosmos Transfer 2.5 / Cosmos Predict 2.5
- World models for synthetic data generation
- Enables robot policy evaluation in simulation before physical deployment
Alpamayo
- Reasoning Vision-Language-Action (VLA) model for autonomous driving
- Alpamayo R1: the first open reasoning VLA model
- AlpaSim: simulation blueprint for AV testing
Isaac Lab-Arena
NVIDIA open-sourced Isaac Lab-Arena on GitHub — a simulation framework for safe virtual testing of robot capabilities, available to NVIDIA's 2 million robotics developers.
NVIDIA + Hugging Face integration
NVIDIA's Isaac and GR00T technologies have been integrated into Hugging Face's LeRobot framework:
- NVIDIA ecosystem: 2 million robotics developers
- Hugging Face ecosystem: 13 million AI builders
- Combined: 15 million developers
Robotics Partnerships
Global companies announced new robots at CES 2026 built on the NVIDIA robotics stack:
| Partner | Product |
|---|---|
| Boston Dynamics | Atlas robot (NVIDIA integration) |
| Caterpillar | Autonomous industrial machines |
| Franka Robotics | Collaborative robots |
| LG Electronics | Service robots |
| NEURA Robotics | Porsche-designed Gen 3 humanoid |
| Richtech Robotics | Dex (industrial mobile humanoid) |
TechCrunch described NVIDIA's approach as "trying to become the Android of robotics" — providing hardware-agnostic software infrastructure and open models to build a developer ecosystem around.
New Hardware: Jetson T4000
The Jetson T4000 module, announced at CES 2026, extends Blackwell architecture to edge devices:
- 4x energy efficiency improvement over prior generation
- 4x AI compute capacity
- Target applications: robots, autonomous vehicles, industrial equipment
NVIDIA now offers a consistent platform from edge (Jetson) to cloud (Rubin):
| Layer | Product |
|---|---|
| Edge devices | Jetson |
| AI development | DGX |
| HPC workloads | HGX |
| Next-gen AI supercomputers | Rubin |
Then vs. Now: NVIDIA's Trajectory
| Item | GTC 2025 (March) | CES 2026 (January) |
|---|---|---|
| Latest GPU | Blackwell Ultra announced | Rubin in mass production |
| Physical AI | Concept | Products and models released |
| Robotics | Research stage | "ChatGPT moment" declared |
| Open models | Limited | Cosmos, Alpamayo released |
| Edge hardware | Jetson Orin | Jetson T4000 announced |
| Key partners | Developing | Boston Dynamics, LG et al. |
What to Watch at GTC 2026
GTC 2026 is scheduled for March 16-19, 2026.
Expected announcements:
- Detailed Rubin benchmarks
- Rubin Ultra roadmap update
- Physical AI progress report
- Partner product expansions
- New Rubin NVL144/NVL576 specifications
Enterprise Implications
AI infrastructure investment decisions
Rubin's arrival creates a timing question for organizations planning AI infrastructure investment: buy Blackwell now, or wait for Rubin? Cloud deployments via AWS, Google Cloud, Azure, and OCI will offer Rubin instances during 2026, which makes the question more manageable for organizations that don't own their hardware.
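One way to frame the buy-vs-wait trade-off is a simple cost model over a planning horizon. All workload and pricing numbers below are purely hypothetical; only the 1/10 token-cost ratio comes from the CES claims quoted above:

```python
# Toy cost model — hypothetical numbers, not NVIDIA or cloud-provider pricing.
monthly_tokens_b = 500          # billions of inference tokens per month (assumed)
blackwell_cost_per_b = 10.0     # $ per billion tokens on Blackwell (assumed)
rubin_cost_per_b = blackwell_cost_per_b / 10  # CES claim: 1/10th token cost

months_until_rubin = 9          # assumed wait for H2 2026 availability
horizon_months = 24

# Option A: run the entire horizon on Blackwell.
cost_blackwell = horizon_months * monthly_tokens_b * blackwell_cost_per_b

# Option B: run on Blackwell until Rubin ships, then switch.
cost_switch = (months_until_rubin * monthly_tokens_b * blackwell_cost_per_b
               + (horizon_months - months_until_rubin)
                 * monthly_tokens_b * rubin_cost_per_b)

print(f"Blackwell only:  ${cost_blackwell:,.0f}")
print(f"Switch to Rubin: ${cost_switch:,.0f}")
```

Under these assumed numbers the switch scenario is cheaper over two years, but the real decision also depends on migration cost, capacity availability at launch, and whether the 1/10 claim holds for a given workload.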
Robotics opportunity
The "ChatGPT moment" framing signals that NVIDIA believes robotics deployment is approaching the kind of accessibility threshold that produced the current wave of language AI adoption. For manufacturing, logistics, and services industries, the practical question is: what does the first generation of broadly deployable physical AI systems mean for our operations?
NVIDIA's open model strategy (Cosmos, Alpamayo, Isaac Lab-Arena) is designed to accelerate developer ecosystem growth, which tends to reduce the time from platform availability to practical enterprise deployment.
