
“Companies don’t print money”: Nvidia’s Jensen Huang recasts the economics of AI

At CES 2026, Huang argues that AI spending is a redirection of existing budgets and a long-overdue overhaul of global computing infrastructure.

LAS VEGAS - “Companies don’t print money to buy our processors; they simply take the annual budget that was meant to replace conventional servers and redirect it toward building AI factories,” said Jensen Huang, Nvidia’s founder and CEO, opening his keynote at the CES 2026 exhibition in Las Vegas. The remark went straight to one of the central anxieties surrounding artificial intelligence: its cost. Huang’s broader message was that AI is moving rapidly from an experimental phase into industrial-scale production. As it does, the boundary between the digital world, autonomous software agents, and the physical world of robots is beginning to blur. The economic logic underpinning this shift, he argued, is rooted in efficiency and, increasingly, national sovereignty.
According to Huang, the global economy is currently built on a trillion-dollar data center infrastructure that relies on outdated technology. The transition now underway, he said, is less a spending spree than a long-overdue reallocation, a “budget diversion” and a necessary “major refresh” of global computing infrastructure. Nvidia’s new processors, Huang explained, function like turbines in a power plant: they generate “intelligence,” measured in tokens, as a product that can be consumed or sold immediately. In that sense, investment in AI hardware has become a form of industrial production.
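Huang’s turbine analogy can be made concrete with a back-of-the-envelope calculation. The sketch below (Python) is purely illustrative: the throughput, utilization, and price figures are assumptions for the sake of the arithmetic, not numbers Nvidia published.

    # Back-of-the-envelope "AI factory" economics. Every figure below is
    # an illustrative assumption, not a number Nvidia published.
    tokens_per_second = 500_000        # assumed aggregate throughput of one system
    utilization = 0.70                 # assumed share of the day spent serving traffic
    price_per_million_tokens = 2.00    # assumed market price, in USD

    seconds_per_day = 24 * 60 * 60
    daily_tokens = tokens_per_second * utilization * seconds_per_day
    daily_revenue = daily_tokens / 1_000_000 * price_per_million_tokens

    print(f"tokens per day: {daily_tokens:,.0f}")     # 30,240,000,000
    print(f"revenue per day: ${daily_revenue:,.2f}")  # $60,480.00

Under these assumed numbers, the hardware behaves exactly as Huang describes: a machine that turns capital expenditure into a continuously sellable output stream.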
Nvidia CEO Jensen Huang
(Reuters/Steve Marcus)
Beyond corporate balance sheets, Huang pointed to a new and significant source of capital expected to accelerate in 2026: what he termed “sovereign AI.” Governments, including those of Japan, Canada, and France, are now funding independent national computing infrastructures designed to safeguard local data, language, and culture. Unlike the dot-com boom, Huang argued, AI investments deliver an immediate “profit multiplier,” largely through operational savings: using AI agents to write code, for example, eliminates thousands of hours of human labor.
Huang framed the current moment within a longer historical arc. He traced the evolution of AI from the transformer architecture introduced in 2017 and early models such as BERT in 2018, through the breakthrough of ChatGPT in 2022, to the arrival of reasoning models in 2024 that let systems “think” before answering. The period of 2024-2025, he said, marks the rise of “agentic AI”: systems that do not merely answer questions, but can research, plan, and use tools autonomously to carry out complex tasks. As an internal example, Huang cited Cursor, an AI-powered code editor that has reshaped software development practices inside Nvidia itself.
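The “agentic” pattern Huang described reduces to a simple loop: plan a sequence of tool calls, execute them, and collect the observations. The sketch below is a generic illustration of that loop; the tools and the hard-coded plan are assumptions for clarity, since in a real system a language model would decide which tool to call next based on what it has seen so far.

    # Minimal sketch of an agentic loop: plan tool calls, execute them,
    # gather observations. The tools and the fixed plan are illustrative
    # assumptions; a real agent lets an LLM choose the next action.

    def web_search(query: str) -> str:
        return f"top results for '{query}'"          # placeholder tool

    def write_code(spec: str) -> str:
        return f"draft implementation of '{spec}'"   # placeholder tool

    TOOLS = {"web_search": web_search, "write_code": write_code}

    def run_agent(task: str) -> list[str]:
        plan = [("web_search", task), ("write_code", task)]  # stand-in for an LLM's plan
        return [TOOLS[name](arg) for name, arg in plan]

    for observation in run_agent("add retry logic to the upload service"):
        print(observation)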
To illustrate this vision, Nvidia demonstrated a system called Brev, which allows users to turn a DGX Spark machine into a private cloud. In the demonstration, an “intent-based router” handled sensitive tasks, such as reading emails, locally on the device, while routing more computationally demanding requests to cloud-based models. The agent was also shown interacting with a small desktop robot, Reachy Mini, developed by Hugging Face, communicating both verbally and through physical movement.
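Nvidia did not publish the router’s internals, but the general pattern is straightforward: classify each request’s intent, keep sensitive work on the device, and send heavy work to the cloud. The following is a minimal sketch of that idea; the function names, keyword heuristic, and routing rule are all hypothetical, not Brev’s actual API.

    # Minimal sketch of an intent-based router, assuming a small local
    # model for privacy-sensitive requests and a larger cloud model for
    # heavy ones. All names and heuristics here are hypothetical.

    SENSITIVE_HINTS = ("email", "inbox", "calendar", "password")

    def classify_intent(request: str) -> str:
        # Crude stand-in for a learned intent classifier.
        if any(hint in request.lower() for hint in SENSITIVE_HINTS):
            return "sensitive"
        return "heavy"

    def run_local(request: str) -> str:
        return f"[on-device] {request}"   # would call a model on the DGX Spark

    def run_cloud(request: str) -> str:
        return f"[cloud] {request}"       # would call a larger hosted model

    def route(request: str) -> str:
        handler = run_local if classify_intent(request) == "sensitive" else run_cloud
        return handler(request)

    print(route("Summarize my unread email"))        # stays on the device
    print(route("Plan a 20-page market analysis"))   # goes to the cloud

The design point of the demonstration was the privacy boundary: personal data never has to leave the machine, while compute-hungry requests still get the benefit of larger cloud models.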
Huang described “physical AI” as one of the largest and most consequential technological frontiers now emerging. Unlike language-based systems, physical AI is designed to understand and operate within the laws of the natural world. He introduced the term “AI physics” to describe models capable of decoding the physical rules embedded in real-world environments. To support this effort, Nvidia unveiled open models such as Cosmos, a family of world foundation models meant to capture “how the world works,” and GR00T, a foundation model for humanoid robots that governs movement and motor control.
On stage, the company demonstrated several autonomous robots, including BDX and Reachy, trained entirely in simulated environments. The robots, styled like characters from a science-fiction film, navigated space independently, responded to spoken commands, and demonstrated spatial awareness without human operators. These systems were trained using Nvidia’s Cosmos model. Huang argued that robotics, powered by such models, is poised to become one of the largest industries in history.
In a notable moment, Huang praised the Chinese DeepSeek R1 model, calling it “the first open-source reasoning model that surprised the world” and crediting it with helping to ignite a broader wave of innovation. He positioned Nvidia not merely as a hardware supplier, but as a frontier model builder with a growing role in the open-source AI ecosystem.
Huang also reviewed a series of Nvidia-developed models trained on the company’s supercomputers and released publicly. These included models for biology and drug discovery (BioNeMo, Evo 2), climate and weather forecasting (Earth-2, FourCastNet), and autonomous driving (Alasim). He highlighted a shift toward multi-model architectures, in which systems deploy several models simultaneously to solve complex problems, pointing to the AI search engine Perplexity as an early example of this approach.
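The multi-model pattern can be sketched generically: a dispatcher fans one question out to several specialist models and merges their partial answers. The sketch below is an illustration of that idea under assumed specialist roles; it is not Perplexity’s or Nvidia’s implementation.

    # Generic sketch of a multi-model architecture: several specialist
    # models answer the same question concurrently, and a merge step
    # combines the results. Specialists and merge logic are assumptions.

    from concurrent.futures import ThreadPoolExecutor

    def retrieval_model(q: str) -> str:
        return f"sources relevant to '{q}'"

    def reasoning_model(q: str) -> str:
        return f"step-by-step analysis of '{q}'"

    def summary_model(q: str) -> str:
        return f"short summary of '{q}'"

    SPECIALISTS = (retrieval_model, reasoning_model, summary_model)

    def answer(question: str) -> str:
        # Run the specialists in parallel, then merge their outputs.
        with ThreadPoolExecutor() as pool:
            partials = pool.map(lambda model: model(question), SPECIALISTS)
        return "\n".join(partials)

    print(answer("How do AI factories change data center economics?"))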
The keynote, held at the Fontainebleau Hotel in Las Vegas, drew an unusually large crowd. Thousands packed the venue, and many who could not enter the main hall watched from large screens set up throughout the hotel. The drama extended to the stage itself. During one of the live demonstrations, the screens inside the hall suddenly went dark. Huang paused and announced, “All systems are down.” He quickly turned the interruption into humor, quipping that such failures “never happen in Santa Clara,” Nvidia’s headquarters, and joking that someone in Las Vegas must have won the jackpot and drained the city’s power grid. Huang improvised until the technical issue was resolved, and the keynote resumed shortly afterward.