Nvidia Says Its $100 Billion OpenAI Investment Will Be Gradual

Nvidia clarifies its $100 billion OpenAI investment is gradual, not a lump sum, amid reports of stalled talks and chip concerns.

Feb 3, 2026

Nvidia CEO Jensen Huang clarified that the company's proposed $100 billion investment in OpenAI was "never a commitment," telling reporters in Taipei that funding would occur gradually rather than as a single lump sum. The statement came after reports suggested the September 2025 deal had stalled amid OpenAI's search for alternative AI chips.

"We never said we would invest $100B in one round," Huang said. "They invited us to invest up to $100B. We will invest one step at a time."

When pressed about whether the commitment still stands, Huang responded, "I told you just now. You keep putting words in my mouth."

The original agreement, announced in September 2025, outlined Nvidia's intention to invest up to $100 billion while helping OpenAI build at least 10 gigawatts of computing capacity, roughly equivalent to New York City's peak electricity demand. That infrastructure would have supported OpenAI's AI model training and deployment.

OpenAI CEO Sam Altman responded to the speculation on social media platform X, writing, "We love working with NVIDIA and they make the best AI chips in the world. We hope to be a gigantic customer for a very long time." Altman added, "I don't get where all this insanity is coming from."

Behind the investment uncertainty lies a technical shift in OpenAI's computing requirements. Eight sources familiar with the matter told Reuters that OpenAI has been dissatisfied with some of Nvidia's latest AI chips since 2025, particularly for inference workloads.

Inference refers to the process where trained AI models like ChatGPT respond to user queries. While Nvidia dominates AI training chips, inference has become a competitive battleground requiring different hardware characteristics.

OpenAI needs new hardware that would eventually provide about 10% of its inference computing needs, according to one source. The company has discussed working with startups Cerebras and Groq for chips offering faster inference speeds.

Nvidia responded to this competitive pressure by striking a $20 billion licensing deal with Groq in December, effectively shutting down OpenAI's talks with the startup. Nvidia also hired away Groq's chip designers while licensing the company's technology.

The technical challenge centers on memory architecture. Inference is more memory-bound than training: chips spend more time fetching data than performing calculations. GPU designs from Nvidia and AMD rely on external memory, and each off-chip fetch adds latency.

OpenAI has focused on chips with large amounts of SRAM memory embedded directly into the silicon. This architecture offers speed advantages for chatbots processing millions of user requests simultaneously.
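The bandwidth gap can be illustrated with a rough, memory-only estimate of how fast a model can generate tokens if every weight must be read from memory for each token. The function and all numbers below (model size, bandwidth figures) are illustrative assumptions for the sake of the sketch, not specifications of any actual Nvidia, Cerebras, or Groq product:

```python
# Back-of-envelope estimate: why inference speed tracks memory bandwidth.
# All figures are illustrative assumptions, not measured chip specs.

def tokens_per_second(params_billion: float,
                      bytes_per_param: float,
                      mem_bandwidth_tb_s: float) -> float:
    """Upper bound on single-stream decode speed when every parameter
    must be fetched from memory once per generated token."""
    bytes_per_token = params_billion * 1e9 * bytes_per_param
    return mem_bandwidth_tb_s * 1e12 / bytes_per_token

# Hypothetical 70-billion-parameter model stored in 16-bit (2-byte) weights.
external_mem = tokens_per_second(70, 2, 3.0)   # assume ~3 TB/s external memory
on_chip_sram = tokens_per_second(70, 2, 80.0)  # assume ~80 TB/s on-chip SRAM

print(f"external memory: ~{external_mem:.0f} tokens/s per stream")
print(f"on-chip SRAM:    ~{on_chip_sram:.0f} tokens/s per stream")
```

Under these assumed numbers, the SRAM-heavy design is bandwidth-limited to dozens of times more tokens per second per stream, which is the kind of advantage chatbot and code-generation workloads care about.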

The issue became particularly visible in OpenAI's Codex product for generating computer code. Company staff attributed some of Codex's performance limitations to Nvidia's GPU-based hardware, according to sources.

In a January 30 call with reporters, Altman said customers using OpenAI's coding models "put a big premium on speed for coding work." He noted that OpenAI would meet this demand through its recent deal with Cerebras, while speed matters less for casual ChatGPT users.

Competitors like Anthropic's Claude and Google's Gemini benefit from deployments using Google's custom tensor processing units. These chips are designed specifically for inference calculations and can offer performance advantages over general-purpose GPUs.

The investment uncertainty triggered market reactions, with Nvidia shares dropping 2.9% and Oracle stock declining 2.8% on Monday. Oracle has a multi-year agreement under which OpenAI will purchase $300 billion worth of computing power.

Oracle clarified that its relationship with OpenAI remains unaffected, stating, "The NVIDIA-OpenAI deal has zero impact on our financial relationship with OpenAI. We remain highly confident in OpenAI's ability to raise funds and meet its commitments."

Despite the clarified investment timeline, Huang confirmed Nvidia's ongoing support. "We will invest a great deal of money, probably the largest investment we've ever made," he said, though he added, "No, no, nothing like that" when asked about the $100 billion figure.

Huang dismissed reports of tension between the companies as "nonsense" and called OpenAI "one of the most consequential companies of our time." The chipmaker's proposed investment had raised concerns about circular investing, given Nvidia remains OpenAI's largest AI chip supplier.

Since the September announcement, OpenAI has signed deals with Nvidia rival AMD and other chipmakers for processors that could compete with Nvidia's offerings. These moves represent part of a broader strategy to diversify computing options as AI infrastructure demand surges.

OpenAI's computing infrastructure leader Sachin Katti posted on X that Nvidia's technology remains "foundational" for the company. "This is not a vendor relationship. It is deep, ongoing co-design," he wrote, noting OpenAI's compute capacity would accelerate from roughly 1.9 GW in 2025.

The partnership evolution comes as OpenAI reportedly aims for a public listing by the end of 2026. Every major investment announcement builds the company's narrative ahead of a potential IPO, with total announced commitments reportedly reaching $1.4 trillion across various partnerships.

Industry analysts note that the original $100 billion figure represented a ceiling rather than a binding commitment. The phased investment approach allows both companies to adjust their partnership as AI technology and market conditions continue evolving rapidly.
