In a sign that the tech industry’s next big boom is picking up steam, Nvidia on Wednesday predicted a sharp increase in already strong demand for chips to build artificial intelligence systems.
The Silicon Valley company’s products, called graphics processing units, or GPUs, are used to build most AI systems, including the popular ChatGPT chatbot. Tech companies from start-ups to industry giants are fighting to get their hands on them.
Nvidia said heavy demand from cloud computing services and other customers for its chips to power AI systems drove revenue to $13.5 billion in the quarter that ended in July, a 101 percent jump from a year earlier, while profit jumped more than ninefold, to $6.2 billion.
That was better than Nvidia’s own forecast in late May, when its $11 billion revenue estimate for the quarter stunned Wall Street and helped push the company’s market value above $1 trillion for the first time.
Nvidia’s predictions and lofty market value have become emblematic of the growing excitement around AI, which is transforming many computing systems and the way they are programmed. Investors were also keenly interested in what Nvidia would say about chip demand in its current quarter, which ends in October.
Nvidia estimated third-quarter sales of $16 billion, nearly triple the year-ago level and $3.7 billion more than analysts’ average expectation of about $12.3 billion.
Chip makers’ financial performance is often considered a harbinger for the rest of the tech industry, and Nvidia’s strong results could revive enthusiasm for tech stocks on Wall Street. While tech companies like Google and Microsoft are spending billions on AI with little revenue to show for it so far, Nvidia is already making money from the technology.
Nvidia CEO Jensen Huang said major cloud services and other companies are investing to bring Nvidia’s AI technology to every industry. “The number of applications is quite spectacular,” he said in an interview after a conference call with analysts. “Every single data center will be accelerated.”
Nvidia shares rose more than 9 percent in after-hours trading.
Until recently, Nvidia derived the bulk of its revenue from selling GPUs that render images in video games. But AI researchers began using those chips in 2012 for tasks like machine learning, a trend Nvidia has capitalized on over the years by enhancing its GPUs and building software that reduces the labor required of AI programmers.
Selling chips for data centers, where most AI training is done, is now the company’s biggest business. Revenue from that business rose 171 percent to $10.3 billion in the second quarter, Nvidia said.
The rush to add generative AI capabilities has become a fundamental imperative for corporate chiefs and boards of directors, said Patrick Moorhead, an analyst at Moor Insights & Strategy. Nvidia’s only limitation at the moment, he said, is its struggle to supply enough chips, a gap that could create opportunities for big chip companies like Intel and Advanced Micro Devices and for start-ups like Groq.
Nvidia’s roaring sales contrast sharply with the fortunes of some of its chip industry peers, which have been hurt by soft demand for personal computers used for general-purpose work and data center servers. Intel said in late July that second-quarter revenue fell 15 percent, though the results were better than Wall Street expected. Revenue at Advanced Micro Devices fell 18 percent during the same period.
Some analysts believe that spending on AI-specific hardware, such as Nvidia’s chips, and systems that use them is diverting money away from spending on other data center infrastructure. IDC, a market research firm, estimates that cloud services will increase their spending on server systems for AI by 68 percent over the next five years.
While Google, Amazon, Meta, IBM and others also make AI chips, Nvidia today accounts for more than 70 percent of AI chip sales and holds an even larger share of the market for chips used to train generative AI models, according to the research firm Omdia.
Demand is particularly heavy for the H100, a new GPU developed by Nvidia for AI applications, which began shipping in September. Companies large and small have scrambled to find supplies of the chips, which are made in an advanced manufacturing process and require equally sophisticated packaging that combines GPUs with specialized memory chips.
Nvidia’s ability to increase deliveries of the H100 largely depends on Taiwan Semiconductor Manufacturing Company, or TSMC, which fabricates the GPUs in addition to handling their packaging.
Industry executives expect the shortage of H100s to extend through 2024, a problem for AI start-ups and for cloud services hoping to sell computing services that rely on the new GPUs.
Mr. Huang said the company is working tirelessly with its manufacturing partners to bring more chips to market, including working with other companies to complement TSMC’s packaging capabilities. “Supply will increase significantly for the rest of this year and next year,” he said.