What Is To Be Learned From Nvidia's Latest Earnings Transcript


The key points from Nvidia's (NVDA) management discussion are:

1. Nvidia is seeing strong demand for its Hopper and upcoming Blackwell GPU architectures, which are enabling the transition from general-purpose computing to accelerated computing and the rise of generative AI. Hopper demand remains strong, and anticipation for Blackwell is incredible.

2. Blackwell is a significant leap over Hopper, as it is an AI infrastructure platform with multiple custom chips (Grace CPU, Blackwell GPU, ConnectX DPU, BlueField DPU, NVLink Switch) designed to work together to enable massive-scale AI computing. Blackwell can provide 3-5x more AI throughput in a power-limited data center compared to Hopper.

3. NVLink, Nvidia’s high-speed GPU interconnect, is a game-changer, enabling the connection of up to 144 GPUs in a single NVLink domain with 259 TB/s of aggregate bandwidth, which is crucial for training large language models.
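As a quick sanity check on the NVLink figures, the 259 TB/s aggregate number is consistent with Blackwell's publicly advertised 1.8 TB/s of NVLink bandwidth per GPU (an assumption taken from Nvidia's Blackwell specs, not a figure quoted in the transcript):

```python
# Back-of-the-envelope check of the NVLink figures cited on the call.
# The 1.8 TB/s per-GPU NVLink bandwidth is Nvidia's published Blackwell
# spec, used here as an assumption; it is not quoted in the transcript.
per_gpu_tb_s = 1.8      # NVLink bandwidth per Blackwell GPU (TB/s)
gpus_in_domain = 144    # GPUs in a single NVLink domain

aggregate_tb_s = round(per_gpu_tb_s * gpus_in_domain, 1)
print(aggregate_tb_s)   # 259.2 -- consistent with the ~259 TB/s cited
```

In other words, the headline bandwidth is simply the per-GPU link bandwidth multiplied across every GPU in the domain.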

4. Generative AI momentum is accelerating, with frontier model makers racing to scale up models, and internet services and enterprises deploying generative AI for recommender systems, ad targeting, search, and other applications. Nvidia’s AI software platform is enabling enterprises to build custom AI applications.

5. Nvidia’s software, SaaS, and support revenue is expected to approach a $2 billion annual run rate by the end of the year, driven by the Nvidia AI Enterprise platform.

A few pieces of jargon are worth unpacking:

Hopper: Nvidia’s Hopper GPU architecture, the successor to the previous Ampere architecture. The transcript mentions that demand for Hopper GPUs remains strong, and that Hopper shipments are expected to increase in the second half of fiscal 2025. Hopper is described as the “state of the art” GPU offering from Nvidia, providing significant performance improvements over previous generations.

Inference: the process of using a trained machine learning model to make predictions or decisions on new data. The transcript states that over the trailing four quarters, inference drove more than 40% of Nvidia’s Data Center revenue. Inference is a key application for Nvidia’s GPUs, as they provide high throughput and efficiency for running inference workloads, especially for large language models and other generative AI applications.
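The training/inference distinction can be made concrete with a toy example (not Nvidia-specific): training fits a model's parameters to known data, while inference applies the frozen parameters to inputs the model has never seen.

```python
# Toy illustration of training vs. inference.
# Training: fit a slope w for y = w * x by least squares on known data.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Inference: use the trained parameter to predict on unseen input.
def predict(x):
    return w * x

print(predict(10.0))  # 20.0 -- the fitted model generalizes to new data
```

At data-center scale the same split holds: training is the expensive one-time fitting step, while inference is the per-request prediction work that, per the transcript, now drives over 40% of Data Center revenue.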

According to the transcript, the major drivers of Nvidia are:

1. The transition from general-purpose computing to accelerated computing, driven by the slowdown in CPU scaling and the growing demand for computing power. Nvidia’s accelerated computing solutions, powered by CUDA-X libraries, are enabling this transition and opening up new markets for Nvidia.

2. The generative AI revolution, which is driving the need for more powerful and scalable AI infrastructure. Nvidia’s Blackwell platform, which is a step function leap over the Hopper architecture, is designed to meet this demand. Blackwell integrates multiple custom chips, including the Grace CPU, Blackwell GPU, ConnectX DPU, BlueField DPU, and NVLink switch, to create an end-to-end AI infrastructure platform.

3. The growing momentum in generative AI, with frontier model makers racing to scale their models and increase their safety and capabilities. This is driving demand for Nvidia’s computing infrastructure, both in the cloud and in sovereign AI initiatives by countries.

4. The enterprise AI wave, where Nvidia is working with leading IT companies to help enterprises customize AI models and build bespoke AI applications using the Nvidia AI Enterprise platform, which includes NeMo, NIMs, NIM Agent Blueprints, and AI Foundry. This is expected to drive significant growth in Nvidia’s software and services revenue.

According to the transcript, the competitive landscape for Nvidia is as follows:

The demand for Nvidia’s Hopper and Blackwell products is incredibly strong, with customers clamoring to get their hands on the latest technology. Nvidia is seeing strong demand from cloud service providers, consumer internet companies, and enterprises that are looking to accelerate their computing workloads and adopt generative AI capabilities.

Nvidia mentions that the China market remains competitive, and it expects it to stay very competitive going forward. Even so, Nvidia’s Data Center revenue in China grew sequentially in Q2, indicating that the company can still maintain a strong position in the market despite the competition.

Overall, the transcript suggests that Nvidia is facing strong demand for its products and is well positioned to capitalize on the growing trends in accelerated computing and generative AI. The company is working to ramp up production of its Blackwell platform to meet the high demand, while continuing to see strong momentum in its Hopper architecture.

According to the transcript, the major risks and uncertainties mentioned are:

1. The company’s actual results may differ materially from the forward-looking statements made during the call, due to various factors such as the company’s future financial results and business, which are discussed in the company’s recent financial filings.

2. The company expects the China market to be very competitive going forward, after the imposition of export controls.

3. The company’s gross margins may continue to be impacted by the shift towards new products in the Data Center segment.

4. The company expects its operating expenses to grow in the mid-to-upper 40% range as it works on developing its next generation of products.

5. The company’s other income and expenses are expected to include gains and losses from non-affiliated investments and publicly held equity securities.

6. The company’s tax rate is expected to be 17%, plus or minus 1%, excluding any discrete items.

