Should You Buy The NVDA Dip?
The DeepSeek-induced shock across the US stock market drained around $1.5 trillion in value from stocks. Now priced at $121.35, down from $142.62 on Friday, Nvidia (NASDAQ: NVDA) has returned to its early October 2024 price level, having shed over $900 billion in market cap.
The main reason for this rapid capital withdrawal is that the Chinese DeepSeek AI model set a new standard for cost-to-performance compute efficiency. If that standard holds, investments in AI infrastructure, including in the energy sector, may not need to continue at their current scale.
Furthermore, the open-source DeepSeek-V3 is fully optimized for multimodal AI training on AMD Instinct accelerators, AMD being Nvidia's long-standing rival. Compounding this, speculation that NVDA stock is overvalued has circulated for over a year.
At present, NVDA stock's forward price-to-earnings (P/E) ratio is 33.33 vs AMD's 24.75. The gap in price-to-book (P/B) ratio is even wider: 53 for Nvidia vs 3.5 for AMD. Nonetheless, with these factors having dragged NVDA stock down, is this an opportunity to buy it at a rare discount?
DeepSeek and Compute Power Required to Train AI
The main reason Nvidia so rapidly transformed from a video-gaming GPU company into an AI data center company is its GPU dominance: it provides a full-stack software framework for AI developers on top of its performant GPU hardware.
Consequently, Nvidia seamlessly extended its discrete GPU dominance, a 90% global market share in Q4 2024, into the emerging AI infrastructure market. In turn, Nvidia became effectively synonymous with the AI boom, as its first-mover advantage seemed unlikely to be easily eroded.
After DeepSeek's appearance, the question is: by how much does the new AI model reduce the compute needed for the same output performance?
The Chinese startup stated that it took 2,048 Nvidia chips to train the R1 version. Training ChatGPT reportedly took around 20,000 A100 chips, each roughly 2.7x less performant than the mainstay H100 used for most LLMs, including DeepSeek. In H100-equivalent terms, that is about 7,400 chips for ChatGPT versus 2,048 for DeepSeek, which would make DeepSeek at least 3.6x more compute efficient.
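The efficiency claim above can be sanity-checked with a quick back-of-the-envelope calculation. A minimal sketch, using the figures quoted in the text; note that the 2.7x A100-to-H100 performance ratio is a rough approximation, not an official benchmark:

```python
# Back-of-the-envelope check of the ~3.6x compute-efficiency claim.
chatgpt_a100s = 20_000       # A100 chips reportedly used to train ChatGPT
a100_to_h100_ratio = 2.7     # an H100 is taken to be ~2.7x faster than an A100
deepseek_gpus = 2_048        # Nvidia chips DeepSeek says it used for training

# Express ChatGPT's training fleet in H100-equivalents,
# then compare against DeepSeek's reported GPU count.
h100_equivalents = chatgpt_a100s / a100_to_h100_ratio
efficiency_gain = h100_equivalents / deepseek_gpus

print(f"H100-equivalents for ChatGPT: {h100_equivalents:.0f}")
print(f"DeepSeek efficiency gain: ~{efficiency_gain:.1f}x")
```

The ratio lands at roughly 3.6x, matching the figure in the text; a different A100-to-H100 performance assumption would shift it proportionally.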
These were also likely H800 GPU accelerators, slightly nerfed versions of the H100 built for the Chinese market. To offset this, the researchers came up with a wide range of optimizations. For instance, in the official paper, the reinforcement learning (RL) algorithm was credited with massively boosting math performance.
“Notably, the average pass@1 score on AIME 2024 shows a significant increase, jumping from an initial 15.6% to an impressive 71.0%, reaching performance levels comparable to OpenAI-o1-0912. This significant improvement highlights the efficacy of our RL algorithm in optimizing the model’s performance over time.”
But does that mean that Nvidia shareholders should expect at least three times lower AI data center demand in the future?
Cost-Efficiency Comes with Greater Market Penetration
For decades, battery cost was the main obstacle to mass EV adoption, alongside the availability of charging infrastructure. The same dynamic is now in play with compute power.
Now that DeepSeek has demonstrated that the same, or greater, performance can be achieved at a lower price point, businesses and organizations are likely to integrate AI models into their operations more readily.
Like BYD to Tesla, DeepSeek realigned AI cost expectations. Image credit: ArtificialAnalysis.io
On top of that robust market mechanism, Western AI models, from Meta, Alphabet, Anthropic and OpenAI, are likely to adopt DeepSeek's optimizations, leading to greater accuracy. And if AI accuracy improves, demand for AI will climb even further.
In short, the lower compute demand introduced by DeepSeek should be offset, and then surpassed, by accelerated adoption of AI products across the board. With that said, it remains to be independently replicated whether DeepSeek's training indeed took just ~2,000 GPUs.
Yann LeCun, Meta’s head of AI research, hinted on Threads that DeepSeek owes its existence to Meta’s substantial investments in the Llama family of models.
“They came up with new ideas and built them on top of other people’s work. Because their work is published and open source, everyone can profit from it.”
Ironically, it seems that AI chip restrictions on China pushed the envelope on AI efficiency. And because DeepSeek is fully open-source, Big Tech in the US is now pressured to do the same instead of just throwing more compute power at the problem. Ultimately, greater AI reliability and cost-efficiency only benefits Nvidia (and AMD) in the long run.
What Is the Current NVDA Price Target?
Presently priced at $121.35, above its 52-week average of $113.99, NVDA stock still has significant upside potential despite cut price targets. Morgan Stanley revised its NVDA price target from $166 to $152 per share.
Further revisions from other financial institutions are expected, but are likely to remain in this range.
At the moment, the average NVDA price target is $175.16, per WSJ forecasting data, with the consensus rating still at “buy”. On a one-year timeframe, NVDA stock is no stranger to heavy drops followed by rallies, as happened in early August, September and mid-December.
The bottom line is, Nvidia is poised to remain the primary supplier of compute power for AI needs. And those needs will grow omnipresent if DeepSeek is the harbinger of new optimizations and accuracy boosts to come.
Disclaimer: The author does not hold or have a position in any securities discussed in the article.