Investment Conclusion
Nvidia Corporation (NASDAQ:NVDA) appears invincible. The company is everywhere, from its GPUs powering the Generative AI applications of OpenAI and the cloud hyperscalers, to partnering with Dell Technologies (DELL) to build an AI factory for Elon Musk's xAI.
And why not? NVDA currently provides the fastest and most powerful GPUs in the world, and its end-to-end AI platform supplies the hardware, software, and networking required to render any corporation AI-ready with little friction. The competition appears to be floundering in the face of NVDA's scorched-earth policy.
However, it is not a zero-sum game. Look more closely at NVDA and there are chinks in the company's armor the competition can exploit to get its foot in the door. NVDA's Achilles' heel is its premium pricing policy, its attitude that stacking compute power is the solution for everything AI, and its bet that keeping CUDA, its proprietary GPU programming platform, closed will handicap sales of the competition's AI accelerators enough to ensure NVDA's industry dominance in perpetuity.
However, the winds of change are blowing in. Advanced Micro Devices' (AMD) AI processors are improving rapidly, and Intel Corporation's (INTC) new chips, although comparable in computing power to NVDA's H100 (not the B100), are priced at roughly one-third of NVDA's GPUs. In addition, a consortium of developers from Intel to Qualcomm (QCOM) to Alphabet (GOOG) (GOOGL), Amazon (AMZN), and Meta Platforms (META) is quietly preparing open-source code intended to optimize GPU performance regardless of vendor, running on any hardware. Further, AMZN, GOOG, and Microsoft (MSFT) are developing custom silicon to power their internal AI workloads and to support AI inference performed by their cloud computing clients. Overall, once the challengers' efforts gather momentum, NVDA is bound to lose pricing power and market share within the AI semiconductor industry.
Nevertheless, given the presupposition among AI thought leaders that demand for AI processing power will be limited only by price, with the market able to absorb large quantities of compute, all major chip companies are likely to prosper for an extended period. The rising tide will lift all boats, and all the companies involved will benefit greatly, including NVDA.
We're initiating on NVDA with a Sell Rating and a $70/share Price Target. This is based on our 10-year Discounted Cash Flow ("DCF") model, adjusted for lower revenue and earnings growth in the out years as NVDA's first-mover advantage fades in the face of the competition's strategic onslaught.
We're cognizant that we may encounter pushback from investors on our Price Target and Sell Rating on NVDA. In that regard, it is notable that our valuation assumes significant growth over the coming years, tapering substantially in the out years. It is also important to note that our model inputs for NVDA are highly aggressive compared to those we used for INTC (which we initiated on two weeks ago). Specifically, we deployed straight-lined 10-year annual revenue growth, operating cash flow as a percent of revenue, and capital expenditure as a percent of revenue of 8%, 30%, and 22% for INTC, versus 30%, 35%, and 5% for NVDA. Cumulatively, although NVDA's stock price might jump every time the company reports earnings over the coming quarters, with some downside risk, a substantial decline in the price of the company's shares is imminent. Make hay while the sun shines.
Investment Thesis
NVDA was founded in 1993 by Jensen Huang, Chris Malachowsky, and Curtis Priem in Sunnyvale, California. The firm is headquartered in Santa Clara, California, and operates data centers, research and development facilities, and sales and administrative offices in the U.S., China, Israel, and Taiwan, among other locations.
The company derives its revenues from two segments: data center and networking, comprising sales of its data center and networking hardware, and graphics, which includes revenues from NVDA's gaming GPUs. During FY24, the data center and networking and graphics segments accounted for 78% and 22% of the firm's total revenues, respectively.
There appear to be two primary concerns surrounding the NVDA story. The predominant issue is whether the runaway growth the company is witnessing is sustainable. The secondary element drawing investor attention is related to NVDA’s long-term financial performance. We analyze both concerns below.
Runaway Growth Likely To Taper As Industry Matures
NVDA's competitive advantage is built on several factors. These include the ability of its GPUs to process trillions of parameters and tokens of data at high speeds, its networking technology, which lashes together tens of thousands of its GPUs to deliver supercomputing power, and its CUDA programming platform, on which developers have built software libraries that simplify AI tasks of all genres.
CUDA is proprietary to NVDA and not open source. Therefore, other companies' GPUs cannot easily be programmed with it. Since CUDA's launch in 2006, GPU programmers have developed programs on the platform that perform numerous AI tasks. Consequently, when an enterprise purchases an NVDA GPU, its programmers don't have to write the software for associated tasks from scratch, as the programs already exist. In that regard, NVDA's GPUs are like Apple Inc.'s (AAPL) iPhone, and CUDA is similar to iOS, upon which developers have built millions of applications, greatly enhancing the iPhone's customer appeal.
AMD's GPUs can be programmed using the firm's ROCm platform (however, the number of available programs is relatively small), and INTC's SYCL platform translates CUDA code so that INTC's AI accelerators can avail themselves of CUDA's assets. Nevertheless, when it comes to training large language models, or LLMs, AMD's and INTC's semiconductors have not gained much market traction. Therefore, the two firms, along with large corporations including GOOG, QCOM, MSFT, and OpenAI, are collaborating to deliver the industry from CUDA's dominance. ROCm, PyTorch, and Triton are platforms under development to achieve that objective.
Further, stacking compute power appears to be NVDA's strategy to overwhelm customers and the competition. However, AI mimics humans, and humans draw on a multitude of capabilities to accomplish different tasks. Over time, the realization will dawn across the AI industry that stacking processing power is not the answer to everything in AI. To limit costs and optimize performance, a combination of CPUs, GPUs, and AI accelerators with varied computing power will be required. To illustrate, LLM training necessitates vastly more processing muscle and speed, requiring supercomputers and petaflops of compute to churn through trillions of parameters and tokens of data at breakneck speed. An AI inference task, however, that is, the output an LLM produces when prompted after training, requires substantially less computing power than training does.
In the context of inference workloads, INTC's Gaudi 2 and Gaudi 3 AI accelerators, with processing power comparable to NVDA's H100 GPU and lower power requirements, appear more suitable. Moreover, Gaudi 2 and Gaudi 3 systems are priced at between one-third and two-thirds of NVDA's GPU systems. In that respect, it is noteworthy that AI inference is projected to ultimately account for 80% of AI revenues.
Further, with an eye on the future and sensing an opportunity to reduce reliance on expensive NVDA processors, AMZN, MSFT, and GOOG are designing and producing their own inference chips to meet the AI computing demands of their customers, as well as custom silicon to support their internal workloads. Given that NVDA generated 45% of its 2Q25 total revenues from GPU sales to these cloud hyperscalers, with large Internet companies and multinational corporations accounting for another 50%, this development will ultimately shrink the addressable market for NVDA's products. In addition, once the small group of companies from which NVDA derives 95% of its revenues has accumulated sufficient computing hardware, demand for NVDA's offerings will decline dramatically.
Overall, although processing power requirements for artificial general intelligence (AGI) will pick up some slack in NVDA's revenues, long-term market dynamics suggest that the company is bound to lose market share and pricing power over time.
Current Financial Performance Appears Unsustainable Over The Long Term
NVDA reported solid financial results for FY24. On a year-over-year basis, revenues expanded 126% to $60.9 billion. The outperformance was driven by strong sales in the data center and networking category, which advanced 215% to $47 billion, and the graphics segment, which grew 14% to $13.5 billion. Compared to FY23, operating income expanded 681% to $33 billion. Over the same period, net income increased 584% to $30 billion, and earnings per share came in at $11.93 versus $1.74. Gross margins advanced 38% to 54%, and net margins expanded 327% to 49%. Data center and networking operating income escalated 530% to $32 billion. Graphics operating income, which accounted for 18% of total operating income, increased 681% to $5.8 billion.
NVDA's blowout FY24 performance was fueled by the advent of ChatGPT, which set off a hyper-focus on the training of LLMs. NVDA's GPUs, able to process trillions of parameters and tokens of data at speed, are highly proficient at LLM training. What ensued was a scramble for the product among large corporations and government institutions across the world. Runaway volumes and the premium pricing NVDA's GPUs command kept margins extremely high, driving significant growth in the company's earnings and free cash flows for FY24.
We expect NVDA to repeat, if not better, its FY24 financial performance over the upcoming few quarters, as enterprises focus on training LLMs on proprietary data, requiring huge amounts of processing power. However, Moore's Law observes that the number of transistors on an integrated circuit doubles roughly every two years, with the cost per transistor falling in tandem. The premium pricing NVDA's chips command thanks to the novelty of its technology is therefore likely to erode over time.
As described above, comparable semiconductors, a programming ecosystem to rival CUDA, GPU pricing wars, declining demand for AI hardware, and a shift in focus from AI training toward AI inference are likely to dampen sales growth of NVDA's AI processors, despite AGI's substantial compute requirements. Lower sales volumes and reduced pricing will weigh on NVDA's revenue growth rates over the back end of the decade. Consequently, the firm's margins, earnings, and free cash flows are likely to suffer in the out years.
Incorporating the qualitative narrative described above into our 10-year Discounted Cash Flow model, we arrive at a Price Target of $70/share for NVDA. Our valuation assumes a normalized 10-year revenue growth rate of 30%. In addition, we derive net income over the 10-year horizon using a net profit margin of 42% (vs. the 49% net profit margin achieved in FY24). Based on our analysis of NVDA's historical financial reports, we model normalized 10-year operating cash flows at 35% of revenues per year and straight-lined 10-year capital expenditure at 5% of revenues per year. Furthermore, we deploy a perpetual growth rate of 3% and a weighted average cost of capital of 10% to reach our terminal value and present value of free cash flow figures. We utilize the current diluted outstanding share count of 24,848 million to arrive at our Price Target for NVDA.
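For readers who want to sanity-check the mechanics, the valuation inputs above can be sketched as a simple two-stage DCF. This is our simplified reconstruction, not the author's actual model: we assume the FY24 revenue of $60.9 billion as the base year and treat free cash flow as operating cash flow (35% of revenue) minus capex (5% of revenue); the full model presumably layers in the tapering growth and the 42% net margin discussed in the text.

```python
# Simplified 10-year DCF sketch using the inputs stated in the article.
# Assumptions (ours, for illustration): FY24 revenue is the base year,
# and free cash flow = operating cash flow (35% of rev) - capex (5% of rev).

BASE_REVENUE = 60.9e9   # FY24 revenue (from the article)
REV_GROWTH   = 0.30     # straight-lined annual revenue growth
OCF_MARGIN   = 0.35     # operating cash flow as % of revenue
CAPEX_MARGIN = 0.05     # capital expenditure as % of revenue
WACC         = 0.10     # weighted average cost of capital
PERP_GROWTH  = 0.03     # perpetual growth rate for terminal value
SHARES       = 24_848e6 # diluted shares outstanding

def dcf_per_share() -> float:
    pv_fcf = 0.0
    revenue = BASE_REVENUE
    for year in range(1, 11):
        revenue *= 1 + REV_GROWTH
        fcf = revenue * (OCF_MARGIN - CAPEX_MARGIN)
        pv_fcf += fcf / (1 + WACC) ** year
    # Gordon-growth terminal value on the year-10 free cash flow
    terminal = fcf * (1 + PERP_GROWTH) / (WACC - PERP_GROWTH)
    pv_terminal = terminal / (1 + WACC) ** 10
    return (pv_fcf + pv_terminal) / SHARES

print(f"Implied value: ${dcf_per_share():.0f}/share")
```

Under these stripped-down assumptions the output lands in the high-$70s, in the same neighborhood as the published $70 target; the gap plausibly reflects the out-year growth taper and other adjustments in the author's full model.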
Risk
AI Monetization Efforts Disappoint. Based on Sequoia Capital's estimates, corporations purchased $50 billion worth of NVDA's AI processors during 2023. In contrast, over the same period, revenues derived from those AI processors came in at less than $5 billion. Most large corporations are stacking up processing power, hoping that the investment will pay off in the future.
OpenAI generated $2 billion in Generative AI revenues in 2023, and the cloud hyperscalers are monetizing AI by providing AI tools and services to enterprises seeking to automate business tasks. However, use cases for AI are still evolving, and the path to AI-derived riches appears unclear for everyone except the semiconductor companies. Undoubtedly, the world will be transformed by AI. However, as in the internet era, visibility into where the bulk of AI-related profits will ultimately land is weak.
Should AI fatigue set in among the cloud hyperscalers' client base, a pullback in demand for NVDA's chips might follow. In addition, an economic recession might translate into a decline in spending on AI hardware. Nevertheless, the risk of these scenarios occurring is minimal, in our assessment. Overall, it appears that the world has bought into the AI story hook, line, and sinker. Wherever the chips might settle regarding future AI profits, the semiconductor companies' fortunes appear secure.
Bottom Line
NVDA and OpenAI spearheaded the arrival of Generative AI on Main Street. After all, OpenAI's ChatGPT and GPT-4 LLMs are powered by NVDA's GPUs.
Large technology companies, fearful of losing out, are stacking layer upon layer of computing power by purchasing as many NVDA GPUs as they can lay their hands on. NVDA, aware of its enviable position, is flexing its muscles through aggressive marketing to dominate the industry for as long as it can.
Undoubtedly, NVDA has strengths, and it could surprise with the staying power of an AMZN, AAPL, or MSFT. For now, however, investors should weigh the realities we've outlined in our analysis, lest they place overly enthusiastic long-term bets on NVDA. Technology underdogs with solid game plans are worth considering as alternative investments.