Prediction: These AI Chip Stocks Could Soar (Hint: It's Not Nvidia or Broadcom)

Global
Source: The Motley Fool
Published: 09/20/2025, 16:38:00 EDT
AMD
Marvell Technology
AI Chips
Semiconductors
Data Center Infrastructure

News Summary

This article highlights Advanced Micro Devices (AMD) and Marvell Technology as two overlooked companies with significant opportunities in the artificial intelligence (AI) chip market beyond dominant players Nvidia and Broadcom. It argues that both are well positioned to benefit as the AI infrastructure buildout enters its next phase. AMD, long a runner-up to Nvidia in the GPU market, is carving out a niche in the AI inference segment. Because inference workloads run every time a trained model is used, they are projected to dwarf training in demand, and AMD's ROCm software platform and cost efficiency could help it gain market share. AMD also co-founded the UALink Consortium, an open standard intended to challenge Nvidia's NVLink dominance in multi-GPU systems. Marvell Technology specializes in custom AI chips and high-speed interconnect technology, and has secured multi-generational design wins with major hyperscalers such as Amazon. Despite recent stock volatility driven by lumpiness in its Amazon-related business, Marvell has diversified, winning 18 custom compute sockets, including 12 at the largest U.S. hyperscalers. The company forecasts hypergrowth in the XPU Attach market and aims to capture 20% of its $94 billion data center total addressable market by 2028.

Background

The artificial intelligence (AI) sector is currently experiencing explosive growth, driving unprecedented demand for high-performance computing and specialized chips. Nvidia, with its CUDA software ecosystem and powerful GPU product line, has established a dominant position in the AI training chip market, with its data center revenue reaching tens of billions of dollars. Broadcom has also excelled in the custom AI chip segment, assisting clients in designing proprietary AI accelerators. However, the AI chip market is evolving. Beyond model training, the demand for inference (deploying and running AI models) is rapidly emerging and is projected to become an even larger market. Simultaneously, cloud service providers and major tech companies are increasingly opting to develop their own custom AI chips to optimize performance, reduce costs, and achieve differentiation, creating new opportunities for companies that provide IP, design services, and interconnect solutions.

In-Depth AI Insights

Can AMD's ROCm and UALink truly erode Nvidia's moat? AMD's strategy in the AI inference market is astute, because inference is more sensitive to cost and efficiency than to raw peak performance. Improvements to the ROCm software platform and the formation of the UALink Consortium are critical moves against Nvidia's CUDA/NVLink ecosystem. Still, investors should not underestimate the stickiness of Nvidia's ecosystem: CUDA's first-mover advantage and extensive developer support are deeply entrenched. While UALink is appealing as an open standard, displacing the existing paradigm will take time and require deep commitment from more industry giants. AMD's success hinges on offering a significant total-cost-of-ownership advantage at comparable performance, and on convincing developers and large customers to switch ecosystems. This is a long-term, arduous challenge, not a quick win.

Does Marvell's reliance on the custom chip market harbor structural risks? Marvell's multi-generational design wins in custom AI chips and interconnect technology, especially with large hyperscalers, are powerful growth drivers. However, the