Nvidia GPUs and Broadcom, Marvell ASICs
Data & Insights for the Datacenter value chain …
As I digested the Nvidia, Marvell, and Broadcom earnings and reviewed how the technical outlooks differ between GPUs and ASICs, a few lingering questions kept popping up:
1. Where is this market headed in 2030?
2. What are the associated opportunities in these hyperscale datacenters?
So, I thought it was time to put the baby into action! The AI baby, that is: the creation of all these GPUs and ASICs. After watching Google I/O, the choice was simple: Google Gemini 2.5 Pro.
The first challenge was to ask highly specific questions, as I wrote in a previous Substack blog. While this holds true for all models, it is especially important (and powerful) for Gemini 2.5 Pro, thanks to its step-by-step thinking supported by an industry-leading context window (1M tokens).
A deep knowledge of the industry and function is a prerequisite.
With my “a priori” questions and contextual hypotheses about GPUs vis-à-vis ASICs, I wrote my prompt, then fine-tuned it over ~10 rounds of back-and-forth.
Prompt:
“Assume you are a deep subject matter expert across the semiconductors and data center infrastructure industry. Assume you focus on industry value chain-related data collection from reputed sources and curating/consolidating those as inputs for long-term product growth strategy and investment roadmap development. With this context and background, please answer these questions with sufficient detail, each one of them backed by projected 2025 market size and 2025-2030 market growth rate, as applicable, from reputed sources.
1. For the AI and HPC related computing chips, which of GPU or ASIC is projected to grow faster, and why? Break this data down across training and inference.
2. For the AI and HPC related networking chips, which architecture is projected to grow faster between Ethernet and InfiniBand? Break down the data for each of Ethernet and InfiniBand market size and growth a. across AI training and AI inferencing demands, and b. across pluggable optics and converged packet optical. What would be your technical rationale for what is projected to grow faster?
3. For the AI and HPC related demand, which of DRAM or NAND is projected to grow faster, and why? Break down the data for each of NAND and DRAM market size and growth across a. AI training and AI inferencing. and b. GPU vs. ASIC driven compute.
4. Using the answers to questions above as demand drivers, show the 2025 market sizes of the various components that go into building an HPC/AI datacenter, and how they are projected to grow in the next 5 years? Break down the sub-categories.
5. For the different components in question 4 above, which ones are likely to remain highly constrained due to technology development complexity, to satisfy the technical requirements for AI?”
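Since every question asks Gemini for a 2025 market size plus a 2025-2030 growth rate, the implied 2030 size follows from simple CAGR compounding. Below is a minimal sketch of the workflow, assuming the google-genai Python SDK; the shortened prompt text and the GEMINI_API_KEY environment variable are placeholders, not from the original post.

```python
import os

# Placeholder: an abbreviated stand-in for the full prompt quoted above.
PROMPT = (
    "Assume you are a deep subject matter expert across the semiconductors "
    "and data center infrastructure industry. Back each answer with a "
    "projected 2025 market size and 2025-2030 growth rate from reputed sources."
)

def project_market_size(size_2025: float, cagr: float, years: int = 5) -> float:
    """2030 market size implied by a 2025 base and a 2025-2030 CAGR."""
    return size_2025 * (1.0 + cagr) ** years

# Only call the API if a key is configured (assumed env var name).
if os.environ.get("GEMINI_API_KEY"):
    from google import genai  # pip install google-genai

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    response = client.models.generate_content(
        model="gemini-2.5-pro",
        contents=PROMPT,
    )
    print(response.text)

# Sanity check on any returned pair: a $100B market at a 20% CAGR
# compounds to roughly $249B by 2030.
print(round(project_market_size(100.0, 0.20), 1))
```

The compounding helper is a quick way to cross-check that the sizes and growth rates the model cites are internally consistent before building a strategy on them.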
More questions, the Gemini-generated public HTML report, and my strategic insights are on Substack here:
https://lnkd.in/gqMZGVnC