The artificial intelligence infrastructure buildout represents the most significant capital expenditure cycle in the technology sector since the construction of the internet's foundational architecture in the late 1990s. Unlike the dot-com era, however, this cycle is being driven not by speculative venture capital but by the largest, most profitable companies in the world — Microsoft, Alphabet, Amazon, and Meta — committing hundreds of billions of dollars in aggregate to build the compute, networking, and power infrastructure required to train and deploy frontier AI models at scale.
The numbers are staggering. Microsoft has committed to $80 billion in AI infrastructure investment in fiscal year 2025 alone. Alphabet's capital expenditure guidance for 2025 exceeds $75 billion. Amazon Web Services is investing at a similar scale. Meta has guided to $60–65 billion in capex for 2025, with AI infrastructure representing the dominant share. Combined, the four largest hyperscalers are on track to spend approximately $300 billion on AI-related infrastructure in 2025 — a figure that exceeds the annual GDP of many developed economies.
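As a back-of-envelope check, the roughly $300 billion aggregate can be reproduced from the individual guidance figures. The AWS number below is an illustrative assumption (the text says only "a similar scale"), and Meta is taken at the midpoint of its guided range:

```python
# Back-of-envelope aggregation of 2025 AI capex guidance (USD billions).
# The AWS figure is an assumption ("similar scale" to peers); Meta uses
# the midpoint of its $60-65B guided range.
capex_guidance_bn = {
    "Microsoft": 80.0,
    "Alphabet": 75.0,
    "Amazon (AWS, assumed)": 80.0,
    "Meta (midpoint)": 62.5,
}

total = sum(capex_guidance_bn.values())
print(f"Aggregate 2025 hyperscaler AI capex: ~${total:.0f}B")
```

Under these assumptions the sum lands just under $300 billion, consistent with the approximate figure quoted above.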
The Architecture of the AI Capex Cycle
Understanding where this capital is flowing requires a granular decomposition of the AI infrastructure stack. The buildout encompasses three primary layers: compute (GPUs and custom AI accelerators), networking (high-bandwidth interconnects and switching infrastructure), and power and cooling (the physical infrastructure required to operate dense GPU clusters at scale).
Compute. NVIDIA's H100 and H200 GPU clusters remain the dominant training infrastructure for frontier models, with the company capturing an estimated 70–80% of the AI accelerator market by revenue. However, the competitive landscape is evolving rapidly. AMD's MI300X has achieved meaningful traction in inference workloads, and custom silicon from the hyperscalers themselves — Google's TPUs, Amazon's Trainium and Inferentia, Microsoft's Maia, and Meta's MTIA — is capturing an increasing share of internal workloads. The custom silicon trend is structurally significant: as hyperscalers develop proprietary accelerators optimized for their specific model architectures and workload profiles, they reduce their dependence on merchant silicon and capture more of the value chain internally.
Networking. The networking requirements of large-scale AI training clusters are fundamentally different from traditional data center networking. Training a frontier model requires thousands of GPUs to communicate with each other at extremely high bandwidth and extremely low latency — a requirement that has driven massive investment in InfiniBand (dominated by NVIDIA/Mellanox) and Ethernet-based alternatives. The networking layer represents approximately 15–20% of total cluster cost, and the market is growing at rates that exceed even those of the GPU market.
Power and Cooling. The power consumption of large-scale AI data centers is creating a genuine infrastructure bottleneck. A single NVIDIA H100 GPU consumes approximately 700 watts; a cluster of 10,000 GPUs requires 7 megawatts of power — and that is before accounting for cooling, networking, and storage infrastructure. AI data centers are expected to add 40–50 gigawatts of new electricity demand in the United States alone by 2030, equivalent to the total electricity consumption of several large states.
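The cluster power arithmetic above can be sketched as a quick estimate. The per-GPU draw comes from the text; the PUE (power usage effectiveness) multiplier, which folds cooling and facility overhead into the total, is an assumed illustrative value rather than a figure from the article:

```python
# Rough power estimate for a GPU training cluster.
# The 700 W per-GPU figure is from the text; the PUE multiplier is an
# assumed illustrative value (hyperscale facilities commonly report
# roughly 1.1-1.4), standing in for cooling and facility overhead.
GPU_POWER_W = 700          # approximate H100 board power
NUM_GPUS = 10_000
PUE = 1.3                  # assumed power usage effectiveness

it_load_mw = GPU_POWER_W * NUM_GPUS / 1e6   # raw GPU load: 7.0 MW
facility_load_mw = it_load_mw * PUE         # load at the meter

print(f"GPU load:      {it_load_mw:.1f} MW")
print(f"Facility load: {facility_load_mw:.1f} MW (PUE={PUE})")
```

At a PUE of 1.3 the 7 MW of GPU load becomes roughly 9 MW at the meter — a reminder that the overhead beyond the accelerators themselves is material.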
Investment Implications Across the Value Chain
The AI capex supercycle creates investment opportunities across multiple layers of the technology value chain, but the distribution of value capture is highly uneven. Identifying the companies with durable competitive advantages — rather than those that are simply benefiting from a temporary demand surge — is the critical analytical challenge.
Semiconductor Equipment. ASML, Applied Materials, Lam Research, and KLA Corporation occupy a uniquely advantaged position in the AI supply chain. The advanced logic chips required for AI accelerators — manufactured at TSMC's 3nm and 2nm nodes — require ASML's extreme ultraviolet (EUV) lithography equipment, which has no viable substitute. The semiconductor equipment companies benefit from the AI capex cycle through increased wafer starts at leading-edge nodes, but their competitive moats are structural rather than cyclical.
Advanced Packaging. The transition from traditional chip packaging to advanced packaging technologies — including TSMC's CoWoS (Chip-on-Wafer-on-Substrate) and Intel's EMIB — is a critical enabler of AI accelerator performance. Advanced packaging allows multiple chiplets to be integrated into a single package with high-bandwidth interconnects, enabling the memory bandwidth required for large model inference. The advanced packaging supply chain is capacity-constrained, creating pricing power for suppliers with established capabilities.
High-Bandwidth Memory. HBM (High-Bandwidth Memory) is a critical component of AI accelerators, providing the memory bandwidth required for large-scale matrix multiplication operations. SK Hynix, Samsung, and Micron are the primary suppliers, with SK Hynix currently holding a significant market share advantage in HBM3E — the latest generation required for NVIDIA's H200 and Blackwell GPUs. The HBM market is expected to grow from approximately $4 billion in 2023 to over $30 billion by 2027.
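The implied growth rate of that HBM forecast can be backed out as a compound annual growth rate (CAGR), using the 2023 and 2027 endpoints quoted above:

```python
# Implied CAGR of the HBM market, from ~$4B (2023) to ~$30B (2027),
# using the endpoint figures quoted in the text.
start_value_bn = 4.0    # 2023 market size
end_value_bn = 30.0     # 2027 forecast
years = 2027 - 2023

cagr = (end_value_bn / start_value_bn) ** (1 / years) - 1
print(f"Implied HBM market CAGR: {cagr:.1%}")   # roughly 65% per year
```

A market compounding at roughly 65% per year for four years is the quantitative expression of the capacity constraint described above.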
Power Infrastructure. The power infrastructure required to support AI data centers represents one of the most compelling investment themes in the current cycle. Utilities with significant renewable generation capacity and transmission infrastructure in data-center-dense markets (Northern Virginia, Phoenix, Dallas, Chicago) are experiencing unprecedented demand from hyperscalers seeking long-term power purchase agreements. Transformer manufacturers — a highly concentrated market dominated by ABB, Siemens Energy, and Eaton — are facing multi-year order backlogs as the grid upgrades required to support AI data center demand strain existing manufacturing capacity.
The Return on Investment Question
The most important — and most contested — question in technology investing today is whether the extraordinary AI infrastructure investment will generate commensurate returns. The bear case is straightforward: the hyperscalers are engaged in a competitive arms race that is destroying value, building excess capacity that will ultimately be written down, and failing to identify the "killer applications" that will justify the investment at scale.
The bull case is more nuanced. The hyperscalers are not building infrastructure speculatively — they are responding to demonstrated, rapidly growing demand from enterprise customers who are deploying AI applications at scale. Microsoft's Azure AI revenue is growing at triple-digit rates. Google Cloud's AI-related revenue is accelerating. Amazon Web Services is seeing its highest-ever demand for GPU instances. The demand signal is real, even if the ultimate scale of the addressable market remains uncertain.
Our base case is that the AI infrastructure buildout will generate positive returns for the hyperscalers over a 5–7 year horizon, but that the returns will be unevenly distributed. Companies that build proprietary AI capabilities — unique models, specialized datasets, and AI-native applications — will capture disproportionate value. Companies that simply consume AI infrastructure as a commodity input will face margin pressure as AI capabilities become commoditized and competition intensifies.
Research Perspective
The AI capex supercycle is real, durable, and consequential for investors across asset classes. The critical analytical discipline is distinguishing between companies that are structural beneficiaries — those with defensible competitive positions in the AI supply chain — and those that are experiencing a cyclical demand surge that will normalize as the buildout matures. The former deserve premium valuations; the latter require careful scrutiny of the sustainability of current growth rates and margins.
Risks and Considerations
Several risks could disrupt the AI capex supercycle narrative. Export controls on advanced semiconductors — particularly NVIDIA's A100, H100, and Blackwell chips — have already constrained the Chinese AI market and could be extended to other geographies in response to geopolitical developments. A significant breakthrough in model efficiency (the "DeepSeek moment" that briefly rattled markets in early 2025) could reduce the compute requirements for frontier model training, potentially moderating demand for the most expensive GPU clusters.
Regulatory risk is also non-trivial. Antitrust scrutiny of the hyperscalers' AI investments — particularly their investments in AI startups and their exclusive partnerships with model developers — is intensifying in both the United States and Europe. A regulatory intervention that constrains the hyperscalers' ability to monetize their AI infrastructure investments could materially alter the return profile of the current capex cycle.
Disclaimer: This article is published for informational purposes only and does not constitute investment advice, a solicitation, or an offer to buy or sell any securities. The views expressed represent the analytical perspectives of Alpha Beta Capital's advisory team and are subject to change without notice.
© 2025 Alpha Beta Capital. All rights reserved.

