Memory Shortages and Infrastructure Delays Constrain AI Expansion; Anthropic Dominates the Competitive Narrative Amid a Pentagon Standoff; Record Capital Spending Collides with an Emerging Productivity Paradox

Epsilon Theory

April 21, 2026

EXECUTIVE SUMMARY

- Physical-world constraints have fully displaced earlier concerns about GPU availability and training data scarcity as the dominant media storyline constraining AI expansion. Memory chip shortages, power grid interconnect delays, and data center construction bottlenecks now define coverage of AI's growth limits. DRAM prices rose by 90% in Q1 2026, memory makers are expected to meet only roughly 60% of demand through 2027, and nearly half of planned U.S. data center builds face delays or cancellation. The persistence and breadth of these signals suggest that media consensus has settled on a durable framing: it is the physical world — not software, algorithms, or silicon — that governs the pace of AI expansion, and this gap is increasingly cast in geopolitical terms as nations compete to build the energy infrastructure that will determine AI leadership.

- Anthropic has emerged as the overwhelming frontrunner in media perceptions of AI competition, generating far denser leadership coverage than any rival, even though the company is locked in an extraordinary standoff with the Pentagon over AI safety guardrails. That confrontation has exposed a genuine split within the U.S. government, where one arm blacklists the company while another — the NSA — actively deploys its most powerful model. The contradiction sharpened further this week with a White House meeting described by both sides as productive, underscoring that the tension between national security imperatives and industry autonomy on AI safety remains unresolved at the highest levels of government.

- A widening disconnect separates AI ambition from demonstrated returns, and it is becoming the defining financial tension of this cycle. The four largest technology companies plan nearly $700 billion in AI infrastructure spending for 2026, yet an NBER study found that 90% of firms report no measurable productivity gains from AI adoption, and three-quarters of AI's economic benefits are concentrated in just 20% of companies. Media language around acute corporate skepticism softened this week — the largest single weekly shift in the entire dataset — but the underlying productivity paradox, in which a small cohort of leaders pulls far ahead while the broader economy waits, continues to anchor coverage.

- Despite elevated concern about a potential AI investment bubble, the standard historical frameworks that typically accompany such discourse have not gained traction, and media attention to AI's transformative promise in specific domains has quietly retreated. Comparisons to 1990s telecom overbuilding, "disappointment phase" framing, and warnings about excessive AI market valuations all register below their long-term averages, suggesting that coverage is still searching for the right lens through which to evaluate a spending boom that collides simultaneously with physical supply constraints and uncertain returns. Meanwhile, language about AI's potential to reshape science, education, and healthcare has faded, with the sharpest weekly decline among these occurring in scientific research coverage — indicating that media focus has migrated from what AI might transform to the financial and material obstacles standing in its way.

---

A Convergence of Physical Supply Constraints — From Memory Chips to Power Grids — Has Become the Defining Bottleneck Narrative

Perscient's semantic signature tracking the density of language asserting that unexpected memory chip shortages are slowing AI growth registers at an Index Value of 346.7, one of the two highest current readings in the entire dataset and more than three times its long-term mean. The reading reflects a media environment saturated with coverage of a RAM crisis reshaping not just AI hardware procurement but adjacent industries. DRAM prices rose by 90% in Q1 2026 compared to Q4 2025, driven by Samsung, SK Hynix, and Micron collectively shifting 93% of their combined market production toward high-bandwidth memory for AI data centers. As The Verge reported, SK Group's chairman has warned that these shortages could persist until 2030, with new fabrication capacity unlikely to reach volume production before 2027 at the earliest. Downstream effects are already visible: the PC market faces an 11.3% contraction in 2026 according to IDC, SSD prices have roughly quadrupled, and Nature has taken to covering the episode under the label "RAMmageddon" for its effect on scientific research budgets.

Social media commentary has been pointed about the structural nature of the problem. One widely shared analysis noted that "RAM shortages are not a random glitch, they are a supply-chain collision." AI servers, consumer devices, and data centers all compete for the same constrained memory supply, and a bromine bottleneck limits manufacturers' ability to scale production. Multiple observers flagged that memory makers are expected to meet only about 60% of demand by the end of 2027, with direct implications for the pace of data center expansion.

The memory crunch is only one layer of a broader infrastructure problem. Our semantic signature tracking the density of language asserting that data center construction delays are slowing AI growth stands at 114.9, more than double its long-term mean, and held flat week-over-week alongside an equally elevated reading for language about slow power grid interconnect approvals. The stability of both signals suggests that these are now embedded features of the media environment rather than reactions to any single event. Close to half of the planned U.S. data center builds this year are projected to be delayed or canceled, as Tom's Hardware reported; the core bottleneck is electrical infrastructure rather than compute silicon. Lead times for high-power transformers have stretched from a pre-2020 norm of 24 to 30 months to as long as five years. Satellite and drone analysis by SynMax, covered by Ars Technica, found that almost 40% of U.S. data center projects are at risk of falling behind schedule. The gap between ambition and execution is even wider for 2027, where industry tracking shows 21.5 GW of announced capacity, of which only 6.3 GW has broken ground.

What makes this moment distinctive is the composition of the constraint narrative. Our semantic signature tracking language asserting that GPU shortages are slowing AI growth sits at just 7.8, roughly at its long-term mean. The equivalent signature for training data scarcity is slightly below average. Media attention has moved cleanly away from the supply constraints that dominated prior cycles and toward the physical world: memory, power, land, and construction. Increasingly, this infrastructure gap is being framed in geopolitical terms. Perscient's semantic signature tracking the density of language asserting that the country which builds the best energy infrastructure will determine AI leadership registers at 96.4, nearly double its long-term mean. Despite the Trump administration's reshoring efforts through tariffs, U.S. production capacity still falls short, and American AI companies continue to rely on Chinese components to bridge the gap. The convergence of these readings, all well above average and all flat, points to a durable media consensus: physical constraints, not software or algorithms, now define the pace of AI expansion.

Anthropic Commands AI Race Perceptions by a Wide Margin as a Government Confrontation Over AI Safety Tests the Limits of Industry Autonomy

While the infrastructure bottleneck shapes the broader picture, the competitive narrative in AI has become strikingly one-sided. Perscient's semantic signature tracking the density of language asserting that Anthropic or Claude leads AI competition registers at an Index Value of 355.9, the single highest reading in the entire dataset and more than 3.5 times its long-term mean. The value was unchanged week-over-week.

The lopsidedness is best understood through contrast. The equivalent signature for OpenAI sits at negative 42.7, well below its long-term mean. Signatures for Deepseek/China, Google/Gemini, and Grok/xAI are also below or only marginally above their respective averages. Our semantic signature tracking language claiming that the future AI leader has not yet been founded registers at negative 23.2, reinforcing that media coverage treats the frontrunner as a known entity.

This perception has been reinforced by commercial momentum. Anthropic released Claude Opus 4.7 in April with improved coding and vision capabilities, carried an estimated valuation of $380 billion as of February, and aired two commercials during Super Bowl LX. One widely shared post noted that Anthropic has reached $30 billion in annual recurring revenue; Claude Code alone has generated $1 billion in ARR within six months. Enterprise use accounts for 80% of Anthropic's revenue. OpenAI still leads in overall adoption at 81% of AI buyers, but Anthropic is narrowing the gap, reaching nearly 63% in March.

Yet Anthropic's commercial ascent is playing out against a deeply unusual government confrontation. After signing a $200 million contract with the Pentagon last July, negotiations over Claude's deployment broke down when the Department of Defense demanded unfettered access across all lawful purposes, while Anthropic sought assurances that its technology would not be used for fully autonomous weapons or domestic mass surveillance. The Pentagon subsequently designated Anthropic a "supply chain risk," a label historically reserved for foreign adversaries. A California federal judge blocked one of the orders in March, ruling that the Pentagon appeared to have unlawfully retaliated against Anthropic for its AI safety positions, but the D.C. Circuit subsequently declined to block the blacklisting while litigation continues.

The contradiction at the heart of this dispute sharpened last week when Axios reported that the National Security Agency is actively using Anthropic's most powerful model, Mythos Preview, even as the Department of Defense, which oversees the NSA, argues in court that the company represents a threat to national security. Anthropic CEO Dario Amodei visited the White House on Friday for a meeting with chief of staff Susie Wiles and Treasury Secretary Scott Bessent to discuss Mythos deployment across government. Both sides described the meeting as productive, and officials are reportedly in active discussions about deploying the model across civilian agencies even as the Pentagon maintains its restriction. The talks reflect a genuine split inside the administration over whether limiting the company's technology is sustainable.

Anthropic has further distinguished itself by engaging religious leaders in conversation about AI ethics. The Washington Post reported that the company convened Catholic and Protestant church leaders at its San Francisco headquarters for a two-day seminar on Claude's moral formation, underscoring the distinctive public positioning around AI safety at the center of the Pentagon dispute.

Record Capital Expenditure Plans Persist as a Productivity Paradox Emerges and Bubble Concerns Intensify Without a Clear Historical Analogy

The infrastructure constraints detailed above have not dampened capital spending plans, and the resulting tension between rising investment and uncertain returns has become the defining financial narrative of this AI cycle. Perscient's semantic signature tracking the density of language asserting that AI infrastructure spending is massive and increasing stands at an Index Value of 55.8, well above its long-term mean, consistent with reporting that the four largest technology companies collectively plan to pour nearly $700 billion into AI infrastructure in 2026. Goldman Sachs estimates that approximately 75% of aggregate hyperscaler capex this year will fund AI-related infrastructure, representing roughly $450 billion in AI-specific spending.

Yet this spending is drawing growing scrutiny. Our semantic signature tracking the density of language predicting that an AI investment collapse will crash overall markets registers at 58.2, also comfortably above its long-term mean. Time published an analysis identifying a fundamental mismatch between the trillions being invested in AI infrastructure and the billions being spent to actually use AI, urging policymakers to prepare for a potential correction. Research Affiliates, in an analysis covered by Fortune, argued that most hyperscaler capex is effectively maintenance spending due to three-year hardware obsolescence cycles, calling into question the durability of these investments.

An important shift occurred this week in corporate sentiment language. Our semantic signature tracking the density of language asserting that businesses increasingly doubt large AI spending declined by 10.5 points, the largest single weekly movement in the entire dataset, falling to 58.2. While still well above average, this moderation suggests that the sharpest edge of corporate skepticism language is softening even as spending plans expand. Signatures tracking doubts about hyperscale builds and growing opposition to large AI capital projects both held flat, reinforcing that broader caution remains even as the most acute skepticism narrative cools.

At the heart of the tension is a widening gap between AI investment and demonstrated returns. Our signature tracking language asserting that promised AI efficiency improvements have not materialized reads above its long-term mean, while the signature tracking language claiming that AI advances are translating to company profits sits well below average. A National Bureau of Economic Research study published earlier this year found that 90% of firms reported no measurable impact of AI on workplace productivity, even as executives projected that AI would increase productivity by 1.4%. PwC's new AI Performance study found that three-quarters of AI's economic gains are concentrated in just 20% of companies, helping explain how productivity skepticism and persistent capex growth can coexist: a small cohort of leaders is pulling far ahead while the broader economy waits.

Despite the elevated bubble language, the media has not settled on a familiar framework for explaining the risk. Signatures tracking language comparing AI spending to 1990s telecom overbuilding, language about AI entering a disappointment phase, and language asserting that AI represents an excessive portion of market valuation all sit well below their long-term means. The standard historical analogies that typically accompany bubble discourse have not gained traction. Our signature tracking language predicting that AI creates a long-term investment supercycle reads above average, and one commentary from Ritholtz Wealth argued that "we are still short computing power given the explosive use of generative AI and there is no end in sight." Trader consensus on Polymarket currently prices an 8% implied probability of an AI bubble burst by the end of 2026, meaningful but far from majority conviction.

The week also saw a quiet retreat in language about AI's transformative potential in specific domains. Signatures tracking language asserting that AI will fundamentally change scientific research, education, and healthcare delivery all sit below their long-term means; the science signature posted the second-largest weekly decline in the dataset. Media coverage is currently more focused on AI's financial and physical constraints than on its promise to reshape particular fields. The overall narrative features concurrent above-average readings for both spending growth and bubble concern, a fading set of historical comparisons, and a growing gap between AI adoption and demonstrated productivity gains.

DISCLOSURES

This commentary is being provided to you as general information only and should not be taken as investment advice. The opinions expressed in these materials represent the personal views of the author(s). It is not investment research or a research recommendation, as it does not constitute substantive research or analysis. Any action that you take as a result of information contained in this document is ultimately your responsibility. Epsilon Theory will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from use of or reliance on such information. Consult your investment advisor before making any investment decisions. It must be noted that no one can accurately predict the future of the market with certainty or guarantee future investment performance. Past performance is not a guarantee of future results.

Statements in this communication are forward-looking statements. The forward-looking statements and other views expressed herein are as of the date of this publication. Actual future results or occurrences may differ significantly from those anticipated in any forward-looking statements, and there is no guarantee that any predictions will come to pass. The views expressed herein are subject to change at any time, due to numerous market and other factors. Epsilon Theory disclaims any obligation to update publicly or revise any forward-looking statements or views expressed herein. This information is neither an offer to sell nor a solicitation of any offer to buy any securities. This commentary has been prepared without regard to the individual financial circumstances and objectives of persons who receive it. Epsilon Theory recommends that investors independently evaluate particular investments and strategies, and encourages investors to seek the advice of a financial advisor. The appropriateness of a particular investment or strategy will depend on an investor's individual circumstances and objectives.