Memory Shortages, Claude Mania, and the $690 Billion Question: AI Narratives Defined by Physical Constraints, Competitive Shifts, and Capital Spending Skepticism

Epsilon Theory

April 14, 2026

EXECUTIVE SUMMARY

- Physical infrastructure—not chips or data—now defines the AI constraint narrative. Media language attributing AI growth limitations to memory chip scarcity is running at nearly four times its long-term average and has plateaued at the highest level ever recorded, while language about GPU supply issues and training data scarcity sits near baseline. Alongside persistently elevated coverage of data center build delays, energy capacity competition, and interconnect bottlenecks, the media has decisively reframed AI's binding constraints as memory, power, and construction rather than model architecture or dataset availability—a shift that carries implications for which companies and sectors investors and policymakers treat as strategically important.

- Anthropic has captured an outsized share of competitive media perception, producing the most lopsided provider-level divergence in the dataset. Language asserting that Anthropic or Claude leads AI competition is running at more than 3.6 times its long-term average—the second-highest reading ever—while equivalent measures for OpenAI, Google, and DeepSeek all sit below their baselines, with OpenAI's the most depressed. This concentration of narrative momentum around a single provider is striking given that Stanford's 2026 AI Index shows that the top models are separated by fewer than three percentage points on community benchmarks, suggesting that enterprise adoption stories and product-level buzz are shaping perceptions of leadership far more than technical superiority.

- Record capital expenditure commitments and rising skepticism about AI returns have simultaneously hardened into stable, coexisting media narratives rather than competing in a tug-of-war. Language about massive and increasing AI infrastructure spending sits above average, but so do three distinct skepticism-oriented measures—covering predictions of an AI investment crash, growing corporate doubt, and questioning of large infrastructure projects. None of these moved over the past week, indicating that the media environment has settled into a durable tension between the $660–$690 billion spending trajectory and doubts about when, or whether, that spending will generate proportionate returns.

- Media framing of AI's economic impact tilts toward disruption over realized benefit, though the most alarmist historical analogies have not gained broad traction. Language connecting AI to profit generation sits below its long-term average, and a widely cited study finding that nearly 90% of firms reported no productivity impact from AI reinforces the gap between investment and outcome. Yet language comparing current AI spending to the telecom overbuilding of the 1990s also remains well below average, suggesting that while bubble-adjacent skepticism is intensifying, media coverage has not converged on the view that the current cycle is destined to repeat the most cautionary precedent. The overall posture is one of conditional conviction: transformative potential is broadly accepted, but the market and the media alike are demanding proof.

---

A Memory Chip Crisis Anchors a Broader AI Infrastructure Bottleneck Narrative

Perscient's semantic signature tracking the density of language asserting that memory chip shortages are slowing AI growth stands at an Index Value of 378.3, the highest reading in the entire dataset. Media language attributing AI growth constraints to memory scarcity is running at nearly four times its long-term average, and the value held flat over the past week, indicating that this narrative has settled into an elevated plateau rather than spiking on any single event.

The underlying dynamics are structural, not cyclical. AI data centers now consume roughly 70% of all memory chips produced globally. Samsung, SK Hynix, and Micron have redirected production toward high-bandwidth memory used in AI accelerators, where margins run three to five times higher than conventional consumer DRAM. Big tech companies are collectively on track to spend roughly $650 billion on AI infrastructure in 2026, up by about 80% from last year. As Korea JoongAng Daily reported, average selling prices in 2026 are now rising sharply across conventional DRAM, not just the high-bandwidth variety. SK Group's chairman predicted in March that global memory shortages could persist until 2030, and Silicon Motion's president noted that the AI infrastructure boom has triggered a parallel NAND shortage and that memory profits are expected to increase twofold to threefold. The downstream consequences extend well beyond AI: NCTA warned this month that the crunch is now threatening U.S. broadband deployment because manufacturers prioritize high-end AI chips over the conventional semiconductors used in routers and network equipment.

Memory is not the only constraint gaining media traction. Three additional Perscient infrastructure signatures, tracking language about data center build delays, interconnect delays, and the proposition that AI leadership will be determined by which country builds energy capacity, are all running well above their long-term averages and remained steady this week. PJM Interconnection, the largest regional transmission operator in the U.S., is targeting up to 15GW of new generation capacity through an emergency auction, even as analysis warns of a potential 60GW shortfall. On social media, one widely shared post noted that half of all planned U.S. data center builds face delays or cancellations, that transformer lead times have stretched from 24 months pre-2020 to five years today, and that the $650 billion earmarked by the major cloud providers is "still not enough." Broadcom has identified three critical supply chain constraints: lasers, advanced process wafers at TSMC, and printed circuit boards. PCB lead times for optical transceivers have extended from six weeks to six months. AMD CEO Lisa Su, quoted widely across platforms, said that "there's not enough compute out there for everything that wants to be done."

Perscient's signatures tracking language about GPU supply issues and training data scarcity constraining AI both sit near their long-term averages, at -2.7 and -3.3 respectively. The media's framing of AI growth constraints has turned decisively: the binding limitations are now physical infrastructure—memory, power, and construction—rather than chip architecture or data availability.

Anthropic Ascends in Media Perception as Established AI Competitors Lose Narrative Ground

While physical constraints define the supply side of the AI buildout, the competitive narrative among providers has concentrated to an unusual degree around a single company. Perscient's semantic signature tracking the density of language asserting that Anthropic or Claude leads artificial intelligence competition registers at an Index Value of 363.3, more than 3.6 times its long-term average and the second-highest reading in the full dataset. Comparable signatures for OpenAI, Google, and DeepSeek all sit below their long-term averages. OpenAI's reading is the most depressed at -36.3. The lone partial exception is Grok or xAI, which sits modestly elevated. This asymmetric pattern—one player vastly above baseline while three major rivals sit below it—is the most pronounced competitive divergence in the data.

The HumanX AI conference in San Francisco this week put the Anthropic narrative on full display. CNBC reported that Anthropic's coding agent, Claude Code, was "the tool on everyone's lips." Glean CEO Arvind Jain described a "Claude Mania" that is now pressuring business leaders to deploy the model. TechCrunch's correspondent noted hearing Claude's name "most often" across panels, while ChatGPT was largely absent from conversation and even OpenAI's board chairman, Bret Taylor, found himself defending Sam Altman. Business Insider captured the sentiment among VCs and founders, many of whom were openly placing their bets on Anthropic.

The financial trajectory is amplifying the narrative. Social media commentary this week centered on reports that Anthropic's annualized revenue run rate has crossed $30 billion, apparently overtaking OpenAI's roughly $25 billion. Reuters reported that while OpenAI held a significant revenue lead entering 2026, the popularity of Anthropic's coding agents has helped close the gap among business users. While OpenAI launched the generative AI era with ChatGPT in 2022, Anthropic appears better positioned to win contracts from the biggest spenders. However, the narrative is not without complication: Anthropic's refusal to remove restrictions on domestic surveillance and autonomous weapons reportedly prompted the DOD to designate the company a "supply chain risk," illustrating how principled positioning can carry costs even amid commercial momentum.

Stanford's 2026 AI Index offered a useful counterpoint. As of March, Anthropic's top model leads by just 2.7 percentage points on community-driven rankings, and the best models now compete primarily on cost, reliability, and real-world usefulness rather than raw capability. The depressed reading for Perscient's signature tracking language asserting that DeepSeek or China leads AI competition, at -29.0, adds another layer. Chinese open-weight models including GLM-5.1, Kimi K2.5, and Qwen3.5 are dominating industry benchmarks, and reports suggest that 80% of U.S. startups now use Chinese base models. The perception of "winning" is currently driven more by enterprise adoption and product momentum than by raw model performance. The Anthropic signal is material: media perception of leadership shapes purchasing decisions, talent flows, and investor confidence regardless of how close the underlying models actually are.

Record AI Capex Commitments Meet a Widening Market Credibility Gap

The capital spending underwriting this competitive intensity has generated its own contested set of media narratives. Perscient's semantic signature tracking language asserting that AI infrastructure spending is massive and increasing sits at an Index Value of 60.5, above its long-term average. But skepticism-oriented signatures sit higher: our signature tracking language predicting that an AI investment collapse will crash overall markets registers at 87.4, while signatures tracking growing corporate doubt about large AI spending and questioning of massive infrastructure projects both exceed 70. All four were unchanged this week, suggesting that these narratives have become settled features of the media environment rather than fleeting reactions.

The five largest U.S. cloud and AI infrastructure providers have collectively committed to between $660 billion and $690 billion in capital expenditure for 2026, nearly doubling 2025 levels. Amazon leads with a $200 billion plan that exceeded even the most bullish projections. CEO Andy Jassy addressed skeptics directly in his shareholder letter: "We're not investing approximately $200 billion in capex in 2026 on a hunch," he wrote, noting that AWS's AI business is on pace for $15 billion in annual revenue. On social media, his full remarks circulated widely, including his characterization of AI as "a once-in-a-lifetime opportunity where the current growth is unparalleled."

The market's response has leaned toward caution. When Amazon and Alphabet revealed their 2026 plans, their shares shed roughly $400 billion in combined market capitalization as institutional sellers focused on the gap between outlays and tangible returns. Barclays analysts noted that they are now modeling negative free cash flow for 2027 and 2028, a trajectory they called "somewhat shocking" but likely representative of the broader arms race. Confidential financials from both OpenAI and Anthropic, obtained by the Wall Street Journal and discussed on social media, reinforce the tension: revenue at both labs is growing rapidly, but training costs are growing faster. OpenAI projects $121 billion in compute spending by 2028 and does not expect to reach breakeven until the 2030s.

Perscient's signature tracking language claiming that AI advances are translating to company profits sits at -34.1, below average, while media language about unrealized productivity gains runs above its long-term mean. A widely cited National Bureau of Economic Research study found that nearly 90% of firms reported no impact from AI on employment or productivity over the past three years, even as executives projected a modest 1.4% increase ahead. Perscient's signatures tracking language predicting that AI will eliminate entire job categories outpace those about AI creating new ones, tilting the broader media framing toward AI as a disruptive economic force whose returns remain largely prospective. One of only two meaningful weekly movements in the full dataset came from our signature tracking language connecting AI to productivity gains and universal basic income, which declined by 7.3 points to 24.1, consistent with a cooling of the productivity-to-redistribution narrative.

Longer-horizon conviction persists. Our signature tracking language predicting that AI creates a long-term investment cycle holds above average. And the signature tracking language comparing current AI spending to telecom overbuilding in the 1990s sits well below its long-term mean, suggesting that media is not broadly adopting the most alarmist historical analogy even as bubble-adjacent language intensifies through other channels. The overall picture is one of simultaneous conviction in AI's transformative potential and growing impatience with the pace of returns. As one analyst summarized, the market "needs true upside surprises to move higher, and for AI, that surprise will be a demonstration that the $690 billion bet is building a moat, not just a bill."

DISCLOSURES

This commentary is being provided to you as general information only and should not be taken as investment advice. The opinions expressed in these materials represent the personal views of the author(s). It is not investment research or a research recommendation, as it does not constitute substantive research or analysis. Any action that you take as a result of information contained in this document is ultimately your responsibility. Epsilon Theory will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from use of or reliance on such information. Consult your investment advisor before making any investment decisions. It must be noted that no one can accurately predict the future of the market with certainty or guarantee future investment performance. Past performance is not a guarantee of future results.

Statements in this communication are forward-looking statements. The forward-looking statements and other views expressed herein are as of the date of this publication. Actual future results or occurrences may differ significantly from those anticipated in any forward-looking statements, and there is no guarantee that any predictions will come to pass. The views expressed herein are subject to change at any time, due to numerous market and other factors. Epsilon Theory disclaims any obligation to update publicly or revise any forward-looking statements or views expressed herein. This information is neither an offer to sell nor a solicitation of any offer to buy any securities. This commentary has been prepared without regard to the individual financial circumstances and objectives of persons who receive it. Epsilon Theory recommends that investors independently evaluate particular investments and strategies, and encourages investors to seek the advice of a financial advisor. The appropriateness of a particular investment or strategy will depend on an investor's individual circumstances and objectives.