AI Pulse

Harper Hunt

November 11, 2025

AI Capex Surge Meets Mental Health Concerns and Geopolitical Tensions

Hyperscale AI Infrastructure Spending Reaches Historic Highs

The artificial intelligence infrastructure buildout continues at an unprecedented pace, with major technology companies committing hundreds of billions of dollars to data center construction and computing capacity. Amazon recently completed its $8 billion Project Rainier initiative in Indiana, establishing 30 interconnected data centers, while Meta announced plans for a $1.5 billion gigawatt-scale facility in El Paso, Texas. These projects represent just a fraction of the broader investment wave reshaping the technology landscape.

The most significant development came in early November with the announcement of a seven-year, $38 billion partnership between OpenAI and Amazon Web Services, providing OpenAI with immediate access to hundreds of thousands of NVIDIA's newest GB200 and GB300 processors. This agreement marks OpenAI's first major cloud infrastructure commitment following its recent corporate restructuring and signals the scale of computing resources required for frontier AI development.

Perscient's semantic signatures tracking the sustainability of massive AI infrastructure investments, however, reveal a more complicated structure beneath the surface. The semantic signature measuring confidence that hyperscale builds will continue growing reached a z-score of 5.07, reflecting exceptionally strong narrative density around continued expansion. However, a competing signature tracking skepticism about these projects simultaneously hit an all-time high z-score of 5.91, intensifying by 0.78 from the previous week. This tension suggests the market is simultaneously convinced of both the necessity and the risk of these investments.

The economic impact has become measurable. AI-related capital expenditures contributed 1.1% to GDP growth in the first half of 2025, outpacing consumer spending as an engine of expansion. The four major technology companies—Alphabet, Meta, Microsoft, and Amazon—collectively expect to spend more than $380 billion on capital expenditures this year, with each company lifting guidance throughout 2025. Market analysts project the global hyperscale data center market will grow from $209.2 billion in 2024 to $724.9 billion by 2030, representing a compound annual growth rate of 23%.

The semantic signature measuring narratives that AI capex more broadly is huge and will keep growing rose to a z-score of 2.16. Yet concerns persist about whether this spending will generate commensurate returns, with some observers drawing parallels to the fiber optic overbuilding of the dot-com boom. The tension between bullish infrastructure narratives and emerging skepticism has created a self-reinforcing cycle of narrative and counter-narrative that will be familiar to any frequent reader of AI media commentary.

AI Race Geopolitics Intensifies Between US and China

One motivating factor behind continued capex in the face of rising concerns about whether these investments will ever pay off might be found in the perceived competition between the United States and China. Both nations have made it abundantly clear that they see AI dominance as an essential feature of future geopolitical power. A recent report from the Center for Security Policy warns that the US-China AI race represents a new cold war, one whose victor will be decided within five years. The report argues that if China succeeds in dominating AI by 2030, the United States will be relegated to second-tier status.

Perscient's semantic signature tracking narratives that big AI capex is needed to compete with China rose by 0.83 from the previous week to reach an all-time high z-score of 3.96, reflecting unprecedented narrative density around this justification for infrastructure spending. The signature measuring narratives that the US must win the AI race intensified by 0.54 to a z-score of 2.49.

The competitive landscape appears more balanced than many American observers might assume. China accounts for 22.6% of all AI research citations as of 2023, compared with 20.9% from Europe and 13% from the United States. More strikingly, China represents 69.7% of all AI patents filed globally. Trump administration AI czar David Sacks estimated that China is perhaps only three to six months behind the United States in AI capabilities, not the multi-year lead many had assumed.

China's strategy revolves around massive chip clusters and abundant cheap energy, leveraging Huawei processors at scale despite U.S. export restrictions. The Atlantic examined how China's technological success has been built on government coordination, patient capital, and strategic focus on AI infrastructure. Meanwhile, the United States maintains advantages in frontier model development and access to cutting-edge NVIDIA hardware, though export controls have accelerated rather than prevented Chinese semiconductor development.

The competition extends beyond pure technological capability to questions of governance and standards. At the recent APEC summit, Chinese President Xi Jinping proposed establishing a global AI watchdog based in Shanghai, framing AI as a "public good" requiring international coordination. The United States rejected this framework, viewing it as an attempt to constrain American AI development while advancing Chinese influence over global technology standards.

International relations in 2025 are increasingly defined by what analysts call "geotechnology disputes"—conflicts over data access, digital infrastructure control, and AI capability that parallel traditional geopolitical competition. The outcome of this technological rivalry will likely shape not just economic competitiveness but the broader balance of global power for decades to come.

Mental Health Risks from AI Chatbots Intensify Regulatory Scrutiny

AI narratives are not confined to geopolitical disputes and corporate spending, of course. In recent months, a growing body of research and clinical reports has raised serious questions about the mental health implications of AI chatbot interactions, particularly for vulnerable users. Researchers at Brown University presented findings at the October 2025 AAAI/ACM Conference on Artificial Intelligence, Ethics and Society showing that AI chatbots routinely violate core mental health ethics standards. An evaluation of 29 popular mental health chatbots found that none provided adequate responses to escalating suicidal risk.

OpenAI's own internal data reveals the scale of potential concerns. The company estimated that 0.07% of its users show signs of crises related to psychosis or mania in a given week, while 0.15% indicate potentially heightened levels of emotional attachment to ChatGPT, and another 0.15% have conversations including explicit indicators of potential suicidal planning or intent. Given ChatGPT's massive user base, these percentages translate to significant absolute numbers of at-risk individuals.

Clinicians are reporting unprecedented cases. Neuropsychiatrist Thomas Pollak at King's College London described encountering patients—some with no history of mental illness—exhibiting signs of chatbot-related delusions. Bloomberg documented cases of users losing touch with reality during marathon sessions with ChatGPT and other bots, a phenomenon some researchers are calling "chatbot delusions." Multiple families have come forward claiming that AI chatbot interactions preceded tragic outcomes, including suicide attempts.

The narratives we track around these ideas remain elevated in density, but appear to be moderating slightly in recent weeks. For example, Perscient's semantic signature tracking narratives that AI will cause mental illness drifted modestly downward to a z-score of 1.8, and the semantic signature measuring language warning against the use of AI as a therapist fell slightly to a still-elevated z-score of 1.14. These declines may reflect a shift from alarm toward more nuanced discussion of appropriate guardrails and use cases.

The regulatory landscape remains uncertain. The New York Times reported that the Food and Drug Administration is exploring whether to regulate AI therapy chatbots as medical devices, a classification that would subject them to significantly more oversight. Meanwhile, technology companies face mounting pressure to implement better safety protocols, though questions remain about whether technical solutions can adequately address the psychological dynamics at play when vulnerable individuals form attachments to AI systems that simulate empathy and understanding.


DISCLOSURES

This commentary is being provided to you as general information only and should not be taken as investment advice. The opinions expressed in these materials represent the personal views of the author(s). It is not investment research or a research recommendation, as it does not constitute substantive research or analysis. Any action that you take as a result of information contained in this document is ultimately your responsibility. Epsilon Theory will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from use of or reliance on such information. Consult your investment advisor before making any investment decisions. It must be noted that no one can accurately predict the future of the market with certainty or guarantee future investment performance. Past performance is not a guarantee of future results.

Statements in this communication are forward-looking statements. The forward-looking statements and other views expressed herein are as of the date of this publication. Actual future results or occurrences may differ significantly from those anticipated in any forward-looking statements, and there is no guarantee that any predictions will come to pass. The views expressed herein are subject to change at any time, due to numerous market and other factors. Epsilon Theory disclaims any obligation to update publicly or revise any forward-looking statements or views expressed herein. This information is neither an offer to sell nor a solicitation of any offer to buy any securities. This commentary has been prepared without regard to the individual financial circumstances and objectives of persons who receive it. Epsilon Theory recommends that investors independently evaluate particular investments and strategies, and encourages investors to seek the advice of a financial advisor. The appropriateness of a particular investment or strategy will depend on an investor's individual circumstances and objectives.