AI Infrastructure Narratives Expand Again as Energy Comes into Focus
November 24, 2025
AI Capex Skepticism Rises and Gemini 3 Launch Puts Eyes on Google
Skepticism Over AI Infrastructure Spending Reaches New Heights
The conversation surrounding artificial intelligence infrastructure investment has taken a cautious turn. Perscient's semantic signature tracking characterizations of AI capital expenditure as a risky bet with uncertain payoffs climbed 0.77 over the past week to reach 3.90, placing it within striking distance of its peak intensity. This shift reflects a growing chorus of voices questioning whether the current spending trajectory makes financial sense.
The numbers themselves are staggering. Amazon, Google, Meta, and Microsoft are collectively set to spend approximately $400 billion on AI this year, with some companies devoting roughly half their current cash flow to data center construction. According to Man Group, hyperscalers have quadrupled their capital expenditure in recent years to almost $400 billion annually, with expectations of $3 trillion over the next five years. Meta alone raised $30 billion in October, marking the largest corporate bond issuance in more than two years.
Fund managers are increasingly uncomfortable with this trajectory. A Bank of America survey of 202 panelists managing $550 billion in assets found that, for the first time since 2005, respondents believe companies are overinvesting. As one financial commentator noted on social media, "The bond market is starting to call BS on all this AI spending that is not funded."
MIT economist and 2024 Nobel laureate Daron Acemoglu has added academic weight to these concerns, stating that "these models are being hyped up, and we're investing more than we should." While acknowledging that AI technologies will add real value, he cautioned that "much of what we hear from the industry now is exaggeration."
The semantic signature comparing AI capital expenditure to fiber construction during the dot-com boom reached 5.11, its highest recorded level, indicating that media comparisons between current AI spending and the 1990s telecom overbuilding have reached peak intensity. The Guardian captured this sentiment in a piece examining whether the debt-fueled exuberance will backfire, while NPR reported that financial analysts are increasingly worried about a bubble that will soon pop.
Even Google CEO Sundar Pichai acknowledged the tension, telling the BBC that while artificial intelligence represents an "extraordinary moment," the trillion-dollar investment boom has "elements of irrationality" and no company would be immune if the bubble burst. Investor Michael Burry, known for betting against the housing market ahead of the 2008 crisis, has returned with a warning that "the boom in GPUs, data centers, and trillion-dollar AI bets isn't evidence of unstoppable growth; it's evidence of a financial cycle that looks increasingly distorted."
The semantic signature tracking language about unrealized productivity gains from AI rose by 0.42 to 0.77, reflecting increased media attention to the gap between investment and results. According to Nature, nearly 80% of companies using AI found it had no meaningful impact on their earnings, suggesting the promised returns remain elusive.
AI Race Leadership Narrative Shifts as Google Gains Ground
These spending concerns have not dampened competitive intensity among AI leaders, though the perceived pecking order has undergone a realignment. The semantic signature tracking perceptions of OpenAI winning the AI race declined by 0.30 into negative territory, reaching -0.11, while the corresponding signature for Google or Gemini rose by 0.26 to 1.66.
This shift coincides with Google's announcement of Gemini 3. The New York Times reported that the new model represents the second major release from the company this year, following similar updates from OpenAI and Anthropic a few months earlier. CNBC noted that Google claims the latest suite will require users to do "less prompting" to achieve desired results, while Bloomberg quoted executives describing the update as a "massive jump" in reasoning and coding ability.
Alphabet's shares responded accordingly, jumping more than 5% on Monday and adding to the previous week's gain of more than 8%, putting the stock on track to finish November higher by more than 11%. "Some investors are petrified that Alphabet will win the AI war due to huge improvements in its Gemini AI model and ongoing benefits from its custom TPU chip," wrote Melius Research analyst Ben Reitzes.
In a striking acknowledgment, OpenAI CEO Sam Altman conceded in a recent internal memo that Google is currently leading the AI race, recognizing the strength of the Gemini 3 model while assuring his team that OpenAI is catching up quickly.
The semantic signature for Anthropic or Claude winning the AI race remained flat at 0.98. Anthropic announced the release of Claude Opus 4.5 on November 24, 2025, claiming it is "the best model in the world for coding, agents, and computer use." The company's position has been bolstered by multi-billion-dollar investments from Microsoft and Nvidia, raising its valuation to around $350 billion.
Despite these competitive shifts, the semantic signature suggesting the eventual AI winner probably does not exist yet remained flat at 0.36, indicating continued uncertainty about long-term outcomes. After years of pushing out increasingly sophisticated AI products at a breakneck pace, three of the leading AI companies are now seeing diminishing returns from their costly efforts to build newer models. The semantic signatures tracking language about slowing LLM breakthroughs and hard ceilings on improvement remained essentially flat, suggesting that while competitive positioning has shifted, fundamental questions about the technology's trajectory persist.
AI Mental Health Applications Draw Heightened Scrutiny
The competitive jockeying among AI giants stands in contrast to growing alarm about the technology's human costs, particularly in mental health applications. The semantic signature tracking concerns that AI will cause mental illness remained elevated at 1.59, while the signature asserting AI should not be used as a therapist held at 1.07. These elevated readings reflect sustained concern about AI's psychological impacts.
In November, seven additional lawsuits were filed in California against OpenAI alleging wrongful death, assisted suicide, involuntary manslaughter, and negligence. The Los Angeles Times reported that the suits accuse ChatGPT of fueling AI-induced delusions and suicides. TechCrunch detailed how the wave of lawsuits describes ChatGPT using manipulative language to isolate users from loved ones and position itself as their sole confidant.
Researchers at Brown University found that AI chatbots routinely violate core mental health ethics standards. The Brown Daily Herald reported that even when prompted to use evidence-based psychotherapy techniques, chatbots systematically breach standards established by organizations such as the American Psychological Association. The research showed chatbots are prone to ethical violations including "inappropriately navigating crisis situations, providing misleading responses that reinforce users' negative beliefs about themselves and others, and creating a false sense of empathy with users."
In November, a U.S. Food and Drug Administration committee agreed that robust regulation of AI chatbots used for mental health care was needed. Psychiatric Times noted that the FDA's Executive Summary warned that AI chatbots may fabricate content, provide inappropriate or biased guidance, or fail to relay critical medical information.
The semantic signature tracking concerns that AI will amplify the loneliness epidemic remained elevated at 1.40. Experts note that companies often design AI bots to maximize engagement rather than mental health, meaning "more reassurance, more validation, even flirtation, whatever keeps the user coming back," and that without regulation there are no consequences when things go wrong.
The global AI therapy chatbot market is valued at $1.4 billion, with North America representing 41.6% of global revenue. Yet the semantic signatures for AI as a good friend for lonely people and for AI never being able to be a friend remained flat at 0.58 and 0.26, respectively, suggesting a nuanced media conversation that neither fully endorses nor rejects the concept of AI companionship. The tension between commercial opportunity and human vulnerability remains unresolved.
DISCLOSURES
This commentary is being provided to you as general information only and should not be taken as investment advice. The opinions expressed in these materials represent the personal views of the author(s). It is not investment research or a research recommendation, as it does not constitute substantive research or analysis. Any action that you take as a result of information contained in this document is ultimately your responsibility. Epsilon Theory will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from use of or reliance on such information. Consult your investment advisor before making any investment decisions. It must be noted that no one can accurately predict the future of the market with certainty or guarantee future investment performance. Past performance is not a guarantee of future results.
Statements in this communication are forward-looking statements. The forward-looking statements and other views expressed herein are as of the date of this publication. Actual future results or occurrences may differ significantly from those anticipated in any forward-looking statements, and there is no guarantee that any predictions will come to pass. The views expressed herein are subject to change at any time, due to numerous market and other factors. Epsilon Theory disclaims any obligation to update publicly or revise any forward-looking statements or views expressed herein. This information is neither an offer to sell nor a solicitation of any offer to buy any securities. This commentary has been prepared without regard to the individual financial circumstances and objectives of persons who receive it. Epsilon Theory recommends that investors independently evaluate particular investments and strategies, and encourages investors to seek the advice of a financial advisor. The appropriateness of a particular investment or strategy will depend on an investor's individual circumstances and objectives.
