AI Vendors Go to War Over Leadership Narratives, CapEx Narratives Go to War With Themselves
December 9, 2025
OpenAI's "Code Red" Signals Intensifying Competition in the AI Race
OpenAI CEO Sam Altman issued what internal communications described as a "code red" directive this week, instructing employees to prioritize improvements to ChatGPT above all other initiatives. The Wall Street Journal first reported that Altman's memo characterized this as a "critical time" for the company's flagship product, signaling that the once-unassailable leader in consumer AI now finds itself playing defense.
Perscient's semantic signature tracking language asserting OpenAI's leadership in the artificial intelligence competition rose modestly over the past week, reaching a z-score of 2.4. But the more telling movement came in signatures tracking OpenAI's rivals. Language asserting Google's or Gemini's leadership in the artificial intelligence competition once again strengthened considerably, climbing by 0.6 this week to reach a z-score of 2.1. This shift coincides with Google's launch of Gemini 3, which landed Google's signature in the Pulse report two weeks ago, and for good reason: Gemini 3 has posted benchmark results surpassing OpenAI's flagship models in multiple categories.
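Perscient's methodology is proprietary, but the z-scores quoted throughout this piece can be read in the ordinary statistical sense: how far the latest reading of a narrative-frequency series sits from its own historical mean, in units of standard deviation. A minimal sketch, using an entirely hypothetical weekly series of article counts (the window, data, and function name below are illustrative assumptions, not Perscient's actual pipeline):

```python
# Minimal sketch: standardize the latest weekly value of a
# narrative-frequency series against its own history.
# The data and window are hypothetical; Perscient's actual
# methodology is proprietary.
from statistics import mean, stdev

def latest_z_score(series):
    """Z-score of the most recent observation vs. the full history."""
    mu = mean(series)
    sigma = stdev(series)  # sample standard deviation
    return (series[-1] - mu) / sigma

# Hypothetical weekly counts of articles asserting a vendor's AI leadership
weekly_frequency = [12, 15, 11, 14, 13, 16, 12, 14, 13, 15, 14, 30]
z = latest_z_score(weekly_frequency)
```

A z-score above 2, like several of the signatures cited here, means the current week's narrative volume is an outlier relative to the series' own baseline, not that the volume is large in absolute terms.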
The irony of the situation has not been lost on industry observers. Fortune noted that Altman's emergency declaration comes almost exactly three years after ChatGPT's launch prompted Google CEO Sundar Pichai to issue his own internal "code red." The tables have turned with speed.
Meanwhile, Anthropic has also quietly consolidated a formidable position, especially in narrative space. The semantic signature tracking language asserting Anthropic's or Claude's competitive leadership remained elevated at 2.88, also near its highest recorded level. The company has built substantial enterprise traction, with large accounts representing more than $100,000 in annual run-rate revenue growing more than sevenfold over the past year. Its newest flagship model, Claude Opus 4.5, has drawn attention for outperforming GPT-5 in several domains, particularly coding tools and complex reasoning tasks.
The competitive pressure extends beyond American shores. DeepSeek-V3.2 launched with claims of matching GPT-5 and Gemini 3 Pro performance while using dramatically lower computational resources. The semantic signature tracking language asserting Chinese AI leadership held steady at 0.77, suggesting sustained narrative attention to international competition even as the domestic rivalry between OpenAI, Google, and Anthropic dominates headlines.
ChatGPT maintains approximately 810 million global monthly active users, but growth has decelerated to just 6% from August to November 2025. The semantic signature tracking durability of the AI investment theme remained stable at 1.66, while language suggesting AI is entering a trough of disillusionment stayed near baseline at 0.07. Markets have not yet embraced a disillusionment narrative, though the competitive dynamics suggest the sector's leadership structure may be more fluid than previously assumed.
AI Capex Skepticism Grows as Hyperscale Builds Face Mounting Questions
The competitive pressures reshaping AI leadership have a direct corollary in infrastructure spending, where the massive capital expenditure programs underpinning the AI buildout continue to draw intensifying scrutiny from analysts and investors alike. Perscient's semantic signature tracking language characterizing AI infrastructure spending as a risky bet declined slightly to 3.45, but remains extremely high, indicating that skeptical narratives continue to dominate media discourse around these investments.
That skepticism is part of a bifurcated narrative in which everyone is simultaneously convinced that we’re absolutely going to build a ton of data centers and that it’s absolutely a terrible idea. The signature tracking language questioning hyperscale builds rose to 2.19, while simultaneously, language predicting continued expansion of massive AI infrastructure climbed to 2.97. Both signatures sit near their highest recorded levels, suggesting that media coverage is nowhere near deciding whether the consensus riskiness of these capital expenditures will ever manifest in an actual slowdown.
BCA Research issued a pointed warning that enthusiasm for AI capital expenditure might be heading toward a "Metaverse Moment," a reference to the rapid collapse of corporate enthusiasm for virtual reality investments that followed Meta's aggressive pivot. The comparison resonates with growing concerns about the gap between spending and returns.
IBM CEO Arvind Krishna added fuel to the skepticism, telling Business Insider that "there is no way" current levels of data center spending will prove profitable at today's infrastructure costs. His napkin math on Big Tech's AI investments raised fundamental questions about the economics of the buildout.
The semantic signature comparing AI capital expenditure to fiber construction during the dot-com boom declined by 0.50 but remains elevated at 3.66. The parallel is instructive: during the late 1990s, telecommunications companies laid vast amounts of fiber optic cable based on projections of exponential bandwidth demand. Much of that infrastructure sat dark for years. NPR reported that as AI companies pour hundreds of billions into data centers, concerns about a similar dynamic are intensifying.
Capital expenditure among the major hyperscalers has reached extreme levels, with spending excluding dividends and share repurchases consuming 94% of operating cash flows in 2025, up 18 percentage points from 2024. The semantic signature tracking corporate skepticism about big AI investments rose to 1.57, while language asserting productivity gains from AI haven't materialized held steady at 2.09. A widely cited MIT study found that 95% of enterprises aren't seeing return on investment from AI initiatives yet, underscoring the gap between deployment and realized value. Language tracking failed corporate AI experiments remained flat at 2.19, reinforcing the tension between massive capital deployment and uncertain returns that has become a defining feature of the current AI investment cycle.
AI's Role in Loneliness Draws Intensifying Scrutiny
The questions surrounding AI's economic returns, leadership, and rate of growth continue to extend beyond corporate balance sheets to more fundamental concerns about human wellbeing. The therapeutic and emotional dimensions of AI interaction have emerged as a focal point for researchers and regulators as evidence accumulates about both potential benefits and serious risks. It is loneliness that seems to be on everyone’s radar this week. Perscient's semantic signature tracking language asserting that AI is amplifying the loneliness epidemic reached a z-score of 1.2.
Research from MIT has revealed a troubling pattern: people who are lonely are more likely to consider ChatGPT a friend and spend substantial time on the application, while also reporting increased levels of loneliness. The finding suggests a potentially self-reinforcing cycle where vulnerable users become more dependent on AI companionship even as their isolation deepens.
Therapy and companionship have become among the most popular use cases for conversational AI this year, according to Harvard Business Review analysis. The context matters: only 50 percent of people with diagnosable mental health conditions receive any form of treatment. ChatGPT has become a popular substitute for a professional therapist or confidant amid an ongoing mental health crisis and rising loneliness.
The semantic signature tracking language arguing AI should not be used as a therapist held steady at 0.94; meanwhile, language asserting AI is an effective therapist declined to 0.10, suggesting growing caution in media coverage about therapeutic applications. Researchers have called for clear regulations on AI tools used for mental health interactions, citing concerns about the absence of appropriate safeguards.
Mental health risk factors have been identified in several high-profile cases, including loneliness, extended uninterrupted chat sessions, and persistent chatbot memory features designed for personalization that ended up reinforcing delusional themes. The semantic signature tracking language suggesting that AI can be a good friend for lonely people declined by 0.28 to 0.41, while language denying the possibility of meaningful AI-human relationships remained near long-term means.
Loneliness carries health risks as serious as smoking or obesity. In 2023, the surgeon general declared it a public health epidemic. AARP's most recent study shows that four in ten U.S. adults 45 and older are lonely, an increase from 35% in both 2010 and 2018. For most, the best treatment remains simple: human connection. Given our deeply rooted mammalian need for physical presence and embodiment, current large language models may prove insufficient substitutes for what people need in moments of despair.
The intersection of AI's impact on employment and mental health adds another dimension to these concerns. As Pulse highlighted last week, the semantic signature tracking language predicting that AI will destroy consulting jobs remained elevated at 1.85. Data from job-search firm Indeed shows overall consulting job postings in Canada in February 2025 were down 44 percent from early 2022, with non-senior consulting roles falling 40 percent over the same span to a five-year low. The broader signature tracking fears of massive AI-driven unemployment held at 0.48, indicating sustained narrative presence around workforce displacement concerns that may compound the psychological pressures already evident in the loneliness data, even if these unemployment pressures have yet to show up in the data.
Except for the consultants, that is.
DISCLOSURES
This commentary is being provided to you as general information only and should not be taken as investment advice. The opinions expressed in these materials represent the personal views of the author(s). It is not investment research or a research recommendation, as it does not constitute substantive research or analysis. Any action that you take as a result of information contained in this document is ultimately your responsibility. Epsilon Theory will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from use of or reliance on such information. Consult your investment advisor before making any investment decisions. It must be noted that no one can accurately predict the future of the market with certainty or guarantee future investment performance. Past performance is not a guarantee of future results.
Statements in this communication are forward-looking statements. The forward-looking statements and other views expressed herein are as of the date of this publication. Actual future results or occurrences may differ significantly from those anticipated in any forward-looking statements, and there is no guarantee that any predictions will come to pass. The views expressed herein are subject to change at any time, due to numerous market and other factors. Epsilon Theory disclaims any obligation to update publicly or revise any forward-looking statements or views expressed herein. This information is neither an offer to sell nor a solicitation of any offer to buy any securities. This commentary has been prepared without regard to the individual financial circumstances and objectives of persons who receive it. Epsilon Theory recommends that investors independently evaluate particular investments and strategies, and encourages investors to seek the advice of a financial advisor. The appropriateness of a particular investment or strategy will depend on an investor's individual circumstances and objectives.
