
The AI Bubble and the U.S. Economy: How Long Do “Hallucinations” Last?

Servaas Storm
Oct 2, 2025

The U.S. is undergoing an extraordinary AI-fueled economic boom: the stock market is soaring thanks to exceptionally high valuations of AI-related tech firms, which in turn are fueling economic growth through the hundreds of billions of U.S. dollars they are spending on data centers and other AI infrastructure. The AI investment boom rests on the belief that AI will make workers and firms significantly more productive, which will in turn boost corporate profits to unprecedented levels. But the summer of 2025 did not bring good news for enthusiasts of generative Artificial Intelligence (GenAI), who had been hyped up by the inflated promises of the likes of OpenAI’s Sam Altman that “Artificial General Intelligence” (AGI), the holy grail of current AI research, was right around the corner.





Let us consider the hype more closely. Already in January 2025, Altman wrote that “we are now confident we know how to build AGI”. Altman’s optimism echoed claims by OpenAI’s partner and major financial backer Microsoft, which had put out a paper in 2023 claiming that the GPT-4 model already exhibited “sparks of AGI.” Elon Musk (in 2024) was equally confident that the Grok model developed by his company xAI would reach AGI, an intelligence “smarter than the smartest human being”, probably by 2025 or at least by 2026. Meta CEO Mark Zuckerberg said that his company was committed to “building full general intelligence”, and that super-intelligence is now “in sight”. Likewise, Dario Amodei, co-founder and CEO of Anthropic, said “powerful AI”, i.e., smarter than a Nobel Prize winner in any field, could come as early as 2026 and usher in a new age of health and abundance — the U.S. would become a “country of geniuses in a datacenter”, if … AI didn’t wind up killing us all.





For Mr. Musk and his GenAI fellow travelers, the biggest hurdle on the road to AGI is the lack of computing power (installed in data centers) to train AI bots, which, in turn, is due to a lack of sufficiently advanced computer chips. The demand for more data and more data-crunching capabilities will require about $3 trillion in capital just by 2028, in the estimation of Morgan Stanley. That would exceed the capacity of the global credit and derivative securities markets. Spurred by the imperative to win the AI race with China, the GenAI propagandists firmly believe that the U.S. can be put on the yellow brick road to the Emerald City of AGI by building more data centers faster (an unmistakably “accelerationist” expression).

Interestingly, AGI is an ill-defined notion, and perhaps more of a marketing concept used by AI promoters to persuade their financiers to invest in their endeavors. Roughly, the idea is that an AGI model can generalize beyond the specific examples found in its training data, similar to how some human beings can do almost any kind of work after having been shown a few examples of how to do a task, by learning from experience and changing methods when needed. AGI bots will be capable of outsmarting human beings, creating new scientific ideas, and doing innovative as well as routine coding. AI bots will be telling us how to develop new medicines to cure cancer, fix global warming, drive our cars and grow our genetically modified crops. Hence, in a radical bout of creative destruction, AGI would transform not just the economy and the workplace, but also systems of health care, energy, agriculture, communications, entertainment, transportation, R&D, innovation and science.





OpenAI’s Altman boasted that AGI can “discover new science,” because “I think we’ve cracked reasoning in the models,” while adding that “we’ve a long way to go.” He said “I think we know what to do,” noting that OpenAI’s o3 model “is already pretty smart,” and that he’s heard people say “wow, this is like a good PhD.” Announcing the launch of ChatGPT-5 in August, Mr. Altman posted on the internet that “We think you will love using GPT-5 much more than any previous AI. It is useful, it is smart, it is fast [and] intuitive. With GPT-5 now, it’s like talking to an expert — a legitimate PhD-level expert in anything, any area you need, on demand, they can help you with whatever your goals are.”





But then things began to fall apart, and rather quickly so.





ChatGPT-5 is a letdown



The first piece of bad news is that the much-hyped ChatGPT-5 turned out to be a dud — incremental improvements wrapped in a routing architecture, nowhere near the breakthrough to AGI that Sam Altman had promised. Users are underwhelmed. As the MIT Technology Review reports: “The much-hyped release makes several enhancements to the ChatGPT user experience. But it’s still far short of AGI.” Worryingly, OpenAI’s internal tests show that GPT-5 ‘hallucinates’ in roughly one in ten responses on certain factual tasks when it is connected to the internet. Without web-browsing access, however, GPT-5 is wrong in almost one in two responses, which should be troubling. Even more worrisome, ‘hallucinations’ may also reflect biases buried within datasets. For instance, an LLM might ‘hallucinate’ crime statistics that align with racial or political biases simply because it has learned from biased data.

Of note here is that AI chatbots can be and are actively used to spread misinformation (see here and here). According to recent research, chatbots spread false claims when prompted with questions about controversial news topics 35% of the time — almost double the 18% rate of a year ago (here). AI curates, orders, presents, and censors information, influencing interpretation and debate, pushing dominant (average or preferred) viewpoints while suppressing alternatives, quietly removing inconvenient facts or making up convenient ones. The key issue is: Who controls the algorithms? Who sets the rules for the tech bros? It is evident that by making it easy to spread “realistic-looking” misinformation and biases and/or suppress critical evidence or argumentation, GenAI does and will have non-negligible societal costs and risks — which have to be counted when assessing its impacts.

Building larger LLMs is leading nowhere

The ChatGPT-5 episode raises serious doubts and existential questions about whether the GenAI industry’s core strategy of building ever-larger models on ever-larger data distributions has already hit a wall. Critics, including cognitive scientist Gary Marcus (here and here), have long argued that simply scaling up LLMs will not lead to AGI, and GPT-5’s sorry stumbles do validate those concerns. It is becoming more widely understood that LLMs are not constructed on proper and robust world models, but instead are built to autocomplete, based on sophisticated pattern-matching — which is why, for example, they still cannot even play chess reliably and continue to make mind-boggling errors with startling regularity.

My new INET Working Paper discusses three sobering research studies showing that ever-larger GenAI models do not become better, but worse, and do not reason, but rather parrot reasoning-like text. To illustrate, a recent paper by scientists at MIT and Harvard shows that even when trained on all of physics, LLMs fail to uncover the general and universal physical principles underlying their training data. Specifically, Vafa et al. (2025) note that LLMs follow a “Kepler-esque” approach: they can successfully predict the next position in a planet’s orbit, but fail to find the underlying explanation, Newton’s law of gravity (see here). Instead, they resort to fitting made-up rules that allow them to predict the planet’s next orbital position, while failing to find the force vector at the heart of Newton’s insight. The MIT-Harvard paper is explained in this video. LLMs cannot and do not infer physical laws from their training data. Remarkably, they cannot even identify the relevant information from the internet. Instead, they make it up.
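
To make the “Kepler-esque” point concrete, here is a minimal, purely illustrative sketch (my own toy example, not code from Vafa et al.): a simple least-squares predictor fitted to a simulated orbit forecasts the next position quite well, yet nothing in its fitted coefficients identifies the inverse-square force law that actually generates the data.

```python
# Toy illustration (assumed setup, not from Vafa et al. 2025): good next-step
# prediction of an orbit without recovering the law that generates it.
import numpy as np

# Simulate a 2D orbit under Newtonian gravity (GM = 1), the "hidden" law.
GM, dt, steps = 1.0, 0.01, 20_000
pos = np.array([1.0, 0.0])
vel = np.array([0.0, 1.1])          # below escape speed -> elliptical orbit
traj = []
for _ in range(steps):
    r = np.linalg.norm(pos)
    acc = -GM * pos / r**3          # inverse-square force vector
    vel = vel + acc * dt            # semi-implicit Euler keeps the orbit stable
    pos = pos + vel * dt
    traj.append(pos.copy())
traj = np.array(traj)

# Fit a "Kepler-esque" predictor: next position from the two previous positions.
X = np.hstack([traj[:-2], traj[1:-1]])   # features: (x_{t-1}, y_{t-1}, x_t, y_t)
Y = traj[2:]                             # target:   (x_{t+1}, y_{t+1})
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)

print("mean next-step error:", np.abs(X @ coef - Y).mean())  # small: prediction works
print("fitted coefficients:\n", coef.round(3))
# The coefficients amount to a generic extrapolation rule; they contain no
# trace of the force law  a = -GM * r / |r|^3  that produced the trajectory.
```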

Worse, AI bots are incentivized to guess (and give an incorrect response) rather than admit they do not know something. This problem is recognized by researchers from OpenAI in a recent paper. Guessing is rewarded — because, who knows, it might be right. The error is at present uncorrectable. Accordingly, it might well be prudent to think of “Artificial Information” rather than “Artificial Intelligence” when using the acronym AI. The bottom line is straightforward: this is very bad news for anyone hoping that further scaling — building ever larger LLMs — would lead to better outcomes (see also Che 2025).
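
Why guessing wins can be shown with a simple expected-score calculation (my own illustration of the incentive argument, not a computation from the OpenAI paper): under accuracy-only grading, any non-zero chance of a lucky guess beats saying “I don’t know”, unless wrong answers are explicitly penalized.

```python
# Illustrative expected-score calculation: why accuracy-only grading rewards guessing.
def expected_score(p_correct: float, wrong_penalty: float = 0.0) -> float:
    """Expected benchmark score of a model that always guesses.

    p_correct     -- probability the guess happens to be right
    wrong_penalty -- points subtracted for a confident wrong answer
    """
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

ABSTAIN_SCORE = 0.0   # "I don't know" earns nothing under accuracy-only grading

for p in (0.1, 0.3, 0.5):
    plain = expected_score(p)                         # standard accuracy grading
    penalized = expected_score(p, wrong_penalty=1.0)  # grading that punishes errors
    print(f"p(correct)={p:.1f}  guess={plain:+.2f}  "
          f"guess-with-penalty={penalized:+.2f}  abstain={ABSTAIN_SCORE:+.2f}")

# Under plain accuracy, guessing beats abstaining for any p > 0, so a model
# evaluated this way is pushed toward answering confidently rather than
# admitting uncertainty -- the hallucination incentive described above.
```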

95% of generative AI pilot projects in companies are failing


Corporations had rushed to announce AI investments or claim AI capabilities for their products in the hope of turbocharging their share prices. Then came the news that the AI tools are not doing what they are supposed to do and that people are realizing it (see Ed Zitron). An August 2025 report titled The GenAI Divide: State of AI in Business 2025, published by MIT’s NANDA initiative, concludes that 95% of generative AI pilot projects in companies are failing to deliver revenue growth. As reported by Fortune, “generic tools like ChatGPT [….] stall in enterprise use since they don’t learn from or adapt to workflows”. Quite.





Indeed, firms are backpedaling after cutting hundreds of jobs and replacing them with AI. For instance, Swedish “Buy Burritos Now, Pay Later” firm Klarna bragged in March 2024 that its AI assistant was doing the work of 700 (laid-off) workers, only to rehire them (sadly, as gig workers) in the summer of 2025 (see here). Other examples include IBM, forced to reemploy staff after laying off about 8,000 workers to implement automation (here). Recent U.S. Census Bureau data by firm size show that AI adoption has been declining among companies with more than 250 employees.

MIT economist Daron Acemoglu (2025) predicts rather modest productivity impacts of AI in the next 10 years and warns that some applications of AI may have negative social value. “We’re still going to have journalists, we’re still going to have financial analysts, we’re still going to have HR employees,” Acemoglu says. “It’s going to impact a bunch of office jobs that are about data summary, visual matching, pattern recognition, etc. And those are essentially about 5% of the economy.” Similarly, using two large-scale AI adoption surveys (late 2023 and 2024) covering 11 exposed occupations (25,000 workers in 7,000 workplaces) in Denmark, Anders Humlum and Emilie Vestergaard (2025) show, in a recent NBER Working Paper, that the economic impacts of GenAI adoption are minimal: “AI chatbots have had no significant impact on earnings or recorded hours in any occupation, with confidence intervals ruling out effects larger than 1%. Modest productivity gains (average time savings of 3%), combined with weak wage pass-through, help explain these limited labor market effects.” These findings provide a much-needed reality check for the hyperbole that GenAI is coming for all of our jobs. Reality is not even close.
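
Acemoglu’s “about 5% of the economy” remark translates into a small aggregate number under a simple task-based, back-of-the-envelope calculation. The sketch below uses my own assumed cost-saving figure, not Acemoglu’s published estimates, purely to show the arithmetic: a small exposed task share times a modest saving per task yields a tiny economy-wide productivity gain.

```python
# Rough back-of-the-envelope in the spirit of Acemoglu's task-based argument
# (illustrative numbers only, not his published estimates).
def aggregate_productivity_gain(task_share: float, cost_saving: float) -> float:
    """Economy-wide productivity gain when `task_share` of tasks become
    `cost_saving` cheaper -- a Hulten-style approximation: share x saving."""
    return task_share * cost_saving

exposed_share = 0.05     # ~5% of tasks are exposed office work (data summary, etc.)
saving_per_task = 0.14   # assumed average cost saving on those tasks (hypothetical)

gain_total = aggregate_productivity_gain(exposed_share, saving_per_task)
gain_per_year = (1 + gain_total) ** (1 / 10) - 1    # spread over a decade

print(f"total productivity gain over 10 years: {gain_total:.1%}")   # ~0.7%
print(f"implied annual gain:                   {gain_per_year:.2%}")  # under 0.1%/year
```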

GenAI will not even make the tech workers who do the coding redundant, contrary to the predictions of AI enthusiasts. OpenAI researchers found (in early 2025) that advanced AI models (including GPT-4o and Anthropic’s Claude 3.5 Sonnet) are still no match for human coders. The AI bots failed to grasp how widespread bugs are or to understand their context, leading to solutions that are incorrect or insufficiently comprehensive. Another new study, from the nonprofit Model Evaluation and Threat Research (METR), finds that in practice programmers using early-2025 AI tools are actually slower, spending 19 percent more time on tasks with GenAI assistance than when coding by themselves (see here). Programmers spent their time reviewing AI outputs, prompting AI systems, and correcting AI-generated code.





The U.S. economy at large is hallucinating





The disappointing rollout of ChatGPT-5 raises doubts about OpenAI’s ability to build and market consumer products that users are willing to pay for. But the point I want to make is not just about OpenAI: the American AI industry as a whole has been built on the premise that AGI is just around the corner. All that is needed is sufficient “compute”, i.e., millions of Nvidia AI GPUs, enough data centers and sufficient cheap electricity to do the massive statistical pattern mapping needed to generate (a semblance of) “intelligence”. This, in turn, means that “scaling” (investing billions of U.S. dollars in chips and data centers) is the one-and-only way forward — and this is exactly what the tech firms, Silicon Valley venture capitalists and Wall Street financiers are good at: mobilizing and spending funds, this time for “scaling-up” generative AI and building data centers to support all the expected future demand for AI use.

During 2024 and 2025, Big Tech firms invested a staggering cumulative $750 billion in data centers, and they plan a further cumulative investment of $3 trillion in data centers during 2026-2029 (Thornhill 2025). The so-called “Magnificent 7” (Alphabet, Apple, Amazon, Meta, Microsoft, Nvidia, and Tesla) spent more than $100 billion on data centers in the second quarter of 2025 alone; Figure 1 gives the capital expenditures for four of the seven corporations.
[...]