When the AI bubble bursts, humans will finally have the chance to regain control.

If AI hasn’t changed your life by 2025, it almost certainly will in the year ahead. This is one of the few predictions we can confidently make in such uncertain times. That’s not to say you should believe all the hype about what the technology can do now or might achieve someday. The hype doesn’t need your belief—it’s already inflated by Silicon Valley money to the point of distorting the global economy and fueling geopolitical rivalries, reshaping our world regardless of whether AI’s most extravagant promises ever come true.

ChatGPT launched just over three years ago and quickly became the fastest-growing consumer app in history. It now has around 800 million weekly users, and its parent company, OpenAI, is valued at roughly $500 billion. OpenAI’s CEO, Sam Altman, has woven a complex—and to some, suspiciously opaque—web of deals with other industry players to build the infrastructure needed for America’s AI-powered future. These commitments total about $1.5 trillion. That’s not actual cash, but to put it in perspective: if you spent $1 every second, it would take you 31,700 years to get through a trillion dollars.
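The arithmetic behind that comparison is easy to verify; here is a quick back-of-envelope sketch in Python, using only the figures from the paragraph above:

```python
# Back-of-envelope check on the "31,700 years" figure:
# spending $1 every second, how long to get through $1 trillion?
trillion_dollars = 1_000_000_000_000
seconds_per_year = 60 * 60 * 24 * 365.25   # about 31.6 million seconds
years = trillion_dollars / seconds_per_year
print(f"{years:,.0f} years")               # -> "31,688 years", i.e. roughly 31,700
```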

Alphabet (Google’s parent), Amazon, Apple, Meta (formerly Facebook), and Microsoft—which holds a $135 billion stake in OpenAI—are all pouring hundreds of billions into the same gamble. Without these investments, the U.S. economy would likely be stagnating.

Economic analysts and historians who’ve studied past industrial frenzies, from 19th-century railroads to the dotcom boom and bust, are calling AI a bubble. Altman himself has said, “There are many parts of AI that I think are kind of bubbly right now.” Naturally, he doesn’t include his own part in that. Amazon founder Jeff Bezos has also called it a bubble—but a “good” one that speeds up economic progress. In this view, a good bubble funds infrastructure, expands human knowledge, and leaves lasting benefits even after it pops, justifying the ruin of the “little people” who get hurt along the way.

The tech world’s bullishness is a potent blend of old-fashioned salesmanship, plutocratic grandiosity, and utopian ideology. At its heart is a marketing pitch: current AI models already outperform humans at many tasks. Soon, the thinking goes, machines will achieve “general intelligence”—human-like cognitive versatility—freeing us from the need for any human input. Once AI can teach itself and design its own successors, it could advance at an unimaginable pace toward super-intelligence.

The company that reaches that milestone will have no trouble paying its debts. The men driving this vision—and the leading evangelists are all men—would be to omniscient AI what ancient prophets were to their gods. That’s quite a role for them. What happens to the rest of us in this “post-sapiens” order is a bit less clear.

The U.S. isn’t the only superpower invested in AI, so Silicon Valley’s rush toward maximum capability has geopolitical stakes. China has taken a different path, shaped partly by its tradition of centralized industrial planning and partly by the fact that it’s playing catch-up in innovation. Beijing is pushing for faster, broader adoption of slightly less advanced—but still powerful—AI across its economy and society. China is betting on a widespread boost from everyday AI, while the U.S. is aiming for a transformative leap toward general AI.

With global supremacy as the prize, neither side has much incentive to worry about risks or agree to international rules that would restrict AI’s uses or require transparency in its development. Neither the U.S. nor China wants to submit a strategically vital industry to standards co-written with foreign rivals.

In the absence of global oversight, we’re left relying on the integrity of modern-day robber barons and authoritarian bureaucrats to build ethical safeguards into systems that are already being woven into the tools we use for work, entertainment, and education. This year, Elon Musk announced that his company is developing Baby Grok, an AI chatbot intended for children as young as three. The adult version of this chatbot has expressed white supremacist views and even proudly called itself “MechaHitler.” While shocking, such blatant statements are at least honest—they are easier to recognize than the more subtle prejudices embedded in other AI systems that haven’t been as openly steered by ideology as Musk’s algorithms.

Not all AI systems are large language models like Grok, but all such models are prone to hallucinating and to absorbing biases from the data they are trained on. They don’t truly “understand” or “think” about questions the way a person does. Instead, they take a prompt, calculate how likely certain words are to appear together based on their training data, and then generate a response that sounds plausible. Often the result is accurate and convincing, but it can also be complete nonsense. As more AI-generated content floods the internet, the balance between useful information and low-quality “slop” in these models’ training data shifts, meaning they are increasingly fed junk—and cannot be trusted to return reliable information.
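To make that mechanism concrete, here is a deliberately toy sketch in Python of next-word prediction, using nothing beyond the standard library. Real language models use neural networks with billions of parameters rather than a word-pair lookup table, and the three-sentence corpus below is invented for illustration, but the generation loop is the same in spirit: score likely next words, sample one, repeat.

```python
import random
from collections import Counter, defaultdict

# A tiny invented corpus standing in for "training data";
# real models ingest trillions of words, not three sentences.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word follows each other word: a crude,
# lookup-table version of "calculating how likely certain words
# are to appear together".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start: str, length: int = 10) -> str:
    """Produce a plausible-sounding continuation by repeatedly
    sampling the next word in proportion to how often it followed
    the previous one in the corpus. No understanding is involved."""
    words = [start]
    for _ in range(length):
        options = following[words[-1]]
        if not options:
            break
        nxt, = random.choices(list(options), weights=options.values())
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug . the cat chased"
```

Run it a few times and it emits fluent-looking fragments that are sometimes sensible and sometimes nonsense, which is the small-scale version of the hallucination problem the paragraph above describes.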

If this continues, we risk heading toward a bleak future: a synthetic, artificial reality shaped by AI systems that reflect the biases and egos of Silicon Valley’s powerful few. But that future is not inevitable. The current hype around AI, fueled by over-enthusiastic boosters and their alignment with political interests like the Trump administration, is a story of human greed and shortsightedness—not some unstoppable technological evolution. The AI being created is impressive, yet deeply flawed, mirroring the shortcomings of its creators, who excel more at sales and financial engineering than at building truly intelligent systems.

The real bubble isn’t in stock prices—it’s in the inflated egos of an industry that believes it’s just one step away from achieving god-like computational power. When that bubble bursts, and the U.S.’s over-heated economy cools, there will be an opportunity for more balanced voices to shape how we manage the risks and regulations of AI. That moment may not arrive in 2026, but it is approaching—a time when we will have to face a clear and unavoidable choice: Do we build a world where AI serves humanity, or one where humanity serves AI? We won’t need ChatGPT to answer that question.

Frequently Asked Questions: The AI Bubble and Human Control

Beginner-Level Questions

What is the AI bubble?
It’s the idea that the current hype, massive investment, and sky-high valuations in artificial intelligence may be unsustainable, similar to past tech bubbles. If it bursts, a period of market correction and reduced investment would follow.

What does “regain control” mean in this context?
It suggests that if the AI frenzy slows, society could have a more deliberate, less rushed conversation about how to integrate AI. The focus could shift from pure profit and speed to human oversight, ethics, job impact, and setting clear rules.

Is this saying AI is bad?
Not necessarily. It’s more about the pace and hype surrounding it. The concern is that a bubble prioritizes rapid deployment over careful consideration of risks, safety, and societal impact.

What are signs of an AI bubble?
Extreme hype in the media, companies adding “AI” to their names for stock boosts, massive funding rounds for unproven ideas, fear of missing out driving all decisions, and promises of near-term artificial general intelligence that may be unrealistic.

Would an AI bubble burst stop AI development?
No. It would likely slow down the breakneck pace of investment and hype-driven projects. Serious, practical, and sustainable AI research and applications would continue, but with more scrutiny.

Intermediate and Advanced Questions

How would a burst actually help us regain control?
A slowdown could create space for:
Stronger Regulation: Governments could catch up and implement thoughtful laws.
Ethical Frameworks: Focus could shift to bias, transparency, and accountability.
Labor Adaptation: More time to retrain workers and redesign jobs.
Public Discourse: A less frenzied environment for society to debate AI’s role.

What’s the biggest risk if the bubble doesn’t burst soon?
The risk is lock-in: embedding flawed, biased, or unsafe AI systems deeply into critical infrastructure during a hype cycle, making them very hard to correct later.

Aren’t the big tech companies too invested for a true burst?
They are powerful, but a major bubble burst could still lead to significant stock devaluations, reduced spending on speculative research, and a shift in focus to monetizing existing products.