The Elon Musk vs. Sam Altman feud is a distraction. — Karen Hao

If it wasn’t obvious before, Elon Musk and Sam Altman can’t stand each other. The two were once co-founders of OpenAI, but now they’re locked in a bitter feud, playing out dramatically in a California courtroom. Musk is suing, claiming Altman and OpenAI president Greg Brockman tricked him into starting and funding the organization as a non-profit, only to later restructure it to include a for-profit arm. OpenAI argues Musk knew about those plans all along and says the lawsuit is just an attempt to hurt a competitor.

I know this story well. I’ve been covering OpenAI since 2019, and I even spent three days inside their office shortly after Musk left and Altman became CEO. If there’s one thing I’ve learned from following this company and the AI industry, it’s that this world breeds intense rivalries.

It’s no coincidence that nearly all of OpenAI’s original founders left under bad terms, or that every tech billionaire seems to have a nearly identical AI company. The frantic AI race is tied up with the petty, clashing egos of the ultra-wealthy, all determined to outdo each other.

If Musk wins his case, it could be devastating for OpenAI, especially as it prepares for a possible initial public offering this year. Musk is seeking $150 billion in damages from the company and one of its top investors, Microsoft. He also wants to turn OpenAI back into a non-profit, remove Altman and Brockman from leading the for-profit side, and kick Altman off the non-profit board.

But thinking the future of AI development will be decided by a personality contest misses the bigger picture. Yes, Brockman’s diary entries are telling, and former OpenAI CTO Mira Murati’s testimony about Altman pitting executives against each other backs up what I’ve reported before. But focusing on whether Altman is untrustworthy, or whether Musk is even worse, distracts from a much deeper issue.

If OpenAI lost its place as the AI industry leader, another barely different competitor—like Musk’s xAI or someone else—would simply take its place. That includes companies like Anthropic, which has a better reputation but still does many of the same things: rushing decisions for speed, ignoring intellectual property, and aggressively building massive computing systems that harm communities.

Nothing about this trial or OpenAI’s financial setup will change the drive of these companies to gather more data and money, reshape the planet, exhaust and replace workers, and embed themselves deep within governments to gain power over systems of control. We’d still live in a world where a tiny few have the immense power to shape it in their image and dictate how billions of people live.

Despite what Silicon Valley wants you to believe, AI doesn’t have to lead to domination, and broad benefits from the technology can’t come from such a foundation. Before the industry shifted hard toward building extremely resource-heavy AI models, many other types of AI thrived: small, specialized systems for detecting cancer, reviving endangered languages, forecasting extreme weather, and speeding up drug discovery. There were also ideas for new AI technologies that didn’t need much data at all, or that could run on mobile devices instead of massive supercomputers.

Even now, with large language models, plenty of research and examples—like DeepSeek—show that different methods can achieve the same results using a tiny fraction of the scale that AI companies use to justify their planet-consuming ambitions. As Sara Hooker, former vice president of research at Google, put it, “Scaling is a cheap formula for getting more performance, but it’s also a highly imprecise formula.” An architect at the Canadian AI company Cohere once told me, “We love it so much because it fits neatly into predictable planning cycles. It’s easier to say ‘throw more computing power at the problem’ than to come up with a new method.”

But these many paths are withering in the shadow of the big players. In the first quarter of last year, nearly half of all venture capital went to just two companies: OpenAI and Anthropic. That’s just the tip of a years-long trend of capital consolidation that has drained academia and starved research that goes against—or simply doesn’t fit—the corporate agenda. According to a study by MIT researchers published in Science, the share of AI PhD graduates who chose to work in industry jumped from 21% to 70% between 2004 and 2020. And it’s not just diversity in AI development that’s suffering. In 2024, funding for climate tech dropped by 40%, as investors redirected their money in part toward the brute-force scaling of the AI empires.

It doesn’t have to be this way. Over the past year, as I’ve traveled to dozens of cities across the US and around the world, I’ve seen this realization taking hold. People everywhere are taking up the cause of collective resistance. The most visible and vibrant examples are the data center protests popping up in communities across different regions and political divides. In New Mexico, I met residents eager to educate themselves about the AI industry over potluck dinners, demanding transparency and accountability for local projects—like a massive, multi-billion-dollar OpenAI supercomputing campus proposed for the state as part of the company’s $500 billion Stargate computing infrastructure buildout.

At a gathering in New York, I listened to KeShaun Pearson, a leader in the fight in Memphis, Tennessee, against Musk’s Colossus supercomputers. He gave a heartfelt reminder of the toll the facility’s dozens of methane gas turbines were taking on his community. “Take two deep breaths,” he told the audience. “That’s a human right” that was being taken from them. As of this month, Anthropic is using Colossus.

At the same event, Kitana Ananda, another community leader from Tucson, Arizona, who is mobilizing against Project Blue—an Amazon hyperscale AI facility—described the deep feeling she and her neighbors shared: that they were fighting not just for their own community, but for every community being steamrolled by the AI industry. On a 114°F day, as they packed into city hall in a show of force and watched the council vote 7-0 to pause the project in its current form, they cheered and cried with joy, knowing their victory was every community’s victory.

Workers are also striking across sectors and countries. In northern California, more than 2,000 healthcare professionals at Kaiser Permanente walked out over the threat of AI being used to automate their work or harm patient outcomes. In Kenya, data workers and content moderators hired by AI companies to train and clean up their models are organizing to draw international attention to their exploitation and demand better working conditions.

In more than 30 countries, cultural workers—from voice actors to screenwriters to manga illustrators—are mobilizing to speak out against issues like having their work used for training, having their likeness stolen by AI systems, or being replaced by them, according to the Worker Mobilizations around AI database, a research project led by the Creative Labour & Critical Futures group at the University of Toronto.

Educators and students are pressuring their institutions. Victims and their families are filing lawsuits. Tech employees themselves are campaigning. Group chats for organizing are everywhere. People are marching.

This growing wave of collective pushback seems to be forcing the AI industry to scale back its ambitions. In 2025 alone, infrastructure projects worth more than $150 billion were blocked or stalled, according to Data Center Watch, a project tracking opposition led by AI research firm 10a Labs. Investors are taking notice and starting to lower their expectations about how much AI companies can actually deliver on their promises.

OpenAI shut down its video-generation app Sora, which company executives once praised as one of their most important products and a new frontier in AI development. As the Wall Street Journal reported, Sora’s closure was ultimately driven by several interconnected factors shaped by grassroots action: declining usage, negative public perception, tighter finances, and severe limits on computing resources.

Here’s the thing about empires. They don’t just try to consume everything—they rely on it to survive. In other words, what seems to give them immense strength is actually their biggest weakness. When even a small portion of the resources they need is cut off, the giants start to stumble. So if you’re wondering what will truly hold the AI industry accountable and offer a different path for the technology’s development, look beyond the billionaire squabbles. The real work is happening everywhere else.

Karen Hao is the author of Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.

Frequently Asked Questions
Here is a list of FAQs based on the article “The Elon Musk vs. Sam Altman feud is a distraction” by Karen Hao.

Beginner-Level Questions

1. What is this article about?
It’s about the public fight between Elon Musk and Sam Altman over who controls OpenAI. The author argues that this personal drama is actually a distraction from bigger, more important issues in AI.

2. Who are Elon Musk and Sam Altman?
They are both very famous tech leaders. Elon Musk is the CEO of Tesla and SpaceX. Sam Altman is the CEO of OpenAI, the company that created ChatGPT. They were co-founders of OpenAI together but had a falling out.

3. What is the feud about?
The feud is mainly about money and control. Musk sued OpenAI, claiming it broke its original promise to be a non-profit for the good of humanity. Altman says Musk just wants to slow down OpenAI’s success because he has his own competing AI company.

4. Why does the author say it’s a distraction?
The author, Karen Hao, believes that while we are all watching the personal drama between two billionaires, we are ignoring more critical questions. These include who really benefits from AI, how it will affect jobs, and what rules should govern it.

5. What should we be paying attention to instead?
Instead of the Musk vs. Altman soap opera, we should be paying attention to the real-world impact of AI on regular people: things like job automation, bias in algorithms, privacy, and the concentration of power in a few big tech companies.

Advanced-Level Questions

6. What is the core argument of Karen Hao’s article?
The core argument is that the high-profile legal and personal battle between Musk and Altman serves as a smokescreen. It turns a complex public-policy debate about AI safety and ethics into a simple celebrity gossip story, which benefits the tech giants by keeping the public focused on personalities rather than regulation.

7. How does the feud specifically distract from AI safety?
The feud frames AI safety as a personal dispute. This oversimplifies the problem and prevents a serious public conversation about technical safety measures, corporate accountability, and the need for democratic oversight of powerful AI systems.