Elon Musk walked into a California federal courtroom this week to convince a jury that Sam Altman stole his nonprofit and turned it into a $130 billion for-profit machine. He left having publicly confirmed that his own AI company, xAI, used OpenAI’s models to help build Grok — the very chatbot he positions as OpenAI’s ethical alternative. It’s the kind of courtroom moment that doesn’t just undermine a legal strategy; it rewrites the entire narrative around who’s really copying whom in the AI arms race.
When pressed by opposing counsel on whether xAI had employed model distillation — a technique where a smaller model learns by mimicking the outputs of a larger one — Musk initially tried to wave it away as standard industry practice. When asked if that meant “yes,” he conceded: “Partly.” That single word may end up costing him more than any jury verdict.
You Can’t Sue Your Supplier and Call Yourself Independent
The fundamental contradiction here is breathtaking. Musk’s entire lawsuit against OpenAI rests on the argument that Sam Altman betrayed the original nonprofit mission — that OpenAI was supposed to be open, transparent, and built for humanity’s benefit, not to make Microsoft richer. Fair enough. But while making that argument, Musk has simultaneously confirmed that xAI treated OpenAI’s models as a training resource for Grok.
Think about that for a second. Musk is essentially saying: “OpenAI is a corrupt monopoly that betrayed its founders — and also, we used their technology to build our competitor.” It’s like suing a restaurant for betraying its customers while admitting you’ve been cooking from its stolen recipes. The moral high ground doesn’t survive that kind of admission.
Model distillation is, as Musk claimed, common practice in the AI industry. Smaller labs routinely use outputs from frontier models to bootstrap their own systems. But there’s a massive difference between an anonymous startup quietly distilling GPT outputs and the world’s richest man doing it while filing a federal lawsuit against the company whose models he’s using. The hypocrisy isn’t technical — it’s strategic.
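For readers unfamiliar with the mechanics, the “standard practice” Musk is invoking is a real and well-understood technique. In knowledge distillation, a smaller student model is trained to match a larger teacher’s temperature-softened output distribution instead of (or in addition to) hard labels. Here is a minimal, illustrative sketch of the loss at the heart of it — pure Python, toy logits, no real model involved:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature produces the
    # "softer" probability targets that distillation relies on.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's soft targets and the student's
    # distribution -- the quantity a distilled student is trained to minimize.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy check: a student that matches the teacher incurs ~zero loss;
# a mismatched one pays a clear penalty.
teacher = [2.0, 0.5, -1.0]
assert distillation_loss(teacher, [2.0, 0.5, -1.0]) < 1e-9
assert distillation_loss(teacher, [-1.0, 0.5, 2.0]) > 0.1
```

The key point for the lawsuit: the teacher’s outputs are the training signal. Whoever supplies them is, functionally, a supplier of training data.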
Musk’s Own AI Rankings Reveal the Problem
During testimony, Musk was asked to rank the world’s leading AI providers. His answer was telling: Anthropic first, OpenAI second, Google third, Chinese open-source models fourth. Notice who’s missing from the top tier? xAI itself. The man who has poured billions into building a ChatGPT killer publicly admitted his company isn’t even in the conversation when it comes to the best AI systems.
This ranking does two things. First, it validates Anthropic’s position as the quiet frontrunner in the AI race — a company that has deliberately avoided the public circus that Musk and Altman have turned the industry into. Second, it raises an uncomfortable question: if Grok isn’t competitive enough to rank alongside its rivals, and xAI needed to distill from OpenAI to build it, what exactly is the independent value proposition that justifies xAI’s reported $50 billion valuation?
The “Standard Practice” Defense Is a Trap
Musk’s attempt to normalize distillation by calling it “standard practice” and saying “it is standard practice to use other AIs to validate your AI” is technically accurate but legally dangerous. Here’s why: OpenAI’s terms of service explicitly prohibit using its model outputs to develop competing AI systems. If xAI distilled from OpenAI’s API outputs, that’s potentially a terms-of-service violation — and Musk just confirmed it under oath.
OpenAI’s legal team didn’t miss this. They’ve already been building a case around xAI’s practices, and Musk’s testimony hands them a gift-wrapped admission. The irony is sharp: Musk came to court to prove OpenAI wronged him, and may have inadvertently given OpenAI grounds for a countersuit.
It’s also worth noting that the distinction between “validation” and “training” matters enormously in this context. Using another model to validate your outputs — essentially running quality checks — is indeed common. But distillation for training purposes is a fundamentally different activity. Musk’s attempt to blur that line under oath suggests he knows exactly how damaging the full truth would be.
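The distinction Musk blurred can be made concrete. In validation, a rival model’s output is used only as a reference to score your own model’s answer; in distillation-style training, that same output becomes the label your model is optimized to reproduce. A hedged toy sketch of the difference (the helper names and the overlap metric are hypothetical, chosen for illustration):

```python
def validate(our_answer, reference_answer):
    # Validation: the reference answer is only a yardstick. We compute a
    # toy quality score (word overlap) and nothing flows back into training.
    ours = set(our_answer.split())
    ref = set(reference_answer.split())
    return len(ours & ref) / max(len(ref), 1)

def build_training_pair(prompt, reference_answer):
    # Training/distillation: the other model's output IS the target.
    # Every such pair directly shapes what the student model learns.
    return {"input": prompt, "target": reference_answer}

score = validate("paris is the capital", "the capital is paris")
pair = build_training_pair("capital of france?", "the capital is paris")
assert 0.0 <= score <= 1.0
assert pair["target"] == "the capital is paris"
```

Same reference output, two very different uses: one leaves your model untouched, the other makes it a derivative of the reference. That is the line Musk’s “validation” framing quietly steps over.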
What This Means for the AI Industry
Beyond the courtroom drama, Musk’s admission surfaces a dirty secret that the entire AI industry has been quietly living with: almost nobody is building from scratch anymore. The frontier labs — OpenAI, Anthropic, Google DeepMind — train on massive original datasets. Everyone else, to varying degrees, is learning from their outputs. The Chinese open-source ecosystem that Musk ranked fourth has been openly built on distillation. Startups across Silicon Valley do it quietly. Musk just became the first billionaire to confirm it under oath.
This has massive implications for how we think about AI competition. If the second tier of AI companies is fundamentally dependent on the first tier’s outputs, then the moat around frontier labs is even deeper than their valuations suggest. OpenAI and Anthropic aren’t just building AI products — they’re building the training data that their competitors need to exist. That’s not a competitive landscape; it’s a dependency chain.
The Verdict: Musk’s Credibility Just Took a $130 Billion Hit
The trial is still ongoing, and Musk’s legal team will undoubtedly try to minimize the distillation admission. But the damage is done. Musk entered this courtroom as a betrayed founder fighting for AI’s soul. He’s leaving it as a competitor who borrowed his rival’s homework and then sued them for changing the assignment. The jury will remember that. And more importantly, the AI industry — which has been watching this trial with popcorn in hand — now has public confirmation that the man who calls himself AI’s biggest advocate couldn’t build his flagship product without the company he says betrayed humanity.
The Musk v. Altman trial was supposed to be about OpenAI’s corporate governance. Instead, it’s becoming a referendum on whether Elon Musk’s AI empire was ever as independent as he claimed. Based on this week’s testimony, the answer is a resounding no.