NHS England — the organisation responsible for the healthcare of 56 million people — just ordered every one of its technology teams to make their public GitHub repositories private by May 11, 2026. The reason? A single line buried in an internal directive: “rapid advancements in AI models capable of large-scale code ingestion, inference, and reasoning” — with a specific callout to Anthropic’s Mythos model — have made open-source code a national security liability.

Let that sink in. The world’s fifth-largest employer and the backbone of British public healthcare just decided that AI has gotten good enough to read code, understand architecture, and find vulnerabilities faster than human defenders can patch them. And instead of racing to fix the code, they chose the nuclear option: hide it.

What Happened — And Why It’s More Dramatic Than It Sounds

On May 5, NHS England’s Engineering Board issued an internal directive to all technical leads: every public GitHub repository under the NHS England organisation must be switched to private by May 11. Teams that want to keep a repo public must apply for an exemption — by May 6. That’s a one-day window to justify keeping your open-source project alive.

The directive doesn’t mince words. It states that public repositories “materially increase the risk of unintended disclosure of source code, architectural decisions, configuration detail, and contextual information that may be exploited.” The document explicitly names Anthropic’s Mythos as the catalyst — an AI model the NHS believes can ingest entire codebases, reason about their structure, and identify attack surfaces at a speed and scale no security team can match.

An NHS England spokesperson confirmed the move, telling reporters: “We are temporarily restricting access to some NHS England source code to further strengthen cyber security while we assess the impact of rapid developments in AI models.”

The word “temporarily” is doing a lot of heavy lifting in that sentence.

The Real Fear Isn’t Code — It’s What AI Can Infer From It

Here’s the thing most coverage is missing: this isn’t about someone finding a hardcoded password in a repo. That’s been a problem forever, and there are tools for it. The NHS’s concern is fundamentally different — and arguably more terrifying.

A model like Mythos can take hundreds of repositories, cross-reference them, work out how services talk to each other, identify which systems handle patient data, map out authentication flows, and pinpoint the weakest links, all in minutes. It's not about finding a single bug. It's about an AI building a complete mental model of your organisation's infrastructure from publicly available code, then telling an attacker exactly where to hit.

Think of it like this: a human security researcher might spend weeks understanding how NHS systems interact. A reasoning model can do it during a coffee break. And the NHS, which suffered a devastating ransomware attack on its supply chain in 2024, knows exactly what happens when attackers find the soft spots.
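To make that concrete, here is a deliberately simplified sketch of the cross-repository inference involved. The service names and dependency manifests below are invented for illustration (they are not real NHS systems); the point is that once declared dependencies are scraped from public repos, a few lines of graph traversal reveal every service that sits on a path to the one holding patient data:

```python
from collections import deque

# Hypothetical service-to-dependency map, of the kind a model could
# assemble by scraping requirements files, docker-compose files, and
# configs across an organisation's public repos. All names invented.
calls = {
    "appointment-portal": ["auth-service", "patient-records-api"],
    "prescription-service": ["auth-service", "patient-records-api"],
    "analytics-dashboard": ["prescription-service"],
    "auth-service": ["ldap-gateway"],
    "patient-records-api": ["ldap-gateway"],
    "ldap-gateway": [],
}

def reachable(graph, start):
    """Breadth-first walk: every service `start` can call, transitively."""
    seen, queue = {start}, deque([start])
    while queue:
        for dep in graph.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen - {start}

# Every service with a call path to the patient-data store is a
# potential pivot point for an attacker: its attack surface.
attack_surface = {
    svc for svc in calls
    if "patient-records-api" in reachable(calls, svc)
}
```

Running this on the toy data flags the portal, the prescription service, and (transitively) the analytics dashboard: exactly the kind of "hit here first" map that would otherwise take a human researcher weeks of reading to draw.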

The Open-Source Community Is Furious — And They Have a Point

Within hours of the directive leaking, an open letter signed by 74 developers, researchers, and open-source advocates hit the internet. The letter argues that closing these repositories doesn’t actually improve security — it just removes the ability of independent researchers to find and report vulnerabilities. Security through obscurity, they argue, is the oldest bad idea in cybersecurity.

Terence Eden, a prominent UK open-source advocate and former NHS digital advisor, put it bluntly: “Don’t let them take away your right to see the code which underpins our nation’s healthcare.”

The critics aren’t wrong. Open-source code for public services exists for a reason: transparency, community auditing, interoperability, and trust. When the UK Government Digital Service was established, making code open was a core design principle — the idea being that public money should produce public code. NHS England is now reversing that principle, and the justification is essentially: “AI is too smart for transparency.”

This Is the First Domino — Every Government Will Face This Decision

Here’s the second-order effect nobody’s talking about yet. If the NHS’s logic holds — that AI models can now systematically exploit open-source codebases at scale — then every government in the world running open-source infrastructure faces the same problem. And most of them are running far more of it than the NHS.

India’s ABDM (Ayushman Bharat Digital Mission), which aims to digitise healthcare records for 1.4 billion people, has significant open-source components. The US Department of Defense uses open-source software extensively. The European Commission has been pushing for more open-source adoption in government for years. Are they all going to close their repos too?

The uncomfortable truth is that the NHS might be right about the threat but wrong about the solution. AI models don’t need your public GitHub to find vulnerabilities — they can probe running systems, analyse network responses, and reason about likely architectures from external behaviour. Hiding the code slows them down, but it also slows down every legitimate researcher trying to help.

Follow the Money: Who Benefits When Public Code Goes Dark?

There’s a cynical reading of this decision that’s worth considering. NHS England has been under enormous pressure to modernise its IT systems, and it has increasingly turned to private vendors — including Palantir, whose Federated Data Platform contract has been controversial from day one. When open-source alternatives go dark, the argument for proprietary solutions gets easier to make.

Nobody is suggesting this was the primary motivation. But in bureaucracies, security concerns have a convenient tendency to align with procurement preferences. And when the justification is “AI is too dangerous for transparency,” that’s an argument that can be stretched to cover almost anything.

The Verdict: Right Problem, Wrong Fix

The NHS is correct that AI-powered code analysis represents a genuine step change in offensive capability. Models that can reason about entire codebases, infer architectural patterns, and chain vulnerabilities together are a real and growing threat. The WannaCry attack that crippled NHS systems in 2017 and the supply chain breach in 2024 prove this isn’t theoretical.

But the response — nuking hundreds of open-source repositories with a one-day exemption window — is a panic move, not a strategy. Real security comes from hardening code, not hiding it. The NHS should be investing in AI-powered defensive tools that scan its own code the way Mythos could, in fixing vulnerabilities before attackers find them, and in treating the open-source community as an asset rather than a liability.
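That kind of self-scan doesn't require frontier AI to get started. As a minimal sketch (with illustrative patterns only; production scanners such as the open-source gitleaks ship far larger rule sets), a defender can sweep every repo for the most common leaked-credential shapes before an attacker's model does:

```python
import re

# Three illustrative leak patterns; real rule sets run to hundreds.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hardcoded_password": re.compile(
        r"""password\s*=\s*["'][^"']+["']""", re.IGNORECASE
    ),
}

def scan(text):
    """Return (rule_name, line_number) for every suspicious line."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((rule, lineno))
    return hits

# A made-up config fragment standing in for a repo file.
sample = 'db_host = "10.0.0.5"\npassword = "hunter2"\nkey = "AKIAABCDEFGHIJKLMNOP"\n'
findings = scan(sample)
```

Pointing something like this (or an off-the-shelf scanner) at every repo, public or private, addresses the actual exposure; flipping the visibility bit only addresses who gets to notice it.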

Instead, they’ve just told every attacker on the planet that there’s something worth finding in those repos — and that the NHS doesn’t believe its own code can withstand scrutiny. That’s not a security posture. That’s a confession.