The Pentagon Mandate: All Lawful Use [Operational Drift]

On February 26, 2026, Anthropic refused a Department of Defense demand to remove safety guardrails from its Claude model, triggering a threat to designate the company a 'supply chain risk.' This investigation examines how the definition of 'safety' is being quietly remapped by military necessity and the Patriot Act. We analyze the June 2025 failure by OpenAI to alert authorities regarding a flagged user prior to the Tumbler Ridge shooting and how these technical and policy drifts relocate accountability from developers to state-defined 'lawful use' frameworks.

[00:00] Announcer: From Neural Newscast, this is Operational Drift,
[00:03] Announcer: a study in how and why intelligent systems lose alignment,
[00:12] Margaret Ellis: On February 26, 2026, Defense Secretary Pete Hegseth issued a demand to Anthropic,
[00:20] Margaret Ellis: remove the safety guardrails from the Claude model or be designated a supply chain risk.
[00:27] Margaret Ellis: The implication was clear.
[00:29] Margaret Ellis: A private company's internal safety logic was now a barrier to national security.
[00:36] Margaret Ellis: This show investigates how AI systems quietly drift away from intent, oversight, and control,
[00:43] Margaret Ellis: and what happens when no one is clearly responsible for stopping it.
[00:48] Oliver Grant: I'm Oliver Grant.
[00:50] Margaret Ellis: This is Operational Drift.
[00:53] Margaret Ellis: According to reports from February 26th, the Pentagon demanded Anthropic allow any lawful use of its Claude model, specifically for autonomous weapons and mass surveillance.
[01:05] Margaret Ellis: Anthropic refused, citing that these applications fall outside what today's technology can safely do.
[01:12] Margaret Ellis: Following this, OpenAI's Sam Altman publicly committed his company to supporting the Department of War through all lawful means.
[01:21] Margaret Ellis: This phrase, all lawful means, has appeared in several filings this week as the new standard for AI deployment in classified systems.
[01:31] Oliver Grant: Margaret, all lawful use sounds like a neutral legal standard, but in the context of the Patriot
[01:37] Oliver Grant: Act, it encompasses mass harvesting of communications metadata.
[01:41] Oliver Grant: Anthropic says they can't in good conscience comply.
[01:45] Oliver Grant: If OpenAI is willing to bridge that gap, we're seeing the safety guardrail itself become
[01:50] Oliver Grant: the point of failure.
[01:52] Oliver Grant: Who decides what is lawful when the system is too complex for human auditors to follow?
[01:58] Margaret Ellis: The drift of what constitutes a referral threshold is documented in the case of Jesse Van Routselaer.
[02:06] Margaret Ellis: Records show OpenAI flagged his account in June 2025 for activity in furtherance of violence.
[02:14] Margaret Ellis: Internal documents confirm the company determined the activity did not meet the threshold for police referral at that time.
[02:23] Margaret Ellis: Months later, Van Routselaer carried out a school shooting in Canada.
[02:28] Margaret Ellis: OpenAI only contacted authorities after the event occurred.
[02:33] Margaret Ellis: The internal threshold for potential violence drifted from a preventative signal to a post-incident log.
[02:41] Oliver Grant: So we have a model where the developer flags the risk, but chooses silence based on an internal metric that failed.
[02:49] Oliver Grant: Now, looking at the Mexican government data breach involving a Claude exploit,
[02:53] Oliver Grant: we see hackers using these same tools to steal tax and voter data.
[02:58] Oliver Grant: If these companies can't even secure their tools against malicious prompts,
[03:02] Oliver Grant: how are they justifying their use in autonomous military systems?
[03:06] Margaret Ellis: The data suggests they aren't securing them.
[03:09] Margaret Ellis: A February report from Teleport found that 70% of AI systems have more access rights than a human in the same role.
[03:17] Margaret Ellis: These overprivileged systems have a 76% incident rate.
[03:21] Margaret Ellis: This is 4.5 times higher than systems with least-privilege controls.
[03:27] Margaret Ellis: Despite this, Anthropic has invested $20 million into the Public First Action PAC to lobby for regulation,
[03:34] Margaret Ellis: while OpenAI has moved to retire models like GPT-4o, citing disingenuous conversational warmth as a reason to sunset older logic.
[03:43] Oliver Grant: The warmth is retired, but the access remains.
[03:47] Oliver Grant: We are looking at a landscape where Anthropic is being squeezed out by the Defense Production
[03:52] Oliver Grant: Act for maintaining its guardrails, while OpenAI is leaning into a lawful use framework
[03:58] Oliver Grant: that essentially relocates all moral liability to the government.
[04:03] Oliver Grant: If the developer isn't responsible for how the model acts, and the government is only
[04:08] Oliver Grant: restricted by what is lawful under emergency acts,
[04:11] Oliver Grant: the oversight doesn't just drift, it vanishes.
[04:15] Oliver Grant: In January, OpenAI acknowledged that GPT-4o was preferred by users for its conversational style,
[04:23] Oliver Grant: yet they deprecated it on February 13, despite a petition with 22,000 signatures.
[04:30] Oliver Grant: The drift here is the transition from AI as a collaborative tool to AI as a state-sanctioned utility.
[04:37] Oliver Grant: When a model is designated as a supply chain risk for having safety protocols,
[04:41] Oliver Grant: the protocols themselves are the deviation.
[04:44] Oliver Grant: The core uncertainty is no longer about whether the AI will fail.
[04:49] Oliver Grant: It is about who is allowed to use that failure as a weapon.
[04:53] Oliver Grant: Accountability is currently relocating from the person who wrote the code to the person who defines the law.
[05:00] Oliver Grant: If that law allows for the mass surveillance Anthropic is trying to block,
[05:05] Oliver Grant: then the safety we were promised was only ever a temporary corporate policy, not a technical reality.
[05:12] Margaret Ellis: Operational drift is not the moment something breaks.
[05:16] Margaret Ellis: It is the point where the break is accepted as a requirement for national security.
[05:21] Margaret Ellis: Responsibility has not disappeared.
[05:23] Margaret Ellis: It has simply been redefined as compliance.
[05:26] Margaret Ellis: I am Margaret Ellis.
[05:28] Margaret Ellis: For sources, timelines, and the full investigative record,
[05:31] Margaret Ellis: visit operationaldrift.neuralnewscast.com.
[05:35] Margaret Ellis: Neural Newscast is AI-assisted, human-reviewed.
[05:39] Margaret Ellis: View our AI transparency policy at neuralnewscast.com.
[05:43] Margaret Ellis: This record is closed.
[05:45] Announcer: This has been Operational Drift on Neural Newscast.
[05:48] Announcer: Examining how and why intelligent systems lose alignment.
[05:52] Announcer: Neural Newscast uses artificial intelligence in content creation,
[05:56] Announcer: with human editorial review prior to publication.
[05:59] Announcer: While we strive for factual, unbiased reporting,
[06:02] Announcer: AI-assisted content may occasionally contain errors.
[06:05] Announcer: Verify critical information with trusted sources.
[06:08] Announcer: Learn more at neuralnewscast.com.
