Pentagon Pressures Anthropic Over Military Access [Model Behavior]

Defense Secretary Pete Hegseth has reportedly issued a Friday deadline to Anthropic CEO Dario Amodei, demanding the company lift restrictions on military use of its AI technology. The ultimatum highlights the growing friction between Anthropic's safety-centric usage policies—which prohibit autonomous targeting and domestic surveillance—and the Pentagon's requirement for tools without ideological constraints. Simultaneously, Anthropic has released its Claude Sonnet 4.6 model for free users, offering performance levels previously reserved for its expensive Opus tier, including a 1 million token context window and enhanced computer-use capabilities. In the enterprise sector, OpenAI has formed the Frontier Alliance with major consultancy firms including McKinsey and Accenture to accelerate the deployment of agentic AI systems. These developments mark a pivotal moment in which AI safety frameworks are being directly challenged by national security priorities and rapid commercial scaling through global infrastructure partners.

[00:00] Announcer: From Neural Newscast, this is Model Behavior, AI-focused news and analysis on the models
[00:05] Announcer: shaping our world.
[00:12] Nina Park: Welcome to Model Behavior.
[00:13] Nina Park: Model Behavior examines how AI systems are built, deployed, and operated in real professional
[00:20] Nina Park: environments.
[00:21] Nina Park: Joining us today is Chad Thompson, a director-level AI and security leader with a systems-level
[00:28] Nina Park: perspective on automation, enterprise risk, and operational resilience.
[00:33] Nina Park: It's great to have you with us.
[00:35] Chad Thompson: Thank you, Nina.
[00:38] Chad Thompson: It is a critical time to be discussing the intersection of model safety and national security infrastructure.
[00:44] Nina Park: We start with reports that Defense Secretary Pete Hegseth has given Anthropic a Friday deadline to grant the military unrestricted access to its technology.
[00:57] Nina Park: Currently, Anthropic blocks its models from being used for fully autonomous targeting and domestic surveillance.
[01:05] Nina Park: Chad, this seems like a direct challenge to the safety-first identity Anthropic has cultivated.
[01:12] Chad Thompson: Exactly, Nina.
[01:13] Chad Thompson: Defense officials have warned they might designate Anthropic as a supply chain risk
[01:19] Chad Thompson: or invoke the Defense Production Act to gain authority over how the product is used.
[01:25] Chad Thompson: While competitors like OpenAI and xAI are moving toward secure military networks,
[01:33] Chad Thompson: Anthropic CEO Dario Amodei has remained firm on ethical boundaries regarding lethal force and mass surveillance.
[01:41] Chad Thompson: From a systems-level perspective, this is a classic tension between operational resilience and ethical guardrails.
[01:50] Chad Thompson: The Pentagon argues that military tools should not have built-in ideological limitations,
[01:56] Chad Thompson: while Anthropic is concerned about the systemic risks of AI-assisted dissent tracking or autonomous weaponry.
[02:06] Thatcher Collins: While that conflict unfolds in Washington, Anthropic is simultaneously making a major move in the consumer market.
[02:14] Thatcher Collins: Today's news confirms that Claude Sonnet 4.6 is now free for all users on Claude.ai.
[02:23] Thatcher Collins: This model reportedly delivers performance comparable to their flagship Opus tier, but at a much lower operational cost.
[02:32] Chad Thompson: That is a significant upgrade for free users.
[02:36] Chad Thompson: Sonnet 4.6 features a 1 million token context window and a new adaptive thinking capability
[02:43] Chad Thompson: where the model automatically decides when a problem requires deeper reasoning.
[02:48] Chad Thompson: It also shows a massive improvement on computer-use benchmarks, scoring over 72% on the OSWorld-Verified test.
[02:56] Nina Park: Right. The 1 million token window is particularly relevant for enterprise risk and audit tasks.
[03:03] Nina Park: It allows for the analysis of entire code bases or months of meeting notes in one go,
[03:10] Nina Park: which is a major leap for a model that is now essentially the entry-level experience for the public.
[03:18] Thatcher Collins: Scaling that capability is also the focus for OpenAI, which just announced the Frontier Alliance.
[03:25] Thatcher Collins: They are partnering with Accenture, BCG, Capgemini, and McKinsey to help enterprises deploy agentic AI.
[03:34] Thatcher Collins: Chad, it sounds like OpenAI is acknowledging that model intelligence alone isn't enough
[03:39] Thatcher Collins: for business transformation.
[03:41] Chad Thompson: Absolutely, Thatcher.
[03:43] Chad Thompson: The Alliance aims to solve the integration problem, connecting company data to AI agents.
[03:49] Chad Thompson: While McKinsey and BCG will focus on strategy and workflows, Accenture and Capgemini are
[03:56] Chad Thompson: handling the cloud and infrastructure side.
[03:58] Chad Thompson: It's a massive push to turn these models into functional enterprise employees.
[04:04] Chad Thompson: It certainly marks 2026 as the year the industry moves from experimentation to massive consultant-led integration.
[04:13] Nina Park: Chad, thank you for sharing your insights with us today.
[04:16] Chad Thompson: My pleasure.
[04:18] Chad Thompson: It's clear that both the public and private sectors are now testing the limits of these systems in very different ways.
[04:26] Nina Park: Thank you for listening to Model Behavior, a Neural Newscast editorial segment, mb.neuralnewscast.com.
[04:35] Nina Park: Neural Newscast is AI-assisted, human-reviewed.
[04:39] Nina Park: View our AI transparency policy at neuralnewscast.com.
[04:44] Announcer: This has been Model Behavior on Neural Newscast.
[04:47] Announcer: Examining the systems behind the story.
[04:50] Announcer: Neural Newscast uses artificial intelligence in content creation,
[04:54] Announcer: with human editorial review prior to publication.
[04:57] Announcer: While we strive for factual, unbiased reporting,
[05:00] Announcer: AI-assisted content may occasionally contain errors.
[05:03] Announcer: Verify critical information with trusted sources.
[05:06] Announcer: Learn more at neuralnewscast.com.
