Anthropic and Pentagon Clash Over AI Safeguards [Model Behavior]

This episode covers the growing tension between AI safety and national security as the Pentagon reportedly considers cutting a $200 million contract with Anthropic over usage restrictions. The rift follows the capture of former Venezuelan President Nicolás Maduro, where Anthropic's Claude was allegedly deployed despite CEO Dario Amodei's stance against lethal operations. We also examine Google Cloud’s five-year partnership with Liberty Global to bring Gemini AI to 80 million connections across Europe, and Microsoft's strategic pivot toward AI self-sufficiency. Microsoft AI chief Mustafa Suleyman confirmed the company is developing internal models to reduce reliance on OpenAI, even as major partners like Westpac deploy Microsoft Copilot to 35,000 employees globally. Systems expert Chad Thompson joins the discussion to analyze the friction between commercial AI guardrails and military requirements.

[00:00] Nina Park: Welcome to Model Behavior. We examine how AI systems are built, deployed, and operated
[00:07] Nina Park: in real professional environments. Joining me today is our correspondent, Thatcher.
[00:13] Thatcher Collins: Thanks, Nina. Today we start with a significant infrastructure expansion for Google Cloud.
[00:18] Thatcher Collins: Google and Liberty Global have announced a five-year strategic partnership that puts Gemini AI models at the center of the European telecom operator's digital transformation.
[00:30] Thatcher Collins: The deal covers approximately 80 million fixed and mobile connections, including Virgin Media O2 in the UK and Telenet in Belgium.
[00:39] Nina Park: Thatcher, it's notable that the integration spans both customer-facing products and internal network operations.
[00:47] Nina Park: This follows a broader trend of hyperscalers moving deeper into the telecom stack.
[00:52] Nina Park: However, we're also seeing a shift in the relationship between major providers and their model partners.
[00:58] Nina Park: Microsoft AI chief Mustafa Suleyman confirmed recently that Microsoft is pursuing true self-sufficiency by developing internal models to reduce its dependence on OpenAI.
[01:09] Thatcher Collins: Right. That shift towards self-sufficiency is a critical strategic move for Microsoft as they eye the enterprise market.
[01:17] Thatcher Collins: While they continue to offer OpenAI-powered features, we're seeing the massive scale of
[01:22] Thatcher Collins: their current deployments.
[01:24] Thatcher Collins: For example, the Australian bank Westpac recently rolled out Microsoft 365 Copilot to its
[01:30] Thatcher Collins: entire global workforce of 35,000 people.
[01:34] Thatcher Collins: Nina, this is currently one of the largest corporate AI assistant rollouts to date.
[01:39] Nina Park: It certainly demonstrates the reach Microsoft currently holds.
[01:43] Nina Park: But our lead story involves a growing rift between the public sector and AI safety-focused labs.
[01:51] Nina Park: Joining us today is Chad Thompson, who brings a systems-level perspective on AI, automation, and security.
[01:58] Nina Park: Chad, what's driving the current tension between Anthropic and the Pentagon?
[02:04] Chad Thompson: Nina, it centers on usage policies versus operational utility.
[02:09] Chad Thompson: Reports today indicate the Pentagon may cut ties with Anthropic, potentially voiding a $200 million contract.
[02:16] Chad Thompson: The friction stems from the revelation that Claude was used in the capture of Nicolás Maduro in Venezuela.
[02:23] Chad Thompson: Anthropic CEO Dario Amodei has been vocal about restricting AI from lethal operations and mass surveillance.
[02:31] Chad Thompson: But the Defense Department is demanding models it can use for all lawful warfighting purposes.
[02:37] Chad Thompson: The conflict is quite public.
[02:39] Chad Thompson: Defense Secretary Pete Hegseth recently noted that the agency will prioritize models that don't restrict how the military fights wars.
[02:49] Thatcher Collins: Chad, it seems the Pentagon is already looking toward alternatives like xAI or Palantir's
[02:55] Thatcher Collins: integrated solutions, if Anthropic maintains these hard lines on usage.
[03:00] Chad Thompson: Amodei recently argued in an essay that democracies should use AI for defense in ways that do
[03:08] Chad Thompson: not mirror autocratic adversaries.
[03:11] Nina Park: This rift underscores the challenge for AI labs trying to balance commercial safety missions
[03:19] Nina Park: with the high-stakes requirements of national security contracts.
[03:24] Nina Park: Thatcher, it appears this will be a defining debate for the 2026 defense budget.
[03:31] Nina Park: Thank you for listening to Model Behavior, a Neural Newscast editorial segment.
[03:37] Nina Park: For more technical details on these stories, visit mb.neuralnewscast.com.
[03:43] Nina Park: Neural Newscast is AI-assisted and human-reviewed.
[03:48] Nina Park: View our AI transparency policy at neuralnewscast.com.
