Anthropic Faces Pentagon Ultimatums and Acquisition [Model Behavior]
[00:00] Announcer: From Neural Newscast, this is Model Behavior, AI-focused news and analysis on the models
[00:05] Announcer: shaping our world.
[00:08] Nina Park: Welcome to Model Behavior.
[00:14] Nina Park: We examine how AI systems are built and operated in professional environments.
[00:20] Thatcher Collins: Joining us today is Chad Thompson, a security leader with a systems-level perspective on
[00:25] Thatcher Collins: automation and enterprise risk.
[00:28] Thatcher Collins: Chad, great to have you.
[00:29] Chad Thompson: It is a pleasure to be here, Thatcher.
[00:32] Thatcher Collins: We are seeing a critical inflection point where corporate safety policies are colliding directly with national security directives.
[00:41] Nina Park: That is the lead story today.
[00:44] Nina Park: Defense Secretary Pete Hegseth has threatened to blacklist Anthropic, demanding it remove safety standards that prevent Claude from being used for autonomous weapons or mass surveillance.
[00:56] Nina Park: Chad, how does this threat to invoke the Defense Production Act change the risk profile for enterprise AI?
[01:04] Chad Thompson: It creates massive uncertainty.
[01:07] Chad Thompson: If the government uses the DPA to force a company to produce a version of its model without built-in safety limits,
[01:14] Chad Thompson: it undermines the very constitutional framework Anthropic built.
[01:20] Chad Thompson: It moves from a partnership to a forced requisition of intellectual property.
[01:25] Thatcher Collins: Nina, I have to point out that the Pentagon spokesperson, Sean Parnell, claims this woke
[01:31] Thatcher Collins: AI narrative is fake.
[01:33] Thatcher Collins: He argues the DOD just wants the same lawful-purpose access that OpenAI and xAI have already
[01:41] Thatcher Collins: granted.
[01:42] Thatcher Collins: Why should Anthropic be allowed to dictate terms to the military?
[01:46] Nina Park: Thatcher, absolutely.
[01:48] Nina Park: Dario Amodei's argument is that these models are simply not reliable enough for autonomous
[01:54] Nina Park: lethality.
[01:54] Nina Park: He says he cannot, in good conscience, accede to requests that might lead to a drone army
[02:01] Nina Park: operated without human oversight.
[02:04] Nina Park: It is a fundamental disagreement on the maturity of the technology.
[02:09] Thatcher Collins: While that standoff continues, Anthropic is moving fast on the commercial side.
[02:14] Thatcher Collins: They recently acquired Vercept, a Seattle startup focused on computer use agents,
[02:20] Thatcher Collins: and updated Claude Cowork with plugins for HR and finance.
[02:25] Thatcher Collins: Chad, these plugins caused a massive sell-off in legacy software stocks earlier this month.
[02:31] Thatcher Collins: Is the market overreacting?
[02:32] Chad Thompson: The market is reacting to the threat of displacement.
[02:36] Chad Thompson: When Claude can live inside Excel or PowerPoint and perform legal or financial analysis directly,
[02:43] Chad Thompson: the value of specialized research tools from firms like FactSet or RELX is questioned.
[02:50] Chad Thompson: It is about whether Anthropic remains a platform or becomes the product that eats the workflow.
[02:56] Nina Park: We also saw Perplexity enter this space yesterday with Computer.
[03:00] Nina Park: It's an agentic system that coordinates different models like Claude 4.6 and Gemini for complex workflows.
[03:07] Nina Park: Chad, this seems like a more controlled response to the viral OpenClaw tool we saw recently.
[03:13] Chad Thompson: Exactly, Nina.
[03:14] Chad Thompson: Perplexity is trying to button up the agent experience by running tasks in the cloud rather than on local machines.
[03:22] Chad Thompson: It is a safer, walled-garden approach compared to the security vulnerabilities we saw with
[03:28] Chad Thompson: the unregulated OpenClaw plugins.
[03:31] Nina Park: Before we wrap, we should note Google also released Nano Banana 2 today,
[03:35] Nina Park: officially known as Gemini 3.1 Flash Image, with advanced world knowledge.
[03:40] Nina Park: Thank you for listening to Model Behavior, a Neural Newscast editorial segment.
[03:44] Nina Park: Visit mb.neuralnewscast.com.
[03:47] Nina Park: Neural Newscast is AI-assisted, human-reviewed.
[03:50] Nina Park: View our AI Transparency Policy at neuralnewscast.com.
[03:54] Announcer: This has been Model Behavior on Neural Newscast.
[03:57] Announcer: Examining the systems behind the story.
[04:00] Announcer: Neural Newscast uses artificial intelligence in content creation,
[04:03] Announcer: with human editorial review prior to publication.
[04:07] Announcer: While we strive for factual, unbiased reporting,
[04:09] Announcer: AI-assisted content may occasionally contain errors.
[04:13] Announcer: Verify critical information with trusted sources.
[04:16] Announcer: Learn more at neuralnewscast.com.
