Dohdoor Malware Hits US Healthcare and AI Agents Pose Security Risks
[00:00] Michael Turner: From Neural Newscast, I'm Michael Turner. It is Friday, February 27th, 2026.
[00:06] Vanessa Calderon: And I'm Vanessa Calderon. We are starting with a significant cybersecurity alert today.
[00:11] Michael Turner: Security researchers at Cisco Talos have identified a new piece of malware called Dohdoor
[00:17] Michael Turner: that's actively hitting the healthcare and education sectors here in the United States.
[00:22] Vanessa Calderon: Researchers are linking this activity to a group they track as UAT-10027.
[00:29] Vanessa Calderon: While they have low confidence in the attribution,
[00:32] Vanessa Calderon: they noted several technical similarities to the Lazarus Group and other gangs backed by North Korea.
[00:38] Michael Turner: The choice of targets is particularly concerning, Vanessa.
[00:41] Michael Turner: We've seen infections at educational institutions and even an elderly care facility.
[00:47] Michael Turner: The attackers seem to be using phishing emails to drop a Windows batch script that eventually side-loads the Dohdoor backdoor.
[00:54] Vanessa Calderon: It's a very stealthy operation.
[00:57] Vanessa Calderon: They use DNS over HTTPS via Cloudflare so that all the malicious traffic looks like standard HTTPS web browsing.
[01:06] Vanessa Calderon: This helps them bypass traditional security tools that monitor DNS requests.
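[Editor's note: the evasion described above can be sketched in a few lines. This is an illustrative example only, not the attackers' code: a DNS-over-HTTPS lookup is just an ordinary HTTPS request to a resolver such as Cloudflare's public `cloudflare-dns.com` endpoint, so tools watching plaintext DNS on port 53 never see the query. The helper name below is ours.]

```python
# Sketch: why DNS-over-HTTPS (DoH) blends in with normal web traffic.
# A DoH lookup is an HTTPS GET to a public resolver; to a network monitor
# it looks like ordinary port-443 browsing, invisible to tools that only
# inspect plaintext DNS on port 53.
from urllib.request import Request
from urllib.parse import urlencode

def build_doh_request(hostname: str) -> Request:
    """Build (but do not send) a DoH JSON query for an A record."""
    query = urlencode({"name": hostname, "type": "A"})
    url = f"https://cloudflare-dns.com/dns-query?{query}"
    # The only DoH-specific marker is this Accept header, and it is
    # encrypted inside the TLS session along with everything else.
    return Request(url, headers={"Accept": "application/dns-json"})

req = build_doh_request("example.com")
print(req.full_url)  # an ordinary-looking HTTPS URL, not a DNS packet
```

Nothing on the wire distinguishes this from a regular web request, which is exactly why the defensive tools Vanessa mentions miss it.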
[01:11] Michael Turner: While healthcare systems deal with that threat, there's another vulnerability growing inside the enterprise.
[01:18] Michael Turner: Industry leaders are warning that autonomous AI agents are creating a security Wild West,
[01:24] Michael Turner: because they have more access to systems than almost any other software.
[01:28] Vanessa Calderon: The Model Context Protocol simplifies the integration of these agents,
[01:33] Vanessa Calderon: but experts say it's currently extremely permissive.
[01:37] Vanessa Calderon: Traditional security frameworks are built around human interactions
[01:41] Vanessa Calderon: and don't yet have a defined construct for agents that can work autonomously.
[01:46] Michael Turner: It becomes a massive accountability puzzle.
[01:49] Michael Turner: If an AI agent misidentifies a user or leaks sensitive data,
[01:53] Michael Turner: the audit trail can become a labyrinth.
[01:56] Michael Turner: We're moving toward a future with hundreds of agents, each with its own identity and access levels.
[02:02] Vanessa Calderon: It's clear that the industry needs to develop concrete standards for how these bots interact.
[02:08] Vanessa Calderon: Until then, Michael, the burden is on developers to figure out which tools these agents can actually touch.
[02:15] Michael Turner: From Neural Newscast and on behalf of Vanessa, thank you for listening.
[02:19] Vanessa Calderon: And for Michael, thanks for joining us.
[02:22] Vanessa Calderon: Neural Newscast is AI-assisted and human-reviewed.
[02:25] Vanessa Calderon: View our AI Transparency Policy at NeuralNewscast.com.
[02:30] Michael Turner: Neural Newscast uses artificial intelligence in content creation
[02:34] Michael Turner: with human editorial review prior to publication.
[02:37] Michael Turner: While we strive for factual, unbiased reporting,
[02:40] Michael Turner: AI-assisted content may occasionally contain errors.
[02:43] Michael Turner: Verify critical information with trusted sources.
[02:46] Michael Turner: Learn more at neuralnewscast.com.
