OpenAI Uses ChatGPT to Identify Employee Leakers [Model Behavior]

This episode covers reports that OpenAI is using a custom internal version of ChatGPT to identify employees responsible for leaking confidential information. We explore the methodology, which involves cross-referencing published news articles with internal Slack channels and emails. We also discuss a significant patient victory in which Anthropic's Claude was used to audit a $195,000 hospital bill, identifying improper Medicare billing codes and securing a $163,000 reduction. The episode also highlights Anthropic's move to bring advanced file creation and skill features to free users, and Elon Musk's recent all-hands meeting at xAI, where he outlined a new organizational structure, including the 'Macrohard' project, along with long-term plans for lunar data centers and space-based AI infrastructure.

[00:00] Chad Thompson: Welcome to Model Behavior.
[00:02] Chad Thompson: Model Behavior examines how AI systems are built, deployed, and operated in real professional environments.
[00:09] Chad Thompson: Joining us today is Thatcher Collins, who provides a systems-level perspective on AI, automation, and security.
[00:17] Nina Park: Thatcher, it is good to have you.
[00:18] Thatcher Collins: Thanks, Nina.
[00:19] Nina Park: We are starting with a report from yesterday regarding OpenAI.
[00:23] Nina Park: The company is reportedly using a custom internal version of ChatGPT to identify employees who leak confidential information.
[00:31] Nina Park: According to The Information, security personnel run published news stories through the model,
[00:37] Nina Park: which has access to internal Slack logs, emails, and documents to cross-reference specific leaked details.
[00:44] Thatcher Collins: It is a clear example of using LLMs for internal telemetry analysis.
[00:49] Thatcher Collins: I mean, from a systems perspective, the ability to automate the matching of unstructured communication data against external reports is a powerful security tool.
[01:00] Thatcher Collins: However, it shifts the role of the AI from a productivity assistant to a surveillance mechanism within the corporate environment.
[01:08] Chad Thompson: Thatcher, on the consumer side, we've seen a very different use of these auditing capabilities.
[01:14] Chad Thompson: A report yesterday highlighted a patient, Matt Rosenberg, who used Anthropic's Claude
[01:19] Chad Thompson: to audit a $195,000 hospital bill.
[01:23] Chad Thompson: Claude identified that the hospital had unbundled procedures that Medicare requires to be
[01:29] Chad Thompson: billed as a single package, eventually helping negotiate the bill down to $32,000.
[01:36] Thatcher Collins: Mm-hmm.
[01:36] Thatcher Collins: Nina, it's a massive win for patient advocacy.
[01:40] Thatcher Collins: It is also worth noting that Anthropic made these specific advanced tools, including file
[01:47] Thatcher Collins: creation, connectors, and custom skills, available to all free users earlier this week.
[01:53] Thatcher Collins: They are clearly positioning Claude as a tool for high-utility document analysis
[01:59] Thatcher Collins: and specialized tasks without the need for a subscription.
[02:03] Thatcher Collins: The medical billing case is significant because it required Claude to act as a specialized auditor,
[02:08] Thatcher Collins: comparing CPT codes against federal regulations.
[02:12] Thatcher Collins: This move towards skills and tools like Claude Code
[02:15] Thatcher Collins: suggests we are transitioning from simple chatbots to autonomous agents
[02:20] Thatcher Collins: that can navigate complex bureaucratic systems.
[02:23] Chad Thompson: Turning to industry shifts, earlier this week, Elon Musk held an all-hands meeting for xAI.
[02:31] Chad Thompson: The company has now split into four specialized teams: Grok Main, Coding, Imagine, and a new simulation project called Macrohard.
[02:41] Chad Thompson: Musk also discussed long-term plans for orbital data centers and lunar satellite factories to explore deep space.
[02:50] Nina Park: Musk's vision is certainly expansive, but we are also seeing immediate practical enterprise deployments.
[02:59] Nina Park: For instance, the developer Evron recently integrated Qwen and YandexGPT into their internal systems to automate HR resume parsing.
[03:11] Nina Park: They reported a 90% reduction in manual salary lookups by using LLMs to normalize unstructured data.
[03:20] Thatcher Collins: Both Macrohard and the Evron case show that the next phase of AI is about deep integration into business infrastructure.
[03:28] Thatcher Collins: Whether it is simulating entire software firms or just cleaning up HR data, the goal is reducing manual friction in the system.
[03:37] Thatcher Collins: The focus is shifting from what the AI can say to what the AI can do within a professional workflow.
[03:44] Chad Thompson: Thank you for listening to Model Behavior, a Neural Newscast editorial segment,
[03:50] Chad Thompson: mb.neuralnewscast.com.
[03:55] Chad Thompson: Neural Newscast is AI-assisted, human-reviewed.
[03:59] Chad Thompson: View our AI transparency policy at neuralnewscast.com.
