Mistral Forge and Google’s New Gemini Tiers [Model Behavior]

This episode of Model Behavior examines the strategic divergence between Mistral and Google following significant announcements at Nvidia GTC and via product updates. We break down the launch of Mistral Forge, a platform enabling enterprises to train custom AI models from scratch using proprietary data—a move aimed directly at corporate sovereignty. We also analyze Google's comprehensive rebranding of its subscription services into the AI Plus, Pro, and Ultra tiers, which introduces sophisticated reasoning models and expanded hardware integration for Gemini across the Google Home ecosystem.

[00:04] Nina Park: I'm Nina Park. Welcome to Model Behavior. On this program, we examine the construction and operational deployment of AI systems within professional environments.
[00:15] Nina Park: Today is March 18, 2026, and we are tracking several major shifts in how the industry's most prominent players are positioning their latest frontier models.
[00:26] Thatcher Collins: And I'm Thatcher Collins.
[00:28] Thatcher Collins: Yesterday at the NVIDIA GTC conference, we observed a significant strategic pivot in how European AI is courting the enterprise sector, Nina.
[00:39] Thatcher Collins: Specifically, we are seeing a renewed emphasis on data sovereignty and localized infrastructure as startups move to compete with the dominant American cloud providers.
[00:50] Nina Park: The primary focus of that shift is the French developer Mistral.
[00:54] Nina Park: They recently announced Mistral Forge, a platform designed to allow corporations to build bespoke models
[01:00] Nina Park: trained specifically on their own proprietary documents and internal workflows.
[01:06] Nina Park: This represents a departure from standard Retrieval-Augmented Generation, or RAG, which simply pulls in data at the moment of the query.
[01:14] Nina Park: Mistral claims Forge allows companies to train models from the ground up using their updated open-weight library, which now includes the flagship Mistral Small 4.
[01:26] Nina Park: By moving the training process in-house, they are promising a level of performance and security
[01:32] Nina Park: that multi-tenant cloud systems often struggle to guarantee.
[01:36] Thatcher Collins: I want to look at the friction inherent in that approach, Nina.
[01:40] Thatcher Collins: Traditionally, enterprises have favored fine-tuning because full-scale training from scratch
[01:45] Thatcher Collins: is notoriously resource-intensive and requires high-level technical expertise that many firms simply don't have.
[01:53] Thatcher Collins: Mistral is attempting to lower that barrier by embedding forward-deployed engineers directly within client teams.
[02:01] Thatcher Collins: It is a labor-intensive strategy we've seen successfully utilized by firms like IBM and Palantir.
[02:08] Thatcher Collins: With a target of $1 billion in annual recurring revenue this calendar year,
[02:12] Thatcher Collins: they are betting that industrial giants, such as the semiconductor leader ASML,
[02:18] Thatcher Collins: are seeking a degree of control and privacy that the current OpenAI ecosystem does not provide.
[02:25] Nina Park: It certainly signals a move away from the general consumer spotlight currently occupied by Anthropic and OpenAI.
[02:32] Nina Park: Transitioning to subscription models, Google also moved to reorganize its consumer ecosystem yesterday.
[02:38] Nina Park: They have rebranded Google One AI Premium to Google AI Pro
[02:43] Nina Park: and introduced a new higher-tier offering called Google AI Ultra.
[02:47] Nina Park: This appears to be a consolidation effort aimed at simplifying their messaging
[02:51] Nina Park: as their AI services expand across different hardware platforms.
[02:55] Thatcher Collins: The nomenclature is becoming quite dense, Nina.
[02:59] Thatcher Collins: The AI Pro tier remains $20 monthly, but it now features a 1 million token context window
[03:06] Thatcher Collins: and expanded usage limits for their high-reasoning thinking models.
[03:11] Thatcher Collins: What is particularly notable is the hardware integration.
[03:15] Thatcher Collins: Google Home Premium subscribers are now receiving Gemini Live on Nest devices.
[03:21] Thatcher Collins: This facilitates much more natural language automation and allows the system to maintain household-specific memory,
[03:28] Thatcher Collins: which could change how users interact with their environments on a daily basis.
[03:33] Nina Park: It appears Google is pursuing total vertical integration.
[03:38] Nina Park: From the multimodal reasoning of Project Mariner and Genie to the 30 terabytes of storage offered in the Ultra tier,
[03:47] Nina Park: they are positioning themselves as an all-encompassing service provider.
[03:51] Nina Park: While Mistral builds specialized sovereign tools for entities like the European Space Agency,
[03:58] Nina Park: Google is attempting to make Gemini the foundational invisible operating system
[04:04] Nina Park: for both the private home and the professional inbox.
[04:08] Thatcher Collins: Two very different paths for scaling model utility.
[04:13] Thatcher Collins: Thank you for listening to Model Behavior.
[04:16] Thatcher Collins: You can find more detail on these updates at mb.neuralnewscast.com.
[04:22] Thatcher Collins: Neural Newscast is AI-assisted, human-reviewed.
[04:26] Thatcher Collins: View our AI Transparency Policy at neuralnewscast.com.
