Prime Cyber Insights: The AGI Conspiracy
Welcome to Prime Cyber Insights. I'm Noah Feldman. Today, we're looking at the digital economy's most expensive ghost: artificial general intelligence, or AGI. Depending on who you ask, it's either going to colonize the stars or kill us all next Tuesday.

And I'm Sophia Bennett. While the technology itself remains hypothetical, its influence is very real. Will Douglas Heaven recently argued in MIT Technology Review that AGI has become the most consequential conspiracy theory of our time. It's a bold claim, Noah, but the parallels he draws between Silicon Valley and fringe movements are really striking.

Totally. You have people like Ilya Sutskever literally leading chants of "feel the AGI." Heaven's point is that AGI bears all the hallmarks of a conspiracy theory: a flexible scheme that survives failure, a promise of hidden truth, and a hope for salvation from a messy world.

And from an institutional perspective, the history is fascinating. In 2007, AGI was a dirty word in serious research circles. It was popularized by figures like Ben Goertzel and Shane Legg, who were essentially outliers. Now it's the North Star for Alphabet and Microsoft. But as ethics researcher Shannon Vallor points out, unlike the Internet age, the AGI age is centered on something that doesn't actually exist.

And that's where the labor and capital side gets wild. This myth is propping up the stock market. We're seeing $100 billion partnerships between OpenAI and NVIDIA for data centers that require 10 gigawatts of power, more than the output of several nuclear plants, all based on the promise of a machine that can do anything a human can do. It's a massive gamble on a definition nobody even agrees on.

Exactly. Some say AGI must be able to make coffee. Others say it just needs to make money. This lack of a clear finish line lets tech CEOs move the goalposts whenever a model like GPT-5 fails to deliver the step change the public expected. It's always just around the corner, much like the apocalyptic predictions of certain sects.

And let's talk about the doomers, like Eliezer Yudkowsky and his 99.5% chance of extinction. Even while they warn of the apocalypse, they're fueling the hype. It creates a god-like aura around the developers. If you're building something that could end the world, you must be pretty important, right?

Right. That's the "situational awareness" Leopold Aschenbrenner writes about: the idea that there are insiders who see the truth and outsiders who don't. But in the world of policy, this narrative is dangerous. It shifts the focus from immediate issues like algorithmic bias and labor displacement toward existential risk, which is far harder to regulate.

It's a distraction. If we're worried about a rogue AI five years from now, we aren't looking at the gig workers being exploited today. Heaven suggests that AGI is a techno-utopian fever dream that lets us avoid the hard work of solving real-world problems through cooperation. It's much easier to focus on a future deity than on current policy.

Ultimately, if we treat technology as a god, we stop treating it as a tool. Whether you think AGI is a conspiracy or an inevitability, the resources being poured into it are reshaping our physical and political landscape. We need to stay grounded in what the machines can actually do today, not what they might do in a dream.

Well said. That's all for this episode of Prime Cyber Insights. I'm Noah Feldman.
And I'm Sophia Bennett. Join us next time as we continue to decode the intersection of technology and power. Neural Newscast is AI-assisted, human-reviewed. View our AI transparency policy at neuralnewscast.com.
