In 1930, John Maynard Keynes warned of coming “technological unemployment”: workers displaced because our ability to economize on labor was outrunning our ability to find new uses for it. He predicted that within a century, soaring productivity could shrink the average workweek to 15 hours. Spoiler alert: we’re five years from his hundred-year horizon, and most of us are still burning through 40-to-60-hour weeks while texting ChatGPT to summarize our meetings.
Keynes wasn’t wrong to flag the dislocations new technology brings, but he was wildly off on timing and impact. Which is exactly why we should be wary when today’s public intellectuals, like Thomas Friedman in his latest New York Times column, sound similar alarms about Artificial General Intelligence (AGI) being “at our doorstep.” It’s a seductive headline, but it risks distorting how we approach one of the most consequential technologies of our time.
Let’s start with the basics: AGI, the idea of a machine with cognitive abilities matching or surpassing humans, remains a distant aspiration. We’re not “on the brink”; we’re still in the early innings of pattern recognition and predictive language models. Tools like ChatGPT and Claude have shown remarkable fluency in generating text, but their actual understanding, reasoning, and generalization remain narrow and brittle. As one leading AI researcher recently put it, “Calling these systems intelligent is like calling a calculator a mathematician.”
And even if AGI were within conceptual reach, we’re nowhere close to building the infrastructure it would require. Progress in generative AI has already begun to slow, not for lack of talent or ambition, but because of physics and economics. The computing power required to train the next generation of AI models is staggering: an exponential expansion of GPU capacity, data-center throughput, and energy draw just to push the boundaries of current systems. And that’s before we get to quantum computing, whose practical impact on AI remains speculative but could recalibrate the playing field entirely.
So yes, let’s plan. Let’s think ahead. But let’s also stay grounded in reality. Because while AGI might be the headline, it’s today’s mundane applications of generative AI that are already reshaping our world—and not always for the better.
This is the layer we urgently need to pay attention to: AI systems deployed to issue parking tickets, screen job applicants, transcribe sensitive meetings, or monitor employee behavior. These aren’t theoretical risks; they are real deployments happening now, often without clear policy, transparency, or public consent. The problem isn’t that the robots are taking over. It’s that we humans are giving AI tools too much autonomy without the right safeguards.
What Friedman gets right is the need for global coordination. The U.S., China, and others must come together not just on AGI guardrails but on regulating the everyday AI systems already embedded in the machinery of government, education, and commerce. And governance isn’t just treaties and frameworks; it’s tools, systems, and infrastructure. Just as cybersecurity evolved to protect digital networks in the age of the internet, we need an entire security stack to manage the spread of generative AI.
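To make that less abstract, here is a minimal sketch of what one narrow slice of such a stack might look like: a hypothetical policy-enforcement gateway that screens prompts before they ever reach a model. Everything here is illustrative. The rule patterns, the enforce_policy function, and the Decision type are invented for this example and describe no existing product or API.

```python
import re
from dataclasses import dataclass


@dataclass
class Decision:
    """The gateway's verdict on a single prompt."""
    allowed: bool
    reason: str


# Illustrative rules only: a real deployment would load policy from
# configuration and audit every decision, not hard-code two patterns.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # looks like a US Social Security number
    re.compile(r"\b\d{16}\b"),             # looks like a bare 16-digit card number
]
BLOCKED_TOPICS = {"employee surveillance", "automated ticketing"}


def enforce_policy(prompt: str) -> Decision:
    """Screen a prompt before it is forwarded to a generative model."""
    for pattern in PII_PATTERNS:
        if pattern.search(prompt):
            return Decision(False, "prompt contains likely PII")
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return Decision(False, f"policy blocks topic: {topic}")
    return Decision(True, "ok")


if __name__ == "__main__":
    for prompt in [
        "Summarize this meeting transcript",
        "Draft a memo on employee surveillance cameras",
    ]:
        decision = enforce_policy(prompt)
        print(f"{prompt!r} -> allowed={decision.allowed} ({decision.reason})")
```

A toy, obviously. But even a toy makes the point: these controls are ordinary software, buildable today, not something that must wait for a superintelligence treaty.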
Scaled up, that means building the GenAI equivalents of firewalls, threat detection, compliance monitors, and policy-enforcement layers. It means creating shared standards for what responsible AI use looks like across industries and borders. And it means making those tools as accessible and scalable as the AI systems they’re meant to manage. The age of superintelligence may or may not come. But the age of generative AI is already here, in our schools, our cities, and our inboxes.