“...that government of the people, by the people, for the people, shall not perish from the earth.”
Abraham Lincoln's words, spoken at Gettysburg, urged a divided nation to honor sacrifice and fight for a democratic future. Today, in a different context, they resonate just as powerfully. As artificial intelligence continues to reshape our world, it's crucial we ensure AI serves humanity—all of humanity.

The Stanford Digitalist Papers project, modeled after the Federalist Papers, argues that AI regulation must involve us all—not just experts or policymakers, but citizens. We’re not just passive recipients of technology; we have a stake in shaping the future it creates.

This is timely.

Thought leaders like Yuval Noah Harari paint a grim picture of AI's unchecked power. Harari warns that AI could “supercharge existing human conflicts, dividing humanity against itself.” The future he envisions is one where we’re ruled by algorithms beyond our understanding or control, where even our bodies and minds could be re-engineered by non-human intelligence.

We can avoid that dystopia—but it will take deliberate effort.

The conversation about regulating AI can’t be confined to a few voices in academia or government. It’s time we ask ourselves: How do we want AI to serve us? What kind of future do we want to build?

Stanford's initiative proposes something radical: democratic deliberation. In Taiwan, citizen assemblies gather to guide policy, with regular people shaping laws that constrain the power of big tech. This model has been successful in areas like anti-fraud legislation, and it offers a blueprint for AI governance. Taiwan’s digital minister, Audrey Tang, has noted the effectiveness of these online forums in harnessing the collective intelligence of the public.

Why can’t we take this approach with AI? If we can crowdsource solutions to political deadlock, why can’t we do the same for the most transformative technology of our time?

A future where AI is governed “by the people” isn’t just idealism—it’s essential.

Leaving AI regulation in the hands of a few tech companies or governments risks turning citizens into subjects of technology, rather than its beneficiaries. AI should reflect the values, needs, and concerns of real people, not just the elite few.

Let’s not underestimate the public’s intelligence, but let’s also not overestimate their familiarity with AI’s nuances. That’s where education comes in. We need to create spaces where people can learn about AI, understand its impact, and contribute meaningfully to how it’s regulated. As Harari’s warnings highlight, if we don’t democratize AI, it may become an alien force ruling over us rather than a tool working for us.

In tech and entrepreneurship, we see successful partnerships between companies and customers to co-create solutions. AI policy should be no different. Let’s involve citizens and partner with them in shaping the future of AI. Because if we don’t, we risk building a future where technology serves the few rather than the many.

AI should enhance our lives—not govern them. That’s the standard we must hold ourselves to as we shape its future.

Interested in Learning More?