Budget cuts at federal agencies and a deregulatory push from the Trump administration are changing the dynamics of government oversight. Instead of Washington setting the rules, state and local governments are being asked to step up.
We’re entering a new chapter in how AI is governed in the U.S.: one where responsibility is shifting away from the federal level and landing closer to home.
This shift has real implications. The previous administration emphasized transparency, fairness, and ethical guardrails for AI. That framework is now being dismantled. Without it, we could see a surge in local experimentation - some of it thoughtful, some of it not.
The upsides of decentralization are easy to imagine. Cities and counties can tailor AI to local needs, whether it’s for transportation planning or improving access to public benefits. Without a heavy federal hand, they’re free to innovate.
However, the risk of ending up with a fragmented landscape is significant. While many local agencies may deploy AI responsibly - with clear documentation and public input - others may not. Your experience with a government chatbot or automated decision tool could vary wildly based not only on your needs, but on your ZIP code.
There’s also the question of whether local governments are ready. Many are already stretched thin. Few have Chief AI Officers, let alone technology ethics teams. Yet they’re now expected to evaluate, implement, and monitor AI systems - on top of everything else. That pressure may drive some agencies to adopt off-the-shelf tools that promise efficiency, even if those tools haven’t been vetted for fairness or transparency.
Supporting responsible experimentation at the local level requires more than just handing over authority - it means equipping agencies with the tools, guidance, and connections they need to make informed decisions. Regional templates for AI governance - such as transparency checklists or bias testing protocols - can help multiple agencies align on key principles without waiting for federal action. Equally important is building channels for collaboration. Cities and counties shouldn’t have to start from scratch or reinvent best practices in isolation. Peer networks can surface patterns early, highlight risks, and share lessons from successful deployments and cautionary ones alike.
In addition, access matters. Not every local official is a machine learning expert - and they shouldn’t have to be. What’s needed are practical, user-friendly tools: dashboards that help flag risk, systems that provide visibility into how AI is being used across departments, and resources that explain AI behavior and limitations in clear language. Democratizing these tools is foundational to building thoughtful, accountable AI systems at the local level.
The heart of government innovation isn’t just code or policy - it’s public trust. Constituents want to understand how decisions are made, especially when those decisions affect housing, jobs, schools, and other social benefits. Thus, AI oversight cannot be optional. Transparency and accountability must be baked into the process, not bolted on after something goes awry.
We’re entering an era in which every local agency could become its own AI lab. This may open the door to creativity and responsiveness, but it also invites fragmentation and risk. Without coordinated oversight, we could end up with a patchwork of tools and standards that vary as much by geography as by intent. Freedom must be matched with transparent governance, shared learning, and ethical safeguards to maintain public trust and keep data secure.
Empowering state and local agencies is a strategic advantage. It allows us to leverage what the U.S. does best: decentralized problem-solving and regional ingenuity. Rather than relying on one-size-fits-all mandates, we can enable thousands of governments to tailor AI to their communities' values and needs. That’s how we preserve American dynamism and fuel responsible progress from the ground up.
However, local leadership doesn’t remove the need for national guardrails. A steady federal hand - grounded in bipartisan values such as fairness, privacy, and civil liberties - must help ensure that no community is left unprotected or politicized in the process. AI regulation should not be about top-down control, but about setting a baseline for what responsible AI looks like.
Fewer regulations should mean more responsibility - not less.