AI at a Crossroads: What the Biden-to-Trump Administration Change Could Mean for State and Local Agencies

By Noam Maital

The federal approach to AI policy is shifting, and state and local agencies may want to fasten their seatbelts.

For years, the federal government has shaped the conversation on artificial intelligence in public administration, driven by concerns over ethics, data privacy, and transparency. Under the Biden administration, the Office of Management and Budget (OMB) set ambitious guidelines to ensure responsible AI adoption, urging agencies to integrate AI ethically and guard against unintended bias. Among these guidelines was a requirement that federal agencies designate Chief AI Officers responsible for steering each agency’s AI strategy.

However, not all of these policies have been practical for local public agencies. For federal agencies with dedicated resources, the OMB guidance was a way to capture AI’s benefits while getting ahead of its risks. But for state, county, and city agencies, which operate with smaller budgets and fewer personnel, meeting federal expectations can feel overwhelming. This divide has often left local CIOs and IT teams with more questions than answers about how to take high-level federal policies, such as eliminating data bias in AI, and make them workable in smaller, resource-limited environments.

Biden’s Blueprint: Standards Without Specifics

Under Biden’s approach, federal AI policy focused on broad, foundational goals, such as transparency, security, and ethical use. The objective was to build a culture of responsible AI use that prioritized public trust. While these principles are essential, they are also high-level, offering limited guidance on how to operationalize them. The OMB’s directive to avoid “data bias,” for example, is challenging for CIOs and IT departments managing dozens of AI-driven tools without dedicated resources to track and monitor for bias.
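
To make that concrete, consider what even a basic bias check involves. The sketch below, in Python, compares favorable-outcome rates across demographic groups for a single AI-assisted decision system and flags large gaps. It is a minimal illustration of the kind of monitoring the directive implies; the column names, data, and threshold are hypothetical, not a prescribed method.

```python
# Minimal sketch of a disparity check for one AI-assisted decision
# system. Column names ("group", "approved") and the 0.8 threshold
# are hypothetical; a real audit needs domain and legal review.
import pandas as pd

def disparity_report(df: pd.DataFrame,
                     group_col: str = "group",
                     outcome_col: str = "approved",
                     threshold: float = 0.8) -> pd.DataFrame:
    """Flag groups whose favorable-outcome rate falls below
    `threshold` times the best group's rate (an informal analogue
    of the four-fifths rule used in employment contexts)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    ratios = rates / rates.max()
    return pd.DataFrame({
        "favorable_rate": rates,
        "ratio_to_best": ratios,
        "flagged": ratios < threshold,
    })

# Hypothetical decision log from one AI-driven tool.
log = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1,   1],
})
print(disparity_report(log))
```

Even a report this simple has to be produced and interpreted for every tool an agency runs, which is exactly where lean IT teams hit a wall.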

The Biden administration’s policies are valuable as a framework for ethical AI use, but they can feel out of reach for smaller agencies. While they succeeded in outlining broad standards, local public agencies were often left searching for practical steps to implement them. Appointing a Chief AI Officer, for instance, is feasible for a federal agency with thousands of employees; for local agencies working with lean teams, the same expectation often proved more aspirational than actionable.

Trump’s Take: Speed and Autonomy Over Structure

With Trump at the helm, federal AI policy is likely to pivot toward flexibility and rapid adoption. Trump’s stance on technology has generally favored innovation with fewer restrictions, empowering local agencies to chart their own AI paths without federal oversight dictating the specifics. This approach is expected to open doors for agencies eager to deploy AI solutions without being bound by strict national standards.

This freedom could mean faster rollouts, as agencies adopt AI applications tailored to their needs, whether in traffic management, predictive analytics, or resource allocation, without navigating federal compliance. But this less-structured approach also comes with potential downsides. Without a unified framework, each agency’s approach to AI will vary, and the absence of consistent guidelines could make issues like privacy, ethics, and data security harder to manage.

Trump’s model of “letting agencies lead” could thus create a landscape of innovation without universal standards. Agencies might rapidly advance their own solutions, but they could also face the risk of ethical and operational inconsistencies that would have been mitigated under a more standardized approach.

A Time of Growth and Opportunity for Local AI Governance

If, as expected, the incoming administration shifts more power back to local public agencies, those organizations will gain greater autonomy over their AI adoption. Depending on one’s perspective, this is either a boon or a challenge. On the one hand, public agencies are typically more in tune with the needs of their communities, given their proximity and smaller size, and that close connection can make AI deployments more responsive to the public’s actual needs. On the other hand, many public agencies lack the internal resources, specialized tools, and workforce necessary to implement robust AI governance on their own. Without federal frameworks, they may struggle to put in place the guardrails often required to scale AI effectively and responsibly.
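
One guardrail that requires neither a federal mandate nor a large staff is a plain inventory of where AI is in use. As a hedged illustration, the Python sketch below shows what a minimal use-case record might capture; the field names are assumptions, loosely inspired by the federal AI use-case inventories, not a required schema.

```python
# Illustrative sketch of a minimal AI use-case inventory record a
# small agency might keep. Field names are assumptions, not a
# mandated schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCase:
    name: str                 # e.g., "Traffic signal timing optimizer"
    department: str           # owning department
    vendor: str               # vendor name, or "in-house"
    purpose: str              # plain-language description
    uses_personal_data: bool  # True should trigger a privacy review
    affects_rights: bool      # touches benefits, safety, or rights
    last_reviewed: date       # date of the most recent human review
    risks_noted: list[str] = field(default_factory=list)

    def review_overdue(self, max_age_days: int = 365) -> bool:
        """Flag records whose last review is older than max_age_days."""
        return (date.today() - self.last_reviewed).days > max_age_days

# Example entry for a hypothetical deployment.
signal_ai = AIUseCase(
    name="Traffic signal timing optimizer",
    department="Public Works",
    vendor="in-house",
    purpose="Adjusts signal timing from sensor data to ease congestion",
    uses_personal_data=False,
    affects_rights=False,
    last_reviewed=date(2024, 6, 1),
)
print(signal_ai.review_overdue())  # True once the review is a year old
```

Kept up to date and shared among neighboring agencies, even a registry this simple begins to resemble the collaborative governance described below.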

In a best-case scenario, this freedom will encourage collaboration among local public agencies, fostering a network of shared knowledge, best practices, and AI governance frameworks tailored to the realities of local governance. That approach could build a resilient, adaptable community of public-sector AI users who learn from each other’s experiences. If, on the other hand, agencies each go it alone, the landscape could become a “wild west” of AI adoption, with a patchwork of different standards and regulations. Agencies may establish their own guardrails, leading to inconsistencies that confuse citizens and create compliance risks.

The future of AI governance in public agencies stands at a crossroads. The coming years will test local leaders’ ability to balance innovation with responsibility and to build a decentralized but cohesive approach to AI that reflects their communities’ unique needs and capabilities. Whether through shared standards or diverse practices, the goal remains the same: leveraging AI to enhance public services while ensuring its ethical and effective use. The journey may be less structured, but with thoughtful leadership, it promises to bring AI closer to communities in meaningful and transformative ways.
