Why Government AI Has to Be Different
As we enter 2026, the integration of AI into government services continues to accelerate. Government agencies face mounting pressure to adopt it quickly — often without asking the most important questions: Should we use AI? Is it appropriate here? Who could be harmed?
Last fall at the Service Design in Government conference, Steph Wright, head of the Scottish AI Alliance, delivered a keynote that remains deeply relevant today. She laid out a clear, ethical framework for thinking about AI in public services. Her message was not about rejecting technology, but about putting people first and protecting the foundations of equity, public trust, and accountability in government.
Our responsibility is not just to be efficient or innovative — it’s to serve people equitably, ethically, and transparently. When it comes to adopting AI in public systems, we have to ask hard questions, and we have to be willing to say no, or not yet, if the technology isn’t aligned with the public good.
This perspective is not rooted in fear or opposition to technology, but in a commitment to responsible, ethical, and inclusive public service. As Steph emphasized in her keynote, the role of government is fundamentally different from the private sector. The individuals we serve are not users or customers — they are members of the public, and our partners in ensuring that services are fair, accessible, and equitable.
The Standards Are Higher in Public Service
Commercial use of AI is driven by metrics like profit, engagement, and scalability. But public service is different. Equity, inclusion, and access must be the starting point, not an afterthought. In government, one person falling through the cracks is one too many.
AI tools are often trained on incomplete, biased, or reductive data. These tools don’t just reflect inequality — they amplify it. When used uncritically, they deepen the very divides we aim to close. And those most affected are often those with the least visibility, the least agency, and the most at stake.
Efficiency for Systems Can Mean Harm for People
Yes, AI might make a process faster. But faster for whom? If the tool increases speed at the expense of fairness, understanding, or inclusion, then is it the right tool for the job?
We should start with the problem, not the tool. Good service design begins with people, and AI should be no exception. If a solution doesn’t serve the public good, especially those most vulnerable, then it’s not a solution at all.
Responsibility Can’t Be Outsourced
When AI makes decisions — especially in critical areas like housing, social services, justice, or healthcare — who is accountable when it gets things wrong? Who answers to the people harmed? We cannot hand off public responsibility to a black-box algorithm.
Transparency isn’t just about posting technical documentation. It’s about making sure real people can understand and challenge the role AI plays in their lives. This means plain language. This means digital literacy. This means participatory design. And most importantly, this means creating systems where public auditing and accountability are possible.
Inclusion Is Not Optional
AI is not neutral. The way it’s developed, trained, and deployed reflects the priorities, assumptions, and biases of its creators. And right now, those creators rarely reflect the diversity of the people our governments serve. We have to ask: Who defines the problem? Who sets the goals? Who benefits — and who bears the burden?
Inclusion must not be treated as an add-on or a checkbox — it has to be embedded at every stage, from design to deployment. Because when we design only for the "average" user, we exclude those who need access the most. And in government, exclusion is not just inconvenient — it’s harmful.
Tech Can’t Fix Underfunding or Complexity
AI is not a magic solution. It can’t solve systemic underinvestment. It can’t navigate the messy, human complexity of people’s lives on its own. Acknowledging that isn’t pessimism — it’s responsible, grounded decision-making.
Rather than starting with the tool, we should start with the problem we’re trying to solve. What are the needs? What are the constraints? Where might technology help — and where might it harm?
Instead of asking, “How can we use AI?” a better question might be, “What outcome are we trying to achieve, and what’s the most responsible way to get there?” Sometimes AI will be part of that answer — but sometimes, the right solution will be more time, more people, or simply a better process.
Caution as Commitment
Hope and caution can and must coexist. Public trust is not something the public owes us; it’s something we must continuously earn. And that means resisting blind adoption, asking hard questions, and doing the slow, difficult work of building services that are truly for everyone.
Steph’s keynote was a powerful reminder to me that caution is not pessimism. It is responsibility. And in public service, responsibility is not optional — it’s foundational.
Jane currently serves as the principal product manager for our Department of Veterans Affairs Disability Benefits Crew contract. If you are interested in learning more about how Aquia can help your agency navigate the responsible use of AI, contact us at federal@aquia.us.
