There's a phrase we keep hearing in conversations with government teams: "We need to do something with AI."

It's almost 2026, and artificial intelligence isn't a future consideration anymore – it's an immediate priority. Departments everywhere are launching pilots, testing chatbots, and experimenting with document summarization tools. The excitement is real. The potential feels enormous.

But after the initial buzz fades, many teams find themselves stuck. They've experimented, sure. But the skills, trust, and governance needed to scale those experiments into sustainable practice? Those haven't been built yet.

It's not a lack of ambition holding people back. It's the gap between curiosity and capability.

The Pilot Trap

Here's what we're seeing across our client work and the industry: organizations launch dozens of small AI initiatives that rarely move into production. Projects generate initial excitement but lack sustained impact.

The challenges are familiar:

  • Technology before strategy. Teams invest in tools before establishing clear purpose or desired outcomes.
  • Data immaturity. Legacy systems, incomplete datasets, and privacy constraints limit what's possible.
  • Gaps in governance. Policies haven't evolved at the same pace as innovation.
  • Lack of coordination. AI initiatives aren't aligned across departments.
  • Public skepticism. Without transparency, even the best-intentioned projects can erode citizen trust.

Each of these challenges is solvable. But not with technology alone. The real solution lies in building organizational capability.

The Maturity Journey

Through our work with public sector clients, we've observed four broad stages in AI adoption. Organizations move through these gradually, building both confidence and competency along the way.

Curiosity

The exploratory phase. Teams want to see what's possible. Someone launches a pilot or tests a chatbot. Results are varied. Excitement mixes with uncertainty about next steps. AI discussions are reactive rather than strategic.

Control

Governance frameworks begin to take shape. Leaders recognize the need for ethical guidelines and risk management. AI policies get drafted, committees form, IT teams get more involved. The priority shifts to safe implementation – though caution can sometimes stall innovation.

Capability

AI stops being the shiny new toy. It gets integrated into planning, supported by data infrastructure. Skills develop across departments, not just in IT. New roles emerge, training gets implemented, and projects connect to organizational outcomes.

Confidence

AI becomes part of day-to-day work, supported by clear measurement, accountability, and continuous learning. Teams understand when and why to use AI – not just how.

Where does your organization sit on this spectrum? And what would it take to move one step forward?

Five Steps to Building AI Capability

1. Start with "Why"

Begin with the problem, not the tool. Identify specific pain points and design around outcomes rather than technology. The question isn't "How can we use AI?" It's "What problem are we trying to solve, and might AI help?"

2. Build Cross-Functional Teams

Diversity in expertise prevents bias and blind spots. AI projects aren't just for IT. Bring together policy experts, service designers, data specialists, and the people who will actually use these tools.

3. Focus on Data Readiness

AI is only as strong as the data beneath it. Prioritize data cleaning and governance as foundational work. This accelerates not just current projects, but every future use case.

4. Create Simple Governance

Caution without guardrails stalls innovation. Clear ethical frameworks, privacy standards, and review processes build confidence – and allow teams to move faster, not slower.

5. Measure Trust and Impact

Efficiency isn't the only measure of success. Citizen trust, staff adoption, and transparency are equally important metrics. Seek feedback. Publish your learnings. Treat trust as a measurable outcome.

How We Approach AI in Service Design

As service designers, the Button team works within an evolving landscape of tools – and it's our responsibility to keep investigating how to deliver the best possible experience to users. That includes exploring opportunities to use AI in our own work process.

But the technology is only useful when it's grounded in real user needs.

To incorporate AI in an ethical and useful way, we conduct deep user research. We shadow service providers during their process. We map out workflows step by step. We identify the moments where AI could genuinely support people, not just automate for automation's sake.

We ask questions like:

  • Where in the process are there opportunities for human error that AI could help prevent?
  • Where is something repetitive yet time-consuming that could be offloaded?
  • Where do staff spend mental energy on tasks that don't require human judgment?
  • Most importantly: where would AI bring the most value to both the people delivering the service and the people receiving it?

This comprehensive user research with our clients helps us identify where AI would create lasting solutions rather than temporary excitement. It's the difference between implementing technology because it exists and implementing it because it solves a real problem.

This approach aligns with what we see in the most successful government AI strategies worldwide: starting with people, not technology.

Lessons from Elsewhere

With AI transforming public services worldwide, governments have both a unique responsibility and a unique opportunity. Their mandate isn't simply innovation, but stewardship.

We looked at countries that have recently refreshed or updated their AI strategies. Some are driving investment and governance. Others are prioritizing capacity, sovereignty, and trust. Here are the standouts:

Singapore: Strategy Meets Execution

Singapore's National AI Strategy 2.0 (2023) focuses on infrastructure, data ecosystems, talent development, and responsible governance. What sets it apart is specificity: the strategy targets priority sectors like manufacturing, financial services, transport, biomedical sciences, healthcare, and education – with measurable goals for each.

Key takeaways:

  • Clear leadership and coordination across government and industry
  • Responsible AI prioritized alongside deployment
  • Major investment in talent and skills development to prepare the workforce

Finland: Education First

Finland's strategy, Leading the Way into the Age of AI (2019), emphasizes inclusive, skills-driven adaptation. They launched "Elements of AI," a free online course aimed at building AI literacy among all citizens, not just developers and businesses.

Key takeaways:

  • AI transformation positioned as everyone's journey, not just a technical one
  • Thoughtful integration of AI into public services
  • Education as the cornerstone of successful adoption

Brazil: Inclusive Development at Scale

Brazil's AI Plan (2024-2028) is structured around infrastructure and national data centres; training and capacity-building; AI applications in health, education, and public security; innovation support; and ethical, transparent governance. What's notable is the substantial budget commitment to infrastructure, innovation, and talent.

Key takeaways:

  • Large emerging economy aligning AI strategy with inclusive development goals
  • Explicit integration of ethics, transparency, and governance from the start
  • Investment backing the strategy at scale

Canada: Building on Foundations

Canada continues to invest in the Pan-Canadian Artificial Intelligence Strategy – renewing and expanding what was the world's first national AI program. With C$443 million in funding, the strategy strengthens research clusters in Montreal, Toronto, and Edmonton while accelerating AI adoption, advancing responsible AI standards, and improving access to compute infrastructure.

Moving Forward

When AI is thoughtfully implemented, it helps public servants do more of what they already do best: serve people, solve complex problems, and make information more accessible.

Capability takes time. It's built through consistent collaboration, governance, and learning, not quick wins. It's built by understanding the actual work people do and finding opportunities where technology genuinely supports them.

Curiosity starts the journey. Capability sustains it.

AI done right isn't measured by how quickly you adopt it, but by how thoughtfully you bring it into your organization. And how well it serves the people who matter most.

In our next post, we'll explore how we're thinking about AI in the grant landscape – and share details about our upcoming workshop on responsible AI operationalization. Subscribe to our newsletter to stay up to date.
