AI has the potential to revolutionize public services through benefits such as predictive analytics, automated customer service, and more—but there are serious concerns to consider as well.
Wherever government services can get more efficient, they offer double value: the public gets better services as users, and more bang-for-buck as taxpayers.
Machine learning, a branch of artificial intelligence, is becoming one of the most promising sources of efficiency in the digital sector, and there are a number of clear ways it can help with large datasets and step-based services in the online public service.
However, when it comes to AI, making a service operate more efficiently may risk eroding public trust. Once lost, that trust is hard to win back, and its loss can reduce overall use of, and faith in, government services across the board, and in the government as a whole.
To avoid AI entirely would leave the digital public service falling behind, and using public money less efficiently than it could. So the challenge that now falls to governments is how to incorporate the beneficial aspects without the real risks—and potential public backlash—of the downsides.
Here are seven promising applications and seven potential pitfalls to consider:
- Predictive analytics: AI can analyze large volumes of data to identify patterns, trends, and insights. Public services work with very large volumes of data, and this increased capacity to recognize common trends (and outliers) can help them make informed decisions, figure out how best to serve everyone, and avoid waste. From traffic to healthcare, machine learning can help identify where and how to allocate resources.
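As an illustration of the kind of forecasting involved, here is a minimal sketch. The clinic-visit numbers are invented, and a simple moving average stands in for the far richer models a real analytics team would use:

```python
def moving_average_forecast(history, window=3):
    """Forecast next period's demand as the average of the last
    `window` observations -- the simplest possible predictive model,
    standing in for richer machine-learning forecasts."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Hypothetical monthly clinic visits; the forecast informs next
# month's staffing and resource allocation.
visits = [120, 130, 125, 140, 150, 160]
print(moving_average_forecast(visits))  # 150.0
```

The point is not the arithmetic but the workflow: historical service data in, a resource-allocation signal out, with a human deciding what to do with it.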
- Real-time monitoring and assessment: Similarly, AI’s volume analysis can make it easier and much faster to monitor systems’ performance and catch risks early. These systems can be anything from the aforementioned traffic flows to government service website designs to hospital scheduling to major infrastructure that needs to be monitored for safety and performance.
- Automated customer service: AI-powered chatbots and virtual assistants can handle routine inquiries, provide information, and guide citizens through various public services. This helps guide the public through complicated processes, improves efficiency, and reduces waiting times. It can allow human staff to focus on the more unique or complicated cases, or to spend more of their time on tasks other than citizen service. These bots, however, must be clearly marked and have strict controls on what direction they can provide. There must also be clear and easy ways for frustrated users to talk to a human instead.
- Finding fraud: AI algorithms can analyze vast amounts of data to detect anomalies and patterns indicative of fraudulent activities in the realms of tax evasion or falsely claiming certain benefits, business grants, and so on. There are, of course, ethical concerns to stay on top of when involving AI in anything punitive.
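To make the idea of anomaly detection concrete, here is a deliberately simple sketch using a z-score test on claim amounts. The figures and threshold are invented for illustration; production fraud systems use far more sophisticated models, and any flag should route to human review, never to automatic penalties:

```python
from statistics import mean, stdev

def flag_anomalies(claims, threshold=2.5):
    """Flag claim amounts more than `threshold` standard deviations
    from the mean -- a crude stand-in for real anomaly detection."""
    mu = mean(claims)
    sigma = stdev(claims)
    return [amount for amount in claims
            if sigma > 0 and abs(amount - mu) / sigma > threshold]

# Most claims cluster near 500; the extreme outlier gets flagged
# for a human investigator to examine, not for automatic action.
claims = [480, 510, 495, 505, 490, 500, 50000, 515, 485, 498]
print(flag_anomalies(claims))  # [50000]
```

Keeping a human in the loop between "flagged" and "penalized" is exactly the ethical safeguard the bullet above calls for.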
- Personalized services at scale: AI can analyze citizen data and preferences to provide personalized services and recommendations. This could include customized healthcare plans, personalized education pathways, and tailored government assistance programs. These things will still often require a human touch, but AI can extend this customization—or at least its early stages—to many more people without massive staffing requirements.
- Proactive help: Being able to quickly assess individual needs and situations at scale can also be used to proactively advise residents of their eligibility, or potential eligibility, for different programs. Information across departments can be quickly assessed; a change in one area of citizen-government interaction (e.g. a change of address) can trigger an automated assessment of what other forms may need updating and what eligibilities might open up for the person based on their new status. Similarly, AI’s aid in health assessments can help doctors flag issues for certain patients and can quickly clue health officials in on macro health trends in the population.
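The trigger-and-check pattern described above can be sketched in a few lines. The program names, thresholds, and resident fields here are entirely hypothetical, invented for illustration rather than drawn from any real jurisdiction:

```python
# Hypothetical eligibility rules keyed by program name; each rule
# inspects what is already on file for the resident.
RULES = {
    "school_zone_update": lambda r: r.get("has_children", False),
    "regional_heating_subsidy": lambda r: r.get("region") == "north",
    "transit_discount": lambda r: r.get("income", 0) < 30000,
}

def checks_after_address_change(resident):
    """Return the follow-up checks a change of address should
    trigger, based on the resident's existing records."""
    return [name for name, applies in RULES.items() if applies(resident)]

resident = {"region": "north", "has_children": True, "income": 45000}
print(checks_after_address_change(resident))
# ['school_zone_update', 'regional_heating_subsidy']
```

In practice the output would feed a notification to the resident ("you may now qualify for..."), not an automatic enrollment decision.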
- Process automation: AI can automate repetitive administrative tasks such as data entry, document processing, and record-keeping. The time saved frees up human resources for other, more valuable tasks. This automation can also reduce human error in some cases, though it must still be monitored so that the errors that do occur can be caught before being replicated.
- Privacy and data security: The use of AI in public services often involves collecting and analyzing large amounts of personal data. Governments must ensure strict data protection measures to safeguard citizens' privacy and prevent unauthorized access or misuse of sensitive information. They must also make sure the way AI uses data is consistent and compliant with how people were told their data would be used; for instance, an AI must not be able to re-associate respondent identities in data that was to be anonymized. Concerns about AI and surveilling or monitoring the population are also, of course, amplified when the observer is the state.
- Bias and discrimination: AI algorithms are only as good as the data they are trained on. If the training data contains biases or reflects existing inequalities, AI systems can perpetuate or amplify them. This can result in discriminatory outcomes, particularly in areas like law enforcement, hiring processes, and social services. Governments have a particular responsibility to ensure equitable treatment of residents and to not repeat historical discrimination.
- Accessibility and equity gaps: The overarching issue with incorporating AI is that it can accelerate, amplify, and obfuscate existing problems. Many of the concerns with AI are simply concerns with government conduct generally. Not only must outright bias be monitored for, but subtler and often unintentional discrimination can creep in when trying to deliver services to the masses. Are language or internet-access barriers considered? Are all demographics properly represented in the data sets AI is trained on?
- Transparency: AI systems can be complex and opaque, making it challenging to understand their decision-making processes. Lack of transparency can erode public trust, especially when AI-driven decisions impact individuals' rights or access to essential services. Governments must prioritize being able to explain—and must be accountable for—everything being done by AI systems.
- Accountability: A machine cannot be held personally accountable for a decision, which is a common concern with AI generally; it is especially pressing in the public sector, where accountability to the entire public is essential.
- Job displacement: While automating some tasks—particularly routine and repetitive ones—can make the government more efficient, it also could lead to the loss of good public sector jobs. Governments need a plan to transition workers with reskilling and upskilling programs.
- Over-reliance on AI / Risk of technical failure: Without human oversight, AI errors or glitches can have major consequences. That fallout could be especially disastrous in key areas the government handles, from healthcare to disaster response.
How can governments mitigate the pitfalls and maximize the potential of AI in public services? We further discuss some aspects of responsible AI use in this post, but one major priority is to ensure humans are observing, choosing, and vetting the things AI is being tasked with.
Governments must build in ethical frameworks and oversight teams that are diverse in terms of both knowledge and representation. Governments must also have robust data governance policies, and be transparent and accountable about whatever they task AI to do. Major institutions can also help fund and support the overall advance of responsible AI education and research.