September 1, 2021 (updated 6 Sep 2021, 2:56pm)

Can AI aid policymaking?

AI can assist at every stage of the policymaking process, but ensuring it is used ethically is more important than ever.

By Cristina Lago

Would you vote for an algorithm? A slim majority of Europeans (51%) are in favour of reducing the number of parliamentarians in their country and replacing them with an “artificial intelligence algorithm”, according to a 2020 survey by Spain’s IE University.

Automated politicians may be some way off but policymakers are beginning to use AI to help design and implement policies. It is a natural fit, according to a recent article by management consultants BCG. “The foundations of policymaking – specifically, the ability to sense patterns of need, develop evidence-based programs, forecast outcomes, and analyse effectiveness – fall squarely in AI’s sweet spot,” it argues.

By their nature, however, policy decisions affect many lives. Trust in government depends on accountability and transparency in decision-making. It is especially important, therefore, that the application of AI in policymaking is fair, transparent and accountable. Before rushing to implement AI, policymakers should make sure these foundations are in place from the outset, experts told Tech Monitor.

Attributing last year’s exam result fiasco to a ‘mutant algorithm’ was “a good way of blaming something other than the policymakers”. (Photo by Chris J Ratcliffe/Getty Images)

AI applications in policymaking

The use of AI in the public sector is growing but has so far focused more on the delivery of public services than on policymaking. An assessment of AI usage by EU member states found that 38% of AI applications support public services delivery or communication with citizens, while fewer than half as many (17%) relate to the policymaking process.

Nevertheless, there are use cases for AI at every step of that process, says BCG, from identifying the need for new policy interventions, through designing and implementing them, to assessing their impact. Healthcare policymakers in the Australian state of Victoria, for example, use AI to analyse symptom prevalence in order to identify new policy requirements, BCG says. Economic development specialists in Quebec have used AI to build economic profiles of particular regions, helping to target new policy initiatives. And the UK government is using AI to estimate the impact of emissions restrictions on productivity.

There are practical and organisational hurdles that policymakers will need to overcome. BCG advises that they focus on developing the business case for AI; bolstering their operational capabilities, including their digital and data skills base; and constructing the data infrastructure required to underpin AI-driven policymaking: "One of the first steps for government officials is to free data from silos and explore external data sets, such as social media channels, that could have unique value for policymaking."

Indeed, for Helen Margetts, director of the public policy programme at the UK's Alan Turing Institute, a lack of suitable data is a significant hindrance to AI-powered policymaking, as evidenced during the pandemic. “During the pandemic, we lacked the right data to build the kind of models that would have helped us to understand the effect of interventions” such as lockdowns and school closures, she says. "We should try harder next time.”

For many experts, however, the chief challenges that policymakers face in adopting AI concern the fairness and transparency of AI-based decision-making.

Boosting transparency in AI-powered policymaking

Issues surrounding a lack of transparency in AI-backed decisions could be especially damaging in the context of policymaking. Unless the reasons for policy decisions are transparent and explainable, and accountability for their impact is made explicit, AI-powered policymaking could gravely damage trust in government – and democracy. Attempts to address these concerns are still nascent, however.

Some governments have sought to address the lack of transparency in public sector AI by applying 'algorithmic accountability' mechanisms, ranging from frameworks and guidelines to more restrictive controls. A recent assessment of these mechanisms (applied in the context of public service delivery) found that many "are based on untested claims and assumptions", however, and in many cases, there is "no clear legal framework" to enforce them. The study concludes that these mechanisms should incorporate organisational incentives or "binding legal frameworks" in order to be effective.

Technology suppliers are developing 'explainable AI' solutions that show users which factors contributed to a given output or decision, explains Rena Bhattacharyya, service director of enterprise technology and services at GlobalData Technology. This, they hope, will build confidence in their systems.
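
As an illustration of how such tools work (a minimal sketch, not any vendor's actual product), the example below uses scikit-learn's permutation importance to estimate how much each input factor contributes to a model's predictions. The dataset and feature names are hypothetical stand-ins for policy indicators.

```python
# Minimal sketch of factor-level attribution, the kind of output
# 'explainable AI' tools surface. Dataset and feature names are
# hypothetical; permutation importance is one common, vendor-neutral
# way to estimate each factor's contribution to a model's predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for policy data: each row a region, each column
# an indicator (e.g. unemployment rate, symptom prevalence).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["unemployment", "symptom_rate", "median_income", "pop_density"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each factor degrade held-out accuracy?
# Larger drops indicate larger contributions to the decision.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```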

Boosting the explainability of AI systems may require a sacrifice in their effectiveness, however. "With machine learning in general and neural networks or deep learning in particular, there is often a trade-off between performance and explainability," economist Diane Coyle wrote for the Brookings Institution think tank last year. "The larger and more complex a model, the harder it will be to understand, even though its performance is generally better."
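
Coyle's trade-off is easy to demonstrate in miniature. The sketch below, on synthetic data and illustrative only, compares a depth-two decision tree, whose entire decision logic can be printed and read, with a boosted ensemble that typically scores higher but resists the same inspection.

```python
# Illustrative only: a small, human-readable model versus a larger one
# on synthetic data, showing the performance/explainability trade-off
# Coyle describes. The size of the gap varies from task to task.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# A depth-2 tree: a handful of if/else rules a policymaker can inspect.
small = DecisionTreeClassifier(max_depth=2, random_state=1).fit(X_train, y_train)
# A boosted ensemble of 100 trees: usually stronger, far harder to read.
large = GradientBoostingClassifier(random_state=1).fit(X_train, y_train)

print("interpretable tree accuracy:", round(small.score(X_test, y_test), 3))
print("boosted ensemble accuracy: ", round(large.score(X_test, y_test), 3))
print(export_text(small))  # the small model's entire 'explanation'
```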

Furthermore, it is not just the functionality of AI systems that undermines their transparency, explains Florian Ostmann, policy theme lead in the public policy programme at the Alan Turing Institute. It may also result from commercial suppliers withholding information about how their systems work, or from a lack of technical expertise on the part of the buyer. Outsourcing arrangements can also complicate the accountability of AI-backed decisions, he adds.

Lastly, there is a danger that AI decision-making could be used by politicians as a smokescreen to avoid accountability. Margetts points to the fiasco surrounding the UK's 2020 school exam results. To compensate for the disruption of lockdowns, results were assigned by an algorithm; following a public outcry, PM Boris Johnson blamed the debacle on a “mutant algorithm”.

“It wasn’t a 'mutant algorithm' at all,” says Margetts, who fears such language will damage AI’s reputation and foster distrust. “It was a statistical method that did more or less exactly what it was asked to do by policymakers. But it was a good way of blaming something other than the policymakers.”

AI in policymaking: ensuring fairness

The potential for AI systems to entrench and legitimise inequality has been well documented. At the policymaking level, this could be disastrous.

Evidence-driven policymaking is something to aspire to, says AI ethicist and data activist Renée Cummings. "But the challenge is that in the design, development and deployment of AI, equity is often sacrificed for expediency, and AI tools that promise efficiency and effectiveness often underdeliver because of the bias and discrimination baked into many data sets."

Cummings argues that, beyond frameworks and legal guidelines, the key to ensuring that AI-backed policymaking is equitable and fair is for it to be overseen by policymakers who understand the dangers of AI bias. “To design policy that is creative, innovative, equitable and responsive to the challenges of a post-pandemic reality, we need a critical rethinking of the data being used to design AI policy," she says. "We also need policymakers and technocrats who understand the importance of ethical AI and live by it."

"We need policymakers and technocrats who understand the importance of ethical AI and live by it." Renée Cummings, University of Virginia

Ostmann agrees that carefully balancing AI with human judgement is essential. On the one hand, giving too much weight to an AI system, failing to critically examine its outputs or trusting it over human judgement, could lead to ‘automation bias’ – an over-reliance on automated systems. On the other hand, too much distrust of AI can also lead to biased outcomes. “That could lead to sort of a mixing of the systems' output with human judgments in ways that lead to a more biased decision than what you would get if you followed the system,” says Ostmann. “So it can go both ways.”

To ensure that any AI system used to support policymaking is fair, accountable and transparent, organisations must pursue the ‘ethics of responsible innovation’ from the very beginning of its development, says Margetts, as bias can creep in at the outset. The Alan Turing Institute has published a guide to ethical AI development in the public sector.

Despite the apparent appetite among Europeans, no one is suggesting that politicians will be automated any time soon. “AI will not replace policymakers,” says Cummings. “What we will see is collective intelligence, the best of human intelligence working with the most sophisticated artificial intelligence.”
