
Involving the public in complex questions around artificial intelligence research

Published: 14 June 2019

Artificial intelligence (AI) is reaching our daily lives through chatbots, self-driving cars, and decision-making algorithms. It is also making its way into the NHS, and promises to create a step change in clinical decision making. Some people have even suggested that AI diagnostic tools might replace radiologists in the future.

But like any new technology, AI can also introduce new risks to patients. The algorithms behind these tools are so complex that we often cannot understand why a particular decision was made – they’re a ‘black box’.

With issues that are so complex – both technically and ethically – can we still involve the public, and give them a say in public policy? Collaborating with the Information Commissioner's Office (ICO), the NIHR Greater Manchester Patient Safety Translational Research Centre recently ran two citizens' juries in Coventry and Manchester to involve the public in conversations about AI in healthcare, specifically the need for so-called ‘explainable’ AI.

In a citizens’ jury, a cross-section of the public (representative of the population in terms of age, gender, ethnicity, education, and employment status) is recruited and paid to tackle a public policy question. The jury meets for several days and is provided with reliable, impartial information from expert witnesses. The jury members question the experts and work together in small group discussions.

Our juries explored the trade-off between AI transparency and AI performance: specifically, whether an explanation for AI decisions that affect people’s lives should always be provided, even if that results in less accurate decisions. Such an explanation would enable someone to understand a decision, and possibly contest it, without needing to understand the technicalities of the ‘black box’.
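To make that trade-off concrete, here is a minimal illustrative sketch (not drawn from the jury materials) using scikit-learn on synthetic data. It contrasts a shallow decision tree, whose rules can be printed and explained, with a larger random-forest ‘black box’ that is often more accurate but opaque; the dataset and model settings are assumptions chosen purely for demonstration.

# Illustrative sketch only: contrasts an explainable model with a 'black box'
# on synthetic data. All parameters are arbitrary choices for demonstration.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a decision problem (hypothetical data, not clinical).
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Explainable model: a shallow tree whose decision rules can be printed.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# 'Black box': a large ensemble that is usually more accurate but opaque.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print("Shallow tree accuracy :", accuracy_score(y_test, tree.predict(X_test)))
print("Random forest accuracy:", accuracy_score(y_test, forest.predict(X_test)))

# The tree's rules are a human-readable account of every decision it makes.
print(export_text(tree, feature_names=[f"feature_{i}" for i in range(8)]))

On toy data like this the forest will usually edge out the tree on accuracy, while only the tree can show, step by step, how each decision was reached – which is exactly the tension the juries were asked to weigh.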

Initially the jury members felt that AI was potentially eroding society by putting people out of work, that its reliance on data made it susceptible to hacking, and that ‘the robots might take over’. If AI cannot be explained, what will happen if it starts going wrong? And would we even know? During their discussions the juries soon realised that AI was already in use, from deciding who should be granted loans (one of the jurors worked in the banking industry) to dating apps that match people based on the information they provide.

Alongside the negatives, they also recognised the benefits of AI: freeing up more leisure time, increasing profitability, and avoiding human error. One jury member said: "The process made us recognise the speed at which AI technology is developing and how it will continue to influence all areas of our lives”.

Four scenarios were introduced: two in healthcare settings (one using AI to diagnose stroke, the other to find potential matches for a kidney transplant) and two in non-health settings (one screening job applications to determine who should be interviewed, the other in criminal justice, where AI would determine who should be offered a rehabilitation programme rather than the usual court procedure). Information on the scenarios was provided by 'expert witnesses', often in person and sometimes by pre-recorded video or live video link.

At the end of each scenario the jury members independently voted online on how important it was to receive an explanation of an automated decision, and on how much it mattered to them not to know how a decision had been reached.

The juries concluded that whether an explanation for an AI decision is required depends on the context. In the two healthcare scenarios, both juries strongly favoured accuracy over explanation. “It is not essential to provide an explanation for an automated decision where it is a matter of life or death,” one member said. The speed afforded by AI was viewed as essential in stroke diagnosis, but not in kidney transplant matching.

The results from the other two scenarios were different, with the juries recognising how important it is for individuals to receive an explanation of decisions made about them. For instance, they argued that an explanation must be available in order to prove there is no bias in the criminal justice system. At the same time, a majority of jury members felt that, in general, automated decisions need not be explained to individuals in contexts where human decisions would not usually be explained.

As part of its AI Sector Deal, the government has tasked the ICO to work with the Alan Turing Institute (the national institute for data science and AI) to produce guidance for organisations on explaining AI decisions to the people affected by them. The citizens’ juries were an essential part of the research conducted by the ICO and will play a key role in developing this guidance.

This initiative was commissioned by the NIHR Greater Manchester Patient Safety Translational Research Centre (based at the University of Manchester) and the Information Commissioner’s Office (ICO). It was carried out by Citizens’ Juries c.i.c., a social enterprise dedicated to designing and running citizens’ juries in partnership with the Jefferson Center, a US-based charity that developed the citizens’ jury method.
