It would be easy to assume that because algorithms and AIs cannot think, they are unbiased. This is not the case: algorithms and AIs can inherit the biases of their creators, chiefly through the datasets they are trained on. For example, a facial recognition algorithm trained solely on white faces may struggle to accurately identify black faces. That failure would be a frustrating problem, but it would not cost lives. Biases in healthcare algorithms, by contrast, could have immense ramifications for public health on an individual or systemic scale.
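As a minimal sketch of how this happens, consider a toy classifier trained on synthetic data in which one group vastly outnumbers another. Everything here is hypothetical, including the group names, feature distributions and performance gap; the point is simply that a model fitted mainly to the majority group tends to score worse on the under-represented one.

```python
# Illustrative sketch only (synthetic, hypothetical data): a model trained on
# an unbalanced dataset typically performs worse on the under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate toy two-feature samples for one (hypothetical) demographic group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    # Each group's true decision boundary sits at a different threshold.
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=2.0)

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group: accuracy usually drops for group B,
# because the learned boundary mostly reflects group A's data.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

Nothing about this toy example reflects how NHS systems are built; it only illustrates why auditing performance separately for each group, rather than relying on a single overall accuracy figure, can surface this kind of bias.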
The NHS in England will soon conduct a ‘world-leading pilot into Algorithmic Impact Assessments (AIAs) in healthcare to eradicate biases in algorithms.’ The pilot, designed by the Ada Lovelace Institute, will support researchers and developers in assessing the possible risks and biases of AI systems before they can access NHS data. The overarching goal of these AIAs is to increase the transparency, accountability and legitimacy of the use of AI in healthcare.
Tackling biases
Algorithms being designed and implemented now will likely underpin health and care services in the UK for years to come, so it is vital that any biases are eliminated before they become entrenched in the system. Tackling these biases early is a wise move and will pave the way for the ethical, widespread adoption of AI in the NHS.
AI has huge potential to improve public health and support NHS staff, but it could also exacerbate existing health inequalities. Innovation Minister Lord Kamall said that ‘while AI has great potential to transform health and care services, we must tackle biases which have the potential to do further harm to some populations as part of our mission to eradicate health disparities.’
This proactive approach to tackling the risks and biases in new systems showcases the UK’s commitment to ethical and patient-centred care. These systems have huge potential for benefit, and Lord Kamall commented that these AIAs will ensure the creation of ‘a system of healthcare which works for everyone, no matter who you are or where you are from.’ AIAs will be crucial in ensuring that these technologies are used to combat existing health inequalities in the UK rather than simply replicating the ones we see now.
NHS AI Lab
These AIA trials will complement the ongoing work of the NHS AI Lab. The NHS AI Lab strives to accelerate the safe and ethical adoption of artificial intelligence at scale in health and care. Brhmie Balaram, Head of AI Research & Ethics at the NHS AI Lab, said that ‘through this pilot, we hope to demonstrate the value of supporting developers to meaningfully engage with patients and healthcare professionals much earlier in the process of bringing an AI system to market.’
To ensure that best practices are embedded in future technologies, ‘the NHS will support researchers and developers to engage patients and healthcare professionals at an early stage of AI development when there is greater flexibility to make adjustments and respond to concerns.’ The impact assessments will prompt developers to more closely interrogate the legal, ethical and social impacts of future work, as well as to make necessary changes to existing systems.
Octavia Reeve added that the Ada Lovelace Institute hopes ‘that this research will generate further considerations for the use of AIAs in other public and private-sector contexts.’ This links back to the facial recognition algorithm mentioned earlier: many smartphones offer facial recognition as a convenient way to unlock your phone, yet something as commonplace as unlocking your phone remains easier with lighter skin. As a further example, we have found that the audio transcription software used for our interviews tends to transcribe middle-class male voices more accurately, which is likely indicative of the voices it was trained to identify.
Hopefully, the steps the NHS is taking to publicly and rigorously examine biases in healthcare algorithms and AI will ripple outwards into other sectors where algorithms have simply replicated the unconscious biases of their creators.
However, questions remain as to whether a society that is yet to eliminate its own biases in the real world can successfully program a wholly unbiased algorithm. Creating unbiased algorithms to support our struggling health system nevertheless remains a noble goal, and we will likely learn much about ourselves in the process.