BY Shadine Taufik

Healthcare

Preventing Bias in Healthcare AI

Technology can promote more equal systems, but AI trained on biased data perpetuates those biases. In healthcare AI, this becomes dangerous.

NOVEMBER 03, 2021


In a utopian world, the biases and prejudices of humans no longer affect integral functions of society like education, employment, policing, and healthcare. Especially in the latter two, the consequences of discrimination could be fatal.

In a public workshop held by the US Food and Drug Administration (FDA) last month, Jeff Shuren, director of the FDA’s Center for Devices and Radiological Health, called for ‘better methodologies for identification and improvement of algorithms prone to mirroring “systemic biases” in the healthcare system and the data used to train artificial intelligence and machine learning-based devices’.

He continued: ‘It’s essential that the data used to train [these] devices represent the intended patient population with regards to age, gender, sex, race and ethnicity’.

It was stressed that clinical trials need to make more of an effort to include racially and ethnically diverse populations.

The virtual meeting took place nine months after the FDA released an action plan for creating regulations for an AI/ML-based Software as a Medical Device (SaMD).

This sentiment is rooted in the worry that AI could further entrench inequalities across sectors through biased data and programming. Though technology, at its core, is a blank slate, impartial to the politics of human societies, the unconscious biases of its human creators can easily transfer into artificial intelligence systems.

History of bias in healthcare

Healthcare has a long history of racism, misogyny, homophobia, and transphobia. The quick categorisation instincts that helped our early ancestors distinguish dangerous parties from friendly ones have carried into the 21st century, putting many marginalised individuals at risk.

Though generalisation may speed up treatment, it can also lead to misdiagnosis and incorrect treatment, leaving patients feeling unheard. Stereotypes rooted in gender, age, race, and sexual identity and preference can cloud the identification of serious ailments. Even where such assumptions reflect real prevalence rates, doctors should set them aside and focus on the individual’s specific symptoms to minimise blind spots – stereotypes make doctors view their patients as mere statistics rather than unique, idiosyncratic people.

Some alarming statistics show how grave bias in healthcare can be. One analysis found that black patients were 41% less likely than white patients to receive pain medication, and that pain was overestimated in 18.9% of white patients compared to 9.5% of black patients.

These assumptions can also hinder research and the discovery of new illnesses. During the HIV epidemic of the 1980s, prevalence was high amongst gay males, and many doctors believed the virus only existed within the queer community. As a result, heterosexual men, women, and children were for a long time thought not to be susceptible to HIV, delaying treatment and progress.

The Tuskegee Syphilis Study is another example of discrimination in healthcare. 600 African American men were enrolled in a syphilis study and given placebo medicine so that researchers could track the course of the illness; part of the goal was to gather data through post-mortem autopsies. Many died, went blind, or suffered severe mental distress. This act of conscious discrimination, rooted in disrespect for minority lives, instilled in minorities a deep distrust of the healthcare system.

To this day, much of the data collected in the Global North still skews white, middle-class, and cisgender. This is partly because many people worldwide cannot afford healthcare, or avoid doctors for fear of being mistreated or misunderstood.

Additionally, many communities are underrepresented in clinical trials and research. In fact, 78% of participants in US-based clinical trials are white. With such a homogeneous participant pool, how other groups respond to medications and treatments remains unknown. The diversity gap is alarming and must be bridged to ensure impartial data is fed into future AI healthcare devices.

AI biases

Artificial intelligence, particularly machine learning, uses datasets to gradually become smarter and more efficient at the task it was programmed for. Algorithmic bias arises when the ‘training sets’ used to teach the AI are not well balanced and so end up favouring a specific group. Because systems have historically been built around white, heteronormative individuals, their histories are better documented; the data reflects this, and models become most familiar with this group. This is why AI ends up benefitting a less diverse set of people.
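The effect is easy to reproduce. The following sketch uses entirely synthetic, hypothetical data: a model fitted to a training set where one group supplies 90% of the examples performs well on that group and noticeably worse on the under-represented one, even though nothing in the code mentions the groups during training.

```python
import random

random.seed(0)

# Hypothetical synthetic "patients": a group label, one risk feature, and a
# true outcome. Group "A" supplies 90% of the training data; group "B" is
# under-represented, and its outcome signal sits at a different threshold.
def make_patients(n, group, signal_threshold):
    patients = []
    for _ in range(n):
        x = random.random()                   # synthetic risk feature
        y = 1 if x > signal_threshold else 0  # true condition status
        patients.append((group, x, y))
    return patients

train = make_patients(900, "A", 0.5) + make_patients(100, "B", 0.3)

# "Training": choose the single decision threshold that maximises overall
# accuracy on the training set. It is pulled towards group A's 0.5 because
# group A dominates the data.
best_t, best_acc = 0.0, 0.0
for t in [i / 100 for i in range(101)]:
    acc = sum((x > t) == bool(y) for _, x, y in train) / len(train)
    if acc > best_acc:
        best_t, best_acc = t, acc

def accuracy(patients, t):
    return sum((x > t) == bool(y) for _, x, y in patients) / len(patients)

test_a = make_patients(1000, "A", 0.5)
test_b = make_patients(1000, "B", 0.3)

print(f"learned threshold:   {best_t:.2f}")
print(f"accuracy on group A: {accuracy(test_a, best_t):.2%}")
print(f"accuracy on group B: {accuracy(test_b, best_t):.2%}")
```

The model never sees the group labels, yet group B suffers roughly one misdiagnosis in five, because the learned threshold reflects the majority group's signal.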

This manifests in a number of ways. Twitter users have tested the platform’s photo-cropping algorithm and found that thumbnails would only preview white faces. Additionally, the AI art emulator PortraitAI would only create portraits of white people, even when fed a photo of a minority. Though unsavoury, these examples are fairly trivial. In a life-or-death domain such as healthcare, however, the same failure could be extremely damaging to many.

These datasets need to be balanced to support all communities equally and made as diverse as possible. Where data is unavailable, more funding and planning for diverse research should follow. This is integral to creating more intelligent, accurate AI.
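A first, mechanical step in that direction is simply auditing a training set’s group composition and resampling towards parity. The sketch below is a hypothetical illustration (the field names and 90/10 split are assumptions, not from any real dataset); oversampling only duplicates existing records, so it is a stopgap, not a substitute for collecting genuinely diverse data as argued above.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical training records with a demographic group field.
records = [{"group": "A"} for _ in range(900)] + \
          [{"group": "B"} for _ in range(100)]

def rebalance(records):
    """Oversample under-represented groups (with replacement) to parity."""
    by_group = {}
    for r in records:
        by_group.setdefault(r["group"], []).append(r)
    target = max(len(rows) for rows in by_group.values())
    balanced = []
    for rows in by_group.values():
        balanced.extend(rows)
        # duplicate randomly chosen records until the group hits the target
        balanced.extend(random.choices(rows, k=target - len(rows)))
    return balanced

print("before:", Counter(r["group"] for r in records))
print("after: ", Counter(r["group"] for r in rebalance(records)))
```

Auditing like this makes a diversity gap visible before a model is trained on it, which is exactly when it is cheapest to fix.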

The state of healthcare AI

AI is already being used in healthcare. One of its main advantages is the ability to work through large volumes of data and surface connections and valuable insights. Doctors use it to automate medical imaging analysis, arriving at diagnoses more quickly and with fewer human errors. IBM’s Watson for Healthcare and Google DeepMind are the largest projects at the forefront of this.

Algorithms have also been used to develop medicine by scanning through databases of molecular structures and determining which potential medicines would be effective or ineffective for various diseases.

This technology can help emergency medical services as well. The AI tool Corti analyses an emergency caller’s voice and relevant medical history to help determine whether the patient is experiencing cardiac arrest, leading to faster treatment and better instructions.

Consumer wearables such as Fitbit also allow individuals to keep track of their health stats, the data from which can be integral in diagnosing illnesses such as heart disease. There are also consumer health applications, such as WebMD and ADA, which help users map out symptoms, possibly diagnose, and provide details on what they should be doing.

The AI healthcare systems of the future will be far more capable, but we must make sure the diversity gap does not widen. Doctors, researchers, regulators, and programmers must band together and act now for healthier, more equitable societies.

About the Author: Shadine Taufik

Shadine Taufik is a contributing Features writer with expertise in digital sociology and culture, philosophy of technology, and computational creativity.
