The Health and Care Industry: Unpacking AI Bias

14/5/2024
4 minutes
Written by
Kate Walsh

As artificial intelligence (AI) becomes increasingly present in our day-to-day lives, it is important to keep reviewing how this technology reflects, or fails to reflect, the diverse world we live in. Though its integration across various industries has many positive implications, we must be mindful of what happens when diversity is not ensured at every stage of its development and implementation.

Biased datasets lead to skewed outcomes, misdiagnoses, and unequal treatment, leaving the health and care industry facing a unique set of challenges.

Understanding Bias in AI

At its core, the biases which pervade AI in healthcare and across industries stem from the homogeneity of the individuals developing the technology. In its short history, the field of AI has been dominated by a narrow demographic: white, cisgender, male individuals from privileged backgrounds. This homogeneity persists across the AI lifecycle, from problem definition through dataset selection and curation to model training and deployment (Gichoya et al., 2023).

Algorithms trained on datasets that lack diversity produce inaccurate and discriminatory outcomes (Celi et al., 2022). It must be noted that this extends beyond ethnicity and gender, also encompassing crucial factors such as age, socioeconomic status, and disability (Straw, 2020).
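
To make this concrete, a simple representation audit of training data can surface such imbalances before a model is ever built. The sketch below is a minimal illustration in Python, assuming a hypothetical pandas DataFrame with a self-reported demographic column; the column name and the 10% flagging threshold are illustrative choices of our own, not drawn from the cited studies.

```python
import pandas as pd

def audit_representation(df: pd.DataFrame, column: str,
                         min_share: float = 0.10) -> pd.DataFrame:
    """Report each subgroup's share of the dataset and flag
    under-represented groups. `min_share` is an illustrative
    threshold, not a clinical standard."""
    shares = df[column].value_counts(normalize=True).rename("share").to_frame()
    shares["under_represented"] = shares["share"] < min_share
    return shares

# Hypothetical training data with a self-reported ethnicity column.
train = pd.DataFrame({"ethnicity": ["White"] * 90 + ["Black"] * 6 + ["Asian"] * 4})
print(audit_representation(train, "ethnicity"))
```

In practice such an audit would cover age, socioeconomic status, and disability as well, echoing the factors noted above.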

Bias in Healthcare

As the use of AI in medical diagnosis, treatment recommendations, and patient care increases, the biases embedded in algorithms and their outputs pose significant challenges to equitable healthcare delivery. When AI systems inherit these biases from flawed datasets, they exacerbate existing disparities and perpetuate systemic discrimination (Chin et al., 2023).

The consequences of biases in AI manifest in several ways:

  • The technology used to classify images of skin lesions is trained largely on images of white patients, using datasets in which the estimated proportion of Black patients is approximately 5-10%, resulting in approximately a 50% reduction in diagnostic accuracy for Black patients (Kamulegeya et al., 2023); a stratified-evaluation sketch follows this list.
  • An algorithm developed to predict hospital length of stay based its output on the affluence of patients' zip codes, suggesting that patients from less affluent areas were likely to have longer hospital stays, which reduced the likelihood of these patients receiving case-management facilitation of early discharge (Nordling, 2019).
  • 74% of first and last authors of papers related to AI in healthcare are male (Celi et al., 2022).
  • AI-driven virtual chat services in healthcare settings reinforce stereotypes through the language they use, contributing to patient discomfort and perpetuating preexisting language biases (Hanna et al., 2023).
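
Gaps like the diagnostic-accuracy disparity in the first bullet only become visible when model performance is reported per subgroup rather than in aggregate. Below is a minimal stratified-evaluation sketch in Python using entirely synthetic data; the function name and inputs are our own illustration, not taken from Kamulegeya et al. (2023).

```python
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Compute overall accuracy plus accuracy within each subgroup."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    results = {"overall": float((y_true == y_pred).mean())}
    for g in np.unique(groups):
        mask = groups == g
        results[str(g)] = float((y_true[mask] == y_pred[mask]).mean())
    return results

# Synthetic example: aggregate accuracy hides a severe gap for group B.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B"]
print(accuracy_by_group(y_true, y_pred, groups))
# {'overall': 0.625, 'A': 1.0, 'B': 0.0}
```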

Addressing Bias in AI for Equitable Healthcare

Mitigating bias in AI is therefore essential for achieving equitable healthcare experiences and outcomes (Parikh et al., 2019). To tackle this issue, Chin et al. (2023) propose a multifaceted approach (a sketch illustrating point 4 follows the list):

  1. Promote health and health care equity during all phases of the health care algorithm life cycle.
  2. Ensure health care algorithms and their use are transparent and explainable.
  3. Authentically engage patients and communities during all phases of the health care algorithm life cycle and earn trustworthiness.
  4. Explicitly identify health care algorithmic fairness issues and trade-offs.
  5. Establish accountability for equity and fairness in outcomes from health care algorithms.
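
Point 4 has a directly measurable side: common algorithmic fairness criteria, such as demographic parity and equal opportunity, can be computed and compared, and the tension between them is where trade-offs arise. The sketch below uses synthetic data and our own function names to show how two such metrics might be measured; it illustrates the general technique, not an implementation from Chin et al. (2023).

```python
import numpy as np

def demographic_parity_gap(y_pred, groups, a, b):
    """Difference in positive-prediction rates between groups a and b."""
    y_pred, groups = np.asarray(y_pred), np.asarray(groups)
    return float(y_pred[groups == a].mean() - y_pred[groups == b].mean())

def equal_opportunity_gap(y_true, y_pred, groups, a, b):
    """Difference in true-positive rates (sensitivity) between groups a and b."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    tpr = lambda m: float(y_pred[m & (y_true == 1)].mean())
    return tpr(groups == a) - tpr(groups == b)

# Synthetic screening decisions for two patient groups.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(demographic_parity_gap(y_pred, groups, "A", "B"))         # 0.5
print(equal_opportunity_gap(y_true, y_pred, groups, "A", "B"))  # 0.5
```

A nonzero gap on one metric does not imply a gap on another, and satisfying all such criteria simultaneously is generally impossible, which is why the trade-offs must be identified explicitly.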

Together, the five steps above will help reduce bias in AI and ultimately improve patient experiences and outcomes.

Key Takeaways

  • Biases in AI are a consequence of both human and technological factors, and lead to unfair outcomes that exacerbate pre-existing societal inequalities.
  • The lack of diversity at all stages of AI’s development contributes to this issue.
  • AI models are often deployed on target populations that differ from the populations on which they were trained, because the training data are not representative of the whole.
  • Guidelines and regulations are required to provide an ethical framework that will ultimately reduce the effect of bias in AI on health and care inequalities.

References

Celi, L.A., Cellini, J., Charpignon, M.L., Dee, E.C., Dernoncourt, F., Eber, R., Mitchell, W.G., Moukheiber, L., Schirmer, J., Situ, J. and Paguio, J., 2022. Sources of bias in artificial intelligence that perpetuate healthcare disparities—A global review. PLOS Digital Health, 1(3), p.e0000022. https://doi.org/10.1371/journal.pdig.0000022

Chin, M.H., Afsar-Manesh, N., Bierman, A.S., Chang, C., Colón-Rodríguez, C.J., Dullabh, P., Duran, D.G., Fair, M., Hernandez-Boussard, T., Hightower, M. and Jain, A., 2023. Guiding principles to address the impact of algorithm bias on racial and ethnic disparities in health and health care. JAMA Network Open, 6(12). https://doi.org/10.1001/jamanetworkopen.2023.45050

Gichoya, J.W., Thomas, K., Celi, L.A., Safdar, N., Banerjee, I., Banja, J.D., Seyyed-Kalantari, L., Trivedi, H. and Purkayastha, S., 2023. AI pitfalls and what not to do: mitigating bias in AI. The British Journal of Radiology, 96(1150). https://doi.org/10.1259/bjr.20230023

Hanna, J.J., Wakene, A.D., Lehmann, C.U. and Medford, R.J., 2023. Assessing Racial and Ethnic Bias in Text Generation for Healthcare-Related Tasks by ChatGPT. medRxiv. https://doi.org/10.1101/2023.08.28.23294730

Kamulegeya, L., Bwanika, J., Okello, M., Rusoke, D., Nassiwa, F., Lubega, W., Musinguzi, D. and Börve, A., 2023. Using artificial intelligence on dermatology conditions in Uganda: A case for diversity in training data sets for machine learning. African Health Sciences, 23(2), pp.753-63. https://doi.org/10.4314/ahs.v23i2.86

Nordling, L., 2019. A fairer way forward for AI in health care. Nature, 573(7775). https://doi.org/10.1038/d41586-019-02872-2

Parikh, R.B., Teeple, S. and Navathe, A.S., 2019. Addressing bias in artificial intelligence in health care. JAMA, 322(24), pp.2377-2378. https://doi.org/10.1001/jama.2019.18058

Straw, I., 2020. The automation of bias in medical Artificial Intelligence (AI): Decoding the past to create a better future. Artificial Intelligence in Medicine, 110, p.101965. https://doi.org/10.1016/j.artmed.2020.101965
