With the advent of the digital age, artificial intelligence (AI), automation, big data, and algorithms are playing an increasingly important role in our daily lives. These technologies are transforming our way of life and our social interactions. One of the changes is that machines are gradually participating in decisions that affect human life. Large organisations, especially in healthcare, are trying to use massive data sets to improve the way they operate (Johnson, 2019).
Healthcare, as one of the main concerns of human society, affects almost every aspect of our lives, from work to relationships. It is therefore becoming increasingly important to use technologies such as algorithms to optimise healthcare services, for example by reducing medical risks and distributing healthcare resources equitably.
However, at the same time, the healthcare field faces challenges posed by these technologies, including patient data privacy and security, bias arising from algorithms and data, and unexpected technical errors and accidents. These issues require interdisciplinary collaboration and a balance of technical and ethical considerations.
This blog will focus on the bias and discrimination caused by algorithms in the healthcare system through the case of Optum, analysing its causes and effects and exploring governance and policy responses to it.
Let’s start with the definition of algorithms in healthcare.
What Is A Healthcare Algorithm?
Since the 1970s, artificial intelligence and medical algorithms have been closely linked to our healthcare system (Fleur, 2022). Medical algorithms mainly influence modern healthcare practice through three components (Velichko, 2021):
- Clinical aspects, by improving the quality of treatment (e.g. by increasing the speed and accuracy of diagnosis)
- Pharmaceutical industry, by facilitating the development of new drugs (e.g. by speeding up research and report writing)
- Administrative aspects, by automating income cycle management and bureaucratic tasks (e.g. by automating data processing)
We will focus on the clinical aspects of algorithms, as they are the most relevant to each of us.
Flew (2021) defines an algorithm as a computational process in which user inputs interact with a dataset to generate an output. In the clinical context, algorithms can assist doctors' decision-making, including diagnosis, treatment planning, and clinical research. They are used to identify personalised treatments for each patient in order to reduce costs and improve outcomes.
Doctors input data on the patient’s symptoms, medical history, demographics, and other factors into healthcare AI systems. The algorithm analyses and summarises the characteristics of each patient’s illness based on the collected dataset. The doctor then receives a diagnosis or a recommended dosage and treatment plan designed to maximise treatment benefit.
Let’s take a look at some of the most prominent use cases of algorithms in healthcare.
- SmokeBeat changes users’ smoking habits by collecting data on their smoking behaviour.
- Molly remotely monitors patient data such as blood pressure and weight to simplify care.
- Face2Gene helps clinicians diagnose rare diseases more accurately through facial recognition.
We have high hopes for machine learning to make healthcare more intelligent, and we envision that ideal algorithms meet the following criteria (Loftus et al., 2022): explainable, dynamic, precise, autonomous, fair, and reproducible.
However, biases in AI algorithms have a significant impact on healthcare, thus hindering the vision of equal access to quality healthcare for everyone.
A Case Study of Bias and Discrimination Through Algorithms
Optum is a leading healthcare services company dedicated to providing care with the help of technology and data, giving the public the guidance and tools they need to achieve better health.
An algorithm sold by Optum identifies high-risk patients with complex health needs, such as patients with chronic conditions, to help them access special resources to manage their health. Hospitals and health insurers also give these patients personalised attention, such as specially trained carers and additional primary care visits for closer monitoring.
The algorithm aims to pre-empt severe complications, reduce costs, and improve patient satisfaction. At one point, it was applied to more than 200 million Americans (Johnson, 2019).
However, a study (Obermeyer et al., 2019) indicates that the algorithm, which is widely used by the health system, has been inadvertently but systematically discriminating against Black people.
The study analysed the medical records of nearly 50,000 patients, of which 12% self-identified as Black and 88% self-identified as White. The study compared their algorithmic risk scores with their actual health history, which showed that the risk scores assigned to Black patients by the algorithm were consistently lower, even though their conditions were the same as those of White patients.
Furthermore, when the company repeated the analysis on a nationwide dataset of 3.7 million patients, they found that Black patients who were rated by the algorithm as needing the same level of extra care as White patients had significantly more serious conditions: they had a total of 48,772 additional chronic diseases (Johnson, 2019). The study also demonstrated that addressing this discrepancy would increase the proportion of Black patients receiving extra help from 17.7% to 46.5%.
There is no doubt that the algorithm seriously undermines people of colour’s equal enjoyment of their basic rights as citizens, and can even entrench racial bias.
Unfortunately, this widely used algorithm is typical of the methods used throughout the healthcare industry.
It makes me wonder: what exactly leads to this algorithmic bias and discrimination?
Root Causes of Bias and Discrimination Through Algorithms
The algorithm is not intentionally discriminatory. On the contrary, to ensure that more patients benefit equally from medical help, it uses a seemingly race-neutral metric: healthcare costs.
The algorithm predicts the likelihood of future risk based on the patient’s medical history and accumulated healthcare expenditure over the past year. In other words, it treats a patient’s predicted future healthcare costs as a measure of their healthcare needs, and uses this to determine whether additional care is required (Christensen et al., 2021).
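To make the proxy problem concrete, here is a deliberately simplified sketch. This is not Optum’s actual model: the scoring function and dollar figures below are hypothetical, except for the roughly $1,800 annual spending gap reported by the study.

```python
# Toy illustration of a cost-as-proxy risk score (hypothetical, not Optum's model).
def predicted_risk(past_cost_usd: float) -> float:
    """Toy risk score: the label is prior spending, not actual illness."""
    return past_cost_usd / 10_000  # normalise spending to a rough 0..1 scale

# Two equally sick patients; the study found Black patients spent roughly
# $1,800 less per year than White patients at the same level of health.
cost_white = 10_000          # hypothetical annual spend
cost_black = 10_000 - 1_800  # same illness, lower recorded spending

print(predicted_risk(cost_white))  # 1.0
print(predicted_risk(cost_black))  # 0.82 -> read as "lower need"
```

Because the proxy label (cost) diverges between the two groups, the lower score is interpreted as lower need, even though nothing about the patients’ actual health differs.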
However, cost differences may have complex roots, including but not limited to access and quality of healthcare, commercial or health insurance utilisation, distrust of the medical system, income differences, cultural misunderstandings, racism, and discrimination, or unconscious bias that doctors may not even be aware of. All of these potential social factors will influence the output of AI models.
Panch, Mattie, and Atun (2019) first defined algorithmic bias in AI and healthcare systems as the application of an algorithm that amplifies and exacerbates existing inequalities in socioeconomic status, race, ethnicity, religion, gender, disability, or sexual orientation, thus leading to unfairness in the healthcare system.
We can simply understand the concept as the dataset in healthcare technology being a reflection of the real world. Algorithms innocently reflect inequalities that persist in societies, cultures, and institutions and perpetuate existing inequalities (Vartan, 2019), or even eventually exacerbate them.
Optum’s algorithm captures systemic racism, leading to additional inequalities. Historical bias leads to digitally-driven bias. While it’s not intentional, it does harm people of colour’s access to healthcare.
Black Americans, as one of the groups with the highest poverty rates in the US, have limited access to services and resources, resulting in lower healthcare spending (Christensen et al., 2021). That is why the study indicates that Black patients spend approximately $1,800 less annually on healthcare than White patients with the same number of long-term conditions (Obermeyer et al., 2019).
As a result, Black patients are incorrectly labelled as unlikely to require significant healthcare in the future, while healthier White patients are marked as requiring more intensive care management. The algorithm incorrectly concludes that Black patients are healthier than equally ill White patients. This misclassification can affect doctors’ diagnoses, so that Black patients do not receive the necessary medical care and monitoring while White patients are over-treated and over-monitored. This vicious cycle only deepens the disparity and the bias.
We have to acknowledge that machine learning is a vast and complex process. Errors can occur in every step of data collection, input, analysis, integration, and cleaning. Any errors will be amplified by the algorithm and affect the output of the final result.
In this study, the lack of diversity in the data was a harmful issue: the White samples outnumbered the Black samples by more than seven to one.
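The effect of such an imbalance can be sketched with a back-of-the-envelope calculation. Only the 88%/12% split of the roughly 50,000-patient sample comes from the study; the per-group accuracy figures below are invented purely for illustration.

```python
# How a lopsided sample can hide poor performance on the smaller group.
majority_n, minority_n = 44_000, 6_000       # the study's ~88% / 12% split
majority_correct = round(majority_n * 0.95)  # hypothetical accuracy: 95%
minority_correct = round(minority_n * 0.60)  # hypothetical accuracy: 60%

overall_acc = (majority_correct + minority_correct) / (majority_n + minority_n)
minority_acc = minority_correct / minority_n

print(round(overall_acc, 3))  # 0.908 -- looks acceptable in aggregate
print(minority_acc)           # 0.6   -- poor for the under-represented group
```

An aggregate metric dominated by the majority group can therefore look healthy while the model performs badly for exactly the patients who are already under-served.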
Whether for explicit or implicit reasons, algorithmic bias can have a devastating impact on marginalised groups at each step of the healthcare process, severely affecting millions of patients, from disease classification to the quality of care they receive.
Therefore, addressing systematic discrimination and bias in healthcare is urgent.
Governance & Regulation
Reformulating the algorithm may eliminate racial bias in predicting who needs extra care.
Obermeyer and his colleagues (2019) were given access to the algorithm and modified the model. This level of transparency, sharing, and accountability is currently uncommon, since algorithms are typically opaque (Flew, 2021), expensive, and confidential proprietary products. This unprecedented study ultimately reduced the algorithm’s bias by 84% (The Lancet Digital Health, 2019).
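The direction of the fix can be sketched in a few lines: keep the model, change the label it is trained to predict. The field names and numbers below are hypothetical; only the idea of replacing the cost label with a more direct measure of health follows Obermeyer et al. (2019).

```python
# Sketch of the remedy: change the label, not the model (fields are hypothetical).
def cost_label(patient: dict) -> float:
    return patient["prior_year_cost"]            # biased proxy for need

def health_label(patient: dict) -> float:
    return patient["active_chronic_conditions"]  # closer to true need

patients = [
    {"prior_year_cost": 8_200, "active_chronic_conditions": 5},   # sicker, spends less
    {"prior_year_cost": 10_000, "active_chronic_conditions": 3},  # healthier, spends more
]

# Ranking by cost puts the healthier, higher-spending patient first;
# ranking by a direct health measure reverses the order.
by_cost = sorted(patients, key=cost_label, reverse=True)
by_health = sorted(patients, key=health_label, reverse=True)
print(by_cost[0]["active_chronic_conditions"])    # 3
print(by_health[0]["active_chronic_conditions"])  # 5
```

The same ranking machinery produces opposite priorities depending on which label it optimises, which is why relabelling alone removed most of the measured bias.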
Therefore, algorithms need to be constantly reviewed and improved to help clinicians make the most effective care decisions for each patient, and to ensure that society is moving in a positive direction when using AI and machine learning.
It is also in the nature of the algorithmic process to improve over time based on more data, more diverse data, and patterns arising from specific user participation (Flew, 2021).
Fortunately, AI developers, healthcare practitioners, regulators, policy advisors, and patients are all recognising the importance of transparent and fair data. They are also attempting to avoid the effects of algorithmic bias through rigorous design, testing, and monitoring of AI systems.
Open science practices can contribute to promoting fairness in AI in healthcare (Norori et al., 2021).
Data sharing would help address algorithmic bias against members of disadvantaged or under-represented groups that stems from a lack of data diversity. At the same time, we need to take into account privacy issues involving sensitive information.
Explainable AI (Shaban-Nejad et al., 2020) will provide a ‘white-box’ diagnostic method for doctors and patients to establish a culture of transparency and accountability. Platform developers should disclose the strengths and weaknesses of their models in the decision-making process, and how they are constantly learning and evolving. Continuous alignment and real-time monitoring ensure that algorithms are constantly adapting to make more accurate predictions and improve treatment outcomes.
Additionally, absolute equality may not be achievable in every dimension: women, for example, are at genetically higher risk of breast cancer than men, and artificially levelling that probability would distort the algorithm’s output. Consequently, we need at least to build an “I don’t know” function into AI models to transparently communicate the limitations of the algorithm to clinicians and policymakers.
Video: AI in healthcare: opportunities and challenges | Navid Toosi Saidy | TEDxQUT.
These improvements and regulations need to be carried out within a standardized institutional framework that aims to uphold human rights and address the ethical risks associated with artificial intelligence.
- The WHO (2021) published the first global report on AI in health and adopted six guiding principles as the foundation for the regulation and governance of AI.
- UNESCO Member States (2021) released Recommendation on the Ethics of Artificial Intelligence, which is the first global agreement on the ethics of artificial intelligence.
- The Australian Human Rights Commission (2021) issued the Human Rights and Technology Final Report to develop guidelines for government and non-government organisations to comply with federal anti-discrimination laws when making decisions with the use of AI.
These policies require multi-stakeholder self-regulation, external regulation, and co-regulation.
In the process, we also need to continually test the efficiency and fairness of algorithms and refine regulatory mechanisms with stakeholders. These processes should also be closely monitored, as humans are also subject to bias and discrimination.
Accordingly, we need to train AI developers, relevant technical staff, and managers on anti-discrimination laws to ensure that automated decision-making processes are free from bias and that healthcare services from AI are fair, reliable, and credible.
What’s Next?
The development of algorithmic medicine in healthcare has brought an exciting and hopeful era for diagnosis. Its emergence aims to maximize the benefits of artificial intelligence in healthcare and public health.
Healthcare itself does not have a racial dimension; racial discrimination exists only in society. Therefore, we must overcome and fundamentally eliminate this biased ideology to preserve the fairness of our healthcare system and minimise health disparities.
Admittedly, this is not an easy process.
After all, changing the way machines operate only requires a few keystrokes; changing people’s ways of thinking requires much more than that.
What are your thoughts on algorithmic biases?
Feel free to leave your comments!
Christensen, D., Manley, M., & Resendez, J. (2021, September 9). Medical Algorithms Are Failing Communities Of Color. https://www.healthaffairs.org/do/10.1377/forefront.20210903.976632/
Fleur, N. (2022, May 30). Listen: ‘Racism is America’s oldest algorithm’: How bias creeps into health care AI. https://www.statnews.com/2022/05/30/how-bias-creeps-into-health-care-ai/
Flew, T. (2021). Regulating Platforms. Cambridge: Polity, pp. 79-86.
Johnson, C. (2019, October 24). Racial bias in a medical algorithm favors white patients over sicker black patients. The Washington Post. https://www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients/
Loftus, T. J., Tighe, P. J., Ozrazgat-Baslanti, T., Davis, J. P., Ruppert, M. M., Ren, Y., … Bihorac, A. (2022). Ideal algorithms in healthcare: Explainable, dynamic, precise, autonomous, fair, and reproducible. PLOS Digital Health, 1(1), e0000006. https://doi.org/10.1371/journal.pdig.0000006
Norori, N., Hu, Q., Aellen, F. M., Faraci, F. D., & Tzovara, A. (2021). Addressing bias in big data and AI for health care: A call for open science. Patterns (New York, N.Y.), 2(10), 100347–100347. https://doi.org/10.1016/j.patter.2021.100347
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://www.science.org/doi/10.1126/science.aax2342
Shaban-Nejad, A., Michalowski, M., & Buckeridge, D. L. (2020). Explainable AI in Healthcare and Medicine. Springer International Publishing AG.
The Lancet Digital Health. (2019). There is no such thing as race in health-care algorithms. https://www.thelancet.com/action/showPdf?pii=S2589-7500%2819%2930201-8
Vartan, S. (2019, October 24). Racial Bias Found in a Major Health Care Risk Algorithm. Scientific American. https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/
Velichko, Y. (2021, December 2). Types and Applications of AI in Healthcare. https://postindustria.com/types-and-applications-of-ai-in-healthcare/