Artificial intelligence (AI), automation, algorithms and datafication now touch every aspect of life, changing how we interact with the world, from business to entertainment to healthcare. Amazon knows which items interest us, and virtual voice assistants such as Siri and Alexa respond to our requests. In recent years, AI has moved well beyond simple language conversation and into healthcare. According to Statista (Stewart, 2023), the value of AI in the global healthcare market is projected to grow from US$11 billion in 2021 to US$188 billion in 2030. Technologies such as deep learning, image recognition and natural language processing could transform how healthcare is delivered, from early detection to treatment planning. By analysing large amounts of patient data, AI algorithms can predict potential health problems, allowing early intervention before conditions deteriorate. Automation can reduce errors in manual medical procedures, and datafication supports more accurate treatment plans by collecting data on patients’ health status. AI can also assist in healthcare management, helping hospitals handle medical records and medication usage and improving efficiency.
This blog post will discuss the application of AI in healthcare, with a particular focus on digital platforms, in four parts. Firstly, I will delve into the definition and current state of AI in healthcare; secondly, I will analyse concerns about its use; then, I will present a concrete case study illustrating the practical application of AI in healthcare; finally, I will discuss whether and how AI in healthcare should be regulated.
What is AI in healthcare?
In healthcare, AI can be defined as the simulation of human cognitive functions to analyse complex medical data. Machine learning and natural language processing are the AI techniques most commonly applied on digital platforms. Machine learning applications in healthcare include predicting and treating diseases, supporting medical imaging and diagnosis, and collating cases (Urwin, 2023). Natural language processing systems can analyse patient clinical records, transcribe patient interactions, or power conversational AI (Davenport & Kalakota, 2019). For example, Wysa, a chatbot for mental health support, is built on natural language processing.
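To make this concrete, here is a minimal, purely illustrative sketch of one task clinical NLP performs: pulling structured symptom mentions out of free text. The lexicon and the note are invented, and real systems such as Wysa are far more sophisticated; this only shows the general shape of the problem.

```python
import re

# A toy symptom lexicon -- invented for illustration only.
SYMPTOM_LEXICON = {"headache", "fever", "cough", "fatigue", "nausea"}

def extract_symptoms(note: str) -> list[str]:
    """Return lexicon symptoms mentioned in a free-text note, in order of appearance."""
    tokens = re.findall(r"[a-z]+", note.lower())
    seen = []
    for tok in tokens:
        if tok in SYMPTOM_LEXICON and tok not in seen:
            seen.append(tok)
    return seen

note = "Patient reports a persistent cough and mild fever; denies nausea."
print(extract_symptoms(note))  # ['cough', 'fever', 'nausea']
```

Note that naive matching wrongly picks up "nausea" even though the note *denies* it; handling negation and context like this is exactly why production clinical NLP is hard.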
Among the many applications of AI in healthcare, I would like to introduce two common ones: health management and the optimisation of healthcare resource allocation. In health management, AI typically uses internet applications, platforms or wearable smart devices to capture vital signs and health data comprehensively. It can also mine medical and health data in depth, including consultation records, psychological condition and physiological status, to provide personal health management plans (Insider Intelligence, 2023). For example, the UK company Babylon Health has developed a virtual health assistant that provides 24/7 personalised healthcare by pairing a real doctor with an AI assistant on a smart portable device, while Thymia, an AI-powered company founded in 2020, gives clinicians constant remote monitoring of patients to detect warning signs of mental illness through gaming (Insider Intelligence, 2023).
Hospitals can also improve the distribution of healthcare resources and close management gaps by implementing AI systems. Drawing on big data from medical cases, treatments and user feedback, medical AI can manage resource allocation, reduce medical labour costs, enhance the quality of care and increase patient satisfaction (Patzer, 2023). An ABC News article by Beavis (2022) reported that a woman in her 70s died after waiting more than nine hours to be admitted to an emergency room. The incident reflects the strain on the local health system and the inefficient use of hospital resources, and it shows how urgently such systems need help. The planned organisation, coordination and control of hospital resources by AI could maximise medical utility and reduce the probability of patient accidents.
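As a toy illustration of what "planned organisation" of scarce resources can mean in code, the sketch below orders patients for admission by a clinical urgency score, breaking ties by arrival time. The names and scores are invented, and real triage and bed-allocation systems weigh far more factors; this is only a sketch of the underlying priority-queue idea.

```python
import heapq

def admit_order(patients):
    """Order patients by (urgency, arrival): a lower urgency score is more
    critical, and earlier arrivals break ties between equal urgencies."""
    heap = [(urgency, arrival, name) for name, urgency, arrival in patients]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

# (name, urgency score, arrival time) -- all hypothetical values.
patients = [("A", 3, 0), ("B", 1, 1), ("C", 2, 2), ("D", 1, 3)]
print(admit_order(patients))  # ['B', 'D', 'C', 'A']
```

Even this trivial scheme guarantees the most critical patients are seen first, which is the property that was missing in the nine-hour ramping incident above.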
The potential issues and concerns
Although medical AI applications are booming, the use of AI in healthcare still carries potential issues and risks. As a new technology, medical AI largely lacks established standards and regulations, and problems caused by this legal lag have already surfaced: the privacy and security of personal medical data, ethical and moral issues, and the determination of liability for medical incidents caused by AI in healthcare services.
- Personal data privacy and security
Healthcare involves sensitive personal information, such as medical records, health conditions and imaging data. This data is vulnerable to breaches and hacking. According to a report by Tham (2018) in The Straits Times, hackers broke into the computers of SingHealth, the largest group of medical institutions in Singapore, and stole the personal data of 1.5 million patients, including names, genders, dates of birth, races, addresses and ID numbers. If illegally used, such intensely private data could have severe consequences for patients.
- Ethics, bias and social influence
AI may carry algorithmic biases that lead to wrong diagnoses and further exacerbate social inequities. A system affected by algorithmic bias may be worse at predicting disease, or may overstate or understate the likelihood of illness, depending on a patient’s race or gender (Davenport & Kalakota, 2019). More seriously, problematic social attitudes can be amplified and entrenched through systems with algorithmic biases (Hoffmann, 2018). For example, developers influenced by racial bias may systematically underrepresent dark-skin samples when collecting data; one AI platform for estimating breast cancer risk calculated that black women were at lower risk than white women even when all other risk factors were equal (Martin-Mercado, as cited in Mazzolini, 2022). Racial bias embedded in algorithms threatens the rights of black patients.
- Identification of medical liability
There is no unified standard or specification for determining medical liability for errors in AI algorithms or AI-related medical malpractice. Nolan (2023) notes that AI in global healthcare is at a relatively early stage, leaving little case law to draw on. The determination of medical liability for AI is therefore a complex issue.
Case study: DermAssist
DermAssist is a guided skin search platform developed by Google Health. As Constantinides et al. (2018, as cited in Flew, 2021, p. 70) put it, a digital platform is a set of resources enabling consumer and producer interaction to create value. DermAssist is an AI-powered tool that employs machine learning to help users identify skin issues. When users want to investigate a skin concern, they submit images of the affected region to the app and receive a personalised analysis from the AI model (Desalvo, as cited in Google, 2021). The model was developed and trained to detect 288 skin conditions (Desalvo, as cited in Google, 2021) by parsing millions of images of skin problems, around 65,000 images of already diagnosed skin conditions, and thousands of examples of healthy skin (Bui & Liu, 2021). People who encounter skin problems often seek help from the internet, family or friends by taking pictures, but this rarely produces accurate professional answers and can increase panic and anxiety. For these people, DermAssist is a useful tool that can reduce stress and panic. However, it also raises potential issues and concerns.
DermAssist suffers from algorithmic bias in two forms: incomplete data bias and pre-existing bias. Incomplete data bias refers to bias arising from unrepresentative or incomplete training data. Stanford University dermatologist Daneshjou (2021) points out that Google’s publicly released DermAssist study did not include the darkest skin types in its test set. Furthermore, Feathers (2021) reported that patients with light skin types were roughly 26 times more numerous than those with dark skin types in Google’s skin-condition dataset. DermAssist’s incomplete data bias is grossly unfair to black patients: it functions like a skin testing tool designed for white users, threatening the rights of patients with dark skin.
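The mechanism behind incomplete data bias can be sketched in a few lines. The toy "classifier" below simply memorises which diagnosis it saw for each presentation during training. Because the training set (mirroring the roughly 26:1 imbalance reported by Feathers) contains only the light-skin presentation of a condition, the model fails completely on the dark-skin presentation. The feature names and counts are invented for illustration; this is not Google's actual model.

```python
from collections import Counter

def train(examples):
    """Memorise the majority diagnosis seen for each presentation."""
    by_feature = {}
    for feature, label in examples:
        by_feature.setdefault(feature, Counter())[label] += 1
    return {f: c.most_common(1)[0][0] for f, c in by_feature.items()}

def predict(model, feature):
    # Presentations never seen in training fall back to an unhelpful default.
    return model.get(feature, "unknown")

def accuracy(model, cases):
    hits = sum(predict(model, f) == label for f, label in cases)
    return hits / len(cases)

# Training data: 26 light-skin examples of a condition, zero dark-skin ones.
model = train([("red_patch", "eczema")] * 26)

light_test = [("red_patch", "eczema")] * 10   # presentation seen in training
dark_test = [("grey_patch", "eczema")] * 10   # same condition, unseen look

print(accuracy(model, light_test))  # 1.0
print(accuracy(model, dark_test))   # 0.0
```

Note that overall accuracy on a test set with the same 26:1 skew would still look high, which is precisely how such failures stay hidden until results are disaggregated by skin type.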
Second, pre-existing bias is also present in DermAssist. Pre-existing bias arises when software developers carry individual or societal biases that predate the system into its data collection and design. Google has faced internal scandals over racial bias against employees, which is considered a representative problem for the company (Feathers, 2021). As a result, Google’s software developers risk injecting these pre-existing problems into software development, and partly because of such biases, the training data contained very few dark-skin samples. These algorithmic biases deepen racial bias and lead to ethical issues.
Because of DermAssist’s algorithmic bias, the accuracy of its determinations is limited and the number of misjudgements rises. If too many people go to hospital for further tests because of misjudgements, the health system comes under pressure, waiting times for genuinely ill patients increase, and some may even miss their optimal treatment window. Moreover, DermAssist’s unstable accuracy means doctors and the device may reach contradictory conclusions, eroding trust between patients and doctors.
DermAssist is a practical application of AI in healthcare. The case shows that medical AI for skin detection does help patients in urgent situations, but its results are affected by many factors and are not yet reliably accurate. It also shows that the social problems medical AI can cause through inconsistent results should not be ignored. Therefore, the regulation of AI in healthcare platforms is essential.
Does AI in healthcare need regulation, and how?
AI has the potential to transform all aspects of healthcare. However, although the healthcare industry is heavily regulated, no regulations specifically address AI use in healthcare (Chung, 2021). Given the many concerns and potential ethical challenges surrounding AI in healthcare, it needs to be regulated. Health equity and the elimination of implicit bias, one of the main themes of the HIMSS22 Global Health Conference, are also among the main challenges for AI in healthcare (Mazzolini, 2022). Regulation can ensure the safety and reliability of AI in healthcare platforms and minimise potential harm to patients. The World Health Organization (2021) proposes six guiding principles for the regulation and governance of AI in healthcare:
- Protecting human autonomy
- Promoting human well-being and security and the public interest
- Ensuring transparency, interpretability and understandability
- Fostering responsibility and accountability
- Ensuring inclusiveness and equity
- Promoting sustainable AI
The following summarises possible regulatory approaches for AI in healthcare platforms.
- Platform self-regulation
Healthcare platforms can self-regulate their AI applications. This requires them to establish self-regulatory mechanisms, including internal policies, norms, processes and management systems that ensure the safety and reliability of AI applications.
- Industry standards
Industry standards can facilitate the regulation of AI applications in healthcare platforms. Industry organisations can specify criteria for AI applications, covering algorithm development, testing, auditing and verification. The unintended consequences of using AI should be considered when designing such policies (Chung, 2021).
- Government supervision
The government can formulate laws and regulations to specify the conditions and restrictions on the use of AI applications and set up regulatory agencies to supervise and enforce these regulations. Government regulation should focus on the effectiveness and safety of these applications (Bates, 2023).
This blog has presented the definition, current status, concerns and regulatory issues of AI in healthcare, and used a specific case study to show its practical application. Overall, responsible regulation is a top priority in developing and using AI in healthcare. Provided developers minimise the problems of medical AI, it will bring real practical help to the healthcare field.
Bates, D.W. (2023). How to regulate evolving AI health algorithms. https://doi.org/10.1038/s41591-022-02165-8
Beavis, L. (2022, August 10). Launceston General Hospital patient dies after being ramped for more than nine hours. ABC News. https://www.abc.net.au/news/2022-08-10/lgh-patient-dies-after-being-ramped-for-nine-hours/101313434
Bui, P., & Liu, Y. (2021, May 18). Using AI to help find answers to common skin conditions [Google blog post]. https://blog.google/technology/health/ai-dermatology-preview-io-2021/
Chung, J. (2021, October 18). How will health care regulators address artificial intelligence? The Regulatory Review. https://www.theregreview.org/2021/10/18/chung-how-will-health-care-regulators-address-artificial-intelligence/
Daneshjou, R. [RoxanaDaneshjou]. (2021, May 19). Thread [tweet]. https://twitter.com/RoxanaDaneshjou/status/1394745183015641091?s=20&t=7MIbKh_hjoFT WDWfFcKQSA
Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), 94.
Feathers, T. (2021, May 20). Google’s new dermatology app wasn’t designed for people with darker skin. Vice. https://www.vice.com/en/article/m7evmy/googles-new-dermatology-app-wasnt-designed-for-people-with-darker-skin
Flew, T. (2021). Regulating platforms. Polity.
Google. (2021, May 19). Google Keynote (Google I/O ‘21) – American Sign Language [online video]. https://www.youtube.com/watch?v=Mlk888FiI8A
Hoffmann, A. L. (2018, May 1). Data violence and how bad engineering choices can damage society. Medium. https://medium.com/s/story/data-violence-and-how-bad-engineering-choices-can-damage-society-39e44150e1d4
Insider Intelligence. (2023, January 11). Use of AI in healthcare & medicine is booming – here’s how the medical field is benefiting from AI in 2023 and beyond. https://www.insiderintelligence.com/insights/artificial-intelligence-healthcare/#:~:text=Benefits%20of%20AI%20in%20Healthcare,and%20at%20a%20lower%20cost.
Mazzolini, C. (2022, March 18). Why artificial intelligence in health care needs regulation. Medical Economics. https://www.medicaleconomics.com/view/why-artificial-intelligence-in-health-care-needs-regulation
Nolan, P. (2023). Artificial intelligence in medicine: how do we determine legal liability when things go wrong? (Version 1). Macquarie University. https://doi.org/10.25949/22138298.v1
Patzer, A. (2023, January 20). Ensuring better patient experience and outcomes through artificial intelligence. Forbes. https://www.forbes.com/sites/forbestechcouncil/2023/01/20/ensuring-better-patient-experience-and-outcomes-through-artificial-intelligence/?sh=384da64538a9
Stewart, C. (2023). Artificial intelligence (AI) in healthcare market size worldwide from 2021 to 2030 (in billion U.S. dollars) [Data set]. Statista. https://www.statista.com/statistics/1334826/ai-in-healthcare-market-size-worldwide/
Tham, I. (2018, July 20). Personal info of 1.5m SingHealth patients, including PM Lee, stolen in Singapore’s worst cyber attack. The Straits Times. https://www.straitstimes.com/singapore/personal-info-of-15m-singhealth-patients-including-pm-lee-stolen-in-singapores-most
Urwin, M. (2023, February 28). 14 Machine Learning in Healthcare Examples. Built In. https://builtin.com/artificial-intelligence/machine-learning-healthcare
World Health Organization. (2021, June 28). WHO issues first global report on Artificial Intelligence (AI) in health and six guiding principles for its design and use. https://www.who.int/news/item/28-06-2021-who-issues-first-global-report-on-ai-in-health-and-six-guiding-principles-for-its-design-and-use