
Robot handshake. Image: My Tech Decision (2019)
Artificial intelligence has become an integral part of our daily lives, influencing everything from our social media feeds to the healthcare services we rely on. In China, the potential of AI to revolutionise healthcare delivery has been widely recognised. This post aims to provide clarity and context on commonly misunderstood key terms in technology, including artificial intelligence (AI), algorithm, and machine learning. Drawing on insights from researchers, the post will unpack these key terms and their implications. Using Ping An Good Doctor, a leading healthcare platform in China, as a case study, I will highlight some hidden risks associated with AI-driven medical tools, such as data privacy, biased outcomes and a lack of transparency. The final section will examine China's latest proposed regulations on AI, introduced on 11 April 2023, and discuss their potential effectiveness.
What is Artificial Intelligence (AI)?
Artificial intelligence (AI) is a technology that enables computers to perform advanced functions such as analysing data, recognising spoken and written language, and making recommendations. The primary aim of AI is to create machines that can learn, reason, and act similarly to humans, or even beyond human capabilities. To achieve this, math and logic are used to simulate human reasoning, learning, and problem-solving. One approach to training machines to mimic human reasoning is through the use of neural networks, which are modelled after the human brain and comprise a series of algorithms. (Google, IBM & Microsoft, n.d.)
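To make the neural-network idea above less abstract, here is a minimal, purely illustrative Python sketch of a single artificial neuron (a perceptron): it combines weighted inputs with a simple threshold rule, the building block that networks stack by the millions. The weights and example are invented for illustration, not drawn from any real system.

```python
def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus a bias,
    passed through a step activation (fires if the total is positive)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Toy example: hand-picked weights make this neuron behave like an AND gate.
assert neuron([1, 1], [0.6, 0.6], -1.0) == 1   # both inputs on -> fires
assert neuron([1, 0], [0.6, 0.6], -1.0) == 0   # only one input on -> silent
```

In a real network, these weights are not hand-picked but learned from data, which is where the "training" described below comes in.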
The “Turing Test” in 1950
Decades before this definition, the concept of artificial intelligence was first introduced by Alan Turing in his seminal 1950 paper "Computing Machinery and Intelligence". Turing, widely regarded as the "father of computer science", posed the question "Can machines think?" and proposed the famous "Turing Test", in which a human interrogator tries to distinguish between a computer's and a human's text responses. Although the test has faced much scrutiny since its publication, it remains an important part of the history of AI and a relevant concept in philosophy due to its incorporation of linguistic concepts. (IBM, n.d.)
Artificial Intelligence: A Modern Approach
Stuart Russell and Peter Norvig later published Artificial Intelligence: A Modern Approach, which has become a leading textbook in the field of AI. They are concerned with AI "mainly with rational action. Ideally, an intelligent agent takes the best possible action in a situation. We study the problem of building agents that are intelligent in this sense" (Russell & Norvig, 2021, p. 34). The book outlines four different ways to understand AI, classifying computer systems according to their rationality and their approach to thinking and acting. These include the human approach, which involves systems that think and act like humans, and the ideal approach, which involves systems that think and act rationally. Alan Turing's definition of AI falls under the category of systems that act like humans. (IBM, n.d.)
AI is driven by Data
AI is driven by data that is often obtained from various sources, sometimes without the awareness or intentional contribution of the provider. Megorskaya (2022) argued that "AI stands on three key pillars: algorithms, hardware and data." Large amounts of data are collected, and algorithms use machine learning techniques to identify interdependencies within the data. These processes are then reproduced on new data as it is encountered.
While AI is often viewed as a transformative and essential technology that can lead to a brighter future, some researchers are questioning this widely held belief and challenging the idea that AI is an inevitable development that must be embraced.
AI as extractive problem
One such scholar, Kate Crawford (2021), views AI as an "extractive" problem: "the creation of contemporary AI systems depends on exploiting energy and mineral resources from the planet, cheap labour, and data at scale" (p. 1), and AI is "neither artificial nor intelligent" (p. 7). Machine learning systems are trained on images taken from the internet or state institutions without context and without consent, and are therefore not neutral.
AI as black box
In The Black Box Society: The Secret Algorithms That Control Money and Information, Frank Pasquale (2015) suggested that extensive details about our personal lives and behaviours are being collected by corporations through hidden algorithms concealed in secrecy and complexity. This invades our privacy and can have serious consequences, such as damaging reputations, deciding the future of businesses, and causing major economic problems.
Despite the harmful impact of such practices, regulators have been unable to address the issue adequately; Pasquale (2015) argued that it is essential for individuals and society as a whole to be able to understand how these systems reach their decisions. The book also explored some of the negative consequences of algorithmic decision-making, such as the potential for discrimination, and called for greater regulation to ensure that these systems are used fairly and responsibly.
Algorithms are rules and processes used for tasks such as calculation and data processing. Algorithmic selection on the internet refers to "the process that assigns (contextualized) relevance to information elements of a data set by an automated, statistical assessment of decentrally generated data signals" (Just & Latzer, 2019, as cited in Flew, 2021, p. 82). This involves the interaction of user inputs with data sets to generate outputs through computational processes, and algorithms can be improved to provide better responses to user inputs through repeated interactions with users and data. The decision rules and data-driven processes linked with algorithms influence how we think and consequently act, in terms of agenda setting and framing. (Flew, 2021; Just & Latzer, 2017)
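The feedback loop described above can be sketched in a few lines of Python. This is a deliberately toy model, with invented item names, of how "decentrally generated data signals" (here, simple click counts) feed back into a relevance ranking; real platforms use far richer statistical models.

```python
from collections import defaultdict

clicks = defaultdict(int)          # data signals accumulated from many users

def record_click(item):
    """Each user interaction becomes a signal that updates the model."""
    clicks[item] += 1

def rank(items):
    """Assign relevance from the accumulated signals and sort accordingly."""
    return sorted(items, key=lambda item: clicks[item], reverse=True)

catalogue = ["article_a", "article_b", "article_c"]
for _ in range(3):
    record_click("article_b")      # users repeatedly choose article_b
record_click("article_c")

print(rank(catalogue))             # -> ['article_b', 'article_c', 'article_a']
```

The point of the sketch is the loop itself: every output the system shows shapes which signals it collects next, which in turn reshapes future outputs, which is precisely how agenda setting and framing effects can emerge.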
Are AI and machine learning the same?
Although AI and machine learning are closely related, they are not the same. Machine learning is a subset of AI that involves training machines to analyse and learn from massive amounts of data. By using mathematical models, computers can learn without direct instruction, allowing them to improve and make predictions based on experience. (Microsoft, n.d.) Machine learning algorithms have a wide range of applications, from generating recommendations on Netflix to predicting health risks. (MIT Technology Review, n.d.)
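"Learning without direct instruction" can be illustrated with one of the simplest possible models: a least-squares line fit in plain Python. No rule is hard-coded; the two parameters are derived entirely from the example data, and the fitted model can then make predictions for inputs it has never seen. The data points are invented for illustration.

```python
def fit_line(xs, ys):
    """Learn the slope and intercept that best fit (x, y) examples,
    using the classic least-squares formulas."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Experience": past observations the model generalises from.
xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1]
slope, intercept = fit_line(xs, ys)

# Prediction for an unseen input -- no explicit rule was ever programmed.
print(slope * 5 + intercept)
```

Industrial machine learning swaps this two-parameter line for models with millions of parameters and far more data, but the principle, parameters inferred from examples rather than rules written by hand, is the same.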
Case Study: Ping An Good Doctor —A Quiet Tech Giant
When it comes to Chinese AI healthcare platforms, Alibaba, Tencent, and Baidu are usually the first names that come to mind. However, a young and fast-growing healthcare platform called Ping An Good Doctor is worth mentioning. The platform was developed by the insurance and finance conglomerate Ping An (平安, literally "safe and well") and has been expanding rapidly since its launch in 2015, particularly during the Covid period.
Ping An Good Doctor is currently the largest telemedicine application in China in terms of user scale and coverage, with over 72.6 million monthly active users and 373 million registered users (Davenport, 2021). The platform is based on "mobile medical + AI technology" and supports a medical team of over 2,200 doctors and health professionals. It provides 24/7 medical services, including online consultations (averaging over 903,000 per day), referrals, registrations, and online drug purchases and deliveries (Davenport, 2021). These services are powered by its underlying AI technology and database resources. Users can search for basic information for free, with consultations and treatments available at a cost (Ping An, 2021).
The integrated data analysis package AskBob (AI Doctor), which the company has touted as the medical sector's "ChatGPT", provides doctors with personalised and precise treatment recommendations as well as decision-making support; Ping An claims its AI capabilities in diagnosing and treating cardiovascular disease are comparable to those of human doctors. (Ping An, n.d.)
Overlooked Risks associated with AI medical tools
Data collection and privacy
Ping An's data collection practices echo Crawford's (2021) concept of "AI as extractive industry": the platform is built on a vast database of diseases, medical products, treatments, medical resources and patient information. In other words, its AI healthcare services are actually powered by large amounts of user data drawn from customers, hospitals, medication suppliers, medical practitioners, universities, research institutes and many other sources already stored in Ping An Cloud.

A screenshot illustrating that Ping An's AI chatbot can process both text and images during a consultation. Image: Ping An (2022)
This includes personal information such as demographics, medical history and symptoms, biometrics, credit card details and images related to users' illnesses. Users may not fully understand or agree to what data will be collected and how it will be used. The data could potentially be used for purposes beyond healthcare and shared with third parties, such as entities within the Ping An Healthcare ecosystem and even the government.
Biased outcome and error
According to Ping An (2021), the platform has “accumulated the data of nearly 1,183 million consultations based on its five major medical databases (of medical products, diseases, personal health, prescription treatment, and medical resources) and the AskBob consultation/treatment assistant tool”.
While the company reported an accuracy rate of over 99% for its medical guidance, it is worth noting that its AI system is primarily "trained" on China-based data, such as diseases common in China, Chinese medicine theories, local research results and hospitals' medical information, leading to potential misdiagnosis for users from other regions.
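Why region-specific training data can mislead is easy to see in a toy Python sketch. Here a model that simply ranks diagnoses by their frequency in its (entirely invented) training records will suggest the locally common answer to every user, wherever they come from; this is an illustration of the general mechanism, not a description of how AskBob actually works.

```python
from collections import Counter

# Hypothetical training records: diagnoses seen on one region's platform.
training_diagnoses = (["common_cold"] * 60) + (["dengue"] * 2) + (["flu"] * 38)

model = Counter(training_diagnoses)   # the "model" is just learned frequencies

def top_guess(model):
    """Return the most frequent diagnosis -- the model's default suggestion."""
    return model.most_common(1)[0][0]

# A traveller from a dengue-endemic region still gets the locally common answer.
print(top_guess(model))   # -> common_cold, regardless of the user's background
```

Real diagnostic models are vastly more sophisticated, but the underlying issue scales with them: whatever patterns dominate the training data dominate the predictions.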
Understanding how the platform's algorithms operate and how they bring about changes can be challenging, as they are like a "black box": complex, opaque, and often hard for non-experts to comprehend (Pasquale, 2015). When dealing with real-life data, even those on the inside cannot fully control the effects of their algorithms. As a profit-making corporation, Ping An often gives priority to making money over the reasons behind a specific decision (Auerbach, 2015). Therefore, the suggestions the platform's algorithms make for doctors, treatments, and drugs may not necessarily be the most suitable for users; it is possible that these suggestions are driven by profit motives.
Transparency issue
Issues around transparency and accountability can arise when relying on machine learning algorithms for decision-making. In view of the risks mentioned above, it is important to evaluate carefully whether the decision-making by AI is superior to that of humans, and to assess the reliability of diagnoses and suggested treatments.
CAC security assessments to regulate new AI
On 11 April 2023, the Cyberspace Administration of China (CAC) released draft measures for managing generative AI services, requiring companies to submit security assessments to the authorities before offering such services to the public. This move comes as governments around the world are also exploring ways to rein in rapidly growing generative AI tools. (Shen, 2023)
Like the EU's Artificial Intelligence Act, China's regulations require greater transparency and audits of recommendation algorithms. In addition to prohibiting "deep fake" technology (Hale, 2023) and the manipulation of website traffic statistics, the CAC's rules require new AI products to reflect China's core socialist values and to avoid generating content that suggests regime subversion, violence, or pornography, or that disrupts the economic or social order (Shen, 2023). While a security assessment of enrolled algorithms is included in the proposed rules, its effectiveness in offering valuable insight into these "black box" technologies remains to be seen.
Reference:
Auerbach, D. (2015, January 14). We Can’t Control What Big Data Knows About Us. Big Data Can’t Control It Either. Slate Magazine. https://slate.com/technology/2015/01/black-box-society-by-frank-pasquale-a-chilling-vision-of-how-big-data-has-invaded-our-lives.html
Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Davenport, T. (2021, March 2). The Future Of Work Now: Good Doctor Technology For Intelligent Telemedicine In Southeast Asia. Forbes.
Flew, T. (2021). Regulating Platforms. Polity.
Google. (n.d.). What Is Artificial Intelligence (AI)?. https://cloud.google.com/learn/what-is-artificial-intelligence
Hale, E. (2023, April 13). China races to regulate AI after playing catchup to ChatGPT. Al Jazeera. https://www.aljazeera.com/economy/2023/4/13/china-spearheads-ai-regulation-after-playing-catchup-to-chatgdp
IBM. (n.d.). What is Artificial Intelligence (AI)? https://www.ibm.com/topics/artificial-intelligence
Megorskaya, O. (2022, June 27). Training Data: The Overlooked Problem Of Modern AI. Forbes. https://www.forbes.com/sites/forbestechcouncil/2022/06/27/training-data-the-overlooked-problem-of-modern-ai
Microsoft. (n.d.). Artificial Intelligence vs. Machine Learning. https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/artificial-intelligence-vs-machine-learning/#introduction
MIT Technology Review. (n.d.). Machine learning. https://www.technologyreview.com/topic/artificial-intelligence/machine-learning/
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
Ping An. (2021, March 30). An ecosystem to overhaul China’s health care. MIT Technology Review. https://www.technologyreview.com/2021/03/30/1021421/an-ecosystem-to-overhaul-chinas-health-care/
Ping An. (n.d.). AskBob, China’s AI doctor that is increasingly here to help. Financial Times – Partner Content by Ping an Insurance. https://www.ft.com/partnercontent/ping-an-insurance/askbob-china-ai-doctor-that-is-increasingly-here-to-help.html
Russell, S. J., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.
Shen, X. (2023, April 11). China’s internet watchdog proposes rules, security assessment for AI tools similar to ChatGPT. South China Morning Post. https://www.scmp.com/tech/policy/article/3216691/chinas-internet-watchdog-proposes-rules-security-assessment-ai-tools-similar-chatgpt
Photo Reference:
My Tech Decision. (2019). Robot hand shake [Image]. https://mytechdecisions.com/facility/smart-healthcare-askbob/
Ping An. (2022). Online consultation [Image]. https://www.youtube.com/watch?v=YpStI8PZ5FA&ab_channel=PingAn