Threat to Privacy in a Digital Age: A Case Study of Google Project Nightingale Data Controversy

Introduction

With the emergence of Web 2.0, big tech giants such as Google and Facebook access their users' data through the convenience of multi-sided markets. In simple terms, platform anatomy is fueled by data, automated and organized through algorithms and interfaces, formalized through ownership relations driven by business models, and governed through user agreements (Pasquale, 2015). This means that platforms maintain ownership by collaborating with different business industries. Think about it this way: when people search for a hospital on Google, a few advertisements appear at the top of the results, and this is an example of the ownership-driven business model. This blog aims to showcase how the private data of audiences is exploited for the profit motives of big tech giants. The primary thesis is that Google's Project Nightingale extracted users' healthcare data to maximize profits through artificial intelligence and algorithmic practices. The blog will focus on how Google managed to pull the personal data of its users through its collaboration with Ascension. This will be followed by a discussion of the concepts of algorithms, commodification, and datafication.

Search Engine Algorithm

Did you know that your data is being harvested by the very tech giants who are vigorously working to make your life easier? If not, then I have got you covered. To start with, businesses analyze data in an attempt to understand their customers, at the cost of drawing out their confidential information. This platform anatomy gives rise to Artificial Intelligence (AI). AI, in simple terms, can be understood as the simulation of human intelligence in a machine that is programmed to perform routine tasks like humans (Crawford, 2021). Google's search engine algorithm is a perfect example of AI, as it extends the company's services by mediating between users and various business sectors globally. The major characteristics of digital platforms are that they run on data-driven models and provide third-party access and connectivity through algorithms (Flew, 2021). Through this AI service, Google accessed the medical records of patients via its business collaboration with Ascension, which helped it earn increased revenue. In this blog, we will discuss the example of Google's Project Nightingale data controversy and how the company accessed the private information of its users through algorithms.

Case Study of Google Project Nightingale Data Controversy

Google has become part of our daily lives, as we search for all sorts of things to make life more convenient. Whenever we have a query, Google is the first thing that pops into our minds. As a tech giant with services in almost every country in the world, Google's search engine helps it extract users' data through its built-in algorithmic tactics. AI runs on machine-learning processes that codify and classify the private information of audiences on online platforms (Flew, 2021). In 2019, Google partnered with Ascension, a non-profit Catholic health system, to accumulate user data from existing medical records (Copeland & Needleman, 2019). Through this partnership, Google harvested the health information of 50 million American patients (Copeland & Needleman, 2019). This controversial data mining strategy laid the groundwork for threats to privacy in a digital age.

Would you ever want anybody to pass around your private information in a public forum without your consent? Most probably the answer will be a firm "NO". Moving on, Artificial Intelligence is an extractive industry that exploits the private data of users as a form of exercising power (Crawford, 2021). Google exploited the health information of Ascension's patients to maximize power and profit by giving them personalized search results. Let me break this complex scenario down. By extracting users' data, Google got insight into medical records such as doctors' diagnoses, names, birth years, medical claims, billing records, and hospitalization histories. With all this data, Google could give patients customized options for their next doctor's visit, in the form of advertised partnerships with medical providers that improve profits for both parties. Data is mined when a user is active on Google or other associated businesses, with or without their consent (Flew, 2021). This hints at the fact that users sometimes give away their personal information willingly to tech giants in order to gain knowledge or in exchange for entertainment. This consent often appears in the form of "cookies" on websites associated with Google for advertising purposes. Thus, it can be understood that AI services are a real threat to privacy and raise concerns about the unwanted spread of users' personal data on digital platforms.
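
To make the cookie mechanism concrete, here is a minimal sketch in Python of how an advertising cookie might be issued and then read back on later visits. The cookie name ad_id and the domain ads.example.com are hypothetical, invented purely for illustration; real ad networks use their own identifiers and infrastructure.

```python
# A minimal sketch of an advertising cookie, using only Python's
# standard library. The name "ad_id" and the domain are hypothetical.
from http.cookies import SimpleCookie
import uuid

# First visit: the server assigns the browser a persistent identifier
# via a Set-Cookie header attached to the ad content.
cookie = SimpleCookie()
cookie["ad_id"] = uuid.uuid4().hex               # random ID for this browser
cookie["ad_id"]["max-age"] = 60 * 60 * 24 * 365  # persists for a year
cookie["ad_id"]["domain"] = ".ads.example.com"   # hypothetical ad domain
print(cookie.output())  # the header the server would send

# Later visits: the browser returns the cookie, letting the ad network
# link separate page views into a single behavioral profile.
returned = SimpleCookie("ad_id=" + cookie["ad_id"].value)
print("profile key:", returned["ad_id"].value)
```

Once the same identifier shows up across many sites, scattered visits can be stitched together into exactly the kind of behavioral profile discussed above.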

Data Colonialism

Google is used by all of us on an everyday basis, but most of us are not aware of the company's data mining processes. This section will help you understand the mechanisms by which Google, through its AI services, collects users' data. Data colonialism is the privatization of data produced by users on digital platforms, with related organizations or tech giants claiming ownership of it (Just & Latzer, 2017). Under the Health Insurance Portability and Accountability Act of 1996 (HIPAA), hospitals are allowed to share patient data with business partners without patient consent, provided it is used only to help the covered entities carry out their operations smoothly (Pilkington, 2019). Google's AI services could then surface relevant medication suggestions for patients as they browsed the search engine.

Collection and labeling of data are done through machine-learning activities that enable AI models and algorithms to intervene in the intimate information of users (Pasquale, 2015). Google developed a central network connecting its services with Ascension, in which software automatically reads patients' test reports; this data was accessible to approximately 150 Google employees (Griggs, 2019). This is the kind of intervention through which AI models and algorithms harvest data and threaten the privacy of users in the digital age. AI depends on three core elements: algorithms, data, and hardware (Just & Latzer, 2017). The process works by first collecting the data, then developing interdependencies through machine-learning algorithms, and finally feeding the results back into hardware with every new piece of data. In Google's case, data collection is done by training machine-learning algorithms on direct access to users' private information. The algorithms then personalized and extracted patients' personal information and medication histories. Lastly, tools such as Data Studio, BigQuery, and Datalab are used to allocate and track users' activities on online platforms (Farr & Elias, 2019). This reveals that Google's algorithmic practices are used to develop new business models by monitoring users' behavioral patterns to incur maximum profit.
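
To make the three-element loop above more tangible, here is a deliberately toy sketch in Python. Everything in it is synthetic: the features, the labels, and the "clicked a health ad" outcome are invented for illustration. It shows only the generic collect-train-deploy pattern such systems follow, not Google's actual pipeline or tools.

```python
# A toy collect -> learn -> act loop. All data is synthetic; this is an
# illustration of the general pattern, not any real company's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Element 1 (data): "collected" records. Each row is a user; the two
# columns stand in for made-up behavioral signals.
X = rng.normal(size=(500, 2))
# Hypothetical label: did the user later click a health-related ad?
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Element 2 (algorithm): fit a model linking behavior to outcomes.
model = LogisticRegression().fit(X, y)

# Element 3 (hardware/deployment): score a new user to decide what the
# platform should show them next.
new_user = np.array([[1.2, -0.3]])
print("predicted click probability:", model.predict_proba(new_user)[0, 1])
```

The point of the sketch is how little code the pattern takes: once data flows in continuously, targeting decisions become a routine, automated output.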

AI Analytics

Are you now wondering whether at some moment you too might have ended up giving Google access to your private data? To be honest, all of us, at some point or another, must have unknowingly shared our information with Google's algorithmic services. Big data is used to discover new patterns and find correlations among them rather than precise information (Noble, 2018). One Ascension employee revealed in an interview with CNBC that Google was using tools that were not HIPAA-compliant to import and export data. Further, when Ascension employees asked Google about these tools, they received a delayed response denying any such activities (Farr & Elias, 2019). Algorithms can be defined as the rules and processes established for machine-driven activities such as calculation, data processing, and automated decision-making (Crawford, 2021). This automated decision-making was practiced by Google's AI services: a whistleblower, an employee at Ascension, posted a video on the video-sharing platform Dailymotion showcasing confidential documents relating to Project Nightingale. The whistleblower revealed that through its artificial intelligence practices, Google intended to offer customized medication suggestions to patients on its search engine, which would help it generate more leads and drive up profits (Pilkington, 2019). The algorithmic process is a machine-learning activity that improves over time with diversified data and regular engagement by digital users (Noble, 2018). With a diversified range of data drawn from millions of people's personal information, Google can use its AI analytics to predict the future behavior of patients whose medical records were held by Ascension. In 2017, Google's artificial intelligence unit DeepMind Health was given the data of 1.6 million people in the United Kingdom to observe users' medical records and patterns (Pilkington, 2019). These algorithmic services point to the automated decision-making systems through which search information is presented to users. Therefore, AI analytics makes automated decisions that extract personal data using a diversified range of information.
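
Noble's point that algorithmic processes improve over time with diversified data can be illustrated with a small incremental-learning sketch, assuming synthetic weekly batches of user activity. Again, nothing here reflects any real deployment; it only shows the mechanism by which a model keeps updating as fresh data arrives.

```python
# A hedged sketch of a model that keeps "improving" as new data arrives,
# via scikit-learn's partial_fit. All batches are synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier()
classes = np.array([0, 1])  # must be declared before the first update

for week in range(5):
    # Each "week", a fresh batch of user activity is collected.
    X = rng.normal(size=(200, 3))
    y = (X @ np.array([1.0, -0.5, 0.25]) > 0).astype(int)
    model.partial_fit(X, y, classes=classes)  # update, don't retrain
    print(f"week {week}: batch accuracy {model.score(X, y):.2f}")
```

The more batches the model sees, the more of users' behavioral variety it encodes, which is precisely why continuous data collection is so valuable to platforms.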

In my opinion, technological advancements come with both pros and cons, and here the con is the threat to privacy. Scrutiny of the partnership between Ascension and Google intensified after Google announced its acquisition of the fitness tracker company Fitbit and a cloud-computing medical deal with the Mayo Clinic (Farr & Elias, 2019). Algorithmic oppression manipulates the governance of data and privacy through a "human-out-of-the-loop" approach, in which automated decisions are made by intelligent systems based on pre-determined scenarios fed into machine-learning processes (Crawford, 2021). This human-out-of-the-loop approach suggests that data is mined by tech giants without any legal consent from its owners. Here, the owners are the patients who are unwillingly giving away their private data to Google in exchange for a personalized advertising feed while they browse online. The United Nations Educational, Scientific and Cultural Organization (UNESCO) adopted a governance framework for AI services that recommends fair, transparent, and contestable practices, advising regulatory bodies to take stricter action against illegal data harvesting by tech giants (UNESCO, 2022). Google's initial goal was to make it easier for the entire healthcare industry to extract specific information about patients from medical records. However, this raised privacy concerns for the general public, who were unknowingly sharing their private information with companies bent on enhancing their overall revenue and power in the global market. Google has also paid a penalty of $170 million to the Federal Trade Commission (Garcia, 2019). With this penalty, Google indirectly accepted that it mines the personal information of its users for its own benefit.

Conclusion

This blog has argued that Google extracts the private data of its users to increase its long-term revenue through specific AI and algorithmic practices. The threat to privacy in a digital age is a major concern, as the personal information of users on online platforms is being harvested on a considerable scale. The findings reveal that Google used algorithmic services to extract the health information of 50 million American patients in partnership with Ascension. With its search-based algorithm, the tech giant was able to gather the medical histories of patients in America. Further, personalization comes at the cost of giving away personal data without users' consent. Thus, it can be concluded that this tracing of behavioral activities raises serious concerns on digital platforms, may impact users' personal data negatively, and should be checked with appropriate platform governance to avoid harmful implications in the long run.

References

Copeland, R., & Needleman, S.E. (2019, November 12). Google’s ‘Project Nightingale’ Triggers Federal Inquiry. The Wall Street Journal. https://www.wsj.com/articles/behind-googles-project-nightingale-a-health-data-gold-mine-of-50-million-patients-11573571867

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Farr, C., & Elias, J. (2019, November 12). Google’s hospital data-sharing deal raises privacy fears — here’s what’s really going on. CNBC. https://www.cnbc.com/2019/11/12/google-project-nightingale-hospital-data-deal-raises-privacy-fears.html

Flew, T. (2021). Regulating platforms. John Wiley & Sons.

Garcia, A. (2019, November 15). Google’s ‘Project Nightingale’ center of federal inquiry. CNN. https://edition.cnn.com/2019/11/12/tech/google-project-nightingale-federal-inquiry/index.html

Griggs, M.B. (2019, November 12). Google reveals ‘Project Nightingale’ after being accused of secretly gathering personal health records. The Verge. https://www.theverge.com/2019/11/11/20959771/google-health-records-project-nightingale-privacy-ascension

Just, N., & Latzer, M. (2017). Governance by algorithms: Reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238–258. https://doi.org/10.1177/0163443716643157

Noble, S. U. (2018). Algorithms of oppression. New York University Press.

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

Pilkington, E. (2019, November 12). Google’s secret cache of medical data includes names and full details of millions – whistleblower. The Guardian. https://www.theguardian.com/technology/2019/nov/12/google-medical-data-project-nightingale-secret-transfer-us-health-information

UNESCO. (2022). UNESCO adopts first global standard on the ethics of artificial intelligence. https://www.unesco.org/en/articles/unesco-adopts-first-global-standard-ethics-artificial-intelligence
