
Image source: https://stethoscopemagazine.org/2021/04/03/patient-privacy-in-the-digital-age/
In the 21st century, with the rise and widespread adoption of big data technology, artificial intelligence (AI) has slowly but surely become an integral part of our lives. Our “digital existence” involves storing, sharing, and accessing personal biometric and identity information in structured formats that can be leaked and misused (Lagerkvist, 2018). The AI revolution has also accelerated data sharing, making privacy protection even more challenging. In this digital age, privacy isn’t just about keeping our personal information safe and maintaining our dignity; it is also about safeguarding public interests and promoting human welfare.
The concept and development of the right to privacy
Most people see privacy as a right tied to human dignity and worth, and believe they should have control over their personal information, images, and actions; the scope of this right shifts with societal norms (Friedman, 2011). The legal concept of privacy first took shape in the United States in the late 19th century, when two law partners, Samuel Warren and Louis Brandeis, published an article in the Harvard Law Review titled “The Right to Privacy” (Warren & Brandeis, 1890). Since then, privacy rights have grown and evolved alongside technological advancements and human rights movements. From its beginnings as “the right to be let alone,” privacy has expanded to include self-determination, allowing individuals to make their own choices in their private lives without interference from others (Solove, 2008). As computing technology advanced rapidly in the 1960s, violations of personal information became more prevalent, and privacy rights broadened to cover managing and controlling one’s personal information and preventing its unauthorized use by others (Miller, 1971).
The Rise of Artificial Intelligence
AI has come a long way since the term was coined at the Dartmouth Conference in 1956. Often considered a branch of computer science, AI involves creating machines that perform tasks thought to require intelligence. The mid-2000s saw a surge in AI development within academia and industry, and today powerful tech companies have deployed AI systems worldwide that rival or even surpass human performance on specific tasks (Crawford, 2021).
Privacy crisis in the digital age
Unfortunately, our digital age has also brought about a privacy crisis. The growing trend of datafication and algorithmization has allowed governments and commercial organizations to collect, analyze, and share personal information over the past two decades (Thomas, 1992). Companies mastering AI are particularly interested in personal data because of its potential to drive traffic and generate economic benefits, making it a “strategic resource” (Regan, 2002).
While privacy laws prevent businesses from directly selling personal information for profit, a gray area exists where personal data is used to predict people’s behavior and inform business decisions, potentially blurring the lines between legal and ethical practices.

Image source: https://limevpn.com/is-there-privacy-in-digital-age/
Derek Leben (2018), in his book Ethics for Robots, suggests that AI relies on the evolutionary logic of automating information production. AI development focuses on areas where algorithms can replace humans in making logical judgments. In practice, AI technologies are commonly used in news communication for precise information distribution, with algorithm-driven companies delivering targeted content based on users’ reading preferences and behavior databases (Regan, 2002). However, many of these databases fall into the gray area mentioned earlier, potentially containing both general user behavior data (like time spent on a particular page) and private information (like location, age, gender, and other identifying details provided during registration).
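To make that gray area concrete, here is a minimal, hypothetical sketch in Python of preference-based targeting. Every name in it is invented for illustration and is not taken from any real platform; it simply shows how a behavior database can mix innocuous signals (time spent on a topic) with private registration data (location) when ranking content for a user:

```python
from dataclasses import dataclass, field

@dataclass
class UserRecord:
    user_id: str
    location: str                # private: supplied at registration
    seconds_on_topic: dict = field(default_factory=dict)  # general behavior

def score(topic: str, region: str, user: UserRecord) -> float:
    """Rank an item by observed reading time, boosted by private data."""
    base = user.seconds_on_topic.get(topic, 0.0)
    # The gray area: an identity attribute quietly sharpens the targeting.
    regional_boost = 1.5 if region == user.location else 1.0
    return base * regional_boost

user = UserRecord("u42", location="Ohio",
                  seconds_on_topic={"politics": 620.0, "sports": 45.0})

articles = [("politics", "Ohio"), ("politics", "Texas"), ("sports", "Ohio")]
print(max(articles, key=lambda a: score(a[0], a[1], user)))
# -> ('politics', 'Ohio'): the most precisely targeted item wins
```

Nothing in this toy ranker is illegal on its face, which is exactly the point: the behavioral signal and the identity attribute are indistinguishable once they sit in the same record.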
In today’s world of “digital convenience,” users often aren’t aware that their actions can lead to privacy breaches (Mee & Brandenburg, 2020). Intelligent algorithms collect user information in two ways. The first is fragmentation: users’ simple daily actions on digital platforms are broken down and unknowingly handed over to algorithm-driven companies. The second is universality: nearly all digital media platforms collect user data to some extent, making it practically impossible for users to stay constantly vigilant or to develop self-protection strategies for every platform. Social media has encouraged the habit of sharing our lives online, with almost half of adults posting personal content multiple times a week. As a result, personal information becomes a digital symbol that flows between devices and platforms, easily captured by algorithms (Logan, 2019). In the digital ecology, we are all stitched together and drawn out by data.
Privacy Abuse and Invasion under Algorithmic Logic
In an online environment built on artificial intelligence, the production, circulation, and use of information rely heavily on personal data (Fairfield & Reynolds, 2022). Smart devices and algorithms routinely gather, analyze, and use vast amounts of user data without users’ knowledge, and websites “commercially profile” users based on browsing history, clicks, search records, and preferences.
Cambridge Analytica scandal 2018: Facebook’s darkest hour

Image source: https://www.cnbc.com/2018/04/04/cambridge-analytica-says-no-more-than-30-million-people-impacted.html
The Cambridge Analytica scandal, a significant moment in online data security and privacy, involved the illegal collection of 87 million Facebook users’ data to interfere in the 2016 presidential election.
According to The New York Times, in 2014 a Cambridge University researcher named Aleksandr Kogan developed a personality test app that, through its authorization flow, collected data not only from the Facebook users who installed it but also from their friends. The data was then provided to Cambridge Analytica for political campaign analysis (Granville, 2018).
What information was collected?
Although only about 270,000 people took the test and consented, Cambridge Analytica ended up with data from over 50 million users, a figure Facebook later revised upward to 87 million (Granville, 2018). Cambridge Analytica analyzed users’ behavior patterns, personality traits, values, upbringing, and political leanings, then targeted them with campaign ads and messages designed to influence their voting intentions.
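The gap between 270,000 consenting test-takers and tens of millions of harvested profiles comes down to friend-level access. A back-of-the-envelope check, using only the two figures cited above (the implied average is our inference, not from the reporting):

```python
# Rough arithmetic: how many friends' profiles did each consenting
# test-taker have to expose, on average, to reach 50 million records?
consenting_users = 270_000
harvested_profiles = 50_000_000

avg_friends_exposed = harvested_profiles / consenting_users
print(f"~{avg_friends_exposed:.0f} profiles exposed per consenting user")
# -> ~185, well within a typical Facebook friend count, so no exotic
#    scraping was needed; friend-level authorization alone sufficed.
```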
Facebook’s response?

Image source: https://www.theguardian.com/technology/2019/mar/17/the-cambridge-analytica-scandal-changed-the-world-but-it-didnt-change-facebook
According to an investigation by Channel 4 News (2018), several U.S. political elections were affected by Cambridge Analytica’s data collection and analysis. Initially, Facebook avoided addressing the vulnerabilities in its user data protections. However, as criticism and questions mounted, CEO Mark Zuckerberg acknowledged the breach in a written statement that detailed the data scandal and Facebook’s actions after learning of the data sharing with Cambridge Analytica (CBS News, 2018).
Facebook’s fault?
Facebook’s fault lies in the chain of failures behind the leak: how the data was collected, how it was used, and how (or whether) it was deleted. The breach stemmed from a significant loophole in Facebook’s personal data authorization policy that made it easy for third parties to gather users’ personal data. Taking necessary security measures to protect user data is a basic legal requirement, and Facebook clearly failed to meet it here. Zuckerberg admitted that Facebook previously displayed user birthdays and friends’ locations, and encouraged users to log in to different apps and share private information (CBS News, 2018). Facebook’s policy allowed apps to access friends’ information with only the installing user’s consent, and although Facebook introduced restrictions on app access to user data in 2014, third-party software had already been collecting data without “informed consent” (Isaak & Hanna, 2018).
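A minimal, hypothetical sketch of that authorization loophole follows. The function and field names are invented for illustration; this is not Facebook’s actual API, only the shape of the policy flaw described above:

```python
# Hypothetical sketch of the pre-2014 friend-data loophole: consent is
# checked once, for one user, yet friends' records ship too.

def fetch_profile_with_friends(graph: dict, user_id: str,
                               user_consented: bool) -> list:
    """Return the consenting user's record plus every friend's record."""
    if not user_consented:
        return []
    user = graph[user_id]
    records = [user]
    for friend_id in user["friends"]:
        records.append(graph[friend_id])  # these users never opted in
    return records

graph = {
    "alice": {"name": "Alice", "friends": ["bob", "carol"]},
    "bob":   {"name": "Bob",   "friends": ["alice"]},
    "carol": {"name": "Carol", "friends": ["alice"]},
}

# One consent, three records: the harvest scales with the friend graph.
print(len(fetch_profile_with_friends(graph, "alice", True)))  # -> 3
```

The design flaw is the granularity of the consent check: authorization is evaluated per installing user, while the data returned spans the whole friend graph.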
When Facebook discovered the user data breach involving Kogan and Cambridge Analytica, it only asked for “legal documents” attesting that the data had been deleted, without verifying the deletion. This lack of oversight enabled the data’s use in the 2016 presidential election and compounded the damage.
User privacy is a major concern with data breaches. The information Kogan collected included users’ addresses, genders, races, ages, work experiences, educational backgrounds, social networks, activities, and preferences. Big data tools can analyze such personal data not only to identify specific individuals but also to track their shopping habits, increasing the risk of privacy exposure in the digital age (Mee & Brandenburg, 2020). Over the past few years, privacy legislation has become a priority in various countries, with social media companies like Facebook and Twitter becoming primary targets of personal data protection enforcement.
Deeper Risks

Image source: https://theweek.com/cartoons/766413/political-cartoon-mark-zuckerberg-facebook-cambridge-analytica-data-privacy-scandal-congress-fake-news-trust
The Facebook data scandal, involving up to 87 million users, may be smaller in scale than previous breaches like Yahoo’s 3 billion accounts or Equifax’s 143 million credit records, but it raises unprecedented concerns. These concerns extend beyond privacy to the impact of using social network data on political campaigns, alarming governments and society as a whole. The focus of data protection has long been on the damage caused by leaks and improper use, but we must also recognize that personal data carries national and public interest (Laterza, 2021). The seriousness of this breach is exacerbated by the data’s use in the U.S. presidential campaign, potentially affecting the national political environment and even threatening democracy. UK government departments have launched investigations into Cambridge Analytica, which they believe also played a significant role in the Brexit vote.
A 2012 article in Nature examined whether social networks could influence voting behavior (Aral, 2012). The study found that Facebook users who received specific voting messages during the 2010 U.S. congressional midterm elections were more likely to vote than those who received general messages (Laterza, 2021). Social media platforms like Facebook and Twitter have since become key channels for political advertising. Cambridge Analytica capitalized on this opportunity by analyzing users’ data and placing political ads tailored to their political tendencies and personal preferences, all without users’ knowledge.
How to protect our data?
The Facebook-Cambridge Analytica scandal highlights the importance of regulating “data profiling” in the era of big data and AI. Profiling has recently caught the attention of legislators. The EU’s General Data Protection Regulation imposes strict restrictions on data profiling: it requires a legal basis or explicit user consent, and it prohibits profiling based on sensitive attributes such as race or ethnicity, political affiliation, religious beliefs, sexual orientation, and gender, as well as analyses that achieve the same discriminatory effect (GDPR, 2017).
To protect user data, platforms must meet informed consent requirements, providing thorough explanations before authorization and offering convenient privacy settings (Phillips, 2018). Users, in turn, should understand that their behavioral data will be exposed to different degrees depending on their privacy settings, and will be analyzed and used by social network platforms and third parties according to the permissions they grant.
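As one concrete reading of “convenient privacy settings,” here is a minimal sketch assuming a per-field opt-in model of our own devising; neither GDPR nor any platform mandates this exact design. Attributes the user has not authorized are stripped before a record reaches any analysis or third party:

```python
# Minimal sketch of per-field consent (an assumed design, not a real
# platform's API): drop every attribute the user did not opt in to.

def redact(record: dict, consented_fields: set) -> dict:
    """Keep only the fields the user has explicitly authorized."""
    return {k: v for k, v in record.items() if k in consented_fields}

record = {"user_id": "u42", "page_views": 128,
          "location": "Ohio", "birthday": "1990-05-01"}
consented = {"user_id", "page_views"}  # user declined identity fields

print(redact(record, consented))
# -> {'user_id': 'u42', 'page_views': 128}
```

The design choice worth noting is that redaction happens at the boundary, before data flows onward, rather than relying on downstream analysts to honor the settings.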
In conclusion
As science and technology progress, artificial intelligence, with data and algorithms at its core, has developed rapidly. While it brings convenience to our lives, it also threatens personal privacy. Material progress in today’s society has been accompanied by growing attention to values and rights: with the rise of modern individualism and the flourishing of the human rights movement, people increasingly demand freedom and independence and hold higher expectations for privacy. In the age of artificial intelligence, it is crucial for individuals, societies, nations, and all stakeholders to collaborate on the challenges we face, mitigating the negative effects of artificial intelligence so that it can better serve humanity.
Reference List
Aral, S. (2012). Social science: Poked to vote. Nature, 489(7415), 212. https://doi.org/10.1038/489212a
Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press. https://doi.org/10.2307/j.ctv1ghv45t
Solove, D. J. (2008). Understanding Privacy. Cambridge, MA: Harvard University Press.
Channel 4 News. (2018). Data, Democracy and Dirty Tricks. Retrieved 15 April 2023, from https://www.channel4.com/news/data-democracy-and-dirty-tricks-cambridge-analytica-uncoveRed-investigation-expose
EU General data protection regulation (GDPR) : an implementation and compliance guide (Second edition.). (2017). IT Governance Publishing.
CBS News. (2018). Facebook CEO Mark Zuckerberg breaks silence, admits “breach of trust”. Retrieved 15 April 2023, from https://www.cbsnews.com/news/mark-zuckerberg-facebook-ceo-cambridge-analytica-data-scandal-statement-today-2018-03-21/
Fairfield, J., & Reynolds, N. (2022). Griswold for Google: Algorithmic Determinism and Decisional Privacy. The Southern Journal of Philosophy, 60(1), 5–37. https://doi.org/10.1111/sjp.12454
Friedman, L. M. (2011). The Human Rights Culture: A Study in History and Context. Quid Pro, LLC.
Granville, K. (2018). Facebook and Cambridge Analytica: What You Need to Know as Fallout Widens. The New York Times. Retrieved 15 April 2023, from https://www.nytimes.com/2018/03/19/technology/facebook-cambridge-analytica-explained.html
Isaak, J., & Hanna, M. J. (2018). User Data Privacy: Facebook, Cambridge Analytica, and Privacy Protection. Computer, 51(8), 56–59.
Lagerkvist, A. (Ed.). (2018). Digital Existence: Ontology, Ethics and Transcendence in Digital Culture (1st ed.). Routledge.
Laterza, V. (2021). Could Cambridge Analytica Have Delivered Donald Trump’s 2016 Presidential Victory? An Anthropologist’s Look at Big Data and Political Campaigning. Public Anthropologist, 2021(1), 119–147. https://doi.org/10.1163/25891715-03010007
Leben, D. (2018). Ethics for Robots: How to Design a Moral Algorithm (1st ed.). Routledge.
Logan, R. K. (2019). Understanding Humans: The Extensions of Digital Media. Information, 10(10), 304. https://doi.org/10.3390/info10100304
Mee, P., & Brandenburg, R. (2020). Digital Convenience Threatens Cybersecurity. In MIT Sloan Blogs. Massachusetts Institute of Technology, Cambridge, MA.
Miller, A. R. (1971). The Assault on Privacy: Computers, Data Banks, and Dossiers. University of Michigan Press.
Phillips, M. (2018). International data-sharing norms: from the OECD to the General Data Protection Regulation (GDPR). Human Genetics, 137(8), 575–582. https://doi.org/10.1007/s00439-018-1919-7
Regan, P. M. (2002). Privacy as a Common Good in the Digital World. Information, Communication & Society, 5(3), 382–405. https://doi.org/10.1080/13691180210159328
Thomas, K. (1992). Beyond the Privacy Principle. Columbia Law Review, 92(6), 1431–1516. https://doi.org/10.2307/1122999
Warren, S. D., & Brandeis, L. D. (1890). The Right to Privacy. Harvard Law Review, 4(5), 193–220. https://doi.org/10.2307/1321160