The Privacy Paradox in the Digital Age: Balancing Rights and Convenience

Image 1: The New York Times (2014)

Introduction

This is an era of increasing emphasis on privacy, yet it is also an era in which privacy is constantly laid bare. When you check “I have read and agree to the above terms” and click “Accept all cookies”, have you actually read the terms listed on the website? Have you ever worried that your privacy might be leaked? In practice, almost everyone clicks to agree and signs an agreement with an Internet company, even though most people say they attach great importance to protecting their personal privacy.

This phenomenon is not accidental. When we notice that the big-data recommendations from social platforms, politely called “personalized push”, often seem to “peep” into our minds, we finally realize that our privacy may have been compromised. From a legal perspective, however, platforms are required to collect user information reasonably and to preserve and use it properly. So it is very likely that we leaked our privacy ourselves, but this does not mean that we hold personal privacy in contempt. We are simply trapped in the “privacy paradox.”

What is the privacy paradox?

The “privacy paradox” originated in the medical field. Researchers found that although patients strongly did not want their private information disclosed, their attitudes changed after receiving an authoritative explanation (Furlong, 1998). The research gradually expanded to the consumer field. In a study of online shopping, researchers identified the “privacy paradox” by measuring users’ privacy attitudes and observing how online shoppers handled private data: although users expressed concerns about privacy violations, they were still willing to provide personal information to merchants when there was something to gain.

In 2006, the term “privacy paradox” was first formally proposed, to describe differences in privacy behavior between groups on social networks: middle-aged people worry more about privacy leaks, while young people tend to disclose their privacy (Barnes, 2006). Subsequent experimental research also confirmed the existence of the privacy paradox. Acquisti and Gross (2006) found that although many people said they did not want others to know personal information such as their sexual orientation, they would still disclose that information publicly on social media. The privacy paradox therefore refers to the divergence between users’ stated attitudes toward protecting their own privacy and their actual behavior.

Why does the privacy paradox exist?

Bounded rationality and instant gratification. With the popularization of Internet technology and the development of big data, more and more companies are able to mine and study users’ private information in depth. When scholars noticed the inconsistency between users’ privacy attitudes and behaviors, they began to explain it through the theory of “bounded rationality”. On this view, individuals making decisions cannot obtain all the relevant information and do not have sufficient knowledge or ability, so they are not completely rational when disclosing privacy (Tversky & Kahneman, 1974). The “privacy paradox” is thus partly caused by information asymmetry and users’ limited cognition. Users lack the ability to weigh benefits against risks, as well as legal knowledge about privacy protection. As a result, they are unwilling to spend much energy managing their private information, which makes effective protective measures difficult to implement; their decisions are often the product of bounded rationality.

In addition to limited cognition, “instant gratification” is another major cause of the “privacy paradox” (Tversky & Kahneman, 1974). Users tend to want their needs met immediately, and generally believe that the benefits gained from disclosing privacy outweigh the losses from the risks. This weakens individuals’ concerns about privacy disclosure and increases the risk of privacy leakage. So we tend to agree by default without reading privacy policies, mistakenly believing that they will protect our privacy.

Image 2: Social networking customizable isometric illustrations | Amico Style

Social needs. As online social platforms penetrate our daily lives, people become more inclined to express themselves online than through traditional interpersonal communication in order to maintain their social lives. The anonymity of social platforms allows users to show their true selves online, or even to deliberately construct a virtual self, achieving psychological identity and satisfaction through information sharing that breaks down barriers. Whether posting comments or sharing daily life, all of this increases the risk of privacy leaks. In addition, platform service providers, building on this social premise, track users’ online traces as thoroughly as they can in order to provide personalized, precisely targeted push services, making users dependent on them. Therefore, although users are aware of the dangers of excessive privacy disclosure, they still disclose personal information in order to enjoy the convenience and fun of the Internet.

Forced consent in the media society. In the new media environment, platforms have become social infrastructure, and leaving a platform means being excluded from society. For example, during the COVID-19 epidemic, Chinese citizens had to use a “health code” to travel; without one, entering public places or taking transportation was obstructed. If we want the services a platform provides, we can only be forced to agree to its privacy terms when registering an account. In such a mediatized society, only by being forced to “actively” sacrifice one’s privacy and connect to the platform can one be included in normal social life.

Case study: The “Zao” software’s face-changing controversy

Image 3: Screenshot of a face-swapping video made by a netizen

In recent years, a face-swapping app called “Zao” became popular on social networks; on the day of its launch, it ranked second among all app downloads. It is a new piece of software that uses AI to swap faces: with just one photo, people can easily exchange faces with celebrities. But as registrations increased, some users discovered loopholes in its terms. Once a user successfully registered, they were deemed to have agreed to the app’s user agreement and privacy policy by default, and this grant of rights was irrevocable (The Economist, 2019). This meant that users had permanently transferred their portrait rights to the software without knowing it, and their images could very well be used and altered by others. In addition, “Zao” had not obtained portrait-rights authorization from the celebrities involved; if a celebrity believed their rights and interests had been infringed, the operator bore no legal responsibility, leaving users to bear it themselves.

This clause triggered disputes over issues such as the software’s copyright terms, data leakage, ethics, and privacy security, and made people more alert to the security of personal data on online platforms. The app also required facial-recognition verification, including actions such as blinking, turning the head, and opening the mouth. These are nearly identical to the facial-recognition checks required by financial institutions; once facial feature data is stolen, a user’s personal property is no longer safe. Although the operator of “Zao” later deleted this “overlord” clause and apologized to users, that could not change the fact that the app went from viral popularity to being ordered to make corrections within six days of its launch.

In fact, the technology of using artificial intelligence to swap faces appeared several years ago. As the technology advances, the barriers to “AI face swapping” keep falling, and in recent years face-swapping apps have multiplied. For example, FaceApp, a Russian face-editing application, can make the face in a picture look older or younger; for a time, celebrities and ordinary citizens alike shared their “old age photos”. But while people enjoy the innovation of AI technology and the fun these apps bring, they cannot ignore the privacy-leakage problems that come with them.

Governance dilemma: choosing between development and privacy protection

In the new media environment, personal privacy exists in the form of data on personal behavior and preferences. These data are key to how major companies improve their market competitiveness, and an important pillar of the Internet economy. Excessive protection and governance of online privacy would dampen the production and vitality of data, and thus hinder the development of the Internet economy. Meanwhile, widespread voluntary disclosure of privacy coexists with extensive information monitoring, and the conflict between the two makes it even harder to protect and manage user privacy in the new media environment. To a certain extent, this reflects the difficult balance that states, societies, and individuals must strike between personal rights and industrial development.

How to implement user privacy protection?

Improve the network supervision capabilities of social platforms. As software becomes more widely used, a platform’s social and moral responsibilities bring correspondingly higher legal responsibilities. Understandings of privacy vary among countries’ constitutions because of diverse cultural backgrounds, histories, and philosophical influences; this diversity of interpretation reflects each country’s societal values and legal traditions, and shapes its legal framework for privacy protection (Goggin et al., 2017). For example, in November 2021, China’s first dedicated law on personal privacy protection, the Personal Information Protection Law, came into effect. The law adds clear regulations against applications that excessively collect or leak user information, or that illegally buy and sell it. This not only fills gaps in the current online ecosystem but also demonstrates the state’s determination to rectify unhealthy practices on network platforms. Australia likewise has a range of legal and policy measures to protect personal privacy, including the Privacy Act and the Australian Privacy Principles. These laws and principles require organizations and businesses to follow specific privacy standards when processing personal information, and protect it from unauthorized collection, use, and disclosure (Goggin et al., 2017).

Democratizing social media platforms. In 2009, Facebook committed to consulting users on any changes to its rules and to deferring, through a new voting process, to the popular will of its users (Suzor, 2019). The aim was to make this massive social network more democratic and to respond to criticism of its privacy-policy changes. However, the initiative did not achieve the expected results, because Facebook set participation thresholds: user turnout was never high enough to influence the platform’s decision-making, so in the end the platform was not truly democratized. Still, this case offers lessons for democratizing social media platforms: improve platform transparency, actively encourage users to participate in discussions and decision-making, and build users’ trust in the platform. Some researchers have also proposed allowing full competition in the security market, to achieve a relative balance of resources and form a kind of industry self-discipline. This includes platform operators making their terms clearer, so that users know how their personal information will be used, and offering users simpler privacy controls.

Conclusion

The balance between rights and convenience has always been the central issue in the “privacy paradox”. Privacy problems cannot be solved by technical capabilities alone; they involve psychology, society, marketing, and other fields, and require the participation of many parties. Platform operators, who control users’ data, should continue to strengthen the protection of users’ personal information: face users in a more democratic and friendly way when collecting information, accept suggestions and opinions, follow national network-security laws when obtaining information, and clearly and truthfully label developer information. As for users, although they are always under the “data gaze” and “intelligent monitoring” (Suzor, 2019), to keep their private data secure they should carefully examine permission requests when authorizing personal information, and improve both their awareness of self-protection and their media literacy.

In summary, the privacy-paradox behavior of social media users is in fact a game among self-disclosure, privacy concerns, and risk perception. Protecting user privacy, without infringing the rights of data subjects and only with their authorization, while making better use of data resources, is the long-term goal of governments and Internet platforms alike.

References

Acquisti, A., & Gross, R. (2006, June). Imagined communities: Awareness, information sharing, and privacy on the Facebook. In International workshop on privacy enhancing technologies (pp. 36-58). Berlin, Heidelberg: Springer Berlin Heidelberg.

Barnes, S. B. (2006). A privacy paradox: Social networking in the United States. First Monday, 11(9).

Flew, T. (2021). Regulating Platforms. Cambridge: Polity, pp. 72-79.

Furlong, A. (1998). Should we or shouldn’t we? Some aspects of the confidentiality of clinical reporting and dossier access. International Journal of Psychoanalysis, 79, 727-740.

Goggin, G., Vromen, A., Weatherall, K., Martin, F., Webb, A., Sunman, L., Bailo, F. (2017) Executive Summary and Digital Rights: What are they and why do they matter now? In Digital Rights in Australia. Sydney: University of Sydney. 

Karppinen, K. (2017). Human rights and the digital. In The Routledge companion to media and human rights (pp. 95-103). Routledge.

Suzor, N. P. (2019). Who makes the rules? In Lawless: The secret rules that govern our lives (pp. 10-24). Cambridge, UK: Cambridge University Press.

The Economist. (2019, September 7). Chinese netizens get privacy-conscious. https://www.economist.com/business/2019/09/07/chinese-netizens-get-privacy-conscious

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.

Images

Image 1: The New York Times. (2014). Americans Say They Want Privacy, but Act as if They Don’t.

Image 2: Storyset. Social networking customizable isometric illustrations | Amico Style.
