Navigating Privacy Challenges from Life to Death in Social Media

"Exploring the complex terrain of digital privacy: From face data collection to managing digital legacies, this blog delves into the multifaceted challenges traversing life to death in the realm of social media."

In the digital age, users exchange and share personal data to obtain high-quality services across social media networks, creating an ongoing tug-of-war between the dissemination of information and personal privacy. With exploding data volumes, advancing technology, and a growing conflict between private rights and the public interest, privacy is becoming increasingly complex. From the collection of facial data to the rise of fake videos to the handling of digital legacies, everyone is navigating an increasingly intricate maze of digital privacy. When your facial features and personal information are captured and stored in databases holding hundreds of millions of records, someone can create a realistic fake video showing you doing something you never did. Even after you leave this world, the digital legacy you leave behind can become a target for hackers.

This blog takes an in-depth look at the challenges to personal privacy in social media, even after death. It then considers ways to manage personal information and digital legacy protection in the evolving online environment.

Is Privacy Still Challenged on Social Media Platforms?

Privacy is a fundamental right and freedom that citizens enjoy in the digital space, and it remains central to debates over social media platforms’ rapid expansion and global reach. Privacy encompasses personal security, freedom from interference, and control over personal information and space (Flew, 2021, p. 74). This blog examines the impact of face data collection, deepfake technology, and hacking attacks on individuals’ lifetime privacy, cybersecurity, and society.

Face data collection and user privacy agreement

Advances in platform algorithms and deep learning have made the identification and analysis of personal information more accurate and intelligent, increasing the likelihood of privacy violations. On the one hand, social media and tech platforms frequently face accusations of collecting, misusing, and selling users’ personal information. The Chinese AI face-swapping app ZAO was accused of privacy violations over its controversial user agreement. Before using the app, users had to authorize ZAO and its affiliates to modify, edit, sublicense, and relicense their uploaded content, including modification of faces or voices, subject to the consent of the relevant portrait rights holder (Antoniou, 2019). In effect, ZAO users relinquished the intellectual property rights to their images and allowed the app to use them for marketing purposes. This runs against users’ basic need for privacy and self-expression and increases the risk of misuse or leakage of user data. Governments, organizations, and users should learn from the EU’s General Data Protection Regulation (GDPR) and work together to establish a robust data protection regime for the digital age. The GDPR requires that data processing be “lawful, fair and transparent” to maximize the protection of user privacy and security, as the right to data protection emerges as a new human rights requirement (Goggin, 2017, p. 97). Collecting face data without permission and authorization may constitute a criminal act that violates citizens’ portrait rights, privacy rights, and personal information security.

Chinese AI face-swap app Zao collects face data (Photo from Reuters).

On the other hand, the complexity and vagueness of online service agreements give platform operators excessive power. Users often fail to read these terms carefully, and the agreements rarely define clearly what counts as modification, editing, or sublicensing, making it difficult for users to understand their rights and responsibilities. As a result, they may inadvertently agree to share their data with service providers when they register. This creates an information asymmetry between consumers and platforms over the protection of personal information and privacy, compounded by the bundling nature of consent terms and the blurring of the line between personal and non-personal information (Flew, 2021, p. 75). Moreover, many users lack awareness of how to autonomously control and protect their data flows. Unidentified links or unknown app recommendations frequently surface on social media, sparking curiosity; clicking or downloading them may expose users to malware, viruses, or phishing websites that steal personal information. Users should carefully check URLs for security or install reliable anti-virus software to protect their privacy.
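As a small illustration of what “checking a URL for security” can mean in practice, the sketch below flags links that fail a few simple heuristics: no HTTPS, a raw IP address instead of a domain, a user@host trick, or a suspicious top-level domain. The heuristics and the sample TLD list here are assumptions for demonstration only, not a substitute for real anti-phishing tools.

```python
from urllib.parse import urlparse

# Hypothetical examples of TLDs often seen in phishing campaigns;
# a real tool would consult a maintained threat-intelligence feed.
SUSPICIOUS_TLDS = {".zip", ".xyz", ".top"}

def looks_risky(url: str) -> bool:
    """Flag URLs that fail a few simple safety heuristics."""
    parsed = urlparse(url)
    if parsed.scheme != "https":            # unencrypted connection
        return True
    host = parsed.hostname or ""
    if host.replace(".", "").isdigit():     # raw IP address instead of a domain name
        return True
    if any(host.endswith(tld) for tld in SUSPICIOUS_TLDS):
        return True
    if "@" in parsed.netloc:                # user@host trick to disguise the real domain
        return True
    return False

print(looks_risky("http://192.168.0.1/login"))     # flagged: no HTTPS, raw IP
print(looks_risky("https://example.com/profile"))  # passes these simple checks
```

Heuristics like these catch only the crudest lures; they complement, rather than replace, browser safe-browsing lists and anti-virus software.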

Zao’s user privacy agreement sparks controversy.

Misuse and spread of deepfake technology

In addition, the face-capture feature of apps like ZAO makes users’ data easy to access. Users may not realize that their face data can be used to generate false imagery for nefarious purposes, such as producing deepfake pornography or other fabricated content. Data collection without the user’s fully informed consent seriously violates the right to privacy.

Deepfake technology uses AI and deep learning algorithms to synthesize human facial and behavioural features into highly realistic fake videos, photos, and audio. Increasingly, bad actors use deepfakes to create non-consensual content for violent or pornographic scenes. Such acts can seriously threaten an individual’s privacy, property, security, and reputation through fabricated statements, behaviours, or events. Women are the primary targets of deepfakes, especially female celebrities: a Deeptrace Labs study found that 96% of deepfake videos are non-consensual pornography, objectifying women or turning them into sexual fantasies (Henry & Witt, 2024). For instance, the American singer Taylor Swift became a victim of deepfakes when numerous fake sexually explicit images of her spread across the Internet. Individuals have “the right to be forgotten” and can sometimes request that search engines or platforms remove search results or content to protect their privacy and rights (Goggin, 2017, p. 97). However, even when content is removed, the Internet’s high-speed spread means much of the damage to celebrities is already done. Paris (2021, p. 8) also points out that disseminating such fake images harms vulnerable groups, manipulates viewers’ attitudes towards them, and reinforces inequalities in existing social structures.

Image from Pixabay/ Kingstoncourier

However, platforms face limitations in protecting users’ rights. Different platforms apply different standards, and there is uncertainty about when to remove content and how to balance freedom of expression against privacy. Under Section 230 of the Communications Decency Act, which protects freedom of expression on the Internet, Internet service providers enjoy certain immunity from liability. Platforms usually position themselves as neutral intermediaries, acting solely as facilitators of content dissemination and conversation to avoid responsibility for user behaviour (Suzor, 2019, p. 15). Still, if false information on a platform violates human rights or usage policies, the platform may self-govern by removing or restricting the content. In many countries, formal and informal regulatory rules have yet to effectively address the misuse of deepfake technology, leaving legal gaps and regulatory loopholes in privacy protection. It is vital to implement legislative and regulatory initiatives such as the Digital Services Act and the Code of Practice on Disinformation, and to educate the public in digital literacy, stressing the labelling of deepfake content and the responsibility of platforms (Aggarwal, 2024). This co-governance contributes to safeguarding individual rights and social stability.

Cybersecurity and hacking

Because of their huge user bases, extensive functionality, and commercial value, social media platforms have become ideal targets for hacking, creating the risk of user data leakage and privacy breaches. A survey on the causes of sensitive data loss in global organizations found that most incidents stemmed from operating system (OS) vulnerabilities on endpoint devices and from external attacks, including cybercriminals (Petrosyan, 2024). Common attack types include ransomware, phishing, weak passwords, DDoS attacks, and IoT device intrusion. Hackers continually target social media platforms to access or steal sensitive user data, including personal information, login credentials, and private messages, and then use it for political purposes or illegal activities. In one case, hackers employed deepfake voices to breach Retool’s systems, luring employees into providing multi-factor authentication codes via phishing text messages and gaining access to company accounts (Kan, 2023). In another, deepfake techniques were used in a Hong Kong multinational firm’s videoconference to impersonate its CFO and defraud the company of $25 million (Chen & Magramo, 2024). Regula’s (2023) survey reveals that 37% of organizations have experienced deepfake voice fraud, while 29% have fallen victim to deepfake videos. These figures expose organizational vulnerabilities and regulatory shortcomings in cybersecurity.

Hacking businesses for PII data and commercial fraud (Image from depositphoto).

It is worth noting that data breaches can have catastrophic ripple effects. Employee non-compliance may result in data misuse or theft, damaging a company’s reputation and exposing it to legal sanctions. Breaches impose significant economic costs, including investigation, remediation, audit, and potential legal expenses (Huang et al., 2023); in the worst cases, a company also loses competitive advantage and the trust of its business stakeholders. A breach of Personally Identifiable Information (PII) can infringe personal privacy rights with a range of negative consequences, including identity theft, harassment, financial fraud, and threats to personal safety. Organizations should rely on the three fundamental principles of data security, namely confidentiality, integrity, and availability (Li & Liu, 2021, p. 8182), to ensure that data is protected from unauthorized access, its integrity is safeguarded, and it remains available to authorized users, creating positive social impact and online safety.

Embracing Digital Legacy: A New Social Media Privacy Challenge

Once we understand these challenges to social media privacy, a new problem creeps in. As we leave ever more traces on social media, do we realize that these digital footprints will live on after we are gone and pose new challenges to our privacy?

When an individual passes away, their online presence remains. For instance, comedian Gilbert Gottfried’s Twitter account was allegedly hacked multiple times just hours after his family announced his death (Zilio, 2022). This raises questions about the ownership and management of the deceased’s digital assets and personal information. Edwards and Harbinja (2013, p. 101) suggest that individuals should have the power to preserve and control their reputation, dignity, integrity, secrets, and memories after death. Handling digital assets and presence is a complex issue involving many factors, such as the right to privacy, the inheritance system, and the protection of personality rights.

Guidelines for Managing the Digital Legacy

Managing digital assets and presence involves navigating user preferences, platform regulations, and national laws; understanding these dynamics is essential to safeguarding individual rights and privacy in the digital era. Platforms handle digital legacies in different ways: Twitter and LinkedIn deactivate the deceased’s account, while Facebook and Instagram offer a “memorial account” option under which heirs can manage a deceased user’s profile, although it is no longer publicly updated. Legitimate heirs must provide proper documentation, such as death certificates and proof of relationship. Effective management of a deceased person’s social accounts not only avoids possible commercial abuse or illegal attacks but also protects their digital identity and privacy.

Illustration by Jamie Wignall/ The Guardian

Furthermore, users need to clearly express their preferences for the disposition of their digital assets, for example by appointing a trusted trustee or executor to manage them. Interestingly, there is a contradiction here: people may gladly let their loved ones inherit valuable accounts, photos, and videos as mementos after their deaths, yet they also worry that someone will discover their private information. Individuals should have the right to decide what happens to their digital legacy through predetermined decisions on whether digital assets are deleted, shared, or inherited. The will/choice theory suggests that posthumous privacy rights should be based on the preferences individuals express about data processing during their lifetime (Okoro, 2018, p. 11). In practice, however, there is a privacy paradox: a gap between a privacy subject’s expressed intentions for post-death privacy information and how they actually behaved during their lifetime (Morse & Birnhack, 2022, p. 3). Digital asset management involves multiple stakeholders, blurring privacy boundaries and the attribution of responsibility. Platform service providers should balance privacy with legal requirements, but imperfect laws and users’ lack of awareness of digital legacy planning add to the difficulty. Family expectations, technical limitations, and administrative hurdles may also prevent the deceased’s intentions from being realized. When individuals or executors struggle with service providers for access to digital assets, they often invest considerable money, time, and effort in legal proceedings and technical hurdles. One solution is to use platforms such as MyWishes and Clocr to create digital legacy plans, supported by the legal framework and privacy protections of the RUFADAA legislation (Birnhack & Morse, 2022, p. 289).


In this blog, we have delved into data collection, deepfake misuse, and hacking scams to reveal the growing challenges of privacy protection. Social media’s role as a “digital graveyard” has raised new privacy concerns, including the management of digital assets and presence and the protection of online identity. Nevertheless, we also found some encouraging trends and solutions. Protecting personal privacy and digital legacy is not only a technical issue but also one that touches on individual awareness, values, and social responsibility. These challenges highlight the urgency of improving transparency, creating accountability, and empowering users. By combining regulatory development, platform governance, and societal effort, we can collectively create a safer, more privacy-respecting digital world.


References

Regula. (2023, May 1). Regula survey: A third of businesses hit by deepfake fraud. The Paypers.

Antoniou, A. (2019, September 6). Zao’s deepfake face-swapping app shows uploading your photos is riskier than ever. The Conversation.

Aggarwal, T. (2024, May 28). Navigating the deepfake dilemma: Government oversight in the age of AI. Observer Research Foundation.

Birnhack, M., & Morse, T. (2022). Digital remains: property or privacy? International Journal of Law and Information Technology, 30(3), 280–301.

Chen, H., & Magramo, K. (2024, February 4). Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’. CNN.

Edwards, L., & Harbinja, E. (2013). Protecting post-mortem privacy: Reconsidering the privacy interests of the deceased in a digital world. Cardozo Arts & Entertainment Law Journal, 32(1), 101–147.

Flew, T. (2021). Issues of concern. In Regulating platforms (pp. 72–79). Polity Press.

Goggin, G., Vromen, A., Weatherall, K., Martin, F., Adele, W., Sunman, L., & Bailo, F. (2017). Digital Rights in Australia. Sydney Law School Research Paper No. 18/23.

Huang, K., Wang, X., William Wei, W., & Madnick, S. (2023, May 4). The Devastating Business Impacts of a Cyber Breach. Harvard Business Review.

Henry, N., & Witt, A. (2024, February 1). Taylor Swift deepfakes: New technologies have long been weaponised against women. The solution involves us all. The Conversation.

Kan, M. (2023, September 16). Hacker Deepfakes Employee’s Voice in Phone Call to Breach IT Company. PCMag.

Li, Y., & Liu, Q. (2021). A comprehensive review study of cyber-attacks and cyber security; Emerging trends and recent developments. Energy Reports, 7, 8176–8186.

Morse, T., & Birnhack, M. (2022). The posthumous privacy paradox: Privacy preferences and behavior regarding digital remains. New Media & Society, 24(6), 1343–1362.

Okoro, E. L. (2018). Death and personal data in the age of social media. Tilburg University.

Paris, B. (2021). Configuring Fakes: Digitized Bodies, the Politics of Evidence, and Agency. Social Media + Society, 7(4).

Petrosyan, A. (2024, January 26). Causes of sensitive information loss in global businesses 2023. Statista.

Suzor, N. P. (2019). Who Makes the Rules? In Lawless: The Secret Rules That Govern our Digital Lives, 10–24. Cambridge: Cambridge University Press.

Zilio, B. (2022, April 12). Gilbert Gottfried’s Twitter allegedly hacked hours after death announcement. Page Six.


Operate Knowledge Service Platform. (2023, September 3). AI face-swapping goes viral overnight: Do ZAO’s “overbearing clauses” expose user privacy? [Photo]. Sohu News.

Kingstoncourier. (2022, February 14). Deepfakes and altered image abuse: How the misuse of synthetic media poses a new digital threat for women [Image].

Meyer, H. (2023, October 16). Digital legacy: How to organise your online life for after you die [Illustration]. The Guardian.

Sharma, R. (2019, September 2). Zao: Chinese deepfake app goes viral – but faces immediate backlash over privacy concerns [Image]. Inews.
