Raising concerns about digital rights and private information protection
Digital rights become a prominent concern on the internet
In our modern digital age, we rely on the internet for almost everything we do. From communication with loved ones to online shopping and banking, our lives are deeply intertwined with the digital realm. But with this increased reliance on the internet comes a host of concerns around privacy, security, and digital rights.
Privacy has long been a concern for those of us who use the internet. Every time we use a search engine or log into a social media account, we are sharing our personal information with third-party companies. With the rise of big data and analytics, our online behaviour can be tracked and monitored in ways that were once unimaginable (Tsai et al., 2015). This raises concerns about who controls access to personal data and how it is used.
However, as times and lived experience change, the old notion that “private” means “hidden” has been upended in the era of big data. Both the Internet of Things and the smart media enabled by 5G are built on big data, and these data naturally contain a great deal of users’ private information (Whittaker et al., 2018). The popularity of social media exposes people’s private lives on the Internet, and private information becomes readily accessible in the era of big data.
It is therefore particularly important to define the boundary between reasonable use of data and invasion of privacy. The American scholar Sandra Petronio proposed three rules of privacy boundaries: boundary linkage, boundary permeability, and boundary ownership. In effect, these rules govern whom to tell, what to tell, and how to retain control (Whittaker et al., 2018). But on social media, all three are at risk of getting out of hand.
Similarly, security is a critical concern for anyone who uses the internet. With cyberattacks on the rise, it is essential to develop awareness of the risks of sharing personal information online. Malware, phishing scams, and other online threats can put our personal and financial information at risk (Jansen & Leukfeldt, 2016). And as more of our daily activities move online, the potential consequences of a security breach become more severe.

In the era of “everyone has a microphone”, the use of social media prompts netizens to adopt a more open and tolerant attitude towards privacy, and they become more willing to expose their private lives in the public domain. In addition, the gratification gained from social networks, such as friends’ likes and supportive comments on a post, further encourages self-disclosure.
At the same time, users show a clear third-person effect in privacy disclosure on social media: they know that privacy and security problems exist on these platforms, but believe that privacy intrusion will not happen to them. In short, the use of social networks promotes the penetration of privacy boundaries and makes users more inclined to make private content public, putting more and more private information at risk of exposure.
AI technology raises further privacy concerns
Algorithmic governance refers to the use of algorithms and other computational tools to make decisions that affect individuals and communities. Some experts argue that algorithms outperform people at tasks that require large amounts of data to be processed quickly and accurately (Silva & Kenney, 2019). For example, an algorithm can analyse large quantities of data to detect repeating patterns and produce forecasts that would be difficult, if not impossible, for a human to match in the same amount of time. On the other hand, people are better than algorithms at tasks that require judgment, intuition, and creativity. Humans can interpret and understand complex situations in ways that algorithms cannot: they can make connections between seemingly unrelated pieces of information, apply common sense, and recognize the importance of context and nuance.
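As a minimal sketch of this kind of pattern-finding, the few lines of Python below tally a large event log in moments; the event stream and its names are invented purely for illustration, not taken from any real platform.

```python
# Illustrative sketch only: the event names and log are invented.
from collections import Counter

# A hypothetical activity log; in practice this could be millions of records.
events = ["login", "view", "view", "purchase", "login", "view"] * 1000

# Tally every event type in a single pass over the data.
counts = Counter(events)
top_event, frequency = counts.most_common(1)[0]
print(top_event, frequency)  # view 3000
```

A human reader could not realistically extract the same frequency profile from thousands of raw log lines in comparable time, which is exactly the kind of task where algorithms excel.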
A new AI face-swapping app recently took social media by storm, but it also caused serious privacy controversy. According to its description, the app uses AI technology to let users replace the face of a star in a selected video by uploading a single front-facing photo, generating a clip featuring themselves.
The app went viral overnight, but soon raised serious questions about alleged violations of users’ privacy. The licensing agreement that users had to accept before use stipulated that, once they uploaded and published content, they granted the relevant parties “worldwide, free-of-charge, irrevocable, permanent, transferable, and sub-licensable” rights. Some netizens and legal experts argued that the app was suspected of illegally collecting users’ facial information, and worried that the collected data could be misused or stolen by hackers.
In 2018, Amazon’s recruitment algorithm was found to be biased against women. The algorithm downgraded resumes that included female-oriented language and favoured those with male-oriented language. The incident raised concerns about bias and discrimination in AI and algorithmic decision-making, highlighting the need for transparency and oversight in the development and deployment of such systems (Lavanchy, 2018). Amazon abandoned the algorithm, but the episode underscored the importance of ensuring that AI is developed and used in ways that promote fairness and equality.
While these developments offer many benefits, such as increased efficiency and accuracy, they also raise a number of concerns. One major issue is the potential for algorithmic bias, which can result in unfair and discriminatory outcomes. For example, an algorithm trained on biased data may make decisions that perpetuate existing social inequalities. Additionally, algorithms can be opaque and difficult to understand, which makes their decisions hard to challenge (Silva & Kenney, 2019). Another concern is the impact of automation on employment. Even though automation can result in higher productivity and economic progress, it also risks displacing a significant number of workers, especially those in low-skilled positions. Finally, algorithmic governance raises questions about the role of human decision-making in society. As machines increasingly make decisions that were once the sole purview of humans, it becomes important to ensure that these decisions are transparent, accountable, and aligned with democratic values.
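A hedged toy sketch can make the bias mechanism concrete. The data, keywords, and scoring rule below are entirely invented (this is not Amazon’s actual system or vocabulary): a naive model scored on historically biased hiring outcomes simply reproduces that bias.

```python
# Toy illustration of algorithmic bias: all data and keywords are invented.
from collections import defaultdict

# Synthetic training data of (resume keywords, hired?) pairs, reflecting a
# biased history in which resumes mentioning "women's" were rarely hired.
training_data = [
    ({"java", "leadership"}, True),
    ({"python", "captain"}, True),
    ({"java", "women's"}, False),
    ({"python", "women's"}, False),
    ({"java"}, True),
    ({"women's", "leadership"}, False),
]

def train_keyword_scores(data):
    """Score each keyword by the historical hire rate of resumes containing it."""
    hires, totals = defaultdict(int), defaultdict(int)
    for keywords, hired in data:
        for kw in keywords:
            totals[kw] += 1
            hires[kw] += hired
    return {kw: hires[kw] / totals[kw] for kw in totals}

scores = train_keyword_scores(training_data)

def predict(keywords, scores):
    """Average the learned keyword scores; 0.5 or above means 'recommend'."""
    known = [scores[kw] for kw in keywords if kw in scores]
    return sum(known) / len(known) >= 0.5 if known else False

# Identical skills, different outcome, driven by the gendered keyword alone.
print(predict({"java", "leadership"}, scores))  # True  (recommended)
print(predict({"java", "women's"}, scores))     # False (downgraded)
```

The model was never told to discriminate; it merely optimised against outcomes that already encoded discrimination, which is why transparency about training data matters.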
In addition, “privacy concessions in exchange for personalised recommendations” has become the implicit trading rule of algorithmic recommendation. Algorithm-based technology allows data processors to produce and infer information without people’s consent, drawing on their social gestures, likes, or retweets, which puts users’ privacy and other personal rights, such as image rights, at risk of being unknowingly violated. It is clear that AI, automation, and algorithmic governance are powerful tools with the potential to transform society in many ways. However, it is important to be mindful of the risks and challenges these technologies present, and to work towards ensuring that they are used in ways that are equitable, just, and respectful of individual rights and freedoms.
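To illustrate the inference step, the sketch below shows how a data processor might derive sensitive traits from public likes alone, information the user never explicitly shared. The liked posts, trait names, and keyword mapping are all hypothetical examples, not any real recommender’s rules.

```python
# Hedged sketch: inferring traits from public "likes"; all data is invented.
liked_posts = ["yoga retreat", "prenatal vitamins", "baby names", "yoga mats"]

# Hypothetical keyword-to-trait mapping a data processor might maintain.
trait_keywords = {
    "expecting_parent": {"prenatal", "baby"},
    "fitness_enthusiast": {"yoga"},
}

# A trait is inferred whenever any of its keywords appears in a liked post.
inferred = {
    trait
    for trait, kws in trait_keywords.items()
    if any(kw in post for post in liked_posts for kw in kws)
}
print(sorted(inferred))  # ['expecting_parent', 'fitness_enthusiast']
```

The user consented to none of these conclusions; they follow mechanically from publicly visible behaviour, which is precisely the privacy risk the paragraph above describes.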
Digital rights are also a pressing concern in our current moment. Who has the right to access and use digital information? How should we regulate the power of tech companies in shaping our online experiences? These are difficult questions that require careful consideration and attention. Digital rights are essential to ensure that every individual has the right to privacy, free speech, and access to information in the online world (Nash, 2019). The internet has become a ubiquitous part of our lives, and access to it has become a fundamental human right. However, many governments and corporations are imposing restrictions on the internet and limiting access to information. This can result in the stifling of free speech and the dissemination of information that is essential to hold those in power accountable.
One of the biggest challenges in addressing these issues is the complex and contested nature of internet culture. The rules of social media platforms are often unclear and subject to change, making it difficult to ensure that our digital rights are protected. Additionally, the sheer scale of the internet and the volume of content being shared make it hard to regulate and police effectively.
The Cambridge Analytica scandal serves as a prominent case study in privacy, security, and digital rights. In 2018 it came to light that the political consulting firm had gathered data from millions of Facebook users without their consent and used it to influence the 2016 U.S. presidential election (Hinds et al., 2020). The incident emphasised the need for better privacy safeguards and stricter rules on the collection and exploitation of personal information. It also raised concerns about the security of digital platforms and the potential misuse of data, underlining the importance of ensuring digital rights for individuals in the digital age.
In conclusion, the issues of privacy, security, and digital rights are crucial in the digital age. As our reliance on technology increases, it is vital to have a deep understanding of these issues and their implications. The internet is a valuable resource that can help to enhance people’s lives worldwide. However, it is important to balance the benefits of the internet with the need for privacy and security, and the protection of digital rights. By working together, governments, corporations, and individuals can ensure that the internet is a safe and open space that promotes innovation, free speech, and access to information for all.
In the era of big data, as the Internet penetrates ever more deeply into people’s lives, almost every activity leaves traces online. Addressing the privacy-leakage problems of social media requires joint efforts from all parties to protect users’ personal information, while users’ own awareness of the risk of privacy leakage must also be further improved.

On the technical side, privacy protection should be approached from both technology and application: strengthening users’ control over their data, applying techniques for removing online traces, and reinforcing network encryption and data-risk warning mechanisms. Authoritative privacy-security evaluation software can also be introduced to guide users when they use software services, so as to prevent personal privacy from being leaked through the software itself.

As Internet technology continues to evolve in the era of big data, media platforms need to consciously assume their social responsibilities and protect users’ data privacy. Given the development of information storage technology, it is difficult to completely erase people’s traces on the Internet; against this chain of near-perfect memory, the right to be forgotten plays a positive role in protecting personal privacy. All parties should therefore assume their corresponding responsibilities and jointly protect users’ personal information.
References
De Gregorio, G. (2020). Democratising online content moderation: A constitutional framework. Computer Law & Security Review, 36, 105374.
Every-Palmer, S., Cunningham, R., Jenkins, M., & Bell, E. (2021). The Christchurch mosque shooting, the media, and subsequent gun control reform in New Zealand: A descriptive analysis. Psychiatry, Psychology and Law, 28(2), 274-285.
Hinds, J., Williams, E. J., & Joinson, A. N. (2020). “It wouldn’t happen to me”: Privacy concerns and perspectives following the Cambridge Analytica scandal. International Journal of Human-Computer Studies, 143, 102498.
Jansen, J., & Leukfeldt, R. (2016). Phishing and malware attacks on online banking customers in the Netherlands: A qualitative analysis of factors leading to victimization. International Journal of Cyber Criminology, 10(1), 79.
Langvardt, K. (2017). Regulating online content moderation. Georgetown Law Journal, 106, 1353.
Lavanchy, M. (2018). Amazon’s sexist hiring algorithm could still be better than a human. The Conversation.
Nash, V. (2019). Revise and resubmit? Reviewing the 2019 online harms white paper. Journal of Media Law, 11(1), 18-27.
Paz, M. A., Montero-Díaz, J., & Moreno-Delgado, A. (2020). Hate speech: A systematized review. SAGE Open, 10(4), 2158244020973022.
Silva, S., & Kenney, M. (2019). Algorithms, platforms, and ethnic bias. Communications of the ACM, 62(11), 37-39.
Tsai, C. W., Lai, C. F., Chao, H. C., & Vasilakos, A. V. (2015). Big data analytics: A survey. Journal of Big Data, 2(1), 1-32.
Wajcman, J. (2017). Automation: Is it really different this time? The British Journal of Sociology, 68(1), 119-127.
Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., … & Schwartz, O. (2018). AI Now Report 2018 (pp. 1-62). New York: AI Now Institute at New York University.