Unraveling the Digital World Conundrum: Balancing Privacy, Rights, AI, and Automation – Are We Ready for the Challenges of Algorithms, Datafication, Hate Speech, and Online Harms?

Fig 1- Internet culture: Image by Gerd Altmann from Pixabay https://pixabay.com/illustrations/monitor-binary-binary-system-1307227/


Over the years, internet culture has grown and diversified, presenting unique challenges for its governance and regulation. With so many people using the Internet for entertainment, communication, business, and education, it is crucial that specific and effective governance policies are put in place. This blog post explores my class unit on internet cultures and governance, drawing on current case studies to examine emerging issues such as privacy and digital rights; AI, automation, algorithms, and datafication; and hate speech and online harms, all of which shape internet governance efforts. With the rapid growth of technology and the proliferation of digital platforms, including artificial intelligence systems, we need to govern and attend to every aspect of internet culture and its use.

Privacy & Digital Rights

We live in a digital age, and we all need to keep up with the trends that surround it, including digital rights and privacy. Castells (2011) describes the digital age as the current period characterized by a decisive shift from a traditional setup to an economy centered on information technology and the internet. Privacy and digital rights are therefore central concerns of this era. Flew (2021) explains that digital rights extend civil rights into the internet age: the right to use digital platforms, to create and publish digital content, and to freely access computers and communication networks. Keeping these rights in view is crucial for protecting privacy when interacting with the internet. With this outline of what digital rights and privacy entail, let’s dive into a deeper understanding of both.

Fig 2- People fighting for internet privacy: Image by Ethical Consumer via https://www.ethicalconsumer.org/technology/why-internet-privacy-important-how-are-people-fighting-it

The New York Times (2018) reports that in 2014 Cambridge Analytica employees and contractors harvested the private data of millions of Facebook users and sold psychological profiles of American voters to political operatives. This remains one of the most substantial data leaks in Facebook’s history. After the 2016 elections, whistleblower Christopher Wylie raised the alarm about Cambridge Analytica, testifying before the United States Senate Judiciary Committee on May 16th, 2018. His testimony revealed that Cambridge Analytica had used the data of millions of Facebook users, without their consent or knowledge, to influence the 2016 US presidential election. This case shows how vital digital privacy is and why companies and business entities must adopt transparent practices when collecting and using the public’s data.

From the Cambridge Analytica case study, we can identify the major issues underlying this imbalance in digital privacy. First, information asymmetry gives digital platforms far greater bargaining power than their users. Suzor (2019) describes this as an imbalance between two negotiating parties in their mastery of the relevant details and factors: the side with more information holds a competitive advantage over the other. Second, there is a lack of transparency and accountability around data usage, along with users’ loss of control over their data and how different platforms use it. Flew (2021) notes that many users do not know why platforms need their data, how that information is gathered, where it is stored, or how it is protected. This lack of transparency and control substantially increases the risk of miscommunication and misunderstanding.

Finally, managing this problem is vital to upholding privacy and digital rights, not just for users but also for governments and every business entity (Berdik et al., 2021). We can all be active in supporting data protection, competition, and consumer protection, and back our governments in enforcing the General Data Protection Regulation (GDPR) and the Personal Information Protection Law (PIPL) to place firm limits on data misuse. With a clear understanding of privacy and digital rights, we can apply these key measures to help govern and care for every aspect of internet culture and its use.

AI, Automation, Algorithms, & Datafication

With the robust growth of technology has come a wave of digitalized, automated technologies: AI, automation, algorithms, and datafication. Crawford (2021) describes them as automated decision-making machines; they are transforming divergent industries, improving efficiency, solving complex problems, and creating new opportunities. However, these technologies also create problems entangled with the internet, digital technology, culture, and society. They are not infallible: because their algorithms rely entirely on the data they are fed, they can reproduce the biases in that data. As technology advances, we increasingly rely on AI, automation, algorithms, and datafication to make decisions for our businesses and schools and to run our daily errands, and those decisions can be distorted by the fallible nature of these technologies.

Fig 3- Artificial Intelligence Robotic Concept; Image by Getty Image via https://www.gettyimages.com/detail/illustration/artificial-intelligence-robotics-concept-royalty-free-illustration/1340476740?adppopup=true

This concern is clearly illustrated by a recent case involving facial recognition technology. MIT Technology Review reports that Detroit police arrested Robert Williams on January 9th, 2020, in front of his wife and daughter (Ryan-Mosley, 2021). Williams, a Black American, was accused of stealing watches from Shinola, a luxury store. After being briefly detained, he was questioned, and one of the Detroit officers showed him a picture of the suspect. Williams immediately rejected the claim, telling the officers, “This is not me. I hope y’all don’t think all Black people look alike.” The officer replied: “The computer says it’s you.” According to the New York Times (2020), the arrest stemmed from a false match produced by the Detroit Police Department’s facial recognition system. Nor did it end there: further wrongful arrests came to light, all involving Black men. The incident prompted Williams to file a legal action against the police department for wrongfully arresting him and to pursue a ban on facial recognition technology.

Technologies built on AI of this kind have been shown to be inaccurate and biased because they hinge on the racially skewed information fed into their algorithms. The case study above makes it clear that regulators and developers of these technologies must be held to account for failing to address biases in their algorithms. The rapid development and deployment of AI across every aspect of society continues to have significant impacts that need to be addressed urgently. Safiya Noble (2018) argues that the power of algorithms in the contemporary, neoliberal world reinforces socially oppressive digital decisions by enacting new modes of racial profiling, a phenomenon she terms “technological redlining.” The algorithms behind these technologies thus perpetuate racial and gender bias in search engines. Flew (2021) describes how search engines, Google included, steer many of their users toward disparaging pages and links about women and people of color. This poses a dilemma for internet culture and its governance, as managing the challenges these algorithms create becomes a paradox.
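The mechanism behind disparate error rates like those in the Williams case can be illustrated with a toy simulation. The numbers below are entirely hypothetical: the sketch simply assumes a face-matching system whose decision threshold was calibrated on data dominated by one demographic group, so that similarity scores for the under-represented group are poorly separated from the threshold.

```python
import random

random.seed(42)

# Hypothetical illustration: a matcher calibrated mostly on group A.
# For group A, non-matching pairs score low (well separated from the
# threshold); for under-represented group B, non-matching pairs score
# higher, closer to the threshold. All parameters are made up.
def nonmatch_score(group):
    return random.gauss(0.30, 0.10) if group == "A" else random.gauss(0.55, 0.10)

THRESHOLD = 0.60  # chosen so group A's false-positive rate looks acceptable

def false_positive_rate(group, trials=10_000):
    # Fraction of non-matching pairs the system wrongly flags as a match.
    hits = sum(nonmatch_score(group) > THRESHOLD for _ in range(trials))
    return hits / trials

fpr_a = false_positive_rate("A")
fpr_b = false_positive_rate("B")
print(f"False-positive rate, group A: {fpr_a:.1%}")
print(f"False-positive rate, group B: {fpr_b:.1%}")
```

Under these assumed distributions, group B's false-positive rate is far higher even though the system applies the same threshold to everyone: the bias lives in the training data and calibration, not in any explicit rule.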

Fig 4- Ridding AI and machine learning of bias involves taking their many uses into consideration Image from British Medical Journal

Nonetheless, governance and the imposition of AI laws, policies, and regulations can be significant in dealing with the information bias, racism, and sexism caused by these technologies. Crawford (2021) suggests that mandatory rules and regulations for AI would play a vital role in preventing digital technology from infringing on human rights. Such rules would help ensure that AI has a positive effect on culture and society in general. For instance, the AI acts proposed by the EU, the U.S., Singapore, and UNESCO are good exemplars for addressing the negative aspects of AI in internet culture. Policies like these can help detect data abnormalities and push AI systems toward intelligent and transparent predictions. With such safeguards, we can more comfortably employ AI in areas of public governance, even using these technologies to find financial anomalies and mitigate financial fraud.

Hate Speech & Online Harms

Digital platforms are vital parts of Internet culture, forming a powerful tool of communication. Internet culture allows individuals from all corners of the globe to express themselves freely. Before a general audience, however, free speech becomes a double-edged sword: users on all sides may express their opinions abusively online and still claim to be exercising freedom of speech. Alongside this deep advocacy of free speech, hate speech and online harm have rapidly escalated into a prevalent issue of the contemporary age. Without governance, these behaviors have severe consequences, including social isolation, physical harm, damage to mental health, and even death.

Fig 5- Revelation of people online experience: Image by ofcom via https://www.wired-gov.net/wg/news.nsf/articles/Peoples+online+experiences+revealed+30052019121500?open

The BBC (2021) reported a high-profile case pervaded by hate speech and online harm, which unfolded in the aftermath of the 2020 US presidential election. The BBC reports that the election’s controversies spilled across divergent social media platforms, which civilians, public figures, and officials used to disseminate conspiracies and false information. Elected officials are said to have used social media to spread hatred and incite violence in different states toward groups perceived to oppose their political views. The unrest culminated in the storming of the US Capitol on January 6th, 2021, in which many people were injured and some lost their lives. This incident underscores the need for internet regulation and accountability in handling hate speech and online harm. Even though divergent digital platforms have implemented content policies and moderation to remove harmful material, we still have a lot of work to do.

Numerous critics argue, however, that social media platforms have done too little to combat hate speech and online harm. Sinpeng et al. (2021) find that most of the policies and regulations platforms have put in place are erratically enforced. Potential solutions include imposing policies backed by substantial frameworks and adopting robust moderation systems. With these in place, countries could impose criminal penalties and fines on individuals or platforms tied to hate speech and other harmful online behavior. Much still needs to be done to ensure the internet becomes an inclusive space whose culture emboldens safety and equality.


Rapid digital growth demands serious consideration of the significant impacts of internet culture and governance across numerous issues of concern. The unit Internet Cultures and Governance covers these issues and how they affect Internet culture. Privacy and digital rights are essential whenever the public interacts with the internet. The case studies in this post illuminate why we need substantial regulatory programs that make data collection by companies and digital platforms transparent, and why developers must step up and address biases in AI algorithms. The case studies and analysis also show the importance of regulating hate speech and online harm and why they are escalating so rapidly. We must therefore join hands and work toward the shared goal of safe digital platforms that embolden equity and edify internet use, culture, and governance.


BBC. (2021, January 11). Capitol riots: Who broke into the building? Retrieved from https://www.bbc.com/news/world-us-canada-55576165

Berdik, D., Otoum, S., Schmidt, N., Porter, D., & Jararweh, Y. (2021). A survey on blockchain for information systems management and security. Information Processing & Management, 58(1), 102397.

Castells, M. (2011). The rise of the network society. John Wiley & Sons.

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press, pp. 1-21.

Flew, T. (2021). Regulating platforms. Polity, pp. 72-79.

The New York Times. (2020, June 24). Wrongfully accused by an algorithm. https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html

Noble, S. U. (2018). Algorithms of oppression. New York University Press.

Ryan-Mosley, T. (2021, April 14). The new lawsuit that shows facial recognition is officially a civil rights issue. MIT Technology Review. https://www.technologyreview.com/2021/04/14/1022676/robert-williams-facial-recognition-lawsuit-aclu-detroit-police/

Sinpeng, A., Martin, F. R., Gelber, K., & Shields, K. (2021). Facebook: Regulating hate speech in the Asia Pacific. Facebook Content Policy Research on Social Media Award.

Suzor, N. P. (2019). Lawless: The secret rules that govern our digital lives. Cambridge University Press.

The New York Times. (2018, April 4). Cambridge Analytica and Facebook: The scandal and the fallout so far. https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html
