The Tale of the Dystopian Surveillance Society

“If you want a picture of the future, imagine a boot stamping on a human face forever. Big Brother is watching you.” 

– 1984, George Orwell (1949)

I keep pondering the uncanny parallels between Orwell’s dystopian society and the modern digital landscape. 

The novel paints a bleak, totalitarian future where a powerful regime, represented by Big Brother, exercises complete control over society. Every move citizens make, every corner of their privacy and individual autonomy, is monitored through telescreens – and they know they are being watched. Despite being written 75 years ago, Orwell’s chilling warning about surveillance and censorship is slowly becoming reality in today’s digital era, and I still can’t understand why we have let it happen when we knew what was going on.

Flew mentioned in Regulating Platforms that “the fact that the largest digital platform companies have increasingly become the focus of attention for legislators, policymakers, and regulators worldwide is the result of more than their expanding size, scope, and capacity to influence” (2021, p. 72).

Looking back, this was evident in the news reports of legal hearings against corporate technology CEOs that seem to happen every year without a break. The issues that arose alongside soaring advancements in technology have allowed extensive data collection, monitoring of online activities, and algorithmic profiling to go under the radar. There is a constant tension between surveillance’s security benefits and its encroachment on individual privacy and freedom, and Flew is right to “raise broader questions around trust and ethics in the digital age” (2021, p. 74).

In this new digital age, technology allocates a great deal of power to platform operators, giving them absolute discretion to enforce the rules as they see fit (Suzor, 2018). We have laws that govern the real world set out in black and white, but every line is blurred once it’s online.

Goggin notes the many debates about rights – their relevance, effectiveness, and gaps – and the many difficulties of implementing and activating those rights. In particular, there are fundamental debates about rights, especially “human” rights: about what counts as “human”, and about the many varieties of the “non-human” (Goggin et al., 2017, p. 6).

We have examined, again and again, the trade-offs between security measures and the protection of individual rights, but no consensus has been reached.

Perhaps it’s time to face the uncomfortable truth: While technology has enabled increased access to and manipulation of data, it alone isn’t the problem.

I cannot remember the last time I signed a legal document without fully understanding what I’m getting myself into. So why do we understand the severity but continue to fall victim to clicking the “accept” button before reading the terms and conditions?

Freedom in a Controlled Environment

Goggin defines digital rights as:

  • rights explicitly set out or recognized in law, policy, and regulation; 
  • rights ideas and practices developed and asserted by a wide range of movements, organizations, and individuals; [and]
  • rights that extend beyond traditional frameworks of states, national, regional, and international communities of countries (Goggin et al., 2017, p. 6).

In the context of our dystopian surveillance culture, this refers to the fundamental rights and freedoms that we have in the digital realm, particularly concerning privacy, freedom of expression, access to information, and autonomy over personal data. These rights are essential for ensuring that we can navigate the digital environment without undue interference, censorship, or surveillance from governments, corporations, or other entities.

Twelve years ago, the Obama White House endorsed a Privacy Bill of Rights. It set out the expectation that ‘‘companies will collect, use, and disclose personal data in ways that are consistent with the context in which consumers provide the data” (Nissenbaum, 2018, p. 831). This endorsement came two years after a New York-based newspaper publisher launched “a landmark investigative series, What They Know, which doggedly revealed to readers remarkable and chilling activities ranging from ubiquitous online monitoring to license plate tracking and much in between” (Nissenbaum, 2018, p. 832).

Eight years ago, Cambridge Analytica made headlines once again for its alleged interference in the 2016 US presidential election and Facebook’s irresponsible data-sharing practices (Berghel, 2018, p. 84).

Just a few years ago, it was reported that partnerships between Amazon’s Ring unit and law enforcement had increased five-fold in three years, with Ring sharing video footage recorded by the devices with police without the device owners’ consent (Burt, 2022).

These infringements upon our digital rights have been occurring since the very beginning of the digital age. It’s clear as day – we are being watched whether we know it or not.

Thoughts in the Telescreen – Case Study #1

“It was terribly dangerous to let your thoughts wander when you were in any public place or within range of a telescreen,” Orwell wrote in 1984 (1949). 

The fear of being watched and the potential repercussions for deviating from Big Brother’s ideology in the novel lead individuals to censor their own thoughts and expressions.

This reminds me of the times when I was little and I would be scared that if I searched up one wrong thing or made one wrong joke, the police would come knocking on my door. While I’m all grown up now and know that a 9-year-old’s search history really isn’t one of their top concerns, it still doesn’t mean I can say whatever I want.

Digital platforms play a significant role in facilitating freedom of expression and amplifying diverse voices. However, content moderation practices, algorithmic biases, and censorship concerns have raised questions about the extent of this freedom. To what extent is content moderation limiting our freedom of expression? Who gets to dictate what content we can access and express online?

For the latter point, it’s clear that different online platforms adhere to their own set of rules, which can sometimes appear unjust. This was highlighted by the House of Commons Home Affairs Committee, which observed that while Google swiftly removes videos from YouTube for copyright violations, it doesn’t always take similar prompt action for content that is hateful or illegal (Flew, 2021).

Late last year, it was reported that Meta’s policies and practices “have been silencing voices in support of Palestine and Palestinian human rights on Instagram and Facebook in a wave of heightened censorship of social media” amid Israel’s invasion of Gaza (Younes, 2023). This systemic online censorship has risen against the backdrop of unprecedented violence, given that an estimated 1,200 people were killed in Israel in the Hamas-led attack on October 7 and over 18,000 Palestinians were killed as of December 14 (Younes, 2023).

Between October and November 2023, Human Rights Watch documented over “1,050 takedowns and other suppression of content on Instagram and Facebook that had been posted by Palestinians and their supporters, including about human rights abuses… Of the 1,050 cases reviewed for this report, 1,049 involved peaceful content in support of Palestine that was censored or otherwise unduly suppressed” (Younes, 2023).

While this distribution of cases does not necessarily reflect the overall distribution of censorship, it is crazy to me how much Palestinian content is censored when thousands of innocent people are being killed. Meta’s inconsistent enforcement of its own policies led to the erroneous removal of content about perhaps one of the biggest human rights issues of the century. Its behavior fails to meet its human rights due diligence responsibilities and implicates possible breaches of our universal rights to freedom of expression and access to information.

The Erosion of Digital Rights – Case Study #2

It’s not just the platforms that are at fault. 

Merely a month ago, the U.S. House of Representatives passed a bill aimed at forcing TikTok to either cut ties with its Chinese parent, ByteDance, or be banned from operating in America (Yu, 2024). 

It only took them a week to pass that bill. 

In the name of a potential national security threat and privacy breach, the bill passed with overwhelming votes. President Joe Biden and U.S. lawmakers have “called the app a potential national security threat and warned that the Chinese Communist Party could use it to glean sensitive data on its 150 million users in the U.S. But there is only very limited evidence [that] ByteDance has ever directly shared any user data with the Chinese government. And there is no public proof that ByteDance has handed U.S. user data to Beijing” (Leffer, 2024). 

Despite legitimate concerns over big data, security, and privacy, the U.S. government’s move to single out TikTok is not a step towards data security but a step towards control. If this were really about data security, Google and Meta should face equivalent consequences, as both have been fined millions of dollars for violating data privacy regulations (Park, 2022). Temu, another Chinese-owned company, should also be banned after facing a privacy class action lawsuit for collecting user data “without consent and [with the fact] that the information could be demanded by the Chinese government at any time” (Lawlor, 2023). Yet even with no proof that any users’ personal data has been given away or sold to China, and despite TikTok’s unprecedented Project Texas initiative to provide data transparency and accountability, TikTok is still the only one to face an ultimatum. 

If this weren’t about the content we are consuming, we should not have access to half of the platforms we use daily. But it is about the content we consume on TikTok, and the government’s lack of control over it, that ultimately led the U.S. government to pass a bill forcing ByteDance to sell TikTok to an American company. Even though the ban applies only to the U.S., it sets a precedent for who knows what is to come.

Then, what digital rights do we have left?

The Future of Surveillance Society

“Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.” 

– Orwell, 1984 (1949)

When I read George Orwell’s social commentary novel a few years ago, I didn’t think much about its resemblance to the modern digital age. The parallels were there, but I couldn’t grasp the seriousness behind stolen data or online monitoring. 

“If I haven’t done anything wrong, why would I care if they are watching me?” I would think. But it is more complicated than that. By feigning ignorance, we move step by step closer to the totalitarian world Orwell painted 75 years ago. We are allowing giant corporations and, perhaps, governments to control the rights we hold in the digital realm.

The challenges posed by the intersection of digital rights, surveillance, and censorship are complex and multifaceted – and this is only the tip of the iceberg. By addressing these challenges proactively and collaboratively, we can strive towards a digital landscape that upholds fundamental rights and freedoms for all.


TikTok. (2023, March 21). About Project Texas. 

Berghel, H. (2018). Malice domestic: The Cambridge Analytica dystopia. Computer, 51(5), 84–89.

Burt, C. (2022, August 8). Amazon defends Ring data sharing practices to US senator, leaves voice biometrics door open. Biometric Update.

Flew, T. (2021). Regulating Platforms. Polity Press.

Goggin, G., Vromen, A., Weatherall, K., Martin, F., Webb, A., Sunman, L., & Bailo, F. (2017). Digital Rights in Australia. The University of Sydney.

Lawlor, M. (2023, November 7). Temu sued in class action for risking user data to Chinese government control. 

Leffer, L. (2024, March 22). Banning TikTok would do basically nothing to protect your data. Scientific American. 

Nissenbaum, H. (2018). Respecting context to protect privacy: Why meaning matters. Science and Engineering Ethics, 24(3), 831-852.

Orwell, G. (1949). 1984. Harcourt Brace Jovanovich.

Park, K. (2022, September 14). Google, Meta fined $71.8m for violating privacy law in South Korea. Yahoo! Finance. 

Suzor, N. P. (2018, November 23). Lawless: The secret rules that govern our digital lives. Cambridge University Press.

Younes, R. (2023, December 21). Meta’s broken promises. Human Rights Watch. 

Yu, Y. (2024, March 28). TikTok sell-or-ban bill heads to Senate: What’s next? Nikkei Asia. 
