Data Dystopia: Exploring Information Cocoons Amidst Data Surveillance

Last week, when I logged on to Weibo, I was required to fill in personal information: my gender and phone number, a mandatory selection of four areas of interest, and any number of bloggers to follow. I wondered what kind of content the platform would recommend if I chose a different gender. Sure enough, when I selected male, not only were the bloggers promoting those interests male, but the interests themselves were stereotypically male.

Having noticed this curious phenomenon, I began to wonder: does this mean I am more likely to become proficient in these areas of knowledge? Could I become an ‘expert’ in these four fields? Could I find more like-minded people? I was genuinely drawn to the clarity and promise these labels offered. It is a heartening thought, but is it true?

With the continuous development of AI, big data, cloud computing, and other technologies, information has entered the era of intelligent communication (Yi et al., 2021). Big data analytics has given rise to a new regime: one of surveillance led by digital platform companies (Cinnamon, 2017; Cohen, 2018; Couldry and Yu, 2018; van Dijck, 2014; Zuboff, 2015, 2019, as cited in Flew, 2021). Dataveillance is ‘the monitoring of citizens on the basis of their online data’ and ‘entails the continuous tracking of (meta)data for unstated preset purposes’ (Van Dijck, 2014, p. 205). That doesn’t seem too bad, does it?

Yet information that has passed through data filtering and data surveillance is not always objective. Differences in Google search results can shift the preferences of undecided voters by 20% or more (Epstein and Robertson, 2015). This suggests that data surveillance habitually pushes homogenized content to specific users. These users, like chrysalises in a cocoon, remain in a relatively closed information environment.

As big data, automation, artificial intelligence, and algorithmic surveillance techniques proliferate, does data surveillance really limit access to diverse sources? Will contemporary trends reinforce information cocoons, or will they encourage people to break down the cocoons’ walls?

Do you get anxious when you see photos of mansions and luxury cars posted by other internet users? Do you often need real willpower to tear yourself away from TikTok? If so, you may already be in an information cocoon.

The term ‘information cocoon’ was coined by Professor Cass Sunstein of Harvard Law School. It is a metaphor for information dissemination in which the audience chooses only interesting and pleasing topics from the vast sea of information while rejecting or ignoring other views and content, much like ‘cocooning silkworms’ (Sunstein, 2006).

When users are immersed in this ‘pseudo-environment’ (Li et al., 2023), an information cocoon built around their own interests is endlessly reinforced, and the ‘cocoon wall’ keeps thickening.

Why does data surveillance exacerbate cocooning? ‘Surveillance capitalists’ rely on advanced algorithms and data analysis techniques such as machine learning, data mining, and artificial intelligence. These allow them to gain deep insights into human behavior and to predict and influence future actions (Figure 1).

Everyone’s mobile phone is a world of its own, and data surveillance can peer into it. Surveillance capitalism uses big data for pattern mining, user behavior analysis, visualization, and data tracking (Bello-Orgaz et al., 2016; Salehan & Kim, 2016). This spying on data, however, may keep users locked in an information cocoon.

Fig. 1. Structure of a recommender system. Source: https://www.researchgate.net/figure/Structure-of-a-recommender-system_fig2_220827211
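To make this loop concrete, consider a minimal sketch in Python. Everything in it (the items, the topics, the weighting scheme) is invented for illustration, and no real platform’s recommender is this simple; but the structure is the point: every click strengthens an inferred interest, and the next ranking favors that interest, which is exactly how a feed narrows over time.

```python
# A minimal, hypothetical sketch of the loop in Figure 1. The items,
# topics, and weighting scheme are invented; no real platform's
# recommender is this simple.

from collections import Counter

# Candidate items, each tagged with a single topic (illustrative data).
items = {
    "a1": "sports", "a2": "sports", "a3": "finance",
    "a4": "politics", "a5": "cooking", "a6": "sports",
}

def update_profile(profile: Counter, clicked_item: str) -> None:
    """Each click strengthens the weight of the clicked item's topic."""
    profile[items[clicked_item]] += 1

def recommend(profile: Counter, k: int = 3) -> list[str]:
    """Rank candidates by how strongly their topic matches the profile."""
    return sorted(items, key=lambda i: profile[items[i]], reverse=True)[:k]

profile: Counter = Counter()
update_profile(profile, "a1")  # one click on a sports story...
update_profile(profile, "a2")  # ...then another
print(recommend(profile))      # ['a1', 'a2', 'a6'] (the feed is now all sports)
```

After just two clicks on sports stories, the top of this toy feed is entirely sports; topics the profile has never rewarded sink out of sight.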

A well-known Facebook experiment showed that the company can influence people’s emotions by controlling the content they view. The experiment manipulated the stories shown in the news feeds of 689,003 users. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred (Kramer et al., 2014, p. 1).

This shows that the information cocoon effect is not only a form of information filtering but also a cognitive limitation. People who are under surveillance for a long time tend to be guided, unconsciously, by the data, which further narrows their thinking and limits their information.

· Democratic society’s glue: Sunstein (2002) explains that a diverse democratic society needs shared experiences as ‘social glue’. When people become cocooned in information, they may share fewer experiences, breeding discrimination and polarization in society. For example, each party’s selective exposure to its own demands can lead to irreparable ideological differences between the parties (Stroud, 2010; Beam et al., 2018). This is why human-centered AI design is increasingly valued.

· The knowledge gap: The ‘information cocoon’ created by algorithms is a source of social risk. It can create information gaps between different topics and thereby exacerbate social knowledge gaps. For example, those who avoid news altogether know little, while those who actively select political news are far better informed. The Council of Europe has warned that the sorting and ranking of information by search engines can affect both access to information and its diversity.

· The abuse of privacy: Capitalists treat personal data as a business opportunity, reselling it to advertisers and third parties, which leads to serious invasions of privacy. Rehm (2017), however, argues that the information cocoon can strengthen the effects of information dissemination, especially for certain regimes or special periods. Whichever view one takes, the complexity of algorithms plants doubt in users’ minds: am I still in control of my world?

In summary, the information cocoon provides a relatively calm beach for those swimming in the data storm, but for those trying to break through the cocoon wall and surf, the calm may seem a little monotonous. Statistically speaking, the cocoon effect does exist, but it is not as widespread as one might fear: only 8% of people rely on a single source for their news. From this perspective, the need to break out of the cocoon may not be so urgent. Whether to stay in the cocoon thus has no clear answer; it depends on whether you, the actual user of the data platform, want to break through the cocoon wall generated by data surveillance.

With 700 million users, Today’s Headlines (Jinri Toutiao) is a giant in the mobile news market. The app lets users browse news from all over the world in real time. Despite its huge user base, the company has faced criticism: many users questioned its practice of recommending friends by automatically matching their mobile phone contacts, even for accounts not tied to a phone number. ByteDance, however, argued in a 2018 court case that the information in a user’s address book was not that user’s private information.

What exactly caused the controversy? Let’s start with the platform’s powerful data-analysis capabilities. Typically, its algorithms learn to find interdependencies among pieces of data and then reproduce that logic on every new piece of data (Megorskaya, 2022). Such analysis is not only a reflection of the social network but also an extension of the information cocoon.

The ‘Follow’ section on the homepage of Today’s Headlines lets users follow other accounts, and the platform aggregates users with similar interests based on retweets and other behavior. Through algorithmic surveillance of these social relationships, the platform learns users’ preferences and social tendencies. So when you open Today’s Headlines, every article you see has been precisely fed to you on the strength of the platform’s algorithmic calculation of your wants and needs.

Fig. 2. Interface of Today’s Headlines
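To illustrate the mechanism, here is a hedged sketch of how such follow-based aggregation could work. The account names and the simple Jaccard overlap measure are assumptions chosen for clarity; ByteDance’s actual pipeline is proprietary and certainly far more elaborate.

```python
# A hedged sketch of follow-based aggregation: represent each user by the
# accounts they follow or retweet, and treat heavy overlap as 'like-minded'.
# Account names and the Jaccard measure are assumptions for illustration.

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap of two follow sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

follows = {  # hypothetical follow/retweet sets
    "user_1": {"finance_daily", "tech_review", "nba_news"},
    "user_2": {"finance_daily", "tech_review", "stock_tips"},
    "user_3": {"cooking_life", "travel_pics"},
}

# Content engaged with by the most similar users is pushed first, so
# user_1 keeps seeing what user_2 reads, while user_3's world stays invisible.
me = follows["user_1"]
peers = sorted(follows, key=lambda u: jaccard(me, follows[u]), reverse=True)
print(peers)  # ['user_1', 'user_2', 'user_3']
```

In a sketch like this, content circulates inside the cluster of mutually similar users, which is the cocoon wall in miniature.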

The current information monopoly adds to the problem. It lies not only in using technology and algorithms to trap users in an ‘information cocoon’, preventing them from exploring the unknown, but also in the fact that monitoring their data deprives them of the right to know the truth. On Today’s Headlines, users can hardly find any negative information about the platform’s own company and products, while negative information about its competitors keeps flowing. Many users have expressed frustration at this unfairness.

Artificial intelligence and algorithmic governance have become a multi-dimensional concern on digital platforms, encompassing data processing issues as well as societal impact and requiring integrated management and oversight strategies. In response to this challenge, Today’s Headlines has taken a series of measures to counter the negative effects of information cocooning (Figure 3).

Fig. 3. Platform governance of Today’s Headlines

· Protection of personal rights and suicide intervention: Today’s Headlines has strengthened user protection measures and provides rights protection services. It also places special emphasis on predicting and intervening in suicidal behavior: by identifying users’ suicidal intentions through data analysis, the platform received over 40 alerts and saved 4 lives.

· Tackling inaccuracy and providing quality content: Governance initiatives targeting online inaccuracy, fraud, and the transparency of information sources have been put in place. The platform built a dedicated knowledge base of fake news and used it to train a model that verifies the authenticity of suspected false information (see the sketch after this list). According to Today’s Headlines’ 2023 annual platform governance report, the platform, aided by AI tools, processed 1.09 million pieces of inaccurate information and intercepted 1.67 million pieces of suspected fraudulent content in one year. It also launched a ‘source tagging tool’ designed to improve the credibility of news and the reliability of the platform.

· MCN governance and accountability mechanisms: The platform now encourages MCNs (multi-channel networks) to act ethically and has increased penalties for non-compliance. For the content it pushes, Today’s Headlines has established an accountability mechanism to respond to user complaints and feedback in a timely manner, ensuring that concerns are promptly addressed and effectively resolved. This contributes to the platform’s transparency and accountability and increases user confidence.
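As a rough illustration of the knowledge-base idea above, a suspected post can be compared against known debunked claims and flagged for human review when the wording is too close. The sketch below uses off-the-shelf TF-IDF similarity and invented claims; it is an assumed design, not ByteDance’s published implementation.

```python
# A speculative sketch of a knowledge-base authenticity check, using
# off-the-shelf TF-IDF cosine similarity. This is an assumed design, not
# ByteDance's published implementation; the claims below are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_fake = [  # hypothetical entries in a debunked-claims base
    "Drinking hot water cures the flu overnight",
    "A celebrity secretly donated ten billion to charity",
]

def flag_if_similar(post: str, threshold: float = 0.4) -> bool:
    """Flag a post whose wording closely matches a debunked claim."""
    vec = TfidfVectorizer().fit(known_fake + [post])
    scores = cosine_similarity(vec.transform([post]), vec.transform(known_fake))
    return bool(scores.max() >= threshold)

# A near-paraphrase of the first claim should be flagged for human review.
print(flag_if_similar("Hot water drinking cures flu overnight, doctors say"))
```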

In conclusion, platform governance is a systematic project that requires the joint participation of companies, industry organizations, the media, and the public. While safeguarding the interests of all parties, AI governance should be human-centred, focusing on the safety and well-being of users. By reducing the impact of the information cocoon, we can create a safer, more ethically sound online environment.

In the landscape of big data, automation, artificial intelligence, and data surveillance, risk is inevitable. The algorithm may be a black box: opaque, complex, and hard for non-experts to understand (Pasquale, 2015). Yet the gaze of technology rests constantly on our lives, with profound effects both positive and negative.

In this complex digital age, we stand at a crossroads. We need to reclaim our freedom of choice, not merely decide how to respond to AI algorithms and data surveillance. Despite the limited amount of information we can absorb, we have an inherent power: the power to adapt, evolve, and challenge the status quo.

When considering the implications of data surveillance, let’s not forget the human element that underpins these AI algorithms and digital information. We are active participants in shaping the world around us, not passive recipients of data.

So how about embracing the power of human resilience and intelligence as we face the uncertainties of the future? In the world of AI, it is not algorithms or data that define us, but minds that stay open and fluid. Policy can correct algorithms, but people can change too. It is therefore never an overstatement to say that ‘people make the difference’.

References

Beam, M. A., Hutchens, M. J., & Hmielowski, J. D. (2018). Facebook news and (de)polarization: Reinforcing spirals in the 2016 US election. Information, Communication & Society, 21(7), 940–958. https://doi.org/10.1080/1369118x.2018.1444783

Bello-Orgaz, G., Jung, J. J., & Camacho, D. (2016). Social big data: Recent achievements and new challenges. Information Fusion, 28, 45–59. https://doi.org/10.1016/j.inffus.2015.08.005

Epstein, R., & Robertson, R. E. (2015). The search engine manipulation effect (SEME) and possible impact on the outcomes of elections. Proceedings of the National Academy of Sciences, 112(33), E4512–E4521.

Flew, T. (2021). Regulating platforms. Cambridge: Polity, pp. 79–86.

Rehm, G. (2017). An infrastructure for empowering Internet users to handle fake news and other online media phenomena. In Proceedings of the International Conference of the German Society for Computational Linguistics and Language Technology (pp. 216–231). Cham: Springer.

Just, N., & Latzer, M. (2017). Governance by algorithms: Reality construction by algorithmic selection on the internet. Media, Culture & Society, 39(2), 238–258.

Kramer, A. D., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790.

Li, T., Dong, Y., & Zhang, B. (2023). Algorithm-based personalized push: Research on the “information cocoon room.” 2023 8th International Conference on Information Systems Engineering (ICISE). https://doi.org/10.1109/icise60366.2023.00066

Salehan, M., & Kim, D. J. (2016). Predicting the performance of online consumer reviews: A sentiment mining approach to big data analytics. Decision Support Systems, 81, 30–40. https://doi.org/10.1016/j.dss.2015.10.006

Stroud, N. J. (2010). Polarization and partisan selective exposure. Journal of Communication, 60(3), 556–576. https://doi.org/10.1111/j.1460-2466.2010.01497.x

Sunstein, C. R. (2006). Infotopia: How many minds produce knowledge. New York, NY: Oxford University Press.

Van Dijck, J. (2014). Datafication, dataism and dataveillance: Big data between scientific paradigm and ideology. Surveillance & Society, 12(2), 197–208.

Yi, Y. X., Zhang, Z. F., Yang, L. T., Gan, C. Q., Deng, X. J., & Yi, L. Z. (2021). Reemergence modeling of intelligent information diffusion in heterogeneous social networks: The dynamics perspective. IEEE Transactions on Network Science and Engineering, 8, 828–840.

Zhang, X., Cai, Y., Zhao, M., & Zhou, Y. (2023). Generation mechanism of “information cocoons” of network users: An evolutionary game approach. Systems, 11(8), 414. https://doi.org/10.3390/systems11080414
