Living under the AI-driven “Digital Information Cocoon” 

[Image: Cocoon on the tree (Pixabay, n.d.)]

Nowadays, the world is undergoing a wave of digitalisation pushed by companies like Meta and ByteDance, which promote innovations in fintech, AI, and e-commerce. AI has been described as a shallow, human-like thinking process that tries to formalise and reproduce human intelligence (Crawford, 2021). The algorithm, the best-known AI tool, has been adapted to a wide range of media platforms; yet algorithms are not all there is to AI, which can also generate content, optimise production, and analyse target groups (Hallinan & Striphas, 2016). As AI reshapes the digital world, one side effect of algorithms has acquired a name of its own: the information cocoon. TikTok, under ByteDance, now occupies much of people's leisure time and has become a main platform where they encounter new information and opinions. Information cocoons refer to "communication universes in which we hear only what we choose and only what comforts us and pleases us" (Sunstein, 2006), and inside them we lose the ability to see others' perspectives. For example, on the Chinese version of TikTok, if the algorithm classifies you as female, the comments it shows under a video will consistently foreground women's views.

"The Geometry of Information Cocoon: Analyzing the Cultural Space with Word Embedding Models" by Huimin Xu, Zhicong Chen, Ruiqi Li, and Cheng-Jun Wang studies information cocoons, using word embedding models to analyse how personalised algorithms influence user behaviour and lead to the formation of information cocoons (Xu et al., 2020). The study uses geometric methods to characterise digital echo chambers, and the concerns it raises underline how content that merely aligns with one's preferences forms echo chambers and limits the diversity of viewpoints users meet on social media. This blog will discuss the effects the AI-driven information cocoon has on certain groups of social media users, and explore its concerns and possible solutions.
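
To make the embedding idea concrete, here is a minimal, hypothetical sketch of how "cocoon tightness" could be measured geometrically. This is not Xu et al.'s actual method; the function name, the score, and the toy data are all invented for illustration. The only assumption it borrows from their approach is that items a user consumes can be represented as embedding vectors, so a tighter cluster of vectors suggests a tighter cocoon.

```python
import numpy as np

def cocoon_score(embeddings: np.ndarray) -> float:
    """Average pairwise cosine similarity of the items a user consumed.

    A score near 1.0 means the items sit in a tight region of the
    embedding space (a tight 'cocoon'); a lower score means the user's
    content diet is more spread out.
    """
    # Normalise each embedding to unit length, then take all dot products.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = unit @ unit.T
    # Drop the diagonal (every item is perfectly similar to itself).
    off_diag = sims[~np.eye(len(unit), dtype=bool)]
    return float(off_diag.mean())

# Toy data: four consumed items as 3-d vectors (real embeddings would come
# from a trained word-embedding model and have hundreds of dimensions).
rng = np.random.default_rng(0)
narrow_diet = rng.normal([1.0, 0.0, 0.0], 0.05, size=(4, 3))  # near-identical topics
broad_diet = rng.normal(0.0, 1.0, size=(4, 3))                # scattered topics

print(f"narrow diet: {cocoon_score(narrow_diet):.2f}")  # close to 1.0
print(f"broad diet:  {cocoon_score(broad_diet):.2f}")   # noticeably lower
```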

[Image: Cyber cocoon kids (Xie, 2024)]

The concerns of information cocoons on social platforms

Driven by the algorithm, a media platform is likely to form a customised page for each user. Tailoring content and services to individual preferences may reduce the variety of media people interact with, potentially harming democratic discussion, open-mindedness, and a healthy atmosphere for public debate (Pariser, 2011; Sunstein, 2002). Before discussing the filter bubble, another term for the information cocoon, we should distinguish two kinds of personalisation. The first is self-selected personalisation: users pick information consistent with their pre-existing beliefs and avoid media content that contradicts them (Festinger, 1957). Nowadays, people use algorithms to make this selection even more convenient, which makes them rely on it all the more. The other is pre-selected personalisation, which differs in that it is not a proactive choice made by users but a decision suggested by algorithms. It is typically seen on recommendation pages for online shopping or video streaming services (O'Callaghan, Greene, Conway, Carthy, & Cunningham, 2013). Algorithms hold a powerful position in the new media world. In this case study, we will focus on the concerns around the pre-selected crowd: people under the control of algorithms who cannot recognise that they are inside an information cocoon.
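
The asymmetry between the two kinds of personalisation is easy to see in code. Below is a deliberately simplified sketch, with invented posts and invented function names; real platforms use far richer signals, but the contrast is the point: in the first function the user does the narrowing, in the second the algorithm does it for them.

```python
from collections import Counter

posts = [
    {"id": 1, "topic": "politics"}, {"id": 2, "topic": "sports"},
    {"id": 3, "topic": "politics"}, {"id": 4, "topic": "science"},
]

def self_selected(posts, chosen_topics):
    # Self-selected personalisation: the user actively filters the feed
    # down to the topics they already agree with.
    return [p for p in posts if p["topic"] in chosen_topics]

def pre_selected(posts, click_history):
    # Pre-selected personalisation: the platform ranks the feed by how
    # often the user clicked each topic; the user never asked for this.
    counts = Counter(click_history)
    return sorted(posts, key=lambda p: counts[p["topic"]], reverse=True)

print(self_selected(posts, {"politics"}))                        # the user's own bubble
print(pre_selected(posts, ["politics", "politics", "sports"]))   # the algorithm's bubble
```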

Algorithm-driven personalisation is the new guide shaping public opinion (Trilling et al., 2016). Starting with public opinion: public-policy discussions now treat search engines, app stores, and social media as new gatekeepers that influence trends in public thinking (European Commission, 2013). In the past, laws were made to monitor gatekeepers and protect important public-policy goals on users' behalf (Trilling et al., 2016). For instance, during the COVID-19 period in China, platforms such as TikTok constantly published content about the importance of quarantine, and videos related to COVID-19 kept repeating the advantages of self-isolation and the serious consequences of not doing so. In that period, anyone who did not support the government-driven policy would be accused by fellow citizens. Users could not see videos showing how COVID-19 was treated like the flu in other countries, or the freedoms people there still enjoyed; when such videos did appear, they were framed as bad examples. Overall, Chinese social media users were trapped in a frame where quarantine was self-evidently right and far more important than normal life. Setting aside the ethical and political issues here, with the help of social platforms and the information cocoon they formed, this was a highly successful opinion-shaping campaign: almost no one tried to resist it.

On the other hand, the algorithm's pre-selected customisation lacks transparency, which may shape users' attitudes and reactions without their knowledge (Vīķe-Freiberga et al., 2013), and it makes the media sector harder to monitor. The Council of Europe holds that transparent algorithmic search promotes media diversity and access to information and reduces the risk of being caught in a filter bubble (Council of Europe, 2012). Here I can share an experience of my own, from reading about Japan releasing treated water into the sea at Fukushima. At first I saw the story on Chinese social platforms, where it was treated as enormous news that everyone should worry about. When I searched foreign websites, however, it turned out that few people were worried about it or even paying attention. I realised I had been pre-selected by an algorithm: I had automatically assumed that what I saw was what everyone else saw, and should see, as top breaking news. My reaction to an event had been shaped without my knowing I was being manipulated.

Last but not least, these two concerns combine into a further issue: manipulation. When the delivery of information is curated by algorithms, the lack of transparency lets people's responses be changed without their noticing. AI is routinely used for commercial or government-directed ends; if a campaign is unhealthy or harmful to citizens, will they be able to recognise it? The information cocoon on digital platforms is a problem generated by AI, and, as with algorithms generally, it carries many ethical issues. Since these systems have easy access to people's data and are well programmed to analyse it, we need more comprehensive policies protecting human digital rights to keep people out of potentially toxic filter bubbles.

As for the self-selected group, they can choose what they want to see and what they do not. As they keep viewing content that matches their pre-existing beliefs, an information gap opens between them and others, and they step into a filter bubble of their own making (Trilling et al., 2016).

Exploring the solutions

The previous section discussed the concerns that come with an algorithm-based personalisation system; now we move on to solutions. The root cause of the information cocoon is that individuals see only a limited slice of information on their personalised interfaces and lose the habit of open-mindedness. Our first solution therefore focuses on improving users' media literacy. Users in the self-selected group tend to choose only content that appeals to them, a habit that leads them into a vicious cycle. Human beings always hope to get what they want in the simplest way, so they lean on algorithms to filter their feeds; but this only increases user inertia. To gain enough knowledge and stay out of a filter bubble, users have to adopt a more proactive strategy for finding the information they need, which requires developing their media literacy (Li et al., 2023). Users can actively break information cocoons by joining diverse group discussions, learning how algorithms work, and staying aware of the risks algorithms bring (Li et al., 2023). The same holds for the pre-selected group: it is important to understand algorithms and their potential consequences, and to practise recognising when we are inside an information cocoon. Algorithms should be treated as tools that humans use, not allowed to become the dominators.

The second solution starts from the optimisation of algorithms, since people's demands on their recommendation feeds are dynamic. For example, when I first used Netflix to watch a TV series about an ancient British royal story, it kept recommending similar genres even though I was searching for other genres. Within Netflix I had been categorised by the algorithm into a typical group that enjoys stories about British royalty, and the information I could gain from Netflix was limited by an algorithm that did not update in time. If algorithms learn to react better to people's current demands, they can enlarge the area users get to explore. Beyond users' stated hobbies, algorithms should also look into the territories users habitually ignore (Li et al., 2023). Platforms can likewise set aside a dedicated section of the home page for important news and events, labelled as breaking news or "things you might want to know", to draw audiences into territory they might not otherwise enter. This reduces the chance that audiences face an information gap, whether their information cocoon was built proactively or passively. A minimal code sketch of this idea appears at the end of this section.

Thirdly, platforms should take responsibility for protecting their users from information cocoons. They must properly monitor the algorithms used on their interfaces and introduce more regulations framed from an ethical standpoint, weighing human rights and the potential risks to our media society rather than focusing only on commercial benefit. Adding more manual controls alongside algorithms can mitigate the negative effects of algorithmic promotion on users' unhealthy habits (Li et al., 2023).

Last but not least, content quality should be guaranteed. Faced with a vast, sundry, and overloaded library of information, audiences feel lost and prefer to rely on algorithms to do the filtering. If platforms and algorithms devote more effort to quality checks and provide audiences with high-quality content, audiences will be clearer about what they want and will slowly shed their dependency on algorithms. High-quality work also attracts viewers even when it is not on a topic they sought out, enlarging their desire to explore different domains (Li et al., 2023) and helping them avoid information cocoons.
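
As promised above, here is one way the "explore the ignored territories" idea could look in code. This is a hedged sketch with invented names and toy data, not how any real platform works: it simply reserves a fraction of feed slots for topics the user has never engaged with, instead of filling the whole feed with familiar material.

```python
import random
from collections import Counter

def build_feed(candidates, user_topic_counts, feed_size=10, explore_frac=0.3):
    """Fill most of the feed with familiar topics, but reserve a fraction
    of the slots for topics the user has rarely or never engaged with."""
    n_explore = int(feed_size * explore_frac)

    # Familiar items first: sort candidates by how often the user has
    # engaged with their topic.
    familiar = sorted(candidates,
                      key=lambda c: user_topic_counts.get(c["topic"], 0),
                      reverse=True)

    # Exploration items: candidates whose topics the user always ignores.
    ignored = [c for c in candidates if user_topic_counts.get(c["topic"], 0) == 0]
    random.shuffle(ignored)

    feed = familiar[:feed_size - n_explore] + ignored[:n_explore]
    random.shuffle(feed)  # mix them in rather than burying them at the end
    return feed

# Toy usage: a user who has never touched science still gets some of it.
candidates = [{"id": i, "topic": t} for i, t in enumerate(
    ["politics"] * 6 + ["sports"] * 4 + ["science"] * 4)]
history = {"politics": 25, "sports": 3}
print(Counter(p["topic"] for p in build_feed(candidates, history)))
```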

Conclusion

In conclusion, the pervasive influence of digitalisation is forming a new era characterised by advancing AI, fintech, and e-commerce. AI, particularly the algorithms discussed in this blog, drives content generation and user engagement on platforms like TikTok, but it has also been a key contributor to the formation of information cocoons. These cocoons, or digital echo chambers, confine individuals so that they receive only content that aligns with AI suggestions or their own preferences. The research "The Geometry of Information Cocoon: Analyzing the Cultural Space with Word Embedding Models" (Xu et al., 2020) takes a geometric view of information cocoons and their impact on user behaviour. Its findings highlight the concern that limited exposure to diverse perspectives can erode critical thinking, underlining the importance of understanding the information cocoon. Moreover, it is essential to know the consequences that AI-driven information cocoons might bring and to explore potential solutions that mitigate their negative effects: implementing transparency, practising media literacy, strengthening users' ability to evaluate content critically, and optimising algorithms.

By acknowledging and confronting the challenges posed by information cocoons, we can work towards a more inclusive and informed digital society, uniting stakeholders such as technology companies, researchers, governments, and users behind the same goal. Companies can do more human-involved monitoring, governments can publish more protective policies, researchers can bring more invisible concerns to light, and users can practise noticing when they are inside an information cocoon.

References

  • European Commission. (2013). Preparing for a fully converged audiovisual world: Growth, creation and values. Green Paper.
  • Hallinan, B., & Striphas, T. (2016). Recommended for you: The Netflix Prize and the production of algorithmic culture. New Media & Society, 18(1), 117–137. https://doi.org/10.1177/1461444814538646
  • Li, T., Yuan, D., & Zhang, B. (2023). Algorithm-based personalized push: Research on the "information cocoon room." https://doi.org/10.1109/icise60366.2023.00066
  • Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin Press.
  • Pixabay. (n.d.). Cocoon on the tree [Photograph]. Pexels. https://www.pexels.com
  • Sunstein, C. R. (2006). Infotopia: How many minds produce knowledge. Oxford University Press.
  • Sunstein, C. R. (2002). Republic.com. Princeton University Press.
  • Vīķe-Freiberga, V., Däubler-Gmelin, H., Hammersley, B., & Pessoa, M. (2013). A free and pluralistic media to sustain European democracy. https://digital-strategy.ec.europa.eu/en
  • Xu, H., Chen, Z., Li, R., & Wang, C.-J. (2020). The geometry of information cocoon: Analyzing the cultural space with word embedding models. arXiv preprint.
  • Xie, Y. (2024). Cyber cocoon kids [Artwork]. http://www.xieyongart.com
