The Perils of TikTok: Information Cocoons and Hate Speech Caused by Algorithms

Have you ever encountered the following situation:

You mention to a friend in passing that you want fried chicken for dinner, then open a food delivery app and find a fried chicken shop sitting at the top of the recommendations;

You open a short-video app and every clip is exactly to your taste, so you can’t help but watch for two hours;

You scroll through social media and find view after view you agree with, and come to feel that your perception matches the public’s…

You shop on Amazon and read the news, and it is as if these applications can accurately predict whatever is on your mind.

In this era of information explosion and algorithm-driven big data, the major platforms know us all too well. Whatever you have browsed and whatever you are interested in, related information is recommended to you in a targeted manner, as if the moment you realize you are hungry, food is thoughtfully delivered to your door.

The rise of the Internet has created an ocean of information for us, but algorithms have trapped us on islands of information and exposed us to the risks of harmful content and privacy leakage.

We seem to have easy access to a huge amount of information from many sources, yet the range of information we actually receive keeps narrowing, while the personal information we give away keeps growing.

With 1.05 billion active users across 154 countries, TikTok is one of the most popular social media platforms as of 2023 (Ruby, 2023).

Today we will take TikTok, which has taken the public by storm in recent years, as an example to explore two problems created by its algorithm: information cocoons and hate speech.

Are you trapped in TikTok’s “information cocoon”?

The legal scholar Cass Sunstein first proposed the concept of “information cocoons” (Sima & Han, 2022).

In layman’s terms, an information cocoon is the phenomenon in which people choose to receive only the information that matches their own interests and preferences, ignoring everything else, until over time they have wrapped themselves up like silkworms in a cocoon.

Once the cocoon is formed, the people living inside it mistake the information it encloses for the whole world and come to believe that they stand at its center.

For example, elders who like to insist on their authority grew up with information and education very different from today’s, and so have formed an information cocoon of stereotypes about marriage, careers, personal life, and so on.

TikTok’s recommendation algorithm is one of the main causes of the information cocoon. Based on behavioral data such as browsing history, likes, and comments, TikTok recommends similar content to users, forming a closed loop in which they see only content related to their existing interests and learn nothing about anything else.
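To make this closed loop concrete, here is a minimal toy sketch in Python. Everything in it (the item pool, the scoring rule, the update rule) is invented for illustration and is not TikTok’s actual algorithm; the point is only that recommending what already scores highest, then updating the profile from what was recommended, concentrates the profile on ever fewer topics.

```python
import numpy as np

# Hypothetical item pool: each of 1000 videos mostly covers one topic
# (columns: sports, beauty, music, gaming, cooking).
rng = np.random.default_rng(0)
topics = rng.integers(0, 5, size=1000)
videos = 0.1 * rng.random((1000, 5))
videos[np.arange(1000), topics] += 1.0

# The user's interest profile starts out uniform.
profile = np.ones(5) / 5

for step in range(100):
    # Recommend the ten videos most similar to the current profile.
    scores = videos @ profile
    top = np.argsort(scores)[-10:]

    # The user "watches" the recommendations; each interaction pulls
    # the profile further toward what was just shown.
    watched = videos[top].mean(axis=0)
    profile = 0.9 * profile + 0.1 * watched
    profile /= profile.sum()

print(profile.round(3))  # nearly all mass on one topic: the cocoon has closed
```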

Sometimes I use my mother’s phone to browse TikTok, and it feels as if we do not live in the same world. I love sports, beauty, music, and entertainment, so my TikTok recommendations include beauty influencers sharing their new products, new songs released by Jay Chou, and commentary on League of Legends matches. Open my mom’s TikTok and you don’t even need to know her; the videos are like a one-way mirror (Pasquale, 2015). From the recommendations alone you can tell she is a yoga fanatic who likes to bake. She doesn’t know how to play League of Legends, and likewise, I don’t know how to use an oven.

So, do you realize it? The world we see is the world TikTok has filtered for us, the world that matches our existing perceptions and values.

In 2020, a group of young people came up with a method: exploiting a loophole in the password-change process, several people in different locations logged into the same Instagram account, each interested in different topics, so the content viewed and liked kept varying (Ng, 2020). This makes it difficult for the algorithm to extract a stable pattern from the data, so the recommended feed always contains something new (a toy version of this effect is sketched below).
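Reusing the toy profile model from the earlier sketch (again, purely illustrative and not any platform’s real system), the intuition is that interactions from users with disjoint tastes pull the shared profile in different directions, so it never collapses onto a single topic:

```python
import numpy as np

# Three people with disjoint tastes share one account, so each
# interaction pulls the shared profile toward a different topic.
tastes = [np.array([1.0, 0, 0, 0, 0]),   # only sports
          np.array([0, 0, 0, 1.0, 0]),   # only gaming
          np.array([0, 0, 0, 0, 1.0])]   # only cooking

profile = np.ones(5) / 5
for step in range(100):
    watched = tastes[step % 3]
    profile = 0.9 * profile + 0.1 * watched
    profile /= profile.sum()

print(profile.round(3))  # stays spread over several topics: no single cocoon
```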

TikTok’s recommendation algorithm weighs signals such as the following (a toy scoring sketch appears after the list):

  • User interactions: likes, shares, follows, comments, etc.
  • Video information: captions, sounds, hashtags, etc.
  • Device and account settings: language, country, device type, etc.
  • More information: https://blog.hootsuite.com/tiktok-algorithm/
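As a purely illustrative sketch, a ranking stage might combine such signals into a single score per video. The signal names, weights, and thresholds below are invented, not TikTok’s published formula:

```python
from dataclasses import dataclass

@dataclass
class VideoSignals:
    # Hypothetical per-user, per-video features; all names are invented.
    p_like: float       # predicted probability the user likes the video
    p_share: float      # predicted probability of a share
    p_follow: float     # predicted probability of following the creator
    topic_match: float  # similarity of captions/hashtags to user interests
    same_language: bool # from device and account settings

def rank_score(v: VideoSignals) -> float:
    """Toy weighted sum; real systems learn such weights from data."""
    score = 1.0 * v.p_like + 2.0 * v.p_share + 3.0 * v.p_follow
    score += 0.5 * v.topic_match
    if not v.same_language:
        score *= 0.5  # settings act as soft filters rather than hard ones
    return score

candidates = [VideoSignals(0.30, 0.02, 0.01, 0.8, True),
              VideoSignals(0.10, 0.20, 0.05, 0.4, True)]
feed = sorted(candidates, key=rank_score, reverse=True)  # highest score first
```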

Beware: TikTok is using your privacy to empty your wallet!

On the same airline ticketing platform, the price you see the second time you check is often higher than the first; on a shopping app, frequent buyers may see higher prices for the same goods than first-time buyers; in a ride-hailing app, the same trip from home to school at the same time of day seems to cost more the more often you ride…

Such cases of big-data-enabled price discrimination against existing customers are everywhere.

Operators use the information TikTok collects about consumers to analyze their preferences, habits, and income levels, and then use personalized recommendations to sell the same goods or services to different consumers at different prices for higher profit (Wu et al., 2022). Regular customers may therefore see higher prices than new customers.
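A deliberately crude sketch of the mechanism follows. The fields, thresholds, and markups are invented for illustration; real pricing systems are far more opaque, but the logic of charging loyal, high-intent customers more is the same:

```python
from dataclasses import dataclass

@dataclass
class CustomerProfile:
    # Hypothetical features inferred from behavioral data.
    past_purchases: int
    avg_spend: float    # average order value
    price_checks: int   # times this item was viewed recently

def personalized_price(base_price: float, c: CustomerProfile) -> float:
    """Toy rule: regulars and high spenders quietly pay more."""
    price = base_price
    if c.past_purchases > 5:  # a regular is less likely to comparison-shop
        price *= 1.10
    if c.price_checks > 2:    # repeated checks signal urgency
        price *= 1.05
    if c.avg_spend > 100:     # high spenders show low price sensitivity
        price *= 1.08
    return round(price, 2)

print(personalized_price(300.0, CustomerProfile(8, 150.0, 3)))  # 374.22
print(personalized_price(300.0, CustomerProfile(0, 20.0, 1)))   # 300.0
```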

Throughout your use of TikTok, your behavior is being recorded: how much you have browsed for cars, what price range you linger on, what type of consumption you prefer.

In addition, TikTok collects a great deal of user data, such as geographic location, device information, and search history, in order to recommend content more precisely. This raises the issue of collecting users’ private information.

Although TikTok says it has taken security measures to protect user data from misuse or leakage, privacy problems remain. For example, some third-party applications may use TikTok’s API to access user data and use it for other purposes.

TikTok’s privacy policy allows third-party trackers to collect data from users, and it is essentially impossible to know who is tracking your data or what information they gather.

Third-party trackers can continue to follow your activity on other sites even after you leave the app (Huddleston, 2022).

The platform quietly collects a great deal of unnecessary information from users, which not only fuels price manipulation by third parties but also feeds cybercrime: SMS harassment, fraudulent calls, and spam are among the common consequences. The illegal collection of consumers’ personal information by e-commerce, social, and other platforms has become a new hotspot for complaints.

Of course, algorithms and artificial intelligence originally emerged to serve users better. Artificial intelligence rests on three elements: algorithms, hardware, and data. First, a large amount of data is collected; then, by means of machine learning, the algorithm automatically learns the interrelationships within that data and can eventually reproduce this logic on new data (Just & Latzer, 2016).
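That collect-learn-reproduce loop is the core of supervised machine learning. A minimal, generic example (nothing TikTok-specific; the toy data is invented):

```python
from sklearn.linear_model import LogisticRegression

# Step 1: collected data. Features are (fraction of video watched,
# liked? 0/1); the label is whether the user rewatched the video.
X = [[0.2, 0], [0.9, 1], [0.1, 0], [0.8, 1], [0.7, 0], [0.3, 1]]
y = [0, 1, 0, 1, 1, 0]

# Step 2: the algorithm learns the relationship between features and label.
model = LogisticRegression().fit(X, y)

# Step 3: the learned logic is reproduced on new, unseen data.
print(model.predict([[0.85, 1]]))  # likely [1]: predicted to rewatch
```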

For users, TikTok’s personalized recommendations improve the experience, increase stickiness and time spent, and let them quickly retrieve the content they need from a sea of information.

For creators, TikTok’s algorithm provides greater exposure, making it possible for anyone to become an influencer. This encourages creators to produce quality content, while TikTok’s tagging of creative content eases platform management and enables efficient interaction between creators and users, producing a virtuous cycle.

For the TikTok platform itself, user preferences are continually refined from legally obtained user information, which achieves user conversion and retention at lower cost and improves stickiness; the rich data dimensions and sheer data volume in turn improve the accuracy of the recommendation algorithm; and the algorithm enables precisely targeted product advertisements that stimulate purchases.

Ideally, if the public has good reason to trust the institutions that access the data, is able to reject unequal and unreasonable privacy terms offered by platforms, and understands the potential benefits of data collection and algorithmic use for the community as a whole, it will feel more secure about the once-opaque algorithmic machinery.

Beware! You may unwittingly become someone else’s bullet!

Paul Lazarsfeld, a pioneer of American communication research, observed that the mass media is a tool that can serve both good and evil, and that, on the whole, it is more likely to serve evil if not properly controlled (Jeřábek, 2017).

TikTok’s AI moderation is based on deep learning algorithms that detect various violations, such as online violence, bullying, and hate speech, to keep the application safe and the user experience good.

When detecting hate speech, TikTok’s AI examines user comments, videos, chats, and so on. Once offending content is detected, it is automatically flagged as inappropriate. In addition, TikTok employs community-management strategies, such as restrictions based on social trust levels and voluntary reporting, to ensure that online violence and other misbehavior is addressed in time.
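A heavily simplified sketch of what “detect, then flag” looks like in code: a generic text classifier, not TikTok’s actual system, with placeholder training examples standing in for human-labeled data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: (comment, is_hateful) pairs labeled by humans.
comments = ["you are wonderful", "I hate you and your kind",
            "great video, thanks", "get out of this country",
            "love this song", "people like you should disappear"]
labels = [0, 1, 0, 1, 0, 1]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(comments, labels)

def moderate(comment: str) -> str:
    # Flag the comment if the model's hate probability crosses a threshold.
    p_hate = clf.predict_proba([comment])[0][1]
    return "flagged for review" if p_hate > 0.5 else "published"

print(moderate("what a lovely dance"))  # most likely "published"
```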

TikTok also uses machine learning to predict incidents of online hate speech and displays warning icons on the relevant accounts to encourage other users to report inappropriate behavior. Of course, these AI detection tools still need continual refinement and updating to remain effective at detecting and eliminating such behavior.

As mentioned before, the most important feature of TikTok’s personalized information push is that it abandons traditional manual selection and curation of content in favor of content-based and collaborative filtering algorithms that retrieve, filter, aggregate, and distribute information (Pasquale, 2015).

Unlike humans, who can reason about genuinely new things, deep learning systems can only be trained on existing data: new undesirable content must be screened with human review, and human judgments of user behavior form the experience that is then “passed on” to the AI (Crawford, 2021).

But when analyzing speech and text on TikTok, AI cannot reach the high accuracy it achieves in recognizing images and videos. The same words can mean different things in different contexts, and it is difficult for AI to think like a human and grasp the true meaning of language. Hence the problem that plagues every platform: hate speech.
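A toy illustration of why context defeats naive filtering (the word list and sentences are invented; real moderation models are far more sophisticated, but the failure mode is the same):

```python
BLOCKLIST = {"kill", "trash"}

def naive_filter(comment: str) -> bool:
    """Flags a comment if it contains any blocked word, ignoring context."""
    return any(word in BLOCKLIST for word in comment.lower().split())

print(naive_filter("I will kill you"))                    # True: correct flag
print(naive_filter("this boss fight will kill me, lol"))  # True: false positive
print(naive_filter("people like you don't belong here"))  # False: hate missed
```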

Combined with the information cocoon discussed earlier: in the era of big data, people with similar ideologies and value orientations are more likely to come together and form online communities around shared interests. Once such a group settles into the closed space of an information cocoon, it runs the risk of group polarization.

Information cocoons enhance the effect of information dissemination and are used purposefully to spread opinion messages and emotionally inflammatory messages (Liu & Zhou, 2022).

For example, Malaysia’s 15th general election was marred by hate speech that not only polarized voters but also incited violence.

More information: https://theconversation.com/how-tiktok-became-a-breeding-ground-for-hate-speech-in-the-latest-malaysia-general-election-200542

Algorithmic push is, in essence, scientific and useful, but it has run into problems in practical application.

This makes us think: when current algorithms are not yet “smart” enough, and machine intelligence replaces the human “gatekeeper”, can such an information gatekeeper be trusted?

Regulation and Governance

The European Union’s General Data Protection Regulation (GDPR), in force since 2018, sets out the scope of and requirements for data processing, as well as the rights of data subjects and how those rights are exercised.

The U.S. California Consumer Privacy Act (CCPA), which came into force in 2020, aims to protect the privacy and security of personal information of California residents.

These legal provisions play a positive role in protecting the privacy of users, but still have shortcomings.

For example, the “right to delete” in China’s Personal Information Protection Law (PIPL) is not identical to the right to be forgotten; it is instead established as a component of national network information security, and its overall institutional design focuses on ensuring a stable order of online information dissemination (McKenzie et al., 2021).

From a technical point of view, the ease with which personal information is copied and spread also means that “deletion” may not be achievable with one click.

Another noteworthy fact is that the right to be forgotten, or the right to delete, is discussed mostly in academic and legal circles; most ordinary people are unaware that it exists.

Summary

Algorithms can be used to collect and analyze personal data, which in turn can threaten privacy and create information cocoons. At the same time, algorithms can be used to detect and prevent hate speech, though such applications can also undermine freedom of expression and individual privacy. We and the platforms therefore need to find a balance between these uses, and how that balance is struck remains to be seen.

References

Crawford, K. (2021). Atlas of AI: Power, politics and the planetary costs of artificial intelligence (pp. 1–21). Yale University Press.

Jeřábek, H. (2017). Paul Lazarsfeld and the origins of communications research (pp. 4–60). Routledge. https://doi.org/10.4324/9781315533858

Huddleston, T., Jr. (2022, February 8). TikTok shares your data more than any other social media app – and it’s unclear where it goes, study says. CNBC. Retrieved April 8, 2023, from https://www.cnbc.com/2022/02/08/tiktok-shares-your-data-more-than-any-other-social-media-app-study.html

Just, N., & Latzer, M. (2016). Governance by algorithms: Reality construction by algorithmic selection on the internet. Media, Culture & Society, 39(2), 238–258. https://doi.org/10.1177/0163443716643157

Liu, W., & Zhou, W. (2022). Research on solving path of negative effect of “information cocoon room” in emergency. Discrete Dynamics in Nature and Society, 2022, 1–12. https://doi.org/10.1155/2022/1326579

McKenzie, P. D., Milner, G. A., & Sun, C. (2021, September 8). China’s Personal Information Protection Law (PIPL): Key questions answered. Morrison Foerster. Retrieved April 9, 2023, from https://www.mofo.com/resources/insights/210908-chinas-personal-information-protection-law

Ng, A. (2020). Teens have figured out how to mess with Instagram’s tracking algorithm. CNET. Retrieved April 8, 2023, from https://www.cnet.com/google-amp/news/teens-have-figured-out-how-to-mess-with-instagrams-tracking-algorithm/?__twitter_impression=true

Pasquale, F. (2015). Introduction: The need to know. In The black box society: The secret algorithms that control money and information (pp. 1–20). Harvard University Press. Retrieved April 6, 2023, from https://www-jstor-org.ezproxy.library.sydney.edu.au/stable/j.ctt13x0hch

Ruby, D. (2023, April 5). TikTok statistics 2023 – (users, revenue and trends). Demand Sage. Retrieved April 7, 2023, from https://www.demandsage.com/tiktok-user-statistics/

Sima, Y., & Han, J. (2022). Online carnival and offline solitude: “information cocoon” effect in the age of algorithms. Advances in Social Science, Education and Humanities Research, 664, 1–5. https://doi.org/10.2991/assehr.k.220504.445

Wu, Z., Yang, Y., Zhao, J., & Wu, Y. (2022). The impact of algorithmic price discrimination on consumers’ perceived betrayal. Frontiers in Psychology, 13. https://doi.org/10.3389/fpsyg.2022.825420
