With the rise of digital media platforms, most people download social media apps to keep up with the latest trends and follow the news. While enjoying the convenience of these platforms, we tend to overlook the side effects of using them. The Facebook scandal, for example, made users worry about data privacy: nobody wants social media to sell or share their personal information without authorization. Yet as members of a collective society, it is useful to understand the social forces that influence daily life and long-term trends, and in many situations individuals are asked to hand over personal information in exchange for access to goods or services. When we use digital platforms to inform and educate ourselves, we are therefore inevitably caught in a trade-off between data privacy and access to information.
This trade-off, however, is often accompanied by unfairness and a lack of transparency.
The growth of digital power is widening the divide between users and platforms. Users outside the “technical barrier” find it harder to protect their own privacy. When the desire to use social media is urgent, users often ignore the security of their own data, which enables secondary use of their information and can produce the serious consequences of “negative externalities” (Diakopoulos, 2015). Information asymmetry between platforms and users, data misuse, lack of transparency, hacking, platform monopoly power, and algorithmic bias and discrimination are all topics of concern.
Figure 1 (source: Google)
Since the EU General Data Protection Regulation (GDPR) took effect in 2018, awareness and regulation of data protection have been evolving globally. Data privacy has been widely discussed, and recently TikTok was thrust into the limelight by the US government for allegedly compromising user privacy. In 2020, the Trump administration first attempted to ban TikTok in the United States by executive order, citing national security concerns, but the ban was temporarily blocked by a federal court (Catherine, 2023). In 2021, the Trump administration threatened to outlaw TikTok unless it was divested from its Chinese owners. In December 2022, the U.S. Congress banned TikTok from federal government devices, and the Biden administration has since sought expanded legal authority to ban TikTok nationwide.
The question of whether to ban TikTok again attracted international attention in 2023. At 10 a.m. local time on March 23, TikTok CEO Shou Zi Chew attended a hearing of the U.S. House Energy and Commerce Committee to formally respond to U.S. lawmakers’ questions on “national security” and other issues.
What is TikTok and what threat does it pose to user privacy?
TikTok is a social media platform focused on short videos ranging from a few seconds to up to three minutes. Its increasingly diverse content covers life hacks, everyday motivation, dance challenges, quizzes, and quirky, funny clips. TikTok launched in China in 2016 and became the most downloaded app in the US and globally in 2020 (Nakafuji, 2021); in the first half of 2021, it was the #1 non-gaming app in the world (Vorhaus, 2021). In the US, nearly half (48%) of 18-29-year-olds use TikTok (Auxier & Anderson, 2021).
Figure 2 TikTok interface (source: TikTok)
As users browse TikTok, the platform collects their data in order to deliver new content more precisely.
The Chinese government may have access to U.S. TikTok users’ data.
Last year, BuzzFeed raised alarms when it released leaked audio from more than 80 internal U.S. TikTok meetings. According to statements made by nine TikTok employees in those recordings, engineers at TikTok’s Chinese parent company, ByteDance, accessed U.S. user data from at least September 2021 to January 2022; the audio recordings, screenshots, and other extensive evidence corroborated their revelations. The reports sparked a new boycott of the company and prompted a Federal Communications Commission (FCC) commissioner to write to the CEOs of Apple and Google asking them to remove TikTok from their app stores (Dixit, 2022). But the agency does not regulate the Internet or any company operating on it, which means it has no authority to force companies like Apple or Google to ban apps from their platforms. TikTok could only respond that employees in the Asia-Pacific region, including China, would have only minimal access to user data from the EU and the US.
Figure 3 China vs. America (source: Google)
In addition, several security vulnerabilities have been identified in TikTok, making it an attractive target for hackers. Hackers have exploited TikTok to steal personal information, for example by sending users SMS messages containing malware (Milan & van der Velden, 2016).
TikTok collects users’ preferences, and its algorithmic recommendations can deliver distorted viewpoints and discriminate.
The US government is also concerned about the algorithms TikTok uses to select content. TikTok, like most social media apps, uses algorithms designed to learn users’ interests and then adjust the content so that users keep using the app. On the one hand, recommendation systems can enhance the user experience, while the “long tail” of commerce they enable serves the economy and society; on the other hand, they must collect as much information as possible to build the user models that drive personalized recommendations. TikTok has not shared its algorithms, so it is unclear how the app selects content for users. The US government therefore worries that TikTok may limit users’ exposure to diverse viewpoints by creating isolated ideological communities and amplifying the fragmentation of the public sphere (Napoli, 2018).
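Since TikTok has not disclosed how its algorithm works, the general pattern can only be illustrated hypothetically. The sketch below shows, with invented topics, weights, and update rules, how an engagement-driven recommender might shift a user's interest profile toward whatever they watch longest and rank the next batch of videos accordingly:

```python
from collections import defaultdict

# Hypothetical sketch of an engagement-driven recommender loop.
# TikTok's actual algorithm is undisclosed; the topics, weights,
# and update rule here are illustrative assumptions only.

def update_profile(profile, video_topic, watch_fraction, learning_rate=0.3):
    """Shift the interest profile toward topics the user watches longest."""
    profile[video_topic] += learning_rate * watch_fraction
    # Renormalize so the profile stays a probability-like distribution.
    total = sum(profile.values())
    return {t: w / total for t, w in profile.items()}

def rank_candidates(profile, candidates):
    """Rank candidate videos by the user's current interest weights."""
    return sorted(candidates, key=lambda v: profile.get(v["topic"], 0.0), reverse=True)

# Start with a uniform profile over three example topics.
profile = {"dance": 1 / 3, "cooking": 1 / 3, "news": 1 / 3}

# The user watches a dance clip to the end but skips a news clip early.
profile = update_profile(defaultdict(float, profile), "dance", watch_fraction=1.0)
profile = update_profile(defaultdict(float, profile), "news", watch_fraction=0.1)

candidates = [{"id": 1, "topic": "news"}, {"id": 2, "topic": "dance"}, {"id": 3, "topic": "cooking"}]
feed = rank_candidates(profile, candidates)
print([v["topic"] for v in feed])  # dance now ranks first
```

Even this toy version makes the privacy point concrete: the quality of the ranking depends entirely on how much behavioral data (here, watch time per topic) the platform records about each user.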
Figure 4 The goals of TikTok’s algorithm (source from New York Times)
A study that used in-depth interviews to explore how TikTok users collaborate with AI algorithms found that users may consciously try to train the algorithm during use in order to obtain content that better matches their interests (Kang & Lou, 2022). Another empirical study noted that for experience products, consumers prefer content-based filtering with a higher degree of matching because it is perceived to provide greater transparency, whereas for search products, a collaborative filtering model that incorporates recommendations endorsed by other users triggers a “trend effect” that leads to more positive user ratings (Liao & Sundar, 2022).
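The two approaches that study compares can be illustrated with a toy example. The users, clips, feature vectors, and ratings below are all invented for demonstration; the point is only the structural difference: content-based filtering matches item features against the user's own profile, while collaborative filtering ignores item features and leans on similar users' ratings.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Content-based filtering: score items by how well their feature
# vectors (here: [dance, cooking, news]) match the user's profile.
item_features = {"clip_a": [1, 0, 0], "clip_b": [0, 1, 0], "clip_c": [0.5, 0, 0.5]}
user_profile = [0.8, 0.1, 0.1]  # this user mostly watches dance

content_scores = {item: cosine(user_profile, f) for item, f in item_features.items()}
content_pick = max(content_scores, key=content_scores.get)

# Collaborative filtering: recommend what similar users liked,
# using only the ratings matrix, never the item features.
ratings = {
    "alice": {"clip_a": 5, "clip_b": 1},
    "bob":   {"clip_a": 4, "clip_b": 2, "clip_c": 5},
}
# Alice and Bob rate the shared clips similarly, so Bob's high
# rating of clip_c becomes a recommendation for Alice.
shared = ["clip_a", "clip_b"]
sim = cosine([ratings["alice"][i] for i in shared], [ratings["bob"][i] for i in shared])
collab_pick = "clip_c" if sim > 0.9 else None

print(content_pick, collab_pick)  # clip_a clip_c
```

The transparency contrast the study describes falls out of the structure: the content-based score can be explained by visible item attributes, whereas the collaborative pick rests on other users' behavior, which the platform must collect and the user never sees.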
The algorithm may also be biased in ways that influence people to believe certain things.
There have been numerous allegations that TikTok’s algorithm is biased, reinforces negative thoughts in young users, and is used to influence public opinion. According to researchers at the Queensland University of Technology, it takes less than 30 seconds to find harmful content on TikTok, and the algorithm can fill a user’s recommendation page with offensive videos in a matter of hours (Dias, 2021). The technology advocacy group Reset Australia ran a series of experiments and found that the algorithm took about four hours to learn that a 13-year-old was interested in racist content and about seven hours to fill the user’s feed with sexist videos. This manipulation may be unintentional, but there are concerns that the Chinese government has been using, or may use, the algorithm to influence people.
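The speed the researchers report reflects a feedback loop: serving a topic more often generates more engagement with it, which makes it served even more often. A minimal mean-field simulation, with entirely invented engagement numbers, shows how quickly such a loop can concentrate a feed even when it starts balanced:

```python
# Mean-field sketch of an engagement feedback loop. All parameters are
# invented for illustration; this is not TikTok's actual update rule.
weights = {"neutral": 1.0, "harmful": 1.0}     # start with a balanced feed
ENGAGEMENT = {"neutral": 0.1, "harmful": 0.5}  # user lingers on harmful clips

for hour in range(24):  # 24 simulated "hours" of recommendations
    total = sum(weights.values())
    for topic in weights:
        served_share = weights[topic] / total
        # Reinforce each topic in proportion to how often it was served
        # and how strongly the user engaged with it.
        weights[topic] *= 1 + served_share * ENGAGEMENT[topic]

share_harmful = weights["harmful"] / sum(weights.values())
print(f"harmful share of the feed after 24 hours: {share_harmful:.0%}")
```

Under these assumptions the feed tilts almost entirely toward the higher-engagement topic within the simulated day, without any deliberate manipulation: the concentration emerges purely from optimizing for engagement.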
TikTok’s response and previous bans on TikTok around the world
Shou Zi Chew was grilled by U.S. politicians during a congressional hearing that lasted more than five hours. Questions focused on TikTok’s handling of user data and whether the CCP had access to it, as well as on how harmful content, such as material about self-harm and eating disorders, is distributed on the app. Chew repeatedly emphasized at the hearing and in previous meetings that TikTok’s U.S. user data is stored in Oracle’s cloud infrastructure, that access to the data is fully controlled by U.S. personnel, and that the Chinese government and others have no access to it.
In 2020, India became one of the first countries to impose a lasting nationwide ban on TikTok, along with dozens of other Chinese applications, citing privacy and security concerns. In December 2022, Taiwan imposed a public-sector ban after the U.S. FBI deemed the app a national security risk. That same month, the US House of Representatives banned the app on devices used by lawmakers and staff. More recently, European Union legislators were barred from installing TikTok on their devices. Many other countries have issued similar bans, including Canada, Latvia, Denmark, Belgium, the UK, New Zealand, France, the Netherlands, and Norway (Catherine, 2023).
Conclusion: What will happen next remains to be seen.
If the U.S. government does succeed in banning TikTok, user privacy will be partially protected, but the ban would also raise new social concerns. According to the Wall Street Journal, a group of U.S. netizens protested to Congress on March 22, the day before the hearing, warning that banning the app would provoke widespread public opposition. “It’s one of the most powerful tools for young people to communicate with each other and engage in civic affairs,” said Aidan Cohen Murphy, founder of an organization with nearly 300,000 followers on TikTok. Ms. Deming, who owns a bookstore, said that 90% of her current sales come from the TikTok platform. Some U.S. opinion leaders believe the government’s ban is ultra vires and overly censorious. But don’t forget: it’s not just TikTok.
While the risks of TikTok’s aggressive collection and manipulation of personal data are undeniable, the same issues apply to companies like Meta, Twitter, Google, and Amazon. The consequences of massive data collection are not hypothetical. We need not just TikTok but all major social platforms to raise the profile of data privacy.
Regulators will then need to consider a range of responses, including self-regulation, co-regulation with industry, and government regulation. Flew argues that these responses should be tailored to the specific platform and context, and designed to promote the public values at stake (Flew, 2021). Regulating digital platforms requires a new approach that prioritizes public values: identifying the relevant values, assessing the platform’s impact on them, and determining appropriate regulatory responses. Yet what “privacy” means varies from country to country depending on history, culture, and philosophical influences. This article has analyzed, using TikTok as a case study, the privacy violations users may face on digital platforms. Users, platform owners, and governments should all raise their awareness of privacy protection. Meanwhile, since there is no agreed-upon standard even for human news gatekeepers, assessing the performance of machines in that role is doubly complicated (Nechushtai & Lewis, 2019). The outcome of the TikTok matter therefore remains uncertain; let us continue to watch developments in both the United States and China.
References
Diakopoulos, N. (2015). Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398-415.
Napoli, P. M. (2018). What if more speech is no longer the solution: First Amendment theory meets fake news and the filter bubble. Federal Communications Law Journal, 70, 55-105.
Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale University Press.
Kang, H., & Lou, C. (2022). AI agency vs. human agency: Understanding human-AI interactions on TikTok and their implications for user engagement. Journal of Computer-Mediated Communication, 27(5), zmab014. https://doi.org/10.1093/jcmc/zmab014.
Liao, M. Q., & Sundar, S. S. (2022). When e-commerce personalization systems show-and-tell: Investigating the relative persuasive appeal of content-based versus collaborative filtering. Journal of Advertising, 51(2), 256-267. https://doi.org/10.1080/00913367.2021.1887013.
Catherine, T., & Brian, F. (2023, March 18). TikTok banned in the US: What you need to know. CNN Business. https://edition.cnn.com/2023/03/18/tech/tiktok-ban-explainer/index.html.
Milan, S., & Van der Velden, L. (2016). The alternative epistemologies of data activism. Digital Culture & Society, 2(2), 57-74.
Reardon, M. (2022, February 4). TikTok called a national security threat: Here’s what you need to know. CNET. https://www.cnet.com/news/tiktok-called-a-national-security-threat-heres-what-you-need-to-know/.
Dias, A., McGregor, J., & Lauren, D. (2021). A deep dive into the black side of TikTok’s algorithm for Jitterbug overseas. ABC News. https://www.abc.net.au/chinese/2021-08-01/tiktok-app-algorithm-content-censorship-chinese-bytedance/100331078.
Dixit, P. (2022, January 20). TikTok responds to senators confirming BuzzFeed News report. BuzzFeed News. https://www.buzzfeednews.com/article/pranavdixit/tiktok-responds-to-senators-confirming-buzzfeed-news-report.
Flew, T. (2021). Regulating Platforms. Cambridge: Polity, pp. 72-79.
Nechushtai, E., & Lewis, S. C. (2019). What kind of news gatekeepers do we want machines to be? Filter bubbles, fragmentation, and the normative dimensions of algorithmic recommendations. Computers in Human Behavior, 90, 298-307. https://doi.org/10.1016/j.chb.2018.07.043.