Who is manipulating what you say and what you see?

Online platforms: not complete freedom of speech

Have you ever had a post deleted or an account banned in your years of using social media? Your first reaction is probably confusion: where did my post or account go? Then puzzlement or anger: what did I even say? Why would they ban me? You may then appeal to the platform, and after objections and waiting, your content may reappear, or nothing may change and no one may respond. In the end, you edit the content, switch accounts, and post more cautiously in the future.

Who is manipulating what you can say? Although the online world may seem freer than the real one, we are in fact subject to more restrictions when expressing opinions online. Many implicit rules determine whether our speech gets displayed, and when you use social media, it is unquestionably the platform that decides whether information appears.

Kurt Lewin proposed the concept of “gatekeeping” in his 1947 article “Frontiers in Group Dynamics II. Channels of Group Life; Social Planning and Action Research”: information always flows along channels containing gatekeepers, and decisions about whether information or goods may enter or continue to flow within a channel are made either by impartial rules or by the personal opinion of the gatekeeper.

At that time, the concept of “gatekeeping” was widely used in the news industry, but now, with the explosion of information on social platforms, the behavior of “gatekeeping” has also undergone many changes.

· The subject of “gatekeeping” has shifted from a professional community to diverse actors. Originally, “gatekeeping” was carried out by professional news teams, which screened out appropriate information and shaped it into news. Now that ordinary people have become a major source of news, the power of review has passed to Internet companies, that is, to the platforms themselves.

· The procedure of “gatekeeping” has shifted from pre-screening to post-screening. In an information age that prizes timeliness, the vast volume of content on the internet cannot be fully reviewed before publication. Instead, it goes out after simple keyword filtering and is reviewed by the platform afterwards. Aggregation platforms in China, such as Toutiao (Today’s Headlines) and Weibo, adopt this publish-first, review-later approach: after information goes live, the platform down-weights inappropriate or overheated articles, deletes non-compliant content, and blocks infringing and plagiarizing accounts once problems are discovered.
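The two-stage flow described above can be sketched in a few lines. This is a purely illustrative toy, assuming a keyword pre-filter at publish time and a human or algorithmic review step afterwards; none of the names or rules reflect any real platform's system.

```python
# Hypothetical sketch of a "publish first, review later" pipeline.
# The keyword list, weights, and review flags are all illustrative assumptions.

BLOCKED_KEYWORDS = {"scam-link", "banned-term"}  # crude pre-filter list

def publish(post_id, text):
    """Stage 1: publish immediately after a simple keyword check."""
    if any(word in text for word in BLOCKED_KEYWORDS):
        return None  # rejected before it ever appears
    return {"id": post_id, "text": text, "weight": 1.0, "visible": True}

def review(post, is_inappropriate=False, is_overheated=False):
    """Stage 2: moderation after the post is already live."""
    if is_inappropriate:
        post["visible"] = False   # delete non-compliant content
    elif is_overheated:
        post["weight"] *= 0.1     # down-rank instead of deleting

p = publish(1, "hello world")   # passes the pre-filter, goes live at once
review(p, is_overheated=True)   # later review quietly reduces its reach
```

The point of the sketch is the ordering: the post is visible before any substantive review happens, which is exactly why users only discover the rules after they have been punished by them.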

· The process of “gatekeeping” has become opaque, with information filtering gradually moving from the front end to the back end. The algorithmic black box makes it impossible for users to know where and how they violated the rules before being banned. Sarah Myers West’s research with the OnlineCensorship.org project documented user reports of content deletion on social media platforms: users often have little or no information about why their content was removed, and find it difficult to reach anyone at the platform who can explain the decision or hear a complaint, so they can only guess why their posts and accounts were banned.

· The “gatekeeping” standard has shifted from news value to user preferences. Because almost all platforms rely on traffic and advertising for revenue, their recommendations and bans are based on what brings both safety and engagement. While maximizing the reach of controversial content, platforms avoid content with strongly political, gender-related, and similar themes, which to some extent degrades the platform’s ecosystem and information quality and limits what people can discuss online.

Information leakage: an unavoidable risk, both subjective and objective

Besides having posts deleted and being restricted from speaking, do you also feel that your personal information has been leaked? The feeling is common. Every time we sign up for a new website, we simply ignore the lengthy privacy terms. Even when a platform tries to make sure we understand them, by placing the confirm button at the bottom or keeping us on the terms page for ten seconds, we usually just skim rather than genuinely read them.

In fact, on the Internet we have grown accustomed to ceding our rights, trading our information and privacy for permission to access a service. After all, if you do not agree to the terms, you cannot use the site’s features. This is strange, because unless something goes wrong we rarely experience online social spaces as someone else’s property; they feel like our own place (Suzor, 2019). And in “our own place” we post a great deal of content containing our own information, which can itself lead to leakage.

In China there is a widely recognized and unsettling phenomenon of media use: when you mention something in conversation with friends on a chat app, or search for it on one platform, you soon see it recommended to you on shopping apps such as Taobao, as if the networks you use really were one interconnected web, wrapped tightly in surveillance.

Users realize that some of their personal information, such as phone numbers, addresses, and preferences, has been exposed invisibly, but they have no choice: without providing it, they cannot enter the modern “second society” of the online world, or can only use versions of the media stripped of important functions.

At the same time, besides these unintentional leaks and forced submissions, the information we release proactively is not secure either. We show a real willingness to disclose information online. The online world is now almost a microcosm of the real one: we conduct much of our social life there, so self-presentation is essential. Studies have found, for example, that students are generally “OK” with friends, family, classmates, and even strangers viewing their social networking profiles. But presenting information to other users also increases our disclosure to other entities on the network: third-party apps, advertisers, and the social media platforms themselves.

As we have seen, artificial intelligence can now easily create fake messages, images, and videos. Whether in fraud cases built on AI face-swapped videos of relatives and friends, or in news of Trump angrily denouncing images that swapped his face onto other golfers’ bodies, it is clear that the portraits we upload to the internet are not secure: they can easily become material for crimes or other harmful behavior, and modern technology makes the fakes hard to distinguish. Many people now call for posting less personal information on social media altogether, difficult as that is.

Beyond the unauthorized use of our information, the rise of information cocoons also makes people feel manipulated by platforms and big data. Everything we see on a platform seems staged: once you reveal an interest to the network, endless identical or similar content is pushed to you, and because the recommendations match your interest, you keep opening them. Eventually the content you encounter online becomes very narrow, as if wrapped in a cocoon, which runs entirely against the open nature of the Internet.
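The feedback loop behind the cocoon can be simulated in miniature. This is a toy model, assuming only that clicks reinforce an interest score and that higher-scoring topics are shown proportionally more often; it is not any real platform's recommender.

```python
# Toy simulation of the "information cocoon" feedback loop: clicks raise a
# topic's interest score, and higher scores win more future recommendations.
# Topic names, scores, and the +0.5 reinforcement are illustrative assumptions.
import random

random.seed(0)
TOPICS = ["politics", "sports", "music", "tech", "food"]
interest = {t: 1.0 for t in TOPICS}  # start with uniform interest

def recommend():
    """Pick a topic with probability proportional to its interest score."""
    total = sum(interest.values())
    r = random.uniform(0, total)
    for topic, score in interest.items():
        r -= score
        if r <= 0:
            return topic
    return TOPICS[-1]

for _ in range(500):
    topic = recommend()
    interest[topic] += 0.5   # opening the recommendation reinforces the signal

# After many rounds, a handful of topics dominate what the user is shown.
top = max(interest, key=interest.get)
share = interest[top] / sum(interest.values())
```

This rich-get-richer dynamic is why early, possibly accidental clicks can end up defining almost everything a user sees later.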

For example, in 2014 Facebook introduced ad preference settings. Facebook’s explanation to users is that one reason they see a particular type of advertisement is that an advertiser wants to reach people similar to its existing customers. The user’s situation, the pages they have liked, the ads and posts they have clicked, their age, gender, and location, and the devices they use to access Facebook, all determine which ads they see. So not only the content but also the advertising is tailored by the platform to our preferences, to attract us as effectively as possible.
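A minimal sketch of this kind of attribute-based targeting, using the signals the paragraph lists (liked pages, age, location, device). All data structures, ad names, and matching rules here are hypothetical illustrations, not Facebook's actual system.

```python
# Illustrative attribute-based ad matcher: an ad is shown when the user falls
# inside its age range and shares at least one interest (if any are required).
# Every value below is a made-up example.

user = {
    "liked_pages": {"running-club", "camera-fans"},
    "age": 27,
    "location": "Berlin",
    "device": "mobile",
}

ads = [
    {"name": "trail shoes",     "interests": {"running-club"}, "age": (18, 40)},
    {"name": "retirement plan", "interests": set(),            "age": (55, 99)},
    {"name": "lens sale",       "interests": {"camera-fans"},  "age": (16, 99)},
]

def eligible(ad, user):
    lo, hi = ad["age"]
    in_age = lo <= user["age"] <= hi
    # an empty interest set means the ad targets everyone in the age range
    shares_interest = not ad["interests"] or ad["interests"] & user["liked_pages"]
    return in_age and bool(shares_interest)

shown = [ad["name"] for ad in ads if eligible(ad, user)]
# shown == ["trail shoes", "lens sale"]
```

Even this crude rule set shows how a profile assembled from likes and demographics decides what a user is ever allowed to see.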

The concept of “privacy” is predicated on the idea that a private individual wants to be “let alone”, free from observation or disturbance by others. On the Internet, however, our private information and even our personal thoughts cannot be “alone”: they are constantly inspected by algorithms and at risk of being used by malicious actors.

We can assume that all of our online activity is being monitored, yet this panopticon is obscured by the strongly personal, intimate feel of social media platforms.

First conceived by Samuel Bentham as a circular prison design, the Panopticon allowed a single watchman to observe all the cells without the prisoners knowing whether they were being watched. Michel Foucault later interpreted panopticism as a form of power that extended into the complex disciplinary networks of the 19th century.

In the context of social media, the Panopticon is reversed: the controlled user sits in the middle of the sociotechnical system, while the controllers, the other users, surround her or him. Mark Poster developed Foucault’s ideas by likening the information society to a massive “superpanopticon”, a complex network that systematically and efficiently monitors the public. Administrators and social media platforms act as the guards of this superpanopticon, network information systems as its watchtowers, and users as its inmates.

Take social media platforms such as Instagram, X, and Weibo, where we can post: we tirelessly publish updates, start conversations, and share our location, our behavior, and other private matters. All of this happens in front of the big data apparatus of platforms and their managers, and each of us is monitored, with no real personal privacy.

How to maximize the protection of our privacy, security, and digital rights

So when we must, and do willingly, use online media, how should we deal with this invisible privacy leakage and manipulation?

We should first understand that, to a large extent, the platform pries into our privacy not out of interest in the many forms of identity we build on social media, but in order to extract profitable information from it. Big data restricts our speech to keep the platform stable and its traffic safe, collects our preferences to target advertising precisely and earn more revenue, and manages information about us that can end up in the hands of spammers and scammers.

So when using social media, we can reduce how much of our own information we disclose, within the limits the platform allows. And in fact, national law, platform managers, and individual users should all work to change this situation.

· Strengthen the obligations of data controllers in processing personal data. Regulators should tighten oversight of the privacy contracts drafted by network service providers, especially websites, to correct the unreasonable terms produced by the unequal standing of users and providers, and to clarify the data controller’s responsibility and obligation to protect user privacy.

· Distinguish different uses of data and apply tiered, classified protection. In the big data era, data obtained through the Internet spans private information, identity information, log information, public information, and more, at different levels of sensitivity. Platforms should pay more attention to distinguishing the purposes of data, clarifying which data may be used, the manner and scope of its use, and the standards for protecting personal data, so that users can make full use of the media with minimal disclosure.

· Strengthen privacy-protection education and personal self-control. Users should learn to judge how much information it is reasonable for a platform to collect, so it is time to stop hastily skimming privacy terms when installing new software. Many people have not realized that some personal information falls under privacy rights, and their awareness of privacy protection is weak. They should improve their new media literacy to face an unsafe network environment: for example, disable permissions that apps do not genuinely need, such as microphone or camera access, and post less important personal information on the platform.


Suzor, N. P. (2019). Who makes the rules? In Lawless (pp. 10-24).

      Lewin, K. (1947). Frontiers in Group Dynamics: Concept, Method and Reality in Social Science; Social Equilibria and Social Change. Human Relations (New York), 1(1), 5-41. https://doi.org/10.1177/001872674700100103

      van Dijck, J., Poell, T., & de Waal, M. (2018). The Platform Society. Oxford University Press. https://doi.org/10.1093/oso/9780190889760.001.0001

Marwick, A. E., & boyd, d. (2018). Understanding privacy at the margins: Introduction. International Journal of Communication (Online), 1157-.

Romele, A., Gallino, F., Emmenegger, C., & Gorgone, D. (2017). Panopticism is not enough: Social media as technologies of voluntary servitude. Surveillance & Society, 15(2), 204-221. https://doi.org/10.24908/ss.v15i2.6021
