As giant technology corporations have come to dominate the global digital economy in the platform era, hateful online content, information monopolies, and platforms' influence on media content producers and creative industries have become major concerns for governments and the public. How these platforms should be regulated, and how external regulatory institutions should be balanced against internal governance practices, remain open questions (Flew, 2021). As Suzor (2019) illustrates, platforms mediate the ways people communicate, and the decisions they make have real consequences for public culture and for the social and political lives of their users. Nonetheless, online technologies undoubtedly complicate these dynamics, to the extent that most people find themselves continuously negotiating between concealment, disclosure, and connection (Marwick & boyd, 2019, p. 1158).
This blog introduces the current regulatory system for Weibo as a case study, and critically analyses how governance frameworks are practised on digital platforms through the phenomena and issues discussed below. It finds that a balance in this regulatory approach remains difficult to reach: while a hardline legal approach may be technically correct, for many people the argument that values such as freedom of expression do not apply to social media rings false.
Analysis of Internet Cultures and Governance
Networked technologies have both expanded and challenged established communication rights and freedom of expression in the new media environment. According to previous research, digital technology acts as a nexus for promoting democracy and communicative freedom by opening new opportunities for self-expression and the political involvement of new voices (Karppinen, 2017, p. 96). More specifically, digital technologies are also seen as facilities for the broader promotion and realisation of human rights: beyond their obvious influence on freedom of expression, this view treats digital tools more widely as instruments that can serve wider human rights purposes, such as political participation, economic advancement, overall social progress, and the reduction of inequality.
Meanwhile, critical scholars sharply point out that the same digital tools may also be used for surveillance and censorship, and may accelerate new patterns of communicative inequality and concentrations of power (Karppinen, 2017, p. 97). Individual rights currently appear to form a central normative framework for dealing with policy issues relating to new networked technologies and the Internet. The prominence of rights also reflects a perception that human rights are increasingly under threat in the digital age, as implied by ongoing concerns about new structures of control and revelations of widespread online surveillance practices (Karppinen, 2017, p. 95).
One of the core issues is privacy, which refers to an individual's ability to remain unobserved and undisturbed by others. However, allowing people privacy in their online interactions typically requires framing the choices they make and providing the affordances that make this freedom possible (Marwick & boyd, 2019, p. 1158). For most people, privacy is not just the capability to limit access to information, but the capacity to effectively control social situations by shaping what information is available to others, how it is interpreted, and how it is disseminated.
Users' data now enters the digital stream as a by-product of participation in contemporary life. As data-based systems become more prevalent and people struggle to trust platform operators, the lines between choice, circumstance, and coercion become increasingly blurred (Marwick & boyd, 2019, p. 1159). The technology industry often frames its products as an exchange in which people willingly share personal information in return for benefits. While many people approach particular services knowing that they have intentionally chosen to do so, a great deal of information is not collected from truly informed and consenting individuals (Suzor, 2019, p. 12). That is, although the web is easy to access, not all online users have the same rights: some marginalised groups may be forced to provide information in exchange for basic services.
One important asymmetry of rights is that powerful actors scrutinise others while avoiding scrutiny themselves. Institutions are intent on obtaining private details about potential employees and clients, yet at the same time try to avoid providing regulators with information and statistics about their own processes. To this end, internet companies collect ever more data about their users through algorithms and big data, but do not allow those users to exercise corresponding control over the digital profiles generated. New technologies may lead users to 'quantify' themselves without realising it, with the resulting wealth of information and data being recorded in unique profiles (Pasquale, 2015, p. 4).
To this end, as Nissenbaum (2018) argues, the way to refine policies for the governance of today's platforms is to embed them in a theory of contextual integrity: focusing on and encouraging innovation and recognising the interests of commercial actors, while setting appropriate constraints to protect user privacy and the flow of personal information. Weibo provides privacy management across contact, interaction, and visibility settings. As the diagram below shows, in their default state the basic protection features can ward off some privacy intrusions and negative remarks; accounts with large followings may need to enable additional filters to prevent hate speech.
Weibo is a mainstream digital platform in China that allows users to receive and disseminate information more efficiently, in both speed and reach, than previous channels. However, the popularity of Weibo posts poses a huge challenge to the state's information control regime, since it exposes a large number of negative social issues at the same time as online users express and publicise their discontent with them. User-led online opinion has therefore prompted central government intervention to correct undesirable events and prevent their further spread (Sullivan, 2014, p. 24).
Both platforms and governments hold regulatory rights and responsibilities. For platforms, the legal reality is that digital platforms belong to the corporations that built them, and those corporations have almost absolute power to determine how they operate. The terms of service agreements of the major platforms contain a basic rule: the platform can terminate your access at any time, for any reason, or for no reason at all. The platform presents itself to users as bound by law, and so appears to be strictly regulated. In a way, this is exactly how social media sells us the idea that platforms give users the right to speak and share freely (Suzor, 2019, p. 12).
However, the terms of service is a legally binding contract establishing a simple customer transaction: in exchange for access to the platform, the user agrees to be bound by the terms and conditions laid down. The legal relationship between supplier and user is that between company and consumer, which means the terms of service document usually gives the operator a great deal of power. For large platform companies in particular, these terms are written to protect their commercial interests: they grant platform operators absolute discretion to set and enforce the rules as they see fit. The platform thus remains dominant because terms of service documents are not intended to be governing documents; they are designed to protect the legitimate interests of the company (Suzor, 2019, p. 11). In other words, if consumers are satisfied with the content provided by algorithms, and if businesses and governments were correspondingly transparent, the reduction in personal privacy might be worthwhile. Contrary to that expectation, however, credit raters, search engines, and other agencies continue to collect information about users and convert it into scores, risk calculations, rankings, and watch lists, and these companies escape scrutiny unless a leak produces serious consequences (Pasquale, 2015, p. 5).
The government has also used regulatory mechanisms on several fronts to maintain control over society when the negative impact of certain events has caused growing public panic (Sullivan, 2014, p. 24). On today's Weibo, dissent and mobilisation remain censored and controlled by the platform, and the goals of its participants are necessarily limited. Even though some researchers argue that 'political participation has improved significantly as a result of Weibo' (Sullivan, cited in Richburg, 2011), public opinion remains dominated and monopolised, as news agencies and online media mainly publish official announcements about cases. Studies have shown that for sudden or acute social issues, ordinary citizens are the main initiators, but because they have limited opinion leadership and voice, they must rely on the media to organise the dissemination of news about these events. Online media and news organisations thus become the dominant voice of a public challenging official propaganda (Nip & Fu, 2016).
The Chinese government also monitors the dissemination of information to identify and remove 'threatening' behaviour on microblogs by filtering sensitive words. That is, radical comments by individual users are detected through AI and removed from public view. Wherever microblogging incidents have the potential to grow beyond small-scale resistance and discontent, the government enforces its censorship and propaganda regime, reinforced by management of the technological infrastructure, political and legal leverage over internet corporations, and the integration of offline public security agencies (Sullivan, 2014, p. 32).
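The sensitive-word filtering described above can be illustrated, in highly simplified form, by the sketch below. The word list, function names, and matching logic are hypothetical and purely illustrative; real moderation systems combine far larger dictionaries, machine-learning classifiers, and human review.

```python
# Hypothetical, minimal sketch of keyword-based content filtering.
# SENSITIVE_TERMS is an illustrative placeholder, not a real blocklist.
SENSITIVE_TERMS = {"example-banned-term", "another-banned-term"}

def is_blocked(post: str) -> bool:
    """Return True if the post contains any listed sensitive term."""
    text = post.lower()
    return any(term in text for term in SENSITIVE_TERMS)

def filter_posts(posts: list) -> list:
    """Keep only posts that pass the keyword check."""
    return [p for p in posts if not is_blocked(p)]
```

Even this toy version shows why such systems are blunt instruments: a simple substring match cannot distinguish critique from incitement, which is part of why, as discussed above, legitimate discussion is often swept up alongside the content the filter targets.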
For example, Weibo comments under news organisations' posts often show 'comments selected by blogger' or 'comments closed' in relation to socially egregious cases. Sometimes the relevant phrase or image is removed from the platform entirely, with a notice that, 'according to the relevant laws, regulations, and policies', the page cannot be found, preventing the further spread of negative opinion while a police investigation is pending. From another perspective, however, this approach also limits people's freedom of expression. Below is a news topic on 'A BMW ramming into a crowd in Guangzhou has killed 5 and injured 13'. Shortly after the news was published, the relevant words were blocked by the Weibo platform, live photographs became inaccessible, and discussion was banned. To some extent, the criteria for deciding whether an individual is speaking freely or deliberately spreading disinformation depend on the tyranny of the platform (Pasquale, 2015).
In addition, political sensitivity is an important factor. While new technologies are perceived as inherently extending or threatening human rights, these threats and opportunities do not exist independently of regulation and politics. Because the whole field of digital communications is largely the result of state intervention and government-funded research, even a failure to intervene in its advancement is a political decision. Different articulations of digital rights therefore carry different political assumptions, with sophisticated influences on policy and regulation (Karppinen, 2017, p. 100).
Therefore, as Sullivan (2014) suggests, the impact of Weibo may lie not in individual incidents and their suppression, but in a long-term process in which users become accustomed to greater transparency and political participation and demand more systematic accountability mechanisms. In conclusion, platform issues such as privacy and user rights still require providers to continually improve their regulatory solutions in order to give users a better online experience. Mechanisms for governing speech on platforms, in terms of both digital technology and regulation, still require more specific detection and scrutiny. This does not mean that users' free speech should be further restricted, but rather that the voices of the general public should be protected to the greatest extent possible while negative speech is blocked. As the analysis above shows, digital platforms should, on the one hand, give consumers sufficient creative freedom and freedom of expression; on the other hand, governments and providers still need to improve regulation to keep pace with future developments.
Flew, T. (2021). Regulating platforms. Polity Press.
Karppinen, K. (2017). Human rights and the digital. In H. Tumber & S. Waisbord (Eds.), Routledge Companion to Media and Human Rights (pp. 95-103). Abingdon, Oxon: Routledge.
Marwick, A., & boyd, d. (2019). Understanding privacy at the margins: Introduction. International Journal of Communication, 1157-1165.
Nip, J. Y. M., & Fu, K. (2016). Challenging official propaganda? Public opinion leaders on Sina Weibo.
Nissenbaum, H. (2018). Respecting context to protect privacy: Why meaning matters. Science and Engineering Ethics, 24(3), 831-852. https://doi.org/10.1007/s11948-015-9674-9
Pasquale, F. (2015). Introduction: The need to know. In The Black Box Society: The Secret Algorithms That Control Money and Information (pp. 1-18). Harvard University Press.
Sullivan, J. (2014). China's Weibo: Is faster different? New Media & Society, 16(1), 24-37. https://doi.org/10.1177/1461444812472966
Suzor, N. P. (2019). Who makes the rules? In Lawless: The Secret Rules That Govern Our Lives (pp. 10-24). Cambridge, UK: Cambridge University Press.