How should media platforms be regulated and governed in the era of AI and algorithms?


Amid rapid digital globalisation, successive waves of technological advance have made life easier than ever before. Among the most prominent of these technologies are AI and algorithms, which have become ubiquitous across industries as they continue to mature. Whether for learning, communicating, solving problems or making recommendations, AI and algorithms can benefit people in daily life and professional work, offering increased efficiency, reduced transaction costs, and the scalability and applicability of a general-purpose technology (Bresnahan, 2010, p. 765). These advantages have made the media industry a major adopter of AI and algorithms, and media consumption in particular is increasingly shaped by the choices they make (Just & Latzer, 2019, p. 239). Yet AI and algorithms are a double-edged sword for media. When media companies use them for sensible governance within laws and rules, for example to combat harmful information, they are undoubtedly helpful; but when platforms violate privacy or ethical norms for the sake of profit, the consequences can be devastating for any company. This blog examines how media companies can use AI and algorithms for effective regulation and governance, and how, in the future, these technologies can help more people and businesses benefit in an ethical and regulated manner.

Figure 1 – (Grant, 2023)

Human Rights Perspective

As the influence of AI and algorithms continues to expand, so does their reach, raising a number of human rights issues. In today’s society, human rights are a sensitive but inescapable issue, and each company has its own ideas and rules when it comes to a human rights-based perspective on AI and algorithms. The systems that platforms use to manage content and users are complex and multi-layered, with algorithms increasingly either removing content automatically or flagging it for human review (Savolainen, 2022, p. 1095). All of this puts every user’s rights at risk to some degree. Once content on media platforms is linked to social well-being, the human rights impact is felt not only on online platforms but in the real world, for example in credit ratings, crime risk assessments and health diagnoses (Bruckner, 2018, p. 5). We therefore need to consider whether we are willing to hand over our right to publish content on the internet to these emerging technologies. Users who encounter content that AI and algorithms judge to be in breach of the rules must be given the opportunity to complain, or even to appeal to the platform, regardless of the laws and regulations of any country or the guidelines of the platform. This is not only a matter of protecting human rights but also of keeping the initiative of the internet in people’s own hands, making the online world as open and fair as possible.

In addition, the protection of privacy in the collection of big data is a very important part of the human rights issue. Big data provides the basis for automated algorithmic choices and creates an equally growing need to process and understand this vast amount of collected data (Just & Latzer, 2019, p. 240). Today, data-driven recommendation is so sophisticated that each user can quickly find content of interest through their preferences. While users may not have access to back-end data arrangements and social media algorithm code, their impact can still be felt (Bucher, 2018, p. 23). Such systems make media platforms more pleasant to use: people can simply view some of their favourite content and let the system suggest what else they might be interested in, saving them time searching on their own and helping them avoid missing relevant content. But the collection of big data also raises human rights questions. Whether people consent to their data being collected by media platforms, and whether platforms keep users’ data secure so that it is not leaked to governments or other companies, are issues that need to be closely monitored and regulated. It is fundamental to a media platform that users can put their human rights concerns and confusion to rest, both in content review and in data collection.
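To make the mechanism concrete, the preference-driven selection described above can be sketched in a few lines. This is a toy illustration of the general idea only, not any platform’s actual system; the `recommend` function and the sample data are invented for this example:

```python
# Toy sketch: how collected preference data can drive what a feed suggests.
# A real recommender is vastly more complex; this only shows the principle.
from collections import Counter

def recommend(history, catalogue, k=2):
    """Rank unseen posts by how often their topic appears in the user's history."""
    topic_counts = Counter(post["topic"] for post in history)
    unseen = [p for p in catalogue if p not in history]
    # Posts on topics the user has viewed more often rank higher.
    ranked = sorted(unseen, key=lambda p: topic_counts[p["topic"]], reverse=True)
    return ranked[:k]

history = [{"id": 1, "topic": "travel"}, {"id": 2, "topic": "travel"},
           {"id": 3, "topic": "food"}]
catalogue = history + [{"id": 4, "topic": "travel"}, {"id": 5, "topic": "music"},
                       {"id": 6, "topic": "food"}]
print(recommend(history, catalogue))  # the unseen "travel" post ranks first
```

Even in this trivial form, the privacy point is visible: the quality of the suggestion depends entirely on how much viewing history the system has collected, which is exactly the data whose consent and custody the paragraph above questions.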

Morality & Ethics Perspective

As AI and algorithms emerge as one of the primary foundations of media platforms, their morality and ethics need to be examined and tested. Ethical judgement has always been the bottom line of human thinking, and how to give AI and algorithms the same vetting ability is something that still needs progress. The question of platforms’ responsible agency makes the moral significance of algorithms even more important (Verbeek, 2014, p. 78). The morality and ethics of AI and algorithms will also largely determine the morality and ethics of future society. As among the most widely deployed and most discussed of emerging technologies, AI and algorithms will most likely become the main writers on media platforms, so whether they can understand the standards people hold the media industry to, such as identifying fake news, matters greatly. This is especially true for communication studies: the rise of automated journalism poses new ethical challenges for professional journalistic practice and the media in general (Dörr, 2016, p. 703). The stakes lie not only in the scope of information visible to individual users but in the need for practices that govern regulation, customisation and personalisation (Andrejevic, 2019, p. 45). Beyond the moral and ethical issues these systems raise for people emotionally, the moral autonomy of AI and algorithms themselves is worth noting. Both the answers AI gives and the sources it learns from need to be shaped by society and professional bodies, for if the sources of AI and algorithms are themselves ethically flawed, they will bring a range of problems into people’s lives.
Although AI and algorithms have advanced very rapidly and made people’s lives far more efficient, moral and ethical improvements are needed before they can be relied on in professional contexts: they must go beyond simply applying formulas and come closer to understanding the human mind. While AI and algorithms are not yet ethically proficient, media platforms and companies should review them carefully and set up appropriate auditing systems, because if AI and algorithms have a bad moral or ethical impact on users, the user experience suffers and both potential and regular users may be lost. There have already been attempts to frame ethical issues in digital media and algorithm-related content in general (Ess, 2009, p. 34), and in the future, morally and ethically sound AI and algorithms are bound to bring more possibilities and creativity to the world.

Figure 2 – (Naik, 2022)

Accountability, Transparency & Interpretability

Among the general principles of AI and algorithmic regulation and governance, three need to be kept in mind: accountability, transparency, and interpretability. AI and algorithms should not be seen from one side only: while they bring enormous convenience, they can also bring many unexpected problems.

The first is accountability, which is essential if AI and algorithms are to survive and continue to be used across major industries. A company with a sound accountability system presents a better public face on the one hand, and gives users more peace of mind on the other. On media platforms, accountability systems for AI and algorithmic regulation are a move towards treating users more fairly and less discriminatorily: they identify who is responsible or accountable for the operation and impact of algorithmic decisions while providing explanations or proof that these impacts can be remedied (Latzer & Just, 2020, p. 9). This allows people to understand the advantages of accountability systems and allows companies to begin building their own.

Second, the transparency of AI and algorithms is key to the regulation and governance of media platforms. Transparency represents a platform’s integrity: AI and algorithms are opaque to outsiders, with almost all operations carried out in the back end or in the cloud, so there is often a fear that media platforms will be driven by profit to leak users’ information and data. A transparent and trusted platform is therefore very reassuring for any user. Since even limited transparency is the basis for accountability, users can employ methods such as reverse engineering to investigate the capabilities of algorithms and hold faulty algorithms to account (Diakopoulos, 2015, p. 402).

Finally, interpretability, itself one of the accountability mechanisms, is essential to the regulation and governance of media platforms. Having an explanation, and a channel through which to give it, helps both the platform and the user. Interpretability provides an order in which developers can explain their intentions, safeguarding their own and the platform’s rights from being undermined, while users can offer their own interpretations. These may differ completely from the developers’ ideas, but this plurality is also part of the order, allowing users to make the most of the accountability mechanisms available to them.

Instagram: Wrong Image Removal

A key feature of Instagram, one of the most popular social media platforms today, is that it lets users freely upload images for others to like or comment on. As part of the social economy (Flew, 2021, p. 80), Instagram has also evolved with AI and algorithms, whether vetting harmful content or recommending content of interest to users based on big data, and this has become one of the ways Instagram attracts users.

Figure 3 – (Liss, 2021)

But when it comes to images of women, Instagram has let the issue of regulation and governance come to the fore. Instagram was 2.48 and 1.59 times more likely to remove images depicting light and medium figures, respectively, than images depicting women with overweight bodies, even though many of the removed images did not violate any of the platform’s policies (Witt, Suzor & Huggins, 2019, p. 562). And rather than being transparent and open about the mistakes its AI and algorithms made, Instagram kept its moderation hidden from the public, giving users no explanation or reasoning and no way to engage, which denied them any real sense of fairness. All of this stems from the lack of a sound accountability mechanism, leaving explanation entirely to Instagram itself. As a result, female users face uncontrollable risks when using Instagram, which can cost the platform many female users and make many male users suspicious of its censorship mechanisms, losing it still more potential users. This demonstrates the importance of an open and transparent accountability mechanism for regulating and governing media platforms.


This blog has discussed how media platforms are regulated and governed in the context of AI and algorithms from three perspectives: the human rights perspective; the morality and ethics perspective; and accountability, transparency and interpretability. It has also analysed Instagram’s censorship of women’s images as a case study. The lesson for any company that uses AI and algorithms is that these technologies are not always right: alongside the advantages of commercialisation and diversity, it is important to establish accountability mechanisms that are transparent, fair, responsible and justified. Only then can AI and algorithms truly achieve efficiency and accuracy on future media platforms.


Andrejevic, M. (2019). Automated culture. In Automated Media (pp. 44–72). Routledge.

Bresnahan, T. (2010). General purpose technologies. In B. H. Hall & N. Rosenberg (Eds.), Handbook of the Economics of Innovation (Vol. 2, pp. 761–791). North-Holland.

Bruckner, M. A. (2018). The promise and perils of algorithmic lenders’ use of big data. Chicago-Kent Law Review, 93(1), 3–60. 

Bucher, T. (2018). If…Then: Algorithmic Power and Politics. Oxford University Press.

Diakopoulos, N. (2015). Algorithmic accountability. Digital Journalism, 3(3), 398–415.

Dörr, K. N. (2016). Mapping the field of algorithmic journalism. Digital Journalism, 4(6), 700–722.

Ess, C. (2009). Digital media ethics. Polity Press.

Grant, J. I. (2023). To regulate or not to regulate AI … that is not the question [Image]. Financial Review.

Just, N., & Latzer, M. (2019). Governance by algorithms: Reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238–258.

Latzer, M., & Just, N. (2020). Governance by and of algorithms on the internet: Impact and consequences. In Oxford Research Encyclopedia of Communication (pp. 1–21). Oxford University Press.

Liss, D. (2021). The FDA should regulate Instagram’s algorithm as a drug [Image]. TechCrunch.

Naik, N. (2022). Legal and ethical consideration in artificial intelligence in healthcare: Who takes responsibility? [Figure]. Frontiers in Surgery.

Savolainen, L. (2022). The shadow banning controversy: Perceived governance and algorithmic folklore. Media, Culture & Society, 44(6), 1091–1109.

Flew, T. (2021). Regulating Platforms (pp. 79–86). Polity.

Verbeek, P.-P. (2014). Some misunderstandings about the moral significance of technology. In P. Kroes & P.-P. Verbeek (Eds.), The Moral Status of Technical Artefacts (pp. 75–88). Springer.

Witt, A., Suzor, N., & Huggins, A. (2019). The rule of law on Instagram: An evaluation of the moderation of images depicting women’s bodies. UNSW Law Journal, 42(2), 557–596.
