Algorithmic Governance Research in the Age of Artificial Intelligence

Introduction

As the economy, information technology, and computer networks have developed, demand for network applications has grown with them. Artificial intelligence (AI) has become one of the most prominent technology directions of recent years: since its capabilities became widely apparent, it has been adopted across industries, and AI algorithms have become fundamental rules that shape how the world works. The rise of artificial intelligence, represented by machine learning algorithms, has broken through the limits of human expression revealed by the “Polanyi paradox” (Translated by Content Engine LLC, 2022). This breakthrough allows algorithms to become self-producing: through a self-learning process based on large datasets, they form rule sets and apply them to perception and decision-making in different scenarios. Improvements in algorithmic performance and the spread of applications have greatly improved the efficiency of human society, but they have also brought new governance risks and challenges, including self-reinforcement and the subjectivity problem. As digitization deepens and cyberspace and real space become ever more integrated, we need to fully recognize the importance of algorithmic governance. Based on an analysis of the basic principles of algorithms and their application scenarios, we should form a reasonable assessment of the governance challenges raised by new technologies and business models, one that does not overstate either their current level of development or their future possibilities, and on that basis propose innovative public policies for algorithmic governance.

Self-Reinforcement and Subjectivity

Artificial intelligence technology imitates human intelligence by drawing on disciplines such as linguistics, physiology, and psychology (Leach, 2022). AI can be made to simulate human vision, hearing, speech, and reasoning, giving it human-like ways of thinking and acting. It works by combining large amounts of data with fast processing and smart algorithms, allowing software to learn automatically from patterns in the data, as the sketch below illustrates.
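As a minimal sketch of what “learning from patterns in the data” means in practice, the toy example below (using scikit-learn; the message features and labels are invented for illustration) lets a decision tree infer its own classification rules from labeled examples rather than having them hand-coded:

```python
# A minimal "learning from patterns" example: a decision tree infers its
# own classification rules from labeled examples instead of being hand-coded.
# Toy features (invented): [message length, number of exclamation marks].
from sklearn.tree import DecisionTreeClassifier

X = [[120, 0], [45, 5], [200, 1], [30, 8], [150, 0], [25, 6]]
y = [0, 1, 0, 1, 0, 1]  # 0 = normal message, 1 = spam

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# The learned patterns generalize to unseen input.
print(model.predict([[40, 7]]))  # likely [1]: short and exclamation-heavy
```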

With the rapid development of artificial intelligence technology, algorithms are used ever more widely in our daily lives. However, they also pose several problems, including self-reinforcement and subjectivity. Firstly, the decision-making process of algorithms is often opaque: people cannot understand how the algorithms arrive at their decisions (Lee et al., 2019). This opacity can breed mistrust of algorithmic decisions and fuel concerns that an algorithm may decide unfairly or discriminatorily. Secondly, self-reinforcement arises because algorithms may deepen existing biases and discrimination as they learn. For example, an algorithm used to predict the risk of crime may learn from biased historical data that certain ethnic groups are more likely to commit crimes, and the learning process then reinforces that bias. Finally, the subjectivity problem refers to the algorithm’s lack of subjectivity in decision-making: an algorithm can only decide on the basis of predetermined rules and data. It cannot think and judge independently as humans do, and therefore cannot weigh moral and ethical questions the way humans can (Fountain, 2022). To tackle these problems, we need to strengthen the regulation and accountability of algorithms, and to prioritize ethics education for algorithm designers and users so that they follow ethical guidelines when creating and using them.
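To make the self-reinforcement point concrete, here is a hypothetical sketch (toy data; the features and labels are invented) of how a model trained on biased historical labels reproduces that bias: two records that are identical except for group membership receive different risk scores.

```python
# A hypothetical illustration of self-reinforcement: the training labels
# encode a historical bias (group B was flagged more often for identical
# behavior), and the fitted model faithfully reproduces that bias.
from sklearn.linear_model import LogisticRegression

# Features: [prior_incidents, group], where group 0 = A and 1 = B.
# The labels are invented: group B is flagged at incident counts where
# group A is not.
X = [[1, 0], [1, 1], [2, 0], [2, 1], [0, 0], [0, 1]]
y = [0, 1, 0, 1, 0, 0]

model = LogisticRegression().fit(X, y)

# Two people with the same record receive different risk scores.
print(model.predict_proba([[2, 0]])[0, 1])  # group A: lower risk score
print(model.predict_proba([[2, 1]])[0, 1])  # group B: higher risk score
```

If the model’s outputs are then used to decide where to look for future incidents, the biased predictions generate more biased data, closing the self-reinforcing loop described above.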

Algorithms as Rules: Principles and Their Applications

Although algorithms can be defined in various ways, their fundamental function is to shape the rules of behavior in cyberspace, permitting certain human behaviors while restricting others. As cyberspace and real space become increasingly integrated, the impact of algorithms as rules of behavior in cyberspace gradually extends into real space, affecting its established order (Flew, 2021). Under the theoretical framework of institutionalism, rules include formal rules, which generally refer to clearly expressed written rules, and informal rules, such as values, beliefs, customs, and cultural traditions agreed upon by members of a given society. Although algorithms are implemented as written computer code, this does not mean that all algorithms should be considered formal rules (De Stefano & Taes, 2022). When we consider the mechanisms and processes by which rules affect human behavior, algorithms can still be categorized as formal or informal. Formal algorithms have clear, interpretable logic, allowing humans to understand the content of algorithmic rules and to know how they make decisions that affect human behavior or shape outputs. Informal algorithms, by contrast, are not interpretable: neither the producers, the users, nor those affected by an algorithm can clearly explain the reasons and processes by which it reaches a particular decision. Within the analytical framework of the “Polanyi paradox,” traditional algorithms can be regarded as having the character of formal rules, while the machine learning algorithms of the third wave of AI development are more akin to informal rules (Pasquale, 2015).
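The contrast can be illustrated in code. Below, a hand-written eligibility function stands in for a formal rule, while a small neural network (scikit-learn; the data and thresholds are invented for illustration) stands in for an informal one: its learned weights implement a decision policy that no one wrote down and that is hard to state in natural language.

```python
from sklearn.neural_network import MLPClassifier

# Formal rule: the decision logic is written out and can be read directly.
def loan_eligible(income: float, debt: float) -> bool:
    return income >= 30_000 and debt / income < 0.4

# Informal rule: a trained model whose "logic" is a set of learned weights
# that no one wrote down. (Toy data; features and labels are invented.)
X = [[45_000, 5_000], [20_000, 15_000], [60_000, 10_000], [25_000, 12_000]]
y = [1, 0, 1, 0]
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)

print(loan_eligible(45_000, 5_000))  # True, for a stated, inspectable reason
print(model.coefs_[0].shape)         # (2, 16): many weights, no legible rule
```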

The different properties of algorithms as rules stem from their different implementation principles. Traditional algorithms, which are more like formal rules, rely on human understanding and analysis. Machine learning algorithms, which are more akin to informal rules, do not: they adjust their parameters and weights through a self-training, self-learning process in order to achieve a given goal. Their basic operation can be broadly divided into three stages: annotation, training, and application. Annotation is the critical preparatory stage, aimed at creating a sizable dataset for the algorithm to learn from; it involves manually labeling data, or generating labels automatically from sensors and IoT devices. Through annotation, specific human knowledge is linked to digital material such as speech, images, and video that computers can process. During the training stage, the algorithm self-tunes and self-produces, guided by a set goal such as recognition accuracy in image recognition. It processes the annotated dataset in various ways and eventually forms a rule set composed of the dataset’s common features, built up through automatic, iterative feedback: if the output of the current rule set does not meet the predetermined goal, the algorithm adjusts the rule set within specific boundary conditions. In this way a machine learning algorithm becomes self-producing, breaking away from its reliance on human programmers. However, such algorithms are often not interpretable, since the common features contained in the resulting rule set can be vast in number and difficult to translate into natural, human-understandable language (Noble, 2018). The rule set that emerges from training becomes the standard that guides the algorithm’s application to different scenarios. The three stages are closely interconnected: the outputs of the application stage are fed back as new data into annotation, guiding the adaptation of the rule set in the next round of training.
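A runnable sketch of this three-stage cycle follows. The annotation step is simulated by a simple labeling function standing in for human annotators or sensor-generated labels, and the goal accuracy and toy features are assumptions made for illustration:

```python
import random
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

def annotate(samples):
    # Simulated annotation: in reality, human labelers or sensors supply
    # the labels; here a label is 1 when the feature sum is large.
    return [(x, int(sum(x) > 1.0)) for x in samples]

# Stage 1, annotation: build a labeled dataset from raw material.
random.seed(0)
labeled = annotate([[random.random(), random.random()] for _ in range(200)])
X = [x for x, _ in labeled]
y = [label for _, label in labeled]

# Stage 2, training: iterative self-adjustment toward a predetermined goal.
GOAL = 0.95  # e.g., recognition accuracy
model = SGDClassifier(random_state=0)
for epoch in range(50):
    model.partial_fit(X, y, classes=[0, 1])
    if accuracy_score(y, model.predict(X)) >= GOAL:
        break  # the current rule set now meets the goal

# Stage 3, application: outputs on new inputs can be checked and fed back
# into annotation as fresh training material for the next round.
new_batch = annotate([[random.random(), random.random()] for _ in range(50)])
X += [x for x, _ in new_batch]
y += [label for _, label in new_batch]
```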

There are multiple implementation paths for machine learning, and the annotate-train-apply process described above belongs mainly to the supervised learning branch. Unsupervised learning, which does not rely on labeled datasets, is also gaining attention, but supervised learning still holds an overwhelmingly important position in real-world applications and is likely to remain the dominant paradigm for a considerable period. It is important to note the reliance on big data in the basic principles of machine learning algorithms (Crawford, 2021): the large dataset formed in the annotation stage largely determines the content of the rule set produced in training. The popularity of machine learning in recent years owes much to the unprecedented boom in big data, driven by factors such as growing hardware computing power and falling data storage costs. Machine learning, image recognition, natural language processing, and other fundamental AI technologies are now widely used in finance, healthcare, public safety, urban transportation, and other fields, and are spreading rapidly to more, feeding an optimistic outlook for AI development and its applications. Algorithms now affect all aspects of human society. Yet despite this optimism, the potential absence or bias of large datasets limits what machine learning can do, and the algorithms pose governance challenges when applied across human society. Addressing these challenges is crucial to the responsible and ethical development of AI.
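The contrast between the two branches can be shown briefly: supervised learning consumes the labeled dataset built during annotation, while unsupervised learning (here, k-means clustering via scikit-learn, on invented toy points) finds structure without any labels at all.

```python
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

X = [[1, 1], [1, 2], [2, 1], [8, 8], [9, 8], [8, 9]]

# Supervised: labels come from the annotation stage.
labels = [0, 0, 0, 1, 1, 1]
clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print(clf.predict([[2, 2]]))  # applies the annotated knowledge -> [0]

# Unsupervised: no labels; the algorithm groups the points on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # two clusters discovered without annotation
```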

Case Study

In the age of artificial intelligence, the importance of algorithmic governance has become increasingly prominent. Recently, the US House of Representatives passed a bill called the Algorithmic Transparency Act, which requires technology companies to disclose information about the algorithms they use and how those algorithms affect users and society. The Act is an important measure by the US government to regulate the algorithmic practices of technology companies: its passage aims to promote algorithmic transparency and to prevent algorithmic discrimination and unfair treatment of users and society. The bill also requires technology companies to establish independent algorithmic review committees to ensure that their algorithms meet transparency and fairness requirements.

The passage of this bill has sparked discussion and research on algorithmic governance, that is, the establishment and implementation of rules and standards to ensure the transparency, fairness, and accountability of algorithms. In the era of artificial intelligence, algorithms underpin many decisions, such as automated recruitment, financial evaluation, and disease diagnosis. However, the results of algorithmic decisions are often opaque, which exposes users and society to risks of unfairness and discrimination. Research on algorithmic governance covers multiple areas. One important direction is algorithmic transparency: disclosing information about algorithms and their decision results so that users and society can supervise and review them. Another is algorithmic fairness: ensuring that algorithmic decisions do not discriminate against or unfairly treat different groups by gender, race, or age, as sketched below. Cases like this highlight the need for research on algorithmic governance in the era of artificial intelligence. Such research can help establish and improve laws, regulations, and policy systems that safeguard the public interest and maintain social order, and it can also enhance the government’s image and the public’s sense of well-being.
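As one concrete illustration of the fairness research mentioned above, the sketch below computes a simple demographic parity gap, i.e. the difference in favorable-decision rates across two groups; the decisions and group labels are invented for illustration.

```python
# Decisions (1 = favorable outcome) and protected-group labels, both
# invented for illustration.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

def positive_rate(group):
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity asks whether favorable-decision rates match across groups.
gap = abs(positive_rate("A") - positive_rate("B"))
print(f"demographic parity gap: {gap:.2f}")  # 0.75 vs. 0.25 -> gap 0.50
```

Demographic parity is only one of several competing fairness criteria in the literature, but even this simple check makes visible the kind of disparity that transparency requirements are meant to expose.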

Conclusion

In conclusion, this blog has focused on algorithmic governance in the age of artificial intelligence. The rapid development of AI brings many negative effects alongside its conveniences. One is the prevalence of vulgar and false recommended content, aimed solely at attracting attention and reinforcing users’ biases and preferences. There is also a risk of pandering to users, which may vulgarize smart platforms and ultimately hinder innovation. These dangers invite deep reflection on the current recommendation-algorithm industry. We increasingly live in an algorithmic society in which algorithms profoundly shape daily life: search engines use them to rank results, commercial banks use them to assess the repayment risk of loan applicants, and airports and stations use them to identify the characteristics of large crowds. The emergence of new business models alongside technological development warns us that while AI can greatly improve the operational efficiency of human society, it can also pose universal governance challenges.

Based on the research in this blog, several recommendations follow for algorithmic governance. Firstly, we should accelerate AI education for the public so that people can scientifically understand both the advances and the risks that algorithmic applications may bring. This would help people form objective expectations while avoiding blind optimism, reduce unnecessary obstacles to technological development and application, and promote policy innovations that address governance challenges. Secondly, we need to strengthen discussion and research on the ethics of algorithms, promote dialogue between experts in the natural and social sciences, and form ethical guidelines for the development and application of algorithms. Finally, public policies should be formulated according to the maturity of algorithm applications and the extent of their impact, field by field and by priority, without overstating either their current level of development or their future possibilities. This will help maximize the benefits of algorithmic applications while minimizing their negative impacts.

Reference list

Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (pp. 1–21). Yale University Press.

De Stefano, V., & Taes, S. (2022). Algorithmic management and collective bargaining. Transfer: European Review of Labour and Research. https://doi.org/10.1177/10242589221141055

Flew, T. (2021). Regulating Platforms (pp. 79–86). Polity.

Fountain, J. E. (2022). The moon, the ghetto and artificial intelligence: Reducing systemic racism in computational algorithms. Government Information Quarterly, 39(2), 101645. https://doi.org/10.1016/j.giq.2021.101645

Leach, N. (2022). What is AI? In Architecture in the Age of Artificial Intelligence. Bloomsbury Publishing USA.

Lee, M. K., Kusbit, D., Kahng, A., Kim, J. T., Yuan, X., Chan, A., See, D., Noothigattu, R., Lee, S., Psomas, A., & Procaccia, A. (2019). WeBuildAI: Participatory Framework for Algorithmic Governance. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–35. https://doi.org/10.1145/3359283

Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press. https://doi.org/10.18574/9781479833641

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press. https://doi.org/10.4159/harvard.9780674736061

Translated by Content Engine LLC. (2022). Polanyi’s Paradox and Machine Learning (English ed.). ContentEngine LLC.
