Datafication and algorithms have “domesticated” the human race

Do algorithms have values? In the early days of the Internet era, the owners of industry giants such as TikTok and Facebook argued that they were technology companies rather than media companies, and that they were therefore not responsible for the content appearing on their platforms – an argument that implies algorithms are value-free. As time has passed, however, most of us have come to realize, and even to experience, that algorithms do carry values. An algorithm is essentially a program that is powered by, and learns from, the big data of countless people. Its application is therefore bound to carry traces of human behavior and to be shaped by human values, which are expressed as algorithmic values and then act back on human behavior in the opposite direction. This is why datafication and algorithms matter so much. The World Economic Forum even described personal data as ‘the new “oil” – a valuable resource of the twenty-first century … a new type of raw material that is on a par with capital and labor’ (World Economic Forum 2011).

So how do platforms use this massive amount of data to export their own values and, ultimately, “domesticate” human beings? This is most visible in the personalized topics these platforms recommend. Take Twitter as an example: whenever we open Twitter’s search function, we are shown a set of personalized recommended topics. How are these selected? In a nutshell, Twitter’s recommendation algorithm is a personalized recommendation system that operates in each user’s “For You” feed. It whittles roughly 500 million tweets a day down to about 1,500 candidates, which are then filtered and recommended to different users in their For You streams. The system first predicts which tweets a user is most likely to interact with and identifies which “communities” and tweets are currently popular on Twitter. It does so using two kinds of data: first, the underlying data used to train the machine-learning models – Twitter’s large-scale proprietary data, including user profiles, follower graphs, tweets, and interactions; second, the ranking signals used to score the relevance of tweets, such as user preferences, historical behavior, and recency. The system then applies heuristics and filters to remove content the user has blocked, content unsuitable for public viewing, and content the user has already seen. The result is a personalized list of top recommendations.
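
To make this pipeline easier to picture, here is a minimal Python sketch of the three stages described above: candidate sourcing, ranking, and heuristic filtering. Every class, function name, and scoring weight is a hypothetical assumption made for illustration only; it is not Twitter’s actual code, whose ranking relies on trained machine-learning models rather than the toy score used here.

```python
from dataclasses import dataclass, field

@dataclass
class Tweet:
    tweet_id: int
    author_id: int
    engagement_score: float                 # popularity signal from the wider network
    topics: set = field(default_factory=set)

@dataclass
class User:
    user_id: int
    followed_authors: set
    interests: set                          # inferred from historical behavior
    blocked_authors: set
    seen_tweet_ids: set

def source_candidates(user, all_tweets, limit=1500):
    """Stage 1: narrow the daily firehose down to ~1,500 candidates,
    mixing in-network tweets with popular out-of-network ones."""
    in_network = [t for t in all_tweets if t.author_id in user.followed_authors]
    out_of_network = sorted(
        (t for t in all_tweets if t.author_id not in user.followed_authors),
        key=lambda t: t.engagement_score,
        reverse=True,
    )
    return (in_network + out_of_network)[:limit]

def rank(user, candidates):
    """Stage 2: score relevance. The real system uses trained ML models;
    here relevance is just interest overlap plus global engagement."""
    def score(t):
        return 0.7 * len(t.topics & user.interests) + 0.3 * t.engagement_score
    return sorted(candidates, key=score, reverse=True)

def apply_filters(user, ranked):
    """Stage 3: heuristics and filters - drop blocked authors and
    tweets the user has already seen."""
    return [
        t for t in ranked
        if t.author_id not in user.blocked_authors
        and t.tweet_id not in user.seen_tweet_ids
    ]

# Example: build a tiny "For You" list for one hypothetical user.
if __name__ == "__main__":
    tweets = [
        Tweet(1, 10, 0.9, {"politics"}),
        Tweet(2, 11, 0.4, {"music"}),
        Tweet(3, 12, 0.7, {"sports"}),
    ]
    user = User(user_id=1, followed_authors={11}, interests={"music", "sports"},
                blocked_authors={12}, seen_tweet_ids=set())
    for_you = apply_filters(user, rank(user, source_candidates(user, tweets)))
    print([t.tweet_id for t in for_you])
```

Even in this toy version, the design choice is visible: what reaches the user is decided first by what is globally popular and then by a scoring rule the platform controls, long before the user expresses any preference.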

Algorithmic recommendation is supposed to be a service to users: it is users who are supposed to “domesticate” the algorithms into recommending news and topics that better match their needs, making our lives easier. But because of biases introduced in the machine-learning phase, such recommendation algorithms are prone to pushing highly controversial events, so that algorithms come to constitute a form of what Adrienne Massanari terms ‘platform politics’ – ‘the assemblage of design, policies, and norms … [that] encourage certain kinds of cultures and behaviors to coalesce on platforms while implicitly discouraging others’ (Massanari 2017).

In media where algorithms are embedded, the platforms that develop and deploy them act as both legislators and enforcers of content production and recommendation, guiding creators to produce content according to standardized templates and a variety of recommendation rules, covering everything from a piece’s title to the form in which it is presented. At a time when algorithmic recommendation dominates distribution channels, creators have to cede subjectivity and creativity in order to increase their chances of being recommended, building content around the types that are easily pushed at scale and thereby entering the logic of a platform-led attention economy. People use the technology to gain followers, traffic, income, and influence, but the technology in turn guides and regulates the direction of creators’ content production, so that they can no longer produce content according to their original ideas. When creators and users are exposed to this kind of online environment for long periods, it strongly shapes their thinking: users may become extreme and narrow-minded, slowly losing the ability to think deeply and to concentrate, and many will find their prejudices and arrogance about certain issues reinforced by algorithmically recommended videos. Society and culture become polarized, and many creators will even fabricate rumors and controversial news for the sake of traffic and profit.

On the other hand, these recommendation algorithms have also “domesticated” users in an invisible way. We know that algorithmic recommendations are built on vast amounts of user feedback data, but have we ever wondered how a platform can accurately guess the interests of, and retain, a brand-new user? It does so in the safest possible way: by recommending high-traffic, broadly popular content. Once you browse this kind of content, the platform keeps recommending more of the same; no platform will recommend genuinely niche content from the start, even though this runs contrary to the original purpose of personalized recommendation. As a result, your individual needs become ever harder to satisfy, and the process turns into a vicious circle: slowly the user is “domesticated” by the recommendation algorithm and loses the willingness to explore new interests, and the entire user base becomes homogenized.
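
This feedback loop can be illustrated with a toy simulation – a rough sketch, not any platform’s real logic: a cold-start user is shown globally popular content, every click feeds back into their profile, and categories that are never shown never get a chance to be clicked, so niche interests quietly disappear. All category names, popularity weights, and the click model below are invented assumptions.

```python
import random
from collections import Counter

# Hypothetical content categories and their assumed global popularity.
CATEGORIES = ["sports", "politics", "music", "poetry", "astronomy"]
POPULARITY = {"sports": 0.50, "politics": 0.30, "music": 0.15,
              "poetry": 0.03, "astronomy": 0.02}

def recommend(profile, n=10):
    """Cold start: fall back on global popularity.
    Otherwise, weight recommendations by what the user already engaged with."""
    if not profile:
        weights = [POPULARITY[c] for c in CATEGORIES]
    else:
        total = sum(profile.values())
        weights = [profile[c] / total for c in CATEGORIES]
    return random.choices(CATEGORIES, weights=weights, k=n)

def simulate(rounds=20, click_rate=0.5):
    """Each round the user clicks some of what is shown; clicks feed the
    profile, which in turn narrows what gets shown next."""
    profile = Counter()
    for _ in range(rounds):
        for category in recommend(profile):
            if random.random() < click_rate:
                profile[category] += 1
    return profile

if __name__ == "__main__":
    print(simulate())  # niche categories such as "poetry" rarely, if ever, appear
```

Running this a few times shows the profile collapsing onto one or two popular categories – a small-scale version of the homogenization described above.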

In addition, we often have no real choice about sharing our data, and in this way algorithms quietly “domesticate” our behavioral choices. If we refuse to share our data, we cannot use these convenient services, which leaves us in a very passive position when it comes to protecting our data and privacy. And when we passively consent to platforms using our data, we also tacitly agree to its being shared with third parties, which inevitably results in the large-scale dissemination and leakage of private information. While initiatives such as the European Union’s General Data Protection Regulation (GDPR) address important public-interest concerns about data use, they do not cover data collection or data processing (Flew 2021), so they still cannot prevent the leakage of privacy and personal data.

The “domestication” of humans by algorithms described above occurs almost entirely outside the awareness of the general public, so that people scarcely notice it happening. From the perspective of technological cognition, these unconscious technological designs remove the boundary between humans and machines, allowing humans to operate naturally and without friction while treating the technology as a black box, and letting human perception and behavior extend smoothly into the machines. People do not need to understand these systems in order to be affected by them. These intelligent algorithms are like highly intelligent “creatures” devouring data, increasingly perceiving and intervening in the world in ways that are incomprehensible to humans. By contrast, humans’ own capacity to process information and data is increasingly stretched, which suggests that they will rely ever more heavily on algorithms, further exacerbating this algorithmic “domestication” of the human race.

This is largely because the modern world’s emphasis on quantification and objectivity rests on a notion that devalues human rationality. Closely related is the behaviorist claim that the human mind is fragile and flawed, unable to take the bigger picture into account, and therefore prone to irrational choices. Since the end of the twentieth century, this understanding has been reinforced by the ubiquitous use of computers and statistical evidence – the “objective” knowledge of algorithms is held to be preferable to the subjective judgment of experts. In recent years it has been further fueled by artificial intelligence driven by big data and machine learning. These perceptions have led many to believe that machine rules are more scientific and objective than human rules, and therefore more “fair”.

Yet when big data analytics and algorithmic recommendation are applied in practice, we sometimes have reason to question that “objectivity”. Take Twitter: the personalized content we are recommended can easily influence our thoughts and decisions, and this power to influence carries enormous unknown risks. Cases such as corporate meddling in the U.S. election suggest that algorithms can offer a seemingly “objective” measure of the world and then manipulate it, even feeding accurately targeted information to voters who are unsure of their own positions. This ability to use algorithms to “domesticate” people’s behavior makes it easy for the values of a small elite to radiate out to the whole of society, creating a vicious circle: a few people invisibly pass their values on to the algorithms, making them biased; the algorithms then recommend those biases to the general public, allowing the few to keep profiting. This cycle inevitably leads to greater exploitation of, and harm to, the public. But it is also a reminder that we need to make this technology more widely accessible and governable. Only by awakening more ordinary people and involving them in decisions about how algorithms are applied can we reduce the “domestication” of human beings by algorithms.

Nowadays there have already been many efforts to govern algorithms, which can function as institutions insofar as they apply ‘norms and rules that affect behavior on the supply and demand side, as a set of rules and routines that both limit activities and create new room for maneuver’ (Just and Latzer 2017).

For example, some media platforms are already choosing to loosen the grip of recommendation algorithms on their products. Musk’s Twitter open-sourced its recommendation algorithm in 2023 in the hope that making it public would increase the platform’s transparency and strengthen the trust of both creators and ordinary users; YouTube now offers options to ignore recommendations and to clear one’s viewing history.

In conclusion, a power as enormous as the algorithm should not be allowed to operate unscrupulously at the ethical and political level simply because its workings are technologically concealed. Instead, in keeping with the trend of the current technological era, its internal composition and external associations should be scrutinized systematically from multiple dimensions. Increasing the transparency of its operation and allowing the majority of the population to judge it together will increase its objectivity and reduce its bias, and will enable people, once they recognize how it operates, to use it better rather than be domesticated by it.

References

Flew, T. (2021). Regulating Platforms. Cambridge: Polity, pp. 79–86.

Just, N., and Latzer, M. (2017). Governance by algorithms: reality construction by algorithmic selection on the internet. Media, Culture & Society, 39(2), 238–58.

Massanari, A. (2017). #Gamergate and The Fappening: how Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–46.

World Economic Forum. (2011). Personal Data: The Emergence of a New Asset Class. Geneva: World Economic Forum.
