Who’s making decisions for us? (Taking the algorithm of the Little Red Book platform as an example)

Let me share a story with you: the story of the smartest horse in the world.
At the end of the nineteenth century, Europe was captivated by a horse named Hans. “Clever Hans” was nothing less than a marvel: it could solve math problems, tell time, identify days on a calendar, differentiate musical tones, and spell out words and sentences. Why did it seem so smart? Because the questioner’s posture, breathing, and facial expression would subtly change around the moment Hans reached the right answer, prompting Hans to stop there. The solution to the Clever Hans riddle, then, was the unconscious direction given by the horse’s questioners: the horse was trained to produce the results its owner wanted to see. It is an interesting story, and it opens The Atlas of AI, a book by Kate Crawford that I recently read. When I read it, it got me thinking: the horse’s “artificial intelligence” seems to come from repeated “training,” but was the “right answer” a decision made by the horse’s owner through that training mechanism, or did the horse make the decision on its own?
Back in the real world of the digital age, the analogy maps neatly: the horse is the user of a platform; the horse’s owner is the platform’s designer; and the training mechanism the owner uses is the algorithm. Taking Little Red Book, a Chinese social platform, as an example, and drawing on literature and interview videos as research material, this article asks three questions: who makes the decisions behind Little Red Book’s algorithm? What is its impact? And how should it be governed?

(Image 1 from Little Red Book’s official website: http://www.xiaohongshu.com/eu)


Little Red Book, also known by its Chinese name Xiaohongshu, describes itself on its official website as a lifestyle platform with a mission to “inspire lives”: it aims to inspire people to discover and connect with a range of diverse lifestyles. Millions of users showcase their experiences on the platform daily, from cosmetics and beauty to fashion, food, travel, entertainment, fitness, and childcare, brought to life visually through a variety of formats, including photos, text, videos, and live streaming. The platform integrates the authentic content shared by its community with commerce, and it has quickly become one of the most popular destinations for making lifestyle decisions. Why choose Little Red Book as the case? According to the Qiangua Data website’s 2024 report on Xiaohongshu’s active users, the platform now has 300 million monthly active users, with a male-to-female ratio of 3:7; 50% of users were born after 1995 and 35% after 2000, and 50% live in first-tier and second-tier cities. The community has more than 80 million sharing users, and average daily user search penetration has reached 60%.

(Image 2 and data from: https://www.qian-gua.com/blog/detail/2898.html)


As the picture above shows, the Little Red Book platform holds a large amount of customer data, and 79.13% of its users are female. The Little Red Book algorithm recommends personalized content and products according to users’ preferences and behaviors. Users can even find and buy products recommended by other users, especially beauty products such as lipstick and nail art.
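To make concrete what “recommending according to preferences and behaviors” can mean, here is a minimal content-based recommender sketch in Python. The posts, tags, and scoring rule are illustrative assumptions on my part, not Xiaohongshu’s actual system, which is not public.

```python
from collections import Counter

# Minimal content-based recommendation sketch. All posts, tags, and the
# scoring rule are hypothetical, not Xiaohongshu's real system.

def build_profile(liked_posts):
    """Aggregate the tags of posts a user liked into a preference profile."""
    profile = Counter()
    for post in liked_posts:
        profile.update(post["tags"])
    return profile

def score(post, profile):
    """Score a candidate post by how strongly its tags match the profile."""
    return sum(profile[tag] for tag in post["tags"])

liked = [
    {"id": 1, "tags": ["lipstick", "beauty"]},
    {"id": 2, "tags": ["nail-art", "beauty"]},
]
candidates = [
    {"id": 3, "tags": ["lipstick", "tutorial"]},
    {"id": 4, "tags": ["travel", "food"]},
]

profile = build_profile(liked)
ranked = sorted(candidates, key=lambda p: score(p, profile), reverse=True)
print([p["id"] for p in ranked])  # -> [3, 4]: beauty content ranks first
```

Even this toy version shows the feedback loop at work: whatever a user has already engaged with is amplified, which is exactly why a designer’s choices about tags and scoring matter so much.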
Although Little Red Book’s personalized recommendations have won users’ goodwill, as a user I have specific concerns about the governance of the algorithm and its decision-making process. Who is responsible for the decisions the platform’s algorithms make: the designers behind the algorithms, or the users themselves?

Who’s making decisions for us?
Frank Pasquale argues in The Black Box Society that while internet giants claim their algorithms are scientific and neutral tools, it is difficult to verify these claims. Furthermore, Mehrabi et al. state in their survey on bias and fairness in machine learning that algorithms, like people, are vulnerable to biases that can result in unfair decisions. In the context of decision-making, fairness is the absence of any prejudice or favoritism toward an individual or group based on their inherent or acquired characteristics. Thus, an unfair algorithm is one whose decisions are skewed toward a particular group of people.
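This definition can be made operational. One common, simplified formalization is demographic parity: the rate of positive decisions should be similar across groups. The sketch below, with entirely hypothetical data and group labels, shows how such a skew could be measured.

```python
# Minimal demographic-parity check, one common formalization of the
# fairness definition above. Decisions and group labels are hypothetical.

def positive_rate(decisions, groups, target_group):
    """Fraction of target_group members who received a positive decision."""
    outcomes = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(outcomes) / len(outcomes)

decisions = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = content recommended/promoted
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = positive_rate(decisions, groups, "a") - positive_rate(decisions, groups, "b")
print(f"demographic parity gap: {gap:.2f}")  # 0.50: decisions skewed toward group a
```

A large gap does not prove intent, but it is exactly the kind of skew toward a particular group that Mehrabi et al.’s definition identifies as unfair.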
For instance, a YouTube interview I recently watched with Nick, an engineer behind the Little Red Book algorithm, illustrates this well. Nick mentioned that algorithm engineers design the models that determine who and what gets to be famous. Sitting in his gym in downtown Shanghai one day, he noticed a hospital across the street that specialized in facial bone-shaving surgery, whose customers came out with their heads bandaged. He walked in and saw that everyone in the hospital was using the Little Red Book app. He suddenly realized that the content he had sifted out had shaped these people’s aesthetics, and they, in turn, had begun to change their faces based on what he had instilled in them. In this view, algorithms are biased, subject to the subjective influence of their engineers. As in the story at the beginning of this article, the “smartest horse” gave feedback based on subjective cues such as the questioner’s tone and movement, so the horse’s owner was really the one helping the horse make decisions. In the same way, the designers of Little Red Book help users make decisions.

(Image 3 from the YouTube video “Little Red Book algorithm engineer Nick interview”: https://youtu.be/cN07i8Puqzs?si=VQqCfEWSThx1qQe5&t=3243. The relevant segment runs from 54:03 to 55:53.)


What is the impact?
So how does the bias of algorithm engineers affect us? In Media, Culture & Society, Just and Latzer state that algorithmic selection on the Internet influences not only what we think about (agenda-setting) but also how we think about it (framing) and, consequently, how we act.
This means that algorithms shape both our thoughts and our actions, and Little Red Book is domesticating us all into slaves of the algorithm. As Nick mentions in the video, because of his own bias, the model he designed favors “beautiful women” with pointed chins and big eyes. Users of Little Red Book come to believe that this is the standard of beauty, and then go to plastic surgery hospitals to give themselves that “beauty” of pointed chins and big eyes. In this view, the bias of Little Red Book’s algorithm designers shapes users’ behavior and affects their physical and mental health.
In addition, Just and Latzer note that algorithmic selection shapes the construction of individuals’ realities (individual consciousness) and, as a result, affects the culture, knowledge, norms, and values of societies (collective consciousness), thereby shaping social order in modern societies. The algorithmic bias of Little Red Book therefore affects not only individuals’ physical and mental health but also the collective, extending from the individual to the broader social atmosphere.

How to govern it?
As the analysis above shows, algorithm designers determine the algorithmic model, and the resulting biases influence users’ thinking and behavior. There are always people behind the algorithm, each from a different country and region with a different cultural background, so complete neutrality is impossible. Little Red Book, in particular, is a social e-commerce platform dominated by female users, and its designers’ algorithmic biases have shaped users’ perception of beauty and aesthetic trends, affecting both personal physical and mental health and the overall social atmosphere.


So how should digital policy and governance respond? In my opinion, this goes beyond the surface measures of government and platform transparency and accountability systems; it also requires guiding people themselves. If everyone held fair views, free of prejudice and discrimination, and lived a balanced life instead of losing themselves in social platforms, governance would hardly be needed. Therefore, governing this problem means addressing the two aspects below.


First, the personal physical and mental health aspect. The Little Red Book platform can strengthen the ethical and aesthetic education of its algorithm engineers: designers should hold a balanced outlook on life and positive values, and be guided to pay attention to social and cultural diversity rather than pursuing a single aesthetic standard. In addition, the platform can increase users’ choices. For example, it can design its algorithm to ensure that recommended content reflects diverse aesthetics, avoid over-emphasizing a homogeneous standard of beauty, guide users toward independent and healthy aesthetic judgment, and discourage blind worship or imitation (a sketch of one such approach follows below). Users can also be encouraged to share works of different styles and viewpoints, promoting the diversity of aesthetic ideas in society.
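One way to “ensure the recommended content has a diversified aesthetic view” is diversity-aware re-ranking: picking feed items one by one while penalizing styles that have already been shown, so no single aesthetic dominates. The greedy sketch below is my own illustration; the style labels, scores, and penalty factor are hypothetical.

```python
# Greedy diversity re-ranking sketch: down-weight styles already shown so a
# single aesthetic cannot dominate the feed. Styles and scores are hypothetical.

def rerank(candidates, penalty=0.5):
    """Select posts one at a time, penalizing styles already selected."""
    shown = {}          # style -> times already selected
    feed = []
    remaining = list(candidates)
    while remaining:
        best = max(
            remaining,
            key=lambda p: p["score"] * (penalty ** shown.get(p["style"], 0)),
        )
        remaining.remove(best)
        shown[best["style"]] = shown.get(best["style"], 0) + 1
        feed.append(best)
    return feed

candidates = [
    {"id": 1, "style": "pointed-chin", "score": 0.9},
    {"id": 2, "style": "pointed-chin", "score": 0.8},
    {"id": 3, "style": "natural-look", "score": 0.7},
    {"id": 4, "style": "street-style", "score": 0.6},
]
print([p["id"] for p in rerank(candidates)])  # -> [1, 3, 4, 2]
```

Note how the second “pointed-chin” post drops from second place to last: the penalty makes room for other aesthetics without removing popular content entirely.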

Second, the social governance aspect. Each country can put in place policies that not only strengthen algorithmic transparency and regulation but also limit how much time each person spends on social media each day, for example to three hours. Just as people need sleep, and just as there are Saturdays and Sundays off every week, all electronic devices need periods of rest, and so do our eyes; the remaining time can go to learning, developing one’s own skills, or returning to real life with family. If this cannot be achieved at the national level, every social media platform can change its algorithmic structure rather than maximizing profit by making users addicted to, and enslaved by, the algorithm. For instance, the Little Red Book platform can introduce health features: regular reminders to rest, limits on usage time, and recommendations of healthy lifestyle content (a sketch of such a reminder follows below). With the help of user data and behavior analysis, the platform can design experiences that better serve users’ physical and mental health and positive social communication.
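As a concrete illustration of such a health feature, here is a minimal sketch of a daily screen-time reminder, assuming the hypothetical three-hour limit suggested above. The class name and limit are my own assumptions, not an existing Xiaohongshu feature.

```python
import time

# Minimal daily screen-time reminder sketch. The 3-hour limit is the
# hypothetical figure from the text, not an actual platform policy.

DAILY_LIMIT_SECONDS = 3 * 60 * 60   # 3 hours

class UsageTracker:
    def __init__(self):
        self.session_start = None
        self.used_today = 0.0        # seconds accumulated across sessions

    def open_app(self):
        self.session_start = time.time()

    def close_app(self):
        self.used_today += time.time() - self.session_start
        self.session_start = None

    def should_remind(self):
        """True once today's accumulated use reaches the daily limit."""
        current = time.time() - self.session_start if self.session_start else 0.0
        return self.used_today + current >= DAILY_LIMIT_SECONDS

tracker = UsageTracker()
tracker.open_app()
if tracker.should_remind():
    print("You have reached today's 3-hour limit. Time to rest your eyes!")
```

A real implementation would persist the counter across app restarts and reset it at midnight, but even this skeleton shows that the technical cost of such a feature is small; whether to ship it is a governance decision, not an engineering one.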

Conclusion
Based on these arguments and sources, this is what I argue. The literature citations may not be numerous, but this is my honest opinion. In fact, algorithmic bias involves not only designers but also data bias and individual user bias; due to the word limit, however, I have analyzed only designers’ algorithmic bias on the Little Red Book platform and the aesthetic problems it creates for users.
Algorithmic bias is an important information-ethics problem of the algorithmic age: it deviates from the fair and just professional norms of journalism, challenges users’ right to know and to choose their information, deconstructs social consensus, and creates public-opinion risks. Avoiding algorithmic bias requires not only the participation of algorithm designers, users, and the people algorithms serve, but also the support and assistance of algorithm platforms, news media, government, and other relevant bodies, ensuring coordination between technical rationality and value rationality so as to reduce the risk of algorithmic bias effectively.
Through the measures above, I hope to reduce the impact of Little Red Book designers’ algorithmic bias in a targeted way and to promote the healthy development of users’ physical and mental health and of social collective consciousness. At the same time, we need to monitor emerging issues and new challenges and update policies and measures in a timely manner.

Reference list:
Crawford, Kate (2021) The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale University Press, pp. 1-4.

Image 1 from Little Red Book’s official website: http://www.xiaohongshu.com/eu

Image 2 and data from: https://www.qian-gua.com/blog/detail/2898.html

Image 3 from The KK Show – 216 小紅書演算法工程師 – Nick – YouTube: https://youtu.be/cN07i8Puqzs?si=VQqCfEWSThx1qQe5&t=3243 (the video segment runs from 54:03 to 55:53).

Just, Natascha & Latzer, Michael (2019) ‘Governance by algorithms: reality construction by algorithmic selection on the Internet’, Media, Culture & Society 39(2), pp. 245-246.

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K. & Galstyan, A. (2021) ‘A Survey on Bias and Fairness in Machine Learning’, ACM Computing Surveys 54(6), p. 2. https://doi.org/10.1145/3457607

Pasquale, Frank (2015) ‘The Need to Know’, in The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press, p. 14.
