Published by Bohan Sun

Image source: Smith (2022)
Among today's many social media platforms, Twitter has attracted particular attention as a platform with a global user base. According to statistics, Twitter has 330 million users worldwide, 5.8 million of whom are in Australia (Smperth, 2022). About 55% of users log in every day, and roughly 500 million tweets are posted daily (Aslam, 2023). What do you think when you look at these numbers? Shock? Surprise? Or a deeper concern? In fact, beneath the surface of its popularity, a closer look at Twitter's AI algorithms shows that algorithmic biases exist on this much-watched platform. This blog will take you through the specific manifestations and impacts of algorithmic bias on Twitter, the existing methods for governing that bias, and the challenges that remain.
So, what are AI and algorithms?
When we see the term AI (Artificial Intelligence), we might instantly think of things in our daily lives, such as digital assistants, e-payments, and social media. We usually associate AI with smart, high-tech products or services that make our lives more convenient. Crawford (2021) points out that most of what we call AI is machine learning. AI combines figurative and material characteristics while relying on social structures and politics, because building artificial intelligence depends on human labour, natural resources, and more. AI works through intensive training and testing on large amounts of data. The human and material costs involved in running and optimizing AI are seen as serving existing dominant interests, which is why Crawford (2021) argues that AI is a "registry of power". AI as embodied in social media is inseparable from algorithms. Flew (2021) defines algorithms as rules and procedures established for computation, "data processing" and "automated reasoning". On the Internet, an algorithm is a computational process in which user input and data intersect to produce an output; in simple terms, it is a series of procedures or rules that a computer or application follows to solve a problem. Social media relies on algorithms to provide users with "personalized" services. Kim (2017) claims that the data generated by users' clicks and actions on a platform reveals each user's preferences, and algorithms recommend content that users may be interested in based on those preferences. However, the content recommended by these algorithms is purposeful, and algorithmic systems face challenges of their own.
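To make the "user input plus data produces an output" idea concrete, here is a minimal sketch of preference-based recommendation. Everything in it, the click log, the catalog, and the scoring rule, is invented for illustration; no real platform's recommender is anywhere near this simple.

```python
# A toy "algorithm as input + data -> output" sketch. The scoring rule
# is an assumption made for illustration, not any real platform's code.
from collections import Counter

def recommend(click_log: list[str], catalog: dict[str, str], k: int = 3):
    """Rank catalog items by how often the user clicked that item's topic."""
    topic_counts = Counter(click_log)  # clicks per topic = the user's "preferences"
    scored = sorted(catalog.items(),
                    key=lambda item: topic_counts[item[1]],
                    reverse=True)
    return [name for name, _topic in scored[:k]]

# Example: a user who mostly clicks sports content gets sports-first results.
clicks = ["sports", "sports", "music", "sports"]
catalog = {"match highlights": "sports", "new album": "music", "election recap": "news"}
print(recommend(clicks, catalog))  # ['match highlights', 'new album', 'election recap']
```

Even in this toy version, the pattern matters: whatever the scoring rule rewards is what users see, which is exactly where purposeful recommendation, and bias, can enter.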
What is algorithmic bias? Why does bias exist?

Image source: TRT World (2018)
When people search for things on social media search engines, the results they get can differ because of algorithmic bias, fairness issues, and a lack of transparency. Relative to some norm or standard, algorithmic bias shows up in an algorithm's output, performance, or impact, and that standard may be moral, statistical, or social (Fazelpour & Danks, 2021). Bias in sampling practices often shows up in the data that informs decision-making (Flew, 2021). These biases may reflect social prejudices around race, gender, sexual orientation, politics, economics, and more.

Image source: Fazelpour & Danks (2021)
The sources of algorithmic bias include problem specification, data, modelling and validation, and deployment (Fazelpour & Danks, 2021). First, the goals decision-makers care about are complex and ambiguous; given competing values and normative standards, it is difficult to translate them into a precise specification for an algorithm, so bias can enter at that step. Second, algorithms and models rely on historical datasets. Because algorithms reflect the statistics of historical data, when that data contains biases, the resulting models tend to reproduce them. Third, the pursuit of fairness in modelling and validation involves value judgments: fairness metrics trade off against accuracy and against one another, and the choice of metric shapes what the model recommends and surfaces. Finally, bias arises in deployment. Because algorithms optimize whatever they were trained to optimize, a gap between users' actual values and the values encoded in the algorithm produces bias in practice (Fazelpour & Danks, 2021).
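To illustrate the fairness-metrics point, here is a minimal sketch of one common metric, the demographic parity gap: the difference in favourable-outcome rates between two groups. The numbers below are made up, and real audits weigh many competing metrics (Fazelpour & Danks, 2021), which is precisely where the value judgments arise.

```python
# A minimal demographic-parity check. The outcome lists are fabricated
# for illustration; group labels and threshold choices are assumptions.
def positive_rate(outcomes: list[int]) -> float:
    """Fraction of cases that received the favourable outcome (1)."""
    return sum(outcomes) / len(outcomes)

# Hypothetical model decisions (1 = favourable) for two groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75.0% favourable
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% favourable

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"demographic parity gap: {gap:.2%}")  # 37.50% -> a large disparity
```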
Case study: Bias in Twitter’s photo-cropping algorithm.

Image source: Tonkin (2021)
- What is automatic photo-cropping? What are the advantages?
Automatic photo-cropping crops a photo to a given viewport size or aspect ratio so that it fits the display frame (Yee et al., 2021), while ensuring that the most relevant or engaging parts of the photo are shown within the viewport; less relevant or interesting parts are cropped away. Automatic image cropping is often used for "responsive images" (Yee et al., 2021). It has clear advantages. It reduces the human labour that a large volume of images would otherwise require. Because each platform and device demands different image sizes, automatic cropping can quickly produce the required size and format. It can also help viewers quickly identify the key information in a picture (Yee et al., 2021), so users see more appealing, better-framed images.
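As a concrete picture of the basic task, here is a minimal centre-crop sketch using the Pillow library. It shows only the generic "fit a target aspect ratio" step, not Twitter's saliency-driven choice of where to crop, which the next section turns to.

```python
# A minimal aspect-ratio crop, assuming the Pillow library. This is a
# generic illustration of the task, not Twitter's implementation.
from PIL import Image

def center_crop(image: Image.Image, target_ratio: float) -> Image.Image:
    """Crop the largest centred region matching target_ratio (width/height)."""
    width, height = image.size
    if width / height > target_ratio:
        # Image is too wide: trim equally from left and right.
        new_width = int(height * target_ratio)
        left = (width - new_width) // 2
        return image.crop((left, 0, left + new_width, height))
    else:
        # Image is too tall: trim equally from top and bottom.
        new_height = int(width / target_ratio)
        top = (height - new_height) // 2
        return image.crop((0, top, width, top + new_height))
```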
- How does Twitter’s photo-cropping algorithm work?
Bateman (2021) explains that Twitter's images were cropped automatically by a "saliency" algorithm. Saliency is a measure of how likely a particular part of an image is to attract the human eye; users are drawn to highly salient regions such as text, faces, and objects. Twitter's algorithm was designed to find these regions and crop images around them. Through this filtering and cropping, people with certain characteristics were prioritized by Twitter, or were the only ones shown at all.
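In outline, such a system scores every pixel for saliency and then places the crop window around the highest-scoring point. The sketch below assumes a saliency map already exists as a 2-D NumPy array and that the crop fits inside the image; the windowing logic is a simplified illustration of the idea, not Twitter's actual pipeline.

```python
# A simplified saliency-based crop placement. The saliency map is
# assumed to come from some upstream model; everything here is
# illustrative, not Twitter's production code.
import numpy as np

def saliency_crop(saliency: np.ndarray, crop_h: int, crop_w: int):
    """Return (top, left) of a crop window centred on the most salient pixel."""
    h, w = saliency.shape
    # Locate the single highest-scoring pixel in the saliency map.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Centre the window on that pixel, clamped to the image bounds.
    top = int(np.clip(y - crop_h // 2, 0, h - crop_h))
    left = int(np.clip(x - crop_w // 2, 0, w - crop_w))
    return top, left
```

The bias question then reduces to: whose faces does the saliency model score highly? If the training data over-represents some groups, this argmax quietly inherits that skew.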
- What biases did Twitter’s photo-cropping algorithm show?
So where, specifically, does Twitter's algorithmic bias lie? Vincent (2021) reported that Twitter's photo-cropping algorithm exhibited biases around age, appearance, race, language, and gender. Research found that the cropping system favoured white faces over black faces, and that it favoured young, slim, light-skinned faces with feminine features. In terms of language, it preferred English text over other scripts such as Arabic (Vincent, 2021). Collier (2021) adds that the algorithm was also heavily biased against people with grey hair; this mirrors the bias against older people on social media and their marginalization within the algorithm. Minorities and people with disabilities are likewise marginalized. Take the image of a Muslim woman wearing a hijab, which Twitter's cropping algorithm ignored: this reflects religious bias, and racial diversity is not well represented either. People with disabilities face the same prejudice. Collier (2021) notes that in a group photo that includes a wheelchair user, Twitter was likely to crop the disabled person out simply because the wheelchair places them lower in the frame. Judging by these results, the image-cropping algorithm invisibly reinforces social bias by tacitly accepting the injustices that marginalized groups already face, and by filtering out those who do not fit its expectations of appearance, age, and skin colour it further exacerbates bias and inequality.
What existing governance methods address this bias?

Image source: Chowdhury & Williams (2021)
Recognizing that the bias in its photo-cropping algorithm could cause further moral hazard, Twitter proactively acknowledged the harm and made changes. It invited the community to help identify the negative effects of such algorithms and encouraged people to point out problems in its existing ML models (Chowdhury & Williams, 2021). The purpose of the event was to focus attention on ML ethics and to brainstorm ways of efficiently identifying and mitigating existing and potential vulnerabilities.
The process of governance:
- In October 2020, users identified and raised the issue of unfair photo cropping on Twitter.
- In May 2021, Twitter began adjusting how pictures are presented (from cropped to uncropped).
- In August 2021, to strengthen governance, Twitter hosted its first Algorithmic Bias Bounty Challenge, hoping that more voices and wider participation would identify other biases and harms that still exist.
Twitter's governance of this AI and algorithm issue was crowd-driven. Using the bounty as an incentive, Twitter encouraged people from around the world and from different fields to take part, and the novelty of the task and the attraction of the prize money drew many entrants. Unlike government regulation, Twitter's handling of the issue was self-governed, which also quietly increased public trust in and support for the platform. Moreover, as a precedent-setting user of the saliency model, Twitter demonstrated the model's risks and vulnerabilities (Twitter, 2021). These vulnerabilities, and the feedback gathered through public participation, serve as a warning to other people and platforms using similar models and data, and as guidance for those planning to.
The challenges of governing bias in photo-cropping algorithms.
The marginalized groups ignored by algorithms reflect inequalities that still exist in society today. The "unconscious" algorithm mirrors historical and persistent patterns of racism (Fazelpour & Danks, 2021). Although the public has identified the problems in the photo-cropping algorithm, marginalized groups are still biased against and overlooked in society at large, and many of these problems sit in the unconscious assumptions of the public, the authorities, and social media. The human ideology that algorithmic bias reflects therefore remains a challenge to eliminating prejudice. In addition, existing governance strategies have limits: a better measurement strategy might enable a fairer algorithm, but it cannot rule out the existence of other biases.
Possible improvement recommendations
- Eliminate saliency-based cropping, and ensure that any automatic cropping avoids reinforcing the subordination of marginalized groups (Yee et al., 2021); a minimal no-crop alternative is sketched after this list.
- Optimize the automatic cropping algorithm so that stereotyping, the male gaze, and similar patterns do not influence the crop (Yee et al., 2021).
- Identify the groups that deserve attention and call for greater public awareness of these marginalized groups.
- Have the algorithm deliberately surface more images of marginalized groups, to better account for them.
- Governments and social media platforms should pay more attention to, and regulate, the ethical dimensions of technology.
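For the first recommendation, here is a minimal sketch of one no-crop alternative: letterbox padding, which scales the whole image into the target frame instead of cutting anyone out. This mirrors the direction Twitter moved in (showing uncropped previews), but the code itself is illustrative, not Twitter's.

```python
# A letterbox "no-crop" alternative using Pillow: everyone in the photo
# stays in the frame. Illustrative only; not Twitter's shipped code.
from PIL import Image

def letterbox(image: Image.Image, frame_w: int, frame_h: int,
              fill=(0, 0, 0)) -> Image.Image:
    """Scale the image to fit inside frame_w x frame_h, padding the rest."""
    scale = min(frame_w / image.width, frame_h / image.height)
    new_size = (int(image.width * scale), int(image.height * scale))
    resized = image.resize(new_size)
    # Centre the resized image on a padded canvas of the target size.
    canvas = Image.new("RGB", (frame_w, frame_h), fill)
    offset = ((frame_w - new_size[0]) // 2, (frame_h - new_size[1]) // 2)
    canvas.paste(resized, offset)
    return canvas
```

The design trade-off is visual: padding sacrifices some screen real estate, but it removes the algorithm's power to decide who is worth showing.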
Conclusion
In conclusion, AI and algorithms can assist humans and help the world run more efficiently. But, like people, AI is not perfect; it runs into many difficulties and flaws in actual operation. Algorithmic bias is real, and its harms fall primarily on marginalized groups on the Web and on social media platforms. To avoid exacerbating inequality and discrimination, we therefore need to use AI properly and strengthen oversight of algorithms. The bias in Twitter's photo-cropping algorithm also reflects bias in real society, and that is the direction and goal of governance that people should consider most closely.
Reference List
Aslam, S. (2023, March 9). Twitter by the Numbers (2022): Stats, Demographics & Fun Facts. Omnicore. https://www.omnicoreagency.com/twitter-statistics/
Bateman, T. (2021, May 20). Twitter’s photo crop algorithm is biased toward white faces and women. Euronews. https://www.euronews.com/next/2021/05/20/twitter-is-removing-its-photo-crop-algorithm-that-prefers-white-people-and-women
Chowdhury, R., & Williams, J. (2021, July 30). Introducing Twitter’s first algorithmic bias bounty challenge. Blog.twitter.com. https://blog.twitter.com/engineering/en_us/topics/insights/2021/algorithmic-bias-bounty-challenge
Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale University Press, pp. 1-21.
Collier, K. (2021, August 9). Twitter’s racist algorithm is also ageist, ableist and Islamaphobic, researchers find. NBC News. https://www.nbcnews.com/tech/tech-news/twitters-racist-algorithm-also-ageist-ableist-islamaphobic-researchers-rcna1632
Fazelpour, S., & Danks, D. (2021). Algorithmic bias: Senses, sources, solutions. Philosophy Compass, 16(8). https://doi.org/10.1111/phc3.12760
Flew, T. (2021). Regulating Platforms. Cambridge: Polity, pp. 79-86.
Kim, S. (2017). Social media algorithms: why you see what you see. Georgetown Law Technology Review, 2(1), 147-154.
Smith, C. S. (2022, July 1). Twitter’s algorithm ranking factors: A definitive guide. Search Engine Land. https://searchengineland.com/twitter-algorithm-ranking-factors-386215
Smperth. (2022, March 3). 2023 Twitter Statistics // Everything You Need to Know from SMPerth. Smperth. https://www.smperth.com/resources/twitter/twitter-statistics/
Tonkin, S. (2021, August 10). Twitter’s photo-cropping algorithm favours light-skinned faces – study. Mail Online. https://www.dailymail.co.uk/sciencetech/article-9879871/Twitters-photo-cropping-algorithm-favours-young-beautiful-light-skinned-faces-study-confirms.html
TRT World. (2018). Algorithmic bias explained [Video]. YouTube. https://www.youtube.com/watch?v=bWOUw8omUVg
Twitter. (2021, September 7). Sharing learnings from the first algorithmic bias bounty challenge. Blog.twitter.com. https://blog.twitter.com/engineering/en_us/topics/insights/2021/learnings-from-the-first-algorithmic-bias-bounty-challenge
Vincent, J. (2021, August 10). Twitter’s photo-cropping algorithm prefers young, beautiful, and light-skinned faces. The Verge. https://www.theverge.com/2021/8/10/22617972/twitter-photo-cropping-algorithm-ai-bias-bug-bounty-results
Yee, K., Tantipongpipat, U., & Mishra, S. (2021). Image Cropping on Twitter: Fairness Metrics, their Limitations, and the Importance of Representation, Design, and Agency. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1–24. https://doi.org/10.1145/3479594