Under the Shadow of Algorithms: Decoding Digital Discrimination in Our Lives


During our mid-term break, my friends and I were excitedly planning our trip to Fiji, never anticipating that algorithmic bias in big data would turn our booking process into a nightmare. What should have been a simple and enjoyable preparation turned into deep frustration due to constantly rising ticket prices and repeated “insufficient availability” alerts. I refreshed the page over and over, and each time I tried to purchase, the price of the tickets increased, eventually costing twice the original price.

Just when I was about to give up and consider changing our destination, my friend casually opened her booking app and found that the same flight, at the same time, was still listed at the original price. Although we eventually resolved the issue, the experience made me acutely aware of what is known as “big data price gouging”: price fluctuations that seem random but are actually the product of carefully designed pricing algorithms used by airlines (Wu, 2023). My frequent refreshing and eagerness to buy had apparently been captured and exploited by the algorithm.

This experience not only made me question the fairness of online pricing but also made me realize that this algorithm-driven price discrimination is just the tip of the iceberg of widespread unjust behaviors hidden behind digital transactions. For ordinary consumers, we often do not perceive how these algorithms manipulate our decisions and feelings behind the scenes, yet this unnoticed manipulation profoundly affects our consumer experience and economic decisions.

Understanding Algorithms: The Invisible Workers of the Digital Age

Shutterstock / Who is Danny

First, let’s clarify what an “algorithm” is. Simply put, an algorithm is a set of rules and procedures designed to process data, perform calculations, and conduct automated reasoning (Flew, 2021). Algorithms are the unseen laborers of our digital world.

At the heart of the information economy, from the internet to financial companies, businesses are using the vast amounts of personal data we leave behind to make crucial decisions that affect our choices (Pasquale, 2015). As data volumes grow, automated algorithms have become essential tools for managing these large datasets. Their roles extend beyond searching, predicting, or monitoring; they also encompass data filtering, recommendation systems, and content generation (Latzer et al., 2014). Imagine every time we search online, shop, or scroll through social media, there are algorithms behind the scenes controlling what we see and do.

Big data is no longer just a byproduct of technology; it has become almost a new category of economic asset. Combined with algorithms, it allows economic and social value to be extracted from data, shaping how society perceives and acts (Just & Latzer, 2017). However, this raises a question: how do we manage and use these powerful resources while ensuring fairness and transparency?

Algorithmic Bias: The Invisible Barrier in the Digital Age

Source: Connie Chen

Understanding Algorithmic Bias

You may have heard of “algorithmic bias,” but what does it really mean? Simply put, algorithmic bias occurs when algorithms process data and make decisions that, due to reliance on biased data or flawed model design, unfairly or harmfully affect certain individuals or groups (Kordzadeh & Ghasemaghaei, 2022). In today’s fast-evolving tech landscape, big data and algorithms not only quantify our online behaviors but also create digital profiles of individuals (Pasquale, 2015). But have you ever considered whether these seemingly objective digital profiles are truly fair and unbiased, or might they be subtly working against you?

Human Factors and Algorithmic Bias

Firstly, it’s important to understand that no matter how advanced algorithms are, their design and implementation are never completely neutral or objective. The mathematical models and automated processes behind algorithms are set by humans, which means they may carry the biases of their creators (Noble, 2018). Indeed, harmful societal values such as racism and sexism may inadvertently be embedded into these algorithms, affecting their fairness and effectiveness.

The Manifestation of Algorithmic Bias in Everyday Life

Algorithmic bias is not just present in big data analysis and decision-making; it has infiltrated every aspect of our lives. From the price variations in online shopping and the quality of service in ride-sharing apps to unequal treatment in financial services and recruitment processes, algorithmic bias is ubiquitous and varied in form.

Case Study: How Algorithmic Bias Affects Our World

Dynamic Pricing: More Than Just Numbers

Since the 1960s, price discrimination and quality discrimination have been widely used strategies in the business world. American Airlines was the first company to adopt this approach, which it called “screen science” (Wu, 2023). Under it, ticket prices are adjusted dynamically according to passenger demand, competitive conditions, timing, purchase history, and other factors.

The core of this corporate practice is that, through algorithms and big data, companies can analyze users’ consumption habits in depth, classify users accordingly, and then apply differentiated services and pricing strategies (Siegert & Ulbricht, 2020, as cited in Wu, 2023).

Although this pricing strategy based on user behavior and data analysis may be reasonable from a business perspective, it can easily trigger a crisis of trust among consumers (Wu, 2023). The original intention is to maximize revenue, but what follows is a serious challenge to fairness. Imagine two passengers sitting in identical seats on the same flight, yet paying very different prices simply because of their different purchasing behaviors. This also raises questions of corporate ethics and social responsibility, especially regarding user data protection and personal privacy.
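To make the mechanism concrete, here is a minimal toy sketch of behavior-based dynamic pricing. Real airline revenue-management systems are vastly more complex; the function name, the 5%-per-view markup, and the scarcity surcharge are all invented for illustration.

```python
# Toy sketch of behavior-based dynamic pricing (illustration only;
# the markup numbers and function are assumptions, not a real system).

def quote_price(base_price: float, views_by_user: int, seats_left: int) -> float:
    """Return a price that rises with repeated views and with scarcity."""
    eagerness_markup = 1.0 + 0.05 * min(views_by_user, 10)  # up to +50% for eager refreshers
    scarcity_markup = 1.2 if seats_left < 10 else 1.0       # +20% when inventory runs low
    return round(base_price * eagerness_markup * scarcity_markup, 2)

# A first-time visitor and a repeat refresher see different prices
# for the exact same seat:
print(quote_price(500.0, views_by_user=0, seats_left=50))  # 500.0
print(quote_price(500.0, views_by_user=8, seats_left=50))  # 700.0
```

Notice that nothing about the seat itself changed between the two quotes; only the user’s observed behavior did, which is exactly the asymmetry my friend and I stumbled into.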

Algorithmic Bias in Recruitment Processes

Algorithms play a key role in modern recruitment. They are designed to screen resumes and help identify the candidates who best match the job requirements. This technology not only improves recruitment efficiency but can also help reduce subjective bias within human resources departments (Köchling & Wehner, 2020). However, these algorithms are not perfect. They can sometimes exclude qualified candidates because of biased training data or incomplete design considerations, favoring applicants with specific industry backgrounds while overlooking those from non-traditional ones.

Photograph: Brian Snyder/Reuters

A specific case is the early version of Amazon’s hiring algorithm, which was criticized for gender bias (Dastin, 2018). Trained largely on resumes submitted by men, the algorithm learned to treat male-associated signals as markers of career “suitability,” with the result that female applicants’ resumes were given less consideration during screening. Such gender-based screening not only violates the principle of professional equality but also runs counter to workplace values of diversity and inclusion.
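A heavily simplified sketch shows how such a scorer can go wrong. The weights below are hypothetical, mimicking the kind of pattern reported in the Amazon case (where gendered terms were penalized); they are not the actual model.

```python
# Minimal sketch of how a resume scorer trained on male-dominated
# historical data can penalize gendered terms. The weights are
# invented to mirror the reported behavior, not real model output.

LEARNED_WEIGHTS = {          # weights a model might infer from biased history
    "executed": 1.0,
    "captured": 1.0,
    "women's": -1.5,         # e.g. "women's chess club captain" is penalized
}

def score_resume(text: str) -> float:
    """Sum the learned weight of each word; unknown words score 0."""
    return sum(LEARNED_WEIGHTS.get(word, 0.0) for word in text.lower().split())

print(score_resume("executed project roadmap"))       # 1.0
print(score_resume("captain of women's chess club"))  # -1.5
```

The second candidate loses points for a word that says nothing about competence, which is precisely the kind of learned, invisible penalty auditors found.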

Köchling & Wehner (2020) also highlighted discrimination in the placement of recruitment ads. To optimize advertising cost-effectiveness, the algorithm might discriminatorily target ads, even though job advertisements should be gender-neutral. By training with historical employment data, the recruitment algorithm might assume that men are more suited for managerial positions. This bias not only results in management roles being predominantly filled by men but also reduces the number of management job ads pushed to women on social media, thereby depriving women of the opportunity to apply for these positions. This gender-based assumption further “optimizes” the algorithm, exacerbating gender bias issues (Wu, 2023).

Source: flation

Understanding Algorithmic Bias: Input Determines Output

In the realm of big data, there is a concept called “BIBO” (Bias In, Bias Out), which highlights that any bias in the input data will directly affect the output (Mayson, 2018). This is a reminder that, despite their potential, algorithms’ fairness and accuracy are still limited by the quality of the data they use. Big data is not only a product of technological advancement but also a mirror reflecting the existing and latent biases of our society. If there is widespread bias against a particular minority or gender, those biases are likely to be reproduced and amplified by algorithms in our decisions and everyday lives.
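BIBO can be demonstrated in miniature: a trivially simple “model” fit on skewed historical decisions faithfully reproduces the skew. The groups and counts below are fabricated for illustration.

```python
# "Bias in, bias out" in miniature: a rule learned from biased
# historical labels reproduces that bias. All data is invented.

from collections import Counter

# Historical decisions were skewed: group B was approved far less often.
history = ([("A", "approve")] * 80 + [("A", "deny")] * 20
           + [("B", "approve")] * 30 + [("B", "deny")] * 70)

def majority_rule(history, group):
    """'Learn' the most common past outcome for a group and repeat it."""
    outcomes = Counter(outcome for g, outcome in history if g == group)
    return outcomes.most_common(1)[0][0]

print(majority_rule(history, "A"))  # approve
print(majority_rule(history, "B"))  # deny  <- the input bias becomes the output
```

Nothing in the code is malicious; the unfairness lives entirely in the training data, which is what makes BIBO so hard to spot from the outside.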

Specific Forms of Algorithmic Bias

Algorithmic bias often manifests in two main forms: automation bias and proxy bias (Wu, 2023).

  • Automation Bias: This bias stems from an over-reliance on algorithmic outputs. We might assume that data and decisions provided by machines are more reliable than human judgment, leading us to overlook other sources of information. This trust might cause us to blindly accept algorithm outputs that contain historical inequalities or societal biases, such as gender or racial biases (Fazelpour & Danks, 2021). Such biases are not only adopted by algorithms but may also be reinforced in their decision-making processes.
  • Proxy Bias: When algorithm designers try to avoid using overtly sensitive variables like gender or race, they might inadvertently choose other characteristics strongly correlated with these variables, such as postal codes of specific regions. Although this approach may seem neutral, it can still indirectly reflect and perpetuate existing social and economic biases, making these biases difficult to detect and eliminate (Wu, 2023).
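Proxy bias is easy to see in a toy example. Below, a decision rule never looks at the protected attribute at all, yet a correlated feature (an invented postal code) carries the same signal. All data is fabricated.

```python
# Proxy bias sketch: dropping the protected attribute is not enough
# when a correlated feature stands in for it. Data is invented.

applicants = [
    {"postcode": "1001", "group": "X"},  # postcode 1001: mostly group X
    {"postcode": "1001", "group": "X"},
    {"postcode": "2002", "group": "Y"},  # postcode 2002: mostly group Y
    {"postcode": "2002", "group": "Y"},
]

def approve(applicant) -> bool:
    # The rule never reads "group", yet postcode acts as its proxy.
    return applicant["postcode"] == "1001"

for a in applicants:
    print(a["group"], approve(a))  # group X approved, group Y denied
```

On paper the model is “blind” to group membership; in effect it has simply laundered the bias through geography.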

The Impact of Algorithmic Bias: An Issue We Cannot Ignore

The main worries for businesses stem from biases originating from both humans and algorithms. Source: DataRobot

We cannot turn a blind eye to “algorithmic bias.”

Fairness should be a right for everyone, yet the discrimination caused by unfair algorithms is invisible, hard to detect, and profoundly affects every corner of our lives. Kordzadeh & Ghasemaghaei (2022) discuss how algorithmic biases can lead to erroneous decisions, causing widespread negative impacts at personal, organizational, and societal levels.

Individually, these biases might mean paying unfairly high prices or facing inequalities in job opportunities, especially for minority groups. At an organizational level, algorithmic biases can lead to violations of equal opportunity policies, create unethical work environments, increase employee turnover, and result in higher customer attrition rates due to perceived discrimination. Societally, these biases may exacerbate the economic disparities between historically disadvantaged groups and others, deepening societal divisions.

Algorithm Regulation and Solutions: Making Technology Fairer

Transparency is Key

Imagine if every time an algorithm made a decision, you could clearly understand how it operates. What would that be like? By demanding transparency in the decision-making process of algorithms, we can more easily identify biases and ensure that every user understands how these decisions impact their lives—it’s like letting the algorithm “run naked,” where everything is clear and visible.

The opaque nature of algorithms often leaves people in the dark about how these systems operate, because the details are either proprietary or too complex to understand (Shin & Park, 2019). Their irregular learning structures make algorithms difficult to trace and interpret. Moreover, overly stringent data governance can hinder economic innovation and efficiency (Wu, 2023). Shin & Park (2019) therefore suggest that if qualified, trustworthy experts or institutions provide users with easy-to-understand information about a system, users may forgo the need for complete, transparent access to the underlying algorithms and datasets. When people understand how a system works, they are more likely to use it correctly and to trust its designers and developers. This approach also better balances the transparency the audience needs against the complexity of the algorithms.

Strengthening Legal and Ethical Standards

Establishing strict legal frameworks and ethical norms is essential to protect individuals from the unfair impacts of algorithmic biases. By clearly defining legal limits for algorithm applications, we can prevent technological abuse and enhance industry-wide social responsibility. Ongoing issues with deeply ingrained and complex algorithmic biases necessitate regulations that balance different fairness concepts and set practical standards (Fu, Huang, & Singh, 2020).

Legislative efforts to address algorithmic bias are progressing, with significant measures such as the New York City Council’s December 2017 enactment of the first U.S. law on algorithm accountability and fairness. This law initiated a task force to monitor and suggest enhancements for city agencies’ automated decision systems. Furthermore, in April 2019, the proposed Federal Algorithmic Accountability Act mandated companies to evaluate their automated systems on key criteria including accuracy, fairness, bias, discrimination, privacy, and security (Fu, Huang, & Singh, 2020).

Strengthen public education and awareness raising

Finally, and in my view most importantly, we need to recognize that algorithms can be discriminatory.

Relevant departments and policymakers need to educate the public and raise awareness on many fronts, for example through media campaigns and educational courses that explain how algorithms work and where their biases may lie. In addition, policymakers and technology developers need to work together to continually review and improve algorithms, ensuring that their design and implementation do not exacerbate social inequalities. This includes stricter regulatory and transparency requirements, as well as regular algorithmic audits to ensure the fairness and accountability of algorithmic decision-making. As individuals, we should all remain vigilant and push back against this unfairness, from price discrimination on various apps to the biased profiles algorithms build of us.


My experience with flight booking is a telling example of the broader issue of algorithmic bias. It leads me to wonder: had my friend not fortuitously checked the flight price, would I have remained ensnared by the algorithm’s skewed logic, unaware of the discrimination at play? This incident underscores the stealthy impact of such biases, which often operate undetected. As our lives become increasingly governed by digital decision-making, it is critical that we strive for greater algorithmic transparency and enforce fair digital practices. Acknowledging the presence of algorithmic discrimination is just the beginning. We must stay vigilant, advocate for more equitable systems, and support robust regulations that prevent digital discrimination. Doing so ensures that our technological advancements foster social justice and contribute meaningfully to societal progress. This commitment to vigilance and reform is essential for shaping a future where technology serves all of humanity fairly and justly.


Dastin, J. (2018, October 11). Insight – Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/idUSKCN1MK0AG

Flew, T. (2021). Issues of concern. In Regulating platforms (pp. 79-86). Polity. ISBN 9781509537082.

Fu, R., Huang, Y., & Singh, P. V. (2020). AI and Algorithmic Bias: Source, Detection, Mitigation, and Implications. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3681517

Just, N., & Latzer, M. (2017). Governance by algorithms: reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238-258. https://doi.org/10.1177/0163443716643157

Köchling, A., & Wehner, M. C. (2020). Discriminated by an algorithm: A systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development. Business Research, 13, 795-848. https://doi.org/10.1007/s40685-020-00134-w

Kordzadeh, N., & Ghasemaghaei, M. (2022). Algorithmic bias: review, synthesis, and future research directions. European Journal of Information Systems, 31(3), 388–409. https://doi.org/10.1080/0960085X.2021.1927212

Mayson, S. G. (2018). Bias in, bias out. Yale Law Journal, 128, 2218.

Noble, S. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York, USA: New York University Press. https://doi.org/10.18574/nyu/9781479833641.001.0001

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press. http://www.jstor.org/stable/j.ctt13x0hch

Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms. Paper presented at the preconference, “Data and Discrimination: Converting Critical Concerns into Productive Inquiry,” at the 64th Annual Meeting of the International Communication Association, May 22, Seattle, WA.

Shin, D., & Park, Y. J. (2019). Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior, 98, 277-284. https://doi.org/10.1016/j.chb.2019.04.019

Wu, Y. (2023). Data governance and human rights: An algorithm discrimination literature review and bibliometric analysis. Journal of Humanities, Arts and Social Science, 7(1), 128-154. https://doi.org/10.26855/jhass.2023.01.018
