Is Artificial Intelligence a Catalyst for Class Division or a Bridge to Equality?

How Artificial Intelligence Could Widen the Gap Between Rich and Poor Nations (Alonso et al., 2020)

The rise of artificial intelligence (AI), a game-changing technology that has reshaped daily life and society, has unquestionably characterized the zeitgeist of the early 21st century. As AI systems grow more complex and permeate every element of human endeavor, from labor markets to personal social connections, class divisions are being both challenged and reinforced in new ways.

AI and Social Classes

The Impact of AI on Different Social Classes

AI refers to the enablement of computers to perform tasks that typically require human intelligence (Grewal, 2014). This encompasses pattern recognition, language understanding, learning, reasoning, and problem-solving. AI applications have accordingly permeated our daily lives, from the optimization of search engine algorithms to the development of autonomous vehicles and advances in personalized medicine. However, Yin et al. (2024) asserted that AI has a double-edged sword effect. The ‘double-edged sword’ is a metaphor for a situation with both positive impacts and negative consequences: while AI has significantly enhanced productivity and innovation, it has also had profound implications for the job market and social equality.

For high-skilled workers engaged in complex decision-making, planning, and creative tasks, the integration of AI has improved work efficiency and sparked innovation within industries. By automating routine tasks, AI allows these workers to focus more energy on strategic-level work, thereby enhancing output and creative capabilities (Smith & Anderson, 2014). In 2015, IBM launched Watson Health, an advanced AI medical technology platform dedicated to improving all aspects of healthcare, from clinical decision support to medical research and patient care. It uses natural language processing and machine learning algorithms to analyse unstructured medical data, extracting key information from medical journals, case reports, and clinical trials. Watson can thus help doctors digest the latest research findings alongside the specific conditions of individual patients, enabling more personalized treatment plans.

Nevertheless, the widespread application of AI poses a significant threat to low-skilled workers whose jobs involve repetitive and predictable tasks. AI-driven automation technologies can complete these tasks more efficiently, putting these workers at risk of unemployment or underemployment (Frey & Osborne, 2017). Amazon Sparrow, for example, is an advanced robotic picking system developed by Amazon. Sparrow can identify, pick, and handle millions of different products, a process that is highly complex and labor-intensive in traditional warehouse operations. While such automated systems have greatly improved warehouse efficiency and processing capacity and reduced error rates, they also potentially reduce reliance on human pickers. From this perspective, introducing automated picking systems could mean fewer low-skilled job positions, as robots, once deployed, can work incessantly and perform tasks with consistent precision.

Additionally, AI has conferred considerable benefits to consumers by reducing costs and enhancing the quality of goods and services. However, these advantages are not distributed equally; individuals with higher incomes and greater access to technology are more likely to benefit, potentially exacerbating existing inequalities (Brynjolfsson & McAfee, 2014). The evolution and widespread adoption of smart home technology has offered substantial convenience and cost savings for consumers. For instance, Ecobee’s Smart Thermostat Premium integrates an advanced AI-driven climate control system. It monitors indoor and outdoor temperatures, learns user behavior, and predicts when to adjust the home’s temperature automatically, providing comfort while optimizing energy use. By connecting the device to their home’s Wi-Fi and controlling it remotely, consumers can save up to 23% on heating and cooling expenses. Ecobee also provides thorough energy consumption data and energy-saving suggestions that help consumers understand their usage patterns. Yet the savings and conveniences of the Smart Thermostat Premium tend to favor those with better financial circumstances and more technical proficiency, given the high initial installation cost and the technical knowledge needed for setup and maintenance.
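The “learning” loop such a thermostat runs can be illustrated with a deliberately simplified sketch. This is not Ecobee’s actual algorithm, which is proprietary; it only shows the general idea of inferring a temperature schedule from a hypothetical log of past manual adjustments:

```python
from collections import defaultdict

# Hypothetical log of (hour_of_day, setpoint_deg_c) manual adjustments
# made by the household over several days.
history = [(7, 21), (7, 21), (7, 20), (22, 17), (22, 18), (22, 17)]

by_hour = defaultdict(list)
for hour, setpoint in history:
    by_hour[hour].append(setpoint)

# Predict each hour's setpoint as the average of past adjustments,
# so the device can pre-heat or pre-cool without being asked.
schedule = {h: round(sum(v) / len(v), 1) for h, v in by_hour.items()}
print(schedule)  # {7: 20.7, 22: 17.3}
```

A real device would fold in outdoor temperature, occupancy sensing, and energy prices, but the premise is the same: past behavior becomes the training data for future automatic decisions.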

Will Artificial Intelligence Augment Human Workers or Displace Jobs? (Onuroa, 2022)

New Mechanisms of Class Differences Caused by AI

According to Acemoglu and Restrepo (2019), AI systems are primarily responsible for the emergence of new class gaps. These discrepancies reflect the concentration of technological capital and data ownership, the construction of digital divides and information silos, and the role of AI technology in exacerbating pre-existing socio-economic structural imbalances. Litvinenko (2020) describes data as the new oil: the ability to handle and analyze it is now a decisive advantage. Technological capital and data ownership are increasingly concentrated in large tech organizations and startups as they continue to innovate and build AI technologies that can gather, analyse, and use big data (Sadowski, 2019). This centralization of data has given rise to a new class of so-called “data barons,” a phenomenon that not only highlights privileged access to, control over, and utilization of data but also exacerbates economic inequality.

Moreover, the emergence of artificial intelligence deepens the data divide. Van Dijk (2017) argues that while disparities in access to technology exist, what matters more is the capacity to utilize and comprehend it. Those with limited resources and skills therefore tend to fall behind those who have mastered emerging technologies and possess the requisite knowledge and in-depth understanding. The disparity between urban and rural children’s access to technology tools and programming classes illustrates this division. Meanwhile, West’s (2018) analysis of the information silo problem shows how confining information within particular communities or platforms deepens social divides and impedes the unrestricted flow of knowledge. In an online forum where everyone shares a similar background, discussion shows little diversity; this restricts the variety of viewpoints exchanged and illustrates the phenomenon of information bubbles. The mechanisms of the digital divide and information silos are therefore also foundations for the emergence of a new class.

Illustration by Jaron Strom

Besides, Melnyk et al. (2019) argue that the rapid development of AI is accelerating the transformation of the industrial framework and has a profound impact on socioeconomic structures and the labor market. The technologically skilled gain a competitive advantage, further exacerbating economic inequality. The advancement and application of artificial intelligence have thus significantly affected social dynamics, giving rise to new social classes and gaps.

Ethical Issues of Social Inequality Caused by AI

AI can not only exacerbate social class division but also poses ethical questions regarding societal inequity. With the broad use of AI technology across industries, from automated manufacturing lines to algorithm-driven decision-making processes, a new kind of societal distinction is emerging. This disparity rests on control of data as much as on access to financial resources and technological advancement. Algorithms may entrench prejudice and historical inequality, raising a range of ethical and societal problems.

AI is a field of study that combines computer science and theory to construct computers or systems that can mimic intelligent human behavior (Crawford, 2021). A fundamental feature of these systems is algorithmic decision-making, which uses algorithms to interpret data, identify patterns, and make choices or offer advice (Just & Latzer, 2017). Because algorithms typically use historical data to forecast future behavior or preferences, biases and inequities that exist in the real world may be reflected in and strengthened by algorithmic decision-making. For instance, several nations and regions have begun to employ AI-driven algorithms to evaluate offenders’ likelihood of recidivism to assist courts in rendering more “objective” choices about bail and punishment. The New South Wales Correctional Service in Australia employs the LSI-R statistical assessment tool to estimate the likelihood that a prisoner will commit another crime; in the UK, OASys forms part of the pre-sentence data that judges use when determining bail, parole, and sentencing; in the United States, the equivalent tool is COMPAS. These computational instruments of the legal system give rise to the “bias in, bias out” dilemma, and they may at times function as racialized instruments of enforcement. Serious ethical concerns arise when recidivism risk is predicted from such data, because data produced by racially biased enforcement can skew projections, so that a greater percentage of minorities than non-minorities are assigned a high-risk rating. According to Hamilton and Ugwudike’s report (2023), the US COMPAS tool tended to predict that white defendants would not re-offend when in reality they did, while predicting that Black defendants would re-offend when in reality they did not.
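The “bias in, bias out” dynamic can be made concrete with a toy simulation. Every number below is invented for illustration and has no relation to the real COMPAS model, whose internals are proprietary: two groups have the same true reoffense rate, but biased record-keeping inflates one group’s recorded rate, so any risk score built from the records will treat that group as higher-risk.

```python
import random

random.seed(0)

def make_person(group):
    # True underlying behavior is identical across groups: 30% reoffend.
    true_reoffend = random.random() < 0.3
    # Biased enforcement (hypothetical): group B's offenses are always
    # recorded, and 20% of its non-reoffenders are wrongly recorded as
    # reoffenders; group A's offenses are recorded only 90% of the time.
    if group == "B":
        recorded = true_reoffend or random.random() < 0.2
    else:
        recorded = true_reoffend and random.random() < 0.9
    return {"group": group, "true": true_reoffend, "recorded": recorded}

people = [make_person(g) for g in ("A", "B") for _ in range(5000)]

# A naive "risk score" trained on the records: each group's recorded rate.
rate = {}
for g in ("A", "B"):
    members = [p for p in people if p["group"] == g]
    rate[g] = sum(p["recorded"] for p in members) / len(members)

true_rate = {g: sum(p["true"] for p in people if p["group"] == g) / 5000
             for g in ("A", "B")}

print(rate)       # group B's recorded rate (~0.44) far exceeds A's (~0.27)
print(true_rate)  # yet the true rates are nearly identical (~0.30 each)
```

The model never sees race or group directly; the bias enters entirely through the labels, which is why auditing the training data matters as much as auditing the algorithm.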

Additionally, algorithms raise ethical concerns about discrimination in the hiring process. Even though the use of artificial intelligence in recruiting has the potential to boost productivity, decrease transactional labor, and improve recruitment quality, its actual implementation has revealed serious issues of discriminatory prejudice (Chen, 2023). Money Bank, a British financial institution, uses the AI system GetBestTalent. Davies (2023) showed that three excellent applicants with the skills and background listed for the role were turned down during the hiring process, among them a Black woman and a senior applicant aged 61. Money Bank stated that the system’s automated résumé screening is intended to rest on an impartial evaluation of applicants’ abilities, background, and credentials. It turns out, however, that its decision-making is based on more than just these objective criteria.
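One widely used check for this kind of disparity is the “four-fifths rule” from US employment-discrimination practice: if one group’s selection rate falls below 80% of the most-favored group’s, the screener warrants investigation. The figures in this sketch are hypothetical and are not taken from the Davies case study:

```python
# Hypothetical screening outcomes from an automated resume screener,
# recorded as (group, passed_screening).
decisions = (
    [("group_1", True)] * 60 + [("group_1", False)] * 40 +
    [("group_2", True)] * 30 + [("group_2", False)] * 70
)

def selection_rate(group):
    passed = sum(1 for g, ok in decisions if g == group and ok)
    total = sum(1 for g, ok in decisions if g == group)
    return passed / total

# Adverse-impact ratio: the disadvantaged group's selection rate
# divided by the favored group's.
ratio = selection_rate("group_2") / selection_rate("group_1")
print(f"adverse-impact ratio: {ratio:.2f}")  # adverse-impact ratio: 0.50
if ratio < 0.8:
    print("below the four-fifths threshold: audit the screener for bias")
```

Such an outcome audit needs no access to the model’s internals, which makes it a practical first step even when a vendor’s system is a black box.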

We are seeing a new kind of social distinction driven by data control rights, technological acquisition skills, and financial resources. As a significant advancement in computer science, AI and algorithmic decision-making process data and identify patterns in order to make judgments or offer suggestions, and they are meant to promote justice and efficiency. But they frequently rest on past data that mirrors real-world prejudices and disparities, creating a “bias in, bias out” problem that shows up in the legal system as well as in hiring practices, escalating social inequality and class distinction.

5 Pillars of Responsible Generative AI: A Code of Ethics for the Future by Douglas

Principles and Framework of Governance

Given the ethical concerns surrounding AI’s exacerbation of social class divisions and inequality, all facets of society must take proactive steps to guarantee that technological advancement fosters not only economic growth but also social fairness and inclusion. This means that everyone, including legislators, tech developers, and ordinary people, must actively promote technological ethics and social responsibility.

Government regulation: Governments and regulatory bodies should establish a strong policy framework to direct and oversee the development and application of artificial intelligence, so that social justice is continuously monitored as technical advances are made. Governments and authorities also need to actively disseminate information on networks, underlying technologies, and artificial intelligence to close the digital gap. Raising public awareness will not only reach a wider audience but also increase society’s adaptability and resilience to the rapid changes caused by AI. It is therefore particularly important to ensure that discussions about technological advancement remain consistent with the values and needs of the public.

Transparency, Responsibility and Explainability: Technology businesses and engineers should have a moral commitment to make the decision-making processes inside AI systems transparent and to explain how algorithmic outcomes are produced. This work encompasses all audiences and regulators and goes beyond the confines of the technology sector. Tech companies and AI experts should therefore develop precise, pertinent policies and benchmarks to identify and address any biases or inequities built into these systems. To determine who is responsible for mistakes or disputes in AI systems, and to address them promptly when they arise, it is equally vital to establish transparent accountability frameworks.

Cooperation: Transnational cooperation and exchange play an irreplaceable role in narrowing class divides in the era of artificial intelligence. In the face of the multifaceted challenges posed by AI, a consistent global strategy is not an option but a necessity. Such collaboration goes beyond the exchange of professional information to include in-depth exchanges on policy, ethics, and social implications. By integrating information and assets across sectors, AI can guide society toward greater fairness and resilience. This approach emphasizes not advancing technology for its own sake, but using it as a tool for social development, ensuring that benefits are distributed equitably across the globe rather than accruing to particular groups. Nonetheless, this governance principle has shortcomings. Artificial intelligence and high technology have become symbols of national strength, and some countries or regions are excluded from global technological exchange and cooperation by technology monopolies and policy restrictions. It is therefore crucial to establish a more open and inclusive international cooperation mechanism and to encourage technology sharing and knowledge dissemination.


AI technology is evolving quickly, changing human society and bringing convenience and opportunity. At the same time, it is driving mid-career displacement, aggravating social class differences, widening the wealth and education divides, and posing increasingly pressing moral and ethical questions. Addressing these societal issues requires an equitable and transparent regulatory framework that brings together stakeholders from the public, private, academic, and nonprofit sectors. To guarantee that modern technological advancements benefit all of humankind and contribute to the ongoing development and prosperity of the global community, we must collaborate to create an open, reasonable, and inclusive AI ecosystem. We believe that in the future, AI will give people access to ground-breaking advancements in public safety, healthcare, education, environmental protection, and every other field, democratizing a high standard of living that was previously available only to a select few. It will no longer be just for the wealthy.


Acemoglu, D., & Restrepo, P. (2019). Artificial intelligence, automation, and work. In The Economics of Artificial Intelligence: An Agenda (pp. 197-236). University of Chicago Press.

Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. WW Norton & Company.

Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, 1-21.

Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10(1), 1-12.

Davies, J. (2023). Discrimination and bias in AI recruitment: a case study. Jdupra.

Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254-280.

Grewal, D. S. (2014). A critical conceptual analysis of definitions of artificial intelligence as applicable to computer engineering. IOSR Journal of Computer Engineering, 16(2), 9-13.

Just, N., & Latzer, M. (2017). Governance by algorithms: reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238-258.

Hamilton, M., & Ugwudike, P. (2023, July 26). A ‘black box’ AI system has been influencing criminal justice decisions for over two decades – it’s time to open it up. The Conversation.

Litvinenko, V. S. (2020). Digital economy as a factor in the technological development of the mineral sector. Natural Resources Research, 29(3), 1521-1541.

Melnyk, L., Kubatko, O., Dehtyarova, I., Matsenko, O., & Rozhko, O. (2019). The effect of industrial revolutions on the transformation of social and economic systems. Problems and Perspectives in Management, 17(4), 381.

Sadowski, J. (2019). When data is capital: Datafication, accumulation, and extraction. Big Data & Society, 6(1).

Smith, A., & Anderson, J. (2014). AI, Robotics, and the Future of Jobs. Pew Research Center: Internet, Science & Tech.

Van Dijk, J. A. (2017). Digital divide: Impact of access. The International Encyclopedia of Media Effects, 1-11.

West, D. M. (2018). The future of work: Robots, AI, and automation. Brookings Institution Press.

Yin, M., Jiang, S., & Niu, X. (2024). Can AI really help? The double-edged sword effect of AI assistant on employees’ innovation behavior. Computers in Human Behavior, 150, 107987.
