Artificial intelligence (AI) algorithms are transforming many aspects of society, from the economy and politics to culture and education. AI has the potential to revolutionize industries and enhance people’s lives, but it also poses significant challenges that need to be addressed. The interplay between AI algorithms and society is complex and multi-faceted, a double-edged sword: AI algorithms offer increased efficiency and flexibility in the workplace and new opportunities for innovation and creativity, yet they can also exacerbate existing inequalities, perpetuate biases, and create new social, ethical, and legal dilemmas. Understanding this interplay is crucial for ensuring that AI is deployed in ways that align with societal values, ethical principles, and human well-being.
The Impact of Artificial Intelligence on Society
Bias and inequality in artificial intelligence algorithms are changing society both positively and negatively. One area where AI is having a significant effect is employment. AI algorithms are automating many jobs: repetitive or routine tasks are increasingly performed by AI, which is partly responsible for the rise of the gig economy of flexible, short-term work (Andrejevic, 2019). Uber, for example, uses a digital platform to connect users with providers of transportation.
Workers in the context of a gig economy have the autonomy to choose the jobs they want, which appeals especially to workers who value freedom and independence. — Mark Andrejevic
However, Andrejevic (2019) argues that automation in these jobs exacerbates existing inequalities and creates new forms of instability. Andrejevic (2019) cites Uber’s use of algorithms to match workers in a gig economy where workers have no job security, no labor protections, and uncertain income. Temporarily hired workers are vulnerable to sudden changes in demand or to replacement by automated systems. Artificial intelligence maximizes efficiency and profits but neglects workers’ welfare, exacerbating inequalities in the labor market; the absence of benefits such as health insurance and retirement plans is likely to affect low-income and marginalized workers disproportionately.
The bias of artificial intelligence and inequality exist in contemporary society. — Kate Crawford
Structural inequalities are becoming more visible in society with the rise of algorithms. Standard AI algorithms are often biased against certain groups, such as women or people of colour. Noble (2018) points out that algorithms are not neutral: for example, a search for ‘black girl’ produces sexualized results, while a search for ‘white girl’ yields more innocent ones. The controversy over racial discrimination has been raised repeatedly; in June 2020, Microsoft and Amazon stopped providing facial recognition systems to the police and suspended sales of their facial recognition technology to law enforcement. In 2021, Uber in London was exposed for discrimination against people of colour in its face recognition system.
In March 2020, Uber introduced a face recognition system based on Microsoft’s Face API (ADCU, 2021). If the photo stored in the system did not match the one uploaded in real time, the driver faced dismissal and the suspension of the driver’s vehicle licence by Transport for London (ADCU, 2021). Uber’s maintenance of proper systems, processes and procedures was a condition of the renewal of Uber’s London licence by the Westminster Magistrates’ Court in September 2020, which facilitated the introduction of this harmful facial recognition system; ADCU took Uber back to court over it in 2021. Meanwhile, as the “Black Lives Matter” protests of June 2020 brought racial discrimination to the forefront again, IBM announced that same month that it was exiting the face recognition business outright.
The Impact of Society on Artificial Intelligence
The public creates and uses artificial intelligence, and AI relies on data to learn and make predictions. Noble (2018) states that algorithms reflect the biases and values of those who create and control them. For example, as Suzor (2019) notes, the creators of the rules and algorithms of most social platforms are almost always well-educated white males located in Silicon Valley. Krishnapriya et al. (2020) report that a dataset consisting predominantly of white faces cannot accurately identify the faces of people with darker skin tones. In hiring, if an AI system learns only from past hires that reflect racial, gender or other inequalities, it will automatically reproduce these biases in future employment decisions, exacerbating inequalities (Tambe et al., 2019). For example, if historical data show that a company has mostly hired male engineers, the hiring algorithm may tend to recommend male engineers and overlook other under-represented groups. The AI system previously developed by Amazon to screen hiring candidates was gender-biased for exactly this reason: because it saw more records of male employees in past hiring data, it automatically favoured male candidates and passed over more qualified female candidates (Reuters, 2018). Amazon therefore abandoned the system.
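To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. The records and the scoring rule are invented for illustration (this is not Amazon’s actual system): a naive “model” that simply replays historical hire rates will score equally qualified candidates differently by gender, because the skew lives in the data it learned from.

```python
# Hypothetical historical hiring records: (gender, hired?)
# The data are deliberately skewed: most past hires were men.
history = [
    ("male", True), ("male", True), ("male", True), ("male", True),
    ("male", False),
    ("female", True), ("female", False), ("female", False),
    ("female", False), ("female", False),
]

def hire_rate(records, gender):
    """Fraction of past applicants of this gender who were hired."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

def naive_score(gender):
    """A naive 'model' that replays the historical rate as its score."""
    return hire_rate(history, gender)

print(naive_score("male"))    # 0.8
print(naive_score("female"))  # 0.2 -- the bias in the data resurfaces
```

Any model trained only on such records inherits the skew; using more balanced data, or auditing scores per group, is what the cited work argues for.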
The second influence runs through regulation: governments and other organisations can enact regulations and policies to govern the development and use of AI (Roberts et al., 2021). Roberts et al. (2021) note that governments regulate the development and use of AI technologies through legislation and regulators to ensure compliance with societal values and ethics. Such regulations can, among other things, protect data privacy.
Governments are also working to ensure the algorithmic fairness of AI technology through regulation, to avoid unfair outcomes caused by algorithmic bias or discrimination. The EU’s General Data Protection Regulation (GDPR) sets out the rights of data subjects, liability obligations for data protection, and penalties for data breaches, among other provisions. Regulation also addresses the transparency and interpretability of algorithms: the GDPR gives data subjects the right to an explanation of algorithmic decisions that concern them, and the California Consumer Privacy Act (CCPA) in the US requires companies to disclose basic information about the algorithmic decision-making they use.
Their Mutual Influence
There is an interdependent relationship between artificial intelligence and society. On the one hand, the development and application of AI algorithms profoundly impact all aspects of society, including the economy, politics, culture and education (Lawless & Sofge, 2017).
On the other hand, Lawless and Sofge (2017) note that needs and feedback from society also contribute to the continuous development and optimisation of AI algorithms. For example, in economic applications, AI algorithms have begun to be widely used in finance, healthcare, insurance and other fields (Ghimire et al., 2020). AI algorithms can rapidly process large amounts of data, automate decisions and optimise operations, thereby improving the efficiency and competitiveness of companies. They can be applied to risk management, stock trading and customer service: for example, AI algorithms can analyse vast amounts of financial data and market trends to predict stock price movements and help investors make trading decisions (Lawless & Sofge, 2017). In healthcare, AI algorithms can be used for disease diagnosis, drug development and the optimisation of healthcare resources; for instance, they can analyse medical images and patient data to help doctors diagnose diseases and develop treatment plans quickly and accurately. Eling et al. (2021) point out that AI algorithms can be used in risk assessment and claims processing, analysing insurance claims data and history to automate claims applications and predict future insurance risks. Applying AI algorithms can help companies increase efficiency, reduce costs and improve service quality (Eling et al., 2021), thereby enhancing competitiveness. At the same time, Roberts et al. (2021) point out that attention must be paid to algorithms’ possible limitations and risks, such as data bias and privacy breaches. Therefore, when applying AI algorithms in the economic field, care must be taken to balance their advantages against their potential risks in order to achieve mutual economic and social development.
Even systems that perform well during training can produce poor predictions when faced with new information from the outside world. — Kate Crawford
While the use of AI algorithms in politics has many potential advantages, there are also potential risks. As Elish and Boyd (2018) note, AI algorithms are becoming increasingly common in the political sphere, for example in election prediction and public opinion analysis. In election prediction, AI algorithms can analyse large amounts of voter and historical election data to predict candidates’ electability and likelihood of winning (Elish & Boyd, 2018). This can help politicians and campaign teams understand voter needs and attitudes and develop effective election strategies. In public opinion analysis, AI algorithms can automatically analyse large amounts of information, such as social media and news reports (Thurman, 2018), to understand public opinion and sentiment and to respond promptly to emerging problems, thus maintaining political image and public trust. However, there is also a need to be aware of the potential risks AI algorithms carry, such as algorithmic bias and privacy breaches (Tambe et al., 2019). Algorithmic bias may lead to unfairness and favouritism in political decision-making, privacy breaches may trigger public discontent and panic, and insecure data may be hacked and misused. Therefore, when applying AI algorithms, care needs to be taken to balance their advantages and potential risks in order to protect the public’s interests and rights.
To Create a Better Environment
For AI on Society
- Establish laws and policies to ensure workers and employees receive reasonable job security and benefits, including health insurance and retirement plans. These benefits should apply to all workers, regardless of whether they work on a gig basis or a traditional full-time basis (Tambe et al., 2019).
- Increase public awareness and understanding of artificial intelligence algorithms and machine learning technologies and raise public awareness of the risks and responsibilities associated with these technologies, as well as the positive impacts they can have.
- Establish a code of ethics and standards for AI algorithms to ensure their impartiality, neutrality and transparency and to avoid bias and discrimination against certain groups (Crawford, 2021). The technology industry should be encouraged to participate in developing ethical codes, while penalties should be imposed on organisations that violate them.
For Society on AI
- Developers need to use more diverse datasets to avoid AI algorithms being biased against specific groups (Krishnapriya et al., 2020). This will ensure that the algorithm learns a broader range of characteristics and information and better represents society as a whole. Ethics and values must also be upheld in data collection and cleaning.
- Ensure that AI algorithms are fair and transparent by introducing independent algorithm review mechanisms to audit and regulate them. These mechanisms need to be able to check whether algorithms are biased or discriminatory and whether they follow ethics and values.
- Develop algorithm explanation tools to improve the transparency and interpretability of AI algorithms (Krishnapriya et al., 2020). These tools allow people to understand how algorithms make decisions and how they use data to do so, which could alleviate concerns about AI algorithms and increase trust in these technologies.
- Governments and companies should make their AI algorithms and decision-making processes public and promote the transparency and interpretability of algorithms, allowing the public to better understand and assess their benefits and potential risks (Roberts et al., 2021).
- Governments and businesses should protect citizens’ data privacy by strengthening data encryption and authorisation management to avoid privacy breaches and misuse (Ghimire et al., 2020). These stakeholders should also ensure that AI algorithms are not influenced by the biases of individuals or specific groups and that algorithms are regularly reviewed and monitored to ensure that they do not result in unjust outcomes.
- Governments and enterprises should take measures to ensure the security of algorithms (Tambe et al., 2019), such as strengthening the security of data and systems, regulating the use and storage of algorithms to avoid hacking and data leakage, and designing the application scenarios and functions of algorithms taking into account human values and ethical standards to avoid adverse effects of algorithms on humans.
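The transparency and interpretability points above can be sketched in code. The following hypothetical Python example (the feature weights, threshold, and candidate are invented for illustration) shows the simplest form of an “explainable” decision: a linear scoring model that reports each feature’s contribution alongside its verdict, so a reviewer can see why the model decided as it did.

```python
# Invented weights and threshold for a toy linear scoring model.
WEIGHTS = {"years_experience": 0.5, "test_score": 0.3, "referrals": 0.2}
THRESHOLD = 5.0

def decide_with_explanation(candidate):
    """Return a decision together with per-feature contributions."""
    contributions = {f: WEIGHTS[f] * candidate[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": score,
        "contributions": contributions,  # the 'explanation' of the decision
    }

result = decide_with_explanation(
    {"years_experience": 6, "test_score": 8, "referrals": 1}
)
print(result["approved"], result["score"])          # True 5.6
print(result["contributions"])                      # per-feature breakdown
```

Real systems are far more complex, but the principle is the same one the recommendations call for: every automated decision should be accompanied by an account of what drove it, so that audits can detect whether a protected attribute is doing the work.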
Still has a long way to go…
Artificial intelligence algorithms are transforming society in many ways, bringing positive and negative impacts. While AI algorithms can improve efficiency, competitiveness, and decision-making in various fields, they also exacerbate existing inequalities, create new risks and challenges, and raise ethical and regulatory issues. To ensure that AI algorithms are developed and used in a fair, transparent, and responsible manner, it is crucial to establish ethical codes, laws, and policies to protect workers, promote algorithmic fairness, privacy, and security, increase public awareness and understanding, and introduce independent review mechanisms and algorithm interpretation tools. By taking these steps, we can maximize the benefits of AI while minimizing its potential risks and negative impacts and building a better and more inclusive future for all.
ADCU. (n.d.). ADCU initiates legal action against Uber’s workplace use of racially discriminatory facial recognition systems. Retrieved from https://www.adcu.org.uk/news-posts/adcu-initiates-legal-action-against-ubers-workplace-use-of-racially-discriminatory-facial-recognition-systems#:~:text=ADCU%20initiates%20legal%20action%20against,system%20failed%20to%20identify%20them.
Andrejevic, M. (2019). Automated culture. In Automated Media. Routledge.
CBS News. (2020, June 10). How George Floyd’s death ignited a racial reckoning that shows no signs of slowing down. https://www.cbsnews.com/news/george-floyd-black-lives-matter-impact/
Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
Dwoskin, E. (2020, June 11). Microsoft bans police from using its facial recognition technology for the next year. The Washington Post. https://www.washingtonpost.com/technology/2020/06/11/microsoft-facial-recognition/
Eling, M., Nuessle, D., & Staubli, J. (2021). The impact of artificial intelligence along the insurance value chain and on the insurability of risks. The Geneva Papers on Risk and Insurance-Issues and Practice, 1-37. https://doi.org/10.1057/s41288-021-00213-5
Elish, M. C., & Boyd, D. (2018). Situating methods in the magic of Big Data and AI. Communication Monographs, 85(1), 57-80. https://doi.org/10.1080/03637751.2017.1394588
European Union. (2018). What is the GDPR?. Retrieved from https://www.consilium.europa.eu/en/policies/data-protection/data-protection-regulation/#:~:text=data%20protection%20rules-
Ghimire, A., Thapa, S., Jha, A. K., Adhikari, S., & Kumar, A. (2020). Accelerating business growth with big data and artificial intelligence. In 2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC) (pp. 441-448). IEEE. https://doi.org/10.1109/I-SMAC48042.2020.9243423
Krishnapriya, K. S., Albiero, V., Vangara, K., King, M. C., & Bowyer, K. W. (2020). Issues related to face recognition accuracy varying based on race and skin tone. IEEE Transactions on Technology and Society, 1(1), 8-20.
Lawless, W. F., & Sofge, D. A. (2017). Evaluations: Autonomy and artificial intelligence: A threat or savior?. Springer International Publishing. https://doi.org/10.1007/978-3-319-54360-8_18
Microsoft. (n.d.). Face API. Retrieved from https://azure.microsoft.com/en-gb/products/cognitive-services/face/
Noble, S. U. (2018). A society, searching. In Algorithms of oppression: How search engines reinforce racism. New York University Press.
Office of the Attorney General. (n.d.). CCPA overview. Retrieved from https://oag.ca.gov/privacy/ccpa
Reuters. (2018, October 11). Exclusive: Amazon scraps secret AI recruiting tool that showed bias against women. Retrieved from https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., & Floridi, L. (2021). The Chinese approach to artificial intelligence: An analysis of policy, ethics, and regulation. Ethics, Governance, and Policies in Artificial Intelligence, 47-79.
Suzor, N. P. (2019). Who makes the rules? In Lawless: The secret rules that govern our lives. Cambridge University Press.
Tambe, P., Cappelli, P., & Yakubovich, V. (2019). Artificial intelligence in human resources management: Challenges and a path forward. California Management Review, 61(4), 15-42.
Thurman, N. (2018). Social media, surveillance, and news work: On the apps promising journalists a “crystal ball”. Digital Journalism, 6(1), 76-97. https://doi.org/10.1080/21670811.2017.1308555
Uber. (n.d.). Downloading monthly account statements. Retrieved from https://help.uber.com/business/article/downloading-monthly-account-statements?nodeId=2472db37-d8b9-4df7-b404-2b591b0c5106
Vincent, J. (2020, June 10). Amazon bans police use of its facial recognition technology for a year. The Verge. https://www.theverge.com/2020/6/10/21287101/amazon-rekognition-facial-recognition-police-ban-one-year-ai-racial-bias
WEF. (2020, June 9). IBM’s CEO: George Floyd’s death shows why we need to fight racial injustice with technology. https://www.weforum.org/agenda/2020/06/ibm-facial-recognition-george-floyd#:~:text=IBM’s%20CEO%20Arvind%20Krishna%20said,transparency%20and%20accountability%20in%20policing.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.