Microsoft is leading the AI charge by accelerating its AI offerings, but is this a smart move? What should be considered when rolling out AI products at scale for global users? Notably, just before Microsoft Office received a full AI upgrade, the company fired the entire ethics and society team in its artificial intelligence division, leaving it without a dedicated team to ensure that AI principles are closely tied to product design.
Microsoft’s latest layoff plan
On March 14, Microsoft eliminated the ethics team within its AI division, a move the company described as part of its broader layoff plan (Bellan, 2023).
The team's purpose was reportedly to make AI development more responsible and aligned with ethical requirements. Microsoft has retained an Office of Responsible AI, which will develop rules and standards for AI. The company had previously reorganized the ethics team, keeping only some employees and transferring the rest to other positions.
Microsoft employees said the move leaves the company without a dedicated team responsible for tightly integrating AI principles with product design, even as the company leads the industry in AI development and in making AI tools mainstream.
Pushback from team members
The AI ethics and society team reached its largest size in 2020, when it had about 30 employees, including engineers, designers, and philosophers. In October, as part of a reorganization, the team was cut to about seven people (Vynck & Oremus, 2023). The cuts came even as the team was working to identify the risks posed by Microsoft's adoption of OpenAI technology across its suite of products.
At a team meeting after the reorganization, John Montgomery, vice president of AI, told employees that company leaders had given instructions to move products to market quickly. According to audio of the meeting obtained by the press, he said the pressure from Microsoft CTO Kevin Scott and CEO Satya Nadella was very high to get the latest OpenAI models, and their follow-on models, into customers' hands at a very fast pace (Mackie, 2023).
Because of this pressure to launch products, most members of the team were reassigned to other departments. Some members of the AI ethics team pushed back, arguing that the team had been deeply concerned with how the company's products affect society, and that those negative impacts are significant.
Google fires its own ethics expert
Timnit Gebru is regarded as one of the most talented researchers in the field of artificial intelligence ethics. She co-authored a paper with other researchers warning about the ethical dangers of the large language models that underpin Google's search engine and its business (Tiku, 2020).
Gebru warned that these models analyze a huge amount of textual material from the Internet, most of which comes from the Western world. The risk of this geographic bias is that racist, sexist, and offensive language on the Internet could enter Google's data and be systematically replicated. Google responded by asking Gebru to retract the paper, and she was fired when she refused (Tiku, 2020).
Other researchers have also identified and pointed out the risks associated with the uncontrolled evolution of artificial intelligence systems. Alexandros Kalousis, a professor of data mining and machine learning at the University of Applied Sciences in Western Switzerland, says AI is ubiquitous and advancing rapidly (Blondé & Kalousis, 2019). Yet developers of AI tools and models are often not really sure how they will behave once they are applied in complex real-world environments.
Why does AI need ethics?
One prevailing belief about the ethical and moral challenges posed by artificial intelligence (AI) is that the technology itself is not at fault, but rather the individuals who wield it. Murphy’s Law states that if something can go wrong, it will, and if there are multiple ways to carry out a task and one of them is likely to result in an error, then someone will inevitably choose that method. A survey of 1,010 tech professionals in the UK revealed that while 90% of respondents viewed technology as a force for good, 59% of those working in AI admitted to developing projects that could potentially harm society, and 18% resigned from their positions due to ethical concerns (Metz, 2021). It is worth noting that the ethical violations committed by AI are often the result of individual mistakes, as referenced in Murphy’s Law. However, such mistakes can be easily replicated by others and amplified by the power of the internet, leading to potentially incalculable social harm.
What is AI Ethics?
The field of AI ethics concerns the moral considerations surrounding the development and use of artificial intelligence. It involves the systematic examination of human morality and values, with a focus on regulating the development and implementation of AI technologies in a rational manner (Metz, 2021). It also addresses social issues that may arise in human-computer interaction. AI ethics encompasses a variety of areas, such as the acceptance of AI, fairness, and safety. Current research in the field includes topics such as algorithmic discrimination, data privacy, safety and responsibility, relationships between robots and humans, and the impact of AI on technological poverty.
What's the significance?
AI ethics research is becoming increasingly urgent due to the rapid development of AI technology. The topic of AI and ethics has gained attention as people are concerned about ensuring that AI does not pose a threat to humanity (Talagala, 2022).
As the field of artificial intelligence (AI) advances at an unprecedented pace, it is increasingly important that ethical considerations be taken into account in research and development related to this technology. AI has the potential to revolutionize the way we live our lives, from healthcare to transportation to public safety. However, without proper oversight, AI could be used in ways that have negative consequences for society.
To ensure that AI is developed and used in a responsible and ethical manner, it is crucial for individuals who use AI technology and those who are in charge of its development to establish a relevant moral and ethical perspective. This includes considering the impact of AI on human life and taking steps to mitigate any potential negative consequences.
Establishing ethical guidelines for the development and deployment of AI is important not only for ensuring that these technologies are used responsibly but also for building trust and confidence in AI among the public. By prioritizing ethics in the development and use of AI, we can ensure that these technologies are developed and deployed in a way that benefits society as a whole while minimizing any potential negative consequences.
The ethical boundary of AI development
When the moral boundary of AI development cannot be clearly drawn, guaranteeing that AI development will not cross it becomes a tricky issue. Professor Tao Xie of Peking University believes that moral constraints on machines can be described in natural language, but that industry practitioners need to make those natural-language constraints genuinely implemented and verifiable in AI development through technical means such as algorithm design (Jing, 2022).
For AI developers, a lack of professional ethics is at the root of many current social problems. Technology is a means that serves a specific purpose, and that purpose must be ethical. We therefore need to strengthen the ethical awareness of AI developers up front, rather than waiting for products to ship and then regulating them. AI developers should recognize that AI technology should serve a set of core human moral values, not just capital and power.
To prevent AI development from crossing ethical boundaries, it is essential to have external regulation in place that not only sets constraints but also provides guidance to companies on how to conduct AI technology development in a reasonable and ethical manner. This external regulation can be synergistic and beneficial to companies if it is done in a way that supports them in meeting basic requirements.
How do we ensure adherence to AI ethics?
Artificial intelligence, by reprogramming and intervening in the world, exercises a form of rule that is rarely recognized as politics. This dominance is driven by the big AI companies that control the marketplace on a massive scale and may even bring about another technological revolution in human life (Crawford, 2021).
So we must ask serious questions about how AI is produced: whose interests does it serve, who bears the greatest risk of harm, and where should its use be limited?
Attention to ethics first needs to begin at the design stage of AI systems. AI engineers and researchers should gain a deep understanding of ethical principles in related fields to ensure that human well-being is fully considered in the development process. At the same time, interdisciplinary cooperation is crucial, and experts such as ethicists, sociologists and psychologists should be involved in the design and evaluation of AI systems.
Second, to prevent data discrimination, we need to focus on data sources and processing. Data discrimination often stems from unfair data collection and processing processes. Therefore, AI systems need to use fair, diverse and unbiased data sources and adopt transparent and fair algorithms in the data processing stage. In addition, a third-party review mechanism should be established to regularly evaluate and adjust AI systems to ensure that their decisions are fair and non-discriminatory.
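As a concrete illustration of the kind of fairness review described above, the following minimal sketch computes the positive-outcome rate for each demographic group in a dataset and reports the gap between the best- and worst-treated groups (a simple "demographic parity" check). The records, group names, and threshold are hypothetical, chosen only for illustration; a real audit would use richer fairness metrics and real data.

```python
from collections import defaultdict

def demographic_parity_gap(records, group_key, outcome_key):
    """Return per-group positive-outcome rates and the largest gap between groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        if r[outcome_key]:
            positives[r[group_key]] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical loan-decision records, for illustration only.
records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

rates, gap = demographic_parity_gap(records, "group", "approved")
print(rates)                                  # approval rate per group
print(f"demographic parity gap: {gap:.2f}")   # large gap -> flag for review
```

A third-party reviewer could run a check like this periodically against production decisions and flag the system whenever the gap exceeds an agreed threshold.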
In addition, transparency and interpretability of AI systems are essential to ensure ethical decision-making. The “black box” effect of AI technologies often results in users being left in the dark about their decision-making processes. Therefore, research and development of explainable AI models is key to allow users to understand the rationale and basis of AI decisions in order to increase trust.
For developers working in the field of AI, professional ethics are crucial. Technology should serve a specific purpose, and strengthening the ethical awareness of AI developers helps prevent the risks AI technology can pose from the very beginning, rather than simply regulating a product after it has been released.
To achieve this, there are three important aspects that need to be considered at the algorithm level:
- interpretability and transparency;
- balance and non-discrimination in the data;
- dedicated, independent testing of the algorithm itself.
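To make the interpretability point above concrete, here is a minimal sketch of a transparent linear scoring model that breaks its decision into per-feature contributions, so a user can see the basis of the outcome rather than an opaque score. The feature names, weights, and threshold are hypothetical, chosen purely for illustration; production explainability typically relies on techniques such as SHAP or LIME applied to more complex models.

```python
def explain_linear_decision(weights, bias, features, threshold=0.0):
    """Score a linear model and report each feature's contribution to the score."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = bias + sum(contributions.values())
    decision = "approve" if score >= threshold else "decline"
    return score, decision, contributions

# Hypothetical weights: positive values push toward approval.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 4.0, "debt": 3.0, "years_employed": 2.0}

score, decision, contributions = explain_linear_decision(weights, -0.5, applicant)
for name, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"{name:15s} {c:+.2f}")
print(f"score={score:+.2f} -> {decision}")
```

Because every contribution is visible, a user who is declined can see which factor drove the outcome, and an auditor can verify that no prohibited attribute influences the score.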
It’s essential for AI developers to understand that AI technology should serve a set of core human ethical values, rather than solely being driven by capital and power. Finally, it is also important to foster public concern and understanding of AI ethics and morality. Through education and public discussion, we can raise public awareness of AI ethical issues and create a favorable environment for society to jointly monitor the ethical behavior of AI systems.
The widespread adoption of Artificial Intelligence in our daily lives highlights the need for people to educate themselves about its capabilities and limitations. Proper direction and use of AI can be ensured through awareness and knowledge. It is crucial to eliminate bias and discrimination in AI systems to prevent harm to individuals and society. AI must be trained to identify and address such issues. Furthermore, it is essential to establish regulations to govern the use and exploitation of AI, ensuring that it is developed ethically and for the betterment of humanity, and preventing any misuse that may pose a threat to humans.
Bellan, R. (2023, March 14). Microsoft lays off an ethical AI team as it doubles down on OpenAI. Retrieved from https://techcrunch.com/2023/03/13/microsoft-lays-off-an-ethical-ai-team-as-it-doubles-down-on-openai/
Vynck, G. D., & Oremus, W. (2023, March 30). As AI booms, tech firms are laying off their ethicists. Retrieved from https://www.washingtonpost.com/technology/2023/03/30/tech-companies-cut-ai-ethics/
Mackie. (2023, March 14). AI Ethics Team Gutted at Microsoft. Redmond Channel Partner. Retrieved April 27, 2023, from https://rcpmag.com/articles/2023/03/14/ai-ethics-team-gutted-at-microsoft.aspx
Metz. (2021, November 19). Can a Machine Learn Morality? Retrieved April 27, 2023, from https://www.nytimes.com/2021/11/19/technology/can-a-machine-learn-morality.html
Talagala, N. (2022, May 31). AI Ethics: What It Is And Why It Matters. Retrieved from https://www.forbes.com/sites/nishatalagala/2022/05/31/ai-ethics-what-it-is-and-why-it-matters/
Tiku, N. (2020, December 23). Google hired Timnit Gebru to be an outspoken critic of unethical AI. Then she was fired for it. Retrieved from https://www.washingtonpost.com/technology/2020/12/23/google-timnit-gebru-ai-ethics/
Blondé, L., & Kalousis, A. (2019, April 11). Sample-Efficient Imitation Learning via Generative Adversarial Nets. Retrieved from https://proceedings.mlr.press/v89/blonde19a.html
Jing. (2022, August 25). Where are the ethical boundaries of artificial intelligence development? Retrieved April 27, 2023, from https://www.ccf.org.cn/YOCSEF/Branches/Shenzhen/News/2019-10-24/669602.shtml
Crawford. (2021, August 16). Atlas of AI. Retrieved April 27, 2023, from https://yalebooks.yale.edu/9780300264630/atlas-of-ai