Have You Seriously Thought about What Kind of AI YOU Really Want to Use?: Human-Centered AI Design and Governance

“Smart technologies are unlikely to engender smart outcomes unless they are designed to promote smart adoption on the part of human end users. Successful applications of AI hinge on more than big data and powerful algorithms. Human-centred design is also crucial. AI applications must reflect realistic conceptions of user needs and human psychology”

– Deloitte: “Smarter together: Why artificial intelligence needs human-centred design”

In 2019, Google launched the People + AI Guidebook, aiming to assist UX professionals and product managers in adopting a human-centered approach to AI. Reading through Google’s guidelines, you may wonder: why is human-centered design so important for AI development?

Why AI Needs Human-Centered Design

Misconceptions about AI Design

With the growing popularity of AI research and development, many people believe that we have built advanced algorithms that let machines operate in a way that simulates the human brain, that AI is approaching the workings of the human mind, and that it may even replace human thinking. A recent report, however, argues that AI does not need to simulate the human brain; instead, it serves as a tool that helps humans think better and complete tasks. Over the past decade, the focus of AI development has been on improving computing power and scaling up datasets rather than on the nature of the algorithms themselves (Martinez et al., 2019). The Portable team notes that the current trend in AI is to build systems from large amounts of data: analyzing trends, identifying outliers, and constructing complex models. While these models are often highly accurate, they are not always that useful.

“If you don’t meet human needs, you’re just going to build a very powerful system to solve a very small – or probably non-existent – problem.”

– Josh Lovejoy, The UX of AI

In addition, many “big data” or AI projects fail to deliver in practice for a variety of reasons, such as overestimating the availability of data or failing to put processes in place to ensure that algorithm outputs meet the expected business outcomes. Furthermore, as Josh Lovejoy notes above, an important reason that many people overlook is that AI applications often lack human-centered considerations. By adopting a human-centered approach and taking the time to understand how users truly interact with our products and services, we can identify their needs and address them more appropriately.

Human-Centered AI: Design AI with Purpose

So what exactly is human-centered AI (HCAI) design?

Donald A. Norman, in his book The Design of Everyday Things, describes human-centered design (HCD) as an innovative approach to problem-solving that places human thinking, emotion, and behavior at the heart of design. The HCD approach always keeps end-users’ needs at the forefront and puts them at the center of the digital design process. Human-centered AI (HCAI) grew out of this philosophy; more specifically, it emphasizes that the development of AI should prioritize its impact on human society. The approach aims to embed human values, morals, concepts, and needs into intelligent systems so as to enhance human skills and experiences. The goal is to apply HCD concepts to the core design principles of AI, using AI to augment human capabilities rather than to replace humans (Geyer & Weisz, 2022).

A familiar example is Apple Health, which many people use today. Apple launched this smart health-management assistant in 2014 as a platform for real-time monitoring, centralized management, and integration of users’ health data, providing personalized health analysis and feedback based on each user’s status and goals. In the years since, new features such as sleep analysis, mental health tracking, and fitness data sharing have been introduced based on user experience and feedback. This HCAI approach helps people understand and improve their health in a more targeted way, enhancing quality of life and well-being. Another typical example: IBM partnered with fashion companies to develop AI systems that analyze fashion trends, customer preferences, and social media data, described as the fashion industry’s first AI project of its kind. This HCAI approach enables designers to create more personalized collections and services that respond to current trends, so that businesses can improve their overall performance and increase customer satisfaction.

Image 1. IBM Watson deployed a visual recognition tool that analyzed recurring colors and silhouettes among other features (IBM, 2018).

We can see from the two cases above that, in a narrow sense, the key measure of AI success is the extent to which the technology caters to the varied requirements of individuals. HCAI is created to meet the needs of a wide range of people as fully as possible, enhance the practicality of interactions, improve the human experience, and make AI systems more clearly designed around their users so as to achieve excellent personalization. It aims to increase individual satisfaction and trust, ultimately capturing the real value of AI and ensuring that it is not only successfully built but also successfully adopted and embraced by users.

How to Design HCAI

What’s the most annoying thing about going to the supermarket? Many people would answer: the checkout line. Waiting in line really is time-consuming, and who wouldn’t rather just grab their items and go? In 2018, Amazon opened Amazon Go, a checkout-free store, to the public. Unlike traditional supermarkets, Amazon Go uses technologies such as computer vision, deep learning, and sensor fusion to automatically identify customer movements, product locations, and product status. Customers can simply pick up the items they want and leave the store without queuing to check out; the bill is settled automatically and delivered to the customer’s smartphone on the way out. By eliminating the checkout queue, Amazon Go gives customers a better shopping experience.
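To make the interaction model concrete, here is a minimal sketch in Python of a “virtual cart” that sensor-driven events update. Everything here (the VirtualCart class, the event names, the prices) is a hypothetical illustration of the general checkout-free pattern, not Amazon’s actual system.

```python
from collections import defaultdict

# Illustrative sketch of a "virtual cart" for a checkout-free store.
# The sensor-fusion layer is reduced to simple "take"/"return" events.

class VirtualCart:
    def __init__(self, shopper_id: str):
        self.shopper_id = shopper_id
        self.items = defaultdict(int)  # product_id -> quantity

    def on_take(self, product_id: str) -> None:
        # Fired when vision and shelf sensors agree the shopper picked up an item.
        self.items[product_id] += 1

    def on_return(self, product_id: str) -> None:
        # Fired when the shopper puts an item back on the shelf.
        if self.items[product_id] > 0:
            self.items[product_id] -= 1

    def checkout(self, prices: dict[str, float]) -> float:
        # Called when the shopper exits the store; the total is charged to their account.
        return sum(prices[p] * q for p, q in self.items.items() if q > 0)

# Usage: a shopper grabs two coffees, returns one, takes a sandwich, and walks out.
cart = VirtualCart("shopper-42")
cart.on_take("coffee"); cart.on_take("coffee"); cart.on_return("coffee")
cart.on_take("sandwich")
print(cart.checkout({"coffee": 3.50, "sandwich": 6.00}))  # 9.5
```

The billing logic itself is simple; the real engineering challenge sits in the sensor-fusion layer that must produce those take/return events reliably for every shopper in the store.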

From this we can see that designing a truly user-centric AI product starts with understanding people’s needs, problems, preferences, pain points, and expectations. This involves ensuring that AI solves a real problem in a way that adds unique value, identifying and engaging with users and stakeholders, and carefully considering the motivations and goals of the design before incorporating AI into the product: does it actually improve the experience, or does it degrade it?

It is also essential to design AI systems that prioritize usability, translating user needs into data needs and optimizing for long-term public benefit (Korzeń, n.d.). In November 2022, the American AI research lab OpenAI released ChatGPT. ChatGPT has become the leading AI-powered chatbot among many tech offerings not only because it provides particularly novel uses of AI, but because it prioritizes usability and accessibility for the public (Korzeń, n.d.).

Image 2. ChatGPT’s simple interface (Korzeń, n.d.).

Design Governance of HCAI

Governance Based on a Human-Centered Concept

Why do we place particular emphasis on governance in AI design?

AI itself carries multiple uncertain risks, including ethical and moral issues, algorithmic bias and discrimination, and privacy violations (Müller, 2015). In 2018, Reuters reported that Amazon’s experimental hiring algorithm was biased against women: it had been trained on historical hiring data, which was dominated by men, and as a result it favored male applicants over female applicants (Dastin, 2018). In many cases, however, the cost of these risks is borne by society or governments rather than by the designers and beneficiaries of the technologies (Taeihagh, 2021).
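The mechanism behind this kind of bias is easy to demonstrate. The following is a minimal, purely illustrative sketch with synthetic data and hypothetical features (it has nothing to do with Amazon’s actual model): a classifier trained on historical decisions that were skewed toward men learns to reward that attribute even when candidates are equally skilled.

```python
# Illustrative only: how a model trained on skewed historical hiring
# decisions reproduces that skew. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Equally distributed skill, but historical hires favored men (gender=1).
skill = rng.normal(size=n)                     # genuinely job-relevant signal
gender = rng.integers(0, 2, size=n)            # proxy attribute: 1 = male, 0 = female
p_hire = 1 / (1 + np.exp(-(1.0 * skill + 3.0 * (gender - 0.5))))
hired = rng.random(n) < p_hire                 # past decisions encode the bias

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)
print("learned weights [skill, gender]:", model.coef_[0])

# Two candidates with identical skill get different scores purely due to gender.
same_skill = 0.5
scores = model.predict_proba([[same_skill, 1], [same_skill, 0]])[:, 1]
print("P(hire | male) = %.2f, P(hire | female) = %.2f" % tuple(scores))
```

The point is not the specific numbers but the pattern: the model has no notion of fairness, it simply reproduces the regularities in its training data, which is why governance has to intervene before and around deployment.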

It is important to incorporate a structured approach to addressing defects and risks during the design phase of AI. This includes providing proper oversight of the process, aligning system behavior with ethical standards and societal expectations, and preventing potential adverse effects. By doing so, we can ensure that HCAI operates on principles that respect human dignity and rights rather than violating them (Mucci & Stryker, 2023).

The global tech sector has recognized the importance of AI governance to some extent for a while now, but there is still a long way to go. In December 2023, the United Nations published Governing AI for Humanity, which calls for closer alignment of international norms with the way AI is developed and deployed. It aims to promote international collaboration on AI governance through seven key functions, including forward-looking risk assessment, support for international cooperation on data, computing power, and talent, strengthened transparency and accountability, and an equal voice for every country. In March 2024, MEPs approved the world’s first comprehensive framework for restraining the risks of AI; the law’s creators said it would make the technology more “human-centric” (McCallum et al., 2024). This series of measures shows that comprehensive, human-centered governance is an important force in keeping future AI development on a reasonable course.

“The AI act is not the end of the journey but the starting point for new governance built around technology.” 

– MEP Dragos Tudorache

Limitations of HCAI Governance

Remember the trolley problem, the moral and ethical thought experiment that has plagued people for so long? What happens when it plays out with self-driving cars? Going further, how should programs be designed when HCAI faces ethical dilemmas, and how do we govern them?

During the 13th IEEE-RAS International Conference on Humanoid Robots, Ronald Arkin, a roboticist from Georgia Tech, gave an impassioned speech. On the use of intelligent technology in weapons manufacturing, he raised an ethical question: “What happens if our robots are moral, but the opposing side’s robots are not?” In this era of rapid AI growth, countries see the technology as a powerful new arena of international competition. It is worth reflecting on whether the global AI governance system can act in concert to coordinate and resolve such tensions.

HCAI emerged in policy discourse as an attempt to adapt the concept of human-centered design (HCD) to AI. However, there has been insufficient reflection on how HCD must be reformed to fit AI’s new mission environment and the demands of public governance (Auernhammer, 2020). When HCD was introduced as part of a new design paradigm, its application faced similar criticism for failing to take into account the broader political, ethical, and legal issues that public administrations need to consider (Sigfrids et al., 2023).

When considering HCAI design, there are dilemmas and risks in incorporating human values, ethics, and morals into an AI program. When an AI system acts, like a human, as an autonomous decision-maker, how it will handle moral dilemmas must be considered at the very start of the design. For example, when a driverless car encounters a dangerous road situation, does it spare one person on the side of the road or the multiple people in the car? In the real world, drivers make their own choices according to the specific circumstances and social norms, but pre-designed choices for AI do not guarantee that harm is minimized.

Image 3. Different scenarios of a self-driving car’s behavior on the road (Oleksandr Odukha, 2018).
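To see why “pre-designed choices” are so uncomfortable, consider the deliberately naive, hypothetical decision rule sketched below (none of this corresponds to any real vehicle’s software): the rule collapses the entire dilemma into one number, expected casualties, and ignores the context and norms a human driver would weigh.

```python
# A naive, hypothetical rule for illustration only:
# "always choose the option with the fewest expected casualties."
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_casualties: float
    # Real situations carry context the rule ignores: the legality of each
    # party's position, the uncertainty of the estimates, duties to passengers...

def choose(options: list[Option]) -> Option:
    # The moral dilemma is reduced to comparing a single number.
    return min(options, key=lambda o: o.expected_casualties)

dilemma = [
    Option("swerve toward one pedestrian", 1.0),
    Option("stay in lane, risk three passengers", 0.9),  # noisy estimate
]
print(choose(dilemma).name)  # a 0.1 difference in a noisy estimate decides
```

A rule like this is deterministic and auditable, but a marginal difference in a noisy estimate fully determines the outcome, which is exactly the kind of choice many people feel should not be fixed in advance by a designer.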

More importantly, from the designer’s point of view, how can an AI machine, as an autonomous actor, make its own judgments? Current AI systems lack social awareness and moral responsibility; they can only choose and judge based on built-in algorithms and pre-programmed moral scenarios, and they cannot take responsibility for their own actions. The bottleneck in autonomous driving is therefore set not only by technology but also by moral and ethical issues. Similarly, the bottleneck for HCAI lies not only in design but, more often, in a social governance system that has not yet taken shape.

In fact, the problem is no longer limited to the “unmanned car”: machine intelligence now allows machines to work with little or no human intervention. Intelligent robots are beginning to take on tasks such as caring for the elderly, patients, and people with disabilities, as well as disaster rescue. As they interact more with humans, designers need to give machines human-like thinking and problem-solving capabilities through people-oriented design, ensuring that robots remain consistent and compatible with human values and norms. How to design and govern such systems so that they stay consistent and compatible with those values and norms has become a new problem and challenge that AI designers and social policymakers must face seriously.

Shift from “User”-Centered to “Human”-Centered

A noteworthy point for the general direction of future HCAI governance is that the traditional concept of human-centered design frames the “human” primarily as a “user”. This framing can reduce AI design thinking to a simplified version of the problem, because AI design is not only a multi-technical undertaking; it also involves ethical, social, psychological, economic, political, and legal dimensions and may have a profound impact on society (Lucivero, 2016). Treating people merely as users therefore risks isolating design from the broader, more complex political and social implications of AI. When considering design governance, the aim is not just to be user-centric but to shift towards a genuinely human-centered view. Sigfrids et al. (2023) define this human-centricity in AI governance as a combination of user-centric, community-centric, and society-centric perspectives. We need to design and govern the future of AI from a macro perspective that takes into account, in a holistic manner, the complexity of AI in all its global dimensions.

Image 4. Human-centricity in AI governance: A systemic approach (Leikas, 2023).

Conclusion

In the fast-paced world of AI, where technologies constantly emerge, compete, and become obsolete, it is crucial to integrate human-centered concepts into AI design. This will steer the creation of AI in a direction that benefits humanity rather than disrupting and replacing it, with a profound impact on future human progress. Furthermore, design governance should not only be user-centered but also, to a greater extent, community-centered, seeking from a broader perspective a way of managing AI that is consistent with the values and norms of human society. Although AI has seen explosive growth and driven huge profits in recent years, its governance remains an essential part that is easily overlooked. Such a governance system requires global cooperation to build clearer and better human-centered policies.

References

Amazon. (2023). Amazon Go. Amazon.com. https://www.amazon.com/b?ie=UTF8&node=16008589011

Apple Inc. (2024). iOS – Health. Apple (Australia). https://www.apple.com/au/ios/health/

Clemente, J. (2020, September 11). Bestseller and IBM Garage bring sustainable fashion forward with Fashion.ai. IBM. https://www.ibm.com/blog/bestseller-and-ibm-garage-bring-sustainable-fashion-forward-with-fashion-ai/

Dastin, J. (2018, October 11). Insight – Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/idUSKCN1MK0AG/

Lucivero, F. (2016). Ethical Assessments of Emerging Technologies: Appraising the Moral Plausibility of Technological Visions. Cham: Springer International Publishing.

Flew, T. (2021). Regulating Platforms. John Wiley & Sons.

Geyer, W., & Weisz, J. (2022, April 1). What is human-centered AI? IBM Research. https://research.ibm.com/blog/what-is-human-centered-ai

Google PAIR. (2019, May 8). People + AI Guidebook. Pair.withgoogle.com. https://pair.withgoogle.com/guidebook/

Korzeń, R. (n.d.). Designing for AI: 12 Expert Tips for Human-Centered Design. Soft Kraft. https://www.softkraft.co/designing-for-ai/

Lovejoy, J. (2018, January 25). The UX of AI. https://design.google/library/ux-ai

Martinez, D., Malyska, N., & Streilein, B. (2019). Artificial Intelligence: Short History, Present Developments, and Future Outlook. https://www.ll.mit.edu/sites/default/files/publication/doc/2021-03/Artificial%20Intelligence%20Short%20History%2C%20Present%20Developments%2C%20and%20Future%20Outlook%20-%20Final%20Report%20-%202021-03-16_0.pdf

McCallum, S., McMahon, L., & Singleton, T. (2024, March 13). MEPs approve world’s first comprehensive AI law. BBC News. https://www.bbc.com/news/technology-68546450

Müller, V. C. (2015). Risks of Artificial Intelligence. London: CRC Press.

Norman, D. (2013). The Design of Everyday Things. MIT Press. (Original work published 1988)

OpenAI. (2022, November 30). ChatGPT. OpenAI. https://chat.openai.com/

Portable. (n.d.). How human-centred design elevates AI. Portable.com.au. https://portable.com.au/articles/how-human-centred-design-elevates-ai

Sigfrids, A., Leikas, J., Salo-Pöntinen, H., & Koskimies, E. (2023). Human-centricity in AI governance: A systemic approach. Frontiers in Artificial Intelligence, 6. https://doi.org/10.3389/frai.2023.976887

Taeihagh, A. (2021). Governance of artificial intelligence. Policy and Society, 40(2), 137–157. https://academic.oup.com/policyandsociety/article/40/2/137/6509315?login=false

Mucci, T., & Stryker, C. (2023, November 28). What is AI governance? IBM. https://www.ibm.com/topics/ai-governance

United Nations. (2023). Governing AI for Humanity. https://www.un.org/sites/un2.un.org/files/ai_advisory_body_interim_report.pdf

Images

IBM. (2018). Women’s Wear Daily [Image]. https://cognitivefashion.github.io/portfolio/couture_wwd/

Leikas, J. (2023). Human-centricity in AI governance: A systemic approach [Image]. Medium. https://medium.com/@mindful_studio/human-centered-ai-the-ethics-of-designing-new-technology-984ea5333012

Korzeń, R. (n.d.). ChatGPT’s simple interface [Image]. Soft Kraft. https://www.softkraft.co/designing-for-ai/

Nadya_Art. (2023). Artificial Intelligence VS Human [Image]. Shutterstock. https://www.shutterstock.com/zh/image-vector/artificial-intelligence-vs-human-vector-illustration-2285090819

Odukha, O. (2018). Different scenarios of a self-driving car’s behavior on the road [Image]. Intellias. https://intellias.com/it-s-time-to-give-autonomous-cars-an-ethics-lesson/
