With trust in AI undermined by the algorithmic black box, can explainable AI make algorithms more transparent?

Hanyin Jiang

The science-fiction series Real Humans imagines humanity's future in the age of robots: people buy robots to do the housework and look after the children. Looking at the present, those "housework robots" can be seen as a microcosm of how algorithms have taken over everyday life. From the moment we get up, we rely on Twitter to browse entertainment news, take an Uber to get around, and shop for daily necessities on Amazon. What you want and what you care about are quietly decided by algorithms operating behind the scenes. Yet the algorithms deployed across every aspect of life remain in a "black box" state: they absorb data, analyze data and make recommendations according to a logic that people do not know.

While algorithms improve the efficiency of how society functions, they also bring hidden worries. Amazon's AI résumé-screening tool, for example, was criticized for gender bias, and the Pew Research Center has pointed out that algorithms are mainly written to improve efficiency and profitability, with little regard for their social impact or for individuals; humans are treated by the algorithms as "inputs" to a process rather than as thinking, feeling beings (Rainie and Anderson, 2017). And because platforms do not publish their algorithms, rules or data, it is difficult for those whose interests are harmed to gather evidence on their own behalf.

Therefore, as more and more industries, in systems such as healthcare, finance and job searching, use algorithms that affect people's lives, more and more people want to know how the algorithms are used, how the data is obtained and how accurately they function (Pasquale, 2015), and AI interpretability is a key factor in deepening this relationship. This paper looks at two aspects of the algorithmic black box, its principles and its harms, and at how explainable AI can address the black box problem.

What is an algorithmic black box

A black box generally refers to a system that can be understood only through its inputs and outputs, without knowledge of its internal workings. An algorithmic black box, then, is an algorithm whose internal workings and decision-making processes are hidden or opaque to the user or observer: it accepts data as input, performs operations on it and produces output, but the particular steps and logic used to generate that output are invisible or inaccessible (Yasar, 2023). Algorithmic black boxes are characterized by the opacity of their internal components; for example, the algorithm's source code, training data or model parameters cannot be inspected. They also lack transparency in their decisions, in that the reasoning and logic behind the output cannot easily be explained or understood by the user. And while each company's algorithm is different, every algorithm reflects its maker's purposes and values; Facebook, for instance, did not prominently recommend content about the Occupy Wall Street movement at the height of that protest.
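To make the idea concrete, here is a minimal sketch in Python of the black box pattern: a trained model answers queries about hypothetical loan applicants, but its output contains no human-readable reasoning. The scenario, feature names and scikit-learn model are illustrative assumptions, not a description of any real system.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical applicant data: columns are income, debt, years_employed (standardized).
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# An ensemble of hundreds of trees -- reasonably accurate, but opaque.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

applicant = np.array([[0.4, 1.2, -0.3]])
print(model.predict(applicant))        # e.g. [0] -> "rejected"
print(model.predict_proba(applicant))  # a probability, but no reason why
# Nothing in these outputs tells the applicant which feature drove the
# decision; that opacity is the "black box".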

The risks created by the algorithmic black box are mainly reflected in three aspects: first, the internal mechanism of the algorithm is obscure and difficult for ordinary users to understand; second, algorithms can be biased and discriminatory, leading to unfair outcomes; and finally, the algorithmic decision-making process lacks traceability, making it difficult to assign responsibility when something goes wrong.

Future directions for preventing the dangers of algorithmic black boxes: explainable AI

In the "black box" of AI decision-making, even developers often find it difficult to explain how an AI arrived at a particular decision. This has far-reaching implications for people's lives, because algorithms are increasingly used to make high-stakes predictions in critical areas such as healthcare, finance, the judiciary and education. Entrusting important decisions to a system that humans cannot interpret carries obvious dangers and compounds the black box problem. In 2020, for example, Ofqual, the regulator of A-level exams in the UK, was instructed by the government that, although students would not sit exams that year because of the pandemic, it still had to assign grades and ensure they were judged on the same standard as in previous years. Ofqual therefore used an algorithm to predict students' results, and roughly 40% of candidates received grades far lower than expected. Many universities withdrew places as a result, prompting student demonstrations and widespread social disruption (Harkness, 2021).

What is Explainable Artificial Intelligence

To address the algorithmic black box problem, Explainable Artificial Intelligence (XAI) has emerged as an important research area focused on improving the transparency of AI systems. Originally proposed by van Lent et al. to explain the behavior of AI-controlled entities in simulated game scenarios, XAI now encompasses a much broader range of topics. In practice, it uses a variety of methods and techniques to build machine learning models that can explain the rationale for their decisions. The overall goal of XAI is to give algorithms the ability to articulate the principles guiding a specific decision and the strengths or weaknesses associated with it. XAI can also help identify missing data sources in a model, providing insight into the areas of an AI algorithm that need improvement (Carvalho et al., 2019). Self-driving cars, for example, must process large amounts of real-time data and make decisions quickly; explainable models can help drivers understand the car's decision-making process and improve the safety of autonomous driving.
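As a simple illustration of one family of XAI techniques, models that are transparent by construction, the sketch below trains a shallow decision tree and prints its learned rules. The dataset and depth limit are arbitrary choices for demonstration only.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A model that is interpretable by construction: a shallow decision tree.
data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# export_text prints the learned if/then rules -- the rationale behind
# every prediction this model will ever make.
print(export_text(tree, feature_names=data.feature_names))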

How XAI is used in practice

Explainable Artificial Intelligence (XAI) is already playing an important role in highly sensitive domains such as finance, healthcare and law, where it opens up the black box of AI and enables stakeholders to understand, validate and challenge the decisions these systems make.

In the financial sector, algorithms are used for stock market analysis, credit ratings and real estate price forecasting, and XAI gives investors and analysts confidence by clearly explaining the data and patterns underlying these predictive models' decisions. PayPal, for example, uses algorithms to detect fraudulent transactions, analyzing millions of transactions in real time to identify suspicious activity (Hassija et al., 2023). By using XAI, PayPal can more easily understand why its models categorize specific transactions as fraudulent and can more easily review or modify those decisions if necessary.
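The sketch below shows, in broad strokes, how post-hoc explanation can work in a fraud-detection setting. It is not PayPal's system: the synthetic transactions, features and model are invented for illustration, and the open-source shap library is used to attribute one prediction to its input features.

import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
# Hypothetical transaction features: amount, hour_of_day, country_mismatch.
X = rng.normal(size=(1000, 3))
y = ((X[:, 0] > 1.0) & (X[:, 2] > 0)).astype(int)  # toy "fraud" label

clf = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes a single prediction to its input features:
# each SHAP value is that feature's contribution, relative to the model's
# average output, to flagging this particular transaction.
explainer = shap.TreeExplainer(clf)
print(explainer.shap_values(X[:1]))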

In healthcare, explainability is a matter of necessity rather than mere curiosity: XAI can help healthcare professionals trust and understand how AI systems make decisions, which is especially important for systems that diagnose disease and develop treatment plans (Hassija et al., 2023). Google's DeepMind has developed an AI model for ophthalmology that analyzes retinal scans to detect disease and provides the basis for its diagnosis, an approach that helps ophthalmologists explain the result to patients more clearly.

In the legal field, explainable AI is used for case analysis, legal advice and verdict prediction, and XAI enables legal professionals to understand and trust a system's recommendations by clearly explaining the basis of its decisions (Hassija et al., 2023). In 2022, for example, the Federal Court imposed a $44.7 million fine on the global hotel booking company Trivago following a lawsuit by the competition regulator, the ACCC, which alleged that Trivago had made misleading representations about hotel room rates on its website and in its television advertising. In fact, Trivago's algorithm decided which hotels to highlight largely on the basis of which online booking site paid it the highest per-click fee, and its recommendations misled consumers into believing they were getting great-value hotel deals when they were not. A key issue in the proceedings was how Trivago's complex ranking algorithm selected the top-ranked hotel room deals. The ACCC therefore called expert witnesses who, without any access to Trivago's internal system, used XAI methods to reconstruct how the ranking algorithm worked and to show that its behavior misled consumers (Snoswell et al., 2022).

The Need for Explainable Artificial Intelligence to Solve the Algorithmic Black Box Problem

The first reason is to prevent AI from falling into self-reinforcing loops and to increase transparency. AI systems sometimes rely on biased or outdated data, and algorithms caught in continuous feedback loops amplify the bias and discrimination in their own results, so breaking such loops is important for correcting algorithmic errors. With explainable AI, researchers can gain insight into why an algorithm is looping and into the rationale behind particular illogical decisions, and can thus discover the domains the algorithm does not understand well, which can then be enriched, for example by adding more diverse data sources, to improve its decision-making (Rijmenam, 2020).

The second reason is to protect privacy and data rights. Algorithmic systems require large amounts of data for training and decision-making, which raises concerns about individual privacy and data rights. Explainable AI can help detect and eliminate bias and discrimination in data and protect individual rights and social fairness by revealing how a model treats specific data features or specific groups of people (Hassija et al., 2023). XAI also helps users see which features matter most to an algorithm's predictions and how modifying that data would change its output, as sketched below. And as algorithms continue to grow in complexity, explainable AI provides transparent explanations of these models that help ensure they are used ethically and responsibly; for example, LinkedIn's team introduced the explanation tool CrystalCandle to support sales, gained customer trust by meeting customer needs, and increased its subscription revenue by 8% (Yang et al., 2021).
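The sketch below illustrates the idea just described: measuring which input features a model depends on by perturbing them and observing how its accuracy changes, using scikit-learn's permutation importance on a synthetic dataset. All names and data here are assumptions for demonstration only.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
X = rng.normal(size=(800, 4))                  # four hypothetical features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # only the first two matter

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and record how much accuracy drops:
# a large drop means the model's predictions depend heavily on it.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")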

Limitations of Explainable Artificial Intelligence

However, for now, explainability and prediction accuracy tend to pull in opposite directions. One of the main factors affecting both is the complexity of the machine learning algorithm: more complex algorithms tend to have higher accuracy but lower explainability, while simpler algorithms tend to have lower accuracy but higher explainability (Ndungula, 2022). With a large and diverse dataset, for example, you can train a more complex and accurate model, but you may lose insight into the underlying patterns and relationships in the data; with a small and simple dataset, you can train a less complex model and gain intuition about the data, but usually at the cost of accuracy. In practice, highly accurate AI models are often poorly explainable, and highly explainable models are often less accurate; research continues to move toward algorithms that score highly on both dimensions, but they do not yet exist.
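A rough way to see this trade-off is to train a small, readable model and a large, opaque one on the same data and compare their scores, as in the hypothetical sketch below. The exact numbers will vary from run to run; the example claims nothing beyond the general tendency described above.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data; the specific scores are illustrative only.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A depth-3 tree can be read as a handful of if/then rules...
simple = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
# ...while a 300-tree boosted ensemble is effectively opaque.
complex_model = GradientBoostingClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("shallow tree accuracy:     ", simple.score(X_te, y_te))
print("boosted ensemble accuracy: ", complex_model.score(X_te, y_te))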

Conclusion

Explainability is an important factor in ensuring the credibility, transparency and social acceptance of AI systems. By exposing the basis for decisions and the internal operating logic of a model, explainable AI can not only address the reliability and transparency problems of the algorithmic black box to a certain extent, but also help protect personal privacy and data rights and promote the wider social application of AI.


References:

Bagchi, Saurabh. “What Is a Black Box? A Computer Scientist Explains What It Means When the Inner Workings of AIs Are Hidden.” The Conversation, 22 May 2023, theconversation.com/what-is-a-black-box-a-computer-scientist-explains-what-it-means-when-the-inner-workings-of-ais-are-hidden-203888.

Carvalho, Diogo V., et al. “Machine Learning Interpretability: A Survey on Methods and Metrics.” Electronics, vol. 8, no. 8, 26 July 2019, p. 832, https://doi.org/10.3390/electronics8080832.

Harkness, Timandra. “A Level Results: Why Algorithms Aren’t Making the Grade.” Science Focus, 11 Jan. 2021, www.sciencefocus.com/future-technology/a-level-results-why-algorithms-arent-making-the-grade.

Ndungula, Samuel. “Model Accuracy and Interpretability.” Medium, 25 Nov. 2022, samuelndungula.medium.com/model-accuracy-and-interpretability-3d875439942c.

Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press, 2015.

Rainie, Lee, and Janna Anderson. “Code-Dependent: Pros and Cons of the Algorithm Age.” Pew Research Center: Internet, Science & Tech, 8 Feb. 2017, www.pewresearch.org/internet/2017/02/08/code-dependent-pros-and-cons-of-the-algorithm-age/.

Rijmenam, Mark van. “Algorithms Are Black Boxes, That Is Why We Need Explainable AI.” Medium, 6 Dec. 2020, markvanrijmenam.medium.com/algorithms-are-black-boxes-that-is-why-we-need-explainable-ai-72e8f9ea5438.

Shin, Donghee, and Yong Jin Park. “Role of Fairness, Accountability, and Transparency in Algorithmic Affordance.” Computers in Human Behavior, vol. 98, Sept. 2019, pp. 277–284, https://doi.org/10.1016/j.chb.2019.04.019. Accessed 18 Jan. 2020.

Snoswell, Aaron J., et al. “When Self-Driving Cars Crash, Who’s Responsible? Courts and Insurers Need to Know What’s inside the ‘Black Box.’” The Conversation, 24 May 2022, theconversation.com/when-self-driving-cars-crash-whos-responsible-courts-and-insurers-need-to-know-whats-inside-the-black-box-180334.

Hassija, Vikas, et al. “Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence.” Cognitive Computation, vol. 16, 24 Aug. 2023, https://doi.org/10.1007/s12559-023-10179-8.

X, Inspire. “Use Cases of Explainable AI (XAI) across Various Sectors.” Medium, 19 Nov. 2023, medium.com/@inspirexnewsletter/use-cases-of-explainable-ai-xai-across-various-sectors-ffa7d7fa1778. Accessed 14 Apr. 2024.

Yang, Jilei, et al. “CrystalCandle: A User-Facing Model Explainer for Narrative Explanations.” ArXiv (Cornell University), 1 Jan. 2021, https://doi.org/10.48550/arxiv.2105.12941. Accessed 14 Apr. 2024.

Yasar, Kinza. “What Is Black Box AI? – Definition from WhatIs.com.” WhatIs.com, Mar. 2023, www.techtarget.com/whatis/definition/black-box-AI.
