A methodological and theoretical framework for implementing explainable artificial intelligence (XAI) in business applications
Computers in Industry (IF 10.0), Pub Date: 2023-11-17, DOI: 10.1016/j.compind.2023.104044
Dieudonné Tchuente, Jerry Lonlac, Bernard Kamsu-Foguem

Artificial Intelligence (AI) is becoming fundamental in almost all activity sectors of our society. However, most modern AI techniques (e.g., Machine Learning, ML) have a black-box nature, which hinders their adoption by practitioners in many application fields. This issue has driven the recent emergence of a new research area in AI called explainable artificial intelligence (XAI), which aims to make AI-based decision-making processes and outcomes easy for humans to understand, interpret, and justify. Since 2018, research on XAI has grown exponentially, which has motivated several review studies. However, these reviews focus mainly on proposing taxonomies of XAI methods. Yet XAI is by nature a highly applied research field, and beyond the methods themselves, it is equally important to investigate how XAI is concretely used in industry and to derive best practices for better implementation and adoption. Studies on this latter point are lacking. To fill this research gap, we first propose a holistic review of business applications of XAI, following the Theory, Context, Characteristics, and Methodology (TCCM) protocol. Based on the findings of this review, we then propose a six-step methodological and theoretical framework that practitioners and stakeholders can follow to improve the implementation and adoption of XAI in their business applications. We particularly highlight the need to rely on domain and analytical theories to explain the whole analytical process, from the relevance of the business question to robustness checking and the validation of the explanations provided by XAI methods. Finally, we propose seven important avenues for future research.
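
To make the kind of output discussed above concrete, the sketch below shows what a post-hoc, per-prediction explanation from a common XAI method looks like. It is not the framework proposed in the paper; it is a minimal, hypothetical illustration that assumes the scikit-learn and shap Python packages, with a standard tabular dataset and model chosen only for demonstration.

```python
"""Minimal sketch (not the paper's framework): a post-hoc XAI explanation with SHAP.

Assumes the `shap` and `scikit-learn` packages; dataset and model are illustrative only.
"""
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a "black box" model on a standard tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer attributes each individual prediction to the input features (SHAP values).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])

# Depending on the shap version, the output is a list of per-class arrays or a
# (samples, features, classes) array; keep the contributions towards class 1.
if isinstance(shap_values, list):
    contrib = np.asarray(shap_values[1])
else:
    contrib = np.asarray(shap_values)
    if contrib.ndim == 3:
        contrib = contrib[:, :, 1]

# Show the five features that contribute most to the first prediction.
row = contrib[0]
for i in np.argsort(-np.abs(row))[:5]:
    print(f"{X.columns[i]:<25} SHAP value = {row[i]:+.4f}")
```

Such feature-level attributions are the raw material that the abstract's framework would then relate back to the business question, domain theory, and robustness checks.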



Updated: 2023-11-19