Evaluating explanations of artificial intelligence decisions: the explanation quality rubric and survey
- Authors: Young, Charlotte
- Date: 2022
- Type: Text, Thesis, PhD
- Full Text:
- Description: The use of Artificial Intelligence (AI) algorithms is growing rapidly (Vilone & Longo, 2020). With this comes an increasing demand for reliable, robust explanations of AI decisions. There is a pressing need for a way to evaluate their quality. This thesis examines these research questions: What would a rigorous, empirically justified, human-centred scheme for evaluating AI-decision explanations look like? How can a rigorous, empirically justified, human-centred scheme for evaluating AI-decision explanations be created? Can a rigorous, empirically justified, human-centred scheme for evaluating AI-decision explanations be used to improve explanations? Current Explainable Artificial Intelligence (XAI) research lacks an accepted, widely employed method for evaluating AI explanations. This thesis offers a method for creating a rigorous, empirically justified, human-centred scheme for evaluating AI-decision explanations. It uses this to create an evaluation methodology, the XQ Rubric and XQ Survey. The XQ Rubric and Survey are then employed to improve explanations of AI decisions. The thesis asks what constitutes a good explanation in the context of XAI. It provides: (1) a model of good explanation for use in XAI research; (2) a method of gathering non-expert evaluations of XAI explanations; and (3) an evaluation scheme for non-experts to employ in assessing XAI explanations (the XQ Rubric and XQ Survey). The thesis begins with a literature review, primarily an exploration of previous attempts to evaluate XAI explanations formally. This is followed by an account of the development and iterative refinement of a solution to the problem, the eXplanation Quality Rubric (XQ Rubric). A Design Science methodology was used to guide the XQ Rubric and XQ Survey development. The thesis limits itself to XAI explanations appropriate for non-experts. It proposes and tests an evaluation rubric and survey method that is both stable and robust: that is, readily usable and consistently reliable in a variety of XAI-explanation tasks.
- Description: Doctor of Philosophy
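To make the rubric-and-survey idea concrete, the following is a minimal sketch of how non-expert survey ratings could be aggregated against a quality rubric. The criterion names and the 1-5 scale are hypothetical placeholders; the actual XQ Rubric dimensions and XQ Survey items are defined in the thesis itself.

```python
# Minimal sketch: aggregating non-expert survey ratings against a quality rubric.
# The criteria below are hypothetical placeholders, not the XQ Rubric's dimensions.
from statistics import mean

HYPOTHETICAL_CRITERIA = ["clarity", "completeness", "relevance", "trustworthiness"]

def score_explanation(survey_responses):
    """survey_responses: list of dicts mapping criterion -> rating on a 1-5 scale."""
    per_criterion = {
        c: mean(r[c] for r in survey_responses) for c in HYPOTHETICAL_CRITERIA
    }
    overall = mean(per_criterion.values())
    return per_criterion, overall

# Example: three non-expert raters evaluating one AI-decision explanation.
responses = [
    {"clarity": 4, "completeness": 3, "relevance": 5, "trustworthiness": 4},
    {"clarity": 5, "completeness": 4, "relevance": 4, "trustworthiness": 3},
    {"clarity": 4, "completeness": 4, "relevance": 5, "trustworthiness": 4},
]
print(score_explanation(responses))
```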
Evaluating human-like explanations for robot actions in reinforcement learning scenarios
- Authors: Cruz, Francisco; Young, Charlotte; Dazeley, Richard; Vamplew, Peter
- Date: 2022
- Type: Text, Conference paper
- Relation: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022), Kyoto, Japan, 23-27 October 2022, Vol. 2022-October, pp. 894-901
- Full Text:
- Reviewed:
- Description: Explainable artificial intelligence is a research field that aims to provide more transparency for autonomous intelligent systems. Explainability has been used, particularly in reinforcement learning and robotic scenarios, to better understand the robot's decision-making process. Previous work, however, has largely focused on technical explanations that are better understood by AI practitioners than by non-expert end-users. In this work, we use human-like explanations built from the probability that an autonomous robot will succeed in reaching its goal after performing an action. These explanations are intended to be understood by people who have little or no experience with artificial intelligence methods. This paper presents a user trial studying whether explanations focused on an action's probability of success constitute suitable explanations for non-expert end-users. The results show that non-expert participants rate robot explanations based on the probability of success higher, and with less variance, than technical explanations generated from Q-values, and that they favor counterfactual explanations over standalone explanations. © 2022 IEEE.
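As a rough illustration of the contrast the paper draws, the sketch below generates a technical (Q-value) explanation, a human-like explanation based on an empirical probability of success, and a counterfactual comparison of two actions. The state and action names and the counter-based success estimator are illustrative assumptions, not the method used in the paper.

```python
import random
from collections import defaultdict

class SuccessProbabilityExplainer:
    """Sketch: contrasts Q-value explanations with probability-of-success explanations."""

    def __init__(self):
        self.q_values = defaultdict(float)   # (state, action) -> Q-value estimate
        self.attempts = defaultdict(int)     # (state, action) -> times the action was tried
        self.successes = defaultdict(int)    # (state, action) -> times the goal was reached

    def record(self, state, action, reached_goal, q_value):
        self.attempts[(state, action)] += 1
        if reached_goal:
            self.successes[(state, action)] += 1
        self.q_values[(state, action)] = q_value

    def probability_of_success(self, state, action):
        tried = self.attempts[(state, action)]
        return self.successes[(state, action)] / tried if tried else 0.0

    def technical_explanation(self, state, action):
        return (f"I chose '{action}' because its Q-value in state '{state}' "
                f"is {self.q_values[(state, action)]:.2f}.")

    def human_like_explanation(self, state, action):
        p = self.probability_of_success(state, action)
        return (f"I chose '{action}' because it gives me a {p:.0%} chance "
                f"of reaching the goal from here.")

    def counterfactual_explanation(self, state, chosen, alternative):
        p_chosen = self.probability_of_success(state, chosen)
        p_alt = self.probability_of_success(state, alternative)
        return (f"I chose '{chosen}' ({p_chosen:.0%} chance of success) "
                f"instead of '{alternative}' ({p_alt:.0%} chance).")


if __name__ == "__main__":
    explainer = SuccessProbabilityExplainer()
    # Fabricated interaction log, for illustration only.
    for _ in range(100):
        explainer.record("near_table", "move_forward", random.random() < 0.8, q_value=0.62)
        explainer.record("near_table", "turn_left", random.random() < 0.3, q_value=0.41)
    print(explainer.technical_explanation("near_table", "move_forward"))
    print(explainer.human_like_explanation("near_table", "move_forward"))
    print(explainer.counterfactual_explanation("near_table", "move_forward", "turn_left"))
```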