- Title
- Explainable robotic systems : understanding goal-driven actions in a reinforcement learning scenario
- Creator
- Cruz, Francisco; Dazeley, Richard; Vamplew, Peter; Moreira, Ithan
- Date
- 2023
- Type
- Text; Journal article
- Identifier
- http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/198012
- Identifier
- vital:18969
- Identifier
- https://doi.org/10.1007/s00521-021-06425-5
- Identifier
- ISSN:0941-0643 (ISSN)
- Abstract
- Robotic systems are increasingly present in our everyday society. In human–robot environments, it is crucial that end-users correctly understand their robotic team-partners in order to collaboratively complete a task. To increase action understanding, users demand more explainability about the decisions made by the robot in particular situations. Recently, explainable robotic systems have emerged as an alternative focused not only on completing a task satisfactorily, but also on justifying, in a human-like manner, the reasons that lead to a decision. In reinforcement learning scenarios, great effort has been devoted to providing explanations using data-driven approaches, particularly from the visual input modality in deep learning-based systems. In this work, we focus instead on the decision-making process of reinforcement learning agents performing a task in a robotic scenario. Experimental results are obtained using three different set-ups, namely a deterministic navigation task, a stochastic navigation task, and a continuous visual-based object-sorting task. To explain the goal-driven robot’s actions, we use the probability of success computed by three different proposed approaches: memory-based, learning-based, and introspection-based. These approaches differ in the amount of memory required to compute or estimate the probability of success, as well as in the kind of reinforcement learning representation in which they can be used. In this regard, we use the memory-based approach as a baseline, since it is obtained directly from the agent’s observations. When the learning-based and introspection-based approaches are compared to this baseline, both are found to be suitable alternatives for computing the probability of success, achieving high levels of similarity under both Pearson’s correlation and the mean squared error. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
- Publisher
- Springer Science and Business Media Deutschland GmbH
- Relation
- Neural Computing and Applications Vol. 35, no. 25 (2023), p. 18113-18130
- Rights
- All metadata describing materials held in, or linked to, the repository is freely available under a CC0 licence
- Rights
- Copyright © 2021, The Author(s)
- Rights
- Open Access
- Subject
- 4602 Artificial intelligence; 4603 Computer vision and multimedia computation; 4611 Machine learning; Explainable reinforcement learning; Explainable robotic systems; Goal-driven explanations
- Full Text
- Reviewed
File | Description | Size | Format
---|---|---|---
SOURCE1 | Published version | 4 MB | Adobe Acrobat PDF