Prediction using a symbolic based hybrid system
- Authors: Dazeley, Richard, Kang, Byeongho
- Date: 2008
- Type: Text, Conference paper
- Relation: Paper presented at the Pacific Rim Knowledge Acquisition Workshop 2008 (PKAW-08), Hanoi, Vietnam, 15th-16th December 2008
- Description: Knowledge Based Systems (KBS) are highly successful in classification and diagnostic situations; however, they are generally unable to identify specific values for prediction problems. When used for prediction, they either employ some form of uncertainty reasoning or fall back on classification-style inference in which each class represents a discrete predictive value. This paper applies a hybrid algorithm that allows an expert’s knowledge to be adapted to provide continuous values for prediction problems. The method is built on the established Multiple Classification Ripple-Down Rules (MCRDR) approach and is referred to as Rated MCRDR (RM); the method itself is published in a parallel paper in this workshop, titled Generalisation with Symbolic Knowledge in Online Classification. Results indicate a strong propensity to adapt quickly and provide accurate predictions.
- Description: 2003006510
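Since the abstract only gestures at how RM turns symbolic conclusions into a number, a minimal sketch may help. This is an illustrative reading, not the published implementation: RuleNode, mcrdr_classify and the per-class weights are all assumed names, and a simple learned linear weighting over the fired classifications stands in for whatever function approximator RM actually uses to produce its continuous rating.

```python
# Illustrative sketch only -- not the published Rated MCRDR implementation.
# An MCRDR knowledge base is a tree of rules; a rule whose condition fires,
# and which no firing exception (child) rule overrides, contributes a
# classification. A rating step then maps the set of fired classes to a
# continuous value; here a linear weighting stands in for that step.

class RuleNode:
    def __init__(self, condition, conclusion, children=None):
        self.condition = condition      # predicate over a case (dict)
        self.conclusion = conclusion    # symbolic class label
        self.children = children or [] # exception rules

def mcrdr_classify(node, case, fired):
    """Collect conclusions of the deepest firing rule along each path."""
    if node.condition(case):
        overridden = False
        for child in node.children:
            if child.condition(case):
                mcrdr_classify(child, case, fired)
                overridden = True
        if not overridden:
            fired.add(node.conclusion)

def rated_prediction(roots, case, weights, bias=0.0):
    """Map the symbolic classifications to a continuous value."""
    fired = set()
    for root in roots:
        mcrdr_classify(root, case, fired)
    return bias + sum(weights.get(c, 0.0) for c in fired)

# Toy usage: two hand-written rules and hypothetical learned weights.
kb = [RuleNode(lambda c: c["temp"] > 30, "hot",
               [RuleNode(lambda c: c["humidity"] > 0.8, "hot_humid")]),
      RuleNode(lambda c: c["wind"] > 20, "windy")]
weights = {"hot": 4.2, "hot_humid": 6.1, "windy": -1.5}
print(rated_prediction(kb, {"temp": 34, "humidity": 0.9, "wind": 5}, weights))
```

On this reading, the symbolic MCRDR tree is left intact and only the mapping from its conclusions to a value is adapted, which is consistent with the abstract's claim that the expert's knowledge is adapted rather than replaced.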
The viability of prudence analysis
- Authors: Dazeley, Richard, Kang, Byeongho
- Date: 2008
- Type: Text, Conference paper
- Relation: Paper presented at the Pacific Rim Knowledge Acquisition Workshop 2008 (PKAW-08), Hanoi, Vietnam, 15th-16th December 2008
- Description: Prudence analysis (PA) is a relatively new, practical and highly innovative approach to solving the problem of brittleness. PA is essentially an incremental validation approach, where each situation or case is presented to the KBS for inferencing and the result is subsequently validated. Instead of simply providing a conclusion, the system also provides a warning when the validation fails. This allows the user to check the solution and correct any deficiencies found in the knowledge base. A small number of potentially viable approaches to PA have been published that show a high degree of accuracy in identifying errors. However, none of them is perfect: on rare occasions a case is classified incorrectly and not identified by the PA system. Work on PA thus far has focussed on reducing the frequency of these missed warnings, but there have been no studies on the effect such misses have on the final knowledge base’s performance. This paper investigates how these errors in a knowledge base affect its ability to correctly classify cases. The results strongly indicate that the missed errors have a significantly smaller influence on the inferencing results than would be expected, which strongly supports the viability of PA.
- Description: 2003006508
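The incremental validate-and-warn loop the abstract describes can be sketched schematically. Everything below is a hypothetical stand-in rather than the paper's system: kbs_infer for the KBS inference step, prudence_check for whatever credibility measure the PA method applies, and expert_correct for the expert's repair of the knowledge base.

```python
# Schematic prudence-analysis loop -- hypothetical function names throughout.
# Each incoming case is inferred by the knowledge-based system, then the
# conclusion is validated; a failed validation raises a warning so the
# expert can inspect the case and patch the knowledge base.

def prudence_loop(cases, kbs_infer, prudence_check, expert_correct):
    missed = []  # wrong conclusions that drew no warning: PA's failure mode
    for case in cases:
        conclusion = kbs_infer(case)
        if not prudence_check(case, conclusion):
            # Warning raised: the expert reviews the case and, if the
            # conclusion is wrong, adds or refines rules.
            expert_correct(case, conclusion)
        elif conclusion != case.get("true_label"):
            # No warning, yet the conclusion was wrong: a missed error.
            missed.append(case)
    return missed
```

In this framing, the paper's question is how much the cases that accumulate in missed actually degrade the knowledge base's later classification accuracy; its finding is that they matter less than expected.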
Levels of explainable artificial intelligence for human-aligned conversational explanations
- Authors: Dazeley, Richard, Vamplew, Peter, Foale, Cameron, Young, Cameron, Aryal, Sunil, Cruz, Francisco
- Date: 2021
- Type: Text, Journal article
- Relation: Artificial Intelligence, Vol. 299 (2021)
- Description: Over the last few years there has been rapid research growth in eXplainable Artificial Intelligence (XAI) and the closely aligned field of Interpretable Machine Learning (IML). Drivers for this growth include recent legislative changes and increased investment by industry and governments, along with heightened concern from the general public. People are affected by autonomous decisions every day, and the public need to understand the decision-making process to accept the outcomes. However, the vast majority of XAI/IML applications focus on providing low-level ‘narrow’ explanations of how an individual decision was reached based on a particular datum. While important, such explanations rarely provide insight into an agent's beliefs and motivations; its hypotheses about other (human, animal or AI) agents' intentions; its interpretation of external cultural expectations; or the processes used to generate its own explanation. Yet all of these factors, we propose, are essential to providing the explanatory depth that people require to accept and trust the AI's decision-making. This paper aims to define levels of explanation and describe how they can be integrated to create a human-aligned conversational explanation system. In so doing, it surveys current approaches and discusses the integration of different technologies needed to achieve these levels with Broad eXplainable Artificial Intelligence (Broad-XAI), thereby moving towards high-level ‘strong’ explanations. © 2021 Elsevier B.V.
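As a loose illustration of what integrating levels of explanation into a conversational system might look like structurally, the sketch below composes several explanation generators into one reply. The level names merely paraphrase factors listed in the abstract (narrow decision-level explanation, beliefs and motivations, other agents' intentions); they are not the paper's actual taxonomy, and every function and field here is hypothetical.

```python
# Illustrative composition of explanation "levels" into one conversational
# answer. Level names paraphrase factors from the abstract; they are NOT
# the paper's taxonomy, and all functions are hypothetical stand-ins.

from typing import Callable, Dict

# Each generator maps (decision, context) -> a fragment of explanation text.
ExplanationLevel = Callable[[dict, dict], str]

def narrow_explanation(decision, context):
    return f"Feature '{decision['top_feature']}' most influenced this outcome."

def motivational_explanation(decision, context):
    return f"The agent acted to further its goal: {context['goal']}."

def social_explanation(decision, context):
    return f"It expected the other agent to {context['predicted_intent']}."

LEVELS: Dict[str, ExplanationLevel] = {
    "narrow": narrow_explanation,            # how this decision was reached
    "motivation": motivational_explanation,  # beliefs and goals
    "social": social_explanation,            # other agents' intentions
}

def converse(decision, context, question):
    """Draw on deeper levels when the user's question demands more depth."""
    order = ["narrow", "motivation", "social"] if "why" in question.lower() \
        else ["narrow"]
    return " ".join(LEVELS[level](decision, context) for level in order)

print(converse({"top_feature": "speed"},
               {"goal": "avoid collision", "predicted_intent": "brake"},
               "Why did you slow down?"))
```

The structural point this sketch tries to capture is the abstract's argument that a single low-level explanation is insufficient: deeper, differently sourced explanations must be selectable and combinable within one conversation.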