Detecting the knowledge boundary with prudence analysis
- Authors: Dazeley, Richard , Kang, Byeongho
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at 21st Australasian Joint Conference on Artificial Intelligence, Auckland, New Zealand : 1st-5th December 2008 p. 482-488
- Full Text: false
- Description: Prudence analysis (PA) is a relatively new, practical and highly innovative approach to solving the problem of brittleness in knowledge based systems (KBS). PA is essentially an online validation approach where, as each situation or case is presented to the KBS for inferencing, the result is simultaneously validated. This paper introduces a new approach to PA that analyses the structure of knowledge rather than comparing cases with archived situations. This new approach compares favourably against earlier PA systems, strongly indicating its viability.
- Description: 2003006511
An expert system methodology for SMEs and NPOs
- Authors: Dazeley, Richard
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at 11th Australian Conference on Knowledge Management and Intelligent Decision Support, ACKMIDS 2008, Ballarat, Victoria : 8th-10th December 2008
- Full Text:
- Description: Traditionally, Expert Systems (ES) require a full analysis of the business problem by a Knowledge Engineer (KE) to develop a solution. This inherently makes ES technology very expensive and beyond the affordability of the majority of Small and Medium sized Enterprises (SMEs) and Non-Profit Organisations (NPOs). Therefore, SMEs and NPOs tend to have access only to off-the-shelf solutions to generic problems, which rarely meet the full extent of an organisation’s requirements. One existing methodological stream of research, Ripple-Down Rules (RDR), goes some of the way towards being suitable for SMEs and NPOs, as it removes the need for a knowledge engineer. This group of methodologies provides an environment where a company can develop large knowledge based systems itself, specifically tailored to the company’s individual situation. These methods, however, require constant supervision by the expert during development, which is still a significant burden on the organisation. This paper discusses an extension to an RDR method, known as Rated MCRDR (RM), and a feature called prudence analysis. This enhanced methodology for ES development is particularly well suited to the development of ES in restricted environments such as SMEs and NPOs.
- Description: 2003006507
Online knowledge validation with prudence analysis in a document management application
- Authors: Dazeley, Richard , Park, Sung Sik , Kang, Byeongho
- Date: 2011
- Type: Text , Journal article
- Relation: Expert Systems with Applications Vol. , no. (2011), p.
- Full Text: false
- Reviewed:
- Description: Prudence analysis (PA) is a relatively new, practical and highly innovative approach to solving the problem of brittleness in knowledge based system (KBS) development. PA is essentially an online validation approach where, as each situation or case is presented to the KBS for inferencing, the result is simultaneously validated. Therefore, instead of the system simply providing a conclusion, it also provides a warning when the validation fails. Previous studies have shown that a modification to multiple classification ripple-down rules (MCRDR), referred to as rated MCRDR (RM), has been able to achieve strong and flexible results in simulated domains with artificial data sets. This paper presents a study into the effectiveness of RM in an eHealth document monitoring and classification domain using human expertise. Additionally, this paper investigates what effect PA has when the KBS developer relies entirely on the warnings for maintenance. Results indicate that the system is surprisingly robust even when warning accuracy is allowed to drop quite low. This study of a previously little-touched area provides a strong indication of the potential for future knowledge based system development. © 2011 Elsevier Ltd. All rights reserved.
On the limitations of scalarisation for multi-objective reinforcement learning of Pareto fronts
- Authors: Vamplew, Peter , Yearwood, John , Dazeley, Richard , Berry, Adam
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at 21st Australasian Joint Conference on Artificial Intelligence, Auckland, New Zealand : 1st-5th December 2008 Vol. 5360, p. 372-378
- Full Text: false
- Description: Multiobjective reinforcement learning (MORL) extends RL to problems with multiple conflicting objectives. This paper argues for designing MORL systems to produce a set of solutions approximating the Pareto front, and shows that the common MORL technique of scalarisation has fundamental limitations when used to find Pareto-optimal policies. The work is supported by the presentation of three new MORL benchmarks with known Pareto fronts.
- Description: 2003006504
An approach for generalising symbolic knowledge
- Authors: Dazeley, Richard , Kang, Byeongho
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at 21st Australasian Joint Conference on Artificial Intelligence, Auckland, New Zealand : 1st-5th December 2008 p. 379-385
- Full Text: false
- Description: Many researchers and developers of knowledge based systems (KBS) have been incorporating the notion of context. However, they generally treat context as a static entity, neglecting much of the connectionist work on learning hidden and dynamic contexts, which aids generalisation. This paper presents a method that models hidden context within a symbolic domain, thereby achieving a level of generalisation. Results indicate that the method can learn information that experts have difficulty providing, by generalising the captured knowledge.
- Description: 2003006525
Evaluating authorship distance methods using the positive Silhouette coefficient
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2013
- Type: Text , Journal article
- Relation: Natural Language Engineering Vol. 19, no. 4 (2013), p. 517-535
- Full Text:
- Reviewed:
- Description: Unsupervised Authorship Analysis (UAA) aims to cluster documents by authorship without knowing the authorship of any documents. An important factor in UAA is the method for calculating the distance between documents. This choice of authorship distance method is considered more critical to the end result than the choice of cluster analysis algorithm. One method for measuring the correlation between a distance metric and a labelling (such as class values or clusters) is the Silhouette Coefficient (SC). The SC can be leveraged to evaluate the quality of a distance method by measuring the correlation between the authorship distance method and the true authorship. However, we show that the SC can be severely affected by outliers. To address this issue, we introduce the Positive Silhouette Coefficient (PSC), given as the proportion of instances with a positive SC value. This measure is not easily altered by outliers, making it more robust. A large number of authorship distance methods are then compared using the PSC, and the findings are presented. This research provides an insight into the efficacy of methods for UAA and presents a framework for testing authorship distance methods.
- Description: C1
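The Positive Silhouette Coefficient described in the abstract above reduces to a one-line computation: the fraction of instances whose silhouette value is positive. A minimal sketch, assuming scikit-learn's `silhouette_samples` with a precomputed distance matrix; the function name and toy data below are illustrative, not from the paper:

```python
import numpy as np
from sklearn.metrics import silhouette_samples

def positive_silhouette_coefficient(distances, labels):
    """Fraction of instances with a positive silhouette value.

    `distances` is a precomputed pairwise distance matrix (e.g. produced
    by an authorship distance method); `labels` gives the authorship
    labelling being evaluated.
    """
    s = silhouette_samples(distances, labels, metric="precomputed")
    return float(np.mean(s > 0))

# Toy 1-D example: two nearby groups plus a distant outlier assigned to
# the first group. The outlier drags down the silhouette of its cluster,
# but the PSC simply counts how many instances still score positively.
points = np.array([[0.0], [0.1], [5.0], [5.1], [100.0]])
dist = np.abs(points - points.T)          # pairwise absolute distances
labels = np.array([0, 0, 1, 1, 0])        # outlier grouped with cluster 0
print(positive_silhouette_coefficient(dist, labels))
```

Here only the two points of cluster 1 obtain positive silhouette values, so the sketch reports a PSC of 0.4.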
Empirical evaluation methods for multiobjective reinforcement learning algorithms
- Authors: Vamplew, Peter , Dazeley, Richard , Berry, Adam , Issabekov, Rustam , Dekker, Evan
- Date: 2011
- Type: Text , Journal article
- Relation: Machine Learning Vol. 84, no. 1-2 (2011), p. 51-80
- Full Text: false
- Reviewed:
- Description: While a number of algorithms for multiobjective reinforcement learning have been proposed, and a small number of applications developed, there has been very little rigorous empirical evaluation of the performance and limitations of these algorithms. This paper proposes standard methods for such empirical evaluation, to act as a foundation for future comparative studies. Two classes of multiobjective reinforcement learning algorithms are identified, and appropriate evaluation metrics and methodologies are proposed for each class. A suite of benchmark problems with known Pareto fronts is described, and future extensions and implementations of this benchmark suite are discussed. The utility of the proposed evaluation methods is demonstrated via an empirical comparison of two example learning algorithms. © 2010 The Author(s).
Recentred local profiles for authorship attribution
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2012
- Type: Text , Journal article
- Relation: Natural Language Engineering Vol. 18, no. 3 (2012), p. 293-312
- Full Text:
- Reviewed:
- Description: Authorship attribution methods aim to determine the author of a document, using information gathered from a set of documents with known authors. One method of performing this task is to create profiles containing distinctive features known to be used by each author. In this paper, a new method of creating an author or document profile is presented that detects features considered distinctive compared to normal language usage. This recentring approach creates more accurate profiles than previous methods, as demonstrated empirically using a known corpus of authorship problems. The method, named recentred local profiles, determines authorship more accurately than other methods in the literature, using a simple 'best matching author' approach to classification. The proposed method is shown to be more stable than related methods as parameter values change. Using a weighted voting scheme, recentred local profiles is shown to outperform other methods in authorship attribution, with an overall accuracy of 69.9% on the ad-hoc authorship attribution competition corpus, representing a significant improvement over related methods. Copyright © Cambridge University Press 2011.
- Description: 2003010688
Automated unsupervised authorship analysis using evidence accumulation clustering
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2013
- Type: Text , Journal article
- Relation: Natural Language Engineering Vol. 19, no. 1 (2013), p. 95-120
- Full Text:
- Reviewed:
- Description: Authorship Analysis aims to extract information about the authorship of documents from features within those documents. Typically, this is performed as a classification task with the aim of identifying the author of a document, given a set of documents of known authorship. Alternatively, unsupervised methods have been developed primarily as visualisation tools to assist the manual discovery of clusters of authorship within a corpus by analysts. However, there is a need in many fields for more sophisticated unsupervised methods to automate the discovery, profiling and organisation of related information through clustering of documents by authorship. An automated and unsupervised methodology for clustering documents by authorship is proposed in this paper. The methodology is named NUANCE, for n-gram Unsupervised Automated Natural Cluster Ensemble. Testing indicates that the derived clusters have a strong correlation to the true authorship of unseen documents. © 2011 Cambridge University Press.
- Description: 2003010584
Levels of explainable artificial intelligence for human-aligned conversational explanations
- Authors: Dazeley, Richard , Vamplew, Peter , Foale, Cameron , Young, Cameron , Aryal, Sunil , Cruz, Francisco
- Date: 2021
- Type: Text , Journal article
- Relation: Artificial Intelligence Vol. 299, no. (2021), p.
- Full Text:
- Reviewed:
- Description: Over the last few years there has been rapid research growth into eXplainable Artificial Intelligence (XAI) and the closely aligned Interpretable Machine Learning (IML). Drivers for this growth include recent legislative changes and increased investments by industry and governments, along with increased concern from the general public. People are affected by autonomous decisions every day and the public need to understand the decision-making process to accept the outcomes. However, the vast majority of the applications of XAI/IML are focused on providing low-level ‘narrow’ explanations of how an individual decision was reached based on a particular datum. While important, these explanations rarely provide insights into an agent's: beliefs and motivations; hypotheses of other (human, animal or AI) agents' intentions; interpretation of external cultural expectations; or, processes used to generate its own explanation. Yet all of these factors, we propose, are essential to providing the explanatory depth that people require to accept and trust the AI's decision-making. This paper aims to define levels of explanation and describe how they can be integrated to create a human-aligned conversational explanation system. In so doing, this paper will survey current approaches and discuss the integration of different technologies to achieve these levels with Broad eXplainable Artificial Intelligence (Broad-XAI), and thereby move towards high-level ‘strong’ explanations. © 2021 Elsevier B.V.