An approach for generalising symbolic knowledge
- Authors: Dazeley, Richard , Kang, Byeongho
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at 21st Australasian Joint Conference on Artificial Intelligence, Auckland, New Zealand : 1st-5th December 2008 p. 379-385
- Full Text: false
- Description: Many researchers and developers of knowledge based systems (KBS) have been incorporating the notion of context. However, they generally treat context as a static entity, neglecting many connectionists’ work in learning hidden and dynamic contexts, which aids generalization. This paper presents a method that models hidden context within a symbolic domain in order to achieve a level of generalisation. Results indicate that the method can learn the information that experts have difficulty providing by generalising the captured knowledge.
- Description: 2003006525
An expert system methodology for SMEs and NPOs
- Authors: Dazeley, Richard
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at 11th Australian Conference on Knowledge Management and Intelligent Decision Support, ACKMIDS 2008, Ballarat, Victoria : 8th-10th December 2008
- Full Text:
- Description: Traditionally, Expert Systems (ES) require a full analysis of the business problem by a Knowledge Engineer (KE) to develop a solution. This inherently makes ES technology very expensive and beyond the affordability of the majority of Small and Medium sized Enterprises (SMEs) and Non-Profit Organisations (NPOs). Therefore, SMEs and NPOs tend to only have access to off-the-shelf solutions to generic problems, which rarely meet the full extent of an organisation’s requirements. One existing methodological stream of research, Ripple-Down Rules (RDR), goes some of the way towards being suitable for SMEs and NPOs, as it removes the need for a knowledge engineer. This group of methodologies provides an environment in which a company can develop large knowledge based systems itself, specifically tailored to its individual situation. These methods, however, require constant supervision by the expert during development, which is still a significant burden on the organisation. This paper discusses an extension to an RDR method, known as Rated MCRDR (RM), and a feature called prudence analysis. This enhanced methodology for ES development is particularly well suited to restricted environments such as SMEs and NPOs.
- Description: 2003006507
Detecting the knowledge boundary with prudence analysis
- Authors: Dazeley, Richard , Kang, Byeongho
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at 21st Australasian Joint Conference on Artificial Intelligence, Auckland, New Zealand : 1st-5th December 2008 p. 482-488
- Full Text: false
- Description: Prudence analysis (PA) is a relatively new, practical and highly innovative approach to solving the problem of brittleness in knowledge based systems (KBS). PA is essentially an online validation approach in which, as each situation or case is presented to the KBS for inferencing, the result is simultaneously validated. This paper introduces a new approach to PA that analyses the structure of the knowledge rather than comparing cases with archived situations. This new approach compares favourably against earlier systems for PA, strongly indicating the viability of the approach.
- Description: 2003006511
Epistemological approach to the process of practice
- Authors: Dazeley, Richard , Kang, Byeongho
- Date: 2008
- Type: Text , Journal article
- Relation: Minds and Machines Vol. 18, no. 4 (2008), p. 547-567
- Full Text:
- Reviewed:
- Description: Systems based on symbolic knowledge have performed extremely well in processing reason, yet remain beset with problems of brittleness in many domains. Connectionist approaches do similarly well in emulating interactive domains; however, they have struggled when modelling higher brain functions. Neither of these dichotomous approaches, however, has provided many inroads into the area of human reasoning that psychology and sociology refer to as the process of practice. This paper argues that the absence of a model for the process of practice in current approaches is a significant contributor to brittleness. This paper will investigate how the process of practice relates to deeper forms of contextual representations of knowledge. While researchers and developers of knowledge based systems have often incorporated the notion of context, they treat context as a static entity, neglecting many connectionists' work in learning hidden and dynamic contexts. This paper argues that the omission of these higher forms of context is one of the fundamental problems in the application and interpretation of symbolic knowledge. Finally, these ideas for modelling context lead to a reinterpretation of situation cognition, which makes a significant step towards a philosophy of knowledge that could lead to the modelling of the process of practice. © 2008 Springer Science+Business Media B.V.
- Description: C1
Generalisation with symbolic knowledge in online classification
- Authors: Kang, Byeongho , Dazeley, Richard
- Date: 2008
- Type: Text , Conference paper
- Relation: PKAW-08: Proceedings of the Pacific Rim Knowledge Acquisition Workshop 2008
- Full Text: false
- Reviewed:
- Description: Increasingly, researchers and developers of knowledge based systems (KBS) have been incorporating the notion of context. For instance, Repertory Grids, Formal Concept Analysis (FCA) and Ripple-Down Rules (RDR) all integrate either implicit or explicit contextual information. However, these methodologies treat context as a static entity, neglecting many connectionists’ work in learning hidden and dynamic contexts, which aid their ability to generalize. This paper presents a method that models hidden context within a symbolic domain in order to achieve a level of generalisation. The method developed builds on the already established Multiple Classification Ripple-Down Rules (MCRDR) approach and is referred to as Rated MCRDR (RM). RM retains a symbolic core, while using a connection based approach to learn a deeper understanding of the captured knowledge. This method is applied to a number of online classification environments and results indicate that the method can learn the information that experts have difficulty providing.
On the limitations of scalarisation for multi-objective reinforcement learning of Pareto fronts
- Authors: Vamplew, Peter , Yearwood, John , Dazeley, Richard , Berry, Adam
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at 21st Australasian Joint Conference on Artificial Intelligence, Auckland, New Zealand : 1st-5th December 2008 Vol. 5360, p. 372-378
- Full Text: false
- Description: Multiobjective reinforcement learning (MORL) extends RL to problems with multiple conflicting objectives. This paper argues for designing MORL systems to produce a set of solutions approximating the Pareto front, and shows that the common MORL technique of scalarisation has fundamental limitations when used to find Pareto-optimal policies. The work is supported by the presentation of three new MORL benchmarks with known Pareto fronts.
- Description: 2003006504
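As background to the limitation discussed in the abstract above, linear scalarisation (the most common form, shown below as a generic illustration rather than a formula quoted from the paper) collapses the vector of objective values into a single scalar. A policy that is greedy with respect to the scalarised value can only be optimal for points on the convex hull of the Pareto front, so Pareto-optimal policies lying in concave regions of the front are unreachable for any choice of weights.

```latex
% Linear scalarisation of a multiobjective action-value function
% (generic illustration, not quoted from the paper):
\[
  Q_{\mathbf{w}}(s,a) \;=\; \sum_{i=1}^{n} w_i \, Q_i(s,a),
  \qquad w_i \ge 0, \quad \sum_{i=1}^{n} w_i = 1 .
\]
```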
Prediction using a symbolic based hybrid system
- Authors: Dazeley, Richard , Kang, Byeongho
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at Pacific Rim Knowledge Acquisition Workshop 2008, PKAW-08, Hanoi, Vietnam : 15th-16th December 2008
- Full Text:
- Description: Knowledge Based Systems (KBS) are highly successful in classification and diagnostic situations; however, they are generally unable to identify specific values for prediction problems. When used for prediction, they either rely on some form of uncertainty reasoning or use a classification-style inference in which each class represents a discrete predictive value. This paper applies a hybrid algorithm that allows an expert’s knowledge to be adapted to provide continuous values to solve prediction problems. The method applied to prediction in this paper is built on the already established Multiple Classification Ripple-Down Rules (MCRDR) approach and is referred to as Rated MCRDR (RM). The method is published in a parallel paper in this workshop, titled Generalisation with Symbolic Knowledge in Online Classification. Results indicate a strong propensity to quickly adapt and provide accurate predictions.
- Description: 2003006510
Scalable continuous query architecture for eCommerce and legal disputes
- Authors: Saeed, Ather , Stranieri, Andrew , Dazeley, Richard , Ma, Liping
- Date: 2008
- Type: Text , Journal article
- Relation: Communications of SIWN Vol. 3 (2008), p. 1-6
- Full Text: false
- Reviewed:
- Description: Continuous Queries (CQ) are persistent, content sensitive and time dependent. Once a CQ is installed, it will continuously poll the data sources and monitor updates of interest. This paper discusses major problems and issues with the existing CQ techniques for monitoring updates of interest on the web. A new Continuous Query based architecture is proposed to deal with the context sensitive problems of negotiation, mediation and arbitration in resolving eCommerce and legal disputes. A business process model is given to automate the mediation and arbitration processes in ODR (Online Dispute Resolution) so that disputes can be resolved efficiently and in a timely manner. In the proposed CQ-Mediator architecture, partial page update and web services are integrated for efficient monitoring and notification of updates to the disputants, mediators and arbitrators. Performance results of the proposed architecture and business process model for CQ-based ODR are also discussed in the experiment section.
- Description: 2003006852
The viability of prudence analysis
- Authors: Dazeley, Richard , Kang, Byeongho
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at Pacific Rim Knowledge Acquisition Workshop 2008, PKAW-08, Hanoi, Vietnam : 15th-16th December 2008
- Full Text:
- Description: Prudence analysis (PA) is a relatively new, practical and highly innovative approach to solving the problem of brittleness. PA is essentially an incremental validation approach, where each situation or case is presented to the KBS for inferencing and the result is subsequently validated. Therefore, instead of the system simply providing a conclusion, it also provides a warning when the validation fails. This allows the user to check the solution and correct any potential deficiencies found in the knowledge base. A small number of potentially viable approaches to PA have been published that show a high degree of accuracy in identifying errors. However, none of these is perfect: very rarely, a case is classified incorrectly and not identified by the PA system. The work in PA thus far has focused on reducing the frequency of these missed warnings; however, there have been no studies on the effect of these misses on the final knowledge base’s performance. This paper investigates how these errors in a knowledge base affect its ability to correctly classify cases. The results in this study strongly indicate that the missed errors have a significantly smaller influence on the inferencing results than would be expected, which supports the viability of PA.
- Description: 2003006508
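A minimal sketch of the incremental validation loop described in the abstract above, assuming a hypothetical KBS interface (classify, prudence_warning, add_exception_rule) and an expert review step; it illustrates the general PA workflow rather than the authors' actual implementation.

```python
# Sketch of the prudence analysis loop (hypothetical API, not the authors'
# implementation): each case is inferred and simultaneously validated;
# a warning hands the case to the expert for possible correction.

def process_case_stream(kbs, expert, cases):
    """Run cases through a KBS with prudence analysis enabled."""
    for case in cases:
        conclusion = kbs.classify(case)              # normal inferencing
        if kbs.prudence_warning(case, conclusion):
            # The system suspects the case lies outside its knowledge boundary.
            corrected = expert.review(case, conclusion)
            if corrected != conclusion:
                kbs.add_exception_rule(case, corrected)   # incremental KA step
            conclusion = corrected
        yield case, conclusion
```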
Constructing stochastic mixture policies for episodic multiobjective reinforcement learning tasks
- Authors: Vamplew, Peter , Dazeley, Richard , Barker, Ewan , Kelarev, Andrei
- Date: 2009
- Type: Text , Book chapter
- Relation: AI 2009 : Advances in Artificial Intelligence : 22nd Australasian Joint Conference, Melbourne, Australia, December 1-4, 2009. Proceedings Chapter p. 340-349
- Full Text:
- Description: Multiobjective reinforcement learning algorithms extend reinforcement learning techniques to problems with multiple conflicting objectives. This paper discusses the advantages gained from applying stochastic policies to multiobjective tasks and examines a particular form of stochastic policy known as a mixture policy. Two methods are proposed for deriving mixture policies for episodic multiobjective tasks from deterministic base policies found via scalarised reinforcement learning. It is shown that these approaches are an efficient means of identifying solutions which offer a better match to the user’s preferences than can be achieved by methods based strictly on deterministic policies.
- Description: 2003007906
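As an illustration of the mixture-policy idea in the abstract above, the following sketch assumes two deterministic base policies with known per-objective returns and selects one of them at the start of each episode with probability p, so the expected return interpolates between the base policies. It is illustrative only, not one of the paper's two derivation methods.

```python
import random

# Illustrative sketch of an episodic mixture policy: at the start of each
# episode one of two deterministic base policies is chosen with a fixed
# probability, giving an expected return that interpolates between them.

def mixture_policy_returns(returns_a, returns_b, p, episodes=10000):
    """Estimate the expected per-objective return of the mixture policy.

    returns_a, returns_b -- per-objective return vectors of the base policies
    p                    -- probability of following base policy A in an episode
    """
    n = len(returns_a)
    totals = [0.0] * n
    for _ in range(episodes):
        chosen = returns_a if random.random() < p else returns_b
        for i in range(n):
            totals[i] += chosen[i]
    return [t / episodes for t in totals]

# Example: two base policies trading off two objectives.
print(mixture_policy_returns([10.0, 2.0], [3.0, 9.0], p=0.4))
# Expected value is roughly [0.4*10 + 0.6*3, 0.4*2 + 0.6*9] = [5.8, 6.2]
```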
Generalising symbolic knowledge in online classification and prediction
- Authors: Dazeley, Richard , Kang, Byeongho
- Date: 2009
- Type: Text , Journal article
- Relation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol. 5465 LNAI (2009), p. 91-108
- Full Text:
- Reviewed:
- Description: Increasingly, researchers and developers of knowledge based systems (KBS) have been incorporating the notion of context. For instance, Repertory Grids, Formal Concept Analysis (FCA) and Ripple-Down Rules (RDR) all integrate either implicit or explicit contextual information. However, these methodologies treat context as a static entity, neglecting many connectionists' work in learning hidden and dynamic contexts, which aid their ability to generalize. This paper presents a method that models hidden context within a symbolic domain in order to achieve a level of generalisation. The method developed builds on the already established Multiple Classification Ripple-Down Rules (MCRDR) approach and is referred to as Rated MCRDR (RM). RM retains a symbolic core, while using a connection based approach to learn a deeper understanding of the captured knowledge. This method is applied to a number of classification and prediction environments and results indicate that the method can learn the information that experts have difficulty providing. © Springer-Verlag Berlin Heidelberg 2009.
- Description: 2003006509
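The abstract above describes RM as a symbolic MCRDR core coupled with a connectionist layer. A very rough sketch of that kind of coupling follows; the binary rule-firing representation, the single linear rating layer and the delta-rule update are assumptions made for illustration, not the published RM design.

```python
import numpy as np

# Rough sketch of coupling a symbolic rule base with a learned rating layer,
# in the spirit of the RM description above (details are assumptions, not the
# published algorithm): the set of rules that fire for a case is encoded as a
# binary vector, and a linear layer learns to rate it from feedback.

class RatedRuleBase:
    def __init__(self, num_rules, learning_rate=0.1):
        self.weights = np.zeros(num_rules)
        self.lr = learning_rate

    def rate(self, fired_rules):
        """fired_rules: binary vector, 1 where a rule fired for the case."""
        return float(self.weights @ fired_rules)

    def update(self, fired_rules, target):
        """Simple delta-rule update toward an expert-provided target value."""
        error = target - self.rate(fired_rules)
        self.weights += self.lr * error * fired_rules

rm = RatedRuleBase(num_rules=5)
case_firings = np.array([1, 0, 1, 0, 0], dtype=float)
for _ in range(50):
    rm.update(case_firings, target=0.8)
print(round(rm.rate(case_firings), 3))   # converges toward 0.8
```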
Grid-based information retrieval for the aggregation of legal datasets in online dispute resolution
- Authors: Saeed, Ather , Stranieri, Andrew , Dazeley, Richard , Ma, Liping
- Date: 2009
- Type: Text , Journal article
- Relation: Communications of SIWN Vol. 6, no. April (2009), p. 16-22
- Full Text: false
- Description: The Web is a stateless and complex environment when it comes to the retrieval of information from millions of computers connected to the Internet via WWW servers. Information Retrieval (IR) from heterogeneous data sources poses a great challenge, as the information of interest is stored in a variety of different formats. Answering an enormous number of queries is a resource and computationally intensive task in ODR (Online Dispute Resolution). Information availability also poses a challenge for the mediation and arbitration processes in resolving eCommerce and legal disputes. A new Grid-based information retrieval model is proposed for the aggregation and replication of legal datasets from remote machines with an index-based search facility. Datasets of interest will be indexed with a slight modification to the existing indexing scheme. A new strategy is proposed to deal with similar queries posted repeatedly, exploiting and merging the commonality among the XML query trees for the efficient retrieval of information.
Optimization of multiple classifiers in data mining based on string rewriting systems
- Authors: Dazeley, Richard , Kelarev, Andrei , Yearwood, John , Mammadov, Musa
- Date: 2009
- Type: Text , Journal article
- Relation: Asian-European Journal of Mathematics Vol. 2, no. 1 (2009), p. 41-56
- Relation: https://purl.org/au-research/grants/arc/DP0211866
- Relation: https://purl.org/au-research/grants/arc/LP0669752
- Full Text:
- Description: Optimization of multiple classifiers is an important problem in data mining. We introduce additional structure on the class sets of the classifiers using string rewriting systems with a convenient matrix representation. The aim of the present paper is to develop an efficient algorithm for the optimization of the number of errors of individual classifiers, which can be corrected by these multiple classifiers.
Authorship attribution for Twitter in 140 characters or less
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at 2nd Cybercrime and Trustworthy Computing Workshop, CTC 2010 p. 1-8
- Full Text:
- Reviewed:
- Description: Authorship attribution is a growing field, moving from beginnings in linguistics to recent advances in text mining. Through this change came an increase in the capability of authorship attribution methods, both in their accuracy and in their ability to consider more difficult problems. Research into authorship attribution in the 19th century considered it difficult to determine the authorship of a document of fewer than 1000 words. By the 1990s this value had decreased to less than 500 words, and in the early 21st century it was considered possible to determine the authorship of a document of 250 words. The need for this ever decreasing limit is exemplified by the trend towards many shorter communications rather than fewer longer communications, such as the move from traditional multi-page handwritten letters to shorter, more focused emails. This trend has also been shown in online crime, where many attacks such as phishing or bullying are performed using very concise language. Cybercrime messages have long been hosted on Internet Relay Chats (IRCs), which have allowed members to hide behind screen names and connect anonymously. More recently, Twitter and other short message based web services have been used as a hosting ground for online crimes. This paper presents some evaluations of current techniques and identifies some new preprocessing methods that can be used to enable authorship to be determined at rates significantly better than chance for documents of 140 characters or less, a format popularised by the micro-blogging website Twitter. We show that the SCAP methodology performs extremely well on Twitter messages and, even with restrictions on the types of information allowed, such as the recipient of directed messages, still performs significantly better than chance. Further to this, we show that 120 tweets per user is an important threshold, at which point adding more tweets per user gives a small but non-significant increase in accuracy. © 2010 IEEE.
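SCAP, as referenced in the abstract above, builds per-author profiles of the most frequent character n-grams and attributes a document to the author whose profile overlaps it most. The sketch below is a generic reconstruction of that idea (the parameter values and helper names are illustrative), not the exact configuration evaluated in the paper.

```python
from collections import Counter

# Generic sketch of the SCAP (Source Code Author Profiles) idea: a profile is
# the L most frequent character n-grams of an author's known text, and
# attribution uses simplified profile intersection. Parameter choices
# (n=3, L=500) are illustrative, not the paper's settings.

def profile(text, n=3, L=500):
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    return {g for g, _ in grams.most_common(L)}

def attribute(unknown_text, author_texts, n=3, L=500):
    """Return the author whose profile shares the most n-grams with the text."""
    doc_profile = profile(unknown_text, n, L)
    return max(author_texts,
               key=lambda a: len(profile(author_texts[a], n, L) & doc_profile))

authors = {"alice": "heading to the beach later, anyone keen?",
           "bob": "deploying the patch tonight; expect brief downtime."}
print(attribute("beach again this weekend, who is keen?", authors))  # likely "alice"
```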
Automatically determining phishing campaigns using the USCAP methodology
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at General Members Meeting and eCrime Researchers Summit, eCrime 2010 p. 1-8
- Full Text:
- Reviewed:
- Description: Phishing fraudsters attempt to create an environment which looks and feels like a legitimate institution, while at the same time attempting to bypass filters and the suspicions of their targets. This is a difficult compromise for the phishers and presents a weakness in the process of conducting this fraud. In this research, a methodology is presented that examines the differences between phishing websites from an authorship analysis perspective and is able to determine the different phishing campaigns undertaken by phishing groups. The methodology is named USCAP, for Unsupervised SCAP, which builds on the SCAP methodology from supervised authorship analysis and extends it to unsupervised learning problems. The phishing website source code is examined to generate a model that gives the size and scope of each of the recognized phishing campaigns. The USCAP methodology marks the first time that phishing websites have been clustered by campaign in an automatic and reliable way, in contrast to previous methods which relied on costly expert analysis of phishing websites. Evaluation of these clusters indicates that each cluster is strongly consistent, with high stability and reliability when analyzed using new information about the attacks, such as the dates on which the attacks occurred. The clusters found are indicative of different phishing campaigns, presenting a step towards an automated phishing authorship analysis methodology. © 2010 IEEE.
Consensus clustering and supervised classification for profiling phishing emails in internet commerce security
- Authors: Dazeley, Richard , Yearwood, John , Kang, Byeongho , Kelarev, Andrei
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at 11th International Workshop on Knowledge Management and Acquisition for Smart Systems and Services, PKAW 2010 Vol. 6232 LNAI, p. 235-246
- Full Text:
- Reviewed:
- Description: This article investigates internet commerce security applications of a novel combined method, which uses unsupervised consensus clustering algorithms in combination with supervised classification methods. First, a variety of independent clustering algorithms are applied to a randomized sample of data. Second, several consensus functions and sophisticated algorithms are used to combine these independent clusterings into one final consensus clustering. Third, the consensus clustering of the randomized sample is used as a training set to train several fast supervised classification algorithms. Finally, these fast classification algorithms are used to classify the whole large data set. One of the advantages of this approach is in its ability to facilitate the inclusion of contributions from domain experts in order to adjust the training set created by consensus clustering. We apply this approach to profiling phishing emails selected from a very large data set supplied by the industry partners of the Centre for Informatics and Applied Optimization. Our experiments compare the performance of several classification algorithms incorporated in this scheme. © 2010 Springer-Verlag Berlin Heidelberg.
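The four-step pipeline in the abstract above can be sketched roughly as follows with scikit-learn; the specific algorithms, the co-association consensus function and all parameter values are stand-ins for illustration, not the combination evaluated in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.tree import DecisionTreeClassifier

# Rough sketch of the combined scheme described above (algorithms and
# parameters are illustrative stand-ins): cluster a sample several ways,
# combine the clusterings via a co-association consensus, then train a fast
# supervised classifier on the consensus labels and apply it to the full data.

rng = np.random.default_rng(0)
full_data = rng.normal(size=(2000, 10))          # placeholder for email features
sample = full_data[rng.choice(2000, size=200, replace=False)]

# Step 1: several independent clusterings of the randomized sample.
labelings = [KMeans(n_clusters=3, n_init=10, random_state=s).fit_predict(sample)
             for s in range(3)]

# Step 2: consensus clustering; each object's co-association row (fraction of
# clusterings in which two objects share a cluster) is used as its features.
co_assoc = np.mean([(l[:, None] == l[None, :]).astype(float) for l in labelings], axis=0)
consensus = AgglomerativeClustering(n_clusters=3).fit_predict(co_assoc)

# Step 3: train a fast supervised classifier on the consensus labels
# (a domain expert could adjust these labels before training).
clf = DecisionTreeClassifier().fit(sample, consensus)

# Step 4: classify the whole (large) data set.
full_labels = clf.predict(full_data)
print(np.bincount(full_labels))
```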
The Ballarat incremental knowledge engine
- Authors: Dazeley, Richard , Warner, Philip , Johnson, Scott , Vamplew, Peter
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at 11th International Workshop on Knowledge Management and Acquisition for Smart Systems and Services, PKAW 2010 Vol. 6232 LNAI, p. 195-207
- Full Text:
- Reviewed:
- Description: Ripple Down Rules (RDR) is a maturing collection of methodologies for the incremental development and maintenance of medium to large rule-based knowledge systems. While earlier knowledge based systems relied on extensive modeling and knowledge engineering, RDR instead takes a simple no-model approach that merges the development and maintenance stages. Over the last twenty years RDR has been significantly expanded and applied in numerous domains. Until now researchers have generally implemented their own version of the methodologies, while commercial implementations are not made available. This has resulted in much duplicated code and the advantages of RDR not being available to a wider audience. The aim of this project is to develop a comprehensive and extensible platform that supports current and future RDR technologies, thereby allowing researchers and developers access to the power and versatility of RDR. This paper is a report on the current status of the project and marks the first release of the software. © 2010 Springer-Verlag Berlin Heidelberg.
Empirical evaluation methods for multiobjective reinforcement learning algorithms
- Authors: Vamplew, Peter , Dazeley, Richard , Berry, Adam , Issabekov, Rustam , Dekker, Evan
- Date: 2011
- Type: Text , Journal article
- Relation: Machine Learning Vol. 84, no. 1-2 (2011), p. 51-80
- Full Text: false
- Reviewed:
- Description: While a number of algorithms for multiobjective reinforcement learning have been proposed, and a small number of applications developed, there has been very little rigorous empirical evaluation of the performance and limitations of these algorithms. This paper proposes standard methods for such empirical evaluation, to act as a foundation for future comparative studies. Two classes of multiobjective reinforcement learning algorithms are identified, and appropriate evaluation metrics and methodologies are proposed for each class. A suite of benchmark problems with known Pareto fronts is described, and future extensions and implementations of this benchmark suite are discussed. The utility of the proposed evaluation methods is demonstrated via an empirical comparison of two example learning algorithms. © 2010 The Author(s).
Establishing reasoning communities of security experts for Internet Commerce Security
- Authors: Kelarev, Andrei , Brown, Simon , Watters, Paul , Wu, Xinwen , Dazeley, Richard
- Date: 2011
- Type: Text , Book chapter
- Relation: Technologies for supporting reasoning communities and collaborative decision making : Cooperative approaches p. 380-396
- Full Text: false
- Reviewed:
- Description: The highly sophisticated and rapidly evolving area of internet commerce security presents many novel challenges for the organization of discourse in reasoning communities. This chapter suggests appropriate reasoning methods and demonstrates how establishing reasoning communities of security experts and enabling productive group discourse among them can play a crucial role in successful resolution of problems concerning the implementation, integration, deployment and maintenance of flexible local security systems for defense against malware threats in internet security. Local security systems of this sort may combine several ready open source or commercial software packages behind a common front-end and may enhance and supplement their facilities with additional plug-ins. To illustrate the diverse character of challenges the reasoning communities in internet security are likely to be faced with, this chapter concentrates on defense against phishing attacks. This example was selected as it is one of the newest and most rapidly changing application domains for the principles of organizing reasoning communities. The major group discourse methods suggested for the reasoning communities of security experts in this chapter include the Delphi Method, the Wideband Delphi Process, the Generic/Actual Argument Model of Structured Reasoning, Brainstorming, Reverse Brainstorming, Consensus Decision Making, Voting, Open Delphi and Open Brainstorming Methods. The Delphi Method and Wideband Delphi Process are suggested as tools for organizing a cohesive reasoning architecture, for coordinating other methods, and for preparing and allocating other methods to particular issues.
Fault-tolerant data aggregation scheme for monitoring of critical events in grid based healthcare sensor networks
- Authors: Saeed, Ather , Stranieri, Andrew , Dazeley, Richard
- Date: 2011
- Type: Text , Conference paper
- Relation: Paper presented at 19th High Performance Computing Symposium (HPC 2011) part of SCS Spring Simulation Multiconference (SpringSim'11)
- Full Text:
- Reviewed:
- Description: Wireless sensor devices are used for monitoring patients with serious medical conditions. Communication of content-sensitive and context-sensitive datasets is crucial for the survival of patients, so that informed decisions can be made. The main limitation of sensor devices is that they work on a fixed threshold to notify the relevant Healthcare Professional (HP) about the seriousness of a patient’s current state. Further, these sensor devices have limited processing and memory capabilities and limited battery life. A new grid-based information monitoring architecture is proposed to address the issues of data loss and timely dissemination of critical information to the relevant HP. The proposed approach provides an opportunity to efficiently aggregate datasets of interest by reducing network overhead and minimizing data latency. To narrow down the problem domain, in-network processing of datasets with Grid monitoring capabilities is proposed for the efficient execution of computational, resource and data intensive tasks. Interactive wireless sensor networks do not guarantee that data gathered from heterogeneous sources will always arrive at the sink (base) node, but the proposed aggregation technique provides a fault tolerant solution for the timely notification of a patient’s critical state. Experimental results are encouraging and clearly show a reduction in the network latency rate.