A brief guide to multi-objective reinforcement learning and planning: JAAMAS track
- Hayes, Conor, Bargiacchi, Eugenio, Källström, Johan, Macfarlane, Matthew, Reymond, Mathieu, Verstraeten, Timothy, Zintgraf, Luisa, Dazeley, Richard, Heintz, Frederik, Howley, Enda, Irissappane, Aathirai, Mannion, Patrick, Nowé, Ann, Ramos, Gabriel, Restelli, Marcello, Vamplew, Peter, Roijers, Diederik
- Authors: Hayes, Conor , Bargiacchi, Eugenio , Källström, Johan , Macfarlane, Matthew , Reymond, Mathieu , Verstraeten, Timothy , Zintgraf, Luisa , Dazeley, Richard , Heintz, Frederik , Howley, Enda , Irissappane, Aathirai , Mannion, Patrick , Nowé, Ann , Ramos, Gabriel , Restelli, Marcello , Vamplew, Peter , Roijers, Diederik
- Date: 2023
- Type: Text , Conference paper
- Relation: 22nd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2023, London, 29 May to 2 June 2023, Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS Vol. 2023-May, p. 1988-1990
- Full Text:
- Reviewed:
- Description: Real-world sequential decision-making tasks are usually complex, and require trade-offs between multiple - often conflicting - objectives. However, the majority of research in reinforcement learning (RL) and decision-theoretic planning assumes a single objective, or that multiple objectives can be handled via a predefined weighted sum over the objectives. Such approaches may oversimplify the underlying problem, and produce suboptimal results. This extended abstract outlines the limitations of using a semi-blind iterative process to solve multi-objective decision-making problems. Our extended paper [4] serves as a guide for the application of explicitly multi-objective methods to difficult problems. © 2023 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
Evaluating human-like explanations for robot actions in reinforcement learning scenarios
- Cruz, Francisco, Young, Charlotte, Dazeley, Richard, Vamplew, Peter
- Authors: Cruz, Francisco , Young, Charlotte , Dazeley, Richard , Vamplew, Peter
- Date: 2022
- Type: Text , Conference paper
- Relation: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2022, Kyoto, Japan, 23-27 October 2022, 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Vol. 2022-October, p. 894-901
- Full Text:
- Reviewed:
- Description: Explainable artificial intelligence is a research field that tries to provide more transparency for autonomous intelligent systems. Explainability has been used, particularly in reinforcement learning and robotic scenarios, to better understand the robot decision-making process. Previous work, however, has widely focused on providing technical explanations that can be better understood by AI practitioners than by non-expert end-users. In this work, we make use of human-like explanations built from the probability of success that an autonomous robot has of completing its goal after performing an action. These explanations are intended to be understood by people who have no or very little experience with artificial intelligence methods. This paper presents a user trial to study whether explanations that focus on the probability an action has of succeeding in its goal constitute a suitable explanation for non-expert end-users. The results obtained show that non-expert participants rate robot explanations that focus on the probability of success higher and with less variance than technical explanations generated from Q-values, and also favor counterfactual explanations over standalone explanations. © 2022 IEEE.
Language representations for generalization in reinforcement learning
- Goodger, Nikolaj, Vamplew, Peter, Foale, Cameron, Dazeley, Richard
- Authors: Goodger, Nikolaj , Vamplew, Peter , Foale, Cameron , Dazeley, Richard
- Date: 2021
- Type: Text , Conference paper
- Relation: 13th Asian Conference on Machine Learning, Virtual, 17-19 November 2021, Proceedings of The 13th Asian Conference on Machine Learning Vol. 157, p. 390-405
- Full Text:
- Reviewed:
- Description: The choice of state and action representation in Reinforcement Learning (RL) has a significant effect on agent performance for the training task, but its relationship with generalization to new tasks is under-explored. One approach to improving generalization investigated here is the use of language as a representation. We compare vector-states and discrete actions to language representations. We find that agents using language representations generalize better and could solve tasks with more entities, new entities, and more complexity than seen in the training task. We attribute this to the compositionality of language.
Fault-tolerant data aggregation scheme for monitoring of critical events in grid based healthcare sensor networks
- Saeed, Ather, Stranieri, Andrew, Dazeley, Richard
- Authors: Saeed, Ather , Stranieri, Andrew , Dazeley, Richard
- Date: 2011
- Type: Text , Conference paper
- Relation: Paper presented at 19th High Performance Computing Symposium (HPC 2011) part of SCS Spring Simulation Multiconference (SpringSim'11)
- Full Text:
- Reviewed:
- Description: Wireless sensor devices are used for monitoring patients with serious medical conditions. Communication of content-sensitive and context-sensitive datasets is crucial for the survival of patients so that informed decisions can be made. The main limitation of sensor devices is that they work on a fixed threshold to notify the relevant Healthcare Professional (HP) about the seriousness of a patient's current state. Further, these sensor devices have limited processor, memory, and battery capabilities. A new grid-based information monitoring architecture is proposed to address the issues of data loss and timely dissemination of critical information to the relevant HP. The proposed approach provides an opportunity to efficiently aggregate datasets of interest by reducing network overhead and minimizing data latency. To narrow down the problem domain, in-network processing of datasets with Grid monitoring capabilities is proposed for the efficient execution of computational, resource- and data-intensive tasks. Interactive wireless sensor networks do not guarantee that data gathered from heterogeneous sources will always arrive at the sink (base) node, but the proposed aggregation technique provides a fault-tolerant solution to the timely notification of a patient's critical state. Experimental results are encouraging and clearly show a reduction in the network latency rate.
Authorship attribution for Twitter in 140 characters or less
- Layton, Robert, Watters, Paul, Dazeley, Richard
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at - 2nd Cybercrime and Trustworthy Computing Workshop, CTC 2010 p. 1-8
- Full Text:
- Reviewed:
- Description: Authorship attribution is a growing field, moving from beginnings in linguistics to recent advances in text mining. Through this change came an increase in the capability of authorship attribution methods, both in their accuracy and in their ability to consider more difficult problems. Research into authorship attribution in the 19th century considered it difficult to determine the authorship of a document of fewer than 1000 words. By the 1990s this value had decreased to less than 500 words, and in the early 21st century it was considered possible to determine the authorship of a document of 250 words. The need for this ever-decreasing limit is exemplified by the trend towards many shorter communications rather than fewer longer communications, such as the move from traditional multi-page handwritten letters to shorter, more focused emails. This trend has also been shown in online crime, where many attacks such as phishing or bullying are performed using very concise language. Cybercrime messages have long been hosted on Internet Relay Chats (IRCs), which have allowed members to hide behind screen names and connect anonymously. More recently, Twitter and other short-message-based web services have been used as a hosting ground for online crimes. This paper presents some evaluations of current techniques and identifies some new preprocessing methods that can be used to enable authorship to be determined at rates significantly better than chance for documents of 140 characters or less, a format popularised by the micro-blogging website Twitter. We show that the SCAP methodology performs extremely well on Twitter messages and, even with restrictions on the types of information allowed, such as the recipient of directed messages, still performs significantly better than chance. Further to this, we show that 120 tweets per user is an important threshold, at which point adding more tweets per user gives a small but non-significant increase in accuracy. © 2010 IEEE.
Automatically determining phishing campaigns using the USCAP methodology
- Layton, Robert, Watters, Paul, Dazeley, Richard
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at General Members Meeting and eCrime Researchers Summit, eCrime 2010 p. 1-8
- Full Text:
- Reviewed:
- Description: Phishing fraudsters attempt to create an environment which looks and feels like a legitimate institution, while at the same time attempting to bypass filters and the suspicions of their targets. This is a difficult compromise for the phishers and presents a weakness in the process of conducting this fraud. In this research, a methodology is presented that looks at the differences that occur between phishing websites from an authorship analysis perspective and is able to determine different phishing campaigns undertaken by phishing groups. The methodology is named USCAP, for Unsupervised SCAP, which builds on the SCAP methodology from supervised authorship attribution and extends it for unsupervised learning problems. The phishing website source code is examined to generate a model that gives the size and scope of each of the recognized phishing campaigns. The USCAP methodology marks the first time that phishing websites have been clustered by campaign in an automatic and reliable way, compared to previous methods which relied on costly expert analysis of phishing websites. Evaluation of these clusters indicates that each cluster is strongly consistent, with high stability and reliability when analyzed using new information about the attacks, such as the dates on which the attacks occurred. The clusters found are indicative of different phishing campaigns, presenting a step towards an automated phishing authorship analysis methodology. © 2010 IEEE.
Consensus clustering and supervised classification for profiling phishing emails in internet commerce security
- Dazeley, Richard, Yearwood, John, Kang, Byeongho, Kelarev, Andrei
- Authors: Dazeley, Richard , Yearwood, John , Kang, Byeongho , Kelarev, Andrei
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at 11th International Workshop on Knowledge Management and Acquisition for Smart Systems and Services, PKAW 2010 Vol. 6232 LNAI, p. 235-246
- Full Text:
- Reviewed:
- Description: This article investigates internet commerce security applications of a novel combined method, which uses unsupervised consensus clustering algorithms in combination with supervised classification methods. First, a variety of independent clustering algorithms are applied to a randomized sample of data. Second, several consensus functions and sophisticated algorithms are used to combine these independent clusterings into one final consensus clustering. Third, the consensus clustering of the randomized sample is used as a training set to train several fast supervised classification algorithms. Finally, these fast classification algorithms are used to classify the whole large data set. One of the advantages of this approach is in its ability to facilitate the inclusion of contributions from domain experts in order to adjust the training set created by consensus clustering. We apply this approach to profiling phishing emails selected from a very large data set supplied by the industry partners of the Centre for Informatics and Applied Optimization. Our experiments compare the performance of several classification algorithms incorporated in this scheme. © 2010 Springer-Verlag Berlin Heidelberg.
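The abstract above describes a concrete three-stage pipeline: independent clusterings of a sample, a consensus function combining them, and a fast supervised classifier trained on the consensus labels. As a rough illustration only (not the authors' implementation), the stages can be sketched in plain Python using a co-association consensus function and a nearest-centroid classifier; all function names here are hypothetical:

```python
def co_association(partitions, n):
    """Fraction of input partitions that place points i and j together."""
    m = [[0.0] * n for _ in range(n)]
    for labels in partitions:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    m[i][j] += 1.0 / len(partitions)
    return m

def consensus_labels(partitions, n, threshold=0.5):
    """Consensus clustering: connected components of the thresholded
    co-association matrix (one simple consensus function among many)."""
    m = co_association(partitions, n)
    labels, next_label = [-1] * n, 0
    for i in range(n):
        if labels[i] == -1:
            labels[i] = next_label
            stack = [i]
            while stack:
                a = stack.pop()
                for b in range(n):
                    if labels[b] == -1 and m[a][b] >= threshold:
                        labels[b] = next_label
                        stack.append(b)
            next_label += 1
    return labels

def nearest_centroid(sample, labels, point):
    """Fast supervised step: classify a new point by the nearest
    centroid of the consensus clusters found on the sample."""
    centroids = {}
    for x, l in zip(sample, labels):
        centroids.setdefault(l, []).append(x)
    def dist(c):
        pts = centroids[c]
        cx = [sum(col) / len(pts) for col in zip(*pts)]
        return sum((a - b) ** 2 for a, b in zip(cx, point))
    return min(centroids, key=dist)

# Toy sample: two well-separated groups, three independent clusterings
# (note the second uses different label names but the same grouping).
sample = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
partitions = [[0, 0, 1, 1], [1, 1, 0, 0], [0, 0, 1, 1]]
labels = consensus_labels(partitions, len(sample))
print(labels[0] == labels[1], labels[2] == labels[3])  # → True True
print(nearest_centroid(sample, labels, (4.8, 5.1)))    # → 1
```

The co-association matrix is label-permutation invariant, which is why the second partition's swapped label names do not affect the consensus; a real pipeline would substitute stronger base clusterers and classifiers for these toy components.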
The Ballarat incremental knowledge engine
- Dazeley, Richard, Warner, Philip, Johnson, Scott, Vamplew, Peter
- Authors: Dazeley, Richard , Warner, Philip , Johnson, Scott , Vamplew, Peter
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at 11th International Workshop on Knowledge Management and Acquisition for Smart Systems and Services, PKAW 2010 Vol. 6232 LNAI, p. 195-207
- Full Text:
- Reviewed:
- Description: Ripple Down Rules (RDR) is a maturing collection of methodologies for the incremental development and maintenance of medium to large rule-based knowledge systems. While earlier knowledge based systems relied on extensive modeling and knowledge engineering, RDR instead takes a simple no-model approach that merges the development and maintenance stages. Over the last twenty years RDR has been significantly expanded and applied in numerous domains. Until now researchers have generally implemented their own version of the methodologies, while commercial implementations are not made available. This has resulted in much duplicated code and the advantages of RDR not being available to a wider audience. The aim of this project is to develop a comprehensive and extensible platform that supports current and future RDR technologies, thereby allowing researchers and developers access to the power and versatility of RDR. This paper is a report on the current status of the project and marks the first release of the software. © 2010 Springer-Verlag Berlin Heidelberg.
An expert system methodology for SMEs and NPOs
- Authors: Dazeley, Richard
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at 11th Australian Conference on Knowledge Management and Intelligent Decision Support, ACKMIDS 2008, Ballarat, Victoria : 8th-10th December 2008
- Full Text:
- Description: Traditionally, Expert Systems (ES) require a full analysis of the business problem by a Knowledge Engineer (KE) to develop a solution. This inherently makes ES technology very expensive and beyond the affordability of the majority of Small and Medium sized Enterprises (SMEs) and Non-Profit Organisations (NPOs). Therefore, SMEs and NPOs tend to only have access to off-the-shelf solutions to generic problems, which rarely meet the full extent of an organisation's requirements. One existing methodological stream of research, Ripple-Down Rules (RDR), goes some of the way to being suitable for SMEs and NPOs as it removes the need for a knowledge engineer. This group of methodologies provides an environment where a company can develop large knowledge-based systems itself, specifically tailored to the company's individual situation. These methods, however, require constant supervision by the expert during development, which is still a significant burden on the organisation. This paper discusses an extension to an RDR method known as Rated MCRDR (RM), and a feature called prudence analysis. This enhanced methodology for ES development is particularly well suited to the development of ES in restricted environments such as SMEs and NPOs.
- Description: 2003006507
Prediction using a symbolic based hybrid system
- Dazeley, Richard, Kang, Byeongho
- Authors: Dazeley, Richard , Kang, Byeongho
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at Pacific Rim Knowledge Acquisition Workshop 2008, PKAW-08, Hanoi, Vietnam : 15th-16th December 2008
- Full Text:
- Description: Knowledge Based Systems (KBS) are highly successful in classification and diagnostics situations; however, they are generally unable to identify specific values for prediction problems. When used for prediction they either use some form of uncertainty reasoning or use a classification style inference where each class is a discrete predictive value instead. This paper applies a hybrid algorithm that allows an expert’s knowledge to be adapted to provide continuous values to solve prediction problems. The method applied to prediction in this paper is built on the already established Multiple Classification Ripple-Down Rules (MCRDR) approach and is referred to as Rated MCRDR (RM). The method is published in a parallel paper in this workshop titled Generalisation with Symbolic Knowledge in Online Classification. Results indicate a strong propensity to quickly adapt and provide accurate predictions.
- Description: 2003006510
The viability of prudence analysis
- Dazeley, Richard, Kang, Byeongho
- Authors: Dazeley, Richard , Kang, Byeongho
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at Pacific Rim Knowledge Acquisition Workshop 2008, PKAW-08, Hanoi, Vietnam : 15th-16th December 2008
- Full Text:
- Description: Prudence analysis (PA) is a relatively new, practical and highly innovative approach to solving the problem of brittleness. PA is essentially an incremental validation approach, where each situation or case is presented to the KBS for inferencing and the result is subsequently validated. Therefore, instead of the system simply providing a conclusion, it also provides a warning when the validation fails. This allows the user to check the solution and correct any potential deficiencies found in the knowledge base. There have been a small number of potentially viable approaches to PA published that show a high degree of accuracy in identifying errors. However, none of these are perfect: occasionally a case is classified incorrectly and not identified by the PA system. The work in PA thus far has focussed on reducing the frequency of these missed warnings, but there have been no studies on the effect of these on the final knowledge base's performance. This paper investigates how these errors in a knowledge base affect its ability to correctly classify cases. The results in this study strongly indicate that the missed errors have a significantly smaller influence on the inferencing results than would be expected, which strongly supports the viability of PA.
- Description: 2003006508