Epistemological approach to the process of practice
- Authors: Dazeley, Richard , Kang, Byeongho
- Date: 2008
- Type: Text , Journal article
- Relation: Minds and Machines Vol. 18, no. 4 (2008), p. 547-567
- Full Text:
- Reviewed:
- Description: Systems based on symbolic knowledge have performed extremely well in processing reason, yet remain beset with problems of brittleness in many domains. Connectionist approaches do similarly well in emulating interactive domains, but have struggled when modelling higher brain functions. Neither of these dichotomous approaches, however, has provided many inroads into the area of human reasoning that psychology and sociology refer to as the process of practice. This paper argues that the absence of a model for the process of practice in current approaches is a significant contributor to brittleness. This paper investigates how the process of practice relates to deeper forms of contextual representations of knowledge. While researchers and developers of knowledge based systems have often incorporated the notion of context, they treat context as a static entity, neglecting many connectionists' work in learning hidden and dynamic contexts. This paper argues that the omission of these higher forms of context is one of the fundamental problems in the application and interpretation of symbolic knowledge. Finally, these ideas for modelling context lead to a reinterpretation of situated cognition, which makes a significant step towards a philosophy of knowledge that could lead to the modelling of the process of practice. © 2008 Springer Science+Business Media B.V.
- Description: C1
Prediction using a symbolic based hybrid system
- Authors: Dazeley, Richard , Kang, Byeongho
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at Pacific Rim Knowledge Acquisition Workshop 2008, PKAW-08, Hanoi, Vietnam : 15th-16th December 2008
- Full Text:
- Description: Knowledge Based Systems (KBS) are highly successful in classification and diagnostic situations; however, they are generally unable to identify specific values for prediction problems. When used for prediction they either use some form of uncertainty reasoning or use a classification-style inference where each class is a discrete predictive value instead. This paper applies a hybrid algorithm that allows an expert’s knowledge to be adapted to provide continuous values to solve prediction problems. The method applied to prediction in this paper is built on the already established Multiple Classification Ripple-Down Rules (MCRDR) approach and is referred to as Rated MCRDR (RM). The method is published in a parallel paper in this workshop titled Generalisation with Symbolic Knowledge in Online Classification. Results indicate a strong propensity to quickly adapt and provide accurate predictions.
- Description: 2003006510
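The core idea of the abstract above — symbolic rules classify a case, and a learned numeric rating attached to each firing rule is combined into a continuous prediction — can be sketched as follows. This is a minimal illustrative sketch, not the published RM algorithm: the class name, the example rules, and the delta-rule update are all assumptions.

```python
# Hypothetical sketch: each symbolic rule carries a learned rating, and the
# prediction for a case is the sum of the ratings of all rules that fire.
# Ratings are adapted online from observed target values (delta rule).

class RatedRuleBase:
    def __init__(self, rules, learning_rate=0.1):
        # rules: list of (name, predicate) pairs; predicate(case) -> bool
        self.rules = rules
        self.weights = {name: 0.0 for name, _ in rules}
        self.lr = learning_rate

    def fired(self, case):
        return [name for name, pred in self.rules if pred(case)]

    def predict(self, case):
        # Continuous output: sum of ratings of all firing rules.
        return sum(self.weights[n] for n in self.fired(case))

    def update(self, case, target):
        # Online error-driven update, distributed over the firing rules.
        firing = self.fired(case)
        if not firing:
            return
        error = target - self.predict(case)
        for n in firing:
            self.weights[n] += self.lr * error / len(firing)

rb = RatedRuleBase([
    ("high_temp", lambda c: c["temp"] > 30),
    ("humid", lambda c: c["humidity"] > 0.7),
])
for _ in range(200):  # repeated presentations converge on the target
    rb.update({"temp": 35, "humidity": 0.8}, target=5.0)
print(round(rb.predict({"temp": 35, "humidity": 0.8}), 2))  # → 5.0
```

Because the symbolic rules stay intact and only their ratings adapt, the expert's knowledge remains inspectable while the numeric output becomes continuous.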
An expert system methodology for SMEs and NPOs
- Authors: Dazeley, Richard
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at 11th Australian Conference on Knowledge Management and Intelligent Decision Support, ACKMIDS 2008, Ballarat, Victoria : 8th-10th December 2008
- Full Text:
- Description: Traditionally, Expert Systems (ES) require a full analysis of the business problem by a Knowledge Engineer (KE) to develop a solution. This inherently makes ES technology very expensive and beyond the affordability of the majority of Small and Medium sized Enterprises (SMEs) and Non-Profit Organisations (NPOs). Therefore, SMEs and NPOs tend to only have access to off-the-shelf solutions to generic problems, which rarely meet the full extent of an organisation’s requirements. One existing methodological stream of research, Ripple-Down Rules (RDR), goes some of the way to being suitable for SMEs and NPOs, as it removes the need for a knowledge engineer. This group of methodologies provides an environment where a company can develop large knowledge based systems itself, specifically tailored to the company’s individual situation. These methods, however, require constant supervision by the expert during development, which is still a significant burden on the organisation. This paper discusses an extension to an RDR method, known as Rated MCRDR (RM), and a feature called prudence analysis. This enhanced methodology for ES development is particularly well suited to the development of ES in restricted environments such as SMEs and NPOs.
- Description: 2003006507
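The RDR property the abstract relies on — that a domain expert corrects the system by adding local exceptions rather than re-engineering the rule base — can be sketched with a minimal exception-rule tree. The structure and example values below are illustrative assumptions, not the published method.

```python
# Minimal sketch of a Ripple-Down Rules exception structure: a correction is
# attached as an exception branch to the rule that misfired, so the rest of
# the knowledge base is never touched. Names and thresholds are hypothetical.

class RDRNode:
    def __init__(self, condition, conclusion):
        self.condition = condition      # predicate(case) -> bool
        self.conclusion = conclusion
        self.if_true = None             # exception branch (refines this rule)
        self.if_false = None            # alternative branch

    def classify(self, case):
        if self.condition(case):
            # An exception branch, if present, may override our conclusion.
            if self.if_true:
                refined = self.if_true.classify(case)
                if refined is not None:
                    return refined
            return self.conclusion
        return self.if_false.classify(case) if self.if_false else None

# Default rule: everything is "normal".
root = RDRNode(lambda c: True, "normal")
# The expert sees a misclassified case and adds a local exception for it.
root.if_true = RDRNode(lambda c: c["wbc"] > 11.0, "infection")

print(root.classify({"wbc": 14.2}))   # exception fires → "infection"
print(root.classify({"wbc": 6.0}))    # falls back to default → "normal"
```

Each correction is validated only against the cases that reach that branch, which is what makes maintenance feasible without a knowledge engineer.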
Generalising symbolic knowledge in online classification and prediction
- Authors: Dazeley, Richard , Kang, Byeongho
- Date: 2009
- Type: Text , Journal article
- Relation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol. 5465 LNAI (2009), p. 91-108
- Full Text:
- Reviewed:
- Description: Increasingly, researchers and developers of knowledge based systems (KBS) have been incorporating the notion of context. For instance, Repertory Grids, Formal Concept Analysis (FCA) and Ripple-Down Rules (RDR) all integrate either implicit or explicit contextual information. However, these methodologies treat context as a static entity, neglecting many connectionists' work in learning hidden and dynamic contexts, which aid their ability to generalize. This paper presents a method that models hidden context within a symbolic domain in order to achieve a level of generalisation. The method developed builds on the already established Multiple Classification Ripple-Down Rules (MCRDR) approach and is referred to as Rated MCRDR (RM). RM retains a symbolic core, while using a connection based approach to learn a deeper understanding of the captured knowledge. This method is applied to a number of classification and prediction environments and results indicate that the method can learn the information that experts have difficulty providing. © Springer-Verlag Berlin Heidelberg 2009.
- Description: 2003006509
The viability of prudence analysis
- Authors: Dazeley, Richard , Kang, Byeongho
- Date: 2008
- Type: Text , Conference paper
- Relation: Paper presented at Pacific Rim Knowledge Acquisition Workshop 2008, PKAW-08, Hanoi, Vietnam : 15th-16th December 2008
- Full Text:
- Description: Prudence analysis (PA) is a relatively new, practical and highly innovative approach to solving the problem of brittleness. PA is essentially an incremental validation approach, where each situation or case is presented to the KBS for inferencing and the result is subsequently validated. Therefore, instead of the system simply providing a conclusion, it also provides a warning when the validation fails. This allows the user to check the solution and correct any potential deficiencies found in the knowledge base. A small number of potentially viable approaches to PA have been published that show a high degree of accuracy in identifying errors. However, none of these is perfect: very rarely, a case is classified incorrectly and not identified by the PA system. The work in PA thus far has focussed on reducing the frequency of these missed warnings; however, there have been no studies on the effect of these on the final knowledge base’s performance. This paper investigates how these errors in a knowledge base affect its ability to correctly classify cases. The results in this study strongly indicate that the missed errors have a significantly smaller influence on the inferencing results than would be expected, which strongly supports the viability of PA.
- Description: 2003006508
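The infer-then-validate loop the abstract describes can be sketched in a few lines. The simple range validator below is a hypothetical stand-in, not one of the published PA methods; the KBS and case values are likewise illustrative.

```python
# Sketch of the prudence-analysis workflow: every case is classified by the
# KBS and then validated; a failed validation attaches a warning so the user
# can inspect the solution and patch the knowledge base.

def prudent_classify(kbs, validator, case):
    # Infer a conclusion, then attach a warning flag from the validator.
    return kbs(case), not validator(case)

seen = []

def range_validator(case):
    # Stand-in validator: warn whenever an attribute falls outside the
    # bounds of previously seen cases.
    v = case["x"]
    ok = bool(seen) and min(seen) <= v <= max(seen)
    seen.append(v)
    return ok

kbs = lambda case: "high" if case["x"] > 10 else "low"

results = []
for x in [5, 8, 12, 7]:
    conclusion, warning = prudent_classify(kbs, range_validator, {"x": x})
    results.append((x, conclusion, warning))
    if warning:
        print(f"case x={x}: {conclusion} -- WARN: check the knowledge base")
    else:
        print(f"case x={x}: {conclusion}")
```

Early cases trigger warnings because the validator has seen too little; once the observed range covers a new case, it passes silently. A missed warning corresponds to the validator returning ok for a case the KBS actually misclassifies.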
The Ballarat incremental knowledge engine
- Authors: Dazeley, Richard , Warner, Philip , Johnson, Scott , Vamplew, Peter
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at 11th International Workshop on Knowledge Management and Acquisition for Smart Systems and Services, PKAW 2010 Vol. 6232 LNAI, p. 195-207
- Full Text:
- Reviewed:
- Description: Ripple Down Rules (RDR) is a maturing collection of methodologies for the incremental development and maintenance of medium to large rule-based knowledge systems. While earlier knowledge based systems relied on extensive modeling and knowledge engineering, RDR instead takes a simple no-model approach that merges the development and maintenance stages. Over the last twenty years RDR has been significantly expanded and applied in numerous domains. Until now researchers have generally implemented their own version of the methodologies, while commercial implementations are not made available. This has resulted in much duplicated code and the advantages of RDR not being available to a wider audience. The aim of this project is to develop a comprehensive and extensible platform that supports current and future RDR technologies, thereby allowing researchers and developers access to the power and versatility of RDR. This paper is a report on the current status of the project and marks the first release of the software. © 2010 Springer-Verlag Berlin Heidelberg.
Structure learning of Bayesian Networks using global optimization with applications in data classification
- Authors: Taheri, Sona , Mammadov, Musa
- Date: 2014
- Type: Text , Journal article
- Relation: Optimization Letters Vol. 9, no. 5 (2014), p. 931-948
- Full Text:
- Reviewed:
- Description: Bayesian Networks are increasingly popular methods of modeling uncertainty in artificial intelligence and machine learning. A Bayesian Network consists of a directed acyclic graph in which each node represents a variable and each arc represents a probabilistic dependency between two variables. Constructing a Bayesian Network from data is a learning process that consists of two steps: learning structure and learning parameters. Learning a network structure from data is the most difficult task in this process. This paper presents a new algorithm for constructing an optimal structure for Bayesian Networks based on optimization. The algorithm has two major parts. First, we define an optimization model to find better network graphs. Then, we apply an optimization approach for removing possible cycles from the directed graphs obtained in the first part, which is the first of its kind in the literature. The main advantage of the proposed method is that the maximal number of parents for variables is not fixed a priori and is defined during the optimization procedure. The method also considers all networks, including cyclic ones, and then chooses the best structure by applying a global optimization method. To show the efficiency of the algorithm, several closely related algorithms, including the unrestricted dependency Bayesian Network algorithm, as well as the benchmark algorithms SVM and C4.5, are employed for comparison. We apply these algorithms to data classification; data sets are taken from the UCI machine learning repository and LIBSVM. © 2014, Springer-Verlag Berlin Heidelberg.
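The setting the abstract describes — score candidate arcs, then remove cycles so the result is a directed acyclic graph — can be illustrated with a generic sketch. The greedy best-first arc selection and DFS cycle test below are common stand-ins, not the authors' global-optimization formulation, and the scores are made up.

```python
# Illustrative sketch: arcs are ranked by score and added best-first, skipping
# any arc that would create a cycle, so the final structure is a DAG.

def has_cycle(nodes, arcs):
    # DFS-based cycle detection over the directed graph.
    adj = {n: [b for a, b in arcs if a == n] for n in nodes}
    state = {n: 0 for n in nodes}          # 0=unseen, 1=on stack, 2=done
    def dfs(n):
        state[n] = 1
        for m in adj[n]:
            if state[m] == 1 or (state[m] == 0 and dfs(m)):
                return True
        state[n] = 2
        return False
    return any(state[n] == 0 and dfs(n) for n in nodes)

def greedy_dag(nodes, arc_scores):
    # Add arcs in descending score order; reject cycle-creating arcs.
    dag = []
    for arc in sorted(arc_scores, key=arc_scores.get, reverse=True):
        if not has_cycle(nodes, dag + [arc]):
            dag.append(arc)
    return dag

nodes = ["A", "B", "C"]
scores = {("A", "B"): 0.9, ("B", "C"): 0.8, ("C", "A"): 0.7}
print(greedy_dag(nodes, scores))   # ("C", "A") is rejected: it closes a cycle
```

Note that this greedy procedure has no parent-count cap, which loosely mirrors the paper's point that the maximal number of parents need not be fixed a priori; the paper's actual cycle-removal step is an optimization formulation rather than a greedy filter.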