Performance evaluation of multi-tier ensemble classifiers for phishing websites
- Abawajy, Jemal, Beliakov, Gleb, Kelarev, Andrei, Yearwood, John
- Authors: Abawajy, Jemal , Beliakov, Gleb , Kelarev, Andrei , Yearwood, John
- Date: 2012
- Type: Text , Conference proceedings
- Full Text:
- Description: This article is devoted to large multi-tier ensemble classifiers generated as ensembles of ensembles and applied to phishing websites. Our new ensemble construction is a special case of the general and productive multi-tier approach well known in information security. Many efficient multi-tier classifiers have been considered in the literature. Our new contribution is in generating large systems as ensembles of ensembles by linking a top-tier ensemble to another middle-tier ensemble instead of a base classifier, so that the top-tier ensemble can generate the whole system. This automatic generation capability incorporates many large ensemble classifiers in two tiers simultaneously and automatically combines them into one hierarchical unified system, so that one ensemble is an integral part of another. This new construction makes it easy to set up and run such large systems. The present article concentrates on investigating the performance of these new multi-tier ensembles on the example of detecting phishing websites. We carried out systematic experiments evaluating several essential ensemble techniques as well as more recent approaches, studying their performance as parts of multi-level ensembles with three tiers. The results presented here demonstrate that the new three-tier ensemble classifiers performed better than the base classifiers and standard ensembles included in the system. This application to the classification of phishing websites shows that the new method of combining diverse ensemble techniques into a unified hierarchical three-tier ensemble can be applied to increase the performance of classifiers in situations where data can be processed on a large computer.
DOWL : A dynamic ontology language
- Authors: Avery, John , Yearwood, John
- Date: 2003
- Type: Text , Conference paper
- Relation: Paper presented at IADIS International Conference WWW/Internet 2003, Algarve, Portugal : 5th August, 2003
- Full Text:
- Reviewed:
- Description: Abstract: Ontologies in a web setting, particularly those used in a group context (such as a virtual community), need to be flexible and open to changes that reflect the evolution of knowledge. OWL, the ontology language of the Semantic Web, provides very little for facilitating the description of evolutionary changes in an ontology. We propose a dynamic web ontology language (dOWL), an extension to OWL, which consists of a set of elements that can be used to model these evolutionary changes in an ontology.
- Description: E1
- Description: 2003000552
A formal description of ontology change in OWL
- Authors: Avery, John , Yearwood, John
- Date: 2005
- Type: Text , Conference paper
- Relation: Paper presented at the Third International Conference on Information Technology and Applications, ICITA 2005, Sydney : 4th - 7th July, 2005
- Full Text:
- Reviewed:
- Description: There are three main activities involved in managing ontology change: first, identifying changes; second, describing these identified changes; and finally, describing and handling the ramifications of the changes. In previous work we presented a language (DOWL) for describing ontology change; in this paper we demonstrate how changes described in this language can be represented in the RDF abstract syntax, which enables us to describe the ramifications of a change in a formal manner. This formalism can provide the basis for an automated ontology change management system.
- Description: E1
- Description: 2003001448
New algorithms for multi-class cancer diagnosis using tumor gene expression signatures
- Bagirov, Adil, Ferguson, Brent, Ivkovic, Sasha, Saunders, Gary, Yearwood, John
- Authors: Bagirov, Adil , Ferguson, Brent , Ivkovic, Sasha , Saunders, Gary , Yearwood, John
- Date: 2003
- Type: Text , Journal article
- Relation: Bioinformatics Vol. 19, no. 14 (2003), p. 1800-1807
- Full Text:
- Reviewed:
- Description: Motivation: The increasing use of DNA microarray-based tumor gene expression profiles for cancer diagnosis requires mathematical methods with high accuracy for solving clustering, feature selection and classification problems of gene expression data. Results: New algorithms are developed for solving clustering, feature selection and classification problems of gene expression data. The clustering algorithm is based on optimization techniques and allows the calculation of clusters step-by-step. This approach allows us to find as many clusters as a data set contains with respect to some tolerance. Feature selection is crucial for a gene expression database. Our feature selection algorithm is based on calculating overlaps of different genes. The database used contains over 16,000 genes, and this number is considerably reduced by feature selection. We propose a classification algorithm where each tissue sample is considered as the center of a cluster which is a ball. The results of numerical experiments confirm that the classification algorithm in combination with the feature selection algorithm performs slightly better than the published results for multi-class classifiers based on support vector machines for this data set.
- Description: C1
- Description: 2003000439
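The centre-of-a-ball classification idea in the abstract above can be illustrated with a toy sketch. The fixed radius and tiny data set here are assumptions for illustration only; the paper derives its clusters and radii from the data.

```python
import numpy as np

def ball_classify(train_X, train_y, x, radius):
    """Toy version of the centre-of-a-ball idea: each training sample is
    the centre of a ball; a new point takes the label of the nearest
    centre whose ball contains it, or None if no ball covers it.
    A fixed radius is assumed here purely for illustration."""
    d = np.linalg.norm(train_X - x, axis=1)
    inside = d <= radius
    if not inside.any():
        return None
    return train_y[np.argmin(np.where(inside, d, np.inf))]

# Two training samples (ball centres) with different class labels.
train_X = np.array([[0.0, 0.0], [5.0, 5.0]])
train_y = np.array([0, 1])
label = ball_classify(train_X, train_y, np.array([0.5, 0.2]), radius=1.0)
```

A query point covered by no ball is left unclassified, which is one way such a classifier can abstain on samples far from all training data.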
A global optimisation approach to classification in medical diagnosis and prognosis
- Bagirov, Adil, Rubinov, Alex, Yearwood, John, Stranieri, Andrew
- Authors: Bagirov, Adil , Rubinov, Alex , Yearwood, John , Stranieri, Andrew
- Date: 2001
- Type: Text , Conference paper
- Relation: Paper presented at 34th Hawaii International Conference on System Sciences, HICSS-34, Maui, Hawaii, USA : 3rd-6th January 2001
- Full Text:
- Description: In this paper global optimisation-based techniques are studied in order to increase the accuracy of medical diagnosis and prognosis with FNA image data from the Wisconsin Diagnostic and Prognostic Breast Cancer databases. First we discuss the problem of determining the most informative features for the classification of cancerous cases in the databases under consideration. Then we apply a technique based on convex and global optimisation to breast cancer diagnosis. It allows the classification of benign cases and malignant ones and the subsequent diagnosis of patients with very high accuracy. The third application of this technique is a method that calculates centres of clusters to predict when breast cancer is likely to recur in patients for which cancer has been removed. The technique achieves higher accuracy with these databases than reported elsewhere in the literature.
- Description: 2003003950
Unsupervised and supervised data classification via nonsmooth and global optimisation
- Bagirov, Adil, Rubinov, Alex, Sukhorukova, Nadezda, Yearwood, John
- Authors: Bagirov, Adil , Rubinov, Alex , Sukhorukova, Nadezda , Yearwood, John
- Date: 2003
- Type: Text , Journal article
- Relation: Top Vol. 11, no. 1 (2003), p. 1-92
- Full Text:
- Reviewed:
- Description: We examine various methods for data clustering and data classification that are based on the minimization of the so-called cluster function and its modifications. These functions are nonsmooth and nonconvex. We use Discrete Gradient methods for their local minimization. We also consider a combination of this method with the cutting angle method for global minimization. We present and discuss results of numerical experiments.
- Description: C1
- Description: 2003000421
Derivative-free optimization and neural networks for robust regression
- Beliakov, Gleb, Kelarev, Andrei, Yearwood, John
- Authors: Beliakov, Gleb , Kelarev, Andrei , Yearwood, John
- Date: 2012
- Type: Text , Journal article
- Relation: Optimization Vol. 61, no. 12 (2012), p. 1467-1490
- Full Text:
- Reviewed:
- Description: Large outliers break down linear and nonlinear regression models. Robust regression methods allow one to filter out the outliers when building a model. By replacing the traditional least squares criterion with the least trimmed squares (LTS) criterion, in which half of the data is treated as potential outliers, one can fit accurate regression models to strongly contaminated data. High-breakdown methods have become very well established in linear regression, but have started being applied to nonlinear regression only recently. In this work, we examine the problem of fitting artificial neural networks (ANNs) to contaminated data using the LTS criterion. We introduce a penalized LTS criterion which prevents unnecessary removal of valid data. Training of ANNs leads to a challenging non-smooth global optimization problem. We compare the efficiency of several derivative-free optimization methods in solving it, and show that our approach identifies the outliers correctly when ANNs are used for nonlinear regression. © 2012 Copyright Taylor and Francis Group, LLC.
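The LTS criterion mentioned in this abstract is easy to state: minimise the sum of the h smallest squared residuals, so the remaining points can be treated as outliers. The sketch below shows plain LTS only; the paper's penalty term is not reproduced here, and the example residuals are invented for illustration.

```python
import numpy as np

def lts_loss(residuals, h):
    """Least trimmed squares: the sum of the h smallest squared
    residuals, so up to len(residuals) - h points may be discarded
    as potential outliers."""
    r2 = np.sort(np.square(np.asarray(residuals, float)))
    return float(r2[:h].sum())

# A gross outlier in the last residual barely affects the trimmed loss,
# whereas an ordinary least-squares loss would be dominated by it.
clean = np.array([0.1, -0.2, 0.05, 0.15])
dirty = np.array([0.1, -0.2, 0.05, 100.0])
h = 3
loss_clean = lts_loss(clean, h)  # sum of the 3 smallest squares
loss_dirty = lts_loss(dirty, h)  # the 100.0 residual is trimmed away
```

Because the sort makes the loss piecewise-defined in the parameters, minimising it over ANN weights is non-smooth, which is why the paper turns to derivative-free optimization.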
Application of rank correlation, clustering and classification in information security
- Beliakov, Gleb, Yearwood, John, Kelarev, Andrei
- Authors: Beliakov, Gleb , Yearwood, John , Kelarev, Andrei
- Date: 2012
- Type: Text , Journal article
- Relation: Journal of Networks Vol. 7, no. 6 (2012), p. 935-945
- Full Text:
- Reviewed:
- Description: This article is devoted to experimental investigation of a novel application of a clustering technique recently introduced by the authors, in order to use robust and stable consensus functions in information security, where it is often necessary to process large data sets and monitor outcomes in real time, as is required, for example, for intrusion detection. Here we concentrate on a particular application to the profiling of phishing websites. First, we apply several independent clustering algorithms to a randomized sample of data to obtain independent initial clusterings. The silhouette index is used to determine the number of clusters. Second, rank correlation is used to select a subset of features for dimensionality reduction. We investigate the effectiveness of the Pearson Linear Correlation Coefficient, the Spearman Rank Correlation Coefficient and the Goodman-Kruskal Correlation Coefficient in this application. Third, we use a consensus function to combine the independent initial clusterings into one consensus clustering. Fourth, we train fast supervised classification algorithms on the resulting consensus clustering in order to enable them to process the whole large data set as well as new data. The precision and recall of classifiers at the final stage of this scheme are critical for the effectiveness of the whole procedure. We investigated various combinations of several correlation coefficients, consensus functions, and a variety of supervised classification algorithms. © 2012 Academy Publisher.
- Description: 2003010277
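The rank-correlation step in the pipeline above can be illustrated by Spearman's coefficient, which is simply the Pearson correlation of the ranks. This toy implementation ignores tied values (production code such as `scipy.stats.spearmanr` uses average ranks for ties) and shows why a monotone but nonlinear relationship between features still scores perfectly:

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    No tie correction - ties would need average ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# A monotone (but nonlinear) relationship gets a perfect rank score,
# which is what makes rank correlation useful for feature screening.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
rho = spearman(x, x ** 3)
```

Features whose pairwise rank correlation is near 1 carry largely redundant ordering information, so one of each such pair can be dropped for dimensionality reduction.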
Consensus clustering and supervised classification for profiling phishing emails in internet commerce security
- Dazeley, Richard, Yearwood, John, Kang, Byeongho, Kelarev, Andrei
- Authors: Dazeley, Richard , Yearwood, John , Kang, Byeongho , Kelarev, Andrei
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at 11th International Workshop on Knowledge Management and Acquisition for Smart Systems and Services, PKAW 2010 Vol. 6232 LNAI, p. 235-246
- Full Text:
- Reviewed:
- Description: This article investigates internet commerce security applications of a novel combined method, which uses unsupervised consensus clustering algorithms in combination with supervised classification methods. First, a variety of independent clustering algorithms are applied to a randomized sample of data. Second, several consensus functions and sophisticated algorithms are used to combine these independent clusterings into one final consensus clustering. Third, the consensus clustering of the randomized sample is used as a training set to train several fast supervised classification algorithms. Finally, these fast classification algorithms are used to classify the whole large data set. One of the advantages of this approach is its ability to facilitate the inclusion of contributions from domain experts in order to adjust the training set created by consensus clustering. We apply this approach to profiling phishing emails selected from a very large data set supplied by the industry partners of the Centre for Informatics and Applied Optimization. Our experiments compare the performance of several classification algorithms incorporated in this scheme. © 2010 Springer-Verlag Berlin Heidelberg.
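One common family of consensus functions, evidence accumulation over a co-association matrix, gives a flavour of the second step in the scheme above. The abstract does not name the consensus functions used, so this is an illustrative stand-in rather than the paper's method:

```python
import numpy as np

def coassociation(labelings):
    """Evidence-accumulation consensus: entry (i, j) is the fraction of
    the input clusterings that put samples i and j in the same cluster.
    Clustering this matrix (e.g. by thresholding or linkage) yields a
    single consensus clustering."""
    labelings = np.asarray(labelings)
    n = labelings.shape[1]
    M = np.zeros((n, n))
    for labels in labelings:
        M += (labels[:, None] == labels[None, :])
    return M / len(labelings)

# Three independent clusterings of five samples; cluster ids are
# arbitrary per run, but co-membership is comparable across runs.
runs = [[0, 0, 1, 1, 1],
        [1, 1, 0, 0, 0],
        [0, 0, 0, 1, 1]]
M = coassociation(runs)
```

Note that the first two runs agree completely despite using opposite label names: the co-association matrix depends only on which samples are grouped together, not on the labels themselves.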
Optimization of multiple classifiers in data mining based on string rewriting systems
- Dazeley, Richard, Kelarev, Andrei, Yearwood, John, Mammadov, Musa
- Authors: Dazeley, Richard , Kelarev, Andrei , Yearwood, John , Mammadov, Musa
- Date: 2009
- Type: Text , Journal article
- Relation: Asian-European Journal of Mathematics Vol. 2, no. 1 (2009), p. 41-56
- Relation: https://purl.org/au-research/grants/arc/DP0211866
- Relation: https://purl.org/au-research/grants/arc/LP0669752
- Full Text:
- Description: Optimization of multiple classifiers is an important problem in data mining. We introduce additional structure on the class sets of the classifiers using string rewriting systems with a convenient matrix representation. The aim of the present paper is to develop an efficient algorithm for the optimization of the number of errors of individual classifiers, which can be corrected by these multiple classifiers.
An experiment in task decomposition and ensembling for a modular artificial neural network
- Ferguson, Brent, Ghosh, Ranadhir, Yearwood, John
- Authors: Ferguson, Brent , Ghosh, Ranadhir , Yearwood, John
- Date: 2004
- Type: Text , Conference paper
- Relation: Paper presented at Innovations in Applied Artificial Intelligence: 17th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, Ottawa, Canada : 17th May, 2004
- Full Text:
- Reviewed:
- Description: Modular neural networks have the possibility of overcoming common scalability and interference problems experienced by fully connected neural networks when applied to large databases. In this paper we trial an approach to constructing modular ANNs for a very large problem from CEDAR for the classification of handwritten characters. In our approach, we apply progressive task decomposition methods based upon clustering and regression techniques to find modules. We then test methods for combining the modules into ensembles and compare their structural characteristics and classification performance with those of an ANN having a fully connected topology. The results reveal improvements to classification rates as well as network topologies for this problem.
- Description: E1
- Description: 2003000852
Optimization of matrix semirings for classification systems
- Gao, David, Kelarev, Andrei, Yearwood, John
- Authors: Gao, David , Kelarev, Andrei , Yearwood, John
- Date: 2011
- Type: Text , Journal article
- Relation: Bulletin of the Australian Mathematical Society Vol. 84, no. 3 (2011), p. 492-503
- Full Text:
- Reviewed:
- Description: The max-plus algebra is well known and has useful applications in the investigation of discrete event systems and affine equations. Structural matrix rings have been considered by many authors too. This article introduces more general structural matrix semirings, which include all matrix semirings over the max-plus algebra. We investigate properties of ideals in this construction motivated by applications to the design of centroid-based classification systems, or classifiers, as well as multiple classifiers combining several initial classifiers. The first main theorem of this paper shows that structural matrix semirings possess convenient visible generating sets for ideals. Our second main theorem uses two special sets to determine the weights of all ideals and describe all matrix ideals with the largest possible weight, which are optimal for the design of classification systems. © Copyright Australian Mathematical Publishing Association Inc. 2011.
- Description: 2003009498
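The max-plus algebra mentioned in the abstract above replaces ordinary addition with max and ordinary multiplication with +; matrix products over this semiring can be sketched in a few lines (NumPy assumed, with -inf as the additive identity):

```python
import numpy as np

NEG_INF = float("-inf")  # additive identity of the max-plus semiring

def maxplus_matmul(A, B):
    """Matrix product over the max-plus semiring: 'addition' is max and
    'multiplication' is +, so C[i, j] = max_k (A[i, k] + B[k, j])."""
    A = np.asarray(A, float)
    B = np.asarray(B, float)
    # Broadcast to shape (rows, k, cols), then take max over k.
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

A = [[0.0, 3.0], [NEG_INF, 1.0]]
B = [[1.0, 2.0], [0.0, 4.0]]
C = maxplus_matmul(A, B)
```

With -inf playing the role of zero, structural (block-pattern) constraints on matrices are preserved under this product, which is what makes structural matrix semirings over the max-plus algebra well defined.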
A fully automated CAD system using multi-category feature selection with restricted recombination
- Ghosh, Ranadhir, Ghosh, Moumita, Yearwood, John, Mukherjee, Subhasis
- Authors: Ghosh, Ranadhir , Ghosh, Moumita , Yearwood, John , Mukherjee, Subhasis
- Date: 2007
- Type: Text , Conference paper
- Relation: Paper presented at 6th IEEE/ACIS International Conference on Computer and Information Science, ICIS 2007, Melbourne, Victoria : 11th-13th July 2007 p. 106-111
- Full Text:
- Description: In pattern recognition problems, features play an important role in classification results. Which features are used, and how many, is very important for the classification process. Most real-life classification problems use different categories of features, and it is desirable to find the optimal combination of features that improves the performance of the classifier. Various selection frameworks exist for selecting features, but most do not incorporate the impact of one category of features on another; even those that do can produce conflict between the categories. In this paper we propose a restricted-crossover selection framework which incorporates the impact of different categories on each other while restricting the search within each category as it searches the global region of the search space. The results obtained by the proposed framework are promising.
- Description: 2003005429
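The restricted-crossover idea described in the abstract above can be illustrated with a small sketch: a binary chromosome is partitioned into feature categories, and crossover exchanges genes within a single randomly chosen category only, so search inside each category is not disrupted by the others. Everything here (the category layout, the toy surrogate fitness, the population settings) is a hypothetical stand-in, not the paper's implementation:

```python
import random

random.seed(0)

# Hypothetical feature categories; in the paper these would come from
# different feature-extraction techniques.
CATEGORIES = {"shape": range(0, 4), "texture": range(4, 9), "intensity": range(9, 12)}
N_FEATURES = 12
GOOD = {1, 5, 6, 10}  # stand-in for features that truly help the classifier


def fitness(mask):
    # Toy surrogate for classifier accuracy: reward "good" features,
    # penalise subset size to favour compact feature sets.
    return sum(1 for i, b in enumerate(mask) if b and i in GOOD) - 0.1 * sum(mask)


def restricted_crossover(a, b):
    # Swap genes within ONE randomly chosen category only.
    cat = random.choice(list(CATEGORIES))
    child = a[:]
    for i in CATEGORIES[cat]:
        child[i] = b[i]
    return child


def mutate(mask, rate=0.05):
    return [1 - b if random.random() < rate else b for b in mask]


pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]  # elitist selection
    pop = elite + [mutate(restricted_crossover(*random.sample(elite, 2)))
                   for _ in range(10)]

best = max(pop, key=fitness)
print(sorted(i for i, b in enumerate(best) if b))
```

In the actual framework the surrogate fitness would be replaced by the performance of the trained classifier on the candidate subset.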
A modular framework for multi category feature selection in digital mammography
- Ghosh, Ranadhir, Ghosh, Moumita, Yearwood, John
- Authors: Ghosh, Ranadhir , Ghosh, Moumita , Yearwood, John
- Date: 2004
- Type: Text , Conference paper
- Relation: Paper presented at ESANN 2004 Proceedings: European Symposium on Artificial Neural Networks, Bruges, Belgium : 28/04/2004, p. 175-180
- Full Text:
- Reviewed:
- Description: Many existing studies have used different approaches for recognition in digital mammography, employing various ANN classifier-modelling techniques and different types of feature extraction. It has been observed that, beyond a certain point, the inclusion of additional features leads to worse rather than better performance. Moreover, the choice of features to represent the patterns affects several aspects of the pattern recognition problem, such as accuracy, required learning time and the necessary number of samples. A common problem with multi-category feature classification is the conflict between the categories: no feasible solution is simultaneously optimal for all categories. To find an optimal solution, the search space can be divided by individual category into sub-regions, with the results finally merged through a decision support system. In this paper we propose a canonical GA-based modular feature selection approach combined with a standard MLP.
- Description: E1
- Description: 2003000872
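The modular, per-category search with a final merge that the abstract above describes can be sketched roughly as follows: a canonical GA runs separately over each category's sub-region of the search space, and the per-module winners are then merged into one feature set. The category names, the per-category surrogate fitness, and all parameters are illustrative assumptions, not the authors' mammography pipeline (which evaluates subsets with an MLP):

```python
import random

random.seed(4)

# Hypothetical categories of mammographic features (name -> size) and a
# toy notion of which features in each category are informative.
CATS = {"intensity": 5, "shape": 4, "texture": 6}
GOOD = {"intensity": {1, 3}, "shape": {0}, "texture": {2, 5}}


def fitness(cat, mask):
    # Surrogate for MLP validation accuracy on this category's subset.
    hits = sum(1 for i, b in enumerate(mask) if b and i in GOOD[cat])
    return hits - 0.1 * sum(mask)  # compact subsets preferred


def run_ga(cat, n, gens=40, size=16):
    # Canonical GA confined to one category's sub-region.
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(size)]
    for _ in range(gens):
        pop.sort(key=lambda m: fitness(cat, m), reverse=True)
        elite = pop[: size // 2]
        children = []
        for _ in range(size - len(elite)):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < 0.2:
                j = random.randrange(n)
                child[j] = 1 - child[j]        # point mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda m: fitness(cat, m))


# Merge step: each module contributes its best subset to the final set.
selected = {cat: run_ga(cat, n) for cat, n in CATS.items()}
print({cat: [i for i, b in enumerate(m) if b] for cat, m in selected.items()})
```

Running the modules independently avoids inter-category conflict during search; in the paper the merge is handled by a decision support system rather than simple concatenation.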
A new scoring system in Cystic Fibrosis : Statistical tools for database analysis - A preliminary report
- Hafen, Gaudenz, Hurst, Cameron, Yearwood, John, Smith, Julie, Dzalilov, Zari, Robinson, P. J.
- Authors: Hafen, Gaudenz , Hurst, Cameron , Yearwood, John , Smith, Julie , Dzalilov, Zari , Robinson, P. J.
- Date: 2008
- Type: Text , Journal article
- Relation: BMC Medical Informatics and Decision Making Vol. 8, no. 44 (2008), p.1-11
- Full Text:
- Reviewed:
- Description: Background. Cystic fibrosis is the most common fatal genetic disorder in the Caucasian population. Scoring systems for assessment of cystic fibrosis disease severity have been used for almost 50 years, without being adapted to the milder phenotype of the disease in the 21st century. The aim of the current project is to develop a new scoring system using a database and employing various statistical tools. This study protocol reports the development of the statistical tools needed to create such a scoring system. Methods. The evaluation is based on the Cystic Fibrosis database of the cohort at the Royal Children's Hospital in Melbourne. Initially, unsupervised clustering of all data records was performed using a range of clustering algorithms; in particular, incremental clustering algorithms were used. The clusters obtained were characterised using rules from decision trees and the results examined by clinicians. In order to obtain a clearer definition of classes, expert opinion of each individual's clinical severity was sought. After data preparation, including expert opinion of an individual's clinical severity on a 3-point scale (mild, moderate and severe disease), two multivariate techniques were used throughout the analysis to establish a method that would have better success in feature selection and model derivation: 'Canonical Analysis of Principal Coordinates' and 'Linear Discriminant Analysis'. A 3-step procedure was performed with (1) selection of features, (2) extraction of 5 severity classes out of the 3 severity classes defined by expert opinion, and (3) establishment of calibration datasets. Results. (1) Feature selection: CAP has a more effective "modelling" focus than DA.
(2) Extraction of 5 severity classes: after variables were identified as important in discriminating contiguous CF severity groups on the 3-point scale (mild/moderate and moderate/severe), a Discriminant Function (DF) was used to determine the new groups: mild, intermediate moderate, moderate, intermediate severe and severe disease. (3) Generated confusion tables showed a misclassification rate of 19.1% for males and 16.5% for females, with a majority of misallocations into adjacent severity classes, particularly for males. Conclusion. Our preliminary data show that using CAP for feature selection and Linear DA to derive the actual model in a CF database might be helpful in developing a scoring system. However, there are several limitations; in particular, more data entry points are needed to finalise a score, and the statistical tools have to be further refined and validated by re-running the statistical methods on the larger dataset. © 2008 Hafen et al; licensee BioMed Central Ltd.
Visual tools for analysing evolution, emergence, and error in data streams
- Hart, Sol, Yearwood, John, Bagirov, Adil
- Authors: Hart, Sol , Yearwood, John , Bagirov, Adil
- Date: 2007
- Type: Text , Conference paper
- Relation: Paper presented at 6th IEEE/ACIS International Conference on Computer and Information Science, ICIS 2007, Melbourne, Victoria : 11th-13th July 2007 p. 987-992
- Full Text:
- Description: The relatively new field of stream mining has necessitated the development of robust drift-aware algorithms that provide accurate, real-time data handling capabilities. Tools are needed to assess and diagnose important trends and investigate drift evolution parameters. In this paper, we present two novel visualisation techniques, Pixie and Luna graphs, which incorporate salient group statistics coupled with intuitive visual representations of multidimensional groupings over time. Through the novel representations presented here, spatial interactions between temporal divisions can be diagnosed and overall distribution patterns identified. They provide a means of evaluating, in a non-constrained capacity, commonly constrained evolutionary problems.
- Description: 2003005432
Exploring novel features and decision rules to identify cardiovascular autonomic neuropathy using a hybrid of wrapper-filter based feature selection
- Huda, Shamsul, Jelinek, Herbert, Ray, Biplob, Stranieri, Andrew, Yearwood, John
- Authors: Huda, Shamsul , Jelinek, Herbert , Ray, Biplob , Stranieri, Andrew , Yearwood, John
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at the 2010 6th International Conference on Intelligent Sensors, Sensor Networks and Information Processing, ISSNIP 2010 p. 297-302
- Full Text:
- Reviewed:
- Description: Cardiovascular autonomic neuropathy (CAN) is one of the important causes of mortality among diabetes patients. Statistics show that more than 22% of people with type 2 diabetes mellitus suffer from CAN, which in turn leads to cardiovascular disease (heart attack, stroke). Early detection of CAN could therefore reduce mortality. The traditional method for detection of CAN uses Ewing's algorithm, in which five non-invasive cardiovascular tests are used. It is often difficult for clinicians to collect Ewing battery data from patients due to onerous test conditions. In this paper, we propose a hybrid wrapper-filter approach to find novel features from patients' ECG records and then generate decision rules from the new features for easier detection of CAN. In the proposed feature selection, a hybrid of filter (Maximum Relevance, MR) and wrapper (Artificial Neural Net Input Gain Measurement Approximation, ANNIGMA) approaches (MR-ANNIGMA) is used. The combined heuristic in the hybrid MR-ANNIGMA takes advantage of the complementary properties of both the filter and wrapper heuristics and can find significant features. The selected feature set is used to generate a new set of rules for detection of CAN. Experiments on real patient records show that the proposed method finds a smaller set of clinically significant features for detection of CAN than the traditional method, which could lead to an easier way to diagnose CAN. © 2010 IEEE.
A constraint-based evolutionary learning approach to the expectation maximization for optimal estimation of the hidden Markov model for speech signal modeling
- Huda, Shamsul, Yearwood, John, Togneri, Roberto
- Authors: Huda, Shamsul , Yearwood, John , Togneri, Roberto
- Date: 2009
- Type: Text , Journal article
- Relation: IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics Vol. 39, no. 1 (2009), p. 182-197
- Full Text:
- Reviewed:
- Description: This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM, the CEL-EM. The novelty of our hybrid algorithm (CEL-EM) is that it is applicable to the estimation of constraint-based models with many constraints and large numbers of parameters (which use EM), such as the HMM. Two constraint-based versions of the CEL-EM with different fusion strategies have been proposed, using a constraint-based EA and EM for better estimation of the HMM in ASR. The first uses a traditional constraint-handling mechanism of the EA. The other version transforms the constrained optimization problem into an unconstrained problem using Lagrange multipliers. The fusion strategies for the CEL-EM use a staged-fusion approach, in which EM is invoked periodically after the EA has executed for a specific period of time, to maintain the global sampling capabilities of the EA in the hybrid algorithm. A variable initialization approach (VIA) has been proposed using variable segmentation to provide a better initialization for the EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a top-standard EM (VIA-EM, constructed by applying the VIA to EM). © 2008 IEEE.
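The staged-fusion idea behind CEL-EM, running an EA for some generations and periodically refining the best individuals with an EM step, can be illustrated on a far simpler estimation problem than HMM training. The sketch below uses a two-component 1-D Gaussian mixture with known variances and weights as a stand-in; the data, fusion period, and all EA settings are assumptions for illustration only:

```python
import math
import random

random.seed(1)

# Toy data from a two-component 1-D Gaussian mixture (unit variances,
# equal weights) -- a stand-in for the HMM densities in the paper.
data = ([random.gauss(-2, 1) for _ in range(200)]
        + [random.gauss(3, 1) for _ in range(200)])


def loglik(mu):
    # Sample log-likelihood up to an additive constant.
    m1, m2 = mu
    return sum(math.log(0.5 * math.exp(-(x - m1) ** 2 / 2)
                        + 0.5 * math.exp(-(x - m2) ** 2 / 2)) for x in data)


def em_step(mu):
    # One standard EM update: E-step responsibilities, M-step means.
    m1, m2 = mu
    r1 = []
    for x in data:
        p1 = math.exp(-(x - m1) ** 2 / 2)
        p2 = math.exp(-(x - m2) ** 2 / 2)
        r1.append(p1 / (p1 + p2))
    n1 = sum(r1)
    new_m1 = sum(r * x for r, x in zip(r1, data)) / n1
    new_m2 = sum((1 - r) * x for r, x in zip(r1, data)) / (len(data) - n1)
    return (new_m1, new_m2)


# Evolutionary loop with staged fusion: every FUSE generations the best
# individuals are refined by EM, keeping the EA's global sampling between.
FUSE = 5
pop = [(random.uniform(-6, 6), random.uniform(-6, 6)) for _ in range(12)]
for gen in range(30):
    pop.sort(key=loglik, reverse=True)
    if gen % FUSE == 0:
        pop[:3] = [em_step(mu) for mu in pop[:3]]  # local EM refinement
    parents = pop[:6]
    pop = parents + [(p[0] + random.gauss(0, 0.5), p[1] + random.gauss(0, 0.5))
                     for p in random.sample(parents, 6)]

best = max(pop, key=loglik)
print(sorted(round(m, 1) for m in best))
```

The EA supplies diverse starting points so the periodic EM refinements are less likely to get trapped in the poor local optima that plain EM can converge to.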
Hybrid wrapper-filter approaches for input feature selection using maximum relevance and Artificial Neural Network Input Gain Measurement Approximation (ANNIGMA)
- Huda, Shamsul, Yearwood, John, Stranieri, Andrew
- Authors: Huda, Shamsul , Yearwood, John , Stranieri, Andrew
- Date: 2010
- Type: Text , Conference proceedings
- Full Text:
- Description: Feature selection is an important research problem in machine learning and data mining applications. This paper proposes a hybrid wrapper-filter feature selection algorithm that introduces the filter's feature ranking score in the wrapper stage to speed up the wrapper's search process and thereby find a more compact feature subset. The approach hybridizes a Mutual Information (MI) based Maximum Relevance (MR) filter ranking heuristic with an Artificial Neural Network (ANN) based wrapper approach, in which the Artificial Neural Network Input Gain Measurement Approximation (ANNIGMA) is combined with MR (MR-ANNIGMA) to guide the search process in the wrapper. The novelty of our approach is that the hybrid of wrapper and filter methods combines the filter's ranking score with the wrapper heuristic's score to take advantage of both heuristics. The performance of the proposed MR-ANNIGMA has been verified on benchmark data sets and compared to both independent filter-based and wrapper-based approaches. Experimental results show that MR-ANNIGMA achieves more compact feature sets and higher accuracies than either the filter or the wrapper approach alone. © 2010 IEEE.
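The score combination at the heart of MR-ANNIGMA can be sketched on toy data: a mutual-information filter score and a wrapper-style score are each normalised and then averaged to rank features. The wrapper score below is a crude one-feature-classifier stand-in, not the actual ANNIGMA gain derived from ANN weights, and the dataset is synthetic:

```python
import math
import random

random.seed(2)

# Synthetic dataset: 6 binary features, binary label;
# features 0 and 3 are made informative, the rest are noise.
n = 400
X, y = [], []
for _ in range(n):
    label = random.randint(0, 1)
    row = [random.randint(0, 1) for _ in range(6)]
    row[0] = label if random.random() < 0.9 else 1 - label
    row[3] = label if random.random() < 0.8 else 1 - label
    X.append(row)
    y.append(label)


def mutual_info(j):
    # Filter score: MI between binary feature j and the label, from counts.
    joint = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for row, label in zip(X, y):
        joint[(row[j], label)] += 1
    mi = 0.0
    for (a, b), c in joint.items():
        if c == 0:
            continue
        p = c / n
        px = sum(joint[(a, t)] for t in (0, 1)) / n
        py = sum(joint[(s, b)] for s in (0, 1)) / n
        mi += p * math.log(p / (px * py))
    return mi


def wrapper_score(j):
    # Stand-in for the ANNIGMA input gain: accuracy of a one-feature
    # classifier (the paper derives its score from ANN weights instead).
    correct = sum(row[j] == label for row, label in zip(X, y))
    return max(correct, n - correct) / n


# Combined heuristic: average the normalised filter and wrapper scores.
mi = [mutual_info(j) for j in range(6)]
wr = [wrapper_score(j) for j in range(6)]
combined = [m / max(mi) / 2 + w / max(wr) / 2 for m, w in zip(mi, wr)]
ranked = sorted(range(6), key=lambda j: combined[j], reverse=True)
print(ranked[:2])
```

On this data the two informative features rank at the top; in the full algorithm the combined score guides which features the wrapper search drops next, rather than producing a one-shot ranking.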
Cluster based rule discovery model for enhancement of government's tobacco control strategy
- Huda, Shamsul, Yearwood, John, Borland, Ron
- Authors: Huda, Shamsul , Yearwood, John , Borland, Ron
- Date: 2010
- Type: Text , Conference proceedings
- Full Text:
- Description: Discovery of interesting rules describing the behavioural patterns of smokers' quitting intentions is an important task in the determination of an effective tobacco control strategy. In this paper, we investigate a compact and simplified rule discovery process for predicting smokers' quitting behaviour that can provide feedback for building a scientific, evidence-based adaptive tobacco control policy. Standard decision tree (SDT) based rule discovery depends on decision boundaries in the feature space that are orthogonal to the axis of the feature at a particular decision node. This may limit the ability of an SDT to learn intermediate concepts for high-dimensional, large datasets such as tobacco control data. In this paper, we propose a cluster-based rule discovery model (CRDM) for the generation of more compact and simplified rules for the enhancement of tobacco control policy. The cluster-based approach builds conceptual groups from which a set of decision trees (a decision forest) is constructed. Experimental results on the tobacco control data set show that decision rules from the decision forest constructed by CRDM are simpler and can predict smokers' quitting intention more accurately than a single decision tree. © 2010 IEEE.
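The CRDM pipeline described above, clustering the data into conceptual groups and then growing one tree per group, can be sketched with 2-means clustering and per-cluster decision stumps. The data generator, the stump learners, and the initial centroids are all illustrative assumptions, not the tobacco-control implementation:

```python
import random

random.seed(3)

# Toy 2-D data: two behavioural groups, each with a DIFFERENT rule
# linking a feature to the label (a stand-in for quitting intention).
def sample():
    if random.random() < 0.5:            # group A, centred at (0, 0)
        x = [random.gauss(0, 1), random.gauss(0, 1)]
        label = int(x[0] > 0)            # group A's rule uses feature 0
    else:                                # group B, centred at (8, 8)
        x = [random.gauss(8, 1), random.gauss(8, 1)]
        label = int(x[1] > 8)            # group B's rule uses feature 1
    return x, label

train = [sample() for _ in range(600)]

# Step 1: simple 2-means clustering to form the conceptual groups
# (rough initial centroids assumed; groups are well separated here).
cents = [[0.0, 0.0], [8.0, 8.0]]

def nearest(x):
    return min(range(2), key=lambda k: sum((a - b) ** 2
                                           for a, b in zip(x, cents[k])))

for _ in range(5):
    groups = [[], []]
    for x, _ in train:
        groups[nearest(x)].append(x)
    cents = [[sum(col) / len(g) for col in zip(*g)] for g in groups]

# Step 2: one decision stump per cluster -- the "decision forest".
def fit_stump(rows):
    # Exhaustive search over features and thresholds for the best
    # single-split rule, trying either polarity.
    best = (0.0, 0, 0.0, True)
    for j in (0, 1):
        for x, _ in rows:
            t = x[j]
            hits = sum((xi[j] > t) == yi for xi, yi in rows)
            for acc, pos in ((hits / len(rows), True),
                             (1 - hits / len(rows), False)):
                if acc > best[0]:
                    best = (acc, j, t, pos)
    return best

stumps = [fit_stump([(x, yl) for x, yl in train if nearest(x) == k])
          for k in range(2)]

def predict(x):
    _, j, t, pos = stumps[nearest(x)]
    side = x[j] > t
    return int(side if pos else not side)

test = [sample() for _ in range(200)]
acc = sum(predict(x) == yl for x, yl in test) / len(test)
print(round(acc, 2))
```

A single axis-aligned stump over the whole dataset cannot capture both groups' rules at once; routing each point to its cluster's tree recovers both, which is the intuition behind the decision forest being simpler and more accurate than one tree.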