
An algorithm for the optimization of multiple classifiers in data mining based on graphs

- Kelarev, Andrei, Ryan, Joe, Yearwood, John

**Authors:** Kelarev, Andrei; Ryan, Joe; Yearwood, John
**Date:** 2009
**Type:** Text; Journal article
**Relation:** The Journal of Combinatorial Mathematics and Combinatorial Computing Vol. 71 (2009), p. 65-85
**Full Text:** false
**Description:** This article develops an efficient combinatorial algorithm, based on labeled directed graphs and motivated by applications in data mining, for designing multiple classifiers. Our method originates from the standard approach described in [37], which defines a representation of a multiclass classifier in terms of several binary classifiers. We use labeled graphs to introduce additional structure on the classifier. Representations of this sort are known to have significant advantages; in particular, they can correct errors of individual binary classifiers and still produce correct combined output. For every such representation we develop a combinatorial algorithm with quadratic running time that computes the largest number of errors of individual binary classifiers which can be corrected by the combined multiple classifier. In addition, we consider the question of optimizing classifiers of this type and find all optimal representations for these multiple classifiers.
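The error-correcting property this abstract refers to can be illustrated with a small sketch. In the standard binary-decomposition setting, each class is assigned a row of intended binary-classifier outputs, and a combined classifier that decodes to the nearest row corrects up to ⌊(d_min − 1)/2⌋ individual errors, where d_min is the minimum pairwise Hamming distance between rows. Scanning all row pairs is quadratic in the number of classes. This is a generic illustration of the underlying principle, not the authors' labeled-graph algorithm; the code matrix below is invented for the example.

```python
from itertools import combinations

def correctable_errors(code_matrix):
    """Given a code matrix (one row of intended binary-classifier
    outputs per class), return the number of individual classifier
    errors a nearest-row decoder is guaranteed to correct:
    (d_min - 1) // 2, where d_min is the minimum pairwise Hamming
    distance between rows. The pairwise scan is quadratic in the
    number of classes."""
    d_min = min(
        sum(a != b for a, b in zip(r1, r2))
        for r1, r2 in combinations(code_matrix, 2)
    )
    return (d_min - 1) // 2

# Illustrative matrix: 3 classes encoded by 6 binary classifiers.
# Minimum pairwise Hamming distance is 3, so one error is correctable.
codes = [
    [0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
]
```

Increasing the minimum pairwise distance of the representation raises the number of correctable errors, which is why optimizing the representation (the question studied in the article) matters.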

- Karasozen, Bulent, Rubinov, Alex, Weber, Gerhard-Wilhelm

**Authors:** Karasozen, Bulent; Rubinov, Alex; Weber, Gerhard-Wilhelm
**Date:** 2006
**Type:** Text; Journal article
**Relation:** European Journal of Operational Research Vol. 173, no. 3 (2006), p. 701-704
**Full Text:** false
**Description:** C1

A heuristic algorithm for solving the minimum sum-of-squares clustering problems

- Ordin, Burak, Bagirov, Adil

**Authors:** Ordin, Burak; Bagirov, Adil
**Date:** 2015
**Type:** Text; Journal article
**Relation:** Journal of Global Optimization Vol. 61, no. 2 (2015), p. 341-361
**Relation:** http://purl.org/au-research/grants/arc/DP140103213
**Full Text:** false
**Description:** Clustering is an important task in data mining. It can be formulated as a global optimization problem that challenges existing global optimization techniques even on medium-sized data sets. Various heuristics have been developed to solve the clustering problem; the global k-means and modified global k-means algorithms are among the most efficient for the minimum sum-of-squares clustering problem. However, these algorithms are not always accurate in finding global or near-global solutions. In this paper, we introduce a new algorithm to improve the accuracy of the modified global k-means algorithm in finding global solutions. We use an auxiliary clustering problem to generate a set of initial points and apply the k-means algorithm starting from these points to find the global solution. Numerical results on 16 real-world data sets clearly demonstrate the superiority of the proposed algorithm over the global and modified global k-means algorithms in finding global solutions to clustering problems.
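The global-k-means family of heuristics that this work builds on solves the k-cluster problem incrementally: keep the centers found for k − 1 clusters and search for a good placement of the k-th center before running Lloyd's k-means iterations. The sketch below shows the basic incremental idea only, using every data point as a candidate for the new center; it is not the authors' auxiliary-problem refinement, and all names here are illustrative.

```python
import numpy as np

def kmeans(X, centers, iters=100):
    """Standard Lloyd iterations from the given initial centers;
    returns the final centers and the sum-of-squares objective."""
    centers = np.asarray(centers, dtype=float)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
    return centers, ((X - centers[labels]) ** 2).sum()

def incremental_kmeans(X, k):
    """Build k centers incrementally in the global-k-means style:
    keep the solution of the (k-1)-cluster problem and try each data
    point as the new center, retaining the candidate that yields the
    lowest final objective. Exhaustively trying every point is the
    simplest (and slowest) candidate-generation scheme."""
    centers = X.mean(0, keepdims=True)
    for _ in range(1, k):
        best = None
        for x in X:
            cand, cost = kmeans(X, np.vstack([centers, x]))
            if best is None or cost < best[1]:
                best = (cand, cost)
        centers = best[0]
    return centers
```

Replacing the exhaustive candidate loop with a cheaper auxiliary problem for choosing starting points is exactly the kind of refinement the abstract describes.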

Classification through incremental max-min separability

- Bagirov, Adil, Ugon, Julien, Webb, Dean, Karasozen, Bulent

**Authors:** Bagirov, Adil; Ugon, Julien; Webb, Dean; Karasozen, Bulent
**Date:** 2011
**Type:** Text; Journal article
**Relation:** Pattern Analysis and Applications Vol. 14, no. 2 (2011), p. 165-174
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:** false
**Description:** Piecewise linear functions can be used to approximate nonlinear decision boundaries between pattern classes. Piecewise linear boundaries are known to provide efficient real-time classifiers; however, they require a long training time, since finding piecewise linear boundaries between sets is a difficult optimization problem. Most approaches use heuristics to avoid solving this problem, which may lead to suboptimal piecewise linear boundaries. In this paper, we propose an algorithm that trains hyperplanes globally using an incremental approach. Such an approach allows one to find a near-global minimizer of the classification error function and to compute only as many hyperplanes as are needed to separate the sets. We apply this algorithm to supervised data classification problems and report the results of numerical experiments on real-world data sets. These results demonstrate that the new algorithm requires a reasonable training time and that its test set accuracy is consistently good on most data sets compared with mainstream classifiers. © 2010 Springer-Verlag London Limited.
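In the max-min separability setting that underlies this line of work, the decision function is a max of minima of linear functions, f(x) = max_j min_{i∈I_j} (wᵢ·x + bᵢ), whose sign separates the two classes; such functions represent arbitrary piecewise linear boundaries. The sketch below only evaluates a given max-min function; it does not perform the incremental training the paper proposes, and the hyperplanes and groups in the example are invented for illustration.

```python
import numpy as np

def maxmin_decision(x, hyperplanes, groups):
    """Evaluate a max-min of linear functions at point x.

    hyperplanes: array of shape (m, d+1); row i holds (w_i, b_i).
    groups: iterable of index lists I_j partitioning the hyperplanes.
    Returns max_j min_{i in I_j} (w_i . x + b_i); the sign of this
    value assigns x to one of the two classes."""
    vals = hyperplanes[:, :-1] @ x + hyperplanes[:, -1]
    return max(vals[list(g)].min() for g in groups)

# Illustrative 1-D example: f(x) = max(x - 1, -x - 1) is negative on
# the interval (-1, 1) and positive outside it, so two hyperplanes
# carve out a nonconvex (two-piece) positive region.
H = np.array([[1.0, -1.0],
              [-1.0, -1.0]])
groups = [[0], [1]]
```

Adding hyperplanes one at a time, as the incremental algorithm does, lets the boundary grow only as complex as the data requires.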

