Modified self-organising maps with a new topology and initialisation algorithm
- Authors: Mohebi, Ehsan, Bagirov, Adil
- Date: 2015
- Type: Text, Journal article
- Relation: Journal of Experimental and Theoretical Artificial Intelligence Vol. 27, no. 3 (2015), p. 351-372
- Full Text: false
- Reviewed:
- Description: Mapping quality of the self-organising maps (SOMs) is sensitive to the map topology and the initialisation of neurons. In this article, in order to improve the convergence of the SOM, an algorithm based on splitting and merging clusters to initialise neurons is introduced. The initialisation algorithm speeds up the learning process on large high-dimensional data sets. We also develop a topology based on this initialisation to optimise the vector quantisation error and topology preservation of the SOMs. Such an approach yields more accurate data visualisation and, consequently, more accurate clustering. Numerical results on eight small-to-large real-world data sets are reported to demonstrate the performance of the proposed algorithm in terms of vector quantisation, topology preservation and CPU time requirement. © 2014 Taylor & Francis.
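The split-and-merge initialisation idea described in this abstract can be illustrated with a minimal sketch: repeatedly split the most spread-out cluster, then merge centroids that land too close together, and use the survivors to seed the SOM codebook. This is a hypothetical illustration under assumed details (spread measured as mean distance to centroid, splitting around the two farthest points), not the authors' algorithm.

```python
def dist(a, b):
    # Euclidean distance between two points given as tuples.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def centroid(points):
    dim = len(points[0])
    return tuple(sum(p[i] for p in points) / len(points) for i in range(dim))

def split_merge_init(data, n_neurons, merge_thresh):
    # Grow a set of clusters by repeatedly splitting the cluster with the
    # largest average distance to its centroid, then merge centroids that
    # end up closer than merge_thresh; the survivors seed the SOM codebook.
    clusters = [list(data)]
    while len(clusters) < n_neurons:
        spreads = [sum(dist(p, centroid(c)) for p in c) / len(c) for c in clusters]
        c = clusters.pop(spreads.index(max(spreads)))
        # Split around the two mutually farthest points in the cluster.
        s1, s2 = max(
            ((p, q) for i, p in enumerate(c) for q in c[i + 1:]),
            key=lambda pq: dist(*pq),
        )
        a = [p for p in c if dist(p, s1) <= dist(p, s2)]
        b = [p for p in c if dist(p, s1) > dist(p, s2)]
        clusters += [part for part in (a, b) if part]
    centres = [centroid(c) for c in clusters]
    merged = []
    for c in centres:
        # Merge step: drop a centroid lying within merge_thresh of a kept one.
        if all(dist(c, m) > merge_thresh for m in merged):
            merged.append(c)
    return merged
```

On two well-separated blobs, `split_merge_init(data, 2, 0.5)` recovers one centroid per blob, which would then initialise the SOM neurons instead of random vectors.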
Empirical evaluation methods for multiobjective reinforcement learning algorithms
- Authors: Vamplew, Peter, Dazeley, Richard, Berry, Adam, Issabekov, Rustam, Dekker, Evan
- Date: 2011
- Type: Text, Journal article
- Relation: Machine Learning Vol. 84, no. 1-2 (2011), p. 51-80
- Full Text: false
- Reviewed:
- Description: While a number of algorithms for multiobjective reinforcement learning have been proposed, and a small number of applications developed, there has been very little rigorous empirical evaluation of the performance and limitations of these algorithms. This paper proposes standard methods for such empirical evaluation, to act as a foundation for future comparative studies. Two classes of multiobjective reinforcement learning algorithms are identified, and appropriate evaluation metrics and methodologies are proposed for each class. A suite of benchmark problems with known Pareto fronts is described, and future extensions and implementations of this benchmark suite are discussed. The utility of the proposed evaluation methods is demonstrated via an empirical comparison of two example learning algorithms. © 2010 The Author(s).
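Evaluation against benchmarks with known Pareto fronts, as this abstract describes, typically means extracting the non-dominated solutions an algorithm finds and scoring them with a front-quality metric. The sketch below shows one widely used such metric, the 2-objective hypervolume; it is an illustrative example of this style of evaluation, not necessarily the exact metrics or methodology proposed in the paper.

```python
def pareto_front(points):
    # Keep the non-dominated points, maximising both objectives:
    # p survives unless some other q is at least as good in both.
    return [p for p in points
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in points)]

def hypervolume_2d(front, ref):
    # Area dominated by a 2-objective front relative to a reference point
    # that every front point dominates (both objectives maximised).
    # Sweep in descending order of objective 1; each point contributes the
    # rectangle between its own y and the previous (smaller) y.
    vol, prev_y = 0.0, ref[1]
    for x, y in sorted(front, reverse=True):
        vol += (x - ref[0]) * (y - prev_y)
        prev_y = y
    return vol
```

A larger hypervolume means the discovered front covers more of the objective space, so two learners can be compared by the hypervolume of the policies each finds against the benchmark's known front.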
A hierarchical method for finding optimal architecture and weights using evolutionary least square based learning
- Authors: Ghosh, Ranadhir, Verma, Brijesh
- Date: 2003
- Type: Text, Journal article
- Relation: International Journal of Neural Systems Vol. 13, no. 1 (2003), p. 13-24
- Full Text: false
- Reviewed:
- Description: In this paper, we present a novel approach that combines methodologies to find an appropriate neural network architecture and weights using an evolutionary least square based algorithm (GALS). This paper focuses on aspects such as the heuristics of updating weights using an evolutionary least square based algorithm, finding the number of hidden neurons for a two-layer feedforward neural network, the stopping criterion for the algorithm and, finally, comparisons of the results with other existing methods for searching for optimal or near-optimal solutions in the multidimensional complex search space comprising the architecture and weight variables. We explain how the weight updating algorithm using the evolutionary least square based approach can be combined with the growing architecture model to find the optimum number of hidden neurons. We also discuss the issue of finding a probabilistic solution space as a starting point for the least square method and address the problems involving fitness breaking. We apply the proposed approach to the XOR problem, the 10-bit odd parity problem and several real-world benchmark data sets, such as the handwriting data set from CEDAR and the breast cancer and heart disease data sets from the UCI ML repository. Comparative results based on classification accuracy and time complexity are discussed.
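The hybrid idea this abstract outlines — an evolutionary search over hidden-layer weights, with the linear output layer solved exactly by least squares for each candidate — can be sketched as follows. Every name and parameter here is hypothetical (random sampling stands in for the genetic operators, and a small ridge term is added for numerical stability); this is not the paper's GALS implementation.

```python
import math
import random

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small linear system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def hidden_acts(X, W):
    # tanh hidden layer; W is a list of per-neuron input weight vectors.
    return [[math.tanh(sum(w[i] * x[i] for i in range(len(x)))) for w in W]
            for x in X]

def fit_output_lsq(H, y):
    # Least squares for the linear output layer: solve (H^T H) w = H^T y,
    # with a tiny ridge term on the diagonal for numerical stability.
    n = len(H[0])
    HtH = [[sum(h[i] * h[j] for h in H) + (1e-6 if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Hty = [sum(h[i] * t for h, t in zip(H, y)) for i in range(n)]
    return solve(HtH, Hty)

def gals_step(X, y, n_hidden, pop=20, seed=1):
    # Hypothetical evolutionary step: sample candidate hidden-weight sets and
    # keep the one whose least-squares output layer gives the smallest
    # squared error on the training data.
    rng = random.Random(seed)
    best = None
    for _ in range(pop):
        W = [[rng.uniform(-2, 2) for _ in X[0]] for _ in range(n_hidden)]
        H = hidden_acts(X, W)
        w_out = fit_output_lsq(H, y)
        err = sum((sum(h[i] * w_out[i] for i in range(n_hidden)) - t) ** 2
                  for h, t in zip(H, y))
        if best is None or err < best[0]:
            best = (err, W, w_out)
    return best
```

On XOR in a ±1 encoding, a handful of random hidden-weight candidates combined with the exact output-layer solve already beats the zero network, which is the division of labour the hybrid exploits: the search only has to explore hidden weights, never output weights.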