A hierarchical method for finding optimal architecture and weights using evolutionary least square based learning
- Authors: Ghosh, Ranadhir; Verma, Brijesh
- Date: 2003
- Type: Text, Journal article
- Relation: International Journal of Neural Systems Vol. 13, no. 1 (2003), p. 13-24
- Full Text: false
- Reviewed:
- Description: In this paper, we present a novel approach of implementing a combination methodology to find an appropriate neural network architecture and weights using an evolutionary least square based algorithm (GALS). This paper focuses on aspects such as the heuristics of updating weights using an evolutionary least square based algorithm, finding the number of hidden neurons for a two-layer feedforward neural network, the stopping criterion for the algorithm, and finally some comparisons of the results with other existing methods for searching for an optimal or near-optimal solution in the multidimensional complex search space comprising the architecture and the weight variables. We explain how the weight updating algorithm using the evolutionary least square based approach can be combined with the growing architecture model to find the optimum number of hidden neurons. We also discuss the issues of finding a probabilistic solution space as a starting point for the least square method and address the problems involving fitness breaking. We apply the proposed approach to the XOR problem, the 10-bit odd parity problem, and several real-world benchmark data sets, such as the handwriting data set from CEDAR and the breast cancer and heart disease data sets from the UCI ML repository. The comparative results based on classification accuracy and time complexity are discussed.
- Description: 2003004100
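The hybrid idea this abstract describes — an evolutionary search proposing hidden-layer weights while the output layer is solved exactly by linear least squares — can be sketched as below. This is a minimal illustration under assumed names (`hidden_activations`, `fitness`, `evolve`, a mutation-only population); the paper's actual GALS operators, growing-architecture step, and stopping criterion are not reproduced here.

```python
import numpy as np

def hidden_activations(X, W):
    # tanh hidden layer; W maps inputs (plus a bias column) to hidden units
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.tanh(Xb @ W)

def fitness(X, y, W):
    # Solve the output-layer weights by linear least squares, then score
    # the hidden-layer candidate W by the resulting mean squared error.
    H = hidden_activations(X, W)
    v, *_ = np.linalg.lstsq(H, y, rcond=None)
    return np.mean((H @ v - y) ** 2), v

def evolve(X, y, n_hidden, pop=20, gens=50, sigma=0.3, seed=0):
    # Mutation-only evolutionary loop over hidden-layer weight matrices.
    rng = np.random.default_rng(seed)
    d = X.shape[1] + 1
    population = [rng.normal(size=(d, n_hidden)) for _ in range(pop)]
    best_W, best_v, best_err = None, None, np.inf
    for _ in range(gens):
        scored = []
        for W in population:
            err, v = fitness(X, y, W)
            scored.append((err, W))
            if err < best_err:
                best_err, best_W, best_v = err, W, v
        scored.sort(key=lambda t: t[0])
        parents = [W for _, W in scored[: pop // 4]]
        # children are Gaussian perturbations of the fittest quarter
        population = [p + rng.normal(scale=sigma, size=p.shape)
                      for p in parents for _ in range(pop // len(parents))]
    return best_W, best_v, best_err
```

On the XOR problem named in the abstract, four tanh hidden units give an exact least-squares fit of the four training points, so the evolutionary loop mainly refines the hidden representation.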
Connection topologies for combining genetic and least square methods for neural learning
- Authors: Ghosh, Ranadhir
- Date: 2004
- Type: Text, Journal article
- Relation: Journal of Intelligent Systems Vol. 13, no. 3 (2004), p. 199-232
- Full Text: false
- Reviewed:
- Description: In the last few years, there have been many works in the area of hybrid neural learning algorithms that combine a global and a local method for training artificial neural networks. In this paper, we discuss various connection strategies that can be applied to a special kind of hybrid neural learning algorithm group, one that combines a genetic algorithm based method with various least square based methods such as QR factorization. The relative advantages and disadvantages of the different connection types are studied to find a suitable connection topology for combining the two different learning methods. The methodology also finds the optimum number of hidden neurons using a hierarchical combination methodology structure for weights and architecture. We have tested our proposed approach on the XOR problem, the 10-bit odd parity problem, and some other real-world benchmark data sets, such as the handwriting character data set from CEDAR and the breast cancer and heart disease data sets from the UCI machine learning repository.
- Description: C1
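The least-squares building block this abstract pairs with the genetic search can use QR factorization directly, as mentioned. A hedged sketch of that solve (standard numerical linear algebra with an assumed function name, not the paper's specific connection topology):

```python
import numpy as np

def solve_output_weights_qr(H, y):
    # Least-squares output weights via QR: factor H = QR, then solve
    # the upper-triangular system R v = Q^T y by back-substitution.
    Q, R = np.linalg.qr(H)   # reduced QR: Q is (n, k), R is (k, k)
    return np.linalg.solve(R, Q.T @ y)
```

Solving through QR avoids forming the normal equations H^T H v = H^T y, whose condition number is the square of that of H.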
Some special properties of GA- and LS-based neural learning method
- Authors: Ghosh, Ranadhir
- Date: 2005
- Type: Text, Journal article
- Relation: Journal of Intelligent Systems Vol. 14, no. 4 (2005), p. 289-319
- Full Text: false
- Reviewed:
- Description: Many works in the area of hybrid neural learning algorithms combine a global and a local method for training artificial neural networks. In this paper, we discuss some special properties of a hybrid neural learning algorithm that combines a GA based method with least square based methods such as QR factorization. We examine several learning properties of this new hybrid algorithm, such as its time complexity, convergence, and stability.
- Description: C1
- Description: 2003001361
Empirical evaluation methods for multiobjective reinforcement learning algorithms
- Authors: Vamplew, Peter; Dazeley, Richard; Berry, Adam; Issabekov, Rustam; Dekker, Evan
- Date: 2011
- Type: Text, Journal article
- Relation: Machine Learning Vol. 84, no. 1-2 (2011), p. 51-80
- Full Text: false
- Reviewed:
- Description: While a number of algorithms for multiobjective reinforcement learning have been proposed, and a small number of applications developed, there has been very little rigorous empirical evaluation of the performance and limitations of these algorithms. This paper proposes standard methods for such empirical evaluation, to act as a foundation for future comparative studies. Two classes of multiobjective reinforcement learning algorithms are identified, and appropriate evaluation metrics and methodologies are proposed for each class. A suite of benchmark problems with known Pareto fronts is described, and future extensions and implementations of this benchmark suite are discussed. The utility of the proposed evaluation methods is demonstrated via an empirical comparison of two example learning algorithms. © 2010 The Author(s).
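Evaluation against known Pareto fronts, as the abstract describes, rests on a few basic building blocks: a dominance test, front extraction, and a front-quality metric such as hypervolume. A minimal two-objective sketch with illustrative names, assuming both objectives are maximised (the paper's own metrics and benchmark suite are not reproduced here):

```python
def dominates(a, b):
    # a dominates b if it is at least as good on every objective
    # and strictly better on at least one (maximisation).
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    # Keep only the points not dominated by any other point.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

def hypervolume_2d(front, ref):
    # Area dominated by a two-objective front relative to a reference
    # point that every front point dominates (maximisation).
    pts = sorted(front, key=lambda p: p[0], reverse=True)
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        hv += (x - ref[0]) * (y - prev_y)
        prev_y = y
    return hv
```

A learned policy set can then be scored by the hypervolume of its nondominated subset, which makes results comparable across algorithms on benchmarks whose true fronts are known.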
Modified self-organising maps with a new topology and initialisation algorithm
- Authors: Mohebi, Ehsan , Bagirov, Adil
- Date: 2015
- Type: Text, Journal article
- Relation: Journal of Experimental and Theoretical Artificial Intelligence Vol. 27, no. 3 (2015), p. 351-372
- Full Text: false
- Reviewed:
- Description: Mapping quality of the self-organising maps (SOMs) is sensitive to the map topology and the initialisation of neurons. In this article, in order to improve the convergence of the SOM, an algorithm based on the split and merge of clusters to initialise neurons is introduced. The initialisation algorithm speeds up the learning process in large high-dimensional data sets. We also develop a topology based on this initialisation to optimise the vector quantisation error and topology preservation of the SOMs. Such an approach yields more accurate data visualisation and, consequently, more accurate clustering. Numerical results on eight small-to-large real-world data sets are reported to demonstrate the performance of the proposed algorithm in terms of vector quantisation, topology preservation and CPU time requirements. © 2014 Taylor & Francis.
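For context, a plain online SOM with random initialisation on a rectangular grid can be sketched as below, together with the vector quantisation error the abstract uses as a quality measure. All names are illustrative, and this baseline is exactly what the article improves on: its split-and-merge initialisation and modified topology are not reproduced here.

```python
import numpy as np

def train_som(data, rows, cols, iters=500, lr0=0.5, sigma0=None, seed=0):
    # Plain online SOM: random init in the data bounding box, rectangular
    # grid, Gaussian neighbourhood with linearly decaying width and rate.
    rng = np.random.default_rng(seed)
    if sigma0 is None:
        sigma0 = max(rows, cols) / 2.0
    W = rng.uniform(data.min(0), data.max(0), size=(rows * cols, data.shape[1]))
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(iters):
        frac = t / iters
        lr = lr0 * (1 - frac)
        sigma = sigma0 * (1 - frac) + 1e-3
        x = data[rng.integers(len(data))]            # one random sample
        bmu = np.argmin(((W - x) ** 2).sum(1))       # best-matching unit
        d2 = ((grid - grid[bmu]) ** 2).sum(1)        # grid distance to BMU
        h = np.exp(-d2 / (2 * sigma ** 2))           # neighbourhood kernel
        W += lr * h[:, None] * (x - W)
    return W

def quantisation_error(data, W):
    # Mean distance from each sample to its best-matching neuron.
    d = np.sqrt(((data[:, None, :] - W[None, :, :]) ** 2).sum(-1))
    return d.min(1).mean()
```

On well-separated clusters, even this baseline drives the quantisation error well below that of a single prototype at the data mean; the article's contribution is a better starting point and topology for the same objective.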