A hierarchical method for finding optimal architecture and weights using evolutionary least square based learning
- Authors: Ghosh, Ranadhir; Verma, Brijesh
- Date: 2003
- Type: Text, Journal article
- Relation: International Journal of Neural Systems Vol. 13, no. 1 (2003), p. 13-24
- Full Text: false
- Reviewed:
- Description: In this paper, we present a novel approach to finding an appropriate neural network architecture and weights using an evolutionary least-square-based algorithm (GALS). The paper focuses on the heuristics of updating weights with an evolutionary least-square-based algorithm, finding the number of hidden neurons for a two-layer feed-forward neural network, the stopping criterion for the algorithm, and comparisons with other existing methods for searching the complex multidimensional search space, comprising the architecture and weight variables, for optimal or near-optimal solutions. We explain how the evolutionary least-square-based weight-updating algorithm can be combined with a growing architecture model to find the optimum number of hidden neurons. We also discuss finding a probabilistic solution space as a starting point for the least square method and address problems involving fitness breaking. We apply the proposed approach to the XOR problem, the 10-bit odd parity problem, and several real-world benchmark data sets, including the handwriting data set from CEDAR and the breast cancer and heart disease data sets from the UCI ML repository. Comparative results based on classification accuracy and time complexity are discussed.
- Description: 2003004100
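The two-layer scheme the abstract above describes — an evolutionary search over the hidden-layer weights, with the output-layer weights solved in closed form by least squares — can be sketched roughly as follows. This is a minimal illustration, not the authors' GALS implementation: a simple hill-climbing mutation loop stands in for the full evolutionary algorithm, and all function and variable names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_features(X, W):
    """Hidden-layer activations plus a bias column for the output solve."""
    H = sigmoid(X @ W)
    return np.hstack([H, np.ones((X.shape[0], 1))])

def evolve_hidden_weights(X, y, n_hidden=2, pop=30, gens=200, seed=0):
    """Toy hill-climbing stand-in for the evolutionary phase: mutate the
    hidden-layer weights, then score each candidate by the MSE remaining
    after solving the output layer in closed form with least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    best_err = np.inf
    for _ in range(gens):
        best_W = W
        for _ in range(pop):
            cand = W + rng.normal(scale=0.3, size=W.shape)
            H = hidden_features(X, cand)
            W_out, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output layer
            err = np.mean((H @ W_out - y) ** 2)
            if err < best_err:
                best_W, best_err = cand, err
        W = best_W
    return W, best_err

# XOR with a bias column appended to the inputs
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])
W_hidden, err = evolve_hidden_weights(X, y)
```

The design point of interest is that each candidate's fitness already includes the optimal output layer for that candidate, so the evolutionary search only has to explore the hidden-layer weight space.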
An intelligent offline handwriting recognition system using evolutionary neural learning algorithm and rule based over segmented data points
- Authors: Ghosh, Ranadhir; Ghosh, Moumita
- Date: 2005
- Type: Text, Journal article
- Relation: Journal of Research and Practice in Information Technology Vol. 37, no. 1 (2005), p. 73-86
- Full Text: false
- Reviewed:
- Description: In this paper we propose a novel hybrid evolutionary technique that combines a genetic algorithm with matrix-based solution methods such as QR factorization. Training of the model is based on a layer-based hierarchical structure for the architecture and weights of the Artificial Neural Network classifier. The architecture of the classifier is found using a binary-search-type procedure. The hierarchical structured algorithm (EALS-BT) is itself a hybrid, combining the genetic-algorithm-based method with the matrix-based solution method for finding weights. A heuristic segmentation algorithm first over-segments each word. The segmentation points are then passed through a rule-based module that discards incorrect segmentation points and includes any missing ones. Following segmentation, the contour between two correct segmentation points is extracted and passed through a feature extraction module that extracts angular features, after which the EALS-BT algorithm finds the architecture and weights for the classifier network. The recognized characters are grouped into words and passed to a variable-length lexicon that retrieves the words with the highest confidence values.
- Description: C1
- Description: 2003001367
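The binary-search-type architecture procedure mentioned in the abstract above can be illustrated generically. This is a speculative sketch, not the EALS-BT algorithm itself: `train_and_score` is a hypothetical stand-in for the full evolutionary/least-squares training step, and the stopping rule (smallest width whose error falls below a tolerance) is an assumption.

```python
def smallest_sufficient_width(train_and_score, lo=1, hi=64, tol=0.05):
    """Binary search for the smallest hidden-layer size whose training
    error is at or below `tol`.  Assumes the error is (roughly)
    non-increasing in the number of hidden neurons."""
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        err = train_and_score(mid)
        if err <= tol:
            best = mid        # feasible: try a smaller network
            hi = mid - 1
        else:
            lo = mid + 1      # infeasible: need more neurons
    return best               # None if no size in [lo, hi] suffices

# Toy stand-in: error decays linearly with width
def err_of(n):
    return max(0.0, 0.5 - 0.05 * n)

print(smallest_sufficient_width(err_of))  # → 9
```

Compared with growing the network one neuron at a time, this probes only O(log n) candidate widths, at the cost of assuming the error curve is monotone in width.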
Combination strategies for finding optimal neural network architecture and weights
- Authors: Verma, Brijesh; Ghosh, Ranadhir
- Date: 2004
- Type: Text, Book chapter
- Relation: Neural information processing : Research and development Chapter p. 294-319
- Full Text: false
- Description: This chapter presents a novel neural learning methodology that uses different combination strategies for finding architecture and weights. The methodology combines evolutionary algorithms with direct/matrix solution methods, such as Gram-Schmidt and singular value decomposition, to achieve optimal weights for the hidden and output layers. The proposed method uses evolutionary algorithms in the first layer and the least square method (LS) in the second layer of the ANN. It also finds the optimum number of hidden neurons and weights using hierarchical combination strategies. The chapter explores all facets of the proposed method in terms of classification accuracy, convergence, generalization ability, and time and memory complexity. The learning methodology has been tested on many benchmark databases, such as XOR, 10-bit odd parity, handwritten characters from CEDAR, and breast cancer and heart disease from the UCI machine learning repository. Experimental results, detailed discussion, and analysis are included in the chapter.
- Description: 2003004097
Connection topologies for combining genetic and least square methods for neural learning
- Authors: Ghosh, Ranadhir
- Date: 2004
- Type: Text, Journal article
- Relation: Journal of Intelligent Systems Vol. 13, no. 3 (2004), p. 199-232
- Full Text: false
- Reviewed:
- Description: In the last few years there has been much work in the area of hybrid neural learning algorithms that combine a global and a local method for training artificial neural networks. In this paper, we discuss various connection strategies that can be applied to a particular group of hybrid neural learning algorithms: those that combine a genetic-algorithm-based method with least-square-based methods such as QR factorization. The relative advantages and disadvantages of the different connection types are studied to find a suitable connection topology for combining the two learning methods. The methodology also finds the optimum number of hidden neurons using a hierarchical combination structure for weights and architecture. We have tested the proposed approach on XOR, 10-bit odd parity, and several real-world benchmark data sets, such as the handwriting character data set from CEDAR and the breast cancer and heart disease data sets from the UCI machine learning repository.
- Description: C1
Hybridization of neural learning algorithms using evolutionary and discrete gradient approaches
- Authors: Ghosh, Ranadhir; Yearwood, John; Ghosh, Moumita; Bagirov, Adil
- Date: 2005
- Type: Text, Journal article
- Relation: Journal of Computer Science Vol. 1, no. 3 (2005), p. 387-394
- Full Text: false
- Reviewed:
- Description: In this study we investigate a hybrid model based on the Discrete Gradient method and an evolutionary strategy for determining the weights in a feed-forward artificial neural network, and we discuss different variants of such hybrid models. The Discrete Gradient method has the advantage of being able to jump over many local minima and find very deep local minima; however, earlier research has shown that a good starting point can improve the quality of its solution. Evolutionary algorithms are well suited to global optimisation problems, but they suffer from long training times and are often unsuitable for real-world applications. For optimisation problems such as weight optimisation for ANNs in real-world applications, the dimensionality is large and time complexity is critical, so a hybrid model is a suitable option. We propose different fusion strategies for hybrid models that combine the evolutionary strategy with the Discrete Gradient method to obtain an optimal solution much more quickly. Three fusion strategies are discussed: a linear hybrid model, an iterative hybrid model, and a restricted local search hybrid model. Comparative results on a range of standard data sets are provided for the different hybrid models.
- Description: C1
- Description: 2003001357
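The "linear hybrid" fusion named in the abstract above — a global evolutionary phase whose result seeds a local descent phase — can be sketched on a toy problem. Everything here is an illustrative assumption: plain numerical steepest descent with backtracking stands in for the Discrete Gradient method, a (1+1)-style mutation loop stands in for the evolutionary strategy, and a small multimodal test function stands in for the ANN weight objective.

```python
import numpy as np

def f(w):
    """Multimodal toy objective (Rastrigin-like): many shallow minima."""
    return float(np.sum(w ** 2 - 3.0 * np.cos(2.0 * np.pi * w)) + 3.0 * len(w))

def evolutionary_phase(f, dim=2, iters=500, seed=1):
    """(1+1)-style mutation loop standing in for the evolutionary strategy."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(-5, 5, dim)
    for _ in range(iters):
        cand = w + rng.normal(scale=0.5, size=dim)
        if f(cand) < f(w):
            w = cand
    return w

def local_phase(f, w, lr=0.05, iters=100, h=1e-5):
    """Numerical steepest descent with backtracking, standing in for the
    Discrete Gradient method; seeded at the evolutionary solution."""
    w = w.copy()
    for _ in range(iters):
        g = np.array([(f(w + h * e) - f(w - h * e)) / (2 * h)
                      for e in np.eye(len(w))])
        step = lr
        while step > 1e-8 and f(w - step * g) > f(w):
            step *= 0.5  # backtrack: never accept a worsening step
        if step > 1e-8:
            w = w - step * g
    return w

w0 = evolutionary_phase(f)   # global phase: coarse solution
w1 = local_phase(f, w0)      # local phase: refine from the seed w0
assert f(w1) <= f(w0)        # the linear hybrid never worsens its seed
```

The iterative and restricted-local-search variants mentioned in the abstract would differ only in how these two phases are interleaved, rather than in the phases themselves.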
Some special properties of GA- and LS-based neural learning method
- Authors: Ghosh, Ranadhir
- Date: 2005
- Type: Text, Journal article
- Relation: Journal of Intelligent Systems Vol. 14, no. 4 (2005), p. 289-319
- Full Text: false
- Reviewed:
- Description: Many works in the area of hybrid neural learning algorithms combine global and local methods for training artificial neural networks. In this paper, we discuss some special properties of a hybrid neural learning algorithm that combines a GA-based method with least-square-based methods such as QR factorization. We look at different learning properties of this new hybrid algorithm, such as its time complexity, convergence, and stability.
- Description: C1
- Description: 2003001361