In this paper, we present a novel approach that combines methodologies for finding an appropriate neural network architecture and its weights using an evolutionary least-squares based algorithm (GALS). The paper focuses on the heuristics of updating weights with the evolutionary least-squares algorithm, finding the number of hidden neurons for a two-layer feed-forward neural network, the stopping criterion for the algorithm, and comparisons with other existing methods for searching the multidimensional search space, comprising the architecture and weight variables, for optimal or near-optimal solutions. We explain how the weight-updating algorithm based on the evolutionary least-squares approach can be combined with a growing-architecture model to find the optimum number of hidden neurons. We also discuss the issue of finding a probabilistic solution space as a starting point for the least-squares method and address the problem of fitness breaking. We apply the proposed approach to the XOR problem, the 10-bit odd-parity problem, and several real-world benchmark data sets: handwriting data from CEDAR and the breast cancer and heart disease data sets from the UCI Machine Learning Repository. Comparative results based on classification accuracy and time complexity are discussed.
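The growing-architecture idea above can be sketched in a few lines. The following is a minimal illustration, not the paper's exact algorithm: hidden neurons are added one at a time (here drawn at random as stand-ins for evolved candidates), the output weights are re-fit by least squares after each addition, and growth stops when the training error no longer improves by more than a tolerance. All function names and parameters are illustrative assumptions.

```python
import numpy as np

def fit_output(H, T):
    """Least-squares fit of output weights for hidden activations H."""
    W2, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W2, float(np.mean((H @ W2 - T) ** 2))

def grow_hidden(X, T, max_hidden=10, tol=1e-4, seed=0):
    """Add hidden neurons until the LS training error stops improving."""
    rng = np.random.default_rng(seed)
    W1 = np.empty((X.shape[1], 0))      # start with no hidden neurons
    best_err = np.inf
    for _ in range(max_hidden):
        # Candidate new hidden neuron (stand-in for an evolved candidate).
        W1_new = np.hstack([W1, rng.normal(scale=2.0, size=(X.shape[1], 1))])
        H = 1.0 / (1.0 + np.exp(-X @ W1_new))   # sigmoid hidden layer
        _, err = fit_output(H, T)
        if best_err - err < tol:        # stopping criterion: no real gain
            break
        W1, best_err = W1_new, err
    return W1, best_err

# XOR training set as a small demonstration.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [0.0]])
W1, err = grow_hidden(X, T)
```

Because each accepted hidden neuron only extends the column space of the hidden-activation matrix, the least-squares error is non-increasing, so the improvement-based stopping test is well defined.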
The chapter presents a novel neural learning methodology that uses different combination strategies for finding the architecture and the weights. The methodology combines evolutionary algorithms with direct matrix solution methods, such as Gram-Schmidt orthogonalization and singular value decomposition, to obtain optimal weights for the hidden and output layers. The proposed method uses an evolutionary algorithm for the first (hidden) layer and the least-squares (LS) method for the second (output) layer of the ANN. It also finds the optimum number of hidden neurons and weights using hierarchical combination strategies. The chapter explores the different facets of the proposed method in terms of classification accuracy, convergence properties, generalization ability, and time and memory complexity. The learning methodology has been tested on many benchmark problems and databases, including XOR, 10-bit odd parity, handwriting characters from CEDAR, and the breast cancer and heart disease data sets from the UCI Machine Learning Repository. Experimental results, detailed discussion, and analysis are included in the chapter.
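The key structural idea, evolutionary search for the first-layer weights combined with a closed-form least-squares solve for the second layer, can be sketched as follows. This is a simplified illustration under assumed details (sigmoid hidden units, a bias column, random weights standing in for an evolved candidate), not the chapter's exact implementation:

```python
import numpy as np

def hidden_activations(X, W1):
    """Sigmoid activations of the hidden layer, with a bias column appended."""
    H = 1.0 / (1.0 + np.exp(-X @ W1))
    return np.hstack([H, np.ones((H.shape[0], 1))])

def solve_output_weights(X, T, W1):
    """Given fixed first-layer weights W1 (e.g. an EA candidate), solve
    the second-layer weights in closed form: minimize ||H @ W2 - T||."""
    H = hidden_activations(X, W1)
    W2, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W2

# XOR demonstration: the EA would propose W1; here a random draw stands in.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [0.0]])
W1 = rng.normal(scale=4.0, size=(2, 4))   # candidate first-layer weights
W2 = solve_output_weights(X, T, W1)
preds = hidden_activations(X, W1) @ W2
```

In a full evolutionary loop, each candidate `W1` would be scored by the residual of this least-squares solve, so the fitness function only searches over first-layer weights while the second layer is always optimal for that candidate.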