Description:
In this paper, we present a novel approach that combines methodologies to find an appropriate neural network architecture and its weights using an evolutionary least-squares-based algorithm (GALS). The paper focuses on the heuristics of updating weights with the evolutionary least-squares-based algorithm, finding the number of hidden neurons for a two-layer feed-forward neural network, the stopping criterion for the algorithm, and comparisons with other existing methods for searching the multidimensional, complex space of architecture and weight variables for optimal or near-optimal solutions. We explain how the evolutionary least-squares-based weight-updating algorithm can be combined with a growing-architecture model to find the optimum number of hidden neurons. We also discuss finding a probabilistic solution space as a starting point for the least-squares method and address the problem of fitness breaking. We apply the proposed approach to the XOR problem, the 10-bit odd-parity problem, and several real-world benchmark data sets, including the handwriting data set from CEDAR and the breast cancer and heart disease data sets from the UCI Machine Learning Repository. Comparative results based on classification accuracy and time complexity are discussed.
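To make the division of labor concrete, the hybrid scheme described above can be sketched as follows. This is a minimal illustration, not the paper's actual GALS implementation: a genetic algorithm evolves the hidden-layer weights, while the output-layer weights are obtained in closed form by least squares at every fitness evaluation. All function names (`gals_sketch`, `fitness`, `solve_output_weights`) and the specific selection/mutation scheme are assumptions made for this sketch.

```python
import numpy as np

def solve_output_weights(H, T):
    # Closed-form least-squares solve for the output-layer weights,
    # given the hidden-layer activation matrix H and targets T.
    W, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W

def fitness(chromosome, X, T, n_hidden):
    # Decode a flat chromosome into a hidden-layer weight matrix,
    # compute sigmoid hidden activations, then solve the linear
    # output layer exactly and score by mean squared error.
    Wh = chromosome.reshape(X.shape[1], n_hidden)
    H = 1.0 / (1.0 + np.exp(-X @ Wh))
    Wo = solve_output_weights(H, T)
    err = np.mean((H @ Wo - T) ** 2)
    return err, Wo

def gals_sketch(X, T, n_hidden=4, pop=20, gens=50, seed=0):
    # Simple (mu + lambda)-style evolution over the hidden weights only;
    # the output weights never enter the chromosome.
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * n_hidden
    population = rng.normal(size=(pop, dim))
    for _ in range(gens):
        errs = np.array([fitness(ind, X, T, n_hidden)[0] for ind in population])
        parents = population[np.argsort(errs)[: pop // 2]]      # truncation selection
        children = parents + 0.1 * rng.normal(size=parents.shape)  # Gaussian mutation
        population = np.vstack([parents, children])
    errs = np.array([fitness(ind, X, T, n_hidden)[0] for ind in population])
    best = population[np.argmin(errs)]
    return best, fitness(best, X, T, n_hidden)
```

Because the output layer is linear in its weights once the hidden activations are fixed, solving it exactly shrinks the evolutionary search space to the hidden-layer parameters only, which is the central idea behind such global-local hybrids.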
Description:
In the last few years, there has been much work on hybrid neural learning algorithms that combine a global and a local method for training artificial neural networks. In this paper, we discuss various connection strategies that can be applied to a particular group of hybrid neural learning algorithms: those that combine a genetic-algorithm-based method with least-squares methods such as QR factorization. The relative advantages and disadvantages of the different connection types are studied to find a suitable connection topology for combining the two learning methods. The methodology also finds the optimum number of hidden neurons using a hierarchical combination structure for weights and architecture. We have tested the proposed approach on the XOR and 10-bit odd-parity problems, as well as on real-world benchmark data sets such as the handwriting character data set from CEDAR and the breast cancer and heart disease data sets from the UCI Machine Learning Repository.
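As a brief illustration of the local component mentioned above, the linear least-squares subproblem can be solved via QR factorization: factor the activation matrix as H = QR, then back-substitute. The function name `qr_least_squares` is an assumption for this sketch, not an identifier from the paper.

```python
import numpy as np

def qr_least_squares(H, T):
    # Solve min_W ||H W - T||^2 via QR factorization.
    # With the reduced factorization H = Q R (Q: n x k with orthonormal
    # columns, R: k x k upper triangular), the normal equations reduce to
    # R W = Q^T T, assuming H has full column rank.
    Q, R = np.linalg.qr(H)
    return np.linalg.solve(R, Q.T @ T)
```

Solving through QR avoids forming H^T H explicitly, which is numerically preferable to the normal equations when H is ill-conditioned.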