A hybrid neural learning algorithm using evolutionary learning and derivative free local search method
- Authors: Ghosh, Ranadhir , Yearwood, John , Ghosh, Moumita , Bagirov, Adil
- Date: 2006
- Type: Text , Journal article
- Relation: International Journal of Neural Systems Vol. 16, no. 3 (2006), p. 201-213
- Full Text: false
- Reviewed:
- Description: In this paper we investigate a hybrid model based on the Discrete Gradient method and an evolutionary strategy for determining the weights in a feed forward artificial neural network, and discuss different variants of such hybrid models. The Discrete Gradient method has the advantage of being able to jump over many local minima and find very deep local minima. However, earlier research has shown that a good starting point for the Discrete Gradient method can improve the quality of the solution point. Evolutionary algorithms are well suited to global optimisation problems; nevertheless, they suffer from long training times and are often unsuitable for real-world applications. For optimisation problems such as weight optimisation for ANNs in real-world applications, the dimensions are large and time complexity is critical. Hence a hybrid model can be a suitable option. In this paper we propose different fusion strategies for hybrid models combining the evolutionary strategy with the Discrete Gradient method to obtain an optimal solution much more quickly. Three different fusion strategies are discussed: a linear hybrid model, an iterative hybrid model and a restricted local search hybrid model. Comparative results on a range of standard datasets are provided for the different fusion hybrid models. © World Scientific Publishing Company. (A schematic sketch of the linear hybrid follows this record.)
- Description: C1
- Description: 2003001712
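The abstract above describes a linear fusion of a global evolutionary strategy with the derivative-free Discrete Gradient local method: evolve a population to find a good starting point, then refine it with local search. The sketch below illustrates that two-phase idea on a toy XOR network. The coordinate search is a hypothetical stand-in for the Discrete Gradient method, and the network size, population size and step schedule are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of the linear hybrid: evolutionary strategy first,
# then a derivative-free local refinement of the best individual.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def mse(w):
    """Feed-forward net with one hidden layer of 3 units; w is a flat vector."""
    W1, b1 = w[:6].reshape(2, 3), w[6:9]
    W2, b2 = w[9:12], w[12]
    h = np.tanh(X @ W1 + b1)
    out = np.tanh(h @ W2 + b2)
    return float(np.mean((out - y) ** 2))

def evolve(dim=13, pop=40, gens=60, sigma=0.5):
    """Simple evolutionary strategy: returns the best individual found."""
    parents = rng.normal(0.0, 1.0, (pop, dim))
    for _ in range(gens):
        children = parents[rng.integers(0, pop, pop)] + rng.normal(0, sigma, (pop, dim))
        both = np.vstack([parents, children])
        parents = both[np.argsort([mse(w) for w in both])[:pop]]
    return parents[0]

def coordinate_search(w, step=0.2, shrink=0.5, tol=1e-4):
    """Derivative-free local refinement: probe each weight in +/- directions."""
    best = mse(w)
    while step > tol:
        improved = False
        for i in range(len(w)):
            for delta in (step, -step):
                trial = w.copy()
                trial[i] += delta
                f = mse(trial)
                if f < best:
                    w, best, improved = trial, f, True
        if not improved:
            step *= shrink
    return w, best

w0 = evolve()
w_star, err = coordinate_search(w0.copy())
print(f"ES error: {mse(w0):.4f} -> hybrid error: {err:.4f}")
```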
Derivative free stochastic discrete gradient method with adaptive mutation
- Authors: Ghosh, Ranadhir , Ghosh, Moumita , Bagirov, Adil
- Date: 2006
- Type: Text , Journal article
- Relation: Advances in Data Mining Vol. 4065 (2006), p. 264-278
- Full Text: false
- Reviewed:
- Description: In data mining we encounter many problems, such as function optimization or parameter estimation for classifiers, for which a good search algorithm is necessary. In this paper we propose a stochastic derivative-free algorithm for unconstrained optimization problems. Many derivative-based local search methods exist, but they usually become stuck in local solutions on non-convex optimization problems. Global search methods, on the other hand, are very time consuming and work for only a limited number of variables. In this paper we investigate a derivative-free multi-search gradient-based method which overcomes the problem of local minima and produces a global solution in less time. We have tested the proposed method on many benchmark datasets from the literature and compared the results with other existing algorithms. The results are very promising. (A schematic sketch of the adaptive-mutation step follows this record.)
- Description: C1
- Description: 2003001541
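The adaptive mutation named in the title above is the idea of adjusting the mutation scale during a stochastic derivative-free search. A hedged sketch of that mechanism follows; the 1/5th-success rule for adapting the step and the Rastrigin test function are standard illustrative choices, not the paper's actual algorithm.

```python
# Stochastic derivative-free search whose mutation width sigma adapts
# to the recent success rate; restarts guard against local minima.
import numpy as np

rng = np.random.default_rng(1)

def rastrigin(x):
    """Classic multimodal benchmark; global minimum 0 at the origin."""
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def adaptive_search(f, dim=5, iters=4000, sigma=1.0):
    x = rng.uniform(-5.12, 5.12, dim)
    fx = f(x)
    successes = 0
    for t in range(1, iters + 1):
        trial = x + rng.normal(0.0, sigma, dim)   # stochastic mutation
        ft = f(trial)
        if ft < fx:
            x, fx, successes = trial, ft, successes + 1
        if t % 50 == 0:                            # adapt sigma every 50 trials
            rate = successes / 50.0
            sigma *= 1.5 if rate > 0.2 else 0.7    # widen on success, shrink otherwise
            successes = 0
    return x, fx

best = min((adaptive_search(rastrigin) for _ in range(10)), key=lambda p: p[1])
print(f"best value over 10 restarts: {best[1]:.4f}")
```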
Comparative analysis of genetic algorithm vs. evolutionary algorithm for hybrid models with discrete gradient method for artificial neural network
- Authors: Ghosh, Ranadhir , Ghosh, Moumita , Yearwood, John , Bagirov, Adil
- Date: 2005
- Type: Text , Conference paper
- Relation: Paper presented at the 11th International Fuzzy Systems Association World Congress, IFSA 2005, Volume III, Beijing, China : 28th - 31st July, 2005
- Full Text: false
- Reviewed:
- Description: E1
- Description: 2003001359
Comparative analysis of genetic algorithm, simulated annealing and cutting angle method for artificial neural networks
- Authors: Ghosh, Ranadhir , Ghosh, Moumita , Yearwood, John , Bagirov, Adil
- Date: 2005
- Type: Text , Journal article
- Relation: Machine Learning and Data Mining in Pattern Recognition, Proceedings Vol. 3587 (2005), p. 62-70
- Full Text: false
- Reviewed:
- Description: Learning is the essence of an artificial neural network, and the multiple local minima of the error surface cause many problems for it. Global optimization methods are capable of finding the globally optimal solution. In this paper we investigate and present a comparative study of the effects of probabilistic and deterministic global search methods for artificial neural networks, using a fully connected feed forward multi-layered perceptron architecture. We investigate two probabilistic global search methods, namely the genetic algorithm and the simulated annealing method, and the deterministic cutting angle method, for finding the weights of a neural network. Experiments were carried out on UCI benchmark datasets. (A minimal simulated-annealing sketch follows this record.)
- Description: C1
- Description: 2003003398
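Of the three search methods this record compares, simulated annealing is the easiest to illustrate compactly. The loop below is a minimal version of it; the geometric cooling schedule and the toy one-dimensional multimodal objective are assumptions for demonstration, not the paper's configuration.

```python
# Minimal simulated annealing: always accept improvements, accept
# worse moves with Boltzmann probability that shrinks as temp cools.
import math
import random

random.seed(2)

def objective(w):
    """Toy 1-D stand-in for a multimodal training-error surface."""
    return w * w + 10 * math.sin(3 * w)

def simulated_annealing(f, w=4.0, temp=5.0, cooling=0.995, steps=5000):
    fw = f(w)
    best_w, best_f = w, fw
    for _ in range(steps):
        trial = w + random.gauss(0.0, 0.5)
        ft = f(trial)
        if ft < fw or random.random() < math.exp((fw - ft) / temp):
            w, fw = trial, ft
            if fw < best_f:
                best_w, best_f = w, fw
        temp *= cooling
    return best_w, best_f

print(simulated_annealing(objective))
```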
Determining regularization parameters for derivative free neural learning
- Authors: Ghosh, Ranadhir , Ghosh, Moumita , Yearwood, John , Bagirov, Adil
- Date: 2005
- Type: Text , Conference paper
- Relation: Paper presented at the 4th International Conference, MLDM 2005: Machine Learning and Data Mining in Pattern Recognition, Leipzig, Germany : 9th-11th July 2005, p. 71-79
- Full Text: false
- Description: Derivative-free optimization methods have recently attracted a lot of attention for neural learning. The curse of dimensionality in the neural learning problem makes local optimization methods very attractive; however, the error surface contains many local minima. The discrete gradient method is a special case of derivative-free methods, based on bundle methods, and has the ability to jump over many local minima. Two types of problems arise when local optimization methods are used for neural learning. The first is the initial-point sensitivity problem, which is commonly solved by using a hybrid model; our earlier research has shown that combining the discrete gradient method with global methods such as evolutionary algorithms makes it even more attractive, and such hybrid models have also been studied by other researchers. Another, less often mentioned, problem is that of large weight values for the synaptic connections of the network. Large synaptic weights often lead to network paralysis and convergence problems, especially when a hybrid model is used for fine tuning the learning task. In this paper we study and analyse the effect of different regularization parameters in our objective function, restricting the weight values without compromising the classification accuracy. (A sketch of such a penalised objective follows this record.)
- Description: 2003001362
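The abstract above studies regularization parameters that keep synaptic weights small during derivative-free learning. Below is a minimal sketch of such a penalised objective: training error plus a weighted norm of the weights, swept over a few candidate values. The L2 penalty, the lambda grid, the synthetic data and the crude random-search minimiser (standing in for the discrete gradient method) are all illustrative assumptions.

```python
# Penalised training objective: MSE + lam * ||w||^2, minimised by a
# simple derivative-free random search for each candidate lambda.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))
y = (X[:, 0] - X[:, 1] > 0).astype(float)

def net_error(w, lam):
    """MSE of a tiny one-layer net plus lam * squared-norm weight penalty."""
    out = 1.0 / (1.0 + np.exp(-(X @ w[:4] + w[4])))
    return np.mean((out - y) ** 2) + lam * np.sum(w**2)

def random_search(lam, iters=3000, sigma=0.3):
    """Crude derivative-free minimiser standing in for the discrete gradient method."""
    w, fw = np.zeros(5), net_error(np.zeros(5), lam)
    for _ in range(iters):
        trial = w + rng.normal(0.0, sigma, 5)
        ft = net_error(trial, lam)
        if ft < fw:
            w, fw = trial, ft
    return w

for lam in (0.0, 0.01, 0.1):
    w = random_search(lam)
    acc = np.mean(((1 / (1 + np.exp(-(X @ w[:4] + w[4])))) > 0.5) == y)
    print(f"lambda={lam:<5} max|w|={np.abs(w).max():.2f} accuracy={acc:.2f}")
```

Sweeping lambda this way mirrors the trade-off the abstract describes: larger penalties shrink the weights, guarding against paralysis, but an overly large penalty can cost classification accuracy.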
Fusion strategies for neural learning algorithms using evolutionary and discrete gradient approaches
- Authors: Ghosh, Ranadhir , Yearwood, John , Ghosh, Moumita , Bagirov, Adil
- Date: 2005
- Type: Text , Conference paper
- Relation: Paper presented at AIA 2005: International Conference on Artificial Intelligence and Applications, Innsbruck, Austria : 14th - 16th February, 2005
- Full Text: false
- Reviewed:
- Description: In this paper we investigate different variants of hybrid models using the Discrete Gradient method and an evolutionary strategy for determining the weights in a feed forward artificial neural network. The Discrete Gradient method has the advantage of being able to jump over many local minima and find very deep local minima. However, earlier research has shown that a good starting point for the Discrete Gradient method can improve the quality of the solution point. Evolutionary algorithms are well suited to global optimisation problems; nevertheless, they suffer from long training times and are often unsuitable for real-world applications. For optimisation problems such as weight optimisation for ANNs in real-world applications, the dimensions are large and time complexity is critical. Hence a hybrid model can be a suitable option. In this paper we propose different fusion strategies for hybrid models combining the evolutionary strategy with the discrete gradient method to obtain an optimal solution much more quickly. Three different fusion strategies are discussed: a linear hybrid model, an iterative hybrid model and a restricted local search hybrid model. Comparative results on a range of standard datasets are provided for the different fusion hybrid models. (A sketch of the iterative fusion strategy follows this record.)
- Description: E1
- Description: 2003001365
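The linear hybrid was sketched after the first record above; this abstract also names an iterative fusion strategy, in which the evolutionary and local-search phases alternate rather than run once each. A small sketch of that alternation follows; the hill-climbing refinement is a hypothetical stand-in for the Discrete Gradient method, and all population sizes and budgets are illustrative assumptions.

```python
# Iterative fusion: short evolutionary bursts alternate with local
# refinement of the elite, which is re-injected into the population.
import numpy as np

rng = np.random.default_rng(4)

def sphere_like(w):
    """Toy non-convex stand-in for an ANN training-error surface."""
    return float(np.sum(w**2) + 2 * np.sum(np.sin(2 * w) ** 2))

def local_refine(f, w, iters=200, step=0.1):
    fw = f(w)
    for _ in range(iters):
        trial = w + rng.normal(0.0, step, w.shape)
        ft = f(trial)
        if ft < fw:
            w, fw = trial, ft
    return w

def iterative_hybrid(f, dim=8, pop=20, rounds=5, gens=20, sigma=0.4):
    popn = rng.normal(0.0, 1.0, (pop, dim))
    for _ in range(rounds):
        for _ in range(gens):                      # short evolutionary burst
            kids = popn[rng.integers(0, pop, pop)] + rng.normal(0, sigma, (pop, dim))
            both = np.vstack([popn, kids])
            popn = both[np.argsort([f(w) for w in both])[:pop]]
        popn[0] = local_refine(f, popn[0])         # refine the elite, re-inject
    return popn[0], f(popn[0])

w, fw = iterative_hybrid(sphere_like)
print(f"final objective: {fw:.4f}")
```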
Hybridization of neural learning algorithms using evolutionary and discrete gradient approaches
- Authors: Ghosh, Ranadhir , Yearwood, John , Ghosh, Moumita , Bagirov, Adil
- Date: 2005
- Type: Text , Journal article
- Relation: Journal of Computer Science Vol. 1, no. 3 (2005), p. 387-394
- Full Text: false
- Reviewed:
- Description: In this study we investigate a hybrid model based on the Discrete Gradient method and an evolutionary strategy for determining the weights in a feed forward artificial neural network, and discuss different variants of such hybrid models. The Discrete Gradient method has the advantage of being able to jump over many local minima and find very deep local minima. However, earlier research has shown that a good starting point for the Discrete Gradient method can improve the quality of the solution point. Evolutionary algorithms are well suited to global optimisation problems; nevertheless, they suffer from long training times and are often unsuitable for real-world applications. For optimisation problems such as weight optimisation for ANNs in real-world applications, the dimensions are large and time complexity is critical. Hence a hybrid model can be a suitable option. In this study we propose different fusion strategies for hybrid models combining the evolutionary strategy with the discrete gradient method to obtain an optimal solution much more quickly. Three different fusion strategies are discussed: a linear hybrid model, an iterative hybrid model and a restricted local search hybrid model. Comparative results on a range of standard datasets are provided for the different fusion hybrid models.
- Description: C1
- Description: 2003001357
A hybrid neural learning algorithm combining evolutionary algorithm with discrete gradient method
- Authors: Ghosh, Ranadhir , Yearwood, John , Bagirov, Adil
- Date: 2004
- Type: Text , Conference paper
- Relation: Paper presented at the Second International Conference on Soft Computing and Intelligent Systems, Yokohama, Japan : 21st October, 2004
- Full Text: false
- Reviewed:
- Description: E1
- Description: 2003000860
Optimization of feed forward MLPs using the discrete gradient method
- Authors: Bagirov, Adil , Yearwood, John , Ghosh, Ranadhir
- Date: 2004
- Type: Text , Conference paper
- Relation: Paper presented at CIMCA 2004: International Conference on Computational Intelligence for Modelling, Control & Automation, Gold Coast, Queensland : 12th July, 2004
- Full Text: false
- Reviewed:
- Description: E1
- Description: 2003000845
Solving the Euclidean travelling salesman problem using discrete-gradient based clustering and Kohonen neural network
- Authors: Ghosh, Moumita , Ugon, Julien , Ghosh, Ranadhir , Bagirov, Adil
- Date: 2004
- Type: Text , Conference paper
- Relation: Paper presented at ICOTA6: 6th International Conference on Optimization - Techniques and Applications, Ballarat, Victoria : 9th December, 2004
- Full Text: false
- Reviewed:
- Description: E1
- Description: 2003000864