An algorithm for minimization of pumping costs in water distribution systems using a novel approach to pump scheduling
- Authors: Bagirov, Adil , Barton, Andrew , Mala-Jetmarova, Helena , Al Nuaimat, Alia , Ahmed, S. T. , Sultanova, Nargiz , Yearwood, John
- Date: 2013
- Type: Text , Journal article
- Relation: Mathematical and Computer Modelling Vol. 57, no. 3-4 (2013), p. 873-886
- Relation: http://purl.org/au-research/grants/arc/LP0990908
- Full Text: false
- Reviewed:
- Description: The operation of a water distribution system is a complex task which involves scheduling of pumps, regulating water levels of storages, and providing satisfactory water quality to customers at required flow and pressure. Pump scheduling is one of the most important tasks in the operation of a water distribution system, as it represents the major part of its operating costs. In this paper, a novel approach to the explicit modelling of pump scheduling to minimize energy consumption by pumps is introduced, which uses the pump start/end run times as continuous variables and binary integer variables to describe the pump status at the beginning of the scheduling period. This differs from other approaches, where binary integer variables for each hour are typically used, which is considered very impractical from an operational perspective. The problem is formulated as a mixed integer nonlinear programming problem, and a new algorithm is developed for its solution. This algorithm is based on the combination of a grid search with the Hooke-Jeeves pattern search method. The performance of the algorithm is evaluated on literature test problems using the hydraulic simulation model EPANet. © 2012 Elsevier Ltd.
- Description: 2003010583
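The abstract above names the Hooke-Jeeves pattern search as the local component of the algorithm. The paper's grid-search hybrid is not reproduced here; the following is only a minimal, generic sketch of the Hooke-Jeeves method on a stand-in cost function, with all parameter choices illustrative:

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Minimize f by Hooke-Jeeves pattern search (derivative-free)."""
    def explore(base, s):
        # Exploratory move: probe +s, then -s, along each coordinate.
        x = base.copy()
        fx = f(x)
        for i in range(len(x)):
            for d in (s, -2 * s):  # net moves of +s, then -s
                x[i] += d
                fx_new = f(x)
                if fx_new < fx:
                    fx = fx_new
                    break
            else:
                x[i] += s  # neither direction helped: restore coordinate
        return x, fx

    base = np.asarray(x0, dtype=float)
    fbase = f(base)
    it = 0
    while step > tol and it < max_iter:
        x, fx = explore(base, step)
        if fx < fbase:
            # Pattern move: extrapolate along the improving direction,
            # then explore around the extrapolated point.
            while True:
                candidate = x + (x - base)
                base, fbase = x, fx
                x, fx = explore(candidate, step)
                if fx >= fbase:
                    break
        else:
            step *= shrink  # no improvement: refine the mesh
        it += 1
    return base, fbase
```

A successful exploration triggers pattern moves that accelerate along the improving direction; when no probe improves the objective, the step is halved until it falls below the tolerance.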
A novel approach to optimal pump scheduling in water distribution systems
- Authors: Bagirov, Adil , Barton, Andrew , Mala-Jetmarova, Helena , Al Nuaimat, Alia , Ahmed, S. T. , Sultanova, Nargiz , Yearwood, John
- Date: 2012
- Type: Text , Conference paper
- Relation: 14th Water Distribution Systems Analysis Conference 2012, WDSA 2012 Vol. 1; Adelaide, Australia; 24th-27th September; p. 618-631
- Relation: http://purl.org/au-research/grants/arc/LP0990908
- Full Text: false
- Reviewed:
- Description: The operation of a water distribution system is a complex task which involves scheduling of pumps, regulating water levels of storages, and providing satisfactory water quality to customers at required flow and pressure. Pump scheduling is one of the most important tasks in the operation of a water distribution system, as it represents the major part of its operating costs. In this paper, a novel approach to the modelling of pump scheduling to minimize energy consumption by pumps is introduced, which uses the pump start/end run times as continuous variables. This differs from other approaches, where binary integer variables for each hour are typically used, which is considered very impractical from an operational perspective. The problem is formulated as a nonlinear programming problem, and a new algorithm is developed for its solution. This algorithm is based on the combination of a grid search with the Hooke-Jeeves pattern search method. The performance of the algorithm is evaluated on literature test problems using the hydraulic simulation model EPANet.
- Description: E1
Machine learning algorithms for analysis of DNA data sets
- Authors: Yearwood, John , Bagirov, Adil , Kelarev, Andrei
- Date: 2012
- Type: Text , Book chapter
- Relation: Machine Learning Algorithms for Problem Solving in Computational Applications: Intelligent Techniques p. 47-58
- Relation: http://purl.org/au-research/grants/arc/LP0990908
- Full Text: false
- Reviewed:
- Description: The applications of machine learning algorithms to the analysis of data sets of DNA sequences are very important. The present chapter is devoted to the experimental investigation of applications of several machine learning algorithms for the analysis of a JLA data set consisting of DNA sequences derived from non-coding segments in the junction of the large single copy region and inverted repeat A of the chloroplast genome in Eucalyptus collected by Australian biologists. Data sets of this sort represent a new situation, where sophisticated alignment scores have to be used as a measure of similarity. The alignment scores do not satisfy the properties of the Minkowski metric, and new machine learning approaches have to be investigated. The authors' experiments show that machine learning algorithms based on local alignment scores achieve very good agreement with known biological classes for this data set. A new machine learning algorithm based on graph partitioning performed best for clustering of the JLA data set. The authors' novel k-committees algorithm produced the most accurate results for classification. Two new examples of synthetic data sets demonstrate that the authors' k-committees algorithm can outperform both the Nearest Neighbour and k-medoids algorithms simultaneously.
A novel hybrid neural learning algorithm using simulated annealing and quasisecant method
- Authors: Yearwood, John , Bagirov, Adil , Seifollahi, Sattar
- Date: 2011
- Type: Text , Conference proceedings
- Full Text: false
- Description: In this paper, we propose a hybrid learning algorithm for single hidden layer feedforward neural networks (SLFNs) for data classification. The proposed hybrid algorithm is a two-phase learning algorithm based on the quasisecant and simulated annealing methods. First, the weights between the hidden layer and the output layer nodes (output layer weights) are adjusted by the quasisecant algorithm. Then simulated annealing is applied for global attribute weighting. The weights between the input layer and the hidden layer nodes are fixed in advance and are not included in the learning process. The proposed two-phase learning of the network is a novel idea and is different from existing approaches. Numerical results on benchmark data sets are also reported, and these results are promising. © 2011, Australian Computer Society, Inc.
- Description: 2003009507
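The global phase of the two-phase scheme above uses simulated annealing. The paper's SLFN algorithm (and its quasisecant phase) is not reproduced here; this is only an illustrative, generic simulated annealing minimizer with assumed parameters:

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.95, steps=2000, scale=0.5, seed=0):
    """Generic simulated annealing for continuous minimization.

    Sketches the global phase of a two-phase scheme: SA explores broadly,
    and a local method (e.g. a quasisecant solver) would then refine.
    """
    rng = random.Random(seed)
    x = list(x0)
    fx = f(x)
    best, fbest = x[:], fx
    t = t0
    for _ in range(steps):
        # Propose a Gaussian perturbation of one randomly chosen coordinate.
        i = rng.randrange(len(x))
        cand = x[:]
        cand[i] += rng.gauss(0.0, scale)
        fc = f(cand)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x[:], fx
        t *= cooling  # geometric cooling schedule
    return best, fbest
```

As the temperature cools, the acceptance rule becomes effectively greedy, so the run ends as a stochastic local descent around the best region found.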
Optimization methods and the k-committees algorithm for clustering of sequence data
- Authors: Yearwood, John , Bagirov, Adil , Kelarev, Andrei
- Date: 2009
- Type: Text , Journal article
- Relation: Applied and Computational Mathematics Vol. 8, no. 1 (2009), p. 92-101
- Relation: http://purl.org/au-research/grants/arc/DP0211866
- Relation: http://purl.org/au-research/grants/arc/DP0666061
- Full Text: false
- Description: The present paper is devoted to new algorithms for unsupervised clustering based on the optimization approaches due to [2], [3] and [4]. We consider a novel situation, where the datasets consist of nucleotide or protein sequences and rather sophisticated biologically significant alignment scores have to be used as a measure of distance. Sequences of this kind cannot be regarded as points in a finite dimensional space. Moreover, the alignment scores do not satisfy the properties of Minkowski metrics. Nevertheless, the optimization approaches have made it possible to introduce a new k-committees algorithm and compare its performance with previous algorithms for two datasets. Our experimental results show that the k-committees algorithm achieves intermediate accuracy for a dataset of ITS sequences, and it can perform better than the discrete k-means and Nearest Neighbour algorithms for certain datasets. All three algorithms achieve good agreement with clusters previously published in the biological literature and can be used to obtain biologically significant clusterings.
Visual tools for analysing evolution, emergence, and error in data streams
- Authors: Hart, Sol , Yearwood, John , Bagirov, Adil
- Date: 2007
- Type: Text , Conference paper
- Relation: Paper presented at 6th IEEE/ACIS International Conference on Computer and Information Science, ICIS 2007, Melbourne, Victoria : 11th-13th July 2007 p. 987-992
- Full Text:
- Description: The relatively new field of stream mining has necessitated the development of robust drift-aware algorithms that provide accurate, real-time data handling capabilities. Tools are needed to assess and diagnose important trends and to investigate drift evolution parameters. In this paper, we present two novel visualisation techniques, Pixie and Luna graphs, which incorporate salient group statistics coupled with intuitive visual representations of multidimensional groupings over time. Through the novel representations presented here, spatial interactions between temporal divisions can be diagnosed and overall distribution patterns identified. These graphs provide a means of evaluating commonly constrained evolutionary problems in a non-constrained capacity.
- Description: 2003005432
A hybrid neural learning algorithm using evolutionary learning and derivative free local search method
- Authors: Ghosh, Ranadhir , Yearwood, John , Ghosh, Moumita , Bagirov, Adil
- Date: 2006
- Type: Text , Journal article
- Relation: International Journal of Neural Systems Vol. 16, no. 3 (2006), p. 201-213
- Full Text: false
- Reviewed:
- Description: In this paper we investigate a hybrid model based on the Discrete Gradient method and an evolutionary strategy for determining the weights in a feed forward artificial neural network, and we discuss different variants of such hybrid models. The Discrete Gradient method has the advantage of being able to jump over many local minima and find very deep local minima. However, earlier research has shown that a good starting point for the Discrete Gradient method can improve the quality of the solution point. Evolutionary algorithms are best suited for global optimisation problems; nevertheless, they suffer from long training times and are often unsuitable for real-world applications. For optimisation problems such as weight optimisation for ANNs in real-world applications, the dimensions are large and time complexity is critical. Hence a hybrid model can be a suitable option. In this paper we propose different fusion strategies for hybrid models combining the evolutionary strategy with the Discrete Gradient method to obtain an optimal solution much more quickly. Three different fusion strategies are discussed: a linear hybrid model, an iterative hybrid model and a restricted local search hybrid model. Comparative results on a range of standard datasets are provided for the different fusion hybrid models. © World Scientific Publishing Company.
- Description: C1
- Description: 2003001712
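The "linear hybrid model" named in the abstract above chains a global evolutionary phase into a local derivative-free phase. A hedged sketch of that idea follows, with a simple evolutionary strategy and a greedy coordinate search standing in for the Discrete Gradient method; all names, bounds and settings here are illustrative, not the paper's:

```python
import random

def linear_hybrid(f, dim, pop=20, gens=30, sigma=1.0, seed=1):
    """Linear hybrid (sketch): an evolutionary phase supplies a good
    starting point, which a local derivative-free phase then refines."""
    rng = random.Random(seed)
    # --- Phase 1: simple (mu+lambda)-style evolutionary search ---
    popn = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=f)
        parents = popn[: pop // 2]
        children = [
            [w + rng.gauss(0.0, sigma) for w in rng.choice(parents)]
            for _ in range(pop - len(parents))
        ]
        popn = parents + children
        sigma *= 0.95  # slowly narrow the mutation step
    x = min(popn, key=f)
    # --- Phase 2: greedy coordinate refinement from the ES seed ---
    step = sigma
    while step > 1e-6:
        improved = False
        for i in range(dim):
            for d in (step, -step):
                cand = x[:]
                cand[i] += d
                if f(cand) < f(x):
                    x, improved = cand, True
                    break
        if not improved:
            step *= 0.5  # stalled at this resolution: halve the step
    return x, f(x)
```

The point of the linear arrangement is that the cheap global phase only needs to land in a good basin; the local phase then does the precise minimization.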
A new nonsmooth optimization algorithm for minimum sum-of-squares clustering problems
- Authors: Bagirov, Adil , Yearwood, John
- Date: 2006
- Type: Text , Journal article
- Relation: European Journal of Operational Research Vol. 170, no. 2 (2006), p. 578-596
- Full Text: false
- Reviewed:
- Description: The minimum sum-of-squares clustering problem is formulated as a problem of nonsmooth, nonconvex optimization, and an algorithm for solving the former problem based on nonsmooth optimization techniques is developed. The issue of applying this algorithm to large data sets is discussed. Results of numerical experiments have been presented which demonstrate the effectiveness of the proposed algorithm. © 2004 Elsevier B.V. All rights reserved.
- Description: C1
- Description: 2003001520
Comparative analysis of genetic algorithm vs. evolutionary algorithm for hybrid models with discrete gradient method for artificial neural network
- Authors: Ghosh, Ranadhir , Ghosh, Moumita , Yearwood, John , Bagirov, Adil
- Date: 2005
- Type: Text , Conference paper
- Relation: Paper presented at the 11th International Fuzzy Systems Associations World Congress, IFSA 2005, Beijing, China, Volume III, Beijing, China : 28th-31st July, 2005
- Full Text: false
- Reviewed:
- Description: E1
- Description: 2003001359
Comparative analysis of genetic algorithm, simulated annealing and cutting angle method for artificial neural networks
- Authors: Ghosh, Ranadhir , Ghosh, Moumita , Yearwood, John , Bagirov, Adil
- Date: 2005
- Type: Text , Journal article
- Relation: Machine Learning and Data Mining in Pattern Recognition, Proceedings Vol. 3587 (2005), p. 62-70
- Full Text: false
- Reviewed:
- Description: Learning is the essence of an artificial neural network (ANN), and it is hindered by the multiple local minima of the error surface. Global optimization methods are capable of finding a globally optimal solution. In this paper we investigate and present a comparative study of the effects of probabilistic and deterministic global search methods for artificial neural networks using a fully connected feed forward multi-layered perceptron architecture. We investigate two probabilistic global search methods, namely the genetic algorithm and simulated annealing, and the deterministic cutting angle method, to find the weights in the neural network. Experiments were carried out on UCI benchmark datasets.
- Description: C1
- Description: 2003003398
Determining regularization parameters for derivative free neural learning
- Authors: Ghosh, Ranadhir , Ghosh, Moumita , Yearwood, John , Bagirov, Adil
- Date: 2005
- Type: Text , Conference paper
- Relation: Paper presented at 4th International Conference, MLDM 2005: Machine Learning and Data Mining in Pattern Recognition, Leipzig, Germany : 9th-11th July 2005 p. 71-79
- Full Text: false
- Description: Derivative free optimization methods have recently attracted a lot of attention for neural learning. The curse of dimensionality in the neural learning problem makes local optimization methods very attractive; however, the error surface contains many local minima. The discrete gradient method is a special case of derivative free methods based on bundle methods and has the ability to jump over many local minima. Two types of problems are associated with using local optimization methods for neural learning. The first is the problem of sensitivity to the initial point, which is commonly solved by using a hybrid model. Our earlier research has shown that combining the discrete gradient method with global methods such as evolutionary algorithms makes it even more attractive. These types of hybrid models have also been studied by other researchers. Another, less mentioned, problem is that of large weight values for the synaptic connections of the network. Large synaptic weight values often lead to network paralysis and convergence problems, especially when a hybrid model is used for fine-tuning the learning task. In this paper we study and analyse the effect of different regularization parameters in our objective function to restrict the weight values without compromising the classification accuracy.
- Description: 2003001362
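The role of the regularization parameter discussed above, restricting weight magnitudes, can be illustrated with a closed-form L2-regularized least-squares stand-in (not the paper's neural objective): as the parameter grows, the norm of the fitted weights shrinks, at some cost in data fit.

```python
import numpy as np

def ridge_weights(X, y, lam):
    """Closed-form L2-regularized least squares:
    w = (X^T X + lam * I)^{-1} X^T y.
    A stand-in showing how the regularization parameter lam
    shrinks the magnitude of the fitted weights."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

# Synthetic illustration: the weight norm decreases as lam increases.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=50)
norms = [np.linalg.norm(ridge_weights(X, y, lam)) for lam in (0.0, 1.0, 100.0)]
```

Choosing `lam` is the same trade-off the paper studies for neural weights: too small and the weights grow large (risking paralysis in a sigmoid network), too large and the fit to the data degrades.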
Fusion strategies for neural learning algorithms using evolutionary and discrete gradient approaches
- Authors: Ghosh, Ranadhir , Yearwood, John , Ghosh, Moumita , Bagirov, Adil
- Date: 2005
- Type: Text , Conference paper
- Relation: Paper presented at AIA 2005: International Conference on Artificial Intelligence and Applications, Innsbruck, Austria : 14th - 16th February, 2006
- Full Text: false
- Reviewed:
- Description: In this paper we investigate different variants of hybrid models using the Discrete Gradient method and an evolutionary strategy for determining the weights in a feed forward artificial neural network. The Discrete Gradient method has the advantage of being able to jump over many local minima and find very deep local minima. However, earlier research has shown that a good starting point for the Discrete Gradient method can improve the quality of the solution point. Evolutionary algorithms are best suited for global optimisation problems; nevertheless, they suffer from long training times and are often unsuitable for real-world applications. For optimisation problems such as weight optimisation for ANNs in real-world applications, the dimensions are large and time complexity is critical. Hence a hybrid model can be a suitable option. In this paper we propose different fusion strategies for hybrid models combining the evolutionary strategy with the Discrete Gradient method to obtain an optimal solution much more quickly. Three different fusion strategies are discussed: a linear hybrid model, an iterative hybrid model and a restricted local search hybrid model. Comparative results on a range of standard datasets are provided for the different fusion hybrid models.
- Description: E1
- Description: 2003001365
Hybridization of neural learning algorithms using evolutionary and discrete gradient approaches
- Authors: Ghosh, Ranadhir , Yearwood, John , Ghosh, Moumita , Bagirov, Adil
- Date: 2005
- Type: Text , Journal article
- Relation: Journal of Computer Science Vol. 1, no. 3 (2005), p. 387-394
- Full Text: false
- Reviewed:
- Description: In this study we investigate a hybrid model based on the Discrete Gradient method and an evolutionary strategy for determining the weights in a feed forward artificial neural network, and we discuss different variants of such hybrid models. The Discrete Gradient method has the advantage of being able to jump over many local minima and find very deep local minima. However, earlier research has shown that a good starting point for the Discrete Gradient method can improve the quality of the solution point. Evolutionary algorithms are best suited for global optimisation problems; nevertheless, they suffer from long training times and are often unsuitable for real-world applications. For optimisation problems such as weight optimisation for ANNs in real-world applications, the dimensions are large and time complexity is critical. Hence a hybrid model can be a suitable option. In this study we propose different fusion strategies for hybrid models combining the evolutionary strategy with the Discrete Gradient method to obtain an optimal solution much more quickly. Three different fusion strategies are discussed: a linear hybrid model, an iterative hybrid model and a restricted local search hybrid model. Comparative results on a range of standard datasets are provided for the different fusion hybrid models.
- Description: C1
- Description: 2003001357
A hybrid neural learning algorithm combining evolutionary algorithm with discrete gradient method
- Authors: Ghosh, Ranadhir , Yearwood, John , Bagirov, Adil
- Date: 2004
- Type: Text , Conference paper
- Relation: Paper presented at the Second International Conference on Soft Computing and Intelligent Systems, Yokohama, Japan : 21st October, 2004
- Full Text: false
- Reviewed:
- Description: E1
- Description: 2003000860
Optimization of feed forward MLPs using the discrete gradient method
- Authors: Bagirov, Adil , Yearwood, John , Ghosh, Ranadhir
- Date: 2004
- Type: Text , Conference paper
- Relation: Paper presented at CIMCA 2004: International Conference on Computational Intelligence for Modelling, Control & Automation, Gold Coast, Queensland : 12th July, 2004
- Full Text: false
- Reviewed:
- Description: E1
- Description: 2003000845
An algorithm for clustering based on non-smooth optimization techniques
- Authors: Bagirov, Adil , Rubinov, Alex , Sukhorukova, Nadezda , Yearwood, John
- Date: 2003
- Type: Text , Journal article
- Relation: International Transactions in Operational Research Vol. 10, no. 6 (2003), p. 611-617
- Full Text: false
- Reviewed:
- Description: The problem of cluster analysis is formulated as a problem of non-smooth, non-convex optimization, and an algorithm for solving the cluster analysis problem based on non-smooth optimization techniques is developed. We discuss applications of this algorithm in large databases. Results of numerical experiments are presented to demonstrate the effectiveness of this algorithm.
- Description: C1
- Description: 2003000422
New algorithms for multi-class cancer diagnosis using tumor gene expression signatures
- Authors: Bagirov, Adil , Ferguson, Brent , Ivkovic, Sasha , Saunders, Gary , Yearwood, John
- Date: 2003
- Type: Text , Journal article
- Relation: Bioinformatics Vol. 19, no. 14 (2003), p. 1800-1807
- Full Text:
- Reviewed:
- Description: Motivation: The increasing use of DNA microarray-based tumor gene expression profiles for cancer diagnosis requires mathematical methods with high accuracy for solving clustering, feature selection and classification problems of gene expression data. Results: New algorithms are developed for solving clustering, feature selection and classification problems of gene expression data. The clustering algorithm is based on optimization techniques and allows the calculation of clusters step-by-step. This approach allows us to find as many clusters as a data set contains with respect to some tolerance. Feature selection is crucial for a gene expression database. Our feature selection algorithm is based on calculating overlaps of different genes. The database used contains over 16,000 genes, and this number is considerably reduced by feature selection. We propose a classification algorithm where each tissue sample is considered as the center of a cluster which is a ball. The results of numerical experiments confirm that the classification algorithm in combination with the feature selection algorithm performs slightly better than the published results for multi-class classifiers based on support vector machines for this data set.
- Description: C1
- Description: 2003000439
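The classification idea described in the abstract above, each tissue sample as the centre of a cluster that is a ball, can be sketched as a nearest-centre rule with a rejection radius. The tie-breaking and radius choice here are illustrative, not the paper's:

```python
import numpy as np

def ball_classify(x, centers, labels, radius):
    """Sketch of ball-based classification: each training sample is the
    centre of a ball; a new point takes the label of its nearest centre,
    provided it falls inside that centre's ball, else it is rejected."""
    centers = np.asarray(centers, dtype=float)
    d = np.linalg.norm(centers - np.asarray(x, dtype=float), axis=1)
    j = int(d.argmin())  # nearest training sample
    return labels[j] if d[j] <= radius else None
```

Rejecting points that fall outside every ball gives the classifier a notion of "don't know", which plain nearest-neighbour classification lacks.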
Unsupervised and supervised data classification via nonsmooth and global optimisation
- Authors: Bagirov, Adil , Rubinov, Alex , Sukhorukova, Nadezda , Yearwood, John
- Date: 2003
- Type: Text , Journal article
- Relation: Top Vol. 11, no. 1 (2003), p. 1-92
- Full Text:
- Reviewed:
- Description: We examine various methods for data clustering and data classification that are based on the minimization of the so-called cluster function and its modifications. These functions are nonsmooth and nonconvex. We use Discrete Gradient methods for their local minimization. We also consider a combination of this method with the cutting angle method for global minimization. We present and discuss results of numerical experiments.
- Description: C1
- Description: 2003000421
A global optimization approach to classification
- Authors: Bagirov, Adil , Rubinov, Alex , Yearwood, John
- Date: 2002
- Type: Text , Journal article
- Relation: Optimization and Engineering Vol. 9, no. 7 (2002), p. 129-155
- Full Text: false
- Reviewed:
- Description: In this paper a hybrid algorithm is presented for finding the absolute extreme point of a multimodal scalar function of many variables. The algorithm is suitable when the objective function is expensive to compute, the computation can be affected by noise, and/or partial derivatives cannot be calculated. The method used is a genetic modification of a previous algorithm based on Price's method. All information about the behaviour of the objective function collected at previous iterates is used to choose new evaluation points. The genetic part of the algorithm is very effective at escaping from local attractors and assures convergence in probability to the global optimum. The proposed algorithm has been tested on a large set of multimodal test problems, outperforming both the modified Price's algorithm and the classical genetic approach.
- Description: C1
- Description: 2003000061
A global optimisation approach to classification in medical diagnosis and prognosis
- Authors: Bagirov, Adil , Rubinov, Alex , Yearwood, John , Stranieri, Andrew
- Date: 2001
- Type: Text , Conference paper
- Relation: Paper presented at 34th Hawaii International Conference on System Sciences, HICSS-34, Maui, Hawaii, USA : 3rd-6th January 2001
- Full Text:
- Description: In this paper global optimisation-based techniques are studied in order to increase the accuracy of medical diagnosis and prognosis with FNA image data from the Wisconsin Diagnostic and Prognostic Breast Cancer databases. First we discuss the problem of determining the most informative features for the classification of cancerous cases in the databases under consideration. Then we apply a technique based on convex and global optimisation to breast cancer diagnosis. It allows the classification of benign and malignant cases and the subsequent diagnosis of patients with very high accuracy. The third application of this technique is a method that calculates centres of clusters to predict when breast cancer is likely to recur in patients from whom cancer has been removed. The technique achieves higher accuracy with these databases than reported elsewhere in the literature.
- Description: 2003003950