A comparative assessment of models to predict monthly rainfall in Australia
- Authors: Bagirov, Adil , Mahmood, Arshad
- Date: 2018
- Type: Text , Journal article
- Relation: Water Resources Management Vol. 32, no. 5 (2018), p. 1777-1794
- Relation: http://purl.org/au-research/grants/arc/DP140103213
- Full Text: false
- Reviewed:
- Description: Accurate rainfall prediction is a challenging task. It is especially challenging in Australia, where the climate is highly variable. Australia’s climatic zones range from high-rainfall tropical regions in the north to the driest desert region in the interior. The performance of prediction models may vary depending on climatic conditions. It is, therefore, important to assess and compare the performance of these models in different climatic zones. This paper examines the performance of data-driven models such as support vector machines for regression, multiple linear regression, k-nearest neighbors and artificial neural networks for monthly rainfall prediction in Australia depending on climatic conditions. Rainfall data with five meteorological variables over the period 1970–2014 from 24 geographically diverse weather stations are used for this purpose. The prediction performance of each model was evaluated by comparing observed and predicted rainfall using various measures of prediction accuracy. © 2018, Springer Science+Business Media B.V., part of Springer Nature.
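One of the compared model classes, k-nearest neighbors regression, admits a very short generic implementation. The sketch below (using numpy and toy data) illustrates the standard uniform-weight form only; it does not reproduce the paper's features, preprocessing or parameter choices:

```python
import numpy as np

def knn_regress(X_train, y_train, x, k=3):
    """Predict the target at x as the mean target of the k nearest training
    points under Euclidean distance (uniform weights)."""
    d = np.linalg.norm(X_train - x, axis=1)   # distances to all training points
    nearest = np.argsort(d)[:k]               # indices of the k closest points
    return float(y_train[nearest].mean())

# Toy example with two standardized meteorological-style features.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
y = np.array([10.0, 12.0, 11.0, 50.0])
pred = knn_regress(X, y, np.array([0.2, 0.2]), k=3)  # averages the 3 closest targets
```

The distant point (5, 5) is excluded from the neighborhood, so the prediction is the mean of the three nearby targets.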
A convolutional recursive modified Self Organizing Map for handwritten digits recognition
- Authors: Mohebi, Ehsan , Bagirov, Adil
- Date: 2014
- Type: Text , Journal article
- Relation: Neural Networks Vol. 60, no. (2014), p. 104-118
- Relation: http://purl.org/au-research/grants/arc/DP140103213
- Full Text: false
- Reviewed:
- Description: It is well known that handwritten digit recognition is a challenging problem. Different classification algorithms have been applied to solve it. Among them, the Self Organizing Map (SOM) produced promising results. In this paper, we first introduce a Modified SOM for the vector quantization problem with an improved initialization process and topology preservation. Then we develop a Convolutional Recursive Modified SOM and apply it to the problem of handwritten digit recognition. The computational results obtained using the well-known MNIST dataset demonstrate the superiority of the proposed algorithm over existing SOM-based algorithms.
A difference of convex optimization algorithm for piecewise linear regression
- Authors: Bagirov, Adil , Taheri, Sona , Asadi, Soodabeh
- Date: 2019
- Type: Text , Journal article
- Relation: Journal of Industrial and Management Optimization Vol. 15, no. 2 (2019), p. 909-932
- Relation: http://purl.org/au-research/grants/arc/DP140103213
- Full Text: false
- Reviewed:
- Description: The problem of finding a continuous piecewise linear function approximating a regression function is considered. This problem is formulated as a nonconvex nonsmooth optimization problem, where the objective function is represented as a difference of convex (DC) functions. Subdifferentials of the DC components are computed, and an algorithm based on these subdifferentials is designed to find piecewise linear functions. The algorithm is tested using synthetic and real-world data sets and compared with other regression algorithms.
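The objects being fitted here are continuous piecewise linear functions, which can always be written in max–min form over groups of affine pieces. A minimal evaluation sketch (the groups below are illustrative, not taken from the paper):

```python
import numpy as np

def maxmin_eval(x, groups):
    """Evaluate a continuous piecewise linear function in max-min form:
    f(x) = max_j min_{(a, b) in group_j} (a @ x + b).
    Any continuous piecewise linear function can be written this way."""
    return max(min(np.dot(a, x) + b for (a, b) in g) for g in groups)

# Example: |x| = max(min(x), min(-x)) in one dimension, i.e. two groups
# each holding a single affine piece.
groups = [[(np.array([1.0]), 0.0)], [(np.array([-1.0]), 0.0)]]
```

Calling `maxmin_eval(np.array([3.0]), groups)` evaluates the absolute-value function at 3.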
A generalized subgradient method with piecewise linear subproblem
- Authors: Bagirov, Adil , Ganjehlou, Asef Nazari , Tor, Hakan , Ugon, Julien
- Date: 2010
- Type: Text , Journal article
- Relation: Dynamics of Continuous, Discrete and Impulsive Systems Series B: Applications and Algorithms Vol. 17, no. 5 (2010), p. 621-638
- Full Text: false
- Reviewed:
- Description: In this paper, a new version of the quasisecant method for nonsmooth nonconvex optimization is developed. Quasisecants are overestimates of the objective function in some neighborhood of a given point. Subgradients are used to obtain quasisecants. We describe classes of nonsmooth functions where quasisecants can be computed explicitly. We show that a descent direction with sufficient decrease must satisfy a set of linear inequalities. In the proposed algorithm this set of linear inequalities is solved by applying the subgradient algorithm to minimize a piecewise linear function. We compare results of numerical experiments between the proposed algorithm and the subgradient method. Copyright © 2010 Watam Press.
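The subproblem named in this abstract, minimizing a piecewise linear convex function by the subgradient method, can be sketched generically. The coefficients and the 1/k step-size rule below are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def subgradient_min(A, b, x0, iters=500):
    """Minimize the piecewise linear convex f(x) = max_i (A[i] @ x + b[i])
    by the subgradient method: a subgradient at x is the row A[i*] of an
    active (maximizing) piece, and diminishing steps 1/k are taken. The best
    point found is tracked, since subgradient steps need not descend."""
    x = x0.astype(float)
    best_x, best_f = x.copy(), float(np.max(A @ x + b))
    for k in range(1, iters + 1):
        i = int(np.argmax(A @ x + b))        # index of an active piece
        x = x - (1.0 / k) * A[i]             # subgradient step, step size 1/k
        f = float(np.max(A @ x + b))
        if f < best_f:
            best_f, best_x = f, x.copy()
    return best_x, best_f

# f(x) = max(x, -x) = |x| in one dimension: the minimum value 0 is at x = 0.
A = np.array([[1.0], [-1.0]])
b = np.array([0.0, 0.0])
x_star, f_star = subgradient_min(A, b, np.array([5.0]))
```

Starting from x = 5, the iterates shrink toward 0, with the diminishing steps damping the oscillation around the kink.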
A global optimization approach to classification
- Authors: Bagirov, Adil , Rubinov, Alex , Yearwood, John
- Date: 2002
- Type: Text , Journal article
- Relation: Optimization and Engineering Vol. 9, no. 7 (2002), p. 129-155
- Full Text: false
- Reviewed:
- Description: This paper presents a hybrid algorithm for finding the absolute extreme point of a multimodal scalar function of many variables. The algorithm is suitable when the objective function is expensive to compute, the computation can be affected by noise and/or partial derivatives cannot be calculated. The method used is a genetic modification of a previous algorithm based on Price's method. All information about the behavior of the objective function collected at previous iterates is used to choose new evaluation points. The genetic part of the algorithm is very effective at escaping local attractors and ensures convergence in probability to the global optimum. The proposed algorithm has been tested on a large set of multimodal test problems, outperforming both the modified Price's algorithm and a classical genetic approach.
A heuristic algorithm for solving the minimum sum-of-squares clustering problems
- Authors: Ordin, Burak , Bagirov, Adil
- Date: 2015
- Type: Text , Journal article
- Relation: Journal of Global Optimization Vol. 61, no. 2 (2015), p. 341-361
- Relation: http://purl.org/au-research/grants/arc/DP140103213
- Full Text: false
- Reviewed:
- Description: Clustering is an important task in data mining. It can be formulated as a global optimization problem which is challenging for existing global optimization techniques, even in medium-size data sets. Various heuristics have been developed to solve the clustering problem. The global k-means and modified global k-means are among the most efficient heuristics for solving the minimum sum-of-squares clustering problem. However, these algorithms are not always accurate in finding global or near-global solutions to the clustering problem. In this paper, we introduce a new algorithm to improve the accuracy of the modified global k-means algorithm in finding global solutions. We use an auxiliary clustering problem to generate a set of initial points and apply the k-means algorithm starting from these points to find the global solution. Numerical results on 16 real-world data sets clearly demonstrate the superiority of the proposed algorithm over the global and modified global k-means algorithms in finding global solutions to clustering problems.
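The multi-start idea described here, running k-means from a set of candidate initial points and keeping the best solution, can be sketched as follows. The candidate starts below are supplied directly as an assumption; the paper generates them by solving an auxiliary clustering problem, which is not reproduced:

```python
import numpy as np

def kmeans(X, centers, iters=50):
    """Lloyd's k-means iterations from given initial centers; returns the
    final centers and the sum-of-squares error (SSE)."""
    centers = centers.astype(float).copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):           # skip empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return centers, float(((X - centers[labels]) ** 2).sum())

def best_start_kmeans(X, starts):
    """Run k-means from each candidate set of starting centers and keep the
    solution with the lowest SSE."""
    results = [kmeans(X, np.array(s)) for s in starts]
    return min(results, key=lambda r: r[1])

# Two well-separated groups of four points each; the candidate starts pair
# the first point with every other point, so at least one start spans both
# groups and leads to the optimal two-cluster solution (SSE = 4).
base = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
X = np.vstack([base, base + 10.0])
starts = [[X[0], X[i]] for i in range(1, len(X))]
centers, sse = best_start_kmeans(X, starts)
```

Keeping the lowest-SSE run is what lets a good starting point dominate the many poor local solutions that single-start k-means can return.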
A history of water distribution systems and their optimisation
- Authors: Mala-Jetmarova, Helena , Barton, Andrew , Bagirov, Adil
- Date: 2015
- Type: Text , Journal article
- Relation: Water Science and Technology-Water Supply Vol. 15, no. 2 (2015), p. 224-235
- Relation: http://purl.org/au-research/grants/arc/LP0990908
- Full Text: false
- Reviewed:
- Description: Water distribution systems have a very long and rich history dating back to the third millennium B.C. Advances in water supply and distribution were followed in parallel by discoveries and inventions in other related fields. Therefore, it is the aim of this paper to review both the history of water distribution systems and those related fields in order to present a coherent summary of the complex multi-stranded discipline of water engineering. Related fields reviewed in this paper include devices for raising water and water pumps, water quality and water treatment, hydraulics, network analysis, and optimisation of water distribution systems. The review is brief and concise and allows the reader to quickly gain an understanding of the history and advancements of water distribution systems and analysis. Furthermore, the paper gives details of other existing publications where more information can be found.
A hybrid neural learning algorithm using evolutionary learning and derivative free local search method
- Authors: Ghosh, Ranadhir , Yearwood, John , Ghosh, Moumita , Bagirov, Adil
- Date: 2006
- Type: Text , Journal article
- Relation: International Journal of Neural Systems Vol. 16, no. 3 (2006), p. 201-213
- Full Text: false
- Reviewed:
- Description: In this paper we investigate a hybrid model based on the Discrete Gradient method and an evolutionary strategy for determining the weights in a feed-forward artificial neural network, and discuss different variants of such hybrid models. The Discrete Gradient method has the advantage of being able to jump over many local minima and find very deep local minima. However, earlier research has shown that a good starting point for the Discrete Gradient method can improve the quality of the solution point. Evolutionary algorithms are best suited for global optimisation problems; nevertheless, they suffer from long training times and are often unsuitable for real-world applications. For optimisation problems such as weight optimisation for ANNs in real-world applications, the dimensions are large and time complexity is critical. Hence a hybrid model is a suitable option. In this paper we propose different fusion strategies for hybrid models combining the evolutionary strategy with the Discrete Gradient method to obtain an optimal solution much more quickly. Three fusion strategies are discussed: a linear hybrid model, an iterative hybrid model and a restricted local search hybrid model. Comparative results on a range of standard datasets are provided for the different hybrid models. © World Scientific Publishing Company.
A method for minimization of quasidifferentiable functions
- Authors: Bagirov, Adil
- Date: 2002
- Type: Text , Journal article
- Relation: Optimization Methods and Software Vol. 17, no. 1 (2002), p. 31-60
- Full Text: false
- Reviewed:
- Description: In this paper, we propose a new method for the unconstrained minimization of a function presented as a difference of two convex functions. This method is based on continuous approximations to the Demyanov-Rubinov quasidifferential. First, a terminating algorithm for the computation of a descent direction of the objective function is described. Then we present a minimization algorithm and study its convergence. An implementable version of this algorithm is discussed. Finally, we report the results of preliminary numerical experiments.
A method of truncated codifferential with application to some problems of cluster analysis
- Authors: Demyanov, Vladimir , Bagirov, Adil , Rubinov, Alex
- Date: 2002
- Type: Text , Journal article
- Relation: Journal of Global Optimization Vol. 23, no. 1 (May 2002), p. 63-80
- Full Text: false
- Reviewed:
- Description: A method of truncated codifferential descent for minimizing continuously codifferentiable functions is suggested. The convergence of the method is studied. Results of numerical experiments are presented. Application of the suggested method to some problems of cluster analysis is discussed. In the numerical experiments, the Wisconsin Diagnostic Breast Cancer database was used.
A multidimensional descent method for global optimization
- Authors: Bagirov, Adil , Rubinov, Alex , Zhang, Jiapu
- Date: 2009
- Type: Text , Journal article
- Relation: Optimization Vol. 58, no. 5 (2009), p. 611-625
- Full Text: false
- Reviewed:
- Description: This article presents a new multidimensional descent method for solving global optimization problems with box constraints. This is a hybrid method in which a local search method is used for local descent and a global search is used for further multidimensional search on subsets of the intersection of cones generated by the local search method and the feasible region. The discrete gradient method is used for local search and the cutting angle method is used for global search. Two- and three-dimensional cones are used for the global search. Such an approach allows one, as a rule, to escape local minimizers which are not global ones. The proposed method is a local optimization method with strong global search properties. We present results of numerical experiments using both smooth and nonsmooth global optimization test problems. These results demonstrate that the proposed algorithm allows one to find a global or near-global minimizer.
A new nonsmooth optimization algorithm for minimum sum-of-squares clustering problems
- Authors: Bagirov, Adil , Yearwood, John
- Date: 2006
- Type: Text , Journal article
- Relation: European Journal of Operational Research Vol. 170, no. 2 (2006), p. 578-596
- Full Text: false
- Reviewed:
- Description: The minimum sum-of-squares clustering problem is formulated as a problem of nonsmooth, nonconvex optimization, and an algorithm for solving the former problem based on nonsmooth optimization techniques is developed. The issue of applying this algorithm to large data sets is discussed. Results of numerical experiments have been presented which demonstrate the effectiveness of the proposed algorithm. © 2004 Elsevier B.V. All rights reserved.
A novel optimization approach towards improving separability of clusters
- Authors: Bagirov, Adil , Hoseini-Monjezi, Najmeh , Taheri, Sona
- Date: 2023
- Type: Text , Journal article
- Relation: Computers and Operations Research Vol. 152, no. (2023), p.
- Relation: http://purl.org/au-research/grants/arc/DP190100580
- Full Text: false
- Reviewed:
- Description: The objective functions in optimization models of the sum-of-squares clustering problem reflect intra-cluster similarity and inter-cluster dissimilarity, and in general, optimal values of these functions can be considered appropriate measures of the compactness of clusters. However, the use of the objective function alone may not lead to finding separable clusters. To address this shortcoming in existing models for clustering, we develop a new optimization model where the objective function is represented as a sum of two terms reflecting the compactness and separability of clusters. Based on this model we develop a two-phase incremental clustering algorithm. In the first phase, the clustering function is minimized to find compact clusters, and in the second phase, a new model is applied to improve the separability of clusters. The Davies–Bouldin cluster validity index is applied as an additional measure to compare the compactness of clusters, and silhouette coefficients are used to estimate the separability of clusters. The performance of the proposed algorithm is demonstrated and compared with that of four other algorithms using synthetic and real-world data sets. Numerical results clearly show that, in comparison with the other algorithms, the new algorithm is able to find clusters with better separability and similar compactness. © 2022
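The silhouette coefficient used here as a separability estimate has a standard definition that is easy to compute directly. A minimal numpy sketch of that standard definition (not the paper's code):

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette coefficient: for each point, a = mean distance to its
    own cluster, b = lowest mean distance to any other cluster, and
    s = (b - a) / max(a, b). Values near 1 indicate well-separated clusters."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    scores = []
    for i, li in enumerate(labels):
        own = (labels == li) & (np.arange(len(X)) != i)
        if not own.any():
            scores.append(0.0)            # singleton cluster: s = 0 by convention
            continue
        a = D[i, own].mean()
        b = min(D[i, labels == lj].mean() for lj in set(labels) if lj != li)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Two tight, well-separated clusters give a silhouette close to 1.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
labels = np.array([0, 0, 1, 1])
score = silhouette(X, labels)
```

A compact-but-overlapping clustering would drive b toward a and the score toward 0, which is exactly what the separability term in the paper's model is meant to avoid.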
A novel piecewise linear classifier based on polyhedral conic and max-min separabilities
- Authors: Bagirov, Adil , Ugon, Julien , Webb, Dean , Ozturk, Gurkan , Kasimbeyli, Refail
- Date: 2011
- Type: Text , Journal article
- Relation: TOP Vol. 21, no. 1 (2011), p. 1-22
- Full Text: false
- Reviewed:
- Description: In this paper, an algorithm for finding piecewise linear boundaries between pattern classes is developed. This algorithm consists of two main stages. In the first stage, a polyhedral conic set is used to identify data points which lie inside their classes, and in the second stage we exclude those points to compute a piecewise linear boundary using the remaining data points. Piecewise linear boundaries are computed incrementally, starting with one hyperplane. Such an approach allows one to significantly reduce the computational effort in many large data sets. Results of numerical experiments are reported. These results demonstrate that the new algorithm consistently produces good test set accuracy on most data sets compared with a number of other mainstream classifiers. © 2011 Sociedad de Estadística e Investigación Operativa.
A proximal bundle method for nonsmooth DC optimization utilizing nonconvex cutting planes
- Authors: Joki, Kaisa , Bagirov, Adil , Karmitsa, Napsu , Makela, Marko
- Date: 2017
- Type: Text , Journal article
- Relation: Journal of Global Optimization Vol. 68, no. 3 (2017), p. 501-535
- Relation: http://purl.org/au-research/grants/arc/DP140103213
- Full Text: false
- Reviewed:
- Description: In this paper, we develop a version of the bundle method to solve unconstrained difference of convex (DC) programming problems. It is assumed that a DC representation of the objective function is available. Our main idea is to utilize subgradients of both the first and second components in the DC representation. This subgradient information is gathered from some neighborhood of the current iteration point and is used to build separately an approximation for each component in the DC representation. By combining these approximations we obtain a new nonconvex cutting plane model of the original objective function, which takes into account explicitly both the convex and the concave behavior of the objective function. We design the proximal bundle method for DC programming based on this new approach and prove the convergence of the method to an ε-critical point. The algorithm is tested using some academic test problems, and the preliminary numerical results show the good performance of the new bundle method. An interesting fact is that the new algorithm nearly always finds the global solution in our test problems.
A quasisecant method for minimizing nonsmooth functions
- Authors: Bagirov, Adil , Ganjehlou, Asef Nazari
- Date: 2010
- Type: Text , Journal article
- Relation: Optimization Methods and Software Vol. 25, no. 1 (2010), p. 3-18
- Relation: http://purl.org/au-research/grants/arc/DP0666061
- Full Text: false
- Reviewed:
- Description: We present an algorithm to locally minimize nonsmooth, nonconvex functions. In order to find descent directions, the notion of quasisecants, introduced in this paper, is applied. We prove that the algorithm converges to Clarke stationary points. Numerical results are presented demonstrating the applicability of the proposed algorithm to a wide variety of nonsmooth, nonconvex optimization problems. We also compare the proposed algorithm with the bundle method using numerical results.
A sharp augmented Lagrangian-based method in constrained non-convex optimization
- Authors: Bagirov, Adil , Ozturk, Gurkan , Kasimbeyli, Refail
- Date: 2019
- Type: Text , Journal article
- Relation: Optimization Methods and Software Vol. 34, no. 3 (2019), p. 462-488
- Full Text: false
- Reviewed:
- Description: In this paper, a novel sharp Augmented Lagrangian-based global optimization method is developed for solving constrained non-convex optimization problems. The algorithm consists of outer and inner loops. At each inner iteration, the discrete gradient method is applied to minimize the sharp augmented Lagrangian function. Depending on the solution found, the algorithm stops or updates the dual variables in the inner loop, or updates the upper or lower bounds by going to the outer loop. Convergence results for the proposed method are presented. The performance of the method is demonstrated using a wide range of nonlinear smooth and non-smooth constrained optimization test problems from the literature.
A simulated annealing-based maximum-margin clustering algorithm
- Authors: Seifollahi, Sattar , Bagirov, Adil , Borzeshi, Ehsan , Piccardi, Massimo
- Date: 2019
- Type: Text , Journal article
- Relation: Computational Intelligence Vol. 35, no. 1 (2019), p. 23-41
- Full Text:
- Reviewed:
- Description: Maximum-margin clustering is an extension of the support vector machine (SVM) to clustering. It partitions a set of unlabeled data into multiple groups by finding hyperplanes with the largest margins. Although existing algorithms have shown promising results, there is no guarantee of convergence of these algorithms to global solutions due to the nonconvexity of the optimization problem. In this paper, we propose a simulated annealing-based algorithm that is able to mitigate the issue of local minima in the maximum-margin clustering problem. The novelty of our algorithm is twofold: (i) it comprises a comprehensive cluster modification scheme based on simulated annealing, and (ii) it introduces a new approach based on the combination of k-means++ and SVM at each step of the annealing process. More precisely, k-means++ is initially applied to extract subsets of the data points. Then, an unsupervised SVM is applied to improve the clustering results. Experimental results on various benchmark data sets (some with over a million points) give evidence that the proposed algorithm is more effective at solving the clustering problem than a number of popular clustering algorithms.
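The k-means++ step applied at each annealing stage follows the standard seeding scheme: each new centre is drawn with probability proportional to its squared distance from the nearest centre chosen so far. A generic sketch of that scheme (not the paper's implementation):

```python
import numpy as np

def kmeanspp_seeds(X, k, rng=None):
    """k-means++ seeding: the first centre is chosen uniformly at random;
    each subsequent centre is drawn with probability proportional to its
    squared distance to the nearest centre chosen so far."""
    if rng is None:
        rng = np.random.default_rng(0)
    centers = [X[rng.integers(len(X))]]
    for _ in range(1, k):
        # Squared distance of every point to its nearest chosen centre.
        d2 = np.min(((X[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1), axis=1)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

# Two distant duplicate groups: the second seed lands with certainty in the
# group the first seed missed, since within-group squared distances are zero.
X = np.vstack([np.zeros((5, 2)), np.full((5, 2), 100.0)])
seeds = kmeanspp_seeds(X, k=2)
```

This distance-weighted sampling is what makes the seeds spread across the data, giving the subsequent SVM step well-separated initial subsets to work from.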
Aggregate codifferential method for nonsmooth DC optimization
- Authors: Tor, Ali , Bagirov, Adil , Karasozen, Bulent
- Date: 2014
- Type: Text , Journal article
- Relation: Journal of Computational and Applied Mathematics Vol. 259, no. Part B (2014), p. 851-867
- Full Text: false
- Reviewed:
- Description: A new algorithm is developed based on the concept of codifferential for minimizing the difference of convex nonsmooth functions. Since the computation of the whole codifferential is not always possible, we use a fixed number of elements from the codifferential to compute the search directions. The convergence of the proposed algorithm is proved. The efficiency of the algorithm is demonstrated by comparing it with the subgradient, the truncated codifferential and the proximal bundle methods using nonsmooth optimization test problems.
Aggregate subgradient method for nonsmooth DC optimization
- Authors: Bagirov, Adil , Taheri, Sona , Joki, Kaisa , Karmitsa, Napsu , Mäkelä, Marko
- Date: 2021
- Type: Text , Journal article
- Relation: Optimization Letters Vol. 15, no. 1 (2021), p. 83-96
- Relation: http://purl.org/au-research/grants/arc/DP190100580
- Full Text:
- Reviewed:
- Description: The aggregate subgradient method is developed for solving unconstrained nonsmooth difference of convex (DC) optimization problems. The proposed method shares some similarities with both the subgradient and the bundle methods. Aggregate subgradients are defined as a convex combination of subgradients computed at null steps between two serious steps. At each iteration search directions are found using only two subgradients: the aggregate subgradient and a subgradient computed at the current null step. It is proved that the proposed method converges to a critical point of the DC optimization problem and also that the number of null steps between two serious steps is finite. The new method is tested using some academic test problems and compared with several other nonsmooth DC optimization solvers. © 2020, Springer-Verlag GmbH Germany, part of Springer Nature.