The discrete gradient evolutionary strategy method for global optimization
- Authors: Abbas, Hussein , Bagirov, Adil , Zhang, Jiapu
- Date: 2003
- Type: Text , Conference paper
- Relation: Paper presented at the Congress on Evolutionary Computation CEC 2003, Canberra : 8th December, 2003
- Full Text:
- Reviewed:
- Description: Global optimization problems continue to be a challenge in computational mathematics. The field is progressing in two streams: deterministic and heuristic approaches. In this paper, we present a hybrid method that combines the discrete gradient method, a derivative-free local search method, with evolutionary strategies. We show that the hybridization of the two methods is better than either of them in isolation.
- Description: E1
- Description: 2003000440
Batch clustering algorithm for big data sets
- Authors: Alguliyev, Rasim , Aliguliyev, Ramiz , Bagirov, Adil , Karimov, Rafael
- Date: 2017
- Type: Text , Conference proceedings
- Relation: 10th IEEE International Conference on Application of Information and Communication Technologies, AICT 2016; Baku, Azerbaijan; 12th-14th October 2016 p. 1-4
- Full Text: false
- Reviewed:
- Description: The vast spread of computing technologies has led to an abundance of large data sets. Today, tech companies such as Google, Facebook, Twitter and Amazon handle big data sets and log terabytes, if not petabytes, of data per day. Thus, there is a need to find similarities and define groupings among the elements of these big data sets. One of the ways to find these similarities is data clustering. Several data clustering algorithms currently exist, differing in their application area and efficiency. Increases in computational power and algorithmic improvements have reduced the time needed to cluster big data sets, but big data sets often cannot be processed whole due to hardware and computational restrictions. In this paper, the classic k-means clustering algorithm is compared to the proposed batch clustering (BC) algorithm in terms of the required computation time and objective function value. The BC algorithm is designed to cluster large data sets in batches while maintaining efficiency and quality. Several experiments confirm that the batch clustering algorithm is more efficient in its use of computational power and data storage and results in better clustering than the k-means algorithm. The experiments are conducted with a data set of two million two-dimensional data points. © 2016 IEEE.
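The batch idea described in the abstract can be illustrated with a small sketch. This is not the authors' BC algorithm, just a minimal batch-style k-means in which centroids are refined one chunk of data at a time with running-mean updates; the function name, the deterministic initialization, and the update rule are my own assumptions.

```python
def batch_kmeans(points, k, batch_size=100, passes=5):
    """Cluster 2-D `points` in batches with running-mean centroid updates.

    Only one batch is scanned in the inner loop at a time, so the whole
    data set never has to be revisited per centroid update.
    """
    # deterministic, spread-out initialization over the data set
    centroids = [points[i * len(points) // k] for i in range(k)]
    counts = [0] * k
    for _ in range(passes):
        for start in range(0, len(points), batch_size):
            for p in points[start:start + batch_size]:
                # assign the point to its nearest centroid
                j = min(range(k),
                        key=lambda i: (p[0] - centroids[i][0]) ** 2
                                      + (p[1] - centroids[i][1]) ** 2)
                counts[j] += 1
                eta = 1.0 / counts[j]  # running-mean step size
                cx, cy = centroids[j]
                centroids[j] = (cx + eta * (p[0] - cx), cy + eta * (p[1] - cy))
    return centroids
```

On two well-separated point clouds this recovers the cloud means while touching the data only batch by batch.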
Global optimization in the summarization of text documents
- Authors: Alyguliev, R. M. , Bagirov, Adil
- Date: 2005
- Type: Text , Journal article
- Relation: Automatic Control and Computer Sciences Vol. 39, no. 6 (2005), p. 42-47
- Full Text: false
- Reviewed:
- Description: In order to ensure minimal redundancy in the summary of a document and the greatest possible coverage of its content, a method for the construction of summaries (summarization) based on the clustering of sentences is proposed in the article. Clustering of sentences reduces to a determination of cluster centroids, the mathematical realization of which relies on a problem of global optimization. Determining the number of clusters is one of the complex problems in the clustering procedure. Therefore, an algorithm for the stepwise determination of the number of clusters is also proposed in the present study. © 2006 by Allerton Press, Inc.
- Description: C1
Global optimization of marginal functions with applications to economic equilibrium
- Authors: Bagirov, Adil , Rubinov, Alex
- Date: 2001
- Type: Text , Journal article
- Relation: Journal of Global Optimization Vol. 20, no. 3-4 (Aug 2001), p. 215-237
- Full Text: false
- Reviewed:
- Description: We discuss the applicability of the cutting angle method to global minimization of marginal functions. The search of equilibrium prices in the exchange model can be reduced to the global minimization of certain functions, which include marginal functions. This problem has been approximately solved by the cutting angle method. Results of numerical experiments are presented and discussed.
An algorithm for minimizing clustering functions
- Authors: Bagirov, Adil , Ugon, Julien
- Date: 2005
- Type: Text , Journal article
- Relation: Optimization Vol. 54, no. 4-5 (Aug-Oct 2005), p. 351-368
- Full Text:
- Reviewed:
- Description: The problem of cluster analysis is formulated as a problem of nonsmooth, nonconvex optimization. An algorithm for solving the latter optimization problem is developed which allows one to significantly reduce the computational effort. This algorithm is based on the so-called discrete gradient method. Results of numerical experiments are presented which demonstrate the effectiveness of the proposed algorithm.
- Description: C1
- Description: 2003001266
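The discrete gradient method named in the abstract is derivative-free: it builds descent directions from function values alone. As a rough, hedged illustration of the idea (a plain forward-difference gradient estimate driving a fixed-step descent loop, not the authors' actual construction; all names are illustrative):

```python
def discrete_gradient(f, x, h=1e-6):
    """Forward-difference approximation to a (sub)gradient of f at x.

    A simplified stand-in for discrete gradients: at smooth points it
    reduces to an ordinary finite-difference gradient estimate.
    """
    fx = f(x)
    g = []
    for i in range(len(x)):
        xh = list(x)
        xh[i] += h
        g.append((f(xh) - fx) / h)
    return g

def descent(f, x0, step=0.1, iters=200):
    """Crude fixed-step descent driven by discrete gradients only."""
    x = list(x0)
    for _ in range(iters):
        g = discrete_gradient(f, x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x
```

With a fixed step the iterate can only get within one step length of a kink of a nonsmooth function; the real method controls the step, which this sketch does not attempt.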
New algorithms for multi-class cancer diagnosis using tumor gene expression signatures
- Authors: Bagirov, Adil , Ferguson, Brent , Ivkovic, Sasha , Saunders, Gary , Yearwood, John
- Date: 2003
- Type: Text , Journal article
- Relation: Bioinformatics Vol. 19, no. 14 (2003), p. 1800-1807
- Full Text:
- Reviewed:
- Description: Motivation: The increasing use of DNA microarray-based tumor gene expression profiles for cancer diagnosis requires mathematical methods with high accuracy for solving clustering, feature selection and classification problems of gene expression data. Results: New algorithms are developed for solving clustering, feature selection and classification problems of gene expression data. The clustering algorithm is based on optimization techniques and allows the calculation of clusters step-by-step. This approach allows us to find as many clusters as a data set contains with respect to some tolerance. Feature selection is crucial for a gene expression database. Our feature selection algorithm is based on calculating overlaps of different genes. The database used contains over 16,000 genes, and this number is considerably reduced by feature selection. We propose a classification algorithm in which each tissue sample is considered the center of a cluster, which is a ball. The results of numerical experiments confirm that the classification algorithm in combination with the feature selection algorithm performs slightly better than the published results for multi-class classifiers based on support vector machines for this data set.
- Description: C1
- Description: 2003000439
A generalized subgradient method with piecewise linear subproblem
- Authors: Bagirov, Adil , Ganjehlou, Asef Nazari , Tor, Hakan , Ugon, Julien
- Date: 2010
- Type: Text , Journal article
- Relation: Dynamics of Continuous, Discrete and Impulsive Systems Series B: Applications and Algorithms Vol. 17, no. 5 (2010), p. 621-638
- Full Text: false
- Reviewed:
- Description: In this paper, a new version of the quasisecant method for nonsmooth nonconvex optimization is developed. Quasisecants are overestimates of the objective function in some neighborhood of a given point. Subgradients are used to obtain quasisecants. We describe classes of nonsmooth functions for which quasisecants can be computed explicitly. We show that a descent direction with sufficient decrease must satisfy a set of linear inequalities. In the proposed algorithm, this set of linear inequalities is solved by applying the subgradient algorithm to minimize a piecewise linear function. We compare results of numerical experiments between the proposed algorithm and the subgradient method. Copyright © 2010 Watam Press.
A quasisecant method for minimizing nonsmooth functions
- Authors: Bagirov, Adil , Ganjehlou, Asef Nazari
- Date: 2010
- Type: Text , Journal article
- Relation: Optimization Methods and Software Vol. 25, no. 1 (2010), p. 3-18
- Relation: http://purl.org/au-research/grants/arc/DP0666061
- Full Text: false
- Reviewed:
- Description: We present an algorithm to locally minimize nonsmooth, nonconvex functions. In order to find descent directions, the notion of quasisecants, introduced in this paper, is applied. We prove that the algorithm converges to Clarke stationary points. Numerical results are presented demonstrating the applicability of the proposed algorithm to a wide variety of nonsmooth, nonconvex optimization problems. We also compare the proposed algorithm with the bundle method using numerical results.
A global optimization approach to classification
- Authors: Bagirov, Adil , Rubinov, Alex , Yearwood, John
- Date: 2002
- Type: Text , Journal article
- Relation: Optimization and Engineering Vol. 9, no. 7 (2002), p. 129-155
- Full Text: false
- Reviewed:
- Description: In this paper a hybrid algorithm is presented for finding the absolute extreme point of a multimodal scalar function of many variables. The algorithm is suitable when the objective function is expensive to compute, the computation can be affected by noise, and/or partial derivatives cannot be calculated. The method used is a genetic modification of a previous algorithm based on Price's method. All information about the behavior of the objective function collected at previous iterates is used to choose new evaluation points. The genetic part of the algorithm is very effective at escaping from local attractors and ensures convergence in probability to the global optimum. The proposed algorithm has been tested on a large set of multimodal test problems, outperforming both the modified Price's algorithm and the classical genetic approach.
- Description: C1
- Description: 2003000061
An optimization-based approach to patient grouping for acute healthcare in Australia
- Authors: Bagirov, Adil , Churilov, Leonid
- Date: 2003
- Type: Text , Conference paper
- Relation: Paper presented at Computational Science - ICCS 2003 Conference, Melbourne : 2nd June, 2003
- Full Text: false
- Reviewed:
- Description: The problem of cluster analysis is formulated as a problem of nonsmooth, nonconvex optimization, and an algorithm for solving the cluster analysis problem based on nonsmooth optimization techniques is developed. The issues of applying this algorithm to large data sets are discussed and a feature selection procedure is demonstrated. The algorithm is then applied to a hospital data set to generate new knowledge about different patterns of patients' resource consumption.
- Description: E1
- Description: 2003000434
Estimation of a regression function by maxima of minima of linear functions
- Authors: Bagirov, Adil , Clausen, Conny , Kohler, Michael
- Date: 2009
- Type: Text , Journal article
- Relation: IEEE Transactions on Information Theory Vol. 55, no. 2 (2009), p. 833-845
- Full Text:
- Reviewed:
- Description: In this paper, estimation of a regression function from independent and identically distributed random variables is considered. Estimates are defined by minimization of the empirical L2 risk over a class of functions, which are defined as maxima of minima of linear functions. Results concerning the rate of convergence of the estimates are derived. In particular, it is shown that for smooth regression functions satisfying the assumption of single index models, the estimate is able to achieve (up to some logarithmic factor) the corresponding optimal one-dimensional rate of convergence. Hence, under these assumptions, the estimate is able to circumvent the so-called curse of dimensionality. The small sample behavior of the estimates is illustrated by applying them to simulated data. © 2009 IEEE.
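The estimator class in this paper, maxima of minima of linear functions, is straightforward to evaluate once fitted. A minimal sketch of the evaluation, assuming a list-of-groups representation of the affine pieces (the representation and names are illustrative, not from the paper):

```python
def max_min_eval(groups, x):
    """Evaluate a max-of-mins of affine functions at point x.

    `groups` is a list of groups; each group is a list of (a, b)
    pairs defining the affine function a . x + b.  The value at x is
    the maximum over groups of the minimum within each group.
    """
    def affine(a, b):
        return sum(ai * xi for ai, xi in zip(a, x)) + b
    return max(min(affine(a, b) for a, b in group) for group in groups)
```

Any continuous piecewise linear function can be written this way, e.g. |x| = max(x, -x) with two one-element groups, or a "tent" min(x, 2 - x) with a single two-element group.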
Feature selection using misclassification counts
- Authors: Bagirov, Adil , Yatsko, Andrew , Stranieri, Andrew
- Date: 2011
- Type: Conference proceedings , Unpublished work
- Relation: Proceedings of the 9th Australasian Data Mining Conference (AusDM 2011), 51-62. Conferences in Research and Practice in Information Technology (CRPIT), Vol. 121.
- Full Text:
- Description: Dimensionality reduction of the problem space through detection and removal of variables that contribute little or nothing to classification can relieve the computational load and the data acquisition effort, given that all data attributes are accessed each time around. The approach to feature selection in this paper is based on the concept of coherent accumulation of data about class centers with respect to the coordinates of informative features. Variables are ranked by the degree to which they exhibit random characteristics. The results are verified using the Nearest Neighbor classifier, which also helps to address feature irrelevance and redundancy, which ranking alone does not decide. Additionally, feature ranking methods from independent sources are called in for direct comparison.
An efficient algorithm for the incremental construction of a piecewise linear classifier
- Authors: Bagirov, Adil , Ugon, Julien , Webb, Dean
- Date: 2011
- Type: Text , Journal article
- Relation: Information Systems Vol. 36, no. 4 (2011), p. 782-790
- Relation: http://purl.org/au-research/grants/arc/DP0666061
- Full Text: false
- Reviewed:
- Description: In this paper the problem of finding piecewise linear boundaries between sets is considered and applied to solving supervised data classification problems. An algorithm for the computation of piecewise linear boundaries, consisting of two main steps, is proposed. In the first step sets are approximated by hyperboxes to find so-called "indeterminate" regions between sets. In the second step sets are separated inside these "indeterminate" regions by piecewise linear functions. These functions are computed incrementally, starting with a linear function. Results of numerical experiments are reported. These results demonstrate that the new algorithm requires a reasonable training time and produces consistently good test set accuracy on most data sets compared with mainstream classifiers. © 2010 Elsevier B.V. All rights reserved.
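The first step of the algorithm, approximating each class by a hyperbox and intersecting the boxes to locate the "indeterminate" region, can be sketched as follows. This is an illustrative reading of the abstract, not the authors' implementation; all names are my own:

```python
def hyperbox(points):
    """Axis-aligned bounding box of a point set, as (mins, maxs)."""
    dims = range(len(points[0]))
    return ([min(p[d] for p in points) for d in dims],
            [max(p[d] for p in points) for d in dims])

def box_overlap(box_a, box_b):
    """Intersection of two hyperboxes, or None if they are disjoint.

    The overlap plays the role of the 'indeterminate' region where
    the classes must still be separated by a piecewise linear
    function; outside it, box membership alone decides the class.
    """
    lo = [max(a, b) for a, b in zip(box_a[0], box_b[0])]
    hi = [min(a, b) for a, b in zip(box_a[1], box_b[1])]
    if any(l > h for l, h in zip(lo, hi)):
        return None
    return (lo, hi)
```

When `box_overlap` returns None the classes are already hyperbox-separable and no piecewise linear refinement is needed.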
A global optimisation approach to classification in medical diagnosis and prognosis
- Authors: Bagirov, Adil , Rubinov, Alex , Yearwood, John , Stranieri, Andrew
- Date: 2001
- Type: Text , Conference paper
- Relation: Paper presented at 34th Hawaii International Conference on System Sciences, HICSS-34, Maui, Hawaii, USA : 3rd-6th January 2001
- Full Text:
- Description: In this paper global optimisation-based techniques are studied in order to increase the accuracy of medical diagnosis and prognosis with FNA image data from the Wisconsin Diagnostic and Prognostic Breast Cancer databases. First we discuss the problem of determining the most informative features for the classification of cancerous cases in the databases under consideration. Then we apply a technique based on convex and global optimisation to breast cancer diagnosis. It allows the classification of benign and malignant cases and the subsequent diagnosis of patients with very high accuracy. The third application of this technique is a method that calculates centres of clusters to predict when breast cancer is likely to recur in patients from whom cancer has been removed. The technique achieves higher accuracy with these databases than reported elsewhere in the literature.
- Description: 2003003950
Max-min separability
- Authors: Bagirov, Adil
- Date: 2005
- Type: Text , Journal article
- Relation: Optimization Methods and Software Vol. 20, no. 2-3 (2005), p. 271-290
- Full Text:
- Reviewed:
- Description: We consider the problem of discriminating two finite point sets in the n-dimensional space by a finite number of hyperplanes generating a piecewise linear function. If the intersection of these sets is empty, then they can be strictly separated by a max-min of linear functions. An error function is introduced. This function is nonconvex piecewise linear. We discuss an algorithm for its minimization. The results of numerical experiments using some real-world datasets are presented, which show the effectiveness of the proposed approach.
- Description: C1
- Description: 2003001350
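The max-min separation idea can be made concrete with a small sketch: a max-min of affine functions should be positive on one set and negative on the other, and violations can be accumulated into a piecewise linear error. The margin-style error below is my own hedged stand-in for the paper's error function, which may be defined differently:

```python
def maxmin(groups, x):
    """Value at x of a max-min of affine functions (a . x + b)."""
    val = lambda a, b: sum(ai * xi for ai, xi in zip(a, x)) + b
    return max(min(val(a, b) for a, b in g) for g in groups)

def separation_error(groups, set_a, set_b):
    """Average hinge-style violation of strict separation.

    Points of set_a should get maxmin >= 1 and points of set_b
    maxmin <= -1; any shortfall contributes linearly.  The result is
    a nonconvex piecewise linear function of the affine coefficients,
    in the spirit of (but not identical to) the paper's error.
    """
    err = sum(max(0.0, 1.0 - maxmin(groups, x)) for x in set_a)
    err += sum(max(0.0, 1.0 + maxmin(groups, x)) for x in set_b)
    return err / (len(set_a) + len(set_b))
```

A zero error certifies strict separation with margin; minimizing this error over the affine coefficients is the nonsmooth problem the paper's algorithm addresses.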
A nonsmooth optimization approach to sensor network localization
- Authors: Bagirov, Adil , Lai, Daniel , Palaniswami, M.
- Date: 2007
- Type: Text , Conference paper
- Relation: Paper presented at 3rd International Conference on Intelligent Sensors, Sensor Networks and Information, ISSNIP 2007, Melbourne, Victoria : 3rd-6th December 2007 p. 727-732
- Relation: http://purl.org/au-research/grants/arc/DP0666061
- Full Text:
- Description: In this paper the problem of localization of a wireless sensor network is formulated as an unconstrained nonsmooth optimization problem. We minimize a distance objective function which incorporates unknown sensor nodes and nodes with known positions (anchors), in contrast to popular semidefinite programming (SDP) methods, which use artificial objective functions. We study the main properties of the objective function in this problem and design an algorithm for its minimization. Our algorithm is a derivative-free discrete gradient method that allows one to find a near-global solution. The algorithm can handle a large number of sensors in the network. This paper contains the theory of our proposed formulation and algorithm, while experimental results are included in later work.
- Description: 2003004949
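The distance objective described in the abstract, fitting estimated sensor positions to measured inter-node distances including anchors, can be sketched as follows. The squared-residual form and all names here are assumptions for illustration; the paper's nonsmooth objective may use, for instance, absolute residuals instead:

```python
import math

def localization_objective(positions, anchors, edges):
    """Distance-fit objective for sensor network localization.

    `positions` maps unknown sensor id -> estimated (x, y),
    `anchors` maps anchor id -> known (x, y), and `edges` lists
    (id_a, id_b, measured_distance) for node pairs with a ranging
    measurement.  The objective sums squared residuals between
    estimated and measured pairwise distances.
    """
    def coord(node):
        return positions[node] if node in positions else anchors[node]
    total = 0.0
    for a, b, d in edges:
        (xa, ya), (xb, yb) = coord(a), coord(b)
        total += (math.hypot(xa - xb, ya - yb) - d) ** 2
    return total
```

Minimizing this over `positions` with a derivative-free method is what the paper's discrete gradient algorithm does at scale.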
A multidimensional descent method for global optimization
- Authors: Bagirov, Adil , Rubinov, Alex , Zhang, Jiapu
- Date: 2009
- Type: Text , Journal article
- Relation: Optimization Vol. 58, no. 5 (2009), p. 611-625
- Full Text: false
- Reviewed:
- Description: This article presents a new multidimensional descent method for solving global optimization problems with box constraints. This is a hybrid method in which a local search method is used for local descent and a global search is used for further multidimensional search on subsets of the intersection of cones generated by the local search method and the feasible region. The discrete gradient method is used for the local search and the cutting angle method for the global search. Two- and three-dimensional cones are used for the global search. Such an approach allows one, as a rule, to escape local minimizers which are not global ones. The proposed method is a local optimization method with strong global search properties. We present results of numerical experiments using both smooth and non-smooth global optimization test problems. These results demonstrate that the proposed algorithm can find a global or near-global minimizer.
Alexander Rubinov - An outstanding scholar
- Authors: Bagirov, Adil
- Date: 2010
- Type: Text , Journal article
- Relation: Pacific Journal of Optimization Vol. 6, no. 2, Suppl. 1 (2010), p. 203-209
- Full Text: false
Incremental DC optimization algorithm for large-scale clusterwise linear regression
- Authors: Bagirov, Adil , Taheri, Sona , Cimen, Emre
- Date: 2021
- Type: Text , Journal article
- Relation: Journal of Computational and Applied Mathematics Vol. 389, no. (2021), p. 1-17
- Relation: https://purl.org/au-research/grants/arc/DP190100580
- Full Text: false
- Reviewed:
- Description: The objective function in the nonsmooth optimization model of the clusterwise linear regression (CLR) problem with the squared regression error is represented as a difference of two convex functions. Then, using the difference of convex algorithm (DCA) approach, the CLR problem is replaced by a sequence of smooth unconstrained optimization subproblems. A new algorithm based on the DCA and the incremental approach is designed to solve the CLR problem. We apply the quasi-Newton method to solve the subproblems. The proposed algorithm is evaluated using several synthetic and real-world data sets for regression and compared with other algorithms for CLR. Results demonstrate that the DCA-based algorithm is efficient for solving CLR problems with a large number of data points and, in particular, outperforms other algorithms when the number of input variables is small. © 2020 Elsevier B.V.
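The min-of-squared-errors structure of the CLR objective is simple to state in code. A minimal one-dimensional sketch of the objective being minimized (not the DCA solver itself; the function name and representation are illustrative):

```python
def clr_objective(models, data):
    """Clusterwise linear regression fit error for 1-D inputs.

    `models` is a list of (w, b) pairs; each data point (x, y) is
    charged the squared error of whichever linear model fits it
    best.  The min over models is what makes the overall objective
    nonsmooth and nonconvex.
    """
    total = 0.0
    for x, y in data:
        total += min((w * x + b - y) ** 2 for w, b in models)
    return total
```

With data drawn from two different lines, two matching models drive the objective to zero while a single model cannot, which is exactly the situation CLR targets.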
Nonsmooth optimization approach to clustering
- Authors: Bagirov, Adil
- Date: 2009
- Type: Text , Book chapter
- Relation: Encyclopedia of Optimization Chapter p. 2664-2671
- Full Text: false
- Description: 2003007532