
A quasisecant method for minimizing nonsmooth functions

- Bagirov, Adil, Ganjehlou, Asef Nazari

**Authors:** Bagirov, Adil; Ganjehlou, Asef Nazari
**Date:** 2010
**Type:** Text; Journal article
**Relation:** Optimization Methods and Software Vol. 25, no. 1 (2010), p. 3-18
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:** false
**Reviewed:**
**Description:** We present an algorithm to locally minimize nonsmooth, nonconvex functions. In order to find descent directions, the notion of quasisecants, introduced in this paper, is applied. We prove that the algorithm converges to Clarke stationary points. Numerical results are presented demonstrating the applicability of the proposed algorithm to a wide variety of nonsmooth, nonconvex optimization problems. We also compare the proposed algorithm with the bundle method using numerical results.

An efficient algorithm for the incremental construction of a piecewise linear classifier

- Bagirov, Adil, Ugon, Julien, Webb, Dean

**Authors:** Bagirov, Adil; Ugon, Julien; Webb, Dean
**Date:** 2011
**Type:** Text; Journal article
**Relation:** Information Systems Vol. 36, no. 4 (2011), p. 782-790
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:** false
**Reviewed:**
**Description:** In this paper the problem of finding piecewise linear boundaries between sets is considered and is applied to solving supervised data classification problems. An algorithm for the computation of piecewise linear boundaries, consisting of two main steps, is proposed. In the first step sets are approximated by hyperboxes to find so-called "indeterminate" regions between sets. In the second step sets are separated inside these "indeterminate" regions by piecewise linear functions. These functions are computed incrementally, starting with a linear function. Results of numerical experiments are reported. These results demonstrate that the new algorithm requires a reasonable training time and produces consistently good test set accuracy on most data sets compared with mainstream classifiers. © 2010 Elsevier B.V. All rights reserved.

A nonsmooth optimization approach to sensor network localization

- Bagirov, Adil, Lai, Daniel, Palaniswami, M.

**Authors:** Bagirov, Adil; Lai, Daniel; Palaniswami, M.
**Date:** 2007
**Type:** Text; Conference paper
**Relation:** Paper presented at the 3rd International Conference on Intelligent Sensors, Sensor Networks and Information, ISSNIP 2007, Melbourne, Victoria, 3-6 December 2007, p. 727-732
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:**
**Description:** In this paper the problem of localization of a wireless sensor network is formulated as an unconstrained nonsmooth optimization problem. We minimize a distance objective function which incorporates unknown sensor nodes and nodes with known positions (anchors), in contrast to popular semidefinite programming (SDP) methods which use artificial objective functions. We study the main properties of the objective function in this problem and design an algorithm for its minimization. Our algorithm is a derivative-free discrete gradient method that allows one to find a near global solution. The algorithm can handle a large number of sensors in the network. This paper contains the theory of our proposed formulation and algorithm, while experimental results are included in later work.
**Description:** 2003004949
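The abstract above describes minimizing a distance objective over unknown node positions and anchors, but does not give its exact form. The sketch below evaluates one plausible nonsmooth variant, the sum of absolute range residuals; the function name, objective form, and toy network are illustrative assumptions, not the paper's formulation.

```python
import math

def localization_objective(positions, anchors, measurements):
    """Nonsmooth localization objective: sum of absolute range residuals.

    positions    -- dict node_id -> (x, y) for unknown sensors
    anchors      -- dict node_id -> (x, y) with known coordinates
    measurements -- list of (id_a, id_b, measured_distance)

    This is one plausible form of the distance objective described in
    the abstract, not necessarily the paper's exact formulation.
    """
    coords = {**anchors, **positions}
    total = 0.0
    for a, b, d in measurements:
        xa, ya = coords[a]
        xb, yb = coords[b]
        residual = math.hypot(xa - xb, ya - yb) - d
        total += abs(residual)  # the absolute value makes this nonsmooth
    return total

# Toy network: two anchors and one unknown node placed at its true position,
# so every range residual (and hence the objective) is zero.
anchors = {"A1": (0.0, 0.0), "A2": (4.0, 0.0)}
true_pos = {"S1": (2.0, 1.5)}
meas = [("S1", "A1", 2.5), ("S1", "A2", 2.5)]
print(localization_objective(true_pos, anchors, meas))  # -> 0.0
```

Any derivative-free nonsmooth minimizer, such as the discrete gradient method in the record below, could in principle be run on this objective over the unknown coordinates.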

An approximate subgradient algorithm for unconstrained nonsmooth, nonconvex optimization

- Bagirov, Adil, Ganjehlou, Asef Nazari

**Authors:** Bagirov, Adil; Ganjehlou, Asef Nazari
**Date:** 2008
**Type:** Text; Journal article
**Relation:** Mathematical Methods of Operations Research Vol. 67, no. 2 (2008), p. 187-206
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:**
**Reviewed:**
**Description:** In this paper a new algorithm for minimizing locally Lipschitz functions is developed. Descent directions in this algorithm are computed by solving a system of linear inequalities. The convergence of the algorithm is proved for quasidifferentiable semismooth functions. We present the results of numerical experiments with both regular and nonregular objective functions. We also compare the proposed algorithm with two different versions of the subgradient method using the results of numerical experiments. These results demonstrate the superiority of the proposed algorithm over the subgradient method. © 2007 Springer-Verlag.
**Description:** C1
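The record above computes descent directions by solving a system of linear inequalities. As a simplified stand-in for that construction (not the paper's algorithm), the classical bundle-style recipe takes the negative of the minimum-norm element of the convex hull of known subgradients; for two subgradients this has a closed form, sketched below.

```python
def min_norm_in_hull(g1, g2):
    """Minimum-norm point on the segment between subgradients g1 and g2.

    With v(t) = (1-t)*g1 + t*g2, minimizing ||v(t)||^2 over t in [0, 1]
    gives the closed form t* = <g1, g1 - g2> / ||g1 - g2||^2, clamped to
    [0, 1]. Then -v(t*) is a candidate descent direction. This is a
    classical two-subgradient construction, shown only for context.
    """
    diff = [a - b for a, b in zip(g1, g2)]
    denom = sum(d * d for d in diff)
    if denom == 0.0:  # identical subgradients: the hull is a single point
        t = 0.0
    else:
        t = sum(a * d for a, d in zip(g1, diff)) / denom
        t = max(0.0, min(1.0, t))
    return [(1 - t) * a + t * b for a, b in zip(g1, g2)]

# f(x, y) = |x| + y at (0, 1) has subgradients (1, 1) and (-1, 1).
v = min_norm_in_hull([1.0, 1.0], [-1.0, 1.0])
print(v)  # -> [0.0, 1.0]; -v = (0, -1) is the steepest descent direction
```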

Discrete gradient method : Derivative-free method for nonsmooth optimization

- Bagirov, Adil, Karasozen, Bulent, Sezer, Monsalve

**Authors:** Bagirov, Adil; Karasozen, Bulent; Sezer, Monsalve
**Date:** 2008
**Type:** Text; Journal article
**Relation:** Journal of Optimization Theory and Applications Vol. 137, no. 2 (2008), p. 317-334
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:**
**Reviewed:**
**Description:** A new derivative-free method is developed for solving unconstrained nonsmooth optimization problems. This method is based on the notion of a discrete gradient. It is demonstrated that the discrete gradients can be used to approximate subgradients of a broad class of nonsmooth functions. It is also shown that the discrete gradients can be applied to find descent directions of nonsmooth functions. The preliminary results of numerical experiments with unconstrained nonsmooth optimization problems, as well as a comparison of the proposed method with the nonsmooth optimization solver DNLP from CONOPT-GAMS and the derivative-free optimization solver CONDOR, are presented. © 2007 Springer Science+Business Media, LLC.
**Description:** C1

An algorithm for the estimation of a regression function by continuous piecewise linear functions

- Bagirov, Adil, Clausen, Conny, Kohler, Michael

**Authors:** Bagirov, Adil; Clausen, Conny; Kohler, Michael
**Date:** 2008
**Type:** Text; Journal article
**Relation:** Computational Optimization and Applications Vol. 45 (2008), p. 159-179
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:**
**Reviewed:**
**Description:** The problem of the estimation of a regression function by continuous piecewise linear functions is formulated as a nonconvex, nonsmooth optimization problem. Estimates are defined by minimization of the empirical L2 risk over a class of functions, which are defined as maxima of minima of linear functions. An algorithm for finding continuous piecewise linear functions is presented. We observe that the objective function in the optimization problem is semismooth, quasidifferentiable and piecewise partially separable. The use of these properties allows us to design an efficient algorithm for approximation of subgradients of the objective function and to apply the discrete gradient method for its minimization. We present computational results with some simulated data and compare the new estimator with a number of existing ones. © 2008 Springer Science+Business Media, LLC.
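The estimator above is defined as a maximum of minima of linear functions, a representation that can express any continuous piecewise linear function. A minimal evaluator for such a max-min representation (the coefficients below are illustrative, not fitted):

```python
def max_min_linear(groups, x):
    """Evaluate f(x) = max over groups of min over (w, b) in the group
    of the affine value w.x + b.

    `groups` is a list of groups; each group is a list of (w, b) pairs
    defining linear functions. Any continuous piecewise linear function
    admits such a max-min representation.
    """
    def lin(w, b):
        return sum(wi * xi for wi, xi in zip(w, x)) + b
    return max(min(lin(w, b) for w, b in group) for group in groups)

# |x| in one dimension as max(x, -x): two one-element groups.
groups = [[([1.0], 0.0)], [([-1.0], 0.0)]]
print(max_min_linear(groups, [-3.0]))  # -> 3.0
```

Fitting the weights `(w, b)` by minimizing empirical L2 risk is the hard nonsmooth, nonconvex part that the paper addresses with the discrete gradient method.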

Modified global k-means algorithm for minimum sum-of-squares clustering problems

**Authors:** Bagirov, Adil
**Date:** 2008
**Type:** Text; Journal article
**Relation:** Pattern Recognition Vol. 41, no. 10 (2008), p. 3192-3199
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:**
**Reviewed:**
**Description:** The k-means algorithm and its variations are known to be fast clustering algorithms. However, they are sensitive to the choice of starting points and inefficient for solving clustering problems in large data sets. Recently, a new version of the k-means algorithm, the global k-means algorithm, has been developed. It is an incremental algorithm that dynamically adds one cluster center at a time and uses each data point as a candidate for the k-th cluster center. Results of numerical experiments show that the global k-means algorithm considerably outperforms the k-means algorithm. In this paper, a new version of the global k-means algorithm is proposed. A starting point for the k-th cluster center in this algorithm is computed by minimizing an auxiliary cluster function. Results of numerical experiments on 14 data sets demonstrate the superiority of the new algorithm; however, it requires more computational time than the global k-means algorithm. © 2008 Elsevier Ltd. All rights reserved.
**Description:** 2003001713
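The abstract says the starting point for the k-th center minimizes an auxiliary cluster function. One common reading of that function, used below as an assumption, scores a candidate y as f(y) = Σᵢ min(dᵢ, ‖y − xᵢ‖²), where dᵢ is point i's squared distance to its nearest existing center:

```python
def next_center_start(points, centers):
    """Pick the data point minimizing the auxiliary cluster function
    f(y) = sum_i min(d_i, ||y - x_i||^2), where d_i is the squared
    distance from point i to its closest existing center. This follows
    the abstract's description; details may differ from the paper.
    """
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    d = [min(sqdist(p, c) for c in centers) for p in points]
    best, best_val = None, float("inf")
    for y in points:
        val = sum(min(di, sqdist(y, p)) for di, p in zip(d, points))
        if val < best_val:
            best, best_val = y, val
    return best

# Two well-separated groups; one center already sits on the left group,
# so the auxiliary function favors a point from the uncovered right group.
pts = [(0.0, 0.0), (0.1, 0.0), (10.0, 0.0), (10.1, 0.0)]
print(next_center_start(pts, centers=[(0.05, 0.0)]))
```

In the full incremental scheme, the chosen point seeds the k-th center and ordinary k-means iterations then refine all k centers before the next center is added.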

Fast modified global k-means algorithm for incremental cluster construction

- Bagirov, Adil, Ugon, Julien, Webb, Dean

**Authors:** Bagirov, Adil; Ugon, Julien; Webb, Dean
**Date:** 2011
**Type:** Text; Journal article
**Relation:** Pattern Recognition Vol. 44, no. 4 (2011), p. 866-876
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:** false
**Reviewed:**
**Description:** The k-means algorithm and its variations are known to be fast clustering algorithms. However, they are sensitive to the choice of starting points and are inefficient for solving clustering problems in large datasets. Recently, incremental approaches have been developed to resolve difficulties with the choice of starting points. The global k-means and the modified global k-means algorithms are based on such an approach. They iteratively add one cluster center at a time. Numerical experiments show that these algorithms considerably improve the k-means algorithm. However, they require storing the whole affinity matrix or computing this matrix at each iteration. This makes both algorithms time consuming and memory demanding for clustering even moderately large datasets. In this paper, a new version of the modified global k-means algorithm is proposed. We introduce an auxiliary cluster function to generate a set of starting points lying in different parts of the dataset. We exploit information gathered in previous iterations of the incremental algorithm to eliminate the need of computing or storing the whole affinity matrix and thereby to reduce computational effort and memory usage. Results of numerical experiments on six standard datasets demonstrate that the new algorithm is more efficient than the global and the modified global k-means algorithms. © 2010 Elsevier Ltd. All rights reserved.

Codifferential method for minimizing nonsmooth DC functions

**Authors:** Bagirov, Adil; Ugon, Julien
**Date:** 2011
**Type:** Text; Journal article
**Relation:** Journal of Global Optimization Vol. 50, no. 1 (2011), p. 3-22
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:** false
**Reviewed:**
**Description:** In this paper, a new algorithm to locally minimize nonsmooth functions represented as a difference of two convex functions (DC functions) is proposed. The algorithm is based on the concept of codifferential. It is assumed that DC decomposition of the objective function is known a priori. We develop an algorithm to compute descent directions using a few elements from codifferential. The convergence of the minimization algorithm is studied and its comparison with different versions of the bundle methods using results of numerical experiments is given. © 2010 Springer Science+Business Media, LLC.
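The record above uses codifferentials, which this listing does not reproduce. For context only, the sketch below runs the classical DCA iteration (a different, older DC method) on the toy DC function f(x) = x⁴ − x² with g(x) = x⁴ and h(x) = x²: each step linearizes h at the current point and minimizes the resulting convex model, which here has a closed form.

```python
def dca_step(x):
    """One DCA step for f = g - h with g(x) = x**4 and h(x) = x**2.

    Linearizing h at x gives the convex subproblem
        minimize  t**4 - 2*x*t  over t,
    whose stationarity condition 4*t**3 = 2*x yields t = (x/2)**(1/3).
    """
    if x >= 0:
        return (x / 2.0) ** (1.0 / 3.0)
    return -((-x / 2.0) ** (1.0 / 3.0))

x = 1.0
for _ in range(50):
    x = dca_step(x)
print(round(x, 6))  # -> 0.707107, a local minimizer of x**4 - x**2
```

The iterates converge to 1/√2, a stationary point of f; starting from a negative point they converge to −1/√2, illustrating the purely local nature shared by DC methods.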

Optimization methods and the k-committees algorithm for clustering of sequence data

- Yearwood, John, Bagirov, Adil, Kelarev, Andrei

**Authors:** Yearwood, John; Bagirov, Adil; Kelarev, Andrei
**Date:** 2009
**Type:** Text; Journal article
**Relation:** Applied and Computational Mathematics Vol. 8, no. 1 (2009), p. 92-101
**Relation:** http://purl.org/au-research/grants/arc/DP0211866
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:** false
**Description:** The present paper is devoted to new algorithms for unsupervised clustering based on the optimization approaches due to [2], [3] and [4]. We consider a novel situation, where the datasets consist of nucleotide or protein sequences and rather sophisticated biologically significant alignment scores have to be used as a measure of distance. Sequences of this kind cannot be regarded as points in a finite dimensional space. Besides, the alignment scores do not satisfy properties of Minkowski metrics. Nevertheless, the optimization approaches have made it possible to introduce a new k-committees algorithm and compare its performance with previous algorithms for two datasets. Our experimental results show that the k-committees algorithm achieves intermediate accuracy for a dataset of ITS sequences, and it can perform better than the discrete k-means and Nearest Neighbour algorithms for certain datasets. All three algorithms achieve good agreement with clusters previously published in the biological literature and can be used to obtain biologically significant clusterings.

Comparing different nonsmooth minimization methods and software

- Karmitsa, Napsu, Bagirov, Adil, Makela, Marko

**Authors:** Karmitsa, Napsu; Bagirov, Adil; Makela, Marko
**Date:** 2012
**Type:** Text; Journal article
**Relation:** Optimization Methods and Software Vol. 27, no. 1 (2012), p. 131-153
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:** false
**Reviewed:**
**Description:** Most nonsmooth optimization (NSO) methods can be divided into two main groups: subgradient methods and bundle methods. In this paper, we test and compare different methods from both groups as well as some methods which may be considered as hybrids of these two and/or some others. All the solvers tested are so-called general black box methods which, at least in theory, can be applied to solve almost all NSO problems. The test set includes a large number of unconstrained nonsmooth convex and nonconvex problems of different size. In particular, it includes piecewise linear and quadratic problems. The aim of this work is not to foreground some methods over the others but to get some insight on which method to select for certain types of problems. © 2012 Taylor and Francis Group, LLC.

Classification through incremental max-min separability

- Bagirov, Adil, Ugon, Julien, Webb, Dean, Karasozen, Bulent

**Authors:** Bagirov, Adil; Ugon, Julien; Webb, Dean; Karasozen, Bulent
**Date:** 2011
**Type:** Text; Journal article
**Relation:** Pattern Analysis and Applications Vol. 14, no. 2 (2011), p. 165-174
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:** false
**Reviewed:**
**Description:** Piecewise linear functions can be used to approximate non-linear decision boundaries between pattern classes. Piecewise linear boundaries are known to provide efficient real-time classifiers. However, they require a long training time. Finding piecewise linear boundaries between sets is a difficult optimization problem. Most approaches use heuristics to avoid solving this problem, which may lead to suboptimal piecewise linear boundaries. In this paper, we propose an algorithm for globally training hyperplanes using an incremental approach. Such an approach allows one to find a near global minimizer of the classification error function and to compute as few hyperplanes as needed for separating sets. We apply this algorithm for solving supervised data classification problems and report the results of numerical experiments on real-world data sets. These results demonstrate that the new algorithm requires a reasonable training time and its test set accuracy is consistently good on most data sets compared with mainstream classifiers. © 2010 Springer-Verlag London Limited.
