Authors:

Bagirov, Adil (45)
Ugon, Julien (13)
Taheri, Sona (11)
Karmitsa, Napsu (9)
Ganjehlou, Asef Nazari (4)
Joki, Kaisa (4)
Makela, Marko (4)
Ozturk, Gurkan (4)
Sukhorukova, Nadezda (4)
Karasozen, Bulent (3)
Kasimbeyli, Refail (3)
Ordin, Burak (3)
Sultanova, Nargiz (3)
Webb, Dean (3)
Al Nuaimat, Alia (2)
Beliakov, Gleb (2)
Burachik, Regina (2)
Kaya, Yalcin (2)
Long, Qiang (2)
Mammadov, Musa (2)

Subjects:

0103 Numerical and Computational Mathematics (26)
0102 Applied Mathematics (22)
Nonconvex optimization (21)
Subdifferential (10)
0802 Computation Theory and Mathematics (9)
Cluster analysis (8)
Optimization (7)
Smoothing techniques (6)
0906 Electrical and Electronic Engineering (5)
Bundle methods (5)
Classification (5)
DC optimization (5)
0801 Artificial Intelligence and Image Processing (4)
DC functions (4)
Data mining (4)
Incremental algorithm (4)
Incremental approach (4)
Numerical methods (4)
Regression analysis (4)

Implementation of novel methods of global and nonsmooth optimization : GANSO programming library

- Beliakov, Gleb, Ugon, Julien

**Authors:** Beliakov, Gleb; Ugon, Julien
**Date:** 2007
**Type:** Text, Journal article
**Relation:** Optimization Vol. 56, no. 5-6 (2007), p. 543-546
**Description:** We discuss the implementation of a number of modern methods of global and nonsmooth continuous optimization, based on the ideas of Rubinov, in a programming library GANSO. GANSO implements the derivative-free bundle method, the extended cutting angle method, dynamical system-based optimization and their various combinations and heuristics. We outline the main ideas behind each method, and report on the interfacing with Matlab and Maple packages.

**Authors:** Sukhorukova, Nadezda
**Date:** 2008
**Type:** Text, Journal article
**Relation:** Optimization Methods and Software Vol. 23, no. 5 (2008), p. 793-810
**Full Text:** false
**Description:** The classical Remez algorithm was developed for constructing the best polynomial approximations for continuous and discrete functions in an interval [a, b]. In this paper, the classical Remez algorithm is generalized to the problem of linear spline approximation with certain conditions on the spline parameters. Namely, the spline parameters have to be nonnegative and the values of the splines at one of the borders (or both borders) of the approximation intervals may be fixed. This type of constraint occurs in some practical applications, e.g. the problem of taxation tables restoration. The results of numerical experiments with a Remez-like algorithm developed for this class of conditional optimization problems are presented.

New algorithms for multi-class cancer diagnosis using tumor gene expression signatures

- Bagirov, Adil, Ferguson, Brent, Ivkovic, Sasha, Saunders, Gary, Yearwood, John

**Authors:** Bagirov, Adil; Ferguson, Brent; Ivkovic, Sasha; Saunders, Gary; Yearwood, John
**Date:** 2003
**Type:** Text, Journal article
**Relation:** Bioinformatics Vol. 19, no. 14 (2003), p. 1800-1807
**Description:** Motivation: The increasing use of DNA microarray-based tumor gene expression profiles for cancer diagnosis requires mathematical methods with high accuracy for solving clustering, feature selection and classification problems of gene expression data. Results: New algorithms are developed for solving clustering, feature selection and classification problems of gene expression data. The clustering algorithm is based on optimization techniques and allows the calculation of clusters step by step. This approach allows us to find as many clusters as a data set contains with respect to some tolerance. Feature selection is crucial for a gene expression database. Our feature selection algorithm is based on calculating overlaps of different genes. The database used contains over 16 000 genes, and this number is considerably reduced by feature selection. We propose a classification algorithm where each tissue sample is considered as the center of a cluster which is a ball. The results of numerical experiments confirm that the classification algorithm, in combination with the feature selection algorithm, performs slightly better than the published results for multi-class classifiers based on support vector machines for this data set.

A generalized subgradient method with piecewise linear subproblem

- Bagirov, Adil, Ganjehlou, Asef Nazari, Tor, Hakan, Ugon, Julien

**Authors:** Bagirov, Adil; Ganjehlou, Asef Nazari; Tor, Hakan; Ugon, Julien
**Date:** 2010
**Type:** Text, Journal article
**Relation:** Dynamics of Continuous, Discrete and Impulsive Systems Series B: Applications and Algorithms Vol. 17, no. 5 (2010), p. 621-638
**Full Text:** false
**Description:** In this paper, a new version of the quasisecant method for nonsmooth nonconvex optimization is developed. Quasisecants are overestimates of the objective function in some neighborhood of a given point. Subgradients are used to obtain quasisecants. We describe classes of nonsmooth functions where quasisecants can be computed explicitly. We show that a descent direction with sufficient decrease must satisfy a set of linear inequalities. In the proposed algorithm this set of linear inequalities is solved by applying the subgradient algorithm to minimize a piecewise linear function. We compare results of numerical experiments between the proposed algorithm and the subgradient method. Copyright © 2010 Watam Press.

A quasisecant method for minimizing nonsmooth functions

- Bagirov, Adil, Ganjehlou, Asef Nazari

**Authors:** Bagirov, Adil; Ganjehlou, Asef Nazari
**Date:** 2010
**Type:** Text, Journal article
**Relation:** Optimization Methods and Software Vol. 25, no. 1 (2010), p. 3-18
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:** false
**Description:** We present an algorithm to locally minimize nonsmooth, nonconvex functions. In order to find descent directions, the notion of quasisecants, introduced in this paper, is applied. We prove that the algorithm converges to Clarke stationary points. Numerical results are presented demonstrating the applicability of the proposed algorithm to a wide variety of nonsmooth, nonconvex optimization problems. We also compare the proposed algorithm with the bundle method using numerical results.

Max-min separability

**Authors:** Bagirov, Adil
**Date:** 2005
**Type:** Text, Journal article
**Relation:** Optimization Methods and Software Vol. 20, no. 2-3 (2005), p. 271-290
**Description:** We consider the problem of discriminating two finite point sets in n-dimensional space by a finite number of hyperplanes generating a piecewise linear function. If the intersection of these sets is empty, then they can be strictly separated by a max-min of linear functions. An error function is introduced. This function is nonconvex and piecewise linear. We discuss an algorithm for its minimization. The results of numerical experiments using some real-world datasets are presented, which show the effectiveness of the proposed approach.
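
The separating function described in this abstract is a max of minima of linear functions. The following is an illustrative sketch only: the grouping of hyperplanes and all coefficient values below are hypothetical, and the paper's training (error-function minimization) is not reproduced.

```python
# Sketch of a max-min separating function:
#   f(x) = max_i min_{j in group i} (a_ij . x + b_ij)
# Points with f(x) > 0 are assigned to the first set, the rest to the second.
# The hyperplane coefficients below are hypothetical illustration values.

def maxmin_separator(groups, x):
    """groups: list of groups, each a list of (a, b) hyperplane pairs."""
    def lin(a, b):
        return sum(ai * xi for ai, xi in zip(a, x)) + b
    return max(min(lin(a, b) for (a, b) in group) for group in groups)

# Two groups of hyperplanes in the plane (hypothetical values):
groups = [
    [((1.0, 0.0), -1.0), ((0.0, 1.0), -1.0)],    # roughly: x > 1 and y > 1
    [((-1.0, 0.0), -1.0), ((0.0, -1.0), -1.0)],  # roughly: x < -1 and y < -1
]

print(maxmin_separator(groups, (2.0, 2.0)))  # 1.0  (positive: first set)
print(maxmin_separator(groups, (0.0, 0.0)))  # -1.0 (negative: second set)
```

The min over each group carves out an intersection of half-spaces; the outer max takes the union of those regions, which is what makes the resulting error function piecewise linear but nonconvex.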

A nonsmooth optimization approach to sensor network localization

- Bagirov, Adil, Lai, Daniel, Palaniswami, M.

**Authors:** Bagirov, Adil; Lai, Daniel; Palaniswami, M.
**Date:** 2007
**Type:** Text, Conference paper
**Relation:** Paper presented at the 3rd International Conference on Intelligent Sensors, Sensor Networks and Information, ISSNIP 2007, Melbourne, Victoria, 3rd-6th December 2007, p. 727-732
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Description:** In this paper the problem of localization of a wireless sensor network is formulated as an unconstrained nonsmooth optimization problem. We minimize a distance objective function which incorporates unknown sensor nodes and nodes with known positions (anchors), in contrast to popular semidefinite programming (SDP) methods which use artificial objective functions. We study the main properties of the objective function in this problem and design an algorithm for its minimization. Our algorithm is a derivative-free discrete gradient method that allows one to find a near-global solution. The algorithm can handle a large number of sensors in the network. This paper contains the theory of our proposed formulation and algorithm, while experimental results are included in later work.

An incremental nonsmooth optimization algorithm for clustering using L1 and L∞ norms

- Ordin, Burak, Bagirov, Adil, Mohebi, Ehsan

**Authors:** Ordin, Burak; Bagirov, Adil; Mohebi, Ehsan
**Date:** 2020
**Type:** Text, Journal article
**Relation:** Journal of Industrial and Management Optimization Vol. 16, no. 6 (2020), p. 2757-2779
**Relation:** DP190100580
**Full Text:** false
**Description:** An algorithm is developed for solving clustering problems with the similarity measure defined using the L1 and L∞ norms. It is based on an incremental approach and applies nonsmooth optimization methods to find cluster centers. Computational results on 12 data sets are reported and the proposed algorithm is compared with the X-means algorithm.

Incremental DC optimization algorithm for large-scale clusterwise linear regression

- Bagirov, Adil, Taheri, Sona, Cimen, Emre

**Authors:** Bagirov, Adil; Taheri, Sona; Cimen, Emre
**Date:** 2021
**Type:** Text, Journal article
**Relation:** Journal of Computational and Applied Mathematics Vol. 389, no. (2021), p. 1-17
**Relation:** https://purl.org/au-research/grants/arc/DP190100580
**Full Text:** false
**Description:** The objective function in the nonsmooth optimization model of the clusterwise linear regression (CLR) problem with the squared regression error is represented as a difference of two convex functions. Then, using the difference of convex algorithm (DCA) approach, the CLR problem is replaced by a sequence of smooth unconstrained optimization subproblems. A new algorithm based on the DCA and the incremental approach is designed to solve the CLR problem. We apply the quasi-Newton method to solve the subproblems. The proposed algorithm is evaluated using several synthetic and real-world data sets for regression and compared with other algorithms for CLR. Results demonstrate that the DCA-based algorithm is efficient for solving CLR problems with a large number of data points and, in particular, outperforms other algorithms when the number of input variables is small. © 2020 Elsevier B.V.
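
The DCA iteration this abstract relies on can be illustrated on a toy problem. This is not the paper's clusterwise regression objective; it is a minimal 1-D sketch of the general scheme: write f = g - h with g, h convex, then repeatedly linearize h at the current point and minimize the resulting convex surrogate.

```python
# DCA sketch on f(x) = x^2 - |x|, with g(x) = x^2 and h(x) = |x|.
# Each iteration picks w_k in the subdifferential of h at x_k and solves
#   x_{k+1} = argmin_x  g(x) - w_k * x,
# which for g(x) = x^2 has the closed form x_{k+1} = w_k / 2.
# Toy decomposition chosen for illustration, not from the paper.

def dca(x, iters=20):
    for _ in range(iters):
        w = 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)  # subgradient of |x|
        x = w / 2.0  # closed-form minimizer of x^2 - w*x
    return x

print(dca(0.7))   # 0.5, a local minimizer of f
print(dca(-0.3))  # -0.5, the symmetric local minimizer
```

Note the nonconvexity: the iteration converges to a different stationary point depending on the starting sign, which is why the paper pairs DCA with an incremental strategy for generating starting solutions.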

An approximate subgradient algorithm for unconstrained nonsmooth, nonconvex optimization

- Bagirov, Adil, Ganjehlou, Asef Nazari

**Authors:** Bagirov, Adil; Ganjehlou, Asef Nazari
**Date:** 2008
**Type:** Text, Journal article
**Relation:** Mathematical Methods of Operations Research Vol. 67, no. 2 (2008), p. 187-206
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Description:** In this paper a new algorithm for minimizing locally Lipschitz functions is developed. Descent directions in this algorithm are computed by solving a system of linear inequalities. The convergence of the algorithm is proved for quasidifferentiable semismooth functions. We present the results of numerical experiments with both regular and nonregular objective functions. We also compare the proposed algorithm with two different versions of the subgradient method using the results of numerical experiments. These results demonstrate the superiority of the proposed algorithm over the subgradient method. © 2007 Springer-Verlag.

Non-linear analogues of Lagrange functions in constrained optimization

**Authors:** Giri, Jason
**Date:** 2005
**Type:** Text, Thesis, PhD
**Description:** "This thesis investigates several non-linear analogues of Lagrange functions in the hope of answering the question 'Is it possible to generalise Lagrange functions such that they may be applied to a range of nonconvex objective problems?' The answer to this question is found to be yes for a particular class of optimization problems. Furthermore the thesis asserts that in derivative free optimization the general schema which is most theoretically and practically appealing involves the reformulation of both objective and constraint functions, whilst the least practically successful approach for everything but the most simple convex case is the augmented Lagrangian approach."
**Description:** Doctor of Philosophy

An inexact modified subgradient algorithm for nonconvex optimization

- Burachik, Regina, Kaya, Yalcin, Mammadov, Musa

**Authors:** Burachik, Regina; Kaya, Yalcin; Mammadov, Musa
**Date:** 2008
**Type:** Text, Journal article
**Relation:** Computational Optimization and Applications Vol. , no. (2008), p. 1-24
**Description:** We propose and analyze an inexact version of the modified subgradient (MSG) algorithm, which we call the IMSG algorithm, for nonsmooth and nonconvex optimization over a compact set. We prove that under an approximate, i.e. inexact, minimization of the sharp augmented Lagrangian, the main convergence properties of the MSG algorithm are preserved for the IMSG algorithm. Inexact minimization may allow one to solve problems with less computational effort. We illustrate this through test problems, including an optimal bang-bang control problem, under several different inexactness schemes. © 2008 Springer Science+Business Media, LLC.

Discrete gradient method : Derivative-free method for nonsmooth optimization

- Bagirov, Adil, Karasozen, Bulent, Sezer, Monsalve

**Authors:** Bagirov, Adil; Karasozen, Bulent; Sezer, Monsalve
**Date:** 2008
**Type:** Text, Journal article
**Relation:** Journal of Optimization Theory and Applications Vol. 137, no. 2 (2008), p. 317-334
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Description:** A new derivative-free method is developed for solving unconstrained nonsmooth optimization problems. This method is based on the notion of a discrete gradient. It is demonstrated that the discrete gradients can be used to approximate subgradients of a broad class of nonsmooth functions. It is also shown that the discrete gradients can be applied to find descent directions of nonsmooth functions. The preliminary results of numerical experiments with unconstrained nonsmooth optimization problems, as well as a comparison of the proposed method with the nonsmooth optimization solver DNLP from CONOPT-GAMS and the derivative-free optimization solver CONDOR, are presented. © 2007 Springer Science+Business Media, LLC.
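
The basic idea behind derivative-free descent, which the discrete gradient method builds on, can be sketched with plain finite differences. This is a deliberate simplification: the paper's discrete gradients use a more careful construction that also approximates subgradients of nonsmooth functions, which ordinary finite differences do not.

```python
# Simplified derivative-free descent using finite-difference gradient
# estimates. Not the paper's discrete gradient construction: that one
# handles nonsmooth functions, whereas this sketch assumes smoothness.

def fd_gradient(f, x, h=1e-6):
    """Forward-difference estimate of the gradient of f at x."""
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        g.append((f(xp) - f(x)) / h)
    return g

def descent(f, x, step=0.1, iters=200):
    """Fixed-step descent along the estimated negative gradient."""
    for _ in range(iters):
        g = fd_gradient(f, x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Example on a smooth convex function f(x) = (x0 - 1)^2 + (x1 + 2)^2.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x_min = descent(f, [0.0, 0.0])
print([round(v, 3) for v in x_min])  # [1.0, -2.0]
```

Only function values are used, which is the defining feature of derivative-free methods; the discrete gradient method replaces `fd_gradient` with directionally constructed approximations that remain meaningful where the objective is not differentiable.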

An update rule and a convergence result for a penalty function method

- Burachik, Regina, Kaya, Yalcin

**Authors:** Burachik, Regina; Kaya, Yalcin
**Date:** 2007
**Type:** Text, Journal article
**Relation:** Journal of Industrial & Management Optimization Vol. 3, no. 2 (2007), p. 381-398
**Description:** We use a primal-dual scheme to devise a new update rule for a penalty function method applicable to general optimization problems, including nonsmooth and nonconvex ones. The update rule we introduce uses dual information in a simple way. Numerical test problems show that our update rule has certain advantages over the classical one. We study the relationship between exact penalty parameters and dual solutions. Under the differentiability of the dual function at the least exact penalty parameter, we establish convergence of the minimizers of the sequential penalty functions to a solution of the original problem. Numerical experiments are then used to illustrate some of the theoretical results.
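
For context, the classical sequential penalty scheme that this paper improves upon can be sketched as follows. The multiplicative parameter update shown here is the classical rule; the paper's contribution, an update driven by dual information, is not reproduced. The 1-D inner minimizer is a hypothetical stand-in for whatever solver is used in practice.

```python
# Classical sequential penalty method for  min f(x)  s.t.  g(x) = 0,
# using a quadratic penalty f(x) + c * g(x)^2 and the classical
# multiplicative update of the penalty parameter c.

def argmin_1d(phi, x, step=1.0, sweeps=60):
    """Crude derivative-free 1-D minimizer by shrinking pattern search."""
    for _ in range(sweeps):
        for cand in (x - step, x + step):
            if phi(cand) < phi(x):
                x = cand
        step *= 0.7
    return x

def penalty_method(f, g, x0, c0=1.0, rate=10.0, iters=6):
    x, c = x0, c0
    for _ in range(iters):
        x = argmin_1d(lambda t: f(t) + c * g(t) ** 2, x)
        c *= rate  # classical rule: increase c each outer iteration
    return x

# Example: minimize f(x) = x^2 subject to g(x) = x - 1 = 0; solution x = 1.
x_star = penalty_method(lambda x: x * x, lambda x: x - 1.0, x0=0.0)
print(x_star)  # close to 1.0
```

The drawback motivating the paper is visible here: the classical rule drives c up blindly, making the subproblems increasingly ill-conditioned, whereas a dual-informed update can stop near the least exact penalty parameter.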

An algorithm for the estimation of a regression function by continuous piecewise linear functions

- Bagirov, Adil, Clausen, Conny, Kohler, Michael

**Authors:** Bagirov, Adil; Clausen, Conny; Kohler, Michael
**Date:** 2008
**Type:** Text, Journal article
**Relation:** Computational Optimization and Applications Vol. 45, no. (2008), p. 159-179
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Description:** The problem of the estimation of a regression function by continuous piecewise linear functions is formulated as a nonconvex, nonsmooth optimization problem. Estimates are defined by minimization of the empirical L2 risk over a class of functions, which are defined as maxima of minima of linear functions. An algorithm for finding continuous piecewise linear functions is presented. We observe that the objective function in the optimization problem is semismooth, quasidifferentiable and piecewise partially separable. The use of these properties allows us to design an efficient algorithm for approximation of subgradients of the objective function and to apply the discrete gradient method for its minimization. We present computational results with some simulated data and compare the new estimator with a number of existing ones. © 2008 Springer Science+Business Media, LLC.

Piecewise partially separable functions and a derivative-free algorithm for large scale nonsmooth optimization

**Authors:** Bagirov, Adil; Ugon, Julien
**Date:** 2006
**Type:** Text, Journal article
**Relation:** Journal of Global Optimization Vol. 35, no. 2 (Jun 2006), p. 163-195
**Description:** This paper introduces the notion of piecewise partially separable functions and studies their properties. We also consider some of the many applications of these functions. Finally, we consider the problem of minimizing piecewise partially separable functions and develop an algorithm for its solution. This algorithm exploits the structure of such functions. We present the results of preliminary numerical experiments.

Modified global k-means algorithm for minimum sum-of-squares clustering problems

**Authors:** Bagirov, Adil
**Date:** 2008
**Type:** Text, Journal article
**Relation:** Pattern Recognition Vol. 41, no. 10 (2008), p. 3192-3199
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Description:** The k-means algorithm and its variations are known to be fast clustering algorithms. However, they are sensitive to the choice of starting points and inefficient for solving clustering problems in large data sets. Recently, a new version of the k-means algorithm, the global k-means algorithm, has been developed. It is an incremental algorithm that dynamically adds one cluster center at a time and uses each data point as a candidate for the k-th cluster center. Results of numerical experiments show that the global k-means algorithm considerably outperforms the k-means algorithm. In this paper, a new version of the global k-means algorithm is proposed. A starting point for the k-th cluster center in this algorithm is computed by minimizing an auxiliary cluster function. Results of numerical experiments on 14 data sets demonstrate the superiority of the new algorithm; however, it requires more computational time than the global k-means algorithm. © 2008 Elsevier Ltd. All rights reserved.
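
The incremental seeding idea described above, each data point tried as a candidate k-th center, can be sketched in a few lines. This illustrates the original global k-means step only; the paper's modification, which seeds the k-th center by minimizing an auxiliary cluster function instead, is not reproduced. The data values are hypothetical.

```python
# Sketch of the incremental seeding step of the global k-means algorithm:
# given k-1 fixed centers, try each data point as a candidate k-th center
# and keep the one that minimizes the sum of squared distances.
# The data points and existing center below are hypothetical.

def sq_dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def sum_of_squares(points, centers):
    """Clustering error: each point is charged to its nearest center."""
    return sum(min(sq_dist(p, c) for c in centers) for p in points)

def add_center(points, centers):
    """Pick the data point that most reduces the clustering error."""
    return min(points, key=lambda p: sum_of_squares(points, centers + [p]))

points = [(0.0, 0.0), (0.2, 0.0), (5.0, 5.0), (5.2, 5.0)]
centers = [(0.1, 0.0)]            # one existing center near the first group
new_center = add_center(points, centers)
print(new_center)                 # one of the points in the far group
```

Because every data point is evaluated at every increment, each step costs O(n^2) distance evaluations, which is the price both the global k-means algorithm and the paper's modification pay for robustness to starting points.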

Nonsmooth optimisation approach to data classification

- Bagirov, Adil, Soukhoroukova, Nadejda

**Authors:** Bagirov, Adil; Soukhoroukova, Nadejda
**Date:** 2001
**Type:** Text, Conference paper
**Relation:** Paper presented at the Post-graduate ADFA Conference for Computer Science, PACCS01, Canberra, Australian Capital Territory, 14th July 2001
**Description:** We reduce supervised classification to solving a nonsmooth optimization problem. The proposed method allows one to solve classification problems for databases with an arbitrary number of classes. Numerical experiments have been carried out with databases of small and medium size. We present their results and compare them with those obtained by other classification algorithms based on optimization techniques. The results of the numerical experiments show the effectiveness of the proposed algorithms.

Approximations for some classes of optimal control problems with state constraints and repetitive control systems

**Authors:** Dymkou, Siarhei
**Date:** 2005
**Type:** Text, Thesis, Masters
**Full Text:** false
**Description:** This report presents the investigation of the research work "Approximations for some classes of optimal control problems with state constraints and repetitive control systems". The report mainly gives the theoretical part of the research, which is based on the numerical methods that are developed. It is conjectured that the corresponding algorithms will be realized as computer programs, after which they can be used in practical work.
**Description:** Master of Information Technology

Derivative free algorithms for nonsmooth and global optimization with application in cluster analysis

**Authors:** Ganjehlou, Asef Nazari
**Date:** 2009
**Type:** Text, Thesis, PhD
**Description:** This thesis is devoted to the development of algorithms for solving nonsmooth nonconvex problems. Some of these algorithms are derivative-free methods.
**Description:** Doctor of Philosophy
