
A method of truncated codifferential with application to some problems of cluster analysis

- Demyanov, Vladimir, Bagirov, Adil, Rubinov, Alex

**Authors:** Demyanov, Vladimir, Bagirov, Adil, Rubinov, Alex **Date:** 2002 **Type:** Text, Journal article **Relation:** Journal of Global Optimization Vol. 23, no. 1 (May 2002), p. 63-80 **Full Text:** false **Reviewed:** **Description:** A method of truncated codifferential descent for minimizing continuously codifferentiable functions is suggested. The convergence of the method is studied, and results of numerical experiments are presented. Application of the suggested method to some problems of cluster analysis is discussed. In the numerical experiments, the Wisconsin Diagnostic Breast Cancer database was used. **Description:** 2003000062

An algorithm for clustering based on non-smooth optimization techniques

- Bagirov, Adil, Rubinov, Alex, Sukhorukova, Nadezda, Yearwood, John

**Authors:** Bagirov, Adil, Rubinov, Alex, Sukhorukova, Nadezda, Yearwood, John **Date:** 2003 **Type:** Text, Journal article **Relation:** International Transactions in Operational Research Vol. 10, no. 6 (2003), p. 611-617 **Full Text:** false **Reviewed:** **Description:** The problem of cluster analysis is formulated as a problem of non-smooth, non-convex optimization, and an algorithm for solving the cluster analysis problem based on non-smooth optimization techniques is developed. We discuss applications of this algorithm to large databases. Results of numerical experiments are presented to demonstrate the effectiveness of this algorithm. **Description:** C1 **Description:** 2003000422

An algorithm for clustering using L1-norm based on hyperbolic smoothing technique

- Bagirov, Adil, Mohebi, Ehsan

**Authors:** Bagirov, Adil, Mohebi, Ehsan **Date:** 2016 **Type:** Text, Journal article **Relation:** Computational Intelligence Vol. 32, no. 3 (2016), p. 439-457 **Relation:** http://purl.org/au-research/grants/arc/DP140103213 **Full Text:** false **Reviewed:** **Description:** Cluster analysis deals with the problem of organizing a collection of objects into clusters based on a similarity measure, which can be defined using various distance functions. The use of different similarity measures allows one to find different cluster structures in a data set. In this article, an algorithm is developed to solve clustering problems where the similarity measure is defined using the L1-norm. The algorithm is designed using the nonsmooth optimization approach to the clustering problem. Smoothing techniques are applied to smooth both the clustering function and the L1-norm. The algorithm computes clusters sequentially and finds global or near-global solutions to the clustering problem. Results of numerical experiments using 12 real-world data sets are reported, and the proposed algorithm is compared with two other clustering algorithms. © 2015 Wiley Periodicals, Inc.
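The hyperbolic smoothing idea behind this algorithm can be sketched as follows (a minimal illustrative reconstruction, not the authors' code; the function names and the parameter `tau` are assumptions): each nonsmooth term |t| in the L1 distance is replaced by the smooth approximation sqrt(t^2 + tau^2), which tends to |t| as tau goes to 0. The min over centers is itself also nonsmooth and is smoothed in the paper as well, which this sketch omits for brevity.

```python
import math

def smooth_abs(t, tau=1e-3):
    # Hyperbolic smoothing of |t|: sqrt(t^2 + tau^2) -> |t| as tau -> 0.
    return math.sqrt(t * t + tau * tau)

def l1_cluster_objective(points, centers, tau=1e-3):
    # Smoothed L1 clustering function: for each point take the smallest
    # smoothed L1 distance to a center, then average over the data set.
    # (The inner min is also nonsmooth; the paper smooths it too, which
    # this sketch leaves out.)
    total = 0.0
    for x in points:
        dists = [sum(smooth_abs(xi - ci, tau) for xi, ci in zip(x, c))
                 for c in centers]
        total += min(dists)
    return total / len(points)

# Two tight groups and their centers: the smoothed objective is small.
points = [(0.0, 0.0), (0.1, -0.1), (5.0, 5.0), (5.2, 4.9)]
centers = [(0.0, 0.0), (5.0, 5.0)]
val = l1_cluster_objective(points, centers)
```

As `tau` shrinks, `l1_cluster_objective` approaches the true nonsmooth L1 clustering function, so standard smooth local solvers become applicable.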

An algorithm for minimizing clustering functions
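
- Bagirov, Adil, Ugon, Julien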

**Authors:** Bagirov, Adil, Ugon, Julien **Date:** 2005 **Type:** Text, Journal article **Relation:** Optimization Vol. 54, no. 4-5 (Aug-Oct 2005), p. 351-368 **Full Text:** **Reviewed:** **Description:** The problem of cluster analysis is formulated as a problem of nonsmooth, nonconvex optimization. An algorithm for solving the latter optimization problem is developed which allows one to significantly reduce the computational effort. This algorithm is based on the so-called discrete gradient method. Results of numerical experiments are presented which demonstrate the effectiveness of the proposed algorithm. **Description:** C1 **Description:** 2003001266

An incremental clustering algorithm based on hyperbolic smoothing

- Bagirov, Adil, Ordin, Burak, Ozturk, Gurkan, Xavier, Adilson

**Authors:** Bagirov, Adil, Ordin, Burak, Ozturk, Gurkan, Xavier, Adilson **Date:** 2015 **Type:** Text, Journal article **Relation:** Computational Optimization and Applications Vol. 61, no. 1 (2015), p. 219-241 **Relation:** http://purl.org/au-research/grants/arc/DP140103213 **Full Text:** false **Reviewed:** **Description:** Clustering is an important problem in data mining. It can be formulated as a nonsmooth, nonconvex optimization problem, which is challenging for most global optimization techniques even in medium-sized data sets. In this paper, we propose an approach that allows one to apply local methods of smooth optimization to solve clustering problems. We apply an incremental approach to generate starting points for cluster centers, which enables us to deal with the nonconvexity of the problem. The hyperbolic smoothing technique is applied to handle the nonsmoothness of the clustering problems and to make it possible to apply smooth optimization algorithms to solve them. Results of numerical experiments with eleven real-world data sets, and a comparison with state-of-the-art incremental clustering algorithms, demonstrate that smooth optimization algorithms combined with the incremental approach are a powerful alternative to existing clustering algorithms.
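The incremental generation of starting points can be illustrated as follows (a minimal sketch of the general idea, not the published algorithm; `next_center_candidate` and the brute-force candidate scan are illustrative assumptions): given the centers found for the (k-1)-cluster problem, a promising starting point for the k-th center is a data point whose addition most reduces the clustering objective.

```python
def sq_dist(a, b):
    # Squared Euclidean distance between two points.
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def next_center_candidate(points, centers):
    # Given the k-1 centers found so far, pick as the starting point for
    # the k-th center the data point that most reduces the clustering
    # objective when added as a new center.
    def objective(cs):
        return sum(min(sq_dist(x, c) for c in cs) for x in points)
    base = objective(centers)
    best, best_gain = None, -1.0
    for y in points:
        gain = base - objective(centers + [y])
        if gain > best_gain:
            best, best_gain = y, gain
    return best

# One-dimensional toy data with two groups; one center already placed.
points = [(0.0,), (0.2,), (9.0,), (9.1,)]
centers = [(0.1,)]  # solution of the 1-cluster problem
cand = next_center_candidate(points, centers)
```

The candidate lands in the uncovered group, which is exactly why the incremental strategy sidesteps much of the problem's nonconvexity: each new center starts near a region the current centers serve badly.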

Cluster analysis of a tobacco control data set

- Dzalilov, Zari, Bagirov, Adil

**Authors:** Dzalilov, Zari, Bagirov, Adil **Date:** 2010 **Type:** Text, Journal article **Relation:** International Journal of Lean Thinking Vol. 1, no. 2 (2010), p. **Full Text:** false **Reviewed:** **Description:** Development of theoretical and methodological frameworks in data analysis is fundamental for modeling complex tobacco control systems. Following this idea, a new optimization-based approach was introduced through two distinct methods: a modified linear least squares fit and a heuristic algorithm for feature selection. Optimization-based methods have the potential to detect nonlinearity and therefore to be more effective tools for analyzing complex data sets. In this study we evaluate the modified global k-means clustering algorithm by applying it to a massive set of real-time tobacco control survey data. Cluster analysis identified fixed and stable clusters in the studied data. These clusters correspond to groups of smokers with similar behaviour, and their identification may allow us to give recommendations on the modification of existing tobacco control systems and on the design of future data acquisition surveys.

Clustering in large data sets with the limited memory bundle method

- Karmitsa, Napsu, Bagirov, Adil, Taheri, Sona

**Authors:** Karmitsa, Napsu, Bagirov, Adil, Taheri, Sona **Date:** 2018 **Type:** Text, Journal article **Relation:** Pattern Recognition Vol. 83, no. (2018), p. 245-259 **Relation:** http://purl.org/au-research/grants/arc/DP140103213 **Full Text:** false **Reviewed:** **Description:** The aim of this paper is to design an algorithm based on nonsmooth optimization techniques to solve the minimum sum-of-squares clustering problem in very large data sets. First, the clustering problem is formulated as a nonsmooth optimization problem. Then the limited memory bundle method [Haarala et al., 2007] is modified and combined with an incremental approach to design a new clustering algorithm. The algorithm is evaluated using real-world data sets with both a large number of attributes and a large number of data points, and is compared with other optimization-based clustering algorithms. The numerical results demonstrate the efficiency of the proposed algorithm for clustering in very large data sets.
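
- Bagirov, Adil, Ugon, Julien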

**Authors:** Bagirov, Adil, Ugon, Julien **Date:** 2018 **Type:** Text, Journal article **Relation:** Optimization Methods and Software Vol. 33, no. 1 (2018), p. 194-219 **Relation:** http://purl.org/au-research/grants/arc/DP140103213 **Full Text:** false **Reviewed:** **Description:** The clusterwise linear regression problem is formulated as a nonsmooth, nonconvex optimization problem using the squared regression error function. The objective function in this problem is represented as a difference of convex functions. Optimality conditions are derived, and an algorithm is designed based on this representation. An incremental approach is proposed to generate starting solutions. The algorithm is tested on small to large data sets. © 2017 Informa UK Limited, trading as Taylor & Francis Group.

Nonsmooth DC programming approach to the minimum sum-of-squares clustering problems

- Bagirov, Adil, Taheri, Sona, Ugon, Julien

**Authors:** Bagirov, Adil, Taheri, Sona, Ugon, Julien **Date:** 2016 **Type:** Text, Journal article **Relation:** Pattern Recognition Vol. 53, no. (2016), p. 12-24 **Relation:** http://purl.org/au-research/grants/arc/DP140103213 **Full Text:** false **Reviewed:** **Description:** This paper introduces an algorithm for solving the minimum sum-of-squares clustering problems using their difference-of-convex representations. A nonsmooth, nonconvex optimization formulation of the clustering problem is used to design the algorithm. Characterizations of critical points, stationary points in the sense of generalized gradients, and inf-stationary points of the clustering problem are given. The proposed algorithm is tested and compared with other clustering algorithms using large real-world data sets. © 2015 Elsevier Ltd. All rights reserved.
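The difference-of-convex representation at the heart of this approach can be checked numerically (a minimal sketch using the standard DC decomposition of a minimum; the function names are illustrative): since min_j d_j = sum_j d_j - max_j sum_{l != j} d_l, the clustering function splits as f = f1 - f2 with both f1 and f2 convex in the cluster centers.

```python
def sq(a, b):
    # Squared Euclidean distance.
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def mssc(points, centers):
    # Nonsmooth minimum sum-of-squares clustering function:
    # average min squared distance to the nearest center.
    return sum(min(sq(x, c) for c in centers) for x in points) / len(points)

def dc_parts(points, centers):
    # DC decomposition  min_j d_j = sum_j d_j - max_j sum_{l != j} d_l,
    # giving  mssc = f1 - f2  with f1 and f2 both convex in the centers.
    m = len(points)
    f1 = sum(sum(sq(x, c) for c in centers) for x in points) / m
    f2 = sum(max(sum(sq(x, c) for l, c in enumerate(centers) if l != j)
                 for j in range(len(centers)))
             for x in points) / m
    return f1, f2

points = [(0.0, 0.0), (1.0, 0.0), (4.0, 4.0)]
centers = [(0.5, 0.0), (4.0, 4.0)]
f1, f2 = dc_parts(points, centers)
```

The identity f1 - f2 = mssc holds exactly for any centers, which is what lets DC algorithms linearize the concave part -f2 and solve a convex subproblem at each step.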

Nonsmooth optimization based algorithms in cluster analysis

- Bagirov, Adil, Mohebi, Ehsan

**Authors:** Bagirov, Adil, Mohebi, Ehsan **Date:** 2015 **Type:** Text, Book chapter **Relation:** Partitional Clustering Algorithms p. 99-146 **Full Text:** false **Reviewed:** **Description:** Cluster analysis is an important task in data mining. It deals with the problem of organizing a collection of objects into clusters based on a similarity measure. Various distance functions can be used to define the similarity measure. Cluster analysis problems with the similarity measure defined by the squared Euclidean distance, also known as the minimum sum-of-squares clustering, have been studied extensively over the last five decades; the L1 and L∞ norms have attracted less attention. In this chapter, we consider a nonsmooth, nonconvex optimization formulation of the cluster analysis problem. This formulation allows one to easily apply similarity measures defined using different distance functions. Moreover, an efficient incremental algorithm can be designed based on this formulation to solve the clustering problems. We develop incremental algorithms for solving clustering problems where the similarity measure is defined using the L1, L2 and L∞ norms. We also consider different algorithms for solving nonsmooth, nonconvex optimization problems in cluster analysis. The proposed algorithms are tested using several real-world data sets and compared with other similar algorithms.

Prediction of monthly rainfall in Victoria, Australia : Clusterwise linear regression approach

- Bagirov, Adil, Mahmood, Arshad, Barton, Andrew

**Authors:** Bagirov, Adil, Mahmood, Arshad, Barton, Andrew **Date:** 2017 **Type:** Text, Journal article **Relation:** Atmospheric Research Vol. 188, no. (2017), p. 20-29 **Relation:** http://purl.org/au-research/grants/arc/DP140103213 **Full Text:** false **Reviewed:** **Description:** This paper develops the Clusterwise Linear Regression (CLR) technique for the prediction of monthly rainfall. CLR is a combination of clustering and regression techniques. It is formulated as an optimization problem, and an incremental algorithm is designed to solve it. The algorithm is applied to predict monthly rainfall in Victoria, Australia, using rainfall data with five input meteorological variables over the period 1889-2014 from eight geographically diverse weather stations. The prediction performance of the CLR method is evaluated by comparing observed and predicted rainfall values using four measures of forecast accuracy. The proposed method is also compared with CLR using the maximum likelihood framework via the expectation-maximization algorithm, multiple linear regression, artificial neural networks and support vector machines for regression. The results demonstrate that the proposed algorithm outperforms the other methods in most locations. © 2017 Elsevier B.V.
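The combination of clustering and regression described here can be sketched with a simple alternating scheme (an illustrative toy version, not the paper's incremental optimization algorithm; `fit_line` and `clr` are assumed names): each point is assigned to the regression line with the smallest squared residual, then each line is refit on its assigned points.

```python
def fit_line(pts):
    # Ordinary least squares for y = a*x + b on the given (x, y) points.
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b

def clr(points, lines, iters=10):
    # Alternate: assign each point to the line with the smallest squared
    # residual, then refit each line on its assigned points.
    for _ in range(iters):
        groups = [[] for _ in lines]
        for x, y in points:
            errs = [(y - (a * x + b)) ** 2 for a, b in lines]
            groups[errs.index(min(errs))].append((x, y))
        lines = [fit_line(g) if len(g) >= 2 else l
                 for g, l in zip(groups, lines)]
    return lines

# Two noiseless linear regimes: y = 2x and y = -x + 10.
pts = [(x, 2 * x) for x in range(5)] + [(x, -x + 10) for x in range(5, 10)]
lines = clr(pts, [(1.0, 0.0), (-1.0, 9.0)])
```

Like k-means, this alternation only reaches a local solution from a given start, which is why the paper's incremental generation of starting solutions matters in practice.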
