Classes and clusters in data analysis
- Authors: Rubinov, Alex , Sukhorukova, Nadezda , Ugon, Julien
- Date: 2006
- Type: Text , Journal article
- Relation: European Journal of Operational Research Vol. 173, no. 3 (Sep 2006), p. 849-865
- Full Text:
- Reviewed:
- Description: We discuss the relation between classes and clusters in datasets with given classes. We examine the distribution of classes within the obtained clusters, using several clustering methods based on different techniques, and we also study the structure of the obtained clusters. One of the main conclusions obtained in this research is that the notion of purity cannot always be used to evaluate the accuracy of clustering techniques. (c) 2005 Elsevier B.V. All rights reserved.
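A minimal sketch of the purity measure discussed in the abstract above (the function and toy data are illustrative, not taken from the paper): purity is the fraction of points belonging to the dominant class of their cluster, which is one reason it can be misleading on its own, since it trivially approaches 1 as the number of clusters grows.

```python
# Illustrative purity computation (hypothetical helper, not code from the paper).
from collections import Counter

def purity(class_labels, cluster_labels):
    """Fraction of points that belong to the dominant class of their cluster."""
    clusters = {}
    for c, k in zip(class_labels, cluster_labels):
        clusters.setdefault(k, []).append(c)
    dominant = sum(max(Counter(members).values()) for members in clusters.values())
    return dominant / len(class_labels)

# Two clusters, one of them mixed: purity = (3 + 2) / 6
print(purity(["a", "a", "a", "b", "b", "a"], [0, 0, 0, 1, 1, 1]))  # 0.8333...
```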
Facility location via continuous optimization with discontinuous objective functions
- Authors: Ugon, Julien , Kouhbor, Shahnaz , Mammadov, Musa , Rubinov, Alex , Kruger, Alexander
- Date: 2007
- Type: Text , Journal article
- Relation: ANZIAM Journal Vol. 48, no. 3 (2007), p. 315-325
- Full Text:
- Reviewed:
- Description: Facility location problems are one of the most common applications of optimization methods. Continuous formulations are usually more accurate, but often result in complex problems that cannot be solved using traditional optimization methods. This paper examines the use of a global optimization method - AGOP - for solving location problems where the objective function is discontinuous. This approach is motivated by a real-world application in wireless networks design. © Australian Mathematical Society 2007.
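To make the discontinuity concrete, here is an illustrative coverage-type objective of the kind that arises in wireless network design (the model, names and data are assumptions for illustration, not the formulation of the paper): the number of uncovered clients jumps as the access point coordinates vary continuously.

```python
# Toy discontinuous location objective: count clients outside the coverage radius
# of every access point.  Purely illustrative; not the paper's model.
import numpy as np

def uncovered_clients(ap_positions, client_positions, radius):
    """Number of clients farther than `radius` from every access point."""
    ap = np.asarray(ap_positions, dtype=float)            # shape (m, 2)
    clients = np.asarray(client_positions, dtype=float)   # shape (n, 2)
    dists = np.linalg.norm(clients[:, None, :] - ap[None, :, :], axis=2)
    return int(np.sum(dists.min(axis=1) > radius))

clients = [[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]]
print(uncovered_clients([[0.5, 0.0]], clients, radius=1.0))  # 1: the far client is uncovered
```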
Codifferential method for minimizing nonsmooth DC functions
- Authors: Bagirov, Adil , Ugon, Julien
- Date: 2011
- Type: Text , Journal article
- Relation: Journal of Global Optimization Vol. 50, no. 1 (2011), p. 3-22
- Relation: http://purl.org/au-research/grants/arc/DP0666061
- Full Text: false
- Reviewed:
- Description: In this paper, a new algorithm to locally minimize nonsmooth functions represented as a difference of two convex functions (DC functions) is proposed. The algorithm is based on the concept of codifferential. It is assumed that a DC decomposition of the objective function is known a priori. We develop an algorithm to compute descent directions using only a few elements of the codifferential. The convergence of the minimization algorithm is studied, and the method is compared with different versions of bundle methods using results of numerical experiments. © 2010 Springer Science+Business Media, LLC.
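For readers unfamiliar with the DC setting, the sketch below shows how a known decomposition f = g - h (with g and h convex) is exploited by a classical DCA-style step on a toy function. This is not the codifferential algorithm of the paper; it only illustrates the kind of structure the paper assumes to be available a priori.

```python
# DCA-style illustration on f(x) = x**2 - |x|, i.e. g(x) = x**2 and h(x) = |x|.
# At each step, h is linearized at the current point using one of its subgradients,
# and the resulting convex model g(x) - v*x is minimized exactly.

def dca_step(x):
    v = 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)  # a subgradient of h(x) = |x|
    return v / 2.0                                   # argmin of x**2 - v*x

x = 0.3
for _ in range(5):
    x = dca_step(x)
print(x, x**2 - abs(x))  # reaches the local minimizer x = 0.5 with f = -0.25
```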
An algorithm for clusterwise linear regression based on smoothing techniques
- Authors: Bagirov, Adil , Ugon, Julien , Mirzayeva, Hijran
- Date: 2014
- Type: Text , Journal article
- Relation: Optimization Letters Vol. 9, no. 2 (2014), p. 375-390
- Full Text: false
- Reviewed:
- Description: We propose an algorithm based on an incremental approach and smoothing techniques to solve clusterwise linear regression (CLR) problems. This algorithm incrementally divides the whole data set into groups which can be easily approximated by one linear regression function. A special procedure is introduced to generate an initial solution for solving global optimization problems at each iteration of the incremental algorithm. Such an approach allows one to find global or approximate global solutions to the CLR problems. The algorithm is tested using several data sets for regression analysis and compared with the multistart and incremental Spath algorithms.
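The underlying clusterwise linear regression objective charges every data point the error of the best-fitting of the k linear functions, which makes it nonsmooth and nonconvex. A minimal sketch of evaluating this fit function (variable names and data are illustrative, not the paper's code):

```python
# CLR fit function: sum_i min_j (W[j] @ X[i] + b[j] - y[i])**2
import numpy as np

def clr_error(W, b, X, y):
    """Total squared error when each point is assigned to its best linear function."""
    residuals = X @ W.T + b - y[:, None]          # shape (n_points, k)
    return float(np.sum(np.min(residuals**2, axis=1)))

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 0.0, -1.0])               # data generated by two linear pieces
W = np.array([[1.0], [-1.0]]); b = np.array([0.0, 2.0])
print(clr_error(W, b, X, y))                      # 0.0: every point is matched by one of the two lines
```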
Classification through incremental max-min separability
- Authors: Bagirov, Adil , Ugon, Julien , Webb, Dean , Karasozen, Bulent
- Date: 2011
- Type: Text , Journal article
- Relation: Pattern Analysis and Applications Vol. 14, no. 2 (2011), p. 165-174
- Relation: http://purl.org/au-research/grants/arc/DP0666061
- Full Text: false
- Reviewed:
- Description: Piecewise linear functions can be used to approximate non-linear decision boundaries between pattern classes. Piecewise linear boundaries are known to provide efficient real-time classifiers. However, they require a long training time. Finding piecewise linear boundaries between sets is a difficult optimization problem. Most approaches use heuristics to avoid solving this problem, which may lead to suboptimal piecewise linear boundaries. In this paper, we propose an algorithm for globally training hyperplanes using an incremental approach. Such an approach allows one to find a near global minimizer of the classification error function and to compute as few hyperplanes as needed for separating sets. We apply this algorithm for solving supervised data classification problems and report the results of numerical experiments on real-world data sets. These results demonstrate that the new algorithm requires a reasonable training time and its test set accuracy is consistently good on most data sets compared with mainstream classifiers. © 2010 Springer-Verlag London Limited.
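A max-min separable (piecewise linear) decision function has the form f(x) = max_i min_{j in J_i} (w_ij · x + b_ij), and the sign of f assigns the class. The sketch below only evaluates such a function on hand-picked hyperplanes to show its form; it is not the incremental training algorithm of the paper.

```python
# Evaluate a max-min piecewise linear decision function (illustrative hyperplanes).
import numpy as np

def max_min_decision(x, groups):
    """groups: list of lists of (w, b) pairs; returns max over groups of min over pairs."""
    return max(min(np.dot(w, x) + b for (w, b) in group) for group in groups)

# Two groups of hyperplanes carve out a non-convex positive region of the plane.
groups = [
    [(np.array([1.0, 0.0]), -1.0), (np.array([0.0, 1.0]), -1.0)],  # x1 > 1 and x2 > 1
    [(np.array([-1.0, 0.0]), -1.0)],                               # or x1 < -1
]
print(max_min_decision(np.array([2.0, 2.0]), groups) > 0)   # True
print(max_min_decision(np.array([0.0, 0.0]), groups) > 0)   # False
```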
Characterization theorem for best polynomial spline approximation with free knots, variable degree and fixed tails
- Authors: Crouzeix, Jean-Pierre , Sukhorukova, Nadezda , Ugon, Julien
- Date: 2017
- Type: Text , Journal article
- Relation: Journal of Optimization Theory and Applications Vol. 172, no. 3 (2017), p. 950-964
- Full Text:
- Reviewed:
- Description: In this paper, we derive a necessary condition for a best approximation by piecewise polynomial functions whose degree may vary from one interval to another. Based on these results, we obtain a characterization theorem for polynomial splines with fixed tails, that is, splines whose value is fixed at one or more knots (external or internal). We apply nonsmooth nonconvex analysis to obtain this result, which is also a necessary and sufficient condition for inf-stationarity in the sense of Demyanov-Rubinov. This paper is an extension of a paper where similar conditions were obtained for free tail splines. The main results of this paper are essential for the development of a Remez-type algorithm for free knot spline approximation.
Global optimality conditions and optimization methods for polynomial programming problems
- Authors: Wu, Zhiyou , Tian, Jing , Ugon, Julien
- Date: 2015
- Type: Text , Journal article
- Relation: Journal of Global Optimization Vol. 62, no. 4 (2015), p. 617-641
- Full Text: false
- Reviewed:
- Description: This paper is concerned with the general polynomial programming problem with box constraints, including global optimality conditions and optimization methods. First, a necessary global optimality condition for a general polynomial programming problem with box constraints is given. Then we design a local optimization method that uses this necessary global optimality condition to obtain strongly or ε-strongly local minimizers, which substantially improve on some KKT points. Finally, a global optimization method is designed by combining the new local optimization method with an auxiliary function. Numerical examples show that our methods are efficient and stable.
Global optimality conditions and optimization methods for constrained polynomial programming problems
- Authors: Wu, Zhiyou , Tian, Jing , Ugon, Julien , Zhang, Liang
- Date: 2015
- Type: Text , Journal article
- Relation: Applied Mathematics and Computation Vol. 262, no. (2015), p. 312-325
- Full Text:
- Reviewed:
- Description: The general constrained polynomial programming problem (GPP) is considered in this paper. Problem (GPP) has a broad range of applications and is proved to be NP-hard. Necessary global optimality conditions for problem (GPP) are established. Then, a new local optimization method for this problem is proposed by exploiting these necessary global optimality conditions. A global optimization method is then obtained by combining this local optimization method with an auxiliary function. Some numerical examples are also given to illustrate that these approaches are very efficient. (C) 2015 Elsevier Inc. All rights reserved.
Optimality conditions and optimization methods for quartic polynomial optimization
- Authors: Wu, Zhiyou , Tian, Jing , Quan, Jing , Ugon, Julien
- Date: 2014
- Type: Text , Journal article
- Relation: Applied Mathematics and Computation Vol. 232, no. (2014), p. 968-982
- Full Text: false
- Reviewed:
- Description: In this paper, the multivariate quartic polynomial optimization problem (QPOP) is considered. Quartic optimization problems arise in various practical applications and are proved to be NP-hard. We discuss necessary global optimality conditions for problem (QPOP), and then present a new (strongly or ε-strongly) local optimization method based on these necessary global optimality conditions, which may escape and improve some KKT points. Finally, we design a global optimization method for problem (QPOP) by combining the new (strongly or ε-strongly) local optimization method and an auxiliary function. Numerical examples show that our algorithms are efficient and stable.
Chebyshev approximation by linear combinations of fixed knot polynomial splines with weighting functions
- Authors: Sukhorukova, Nadezda , Ugon, Julien
- Date: 2016
- Type: Text , Journal article
- Relation: Journal of Optimization Theory and Applications Vol. 171, no. 2 (2016), p. 536-549
- Full Text: false
- Reviewed:
- Description: In this paper, we derive conditions for best uniform approximation by fixed knot polynomial splines with weighting functions. The theory of Chebyshev approximation by fixed knot polynomial splines is very elegant and complete. Necessary and sufficient optimality conditions have been developed, leading to efficient algorithms for constructing optimal spline approximations. The optimality conditions are based on the notion of alternance (maximal deviation points with alternating deviation signs). In this paper, we extend these results to the case when the model function is a product of fixed knot polynomial splines (whose parameters are subject to optimization) and other functions (whose parameters are predefined). This problem is nonsmooth, and therefore we make use of convex and nonsmooth analysis to solve it.
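As a companion to the alternance-based conditions mentioned above, the following rough numerical check counts sign alternations among near-maximal deviation points on a grid. It is purely illustrative (the grid, tolerance and example are assumptions) and is not the characterization or algorithm of the paper.

```python
# Count sign alternations among near-maximal deviation points of an approximation error.
import numpy as np

def alternation_count(deviation, tol=1e-6):
    deviation = np.asarray(deviation, dtype=float)
    m = np.max(np.abs(deviation))
    signs = [np.sign(d) for d in deviation if abs(abs(d) - m) <= tol]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# Deviation of the best uniform linear approximation of x**2 on [0, 1], namely x - 1/8:
t = np.linspace(0.0, 1.0, 1001)
print(alternation_count(t**2 - (t - 0.125)))  # 2 alternations among 3 maximal deviation points
```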
Nonsmooth DC programming approach to clusterwise linear regression : Optimality conditions and algorithms
- Authors: Bagirov, Adil , Ugon, Julien
- Date: 2018
- Type: Text , Journal article
- Relation: Optimization Methods and Software Vol. 33, no. 1 (2018), p. 194-219
- Relation: http://purl.org/au-research/grants/arc/DP140103213
- Full Text: false
- Reviewed:
- Description: The clusterwise linear regression problem is formulated as a nonsmooth nonconvex optimization problem using the squared regression error function. The objective function in this problem is represented as a difference of convex functions. Optimality conditions are derived, and an algorithm is designed based on such a representation. An incremental approach is proposed to generate starting solutions. The algorithm is tested on small to large data sets. © 2017 Informa UK Limited, trading as Taylor & Francis Group.
Nonsmooth optimization algorithm for solving clusterwise linear regression problems
- Authors: Bagirov, Adil , Ugon, Julien , Mirzayeva, Hijran
- Date: 2015
- Type: Text , Journal article
- Relation: Journal of Optimization Theory and Applications Vol. 164, no. 3 (2015), p. 755-780
- Relation: http://purl.org/au-research/grants/arc/DP140103213
- Full Text: false
- Reviewed:
- Description: Clusterwise linear regression consists of finding a number of linear regression functions each approximating a subset of the data. In this paper, the clusterwise linear regression problem is formulated as a nonsmooth nonconvex optimization problem and an algorithm based on an incremental approach and on the discrete gradient method of nonsmooth optimization is designed to solve it. This algorithm incrementally divides the whole dataset into groups which can be easily approximated by one linear regression function. A special procedure is introduced to generate good starting points for solving global optimization problems at each iteration of the incremental algorithm. The algorithm is compared with the multi-start Spath and the incremental algorithms on several publicly available datasets for regression analysis.
Chebyshev multivariate polynomial approximation and point reduction procedure
- Authors: Sukhorukova, Nadezda , Ugon, Julien , Yost, David
- Date: 2021
- Type: Text , Journal article
- Relation: Constructive Approximation Vol. 53, no. 3 (2021), p. 529-544
- Relation: http://purl.org/au-research/grants/arc/DP180100602
- Full Text:
- Reviewed:
- Description: We apply the methods of nonsmooth and convex analysis to extend the study of Chebyshev (uniform) approximation for univariate polynomial functions to the case of general multivariate functions (not just polynomials). First of all, we give new necessary and sufficient optimality conditions for multivariate approximation, and a geometrical interpretation of them which reduces to the classical alternating sequence condition in the univariate case. Then, we present a procedure for verification of necessary and sufficient optimality conditions that is based on our generalization of the notion of alternating sequence to the case of multivariate polynomials. Finally, we develop an algorithm for fast verification of necessary optimality conditions in the multivariate polynomial case. © 2019, Springer Science+Business Media, LLC, part of Springer Nature.
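For context, once the basis is fixed, best uniform approximation over a finite point set can be computed by a standard linear program: minimise the level t subject to |model - data| <= t at every point. The sketch below uses this classical discretised formulation; it is offered only as background and is not the point reduction or verification procedure developed in the paper.

```python
# Discretised Chebyshev (uniform) approximation as a linear program.
import numpy as np
from scipy.optimize import linprog

def chebyshev_fit(basis_values, f_values):
    """min_{c,t} t  subject to  |basis_values @ c - f_values| <= t (componentwise)."""
    m, n = basis_values.shape
    obj = np.zeros(n + 1); obj[-1] = 1.0              # decision vector: [c_1..c_n, t]
    A_ub = np.block([[basis_values, -np.ones((m, 1))],
                     [-basis_values, -np.ones((m, 1))]])
    b_ub = np.concatenate([f_values, -f_values])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n + [(0, None)])
    return res.x[:-1], res.x[-1]

# Degree-1 fit of x**2 on [0, 1]: coefficients close to [-1/8, 1], deviation close to 1/8.
x = np.linspace(0.0, 1.0, 101)
coeffs, dev = chebyshev_fit(np.column_stack([np.ones_like(x), x]), x**2)
print(coeffs, dev)
```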
Generalised rational approximation and its application to improve deep learning classifiers
- Authors: Peiris, V , Sharon, Nir , Sukhorukova, Nadezda , Ugon, Julien
- Date: 2021
- Type: Text , Journal article
- Relation: Applied Mathematics and Computation Vol. 389, no. (2021), p.
- Relation: https://purl.org/au-research/grants/arc/DP180100602
- Full Text: false
- Reviewed:
- Description: A rational approximation (that is, approximation by a ratio of two polynomials) is a flexible alternative to polynomial approximation. In particular, rational functions can accurately approximate nonsmooth and non-Lipschitz functions, where polynomial approximations are not efficient. We prove that the optimisation problems appearing in the best uniform rational approximation and its generalisation to a ratio of linear combinations of basis functions are quasiconvex, even when the basis functions are not restricted to monomials. Then we show how this fact can be used in the development of computational methods. This paper presents a theoretical study of the arising optimisation problems and provides results of several numerical experiments. We apply our approximation as a preprocessing step to deep learning classifiers and demonstrate that the classification accuracy is significantly improved compared to the classification of the raw signals. © 2020
- Description: This research was supported by the Australian Research Council (ARC), Solving hard Chebyshev approximation problems through nonsmooth analysis (Discovery Project DP180100602). This research was partially sponsored by the Tel Aviv-Swinburne Research Collaboration Grant (2019).
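The quasiconvexity result highlighted in the abstract is what makes bisection-type methods natural here: for a fixed trial level t, deciding whether some ratio of linear combinations of basis functions achieves uniform error at most t on a grid reduces to a linear feasibility problem. The sketch below implements this classical idea (the basis choices, the normalisation q >= 1 and the example are assumptions, not the method or experiments of the paper).

```python
# Bisection on the uniform error level for generalised rational approximation:
# feasibility of |p(x_i) - f(x_i) q(x_i)| <= t q(x_i), q(x_i) >= 1 is a linear program.
import numpy as np
from scipy.optimize import linprog

def level_feasible(t, P, Q, f):
    """Is there (a, b) with |P@a - f*(Q@b)| <= t*(Q@b) and Q@b >= 1 on the grid?"""
    m, n_p = P.shape
    n_q = Q.shape[1]
    fQ = f[:, None] * Q
    A_ub = np.vstack([np.hstack([P, -(fQ + t * Q)]),        # p_i - (f_i + t) q_i <= 0
                      np.hstack([-P, fQ - t * Q]),          # -p_i + (f_i - t) q_i <= 0
                      np.hstack([np.zeros((m, n_p)), -Q])]) # q_i >= 1 (normalisation)
    b_ub = np.concatenate([np.zeros(2 * m), -np.ones(m)])
    res = linprog(np.zeros(n_p + n_q), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n_p + n_q))
    return res.status == 0

# Approximate |x| on a grid by a ratio of even quadratics (a0 + a1*x**2)/(b0 + b1*x**2).
x = np.linspace(-1.0, 1.0, 51)
f = np.abs(x)
P = np.column_stack([np.ones_like(x), x**2])
Q = np.column_stack([np.ones_like(x), x**2])
lo, hi = 0.0, 1.0                        # t = 1 is feasible (take the constant 1/2); t = 0 is not
for _ in range(20):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if level_feasible(mid, P, Q, f) else (mid, hi)
print(hi)                                # an estimate of the best achievable uniform error on this grid
```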