Your selections:

- Bagirov, Adil (25)
- Rubinov, Alex (20)
- Gao, David (11)
- Wu, Zhiyou (10)
- Ugon, Julien (9)
- Burachik, Regina (5)
- Roshchina, Vera (5)
- Bai, Fusheng (4)
- López, Marco (4)
- Miller, Mirka (4)
- Sukhorukova, Nadezda (4)
- Weber, Gerhard-Wilhelm (4)
- Yearwood, John (4)
- Fang, Shucherng (3)
- Gasimov, Rafail (3)
- Jeyakumar, Vaithilingam (3)
- Karmitsa, Napsu (3)
- Mammadov, Musa (3)
- Martinez-Legaz, Juan (3)
- Peña, Javier (3)

- 0102 Applied Mathematics (67)
- 0802 Computation Theory and Mathematics (24)
- Global optimization (16)
- Nonsmooth optimization (16)
- 0906 Electrical and Electronic Engineering (10)
- Nonconvex optimization (8)
- Subdifferential (7)
- Optimisation (6)
- Problem solving (6)
- Constrained optimization (5)
- DC programming (5)
- Optimization (5)
- 0101 Pure Mathematics (4)
- Abstract convexity (4)
- Algorithms (4)
- Augmented Lagrangian (4)
- Constraint theory (4)
- Global optimisation (4)
- Auxiliary function (3)

A difference of convex optimization algorithm for piecewise linear regression

- Bagirov, Adil, Taheri, Sona, Asadi, Soodabeh

**Authors:** Bagirov, Adil; Taheri, Sona; Asadi, Soodabeh
**Date:** 2019
**Type:** Text, Journal article
**Relation:** Journal of Industrial and Management Optimization Vol. 15, no. 2 (2019), p. 909-932
**Relation:** http://purl.org/au-research/grants/arc/DP140103213
**Full Text:** false
**Description:** The problem of finding a continuous piecewise linear function approximating a regression function is considered. This problem is formulated as a nonconvex nonsmooth optimization problem where the objective function is represented as a difference of convex (DC) functions. Subdifferentials of DC components are computed and an algorithm is designed based on these subdifferentials to find piecewise linear functions. The algorithm is tested using some synthetic and real-world data sets and compared with other regression algorithms.
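
The DC structure used in this abstract is a general fact: any continuous piecewise linear function can be written as the difference of two convex max-of-affine functions. A minimal sketch of evaluating such a representation (the coefficients below are illustrative, not the authors' algorithm):

```python
# Evaluate a DC representation of a piecewise linear function:
# f(x) = max_i (a_i * x + b_i) - max_j (c_j * x + d_j),
# where each max of affine pieces is convex, so f is a DC function.

def max_affine(x, pieces):
    """Convex piecewise linear function: max over affine pieces (slope, intercept)."""
    return max(s * x + t for s, t in pieces)

def dc_piecewise_linear(x, plus_pieces, minus_pieces):
    """DC function: difference of two convex max-of-affine components."""
    return max_affine(x, plus_pieces) - max_affine(x, minus_pieces)

# Illustrative coefficients: f(x) = |x| - max(0, x - 1)
plus_pieces = [(1.0, 0.0), (-1.0, 0.0)]   # |x|
minus_pieces = [(0.0, 0.0), (1.0, -1.0)]  # max(0, x - 1)

values = [dc_piecewise_linear(x, plus_pieces, minus_pieces) for x in (-2.0, 0.0, 2.0)]
```

The resulting f is continuous and piecewise linear but neither convex nor concave, which is exactly the class of models the regression algorithm above fits.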

A sharp augmented Lagrangian-based method in constrained non-convex optimization

- Bagirov, Adil, Ozturk, Gurkan, Kasimbeyli, Refail

**Authors:** Bagirov, Adil; Ozturk, Gurkan; Kasimbeyli, Refail
**Date:** 2019
**Type:** Text, Journal article
**Relation:** Optimization Methods and Software Vol. 34, no. 3 (2019), p. 462-488
**Full Text:** false
**Description:** In this paper, a novel sharp augmented Lagrangian-based global optimization method is developed for solving constrained non-convex optimization problems. The algorithm consists of outer and inner loops. At each inner iteration, the discrete gradient method is applied to minimize the sharp augmented Lagrangian function. Depending on the solution found, the algorithm stops or updates the dual variables in the inner loop, or updates the upper or lower bounds by going to the outer loop. Convergence results for the proposed method are presented. The performance of the method is demonstrated using a wide range of nonlinear smooth and non-smooth constrained optimization test problems from the literature.
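
For intuition only: a sharp augmented Lagrangian penalizes the constraint norm itself rather than its square, so the penalty is nonsmooth ("sharp") at feasible points. The sketch below uses the generic form f(x) + c·||h(x)|| + λᵀh(x) for equality constraints h(x) = 0; this generic form is an assumption for illustration, not the paper's exact construction:

```python
import math

def sharp_augmented_lagrangian(f, h, x, lam, c):
    """Generic sharp augmented Lagrangian for min f(x) s.t. h(x) = 0:
    f(x) + c * ||h(x)|| + <lam, h(x)>.  The norm term is not squared,
    which makes the penalty nonsmooth at feasible points."""
    hx = h(x)
    norm_h = math.sqrt(sum(v * v for v in hx))
    return f(x) + c * norm_h + sum(l * v for l, v in zip(lam, hx))

# Toy problem: min x0^2 + x1^2  s.t.  x0 + x1 - 2 = 0
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: [x[0] + x[1] - 2.0]

# At the feasible point (1, 1) the penalty vanishes for any c and lam.
val = sharp_augmented_lagrangian(f, h, [1.0, 1.0], [0.5], c=10.0)
```

Because the penalty is nonsmooth, minimizing it calls for a nonsmooth solver, which is why the inner loop above uses the discrete gradient method.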

Minimizing nonsmooth DC functions via successive DC piecewise-affine approximations

- Gaudioso, Manlio, Giallombardo, Giovanni, Miglionico, Giovanna, Bagirov, Adil

**Authors:** Gaudioso, Manlio; Giallombardo, Giovanni; Miglionico, Giovanna; Bagirov, Adil
**Date:** 2018
**Type:** Text, Journal article
**Relation:** Journal of Global Optimization Vol. 71, no. 1 (2018), p. 37-55
**Full Text:** false
**Description:** We introduce a proximal bundle method for the numerical minimization of a nonsmooth difference-of-convex (DC) function. Exploiting some classic ideas coming from cutting-plane approaches for the convex case, we iteratively build two separate piecewise-affine approximations of the component functions, grouping the corresponding information in two separate bundles. In the bundle of the first component, only information related to points close to the current iterate is maintained, while the second bundle only refers to a global model of the corresponding component function. We combine the two convex piecewise-affine approximations, and generate a DC piecewise-affine model, which can also be seen as the pointwise maximum of several concave piecewise-affine functions. Such a nonconvex model is locally approximated by means of an auxiliary quadratic program, whose solution is used to certify approximate criticality or to generate a descent search direction, along with a predicted reduction, that is next explored in a line-search setting. To improve the approximation properties at points that are far from the current iterate, a supplementary quadratic program is also introduced to generate an alternative, more promising search direction. We discuss the main convergence issues of the line-search based proximal bundle method, and provide computational results on a set of academic benchmark test problems. © 2017, Springer Science+Business Media, LLC.

- Bagirov, Adil, Ugon, Julien

**Authors:** Bagirov, Adil; Ugon, Julien
**Date:** 2018
**Type:** Text, Journal article
**Relation:** Optimization Methods and Software Vol. 33, no. 1 (2018), p. 194-219
**Relation:** http://purl.org/au-research/grants/arc/DP140103213
**Full Text:** false
**Description:** The clusterwise linear regression problem is formulated as a nonsmooth nonconvex optimization problem using the squared regression error function. The objective function in this problem is represented as a difference of convex functions. Optimality conditions are derived, and an algorithm is designed based on such a representation. An incremental approach is proposed to generate starting solutions. The algorithm is tested on small to large data sets. © 2017 Informa UK Limited, trading as Taylor & Francis Group.

A proximal bundle method for nonsmooth DC optimization utilizing nonconvex cutting planes

- Joki, Kaisa, Bagirov, Adil, Karmitsa, Napsu, Makela, Marko

**Authors:** Joki, Kaisa; Bagirov, Adil; Karmitsa, Napsu; Makela, Marko
**Date:** 2017
**Type:** Text, Journal article
**Relation:** Journal of Global Optimization Vol. 68, no. 3 (2017), p. 501-535
**Relation:** http://purl.org/au-research/grants/arc/DP140103213
**Full Text:** false
**Description:** In this paper, we develop a version of the bundle method to solve unconstrained difference of convex (DC) programming problems. It is assumed that a DC representation of the objective function is available. Our main idea is to utilize subgradients of both the first and second components in the DC representation. This subgradient information is gathered from some neighborhood of the current iteration point and is used to build separately an approximation for each component in the DC representation. By combining these approximations we obtain a new nonconvex cutting plane model of the original objective function, which takes into account explicitly both the convex and the concave behavior of the objective function. We design the proximal bundle method for DC programming based on this new approach and prove the convergence of the method to an ε-critical point. The algorithm is tested using some academic test problems, and the preliminary numerical results show the good performance of the new bundle method. An interesting fact is that the new algorithm nearly always finds the global solution in our test problems.
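
The cutting-plane machinery this method builds on can be sketched independently: given bundle points x_k with values f(x_k) and subgradients g_k, the convex piecewise-affine model is the pointwise maximum of the linearizations. A minimal illustration on f(x) = |x| (this is the classic convex cutting-plane model, not the paper's nonconvex DC model):

```python
def cutting_plane_model(bundle, x):
    """Convex piecewise-affine model built from a bundle of (x_k, f(x_k), g_k):
    m(x) = max_k [ f(x_k) + g_k * (x - x_k) ]."""
    return max(fk + gk * (x - xk) for xk, fk, gk in bundle)

# Bundle for f(x) = |x|, with subgradient sign(x) at x = -1 and x = 1.
bundle = [(-1.0, 1.0, -1.0), (1.0, 1.0, 1.0)]

# The model underestimates the convex function and is exact at bundle points.
m0 = cutting_plane_model(bundle, 0.0)   # model value at x = 0 (true value 1 is overestimated down to 0)
m1 = cutting_plane_model(bundle, 1.0)   # exact at a bundle point
```

The paper's contribution is to build one such model per DC component and combine them, so the resulting model captures concave as well as convex behavior.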

A unifying approach to robust convex infinite optimization duality

- Dinh, Nguyen, Goberna, Miguel, López, Marco, Volle, Michel

**Authors:** Dinh, Nguyen; Goberna, Miguel; López, Marco; Volle, Michel
**Date:** 2017
**Type:** Text, Journal article
**Relation:** Journal of Optimization Theory and Applications Vol. 174, no. 3 (2017), p. 650-685
**Relation:** http://purl.org/au-research/grants/arc/DP160100854
**Full Text:** false
**Description:** This paper considers an uncertain convex optimization problem, posed in a locally convex decision space with an arbitrary number of uncertain constraints. To this problem, where the uncertainty only affects the constraints, we associate a robust (pessimistic) counterpart and several dual problems. The paper provides corresponding dual variational principles for the robust counterpart in terms of the closed convexity of different associated cones.

Double well potential function and its optimization in the n-dimensional real space - Part I

- Fang, Shucherng, Gao, David, Lin, Gang-Xuan, Sheu, Ruey-Lin, Xing, Wenxun

**Authors:** Fang, Shucherng; Gao, David; Lin, Gang-Xuan; Sheu, Ruey-Lin; Xing, Wenxun
**Date:** 2017
**Type:** Text, Journal article
**Relation:** Journal of Industrial and Management Optimization Vol. 13, no. 3 (2017), p. 1291-1305
**Full Text:** false
**Description:** A special type of multi-variate polynomial of degree 4, called the double well potential function, is studied. It is derived from a discrete approximation of the generalized Ginzburg-Landau functional, and we are interested in understanding its global minimum solution and all local non-global points. The main difficulty for the model is due to its non-convexity. In Part I of the paper, we first characterize the global minimum solution set, whereas the study of local non-global optimal solutions is left for Part II. We show that the dual of the Lagrange dual of the double well potential problem is a linearly constrained convex minimization problem, which, under a designated nonlinear transformation, can be equivalently mapped to a portion of the original double well potential function containing the global minimum. In other words, solving for the global minimum of the double well potential function is essentially a convex minimization problem, despite its non-convex nature. Numerical examples are provided to illustrate the important features of the problem and the mapping in between.

- Sukhorukova, Nadezda, Ugon, Julien

**Authors:** Sukhorukova, Nadezda; Ugon, Julien
**Date:** 2016
**Type:** Text, Journal article
**Relation:** Journal of Optimization Theory and Applications Vol. 171, no. 2 (2016), p. 536-549
**Full Text:** false
**Description:** In this paper, we derive conditions for best uniform approximation by fixed knots polynomial splines with weighting functions. The theory of Chebyshev approximation for fixed knots polynomial functions is very elegant and complete. Necessary and sufficient optimality conditions have been developed, leading to efficient algorithms for constructing optimal spline approximations. The optimality conditions are based on the notion of alternance (maximal deviation points with alternating deviation signs). In this paper, we extend these results to the case when the model function is a product of fixed knots polynomial splines (whose parameters are subject to optimization) and other functions (whose parameters are predefined). This problem is nonsmooth, and therefore, we make use of convex and nonsmooth analysis to solve it.

- Adly, Samir, Hantoute, Abderrahim, Thera, Michel

**Authors:** Adly, Samir; Hantoute, Abderrahim; Thera, Michel
**Date:** 2016
**Type:** Text, Journal article
**Relation:** Mathematical Programming Vol. 157, no. 2 (2016), p. 349-374
**Full Text:** false
**Description:** The general theory of Lyapunov stability of first-order differential inclusions in Hilbert spaces has been studied by the authors in the previous paper (Adly et al. in Nonlinear Anal 75(3): 985–1008, 2012). This new contribution focuses on the case when the interior of the domain of the maximally monotone operator governing the given differential inclusion is nonempty; this includes in a natural way the finite-dimensional case. The current setting leads to simplified, more explicit criteria and permits some flexibility in the choice of the generalized subdifferentials. Some consequences of the viability of closed sets are given. Our analysis makes use of standard tools from convex and variational analysis. © 2015, Springer-Verlag Berlin Heidelberg and Mathematical Optimization Society.

On computation of generalized derivatives of the normal-cone mapping and their applications

- Gfrerer, Helmut, Outrata, Jiri

**Authors:** Gfrerer, Helmut; Outrata, Jiri
**Date:** 2016
**Type:** Text, Journal article
**Relation:** Mathematics of Operations Research Vol. 41, no. 4 (2016), p. 1535-1556
**Full Text:** false
**Description:** The paper concerns the computation of the graphical derivative and the regular (Fréchet) coderivative of the normal-cone mapping related to C² inequality constraints under very weak qualification conditions. This enables us to provide the graphical derivative and the regular coderivative of the solution map to a class of parameterized generalized equations with the constraint set of the investigated type. On the basis of these results, we finally obtain a characterization of the isolated calmness property of the mentioned solution map and derive strong stationarity conditions for an MPEC with control constraints. © 2016 INFORMS.

- Gfrerer, Helmut, Outrata, Jiri

**Authors:** Gfrerer, Helmut; Outrata, Jiri
**Date:** 2016
**Type:** Text, Journal article
**Relation:** Optimization Vol. 65, no. 4 (2016), p. 671-700
**Relation:** http://purl.org/au-research/grants/arc/DP110102011
**Full Text:** false
**Description:** The paper concerns the computation of the limiting coderivative of the normal-cone mapping related to inequality constraints under weak qualification conditions. The obtained results are applied to verify the Aubin property of solution maps to a class of parameterized generalized equations.

- Zhou, Xiaojun, Dong, Tianxue, Tang, Xiaolin, Yang, Chunhua, Gui, Weihua

**Authors:** Zhou, Xiaojun; Dong, Tianxue; Tang, Xiaolin; Yang, Chunhua; Gui, Weihua
**Date:** 2015
**Type:** Text, Journal article
**Relation:** Optimal Control Applications and Methods Vol. 36, no. 6 (2015), p. 844-852
**Full Text:** false
**Description:** In this study, the guaranteed cost control of a discrete time uncertain system with both state and input delays is considered. Sufficient conditions for the existence of a memoryless state feedback guaranteed cost control law are given in bilinear matrix inequality form, which needs many fewer auxiliary matrix variables and much less storage space. Furthermore, the design of the guaranteed cost controller is reformulated as an optimization problem with a linear objective function and bilinear and linear matrix inequality constraints. A nonlinear semi-definite optimization solver, PENLAB, is used as the solution technique. A numerical example is given to demonstrate the effectiveness of the proposed method. Copyright © 2014 John Wiley & Sons, Ltd.

A heuristic algorithm for solving the minimum sum-of-squares clustering problems

- Ordin, Burak, Bagirov, Adil

**Authors:** Ordin, Burak; Bagirov, Adil
**Date:** 2015
**Type:** Text, Journal article
**Relation:** Journal of Global Optimization Vol. 61, no. 2 (2015), p. 341-361
**Relation:** http://purl.org/au-research/grants/arc/DP140103213
**Full Text:** false
**Description:** Clustering is an important task in data mining. It can be formulated as a global optimization problem which is challenging for existing global optimization techniques, even in medium size data sets. Various heuristics have been developed to solve the clustering problem. The global k-means and modified global k-means are among the most efficient heuristics for solving the minimum sum-of-squares clustering problem. However, these algorithms are not always accurate in finding global or near global solutions to the clustering problem. In this paper, we introduce a new algorithm to improve the accuracy of the modified global k-means algorithm in finding global solutions. We use an auxiliary cluster problem to generate a set of initial points and apply the k-means algorithm starting from these points to find the global solution to the clustering problems. Numerical results on 16 real-world data sets clearly demonstrate the superiority of the proposed algorithm over the global and modified global k-means algorithms in finding global solutions to clustering problems.
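
The minimum sum-of-squares objective the heuristics above try to globally minimize, together with one Lloyd-style k-means pass, can be sketched as follows (illustrative 1-D data, not the paper's algorithm):

```python
def assign(points, centers):
    """Assign each point to its nearest center (squared distance)."""
    return [min(range(len(centers)), key=lambda j: (p - centers[j]) ** 2)
            for p in points]

def update(points, labels, k):
    """Recompute each center as the mean of its assigned points."""
    return [sum(p for p, l in zip(points, labels) if l == j) /
            max(1, sum(1 for l in labels if l == j)) for j in range(k)]

def ssq(points, centers, labels):
    """Minimum sum-of-squares clustering objective."""
    return sum((p - centers[l]) ** 2 for p, l in zip(points, labels))

points = [0.0, 0.2, 0.4, 9.6, 9.8, 10.0]   # two well-separated groups
centers = [0.0, 5.0]                        # deliberately poor start
labels = assign(points, centers)
centers = update(points, labels, 2)
labels = assign(points, centers)
obj = ssq(points, centers, labels)
```

Each pass only decreases the objective, so the final result depends entirely on the starting centers; the incremental initialization strategies studied in these papers exist precisely to supply good starting points.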

An incremental clustering algorithm based on hyperbolic smoothing

- Bagirov, Adil, Ordin, Burak, Ozturk, Gurkan, Xavier, Adilson

**Authors:** Bagirov, Adil; Ordin, Burak; Ozturk, Gurkan; Xavier, Adilson
**Date:** 2015
**Type:** Text, Journal article
**Relation:** Computational Optimization and Applications Vol. 61, no. 1 (2015), p. 219-241
**Relation:** http://purl.org/au-research/grants/arc/DP140103213
**Full Text:** false
**Description:** Clustering is an important problem in data mining. It can be formulated as a nonsmooth, nonconvex optimization problem. This problem is challenging for most global optimization techniques, even in medium size data sets. In this paper, we propose an approach that allows one to apply local methods of smooth optimization to solve the clustering problems. We apply an incremental approach to generate starting points for cluster centers, which enables us to deal with the nonconvexity of the problem. The hyperbolic smoothing technique is applied to handle the nonsmoothness of the clustering problems and to make it possible to apply smooth optimization algorithms to solve them. Results of numerical experiments with eleven real-world data sets and the comparison with state-of-the-art incremental clustering algorithms demonstrate that the smooth optimization algorithms in combination with the incremental approach are a powerful alternative to existing clustering algorithms.
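
The smoothing idea can be seen on the simplest nonsmooth function: replace |t| by the smooth sqrt(t² + τ²), whose error is at most τ; nonsmooth distance terms in the clustering objective are smoothed the same way so that gradient-based solvers apply. A generic sketch (illustrative, not the paper's exact smoothing formulas):

```python
import math

def hyperbolic_smooth_abs(t, tau):
    """Smooth approximation of |t|: sqrt(t^2 + tau^2).
    Differentiable everywhere; converges to |t| as tau -> 0."""
    return math.sqrt(t * t + tau * tau)

tau = 1e-3
# The error is largest at t = 0, where it equals tau exactly.
err_at_zero = hyperbolic_smooth_abs(0.0, tau) - abs(0.0)
# Away from zero the approximation is much tighter.
err_away = hyperbolic_smooth_abs(2.0, tau) - abs(2.0)
```

Driving τ toward zero over successive outer iterations recovers the original nonsmooth problem while each subproblem stays smooth.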

Global optimality conditions and optimization methods for polynomial programming problems

- Wu, Zhiyou, Tian, Jing, Ugon, Julien

**Authors:** Wu, Zhiyou; Tian, Jing; Ugon, Julien
**Date:** 2015
**Type:** Text, Journal article
**Relation:** Journal of Global Optimization Vol. 62, no. 4 (2015), p. 617-641
**Full Text:** false
**Description:** This paper is concerned with the general polynomial programming problem with box constraints, including global optimality conditions and optimization methods. First, a necessary global optimality condition for a general polynomial programming problem with box constraints is given. Then we design a local optimization method by using the necessary global optimality condition to obtain some strongly or ε-strongly local minimizers which substantially improve some KKT points. Finally, a global optimization method, combining the new local optimization method and an auxiliary function, is designed. Numerical examples show that our methods are efficient and stable.

Global solutions to fractional programming problem with ratio of nonconvex functions

- Ruan, Ning, Gao, David

**Authors:** Ruan, Ning; Gao, David
**Date:** 2015
**Type:** Text, Journal article
**Relation:** Applied Mathematics and Computation Vol. 255 (2015), p. 66-72
**Full Text:** false
**Description:** This paper presents a canonical dual approach for minimizing a sum of a quadratic function and a ratio of nonconvex functions in R^n. By introducing a parameter, the problem is first equivalently reformulated as a nonconvex polynomial minimization with an elliptic constraint. It is proved that under certain conditions, the canonical dual is a concave maximization problem in R^2 that exhibits no duality gap. Therefore, the global optimal solution of the primal problem can be obtained by solving the canonical dual problem. © 2014 Elsevier Inc. All rights reserved.

Nonsmooth optimization algorithm for solving clusterwise linear regression problems

- Bagirov, Adil, Ugon, Julien, Mirzayeva, Hijran

**Authors:** Bagirov, Adil; Ugon, Julien; Mirzayeva, Hijran
**Date:** 2015
**Type:** Text, Journal article
**Relation:** Journal of Optimization Theory and Applications Vol. 164, no. 3 (2015), p. 755-780
**Relation:** http://purl.org/au-research/grants/arc/DP140103213
**Full Text:** false
**Description:** Clusterwise linear regression consists of finding a number of linear regression functions, each approximating a subset of the data. In this paper, the clusterwise linear regression problem is formulated as a nonsmooth nonconvex optimization problem, and an algorithm based on an incremental approach and on the discrete gradient method of nonsmooth optimization is designed to solve it. This algorithm incrementally divides the whole dataset into groups which can be easily approximated by one linear regression function. A special procedure is introduced to generate good starting points for solving global optimization problems at each iteration of the incremental algorithm. The algorithm is compared with the multi-start Späth algorithm and the incremental algorithm on several publicly available datasets for regression analysis.
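
The clusterwise regression objective charges each data point the squared error of whichever of the k linear models fits it best; the min over models is what makes the problem nonsmooth and nonconvex. A sketch of evaluating that objective (illustrative data and models, not the paper's incremental algorithm):

```python
def clusterwise_error(data, models):
    """Sum over points of the squared error of the best-fitting linear model.
    data: list of (x, y) pairs; models: list of (slope, intercept) pairs.
    The inner min over models makes this objective nonsmooth and nonconvex."""
    total = 0.0
    for x, y in data:
        total += min((y - (a * x + b)) ** 2 for a, b in models)
    return total

# Two illustrative clusters generated by y = x and y = -x + 4.
data = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 1.0), (4.0, 0.0)]
models = [(1.0, 0.0), (-1.0, 4.0)]   # exactly the two generating lines

# Every point is fit perfectly by one of the two models.
err = clusterwise_error(data, models)
```

A single linear model cannot drive this error to zero on such data, which is the motivation for fitting several regression functions at once.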

On Hölder continuity of solution maps of parametric primal and dual Ky Fan inequalities

- Anh, Lam Quoc, Khanh, Phan Quoc, Tam, T. N.

**Authors:** Anh, Lam Quoc; Khanh, Phan Quoc; Tam, T. N.
**Date:** 2015
**Type:** Text, Journal article
**Relation:** Top Vol. 23, no. 1 (2015), p. 151-167
**Full Text:** false
**Description:** We consider parametric primal and dual Ky Fan inequalities in metric linear spaces. Sufficient conditions for Hölder continuity of solutions are established. Many examples are provided to illustrate the essential role of the imposed assumptions and the advantages of the results over existing ones. As applications, we derive the Hölder continuity of solutions for constrained minimization and variational inequalities.
