A difference of convex optimization algorithm for piecewise linear regression

- Bagirov, Adil, Taheri, Sona, Asadi, Soodabeh

**Authors:**Bagirov, Adil , Taheri, Sona , Asadi, Soodabeh**Date:**2019**Type:**Text , Journal article**Relation:**Journal of Industrial and Management Optimization Vol. 15, no. 2 (2019), p. 909-932**Full Text:**false**Reviewed:****Description:**The problem of finding a continuous piecewise linear function approximating a regression function is considered. This problem is formulated as a nonconvex nonsmooth optimization problem where the objective function is represented as a difference of convex (DC) functions. Subdifferentials of the DC components are computed and an algorithm is designed based on these subdifferentials to find piecewise linear functions. The algorithm is tested using synthetic and real-world data sets and compared with other regression algorithms.
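
A useful background fact for this formulation (stated here as context, not taken from the abstract): every continuous piecewise linear function admits a DC representation as the difference of two convex piecewise linear functions, each a pointwise maximum of finitely many affine functions:

```latex
f(x) \;=\; \max_{i \in I}\,\bigl(a_i^{\top} x + b_i\bigr) \;-\; \max_{j \in J}\,\bigl(c_j^{\top} x + d_j\bigr).
```

Both maxima are convex, so the subdifferentials of the two components are available in closed form (convex hulls of the gradients of the active affine pieces), which is what a subdifferential-based algorithm of this kind can exploit.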

A sharp augmented Lagrangian-based method in constrained non-convex optimization

- Bagirov, Adil, Ozturk, Gurkan, Kasimbeyli, Refail

**Authors:**Bagirov, Adil , Ozturk, Gurkan , Kasimbeyli, Refail**Date:**2019**Type:**Text , Journal article**Relation:**Optimization Methods and Software Vol. 34, no. 3 (2019), p. 462-488**Full Text:**false**Reviewed:****Description:**In this paper, a novel sharp augmented Lagrangian-based global optimization method is developed for solving constrained non-convex optimization problems. The algorithm consists of outer and inner loops. At each inner iteration, the discrete gradient method is applied to minimize the sharp augmented Lagrangian function. Depending on the solution found, the algorithm stops or updates the dual variables in the inner loop, or updates the upper or lower bounds by going to the outer loop. Convergence results for the proposed method are presented. The performance of the method is demonstrated using a wide range of nonlinear smooth and nonsmooth constrained optimization test problems from the literature.
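
As a rough orientation (the paper's exact definition may differ in signs and in how inequality constraints are handled), sharp augmented Lagrangians replace the squared-norm augmenting term of the classical method with a norm term; for a problem min f(x) subject to g(x) = 0, one common form is

```latex
L(x, u, c) \;=\; f(x) \;+\; c\,\lVert g(x) \rVert \;-\; \langle u,\, g(x) \rangle .
```

The nonsmoothness of the norm term is one reason a nonsmooth solver such as the discrete gradient method is used for the inner minimization.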

Double bundle method for finding Clarke stationary points in nonsmooth DC programming

- Joki, Kaisa, Bagirov, Adil, Karmitsa, Napsu, Makela, Marko, Taheri, Sona

**Authors:**Joki, Kaisa , Bagirov, Adil , Karmitsa, Napsu , Makela, Marko , Taheri, Sona**Date:**2018**Type:**Text , Journal article**Relation:**SIAM Journal on Optimization Vol. 28, no. 2 (2018), p. 1892-1919**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:****Reviewed:****Description:**The aim of this paper is to introduce a new proximal double bundle method for unconstrained nonsmooth optimization, where the objective function is presented as a difference of two convex (DC) functions. The novelty in our method is a new escape procedure which enables us to guarantee approximate Clarke stationarity for solutions by utilizing the DC components of the objective function. This optimality condition is stronger than the criticality condition typically used in DC programming. Moreover, if a candidate solution is not approximate Clarke stationary, then the escape procedure returns a descent direction. With this escape procedure, we can avoid some shortcomings encountered when criticality is used. The finite termination of the double bundle method to an approximate Clarke stationary point is proved by assuming that the subdifferentials of DC components are polytopes. Finally, some encouraging numerical results are presented.
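
The distinction the paper draws can be stated compactly. For f = g - h with g, h convex, a point x* is critical if the subdifferentials of the components intersect, while Clarke stationarity requires zero to lie in the Clarke subdifferential of f itself:

```latex
\partial g(x^{*}) \cap \partial h(x^{*}) \neq \emptyset \quad \text{(criticality)}, \qquad 0 \in \partial_{C} f(x^{*}) \quad \text{(Clarke stationarity)}.
```

Since the inclusion ∂_C f(x) ⊆ ∂g(x) − ∂h(x) holds for DC functions, every Clarke stationary point is critical but not conversely; this is the sense in which the double bundle method's stopping condition is stronger.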

Minimizing nonsmooth DC functions via successive DC piecewise-affine approximations

- Gaudioso, Manlio, Giallombardo, Giovanni, Miglionico, Giovanna, Bagirov, Adil

**Authors:**Gaudioso, Manlio , Giallombardo, Giovanni , Miglionico, Giovanna , Bagirov, Adil**Date:**2018**Type:**Text , Journal article**Relation:**Journal of Global Optimization Vol. 71, no. 1 (2018), p. 37-55**Full Text:**false**Reviewed:****Description:**We introduce a proximal bundle method for the numerical minimization of a nonsmooth difference-of-convex (DC) function. Exploiting some classic ideas coming from cutting-plane approaches for the convex case, we iteratively build two separate piecewise-affine approximations of the component functions, grouping the corresponding information in two separate bundles. In the bundle of the first component, only information related to points close to the current iterate is maintained, while the second bundle only refers to a global model of the corresponding component function. We combine the two convex piecewise-affine approximations, and generate a DC piecewise-affine model, which can also be seen as the pointwise maximum of several concave piecewise-affine functions. Such a nonconvex model is locally approximated by means of an auxiliary quadratic program, whose solution is used to certify approximate criticality or to generate a descent search-direction, along with a predicted reduction, that is next explored in a line-search setting. To improve the approximation properties at points that are far from the current iterate, a supplementary quadratic program is also introduced to generate an alternative, more promising search-direction. We discuss the main convergence issues of the line-search based proximal bundle method, and provide computational results on a set of academic benchmark test problems. © 2017, Springer Science+Business Media, LLC.
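
The following sketch illustrates the two-bundle idea in miniature (the function names and toy data are ours, not the authors'): each convex component is modeled by the pointwise maximum of its collected linearizations, and the DC model is their difference.

```python
import numpy as np

def pwa_model(bundle, x):
    """Evaluate the convex cutting-plane model at x: the pointwise max
    of the linearizations f(y) + <s, x - y> stored in the bundle as
    triples (y, f(y), s), where s is a subgradient at y."""
    return max(fy + s @ (x - y) for y, fy, s in bundle)

def dc_model(bundle1, bundle2, x):
    """Nonconvex DC piecewise-affine model: the difference of the two
    convex cutting-plane models of the DC components."""
    return pwa_model(bundle1, x) - pwa_model(bundle2, x)

# Toy DC function f(x) = |x| - max(x, 0), with bundles built at a few points.
pts = (-1.0, 0.5, 2.0)
g1 = [(np.array([y]), abs(y), np.array([1.0 if y >= 0 else -1.0])) for y in pts]
g2 = [(np.array([y]), max(y, 0.0), np.array([1.0 if y > 0 else 0.0])) for y in pts]
print(dc_model(g1, g2, np.array([0.3])))  # exact here: |0.3| - max(0.3, 0) = 0
```

In the paper, only the first bundle is restricted to points near the current iterate; the second keeps a global model of its component.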

- Bagirov, Adil, Ugon, Julien

**Authors:**Bagirov, Adil , Ugon, Julien**Date:**2018**Type:**Text , Journal article**Relation:**Optimization Methods and Software Vol. 33, no. 1 (2018), p. 194-219**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:**false**Reviewed:****Description:**The clusterwise linear regression problem is formulated as a nonsmooth nonconvex optimization problem using the squared regression error function. The objective function in this problem is represented as a difference of convex functions. Optimality conditions are derived, and an algorithm is designed based on such a representation. An incremental approach is proposed to generate starting solutions. The algorithm is tested on small to large data sets. © 2017 Informa UK Limited, trading as Taylor & Francis Group.
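
The DC structure here comes from a standard identity (shown for one data point; the full objective sums it over all points): a minimum of finitely many convex functions u_1, ..., u_K, such as the squared errors of the K linear regression functions, is a difference of two convex functions,

```latex
\min_{1 \le k \le K} u_k(x) \;=\; \sum_{k=1}^{K} u_k(x) \;-\; \max_{1 \le k \le K} \sum_{l \neq k} u_l(x).
```

The first term is a sum of convex functions and the second a maximum of sums of convex functions, so both are convex. The paper's precise decomposition may differ in details, but it is of this type.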

A proximal bundle method for nonsmooth DC optimization utilizing nonconvex cutting planes

- Joki, Kaisa, Bagirov, Adil, Karmitsa, Napsu, Makela, Marko

**Authors:**Joki, Kaisa , Bagirov, Adil , Karmitsa, Napsu , Makela, Marko**Date:**2017**Type:**Text , Journal article**Relation:**Journal of Global Optimization Vol. 68, no. 3 (2017), p. 501-535**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:**false**Reviewed:****Description:**In this paper, we develop a version of the bundle method to solve unconstrained difference of convex (DC) programming problems. It is assumed that a DC representation of the objective function is available. Our main idea is to utilize subgradients of both the first and second components in the DC representation. This subgradient information is gathered from some neighborhood of the current iteration point, and it is used to build separately an approximation for each component in the DC representation. By combining these approximations, we obtain a new nonconvex cutting plane model of the original objective function, which takes into account explicitly both the convex and the concave behavior of the objective function. We design the proximal bundle method for DC programming based on this new approach and prove the convergence of the method to an ε-critical point. The algorithm is tested using some academic test problems, and the preliminary numerical results show the good performance of the new bundle method. An interesting fact is that the new algorithm nearly always finds the global solution in our test problems.

DC programming algorithm for clusterwise linear L1 regression

- Bagirov, Adil, Taheri, Sona

**Authors:**Bagirov, Adil , Taheri, Sona**Date:**2017**Type:**Text , Journal article**Relation:**Journal of the Operations Research Society of China Vol. 5, no. 2 (2017), p. 233-256**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:**false**Reviewed:****Description:**The aim of this paper is to develop an algorithm for solving the clusterwise linear least absolute deviations regression problem. This problem is formulated as a nonsmooth nonconvex optimization problem, and the objective function is represented as a difference of convex functions. Optimality conditions are derived by using this representation. An algorithm is designed based on the difference of convex representation and an incremental approach. The proposed algorithm is tested using small to large artificial and real-world data sets. © 2017, Operations Research Society of China, Periodicals Agency of Shanghai University, Science Press, and Springer-Verlag Berlin Heidelberg.

A heuristic algorithm for solving the minimum sum-of-squares clustering problems

- Ordin, Burak, Bagirov, Adil

**Authors:**Ordin, Burak , Bagirov, Adil**Date:**2015**Type:**Text , Journal article**Relation:**Journal of Global Optimization Vol. 61, no. 2 (2015), p. 341-361**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:**false**Reviewed:****Description:**Clustering is an important task in data mining. It can be formulated as a global optimization problem which is challenging for existing global optimization techniques even in medium-size data sets. Various heuristics have been developed to solve the clustering problem. The global k-means and modified global k-means are among the most efficient heuristics for solving the minimum sum-of-squares clustering problem. However, these algorithms are not always accurate in finding global or near global solutions to the clustering problem. In this paper, we introduce a new algorithm to improve the accuracy of the modified global k-means algorithm in finding global solutions. We use an auxiliary cluster problem to generate a set of initial points and apply the k-means algorithm starting from these points to find the global solution to the clustering problem. Numerical results on 16 real-world data sets clearly demonstrate the superiority of the proposed algorithm over the global and modified global k-means algorithms in finding global solutions to clustering problems.
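
A minimal sketch of the incremental idea (ours, with a deliberately simplified auxiliary step; the paper's auxiliary cluster problem is more refined): grow the center set one center at a time, choose a promising starting point for the new center, then refine all centers with k-means.

```python
import numpy as np

def kmeans(X, centers, iters=100):
    """Plain Lloyd's k-means refinement from given starting centers."""
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if (labels == j).any() else centers[j]
                        for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return centers

def incremental_kmeans(X, k):
    """Incremental scheme: add one center at a time. The auxiliary step
    here simply picks the data point that most reduces the sum of squared
    distances, a crude stand-in for the paper's auxiliary cluster problem."""
    centers = X.mean(0, keepdims=True)  # 1-cluster solution
    for _ in range(2, k + 1):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1).min(1)  # dist to nearest center
        gains = [(np.minimum(d, ((X - c) ** 2).sum(1)).sum(), c) for c in X]
        best = min(gains, key=lambda t: t[0])[1]  # best starting point for the new center
        centers = kmeans(X, np.vstack([centers, best]))
    return centers
```

The key design point shared with the paper is that each k-cluster problem starts from a good initial point built out of the (k-1)-cluster solution, rather than from a random guess.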

An incremental clustering algorithm based on hyperbolic smoothing

- Bagirov, Adil, Ordin, Burak, Ozturk, Gurkan, Xavier, Adilson

**Authors:**Bagirov, Adil , Ordin, Burak , Ozturk, Gurkan , Xavier, Adilson**Date:**2015**Type:**Text , Journal article**Relation:**Computational Optimization and Applications Vol. 61, no. 1 (2015), p. 219-241**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:**false**Reviewed:****Description:**Clustering is an important problem in data mining. It can be formulated as a nonsmooth, nonconvex optimization problem. For most global optimization techniques, this problem is challenging even in medium-size data sets. In this paper, we propose an approach that allows one to apply local methods of smooth optimization to solve clustering problems. We apply an incremental approach to generate starting points for cluster centers, which enables us to deal with the nonconvexity of the problem. The hyperbolic smoothing technique is applied to handle the nonsmoothness of the clustering problems and to make it possible to apply smooth optimization algorithms to solve them. Results of numerical experiments with eleven real-world data sets and the comparison with state-of-the-art incremental clustering algorithms demonstrate that smooth optimization algorithms in combination with the incremental approach are a powerful alternative to existing clustering algorithms.
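
The hyperbolic smoothing function referred to here is, in its classical form, a smooth over-approximation of the plus-function max{y, 0}, from which smooth approximations of min-type cluster functions are assembled:

```latex
\varphi_{\tau}(y) \;=\; \frac{y + \sqrt{y^{2} + \tau^{2}}}{2}, \qquad 0 \;\le\; \varphi_{\tau}(y) - \max\{y, 0\} \;\le\; \frac{\tau}{2} \quad (\tau > 0).
```

As τ tends to 0 the approximation tightens uniformly, so a sequence of smooth problems with decreasing τ can be handed to standard smooth optimizers.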

Nonsmooth optimization algorithm for solving clusterwise linear regression problems

- Bagirov, Adil, Ugon, Julien, Mirzayeva, Hijran

**Authors:**Bagirov, Adil , Ugon, Julien , Mirzayeva, Hijran**Date:**2015**Type:**Text , Journal article**Relation:**Journal of Optimization Theory and Applications Vol. 164, no. 3 (2015), p. 755-780**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:**false**Reviewed:****Description:**Clusterwise linear regression consists of finding a number of linear regression functions each approximating a subset of the data. In this paper, the clusterwise linear regression problem is formulated as a nonsmooth nonconvex optimization problem and an algorithm based on an incremental approach and on the discrete gradient method of nonsmooth optimization is designed to solve it. This algorithm incrementally divides the whole dataset into groups which can be easily approximated by one linear regression function. A special procedure is introduced to generate good starting points for solving global optimization problems at each iteration of the incremental algorithm. The algorithm is compared with the multi-start Spath and the incremental algorithms on several publicly available datasets for regression analysis.

Solving DC programs using the cutting angle method

- Ferrer, Albert, Bagirov, Adil, Beliakov, Gleb

**Authors:**Ferrer, Albert , Bagirov, Adil , Beliakov, Gleb**Date:**2015**Type:**Text , Journal article**Relation:**Journal of Global Optimization Vol. 61, no. 1 (2015), p. 71-89**Relation:**http://purl.org/au-research/grants/arc/DP140103213**Full Text:**false**Reviewed:****Description:**In this paper, we propose a new algorithm for global minimization of functions represented as a difference of two convex functions. The proposed method is a derivative-free method, designed by adapting the extended cutting angle method. We present preliminary results of numerical experiments using test problems with difference of convex objective functions and box constraints. We also compare the proposed algorithm with a classical one that uses prismatical subdivisions.

Aggregate codifferential method for nonsmooth DC optimization

- Tor, Ali, Bagirov, Adil, Karasozen, Bulent

**Authors:**Tor, Ali , Bagirov, Adil , Karasozen, Bulent**Date:**2014**Type:**Text , Journal article**Relation:**Journal of Computational and Applied Mathematics Vol. 259, no. Part B (2014), p. 851-867**Full Text:**false**Reviewed:****Description:**A new algorithm is developed based on the concept of codifferential for minimizing the difference of convex nonsmooth functions. Since the computation of the whole codifferential is not always possible, we use a fixed number of elements from the codifferential to compute the search directions. The convergence of the proposed algorithm is proved. The efficiency of the algorithm is demonstrated by comparing it with the subgradient, the truncated codifferential and the proximal bundle methods using nonsmooth optimization test problems.
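
For orientation (stated loosely; the precise definition includes normalization conditions on the scalar components a and b), a function f is codifferentiable at x if there exist compact convex sets in R^{n+1}, the hypodifferential and the hyperdifferential, such that

```latex
f(x+\Delta) \;=\; f(x) \;+\; \max_{(a,v) \in \underline{d}f(x)} \bigl( a + \langle v, \Delta \rangle \bigr) \;+\; \min_{(b,w) \in \overline{d}f(x)} \bigl( b + \langle w, \Delta \rangle \bigr) \;+\; o(\lVert \Delta \rVert).
```

Using only a fixed number of elements from these sets, as the paper does, trades exactness of the local model for computability.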

An algorithm for clusterwise linear regression based on smoothing techniques

- Bagirov, Adil, Ugon, Julien, Mirzayeva, Hijran

**Authors:**Bagirov, Adil , Ugon, Julien , Mirzayeva, Hijran**Date:**2014**Type:**Text , Journal article**Relation:**Optimization Letters Vol. 9, no. 2 (2014), p. 375-390**Full Text:**false**Reviewed:****Description:**We propose an algorithm based on an incremental approach and smoothing techniques to solve clusterwise linear regression (CLR) problems. This algorithm incrementally divides the whole data set into groups which can be easily approximated by one linear regression function. A special procedure is introduced to generate an initial solution for solving global optimization problems at each iteration of the incremental algorithm. Such an approach allows one to find global or approximate global solutions to the CLR problems. The algorithm is tested using several data sets for regression analysis and compared with the multistart and incremental Spath algorithms.

- Bagirov, Adil, Miettinen, Kaisa, Weber, Gerhard-Wilhelm

**Authors:**Bagirov, Adil , Miettinen, Kaisa , Weber, Gerhard-Wilhelm**Date:**2014**Type:**Text , Journal article**Relation:**Journal of Global Optimization Vol. 60, no. 1 (June 2014), p. 1-3**Full Text:**false**Reviewed:**

- Bagirov, Adil, Barton, Andrew, Mala-Jetmarova, Helena, Al Nuaimat, Alia, Ahmed, S. T., Sultanova, Nargiz, Yearwood, John

**Authors:**Bagirov, Adil , Barton, Andrew , Mala-Jetmarova, Helena , Al Nuaimat, Alia , Ahmed, S. T. , Sultanova, Nargiz , Yearwood, John**Date:**2013**Type:**Text , Journal article**Relation:**Mathematical and Computer Modelling Vol. 57, no. 3-4 (2013), p. 873-886**Relation:**http://purl.org/au-research/grants/arc/LP0990908**Full Text:**false**Reviewed:****Description:**The operation of a water distribution system is a complex task which involves scheduling of pumps, regulating water levels of storages, and providing satisfactory water quality to customers at required flow and pressure. Pump scheduling is one of the most important tasks in the operation of a water distribution system, as it represents the major part of its operating costs. In this paper, a novel approach for modelling explicit pump scheduling to minimize energy consumption by pumps is introduced, which uses the pump start/end run times as continuous variables and binary integer variables to describe the pump status at the beginning of the scheduling period. This is different from other approaches, where binary integer variables for each hour are typically used, which is considered very impractical from an operational perspective. The problem is formulated as a mixed integer nonlinear programming problem, and a new algorithm is developed for its solution. This algorithm is based on the combination of the grid search with the Hooke-Jeeves pattern search method. The performance of the algorithm is evaluated using literature test problems applying the hydraulic simulation model EPANet. © 2012 Elsevier Ltd.
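
The continuous part of the search uses Hooke-Jeeves pattern search, which needs no derivatives. A compact, generic sketch of that component follows (this is a textbook version, not the paper's implementation, and the test function is ours):

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Generic Hooke-Jeeves pattern search: exploratory moves along
    coordinate directions, followed by a pattern (acceleration) move;
    the step is shrunk whenever no improvement is found."""
    def explore(base, s):
        x = base.copy()
        for i in range(len(x)):
            for d in (+s, -s):
                trial = x.copy()
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    x = np.asarray(x0, dtype=float)
    while step > tol and max_iter > 0:
        max_iter -= 1
        y = explore(x, step)
        if f(y) < f(x):
            # pattern move: jump further along the improving direction
            z = explore(y + (y - x), step)
            x = z if f(z) < f(y) else y
        else:
            step *= shrink
    return x

# Usage: minimize a smooth 2-D test function from a rough start.
print(hooke_jeeves(lambda v: (v[0] - 1) ** 2 + 10 * (v[1] + 2) ** 2, [5.0, 5.0]))
```

In the paper this inner search runs over the continuous start/end run times, while a grid search handles the binary pump-status variables.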

Hyperbolic smoothing function method for minimax problems

- Bagirov, Adil, Al Nuaimat, Alia, Sultanova, Nargiz

**Authors:**Bagirov, Adil , Al Nuaimat, Alia , Sultanova, Nargiz**Date:**2013**Type:**Text , Journal article**Relation:**Optimization Vol. 62, no. 6 (2013), p. 759-782**Full Text:**false**Reviewed:****Description:**In this article, an approach for solving finite minimax problems is proposed. This approach is based on the use of hyperbolic smoothing functions. In order to apply the hyperbolic smoothing, we reformulate the objective function in the minimax problem and study the relationship between the original minimax and reformulated problems. We also study the main properties of the hyperbolic smoothing function. Based on these results, an algorithm for solving the finite minimax problem is proposed, and this algorithm is implemented in the General Algebraic Modelling System (GAMS). We present preliminary results of numerical experiments with well-known nonsmooth optimization test problems. We also compare the proposed algorithm with the algorithm that uses the exponential smoothing function, as well as with the algorithm based on a nonlinear programming reformulation of the finite minimax problem. © 2013 Copyright Taylor and Francis Group, LLC.
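
One way to see the role of smoothing (our illustration, not necessarily the paper's exact reformulation) uses the exact identity min_x max_i f_i(x) = min_{x,t} [ t + sum_i max{0, f_i(x) - t} ], whose plus-functions are then smoothed hyperbolically so a smooth solver applies:

```python
import numpy as np
from scipy.optimize import minimize

def phi(y, tau):
    """Hyperbolic smoothing of the plus-function max{y, 0}."""
    return 0.5 * (y + np.sqrt(y * y + tau * tau))

def smoothed_minimax(fs, x0, tau=1e-3):
    """Minimize max_i f_i(x) through the exact reformulation
    min_{x,t} t + sum_i max{0, f_i(x) - t}, with each plus-function
    replaced by its hyperbolic smoothing."""
    def F(z):
        x, t = z[:-1], z[-1]
        return t + sum(phi(f(x) - t, tau) for f in fs)
    x0 = np.asarray(x0, dtype=float)
    z0 = np.append(x0, max(f(x0) for f in fs))
    return minimize(F, z0, method="BFGS").x[:-1]

# Usage: f1(x) = (x+1)^2, f2(x) = (x-1)^2; the minimax optimum is x = 0.
fs = [lambda x: (x[0] + 1.0) ** 2, lambda x: (x[0] - 1.0) ** 2]
print(smoothed_minimax(fs, [3.0]))
```

The identity is exact because for fixed x the inner minimum over t is attained at t = max_i f_i(x); smoothing perturbs the optimum by at most a multiple of tau.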

Comparison of metaheuristic algorithms for pump operation optimization

- Bagirov, Adil, Ahmed, S. T., Barton, Andrew, Mala-Jetmarova, Helena, Al Nuaimat, Alia, Sultanova, Nargiz

**Authors:**Bagirov, Adil , Ahmed, S. T. , Barton, Andrew , Mala-Jetmarova, Helena , Al Nuaimat, Alia , Sultanova, Nargiz**Date:**2012**Type:**Text , Conference paper**Relation:**14th Water Distribution Systems Analysis Conference 2012, WDSA 2012 Vol. 2; Adelaide, Australia; 24th-27th September 2012; p. 886-896**Relation:**http://purl.org/au-research/grants/arc/LP0990908**Full Text:**false**Reviewed:****Description:**Pumping cost constitutes the main part of the overall operating cost of water distribution systems. There are different optimization formulations of the pumping cost minimization problem, including those with application of continuous and integer programming approaches. To date, mainly various metaheuristics have been applied to solve this problem. However, a comprehensive comparison of these metaheuristics has not been done. Such a comparison is important to identify the strengths and weaknesses of the different algorithms as reflected in their performance. In this paper, we present a methodology for comparative analysis of widely used metaheuristics for solving the pumping cost minimization problem. This methodology includes the following comparison criteria: (a) the "optimal solution" obtained; (b) the efficiency; and (c) robustness. Algorithms applied are: particle swarm optimization, artificial bee colony and firefly algorithms. These algorithms were applied to one test problem available in the literature. The results obtained demonstrate that the artificial bee colony is the most robust and the firefly is the most efficient and accurate algorithm for this test problem. Funding: ARC

Limited memory discrete gradient bundle method for nonsmooth derivative-free optimization

- Karmitsa, Napsu, Bagirov, Adil

**Authors:**Karmitsa, Napsu , Bagirov, Adil**Date:**2012**Type:**Text , Journal article**Relation:**Optimization Vol. 61, no. 12 (2012), p. 1491-1509**Full Text:**false**Reviewed:****Description:**Typically, practical nonsmooth optimization problems involve functions with hundreds of variables. Moreover, there are many practical problems where the computation of even one subgradient is either a difficult or an impossible task. In such cases derivative-free methods are the better (or only) choice since they do not use explicit computation of subgradients. However, these methods require a large number of function evaluations even for moderately large problems. In this article, we propose an efficient derivative-free limited memory discrete gradient bundle method for nonsmooth, possibly nonconvex optimization. The convergence of the proposed method is proved for locally Lipschitz continuous functions, and the numerical experiments confirm the usability of the method, especially for medium-size and large-scale problems. © 2012 Copyright Taylor and Francis Group, LLC.

Subgradient Method for Nonconvex Nonsmooth Optimization

- Bagirov, Adil, Jin, L., Karmitsa, Napsu, Al Nuaimat, A., Sultanova, Nargiz

**Authors:**Bagirov, Adil , Jin, L. , Karmitsa, Napsu , Al Nuaimat, A. , Sultanova, Nargiz**Date:**2012**Type:**Text , Journal article**Relation:**Journal of Optimization Theory and Applications Vol. 157, no. 2 (2012), p. 416-435**Full Text:**false**Reviewed:****Description:**In this paper, we introduce a new method for solving nonconvex nonsmooth optimization problems. It uses quasisecants, which are subgradients computed in some neighborhood of a point. The proposed method contains simple procedures for finding descent directions and for solving line search subproblems. The convergence of the method is studied. Preliminary numerical experiments, including a comparison of the proposed method with the subgradient and proximal bundle methods, are presented. © 2012 Springer Science+Business Media, LLC.
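
Quasisecants, as used in this line of work, relax the subgradient inequality to a fixed step length; roughly, v is a quasisecant of f at x in the direction g (with unit norm) and length h > 0 if

```latex
f(x + h g) - f(x) \;\le\; h\, \langle v, g \rangle .
```

So if the quasisecant in direction g satisfies ⟨v, g⟩ < 0, then g is a descent direction at step length h, which is what the method's direction-finding procedure exploits.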
