Your selections:

- 0103 Numerical and Computational Mathematics (9)
- Nonsmooth optimization (8)
- 0802 Computation Theory and Mathematics (3)
- DC functions (3)
- 0906 Electrical and Electronic Engineering (2)
- Bundle method (2)
- Canonical duality (2)
- Cutting plane model (2)
- Subdifferential (2)
- Triality theory (2)
- 0101 Pure Mathematics (1)
- 0105 Mathematical Physics (1)
- 0199 Other Mathematical Sciences (1)
- 0801 Artificial Intelligence and Image Processing (1)
- Bundle methods (1)
- Canonical duality theories (1)
- Canonical duality theory (1)
- Clarke stationarity (1)


A difference of convex optimization algorithm for piecewise linear regression

- Bagirov, Adil, Taheri, Sona, Asadi, Soodabeh

**Authors:** Bagirov, Adil; Taheri, Sona; Asadi, Soodabeh
**Date:** 2019
**Type:** Text, Journal article
**Relation:** Journal of Industrial and Management Optimization Vol. 15, no. 2 (2019), p. 909-932
**Full Text:** false
**Description:** The problem of finding a continuous piecewise linear function approximating a regression function is considered. This problem is formulated as a nonconvex nonsmooth optimization problem where the objective function is represented as a difference of convex (DC) functions. Subdifferentials of the DC components are computed and an algorithm is designed based on these subdifferentials to find piecewise linear functions. The algorithm is tested using some synthetic and real-world data sets and compared with other regression algorithms.
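The DC representation used in the paper above can be made concrete. A minimal numpy sketch (the data are illustrative, not the paper's algorithm): a continuous piecewise linear function written as a max-min of affine pieces splits into a difference of two convex piecewise linear functions via the identity min_j l_j = sum_j l_j - max_j sum_{k != j} l_k.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: f(x) = max_i min_j (a_ij . x + b_ij),
# a continuous piecewise linear function (data are illustrative).
A = rng.normal(size=(3, 4, 2))   # a_ij: (groups i, pieces j, dimension)
B = rng.normal(size=(3, 4))      # b_ij

def f(x):
    """Max-min of affine pieces."""
    vals = A @ x + B             # l_ij(x), shape (3, 4)
    return vals.min(axis=1).max()

def dc_components(x):
    """DC split f = f1 - f2 with f1, f2 convex piecewise linear.

    Uses min_j l_j = sum_j l_j - max_j sum_{k != j} l_k per group,
    then lifts the outer max into a difference of two convex maxima.
    """
    vals = A @ x + B                       # l_ij(x)
    S = vals.sum(axis=1)                   # S_i = sum_j l_ij   (affine)
    Bi = (S[:, None] - vals).max(axis=1)   # B_i = max_j sum_{k != j} l_ik (convex)
    f2 = Bi.sum()
    f1 = (S + f2 - Bi).max()               # max_i (S_i + sum_{m != i} B_m)
    return f1, f2

x = rng.normal(size=2)
f1, f2 = dc_components(x)
assert abs((f1 - f2) - f(x)) < 1e-10       # f1 - f2 reproduces f exactly
```

Since f1 - f2 = max_i (S_i - B_i) = max_i min_j l_ij, the decomposition matches f at every point; subgradients of f1 and f2 are then available from the active affine pieces.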

Double bundle method for finding Clarke stationary points in nonsmooth DC programming

- Joki, Kaisa, Bagirov, Adil, Karmitsa, Napsu, Makela, Marko, Taheri, Sona

**Authors:** Joki, Kaisa; Bagirov, Adil; Karmitsa, Napsu; Makela, Marko; Taheri, Sona
**Date:** 2018
**Type:** Text, Journal article
**Relation:** SIAM Journal on Optimization Vol. 28, no. 2 (2018), p. 1892-1919
**Relation:** http://purl.org/au-research/grants/arc/DP140103213
**Description:** The aim of this paper is to introduce a new proximal double bundle method for unconstrained nonsmooth optimization, where the objective function is presented as a difference of two convex (DC) functions. The novelty in our method is a new escape procedure which enables us to guarantee approximate Clarke stationarity for solutions by utilizing the DC components of the objective function. This optimality condition is stronger than the criticality condition typically used in DC programming. Moreover, if a candidate solution is not approximate Clarke stationary, then the escape procedure returns a descent direction. With this escape procedure, we can avoid some shortcomings encountered when criticality is used. The finite termination of the double bundle method to an approximate Clarke stationary point is proved by assuming that the subdifferentials of the DC components are polytopes. Finally, some encouraging numerical results are presented.


On modeling and complete solutions to general fixpoint problems in multi-scale systems with applications

**Authors:** Ruan, Ning; Gao, David
**Date:** 2018
**Type:** Text, Journal article
**Relation:** Fixed Point Theory and Applications Vol. 2018, no. 1 (2018), p. 1-19
**Description:** This paper revisits the well-studied fixed point problem from a unified viewpoint of mathematical modeling and canonical duality theory: the general fixed point problem is first reformulated as a nonconvex optimization problem, and its well-posedness is discussed based on the objectivity principle in continuum physics; the canonical duality theory is then applied to solve this challenging problem, obtaining not only all fixed points but also their stability properties. Applications are illustrated by problems governed by nonconvex polynomial, exponential, and logarithmic operators. This paper shows that, within the framework of the canonical duality theory, there is no difference between fixed point problems and nonconvex analysis/optimization in multidisciplinary studies.


A proximal bundle method for nonsmooth DC optimization utilizing nonconvex cutting planes

- Joki, Kaisa, Bagirov, Adil, Karmitsa, Napsu, Makela, Marko

**Authors:** Joki, Kaisa; Bagirov, Adil; Karmitsa, Napsu; Makela, Marko
**Date:** 2017
**Type:** Text, Journal article
**Relation:** Journal of Global Optimization Vol. 68, no. 3 (2017), p. 501-535
**Relation:** http://purl.org/au-research/grants/arc/DP140103213
**Full Text:** false
**Description:** In this paper, we develop a version of the bundle method to solve unconstrained difference of convex (DC) programming problems. It is assumed that a DC representation of the objective function is available. Our main idea is to utilize subgradients of both the first and second components in the DC representation. This subgradient information is gathered from some neighborhood of the current iteration point and is used to build separately an approximation for each component in the DC representation. By combining these approximations we obtain a new nonconvex cutting plane model of the original objective function, which takes into account explicitly both the convex and the concave behavior of the objective function. We design the proximal bundle method for DC programming based on this new approach and prove the convergence of the method to an ε-critical point. The algorithm is tested using some academic test problems and the preliminary numerical results show the good performance of the new bundle method. An interesting fact is that the new algorithm nearly always finds the global solution in our test problems.
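The nonconvex cutting plane model described above combines two classical convex cutting plane models, one per DC component. A minimal 1-D sketch with an illustrative decomposition f1(x) = x² and f2(x) = |x| (not one of the paper's test problems):

```python
# Illustrative 1-D DC function (assumed example): f = f1 - f2 with
# f1(x) = x**2 (gradient 2x) and f2(x) = |x| (subgradient sign(x)).
f1 = lambda x: x * x
g1 = lambda x: 2.0 * x
f2 = lambda x: abs(x)
g2 = lambda x: 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

bundle = [-1.5, -0.3, 0.8, 2.0]   # points where both components were sampled

def model(x):
    """Nonconvex piecewise linear model: the convex cutting plane model of
    f1 minus the convex cutting plane model of f2, each a maximum of
    linearizations collected over the bundle."""
    m1 = max(f1(y) + g1(y) * (x - y) for y in bundle)
    m2 = max(f2(y) + g2(y) * (x - y) for y in bundle)
    return m1 - m2

# The model interpolates f at every bundle point, since each convex
# cutting plane model is exact at its own linearization point.
for y in bundle:
    assert abs(model(y) - (f1(y) - f2(y))) < 1e-12
```

Because each convex model underestimates its component, the combined model can lie above or below f between bundle points, which is precisely why it can capture the objective's concave behavior.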

Canonical duality theory and triality for solving general global optimization problems in complex systems

- Morales-Silva, Daniel, Gao, David

**Authors:** Morales-Silva, Daniel; Gao, David
**Date:** 2015
**Type:** Text, Journal article
**Relation:** Mathematics and Mechanics of Complex Systems Vol. 3, no. 2 (2015), p. 139-161
**Description:** General nonconvex optimization problems are studied by using the canonical duality-triality theory. The triality theory is proved for sums of exponentials and quartic polynomials, which solved an open problem left in 2003. This theory can be used to find the global minimum and local extrema, which bridges a gap between global optimization and nonconvex mechanics. Detailed applications are illustrated by several examples. © 2015 Mathematical Sciences Publishers.


Nonsmooth optimization algorithm for solving clusterwise linear regression problems

- Bagirov, Adil, Ugon, Julien, Mirzayeva, Hijran

**Authors:** Bagirov, Adil; Ugon, Julien; Mirzayeva, Hijran
**Date:** 2015
**Type:** Text, Journal article
**Relation:** Journal of Optimization Theory and Applications Vol. 164, no. 3 (2015), p. 755-780
**Relation:** http://purl.org/au-research/grants/arc/DP140103213
**Full Text:** false
**Description:** Clusterwise linear regression consists of finding a number of linear regression functions each approximating a subset of the data. In this paper, the clusterwise linear regression problem is formulated as a nonsmooth nonconvex optimization problem and an algorithm based on an incremental approach and on the discrete gradient method of nonsmooth optimization is designed to solve it. This algorithm incrementally divides the whole dataset into groups which can be easily approximated by one linear regression function. A special procedure is introduced to generate good starting points for solving global optimization problems at each iteration of the incremental algorithm. The algorithm is compared with the multi-start Spath and the incremental algorithms on several publicly available datasets for regression analysis.
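The clusterwise linear regression objective minimized above can be illustrated with the Spath-style alternating scheme that the paper uses as a baseline. A minimal numpy sketch on synthetic data (the data and parameters are illustrative, not the paper's test sets):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data drawn from two linear regimes (illustrative assumption).
x = rng.uniform(-1, 1, size=200)
y = np.where(x < 0, 2 * x + 1, -3 * x + 1) + 0.05 * rng.normal(size=200)
X = np.column_stack([x, np.ones_like(x)])   # design matrix with intercept

def clr_objective(coefs):
    """Mean of the smallest squared residual over the k lines."""
    return ((X @ coefs.T - y[:, None]) ** 2).min(axis=1).mean()

k = 2
coefs = rng.normal(size=(k, 2))             # random initial lines
init_err = clr_objective(coefs)

# Spath-style alternation: assign each point to its best-fitting line,
# then refit each line by ordinary least squares on its points.
for _ in range(50):
    labels = ((X @ coefs.T - y[:, None]) ** 2).argmin(axis=1)
    for j in range(k):
        mask = labels == j
        if mask.any():
            coefs[j] = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]

# Each alternation step is nonincreasing in the objective.
assert clr_objective(coefs) <= init_err + 1e-9
```

This alternating scheme only guarantees a monotone decrease to a local solution, which is why the paper pairs an incremental approach with good starting points.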

An algorithm for clusterwise linear regression based on smoothing techniques

- Bagirov, Adil, Ugon, Julien, Mirzayeva, Hijran

**Authors:** Bagirov, Adil; Ugon, Julien; Mirzayeva, Hijran
**Date:** 2014
**Type:** Text, Journal article
**Relation:** Optimization Letters Vol. 9, no. 2 (2014), p. 375-390
**Full Text:** false
**Description:** We propose an algorithm based on an incremental approach and smoothing techniques to solve clusterwise linear regression (CLR) problems. This algorithm incrementally divides the whole data set into groups which can be easily approximated by one linear regression function. A special procedure is introduced to generate an initial solution for solving global optimization problems at each iteration of the incremental algorithm. Such an approach allows one to find global or approximate global solutions to the CLR problems. The algorithm is tested using several data sets for regression analysis and compared with the multistart and incremental Spath algorithms.

- Yuan, Y. B., Fang, Shucherng, Gao, David

**Authors:** Yuan, Y. B.; Fang, Shucherng; Gao, David
**Date:** 2012
**Type:** Text, Journal article
**Relation:** Journal of Global Optimization Vol. 52, no. 2 (2012), p. 195-209
**Full Text:** false
**Description:** This paper studies the canonical duality theory for solving a class of quadrinomial minimization problems subject to one general quadratic constraint. It is shown that the nonconvex primal problem in R^n can be converted into a concave maximization dual problem over a convex set in R^2, such that the problem can be solved more efficiently. The existence and uniqueness theorems of global minimizers are provided using the triality theory. Examples are given to illustrate the results obtained. © 2011 Springer Science+Business Media, LLC.

Subgradient Method for Nonconvex Nonsmooth Optimization

- Bagirov, Adil, Jin, L., Karmitsa, Napsu, Al Nuaimat, A., Sultanova, Nargiz

**Authors:** Bagirov, Adil; Jin, L.; Karmitsa, Napsu; Al Nuaimat, A.; Sultanova, Nargiz
**Date:** 2012
**Type:** Text, Journal article
**Relation:** Journal of Optimization Theory and Applications Vol. 157, no. 2 (2012), p. 416-435
**Full Text:** false
**Description:** In this paper, we introduce a new method for solving nonconvex nonsmooth optimization problems. It uses quasisecants, which are subgradients computed in some neighborhood of a point. The proposed method contains simple procedures for finding descent directions and for solving line search subproblems. The convergence of the method is studied and preliminary results of numerical experiments are presented. The comparison of the proposed method with the subgradient and the proximal bundle methods is demonstrated using results of numerical experiments. © 2012 Springer Science+Business Media, LLC.
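The subgradient baseline referenced in the comparison above can be sketched on a standard nonsmooth test function. This is only the classical subgradient iteration with diminishing steps, not the quasisecant method itself; the test function f(x) = ||x||₁ is an illustrative choice:

```python
import numpy as np

# Classical subgradient method on f(x) = ||x||_1, a standard nonsmooth
# convex test function (illustrative; not a test problem from the paper).
def subgrad(x):
    return np.sign(x)   # a subgradient of the l1-norm (0 chosen at kinks)

x = np.array([3.0, -2.0])
for k in range(1, 2001):
    x = x - (1.0 / k) * subgrad(x)   # diminishing, nonsummable step sizes

# The iterates approach the minimizer 0, oscillating with the step size.
assert np.abs(x).sum() < 0.05
```

The slow, oscillating convergence visible here is the behavior that bundle-type and quasisecant methods are designed to improve on.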

Codifferential method for minimizing nonsmooth DC functions

**Authors:** Bagirov, Adil; Ugon, Julien
**Date:** 2011
**Type:** Text, Journal article
**Relation:** Journal of Global Optimization Vol. 50, no. 1 (2011), p. 3-22
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:** false
**Description:** In this paper, a new algorithm to locally minimize nonsmooth functions represented as a difference of two convex functions (DC functions) is proposed. The algorithm is based on the concept of the codifferential. It is assumed that a DC decomposition of the objective function is known a priori. We develop an algorithm to compute descent directions using a few elements of the codifferential. The convergence of the minimization algorithm is studied and its comparison with different versions of the bundle methods, using results of numerical experiments, is given. © 2010 Springer Science+Business Media, LLC.

A quasisecant method for minimizing nonsmooth functions

- Bagirov, Adil, Ganjehlou, Asef Nazari

**Authors:** Bagirov, Adil; Ganjehlou, Asef Nazari
**Date:** 2010
**Type:** Text, Journal article
**Relation:** Optimization Methods and Software Vol. 25, no. 1 (2010), p. 3-18
**Relation:** http://purl.org/au-research/grants/arc/DP0666061
**Full Text:** false
**Description:** We present an algorithm to locally minimize nonsmooth, nonconvex functions. In order to find descent directions, the notion of quasisecants, introduced in this paper, is applied. We prove that the algorithm converges to Clarke stationary points. Numerical results are presented demonstrating the applicability of the proposed algorithm to a wide variety of nonsmooth, nonconvex optimization problems. We also compare the proposed algorithm with the bundle method using numerical results.

