
New algorithms for multi-class cancer diagnosis using tumor gene expression signatures

- Bagirov, Adil, Ferguson, Brent, Ivkovic, Sasha, Saunders, Gary, Yearwood, John

**Authors:** Bagirov, Adil; Ferguson, Brent; Ivkovic, Sasha; Saunders, Gary; Yearwood, John
**Date:** 2003
**Type:** Text; Journal article
**Relation:** Bioinformatics Vol. 19, no. 14 (2003), p. 1800-1807
**Description:** Motivation: The increasing use of DNA microarray-based tumor gene expression profiles for cancer diagnosis requires mathematical methods with high accuracy for solving clustering, feature selection and classification problems of gene expression data. Results: New algorithms are developed for solving clustering, feature selection and classification problems of gene expression data. The clustering algorithm is based on optimization techniques and computes clusters step by step; this approach allows us to find as many clusters as a data set contains with respect to some tolerance. Feature selection is crucial for a gene expression database. Our feature selection algorithm is based on calculating overlaps of different genes. The database used contains over 16,000 genes, and this number is considerably reduced by feature selection. We propose a classification algorithm in which each tissue sample is considered as the center of a cluster which is a ball. The results of numerical experiments confirm that the classification algorithm in combination with the feature selection algorithm performs slightly better than the published results for multi-class classifiers based on support vector machines for this data set.
**Description:** C1
**Description:** 2003000439
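The classification idea in this abstract (each tissue sample acting as the center of a ball-shaped cluster) can be sketched as a nearest-center rule. This is only a rough illustration under assumed choices: the function name, the Euclidean metric, and the toy data are not from the paper.

```python
import numpy as np

def nearest_center_classify(sample, centers, labels):
    """Assign `sample` the label of the closest cluster center.

    Each training sample is treated as the center of a ball-shaped
    cluster; a new sample takes the label of the nearest center
    (Euclidean distance is an illustrative choice here).
    """
    dists = np.linalg.norm(centers - sample, axis=1)
    return labels[int(np.argmin(dists))]

# Toy usage: two classes in a 2-D "expression" space.
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = ["class_A", "class_B"]
print(nearest_center_classify(np.array([4.0, 4.5]), centers, labels))
```

In the paper's setting the centers live in a feature-selected gene space rather than 2-D, but the decision rule has the same shape.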


Max-min separability

- Bagirov, Adil

**Authors:** Bagirov, Adil
**Date:** 2005
**Type:** Text; Journal article
**Relation:** Optimization Methods and Software Vol. 20, no. 2-3 (2005), p. 271-290
**Description:** We consider the problem of discriminating two finite point sets in n-dimensional space by a finite number of hyperplanes generating a piecewise linear function. If the intersection of these sets is empty, then they can be strictly separated by a max-min of linear functions. An error function, which is nonconvex and piecewise linear, is introduced, and we discuss an algorithm for its minimization. The results of numerical experiments using some real-world datasets are presented, showing the effectiveness of the proposed approach.
**Description:** C1
**Description:** 2003001350
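The separating function described here, a max-min of linear functions, can be evaluated directly once the hyperplanes and their grouping are fixed. The sketch below is illustrative: the names, the 1-D toy data, and the single-group partition are assumptions, not the paper's construction.

```python
import numpy as np

def max_min_linear(x, W, b, groups):
    """Evaluate a piecewise linear max-min function.

    W[i], b[i] define the linear functions <W[i], x> + b[i]; `groups`
    partitions their indices.  The value is the max over groups of the
    min within each group.  Points with positive value lie on one side
    of the separating surface, negative on the other.
    """
    vals = W @ x + b
    return max(min(vals[i] for i in g) for g in groups)

# Toy example: two linear pieces in one group separate the line at x = 1.
W = np.array([[1.0], [2.0]])
b = np.array([-1.0, -2.0])
groups = [[0, 1]]
print(max_min_linear(np.array([2.0]), W, b, groups) > 0)  # one side
print(max_min_linear(np.array([0.0]), W, b, groups) > 0)  # the other
```

Training, i.e. choosing W, b, and the grouping by minimizing the paper's nonconvex piecewise linear error function, is the hard part and is not shown here.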


An inexact modified subgradient algorithm for nonconvex optimization

- Burachik, Regina, Kaya, Yalcin, Mammadov, Musa

**Authors:** Burachik, Regina; Kaya, Yalcin; Mammadov, Musa
**Date:** 2008
**Type:** Text; Journal article
**Relation:** Computational Optimization and Applications (2008), p. 1-24
**Description:** We propose and analyze an inexact version of the modified subgradient (MSG) algorithm, which we call the IMSG algorithm, for nonsmooth and nonconvex optimization over a compact set. We prove that under an approximate, i.e. inexact, minimization of the sharp augmented Lagrangian, the main convergence properties of the MSG algorithm are preserved for the IMSG algorithm. Inexact minimization may allow problems to be solved with less computational effort. We illustrate this through test problems, including an optimal bang-bang control problem, under several different inexactness schemes. © 2008 Springer Science+Business Media, LLC.
**Description:** C1


An update rule and a convergence result for a penalty function method

- Burachik, Regina, Kaya, Yalcin

**Authors:** Burachik, Regina; Kaya, Yalcin
**Date:** 2007
**Type:** Text; Journal article
**Relation:** Journal of Industrial & Management Optimization Vol. 3, no. 2 (2007), p. 381-398
**Description:** We use a primal-dual scheme to devise a new update rule for a penalty function method applicable to general optimization problems, including nonsmooth and nonconvex ones. The update rule we introduce uses dual information in a simple way. Numerical test problems show that our update rule has certain advantages over the classical one. We study the relationship between exact penalty parameters and dual solutions. Under the differentiability of the dual function at the least exact penalty parameter, we establish convergence of the minimizers of the sequential penalty functions to a solution of the original problem. Numerical experiments are then used to illustrate some of the theoretical results.
**Description:** C1
**Description:** 2003004886
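For context, the *classical* penalty update the abstract compares against can be sketched on a toy problem: minimize x² subject to x ≥ 1 via the quadratic penalty P_c(x) = x² + c·max(0, 1 − x)², whose exact minimizer is c/(1 + c). The geometric update of c below is the classical rule; the paper's own rule, which uses dual information, is not reproduced here.

```python
def penalty_minimizer(c):
    """Exact minimizer of P_c(x) = x**2 + c * max(0, 1 - x)**2,
    the quadratic penalty for: minimize x**2 subject to x >= 1.
    Setting the derivative 2x - 2c(1 - x) to zero gives x = c/(1 + c)."""
    return c / (1.0 + c)

c = 1.0
for _ in range(10):
    x = penalty_minimizer(c)
    c *= 10.0  # classical geometric update of the penalty parameter
print(round(x, 6))
```

As c grows, the penalty minimizers approach the constrained solution x = 1, which is the convergence behavior the paper establishes (under its own update rule and weaker assumptions).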


On modeling and complete solutions to general fixpoint problems in multi-scale systems with applications

- Ruan, Ning, Gao, David

**Authors:** Ruan, Ning; Gao, David
**Date:** 2018
**Type:** Text; Journal article
**Relation:** Fixed Point Theory and Applications Vol. 2018, no. 1 (2018), p. 1-19
**Description:** This paper revisits the well-studied fixed point problem from a unified viewpoint of mathematical modeling and canonical duality theory. The general fixed point problem is first reformulated as a nonconvex optimization problem, and its well-posedness is discussed based on the objectivity principle in continuum physics; the canonical duality theory is then applied to solve this challenging problem, yielding not only all fixed points but also their stability properties. Applications are illustrated by problems governed by nonconvex polynomial, exponential, and logarithmic operators. This paper shows that, within the framework of the canonical duality theory, there is no difference between fixed point problems and nonconvex analysis/optimization in multidisciplinary studies.


Optimality conditions via weak subdifferentials in reflexive Banach spaces

- Hassani, Sara, Mammadov, Musa, Jamshidi, Mina

**Authors:** Hassani, Sara; Mammadov, Musa; Jamshidi, Mina
**Date:** 2017
**Type:** Text; Journal article
**Relation:** Turkish Journal of Mathematics Vol. 41, no. 1 (2017), p. 1-8
**Description:** In this paper, the relation between weak subdifferentials and directional derivatives, as well as optimality conditions for nonconvex optimization problems in reflexive Banach spaces, are investigated. The results partly generalize several related results obtained for finite-dimensional spaces. © Tübitak.


Canonical duality theory and triality for solving general global optimization problems in complex systems

- Morales-Silva, Daniel, Gao, David

**Authors:** Morales-Silva, Daniel; Gao, David
**Date:** 2015
**Type:** Text; Journal article
**Relation:** Mathematics and Mechanics of Complex Systems Vol. 3, no. 2 (2015), p. 139-161
**Description:** General nonconvex optimization problems are studied by using the canonical duality-triality theory. The triality theory is proved for sums of exponentials and quartic polynomials, which solved an open problem left in 2003. This theory can be used to find the global minimum and local extrema, which bridges a gap between global optimization and nonconvex mechanics. Detailed applications are illustrated by several examples. © 2015 Mathematical Sciences Publishers.


Double bundle method for finding Clarke stationary points in nonsmooth DC programming

- Joki, Kaisa, Bagirov, Adil, Karmitsa, Napsu, Makela, Marko, Taheri, Sona

**Authors:** Joki, Kaisa; Bagirov, Adil; Karmitsa, Napsu; Makela, Marko; Taheri, Sona
**Date:** 2018
**Type:** Text; Journal article
**Relation:** SIAM Journal on Optimization Vol. 28, no. 2 (2018), p. 1892-1919
**Relation:** http://purl.org/au-research/grants/arc/DP140103213
**Description:** The aim of this paper is to introduce a new proximal double bundle method for unconstrained nonsmooth optimization, where the objective function is presented as a difference of two convex (DC) functions. The novelty in our method is a new escape procedure which enables us to guarantee approximate Clarke stationarity for solutions by utilizing the DC components of the objective function. This optimality condition is stronger than the criticality condition typically used in DC programming. Moreover, if a candidate solution is not approximate Clarke stationary, then the escape procedure returns a descent direction. With this escape procedure, we can avoid some shortcomings encountered when criticality is used. The finite termination of the double bundle method to an approximate Clarke stationary point is proved by assuming that the subdifferentials of DC components are polytopes. Finally, some encouraging numerical results are presented.


Aggregate subgradient method for nonsmooth DC optimization

- Bagirov, Adil, Taheri, Sona, Joki, Kaisa, Karmitsa, Napsu, Mäkelä, Marko

**Authors:** Bagirov, Adil; Taheri, Sona; Joki, Kaisa; Karmitsa, Napsu; Mäkelä, Marko
**Date:** 2021
**Type:** Text; Journal article
**Relation:** Optimization Letters Vol. 15, no. 1 (2021), p. 83-96
**Relation:** http://purl.org/au-research/grants/arc/DP190100580
**Description:** The aggregate subgradient method is developed for solving unconstrained nonsmooth difference of convex (DC) optimization problems. The proposed method shares some similarities with both the subgradient and the bundle methods. Aggregate subgradients are defined as a convex combination of subgradients computed at null steps between two serious steps. At each iteration search directions are found using only two subgradients: the aggregate subgradient and a subgradient computed at the current null step. It is proved that the proposed method converges to a critical point of the DC optimization problem and also that the number of null steps between two serious steps is finite. The new method is tested using some academic test problems and compared with several other nonsmooth DC optimization solvers. © 2020, Springer-Verlag GmbH Germany, part of Springer Nature.
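The two-subgradient search-direction idea in this abstract can be sketched as follows: take the smallest-norm convex combination of the aggregate subgradient and the current subgradient, and move opposite to it. This is an illustrative sketch of the general idea, not the paper's exact rule; the closed-form step for the combination coefficient assumes the simple least-norm criterion stated in the comment.

```python
import numpy as np

def aggregate_direction(agg, sub):
    """Search direction from two subgradients.

    Minimizes ||sub + t*(agg - sub)||**2 over t in [0, 1] (closed form,
    clipped to the interval) and returns the negative of the resulting
    convex combination.  Illustrative least-norm criterion only.
    """
    d = agg - sub
    denom = float(d @ d)
    t = 0.0 if denom == 0.0 else float(np.clip(-(sub @ d) / denom, 0.0, 1.0))
    combo = t * agg + (1.0 - t) * sub
    return -combo

print(aggregate_direction(np.array([1.0, 0.0]), np.array([0.0, 1.0])))
```

When the two subgradients disagree strongly, the combination shrinks toward the least-norm element of their convex hull, which is what makes such directions useful near kinks of a nonsmooth objective.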

