A sharp augmented Lagrangian-based method in constrained non-convex optimization

- Bagirov, Adil, Ozturk, Gurkan, Kasimbeyli, Refail

**Authors:** Bagirov, Adil; Ozturk, Gurkan; Kasimbeyli, Refail
**Date:** 2019
**Type:** Text; Journal article
**Relation:** Optimization Methods and Software Vol. 34, no. 3 (2019), p. 462-488
**Full Text:** false
**Reviewed:**
**Description:** In this paper, a novel sharp augmented Lagrangian-based global optimization method is developed for solving constrained non-convex optimization problems. The algorithm consists of outer and inner loops. At each inner iteration, the discrete gradient method is applied to minimize the sharp augmented Lagrangian function. Depending on the solution found, the algorithm either stops, updates the dual variables in the inner loop, or updates the upper or lower bounds by going to the outer loop. Convergence results for the proposed method are presented, and its performance is demonstrated on a wide range of nonlinear smooth and nonsmooth constrained optimization test problems from the literature.
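As a rough illustration of the inner/outer structure described in this abstract, here is a hedged toy sketch: the sharp augmented Lagrangian form L(x, u, c) = f(x) + c·|g(x)| − u·g(x) and the dual update follow Gasimov-style constructions for an equality constraint, but the problem data, the crude grid search (standing in for the discrete gradient method), and the step parameters are all invented for illustration.

```python
# Toy sketch (not the paper's algorithm): minimize x1^2 + x2^2
# subject to x1 + x2 - 1 = 0, whose optimum is (0.5, 0.5).

def f(x1, x2):
    return x1 * x1 + x2 * x2

def g(x1, x2):
    return x1 + x2 - 1.0

def sharp_lagrangian(x1, x2, u, c):
    # Sharp augmented Lagrangian: f(x) + c*|g(x)| - u*g(x)
    return f(x1, x2) + c * abs(g(x1, x2)) - u * g(x1, x2)

def inner_minimize(u, c, lo=-1.0, hi=2.0, step=0.05):
    # Crude grid search standing in for the discrete gradient method.
    grid = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    return min((sharp_lagrangian(a, b, u, c), a, b)
               for a in grid for b in grid)

def solve(u=0.0, c=1.0, s=0.5, eps=0.1, tol=1e-6, max_iter=50):
    # Dual updates in the style of the modified subgradient (MSG) algorithm:
    # shift the multiplier against the violation and grow the penalty.
    for _ in range(max_iter):
        _, x1, x2 = inner_minimize(u, c)
        viol = g(x1, x2)
        if abs(viol) <= tol:          # feasible minimizer found: stop
            return x1, x2
        u = u - s * viol              # multiplier update
        c = c + (s + eps) * abs(viol) # penalty update
    return x1, x2

x1, x2 = solve()
print(round(x1, 2), round(x2, 2))  # lands at the constrained optimum (0.5, 0.5)
```

Because the sharp (norm) penalty is exact, even a modest penalty parameter makes the inner minimizer feasible on this toy problem; real instances rely on the outer-loop bound updates the abstract describes.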

A generalization of a theorem of Arrow, Barankin and Blackwell to a nonconvex case

- Kasimbeyli, Nergiz, Kasimbeyli, Refail, Mammadov, Musa

**Authors:** Kasimbeyli, Nergiz; Kasimbeyli, Refail; Mammadov, Musa
**Date:** 2016
**Type:** Text; Journal article
**Relation:** Optimization Vol. 65, no. 5 (May 2016), p. 937-945
**Full Text:**
**Reviewed:**
**Description:** The paper presents a generalization of a known density theorem of Arrow, Barankin, and Blackwell for properly efficient points, defined as support points of sets with respect to monotonically increasing sublinear functions. This result is shown to hold for nonconvex sets of a partially ordered reflexive Banach space.

An incremental piecewise linear classifier based on polyhedral conic separation

- Ozturk, Gurkan, Bagirov, Adil, Kasimbeyli, Refail

**Authors:** Ozturk, Gurkan; Bagirov, Adil; Kasimbeyli, Refail
**Date:** 2015
**Type:** Text; Journal article
**Relation:** Machine Learning Vol. 101, no. 1-3 (2015), p. 397-413
**Relation:** http://purl.org/au-research/grants/arc/DP140103213
**Full Text:** false
**Reviewed:**
**Description:** In this paper, a piecewise linear classifier based on polyhedral conic separation is developed. The classifier builds nonlinear boundaries between classes using polyhedral conic functions. Since the number of polyhedral conic functions separating the classes is not known a priori, an incremental approach is proposed to build the separating functions. These functions are found by minimizing an error function which is nonsmooth and nonconvex. A special procedure, based on the incremental approach, is proposed to generate starting points for minimizing the error function. The discrete gradient method, a derivative-free method for nonsmooth optimization, is applied to minimize the error function starting from those points. The proposed classifier is applied to classification problems on 12 publicly available data sets and compared with some mainstream and piecewise linear classifiers. © 2014, The Author(s).
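The building block of the classifier in this abstract, a polyhedral conic function, can be sketched as follows. The form g(x) = ⟨w, x − c⟩ + ξ·‖x − c‖₁ − γ follows the standard polyhedral conic function construction; the parameter values, the helper names, and the "any piece nonpositive" decision rule below are illustrative assumptions, not the paper's trained model.

```python
# Toy polyhedral conic function (PCF): g(x) = <w, x - c> + xi * ||x - c||_1 - gamma.
# Its sublevel set {x : g(x) <= 0} is a polyhedron containing the center c,
# and the incremental classifier stacks several such pieces per class.

def pcf(x, w, c, xi, gamma):
    d = [xv - cv for xv, cv in zip(x, c)]          # shift to the center c
    linear = sum(wv * dv for wv, dv in zip(w, d))  # <w, x - c>
    cone = xi * sum(abs(dv) for dv in d)           # xi * ||x - c||_1
    return linear + cone - gamma

def pcf_classifier(x, pcfs):
    # Assign x to the "inside" class when some conic piece is nonpositive,
    # i.e. x falls in the union of the polyhedral pieces.
    return min(pcf(x, *p) for p in pcfs) <= 0.0

# One PCF centered at the origin with w = (0, 0), xi = 1, gamma = 1
# carves out the diamond ||x||_1 <= 1.
pieces = [((0.0, 0.0), (0.0, 0.0), 1.0, 1.0)]
print(pcf_classifier((0.3, 0.3), pieces))  # True: inside the diamond
print(pcf_classifier((2.0, 2.0), pieces))  # False: outside
```

In the paper the parameters (w, ξ, γ, c) are fitted by minimizing the nonsmooth, nonconvex error function via the discrete gradient method; here they are fixed by hand purely to show the geometry.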

Piecewise linear classifiers based on nonsmooth optimization approaches

- Bagirov, Adil, Kasimbeyli, Refail, Ozturk, Gurkan, Ugon, Julien

**Authors:** Bagirov, Adil; Kasimbeyli, Refail; Ozturk, Gurkan; Ugon, Julien
**Date:** 2014
**Type:** Text; Book chapter
**Relation:** Optimization in Science and Engineering, p. 1-32
**Full Text:** false
**Reviewed:**
**Description:** Nonsmooth optimization provides efficient algorithms for solving many machine learning problems. In particular, nonsmooth optimization approaches to supervised data classification lead to very efficient algorithms for their solution. In this chapter, we demonstrate how nonsmooth optimization algorithms can be applied to design efficient piecewise linear classifiers for supervised data classification. Such classifiers are developed using max–min and polyhedral conic separabilities as well as an incremental approach. We report results of numerical experiments and compare the piecewise linear classifiers with a number of other mainstream classifiers.

Preface: Special issue of JOGO MEC EurOPT 2010-Izmir

- Kasimbeyli, Refail, Mammadov, Musa, Dincer, Cemali

**Authors:** Kasimbeyli, Refail; Mammadov, Musa; Dincer, Cemali
**Date:** 2013
**Type:** Text; Journal article
**Relation:** Journal of Global Optimization Vol. 56, no. 2 (June 2013), p. 217-218
**Full Text:** false
**Reviewed:**
**Description:** C1

A novel piecewise linear classifier based on polyhedral conic and max-min separabilities

- Bagirov, Adil, Ugon, Julien, Webb, Dean, Ozturk, Gurkan, Kasimbeyli, Refail

**Authors:** Bagirov, Adil; Ugon, Julien; Webb, Dean; Ozturk, Gurkan; Kasimbeyli, Refail
**Date:** 2011
**Type:** Text; Journal article
**Relation:** TOP Vol. 21, no. 1 (2011), p. 1-22
**Full Text:** false
**Reviewed:**
**Description:** In this paper, an algorithm for finding piecewise linear boundaries between pattern classes is developed. The algorithm consists of two main stages. In the first stage, a polyhedral conic set is used to identify data points which lie inside their classes; in the second stage, those points are excluded and a piecewise linear boundary is computed from the remaining data points. Piecewise linear boundaries are computed incrementally, starting with one hyperplane. Such an approach significantly reduces the computational effort on many large data sets. Results of numerical experiments are reported; they demonstrate that the new algorithm consistently produces good test set accuracy on most data sets compared with a number of other mainstream classifiers. © 2011 Sociedad de Estadística e Investigación Operativa.

Optimality conditions in nonconvex optimization via weak subdifferentials

- Kasimbeyli, Refail, Mammadov, Musa

**Authors:** Kasimbeyli, Refail; Mammadov, Musa
**Date:** 2011
**Type:** Text; Journal article
**Relation:** Nonlinear Analysis, Theory, Methods and Applications Vol. 74, no. 7 (2011), p. 2534-2547
**Full Text:**
**Reviewed:**
**Description:** In this paper we study optimality conditions for optimization problems described by a special class of directionally differentiable functions. The well-known necessary and sufficient optimality condition of nonsmooth convex optimization, given in the form of a variational inequality, is generalized to the nonconvex case by using the notion of weak subdifferentials. An equivalent formulation of this condition in terms of weak subdifferentials and augmented normal cones is also presented. © 2011 Elsevier Ltd. All rights reserved.

On weak subdifferentials, directional derivatives, and radial epiderivatives for nonconvex functions

- Kasimbeyli, Refail, Mammadov, Musa

**Authors:** Kasimbeyli, Refail; Mammadov, Musa
**Date:** 2009
**Type:** Text; Journal article
**Relation:** SIAM Journal on Optimization Vol. 20, no. 2 (2009), p. 841-855
**Full Text:**
**Reviewed:**
**Description:** In this paper we study relations between the directional derivatives, the weak subdifferentials, and the radial epiderivatives of nonconvex real-valued functions. We generalize to the nonconvex case the well-known theorem that represents the directional derivative of a convex function as a pointwise maximum of its subgradients. Using the notion of the weak subgradient, we establish conditions that guarantee equality of the directional derivative to the pointwise supremum of weak subgradients of a nonconvex real-valued function. A similar representation is also established for the radial epiderivative of a nonconvex function. Finally, the equality between the directional derivatives and the radial epiderivatives of a nonconvex function is proved, and an analogue of the well-known theorem on necessary and sufficient conditions for optimality is obtained without any convexity assumptions.
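For orientation, the weak subgradient notion underlying this abstract can be written out as follows. This is a sketch from the standard definition, with notation assumed (the paper's exact hypotheses for the supremum representation are not reproduced here):

```latex
% A pair (v, c) \in \mathbb{R}^n \times \mathbb{R}_+ is a weak subgradient of f
% at \bar{x} when f admits the concave minorant
f(x) - f(\bar{x}) \;\ge\; \langle v, x - \bar{x} \rangle - c\,\| x - \bar{x} \|
\quad \text{for all } x .
% Under the conditions established in the paper, the directional derivative
% is the pointwise supremum over the weak subdifferential \partial^{w} f(\bar{x}):
f'(\bar{x}; d) \;=\; \sup \bigl\{ \langle v, d \rangle - c\,\| d \|
\;:\; (v, c) \in \partial^{w} f(\bar{x}) \bigr\} .
```

For convex f one may take c = 0, recovering the classical subgradient inequality and the familiar max-of-subgradients representation.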

The modified subgradient algorithm based on feasible values

- Kasimbeyli, Refail, Ustun, Ozden, Rubinov, Alex

**Authors:** Kasimbeyli, Refail; Ustun, Ozden; Rubinov, Alex
**Date:** 2009
**Type:** Text; Journal article
**Relation:** Optimization Vol. 58, no. 5 (2009), p. 535-560
**Full Text:** false
**Reviewed:**
**Description:** In this article, we continue to study the modified subgradient (MSG) algorithm previously suggested by Gasimov for solving sharp augmented Lagrangian dual problems. The most important features of this algorithm are that it guarantees a global optimum for a wide class of non-convex optimization problems, generates a strictly increasing sequence of dual values (a property not shared by other subgradient methods), and is guaranteed to converge. Its main drawbacks, typical of many subgradient algorithms, are that it uses an unconstrained global minimum of the augmented Lagrangian function and requires an approximate upper bound of the initial problem to be known in order to update the stepsize parameters. In this study we introduce a new algorithm based on so-called feasible values and give convergence theorems. The new algorithm does not require the optimal value to be known initially; it seeks it iteratively, beginning with an arbitrary number, and it does not need a global minimum of the augmented Lagrangian to update the stepsize parameters. A collection of test problems is used to demonstrate the performance of the new algorithm. © 2009 Taylor & Francis.
