General Lagrange-type functions in constrained global optimization, Part II : Exact auxiliary functions
- Authors: Evtushenko, Yu G. , Rubinov, Alex , Zhadan, V. G.
- Date: 2001
- Type: Text , Journal article
- Relation: Optimization Methods and Software Vol. 16, no. 1-4 (2001), p. 231-256
- Full Text: false
- Reviewed:
- Description: This paper is a continuation of [13]. For each constrained optimization problem we consider certain unconstrained problems, which are constructed by means of auxiliary (Lagrange-type) functions. We study only exact auxiliary functions, that is, functions whose set of global minimizers coincides with the solution set of the primal constrained optimization problem. Sufficient conditions for the exactness of an auxiliary function are given. These conditions are obtained without the assumption that the Lagrange function has a saddle point. Some examples of exact auxiliary functions are given. © 2001 OPA (Overseas Publishers Association) N.V. Published by license under the Gordon and Breach Science Publishers imprint, a member of the Taylor & Francis Group.
Generalised rational approximation and its application to improve deep learning classifiers
- Authors: Peiris, V , Sharon, Nir , Sukhorukova, Nadezda , Ugon, Julien
- Date: 2021
- Type: Text , Journal article
- Relation: Applied Mathematics and Computation Vol. 389, no. (2021), p.
- Relation: https://purl.org/au-research/grants/arc/DP180100602
- Full Text: false
- Reviewed:
- Description: A rational approximation (that is, approximation by a ratio of two polynomials) is a flexible alternative to polynomial approximation. In particular, rational functions provide accurate approximations to nonsmooth and non-Lipschitz functions, where polynomial approximation is inefficient. We prove that the optimisation problems appearing in the best uniform rational approximation and its generalisation to a ratio of linear combinations of basis functions are quasiconvex, even when the basis functions are not restricted to monomials. Then we show how this fact can be used in the development of computational methods. This paper presents a theoretical study of the arising optimisation problems and provides results of several numerical experiments. We apply our approximation as a preprocessing step to deep learning classifiers and demonstrate that the classification accuracy is significantly improved compared to the classification of the raw signals. © 2020
- Description: This research was supported by the Australian Research Council (ARC), Solving hard Chebyshev approximation problems through nonsmooth analysis (Discovery Project DP180100602 ). This research was partially sponsored by Tel Aviv-Swinburne Research Collaboration Grant (2019).
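The quasiconvexity result described in the abstract above has a standard computational consequence: for a fixed error level t, asking whether some ratio of basis-function combinations achieves uniform error at most t is a linear feasibility problem in the coefficients, so the optimal error can be located by bisection on t. A minimal sketch on a discretisation grid, assuming monomial bases, the grid normalisation q ≥ 1, and `scipy`'s LP solver (these details are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def best_rational_bisect(f, xs, deg_p=2, deg_q=2, tol=1e-6):
    """Bisection for the best uniform rational approximation on a grid.

    Quasiconvexity of the max-error makes the sublevel-set test
    'is uniform error <= t attainable?' a linear feasibility
    problem in the coefficients of p and q.
    """
    fx = f(xs)
    Vp = np.vander(xs, deg_p + 1)            # monomial basis for numerator p
    Vq = np.vander(xs, deg_q + 1)            # monomial basis for denominator q

    def feasible(t):
        # Seek (a, b) with |f(x)q(x) - p(x)| <= t*q(x) and q(x) >= 1 on the grid:
        #   f*q - p - t*q <= 0,   p - f*q - t*q <= 0,   -q <= -1
        A = np.vstack([
            np.hstack([-Vp, (fx - t)[:, None] * Vq]),
            np.hstack([Vp, (-fx - t)[:, None] * Vq]),
            np.hstack([np.zeros_like(Vp), -Vq]),
        ])
        b = np.concatenate([np.zeros(2 * len(xs)), -np.ones(len(xs))])
        n = Vp.shape[1] + Vq.shape[1]
        res = linprog(np.zeros(n), A_ub=A, b_ub=b,
                      bounds=[(None, None)] * n, method="highs")
        return res.success

    lo, hi = 0.0, float(np.max(np.abs(fx)))  # t = max|f| is feasible (p = 0, q = 1)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

For instance, `best_rational_bisect(np.abs, np.linspace(-1, 1, 41))` brackets the best degree-(2,2) uniform error for |x| on the grid; each bisection step costs one LP solve.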
Generalized Bregman envelopes and proximity operators
- Authors: Burachik, Regina , Dao, Minh , Lindstrom, Scott
- Date: 2021
- Type: Text , Journal article
- Relation: Journal of Optimization Theory and Applications Vol. 190, no. 3 (2021), p. 744-778
- Full Text:
- Reviewed:
- Description: Every maximally monotone operator can be associated with a family of convex functions, called the Fitzpatrick family or family of representative functions. Surprisingly, in 2017, Burachik and Martínez-Legaz showed that the well-known Bregman distance is a particular case of a general family of distances, each one induced by a specific maximally monotone operator and a specific choice of one of its representative functions. For the family of generalized Bregman distances, sufficient conditions for convexity, coercivity, and supercoercivity have recently been furnished. Motivated by these advances, we introduce in the present paper the generalized left and right envelopes and proximity operators, and we provide asymptotic results for parameters. Certain results extend readily from the more specific Bregman context, while others only extend for certain generalized cases. To illustrate, we construct examples from the Bregman generalizing case, together with the natural “extreme” cases that highlight the importance of which generalized Bregman distance is chosen. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
Generalized Fenchel's conjugation formulas and duality for abstract convex functions
- Authors: Jeyakumar, Vaithilingam , Rubinov, Alex , Wu, Zhiyou
- Date: 2007
- Type: Text , Journal article
- Relation: Journal of Optimization Theory and Applications Vol. 132, no. 3 (Mar 2007), p. 441-458
- Full Text: false
- Reviewed:
- Description: In this paper, we present a generalization of Fenchel's conjugation and derive infimal convolution formulas, duality and subdifferential (and epsilon-subdifferential) sum formulas for abstract convex functions. The class of abstract convex functions covers very broad classes of nonconvex functions. A nonaffine global support function technique and an extended sum-epiconjugate technique of convex functions play a crucial role in deriving the results for abstract convex functions. An additivity condition involving global support sets serves as a constraint qualification for the duality.
- Description: C1
Global descent methods for unconstrained global optimization
- Authors: Wu, Zhiyou , Li, Duan , Zhang, Lian-Sheng
- Date: 2011
- Type: Text , Journal article
- Relation: Journal of Global Optimization Vol. 50, no. 3 (2011), p. 379-397
- Full Text: false
- Reviewed:
- Description: We propose in this paper novel global descent methods for unconstrained global optimization problems that attain global optimality by carrying out a series of local minimizations. More specifically, the solution framework consists of a two-phase cycle of local minimization: the first phase implements a local search of the original objective function, while the second phase ensures a global descent of the original objective function in the steepest descent direction of a (quasi) global descent function. The key element of global descent methods is the construction of the (quasi) global descent functions, which possess prominent features guaranteeing a global descent. © 2010 Springer Science+Business Media, LLC.
Global optimal solutions to a class of quadrinomial minimization problems with one quadratic constraint
- Authors: Yuan, Y. B. , Fang, Shucherng , Gao, David
- Date: 2012
- Type: Text , Journal article
- Relation: Journal of Global Optimization Vol. 52, no. 2 (2012), p. 195-209
- Full Text: false
- Reviewed:
- Description: This paper studies the canonical duality theory for solving a class of quadrinomial minimization problems subject to one general quadratic constraint. It is shown that the nonconvex primal problem in Rn can be converted into a concave maximization dual problem over a convex set in R2, such that the problem can be solved more efficiently. The existence and uniqueness theorems of global minimizers are provided using the triality theory. Examples are given to illustrate the results obtained. © 2011 Springer Science+Business Media, LLC.
Global optimal trajectory in Chaos and NP-Hardness
- Authors: Latorre, Vittorio , Gao, David
- Date: 2016
- Type: Text , Journal article
- Relation: International Journal of Bifurcation and Chaos Vol. 26, no. 8 (2016), p. 1-14
- Full Text:
- Reviewed:
- Description: This paper presents an unconventional theory and method for solving general nonlinear dynamical systems. Instead of the direct iterative methods, the discretized nonlinear system is first formulated as a global optimization problem via the least squares method. A newly developed canonical duality theory shows that this nonconvex minimization problem can be solved deterministically in polynomial time if a global optimality condition is satisfied. The so-called pseudo-chaos produced by linear iterative methods is mainly due to intrinsic numerical error accumulation. Otherwise, the global optimization problem could be NP-hard and the nonlinear system can be truly chaotic. A conjecture is proposed, which reveals the connection between chaos in nonlinear dynamics and NP-hardness in computer science. The methodology and the conjecture are verified by applications to the well-known logistic equation, a forced memristive circuit and the Lorenz system. Computational results show that the canonical duality theory can be used to identify chaotic systems and to obtain realistic global optimal solutions in nonlinear dynamical systems. The method and results presented in this paper should bring some new insights into nonlinear dynamical systems and NP-hardness in computational complexity theory. © 2016 World Scientific Publishing Company.
Global optimality conditions and optimization methods for constrained polynomial programming problems
- Authors: Wu, Zhiyou , Tian, Jing , Ugon, Julien , Zhang, Liang
- Date: 2015
- Type: Text , Journal article
- Relation: Applied Mathematics and Computation Vol. 262, no. (2015), p. 312-325
- Full Text:
- Reviewed:
- Description: The general constrained polynomial programming problem (GPP) is considered in this paper. Problem (GPP) has a broad range of applications and is proved to be NP-hard. Necessary global optimality conditions for problem (GPP) are established. Then, a new local optimization method for this problem is proposed by exploiting these necessary global optimality conditions. A global optimization method is proposed for this problem by combining this local optimization method together with an auxiliary function. Some numerical examples are also given to illustrate that these approaches are very efficient. © 2015 Elsevier Inc. All rights reserved.
Global optimality conditions and optimization methods for polynomial programming problems
- Authors: Wu, Zhiyou , Tian, Jing , Ugon, Julien
- Date: 2015
- Type: Text , Journal article
- Relation: Journal of Global Optimization Vol. 62, no. 4 (2015), p. 617-641
- Full Text: false
- Reviewed:
- Description: This paper is concerned with the general polynomial programming problem with box constraints, including global optimality conditions and optimization methods. First, a necessary global optimality condition for a general polynomial programming problem with box constraints is given. Then we design a local optimization method that uses the necessary global optimality condition to obtain some strongly or ε-strongly local minimizers which substantially improve some KKT points. Finally, a global optimization method is designed by combining the new local optimization method with an auxiliary function. Numerical examples show that our methods are efficient and stable.
Global optimality conditions for some classes of optimization problems
- Authors: Wu, Zhiyou , Rubinov, Alex
- Date: 2009
- Type: Text , Journal article
- Relation: Journal of Optimization Theory and Applications Vol. 145, no. 1 (2009), p. 164-185
- Full Text: false
- Reviewed:
- Description: We establish new necessary and sufficient optimality conditions for global optimization problems. In particular, we establish tractable optimality conditions for the problems of minimizing a weakly convex or concave function subject to standard constraints, such as box constraints, binary constraints, and simplex constraints. We also derive some new necessary and sufficient optimality conditions for quadratic optimization. Our main theoretical tool for establishing these optimality conditions is abstract convexity. © 2009 Springer Science+Business Media, LLC.
Global optimization of marginal functions with applications to economic equilibrium
- Authors: Bagirov, Adil , Rubinov, Alex
- Date: 2001
- Type: Text , Journal article
- Relation: Journal of Global Optimization Vol. 20, no. 3-4 (Aug 2001), p. 215-237
- Full Text: false
- Reviewed:
- Description: We discuss the applicability of the cutting angle method to global minimization of marginal functions. The search of equilibrium prices in the exchange model can be reduced to the global minimization of certain functions, which include marginal functions. This problem has been approximately solved by the cutting angle method. Results of numerical experiments are presented and discussed.
Global optimization over a box via canonical dual function
- Authors: Zhu, Jinghao , Wang, Chao , Gao, David
- Date: 2011
- Type: Text , Journal article
- Relation: Journal of Computational and Applied Mathematics Vol. 235, no. 5 (January 2011), p. 1141-1147
- Full Text: false
- Reviewed:
- Description: In this paper, we study global concave optimization by the canonical dual function. A differential flow on the dual feasible space is introduced. We show that the flow reaches a global minimizer of the concave function over a box. An example is illustrated.
Global solutions to a class of CEC benchmark constrained optimization problems
- Authors: Zhou, Xiaojun , Gao, David , Yang, Chunhua
- Date: 2016
- Type: Text , Journal article
- Relation: Optimization Letters Vol. 10, no. 3 (2016), p. 457-472
- Full Text:
- Reviewed:
- Description: This paper aims to solve a class of CEC benchmark constrained optimization problems that have been widely studied by nature-inspired optimization algorithms. Based on canonical duality theory, these challenging problems can be reformulated as a unified canonical dual problem over a convex set, which can be solved deterministically to obtain global optimal solutions in polynomial time. Applications are illustrated by some well-known CEC benchmark problems, and comparisons with other methods have demonstrated the effectiveness of the proposed approach. © 2014, Springer-Verlag Berlin Heidelberg.
Global solutions to fractional programming problem with ratio of nonconvex functions
- Authors: Ruan, Ning , Gao, David
- Date: 2015
- Type: Text , Journal article
- Relation: Applied Mathematics and Computation Vol. 255, no. (2015), p. 66-72
- Full Text: false
- Reviewed:
- Description: This paper presents a canonical dual approach for minimizing a sum of a quadratic function and a ratio of nonconvex functions in Rn. By introducing a parameter, the problem is first equivalently reformulated as a nonconvex polynomial minimization with an elliptic constraint. It is proved that, under certain conditions, the canonical dual is a concave maximization problem in R2 that exhibits no duality gap. Therefore, the global optimal solution of the primal problem can be obtained by solving the canonical dual problem. © 2014 Elsevier Inc. All rights reserved.
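The parametric reformulation mentioned in the abstract is in the spirit of the classical Dinkelbach scheme for fractional programming: min f(x)/g(x) with g > 0 is solved through the parametric family min_x f(x) − λ·g(x), whose optimal value vanishes exactly at the optimal ratio. A minimal sketch of that classical iteration (the generic parametric idea only, not the authors' canonical dual construction; the subproblem solver `argmin` is a caller-supplied oracle):

```python
def dinkelbach(f, g, argmin, lam0=0.0, tol=1e-10, max_iter=100):
    """Classical Dinkelbach iteration for min f(x)/g(x), assuming g > 0.

    Each step solves the parametric subproblem min_x f(x) - lam*g(x)
    via the caller-supplied oracle `argmin`; the ratio at its minimizer
    becomes the next parameter, and the sequence converges to the
    optimal ratio.
    """
    lam = lam0
    x = argmin(lam)
    for _ in range(max_iter):
        x = argmin(lam)              # minimizer of the parametric subproblem
        new_lam = f(x) / g(x)        # ratio at that minimizer
        if abs(new_lam - lam) < tol:
            return new_lam, x
        lam = new_lam
    return lam, x
```

For example, for f(x) = x² + 1 and g(x) = x + 2 over x ≥ 0, the parametric minimizer is x = λ/2, and the iteration converges to the optimal ratio 2√5 − 4 ≈ 0.4721 at x = √5 − 2.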
Global solutions to nonconvex optimization of 4th-order polynomial and log-sum-exp functions
- Authors: Chen, Yi , Gao, David
- Date: 2016
- Type: Text , Journal article
- Relation: Journal of Global Optimization Vol. 64, no. 3 (2016), p. 417-431
- Full Text:
- Reviewed:
- Description: This paper presents a canonical dual approach for solving a nonconvex global optimization problem governed by a sum of a 4th-order polynomial and a log-sum-exp function. Such problems arise extensively in engineering and the sciences. Based on the canonical duality-triality theory, this nonconvex problem is transformed into an equivalent dual problem, which can be solved easily under certain conditions. We prove that both the global minimizer and the largest local extrema of the primal problem can be obtained analytically from the canonical dual solutions. As two special cases, a quartic polynomial minimization and a minimax problem are discussed. Existence conditions are derived, which can be used to classify easy and relatively hard instances. Applications are illustrated by several nonconvex and nonsmooth examples. © 2014, Springer Science+Business Media New York.
Globally convergent algorithms for solving unconstrained optimization problems
- Authors: Taheri, Sona , Mammadov, Musa , Seifollahi, Sattar
- Date: 2013
- Type: Text , Journal article
- Relation: Optimization Vol. , no. (2013), p. 1-15
- Full Text:
- Reviewed:
- Description: New algorithms for solving unconstrained optimization problems are presented, based on the idea of combining two types of descent directions: the direction of the anti-gradient and either the Newton or a quasi-Newton direction. The use of the latter directions improves the convergence rate. Global and superlinear convergence properties of these algorithms are established. Numerical experiments using some unconstrained test problems are reported. The proposed algorithms are also compared experimentally with some existing similar methods; this comparison demonstrates the efficiency of the proposed combined methods.
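A common realisation of the combination described above is to attempt the Newton step and fall back to the anti-gradient whenever the Newton system is singular or its solution fails a descent test, globalising both with an Armijo backtracking line search. The sketch below follows that generic pattern; the switching rule and the line-search constants are assumptions of this illustration, not the authors' exact algorithm:

```python
import numpy as np

def hybrid_descent(f, grad, hess, x0, tol=1e-8, max_iter=200):
    """Hybrid descent: try the Newton direction, fall back to the
    anti-gradient when Newton is unusable, with Armijo backtracking.

    Generic sketch of combining the two direction types; the descent
    test on the Newton step is an assumed switching rule.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        try:
            d = np.linalg.solve(hess(x), -g)   # Newton step
            if g @ d >= 0:                     # not a descent direction
                d = -g
        except np.linalg.LinAlgError:          # singular Hessian
            d = -g
        t = 1.0                                # Armijo backtracking
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d
    return x
```

On a test problem such as the Rosenbrock function, the Newton direction is accepted on most iterations and the anti-gradient serves as a safeguard far from the minimizer.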
Gradient-free method for nonsmooth distributed optimization
- Authors: Li, Jueyou , Wu, Changzhi , Wu, Zhiyou , Long, Qiang
- Date: 2014
- Type: Text , Journal article
- Relation: Journal of Global Optimization Vol. 61, no. 2 (March 2014), p. 325-340
- Full Text:
- Reviewed:
- Description: In this paper, we consider a distributed nonsmooth optimization problem over a computational multi-agent network. We first extend Nesterov's (centralized) random gradient-free algorithm and Gaussian smoothing technique to the distributed case. Then, the convergence of the algorithm is proved. Furthermore, an explicit convergence rate is given in terms of the network size and topology. Our proposed method is gradient-free, which may be preferred by practical engineers. Since only cost function values are required, our method may in theory suffer a factor of up to d (the dimension of the agent) in convergence rate compared with distributed subgradient-based methods. However, our numerical simulations show that for some nonsmooth problems our method can even achieve better performance than subgradient-based methods, which may be caused by the slow convergence of subgradient steps in the nonsmooth setting.
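The centralised building block referenced above, Nesterov's random gradient-free oracle with Gaussian smoothing, estimates a gradient of the smoothed function f_mu(x) = E[f(x + mu·u)] from two function evaluations along a random Gaussian direction. A minimal single-agent sketch (the fixed step size and the test function are illustrative assumptions; the paper's distributed version adds consensus averaging among agents on top of this oracle):

```python
import numpy as np

def gaussian_gradient_free_step(f, x, mu=1e-4, step=1e-2, rng=None):
    """One step of a Nesterov-style random gradient-free scheme.

    The gradient of the Gaussian smoothing f_mu(x) = E[f(x + mu*u)],
    u ~ N(0, I), is estimated from two function values along a random
    Gaussian direction, so only zeroth-order information is needed.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    g = (f(x + mu * u) - f(x)) / mu * u      # estimator of grad f_mu(x)
    return x - step * g

def minimize_gradient_free(f, x0, iters=2000, seed=0, **kw):
    """Run the zeroth-order step with a fixed step size (illustrative)."""
    x = np.asarray(x0, dtype=float)
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        x = gaussian_gradient_free_step(f, x, rng=rng, **kw)
    return x
```

Only function values of f are ever queried, which is what makes the scheme attractive when subgradients are unavailable or expensive.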
H-infinity via a nonsmooth, nonconvex optimization approach
- Authors: Mammadov, Musa , Orsi, Robert
- Date: 2005
- Type: Text , Journal article
- Relation: Pacific Journal of Optimization Vol. 1, no. 2 (2005), p. 405-420
- Full Text: false
- Reviewed:
- Description: A numerical method for solving the H-infinity synthesis problem is presented. The problem is posed as an unconstrained, nonsmooth, nonconvex minimization problem. The optimization variables consist solely of the entries of the output feedback matrix. No additional variables, such as Lyapunov variables, need to be introduced. The main part of the optimization procedure uses a line search mechanism where the descent direction is defined by a recently introduced dynamical systems approach. Numerical results for various benchmark problems are included in the paper. In addition, the effectiveness of a preliminary part of the algorithm for successfully and quickly finding stabilizing controllers is also demonstrated.
- Description: C1
- Description: 2003001382
Hyperbolic smoothing function method for minimax problems
- Authors: Bagirov, Adil , Al Nuaimat, Alia , Sultanova, Nargiz
- Date: 2013
- Type: Text , Journal article
- Relation: Optimization Vol. 62, no. 6 (2013), p. 759-782
- Full Text: false
- Reviewed:
- Description: In this article, an approach for solving finite minimax problems is proposed. This approach is based on the use of hyperbolic smoothing functions. In order to apply the hyperbolic smoothing, we reformulate the objective function in the minimax problem and study the relationship between the original minimax and reformulated problems. We also study the main properties of the hyperbolic smoothing function. Based on these results, an algorithm for solving the finite minimax problem is proposed and implemented in the General Algebraic Modelling System (GAMS). We present preliminary results of numerical experiments with well-known nonsmooth optimization test problems. We also compare the proposed algorithm with the algorithm that uses the exponential smoothing function, as well as with the algorithm based on a nonlinear programming reformulation of the finite minimax problem. © 2013 Copyright Taylor and Francis Group, LLC.
- Description: 2003011099
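The smoothing device at the heart of the approach above can be illustrated directly: since max(a, b) = (a + b + |a − b|)/2, replacing |·| with sqrt((·)² + τ²) gives a smooth majorant of the pairwise max that is accurate to within τ/2, and a finite minimax problem can then be attacked by minimising the smoothed max for a decreasing sequence of τ values. A minimal sketch, assuming pairwise folding over the function list, an ad hoc τ schedule, and `scipy`'s BFGS as the inner smooth solver (none of which is the article's exact reformulation):

```python
import numpy as np
from scipy.optimize import minimize

def hyper_max(a, b, tau):
    """Hyperbolic smoothing of max(a, b); the error is at most tau / 2."""
    return 0.5 * (a + b + np.sqrt((a - b) ** 2 + tau ** 2))

def smooth_minimax(fs, x0, taus=(1.0, 0.1, 0.01, 1e-4)):
    """min_x max_i f_i(x) via a sequence of hyperbolically smoothed problems.

    Each stage minimises the smoothed max with BFGS and warm-starts the
    next stage with a smaller smoothing parameter.
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for tau in taus:
        def obj(y):
            m = fs[0](y)
            for fi in fs[1:]:          # fold the smoothed pairwise max
                m = hyper_max(m, fi(y), tau)
            return m
        x = minimize(obj, x, method="BFGS").x
    return x
```

For example, for f1(x) = x² and f2(x) = (x − 2)² the minimax solution is x = 1 with value 1, which the smoothed stages recover as τ shrinks.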
Implementation of novel methods of global and nonsmooth optimization : GANSO programming library
- Authors: Beliakov, Gleb , Ugon, Julien
- Date: 2007
- Type: Text , Journal article
- Relation: Optimization Vol. 56, no. 5-6 (2007), p. 543-546
- Full Text:
- Reviewed:
- Description: We discuss the implementation of a number of modern methods of global and nonsmooth continuous optimization, based on the ideas of Rubinov, in a programming library GANSO. GANSO implements the derivative-free bundle method, the extended cutting angle method, dynamical system-based optimization and their various combinations and heuristics. We outline the main ideas behind each method, and report on the interfacing with Matlab and Maple packages.
- Description: C1
- Description: 2003004865