Global optimality conditions and optimization methods for polynomial programming problems
- Authors: Wu, Zhiyou, Tian, Jing, Ugon, Julien
- Date: 2015
- Type: Text , Journal article
- Relation: Journal of Global Optimization Vol. 62, no. 4 (2015), p. 617-641
- Full Text: false
- Reviewed:
- Description: This paper is concerned with the general polynomial programming problem with box constraints, covering both global optimality conditions and optimization methods. First, a necessary global optimality condition for a general polynomial programming problem with box constraints is given. Then we design a local optimization method that uses this necessary global optimality condition to obtain strongly or ε-strongly local minimizers, which substantially improve on some KKT points. Finally, a global optimization method is designed by combining the new local optimization method with an auxiliary function. Numerical examples show that our methods are efficient and stable.
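As background for the record above: the paper's necessary global optimality condition strengthens the standard first-order (KKT) condition for box constraints, which is the baseline the authors improve on. A minimal sketch of checking that standard baseline condition (this is the textbook first-order condition, not the paper's stronger global condition; the function name and example are illustrative):

```python
import numpy as np

def kkt_box_residual(grad, x, lb, ub, tol=1e-8):
    """Residual of the first-order necessary condition for min f(x), lb <= x <= ub.

    At a local minimizer: grad_i >= 0 where x_i sits at the lower bound,
    grad_i <= 0 where x_i sits at the upper bound, and grad_i == 0 in the
    interior. Returns the maximum violation (0 means the condition holds).
    """
    r = np.abs(grad).astype(float)          # interior coordinates: |grad_i|
    at_lb = x <= lb + tol
    at_ub = x >= ub - tol
    # at a lower bound only a negative gradient component is a violation
    r[at_lb] = np.maximum(0.0, -grad[at_lb])
    # at an upper bound only a positive gradient component is a violation
    r[at_ub] = np.maximum(0.0, grad[at_ub])
    return float(r.max())

# f(x) = x0^2 + x1 on the box [0,1]^2; the minimizer is (0, 0)
grad = np.array([0.0, 1.0])                 # gradient of f at (0, 0)
x = np.array([0.0, 0.0])
lb, ub = np.array([0.0, 0.0]), np.array([1.0, 1.0])
print(kkt_box_residual(grad, x, lb, ub))    # 0.0
```

Points satisfying only this condition can still be far from globally optimal, which is why the paper's stronger necessary condition is useful for escaping poor KKT points.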
Gradient-free method for nonsmooth distributed optimization
- Authors: Li, Jueyou, Wu, Changzhi, Wu, Zhiyou, Long, Qiang
- Date: 2014
- Type: Text , Journal article
- Relation: Journal of Global Optimization Vol. 61, no. 2 (March 2014), p. 325-340
- Full Text:
- Reviewed:
- Description: In this paper, we consider a distributed nonsmooth optimization problem over a computational multi-agent network. We first extend the (centralized) Nesterov random gradient-free algorithm with its Gaussian smoothing technique to the distributed case. Then, the convergence of the algorithm is proved, and an explicit convergence rate is given in terms of the network size and topology. Our proposed method is gradient-free, which may be preferable in practical engineering applications. Since only cost function values are required, our method may in theory suffer a factor of up to d (the dimension of the agent) in its convergence rate compared with distributed subgradient-based methods. However, our numerical simulations show that for some nonsmooth problems our method can even outperform subgradient-based methods, which may be explained by the slow convergence of subgradient steps on such problems.
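The core building block named in the record above, Nesterov's random gradient-free oracle with Gaussian smoothing, uses only function values: it perturbs the point by a Gaussian direction u and returns a scaled finite difference along u. A minimal centralized sketch (the function names, step size, and test problem are illustrative, not the paper's distributed algorithm):

```python
import numpy as np

def gradient_free_oracle(f, x, mu=1e-4, rng=None):
    """One-sample random gradient-free oracle.

    Draws u ~ N(0, I) and returns ((f(x + mu*u) - f(x)) / mu) * u, an
    estimator of the gradient of the Gaussian smoothing
    f_mu(x) = E_u[f(x + mu*u)]. Only function values of f are required.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

# minimize the nonsmooth f(x) = ||x||_1 with a simple gradient-free scheme
f = lambda x: np.abs(x).sum()
rng = np.random.default_rng(0)
x = np.array([1.0, -2.0])
for k in range(2000):
    # average a few oracle samples to reduce the estimator's variance
    g = np.mean([gradient_free_oracle(f, x, rng=rng) for _ in range(5)], axis=0)
    x -= 0.01 * g
print(f(x))  # typically far below the starting value f = 3.0
```

The variance of this estimator grows with the dimension d, which is the source of the factor-of-d penalty in the theoretical rate mentioned in the abstract; in the paper's distributed setting each agent would apply such an oracle to its local cost and average iterates with its network neighbors.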