Gradient-free method for nonsmooth distributed optimization

- Li, Jueyou, Wu, Changzhi, Wu, Zhiyou, Long, Qiang

**Authors:** Li, Jueyou; Wu, Changzhi; Wu, Zhiyou; Long, Qiang
**Date:** 2014
**Type:** Text; Journal article
**Relation:** Journal of Global Optimization Vol. 61, no. 2 (March 2014), p. 325-340
**Description:** In this paper, we consider a distributed nonsmooth optimization problem over a computational multi-agent network. We first extend Nesterov's (centralized) random gradient-free algorithm and Gaussian smoothing technique to the distributed case. Then, the convergence of the algorithm is proved. Furthermore, an explicit convergence rate is given in terms of the network size and topology. Our proposed method is gradient-free, which may be preferred by practising engineers. Since only cost function values are required, our method may in theory suffer a factor of up to d (the dimension of each agent's decision variable) in convergence rate compared with distributed subgradient-based methods. However, our numerical simulations show that for some nonsmooth problems our method can achieve even better performance than subgradient-based methods, which may be caused by the slow convergence of subgradient steps.
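The Gaussian-smoothing oracle underlying this paper can be sketched as follows. This is a minimal illustration of a Nesterov-style random gradient-free estimate (not the paper's distributed algorithm); the test function, smoothing parameter and sample count are assumptions:

```python
import numpy as np

def gradient_free_oracle(f, x, mu, rng):
    """One random gradient-free sample: an unbiased estimate of the
    gradient of the Gaussian-smoothed function
    f_mu(x) = E_u[f(x + mu*u)], with u ~ N(0, I)."""
    u = rng.standard_normal(x.shape)
    # Only two function values are used -- no (sub)gradient of f.
    return (f(x + mu * u) - f(x)) / mu * u

# Example: the nonsmooth f(x) = ||x||_1.
f = lambda x: np.abs(x).sum()
x = np.array([1.0, -2.0, 0.5])
rng = np.random.default_rng(0)
# Averaging many samples approximates the gradient of the smoothed f,
# which is close to sign(x) away from the kinks of the l1-norm.
g = np.mean([gradient_free_oracle(f, x, 1e-3, rng) for _ in range(2000)],
            axis=0)
```

Each sample has high variance (this is the source of the dimension factor d mentioned in the abstract), so in practice the estimates are averaged over iterations rather than computed exactly at each step.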

A New Objective Penalty Function Approach for Solving Constrained Minimax Problems

- Li, Jueyou, Wu, Zhiyou, Long, Qiang

**Authors:** Li, Jueyou; Wu, Zhiyou; Long, Qiang
**Date:** 2014
**Type:** Text; Journal article
**Relation:** Journal of the Operations Research Society of China Vol. 2, no. 1 (March 2014), p. 93-108
**Full Text:** false
**Description:** In this paper, a new objective penalty function approach is proposed for solving minimax programming problems with equality and inequality constraints. The new objective penalty function combines an objective penalty with a constraint penalty. Through it, a constrained minimax problem is converted to the minimization of a sequence of continuously differentiable functions over a simple box constraint, so any efficient gradient-based minimization method can be applied at each step of the sequence. Some relationships between the original constrained minimax problem and the corresponding box-constrained minimization problems are established. Based on these results, an algorithm for finding a global solution of the constrained minimax problem is proposed by exploiting the particular structure of minimax problems, and its global convergence is proved under some conditions. Furthermore, an algorithm for finding a local solution is developed, with its convergence proved under certain conditions. Preliminary numerical experiments with well-known test problems show that satisfactory approximate solutions can be obtained for some constrained minimax problems.
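The penalty idea described in this abstract can be illustrated on a toy problem: min over x of max(x², (x−2)²) subject to x ≥ 0.5, recast as minimizing a target level t plus squared penalties over a box. The penalty form, problem data and weight M below are assumptions for illustration, not the authors' exact objective penalty function:

```python
import numpy as np

def penalty(x, t, M=100.0):
    """Toy penalty: the target level t plus squared penalties for the
    minimax inequalities f_i(x) <= t and the constraint x >= 0.5."""
    viol1 = np.maximum(0.0, x**2 - t)            # f_1(x) = x^2
    viol2 = np.maximum(0.0, (x - 2.0)**2 - t)    # f_2(x) = (x-2)^2
    violg = np.maximum(0.0, 0.5 - x)             # constraint x >= 0.5
    return t + M * (viol1**2 + viol2**2 + violg**2)

# The penalty is continuously differentiable in (x, t), so a gradient
# method could be used; a brute-force search over the box [0,2] x [0,2]
# keeps this sketch deterministic.
xs = np.linspace(0.0, 2.0, 401)
ts = np.linspace(0.0, 2.0, 401)
X, T = np.meshgrid(xs, ts)
P = penalty(X, T)
i, j = np.unravel_index(np.argmin(P), P.shape)
x_star, t_star = xs[j], ts[i]
```

For this toy problem the minimax solution is x = 1 with value 1, and the penalty minimizer recovers it to within O(1/M): the level t settles at 1 − 1/(4M).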

Distributed proximal-gradient method for convex optimization with inequality constraints

- Li, Jueyou, Wu, Changzhi, Wu, Zhiyou, Long, Qiang, Wang, Xiangyu

**Authors:** Li, Jueyou; Wu, Changzhi; Wu, Zhiyou; Long, Qiang; Wang, Xiangyu
**Date:** 2014
**Type:** Text; Journal article
**Relation:** ANZIAM Journal Vol. 56, no. 2 (2014), p. 160-178
**Full Text:** false
**Description:** We consider a distributed optimization problem over a multi-agent network, in which the sum of several local convex objective functions is minimized subject to global convex inequality constraints. We first transform the constrained optimization problem into an unconstrained one using the exact penalty function method. The transformed problem has fewer variables and a simpler structure than those handled by existing distributed primal-dual subgradient methods for constrained distributed optimization. Exploiting this special structure, we then propose a distributed proximal-gradient algorithm over a time-varying connectivity network, and establish a convergence rate depending on the number of iterations, the network topology and the number of agents. Although the transformed problem is nonsmooth by nature, our method still achieves a convergence rate of O(1/k) after k iterations, which is faster than the O(1/√k) rate of existing distributed subgradient-based methods. Simulation experiments on a distributed state estimation problem illustrate the excellent performance of the proposed method. Copyright © 2014 Australian Mathematical Society.
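A distributed proximal-gradient iteration of the kind described here can be sketched in a simplified synchronous form. The fixed mixing matrix, scalar decision variables, quadratic local objectives and an l1 term standing in for the nonsmooth exact penalty are all assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam*|.| (stand-in nonsmooth penalty term)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Three agents on a fully connected graph; W is doubly stochastic.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
c = np.array([1.0, 2.0, 3.0])   # local data: f_i(x) = 0.5*(x - c_i)^2
rho, alpha = 0.3, 0.1           # penalty weight and (constant) step size
x = np.zeros(3)                 # one scalar estimate per agent

for k in range(500):
    y = W @ x                                          # consensus (mixing) step
    grad = y - c                                       # local gradients of f_i at y_i
    x = soft_threshold(y - alpha * grad, alpha * rho)  # local proximal step
```

With a constant step size the agents reach only approximate consensus, but their average settles at the minimizer of the averaged problem (1/3)Σ 0.5(x − c_i)² + ρ|x|, which here is 2 − ρ = 1.7. Time-varying networks replace the fixed W with a sequence of mixing matrices.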

