Distributed proximal-gradient method for convex optimization with inequality constraints
- Authors: Li, Jueyou, Wu, Changzhi, Wu, Zhiyou, Long, Qiang, Wang, Xiangyu
- Date: 2014
- Type: Text, Journal article
- Relation: ANZIAM Journal Vol. 56, no. 2 (2014), p. 160-178
- Full Text: false
- Reviewed:
- Description: We consider a distributed optimization problem over a multi-agent network, in which the sum of several local convex objective functions is minimized subject to global convex inequality constraints. We first transform the constrained optimization problem into an unconstrained one using the exact penalty function method. The transformed problem has fewer variables and a simpler structure than those of existing distributed primal-dual subgradient methods for constrained distributed optimization problems. Exploiting the special structure of this problem, we then propose a distributed proximal-gradient algorithm over a time-varying connectivity network, and establish a convergence rate that depends on the number of iterations, the network topology and the number of agents. Although the transformed problem is nonsmooth by nature, our method can still achieve a convergence rate of O(1/k) after k iterations, which is faster than the O(1/√k) rate of existing distributed subgradient-based methods. Simulation experiments on a distributed state estimation problem illustrate the excellent performance of our proposed method. Copyright © 2014 Australian Mathematical Society.
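The abstract describes the method only at a high level. A minimal single-constraint sketch of the two ingredients it names, an exact penalty reformulation and a distributed proximal-gradient step over a mixing network, might look as follows; the quadratic local costs, ring topology, mixing weights, step size and penalty weight are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Sketch (not the paper's exact algorithm): n agents cooperatively minimize
# (1/n) * sum_i 0.5*||x - c_i||^2 subject to the global inequality constraint
# sum(x) - 1 <= 0, handled via the exact penalty h(x) = rho * max(0, sum(x) - 1).
n, d = 4, 3
c = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [1., 1., 1.]])
rho, alpha, iters = 1.0, 0.01, 4000

# Doubly stochastic mixing matrix for a 4-agent ring (illustrative topology).
W = np.array([[.50, .25, .00, .25],
              [.25, .50, .25, .00],
              [.00, .25, .50, .25],
              [.25, .00, .25, .50]])

def prox_penalty(v, t):
    """Prox of t*rho*max(0, sum(v) - 1): shift all coordinates down equally."""
    theta = np.clip((v.sum() - 1.0) / d, 0.0, t * rho)
    return v - theta

x = c.copy()  # each agent starts from its own data point
for _ in range(iters):
    v = W @ x                  # consensus (mixing) step over the network
    grad = v - c               # local gradients of 0.5*||x - c_i||^2
    x = np.array([prox_penalty(v[i] - alpha * grad[i], alpha) for i in range(n)])

x_bar = x.mean(axis=0)
# For this data the constrained minimizer is x* = [1/3, 1/3, 1/3], and the
# agents' iterates reach approximate consensus near it.
```

The nonsmooth penalty is handled entirely by its proximal operator, which here has a closed form; this is the structural advantage the abstract contrasts with subgradient-based schemes.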
Gradient-free method for nonsmooth distributed optimization
- Authors: Li, Jueyou, Wu, Changzhi, Wu, Zhiyou, Long, Qiang
- Date: 2014
- Type: Text, Journal article
- Relation: Journal of Global Optimization Vol. 61, no. 2 (March 2014), p. 325-340
- Full Text:
- Reviewed:
- Description: In this paper, we consider a distributed nonsmooth optimization problem over a computational multi-agent network. We first extend the (centralized) Nesterov random gradient-free algorithm and Gaussian smoothing technique to the distributed case. Then the convergence of the algorithm is proved, and an explicit convergence rate is given in terms of the network size and topology. Our proposed method is gradient-free, which may be preferred by practical engineers. Since only cost function values are required, our method may in theory suffer a factor of up to d (the dimension of each agent's decision variable) in the convergence rate relative to distributed subgradient-based methods. However, our numerical simulations show that, for some nonsmooth problems, our method can even achieve better performance than subgradient-based methods, which may be due to the slow convergence of subgradient steps on such problems.
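The record describes the oracle only in words. The Gaussian smoothing construction it refers to replaces the (sub)gradient with a two-point random finite-difference estimate; a centralized single-agent sketch is below (the distributed version in the paper adds a consensus step). The test objective, step-size schedule and smoothing parameter are illustrative assumptions.

```python
import numpy as np

# Centralized sketch of a random gradient-free method with Gaussian smoothing.
# Only function values f(.) are queried -- no gradients or subgradients.
rng = np.random.default_rng(0)

def f(x):
    return np.abs(x).sum()        # nonsmooth test objective ||x||_1, minimum 0

def gradient_free_oracle(x, mu=1e-3):
    """Two-point estimator: g = (f(x + mu*u) - f(x)) / mu * u, u ~ N(0, I)."""
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

x = np.array([1.0, -1.0])
best = f(x)                       # best objective value seen so far; f(x0) = 2
for k in range(2000):
    alpha = 0.1 / np.sqrt(k + 1)  # diminishing steps, as in subgradient schemes
    x = x - alpha * gradient_free_oracle(x)
    best = min(best, f(x))
```

The estimator's variance grows with the dimension d of x, which is the source of the factor-of-d penalty in the theoretical rate that the abstract mentions.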