- Title
- Gradient-free method for nonsmooth distributed optimization
- Creator
- Li, Jueyou; Wu, Changzhi; Wu, Zhiyou; Long, Qiang
- Date
- 2014
- Type
- Text; Journal article
- Identifier
- http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/61584
- Identifier
- vital:5908
- Identifier
- https://doi.org/10.1007/s10898-014-0174-2
- Identifier
- ISSN:0925-5001
- Abstract
- In this paper, we consider a distributed nonsmooth optimization problem over a computational multi-agent network. We first extend the (centralized) Nesterov random gradient-free algorithm and Gaussian smoothing technique to the distributed case. Then, the convergence of the algorithm is proved. Furthermore, an explicit convergence rate is given in terms of the network size and topology. Our proposed method is gradient-free, which may be preferred by practising engineers. Since only the cost function value is required, our method may, in theory, suffer a factor of up to d (the dimension of the agent) in convergence rate compared with distributed subgradient-based methods. However, our numerical simulations show that for some nonsmooth problems our method can achieve even better performance than subgradient-based methods, which may be due to the slow convergence of subgradient-based methods.
- Relation
- Journal of Global Optimization Vol. 61, no. 2 (March 2014), p. 325-340
- Rights
- © Springer, Part of Springer Science+Business Media
- Rights
- Open Access
- Rights
- This metadata is freely available under a CC0 license
- Subject
- 0102 Applied Mathematics; 0103 Numerical and Computational Mathematics; 0802 Computation Theory and Mathematics; Distributed algorithm; Gaussian smoothing; Gradient-free method; Convex optimization
- Full Text
- Reviewed
File | Description | Size | Format
---|---|---|---
SOURCE2 | Accepted Version | 177 KB | Adobe Acrobat PDF
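
The abstract describes extending Nesterov's random gradient-free oracle, built on Gaussian smoothing, from the centralized to the distributed setting. As an illustration only (not part of this record, and not the paper's distributed algorithm), a minimal sketch of the centralized two-point oracle might look like the following; the function names, the test function, the smoothing parameter, and the step size are assumptions chosen for the example.

```python
import numpy as np

def gaussian_smoothed_gradient_estimate(f, x, mu=1e-4, rng=None):
    """Two-point random gradient-free oracle based on Gaussian smoothing.

    Returns g = (f(x + mu*u) - f(x)) / mu * u with u ~ N(0, I),
    an estimator of the gradient of the smoothed function
    f_mu(x) = E_u[f(x + mu*u)]; only function values of f are used.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

if __name__ == "__main__":
    # Illustrative run on a nonsmooth function, f(x) = ||x||_1.
    f = lambda x: np.abs(x).sum()
    x = np.array([1.0, -2.0, 0.5])
    step = 0.01
    for _ in range(1000):
        x -= step * gaussian_smoothed_gradient_estimate(f, x, mu=1e-3)
    print(x)  # should land near the minimizer at the origin, up to noise of order `step`
```

In the distributed setting the abstract refers to, each agent would presumably combine such a gradient-free step on its local cost with consensus averaging over its network neighbours; the exact update and its convergence rate in terms of network size and topology are given in the paper itself.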