A DC programming approach for sensor network localization with uncertainties in anchor positions
- Authors: Wu, Changzhi , Li, Chaojie , Long, Qiang
- Date: 2014
- Type: Text , Journal article
- Relation: Journal of Industrial and Management Optimization Vol. 10, no. 3 (2014), p. 817-826
- Full Text: false
- Reviewed:
- Description: The sensor network localization problem with uncertainties in anchor positions is studied in this paper. We formulate this problem as a DC (difference of two convex functions) program. Then, a DC programming based algorithm is proposed to solve it. Simulation results show that our proposed method achieves better performance than existing ones.
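For illustration, the basic DCA iteration underlying DC programming (linearize the concave part -h at the current point and minimize the remaining convex model) can be sketched as follows; the decomposition f(x) = x^4 - x^2 is a toy example, not the paper's localization model:

```python
import math

def dca(grad_h, argmin_g_linear, x0, iters=50):
    # DCA for min f(x) = g(x) - h(x), both g and h convex:
    # x_{k+1} = argmin_x  g(x) - <grad h(x_k), x>
    x = x0
    for _ in range(iters):
        x = argmin_g_linear(grad_h(x))
    return x

# Toy DC decomposition: f(x) = x**4 - x**2 with g(x) = x**4, h(x) = x**2
grad_h = lambda x: 2.0 * x
# argmin_x x**4 - s*x  solves  4x**3 = s,  i.e.  x = sign(s) * (|s|/4)**(1/3)
argmin_g_linear = lambda s: math.copysign((abs(s) / 4.0) ** (1.0 / 3.0), s)

x_star = dca(grad_h, argmin_g_linear, x0=1.0)
# converges to the stationary point x = 1/sqrt(2) of x**4 - x**2
```

Each DCA step only requires solving a convex subproblem, which is what makes the DC reformulation attractive for the nonconvex localization problem.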
A hybrid method combining genetic algorithm and Hooke-Jeeves method for constrained global optimization
- Authors: Long, Qiang , Wu, Changzhi
- Date: 2014
- Type: Text , Journal article
- Relation: Journal of Industrial and Management Optimization Vol. 10, no. 4 (2014), p. 1279-1296
- Full Text:
- Reviewed:
- Description: A new global optimization method combining the genetic algorithm and the Hooke-Jeeves method to solve a class of constrained optimization problems is studied in this paper. We first introduce the quadratic penalty function method and the exact penalty function method to transform the original constrained optimization problem, with general equality and inequality constraints, into a sequence of optimization problems with only box constraints. Then, the combination of the genetic algorithm and the Hooke-Jeeves method is applied to solve the transformed problems. Since the Hooke-Jeeves method is good at local search, our proposed method dramatically improves the accuracy and convergence rate of the genetic algorithm. Because the Hooke-Jeeves method is derivative-free, our method requires only objective function values, which not only overcomes the computational difficulties caused by the ill-conditioning of the quadratic penalty function, but also handles the non-differentiability of the exact penalty function. Some well-known test problems are investigated. The numerical results show that our proposed method is efficient and robust.
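The derivative-free Hooke-Jeeves pattern search applied to a quadratic-penalty subproblem can be sketched as follows; the one-dimensional constrained problem and the penalty weight are illustrative assumptions, not taken from the paper:

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-8, max_iter=1000):
    """Hooke-Jeeves pattern search: exploratory moves along coordinates,
    followed by a pattern (extrapolation) move; uses function values only."""
    n = len(x0)

    def explore(base, h):
        x, fx = list(base), f(base)
        for i in range(n):
            for d in (h, -h):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        return x, fx

    base, fbase = list(x0), f(x0)
    h = step
    for _ in range(max_iter):
        if h <= tol:
            break
        x, fx = explore(base, h)
        if fx < fbase:
            # pattern move: extrapolate along the successful direction
            pattern = [2 * xi - bi for xi, bi in zip(x, base)]
            xp, fp = explore(pattern, h)
            base, fbase = (xp, fp) if fp < fx else (x, fx)
        else:
            h *= shrink  # no improvement: refine the mesh
    return base, fbase

# Quadratic penalty for the toy constrained problem: min (x-2)^2 s.t. x <= 1
mu = 1e4
penalized = lambda x: (x[0] - 2.0) ** 2 + mu * max(0.0, x[0] - 1.0) ** 2
x_opt, f_opt = hooke_jeeves(penalized, [0.0])
# x_opt[0] is close to 1, the constrained minimizer
```

In the paper's hybrid scheme a search of this kind refines candidate solutions produced by the genetic algorithm; here it is shown stand-alone on a single penalty subproblem.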
Distributed proximal-gradient method for convex optimization with inequality constraints
- Authors: Li, Jueyou , Wu, Changzhi , Wu, Zhiyou , Long, Qiang , Wang, Xiangyu
- Date: 2014
- Type: Text , Journal article
- Relation: ANZIAM Journal Vol. 56, no. 2 (2014), p. 160-178
- Full Text: false
- Reviewed:
- Description: We consider a distributed optimization problem over a multi-agent network, in which the sum of several local convex objective functions is minimized subject to global convex inequality constraints. We first transform the constrained optimization problem into an unconstrained one using the exact penalty function method. The transformed problem has a smaller number of variables and a simpler structure than those of existing distributed primal-dual subgradient methods for constrained distributed optimization problems. Exploiting the special structure of this problem, we then propose a distributed proximal-gradient algorithm over a time-varying connectivity network, and establish a convergence rate that depends on the number of iterations, the network topology and the number of agents. Although the transformed problem is nonsmooth by nature, our method can still achieve a convergence rate of O(1/k) after k iterations, which is faster than the rate O(1/√k) of existing distributed subgradient-based methods. Simulation experiments on a distributed state estimation problem illustrate the excellent performance of our proposed method. Copyright © 2014 Australian Mathematical Society.
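A minimal centralized, single-agent proximal-gradient iteration of the kind the abstract builds on can be sketched as follows, using the l1-norm as the nonsmooth penalty term; the test problem is an illustrative assumption, not the paper's penalized formulation:

```python
def prox_l1(v, t):
    # proximal operator of t*||.||_1, i.e. componentwise soft-thresholding
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def proximal_gradient(grad_f, prox, x0, step, iters=200):
    # x_{k+1} = prox_{step*r}( x_k - step * grad f(x_k) )
    x = list(x0)
    for _ in range(iters):
        g = grad_f(x)
        x = prox([xi - step * gi for xi, gi in zip(x, g)], step)
    return x

# Toy problem: min 0.5*(x-3)^2 + lam*|x|; its minimizer soft-thresholds 3 at lam
lam = 1.0
grad_f = lambda x: [x[0] - 3.0]           # gradient of the smooth part
prox = lambda v, t: prox_l1(v, lam * t)   # prox of the nonsmooth part
x = proximal_gradient(grad_f, prox, [0.0], step=0.5)
# x[0] converges to 2.0
```

Handling the nonsmooth term through its proximal operator, rather than through subgradients, is what allows the O(1/k) rate despite nonsmoothness.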
Gradient-free method for nonsmooth distributed optimization
- Authors: Li, Jueyou , Wu, Changzhi , Wu, Zhiyou , Long, Qiang
- Date: 2014
- Type: Text , Journal article
- Relation: Journal of Global Optimization Vol. 61, no. 2 (March 2014), p. 325-340
- Full Text:
- Reviewed:
- Description: In this paper, we consider a distributed nonsmooth optimization problem over a computational multi-agent network. We first extend the (centralized) Nesterov random gradient-free algorithm and Gaussian smoothing technique to the distributed case. Then, the convergence of the algorithm is proved. Furthermore, an explicit convergence rate is given in terms of the network size and topology. Our proposed method is gradient-free, which may be preferred by practical engineers. Since only cost function values are required, our method may in theory suffer a factor of up to d (the dimension of the agent) in the convergence rate compared with distributed subgradient-based methods. However, our numerical simulations show that for some nonsmooth problems our method can even achieve better performance than subgradient-based methods, which may be caused by the slow convergence of the subgradient iterations on such problems.
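Nesterov's random gradient-free oracle with Gaussian smoothing, which the abstract extends to the distributed setting, can be sketched in its centralized form as follows; the diminishing step-size rule and the test function are illustrative assumptions:

```python
import math, random

def gaussian_smoothing_grad(f, x, mu=1e-4):
    # Random gradient-free oracle: estimates the gradient of the Gaussian-
    # smoothed surrogate f_mu(x) = E_u[f(x + mu*u)], u ~ N(0, I), using
    # only two function values per call.
    u = [random.gauss(0.0, 1.0) for _ in x]
    scale = (f([xi + mu * ui for xi, ui in zip(x, u)]) - f(x)) / mu
    return [scale * ui for ui in u]

def gradient_free_descent(f, x0, step=0.1, iters=5000, seed=0):
    random.seed(seed)
    x = list(x0)
    for k in range(iters):
        g = gaussian_smoothing_grad(f, x)
        s = step / math.sqrt(k + 1)  # diminishing step size
        x = [xi - s * gi for xi, gi in zip(x, g)]
    return x

# Nonsmooth test function: f(x) = |x - 1|, minimizer x* = 1
x = gradient_free_descent(lambda v: abs(v[0] - 1.0), [4.0])
# x[0] ends up near 1, up to the noise of the random oracle
```

Because the oracle touches f only through function values, it applies unchanged to nonsmooth objectives; the price is the dimension-dependent factor in the rate mentioned in the abstract.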
A quasisecant method for solving a system of nonsmooth equations
- Authors: Long, Qiang , Wu, Changzhi
- Date: 2013
- Type: Text , Journal article
- Relation: Computers and Mathematics with Applications Vol. 66, no. 4 (2013), p. 419-431
- Full Text: false
- Reviewed:
- Description: In this paper, the solution of systems of nonsmooth equations is studied. We first transform the problem into an equivalent nonsmooth optimization problem, and then the quasisecant method is introduced to solve it. Some nonsmooth equations arising from bilevel programming problems are solved by our proposed method. The numerical results show the effectiveness and efficiency of our proposed method. © 2013 Elsevier Ltd. All rights reserved.
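The reformulation of a system of nonsmooth equations F(x) = 0 as a nonsmooth optimization problem can be sketched as follows; a simple compass search stands in for the quasisecant method itself, and the two-equation system is an illustrative example, not one of the paper's bilevel test problems:

```python
def merit(F, x):
    # Nonsmooth merit function f(x) = max_i |F_i(x)|:
    # f attains its global minimum value 0 exactly at the solutions of F(x) = 0.
    return max(abs(fi) for fi in F(x))

def compass_search(f, x0, h=1.0, tol=1e-10):
    # Simple derivative-free minimizer used here in place of the quasisecant
    # solver: probe each coordinate direction, halve the step on failure.
    x, fx = list(x0), f(x0)
    while h > tol:
        improved = False
        for i in range(len(x)):
            for d in (h, -h):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            h *= 0.5
    return x, fx

# Nonsmooth system: |x1| + x2 - 2 = 0 and x1 - 1 = 0, with solution (1, 1)
F = lambda x: [abs(x[0]) + x[1] - 2.0, x[0] - 1.0]
x, fval = compass_search(lambda v: merit(F, v), [0.0, 0.0])
# fval reaches 0 at x = (1, 1)
```

A vanishing merit value certifies a solution of the original system, which is why the equation-solving problem and the optimization problem are equivalent.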