A DC programming approach for sensor network localization with uncertainties in anchor positions
- Authors: Wu, Changzhi; Li, Chaojie; Long, Qiang
- Date: 2014
- Type: Text; Journal article
- Relation: Journal of Industrial and Management Optimization Vol. 10, no. 3 (2014), p. 817-826
- Full Text: false
- Reviewed:
- Description: Sensor network localization with uncertainties in anchor positions is studied in this paper. We formulate this problem as a DC (difference of two convex functions) program, and a DC programming based algorithm is then proposed to solve it. Simulation results show that our proposed method achieves better performance than existing methods.
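The abstract above follows the standard DCA (difference-of-convex algorithm) pattern: write the objective as f = g − h with g, h convex, linearize h at the current iterate, and solve the resulting convex subproblem. A minimal sketch on a one-dimensional toy problem f(x) = x⁴ − x² (not the paper's localization objective; the closed-form subproblem solution is chosen purely for illustration):

```python
import numpy as np

def dca_toy(x0, iters=200):
    """DCA on f(x) = g(x) - h(x) with g(x) = x**4 and h(x) = x**2.

    Each iteration linearizes h at the current iterate x_k,
    h(y) ~ h(x_k) + 2*x_k*(y - x_k), and minimizes the convex
    surrogate g(y) - 2*x_k*y; its stationarity condition
    4*y**3 = 2*x_k gives the closed-form update y = cbrt(x_k / 2).
    """
    x = float(x0)
    for _ in range(iters):
        x = np.cbrt(x / 2.0)  # minimizer of the convex surrogate
    return x
```

Stationary points of f satisfy 4x³ − 2x = 0, i.e. x = ±1/√2, and starting from x0 = 1 the iterates converge linearly to 1/√2; in the paper the subproblem is a convex program solved numerically rather than in closed form.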
A hybrid method combining genetic algorithm and Hooke-Jeeves method for constrained global optimization
- Authors: Long, Qiang; Wu, Changzhi
- Date: 2014
- Type: Text; Journal article
- Relation: Journal of Industrial and Management Optimization Vol. 10, no. 4 (2014), p. 1279-1296
- Full Text:
- Reviewed:
- Description: A new global optimization method combining the genetic algorithm and the Hooke-Jeeves method to solve a class of constrained optimization problems is studied in this paper. We first introduce the quadratic penalty function method and the exact penalty function method to transform the original constrained optimization problem, with general equality and inequality constraints, into a sequence of optimization problems with only box constraints. Then, the combination of the genetic algorithm and the Hooke-Jeeves method is applied to solve the transformed problems. Since the Hooke-Jeeves method is good at local search, our proposed method dramatically improves the accuracy and convergence rate of the genetic algorithm. Because the Hooke-Jeeves method is derivative-free, our method requires only objective function values, which not only overcomes the computational difficulties caused by the ill-conditioning of the quadratic penalty function, but also handles the non-differentiability introduced by the exact penalty function. Some well-known test problems are investigated. The numerical results show that our proposed method is efficient and robust.
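The derivative-free local search referred to above can be illustrated with a compact exploratory-move variant of the Hooke-Jeeves method (the pattern move, the penalty transformation, and the GA machinery are omitted; function names and parameters are illustrative, not taken from the paper):

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, tol=1e-6, shrink=0.5):
    """Simplified Hooke-Jeeves search using exploratory moves only.

    Probes each coordinate in both directions with the current step
    size and accepts any improving trial point; when no probe
    improves f, the step size is shrunk. Only objective function
    values are used, never derivatives.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    improved = True
                    break
        if not improved:
            step *= shrink  # refine the search mesh
    return x, fx
```

In the hybrid scheme the abstract describes, a search of this kind would be applied to the penalized, box-constrained objective, with the genetic algorithm supplying diverse starting points for the local refinement.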
Gradient-free method for nonsmooth distributed optimization
- Authors: Li, Jueyou; Wu, Changzhi; Wu, Zhiyou; Long, Qiang
- Date: 2014
- Type: Text; Journal article
- Relation: Journal of Global Optimization Vol. 61, no. 2 (March 2014), p. 325-340
- Full Text:
- Reviewed:
- Description: In this paper, we consider a distributed nonsmooth optimization problem over a computational multi-agent network. We first extend the (centralized) Nesterov random gradient-free algorithm and Gaussian smoothing technique to the distributed setting. Then, the convergence of the algorithm is proved, and an explicit convergence rate is given in terms of the network size and topology. Our proposed method is gradient-free, which may be preferred by practical engineers. Since only cost function values are required, our method may in theory suffer a factor of up to d (the dimension of the agent) in convergence rate compared with distributed subgradient-based methods. However, our numerical simulations show that for some nonsmooth problems our method can even achieve better performance than subgradient-based methods, which may be explained by the slow convergence of subgradient iterations on such problems.
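The random gradient-free oracle the abstract builds on can be sketched as Nesterov's two-point Gaussian-smoothing estimator, shown here in a centralized single-agent form (the distributed consensus step over the network is omitted, and the function names, step size, and smoothing parameter are illustrative assumptions, not values from the paper):

```python
import numpy as np

def gf_gradient(f, x, u, mu=1e-4):
    """Two-point Gaussian-smoothing gradient estimate:
    g = (f(x + mu*u) - f(x)) / mu * u, with u ~ N(0, I).
    Requires only two function evaluations, no (sub)gradients."""
    return (f(x + mu * u) - f(x)) / mu * u

def gradient_free_descent(f, x0, steps=3000, lr=0.05, mu=1e-4, seed=0):
    """Plain descent driven by the gradient-free oracle above."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        u = rng.standard_normal(x.shape)
        x = x - lr * gf_gradient(f, x, u, mu)
    return x
```

The dimension factor d mentioned in the abstract enters through the variance of this estimator, which grows with the dimension of u; the distributed version interleaves such oracle steps with averaging over neighboring agents.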