A constraint handling technique for constrained multi-objective genetic algorithm
- Authors: Long, Qiang
- Date: 2014
- Type: Text , Journal article
- Relation: Swarm and Evolutionary Computation Vol. 15, no. (2014), p. 66-79
- Full Text: false
- Reviewed:
- Description: A new constraint handling technique for multi-objective genetic algorithms is proposed in this paper. Two issues are central to multi-objective genetic algorithms: closeness of the obtained solutions to the real Pareto frontier, and diversity of the obtained solutions. For a constrained multi-objective programming problem, the feasibility of solutions must also be taken into account. Thus, in this new constraint handling technique, we systematically treat closeness, diversity and feasibility as three objectives of a multi-objective subproblem. The solutions in each iteration are sorted by the optimal sequence method based on these three objectives, and the solutions inherited by the next generation are selected according to their optimal order. Numerical tests show that the solutions obtained by this method are not only feasible, but also close to the real Pareto front, with good diversity. © 2013 Elsevier B.V.
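The selection step described above, ranking solutions by closeness, diversity and feasibility at once, can be illustrated by a generic non-dominated ranking over three per-solution scores. This is a minimal sketch of the idea only, not the paper's optimal sequence method; all names and the toy scores are illustrative, and all three criteria are assumed to be minimized.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every criterion and strictly
    better in at least one (criteria: closeness, diversity, feasibility,
    all treated as values to minimize)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def select_next_generation(scores, n_keep):
    """Rank solutions front by front via non-domination over their
    (closeness, diversity, feasibility) scores; keep the n_keep best-ranked."""
    order = []
    remaining = list(range(len(scores)))
    while remaining and len(order) < n_keep:
        # current front: solutions not dominated by any other remaining one
        front = [i for i in remaining
                 if not any(dominates(scores[j], scores[i])
                            for j in remaining if j != i)]
        order.extend(front)
        remaining = [i for i in remaining if i not in front]
    return order[:n_keep]
```

With toy scores, a solution that is dominated on all three criteria is dropped first, while incomparable solutions survive together in the first front.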
A DC programming approach for sensor network localization with uncertainties in anchor positions
- Authors: Wu, Changzhi , Li, Chaojie , Long, Qiang
- Date: 2014
- Type: Text , Journal article
- Relation: Journal of Industrial and Management Optimization Vol. 10, no. 3 (2014), p. 817-826
- Full Text: false
- Reviewed:
- Description: The sensor network localization problem with uncertainties in anchor positions is studied in this paper. We formulate the problem as a DC (difference of two convex functions) program, and then propose a DC programming based algorithm to solve it. Simulation results show that our proposed method performs better than existing ones.
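The heart of any DC programming approach is the DCA iteration: write the objective as f = g − h with g and h convex, linearize h at the current point, and solve the resulting convex subproblem. Below is a toy one-dimensional sketch of that iteration, not the paper's localization model; the decomposition f(x) = x⁴ − 2x² and all names are illustrative.

```python
import math

def dca(g_subproblem, grad_h, x0, iters=50):
    """Generic DC algorithm: at each step linearize the concave part -h at
    the current point and solve the convex subproblem
        x_{k+1} = argmin_x g(x) - grad_h(x_k) * x.
    g_subproblem(y) must return that argmin for linearization slope y."""
    x = x0
    for _ in range(iters):
        x = g_subproblem(grad_h(x))
    return x

# Toy DC decomposition f(x) = x**4 - 2*x**2 with g(x) = x**4, h(x) = 2*x**2.
# argmin_x x**4 - y*x solves 4*x**3 = y, i.e. x = sign(y)*(|y|/4)**(1/3).
def g_sub(y):
    return math.copysign((abs(y) / 4.0) ** (1.0 / 3.0), y)

x_star = dca(g_sub, lambda x: 4.0 * x, x0=0.5)
```

Starting from x0 = 0.5, the iterates climb monotonically to the stationary point x = 1 of f, illustrating the usual DCA behaviour of converging to a critical point of the DC objective.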
A hybrid method combining genetic algorithm and Hooke-Jeeves method for constrained global optimization
- Authors: Long, Qiang , Wu, Changzhi
- Date: 2014
- Type: Text , Journal article
- Relation: Journal of Industrial and Management Optimization Vol. 10, no. 4 (2014), p. 1279-1296
- Full Text:
- Reviewed:
- Description: A new global optimization method combining the genetic algorithm and the Hooke-Jeeves method to solve a class of constrained optimization problems is studied in this paper. We first introduce the quadratic penalty function method and the exact penalty function method to transform the original constrained optimization problem, with general equality and inequality constraints, into a sequence of optimization problems with only box constraints. Then, the combination of the genetic algorithm and the Hooke-Jeeves method is applied to solve the transformed problems. Since the Hooke-Jeeves method is good at local search, our proposed method dramatically improves the accuracy and convergence rate of the genetic algorithm. Because the Hooke-Jeeves method is derivative-free, our method requires only objective function values, which not only overcomes the computational difficulties caused by the ill-conditioning of the quadratic penalty function, but also handles the non-differentiability of the exact penalty function. Some well-known test problems are investigated. The numerical results show that our proposed method is efficient and robust.
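The two building blocks of the abstract, penalizing the constraints and then refining with a derivative-free local search, can be sketched as follows. This is a minimal illustration under stated assumptions: a fixed penalty parameter rather than a sequence, and a simplified exploratory-move variant of Hooke-Jeeves without the pattern-move acceleration; the toy problem and all names are hypothetical.

```python
def quadratic_penalty(f, eq_cons, ineq_cons, rho):
    """P(x) = f(x) + rho * (sum h_i(x)^2 + sum max(0, g_j(x))^2)."""
    def P(x):
        pen = sum(h(x) ** 2 for h in eq_cons)
        pen += sum(max(0.0, g(x)) ** 2 for g in ineq_cons)
        return f(x) + rho * pen
    return P

def hooke_jeeves(P, x, step=0.5, tol=1e-8):
    """Simplified Hooke-Jeeves: exploratory moves along each coordinate,
    halving the step size whenever no move improves the objective."""
    fx = P(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = P(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
    return x, fx

# Toy problem: minimize (x-2)^2 + (y-1)^2 subject to x + y = 2
# (analytic solution: (1.5, 0.5)).
P = quadratic_penalty(lambda v: (v[0] - 2) ** 2 + (v[1] - 1) ** 2,
                      eq_cons=[lambda v: v[0] + v[1] - 2.0],
                      ineq_cons=[], rho=100.0)
x_opt, _ = hooke_jeeves(P, [0.0, 0.0])
```

Only values of P are ever evaluated, which is the point the abstract makes: the penalized problem may be ill-conditioned or nonsmooth, but a value-only search still applies.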
A new hybrid method combining genetic algorithm and coordinate search method
- Authors: Long, Qiang , Huang, Junjian
- Date: 2012
- Type: Text , Conference proceedings
- Full Text:
- Description: This paper proposes a new hybrid method combining the genetic algorithm (GA) and the coordinate search method (CSM). The genetic algorithm is good at global exploration but poor in accuracy and local search, whereas the coordinate search method is good at local exploitation and its accuracy is reliable when searching within a local area. We therefore combine the two methods into a hybrid method called genetic algorithm with coordinate search (GACS). Experimental tests show that this method achieves both good global search and good local accuracy. © 2012 IEEE.
A New Objective Penalty Function Approach for Solving Constrained Minimax Problems
- Authors: Li, Jueyou , Wu, Zhiyou , Long, Qiang
- Date: 2014
- Type: Text , Journal article
- Relation: Journal of the Operations Research Society of China Vol. 2, no. 1 (March 2014), p. 93-108
- Full Text: false
- Reviewed:
- Description: In this paper, a new objective penalty function approach is proposed for solving minimax programming problems with equality and inequality constraints. This new objective penalty function combines the objective penalty and the constraint penalty. By the new objective penalty function, a constrained minimax problem is converted to minimizations of a sequence of continuously differentiable functions with a simple box constraint. One can thus apply any efficient gradient minimization method to solve the box-constrained minimization at each step of the sequence. Some relationships between the original constrained minimax problem and the corresponding box-constrained minimization problems are established. Based on these results, an algorithm for finding a global solution of the constrained minimax problems is proposed by exploiting the particular structure of minimax problems, and its global convergence is proved under some conditions. Furthermore, an algorithm is developed for finding a local solution of the constrained minimax problems, with its convergence proved under certain conditions. Preliminary results of numerical experiments with well-known test problems show that satisfactory approximate solutions can be obtained for some constrained minimax problems.
A quasisecant method for solving a system of nonsmooth equations
- Authors: Long, Qiang , Wu, Changzhi
- Date: 2013
- Type: Text , Journal article
- Relation: Computers and Mathematics with Applications Vol. 66, no. 4 (2013), p. 419-431
- Full Text: false
- Reviewed:
- Description: In this paper, the solution of nonsmooth equations is studied. We first transform the problem into an equivalent nonsmooth optimization problem and then the quasisecant method is introduced to solve it. Some nonsmooth equations that have arisen from bilevel programming problems are solved by our proposed method. The numerical results show the effectiveness and efficiency of our proposed method. © 2013 Elsevier Ltd. All rights reserved.
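The first step of the abstract, transforming a system of nonsmooth equations into an equivalent nonsmooth optimization problem, can be shown with a merit function. A minimal sketch with a hypothetical toy system (the quasisecant solver itself is not reproduced here; any nonsmooth optimizer can then be applied to phi):

```python
def merit(F):
    """Turn a system F(x) = 0 into the equivalent minimization of
    phi(x) = sum_i |F_i(x)|: phi >= 0 everywhere, and phi(x) = 0
    exactly at the solutions of the system."""
    def phi(x):
        return sum(abs(fi) for fi in F(x))
    return phi

# Toy nonsmooth system: x + |y| - 2 = 0 and |x| - y = 0, solved by (1, 1).
F = lambda v: (v[0] + abs(v[1]) - 2.0, abs(v[0]) - v[1])
phi = merit(F)
```

Any global minimizer of phi with value zero is a root of the original system, which is what makes the reformulation equivalent rather than merely approximate.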
Application of genetic algorithm in solving nonsmooth optimization problems
- Authors: Long, Qiang
- Date: 2013
- Type: Text , Journal article
- Relation: Journal of Chongqing Normal University Vol. 30, no. 1 (2013), p. 12-16
- Full Text: false
- Reviewed:
Distributed proximal-gradient method for convex optimization with inequality constraints
- Authors: Li, Jueyou , Wu, Changzhi , Wu, Zhiyou , Long, Qiang , Wang, Xiangyu
- Date: 2014
- Type: Text , Journal article
- Relation: ANZIAM Journal Vol. 56, no. 2 (2014), p. 160-178
- Full Text: false
- Reviewed:
- Description: We consider a distributed optimization problem over a multi-agent network, in which the sum of several local convex objective functions is minimized subject to global convex inequality constraints. We first transform the constrained optimization problem to an unconstrained one, using the exact penalty function method. Our transformed problem has fewer variables and a simpler structure than those in existing distributed primal-dual subgradient methods for constrained distributed optimization problems. Using the special structure of this problem, we then propose a distributed proximal-gradient algorithm over a time-varying connectivity network, and establish a convergence rate depending on the number of iterations, the network topology and the number of agents. Although the transformed problem is nonsmooth by nature, our method can still achieve a convergence rate of O(1/k) after k iterations, which is faster than the O(1/√k) rate of existing distributed subgradient-based methods. Simulation experiments on a distributed state estimation problem illustrate the excellent performance of our proposed method. Copyright © 2014 Australian Mathematical Society.
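The per-round structure of a distributed proximal-gradient scheme — average with neighbours, take a local gradient step, then apply a proximal operator for the shared nonsmooth term — can be sketched on a toy consensus problem. This is an illustration only, not the paper's algorithm: a static complete graph stands in for the time-varying network, and an ℓ1 term (whose prox is soft thresholding) stands in for the exact penalty term; all names and data are hypothetical.

```python
def soft_threshold(v, t):
    """Proximal operator of t*|x| in one dimension."""
    return max(0.0, abs(v) - t) * (1.0 if v >= 0 else -1.0)

def distributed_prox_grad(grads, W, lam, alpha, iters=2000):
    """Each agent i holds a local estimate x[i]. Per round: consensus
    averaging with the doubly stochastic weight matrix W, a local gradient
    step on f_i, then the prox of the shared nonsmooth term lam*|x|."""
    n = len(grads)
    x = [0.0] * n
    for _ in range(iters):
        y = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [soft_threshold(y[i] - alpha * grads[i](y[i]), alpha * lam)
             for i in range(n)]
    return x

# Toy problem: f_i(x) = 0.5*(x - a_i)^2 on a 3-agent complete graph;
# the minimizer of (1/3)*sum f_i + lam*|x| is mean(a) - lam = 1.5.
a = [1.0, 2.0, 3.0]
grads = [lambda x, ai=ai: x - ai for ai in a]
W = [[1.0 / 3.0] * 3 for _ in range(3)]
x = distributed_prox_grad(grads, W, lam=0.5, alpha=0.1)
```

With a constant step size the agents settle within O(alpha) of one another around the global minimizer; in the paper's setting the network weights additionally change over time.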
Gradient-free method for nonsmooth distributed optimization
- Authors: Li, Jueyou , Wu, Changzhi , Wu, Zhiyou , Long, Qiang
- Date: 2014
- Type: Text , Journal article
- Relation: Journal of Global Optimization Vol. 61, no. 2 (March 2014), p. 325-340
- Full Text:
- Reviewed:
- Description: In this paper, we consider a distributed nonsmooth optimization problem over a computational multi-agent network. We first extend the (centralized) Nesterov random gradient-free algorithm and Gaussian smoothing technique to the distributed case. Then, the convergence of the algorithm is proved. Furthermore, an explicit convergence rate is given in terms of the network size and topology. Our proposed method is free of gradients, which may be preferred by practical engineers. Since only cost function values are required, our method may in theory suffer a factor of up to d (the dimension of the agent) in convergence rate compared with distributed subgradient-based methods. However, our numerical simulations show that for some nonsmooth problems, our method can even outperform subgradient-based methods, which may be due to the slow convergence of subgradient-based methods.
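The Gaussian-smoothing oracle underlying the method uses only function values: perturb the point by a random Gaussian direction and form a two-point finite-difference estimate of the smoothed gradient. A minimal centralized one-dimensional sketch (the distributed, networked version in the paper is not reproduced; the objective and parameters are illustrative):

```python
import random

def gf_step(f, x, mu, lr):
    """One random gradient-free step with Gaussian smoothing: the two-point
    estimate g = (f(x + mu*u) - f(x)) / mu * u, with u ~ N(0, 1), is an
    unbiased estimate of the gradient of the smoothed function f_mu."""
    u = random.gauss(0.0, 1.0)
    g = (f(x + mu * u) - f(x)) / mu * u
    return x - lr * g

random.seed(1)
f = lambda x: (x - 3.0) ** 2   # stand-in objective; only values of f are used
x = 0.0
for _ in range(2000):
    x = gf_step(f, x, mu=1e-4, lr=0.01)
```

No derivative of f is ever evaluated, which is why such oracles also apply when f is nonsmooth; the price is the dimension-dependent factor mentioned in the abstract.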
Methods and applications of clusterwise linear regression : a survey and comparison
- Authors: Long, Qiang , Bagirov, Adil , Taheri, Sona , Sultanova, Nargiz , Wu, Xue
- Date: 2023
- Type: Text , Journal article
- Relation: ACM Transactions on Knowledge Discovery from Data Vol. 17, no. 3 (2023), p.
- Relation: http://purl.org/au-research/grants/arc/DP190100580
- Full Text: false
- Reviewed:
- Description: Clusterwise linear regression (CLR) is a well-known technique for approximating data using more than one linear function. It is based on the combination of clustering and multiple linear regression methods. This article provides a comprehensive survey and comparative assessment of CLR, including model formulations, descriptions of algorithms, and their performance on small- to large-scale synthetic and real-world datasets. Some applications of CLR algorithms and possible future research directions are also discussed. © 2023 Association for Computing Machinery.
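The combination of clustering and regression that defines CLR can be sketched as a k-means-style alternation: assign each point to the line that fits it best, then refit each line on its own cluster. A minimal sketch with hypothetical data (real CLR algorithms surveyed in the article handle initialization, degenerate clusters and multiple regressors far more carefully):

```python
def fit_line(pts):
    """Ordinary least squares for y = a*x + b on a list of (x, y) pairs."""
    n = len(pts)
    sx = sum(p[0] for p in pts)
    sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    return a, (sy - a * sx) / n

def clr(points, lines, iters=20):
    """Alternate: assign each point to the line with the smallest squared
    residual, then refit each line on its cluster (kept if too small)."""
    for _ in range(iters):
        clusters = [[] for _ in lines]
        for x, y in points:
            errs = [(y - (a * x + b)) ** 2 for a, b in lines]
            clusters[errs.index(min(errs))].append((x, y))
        lines = [fit_line(c) if len(c) >= 2 else lines[i]
                 for i, c in enumerate(clusters)]
    return lines

# Toy data drawn from two lines, y = 2x and y = -x + 3, with rough initial lines.
pts = [(0, 0), (1, 2), (2, 4), (3, 6), (4, 8),
       (0, 3), (1, 2), (2, 1), (3, 0), (4, -1)]
lines = clr(pts, [(1.9, 0.1), (-0.9, 2.9)])
```

Like k-means, the alternation only finds a local optimum, which is why the surveyed literature spends so much effort on initialization and global optimization variants.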
Nonsmooth and derivative-free optimization based hybrid methods and applications
- Authors: Long, Qiang
- Date: 2014
- Type: Text , Thesis , PhD
- Full Text:
- Description: "In this thesis, we develop hybrid methods for solving global and in particular, nonsmooth optimization problems. Hybrid methods are becoming more popular in global optimization since they allow to apply powerful smooth optimization techniques to solve global optimization problems. Such methods are able to efficiently solve global optimization problems with large number of variables. To date global search algorithms have been mainly applied to improve global search properties of the local search methods (including smooth optimization algorithms). In this thesis we apply rather different strategy to design hybrid methods. We use local search algorithms to improve the efficiency of global search methods. The thesis consists of two parts. In the first part we describe hybrid algorithms and in the second part we consider their various applications." -- taken from Abstract.
- Description: Operational Research and Cybernetics
Numerical performance of subgradient methods in solving nonsmooth optimization problems
- Authors: Long, Qiang , Li, Jueyou
- Date: 2013
- Type: Text , Journal article
- Relation: Journal of Chongqing Normal University Vol. 30, no. 6 (2013), p. 25-30
- Full Text: false
- Reviewed: