A feedback neural network for solving convex quadratic bi-level programming problems

- Li, Jueyou, Li, Chaojie, Wu, Zhiyou, Huang, Junjian

**Authors:**Li, Jueyou , Li, Chaojie , Wu, Zhiyou , Huang, Junjian**Date:**2014**Type:**Text , Journal article**Relation:**Neural Computing and Applications Vol. 25, no. 3 (2014), p. 603-611**Full Text:**false**Reviewed:****Description:**In this paper, a feedback neural network model is proposed for solving a class of convex quadratic bi-level programming problems based on the idea of successive approximation. Differing from existing neural network models, the proposed neural network has the least number of state variables and a simple structure. Based on Lyapunov theories, we prove that the equilibrium point sequence of the feedback neural network can approximately converge to an optimal solution of the convex quadratic bi-level problem under certain conditions, and the corresponding sequence of function values approximately converges to the optimal value of the convex quadratic bi-level problem. Simulation experiments on three numerical examples and a portfolio selection problem are provided to show the efficiency and performance of the proposed neural network approach.
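The paper's feedback model for the bi-level case is not reproduced in the abstract, but the underlying neural-network (dynamical-system) approach to optimization can be illustrated with a generic Euler-discretized projection neural network applied to a box-constrained convex QP, the lower-level building block of such problems. This is a sketch of the general technique, not the authors' model; all problem data below are illustrative.

```python
# Euler-discretized projection neural network for a convex box-constrained QP:
#   minimize 0.5*x'Qx + c'x  subject to  0 <= x <= 1.
# Generic sketch (not the paper's bi-level feedback model): the state flows
# along  dx/dt = P_box(x - grad f(x)) - x,  whose equilibria are exactly the
# KKT points of the QP, and Lyapunov arguments give convergence.

def clip(v, lo=0.0, hi=1.0):
    return max(lo, min(hi, v))

# Illustrative data: Q diagonal positive definite, so grad f(x) = Q*x + c.
Q = [2.0, 2.0]          # diagonal of Q
c = [-2.0, -4.0]
x = [0.0, 0.0]          # initial state
h = 0.1                 # Euler step size

for _ in range(300):
    grad = [Q[i] * x[i] + c[i] for i in range(2)]
    # Right-hand side of the projected dynamics.
    dx = [clip(x[i] - grad[i]) - x[i] for i in range(2)]
    x = [x[i] + h * dx[i] for i in range(2)]

print(x)   # approaches the optimum (1, 1) on the box boundary
```

For this data the trajectory contracts geometrically to the constrained optimum; the fixed points of the discretization are precisely the projections satisfying the QP's optimality conditions.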

A New Objective Penalty Function Approach for Solving Constrained Minimax Problems

- Li, Jueyou, Wu, Zhiyou, Long, Qiang

**Authors:**Li, Jueyou , Wu, Zhiyou , Long, Qiang**Date:**2014**Type:**Text , Journal article**Relation:**Journal of the Operations Research Society of China Vol. 2, no. 1 (March 2014), p. 93-108**Full Text:**false**Reviewed:****Description:**In this paper, a new objective penalty function approach is proposed for solving minimax programming problems with equality and inequality constraints. This new objective penalty function combines the objective penalty and the constraint penalty. By the new objective penalty function, a constrained minimax problem is converted to minimizations of a sequence of continuously differentiable functions with a simple box constraint. One can thus apply any efficient gradient minimization method to solve the minimizations with box constraint at each step of the sequence. Some relationships between the original constrained minimax problem and the corresponding minimization problems with box constraint are established. Based on these results, an algorithm for finding a global solution of the constrained minimax problems is proposed by exploiting the particular structure of minimax problems, and its global convergence is proved under some conditions. Furthermore, an algorithm is developed for finding a local solution of the constrained minimax problems, with its convergence proved under certain conditions. Preliminary results of numerical experiments with well-known test problems show that satisfactory approximate solutions for some constrained minimax problems can be obtained.
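The abstract does not give the authors' exact objective penalty function, but the core idea of turning a minimax problem into smooth penalized minimizations can be sketched with the standard epigraph-plus-quadratic-penalty reformulation it builds on. The problem, penalty weight, and step size below are illustrative.

```python
# Penalty reformulation of a minimax problem (a sketch of the idea, not the
# paper's exact objective penalty function): rewrite
#   min_x max(f1(x), f2(x))
# via the epigraph variable t as  min_{x,t} t  s.t.  f_i(x) <= t,
# then fold the constraints into the smooth quadratic penalty
#   F(x, t) = t + M * sum_i max(0, f_i(x) - t)**2
# and run plain gradient descent on (x, t).

f = [lambda x: x * x, lambda x: (x - 2.0) ** 2]      # minimax optimum: x = 1
df = [lambda x: 2.0 * x, lambda x: 2.0 * (x - 2.0)]  # derivatives

M = 5.0          # penalty weight (fixed here; in practice driven upward)
x, t = 0.0, 0.0
step = 0.01

for _ in range(5000):
    p = [max(0.0, f[i](x) - t) for i in range(2)]    # constraint violations
    gx = 2.0 * M * sum(p[i] * df[i](x) for i in range(2))
    gt = 1.0 - 2.0 * M * sum(p)
    x, t = x - step * gx, t - step * gt

print(x, max(f[0](x), f[1](x)))   # x near 1, minimax value near 1
```

With a fixed weight M the penalty minimizer's t sits slightly below the true minimax value (here t converges to 0.95 rather than 1); increasing M along a sequence shrinks this gap, which is exactly why the approach works with a sequence of minimizations.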

An efficient algorithm for optimal real-time pricing strategy in smart grid

- Zhang, Wang, Chen, Guo, Dong, Zhaoyang, Li, Jueyou

**Authors:**Zhang, Wang , Chen, Guo , Dong, Zhaoyang , Li, Jueyou**Date:**2014**Type:**Text , Conference paper**Relation:**Power & Energy Society General Meeting/Conference & Exposition, 2014 IEEE; National Harbor, MD, USA; 27th-31st July 2014 p. 1-5**Full Text:**false**Reviewed:****Description:**New dynamic pricing schemes encourage consumers to participate more actively in the electricity energy market, and smart meters and demand side management (DSM) make this possible. In this paper, we consider a smart grid environment with multiple users equipped with smart meters and energy management devices (EMD). An improved optimization method is proposed to maximize the social welfare of both users and the provider under a real-time pricing strategy. More specifically, we propose a more practical and advanced gradient algorithm, the fast distributed dual gradient algorithm (FDDGA). Compared with the traditional distributed dual subgradient algorithm, this improved method not only accelerates the convergence rate but also overcomes the possible oscillation caused by the uncertainty in choosing the step size during the iterations of the subgradient projection method. The simulation results also validate that the proposed algorithm is effective and efficient in solving the real-time pricing problem for demand response.
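The abstract does not specify FDDGA itself, but the flavor of the approach, dual decomposition where the price is the dual variable and its update is a Nesterov-accelerated gradient step, can be sketched on a stylized welfare problem. All utilities, capacities, and parameters below are illustrative assumptions.

```python
# Stylized real-time pricing via dual decomposition (a sketch in the spirit of
# the paper's FDDGA, not its exact algorithm): users with quadratic utilities
#   u_i(x) = a_i*x - 0.5*b_i*x^2,  x >= 0,
# share a supply cap  sum_i x_i <= C.  The price lam is the dual variable;
# each user best-responds locally, and the provider updates lam with a
# Nesterov-accelerated projected gradient step on the (minimized) dual.

a = [4.0, 3.0, 2.0]   # illustrative marginal utilities
b = [1.0, 1.0, 1.0]
C = 3.0               # supply cap

def demand(lam):
    # User i's local problem: max_x u_i(x) - lam*x  =>  x = max(0, (a_i-lam)/b_i)
    return [max(0.0, (a[i] - lam) / b[i]) for i in range(3)]

L = sum(1.0 / bi for bi in b)     # Lipschitz constant of the dual gradient
lam = lam_prev = 0.0
y = 0.0                           # extrapolation point

for k in range(1, 300):
    g = C - sum(demand(y))        # dual gradient at the extrapolated price
    lam = max(0.0, y - g / L)     # projected gradient step
    y = lam + (k - 1.0) / (k + 2.0) * (lam - lam_prev)
    lam_prev = lam

print(lam, demand(lam))   # price 2.0, allocation [2.0, 1.0, 0.0]
```

Only the aggregate demand crosses the network in each round, so the price update needs no knowledge of individual utilities, which is the privacy appeal of dual decomposition for demand response.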

Distributed and parallel methods for structural convex optimization

**Authors:**Li, Jueyou**Date:**2014**Type:**Text , Thesis , PhD**Full Text:****Description:**There has been considerable recent interest in optimization methods associated with a multi-agent network. The goal is to optimize a global objective function that is a sum of local objective functions known only to the agents through the network. The focus of this dissertation is the development of optimization algorithms for the special class in which the optimization problem of interest has an additive or separable structure. Specifically, we are concerned with two classes of convex optimization problems. The first class, multi-agent convex problems, arises in many network applications, including in-network estimation, machine learning and signal processing. The second class, separable convex problems, arises in diverse applications, including network resource allocation, and distributed model prediction and control. Due to the structure of the problems and the privacy of local objective functions, special optimization methods are desirable, especially for large-scale structured problems. For multi-agent convex problems with simple constraints, we develop gradient-free distributed methods based on incremental and consensus strategies. The convergence analysis and convergence rates of the proposed methods are provided. By comparison, existing distributed algorithms require first-order information of the objective functions, whereas our methods only involve estimates of objective function values. The proposed methods are therefore suitable for more general problems, even when first-order information is unavailable or costly to compute. In practical applications a wide variety of problems are formulated as multi-agent optimization problems subject to equality and (or) inequality constraints. Methods available for solving this type of problem are still limited in the literature. Most of them are based on Lagrangian duality, and there are no estimates of the convergence rate. In this thesis, we develop a distributed proximal-gradient method to solve multi-agent convex problems under global inequality constraints. Moreover, we provide the convergence analysis of the proposed method and obtain explicit estimates of the convergence rate. Our method relies on the exact penalty function method and multi-consensus averaging, and does not involve Lagrangian multipliers. For separable convex problems with linear constraints, within the framework of Lagrangian dual decomposition, we develop fast gradient-based optimization methods, including a fast dual gradient-projection method and a fast dual gradient method. In addition to parallel implementation, our focus is on a faster convergence rate, since existing dual subgradient-based algorithms suffer from slow convergence. Our proposed algorithms are based on Nesterov's smoothing technique and several fast gradient schemes. Explicit convergence rates of the proposed algorithms are obtained, which are superior to those of subgradient-based algorithms. The proposed algorithms are applied to a real-time pricing problem in smart grid and a network utility maximization problem. Dual decomposition methods often involve finding the exact solution of an inner subproblem at each iteration. From a practical point of view, however, the subproblem is never solved exactly. Hence, we extend the proposed fast dual gradient-projection method to the inexact setting. Although the inner subproblem is solved only to a certain precision, we provide a complete analysis of the computational complexity of the generated approximate solutions. Thus, our inexact version has the attractive computational advantage that the subproblem only needs to be solved to a certain accuracy while still maintaining the same iteration complexity as its exact counterpart.**Description:**Doctor of Philosophy
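The multi-agent setting the thesis studies can be illustrated with the generic consensus-plus-local-gradient iteration that such methods build on. For brevity this sketch uses exact local gradients, whereas the thesis develops gradient-free and proximal variants; the network, weights, and data below are illustrative.

```python
# Minimal consensus + local-gradient iteration for a multi-agent problem
#   minimize  sum_i f_i(x),   f_i(x) = (x - c_i)^2,
# whose optimum is the mean of the c_i.  Each agent mixes its neighbors'
# states with doubly stochastic weights (here a complete 3-agent graph) and
# then takes a diminishing local gradient step.  The thesis's own methods
# replace the gradient with function-value estimates or proximal steps.

c = [0.0, 3.0, 6.0]          # local data; global optimum is mean(c) = 3
x = [0.0, 0.0, 0.0]          # agents' states
W = [[1.0 / 3.0] * 3 for _ in range(3)]   # mixing matrix (complete graph)

for k in range(2000):
    mixed = [sum(W[i][j] * x[j] for j in range(3)) for i in range(3)]
    alpha = 0.5 / (k + 1)    # diminishing step size
    x = [mixed[i] - alpha * 2.0 * (mixed[i] - c[i]) for i in range(3)]

print(x)   # all three agents close to the global optimum 3
```

No agent ever sees another's c_i, only neighbors' current states, which is the privacy property the abstract emphasizes; the diminishing step drives both the consensus error and the optimality gap to zero.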


Distributed proximal-gradient method for convex optimization with inequality constraints

- Li, Jueyou, Wu, Changzhi, Wu, Zhiyou, Long, Qiang, Wang, Xiangyu

**Authors:**Li, Jueyou , Wu, Changzhi , Wu, Zhiyou , Long, Qiang , Wang, Xiangyu**Date:**2014**Type:**Text , Journal article**Relation:**ANZIAM Journal Vol. 56, no. 2 (2014), p. 160-178**Full Text:**false**Reviewed:****Description:**We consider a distributed optimization problem over a multi-agent network, in which the sum of several local convex objective functions is minimized subject to global convex inequality constraints. We first transform the constrained optimization problem to an unconstrained one, using the exact penalty function method. Our transformed problem has a smaller number of variables and a simpler structure than those of the existing distributed primal-dual subgradient methods for constrained distributed optimization problems. Using the special structure of this problem, we then propose a distributed proximal-gradient algorithm over a time-changing connectivity network, and establish a convergence rate depending on the number of iterations, the network topology and the number of agents. Although the transformed problem is nonsmooth by nature, our method can still achieve a convergence rate of O(1/k) after k iterations, which is faster than the rate O(1/√k) of existing distributed subgradient-based methods. Simulation experiments on a distributed state estimation problem illustrate the excellent performance of our proposed method. Copyright © 2014 Australian Mathematical Society.
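The exact-penalty transformation the paper starts from can be sketched in a centralized form. Note the hedge: the paper then solves the penalized problem with a distributed proximal-gradient scheme over a network, not the plain subgradient loop below; the data and penalty weight here are illustrative.

```python
import math

# Exact penalty transformation used as the paper's starting point (sketch):
#   minimize  sum_i f_i(x)  s.t.  g(x) <= 0
# becomes the unconstrained nonsmooth problem
#   minimize  sum_i f_i(x) + beta * max(0, g(x)),
# which shares minimizers with the original once beta exceeds the relevant
# Lagrange multiplier.  Here f_i(x) = (x - c_i)^2 and g(x) = x - 1, solved by
# plain subgradient descent (the paper uses distributed proximal-gradient).

c = [1.0, 2.0, 3.0]      # unconstrained optimum is mean(c) = 2; constrained: x = 1
beta = 10.0              # must exceed |sum_i f_i'(1)| = 6 for exactness

x = 2.5
for k in range(20000):
    sub = sum(2.0 * (x - ci) for ci in c)      # gradient of the smooth part
    if x > 1.0:                                # subgradient of beta*max(0, x-1)
        sub += beta
    x -= 0.1 / math.sqrt(k + 1.0) * sub

print(x)   # close to the constrained optimum x = 1
```

Because the penalty is nonsmooth rather than quadratic, no penalty-parameter sequence is needed: a single sufficiently large beta recovers the exact constrained solution, which is what makes the transformed problem's structure attractive for the distributed setting.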

Gradient-free method for nonsmooth distributed optimization

- Li, Jueyou, Wu, Changzhi, Wu, Zhiyou, Long, Qiang

**Authors:**Li, Jueyou , Wu, Changzhi , Wu, Zhiyou , Long, Qiang**Date:**2014**Type:**Text , Journal article**Relation:**Journal of Global Optimization Vol.61, no.2 (March 2014), p.325-340**Full Text:****Reviewed:****Description:**In this paper, we consider a distributed nonsmooth optimization problem over a computational multi-agent network. We first extend the (centralized) Nesterov random gradient-free algorithm and Gaussian smoothing technique to the distributed case. Then, the convergence of the algorithm is proved. Furthermore, an explicit convergence rate is given in terms of the network size and topology. Our proposed method is free of gradients, which may be preferred by practical engineers. Since only the cost function value is required, our method may in theory suffer a factor of up to d (the dimension of the agent) in convergence rate compared with distributed subgradient-based methods. However, our numerical simulations show that for some nonsmooth problems, our method can even achieve better performance than subgradient-based methods, which may be caused by the slow convergence of subgradient steps.
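The Gaussian-smoothing oracle at the heart of this approach can be sketched in its centralized single-agent form; the paper's contribution is distributing it over a network. The objective, smoothing radius, and step size below are illustrative.

```python
import random

# Nesterov-style random gradient-free oracle via Gaussian smoothing
# (centralized single-agent version of the estimator the paper distributes):
# sample u ~ N(0, I) and use the directional finite difference
#   g = (f(x + mu*u) - f(x)) / mu * u,
# an unbiased gradient estimate of the Gaussian-smoothed function f_mu.
# Only function values of f are ever evaluated.

random.seed(0)

c = [1.0, -2.0]                      # illustrative target; f is minimized at c
def f(x):
    return sum((x[i] - c[i]) ** 2 for i in range(2))

mu = 1e-4                            # smoothing radius
alpha = 0.02                         # step size
x = [4.0, 3.0]

for _ in range(4000):
    u = [random.gauss(0.0, 1.0) for _ in range(2)]
    g = (f([x[i] + mu * u[i] for i in range(2)]) - f(x)) / mu
    x = [x[i] - alpha * g * u[i] for i in range(2)]

print(x)   # close to the minimizer [1, -2] using only function values
```

The dimension factor mentioned in the abstract shows up here as variance: each step moves only along the random direction u, so in d dimensions roughly d times more oracle calls are needed than for a true gradient step of comparable progress.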


Numerical performance of subgradient methods in solving nonsmooth optimization problems

**Authors:**Long, Qiang , Li, Jueyou**Date:**2013**Type:**Text , Journal article**Relation:**Journal of Chongqing Normal University Vol. 30, no. 6 (2013), p. 25-30**Full Text:**false**Reviewed:**

