Your selections:

- 0102 Applied Mathematics (4)
- 0103 Numerical and Computational Mathematics (3)
- Fixed point iterations (3)
- Conically averaged operator (2)
- Convex function (2)
- Fitzpatrick function (2)
- Forward-backward algorithm (2)
- Generalized Bregman distance (2)
- Regularization (2)
- Representative function (2)
- Three-operator splitting (2)
- 01 Mathematical Sciences (1)
- 0101 Pure Mathematics (1)
- 0906 Electrical and Electronic Engineering (1)
- Alternating projections (1)
- Adaptive Douglas–Rachford algorithm (1)
- Bregman distance (1)
- Cyclic projections (1)
- Douglas–Rachford (1)
- Downlink (1)

The generalized Bregman distance

- Burachik, Regina, Dao, Minh, Lindstrom, Scott

**Authors:** Burachik, Regina; Dao, Minh; Lindstrom, Scott
**Date:** 2021
**Type:** Text, Journal article
**Relation:** SIAM Journal on Optimization Vol. 31, no. 1 (2021), p. 404-424
**Description:** Recently, a new kind of distance has been introduced for the graphs of two point-to-set operators, one of which is maximally monotone. When both operators are the subdifferential of a proper lower semicontinuous convex function, this kind of distance specializes under modest assumptions to the classical Bregman distance. We name this new kind of distance the generalized Bregman distance, and we shed light on it with examples that utilize the other two most natural representative functions: the Fitzpatrick function and its conjugate. We provide sufficient conditions for convexity, coercivity, and supercoercivity: properties which are essential for implementation in proximal point type algorithms. We establish these results for both the left and right variants of this new kind of distance. We construct examples closely related to the Kullback-Leibler divergence, which was previously considered in the context of Bregman distances and whose importance in information theory is well known. In so doing, we demonstrate how to compute a difficult Fitzpatrick conjugate function, and we discover natural occurrences of the Lambert W function, whose importance in optimization is of growing interest. © 2021 Society for Industrial and Applied Mathematics
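For orientation, the classical object being generalized in this paper is the Bregman distance. For a differentiable convex function f, the standard definition (textbook material, not quoted from the paper) and its link to the Kullback-Leibler divergence mentioned in the abstract are:

```latex
% Classical Bregman distance induced by a differentiable convex f:
D_f(x, y) = f(x) - f(y) - \langle \nabla f(y),\, x - y \rangle .

% With the negative entropy f(x) = \sum_i x_i \log x_i on the positive
% orthant, D_f becomes the (generalized) Kullback--Leibler divergence:
D_f(x, y) = \sum_i x_i \log\frac{x_i}{y_i} - \sum_i x_i + \sum_i y_i .
```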

An adaptive splitting algorithm for the sum of two generalized monotone operators and one cocoercive operator

- Dao, Minh, Phan, Hung

**Authors:** Dao, Minh; Phan, Hung
**Date:** 2021
**Type:** Text, Journal article
**Relation:** Fixed Point Theory and Algorithms for Sciences and Engineering Vol. 2021, no. 1 (2021), p. 1-19
**Description:** Splitting algorithms for finding a zero of a sum of operators often involve multiple steps, which are referred to as forward or backward steps. Forward steps use the operators explicitly, and backward steps involve the operators implicitly via their resolvents. In this paper, we study an adaptive splitting algorithm for finding a zero of the sum of three operators. We assume that two of the operators are generalized monotone and their resolvents are computable, while the other operator is cocoercive but its resolvent is missing or costly to compute. Our splitting algorithm adapts new parameters to the generalized monotonicity of the operators and, at the same time, combines appropriate forward and backward steps to guarantee convergence to a solution of the problem. © 2021, The Author(s).
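The forward/backward pattern the abstract describes can be illustrated with a minimal Davis–Yin-style three-operator splitting sketch. This is a generic illustration, not the authors' adaptive algorithm; the toy operators, step size, and iteration count below are assumptions chosen so the example has a known solution:

```python
# Toy three-operator problem on the real line:
#   minimize |x - 2| + indicator_[0,1](x) + 0.5 * x**2
# A = subdifferential of |x - 2|   (backward step via its proximal map)
# B = normal cone of [0, 1]        (backward step via projection)
# C(x) = x, gradient of 0.5*x**2   (forward step; 1-cocoercive)
# The minimizer is x = 1.

def prox_abs(v, gamma, a=2.0):
    """Proximal map of gamma*|x - a|: soft-threshold toward a."""
    if v > a + gamma:
        return v - gamma
    if v < a - gamma:
        return v + gamma
    return a

def proj_box(v, lo=0.0, hi=1.0):
    """Projection onto [lo, hi], the resolvent of its normal cone."""
    return min(max(v, lo), hi)

gamma = 0.5   # step size; must stay below 2x the cocoercivity constant
z = 0.0       # governing sequence
for _ in range(200):
    x = proj_box(z)                              # backward step on B
    y = prox_abs(2 * x - z - gamma * x, gamma)   # forward step on C, backward on A
    z = z + (y - x)                              # update governing sequence

print(x)
```

Here the shadow sequence `x` converges to the minimizer; note that only the gradient of the smooth term is ever evaluated, never its resolvent, which is exactly the situation the abstract targets.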

Constraint reduction reformulations for projection algorithms with applications to wavelet construction

- Dao, Minh, Dizon, Neil, Hogan, Jeffrey, Tam, Matthew

**Authors:** Dao, Minh; Dizon, Neil; Hogan, Jeffrey; Tam, Matthew
**Date:** 2021
**Type:** Text, Journal article
**Relation:** Journal of Optimization Theory and Applications Vol. 190, no. 1 (2021), p. 201-233
**Description:** We introduce a reformulation technique that converts a many-set feasibility problem into an equivalent two-set problem. This technique involves reformulating the original feasibility problem by replacing a pair of its constraint sets with their intersection, before applying Pierra’s classical product space reformulation. The step of combining the two constraint sets reduces the dimension of the product spaces. We refer to this technique as the constraint reduction reformulation and use it to obtain constraint-reduced variants of well-known projection algorithms such as the Douglas–Rachford algorithm and the method of alternating projections, among others. We prove global convergence of constraint-reduced algorithms in the presence of convexity and local convergence in a nonconvex setting. In order to analyze convergence of the constraint-reduced Douglas–Rachford method, we generalize a classical result which guarantees that the composition of two projectors onto subspaces is a projector onto their intersection. Finally, we apply the constraint-reduced versions of Douglas–Rachford and alternating projections to solve the wavelet feasibility problems and then compare their performance with their usual product variants. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
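Pierra's product-space reformulation and the constraint-reduction idea can be sketched on a deliberately simple toy problem. The interval sets below are assumptions for illustration only (the paper's applications are wavelet feasibility problems, not intervals):

```python
# Toy feasibility problem: find a point in C1 ∩ C2 ∩ C3 for intervals on R.
C = [(0.0, 2.0), (1.0, 3.0), (1.5, 4.0)]   # intersection is [1.5, 2.0]

def proj_interval(v, lo, hi):
    return min(max(v, lo), hi)

def alternating_projections(sets, x=0.0, iters=200):
    """Pierra's trick: alternate between the product of the sets
    (componentwise projection) and the diagonal (averaging)."""
    for _ in range(iters):
        comps = [proj_interval(x, lo, hi) for lo, hi in sets]  # onto C1 x ... x Cn
        x = sum(comps) / len(comps)                            # onto the diagonal
    return x

# Full product-space formulation over all three sets:
x_full = alternating_projections(C)

# Constraint reduction: replace C1 and C2 by their intersection [1, 2]
# before forming the (now lower-dimensional) product space.
x_reduced = alternating_projections([(1.0, 2.0), (1.5, 4.0)])
```

Both variants land in the common intersection; the reduced version works in a product space with one fewer factor, which is the dimension saving the abstract refers to.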

Energy-efficient design for downlink cloud radio access networks

- Vu, Tung, Ngo, Duy, Dao, Minh, Durrani, Salman, Nguyen, Duy, Middleton, Richard

**Authors:** Vu, Tung; Ngo, Duy; Dao, Minh; Durrani, Salman; Nguyen, Duy; Middleton, Richard
**Date:** 2018
**Type:** Text, Conference proceedings
**Relation:** 2018 IEEE International Conference on Communications (ICC); Kansas City, MO, USA; 20-24 May 2018, p. 1-6
**Description:** This work aims to maximize the energy efficiency of a downlink cloud radio access network (C-RAN), where data is transferred from a baseband unit in the core network to several remote radio heads (RRHs) via a set of edge routers over capacity-limited fronthaul links. The remote radio heads then send the received signals to their users via radio access links. We formulate a new mixed-integer nonlinear problem in which the ratio of network throughput and total power consumption is maximized. This challenging problem formulation includes practical constraints on routing, predefined minimum data rates, fronthaul capacity and maximum RRH transmit power. By employing the successive convex quadratic programming framework, an iterative algorithm is proposed with guaranteed convergence to a Fritz John solution of the formulated problem. Significantly, each iteration of the proposed algorithm solves only one simple convex program. Numerical examples with practical parameters confirm that the proposed joint optimization design markedly improves the C-RAN's energy efficiency compared to benchmark schemes.
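The energy-efficiency objective here is a throughput-to-power ratio, and such fractional objectives are commonly attacked by iteratively solving parameterized subproblems. As a generic illustration of ratio maximization (Dinkelbach's classical method on a scalar toy ratio, not the paper's successive convex quadratic programming algorithm; the functions are assumptions):

```python
# Maximize f(x)/g(x) = x / (1 + x**2) over x >= 0; the maximum is 0.5 at x = 1.
# Dinkelbach's method: given lambda_k, solve max_x f(x) - lambda_k * g(x),
# then update lambda_{k+1} = f(x)/g(x) until the ratio stops improving.

def f(x):
    return x            # stand-in for "throughput"

def g(x):
    return 1.0 + x * x  # stand-in for "total power" (positive)

x = 5.0
for _ in range(50):
    lam = f(x) / g(x)
    # Closed-form argmax of f(x) - lam*g(x) over x >= 0:
    # derivative 1 - 2*lam*x = 0  =>  x = 1/(2*lam)
    x = 1.0 / (2.0 * lam)

print(x, lam)
```

Each outer iteration solves one simple subproblem, mirroring the structure of the paper's algorithm in which each iteration solves a single convex program.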

Generalized Bregman envelopes and proximity operators

- Burachik, Regina, Dao, Minh, Lindstrom, Scott

**Authors:** Burachik, Regina; Dao, Minh; Lindstrom, Scott
**Date:** 2021
**Type:** Text, Journal article
**Relation:** Journal of Optimization Theory and Applications Vol. 190, no. 3 (2021), p. 744-778
**Description:** Every maximally monotone operator can be associated with a family of convex functions, called the Fitzpatrick family or family of representative functions. Surprisingly, in 2017, Burachik and Martínez-Legaz showed that the well-known Bregman distance is a particular case of a general family of distances, each one induced by a specific maximally monotone operator and a specific choice of one of its representative functions. For the family of generalized Bregman distances, sufficient conditions for convexity, coercivity, and supercoercivity have recently been furnished. Motivated by these advances, we introduce in the present paper the generalized left and right envelopes and proximity operators, and we provide asymptotic results for parameters. Certain results extend readily from the more specific Bregman context, while others only extend for certain generalized cases. To illustrate, we construct examples from the Bregman generalizing case, together with the natural “extreme” cases that highlight the importance of which generalized Bregman distance is chosen. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
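For context, the classical left Bregman envelope and proximity operator that these generalized objects extend can be written as follows (standard definitions with notation chosen for illustration, not quoted from the paper):

```latex
% Left Bregman envelope and proximity operator of g, with parameter
% \gamma > 0 and Bregman distance D_f:
\operatorname{env}_{\gamma}^{f} g(x)
  = \inf_{y}\Big\{\, g(y) + \tfrac{1}{\gamma}\, D_f(y, x) \,\Big\},
\qquad
\operatorname{prox}_{\gamma}^{f} g(x)
  = \operatorname*{argmin}_{y}\Big\{\, g(y) + \tfrac{1}{\gamma}\, D_f(y, x) \,\Big\}.

% Choosing f = \tfrac{1}{2}\|\cdot\|^2 gives D_f(y, x) = \tfrac{1}{2}\|y - x\|^2,
% recovering the classical Moreau envelope and proximal operator.
```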
