About regularity properties in variational analysis and applications in optimization

**Authors:** Nguyen, Hieu Thao **Date:** 2015 **Type:** Text, Thesis, PhD **Full Text:** **Description:** Regularity properties lie at the core of variational analysis because of their importance for stability analysis of optimization and variational problems, constraint qualifications, qualification conditions in coderivative and subdifferential calculus, and convergence analysis of numerical algorithms. The thesis is devoted to the investigation of several research questions related to regularity properties in variational analysis and their applications in convergence analysis and optimization. Following the works by Kruger, we examine several useful regularity properties of collections of sets in both linear and Hölder-type settings and establish their characterizations and relationships to regularity properties of set-valued mappings. Following the recent publications by Lewis, Luke and Malick (2009), Drusvyatskiy, Ioffe and Lewis (2014) and others, we study the application of uniform regularity and related properties of collections of sets to alternating projections for solving nonconvex feasibility problems and compare existing results on this topic. Motivated by Ioffe (2000) and his subsequent publications, we use the classical iteration scheme going back to Banach, Schauder, Lyusternik and Graves to establish criteria for regularity properties of set-valued mappings and compare this approach with the one based on the Ekeland variational principle. Finally, following the recent works by Khanh et al. on stability analysis for optimization-related problems, we investigate calmness of set-valued solution mappings of variational problems. **Description:** Doctor of Philosophy
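The alternating projections scheme mentioned in this abstract can be illustrated on a simple convex instance (a hypothetical toy example, not taken from the thesis): finding a point in the intersection of a line and a disk in the plane by repeatedly projecting onto each set in turn.

```python
import math

def project_line(p):
    """Project onto the line y = 0 (the first set)."""
    x, _ = p
    return (x, 0.0)

def project_disk(p, center=(0.0, 0.5), radius=1.0):
    """Project onto a closed disk (the second set)."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    d = math.hypot(dx, dy)
    if d <= radius:
        return p  # already inside the disk
    return (center[0] + radius * dx / d, center[1] + radius * dy / d)

def alternating_projections(p, iters=200):
    """Classical alternating projections: p -> P_disk(P_line(p))."""
    for _ in range(iters):
        p = project_disk(project_line(p))
    return p

p = alternating_projections((3.0, 4.0))
# The iterates approach the intersection of the two sets,
# here the point (sqrt(0.75), 0) ~ (0.866, 0).
```

For this convex, transversally intersecting pair the convergence is linear; the nonconvex setting studied in the thesis is precisely where regularity properties of the collection of sets are needed to guarantee such behaviour.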

Global optimality conditions and optimization methods for polynomial programming problems and their applications

**Authors:** Tian, Jing **Date:** 2014 **Type:** Text, Thesis, PhD **Full Text:** **Description:** The polynomial programming problem, which has a polynomial objective function and either no constraints or polynomial constraints, occurs frequently in engineering design, investment science, control theory, network distribution, signal processing and location-allocation contexts. Moreover, the polynomial programming problem is known to be NP-hard (Nondeterministic Polynomial-time hard). The polynomial programming problem has attracted a lot of attention, and includes quadratic, cubic, homogeneous and normal quartic programming problems as special cases. Existing methods for solving polynomial programming problems include algebraic methods and various convex relaxation methods; among these, semidefinite programming (SDP) and sum of squares (SOS) relaxations are especially popular. Theoretically, SDP and SOS relaxation methods are very powerful and successful in solving the general polynomial programming problem with a compact feasible region. In practice, however, their solvability depends on the size or the degree of the polynomial programming problem and the required accuracy, so solving large-scale SDP problems still remains a computational challenge. It is well known that traditional local optimization methods are designed based on necessary local optimality conditions, i.e., Karush-Kuhn-Tucker (KKT) conditions. Motivated by this, some researchers proposed a necessary global optimality condition for a quadratic programming problem and designed a new local optimization method according to it. In this thesis, we apply this idea to cubic and quartic programming problems, and further to general unconstrained and constrained polynomial programming problems.
For these polynomial programming problems, we investigate necessary global optimality conditions and design new local optimization methods according to these conditions. These necessary global optimality conditions are generally stronger than the KKT conditions; hence the new local minimizers obtained by the new local optimization methods may improve on some KKT points. Our ultimate aim is to design global optimization methods for these polynomial programming problems. We note that the filled function method is one of the best-known and most practical auxiliary function methods for reaching a global minimizer. In this thesis, we design global optimization methods by combining the newly proposed local optimization methods with some auxiliary functions. Numerical examples illustrate the efficiency and stability of the optimization methods. Finally, we discuss applications to sensor network localization problems and systems of polynomial equations. It is worth mentioning that we also apply the idea and results for polynomial programming problems to nonlinear programming problems (NLP). We provide an optimality condition, design new local optimization methods according to it, and design global optimization methods for the problem (NLP) by combining the new local optimization methods with an auxiliary function. To test the performance of the global optimization methods, we compare them with two other heuristic methods; the results demonstrate that our methods outperform the two other algorithms. **Description:** Doctor of Philosophy
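To make the distinction between KKT (stationary) points and global minimizers concrete, here is a toy illustration (my own example, not the thesis's method): plain gradient descent on the quartic f(x) = x⁴ − 4x² + x stops at whichever stationary point is nearest its starting point, so a wrapper is needed to recover the global minimizer. A naive multi-start rule stands in below for the thesis's auxiliary-function (filled function) machinery, which is not reproduced here.

```python
def f(x):
    """A quartic with two local minima, near x = -1.47 (global) and x = 1.35."""
    return x**4 - 4 * x**2 + x

def grad_f(x):
    return 4 * x**3 - 8 * x + 1

def gradient_descent(x, step=0.01, iters=2000):
    """Converges to a nearby stationary (KKT) point, not necessarily global."""
    for _ in range(iters):
        x -= step * grad_f(x)
    return x

# A single run started at x0 = 2 stops at the *local* minimizer near 1.35 ...
local_x = gradient_descent(2.0)
# ... while restarting from several points recovers the global minimizer near -1.47.
best_x = min((gradient_descent(x0) for x0 in (-2.0, 0.0, 2.0)), key=f)
```

Both returned points satisfy the first-order condition f′(x) ≈ 0, which is why conditions stronger than KKT, as investigated in the thesis, are needed to discriminate between them.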

Learning Bayesian networks based on optimization approaches

**Authors:** Taheri, Sona **Date:** 2012 **Type:** Text, Thesis, PhD **Full Text:** false **Description:** Learning accurate classifiers from preclassified data is a very active research topic in machine learning and artificial intelligence. There are numerous classifier paradigms, among which Bayesian Networks are very effective and well known in domains with uncertainty. Bayesian Networks are widely used representation frameworks for reasoning with probabilistic information. These models use graphs to capture dependence and independence relationships between feature variables, allowing a concise representation of the knowledge as well as efficient graph-based query processing algorithms. Such a model is defined by two components: structure learning and parameter learning. The structure of the model is a directed acyclic graph: the nodes correspond to the feature variables in the domain, and the arcs (edges) show the causal relationships between them. A directed edge relates the variables so that the variable corresponding to the terminal node (child) is conditioned on the variable corresponding to the initial node (parent). Parameter learning determines the probabilities and conditional probabilities based on prior information or past experience; the set of probabilities is represented in the conditional probability table. Once the network structure is constructed, probabilistic inferences are readily calculated and can be performed to predict the outcome of some variables based on observations of others. However, structure learning is a complex problem, since the number of candidate structures grows exponentially as the number of feature variables increases. This thesis is devoted to the development of learning structures and parameters in Bayesian Networks. Different models based on optimization techniques are introduced to construct an optimal structure of a Bayesian Network.
These models also consider the improvement of the Naive Bayes structure by developing new algorithms to relax its independence assumptions. We present various models to learn the parameters of Bayesian Networks; in particular, we propose optimization models for the Naive Bayes and the Tree Augmented Naive Bayes by considering different objective functions. To solve the corresponding optimization problems in Bayesian Networks, we develop new optimization algorithms. Local optimization methods are introduced based on the combination of the gradient and Newton methods; it is proved that the proposed methods are globally convergent and have superlinear convergence rates. As a global search we use the global optimization method AGOP, implemented in the open software library GANSO, and apply the proposed local methods in combination with AGOP. Therefore, the main contributions of this thesis are (a) new algorithms for learning an optimal structure of a Bayesian Network; (b) new models for learning the parameters of Bayesian Networks with given structures; and (c) new optimization algorithms for optimizing the proposed models in (a) and (b). To validate the proposed methods, we conduct experiments across a number of real-world problems. Print version is available at: http://library.federation.edu.au/record=b1804607~S4 **Description:** Doctor of Philosophy
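As a minimal, self-contained illustration of the Naive Bayes baseline whose structure and parameters the thesis's optimization models improve upon, here is a textbook word-count classifier with Laplace smoothing on made-up data (the thesis's own models and the AGOP/GANSO machinery are not reproduced here):

```python
import math
from collections import Counter

def train_naive_bayes(samples):
    """samples: list of (words, label) pairs. Returns the fitted model."""
    class_counts = Counter(label for _, label in samples)
    word_counts = {c: Counter() for c in class_counts}
    vocab = set()
    for words, label in samples:
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab, len(samples)

def classify(model, words):
    class_counts, word_counts, vocab, n = model
    def log_posterior(c):
        total = sum(word_counts[c].values())
        score = math.log(class_counts[c] / n)  # log prior
        for w in words:
            # Laplace-smoothed conditional; words assumed independent given the class
            score += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        return score
    return max(class_counts, key=log_posterior)

train = [
    ("buy cheap meds".split(), "spam"),
    ("cheap meds now".split(), "spam"),
    ("meeting at noon".split(), "ham"),
    ("lunch at noon".split(), "ham"),
]
model = train_naive_bayes(train)
label = classify(model, "cheap meds".split())  # classified as "spam"
```

The per-word factorization in `log_posterior` is exactly the conditional independence assumption that the thesis's Tree Augmented Naive Bayes and structure-learning models are designed to relax.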