A generalization of a theorem of Arrow, Barankin and Blackwell to a nonconvex case
- Authors: Kasimbeyli, Nergiz , Kasimbeyli, Refail , Mammadov, Musa
- Date: 2016
- Type: Text , Journal article
- Relation: Optimization Vol. 65, no. 5 (May 2016), p. 937-945
- Full Text:
- Reviewed:
- Description: The paper presents a generalization of a known density theorem of Arrow, Barankin, and Blackwell for properly efficient points defined as support points of sets with respect to monotonically increasing sublinear functions. This result is shown to hold for nonconvex sets of a partially ordered reflexive Banach space.
Structure learning of Bayesian Networks using global optimization with applications in data classification
- Authors: Taheri, Sona , Mammadov, Musa
- Date: 2014
- Type: Text , Journal article
- Relation: Optimization Letters Vol. 9, no. 5 (2014), p. 931-948
- Full Text:
- Reviewed:
- Description: Bayesian Networks are an increasingly popular method for modeling uncertainty in artificial intelligence and machine learning. A Bayesian Network consists of a directed acyclic graph in which each node represents a variable and each arc represents a probabilistic dependency between two variables. Constructing a Bayesian Network from data is a learning process with two steps: learning the structure and learning the parameters. Learning the network structure from data is the most difficult task in this process. This paper presents a new optimization-based algorithm for constructing an optimal structure for Bayesian Networks. The algorithm has two major parts. First, we define an optimization model to find good network graphs. Then we apply an optimization approach, the first of its kind in the literature, for removing possible cycles from the directed graphs obtained in the first part. The main advantage of the proposed method is that the maximal number of parents per variable is not fixed a priori but is determined during the optimization procedure. The method also considers all networks, including cyclic ones, and then chooses the best structure by applying a global optimization method. To show the efficiency of the algorithm, several closely related algorithms, including the unrestricted dependency Bayesian Network algorithm as well as the benchmark algorithms SVM and C4.5, are employed for comparison. We apply these algorithms to data classification; data sets are taken from the UCI machine learning repository and LIBSVM. © 2014, Springer-Verlag Berlin Heidelberg.
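The cycle-removal step can be illustrated with a minimal sketch (our own illustration of the general idea, not the paper's optimization model): repeatedly find a directed cycle and delete its weakest edge until the graph is acyclic. The edge weights here are hypothetical stand-ins for the dependency strengths an optimization model would produce.

```python
def find_cycle(nodes, edges):
    """Return a list of edges forming one directed cycle, or None if acyclic."""
    adj = {u: [] for u in nodes}
    for (u, v) in edges:
        adj[u].append(v)
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {u: WHITE for u in nodes}
    stack = []

    def dfs(u):
        color[u] = GREY
        stack.append(u)
        for v in adj[u]:
            if color[v] == GREY:          # back edge: cycle found
                i = stack.index(v)
                path = stack[i:] + [v]
                return list(zip(path, path[1:]))
            if color[v] == WHITE:
                c = dfs(v)
                if c:
                    return c
        stack.pop()
        color[u] = BLACK
        return None

    for u in nodes:
        if color[u] == WHITE:
            c = dfs(u)
            if c:
                return c
    return None

def break_cycles(nodes, edges):
    """Delete the lowest-weight edge of each cycle until the graph is a DAG."""
    edges = dict(edges)
    while (cycle := find_cycle(nodes, edges)) is not None:
        weakest = min(cycle, key=lambda e: edges[e])
        del edges[weakest]
    return edges

# Example: A -> B -> C -> A with a weak C -> A edge; the cycle breaks there.
dag = break_cycles(["A", "B", "C"],
                   {("A", "B"): 0.9, ("B", "C"): 0.8, ("C", "A"): 0.1})
```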
A new auxiliary function method for general constrained global optimization
- Authors: Wu, Zhiyou , Bai, Fusheng , Yang, Yongjian , Mammadov, Musa
- Date: 2013
- Type: Text , Journal article
- Relation: Optimization Vol. 62, no. 2 (2013), p. 193-210
- Full Text:
- Reviewed:
- Description: In this article, we first propose a method to obtain an approximate feasible point for general constrained global optimization problems (with both inequality and equality constraints). Then we propose an auxiliary function method to obtain a global minimizer or an approximate global minimizer with a required precision for general global optimization problems by locally solving some unconstrained programming problems. Some numerical examples are reported to demonstrate the efficiency of the present optimization method. © 2013 Taylor & Francis.
Globally convergent algorithms for solving unconstrained optimization problems
- Authors: Taheri, Sona , Mammadov, Musa , Seifollahi, Sattar
- Date: 2013
- Type: Text , Journal article
- Relation: Optimization Vol. , no. (2013), p. 1-15
- Full Text:
- Reviewed:
- Description: New algorithms for solving unconstrained optimization problems are presented, based on the idea of combining two types of descent directions: the anti-gradient direction and either the Newton or a quasi-Newton direction. The use of the latter directions improves the convergence rate. Global and superlinear convergence properties of these algorithms are established. Numerical experiments on a set of unconstrained test problems are reported, and the proposed algorithms are compared with similar existing methods. This comparison demonstrates the efficiency of the proposed combined methods.
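A minimal sketch of the combined-direction idea (our own illustration, not the authors' exact scheme): prefer the Newton direction when the Hessian is nonsingular and the step is a descent direction, otherwise fall back to the anti-gradient, with Armijo backtracking for the step length.

```python
import numpy as np

def combined_descent(f, grad, hess, x0, tol=1e-8, max_iter=100):
    """Line-search descent preferring the Newton direction, falling back
    to the anti-gradient when Newton fails or is not a descent direction."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        try:
            d = np.linalg.solve(hess(x), -g)   # Newton direction
            if g @ d >= 0:                     # not a descent direction
                d = -g                         # fall back to anti-gradient
        except np.linalg.LinAlgError:
            d = -g                             # singular Hessian
        t = 1.0                                # Armijo backtracking
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d
    return x

# Example: a convex quadratic, minimized at (1, -2) in one Newton step.
xmin = combined_descent(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                        lambda x: np.array([2 * (x[0] - 1), 2 * (x[1] + 2)]),
                        lambda x: 2 * np.eye(2),
                        [0.0, 0.0])
```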
A new method for solving linear ill-posed problems
- Authors: Zhang, Jianjun , Mammadov, Musa
- Date: 2012
- Type: Text , Journal article
- Relation: Applied Mathematics and Computation Vol. 218, no. 20 (2012), p. 10180-10187
- Full Text:
- Reviewed:
- Description: In this paper, we propose a new method for solving large-scale ill-posed problems. The method is based on the Karush-Kuhn-Tucker conditions, the Fischer-Burmeister function and the discrepancy principle. The main difference from the majority of existing methods for solving ill-posed problems is that we do not need to choose a regularization parameter in advance. Experimental results show that the proposed method is effective and promising for many practical problems. © 2012.
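The Fischer-Burmeister function central to such reformulations is φ(a, b) = √(a² + b²) − a − b; it vanishes exactly when a ≥ 0, b ≥ 0 and ab = 0, so each complementarity condition of the KKT system can be rewritten as a single equation. A small check:

```python
import math

def fischer_burmeister(a, b):
    """phi(a, b) = sqrt(a^2 + b^2) - a - b; zero iff a >= 0, b >= 0, a*b = 0."""
    return math.hypot(a, b) - a - b

# Complementary pairs (one coordinate zero, both nonnegative) give zero;
# any other pair gives a nonzero residual.
```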
The effect of regularization on drug-reaction relationships
- Authors: Mammadov, Musa , Zhao, L. , Zhang, Jianjun
- Date: 2012
- Type: Text , Journal article
- Relation: Optimization Vol. 61, no. 4 (2012), p. 405-422
- Full Text:
- Reviewed:
- Description: The least-squares method is a standard approach to data fitting with important applications in many areas of science and engineering, including finance. When the problem under consideration involves large-scale sparse matrices, regularization methods are used to obtain more stable solutions by relaxing the data fitting. In this article, a new regularization algorithm is introduced based on the Karush-Kuhn-Tucker conditions and the Fischer-Burmeister function. The Newton method is used for solving the corresponding systems of equations. The advantages of the proposed method have been demonstrated in the establishment of drug-reaction relationships based on the Australian Adverse Drug Reaction Advisory Committee database. © 2012 Copyright Taylor and Francis Group, LLC.
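For background (this is the standard technique such regularization algorithms build on, not the article's new method): Tikhonov regularization stabilizes least squares by solving (AᵀA + λI)x = Aᵀb instead of the plain, possibly ill-conditioned normal equations.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Regularized least squares: solve (A^T A + lam * I) x = A^T b.
    lam = 0 recovers the ordinary least-squares normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Example: a well-posed fit, where lam = 0 recovers the exact coefficients.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = A @ np.array([1.0, 2.0])
x = tikhonov(A, b, 0.0)
```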
To be fair or efficient or a bit of both
- Authors: Zukerman, Moshe , Mammadov, Musa , Tan, Liansheng , Ouveysi, Iradj , Andrew, Lachlan
- Date: 2008
- Type: Text , Journal article
- Relation: Computers and Operations Research Vol. 35, no. 12 (2008), p. 3787-3806
- Full Text:
- Reviewed:
- Description: Introducing a new concept of (α, β)-fairness, which allows for a bounded fairness compromise so that a source is allocated a rate no less than α (0 ≤ α ≤ 1) and no more than β (β ≥ 1) times its fair share, this paper provides a framework to optimize efficiency (utilization, throughput or revenue) subject to fairness constraints in a general telecommunications network, for an arbitrary fairness criterion and arbitrary cost functions. We formulate a non-linear program (NLP) that finds the optimal bandwidth allocation by maximizing efficiency subject to (α, β)-fairness constraints. This leads to what we call an efficiency-fairness function, which shows the gain in efficiency as a function of the extent to which fairness is compromised. To solve the NLP we use two algorithms: the first is a well-known branch-and-bound-based algorithm called Lipschitz Global Optimization, and the second is a recently developed algorithm called the Algorithm for Global Optimization Problems (AGOP). We demonstrate the applicability of the framework to a range of examples, from sharing a single link to efficiency-fairness issues associated with serving customers in remote communities.
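The trade-off can be illustrated on a single shared link (a toy example of ours, not one from the paper): give every source at least α times its fair share, then hand the leftover capacity to the highest-revenue sources first, capped at β times the fair share. For a single link, this greedy rule solves the revenue-maximizing linear program.

```python
def alpha_beta_allocation(weights, capacity, alpha, beta):
    """Revenue-maximizing rates on one link under (alpha, beta)-fairness:
    each of the n sources gets between alpha and beta times the fair
    share capacity / n; spare capacity goes to higher-revenue sources
    first, which is optimal for this single-link linear program."""
    n = len(weights)
    fair = capacity / n
    x = [alpha * fair] * n                      # fairness floor for everyone
    spare = capacity - sum(x)
    for i in sorted(range(n), key=lambda i: -weights[i]):
        extra = min(spare, beta * fair - x[i])  # cap at the fairness ceiling
        x[i] += extra
        spare -= extra
    return x

# Three sources with revenue weights 3, 2, 1 on a link of capacity 6
# (fair share 2 each); alpha = 0.5, beta = 1.5 allows rates in [1, 3].
rates = alpha_beta_allocation([3.0, 2.0, 1.0], 6.0, 0.5, 1.5)
```

With α = β = 1 the same routine reproduces the strictly fair allocation, so the pair (α, β) interpolates between strict fairness and pure revenue maximization.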