Your selections:

21 Yearwood, John
10 Taheri, Sona
8 Bai, Fusheng
8 Tilakaratne, Chandima
8 Wu, Zhiyou
6 Rubinov, Alex
6 Saunders, Gary
5 Hajilarov, Eldar
5 Kuznetsov, Alexey
5 Morris, Sidney
5 Sultan, Ibrahim
4 Bagirov, Adil
4 Kasimbeyli, Refail
4 Yang, Y. J.
4 Zhao, Lei
3 Banerjee, Arunava
3 Ivanov, Anatoli
3 Kouhbor, Shahnaz
3 Kruger, Alexander


13 0102 Applied Mathematics
10 0103 Numerical and Computational Mathematics
8 0101 Pure Mathematics
8 Classification
8 Global optimization
8 Optimization
7 Data mining
7 Optimisation
6 0802 Computation Theory and Mathematics
4 Drug reaction
4 Multi-label classification
4 Newton's method
4 Turnpike property
3 0801 Artificial Intelligence and Image Processing
3 Adverse drug reaction
3 Algorithm
3 Asymptotical stability
3 Australia
3 Bayesian networks
3 Data classification



A comparison of two methods to establish drug-reaction relationships in the ADRAC database

- Mammadov, Musa, Saunders, Gary

**Authors:**Mammadov, Musa , Saunders, Gary**Date:**2004**Type:**Text , Conference paper**Relation:**Paper presented at the Fourth International ICSC Symposium on Engineering of Intelligent Systems (EIS 2004), Island of Madeira, Portugal : 29th February, 2004**Full Text:**false**Reviewed:****Description:**Adverse drug reactions (ADRs) are estimated to be one of the leading causes of death. Many national and international agencies have set up databases of ADR reports for the express purpose of determining the relationship between drugs and the adverse reactions that they cause. We formulate the drug-reaction relationship problem as a continuous optimization problem and utilize C-GRASP, a new continuous global optimization heuristic, to approximately determine the relationship between drugs and adverse reactions. Our approach is compared against others in the literature and is shown to find better solutions.**Description:**E1**Description:**2003000897

A global optimization algorithm for systems of nonlinear equations

- Mammadov, Musa, Taheri, Sona

**Authors:**Mammadov, Musa , Taheri, Sona**Date:**2010**Type:**Text , Conference proceedings**Full Text:**false**Description:**In this paper, a new algorithm is proposed for the solution of systems of nonlinear equations. The algorithm uses a combination of the gradient and Newton's methods. A novel dynamic combinator is developed to determine the contribution of each method in the combination, and this contribution is adjusted through parameters in the proposed algorithm. The efficiency of the algorithm is studied in solving systems of nonlinear equations.
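The gradient/Newton combination described in this abstract can be sketched as follows. The fixed blend weight `alpha` and the backtracking safeguard below are simplifications standing in for the paper's dynamic combinator, not its actual implementation:

```python
import numpy as np

def combined_solve(F, J, x0, alpha=0.9, tol=1e-10, max_iter=200):
    """Solve F(x) = 0 by blending the Newton direction with the
    anti-gradient of the merit function 0.5*||F(x)||^2.
    `alpha` is a fixed blend weight; the paper adjusts the
    contribution of each method dynamically (simplified here)."""
    x = np.asarray(x0, dtype=float)
    merit = lambda z: 0.5 * np.dot(F(z), F(z))
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jx = J(x)
        newton_dir = np.linalg.solve(Jx, -Fx)   # Newton direction
        grad_dir = -Jx.T @ Fx                   # anti-gradient of the merit function
        d = alpha * newton_dir + (1 - alpha) * grad_dir
        t, m0 = 1.0, merit(x)                   # backtracking line search
        while merit(x + t * d) >= m0 and t > 1e-12:
            t *= 0.5
        x = x + t * d
    return x

# Example: F(x, y) = (x^2 + y^2 - 1, x - y) has roots at (±1/√2, ±1/√2).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
root = combined_solve(F, J, np.array([2.0, 0.5]))
```

Both directions are descent directions for the merit function whenever the Jacobian is nonsingular, so any convex combination of them is as well, which is what lets the line search guarantee progress.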

Classification on shorter featured and multi-label datasets

**Authors:**Mammadov, Musa**Date:**2007**Type:**Text , Conference paper**Relation:**Paper presented at 7th International Conference on Optimization: Techniques and Applications, ICOTA7, Kobe International Conference Center, Japan : 12th-15th December 2007**Full Text:**false**Description:**2003005711

Optimality conditions in nonconvex optimization via weak subdifferentials

- Kasimbeyli, Refail, Mammadov, Musa

**Authors:**Kasimbeyli, Refail , Mammadov, Musa**Date:**2011**Type:**Text , Journal article**Relation:**Nonlinear Analysis, Theory, Methods and Applications Vol. 74, no. 7 (2011), p. 2534-2547**Full Text:****Reviewed:****Description:**In this paper we study optimality conditions for optimization problems described by a special class of directionally differentiable functions. The well-known necessary and sufficient optimality condition of nonsmooth convex optimization, given in the form of a variational inequality, is generalized to the nonconvex case by using the notion of weak subdifferentials. The equivalent formulation of this condition in terms of weak subdifferentials and augmented normal cones is also presented. © 2011 Elsevier Ltd. All rights reserved.
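For reference, the weak subdifferential used here (introduced by Azimov and Gasimov) replaces the affine minorant of the convex subdifferential with a norm-penalized one. The following is the standard definition as commonly stated; treat the exact form as an assumption, since the paper itself is not reproduced here:

```latex
% Weak subdifferential of f at \bar{x}: pairs (v, c) supporting f from
% below by a norm-penalized (rather than affine) minorant.
\partial^{w} f(\bar{x}) =
\left\{ (v, c) \in X^{*} \times \mathbb{R}_{+} :
f(x) - f(\bar{x}) \ge \langle v, x - \bar{x} \rangle
- c \, \lVert x - \bar{x} \rVert \quad \forall x \in X \right\}
```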

Global asymptotic stability in a class of nonlinear differential delay equations

- Ivanov, Anatoli, Mammadov, Musa

**Authors:**Ivanov, Anatoli , Mammadov, Musa**Date:**2011**Type:**Text , Journal article**Relation:**Discrete and Continuous Dynamical Systems Vol. 2011, no. Supplement 2011 (2011), p.**Full Text:****Reviewed:****Description:**An essentially nonlinear differential equation with delay serving as a mathematical model of several applied problems is considered. Sufficient conditions for the global asymptotic stability of a unique equilibrium are derived. An application to a physiological model by M.C. Mackey is treated in detail.**Description:**2003009358
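The physiological model by M. C. Mackey mentioned in the abstract is usually written as the Mackey-Glass delay differential equation; the form below is the commonly cited one, and the paper's exact parametrization may differ:

```latex
% Mackey-Glass equation: \beta, \gamma, n > 0 and delay \tau > 0
\dot{x}(t) = \frac{\beta \, x(t - \tau)}{1 + x(t - \tau)^{n}} - \gamma \, x(t)
```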

An auxiliary function method for constrained systems of nonlinear equations

- Wu, Zhiyou, Bai, Fusheng, Mammadov, Musa

**Authors:**Wu, Zhiyou , Bai, Fusheng , Mammadov, Musa**Date:**2008**Type:**Text , Conference paper**Relation:**Paper presented at 20th EURO Mini Conference: Continuous Optimization and Knowledge-Based Technologies, EurOPT-2008, Neringa, Lithuania : 20th-23rd May 2008 p. 259-265**Full Text:**false**Description:**In this paper, we propose an auxiliary function method to solve constrained systems of nonlinear equations. By introducing an auxiliary function, an unconstrained (box-constrained) optimization problem is constructed for a given constrained system of nonlinear equations. It is shown that any local minimizer of the constructed unconstrained optimization problem is an approximate solution to the given constrained system when parameters are appropriately chosen, and the precision for approximation can be preset. It is also shown that any accumulation point of the local minimizers of the constructed unconstrained optimization problems with a sequence of parameters tending to zero is a solution to the given constrained system of nonlinear equations.

**Authors:**Mammadov, Musa**Date:**2008**Type:**Text , Book chapter**Relation:**Data Mining in Biomedicine p. 141-167**Full Text:**false**Reviewed:**

A new global optimization algorithm based on a dynamical systems approach

**Authors:**Mammadov, Musa**Date:**2004**Type:**Text , Conference paper**Relation:**Paper presented at ICOTA6: 6th International Conference on Optimization - Techniques and Applications, Ballarat, Victoria : 9th December, 2004**Full Text:**false**Reviewed:****Description:**The purpose of the paper is to develop and study new techniques for global optimization based on a dynamical systems approach. This approach uses the notion of relationships between variables, which describe how changes in the variables influence one another. A numerical algorithm for global optimization is introduced.**Description:**E1**Description:**2003000892

The effect of regularization on drug-reaction relationships

- Mammadov, Musa, Zhao, L., Zhang, Jianjun

**Authors:**Mammadov, Musa , Zhao, L. , Zhang, Jianjun**Date:**2012**Type:**Text , Journal article**Relation:**Optimization Vol. 61, no. 4 (2012), p. 405-422**Full Text:****Reviewed:****Description:**The least-squares method is a standard approach used in data fitting that has important applications in many areas of science and engineering, including many finance problems. When the problem under consideration involves large-scale sparse matrices, regularization methods are used to obtain more stable solutions by relaxing the data fitting. In this article, a new regularization algorithm is introduced based on the Karush-Kuhn-Tucker conditions and the Fischer-Burmeister function. The Newton method is used for solving the corresponding systems of equations. The advantages of the proposed method have been demonstrated in the establishment of drug-reaction relationships based on the Australian Adverse Drug Reaction Advisory Committee database. © 2012 Copyright Taylor and Francis Group, LLC.
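The Fischer-Burmeister function at the core of this kind of algorithm turns a complementarity condition into an equation, which is what makes a Newton-type method applicable; a minimal sketch of the function itself (the paper's regularization model is not reproduced here):

```python
import math

def fischer_burmeister(a, b):
    """phi(a, b) = sqrt(a^2 + b^2) - a - b.
    phi(a, b) == 0 exactly when a >= 0, b >= 0 and a * b == 0,
    so KKT complementarity conditions can be rewritten as a
    system of equations and handed to a Newton-type solver."""
    return math.hypot(a, b) - a - b

print(fischer_burmeister(0.0, 3.0))  # complementary pair -> 0.0
print(fischer_burmeister(2.0, 0.0))  # complementary pair -> 0.0
print(fischer_burmeister(1.0, 1.0))  # both strictly positive -> negative value
```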

From convex to nonconvex: A loss function analysis for binary classification

- Zhao, Lei, Mammadov, Musa, Yearwood, John

**Authors:**Zhao, Lei , Mammadov, Musa , Yearwood, John**Date:**2010**Type:**Text , Conference paper**Relation:**Paper presented at 10th IEEE International Conference on Data Mining Workshops, ICDMW 2010 p. 1281-1288**Full Text:****Reviewed:****Description:**Problems of data classification can be studied in the framework of regularization theory as ill-posed problems. In this framework, loss functions play an important role in the application of regularization theory to classification. In this paper, we review some important convex loss functions, including hinge loss, square loss, modified square loss, exponential loss, and logistic regression loss, as well as some non-convex loss functions, such as sigmoid loss, ø-loss, ramp loss, normalized sigmoid loss, and the loss function of a two-layer neural network. Based on the analysis of these loss functions, we propose a new differentiable non-convex loss function, called the smoothed 0-1 loss function, which is a natural approximation of the 0-1 loss function. To compare the performance of different loss functions, we propose two algorithms for binary classification, one for convex loss functions and the other for non-convex loss functions. A set of experiments is conducted on several binary data sets from the UCI repository. The results show that the proposed smoothed 0-1 loss function is robust, especially for noisy data sets with many outliers. © 2010 IEEE.
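The abstract describes the smoothed 0-1 loss only as a differentiable approximation of the 0-1 step; a sigmoid-style surrogate of that kind can be sketched as follows (the exact functional form used in the paper is an assumption here):

```python
import math

def smoothed_01_loss(margin, k=10.0):
    """Smooth surrogate for the 0-1 loss on the margin z = y * f(x):
    close to 1 for z << 0 (misclassified), close to 0 for z >> 0,
    and approaching the true 0-1 step as the sharpness k grows."""
    return 1.0 / (1.0 + math.exp(k * margin))

print(smoothed_01_loss(-2.0))  # badly misclassified -> close to 1
print(smoothed_01_loss(2.0))   # confidently correct -> close to 0
print(smoothed_01_loss(0.0))   # on the decision boundary -> 0.5
```

Unlike the hinge or square losses, this surrogate is bounded above by 1, which is what gives 0-1-style losses their robustness to outliers: a single badly misclassified point cannot dominate the objective.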

Predicting trading signals of the All Share Price Index using a modified neural network algorithm

- Tilakaratne, Chandima, Tissera, J.H.D.S.P, Mammadov, Musa

**Authors:**Tilakaratne, Chandima , Tissera, J.H.D.S.P , Mammadov, Musa**Date:**2008**Type:**Text , Conference paper**Relation:**Proceedings of the 9th International Information Technology Conference; 28th-29th October, 2008, Colombo, Sri Lanka**Full Text:**false**Reviewed:****Description:**This study predicts whether it is best to buy, hold or sell shares (trading signals) of the All Share Price Index (ASPI) of the Colombo Stock Exchange, using a modified neural network (NN) algorithm. Most commonly used classification techniques are not successful in predicting trading signals when the distribution of the actual trading signals among these three classes is imbalanced. The structure of this modified neural network is the same as that of feedforward neural networks. The algorithm minimises a modified Ordinary Least Squares (OLS) error function. An adjustment relating to the contribution from the historical data used for training the networks, and a penalisation of incorrectly classified trading signals, were accounted for when modifying the OLS function. A global optimization algorithm was employed to train these networks. Results obtained were satisfactory.

Structure learning of Bayesian networks using a new unrestricted dependency algorithm

- Taheri, Sona, Mammadov, Musa

**Authors:**Taheri, Sona , Mammadov, Musa**Date:**2012**Type:**Text , Conference proceedings**Full Text:****Description:**Bayesian Networks have received extensive attention in data mining due to their efficiency and reasonable predictive accuracy. A Bayesian Network is a directed acyclic graph in which each node represents a variable and each arc a probabilistic dependency between two variables. Constructing a Bayesian Network from data is a learning process divided into two steps: learning the structure and learning the parameters. In many domains, the structure is not known a priori and must be inferred from data. This paper presents an iterative unrestricted dependency algorithm for learning the structure of Bayesian Networks for binary classification problems. Numerical experiments are conducted on several real world data sets, where continuous features are discretized by applying two different methods. The performance of the proposed algorithm is compared with the Naive Bayes, the Tree Augmented Naive Bayes, and the k

Learning the naive bayes classifier with optimization models

- Taheri, Sona, Mammadov, Musa

**Authors:**Taheri, Sona , Mammadov, Musa**Date:**2013**Type:**Text , Journal article**Relation:**International Journal of Applied Mathematics and Computer Science Vol. 23, no. 4 (2013), p. 787-795**Full Text:****Reviewed:****Description:**Naive Bayes is among the simplest probabilistic classifiers. It often performs surprisingly well in many real world applications, despite the strong assumption that all features are conditionally independent given the class. In the learning process of this classifier with the known structure, class probabilities and conditional probabilities are calculated using training data, and then values of these probabilities are used to classify new observations. In this paper, we introduce three novel optimization models for the naive Bayes classifier where both class probabilities and conditional probabilities are considered as variables. The values of these variables are found by solving the corresponding optimization problems. Numerical experiments are conducted on several real world binary classification data sets, where continuous features are discretized by applying three different methods. The performances of these models are compared with the naive Bayes classifier, tree augmented naive Bayes, the SVM, C4.5 and the nearest neighbor classifier. The obtained results demonstrate that the proposed models can significantly improve the performance of the naive Bayes classifier, yet at the same time maintain its simple structure.
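For context, the count-based baseline that the paper's optimization models replace looks like this; the toy weather data and the unseen-value floor are illustrative choices, not from the paper:

```python
from collections import Counter, defaultdict

def train_naive_bayes(X, y):
    """Standard naive Bayes for categorical features: class and
    conditional probabilities come straight from training counts.
    The paper instead treats these probabilities as optimization
    variables; this closed-form estimate is the baseline."""
    counts = Counter(y)
    class_prob = {c: k / len(y) for c, k in counts.items()}
    cond = defaultdict(lambda: defaultdict(float))
    for xi, c in zip(X, y):
        for j, v in enumerate(xi):
            cond[c][(j, v)] += 1
    for c in cond:
        for key in cond[c]:
            cond[c][key] /= counts[c]          # P(feature j = v | class c)
    return class_prob, cond

def predict(x, class_prob, cond):
    def score(c):
        p = class_prob[c]
        for j, v in enumerate(x):
            p *= cond[c].get((j, v), 1e-9)     # tiny floor for unseen values
        return p
    return max(class_prob, key=score)

X = [("sunny", "hot"), ("sunny", "mild"), ("rainy", "mild"), ("rainy", "hot")]
y = ["no", "no", "yes", "yes"]
model = train_naive_bayes(X, y)
print(predict(("rainy", "mild"), *model))  # -> yes
```

The paper's contribution is to relax these fixed count estimates into decision variables of an optimization problem while keeping the same simple product-of-probabilities structure at prediction time.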

A study of drug-reaction relationships in Australian drug safety data

- Mammadov, Musa, Saunders, Gary, Dekker, Evan

**Authors:**Mammadov, Musa , Saunders, Gary , Dekker, Evan**Date:**2003**Type:**Text , Conference paper**Relation:**Paper presented at the 2nd Australian Data Mining Workshop, Sydney, New South Wales : 8th December, 2003**Full Text:**false**Reviewed:****Description:**The sparse nature of voluntarily reported drug safety data benefits from a system that consolidates the massive amount of data into a manageable format for analysis. This has been done for Australian drug safety data by the Australian Adverse Drug Reaction Advisory Committee (ADRAC) for reactions using the systems organ class (SOC) ontology. There has long been a need for a similar kind of grouping to apply to drugs in this type of data. In ADRAC, drugs are currently listed by trade-name, where only some of these trade-names were assigned anatomical-therapeutic-chemical classification (ATC) codes. We assigned an ATC code to each ADRAC trade-name and show that this ontology facilitates the detection of drug class / reaction class associations at various levels of specificity. This allows different views of these associations (even very rare ones) and allows their significance to be measured for the development of more sensitive signal detection methods. We report that this ATC classification enables both a grouped association-rule approach that is useful for studying rare associations and the development of an adverse reaction signal detection method.**Description:**E1**Description:**2003000340

Dynamical systems based on a fuzzy derivative and its applications to data classification

- Mammadov, Musa, Rubinov, Alex, Yearwood, John

**Authors:**Mammadov, Musa , Rubinov, Alex , Yearwood, John**Date:**2003**Type:**Text , Conference paper**Relation:**Paper presented at the Industrial Optimisation 2003 Conference, Perth : 30th September, 2002**Full Text:**false**Reviewed:****Description:**E1**Description:**2003000339

A nonsmooth optimization approach to H-infinity synthesis

- Mammadov, Musa, Orsi, Robert

**Authors:**Mammadov, Musa , Orsi, Robert**Date:**2005**Type:**Text , Conference paper**Relation:**Paper presented at the 44th IEEE Conference on Decision and Control and European Control Conference ECC 2005, Seville, Spain : 12th-15th December, 2005**Full Text:**false**Reviewed:****Description:**A numerical method for solving the H∞ synthesis problem is presented. The problem is posed as an unconstrained, nonsmooth, nonconvex minimization problem. The optimization variables consist solely of the entries of the output feedback matrix. No additional variables, such as Lyapunov variables, need to be introduced. The optimization procedure uses a line search mechanism where the descent direction is defined by a recently introduced dynamical systems approach. Numerical results for various benchmark problems are included.**Description:**E1**Description:**2003001386

A filled function method for nonlinear equations

- Wu, Zhiyou, Mammadov, Musa, Bai, Fusheng, Yang, Y. J.

**Authors:**Wu, Zhiyou , Mammadov, Musa , Bai, Fusheng , Yang, Y. J.**Date:**2007**Type:**Text , Journal article**Relation:**Applied Mathematics and Computation Vol. 189, no. 2 (2007), p. 1196-1204**Full Text:**false**Reviewed:****Description:**In this paper, we propose a new global optimization approach based on the filled function method for solving box-constrained systems of nonlinear equations. The special properties of the optimization problem are employed to construct a novel filled function. The objective function value can be reduced by half in each iteration of our filled function algorithm. Several numerical examples are presented to illustrate the efficiency of the present approach.**Description:**C1**Description:**2003005618

Optimality conditions via weak subdifferentials in reflexive Banach spaces

- Hassani, Sara, Mammadov, Musa, Jamshidi, Mina

**Authors:**Hassani, Sara , Mammadov, Musa , Jamshidi, Mina**Date:**2017**Type:**Text , Journal article**Relation:**Turkish Journal of Mathematics Vol. 41, no. 1 (2017), p. 1-8**Full Text:****Reviewed:****Description:**In this paper the relation between the weak subdifferentials and the directional derivatives, as well as optimality conditions for nonconvex optimization problems in reflexive Banach spaces, are investigated. It partly generalizes several related results obtained for finite dimensional spaces. © Tübitak.

Globally convergent algorithms for solving unconstrained optimization problems

- Taheri, Sona, Mammadov, Musa, Seifollahi, Sattar

**Authors:**Taheri, Sona , Mammadov, Musa , Seifollahi, Sattar**Date:**2013**Type:**Text , Journal article**Relation:**Optimization Vol. , no. (2013), p. 1-15**Full Text:****Reviewed:****Description:**New algorithms for solving unconstrained optimization problems are presented based on the idea of combining two types of descent directions: the direction of the anti-gradient and either the Newton or quasi-Newton directions. The use of the latter directions allows one to improve the convergence rate. Global and superlinear convergence properties of these algorithms are established. Numerical experiments using some unconstrained test problems are reported. Also, the proposed algorithms are compared with some existing similar methods using the results of experiments. This comparison demonstrates the efficiency of the proposed combined methods.

An introduction algorithm with selection significance based on a fuzzy derivative

- Mammadov, Musa, Yearwood, John

**Authors:**Mammadov, Musa , Yearwood, John**Date:**2002**Type:**Text , Conference paper**Relation:**Paper presented at Hybrid Information Systems (Advances in Soft Computing), Adelaide : 11th December, 2001**Full Text:**false**Reviewed:****Description:**E1**Description:**2003000076
