Workload coverage through nonsmooth optimization
- Authors: Sukhorukova, Nadezda , Ugon, Julien , Yearwood, John
- Date: 2009
- Type: Text , Journal article
- Relation: Optimization Methods and Software Vol. 24, no. 2 (2009), p. 285-298
- Full Text: false
- Reviewed:
- Description: In this paper, workload coverage is the problem of identifying a pattern of days worked and days off, along with the number of hours worked on each work day. This pattern must satisfy certain work-related constraints and match a predefined workload as closely as possible. In our study, we formulate the problem of workload coverage as an optimization problem. We propose a number of models which take into consideration various staffing constraints. For each of these models, our study aims to find a compromise between accurate workload coverage and the ability to solve the corresponding optimization problems in a reasonable time. Numerical experiments on each model are carried out and the results are presented. Interestingly, the nonlinear programming approaches are found to be competitive with the linear programming ones. © 2009 Taylor & Francis.
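As a minimal sketch of the idea in this abstract, the fragment below fits a weekly work pattern to a target workload by least squares under a simple staffing constraint. The bounds, the 38-hour weekly cap, and the variable names are illustrative assumptions, not the paper's actual models.

```python
# Toy illustration: fit a weekly work pattern to a target workload.
# A minimal sketch only; the bounds, the 38-hour cap, and the target
# workload below are assumptions, not the paper's actual models.
import numpy as np
from scipy.optimize import minimize

workload = np.array([6.0, 8.0, 8.0, 7.0, 9.0, 4.0, 0.0])  # target hours per day

def coverage_gap(hours):
    # Least-squares mismatch between worked hours and the target workload.
    return np.sum((hours - workload) ** 2)

constraints = [
    # Illustrative work-related constraint: total weekly hours capped at 38.
    {"type": "ineq", "fun": lambda h: 38.0 - np.sum(h)},
]
bounds = [(0.0, 10.0)] * 7  # at most 10 hours on any one day

result = minimize(coverage_gap, x0=np.full(7, 5.0),
                  bounds=bounds, constraints=constraints, method="SLSQP")
print("fitted pattern:", np.round(result.x, 2))
```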
From convex to nonconvex: A loss function analysis for binary classification
- Authors: Zhao, Lei , Mammadov, Musa , Yearwood, John
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at 10th IEEE International Conference on Data Mining Workshops, ICDMW 2010, p. 1281-1288
- Full Text:
- Reviewed:
- Description: Problems of data classification can be studied in the framework of regularization theory as ill-posed problems. In this framework, loss functions play an important role in the application of regularization theory to classification. In this paper, we review some important convex loss functions, including the hinge loss, square loss, modified square loss, exponential loss, and logistic regression loss, as well as some non-convex loss functions, such as the sigmoid loss, ψ-loss, ramp loss, normalized sigmoid loss, and the loss function of a 2-layer neural network. Based on the analysis of these loss functions, we propose a new differentiable non-convex loss function, called the smoothed 0-1 loss function, which is a natural approximation of the 0-1 loss function. To compare the performance of different loss functions, we propose two binary classification algorithms, one for convex loss functions and the other for non-convex loss functions. A set of experiments is carried out on several binary data sets from the UCI repository. The results show that the proposed smoothed 0-1 loss function is robust, especially on noisy data sets with many outliers. © 2010 IEEE.
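A hedged sketch of the idea behind a smoothed 0-1 loss: the fragment below approximates the 0-1 loss with a steep logistic curve in the margin z = y·f(x). The sharpness parameter k and this particular functional form are assumptions for illustration; the paper's exact definition may differ.

```python
# Hedged sketch: one way to smooth the 0-1 loss with a steep logistic
# curve in the margin z = y * f(x). The sharpness parameter k and this
# functional form are illustrative assumptions, not necessarily the
# paper's exact smoothed 0-1 loss.
import numpy as np

def zero_one_loss(z):
    # Classical 0-1 loss on the margin: 1 if misclassified, else 0.
    return (z <= 0).astype(float)

def smoothed_zero_one_loss(z, k=10.0):
    # Differentiable approximation; tends to the 0-1 loss as k grows.
    return 1.0 / (1.0 + np.exp(k * z))

margins = np.linspace(-2, 2, 9)
print(np.round(zero_one_loss(margins), 2))
print(np.round(smoothed_zero_one_loss(margins), 2))
```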
A novel canonical dual computational approach for prion AGAAAAGA amyloid fibril molecular modeling
- Authors: Zhang, Jiapu , Gao, David , Yearwood, John
- Date: 2011
- Type: Text , Journal article
- Relation: Journal of Theoretical Biology Vol. 284, no. 1 (2011), p. 149-157
- Full Text: false
- Reviewed:
- Description: Many experimental studies have shown that the prion AGAAAAGA palindrome hydrophobic region (113-120) has amyloid fibril forming properties and plays an important role in prion diseases. However, due to the unstable, noncrystalline and insoluble nature of the amyloid fibril, structural information on the AGAAAAGA region (113-120) has to date been very limited. This region falls just within the N-terminal unstructured region PrP (1-123) of prion proteins, so traditional X-ray crystallography and nuclear magnetic resonance (NMR) spectroscopy experimental methods cannot be used to obtain its structural information. Against this background, this paper introduces a novel approach based on canonical dual theory to determine the 3D atomic-resolution structure of prion AGAAAAGA amyloid fibrils. The canonical dual computational approach introduced here is applied to the molecular modeling of prion AGAAAAGA amyloid fibrils, and the optimal atomic-resolution structures presented should be useful in the search for treatments for prion diseases in the field of medicinal chemistry. Overall, this paper presents an important method and provides useful information for treatments of prion diseases. © 2011.
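Canonical duality theory can be illustrated on a textbook one-dimensional double-well problem, where a nonconvex minimization converts into a concave dual maximization. The sketch below is that standard toy example (after D.Y. Gao), not the paper's amyloid-fibril energy model; the parameter values are arbitrary.

```python
# Textbook 1-D illustration of canonical duality (after D.Y. Gao), NOT
# the paper's amyloid-fibril energy model. The nonconvex double-well
#     min P(x) = 0.5*a*(0.5*x**2 - lam)**2 - f*x
# has the canonical dual
#     max P_d(s) = -f**2/(2*s) - lam*s - s**2/(2*a)   over s > 0,
# whose maximizer s* recovers the global minimizer via x* = f / s*.
import numpy as np
from scipy.optimize import minimize_scalar

a, lam, f = 1.0, 2.0, 0.5

def P(x):
    return 0.5 * a * (0.5 * x**2 - lam) ** 2 - f * x

def P_dual(s):
    return -f**2 / (2 * s) - lam * s - s**2 / (2 * a)

# Maximize the concave dual on s > 0 (minimize its negative).
res = minimize_scalar(lambda s: -P_dual(s), bounds=(1e-6, 10.0), method="bounded")
x_star = f / res.x

# Cross-check against a brute-force scan of the primal.
grid = np.linspace(-4, 4, 100001)
print("dual solution x*:", round(x_star, 4), " P(x*):", round(P(x_star), 4))
print("brute-force min :", round(grid[np.argmin(P(grid))], 4))
```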
A new loss function for robust classification
- Authors: Zhao, Lei , Mammadov, Musa , Yearwood, John
- Date: 2014
- Type: Text , Journal article
- Relation: Intelligent Data Analysis Vol. 18, no. 4 (2014), p. 697-715
- Full Text: false
- Reviewed:
- Description: Loss functions play an important role in data classification. Many loss functions have been proposed and applied to different classification problems. This paper proposes a new loss function, the so-called smoothed 0-1 loss function, which can be considered an approximation of the classical 0-1 loss function. Due to the non-convexity of the proposed loss function, global optimization methods are required to solve the corresponding optimization problems. Together with the proposed loss function, we compare the performance of several existing loss functions in the classification of noisy data sets. In this comparison, different optimization problems are considered with regard to the convexity and smoothness of the different loss functions. The experimental results show that the proposed smoothed 0-1 loss function works better on data sets with noisy labels, noisy features, and outliers. © 2014 - IOS Press and the authors. All rights reserved.
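Because the smoothed 0-1 loss is nonconvex, its minimization can stall in local optima. The sketch below trains a linear classifier under a logistic-style smoothed 0-1 loss using random multi-start local search as a generic stand-in for the global optimization the abstract calls for; it is not the authors' algorithm, and the data are synthetic.

```python
# Minimal sketch: train a linear classifier under a (nonconvex) smoothed
# 0-1 loss with random multi-start local optimization. A generic stand-in
# for the global optimization the paper calls for, not the authors'
# algorithm; the data below are synthetic with injected label noise.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200))
y[rng.random(200) < 0.1] *= -1  # flip 10% of labels

def smoothed_01_risk(w, k=10.0):
    # Mean smoothed 0-1 loss of the linear classifier (w[:2], bias w[2]).
    margins = y * (X @ w[:2] + w[2])
    return np.mean(1.0 / (1.0 + np.exp(k * margins)))

# Multi-start: keep the best of several local minimizations.
best = min((minimize(smoothed_01_risk, rng.normal(size=3), method="Nelder-Mead")
            for _ in range(10)), key=lambda r: r.fun)
w = best.x
accuracy = np.mean(np.sign(X @ w[:2] + w[2]) == y)
print("training accuracy:", round(accuracy, 3))
```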
Optimization of multiple classifiers in data mining based on string rewriting systems
- Authors: Dazeley, Richard , Kelarev, Andrei , Yearwood, John , Mammadov, Musa
- Date: 2009
- Type: Text , Journal article
- Relation: Asian-European Journal of Mathematics Vol. 2, no. 1 (2009), p. 41-56
- Relation: https://purl.org/au-research/grants/arc/DP0211866
- Relation: https://purl.org/au-research/grants/arc/LP0669752
- Full Text:
- Description: Optimization of multiple classifiers is an important problem in data mining. We introduce additional structure on the class sets of the classifiers using string rewriting systems with a convenient matrix representation. The aim of the present paper is to develop an efficient algorithm for optimizing the number of individual classifier errors that can be corrected by these multiple classifiers.
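As a toy illustration of imposing string-rewriting structure on combined classifier outputs, the sketch below uses two rules that cancel opposing votes, so the normal form of a vote string is its majority class. This construction and the error count are illustrative assumptions, not the matrix representation developed in the paper.

```python
# Toy illustration of combining classifier outputs through a string
# rewriting system. The two rules ("ba" -> "ab", then "ab" -> "") cancel
# opposing votes, so the normal form of a vote string over classes
# {a, b} is the majority class (or "" on a tie). An illustrative
# construction only, not the paper's matrix representation.
def normal_form(votes, rules=(("ba", "ab"), ("ab", ""))):
    # Apply rewriting rules until no left-hand side occurs in the string.
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in votes:
                votes = votes.replace(lhs, rhs, 1)
                changed = True
    return votes

# Five classifiers vote on three instances; true classes: a, b, a.
vote_strings = ["aabab", "bbaab", "ababa"]
truth = ["a", "b", "a"]
for s, t in zip(vote_strings, truth):
    nf = normal_form(s)
    combined = nf[0] if nf else "?"  # empty normal form means a tie
    corrected = sum(c != t for c in s) if combined == t else 0
    print(s, "->", nf or "(tie)", "| corrected individual errors:", corrected)
```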