Showing items 1 - 3 of 3

Your selections:

  • Classification
  • 0801 Artificial Intelligence and Image Processing
Full Text: No (2), Yes (1)
Creator (1 each): Jelinek, Herbert; Karasozen, Bulent; Kasimbeyli, Refail; Ozturk, Gurkan; Stranieri, Andrew; Ugon, Julien; Venkatraman, Sitalakshmi; Webb, Dean; Yatsko, Andrew
Subject (1 each): 0102 Applied Mathematics; 0802 Computation Theory and Mathematics; 1702 Cognitive Science; Categorical data; Classification (of information); Continuous features; Data analysis; Data mining; Derivative-free methods; Discrete gradient method; Discretization; Errors; Gradient methods; Incremental approach; Missing values; Nonlinear boundary; Nonsmooth nonconvex optimization; Nonsmooth optimization
Format Type: Adobe Acrobat PDF (1)

Diagnostic with incomplete nominal/discrete data

  • Authors: Jelinek, Herbert , Yatsko, Andrew , Stranieri, Andrew , Venkatraman, Sitalakshmi , Bagirov, Adil
  • Date: 2015
  • Type: Text , Journal article
  • Relation: Artificial Intelligence Research Vol. 4, no. 1 (2015), p. 22-35
  • Full Text:
  • Reviewed:
  • Description: Missing values may be present in data without undermining its use for diagnostic/classification purposes, but they compromise the application of readily available software. Surrogate entries can remedy the situation, although the outcome is generally unknown. Discretization of continuous attributes renders all data nominal and helps in dealing with missing values; in particular, no special handling is required for different attribute types. A number of classifiers exist, or can be reformulated, for this representation, and some classifiers can be reinvented as data completion methods. In this work the Decision Tree, Nearest Neighbour, and Naive Bayesian methods are demonstrated to have the required aptness. An approach is implemented whereby the entered missing values are not necessarily a close match to the true data; rather, they are chosen to cause the least hindrance to classification. The proposed techniques find application particularly in medical diagnostics. Where clinical data represents a number of related conditions, taking the Cartesian product of class values of the underlying sub-problems narrows down the selection of missing-value substitutes. Real-world data examples, some publicly available, are used for testing. The proposed and benchmark methods are compared by classifying the data before and after missing-value imputation, indicating a significant improvement.
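The discretize-then-complete idea in the description above can be sketched as follows; the data, bin count, and per-class mode imputation here are illustrative assumptions, not the paper's exact procedure (which also considers Decision Tree and Nearest Neighbour completions):

```python
from collections import Counter

def discretize(values, n_bins=3):
    # Equal-width binning of a continuous attribute; None (missing) stays missing.
    present = [v for v in values if v is not None]
    lo, hi = min(present), max(present)
    width = (hi - lo) / n_bins or 1.0
    return [None if v is None else min(int((v - lo) / width), n_bins - 1)
            for v in values]

def impute_by_class(column, labels):
    # Fill each missing entry with the most frequent value among rows of the
    # same class (a Naive-Bayes-flavoured completion); fall back to the
    # overall mode when a class has no observed values.
    per_class = {}
    for v, y in zip(column, labels):
        if v is not None:
            per_class.setdefault(y, Counter())[v] += 1
    overall = Counter(v for v in column if v is not None)
    return [per_class.get(y, overall).most_common(1)[0][0] if v is None else v
            for v, y in zip(column, labels)]

ages = [20.0, None, 40.0, 42.0, 60.0]   # hypothetical clinical attribute
labels = ["A", "A", "B", "B", "B"]      # hypothetical diagnosis classes
bins = discretize(ages)                 # [0, None, 1, 1, 2]
print(impute_by_class(bins, labels))    # [0, 0, 1, 1, 2]
```

Once every attribute is nominal, missing and observed values are handled uniformly, which is the point the description makes about attribute types.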

Classification through incremental max-min separability

  • Authors: Bagirov, Adil , Ugon, Julien , Webb, Dean , Karasozen, Bulent
  • Date: 2011
  • Type: Text , Journal article
  • Relation: Pattern Analysis and Applications Vol. 14, no. 2 (2011), p. 165-174
  • Relation: http://purl.org/au-research/grants/arc/DP0666061
  • Full Text: false
  • Reviewed:
  • Description: Piecewise linear functions can be used to approximate non-linear decision boundaries between pattern classes. Piecewise linear boundaries are known to provide efficient real-time classifiers. However, they require a long training time. Finding piecewise linear boundaries between sets is a difficult optimization problem. Most approaches use heuristics to avoid solving this problem, which may lead to suboptimal piecewise linear boundaries. In this paper, we propose an algorithm for globally training hyperplanes using an incremental approach. Such an approach allows one to find a near global minimizer of the classification error function and to compute as few hyperplanes as needed for separating sets. We apply this algorithm for solving supervised data classification problems and report the results of numerical experiments on real-world data sets. These results demonstrate that the new algorithm requires a reasonable training time and its test set accuracy is consistently good on most data sets compared with mainstream classifiers. © 2010 Springer-Verlag London Limited.
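A max-min separating function of the kind this abstract refers to has the form f(x) = max_i min_{j in J_i} (<w_ij, x> + b_ij), with the sign of f giving the class. A minimal sketch of evaluating such a classifier follows; the hyperplanes are hand-picked for illustration, not produced by the paper's incremental training:

```python
import numpy as np

def max_min_classify(x, groups):
    # f(x) = max over groups of (min over the group's hyperplanes of w.x + b);
    # the sign of f(x) is the predicted class.
    f = max(min(float(np.dot(w, x)) + b for w, b in group) for group in groups)
    return 1 if f >= 0 else -1

# One group of two hyperplanes: the +1 region is {x >= 1} intersected with
# {y >= 1}, a simple piecewise linear decision boundary. Adding more groups
# (the incremental step in the paper) unions further polyhedral pieces.
groups = [[(np.array([1.0, 0.0]), -1.0), (np.array([0.0, 1.0]), -1.0)]]
print(max_min_classify(np.array([2.0, 2.0]), groups))  # 1
print(max_min_classify(np.array([0.0, 0.0]), groups))  # -1
```

Evaluation is just dot products and comparisons, which is why such boundaries make efficient real-time classifiers; the hard part the paper addresses is training the w_ij and b_ij.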

An incremental piecewise linear classifier based on polyhedral conic separation

  • Authors: Ozturk, Gurkan , Bagirov, Adil , Kasimbeyli, Refail
  • Date: 2015
  • Type: Text , Journal article
  • Relation: Machine Learning Vol. 101, no. 1-3 (2015), p. 397-413
  • Relation: http://purl.org/au-research/grants/arc/DP140103213
  • Full Text: false
  • Reviewed:
  • Description: In this paper, a piecewise linear classifier based on polyhedral conic separation is developed. This classifier builds nonlinear boundaries between classes using polyhedral conic functions. Since the number of polyhedral conic functions separating classes is not known a priori, an incremental approach is proposed to build separating functions. These functions are found by minimizing an error function which is nonsmooth and nonconvex. A special procedure is proposed to generate starting points to minimize the error function and this procedure is based on the incremental approach. The discrete gradient method, which is a derivative-free method for nonsmooth optimization, is applied to minimize the error function starting from those points. The proposed classifier is applied to solve classification problems on 12 publicly available data sets and compared with some mainstream and piecewise linear classifiers. © 2014, The Author(s).
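A polyhedral conic function in the sense used above has the standard form g(x) = <w, x - c> + xi * ||x - c||_1 - gamma, whose sublevel set {x : g(x) <= 0} is a polyhedron. A point is assigned to the class if at least one of the (incrementally added) conic functions is non-positive at it. The parameters below are illustrative, not the output of the discrete gradient minimization the paper performs:

```python
import numpy as np

def polyhedral_conic(x, w, c, xi, gamma):
    # g(x) = <w, x - c> + xi * ||x - c||_1 - gamma
    d = np.asarray(x, dtype=float) - np.asarray(c, dtype=float)
    return float(np.dot(w, d) + xi * np.abs(d).sum() - gamma)

def in_class(x, conics):
    # Class membership: some separating conic function is non-positive at x.
    return min(polyhedral_conic(x, *p) for p in conics) <= 0.0

# With w = 0, centre c at the origin, xi = 1, gamma = 1, the region
# {g <= 0} is the L1 ball of radius 1 -- a diamond-shaped polyhedron.
conics = [(np.zeros(2), np.zeros(2), 1.0, 1.0)]
print(in_class([0.5, 0.0], conics))  # True
print(in_class([2.0, 0.0], conics))  # False
```

Each added conic function carves out another polyhedral piece of a class, which is how the incremental approach builds nonlinear boundaries from linear pieces.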
