Showing items 1 - 2 of 2

Your selections:

  • Yearwood, John
  • 0801 Artificial Intelligence and Image Processing
  • 1702 Cognitive Science

A new loss function for robust classification

  • Authors: Zhao, Lei; Mammadov, Musa; Yearwood, John
  • Date: 2014
  • Type: Text, Journal article
  • Relation: Intelligent Data Analysis Vol. 18, no. 4 (2014), p. 697-715
  • Full Text: false
  • Reviewed:
  • Description: The loss function plays an important role in data classification. Many loss functions have been proposed and applied to different classification problems. This paper proposes a new loss function, the so-called smoothed 0-1 loss function, which can be considered an approximation of the classical 0-1 loss function. Due to the non-convexity of the proposed loss function, global optimization methods are required to solve the corresponding optimization problems. Together with the proposed loss function, we compare the performance of several existing loss functions in the classification of noisy data sets. In this comparison, different optimization problems are considered with regard to the convexity and smoothness of the different loss functions. The experimental results show that the proposed smoothed 0-1 loss function works better on data sets with noisy labels, noisy features, and outliers. © 2014 IOS Press and the authors. All rights reserved.
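
The record above describes the idea only in prose, and the paper's exact functional form is not reproduced here. As an illustration only, one common way to smooth the 0-1 loss is a sigmoid of the classification margin; the sharpness parameter beta below is an assumption for this sketch, not a quantity from the paper.

    import numpy as np

    def smoothed_01_loss(margins, beta=5.0):
        # Sigmoid approximation of the 0-1 loss over classification
        # margins y_i * f(x_i). As beta grows, the sigmoid approaches
        # the exact step function of the 0-1 loss; the surrogate stays
        # smooth but non-convex, hence the need for global optimization.
        return 1.0 / (1.0 + np.exp(beta * margins))

    margins = np.array([-2.0, -0.1, 0.1, 2.0])
    print(smoothed_01_loss(margins))      # ~[1.00, 0.62, 0.38, 0.00]
    print((margins <= 0).astype(float))   # exact 0-1 loss: [1, 1, 0, 0]

Unlike convex surrogates such as the hinge or logistic loss, which grow without bound on badly mislabeled points, a bounded surrogate of this kind caps the influence of any single outlier, which is consistent with the robustness to noisy labels and outliers reported in the abstract.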

Attribute weighted Naive Bayes classifier using a local optimization

  • Authors: Taheri, Sona; Yearwood, John; Mammadov, Musa; Seifollahi, Sattar
  • Date: 2013
  • Type: Text, Journal article
  • Relation: Neural Computing & Applications Vol. 24, no. 5 (2013), p. 995-1002
  • Full Text:
  • Reviewed:
  • Description: The Naive Bayes classifier is a popular classification technique for data mining and machine learning, and it has been shown to be very effective on a variety of data classification problems. However, the strong assumption that all attributes are conditionally independent given the class is often violated in real-world applications. Numerous methods have been proposed to improve the performance of the Naive Bayes classifier by alleviating the attribute independence assumption, yet violation of that assumption can increase the expected error. An alternative is to assign weights to the attributes. In this paper, we propose a novel attribute-weighted Naive Bayes classifier that applies weights to the conditional probabilities. An objective function based on the structure of the Naive Bayes classifier and the attribute weights is modeled, and the optimal weights are determined by a local optimization method using the quasisecant method, with the standard Naive Bayes classifier as the starting point. We report the results of numerical experiments on several real-world binary classification data sets, which show the efficiency of the proposed method.
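
For orientation, here is a minimal sketch of the attribute-weighted Naive Bayes decision rule described above, assuming the common formulation in which each log conditional probability is scaled by a per-attribute weight. The function name and the numbers are illustrative; the paper's specific objective function and quasisecant optimizer are not reproduced here.

    import numpy as np

    def weighted_nb_predict(log_prior, log_cond, weights):
        # Score each class as log P(c) + sum_j w_j * log P(x_j | c).
        # log_prior: (n_classes,); log_cond: (n_classes, n_attrs) for the
        # observed attribute values; weights: (n_attrs,).
        scores = log_prior + log_cond @ weights
        return int(np.argmax(scores))

    log_prior = np.log([0.6, 0.4])
    log_cond = np.log([[0.2, 0.7],    # P(x_j | class 0)
                       [0.5, 0.3]])   # P(x_j | class 1)

    # Plain Naive Bayes is the special case with all weights equal to 1;
    # the paper instead starts from plain NB and tunes the weights with a
    # local (quasisecant) optimization of an NB-based objective.
    print(weighted_nb_predict(log_prior, log_cond, np.ones(2)))            # 0
    print(weighted_nb_predict(log_prior, log_cond, np.array([2.0, 0.5])))  # 1

Down-weighting an attribute shrinks its influence on the posterior, which is how the weights can compensate for violations of the conditional-independence assumption.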

