Showing items 1 - 8 of 8

Your selections:

  • 0801 Artificial Intelligence and Image Processing
  • 0802 Computation Theory and Mathematics
Creator

  • Khandelwal, Manoj (3)
  • Ting, Kaiming (3)
  • Bagirov, Adil (2)
  • Wells, Jonathan (2)
  • Albrecht, David (1)
  • Armaghani, Danial (1)
  • Aryal, Sunil (1)
  • Bandaragoda, Tharindu (1)
  • Fatemi, Seyed (1)
  • Ghoroqi, Mahyar (1)
  • Karasozen, Bulent (1)
  • Kumar, Lalit (1)
  • Liu, Fei (1)
  • Marto, Aminaton (1)
  • Mohebi, Ehsan (1)
  • Singh, Trilok (1)
  • Tabrizi, Omid (1)
  • Ugon, Julien (1)
  • Webb, Dean (1)
  • Yellishetty, Mohan (1)

Subject

  • 0102 Applied Mathematics (4)
  • 0806 Information Systems (3)
  • Blast vibration (2)
  • 1702 Cognitive Science (1)
  • ANN (1)
  • Anomaly detection (1)
  • Artificial neural network (1)
  • Back-propagation (1)
  • Bayesian classifiers (1)
  • Boosting (1)
  • Classification (1)
  • Cluster analysis (1)
  • Coefficient of determination (1)
  • Cohesion (1)
  • Conventional vibration predictor equations (1)
  • Conventional vibration predictors (1)
  • Data analysis (1)
  • Data mining (1)

A generic ensemble approach to estimate multidimensional likelihood in Bayesian classifier learning

  • Authors: Aryal, Sunil, Ting, Kaiming
  • Date: 2016
  • Type: Text, Journal article
  • Relation: Computational Intelligence Vol. 32, no. 3 (2016), p. 458-479
  • Full Text: false
  • Description: In Bayesian classifier learning, estimating the joint probability distribution p(x, y) or the likelihood p(x | y) directly from training data is considered to be difficult, especially in large multidimensional data sets. To circumvent this difficulty, existing Bayesian classifiers such as Naive Bayes, BayesNet, and ADE have focused on estimating simplified surrogates of p(x, y) from different forms of one-dimensional likelihoods. Contrary to the perceived difficulty in multidimensional likelihood estimation, we present a simple generic ensemble approach to estimate multidimensional likelihood directly from data. The idea is to aggregate p(x | y) estimated from a random subsample of the data. This article presents two ways to estimate multidimensional likelihoods using the proposed generic approach and introduces two new Bayesian classifiers called ENNBayes and MassBayes that estimate p(x | y) using a nearest-neighbor density estimation and a probability estimation through feature space partitioning, respectively. Unlike the existing Bayesian classifiers, ENNBayes and MassBayes have constant training time and space complexities and they scale better than existing Bayesian classifiers in very large data sets. Our empirical evaluation shows that ENNBayes and MassBayes yield better predictive accuracy than the existing Bayesian classifiers in benchmark data sets.
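The subsample-averaging idea described in the abstract can be sketched as follows (an illustrative reconstruction, not the authors' ENNBayes implementation; the density proxy and the parameter names `t` and `psi` are assumptions):

```python
import math
import random

def enn_likelihood(x, class_points, t=10, psi=8, seed=0):
    """Sketch: estimate the multidimensional likelihood p(x | y) for a
    query x by averaging crude nearest-neighbour density estimates over
    t random subsamples (each of size psi) of the class-y training points."""
    rng = random.Random(seed)
    d = len(x)
    estimates = []
    for _ in range(t):
        sample = rng.sample(class_points, min(psi, len(class_points)))
        # distance from x to its nearest neighbour in this subsample
        r = min(math.dist(x, p) for p in sample)
        # crude density proxy: inverse volume scale of the d-ball of radius r
        estimates.append(1.0 / (1e-12 + r ** d))
    return sum(estimates) / len(estimates)
```

A query near the class's training points receives a much larger estimate than a distant one, which is all a Bayesian classifier needs to compare classes.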

An algorithm for clustering using L1-norm based on hyperbolic smoothing technique

  • Authors: Bagirov, Adil, Mohebi, Ehsan
  • Date: 2016
  • Type: Text, Journal article
  • Relation: Computational Intelligence Vol. 32, no. 3 (2016), p. 439-457
  • Relation: http://purl.org/au-research/grants/arc/DP140103213
  • Full Text: false
  • Description: Cluster analysis deals with the problem of organization of a collection of objects into clusters based on a similarity measure, which can be defined using various distance functions. The use of different similarity measures allows one to find different cluster structures in a data set. In this article, an algorithm is developed to solve clustering problems where the similarity measure is defined using the L1-norm. The algorithm is designed using the nonsmooth optimization approach to the clustering problem. Smoothing techniques are applied to smooth both the clustering function and the L1-norm. The algorithm computes clusters sequentially and finds global or near global solutions to the clustering problem. Results of numerical experiments using 12 real-world data sets are reported, and the proposed algorithm is compared with two other clustering algorithms. ©2015 Wiley Periodicals, Inc.
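The hyperbolic smoothing named in the abstract replaces each non-smooth absolute-value term with a smooth approximation; a minimal sketch (the parameter name `tau` is an assumption, and the full algorithm also smooths the clustering function itself):

```python
import math

def smoothed_l1(x, c, tau):
    # Hyperbolic smoothing of the L1 distance between a point x and a
    # cluster centre c: each term |x_i - c_i| is approximated by
    # sqrt((x_i - c_i)**2 + tau**2), which is differentiable everywhere
    # and converges to the exact L1 distance as tau -> 0.
    return sum(math.sqrt((xi - ci) ** 2 + tau ** 2) for xi, ci in zip(x, c))
```

Because each smoothed term overestimates |x_i - c_i| by at most tau, the approximation error is bounded by tau times the dimension, so gradient-based methods can be applied and tau driven toward zero.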

Classification through incremental max-min separability

  • Authors: Bagirov, Adil, Ugon, Julien, Webb, Dean, Karasozen, Bulent
  • Date: 2011
  • Type: Text, Journal article
  • Relation: Pattern Analysis and Applications Vol. 14, no. 2 (2011), p. 165-174
  • Relation: http://purl.org/au-research/grants/arc/DP0666061
  • Full Text: false
  • Description: Piecewise linear functions can be used to approximate non-linear decision boundaries between pattern classes. Piecewise linear boundaries are known to provide efficient real-time classifiers. However, they require a long training time. Finding piecewise linear boundaries between sets is a difficult optimization problem. Most approaches use heuristics to avoid solving this problem, which may lead to suboptimal piecewise linear boundaries. In this paper, we propose an algorithm for globally training hyperplanes using an incremental approach. Such an approach allows one to find a near global minimizer of the classification error function and to compute as few hyperplanes as needed for separating sets. We apply this algorithm for solving supervised data classification problems and report the results of numerical experiments on real-world data sets. These results demonstrate that the new algorithm requires a reasonable training time and its test set accuracy is consistently good on most data sets compared with mainstream classifiers. © 2010 Springer-Verlag London Limited.
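Max-min separability represents a piecewise-linear decision boundary as a maximum of minima of affine functions; a sketch of evaluating such a function (the grouping structure is standard in the max-min separability literature; the variable names are illustrative):

```python
def maxmin_value(x, groups):
    # groups: a list of groups, each a list of (w, b) hyperplanes.
    # The piecewise-linear function is the max over groups of the min
    # over each group of the affine values w.x + b; the sign of the
    # result classifies x.
    dot = lambda w, v: sum(wi * vi for wi, vi in zip(w, v))
    return max(min(dot(w, x) + b for w, b in group) for group in groups)
```

With a single group containing a single hyperplane this reduces to an ordinary linear classifier; the incremental algorithm in the paper adds hyperplanes only as needed to separate the sets.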

Isolation-based anomaly detection using nearest-neighbor ensembles

  • Authors: Bandaragoda, Tharindu, Ting, Kaiming, Albrecht, David, Liu, Fei, Zhu, Ye, Wells, Jonathan
  • Date: 2018
  • Type: Text, Journal article
  • Relation: Computational Intelligence Vol. 34, no. 4 (2018), p. 968-998
  • Full Text: false
  • Description: The first successful isolation-based anomaly detector, i.e., iForest, uses trees as a means to perform isolation. Although it has been shown to have advantages over existing anomaly detectors, we have identified four weaknesses: its inability to detect local anomalies, anomalies with a high percentage of irrelevant attributes, anomalies that are masked by axis-parallel clusters, and anomalies in multimodal data sets. To overcome these weaknesses, this paper shows that an alternative isolation mechanism is required and thus presents iNNE, or isolation using Nearest Neighbor Ensemble. Although relying on nearest neighbors, iNNE runs significantly faster than existing nearest neighbor-based methods such as the local outlier factor, especially in data sets having thousands of dimensions or millions of instances. This is because the proposed method has linear time complexity and constant space complexity. © 2018 Wiley Periodicals, Inc.
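A much-simplified sketch of the isolation mechanism the abstract describes (not the paper's exact scoring function, which also compares hypersphere radii; the subsample size `psi` and ensemble size `t` are assumptions):

```python
import math
import random

def inne_score(x, data, t=50, psi=8, seed=0):
    """Sketch of isolation using a nearest-neighbour ensemble: in each
    random subsample, every point gets a hypersphere reaching to its
    nearest neighbour within the subsample; a query falling outside
    every hypersphere counts as isolated. The anomaly score is the
    fraction of subsamples that isolate the query."""
    rng = random.Random(seed)
    isolated = 0
    for _ in range(t):
        sample = rng.sample(data, min(psi, len(data)))
        covered = False
        for c in sample:
            others = [p for p in sample if p is not c]
            radius = min(math.dist(c, p) for p in others)
            if math.dist(x, c) <= radius:
                covered = True
                break
        if not covered:
            isolated += 1
    return isolated / t
```

Each subsample costs O(psi^2) distance computations regardless of the data size, which is the intuition behind the linear time and constant space complexities claimed in the abstract.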

Implementing an ANN model optimized by genetic algorithm for estimating cohesion of limestone samples

  • Authors: Khandelwal, Manoj, Marto, Aminaton, Fatemi, Seyed, Ghoroqi, Mahyar, Armaghani, Danial, Singh, Trilok, Tabrizi, Omid
  • Date: 2018
  • Type: Text, Journal article
  • Relation: Engineering with Computers Vol. 34, no. 2 (2018), p. 307-317
  • Full Text: false
  • Description: Shear strength parameters such as cohesion are among the most significant rock parameters and can be utilized in the initial design of some geotechnical engineering applications. In this study, evaluation and prediction of rock material cohesion is presented using different approaches, i.e., simple and multiple regression, artificial neural network (ANN), and genetic algorithm (GA)-ANN. For this purpose, a database was prepared including three model inputs, i.e., p-wave velocity, uniaxial compressive strength, and Brazilian tensile strength, and one output, the cohesion of limestone samples. A meaningful relationship was found for all of the model inputs, with suitable performance capacity for prediction of rock cohesion. Additionally, a high level of accuracy (coefficient of determination, R2, of 0.925) was observed in developing the multiple regression equation. To obtain higher performance capacity, a series of ANN and GA-ANN models were built. As a result, the hybrid GA-ANN network provides higher performance for prediction of rock cohesion compared to the ANN technique. GA-ANN model results (R2 = 0.976 and 0.967 for train and test) were better than ANN model results (R2 = 0.949 and 0.948 for train and test). Therefore, this technique is introduced as a new approach for estimating the cohesion of limestone samples. © 2017, Springer-Verlag London Ltd., part of Springer Nature.
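The GA side of the hybrid can be illustrated with a toy real-valued genetic algorithm (a sketch only; in the paper the chromosome encodes the ANN's weights and the fitness is the network's prediction error, and all parameter values below are assumptions):

```python
import random

def ga_optimize(fitness, dim, pop=20, gens=40, seed=0):
    # Toy real-valued GA: truncation selection keeps the fitter half of
    # the population each generation; children are Gaussian mutations of
    # randomly chosen parents. Returns the fittest chromosome found.
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]
        children = [[g + rng.gauss(0, 0.1) for g in rng.choice(parents)]
                    for _ in range(pop - len(parents))]
        population = parents + children
    return max(population, key=fitness)
```

In a GA-ANN, `fitness` would be the negated training error of the network decoded from the chromosome, so evolution replaces (or seeds) gradient-based weight training.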

Application of soft computing to predict blast-induced ground vibration

  • Authors: Khandelwal, Manoj, Kumar, Lalit, Yellishetty, Mohan
  • Date: 2011
  • Type: Text, Journal article
  • Relation: Engineering with Computers Vol. 27, no. 2 (2011), p. 117-125
  • Full Text: false
  • Description: In this study, an attempt has been made to evaluate and predict blast-induced ground vibration by incorporating explosive charge per delay and distance from the blast face to the monitoring point, using the artificial neural network (ANN) technique. A three-layer feed-forward back-propagation neural network with a 2-5-1 architecture was trained and tested using 130 experimental and monitored blast records from the surface coal mines of Singareni Collieries Company Limited, Kothagudem, Andhra Pradesh, India. Twenty new blast data sets were used for validation and comparison of the peak particle velocity (PPV) predicted by ANN and by conventional vibration predictors. Results were compared based on the coefficient of determination and the mean absolute error between monitored and predicted values of PPV. © 2009 Springer-Verlag London Limited.
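The 2-5-1 architecture described above maps two inputs through five sigmoid hidden units to one output; a sketch of the forward pass (the weights and names here are illustrative, not the trained network from the paper):

```python
import math

def forward_2_5_1(x, w1, b1, w2, b2):
    # x: (charge_per_delay, distance); w1: 5 rows of 2 input weights;
    # b1: 5 hidden biases; w2: 5 output weights; b2: output bias.
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(row, x)) + bi)
              for row, bi in zip(w1, b1)]
    return sum(wi * hi for wi, hi in zip(w2, hidden)) + b2
```

Back-propagation would adjust w1, b1, w2, and b2 to minimize the error between this output and the monitored PPV values.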

Blast-induced ground vibration prediction using support vector machine

  • Authors: Khandelwal, Manoj
  • Date: 2011
  • Type: Text, Journal article
  • Relation: Engineering with Computers Vol. 27, no. 3 (2011), p. 193-200
  • Full Text: false
  • Description: Ground vibrations induced by blasting are one of the fundamental problems in the mining industry and may cause severe damage to nearby structures and plants. Therefore, a vibration control study plays an important role in minimizing the environmental effects of blasting in mines. In this paper, an attempt has been made to predict the peak particle velocity using a support vector machine (SVM), taking into consideration the maximum charge per delay and the distance from the blast face to the monitoring point. To investigate the suitability of this approach, the predictions by SVM have been compared with conventional vibration predictor equations. Coefficient of determination (CoD) and mean absolute error were taken as performance measures. © 2010 Springer-Verlag London Limited.
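One widely used conventional predictor of the kind the SVM is compared against is the USBM scaled-distance equation (the functional form is standard in the blast-vibration literature; K and B are site-specific constants, and the values used below are hypothetical):

```python
def ppv_scaled_distance(distance, charge, k, b):
    # USBM-style predictor: PPV = K * (D / sqrt(Q))**(-B), where D is the
    # distance from the blast face and Q is the maximum charge per delay;
    # K and B are site constants fitted from monitored blast records.
    return k * (distance / charge ** 0.5) ** (-b)
```

PPV falls with distance and rises with charge, matching the two inputs the abstract feeds to the SVM.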

Local models - the key to boosting stable learners successfully

  • Authors: Ting, Kaiming, Zhu, Lian, Wells, Jonathan
  • Date: 2013
  • Type: Text, Journal article
  • Relation: Computational Intelligence Vol. 29, no. 2 (2013), p. 331-356
  • Full Text: false
  • Description: Boosting has been shown to improve the predictive performance of unstable learners such as decision trees, but not of stable learners like Support Vector Machines (SVM), k-nearest neighbours and Naive Bayes classifiers. In addition to the model stability problem, the high time complexity of some stable learners such as SVM prohibits them from generating multiple models to form an ensemble for large data sets. This paper introduces a simple method that not only enables Boosting to improve the predictive performance of stable learners, but also significantly reduces the computational time to generate an ensemble of stable learners such as SVM for large data sets that would otherwise be infeasible. The method proposes to build local models, instead of global models; and it is the first method, to the best of our knowledge, to solve the two problems in Boosting stable learners at the same time. We implement the method by using a decision tree to define local regions and build a local model for each local region. We show that this implementation of the proposed method enables successful Boosting of three types of stable learners: SVM, k-nearest neighbours and Naive Bayes classifiers.
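The local-model idea can be sketched with a single decision-stump split (illustrative only; the paper uses a full decision tree to define the regions and boosts a stable learner, such as an SVM, inside each region):

```python
def fit_local_models(xs, ys, threshold):
    # One split defines two local regions; each region gets its own
    # simple local model - here just the region's majority class.
    left = [y for x, y in zip(xs, ys) if x <= threshold]
    right = [y for x, y in zip(xs, ys) if x > threshold]
    majority = lambda labels: max(set(labels), key=labels.count)
    return {"left": majority(left), "right": majority(right)}

def predict_local(models, threshold, x):
    # Route a query to its region and apply that region's model.
    return models["left"] if x <= threshold else models["right"]
```

Training one model per region both destabilizes a stable learner enough for Boosting to help and shrinks each training set, which is where the reported speed-up for learners like SVM comes from.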
