Backbreak prediction in the Chadormalu iron mine using artificial neural network
- Authors: Monjezi, Masoud , Ahmadi, Zabiholla , Yazdian-Varjani, Ali , Khandelwal, Manoj
- Date: 2013
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 23, no. 3-4 (2013), p. 1101-1107
- Full Text: false
- Reviewed:
- Description: Backbreak is one of the unfavorable blasting results and can be defined as the unwanted rock breakage behind the last row of blast holes. Blast pattern parameters, like stemming, burden, delay timing, stiffness ratio (bench height/burden) and rock mass conditions (e.g., geo-mechanical properties and joints), are effective in backbreak intensity. To date, with the exception of some qualitative guidelines, no specific method has been developed for predicting the phenomenon. In this paper, an effort has been made to apply artificial neural networks (ANNs) for predicting backbreak in the blasting operation of the Chadormalu iron mine (Iran). A number of ANN models with different hidden layers and neurons were tried, and it was found that a network with architecture 10-7-7-1 is the optimum model. A comparative study also confirmed the superiority of the ANN modeling over conventional regression analysis. Variance account for (VAF), mean square error (MSE) and coefficient of determination (R2) between measured and predicted backbreak for the ANN model were calculated and found to be 89.46 %, 0.714 and 90.02 %, respectively. For the regression model, VAF, MSE and R2 were computed and found to be 66.93 %, 1.46 and 68.10 %, respectively. Sensitivity analysis was also carried out to find the influence of each input parameter on backbreak, and it was revealed that burden is the most influential parameter on backbreak, whereas water content is the least effective parameter in this regard. © 2012 Springer-Verlag London Limited.
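The three comparison measures named in this abstract (MSE, VAF, R2) are easy to compute from measured and predicted values. A minimal pure-Python sketch, using the common textbook definitions (the paper's exact conventions may differ); the toy data below is hypothetical:

```python
# Standard error measures for comparing measured vs. predicted values.

def mse(measured, predicted):
    """Mean square error."""
    n = len(measured)
    return sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n

def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def vaf(measured, predicted):
    """Variance account for, in percent: 100 * (1 - var(residuals)/var(measured))."""
    residuals = [m - p for m, p in zip(measured, predicted)]
    return (1 - variance(residuals) / variance(measured)) * 100

def r_squared(measured, predicted):
    """Coefficient of determination, 1 - SS_res / SS_tot."""
    mean_m = sum(measured) / len(measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    return 1 - ss_res / ss_tot

# Hypothetical toy data: near-perfect predictions score near 0 MSE, 100 % VAF, R2 near 1.
measured = [1.2, 2.0, 2.9, 4.1, 5.0]
predicted = [1.0, 2.1, 3.0, 4.0, 5.2]
print(mse(measured, predicted), vaf(measured, predicted), r_squared(measured, predicted))
```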
Efficient nonlinear classification via low-rank regularised least squares
- Authors: Fu, Zhouyu , Lu, Guojun , Ting, Kaiming , Zhang, Dengsheng
- Date: 2013
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 22, no. 7-8 (2013), p. 1279-1289
- Full Text: false
- Reviewed:
- Description: We revisit the classical technique of regularised least squares (RLS) for nonlinear classification in this paper. Specifically, we focus on a low-rank formulation of the RLS, which has linear time complexity in the size of the data set only, independent of both the number of classes and the number of features. This makes low-rank RLS particularly suitable for problems with large data and moderate feature dimensions. Moreover, we propose a general theorem for obtaining the closed-form estimate of prediction values on a holdout validation set given the low-rank RLS classifier trained on the whole training data. It is thus possible to obtain an error estimate for each parameter setting without retraining, greatly accelerating cross-validation for parameter selection. Experimental results on several large-scale benchmark data sets show that low-rank RLS achieves comparable classification performance while being much more efficient than standard kernel SVM for nonlinear classification. The improvement in efficiency is more evident for data sets with higher dimensions.
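For orientation, plain (full-rank) kernel RLS trains by solving one regularised linear system in the dual; the paper's low-rank formulation approximates this to reach linear time. A minimal sketch of the plain version, not the authors' low-rank method, with a linear kernel, a naive solver, and hypothetical toy data:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination (no pivoting; A assumed well-conditioned)."""
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]
    for i in range(n):
        piv = M[i][i]
        for j in range(i, n + 1):
            M[i][j] /= piv
        for k in range(n):
            if k != i:
                factor = M[k][i]
                for j in range(i, n + 1):
                    M[k][j] -= factor * M[i][j]
    return [M[i][n] for i in range(n)]

def rls_fit(X, y, lam):
    """Dual RLS with a linear kernel K = X X^T: alpha = (K + lam*I)^{-1} y."""
    n = len(X)
    K = [[sum(a * b for a, b in zip(X[i], X[j])) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, y)

def rls_predict(alpha, X_train, x):
    """Prediction f(x) = sum_i alpha_i * k(x_i, x)."""
    return sum(a * sum(u * v for u, v in zip(xi, x))
               for a, xi in zip(alpha, X_train))

# Hypothetical toy data where y is roughly 2 * x.
X = [[1.0], [2.0], [3.0]]
y = [2.0, 4.0, 6.0]
alpha = rls_fit(X, y, lam=1e-3)
print(rls_predict(alpha, X, [4.0]))  # close to 8
```

The validation-set trick in the abstract builds on exactly this closed form: because alpha depends on the regularisation parameter only through a linear system, holdout predictions can be re-derived without refitting.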
Evaluating authorship distance methods using the positive Silhouette coefficient
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2013
- Type: Text , Journal article
- Relation: Natural Language Engineering Vol. 19, no. 4 (2013), p. 517-535
- Full Text:
- Reviewed:
- Description: Unsupervised Authorship Analysis (UAA) aims to cluster documents by authorship without knowing the authorship of any documents. An important factor in UAA is the method for calculating the distance between documents; this choice is considered more critical to the end result than the choice of cluster analysis algorithm. One method for measuring the correlation between a distance metric and a labelling (such as class values or clusters) is the Silhouette Coefficient (SC). The SC can be leveraged to evaluate the quality of a distance method by measuring the correlation between the authorship distance method and the true authorship. However, we show that the SC can be severely affected by outliers. To address this issue, we introduce the Positive Silhouette Coefficient (PSC), defined as the proportion of instances with a positive SC value. This metric is not easily altered by outliers and is therefore more robust. A large number of authorship distance methods are then compared using the PSC, and the findings are presented. This research provides an insight into the efficacy of methods for UAA and presents a framework for testing authorship distance methods.
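The PSC described above is straightforward to compute from a pairwise distance matrix. A minimal pure-Python sketch, assuming every cluster has at least two members; the toy distance matrix is hypothetical:

```python
def silhouette_values(dist, labels):
    """Per-instance silhouette values s(i) = (b - a) / max(a, b) from a
    distance matrix and cluster labels (each cluster needs >= 2 members)."""
    n = len(labels)
    values = []
    for i in range(n):
        # a: mean distance to the other members of i's own cluster.
        own = [dist[i][j] for j in range(n) if j != i and labels[j] == labels[i]]
        a = sum(own) / len(own)
        # b: smallest mean distance to the members of any other cluster.
        b = min(
            sum(dist[i][j] for j in range(n) if labels[j] == c) /
            sum(1 for j in range(n) if labels[j] == c)
            for c in set(labels) if c != labels[i])
        values.append((b - a) / max(a, b))
    return values

def positive_silhouette_coefficient(dist, labels):
    """Proportion of instances with a positive silhouette value."""
    vals = silhouette_values(dist, labels)
    return sum(1 for v in vals if v > 0) / len(vals)

# Two well-separated "authors": documents 0,1 are close, as are 2,3.
dist = [[0, 1, 9, 9],
        [1, 0, 9, 9],
        [9, 9, 0, 1],
        [9, 9, 1, 0]]
print(positive_silhouette_coefficient(dist, [0, 0, 1, 1]))  # 1.0
```

An outlier can drag the mean SC far down while leaving most per-instance values positive, which is why the proportion is the more robust summary.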
Evaluation and prediction of blast-induced ground vibration at Shur River Dam, Iran, by artificial neural network
- Authors: Monjezi, Masoud , Hasanipanah, Mahdi , Khandelwal, Manoj
- Date: 2013
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 22, no. 7-8 (2013), p. 1637-1643
- Full Text: false
- Reviewed:
- Description: The purpose of this article is to evaluate and predict blast-induced ground vibration at Shur River Dam in Iran using different empirical vibration predictors and an artificial neural network (ANN) model. Ground vibration is a seismic wave that spreads out from the blasthole when an explosive charge is detonated in a confined manner. Ground vibrations were recorded and monitored in and around the Shur River Dam, Iran, at different vulnerable and strategic locations. A total of 20 blast vibration records were monitored, out of which 16 data sets were used for training the ANN model as well as determining site constants of various vibration predictors. The remaining 4 blast vibration data sets were used for the validation and comparison of the results of the ANN and the different empirical predictors. Performances of the different predictor models were assessed using standard statistical evaluation criteria. Finally, it was found that the ANN model is more accurate than the various empirical models available. As such, a high conformity (R2 = 0.927) was observed between the measured and predicted peak particle velocity by the developed ANN model. © 2012 Springer-Verlag London Limited.
Evaluation of effect of blast design parameters on flyrock using artificial neural networks
- Authors: Monjezi, Masoud , Mehrdanesh, Amirhossein , Malek, Alaeddin , Khandelwal, Manoj
- Date: 2013
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 23, no. 2 (2013), p. 349-356
- Full Text: false
- Reviewed:
- Description: Flyrock, the propulsion of rock fragments beyond a specific limit, can be considered one of the most crucial and hazardous events in open pit blasting operations. The involvement of various effective parameters has made the problem complicated, and the available empirical methods are not proficient at predicting flyrock. To achieve more accurate results, employment of new approaches, such as artificial neural networks (ANNs), can be very helpful. In this paper, an attempt has been made to apply the ANN method to predict flyrock in the blasting operations of the Sungun copper mine, Iran. A number of ANN models were tried using various permutations and combinations, and it was observed that a model trained with the back-propagation algorithm having a 9-5-2-1 architecture is the optimum. Flyrock was also computed from various available empirical models suggested by Lundborg. Statistical modeling has also been done to compare the prediction capability of the ANN with other methods. Comparison of the results showed the absolute superiority of the ANN modeling over the empirical as well as statistical models. Sensitivity analysis was also performed to identify the most influential inputs on the output results. It was observed that powder factor, hole diameter, stemming and charge per delay are the most effective parameters on flyrock. © 2012 Springer-Verlag London Limited.
Local models - the key to boosting stable learners successfully
- Authors: Ting, Kaiming , Zhu, Lian , Wells, Jonathan
- Date: 2013
- Type: Text , Journal article
- Relation: Computational Intelligence Vol. 29, no. 2 (2013), p. 331-356
- Full Text: false
- Reviewed:
- Description: Boosting has been shown to improve the predictive performance of unstable learners such as decision trees, but not of stable learners like Support Vector Machines (SVM), k-nearest neighbours and Naive Bayes classifiers. In addition to the model stability problem, the high time complexity of some stable learners such as SVM prohibits them from generating multiple models to form an ensemble for large data sets. This paper introduces a simple method that not only enables Boosting to improve the predictive performance of stable learners, but also significantly reduces the computational time to generate an ensemble of stable learners such as SVM for large data sets that would otherwise be infeasible. The method proposes to build local models, instead of global models; and it is the first method, to the best of our knowledge, to solve the two problems in Boosting stable learners at the same time. We implement the method by using a decision tree to define local regions and build a local model for each local region. We show that this implementation of the proposed method enables successful Boosting of three types of stable learners: SVM, k-nearest neighbours and Naive Bayes classifiers.
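The core idea, building local models inside tree-defined regions instead of one global model, can be sketched with toy stand-ins: a depth-1 split in place of a full decision tree, and a nearest-mean classifier in place of the paper's stable learners (SVM, k-nearest neighbours, Naive Bayes). The XOR-style data below is hypothetical:

```python
def build_local_models(X, y, feature=0):
    """Split on the median of one feature; fit a nearest-mean model per region."""
    threshold = sorted(x[feature] for x in X)[len(X) // 2]
    models = {}
    for side in (False, True):
        region = [(x, c) for x, c in zip(X, y) if (x[feature] >= threshold) == side]
        # One class mean per label, computed only from this region's points.
        means = {}
        for c in set(c for _, c in region):
            pts = [x for x, cc in region if cc == c]
            means[c] = [sum(col) / len(pts) for col in zip(*pts)]
        models[side] = means
    return threshold, feature, models

def predict(model, x):
    """Route x to its region, then pick the nearest class mean in that region."""
    threshold, feature, models = model
    means = models[x[feature] >= threshold]
    def dist(m):
        return sum((a - b) ** 2 for a, b in zip(m, x))
    return min(means, key=lambda c: dist(means[c]))

# XOR-like labels: a single global nearest-mean model cannot separate these,
# but two local models (one per half of the space) can.
X = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]
y = ["a", "b", "b", "a"]
model = build_local_models(X, y, feature=0)
print(predict(model, [0.1, 0.1]))  # "a"
```

In the paper this localisation serves two purposes at once: it destabilises the otherwise-stable base learner so that Boosting helps, and each local model trains on far fewer points, cutting the cost of learners like SVM.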
Mass estimation
- Authors: Ting, Kaiming , Zhou, Guang , Liu, Fei , Tan, Swee
- Date: 2013
- Type: Text , Journal article
- Relation: Machine Learning Vol. 90, no. 1 (2013), p. 127-160
- Full Text: false
- Reviewed:
- Description: This paper introduces mass estimation—a base modelling mechanism that can be employed to solve various tasks in machine learning. We present the theoretical basis of mass and efficient methods to estimate mass. We show that mass estimation solves problems effectively in tasks such as information retrieval, regression and anomaly detection. The models, which use mass in these three tasks, perform at least as well as and often better than eight state-of-the-art methods in terms of task-specific performance measures. In addition, mass estimation has constant time and space complexities.
An intelligent approach to evaluate drilling performance
- Authors: Bhatnagar, Anupam , Khandelwal, Manoj
- Date: 2012
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 21, no. 4 (2012), p. 763-770
- Full Text: false
- Reviewed:
- Description: In this paper, an attempt has been made to predict the rate of penetration (ROP) of rocks by incorporating thrust, revolutions per minute (rpm), flushing media and compressive strength of rocks using the artificial neural network (ANN) technique. A three-layer feed-forward back-propagation neural network with 4-7-1 architecture was trained using 472 experimental data sets of sandstone, limestone, rock phosphate, dolomite, marble and quartz-chlorite-schist rocks. A total of 146 new data sets were used for the testing and comparison of the ROP by ANN. Multivariate regression analysis (MVRA) has also been done with the same data sets as the ANN. ANN and MVRA results were compared based on the coefficient of determination (CoD) and mean absolute error (MAE) between experimental and predicted values of ROP. The CoD for rate of penetration was 0.985 by ANN, whereas it was 0.629 by MVRA. The MAE for rate of penetration by ANN was 0.3547, whereas the MAE by MVRA was 1.7499. © 2010 Springer-Verlag London Limited.
Application of an expert system to predict thermal conductivity of rocks
- Authors: Khandelwal, Manoj
- Date: 2012
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 21, no. 6 (2012), p. 1341-1347
- Full Text: false
- Reviewed:
- Description: In this paper, an attempt has been made to predict the thermal conductivity (TC) of rocks by incorporating uniaxial compressive strength, density, porosity, and P-wave velocity using support vector machine (SVM). Training of the SVM network was carried out using 102 experimental data sets of various rocks, whereas 25 new data sets were used for the testing of the TC by the SVM model. Multivariate regression analysis (MVRA) has also been carried out with the same data sets that were used for the training of the SVM model. SVM and MVRA results were compared based on coefficient of determination (CoD) and mean absolute error (MAE) between experimental and predicted values of TC. It was found that CoD between measured and predicted values of TC by SVM and MVRA was 0.994 and 0.918, respectively, whereas MAE was 0.0453 and 0.2085 for SVM and MVRA, respectively. © 2011 Springer-Verlag London Limited.
Application of artificial intelligence to improve quality of service in computer networks
- Authors: Ahmad, Iftekhar , Kamruzzaman, Joarder , Habibi, Daryoush
- Date: 2012
- Type: Text , Journal article
- Relation: Neural Computing & Applications Vol. 21, no. 1 (2012), p. 81-90
- Full Text: false
- Reviewed:
- Description: Resource sharing between book-ahead (BA) and instantaneous request (IR) reservation often results in high preemption rates for ongoing IR calls in computer networks. High IR call preemption rates cause interruptions to service continuity, which is considered detrimental in a QoS-enabled network. A number of call admission control models have been proposed in the literature to reduce preemption rates for ongoing IR calls. Many of these models use a tuning parameter to achieve a certain level of preemption rate. This paper presents an artificial neural network (ANN) model to dynamically control the preemption rate of ongoing calls in a QoS-enabled network. The model maps network traffic parameters and the operating preemption rate desired by the network operator into an appropriate tuning parameter value for the network under consideration. Once trained, the model can be used to automatically estimate the tuning parameter value necessary to achieve the desired operating preemption rates. Simulation results show that the preemption rate attained by the model closely matches the target rate.
Artificial neural network for prediction of air flow in a single rock joint
- Authors: Ranjith, Pathegama , Khandelwal, Manoj
- Date: 2012
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 21, no. 6 (2012), p. 1413-1422
- Full Text: false
- Reviewed:
- Description: In this paper, an attempt has been made to evaluate and predict the air flow rate in triaxial conditions at various confining pressures incorporating cell pressure, air inlet pressure, and air outlet pressure using the artificial neural network (ANN) technique. A three-layer feed-forward back-propagation neural network with 3-7-1 architecture was trained using 37 data sets measured from laboratory investigation. Ten new data sets were used for the validation and comparison of the air flow rate by ANN and multi-variate regression analysis (MVRA) to develop more confidence in the proposed method. Results were compared based on coefficient of determination (CoD) and mean absolute error (MAE) between measured and predicted values of air flow rate. It was found that CoD between measured and predicted air flow rate was 0.995 and 0.758 by ANN and MVRA, respectively, whereas MAE was 0.0413 and 0.1876. © 2011 Springer-Verlag London Limited.
Performance comparisons of contour-based corner detectors
- Authors: Awrangjeb, Mohammad , Lu, Guojun , Fraser, Clive
- Date: 2012
- Type: Text , Journal article
- Relation: IEEE Transactions on Image Processing Vol. 21, no. 9 (2012), p. 4167-4179
- Full Text: false
- Reviewed:
- Description: Corner detectors have many applications in computer vision and image identification and retrieval. Contour-based corner detectors directly or indirectly estimate a significance measure (e.g., curvature) on the points of a planar curve and select the curvature extrema points as corners. While an extensive number of contour-based corner detectors have been proposed over the last four decades, there is no comparative study of recently proposed detectors. This paper is an attempt to fill this gap. The general framework of contour-based corner detection is presented, and two major issues with major impacts on corner detection performance, curve smoothing and curvature estimation, are discussed. A number of promising detectors are compared using both automatic and manual evaluation systems on two large datasets. It is observed that while the detectors using indirect curvature estimation techniques are more robust, the detectors using direct curvature estimation techniques are faster.
Recentred local profiles for authorship attribution
- Authors: Layton, Robert , Watters, Paul , Dazeley, Richard
- Date: 2012
- Type: Text , Journal article
- Relation: Natural Language Engineering Vol. 18, no. 3 (2012), p. 293-312
- Full Text:
- Reviewed:
- Description: Authorship attribution methods aim to determine the author of a document by using information gathered from a set of documents with known authors. One method of performing this task is to create profiles containing distinctive features known to be used by each author. In this paper, a new method of creating an author or document profile is presented that detects features considered distinctive compared to normal language usage. This recentring approach creates more accurate profiles than previous methods, as demonstrated empirically using a known corpus of authorship problems. The method, named recentred local profiles, determines authorship more accurately than other methods in the literature using a simple 'best matching author' approach to classification. The proposed method is shown to be more stable than related methods as parameter values change. Using a weighted voting scheme, recentred local profiles is shown to outperform other methods in authorship attribution, with an overall accuracy of 69.9% on the ad-hoc authorship attribution competition corpus, representing a significant improvement over related methods. Copyright © Cambridge University Press 2011.
A basic theory of intelligent finance
- Authors: Pan, Heping
- Date: 2011
- Type: Text , Journal article
- Relation: New Mathematics and Natural Computation Vol. 7, no. 2 (May 2011), p. 197-227
- Full Text: false
- Reviewed:
- Description: This paper presents a basic theory of intelligent finance as a new paradigm of financial investment. It is assumed that the financial market is always in a state of swing between efficient and inefficient modes on multiple levels of time scale; that it is possible to go beyond the efficient market theory to study the dynamic evolving process of the market between equilibrium and far-from-equilibrium; and that there are robust dynamic patterns in this evolving process, which may be exploitable via intelligent trading systems. On the foundation of four principles (comprehensive, predictive, dynamic and strategic), the basic theory takes the information sources into the loop as the starting points for all market analysis, introducing the scale space of time into the pricing process analysis in order to detect and capture trends, cycles and seasonality on multiple intrinsic levels of time scale, which are then used as the dynamic basis for constructing and managing portfolios. In stock markets, the theory exhibits itself in the form of an Intelligent Dynamic Portfolio Theory, which integrates predictive modeling of the bull-bear market cycle, sector rotation, and portfolio optimization with a reactive trend-following trading strategy.
Empirical evaluation methods for multiobjective reinforcement learning algorithms
- Authors: Vamplew, Peter , Dazeley, Richard , Berry, Adam , Issabekov, Rustam , Dekker, Evan
- Date: 2011
- Type: Text , Journal article
- Relation: Machine Learning Vol. 84, no. 1-2 (2011), p. 51-80
- Full Text: false
- Reviewed:
- Description: While a number of algorithms for multiobjective reinforcement learning have been proposed, and a small number of applications developed, there has been very little rigorous empirical evaluation of the performance and limitations of these algorithms. This paper proposes standard methods for such empirical evaluation, to act as a foundation for future comparative studies. Two classes of multiobjective reinforcement learning algorithms are identified, and appropriate evaluation metrics and methodologies are proposed for each class. A suite of benchmark problems with known Pareto fronts is described, and future extensions and implementations of this benchmark suite are discussed. The utility of the proposed evaluation methods is demonstrated via an empirical comparison of two example learning algorithms. © 2010 The Author(s).
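A common metric for scoring an algorithm's approximation to a known Pareto front, widely adopted in the multiobjective RL literature (the abstract does not name the paper's specific metrics, so this is an illustrative choice), is the hypervolume indicator. A minimal sketch for two maximisation objectives and a hypothetical front:

```python
def hypervolume_2d(points, ref):
    """Area dominated by a set of points (two maximisation objectives),
    relative to a reference point that every point dominates."""
    # Keep only nondominated points, sorted by the first objective descending.
    pts = sorted(points, key=lambda p: (-p[0], -p[1]))
    front = []
    best_second = float("-inf")
    for p in pts:
        if p[1] > best_second:
            front.append(p)
            best_second = p[1]
    # Sweep the front, summing the disjoint rectangles above the reference point.
    area = 0.0
    prev_second = ref[1]
    for x1, x2 in front:
        area += (x1 - ref[0]) * (x2 - prev_second)
        prev_second = x2
    return area

# Hypothetical front {(1,3), (2,2), (3,1)} against reference point (0,0).
print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], (0, 0)))  # 6.0
```

A larger hypervolume means the learned policy set pushes closer to, and spreads wider along, the true Pareto front, which makes it a convenient single-number summary for the comparative studies the paper calls for.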
Feature-subspace aggregating: ensembles for stable and unstable learners
- Authors: Ting, Kaiming , Wells, Jonathan , Tan, Swee , Teng, Shyh , Webb, Geoffrey
- Date: 2011
- Type: Text , Journal article
- Relation: Machine Learning Vol. 82, no. 3 (2011), p. 375-397
- Full Text: false
- Reviewed:
- Description: This paper introduces a new ensemble approach, Feature-Subspace Aggregating (Feating), which builds local models instead of global models. Feating is a generic ensemble approach that can enhance the predictive performance of both stable and unstable learners. In contrast, most existing ensemble approaches can improve the predictive performance of unstable learners only. Our analysis shows that the new approach reduces the execution time to generate a model in an ensemble through an increased level of localisation in Feating. Our empirical evaluation shows that Feating performs significantly better than Boosting, Random Subspace and Bagging in terms of predictive accuracy, when a stable learner SVM is used as the base learner. The speed up achieved by Feating makes feasible SVM ensembles that would otherwise be infeasible for large data sets. When SVM is the preferred base learner, we show that Feating SVM performs better than Boosting decision trees and Random Forests. We further demonstrate that Feating also substantially reduces the error of another stable learner, k-nearest neighbour, and an unstable learner, decision tree.
Preface
- Authors: Pan, Heping , Hayward, Serge
- Date: 2011
- Type: Text , Journal article
- Relation: New Mathematics and Natural Computation Vol. 7, no. 2 (2011), p. 187-196
- Full Text: false
- Reviewed:
An L-2-Boosting Algorithm for Estimation of a Regression Function
- Authors: Bagirov, Adil , Clausen, Conny , Kohler, Michael
- Date: 2010
- Type: Text , Journal article
- Relation: IEEE Transactions on Information Theory Vol. 56, no. 3 (2010), p. 1417-1429
- Full Text:
- Reviewed:
- Description: An L-2-boosting algorithm for estimation of a regression function from random design is presented, which consists of repeatedly fitting a function from a fixed nonlinear function space to the residuals of the data by least squares and defining the estimate as a linear combination of the resulting least squares estimates. Splitting of the sample is used to decide after how many iterations of smoothing of the residuals the algorithm terminates. The rate of convergence of the algorithm is analyzed in the case of an unbounded response variable. The method is used to fit a sum of maxima of minima of linear functions to a given data set, and is compared with other nonparametric regression estimates using simulated data.
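The residual-fitting loop described above is easy to sketch in one dimension. This minimal version uses regression stumps as the base function class rather than the paper's maxima of minima of linear functions, and omits the sample-splitting stopping rule, using a fixed iteration count and shrinkage instead; the toy data is hypothetical:

```python
def fit_stump(x, y):
    """Least-squares regression stump: best threshold plus left/right means."""
    best = None
    for t in x:
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((yi - (ml if xi <= t else mr)) ** 2 for xi, yi in zip(x, y))
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    _, t, ml, mr = best
    return lambda xi: ml if xi <= t else mr

def l2_boost(x, y, iterations=50, shrinkage=0.5):
    """L2-boosting: repeatedly fit the base learner to the current residuals
    and return the (shrunken) linear combination of all fitted learners."""
    ensemble = []
    residuals = list(y)
    for _ in range(iterations):
        stump = fit_stump(x, residuals)
        ensemble.append(stump)
        residuals = [r - shrinkage * stump(xi) for xi, r in zip(x, residuals)]
    return lambda xi: shrinkage * sum(s(xi) for s in ensemble)

# Hypothetical toy data: a step function from 0 to 1.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.0, 0.0, 1.0, 1.0, 1.0]
f = l2_boost(x, y)
print(round(f(0.5), 2), round(f(3.5), 2))  # 0.0 1.0
```

Each iteration shrinks the remaining residual, so the combined estimate converges geometrically toward the least-squares fit of the data by the closure of the base class.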
Pattern recognition in bioinformatics : Girls lose out
- Authors: Ahmad, Shandar , Chetty, Madhu , Schmidt, Bertil
- Date: 2010
- Type: Text , Journal article
- Relation: Pattern Recognition Letter Vol. 31, no. 14 (2010), p. 2071-2072
- Full Text: false
- Reviewed:
- Description: Editorial: With the advent of high-speed computers, in-silico studies of biological patterns have in recent years been significantly impacted by pattern recognition techniques. In this special issue, 'Pattern Recognition in Bioinformatics', we present various sophisticated algorithms for a wide range of pattern recognition problems from the world of complex biological systems, whether these are specific sequence signatures (motifs that stand out in discovering their partners) or substructures in an interaction network that determine an organism's response to external stimuli. The 12 high-quality articles included in this special issue are essentially based on significant extensions of selected papers presented at the Third International Conference on Pattern Recognition in Bioinformatics (PRIB 2008) held in Melbourne, Australia. All the papers selected for the special issue have again undergone a thorough review by at least three reviewers who are experts in the field. The fresh review process was followed to ensure that the papers met the high standards of scientific and technical merit of the Pattern Recognition Letters journal. The issue is broadly divided into three sections of four papers each: (1) Interaction Networks and Feature-based Predictions, (2) Microarray and Transcription Data Analysis, and (3) Sequence Analysis and Motif Discovery.
Video coding focusing on block partitioning and occlusion
- Authors: Paul, Manoranjan , Murshed, Manzur
- Date: 2010
- Type: Text , Journal article
- Relation: IEEE Transactions on Image Processing Vol. 19, no. 3 (2010), p. 691-701
- Full Text: false
- Reviewed:
- Description: Among the existing block partitioning schemes, pattern-based video coding (PVC) has already established its superiority at low bit-rates. Its innovative segmentation process with regular-shaped pattern templates is very fast, as it avoids handling the exact shape of the moving objects. It also judiciously encodes the pattern-uncovered background segments, capturing a high level of interblock temporal redundancy without any motion compensation, which is favoured by the rate-distortion optimizer at low bit-rates. The existing PVC technique, however, uses a number of content-sensitive thresholds, and setting them to predefined values risks ignoring some of the macroblocks that would otherwise be encoded with patterns. Furthermore, occluded background can potentially degrade the performance of this technique. In this paper, a robust PVC scheme is proposed by removing all the content-sensitive thresholds, introducing a new similarity metric, having the rate-distortion optimizer consider multiple top-ranked patterns, and refining the Lagrangian multiplier of the H.264 standard for efficient embedding. A novel pattern-based residual encoding approach is also integrated to address the occlusion issue. Once embedded into the H.264 Baseline profile, the proposed PVC scheme significantly improves perceived image quality, by at least 0.5 dB, in low bit-rate video coding applications. A similar trend is observed for moderate to high bit-rate applications when the proposed scheme replaces the bi-directional predictive mode in the H.264 High profile.