Investment decision model via an improved BP neural network
- Authors: Shen, Jihong , Zhang, Canxin , Lian, Chunbo , Hu, Hao , Mammadov, Musa
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at 2010 IEEE International Conference on Information and Automation, ICIA 2010, Harbin, Heilongjiang, 20-23 June 2010, p. 2092-2096
- Full Text:
- Description: An investment decision model for macro investment is established using an improved back-propagation (BP) artificial neural network (ANN). The paper first determines the relations between the elements of investment and the output of products, and then finds the optimal distribution of investment by adjusting these distributions rationally. By using nonlinear utility functions to improve the network architecture, the model captures the highly nonlinear mapping relations among the elements of investment and can be widely applied to investment problems. ©2010 IEEE.
Isolation forest
- Authors: Liu, Fei , Ting, Kaiming , Zhou, Zhi-Hua
- Date: 2008
- Type: Text , Conference paper
- Relation: Proceedings of the Eighth IEEE International Conference on Data Mining p. 413-422
- Full Text: false
- Reviewed:
- Description: Most existing model-based approaches to anomaly detection construct a profile of normal instances, then identify instances that do not conform to the normal profile as anomalies. This paper proposes a fundamentally different model-based method that explicitly isolates anomalies instead of profiling normal points. To the best of our knowledge, the concept of isolation has not been explored in the current literature. The use of isolation enables the proposed method, iForest, to exploit sub-sampling to an extent that is not feasible in existing methods, creating an algorithm which has a linear time complexity with a low constant and a low memory requirement. Our empirical evaluation shows that iForest performs favourably to ORCA, a near-linear time complexity distance-based method, LOF and random forests in terms of AUC and processing time, especially in large data sets. iForest also works well in high dimensional problems which have a large number of irrelevant attributes, and in situations where the training set does not contain any anomalies.
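The isolation mechanism described in this abstract is easy to try out. The sketch below is not the authors' original code; it uses scikit-learn's independent IsolationForest implementation (assumed to be installed) on synthetic data to show how sub-sampled isolation trees flag scattered points as anomalies.

```python
# Minimal sketch: sub-sampled isolation trees via scikit-learn's IsolationForest
# (an independent implementation of the iForest idea, not the authors' code).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 2))        # dense "normal" cluster
anomalies = rng.uniform(-6.0, 6.0, size=(10, 2))    # scattered anomalies
X = np.vstack([normal, anomalies])

# Each of the 100 trees is built on a small sub-sample (max_samples=256),
# mirroring the paper's point that isolation works well with sub-sampling.
clf = IsolationForest(n_estimators=100, max_samples=256, random_state=0).fit(X)

scores = -clf.score_samples(X)   # higher = more anomalous
labels = clf.predict(X)          # -1 for anomalies, +1 for normal points
print("points flagged as anomalies:", int((labels == -1).sum()))
```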
Isolation-based anomaly detection using nearest-neighbor ensembles
- Authors: Bandaragoda, Tharindu , Ting, Kaiming , Albrecht, David , Liu, Fei , Zhu, Ye , Wells, Jonathan
- Date: 2018
- Type: Text , Journal article
- Relation: Computational Intelligence Vol. 34, no. 4 (2018), p. 968-998
- Full Text: false
- Reviewed:
- Description: The first successful isolation-based anomaly detector, i.e., iForest, uses trees as a means to perform isolation. Although it has been shown to have advantages over existing anomaly detectors, we have identified four weaknesses, i.e., its inability to detect local anomalies, anomalies with a high percentage of irrelevant attributes, anomalies that are masked by axis-parallel clusters, and anomalies in multimodal data sets. To overcome these weaknesses, this paper shows that an alternative isolation mechanism is required and thus presents iNNE, or isolation using Nearest Neighbor Ensemble. Although relying on nearest neighbors, iNNE runs significantly faster than the existing nearest neighbor-based methods such as the local outlier factor, especially in data sets having thousands of dimensions or millions of instances. This is because the proposed method has linear time complexity and constant space complexity. © 2018 Wiley Periodicals, Inc.
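The abstract only names the mechanism, so the following is a schematic reconstruction of the isolation-using-nearest-neighbour-ensembles idea: each ensemble member draws a small sub-sample, builds a hypersphere around every sub-sampled point with radius equal to its nearest-neighbour distance, and scores a query by its tightest covering ball. The function name, parameter defaults and scoring details here are assumptions and may differ from the published iNNE algorithm.

```python
# Schematic reconstruction of isolation via nearest-neighbour ensembles;
# scoring details may differ from the published iNNE algorithm.
import numpy as np

def inne_scores(X, X_query, n_estimators=100, psi=16, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    total = np.zeros(len(X_query))
    for _ in range(n_estimators):
        S = X[rng.choice(n, size=psi, replace=False)]          # sub-sample of centres
        d = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        nn = d.argmin(axis=1)                                   # nearest centre to each centre
        radius = d[np.arange(psi), nn]                          # hypersphere radius per centre
        dq = np.linalg.norm(X_query[:, None, :] - S[None, :, :], axis=-1)
        covered = dq <= radius                                  # which balls contain each query
        score = np.ones(len(X_query))                           # uncovered queries: fully isolated
        for i in range(len(X_query)):
            balls = np.flatnonzero(covered[i])
            if balls.size:
                c = balls[radius[balls].argmin()]               # tightest covering ball
                score[i] = 1.0 - radius[nn[c]] / radius[c]      # sparse regions give larger scores
            # else: keep the default score of 1
        total += score
    return total / n_estimators                                 # higher = more anomalous

# usage sketch:
# scores = inne_scores(X_train, X_train); top_anomalies = scores.argsort()[::-1][:10]
```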
Issues of grid-cluster retrievals in swarm-based clustering
- Authors: Tan, Swee , Ting, Kaiming , Teng, Shyh
- Date: 2008
- Type: Text , Conference paper
- Relation: Proceedings of the 2008 IEEE World Congress on Computational Intelligence p. 511-518
- Full Text: false
- Reviewed:
- Description: One common approach in swarm-based clustering is to use agents to create a set of clusters on a two-dimensional grid, and then use an existing clustering method to retrieve the clusters on the grid. The second step, which we call grid-cluster retrieval, is an essential step to obtain an explicit partitioning of data. In this study, we highlight the issues in grid-cluster retrievals commonly neglected by researchers, and demonstrate the non-trivial difficulties involved. To tackle the issues, we then evaluate three methods: K-means, hierarchical clustering (Weighted Single-link) and density-based clustering (DBScan). Among the three methods, DBScan is the only method which has not been previously used for grid-cluster retrievals, yet it is shown to be the most suitable method in terms of effectiveness and efficiency.
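As a rough illustration of the grid-cluster retrieval step the paper evaluates, the sketch below runs scikit-learn's DBSCAN over synthetic 2-D grid positions standing in for the layout produced by swarm agents; the positions, eps and min_samples values are placeholders, not the paper's experimental setup.

```python
# Illustrative grid-cluster retrieval with DBSCAN: recover an explicit partition
# from the 2-D grid positions left by swarm agents. Synthetic positions and
# parameter values are placeholders only.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# pretend the swarm agents dropped items around two regions of the grid
grid_positions = np.vstack([
    rng.normal([10, 10], 1.5, size=(60, 2)),
    rng.normal([35, 30], 1.5, size=(60, 2)),
]).round()                                   # integer-like grid coordinates

# eps bounds how far apart two occupied cells can be and still join one cluster
labels = DBSCAN(eps=3.0, min_samples=4).fit_predict(grid_positions)
print("clusters:", len(set(labels) - {-1}), "| noise points:", int((labels == -1).sum()))
```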
Keeping the patient asleep and alive: Towards a computational cognitive model of disturbance management in anaesthesia
- Authors: Keogh, Kathleen , Sonenberg, Elizabeth
- Date: 2007
- Type: Text , Journal article
- Relation: Cognitive Systems Research Vol. 8, no. 4 (2007), p. 249-261
- Full Text:
- Reviewed:
- Description: We have analysed rich, dynamic data about the behaviour of anaesthetists during the management of a simulated critical incident in the operating theatre. We use a paper-based analysis and a partial implementation to further the development of a computational cognitive model for disturbance management in anaesthesia. We suggest that our data analysis pattern may be used for the analysis of behavioural data describing cognitive and observable events in other complex dynamic domains. © 2007 Elsevier B.V. All rights reserved.
Learning by extrapolation from marginal to full-multivariate probability distributions: Decreasingly naive Bayesian classification
- Authors: Webb, Geoffrey , Boughton, Janice , Zheng, Fei , Ting, Kaiming , Salem, Houssam
- Date: 2012
- Type: Text , Journal article
- Relation: Machine Learning Vol. 86, no. 2 (2012), p. 233-272
- Full Text: false
- Reviewed:
- Description: Averaged n-Dependence Estimators (AnDE) is an approach to probabilistic classification learning that learns by extrapolation from marginal to full-multivariate probability distributions. It utilizes a single parameter that transforms the approach between a low-variance high-bias learner (Naive Bayes) and a high-variance low-bias learner with Bayes optimal asymptotic error. It extends the underlying strategy of Averaged One-Dependence Estimators (AODE), which relaxes the Naive Bayes independence assumption while retaining many of Naive Bayes' desirable computational and theoretical properties. AnDE further relaxes the independence assumption by generalizing AODE to higher levels of dependence. Extensive experimental evaluation shows that the bias-variance trade-off for Averaged 2-Dependence Estimators results in strong predictive accuracy over a wide range of data sets. Its training time is linear with respect to the number of examples; it learns in a single pass through the training data, supports incremental learning, handles missing values directly, and is robust in the face of noise. Beyond the practical utility of its lower-dimensional variants, AnDE is of interest in that it demonstrates that it is possible to create low-bias high-variance generative learners and suggests strategies for developing even more powerful classifiers.
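To make the averaging idea concrete, here is a compact from-scratch sketch of AODE, the one-dependence (n = 1) member of the AnDE family, for integer-coded categorical data. The smoothing constants and the handling of low-frequency values are simplified assumptions rather than the paper's exact estimator.

```python
# From-scratch sketch of AODE (the n = 1 member of the AnDE family) for
# integer-coded categorical features; smoothing is simplified.
from collections import Counter

def aode_fit(X, y):
    n, d = X.shape
    model = {
        "n": n, "d": d,
        "classes": sorted(set(y)),
        "vals": [sorted(set(X[:, j])) for j in range(d)],
        "pair": Counter(),    # (c, i, xi)        -> count
        "joint": Counter(),   # (c, i, xi, j, xj) -> count
    }
    for xrow, c in zip(X, y):
        for i in range(d):
            model["pair"][(c, i, xrow[i])] += 1
            for j in range(d):
                model["joint"][(c, i, xrow[i], j, xrow[j])] += 1
    return model

def aode_predict(model, xrow, alpha=1.0):
    n, d = model["n"], model["d"]
    scores = {}
    for c in model["classes"]:
        total = 0.0
        for i in range(d):  # average over one "super-parent" attribute at a time
            # P(y, x_i) with Laplace smoothing
            p_c_xi = (model["pair"][(c, i, xrow[i])] + alpha) / \
                     (n + alpha * len(model["classes"]) * len(model["vals"][i]))
            prod = p_c_xi
            for j in range(d):
                if j == i:
                    continue
                # P(x_j | y, x_i) with Laplace smoothing
                prod *= (model["joint"][(c, i, xrow[i], j, xrow[j])] + alpha) / \
                        (model["pair"][(c, i, xrow[i])] + alpha * len(model["vals"][j]))
            total += prod
        scores[c] = total / d
    return max(scores, key=scores.get)

# usage sketch:
# model = aode_fit(X_train, y_train); y_hat = aode_predict(model, X_test[0])
```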
Learning naive Bayes classifiers for music classification and retrieval
- Authors: Fu, Zhouyu , Lu, Guojun , Ting, Kaiming , Zhang, Dengsheng
- Date: 2010
- Type: Text , Conference paper
- Relation: Proceedings of the 20th International Conference on Pattern Recognition p. 4589-4592
- Full Text: false
- Reviewed:
- Description: In this paper, we explore the use of naive Bayes classifiers for music classification and retrieval. The motivation is to employ all audio features extracted from local windows for classification instead of just using a single song-level feature vector produced by compressing the local features. Two variants of naive Bayes classifiers are studied based on the extensions of standard nearest neighbor and support vector machine classifiers. Experimental results demonstrate that the proposed naive Bayes classifiers achieve superior performance for both music classification and retrieval compared to the alternative methods.
Learning sparse kernel classifiers for multi-instance classification
- Authors: Fu, Zhouyu , Lu, Guojun , Ting, Kaiming , Zhang, Dengsheng
- Date: 2013
- Type: Text , Journal article
- Relation: IEEE Transactions on Neural Networks and Learning Systems Vol. 24, no. 9 (2013), p. 1377-1389
- Full Text: false
- Reviewed:
- Description: We propose a direct approach to learning sparse kernel classifiers for multi-instance (MI) classification to improve efficiency while maintaining predictive accuracy. The proposed method builds on a convex formulation for MI classification by considering the average score of individual instances for bag-level prediction. In contrast, existing formulations used the maximum score of individual instances in each bag, which leads to nonconvex optimization problems. Based on the convex MI framework, we formulate a sparse kernel learning algorithm by imposing additional constraints on the objective function to enforce the maximum number of expansions allowed in the prediction function. The formulated sparse learning problem for the MI classification is convex with respect to the classifier weights. Therefore, we can employ an effective optimization strategy to solve the optimization problem that involves the joint learning of both the classifier and the expansion vectors. In addition, the proposed formulation can explicitly control the complexity of the prediction model while still maintaining competitive predictive performance. Experimental results on benchmark data sets demonstrate that our proposed approach is effective in building very sparse kernel classifiers while achieving comparable performance to the state-of-the-art MI classifiers.
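The convex formulation's central idea, scoring a bag by the average score of its instances, can be mimicked with an averaged instance-level kernel between bags. The sketch below is an illustrative reconstruction of that idea with a standard SVM on a precomputed bag kernel; it does not implement the paper's sparse kernel learning with expansion constraints, and the toy bags and gamma value are assumptions.

```python
# Sketch of the "average instance score" idea for multi-instance classification:
# an averaged instance-level RBF kernel between bags, fed to a standard SVM.
# Not the paper's sparse-kernel training procedure.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def bag_kernel(bags_a, bags_b, gamma=0.5):
    """K(A, B) = mean of rbf(x, x') over all instance pairs x in A, x' in B."""
    K = np.zeros((len(bags_a), len(bags_b)))
    for i, A in enumerate(bags_a):
        for j, B in enumerate(bags_b):
            K[i, j] = rbf_kernel(A, B, gamma=gamma).mean()
    return K

# toy bags: each bag is an (n_instances, n_features) array
rng = np.random.default_rng(0)
pos_bags = [rng.normal(2.0, 1.0, size=(rng.integers(3, 8), 2)) for _ in range(20)]
neg_bags = [rng.normal(-2.0, 1.0, size=(rng.integers(3, 8), 2)) for _ in range(20)]
bags, y = pos_bags + neg_bags, np.array([1] * 20 + [0] * 20)

K_train = bag_kernel(bags, bags)
clf = SVC(kernel="precomputed").fit(K_train, y)
print("training accuracy:", clf.score(K_train, y))
```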
Levels of explainable artificial intelligence for human-aligned conversational explanations
- Authors: Dazeley, Richard , Vamplew, Peter , Foale, Cameron , Young, Cameron , Aryal, Sunil , Cruz, Francisco
- Date: 2021
- Type: Text , Journal article
- Relation: Artificial Intelligence Vol. 299 (2021)
- Full Text:
- Reviewed:
- Description: Over the last few years there has been rapid research growth into eXplainable Artificial Intelligence (XAI) and the closely aligned Interpretable Machine Learning (IML). Drivers for this growth include recent legislative changes and increased investments by industry and governments, along with increased concern from the general public. People are affected by autonomous decisions every day and the public need to understand the decision-making process to accept the outcomes. However, the vast majority of the applications of XAI/IML are focused on providing low-level ‘narrow’ explanations of how an individual decision was reached based on a particular datum. While important, these explanations rarely provide insights into an agent's: beliefs and motivations; hypotheses of other (human, animal or AI) agents' intentions; interpretation of external cultural expectations; or, processes used to generate its own explanation. Yet all of these factors, we propose, are essential to providing the explanatory depth that people require to accept and trust the AI's decision-making. This paper aims to define levels of explanation and describe how they can be integrated to create a human-aligned conversational explanation system. In so doing, this paper will survey current approaches and discuss the integration of different technologies to achieve these levels with Broad eXplainable Artificial Intelligence (Broad-XAI), and thereby move towards high-level ‘strong’ explanations. © 2021 Elsevier B.V.
LiNearN: A new approach to nearest neighbour density estimator
- Authors: Wells, Jonathan , Ting, Kaiming , Washio, Takashi
- Date: 2014
- Type: Text , Journal article
- Relation: Pattern Recognition Vol. 47, no. 8 (2014), p. 2702-2720
- Full Text: false
- Reviewed:
- Description: Despite their widespread use, nearest neighbour density estimators have two fundamental limitations: O(n²) time complexity and O(n) space complexity. Both limitations constrain nearest neighbour density estimators to small data sets only. Recent progress using indexing schemes has improved this to near-linear time complexity only. We propose a new approach, called LiNearN for Linear time Nearest Neighbour algorithm, that yields the first nearest neighbour density estimator having O(n) time complexity and constant space complexity, as far as we know. This is achieved without using any indexing scheme because LiNearN uses a subsampling approach in which the subsample sizes are significantly smaller than the data size. Like existing density estimators, our asymptotic analysis reveals that the new density estimator has a parameter to trade off between bias and variance. We show that algorithms based on the new nearest neighbour density estimator can easily scale up to data sets with millions of instances in anomaly detection and clustering tasks. Highlights: rejects the premise that a NN algorithm must find the NN for every instance; the first NN density estimator with O(n) time complexity and O(1) space complexity; these complexities are achieved without any indexing scheme; the asymptotic analysis reveals a trade-off between bias and variance; easily scales up to large data sets in anomaly detection and clustering tasks.
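Reconstructed from the abstract, the core trick is to estimate each query's nearest-neighbour distance within a tiny random sub-sample and average over an ensemble, so no full nearest-neighbour search is ever performed. The sketch below is a simplified reading of that idea; the normalisation, default psi and ensemble size are assumptions, not the paper's algorithm.

```python
# Illustrative sub-sampled nearest-neighbour density estimate: NN distances are
# computed against small random sub-samples only, then averaged over an ensemble.
# Constants and normalisation are simplified assumptions.
import numpy as np

def subsample_nn_density(X, X_query, n_estimators=50, psi=32, seed=0):
    rng = np.random.default_rng(seed)
    n, dim = X.shape
    est = np.zeros(len(X_query))
    for _ in range(n_estimators):
        S = X[rng.choice(n, size=psi, replace=False)]         # tiny sub-sample
        d = np.linalg.norm(X_query[:, None, :] - S[None, :, :], axis=-1)
        r = d.min(axis=1)                                      # NN distance within the sub-sample
        est += 1.0 / np.maximum(r, 1e-12) ** dim               # density ~ 1 / ball volume
    return est / n_estimators

# usage sketch: dens = subsample_nn_density(X, X); low values flag potential anomalies
```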
Local and global algorithms for learning dynamic Bayesian networks
- Authors: Nguyen, Vinh , Chetty, Madhu , Coppel, Ross , Wangikar, Pramod
- Date: 2012
- Type: Text , Conference paper
- Relation: The 12th IEEE International Conference on Data Mining (ICDM 2012) p. 685-694
- Full Text: false
- Reviewed:
- Description: Learning optimal Bayesian networks (BN) from data is NP-hard in general. Nevertheless, certain BN classes with additional topological constraints, such as the dynamic BN (DBN) models, widely applied in specific fields such as systems biology, can be efficiently learned in polynomial time. Such algorithms have been developed for the Bayesian-Dirichlet (BD), Minimum Description Length (MDL), and Mutual Information Test (MIT) scoring metrics. The BD-based algorithm admits a large polynomial bound, hence it is impractical for even modestly sized networks. The MDL- and MIT-based algorithms admit much smaller bounds, but require a very restrictive assumption that all variables have the same cardinality, thus significantly limiting their applicability. In this paper, we first propose an improvement to the MDL- and MIT-based algorithms, dropping the equicardinality constraint, thus significantly enhancing their generality. We also explore local Markov blanket based algorithms for constructing BN in the context of DBN, and show an interesting result: under the faithfulness assumption, the mutual information test based local Markov blanket algorithms yield the same network as learned by the global optimization MIT-based algorithm. Experimental validation on small and large scale genetic networks demonstrates the effectiveness of our proposed approaches.
Local contrast as an effective means to robust clustering against varying densities
- Authors: Chen, Bo , Ting, Kaiming , Washio, Takashi , Zhu, Ye
- Date: 2018
- Type: Text , Journal article
- Relation: Machine Learning Vol. 107, no. 8-10 (2018), p. 1621-1645
- Full Text:
- Reviewed:
- Description: Most density-based clustering methods have difficulties detecting clusters of hugely different densities in a dataset. A recent density-based clustering method, CFSFDP, appears to have mitigated this issue. However, through formalising the condition under which it fails, we reveal that CFSFDP still has the same issue. To address this issue, we propose a new measure called Local Contrast, as an alternative to density, to find cluster centers and detect clusters. We then apply Local Contrast to CFSFDP, and create a new clustering method called LC-CFSFDP which is robust in the presence of varying densities. Our empirical evaluation shows that LC-CFSFDP outperforms CFSFDP and three other state-of-the-art variants of CFSFDP. © 2018, The Author(s).
Local models - the key to boosting stable learners successfully
- Authors: Ting, Kaiming , Zhu, Lian , Wells, Jonathan
- Date: 2013
- Type: Text , Journal article
- Relation: Computational Intelligence Vol. 29, no. 2 (2013), p. 331-356
- Full Text: false
- Reviewed:
- Description: Boosting has been shown to improve the predictive performance of unstable learners such as decision trees, but not of stable learners like Support Vector Machines (SVM), k-nearest neighbours and Naive Bayes classifiers. In addition to the model stability problem, the high time complexity of some stable learners such as SVM prohibits them from generating multiple models to form an ensemble for large data sets. This paper introduces a simple method that not only enables Boosting to improve the predictive performance of stable learners, but also significantly reduces the computational time to generate an ensemble of stable learners such as SVM for large data sets that would otherwise be infeasible. The method proposes to build local models, instead of global models; and it is the first method, to the best of our knowledge, to solve the two problems in Boosting stable learners at the same time. We implement the method by using a decision tree to define local regions and build a local model for each local region. We show that this implementation of the proposed method enables successful Boosting of three types of stable learners: SVM, k-nearest neighbours and Naive Bayes classifiers.
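A minimal sketch of the local-models idea follows: a shallow decision tree defines local regions, an SVM is trained per region, and the composite learner is boosted with AdaBoost. It relies on scikit-learn and placeholder hyper-parameters; the exact region definition, base learners and boosting variant used in the paper may differ.

```python
# Sketch: local SVMs defined by the leaves of a shallow decision tree,
# wrapped as a scikit-learn estimator and boosted with AdaBoost.
# Hyper-parameters are placeholders, not the paper's configuration.
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.datasets import make_classification

class LocalSVM(BaseEstimator, ClassifierMixin):
    def __init__(self, max_depth=2, C=1.0):
        self.max_depth = max_depth
        self.C = C

    def fit(self, X, y, sample_weight=None):
        self.classes_ = np.unique(y)
        self.tree_ = DecisionTreeClassifier(max_depth=self.max_depth)
        self.tree_.fit(X, y, sample_weight=sample_weight)
        self.models_ = {}
        leaves = self.tree_.apply(X)
        for leaf in np.unique(leaves):
            mask = leaves == leaf
            if len(np.unique(y[mask])) > 1:        # an SVM needs at least two classes
                w = None if sample_weight is None else sample_weight[mask]
                self.models_[leaf] = SVC(C=self.C).fit(X[mask], y[mask], sample_weight=w)
        return self

    def predict(self, X):
        leaves = self.tree_.apply(X)
        pred = self.tree_.predict(X)               # fall back to the tree's own label
        for leaf, model in self.models_.items():
            mask = leaves == leaf
            if mask.any():
                pred[mask] = model.predict(X[mask])
        return pred

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
boosted = AdaBoostClassifier(estimator=LocalSVM(), n_estimators=10, algorithm="SAMME")
print("training accuracy:", boosted.fit(X, y).score(X, y))
```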
Malware detection in edge devices with fuzzy oversampling and dynamic class weighting
- Authors: Khoda, Mahbub , Kamruzzaman, Joarder , Gondal, Iqbal , Imam, Tasadduq , Rahman, Ashfaqur
- Date: 2021
- Type: Text , Journal article
- Relation: Applied Soft Computing Vol. 112 (2021)
- Full Text: false
- Reviewed:
- Description: In the Internet-of-things (IoT) domain, edge devices are used increasingly for data accumulation, preprocessing, and analytics. Intelligent integration of edge devices with Artificial Intelligence (AI) facilitates real-time analysis and decision making. However, these devices simultaneously provide additional attack opportunities for malware developers, potentially leading to information and financial loss. Machine learning approaches can detect such attacks, but their performance degrades when benign samples substantially outnumber malware samples in the training data. Existing approaches for such imbalanced data assume samples represented as continuous features and thus can generate invalid samples when malware applications are represented by binary features. We propose a novel malware oversampling technique that addresses this issue. Further, we propose two approaches for malware detection. Our first approach uses fuzzy set theory, while the second approach dynamically assigns higher priority to malware samples using a novel loss function. Combined with our oversampling technique, these approaches attain over a 9% improvement over competing methods in terms of F1 score. Our approaches can, therefore, result in enhanced privacy and security in edge computing services. © 2021 Elsevier B.V.
Mass estimation
- Authors: Ting, Kaiming , Zhou, Guang , Liu, Fei , Tan, Swee
- Date: 2013
- Type: Text , Journal article
- Relation: Machine Learning Vol. 90, no. 1 (2013), p. 127-160
- Full Text: false
- Reviewed:
- Description: This paper introduces mass estimation—a base modelling mechanism that can be employed to solve various tasks in machine learning. We present the theoretical basis of mass and efficient methods to estimate mass. We show that mass estimation solves problems effectively in tasks such as information retrieval, regression and anomaly detection. The models, which use mass in these three tasks, perform at least as well as and often better than eight state-of-the-art methods in terms of task-specific performance measures. In addition, mass estimation has constant time and space complexities.
Mass estimation and its applications
- Authors: Ting, Kaiming , Zhou, Guangtong , Liu, Fei , Chuan, J. T. S.
- Date: 2010
- Type: Text , Conference paper
- Relation: Proceedings of the 16th ACM SIGKDD Conference on Knowledge Discovery and Data Mining p. 989-998
- Full Text: false
- Reviewed:
- Description: This paper introduces mass estimation, a base modelling mechanism in data mining. It provides the theoretical basis of mass and an efficient method to estimate mass. We show that it solves problems very effectively in tasks such as information retrieval, regression and anomaly detection. The models, which use mass in these three tasks, perform at least as well as, and often better than, eight state-of-the-art methods in terms of task-specific performance measures. In addition, mass estimation has constant time and space complexities.
MassBayes: a new generative classifier with multi-dimensional likelihood estimation
- Authors: Aryal, Sunil , Ting, Kaiming
- Date: 2013
- Type: Text , Conference paper
- Relation: Advances in Knowledge Discovery and Data Mining: 17th Pacific-Asia Conference p. 136-148
- Full Text: false
- Reviewed:
- Description: Existing generative classifiers (e.g., BayesNet and AnDE) make independence assumptions and estimate one-dimensional likelihood. This paper presents a new generative classifier called MassBayes that estimates multi-dimensional likelihood without making any explicit assumptions. It aggregates the multi-dimensional likelihoods estimated from random subsets of the training data using varying size random feature subsets. Our empirical evaluations show that MassBayes yields better classification accuracy than the existing generative classifiers in large data sets. As it works with fixed-size subsets of training data, it has constant training time complexity and constant space complexity, and it can easily scale up to very large data sets.
Maximizing structural similarity in multimodal biomedical microscopic images for effective registration
- Authors: Lv, Guohua , Teng, Shyh , Lu, Guojun , Lackmann, Martin
- Date: 2013
- Type: Text , Conference paper
- Relation: 2013 IEEE International Conference on Multimedia and Expo (ICME)
- Full Text: false
- Reviewed:
- Description: Multimodal image registration (MMIR) is the alignment of contents in images captured from different sensors or instruments. MMIR is important in medical applications as it enables the visualization of the complementary contents in biomedical microscopic images. The registration of such images can be challenging as the structures of their contents are usually only partially similar. Thus, in this paper, we propose a new method to maximize the structural similarity of the contents in such images by utilizing intensity relationships among the Red-Green-Blue color channels. Our experimental results demonstrate that our proposed method substantially improves the accuracy of registering such images as compared to the state-of-the-art methods.
mDBN: motif based learning of gene regulatory networks using dynamic Bayesian networks
- Authors: Morshed, Nizamul , Chetty, Madhu , Nguyen, Vinh , Caelli, Terry
- Date: 2013
- Type: Text , Conference paper
- Relation: Proceedings of the Genetic and Evolutionary Computation Conference, GECCO'13, Association for Computing Machinery Inc. (ACM), 2013 p. 279-286
- Full Text: false
- Reviewed:
- Description: Approaches for deriving the most consistent Bayesian gene regulatory network model from given data sets using evolutionary algorithms typically only result in locally optimal solutions.
Mobile robotic sensors for environmental monitoring using Gaussian Markov random field
- Authors: Nguyen, Linh , Kodagoda, Sarath , Ranasinghe, Ravindra , Dissanayake, Gamini
- Date: 2021
- Type: Text , Journal article
- Relation: Robotica Vol. 39, no. 5 (2021), p. 862-884
- Full Text:
- Reviewed:
- Description: This paper addresses the issue of monitoring spatial environmental phenomena of interest utilizing information collected by a network of mobile, wireless, and noisy sensors that can take discrete measurements as they navigate through the environment. It is proposed to employ a Gaussian Markov random field (GMRF), represented on an irregular discrete lattice by using the stochastic partial differential equations method, to model the physical spatial field. A GMRF-based approach is then derived to effectively predict the field at unmeasured locations, given the available observations, in both centralized and distributed manners. Furthermore, a novel but efficient optimality criterion is proposed to design centralized and distributed adaptive sampling strategies for the mobile robotic sensors to find the most informative sampling paths for future measurements. By taking advantage of the conditional independence property of the GMRF, the adaptive sampling optimization problem is shown to be solvable in deterministic time. The effectiveness of the proposed approach is demonstrated and compared using previously published data sets, with appealing results. Copyright © The Author(s), 2020. Published by Cambridge University Press.
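To make the prediction step concrete, the sketch below applies the standard conditional-Gaussian formula for a zero-mean GMRF with a sparse precision matrix to predict the field at unmeasured lattice sites. The first-order lattice precision, noise-free measurements and zero mean are illustrative assumptions; the paper builds its precision via the SPDE method and also handles sensor noise and adaptive sampling, which are not shown here.

```python
# Sketch of GMRF prediction at unmeasured sites using the conditional formula
#   x_u | x_o ~ N( -Q_uu^{-1} Q_uo x_o , Q_uu^{-1} )
# for a zero-mean GMRF with sparse precision Q. The first-order lattice
# precision and noise-free observations below are illustrative assumptions.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

m = 20                                            # field on an m x m lattice
path = sp.diags([np.ones(m - 1), np.ones(m - 1)], [-1, 1])
A = sp.kron(sp.identity(m), path) + sp.kron(path, sp.identity(m))   # 4-neighbour adjacency
deg = np.asarray(A.sum(axis=1)).ravel()
Q = (sp.diags(deg) - A + 0.1 * sp.identity(m * m)).tocsr()          # Laplacian + nugget

rng = np.random.default_rng(0)
n = m * m
obs = rng.choice(n, size=n // 4, replace=False)   # locations visited by the sensors
unobs = np.setdiff1d(np.arange(n), obs)
x_obs = rng.normal(size=len(obs))                 # stand-in measurements

Q_uu = Q[unobs][:, unobs]
Q_uo = Q[unobs][:, obs]
x_pred = -spsolve(Q_uu.tocsc(), Q_uo @ x_obs)     # conditional mean at unmeasured sites
print("predicted field values at", len(unobs), "unmeasured locations")
```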