A geometric method to compute directionality features for texture images
- Authors: Islam, Md , Zhang, Dengsheng , Lu, Guojun
- Date: 2008
- Type: Text , Conference paper
- Relation: Proceedings of the 2008 IEEE International Conference on Multimedia and Expo p. 1521-1524
- Full Text: false
- Reviewed:
- Description: In content based image analysis and retrieval, texture is an essential feature due to its strong discriminative power. Directionality is one of the most significant texture features and is well perceived by the human visual system. This paper proposes a new method to calculate the directionality of an image. In contrast to the Tamura method, which uses a statistical property of an image's directional histogram to calculate its directionality, the proposed method makes use of a geometric property of the directional histogram. Both subjective and objective analyses show that the proposed method outperforms the conventional Tamura method. It has also been shown that the proposed directionality achieves better retrieval performance than the conventional Tamura directionality.
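The directional histogram at the core of both the Tamura and the proposed method can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation, and the `peak_concentration` score is a hypothetical stand-in for the paper's geometric measure:

```python
import math

def directional_histogram(img, n_bins=16, mag_thresh=0.1):
    """Histogram of gradient directions for a grayscale image (rows of floats).

    Uses simple central differences; both the Tamura and the proposed method
    start from this kind of histogram before analysing it.
    """
    h = [0.0] * n_bins
    rows, cols = len(img), len(img[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag < mag_thresh:
                continue  # ignore near-flat pixels with unreliable direction
            theta = math.atan2(gy, gx) % math.pi  # direction, modulo 180 deg
            h[min(int(theta / math.pi * n_bins), n_bins - 1)] += 1
    total = sum(h) or 1.0
    return [v / total for v in h]

def peak_concentration(hist, window=1):
    """Toy geometric directionality score: fraction of histogram mass in a
    small window around the dominant peak (1.0 = perfectly directional)."""
    n = len(hist)
    p = max(range(n), key=lambda i: hist[i])
    return sum(hist[(p + d) % n] for d in range(-window, window + 1))
```

A horizontal intensity ramp, for example, yields a histogram concentrated in a single bin and a score near 1.0.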
Automatic categorization of image regions using dominant color based vector quantization
- Authors: Islam, Md , Zhang, Dengsheng , Lu, Guojun
- Date: 2008
- Type: Text , Conference paper
- Relation: Proceedings of the Digital Image Computing: Techniques and Applications p. 191-198
- Full Text: false
- Reviewed:
- Description: This paper proposes a dominant color based vector quantization algorithm that automatically categorizes image regions. In contrast to the conventional vector quantization algorithm, the new algorithm effectively handles variable-length feature vectors like dominant color descriptors. Furthermore, the algorithm is guided by a novel splitting and stopping criterion specially designed for dominant color descriptors. This criterion helps the algorithm not only to learn the number of clusters, but also to avoid unnecessary over-fragmentation of region-clusters. Experimental results show that the proposed approach categorizes image regions with very high accuracy.
Region based color image retrieval using curvelet transform
- Authors: Islam, Md , Zhang, Dengsheng , Lu, Guojun
- Date: 2010
- Type: Text , Conference paper
- Relation: Proceedings of the 9th Asian Conference on Computer Vision p. 448-457
- Full Text: false
- Reviewed:
- Description: An effective texture feature is an essential component of any content based image retrieval system. In the past, spectral features, like Gabor and wavelet features, have shown superior retrieval performance to many statistical and structural features. Recent research on multi-resolution analysis has found that the curvelet transform captures texture properties, like curves, lines, and edges, more accurately than Gabor filters. However, the texture feature extracted using the curvelet transform is not rotation invariant. This can degrade its retrieval performance significantly, especially where there are many similar images with different orientations. This paper analyses the curvelet transform and derives a useful approach to extract rotation invariant curvelet features. Experimental results show that the new rotation invariant curvelet feature outperforms the curvelet feature without rotation invariance.
Novel local improvement techniques in clustered memetic algorithm for protein structure prediction
- Authors: Islam, Md Kamrul , Chetty, Madhu , Murshed, Manzur
- Date: 2011
- Type: Text , Conference paper
- Relation: IEEE Congress on Evolutionary Computation (IEEE CEC) p. 1003-1011
- Full Text: false
- Reviewed:
- Description: Evolutionary algorithms (EAs) often fail to find the global optimum due to genetic drift. As the protein structure prediction problem is multimodal, having several global optima, EAs empowered with a combined application of local and global search, e.g., memetic algorithms, can be more effective. This paper introduces two novel local improvement techniques for the clustered memetic algorithm that incorporate both problem-specific and search-space-specific knowledge to find one of the optimum structures of a hydrophobic-polar protein sequence on lattice models. Experimental results show the superiority of the proposed techniques over existing EAs on benchmark sequences.
Using meta-regression data mining to improve predictions of performance based on heart rate dynamics for Australian football
- Authors: Jelinek, Herbert , Kelarev, Andrei , Robinson, Dean , Stranieri, Andrew , Cornforth, David
- Date: 2014
- Type: Text , Journal article
- Relation: Applied Soft Computing Vol. 14, no. PART A (2014), p. 81-87
- Full Text: false
- Reviewed:
- Description: This work investigates the effectiveness of using machine learning regression algorithms and meta-regression methods to predict performance data for Australian football players based on parameters collected during daily physiological tests. Three experiments are described. The first uses all available data with a variety of regression techniques. The second uses a subset of features selected from the available data using the Random Forest method. The third uses meta-regression with the selected feature subset. Our experiments demonstrate that feature selection and meta-regression methods improve the accuracy of predictions of match performance of Australian football players based on daily medical test data, compared to regression methods alone. Meta-regression methods and feature selection were able to obtain performance prediction outcomes with significant correlation coefficients. The best results were obtained by additive regression based on isotonic regression for a set of the most influential features selected by Random Forest. This model was able to predict athlete performance data with a correlation coefficient of 0.86 (p < 0.05). © 2013 Published by Elsevier B.V. All rights reserved.
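The best-performing base learner reported above is isotonic regression. A minimal sketch of its standard Pool Adjacent Violators fit (the textbook algorithm, not the authors' meta-regression pipeline) is:

```python
def isotonic_fit(y):
    """Least-squares non-decreasing fit to a sequence y using the Pool
    Adjacent Violators algorithm. Each block holds [mean, weight]."""
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # Merge adjacent blocks while the monotone constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    fitted = []
    for mean, weight in blocks:
        fitted.extend([mean] * weight)
    return fitted
```

For example, `isotonic_fit([1, 3, 2, 4])` pools the violating pair (3, 2) into their mean, giving `[1, 2.5, 2.5, 4]`.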
Diagnostic with incomplete nominal/discrete data
- Authors: Jelinek, Herbert , Yatsko, Andrew , Stranieri, Andrew , Venkatraman, Sitalakshmi , Bagirov, Adil
- Date: 2015
- Type: Text , Journal article
- Relation: Artificial Intelligence Research Vol. 4, no. 1 (2015), p. 22-35
- Full Text:
- Reviewed:
- Description: Missing values may be present in data without undermining its use for diagnostic/classification purposes, but they compromise the application of readily available software. Surrogate entries can remedy the situation, although the outcome is generally unknown. Discretization of continuous attributes renders all data nominal and is helpful in dealing with missing values; in particular, no special handling is required for different attribute types. A number of classifiers exist, or can be reformulated, for this representation. Some classifiers can be reinvented as data completion methods. In this work the Decision Tree, Nearest Neighbour, and Naive Bayesian methods are demonstrated to have the required aptness. An approach is implemented whereby the entered missing values are not necessarily a close match to the true data; rather, they are chosen to cause the least hindrance to classification. The proposed techniques find their application particularly in medical diagnostics. Where clinical data represents a number of related conditions, taking the Cartesian product of the class values of the underlying sub-problems allows narrowing down the selection of missing value substitutes. Real-world data examples, some publicly available, are used for testing. The proposed and benchmark methods are compared by classifying the data before and after missing value imputation, indicating a significant improvement.
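The Cartesian-product narrowing step can be illustrated as follows. The sub-problems, attribute names, and the `candidate_values` helper are hypothetical examples of the idea, not the paper's code:

```python
from itertools import product

# Hypothetical related conditions, each a sub-problem with its own classes.
sub_classes = {
    "retinopathy": ["absent", "present"],
    "neuropathy": ["absent", "present"],
}

# Composite classes: the Cartesian product of the sub-problems' class values.
composite = list(product(*sub_classes.values()))

def candidate_values(attr_values, records, target_class):
    """Restrict missing-value substitutes for a nominal attribute to values
    actually observed in complete records of the same composite class, so the
    imputed value hinders classification as little as possible."""
    seen = {r["attr"] for r in records
            if r["class"] == target_class and r["attr"] in attr_values}
    return seen or set(attr_values)  # fall back to all values if none seen
```

With two binary sub-problems there are four composite classes, and a missing nominal value is only ever filled from values seen in records of the matching composite class.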
Automatic image annotation based on decision tree machine learning
- Authors: Jiang, Lixing , Hou, Jin , Zeng, Chen , Zhang, Dengsheng
- Date: 2009
- Type: Text , Conference paper
- Relation: Proceedings of the International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery p. 170-175
- Full Text: false
- Reviewed:
- Description: With the rapid development of digital imaging technology, image annotation has become an important and challenging task in image retrieval. At present, many machine learning methods have been applied to the problem of automatic image annotation (AIA). However, there exists an enormous semantic gap between low-level image features and high-level semantic concepts. Due to this gap, the annotation performance of existing methods is not satisfactory and needs to be further improved. This paper proposes an automatic annotation framework based on a novel decision tree-based Bayesian (DTB) machine learning algorithm, a hybrid approach that attempts to combine the advantages of both decision trees (DT) and Naive Bayes (NB). We first segment an image into regions and extract low-level features of each region. From these features, high-level semantic concepts are obtained using the DTB learning algorithm. Finally, experiments conducted on the Corel dataset demonstrate the effectiveness of DTB machine learning. The DTB not only enhances classification accuracy, but also associates low-level region features with high-level image concepts, combining the strengths of the Bayesian method and the DT. Moreover, this semantic interpretation capability is a natural simulation of human learning.
Extraction and processing of real time strain of embedded FBG sensors using a fixed filter FBG circuit and an artificial neural network
- Authors: Kahandawa, Gayan , Epaarachchi, Jayantha , Wang, Hao , Canning, John , Lau, Alan
- Date: 2013
- Type: Text , Journal article
- Relation: Measurement: Journal of the International Measurement Confederation Vol. 46, no. 10 (2013), p. 4045-4051
- Full Text:
- Reviewed:
- Description: Fibre Bragg Grating (FBG) sensors have been used in the development of structural health monitoring (SHM) and damage detection systems for advanced composite structures over several decades. Unfortunately, to date only a handful of configurations and algorithms suitable for use in SHM systems have been developed. This paper presents a novel configuration of FBG sensors to acquire strain readings and an integrated statistical approach to analyse data in real time. The proposed configuration has proven its capability to overcome the practical constraints and engineering challenges associated with FBG-based SHM systems. A fixed filter decoding system and an integrated artificial neural network algorithm for extracting strain from embedded FBG sensors were proposed and experimentally validated. Furthermore, laboratory-level experimental data were used to verify the accuracy of the system, and the prediction error levels were found to be less than 0.3%. The developed SHM system using this technology has been submitted to the US patent office and will be available for aerospace applications in due course. © 2013 Elsevier Ltd. All rights reserved.
Assessing trust level of a driverless car using deep learning
- Authors: Karmakar, Gour , Chowdhury, Abdullahi , Das, Rajkumar , Kamruzzaman, Joarder , Islam, Syed
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Intelligent Transportation Systems Vol. 22, no. 7 (2021), p. 4457-4466
- Full Text: false
- Reviewed:
- Description: The increasing adoption of driverless cars is already driving a shift away from traditional transportation systems towards automated ones in many industrial and commercial applications. Recent research suggests that driverless vehicles will considerably reduce traffic congestion, accidents, and carbon emissions, and enhance the accessibility of driving to a wider cross-section of people and lifestyle choices. However, at present, people's main concerns are privacy and security. Since traditional protocol-layer security mechanisms are not very effective for a distributed system, trust value-based security mechanisms, a type of pervasive security, are emerging as popular and promising techniques. A few statistical, non-learning-based models for measuring the trust level of a driverless car are available in the current literature. These are not very effective because they cannot capture the extremely distributed, dynamic, and complex nature of traffic systems. To bridge this research gap, in this paper, for the first time, we propose two deep learning-based models that measure the trustworthiness of a driverless car and its major On-Board Unit (OBU) components. The second model also determines which OBU components were breached during the driving operation. Results produced using real and simulated traffic data demonstrate that our proposed DNN-based deep learning models outperform other machine learning models in assessing the trustworthiness of an individual car as well as its OBU components. The average precision of detection for the car, LiDAR, camera, and radar is 0.99, 0.96, 0.81, and 0.83, respectively, which indicates the potential real-life application of our models in assessing the trust level of a driverless car. © 2000-2011 IEEE.
Clustering in large data sets with the limited memory bundle method
- Authors: Karmitsa, Napsu , Bagirov, Adil , Taheri, Sona
- Date: 2018
- Type: Text , Journal article
- Relation: Pattern Recognition Vol. 83, no. (2018), p. 245-259
- Relation: http://purl.org/au-research/grants/arc/DP140103213
- Full Text: false
- Reviewed:
- Description: The aim of this paper is to design an algorithm based on nonsmooth optimization techniques to solve minimum sum-of-squares clustering problems in very large data sets. First, the clustering problem is formulated as a nonsmooth optimization problem. Then the limited memory bundle method [Haarala et al., 2007] is modified and combined with an incremental approach to design a new clustering algorithm. The algorithm is evaluated using real-world data sets with both large numbers of attributes and large numbers of data points, and is compared with other optimization-based clustering algorithms. The numerical results demonstrate the efficiency of the proposed algorithm for clustering in very large data sets.
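The incremental approach described above can be sketched as follows, with plain Lloyd iterations standing in for the paper's limited memory bundle solver (an assumption for illustration only; the actual algorithm minimizes the nonsmooth clustering objective directly):

```python
def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, centers, iters=100):
    """Lloyd's algorithm from given starting centers; a smooth stand-in for
    the limited memory bundle method used in the paper."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            groups[min(range(len(centers)),
                       key=lambda i: sq_dist(p, centers[i]))].append(p)
        new = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
               for i, g in enumerate(groups)]
        if new == centers:
            break
        centers = new
    return centers

def incremental_clustering(points, k):
    """Incremental scheme: solve the (l-1)-cluster problem first, then seed
    the l-th center with the point farthest from the current centers."""
    centers = [tuple(sum(c) / len(points) for c in zip(*points))]
    centers = kmeans(points, centers)
    while len(centers) < k:
        far = max(points, key=lambda p: min(sq_dist(p, c) for c in centers))
        centers = kmeans(points, centers + [far])
    return centers
```

On two well-separated blobs, the incremental scheme recovers one center per blob without ever restarting from random seeds, which is the practical appeal of the incremental formulation for very large data sets.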
A comparison of bidding strategies for online auctions using fuzzy reasoning and negotiation decision functions
- Authors: Kaur, Preetinder , Goyal, Madhu , Lu, Jie
- Date: 2017
- Type: Text , Journal article
- Relation: IEEE Transactions on Fuzzy Systems Vol. 25, no. 2 (2017), p. 425-438
- Full Text:
- Reviewed:
- Description: Bidders often feel challenged when looking for the best bidding strategies to excel in the competitive environment of multiple, simultaneous online auctions for the same or similar items. Bidders face complicated decisions about which auction to participate in, whether to bid early or late, and how much to bid. In this paper, we present the design of bidding strategies that forecast the bid amounts for buyers at a particular moment in time based on their bidding behavior and their valuation of an auctioned item. The agent develops a comprehensive methodology for final price estimation, which designs bidding strategies to address buyers' different bidding behaviors using two approaches: the Mamdani method with regression analysis, and negotiation decision functions. The experimental results show that agents who follow fuzzy reasoning with a regression approach outperform other existing agents in most settings in terms of success rate and expected utility.
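A classic time-dependent negotiation decision function of the kind compared in this paper can be sketched as below (a Faratin-style tactic; the parameter names are illustrative, not the paper's notation):

```python
def time_dependent_bid(t, t_max, opening_bid, valuation_max, beta):
    """Time-dependent negotiation decision function: the bid rises from the
    opening bid towards the buyer's valuation as the deadline t_max nears.
    beta < 1 concedes late (Boulware); beta > 1 concedes early (Conceder)."""
    alpha = (t / t_max) ** (1.0 / beta)
    return opening_bid + alpha * (valuation_max - opening_bid)
```

At the deadline every tactic bids the full valuation; mid-auction, a Boulware agent (small beta) is still bidding close to its opening bid while a Conceder (large beta) has nearly reached its valuation.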
Keeping the patient asleep and alive : Towards a computational cognitive model of disturbance management in anaesthesia
- Authors: Keogh, Kathleen , Sonenberg, Elizabeth
- Date: 2007
- Type: Text , Journal article
- Relation: Cognitive Systems Research Vol. 8, no. 4 (2007), p. 249-261
- Full Text:
- Reviewed:
- Description: We have analysed rich, dynamic data about the behaviour of anaesthetists during the management of a simulated critical incident in the operating theatre. We use a paper-based analysis and a partial implementation to further the development of a computational cognitive model for disturbance management in anaesthesia. We suggest that our data analysis pattern may be used for the analysis of behavioural data describing cognitive and observable events in other complex dynamic domains. © 2007 Elsevier B.V. All rights reserved.
Blast-induced ground vibration prediction using support vector machine
- Authors: Khandelwal, Manoj
- Date: 2011
- Type: Text , Journal article
- Relation: Engineering with Computers Vol. 27, no. 3 (2011), p. 193-200
- Full Text: false
- Reviewed:
- Description: Ground vibrations induced by blasting are one of the fundamental problems in the mining industry and may cause severe damage to nearby structures and plants. Therefore, vibration control studies play an important role in minimizing the environmental effects of blasting in mines. In this paper, an attempt has been made to predict the peak particle velocity using a support vector machine (SVM), taking into consideration the maximum charge per delay and the distance from the blast face to the monitoring point. To investigate the suitability of this approach, the predictions by SVM have been compared with conventional vibration predictor equations. The coefficient of determination (CoD) and mean absolute error were taken as performance measures. © 2010 Springer-Verlag London Limited.
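For context, the conventional predictor equations the SVM is benchmarked against follow a power-law form in scaled distance. The sketch below uses the well-known USBM predictor together with the two reported performance measures; K and B are site constants fitted from monitored records, and the values used in any call are arbitrary illustrations:

```python
import math

def usbm_ppv(distance_m, charge_kg, k, b):
    """Conventional USBM vibration predictor: PPV = K * (D / sqrt(Q))**(-B),
    where D is the distance from the blast face and Q is the maximum charge
    per delay. K and B are site constants fitted to monitored blast data."""
    return k * (distance_m / math.sqrt(charge_kg)) ** (-b)

def mean_absolute_error(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def coefficient_of_determination(actual, predicted):
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot
```

The same CoD and MAE functions score either predictor: a perfect model gives CoD 1.0 and MAE 0.0, and the USBM curve correctly predicts lower PPV at greater distance for the same charge.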
Application of soft computing to predict blast-induced ground vibration
- Authors: Khandelwal, Manoj , Kumar, Lalit , Yellishetty, Mohan
- Date: 2011
- Type: Text , Journal article
- Relation: Engineering with Computers Vol. 27, no. 2 (2011), p. 117-125
- Full Text: false
- Reviewed:
- Description: In this study, an attempt has been made to evaluate and predict blast-induced ground vibration by incorporating explosive charge per delay and distance from the blast face to the monitoring point using an artificial neural network (ANN) technique. A three-layer feed-forward back-propagation neural network with a 2-5-1 architecture was trained and tested using 130 experimental and monitored blast records from the surface coal mines of Singareni Collieries Company Limited, Kothagudem, Andhra Pradesh, India. Twenty new blast data sets were used for the validation and comparison of the peak particle velocity (PPV) predicted by the ANN and by conventional vibration predictors. Results were compared based on the coefficient of determination and the mean absolute error between monitored and predicted values of PPV. © 2009 Springer-Verlag London Limited.
Application of an expert system to predict thermal conductivity of rocks
- Authors: Khandelwal, Manoj
- Date: 2012
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 21, no. 6 (2012), p. 1341-1347
- Full Text: false
- Reviewed:
- Description: In this paper, an attempt has been made to predict the thermal conductivity (TC) of rocks by incorporating uniaxial compressive strength, density, porosity, and P-wave velocity using a support vector machine (SVM). Training of the SVM network was carried out using 102 experimental data sets of various rocks, whereas 25 new data sets were used for testing the TC predictions of the SVM model. Multivariate regression analysis (MVRA) was also carried out with the same data sets used for training the SVM model. SVM and MVRA results were compared based on the coefficient of determination (CoD) and mean absolute error (MAE) between experimental and predicted values of TC. It was found that the CoD between measured and predicted values of TC was 0.994 for SVM and 0.918 for MVRA, whereas the MAE was 0.0453 for SVM and 0.2085 for MVRA. © 2011 Springer-Verlag London Limited.
Implementing an ANN model optimized by genetic algorithm for estimating cohesion of limestone samples
- Authors: Khandelwal, Manoj , Marto, Aminaton , Fatemi, Seyed , Ghoroqi, Mahyar , Armaghani, Danial , Singh, Trilok , Tabrizi, Omid
- Date: 2018
- Type: Text , Journal article
- Relation: Engineering with Computers Vol. 34, no. 2 (2018), p. 307-317
- Full Text: false
- Reviewed:
- Description: Shear strength parameters such as cohesion are among the most significant rock parameters used in the initial design of some geotechnical engineering applications. In this study, evaluation and prediction of rock material cohesion is presented using different approaches, i.e., simple and multiple regression, artificial neural network (ANN), and genetic algorithm (GA)-ANN. For this purpose, a database was prepared with three model inputs, i.e., P-wave velocity, uniaxial compressive strength, and Brazilian tensile strength, and one output, the cohesion of limestone samples. A meaningful relationship was found for all of the model inputs, with suitable performance capacity for prediction of rock cohesion. Additionally, a high level of accuracy (coefficient of determination, R2, of 0.925) was observed for the developed multiple regression equation. To obtain higher performance capacity, a series of ANN and GA-ANN models were built. As a result, the hybrid GA-ANN network provides higher performance for prediction of rock cohesion compared to the ANN technique. The GA-ANN model results (R2 = 0.976 and 0.967 for training and testing) were better than the ANN model results (R2 = 0.949 and 0.948 for training and testing). Therefore, this technique is introduced as a new approach for estimating the cohesion of limestone samples. © 2017, Springer-Verlag London Ltd., part of Springer Nature.
How does saline backflow affect the treatment of saline-infused radiofrequency ablation?
- Authors: Kho, Antony , Ooi, Ean H. , Foo, Ji , Ooi, Ean Tat
- Date: 2021
- Type: Text , Journal article
- Relation: Computer Methods and Programs in Biomedicine Vol. 211, no. (2021), p.
- Full Text: false
- Reviewed:
- Description: Background and objective: Saline infusion is applied together with radiofrequency ablation (RFA) to enlarge the ablation zone. However, one of the issues with saline-infused RFA is backflow, which spreads saline along the insertion track. This raises the concern of not only thermally ablating the tissue within the backflow region, but also of losing saline from the targeted tissue, which may affect treatment efficacy. Methods: In the present study, 2D axisymmetric models were developed to investigate how saline backflow influences saline-infused RFA and whether the aforementioned concerns are warranted. Saline-infused RFA was described using the dual porosity-Joule heating model. The hydrodynamics of backflow was described using Poiseuille's law by assuming the flow to be similar to that in a thin annulus. Backflow lengths of 3, 4.5, 6 and 9 cm were considered. Results: Results showed that there is no concern of thermally ablating the tissue in the backflow region, because the Joule heating is inversely proportional to the fourth power of the distance from the electrode. Results also indicated that larger backflow lengths led to larger growth of thermal damage along the backflow region and a greater decrease in coagulation volume. Hence, backflow needs to be controlled to ensure an effective saline-infused RFA treatment. Conclusions: There is no risk of ablating tissues around the needle insertion track due to backflow. Instead, the risk of underablation as a result of the loss of saline due to backflow was found to be of greater concern. © 2021 Elsevier B.V.
Malware detection in edge devices with fuzzy oversampling and dynamic class weighting
- Authors: Khoda, Mahbub , Kamruzzaman, Joarder , Gondal, Iqbal , Imam, Tasadduq , Rahman, Ashfaqur
- Date: 2021
- Type: Text , Journal article
- Relation: Applied Soft Computing Vol. 112, no. (2021), p.
- Full Text: false
- Reviewed:
- Description: In the Internet-of-Things (IoT) domain, edge devices are increasingly used for data accumulation, preprocessing, and analytics. Intelligent integration of edge devices with Artificial Intelligence (AI) facilitates real-time analysis and decision making. However, these devices simultaneously provide additional attack opportunities for malware developers, potentially leading to information and financial loss. Machine learning approaches can detect such attacks, but their performance degrades when benign samples substantially outnumber malware samples in the training data. Existing approaches for such imbalanced data assume samples are represented as continuous features and thus can generate invalid samples when malware applications are represented by binary features. We propose a novel malware oversampling technique that addresses this issue. Further, we propose two approaches for malware detection. Our first approach uses fuzzy set theory, while the second dynamically assigns higher priority to malware samples using a novel loss function. Combining our oversampling technique with these approaches, the proposed method attains over 9% improvement over competing methods in terms of F1-score. Our approaches can, therefore, result in enhanced privacy and security in edge computing services. © 2021 Elsevier B.V.
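The validity problem with oversampling binary malware features can be sketched as follows. This toy `binary_smote` illustrates the idea of treating interpolated values as fuzzy membership degrees; it is an illustration only, not the paper's algorithm:

```python
import random

def binary_smote(parent_a, parent_b, rng=random.Random(0)):
    """Generate a synthetic minority (malware) sample from two real samples
    whose features are binary (e.g. permission or API-call indicators).

    Plain SMOTE interpolation would yield fractional feature values, which
    are invalid for binary features. Here each interpolated value is treated
    as a fuzzy membership degree in [0, 1] and resolved back to {0, 1} by a
    random threshold, so every synthetic sample stays valid."""
    gap = rng.random()
    synthetic = []
    for a, b in zip(parent_a, parent_b):
        degree = a + gap * (b - a)  # fuzzy membership degree in [0, 1]
        synthetic.append(1 if rng.random() < degree else 0)
    return synthetic
```

Features on which both parents agree are preserved exactly (degree 0 or 1), while disputed features are sampled, so the synthetic sample is always a valid binary vector.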
The evolution of Turing Award Collaboration Network : bibliometric-level and network-level metrics
- Authors: Kong, Xiangjie , Shi, Yajie , Wang, Wei , Ma, Kai , Wan, Liangtian , Xia, Feng
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Transactions on Computational Social Systems Vol. 6, no. 6 (2019), p. 1318-1328
- Full Text:
- Reviewed:
- Description: The year 2017 marked the 50th anniversary of the Turing Award, the top award in the field of computer science. We study the long-term evolution of the Turing Award Collaboration Network, which can be considered a microcosm of the computer science field from 1974 to 2016. First, scholars tended to publish articles by themselves in the early stages and began to focus on tight collaboration from the late 1980s. Second, compared with a random network of the same scale, the Turing Award Collaboration Network has small-world properties but is not a scale-free network. The reason may be that the number of collaborators per scholar is limited, so scholars cannot connect to others freely (preferential attachment) as in a scale-free network. Third, to measure how far a scholar is from the Turing Award, we propose a metric called the Turing Number (TN) and find that the TN decreases gradually over time. Meanwhile, we discover that scholars increasingly prefer to gather into groups to do research as computer science develops. This article presents a new way to explore the evolution of an academic collaboration network in the field of computer science by building and analyzing the Turing Award Collaboration Network over decades. © 2014 IEEE.
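The Turing Number can be read as a shortest-path distance in the coauthorship graph, analogous to the Erdős number. A sketch via multi-source breadth-first search follows; the exact definition in the paper may differ, so treat this as a hedged reconstruction:

```python
from collections import deque

def turing_number(coauthors, laureates):
    """Shortest coauthorship distance from each scholar to any Turing
    laureate, computed by multi-source BFS. Laureates themselves get TN 0;
    scholars absent from the result are unreachable (infinite TN).

    coauthors: dict mapping scholar -> iterable of coauthors.
    laureates: iterable of Turing Award laureates in the graph."""
    tn = {s: 0 for s in laureates}
    queue = deque(laureates)
    while queue:
        s = queue.popleft()
        for nb in coauthors.get(s, ()):
            if nb not in tn:
                tn[nb] = tn[s] + 1
                queue.append(nb)
    return tn
```

For example, a laureate's direct coauthor has TN 1, that coauthor's coauthors have TN 2, and an isolated scholar never appears in the result.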
The gene of scientific success
- Authors: Kong, Xiangjie , Zhang, Jun , Zhang, Da , Bu, Yi , Ding, Ying , Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: ACM Transactions on Knowledge Discovery from Data Vol. 14, no. 4 (2020), p.
- Full Text:
- Reviewed:
- Description: This article elaborates how to identify and evaluate causal factors to improve scientific impact. Analyzing scientific impact can benefit various academic activities including funding applications, mentor recommendation, discovering potential collaborators, and the like. It is universally acknowledged that high-impact scholars often have more opportunities to receive awards as encouragement for their hard work. Therefore, scholars spend great effort in making scientific achievements and improving their scientific impact during their academic life. But what are the determining factors that control scholars' academic success? The answer to this question can help scholars conduct their research more efficiently. With this in mind, our article presents and analyzes the causal factors that are crucial for scholars' academic success. We first propose five major factors: article-centered factors, author-centered factors, venue-centered factors, institution-centered factors, and temporal factors. Then, we apply recent advanced machine learning algorithms and the jackknife method to assess the importance of each causal factor. Our empirical results show that author-centered and article-centered factors have the highest relevance to scholars' future success in the computer science area. Additionally, we discover an interesting phenomenon: the h-indices of scholars within the same institution or university are actually very close to each other. © 2020 ACM.