Neural networks for detection and classification of walking pattern changes due to ageing
- Authors: Begg, Rezaul , Kamruzzaman, Joarder
- Date: 2006
- Type: Text , Journal article
- Relation: Australasian Physical & Engineering Sciences in Medicine Vol. 29, no. 2 (2006), p. 188-195
- Full Text: false
- Reviewed:
- Description: With age, gait functions reflected in the walking patterns degenerate and threaten the balance control mechanisms of the locomotor system. The aim of this paper is to explore applications of artificial neural networks for automated recognition of gait changes due to ageing from their respective gait-pattern characteristics. The ability to make such discriminations has many advantages, including the identification of at-risk or faulty gait. Various gait features (e.g., temporal-spatial, foot-ground reaction forces and lower limb joint angular data) were extracted from 12 young and 12 elderly participants during normal walking, and these were utilized for training and testing three neural network algorithms (Standard Backpropagation; Scaled Conjugate Gradient; and Backpropagation with Bayesian Regularization, BR). Receiver operating characteristic plots, sensitivity and specificity results, as well as accuracy rates, were used to evaluate the performance of the three classifiers. Cross-validation test results indicate a maximum generalization performance of 83.3% in the recognition of young and elderly gait patterns. Of the three neural network algorithms, BR performed best in the tests, with the highest sensitivity, specificity and detection rates. With the help of a feature selection technique, the maximum classification accuracy of BR reached 100% when trained with a small subset of selected gait features. The results of this study demonstrate the capability of neural networks to detect gait changes with ageing and their potential for future application as a gait diagnostic.
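The sensitivity, specificity and accuracy metrics this abstract evaluates are simple functions of a binary confusion matrix. A minimal sketch in Python, assuming numpy and synthetic labels sized to the study's 12-young/12-elderly design (the positive-class convention and the two-errors-per-class predictions are illustrative, not the paper's data):

```python
import numpy as np

def sens_spec_acc(y_true, y_pred):
    """Sensitivity, specificity and accuracy for a binary classifier.
    Convention assumed here: 1 = elderly (positive), 0 = young (negative)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy

# Toy example: 12 young (0) and 12 elderly (1) gait samples, 2 errors per class.
y_true = np.array([0] * 12 + [1] * 12)
y_pred = np.array([0] * 10 + [1] * 2 + [1] * 10 + [0] * 2)
sens, spec, acc = sens_spec_acc(y_true, y_pred)
```

With two misclassifications per class, accuracy works out to 20/24 ≈ 83.3%, the same headline figure the abstract reports for cross-validation.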
Resonant frequency band estimation using adaptive wavelet decomposition level selection
- Authors: Yaqub, Muhammad , Gondal, Iqbal , Kamruzzaman, Joarder
- Date: 2011
- Type: Text , Conference paper
- Relation: 2011 IEEE International Conference on Mechatronics and Automation (ICMA) p. 376-381
- Full Text: false
- Reviewed:
- Description: The vibrations induced by machine faults help in the diagnosis and prognosis of the machine. It is crucial for a fault diagnostic system to extract the resonant frequency band, which carries useful information about the defect frequencies and contains the maximum signal-to-noise ratio. The spectral location of the resonant frequency band varies with machine dynamics. Existing techniques, which employ the wavelet transform to exploit the signal energy distribution among different frequency sub-bands, are based on a fixed decomposition level and do not optimize the wavelet parameters according to varying machine dynamics. This study develops a novel technique, Adaptive Wavelet Decomposition and Resonance Frequency Estimation (AWRE), which estimates the position of the resonant frequency band based on adaptive selection of the wavelet decomposition levels. The results for simulated as well as actual vibration data demonstrate that the proposed technique estimates the bandwidth of the resonant frequency band effectively.
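The core idea, adaptively picking the decomposition level whose sub-band holds the most defect energy, can be illustrated with dyadic FFT band energies. This is a simplified stand-in, not the AWRE algorithm: the paper works with wavelet decompositions, whereas the sketch below uses raw spectrum energy over the same dyadic bands, with synthetic data:

```python
import numpy as np

def dominant_dyadic_band(signal, fs, max_level=6):
    """Pick the dyadic band [fs/2^(l+1), fs/2^l) with the most energy.
    Illustrative proxy for adaptive decomposition-level selection."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    best_level, best_energy = None, -1.0
    for level in range(1, max_level + 1):
        lo, hi = fs / 2 ** (level + 1), fs / 2 ** level
        energy = spectrum[(freqs >= lo) & (freqs < hi)].sum()
        if energy > best_energy:
            best_level, best_energy = level, energy
    return best_level, (fs / 2 ** (best_level + 1), fs / 2 ** best_level)

# Synthetic vibration: a 2 kHz resonance sampled at 12 kHz, plus noise.
fs = 12000
t = np.arange(4096) / fs
sig = np.sin(2 * np.pi * 2000 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
level, band = dominant_dyadic_band(sig, fs)
```

The 2 kHz resonance falls inside the level-2 band [1500, 3000) Hz, so that level is selected; a wavelet packet decomposition partitions the spectrum into the same dyadic bands.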
Severity invariant machine fault diagnosis
- Authors: Yaqub, Muhammad , Gondal, Iqbal , Kamruzzaman, Joarder
- Date: 2011
- Type: Text , Conference paper
- Relation: 6th IEEE Conference on Industrial Electronics and Applications p. 21-26
- Full Text: false
- Reviewed:
- Description: Vibration signals used for abnormality detection in machine health monitoring (MHM) vary significantly with fault severity. This variation causes overlap among the features belonging to different types of faults, resulting in severe degradation of fault detection accuracy. This paper identifies a new problem arising from severity-variant features and proposes a novel adaptive training set and feature selection (ATSFS) scheme based upon the orientation of the test data. To build ATSFS and validate its performance, training and testing data are obtained from different severity levels. To capture the non-stationary behavior of vibration signals, robust time-frequency analysis tools such as the wavelet transform (WT) are employed. Simulation studies show that ATSFS attains high classification accuracy even when training and testing data belong to different severity levels.
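Adapting the training set to the orientation of the test data can be sketched as choosing, among severity-level subsets, the one whose feature distribution lies closest to the incoming test batch. A minimal reading of that idea with numpy and synthetic features (the centroid-distance rule and all data are assumptions for illustration, not the paper's actual ATSFS procedure):

```python
import numpy as np

def select_training_set(train_sets, test_batch):
    """Pick the training subset whose feature centroid is nearest to the
    incoming test data -- a simplified sketch of adapting the training set
    to the orientation of the test data."""
    test_centroid = test_batch.mean(axis=0)
    dists = [np.linalg.norm(s.mean(axis=0) - test_centroid) for s in train_sets]
    return int(np.argmin(dists))

rng = np.random.default_rng(1)
# Three severity levels with shifted feature distributions (synthetic).
levels = [rng.normal(loc=mu, size=(50, 4)) for mu in (0.0, 2.0, 4.0)]
test = rng.normal(loc=2.1, size=(10, 4))
chosen = select_training_set(levels, test)   # expect the mid-severity subset
```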
Application of artificial intelligence to improve quality of service in computer networks
- Authors: Ahmad, Iftekhar , Kamruzzaman, Joarder , Habibi, Daryoush
- Date: 2012
- Type: Text , Journal article
- Relation: Neural Computing & Applications Vol. 21, no. 1 (2012), p. 81-90
- Full Text: false
- Reviewed:
- Description: Resource sharing between book-ahead (BA) and instantaneous request (IR) reservation often results in high preemption rates for ongoing IR calls in computer networks. High IR call preemption rates cause interruptions to service continuity, which is considered detrimental in a QoS-enabled network. A number of call admission control models have been proposed in the literature to reduce preemption rates for ongoing IR calls. Many of these models use a tuning parameter to achieve a certain preemption rate. This paper presents an artificial neural network (ANN) model to dynamically control the preemption rate of ongoing calls in a QoS-enabled network. The model maps network traffic parameters and the operator's desired operating preemption rate into the appropriate tuning parameter for the network under consideration. Once trained, the model can automatically estimate the tuning parameter value necessary to achieve the desired operating preemption rate. Simulation results show that the preemption rate attained by the model closely matches the target rate.
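The essential structure is a learned regression from (traffic parameters, target preemption rate) to a tuning parameter. A minimal sketch with numpy, using a least-squares fit as a stand-in for the paper's ANN and an entirely synthetic generating rule (the feature set, coefficients and noise are assumptions):

```python
import numpy as np

# Synthetic training data: [offered load, target preemption rate] -> tuning parameter.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))
true_w = np.array([0.7, -1.3])                        # hypothetical underlying rule
y = X @ true_w + 0.5 + 0.01 * rng.normal(size=200)

A = np.hstack([X, np.ones((200, 1))])                 # add a bias column
w, *_ = np.linalg.lstsq(A, y, rcond=None)             # fit the mapping

def tuning_parameter(load, target_rate):
    """Estimate the tuning parameter for a desired operating point."""
    return np.array([load, target_rate, 1.0]) @ w
```

Once fitted, `tuning_parameter(load, target_rate)` plays the role the trained ANN plays in the paper: it returns the parameter setting expected to achieve the operator's desired preemption rate under the given traffic conditions.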
ACSP-Tree: A tree structure for mining behavioral patterns from wireless sensor networks
- Authors: Rashid, Md. Mamunur , Gondal, Iqbal , Kamruzzaman, Joarder
- Date: 2013
- Type: Text , Conference paper
- Relation: IEEE Conference on Local Computer Networks (LCN 2013) (21 October 2013 to 24 October 2013) p. 691-694
- Full Text: false
- Reviewed:
- Description: WSNs generate large amounts of data in the form of streams, and mining knowledge from these streams can be extremely useful. Association rule mining from sensor data has been studied in the recent literature. However, sensor association rule mining often produces a huge number of rules, most of which are either redundant or fail to reflect the true correlation among data objects. In this paper, we address this problem and propose mining a new type of sensor behavioral pattern called associated-correlated sensor patterns. The proposed behavioral patterns capture not only association-like co-occurrences but also the substantial temporal correlations implied by such co-occurrences in the sensor data. We also use a prefix tree-based structure called the associated-correlated sensor pattern tree (ACSP-tree), which facilitates a frequent pattern (FP) growth-based mining technique to generate all associated-correlated patterns from WSN data with only one scan over the sensor database. An extensive performance study shows that our approach is more time and memory efficient in finding associated-correlated patterns than the most efficient existing algorithms.
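The one-scan prefix-tree construction at the heart of FP-growth-style mining can be sketched compactly. This is a simplified skeleton of the ACSP-tree idea, assuming sensor "epochs" are sets of sensor IDs that triggered together; the paper's actual tree additionally stores correlation information at the nodes:

```python
class PatternTreeNode:
    __slots__ = ("children", "count")
    def __init__(self):
        self.children, self.count = {}, 0

def build_sensor_pattern_tree(epochs):
    """Build a prefix tree over sensor epochs in a single scan.
    Shared prefixes share nodes, which is what compresses the database."""
    root = PatternTreeNode()
    for epoch in epochs:
        node = root
        for sensor in sorted(epoch):   # canonical order makes paths shareable
            node = node.children.setdefault(sensor, PatternTreeNode())
            node.count += 1
    return root

# Three epochs of co-triggered sensors (hypothetical IDs).
epochs = [{"s1", "s2"}, {"s1", "s2", "s3"}, {"s2", "s3"}]
tree = build_sensor_pattern_tree(epochs)
```

After the single pass, the first two epochs share the `s1 → s2` path (count 2), so pattern counts can be mined from the tree without rescanning the sensor database.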
Regularly frequent patterns mining from sensor data stream
- Authors: Rashid, Md. Mamunur , Gondal, Iqbal , Kamruzzaman, Joarder
- Date: 2013
- Type: Text , Conference paper
- Relation: International Conference on Neural Information Processing (ICONIP 2013) p. 417-424
- Full Text: false
- Reviewed:
- Description: Mining interesting and useful knowledge from the huge amount of data gathered in wireless sensor networks is a challenging task. Works reported in the literature use support metric-based sensor association rules, which employ the occurrence frequency of patterns as the criterion. Such a criterion may not be appropriate for finding significant patterns. Moreover, temporal regularity in occurrence behavior should be considered as another important measure for assessing the importance of patterns in WSNs. Frequent sensor patterns that occur at regular intervals are called regularly frequent sensor patterns. Even though mining regularly frequent sensor patterns from sensor data streams is extremely important in many real-time applications, no such algorithm has been proposed yet. In this paper, we propose a novel tree structure called the Regularly Frequent Sensor Pattern tree (RSP-tree) and an efficient mining approach for finding regularly frequent sensor patterns from WSNs. Extensive performance analyses show that our technique is time and memory efficient in finding regularly frequent sensor patterns.
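The two measures being combined, support (how often a pattern occurs) and regularity (how evenly its occurrences are spaced), can be computed directly from a pattern's occurrence epochs. The definitions below follow common usage in regular-pattern mining and may differ in detail from the paper's RSP-tree formulation:

```python
def support_and_regularity(occurrences, total_epochs):
    """Support = occurrence count. Regularity = maximum gap between successive
    occurrences, including the gaps to the start and end of the stream.
    A pattern is 'regularly frequent' if support is high AND regularity is small."""
    gaps = [occurrences[0]]                          # gap from the stream start
    gaps += [b - a for a, b in zip(occurrences, occurrences[1:])]
    gaps.append(total_epochs - occurrences[-1])      # gap to the stream end
    return len(occurrences), max(gaps)

# Pattern seen at epochs 2, 5, 8 and 11 in a 12-epoch window:
support, regularity = support_and_regularity([2, 5, 8, 11], 12)
```

A purely support-based miner would rank this pattern identically to one occurring four times in a burst; the regularity measure (here, a maximum gap of 3) is what distinguishes evenly recurring behavior.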
A new convergence rate estimation of general artificial immune algorithm
- Authors: Hong, Lu , Kamruzzaman, Joarder
- Date: 2015
- Type: Text , Journal article
- Relation: Journal of Intelligent and Fuzzy Systems Vol. 28, no. 6 (2015), p. 2793-2800
- Full Text: false
- Reviewed:
- Description: The artificial immune algorithm has been used widely and successfully in many computational optimization areas, but theoretical research exploring its convergence rate characteristics remains inadequate. In this paper, instead of the traditional eigenvalue estimation of the state transition matrix, stochastic process theory is introduced to study the convergence rate of the general artificial immune algorithm. The method begins by analyzing the necessary condition for convergence of the artificial immune algorithm and takes it as the sufficient condition for a class of general artificial immune algorithms. Through the definition of the Markov chain convergence rate, a strong-convergence (in probability) rate estimation method for the general artificial immune algorithm is proposed. The method is based on the final convergence of the best antibody, which overcomes the conservatism of traditional estimation methods. Simulation results confirm the correctness of the proposed estimation method, which can be used to judge the convergence and convergence rate of a class of artificial immune algorithms. This research provides a theoretical reference for optimizing the convergence rate of artificial immune algorithms in practical applications.
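The Markov chain convergence rate the abstract builds on is, in its standard textbook form, a geometric bound on the distance between the chain's distribution and its stationary distribution. The bound below is that standard form, not the paper's specific estimate:

```latex
% Geometric convergence of an ergodic Markov chain with transition kernel P,
% initial distribution \mu_0 and stationary distribution \pi, in total
% variation; the rate \rho governs how quickly the algorithm's population
% distribution settles. The paper's estimate builds on a definition of this
% general form (details differ).
\left\| \mu_0 P^{\,n} - \pi \right\|_{TV} \;\le\; C\,\rho^{\,n},
\qquad C > 0,\quad 0 < \rho < 1 .
```

The traditional eigenvalue approach estimates ρ from the spectrum of the state transition matrix; the paper's alternative judges the rate from the convergence of the best antibody instead.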
A data mining approach for machine fault diagnosis based on associated frequency patterns
- Authors: Rashid, Md. Mamunur , Amar, Muhammad , Gondal, Iqbal , Kamruzzaman, Joarder
- Date: 2016
- Type: Text , Journal article
- Relation: Applied Intelligence Vol. 45, no. 3 (2016), p. 638-651
- Full Text: false
- Reviewed:
- Description: Bearings play a crucial role in rotational machines and their failure is one of the foremost causes of breakdowns in rotary machinery. Their functionality is directly relevant to the operational performance, service life and efficiency of these machines. Therefore, bearing fault identification is very significant. The accuracy of fault or anomaly detection by the current techniques is not adequate. We propose a data mining-based framework for fault identification and anomaly detection from machine vibration data. In this framework, to capture the useful knowledge from the vibration data stream (VDS), we first pre-process the data using Fast Fourier Transform (FFT) to extract the frequency signature and then build a compact tree called SAFP-tree (sliding window associated frequency pattern tree), and propose a mining algorithm called SAFP. Our SAFP algorithm can mine associated frequency patterns (i.e., fault frequency signatures) in the current window of VDS and use them to identify faults in the bearing data. Finally, SAFP is further enhanced to SAFP-AD for anomaly detection by determining the normal behavior measure (NBM) from the extracted frequency patterns. The results show that our technique is very efficient in identifying faults and detecting anomalies over VDS and can be used for remote machine health diagnosis. © 2016, Springer Science+Business Media New York.
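The FFT preprocessing step, turning a vibration frame into a frequency signature before the pattern-tree mining, can be sketched as extracting the dominant spectral bins. The choice of bins and of `top_k`, and the synthetic 60/160 Hz fault tones, are illustrative assumptions, not the SAFP framework's exact signature definition:

```python
import numpy as np

def frequency_signature(frame, fs, top_k=3):
    """Return the top-k dominant frequencies (Hz, rounded) of a vibration
    frame -- a sketch of the FFT-based signature extraction step."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    idx = np.argsort(spectrum[1:])[::-1][:top_k] + 1   # skip the DC bin
    return sorted(round(f) for f in freqs[idx])

# Synthetic frame with two fault tones at 60 Hz and 160 Hz.
fs = 2048
t = np.arange(2048) / fs
frame = np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 160 * t)
sig = frequency_signature(frame, fs, top_k=2)
```

Each window of the vibration data stream would be reduced to such a signature, and it is these signatures that the SAFP-tree mines for associated frequency patterns.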
Search and tracking algorithms for swarms of robots: A survey
- Authors: Senanayake, Madhubhashi , Senthooran, Ilankaikone , Barca, Jan , Chung, Hoam , Kamruzzaman, Joarder , Murshed, Manzur
- Date: 2016
- Type: Text , Journal article
- Relation: Robotics and Autonomous Systems Vol. 75, no. Part B (2016), p. 422-434
- Full Text: false
- Reviewed:
- Description: Target search and tracking is a classical but difficult problem in many research domains, including computer vision, wireless sensor networks and robotics. We review the seminal works that addressed this problem in the area of swarm robotics, which is the application of swarm intelligence principles to the control of multi-robot systems. Robustness, scalability and flexibility, as well as distributed sensing, make swarm robotic systems well suited for the problem of target search and tracking in real-world applications. We classify the works we review according to the variations and aspects of the search and tracking problems they addressed. As this is a particularly application-driven research area, the adopted taxonomy makes this review serve as a quick reference guide to our readers in identifying related works and approaches according to their problem at hand. By no means is this an exhaustive review, but an overview for researchers who are new to the swarm robotics field, to help them easily start off their research. © 2015 Elsevier B.V.
An efficient RANSAC hypothesis evaluation using sufficient statistics for RGB-D pose estimation
- Authors: Senthooran, Ilankaikone , Murshed, Manzur , Barca, Jan , Kamruzzaman, Joarder , Chung, Hoam
- Date: 2019
- Type: Text , Journal article
- Relation: Autonomous Robots Vol. 43, no. 5 (2019), p. 1257-1270
- Full Text:
- Reviewed:
- Description: Achieving autonomous flight in GPS-denied environments begins with pose estimation in three-dimensional space, and this is much more challenging for an MAV in a swarm robotic system due to limited computational resources. In vision-based pose estimation, outlier detection is the most time-consuming step. This usually involves a RANSAC procedure using the reprojection-error method for hypothesis evaluation. The realignment-based hypothesis evaluation method is observed to be more accurate, but its considerably slower speed makes it unsuitable for robots with limited resources. We use sufficient statistics of least-squares minimisation to speed up this process. The additive nature of these sufficient statistics makes it possible to compute pose estimates in each evaluation by reusing previously computed statistics, so estimates need not be calculated from scratch each time. The proposed method is tested on standard RANSAC, Preemptive RANSAC and R-RANSAC using benchmark datasets. The results show that the use of sufficient statistics speeds up the outlier detection process with realignment hypothesis evaluation for all RANSAC variants, achieving a speedup of up to 6.72 times.
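The "additive sufficient statistics" idea can be made concrete: the least-squares rigid pose between matched 3D point sets depends only on the sums Σpᵢ, Σqᵢ and Σpᵢqᵢᵀ, so adding or removing a point updates the statistics in O(1) rather than re-solving from scratch. A sketch of that recovery step using the standard SVD (Kabsch/Umeyama) solution, with synthetic noiseless correspondences; how the statistics are threaded through each RANSAC variant is the paper's contribution and is not reproduced here:

```python
import numpy as np

def pose_from_statistics(sum_p, sum_q, sum_pqT, n):
    """Recover rotation R and translation t minimising sum ||q_i - (R p_i + t)||^2
    from the additive statistics (sum of p, sum of q, sum of p q^T)."""
    mu_p, mu_q = sum_p / n, sum_q / n
    H = sum_pqT / n - np.outer(mu_p, mu_q)   # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_q - R @ mu_p
    return R, t

# Synthetic correspondences under a known rigid transform.
rng = np.random.default_rng(0)
P = rng.normal(size=(20, 3))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true

# One pass accumulates everything a hypothesis evaluation needs.
sum_p, sum_q = P.sum(axis=0), Q.sum(axis=0)
sum_pqT = P.T @ Q                  # equals sum over i of p_i q_i^T
R_est, t_est = pose_from_statistics(sum_p, sum_q, sum_pqT, len(P))
```

Because each point contributes additively to `sum_p`, `sum_q` and `sum_pqT`, a realignment-based hypothesis score can be recomputed incrementally as the inlier set changes, which is what makes the method fast enough for resource-limited MAVs.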
A survey on context awareness in big data analytics for business applications
- Authors: Dinh, Loan , Karmakar, Gour , Kamruzzaman, Joarder
- Date: 2020
- Type: Text , Journal article
- Relation: Knowledge and Information Systems Vol. 62, no. 9 (2020), p. 3387-3415
- Full Text:
- Reviewed:
- Description: The concept of context awareness has existed since the 1990s. Though initially applied exclusively in computer science, it has over time been adopted by many different application domains such as business, health and the military. Contexts change continuously for objective reasons, such as economic conditions, political matters and social issues. The adoption of big data analytics by businesses is facilitating such change at an even faster rate and in more complicated ways. The potential benefits of embedding contextual information into an application are already evidenced by the improved outcomes of existing context-aware methods in those applications. Since big data is growing very rapidly, context awareness in big data analytics has become more important and timely because of its proven efficiency in big data understanding and preparation, contributing to extracting greater and more accurate value from big data. Many surveys have been published on context-based methods such as context modelling and reasoning, workflow adaptations, computational intelligence techniques and mobile ubiquitous systems. However, to our knowledge, no survey of context-aware methods in big data analytics for business applications supported by enterprise-level software has been published to date. To bridge this research gap, in this paper we first present a definition of context, its modelling and evaluation techniques, and highlight the importance of contextual information for big data analytics. Second, works in three key business application areas that are context-aware and/or exploit big data analytics are thoroughly reviewed. Finally, the paper concludes by highlighting a number of contemporary research challenges, including issues concerning modelling, managing and applying business contexts to big data analytics. © 2020, Springer-Verlag London Ltd., part of Springer Nature.
Assessing trust level of a driverless car using deep learning
- Authors: Karmakar, Gour , Chowdhury, Abdullahi , Das, Rajkumar , Kamruzzaman, Joarder , Islam, Syed
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Intelligent Transportation Systems Vol. 22, no. 7 (2021), p. 4457-4466
- Full Text: false
- Reviewed:
- Description: The increasing adoption of driverless cars is already providing a shift from traditional transportation systems to automated ones in many industrial and commercial applications. Recent research indicates that driverless vehicles will considerably reduce traffic congestion, accidents and carbon emissions, and enhance the accessibility of driving to a wider cross-section of people and lifestyle choices. At present, however, people's main concerns are privacy and security. Since traditional protocol-layer security mechanisms are not very effective for a distributed system, trust value-based security mechanisms, a type of pervasive security, are emerging as popular and promising techniques. A few statistical, non-learning-based models for measuring the trust level of a driverless car are available in the current literature. These are not very effective because they cannot capture the extremely distributed, dynamic and complex nature of traffic systems. To bridge this research gap, in this paper, for the first time, we propose two deep learning-based models that measure the trustworthiness of a driverless car and its major On-Board Unit (OBU) components. The second model also determines which OBU components were breached during the driving operation. Results produced using real and simulated traffic data demonstrate that our proposed DNN-based deep learning models outperform other machine learning models in assessing the trustworthiness of an individual car as well as its OBU components. The average precision of detection for the car, LiDAR, camera and radar is 0.99, 0.96, 0.81 and 0.83, respectively, indicating the potential real-life application of our models in assessing the trust level of a driverless car. © 2000-2011 IEEE.
Malware detection in edge devices with fuzzy oversampling and dynamic class weighting
- Authors: Khoda, Mahbub , Kamruzzaman, Joarder , Gondal, Iqbal , Imam, Tasadduq , Rahman, Ashfaqur
- Date: 2021
- Type: Text , Journal article
- Relation: Applied Soft Computing Vol. 112, no. (2021), p.
- Full Text: false
- Reviewed:
- Description: In the Internet-of-Things (IoT) domain, edge devices are increasingly used for data accumulation, preprocessing and analytics. Intelligent integration of edge devices with Artificial Intelligence (AI) facilitates real-time analysis and decision making. However, these devices simultaneously provide additional attack opportunities for malware developers, potentially leading to information and financial loss. Machine learning approaches can detect such attacks, but their performance degrades when benign samples substantially outnumber malware samples in the training data. Existing approaches for such imbalanced data assume samples are represented as continuous features and can thus generate invalid samples when malware applications are represented by binary features. We propose a novel malware oversampling technique that addresses this issue. Further, we propose two approaches for malware detection. Our first approach uses fuzzy set theory, while the second dynamically assigns higher priority to malware samples using a novel loss function. Combining our oversampling technique with these approaches attains over 9% improvement over competing methods in terms of F1 score. Our approaches can, therefore, result in enhanced privacy and security in edge computing services. © 2021 Elsevier B.V.
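The validity problem the abstract raises, interpolation-style oversamplers producing non-binary feature values, disappears if synthetic samples are drawn as 0/1 with class-conditional probabilities. The sketch below is a simplified stand-in for the paper's fuzzy oversampling and loss function, with synthetic data and an inverse-frequency weighting rule assumed for illustration:

```python
import numpy as np

def oversample_binary(minority, n_new, rng):
    """Generate synthetic minority (malware) samples with *valid* binary
    features: each feature is drawn as 0/1 with probability equal to that
    feature's frequency in the minority class."""
    p = minority.mean(axis=0)   # per-feature occurrence frequency
    return (rng.uniform(size=(n_new, minority.shape[1])) < p).astype(int)

def dynamic_class_weights(y):
    """Inverse-frequency class weights recomputed from the current labels --
    the simplest way to dynamically prioritise the rare malware class."""
    counts = np.bincount(y, minlength=2).astype(float)
    return counts.sum() / (2.0 * counts)

rng = np.random.default_rng(0)
malware = rng.integers(0, 2, size=(20, 8))        # 20 malware samples, binary features
synthetic = oversample_binary(malware, 100, rng)  # every value stays 0 or 1
weights = dynamic_class_weights(np.array([0] * 180 + [1] * 20))
```

Unlike SMOTE-style interpolation, every generated value here is a legal binary feature; the weights (5.0 for the 10% malware class versus ≈0.56 for benign) show how a dynamic loss would penalise missed malware more heavily.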