Cyberattacks detection in IoT-based smart city applications using machine learning techniques
- Authors: Rashid, Md Mamunur; Kamruzzaman, Joarder; Hassan, Mohammad; Imam, Tassadduq; Gordon, Steven
- Date: 2020
- Type: Text , Journal article
- Relation: International Journal of Environmental Research and Public Health Vol. 17, no. 24 (2020), p. 1-21
- Full Text:
- Reviewed:
- Description: In recent years, the widespread deployment of Internet of Things (IoT) applications has contributed to the development of smart cities. A smart city utilizes IoT-enabled technologies, communications and applications to maximize operational efficiency and enhance both the service providers’ quality of services and people’s wellbeing and quality of life. With the growth of smart city networks, however, comes the increased risk of cybersecurity threats and attacks. IoT devices within a smart city network are connected to sensors linked to large cloud servers and are exposed to malicious attacks and threats. Thus, it is important to devise approaches to prevent such attacks and protect IoT devices from failure. In this paper, we explore an attack and anomaly detection technique based on machine learning algorithms (LR, SVM, DT, RF, ANN and KNN) to defend against and mitigate IoT cybersecurity threats in a smart city. Contrary to existing works that have focused on single classifiers, we also explore ensemble methods such as bagging, boosting and stacking to enhance the performance of the detection system. Additionally, we consider an integration of feature selection, cross-validation and multi-class classification for the discussed domain, which has not been well considered in the existing literature. Experimental results with a recent attack dataset demonstrate that the proposed technique can effectively identify cyberattacks, and the stacking ensemble model outperforms comparable models in terms of accuracy, precision, recall and F1-score, implying the promise of stacking in this domain. © 2020 by the authors. Licensee MDPI, Basel, Switzerland.
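The stacking idea described in the abstract above can be sketched with scikit-learn; the base learners, meta-learner and synthetic data below are illustrative assumptions, not the paper's exact configuration or dataset.

```python
# Hedged sketch: stacking ensemble for attack detection, with internal
# cross-validation as the paper integrates CV into its pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for IoT network-traffic features (benign vs. attack).
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("dt", DecisionTreeClassifier(random_state=0)),
                ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(),  # meta-learner over base outputs
    cv=5,                                  # out-of-fold predictions for the meta-learner
)
stack.fit(X_tr, y_tr)
print(stack.score(X_te, y_te))
```

The meta-learner is trained on out-of-fold predictions of the base classifiers, which is what lets stacking outperform any single base model when their errors are uncorrelated.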
Hybrid intrusion detection system based on the stacking ensemble of C5 decision tree classifier and one class support vector machine
- Authors: Khraisat, Ansam; Gondal, Iqbal; Vamplew, Peter; Kamruzzaman, Joarder; Alazab, Ammar
- Date: 2020
- Type: Text , Journal article
- Relation: Electronics (Switzerland) Vol. 9, no. 1 (2020)
- Full Text:
- Reviewed:
- Description: Cyberattacks are becoming increasingly sophisticated, necessitating efficient intrusion detection mechanisms to monitor computer resources and generate reports on anomalous or suspicious activities. Many Intrusion Detection Systems (IDSs) use a single classifier for identifying intrusions. Single-classifier IDSs are unable to achieve high accuracy and low false alarm rates due to polymorphic, metamorphic, and zero-day behaviors of malware. In this paper, a Hybrid IDS (HIDS) is proposed by combining the C5 decision tree classifier and One-Class Support Vector Machine (OC-SVM). HIDS combines the strengths of a Signature-based Intrusion Detection System (SIDS) and an Anomaly-based Intrusion Detection System (AIDS). The SIDS was developed based on the C5.0 decision tree classifier and the AIDS was developed based on the one-class Support Vector Machine (SVM). This framework aims to identify both well-known intrusions and zero-day attacks with high detection accuracy and low false-alarm rates. The proposed HIDS is evaluated using the benchmark datasets, namely, the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD) and Australian Defence Force Academy (ADFA) datasets. Studies show that the performance of HIDS is enhanced compared to SIDS and AIDS in terms of detection rate and false-alarm rate. © 2020 by the authors. Licensee MDPI, Basel, Switzerland.
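A minimal sketch of the hybrid SIDS/AIDS design the abstract describes: a scikit-learn decision tree stands in for C5.0 (which has no standard Python implementation), and a One-Class SVM trained only on normal traffic covers the anomaly side. The synthetic features and the either-component-fires rule are assumptions for illustration.

```python
# Hedged sketch: signature component (decision tree) + anomaly component
# (one-class SVM); a sample is reported as an attack if either fires.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, (200, 5))        # benign traffic features
known_attack = rng.normal(4, 1, (50, 5))   # attack patterns seen in training
zero_day = rng.normal(-4, 1, (5, 5))       # unseen attack pattern

# SIDS: supervised tree over labeled benign + known-attack samples.
sids = DecisionTreeClassifier(random_state=0).fit(
    np.vstack([normal, known_attack]), [0] * 200 + [1] * 50)

# AIDS: one-class SVM learns the "normal" region only.
aids = OneClassSVM(nu=0.05).fit(normal)

def is_attack(x):
    x = np.asarray(x).reshape(1, -1)
    return sids.predict(x)[0] == 1 or aids.predict(x)[0] == -1

print(is_attack(zero_day[0]))  # zero-day caught by the anomaly component
```

The signature side handles known intrusions with high precision; the anomaly side catches zero-day traffic the tree has never seen, which is the complementarity the hybrid design exploits.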
Low-power wide-area networks : design goals, architecture, suitability to use cases and research challenges
- Authors: Buurman, Ben; Kamruzzaman, Joarder; Karmakar, Gour; Islam, Syed
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8 (2020), p. 17179-17220
- Full Text:
- Reviewed:
- Description: Previous survey articles on Low-Power Wide-Area Networks (LPWANs) lack a systematic analysis of the design goals of LPWANs and the design decisions adopted by various commercially available and emerging LPWAN technologies, and no study has analysed how their design decisions impact their ability to meet design goals. Assessing a technology's ability to meet design goals is essential in determining suitable technologies for a given application. To address these gaps, we have analysed six prominent design goals and identified the design decisions used to meet each goal in eight LPWAN technologies, ranging from technical considerations to business models, and determined which specific technique in a design decision will help meet each goal to the greatest extent. System architecture and specifications are presented for those LPWAN solutions, and their ability to meet each design goal is evaluated. We outline seventeen use cases across twelve domains that require large low-power network infrastructure and prioritise each design goal's importance to those applications as Low, Moderate, or High. Using these priorities and each technology's suitability for meeting design goals, we suggest appropriate LPWAN technologies for each use case. Finally, a number of research challenges are presented for current and future technologies. © 2013 IEEE.
A dynamic content distribution scheme for decentralized sharing in tourist hotspots
- Authors: Kaisar, Shahriar; Kamruzzaman, Joarder; Karmakar, Gour
- Date: 2019
- Type: Text , Journal article
- Relation: Journal of Network and Computer Applications Vol. 129 (2019), p. 9-24
- Full Text:
- Reviewed:
- Description: Decentralized content sharing (DCS) is emerging as a suitable platform for smart mobile device users to generate and share contents seamlessly without the requirement of a centralized server. This feature is particularly important for places that lack Internet coverage, such as tourist attractions, where users can form an ad-hoc network and communicate opportunistically to share contents. Existing DCS approaches, when applied to such places, suffer from a low delivery success rate and high latency. Although a handful of recent approaches have specifically targeted improvement of content delivery service in tourist-spot-like scenarios, these and other DCS approaches do not focus on contents’ demand and supply, which vary considerably due to visitor in-and-out flow and the occurrence of influencing events. This is further compounded by the lack of any content distribution (replication) scheme. The content delivery service will be improved if contents can be proactively distributed to strategic positions based on dynamic demand and supply and medium access contention. In this paper, we propose a dynamic content distribution scheme (DCDS) considering these practical issues for sharing contents in tourist attractions. Simulation results show that the proposed approach significantly improves (7
A novel ensemble of hybrid intrusion detection system for detecting internet of things attacks
- Authors: Khraisat, Ansam; Gondal, Iqbal; Vamplew, Peter; Kamruzzaman, Joarder; Alazab, Ammar
- Date: 2019
- Type: Text , Journal article
- Relation: Electronics (Switzerland) Vol. 8, no. 11 (2019)
- Full Text:
- Reviewed:
- Description: The Internet of Things (IoT) has been rapidly evolving, making a greater impact on everything from everyday life to large industrial systems. Unfortunately, this has attracted the attention of cybercriminals, who have made IoT a target of malicious activities, opening the door to possible attacks on end nodes. Due to the large number and diverse types of IoT devices, it is a challenging task to protect the IoT infrastructure using a traditional intrusion detection system. To protect IoT devices, a novel ensemble Hybrid Intrusion Detection System (HIDS) is proposed by combining a C5 classifier and a One-Class Support Vector Machine classifier. HIDS combines the advantages of a Signature Intrusion Detection System (SIDS) and an Anomaly-based Intrusion Detection System (AIDS). The aim of this framework is to detect both well-known intrusions and zero-day attacks with high detection accuracy and low false-alarm rates. The proposed HIDS is evaluated using the Bot-IoT dataset, which includes legitimate IoT network traffic and several types of attacks. Experiments show that the proposed hybrid IDS provides a higher detection rate and a lower false positive rate compared to the SIDS and AIDS techniques. © 2019 by the authors. Licensee MDPI, Basel, Switzerland.
An efficient RANSAC hypothesis evaluation using sufficient statistics for RGB-D pose estimation
- Authors: Senthooran, Ilankalkone; Murshed, Manzur; Barca, Jan; Kamruzzaman, Joarder; Chung, Hoam
- Date: 2019
- Type: Text , Journal article
- Relation: Autonomous Robots Vol. 43, no. 5 (2019), p. 1257-1270
- Full Text:
- Reviewed:
- Description: Achieving autonomous flight in GPS-denied environments begins with pose estimation in three-dimensional space, and this is much more challenging for an MAV in a swarm robotic system due to limited computational resources. In vision-based pose estimation, outlier detection is the most time-consuming step. This usually involves a RANSAC procedure using the reprojection-error method for hypothesis evaluation. The realignment-based hypothesis evaluation method is observed to be more accurate, but its considerably slower speed makes it unsuitable for robots with limited resources. We use sufficient statistics of least-squares minimisation to speed up this process. The additive nature of these sufficient statistics makes it possible to compute pose estimates in each evaluation by reusing previously computed statistics. Thus estimates need not be calculated from scratch each time. The proposed method is tested on standard RANSAC, Preemptive RANSAC and R-RANSAC using benchmark datasets. The results show that the use of sufficient statistics speeds up the outlier detection process with realignment hypothesis evaluation for all RANSAC variants, achieving a speed-up of up to 6.72 times.
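The additive sufficient-statistics trick in the abstract above can be illustrated on simple 1-D least-squares line fitting (the paper applies it to 3-D pose realignment inside RANSAC; this reduction is an assumption for brevity). Because the statistics are plain sums, the fit for a merged point set can reuse statistics already computed for its subsets instead of refitting from scratch.

```python
# Hedged sketch: sufficient statistics (n, Σx, Σy, Σxx, Σxy) for fitting
# y = a·x + b; being additive, stats of subsets simply add.
import numpy as np

def stats(xs, ys):
    """Sufficient statistics of a point set for least-squares line fitting."""
    return np.array([len(xs), xs.sum(), ys.sum(), (xs * xs).sum(), (xs * ys).sum()])

def fit(s):
    """Recover slope and intercept from the sufficient statistics alone."""
    n, sx, sy, sxx, sxy = s
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 100)

# Fit obtained by adding two chunks' statistics equals a from-scratch fit.
merged = stats(x[:60], y[:60]) + stats(x[60:], y[60:])
a1, b1 = fit(merged)
a2, b2 = fit(stats(x, y))
print(np.allclose([a1, b1], [a2, b2]))  # True
```

This is why a RANSAC hypothesis evaluation can update an estimate incrementally as inliers are added or removed: only five numbers per subset need to be kept, not the points themselves.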
Assessing transformer oil quality using deep convolutional networks
- Authors: Alam, Mohammad; Karmakar, Gour; Islam, Syed; Kamruzzaman, Joarder; Chetty, Madhu; Lim, Suryani; Appuhamillage, Gayan; Chattopadhyay, Gopi; Wilcox, Steve; Verheyen, Vincent
- Date: 2019
- Type: Text , Conference proceedings , Conference paper
- Relation: 29th Australasian Universities Power Engineering Conference, AUPEC 2019
- Full Text:
- Reviewed:
- Description: Electrical power grids comprise a significantly large number of transformers that interconnect power generation, transmission and distribution. These transformers, having different MVA ratings, are critical assets that require proper maintenance to provide long and uninterrupted electrical service. Mineral oil, an essential component of any transformer, not only provides cooling but also acts as an insulating medium within the transformer. The quality and the key dissolved properties of the insulating mineral oil are critical to the transformer's proper and reliable operation. However, traditional chemical diagnostic methods are expensive and time-consuming. A transformer oil image analysis approach based on the entropy value of the oil is inexpensive, effective and quick. However, the inability of entropy to estimate vital transformer oil properties such as equivalent age, Neutralization Number (NN), dissipation factor (tanδ) and power factor (PF), together with the use of many intuitively derived constants, limits its estimation accuracy. To address this issue, in this paper we introduce an innovative transformer oil analysis using two deep convolutional learning techniques, the Convolutional Neural Network (ConvNet) and the Residual Neural Network (ResNet). These two deep neural networks are chosen for this project as they have superior performance in computer vision. After estimating the equivalent age of transformer oil from its image by our proposed method, NN, tanδ and PF are computed using that estimated age. Our deep learning based techniques can accurately predict the transformer oil equivalent age, enabling NN, tanδ and PF to be calculated more accurately. The root mean square errors of the equivalent age estimated by the entropy, ConvNet and ResNet based methods are 0.718, 0.122 and 0.065, respectively. The ConvNet and ResNet based methods reduce the oil age estimation error by 83% and 91%, respectively, compared to the entropy method. Our proposed oil image analysis can calculate an equivalent age that is very close to the actual age for all images used in the experiment. © 2019 IEEE.
- Description: E1
Robust malware defense in industrial IoT applications using machine learning with selective adversarial samples
- Authors: Khoda, Mahbub; Imam, Tasadduq; Kamruzzaman, Joarder; Gondal, Iqbal; Rahman, Ashfaqur
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Transactions on Industry Applications Vol. 56, no. 4 (2020), p. 4415-4424
- Full Text:
- Reviewed:
- Description: Industrial Internet of Things (IIoT) deploys edge devices to act as intermediaries between sensors and actuators and application servers or cloud services. Machine learning models have been widely used to thwart malware attacks in such edge devices. However, these models are vulnerable to adversarial attacks, where attackers craft adversarial samples by introducing small perturbations to malware samples to fool a classifier into misclassifying them as benign applications. Literature on deep learning networks proposes adversarial retraining as a defense mechanism, where adversarial samples are combined with legitimate samples to retrain the classifier. However, existing works select such adversarial samples in a random fashion, which degrades the classifier's performance. This work proposes two novel approaches for selecting adversarial samples to retrain a classifier: one based on the distance from the malware cluster center, and the other based on a probability measure derived from kernel-based learning (KBL). Our experiments show that both of our sample selection methods outperform the random selection method, and the KBL selection method improves detection accuracy by 6%. Also, while existing works focus on deep neural networks with respect to adversarial retraining, we additionally assess the impact of such adversarial samples on other classifiers, and our proposed selective adversarial retraining approaches show similar performance improvements for these classifiers as well. The outcomes from the study can assist in designing robust security systems for IIoT applications.
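Of the two selection strategies in the abstract above, the cluster-distance one is simple to sketch: among candidate adversarial samples, keep the k closest to the malware cluster centre for retraining. The feature space, sample counts and k below are hypothetical, and the KBL variant is not shown.

```python
# Hedged sketch: selecting adversarial samples by distance from the
# malware cluster centre (toy feature vectors, not real malware data).
import numpy as np

rng = np.random.default_rng(0)
malware = rng.normal(5, 1, (100, 8))     # known malware feature vectors
candidates = rng.normal(4, 2, (40, 8))   # crafted adversarial candidates

centre = malware.mean(axis=0)                      # malware cluster centre
dists = np.linalg.norm(candidates - centre, axis=1)
k = 10
selected = candidates[np.argsort(dists)[:k]]       # k nearest to the centre
print(selected.shape)  # (10, 8)
```

The intuition is that adversarial samples nearest the malware cluster are the most informative for pulling the decision boundary back toward the malware region, rather than retraining on arbitrary perturbations.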
Survey of intrusion detection systems : techniques, datasets and challenges
- Authors: Khraisat, Ansam; Gondal, Iqbal; Vamplew, Peter; Kamruzzaman, Joarder
- Date: 2019
- Type: Text , Journal article
- Relation: Cybersecurity Vol. 2, no. 1 (2019), p. 1-22
- Full Text:
- Reviewed:
Breast density classification for cancer detection using DCT-PCA feature extraction and classifier ensemble
- Authors: Haque, Md Sarwar; Hassan, Md Rafiul; BinMakhashen, Galal; Owaidh, Abdullah; Kamruzzaman, Joarder
- Date: 2018
- Type: Text , Conference proceedings
- Relation: 17th International Conference on Intelligent Systems Design and Applications, ISDA 2017; Delhi, India; 14th-16th December 2017; published in Intelligent Systems Design and Applications (part of the Advances in Intelligent Systems and Computing book series) Vol. 736, p. 702-711
- Full Text:
- Reviewed:
- Description: It is well known that breast density in mammograms may hinder the accuracy of breast cancer diagnosis. Although dense breasts should be processed in a special manner, most research has treated dense breasts almost the same as fatty ones. Consequently, dense tissues in the breast may be diagnosed as developed cancer. Instead, dense and fatty breasts should be clearly distinguished before diagnosing whether a breast is cancerous. In this paper, we develop a system that automatically analyzes mammograms and identifies significant features. For feature extraction, we develop a novel system combining a two-dimensional discrete cosine transform (2D-DCT) and principal component analysis (PCA) to extract a minimal feature set from mammograms to differentiate breast density. These features are fed to three classifiers: a Backpropagation Multilayer Perceptron (MLP), a Support Vector Machine (SVM) and K-Nearest Neighbour (KNN). Majority voting on the outputs of the different machine learning tools is also investigated to enhance classification performance. The results show that features extracted using the combination of DCT-PCA provide very high classification performance when using majority voting of the classifier outputs from MLP, SVM, and KNN.
Detecting splicing and copy-move attacks in color images
- Islam, Mohammad, Karmakar, Gour, Kamruzzaman, Joarder, Murshed, Manzur, Kahandawa, Gayan, Parvin, Nahida
- Authors: Islam, Mohammad , Karmakar, Gour , Kamruzzaman, Joarder , Murshed, Manzur , Kahandawa, Gayan , Parvin, Nahida
- Date: 2018
- Type: Text , Conference proceedings
- Relation: 2018 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2018; Canberra, Australia; 10th-13th December 2018 p. 1-7
- Full Text:
- Reviewed:
- Description: Image sensors generate limitless digital images every day. Image forgeries such as splicing and copy-move are very common types of attack that are easy to execute using sophisticated photo editing tools. As a result, digital forensics has attracted much attention for identifying such tampering of digital images. In this paper, a passive (blind) image tampering identification method based on the Discrete Cosine Transform (DCT) and Local Binary Pattern (LBP) is proposed. First, the chroma components of an image are divided into fixed-size non-overlapping blocks, and 2D block DCT is applied to identify changes due to forgery in the local frequency distribution of the image. Then a texture descriptor, LBP, is applied to the magnitude component of the 2D-DCT array to enhance the artifacts introduced by the tampering operation. The resulting LBP image is again divided into non-overlapping blocks. Finally, summations of corresponding inter-cell values of all the LBP blocks are computed and arranged as a feature vector. These features are fed into a Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel to distinguish forged images from authentic ones. The proposed method has been evaluated extensively on three publicly available, well-known image splicing and copy-move detection benchmark datasets of color images. Results demonstrate the superiority of the proposed method over recently proposed state-of-the-art approaches in terms of well-accepted performance metrics such as accuracy, area under the ROC curve and others.
- Description: 2018 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2018
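A minimal sketch of the block-DCT/LBP feature extraction described above, on synthetic data; the 8x8 block size, the hand-rolled 8-neighbour LBP and the synthetic "spliced" patch are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
from scipy.fft import dctn

def lbp8(img):
    """Minimal 8-neighbour local binary pattern of a 2D array (no interpolation)."""
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate([(-1, -1), (-1, 0), (-1, 1), (0, 1),
                                    (1, 1), (1, 0), (1, -1), (0, -1)]):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code

def forgery_features(channel, block=8):
    """Block-wise 2D-DCT magnitudes -> LBP -> sum corresponding cells of all blocks."""
    h, w = channel.shape
    mags = np.zeros_like(channel, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            mags[i:i + block, j:j + block] = np.abs(
                dctn(channel[i:i + block, j:j + block], norm="ortho"))
    code = lbp8(mags)
    ch, cw = code.shape
    cells = [code[i:i + block, j:j + block].astype(float)
             for i in range(0, ch - ch % block, block)
             for j in range(0, cw - cw % block, block)]
    return np.sum(cells, axis=0).ravel()

# The feature vector reacts to a pasted (spliced) region.
authentic = np.full((32, 32), 0.5)
forged = authentic.copy()
forged[4:12, 4:12] = 0.9  # simulated spliced patch
f_auth, f_forged = forgery_features(authentic), forgery_features(forged)
print(f_auth.shape)
```

These feature vectors would then be fed to an RBF-kernel SVM, as in the paper, to separate forged from authentic images.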
Passive detection of splicing and copy-move attacks in image forgery
- Islam, Mohammad, Kamruzzaman, Joarder, Karmakar, Gour, Murshed, Manzur, Kahandawa, Gayan
- Authors: Islam, Mohammad , Kamruzzaman, Joarder , Karmakar, Gour , Murshed, Manzur , Kahandawa, Gayan
- Date: 2018
- Type: Text , Conference proceedings , Conference paper
- Relation: 25th International Conference on Neural Information Processing, ICONIP 2018; Siem Reap, Cambodia; 13th-16th December 2018; published in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol. 11304 LNCS, p. 555-567
- Full Text:
- Reviewed:
- Description: Internet of Things (IoT) image sensors for surveillance and monitoring, digital cameras, smart phones and social media generate a huge volume of digital images every day. Image splicing and copy-move attacks are the most common types of image forgery and can be performed very easily using modern photo editing software. Recently, digital forensics has drawn much attention to detecting such tampering of images. In this paper, we introduce a novel feature extraction technique, namely Sum of Relevant Inter-Cell Values (SRIV), with which we propose a passive (blind) image forgery detection method based on the Discrete Cosine Transform (DCT) and Local Binary Pattern (LBP). First, the input image is divided into non-overlapping blocks and 2D block DCT is applied to capture the changes of a tampered image in the frequency domain. Then the LBP operator is applied to enhance the local changes among neighbouring DCT coefficients, magnifying the changes in high-frequency components resulting from splicing and copy-move attacks. The resulting LBP image is again divided into non-overlapping blocks. Finally, SRIV is applied to the LBP image blocks to extract features, which are then fed into a Support Vector Machine (SVM) classifier to separate forged images from authentic ones. Extensive experiments on four well-known benchmark datasets of tampered images reveal the superiority of our method over recent state-of-the-art methods.
Decentralized content sharing among tourists in visiting hotspots
- Kaisar, Shahriar, Kamruzzaman, Joarder, Karmakar, Gour, Gondal, Iqbal
- Authors: Kaisar, Shahriar , Kamruzzaman, Joarder , Karmakar, Gour , Gondal, Iqbal
- Date: 2017
- Type: Text , Journal article
- Relation: Journal of Network and Computer Applications Vol. 79 (2017), p. 25-40
- Full Text:
- Reviewed:
- Description: Content sharing with smart mobile devices using a decentralized approach enables users to share content without any fixed infrastructure, and thereby offers a free-of-cost platform that does not add to Internet traffic which, in its current state, is approaching a bottleneck in its capacity. Most existing decentralized approaches in the literature rely on spatio-temporal regularity in human movement patterns and pre-existing social relationships for the sharing scheme to work. However, such predictable movement patterns and social relationship information are not available in places like tourist spots, where people visit only for a short period of time and usually meet strangers. No existing work in the literature deals with content sharing in such environments. In this work, we propose a content sharing approach for such environments. The group formation mechanism is based on users' interest scores and stay probabilities in the individual region of interest (ROI) as well as on the availability and delivery probabilities of contents in the group. The administrator of each group is selected by taking into account its probability of stay in the ROI, its connectivity with other nodes, its trustworthiness, and its computing and energy resources to serve the group. We have also adopted an incentive mechanism that rewards nodes for sharing and forwarding contents. We have used the network simulator NS3 to perform extensive simulation of a popular tourist spot in Australia which hosts a number of activities. The proposed approach shows promising results in sharing contents among tourists, measured in terms of content hit, delivery success rate and latency. © 2016
An efficient data extraction framework for mining wireless sensor networks
- Rashid, Md. Mamunur, Gondal, Iqbal, Kamruzzaman, Joarder
- Authors: Rashid, Md. Mamunur , Gondal, Iqbal , Kamruzzaman, Joarder
- Date: 2016
- Type: Text , Conference paper
- Relation: 23rd International Conference, ICONIP 2016; Kyoto, Japan; 16th-21st October 2016; published in Neural Information Processing, Part III (Lecture Notes in Computer Science series) Vol. 9949, p. 491-498
- Full Text:
- Reviewed:
- Description: Behavioral patterns for sensors have received a great deal of attention recently due to their usefulness in capturing the temporal relations between sensors in wireless sensor networks. To discover these patterns, we need to collect the behavioral data that represents the sensors' activities over time from the sensor database attached to a well-equipped central node called the sink for further analysis. However, given the limited resources of sensor nodes, an effective data collection method is required to collect the behavioral data efficiently. In this paper, we introduce a new framework for behavioral patterns called associated-correlated sensor patterns, and also propose a new MapReduce-based paradigm for extracting data from the wireless sensor network in a distributed way. An extensive performance study shows that the proposed method is capable of reducing the data size by almost 50% compared to the centralized model.
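The MapReduce-style extraction described above can be sketched as a map step that counts candidate patterns locally at each node and a reduce step that merges the partial counts at the sink; the epoch representation and the counting scheme here are illustrative assumptions, not the paper's actual framework.

```python
from collections import Counter
from functools import reduce

def map_local(epochs):
    """Map step: a local node counts the sensorsets observed in its own epochs."""
    counts = Counter()
    for epoch in epochs:
        counts[frozenset(epoch)] += 1
    return counts

def reduce_global(partials):
    """Reduce step: merge the locally generated candidate counts at the sink."""
    return reduce(lambda a, b: a + b, partials, Counter())

# Each node ships only its aggregated counts, not the raw epoch stream,
# which is how the distributed scheme shrinks the data moved to the sink.
node1 = map_local([{"s1", "s2"}, {"s1", "s2"}, {"s3"}])
node2 = map_local([{"s1", "s2"}, {"s3"}])
merged = reduce_global([node1, node2])
print(merged[frozenset({"s1", "s2"})])  # 3
```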
Carry me if you can: A utility-based forwarding scheme for content sharing in tourist destinations
- Kaisar, Shahriar, Kamruzzaman, Joarder, Karmakar, Gour, Gondal, Iqbal
- Authors: Kaisar, Shahriar , Kamruzzaman, Joarder , Karmakar, Gour , Gondal, Iqbal
- Date: 2016
- Type: Text , Conference proceedings
- Relation: 22nd Asia-Pacific Conference on Communications, APCC 2016; Yogyakarta, Indonesia; 25th-27th August 2016 p. 261-267
- Full Text:
- Reviewed:
- Description: Message forwarding is an integral part of the decentralized content sharing process, as content delivery success highly depends on it. The existing literature employs the spatio-temporal regularity of human movement patterns and pre-existing social relationships to make message forwarding decisions. However, such approaches are ineffectual in environments where that information is unavailable, such as a tourist spot or camping site. In this study, we explore message forwarding techniques in such environments considering only information that is readily available and can be gathered on the fly. We propose a utility-based forwarding scheme to select the appropriate forwarder node based on co-location stay time, connectivity and available resources. A higher co-location stay time reflects that the forwarder and the destination node are likely to have more opportunistic contacts, while the connectivity and available resources ensure that the selected forwarder has sufficient neighbours and resources to carry the message forward. Simulation results suggest that the proposed approach attains high hit and success rates and low latency for successful content delivery, comparable to approaches proposed for workplace-type scenarios with regular movement patterns and pre-existing relationships. © 2016 IEEE.
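The forwarder selection idea above can be illustrated with a simple weighted utility; the weights and the linear form are assumptions for illustration, not the paper's calibrated scheme.

```python
def forwarder_utility(colocation, connectivity, resources, weights=(0.5, 0.3, 0.2)):
    """Score a candidate forwarder from its co-location stay time with the
    destination, its connectivity (neighbour count) and its available resources,
    all scaled to [0, 1]."""
    w_c, w_n, w_r = weights
    return w_c * colocation + w_n * connectivity + w_r * resources

# The node with a long co-location stay time wins despite having fewer resources,
# reflecting that co-location drives opportunistic contact with the destination.
candidates = {"node_a": forwarder_utility(0.9, 0.4, 0.6),
              "node_b": forwarder_utility(0.3, 0.9, 0.9)}
best = max(candidates, key=candidates.get)
print(best)  # node_a
```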
A comprehensive spectrum trading scheme based on market competition, reputation and buyer specific requirements
- Hassan, Md Rakib, Karmakar, Gour, Kamruzzaman, Joarder, Srinivasan, Bala
- Authors: Hassan, Md Rakib , Karmakar, Gour , Kamruzzaman, Joarder , Srinivasan, Bala
- Date: 2015
- Type: Text , Journal article
- Relation: Computer Networks Vol. 84 (2015), p. 17-31
- Full Text:
- Reviewed:
- Description: In the exclusive-use model of spectrum trading, cognitive radio devices or secondary users can buy spectrum resources from licensed users or primary users for a short or long period of time. Considering such spectrum access, a trading model is introduced where a buyer can select a set of candidate sellers based on their reputation and their offers in fulfilling its requirements, namely offered signal quality, contract duration, coverage and bandwidth. Similarly, a seller can assess a buyer as a potential trading partner considering the buyer's reliability, which the seller can derive from the buyer's reputation and financial profile. In our scheme, seller reputation or buyer reliability can either be obtained from a reputation brokerage service, if one exists, or calculated using our model. Since, in a competitive market, the price of a seller depends on that of other sellers, game theory is used to model the competition among multiple sellers. An optimization technique is used by a buyer to select the best seller(s) and optimize its purchase to maximize its utility. This may result in buying a certain amount of bandwidth from each of multiple sellers, depending on price, requirements and budget constraints. Stability of the model is analyzed, and performance evaluation shows that it benefits sellers and buyers in terms of profit and throughput, respectively. © 2015 Elsevier B.V. All rights reserved.
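The buyer-side purchase decision can be approximated with a greedy sketch: rank sellers by utility per unit price, where the utility score stands in for reputation and offer quality, and buy until demand or budget runs out. The paper uses game-theoretic pricing and a proper optimisation; this greedy version and its numbers are illustrative assumptions.

```python
def select_sellers(sellers, demand, budget):
    """Greedy purchase plan: buy bandwidth from sellers in order of utility per
    unit price until the demand is met or the budget is exhausted."""
    plan = {}
    remaining, funds = demand, budget
    for s in sorted(sellers, key=lambda s: s["utility"] / s["price"], reverse=True):
        if remaining <= 0 or funds <= 0:
            break
        amount = min(remaining, s["bandwidth"], funds / s["price"])
        if amount > 0:
            plan[s["name"]] = amount
            remaining -= amount
            funds -= amount * s["price"]
    return plan

# Hypothetical sellers: "utility" folds in reputation, signal quality, coverage.
sellers = [
    {"name": "s1", "price": 2.0, "bandwidth": 10, "utility": 0.9},
    {"name": "s2", "price": 1.0, "bandwidth": 5, "utility": 0.4},
    {"name": "s3", "price": 3.0, "bandwidth": 20, "utility": 0.6},
]
plan = select_sellers(sellers, demand=12, budget=25)
print(plan)  # bandwidth is bought from multiple sellers, as the model allows
```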
A technique for parallel share-frequent sensor pattern mining from wireless sensor networks
- Rashid, Md. Mamunur, Gondal, Iqbal, Kamruzzaman, Joarder
- Authors: Rashid, Md. Mamunur , Gondal, Iqbal , Kamruzzaman, Joarder
- Date: 2014
- Type: Text , Conference paper
- Relation: 14th Annual International Conference on Computational Science, ICCS 2014; Cairns, Australia; 10th-12th June 2014; published in Procedia Computer Science p. 124-133
- Full Text:
- Reviewed:
- Description: WSNs generate huge amounts of data in the form of streams, and mining useful knowledge from these streams is a challenging task. Existing works generate sensor association rules using the occurrence frequency of patterns, with binary frequency (either absent or present) or support of a pattern as the criterion. However, the binary frequency or support of a pattern may not be a sufficient indicator for finding meaningful patterns in WSN data, because it only reflects the number of epochs in the sensor data which contain that pattern. The share measure of sensorsets can discover useful knowledge about the numerical values associated with sensors in a sensor database. Therefore, in this paper, we propose a new type of behavioral pattern called share-frequent sensor patterns, which considers the non-binary frequency values of sensors in epochs. To discover share-frequent sensor patterns from a sensor dataset, we propose a novel parallel technique. In this technique, we develop a novel tree structure, called the parallel share-frequent sensor pattern tree (PShrFSP-tree), that is constructed at each local node independently by capturing the database contents to generate the candidate patterns using a pattern growth technique with a single scan, and then merges the locally generated candidate patterns at the final stage to generate global share-frequent sensor patterns. Comprehensive experimental results show that our proposed model is very efficient for mining share-frequent patterns from WSN data in terms of time and scalability.
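A common definition of the share measure underlying the patterns described above: the summed sensor values of a sensorset over the epochs that contain it, relative to the total value of the database. The toy epochs and the exact normalisation are illustrative assumptions; the paper's thresholds are not reproduced here.

```python
def share(pattern, epochs):
    """Share of a sensorset: its summed (non-binary) sensor values in the epochs
    that contain the whole set, divided by the total value of the database."""
    total = sum(sum(epoch.values()) for epoch in epochs)
    contrib = sum(sum(epoch[s] for s in pattern)
                  for epoch in epochs if set(pattern) <= set(epoch))
    return contrib / total

# Each epoch maps a triggered sensor to a non-binary value (e.g. a trigger count),
# so the measure weighs how strongly sensors fire, not just whether they appear.
epochs = [
    {"s1": 4, "s2": 1},
    {"s1": 2, "s2": 3, "s3": 1},
    {"s3": 5},
]
s = share(("s1", "s2"), epochs)
print(s)  # 0.625
```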
An adaptive approach to opportunistic data forwarding in underwater acoustic sensor networks
- Nowsheen, Nusrat, Karmakar, Gour, Kamruzzaman, Joarder
- Authors: Nowsheen, Nusrat , Karmakar, Gour , Kamruzzaman, Joarder
- Date: 2014
- Type: Text , Conference proceedings
- Full Text:
- Description: Reliable data transfer for underwater acoustic sensor networks (UASNs) is a major research challenge in applications such as pollution monitoring, oceanic data collection and surveillance, due to the long propagation delay and high error rate of the acoustic channel. To address this issue, an opportunistic data forwarding protocol was proposed which achieves a high packet delivery success ratio with less routing overhead and energy consumption by selecting the next-hop forwarder among a set of candidates based on its link reliability and data transfer reachability. However, the protocol relies on a fixed data hold time, i.e., each node holds data packets for a fixed amount of time before a forwarder discovery process is initiated. Depending on the value of the fixed hold time and the deployment scenario, this may incur large end-to-end delay. Moreover, the lack of consideration of network conditions in the hold time limits its performance. In this paper, we propose an adaptive technique to improve its performance. The adaptive approach calculates the data hold time at each node dynamically, considering a number of node and network metrics including current buffer occupancy, delay experienced by stored data packets, arrival and service rates, and neighbors' data transmissions and reachability. Simulation results show that, compared with the fixed hold time approach, our adaptive technique reduces end-to-end delay significantly, achieves considerably higher data delivery and consumes less energy per successful packet delivery.
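The adaptive hold-time idea can be sketched as a heuristic that shrinks the hold time as the buffer fills, queued packets age, or arrivals outpace service; the weights and the functional form below are illustrative assumptions, not the protocol's actual formula.

```python
def hold_time(base, buffer_occupancy, mean_queue_delay, arrival_rate, service_rate):
    """Shrink the base hold time under 'node and network' pressure: a fuller
    buffer, older queued packets, or a higher arrival-to-service ratio all
    trigger forwarder discovery sooner. Weights are illustrative assumptions."""
    load = min(arrival_rate / service_rate, 1.0) if service_rate > 0 else 1.0
    pressure = (0.4 * buffer_occupancy
                + 0.3 * min(mean_queue_delay / base, 1.0)
                + 0.3 * load)
    return base * (1.0 - pressure)

# A congested node initiates forwarder discovery much sooner than an idle one,
# which is how the adaptive scheme cuts end-to-end delay versus a fixed hold time.
congested = hold_time(base=10.0, buffer_occupancy=0.9, mean_queue_delay=8.0,
                      arrival_rate=5, service_rate=4)
idle = hold_time(base=10.0, buffer_occupancy=0.1, mean_queue_delay=0.5,
                 arrival_rate=1, service_rate=4)
print(congested, idle)
```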
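The abstract above names the metrics that drive the adaptive hold time but not how they are combined. The sketch below shows one plausible combination purely for illustration: the weights, the pressure formula and the 10% floor are all assumptions, not the paper's method.

```python
def adaptive_hold_time(base_hold, buffer_occupancy, mean_queue_delay,
                       arrival_rate, service_rate):
    """Shrink the data hold time as congestion pressure grows.

    buffer_occupancy: fraction of the buffer in use, in [0, 1].
    mean_queue_delay: average delay (s) of packets already queued.
    arrival_rate / service_rate: packets per second at this node.
    The weights (0.5 / 0.3 / 0.2) and the linear combination are
    illustrative choices, not taken from the paper.
    """
    utilisation = min(arrival_rate / service_rate, 1.0) if service_rate else 1.0
    pressure = (0.5 * buffer_occupancy
                + 0.3 * utilisation
                + 0.2 * min(mean_queue_delay / base_hold, 1.0))
    # Full pressure collapses the hold time toward a 10% floor so a
    # congested node initiates forwarder discovery sooner.
    return base_hold * max(1.0 - pressure, 0.1)

# A lightly loaded node keeps most of the 10 s base hold time.
print(adaptive_hold_time(10.0, 0.1, 0.5, 2.0, 8.0))  # 8.65
```

The intent matches the abstract's motivation: under light load the node waits, preserving opportunistic forwarding gains; under heavy load it releases packets early, cutting end-to-end delay.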
A novel vertical handover scheme for diminution in social network traffic
- Haider, Ammar, Gondal, Iqbal, Kamruzzaman, Joarder
- Authors: Haider, Ammar , Gondal, Iqbal , Kamruzzaman, Joarder
- Date: 2012
- Type: Text , Conference paper
- Full Text:
- Reviewed:
Inchoate fault detection framework: adaptive selection of wavelet nodes and cumulant orders
- Yaqub, Muhammad, Gondal, Iqbal, Kamruzzaman, Joarder
- Authors: Yaqub, Muhammad , Gondal, Iqbal , Kamruzzaman, Joarder
- Date: 2012
- Type: Text , Journal article
- Relation: IEEE Transactions on Instrumentation and Measurement Vol. 61, no. 3 (2012), p. 685-695
- Full Text:
- Reviewed:
- Description: Inchoate fault detection for machine health monitoring (MHM) demands a high level of fault classification accuracy under the poor signal-to-noise ratio (SNR) conditions that persist in most industrial environments. Vibration signals are extensively used in signature matching for abnormality detection and diagnosis. To guarantee improved performance under poor SNR, feature extraction based on statistical parameters that are immune to Gaussian noise becomes essential. This paper proposes a novel framework for adaptive feature extraction based on higher-order cumulants (HOCs) and the wavelet transform (WT), called AFHCW, for MHM. Features based on HOCs tend to mitigate the impact of Gaussian noise, while the WT provides better time- and frequency-domain analysis for nonstationary signals such as vibration, whose spectral content varies over time. In AFHCW, the stationary WT is used to ensure linear processing of the vibration data prior to feature extraction, which helps mitigate the impact of poor SNR. A k-nearest neighbor classifier is used to categorize the type of fault. Simulation studies show that the proposed scheme outperforms existing techniques in terms of classification accuracy under poor SNR.
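Two pieces of the pipeline described in this abstract, cumulant features and a k-nearest neighbor vote, can be sketched in a few lines. This is not the paper's AFHCW framework (the stationary-wavelet stage and the adaptive node/order selection are omitted); it only illustrates why HOC features resist Gaussian noise and how KNN labels a segment.

```python
import math

def hoc_features(signal):
    """2nd- to 4th-order cumulants of a signal segment.
    For Gaussian noise the 3rd and 4th cumulants vanish, which is why
    HOC-based features degrade gracefully under poor SNR."""
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]
    m2 = sum(v ** 2 for v in x) / n
    m3 = sum(v ** 3 for v in x) / n
    m4 = sum(v ** 4 for v in x) / n
    return (m2, m3, m4 - 3 * m2 ** 2)  # c2, c3, c4

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest labeled feature vectors
    (Euclidean distance); train is a list of (features, label)."""
    dists = sorted((math.dist(feat, query), label) for feat, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Toy use: distinguish a smooth signature from an impulsive one.
healthy = hoc_features([math.sin(0.3 * t) for t in range(200)])
faulty = hoc_features([math.sin(0.3 * t) ** 3 for t in range(200)])
train = [(healthy, "healthy"), (faulty, "faulty")]
print(knn_predict(train, faulty, k=1))  # faulty
```

The class names and the two-segment training set are made up for the demonstration; a real MHM setup would extract these features from many wavelet subbands per vibration record.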