A comprehensive spectrum trading scheme based on market competition, reputation and buyer specific requirements
- Authors: Hassan, Md Rakib, Karmakar, Gour, Kamruzzaman, Joarder, Srinivasan, Bala
- Date: 2015
- Type: Text , Journal article
- Relation: Computer Networks Vol. 84, no. (2015), p. 17-31
- Full Text:
- Reviewed:
- Description: In the exclusive-use model of spectrum trading, cognitive radio devices, or secondary users, can buy spectrum resources from licensed users, or primary users, for a short or long period of time. Considering such spectrum access, a trading model is introduced where a buyer can select a set of candidate sellers based on their reputation and their offers in fulfilling its requirements, namely, offered signal quality, contract duration, coverage and bandwidth. Similarly, a seller can assess a buyer as a potential trading partner considering the buyer's reliability, which the seller can derive from the buyer's reputation and financial profile. In our scheme, seller reputation or buyer reliability can either be obtained from a reputation brokerage service, if one exists, or calculated using our model. Since, in a competitive market, a seller's price depends on that of other sellers, game theory is used to model the competition among multiple sellers. An optimization technique is used by a buyer to select the best seller(s) and optimize the purchase to maximize its utility. This may result in buying a certain amount of bandwidth from each of multiple sellers, depending on price, while meeting requirements and budget constraints. Stability of the model is analyzed, and performance evaluation shows that it benefits sellers and buyers in terms of profit and throughput, respectively. © 2015 Elsevier B.V. All rights reserved.
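As a rough illustration of the buyer-side purchase step this abstract describes, the sketch below greedily buys bandwidth from the cheapest reputation-weighted offers until demand or budget is exhausted. This is a simplified stand-in for the paper's game-theoretic pricing and utility optimisation; the field names and the reputation-weighted ranking are illustrative assumptions, not the paper's model.

```python
from dataclasses import dataclass

@dataclass
class Seller:
    name: str
    price_per_mhz: float   # asking price per MHz
    bandwidth_mhz: float   # bandwidth on offer
    reputation: float      # in (0, 1]; higher means more trustworthy

def purchase_plan(sellers, demand_mhz, budget):
    """Greedy sketch: rank sellers by reputation-weighted price and buy
    until either the bandwidth demand or the budget is exhausted.
    May split the purchase across multiple sellers."""
    plan = {}
    ranked = sorted(sellers, key=lambda s: s.price_per_mhz / s.reputation)
    for s in ranked:
        if demand_mhz <= 0 or budget <= 0:
            break
        affordable = budget / s.price_per_mhz
        amount = min(s.bandwidth_mhz, demand_mhz, affordable)
        if amount > 0:
            plan[s.name] = amount
            demand_mhz -= amount
            budget -= amount * s.price_per_mhz
    return plan
```

A buyer needing 12 MHz on a budget of 100 would, under this toy ranking, buy from the best-value seller first and top up from the next one.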
A dynamic content distribution scheme for decentralized sharing in tourist hotspots
- Authors: Kaisar, Shahriar, Kamruzzaman, Joarder, Karmakar, Gour
- Date: 2019
- Type: Text , Journal article
- Relation: Journal of Network and Computer Applications Vol. 129, no. (2019), p. 9-24
- Full Text:
- Reviewed:
- Description: Decentralized content sharing (DCS) is emerging as a suitable platform for smart mobile device users to generate and share contents seamlessly without the requirement of a centralized server. This feature is particularly important for places that lack Internet coverage, such as tourist attractions, where users can form an ad-hoc network and communicate opportunistically to share contents. Existing DCS approaches, when applied to such places, suffer from a low delivery success rate and high latency. Although a handful of recent approaches have specifically targeted improving the content delivery service in tourist-spot-like scenarios, these and other DCS approaches do not focus on contents' demand and supply, which vary considerably due to visitor in-and-out flow and the occurrence of influencing events. This is further compounded by the lack of any content distribution (replication) scheme. The content delivery service will be improved if contents can be proactively distributed in strategic positions based on dynamic demand and supply and medium access contention. In this paper, we propose a dynamic content distribution scheme (DCDS) considering these practical issues for sharing contents in tourist attractions. Simulation results show that the proposed approach significantly improves (7
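The demand-and-supply-driven replication idea can be sketched as a simple deficit-based placement: put each new replica where unmet demand is currently largest. This is one plausible illustrative policy, not the DCDS algorithm itself; region names and counts below are hypothetical.

```python
def place_replicas(demand, supply, k):
    """Sketch: rank regions by unmet demand (requests minus replicas already
    serving the region) and place k new replicas one at a time where the
    deficit is largest, so hotspots are covered first."""
    placement = {r: 0 for r in demand}
    for _ in range(k):
        # Deficit = demand not yet covered by existing or newly placed replicas.
        deficit = {r: demand[r] - (supply.get(r, 0) + placement[r]) for r in demand}
        target = max(deficit, key=deficit.get)
        placement[target] += 1
    return {r: n for r, n in placement.items() if n > 0}
```

With demand concentrated at an attraction's gate, most of a small replica budget ends up there before secondary spots receive a copy.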
A novel dynamic software-defined networking approach to neutralize traffic burst
- Authors: Sharma, Aakanksha, Balasubramanian, Venki, Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: Computers Vol. 12, no. 7 (2023), p.
- Full Text:
- Reviewed:
- Description: Software-defined networking (SDN) has a holistic view of the network. It is highly suitable for handling dynamic loads in the traditional network with minimal updates to the network infrastructure. However, the control plane of the standard SDN architecture, whether designed around a single controller or multiple distributed controllers, faces severe bottleneck issues. Our initial research created a reference model for the traditional network using the standard SDN (referred to as SDN hereafter) in a network simulator called NetSim. Based on the network traffic, the reference models consisted of light, modest and heavy networks depending on the number of connected IoT devices. Furthermore, a priority scheduling and congestion control algorithm was proposed in the standard SDN, named extended SDN (eSDN), which minimises congestion and performs better than the standard SDN. However, the enhancement was suitable only for small-scale networks because, in a large-scale network, the eSDN does not support dynamic SDN controller mapping. Often, the same SDN controller gets overloaded, leading to a single point of failure. Our literature review shows that most proposed solutions are based on static SDN controller deployment without considering flow fluctuations and traffic bursts, which leads to a lack of real-time load balancing among the SDN controllers and eventually increases network latency. Therefore, to maintain the Quality of Service (QoS) in the network, it becomes imperative to neutralise on-the-fly traffic bursts that static controller mapping cannot handle. Thus, our novel dynamic controller mapping algorithm with multiple-controller placement, dynamic SDN (dSDN), is critical to solving the identified issues. In dSDN, the SDN controllers are mapped dynamically with the load fluctuation. If any SDN controller reaches its maximum threshold, the rest of the traffic will be diverted to another controller, significantly reducing delay and enhancing overall performance. Our technique considers the latency and load fluctuation in the network and manages situations where static mapping is ineffective in dealing with dynamic flow variation. © 2023 by the authors.
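The threshold-triggered diversion described for dSDN can be sketched as follows. The data structures, the least-loaded-controller choice and the capacity threshold are illustrative assumptions made for this sketch, not the paper's actual algorithm.

```python
def remap(switch_loads, mapping, capacity):
    """Sketch of threshold-based re-mapping: when a controller's aggregate
    load exceeds its capacity threshold, migrate its heaviest switches to
    the currently least-loaded controller until it is back under threshold."""
    def load(c):
        return sum(l for sw, l in switch_loads.items() if mapping[sw] == c)

    controllers = set(mapping.values())
    for c in list(controllers):
        # Move heaviest switches first, so fewer migrations are needed.
        for sw in sorted((s for s in mapping if mapping[s] == c),
                         key=lambda s: -switch_loads[s]):
            if load(c) <= capacity:
                break
            spare = min(controllers - {c}, key=load)
            if load(spare) + switch_loads[sw] <= capacity:
                mapping[sw] = spare
    return mapping
```

In the toy scenario below, controller c1 starts at load 90 against a threshold of 70, so its heaviest switch is diverted to c2.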
A novel ensemble of hybrid intrusion detection system for detecting internet of things attacks
- Authors: Khraisat, Ansam, Gondal, Iqbal, Vamplew, Peter, Kamruzzaman, Joarder, Alazab, Ammar
- Date: 2019
- Type: Text , Journal article
- Relation: Electronics (Switzerland) Vol. 8, no. 11 (2019), p.
- Full Text:
- Reviewed:
- Description: The Internet of Things (IoT) has been rapidly evolving, making a greater impact on everything from everyday life to large industrial systems. Unfortunately, this has attracted the attention of cybercriminals, who have made IoT a target of malicious activities, opening the door to possible attacks on the end nodes. Due to the large number and diverse types of IoT devices, it is a challenging task to protect the IoT infrastructure using a traditional intrusion detection system. To protect IoT devices, a novel ensemble Hybrid Intrusion Detection System (HIDS) is proposed by combining a C5 classifier and a One-Class Support Vector Machine classifier. HIDS combines the advantages of a Signature Intrusion Detection System (SIDS) and an Anomaly-based Intrusion Detection System (AIDS). The aim of this framework is to detect both well-known intrusions and zero-day attacks with high detection accuracy and low false-alarm rates. The proposed HIDS is evaluated using the Bot-IoT dataset, which includes legitimate IoT network traffic and several types of attacks. Experiments show that the proposed hybrid IDS provides a higher detection rate and a lower false-positive rate compared to the SIDS and AIDS techniques. © 2019 by the authors. Licensee MDPI, Basel, Switzerland.
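The two-stage HIDS idea (a signature stage for known attacks, then an anomaly stage for zero-day candidates) can be sketched with deliberately simple stand-ins: a signature lookup in place of the C5 classifier and a z-score test in place of the One-Class SVM. Everything here, including the threshold, is illustrative, not the paper's implementation.

```python
import statistics

class HybridIDS:
    """Sketch of a two-stage hybrid IDS: stage 1 flags traffic matching a
    known attack signature; stage 2 flags traffic whose feature deviates
    too far from a profile learned on normal traffic only."""

    def __init__(self, signatures, normal_samples, k=3.0):
        self.signatures = set(signatures)
        self.mean = statistics.mean(normal_samples)
        self.stdev = statistics.stdev(normal_samples)
        self.k = k  # anomaly threshold, in standard deviations

    def classify(self, signature, feature):
        if signature in self.signatures:        # stage 1: known intrusion
            return "intrusion"
        z = abs(feature - self.mean) / self.stdev
        if z > self.k:                          # stage 2: zero-day candidate
            return "anomaly"
        return "benign"
```

The design point is that the anomaly stage needs only normal traffic to train, so previously unseen attacks can still be flagged.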
A novel OFDM format and a machine learning based dimming control for LiFi
- Authors: Nowrin, Itisha, Mondal, M., Islam, Rashed, Kamruzzaman, Joarder
- Date: 2021
- Type: Text , Journal article
- Relation: Electronics (Switzerland) Vol. 10, no. 17 (2021), p.
- Full Text:
- Reviewed:
- Description: This paper proposes a new hybrid orthogonal frequency division multiplexing (OFDM) form, termed DC‐biased pulse amplitude modulated optical OFDM (DPO‐OFDM), combining the ideas of the existing DC‐biased optical OFDM (DCO‐OFDM) and pulse amplitude modulated discrete multitone (PAM‐DMT). The analysis indicates that the required DC‐bias for DPO‐OFDM-based light fidelity (LiFi) depends on the dimming level and the components of the DPO‐OFDM. The bit error rate (BER) performance and dimming flexibility of DPO‐OFDM and existing OFDM schemes are evaluated using MATLAB tools. The results show that the proposed DPO‐OFDM is power efficient and has a wide dimming range. Furthermore, a switching algorithm is introduced for LiFi, where the individual components of the hybrid OFDM are switched according to a target dimming level. Next, machine learning algorithms are used for the first time to find the appropriate proportions of the hybrid OFDM components. It is shown that polynomial regression of degree 4 can reliably predict the constellation size of the DCO‐OFDM component of DPO‐OFDM for a given constellation size of PAM‐DMT. With the component switching and the machine learning algorithms, DPO‐OFDM‐based LiFi is power efficient over a wide dimming range. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
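The dimming-driven component switching can be sketched as below. The thresholds and the linear blending of the two components are purely hypothetical, chosen only to show the shape of such a switching rule; the paper's actual proportions come from its analysis and the fitted regression model.

```python
def select_components(dimming, low=0.3, high=0.7):
    """Hypothetical sketch of dimming-based component switching: at low
    brightness the low-DC-bias PAM-DMT part dominates, at high brightness
    the DCO-OFDM part does, and in between both are active in proportion
    to the dimming target. Thresholds `low`/`high` are assumptions."""
    if dimming <= low:
        return {"PAM-DMT": 1.0, "DCO-OFDM": 0.0}
    if dimming >= high:
        return {"PAM-DMT": 0.0, "DCO-OFDM": 1.0}
    share = (dimming - low) / (high - low)
    return {"PAM-DMT": round(1 - share, 3), "DCO-OFDM": round(share, 3)}
```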
A novel vertical handover scheme for diminution in social network traffic
- Authors: Haider, Ammar, Gondal, Iqbal, Kamruzzaman, Joarder
- Date: 2012
- Type: Text , Conference paper
- Full Text:
- Reviewed:
A robust forgery detection method for copy-move and splicing attacks in images
- Authors: Islam, Mohammad, Karmakar, Gour, Kamruzzaman, Joarder, Murshed, Manzur
- Date: 2020
- Type: Text , Journal article
- Relation: Electronics Vol. 9, no. 9 (2020), p. 1-22
- Full Text:
- Reviewed:
- Description: Internet of Things (IoT) image sensors, social media, and smartphones generate huge volumes of digital images every day. The easy availability and usability of photo editing tools have made forgery attacks, primarily splicing and copy-move attacks, effortless, causing cybercrimes to be on the rise. While several models have been proposed in the literature for detecting these attacks, the robustness of those models has not been investigated when (i) a low number of tampered images are available for model building or (ii) images from IoT sensors are distorted due to image rotation or scaling caused by unwanted or unexpected changes in the sensors' physical set-up. Moreover, further improvement in detection accuracy is needed for real-world security management systems. To address these limitations, in this paper, an innovative image forgery detection method is proposed based on Discrete Cosine Transformation (DCT), Local Binary Pattern (LBP) and a new feature extraction method using the mean operator. First, images are divided into non-overlapping fixed-size blocks and 2D block DCT is applied to capture changes due to image forgery. Then LBP is applied to the magnitude of the DCT array to enhance forgery artifacts. Finally, the mean value of a particular cell across all LBP blocks is computed, which yields a fixed number of features and presents a more computationally efficient method. Using a Support Vector Machine (SVM), the proposed method has been extensively tested on four well-known publicly available grayscale and color image forgery datasets, and additionally on an IoT-based image forgery dataset that we built. Experimental results reveal the superiority of our proposed method over recent state-of-the-art methods in terms of widely used performance metrics and computational time, and demonstrate robustness against low availability of forged training samples.
- Description: This research was funded by Research Priority Area (RPA) scholarship of Federation University Australia.
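The feature pipeline this abstract outlines (block-wise 2D DCT, LBP on the DCT magnitudes, then cell-wise means across blocks to get a fixed-length vector) can be sketched in miniature as follows. The block size and LBP boundary handling are simplified assumptions, and the SVM classification stage is omitted.

```python
import math

def dct2(block):
    """Naive 2D DCT-II of a square block (fine for small block sizes)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            out[u][v] = sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                for x in range(n) for y in range(n))
    return out

def lbp(mat):
    """3x3 local binary pattern over the interior cells of a matrix."""
    n = len(mat)
    codes = [[0] * n for _ in range(n)]
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            centre = mat[i][j]
            codes[i][j] = sum(1 << k for k, (di, dj) in enumerate(offs)
                              if mat[i + di][j + dj] >= centre)
    return codes

def features(image, b=4):
    """Mean of each LBP cell position across all b-by-b blocks: a fixed-length
    feature vector regardless of image size, as in the described method."""
    h, w = len(image), len(image[0])
    acc = [[0.0] * b for _ in range(b)]
    count = 0
    for r in range(0, h - b + 1, b):
        for c in range(0, w - b + 1, b):
            block = [row[c:c + b] for row in image[r:r + b]]
            mag = [[abs(v) for v in row] for row in dct2(block)]   # DCT magnitude
            code = lbp(mag)                                        # enhance artifacts
            for i in range(b):
                for j in range(b):
                    acc[i][j] += code[i][j]
            count += 1
    return [acc[i][j] / count for i in range(b) for j in range(b)]
```

Note how the output length depends only on the block size, which is what keeps the downstream classifier's input fixed.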
A Survey on Behavioral Pattern Mining from Sensor Data in Internet of Things
- Authors: Rashid, Md Mamunur, Kamruzzaman, Joarder, Hassan, Mohammad, Shahriar Shafin, Sakib, Bhuiyan, Md Zakirul
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 33318-33341
- Full Text:
- Reviewed:
- Description: The deployment of large-scale wireless sensor networks (WSNs) for Internet of Things (IoT) applications is increasing day by day, especially with the emergence of smart city services. The sensor data streams generated from these applications are largely dynamic, heterogeneous, and often geographically distributed over large areas. For high-value use in business, industry and services, these data streams must be mined to extract insightful knowledge, such as for monitoring (e.g., discovering certain behaviors over a deployed area) or network diagnostics (e.g., predicting faulty sensor nodes). However, due to the inherent constraints of sensor networks and application requirements, traditional data mining techniques cannot be directly used to mine IoT data streams efficiently and accurately in real time. In the last decade, a number of works have been reported in the literature proposing behavioral pattern mining algorithms for sensor networks. This paper presents the technical challenges that need to be considered for mining sensor data. It then provides a thorough review of the mining techniques proposed in the recent literature to mine behavioral patterns from sensor data in IoT; their characteristics and differences are highlighted and compared. We also propose a behavioral pattern mining framework for IoT and discuss possible future research directions in this area. © 2013 IEEE.
A survey on context awareness in big data analytics for business applications
- Authors: Dinh, Loan, Karmakar, Gour, Kamruzzaman, Joarder
- Date: 2020
- Type: Text , Journal article
- Relation: Knowledge and Information Systems Vol. 62, no. 9 (2020), p. 3387-3415
- Full Text:
- Reviewed:
- Description: The concept of context awareness has been in existence since the 1990s. Though initially applied exclusively in computer science, over time it has increasingly been adopted by many different application domains such as business, health and the military. Contexts change continuously for objective reasons, such as economic situations, political matters and social issues. The adoption of big data analytics by businesses is facilitating such change at an even faster rate and in more complicated ways. The potential benefits of embedding contextual information into an application are already evidenced by the improved outcomes of existing context-aware methods in those applications. Since big data is growing very rapidly, context awareness in big data analytics has become more important and timely because of its proven efficiency in big data understanding and preparation, contributing to extracting more, and more accurate, value from big data. Many surveys have been published on context-based methods such as context modelling and reasoning, workflow adaptations, computational intelligence techniques and mobile ubiquitous systems. However, to our knowledge, no survey of context-aware methods in big data analytics for business applications supported by enterprise-level software has been published to date. To bridge this research gap, in this paper, first, we present a definition of context, its modelling and evaluation techniques, and highlight the importance of contextual information for big data analytics. Second, works in three key business application areas that are context-aware and/or exploit big data analytics are thoroughly reviewed. Finally, the paper concludes by highlighting a number of contemporary research challenges, including issues concerning modelling, managing and applying business contexts to big data analytics. © 2020, Springer-Verlag London Ltd., part of Springer Nature.
A technique for parallel share-frequent sensor pattern mining from wireless sensor networks
- Authors: Rashid, Md. Mamunur, Gondal, Iqbal, Kamruzzaman, Joarder
- Date: 2014
- Type: Text , Conference paper
- Relation: 14th Annual International Conference on Computational Science, ICCS 2014; Cairns, Australia; 10th-12th June 2014; published in Procedia Computer Science p. 124-133
- Full Text:
- Reviewed:
- Description: WSNs generate huge amounts of data in the form of streams, and mining useful knowledge from these streams is a challenging task. Existing works generate sensor association rules using the occurrence frequency of patterns, with binary frequency (either absent or present) or the support of a pattern as the criterion. However, the binary frequency or support of a pattern may not be a sufficient indicator for finding meaningful patterns in WSN data because it only reflects the number of epochs in the sensor data which contain that pattern. The share measure of sensorsets can discover useful knowledge about the numerical values associated with sensors in a sensor database. Therefore, in this paper, we propose a new type of behavioral pattern, called share-frequent sensor patterns, that considers the non-binary frequency values of sensors in epochs. To discover share-frequent sensor patterns from a sensor dataset, we propose a novel parallel technique. In this technique, we develop a novel tree structure, called the parallel share-frequent sensor pattern tree (PShrFSP-tree), which is constructed at each local node independently by capturing the database contents to generate the candidate patterns using a pattern growth technique with a single scan, and then merges the locally generated candidate patterns at the final stage to generate global share-frequent sensor patterns. Comprehensive experimental results show that our proposed model is very efficient for mining share-frequent patterns from WSN data in terms of time and scalability.
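The share measure that this abstract contrasts with binary support can be sketched as follows. This is an illustrative reading, not the paper's exact definition: the share of a sensorset is the sum of its sensors' non-binary trigger counts over the epochs containing the whole set, divided by the total count in the database. The epoch layout and counts below are invented for the example.

```python
def share(sensorset, epochs):
    """Share of a sensorset; epochs map sensor id -> trigger count in that epoch."""
    total = sum(sum(e.values()) for e in epochs)  # total measure value in the database
    # Epochs that contain every sensor of the set
    contained = [e for e in epochs if all(s in e for s in sensorset)]
    # Sum of the set's counts over those epochs only
    local = sum(e[s] for e in contained for s in sensorset)
    return local / total if total else 0.0

epochs = [
    {"s1": 3, "s2": 1},
    {"s1": 2, "s3": 4},
    {"s2": 5, "s3": 1},
]
# {"s1"} occurs in the first two epochs with counts 3 + 2 = 5; total = 16
print(share({"s1"}, epochs))  # 0.3125
```

A binary support criterion would score `{"s1"}` as 2 of 3 epochs regardless of how strongly the sensor triggered, which is the insufficiency the abstract points out.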
An adaptive approach to opportunistic data forwarding in underwater acoustic sensor networks
- Nowsheen, Nusrat, Karmakar, Gour, Kamruzzaman, Joarder
- Authors: Nowsheen, Nusrat , Karmakar, Gour , Kamruzzaman, Joarder
- Date: 2014
- Type: Text , Conference proceedings
- Full Text:
- Description: Reliable data transfer in underwater acoustic sensor networks (UASNs) is a major research challenge in applications such as pollution monitoring, oceanic data collection, and surveillance, owing to the long propagation delay and high error rate of the acoustic channel. To address this issue, an opportunistic data forwarding protocol was proposed that achieves a high packet delivery success ratio with less routing overhead and energy consumption by selecting the next-hop forwarder among a set of candidates based on its link reliability and data transfer reachability. However, the protocol relies on a fixed data hold-time approach, i.e., each node holds data packets for a fixed amount of time before a forwarder discovery process is initiated. Depending on the value of the fixed hold time and the deployment scenario, this may incur a large end-to-end delay. Moreover, the lack of consideration of network conditions in the hold time limits its performance. In this paper, we propose an adaptive technique to improve its performance. The adaptive approach calculates the data hold time at each node dynamically, considering a number of node and network metrics including current buffer occupancy, delay experienced by stored data packets, arrival and service rates, and neighbors' data transmissions and reachability. Simulation results show that, compared with the fixed hold-time approach, our adaptive technique reduces end-to-end delay significantly, achieves considerably higher data delivery, and consumes less energy per successful packet delivery.
An efficient data extraction framework for mining wireless sensor networks
- Rashid, Md. Mamunur, Gondal, Iqbal, Kamruzzaman, Joarder
- Authors: Rashid, Md. Mamunur , Gondal, Iqbal , Kamruzzaman, Joarder
- Date: 2016
- Type: Text , Conference paper
- Relation: 23rd International Conference, ICONIP 2016; Kyoto, Japan; 16th-21st October 2016; published in Neural Information Processing, Part III (Lecture Notes in Computer Science series) Vol. 9949, p. 491-498
- Full Text:
- Reviewed:
- Description: Behavioral patterns for sensors have received a great deal of attention recently due to their usefulness in capturing the temporal relations between sensors in wireless sensor networks. To discover these patterns, we need to collect the behavioral data that represents the sensors' activities over time from the sensor database attached to a well-equipped central node, called the sink, for further analysis. However, given the limited resources of sensor nodes, an effective data collection method is required to gather the behavioral data efficiently. In this paper, we introduce a new framework for behavioral patterns, called associated-correlated sensor patterns, and also propose a new MapReduce-based paradigm for extracting data from the wireless sensor network in a distributed way. An extensive performance study shows that the proposed method can reduce the data size by almost 50% compared with the centralized model.
An efficient RANSAC hypothesis evaluation using sufficient statistics for RGB-D pose estimation
- Senthooran, Ilankalkone, Murshed, Manzur, Barca, Jan, Kamruzzaman, Joarder, Chung, Hoam
- Authors: Senthooran, Ilankalkone , Murshed, Manzur , Barca, Jan , Kamruzzaman, Joarder , Chung, Hoam
- Date: 2019
- Type: Text , Journal article
- Relation: Autonomous Robots Vol. 43, no. 5 (2019), p. 1257-1270
- Full Text:
- Reviewed:
- Description: Achieving autonomous flight in GPS-denied environments begins with pose estimation in three-dimensional space, which is much more challenging for an MAV in a swarm robotic system due to limited computational resources. In vision-based pose estimation, outlier detection is the most time-consuming step. It usually involves a RANSAC procedure using the reprojection-error method for hypothesis evaluation. The realignment-based hypothesis evaluation method is observed to be more accurate, but its considerably slower speed makes it unsuitable for robots with limited resources. We use sufficient statistics of least-squares minimisation to speed up this process. The additive nature of these sufficient statistics makes it possible to compute pose estimates in each evaluation by reusing previously computed statistics, so estimates need not be calculated from scratch each time. The proposed method is tested on standard RANSAC, Preemptive RANSAC and R-RANSAC using benchmark datasets. The results show that the use of sufficient statistics speeds up the outlier detection process with realignment hypothesis evaluation for all RANSAC variants, achieving up to 6.72 times faster execution.
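The additive-sufficient-statistics idea in this abstract can be illustrated with a minimal sketch. This is our own simplification, not the paper's exact formulation: rigid least-squares alignment of point correspondences depends only on the count, the sums of the points, and the sum of their outer products, each of which updates in O(1) per correspondence, so a RANSAC hypothesis evaluation can reuse previously accumulated sums instead of recomputing the alignment from scratch.

```python
import numpy as np

class AlignmentStats:
    """Additive sufficient statistics for rigid least-squares alignment."""
    def __init__(self):
        self.n = 0
        self.sp = np.zeros(3)        # sum of source points
        self.sq = np.zeros(3)        # sum of target points
        self.spq = np.zeros((3, 3))  # sum of outer products p q^T

    def add(self, p, q):
        # O(1) update when a correspondence joins the consensus set
        self.n += 1
        self.sp += p
        self.sq += q
        self.spq += np.outer(p, q)

    def cross_covariance(self):
        # H = sum p q^T - n * mean_p mean_q^T; the SVD of H yields the
        # least-squares rotation, so no per-point pass is needed here.
        mp, mq = self.sp / self.n, self.sq / self.n
        return self.spq - self.n * np.outer(mp, mq)

stats = AlignmentStats()
stats.add(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.0]))
stats.add(np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
H = stats.cross_covariance()
```

Because the statistics are additive, the sums for a candidate inlier set can also be formed by combining precomputed partial sums, which is what makes realignment-based evaluation affordable.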
An evidence theoretic approach for traffic signal intrusion detection
- Chowdhury, Abdullahi, Karmakar, Gour, Kamruzzaman, Joarder, Das, Rajkumar, Newaz, Shah
- Authors: Chowdhury, Abdullahi , Karmakar, Gour , Kamruzzaman, Joarder , Das, Rajkumar , Newaz, Shah
- Date: 2023
- Type: Text , Journal article
- Relation: Sensors Vol. 23, no. 10 (2023), p. 4646
- Full Text:
- Reviewed:
- Description: The increasing attacks on traffic signals worldwide indicate the importance of intrusion detection. Existing traffic signal Intrusion Detection Systems (IDSs) that rely on inputs from connected vehicles and image analysis techniques can only detect intrusions created by spoofed vehicles. However, these approaches fail to detect intrusions from attacks on in-road sensors, traffic controllers, and signals. In this paper, we propose an IDS based on detecting anomalies associated with flow rate, phase time, and vehicle speed, which is a significant extension of our previous work using additional traffic parameters and statistical tools. We theoretically modelled our system using Dempster-Shafer decision theory, considering the instantaneous observations of traffic parameters and their relevant historical normal traffic data. We also used Shannon's entropy to determine the uncertainty associated with the observations. To validate our work, we developed a simulation model based on the traffic simulator SUMO using many real scenarios and the data recorded by the Victorian Transportation Authority, Australia. The scenarios for abnormal traffic conditions were generated considering attacks such as jamming, Sybil, and false data injection attacks. The results show that the overall detection accuracy of our proposed system is 79.3% with fewer false alarms.
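The Dempster-Shafer fusion at the core of this approach can be sketched with the standard rule of combination. The frame of discernment, focal elements and mass values below are illustrative only, not taken from the paper: each traffic parameter (e.g. flow rate, vehicle speed) contributes a mass function, and the rule fuses them while renormalising away conflicting evidence.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination over frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to the empty set
    k = 1.0 - conflict                   # normalisation factor
    return {s: w / k for s, w in combined.items()}

attack = frozenset({"attack"})
either = frozenset({"attack", "normal"})  # ignorance: could be either state
m_flow = {attack: 0.6, either: 0.4}       # hypothetical flow-rate evidence
m_speed = {attack: 0.5, either: 0.5}      # hypothetical vehicle-speed evidence
fused = combine(m_flow, m_speed)
print(fused[attack])  # 0.8 -- agreement strengthens the attack hypothesis
```

Two weakly conclusive observations reinforce each other under the rule, which is why fusing several traffic parameters can raise detection confidence above any single one.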
Applications of machine learning and deep learning in antenna design, optimization, and selection : a review
- Sarker, Nayan, Podder, Prajoy, Mondal, M., Shafin, Sakib, Kamruzzaman, Joarder
- Authors: Sarker, Nayan , Podder, Prajoy , Mondal, M. , Shafin, Sakib , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 11, no. (2023), p. 103890-103915
- Full Text:
- Reviewed:
- Description: This review paper provides an overview of the latest developments in artificial intelligence (AI)-based antenna design and optimization for wireless communications. Machine learning (ML) and deep learning (DL) algorithms are applied to antenna engineering to improve the efficiency of the design and optimization processes. The review discusses the use of electromagnetic (EM) simulators such as computer simulation technology (CST) and the high-frequency structure simulator (HFSS) for ML and DL-based antenna design, and also covers reinforcement learning (RL)-based approaches. Various antenna optimization methods are discussed, including parallel optimization, single and multi-objective optimization, variable-fidelity optimization, multilayer ML-assisted optimization, and surrogate-based optimization. The review also covers AI-based antenna selection approaches for wireless applications. To support the automation of antenna engineering, data generation with computational electromagnetics software is described and some useful datasets are reported. The review concludes that ML/DL can enhance antenna behavior prediction, reduce the number of simulations, improve computational efficiency, and speed up the antenna design process. © 2013 IEEE.
Assessing transformer oil quality using deep convolutional networks
- Alam, Mohammad, Karmakar, Gour, Islam, Syed, Kamruzzaman, Joarder, Chetty, Madhu, Lim, Suryani, Appuhamillage, Gayan, Chattopadhyay, Gopi, Wilcox, Steve, Verheyen, Vincent
- Authors: Alam, Mohammad , Karmakar, Gour , Islam, Syed , Kamruzzaman, Joarder , Chetty, Madhu , Lim, Suryani , Appuhamillage, Gayan , Chattopadhyay, Gopi , Wilcox, Steve , Verheyen, Vincent
- Date: 2019
- Type: Text , Conference proceedings , Conference paper
- Relation: 29th Australasian Universities Power Engineering Conference, AUPEC 2019
- Full Text:
- Reviewed:
- Description: Electrical power grids comprise a significantly large number of transformers that interconnect power generation, transmission and distribution. These transformers, with different MVA ratings, are critical assets that require proper maintenance to provide long and uninterrupted electrical service. Mineral oil, an essential component of any transformer, not only provides cooling but also acts as an insulating medium within the transformer. The quality and the key dissolved properties of the insulating mineral oil are critical to the transformer's proper and reliable operation. However, traditional chemical diagnostic methods are expensive and time-consuming. A transformer oil image analysis approach based on the entropy value of the oil is inexpensive, effective and quick. However, entropy's inability to estimate vital transformer oil properties such as equivalent age, Neutralization Number (NN), dissipation factor (tanδ) and power factor (PF), together with its reliance on many intuitively derived constants, limits its estimation accuracy. To address this issue, in this paper we introduce an innovative transformer oil analysis using two deep convolutional learning techniques, namely a Convolutional Neural Network (ConvNet) and a Residual Neural Network (ResNet). These two deep neural networks are chosen for this project because of their superior performance in computer vision. After estimating the equivalent age of the transformer oil from its image with our proposed method, NN, tanδ and PF are computed using that estimated age. Our deep learning based techniques can accurately predict the transformer oil's equivalent age, leading to more accurate calculation of NN, tanδ and PF. The root mean square errors of the equivalent age estimated by the entropy, ConvNet and ResNet based methods are 0.718, 0.122 and 0.065, respectively. The ConvNet and ResNet based methods reduce the error of the oil age estimation by 83% and 91%, respectively, compared with the entropy method. Our proposed oil image analysis can calculate an equivalent age that is very close to the actual age for all images used in the experiment. © 2019 IEEE.
- Description: E1
Attacks on self-driving cars and their countermeasures : a survey
- Chowdhury, Abdullahi, Karmakar, Gour, Kamruzzaman, Joarder, Jolfaei, Alireza, Das, Rajkumar
- Authors: Chowdhury, Abdullahi , Karmakar, Gour , Kamruzzaman, Joarder , Jolfaei, Alireza , Das, Rajkumar
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8, no. (2020), p. 207308-207342
- Full Text:
- Reviewed:
- Description: Intelligent Traffic Systems (ITS) are currently evolving in the form of cooperative ITS or connected vehicles. Both forms use data communications between Vehicle-To-Vehicle (V2V), Vehicle-To-Infrastructure (V2I/I2V) and other on-road entities, and are accelerating the adoption of self-driving cars. The development of cyber-physical systems containing advanced sensors, sub-systems, and smart driving assistance applications over the past decade is equipping unmanned aerial and road vehicles with autonomous decision-making capabilities. The level of autonomy depends upon the make-up and degree of sensor sophistication and the vehicle's operational applications. As a result, the compromise of self-driving cars is perceived as a serious threat. Therefore, analyzing the threats and attacks on self-driving cars and ITSs, and the corresponding countermeasures to reduce those threats and attacks, is needed. Some survey papers compiling potential attacks on VANETs, ITSs and self-driving cars, and their detection mechanisms, are available in the current literature. However, to the best of our knowledge, they have not covered the real attacks that have already happened to self-driving cars. To bridge this research gap, in this paper we analyze the attacks that have already targeted self-driving cars and extensively present potential cyber-attacks and their impacts on those cars, along with their vulnerabilities. For recently reported attacks, we describe the possible mitigation strategies taken by the manufacturers and governments. This survey includes recent works on how a self-driving car can ensure resilient operation even under an ongoing cyber-attack. We also provide further research directions to improve the security issues associated with self-driving cars. © 2013 IEEE.
Blockchain technology and application : an overview
- Dong, Shi, Abbas, Khushnood, Li, Meixi, Kamruzzaman, Joarder
- Authors: Dong, Shi , Abbas, Khushnood , Li, Meixi , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: PeerJ Computer Science Vol. 9, no. (2023), p.
- Full Text:
- Reviewed:
- Description: In recent years, with the rise of digital currency, its underlying technology, blockchain, has become increasingly well known. This technology has several key characteristics, including decentralization, time-stamped data, a consensus mechanism, traceability, programmability, security, and credibility, and block data is essentially tamper-proof. Owing to these characteristics, blockchain can address the shortcomings of traditional financial institutions. As a result, this emerging technology has garnered significant attention from financial intermediaries, technology-based companies, and government agencies. This article offers an overview of the fundamentals of blockchain technology and its various applications. The introduction defines blockchain and explains its basic working principles, emphasizing features such as decentralization, immutability, and transparency. The article then traces the evolution of blockchain, from its inception in cryptocurrency to its development as a versatile tool with diverse potential applications. The main body of the article explores the fundamentals of blockchain systems, their limitations, various applications, and applicability. Finally, the study concludes by discussing the present state of blockchain technology and its future potential, as well as the challenges that must be surmounted to unlock its full potential. © Copyright 2023 Dong et al
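The tamper-evidence property this overview highlights can be shown with a toy hash chain: each block stores the hash of its predecessor, so altering any earlier block invalidates every later link. The field names and transactions below are illustrative only.

```python
import hashlib
import json

def block_hash(block):
    # Deterministic serialisation, then SHA-256
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    # Real blockchains also carry timestamps, nonces, Merkle roots, etc.
    return {"data": data, "prev": prev_hash}

def valid(chain):
    # Each block's stored "prev" must equal the recomputed predecessor hash
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

genesis = make_block("genesis", "0" * 64)
b1 = make_block("tx: A->B 5", block_hash(genesis))
b2 = make_block("tx: B->C 2", block_hash(b1))
chain = [genesis, b1, b2]

print(valid(chain))        # True
genesis["data"] = "forged"
print(valid(chain))        # False: tampering breaks every subsequent link
```

Consensus mechanisms then make rewriting the chain economically or computationally prohibitive, which is what turns this local tamper-evidence into the global immutability the article describes.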
Breast density classification for cancer detection using DCT-PCA feature extraction and classifier ensemble
- Haque, Md Sarwar, Hassan, Md Rafiul, BinMakhashen, Galal, Owaidh, Abdullah, Kamruzzaman, Joarder
- Authors: Haque, Md Sarwar , Hassan, Md Rafiul , BinMakhashen, Galal , Owaidh, Abdullah , Kamruzzaman, Joarder
- Date: 2018
- Type: Text , Conference proceedings
- Relation: 17th International Conference on Intelligent Systems Design and Applications, ISDA 2017; Delhi, India; 14th-16th December 2017; published in Intelligent Systems Design and Applications (part of the Advances in Intelligent Systems and Computing book series) Vol. 736, p. 702-711
- Full Text:
- Reviewed:
- Description: It is well known that breast density in mammograms may hinder the accuracy of breast cancer diagnosis. Although dense breasts should be processed in a special manner, most research has treated dense breasts almost the same as fatty ones. Consequently, dense tissues in the breast are diagnosed as a developed cancer. Instead, dense and fatty breasts should be clearly distinguished before diagnosing the breast as cancerous or non-cancerous. In this paper, we develop a system that automatically analyzes mammograms and identifies significant features. For feature extraction, we develop a novel system combining a two-dimensional discrete cosine transform (2D-DCT) and principal component analysis (PCA) to extract a minimal feature set of mammograms to differentiate breast density. These features are fed to three classifiers: a Backpropagation Multilayer Perceptron (MLP), a Support Vector Machine (SVM) and K Nearest Neighbour (KNN). A majority vote on the outputs of the different machine learning tools is also investigated to enhance classification performance. The results show that features extracted using the DCT-PCA combination provide very high classification performance when a majority vote of the classifier outputs from MLP, SVM, and KNN is used.
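The pipeline described in this abstract can be sketched in a few lines: take the low-frequency corner of each image's 2D-DCT, reduce it with PCA, and combine classifier decisions by majority vote. This is a minimal illustrative sketch, not the authors' code; the image data, coefficient counts, component counts, and classifier predictions are hypothetical placeholders.

```python
# Sketch of DCT-PCA feature extraction followed by majority voting.
# All sizes and data below are illustrative, not from the paper.
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
images = rng.random((20, 32, 32))        # 20 synthetic 32x32 "mammogram" patches

# 2D-DCT: keep the low-frequency 8x8 corner, which carries most of the energy
dct_feats = np.array([dctn(img, norm="ortho")[:8, :8].ravel() for img in images])

# PCA via SVD: project onto the top 5 principal components
centered = dct_feats - dct_feats.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
features = centered @ vt[:5].T           # minimal feature set, shape (20, 5)

# Majority vote over three classifiers' labels (0 = fatty, 1 = dense)
def majority_vote(*predictions):
    votes = np.stack(predictions)
    return (votes.sum(axis=0) > votes.shape[0] / 2).astype(int)

p_mlp = np.array([0, 1, 1, 0])           # placeholder MLP predictions
p_svm = np.array([0, 1, 0, 0])           # placeholder SVM predictions
p_knn = np.array([1, 1, 1, 0])           # placeholder KNN predictions
print(majority_vote(p_mlp, p_svm, p_knn))  # -> [0 1 1 0]
```

In practice the `features` matrix would be fed to trained MLP, SVM, and KNN models rather than the placeholder label arrays used here.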
Cancer classification utilizing voting classifier with ensemble feature selection method and transcriptomic data
- Khatun, Rabea, Akter, Maksuda, Islam, Md Manowarul, Uddin, Md Ashraf, Talukder, Md Alamin, Kamruzzaman, Joarder, Azad, Akm, Paul, Bikash, Almoyad, Muhammad, Aryal, Sunil, Moni, Mohammad
- Authors: Khatun, Rabea , Akter, Maksuda , Islam, Md Manowarul , Uddin, Md Ashraf , Talukder, Md Alamin , Kamruzzaman, Joarder , Azad, Akm , Paul, Bikash , Almoyad, Muhammad , Aryal, Sunil , Moni, Mohammad
- Date: 2023
- Type: Text , Journal article
- Relation: Genes Vol. 14, no. 9 (2023), p.
- Full Text:
- Reviewed:
- Description: Biomarker-based cancer identification and classification tools are widely used in bioinformatics and machine learning fields. However, the high dimensionality of microarray gene expression data poses a challenge for identifying important genes in cancer diagnosis. Many feature selection algorithms optimize cancer diagnosis by selecting optimal features. This article proposes an ensemble rank-based feature selection method (EFSM) and an ensemble weighted average voting classifier (VT) to overcome this challenge. The EFSM uses a ranking method that aggregates features from individual selection methods to efficiently discover the most relevant and useful features. The VT combines support vector machine, k-nearest neighbor, and decision tree algorithms to create an ensemble model. The proposed method was tested on three benchmark datasets and compared to existing built-in ensemble models. The results show that our model achieved higher accuracy, with 100% for leukaemia, 94.74% for colon cancer, and 94.34% for the 11-tumor dataset. This study concludes by identifying a subset of the most important cancer-causing genes and demonstrating their significance compared to the original data. The proposed approach surpasses existing strategies in accuracy and stability, significantly impacting the development of ML-based gene analysis. It detects vital genes with higher precision and stability than other existing methods. © 2023 by the authors.
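The two ideas in this abstract, rank aggregation across several feature-selection methods and a weighted average vote over classifier outputs, can be sketched as below. This is an illustrative sketch under assumed details, not the paper's EFSM/VT implementation: the scoring functions, weights, and data are hypothetical stand-ins.

```python
# Sketch of rank-based feature-selection aggregation plus weighted soft voting.
# Scorers, weights, and data are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((50, 200))                 # 50 samples, 200 "gene" features
y = rng.integers(0, 2, size=50)           # binary class labels

# Three simple per-feature relevance scores (stand-ins for the individual
# selection methods whose rankings are aggregated)
scores = [
    X.var(axis=0),                                            # variance
    np.abs(np.corrcoef(X.T, y)[-1, :-1]),                     # |corr with label|
    np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)),  # class-mean gap
]

# Rank aggregation: average each feature's rank across the scorers
# (rank 0 = most relevant) and keep the k best-ranked features.
ranks = np.array([np.argsort(np.argsort(-s)) for s in scores])
mean_rank = ranks.mean(axis=0)
k = 20
selected = np.argsort(mean_rank)[:k]      # indices of the 20 top-ranked features

# Weighted average (soft) voting: combine class-probability outputs of
# three classifiers with per-model weights, then take the argmax.
def weighted_vote(probas, weights):
    probas = np.asarray(probas)           # shape (n_models, n_samples, n_classes)
    weights = np.asarray(weights, float)[:, None, None]
    return (probas * weights).sum(axis=0).argmax(axis=1)

p_svm = np.array([[0.9, 0.1], [0.4, 0.6]])  # placeholder SVM probabilities
p_knn = np.array([[0.6, 0.4], [0.3, 0.7]])  # placeholder KNN probabilities
p_dt  = np.array([[0.2, 0.8], [0.5, 0.5]])  # placeholder decision-tree probs
print(weighted_vote([p_svm, p_knn, p_dt], weights=[0.5, 0.3, 0.2]))  # -> [0 1]
```

The selected column indices would then restrict `X` to a compact gene subset (`X[:, selected]`) before training the ensemble, mirroring the dimensionality reduction the abstract describes.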