A novel dynamic software-defined networking approach to neutralize traffic burst
- Authors: Sharma, Aakanksha , Balasubramanian, Venki , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: Computers Vol. 12, no. 7 (2023), p.
- Full Text:
- Reviewed:
- Description: Software-defined networking (SDN) provides a holistic view of the network and is highly suitable for handling dynamic loads in a traditional network with minimal updates to the network infrastructure. However, the control plane in the standard SDN architecture, whether built on a single controller or multiple distributed controllers, faces severe bottleneck issues. Our initial research created a reference model for the traditional network using the standard SDN (referred to as SDN hereafter) in a network simulator called NetSim. Based on network traffic, the reference models consisted of light, modest and heavy networks, depending on the number of connected IoT devices. Furthermore, a priority scheduling and congestion control algorithm was proposed for the standard SDN, named extended SDN (eSDN), which minimises congestion and performs better than the standard SDN. However, the enhancement was suitable only for small-scale networks because, in a large-scale network, the eSDN does not support dynamic SDN controller mapping. Often, the same SDN controller gets overloaded, leading to a single point of failure. Our literature review shows that most proposed solutions are based on static SDN controller deployment, without considering the flow fluctuations and traffic bursts that cause a lack of real-time load balancing among the SDN controllers and eventually increase network latency. Therefore, to maintain Quality of Service (QoS) in the network, it becomes imperative to neutralise the on-the-fly traffic bursts that static SDN controllers cannot handle. Thus, we propose a novel dynamic controller mapping algorithm with multiple-controller placement, termed dynamic SDN (dSDN), to solve the identified issues. In dSDN, the SDN controllers are mapped dynamically with the load fluctuation. If any SDN controller reaches its maximum threshold, the remaining traffic is diverted to another controller, significantly reducing delay and enhancing overall performance.
Our technique considers the latency and load fluctuation in the network and manages situations where static mapping is ineffective in dealing with dynamic flow variation. © 2023 by the authors.
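The threshold-and-divert behaviour described in the abstract can be made concrete with a small sketch. This is an illustration of the general idea only, not the authors' dSDN algorithm: the `Controller` class, capacity values, and least-loaded tiebreak below are all hypothetical.

```python
# Illustrative sketch (not the published dSDN algorithm): flows go to their
# mapped controller until it hits its threshold, then overflow is diverted
# to the least-loaded controller.

class Controller:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity  # max load before diversion (hypothetical units)
        self.load = 0

def assign_flow(mapped, controllers, demand):
    """Send the flow to its mapped controller unless that controller has
    reached its threshold; otherwise divert to the least-utilised controller."""
    if mapped.load + demand <= mapped.capacity:
        target = mapped
    else:
        target = min(controllers, key=lambda c: c.load / c.capacity)
    target.load += demand
    return target

controllers = [Controller("c1", 100), Controller("c2", 100)]
c1, c2 = controllers
for _ in range(12):
    assign_flow(c1, controllers, 10)   # traffic burst aimed entirely at c1
print(c1.load, c2.load)  # c1 saturates at its threshold; the overflow lands on c2
```

A static mapping would keep piling the burst onto c1; the dynamic check is what spreads the excess and avoids the single point of failure the abstract describes.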
An evidence theoretic approach for traffic signal intrusion detection
- Authors: Chowdhury, Abdullahi , Karmakar, Gour , Kamruzzaman, Joarder , Das, Rajkumar , Newaz, Shah
- Date: 2023
- Type: Text , Journal article
- Relation: Sensors Vol. 23, no. 10 (2023), p. 4646
- Full Text:
- Reviewed:
- Description: The increasing number of attacks on traffic signals worldwide indicates the importance of intrusion detection. Existing traffic signal Intrusion Detection Systems (IDSs) that rely on inputs from connected vehicles and image analysis techniques can only detect intrusions created by spoofed vehicles; they fail to detect intrusions stemming from attacks on in-road sensors, traffic controllers, and signals. In this paper, we propose an IDS based on detecting anomalies associated with flow rate, phase time, and vehicle speed, a significant extension of our previous work using additional traffic parameters and statistical tools. We theoretically modelled our system using Dempster-Shafer decision theory, considering the instantaneous observations of traffic parameters and their relevant historical normal traffic data. We also used Shannon's entropy to determine the uncertainty associated with the observations. To validate our work, we developed a simulation model based on the traffic simulator SUMO, using many real scenarios and data recorded by the Victorian Transportation Authority, Australia. The scenarios for abnormal traffic conditions were generated considering attacks such as jamming, Sybil, and false data injection attacks. The results show that the overall detection accuracy of our proposed system is 79.3%, with fewer false alarms.
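The core of the Dempster-Shafer approach mentioned above is the rule for fusing evidence from multiple traffic parameters. The sketch below shows Dempster's rule of combination on a two-hypothesis frame; the mass values and the `flow_rate`/`speed` labels are hypothetical stand-ins, not the paper's actual mass assignments from traffic observations.

```python
# A minimal sketch of Dempster's rule of combination over the frame
# {normal, attack}. Mass functions are dicts mapping a frozenset of
# hypotheses to a belief mass; mass on the full frame encodes "don't know".

def combine(m1, m2):
    """Combine two mass functions with Dempster's rule (conflict renormalised)."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

N, A = frozenset({"normal"}), frozenset({"attack"})
theta = N | A  # the whole frame of discernment

# Hypothetical evidence from two traffic parameters:
flow_rate = {A: 0.6, N: 0.1, theta: 0.3}
speed     = {A: 0.5, N: 0.2, theta: 0.3}

fused = combine(flow_rate, speed)
print(round(fused[A], 3))  # fused belief in "attack" exceeds either source alone
```

Fusing the two sources raises the belief in "attack" to about 0.76, above either individual source, which is the behaviour that lets weak anomalies in several parameters jointly trigger a detection.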
Applications of machine learning and deep learning in antenna design, optimization, and selection : a review
- Authors: Sarker, Nayan , Podder, Prajoy , Mondal, M. , Shafin, Sakib , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 11, no. (2023), p. 103890-103915
- Full Text:
- Reviewed:
- Description: This review paper provides an overview of the latest developments in artificial intelligence (AI)-based antenna design and optimization for wireless communications. Machine learning (ML) and deep learning (DL) algorithms are applied to antenna engineering to improve the efficiency of the design and optimization processes. The review discusses the use of electromagnetic (EM) simulators such as computer simulation technology (CST) and high-frequency structure simulator (HFSS) for ML and DL-based antenna design, and also covers reinforcement learning (RL)-based approaches. Various antenna optimization methods including parallel optimization, single and multi-objective optimization, variable fidelity optimization, multilayer ML-assisted optimization, and surrogate-based optimization are discussed. The review also covers AI-based antenna selection approaches for wireless applications. To support the automation of antenna engineering, the data generation technique with computational electromagnetics software is described and some useful datasets are reported. The review concludes that ML/DL can enhance antenna behavior prediction, reduce the number of simulations, improve computational efficiency, and speed up the antenna design process. © 2013 IEEE.
Blockchain technology and application : an overview
- Authors: Dong, Shi , Abbas, Khushnood , Li, Meixi , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: PeerJ Computer Science Vol. 9, no. (2023), p.
- Full Text:
- Reviewed:
- Description: In recent years, with the rise of digital currency, its underlying technology, blockchain, has become increasingly well-known. This technology has several key characteristics, including decentralization, time-stamped data, consensus mechanisms, traceability, programmability, security, and credibility, and block data is essentially tamper-proof. Due to these characteristics, blockchain can address the shortcomings of traditional financial institutions. As a result, this emerging technology has garnered significant attention from financial intermediaries, technology-based companies, and government agencies. This article offers an overview of the fundamentals of blockchain technology and its various applications. The introduction defines blockchain and explains its fundamental working principles, emphasizing features such as decentralization, immutability, and transparency. The article then traces the evolution of blockchain, from its inception in cryptocurrency to its development as a versatile tool with diverse potential applications. The main body of the article explores the fundamentals of blockchain systems, their limitations, and their various applications and applicability. Finally, the study concludes by discussing the present state of blockchain technology and its future potential, as well as the challenges that must be surmounted to unlock its full potential. © Copyright 2023 Dong et al.
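The tamper-proof property the abstract highlights follows from hash chaining: each block stores the hash of its predecessor, so altering any block invalidates every later link. A minimal sketch (block fields and validation logic are illustrative, not any specific blockchain's format):

```python
# Minimal hash-chained ledger: tampering with any block breaks validation.
import hashlib
import json

def block_hash(block):
    """Hash the block's contents (everything except the stored hash itself)."""
    payload = {k: block[k] for k in ("index", "data", "prev")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain):
    """Every block's stored hash must match its contents and its predecessor."""
    return all(
        b["hash"] == block_hash(b)
        and b["prev"] == (chain[i - 1]["hash"] if i else "0" * 64)
        for i, b in enumerate(chain)
    )

chain = []
add_block(chain, "pay Alice 5")
add_block(chain, "pay Bob 3")
print(is_valid(chain))              # True
chain[0]["data"] = "pay Alice 500"  # tamper with an earlier block
print(is_valid(chain))              # False: the stored hash no longer matches
```

Real blockchains add consensus, signatures, and Merkle trees on top of this, but the chained-hash check is the mechanism that makes retroactive edits detectable.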
Cancer classification utilizing voting classifier with ensemble feature selection method and transcriptomic data
- Authors: Khatun, Rabea , Akter, Maksuda , Islam, Md Manowarul , Uddin, Md Ashraf , Talukder, Md Alamin , Kamruzzaman, Joarder , Azad, Akm , Paul, Bikash , Almoyad, Muhammad , Aryal, Sunil , Moni, Mohammad
- Date: 2023
- Type: Text , Journal article
- Relation: Genes Vol. 14, no. 9 (2023), p.
- Full Text:
- Reviewed:
- Description: Biomarker-based cancer identification and classification tools are widely used in bioinformatics and machine learning fields. However, the high dimensionality of microarray gene expression data poses a challenge for identifying important genes in cancer diagnosis. Many feature selection algorithms optimize cancer diagnosis by selecting optimal features. This article proposes an ensemble rank-based feature selection method (EFSM) and an ensemble weighted average voting classifier (VT) to overcome this challenge. The EFSM uses a ranking method that aggregates features from individual selection methods to efficiently discover the most relevant and useful features. The VT combines support vector machine, k-nearest neighbor, and decision tree algorithms to create an ensemble model. The proposed method was tested on three benchmark datasets and compared to existing built-in ensemble models. The results show that our model achieved higher accuracy, with 100% for leukaemia, 94.74% for colon cancer, and 94.34% for the 11-tumor dataset. This study concludes by identifying a subset of the most important cancer-causing genes and demonstrating their significance compared to the original data. The proposed approach surpasses existing strategies in accuracy and stability, significantly impacting the development of ML-based gene analysis. It detects vital genes with higher precision and stability than other existing methods. © 2023 by the authors.
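The rank-aggregation step at the heart of an ensemble feature selector like EFSM can be sketched briefly. This is an illustration of the general rank-based ensemble idea, not the authors' exact EFSM; the gene scores and the three base-method names are hypothetical.

```python
# Illustrative rank-based ensemble feature selection: each base method ranks
# all features by its own score, per-method ranks are averaged, and the
# best-ranked features are kept.

def ranks(scores):
    """Map feature index -> rank (1 = best) for one method's scores."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return {feat: r + 1 for r, feat in enumerate(order)}

def ensemble_select(score_lists, k):
    """Average each feature's rank across methods; return the top-k features."""
    all_ranks = [ranks(s) for s in score_lists]
    n = len(score_lists[0])
    avg = {f: sum(r[f] for r in all_ranks) / len(all_ranks) for f in range(n)}
    return sorted(sorted(avg, key=avg.get)[:k])

# Hypothetical scores for 5 genes from three base methods
# (e.g. chi-square, mutual information, ANOVA F-value):
chi2 = [0.9, 0.2, 0.8, 0.1, 0.5]
mi   = [0.7, 0.3, 0.9, 0.2, 0.4]
fval = [0.8, 0.1, 0.6, 0.3, 0.7]

print(ensemble_select([chi2, mi, fval], k=2))  # the two consistently top-ranked genes
```

Averaging ranks rather than raw scores keeps methods with different score scales comparable, which is why rank aggregation is a common way to stabilise gene selection across heterogeneous selectors.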
Decentralized content sharing in mobile ad-hoc networks : a survey
- Authors: Kaisar, Shahriar , Kamruzzaman, Joarder , Karmakar, Gour , Rashid, Md Mamunur
- Date: 2023
- Type: Text , Journal article , Review
- Relation: Digital Communications and Networks Vol. 9, no. 6 (2023), p. 1363-1398
- Full Text:
- Reviewed:
- Description: The evolution of smart mobile devices has significantly impacted the way we generate and share content and introduced a huge volume of Internet traffic. To address this issue and take advantage of the short-range communication capabilities of smart mobile devices, the decentralized content sharing approach has emerged as a suitable and promising alternative. Decentralized content sharing uses a peer-to-peer network among co-located smart mobile device users to fulfil content requests. Several articles have been published to date addressing its different aspects, including group management, interest extraction, message forwarding, participation incentives, and content replication. This survey paper summarizes and critically analyzes recent advancements in decentralized content sharing and highlights potential research issues that need further consideration. © 2022 Chongqing University of Posts and Telecommunications
Deep learning and federated learning for screening COVID-19 : a review
- Authors: Mondal, M. , Bharati, Subrato , Podder, Prajoy , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article , Review
- Relation: BioMedInformatics Vol. 3, no. 3 (2023), p. 691-713
- Full Text:
- Reviewed:
- Description: Since December 2019, a novel coronavirus disease (COVID-19) has infected millions of individuals. This paper conducts a thorough study of the use of deep learning (DL) and federated learning (FL) approaches to COVID-19 screening. To begin, an evaluation of research articles published between 1 January 2020 and 28 June 2023 is presented, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The review compares various datasets on medical imaging, including X-ray, computed tomography (CT) scans, and ultrasound images, in terms of the number of images, COVID-19 samples, and classes in the datasets. Following that, a description of existing DL algorithms applied to various datasets is offered. Additionally, a summary of recent work on FL for COVID-19 screening is provided. Efforts to improve the quality of FL models are comprehensively reviewed and objectively evaluated. © 2023 by the authors.
RBFK cipher : a randomized butterfly architecture-based lightweight block cipher for IoT devices in the edge computing environment
- Authors: Rana, Sohel , Mondal, M. , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: Cybersecurity Vol. 6, no. 1 (2023), p.
- Full Text:
- Reviewed:
- Description: Internet security has become a major concern with the growing use of the Internet of Things (IoT) and edge computing technologies. Even though data processing is handled by the edge server, sensitive data is generated and stored by the IoT devices, which are subject to attack. Since most IoT devices have limited resources, standard security algorithms such as AES, DES, and RSA cannot run properly on them. In this paper, a lightweight symmetric key cipher termed the randomized butterfly architecture of fast Fourier transform for key (RBFK) cipher is proposed for resource-constrained IoT devices in the edge computing environment. The butterfly architecture is used in the key scheduling system to produce strong round keys for five rounds of the encryption method. The RBFK cipher has two key sizes, 64 and 128 bits, with a block size of 64 bits. Owing to the butterfly architecture, the RBFK cipher exhibits a larger avalanche effect, ensuring strong security. The proposed cipher satisfies the Shannon characteristics of confusion and diffusion. The memory usage and execution cycles of the RBFK cipher are assessed using the fair evaluation of lightweight cryptographic systems (FELICS) tool. The proposed ciphers were also implemented using MATLAB 2021a to test key sensitivity by analyzing the histogram, correlation graph, and entropy of encrypted and decrypted images. Since the RBFK ciphers provide better security with minimal computational complexity than recently proposed competing ciphers, they are suitable for IoT devices in an edge computing environment. © 2023, The Author(s).
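The butterfly structure borrowed from the FFT can be sketched to show why it drives a strong avalanche effect: log2(n) stages of pairwise mixing propagate every input word into every output word. The XOR-plus-rotation mixing below is a hypothetical stand-in, not the published RBFK round-key derivation.

```python
# Illustrative FFT-style butterfly pass over key words (not the actual RBFK
# key schedule). After log2(n) stages, every input word influences every
# output word, which is the source of the avalanche behaviour.

def rotl16(x, r):
    """Rotate a 16-bit word left by r bits."""
    return ((x << r) | (x >> (16 - r))) & 0xFFFF

def butterfly_mix(words):
    """One butterfly network over a list of 16-bit words (length a power of 2)."""
    n, stride = len(words), 1
    w = list(words)
    while stride < n:                       # log2(n) stages, as in an FFT
        for i in range(0, n, 2 * stride):
            for j in range(i, i + stride):  # each butterfly mixes a pair of words
                a, b = w[j], w[j + stride]
                w[j], w[j + stride] = a ^ rotl16(b, 3), b ^ rotl16(a, 5)
        stride *= 2
    return w

key = [0x0001, 0x0000, 0x0000, 0x0000]  # flip a single bit of an all-zero key
print(butterfly_mix([0x0000] * 4))      # all-zero input stays all-zero
print(butterfly_mix(key))               # the single flipped bit reaches every word
```

A single-bit change in one input word leaves no output word untouched, which is exactly the diffusion property a key schedule wants before the round keys feed the cipher.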
UDTN-RS : a new underwater delay tolerant network routing protocol for coastal patrol and surveillance
- Authors: Azad, Saiful , Neffati, Ahmed , Mahmud, Mufti , Kaiser, M. , Ahmed, Muhammad , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 142780-142793
- Full Text:
- Reviewed:
- Description: The Coastal Patrol and Surveillance Application (CPSA) is developed and deployed to detect, track and monitor water vessel traffic using automated devices. The latest advancements in marine technologies, including Autonomous Underwater Vehicles, have encouraged the development of this type of application. To facilitate their operations, installation of a Coastal Patrol and Surveillance Network (CPSN) is mandatory. One of the primary design objectives of this network is to deliver an adequate amount of data within an effective time frame. This is particularly essential for detecting an intruder's vessel and reporting it over the adverse underwater communication channels. Additionally, the intermittent connectivity of the nodes remains another important obstacle to overcome for the smooth functioning of the CPSA. Taking these objectives and obstacles into account, this work proposes a new protocol by assembling a forward error correction technique (namely Reed-Solomon codes, or RS) into the Underwater Delay Tolerant Network with probabilistic spraying technique (UDTN-Prob) routing protocol, named the Underwater Delay Tolerant Protocol with RS (UDTN-RS). In addition, the existing binary packet spraying technique in UDTN-Prob is enhanced to support encoded packet exchange between the contacting nodes. A comprehensive simulation has been performed employing the DEsign, Simulate, Emulate and Realize Test-beds (DESERT) underwater simulator along with the World Ocean Simulation System (WOSS) package to obtain a more realistic account of acoustic propagation for assessing the effectiveness of the proposed protocol. Three scenarios are considered during the simulation campaign, namely varying data transmission rate, varying area size, and a scenario focusing on estimating the overhead ratio. For the first two scenarios, three metrics are taken into account: normalised packet delivery ratio, delay, and normalised throughput.
The acquired results for these scenarios and metrics are compared to those of the protocol's predecessor, UDTN-Prob. The results suggest that the proposed UDTN-RS protocol can be considered a suitable alternative to existing protocols like UDTN-Prob, Epidemic, and others for sparse networks like the CPSN. © 2013 IEEE.
- Authors: Azad, Saiful , Neffati, Ahmed , Mahmud, Mufti , Kaiser, M. , Ahmed, Muhammad , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 142780-142793
- Full Text:
- Reviewed:
- Description: The Coastal Patrol and Surveillance Application (CPSA) is developed and deployed to detect, track and monitor water vessel traffic using automated devices. The latest advancements in marine technologies, including Autonomous Underwater Vehicles, have encouraged the development of this type of application. To facilitate their operations, installation of a Coastal Patrol and Surveillance Network (CPSN) is mandatory. One of the primary design objectives of this network is to deliver an adequate amount of data within an effective time frame. This is particularly essential for detecting an intruder's vessel and notifying its presence through the adverse underwater communication channels. Additionally, intermittent connectivity of the nodes remains another important obstacle to overcome to allow the smooth functioning of the CPSA. Taking these objectives and obstacles into account, this work proposes a new protocol that ensembles a forward error correction technique (namely, Reed-Solomon (RS) codes) with the Underwater Delay Tolerant Network with probabilistic spraying (UDTN-Prob) routing protocol, named Underwater Delay Tolerant Protocol with RS (UDTN-RS). In addition, the existing binary packet spraying technique in UDTN-Prob is enhanced to support encoded packet exchange between the contacting nodes. A comprehensive simulation has been performed employing the DEsign, Simulate, Emulate and Realize Test-beds (DESERT) underwater simulator along with the World Ocean Simulation System (WOSS) package to obtain a more realistic account of acoustic propagation when assessing the effectiveness of the proposed protocol. Three scenarios are considered during the simulation campaign: varying data transmission rate, varying area size, and a scenario focusing on estimating the overhead ratio. For the first two scenarios, three metrics are taken into account: normalised packet delivery ratio, delay, and normalised throughput.
The acquired results for these scenarios and metrics are compared to those of the protocol's predecessor, UDTN-Prob. The results suggest that the proposed UDTN-RS protocol is a suitable alternative to existing protocols such as UDTN-Prob and Epidemic for sparse networks like the CPSN. © 2013 IEEE.
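The forward error correction idea at the heart of UDTN-RS can be illustrated without a full Reed-Solomon implementation. The sketch below (not the paper's code) uses the simplest erasure code, a single XOR parity packet: like RS, it adds redundancy so the receiver can rebuild a lost packet without retransmission, though RS can recover multiple losses. All names are illustrative.

```python
def add_parity(packets):
    """Append one XOR parity packet so any single lost packet is recoverable.
    (Reed-Solomon generalises this idea to tolerate multiple losses.)"""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return packets + [parity]

def recover(received, lost_index):
    """Rebuild the packet at lost_index by XOR-ing all surviving packets
    (including the parity packet): the XOR of the full set is zero."""
    rebuilt = bytes(len(received[0 if lost_index else 1]))
    for i, p in enumerate(received):
        if i != lost_index and p is not None:
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, p))
    return rebuilt
```

In a DTN setting the parity packets would simply be sprayed alongside the data packets, which is why the encoded-packet exchange mentioned above matters.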
False data detection in a clustered smart grid using unscented Kalman filter
- Rashed, Muhammad, Kamruzzaman, Joarder, Gondal, Iqbal, Islam, Syed
- Authors: Rashed, Muhammad , Kamruzzaman, Joarder , Gondal, Iqbal , Islam, Syed
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 78548-78556
- Full Text:
- Reviewed:
- Description: The smart grid accessibility over the Internet of Things (IoT) is becoming attractive to electrical grid operators as it brings considerable operational and cost efficiencies. However, this in return creates significant cyber security challenges, such as fortification of state estimation data, including state variables, against false data injection attacks (FDIAs). In this paper, a clustered partitioning state estimation (CPSE) technique is proposed to detect FDIAs by using static state estimation, namely the weighted least square (WLS) method, in conjunction with dynamic state estimation using a minimum variance unscented Kalman filter (MV-UKF), which improves the accuracy of state estimation. The estimates acquired from the MV-UKF do not deviate like those of WLS, as they are purely based on the previous iteration saved in the transition matrix. The deviation between the corresponding estimates of WLS and MV-UKF is utilised to partition the smart grid into smaller sub-systems to detect an FDIA and then identify its location. To validate the proposed detection technique, FDIAs are injected into the IEEE 14-bus, IEEE 30-bus, IEEE 118-bus, and IEEE 300-bus distribution feeders using the MATPOWER simulation platform. Our results clearly demonstrate that the proposed technique can locate the attack area efficiently compared to other techniques such as the chi-square test. © 2013 IEEE.
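A toy sketch of the localisation step described in the abstract, assuming per-bus scalar state estimates and an arbitrary deviation threshold (both simplifications; the paper partitions the grid into sub-systems rather than flagging individual buses):

```python
def locate_fdia(wls_estimates, ukf_estimates, threshold=0.05):
    """Flag buses whose static (WLS) and dynamic (MV-UKF-style) state
    estimates disagree.

    WLS fits the possibly falsified measurements, while the dynamic
    estimate tracks the previous state, so a large deviation hints at
    injected data. Returns the bus indices whose deviation exceeds
    `threshold` as the suspected attack region.
    """
    suspects = []
    for bus, (wls, ukf) in enumerate(zip(wls_estimates, ukf_estimates)):
        if abs(wls - ukf) > threshold:
            suspects.append(bus)
    return suspects
```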
Remote reconfiguration of FPGA-based wireless sensor nodes for flexible Internet of Things
- Aziz, Syed, Hoskin, Dylan, Pham, Duc, Kamruzzaman, Joarder
- Authors: Aziz, Syed , Hoskin, Dylan , Pham, Duc , Kamruzzaman, Joarder
- Date: 2022
- Type: Text , Journal article
- Relation: Computers and Electrical Engineering Vol. 100, no. (2022), p.
- Full Text:
- Reviewed:
- Description: Recently, sensor nodes in Wireless Sensor Networks (WSNs) have been using Field Programmable Gate Arrays (FPGAs) for high-speed, low-power processing and reconfigurability. Reconfigurability enables adaptation of functionality and performance to changing requirements. This paper presents an efficient architecture for full remote reconfiguration of FPGA-based wireless sensors. The novelty of the work includes the ability to wirelessly upload new configuration bitstreams to remote sensor nodes using a protocol developed to provide full remote access to the flash memory of the sensor nodes. Results show that the FPGA can be remotely reconfigured in 1.35 s using a bitstream stored in the flash memory. The proposed scheme uses a negligible amount of FPGA logic and does not require a dedicated microcontroller or softcore processor. It can help develop a truly flexible IoT, where the FPGAs on thousands of sensor nodes can be reprogrammed or new configuration bitstreams uploaded without requiring physical access to the nodes. © 2022
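The abstract does not specify the flash-access protocol, so the following is a hypothetical sketch of the general shape such a scheme takes: the bitstream is split into numbered chunks, and the receiver commits them to flash in order while acknowledging the next expected sequence number (so lost or duplicated frames can be detected and retried). Every name here is invented for illustration.

```python
def chunk_bitstream(bitstream, chunk_size=256):
    """Split a configuration bitstream into fixed-size, sequence-numbered
    chunks, the unit a wireless flash-write protocol would transfer."""
    return [(seq, bitstream[i:i + chunk_size])
            for seq, i in enumerate(range(0, len(bitstream), chunk_size))]

class FlashReceiver:
    """Toy receiver: writes in-order chunks to an in-memory 'flash' and
    acknowledges the sequence number it expects next."""
    def __init__(self):
        self.flash = bytearray()
        self.expected = 0

    def receive(self, seq, payload):
        if seq == self.expected:      # in order: commit to flash
            self.flash.extend(payload)
            self.expected += 1
        return self.expected          # the ack doubles as a retransmit hint
```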
Sensitivity analysis for vulnerability mitigation in hybrid networks
- Ur‐rehman, Attiq, Gondal, Iqbal, Kamruzzaman, Joarder, Jolfaei, Alireza
- Authors: Ur‐rehman, Attiq , Gondal, Iqbal , Kamruzzaman, Joarder , Jolfaei, Alireza
- Date: 2022
- Type: Text , Journal article
- Relation: Electronics (Switzerland) Vol. 11, no. 2 (2022), p.
- Full Text:
- Reviewed:
- Description: The development of cyber‐assured systems is a challenging task, particularly due to the cost and complexities associated with modern hybrid network architectures, as well as the recent advancements in cloud computing. For this reason, the early detection of vulnerabilities and threat strategies is vital for minimising the risks for enterprise networks configured with a variety of node types, which are called hybrid networks. Existing vulnerability assessment techniques are unable to exhaustively analyse all vulnerabilities in modern dynamic IT networks, which utilise a wide range of IoT and industrial control system (ICS) devices. This could lead to a less than optimal risk evaluation. In this paper, we present a novel framework to analyse the mitigation strategies for a variety of nodes, including traditional IT systems and their dependability on IoT devices, as well as industrial control systems. The framework adopts avoid, reduce, and manage as its core principles in characterising mitigation strategies. Our results confirmed the effectiveness of our mitigation strategy framework, which took node types, their criticality, and the network topology into account. Our results showed that our proposed framework was highly effective at reducing the risks in dynamic and resource-constrained environments, in contrast to the existing techniques in the literature. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
A novel OFDM format and a machine learning-based dimming control for LiFi
- Nowrin, Itisha, Mondal, M., Islam, Rashed, Kamruzzaman, Joarder
- Authors: Nowrin, Itisha , Mondal, M. , Islam, Rashed , Kamruzzaman, Joarder
- Date: 2021
- Type: Text , Journal article
- Relation: Electronics (Switzerland) Vol. 10, no. 17 (2021), p.
- Full Text:
- Reviewed:
- Description: This paper proposes a new hybrid orthogonal frequency division multiplexing (OFDM) form, termed DC‐biased pulse amplitude modulated optical OFDM (DPO‐OFDM), by combining the ideas of the existing DC‐biased optical OFDM (DCO‐OFDM) and pulse amplitude modulated discrete multitone (PAM‐DMT). The analysis indicates that the required DC‐bias for DPO‐OFDM-based light fidelity (LiFi) depends on the dimming level and the components of the DPO‐OFDM. The bit error rate (BER) performance and dimming flexibility of the DPO‐OFDM and existing OFDM schemes are evaluated using MATLAB tools. The results show that the proposed DPO‐OFDM is power efficient and has a wide dimming range. Furthermore, a switching algorithm is introduced for LiFi, where the individual components of the hybrid OFDM are switched according to a target dimming level. Next, machine learning algorithms are used for the first time to find the appropriate proportions of the hybrid OFDM components. It is shown that polynomial regression of degree 4 can reliably predict the constellation size of the DCO‐OFDM component of DPO‐OFDM for a given constellation size of PAM‐DMT. With the component switching and the machine learning algorithms, DPO‐OFDM‐based LiFi is power efficient over a wide dimming range. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
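The degree-4 polynomial regression mentioned in the abstract is standard least-squares fitting. A self-contained, generic sketch (not tied to the paper's constellation-size data) via the normal equations:

```python
def polyfit(xs, ys, degree=4):
    """Least-squares polynomial fit by solving the normal equations
    V^T V c = V^T y, where V is the Vandermonde matrix of xs."""
    n = degree + 1
    # Build V^T V and V^T y directly from power sums.
    a = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coeffs = [0.0] * n
    for i in range(n - 1, -1, -1):
        coeffs[i] = (b[i] - sum(a[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / a[i][i]
    return coeffs  # coeffs[i] multiplies x**i

def predict(coeffs, x):
    """Evaluate the fitted polynomial at x."""
    return sum(c * x ** i for i, c in enumerate(coeffs))
```

In the paper's setting, `xs` would be PAM‐DMT constellation sizes and `ys` the matching DCO‐OFDM constellation sizes; here the fit is exercised on synthetic data only.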
Green underwater wireless communications using hybrid optical-acoustic technologies
- Islam, Kazi, Ahmad, Iftekhar, Habibi, Daryoush, Zahed, M., Kamruzzaman, Joarder
- Authors: Islam, Kazi , Ahmad, Iftekhar , Habibi, Daryoush , Zahed, M. , Kamruzzaman, Joarder
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 85109-85123
- Full Text:
- Reviewed:
- Description: Underwater wireless communication is a rapidly growing field, especially with the recent emergence of technologies such as autonomous underwater vehicles (AUVs) and remotely operated vehicles (ROVs). To support the high-bandwidth applications using these technologies, underwater optics has attracted significant attention, alongside its complementary technology - underwater acoustics. In this paper, we propose a hybrid opto-acoustic underwater wireless communication model that reduces network power consumption and supports high-data rate underwater applications by selecting appropriate communication links in response to varying traffic loads and dynamic weather conditions. Underwater optics offers high data rates and consumes less power. However, due to the severe absorption of light in the medium, the communication range is short in underwater optics. Conversely, acoustics suffers from low data rate and high power consumption, but provides longer communication ranges. Since most underwater equipment relies on battery power, energy-efficient communication is critical for reliable underwater communications. In this work, we derive analytical models for both underwater acoustics and optics, and calculate the required transmit power for reliable communications in various underwater communication environments. We then formulate an optimization problem that minimizes the network power consumption for carrying data from underwater nodes to surface sinks under varying traffic loads and weather conditions. The proposed optimization model can be solved offline periodically, hence the additional computational complexity to find the optimum solution for larger networks is not a limiting factor for practical applications. Our results indicate that the proposed technique yields up to 35% power savings compared to existing opto-acoustic solutions. © 2013 IEEE.
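A minimal caricature of the opto-acoustic link-selection idea: optics when the sink is within optical range, acoustics otherwise. The paper formulates a proper optimization over traffic loads and weather conditions; the range threshold and power figures below are invented purely for illustration.

```python
def pick_links(nodes, optical_range=50.0, p_optical=0.1, p_acoustic=2.0):
    """Greedy per-node link choice: optics is low-power but short-range,
    acoustics is long-range but power-hungry.

    nodes: list of (node_id, distance_to_sink_m) pairs.
    Returns (assignment dict, total transmit power in arbitrary units).
    """
    assignment, total_power = {}, 0.0
    for node_id, dist in nodes:
        if dist <= optical_range:
            assignment[node_id] = "optical"
            total_power += p_optical
        else:
            assignment[node_id] = "acoustic"
            total_power += p_acoustic
    return assignment, total_power
```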
How much I can rely on you: measuring trustworthiness of a Twitter user
- Das, Rajkumar, Karmakar, Gour, Kamruzzaman, Joarder
- Authors: Das, Rajkumar , Karmakar, Gour , Kamruzzaman, Joarder
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Dependable and Secure Computing Vol. 18, no. 2 (2021), p. 949-966
- Full Text:
- Reviewed:
- Description: Trustworthiness in an online environment is essential because individuals and organizations can easily be misled by false and malicious information received from untrustworthy users. Though existing methods assess users' trustworthiness by exploiting Twitter account properties, their efficacy is inadequate because of Twitter's restriction on profile and tweet size, the existence of missing or insufficient profiles, and the ease of creating fake accounts or relationships to feign trustworthiness. In this paper, we present a holistic approach by exploiting ideas perceived from real-world organizations for trust estimation along with available Twitter information. Users' trustworthiness is determined by considering their credentials, recommendations from referees and the quality of the information in their Twitter accounts and tweets. We establish the feasibility of our approach analytically and further devise a multi-objective cost function for the A
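The trust-estimation idea above, combining credentials, referee recommendations and content quality, can be caricatured as a weighted average. The weights below are arbitrary assumptions; the paper devises a multi-objective cost function rather than a fixed linear blend.

```python
def trust_score(credentials, recommendations, content_quality,
                weights=(0.4, 0.3, 0.3)):
    """Combine three evidence sources, each pre-scaled to [0, 1], into a
    single trustworthiness score via a weighted average."""
    components = (credentials, recommendations, content_quality)
    return sum(w * c for w, c in zip(weights, components))
```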
State estimation within IED-based smart grid using Kalman estimates
- Rashed, Muhammad, Gondal, Iqbal, Kamruzzaman, Joarder, Islam, Syed
- Authors: Rashed, Muhammad , Gondal, Iqbal , Kamruzzaman, Joarder , Islam, Syed
- Date: 2021
- Type: Text , Journal article
- Relation: Electronics (Switzerland) Vol. 10, no. 15 (2021), p.
- Full Text:
- Reviewed:
- Description: State estimation is a traditional and reliable technique within power distribution and control systems. It is used for building a topology of the power grid network based on state measurements and the current operational state of different nodes and buses. The protection of sensors and measurement units such as Intelligent Electronic Devices (IEDs) in a Central Energy Management System (CEMS) against False Data Injection Attacks (FDIAs) is a big concern to grid operators. These are a special kind of cyber-attack directed towards the state and measurement data in such a way as to mislead the CEMS into making incorrect decisions and create generation-load imbalance. They are known to bypass the traditional bad data detection systems within central estimators. This paper presents the use of an additional novel state estimator based on the Kalman filter along with traditional Distributed State Estimation (DSE), which is based on Weighted Least Squares (WLS). The Kalman filter is a feedback control mechanism that constantly updates itself based on a state prediction and state correction technique and shows improvement in the estimates. The additional estimator output is compared with the results of DSE in order to identify anomalies and the injection of false data. We evaluated our methodology by simulating the proposed technique using MATPOWER over the IEEE-14, IEEE-30, IEEE-118 and IEEE-300 bus systems. The results clearly demonstrate the superiority of the proposed method over traditional state estimation. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
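The predict/correct cycle the abstract describes can be shown for a scalar state. The real estimator operates on full bus-state vectors; the random-walk model and noise variances here are assumptions for illustration.

```python
def kalman_step(x, p, z, q=1e-4, r=0.01):
    """One predict/correct cycle of a scalar Kalman filter (random-walk
    process model).

    x, p : previous state estimate and its variance
    z    : new measurement
    q, r : process and measurement noise variances
    """
    # Predict: the state carries over; uncertainty grows by process noise.
    x_pred, p_pred = x, p + q
    # Correct: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

def flag_false_data(wls_value, kalman_value, tolerance=0.1):
    """Raise an anomaly flag when the WLS estimate (which fits the possibly
    injected measurements) drifts away from the Kalman estimate."""
    return abs(wls_value - kalman_value) > tolerance
```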
A robust forgery detection method for copy-move and splicing attacks in images
- Islam, Mohammad, Karmakar, Gour, Kamruzzaman, Joarder, Murshed, Manzur
- Authors: Islam, Mohammad , Karmakar, Gour , Kamruzzaman, Joarder , Murshed, Manzur
- Date: 2020
- Type: Text , Journal article
- Relation: Electronics Vol. 9, no. 9 (2020), p. 1-22
- Full Text:
- Reviewed:
- Description: Internet of Things (IoT) image sensors, social media, and smartphones generate huge volumes of digital images every day. The easy availability and usability of photo editing tools have made forgery attacks, primarily splicing and copy-move attacks, effortless, causing cybercrimes to be on the rise. While several models have been proposed in the literature for detecting these attacks, the robustness of those models has not been investigated when (i) a low number of tampered images are available for model building or (ii) images from IoT sensors are distorted due to image rotation or scaling caused by unwanted or unexpected changes in the sensors' physical set-up. Moreover, further improvement in detection accuracy is needed for real-world security management systems. To address these limitations, in this paper, an innovative image forgery detection method is proposed based on Discrete Cosine Transformation (DCT) and Local Binary Pattern (LBP), with a new feature extraction method using the mean operator. First, images are divided into non-overlapping fixed-size blocks and 2D block DCT is applied to capture changes due to image forgery. Then LBP is applied to the magnitude of the DCT array to enhance forgery artifacts. Finally, the mean value of a particular cell across all LBP blocks is computed, which yields a fixed number of features and presents a more computationally efficient method. Using a Support Vector Machine (SVM), the proposed method has been extensively tested on four well-known, publicly available grayscale and color image forgery datasets, and additionally on an IoT-based image forgery dataset that we built. Experimental results reveal the superiority of our proposed method over recent state-of-the-art methods in terms of widely used performance metrics and computational time, and demonstrate robustness against low availability of forged training samples.
- Description: This research was funded by Research Priority Area (RPA) scholarship of Federation University Australia.
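The feature-extraction pipeline above is described step by step, so a compact sketch is possible: block-wise 2D DCT, LBP over the coefficient magnitudes, then the mean per cell across blocks. This omits the SVM stage and the paper's exact block size and normalisation, and the naive unnormalised DCT is for clarity only (real code would use a fast transform).

```python
import math

def dct2(block):
    """Naive (unnormalised) 2D DCT-II of a square block; O(n^4), for
    illustration only."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            out[u][v] = sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                for x in range(n) for y in range(n))
    return out

def lbp_code(mag, cx, cy):
    """8-neighbour Local Binary Pattern code of cell (cx, cy)."""
    centre = mag[cx][cy]
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dx, dy) in enumerate(nbrs):
        if mag[cx + dx][cy + dy] >= centre:
            code |= 1 << bit
    return code

def block_features(blocks):
    """Mean LBP code per interior cell across all blocks: a fixed-length
    feature vector regardless of how many blocks the image yields."""
    n = len(blocks[0])
    mags = [[[abs(c) for c in row] for row in dct2(b)] for b in blocks]
    feats = []
    for cx in range(1, n - 1):
        for cy in range(1, n - 1):
            codes = [lbp_code(m, cx, cy) for m in mags]
            feats.append(sum(codes) / len(codes))
    return feats
```

The fixed feature length is what keeps the downstream classifier's input size independent of the image size, which is the efficiency point the abstract makes.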
A Survey on Behavioral Pattern Mining from Sensor Data in Internet of Things
- Rashid, Md Mamunur, Kamruzzaman, Joarder, Hassan, Mohammad, Shahriar Shafin, Sakib, Bhuiyan, Md Zakirul
- Authors: Rashid, Md Mamunur , Kamruzzaman, Joarder , Hassan, Mohammad , Shahriar Shafin, Sakib , Bhuiyan, Md Zakirul
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 33318-33341
- Full Text:
- Reviewed:
- Description: The deployment of large-scale wireless sensor networks (WSNs) for Internet of Things (IoT) applications is increasing day by day, especially with the emergence of smart city services. The sensor data streams generated from these applications are largely dynamic, heterogeneous, and often geographically distributed over large areas. For high-value use in business, industry and services, these data streams must be mined to extract insightful knowledge about, for example, monitoring (e.g., discovering certain behaviors over a deployed area) or network diagnostics (e.g., predicting faulty sensor nodes). However, due to the inherent constraints of sensor networks and application requirements, traditional data mining techniques cannot be directly used to mine IoT data streams efficiently and accurately in real-time. In the last decade, a number of works have been reported in the literature proposing behavioral pattern mining algorithms for sensor networks. This paper presents the technical challenges that need to be considered for mining sensor data. It then provides a thorough review of the mining techniques proposed in the recent literature to mine behavioral patterns from sensor data in IoT; their characteristics and differences are highlighted and compared. We also propose a behavioral pattern mining framework for IoT and discuss possible future research directions in this area. © 2013 IEEE.
A survey on context awareness in big data analytics for business applications
- Dinh, Loan, Karmakar, Gour, Kamruzzaman, Joarder
- Authors: Dinh, Loan , Karmakar, Gour , Kamruzzaman, Joarder
- Date: 2020
- Type: Text , Journal article
- Relation: Knowledge and Information Systems Vol. 62, no. 9 (2020), p. 3387-3415
- Full Text:
- Reviewed:
- Description: The concept of context awareness has been in existence since the 1990s. Though initially applied exclusively in computer science, over time it has increasingly been adopted by many different application domains such as business, health and military. Contexts change continuously because of objective reasons, such as the economic situation, political matters and social issues. The adoption of big data analytics by businesses is facilitating such change at an even faster rate and in much more complicated ways. The potential benefits of embedding contextual information into an application are already evidenced by the improved outcomes of existing context-aware methods in those applications. Since big data is growing very rapidly, context awareness in big data analytics has become more important and timely because of its proven efficiency in big data understanding and preparation, contributing to extracting more accurate value from big data. Many surveys have been published on context-based methods such as context modelling and reasoning, workflow adaptations, computational intelligence techniques and mobile ubiquitous systems. However, to our knowledge, no survey of context-aware methods in big data analytics for business applications supported by enterprise-level software has been published to date. To bridge this research gap, in this paper we first present a definition of context, its modelling and evaluation techniques, and highlight the importance of contextual information for big data analytics. Second, works in three key business application areas that are context-aware and/or exploit big data analytics are thoroughly reviewed. Finally, the paper concludes by highlighting a number of contemporary research challenges, including issues concerning modelling, managing and applying business contexts to big data analytics. © 2020, Springer-Verlag London Ltd., part of Springer Nature.
Attacks on self-driving cars and their countermeasures: a survey
- Chowdhury, Abdullahi, Karmakar, Gour, Kamruzzaman, Joarder, Jolfaei, Alireza, Das, Rajkumar
- Authors: Chowdhury, Abdullahi , Karmakar, Gour , Kamruzzaman, Joarder , Jolfaei, Alireza , Das, Rajkumar
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8, no. (2020), p. 207308-207342
- Full Text:
- Reviewed:
- Description: Intelligent Traffic Systems (ITS) are currently evolving in the form of cooperative ITS or connected vehicles. Both forms use data communications between Vehicle-To-Vehicle (V2V), Vehicle-To-Infrastructure (V2I/I2V) and other on-road entities, and are accelerating the adoption of self-driving cars. The development of cyber-physical systems containing advanced sensors, sub-systems, and smart driving assistance applications over the past decade is equipping unmanned aerial and road vehicles with autonomous decision-making capabilities. The level of autonomy depends upon the make-up and degree of sensor sophistication and the vehicle's operational applications. As a result, the prospect of self-driving cars being compromised is perceived as a serious threat. Therefore, analyzing the threats and attacks on self-driving cars and ITSs, and the corresponding countermeasures to reduce those threats and attacks, is needed. For this reason, some survey papers compiling potential attacks on VANETs, ITSs and self-driving cars, and their detection mechanisms, are available in the current literature. However, to our knowledge, they have not covered real attacks that have already occurred on self-driving cars. To bridge this research gap, in this paper we analyze the attacks that have already targeted self-driving cars and extensively present potential cyber-attacks and their impacts on those cars, along with their vulnerabilities. For recently reported attacks, we describe the possible mitigation strategies taken by manufacturers and governments. This survey also includes recent works on how a self-driving car can ensure resilient operation even under an ongoing cyber-attack. We also provide further research directions to improve the security issues associated with self-driving cars. © 2013 IEEE.