A novel dynamic software-defined networking approach to neutralize traffic burst
- Authors: Sharma, Aakanksha, Balasubramanian, Venki, Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: Computers Vol. 12, no. 7 (2023), p.
- Full Text:
- Reviewed:
- Description: Software-defined networking (SDN) provides a holistic view of the network and is highly suitable for handling dynamic loads in a traditional network with minimal updates to the network infrastructure. However, the standard SDN architecture's control plane, designed for a single or multiple distributed SDN controllers, faces severe bottleneck issues. Our initial research created a reference model for the traditional network using standard SDN (referred to as SDN hereafter) in a network simulator called NetSim. Based on network traffic, the reference models consisted of light, modest and heavy networks depending on the number of connected IoT devices. Furthermore, a priority scheduling and congestion control algorithm was proposed for standard SDN, named extended SDN (eSDN), which minimises congestion and performs better than standard SDN. However, the enhancement suited only small-scale networks because, in a large-scale network, eSDN does not support dynamic SDN controller mapping; often, the same SDN controller gets overloaded, leading to a single point of failure. Our literature review shows that most proposed solutions are based on static SDN controller deployment, without considering the flow fluctuations and traffic bursts that lead to a lack of real-time load balancing among SDN controllers and eventually increase network latency. Therefore, to maintain Quality of Service (QoS) in the network, it becomes imperative to neutralise the on-the-fly traffic bursts that a static SDN controller cannot handle. Thus, our novel dynamic controller mapping algorithm with multiple-controller placement, dynamic SDN (dSDN), is critical to solving the identified issues. In dSDN, the SDN controllers are mapped dynamically in response to load fluctuation: if any SDN controller reaches its maximum threshold, the remaining traffic is diverted to another controller, significantly reducing delay and enhancing overall performance.
Our technique considers the latency and load fluctuation in the network and manages situations where static mapping is ineffective against dynamic flow variation. © 2023 by the authors.
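The threshold-triggered diversion described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' dSDN algorithm: the controller loads, capacities, and the `assign_flow` helper are all hypothetical.

```python
# Illustrative sketch (not the paper's dSDN algorithm): when a new flow
# would push its preferred SDN controller past its threshold, divert the
# flow to the controller with the most spare capacity.

def assign_flow(loads, capacities, flow_load, preferred):
    """Map a flow to its preferred controller unless that would exceed
    its threshold; otherwise divert to the least-loaded controller."""
    if loads[preferred] + flow_load <= capacities[preferred]:
        loads[preferred] += flow_load
        return preferred
    # Divert to the controller with the most spare capacity.
    target = max(range(len(loads)), key=lambda c: capacities[c] - loads[c])
    loads[target] += flow_load
    return target

loads = [90, 20, 40]          # current load per controller (hypothetical)
capacities = [100, 100, 100]  # per-controller threshold (hypothetical)
chosen = assign_flow(loads, capacities, 25, preferred=0)  # diverted away from 0
```

A real deployment would also need controller-to-switch reassignment and hysteresis to avoid oscillating flows, which this sketch omits.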
Applications of machine learning and deep learning in antenna design, optimization, and selection: a review
- Authors: Sarker, Nayan, Podder, Prajoy, Mondal, M., Shafin, Sakib, Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 11, no. (2023), p. 103890-103915
- Full Text:
- Reviewed:
- Description: This review paper provides an overview of the latest developments in artificial intelligence (AI)-based antenna design and optimization for wireless communications. Machine learning (ML) and deep learning (DL) algorithms are applied to antenna engineering to improve the efficiency of the design and optimization processes. The review discusses the use of electromagnetic (EM) simulators such as computer simulation technology (CST) and the high-frequency structure simulator (HFSS) for ML- and DL-based antenna design, and also covers reinforcement learning (RL)-based approaches. Various antenna optimization methods are discussed, including parallel optimization, single- and multi-objective optimization, variable-fidelity optimization, multilayer ML-assisted optimization, and surrogate-based optimization. The review also covers AI-based antenna selection approaches for wireless applications. To support the automation of antenna engineering, data generation techniques with computational electromagnetics software are described and some useful datasets are reported. The review concludes that ML/DL can enhance antenna behavior prediction, reduce the number of simulations, improve computational efficiency, and speed up the antenna design process. © 2013 IEEE.
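The surrogate-based optimization idea mentioned in this abstract can be shown with a toy example. Everything here is hypothetical: the crude frequency formula stands in for a costly CST/HFSS run, and the nearest-neighbour surrogate is the simplest possible stand-in for a trained ML model.

```python
# Toy surrogate-based optimisation sketch (illustrative only): sample a
# few "EM simulations" of resonant frequency vs. patch length, build a
# cheap surrogate over those samples, then sweep candidate lengths using
# only the surrogate, avoiding further expensive simulations.

def fake_em_sim(length_mm):
    # Stand-in for a costly CST/HFSS run (hypothetical toy model, GHz).
    return 72.0 / length_mm

# A small budget of "expensive" simulations at 20, 22, ..., 30 mm.
samples = [(20.0 + 2.0 * i, fake_em_sim(20.0 + 2.0 * i)) for i in range(6)]

def surrogate(length_mm):
    # Nearest-neighbour lookup over the sampled designs (a real workflow
    # would fit a regression or neural model instead).
    return min(samples, key=lambda s: abs(s[0] - length_mm))[1]

target = 2.4  # GHz design target (hypothetical)
candidates = [20.0 + 0.5 * j for j in range(21)]  # 20-30 mm sweep
best_len = min(candidates, key=lambda L: abs(surrogate(L) - target))
```

The design choice the abstract highlights is exactly this trade: six simulator calls instead of twenty-one, with the surrogate answering the rest.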
Quantum particle swarm optimization for task offloading in mobile edge computing
- Authors: Dong, Shi, Xia, Yuanjun, Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 19, no. 8 (2023), p. 9113-9122
- Full Text: false
- Reviewed:
- Description: Mobile edge computing (MEC) deploys servers at the edge of the mobile network to reduce the data transmission delay between servers and mobile devices and to meet the computing demands of mobile tasks. It alleviates the tension between the computing power and delay requirements of mobile computing tasks and reduces the energy consumption of mobile devices. However, an MEC server has limited computing and storage resources and mobile network bandwidth, making it impossible to offload all mobile computing tasks to MEC servers for processing. Therefore, MEC needs to offload and schedule mobile computing tasks judiciously to achieve efficient utilization of server resources. To solve the above-mentioned problems, in this article the task offloading problem is formulated as an optimization problem, and particle swarm optimization (PSO)- and quantum PSO-based task offloading strategies are proposed. Extensive simulation results show that the proposed algorithm can significantly reduce the system energy consumption, task completion time, and running time compared with recent advanced strategies, namely ant colony optimization, multiagent deep deterministic policy gradients, deep meta reinforcement learning-based offloading, the iterative proximal algorithm, and parallel random forest. © 2005-2012 IEEE.
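A bare-bones PSO over a toy offloading cost illustrates the kind of search this abstract describes. The cost model, its coefficients, and the `pso` helper are invented for illustration and are not the authors' formulation (nor the quantum variant).

```python
import random

# Minimal PSO sketch (illustrative, not the paper's algorithm): minimise
# a toy offloading cost = local energy for the fraction of each task kept
# on the device + transfer/queueing cost for the fraction offloaded.

def cost(x):
    # x[i] in [0, 1]: fraction of task i offloaded (toy model).
    local = sum((1 - xi) * 5.0 for xi in x)        # local energy term
    edge = sum(xi * (1.0 + 0.5 * xi) for xi in x)  # transfer + queueing term
    return local + edge

def pso(dim=4, particles=20, iters=100):
    random.seed(0)  # deterministic demo
    pos = [[random.random() for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    gbest = min(pbest, key=cost)[:]             # swarm-wide best position
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Standard velocity update: inertia + cognitive + social pull.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=cost)[:]
    return gbest

best = pso()  # converges toward fully offloading each task in this toy model
```

Quantum PSO replaces the velocity update with a sampling step around an attractor point, but the fitness-driven structure above is the same.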
UDTN-RS: a new underwater delay tolerant network routing protocol for coastal patrol and surveillance
- Authors: Azad, Saiful, Neffati, Ahmed, Mahmud, Mufti, Kaiser, M., Ahmed, Muhammad, Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 142780-142793
- Full Text:
- Reviewed:
- Description: The Coastal Patrol and Surveillance Application (CPSA) is developed and deployed to detect, track and monitor water vessel traffic using automated devices. The latest advancements in marine technologies, including Autonomous Underwater Vehicles, have encouraged the development of this type of application. To facilitate their operation, installation of a Coastal Patrol and Surveillance Network (CPSN) is mandatory. One of the primary design objectives of this network is to deliver an adequate amount of data within an effective time frame. This is particularly essential for detecting an intruder's vessel and notifying its presence over adverse underwater communication channels. Additionally, intermittent connectivity of the nodes remains another important obstacle to overcome for the smooth functioning of the CPSA. Taking these objectives and obstacles into account, this work proposes a new protocol by embedding a forward error correction technique (namely, Reed-Solomon (RS) codes) into the Underwater Delay Tolerant Network with probabilistic spraying (UDTN-Prob) routing protocol, named Underwater Delay Tolerant Protocol with RS (UDTN-RS). In addition, the existing binary packet spraying technique in UDTN-Prob is enhanced to support encoded packet exchange between the contacting nodes. A comprehensive simulation has been performed employing the DEsign, Simulate, Emulate and Realize Test-beds (DESERT) underwater simulator along with the World Ocean Simulation System (WOSS) package to obtain a more realistic account of acoustic propagation for assessing the effectiveness of the proposed protocol. Three scenarios are considered during the simulation campaign: varying data transmission rate, varying area size, and a scenario focusing on estimating the overhead ratio. For the first two scenarios, three metrics are taken into account: normalised packet delivery ratio, delay, and normalised throughput.
The acquired results for these scenarios and metrics are compared to those of the protocol's predecessor, UDTN-Prob. The results suggest that the proposed UDTN-RS protocol can be considered a suitable alternative to existing protocols such as UDTN-Prob, Epidemic, and others for sparse networks like the CPSN. © 2013 IEEE.
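The forward error correction idea behind UDTN-RS can be illustrated with a much simpler code than Reed-Solomon: a single XOR parity packet, which lets a receiver reconstruct any one lost packet from the survivors. The helpers below are hypothetical and show only the redundancy principle, not the RS coding the paper actually uses.

```python
# Minimal erasure-coding sketch (single XOR parity, far weaker than the
# Reed-Solomon codes in UDTN-RS, but showing the same idea: redundant
# packets let a receiver recover data despite losses on a poor channel).

def add_parity(packets):
    """Append one parity packet = bytewise XOR of all data packets."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return packets + [parity]

def recover(received):
    """received: the coded packets with exactly one replaced by None;
    the XOR of all surviving packets equals the missing one."""
    length = len(next(p for p in received if p is not None))
    acc = bytes(length)
    for p in received:
        if p is not None:
            acc = bytes(a ^ b for a, b in zip(acc, p))
    return acc

data = [b"ab", b"cd", b"ef"]
coded = add_parity(data)   # 3 data packets + 1 parity packet
coded[1] = None            # simulate one packet lost underwater
restored = recover(coded)  # reconstructs b"cd"
```

Reed-Solomon generalises this: with k data packets and m parity packets, any k of the k+m packets suffice, which is why it suits the intermittent contacts of a delay-tolerant network.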
False data detection in a clustered smart grid using unscented Kalman filter
- Authors: Rashed, Muhammad, Kamruzzaman, Joarder, Gondal, Iqbal, Islam, Syed
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 78548-78556
- Full Text:
- Reviewed:
- Description: Smart grid accessibility over the Internet of Things (IoT) is becoming attractive to electrical grid operators as it brings considerable operational and cost efficiencies. However, this in return creates significant cyber security challenges, such as the fortification of state estimation data, i.e. state variables, against false data injection attacks (FDIAs). In this paper, a clustered partitioning state estimation (CPSE) technique is proposed to detect FDIAs by using static state estimation, namely the weighted least squares (WLS) method, in conjunction with dynamic state estimation using a minimum variance unscented Kalman filter (MV-UKF), which improves the accuracy of state estimation. The estimates acquired from the MV-UKF do not deviate like those of the WLS, as they are based purely on the previous iteration saved in the transition matrix. The deviation between the corresponding WLS and MV-UKF estimates is utilised to partition the smart grid into smaller sub-systems to detect an FDIA and then identify its location. To validate the proposed detection technique, FDIAs are injected into the IEEE 14-bus, IEEE 30-bus, IEEE 118-bus, and IEEE 300-bus test systems using the MATPOWER simulation platform. Our results clearly demonstrate that the proposed technique can locate the attack area efficiently compared to other techniques such as the chi-square test. © 2013 IEEE.
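The deviation check at the heart of this approach can be sketched as follows. This is illustrative only: the threshold, the per-bus values, and the `flag_fdia` helper are hypothetical, not the paper's CPSE procedure.

```python
# Illustrative sketch (hypothetical values, not the paper's CPSE method):
# flag a state variable as suspect when the static (WLS-style) estimate
# deviates from the dynamic (Kalman-style) prediction beyond a threshold,
# exploiting the fact that injected false data pulls the static estimate
# away while the filter's prediction follows the previous state.

def flag_fdia(static_est, dynamic_est, threshold=0.05):
    """Return indices of state variables whose two estimates disagree."""
    return [i for i, (s, d) in enumerate(zip(static_est, dynamic_est))
            if abs(s - d) > threshold]

wls = [1.02, 0.98, 1.25, 1.00]  # static estimates, p.u. (hypothetical)
ukf = [1.01, 0.99, 1.00, 1.00]  # dynamic predictions, p.u. (hypothetical)
suspect = flag_fdia(wls, ukf)   # bus 2 deviates by 0.25, so it is flagged
```

In the paper's scheme this comparison then drives the partitioning of the grid into sub-systems so the attack location can be narrowed down, which the sketch does not attempt.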
Attacks on self-driving cars and their countermeasures : a survey
- Authors: Chowdhury, Abdullahi, Karmakar, Gour, Kamruzzaman, Joarder, Jolfaei, Alireza, Das, Rajkumar
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8, no. (2020), p. 207308-207342
- Full Text:
- Reviewed:
- Description: Intelligent Traffic Systems (ITS) are currently evolving in the form of cooperative ITS or connected vehicles. Both forms use data communications between Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I/I2V) and other on-road entities, and are accelerating the adoption of self-driving cars. The development of cyber-physical systems containing advanced sensors, sub-systems, and smart driving assistance applications over the past decade is equipping unmanned aerial and road vehicles with autonomous decision-making capabilities. The level of autonomy depends upon the make-up and degree of sensor sophistication and the vehicle's operational applications. As a result, the prospect of self-driving cars being compromised is perceived as a serious threat. Therefore, analysis of the threats and attacks on self-driving cars and ITSs, and of the corresponding countermeasures to reduce those threats and attacks, is needed. Some survey papers compiling potential attacks on VANETs, ITSs and self-driving cars, and their detection mechanisms, are available in the current literature. However, to the best of our knowledge, they have not covered real attacks that have already occurred on self-driving cars. To bridge this research gap, in this paper we analyze the attacks that have already targeted self-driving cars and extensively present potential cyber-attacks and their impacts on those cars, along with their vulnerabilities. For recently reported attacks, we describe the possible mitigation strategies taken by manufacturers and governments. This survey includes recent works on how a self-driving car can ensure resilient operation even under ongoing cyber-attack. We also provide further research directions to improve the security issues associated with self-driving cars. © 2013 IEEE.