Attacks on self-driving cars and their countermeasures : a survey
- Chowdhury, Abdullahi, Karmakar, Gour, Kamruzzaman, Joarder, Jolfaei, Alireza, Das, Rajkumar
- Authors: Chowdhury, Abdullahi , Karmakar, Gour , Kamruzzaman, Joarder , Jolfaei, Alireza , Das, Rajkumar
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8, no. (2020), p. 207308-207342
- Full Text:
- Reviewed:
- Description: Intelligent Traffic Systems (ITS) are currently evolving in the form of cooperative ITSs or connected vehicles. Both forms use data communications between Vehicle-To-Vehicle (V2V), Vehicle-To-Infrastructure (V2I/I2V) and other on-road entities, and are accelerating the adoption of self-driving cars. The development of cyber-physical systems containing advanced sensors, sub-systems, and smart driving assistance applications over the past decade is equipping unmanned aerial and road vehicles with autonomous decision-making capabilities. The level of autonomy depends upon the make-up and degree of sensor sophistication and the vehicle's operational applications. As a result, the possibility of self-driving cars being compromised is perceived as a serious threat. Therefore, analyzing the threats and attacks on self-driving cars and ITSs, and the corresponding countermeasures to reduce those threats and attacks, is needed. For this reason, some survey papers compiling potential attacks on VANETs, ITSs and self-driving cars, and their detection mechanisms, are available in the current literature. However, to the best of our knowledge, they have not covered real attacks that have already occurred on self-driving cars. To bridge this research gap, in this paper, we analyze the attacks that have already targeted self-driving cars and extensively present potential cyber-attacks and their impacts on those cars, along with their vulnerabilities. For recently reported attacks, we describe the possible mitigation strategies taken by manufacturers and governments. This survey also includes recent works on how a self-driving car can ensure resilient operation even under an ongoing cyber-attack. We also provide further research directions to improve the security issues associated with self-driving cars. © 2013 IEEE.
An evidence theoretic approach for traffic signal intrusion detection
- Chowdhury, Abdullahi, Karmakar, Gour, Kamruzzaman, Joarder, Das, Rajkumar, Newaz, Shah
- Authors: Chowdhury, Abdullahi , Karmakar, Gour , Kamruzzaman, Joarder , Das, Rajkumar , Newaz, Shah
- Date: 2023
- Type: Text , Journal article
- Relation: Sensors Vol. 23, no. 10 (2023), p. 4646
- Full Text:
- Reviewed:
- Description: The increasing number of attacks on traffic signals worldwide indicates the importance of intrusion detection. Existing traffic signal Intrusion Detection Systems (IDSs) that rely on inputs from connected vehicles and image analysis techniques can only detect intrusions created by spoofed vehicles. However, these approaches fail to detect intrusions arising from attacks on in-road sensors, traffic controllers, and signals. In this paper, we propose an IDS based on detecting anomalies associated with flow rate, phase time, and vehicle speed, which is a significant extension of our previous work using additional traffic parameters and statistical tools. We theoretically modelled our system using Dempster-Shafer decision theory, considering the instantaneous observations of traffic parameters and their relevant historical normal traffic data. We also used Shannon's entropy to determine the uncertainty associated with the observations. To validate our work, we developed a simulation model based on the SUMO traffic simulator using many real scenarios and the data recorded by the Victorian Transportation Authority, Australia. The scenarios for abnormal traffic conditions were generated considering attacks such as jamming, Sybil, and false data injection attacks. The results show that the overall detection accuracy of our proposed system is 79.3% with fewer false alarms.
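The evidence-fusion step this abstract describes can be illustrated with Dempster's rule of combination plus an entropy-based uncertainty measure. This is only a minimal sketch: the hypothesis set and the mass values assigned to the flow-rate and phase-time evidence are hypothetical, not taken from the paper.

```python
from itertools import product
import math

def combine(m1, m2):
    """Dempster's rule: fuse two mass functions keyed by frozenset hypotheses."""
    fused, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass assigned to contradictory hypotheses
    k = 1.0 - conflict  # normalisation constant
    return {h: v / k for h, v in fused.items()}

def shannon_entropy(masses):
    """Shannon entropy of a mass assignment, as a simple uncertainty measure."""
    return -sum(p * math.log2(p) for p in masses.values() if p > 0)

# Hypothetical evidence from two traffic parameters (flow rate, phase time)
N, A = frozenset({"normal"}), frozenset({"attack"})
theta = N | A  # frame of discernment (total ignorance)
m_flow = {N: 0.6, A: 0.3, theta: 0.1}
m_phase = {N: 0.2, A: 0.7, theta: 0.1}
fused = combine(m_flow, m_phase)  # combined belief over the hypotheses
```

After fusion, the hypothesis with the largest combined mass would be reported, with the entropy of the fused assignment indicating how confident that decision is.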
Green underwater wireless communications using hybrid optical-acoustic technologies
- Islam, Kazi, Ahmad, Iftekhar, Habibi, Daryoush, Zahed, M., Kamruzzaman, Joarder
- Authors: Islam, Kazi , Ahmad, Iftekhar , Habibi, Daryoush , Zahed, M. , Kamruzzaman, Joarder
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 85109-85123
- Full Text:
- Reviewed:
- Description: Underwater wireless communication is a rapidly growing field, especially with the recent emergence of technologies such as autonomous underwater vehicles (AUVs) and remotely operated vehicles (ROVs). To support high-bandwidth applications using these technologies, underwater optics has attracted significant attention, alongside its complementary technology, underwater acoustics. In this paper, we propose a hybrid opto-acoustic underwater wireless communication model that reduces network power consumption and supports high-data-rate underwater applications by selecting appropriate communication links in response to varying traffic loads and dynamic weather conditions. Underwater optics offers high data rates and consumes less power. However, due to the severe absorption of light in the medium, its communication range is short. Conversely, acoustics suffers from low data rates and high power consumption, but provides longer communication ranges. Since most underwater equipment relies on battery power, energy-efficient communication is critical for reliable underwater communications. In this work, we derive analytical models for both underwater acoustics and optics, and calculate the required transmit power for reliable communications in various underwater communication environments. We then formulate an optimization problem that minimizes the network power consumption for carrying data from underwater nodes to surface sinks under varying traffic loads and weather conditions. The proposed optimization model can be solved offline periodically, hence the additional computational complexity of finding the optimum solution for larger networks is not a limiting factor for practical applications. Our results indicate that the proposed technique yields up to 35% power savings compared to existing opto-acoustic solutions. © 2013 IEEE.
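The core trade-off the abstract describes, optics cheap at short range but killed by absorption, acoustics expensive but long-range, can be sketched with toy link-budget models. All coefficients here are illustrative assumptions (the optical attenuation constant, the required receive power, a collimated beam with geometric spreading neglected); only the acoustic absorption uses the well-known Thorp formula, and this is not the paper's analytical model.

```python
import math

def optical_tx_power(d_m, c=0.15, p_rx_req=1e-6):
    """Required optical transmit power (W) over distance d_m (metres).
    Assumes exponential seawater attenuation exp(c*d) with an illustrative
    coefficient c (1/m) and a collimated beam (spreading loss neglected)."""
    return p_rx_req * math.exp(c * d_m)

def acoustic_tx_power(d_m, f_khz=20.0, p_rx_req=1e-6):
    """Required acoustic transmit power (W) using Thorp's absorption
    formula (dB/km) plus spherical spreading loss."""
    f2 = f_khz ** 2
    alpha = 0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2) + 2.75e-4 * f2 + 0.003
    loss_db = 20 * math.log10(max(d_m, 1.0)) + alpha * d_m / 1000.0
    return p_rx_req * 10 ** (loss_db / 10)

def pick_link(d_m):
    """Choose the link that needs less transmit power for this hop."""
    po, pa = optical_tx_power(d_m), acoustic_tx_power(d_m)
    return ("optical", po) if po < pa else ("acoustic", pa)
```

With these numbers the optical link wins for short hops and the acoustic link takes over once the exponential absorption term dominates, which is the behaviour the hybrid model exploits when assigning links.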
Breast density classification for cancer detection using DCT-PCA feature extraction and classifier ensemble
- Haque, Md Sarwar, Hassan, Md Rafiul, BinMakhashen, Galal, Owaidh, Abdullah, Kamruzzaman, Joarder
- Authors: Haque, Md Sarwar , Hassan, Md Rafiul , BinMakhashen, Galal , Owaidh, Abdullah , Kamruzzaman, Joarder
- Date: 2018
- Type: Text , Conference proceedings
- Relation: 17th International Conference on Intelligent Systems Design and Applications, ISDA 2017; Delhi, India; 14th-16th December 2017; published in Intelligent Systems Design and Applications (part of the Advances in Intelligent Systems and Computing book series) Vol. 736, p. 702-711
- Full Text:
- Reviewed:
- Description: It is well known that breast density in mammograms may hinder the accuracy of breast cancer diagnosis. Although dense breasts should be processed in a special manner, most research has treated dense breasts almost the same as fatty ones. Consequently, dense tissue in the breast may be misdiagnosed as developed cancer. Dense and fatty breasts should therefore be clearly distinguished before diagnosing a breast as cancerous or not. In this paper, we develop a system that automatically analyzes mammograms and identifies significant features. For feature extraction, we develop a novel system combining a two-dimensional discrete cosine transform (2D-DCT) and principal component analysis (PCA) to extract a minimal feature set from mammograms to differentiate breast density. These features are fed to three classifiers: a Backpropagation Multilayer Perceptron (MLP), a Support Vector Machine (SVM) and K-Nearest Neighbour (KNN). Majority voting on the outputs of the different machine learning tools is also investigated to enhance classification performance. The results show that features extracted using the DCT-PCA combination provide very high classification performance when using majority voting of the classifier outputs from the MLP, SVM, and KNN.
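The DCT-PCA pipeline and the majority-voting fusion can be sketched from first principles with NumPy. This is an illustrative reconstruction under stated assumptions (image size, the 8x8 low-frequency block kept, five principal components); the three trained classifiers are not reimplemented here, so the vote fuser takes their label lists as input.

```python
import numpy as np
from collections import Counter

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def dct2(img):
    """2D DCT via separable 1D transforms: C @ img @ C.T."""
    C = dct_matrix(img.shape[0])
    return C @ img @ C.T

def extract_features(images, n_keep=8, n_components=5):
    """Keep the low-frequency n_keep x n_keep DCT block per image, then
    reduce with PCA (SVD of the centred feature matrix)."""
    X = np.array([dct2(im)[:n_keep, :n_keep].ravel() for im in images])
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def majority_vote(predictions):
    """Fuse per-classifier label lists (e.g. MLP, SVM, KNN outputs)."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]
```

In the paper's setup the reduced feature vectors would train the MLP, SVM, and KNN separately, and `majority_vote` would combine their per-sample predictions into the final density class.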
Blockchain technology and application : an overview
- Dong, Shi, Abbas, Khushnood, Li, Meixi, Kamruzzaman, Joarder
- Authors: Dong, Shi , Abbas, Khushnood , Li, Meixi , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: PeerJ Computer Science Vol. 9, no. (2023), p.
- Full Text:
- Reviewed:
- Description: In recent years, with the rise of digital currency, its underlying technology, blockchain, has become increasingly well-known. This technology has several key characteristics, including decentralization, time-stamped data, a consensus mechanism, traceability, programmability, security, and credibility, and block data is essentially tamper-proof. Due to these characteristics, blockchain can address the shortcomings of traditional financial institutions. As a result, this emerging technology has garnered significant attention from financial intermediaries, technology-based companies, and government agencies. This article offers an overview of the fundamentals of blockchain technology and its various applications. The introduction defines blockchain and explains its fundamental working principles, emphasizing features such as decentralization, immutability, and transparency. The article then traces the evolution of blockchain, from its inception in cryptocurrency to its development as a versatile tool with diverse potential applications. The main body of the article explores the fundamentals of blockchain systems, their limitations, their various applications, and their applicability. Finally, the study concludes by discussing the present state of blockchain technology and its future potential, as well as the challenges that must be surmounted to unlock its full potential. © Copyright 2023 Dong et al
Remote reconfiguration of FPGA-based wireless sensor nodes for flexible Internet of Things
- Aziz, Syed, Hoskin, Dylan, Pham, Duc, Kamruzzaman, Joarder
- Authors: Aziz, Syed , Hoskin, Dylan , Pham, Duc , Kamruzzaman, Joarder
- Date: 2022
- Type: Text , Journal article
- Relation: Computers and Electrical Engineering Vol. 100, no. (2022), p.
- Full Text:
- Reviewed:
- Description: Recently, sensor nodes in Wireless Sensor Networks (WSNs) have been using Field Programmable Gate Arrays (FPGAs) for high-speed, low-power processing and reconfigurability. Reconfigurability enables adaptation of functionality and performance to changing requirements. This paper presents an efficient architecture for full remote reconfiguration of FPGA-based wireless sensors. The novelty of the work includes the ability to wirelessly upload new configuration bitstreams to remote sensor nodes using a protocol developed to provide full remote access to the flash memory of the sensor nodes. Results show that the FPGA can be remotely reconfigured in 1.35 s using a bitstream stored in the flash memory. The proposed scheme uses a negligible amount of FPGA logic and does not require a dedicated microcontroller or softcore processor. It can help develop a truly flexible IoT, where the FPGAs on thousands of sensor nodes can be reprogrammed, or new configuration bitstreams uploaded, without requiring physical access to the nodes. © 2022
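A wireless bitstream upload of the kind described needs the configuration data split into frames the node can write to flash and verify. The framing below (sequence number, length, CRC-32 trailer) is a hypothetical sketch of such a protocol, not the paper's actual wire format, and the 256-byte payload size is an assumption.

```python
import struct
import zlib

FRAME_PAYLOAD = 256  # bytes of bitstream data per radio frame (assumed)

def make_frames(bitstream: bytes):
    """Split a configuration bitstream into numbered, checksummed frames
    for wireless upload to a node's flash memory."""
    frames = []
    for seq, off in enumerate(range(0, len(bitstream), FRAME_PAYLOAD)):
        payload = bitstream[off:off + FRAME_PAYLOAD]
        header = struct.pack(">IH", seq, len(payload))  # seq number, length
        crc = struct.pack(">I", zlib.crc32(header + payload))
        frames.append(header + payload + crc)
    return frames

def parse_frame(frame: bytes):
    """Verify the CRC and return (seq, payload); raises on corruption."""
    body, crc = frame[:-4], struct.unpack(">I", frame[-4:])[0]
    if zlib.crc32(body) != crc:
        raise ValueError("corrupt frame")
    seq, length = struct.unpack(">IH", body[:6])
    return seq, body[6:6 + length]
```

On the node side, each verified payload would be written at `seq * FRAME_PAYLOAD` in flash, so lost or corrupted frames can be re-requested individually before the FPGA is reconfigured from the completed image.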
Decentralized content sharing in mobile ad-hoc networks : a survey
- Kaisar, Shahriar, Kamruzzaman, Joarder, Karmakar, Gour, Rashid, Md Mamunur
- Authors: Kaisar, Shahriar , Kamruzzaman, Joarder , Karmakar, Gour , Rashid, Md Mamunur
- Date: 2023
- Type: Text , Journal article , Review
- Relation: Digital Communications and Networks Vol. 9, no. 6 (2023), p. 1363-1398
- Full Text:
- Reviewed:
- Description: The evolution of smart mobile devices has significantly impacted the way we generate and share content and has introduced a huge volume of Internet traffic. To address this issue and take advantage of the short-range communication capabilities of smart mobile devices, the decentralized content sharing approach has emerged as a suitable and promising alternative. Decentralized content sharing uses a peer-to-peer network among co-located smart mobile device users to fulfil content requests. Several articles have been published to date addressing its different aspects, including group management, interest extraction, message forwarding, participation incentives, and content replication. This survey paper summarizes and critically analyzes recent advancements in decentralized content sharing and highlights potential research issues that need further consideration. © 2022 Chongqing University of Posts and Telecommunications
Deep learning and federated learning for screening COVID-19 : a review
- Mondal, M., Bharati, Subrato, Podder, Prajoy, Kamruzzaman, Joarder
- Authors: Mondal, M. , Bharati, Subrato , Podder, Prajoy , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article , Review
- Relation: BioMedInformatics Vol. 3, no. 3 (2023), p. 691-713
- Full Text:
- Reviewed:
- Description: Since December 2019, a novel coronavirus disease (COVID-19) has infected millions of individuals. This paper conducts a thorough study of the use of deep learning (DL) and federated learning (FL) approaches to COVID-19 screening. To begin, an evaluation of research articles published between 1 January 2020 and 28 June 2023 is presented, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The review compares various medical imaging datasets, including X-ray, computed tomography (CT) scan, and ultrasound images, in terms of the number of images, COVID-19 samples, and classes in the datasets. Following that, a description of existing DL algorithms applied to various datasets is offered. Additionally, a summary of recent work on FL for COVID-19 screening is provided. Efforts to improve the quality of FL models are comprehensively reviewed and objectively evaluated. © 2023 by the authors.
Survey of intrusion detection systems : techniques, datasets and challenges
- Khraisat, Ansam, Iqbal, Gondal, Vamplew, Peter, Kamruzzaman, Joarder
- Authors: Khraisat, Ansam , Iqbal, Gondal , Vamplew, Peter , Kamruzzaman, Joarder
- Date: 2019
- Type: Text , Journal article
- Relation: Cybersecurity Vol. 2 , no. 1 (2019), p. 1-22
- Full Text:
- Reviewed:
A novel OFDM format and a machine learning based dimming control for LiFi
- Nowrin, Itisha, Mondal, M., Islam, Rashed, Kamruzzaman, Joarder
- Authors: Nowrin, Itisha , Mondal, M. , Islam, Rashed , Kamruzzaman, Joarder
- Date: 2021
- Type: Text , Journal article
- Relation: Electronics (Switzerland) Vol. 10, no. 17 (2021), p.
- Full Text:
- Reviewed:
- Description: This paper proposes a new hybrid orthogonal frequency division multiplexing (OFDM) format, termed DC-biased pulse amplitude modulated optical OFDM (DPO-OFDM), which combines the ideas of the existing DC-biased optical OFDM (DCO-OFDM) and pulse amplitude modulated discrete multitone (PAM-DMT). The analysis indicates that the required DC-bias for DPO-OFDM-based light fidelity (LiFi) depends on the dimming level and the components of the DPO-OFDM. The bit error rate (BER) performance and dimming flexibility of DPO-OFDM and existing OFDM schemes are evaluated using MATLAB tools. The results show that the proposed DPO-OFDM is power efficient and has a wide dimming range. Furthermore, a switching algorithm is introduced for LiFi, where the individual components of the hybrid OFDM are switched according to a target dimming level. Next, machine learning algorithms are used for the first time to find the appropriate proportions of the hybrid OFDM components. It is shown that polynomial regression of degree 4 can reliably predict the constellation size of the DCO-OFDM component of DPO-OFDM for a given constellation size of PAM-DMT. With the component switching and the machine learning algorithms, DPO-OFDM-based LiFi is power efficient over a wide dimming range. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
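The degree-4 polynomial regression step can be sketched as follows. The training pairs here are synthetic stand-ins (an assumed inverse relationship between the two component constellation sizes); real pairs would come from link-level simulations as in the paper, so this only illustrates the fit-and-predict mechanics.

```python
import numpy as np

# Illustrative stand-in data: PAM-DMT constellation size (input) against
# an assumed DCO-OFDM constellation size (target). These values are NOT
# from the paper; they merely give the regression something to fit.
pam_sizes = np.array([2, 4, 8, 16, 32, 64], dtype=float)
dco_sizes = np.array([64, 32, 16, 8, 4, 2], dtype=float)

# Fit a degree-4 polynomial in log2-space, where constellation sizes that
# trade off a fixed total rate vary smoothly.
coeffs = np.polyfit(np.log2(pam_sizes), np.log2(dco_sizes), deg=4)

def predict_dco_size(pam_size):
    """Predict the DCO-OFDM constellation size for a given PAM-DMT size,
    rounded to the nearest power of two (valid constellation sizes)."""
    log_pred = np.polyval(coeffs, np.log2(pam_size))
    return int(2 ** round(float(log_pred)))
```

The rounding step matters in practice: the regression output is continuous, but the transmitter can only switch between power-of-two constellation sizes.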
Assessing trust level of a driverless car using deep learning
- Karmakar, Gour, Chowdhury, Abdullahi, Das, Rajkumar, Kamruzzaman, Joarder, Islam, Syed
- Authors: Karmakar, Gour , Chowdhury, Abdullahi , Das, Rajkumar , Kamruzzaman, Joarder , Islam, Syed
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Intelligent Transportation Systems Vol. 22, no. 7 (2021), p. 4457-4466
- Full Text: false
- Reviewed:
- Description: The increasing adoption of driverless cars is already driving a shift away from traditional transportation systems to automated ones in many industrial and commercial applications. Recent research has shown that driverless vehicles will considerably reduce traffic congestion, accidents, and carbon emissions, and enhance the accessibility of driving to a wider cross-section of people and lifestyle choices. However, at present, people's main concerns are about their privacy and security. Since traditional protocol-layer-based security mechanisms are not very effective for a distributed system, trust value-based security mechanisms, a type of pervasive security, are emerging as popular and promising techniques. A few statistical, non-learning-based models for measuring the trust level of a driverless car are available in the current literature. These are not very effective because they cannot capture the extremely distributed, dynamic, and complex nature of traffic systems. To bridge this research gap, in this paper, for the first time, we propose two deep learning-based models that measure the trustworthiness of a driverless car and its major On-Board Unit (OBU) components. The second model also determines which of its OBU components were breached during the driving operation. Results produced using real and simulated traffic data demonstrate that our proposed DNN-based deep learning models outperform other machine learning models in assessing the trustworthiness of an individual car as well as its OBU components. The average precision of detection for the car, LiDAR, camera, and radar is 0.99, 0.96, 0.81, and 0.83, respectively, which indicates the potential real-life applicability of our models in assessing the trust level of a driverless car. © 2000-2011 IEEE.
A comprehensive spectrum trading scheme based on market competition, reputation and buyer specific requirements
- Hassan, Md Rakib, Karmakar, Gour, Kamruzzaman, Joarder, Srinivasan, Bala
- Authors: Hassan, Md Rakib , Karmakar, Gour , Kamruzzaman, Joarder , Srinivasan, Bala
- Date: 2015
- Type: Text , Journal article
- Relation: Computer Networks Vol. 84, no. (2015), p. 17-31
- Full Text:
- Reviewed:
- Description: In the exclusive-use model of spectrum trading, cognitive radio devices or secondary users can buy spectrum resources from licensed users or primary users for a short or long period of time. Considering such spectrum access, a trading model is introduced where a buyer can select a set of candidate sellers based on their reputation and their offers in fulfilling its requirements, namely, offered signal quality, contract duration, coverage and bandwidth. Similarly, a seller can assess a buyer as a potential trading partner considering the buyer's reliability, which the seller can derive from the buyer's reputation and financial profile. In our scheme, seller reputation or buyer reliability can be either obtained from a reputation brokerage service, if one exists, or calculated using our model. Since in a competitive market the price of a seller depends on that of other sellers, game theory is used to model the competition among multiple sellers. An optimization technique is used by a buyer to select the best seller(s) and optimize the purchase to maximize its utility. This may result in buying a certain amount of bandwidth from each of multiple sellers, depending on price, requirements and budget constraints. Stability of the model is analyzed, and performance evaluation shows that it benefits sellers and buyers in terms of profit and throughput, respectively. © 2015 Elsevier B.V. All rights reserved.
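The buyer-side optimization described above can be caricatured with a greedy sketch (my simplification, not the paper's game-theoretic formulation): rank offers by reputation per unit price and buy bandwidth until the requirement or budget is exhausted. All field names are hypothetical.

```python
def select_sellers(offers, bandwidth_needed, budget):
    """Greedy buyer sketch. offers: list of dicts with hypothetical
    fields {seller, price_per_mhz, bandwidth, reputation}."""
    ranked = sorted(offers,
                    key=lambda o: o["reputation"] / o["price_per_mhz"],
                    reverse=True)
    purchases, spent, acquired = [], 0.0, 0.0
    for o in ranked:
        if acquired >= bandwidth_needed or spent >= budget:
            break
        affordable = (budget - spent) / o["price_per_mhz"]
        take = min(o["bandwidth"], bandwidth_needed - acquired, affordable)
        if take > 0:
            purchases.append((o["seller"], take))
            spent += take * o["price_per_mhz"]
            acquired += take
    return purchases, acquired, spent

offers = [
    {"seller": "A", "price_per_mhz": 2.0, "bandwidth": 6.0, "reputation": 0.9},
    {"seller": "B", "price_per_mhz": 1.0, "bandwidth": 5.0, "reputation": 0.8},
    {"seller": "C", "price_per_mhz": 3.0, "bandwidth": 10.0, "reputation": 0.6},
]
purchases, acquired, spent = select_sellers(offers, 10.0, 20.0)
```

Note how the result can span multiple sellers, matching the paper's observation that a buyer may purchase a certain amount of bandwidth from each of several sellers.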
Applications of machine learning and deep learning in antenna design, optimization, and selection : a review
- Sarker, Nayan, Podder, Prajoy, Mondal, M., Shafin, Sakib, Kamruzzaman, Joarder
- Authors: Sarker, Nayan , Podder, Prajoy , Mondal, M. , Shafin, Sakib , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 11, no. (2023), p. 103890-103915
- Full Text:
- Reviewed:
- Description: This review paper provides an overview of the latest developments in artificial intelligence (AI)-based antenna design and optimization for wireless communications. Machine learning (ML) and deep learning (DL) algorithms are applied to antenna engineering to improve the efficiency of the design and optimization processes. The review discusses the use of electromagnetic (EM) simulators such as computer simulation technology (CST) and high-frequency structure simulator (HFSS) for ML and DL-based antenna design, and also covers reinforcement learning (RL)-based approaches. Various antenna optimization methods, including parallel optimization, single and multi-objective optimization, variable fidelity optimization, multilayer ML-assisted optimization, and surrogate-based optimization, are discussed. The review also covers AI-based antenna selection approaches for wireless applications. To support the automation of antenna engineering, the data generation technique with computational electromagnetics software is described and some useful datasets are reported. The review concludes that ML/DL can enhance antenna behavior prediction, reduce the number of simulations, improve computational efficiency, and speed up the antenna design process. © 2013 IEEE.
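Surrogate-based optimization, one of the methods the review covers, replaces expensive EM simulations with a cheap fitted model. A minimal sketch (all numbers hypothetical, with a stand-in for the CST/HFSS run):

```python
def em_simulation(length_mm):
    """Stand-in for an expensive CST/HFSS run: a hypothetical reflection
    metric minimised at a 28.5 mm patch length."""
    return (length_mm - 28.5) ** 2 + 1.0

def quadratic_surrogate(xs, ys):
    """Lagrange quadratic through three (parameter, response) samples."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    def s(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return s

samples = [20.0, 30.0, 40.0]               # only three "simulator" calls
responses = [em_simulation(x) for x in samples]
surrogate = quadratic_surrogate(samples, responses)

# Sweep the cheap surrogate instead of the simulator.
grid = [20.0 + 0.1 * i for i in range(201)]
best_length = min(grid, key=surrogate)
```

The point the review makes is visible here: the number of simulator calls (three) is far smaller than the number of candidate designs evaluated (201).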
Cancer classification utilizing voting classifier with ensemble feature selection method and transcriptomic data
- Khatun, Rabea, Akter, Maksuda, Islam, Md Manowarul, Uddin, Md Ashraf, Talukder, Md Alamin, Kamruzzaman, Joarder, Azad, Akm, Paul, Bikash, Almoyad, Muhammad, Aryal, Sunil, Moni, Mohammad
- Authors: Khatun, Rabea , Akter, Maksuda , Islam, Md Manowarul , Uddin, Md Ashraf , Talukder, Md Alamin , Kamruzzaman, Joarder , Azad, Akm , Paul, Bikash , Almoyad, Muhammad , Aryal, Sunil , Moni, Mohammad
- Date: 2023
- Type: Text , Journal article
- Relation: Genes Vol. 14, no. 9 (2023), p.
- Full Text:
- Reviewed:
- Description: Biomarker-based cancer identification and classification tools are widely used in bioinformatics and machine learning fields. However, the high dimensionality of microarray gene expression data poses a challenge for identifying important genes in cancer diagnosis. Many feature selection algorithms optimize cancer diagnosis by selecting optimal features. This article proposes an ensemble rank-based feature selection method (EFSM) and an ensemble weighted average voting classifier (VT) to overcome this challenge. The EFSM uses a ranking method that aggregates features from individual selection methods to efficiently discover the most relevant and useful features. The VT combines support vector machine, k-nearest neighbor, and decision tree algorithms to create an ensemble model. The proposed method was tested on three benchmark datasets and compared to existing built-in ensemble models. The results show that our model achieved higher accuracy, with 100% for leukaemia, 94.74% for colon cancer, and 94.34% for the 11-tumor dataset. This study concludes by identifying a subset of the most important cancer-causing genes and demonstrating their significance compared to the original data. The proposed approach surpasses existing strategies in accuracy and stability, significantly impacting the development of ML-based gene analysis. It detects vital genes with higher precision and stability than other existing methods. © 2023 by the authors.
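The two building blocks of the paper above, rank aggregation across feature selectors (EFSM) and weighted-average soft voting (VT), can be sketched generically (my simplified stand-ins, not the authors' exact formulas):

```python
def aggregate_ranks(rankings):
    """EFSM-style sketch: combine per-method feature rankings by summing
    rank positions; a lower total means a more relevant feature."""
    totals = {}
    for ranking in rankings:
        for pos, feature in enumerate(ranking):
            totals[feature] = totals.get(feature, 0) + pos
    return sorted(totals, key=totals.get)

def weighted_vote(probas, weights):
    """Weighted-average soft voting over per-classifier class-probability
    vectors (e.g. from SVM, k-NN and a decision tree); returns the
    winning class index."""
    n_classes = len(probas[0])
    avg = [sum(w * p[c] for p, w in zip(probas, weights)) / sum(weights)
           for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

# Hypothetical gene rankings from two selection methods:
ranked_genes = aggregate_ranks([["g1", "g2", "g3"], ["g2", "g1", "g3"]])
# Three classifiers' class probabilities, first classifier weighted 3x:
decision = weighted_vote([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]], [3, 1, 1])
```

In practice the paper applies these to microarray expression matrices with thousands of genes; the sketch only shows the aggregation mechanics.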
An efficient RANSAC hypothesis evaluation using sufficient statistics for RGB-D pose estimation
- Senthooran, Ilankalkone, Murshed, Manzur, Barca, Jan, Kamruzzaman, Joarder, Chung, Hoam
- Authors: Senthooran, Ilankalkone , Murshed, Manzur , Barca, Jan , Kamruzzaman, Joarder , Chung, Hoam
- Date: 2019
- Type: Text , Journal article
- Relation: Autonomous Robots Vol. 43, no. 5 (2019), p. 1257-1270
- Full Text:
- Reviewed:
- Description: Achieving autonomous flight in GPS-denied environments begins with pose estimation in three-dimensional space, and this is much more challenging for an MAV in a swarm robotic system due to limited computational resources. In vision-based pose estimation, outlier detection is the most time-consuming step. This usually involves a RANSAC procedure using the reprojection-error method for hypothesis evaluation. The realignment-based hypothesis evaluation method is observed to be more accurate, but its considerably slower speed makes it unsuitable for robots with limited resources. We use sufficient statistics of least-squares minimisation to speed up this process. The additive nature of these sufficient statistics makes it possible to compute pose estimates in each evaluation by reusing previously computed statistics. Thus, estimates need not be calculated from scratch each time. The proposed method is tested on standard RANSAC, Preemptive RANSAC and R-RANSAC using benchmark datasets. The results show that the use of sufficient statistics speeds up the outlier detection process with realignment hypothesis evaluation for all RANSAC variants, achieving a speed-up of up to 6.72 times.
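The additivity idea exploited above can be shown in a much simpler setting than RGB-D pose alignment: a 2-D least-squares line fit, where merging two point sets only adds their sufficient statistics, so no raw point needs to be revisited (illustrative analogy only; the paper works with 3-D pose estimates).

```python
class LSStats:
    """Additive sufficient statistics for a least-squares line fit."""

    def __init__(self):
        self.n = 0
        self.sx = self.sy = self.sxx = self.sxy = 0.0

    def add(self, x, y):
        self.n += 1
        self.sx += x; self.sy += y
        self.sxx += x * x; self.sxy += x * y

    def merge(self, other):
        # The key property: combining two point sets is O(1),
        # independent of how many points each set contains.
        self.n += other.n
        self.sx += other.sx; self.sy += other.sy
        self.sxx += other.sxx; self.sxy += other.sxy

    def fit(self):
        denom = self.n * self.sxx - self.sx ** 2
        slope = (self.n * self.sxy - self.sx * self.sy) / denom
        intercept = (self.sy - slope * self.sx) / self.n
        return slope, intercept

a = LSStats(); a.add(0, 1); a.add(1, 3)       # points on y = 2x + 1
b = LSStats(); b.add(2, 5); b.add(3, 7)
a.merge(b)                                     # reuse, don't refit
slope, intercept = a.fit()
```

In a RANSAC loop, each hypothesis re-evaluation can thus reuse previously accumulated statistics rather than recomputing the estimate from scratch.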
RBFK cipher : a randomized butterfly architecture-based lightweight block cipher for IoT devices in the edge computing environment
- Rana, Sohel, Mondal, M., Kamruzzaman, Joarder
- Authors: Rana, Sohel , Mondal, M. , Kamruzzaman, Joarder
- Date: 2023
- Type: Text , Journal article
- Relation: Cybersecurity Vol. 6, no. 1 (2023), p.
- Full Text:
- Reviewed:
- Description: Internet security has become a major concern with the growing use of the Internet of Things (IoT) and edge computing technologies. Even though data processing is handled by the edge server, sensitive data is generated and stored by the IoT devices, which are subject to attack. Since most IoT devices have limited resources, they cannot properly run standard security algorithms such as AES, DES, and RSA. In this paper, a lightweight symmetric key cipher termed the randomized butterfly architecture of fast Fourier transform for key (RBFK) cipher is proposed for resource-constrained IoT devices in the edge computing environment. The butterfly architecture is used in the key scheduling system to produce strong round keys for five rounds of the encryption method. The RBFK cipher has two key sizes, 64 and 128 bits, with a block size of 64 bits. The RBFK cipher has a large avalanche effect due to the butterfly architecture, ensuring strong security. The proposed cipher satisfies the Shannon characteristics of confusion and diffusion. The memory usage and execution cycle of the RBFK cipher are assessed using the fair evaluation of the lightweight cryptographic systems (FELICS) tool. The proposed ciphers were also implemented using MATLAB 2021a to test key sensitivity by analyzing the histogram, correlation graph, and entropy of encrypted and decrypted images. Since the RBFK cipher provides better security with minimal computational complexity than recently proposed competing ciphers, it is suitable for IoT devices in an edge computing environment. © 2023, The Author(s).
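The butterfly idea in the key schedule above can be sketched structurally: pair words at stride n/2 and mix each pair, like one radix-2 FFT stage. This toy (the mixing constant and rotation are made up, and it is emphatically not the published RBFK schedule) also shows how to measure an avalanche-style distance between round-key sets:

```python
def butterfly_round(words):
    """One radix-2-style butterfly stage over 16-bit words."""
    n = len(words)
    half = n // 2
    out = list(words)
    for i in range(half):
        a, b = words[i], words[i + half]
        out[i] = (a + b) & 0xFFFF
        out[i + half] = ((a - b) ^ 0xB7E1) & 0xFFFF  # mixing constant: made up
    return out

def round_keys(key_words, rounds=5):
    """Derive `rounds` round keys by repeated butterfly mixing."""
    ks, state = [], list(key_words)
    for _ in range(rounds):
        state = butterfly_round(state)
        state = state[1:] + state[:1]  # rotate words between stages
        ks.append(list(state))
    return ks

def key_distance(ks1, ks2):
    """Total number of differing bits across all round keys."""
    return sum(bin(a ^ b).count("1")
               for r1, r2 in zip(ks1, ks2)
               for a, b in zip(r1, r2))

# A single-bit change in the master key should perturb many round-key bits.
d = key_distance(round_keys([1, 2, 3, 4]), round_keys([1, 2, 3, 5]))
```

A large `d` for a one-bit input difference is the avalanche property the paper attributes to the butterfly architecture.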
Weighted soft decision for cooperative sensing in cognitive radio networks
- Shahid, Mohammad, Kamruzzaman, Joarder
- Authors: Shahid, Mohammad , Kamruzzaman, Joarder
- Date: 2008
- Type: Text , Conference paper
- Relation: 2008 16th International Conference on Networks (ICON) p. 1-6
- Full Text:
- Reviewed:
- Description: Enhancing current services or deploying new services operating in the RF spectrum requires more licensed spectrum, which may not be provided by the regulatory bodies because of spectrum scarcity. On the contrary, recent studies suggest that many portions of the licensed spectrum remain unused or underused for significant periods of time, raising the prospect of opportunistic, unlicensed spectrum access. Among spectrum access techniques, sensing-based methods are considered optimal for their simplicity and cost effectiveness. In this paper, we introduce a new cooperative spectrum sensing technique which considers the spatial variation of secondary (unlicensed) users, where each user's contribution is weighted by a factor that depends on received power and path loss. Compared to existing techniques, the proposed one increases the sensing ability and spectrum utilization, and offers greater robustness to noise uncertainty. Moreover, this cooperative technique uses a very simple energy detector as its building block, thereby reducing the cost and operational complexity.
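The weighted soft combination described above can be sketched as follows (the weighting function is a hypothetical stand-in; the paper derives its weights from received power and path loss):

```python
def path_loss_weight(distance_m, exponent=3.0):
    """Hypothetical weighting: users closer to the primary transmitter
    (lower path loss) contribute more to the fused statistic."""
    return 1.0 / distance_m ** exponent

def fused_decision(energies, distances, threshold):
    """Weighted soft combination of per-user energy-detector statistics.
    Returns (primary_user_present, fused_statistic)."""
    weights = [path_loss_weight(d) for d in distances]
    statistic = sum(w * e for w, e in zip(weights, energies)) / sum(weights)
    return statistic >= threshold, statistic

# One nearby user sees a strong signal; two distant users see only noise.
decided, stat = fused_decision([10.0, 2.0, 2.0], [10.0, 100.0, 100.0], 5.0)
```

Here the nearby user's strong reading dominates the fused statistic, whereas a plain unweighted average of the three energies would fall below the threshold and miss the detection.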
A technique for parallel share-frequent sensor pattern mining from wireless sensor networks
- Rashid, Md. Mamunur, Gondal, Iqbal, Kamruzzaman, Joarder
- Authors: Rashid, Md. Mamunur , Gondal, Iqbal , Kamruzzaman, Joarder
- Date: 2014
- Type: Text , Conference paper
- Relation: 14th Annual International Conference on Computational Science, ICCS 2014; Cairns, Australia; 10th-12th June 2014; published in Procedia Computer Science p. 124-133
- Full Text:
- Reviewed:
- Description: WSNs generate huge amounts of data in the form of streams, and mining useful knowledge from these streams is a challenging task. Existing works generate sensor association rules using the occurrence frequency of patterns with binary frequency (either absent or present) or the support of a pattern as a criterion. However, the binary frequency or support of a pattern may not be a sufficient indicator for finding meaningful patterns in WSN data because it only reflects the number of epochs in the sensor data which contain that pattern. The share measure of sensorsets can discover useful knowledge about the numerical values associated with sensors in a sensor database. Therefore, in this paper, we propose a new type of behavioral pattern called share-frequent sensor patterns, which considers the non-binary frequency values of sensors in epochs. To discover share-frequent sensor patterns from a sensor dataset, we propose a novel parallel technique. In this technique, we develop a novel tree structure, called the parallel share-frequent sensor pattern tree (PShrFSP-tree), which is constructed at each local node independently by capturing the database contents to generate the candidate patterns using a pattern growth technique with a single scan, and then merges the locally generated candidate patterns at the final stage to generate global share-frequent sensor patterns. Comprehensive experimental results show that our proposed model is very efficient for mining share-frequent patterns from WSN data in terms of time and scalability.
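The share measure contrasted with binary support above can be sketched directly (a minimal reading of the measure, with a toy epoch database; the paper's tree-based parallel mining is not reproduced here):

```python
def share(db, sensorset):
    """Share of a sensorset: the summed (non-binary) values of its
    sensors over epochs containing the whole set, divided by the total
    value in the database. db: list of {sensor: value} epochs."""
    total = sum(v for epoch in db for v in epoch.values())
    covered = [e for e in db if all(s in e for s in sensorset)]
    contribution = sum(e[s] for e in covered for s in sensorset)
    return contribution / total

def support(db, sensorset):
    """Binary support for comparison: fraction of epochs containing
    the whole sensorset, ignoring the sensors' values."""
    return sum(all(s in e for s in sensorset) for e in db) / len(db)

# Toy epoch database (hypothetical sensor readings):
db = [{"s1": 3, "s2": 2}, {"s1": 1}, {"s2": 4}]
```

For `{"s1"}`, support says it appears in 2 of 3 epochs, but share reveals it accounts for only 0.4 of the total sensed value, which is the extra information the paper's measure captures.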
Energy-balanced transmission policies for wireless sensor networks
- Azad, Arman, Kamruzzaman, Joarder
- Authors: Azad, Arman , Kamruzzaman, Joarder
- Date: 2011
- Type: Text , Journal article
- Relation: IEEE Transactions on Mobile Computing Vol. 10, no. 7 (2011), p. 927-940
- Full Text:
- Reviewed:
- Description: Transmission policy, in addition to topology control, routing, and MAC protocols, can play a vital role in extending network lifetime. Existing transmission policies, however, cause an extremely unbalanced energy usage that contributes to the early demise of some sensors, reducing the overall network lifetime drastically. Considering concentric rings around the sink, we decompose the transmission distance of the traditional multihop scheme into two parts, ring thickness and hop size, analyze the traffic and energy usage distribution among sensors, and determine how energy usage varies and the critical ring shifts with hop size. Based on these observations, we propose a transmission scheme and determine the optimal ring thickness and hop size by formulating network lifetime as an optimization problem. Numerical results show substantial improvements in terms of network lifetime and energy usage distribution over existing policies. Two other variations of this policy are also presented by redefining the optimization problem considering: 1) concomitant hop size variation by sensors over lifetime along with optimal duty cycles, and 2) a distinct set of hop sizes for sensors in each ring. Both variations bring increasingly uniform energy usage with lower critical energy and further improve lifetime. A heuristic for distributed implementation of each policy is also presented.
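The unbalanced energy usage motivating the paper can be illustrated numerically with a first-order radio energy model (constants are common literature values, not the paper's parameters, and the relay-load rule is a simplification of its traffic analysis):

```python
# First-order radio model: energy = bits * (electronics + amplifier * d^alpha)
E_ELEC = 50e-9    # J/bit for transmitter electronics (assumed value)
E_AMP = 100e-12   # J/bit/m^alpha for the amplifier (assumed value)
ALPHA = 2         # path-loss exponent

def tx_energy(bits, distance_m):
    return bits * (E_ELEC + E_AMP * distance_m ** ALPHA)

def ring_energy(ring, n_rings, bits, ring_thickness_m):
    """Ring r (1 = innermost) relays its own traffic plus that of all
    outer rings when every node forwards one ring inward per hop."""
    relayed_loads = n_rings - ring + 1
    return relayed_loads * tx_energy(bits, ring_thickness_m)

# Per-ring energy for 10 rings, 4096-bit packets, 50 m ring thickness.
energies = [ring_energy(r, 10, 4096, 50.0) for r in range(1, 11)]
```

The innermost ring spends ten times the energy of the outermost one here, making it the critical ring whose early demise the optimized ring thickness and hop size are designed to delay.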
Cyberattacks detection in iot-based smart city applications using machine learning techniques
- Rashid, Md Mamunur, Kamruzzaman, Joarder, Hassan, Mohammad, Imam, Tassadduq, Gordon, Steven
- Authors: Rashid, Md Mamunur , Kamruzzaman, Joarder , Hassan, Mohammad , Imam, Tassadduq , Gordon, Steven
- Date: 2020
- Type: Text , Journal article
- Relation: International Journal of Environmental Research and Public Health Vol. 17, no. 24 (2020), p. 1-21
- Full Text:
- Reviewed:
- Description: In recent years, the widespread deployment of the Internet of Things (IoT) applications has contributed to the development of smart cities. A smart city utilizes IoT-enabled technologies, communications and applications to maximize operational efficiency and enhance both the service providers’ quality of services and people’s wellbeing and quality of life. With the growth of smart city networks, however, comes the increased risk of cybersecurity threats and attacks. IoT devices within a smart city network are connected to sensors linked to large cloud servers and are exposed to malicious attacks and threats. Thus, it is important to devise approaches to prevent such attacks and protect IoT devices from failure. In this paper, we explore an attack and anomaly detection technique based on machine learning algorithms (LR, SVM, DT, RF, ANN and KNN) to defend against and mitigate IoT cybersecurity threats in a smart city. Contrary to existing works that have focused on single classifiers, we also explore ensemble methods such as bagging, boosting and stacking to enhance the performance of the detection system. Additionally, we consider an integration of feature selection, cross-validation and multi-class classification for the discussed domain, which has not been well considered in the existing literature. Experimental results with the recent attack dataset demonstrate that the proposed technique can effectively identify cyberattacks and the stacking ensemble model outperforms comparable models in terms of accuracy, precision, recall and F1-Score, implying the promise of stacking in this domain. © 2020 by the authors. Licensee MDPI, Basel, Switzerland.
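The stacking idea highlighted above, a meta-learner trained on base detectors' outputs, can be sketched with toy components (a perceptron meta-learner over two hypothetical rule-based detectors; the paper itself stacks LR, SVM, DT, RF, ANN and KNN):

```python
def train_perceptron(meta_X, y, epochs=200, lr=0.1):
    """Tiny meta-learner: a perceptron trained on the base models'
    outputs (the level-1 features of a stacking ensemble)."""
    w = [0.0] * len(meta_X[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(meta_X, y):
            p = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if p != t:
                w = [wi + lr * (t - p) * xi for wi, xi in zip(w, x)]
                b += lr * (t - p)
    return w, b

# Hypothetical base detectors over a 2-feature "traffic record":
base_models = [
    lambda rec: 1 if rec[0] > 0.5 else 0,  # e.g. a packet-rate rule
    lambda rec: 1 if rec[1] > 0.5 else 0,  # e.g. a payload-size rule
]

def meta_features(rec):
    return [m(rec) for m in base_models]

# Train on records where an attack requires BOTH signals to fire,
# a pattern neither base rule captures alone.
X = [(0.9, 0.9), (0.9, 0.1), (0.1, 0.9), (0.1, 0.1)]
y = [1, 0, 0, 0]
w, b = train_perceptron([meta_features(r) for r in X], y)

def stacked_predict(rec):
    x = meta_features(rec)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

The meta-learner learns how to combine the base detectors' votes, which is why stacking can outperform any single classifier, as the paper's results indicate.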