Attacks on self-driving cars and their countermeasures : a survey
- Authors: Chowdhury, Abdullahi , Karmakar, Gour , Kamruzzaman, Joarder , Jolfaei, Alireza , Das, Rajkumar
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8, no. (2020), p. 207308-207342
- Full Text:
- Reviewed:
- Description: Intelligent Traffic Systems (ITS) are currently evolving in the form of cooperative ITS or connected vehicles. Both forms use data communications between Vehicle-To-Vehicle (V2V), Vehicle-To-Infrastructure (V2I/I2V) and other on-road entities, and are accelerating the adoption of self-driving cars. The development of cyber-physical systems containing advanced sensors, sub-systems, and smart driving assistance applications over the past decade is equipping unmanned aerial and road vehicles with autonomous decision-making capabilities. The level of autonomy depends upon the make-up and degree of sensor sophistication and the vehicle's operational applications. As a result, the possibility of self-driving cars being compromised is perceived as a serious threat. Therefore, an analysis of the threats and attacks on self-driving cars and ITSs, together with the corresponding countermeasures to reduce them, is needed. For this reason, some survey papers compiling potential attacks on VANETs, ITSs and self-driving cars, and their detection mechanisms, are available in the current literature. However, to the best of our knowledge, they have not covered real attacks that have already occurred on self-driving cars. To bridge this research gap, in this paper we analyze the attacks that have already targeted self-driving cars and extensively present potential cyber-attacks, their impacts on those cars, and the associated vulnerabilities. For recently reported attacks, we describe the possible mitigation strategies taken by manufacturers and governments. This survey also includes recent work on how a self-driving car can ensure resilient operation even under an ongoing cyber-attack. We also provide further research directions to improve the security of self-driving cars. © 2013 IEEE.
Water quality management using hybrid machine learning and data mining algorithms : an indexing approach
- Authors: Aslam, Bilal , Maqsoom, Ahsen , Cheema, Ali , Ullah, Fahim , Alharbi, Abdullah , Imran, Muhammad
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 119692-119705
- Full Text:
- Reviewed:
- Description: One of the key functions of global water resource management authorities is river water quality (WQ) assessment. A water quality index (WQI) is developed for water assessments considering numerous quality-related variables. WQI assessments typically take a long time and are prone to errors during sub-indices generation. This can be tackled through the latest machine learning (ML) techniques renowned for superior accuracy. In this study, water samples were taken from wells in the study area (North Pakistan) to develop WQI prediction models. Four standalone algorithms, i.e., random trees (RT), random forest (RF), M5P, and reduced error pruning tree (REPT), were used in this study. In addition, 12 hybrid data-mining algorithms (combinations of the standalone algorithms with bagging (BA), cross-validation parameter selection (CVPS), and randomizable filtered classification (RFC)) were also used. Using the 10-fold cross-validation technique, the data were separated into two groups (70:30) for algorithm creation. Ten random input permutations were created using Pearson correlation coefficients to identify the best possible combination of datasets for improving algorithm prediction. The variables with very low correlations performed poorly, whereas the hybrid algorithms increased the prediction capability of numerous standalone algorithms. The hybrid RT-Artificial Neural Network (RT-ANN), with RMSE = 2.319, MAE = 2.248, NSE = 0.945, and PBIAS = -0.64, outperformed all other algorithms. Most algorithms overestimated WQI values except for BA-RF, RF, BA-REPT, REPT, RFC-M5P, RFC-REPT, and the ANN-Adaptive Network-Based Fuzzy Inference System (ANFIS). © 2013 IEEE.
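The correlation-based input screening this abstract describes can be sketched in a few lines. `pearson_r` and `rank_inputs` are illustrative names (not functions from the paper), and the toy data are invented:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rank_inputs(samples, wqi):
    """Rank candidate input variables by |r| against the WQI target.

    samples: dict mapping variable name -> list of measurements
    wqi:     list of WQI values aligned with the measurements
    """
    scored = {name: abs(pearson_r(vals, wqi)) for name, vals in samples.items()}
    return sorted(scored, key=scored.get, reverse=True)

# Toy data: 'ph' tracks the WQI closely, 'noise' does not,
# so 'ph' should head the ranking.
samples = {"ph": [7.1, 7.4, 6.9, 7.8, 7.0],
           "noise": [1.0, 0.2, 0.9, 0.1, 0.5]}
wqi = [71.0, 74.0, 69.0, 78.0, 70.0]
ranking = rank_inputs(samples, wqi)
```

Input permutations built from high-|r| variables would then feed the standalone and hybrid learners.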
- Authors: Yousafzai, Abdullah , Yaqoob, Ibrar , Imran, Muhammad , Gani, Abdullah , Md Noor, Rafidah
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Internet of Things Journal Vol. 7, no. 5 (2020), p. 4171-4182
- Full Text: false
- Reviewed:
- Description: Mobile devices have become an indispensable component of the Internet of Things (IoT). However, these devices have resource constraints in processing capability, battery power, and storage space, hindering the execution of computation-intensive applications that often require broad bandwidth, stringent response times, long battery life, and heavy computing power. Mobile cloud computing and mobile edge computing (MEC) are emerging technologies that can meet the aforementioned requirements using offloading algorithms. In this article, we analyze the effect of platform-dependent native applications on computational offloading in edge networks and propose a lightweight process migration-based computational offloading framework. The proposed framework does not require application binaries at edge servers and thus seamlessly migrates native applications. The proposed framework is evaluated using an experimental testbed. Numerical results reveal that the proposed framework saves almost 44% of the execution time and 84% of the energy consumption. Hence, the proposed framework shows profound potential for resource-intensive IoT application processing in MEC. © 2014 IEEE.
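The offload-or-not decision at the heart of such frameworks can be illustrated with a simple energy model. The parameter names and the assumption that the mains-powered edge server's compute energy is free to the device are ours, not the paper's:

```python
def should_offload(cycles, data_bits, f_local_hz, p_cpu_w,
                   bandwidth_bps, p_tx_w):
    """Offload when the radio energy to ship the task's input data is
    lower than the CPU energy to execute the task locally."""
    e_local = p_cpu_w * cycles / f_local_hz          # joules to compute locally
    e_offload = p_tx_w * data_bits / bandwidth_bps   # joules to transmit input
    return e_offload < e_local

# A compute-heavy task with a small input favours offloading;
# a light task with a bulky input does not.
heavy = should_offload(1e9, 1e6, 1e9, 2.0, 1e7, 0.5)   # True
light = should_offload(1e6, 1e7, 1e9, 2.0, 1e6, 0.5)   # False
```

A full framework would add migration-state transfer time and edge execution latency to the same comparison.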
A fault-tolerant cascaded switched-capacitor multilevel inverter for domestic applications in smart grids
- Authors: Akbari, Ehsan , Teimouri, Ali , Saki, Mojtaba , Rezaei, Mohammad , Hu, Jiefeng , Band, Shahab , Pai, Hao-Ting , Mosavi, Amir
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 110590-110602
- Full Text:
- Reviewed:
- Description: Cascaded multilevel inverters (MLIs) generate an output voltage using series-connected power modules that employ standard configurations of low-voltage components. Each module may employ one or more switched capacitors to double or quadruple its input voltage. The higher number of switched capacitors and semiconductor switches in MLIs compared to conventional two-level inverters has led to concerns about overall system reliability. A fault-tolerant design can mitigate this reliability issue. If one part of the system fails, the MLI can continue its planned operation at a reduced level rather than the entire system failing, which makes the fault tolerance of the MLI particularly important. In this paper, a novel fault location technique is presented that leads to a significant reduction in fault location detection time based on the reliability priority of the components of the proposed fault-tolerant switched capacitor cascaded MLI (CSCMLI). The main contribution of this paper is to reduce the number of MLI switches under fault conditions while operating at lower levels. The fault-tolerant inverter requires fewer switches at higher reliability, and the comparison with similar MLIs shows a faster dynamic response of fault detection and reduced fault location detection time. The experimental results confirm the effectiveness of the presented methods applied in the CSCMLI. Also, all experimental data including processor code, schematic, PCB, and video of CSCMLI operation are attached. © 2013 IEEE.
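Probing components in descending order of failure probability is one generic way to cut expected fault-location time; the sketch below is our illustration of that reliability-priority idea, not the paper's circuit-level technique, and the component names and rates are invented:

```python
def locate_fault(failure_rates, is_faulty):
    """Probe components in descending failure-rate order and return
    (faulty_component, probes_used). Checking the most failure-prone
    parts first minimises the expected number of probes."""
    ordered = sorted(failure_rates, key=failure_rates.get, reverse=True)
    for probes, comp in enumerate(ordered, start=1):
        if is_faulty(comp):
            return comp, probes
    return None, len(ordered)

# Switch S2 has the highest historical failure rate and is indeed the
# faulty part, so the priority-ordered search finds it on the first probe.
rates = {"S1": 0.05, "S2": 0.20, "S3": 0.10, "C1": 0.02}
comp, probes = locate_fault(rates, lambda c: c == "S2")
```

An unordered scan of the same component list would need two or more probes on average for this fault.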
A blockchain-based deep-learning-driven architecture for quality routing in wireless sensor networks
- Authors: Khan, Zahoor , Amjad, Sana , Ahmed, Farwa , Almasoud, Abdullah , Imran, Muhammad , Javaid, Nadeem
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 31036-31051
- Full Text:
- Reviewed:
- Description: Over the past few years, great importance has been given to wireless sensor networks (WSNs) as they play a significant role in providing daily-life services such as healthcare, military and social applications. However, the heterogeneous nature of WSNs makes them prone to various attacks, which results in low throughput, high network delay and high energy consumption. In WSNs, routing is performed using different routing protocols, such as low-energy adaptive clustering hierarchy (LEACH) and heterogeneous gateway-based energy-aware multi-hop routing (HMGEAR). In such protocols, some nodes in the network may perform malicious activities. Therefore, four deep learning (DL) techniques and a blockchain-based real-time message content validation (RMCV) scheme are used in the proposed network for the detection of malicious nodes (MNs). Moreover, to analyse the routing data in the WSN, the DL models are trained on a state-of-the-art dataset generated from LEACH, known as WSN-DS 2016. The WSN contains three types of nodes: sensor nodes, cluster heads (CHs) and the base station (BS). The CHs, after aggregating the data received from the sensor nodes, send it to the BS. Furthermore, to overcome the single-point-of-failure issue, a decentralized blockchain is deployed on the CHs and the BS. Additionally, MNs are removed from the network using the RMCV and DL techniques. Moreover, legitimate nodes (LNs) are registered in the blockchain network using the proof-of-authority consensus protocol, which outperforms proof-of-work in terms of computational cost. Later, routing is performed between the LNs using the different routing protocols, and the results are compared with the original LEACH and HMGEAR protocols. The results show that the accuracy of GRU is 97%, LSTM is 96%, CNN is 92% and ANN is 90%. Throughput, delay and the death of the first node are computed for LEACH, LEACH with DL, LEACH with RMCV, HMGEAR, HMGEAR with DL and HMGEAR with RMCV. Moreover, Oyente is used to perform a formal security analysis of the designed smart contract, which shows that the blockchain network is resilient against vulnerabilities. © 2013 IEEE.
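The computational-cost gap between proof-of-work and proof-of-authority that the abstract cites comes down to sealing a block by nonce search versus by signer identity. This generic sketch (not the paper's smart contract) illustrates the difference:

```python
import hashlib

def pow_seal(header: bytes, leading_zeros: int) -> int:
    """Proof-of-work: scan nonces until the block hash meets the
    difficulty target; expected cost grows as 16**leading_zeros
    hash evaluations."""
    nonce = 0
    while not hashlib.sha256(header + nonce.to_bytes(8, "big")) \
            .hexdigest().startswith("0" * leading_zeros):
        nonce += 1
    return nonce

def poa_seal(header: bytes, signer: str, authorities: frozenset) -> str:
    """Proof-of-authority: an authorised signer seals the block with a
    single hash, so the computational cost is negligible."""
    if signer not in authorities:
        raise PermissionError("signer is not an authority")
    return hashlib.sha256(signer.encode() + header).hexdigest()

seal = poa_seal(b"block-42", "CH-1", frozenset({"CH-1", "BS"}))
nonce = pow_seal(b"block-42", 2)  # already ~256 hash trials on average
```

Registering CHs and the BS as authorities thus keeps consensus cheap enough for energy-constrained WSN hardware.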
Extending the technology acceptance model for use of e-learning systems by digital learners
- Authors: Hanif, Aamer , Jamal, Faheem , Imran, Muhammad
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6, no. (2018), p. 73395-73404
- Full Text:
- Reviewed:
- Description: Technology-based learning systems enable enhanced student learning in higher-education institutions. This paper evaluates the factors affecting behavioral intention of students toward using e-learning systems in universities to augment classroom learning. Based on the technology acceptance model, this paper proposes six external factors that influence the behavioral intention of students toward use of e-learning. A quantitative approach involving structural equation modeling is adopted, and research data collected from 437 undergraduate students enrolled in three academic programs is used for analysis. Results indicate that subjective norm, perception of external control, system accessibility, enjoyment, and result demonstrability have a significant positive influence on perceived usefulness and on perceived ease of use of the e-learning system. This paper also examines the relevance of some previously used external variables, e.g., self-efficacy, experience, and computer anxiety, for present-world students who have been brought up as digital learners and have higher levels of computer literacy and experience. © 2018 IEEE.
An efficient boolean modelling approach for genetic network inference
- Authors: Gamage, Hasini , Chetty, Madhu , Shatte, Arian , Hallinan, Jennifer
- Date: 2021
- Type: Text , Conference paper
- Relation: 2021 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology, CIBCB 2021, Virtual, Online, 13-15 October 2021, 2021 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology, CIBCB 2021
- Full Text:
- Reviewed:
- Description: The inference of Gene Regulatory Networks (GRNs) from time series gene expression data is an effective approach for unveiling important underlying gene-gene relationships and dynamics. While various computational models exist for accurate inference of GRNs, many are computationally inefficient, and do not focus on simultaneous inference of both network topology and dynamics. In this paper, we introduce a simple, Boolean network model-based solution for efficient inference of GRNs. First, the microarray expression data are discretized using the average gene expression value as a threshold. This step permits an experimental approach of defining the maximum indegree of a network. Next, regulatory genes, including the self-regulations for each target gene, are inferred using estimated multivariate mutual information-based Min-Redundancy Max-Relevance Criterion, and further accurate inference is performed by a swapping operation. Subsequently, we introduce a new method, combining Boolean network regulation modelling and Pearson correlation coefficient to identify the interaction types (inhibition or activation) of the regulatory genes. This method is utilized for the efficient determination of the optimal regulatory rule, consisting AND, OR, and NOT operators, by defining the accurate application of the NOT operation in conjunction and disjunction Boolean functions. The proposed approach is evaluated using two real gene expression datasets for an Escherichia coli gene regulatory network and a fission yeast cell cycle network. Although the Structural Accuracy is approximately the same as existing methods (MIBNI, REVEAL, Best-Fit, BIBN, and CST), the proposed method outperforms all these methods with respect to efficiency and Dynamic Accuracy. © 2021 IEEE.
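The mean-threshold discretization step and the Dynamic Accuracy criterion can be sketched directly; the toy genes, states, and the AND-NOT rule below are invented for illustration:

```python
def discretize(series):
    """Binarise a gene-expression series using its own mean as the
    threshold, matching the abstract's first step."""
    mean = sum(series) / len(series)
    return [1 if v > mean else 0 for v in series]

def dynamic_accuracy(rule, regulators, target):
    """Fraction of transitions where the Boolean rule predicts the target
    gene's next state from the regulators' current states."""
    steps = len(target) - 1
    hits = sum(rule(*(g[t] for g in regulators)) == target[t + 1]
               for t in range(steps))
    return hits / steps

# Toy network: target(t+1) = g1(t) AND NOT g2(t)
g1 = discretize([0.2, 0.9, 0.8, 0.1])   # -> [0, 1, 1, 0]
g2 = discretize([0.9, 0.1, 0.8, 0.2])   # -> [1, 0, 1, 0]
target = [0, 0, 1, 0]
acc = dynamic_accuracy(lambda a, b: int(a and not b), [g1, g2], target)
```

Candidate AND/OR/NOT rules over the mRMR-selected regulators would be scored this way, keeping the best-scoring rule per target gene.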
Smart sensing-enabled decision support system for water scheduling in orange orchard
- Authors: Khan, Rahim , Zakarya, Muhammad , Balasubramanian, Venki , Jan, Mian , Menon, Varun
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Sensors Journal Vol. 21, no. 16 (2021), p. 17492-17499
- Full Text:
- Reviewed:
- Description: The scarcity of water resources throughout the world demands their optimum utilization in various sectors. Smart sensing-enabled irrigation management systems are ideal solutions to ensure the optimum utilization of water resources in the agriculture sector. This paper presents a wireless sensor network-enabled Decision Support System (DSS) for developing a need-based irrigation schedule for an orange orchard. For efficient monitoring of various in-field parameters, our proposed approach uses the latest smart sensing technology for soil moisture, leaf wetness, temperature and humidity. The proposed smart sensing-enabled test-bed was deployed in the orange orchard of our institute for approximately one year and successfully adjusted its irrigation schedule according to the needs and demands of the plants. Moreover, a modified Longest Common SubSequence (LCSS) mechanism is integrated with the proposed DSS to distinguish multi-valued noise from abruptly changing scenarios. To resolve the concurrent communication problem of two or more wasp-mote sensor boards with a common receiver, an enhanced RTS/CTS handshake mechanism is presented. Our proposed DSS compares the most recently refined data with pre-defined threshold values for efficient water management in the orchard. Irrigation activity is scheduled if the water-deficit criterion is met, and the farmer is informed accordingly. Both the experimental and simulation results show that the proposed scheme performs better than existing schemes. © 2001-2012 IEEE.
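The threshold comparison the DSS performs can be sketched as below. The median filter is only a stand-in for the paper's modified-LCSS noise handling, and all names and values are illustrative:

```python
def refine(window):
    """Damp spurious spikes (multi-valued noise) with the median of the
    most recent readings; a stand-in for the modified-LCSS step."""
    ordered = sorted(window)
    return ordered[len(ordered) // 2]

def schedule_irrigation(moisture_window, moisture_min):
    """Trigger irrigation when the refined soil-moisture reading falls
    below the pre-defined deficit threshold."""
    return refine(moisture_window) < moisture_min

# A faulty 95% spike among ~17% readings is filtered out, so the genuine
# deficit still triggers irrigation; a healthy window does not.
dry = schedule_irrigation([18, 95, 17, 16, 19], moisture_min=20)   # True
wet = schedule_irrigation([25, 26, 24, 27, 25], moisture_min=20)   # False
```

The real DSS would combine several refined parameters (moisture, leaf wetness, temperature, humidity) before notifying the farmer.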
Green underwater wireless communications using hybrid optical-acoustic technologies
- Authors: Islam, Kazi , Ahmad, Iftekhar , Habibi, Daryoush , Zahed, M. , Kamruzzaman, Joarder
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 85109-85123
- Full Text:
- Reviewed:
- Description: Underwater wireless communication is a rapidly growing field, especially with the recent emergence of technologies such as autonomous underwater vehicles (AUVs) and remotely operated vehicles (ROVs). To support the high-bandwidth applications using these technologies, underwater optics has attracted significant attention, alongside its complementary technology - underwater acoustics. In this paper, we propose a hybrid opto-acoustic underwater wireless communication model that reduces network power consumption and supports high-data rate underwater applications by selecting appropriate communication links in response to varying traffic loads and dynamic weather conditions. Underwater optics offers high data rates and consumes less power. However, due to the severe absorption of light in the medium, the communication range is short in underwater optics. Conversely, acoustics suffers from low data rate and high power consumption, but provides longer communication ranges. Since most underwater equipment relies on battery power, energy-efficient communication is critical for reliable underwater communications. In this work, we derive analytical models for both underwater acoustics and optics, and calculate the required transmit power for reliable communications in various underwater communication environments. We then formulate an optimization problem that minimizes the network power consumption for carrying data from underwater nodes to surface sinks under varying traffic loads and weather conditions. The proposed optimization model can be solved offline periodically, hence the additional computational complexity to find the optimum solution for larger networks is not a limiting factor for practical applications. Our results indicate that the proposed technique yields up to 35% power savings compared to existing opto-acoustic solutions. © 2013 IEEE.
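The optics-versus-acoustics trade-off the model exploits can be illustrated with a Beer-Lambert-style link budget. The attenuation coefficient, power limits and thresholds below are invented for illustration and are not the paper's analytical models:

```python
import math

def optical_tx_power(range_m, p_rx_req_w, attenuation_per_m=0.15):
    """Transmit power needed so the receiver still sees p_rx_req_w after
    exponential absorption of light over range_m (Beer-Lambert form)."""
    return p_rx_req_w * math.exp(attenuation_per_m * range_m)

def choose_link(range_m, p_rx_req_w, optical_max_w=5.0, acoustic_tx_w=10.0):
    """Prefer the low-power optical link while its budget closes; fall
    back to the long-range but power-hungry acoustic link otherwise."""
    p_opt = optical_tx_power(range_m, p_rx_req_w)
    if p_opt <= optical_max_w:
        return "optical", p_opt
    return "acoustic", acoustic_tx_w

short_link = choose_link(10.0, 1e-3)   # optical budget closes at 10 m
long_link = choose_link(100.0, 1e-3)   # absorption forces acoustics at 100 m
```

The paper's optimization generalises this per-link choice to whole routes from underwater nodes to surface sinks under varying traffic and weather.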
Vehicle trajectory clustering based on dynamic representation learning of internet of vehicles
- Wang, Wei, Xia, Feng, Nie, Hansong, Chen, Zhikui, Gong, Zhiguo
- Authors: Wang, Wei , Xia, Feng , Nie, Hansong , Chen, Zhikui , Gong, Zhiguo
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Intelligent Transportation Systems Vol. 22, no. 6 (2021), p. 3567-3576
- Full Text:
- Reviewed:
- Description: With the widely used Internet of Things, 5G, and smart city technologies, we are able to acquire a variety of vehicle trajectory data. These trajectory data are of great significance and can be used to extract relevant information in order to, for instance, calculate the optimal path from one position to another, detect abnormal behavior, monitor the traffic flow in a city, and predict the next position of an object. One of the key technologies is vehicle trajectory clustering. However, existing methods mainly rely on manually designed metrics, which may lead to biased results. Meanwhile, the large scale of vehicle trajectory data has become a challenge because calculating these manually designed metrics costs more time and space. To address these challenges, we propose to employ network representation learning to achieve accurate vehicle trajectory clustering. Specifically, we first construct the k-nearest neighbor-based internet of vehicles in a dynamic manner. Then we learn the low-dimensional representations of vehicles by performing dynamic network representation learning on the constructed network. Finally, using the learned vehicle vectors, vehicle trajectories are clustered with machine learning methods. Experimental results on the real-world dataset show that our method achieves the best performance compared against baseline methods. © 2000-2011 IEEE. **Please note that there are multiple authors for this article; therefore, only the names of the first 5, including Federation University Australia affiliate “Feng Xia”, are provided in this record**
Dual mechanical port machine based hybrid electric vehicle using reduced switch converters
- Bizhani, Hamed, Yao, Gang, Muyeen, S., Islam, Syed, Ben-Brahim, Lazhar
- Authors: Bizhani, Hamed , Yao, Gang , Muyeen, S. , Islam, Syed , Ben-Brahim, Lazhar
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Access Vol. 7, no. (2019), p. 33665-33676
- Full Text:
- Reviewed:
- Description: Due to increased environmental pollution, hybrid vehicles have attracted enormous attention in today's society. The two most important factors in designing these vehicles are size and weight. For this purpose, some researchers have presented the use of the dual-mechanical-port machine (DMPM) in hybrid electric vehicles (HEVs). This paper presents two modified converter topologies with a reduced number of switching devices for use in DMPM-based HEVs, with the goal of reducing the overall size and weight of the system. Besides the design of the DMPM in the series-parallel HEV structure along with the energy management unit, the conventional back-to-back (BB) converter is replaced with nine-switch (NS) and five-leg (FL) converters. These converters have never been examined for the DMPM-based HEV, and therefore, the objective of this paper is to reveal the operational characteristics and power flow mechanism of this machine using the NS and FL converters. The simulation analysis is carried out using MATLAB/Simulink considering all HEV operational modes. In addition, the two proposed converters and the conventional converter are compared in terms of losses, maximum achievable voltages, required dc-link voltages, the rating of the components, and torque ripple, and finally, a recommendation is made based on the obtained results.
Towards a low complexity scheme for medical images in scalable video coding
- Shoaib, Muhammad, Imran, Muhammad, Subhan, Fazli, Ahmad, Iftikhar
- Authors: Shoaib, Muhammad , Imran, Muhammad , Subhan, Fazli , Ahmad, Iftikhar
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 41439-41451
- Full Text:
- Reviewed:
- Description: Medical imaging has become of vital importance for diagnosing diseases and conducting noninvasive procedures. Advances in eHealth applications are challenged by the fact that Digital Imaging and Communications in Medicine (DICOM) requires high-resolution images, thereby increasing their size and the associated computational complexity, particularly when these images are communicated over IP and wireless networks. Therefore, medical research requires an efficient coding technique to achieve high-quality and low-complexity images with error-resilient features. In this study, we propose an improved coding scheme that exploits the content features of encoded videos with low complexity combined with flexible macroblock ordering for error resilience. We identify the homogeneous region in which the search for optimal macroblock modes is early terminated. For non-homogeneous regions, the integration of smaller blocks is employed only if the vector difference is less than the threshold. Results confirm that the proposed technique achieves a considerable performance improvement compared with existing schemes in terms of reducing the computational complexity without compromising the bit-rate and peak signal-to-noise ratio. © 2013 IEEE.
Energy harvesting in underwater acoustic wireless sensor networks : design, taxonomy, applications, challenges and future directions
- Khan, Anwar, Imran, Muhammad, Alharbi, Abdullah, Mohamed, Ehab, Fouda, Mostafa
- Authors: Khan, Anwar , Imran, Muhammad , Alharbi, Abdullah , Mohamed, Ehab , Fouda, Mostafa
- Date: 2022
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 10, no. (2022), p. 134606-134622
- Full Text:
- Reviewed:
- Description: In underwater acoustic wireless sensor networks (UAWSNs), energy harvesting either enhances the lifetime of a network by increasing the battery power of sensor nodes or ensures battery-less operation of nodes. This, in effect, results in sustainable and reliable operation of the network deployed for various underwater applications. This work provides a survey of the energy harvesting techniques for UAWSNs. Our work differs from existing work on underwater energy harvesting in that it includes state-of-the-art techniques designed in the last decade. It analyzes every harvesting scheme in terms of its main idea, merits, demerits and the extent of the harvested power (energy). The description of the merits aids in selecting a suitable scheme for suitable underwater applications. The demerits of the addressed schemes provide an insight into their future enhancement and improvement. Moreover, the harvesting techniques are classified into various categories depending upon the involved energy harvesting mechanism and compared based on the maximum and minimum amount of harvested power, which helps in selecting a suitable category in view of the power budget of an underwater network before deployment. The challenges in energy harvesting and in UAWSNs are described to provide insight into them and to guide further enhancement of the harvested energy. Finally, research directions are specified for future investigation. © 2013 IEEE.
Malicious node detection using machine learning and distributed data storage using blockchain in WSNs
- Nouman, Muhammad, Qasim, Umar, Nasir, Hina, Almasoud, Abdullah, Imran, Muhammad, Javaid, Nadeem
- Authors: Nouman, Muhammad , Qasim, Umar , Nasir, Hina , Almasoud, Abdullah , Imran, Muhammad , Javaid, Nadeem
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 6106-6121
- Full Text:
- Reviewed:
- Description: In the proposed work, blockchain is implemented on the Base Stations (BSs) and Cluster Heads (CHs) to register the nodes using their credentials and to tackle various security issues. Moreover, a Machine Learning (ML) classifier, termed Histogram Gradient Boost (HGB), is employed on the BSs to classify the nodes as malicious or legitimate. If a node is found to be malicious, its registration is revoked from the network. If a node is found to be legitimate, its data is stored in an Interplanetary File System (IPFS). IPFS stores the data in the form of chunks and generates a hash for the data, which is then stored in blockchain. In addition, Verifiable Byzantine Fault Tolerance (VBFT) is used instead of Proof of Work (PoW) to perform consensus and validate transactions. Also, extensive simulations are performed using the Wireless Sensor Network (WSN) dataset, referred to as WSN-DS. The proposed model is evaluated on both the original dataset and the balanced dataset. Furthermore, HGB is compared with other existing classifiers, Adaptive Boost (AdaBoost), Gradient Boost (GB), Linear Discriminant Analysis (LDA), Extreme Gradient Boost (XGB) and Ridge, using different performance metrics such as accuracy, precision, recall, micro-F1 score and macro-F1 score. The performance evaluation of HGB shows that it outperforms GB, AdaBoost, LDA, XGB and Ridge by 2-4%, 8-10%, 12-14%, 3-5% and 14-16%, respectively. Moreover, the results with the balanced dataset are better than those with the original dataset. Also, VBFT performs 20-30% better than PoW. Overall, the proposed model performs efficiently in terms of malicious node detection and secure data storage. © 2013 IEEE.
Exact string matching algorithms : survey, issues, and future research directions
- Hakak, Saqib, Kamsin, Amirrudin, Shivakumara, Palaiahnakote, Gilkar, Gulshan, Khan, Wazir, Imran, Muhammad
- Authors: Hakak, Saqib , Kamsin, Amirrudin , Shivakumara, Palaiahnakote , Gilkar, Gulshan , Khan, Wazir , Imran, Muhammad
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Access Vol. 7, no. (2019), p. 69614-69637
- Full Text:
- Reviewed:
- Description: String matching has been an extensively studied research domain in the past two decades due to its various applications in the fields of text, image, signal, and speech processing. As a result, choosing an appropriate string matching algorithm for current applications and addressing challenges is difficult. Understanding different string matching approaches (such as exact string matching and approximate string matching algorithms), integrating several algorithms, and modifying algorithms to address related issues are also difficult. This paper presents a survey on single-pattern exact string matching algorithms. The main purpose of this survey is to propose a new classification, identify new directions, and highlight the possible challenges, current trends, and future works in the area of string matching algorithms, with a core focus on exact string matching algorithms. © 2013 IEEE.
Green industrial networking : recent advances, taxonomy, and open research challenges
- Ahmed, Ejaz, Yaqoob, Ibrar, Ahmed, Ahmed, Gani, Abdullah, Imran, Muhammad, Guizani, Sghaier
- Authors: Ahmed, Ejaz , Yaqoob, Ibrar , Ahmed, Ahmed , Gani, Abdullah , Imran, Muhammad , Guizani, Sghaier
- Date: 2016
- Type: Text , Journal article
- Relation: IEEE Communications Magazine Vol. 54, no. 10 (2016), p. 38-45
- Full Text: false
- Reviewed:
- Description: The consciousness of environmental problems has attracted the industry's attention toward the reduction of unnecessary energy emission by enabling green industrial networking. The reduction of unnecessary energy emitted by industrial networks can be a possible solution to many environmental issues. Green industrial networking is in its infancy, and an overview of the domain is still lacking. In this article, we discuss recent advances in industrial and green networking paradigms to investigate the impact on global communities. We also classify the literature by devising a taxonomy based on networking technologies, machines, network types, topologies, field bus types, transmission media, and hierarchical levels. Moreover, we identify and discuss key enablers (adaptive links, resource-based energy conservation, energy-efficient scheduling, energy-aware systems, energy-aware proxying, energy-conservative approaches, and low-power wireless protocols) for green industrial networking. Furthermore, we discuss challenges that remain to be addressed as future research directions. © 2016 IEEE.
Social-aware resource allocation and optimization for D2D communication
- Ahmed, Ejaz, Yaqoob, Ibrar, Gani, Abdullah, Imran, Muhammad, Guizani, Mohsen
- Authors: Ahmed, Ejaz , Yaqoob, Ibrar , Gani, Abdullah , Imran, Muhammad , Guizani, Mohsen
- Date: 2017
- Type: Text , Journal article
- Relation: IEEE Wireless Communications Vol. 24, no. 3 (2017), p. 122-129
- Full Text: false
- Reviewed:
- Description: The undiminished growth of research activities to converge social awareness with D2D communication has paved the way for facilitating and providing significant benefits to users. Realizing these benefits depends on efficiently addressing several main technical challenges associated with the convergence. Although there are many research studies related to social networks and D2D communication, convergence of these two areas leads to further research efforts to implement social-aware D2D communication. In this article, we discuss recent advances in the domain of D2D communication from the perspective of social-aware resource allocation and optimization. We also categorize and classify the literature by devising a taxonomy based on channel-centric attributes, objectives, solving approaches, networking technologies, characteristics, and communication types. Moreover, we also outline the key requirements with the aim of providing guidelines for the domain researchers and designers to enable the social-aware resource allocation for D2D communication. Several open research challenges are presented as future research directions. © 2002-2012 IEEE.
Data exchange in delay tolerant networks using joint inter- and intra-flow network coding
- Ostovari, Pouya, Wu, Jie, Jolfaei, Alireza
- Authors: Ostovari, Pouya , Wu, Jie , Jolfaei, Alireza
- Date: 2018
- Type: Text , Conference proceedings
- Relation: 37th IEEE International Performance Computing and Communications Conference, IPCCC 2018; Orlando, United States; 17th-19th November 2018 p. 1-8
- Full Text:
- Reviewed:
- Description: Data transmission in delay tolerant networks (DTNs) is a challenging problem due to the lack of continuous network connectivity and the nondeterministic mobility of the nodes. Epidemic routing and spray-and-wait methods are two popular mechanisms proposed for DTNs. In order to reduce the transmission delay in DTNs, some previous works combine intra-flow network coding with the routing protocols. In this paper, we propose two routing mechanisms using systematic joint inter- and intra-flow network coding for the purpose of data exchange between the nodes. We discuss the reasons why inter-flow network coding helps to reduce the delivery delay of the packets, and we also analyze the delays associated with using only intra-flow coding versus joint inter- and intra-flow coding. We empirically show the benefit of joint coding over intra-flow coding alone. Based on our simulation, joint coding can reduce the delay by up to 40% compared to intra-flow coding only.
- Description: 2018 IEEE 37th International Performance Computing and Communications Conference, IPCCC 2018
Privacy protection and energy optimization for 5G-aided industrial internet of things
- Humayun, Mamoona, Jhanjhi, Nz, Alruwaili, Madallah, Amalathas, Sagaya, Balasubramanian, Venki, Selvaraj, Buvana
- Authors: Humayun, Mamoona , Jhanjhi, Nz , Alruwaili, Madallah , Amalathas, Sagaya , Balasubramanian, Venki , Selvaraj, Buvana
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 183665-183677
- Full Text:
- Reviewed:
- Description: 5G is expected to revolutionize every sector of life by providing interconnectivity of everything everywhere at high speed. However, massively interconnected devices and fast data transmission will bring challenges of privacy as well as energy deficiency. In today's fast-paced economy, almost every sector is dependent on energy resources, while the energy sector itself relies mainly on fossil fuels, which constitute about 80% of energy globally. This massive extraction and combustion of fossil fuels has many adverse impacts on health, the environment, and the economy. The newly emerging 5G technology has changed the existing phenomenon of life by connecting everything everywhere using IoT devices. 5G-enabled IIoT devices have transformed everything from traditional to smart, e.g. smart city, smart healthcare, smart industry, and smart manufacturing. However, massive I/O technologies for providing D2D connections have also created privacy issues that need to be addressed. Privacy is a fundamental right of every individual, and 5G industries and organizations need to preserve it for their stability and competency. Therefore, privacy at all three levels (data, identity and location) needs to be maintained. Further, energy optimization is a big challenge that must be addressed to leverage the potential benefits of 5G and 5G-aided IIoT. Billions of IIoT devices that are expected to communicate using the 5G network will consume a considerable amount of energy, while energy resources are limited. Therefore, energy optimization is a future challenge faced by 5G industries. To fill these gaps, we have provided a comprehensive framework that will help energy researchers and practitioners better understand 5G-aided Industry 4.0 infrastructure and energy resource optimization by improving privacy. The proposed framework is evaluated using case studies and mathematical modelling. © 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
- Authors: Humayun, Mamoona , Jhanjhi, Nz , Alruwaili, Madallah , Amalathas, Sagaya , Balasubramanian, Venki , Selvaraj, Buvana
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 183665-183677
- Full Text:
- Reviewed:
- Description: The 5G is expected to revolutionize every sector of life by providing interconnectivity of everything everywhere at high speed. However, massively interconnected devices and fast data transmission will bring the challenge of privacy as well as energy deficiency. In today's fast-paced economy, almost every sector of the economy is dependent on energy resources. On the other hand, the energy sector is mainly dependent on fossil fuels and is constituting about 80% of energy globally. This massive extraction and combustion of fossil fuels lead to a lot of adverse impacts on health, environment, and economy. The newly emerging 5G technology has changed the existing phenomenon of life by connecting everything everywhere using IoT devices. 5G enabled IIoT devices has transformed everything from traditional to smart, e.g. smart city, smart healthcare, smart industry, smart manufacturing etc. However, massive I/O technologies for providing D2D connection has also created the issue of privacy that need to be addressed. Privacy is the fundamental right of every individual. 5G industries and organizations need to preserve it for their stability and competency. Therefore, privacy at all three levels (data, identity and location) need to be maintained. Further, energy optimization is a big challenge that needs to be addressed for leveraging the potential benefits of 5G and 5G aided IIoT. Billions of IIoT devices that are expected to communicate using the 5G network will consume a considerable amount of energy while energy resources are limited. Therefore, energy optimization is a future challenge faced by 5G industries that need to be addressed. To fill these gaps, we have provided a comprehensive framework that will help energy researchers and practitioners in better understanding of 5G aided industry 4.0 infrastructure and energy resource optimization by improving privacy. The proposed framework is evaluated using case studies and mathematical modelling. 
© 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
Bidirectional mapping coupled GAN for generalized zero-shot learning
- Shermin, Tasfia, Teng, Shyh, Sohel, Ferdous, Murshed, Manzur, Lu, Guojun
- Authors: Shermin, Tasfia , Teng, Shyh , Sohel, Ferdous , Murshed, Manzur , Lu, Guojun
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Transactions on Image Processing Vol. 31, no. (2022), p. 721-733
- Full Text:
- Reviewed:
- Description: Bidirectional mapping-based generalized zero-shot learning (GZSL) methods rely on the quality of synthesized features to recognize seen and unseen data. Therefore, learning a joint distribution of seen and unseen classes while preserving the distinction between them is crucial for GZSL methods. However, existing methods learn only the underlying distribution of seen data, even though unseen class semantics are available in the GZSL problem setting. Most methods neglect to retain the seen-unseen class distinction and use the learned distribution to recognize both seen and unseen data; consequently, they do not perform well. In this work, we utilize the available unseen class semantics alongside seen class semantics and learn a joint distribution through strong visual-semantic coupling. We propose a bidirectional mapping coupled generative adversarial network (BMCoGAN) by extending the concept of the coupled generative adversarial network into a bidirectional mapping model. We further integrate Wasserstein generative adversarial optimization to supervise the joint distribution learning. We design a loss optimization that retains distinctive information of seen and unseen classes in the synthesized features and reduces bias towards seen classes: it pushes synthesized seen features towards real seen features and pulls synthesized unseen features away from real seen features. We evaluate BMCoGAN on benchmark datasets and demonstrate its superior performance against contemporary methods. © 1992-2012 IEEE.
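The push-pull behaviour described in this abstract can be illustrated with a minimal NumPy sketch. This is not the authors' BMCoGAN implementation: the function name, the pairwise squared-Euclidean pull term, the centroid-based hinge push term, and the margin value are all assumptions made for illustration.

```python
import numpy as np

def push_pull_loss(synth_seen, real_seen, synth_unseen, margin=1.0):
    """Illustrative push-pull objective for synthesized features.

    Pulls synthesized seen features towards paired real seen features,
    and pushes synthesized unseen features away from the real seen
    region (hinge penalty inside `margin`). Inputs are (n, d) arrays.
    """
    # Pull term: mean squared distance between synthesized and real seen features
    pull = np.mean(np.sum((synth_seen - real_seen) ** 2, axis=1))
    # Push term: penalize synthesized unseen features lying within
    # `margin` of the real seen-feature centroid
    centroid = real_seen.mean(axis=0)
    dists = np.sqrt(np.sum((synth_unseen - centroid) ** 2, axis=1))
    push = np.mean(np.maximum(0.0, margin - dists))
    return pull + push

# Usage: unseen features sitting on top of the seen centroid are penalized,
# while unseen features far from it incur no push penalty.
real = np.zeros((4, 2))
loss_near = push_pull_loss(real, real, np.zeros((4, 2)))       # push active
loss_far = push_pull_loss(real, real, np.full((4, 2), 10.0))   # push inactive
```

In the paper this objective supervises a generative model so that synthesized seen features stay realistic while synthesized unseen features remain separable, reducing the usual GZSL bias towards seen classes.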