A blockchain-based deep-learning-driven architecture for quality routing in wireless sensor networks
- Authors: Khan, Zahoor , Amjad, Sana , Ahmed, Farwa , Almasoud, Abdullah , Imran, Muhammad , Javaid, Nadeem
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 31036-31051
- Full Text:
- Reviewed:
- Description: Over the past few years, great importance has been given to wireless sensor networks (WSNs) as they play a significant role in facilitating daily life services such as healthcare, military and social applications. However, the heterogeneous nature of WSNs makes them prone to various attacks, which result in low throughput, high network delay and high energy consumption. In WSNs, routing is performed using protocols such as low-energy adaptive clustering hierarchy (LEACH) and heterogeneous gateway-based energy-aware multi-hop routing (HMGEAR). In such protocols, some nodes in the network may perform malicious activities. Therefore, four deep learning (DL) techniques and a blockchain-based real-time message content validation (RMCV) scheme are used in the proposed network for the detection of malicious nodes (MNs). Moreover, to analyse the routing data in the WSN, the DL models are trained on a state-of-the-art dataset generated from LEACH, known as WSN-DS 2016. The WSN contains three types of nodes: sensor nodes, cluster heads (CHs) and the base station (BS). The CHs, after aggregating the data received from the sensor nodes, send it towards the BS. Furthermore, to overcome the single-point-of-failure issue, a decentralized blockchain is deployed on the CHs and the BS. Additionally, MNs are removed from the network using RMCV and DL techniques. Moreover, legitimate nodes (LNs) are registered in the blockchain network using the proof-of-authority consensus protocol, which outperforms proof-of-work in terms of computational cost. Later, routing is performed between the LNs using different routing protocols and the results are compared with the original LEACH and HMGEAR protocols. The results show that the accuracy of GRU is 97%, LSTM is 96%, CNN is 92% and ANN is 90%. Throughput, delay and the death of the first node are computed for LEACH, LEACH with DL, LEACH with RMCV, HMGEAR, HMGEAR with DL and HMGEAR with RMCV. Moreover, Oyente is used to perform a formal security analysis of the designed smart contract. The analysis shows that the blockchain network is resilient against vulnerabilities. © 2013 IEEE.
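- Code sketch: The description reports a GRU classifier reaching 97% accuracy on WSN-DS. A minimal Keras sketch of such a malicious-node classifier is given below; the feature count, layer sizes, and training settings are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch: GRU-based malicious-node classifier for WSN-DS-style data.
# Feature count, layer sizes, and training settings are assumptions.
import numpy as np
from tensorflow.keras.layers import GRU, Dense
from tensorflow.keras.models import Sequential

NUM_FEATURES = 18  # per-record traffic features (assumed count)
NUM_CLASSES = 5    # WSN-DS: normal + blackhole, grayhole, flooding, scheduling

model = Sequential([
    GRU(64, input_shape=(1, NUM_FEATURES)),  # each record as a length-1 sequence
    Dense(32, activation="relu"),
    Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder data; in practice, load and scale WSN-DS records here.
X = np.random.rand(1000, 1, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=1000)
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2)
```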
A smart healthcare framework for detection and monitoring of COVID-19 using IoT and cloud computing
- Authors: Nasser, Nidal , Emad-ul-Haq, Qazi , Imran, Muhammad , Ali, Asmaa , Razzak, Imran , Al-Helali, Abdulaziz
- Date: 2023
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 35, no. 19 (2023), p. 13775-13789
- Full Text:
- Reviewed:
- Description: Coronavirus (COVID-19) is a very contagious infection that has drawn the world's attention. Modeling such diseases can be extremely valuable in predicting their effects. Although classic statistical modeling may provide adequate models, it may fail to capture the data's intricacy. An automatic COVID-19 detection system based on computed tomography (CT) or X-ray images is effective, but a robust system design is challenging. In this study, we propose an intelligent healthcare system that integrates IoT and cloud technologies. This architecture uses smart connectivity sensors and deep learning (DL) for intelligent decision-making from the perspective of the smart city. The intelligent system tracks the status of patients in real time and delivers reliable, timely, and high-quality healthcare facilities at a low cost. COVID-19 detection experiments are performed using DL to test the viability of the proposed system. We use a sensor for recording, transferring, and tracking healthcare data. CT scan images from patients are sent to the cloud by IoT sensors, where the cognitive module is stored. The system determines the patient's status by examining the CT scan images, and the DL cognitive module makes the real-time decision on the possible course of action. When information is conveyed to the cognitive module, we use a state-of-the-art DL-based classification algorithm, ResNet50, to detect and classify whether patients are normal or infected by COVID-19. We validate the proposed system's robustness and effectiveness using two publicly available benchmark datasets (the Covid-Chestxray and Chex-Pert datasets). First, a dataset of 6000 images is prepared from these two datasets. The proposed system was trained on 80% of the data and tested on the remaining 20%, and tenfold cross-validation is performed for performance evaluation. The results indicate that the proposed system achieves an accuracy of 98.6%, a sensitivity of 97.3%, a specificity of 98.2%, and an F1-score of 97.87%, and performs better than the existing state-of-the-art systems. The proposed system will be helpful in medical diagnosis research and healthcare systems, and will support medical experts in COVID-19 screening by providing a valuable second opinion. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
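- Code sketch: The system classifies CT images with ResNet50. Below is a minimal transfer-learning sketch in Keras; the input size, classifier head, and frozen backbone are illustrative assumptions rather than the paper's exact setup.

```python
# Minimal sketch: ResNet50-based binary classifier (normal vs. COVID-19).
# Input size, head layers, and optimizer are illustrative assumptions.
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze backbone; the fine-tuning strategy is assumed

x = GlobalAveragePooling2D()(base.output)
x = Dense(128, activation="relu")(x)
out = Dense(1, activation="sigmoid")(x)  # P(COVID-19)

model = Model(base.input, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.2, epochs=10)
```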
An effective solution to the optimal power flow problem using meta-heuristic algorithms
- Authors: Aurangzeb, Khursheed , Shafiq, Sundas , Alhussein, Musaed , Pamir , Javaid, Nadeem , Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: Frontiers in Energy Research Vol. 11, no. (2023), p.
- Full Text:
- Reviewed:
- Description: Financial loss in power systems is an emerging problem that needs to be resolved. To tackle this problem, the energy generated from various generation sources in the power network needs proper scheduling. In order to determine the best settings for the control variables, this study formulates and solves an optimal power flow (OPF) problem. In the proposed work, the bird swarm algorithm (BSA), JAYA, and a hybrid of both algorithms, termed HJBSA, are used to obtain the optimal variable settings. We perform simulations considering the constraints on voltage stability, line capacity, and generated reactive and active power. In addition, the algorithms solve the OPF problem while minimizing the carbon emissions generated from thermal systems, fuel cost, voltage deviations, and active power losses. The suggested approach is evaluated on two separate IEEE test systems, one with 30 buses and the other with 57 buses. The simulation results show that for the 30-bus system, the cost minimization achieved by HJBSA, JAYA, and BSA is 860.54 $/h, 862.31 $/h, and 900.01 $/h, respectively, while for the 57-bus system it is 5506.9 $/h, 6237.4 $/h, and 7245.6 $/h, respectively. Similarly, for the 30-bus system, the power loss with HJBSA, JAYA, and BSA is 9.542 MW, 10.102 MW, and 11.427 MW, respectively, while for the 57-bus system it is 13.473 MW, 20.552 MW, and 18.638 MW, respectively. Moreover, with the 30-bus system, HJBSA, JAYA, and BSA reduce carbon emissions by 4.394 ton/h, 4.524 ton/h, and 4.401 ton/h, respectively; with the 57-bus system, they reduce carbon emissions by 26.429 ton/h, 27.014 ton/h, and 28.568 ton/h, respectively. The results show that HJBSA outperforms the other algorithms. Copyright © 2023 Aurangzeb, Shafiq, Alhussein, Pamir, Javaid and Imran.
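- Code sketch: JAYA, one of the two base algorithms, has no algorithm-specific tuning parameters: each candidate moves toward the best solution and away from the worst. The NumPy sketch below implements the standard JAYA update on a toy objective; the paper's actual OPF objective and constraints are not reproduced here.

```python
# Generic JAYA update step: move toward the best solution, away from the
# worst. The objective and bounds are placeholders, not the paper's OPF model.
import numpy as np

def jaya_step(pop, fitness, lo, hi, objective):
    best = pop[np.argmin(fitness)]
    worst = pop[np.argmax(fitness)]
    r1, r2 = np.random.rand(*pop.shape), np.random.rand(*pop.shape)
    candidate = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    candidate = np.clip(candidate, lo, hi)  # respect variable limits
    cand_fit = np.apply_along_axis(objective, 1, candidate)
    improved = cand_fit < fitness           # greedy selection
    pop[improved], fitness[improved] = candidate[improved], cand_fit[improved]
    return pop, fitness

# Toy usage: minimize the sphere function over 5 variables in [-10, 10].
rng = np.random.default_rng(0)
pop = rng.uniform(-10, 10, size=(30, 5))
fit = np.apply_along_axis(lambda x: np.sum(x**2), 1, pop)
for _ in range(200):
    pop, fit = jaya_step(pop, fit, -10, 10, lambda x: np.sum(x**2))
print("best cost:", fit.min())
```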
An optimized hybrid deep intrusion detection model (HD-IDM) for enhancing network security
- Authors: Ahmad, Iftikhar , Imran, Muhammad , Qayyum, Abdul , Ramzan, Muhammad , Alassafi, Madini
- Date: 2023
- Type: Text , Journal article
- Relation: Mathematics Vol. 11, no. 21 (2023), p.
- Full Text:
- Reviewed:
- Description: Detecting cyber intrusions in network traffic is a tough task for cybersecurity. Current methods struggle with the complexity of understanding patterns in network data. To address this, we present the Hybrid Deep Learning Intrusion Detection Model (HD-IDM), a new approach that combines GRU and LSTM classifiers. GRU is effective at capturing short-term patterns, while LSTM handles long-term ones. HD-IDM blends these models using weighted averaging, boosting accuracy, especially on complex patterns. We tested HD-IDM on four datasets: CSE-CIC-IDS2017, CSE-CIC-IDS2018, NSL-KDD, and CIC-DDoS2019. The HD-IDM classifier achieved remarkable performance metrics on all datasets. It attains an outstanding accuracy of 99.91%, showcasing its consistent precision across the datasets. With an impressive precision of 99.62%, it excels in accurately categorizing positive cases, which is crucial for minimizing false positives. Additionally, maintaining a high recall of 99.43%, it effectively identifies the majority of actual positive cases while minimizing false negatives. The F1-score of 99.52% emphasizes its robustness, making it a strong choice for classification tasks requiring precision and reliability. It also performs particularly well on ROC and precision/recall curves, discriminating between normal and harmful network activities. While HD-IDM is promising, it has limitations: it needs labeled data and may struggle with new intrusion methods. Future work should find ways to handle unlabeled data and adapt to emerging threats. Also, making HD-IDM faster for real-time use and addressing its scalability challenges is key to its broader use in changing network environments. © 2023 by the authors.
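- Code sketch: HD-IDM fuses the GRU and LSTM outputs by weighted averaging. The sketch below shows that fusion step on toy probabilities; the weight value is an assumption, as the abstract does not specify how it is chosen.

```python
# Minimal sketch of HD-IDM's fusion idea: blend GRU and LSTM class
# probabilities by weighted averaging. The weight w is an assumed value.
import numpy as np

def blend(p_gru: np.ndarray, p_lstm: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Weighted average of two probability matrices of identical shape."""
    return w * p_gru + (1.0 - w) * p_lstm

# Toy example: per-sample probabilities of [benign, attack].
p_gru = np.array([[0.20, 0.80], [0.90, 0.10]])
p_lstm = np.array([[0.40, 0.60], [0.70, 0.30]])
p_final = blend(p_gru, p_lstm, w=0.6)
labels = p_final.argmax(axis=1)  # 1 = attack
print(p_final, labels)
```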
Deep learning : survey of environmental and camera impacts on internet of things images
- Authors: Kaur, Roopdeep , Karmakar, Gour , Xia, Feng , Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: Artificial Intelligence Review Vol. 56, no. 9 (2023), p. 9605-9638
- Full Text:
- Reviewed:
- Description: Internet of Things (IoT) images are attracting growing attention because of their wide range of applications, which require visual analysis to drive automation. However, IoT images are predominantly captured in outdoor environments and are thus inherently affected by camera and environmental parameters, which can adversely affect the corresponding applications. Deep Learning (DL) has been widely adopted in the field of image processing and computer vision and can reduce the impact of these parameters on IoT images. Although many DL-based techniques are available in the current literature for analyzing and reducing the environmental and camera impacts on IoT images, to the best of our knowledge, no survey paper presents state-of-the-art DL-based approaches for this purpose. Motivated by this, for the first time, we present a Systematic Literature Review (SLR) of existing DL techniques for analyzing and reducing environmental and camera lens impacts on IoT images. As part of this SLR, firstly, we reiterate and highlight the significance of IoT images in their respective applications. Secondly, we describe the DL techniques employed for assessing the environmental and camera lens distortion impacts on IoT images. Thirdly, we illustrate how DL can be effective in reducing the impact of environmental and camera lens distortion in IoT images. Finally, along with a critical reflection on the advantages and limitations of the techniques, we present ways to address the research challenges of existing techniques and identify further research directions to advance the relevant research areas. © 2023, The Author(s).
Electricity theft detection for energy optimization using deep learning models
- Authors: Pamir , Javaid, Nadeem , Javed, Muhammad , Houran, Mohamad , Almasoud, Abdullah , Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: Energy Science and Engineering Vol. 11, no. 10 (2023), p. 3575-3596
- Full Text:
- Reviewed:
- Description: The rapid increase in nontechnical loss (NTL) has become a principal concern for distribution system operators (DSOs) over the years. Electricity theft makes up a major part of NTL. It causes losses for the DSOs and also deteriorates the quality of electricity. The introduction of advanced metering infrastructure, along with the upgrade of traditional grids to smart grids (SGs), has helped electric utilities collect the electricity consumption (EC) readings of consumers, which further empowers machine learning (ML) algorithms to be exploited for efficient electricity theft detection (ETD). However, there are still some shortcomings, such as class imbalance, the curse of dimensionality, and the bypassing of automated hyperparameter tuning in existing ML-based theft classification schemes, that limit their performance. Therefore, it is essential to develop a novel approach to deal with these problems and efficiently detect electricity theft in SGs. Using the salp swarm algorithm (SSA), a gate convolutional autoencoder (GCAE), and cost-sensitive learning and long short-term memory (CSLSTM), an effective ETD model named SSA–GCAE–CSLSTM is proposed in this work. Furthermore, the hybrid GCAE model is developed by combining a gated recurrent unit with a convolutional autoencoder. The proposed model comprises five submodules: (1) data preparation, (2) data balancing, (3) dimensionality reduction, (4) hyperparameter optimization, and (5) electricity theft classification. The real-time EC data provided by the State Grid Corporation of China are used for performance evaluations via extensive simulations. The proposed model is compared with two basic models, CSLSTM and GCAE–CSLSTM, along with seven benchmarks: support vector machine, decision tree, extra trees, random forest, adaptive boosting, extreme gradient boosting, and convolutional neural network. The results show that SSA–GCAE–CSLSTM yields 99.45% precision, 95.93% F1 score, 92.25% accuracy, and a 71.13% area under the receiver operating characteristic curve, and surpasses the other models in terms of ETD. © 2023 The Authors. Energy Science & Engineering published by Society of Chemical Industry and John Wiley & Sons Ltd.
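- Code sketch: The cost-sensitive learning component penalizes mistakes on the rare theft class more heavily. A minimal Keras sketch using class weights is given below; the architecture, window length, and weight ratio are illustrative assumptions, not the tuned SSA–GCAE–CSLSTM model.

```python
# Minimal sketch of the cost-sensitive learning idea: penalize errors on the
# rare theft class more heavily via class weights. Architecture, window
# length, and weight ratio are illustrative assumptions.
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.models import Sequential

TIMESTEPS, FEATURES = 30, 1  # e.g., 30 daily consumption readings (assumed)

model = Sequential([
    LSTM(64, input_shape=(TIMESTEPS, FEATURES)),
    Dense(1, activation="sigmoid"),  # P(theft)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Theft (class 1) errors cost ten times more than honest-class errors
# (assumed ratio); Keras scales each sample's loss by its class weight.
class_weight = {0: 1.0, 1: 10.0}
# model.fit(X_train, y_train, epochs=20, batch_size=128, class_weight=class_weight)
```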
Impact of traditional and embedded image denoising on CNN-based deep learning
- Authors: Kaur, Roopdeep , Karmakar, Gour , Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: Applied sciences Vol. 13, no. 20 (2023), p.
- Full Text:
- Reviewed:
- Description: In digital image processing, filtering noise is an important step in reconstructing a high-quality image for further processing such as object segmentation, object detection, and object recognition. Various image-denoising approaches, including median, Gaussian, and bilateral filters, are available in the literature. Since convolutional neural networks (CNNs) are able to directly learn complex patterns and features from data, they have become a popular choice for image-denoising tasks. As a result of their ability to learn and adapt to various denoising scenarios, CNNs are powerful tools for image denoising. Some deep learning techniques incorporate denoising strategies directly into the CNN model layers. A primary limitation of these methods is that they must resize images to a consistent size; this resizing can result in a loss of vital image details, which might compromise the CNN's effectiveness. Because of this issue, we utilize a traditional denoising method as a preliminary step for noise reduction before applying the CNN. To our knowledge, a comparative performance study of CNNs using traditional and embedded denoising against a baseline approach (without denoising) has yet to be performed. To analyze the impact of denoising on CNN performance, in this paper, we firstly filter the noise from the images using a traditional denoising method before their use in the CNN model. Secondly, we embed a denoising layer in the CNN model. To validate the performance of image denoising, we performed extensive experiments on both traffic sign and object recognition datasets. To decide whether denoising should be adopted, and which type of filter should be used, we also present an approach exploiting the peak signal-to-noise ratio (PSNR) distribution of the images. Both CNN accuracy and the PSNR distribution are used to evaluate the effectiveness of the denoising approaches. As expected, the results vary with the type of filter, impact, and dataset used in both the traditional and embedded denoising approaches. However, traditional denoising shows better accuracy, while embedded denoising shows lower computational time in most cases. Overall, this comparative study gives insights into whether denoising should be adopted in various CNN-based image analyses, including autonomous driving, animal detection, and facial recognition.
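- Code sketch: The traditional pre-processing step compares median, Gaussian, and bilateral filters and uses the PSNR distribution to guide filter choice. Below is a minimal OpenCV sketch of that comparison; the kernel sizes and synthetic input are placeholders.

```python
# Minimal sketch: traditional denoising as a pre-processing step, comparing
# filters via PSNR. Kernel sizes and the synthetic input are placeholders.
import cv2
import numpy as np

def psnr(ref: np.ndarray, img: np.ndarray, max_val: float = 255.0) -> float:
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val**2 / mse)

# Stand-in for a captured image; in practice, load a dataset image here.
noisy = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)

candidates = {
    "median":    cv2.medianBlur(noisy, 5),
    "gaussian":  cv2.GaussianBlur(noisy, (5, 5), 0),
    "bilateral": cv2.bilateralFilter(noisy, 9, 75, 75),
}
# PSNR against the unfiltered input as a rough proxy; use a clean reference
# image instead when one is available.
for name, out in candidates.items():
    print(name, round(psnr(noisy, out), 2))
```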
Malicious node detection using machine learning and distributed data storage using blockchain in WSNs
- Authors: Nouman, Muhammad , Qasim, Umar , Nasir, Hina , Almasoud, Abdullah , Imran, Muhammad , Javaid, Nadeem
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 6106-6121
- Full Text:
- Reviewed:
- Description: In the proposed work, blockchain is implemented on the Base Stations (BSs) and Cluster Heads (CHs) to register the nodes using their credentials and to tackle various security issues. Moreover, a Machine Learning (ML) classifier, termed Histogram Gradient Boost (HGB), is employed on the BSs to classify nodes as malicious or legitimate. If a node is found to be malicious, its registration is revoked from the network; if a node is found to be legitimate, its data is stored in an Interplanetary File System (IPFS). IPFS stores the data in the form of chunks and generates a hash of the data, which is then stored in the blockchain. In addition, Verifiable Byzantine Fault Tolerance (VBFT) is used instead of Proof of Work (PoW) to perform consensus and validate transactions. Extensive simulations are performed using the Wireless Sensor Network (WSN) dataset, referred to as WSN-DS. The proposed model is evaluated on both the original dataset and a balanced dataset. Furthermore, HGB is compared with other existing classifiers, Adaptive Boost (AdaBoost), Gradient Boost (GB), Linear Discriminant Analysis (LDA), Extreme Gradient Boost (XGB) and Ridge, using different performance metrics such as accuracy, precision, recall, micro-F1 score and macro-F1 score. The performance evaluation shows that HGB outperforms GB, AdaBoost, LDA, XGB and Ridge by 2-4%, 8-10%, 12-14%, 3-5% and 14-16%, respectively. Moreover, the results with the balanced dataset are better than those with the original dataset, and VBFT performs 20-30% better than PoW. Overall, the proposed model performs efficiently in terms of malicious node detection and secure data storage. © 2013 IEEE.
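- Code sketch: The HGB classifier corresponds to scikit-learn's histogram-based gradient boosting. The sketch below trains it on placeholder WSN-DS-style data and reports the metrics named in the description; the feature and class counts are assumptions.

```python
# Minimal sketch: histogram gradient boosting on WSN-DS-style features,
# reporting the metrics named in the abstract. Data here is a placeholder.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

X = np.random.rand(5000, 18)             # placeholder for WSN-DS features
y = np.random.randint(0, 5, size=5000)   # normal + 4 attack classes (assumed)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = HistGradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred, average="macro"))
print("recall   :", recall_score(y_te, pred, average="macro"))
print("micro-F1 :", f1_score(y_te, pred, average="micro"))
print("macro-F1 :", f1_score(y_te, pred, average="macro"))
```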
Multi-aspect annotation and analysis of Nepali tweets on anti-establishment election discourse
- Authors: Rauniyar, Kritesh , Poudel, Sweta , Shiwakoti, Shuvam , Thapa, Surendrabikram , Rashid, Junaid , Kim, Jungeun , Imran, Muhammad , Naseem, Usman
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 143092-143115
- Full Text:
- Reviewed:
- Description: In today's social-media-dominated landscape, digital platforms wield substantial influence over public opinion, particularly during crucial political events such as electoral processes. These platforms become hubs for diverse discussions encompassing topics, reforms, and desired changes. Notably, in times of government dissatisfaction, they serve as arenas for anti-establishment discourse, highlighting the need to analyze public sentiment in these conversations. However, the analysis of such discourse is notably scarce, even in high-resource languages, and entirely non-existent in the context of the Nepali language. To address this critical gap, we present Nepal Anti Establishment discourse Tweets (NAET), a novel dataset comprising 4,445 multi-aspect annotated Nepali tweets, facilitating a comprehensive understanding of political conversations. Our contributions encompass evaluating tweet relevance, sentiment, and satire, while also exploring the presence of hate speech, identifying its targets, and distinguishing directed and non-directed expressions. Additionally, we investigate hope speech, an underexplored aspect that is crucial in the context of anti-establishment discourse, as it reflects the aspirations of and expectations from new political figures and parties. Furthermore, we set NLP-based baselines for all of these tasks. To ensure a holistic analysis, we also employ topic modeling, a powerful technique that helps us identify and understand the prevalent themes and patterns emerging from the discourse. Our research thus presents a comprehensive and multi-faceted perspective on anti-establishment election discourse in a low-resource language setting. The dataset is publicly available, facilitating in-depth analysis of political tweets in Nepali discourse and further advancing NLP research for the Nepali language through labeled data and baselines for various NLP tasks. The dataset for this work is made available at https://github.com/rkritesh210/NAET. © 2013 IEEE.
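- Code sketch: The description mentions topic modeling over the tweet corpus. Below is a generic LDA sketch with scikit-learn as one plausible realization; the vectorizer settings, topic count, and toy corpus are assumptions, and real Nepali text would need language-appropriate preprocessing.

```python
# Generic topic-modeling sketch (LDA); settings and the toy corpus are
# illustrative assumptions, not the paper's configuration.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

tweets = [  # stand-in corpus; real Nepali tweets need their own preprocessing
    "placeholder tweet about election reform",
    "placeholder tweet about government dissatisfaction",
    "placeholder tweet about new political parties",
    "placeholder tweet about hope and change",
]

vec = CountVectorizer()  # Nepali would need custom tokenization/stop words
X = vec.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```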
Multi-slope path loss model-based performance assessment of heterogeneous cellular network in 5G
- Authors: Dahri, Safia , Shaikh, Muhammad , Alhussein, Musaed , Soomro, Muhammad , Aurangzeb, Khursheed , Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 30473-30485
- Full Text:
- Reviewed:
- Description: The coverage and capacity required for fifth generation (5G) networks and beyond can be achieved using heterogeneous wireless networks. This study deploys a limited number of user equipments (UEs) while taking into account the three-dimensional (3D) distance between UEs and base stations (BSs), multi-slope line-of-sight (LOS) and non-line-of-sight (n-LOS) propagation, idle mode capability (IMC), and third generation partnership project (3GPP) path loss (PL) models. In the current work, we examine the relationship between the height and gain of the macro (M) and pico (P) base station antennas and the ratio of the density of MBSs to PBSs, denoted by β. Recent research demonstrates that the antenna height of PBSs should be kept to a minimum to obtain the best coverage and capacity in a 5G wireless network, whereas the area spectral efficiency (ASE) crashes once β crosses a specific value. We aim to address these issues and increase the performance of the 5G network by installing directional antennas at MBSs and omnidirectional antennas at PBSs, while keeping traditional antenna heights. We use the multi-tier 3GPP PL model to account for real-world scenarios and calculate the SINR using average power. This study demonstrates that, when the multi-slope 3GPP PL model is used and directional antennas are installed at MBSs, coverage can be improved by 10% and ASE by 2.5 times compared with the previous analysis. Similarly, the issue of an ASE crash beyond a base station density of 1000 is resolved in this study. © 2013 IEEE.
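- Code sketch: The multi-slope PL idea can be illustrated with a dual-slope model: one exponent before a breakpoint distance and a steeper one beyond it. The parameter values below are generic placeholders, not the 3GPP constants used in the paper.

```python
# Generic dual-slope path loss sketch: exponent n1 before the breakpoint
# distance dc, steeper n2 after. All parameter values are placeholders.
import numpy as np

def dual_slope_pl(d, pl0=40.0, d0=1.0, dc=100.0, n1=2.0, n2=4.0):
    """Path loss in dB at 3D distance d (metres)."""
    d = np.asarray(d, dtype=float)
    near = pl0 + 10 * n1 * np.log10(d / d0)
    far = pl0 + 10 * n1 * np.log10(dc / d0) + 10 * n2 * np.log10(d / dc)
    return np.where(d <= dc, near, far)

print(dual_slope_pl([10, 100, 1000]))  # steeper decay beyond the 100 m breakpoint
```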
Performance analysis of machine learning classifiers for non-technical loss detection
- Authors: Ghori, Khawaja , Imran, Muhammad , Nawaz, Asad , Abbasi, Rabeeh , Ullah, Ata , Szathmary, Laszlo
- Date: 2023
- Type: Text , Journal article
- Relation: Journal of Ambient Intelligence and Humanized Computing Vol. 14, no. 11 (2023), p. 15327-15342
- Full Text:
- Reviewed:
- Description: Power companies are responsible for producing and transferring the required amount of electricity from grid stations to individual households. Many countries suffer huge losses, amounting to billions of dollars, due to non-technical loss (NTL) in power supply companies. To deal with NTL, many machine learning classifiers have been employed in recent times. However, little has been studied about the performance evaluation metrics used in NTL detection to assess how well a classifier predicts non-technical loss. This paper first uses three classifiers, random forest, K-nearest neighbors and linear support vector machine, to predict the occurrence of NTL in a real dataset of an electric supply company containing approximately 80,000 monthly consumption records. Then, it computes 14 performance evaluation metrics across the three classifiers and identifies the key scientific relationships between them. These relationships provide insights into deciding which classifier is more useful under given scenarios for NTL detection. This work can serve as a baseline not only for NTL detection in the power industry but also for the selection of appropriate performance evaluation metrics for NTL detection. © 2020, The Author(s).
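- Code sketch: The study computes 14 evaluation metrics per classifier. The abstract does not list them, so the scikit-learn sketch below assembles an illustrative subset to show how such a battery is computed.

```python
# Minimal sketch: a battery of evaluation metrics for a binary NTL
# classifier. The paper's exact 14 metrics are not listed in the abstract,
# so this selection is illustrative.
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             cohen_kappa_score, f1_score, matthews_corrcoef,
                             precision_score, recall_score, roc_auc_score)

def report(y_true, y_pred, y_score):
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "balanced_accuracy": balanced_accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "mcc": matthews_corrcoef(y_true, y_pred),
        "kappa": cohen_kappa_score(y_true, y_pred),
        "roc_auc": roc_auc_score(y_true, y_score),
    }

# Toy example: labels, predictions, and predicted probabilities of NTL.
print(report([0, 1, 1, 0], [0, 1, 0, 0], [0.2, 0.9, 0.4, 0.1]))
```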
Securing smart healthcare cyber-physical systems against blackhole and greyhole attacks using a blockchain-enabled gini index framework
- Authors: Javed, Mannan , Tariq, Noshina , Ashraf, Muhammad , Khan, Farrukh , Asim, Muhammad , Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: Sensors Vol. 23, no. 23 (2023), p.
- Full Text:
- Reviewed:
- Description: The increasing reliance on cyber-physical systems (CPSs) in critical domains such as healthcare, smart grids, and intelligent transportation systems necessitates robust security measures to protect against cyber threats. Among these threats, blackhole and greyhole attacks pose significant risks to the availability and integrity of CPSs. Current detection and mitigation approaches often struggle to accurately differentiate between legitimate and malicious behavior, leading to ineffective protection. This paper introduces Gini-index and blockchain-based Blackhole/Greyhole RPL (GBG-RPL), a novel technique designed for efficient detection and mitigation of blackhole and greyhole attacks in smart health monitoring CPSs. GBG-RPL leverages the analytical power of the Gini index and the security advantages of blockchain technology to protect these systems against sophisticated threats. This research not only focuses on identifying anomalous activities but also proposes a resilient framework that ensures the integrity and reliability of the monitored data. GBG-RPL achieves notable improvements compared with another state-of-the-art technique referred to as BCPS-RPL, including a 7.18% reduction in packet loss ratio, an 11.97% enhancement in residual energy utilization, and a 19.27% decrease in energy consumption. Its security features are also very effective, boasting a 10.65% improvement in attack-detection rate and an 18.88% faster average attack-detection time. GBG-RPL also optimizes network management, exhibiting a 21.65% reduction in message overhead and a 28.34% decrease in end-to-end delay, thus showing its potential for enhanced reliability, efficiency, and security. © 2023 by the authors.
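- Code sketch: As a hypothetical illustration of the Gini-index idea, the sketch below computes the Gini impurity of a node's forward/drop counts, where high impurity suggests greyhole-like selective dropping; the paper's actual statistic and threshold are not given in the abstract.

```python
# Hypothetical sketch: Gini impurity over a node's forward/drop counts as a
# blackhole/greyhole indicator. The paper's exact statistic and threshold
# are not specified in the abstract.
import numpy as np

def gini_impurity(counts) -> float:
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return 1.0 - float(np.sum(p**2))

# counts = [packets forwarded, packets dropped] observed per node
honest   = gini_impurity([98, 2])    # near 0: consistent forwarding
greyhole = gini_impurity([55, 45])   # near 0.5: selective dropping
print(honest, greyhole)
# A node whose impurity exceeds a chosen threshold (an assumption) is flagged.
```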
A privacy-preserving framework for smart context-aware healthcare applications
- Authors: Azad, Muhammad , Arshad, Junaid , Mahmoud, Shazia , Salah, Khaled , Imran, Muhammad
- Date: 2022
- Type: Text , Journal article
- Relation: Transactions on Emerging Telecommunications Technologies Vol. 33, no. 8 (2022), p.
- Full Text:
- Reviewed:
- Description: Smart connected devices are widely used in healthcare to achieve improved well-being, quality of life, and security of citizens. While improving the quality of healthcare, such devices generate data containing sensitive patient information, where unauthorized access constitutes a breach of privacy that can lead to catastrophic outcomes for an individual, as well as financial loss to the governing body via regulations such as the General Data Protection Regulation. Furthermore, while the mobility afforded by smart devices enables ease of monitoring, portability, and pervasive processing, it introduces challenges with respect to scalability, reliability, and context awareness. This paper focuses on privacy preservation within smart context-aware healthcare, emphasizing privacy assurance challenges within the Electronic Transfer of Prescription. We present a case for a comprehensive, coherent, and dynamic privacy-preserving system for smart healthcare to protect sensitive user data. Based on a thorough analysis of existing privacy preservation models, we propose an enhancement to the widely used Salford model to achieve privacy preservation against masquerading and impersonation threats. The proposed model therefore improves privacy assurance for smart healthcare while addressing the unique challenges arising from the context-aware mobility of such applications. © 2019 John Wiley & Sons, Ltd.
An automatic detection of breast cancer diagnosis and prognosis based on machine learning using ensemble of classifiers
- Naseem, Usman, Rashid, Junaid, Ali, Liaqat, Kim, Jungeun, Haq, Qazi, Awan, Mazhar, Imran, Muhammad
- Authors: Naseem, Usman , Rashid, Junaid , Ali, Liaqat , Kim, Jungeun , Haq, Qazi , Awan, Mazhar , Imran, Muhammad
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 78242-78252
- Full Text:
- Reviewed:
- Description: Breast cancer (BC) is the second most prevalent cancer among women and a leading cause of death, with a very high mortality rate. Its effects can be reduced if it is diagnosed early. Early detection of BC greatly improves the prognosis and likelihood of recovery, as it may encourage prompt surgical care for patients. It is therefore vital to have a system enabling the healthcare industry to detect breast cancer quickly and accurately. Machine learning (ML) is widely used in BC pattern classification owing to its advantages in modelling critical feature detection from complex BC datasets. In this paper, we propose a system for automatic detection of BC diagnosis and prognosis using an ensemble of classifiers. First, we review various ML algorithms and ensembles of different ML algorithms. We present an overview of ML algorithms, including ANN, and ensembles of different classifiers for automatic BC diagnosis and prognosis detection. We also present and compare various ensemble models and other variants of the tested ML-based models, with and without an up-sampling technique, on two benchmark datasets. We also studied the effect of using balanced class weights on the prognosis dataset and compared its performance with the others. The results showed that the ensemble method outperformed other state-of-the-art methods and achieved 98.83% accuracy. Because of this high performance, the proposed system is of great importance to the medical industry and the relevant research community. © 2013 IEEE.
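As a rough illustration of the ensemble-of-classifiers idea, the sketch below builds a soft-voting ensemble on scikit-learn's built-in Wisconsin breast cancer dataset. The choice of base learners, the balanced class weight on one of them, and all hyperparameters are assumptions; the paper's exact ensemble is not specified here.

```python
# A minimal voting-ensemble sketch; base learners and settings are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Soft-voting ensemble of an ANN, a random forest, and a class-balanced
# logistic regression, mirroring the "balanced class weight" experiment.
ensemble = VotingClassifier(
    estimators=[
        ("ann", make_pipeline(StandardScaler(),
                              MLPClassifier(max_iter=2000, random_state=0))),
        ("rf", RandomForestClassifier(random_state=0)),
        ("lr", make_pipeline(StandardScaler(),
                             LogisticRegression(max_iter=5000,
                                                class_weight="balanced"))),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
```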
An effective data-collection scheme with AUV path planning in underwater wireless sensor networks
- Khan, Wahab, Hua, Wang, Anwar, Muhammad, Alharbi, Abdullah, Imran, Muhammad, Khan, Javed
- Authors: Khan, Wahab , Hua, Wang , Anwar, Muhammad , Alharbi, Abdullah , Imran, Muhammad , Khan, Javed
- Date: 2022
- Type: Text , Journal article
- Relation: Wireless Communications and Mobile Computing Vol. 2022, no. (2022), p.
- Full Text:
- Reviewed:
- Description: Data collection in underwater wireless sensor networks (UWSNs) using autonomous underwater vehicles (AUVs) is a more robust solution than traditional approaches in which each node transmits its data to a destination node. However, the design of delay-aware and energy-efficient path planning for AUVs is one of the most crucial problems in collecting data for UWSNs. To reduce network delay and increase network lifetime, we propose a novel, reliable AUV-based data-collection routing protocol for UWSNs. The proposed protocol employs a route planning mechanism to collect data using AUVs. The sink node directs the AUVs' data collection from sensor nodes to reduce energy consumption. First, sensor nodes are organized into clusters for better scalability, and these clusters are then arranged into groups so that an AUV can be assigned to each group. Second, the traveling path for each AUV is crafted based on a Markov decision process (MDP) for the reliable collection of data. The simulation results affirm the effectiveness and efficiency of the proposed technique in terms of throughput, energy efficiency, delay, and reliability. © 2022 Wahab Khan et al.
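The MDP-based path crafting can be pictured with a toy value-iteration example: an AUV moves on a small grid of candidate waypoints and is rewarded for reaching a cluster group's pickup point. Grid size, rewards, and discount factor are illustrative assumptions, not the protocol's actual parameters.

```python
# Toy value iteration for MDP-based AUV path planning; all numbers are
# illustrative assumptions, not the paper's parameters.
import numpy as np

n = 4                      # 4x4 grid of candidate waypoints
goal = (3, 3)              # cluster group awaiting data pickup
gamma, step_cost = 0.9, -1.0
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # N, S, W, E moves

V = np.zeros((n, n))
for _ in range(100):                 # iterate the Bellman backup to convergence
    V_new = V.copy()
    for i in range(n):
        for j in range(n):
            if (i, j) == goal:
                V_new[i, j] = 0.0    # terminal state
                continue
            q = []
            for di, dj in actions:
                ni = min(max(i + di, 0), n - 1)   # clamp to grid edges
                nj = min(max(j + dj, 0), n - 1)
                r = 10.0 if (ni, nj) == goal else step_cost
                q.append(r + gamma * V[ni, nj])
            V_new[i, j] = max(q)
    V = V_new

print(np.round(V, 2))   # greedy ascent on V traces the AUV's travel path
```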
An efficient network intrusion detection and classification system
- Ahmad, Iftikhar, Haq, Qazi, Imran, Muhammad, Alassafi, Madini, Alghamdi, Rayed
- Authors: Ahmad, Iftikhar , Haq, Qazi , Imran, Muhammad , Alassafi, Madini , Alghamdi, Rayed
- Date: 2022
- Type: Text , Journal article
- Relation: Mathematics Vol. 10, no. 3 (2022), p.
- Full Text:
- Reviewed:
- Description: Intrusion detection in computer networks is of great importance because of its effects on different communication and security domains. Network intrusion detection remains a challenging task, as a massive amount of data is required to train state-of-the-art machine learning models to detect network intrusion threats. Many approaches to network intrusion detection have been proposed recently. However, they face critical challenges owing to the continuous increase in new threats that current systems do not understand. This paper compares multiple techniques for developing a network intrusion detection system. Optimum features are selected from the dataset based on the correlation between features. Furthermore, we propose an AdaBoost-based approach for network intrusion detection based on these selected features and present its detailed functionality and performance. Unlike most previous studies, which employ the KDD99 dataset, we used the recent and comprehensive UNSW-NB15 dataset for network anomaly detection. This dataset is a collection of network packets exchanged between hosts. It comprises 49 attributes, covering nine types of threats: DoS, Fuzzers, Exploit, Worm, Shellcode, Reconnaissance, Generic, Analysis, and Backdoor. In this study, we employ SVM and MLP for comparison. Finally, we propose AdaBoost based on the decision tree classifier to classify normal activity and possible threats. We monitored the network traffic and classified it as either threats or non-threats. The experimental findings showed that our proposed method effectively detects different forms of network intrusion on computer networks and achieves an accuracy of 99.3% on the UNSW-NB15 dataset. The proposed system will be helpful in network security applications and research domains. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
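A minimal sketch of the described pipeline, correlation-based feature selection followed by AdaBoost over decision trees, is given below. Synthetic data stands in for UNSW-NB15 (whose download and preprocessing are out of scope here), and the correlation threshold and hyperparameters are assumptions.

```python
# Correlation-based feature selection + AdaBoost(decision tree) sketch.
# Synthetic data replaces UNSW-NB15; threshold/settings are assumptions.
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=49, n_informative=10,
                           random_state=0)   # stand-in for the 49 attributes
df = pd.DataFrame(X)

# Keep features most correlated with the label (threshold is an assumption).
corr = df.apply(lambda col: abs(np.corrcoef(col, y)[0, 1]))
selected = corr[corr > 0.05].index
X_sel = df[selected].to_numpy()

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3,
                                          random_state=0)
clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```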
An IoT-based smart healthcare system to detect dysphonia
- Ali, Zulfiqar, Imran, Muhammad, Shoaib, Muhammad
- Authors: Ali, Zulfiqar , Imran, Muhammad , Shoaib, Muhammad
- Date: 2022
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 34, no. 14 (2022), p. 11255-11265
- Full Text:
- Reviewed:
- Description: Smart healthcare systems for the internet of things (IoT) platform are cost-efficient and facilitate continuous remote monitoring of patients to avoid unnecessary hospital visits and long waiting times to see practitioners. A smart healthcare system for the detection of dysphonia can reduce the suffering and pain of patients by providing an initial evaluation of the voice. This preliminary feedback could minimize the burden on ENT specialists by referring only genuine cases to them, as well as giving patients an early alarm of potential voice complications. Any delay in treatment and/or an inaccurate diagnosis from subjective assessment tools may lead to severe circumstances for an individual, because some types of dysphonia are life-threatening. Therefore, an accurate and reliable smart healthcare system for the IoT platform to detect dysphonia is proposed and implemented in this study. Higher-order directional derivatives are used to analyze the time–frequency spectrum of signals in the proposed system. The computed derivatives provide essential information by analyzing the spectrum along different directions to capture the changes that appear due to malfunctioning of the vocal folds. The proposed system provides 99.1% accuracy, while the sensitivity and specificity are 99.4% and 98.1%, respectively. The experimental results showed that the proposed system provides better classification accuracy than traditional non-directional first-order derivatives. Hence, the system can be used as a reliable tool for detecting dysphonia and implemented on edge devices to avoid latency issues and protect privacy, unlike cloud processing. © 2021, Springer-Verlag London Ltd., part of Springer Nature.
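The following sketch illustrates, under assumed parameters, what taking directional derivatives of a time-frequency spectrum can look like in practice; the paper's exact higher-order operator and the downstream classifier are not reproduced.

```python
# Directional derivatives of a log-spectrogram: a minimal sketch with an
# invented test signal; window sizes and angles are assumptions.
import numpy as np
from scipy.signal import spectrogram

fs = 16000
t = np.arange(fs) / fs
voice = np.sin(2 * np.pi * 220 * t)              # stand-in voice signal
f, ts, S = spectrogram(voice, fs=fs, nperseg=512)
S = np.log(S + 1e-12)                            # log-magnitude spectrum

def directional_derivative(img, theta):
    """First-order derivative of a 2-D spectrum along direction theta."""
    gy, gx = np.gradient(img)                    # frequency axis, time axis
    return np.cos(theta) * gx + np.sin(theta) * gy

# Second-order derivative along 45 degrees: apply the operator twice.
d1 = directional_derivative(S, np.pi / 4)
d2 = directional_derivative(d1, np.pi / 4)
features = [d2.mean(), d2.std()]                 # toy descriptor for a classifier
print(features)
```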
Energy harvesting in underwater acoustic wireless sensor networks : design, taxonomy, applications, challenges and future directions
- Khan, Anwar, Imran, Muhammad, Alharbi, Abdullah, Mohamed, Ehab, Fouda, Mostafa
- Authors: Khan, Anwar , Imran, Muhammad , Alharbi, Abdullah , Mohamed, Ehab , Fouda, Mostafa
- Date: 2022
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 10, no. (2022), p. 134606-134622
- Full Text:
- Reviewed:
- Description: In underwater acoustic wireless sensor networks (UAWSNs), energy harvesting either enhances the lifetime of a network by increasing the battery power of sensor nodes or ensures battery-less operation of nodes. This, in effect, results in sustainable and reliable operation of networks deployed for various underwater applications. This work provides a survey of energy harvesting techniques for UAWSNs. Our work is unique among existing surveys of underwater energy harvesting in that it covers state-of-the-art techniques designed in the last decade. It analyzes every harvesting scheme in terms of its main idea, merits, demerits, and the extent of the harvested power (energy). The description of the merits supports selection of a suitable scheme for a given underwater application. The demerits of the addressed schemes provide insight into their future enhancement and improvement. Moreover, the harvesting techniques are classified into various categories depending upon the energy harvesting mechanism involved and compared based on the maximum and minimum amounts of harvested power, which helps in selecting a suitable category in view of the power budget of an underwater network before deployment. The challenges in energy harvesting and in UAWSNs are described to provide insight into them and to enable further increases in the amount of harvested energy. Finally, research directions are specified for future investigation. © 2013 IEEE.
Formal modeling and verification of a blockchain-based crowdsourcing consensus protocol
- Afzaal, Hamra, Imran, Muhammad, Janjua, Muhammad, Gochhayat, Sarada
- Authors: Afzaal, Hamra , Imran, Muhammad , Janjua, Muhammad , Gochhayat, Sarada
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 8163-8183
- Full Text:
- Reviewed:
- Description: Crowdsourcing is an effective technique that allows humans to solve complex problems that are hard to accomplish with automated tools. Significant challenges in crowdsourcing systems include avoiding security attacks, effective trust management, and ensuring the system's correctness. Blockchain is a promising technology that can be efficiently exploited to address security and trust issues. The consensus protocol is a core component of a blockchain network through which all the blockchain peers achieve agreement about the state of the distributed ledger. Therefore, its security, trustworthiness, and correctness are of vital importance. This work proposes a Secure and Trustworthy Blockchain-based Crowdsourcing (STBC) consensus protocol to address these challenges. Model checking, an effective and automatic technique based on formal methods, is utilized to ensure the correctness of the STBC consensus protocol. The proposed consensus protocol's formal specification is described using Communicating Sequential Programs (CSP#). Safety, fault tolerance, leader trust, and validators' trust are important properties for a consensus protocol; they are formally specified in Linear Temporal Logic (LTL) to prevent several security attacks, such as blockchain forks, selfish mining, and invalid block insertion. The Process Analysis Toolkit (PAT) is utilized for the formal verification of the proposed consensus protocol. © 2022 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
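For flavour, properties of the kind the abstract describes might be written in LTL as below; these formulas are illustrative guesses at a no-fork safety property and a commit liveness property, not the paper's actual CSP#/LTL specifications.

```latex
% Illustrative LTL properties in the style described; predicate names
% (committed, proposed, valid, height) are assumptions, not the paper's.
% Safety: two distinct blocks at the same height are never both committed
% (no blockchain fork).
\mathbf{G}\,\neg\big(\mathit{committed}(b_1) \wedge \mathit{committed}(b_2)
  \wedge b_1 \neq b_2 \wedge \mathit{height}(b_1) = \mathit{height}(b_2)\big)
% Liveness: every valid proposed block is eventually committed.
\mathbf{G}\big(\mathit{proposed}(b) \wedge \mathit{valid}(b)
  \rightarrow \mathbf{F}\,\mathit{committed}(b)\big)
```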
Water quality management using hybrid machine learning and data mining algorithms : an indexing approach
- Aslam, Bilal, Maqsoom, Ahsen, Cheema, Ali, Ullah, Fahim, Alharbi, Abdullah, Imran, Muhammad
- Authors: Aslam, Bilal , Maqsoom, Ahsen , Cheema, Ali , Ullah, Fahim , Alharbi, Abdullah , Imran, Muhammad
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 119692-119705
- Full Text:
- Reviewed:
- Description: One of the key functions of global water resource management authorities is river water quality (WQ) assessment. A water quality index (WQI) is developed for water assessments considering numerous quality-related variables. WQI assessments typically take a long time and are prone to errors during sub-index generation. This can be tackled through the latest machine learning (ML) techniques, renowned for superior accuracy. In this study, water samples were taken from wells in the study area (North Pakistan) to develop WQI prediction models. Four standalone algorithms, i.e., random trees (RT), random forest (RF), M5P, and reduced error pruning tree (REPT), were used in this study. In addition, 12 hybrid data-mining algorithms (combinations of the standalone algorithms with bagging (BA), cross-validation parameter selection (CVPS), and randomizable filtered classification (RFC)) were also used. Using the 10-fold cross-validation technique, the data were separated into two groups (70:30) for algorithm creation. Ten random input permutations were created using Pearson correlation coefficients to identify the best possible combination of datasets for improving algorithm prediction. The variables with very low correlations performed poorly, whereas hybridization increased the prediction capability of numerous standalone algorithms. The hybrid RT-Artificial Neural Network (RT-ANN), with RMSE = 2.319, MAE = 2.248, NSE = 0.945, and PBIAS = -0.64, outperformed all other algorithms. Most algorithms overestimated WQI values, except for BA-RF, RF, BA-REPT, REPT, RFC-M5P, RFC-REPT, and the ANN-Adaptive Network-Based Fuzzy Inference System (ANN-ANFIS). © 2013 IEEE.
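For reference, the four goodness-of-fit metrics quoted above (RMSE, MAE, NSE, PBIAS) can be computed as in the sketch below; the sample observed/predicted WQI arrays are invented for illustration, and the PBIAS sign convention is chosen so that negative values indicate overestimation, matching the abstract.

```python
# Standard goodness-of-fit metrics for WQI model evaluation; the sample
# arrays are invented for illustration.
import numpy as np

def rmse(obs, sim):
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def mae(obs, sim):
    return float(np.mean(np.abs(obs - sim)))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 indicates a perfect fit."""
    return float(1 - np.sum((obs - sim) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))

def pbias(obs, sim):
    """Percent bias; negative values indicate overestimation
    (the convention used in the abstract)."""
    return float(100 * np.sum(obs - sim) / np.sum(obs))

obs = np.array([72.0, 65.0, 80.0, 58.0, 69.0])   # observed WQI (invented)
sim = np.array([70.5, 66.0, 82.0, 60.0, 68.0])   # predicted WQI (invented)
print(rmse(obs, sim), mae(obs, sim), nse(obs, sim), pbias(obs, sim))
```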