Formal verification of justification and finalization in beacon chain
- Afzaal, Hamra, Zafar, Nazir, Tehseen, Aqsa, Kousar, Shaheen, Imran, Muhammad
- Authors: Afzaal, Hamra , Zafar, Nazir , Tehseen, Aqsa , Kousar, Shaheen , Imran, Muhammad
- Date: 2024
- Type: Text , Journal article
- Relation: IEEE Access Vol. 12, no. (2024), p. 55077-55102
- Full Text:
- Reviewed:
- Description: In recent years, Beacon Chain, known as the core of Ethereum 2.0, has gained considerable attention since its launch. Many validators have staked billions of Ether in the Proof of Stake (PoS) network. It is a mission-critical system and its security and stability rely on the justification and finalization of checkpoints. These are essential elements of the Casper FFG consensus algorithm utilized by the Beacon Chain. This process is critical for establishing a trustworthy foundation and finalizing proposed blocks by confirming agreed-upon checkpoints. Hence, ensuring the correctness of checkpoints in the Beacon Chain is of significant importance because any bug in it can have serious implications. To address this challenge, we employ formal methods, a popular mathematical approach used for verifying the correctness of such critical systems. In this work, we have performed formal verification of the processes of Beacon Chain state initialization and the justification and finalization of checkpoints using the Process Analysis Toolkit (PAT) model checker. The adoption of model checking through the PAT model checker presents a novel contribution of our work, as this approach has not previously been utilized in the formal verification of the Beacon Chain. The presented work is specified in Communicating Sequential Programs, a formal specification language, and the properties are described through Linear Temporal Logic. The PAT model checker takes the specified formal model and properties as input to assess whether the properties are satisfied. The properties are analyzed with respect to the verification time, visited states, total transitions, and memory used. Through this research, we aim to increase confidence in the correctness and reliability of the Beacon Chain. © 2013 IEEE.
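The abstract centers on the Casper FFG justification and finalization rules. As a rough illustration only (not the authors' PAT/CSP# model), the Python sketch below shows the core supermajority rule in toy form: a checkpoint becomes justified when attestations carrying at least two-thirds of total stake link it to an already justified checkpoint, and a justified checkpoint is finalized when its immediate child is justified by such a link. The names, the stake figure, and the simplified one-step finalization rule are all assumptions for illustration.
```python
from collections import defaultdict

TOTAL_STAKE = 96  # toy total stake (assumption)

def process_epoch(attestations, justified, finalized):
    """Toy Casper-FFG-style rule: attestations are (source, target, stake) links.

    A target checkpoint is justified when links from a justified source carry
    >= 2/3 of total stake; a justified source is finalized when its direct
    child target is justified by such a link (simplified rule)."""
    weight = defaultdict(int)
    for source, target, stake in attestations:
        if source in justified:
            weight[(source, target)] += stake

    for (source, target), w in weight.items():
        if 3 * w >= 2 * TOTAL_STAKE:
            justified.add(target)
            if target == source + 1:  # direct child justified -> finalize source
                finalized.add(source)
    return justified, finalized

# usage: epoch numbers stand in for checkpoints; epoch 0 starts justified
justified, finalized = {0}, set()
attestations = [(0, 1, 40), (0, 1, 30)]  # 70 of 96 stake >= 2/3
justified, finalized = process_epoch(attestations, justified, finalized)
print(justified, finalized)  # {0, 1} {0}
```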
A blockchain-based deep-learning-driven architecture for quality routing in wireless sensor networks
- Khan, Zahoor, Amjad, Sana, Ahmed, Farwa, Almasoud, Abdullah, Imran, Muhammad, Javaid, Nadeem
- Authors: Khan, Zahoor , Amjad, Sana , Ahmed, Farwa , Almasoud, Abdullah , Imran, Muhammad , Javaid, Nadeem
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 31036-31051
- Full Text:
- Reviewed:
- Description: Over the past few years, great importance has been given to wireless sensor networks (WSNs) as they play a significant role in facilitating the world with daily life services like healthcare, military, social products, etc. However, the heterogeneous nature of WSNs makes them prone to various attacks, which results in low throughput, high network delay, and high energy consumption. In WSNs, routing is performed using different routing protocols like low-energy adaptive clustering hierarchy (LEACH), heterogeneous gateway-based energy-aware multi-hop routing (HMGEAR), etc. In such protocols, some nodes in the network may perform malicious activities. Therefore, four deep learning (DL) techniques and a real-time message content validation (RMCV) scheme based on blockchain are used in the proposed network for the detection of malicious nodes (MNs). Moreover, to analyse the routing data in the WSN, DL models are trained on a state-of-the-art dataset generated from LEACH, known as WSN-DS 2016. The WSN contains three types of nodes: sensor nodes, cluster heads (CHs) and the base station (BS). The CHs, after aggregating the data received from the sensor nodes, send it towards the BS. Furthermore, to overcome the single point of failure issue, a decentralized blockchain is deployed on the CHs and the BS. Additionally, MNs are removed from the network using RMCV and DL techniques. Moreover, legitimate nodes (LNs) are registered in the blockchain network using the proof-of-authority consensus protocol. The protocol outperforms proof-of-work in terms of computational cost. Later, routing is performed between the LNs using different routing protocols and the results are compared with the original LEACH and HMGEAR protocols. The results show that the accuracy of GRU is 97%, LSTM is 96%, CNN is 92% and ANN is 90%. Throughput, delay and the death of the first node are computed for LEACH, LEACH with DL, LEACH with RMCV, HMGEAR, HMGEAR with DL and HMGEAR with RMCV. Moreover, Oyente is used to perform the formal security analysis of the designed smart contract. The analysis shows that the blockchain network is resilient against vulnerabilities. © 2013 IEEE.
- Wang, Yanping, Wang, Xiaofen, Dai, Hong-Ning, Zhang, Xiaosong, Imran, Muhammad
- Authors: Wang, Yanping , Wang, Xiaofen , Dai, Hong-Ning , Zhang, Xiaosong , Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 19, no. 6 (2023), p. 7835-7847
- Full Text: false
- Reviewed:
- Description: Intelligent Transport Systems (ITS) have received growing attention recently, driven by technical advances in the Industrial Internet of Vehicles (IIoV). In IIoV, vehicles report traffic data to management infrastructures to achieve better ITS services. To ensure security and privacy, many anonymous authentication-enabled data reporting protocols have been proposed. However, these protocols usually require a large number of preloaded pseudonyms or involve a costly and irrevocable group signature. Thus, they are not ready for realistic deployment due to large storage overhead, expensive computation costs, or absence of malicious users' revocation. To address these issues, we present a novel data reporting protocol for edge-assisted ITS in this paper, where the traffic data is sent to distributed edge nodes for local processing. Specifically, we propose a new anonymous authentication scheme fine-tuned to fulfill the needs of vehicular data reporting, which allows authenticated vehicles to report unlimited unlinkable messages to edge nodes without huge pseudonym download and storage costs. Moreover, we design an efficient certificate update scheme based on a bivariate polynomial function. In this way, malicious vehicles can be revoked with time complexity O(1). The security analysis demonstrates that our protocol satisfies source authentication, anonymity, unlinkability, traceability, revocability, nonframeability, and nonrepudiation. Further, extensive simulation results show that the performance of our protocol is greatly improved since the signature size is reduced by at least 8%, the computation costs in message signing and verification are reduced by at least 56% and 67%, respectively, and the packet loss rate is reduced by at least 14%. © 2005-2012 IEEE.
A smart healthcare framework for detection and monitoring of COVID-19 using IoT and cloud computing
- Nasser, Nidal, Emad-ul-Haq, Qazi, Imran, Muhammad, Ali, Asmaa, Razzak, Imran, Al-Helali, Abdulaziz
- Authors: Nasser, Nidal , Emad-ul-Haq, Qazi , Imran, Muhammad , Ali, Asmaa , Razzak, Imran , Al-Helali, Abdulaziz
- Date: 2023
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 35, no. 19 (2023), p. 13775-13789
- Full Text:
- Reviewed:
- Description: Coronavirus (COVID-19) is a very contagious infection that has drawn the world’s attention. Modeling such diseases can be extremely valuable in predicting their effects. Although classic statistical modeling may provide adequate models, it may also fail to understand the data’s intricacy. An automatic COVID-19 detection system based on computed tomography (CT) scan or X-ray images is effective, but a robust system design is challenging. In this study, we propose an intelligent healthcare system that integrates IoT-cloud technologies. This architecture uses smart connectivity sensors and deep learning (DL) for intelligent decision-making from the perspective of the smart city. The intelligent system tracks the status of patients in real time and delivers reliable, timely, and high-quality healthcare facilities at a low cost. COVID-19 detection experiments are performed using DL to test the viability of the proposed system. We use a sensor for recording, transferring, and tracking healthcare data. CT scan images from patients are sent to the cloud by IoT sensors, where the cognitive module is stored. The system decides the patient status by examining the images of the CT scan. The DL cognitive module makes the real-time decision on the possible course of action. When information is conveyed to a cognitive module, we use a state-of-the-art classification algorithm based on DL, i.e., ResNet50, to detect and classify whether the patients are normal or infected by COVID-19. We validate the proposed system’s robustness and effectiveness using two benchmark publicly available datasets (Covid-Chestxray dataset and Chex-Pert dataset). At first, a dataset of 6000 images is prepared from the above two datasets. The proposed system was trained on the collection of images from 80% of the datasets and tested with 20% of the data. Cross-validation is performed using a tenfold cross-validation technique for performance evaluation. The results indicate that the proposed system gives an accuracy of 98.6%, a sensitivity of 97.3%, a specificity of 98.2%, and an F1-score of 97.87%. Results clearly show that the accuracy, specificity, sensitivity, and F1-score of our proposed method are high. The comparison shows that the proposed system performs better than the existing state-of-the-art systems. The proposed system will be helpful in medical diagnosis research and healthcare systems. It will also support the medical experts for COVID-19 screening and lead to a precious second opinion. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
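The classification step described above is a ResNet50-based transfer-learning classifier for CT images. The following Keras sketch shows one plausible way to set such a model up; the input size, head layers, optimizer, and training call are illustrative assumptions, not the authors' exact configuration.
```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_covid_classifier(input_shape=(224, 224, 3)):
    """Binary COVID-19 vs. normal classifier on CT images via transfer learning.

    Uses an ImageNet-pretrained ResNet50 as a frozen feature extractor and a
    small trainable head; all hyperparameters here are illustrative only."""
    base = tf.keras.applications.ResNet50(
        weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the pretrained backbone

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # COVID-19 vs. normal
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_covid_classifier()
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # hypothetical datasets
```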
An effective solution to the optimal power flow problem using meta-heuristic algorithms
- Aurangzeb, Khursheed, Shafiq, Sundas, Alhussein, Musaed, Pamir, Javaid, Nadeem, Imran, Muhammad
- Authors: Aurangzeb, Khursheed , Shafiq, Sundas , Alhussein, Musaed , Pamir , Javaid, Nadeem , Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: Frontiers in Energy Research Vol. 11, no. (2023), p.
- Full Text:
- Reviewed:
- Description: Financial loss in power systems is an emerging problem that needs to be resolved. To tackle the mentioned problem, energy generated from various generation sources in the power network needs proper scheduling. In order to determine the best settings for the control variables, this study formulates and solves an optimal power flow (OPF) problem. In the proposed work, the bird swarm algorithm (BSA), JAYA, and a hybrid of both algorithms, termed HJBSA, are used for obtaining the settings of optimum variables. We perform simulations by considering the constraints of voltage stability and line capacity, and generated reactive and active power. In addition, the used algorithms solve the problem of OPF and minimize carbon emission generated from thermal systems, fuel cost, voltage deviations, and losses in generation of active power. The suggested approach is evaluated by putting it into use on two separate IEEE testing systems, one with 30 buses and the other with 57 buses. The simulation results show that for the 30-bus system, the minimization in cost by HJBSA, JAYA, and BSA is 860.54 $/h, 862.31 $/h, and 900.01 $/h, respectively, while for the 57-bus system, it is 5506.9 $/h, 6237.4 $/h, and 7245.6 $/h for HJBSA, JAYA, and BSA, respectively. Similarly, for the 30-bus system, the power loss by HJBSA, JAYA, and BSA is 9.542 MW, 10.102 MW, and 11.427 MW, respectively, while for the 57-bus system, the value of power loss is 13.473 MW, 20.552 MW, and 18.638 MW for HJBSA, JAYA, and BSA, respectively. Moreover, HJBSA, JAYA, and BSA cause reduction in carbon emissions by 4.394 ton/h, 4.524 ton/h, and 4.401 ton/h, respectively, with the 30-bus system. With the 57-bus system, HJBSA, JAYA, and BSA cause reduction in carbon emissions by 26.429 ton/h, 27.014 ton/h, and 28.568 ton/h, respectively. The results show the outperformance of HJBSA. Copyright © 2023 Aurangzeb, Shafiq, Alhussein, Pamir, Javaid and Imran.
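For reference, the fuel-cost figures quoted above come from the classical OPF formulation, sketched below in its standard textbook form. Here a_i, b_i, c_i are generator cost coefficients, G_ij and B_ij are line conductance and susceptance, and S_l is a line flow; this is the generic single-objective statement, not the paper's exact multi-objective weighting of cost, emissions, voltage deviation, and losses.
```latex
\begin{aligned}
\min_{P_G,\,|V|,\,\theta} \quad & \sum_{i \in \mathcal{G}} \bigl( a_i + b_i P_{G_i} + c_i P_{G_i}^{2} \bigr) \\
\text{s.t.} \quad & P_{G_i} - P_{D_i} = \sum_{j=1}^{N} |V_i|\,|V_j| \bigl( G_{ij}\cos\theta_{ij} + B_{ij}\sin\theta_{ij} \bigr), \\
& P_{G_i}^{\min} \le P_{G_i} \le P_{G_i}^{\max}, \qquad
  V_i^{\min} \le |V_i| \le V_i^{\max}, \qquad
  |S_{\ell}| \le S_{\ell}^{\max}.
\end{aligned}
```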
An optimized hybrid deep intrusion detection model (HD-IDM) for enhancing network security
- Ahmad, Iftikhar, Imran, Muhammad, Qayyum, Abdul, Ramzan, Muhammad, Alassafi, Madini
- Authors: Ahmad, Iftikhar , Imran, Muhammad , Qayyum, Abdul , Ramzan, Muhammad , Alassafi, Madini
- Date: 2023
- Type: Text , Journal article
- Relation: Mathematics Vol. 11, no. 21 (2023), p.
- Full Text:
- Reviewed:
- Description: Detecting cyber intrusions in network traffic is a tough task for cybersecurity. Current methods struggle with the complexity of understanding patterns in network data. To solve this, we present the Hybrid Deep Learning Intrusion Detection Model (HD-IDM), a new way that combines GRU and LSTM classifiers. GRU is good at catching quick patterns, while LSTM handles long-term ones. HD-IDM blends these models using weighted averaging, boosting accuracy, especially with complex patterns. We tested HD-IDM on four datasets: CSE-CIC-IDS2017, CSE-CIC-IDS2018, NSL KDD, and CIC-DDoS2019. The HD-IDM classifier achieved remarkable performance metrics on all datasets. It attains an outstanding accuracy of 99.91%, showcasing its consistent precision across the dataset. With an impressive precision of 99.62%, it excels in accurately categorizing positive cases, crucial for minimizing false positives. Additionally, maintaining a high recall of 99.43%, it effectively identifies the majority of actual positive cases while minimizing false negatives. The F1-score of 99.52% emphasizes its robustness, making it the top choice for classification tasks requiring precision and reliability. It is particularly good at ROC and precision/recall curves, discriminating normal and harmful network activities. While HD-IDM is promising, it has limits. It needs labeled data and may struggle with new intrusion methods. Future work should find ways to handle unlabeled data and adapt to emerging threats. Also, making HD-IDM work faster for real-time use and dealing with scalability challenges is key for its broader use in changing network environments. © 2023 by the authors.
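The fusion step of HD-IDM is described as a weighted average of the GRU and LSTM outputs. A minimal numpy sketch of that blending step follows; the probabilities and the blending weight are made-up values, and the thresholding at 0.5 is an assumption rather than the paper's tuned decision rule.
```python
import numpy as np

def blend_predictions(p_gru, p_lstm, w_gru=0.5):
    """Weighted-average ensemble of two classifiers' intrusion probabilities.

    p_gru, p_lstm: per-flow attack probabilities from the two models.
    w_gru: blending weight for the GRU branch (illustrative value only)."""
    p = w_gru * np.asarray(p_gru) + (1.0 - w_gru) * np.asarray(p_lstm)
    return (p >= 0.5).astype(int), p  # hard labels and blended scores

# usage with toy scores standing in for real model outputs
labels, scores = blend_predictions([0.9, 0.2, 0.6], [0.8, 0.4, 0.3], w_gru=0.6)
print(labels, scores)
```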
Automated methods for diagnosis of Parkinson’s disease and predicting severity level
- Ayaz, Zainab, Naz, Saeeda, Khan, Naila, Razzak, Imran, Imran, Muhammad
- Authors: Ayaz, Zainab , Naz, Saeeda , Khan, Naila , Razzak, Imran , Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 35, no. 20 (2023), p. 14499-14534
- Full Text: false
- Reviewed:
- Description: The recent advancements in information technology and bioinformatics have led to exceptional contributions in medical sciences. Extensive developments have been recorded for digital devices, thermometers, digital equipment and health monitoring systems for the automated diagnosis of different diseases. These automated systems assist doctors with accurate and efficient disease diagnosis. Parkinson’s disease is a neurodegenerative disorder that affects the nervous system. Over the years, numerous efforts have been reported for the efficient automatic detection of Parkinson’s disease. Different datasets including voice data samples, radiology images, handwriting samples and gait specimens have been used for analysis and detection. Techniques such as machine learning and deep learning have been used broadly and reported promising results. This review paper aims to provide a comprehensive survey of the use of artificial intelligence for Parkinson’s disease diagnosis. The available datasets and their various properties are discussed in detail. Further, a thorough overview is provided for the existing algorithms, methods and approaches utilizing different datasets. Several key peculiarities and challenges are also provided based on the comprehensive literature review to diagnose a healthy or unhealthy person. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
Data evolution governance for ontology-based digital twin product lifecycle management
- Ren, Zijie, Shi, Jianhua, Imran, Muhammad
- Authors: Ren, Zijie , Shi, Jianhua , Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 19, no. 2 (2023), p. 1791-1802
- Full Text: false
- Reviewed:
- Description: Product lifecycle management (PLM) is an effective method for enhancing the market competitiveness of modern manufacturing industries. The digital twin is characterized by a profound integration of physics and information systems, which provides a technical means for integrating multisource information and breaking the time and space barrier of communication at each link of the lifecycle. Currently, however, the application of this technology focuses primarily on the product itself and 'service-oriented' application results. There is a lack of focus on twin data and its internal evolutionary mechanisms separately. In the management of global data resources, the benefits of digital twin technology cannot be fully realized. This article applies ontology technology in an innovative manner to the field of the digital twin to increase the reusability of twin data. Initially, a four-layered ontology-based twin data management architecture is presented. Then, a three-dimensional and three-granularity unified evolution model of full lifecycle twin data is proposed, as well as its ontology model. Then, the service mode of data components at each stage of the lifecycle is defined, a knowledge-sharing plane is established in the digital twin, and a data governance method based on ontology reasoning using data components on the shared plane is proposed. The ICandyBox simulation platform is then used to demonstrate the concept of the proposed method, and future research directions are proposed. © 2005-2012 IEEE.
Deep learning: survey of environmental and camera impacts on internet of things images
- Kaur, Roopdeep, Karmakar, Gour, Xia, Feng, Imran, Muhammad
- Authors: Kaur, Roopdeep , Karmakar, Gour , Xia, Feng , Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: Artificial Intelligence Review Vol. 56, no. 9 (2023), p. 9605-9638
- Full Text:
- Reviewed:
- Description: Internet of Things (IoT) images are attracting growing attention because of their wide range of applications which require visual analysis to drive automation. However, IoT images are predominantly captured from outdoor environments and thus are inherently impacted by the camera and environmental parameters which can adversely affect corresponding applications. Deep Learning (DL) has been widely adopted in the field of image processing and computer vision and can reduce the impact of these parameters on IoT images. Although many DL-based techniques are available in the current literature for analyzing and reducing the environmental and camera impacts on IoT images, to the best of our knowledge, no survey paper presents state-of-the-art DL-based approaches for this purpose. Motivated by this, for the first time, we present a Systematic Literature Review (SLR) of existing DL techniques available for analyzing and reducing environmental and camera lens impacts on IoT images. As part of this SLR, firstly, we reiterate and highlight the significance of IoT images in their respective applications. Secondly, we describe the DL techniques employed for assessing the environmental and camera lens distortion impacts on IoT images. Thirdly, we illustrate how DL can be effective in reducing the impact of environmental and camera lens distortion in IoT images. Finally, along with the critical reflection on the advantages and limitations of the techniques, we also present ways to address the research challenges of existing techniques and identify further research directions to advance the relevant research areas. © 2023, The Author(s).
Deep learning-based digital image forgery detection using transfer learning
- Qazi, Emad, Zia, Tanveer, Imran, Muhammad, Faheem, Muhammad
- Authors: Qazi, Emad , Zia, Tanveer , Imran, Muhammad , Faheem, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: Intelligent Automation and Soft Computing Vol. 38, no. 3 (2023), p. 225-240
- Full Text:
- Reviewed:
- Description: Deep learning is considered one of the most efficient and reliable methods through which the legitimacy of a digital image can be verified. In the current cyber world where deepfakes have shaken the global community, confirming the legitimacy of a digital image is of great importance. With the advancements made in deep learning techniques, now we can efficiently train and develop state-of-the-art digital image forensic models. The most traditional and widely used method by researchers is convolutional neural networks (CNN) for verification of image authenticity, but it consumes a considerable number of resources and requires a large dataset for training. Therefore, in this study, a transfer learning based deep learning technique for image forgery detection is proposed. The proposed methodology consists of three modules, namely the preprocessing module, the convolutional module, and the classification module. By using our proposed technique, the training time is drastically reduced by utilizing the pre-trained weights. The performance of the proposed technique is evaluated by using benchmark datasets, i.e., BOW and BOSSBase, covering five forensic types which include JPEG compression, contrast enhancement (CE), median filtering (MF), additive Gaussian noise, and resampling. We evaluated the performance of our proposed technique by conducting various experiments and case scenarios and achieved an accuracy of 99.92%. The results show the superiority of the proposed system. © 2023, Tech Science Press. All rights reserved.
- Ali, Sajid, Abusabha, Omar, Ali, Farman, Imran, Muhammad, Abuhmed, Tamer
- Authors: Ali, Sajid , Abusabha, Omar , Ali, Farman , Imran, Muhammad , Abuhmed, Tamer
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Transactions on Network and Service Management Vol. 20, no. 2 (2023), p. 1199-1209
- Full Text: false
- Reviewed:
- Description: Despite the benefits of the Internet of Things (IoT), the growing influx of IoT-specific malware coordinating large-scale cyberattacks via infected IoT devices has created a substantial threat to the Internet ecosystem. Assessing IoT systems' security and developing mitigation measures to prevent the spread of IoT malware is therefore critical. Furthermore, for training and testing the fidelity of cyber security-based Machine Learning (ML) and Deep Learning (DL) approaches, the collection and exploration of information from multiple IoT sources are crucial. In this regard, we propose a multitask DL model for detecting IoT malware. Our proposed Long Short-Term Memory (LSTM) based model efficiently performs two tasks: 1) determination of whether the provided traffic is benign or malicious, and 2) determination of the malware type for identifying malicious network traffic. We used large-scale traffic data of 145 .pcap files of benign and malicious traffic collected from 18 different IoT devices. We performed a time-series analysis on the packets of traffic flows, which were then used to train the proposed model. The features extracted from the dataset were categorized into three modalities: flow-related, traffic flag-related, and packet payload-related features. A feature selection approach was employed at the feature and modality levels, and the best modalities and features were utilized for performance enhancement. For tasks 1 and 2 and multitask classification, the flow-related and flag-related modalities showed the best testing accuracies of 92.63%, 88.45%, and 95.83%, respectively. © 2004-2012 IEEE.
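The two tasks described above (benign vs. malicious detection and malware-type identification) fit a shared-encoder, two-head network. The Keras sketch below shows one plausible shape for such a multitask LSTM; the sequence length, feature dimension, number of malware classes, and layer sizes are placeholders rather than the paper's settings.
```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_multitask_lstm(timesteps=20, n_features=32, n_malware_types=8):
    """Shared LSTM encoder with two heads: binary detection and malware-type
    classification. All dimensions are illustrative assumptions."""
    inputs = layers.Input(shape=(timesteps, n_features))
    shared = layers.LSTM(64)(inputs)

    detect = layers.Dense(1, activation="sigmoid", name="is_malicious")(shared)
    family = layers.Dense(n_malware_types, activation="softmax",
                          name="malware_type")(shared)

    model = Model(inputs, [detect, family])
    model.compile(
        optimizer="adam",
        loss={"is_malicious": "binary_crossentropy",
              "malware_type": "sparse_categorical_crossentropy"},
        metrics={"is_malicious": "accuracy", "malware_type": "accuracy"})
    return model

model = build_multitask_lstm()
model.summary()
```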
Electricity theft detection for energy optimization using deep learning models
- Pamir, Javaid, Nadeem, Javed, Muhammad, Houran, Mohamad, Almasoud, Abdullah, Imran, Muhammad
- Authors: Pamir , Javaid, Nadeem , Javed, Muhammad , Houran, Mohamad , Almasoud, Abdullah , Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: Energy Science and Engineering Vol. 11, no. 10 (2023), p. 3575-3596
- Full Text:
- Reviewed:
- Description: The rapid increase in nontechnical loss (NTL) has become a principal concern for distribution system operators (DSOs) over the years. Electricity theft makes up a major part of NTL. It causes losses for the DSOs and also deteriorates the quality of electricity. The introduction of advanced metering infrastructure along with the upgradation of the traditional grids to the smart grids (SGs) has helped the electric utilities to collect the electricity consumption (EC) readings of consumers, which further empowers the machine learning (ML) algorithms to be exploited for efficient electricity theft detection (ETD). However, there are still some shortcomings, such as class imbalance, curse of dimensionality, and bypassing the automated tuning of hyperparameters in the existing ML-based theft classification schemes that limit their performances. Therefore, it is essential to develop a novel approach to deal with these problems and efficiently detect electricity theft in SGs. Using the salp swarm algorithm (SSA), gate convolutional autoencoder (GCAE), and cost-sensitive learning and long short-term memory (CSLSTM), an effective ETD model named SSA–GCAE–CSLSTM is proposed in this work. Furthermore, a hybrid GCAE model is developed via the combination of gated recurrent unit and convolutional autoencoder. The proposed model comprises five submodules: (1) data preparation, (2) data balancing, (3) dimensionality reduction, (4) hyperparameters' optimization, and (5) electricity theft classification. The real-time EC data provided by the state grid corporation of China are used for performance evaluations via extensive simulations. The proposed model is compared with two basic models, CSLSTM and GCAE–CSLSTM, along with seven benchmarks, support vector machine, decision tree, extra trees, random forest, adaptive boosting, extreme gradient boosting, and convolutional neural network. The results exhibit that SSA–GCAE–CSLSTM yields 99.45% precision, 95.93% F1 score, 92.25% accuracy, and 71.13% area under the receiver operating characteristic curve score, and surpasses the other models in terms of ETD. © 2023 The Authors. Energy Science & Engineering published by Society of Chemical Industry and John Wiley & Sons Ltd.
Federated learning based trajectory optimization for UAV enabled MEC
- Nehra, Anushka, Consul, Prakhar, Budhiraja, Ishan, Kaur, Gagandeep, Nasser, Nidal, Imran, Muhammad
- Authors: Nehra, Anushka , Consul, Prakhar , Budhiraja, Ishan , Kaur, Gagandeep , Nasser, Nidal , Imran, Muhammad
- Date: 2023
- Type: Text , Conference paper
- Relation: 2023 IEEE International Conference on Communications, ICC 2023 Vol. 2023-May, p. 1640-1645
- Full Text: false
- Reviewed:
- Description: We present a moving mobile edge computing architecture in which unmanned aerial vehicles (UAVs) serve as equipment, providing computational power and allowing task offloading from mobile devices (MDs). By improving user association, resource allocation, and UAV trajectory, we optimize the energy consumption of all MDs. Towards that purpose, we provide a trajectory optimization technique for making real-time choices while considering the full state of the environment, followed by a DRL-based trajectory control approach (RLCT). The RLCT approach may be adapted to any UAV takeoff point and can find the solution faster. Federated learning (FL) is introduced to address the optimization problem in a semi-distributed DRL technique to deal with UAV trajectory constraints. The proposed FRL approach enables devices to rapidly train the models locally while communicating with a local server to construct a global network. The simulation results show that the proposed RLCT and FRL techniques outperform the existing methods, with FRL performing best among all. © 2023 IEEE.
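The federated step described above amounts to devices training locally and a server aggregating their models. A minimal FedAvg-style weight-averaging sketch follows, independent of the DRL trajectory controller itself; the client models, layer shapes, and sample counts are illustrative assumptions.
```python
import numpy as np

def federated_average(local_models, num_samples):
    """FedAvg-style aggregation: average each parameter array across clients,
    weighted by the number of local training samples (toy sketch)."""
    total = float(sum(num_samples))
    return [
        sum(w * (n / total) for w, n in zip(layer_weights, num_samples))
        for layer_weights in zip(*local_models)
    ]

# usage: two clients, each model given as a list of parameter arrays
client_a = [np.ones((2, 2)), np.zeros(2)]
client_b = [3 * np.ones((2, 2)), np.ones(2)]
global_model = federated_average([client_a, client_b], num_samples=[100, 300])
print(global_model[0])  # 0.25*1 + 0.75*3 = 2.5 in every entry
```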
Formal verification of fraud-resilience in a crowdsourcing consensus protocol
- Afzaal, Hamra, Imran, Muhammad, Janjua, Muhammad
- Authors: Afzaal, Hamra , Imran, Muhammad , Janjua, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: Computers & Security Vol. 131, no. (2023), p. 103290
- Full Text: false
- Reviewed:
- Description: •A Trust and Transactions Chain consensus protocol is proposed for a blockchain-based crowdsourcing system. •The Communicating Sequential Programs language is utilized for the formal modeling of the proposed consensus protocol. •The properties of no sybil attack, no eclipse attack, and fraud-resilience are defined through Linear Temporal Logic. •Model checking is employed to ensure the correctness of the proposed consensus protocol. •The formal verification is performed by giving the formal model and properties as input to the Process Analysis Toolkit. Crowdsourcing has emerged as a promising computing paradigm that utilizes human intelligence to achieve complex tasks, but it encounters several security and trust issues. Blockchain is a potential technology that can resolve most of these issues; however, it is difficult to find an appropriate consensus protocol applicable to crowdsourcing systems. Therefore, this work presents a Trust and Transactions Chain (TTC) consensus protocol built upon blockchain technology. It selects a trusted leader and validators considering a trust model which depends on deposit ratio, block generation and validation rate, and waiting rate. The TTC protocol addresses the main challenge of ensuring correctness related to critical systems of crowdsourcing, which has extreme significance as their failure can result in disastrous consequences. This work is primarily focused on fraud-resilience, avoiding the double-spending attack. It also deals with sybil and eclipse attacks. Model checking is exploited because it is an effective and automatic way to conduct formal verification. The TTC protocol is formally modeled utilizing Communicating Sequential Programs, and the fraud-resilience property is specified using Linear Temporal Logic. The verification of the model is done using the Process Analysis Toolkit, which takes the formal model and specified properties as input to inspect the properties’ satisfaction or violation. The results of the formal verification are analyzed with respect to the verification time and the number of visited states.
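The trust model above ranks candidates by deposit ratio, block generation and validation rate, and waiting rate. The leader-selection step could look roughly like the Python sketch below; the weights, field names, and the simple weighted-sum scoring are illustrative assumptions, not the TTC protocol's actual formula.
```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    deposit_ratio: float   # stake deposited / total stake
    gen_val_rate: float    # normalized block generation and validation rate
    waiting_rate: float    # normalized time since the node last led

def trust_score(node, w=(0.4, 0.4, 0.2)):
    """Toy weighted-sum trust score over the three factors named in the abstract.
    The weights are arbitrary illustrative values."""
    return (w[0] * node.deposit_ratio
            + w[1] * node.gen_val_rate
            + w[2] * node.waiting_rate)

def select_leader_and_validators(nodes, n_validators=2):
    """Pick the highest-trust node as leader and the next n as validators."""
    ranked = sorted(nodes, key=trust_score, reverse=True)
    return ranked[0], ranked[1:1 + n_validators]

nodes = [Node("A", 0.5, 0.7, 0.1), Node("B", 0.3, 0.9, 0.6), Node("C", 0.2, 0.4, 0.9)]
leader, validators = select_leader_and_validators(nodes)
print(leader.name, [v.name for v in validators])
```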
Impact of traditional and embedded image denoising on CNN-based deep learning
- Kaur, Roopdeep, Karmakar, Gour, Imran, Muhammad
- Authors: Kaur, Roopdeep , Karmakar, Gour , Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: Applied Sciences Vol. 13, no. 20 (2023), p.
- Full Text:
- Reviewed:
- Description: In digital image processing, filtering noise is an important step for reconstructing a high-quality image for further processing such as object segmentation, object detection, and object recognition. Various image-denoising approaches, including median, Gaussian, and bilateral filters, are available in the literature. Since convolutional neural networks (CNN) are able to directly learn complex patterns and features from data, they have become a popular choice for image-denoising tasks. As a result of their ability to learn and adapt to various denoising scenarios, CNNs are powerful tools for image denoising. Some deep learning techniques such as CNN incorporate denoising strategies directly into the CNN model layers. A primary limitation of these methods is their necessity to resize images to a consistent size. This resizing can result in a loss of vital image details, which might compromise CNN’s effectiveness. Because of this issue, we utilize a traditional denoising method as a preliminary step for noise reduction before applying CNN. To our knowledge, a comparative performance study of CNN using traditional and embedded denoising against a baseline approach (without denoising) is yet to be performed. To analyze the impact of denoising on the CNN performance, in this paper, firstly, we filter the noise from the images using a traditional denoising method before their use in the CNN model. Secondly, we embed a denoising layer in the CNN model. To validate the performance of image denoising, we performed extensive experiments for both traffic sign and object recognition datasets. To decide whether denoising should be adopted and to decide on the type of filter to be used, we also present an approach exploiting the peak signal-to-noise ratio (PSNR) distribution of images. Both CNN accuracy and the PSNR distribution are used to evaluate the effectiveness of the denoising approaches. As expected, the results vary with the type of filter, impact, and dataset used in both traditional and embedded denoising approaches. However, traditional denoising shows better accuracy, while embedded denoising shows lower computational time for most of the cases. Overall, this comparative study gives insights into whether denoising should be adopted in various CNN-based image analyses, including autonomous driving, animal detection, and facial recognition.
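The traditional pre-denoising step and the PSNR-based filter choice described above can be illustrated with a short OpenCV/numpy sketch. The kernel sizes, sigmas, and the synthetic "sign" image below are placeholders, not the paper's tuned parameters or data.
```python
import cv2
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two uint8 images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def denoise_variants(noisy):
    """The three traditional filters compared in the study; kernel sizes and
    sigmas are illustrative defaults, not the paper's tuned values."""
    return {
        "median": cv2.medianBlur(noisy, 5),
        "gaussian": cv2.GaussianBlur(noisy, (5, 5), 0),
        "bilateral": cv2.bilateralFilter(noisy, 9, 75, 75),
    }

# synthetic stand-in for a traffic-sign image, corrupted with Gaussian noise
rng = np.random.default_rng(0)
clean = np.zeros((128, 128, 3), np.uint8)
cv2.circle(clean, (64, 64), 40, (0, 0, 255), -1)  # red "sign" disc
noisy = np.clip(clean + rng.normal(0, 20, clean.shape), 0, 255).astype(np.uint8)

# pick the filter with the best PSNR before feeding the image to the CNN
scores = {name: psnr(clean, img) for name, img in denoise_variants(noisy).items()}
print(max(scores, key=scores.get), scores)
```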
Malicious node detection using machine learning and distributed data storage using blockchain in WSNs
- Nouman, Muhammad, Qasim, Umar, Nasir, Hina, Almasoud, Abdullah, Imran, Muhammad, Javaid, Nadeem
- Authors: Nouman, Muhammad , Qasim, Umar , Nasir, Hina , Almasoud, Abdullah , Imran, Muhammad , Javaid, Nadeem
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 6106-6121
- Full Text:
- Reviewed:
- Description: In the proposed work, blockchain is implemented on the Base Stations (BSs) and Cluster Heads (CHs) to register the nodes using their credentials and also to tackle various security issues. Moreover, a Machine Learning (ML) classifier, termed Histogram Gradient Boost (HGB), is employed on the BSs to classify the nodes as malicious or legitimate. In case the node is found to be malicious, its registration is revoked from the network. Whereas, if a node is found to be legitimate, its data is stored in an Interplanetary File System (IPFS). IPFS stores the data in the form of chunks and generates a hash for the data, which is then stored in the blockchain. In addition, Verifiable Byzantine Fault Tolerance (VBFT) is used instead of Proof of Work (PoW) to perform consensus and validate transactions. Also, extensive simulations are performed using the Wireless Sensor Network (WSN) dataset, referred to as WSN-DS. The proposed model is evaluated both on the original dataset and the balanced dataset. Furthermore, HGB is compared with other existing classifiers, Adaptive Boost (AdaBoost), Gradient Boost (GB), Linear Discriminant Analysis (LDA), Extreme Gradient Boost (XGB) and Ridge, using different performance metrics like accuracy, precision, recall, micro-F1 score and macro-F1 score. The performance evaluation of HGB shows that it outperforms GB, AdaBoost, LDA, XGB and Ridge by 2-4%, 8-10%, 12-14%, 3-5% and 14-16%, respectively. Moreover, the results with the balanced dataset are better than those with the original dataset. Also, VBFT performs 20-30% better than PoW. Overall, the proposed model performs efficiently in terms of malicious node detection and secure data storage. © 2013 IEEE.
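The histogram gradient boosting classifier named above corresponds to scikit-learn's HistGradientBoostingClassifier. The sketch below trains and scores it on synthetic tabular data standing in for WSN-DS features; the feature generation, hyperparameters, and split are illustrative assumptions only.
```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# synthetic stand-in for WSN-DS features (e.g., energy, packet counts, timing)
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 18))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=5000) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = HistGradientBoostingClassifier(max_iter=200, learning_rate=0.1)
clf.fit(X_train, y_train)

# report precision/recall/F1, the same family of metrics used in the evaluation
print(classification_report(y_test, clf.predict(X_test), digits=3))
```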
Modeling and analysis of finite-scale clustered backscatter communication networks
- Wang, Qiu, Zhou, Yong, Dai, Hong-Ning, Zhang, Guopeng, Imran, Muhammad, Nasser, Nidal
- Authors: Wang, Qiu , Zhou, Yong , Dai, Hong-Ning , Zhang, Guopeng , Imran, Muhammad , Nasser, Nidal
- Date: 2023
- Type: Text , Conference paper
- Relation: 2023 IEEE International Conference on Communications, ICC 2023, Rome, 28 May-1 June 2023, ICC 2023 - IEEE International Conference on Communications Vol. 2023-May, p. 1456-1461
- Full Text: false
- Reviewed:
- Description: Backscatter communication (BackCom) is an intriguing technology that enables devices to transmit information by reflecting ambient radio frequency signals while consuming ultra-low energy. Applying BackCom in Internet of Things (IoT) networks can effectively address the power-unsustainability issue of energy-constrained devices. In many practical IoT applications, networks are finite-scale and devices need to be deployed in hotspot regions, organized in clusters to cooperate on specific tasks. This paper considers finite-scale clustered backscatter communication networks (F-CBackCom Nets) and establishes a theoretical model to analyze their communication connectivity. Unlike prior studies that analyze connectivity with a focus on a transmission pair located at the center of the network, this paper analyzes the connectivity of a transmission pair at an arbitrary location, because the performance of transmission pairs potentially varies with their position in the network. Extensive simulations validate the accuracy of the analytical model. The results show that the connectivity of a transmission pair can be affected by its network location. The analytical model and results offer beneficial implications for constructing F-CBackCom Nets. © 2023 IEEE.
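For illustration only, the kind of location-dependent connectivity question raised above can be approximated numerically. The sketch below is a hypothetical Monte Carlo estimate of the connection probability of a transmission pair placed at an arbitrary point in a finite circular region with clustered interferers; the cluster geometry, path-loss exponent and SINR threshold are assumed values, not the paper's analytical model.

```python
# Hypothetical Monte Carlo sketch (assumed parameters, not the paper's model):
# connection probability of a pair at an arbitrary location in a finite,
# clustered network, compared against the network centre.
import numpy as np

def connection_probability(pair_loc, pair_dist=2.0, radius=50.0,
                           n_clusters=5, nodes_per_cluster=4, cluster_std=3.0,
                           alpha=3.0, sinr_threshold_db=0.0, noise=1e-9,
                           trials=10_000, rng=None):
    """Estimate P(SINR > threshold) for a receiver at pair_loc (2D point)."""
    rng = np.random.default_rng(0) if rng is None else rng
    threshold = 10 ** (sinr_threshold_db / 10)
    rx = np.asarray(pair_loc, dtype=float)
    signal = pair_dist ** (-alpha)            # desired link gain (unit Tx power)
    connected = 0
    for _ in range(trials):
        # Clustered interferers: parents uniform in the disk, Gaussian daughters.
        r = radius * np.sqrt(rng.random(n_clusters))
        theta = 2 * np.pi * rng.random(n_clusters)
        parents = np.c_[r * np.cos(theta), r * np.sin(theta)]
        daughters = (np.repeat(parents, nodes_per_cluster, axis=0)
                     + rng.normal(scale=cluster_std,
                                  size=(n_clusters * nodes_per_cluster, 2)))
        d = np.linalg.norm(daughters - rx, axis=1)
        interference = np.sum(np.maximum(d, 1.0) ** (-alpha))  # clip near-field
        if signal / (interference + noise) > threshold:
            connected += 1
    return connected / trials

# Example: a pair near the centre versus a pair near the network edge.
print(connection_probability((0.0, 0.0)), connection_probability((40.0, 0.0)))
```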
Multi-aspect annotation and analysis of Nepali tweets on anti-establishment election discourse
- Rauniyar, Kritesh, Poudel, Sweta, Shiwakoti, Shuvam, Thapa, Surendrabikram, Rashid, Junaid, Kim, Jungeun, Imran, Muhammad, Naseem, Usman
- Authors: Rauniyar, Kritesh , Poudel, Sweta , Shiwakoti, Shuvam , Thapa, Surendrabikram , Rashid, Junaid , Kim, Jungeun , Imran, Muhammad , Naseem, Usman
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 143092-143115
- Full Text:
- Reviewed:
- Description: In today's social media-dominated landscape, digital platforms wield substantial influence over public opinion, particularly during crucial political events such as electoral processes. These platforms become hubs for diverse discussions, encompassing topics, reforms, and desired changes. Notably, in times of government dissatisfaction, they serve as arenas for anti-establishment discourse, highlighting the need to analyze public sentiment in these conversations. However, the analysis of such discourse is notably scarce, even in high-resource languages, and entirely non-existent in the context of the Nepali language. To address this critical gap, we present Nepal Anti Establishment discourse Tweets (NAET), a novel dataset comprising 4,445 multi-aspect annotated Nepali tweets, facilitating a comprehensive understanding of political conversations. Our contributions encompass evaluating tweet relevance, sentiment, and satire, while also exploring the presence of hate speech, identifying its targets, and distinguishing directed and non-directed expressions. Additionally, we investigate hope speech, an underexplored aspect crucial in the context of anti-establishment discourse, as it reflects the aspirations and expectations from new political figures and parties. Furthermore, we set NLP-based baselines for all these tasks. To ensure a holistic analysis, we also employ topic modeling, a powerful technique that helps us identify and understand the prevalent themes and patterns emerging from the discourse. Our research thus presents a comprehensive and multi-faceted perspective on anti-establishment election discourse in a low-resource language setting. The dataset is publicly available, facilitating in-depth analysis of political tweets in Nepali discourse and further advancing NLP research for the Nepali language through labeled data and baselines for various NLP tasks. The dataset for this work is made available at https://github.com/rkritesh210/NAET. © 2013 IEEE.
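The NLP baselines mentioned in the abstract are not detailed in this listing. As an illustration only, the sketch below sets up a simple TF-IDF plus logistic-regression baseline for a single binary annotation aspect (for example, tweet relevance); the inputs texts and labels are hypothetical lists drawn from the NAET dataset, and this is not claimed to match the authors' exact baseline configuration.

```python
# Illustrative baseline sketch (assumptions, not the paper's exact setup):
# TF-IDF word n-gram features with logistic regression for one binary aspect.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def train_aspect_baseline(texts, labels):
    """texts: list of Nepali tweets; labels: 0/1 for one annotation aspect (hypothetical)."""
    X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.2,
                                              stratify=labels, random_state=42)
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # word unigrams and bigrams
        LogisticRegression(max_iter=1000),
    )
    model.fit(X_tr, y_tr)
    print(classification_report(y_te, model.predict(X_te)))
    return model
```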
Multi-slope path loss model-based performance assessment of heterogeneous cellular network in 5G
- Dahri, Safia, Shaikh, Muhammad, Alhussein, Musaed, Soomro, Muhammad, Aurangzeb, Khursheed, Imran, Muhammad
- Authors: Dahri, Safia , Shaikh, Muhammad , Alhussein, Musaed , Soomro, Muhammad , Aurangzeb, Khursheed , Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11, no. (2023), p. 30473-30485
- Full Text:
- Reviewed:
- Description: The coverage and capacity required for fifth generation (5G) networks and beyond can be achieved using heterogeneous wireless networks. This study deploys a finite number of user equipment (UEs) while taking into account the three-dimensional (3D) distance between UEs and base stations (BSs), multi-slope line-of-sight (LOS) and non-line-of-sight (n-LOS) conditions, idle mode capability (IMC), and Third Generation Partnership Project (3GPP) path loss (PL) models. We examine the relationship between the height and gain of the macro (M) and pico (P) BS antennas and the ratio of the density of MBSs to PBSs, denoted by $\beta$. Recent research demonstrates that the antenna height of PBSs should be kept to a minimum to obtain the best coverage and capacity in a 5G wireless network, whereas area spectral efficiency (ASE) crashes once $\beta$ crosses a specific value. We address these issues and increase the performance of the 5G network by installing directional antennas at MBSs and omnidirectional antennas at PBSs while retaining conventional antenna heights. The multi-tier 3GPP PL model is used to capture real-world scenarios, and SINR is calculated using average power. This study demonstrates that, when the multi-slope 3GPP PL model is used and directional antennas are installed at MBSs, coverage can be improved by 10% and ASE by 2.5 times compared with the previous analysis. Similarly, the issue of an ASE crash beyond a base station density of 1000 is resolved in this study. © 2013 IEEE.
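As an illustration of the kind of multi-slope path loss and average-power SINR computation the abstract refers to, the sketch below implements a generic dual-slope path loss function and an SINR estimate for a UE served by its strongest BS. The breakpoint distance, exponents and transmit powers are assumed values, not the paper's 3GPP parameters.

```python
# Illustrative sketch (assumed parameters, not the paper's 3GPP model):
# dual-slope path loss and an SINR estimate for a UE served by its strongest BS.
import numpy as np

def dual_slope_path_loss(d, breakpoint=100.0, alpha1=2.0, alpha2=4.0):
    """Path loss (linear gain) with exponent alpha1 before the breakpoint
    distance and alpha2 after it, a common multi-slope simplification."""
    d = np.maximum(d, 1.0)                     # avoid singularity near the BS
    near = d ** (-alpha1)
    far = breakpoint ** (alpha2 - alpha1) * d ** (-alpha2)
    return np.where(d <= breakpoint, near, far)

def downlink_sinr(ue_pos, bs_pos, tx_power, noise_power=1e-13):
    """Average-power SINR: the strongest BS serves the UE; the rest interfere."""
    d = np.linalg.norm(bs_pos - ue_pos, axis=1)
    rx_power = tx_power * dual_slope_path_loss(d)
    serving = np.argmax(rx_power)
    interference = rx_power.sum() - rx_power[serving]
    return rx_power[serving] / (interference + noise_power)

# Example: one macro BS (higher power) and three pico BSs around a UE.
bs_pos = np.array([[0.0, 0.0], [150.0, 40.0], [-120.0, 80.0], [60.0, -170.0]])
tx_power = np.array([10.0, 1.0, 1.0, 1.0])     # watts (assumed)
sinr = downlink_sinr(np.array([50.0, 20.0]), bs_pos, tx_power)
print(10 * np.log10(sinr), "dB")
```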
Performance analysis of machine learning classifiers for non-technical loss detection
- Ghori, Khawaja, Imran, Muhammad, Nawaz, Asad, Abbasi, Rabeeh, Ullah, Ata, Szathmary, Laszlo
- Authors: Ghori, Khawaja , Imran, Muhammad , Nawaz, Asad , Abbasi, Rabeeh , Ullah, Ata , Szathmary, Laszlo
- Date: 2023
- Type: Text , Journal article
- Relation: Journal of Ambient Intelligence and Humanized Computing Vol. 14, no. 11 (2023), p. 15327-15342
- Full Text:
- Reviewed:
- Description: Power companies are responsible for producing and transferring the required amount of electricity from grid stations to individual households. Many countries suffer losses of billions of dollars due to non-technical loss (NTL) in power supply companies. To deal with NTL, many machine learning classifiers have been employed in recent years. However, little has been studied about the performance evaluation metrics used in NTL detection to assess how well a classifier predicts non-technical loss. This paper first uses three classifiers, random forest, K-nearest neighbors and linear support vector machine, to predict the occurrence of NTL in a real dataset of an electric supply company containing approximately 80,000 monthly consumption records. It then computes 14 performance evaluation metrics across the three classifiers and identifies the key relationships between them. These relationships provide insights into deciding which classifier is more useful under given scenarios for NTL detection. This work can serve as a baseline not only for NTL detection in the power industry but also for the selection of appropriate performance evaluation metrics for NTL detection. © 2020, The Author(s).
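A minimal sketch of the classifier comparison described above follows. It assumes a hypothetical feature matrix X (monthly consumption features) and binary NTL labels y, and it computes only a handful of representative metrics rather than the 14 evaluated in the paper; it is illustrative, not the authors' pipeline.

```python
# Minimal sketch (not the authors' pipeline): comparing the three classifiers
# named in the abstract on a hypothetical NTL dataset with a few common metrics.
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef)

def evaluate_ntl_classifiers(X, y):
    """X: consumption features per customer-month; y: 1 = NTL, 0 = normal (hypothetical)."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              stratify=y, random_state=1)
    models = {
        "RandomForest": RandomForestClassifier(n_estimators=200, random_state=1),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "LinearSVM": LinearSVC(max_iter=5000),
    }
    scores = {}
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        scores[name] = {
            "accuracy": accuracy_score(y_te, pred),
            "precision": precision_score(y_te, pred, zero_division=0),
            "recall": recall_score(y_te, pred, zero_division=0),
            "f1": f1_score(y_te, pred, zero_division=0),
            "mcc": matthews_corrcoef(y_te, pred),
        }
    return scores
```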