A blockchain-based deep-learning-driven architecture for quality routing in wireless sensor networks
- Authors: Khan, Zahoor, Amjad, Sana, Ahmed, Farwa, Almasoud, Abdullah, Imran, Muhammad, Javaid, Nadeem
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11 (2023), p. 31036-31051
- Full Text:
- Reviewed:
- Description: Over the past few years, great importance has been given to wireless sensor networks (WSNs) because they play a significant role in providing daily-life services such as healthcare, military, and social applications. However, the heterogeneous nature of WSNs makes them prone to various attacks, which result in low throughput, high network delay, and high energy consumption. In WSNs, routing is performed using protocols such as low-energy adaptive clustering hierarchy (LEACH) and heterogeneous gateway-based energy-aware multi-hop routing (HMGEAR). In such protocols, some nodes in the network may perform malicious activities. Therefore, four deep learning (DL) techniques and a blockchain-based real-time message content validation (RMCV) scheme are used in the proposed network to detect malicious nodes (MNs). To analyse the routing data in the WSN, the DL models are trained on a state-of-the-art dataset generated from LEACH, known as WSN-DS 2016. The WSN contains three types of nodes: sensor nodes, cluster heads (CHs), and the base station (BS). After aggregating the data received from the sensor nodes, the CHs send it towards the BS. To overcome the single point of failure issue, a decentralized blockchain is deployed on the CHs and the BS. MNs are removed from the network using RMCV and the DL techniques, while legitimate nodes (LNs) are registered in the blockchain network using the proof-of-authority consensus protocol, which outperforms proof-of-work in terms of computational cost. Routing is then performed between the LNs using different routing protocols, and the results are compared with the original LEACH and HMGEAR protocols. The results show that the GRU achieves 97% accuracy, the LSTM 96%, the CNN 92%, and the ANN 90%. Throughput, delay, and the death of the first node are computed for LEACH, LEACH with DL, LEACH with RMCV, HMGEAR, HMGEAR with DL, and HMGEAR with RMCV. Moreover, Oyente is used to perform a formal security analysis of the designed smart contract, which shows that the blockchain network is resilient against known vulnerabilities. © 2013 IEEE.
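The GRU result above is the headline number; below is a minimal Keras sketch of the kind of recurrent classifier that could be trained on WSN-DS-style records. The feature count, layer widths, and training settings are illustrative assumptions, not the authors' reported configuration.

```python
# Minimal sketch of a GRU-based malicious-node classifier for WSN-DS-style
# records; feature count, layer sizes, and training settings are assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_FEATURES = 18   # placeholder; set to the actual WSN-DS feature count
NUM_CLASSES = 5     # WSN-DS: normal traffic plus four LEACH attack types

model = keras.Sequential([
    keras.Input(shape=(1, NUM_FEATURES)),   # each record as a length-1 sequence
    layers.GRU(64),
    layers.Dense(32, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in data; replace with the real WSN-DS 2016 feature matrix.
X = np.random.rand(1000, 1, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=1000)
model.fit(X, y, epochs=3, batch_size=64, validation_split=0.2)
```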
Malicious node detection using machine learning and distributed data storage using blockchain in WSNs
- Authors: Nouman, Muhammad, Qasim, Umar, Nasir, Hina, Almasoud, Abdullah, Imran, Muhammad, Javaid, Nadeem
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11 (2023), p. 6106-6121
- Full Text:
- Reviewed:
- Description: In the proposed work, blockchain is implemented on the base stations (BSs) and cluster heads (CHs) to register the nodes using their credentials and to tackle various security issues. Moreover, a machine learning (ML) classifier, termed Histogram Gradient Boost (HGB), is employed on the BSs to classify nodes as malicious or legitimate. If a node is found to be malicious, its registration is revoked from the network; if it is found to be legitimate, its data is stored in the InterPlanetary File System (IPFS). IPFS stores the data in chunks and generates a hash of the data, which is then stored in the blockchain. In addition, Verifiable Byzantine Fault Tolerance (VBFT) is used instead of Proof of Work (PoW) to perform consensus and validate transactions. Extensive simulations are performed using the wireless sensor network (WSN) dataset referred to as WSN-DS, and the proposed model is evaluated on both the original and the balanced dataset. Furthermore, HGB is compared with existing classifiers, namely Adaptive Boost (AdaBoost), Gradient Boost (GB), Linear Discriminant Analysis (LDA), Extreme Gradient Boost (XGB), and Ridge, using performance metrics such as accuracy, precision, recall, micro-F1 score, and macro-F1 score. The evaluation shows that HGB outperforms GB, AdaBoost, LDA, XGB, and Ridge by 2-4%, 8-10%, 12-14%, 3-5%, and 14-16%, respectively. Moreover, the results on the balanced dataset are better than those on the original dataset, and VBFT performs 20-30% better than PoW. Overall, the proposed model performs efficiently in terms of malicious node detection and secure data storage. © 2013 IEEE.
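For reference, histogram gradient boosting is available off the shelf in scikit-learn; the sketch below shows the classify-and-score step on synthetic stand-in data. The feature count, class balance, and hyperparameters are assumptions, not the paper's setup.

```python
# Sketch of the malicious-vs-legitimate classification step using scikit-learn's
# histogram gradient boosting; data and hyperparameters are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for WSN-DS traffic features (assumption: 18 features,
# roughly 10% malicious nodes).
X, y = make_classification(n_samples=5000, n_features=18, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = HistGradientBoostingClassifier(max_iter=200, random_state=0)
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("micro-F1:", f1_score(y_te, pred, average="micro"))
print("macro-F1:", f1_score(y_te, pred, average="macro"))
```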
Multi-slope path loss model-based performance assessment of heterogeneous cellular network in 5G
- Authors: Dahri, Safia, Shaikh, Muhammad, Alhussein, Musaed, Soomro, Muhammad, Aurangzeb, Khursheed, Imran, Muhammad
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Access Vol. 11 (2023), p. 30473-30485
- Full Text:
- Reviewed:
- Description: The coverage and capacity required for fifth generation (5G) networks and beyond can be achieved using heterogeneous wireless networks. This work deploys a finite number of user equipments (UEs) while taking into account the three-dimensional (3D) distance between UEs and base stations (BSs), multi-slope line-of-sight (LOS) and non-line-of-sight (NLOS) conditions, idle mode capability (IMC), and third generation partnership project (3GPP) path loss (PL) models. We examine the relationship between the height and gain of the macro BS (MBS) and pico BS (PBS) antennas and the ratio of the density of MBSs to PBSs, denoted β. Recent research demonstrates that the antenna height of PBSs should be kept to a minimum to obtain the best coverage and capacity in a 5G wireless network, whereas the area spectral efficiency (ASE) crashes once β crosses a specific value. We address these issues and increase the performance of the 5G network by installing directional antennas at MBSs and omnidirectional antennas at PBSs while retaining traditional antenna heights. We use the multi-tier 3GPP PL model to capture real-world scenarios and calculate the SINR using average power. This study demonstrates that, when the multi-slope 3GPP PL model is used and directional antennas are installed at MBSs, coverage can be improved by 10% and ASE by 2.5 times relative to the previous analysis. Similarly, the issue of an ASE crash beyond a base station density of 1000 is resolved in this study. © 2013 IEEE.
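To make the multi-slope idea concrete, the sketch below evaluates a dual-slope path loss over a 3D BS-UE distance with a breakpoint. The exponents, intercept, and breakpoint distance are illustrative assumptions, not the 3GPP-calibrated values used in the paper.

```python
# Illustrative dual-slope path loss with 3D distance, in the spirit of the
# multi-slope 3GPP models discussed above; all constants are placeholders.
import math

def dual_slope_pl_db(d2d_m, h_bs=25.0, h_ue=1.5,
                     pl0=32.4, n1=2.1, n2=4.0, d_break=200.0, f_ghz=3.5):
    """Path loss in dB: exponent n1 before the breakpoint, n2 after it."""
    d3d = math.hypot(d2d_m, h_bs - h_ue)      # 3D BS-UE distance
    fspl = pl0 + 20 * math.log10(f_ghz)       # frequency-dependent intercept
    if d3d <= d_break:
        return fspl + 10 * n1 * math.log10(d3d)
    # Continue from the breakpoint with the steeper slope, keeping the curve continuous.
    return (fspl + 10 * n1 * math.log10(d_break)
            + 10 * n2 * math.log10(d3d / d_break))

for d in (50, 200, 500, 1000):
    print(f"d2D = {d:4d} m -> PL = {dual_slope_pl_db(d):6.1f} dB")
```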
6G wireless systems : a vision, architectural elements, and future directions
- Authors: Khan, Latif, Yaqoob, Ibrar, Imran, Muhammad, Han, Zhu, Hong, Choong
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8 (2020), p. 147029-147044
- Full Text:
- Reviewed:
- Description: Internet of everything (IoE)-based smart services are expected to gain immense popularity in the future, which raises the need for next-generation wireless networks. Although fifth-generation (5G) networks can support various IoE services, they might not be able to completely fulfill the requirements of novel applications. Sixth-generation (6G) wireless systems are envisioned to overcome 5G network limitations. In this article, we explore recent advances made toward enabling 6G systems. We devise a taxonomy based on key enabling technologies, use cases, emerging machine learning schemes, communication technologies, networking technologies, and computing technologies. Furthermore, we identify and discuss open research challenges, such as artificial-intelligence-based adaptive transceivers, intelligent wireless energy harvesting, decentralized and secure business models, intelligent cell-less architecture, and distributed security models. We propose practical guidelines including deep Q-learning and federated learning-based transceivers, blockchain-based secure business models, homomorphic encryption, and distributed-ledger-based authentication schemes to cope with these challenges. Finally, we outline and recommend several future directions. © 2013 IEEE.
A deep learning model based on concatenation approach for the diagnosis of brain tumor
- Authors: Noreen, Neelum, Palaniappan, Sellappan, Qayyum, Abdul, Ahmad, Iftikhar, Imran, Muhammad, Shoaib, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8 (2020), p. 55135-55144
- Full Text:
- Reviewed:
- Description: Brain tumors are a deadly disease, and their classification is a challenging task for radiologists because of the heterogeneous nature of tumor cells. Recently, computer-aided diagnosis systems have shown promise as an assistive technology for diagnosing brain tumors from magnetic resonance imaging (MRI). In typical applications of pre-trained models, features are extracted from the bottom layers, which differ between natural and medical images. To overcome this problem, this study proposes a method of multi-level feature extraction and concatenation for early diagnosis of brain tumors, built on two pre-trained deep learning models, Inception-v3 and DenseNet201. With these two models, two scenarios of brain tumor detection and classification were evaluated. First, features from different Inception modules were extracted from the pre-trained Inception-v3 model, concatenated, and passed to a softmax classifier to classify the brain tumor. Second, the pre-trained DenseNet201 model was used to extract features from various DenseNet blocks, which were likewise concatenated and passed to a softmax classifier. Both scenarios were evaluated on a publicly available three-class brain tumor dataset. The proposed method produced testing accuracies of 99.34% and 99.51% with Inception-v3 and DenseNet201, respectively, achieving the highest performance in brain tumor detection. As the results indicate, the proposed feature-concatenation method based on pre-trained models outperformed existing state-of-the-art deep learning and machine learning methods for brain tumor classification. © 2013 IEEE.
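A minimal sketch of the multi-level feature extraction and concatenation idea, using Keras' pre-trained Inception-v3 (the same recipe applies to DenseNet201 with its block outputs). The specific 'mixed*' tap points and the three-class head are assumptions, not the exact layers reported in the paper.

```python
# Sketch of multi-level feature concatenation on a frozen pre-trained
# Inception-v3; the tapped modules and head size are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                      input_shape=(299, 299, 3))
base.trainable = False  # use Inception-v3 purely as a feature extractor

# Pool features from several Inception modules and concatenate them.
taps = ["mixed5", "mixed8", "mixed10"]
pooled = [layers.GlobalAveragePooling2D()(base.get_layer(n).output) for n in taps]
features = layers.Concatenate()(pooled)

outputs = layers.Dense(3, activation="softmax")(features)  # 3 tumor classes
model = keras.Model(base.input, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```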
Bio-inspired network security for 5G-enabled IoT applications
- Authors: Saleem, Kashif, Alabduljabbar, Ghadah, Alrowais, Nouf, Al-Muhtadi, Jalal, Imran, Muhammad, Rodrigues, Joel
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8 (2020), p. 1-1
- Full Text:
- Reviewed:
- Description: Every IPv6-enabled device connected and communicating over the Internet forms part of the Internet of Things (IoT), which is prevalent in society and used in daily life. This IoT platform will quickly grow to billions or more objects as every electrical appliance, car, and even item of furniture becomes smart and connected. Fifth generation (5G) and beyond networks will further boost these IoT systems. The massive utilization of these systems at gigabit-per-second rates generates numerous issues. Owing to the huge complexity of large-scale IoT deployment, data privacy and security are the most prominent challenges, especially for critical applications such as Industry 4.0, e-healthcare, and the military. Threat agents persistently strive to find new vulnerabilities and exploit them. Therefore, it is essential to include security measures that support running systems rather than harming or collapsing them. Nature-inspired algorithms have the capability to provide autonomous and sustainable defense and healing mechanisms. This paper first surveys 5G network layer security for IoT applications and lists the network layer security vulnerabilities and requirements in wireless sensor networks, IoT, and 5G-enabled IoT. Second, a detailed literature review is conducted of current network layer security methods and bio-inspired techniques for IoT applications exchanging data packets over 5G. Finally, the bio-inspired algorithms are analyzed in the context of providing a secure network layer for IoT applications connected over 5G and beyond networks.
Blending big data analytics : review on challenges and a recent study
- Authors: Amalina, Fairuz, Targio Hashem, Ibrahim, Azizul, Zati, Fong, Ang, Imran, Muhammad
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8 (2020), p. 3629-3645
- Full Text:
- Reviewed:
- Description: With the collection of massive amounts of data every day, big data analytics has emerged as an important trend for many organizations. These collected data can contain important information that may be key to solving wide-ranging problems, such as cyber security, marketing, healthcare, and fraud. To analyze their large volumes of data for business analyses and decisions, large companies, such as Facebook and Google, adopt analytics. Such analyses and decisions impact existing and future technology. In this paper, we explore how big data analytics is utilized as a technique for solving problems of complex and unstructured data using such technologies as Hadoop, Spark, and MapReduce. We also discuss the data challenges introduced by big data according to the literature, including its six V's. Moreover, we investigate case studies of big data analytics on various techniques of such analytics, namely, text, voice, video, and network analytics. We conclude that big data analytics can bring positive changes in many fields, such as education, military, healthcare, politics, business, agriculture, banking, and marketing, in the future. © 2013 IEEE.
Model compression for IoT applications in industry 4.0 via multiscale knowledge transfer
- Authors: Fu, Shipeng, Li, Zhen, Liu, Kai, Din, Sadia, Imran, Muhammad, Yang, Xiaomin
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 16, no. 9 (2020), p. 6013-6022
- Full Text: false
- Reviewed:
- Description: Recently, Industry 4.0 has attracted much attention and has close relations with the Internet of Things (IoT). Convolutional neural networks (CNNs) have shown promising performance in many foundational services of IoT applications. For IoT applications with high-speed data streams and time-sensitive actions, fast processing is demanded on small-scale platforms or even on the IoT devices themselves. It is therefore inappropriate to employ cumbersome CNNs in IoT applications, making the study of model compression necessary. In knowledge transfer, it is common to employ a deep, well-trained network, called the teacher, to guide a shallow, untrained network, called the student, toward better performance. Previous works have mostly attempted to transfer single-scale knowledge from teacher to student, which degrades generalization ability. In this article, we introduce multiscale representations to knowledge transfer, which improves the generalization ability of the student. We divide the student and teacher into several stages, and the student learns from multiscale knowledge provided by the teacher at the end of each stage. Extensive experiments demonstrate the effectiveness of the proposed method on both image classification and single-image super-resolution. The large performance gap between student and teacher is significantly narrowed by the proposed method, making the student suitable for IoT applications. © 2005-2012 IEEE.
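A compact PyTorch sketch of the stage-wise idea: the student matches the frozen teacher's feature maps at the end of every stage rather than only at the output. The toy architectures and the plain MSE hint loss are assumptions, not the paper's networks or loss weighting.

```python
# Sketch of stage-wise (multiscale) knowledge transfer: the student matches
# the teacher's features at the end of each stage; architectures are toys.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_stage(c_in, c_out, depth):
    convs = []
    for i in range(depth):
        convs += [nn.Conv2d(c_in if i == 0 else c_out, c_out, 3, padding=1),
                  nn.ReLU(inplace=True)]
    return nn.Sequential(*convs)

# Teacher is deeper; the student is shallow but ends each stage at the same width.
teacher = nn.ModuleList([make_stage(3, 32, 4), make_stage(32, 64, 4)])
student = nn.ModuleList([make_stage(3, 32, 1), make_stage(32, 64, 1)])

x = torch.randn(8, 3, 64, 64)
t_feat, s_feat = x, x
transfer_loss = 0.0
for t_stage, s_stage in zip(teacher, student):
    with torch.no_grad():          # the teacher is frozen
        t_feat = t_stage(t_feat)
    s_feat = s_stage(s_feat)
    # Multiscale hint: match features at the end of every stage, not just the last.
    transfer_loss = transfer_loss + F.mse_loss(s_feat, t_feat)

print("stage-wise transfer loss:", float(transfer_loss))
```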
Resource optimized federated learning-enabled cognitive internet of things for smart industries
- Authors: Khan, Latif, Alsenwi, Madyan, Yaqoob, Ibrar, Imran, Muhammad, Han, Zhu, Hong, Choong
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8 (2020), p. 168854-168864
- Full Text:
- Reviewed:
- Description: Leveraging the cognitive Internet of Things (C-IoT), emerging computing technologies, and machine learning schemes for industries can assist in streamlining manufacturing processes, revolutionizing operational analytics, and maintaining factory efficiency. However, wider adoption of centralized machine learning in industries is restricted by data privacy issues. Federated learning can bring predictive features to industrial systems without leaking private information, but its implementation involves key challenges including resource optimization, robustness, and security. In this article, we propose a novel dispersed federated learning (DFL) framework that provides resource optimization, while the distributed fashion of learning offers robustness. We formulate an integer linear optimization problem to minimize the overall federated learning cost of the DFL framework. To solve it, we first decompose it into two sub-problems: an association problem and a resource allocation problem. Second, we relax both sub-problems to make them convex, and we then use a rounding technique to obtain binary association and resource allocation variables. The proposed algorithm works iteratively by fixing one problem variable (for example, association) and computing the other (for example, resource allocation), continuing until the formulated cost optimization problem converges. Furthermore, we compare the proposed DFL with two schemes, namely random resource allocation and random association. Numerical results show the superiority of the proposed DFL scheme. © 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
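A toy sketch of the fix-one-solve-the-other iteration described above: the association step is solved with resource shares fixed, then the relaxed resource step is solved and rounded back to integers. The cost model, capacities, and iteration budget are stand-in assumptions, not the paper's formulation.

```python
# Toy alternating scheme: fix resource shares, pick associations; fix
# associations, split and round resources. The cost model is a stand-in.
import numpy as np

rng = np.random.default_rng(0)
n_dev, n_srv = 12, 3
base_cost = rng.random((n_dev, n_srv))   # per device-server communication cost
capacity = np.array([6, 6, 6])           # resource blocks per server

shares = np.ones(n_srv)                  # initial per-device resource at each server
for _ in range(50):
    # (1) Association step: with resource shares fixed, pick the cheapest server.
    assoc = np.argmin(base_cost / shares, axis=1)
    # (2) Resource step (relaxed): split each server's blocks evenly among its
    # devices, then floor to integers -- the rounding step of the relaxation.
    counts = np.bincount(assoc, minlength=n_srv)
    shares = np.maximum(np.floor(capacity / np.maximum(counts, 1)), 1)

total = (base_cost[np.arange(n_dev), assoc] / shares[assoc]).sum()
print("associations:", assoc, "total cost:", round(float(total), 3))
```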
A blockchain-based solution for enhancing security and privacy in smart factory
- Authors: Wan, Jiafu, Li, Jiapeng, Imran, Muhammad, Li, Di
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 15, no. 6 (2019), p. 3652-3660
- Full Text: false
- Reviewed:
- Description: Through the Industrial Internet of Things (IIoT), the smart factory has entered a booming period. However, as the number of nodes and the network size grow, the traditional IIoT architecture can no longer provide effective support for such an enormous system. Therefore, we introduce the blockchain architecture, an emerging scheme for constructing distributed networks, to reshape the traditional IIoT architecture. First, the major problems of the traditional IIoT architecture are analyzed and the existing improvements are summarized. Second, we introduce a security and privacy model to help design the blockchain-based architecture. On this basis, we decompose and reorganize the original IIoT architecture to form a new multicenter, partially decentralized architecture. Then, we introduce relevant security technologies to improve and optimize the new architecture, after which we design the data interaction process and the algorithms of the architecture. Finally, we use an automatic production platform to discuss the specific implementation. The experimental results show that the proposed architecture provides better security and privacy protection than the traditional architecture, representing a significant improvement and a new direction for IIoT development. © 2005-2012 IEEE.
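The tamper evidence a blockchain layer contributes can be illustrated with a minimal hash chain; the block fields and device readings below are illustrative assumptions, not the paper's data schema or consensus logic.

```python
# Minimal hash-chain sketch of the tamper evidence a blockchain layer gives
# an IIoT record store; fields and readings are illustrative placeholders.
import hashlib, json, time

def block_hash(block):
    body = {k: block[k] for k in ("prev", "time", "payload")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash, payload):
    block = {"prev": prev_hash, "time": time.time(), "payload": payload}
    block["hash"] = block_hash(block)
    return block

def verify(chain):
    for i, blk in enumerate(chain):
        if blk["hash"] != block_hash(blk):
            return False        # block contents were altered after sealing
        if i > 0 and blk["prev"] != chain[i - 1]["hash"]:
            return False        # link to the predecessor is broken
    return True

chain = [make_block("0" * 64, {"device": "press-01", "reading": 17.2})]
chain.append(make_block(chain[-1]["hash"], {"device": "press-01", "reading": 17.9}))
print("chain valid:", verify(chain))        # True
chain[0]["payload"]["reading"] = 99.0       # tamper with a sealed record
print("after tampering:", verify(chain))    # False
```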
A hybrid computing solution and resource scheduling strategy for edge computing in smart manufacturing
- Authors: Li, Xiaomin, Wan, Jiafu, Dai, Hong-Ning, Imran, Muhammad, Xia, Min, Celesti, Antonio
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 15, no. 7 (2019), p. 4225-4234
- Full Text: false
- Reviewed:
- Description: At present, smart manufacturing computing frameworks face challenges such as the lack of an effective framework for fusing historical computing heritage and of a resource scheduling strategy that guarantees low-latency requirements. In this paper, we propose a hybrid computing framework and design an intelligent resource scheduling strategy to fulfill the real-time requirements of smart manufacturing with edge computing support. First, a four-layer computing system in a smart manufacturing environment is provided to support artificial intelligence task operation from a network perspective. Then, a two-phase algorithm for scheduling the computing resources in the edge layer is designed based on greedy and threshold strategies with latency constraints. Finally, a prototype platform was developed, and we conducted experiments on it to evaluate the performance of the proposed framework against traditionally used methods. The proposed strategies demonstrate excellent real-time performance, satisfaction degree (SD), and energy consumption of computing services in smart manufacturing with edge computing. © 2005-2012 IEEE.
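A small sketch of a two-phase greedy-plus-threshold scheduling rule in the spirit described above: greedily place a task on the least-loaded edge node, and fall back to the cloud when the latency bound cannot be met. All timings and the fallback rule are illustrative assumptions.

```python
# Two-phase edge scheduling sketch: phase 1 greedily picks the least-loaded
# edge node; phase 2 applies a latency threshold and offloads to the cloud
# when the deadline cannot be met. All numbers are illustrative.
import heapq

edge_nodes = [(0.0, "edge-0"), (0.0, "edge-1")]   # (busy-until time, name)
heapq.heapify(edge_nodes)
CLOUD_RTT = 80.0                                  # ms; much slower round trip

def schedule(task_ms, deadline_ms, now=0.0):
    busy_until, name = edge_nodes[0]              # greedy: least-loaded node
    finish = max(busy_until, now) + task_ms
    if finish - now <= deadline_ms:               # threshold: latency bound holds
        heapq.heapreplace(edge_nodes, (finish, name))
        return name, finish - now
    return "cloud", CLOUD_RTT + task_ms           # otherwise offload to the cloud

for task, deadline in [(10, 30), (25, 30), (25, 30), (5, 12)]:
    where, latency = schedule(task, deadline)
    print(f"task {task:2d} ms, deadline {deadline} ms -> {where} ({latency:.0f} ms)")
```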
Efficient brain tumor segmentation with multiscale two-pathway-group conventional neural networks
- Authors: Razzak, Muhammad, Imran, Muhammad, Xu, Guandong
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Journal of Biomedical and Health Informatics Vol. 23, no. 5 (2019), p. 1911-1919
- Full Text:
- Reviewed:
- Description: Manual segmentation of brain tumors for cancer diagnosis from MRI images is a difficult, tedious, and time-consuming task. The accuracy and robustness of brain tumor segmentation are therefore crucial for diagnosis, treatment planning, and treatment outcome evaluation. Most automatic brain tumor segmentation methods use hand-designed features, while traditional deep learning methods such as convolutional neural networks require a large amount of annotated data to learn from, which is often difficult to obtain in the medical domain. Here, we describe a new two-pathway-group CNN architecture for brain tumor segmentation, which exploits local features and global contextual features simultaneously. The model enforces equivariance in the two-pathway CNN to reduce instabilities and overfitting through parameter sharing. Finally, we embed a cascade architecture into the two-pathway-group CNN, in which the output of a basic CNN is treated as an additional source and concatenated at the last layer. Validation of the model on the BRATS2013 and BRATS2015 datasets revealed that embedding a group CNN into a two-pathway architecture improved overall performance over the currently published state of the art while keeping computational complexity attractive. © 2013 IEEE.
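A PyTorch sketch of the two-pathway idea with group convolutions: a local pathway with small kernels and a global pathway with larger kernels, concatenated channel-wise. Channel counts, kernel sizes, and the use of four input channels (one per MRI modality) are assumptions, not the published architecture.

```python
# Sketch of a two-pathway group-convolution block: a local pathway (fine
# detail) and a global pathway (wider context), concatenated channel-wise.
import torch
import torch.nn as nn

class TwoPathwayGroupBlock(nn.Module):
    def __init__(self, c_in=4, c_out=32, groups=4):
        super().__init__()
        self.local_path = nn.Sequential(          # fine detail: 3x3 kernels
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1, groups=groups),
            nn.ReLU(inplace=True))
        self.global_path = nn.Sequential(         # wider context: 7x7 kernels
            nn.Conv2d(c_in, c_out, 7, padding=3),
            nn.ReLU(inplace=True))

    def forward(self, x):
        return torch.cat([self.local_path(x), self.global_path(x)], dim=1)

block = TwoPathwayGroupBlock()
x = torch.randn(1, 4, 64, 64)     # 4 MRI modalities as input channels
print(block(x).shape)             # -> torch.Size([1, 64, 64, 64])
```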
Emergency message dissemination schemes based on congestion avoidance in VANET and vehicular FoG computing
- Authors: Ullah, Ata, Yaqoob, Shumayla, Imran, Muhammad, Ning, Huansheng
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Access Vol. 7 (2019), p. 1570-1585
- Full Text:
- Reviewed:
- Description: With the rapid growth in connected vehicles, the FoG-assisted vehicular ad hoc network (VANET) is an emerging field of research. For information sharing, a large number of messages are exchanged in various applications, including traffic monitoring and area-specific live weather and social monitoring. This is challenging because vehicles' speeds, directions, and neighbor densities are not consistent while on the move, and congestion avoidance is likewise difficult when trying to prevent communication loss during busy hours or in emergencies. This paper presents emergency message dissemination schemes based on congestion avoidance scenarios in VANET and vehicular FoG computing. In a similar vein, the FoG-assisted VANET architecture is explored, which can efficiently manage message congestion scenarios. We present a taxonomy of schemes that address message congestion avoidance and then compare them to highlight their strengths and weaknesses. We also identify that FoG servers help reduce access delays and congestion compared with sending every request directly to the cloud and its big data repositories. Finally, for the dependable applicability of FoG in VANET, we identify a number of open research challenges. © 2013 IEEE.
Exact string matching algorithms : survey, issues, and future research directions
- Authors: Hakak, Saqib, Kamsin, Amirrudin, Shivakumara, Palaiahnakote, Gilkar, Gulshan, Khan, Wazir, Imran, Muhammad
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Access Vol. 7 (2019), p. 69614-69637
- Full Text:
- Reviewed:
- Description: String matching has been an extensively studied research domain in the past two decades because of its applications in text, image, signal, and speech processing. As a result, choosing an appropriate string matching algorithm for current applications and addressing its challenges is difficult, as is understanding the different approaches (such as exact and approximate string matching algorithms), integrating several algorithms, and modifying algorithms to address related issues. This paper presents a survey of single-pattern exact string matching algorithms. Its main purpose is to propose a new classification, identify new directions, and highlight the possible challenges, current trends, and future work in the area of string matching, with a core focus on exact string matching algorithms. © 2013 IEEE.
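As a concrete reference point for the single-pattern exact matchers the survey classifies, here is the classic Knuth-Morris-Pratt algorithm, which never re-reads text characters.

```python
# Knuth-Morris-Pratt: a classic single-pattern exact string matcher.
def kmp_search(text, pattern):
    # Failure function: length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text, falling back via the failure function on mismatches.
    hits, k = [], 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]
    return hits

print(kmp_search("abracadabra", "abra"))   # -> [0, 7]
```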
Impact of node deployment and routing for protection of critical infrastructures
- Authors: Subhan, Fazli, Noreen, Madiha, Imran, Muhammad, Tariq, Moeenuddin, Khan, Asfandyar, Shoaib, Muhammad
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Access Vol. 7 (2019), p. 11502-11514
- Full Text:
- Reviewed:
- Description: Recently, linear wireless sensor networks (LWSNs) have been attracting increasing attention because of their suitability for applications such as the protection of critical infrastructures. Most of these applications require an LWSN to remain operational for a long period, but the non-replenishable, limited battery power of sensor nodes does not allow them to meet this expectation. A short network lifetime is therefore one of the most prominent barriers to large-scale deployment of LWSNs. Unlike most existing studies, in this paper we analyze the impact of node placement and clustering on LWSN lifetime. First, we categorize and classify existing node placement and clustering schemes for LWSNs and introduce various topologies for disparate applications. Then, we highlight the peculiarities of LWSN applications, discuss their unique characteristics, and describe several application domains. We present three node placement strategies (linear sequential, linear parallel, and grid) and various deployment methods, such as random, uniform, decreasing distance, and triangular. Extensive simulation experiments are conducted to analyze the performance of three state-of-the-art routing protocols in the context of these deployment strategies and methods. The experimental results demonstrate that node deployment strategies and methods significantly affect LWSN lifetime. © 2013 IEEE.
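A small sketch of three of the linear deployment methods compared above: uniform spacing, random placement, and decreasing inter-node distance toward the sink. The segment length, node count, and spacing weights are arbitrary assumptions.

```python
# Sketch of linear placement methods: uniform, random, and decreasing
# inter-node distance toward the sink; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
LENGTH, N = 1000.0, 10          # metres of pipeline, number of nodes

uniform = np.linspace(0, LENGTH, N)
random_ = np.sort(rng.uniform(0, LENGTH, N))

# Decreasing distance: nodes nearer the sink (x = 0) relay more traffic, so
# spacing shrinks toward it; gaps grow linearly away from the sink.
gaps = np.arange(1, N)
decreasing = np.concatenate([[0.0], np.cumsum(gaps / gaps.sum() * LENGTH)])

for name, xs in [("uniform", uniform), ("random", random_),
                 ("decreasing", decreasing)]:
    print(f"{name:10s} first gaps: {np.round(np.diff(xs)[:3], 1)}")
```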
Reconfigurable smart factory for drug packing in healthcare industry 4.0
- Authors: Wan, Jiafu, Tang, Shenglong, Li, Di, Imran, Muhammad, Zhang, Chunhua
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 15, no. 1 (2019), p. 507-516
- Full Text: false
- Reviewed:
- Description: Industry 4.0, which exploits cyber-physical systems and represents the digital transformation of manufacturing, is deeply affecting healthcare as well as other traditional production sectors. To accommodate the increasing demand for agility, flexibility, and low cost in the healthcare sector, a data-driven reconfigurable production mode of the smart factory for pharmaceutical manufacturing is proposed in this paper. The architecture of the smart factory consists of three primary layers: the perception layer, the deployment layer, and the executing layer. A Manufacturing Semantics Ontology-based knowledge base is introduced in the perception layer, which is responsible for plan scheduling of pharmaceutical production; the reconfigurable plans are generated from the production demand for drugs as well as the information state of low-level machine resources. To support functionality reconfiguration and low-level control, the IEC 61499 standard is also introduced for functionality modeling and machine control. We verify the proposed method with an experiment on demand-based drug packing production, which demonstrates its feasibility and adequate flexibility. © 2005-2012 IEEE.
Co-EEORS : cooperative energy efficient optimal relay selection protocol for underwater wireless sensor networks
- Authors: Khan, Anwar, Ali, Ihsan, Rahman, Atiq, Imran, Muhammad, Amin, Fazal, Mahmood, Hasan
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6 (2018), p. 28777-28789
- Full Text:
- Reviewed:
- Description: Cooperative routing mitigates the adverse channel effects of the harsh underwater environment and ensures reliable delivery of packets from the bottom to the surface of the water. Cooperative routing is analogous to sparse recovery in that faded copies of data packets are processed by the destination node to extract the desired information. However, it usually requires information about two or three position coordinates of the nodes, as well as synchronization of the source, relay, and destination nodes. These requirements make cooperative routing challenging, because sensor nodes move with water currents. Moreover, data packets are simply discarded if the acceptable threshold is not met at the destination, which threatens reliable data delivery. To cope with these challenges, this paper proposes a cooperative energy-efficient optimal relay selection protocol for underwater wireless sensor networks. Unlike existing cooperative routing protocols, the proposed scheme combines the location and depth of the sensor nodes to select the destination nodes. Combining these two parameters does not require knowing the position coordinates of the nodes, and it results in the selection of destination nodes closest to the water surface, so data packets are less affected by the channel. In addition, a source node chooses a relay node and a destination node, and the relay node forwards data packets to the destination node as soon as it receives them, eliminating the need for synchronization among the source, relay, and destination nodes. Moreover, the destination node acknowledges the source node about successful reception or retransmission of the data packets, which overcomes packet drops. Simulation results show that the proposed scheme is superior to some existing techniques in delivering packets to the final destination. © 2013 IEEE.
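A minimal sketch of the destination-selection rule described above: combine depth with a coarse distance estimate so packets move toward the surface without full position coordinates. The neighbour table, weights, and scoring rule are illustrative assumptions, not the protocol's exact criterion.

```python
# Sketch of destination selection favouring shallow, nearby neighbours;
# the neighbour table and weights are illustrative placeholders.
neighbours = [
    {"id": "n1", "depth_m": 420.0, "dist_m": 90.0},
    {"id": "n2", "depth_m": 310.0, "dist_m": 140.0},
    {"id": "n3", "depth_m": 355.0, "dist_m": 60.0},
]

def pick_destination(nodes, w_depth=0.7, w_dist=0.3):
    # Lower score is better: shallow nodes (closer to the surface) and
    # nearby nodes are preferred, with no full coordinates needed.
    def score(n):
        return w_depth * n["depth_m"] + w_dist * n["dist_m"]
    return min(nodes, key=score)

print(pick_destination(neighbours)["id"])   # n2: shallowest wins under these weights
```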
Extending the technology acceptance model for use of e-learning systems by digital learners
- Authors: Hanif, Aamer, Jamal, Faheem, Imran, Muhammad
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6 (2018), p. 73395-73404
- Full Text:
- Reviewed:
- Description: Technology-based learning systems enable enhanced student learning in higher-education institutions. This paper evaluates the factors affecting behavioral intention of students toward using e-learning systems in universities to augment classroom learning. Based on the technology acceptance model, this paper proposes six external factors that influence the behavioral intention of students toward use of e-learning. A quantitative approach involving structural equation modeling is adopted, and research data collected from 437 undergraduate students enrolled in three academic programs is used for analysis. Results indicate that subjective norm, perception of external control, system accessibility, enjoyment, and result demonstrability have a significant positive influence on perceived usefulness and on perceived ease of use of the e-learning system. This paper also examines the relevance of some previously used external variables, e.g., self-efficacy, experience, and computer anxiety, for present-world students who have been brought up as digital learners and have higher levels of computer literacy and experience. © 2018 IEEE.
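As a rough illustration of estimating structural paths like those in this study, the sketch below fits standardized OLS regressions on synthetic data as a crude stand-in for full structural equation modeling (a dedicated SEM package would be the faithful choice). The constructs, effect sizes, and data are invented for illustration; only the sample size of 437 is taken from the abstract.

```python
# Crude stand-in for SEM: standardized OLS per structural path on synthetic
# TAM-style data. Constructs and effect sizes are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 437                                   # matches the study's sample size
df = pd.DataFrame({
    "subjective_norm": rng.normal(size=n),
    "enjoyment": rng.normal(size=n),
})
# Synthetic structural relations: external factors -> perceived usefulness (pu),
# perceived usefulness -> behavioral intention (bi).
df["pu"] = 0.4 * df.subjective_norm + 0.3 * df.enjoyment + rng.normal(scale=0.8, size=n)
df["bi"] = 0.5 * df.pu + rng.normal(scale=0.8, size=n)

z = (df - df.mean()) / df.std()           # standardize so coefficients act like path weights
for target, preds in [("pu", ["subjective_norm", "enjoyment"]), ("bi", ["pu"])]:
    fit = sm.OLS(z[target], sm.add_constant(z[preds])).fit()
    print(target, fit.params.drop("const").round(2).to_dict())
```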
Performance analysis of priority-based IEEE 802.15.6 protocol in saturated traffic conditions
- Authors: Ullah, Sana , Tovar, Eduardo , Kim, Ki , Kim, Kyong , Imran, Muhammad
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6, no. (2018), p. 66198-66209
- Full Text:
- Reviewed:
- Description: Recent advancements in the Internet of Medical Things have enabled the deployment of miniaturized, intelligent, and low-power medical devices in, on, or around the human body for unobtrusive and remote health monitoring. The IEEE 802.15.6 standard facilitates such monitoring by enabling low-power and reliable wireless communication between the medical devices. The standard employs a carrier sense multiple access with collision avoidance (CSMA/CA) protocol for resource allocation and uses a priority-based backoff procedure that adjusts the contention window bounds of devices according to user requirements. Because the performance of this protocol degrades considerably as the number of devices increases, we propose an accurate analytical model to estimate the saturation throughput, mean energy consumption, and mean delay as functions of the number of devices. We assume an error-prone channel under saturated traffic conditions and determine the optimal performance bounds for a fixed number of devices in different priority classes with different values of bit error ratio. We conclude that high-priority devices obtain quicker and more reliable access to the error-prone channel than low-priority devices. The proposed model is validated through extensive simulations, and the performance bounds obtained in our analysis can be used to understand the trade-offs between different priority levels and network performance. © 2018 IEEE.
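The sketch below illustrates the priority-based backoff procedure the model analyzes: each user priority draws its backoff counter from a contention window whose bounds come from the IEEE 802.15.6 CSMA/CA tables (quoted here from memory, so verify against the standard before relying on them), and the window doubles, capped at CWmax, after every even-numbered consecutive failure. This is an illustrative sketch, not the paper's analytical model.

```python
import random

# Contention window bounds per user priority (UP0 lowest .. UP7 highest),
# as given in the IEEE 802.15.6 CSMA/CA tables; quoted from memory.
CW_BOUNDS = {0: (16, 64), 1: (16, 32), 2: (8, 32), 3: (8, 16),
             4: (4, 16),  5: (4, 8),   6: (2, 8),  7: (1, 4)}

def backoff_counters(priority, failures):
    """Draw successive backoff counters for a device of a given priority.

    CW starts at CWmin and doubles (capped at CWmax) after every
    even-numbered consecutive failure; the node counts its backoff
    counter down during idle CSMA slots and transmits when it hits zero.
    """
    cw_min, cw_max = CW_BOUNDS[priority]
    cw, draws = cw_min, []
    for attempt in range(1, failures + 1):
        draws.append(random.randint(1, cw))
        if attempt % 2 == 0:          # even-numbered failure -> double CW
            cw = min(2 * cw, cw_max)
    return draws

# Higher-priority devices draw from smaller windows, so they typically
# reach the channel sooner -- the behavior the paper's bounds quantify:
print(backoff_counters(priority=7, failures=4))  # small counters
print(backoff_counters(priority=0, failures=4))  # larger counters
```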
Technology-assisted decision support system for efficient water utilization : a real-time testbed for irrigation using wireless sensor networks
- Authors: Khan, Rahim , Ali, Ihsan , Zakarya, Muhammad , Ahmad, Mushtaq , Imran, Muhammad , Shoaib, Muhammad
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6, no. (2018), p. 25686-25697
- Full Text:
- Reviewed:
- Description: Scientific organizations and researchers are eager to apply recent technological advancements, such as sensors and actuators, in different application areas, including environmental monitoring, intelligent buildings, and precision agriculture. Technology-assisted irrigation for agriculture is a major research innovation that eases the work of farmers and prevents water wastage. Wireless sensor networks (WSNs) comprise sensor nodes that directly interact with the physical environment and provide real-time data useful in identifying regions in need of irrigation, particularly in agricultural fields. This paper presents an efficient methodology that employs a WSN as a data collection tool for a decision support system (DSS). The proposed DSS can assist farmers in their manual irrigation procedures or automate irrigation activities. In both scenarios, water-deficient sites are identified using soil moisture and environmental sensors. However, the proposed system's accuracy is directly proportional to the accuracy of the dynamic data generated by the deployed WSN. A simplified outlier-detection algorithm is thus presented and integrated with the proposed DSS to fine-tune the collected data prior to processing; its complexity is O(1) for dynamic datasets generated by sensor nodes and O(n) for static datasets. Different issues in technology-assisted irrigation management and their solutions are also addressed. © 2013 IEEE.
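The abstract does not reproduce the paper's outlier-detection algorithm, so the following is only a minimal sketch of how a filter can stay O(1) per incoming reading by maintaining running statistics (Welford's method) with a z-score rule; the threshold and the sample readings are assumptions, not the authors' values.

```python
# Hypothetical O(1)-per-sample outlier filter for streaming sensor data:
# running mean/variance via Welford's method, flag on a z-score rule.
class StreamingOutlierFilter:
    def __init__(self, threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0              # running sum of squared deviations
        self.threshold = threshold

    def update(self, x):
        """Return True if x looks like an outlier; constant work per sample."""
        if self.n >= 2:
            std = (self.m2 / (self.n - 1)) ** 0.5
            if std > 0 and abs(x - self.mean) / std > self.threshold:
                return True        # flag and skip absorbing it into the stats
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return False

soil_moisture = [31.2, 30.8, 31.5, 30.9, 95.0, 31.1]  # hypothetical readings
f = StreamingOutlierFilter()
flags = [f.update(v) for v in soil_moisture]          # the 95.0 is flagged
```

Keeping only the count, mean, and squared-deviation sum is what makes the per-reading cost independent of the dataset size, matching the O(1) claim for dynamic data.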