Performance analysis of priority-based IEEE 802.15.6 protocol in saturated traffic conditions
- Ullah, Sana, Tovar, Eduardo, Kim, Ki, Kim, Kyong, Imran, Muhammad
- Authors: Ullah, Sana , Tovar, Eduardo , Kim, Ki , Kim, Kyong , Imran, Muhammad
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6 (2018), p. 66198-66209
- Full Text:
- Reviewed:
- Description: Recent advancements in the Internet of Medical Things have enabled deployment of miniaturized, intelligent, and low-power medical devices in, on, or around a human body for unobtrusive and remote health monitoring. The IEEE 802.15.6 standard facilitates such monitoring by enabling low-power and reliable wireless communication between the medical devices. The IEEE 802.15.6 standard employs a carrier sense multiple access with collision avoidance protocol for resource allocation. It utilizes a priority-based backoff procedure, adjusting the contention window bounds of devices according to user requirements. As the performance of this protocol degrades considerably as the number of devices increases, we propose an accurate analytical model to estimate the saturation throughput, mean energy consumption, and mean delay as functions of the number of devices. We assume an error-prone channel with saturated traffic conditions. We determine the optimal performance bounds for a fixed number of devices in different priority classes with different values of bit error ratio. We conclude that high-priority devices obtain quicker and more reliable access to the error-prone channel than low-priority devices. The proposed model is validated through extensive simulations. The performance bounds obtained in our analysis can be used to understand the tradeoffs between different priority levels and network performance. © 2018 IEEE.
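The priority-based backoff procedure summarized in this abstract can be sketched in a few lines. The contention-window bounds below follow the per-user-priority table commonly cited for IEEE 802.15.6, and CW doubling after every second consecutive failure is the standard's rule; the function itself is an illustrative sketch, not the paper's analytical model.

```python
import random

# Contention window bounds (CWmin, CWmax) per user priority, UP0 (lowest)
# to UP7 (highest), as commonly cited for IEEE 802.15.6.
CW_BOUNDS = {0: (16, 64), 1: (16, 32), 2: (8, 32), 3: (8, 16),
             4: (4, 16), 5: (4, 8), 6: (2, 8), 7: (1, 4)}

def backoff_counter(priority, failures, rng=random):
    """Draw a backoff counter for a device of the given user priority.

    CW starts at CWmin and doubles after every second consecutive
    failure, capped at CWmax; the counter is drawn uniformly from [1, CW].
    """
    cw_min, cw_max = CW_BOUNDS[priority]
    cw = min(cw_min * (2 ** (failures // 2)), cw_max)
    return rng.randint(1, cw)
```

Because a UP7 device draws its counter from [1, 1-4] while a UP0 device draws from [1, 16-64], high-priority devices almost always count down to zero first, which is the mechanism behind the priority-dependent throughput and delay the paper analyzes.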
Extending the technology acceptance model for use of e-learning systems by digital learners
- Hanif, Aamer, Jamal, Faheem, Imran, Muhammad
- Authors: Hanif, Aamer , Jamal, Faheem , Imran, Muhammad
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6 (2018), p. 73395-73404
- Full Text:
- Reviewed:
- Description: Technology-based learning systems enable enhanced student learning in higher-education institutions. This paper evaluates the factors affecting behavioral intention of students toward using e-learning systems in universities to augment classroom learning. Based on the technology acceptance model, this paper proposes six external factors that influence the behavioral intention of students toward use of e-learning. A quantitative approach involving structural equation modeling is adopted, and research data collected from 437 undergraduate students enrolled in three academic programs is used for analysis. Results indicate that subjective norm, perception of external control, system accessibility, enjoyment, and result demonstrability have a significant positive influence on perceived usefulness and on perceived ease of use of the e-learning system. This paper also examines the relevance of some previously used external variables, e.g., self-efficacy, experience, and computer anxiety, for present-world students who have been brought up as digital learners and have higher levels of computer literacy and experience. © 2018 IEEE.
Technology-assisted decision support system for efficient water utilization: a real-time testbed for irrigation using wireless sensor networks
- Khan, Rahim, Ali, Ihsan, Zakarya, Muhammad, Ahmad, Mushtaq, Imran, Muhammad, Shoaib, Muhammad
- Authors: Khan, Rahim , Ali, Ihsan , Zakarya, Muhammad , Ahmad, Mushtaq , Imran, Muhammad , Shoaib, Muhammad
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6 (2018), p. 25686-25697
- Full Text:
- Reviewed:
- Description: Scientific organizations and researchers are eager to apply recent technological advancements, such as sensors and actuators, in different application areas, including environmental monitoring, intelligent buildings, and precision agriculture. Technology-assisted irrigation for agriculture is a major research innovation that eases the work of farmers and prevents water wastage. Wireless sensor networks (WSNs) comprise sensor nodes that directly interact with the physical environment and provide real-time data useful in identifying regions in need of water, particularly in agricultural fields. This paper presents an efficient methodology that employs a WSN as a data-collection tool and a decision support system (DSS). The proposed DSS can assist farmers in their manual irrigation procedures or automate irrigation activities. Water-deficient sites in both scenarios are identified using soil-moisture and environmental sensors. However, the proposed system's accuracy is directly proportional to the accuracy of the dynamic data generated by the deployed WSN. A simplified outlier-detection algorithm is thus presented and integrated with the proposed DSS to fine-tune the collected data prior to processing. The complexity of the algorithm is O(1) for dynamic datasets generated by sensor nodes and O(n) for static datasets. Different issues in technology-assisted irrigation management and their solutions are also addressed. © 2013 IEEE.
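The O(1)-per-sample outlier screening mentioned in the abstract can be illustrated with a running-statistics filter. This is a hypothetical stand-in (Welford's online mean/variance with a k-sigma test), not the paper's exact algorithm; the class name and threshold are assumptions.

```python
import math

class StreamingOutlierFilter:
    """Constant-time-per-sample outlier screen for sensor readings.

    Keeps a running mean and variance (Welford's method) and flags any
    reading more than `k` standard deviations from the mean. Both the
    check and the update cost O(1) per sample, matching the complexity
    the abstract claims for dynamic datasets.
    """

    def __init__(self, k=3.0):
        self.k, self.n, self.mean, self.m2 = k, 0, 0.0, 0.0

    def update(self, x):
        """Return True if x looks like an outlier, then fold it in."""
        is_outlier = False
        if self.n >= 2:
            std = math.sqrt(self.m2 / (self.n - 1))
            is_outlier = std > 0 and abs(x - self.mean) > self.k * std
        # Welford's constant-time update of mean and variance.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return is_outlier
```

A static dataset of n readings would be screened by one pass of `update` calls, giving the O(n) static-case cost the abstract mentions.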
Co-EEORS: cooperative energy efficient optimal relay selection protocol for underwater wireless sensor networks
- Khan, Anwar, Ali, Ihsan, Rahman, Atiq, Imran, Muhammad, Amin, Fazal, Mahmood, Hasan
- Authors: Khan, Anwar , Ali, Ihsan , Rahman, Atiq , Imran, Muhammad , Amin, Fazal , Mahmood, Hasan
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6 (2018), p. 28777-28789
- Full Text:
- Reviewed:
- Description: Cooperative routing mitigates the adverse channel effects in the harsh underwater environment and ensures reliable delivery of packets from the bottom to the surface of the water. Cooperative routing is analogous to sparse recovery in that faded copies of data packets are processed by the destination node to extract the desired information. However, it usually requires information about two or three position coordinates of the nodes, as well as synchronization of the source, relay, and destination nodes. These features make cooperative routing a challenging task, as sensor nodes move with water currents. Moreover, data packets are simply discarded if the acceptable threshold is not met at the destination, which threatens the reliable delivery of data to the final destination. To cope with these challenges, this paper proposes a cooperative energy-efficient optimal relay selection protocol for underwater wireless sensor networks. Unlike existing routing protocols involving cooperation, the proposed scheme combines the location and depth of the sensor nodes to select the destination nodes. Combining these two parameters does not require knowing the position coordinates of the nodes and results in the selection of destination nodes closest to the water surface. As a result, data packets are less affected by the channel properties. In addition, a source node chooses a relay node and a destination node; data packets are forwarded to the destination node by the relay node as soon as the relay node receives them, which eliminates the need for synchronization among the source, relay, and destination nodes. Moreover, the destination node notifies the source node of successful reception or the need for retransmission, which prevents packet drops. Based on simulation results, the proposed scheme outperforms some existing techniques in delivering packets to the final destination. © 2013 IEEE.
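The depth-based destination/relay choice described above can be sketched as follows. The neighbour-table fields (`id`, `depth`) are illustrative; the actual Co-EEORS rule also factors in location, but the key point, that shallower nodes are preferred without any (x, y, z) coordinates, survives in this simplification.

```python
def select_destination_and_relay(neighbors):
    """Pick a destination and a relay from a source node's neighbour table.

    Each neighbour advertises only its measured depth (e.g., from a
    pressure sensor); no position coordinates are needed. The shallowest
    neighbour becomes the destination and the next-shallowest the relay,
    so packets always move toward the water surface.
    """
    if len(neighbors) < 2:
        raise ValueError("need at least two neighbours")
    ranked = sorted(neighbors, key=lambda n: n["depth"])
    return ranked[0], ranked[1]
```

The relay then forwards each packet to the destination as soon as it arrives, which is what removes the synchronization requirement noted in the abstract.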
The rise of ransomware and emerging security challenges in the internet of things
- Yaqoob, Ibrar, Ahmed, Ejaz, Rehman, Muhammad, Ahmed, Abdelmuttlib, Imran, Muhammad
- Authors: Yaqoob, Ibrar , Ahmed, Ejaz , Rehman, Muhammad , Ahmed, Abdelmuttlib , Imran, Muhammad
- Date: 2017
- Type: Text , Journal article
- Relation: Computer Networks Vol. 129 (2017), p. 444-458
- Full Text: false
- Reviewed:
- Description: With the increasing miniaturization of smartphones, computers, and sensors in the Internet of Things (IoT) paradigm, strengthening the security and preventing ransomware attacks have become key concerns. Traditional security mechanisms are no longer applicable because of the involvement of resource-constrained devices, which require more computation power and resources. This paper presents the ransomware attacks and security concerns in IoT. We initially discuss the rise of ransomware attacks and outline the associated challenges. Then, we investigate, report, and highlight the state-of-the-art research efforts directed at IoT from a security perspective. A taxonomy is devised by classifying and categorizing the literature based on important parameters (e.g., threats, requirements, IEEE standards, deployment level, and technologies). Furthermore, a few credible case studies are outlined to alert people regarding how seriously IoT devices are vulnerable to threats. We enumerate the requirements that need to be met for securing IoT. Several indispensable open research challenges (e.g., data integrity, lightweight security mechanisms, lack of security software's upgradability and patchability features, physical protection of trillions of devices, privacy, and trust) are identified and discussed. Several prominent future research directions are provided. © 2017 Elsevier B.V. **Please note that there are multiple authors for this article therefore only the name of the first 5 including Federation University Australia affiliate “Muhammad Imran” is provided in this record**
Blind detection of copy-move forgery in digital audio forensics
- Imran, Muhammad, Ali, Zulfiqar, Bakhsh, Sheikh, Akram, Sheeraz
- Authors: Imran, Muhammad , Ali, Zulfiqar , Bakhsh, Sheikh , Akram, Sheeraz
- Date: 2017
- Type: Text , Journal article
- Relation: IEEE Access Vol. 5 (2017), p. 12843-12855
- Full Text:
- Reviewed:
- Description: Although copy-move forgery is one of the most common fabrication techniques, blind detection of such tampering in digital audio is largely unexplored. Unlike active techniques, blind forgery detection is challenging because it cannot rely on an embedded watermark or signature, which is unavailable in most real-life scenarios. Forgery localization is therefore more challenging, especially for blind methods. In this paper, we propose a novel method for blind detection and localization of copy-move forgery. One of the most crucial steps in the proposed method is a voice activity detection (VAD) module for investigating audio recordings to detect and localize the forgery. The VAD module is equally vital for the development of the copy-move forgery database, wherein audio samples are generated using recordings from various types of microphones. We employ chaotic theory to copy and move the text in the generated forged recordings, ensuring that forgery can be localized at any place in a recording. The VAD module is responsible for the extraction of words in a forged audio recording; these words are then analyzed by applying a 1-D local binary pattern operator, which provides the patterns of extracted words in the form of histograms. The forged parts (copied and moved text) have similar histograms. An accuracy of 96.59% is achieved, and the proposed method is robust against noise. © 2013 IEEE.
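The 1-D local binary pattern step, in which each extracted word is reduced to a histogram of neighbour-comparison codes, can be sketched as below; the radius and bit ordering are illustrative choices, not taken from the paper.

```python
def lbp_1d(signal, radius=4):
    """1-D local binary pattern codes for a sampled signal.

    Each sample is compared with `radius` neighbours on each side; a
    neighbour greater than or equal to the centre contributes a 1-bit,
    giving a 2*radius-bit code per sample.
    """
    codes = []
    for i in range(radius, len(signal) - radius):
        code = 0
        for j, off in enumerate(range(-radius, radius + 1)):
            if off == 0:
                continue  # skip the centre sample itself
            if signal[i + off] >= signal[i]:
                code |= 1 << (j if off < 0 else j - 1)
        codes.append(code)
    return codes

def lbp_histogram(signal, radius=4):
    """Normalised histogram of 1-D LBP codes (2**(2*radius) bins)."""
    codes = lbp_1d(signal, radius)
    hist = [0.0] * (1 << (2 * radius))
    for c in codes:
        hist[c] += 1.0 / len(codes)
    return hist
```

Because a copied-and-moved word reproduces the same sample-to-neighbour relations as the original, the two segments yield near-identical histograms, which is the similarity cue used for forgery localization.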
Enhancing quality-of-service conditions using a cross-layer paradigm for ad-hoc vehicular communication
- Rehman, Sabih, Khan, M. Arif, Imran, Muhammad, Zia, Tanveer, Iftikhar, Mohsin
- Authors: Rehman, Sabih , Khan, M. Arif , Imran, Muhammad , Zia, Tanveer , Iftikhar, Mohsin
- Date: 2017
- Type: Text , Journal article
- Relation: IEEE Access Vol. 5 (2017), p. 12404-12416
- Full Text:
- Reviewed:
- Description: The Internet of Vehicles (IoV) is an emerging paradigm aiming to introduce a plethora of innovative applications and services that impose certain quality-of-service (QoS) requirements. The IoV mainly relies on vehicular ad-hoc networks (VANETs) for autonomous inter-vehicle communication and road-traffic safety management. With the ever-increasing demand to design new and emerging applications for VANETs, one challenge that continues to stand out is the provision of acceptable QoS to particular user applications. Most existing solutions to this challenge rely on a single layer of the protocol stack. This paper presents a cross-layer decision-based routing protocol that chooses the best multi-hop path for packet delivery to meet acceptable QoS requirements. The proposed protocol acquires channel-rate information from the physical layer and incorporates it into decision making while directing traffic at the network layer. Key performance metrics for the system design are analyzed using extensive experimental simulation scenarios. In addition, three data-rate-variant solutions are proposed to cater for various application-specific requirements in highway and urban environments. © 2013 IEEE.
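The cross-layer decision, feeding PHY-reported channel rates into the network-layer path choice, can be illustrated with a bottleneck-rate rule. The list-of-rates path representation and the tie-break on hop count are assumptions, not the paper's exact metric.

```python
def best_path(paths):
    """Cross-layer multi-hop path choice.

    Each candidate path is given as a list of per-hop channel rates
    (e.g., in Mb/s, as reported by the physical layer). A path is only
    as fast as its slowest hop, so prefer the path with the highest
    bottleneck rate; break ties by choosing fewer hops.
    """
    return max(paths, key=lambda p: (min(p), -len(p)))
```

For example, a two-hop path whose slowest link runs at 12 Mb/s beats a three-hop path throttled to 6 Mb/s by one weak link, even if the latter's other links are faster.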
A critical analysis of mobility management related issues of wireless sensor networks in cyber physical systems
- Al-Muhtadi, Jalal, Qiang, Ma, Zeb, Khan, Chaudhry, Junaid, Imran, Muhammad
- Authors: Al-Muhtadi, Jalal , Qiang, Ma , Zeb, Khan , Chaudhry, Junaid , Imran, Muhammad
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6 (2018), p. 16363-16376
- Full Text:
- Reviewed:
- Description: Mobility management has been a long-standing issue in mobile wireless sensor networks, and its implications are immense in the context of cyber-physical systems. This paper presents a critical analysis of current approaches to mobility management by evaluating them against a set of criteria that are essentially inherent characteristics of such systems, on which these approaches are expected to provide acceptable performance. We summarize these characteristics using a quadruple set of metrics and use this set to classify the approaches to mobility management discussed in this paper. Finally, the paper concludes by reviewing the main findings and providing suggestions to guide future research efforts in the area. **Please note that there are multiple authors for this article therefore only the name of the first 5 including Federation University Australia affiliate “Muhammad Imran” is provided in this record**
A blockchain-based solution for enhancing security and privacy in smart factory
- Wan, Jiafu, Li, Jiapeng, Imran, Muhammad, Li, Di
- Authors: Wan, Jiafu , Li, Jiapeng , Imran, Muhammad , Li, Di
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 15, no. 6 (2019), p. 3652-3660
- Full Text: false
- Reviewed:
- Description: Through the Industrial Internet of Things (IIoT), the smart factory has entered a booming period. However, as the number of nodes and the network size become larger, the traditional IIoT architecture can no longer provide effective support for such an enormous system. Therefore, we introduce the Blockchain architecture, an emerging scheme for constructing distributed networks, to reshape the traditional IIoT architecture. First, the major problems of the traditional IIoT architecture are analyzed, and the existing improvements are summarized. Second, we introduce a security and privacy model to help design the Blockchain-based architecture. On this basis, we decompose and reorganize the original IIoT architecture to form a new multicenter, partially decentralized architecture. Then, we introduce some relevant security technologies to improve and optimize the new architecture. After that, we design the data interaction process and the algorithms of the architecture. Finally, we use an automatic production platform to discuss the specific implementation. The experimental results show that the proposed architecture provides better security and privacy protection than the traditional architecture. Thus, the proposed architecture represents a significant improvement over the original, providing a new direction for IIoT development. © 2005-2012 IEEE.
Impact of node deployment and routing for protection of critical infrastructures
- Subhan, Fazli, Noreen, Madiha, Imran, Muhammad, Tariq, Moeenuddin, Khan, Asfandyar, Shoaib, Muhammad
- Authors: Subhan, Fazli , Noreen, Madiha , Imran, Muhammad , Tariq, Moeenuddin , Khan, Asfandyar , Shoaib, Muhammad
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Access Vol. 7 (2019), p. 11502-11514
- Full Text:
- Reviewed:
- Description: Recently, linear wireless sensor networks (LWSNs) have been eliciting increasing attention because of their suitability for applications such as the protection of critical infrastructures. Most of these applications require LWSN to remain operational for a longer period. However, the non-replenishable limited battery power of sensor nodes does not allow them to meet these expectations. Therefore, a shorter network lifetime is one of the most prominent barriers in large-scale deployment of LWSN. Unlike most existing studies, in this paper, we analyze the impact of node placement and clustering on LWSN network lifetime. First, we categorize and classify existing node placement and clustering schemes for LWSN and introduce various topologies for disparate applications. Then, we highlight the peculiarities of LWSN applications and discuss their unique characteristics. Several application domains of LWSN are described. We present three node placement strategies (i.e., linear sequential, linear parallel, and grid) and various deployment methods such as random, uniform, decreasing distance, and triangular. Extensive simulation experiments are conducted to analyze the performance of the three state-of-the-art routing protocols in the context of node deployment strategies and methods. The experimental results demonstrate that the node deployment strategies and methods significantly affect LWSN lifetime. © 2013 IEEE.
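Two of the placement strategies the paper compares, linear sequential and grid, can be sketched as coordinate generators; the spacing conventions and return format are illustrative.

```python
def linear_placement(n, length):
    """Evenly spaced nodes along a linear infrastructure (e.g., a
    pipeline) of the given length; returns (x, y) pairs on the axis."""
    if n < 2:
        return [(0.0, 0.0)] if n == 1 else []
    step = length / (n - 1)
    return [(round(i * step, 6), 0.0) for i in range(n)]

def grid_placement(rows, cols, spacing):
    """rows x cols grid deployment with uniform spacing."""
    return [(c * spacing, r * spacing)
            for r in range(rows) for c in range(cols)]
```

Feeding such layouts into a network simulator is one way to reproduce the kind of lifetime comparison across deployment strategies that the paper reports.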
- Li, Xiaomin, Wan, Jiafu, Dai, Hong-Ning, Imran, Muhammad, Xia, Min, Celesti, Antonio
- Authors: Li, Xiaomin , Wan, Jiafu , Dai, Hong-Ning , Imran, Muhammad , Xia, Min , Celesti, Antonio
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 15, no. 7 (2019), p. 4225-4234
- Full Text: false
- Reviewed:
- Description: At present, smart manufacturing computing frameworks face many challenges, such as the lack of an effective framework for fusing computing historical heritages and of a resource scheduling strategy that guarantees the low-latency requirement. In this paper, we propose a hybrid computing framework and design an intelligent resource scheduling strategy to fulfill the real-time requirement in smart manufacturing with edge computing support. First, a four-layer computing system in a smart manufacturing environment is provided to support artificial intelligence task operation from the network perspective. Then, a two-phase algorithm for scheduling the computing resources in the edge layer is designed based on greedy and threshold strategies with latency constraints. Finally, a prototype platform was developed, and we conducted experiments on it to evaluate the performance of the proposed framework against traditionally used methods. The proposed strategies demonstrate excellent real-time performance, satisfaction degree (SD), and energy consumption of computing services in smart manufacturing with edge computing. © 2005-2012 IEEE.
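The two-phase greedy/threshold idea can be sketched as below. The task/node representation, the largest-task-first order, and the defer-upward fallback are assumptions for illustration; the paper's actual algorithm differs in detail.

```python
def schedule_tasks(tasks, nodes, latency_limit):
    """Two-phase greedy/threshold assignment of tasks to edge nodes.

    `tasks` maps task id -> workload; `nodes` maps node id ->
    processing rate. Phase 1 greedily gives each task to the node that
    can finish it soonest; phase 2 rejects any assignment whose
    completion time exceeds `latency_limit` (such tasks would be
    deferred to a higher layer, e.g., the cloud).
    """
    finish = {n: 0.0 for n in nodes}  # accumulated busy time per node
    placed, deferred = {}, []
    for tid, work in sorted(tasks.items(), key=lambda t: -t[1]):
        # Greedy phase: earliest-finish node for this task.
        best = min(nodes, key=lambda n: finish[n] + work / nodes[n])
        done = finish[best] + work / nodes[best]
        # Threshold phase: enforce the latency constraint.
        if done <= latency_limit:
            finish[best] = done
            placed[tid] = best
        else:
            deferred.append(tid)
    return placed, deferred
```

The threshold check is what turns a pure makespan heuristic into a latency-constrained one: oversized tasks are pushed off the edge layer instead of delaying everything behind them.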
Efficient brain tumor segmentation with multiscale two-pathway-group conventional neural networks
- Razzak, Muhammad, Imran, Muhammad, Xu, Guandong
- Authors: Razzak, Muhammad , Imran, Muhammad , Xu, Guandong
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Journal of Biomedical and Health Informatics Vol. 23, no. 5 (2019), p. 1911-1919
- Full Text:
- Reviewed:
- Description: Manual segmentation of brain tumors for cancer diagnosis from MRI images is a difficult, tedious, and time-consuming task. The accuracy and robustness of brain tumor segmentation, therefore, are crucial for diagnosis, treatment planning, and treatment outcome evaluation. Mostly, automatic brain tumor segmentation methods use hand-designed features. Similarly, traditional deep learning methods such as convolutional neural networks require a large amount of annotated data to learn from, which is often difficult to obtain in the medical domain. Here, we describe a new two-pathway-group CNN architecture for brain tumor segmentation, which exploits local features and global contextual features simultaneously. This model enforces equivariance in the two-pathway CNN model to reduce instabilities and overfitting through parameter sharing. Finally, we embed the cascade architecture into the two-pathway-group CNN, in which the output of a basic CNN is treated as an additional source and concatenated at the last layer. Validation of the model on the BRATS2013 and BRATS2015 data sets revealed that embedding a group CNN into a two-pathway architecture improved the overall performance over the currently published state-of-the-art while computational complexity remains attractive. © 2013 IEEE.
Exact string matching algorithms : survey, issues, and future research directions
- Hakak, Saqib, Kamsin, Amirrudin, Shivakumara, Palaiahnakote, Gilkar, Gulshan, Khan, Wazir, Imran, Muhammad
- Authors: Hakak, Saqib , Kamsin, Amirrudin , Shivakumara, Palaiahnakote , Gilkar, Gulshan , Khan, Wazir , Imran, Muhammad
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Access Vol. 7, no. (2019), p. 69614-69637
- Full Text:
- Reviewed:
- Description: String matching has been an extensively studied research domain in the past two decades due to its various applications in the fields of text, image, signal, and speech processing. As a result, choosing an appropriate string matching algorithm for current applications and addressing challenges is difficult. Understanding different string matching approaches (such as exact string matching and approximate string matching algorithms), integrating several algorithms, and modifying algorithms to address related issues are also difficult. This paper presents a survey on single-pattern exact string matching algorithms. The main purpose of this survey is to propose a new classification, identify new directions, and highlight the possible challenges, current trends, and future works in the area of string matching algorithms, with a core focus on exact string matching algorithms. © 2013 IEEE.
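To make the surveyed family concrete, here is a minimal sketch of single-pattern exact string matching: a naive scan alongside Knuth-Morris-Pratt (KMP), one of the classic algorithms such surveys classify. KMP avoids re-comparisons using a prefix-function table.

```python
def naive_search(text, pattern):
    """Return all start indices where pattern occurs in text (brute force)."""
    n, m = len(text), len(pattern)
    return [i for i in range(n - m + 1) if text[i:i + m] == pattern]

def kmp_search(text, pattern):
    """Return all start indices where pattern occurs in text (KMP)."""
    m = len(pattern)
    # Prefix table: pi[i] = length of the longest proper prefix of
    # pattern[:i+1] that is also a suffix of it.
    pi = [0] * m
    k = 0
    for i in range(1, m):
        while k > 0 and pattern[i] != pattern[k]:
            k = pi[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        pi[i] = k
    matches, q = [], 0
    for i, ch in enumerate(text):
        while q > 0 and ch != pattern[q]:
            q = pi[q - 1]          # fall back instead of rescanning text
        if ch == pattern[q]:
            q += 1
        if q == m:                 # full match ending at position i
            matches.append(i - m + 1)
            q = pi[q - 1]
    return matches
```

Both return the same occurrence lists; the difference the survey's classification hinges on is the preprocessing and worst-case scan behavior (O(nm) naive versus O(n+m) for KMP).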
Emergency message dissemination schemes based on congestion avoidance in VANET and vehicular FoG computing
- Ullah, Ata, Yaqoob, Shumayla, Imran, Muhammad, Ning, Huansheng
- Authors: Ullah, Ata , Yaqoob, Shumayla , Imran, Muhammad , Ning, Huansheng
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Access Vol. 7, no. (2019), p. 1570-1585
- Full Text:
- Reviewed:
- Description: With the rapid growth in connected vehicles, FoG-assisted vehicular ad hoc network (VANET) is an emerging and novel field of research. For information sharing, a number of messages are exchanged in various applications, including traffic monitoring and area-specific live weather and social aspects monitoring. This is quite challenging where vehicles' speed, direction, and density of neighbors on the move are not consistent. In this scenario, congestion avoidance is also quite challenging in order to avoid communication loss during busy hours or in emergency cases. This paper presents emergency message dissemination schemes that are based on congestion avoidance scenarios in VANET and vehicular FoG computing. In a similar vein, FoG-assisted VANET architecture is explored that can efficiently manage message congestion scenarios. We present a taxonomy of schemes that address message congestion avoidance. Next, we include a discussion comparing congestion avoidance schemes to highlight their strengths and weaknesses. We also identify that FoG servers help to reduce accessibility delays and congestion as compared to directly approaching the cloud for all requests in linkage with big data repositories. For the dependable applicability of FoG in VANET, we identify a number of open research challenges. © 2013 IEEE.
Bio-inspired network security for 5G-enabled IoT applications
- Saleem, Kashif, Alabduljabbar, Ghadah, Alrowais, Nouf, Al-Muhtadi, Jalal, Imran, Muhammad, Rodrigues, Joel
- Authors: Saleem, Kashif , Alabduljabbar, Ghadah , Alrowais, Nouf , Al-Muhtadi, Jalal , Imran, Muhammad , Rodrigues, Joel
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 1-1
- Full Text:
- Reviewed:
- Description: Every IPv6-enabled device connected and communicating over the Internet forms the Internet of things (IoT) that is prevalent in society and is used in daily life. This IoT platform will quickly grow to be populated with billions or more objects by making every electrical appliance, car, and even items of furniture smart and connected. The 5th generation (5G) and beyond networks will further boost these IoT systems. The massive utilization of these systems over gigabits per second generates numerous issues. Owing to the huge complexity in large-scale deployment of IoT, data privacy and security are the most prominent challenges, especially for critical applications such as Industry 4.0, e-healthcare, and military. Threat agents persistently strive to find new vulnerabilities and exploit them. Therefore, including promising security measures to support the running systems, not to harm or collapse them, is essential. Nature-inspired algorithms have the capability to provide autonomous and sustainable defense and healing mechanisms. This paper first surveys the 5G network layer security for IoT applications and lists the network layer security vulnerabilities and requirements in wireless sensor networks, IoT, and 5G-enabled IoT. Second, a detailed literature review is conducted with the current network layer security methods and the bio-inspired techniques for IoT applications exchanging data packets over 5G. Finally, the bio-inspired algorithms are analyzed in the context of providing a secure network layer for IoT applications connected over 5G and beyond networks.
Novel one time signatures (NOTS) : a compact post-quantum digital signature scheme
- Shahid, Furqan, Ahmad, Iftikhar, Imran, Muhammad, Shoaib, Muhammad
- Authors: Shahid, Furqan , Ahmad, Iftikhar , Imran, Muhammad , Shoaib, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 15895-15906
- Full Text:
- Reviewed:
- Description: The future of hash-based digital signature schemes appears to be very bright in the upcoming quantum era because of the quantum threats to number-theory-based digital signature schemes. Shor's algorithm allows a sufficiently powerful quantum computer to break the building blocks of number-theory-based signature schemes in polynomial time. Hash-based signature schemes, being quite efficient and provably secure, can fill this gap effectively. However, a drawback of hash-based signature schemes is their larger key and signature sizes, which can prove a barrier to their adoption by space-critical applications, like the blockchain. A hash-based signature scheme is constructed using a one-time signature (OTS) scheme. The underlying OTS scheme plays an important role in determining the key and signature sizes of a hash-based signature scheme. In this article, we propose a novel OTS scheme with minimized key and signature sizes as compared to all existing OTS schemes. Our proposed OTS scheme offers an 88% reduction in both key and signature sizes as compared to the popular Winternitz OTS scheme. Furthermore, our proposed OTS scheme offers 84% and 86% reductions in the signature and key sizes, respectively, as compared to an existing compact variant of the WOTS scheme, i.e., WOTS+.
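For readers unfamiliar with hash-based OTS schemes, a toy Lamport one-time signature over a SHA-256 digest shows where the large key and signature sizes come from (256 digest bits, two 32-byte preimages per bit). This is a minimal classroom sketch, not the NOTS construction proposed in the paper.

```python
import hashlib
import secrets

# Toy Lamport OTS: the secret key holds one random preimage per
# (bit position, bit value); the public key holds their hashes.
# Signing reveals one preimage per digest bit, so the key pair must
# never be reused.

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[H(s) for s in pair] for pair in sk]
    return sk, pk

def bits(msg):
    # Digest of the message as a list of 256 bits, most significant first.
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal the preimage matching each digest bit.
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"hello")
```

Here the signature alone is 256 × 32 = 8192 bytes; Winternitz-style schemes (and, per the abstract, NOTS further still) compress exactly this cost by signing several bits per hash chain.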
Resource optimized federated learning-enabled cognitive internet of things for smart industries
- Khan, Latif, Alsenwi, Madyan, Yaqoob, Ibrar, Imran, Muhammad, Han, Zhu, Hong, Choong
- Authors: Khan, Latif , Alsenwi, Madyan , Yaqoob, Ibrar , Imran, Muhammad , Han, Zhu , Hong, Choong
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 168854-168864
- Full Text:
- Reviewed:
- Description: Leveraging the cognitive Internet of things (C-IoT), emerging computing technologies, and machine learning schemes for industries can assist in streamlining manufacturing processes, revolutionizing operational analytics, and maintaining factory efficiency. However, further adoption of centralized machine learning in industries seems to be restricted due to data privacy issues. Federated learning has the potential to bring predictive features to industrial systems without leaking private information. However, its implementation involves key challenges, including resource optimization, robustness, and security. In this article, we propose a novel dispersed federated learning (DFL) framework to provide resource optimization, whereby the distributed fashion of learning offers robustness. We formulate an integer linear optimization problem to minimize the overall federated learning cost for the DFL framework. To solve the formulated problem, first, we decompose it into two sub-problems: association and resource allocation. Second, we relax the association and resource allocation sub-problems to make them convex optimization problems. Later, we use a rounding technique to obtain binary association and resource allocation variables. Our proposed algorithm works in an iterative manner by fixing one problem variable (for example, association) and computing the other (for example, resource allocation). The iterative algorithm continues until convergence of the formulated cost optimization problem. Furthermore, we compare the proposed DFL with two schemes, namely random resource allocation and random association. Numerical results show the superiority of the proposed DFL scheme. © 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
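The fix-one-variable, compute-the-other iteration described above can be illustrated with a much simpler toy. This is not the paper's relax-and-round solver (which needs a convex-program step): the association update here is a plain best response and the allocation step is proportional capacity splitting; demands, capacities, and the cost model are invented example values.

```python
# Toy alternating solver in the spirit of the DFL iteration: alternate
# between an association step (each client joins the server minimizing
# its cost given current loads) and an allocation step (each server
# splits capacity among its clients in proportion to demand).

def alternating_solve(demand, capacity, link_cost, iters=50):
    """Returns (assoc, alloc): server index and capacity share per client."""
    n, m = len(demand), len(capacity)
    assoc = [0] * n                       # start: everyone on server 0
    for _ in range(iters):
        changed = False
        for i in range(n):
            # Load each server would carry, excluding client i itself.
            load = [sum(demand[j] for j in range(n)
                        if j != i and assoc[j] == s) for s in range(m)]
            # Cost of joining server s: link cost + congestion term.
            best = min(range(m), key=lambda s:
                       link_cost[i][s] + (load[s] + demand[i]) / capacity[s])
            if best != assoc[i]:
                assoc[i], changed = best, True
        if not changed:                   # converged: no client moved
            break
    # Allocation step for the final association.
    alloc = [capacity[assoc[i]] * demand[i] /
             sum(demand[j] for j in range(n) if assoc[j] == assoc[i])
             for i in range(n)]
    return assoc, alloc

assoc, alloc = alternating_solve(demand=[2, 2, 2, 2], capacity=[4, 4],
                                 link_cost=[[0, 0]] * 4)
```

With equal demands, equal capacities, and zero link costs, the loop converges to a balanced split: two clients per server, each receiving half its server's capacity.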
6G wireless systems : a vision, architectural elements, and future directions
- Khan, Latif, Yaqoob, Ibrar, Imran, Muhammad, Han, Zhu, Hong, Choong
- Authors: Khan, Latif , Yaqoob, Ibrar , Imran, Muhammad , Han, Zhu , Hong, Choong
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 147029-147044
- Full Text:
- Reviewed:
- Description: Internet of everything (IoE)-based smart services are expected to gain immense popularity in the future, which raises the need for next-generation wireless networks. Although fifth-generation (5G) networks can support various IoE services, they might not be able to completely fulfill the requirements of novel applications. Sixth-generation (6G) wireless systems are envisioned to overcome 5G network limitations. In this article, we explore recent advances made toward enabling 6G systems. We devise a taxonomy based on key enabling technologies, use cases, emerging machine learning schemes, communication technologies, networking technologies, and computing technologies. Furthermore, we identify and discuss open research challenges, such as artificial-intelligence-based adaptive transceivers, intelligent wireless energy harvesting, decentralized and secure business models, intelligent cell-less architecture, and distributed security models. We propose practical guidelines including deep Q-learning and federated learning-based transceivers, blockchain-based secure business models, homomorphic encryption, and distributed-ledger-based authentication schemes to cope with these challenges. Finally, we outline and recommend several future directions. © 2013 IEEE.
Blending big data analytics : review on challenges and a recent study
- Amalina, Fairuz, Targio Hashem, Ibrahim, Azizul, Zati, Fong, Ang, Imran, Muhammad
- Authors: Amalina, Fairuz , Targio Hashem, Ibrahim , Azizul, Zati , Fong, Ang , Imran, Muhammad
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8, no. (2020), p. 3629-3645
- Full Text:
- Reviewed:
- Description: With the collection of massive amounts of data every day, big data analytics has emerged as an important trend for many organizations. These collected data can contain important information that may be key to solving wide-ranging problems, such as cyber security, marketing, healthcare, and fraud. To analyze their large volumes of data for business analyses and decisions, large companies, such as Facebook and Google, adopt analytics. Such analyses and decisions impact existing and future technology. In this paper, we explore how big data analytics is utilized as a technique for solving problems of complex and unstructured data using such technologies as Hadoop, Spark, and MapReduce. We also discuss the data challenges introduced by big data according to the literature, including its six V's. Moreover, we investigate case studies of big data analytics on various techniques of such analytics, namely, text, voice, video, and network analytics. We conclude that big data analytics can bring positive changes in many fields, such as education, military, healthcare, politics, business, agriculture, banking, and marketing, in the future. © 2013 IEEE.
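The Hadoop/MapReduce style of analytics the review covers can be shown in miniature: map emits (word, 1) pairs, shuffle groups them by key, reduce sums each group. An in-process word count, with made-up documents standing in for a distributed input:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts in each group.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data analytics", "big data challenges"]
counts = reduce_phase(shuffle(map_phase(docs)))
```

In a real Hadoop or Spark job the same three stages run in parallel across machines; the in-process version only illustrates the data flow.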
A deep learning model based on concatenation approach for the diagnosis of brain tumor
- Noreen, Neelum, Palaniappan, Sellappan, Qayyum, Abdul, Ahmad, Iftikhar, Imran, Muhammad, Shoaib, Muhammad
- Authors: Noreen, Neelum , Palaniappan, Sellappan , Qayyum, Abdul , Ahmad, Iftikhar , Imran, Muhammad , Shoaib, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 55135-55144
- Full Text:
- Reviewed:
- Description: Brain tumor is a deadly disease, and its classification is a challenging task for radiologists because of the heterogeneous nature of tumor cells. Recently, computer-aided diagnosis-based systems have shown promise, as an assistive technology, for diagnosing brain tumors through magnetic resonance imaging (MRI). In recent applications of pre-trained models, features are normally extracted from bottom layers, which differ between natural and medical images. To overcome this problem, this study proposes a method of multi-level feature extraction and concatenation for early diagnosis of brain tumor. Two pre-trained deep learning models, i.e., Inception-v3 and DenseNet201, make this method possible. With the help of these two models, two different scenarios of brain tumor detection and classification were evaluated. First, features from different Inception modules were extracted from the pre-trained Inception-v3 model and concatenated for brain tumor classification. Then, these features were passed to a softmax classifier to classify the brain tumor. Second, pre-trained DenseNet201 was used to extract features from various DenseNet blocks. Then, these features were concatenated and passed to a softmax classifier to classify the brain tumor. Both scenarios were evaluated with the help of a publicly available three-class brain tumor dataset. The proposed method produced 99.34% and 99.51% testing accuracies, respectively, with Inception-v3 and DenseNet201 on testing samples and achieved the highest performance in the detection of brain tumor. As the results indicate, the proposed method based on feature concatenation using pre-trained models outperformed existing state-of-the-art deep learning and machine learning based methods for brain tumor classification. © 2013 IEEE.
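The extract-concatenate-softmax pipeline described above can be sketched without any deep learning framework. Everything here is a stand-in: `fake_block_features` replaces the real Inception-v3/DenseNet201 block outputs with random vectors, and the classifier weights are placeholder values, so only the data flow matches the paper.

```python
import math
import random

random.seed(0)

def fake_block_features(image, dim):
    # Placeholder for features from one pre-trained backbone block;
    # a real pipeline would run the image through Inception/DenseNet.
    return [random.random() for _ in range(dim)]

def extract_multilevel(image, dims=(8, 16, 32)):
    # Multi-level extraction: one feature vector per block, concatenated.
    feats = []
    for d in dims:
        feats.extend(fake_block_features(image, d))
    return feats

def softmax(logits):
    m = max(logits)                      # shift for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(features, weights):
    # Linear head: one weight row per tumor class, then softmax.
    logits = [sum(w * f for w, f in zip(row, features)) for row in weights]
    return softmax(logits)

features = extract_multilevel("scan.png")
weights = [[0.01] * len(features), [0.02] * len(features),
           [0.03] * len(features)]      # 3 classes, placeholder weights
probs = classify(features, weights)
```

The concatenated vector has length 8 + 16 + 32 = 56, and the softmax output is a probability distribution over the three tumor classes.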