- Li, Zilin, Hu, Jiefeng, Chan, Ka Wing
- Authors: Li, Zilin , Hu, Jiefeng , Chan, Ka Wing
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Industry Applications Vol. 57, no. 6 (2021), p. 6362-6374
- Full Text: false
- Reviewed:
- Description: Unlike a synchronous generator, which can withstand a large overcurrent, an inverter-based distributed generation (DG) unit has low thermal inertia, and the inverter is likely to be damaged by overcurrents during grid faults. In this article, a new strategy, namely positive- and negative-sequence limiting with stability-enhanced P-f droop control (PNSL-SEPFC), is proposed to limit the output currents and active power of droop-controlled inverters in islanded microgrids. This strategy is easy to implement in the inverter controller and does not require any fault detection. Inverter stability is analyzed mathematically, which gives guidelines for designing the parameters of the PNSL-SEPFC strategy. PSCAD/EMTDC simulation of a four-DG microgrid shows that the proposed PNSL-SEPFC can limit inverter output currents and powers with good performance under both symmetrical and asymmetrical faults. Furthermore, hardware experiments demonstrate that the proposed PNSL-SEPFC enables the inverters to ride through grid faults safely and stably. (A video of experimental waveforms is attached.) © 1972-2012 IEEE.
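The core protective action the abstract describes, keeping inverter current within its rating during a fault, can be illustrated with a minimal magnitude clamp on a current reference. This is only a generic sketch, not the PNSL-SEPFC algorithm; the function name and the dq-frame representation are assumptions.

```python
import math

def limit_current_reference(id_ref, iq_ref, i_max):
    """Clamp a dq-frame current reference to the inverter's rated magnitude.

    A generic saturation step common to inverter current-limiting schemes;
    illustrative only, not the PNSL-SEPFC strategy itself.
    """
    mag = math.hypot(id_ref, iq_ref)
    if mag <= i_max:
        return id_ref, iq_ref
    scale = i_max / mag  # shrink the current vector, preserving its angle
    return id_ref * scale, iq_ref * scale
```

Scaling both components by the same factor keeps the current phase angle unchanged while bounding the magnitude, which is why this style of clamp does not disturb the control direction.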
Performance analysis of priority-based IEEE 802.15.6 protocol in saturated traffic conditions
- Ullah, Sana, Tovar, Eduardo, Kim, Ki, Kim, Kyong, Imran, Muhammad
- Authors: Ullah, Sana , Tovar, Eduardo , Kim, Ki , Kim, Kyong , Imran, Muhammad
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6, no. (2018), p. 66198-66209
- Full Text:
- Reviewed:
- Description: Recent advancements in the Internet of Medical Things have enabled the deployment of miniaturized, intelligent, and low-power medical devices in, on, or around the human body for unobtrusive and remote health monitoring. The IEEE 802.15.6 standard facilitates such monitoring by enabling low-power and reliable wireless communication between the medical devices. The standard employs a carrier sense multiple access with collision avoidance (CSMA/CA) protocol for resource allocation and utilizes a priority-based backoff procedure that adjusts the contention window bounds of devices according to user requirements. As the performance of this protocol degrades considerably as the number of devices increases, we propose an accurate analytical model to estimate the saturation throughput, mean energy consumption, and mean delay as functions of the number of devices. We assume an error-prone channel with saturated traffic conditions. We determine the optimal performance bounds for a fixed number of devices in different priority classes with different values of bit error ratio. We conclude that high-priority devices obtain quicker and more reliable access to the error-prone channel than low-priority devices. The proposed model is validated through extensive simulations. The performance bounds obtained in our analysis can be used to understand the tradeoffs between different priority levels and network performance. © 2018 IEEE.
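The priority-dependent backoff the abstract refers to can be sketched as below. The contention-window bounds per user priority are illustrative assumptions (the standard defines its own table), and doubling the window after every second consecutive failure is a simplification of the 802.15.6 procedure.

```python
import random

# Illustrative (CWmin, CWmax) per user priority; 7 = highest priority.
# These follow the general shape of the 802.15.6 bounds but are assumed values.
CW_BOUNDS = {0: (16, 64), 3: (8, 32), 5: (4, 16), 7: (1, 4)}

def next_backoff(priority, n_failures, rng=random):
    """Draw a backoff counter for a device of the given user priority.

    The contention window starts at CWmin and, following the 802.15.6 rule,
    doubles after every second consecutive transmission failure, capped at CWmax.
    Smaller windows for high-priority devices mean quicker channel access.
    """
    cw_min, cw_max = CW_BOUNDS[priority]
    cw = min(cw_min * (2 ** (n_failures // 2)), cw_max)
    return rng.randint(1, cw)
```

The analytical model in the paper essentially characterizes what this random process yields in aggregate: high-priority devices draw from smaller windows, so they win contention more often under saturation.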
Extending the technology acceptance model for use of e-learning systems by digital learners
- Hanif, Aamer, Jamal, Faheem, Imran, Muhammad
- Authors: Hanif, Aamer , Jamal, Faheem , Imran, Muhammad
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6, no. (2018), p. 73395-73404
- Full Text:
- Reviewed:
- Description: Technology-based learning systems enable enhanced student learning in higher-education institutions. This paper evaluates the factors affecting behavioral intention of students toward using e-learning systems in universities to augment classroom learning. Based on the technology acceptance model, this paper proposes six external factors that influence the behavioral intention of students toward use of e-learning. A quantitative approach involving structural equation modeling is adopted, and research data collected from 437 undergraduate students enrolled in three academic programs is used for analysis. Results indicate that subjective norm, perception of external control, system accessibility, enjoyment, and result demonstrability have a significant positive influence on perceived usefulness and on perceived ease of use of the e-learning system. This paper also examines the relevance of some previously used external variables, e.g., self-efficacy, experience, and computer anxiety, for present-world students who have been brought up as digital learners and have higher levels of computer literacy and experience. © 2018 IEEE.
Technology-assisted decision support system for efficient water utilization : a real-time testbed for irrigation using wireless sensor networks
- Khan, Rahim, Ali, Ihsan, Zakarya, Muhammad, Ahmad, Mushtaq, Imran, Muhammad, Shoaib, Muhammad
- Authors: Khan, Rahim , Ali, Ihsan , Zakarya, Muhammad , Ahmad, Mushtaq , Imran, Muhammad , Shoaib, Muhammad
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6, no. (2018), p. 25686-25697
- Full Text:
- Reviewed:
- Description: Scientific organizations and researchers are eager to apply recent technological advancements, such as sensors and actuators, in different application areas, including environmental monitoring, intelligent buildings, and precision agriculture. Technology-assisted irrigation for agriculture is a major research innovation that eases the work of farmers and prevents water wastage. Wireless sensor networks (WSNs) comprise sensor nodes that directly interact with the physical environment and provide real-time data useful in identifying regions in need, particularly in agricultural fields. This paper presents an efficient methodology that employs a WSN as a data collection tool together with a decision support system (DSS). The proposed DSS can assist farmers in their manual irrigation procedures or automate irrigation activities. Water-deficient sites in both scenarios are identified by using soil moisture and environmental data sensors. However, the proposed system's accuracy is directly proportional to the accuracy of the dynamic data generated by the deployed WSN. A simplified outlier-detection algorithm is thus presented and integrated with the proposed DSS to fine-tune the collected data prior to processing. The complexity of the algorithm is O(1) for dynamic datasets generated by sensor nodes and O(n) for static datasets. Different issues in technology-assisted irrigation management and their solutions are also addressed. © 2013 IEEE.
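The O(1)-per-reading outlier screening mentioned in the abstract can be illustrated with a generic running-statistics filter (Welford's method). This is a sketch of constant-time screening of streaming sensor data, not the paper's specific algorithm.

```python
class StreamingOutlierFilter:
    """Constant-time-per-sample outlier screen for streaming sensor readings.

    A generic illustration of O(1)-per-reading screening using a running
    mean/variance (Welford's method); not the algorithm from the paper.
    """

    def __init__(self, z_threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean
        self.z = z_threshold

    def is_outlier(self, x):
        if self.n >= 2:
            std = (self.m2 / (self.n - 1)) ** 0.5
            if std > 0 and abs(x - self.mean) > self.z * std:
                return True  # flag the reading; leave statistics untouched
        # accept the reading and update the running statistics in O(1)
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return False
```

Each reading costs a fixed handful of arithmetic operations regardless of how many samples preceded it, which is the property the abstract's O(1) claim for dynamic datasets describes.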
Co-EEORS : cooperative energy efficient optimal relay selection protocol for underwater wireless sensor networks
- Khan, Anwar, Ali, Ihsan, Rahman, Atiq, Imran, Muhammad, Amin, Fazal, Mahmood, Hasan
- Authors: Khan, Anwar , Ali, Ihsan , Rahman, Atiq , Imran, Muhammad , Amin, Fazal , Mahmood, Hasan
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6, no. (2018), p. 28777-28789
- Full Text:
- Reviewed:
- Description: Cooperative routing mitigates the adverse channel effects in the harsh underwater environment and ensures reliable delivery of packets from the bottom to the surface of the water. Cooperative routing is analogous to sparse recovery in that faded copies of data packets are processed by the destination node to extract the desired information. However, it usually requires information about two or three position coordinates of the nodes, as well as synchronization of the source, relay, and destination nodes. These features make cooperative routing a challenging task as sensor nodes move with water currents. Moreover, data packets are simply discarded if the acceptable threshold is not met at the destination, which threatens reliable delivery of data to the final destination. To cope with these challenges, this paper proposes a cooperative energy-efficient optimal relay selection protocol for underwater wireless sensor networks. Unlike existing routing protocols involving cooperation, the proposed scheme combines the location and depth of the sensor nodes to select the destination nodes. Combining these two parameters does not require knowing the position coordinates of the nodes and results in the selection of the destination nodes closest to the water surface. As a result, data packets are less affected by the channel properties. In addition, a source node chooses a relay node and a destination node, and data packets are forwarded to the destination node as soon as the relay node receives them. This eliminates the need for synchronization among the source, relay, and destination nodes. Moreover, the destination node acknowledges successful reception to the source node or requests retransmission of the data packets, which prevents packet drops. Based on simulation results, the proposed scheme is superior to some existing techniques in delivering packets to the final destination. © 2013 IEEE.
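The destination-selection idea in the abstract, preferring neighbors closest to the water surface, might be sketched as follows. The `(node_id, depth)` record format is an assumption, and the real protocol also weighs location information and link conditions.

```python
def select_destination_and_relay(neighbors):
    """Pick a destination and a relay from a source node's neighbor table.

    Illustrative reading of the scheme's idea: rank neighbors by depth so the
    node closest to the water surface (smallest depth) becomes the destination
    and the next-shallowest becomes the relay. Node records are assumed to be
    (node_id, depth_in_metres) pairs; this is a sketch, not the full protocol.
    """
    ranked = sorted(neighbors, key=lambda n: n[1])  # shallowest first
    if len(ranked) < 2:
        raise ValueError("cooperation needs at least two neighbors")
    destination, relay = ranked[0], ranked[1]
    return destination, relay
```

Because only relative depth matters here, no node needs its absolute position coordinates, which mirrors the abstract's point that combining depth with coarse location avoids full localization.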
Blind detection of copy-move forgery in digital audio forensics
- Imran, Muhammad, Ali, Zulfiqar, Bakhsh, Sheikh, Akram, Sheeraz
- Authors: Imran, Muhammad , Ali, Zulfiqar , Bakhsh, Sheikh , Akram, Sheeraz
- Date: 2017
- Type: Text , Journal article
- Relation: IEEE Access Vol. 5, no. (2017), p. 12843-12855
- Full Text:
- Reviewed:
- Description: Although copy-move forgery is one of the most common fabrication techniques, blind detection of such tampering in digital audio is mostly unexplored. Unlike active techniques, blind forgery detection is challenging because it cannot rely on a watermark or signature embedded in the audio, which is unavailable in most real-life scenarios. Forgery localization therefore becomes even more challenging with blind methods. In this paper, we propose a novel method for blind detection and localization of copy-move forgery. One of the most crucial steps in the proposed method is a voice activity detection (VAD) module for investigating audio recordings to detect and localize the forgery. The VAD module is equally vital for the development of the copy-move forgery database, wherein audio samples are generated by using recordings from various types of microphones. We employ chaos theory to copy and move the text in the generated forged recordings to ensure that forgery can be localized at any place in a recording. The VAD module is responsible for the extraction of words in a forged audio; these words are analyzed by applying a 1-D local binary pattern (LBP) operator, which represents the extracted words as histograms of patterns. The forged parts (copied and moved text) have similar histograms. An accuracy of 96.59% is achieved, and the proposed method is robust against noise. © 2013 IEEE.
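The histogram comparison the abstract describes can be sketched with a minimal, hypothetical variant of a 1-D LBP operator; the neighborhood radius and thresholding rule here are assumptions, not the paper's exact operator.

```python
def lbp_1d(signal, radius=2):
    """Compute 1-D local binary pattern codes for an audio-like sequence.

    Each sample is compared with `radius` neighbours on each side; a neighbour
    >= the centre sets one bit of the code. A hypothetical, minimal variant of
    the 1-D LBP operator, for illustration only.
    """
    codes = []
    for i in range(radius, len(signal) - radius):
        code = 0
        bit = 0
        for off in list(range(-radius, 0)) + list(range(1, radius + 1)):
            if signal[i + off] >= signal[i]:
                code |= 1 << bit
            bit += 1
        codes.append(code)
    return codes

def histogram(codes, n_bins=16):
    """Normalised histogram of LBP codes (radius=2 gives 4-bit codes 0..15)."""
    h = [0] * n_bins
    for c in codes:
        h[c] += 1
    total = sum(h) or 1
    return [v / total for v in h]
```

A copy-moved word and its duplicate produce (near-)identical histograms, so comparing histograms of the words extracted by the VAD module localizes the forged pair; distinct words generally yield different histograms.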
Enhancing quality-of-service conditions using a cross-layer paradigm for ad-hoc vehicular communication
- Rehman, Sabih, Arif Khan, M., Imran, Muhammad, Zia, Tanveer, Iftikhar, Mohsin
- Authors: Rehman, Sabih , Arif Khan, M. , Imran, Muhammad , Zia, Tanveer , Iftikhar, Mohsin
- Date: 2017
- Type: Text , Journal article
- Relation: IEEE Access Vol. 5, no. (2017), p. 12404-12416
- Full Text:
- Reviewed:
- Description: The Internet of Vehicles (IoV) is an emerging paradigm aiming to introduce a plethora of innovative applications and services that impose certain quality-of-service (QoS) requirements. The IoV mainly relies on vehicular ad-hoc networks (VANETs) for autonomous inter-vehicle communication and road-traffic safety management. With the ever-increasing demand to design new and emerging applications for VANETs, one challenge that continues to stand out is the provision of acceptable QoS to particular user applications. Most existing solutions to this challenge rely on a single layer of the protocol stack. This paper presents a cross-layer decision-based routing protocol that chooses the best multi-hop path for packet delivery to meet acceptable QoS requirements. The proposed protocol acquires information about the channel rate from the physical layer and incorporates this information in decision making while directing traffic at the network layer. Key performance metrics for the system design are analyzed using extensive experimental simulation scenarios. In addition, three data-rate-variant solutions are proposed to cater for various application-specific requirements in highway and urban environments. © 2013 IEEE.
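The cross-layer decision the abstract describes, feeding physical-layer channel rates into the network-layer path choice, can be reduced to a bottleneck comparison. This is a hedged sketch: per-hop rates are assumed to be plain numbers (e.g., in Mb/s), whereas the actual protocol folds more state into the decision.

```python
def best_path(paths):
    """Choose the multi-hop path with the best bottleneck channel rate.

    Each candidate path is a list of per-hop channel rates reported by the
    physical layer; the path whose slowest hop is fastest is selected. A
    minimal illustration of the cross-layer idea, not the full protocol.
    """
    return max(paths, key=lambda hops: min(hops))
```

Using the minimum hop rate as the path metric captures the fact that end-to-end throughput is limited by the slowest link, which is why a long path of fast hops can beat a short path with one slow hop.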
Impact of node deployment and routing for protection of critical infrastructures
- Subhan, Fazli, Noreen, Madiha, Imran, Muhammad, Tariq, Moeenuddin, Khan, Asfandyar, Shoaib, Muhammad
- Authors: Subhan, Fazli , Noreen, Madiha , Imran, Muhammad , Tariq, Moeenuddin , Khan, Asfandyar , Shoaib, Muhammad
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Access Vol. 7, no. (2019), p. 11502-11514
- Full Text:
- Reviewed:
- Description: Recently, linear wireless sensor networks (LWSNs) have been eliciting increasing attention because of their suitability for applications such as the protection of critical infrastructures. Most of these applications require an LWSN to remain operational for a long period. However, the non-replenishable, limited battery power of sensor nodes does not allow them to meet these expectations; a short network lifetime is therefore one of the most prominent barriers to large-scale deployment of LWSNs. Unlike most existing studies, in this paper, we analyze the impact of node placement and clustering on LWSN network lifetime. First, we categorize and classify existing node placement and clustering schemes for LWSNs and introduce various topologies for disparate applications. Then, we highlight the peculiarities of LWSN applications, discuss their unique characteristics, and describe several application domains of LWSNs. We present three node placement strategies (i.e., linear sequential, linear parallel, and grid) and various deployment methods, such as random, uniform, decreasing distance, and triangular. Extensive simulation experiments are conducted to analyze the performance of three state-of-the-art routing protocols in the context of these node deployment strategies and methods. The experimental results demonstrate that the node deployment strategies and methods significantly affect LWSN lifetime. © 2013 IEEE.
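Two of the deployment methods the abstract lists, uniform and decreasing distance, can be made concrete by generating node positions along a line. The geometric spacing ratio below is an illustrative assumption, not a value taken from the paper.

```python
def uniform_positions(length, n):
    """Place n nodes evenly along a linear segment of the given length."""
    step = length / n
    return [round(step * (i + 1), 3) for i in range(n)]

def decreasing_distance_positions(length, n, ratio=0.8):
    """Place n nodes so inter-node spacing shrinks toward the sink at x=length.

    Spacings form a geometric series (each gap is `ratio` times the previous),
    so nodes that relay more traffic toward the sink sit closer together and
    spend less energy per hop. The ratio is an assumed illustrative value.
    """
    weights = [ratio ** i for i in range(n)]
    scale = length / sum(weights)
    pos, x = [], 0.0
    for w in weights:
        x += w * scale
        pos.append(round(x, 3))
    return pos
```

Comparing lifetimes under such placements against a fixed routing protocol is essentially the experiment design the abstract describes.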
Efficient brain tumor segmentation with multiscale two-pathway-group conventional neural networks
- Razzak, Muhammad, Imran, Muhammad, Xu, Guandong
- Authors: Razzak, Muhammad , Imran, Muhammad , Xu, Guandong
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Journal of Biomedical and Health Informatics Vol. 23, no. 5 (2019), p. 1911-1919
- Full Text:
- Reviewed:
- Description: Manual segmentation of brain tumors for cancer diagnosis from MRI images is a difficult, tedious, and time-consuming task. The accuracy and robustness of brain tumor segmentation, therefore, are crucial for diagnosis, treatment planning, and treatment outcome evaluation. Most automatic brain tumor segmentation methods use hand-designed features. Similarly, traditional deep learning methods such as convolutional neural networks (CNNs) require a large amount of annotated data to learn from, which is often difficult to obtain in the medical domain. Here, we describe a new model, a two-pathway-group CNN architecture for brain tumor segmentation, which exploits local features and global contextual features simultaneously. This model enforces equivariance in the two-pathway CNN model to reduce instabilities and overfitting through parameter sharing. Finally, we embed the cascade architecture into the two-pathway-group CNN, in which the output of a basic CNN is treated as an additional source and concatenated at the last layer. Validation of the model on the BRATS2013 and BRATS2015 data sets revealed that embedding a group CNN into a two-pathway architecture improved the overall performance over the currently published state of the art while computational complexity remains attractive. © 2013 IEEE.
Exact string matching algorithms : survey, issues, and future research directions
- Hakak, Saqib, Kamsin, Amirrudin, Shivakumara, Palaiahnakote, Gilkar, Gulshan, Khan, Wazir, Imran, Muhammad
- Authors: Hakak, Saqib , Kamsin, Amirrudin , Shivakumara, Palaiahnakote , Gilkar, Gulshan , Khan, Wazir , Imran, Muhammad
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Access Vol. 7, no. (2019), p. 69614-69637
- Full Text:
- Reviewed:
- Description: String matching has been an extensively studied research domain over the past two decades due to its various applications in the fields of text, image, signal, and speech processing. As a result, choosing an appropriate string matching algorithm for current applications and addressing its challenges is difficult, as is understanding the different string matching approaches (such as exact and approximate string matching algorithms), integrating several algorithms, and modifying algorithms to address related issues. This paper presents a survey on single-pattern exact string matching algorithms. The main purpose of this survey is to propose a new classification, identify new directions, and highlight the possible challenges, current trends, and future work in the area of string matching algorithms, with a core focus on exact string matching algorithms. © 2013 IEEE.
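One classical single-pattern exact matcher of the kind the survey classifies is Boyer-Moore-Horspool, sketched below: a bad-character shift table lets the search skip ahead by up to the pattern length after a mismatch.

```python
def horspool_search(text, pattern):
    """Find all occurrences of pattern in text with Boyer-Moore-Horspool.

    The shift for a character is the distance from its last occurrence in the
    pattern (excluding the final position) to the pattern's end; characters
    absent from the pattern allow a full-length skip.
    """
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    shift = {c: m for c in set(text)}      # default: skip the whole pattern
    for i, c in enumerate(pattern[:-1]):
        shift[c] = m - 1 - i               # distance to the pattern's end
    hits, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            hits.append(i)
        # advance by the shift of the text character under the pattern's end
        i += shift.get(text[i + m - 1], m)
    return hits
```

The sublinear average-case behavior of such algorithms, versus the guaranteed worst cases of automaton-based matchers, is exactly the kind of tradeoff the survey's classification is meant to expose.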
Emergency message dissemination schemes based on congestion avoidance in VANET and vehicular FoG computing
- Ullah, Ata, Yaqoob, Shumayla, Imran, Muhammad, Ning, Huansheng
- Authors: Ullah, Ata , Yaqoob, Shumayla , Imran, Muhammad , Ning, Huansheng
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Access Vol. 7, no. (2019), p. 1570-1585
- Full Text:
- Reviewed:
- Description: With the rapid growth in connected vehicles, FoG-assisted vehicular ad hoc network (VANET) is an emerging and novel field of research. For information sharing, a number of messages are exchanged in various applications, including traffic monitoring and area-specific live weather and social aspects monitoring. It is quite challenging where vehicles' speed, direction, and density of neighbors on the move are not consistent. In this scenario, congestion avoidance is also quite challenging to avoid communication loss during busy hours or in emergency cases. This paper presents emergency message dissemination schemes that are based on congestion avoidance scenarios in VANET and vehicular FoG computing. In a similar vein, a FoG-assisted VANET architecture is explored that can efficiently manage message congestion scenarios. We present a taxonomy of schemes that address message congestion avoidance. Next, we include a comparison of congestion avoidance schemes to highlight their strengths and weaknesses. We also identify that FoG servers help to reduce accessibility delays and congestion as compared to directly approaching the cloud for all requests in linkage with big data repositories. For the dependable applicability of FoG in VANET, we identify a number of open research challenges. © 2013 IEEE.
MESH : a flexible manifold-embedded semantic hashing for cross-modal retrieval
- Zhong, Fangming, Wang, Guangze, Chen, Zhikui, Xia, Feng
- Authors: Zhong, Fangming , Wang, Guangze , Chen, Zhikui , Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 147569-147579
- Full Text:
- Reviewed:
- Description: Hashing-based methods for cross-modal retrieval have been widely explored in recent years. However, most of them mainly focus on the preservation of neighborhood relationships and label consistency, while ignoring the proximity of neighbors and the proximity of classes, which degrades the discrimination of hash codes. Moreover, most of them learn hash codes and hashing functions simultaneously, which limits the flexibility of the algorithms. To address these issues, in this article, we propose a two-step cross-modal retrieval method named Manifold-Embedded Semantic Hashing (MESH). It exploits Local Linear Embedding to model the neighborhood proximity and uses class semantic embeddings to consider the proximity of classes. By so doing, MESH can not only extract the manifold structure in different modalities, but also embed the class semantic information into hash codes to further improve the discrimination of learned hash codes. Moreover, the two-step scheme makes MESH flexible to various hashing functions. Extensive experimental results on three datasets show that MESH is superior to 10 state-of-the-art cross-modal hashing methods. Moreover, MESH also demonstrates superiority on deep features compared with the deep cross-modal hashing method. © 2013 IEEE.
Resource optimized federated learning-enabled cognitive internet of things for smart industries
- Khan, Latif, Alsenwi, Madyan, Yaqoob, Ibrar, Imran, Muhammad, Han, Zhu, Hong, Choong
- Authors: Khan, Latif , Alsenwi, Madyan , Yaqoob, Ibrar , Imran, Muhammad , Han, Zhu , Hong, Choong
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 168854-168864
- Full Text:
- Reviewed:
- Description: Leveraging the cognitive Internet of things (C-IoT), emerging computing technologies, and machine learning schemes for industries can assist in streamlining manufacturing processes, revolutionizing operational analytics, and maintaining factory efficiency. However, further adoption of centralized machine learning in industries seems to be restricted due to data privacy issues. Federated learning has the potential to bring about predictive features in industrial systems without leaking private information. However, its implementation involves key challenges including resource optimization, robustness, and security. In this article, we propose a novel dispersed federated learning (DFL) framework to provide resource optimization, whereby the distributed fashion of learning offers robustness. We formulate an integer linear optimization problem to minimize the overall federated learning cost for the DFL framework. To solve the formulated problem, first, we decompose it into two sub-problems: an association problem and a resource allocation problem. Second, we relax the association and resource allocation sub-problems to make them convex optimization problems. Later, we use a rounding technique to obtain binary association and resource allocation variables. Our proposed algorithm works in an iterative manner by fixing one problem variable (for example, association) and computing the other (for example, resource allocation). The iterative algorithm continues until convergence of the formulated cost optimization problem. Furthermore, we compare the proposed DFL with two schemes, namely random resource allocation and random association. Numerical results show the superiority of the proposed DFL scheme. © 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
6G wireless systems : a vision, architectural elements, and future directions
- Khan, Latif, Yaqoob, Ibrar, Imran, Muhammad, Han, Zhu, Hong, Choong
- Authors: Khan, Latif , Yaqoob, Ibrar , Imran, Muhammad , Han, Zhu , Hong, Choong
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 147029-147044
- Full Text:
- Reviewed:
- Description: Internet of everything (IoE)-based smart services are expected to gain immense popularity in the future, which raises the need for next-generation wireless networks. Although fifth-generation (5G) networks can support various IoE services, they might not be able to completely fulfill the requirements of novel applications. Sixth-generation (6G) wireless systems are envisioned to overcome 5G network limitations. In this article, we explore recent advances made toward enabling 6G systems. We devise a taxonomy based on key enabling technologies, use cases, emerging machine learning schemes, communication technologies, networking technologies, and computing technologies. Furthermore, we identify and discuss open research challenges, such as artificial-intelligence-based adaptive transceivers, intelligent wireless energy harvesting, decentralized and secure business models, intelligent cell-less architecture, and distributed security models. We propose practical guidelines including deep Q-learning and federated learning-based transceivers, blockchain-based secure business models, homomorphic encryption, and distributed-ledger-based authentication schemes to cope with these challenges. Finally, we outline and recommend several future directions. © 2013 IEEE.
Exploring the dynamic voltage signature of renewable rich weak power system
- Alzahrani, S., Shah, Rakibuzzaman, Mithulananthan, N.
- Authors: Alzahrani, S. , Shah, Rakibuzzaman , Mithulananthan, N.
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 216529-216542
- Full Text:
- Reviewed:
- Description: Large-scale renewable energy-based power plants are becoming attractive technically and economically for the generation mix around the world. Nevertheless, network operation has significantly changed due to the rapid integration of renewable energy on the supply side. The integration of more renewable resources, especially inverter-based generation, deteriorates power system resilience to disturbances and substantially affects stable operations. Dynamic voltage stability becomes one of the major concerns for transmission system operators (TSOs) due to the limited capabilities of inverter-based resources (IBRs). A heavily loaded and stressed renewable-rich grid is susceptible to fault-induced delayed voltage recovery. Hence, it is crucial to examine the system response upon disturbances, to understand the voltage signature, and to determine the optimal location and sizing of grid-connected IBRs. Moreover, investigating the fault contribution mechanism of IBRs is essential in adopting additional grid support devices, control coordination, and the selection of appropriate corrective control schemes. This article utilizes a comprehensive assessment framework to assess power systems' dynamic voltage signature with large-scale PV under different realistic operating conditions. Several indices quantifying load bus voltage recovery have been used to explore the system's steady-state and transient response and voltage trajectories. The recovery indices help extricate the signature and influence of IBRs. The proposed framework's applicability is demonstrated on the New England IEEE-39 bus test system using the DIgSILENT platform. © 2013 IEEE.
Blending big data analytics : review on challenges and a recent study
- Amalina, Fairuz, Targio Hashem, Ibrahim, Azizul, Zati, Fong, Ang, Imran, Muhammad
- Authors: Amalina, Fairuz , Targio Hashem, Ibrahim , Azizul, Zati , Fong, Ang , Imran, Muhammad
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8, no. (2020), p. 3629-3645
- Full Text:
- Reviewed:
- Description: With the collection of massive amounts of data every day, big data analytics has emerged as an important trend for many organizations. These collected data can contain important information that may be key to solving wide-ranging problems, such as cyber security, marketing, healthcare, and fraud. To analyze their large volumes of data for business analyses and decisions, large companies, such as Facebook and Google, adopt analytics. Such analyses and decisions impact existing and future technology. In this paper, we explore how big data analytics is utilized as a technique for solving problems of complex and unstructured data using such technologies as Hadoop, Spark, and MapReduce. We also discuss the data challenges introduced by big data according to the literature, including its six V's. Moreover, we investigate case studies of big data analytics on various techniques of such analytics, namely, text, voice, video, and network analytics. We conclude that big data analytics can bring positive changes in many fields, such as education, military, healthcare, politics, business, agriculture, banking, and marketing, in the future. © 2013 IEEE.
A deep learning model based on concatenation approach for the diagnosis of brain tumor
- Noreen, Neelum, Palaniappan, Sellappan, Qayyum, Abdul, Ahmad, Iftikhar, Imran, Muhammad, Shoaib, Muhammad
- Authors: Noreen, Neelum , Palaniappan, Sellappan , Qayyum, Abdul , Ahmad, Iftikhar , Imran, Muhammad , Shoaib, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 55135-55144
- Full Text:
- Reviewed:
- Description: Brain tumor is a deadly disease and its classification is a challenging task for radiologists because of the heterogeneous nature of the tumor cells. Recently, computer-aided diagnosis-based systems have shown promise, as an assistive technology, in diagnosing brain tumors through magnetic resonance imaging (MRI). In recent applications of pre-trained models, features are normally extracted from bottom layers, which differ between natural images and medical images. To overcome this problem, this study proposes a method of multi-level feature extraction and concatenation for early diagnosis of brain tumor. Two pre-trained deep learning models, i.e., Inception-v3 and DensNet201, underpin this method. With the help of these two models, two different scenarios of brain tumor detection and classification were evaluated. First, features from different Inception modules were extracted from the pre-trained Inception-v3 model and concatenated for brain tumor classification. Then, these features were passed to a softmax classifier to classify the brain tumor. Second, the pre-trained DensNet201 was used to extract features from various DensNet blocks. Then, these features were concatenated and passed to a softmax classifier to classify the brain tumor. Both scenarios were evaluated with the help of a publicly available three-class brain tumor dataset. The proposed method produced 99.34% and 99.51% testing accuracies, respectively, with Inception-v3 and DensNet201 on testing samples and achieved the highest performance in the detection of brain tumor. As the results indicate, the proposed method based on feature concatenation using pre-trained models outperformed existing state-of-the-art deep learning and machine learning based methods for brain tumor classification. © 2013 IEEE.
Heterogeneity-aware task allocation in mobile ad hoc cloud
- Yaqoob, Ibrar, Ahmed, Ejaz, Gani, Abdullah, Mokhtar, Salimah, Imran, Muhammad
- Authors: Yaqoob, Ibrar , Ahmed, Ejaz , Gani, Abdullah , Mokhtar, Salimah , Imran, Muhammad
- Date: 2017
- Type: Text , Journal article
- Relation: IEEE Access Vol. 5, no. (2017), p. 1779-1795
- Full Text:
- Reviewed:
- Description: Mobile Ad Hoc Cloud (MAC) enables the use of a multitude of proximate resource-rich mobile devices to provide computational services in the vicinity. However, inattention to mobile device resources and operational heterogeneity-measuring parameters, such as CPU speed, number of cores, and workload, when allocating tasks in MAC causes inefficient resource utilization that prolongs task execution time and consumes large amounts of energy. Task execution is remarkably degraded, because the longer execution time and high energy consumption impede the optimum use of MAC. This paper aims to minimize execution time and energy consumption by proposing heterogeneity-aware task allocation solutions for MAC-based compute-intensive tasks. Results of the proposed solutions reveal that incorporation of the heterogeneity-measuring parameters guarantees a shorter execution time and reduces the energy consumption of the compute-intensive tasks in MAC. A system model is developed to validate the proposed solutions' empirical results. In comparison with random-based task allocation, the five proposed solutions based on CPU speed; number of cores; workload; CPU speed and workload; and CPU speed, cores, and workload reduce execution time by up to 56.72%, 53.12%, 56.97%, 61.23%, and 71.55%, respectively. In addition, these heterogeneity-aware task allocation solutions save energy by up to 69.78%, 69.06%, 68.25%, 67.26%, and 57.33%, respectively. For this reason, the proposed solutions significantly improve tasks' execution performance, which can increase the optimum use of MAC. © 2013 IEEE.
A new hybrid cascaded switched-capacitor reduced switch multilevel inverter for renewable sources and domestic loads
- Rezaei, Mohammad, Nayeripour, Majid, Hu, Jiefeng, Band, Shahab, Mosavi, Amir, Khooban, Mohammad-Hassan
- Authors: Rezaei, Mohammad , Nayeripour, Majid , Hu, Jiefeng , Band, Shahab , Mosavi, Amir , Khooban, Mohammad-Hassan
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Access Vol. 10, no. (2022), p. 14157-14183
- Full Text:
- Reviewed:
- Description: This multilevel inverter type synthesizes a medium-voltage output based on a series connection of power cells employing standard configurations of low-voltage components. The main problems of cascaded switched-capacitor multilevel inverters (CSCMLIs) are the harmful reverse flowing current of inductive loads, the large number of switches, and the surge current of the capacitors. As the number of switches increases, the reliability of the inverter decreases. To address these issues, a new CSCMLI is proposed using two modules containing asymmetric DC sources to generate 13 levels. The main novelty of the proposed configuration is the reduction of the number of switches while increasing the maximum output voltage. Despite the many similarities, the presented topology differs from similar topologies. Compared to similar structures, the direction of some switches is reversed, leading to a change in the direction of current flow. By incorporating the lowest number of semiconductors, it was demonstrated that the proposed inverter has the lowest cost function among similar inverters. The role of switched-capacitor inrush current in the selection of switch, diode, and DC source for inverter operation in medium and high voltage applications is presented. The inverter performance in supplying inductive loads is clarified. Comparison of the simulation and experimental results validates the effectiveness of the proposed inverter topology, showing promising potential in photovoltaic, building, and domestic applications. A video demonstrating the experimental test and all manufacturing data are attached. © 2013 IEEE.
Deep learning-based approach for detecting trajectory modifications of cassini-huygens spacecraft
- Aldabbas, Ashraf, Gal, Zoltan, Ghori, Khawaja, Imran, Muhammad, Shoaib, Muhammad
- Authors: Aldabbas, Ashraf , Gal, Zoltan , Ghori, Khawaja , Imran, Muhammad , Shoaib, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 39111-39125
- Full Text:
- Reviewed:
- Description: The Cassini spacecraft required trajectory modifications throughout the last 14 years of its interplanetary research mission. Given the roughly 1.3-hour signal propagation time over the 1.4-billion-kilometer Earth-Cassini channel, detecting complex events in the orbit modifications requires special investigation and analysis of the collected big data. The technologies for space exploration warrant a high standard of nuanced and detailed research. The Cassini mission accumulated very large volumes of science records, which motivates the use of machine learning to analyze deep space missions. For energy-saving reasons, communication between Earth and Cassini was carried out in a non-periodic mode. This paper provides a deep learning approach for detecting Cassini spacecraft trajectory modifications in post-processing mode. The proposed model exploits the ability of Long Short-Term Memory (LSTM) neural networks to extract useful features and learn the inner patterns of time series data, along with the strength of LSTM layers in capturing both long- and short-term dependencies. Our study used statistical metrics, the Matthews correlation coefficient, and the F1 score to evaluate our models. We carried out multiple tests and compared the proposed approach against several advanced models. The preliminary analysis showed that the LSTM layer provides a notable boost in detection performance. The proposed model detected 232 trajectory modifications with 99.98% accuracy over the last 13.35 years of the Cassini spacecraft's life. © 2013 IEEE.