A new global index for short term voltage stability assessment
- Alshareef, Abdulrhman, Shah, Rakibuzzaman, Mithulananthan, Nadarajah, Alzahrani, Saeed
- Authors: Alshareef, Abdulrhman , Shah, Rakibuzzaman , Mithulananthan, Nadarajah , Alzahrani, Saeed
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 36114-36124
- Full Text:
- Reviewed:
- Description: Utility-scale non-conventional generators (NCGs), such as wind and photovoltaic (PV) plants, are competitive alternatives to synchronous machines (SMs) for power generation. Higher penetration of NCGs has been implicated in several recent incidents leading up to voltage collapse in power systems, owing to the distinct characteristics of NCGs under different operating conditions. Consequently, so-called system strength has been reduced as NCG penetration rises. A number of indices have been developed to quantify system strength from the short-term voltage stability (STVS) perspective, yet none of them captures the overall performance of a power system during dynamic voltage recovery. In this paper, an improvement to one of the STVS indices, the Voltage Recovery Index (VRI), is proposed to overcome shortcomings in the original index. Moreover, the improved index is generalized to establish a new index, the system voltage recovery index (VRIsys), which quantifies STVS at the system level. The amended VRI and the proposed VRIsys are used in systematic simulations to quantify the impact and interaction of various factors that can affect system strength. The assessment was conducted using time-domain simulation with directly connected induction motors (DCIMs) and a proliferation of converter-based technologies on both the generation and load sides, namely NCGs and Variable Speed Drives (VSDs), respectively. © 2013 IEEE.
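The paper defines its own VRI and VRIsys formulas; as a rough illustration of the kind of trajectory-based recovery metric involved, the sketch below scores a post-fault voltage trajectory by its mean deviation from nominal. The formula and all parameters are hypothetical, not the authors' index.

```python
def recovery_index(times, voltages, v_nominal=1.0, t_fault_cleared=0.1):
    """Hypothetical trajectory-based recovery metric: mean per-unit
    voltage deviation from nominal after fault clearance.
    Smaller means better recovery; NOT the paper's VRI definition."""
    post = [(t, v) for t, v in zip(times, voltages) if t >= t_fault_cleared]
    if not post:
        raise ValueError("no post-clearance samples")
    return sum(abs(v - v_nominal) for _, v in post) / len(post)

# A fast-recovering bus vs. a bus with delayed voltage recovery.
t = [i * 0.05 for i in range(40)]
fast = [1.0 if x >= 0.3 else 0.5 for x in t]
slow = [1.0 if x >= 1.5 else 0.6 for x in t]
print(recovery_index(t, fast) < recovery_index(t, slow))  # True
```

A system-level analogue (in the spirit of VRIsys) would aggregate such per-bus scores across all monitored load buses.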
A novel collaborative IoD-assisted VANET approach for coverage area maximization
- Ahmed, Gamil, Sheltami, Tarek, Mahmoud, Ashraf, Imran, Muhammad, Shoaib, Muhammad
- Authors: Ahmed, Gamil , Sheltami, Tarek , Mahmoud, Ashraf , Imran, Muhammad , Shoaib, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 61211-61223
- Full Text:
- Reviewed:
- Description: The Internet of Drones (IoD) is an efficient technique that can be integrated with vehicular ad-hoc networks (VANETs): when terrestrial infrastructure is unreliable or unavailable, drones act as aerial relays for ground communications. To fully exploit the drones' flexibility, we propose a novel dynamic IoD collaborative communication approach for urban VANETs. Unlike most existing approaches, the IoD nodes are deployed dynamically, based on the current locations of ground vehicles, to mitigate the isolated vehicles that are unavoidable in conventional VANETs. To coordinate the IoD efficiently, we model its deployment as a coverage optimization problem over vehicle locations. The goal is an IoD deployment that maximizes the number of covered vehicles, i.e., minimizes the number of isolated vehicles in the target area, while also providing sufficient interconnection between IoD nodes. To this end, an improved version of a population-based meta-heuristic inspired by the food-searching behavior of bird and fish flocks, Improved Particle Swarm Optimization (IPSO), is implemented for the IoD-assisted VANET (IoDAV). The IPSO objective function jointly targets coverage, received signal quality, and IoD connectivity for optimal IoD deployment. We carry out extensive experiments based on the signal received at moving vehicles to examine the performance of IoDAV, and compare it against a baseline VANET with no IoD (NIoD) and with fixed IoD assistance (FIoD), in terms of the percentage of ground vehicles covered and the quality of the received signal. The simulation results demonstrate that IoDAV finds the optimal IoD positions over time as vehicles move, achieving better coverage and better received signal quality than the NIoD and FIoD schemes. © 2013 IEEE.
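The coverage-maximization idea behind this record can be sketched with a plain particle swarm over drone coordinates. This is a minimal textbook PSO, not the paper's IPSO: the fitness here counts covered vehicles only, whereas IPSO also rewards signal quality and IoD connectivity, and all numeric parameters are invented for illustration.

```python
import random

def coverage(drones, vehicles, radius):
    # Count vehicles within `radius` of at least one drone.
    r2 = radius ** 2
    return sum(
        any((vx - dx) ** 2 + (vy - dy) ** 2 <= r2 for dx, dy in drones)
        for vx, vy in vehicles
    )

def pso_place_drones(vehicles, n_drones=2, radius=30.0, iters=60,
                     swarm=20, area=100.0, seed=1):
    """Basic PSO over concatenated (x, y) drone coordinates."""
    rng = random.Random(seed)
    dim = 2 * n_drones
    def fit(p):
        return coverage(list(zip(p[0::2], p[1::2])), vehicles, radius)
    pos = [[rng.uniform(0, area) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=fit)
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - p[d])
                             + 1.5 * rng.random() * (gbest[d] - p[d]))
                p[d] = min(area, max(0.0, p[d] + vel[i][d]))
            if fit(p) > fit(pbest[i]):
                pbest[i] = p[:]
        gbest = max(pbest + [gbest], key=fit)
    return list(zip(gbest[0::2], gbest[1::2])), fit(gbest)

# Two vehicle clusters plus one straggler in a 100 m x 100 m area.
vehicles = [(10, 10), (15, 12), (80, 85), (85, 90), (50, 50)]
drones, covered = pso_place_drones(vehicles)
print(covered)
```

Re-running this as vehicles move gives the dynamic redeployment behavior the abstract describes.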
AI and IoT-Enabled smart exoskeleton system for rehabilitation of paralyzed people in connected communities
- Jacob, Sunil, Alagirisamy, Mukil, Xi, Chen, Balasubramanian, Venki, Srinivasan, Ram
- Authors: Jacob, Sunil , Alagirisamy, Mukil , Xi, Chen , Balasubramanian, Venki , Srinivasan, Ram
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 80340-80350
- Full Text:
- Reviewed:
- Description: In recent years, spinal cord injuries, stroke and other nervous impairments have led to an increase in the number of paralyzed patients worldwide. Rehabilitation that can aid and enhance the lives of such patients is the need of the hour, and exoskeletons have become one of the popular means of rehabilitation. Existing exoskeletons use techniques that limit adaptability, instant response and continuous control; most are also expensive, bulky, and require a high level of training. To overcome these limitations, this paper introduces an Artificial Intelligence (AI) powered smart and lightweight exoskeleton system (AI-IoT-SES) which receives data from various sensors, classifies it intelligently, and generates the desired commands via the Internet of Things (IoT) to render rehabilitation and support, with the help of caretakers, for paralyzed patients in smart and connected communities. In the proposed system, signals collected from the exoskeleton sensors are processed by an AI-assisted navigation module that helps caretakers guide, communicate with and control the movements of the exoskeleton fitted to the patient. The navigation module uses AI- and IoT-enabled Simultaneous Localization and Mapping (SLAM). Casualties are reduced by commissioning the IoT platform to exchange data between the intelligent sensors and the caretaker's remote location, so that the real-time movement and navigation of the exoskeleton can be monitored. The automated exoskeleton detects its surroundings and takes navigation decisions, thereby improving the life conditions of such patients. Experimental results simulated in MATLAB show that the proposed system is an effective method for rendering rehabilitation and support for paralyzed patients in smart communities. © 2013 IEEE.
**Please note that there are multiple authors for this article therefore only the name of the first 5 including Federation University Australia affiliate “Venki Balasubramanian” is provided in this record**
Cloudlet computing: recent advances, taxonomy, and challenges
- Babar, Mohammad, Khan, Muhammad, Ali, Farman, Imran, Muhammad, Shoaib, Muhammad
- Authors: Babar, Mohammad , Khan, Muhammad , Ali, Farman , Imran, Muhammad , Shoaib, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 29609-29622
- Full Text:
- Reviewed:
- Description: A cloudlet is an emerging computing paradigm that is designed to meet the requirements and expectations of the Internet of things (IoT) and tackle the conventional limitations of a cloud (e.g., high latency). The idea is to bring computing resources (i.e., storage and processing) to the edge of a network. This article presents a taxonomy of cloudlet applications, outlines cloudlet utilities, and describes recent advances, challenges, and future research directions. Based on the literature, a unique taxonomy of cloudlet applications is designed. Moreover, a cloudlet computation offloading application for augmenting resource-constrained IoT devices, handling compute-intensive tasks, and minimizing the energy consumption of related devices is explored. This study also highlights the viability of cloudlets to support smart systems and applications, such as augmented reality, virtual reality, and applications that require high-quality service. Finally, the role of cloudlets in emergency situations, hostile conditions, and in the technological integration of future applications and services is elaborated in detail. © 2013 IEEE.
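The computation-offloading application this record explores rests on a simple trade: offloading pays transmission energy to avoid local compute energy. The sketch below uses a linear energy model with invented coefficients; it is not taken from the article, which does not publish a specific formula in the abstract.

```python
def should_offload(cycles, data_bits, f_local_hz, p_local_w,
                   uplink_bps, p_tx_w, f_cloudlet_hz):
    """Offload when transmit energy beats local compute energy.
    Simple linear energy model; all parameters are illustrative."""
    e_local = p_local_w * (cycles / f_local_hz)      # joules computed locally
    e_offload = p_tx_w * (data_bits / uplink_bps)    # device only pays to transmit
    t_remote = cycles / f_cloudlet_hz                # remote latency, for a deadline check
    return e_offload < e_local, e_local, e_offload

# A compute-heavy task with a small payload favors the cloudlet:
offload, e_loc, e_off = should_offload(
    cycles=2e9, data_bits=1e6, f_local_hz=1e9, p_local_w=0.9,
    uplink_bps=10e6, p_tx_w=0.5, f_cloudlet_hz=8e9)
print(offload)  # True: ~0.05 J to transmit vs ~1.8 J to compute locally
```

The low-latency motivation for cloudlets over distant clouds enters through `t_remote`: a nearby cloudlet keeps the round trip short enough for interactive applications such as augmented reality.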
Deep learning-based approach for detecting trajectory modifications of Cassini-Huygens spacecraft
- Aldabbas, Ashraf, Gal, Zoltan, Ghori, Khawaja, Imran, Muhammad, Shoaib, Muhammad
- Authors: Aldabbas, Ashraf , Gal, Zoltan , Ghori, Khawaja , Imran, Muhammad , Shoaib, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 39111-39125
- Full Text:
- Reviewed:
- Description: The Cassini spacecraft required trajectory modifications during the last 14 years of its interplanetary mission. With a signal propagation time of about 1.3 hours over the 1.4-billion-kilometer Earth-Cassini channel, detecting complex events such as orbit modifications requires special investigation and analysis of the collected big data. Technologies for space exploration warrant nuanced and detailed research, and the Cassini mission accumulated huge volumes of science records; the interest here derives mainly from the need to use machine learning to analyze deep space missions. For energy-saving reasons, communication between the Earth and Cassini was executed in a non-periodic mode. This paper provides a deep learning approach for detecting Cassini spacecraft trajectory modifications in post-processing mode. The proposed model exploits the ability of Long Short-Term Memory (LSTM) neural networks to extract useful features and learn the inner patterns of time series data, along with the strength of LSTM layers in distinguishing long- and short-term dependencies. Our study used statistical rates, the Matthews correlation coefficient, and the F1 score to evaluate the models. We carried out multiple tests and evaluated the proposed approach against several advanced models. The analysis showed that the LSTM layer provides a notable boost in detection performance. The proposed model detected 232 trajectory modifications with 99.98% accuracy across the last 13.35 years of the Cassini spacecraft's life. © 2013 IEEE.
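The LSTM machinery this record relies on can be made concrete with the cell's forward step. The sketch below is the standard LSTM gating equations in NumPy with randomly initialized weights; the paper's actual layer sizes, training procedure, and detection head are not reproduced here.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order in the stacked weights: input, forget, candidate, output."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = 1 / (1 + np.exp(-z[:H]))          # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))       # forget gate (long-term memory)
    g = np.tanh(z[2*H:3*H])               # candidate cell update
    o = 1 / (1 + np.exp(-z[3*H:]))        # output gate (short-term state)
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 3, 4  # toy input and hidden sizes, chosen arbitrarily
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(10, D)):  # run a short telemetry-like sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

The forget gate `f` is what lets the cell carry information across long gaps, which matters for non-periodic telemetry like the Earth-Cassini channel.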
Efficient high-resolution video compression scheme using background and foreground layers
- Afsana, Fariha, Paul, Manoranjan, Murshed, Manzur, Taubman, David
- Authors: Afsana, Fariha , Paul, Manoranjan , Murshed, Manzur , Taubman, David
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 157411-157421
- Full Text:
- Reviewed:
- Description: Video coding using a dynamic background frame achieves better compression than traditional techniques by encoding the background and foreground separately. This significantly reduces the coding bits for the overall frame; however, encoding the background still requires many bits, which can be compressed further for better coding efficiency. The cuboid coding framework has proven to be one of the most effective methods of image compression: it exploits homogeneous pixel correlation within a frame and aligns better with object boundaries than traditional block-based coding. In a video sequence, the cuboid-based frame partitioning varies with changes in the foreground, but since the background remains static for a group of pictures, cuboid coding exploits its spatial pixel homogeneity particularly well. In this work, the impact of cuboid coding on the background frame of high-resolution videos (Ultra-High-Definition (UHD) and 360-degree videos) is investigated using the multilayer framework of SHVC. After cuboid partitioning, the method of coarse frame generation is improved with a novel idea: preserving the information to which human vision is most sensitive. Unlike the traditional SHVC scheme, the proposed method encodes the cuboid-coded background and the foreground in separate layers in an implicit manner. Simulation results show that the proposed video coding method achieves an average BD-Rate reduction of 26.69% and a BD-PSNR gain of 1.51 dB against SHVC, with a significant reduction in encoding time for both UHD and 360-degree videos. It also achieves an average 13.88% BD-Rate reduction and 0.78 dB BD-PSNR gain compared to the existing relevant method proposed by X. Hoang Van. © 2013 IEEE.
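The core idea of homogeneity-driven rectangular partitioning can be sketched as a recursive split: keep a region whole while it is nearly uniform, otherwise bisect it along its longer side. This greedy midpoint scheme is a simplified stand-in for the cuboid coding framework, whose actual split selection is more sophisticated; the tolerance value is arbitrary.

```python
import numpy as np

def cuboid_partition(block, tol=5.0):
    """Recursively split a frame region into rectangles ("cuboids" in 2D)
    until each is nearly homogeneous (std <= tol) or a single pixel."""
    def rec(r0, r1, c0, c1, out):
        region = block[r0:r1, c0:c1]
        if region.std() <= tol or (r1 - r0 == 1 and c1 - c0 == 1):
            out.append((r0, r1, c0, c1))       # homogeneous leaf rectangle
        elif r1 - r0 >= c1 - c0:               # split the longer dimension
            m = (r0 + r1) // 2
            rec(r0, m, c0, c1, out); rec(m, r1, c0, c1, out)
        else:
            m = (c0 + c1) // 2
            rec(r0, r1, c0, m, out); rec(r0, r1, m, c1, out)
        return out
    return rec(0, block.shape[0], 0, block.shape[1], [])

# A frame whose left half is flat background and right half is textured:
frame = np.zeros((8, 8))
frame[:, 4:] = np.arange(8 * 4).reshape(8, 4) * 3.0
parts = cuboid_partition(frame)
print(len(parts))  # the flat left half survives as a few large rectangles
```

The flat background collapses into large rectangles that are cheap to signal, which is why a static background frame suits this kind of partitioning.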
Examination of effective VAr with respect to dynamic voltage stability in renewable rich power grids
- Alzahrani, Saeed, Shah, Rakibuzzaman, Mithulananthan, N.
- Authors: Alzahrani, Saeed , Shah, Rakibuzzaman , Mithulananthan, N.
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 75494-75508
- Full Text:
- Reviewed:
- Description: High penetration of inverter-based renewable resources (IBRs) diminishes the resilience that traditional power systems acquired through many years of research and development. In particular, dynamic voltage stability has become a major concern for transmission system operators because of the limited voltage and frequency regulation capabilities of IBRs. A heavily loaded renewable-rich network is susceptible to fault-induced delayed voltage recovery (FIDVR) when the grid lacks sufficient effective reactive power (E-VAr). Hence, it is crucial to scrutinize each VAr resource's contribution to E-VAr under various operating conditions, and to investigate the influence of E-VAr on post-fault system performance. Such an investigation helps determine the optimal location and sizing of grid-connected IBRs, allows more renewable energy integration, and informs decisions about adopting additional grid-support devices. In this paper, a comprehensive assessment framework is used to assess the E-VAr of a power system with a large-scale photovoltaic power plant under different realistic operating conditions. Several indices quantifying the contribution of VAr resources and the recovery of load bus voltages assist in exploring the transient response and voltage trajectories, and the recovery indices provide a better understanding of the factors affecting E-VAr. The proposed framework has been tested on the New England (IEEE 39-bus) system through simulation in DIgSILENT PowerFactory. © 2013 IEEE.
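The FIDVR phenomenon central to this record can be quantified with a very simple screen: how long after fault clearance a bus voltage takes to regain an acceptable level. The threshold and trajectories below are illustrative only; the paper's own recovery indices are defined in the article.

```python
def recovery_time(times, voltages, t_clear, v_ok=0.9):
    """Seconds from fault clearance until voltage first reaches `v_ok` pu.
    An illustrative FIDVR-style screen, not the paper's index."""
    for t, v in zip(times, voltages):
        if t >= t_clear and v >= v_ok:
            return t - t_clear
    return float("inf")  # never recovered within the window

# A healthy bus vs. one exhibiting delayed voltage recovery.
t = [i * 0.1 for i in range(30)]
healthy = [1.0 if x >= 0.5 else 0.4 for x in t]
delayed = [1.0 if x >= 2.0 else 0.7 for x in t]
print(recovery_time(t, healthy, t_clear=0.2),
      recovery_time(t, delayed, t_clear=0.2))
```

Comparing such times across load buses, with and without a VAr resource in service, is the spirit in which E-VAr contributions are scrutinized.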
Forced oscillation in power systems with converter controlled-based resources- a survey with case studies
- Surinkaew, Tossaporn, Emami, Koanoush, Shah, Rakibuzzaman, Islam, Syed, Mithulananthan, N.
- Authors: Surinkaew, Tossaporn , Emami, Koanoush , Shah, Rakibuzzaman , Islam, Syed , Mithulananthan, N.
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 150911-150924
- Full Text:
- Reviewed:
- Description: In future power systems, conventional synchronous generators will be replaced by converter-controlled generation (CCGs), i.e., wind and solar generation and battery energy storage systems. This paradigm shift will lead to inferior system strength and a scarcity of inertia, and problems of forced oscillation (FO) will emerge with the new features introduced by CCGs. The state-of-the-art review in this paper covers previous strategies for FO detection, source identification, and mitigation. Moreover, the effect of FO is investigated in a power system with CCGs. In its conclusion, the paper highlights important findings and provides suggestions for subsequent research on this important topic for future power systems. © 2013 IEEE.
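A forced oscillation shows up as a sustained, narrow spectral peak riding on ambient swings. The sketch below is a textbook FFT screen for such a peak, far simpler than the detection and source-identification schemes the survey covers; signal amplitudes, frequencies, and the SNR threshold are invented for illustration.

```python
import numpy as np

def detect_forced_oscillation(signal, fs, min_snr=10.0):
    """Flag a sustained sinusoidal component when one spectral peak
    dominates the rest of the spectrum; return (flag, peak frequency)."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    k = int(np.argmax(spec))
    snr = spec[k] / (np.mean(np.delete(spec, k)) + 1e-12)
    return snr >= min_snr, freqs[k]

fs = 20.0                                   # 20 samples/s of a power-swing signal
t = np.arange(0, 30, 1 / fs)
ambient = 0.05 * np.sin(2 * np.pi * 0.7 * t)          # small inter-area swing
forced = ambient + 0.5 * np.sin(2 * np.pi * 1.3 * t)  # injected FO at 1.3 Hz
flag, f_hz = detect_forced_oscillation(forced, fs)
print(flag, round(f_hz, 1))  # True 1.3
```

Distinguishing such an externally driven peak from a poorly damped natural mode is exactly where the surveyed methods go beyond this naive screen.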
Green underwater wireless communications using hybrid optical-acoustic technologies
- Islam, Kazi, Ahmad, Iftekhar, Habibi, Daryoush, Zahed, M., Kamruzzaman, Joarder
- Authors: Islam, Kazi , Ahmad, Iftekhar , Habibi, Daryoush , Zahed, M. , Kamruzzaman, Joarder
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 85109-85123
- Full Text:
- Reviewed:
- Description: Underwater wireless communication is a rapidly growing field, especially with the recent emergence of technologies such as autonomous underwater vehicles (AUVs) and remotely operated vehicles (ROVs). To support the high-bandwidth applications using these technologies, underwater optics has attracted significant attention, alongside its complementary technology - underwater acoustics. In this paper, we propose a hybrid opto-acoustic underwater wireless communication model that reduces network power consumption and supports high-data rate underwater applications by selecting appropriate communication links in response to varying traffic loads and dynamic weather conditions. Underwater optics offers high data rates and consumes less power. However, due to the severe absorption of light in the medium, the communication range is short in underwater optics. Conversely, acoustics suffers from low data rate and high power consumption, but provides longer communication ranges. Since most underwater equipment relies on battery power, energy-efficient communication is critical for reliable underwater communications. In this work, we derive analytical models for both underwater acoustics and optics, and calculate the required transmit power for reliable communications in various underwater communication environments. We then formulate an optimization problem that minimizes the network power consumption for carrying data from underwater nodes to surface sinks under varying traffic loads and weather conditions. The proposed optimization model can be solved offline periodically, hence the additional computational complexity to find the optimum solution for larger networks is not a limiting factor for practical applications. Our results indicate that the proposed technique yields up to 35% power savings compared to existing opto-acoustic solutions. © 2013 IEEE.
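The per-link choice underlying the hybrid model can be sketched as picking whichever technology closes the power budget more cheaply: optical power grows exponentially with range (light extinction), acoustic power grows slowly but starts high. The coefficients below are invented placeholders, not the paper's channel models or optimization formulation.

```python
import math

def select_link(distance_m, turbidity):
    """Pick the lower-power link for one hop; returns (link, watts).
    Illustrative coefficients only, not the paper's models."""
    # Optical: exponential extinction; range collapses in turbid water.
    c = 0.15 * turbidity                      # beam attenuation (1/m), assumed
    p_opt = 0.1 * math.exp(c * distance_m)    # required optical Tx power (W)
    # Acoustic: spreading plus absorption; long range, power-hungry.
    p_ac = 2.0 + 0.05 * distance_m            # required acoustic Tx power (W)
    if p_opt <= p_ac:
        return "optical", p_opt
    return "acoustic", p_ac

print(select_link(20, turbidity=1.0))   # short clear-water hop -> optical
print(select_link(200, turbidity=1.0))  # long hop -> acoustic
```

The paper's contribution is the network-wide version of this choice: an optimization over all hops, traffic loads, and weather conditions, solved offline and refreshed periodically.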
Machine Learning Techniques for 5G and beyond
- Kaur, Jasneet, Khan, M. Arif, Iftikhar, Mohsin, Imran, Muhammad, Emad Ul Haq, Qazi
- Authors: Kaur, Jasneet , Khan, M. Arif , Iftikhar, Mohsin , Imran, Muhammad , Emad Ul Haq, Qazi
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 23472-23488
- Full Text:
- Reviewed:
- Description: Wireless communication systems play a crucial role in modern society for entertainment, business, commercial, health and safety applications. These systems keep evolving from one generation to the next, and fifth generation (5G) wireless systems are currently being deployed around the world. Academia and industry are already discussing beyond-5G wireless systems, which will form the sixth generation (6G) of this evolution. One of the key components of 6G systems will be the use of Artificial Intelligence (AI) and Machine Learning (ML) in such wireless networks. Every component and building block of a wireless system that we are familiar with from wireless technologies up to 5G, such as the physical, network and application layers, will involve one AI/ML technique or another. This overview paper presents an up-to-date review of future wireless system concepts such as 6G and the role of ML techniques in these future wireless systems. In particular, we present a conceptual model for 6G and show the use and role of ML techniques in each layer of the model. We review some classical and contemporary ML techniques such as supervised and unsupervised learning, Reinforcement Learning (RL), Deep Learning (DL) and Federated Learning (FL) in the context of wireless communication systems. We conclude the paper with some future applications and research challenges in the area of ML and AI for 6G networks. © 2013 IEEE.
Optimal placement of synchronized voltage traveling wave sensors in a radial distribution network
- Tashakkori, Ali, Abu-Siada, Ahmed, Wolfs, Peter, Islam, Syed
- Authors: Tashakkori, Ali , Abu-Siada, Ahmed , Wolfs, Peter , Islam, Syed
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 65380-65387
- Full Text:
- Reviewed:
- Description: A transmission line fault generates transient high-frequency travelling waves (TWs) that propagate through the entire network. The fault location can be determined by recording the instants at which the incident waves arrive at various points in the network. In single-end methods, the incident wave arrival time and its subsequent reflections from the fault point are used to identify the fault location. In heavily branched distribution networks, the magnitude of the traveling wave declines rapidly as it passes through multiple junctions that reflect and refract the signal. Detecting the first incident wave from a high impedance fault is therefore a significant challenge in electrical distribution networks; in particular, detecting subsequent reflections from a temporary fault may not be possible. Consequently, to identify high impedance or temporary faults in a distribution network with many branches, loads, switching devices and distributed transformers, multiple observers are required to observe the entire network. A fully observable and locatable network requires at least one observer per branch or spur, which is not a cost-effective solution. This paper proposes using a reasonable number of relatively low-cost voltage TW observers with GPS time-synchronization and radio communication to detect and timestamp the TW arrival at several points in the network. In this regard, a method to optimally place a given number of TW detectors to maximize the network observability and locatability is presented. Results show the robustness of the proposed method in detecting high impedance and intermittent faults within distribution networks with a minimum number of observers. © 2013 IEEE.
Reduced switch multilevel inverter topologies for renewable energy sources
- Sarebanzadeh, Maryam, Hosseinzadeh, Mohammad, Garcia, Cristian, Babaei, Ebrahim, Islam, Syed, Rodriguez, Jose
- Authors: Sarebanzadeh, Maryam , Hosseinzadeh, Mohammad , Garcia, Cristian , Babaei, Ebrahim , Islam, Syed , Rodriguez, Jose
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 120580-120595
- Full Text:
- Reviewed:
- Description: This article proposes two generalized multilevel inverter configurations that reduce the number of switching devices, isolated DC sources, and total standing voltage on power switches, making them suitable for renewable energy sources. The main topology is a multilevel inverter that handles two isolated DC sources with ten power switches to create 25 voltage levels. Based on the main proposed topology, two generalized multilevel inverters are introduced to provide flexibility in the design and to minimize the number of elements. The optimal topologies for both generalized multilevel inverters are derived from different design objectives such as minimizing the number of elements (gate drivers, DC sources), achieving a large number of levels, and minimizing the total standing voltage. The main advantage of the proposed topologies is a reduced number of elements compared to other existing multilevel inverter topologies. The power loss analysis and a standalone PV application of the proposed topologies are discussed. Experimental results are presented for the proposed topology to demonstrate its correct operation. © 2013 IEEE.
Robust image classification using a low-pass activation function and DCT augmentation
- Hossain, Md Tahmid, Teng, Shyh, Sohel, Ferdous, Lu, Guojun
- Authors: Hossain, Md Tahmid , Teng, Shyh , Sohel, Ferdous , Lu, Guojun
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 86460-86474
- Full Text:
- Reviewed:
- Description: Convolutional Neural Networks' (CNNs') performance disparity on clean and corrupted datasets has recently come under scrutiny. In this work, we analyse common corruptions in the frequency domain, i.e., High Frequency corruptions (HFc, e.g., noise) and Low Frequency corruptions (LFc, e.g., blur). Although a simple solution to HFc is low-pass filtering, ReLU, a widely used Activation Function (AF), does not have any filtering mechanism. In this work, we instill low-pass filtering into the AF (LP-ReLU) to improve robustness against HFc. To deal with LFc, we complement LP-ReLU with Discrete Cosine Transform based augmentation. LP-ReLU, coupled with DCT augmentation, enables a deep network to tackle the entire spectrum of corruption. We use CIFAR-10-C and Tiny ImageNet-C for evaluation and demonstrate improvements of 5% and 7.3% in accuracy, respectively, compared to the State-Of-The-Art (SOTA). We further evaluate our method's stability on a variety of perturbations in CIFAR-10-P and Tiny ImageNet-P, achieving new SOTA in these experiments as well. To further strengthen our understanding of CNNs' lack of robustness, a decision space visualisation process is proposed and presented in this work. © 2013 IEEE.
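The DCT-domain view of corruptions used above can be illustrated with a small NumPy sketch: transform an image block to the DCT domain, scale a frequency band, and invert. The band choice and scaling are assumptions for illustration; the paper's actual augmentation scheme differs in its details.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix, so c @ c.T == identity."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    c[0, :] /= np.sqrt(2.0)  # DC row normalization
    return c

def dct_augment(img, band=slice(4, None), scale=0.5):
    """Attenuate (scale < 1) or boost (scale > 1) a frequency band of a
    square image block, mimicking blur-like (LFc) or noise-like (HFc) shifts."""
    c = dct_matrix(img.shape[0])
    coeff = c @ img @ c.T          # 2-D DCT (separable)
    coeff[band, band] *= scale     # perturb the chosen band
    return c.T @ coeff @ c         # inverse 2-D DCT
```

Because the transform is orthonormal, `scale=1.0` reconstructs the input exactly, which makes the perturbation easy to sanity-check.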
Rock-burst occurrence prediction based on optimized naïve bayes models
- Ke, Bo, Khandelwal, Manoj, Asteris, Panagiotis, Skentou, Athanasia, Mamou, Anna, Armaghani, Danial
- Authors: Ke, Bo , Khandelwal, Manoj , Asteris, Panagiotis , Skentou, Athanasia , Mamou, Anna , Armaghani, Danial
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 91347-91360
- Full Text:
- Reviewed:
- Description: Rock-burst is a common failure in hard-rock-related projects in civil and mining construction and, therefore, proper classification and prediction of this phenomenon is of interest. This research presents the development of optimized naïve Bayes models for predicting rock-burst failures in underground projects. The naïve Bayes models were optimized using four weight optimization techniques: forward, backward, particle swarm optimization, and evolutionary. An evolutionary random forest model was developed to identify the most significant input parameters. The maximum tangential stress, elastic energy index, and uniaxial tensile stress were then selected by this feature selection technique (i.e., evolutionary random forest) to develop the optimized naïve Bayes models. The performance of the models was assessed using various criteria as well as a simple ranking system. The results showed that particle swarm optimization was the most effective technique in improving the accuracy of the naïve Bayes model for rock-burst prediction (cumulative ranking = 21), while the backward technique was the worst weight optimization technique (cumulative ranking = 11). All the optimized naïve Bayes models identified the maximum tangential stress as the most significant parameter in predicting rock-burst failures. The results of this research demonstrate that the particle swarm optimization technique may improve the accuracy of naïve Bayes algorithms in predicting rock-burst occurrence. © 2013 IEEE.
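The weight-optimization idea can be made concrete with a hedged sketch: a Gaussian naïve Bayes log-score in which each feature's log-likelihood carries a weight w_j, and those weights are what an optimizer such as PSO would tune. The data, statistics, and weights below are illustrative stand-ins, not the paper's model.

```python
import math

def gaussian_logpdf(x, mean, var):
    """Log density of a univariate Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def weighted_nb_score(x, class_stats, prior, weights):
    """log P(class) + sum_j w_j * log P(x_j | class).

    class_stats: per-feature (mean, variance) pairs for one class.
    weights: per-feature weights; all ones recovers plain naive Bayes.
    """
    s = math.log(prior)
    for xj, (mean, var), wj in zip(x, class_stats, weights):
        s += wj * gaussian_logpdf(xj, mean, var)
    return s
```

Classification picks the class with the highest score; an outer loop (PSO, forward/backward search, or an evolutionary method, as compared in the paper) searches the weight vector that maximizes validation accuracy.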
Treating class imbalance in non-technical loss detection : an exploratory analysis of a real dataset
- Ghori, Khawaja, Awais, Muhammad, Khattak, Akmal, Imran, Muhammad, Amin, Fazal, Szathmary, Laszlo
- Authors: Ghori, Khawaja , Awais, Muhammad , Khattak, Akmal , Imran, Muhammad , Amin, Fazal , Szathmary, Laszlo
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 98928-98938
- Full Text:
- Reviewed:
- Description: Non-Technical Loss (NTL) is a significant concern for many electric supply companies due to the financial impact of suspect consumption activities. A range of machine learning classifiers have been tested across multiple synthesized and real datasets to combat NTL. An important characteristic of these datasets is the imbalanced distribution of the classes. When the focus is on predicting the minority class of suspect activities, the classifiers' sensitivity to the class imbalance becomes more important. In this paper, we evaluate the performance of a range of classifiers with under-sampling and over-sampling techniques. The results are compared with the untreated imbalanced dataset. In addition, we compare the performance of the classifiers using a penalized classification model. Lastly, the paper presents an exploratory analysis of different sampling techniques for NTL detection on a real dataset and identifies the best-performing classifiers. We conclude that logistic regression is the most sensitive to the sampling techniques, as the change in its recall is around 50% for all sampling techniques, while random forest is the least sensitive, with the difference in its precision between 1% and 6% for all sampling techniques. © 2013 IEEE.
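Of the treatments compared above, random over-sampling is the simplest to sketch: duplicate minority-class rows until the classes are balanced. This is a minimal illustration of the idea (the paper also evaluates under-sampling and penalized models):

```python
import random
from collections import Counter

def oversample(X, y, seed=0):
    """Randomly duplicate minority-class rows until all classes match the
    majority-class count. Applied to the training split only, to avoid
    leaking duplicated rows into evaluation."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    Xb, yb = list(X), list(y)
    for cls, n in counts.items():
        idx = [i for i, lab in enumerate(y) if lab == cls]
        for _ in range(target - n):
            i = rng.choice(idx)
            Xb.append(X[i])
            yb.append(cls)
    return Xb, yb
```

With a 3:1 imbalance, the minority class gains two duplicated rows and both classes end up equally represented, which is the condition under which the paper re-measures each classifier's recall and precision.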
6G wireless systems : a vision, architectural elements, and future directions
- Khan, Latif, Yaqoob, Ibrar, Imran, Muhammad, Han, Zhu, Hong, Choong
- Authors: Khan, Latif , Yaqoob, Ibrar , Imran, Muhammad , Han, Zhu , Hong, Choong
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 147029-147044
- Full Text:
- Reviewed:
- Description: Internet of everything (IoE)-based smart services are expected to gain immense popularity in the future, which raises the need for next-generation wireless networks. Although fifth-generation (5G) networks can support various IoE services, they might not be able to completely fulfill the requirements of novel applications. Sixth-generation (6G) wireless systems are envisioned to overcome 5G network limitations. In this article, we explore recent advances made toward enabling 6G systems. We devise a taxonomy based on key enabling technologies, use cases, emerging machine learning schemes, communication technologies, networking technologies, and computing technologies. Furthermore, we identify and discuss open research challenges, such as artificial-intelligence-based adaptive transceivers, intelligent wireless energy harvesting, decentralized and secure business models, intelligent cell-less architecture, and distributed security models. We propose practical guidelines including deep Q-learning and federated learning-based transceivers, blockchain-based secure business models, homomorphic encryption, and distributed-ledger-based authentication schemes to cope with these challenges. Finally, we outline and recommend several future directions. © 2013 IEEE.
A deep learning model based on concatenation approach for the diagnosis of brain tumor
- Noreen, Neelum, Palaniappan, Sellappan, Qayyum, Abdul, Ahmad, Iftikhar, Imran, Muhammad, Shoaib, Muhammad
- Authors: Noreen, Neelum , Palaniappan, Sellappan , Qayyum, Abdul , Ahmad, Iftikhar , Imran, Muhammad , Shoaib, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 55135-55144
- Full Text:
- Reviewed:
- Description: Brain tumor is a deadly disease and its classification is a challenging task for radiologists because of the heterogeneous nature of tumor cells. Recently, computer-aided diagnosis-based systems have shown promise, as an assistive technology, for diagnosing brain tumors through magnetic resonance imaging (MRI). In typical applications of pre-trained models, features are extracted from the bottom layers, which differ between natural and medical images. To overcome this problem, this study proposes a method of multi-level feature extraction and concatenation for early diagnosis of brain tumor, built on two pre-trained deep learning models, Inception-v3 and DenseNet201. With these two models, two scenarios of brain tumor detection and classification were evaluated. First, features from different Inception modules were extracted from the pre-trained Inception-v3 model, concatenated for brain tumor classification, and passed to a softmax classifier. Second, the pre-trained DenseNet201 was used to extract features from various DenseNet blocks, which were likewise concatenated and passed to a softmax classifier. Both scenarios were evaluated on a publicly available three-class brain tumor dataset. The proposed method produced testing accuracies of 99.34% and 99.51% with Inception-v3 and DenseNet201, respectively, achieving the highest performance in the detection of brain tumor. As the results indicate, the proposed feature-concatenation method using pre-trained models outperformed existing state-of-the-art deep learning and machine learning based methods for brain tumor classification. © 2013 IEEE.
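The concatenate-then-softmax step described above can be sketched independently of any deep-learning framework. The feature vectors and classifier weights below are random stand-ins for the multi-level Inception/DenseNet features; only the shape of the computation is taken from the abstract.

```python
import numpy as np

def softmax(z):
    """Stable softmax over a 1-D score vector."""
    z = z - np.max(z)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify_concatenated(f1, f2, W, b):
    """Concatenate two feature vectors (e.g. from two network stages) and
    score the result with a linear softmax classifier."""
    f = np.concatenate([f1, f2])
    return softmax(W @ f + b)
```

The output is a probability vector over the classes (three, for the dataset used in the paper); training would fit `W` and `b` on the concatenated features.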
A low-complexity equalizer for video broadcasting in cyber-physical social systems through handheld mobile devices
- Solyman, Ahmad, Attar, Hani, Khosravi, Mohammad, Menon, Varun, Jolfaei, Alireza, Balasubramanian, Venki, Selvaraj, Buvana, Tavallali, Pooya
- Authors: Solyman, Ahmad , Attar, Hani , Khosravi, Mohammad , Menon, Varun , Jolfaei, Alireza , Balasubramanian, Venki , Selvaraj, Buvana , Tavallali, Pooya
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 67591-67602
- Full Text:
- Reviewed:
- Description: In Digital Video Broadcasting-Handheld (DVB-H) devices for cyber-physical social systems, Discrete Fractional Fourier Transform-Orthogonal Chirp Division Multiplexing (DFrFT-OCDM) has been suggested to enhance performance over Orthogonal Frequency Division Multiplexing (OFDM) systems under time- and frequency-selective fading channels. In this case, equalizers such as the Minimum Mean Square Error (MMSE) and Zero-Forcing (ZF) are needed, though they are excessively complex due to the required matrix inversion, especially for the extensive DVB-H symbol lengths. In this work, a low-complexity equalizer based on the Least-Squares Minimal Residual (LSMR) algorithm is used to solve the matrix inversion iteratively. The paper applies the LSMR algorithm to linear and nonlinear equalizers; simulation results indicate that the proposed equalizer achieves significant performance and reduced complexity compared with the classical MMSE equalizer and other low-complexity equalizers in time- and frequency-selective fading channels. © 2013 IEEE.
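Both equalizers named above amount to solving a linear system rather than forming an explicit inverse, which is exactly the step an iterative solver like LSMR replaces for large symbol lengths. A hedged NumPy sketch with a toy 2x2 channel (illustrative sizes only, nowhere near DVB-H dimensions):

```python
import numpy as np

def zf_equalize(H, y):
    """Zero-Forcing: solve H x = y, i.e. invert the channel exactly.
    np.linalg.solve stands in here for an iterative solver such as LSMR."""
    return np.linalg.solve(H, y)

def mmse_equalize(H, y, noise_var):
    """MMSE: solve (H^H H + sigma^2 I) x = H^H y, which regularizes the
    inversion and avoids noise amplification at weak subchannels."""
    n = H.shape[1]
    A = H.conj().T @ H + noise_var * np.eye(n)
    return np.linalg.solve(A, H.conj().T @ y)
```

With zero noise variance the MMSE solution coincides with ZF; as the noise variance grows, the regularization term dampens the weakest channel directions, which is why MMSE is preferred on fading channels.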
A new data driven long-term solar yield analysis model of photovoltaic power plants
- Ray, Biplob, Shah, Rakibuzzaman, Islam, Md Rabiul, Islam, Syed
- Authors: Ray, Biplob , Shah, Rakibuzzaman , Islam, Md Rabiul , Islam, Syed
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 136223-136233
- Full Text:
- Reviewed:
- Description: Historical data offers a wealth of knowledge to its users. However, it is often so restrictively large that the information cannot be fully extracted, synthesized, and analyzed efficiently for applications such as forecasting the output of variable generators. Moreover, the accuracy of the prediction method is vital, so a trade-off between accuracy and efficacy is required for a data-driven energy forecasting method. It has been identified that hybrid approaches may outperform individual techniques in minimizing error, although they are challenging to synthesize. A hybrid deep learning-based method is proposed for output prediction of solar photovoltaic systems in Australia to achieve this trade-off between accuracy and efficacy. A historical dataset from 1990-2013 for Australian locations (e.g. North Queensland) is used to train the model, which combines a multivariate long short-term memory (LSTM) network and a convolutional neural network (CNN). The proposed hybrid deep learning model (LSTM-CNN) is compared with existing neural network ensemble (NNE), random forest, statistical analysis, and artificial neural network (ANN) based techniques to assess its performance. The proposed model could be useful for generation planning and reserve estimation in power systems with high penetration of solar photovoltaics (PVs) or other renewable energy sources (RESs). © 2013 IEEE.
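Any LSTM-CNN forecaster of the kind described above must first turn the long historical yield series into supervised (window, next-value) pairs. The sketch below shows only that data-preparation step, under assumed parameters (a synthetic seasonal signal, a hypothetical 30-step lookback); the network architecture itself is not reproduced here.

```python
import numpy as np

def make_windows(series, lookback, horizon=1):
    # slice a long yield series into (samples, lookback, features) windows
    # for a CNN/LSTM-style model, plus the value `horizon` steps ahead
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback + horizon - 1])
    return np.asarray(X)[..., None], np.asarray(y)

# toy stand-in for a daily PV-yield history (seasonal signal plus offset)
daily_yield = np.sin(np.linspace(0, 20, 400)) + 5.0
X, y = make_windows(daily_yield, lookback=30)   # X: (370, 30, 1), y: (370,)
```

Each `(30, 1)` window would feed the convolutional front end, whose features the LSTM then models over time; the 1990-2013 multivariate dataset in the paper would simply add more feature channels to the last axis.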
A robust consistency model of crowd workers in text labeling tasks
- Alqershi, Fattoh, Al-Qurishi, Muhammad, Aksoy, Mehmet, Alrubaian, Majed, Imran, Muhammad
- Authors: Alqershi, Fattoh , Al-Qurishi, Muhammad , Aksoy, Mehmet , Alrubaian, Majed , Imran, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 168381-168393
- Full Text:
- Reviewed:
- Description: Crowdsourcing is a popular human-based model for acquiring labeled data. Despite its ability to generate huge amounts of labeled data at moderate cost, it is susceptible to low-quality labels, which can arise from unintentional or intentional errors by crowd workers. Consistency is an important attribute of reliability: it is a practical metric that evaluates a crowd worker's reliability by their ability to conform to themselves, yielding the same output when repeatedly given a particular input. Consistency has not yet been sufficiently explored in the literature. In this work, we propose a novel consistency model based on the pairwise comparisons method and apply it to unpaid workers. We measure workers' consistency on tasks of labeling political text-based claims and study the effects of different duplicate-task characteristics on their consistency. Our results show that the proposed model outperforms current state-of-the-art models in terms of accuracy. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
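One simple reading of pairwise-comparison consistency is the fraction of duplicate-task pairs on which a worker gives the same label. The sketch below implements that reading; the task names and the specific scoring rule are illustrative assumptions, not the paper's exact model.

```python
from itertools import combinations

def consistency(labels_by_task):
    # labels_by_task: {task_id: [label given on each duplicate presentation]}
    # score = agreeing duplicate pairs / all duplicate pairs
    agree = total = 0
    for labels in labels_by_task.values():
        for a, b in combinations(labels, 2):
            total += 1
            agree += (a == b)
    return agree / total if total else 1.0

# hypothetical worker: one claim shown twice, another shown three times
worker = {"claim-1": ["true", "true"],
          "claim-2": ["false", "true", "false"]}
score = consistency(worker)   # 2 of 4 pairs agree -> 0.5
```

A fully consistent worker scores 1.0 regardless of whether the labels are correct, which is exactly the point of the abstract: consistency measures self-agreement, a separate axis from accuracy.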