An adaptive and flexible brain energized full body exoskeleton with IoT edge for assisting the paralyzed patients
- Jacob, Sunil, Alagirisamy, Mukil, Menon, Varun, Kumar, B. Manoj, Balasubramanian, Venki
- Authors: Jacob, Sunil , Alagirisamy, Mukil , Menon, Varun , Kumar, B. Manoj , Balasubramanian, Venki
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 100721-100731
- Full Text:
- Reviewed:
- Description: The paralyzed population is increasing worldwide due to stroke, spinal cord injury, post-polio, and other related diseases. Different assistive technologies are used to improve the physical and mental health of the affected patients. Exoskeletons have emerged as one of the most promising technologies for providing movement and rehabilitation to the paralyzed. But exoskeletons are limited by constraints of weight, flexibility, and adaptability. To resolve these issues, we propose an adaptive and flexible Brain Energized Full Body Exoskeleton (BFBE) for assisting paralyzed people. This paper describes the design, control, and testing of BFBE with 15 degrees of freedom (DoF) for assisting users in their daily activities. Flexibility is incorporated into the system by a modular design approach. The brain signals captured by Electroencephalogram (EEG) sensors are used for controlling the movements of BFBE. The processing happens at the edge, reducing delay in decision making, and the system is further integrated with an IoT module that sends an alert message to multiple caregivers in case of an emergency. Potential energy harvesting is used in the system to address the power issues of the exoskeleton. Stability in the gait cycle is ensured by using adaptive sensory feedback. The system was validated using six natural movements on ten different paralyzed persons. The system recognizes human intentions with an accuracy of 85%. The results show that BFBE can be an efficient method for providing assistance and rehabilitation to paralyzed patients. © 2013 IEEE. **Please note that there are multiple authors for this article therefore only the name of the first 5 including Federation University Australia affiliate “Venki Balasubramanian” is provided in this record**
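The abstract describes edge-side processing that turns classified EEG intentions into exoskeleton commands and sends caregiver alerts through an IoT module. The sketch below is a hypothetical illustration of that decision loop; all intention names, commands, and caregiver identifiers are invented for the example and are not taken from the paper.

```python
# Hypothetical edge-side decision loop: a classified EEG intention is mapped
# to an actuator command, and an emergency intention triggers alerts to
# every registered caregiver (the IoT role described in the abstract).
INTENTION_TO_COMMAND = {
    "stand": "EXTEND_KNEES",
    "sit": "FLEX_KNEES",
    "walk": "GAIT_CYCLE",
    "stop": "HOLD_POSITION",
}

def edge_decide(intention, caregivers):
    """Return (actuator_command, alert_messages) for one classified intention."""
    if intention == "emergency":
        # Notify all caregivers; hold the exoskeleton in a safe posture.
        alerts = [f"ALERT to {c}: patient needs assistance" for c in caregivers]
        return "HOLD_POSITION", alerts
    return INTENTION_TO_COMMAND.get(intention, "HOLD_POSITION"), []

cmd, alerts = edge_decide("walk", ["nurse", "family"])
```

Running the classifier and the decision loop on the edge device, as the paper proposes, avoids a round trip to the cloud for each movement decision.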
Blending big data analytics : review on challenges and a recent study
- Amalina, Fairuz, Targio Hashem, Ibrahim, Azizul, Zati, Fong, Ang, Imran, Muhammad
- Authors: Amalina, Fairuz , Targio Hashem, Ibrahim , Azizul, Zati , Fong, Ang , Imran, Muhammad
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8, no. (2020), p. 3629-3645
- Full Text:
- Reviewed:
- Description: With the collection of massive amounts of data every day, big data analytics has emerged as an important trend for many organizations. These collected data can contain important information that may be key to solving wide-ranging problems, such as cyber security, marketing, healthcare, and fraud. To analyze their large volumes of data for business analyses and decisions, large companies, such as Facebook and Google, adopt analytics. Such analyses and decisions impact existing and future technology. In this paper, we explore how big data analytics is utilized as a technique for solving problems of complex and unstructured data using technologies such as Hadoop, Spark, and MapReduce. We also discuss the data challenges introduced by big data according to the literature, including its six V's. Moreover, we investigate case studies of big data analytics across its various techniques, namely text, voice, video, and network analytics. We conclude that big data analytics can bring positive changes in many fields, such as education, military, healthcare, politics, business, agriculture, banking, and marketing, in the future. © 2013 IEEE.
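The MapReduce programming model named in the abstract can be illustrated with the classic word-count example. This is a single-process sketch of the map, shuffle, and reduce phases that frameworks such as Hadoop and Spark execute in a distributed fashion; it shows the model, not a production implementation.

```python
# Minimal MapReduce-style word count in plain Python: map emits (key, 1)
# pairs, shuffle groups values by key, reduce sums each group.
from collections import defaultdict

def map_phase(docs):
    for doc in docs:
        for word in doc.split():
            yield word.lower(), 1

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data", "Big Data analytics"])))
```

In a real cluster the shuffle phase moves data across the network between mapper and reducer nodes, which is where much of the cost of large-scale analytics lies.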
Reduced switch multilevel inverter topologies for renewable energy sources
- Sarebanzadeh, Maryam, Hosseinzadeh, Mohammad, Garcia, Cristian, Babaei, Ebrahim, Islam, Syed, Rodriguez, Jose
- Authors: Sarebanzadeh, Maryam , Hosseinzadeh, Mohammad , Garcia, Cristian , Babaei, Ebrahim , Islam, Syed , Rodriguez, Jose
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 120580-120595
- Full Text:
- Reviewed:
- Description: This article proposes two generalized multilevel inverter configurations that reduce the number of switching devices, isolated DC sources, and total standing voltage on power switches, making them suitable for renewable energy sources. The main topology is a multilevel inverter that handles two isolated DC sources with ten power switches to create 25 voltage levels. Based on the main proposed topology, two generalized multilevel inverters are introduced to provide flexibility in the design and to minimize the number of elements. The optimal topologies for both extensive multilevel inverters are derived from different design objectives such as minimizing the number of elements (gate drivers, DC sources), achieving a large number of levels, and minimizing the total standing voltage. The main advantage of the proposed topologies is a reduced number of elements compared to those required by other existing multilevel inverter topologies. Power loss analysis and a standalone PV application of the proposed topologies are discussed. Experimental results are presented for the proposed topology to demonstrate its correct operation. © 2013 IEEE.
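One classical way to obtain 25 output levels from only two isolated DC sources is an asymmetric cascade in which each cell contributes five levels and the source magnitudes stand in a 1:5 ratio. The sketch below only verifies that such an arrangement yields 25 distinct levels; it is an assumption for illustration, not the paper's specific ten-switch topology.

```python
# Enumerate the output levels of a two-cell asymmetric cascade where each
# cell can output {-2V, -V, 0, +V, +2V} of its own source voltage.
# With sources in a 1:5 ratio this gives 25 evenly spaced levels.
from itertools import product

def output_levels(v1, v2):
    cell = (-2, -1, 0, 1, 2)          # per-cell switching states
    return sorted({a * v1 + b * v2 for a, b in product(cell, cell)})

levels = output_levels(1, 5)          # 1:5 asymmetric source ratio
```

With unit sources the levels run from -12 to +12 in unit steps, matching the 25-level count quoted in the abstract for two DC sources.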
Deep learning-based approach for detecting trajectory modifications of Cassini-Huygens spacecraft
- Aldabbas, Ashraf, Gal, Zoltan, Ghori, Khawaja, Imran, Muhammad, Shoaib, Muhammad
- Authors: Aldabbas, Ashraf , Gal, Zoltan , Ghori, Khawaja , Imran, Muhammad , Shoaib, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 39111-39125
- Full Text:
- Reviewed:
- Description: The Cassini spacecraft required numerous trajectory modifications during the last 14 years of its interplanetary research mission. Given a signal propagation time of about 1.3 hours across the 1.4-billion-kilometer Earth-Cassini channel, complex event detection in the orbit modifications requires special investigation and analysis of the collected big data. The technologies for space exploration warrant a high standard of nuanced and detailed research. The Cassini mission accumulated huge volumes of science records, and the curiosity this generates derives mainly from a need to use machine learning to analyze deep space missions. For energy saving reasons, communication between the Earth and Cassini was executed in non-periodic mode. This paper provides a deep learning approach for detecting Cassini spacecraft trajectory modifications in post-processing mode. The proposed model utilizes the ability of Long Short-Term Memory (LSTM) neural networks to extract useful data and learn the inner patterns of time series, along with the strength of LSTM layers in capturing both long- and short-term dependencies. Our study used statistical rates, the Matthews correlation coefficient, and the F1 score to evaluate our models. We carried out multiple tests and evaluated the proposed approach against several advanced models. The preparatory analysis showed that exploiting the LSTM layer provides a notable boost in detection performance. The proposed model detected 232 trajectory modifications with 99.98% accuracy over the last 13.35 years of the Cassini spacecraft's life. © 2013 IEEE.
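The abstract evaluates detection quality with the Matthews correlation coefficient (MCC) and the F1 score. Both metrics follow directly from the confusion-matrix counts, as this small self-contained implementation shows.

```python
# MCC and F1 computed from raw confusion-matrix counts:
# tp/tn = true positives/negatives, fp/fn = false positives/negatives.
import math

def mcc(tp, tn, fp, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def f1(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

MCC is a sensible choice here because trajectory modifications are rare events: unlike raw accuracy, it stays near zero for a classifier that simply predicts the majority class.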
Energy efficiency perspectives of femtocells in Internet of Things : recent advances and challenges
- Al-Turjman, Fadi, Imran, Muhammad, Bakhsh, Sheikh
- Authors: Al-Turjman, Fadi , Imran, Muhammad , Bakhsh, Sheikh
- Date: 2017
- Type: Text , Journal article
- Relation: IEEE Access Vol. 5, no. (2017), p. 26808-26818
- Full Text:
- Reviewed:
- Description: Energy efficiency is a growing concern in every aspect of technology. Apart from maintaining profitability, energy efficiency means a decrease in overall environmental effects, which is a serious concern in today's world. Using a femtocell in the Internet of Things (IoT) can boost energy efficiency. To illustrate, femtocells can be used in smart homes, a subpart of the smart grid, as a communication mechanism in order to manage energy efficiency. Moreover, femtocells can be used in many IoT applications in order to provide communication. However, it is important to evaluate the energy efficiency of femtocells. This paper investigates recent advances and challenges in the energy efficiency of the femtocell in IoT. First, we introduce the idea of femtocells in the context of IoT and their role in IoT applications. Next, we describe prominent performance metrics in order to understand how energy efficiency is evaluated. Then, we elucidate how energy can be modeled in terms of femtocells and provide some models from the literature. Since femtocells are used in heterogeneous networks to manage energy efficiency, we also describe some energy efficiency schemes for deployment. The factors that affect the energy usage of a femtocell base station are discussed, and then the power consumption of user equipment under femtocell coverage is addressed. Finally, we highlight prominent open research issues and challenges. © 2013 IEEE.
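A standard performance metric for the kind of evaluation the survey describes is energy efficiency expressed in bits per joule: delivered bits divided by the energy consumed. The helper below is a sketch under that textbook definition, not a formula taken from the paper.

```python
# Energy efficiency in bits per joule for a node observed over an interval.
def energy_efficiency(bits_delivered, power_watts, seconds):
    """Delivered bits divided by consumed energy (power * time)."""
    energy_joules = power_watts * seconds
    return bits_delivered / energy_joules

# e.g. 100 Mbit delivered while a femtocell base station draws 10 W for 10 s
eff = energy_efficiency(100e6, 10.0, 10.0)   # -> 1e6 bits/J
```

Because femtocells serve short links at low transmit power, their bits-per-joule figure can exceed that of a macrocell serving the same indoor user, which is the efficiency argument the paper examines.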
Performance analysis of priority-based IEEE 802.15.6 protocol in saturated traffic conditions
- Ullah, Sana, Tovar, Eduardo, Kim, Ki, Kim, Kyong, Imran, Muhammad
- Authors: Ullah, Sana , Tovar, Eduardo , Kim, Ki , Kim, Kyong , Imran, Muhammad
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6, no. (2018), p. 66198-66209
- Full Text:
- Reviewed:
- Description: Recent advancements in the Internet of Medical Things have enabled the deployment of miniaturized, intelligent, and low-power medical devices in, on, or around a human body for unobtrusive and remote health monitoring. The IEEE 802.15.6 standard facilitates such monitoring by enabling low-power and reliable wireless communication between the medical devices. The IEEE 802.15.6 standard employs a carrier sense multiple access with collision avoidance (CSMA/CA) protocol for resource allocation. It utilizes a priority-based backoff procedure by adjusting the contention window bounds of devices according to user requirements. As the performance of this protocol is considerably affected when the number of devices increases, we propose an accurate analytical model to estimate the saturation throughput, mean energy consumption, and mean delay over the number of devices. We assume an error-prone channel with saturated traffic conditions. We determine the optimal performance bounds for a fixed number of devices in different priority classes with different values of bit error ratio. We conclude that high-priority devices obtain quick and reliable access to the error-prone channel compared to low-priority devices. The proposed model is validated through extensive simulations. The performance bounds obtained in our analysis can be used to understand the tradeoffs between different priority levels and network performance. © 2018 IEEE.
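The priority-based backoff the abstract refers to works by giving each user priority its own contention window bounds. The (CWmin, CWmax) table below follows values commonly quoted for IEEE 802.15.6 user priorities UP0 (lowest) through UP7 (highest, emergency traffic); treat the exact numbers as an assumption to be verified against the standard.

```python
# Sketch of IEEE 802.15.6-style priority-dependent contention windows:
# higher priorities get smaller windows, hence statistically earlier access.
import random

CW_BOUNDS = {
    0: (16, 64), 1: (16, 32), 2: (8, 32), 3: (8, 16),
    4: (4, 16), 5: (4, 8), 6: (2, 8), 7: (1, 4),
}

def draw_backoff(priority, cw):
    """Uniform backoff counter in [1, cw], with cw clamped to the
    (CWmin, CWmax) bounds of the given user priority."""
    cw_min, cw_max = CW_BOUNDS[priority]
    cw = max(cw_min, min(cw, cw_max))
    return random.randint(1, cw)
```

Because an emergency device (UP7) never draws a counter above 4 while a UP0 device may draw up to 64, high-priority traffic wins contention far more often, which is the behavior the analytical model quantifies.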
Treating class imbalance in non-technical loss detection : an exploratory analysis of a real dataset
- Ghori, Khawaja, Awais, Muhammad, Khattak, Akmal, Imran, Muhammad, Amin, Fazal, Szathmary, Laszlo
- Authors: Ghori, Khawaja , Awais, Muhammad , Khattak, Akmal , Imran, Muhammad , Amin, Fazal , Szathmary, Laszlo
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 98928-98938
- Full Text:
- Reviewed:
- Description: Non-Technical Loss (NTL) is a significant concern for many electric supply companies due to the financial impact caused by suspect consumption activities. A range of machine learning classifiers have been tested across multiple synthesized and real datasets to combat NTL. An important characteristic of these datasets is the imbalanced distribution of the classes. When the focus is on predicting the minority class of suspect activities, the classifiers' sensitivity to the class imbalance becomes more important. In this paper, we evaluate the performance of a range of classifiers with under-sampling and over-sampling techniques. The results are compared with the untreated imbalanced dataset. In addition, we compare the performance of the classifiers using a penalized classification model. Lastly, the paper presents an exploratory analysis of applying different sampling techniques to NTL detection on a real dataset and identifies the best-performing classifiers. We conclude that logistic regression is the most sensitive to the sampling techniques, as the change in its recall is around 50% across all sampling techniques, while random forest is the least sensitive, with the difference in its precision observed between 1% and 6% across all sampling techniques. © 2013 IEEE.
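The random under- and over-sampling treatments the paper compares can be sketched in a few lines of plain Python. Libraries such as imbalanced-learn provide production versions (e.g. `RandomOverSampler`); this sketch only illustrates the mechanics.

```python
# Random over-sampling duplicates minority samples; random under-sampling
# discards majority samples. Both equalize the class sizes before training.
import random

def random_oversample(majority, minority, seed=0):
    """Duplicate minority samples until both classes are the same size."""
    rng = random.Random(seed)
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return majority, minority + extra

def random_undersample(majority, minority, seed=0):
    """Drop majority samples until both classes are the same size."""
    rng = random.Random(seed)
    return rng.sample(majority, len(minority)), minority

maj, mino = random_oversample(list(range(100)), [0, 1, 2])
```

Over-sampling keeps all majority information but risks overfitting the duplicated minority points; under-sampling avoids duplication but discards data, which is why the paper evaluates both against the untreated dataset.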
A new data driven long-term solar yield analysis model of photovoltaic power plants
- Ray, Biplob, Shah, Rakibuzzaman, Islam, Md Rabiul, Islam, Syed
- Authors: Ray, Biplob , Shah, Rakibuzzaman , Islam, Md Rabiul , Islam, Syed
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 136223-136233
- Full Text:
- Reviewed:
- Description: Historical data offers a wealth of knowledge to its users. However, such datasets are often so large that the information cannot be fully extracted, synthesized, and analyzed efficiently for an application such as forecasting of variable generator outputs. Moreover, the accuracy of the prediction method is vital. Therefore, a trade-off between accuracy and efficacy is required for a data-driven energy forecasting method. It has been identified that hybrid approaches may outperform individual techniques in minimizing error, while being challenging to synthesize. A hybrid deep learning-based method is proposed for output prediction of solar photovoltaic systems (i.e., the proposed PV system) in Australia to obtain the trade-off between accuracy and efficacy. A historical dataset from 1990-2013 for Australian locations (e.g., North Queensland) is used to train the model. The model is developed using a combination of multivariate long short-term memory (LSTM) and convolutional neural network (CNN) architectures. The proposed hybrid deep learning model (LSTM-CNN) is compared with existing neural network ensemble (NNE), random forest, statistical analysis, and artificial neural network (ANN) based techniques to assess its performance. The proposed model could be useful for generation planning and reserve estimation in power systems with high penetration of solar photovoltaics (PVs) or other renewable energy sources (RESs). © 2013 IEEE.
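Before a hybrid LSTM-CNN forecaster of the kind described can be trained, the historical series must be cut into fixed-length input windows paired with a forecast target. This windowing step is standard time-series practice, not a detail taken from the paper.

```python
# Slide a fixed-length window over a series: each input holds `window`
# past values and the target is the value immediately after the window.
def make_windows(series, window):
    """Return (inputs, targets) for one-step-ahead forecasting."""
    inputs, targets = [], []
    for i in range(len(series) - window):
        inputs.append(series[i:i + window])
        targets.append(series[i + window])
    return inputs, targets

X, y = make_windows([0.1, 0.2, 0.3, 0.4, 0.5], window=3)
# X -> [[0.1, 0.2, 0.3], [0.2, 0.3, 0.4]], y -> [0.4, 0.5]
```

In the multivariate setting each window would carry several aligned channels (irradiance, temperature, past output), which is what the "multivariate LSTM" input layer consumes.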
A deep learning model based on concatenation approach for the diagnosis of brain tumor
- Noreen, Neelum, Palaniappan, Sellappan, Qayyum, Abdul, Ahmad, Iftikhar, Imran, Muhammad, Shoaib, Muhammad
- Authors: Noreen, Neelum , Palaniappan, Sellappan , Qayyum, Abdul , Ahmad, Iftikhar , Imran, Muhammad , Shoaib, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 55135-55144
- Full Text:
- Reviewed:
- Description: Brain tumor is a deadly disease and its classification is a challenging task for radiologists because of the heterogeneous nature of the tumor cells. Recently, computer-aided diagnosis systems have shown promise, as an assistive technology, for diagnosing brain tumors through magnetic resonance imaging (MRI). In typical applications of pre-trained models, features are extracted from the bottom layers, which differ substantially between natural and medical images. To overcome this problem, this study proposes a method of multi-level feature extraction and concatenation for early diagnosis of brain tumor. Two pre-trained deep learning models, Inception-v3 and DenseNet201, underpin this method. With the help of these two models, two different scenarios of brain tumor detection and classification were evaluated. First, features from different Inception modules were extracted from the pre-trained Inception-v3 model and concatenated for brain tumor classification; these features were then passed to a softmax classifier to classify the brain tumor. Second, the pre-trained DenseNet201 was used to extract features from various DenseNet blocks, which were likewise concatenated and passed to a softmax classifier. Both scenarios were evaluated on a publicly available three-class brain tumor dataset. The proposed method produced testing accuracies of 99.34% and 99.51% with Inception-v3 and DenseNet201, respectively, achieving the highest performance in the detection of brain tumor. As the results indicate, the proposed method based on feature concatenation using pre-trained models outperformed existing state-of-the-art deep learning and machine learning based methods for brain tumor classification. © 2013 IEEE.
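The core idea, concatenating feature vectors taken from several blocks of a pre-trained network and feeding the joined vector to a softmax classifier, can be sketched without the networks themselves. The feature values below are dummies; in the paper they would come from Inception-v3 modules or DenseNet201 blocks.

```python
# Multi-level feature concatenation followed by a softmax over class logits.
import math

def concatenate(*feature_vectors):
    """Join per-block feature vectors into one flat vector."""
    combined = []
    for vec in feature_vectors:
        combined.extend(vec)
    return combined

def softmax(logits):
    """Numerically stable softmax: subtract the max before exponentiating."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

features = concatenate([0.2, 0.7], [0.1, 0.9, 0.4])   # two "blocks" of features
probs = softmax([1.0, 2.0, 3.0])                      # three tumor classes
```

Concatenating features from multiple depths lets the classifier see both low-level texture and high-level semantic cues, which is the stated motivation for going beyond bottom-layer features alone.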
Dual cost function model predictive direct speed control with duty ratio optimization for PMSM drives
- Liu, Ming, Hu, Jiefeng, Chan, Ka, Or, Siu, Ho, Siu, Xu, Wenzheng, Zhang, Xian
- Authors: Liu, Ming , Hu, Jiefeng , Chan, Ka , Or, Siu , Ho, Siu , Xu, Wenzheng , Zhang, Xian
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 126637-126647
- Full Text:
- Reviewed:
- Description: Traditional speed control of permanent magnet synchronous motors (PMSMs) uses a cascaded speed loop with proportional-integral (PI) regulators. The output of this outer speed loop, i.e., the electromagnetic torque reference, is in turn fed to either the inner current controller or the direct torque controller. This cascaded control structure leads to relatively slow dynamic response and, more importantly, larger speed ripples. This paper presents a new dual cost function model predictive direct speed control (DCF-MPDSC) with duty ratio optimization for PMSM drives. By employing accurate system status prediction, optimized duty ratios between one zero voltage vector and one active voltage vector are first deduced based on the deadbeat criterion. Then, two separate cost functions are formulated sequentially to refine the combinations of voltage vectors, which provides two-degree-of-freedom control capability. Specifically, the first cost function yields better dynamic response, while the second contributes to speed ripple reduction and steady-state offset elimination. The proposed control strategy has been validated by both Simulink simulation and hardware-in-the-loop (HIL) experiments. Compared to existing control methods, the proposed DCF-MPDSC can reach the speed reference rapidly with very small speed ripple and offset. © 2013 IEEE.
- Description: This work was supported in part by the Research Grants Council of the Hong Kong Special Administrative Region (HKSAR) Government under Grant R5020-18, and in part by the Innovation and Technology Commission of the HKSAR Government to the Hong Kong Branch of National Rail Transit Electrification and Automation Engineering Technology Research Center under Grant K-BBY1.
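The sequential two-cost-function selection described above can be caricatured as follows. This is a toy NumPy sketch, not the paper's PMSM model: `predict_speed_error`, the fixed `duty` ratio, and the ripple weight `0.1` are all invented for illustration, and the real method derives the duty ratio deadbeat-style from an accurate system prediction.

```python
import numpy as np

# Candidate active voltage vectors of a two-level inverter (alpha-beta plane),
# plus the zero vector; magnitudes are illustrative only.
vectors = [np.array([np.cos(k * np.pi / 3), np.sin(k * np.pi / 3)]) for k in range(6)]
vectors.append(np.zeros(2))

def predict_speed_error(v, duty, w_err):
    """Toy one-step prediction: applying vector v for a fraction `duty`
    of the period reduces the speed error in proportion to its projection
    on the error direction (a stand-in for the real PMSM model)."""
    return w_err - duty * float(v @ np.array([1.0, 0.0]))

w_err = 1.0   # current speed error (illustrative units)
duty = 0.8    # duty ratio; the paper deduces it from the deadbeat criterion

# Stage 1: first cost function -- fast dynamics (smallest predicted |error|).
costs1 = [abs(predict_speed_error(v, duty, w_err)) for v in vectors]
shortlist = np.argsort(costs1)[:3]

# Stage 2: second cost function -- additionally penalize ripple,
# here proxied by the vector magnitude.
costs2 = [costs1[i] + 0.1 * np.linalg.norm(vectors[i]) for i in shortlist]
best = shortlist[int(np.argmin(costs2))]
print(best)   # index of the selected voltage vector
```

The two stages give the two-degree-of-freedom behavior the abstract mentions: the first shortlist targets dynamics, the second refinement targets steady-state quality.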
Blind detection of copy-move forgery in digital audio forensics
- Imran, Muhammad, Ali, Zulfiqar, Bakhsh, Sheikh, Akram, Sheeraz
- Authors: Imran, Muhammad , Ali, Zulfiqar , Bakhsh, Sheikh , Akram, Sheeraz
- Date: 2017
- Type: Text , Journal article
- Relation: IEEE Access Vol. 5, no. (2017), p. 12843-12855
- Full Text:
- Reviewed:
- Description: Although copy-move forgery is one of the most common fabrication techniques, blind detection of such tampering in digital audio is mostly unexplored. Unlike active techniques, blind forgery detection is challenging because it cannot rely on a watermark or signature embedded in the audio, which is unavailable in most real-life scenarios. Forgery localization is therefore especially challenging for blind methods. In this paper, we propose a novel method for blind detection and localization of copy-move forgery. One of the most crucial steps in the proposed method is a voice activity detection (VAD) module for investigating audio recordings to detect and localize the forgery. The VAD module is equally vital for the development of the copy-move forgery database, wherein audio samples are generated using recordings from various types of microphones. We employ chaotic theory to copy and move the text in the generated forged recordings, ensuring that forgery can be localized at any place in a recording. The VAD module extracts the words in a forged audio, and these words are analyzed by applying a 1-D local binary pattern operator, which represents each extracted word as a histogram of patterns. The forged parts (copied and moved text) have similar histograms. An accuracy of 96.59% is achieved, and the proposed method is robust against noise. © 2013 IEEE.
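A minimal sketch of the 1-D local binary pattern histogram comparison the abstract describes. The `radius` parameter and the >=-thresholding convention are assumptions, not necessarily the paper's exact operator; the point is that a copied segment produces a near-identical histogram.

```python
import numpy as np

def lbp_1d(signal, radius=1):
    """1-D local binary pattern: each sample is encoded by comparing its
    2*radius neighbours against it (bit = 1 if neighbour >= centre)."""
    n = len(signal)
    codes = []
    for i in range(radius, n - radius):
        bits = 0
        neighbours = list(range(i - radius, i)) + list(range(i + 1, i + radius + 1))
        for b, j in enumerate(neighbours):
            if signal[j] >= signal[i]:
                bits |= 1 << b
        codes.append(bits)
    return np.array(codes)

def lbp_histogram(signal, radius=1):
    """Histogram of LBP codes, normalised so that segments of different
    lengths can be compared directly."""
    codes = lbp_1d(signal, radius)
    hist = np.bincount(codes, minlength=2 ** (2 * radius)).astype(float)
    return hist / hist.sum()

# A copied segment yields an identical histogram -- the cue used to flag
# copy-move forgery between extracted words.
word = np.sin(np.linspace(0, 6, 200))
copied = word.copy()
d = np.abs(lbp_histogram(word) - lbp_histogram(copied)).sum()
print(d)  # 0.0 for an exact copy
```

In the real pipeline the segments compared would be the words extracted by the VAD module, and "similar" rather than identical histograms would trigger a detection.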
TOSNet : a topic-based optimal subnetwork identification in academic networks
- Bedru, Hayat, Zhao, Wenhong, Alrashoud, Mubarak, Tolba, Amr, Guo, He, Xia, Feng
- Authors: Bedru, Hayat , Zhao, Wenhong , Alrashoud, Mubarak , Tolba, Amr , Guo, He , Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 201015-201027
- Full Text:
- Reviewed:
- Description: Subnetwork identification plays a significant role in analyzing, managing, and comprehending the structure and functions of big networks. Numerous approaches have been proposed to solve the problem of subnetwork identification as well as community detection. Most methods focus on detecting communities by considering node attributes, edge information, or both. This study focuses on discovering subnetworks containing researchers with similar or related areas of interest or research topics. Topic-aware subnetwork identification is essential for discovering potential researchers on particular research topics and providing quality work. Thus, we propose a topic-based optimal subnetwork identification approach (TOSNet). Based on some fundamental characteristics, this paper addresses the following problems: 1) How to discover topic-based subnetworks with a vigorous collaboration intensity? 2) How to rank the discovered subnetworks and single out one optimal subnetwork? We evaluate the performance of the proposed method against baseline methods by adopting the modularity measure, assess the accuracy based on the size of the identified subnetworks, and check the scalability for different sizes of benchmark networks. The experimental findings indicate that our approach shows excellent performance in identifying contextual subnetworks that maintain intensive collaboration amongst researchers for a particular research topic. © 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
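The modularity measure used for evaluation has a standard definition, Q = (1/2m) Σᵢⱼ (Aᵢⱼ − kᵢkⱼ/2m) δ(cᵢ, cⱼ), which can be computed directly. A small NumPy sketch (the example graph and community split are invented for illustration):

```python
import numpy as np

def modularity(adj, communities):
    """Newman modularity: Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) * delta(c_i, c_j)."""
    adj = np.asarray(adj, dtype=float)
    k = adj.sum(axis=1)            # node degrees
    two_m = adj.sum()              # 2m for an undirected adjacency matrix
    q = 0.0
    for i in range(len(adj)):
        for j in range(len(adj)):
            if communities[i] == communities[j]:
                q += adj[i, j] - k[i] * k[j] / two_m
    return q / two_m

# Two clean 3-node cliques joined by a single bridge edge:
# splitting along the bridge scores well.
A = np.zeros((6, 6))
for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[a, b] = A[b, a] = 1
q = modularity(A, [0, 0, 0, 1, 1, 1])
print(round(q, 3))  # 0.357
```

Higher Q indicates denser collaboration inside the identified subnetworks than expected by chance, which is the property TOSNet's ranking rewards.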
A robust consistency model of crowd workers in text labeling tasks
- Alqershi, Fattoh, Al-Qurishi, Muhammad, Aksoy, Mehmet, Alrubaian, Majed, Imran, Muhammad
- Authors: Alqershi, Fattoh , Al-Qurishi, Muhammad , Aksoy, Mehmet , Alrubaian, Majed , Imran, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 168381-168393
- Full Text:
- Reviewed:
- Description: Crowdsourcing is a popular human-based model for acquiring labeled data. Despite its ability to generate huge amounts of labeled data at moderate cost, it is susceptible to low-quality labels, which can result from unintentional or intentional errors by the crowd workers. Consistency is an important attribute of reliability: it is a practical metric that evaluates a crowd worker's reliability based on their ability to conform to themselves by yielding the same output when repeatedly given a particular input. Consistency has not yet been sufficiently explored in the literature. In this work, we propose a novel consistency model based on the pairwise comparisons method and apply it to unpaid workers. We measure the workers' consistency on tasks of labeling political text-based claims and study the effects of different duplicate task characteristics on their consistency. Our results show that the proposed model outperforms the current state-of-the-art models in terms of accuracy. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
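The paper's pairwise-comparison model is more elaborate, but the core idea, agreement with oneself on repeated tasks, can be sketched as follows (the function name and the `(task_id, label)` data layout are hypothetical, chosen only to make the idea concrete):

```python
from collections import defaultdict

def consistency(answers):
    """Fraction of repeated-task pairs a worker answers identically.
    `answers` is a list of (task_id, label) tuples, with task_ids repeated
    when the same claim is shown to the worker more than once."""
    by_task = defaultdict(list)
    for task, label in answers:
        by_task[task].append(label)
    agree = total = 0
    for labels in by_task.values():
        for i in range(len(labels)):
            for j in range(i + 1, len(labels)):
                total += 1
                agree += labels[i] == labels[j]
    return agree / total if total else 1.0

# Worker saw claims A and B twice each; agreed on A, flipped on B.
score = consistency([("A", "true"), ("B", "false"), ("A", "true"), ("B", "true")])
print(score)  # 0.5
```

A worker who never contradicts themselves scores 1.0; duplicate-task characteristics (spacing, wording changes, etc.) can then be varied to study their effect on this score, as the abstract describes.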
Emergency message dissemination schemes based on congestion avoidance in VANET and vehicular FoG computing
- Ullah, Ata, Yaqoob, Shumayla, Imran, Muhammad, Ning, Huansheng
- Authors: Ullah, Ata , Yaqoob, Shumayla , Imran, Muhammad , Ning, Huansheng
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Access Vol. 7, no. (2019), p. 1570-1585
- Full Text:
- Reviewed:
- Description: With the rapid growth in connected vehicles, the FoG-assisted vehicular ad hoc network (VANET) is an emerging and novel field of research. For information sharing, a number of messages are exchanged in various applications, including traffic monitoring and area-specific live weather and social aspects monitoring. This is quite challenging where vehicles' speed, direction, and density of neighbors on the move are not consistent. In this scenario, avoiding congestion to prevent communication loss during busy hours or in emergency cases is also quite challenging. This paper presents emergency message dissemination schemes based on congestion avoidance scenarios in VANET and vehicular FoG computing. In a similar vein, a FoG-assisted VANET architecture is explored that can efficiently manage message congestion scenarios. We present a taxonomy of schemes that address message congestion avoidance, followed by a comparison of congestion avoidance schemes to highlight their strengths and weaknesses. We also identify that FoG servers help to reduce accessibility delays and congestion compared with directly approaching the cloud for all requests in linkage with big data repositories. For the dependable applicability of FoG in VANET, we identify a number of open research challenges. © 2013 IEEE.
Machine Learning Techniques for 5G and beyond
- Kaur, Jasneet, Khan, M. Arif, Iftikhar, Mohsin, Imran, Muhammad, Emad Ul Haq, Qazi
- Authors: Kaur, Jasneet , Khan, M. Arif , Iftikhar, Mohsin , Imran, Muhammad , Emad Ul Haq, Qazi
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 23472-23488
- Full Text:
- Reviewed:
- Description: Wireless communication systems play a crucial role in modern society for entertainment, business, commercial, health, and safety applications. These systems keep evolving from one generation to the next, and we are currently seeing the deployment of fifth generation (5G) wireless systems around the world. Academics and industry are already discussing beyond-5G wireless systems, which will be the sixth generation (6G) of the evolution. One of the key components of 6G systems will be the use of Artificial Intelligence (AI) and Machine Learning (ML) in such wireless networks. Every component and building block of a wireless system that we are familiar with from wireless technologies up to 5G, such as the physical, network, and application layers, will involve one AI/ML technique or another. This overview paper presents an up-to-date review of future wireless system concepts such as 6G and the role of ML techniques in these future wireless systems. In particular, we present a conceptual model for 6G and show the use and role of ML techniques in each layer of the model. We review classical and contemporary ML techniques such as supervised and unsupervised learning, Reinforcement Learning (RL), Deep Learning (DL), and Federated Learning (FL) in the context of wireless communication systems. We conclude the paper with future applications and research challenges in the area of ML and AI for 6G networks. © 2013 IEEE.
Extending the technology acceptance model for use of e-learning systems by digital learners
- Hanif, Aamer, Jamal, Faheem, Imran, Muhammad
- Authors: Hanif, Aamer , Jamal, Faheem , Imran, Muhammad
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6, no. (2018), p. 73395-73404
- Full Text:
- Reviewed:
- Description: Technology-based learning systems enable enhanced student learning in higher-education institutions. This paper evaluates the factors affecting behavioral intention of students toward using e-learning systems in universities to augment classroom learning. Based on the technology acceptance model, this paper proposes six external factors that influence the behavioral intention of students toward use of e-learning. A quantitative approach involving structural equation modeling is adopted, and research data collected from 437 undergraduate students enrolled in three academic programs is used for analysis. Results indicate that subjective norm, perception of external control, system accessibility, enjoyment, and result demonstrability have a significant positive influence on perceived usefulness and on perceived ease of use of the e-learning system. This paper also examines the relevance of some previously used external variables, e.g., self-efficacy, experience, and computer anxiety, for present-world students who have been brought up as digital learners and have higher levels of computer literacy and experience. © 2018 IEEE.
Robust image classification using a low-pass activation function and DCT augmentation
- Hossain, Md Tahmid, Teng, Shyh, Sohel, Ferdous, Lu, Guojun
- Authors: Hossain, Md Tahmid , Teng, Shyh , Sohel, Ferdous , Lu, Guojun
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 86460-86474
- Full Text:
- Reviewed:
- Description: Convolutional Neural Network's (CNN's) performance disparity on clean and corrupted datasets has recently come under scrutiny. In this work, we analyse common corruptions in the frequency domain, i.e., High Frequency corruptions (HFc, e.g., noise) and Low Frequency corruptions (LFc, e.g., blur). Although a simple solution to HFc is low-pass filtering, ReLU - a widely used Activation Function (AF), does not have any filtering mechanism. In this work, we instill low-pass filtering into the AF (LP-ReLU) to improve robustness against HFc. To deal with LFc, we complement LP-ReLU with Discrete Cosine Transform based augmentation. LP-ReLU, coupled with DCT augmentation, enables a deep network to tackle the entire spectrum of corruption. We use CIFAR-10-C and Tiny ImageNet-C for evaluation and demonstrate improvements of 5% and 7.3% in accuracy respectively, compared to the State-Of-The-Art (SOTA). We further evaluate our method's stability on a variety of perturbations in CIFAR-10-P and Tiny ImageNet-P, achieving new SOTA in these experiments as well. To further strengthen our understanding regarding CNN's lack of robustness, a decision space visualisation process is proposed and presented in this work. © 2013 IEEE.
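The DCT-domain frequency manipulation underlying the augmentation can be illustrated with a plain NumPy DCT-II. This shows generic low-pass filtering in the DCT domain, not the paper's exact augmentation scheme or its LP-ReLU formulation; the orthonormal basis construction below is standard.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    m = np.array([[np.cos(np.pi * k * (2 * i + 1) / (2 * n)) for i in range(n)]
                  for k in range(n)])
    m[0] *= np.sqrt(1.0 / n)
    m[1:] *= np.sqrt(2.0 / n)
    return m

def dct_lowpass(signal, keep):
    """DCT-style low-pass sketch: transform, zero out the highest
    frequencies, and invert (an orthonormal DCT's inverse is its transpose)."""
    n = len(signal)
    D = dct_matrix(n)
    coeffs = D @ signal
    coeffs[keep:] = 0.0          # discard high-frequency content
    return D.T @ coeffs

x = np.ones(8)                   # a constant signal lives entirely in coefficient 0
y = dct_lowpass(x, keep=1)
print(np.allclose(x, y))         # True: low-pass leaves it unchanged
```

Augmenting training images with such frequency-filtered variants is one way to expose a network to the low-frequency corruption regime (blur-like inputs) that the abstract says DCT augmentation targets.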
A novel collaborative IoD-assisted VANET approach for coverage area maximization
- Ahmed, Gamil, Sheltami, Tarek, Mahmoud, Ashraf, Imran, Muhammad, Shoaib, Muhammad
- Authors: Ahmed, Gamil , Sheltami, Tarek , Mahmoud, Ashraf , Imran, Muhammad , Shoaib, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 61211-61223
- Full Text:
- Reviewed:
- Description: The Internet of Drones (IoD) is an efficient technique that can be integrated with vehicular ad-hoc networks (VANETs) to support terrestrial communications by acting as an aerial relay when terrestrial infrastructure is unreliable or unavailable. To fully exploit the drones' flexibility and superiority, we propose a novel dynamic IoD collaborative communication approach for urban VANETs. Unlike most existing approaches, the IoD nodes are dynamically deployed based on the current locations of ground vehicles to effectively mitigate the isolated cars that are inevitable in conventional VANETs. For efficient coordination, we model the IoD to optimize coverage based on the locations of vehicles. The goal is to obtain an efficient IoD deployment that maximizes the number of covered vehicles, i.e., minimizes the number of isolated vehicles in the target area. More importantly, the proposed approach provides sufficient interconnections between IoD nodes. To do so, an improved version of a succinct population-based meta-heuristic, namely Improved Particle Swarm Optimization (IPSO), inspired by the food-searching behavior of bird and fish flocks, is implemented for the IoD-assisted VANET (IoDAV). Moreover, coverage, received signal quality, and IoD connectivity are achieved simultaneously by IPSO's objective function for optimal IoD deployment. We carry out an extensive experiment based on the signal received at floating vehicles to examine the proposed IoDAV's performance. We compare the results with a baseline VANET with no IoD (NIoD) and a fixed IoD-assisted scheme (FIoD). The comparisons are based on the coverage percentage of the ground vehicles and the quality of the received signal. The simulation results demonstrate that the proposed IoDAV approach finds the optimal IoD positions over time based on the vehicles' movements and achieves better coverage and received signal quality than the NIoD and FIoD schemes. © 2013 IEEE.
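A bare-bones particle swarm optimization for single-drone coverage, sketched under heavy assumptions: the paper's IPSO places multiple drones and also rewards received signal quality and inter-drone connectivity in the same objective, while the inertia and acceleration coefficients below are generic textbook values.

```python
import numpy as np

rng = np.random.default_rng(1)
vehicles = rng.uniform(0, 100, size=(40, 2))   # ground-vehicle positions (km grid is arbitrary)
RANGE = 30.0                                   # assumed drone coverage radius

def covered(pos):
    """Objective: number of vehicles within communication range of a drone at `pos`."""
    return int((np.linalg.norm(vehicles - pos, axis=1) <= RANGE).sum())

# Plain PSO: particles are candidate drone positions.
n, iters = 15, 60
pos = rng.uniform(0, 100, size=(n, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([covered(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    # Velocity update: inertia + pull toward personal and global bests.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 100)           # keep particles inside the target area
    vals = np.array([covered(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print(covered(gbest))   # vehicles covered by the best position found
```

Re-running this each time the vehicle positions update gives the dynamic redeployment behavior the abstract describes.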
Enhancing quality-of-service conditions using a cross-layer paradigm for ad-hoc vehicular communication
- Rehman, Sabih, Arif Khan, M. Arif, Imran, Muhammad, Zia, Tanveer, Iftikhar, Mohsin
- Authors: Rehman, Sabih , Arif Khan, M. Arif , Imran, Muhammad , Zia, Tanveer , Iftikhar, Mohsin
- Date: 2017
- Type: Text , Journal article
- Relation: IEEE Access Vol. 5, no. (2017), p. 12404-12416
- Full Text:
- Reviewed:
- Description: The Internet of Vehicles (IoV) is an emerging paradigm aiming to introduce a plethora of innovative applications and services that impose certain quality of service (QoS) requirements. The IoV mainly relies on vehicular ad-hoc networks (VANETs) for autonomous inter-vehicle communication and road-traffic safety management. With the ever-increasing demand to design new and emerging applications for VANETs, one challenge that continues to stand out is the provision of acceptable QoS to particular user applications. Most existing solutions to this challenge rely on a single layer of the protocol stack. This paper presents a cross-layer decision-based routing protocol that chooses the best multi-hop path for packet delivery to meet acceptable QoS requirements. The proposed protocol acquires information about the channel rate from the physical layer and incorporates it into decision making while directing traffic at the network layer. Key performance metrics for the system design are analyzed using extensive experimental simulation scenarios. In addition, three data-rate-variant solutions are proposed to cater for various application-specific requirements in highway and urban environments. © 2013 IEEE.
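One plausible reading of rate-aware multi-hop path selection is a widest-path (maximum-bottleneck) search over link rates reported up from the physical layer. The sketch below is illustrative only, not the paper's protocol; the topology and rate values are invented.

```python
import heapq

def widest_path(graph, src, dst):
    """Pick the multi-hop path maximizing the bottleneck (minimum) link
    rate -- a simple stand-in for rate-aware cross-layer path selection.
    `graph[u]` maps neighbour v -> channel rate of link (u, v)."""
    best = {src: float("inf")}
    heap = [(-best[src], src, [src])]
    while heap:
        neg_rate, u, path = heapq.heappop(heap)
        if u == dst:
            return -neg_rate, path
        for v, rate in graph.get(u, {}).items():
            bottleneck = min(-neg_rate, rate)
            if bottleneck > best.get(v, 0):
                best[v] = bottleneck
                heapq.heappush(heap, (-bottleneck, v, path + [v]))
    return 0, []

# Hypothetical link rates in Mb/s between vehicles A..D.
g = {"A": {"B": 6, "C": 27}, "B": {"D": 12}, "C": {"D": 9}}
print(widest_path(g, "A", "D"))  # (9, ['A', 'C', 'D'])
```

Here the direct-looking path A-B-D is rejected because its weakest link (6 Mb/s) is worse than the 9 Mb/s bottleneck of A-C-D, which is exactly the kind of decision a channel-rate-informed network layer can make.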
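The cross-layer idea described above — the network layer consulting the physical layer's channel rate when picking a forwarder — can be sketched as follows. This is a minimal illustration, not the paper's actual protocol: the `Candidate` structure, the QoS rate floor, and the rate-versus-distance weighting are all assumptions made for the example.

```python
# Hypothetical sketch of cross-layer next-hop selection: the network layer
# weighs each candidate's physical-layer channel rate against its remaining
# distance to the destination. The scoring function is illustrative only.
from dataclasses import dataclass

@dataclass
class Candidate:
    node_id: str
    channel_rate_mbps: float   # reported up from the physical layer
    distance_m: float          # remaining distance to the destination

def choose_next_hop(candidates, min_rate_mbps=6.0):
    """Pick the forwarder that meets a QoS rate floor and offers the best
    trade-off between channel rate and remaining distance."""
    eligible = [c for c in candidates if c.channel_rate_mbps >= min_rate_mbps]
    if not eligible:
        return None  # no candidate satisfies the QoS requirement
    # Higher rate and shorter remaining distance both score better.
    return max(eligible, key=lambda c: c.channel_rate_mbps / (1.0 + c.distance_m))

hops = [
    Candidate("v1", 3.0, 120.0),   # fails the rate floor
    Candidate("v2", 12.0, 150.0),
    Candidate("v3", 9.0, 40.0),    # best rate/distance trade-off
]
best = choose_next_hop(hops)
```

A single-layer protocol would rank `v2` and `v3` on distance or hop count alone; folding in the physical-layer rate is what makes the decision cross-layer.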
Co-EEORS: cooperative energy efficient optimal relay selection protocol for underwater wireless sensor networks
- Khan, Anwar, Ali, Ihsan, Rahman, Atiq, Imran, Muhammad, Amin, Fazal, Mahmood, Hasan
- Authors: Khan, Anwar , Ali, Ihsan , Rahman, Atiq , Imran, Muhammad , Amin, Fazal , Mahmood, Hasan
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6, no. (2018), p. 28777-28789
- Full Text:
- Reviewed:
- Description: Cooperative routing mitigates the adverse channel effects in the harsh underwater environment and ensures reliable delivery of packets from the bottom to the surface of the water. Cooperative routing is analogous to sparse recovery in that faded copies of data packets are processed by the destination node to extract the desired information. However, it usually requires knowledge of two or three position coordinates of the nodes, as well as synchronization of the source, relay, and destination nodes. These requirements make cooperative routing a challenging task, as sensor nodes move with water currents. Moreover, data packets are simply discarded if the acceptable threshold is not met at the destination, which threatens reliable delivery of data to the final destination. To cope with these challenges, this paper proposes a cooperative energy-efficient optimal relay selection protocol for underwater wireless sensor networks. Unlike existing routing protocols involving cooperation, the proposed scheme combines the location and depth of the sensor nodes to select the destination nodes. Combining these two parameters does not require knowing the position coordinates of the nodes and results in the selection of destination nodes closest to the water surface. As a result, data packets are less affected by the channel properties. In addition, a source node chooses both a relay node and a destination node; data packets are forwarded to the destination node by the relay node as soon as the relay node receives them, which eliminates the need for synchronization among the source, relay, and destination nodes. Moreover, the destination node acknowledges to the source node either successful reception or the need for retransmission of the data packets, which overcomes packet drops. Simulation results show that the proposed scheme outperforms some existing techniques in delivering packets to the final destination. © 2013 IEEE.
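The coordinate-free selection described above — combining depth with a location indicator to pick the destination closest to the surface, plus a relay that forwards immediately — can be sketched as follows. This is an illustrative approximation, not the protocol itself: the `Neighbor` structure, the use of RSSI as the location proxy, and the scoring weights are all assumptions made for the example.

```python
# Hypothetical sketch of Co-EEORS-style selection: a source node ranks its
# neighbors by combining depth (shallower = closer to the surface) with
# received signal strength (a coordinate-free proxy for location), then takes
# the best as destination and the runner-up as relay. Weights are illustrative.
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: str
    depth_m: float      # from the node's own pressure sensor
    rssi_dbm: float     # strength of the hello packet heard from it

def score(n, max_depth_m=500.0):
    # Normalise both terms to roughly [0, 1]; shallower and stronger is better.
    depth_term = 1.0 - n.depth_m / max_depth_m
    signal_term = (n.rssi_dbm + 100.0) / 100.0   # assumes RSSI in [-100, 0] dBm
    return depth_term + signal_term

def select_destination_and_relay(neighbors):
    """Destination = best-scoring neighbor; relay = second best. The relay
    forwards each packet as soon as it arrives, so no synchronization among
    source, relay, and destination is needed."""
    ranked = sorted(neighbors, key=score, reverse=True)
    dest = ranked[0]
    relay = ranked[1] if len(ranked) > 1 else None
    return dest, relay

nbrs = [
    Neighbor("a", 300.0, -80.0),   # deep and weak: worst score
    Neighbor("b", 120.0, -70.0),   # shallowest
    Neighbor("c", 150.0, -60.0),   # strongest signal
]
dest, relay = select_destination_and_relay(nbrs)
```

Note that no node ever learns another's position coordinates: depth comes from a local pressure sensor and the signal term from the received packet, which is the property the abstract highlights.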