Heterogeneity-aware task allocation in mobile ad hoc cloud
- Authors: Yaqoob, Ibrar , Ahmed, Ejaz , Gani, Abdullah , Mokhtar, Salimah , Imran, Muhammad
- Date: 2017
- Type: Text , Journal article
- Relation: IEEE Access Vol. 5, no. (2017), p. 1779-1795
- Full Text:
- Reviewed:
- Description: Mobile Ad Hoc Cloud (MAC) enables the use of a multitude of proximate resource-rich mobile devices to provide computational services in the vicinity. However, inattention to mobile device resources and operational heterogeneity-measuring parameters, such as CPU speed, number of cores, and workload, when allocating tasks in MAC causes inefficient resource utilization that prolongs task execution time and consumes large amounts of energy. Task execution is remarkably degraded because the longer execution time and high energy consumption impede the optimum use of MAC. This paper aims to minimize execution time and energy consumption by proposing heterogeneity-aware task allocation solutions for MAC-based compute-intensive tasks. Results of the proposed solutions reveal that incorporating the heterogeneity-measuring parameters guarantees a shorter execution time and reduces the energy consumption of compute-intensive tasks in MAC. A system model is developed to validate the proposed solutions' empirical results. In comparison with random-based task allocation, the five proposed solutions based on CPU speed; number of cores; workload; CPU speed and workload; and CPU speed, cores, and workload reduce execution time by up to 56.72%, 53.12%, 56.97%, 61.23%, and 71.55%, respectively. In addition, these heterogeneity-aware task allocation solutions save energy by up to 69.78%, 69.06%, 68.25%, 67.26%, and 57.33%, respectively. For this reason, the proposed solutions significantly improve tasks' execution performance, which can increase the optimum use of MAC. © 2013 IEEE.
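The heterogeneity-aware idea above can be sketched as a scoring rule over device parameters. This is an illustrative Python sketch, not the paper's algorithm; the field names, the score formula, and the per-task workload increment are all assumptions.

```python
def heterogeneity_score(device):
    """Higher is better: fast, many-core, lightly loaded devices rank first."""
    return device["cpu_ghz"] * device["cores"] * (1.0 - device["workload"])

def allocate(tasks, devices):
    """Greedily assign each task to the currently best-scoring device,
    charging an assumed workload increment per assignment."""
    assignment = {}
    for task in tasks:
        best = max(devices, key=heterogeneity_score)
        assignment[task] = best["id"]
        best["workload"] = min(1.0, best["workload"] + 0.1)  # assumed increment
    return assignment

devices = [
    {"id": "A", "cpu_ghz": 2.4, "cores": 8, "workload": 0.2},
    {"id": "B", "cpu_ghz": 1.8, "cores": 4, "workload": 0.1},
]
assignment = allocate(["t1", "t2"], devices)  # both tasks land on the stronger device A
```

A random-based allocator (the paper's baseline) would instead pick devices uniformly, ignoring CPU speed, cores, and workload.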
Deep learning-based approach for detecting trajectory modifications of Cassini-Huygens spacecraft
- Authors: Aldabbas, Ashraf , Gal, Zoltan , Ghori, Khawaja , Imran, Muhammad , Shoaib, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 39111-39125
- Full Text:
- Reviewed:
- Description: The Cassini spacecraft required trajectory modifications during the last 14 years of its interplanetary research mission. At the scale of a 1.3-hour signal propagation time over the 1.4-billion-kilometer Earth-Cassini channel, complex event detection in the orbit modifications requires special investigation and analysis of the collected big data. The technologies for space exploration warrant a high standard of nuanced and detailed research. The Cassini mission accumulated huge volumes of science records, and the resulting curiosity derives mainly from a need to use machine learning to analyze deep space missions. For energy-saving considerations, communication between Earth and Cassini was executed in non-periodic mode. This paper provides a sophisticated deep learning approach for detecting Cassini spacecraft trajectory modifications in post-processing mode. The proposed model utilizes the ability of Long Short-Term Memory (LSTM) neural networks to extract useful data and learn the inner patterns of time series, along with the strength of LSTM layers in distinguishing long- and short-term dependencies. Our study used statistical rates, the Matthews correlation coefficient, and the F1 score to evaluate our models. We carried out multiple tests and evaluated the proposed approach against several advanced models. The preparatory analysis showed that exploiting the LSTM layer provides a notable boost in detection performance. The proposed model detected 232 trajectory modifications with 99.98% accuracy over the last 13.35 years of the Cassini spacecraft's life. © 2013 IEEE.
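The evaluation metrics named in the abstract, Matthews correlation coefficient (MCC) and F1 score, can be computed directly from binary labels. A minimal pure-Python sketch using the standard definitions (this is not code from the paper):

```python
import math

def confusion(y_true, y_pred):
    """Counts of true positives, true negatives, false positives, false negatives."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def mcc(y_true, y_pred):
    """Matthews correlation coefficient; 0.0 when the denominator vanishes."""
    tp, tn, fp, fn = confusion(y_true, y_pred)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def f1(y_true, y_pred):
    """F1 score: harmonic mean of precision and recall."""
    tp, _, fp, fn = confusion(y_true, y_pred)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
```

MCC is a sensible companion to F1 here because trajectory modifications are rare events, and MCC accounts for true negatives, which dominate such a detection task.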
Investigating smart home security : is blockchain the answer?
- Authors: Arif, Samrah , Khan, M. Arif , Rehman, Sabih , Kabir, Muhammad , Imran, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 117802-117816
- Full Text:
- Reviewed:
- Description: Smart home automation is increasingly gaining popularity among current applications of the Internet of Things (IoT) due to the convenience and facilities it provides to homeowners. Sensors embedded in home appliances are wirelessly connected so that homeowners can access and operate these devices remotely. With the exponential increase of smart home IoT devices in the marketplace, such as door locks, light bulbs, and power switches, numerous security concerns are arising: the limited storage and processing power of such devices makes them vulnerable to several attacks. For this reason, securing the deployment of these devices has emerged as a critical research area. Moreover, traditional security schemes have failed to address the unique security concerns associated with these devices. Blockchain, a decentralised database based on cryptographic techniques, is gaining enormous attention as a way to assure the security of IoT systems. The blockchain framework within an IoT system is a fascinating substitute for traditional centralised models, which have significant shortcomings in meeting the security demands of smart homes. In this article, we examine the security of smart homes by investigating the adoption of blockchain and exploring some currently proposed smart home architectures that use blockchain technology. To present our findings, we describe a simple secure smart home framework based on a refined version of blockchain called Consortium blockchain, and highlight the limitations and opportunities of adopting such an architecture. We further evaluate our model on an experimental testbed built from a few household IoT devices commonly available in the marketplace. © 2013 IEEE.
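The tamper-evidence property that blockchain brings to a smart home log can be illustrated with a minimal hash-chained ledger. This is only a sketch of the linking mechanism; a real consortium blockchain adds permissioned validators and a consensus protocol, which are omitted here, and the event names are invented.

```python
import hashlib
import json

def make_block(index, events, prev_hash):
    """Create a block of smart-home events linked to its predecessor's hash."""
    body = {"index": index, "events": events, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def valid_chain(chain):
    """Verify each block's own hash and its link to the previous block."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("index", "events", "prev")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != block["hash"]:
            return False  # block contents were altered after hashing
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False  # link to predecessor is broken
    return True

genesis = make_block(0, ["door_lock:init"], "0" * 64)
chain = [genesis, make_block(1, ["light:on"], genesis["hash"])]
```

Changing any logged event invalidates that block's hash and every link after it, which is what makes the ledger useful for auditing device activity in a smart home.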
Treating class imbalance in non-technical loss detection : an exploratory analysis of a real dataset
- Authors: Ghori, Khawaja , Awais, Muhammad , Khattak, Akmal , Imran, Muhammad , Amin, Fazal , Szathmary, Laszlo
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 98928-98938
- Full Text:
- Reviewed:
- Description: Non-Technical Loss (NTL) is a significant concern for many electric supply companies due to the financial impact of suspect consumption activities. A range of machine learning classifiers has been tested across multiple synthesized and real datasets to combat NTL. An important characteristic of these datasets is the imbalanced distribution of the classes. When the focus is on predicting the minority class of suspect activities, the classifiers' sensitivity to class imbalance becomes more important. In this paper, we evaluate the performance of a range of classifiers with under-sampling and over-sampling techniques and compare the results with the untreated imbalanced dataset. In addition, we compare the performance of the classifiers using a penalized classification model. Lastly, the paper presents an exploratory analysis of different sampling techniques for NTL detection in a real dataset and identifies the best-performing classifiers. We conclude that logistic regression is the most sensitive to the sampling techniques, with a change in recall of around 50% across all of them, while random forest is the least sensitive, with a difference in precision of between 1% and 6%. © 2013 IEEE.
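Random under-sampling, one of the treatments the paper evaluates, can be sketched in a few lines: majority-class rows are dropped at random until the classes balance. The variable names and the fixed seed are illustrative assumptions, not details from the paper.

```python
import random

def undersample(rows, labels, minority=1, seed=42):
    """Keep all minority-class rows; drop majority rows at random to balance."""
    rng = random.Random(seed)
    minority_idx = [i for i, y in enumerate(labels) if y == minority]
    majority_idx = [i for i, y in enumerate(labels) if y != minority]
    keep = minority_idx + rng.sample(majority_idx, len(minority_idx))
    keep.sort()  # preserve original row order
    return [rows[i] for i in keep], [labels[i] for i in keep]

X = [[i] for i in range(10)]
y = [1, 0, 0, 0, 1, 0, 0, 0, 0, 1]  # 3 suspect (minority) vs 7 normal rows
Xb, yb = undersample(X, y)           # balanced: 3 suspect vs 3 normal
```

Over-sampling works the other way (replicating or synthesizing minority rows), and a penalized model skips resampling entirely by weighting minority errors more heavily in the loss.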
A novel collaborative IoD-assisted VANET approach for coverage area maximization
- Authors: Ahmed, Gamil , Sheltami, Tarek , Mahmoud, Ashraf , Imran, Muhammad , Shoaib, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 61211-61223
- Full Text:
- Reviewed:
- Description: The Internet of Drones (IoD) is an efficient technology that can be integrated with vehicular ad hoc networks (VANETs) to provide terrestrial communications, acting as an aerial relay when terrestrial infrastructure is unreliable or unavailable. To fully exploit the drones' flexibility and superiority, we propose a novel dynamic IoD collaborative communication approach for urban VANETs. Unlike most existing approaches, the IoD nodes are dynamically deployed based on the current locations of ground vehicles to effectively mitigate the isolated vehicles that are inevitable in conventional VANETs. To coordinate the IoD efficiently, we model it to optimize coverage based on vehicle locations. The goal is to obtain an efficient IoD deployment that maximizes the number of covered vehicles, i.e., minimizes the number of isolated vehicles in the target area. More importantly, the proposed approach provides sufficient interconnections between IoD nodes. To this end, an improved version of a succinct population-based meta-heuristic, namely Improved Particle Swarm Optimization (IPSO), inspired by the food-searching behavior of bird and fish flocks, is implemented for the IoD-assisted VANET (IoDAV). Coverage, received signal quality, and IoD connectivity are jointly captured by IPSO's objective function for optimal IoD deployment. We carry out an extensive experiment based on the signal received at floating vehicles to examine the performance of the proposed IoDAV, and compare the results with a baseline VANET with no IoD (NIoD) and with fixed IoD assistance (FIoD). The comparisons are based on the coverage percentage of the ground vehicles and the quality of the received signal.
The simulation results demonstrate that the proposed IoDAV approach finds the optimal IoD positions over time based on the vehicles' movements, and achieves better coverage and received signal quality than the NIoD and FIoD schemes. © 2013 IEEE.
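A compact particle swarm sketch conveys the placement idea: particles are candidate drone positions, and the objective counts covered vehicles. This is a toy with one drone and a plain PSO; the paper's IPSO additionally handles multiple IoD nodes, inter-node connectivity, and signal quality. The coordinates, radius, and PSO coefficients are invented for illustration.

```python
import random

def coverage(pos, vehicles, radius=3.0):
    """Number of vehicles within communication radius of the drone position."""
    return sum((pos[0] - vx) ** 2 + (pos[1] - vy) ** 2 <= radius ** 2
               for vx, vy in vehicles)

def pso_place(vehicles, n_particles=20, iters=60, seed=1):
    """Standard PSO over a 10x10 area maximizing vehicle coverage."""
    rng = random.Random(seed)
    parts = [[rng.uniform(0, 10), rng.uniform(0, 10)] for _ in range(n_particles)]
    vels = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in parts]                      # per-particle best positions
    gbest = max(pbest, key=lambda p: coverage(p, vehicles))[:]
    for _ in range(iters):
        for i, p in enumerate(parts):
            for d in range(2):  # inertia + cognitive + social velocity update
                vels[i][d] = (0.7 * vels[i][d]
                              + 1.5 * rng.random() * (pbest[i][d] - p[d])
                              + 1.5 * rng.random() * (gbest[d] - p[d]))
                p[d] += vels[i][d]
            if coverage(p, vehicles) > coverage(pbest[i], vehicles):
                pbest[i] = p[:]
            if coverage(p, vehicles) > coverage(gbest, vehicles):
                gbest = p[:]
    return gbest

vehicles = [(1, 1), (2, 1), (1, 2), (8, 8), (9, 8)]  # two clusters of vehicles
best = pso_place(vehicles)
```

With two vehicle clusters, the swarm converges on whichever cluster yields the higher count; the paper's multi-drone formulation avoids this either/or by deploying several relays.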
A robust consistency model of crowd workers in text labeling tasks
- Authors: Alqershi, Fattoh , Al-Qurishi, Muhammad , Aksoy, Mehmet , Alrubaian, Majed , Imran, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 168381-168393
- Full Text:
- Reviewed:
- Description: Crowdsourcing is a popular human-based model for acquiring labeled data. Despite its ability to generate huge amounts of labeled data at moderate cost, it is susceptible to low-quality labels, which can arise through unintentional or intentional errors by crowd workers. Consistency is an important attribute of reliability: it is a practical metric that evaluates a crowd worker's reliability based on the ability to conform to oneself by yielding the same output when repeatedly given a particular input. Consistency has not yet been sufficiently explored in the literature. In this work, we propose a novel consistency model based on the pairwise comparisons method and apply it to unpaid workers. We measure the workers' consistency on tasks of labeling political text-based claims and study the effects of different duplicate task characteristics on their consistency. Our results show that the proposed model outperforms the current state-of-the-art models in terms of accuracy. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
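The core notion, agreement of a worker with themselves on duplicated tasks, can be reduced to a one-number score. This is a deliberately simple illustration of the idea, not the paper's pairwise-comparisons model; the claim labels are invented.

```python
def consistency(answers):
    """answers: {task_id: (label_first, label_repeat)} for one worker.
    Returns the fraction of duplicated tasks answered identically both times."""
    if not answers:
        return 0.0
    agree = sum(a == b for a, b in answers.values())
    return agree / len(answers)

worker = {"claim1": ("true", "true"),
          "claim2": ("false", "true"),   # self-contradiction on the repeat
          "claim3": ("true", "true"),
          "claim4": ("false", "false")}
score = consistency(worker)  # 3 of 4 duplicates agree
```

A reliability model would then weight or filter workers by such a score before aggregating their labels.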
Energy efficiency perspectives of femtocells in internet of things : recent advances and challenges
- Authors: Al-Turjman, Fadi , Imran, Muhammad , Bakhsh, Sheikh
- Date: 2017
- Type: Text , Journal article
- Relation: IEEE Access Vol. 5, no. (2017), p. 26808-26818
- Full Text:
- Reviewed:
- Description: Energy efficiency is a growing concern in every aspect of technology. Apart from maintaining profitability, energy efficiency means a decrease in overall environmental effects, which is a serious concern in today's world. Using femtocells in the Internet of Things (IoT) can boost energy efficiency. To illustrate, femtocells can be used in smart homes, a subpart of the smart grid, as a communication mechanism for managing energy efficiency. Moreover, femtocells can provide communication in many IoT applications. However, it is important to evaluate the energy efficiency of femtocells. This paper investigates recent advances and challenges in the energy efficiency of femtocells in IoT. First, we introduce the idea of femtocells in the context of IoT and their role in IoT applications. Next, we describe prominent performance metrics in order to understand how energy efficiency is evaluated. Then, we elucidate how energy can be modeled in terms of femtocells and provide some models from the literature. Since femtocells are used in heterogeneous networks to manage energy efficiency, we also describe some energy efficiency schemes for deployment. The factors that affect the energy usage of a femtocell base station are discussed, followed by the power consumption of user equipment under femtocell coverage. Finally, we highlight prominent open research issues and challenges. © 2013 IEEE.
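Among the performance metrics such a survey covers, energy efficiency is commonly expressed as delivered bits per joule. A toy computation of that metric (all numeric values below are made-up assumptions for illustration, not figures from the paper):

```python
def energy_efficiency(bits_delivered, power_watts, seconds):
    """Bits per joule: data delivered divided by energy consumed."""
    joules = power_watts * seconds
    return bits_delivered / joules

# Hypothetical comparison: a low-power femtocell vs a macrocell base station
# delivering the same 50 Mbit of traffic over 10 seconds.
femto = energy_efficiency(50e6, 8, 10)     # ~8 W femtocell draw (assumed)
macro = energy_efficiency(50e6, 1300, 10)  # ~1.3 kW macrocell draw (assumed)
```

The point of the metric is exactly this kind of comparison: for short-range indoor traffic, the femtocell delivers far more bits per joule than routing the same traffic through a macrocell.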
Cloudlet computing : recent advances, taxonomy, and challenges
- Authors: Babar, Mohammad , Khan, Muhammad , Ali, Farman , Imran, Muhammad , Shoaib, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 29609-29622
- Full Text:
- Reviewed:
- Description: A cloudlet is an emerging computing paradigm that is designed to meet the requirements and expectations of the Internet of things (IoT) and tackle the conventional limitations of a cloud (e.g., high latency). The idea is to bring computing resources (i.e., storage and processing) to the edge of a network. This article presents a taxonomy of cloudlet applications, outlines cloudlet utilities, and describes recent advances, challenges, and future research directions. Based on the literature, a unique taxonomy of cloudlet applications is designed. Moreover, a cloudlet computation offloading application for augmenting resource-constrained IoT devices, handling compute-intensive tasks, and minimizing the energy consumption of related devices is explored. This study also highlights the viability of cloudlets to support smart systems and applications, such as augmented reality, virtual reality, and applications that require high-quality service. Finally, the role of cloudlets in emergency situations, hostile conditions, and in the technological integration of future applications and services is elaborated in detail. © 2013 IEEE.
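The computation-offloading application described above rests on a classic trade-off: offload a task when transmitting its input to the cloudlet costs less device energy than computing it locally. A minimal sketch of that decision rule, with all parameter values assumed for illustration:

```python
def local_energy(cycles, power_w, cpu_hz):
    """Energy (J) to run a task locally: execution time times CPU power draw."""
    return power_w * cycles / cpu_hz

def offload_energy(bits, tx_power_w, bandwidth_bps):
    """Energy (J) to ship the input to the cloudlet: transmit time times radio power."""
    return tx_power_w * bits / bandwidth_bps

def should_offload(cycles, bits, cpu_hz=1e9, power_w=0.9,
                   tx_power_w=0.3, bandwidth_bps=5e6):
    """Offload when transmission is cheaper than local execution (assumed defaults)."""
    return offload_energy(bits, tx_power_w, bandwidth_bps) < local_energy(cycles, power_w, cpu_hz)

# Compute-heavy task with a small input -> offload pays off;
# light task with a bulky input -> compute locally.
```

Real offloading decisions also weigh latency and cloudlet load, but the energy comparison above is the core of why cloudlets help resource-constrained IoT devices.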
Performance analysis of different types of machine learning classifiers for non-technical loss detection
- Authors: Ghori, Khawaja , Abbasi, Rabeeh , Awais, Muhammad , Imran, Muhammad , Ullah, Ata , Szathmary, Laszlo
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 16033-16048
- Full Text:
- Reviewed:
- Description: With the ever-growing demand for electric power, it is quite challenging to detect and prevent Non-Technical Loss (NTL) in power industries. NTL is committed by bypassing meters, hooking from the main lines, and reversing and tampering with meters. Manual on-site checking and reporting of NTL remains an unattractive strategy due to the required manpower and associated cost. The use of machine learning classifiers has been an attractive option for NTL detection: it enables data-oriented analysis with a high hit ratio at lower cost and manpower requirements. However, there is still a need to explore the results across multiple types of classifiers on a real-world dataset. This paper considers a real dataset from a power supply company in Pakistan to identify NTL. We evaluate 15 existing machine learning classifiers across 9 types, including the recently developed CatBoost, LGBoost, and XGBoost classifiers. Our work is validated using extensive simulations. Results elucidate that ensemble methods and the Artificial Neural Network (ANN) outperform the other types of classifiers for NTL detection on our real dataset. Moreover, we derive a procedure to identify the top 14 features out of a total of 71, which contribute 77% toward predicting NTL. We conclude that including more features beyond this threshold does not improve performance, and thus limiting the classifiers to the selected feature set reduces their computation time. Last but not least, the paper also analyzes the results of the classifiers with respect to their types, which opens a new area of research in NTL detection. © 2013 IEEE.
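The feature-selection step described above amounts to keeping the smallest feature set whose cumulative importance reaches a target share (the paper's top 14 of 71 features reach roughly 77%). An illustrative sketch; the feature names and importance weights below are invented, not the paper's:

```python
def top_features(importances, target=0.77):
    """importances: {feature: weight}. Returns features, most important first,
    until their cumulative share of total importance reaches `target`."""
    total = sum(importances.values())
    chosen, acc = [], 0.0
    for name, weight in sorted(importances.items(), key=lambda kv: -kv[1]):
        chosen.append(name)
        acc += weight / total
        if acc >= target:
            break
    return chosen

imp = {"units_billed": 0.30, "amount_due": 0.25, "meter_status": 0.15,
       "tariff": 0.12, "history_len": 0.10, "region": 0.08}
selected = top_features(imp)  # smallest prefix covering 77% of importance
```

In practice the importances would come from a trained model (e.g. tree-based feature importances), and classifiers retrained on the selected subset run faster at no loss of accuracy, which is the paper's conclusion.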
Flow-aware elephant flow detection for software-defined networks
- Hamdan, Mosab, Mohammed, Bushra, Humayun, Usman, Abdelaziz, Ahmed, Khan, Suleman, Ali, M., Imran, Muhammad, Marsono, M.
- Authors: Hamdan, Mosab , Mohammed, Bushra , Humayun, Usman , Abdelaziz, Ahmed , Khan, Suleman , Ali, M. , Imran, Muhammad , Marsono, M.
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 72585-72597
- Full Text:
- Reviewed:
- Description: Software-defined networking (SDN) separates the network control plane from the packet forwarding plane, which provides comprehensive network-state visibility for better network management and resilience. Traffic classification, particularly elephant flow detection, can lead to improved flow control and resource provisioning in SDN networks. Existing elephant flow detection techniques use preset thresholds that cannot scale with changes in traffic concept and distribution. This paper proposes a flow-aware elephant flow detection technique for SDN. The proposed technique employs two classifiers, one on the SDN switches and one on the controller, to achieve accurate elephant flow detection efficiently. Moreover, it shares the elephant flow classification tasks between the controller and the switches, so most mice flows can be filtered in the switches, avoiding the need to send large numbers of classification requests and signaling messages to the controller. Experimental findings reveal that the proposed technique outperforms contemporary methods in terms of running time, accuracy, F-measure, and recall. © 2013 IEEE.
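The switch/controller split described above can be sketched as a two-stage pipeline: a cheap byte-count pre-filter standing in for the switch-side classifier drops obvious mice flows, and only survivors reach the controller-side classifier. The thresholds and flow records are hypothetical, not the paper's actual classifiers.

```python
# Illustrative two-stage elephant flow detection (not the paper's method):
# stage 1 filters at the "switch", stage 2 classifies at the "controller".
SWITCH_BYTE_THRESHOLD = 10_000      # cheap per-switch pre-filter (assumed value)
CONTROLLER_RATE_THRESHOLD = 1_000   # bytes/sec, finer controller check (assumed)

def switch_prefilter(flow):
    """Stage 1 (switch): pass only flows large enough to possibly be elephants."""
    return flow["bytes"] >= SWITCH_BYTE_THRESHOLD

def controller_classify(flow):
    """Stage 2 (controller): classify surviving flows by average rate."""
    rate = flow["bytes"] / max(flow["duration_s"], 1e-9)
    return "elephant" if rate >= CONTROLLER_RATE_THRESHOLD else "mouse"

flows = [
    {"id": 1, "bytes": 500, "duration_s": 2.0},      # filtered out at the switch
    {"id": 2, "bytes": 50_000, "duration_s": 10.0},  # high rate: elephant
    {"id": 3, "bytes": 20_000, "duration_s": 60.0},  # passes filter, low rate: mouse
]
sent_to_controller = [f for f in flows if switch_prefilter(f)]
labels = {f["id"]: controller_classify(f) for f in sent_to_controller}
print(labels)  # {2: 'elephant', 3: 'mouse'}
```

The design benefit the abstract claims falls out directly: flow 1 never generates a controller request, so controller load scales with candidate elephants rather than with total flow count.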
Towards a low complexity scheme for medical images in scalable video coding
- Shoaib, Muhammad, Imran, Muhammad, Subhan, Fazli, Ahmad, Iftikhar
- Authors: Shoaib, Muhammad , Imran, Muhammad , Subhan, Fazli , Ahmad, Iftikhar
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 41439-41451
- Full Text:
- Reviewed:
- Description: Medical imaging has become vital for diagnosing diseases and conducting noninvasive procedures. Advances in eHealth applications are challenged by the fact that Digital Imaging and Communications in Medicine (DICOM) requires high-resolution images, which increases their size and the associated computational complexity, particularly when these images are communicated over IP and wireless networks. Medical research therefore requires an efficient coding technique that achieves high-quality, low-complexity images with error-resilient features. In this study, we propose an improved coding scheme that exploits the content features of encoded videos at low complexity, combined with flexible macroblock ordering for error resilience. We identify homogeneous regions in which the search for optimal macroblock modes is terminated early. For non-homogeneous regions, the integration of smaller blocks is employed only if the vector difference is less than the threshold. Results confirm that the proposed technique achieves a considerable performance improvement over existing schemes, reducing computational complexity without compromising bit-rate or peak signal-to-noise ratio. © 2013 IEEE.
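The early-termination idea above can be illustrated with a toy homogeneity test: if a macroblock's pixel variance is low, the expensive mode search is skipped in favor of the cheap large-block mode. The threshold, mode names, and block data are hypothetical stand-ins, not the paper's actual criterion.

```python
# Toy sketch of homogeneity-based early termination in mode selection.
# All names and the threshold are illustrative assumptions.
def variance(block):
    """Population variance of a flat list of pixel values."""
    mean = sum(block) / len(block)
    return sum((p - mean) ** 2 for p in block) / len(block)

def choose_mode(block, homogeneity_threshold=4.0):
    if variance(block) < homogeneity_threshold:
        return "SKIP_16x16"        # homogeneous region: terminate search early
    return "FULL_MODE_SEARCH"      # non-homogeneous: evaluate smaller partitions

flat = [128] * 16            # uniform region, e.g. tissue background
edge = [0] * 8 + [255] * 8   # strong edge, needs the full search
print(choose_mode(flat), choose_mode(edge))  # SKIP_16x16 FULL_MODE_SEARCH
```

The complexity saving comes from how often `SKIP_16x16` fires: in medical images with large uniform backgrounds, most macroblocks never enter the full search.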
Machine Learning Techniques for 5G and beyond
- Kaur, Jasneet, Khan, M. Arif, Iftikhar, Mohsin, Imran, Muhammad, Emad Ul Haq, Qazi
- Authors: Kaur, Jasneet , Khan, M. Arif , Iftikhar, Mohsin , Imran, Muhammad , Emad Ul Haq, Qazi
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 23472-23488
- Full Text:
- Reviewed:
- Description: Wireless communication systems play a crucial role in modern society for entertainment, business, commercial, health and safety applications. These systems keep evolving from one generation to the next, and we are currently seeing the deployment of fifth generation (5G) wireless systems around the world. Academia and industry are already discussing beyond-5G wireless systems, which will be the sixth generation (6G) of the evolution. One of the key components of 6G systems will be the use of Artificial Intelligence (AI) and Machine Learning (ML) in such wireless networks. Every component and building block of a wireless system that we are familiar with from wireless technologies up to 5G, such as the physical, network and application layers, will involve one or another AI/ML technique. This overview paper presents an up-to-date review of future wireless system concepts such as 6G and the role of ML techniques in these future wireless systems. In particular, we present a conceptual model for 6G and show the use and role of ML techniques in each layer of the model. We review classical and contemporary ML techniques, such as supervised and unsupervised learning, Reinforcement Learning (RL), Deep Learning (DL) and Federated Learning (FL), in the context of wireless communication systems. We conclude the paper with future applications and research challenges in the area of ML and AI for 6G networks. © 2013 IEEE.
An automatic digital audio authentication/forensics system
- Ali, Zulfiqar, Imran, Muhammad, Alsulaiman, Mansour
- Authors: Ali, Zulfiqar , Imran, Muhammad , Alsulaiman, Mansour
- Date: 2017
- Type: Text , Journal article
- Relation: IEEE Access Vol. 5, no. (2017), p. 2994-3007
- Full Text:
- Reviewed:
- Description: With the continuous rise in ingenious forgery, a wide range of digital audio authentication applications are emerging as preventive and detective controls in real-world circumstances, such as forged evidence, breach of copyright protection, and unauthorized data access. This paper presents a novel automatic authentication system that differentiates between forged and original audio. The design philosophy of the proposed system is primarily based on three psychoacoustic principles of hearing, which are implemented to simulate the human sound perception system. Moreover, the proposed system is able to distinguish between audio recorded in different environments with the same microphone. For audio authentication and environment classification, the features computed from the psychoacoustic principles of hearing are fed to a Gaussian mixture model to make automatic decisions. It is worth mentioning that the proposed system authenticates an unknown speaker irrespective of the audio content, i.e., independently of narrator and text. To evaluate the performance of the proposed system, audio recordings in multiple environments are forged in such a way that a human cannot recognize them. Subjective evaluation by three human evaluators is performed to verify the quality of the generated forged audio. The proposed system provides a classification accuracy of 99.2% ± 2.6. Furthermore, the obtained accuracy for the other scenarios, such as text-dependent and text-independent audio authentication, is 100%. © 2017 IEEE.
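The GMM decision step described above can be sketched in one dimension: score a feature value under two trained mixtures ("original" vs "forged") and pick the class with the higher likelihood. All mixture parameters below are made up for illustration; the paper's actual features derive from psychoacoustic principles of hearing.

```python
# 1-D sketch of GMM-based classification; parameters are hypothetical,
# not trained models from the paper.
import math

def gmm_loglik(x, components):
    """Log-likelihood of x under a mixture of (weight, mean, std) Gaussians."""
    total = 0.0
    for w, mu, sigma in components:
        density = math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
        total += w * density
    return math.log(total)

# Illustrative "trained" mixtures for each class.
ORIGINAL = [(0.6, 0.0, 1.0), (0.4, 2.0, 0.5)]
FORGED = [(0.5, 5.0, 1.0), (0.5, 7.0, 1.5)]

def classify(x):
    return "original" if gmm_loglik(x, ORIGINAL) > gmm_loglik(x, FORGED) else "forged"

print(classify(0.3), classify(6.2))  # original forged
```

A real system would fit one multivariate GMM per class on psychoacoustic feature vectors and sum log-likelihoods over frames, but the decision rule is the same likelihood comparison shown here.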