Deep learning-based approach for detecting trajectory modifications of the Cassini-Huygens spacecraft
- Aldabbas, Ashraf, Gal, Zoltan, Ghori, Khawaja, Imran, Muhammad, Shoaib, Muhammad
- Authors: Aldabbas, Ashraf , Gal, Zoltan , Ghori, Khawaja , Imran, Muhammad , Shoaib, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 39111-39125
- Full Text:
- Reviewed:
- Description: The Cassini spacecraft required numerous trajectory modifications during the final 14 years of its interplanetary research mission. With a signal propagation time of about 1.3 hours over the 1.4-billion-kilometre Earth-Cassini channel, detecting complex orbit-modification events requires careful investigation and analysis of the collected big data. Space-exploration technologies warrant nuanced, detailed research, and the Cassini mission accumulated very large volumes of science records; this motivates the use of machine learning to analyse deep-space missions. For energy-saving reasons, communication between Earth and Cassini was executed in non-periodic mode. This paper provides a deep learning approach for detecting Cassini spacecraft trajectory modifications in post-processing mode. The proposed model exploits the ability of Long Short-Term Memory (LSTM) neural networks to extract useful features and learn the inner patterns of time-series data, along with the strength of LSTM layers in capturing long- and short-term dependencies. We used statistical rates, the Matthews correlation coefficient, and the F1 score to evaluate our models, carried out multiple tests, and compared the proposed approach against several advanced models. Preliminary analysis showed that the LSTM layer provides a notable boost in detection performance. The proposed model detected 232 trajectory modifications with 99.98% accuracy over the last 13.35 years of the Cassini spacecraft's life. © 2013 IEEE.
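The abstract's central mechanism, an LSTM learning long- and short-term dependencies in telemetry time series, rests on the standard LSTM cell update. The sketch below implements that update in NumPy with untrained, randomly initialised weights and a hypothetical linear read-out; it illustrates the gate arithmetic only and is not the paper's trained detector.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell forward pass (illustrative; untrained weights)."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight matrix for the input, forget, cell and output gates.
        self.W = rng.normal(0, 0.1, (4 * hidden_size, input_size + hidden_size))
        self.b = np.zeros(4 * hidden_size)
        self.hidden_size = hidden_size

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        H = self.hidden_size
        i = sigmoid(z[0:H])         # input gate
        f = sigmoid(z[H:2*H])       # forget gate
        g = np.tanh(z[2*H:3*H])     # candidate cell state
        o = sigmoid(z[3*H:4*H])     # output gate
        c_new = f * c + i * g       # long-term memory update
        h_new = o * np.tanh(c_new)  # short-term (hidden) state
        return h_new, c_new

# Run a toy 20-step, 3-feature telemetry window through the cell and score
# the final hidden state with a crude read-out; a real detector would train
# both the cell and the read-out on labelled trajectory-modification events.
cell = LSTMCell(input_size=3, hidden_size=8)
h, c = np.zeros(8), np.zeros(8)
for x in np.random.default_rng(1).normal(size=(20, 3)):
    h, c = cell.step(x, h, c)
score = sigmoid(h.sum())  # pseudo "trajectory-modification" probability
```

Because the gates are sigmoids and the read-out is a sigmoid, `score` always lies strictly between 0 and 1, which is what lets it be thresholded as a detection probability.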
Effect of storage on properties of pine needle cattle dung briquettes
- Kaur, Lovepreet, Singh, Harpreet, Sharma, Hemant, Singh, Triveni, Singh, Jayant
- Authors: Kaur, Lovepreet , Singh, Harpreet , Sharma, Hemant , Singh, Triveni , Singh, Jayant
- Date: 2021
- Type: Text , Journal article
- Relation: Indian Journal of Engineering and Materials Sciences Vol. 28, no. 6 (2021), p. 591-601
- Full Text:
- Reviewed:
- Description: This study was undertaken to utilize the abundantly available pine needles of the hilly region of Uttarakhand state and to mitigate the drudgery involved in collecting fuel wood from nearby forests for cooking. Pine needle briquettes were prepared using cattle dung as a binding agent in a 60:40 proportion by weight with the help of a hydraulic press. Three levels of each briquetting parameter were taken, namely particle size (0.54, 1.5 and 3.0 mm), die pressure (2.8, 4.14 and 5.5 MPa) and dwell time (15, 30 and 45 s). The heating value, ash content, moisture content, bulk density, crushing strength and water resistance capacity of the briquettes were evaluated. Bulk density and calorific value decreased with increasing storage period for all types of briquettes: an overall reduction of 6.5% in bulk density and about 1.5% in calorific value was found over a storage period of 60 days. However, all the briquettes remained stable. Based on process optimization using response surface methodology (RSM), briquettes prepared at the highest die pressure of 5.5 MPa with 2.6 mm particle size and 15 s dwell time proved optimal considering all the quality parameters studied over the 60-day storage period. © 2021, National Institute of Science Communication and Information Resources. All rights reserved.
It's all about perceptions : a DEMATEL approach to exploring user perceptions of real estate online platforms
- Ullah, Fahim, Sepasgozar, Samad, Jamaluddin Thaheem, Muhammad, Cynthia Wang, Changxin, Imran, Muhammad
- Authors: Ullah, Fahim , Sepasgozar, Samad , Jamaluddin Thaheem, Muhammad , Cynthia Wang, Changxin , Imran, Muhammad
- Date: 2021
- Type: Text , Journal article
- Relation: Ain Shams Engineering Journal Vol. 12, no. 4 (2021), p. 4297-4317
- Full Text:
- Reviewed:
- Description: Real Estate Online Platforms (REOPs) are used to convey real estate and property-related information to potential users (buyers, renters, or sellers). The information delivered through REOPs supports these users in reaching conclusive rent-or-buy decisions. Despite their promised utility, user perceptions of accepting online information through REOPs are unexplored. Using a comprehensive questionnaire and data collected from 65 users, the current study captures users’ perceptions of REOPs. A risk, service, information, system, technology adoption model (RSISTAM) is proposed, comprising seven user perceptions: risk (PR), service quality (PSEQ), information quality (PIQ), and system quality (PSYQ) from the information systems success model, and usefulness (PU), ease of use (PEU) and behaviour to accept (BAU) from TAM. The results are analysed using the decision-making trial and evaluation laboratory (DEMATEL) approach, which shows that PIQ, PSEQ and PEU are causes while PR, PSYQ, PU and BAU are effects. Among the criteria, the order of prominence is PEU > PSEQ > PIQ, and for net effects the order is PU > BAU > PSYQ > PR. To address the causes, REOP managers must provide more transparent, higher-quality and more voluminous information to users; focus on system, service, and information quality; and add more enjoyable, immersive and easy-to-use content through REOPs. This study contributes to the body of knowledge by exploring user perceptions and proposing methods to improve the quality and reliability of REOPs in line with Real Estate 4.0 and Industry 4.0 aims. © 2021 THE AUTHORS
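The DEMATEL analysis behind the cause/effect split and the prominence ordering follows a standard computation: normalise a direct-influence matrix, form the total-relation matrix T = N(I - N)^-1, then read prominence (R + C) and net effect (R - C) from T's row and column sums. A minimal sketch on a hypothetical 4-factor matrix (not the paper's survey data):

```python
import numpy as np

# Hypothetical 4-factor direct-influence matrix on a 0-4 scale; the paper's
# actual matrix comes from expert questionnaire responses.
D = np.array([[0, 3, 2, 1],
              [2, 0, 3, 2],
              [1, 2, 0, 3],
              [1, 1, 2, 0]], dtype=float)

# Normalise by the largest row/column sum so the spectral radius is below 1,
# then accumulate direct and indirect influence in the total-relation matrix.
s = max(D.sum(axis=1).max(), D.sum(axis=0).max())
N = D / s
T = N @ np.linalg.inv(np.eye(len(D)) - N)

R = T.sum(axis=1)   # influence each factor exerts
C = T.sum(axis=0)   # influence each factor receives
prominence = R + C  # overall importance of the factor
net_effect = R - C  # > 0: cause group, < 0: effect group
```

A useful sanity check is that the net effects always sum to zero, since the total influence exerted across all factors equals the total influence received.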
Machine Learning Techniques for 5G and beyond
- Kaur, Jasneet, Khan, M. Arif, Iftikhar, Mohsin, Imran, Muhammad, Emad Ul Haq, Qazi
- Authors: Kaur, Jasneet , Khan, M. Arif , Iftikhar, Mohsin , Imran, Muhammad , Emad Ul Haq, Qazi
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 23472-23488
- Full Text:
- Reviewed:
- Description: Wireless communication systems play a crucial role in modern society for entertainment, business, commercial, health and safety applications. These systems keep evolving from one generation to the next, and fifth generation (5G) wireless systems are currently being deployed around the world. Academia and industry are already discussing beyond-5G wireless systems, which will constitute the sixth generation (6G) of the evolution. A key component of 6G systems will be the use of Artificial Intelligence (AI) and Machine Learning (ML) in such wireless networks. Every component and building block of a wireless system familiar from technologies up to 5G, such as the physical, network and application layers, will involve one AI/ML technique or another. This overview paper presents an up-to-date review of future wireless system concepts such as 6G and the role of ML techniques in these future wireless systems. In particular, we present a conceptual model for 6G and show the use and role of ML techniques in each layer of the model. We review classical and contemporary ML techniques such as supervised and unsupervised learning, Reinforcement Learning (RL), Deep Learning (DL) and Federated Learning (FL) in the context of wireless communication systems. We conclude the paper with future applications and research challenges in the area of ML and AI for 6G networks. © 2013 IEEE.
Opportunities and challenges for decarbonizing steel production by creating markets for ‘green steel’ products
- Muslemani, Hasan, Liang, Xi, Kaesehage, Katharina, Ascui, Francisco, Wilson, Jeffrey
- Authors: Muslemani, Hasan , Liang, Xi , Kaesehage, Katharina , Ascui, Francisco , Wilson, Jeffrey
- Date: 2021
- Type: Text , Journal article
- Relation: Journal of Cleaner Production Vol. 315, no. (2021), p.
- Full Text:
- Reviewed:
- Description: The creation of a market for steel produced by less carbon-intensive production processes, here called ‘green steel’, has been identified as a means of supporting the introduction of breakthrough emission reduction technologies into steel production. However, numerous details remain under-explored, including exactly what ‘green’ entails in the context of steelmaking, the likely competitiveness of green steel products in domestic and international markets, and potential policy mechanisms to support their successful market penetration. This paper addresses this gap through qualitative research with international sustainability experts and commercial managers from leading steel trade associations, research institutes and steelmakers. We find that there is a need to establish a common understanding of what ‘greenness’ means in the steelmaking context, and to resolve various carbon accounting and assurance issues, which otherwise have the potential to lead to perverse outcomes and opportunities for greenwashing. We identify a set of potential demand-side and supply-side policy mechanisms to support green steel production, and highlight a need for a combination of policies to ensure successful market development and avoid unintended consequences for competition at three different levels: 1) between products manufactured through a primary vs secondary steelmaking route, 2) between ‘green’ and traditional, ‘brown’ steel, and 3) with other substitutable materials. The study further shows that the automotive industry is a likely candidate for green steel demand, where a market could be supported by price premiums paid by willing consumers, such as those of high-end luxury and heavy-duty vehicles. © 2021 Elsevier Ltd
Setting time and strength monitoring of alkali-activated cement mixtures by ultrasonic testing
- Tekle, Biruk, Hertwig, Ludwig, Holschemacher, Klaus
- Authors: Tekle, Biruk , Hertwig, Ludwig , Holschemacher, Klaus
- Date: 2021
- Type: Text , Journal article
- Relation: Materials Vol. 14, no. 8 (2021), p. 1889
- Full Text:
- Reviewed:
- Description: Alkali-activated cement (AAC) is a promising binder to replace ordinary Portland cement (OPC). In this study, the development of setting time and strength in AAC mixes was studied using the ultrasonic testing method, and the results were compared with traditional Vicat setting times and with compressive and flexural strengths. The findings showed that setting times and strengths correlate strongly with the ultrasonic velocity curve. The initial setting time corresponds well with the dormant period of the velocity curve, and the final setting time with the time taken to reach the curve's maximum acceleration; both setting times also correlated with the value of that maximum acceleration. An exponential relation was found between the ultrasonic velocity and the compressive and flexural strengths. The effects of binder content, alkaline solid to binder ratio (AS/B), sodium silicate to sodium hydroxide solids ratio (SS/SH), and total water to total solid binder ratio (TW/TS) on strength and setting time were also studied using the Taguchi method of experimental design. The AS/B ratio showed a significant influence on the setting time of AAC, while the TW/TS ratio showed only a minor effect. The ultrasonic velocities captured the effects of the different parameters in a manner similar to the compressive strength: the velocity decreased mainly with increasing TW/TS ratio and binder content, while the AS/B and SS/SH ratios showed a lower influence.
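An exponential relation between ultrasonic velocity v and strength S, of the form S = a·exp(b·v), is conventionally fitted by linearising to ln S = ln a + b·v. The sketch below does this on hypothetical velocity/strength pairs (not the paper's measurements), generated from a known law with mild noise so the fit can be checked:

```python
import numpy as np

# Hypothetical (velocity in km/s, compressive strength in MPa) pairs,
# synthesised from S = 0.5 * exp(1.2 * v) with a few percent noise.
v = np.array([2.8, 3.0, 3.2, 3.4, 3.6, 3.8])
S = 0.5 * np.exp(1.2 * v) * np.array([1.02, 0.98, 1.01, 0.99, 1.03, 0.97])

# Fit S = a * exp(b * v) by linearising: ln S = ln a + b * v.
b, ln_a = np.polyfit(v, np.log(S), 1)
a = np.exp(ln_a)

def predict_strength(vel):
    """Predicted strength (MPa) at ultrasonic velocity vel (km/s)."""
    return a * np.exp(b * vel)
```

With real data, the quality of the linearised fit (e.g. R² on ln S) is what justifies calling the velocity-strength relation exponential rather than, say, power-law.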
Smart dynamic traffic monitoring and enforcement system
- El-Hansali, Youssef, Outay, Fatma, Yasar, Ansar, Farrag, Siham, Shoaib, Muhammad, Imran, Muhammad, Awan, Hammad
- Authors: El-Hansali, Youssef , Outay, Fatma , Yasar, Ansar , Farrag, Siham , Shoaib, Muhammad , Imran, Muhammad , Awan, Hammad
- Date: 2021
- Type: Text , Journal article
- Relation: Computers, Materials and Continua Vol. 67, no. 3 (2021), p. 2797-2806
- Full Text:
- Reviewed:
- Description: Enforcement of traffic rules and regulations involves a wide range of complex tasks, many of which demand the use of modern technologies. Variable speed limit (VSL) control changes the posted speed limit according to the observed traffic conditions. The aim of this study is to provide a simulation-based methodological framework for evaluating VSL as an effective Intelligent Transportation System (ITS) enforcement tool. The focus of the study is on measuring the effectiveness of the dynamic traffic control strategy on traffic performance and safety, considering performance indicators such as total travel time, average delay, and average number of stops. The United Arab Emirates (UAE) was selected as a case study to evaluate the effectiveness of this strategy. The microsimulation software package VISSIM, with its add-on module VisVAP, is used to evaluate the impacts of VSL. It was observed that the VSL control strategy reduced the average delay time per vehicle by around 7%, travel time by 3.2%, and the number of stops by 48.5%. The dynamic traffic control strategy also alleviated congestion by increasing the capacity of the bottleneck section and improving safety. The results of this study can guide engineers and decision makers in implementing new traffic control systems. © 2021 Tech Science Press. All rights reserved.
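The abstract describes VSL control as changing the posted limit according to observed traffic conditions. A minimal rule-based sketch of that idea follows, with illustrative detector-occupancy thresholds and limits rather than the logic or values used in the paper's VisVAP implementation:

```python
# A toy VSL controller: the posted limit steps down as measured detector
# occupancy rises. All thresholds and limits here are hypothetical.
def variable_speed_limit(occupancy_pct: float) -> int:
    """Return a posted speed limit (km/h) for the observed occupancy (%)."""
    if occupancy_pct < 15:
        return 120   # free flow: default limit
    elif occupancy_pct < 25:
        return 100   # moderate load: slow traffic approaching the bottleneck
    elif occupancy_pct < 35:
        return 80    # near capacity: smooth the inflow
    else:
        return 60    # congested: protect the queue tail

limits = [variable_speed_limit(o) for o in (10, 20, 30, 40)]
print(limits)  # → [120, 100, 80, 60]
```

In a simulation study such as this one, a rule of this shape runs once per evaluation interval against detector measurements, and the resulting limits feed back into the traffic model's desired-speed distributions.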
Software-defined networks for resource allocation in cloud computing : a survey
- Mohamed, Arwa, Hamdan, Mosab, Khan, Suleman, Abdelaziz, Abdelaziz, Babiker, Sharief, Imran, Muhammad, Marsono, M.
- Authors: Mohamed, Arwa , Hamdan, Mosab , Khan, Suleman , Abdelaziz, Abdelaziz , Babiker, Sharief , Imran, Muhammad , Marsono, M.
- Date: 2021
- Type: Text , Journal article
- Relation: Computer Networks Vol. 195, no. (2021), p.
- Full Text:
- Reviewed:
- Description: Cloud computing has a shared set of resources, including physical servers, networks, storage, and user applications. Resource allocation is a critical issue in cloud computing, especially in Infrastructure-as-a-Service (IaaS). The decision-making process in the cloud computing network is non-trivial, as it is handled by switches and routers. Moreover, the network concept drifts resulting from changing user demands are among the problems affecting cloud computing. The cloud data center needs agile and elastic network control functions, with control of computing resources, to ensure proper virtual machine (VM) operation, traffic performance, and energy conservation. Software-Defined Networking (SDN) offers new opportunities for designing resource management that handles cloud service allocation while dynamically updating the traffic requirements of running VMs. Including an SDN to manage the infrastructure of a cloud data center further empowers cloud computing, making it easier to allocate resources. In this survey, we discuss resource allocation in cloud computing based on SDN. We note that various related studies did not satisfy all the relevant requirements; this study is intended to improve resource allocation mechanisms that span both the cloud computing and SDN domains. Consequently, we analyze the resource allocation mechanisms used by various researchers and categorize and evaluate them based on the measured parameters and the problems addressed. This survey also contributes to a better understanding of the core of current research, allowing researchers to obtain further information about possible cloud computing strategies relevant to IaaS resource allocation. © 2021
Treating class imbalance in non-technical loss detection : an exploratory analysis of a real dataset
- Ghori, Khawaja, Awais, Muhammad, Khattak, Akmal, Imran, Muhammad, Amin, Fazal, Szathmary, Laszlo
- Authors: Ghori, Khawaja , Awais, Muhammad , Khattak, Akmal , Imran, Muhammad , Amin, Fazal , Szathmary, Laszlo
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9, no. (2021), p. 98928-98938
- Full Text:
- Reviewed:
- Description: Non-Technical Loss (NTL) is a significant concern for many electricity supply companies because of the financial impact of suspect consumption activities. A range of machine learning classifiers has been tested across multiple synthesized and real datasets to combat NTL. An important characteristic of these datasets is the imbalanced distribution of the classes; when the focus is on predicting the minority class of suspect activities, the classifiers' sensitivity to this class imbalance becomes more important. In this paper, we evaluate the performance of a range of classifiers with under-sampling and over-sampling techniques and compare the results with those on the untreated imbalanced dataset. In addition, we compare the performance of the classifiers using a penalized classification model. Lastly, the paper presents an exploratory analysis of different sampling techniques for NTL detection in a real dataset and identifies the best-performing classifiers. We conclude that logistic regression is the most sensitive to the sampling techniques, with a change in recall of around 50% across all sampling techniques, while random forest is the least sensitive, with differences in precision of between 1% and 6%. © 2013 IEEE.
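The under-sampling and over-sampling treatments the paper evaluates can be sketched with plain random sampling; the dataset sizes and labels below are toy values (90 "normal" vs 10 "suspect" records), not the paper's real consumption data:

```python
import random

# Toy imbalanced dataset: (record_id, label) with label 1 = suspect activity.
data = [(i, 0) for i in range(90)] + [(i, 1) for i in range(90, 100)]

def random_undersample(rows, seed=0):
    """Drop majority-class rows until both classes are the same size."""
    rng = random.Random(seed)
    minority = [r for r in rows if r[1] == 1]
    majority = [r for r in rows if r[1] == 0]
    return rng.sample(majority, len(minority)) + minority

def random_oversample(rows, seed=0):
    """Duplicate minority-class rows (with replacement) up to majority size."""
    rng = random.Random(seed)
    minority = [r for r in rows if r[1] == 1]
    majority = [r for r in rows if r[1] == 0]
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return majority + minority + extra

under = random_undersample(data)  # 10 + 10 = 20 balanced rows
over = random_oversample(data)    # 90 + 90 = 180 balanced rows
```

The penalized-classification alternative the paper also compares keeps the data untouched and instead weights minority-class errors more heavily in the loss (e.g. a class-weight ratio of 9:1 here), which is why it is evaluated alongside resampling rather than combined with it.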
6G wireless systems : a vision, architectural elements, and future directions
- Khan, Latif, Yaqoob, Ibrar, Imran, Muhammad, Han, Zhu, Hong, Choong
- Authors: Khan, Latif , Yaqoob, Ibrar , Imran, Muhammad , Han, Zhu , Hong, Choong
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 147029-147044
- Full Text:
- Reviewed:
- Description: Internet of everything (IoE)-based smart services are expected to gain immense popularity in the future, which raises the need for next-generation wireless networks. Although fifth-generation (5G) networks can support various IoE services, they might not be able to completely fulfill the requirements of novel applications. Sixth-generation (6G) wireless systems are envisioned to overcome 5G network limitations. In this article, we explore recent advances made toward enabling 6G systems. We devise a taxonomy based on key enabling technologies, use cases, emerging machine learning schemes, communication technologies, networking technologies, and computing technologies. Furthermore, we identify and discuss open research challenges, such as artificial-intelligence-based adaptive transceivers, intelligent wireless energy harvesting, decentralized and secure business models, intelligent cell-less architecture, and distributed security models. We propose practical guidelines including deep Q-learning and federated learning-based transceivers, blockchain-based secure business models, homomorphic encryption, and distributed-ledger-based authentication schemes to cope with these challenges. Finally, we outline and recommend several future directions. © 2013 IEEE.
A deep learning model based on concatenation approach for the diagnosis of brain tumor
- Noreen, Neelum, Palaniappan, Sellappan, Qayyum, Abdul, Ahmad, Iftikhar, Imran, Muhammad, Shoaib, Muhammad
- Authors: Noreen, Neelum , Palaniappan, Sellappan , Qayyum, Abdul , Ahmad, Iftikhar , Imran, Muhammad , Shoaib, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 55135-55144
- Full Text:
- Reviewed:
- Description: Brain tumor is a deadly disease, and its classification is a challenging task for radiologists because of the heterogeneous nature of the tumor cells. Recently, computer-aided diagnosis systems have shown promise, as an assistive technology, for diagnosing brain tumors through magnetic resonance imaging (MRI). In recent applications of pre-trained models, features are normally extracted from bottom layers, which differ between natural and medical images. To overcome this problem, this study proposes a method of multi-level feature extraction and concatenation for early diagnosis of brain tumor. Two pre-trained deep learning models, i.e., Inception-v3 and DenseNet201, underpin the proposed method. With the help of these two models, two different scenarios of brain tumor detection and classification were evaluated. First, features from different Inception modules were extracted from the pre-trained Inception-v3 model and concatenated for brain tumor classification. Then, these features were passed to a softmax classifier to classify the brain tumor. Second, the pre-trained DenseNet201 was used to extract features from various DenseNet blocks. Then, these features were concatenated and passed to a softmax classifier to classify the brain tumor. Both scenarios were evaluated with the help of a publicly available three-class brain tumor dataset. The proposed method produced testing accuracies of 99.34% and 99.51% with Inception-v3 and DenseNet201, respectively, and achieved the highest performance in the detection of brain tumor. As the results indicated, the proposed method based on feature concatenation using pre-trained models outperformed existing state-of-the-art deep learning and machine learning based methods for brain tumor classification. © 2013 IEEE.
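The core idea of the abstract — concatenating features from multiple network depths and feeding them to a softmax classifier — can be sketched in a few lines of plain Python. The feature values below are hypothetical stand-ins for what the pre-trained models would emit; this is a shape-level illustration, not the paper's implementation.

```python
import math

def concat_features(*feature_vectors):
    """Concatenate feature vectors extracted at different network depths."""
    out = []
    for fv in feature_vectors:
        out.extend(fv)
    return out

def softmax(logits):
    """Numerically stable softmax over a list of class logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical features from a shallow and a deep stage of a model
shallow = [0.1, 0.4]
deep = [0.9, 0.2, 0.7]
fused = concat_features(shallow, deep)  # 5-dimensional fused vector
probs = softmax([1.0, 2.0, 0.5])        # 3-class output distribution
```

In the paper, `fused` would be the input to a trained softmax layer; here the logits are fixed purely to show the classification step.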
A multi-objective deep reinforcement learning framework
- Nguyen, Thanh, Nguyen, Ngoc, Vamplew, Peter, Nahavandi, Saeid, Dazeley, Richard, Lim, Chee
- Authors: Nguyen, Thanh , Nguyen, Ngoc , Vamplew, Peter , Nahavandi, Saeid , Dazeley, Richard , Lim, Chee
- Date: 2020
- Type: Text , Journal article
- Relation: Engineering Applications of Artificial Intelligence Vol. 96, no. (2020), p.
- Full Text:
- Reviewed:
- Description: This paper introduces a new scalable multi-objective deep reinforcement learning (MODRL) framework based on deep Q-networks. We develop a high-performance MODRL framework that supports both single-policy and multi-policy strategies, as well as both linear and non-linear approaches to action selection. The experimental results on two benchmark problems (two-objective deep sea treasure environment and three-objective Mountain Car problem) indicate that the proposed framework is able to find the Pareto-optimal solutions effectively. The proposed framework is generic and highly modularized, which allows the integration of different deep reinforcement learning algorithms in different complex problem domains. This therefore overcomes many disadvantages involved with standard multi-objective reinforcement learning methods in the current literature. The proposed framework acts as a testbed platform that accelerates the development of MODRL for solving increasingly complicated multi-objective problems. © 2020 Elsevier Ltd
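The "linear approach to action selection" that the framework supports is linear scalarization: each action has a vector of per-objective Q-values, and the agent maximizes a weighted sum. A minimal sketch (the action names, Q-values, and weights are illustrative assumptions, not from the paper):

```python
def scalarized_action(q_values, weights):
    """Pick the action maximizing the weighted sum of per-objective Q-values.

    q_values: dict mapping action -> list of Q-values, one per objective.
    weights:  preference weights over the objectives (same length).
    """
    def score(action):
        return sum(w * q for w, q in zip(weights, q_values[action]))
    return max(q_values, key=score)

# Two objectives (e.g. treasure value vs. time penalty), three actions
q = {
    "left":  [3.0, -1.0],
    "right": [5.0, -4.0],
    "stay":  [1.0,  0.0],
}
a1 = scalarized_action(q, [1.0, 1.0])  # weighs both objectives equally
a2 = scalarized_action(q, [1.0, 0.1])  # strongly favors the first objective
```

Sweeping the weight vector and recording the resulting returns is one standard way such frameworks trace out Pareto-optimal solutions; non-linear selection schemes replace the weighted sum with a different utility function.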
A robust consistency model of crowd workers in text labeling tasks
- Alqershi, Fattoh, Al-Qurishi, Muhammad, Aksoy, Mehmet, Alrubaian, Majed, Imran, Muhammad
- Authors: Alqershi, Fattoh , Al-Qurishi, Muhammad , Aksoy, Mehmet , Alrubaian, Majed , Imran, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 168381-168393
- Full Text:
- Reviewed:
- Description: Crowdsourcing is a popular human-based model to acquire labeled data. Despite its ability to generate huge amounts of labeled data at moderate costs, it is susceptible to low-quality labels. This can happen through unintentional or intentional errors by the crowd workers. Consistency is an important attribute of reliability. It is a practical metric that evaluates a crowd worker's reliability based on their ability to conform to themselves by yielding the same output when repeatedly given a particular input. Consistency has not yet been sufficiently explored in the literature. In this work, we propose a novel consistency model based on the pairwise comparisons method. We apply this model to unpaid workers. We measure the workers' consistency on tasks of labeling political text-based claims and study the effects of different duplicate task characteristics on their consistency. Our results show that the proposed model outperforms the current state-of-the-art models in terms of accuracy. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
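The consistency notion in this abstract — does a worker give the same answer when handed the same task twice — can be reduced to a simple agreement ratio. A hedged sketch (the scoring function and the claim labels are illustrative; the paper's full model is built on pairwise comparisons, not this bare ratio):

```python
def consistency_score(responses):
    """Fraction of duplicated tasks a worker answered identically.

    responses: dict mapping task_id -> (first_answer, repeat_answer).
    Returns 0.0 for an empty response set.
    """
    if not responses:
        return 0.0
    agree = sum(1 for first, repeat in responses.values() if first == repeat)
    return agree / len(responses)

# A worker labels four political claims twice (labels are illustrative)
worker = {
    "claim-1": ("true", "true"),
    "claim-2": ("false", "false"),
    "claim-3": ("true", "false"),  # self-contradiction on the duplicate
    "claim-4": ("false", "false"),
}
score = consistency_score(worker)  # 3 of the 4 duplicates agree
```

A score near 1.0 suggests a reliable worker; systematically low scores flag workers whose labels should be down-weighted or discarded.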
Attacks on self-driving cars and their countermeasures : a survey
- Chowdhury, Abdullahi, Karmakar, Gour, Kamruzzaman, Joarder, Jolfaei, Alireza, Das, Rajkumar
- Authors: Chowdhury, Abdullahi , Karmakar, Gour , Kamruzzaman, Joarder , Jolfaei, Alireza , Das, Rajkumar
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8, no. (2020), p. 207308-207342
- Full Text:
- Reviewed:
- Description: Intelligent Traffic Systems (ITS) are currently evolving in the form of cooperative ITS or connected vehicles. Both forms use data communications between Vehicle-To-Vehicle (V2V), Vehicle-To-Infrastructure (V2I/I2V) and other on-road entities, and are accelerating the adoption of self-driving cars. The development of cyber-physical systems containing advanced sensors, sub-systems, and smart driving assistance applications over the past decade is equipping unmanned aerial and road vehicles with autonomous decision-making capabilities. The level of autonomy depends upon the make-up and degree of sensor sophistication and the vehicle's operational applications. As a result, the prospect of self-driving cars being compromised is perceived as a serious threat. Therefore, analyzing the threats and attacks on self-driving cars and ITSs, and the corresponding countermeasures to reduce them, is needed. For this reason, some survey papers compiling potential attacks on VANETs, ITSs and self-driving cars, and their detection mechanisms, are available in the current literature. However, to the best of our knowledge, they have not covered real attacks that have already occurred on self-driving cars. To bridge this research gap, in this paper we analyze the attacks that have already targeted self-driving cars and extensively present potential cyberattacks and their impacts on those cars, along with their vulnerabilities. For recently reported attacks, we describe the possible mitigation strategies taken by the manufacturers and governments. This survey includes recent works on how a self-driving car can ensure resilient operation even under an ongoing cyberattack. We also provide further research directions to improve the security issues associated with self-driving cars. © 2013 IEEE.
Bio-inspired network security for 5G-enabled IoT applications
- Saleem, Kashif, Alabduljabbar, Ghadah, Alrowais, Nouf, Al-Muhtadi, Jalal, Imran, Muhammad, Rodrigues, Joel
- Authors: Saleem, Kashif , Alabduljabbar, Ghadah , Alrowais, Nouf , Al-Muhtadi, Jalal , Imran, Muhammad , Rodrigues, Joel
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 1-1
- Full Text:
- Reviewed:
- Description: Every IPv6-enabled device connected and communicating over the Internet forms the Internet of Things (IoT) that is prevalent in society and is used in daily life. This IoT platform will quickly grow to be populated with billions or more objects by making every electrical appliance, car, and even items of furniture smart and connected. The 5th generation (5G) and beyond networks will further boost these IoT systems. The massive utilization of these systems at gigabit-per-second rates generates numerous issues. Owing to the huge complexity of large-scale IoT deployment, data privacy and security are the most prominent challenges, especially for critical applications such as Industry 4.0, e-healthcare, and military. Threat agents persistently strive to find new vulnerabilities and exploit them. Therefore, it is essential to include promising security measures that support running systems without harming or collapsing them. Nature-inspired algorithms have the capability to provide autonomous and sustainable defense and healing mechanisms. This paper first surveys the 5G network layer security for IoT applications and lists the network layer security vulnerabilities and requirements in wireless sensor networks, IoT, and 5G-enabled IoT. Second, a detailed literature review is conducted of the current network layer security methods and the bio-inspired techniques for IoT applications exchanging data packets over 5G. Finally, the bio-inspired algorithms are analyzed in the context of providing a secure network layer for IoT applications connected over 5G and beyond networks.
Blending big data analytics : review on challenges and a recent study
- Amalina, Fairuz, Targio Hashem, Ibrahim, Azizul, Zati, Fong, Ang, Imran, Muhammad
- Authors: Amalina, Fairuz , Targio Hashem, Ibrahim , Azizul, Zati , Fong, Ang , Imran, Muhammad
- Date: 2020
- Type: Text , Journal article , Review
- Relation: IEEE Access Vol. 8, no. (2020), p. 3629-3645
- Full Text:
- Reviewed:
- Description: With the collection of massive amounts of data every day, big data analytics has emerged as an important trend for many organizations. These collected data can contain important information that may be key to solving wide-ranging problems, such as cyber security, marketing, healthcare, and fraud. To analyze their large volumes of data for business analyses and decisions, large companies, such as Facebook and Google, adopt analytics. Such analyses and decisions impact existing and future technology. In this paper, we explore how big data analytics is utilized as a technique for solving problems of complex and unstructured data using such technologies as Hadoop, Spark, and MapReduce. We also discuss the data challenges introduced by big data according to the literature, including its six V's. Moreover, we investigate case studies of big data analytics on various techniques of such analytics, namely, text, voice, video, and network analytics. We conclude that big data analytics can bring positive changes in many fields, such as education, military, healthcare, politics, business, agriculture, banking, and marketing, in the future. © 2013 IEEE.
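Of the technologies named above, MapReduce is the one with the most compact programming model: a map function emits key-value pairs and a reduce function aggregates them per key. The classic word-count example, sketched in plain Python (single-process, so only the programming model is illustrated, not the distributed execution Hadoop or Spark provides):

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce: sum the emitted counts per word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data analytics", "big data challenges", "data"]
counts = reduce_phase(map_phase(docs))
```

In a real cluster the framework shuffles the mapped pairs so that all pairs for one key reach the same reducer; that shuffle, plus fault tolerance, is what the frameworks discussed in the survey add on top of this model.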
Exploring the dynamic voltage signature of renewable rich weak power system
- Alzahrani, S., Shah, Rakibuzzaman, Mithulananthan, N.
- Authors: Alzahrani, S. , Shah, Rakibuzzaman , Mithulananthan, N.
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 216529-216542
- Full Text:
- Reviewed:
- Description: Large-scale renewable energy-based power plants are becoming technically and economically attractive for the generation mix around the world. Nevertheless, network operation has significantly changed due to the rapid integration of renewable energy on the supply side. The integration of more renewable resources, especially inverter-based generation, deteriorates power system resilience to disturbances and substantially affects stable operation. Dynamic voltage stability becomes one of the major concerns for transmission system operators (TSOs) due to the limited capabilities of inverter-based resources (IBRs). A heavily loaded and stressed renewable-rich grid is susceptible to fault-induced delayed voltage recovery. Hence, it is crucial to examine the system response upon disturbances, to understand the voltage signature, and to determine the optimal location and sizing of grid-connected IBRs. Moreover, investigating the fault contribution mechanism of IBRs is essential in adopting additional grid support devices, control coordination, and the selection of appropriate corrective control schemes. This article utilizes a comprehensive assessment framework to assess power systems' dynamic voltage signature with large-scale PV under different realistic operating conditions. Several indices quantifying load bus voltage recovery have been used to explore the system's steady-state and transient response and voltage trajectories. The recovery indices help extricate the signature and influence of IBRs. The proposed framework's applicability is demonstrated on the New England IEEE 39-bus test system using the DIgSILENT platform. © 2013 IEEE.
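One of the simplest load-bus voltage recovery indices is the time from fault clearing until the voltage settles back above a per-unit threshold. A hedged sketch (the 0.9 p.u. threshold, sample spacing, and trace values are illustrative assumptions, not the indices defined in the article):

```python
def recovery_time(trace, dt, threshold=0.9):
    """Seconds from fault clearing until the bus voltage first reaches
    `threshold` per-unit and stays at or above it for the rest of the trace.

    trace: per-unit voltage samples after fault clearing, spaced dt seconds.
    Returns None if the voltage never settles above the threshold.
    """
    for i, v in enumerate(trace):
        if v >= threshold and all(u >= threshold for u in trace[i:]):
            return i * dt
    return None

# Hypothetical post-fault voltage trace sampled every 0.1 s
trace = [0.55, 0.70, 0.82, 0.88, 0.93, 0.95, 0.96]
t_rec = recovery_time(trace, dt=0.1)  # first sustained sample >= 0.9 p.u.
```

A long `t_rec` is the signature of fault-induced delayed voltage recovery; comparing the index across buses is one way to rank candidate locations for grid-support devices.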
Flow-aware elephant flow detection for software-defined networks
- Hamdan, Mosab, Mohammed, Bushra, Humayun, Usman, Abdelaziz, Ahmed, Khan, Suleman, Ali, M., Imran, Muhammad, Marsono, M.
- Authors: Hamdan, Mosab , Mohammed, Bushra , Humayun, Usman , Abdelaziz, Ahmed , Khan, Suleman , Ali, M. , Imran, Muhammad , Marsono, M.
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 72585-72597
- Full Text:
- Reviewed:
- Description: Software-defined networking (SDN) separates the network control plane from the packet forwarding plane, which provides comprehensive network-state visibility for better network management and resilience. Traffic classification, particularly for elephant flow detection, can lead to improved flow control and resource provisioning in SDN networks. Existing elephant flow detection techniques use pre-set thresholds that cannot scale with changes in the traffic concept and distribution. This paper proposes a flow-aware elephant flow detection technique applied to SDN. The proposed technique employs two classifiers, deployed on the SDN switches and the controller respectively, to achieve accurate elephant flow detection efficiently. Moreover, this technique allows sharing the elephant flow classification tasks between the controller and switches. Hence, most mice flows can be filtered in the switches, thus avoiding the need to send large numbers of classification requests and signaling messages to the controller. Experimental findings reveal that the proposed technique outperforms contemporary methods in terms of running time, accuracy, F-measure, and recall. © 2013 IEEE.
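The two-stage division of labor described above — filter obvious mice at the switch, confirm elephants at the controller — can be sketched with simple stand-in rules in place of the paper's trained classifiers. All thresholds, field names, and flow records below are illustrative assumptions (and fixed thresholds are exactly what the paper argues against; they merely stand in for the two classifier stages here):

```python
def switch_stage(flows, byte_threshold):
    """Switch-side pre-filter: forward only flows whose byte count
    exceeds the threshold; obvious mice are dropped locally."""
    return [f for f in flows if f["bytes"] > byte_threshold]

def controller_stage(candidates, rate_threshold):
    """Controller-side check: confirm elephants by sustained byte rate."""
    return [f["id"] for f in candidates
            if f["bytes"] / f["duration"] > rate_threshold]

flows = [
    {"id": "f1", "bytes": 500,       "duration": 1.0},   # mouse
    {"id": "f2", "bytes": 2_000_000, "duration": 2.0},   # elephant
    {"id": "f3", "bytes": 150_000,   "duration": 30.0},  # large but slow
]
candidates = switch_stage(flows, byte_threshold=100_000)
elephants = controller_stage(candidates, rate_threshold=50_000)
```

Only `candidates` ever reach the controller, which is the signaling-load saving the abstract highlights; in the paper both stages are learned classifiers rather than fixed cutoffs.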
Have you been a victim of COVID-19-related cyber incidents? Survey, taxonomy, and mitigation strategies
- Hakak, Saqib, Khan, Wazir, Imran, Muhammad, Choo, Kim-Kwang, Shoaib, Muhammad
- Authors: Hakak, Saqib , Khan, Wazir , Imran, Muhammad , Choo, Kim-Kwang , Shoaib, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 124134-124144
- Full Text:
- Reviewed:
- Description: Cybercriminals are constantly on the lookout for new attack vectors, and the recent COVID-19 pandemic is no exception. For example, social distancing measures have resulted in travel bans, lockdowns, and stay-at-home orders, consequently increasing the reliance on information and communications technologies, such as Zoom. Cybercriminals have also attempted to exploit the pandemic to facilitate a broad range of malicious activities, such as attempting to take over videoconferencing platforms used in online meetings/educational activities, information theft, and other fraudulent activities. This study briefly reviews some of the malicious cyber activities associated with COVID-19 and the potential mitigation solutions. We also propose an attack taxonomy, which (optimistically) will help guide future risk management and mitigation responses. © 2013 IEEE.
Investigating smart home security: is blockchain the answer?
- Arif, Samrah, Khan, M. Arif, Rehman, Sabih, Kabir, Muhammad, Imran, Muhammad
- Authors: Arif, Samrah , Khan, M. Arif , Rehman, Sabih , Kabir, Muhammad , Imran, Muhammad
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 117802-117816
- Full Text:
- Reviewed:
- Description: Smart home automation is increasingly gaining popularity among current applications of the Internet of Things (IoT) due to the convenience and facilities it provides to homeowners. Sensors are embedded in home appliances with wireless connectivity so that homeowners can operate these devices remotely. With the exponential increase of smart home IoT devices in the marketplace, such as door locks, light bulbs, power switches, etc., numerous security concerns are arising due to the limited storage and processing power of such devices, making them vulnerable to several attacks. For this reason, security in the deployment of these devices has gained popularity among researchers as a critical research area. Moreover, traditional security schemes have failed to address the unique security concerns associated with these devices. Blockchain, a decentralised database based on cryptographic techniques, is gaining enormous attention as a means of assuring the security of IoT systems. A blockchain framework within an IoT system is a fascinating substitute for the traditional centralised models, which have significant shortcomings in meeting the security demands of smart homes. In this article, we aim to examine the security of smart homes by investigating the adoption of blockchain and exploring some of the currently proposed smart home architectures using blockchain technology. To present our findings, we describe a simple secure smart home framework based on a refined version of blockchain called Consortium blockchain. We highlight the limitations and opportunities of adopting such an architecture. We further evaluate our model and conclude with the results by designing an experimental testbed using a few household IoT devices commonly available in the marketplace. © 2013 IEEE.
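A consortium blockchain of the kind this record describes restricts block creation to a known set of validators rather than an open network. The sketch below is purely illustrative: the validator names, event format, and validation rule are assumptions for exposition, not the authors' framework.

```python
# Minimal consortium-style blockchain sketch for smart-home event logs.
# Validator names and the event payloads are illustrative assumptions.

import hashlib
import json
import time

AUTHORIZED_VALIDATORS = {"gateway-1", "gateway-2"}  # consortium members

def block_hash(block: dict) -> str:
    """SHA-256 over the block contents, excluding the stored hash itself."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()

def make_block(index: int, events: list, prev_hash: str, validator: str) -> dict:
    # Only consortium members may append blocks (the "refined" access
    # control that distinguishes this from a public chain).
    if validator not in AUTHORIZED_VALIDATORS:
        raise PermissionError("only consortium members may append blocks")
    block = {
        "index": index,
        "timestamp": time.time(),
        "events": events,           # e.g. door-lock or sensor readings
        "prev_hash": prev_hash,
        "validator": validator,
    }
    block["hash"] = block_hash(block)
    return block

def verify_chain(chain: list) -> bool:
    """Check that every block's hash is intact and links to its parent."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False            # contents were tampered with
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False            # broken parent link
    return True
```

The point of the hash chaining is that altering any logged device event invalidates that block's hash and, transitively, every later block, so tampering with the smart-home audit log is detectable by any consortium member.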