Robust image classification using a low-pass activation function and DCT augmentation
- Authors: Hossain, Md Tahmid , Teng, Shyh , Sohel, Ferdous , Lu, Guojun
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9 (2021), p. 86460-86474
- Full Text:
- Reviewed:
- Description: Convolutional Neural Networks' (CNNs') performance disparity on clean and corrupted datasets has recently come under scrutiny. In this work, we analyse common corruptions in the frequency domain, i.e., High Frequency corruptions (HFc, e.g., noise) and Low Frequency corruptions (LFc, e.g., blur). Although a simple solution to HFc is low-pass filtering, ReLU, a widely used Activation Function (AF), does not have any filtering mechanism. In this work, we instill low-pass filtering into the AF (LP-ReLU) to improve robustness against HFc. To deal with LFc, we complement LP-ReLU with Discrete Cosine Transform based augmentation. LP-ReLU, coupled with DCT augmentation, enables a deep network to tackle the entire spectrum of corruption. We use CIFAR-10-C and Tiny ImageNet-C for evaluation and demonstrate improvements of 5% and 7.3% in accuracy, respectively, compared to the State-Of-The-Art (SOTA). We further evaluate our method's stability on a variety of perturbations in CIFAR-10-P and Tiny ImageNet-P, achieving new SOTA in these experiments as well. To further strengthen our understanding of CNNs' lack of robustness, a decision space visualisation process is proposed and presented in this work. © 2013 IEEE.
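The record does not give the exact form of LP-ReLU or the DCT augmentation; the sketch below only illustrates the two ideas, with a hypothetical smooth, saturating activation standing in for LP-ReLU and random masking of high-frequency DCT coefficients standing in for the augmentation (all names and parameters are illustrative).

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (x + 0.5) * k / n)
    c[0, :] = np.sqrt(1.0 / n)
    return c

def dct_augment(img, keep=0.5, rng=None):
    """Zero out a random subset of high-frequency DCT coefficients,
    then reconstruct -- a blur-like (low-frequency) corruption."""
    rng = np.random.default_rng(rng)
    n = img.shape[0]
    c = dct_matrix(n)
    coeffs = c @ img @ c.T
    # Radial frequency index; always keep the lowest `keep` fraction.
    u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    radius = np.sqrt(u ** 2 + v ** 2)
    mask = (radius <= keep * radius.max()) | (rng.random((n, n)) < 0.5)
    return c.T @ (coeffs * mask) @ c

def lp_relu(x, alpha=0.1):
    """Hypothetical stand-in for LP-ReLU: behaves like ReLU near zero
    but saturates for large inputs, damping high-frequency spikes."""
    return np.where(x > 0, np.tanh(alpha * x) / alpha, 0.0)
```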
Rock-burst occurrence prediction based on optimized naïve Bayes models
- Authors: Ke, Bo , Khandelwal, Manoj , Asteris, Panagiotis , Skentou, Athanasia , Mamou, Anna , Armaghani, Danial
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9 (2021), p. 91347-91360
- Full Text:
- Reviewed:
- Description: Rock-burst is a common failure in hard-rock projects in civil and mining construction, and proper classification and prediction of this phenomenon is therefore of interest. This research presents the development of optimized naïve Bayes models for predicting rock-burst failures in underground projects. The naïve Bayes models were optimized using four weight optimization techniques: forward, backward, particle swarm optimization, and evolutionary. An evolutionary random forest model was developed to identify the most significant input parameters. The maximum tangential stress, elastic energy index, and uniaxial tensile stress were then selected by this feature selection technique (i.e., evolutionary random forest) to develop the optimized naïve Bayes models. The performance of the models was assessed using various criteria as well as a simple ranking system. The results showed that particle swarm optimization was the most effective technique in improving the accuracy of the naïve Bayes model for rock-burst prediction (cumulative ranking = 21), while the backward technique was the worst weight optimization technique (cumulative ranking = 11). All the optimized naïve Bayes models identified the maximum tangential stress as the most significant parameter in predicting rock-burst failures. These results demonstrate that the particle swarm optimization technique may improve the accuracy of naïve Bayes algorithms in predicting rock-burst occurrence. © 2013 IEEE.
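The record does not specify the weighting scheme; the sketch below illustrates the general idea of tuning per-feature weights of a Gaussian naïve Bayes classifier with a minimal particle swarm (function names and hyperparameters are illustrative, not the authors').

```python
import numpy as np

def gnb_fit(X, y):
    # Per-class mean/variance/prior for a Gaussian naive Bayes model.
    classes = np.unique(y)
    stats = {c: (X[y == c].mean(0), X[y == c].var(0) + 1e-9,
                 np.log((y == c).mean())) for c in classes}
    return classes, stats

def gnb_predict(X, classes, stats, w):
    # Feature-weighted log-likelihood: weights scale each feature's vote.
    scores = []
    for c in classes:
        mu, var, prior = stats[c]
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
        scores.append(prior + (ll * w).sum(axis=1))
    return classes[np.argmax(scores, axis=0)]

def pso_weights(X, y, n_particles=20, iters=40, seed=0):
    # Minimal particle swarm over the feature-weight vector,
    # maximizing training accuracy.
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    classes, stats = gnb_fit(X, y)
    fit = lambda w: (gnb_predict(X, classes, stats, w) == y).mean()
    pos = rng.random((n_particles, d))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fit(w) for w in pos])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 2.0)
        f = np.array([fit(w) for w in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest, classes, stats
```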
- Authors: Chowdhury, Abdullahi , Karmakar, Gour , Kamruzzaman, Joarder , Islam, Syed
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 17, no. 2 (2021), p. 961-970
- Full Text: false
- Reviewed:
- Description: To enhance industrial production and automation, rapid transportation of raw materials and finished products to and from distributed factories, warehouses and outlets is essential. To reduce cost and increase efficiency, roads will increasingly be shared by connected and self-driving commercial vehicles fitted with industrial-grade sensors, alongside normal and self-driving passenger vehicles. For its wide adoption, the trustworthiness of self-driving vehicles in the intelligent transportation system (ITS) is pivotal. In this article, we introduce a novel model to measure the overall trustworthiness of a self-driving vehicle considering on-board unit (OBU) components, GPS data and safety messages. In calculating the trustworthiness of individual OBU components, CertainLogic and the beta distribution function (BDF) are used. Those trust values are fused using both the Dempster-Shafer theory (DST) and a logical operator of CertainLogic. Results of our simulation show that our proposed method can effectively determine the trust of self-driving vehicles. © 2005-2012 IEEE.
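As a rough illustration of the two fusion tools the abstract names, the snippet below computes an expected trust value from a Beta distribution over positive/negative observations and combines two mass functions with Dempster's rule over a {trust, distrust, uncertainty} frame; the article's exact formulation may differ.

```python
def beta_trust(r, s):
    """Expected trust from r positive and s negative observations
    under a Beta(r+1, s+1) prior (a common BDF formulation)."""
    return (r + 1) / (r + s + 2)

def dempster_combine(m1, m2):
    """Dempster's rule for masses over trust 'T', distrust 'D', and
    uncertainty 'U' (U is the whole frame, so it intersects everything)."""
    conflict = m1["T"] * m2["D"] + m1["D"] * m2["T"]
    k = 1.0 - conflict  # normalization after discarding conflicting mass
    return {
        "T": (m1["T"] * m2["T"] + m1["T"] * m2["U"] + m1["U"] * m2["T"]) / k,
        "D": (m1["D"] * m2["D"] + m1["D"] * m2["U"] + m1["U"] * m2["D"]) / k,
        "U": (m1["U"] * m2["U"]) / k,
    }
```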
- Authors: Ooi, Ean H. , Ooi, Ean Tat
- Date: 2021
- Type: Text , Journal article
- Relation: Computers in Biology and Medicine Vol. 137 (2021)
- Full Text: false
- Reviewed:
- Description: Switching bipolar radiofrequency ablation (bRFA) is a thermal treatment modality used for liver cancer treatment that is capable of producing larger, more confluent and more regular thermal coagulation. When implemented in the no-touch mode, switching bRFA can prevent tumour track seeding, a phenomenon in which cancer cells are deposited along the insertion track. Nevertheless, the no-touch mode was found to yield significant unwanted thermal damage as a result of the electrodes' position outside the tumour. It is postulated that the unwanted thermal damage can be minimized if ablation can be directed such that it focuses only within the tumour domain. This can be achieved by partially insulating the active tip of the RF electrodes such that electric current flows in and out of the tissue only through the non-insulated section of the electrode. This concept is known as unidirectional ablation and has been shown to produce the desired effect in monopolar RFA. In this paper, computational models based on a well-established mathematical framework for modelling RFA were developed to investigate whether unidirectional ablation can minimize unwanted thermal damage during time-based switching bRFA. From the numerical results, unidirectional ablation was shown to produce treatment efficacy of nearly 100%, while at the same time minimizing the amount of unwanted thermal damage. Nevertheless, this effect was observed only when the switch interval of the time-based protocol was set to 50 s. An extended switch interval negated the benefits of unidirectional ablation. © 2021 Elsevier Ltd
- Authors: Usman, Muhammad , Jan, Mian , Jolfaei, Alireza , Xu, Min , He, Xiangjian , Chen, Jinjun
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Transactions on Industrial Informatics Vol. 16, no. 9 (2020), p. 6114-6123
- Full Text: false
- Reviewed:
- Description: Industrial Internet of Things applications demand trustworthiness in terms of quality of service (QoS), security, and privacy to support the smooth transmission of data. To address these challenges, in this article, we propose a distributed and anonymous data collection (DaaC) framework based on a multilevel edge computing architecture. This framework distributes captured data among multiple level-one edge devices (LOEDs) to improve the QoS and minimize packet drop and end-to-end delay. Mobile sinks are used to collect data from LOEDs and upload it to cloud servers. Before data collection, the mobile sinks are registered with a level-two edge device to protect the underlying network. The privacy of mobile sinks is preserved through group-based signed data collection requests. Experimental results show that our proposed framework improves QoS through distributed data transmission. It also helps protect the underlying network through a registration scheme and preserves the privacy of mobile sinks through group-based data collection requests. © 2005-2012 IEEE.
A low-complexity equalizer for video broadcasting in cyber-physical social systems through handheld mobile devices
- Authors: Solyman, Ahmad , Attar, Hani , Khosravi, Mohammad , Menon, Varun , Jolfaei, Alireza , Balasubramanian, Venki , Selvaraj, Buvana , Tavallali, Pooya
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8 (2020), p. 67591-67602
- Full Text:
- Reviewed:
- Description: In Digital Video Broadcasting-Handheld (DVB-H) devices for cyber-physical social systems, the Discrete Fractional Fourier Transform-Orthogonal Chirp Division Multiplexing (DFrFT-OCDM) has been suggested to enhance performance over Orthogonal Frequency Division Multiplexing (OFDM) systems under time- and frequency-selective fading channels. In this case, the need for equalizers like the Minimum Mean Square Error (MMSE) and Zero-Forcing (ZF) arises, though such equalizers are excessively complex due to the need for a matrix inversion, especially for the long symbol lengths of DVB-H. In this work, a low-complexity equalizer based on the Least-Squares Minimal Residual (LSMR) algorithm is used to solve the matrix inversion problem iteratively. The paper proposes the LSMR algorithm for linear and nonlinear equalizers, with simulation results indicating that the proposed equalizer offers strong performance at reduced complexity compared with the classical MMSE equalizer and other low-complexity equalizers in time- and frequency-selective fading channels. © 2013 IEEE.
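The core idea, solving the equalization system iteratively rather than inverting the channel matrix, can be sketched with SciPy's `lsmr` solver; the channel model and damping value below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.sparse.linalg import lsmr

def toeplitz_channel(h, n):
    """Convolution (channel) matrix H such that H @ x == np.convolve(h, x)[:n]."""
    H = np.zeros((n, n))
    for i in range(len(h)):
        H += np.diag(np.full(n - i, h[i]), -i)
    return H

def lsmr_equalize(H, y, damp=1e-3):
    """Solve min ||H x - y||^2 + damp^2 ||x||^2 iteratively, without
    forming H^{-1}; `damp` plays the role of the MMSE noise regularizer
    (damp -> 0 approaches zero-forcing)."""
    return lsmr(H, y, damp=damp)[0]
```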
A new data driven long-term solar yield analysis model of photovoltaic power plants
- Authors: Ray, Biplob , Shah, Rakibuzzaman , Islam, Md Rabiul , Islam, Syed
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8 (2020), p. 136223-136233
- Full Text:
- Reviewed:
- Description: Historical data offers a wealth of knowledge to its users. However, it is often so large that the information cannot be fully extracted, synthesized, and analyzed efficiently for an application such as forecasting variable generator outputs. Moreover, the accuracy of the prediction method is vital. Therefore, a trade-off between accuracy and efficacy is required for a data-driven energy forecasting method. It has been identified that a hybrid approach may outperform individual techniques in minimizing error, although it is more challenging to synthesize. A hybrid deep learning-based method is proposed for output prediction of solar photovoltaic systems in Australia to obtain this trade-off between accuracy and efficacy. Historical data from 1990-2013 for Australian locations (e.g., North Queensland) are used to train the model. The model is developed using a combination of a multivariate long short-term memory (LSTM) network and a convolutional neural network (CNN). The proposed hybrid deep learning model (LSTM-CNN) is compared with existing neural network ensemble (NNE), random forest, statistical analysis, and artificial neural network (ANN) based techniques to assess its performance. The proposed model could be useful for generation planning and reserve estimation in power systems with high penetration of solar photovoltaics (PVs) or other renewable energy sources (RESs). © 2013 IEEE.
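The record does not detail the data pipeline; the helper below merely sketches the sliding-window framing that multivariate LSTM/CNN forecasters typically consume (window sizes and the target column are arbitrary choices, not the authors').

```python
import numpy as np

def make_windows(series, n_in=24, n_out=1):
    """Frame a (timesteps, features) multivariate series into
    (samples, n_in, features) inputs and (samples, n_out) targets,
    the shapes an LSTM/CNN forecaster consumes.  The target here is
    feature 0 (e.g., PV output); the other columns are covariates."""
    X, y = [], []
    for t in range(len(series) - n_in - n_out + 1):
        X.append(series[t:t + n_in])
        y.append(series[t + n_in:t + n_in + n_out, 0])
    return np.array(X), np.array(y)
```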
A secured framework for SDN-based edge computing in IoT-enabled healthcare system
- Authors: Li, Junxia , Cai, Jinjin , Khan, Fazlullah , Rehman, Ateeq , Balasubramanian, Venki
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8 (2020), p. 135479-135490
- Full Text:
- Reviewed:
- Description: The Internet of Things (IoT) consists of resource-constrained smart devices capable of sensing and processing data. It connects a huge number of smart sensing devices, i.e., things, and heterogeneous networks. The IoT is incorporated into different applications, such as smart health, smart home, smart grid, etc. The concept of smart healthcare has emerged in different countries, where pilot projects of healthcare facilities are analyzed. In IoT-enabled healthcare systems, the security of IoT devices and associated data is very important, whereas Edge computing is a promising architecture that solves their computational and processing problems. Edge computing is economical and has the potential to provide low-latency data services by improving the communication and computation speed of IoT devices in a healthcare system. In Edge-based IoT-enabled healthcare systems, load balancing, network optimization, and efficient resource utilization are accurately performed using artificial intelligence (AI), i.e., an intelligent software-defined network (SDN) controller. SDN-based Edge computing helps in the efficient utilization of the limited resources of IoT devices. However, these low-powered devices and their associated data (private sensitive data of patients) are prone to various security threats. Therefore, in this paper, we design a secure framework for SDN-based Edge computing in an IoT-enabled healthcare system. In the proposed framework, the IoT devices are authenticated by the Edge servers using a lightweight authentication scheme. After authentication, these devices collect data from the patients and send them to the Edge servers for storage, processing, and analysis. The Edge servers are connected with an SDN controller, which performs load balancing, network optimization, and efficient resource utilization in the healthcare system. The proposed framework is evaluated using computer-based simulations. 
The results demonstrate that the proposed framework provides better solutions for IoT-enabled healthcare systems. © 2013 IEEE. **Please note that there are multiple authors for this article therefore only the name of the first 5 including Federation University Australia affiliate “Venki Balasubramaniam” is provided in this record**
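The article's lightweight authentication scheme is not specified in this record; the generic HMAC challenge-response exchange sketched below only illustrates the kind of low-cost device-to-edge authentication such frameworks use (all function names are invented).

```python
import hashlib
import hmac
import os

def make_challenge():
    # Edge server issues a fresh random nonce per authentication round,
    # preventing replay of old responses.
    return os.urandom(16)

def device_response(shared_key, device_id, nonce):
    # Device proves possession of the pre-shared key without sending it:
    # only a keyed hash over its identity and the nonce crosses the network.
    msg = device_id.encode() + nonce
    return hmac.new(shared_key, msg, hashlib.sha256).hexdigest()

def edge_verify(shared_key, device_id, nonce, response):
    # Constant-time comparison avoids timing side channels.
    expected = device_response(shared_key, device_id, nonce)
    return hmac.compare_digest(expected, response)
```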
A Survey on Behavioral Pattern Mining from Sensor Data in Internet of Things
- Authors: Rashid, Md Mamunur , Kamruzzaman, Joarder , Hassan, Mohammad , Shahriar Shafin, Sakib , Bhuiyan, Md Zakirul
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8 (2020), p. 33318-33341
- Full Text:
- Reviewed:
- Description: The deployment of large-scale wireless sensor networks (WSNs) for Internet of Things (IoT) applications is increasing day by day, especially with the emergence of smart city services. The sensor data streams generated from these applications are largely dynamic, heterogeneous, and often geographically distributed over large areas. For high-value use in business, industry and services, these data streams must be mined to extract insightful knowledge, for example for monitoring (e.g., discovering certain behaviors over a deployed area) or network diagnostics (e.g., predicting faulty sensor nodes). However, due to the inherent constraints of sensor networks and application requirements, traditional data mining techniques cannot be directly used to mine IoT data streams efficiently and accurately in real time. In the last decade, a number of works have been reported in the literature proposing behavioral pattern mining algorithms for sensor networks. This paper presents the technical challenges that need to be considered for mining sensor data. It then provides a thorough review of the mining techniques proposed in the recent literature to mine behavioral patterns from sensor data in IoT, and their characteristics and differences are highlighted and compared. We also propose a behavioral pattern mining framework for IoT and discuss possible future research directions in this area. © 2013 IEEE.
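As a toy example of the behavioral pattern mining the survey covers, the function below performs an Apriori-style first step: counting co-occurring sensor events within time windows and keeping pairs whose support meets a threshold (event names and the threshold are invented for illustration).

```python
from collections import Counter
from itertools import combinations

def frequent_patterns(event_windows, min_support):
    """Count pairs of sensor events that co-occur inside each time
    window and return those whose support (fraction of windows)
    meets `min_support` -- the candidate-generation step of
    Apriori-style behavioral pattern mining."""
    counts = Counter()
    for window in event_windows:
        # De-duplicate and sort so (a, b) and (b, a) are the same pair.
        for pair in combinations(sorted(set(window)), 2):
            counts[pair] += 1
    n = len(event_windows)
    return {p: c / n for p, c in counts.items() if c / n >= min_support}
```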
An adaptive and flexible brain energized full body exoskeleton with IoT edge for assisting the paralyzed patients
- Authors: Jacob, Sunil , Alagirisamy, Mukil , Menon, Varun , Kumar, B. Manoj , Balasubramanian, Venki
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8 (2020), p. 100721-100731
- Full Text:
- Reviewed:
- Description: The paralyzed population is increasing worldwide due to stroke, spinal cord injury, post-polio, and other related diseases. Different assistive technologies are used to improve the physical and mental health of the affected patients. Exoskeletons have emerged as one of the most promising technologies for providing movement and rehabilitation for the paralyzed. But exoskeletons are limited by the constraints of weight, flexibility, and adaptability. To resolve these issues, we propose an adaptive and flexible Brain Energized Full Body Exoskeleton (BFBE) for assisting paralyzed people. This paper describes the design, control, and testing of BFBE with 15 degrees of freedom (DoF) for assisting the users in their daily activities. Flexibility is incorporated into the system by a modular design approach. The brain signals captured by Electroencephalogram (EEG) sensors are used for controlling the movements of BFBE. The processing happens at the edge, reducing delay in decision making, and the system is further integrated with an IoT module that helps to send an alert message to multiple caregivers in case of an emergency. Potential energy harvesting is used in the system to solve power issues related to the exoskeleton. Stability in the gait cycle is ensured by using adaptive sensory feedback. The system validation is done by using six natural movements on ten different paralyzed persons. The system recognizes human intentions with an accuracy of 85%. The result shows that BFBE can be an efficient method for providing assistance and rehabilitation for paralyzed patients. © 2013 IEEE. **Please note that there are multiple authors for this article therefore only the name of the first 5 including Federation University Australia affiliate “Venki Balasubramanian” is provided in this record**
An enhancement to the spatial pyramid matching for image classification and retrieval
- Authors: Karmakar, Priyabrata , Teng, Shyh , Lu, Guojun , Zhang, Dengsheng
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8 (2020), p. 22463-22472
- Full Text:
- Reviewed:
- Description: Spatial pyramid matching (SPM) is one of the most widely used methods to incorporate spatial information into an image representation. Despite its effectiveness, traditional SPM is not rotation invariant. A rotation-invariant SPM has been proposed in the literature, but it has several limitations affecting its effectiveness. In this paper, we investigate how to make SPM robust to rotation by addressing those limitations. In an SPM framework, an image is divided into an increasing number of partitions at different pyramid levels. Our main focus is on how to partition images in such a way that the resulting structure can deal with image-level rotations. To do that, we investigate three concentric ring partitioning schemes. Apart from image partitioning, another important component of the SPM framework is a weight function, which apportions the contribution of each pyramid level to the final matching between two images. We propose a new weight function that is suitable for the rotation-invariant SPM structure. Experiments based on image classification and retrieval are performed on five image databases. The detailed result analysis shows that we are successful in enhancing the effectiveness of SPM for image classification and retrieval. © 2013 IEEE.
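The paper's three ring-partitioning schemes are not reproduced here; the sketch below only shows why concentric rings help at all: a whole-image rotation moves pixels within a ring rather than between rings, so per-ring histograms are stable under rotation (ring and bin counts are arbitrary).

```python
import numpy as np

def ring_histograms(img, n_rings=3, n_bins=8):
    """Partition an image into concentric rings around its centre and
    build one normalized intensity histogram per ring.  Rotating the
    whole image about the centre permutes pixels inside each ring, so
    the concatenated descriptor is unchanged."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    edges = np.linspace(0.0, r.max() + 1e-9, n_rings + 1)
    hists = []
    for i in range(n_rings):
        ring = img[(r >= edges[i]) & (r < edges[i + 1])]
        hist, _ = np.histogram(ring, bins=n_bins, range=(0.0, 1.0))
        hists.append(hist / max(len(ring), 1))
    return np.concatenate(hists)
```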
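The concentric-ring idea above lends itself to a minimal sketch. The toy code below (assumed details: plain intensity histograms and uniform level weights; the paper's actual descriptors and proposed weight function are not reproduced here) shows why ring partitions tolerate image-level rotation: rotating about the centre keeps every pixel inside its ring.

```python
import numpy as np

def ring_histograms(img, n_rings=3, n_bins=16):
    """Split pixels into concentric rings around the image centre and
    build one normalised intensity histogram per ring. A rotation about
    the centre keeps each pixel in its ring, so the descriptor is
    (approximately) rotation invariant."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(ys - cy, xs - cx)
    r = r / (r.max() + 1e-9)                       # radius normalised to [0, 1]
    ring_idx = np.minimum((r * n_rings).astype(int), n_rings - 1)
    hists = []
    for k in range(n_rings):
        hist, _ = np.histogram(img[ring_idx == k], bins=n_bins, range=(0, 256))
        hists.append(hist / max(hist.sum(), 1))    # normalise each ring
    return np.array(hists)                         # shape: (n_rings, n_bins)

def histogram_intersection(h1, h2, weights=None):
    """Pyramid-style match score: weighted sum of per-ring histogram
    intersections (uniform weights by default)."""
    if weights is None:
        weights = np.ones(len(h1)) / len(h1)
    return sum(w * np.minimum(a, b).sum()
               for w, a, b in zip(weights, h1, h2))
```

On a square image, a 90-degree rotation permutes pixels strictly within rings, so the match score between an image and its rotated copy stays at the maximum value of 1.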
Dual cost function model predictive direct speed control with duty ratio optimization for PMSM drives
- Liu, Ming, Hu, Jiefeng, Chan, Ka, Or, Siu, Ho, Siu, Xu, Wenzheng, Zhang, Xian
- Authors: Liu, Ming , Hu, Jiefeng , Chan, Ka , Or, Siu , Ho, Siu , Xu, Wenzheng , Zhang, Xian
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 126637-126647
- Full Text:
- Reviewed:
- Description: Traditional speed control of permanent magnet synchronous motors (PMSMs) includes a cascaded speed loop with proportional-integral (PI) regulators. The output of this outer speed loop, i.e., the electromagnetic torque reference, is in turn fed to either the inner current controller or the direct torque controller. This cascaded control structure leads to a relatively slow dynamic response and, more importantly, larger speed ripples. This paper presents a new dual cost function model predictive direct speed control (DCF-MPDSC) with duty ratio optimization for PMSM drives. By employing accurate system status prediction, optimized duty ratios between one zero voltage vector and one active voltage vector are first deduced based on the deadbeat criterion. Then, two separate cost functions are formulated sequentially to refine the combinations of voltage vectors, which provide two-degree-of-freedom control capability. Specifically, the first cost function results in better dynamic response, while the second one contributes to speed ripple reduction and steady-state offset elimination. The proposed control strategy has been validated by both Simulink simulation and hardware-in-the-loop (HIL) experiments. Compared to existing control methods, the proposed DCF-MPDSC can reach the speed reference rapidly with very small speed ripple and offset. © 2013 IEEE.
- Description: This work was supported in part by the Research Grants Council of the Hong Kong Special Administrative Region (HKSAR) Government under Grant R5020-18, and in part by the Innovation and Technology Commission of the HKSAR Government to the Hong Kong Branch of National Rail Transit Electrification and Automation Engineering Technology Research Center under Grant K-BBY1.
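The two-stage cost-function idea can be sketched on a deliberately simplified model. The first-order speed model, the candidate vector set, and the duty penalty below are all illustrative assumptions, not the paper's PMSM formulation: stage one shortlists candidates by tracking error (dynamics), stage two re-ranks the shortlist with a small duty penalty standing in for the ripple/offset term.

```python
import itertools

# Hypothetical first-order speed model standing in for the paper's PMSM
# prediction equations (not reproduced here):
#   next_speed = speed + GAIN * duty * v_active
GAIN = 0.1
V_ACTIVE = [-1.0, 1.0]                  # simplified set of active voltage vectors
DUTIES = [0.0, 0.25, 0.5, 0.75, 1.0]    # duty ratio of active vs. zero vector

def predict(speed, v, duty):
    return speed + GAIN * duty * v

def dual_cost_select(speed, ref, keep=3):
    """Two cost functions applied in sequence:
    stage 1 shortlists `keep` candidates by squared tracking error,
    stage 2 re-ranks them with a small duty penalty (less active-vector
    time -> less ripple), a stand-in for the paper's second cost."""
    candidates = list(itertools.product(V_ACTIVE, DUTIES))
    stage1 = sorted(candidates,
                    key=lambda c: (predict(speed, *c) - ref) ** 2)[:keep]
    return min(stage1,
               key=lambda c: (predict(speed, *c) - ref) ** 2 + 1e-3 * c[1])

def run(ref=1.0, steps=30):
    """Closed-loop simulation: apply the selected vector each step."""
    speed = 0.0
    for _ in range(steps):
        v, duty = dual_cost_select(speed, ref)
        speed = predict(speed, v, duty)
    return speed
```

In this toy loop the controller drives the speed to the reference and then settles on the zero vector, so no steady-state offset remains.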
- Zhang, Xian, Hu, Jiefeng, Wang, Huaizhi, Wang, Guibin, Chan, Ka, Qiu, Jing
- Authors: Zhang, Xian , Hu, Jiefeng , Wang, Huaizhi , Wang, Guibin , Chan, Ka , Qiu, Jing
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Transactions on Industry Applications Vol. 56, no. 5 (2020), p. 5868-5879
- Full Text: false
- Reviewed:
- Description: This article studies electric vehicle (EV) potential to participate in the energy market and provide flexible ramping products (FRPs). EV traffic flows are predicted by the deep belief network, and the availability of flexible EVs is estimated based on the predicted EV traffic flows. Then, a novel market mechanism in distribution system is proposed to encourage the dispatchable EV demand to react to economic signals and provide ramping services. The designed market model is based on locational marginal pricing of energy and marginal pricing of FRPs. System ramping capacity constraints and EV operation constraints are incorporated in the proposed model to achieve the balance between the system social cost minimization and the EV traveling convenience. Moreover, typical uncertainties are considered by the scenario-based approach. Finally, simulations are conducted to verify the effectiveness of the established model and demonstrate the contributions of EVs to the system reliability and flexibility. © 1972-2012 IEEE.
- Description: ITIAC: Funding details: JCYJ20170817100412438, 2019-AAAE-1307, JCYJ20190808141019317
Low-power wide-area networks : design goals, architecture, suitability to use cases and research challenges
- Buurman, Ben, Kamruzzaman, Joarder, Karmakar, Gour, Islam, Syed
- Authors: Buurman, Ben , Kamruzzaman, Joarder , Karmakar, Gour , Islam, Syed
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 17179-17220
- Full Text:
- Reviewed:
- Description: Previous survey articles on Low-Powered Wide-Area Networks (LPWANs) lack a systematic analysis of the design goals of LPWAN and the design decisions adopted by various commercially available and emerging LPWAN technologies, and no study has analysed how these design decisions impact the ability to meet design goals. Assessing a technology's ability to meet design goals is essential in determining suitable technologies for a given application. To address these gaps, we have analysed six prominent design goals and identified the design decisions used to meet each goal in eight LPWAN technologies, ranging from technical considerations to business models, and determined which specific technique in a design decision will help meet each goal to the greatest extent. System architecture and specifications are presented for those LPWAN solutions, and their ability to meet each design goal is evaluated. We outline seventeen use cases across twelve domains that require large low-power network infrastructure and prioritise each design goal's importance to those applications as Low, Moderate, or High. Using these priorities and each technology's suitability for meeting design goals, we suggest appropriate LPWAN technologies for each use case. Finally, a number of research challenges are presented for current and future technologies. © 2013 IEEE.
Mobility based network lifetime in wireless sensor networks: A review
- Authors: Nguyen, Linh , Nguyen, Hoc
- Date: 2020
- Type: Text , Journal article
- Relation: Computer Networks Vol. 174, no. (2020), p.
- Full Text:
- Reviewed:
- Description: Emerging technologies in micro-electromechanical systems and wireless communications increasingly allow mobile wireless sensor networks (MWSNs) to become a more and more powerful means in many applications such as habitat and environmental monitoring, traffic observation, battlefield surveillance, smart homes and smart cities. Nevertheless, due to sensor battery constraints, operating an MWSN energy-efficiently is of paramount importance in those applications, and a plethora of approaches have been proposed to prolong the network lifetime as much as possible. Therefore, this paper provides a comprehensive review of the methods developed to exploit the mobility of sensor nodes and/or sink(s) to effectively maximize the lifetime of an MWSN. The survey systematically classifies the algorithms into categories where the MWSN is equipped with mobile sensor nodes, one mobile sink or multiple mobile sinks. How to drive the mobile sink(s) for energy efficiency in the network is also fully reviewed and reported. © 2020
Network representation learning: From traditional feature learning to deep learning
- Sun, Ke, Wang, Lei, Xu, Bo, Zhao, Wenhong, Teng, Shyh, Xia, Feng
- Authors: Sun, Ke , Wang, Lei , Xu, Bo , Zhao, Wenhong , Teng, Shyh , Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 205600-205617
- Full Text:
- Reviewed:
- Description: Network representation learning (NRL) is an effective graph analytics technique that helps users deeply understand the hidden characteristics of graph data. It has been successfully applied in many real-world tasks related to network science, such as social network data processing, biological information processing, and recommender systems. Deep learning is a powerful tool for learning data features. However, it is non-trivial to generalize deep learning to graph-structured data, since it differs from regular data such as pictures, which have spatial information, and sounds, which have temporal information. Recently, researchers have proposed many deep learning-based methods in the area of NRL. In this survey, we investigate classical NRL from traditional feature learning methods to deep learning-based models, analyze the relationships between them, and summarize the latest progress. Finally, we discuss open issues concerning NRL and point out future directions in this field. © 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
Privacy protection and energy optimization for 5G-aided industrial internet of things
- Humayun, Mamoona, Jhanjhi, Nz, Alruwaili, Madallah, Amalathas, Sagaya, Balasubramanian, Venki, Selvaraj, Buvana
- Authors: Humayun, Mamoona , Jhanjhi, Nz , Alruwaili, Madallah , Amalathas, Sagaya , Balasubramanian, Venki , Selvaraj, Buvana
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 183665-183677
- Full Text:
- Reviewed:
- Description: 5G is expected to revolutionize every sector of life by providing interconnectivity of everything everywhere at high speed. However, massively interconnected devices and fast data transmission will bring the challenges of privacy as well as energy deficiency. In today's fast-paced economy, almost every sector is dependent on energy resources, while the energy sector is still mainly dependent on fossil fuels, which constitute about 80% of energy globally. This massive extraction and combustion of fossil fuels leads to many adverse impacts on health, the environment, and the economy. The newly emerging 5G technology has changed the existing phenomenon of life by connecting everything everywhere using IoT devices. 5G-enabled IIoT devices have transformed everything from traditional to smart, e.g. smart city, smart healthcare, smart industry, and smart manufacturing. However, massive I/O technologies for providing D2D connections have also created the issue of privacy that needs to be addressed. Privacy is the fundamental right of every individual; 5G industries and organizations need to preserve it for their stability and competency. Therefore, privacy at all three levels (data, identity and location) needs to be maintained. Further, energy optimization is a big challenge that needs to be addressed to leverage the potential benefits of 5G and 5G-aided IIoT. Billions of IIoT devices that are expected to communicate using the 5G network will consume a considerable amount of energy, while energy resources are limited. To fill these gaps, we provide a comprehensive framework that will help energy researchers and practitioners better understand 5G-aided Industry 4.0 infrastructure and optimize energy resources while improving privacy. The proposed framework is evaluated using case studies and mathematical modelling. © 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
Quantifying success in science : an overview
- Bai, Xiaomei, Pan, Habxiao, Hou, Jie, Guo, Teng, Lee, Ivan, Xia, Feng
- Authors: Bai, Xiaomei , Pan, Habxiao , Hou, Jie , Guo, Teng , Lee, Ivan , Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 123200-123214
- Full Text:
- Reviewed:
- Description: Quantifying success in science plays a key role in guiding funding allocations, recruitment decisions, and rewards. Recently, significant progress has been made towards quantifying success in science, but the lack of a detailed analysis and summary remains a practical issue. The literature reports the factors influencing scholarly impact, as well as evaluation methods and indices aimed at overcoming this crucial weakness. We focus on categorizing and reviewing the current development of evaluation indices of scholarly impact, including paper impact, scholar impact, and journal impact. Besides, we summarize the issues of existing evaluation methods and indices, investigate the open issues and challenges, and provide possible solutions, including the pattern of collaboration impact, unified evaluation standards, implicit success factor mining, dynamic academic network embedding, and scholarly impact inflation. This paper should help researchers obtain a broader understanding of quantifying success in science and identify some potential research directions. © 2013 IEEE.
- Description: This work was supported in part by the Liaoning Provincial Key Research and Development Guidance Project under Grant 2018104021, and in part by the Liaoning Provincial Natural Fund Guidance Plan under Grant 20180550011.
RaSEC : an intelligent framework for reliable and secure multilevel edge computing in industrial environments
- Usman, Muhammad, Jolfaei, Alireza, Jan, Mian
- Authors: Usman, Muhammad , Jolfaei, Alireza , Jan, Mian
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Transactions on Industry Applications Vol. 56, no. 4 (2020), p. 4543-4551
- Full Text:
- Reviewed:
- Description: Industrial applications generate big data with redundant information that is transmitted over heterogeneous networks. The transmission of big data with redundant information not only increases the overall end-to-end delay but also increases the computational load on servers which affects the performance of industrial applications. To address these challenges, we propose an intelligent framework named Reliable and Secure multi-level Edge Computing (RaSEC), which operates in three phases. In the first phase, level-one edge devices apply a lightweight aggregation technique on the generated data. This technique not only reduces the size of the generated data but also helps in preserving the privacy of data sources. In the second phase, a multistep process is used to register level-two edge devices (LTEDs) with high-level edge devices (HLEDs). Due to the registration process, only legitimate LTEDs can forward data to the HLEDs, and as a result, the computational load on HLEDs decreases. In the third phase, the HLEDs use a convolutional neural network to detect the presence of moving objects in the data forwarded by LTEDs. If a movement is detected, the data is uploaded to the cloud servers for further analysis; otherwise, the data is discarded to minimize the use of computational resources on cloud computing platforms. The proposed framework reduces the response time by forwarding useful information to the cloud servers and can be utilized by various industrial applications. Our theoretical and experimental results confirm the resiliency of our framework with respect to security and privacy threats. © 1972-2012 IEEE.
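Phase three's "upload only when movement is detected" gate can be illustrated with a toy sketch. The paper uses a convolutional neural network for detection; the plain frame differencing below is a deliberately simpler stand-in, and the thresholds are illustrative assumptions.

```python
import numpy as np

def movement_detected(prev_frame, frame, pixel_thresh=25, ratio_thresh=0.01):
    """Toy stand-in for RaSEC's phase-3 detector: flag movement when the
    fraction of pixels whose absolute change exceeds `pixel_thresh` is
    above `ratio_thresh`. The paper uses a CNN here instead."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    return (diff > pixel_thresh).mean() > ratio_thresh

def edge_gate(frames):
    """Forward a frame to the cloud only when movement is detected
    relative to the previous frame; otherwise discard it to save
    bandwidth and cloud compute, as in the framework's third phase."""
    uploaded = []
    prev = frames[0]
    for frame in frames[1:]:
        if movement_detected(prev, frame):
            uploaded.append(frame)
        prev = frame
    return uploaded
```

Static sequences produce no uploads, which is the point of the gate: only frames carrying a change reach the cloud servers.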
Real-time dissemination of emergency warning messages in 5G enabled selfish vehicular social networks
- Ullah, Noor, Kong, Xiangjie, Lin, Limei, Alrashoud, Mubarak, Tolba, Amr, Xia, Feng
- Authors: Ullah, Noor , Kong, Xiangjie , Lin, Limei , Alrashoud, Mubarak , Tolba, Amr , Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: Computer Networks Vol. 182, no. (2020), p.
- Full Text:
- Reviewed:
- Description: This paper addresses the issues of selfishness, limited network resources, and their adverse effects on real-time dissemination of Emergency Warning Messages (EWMs) in modern Autonomous Moving Platforms (AMPs) such as Vehicular Social Networks (VSNs). For this purpose, we propose a social intelligence based identification mechanism to differentiate between selfish and cooperative nodes in the network. To this end, we devise a crowdsensing based mechanism to calculate a tie-strength value based on several social metrics. Moreover, we design a recursive evolutionary algorithm for calculating and updating each node's reputation. We then estimate each node's state-transition probability to select a super-spreader for rapid dissemination. To ensure a seamless and reliable dissemination process, we incorporate the 5G network structure instead of the conventional short-range communication used in most vehicular networks at present. Finally, we design a real-time dissemination algorithm for EWMs and evaluate its performance in terms of network parameters such as delivery ratio, delay, hop count, and message overhead for varying values of vehicular density, speed, and selfish nodes' density, based on realistic vehicular mobility traces. In addition, we present a comparative analysis of the performance of the proposed scheme with state-of-the-art dissemination schemes in VSNs. © 2020 Elsevier B.V.
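A hedged sketch of the reputation idea: the abstract does not give the exact recursion or social metrics, so the exponential-smoothing update below, weighted by tie strength, is purely illustrative of how repeated observations could recursively separate selfish from cooperative nodes.

```python
def update_reputation(rep, cooperated, tie_strength, alpha=0.3):
    """Hypothetical recursive reputation update: blend the previous
    reputation with the latest observed behaviour (1.0 = forwarded the
    EWM, 0.0 = dropped it), letting observations backed by stronger
    social ties move the estimate faster. Not the paper's recursion."""
    observation = 1.0 if cooperated else 0.0
    weight = alpha * tie_strength          # stronger ties update faster
    return (1 - weight) * rep + weight * observation

def classify(rep, threshold=0.5):
    """Label a node from its reputation, as in the identification step."""
    return "cooperative" if rep >= threshold else "selfish"
```

Repeated message drops pull a node's reputation below the threshold, at which point it would be excluded from super-spreader selection.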