A low-complexity equalizer for video broadcasting in cyber-physical social systems through handheld mobile devices
- Authors: Solyman, Ahmad , Attar, Hani , Khosravi, Mohammad , Menon, Varun , Jolfaei, Alireza , Balasubramanian, Venki , Selvaraj, Buvana , Tavallali, Pooya
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 67591-67602
- Full Text:
- Reviewed:
- Description: In Digital Video Broadcasting-Handheld (DVB-H) devices for cyber-physical social systems, Discrete Fractional Fourier Transform-Orthogonal Chirp Division Multiplexing (DFrFT-OCDM) has been suggested to enhance performance over Orthogonal Frequency Division Multiplexing (OFDM) systems under time- and frequency-selective fading channels. This creates a need for equalizers such as the Minimum Mean Square Error (MMSE) and Zero-Forcing (ZF) equalizers, which are excessively complex because they require a matrix inversion, especially for the long symbol lengths of DVB-H. In this work, a low-complexity equalizer based on the Least-Squares Minimal Residual (LSMR) algorithm is used to solve the matrix inversion iteratively. The paper applies the LSMR algorithm to linear and nonlinear equalizers; simulation results indicate that the proposed equalizer offers significant performance gains and reduced complexity compared with the classical MMSE equalizer and other low-complexity equalizers in time- and frequency-selective fading channels. © 2013 IEEE.
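The iterative-equalization idea in the abstract above can be sketched generically: instead of forming the MMSE solution (HᴴH + σ²I)⁻¹Hᴴy by explicit matrix inversion, a Krylov least-squares solver is applied to the regularized problem. The Python/NumPy sketch below uses CGLS, a solver closely related to LSMR; the channel matrix and parameters are illustrative, not from the paper.

```python
import numpy as np

def cgls(A, b, lam=0.0, iters=200, tol=1e-12):
    """Iteratively solve min ||A x - b||^2 + lam ||x||^2 (CGLS),
    avoiding the explicit matrix inversion of an MMSE/ZF equalizer.
    With lam set to the noise variance this converges to the MMSE
    solution (A^H A + lam I)^{-1} A^H b."""
    x = np.zeros(A.shape[1], dtype=A.dtype)
    r = b.astype(A.dtype, copy=True)          # residual b - A x (x = 0)
    s = A.conj().T @ r - lam * x              # negative gradient of the objective
    p = s.copy()
    gamma = np.vdot(s, s).real
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (np.vdot(q, q).real + lam * np.vdot(p, p).real)
        x += alpha * p
        r -= alpha * q
        s = A.conj().T @ r - lam * x
        gamma_new = np.vdot(s, s).real
        if gamma_new < tol:                   # gradient small: converged
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```

Each iteration costs only two matrix-vector products, which is the source of the complexity saving over a direct inversion when the symbol length (and hence the matrix dimension) is large.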
A new data driven long-term solar yield analysis model of photovoltaic power plants
- Authors: Ray, Biplob , Shah, Rakibuzzaman , Islam, Md Rabiul , Islam, Syed
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 136223-136233
- Full Text:
- Reviewed:
- Description: Historical data offers a wealth of knowledge to its users. However, datasets are often so restrictively large that the information cannot be fully extracted, synthesized, and analyzed efficiently for applications such as forecasting variable generator outputs. Moreover, the accuracy of the prediction method is vital. Therefore, a trade-off between accuracy and efficacy is required for a data-driven energy forecasting method. It has been identified that hybrid approaches may outperform individual techniques in minimizing error, although they are harder to synthesize. A hybrid deep learning-based method is proposed for predicting the output of solar photovoltaic systems (i.e. the proposed PV system) in Australia to obtain this trade-off between accuracy and efficacy. A historical dataset from 1990-2013 for Australian locations (e.g. North Queensland) is used to train the model. The model combines a multivariate long short-term memory (LSTM) network and a convolutional neural network (CNN). The proposed hybrid deep learning model (LSTM-CNN) is compared with existing neural network ensemble (NNE), random forest, statistical analysis, and artificial neural network (ANN) based techniques to assess its performance. The proposed model could be useful for generation planning and reserve estimation in power systems with high penetration of solar photovoltaics (PVs) or other renewable energy sources (RESs). © 2013 IEEE.
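The training setup described above, a deep model fitted to a long historical series, relies on first converting the raw series into supervised samples. A minimal Python/NumPy sketch of that windowing step follows; the lookback and horizon values are illustrative, and the paper's actual LSTM-CNN architecture is not reproduced here.

```python
import numpy as np

def make_windows(series, lookback, horizon):
    """Turn a historical 1-D series into supervised (X, y) pairs:
    each sample uses `lookback` past values to predict the next
    `horizon` values -- the standard input shape for LSTM/CNN
    forecasting models (illustrative preprocessing only)."""
    series = np.asarray(series, float)
    n = len(series) - lookback - horizon + 1
    X = np.stack([series[i:i + lookback] for i in range(n)])
    y = np.stack([series[i + lookback:i + lookback + horizon] for i in range(n)])
    return X, y
```

A 24-year daily series split this way yields thousands of overlapping samples, which is what makes the accuracy/efficacy trade-off discussed in the abstract a practical concern.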
A secured framework for SDN-based edge computing in IoT-enabled healthcare system
- Authors: Li, Junxia , Cai, Jinjin , Khan, Fazlullah , Rehman, Ateeq , Balasubramanian, Venki
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 135479-135490
- Full Text:
- Reviewed:
- Description: The Internet of Things (IoT) consists of resource-constrained smart devices capable of sensing and processing data. It connects a huge number of smart sensing devices, i.e., things, and heterogeneous networks. The IoT is incorporated into different applications, such as smart health, smart home, and smart grid. The concept of smart healthcare has emerged in different countries, where pilot projects of healthcare facilities are analyzed. In IoT-enabled healthcare systems, the security of IoT devices and associated data is very important, and Edge computing is a promising architecture that addresses their computational and processing problems. Edge computing is economical and can provide low-latency data services by improving the communication and computation speed of IoT devices in a healthcare system. In Edge-based IoT-enabled healthcare systems, load balancing, network optimization, and efficient resource utilization are performed using artificial intelligence (AI), i.e., an intelligent software-defined network (SDN) controller. SDN-based Edge computing helps in the efficient utilization of the limited resources of IoT devices. However, these low-powered devices and their associated data (private, sensitive patient data) are prone to various security threats. Therefore, in this paper, we design a secure framework for SDN-based Edge computing in IoT-enabled healthcare systems. In the proposed framework, IoT devices are authenticated by the Edge servers using a lightweight authentication scheme. After authentication, these devices collect data from patients and send them to the Edge servers for storage, processing, and analysis. The Edge servers are connected to an SDN controller, which performs load balancing, network optimization, and efficient resource utilization in the healthcare system. The proposed framework is evaluated using computer-based simulations. The results demonstrate that the proposed framework provides better solutions for IoT-enabled healthcare systems. © 2013 IEEE. **Please note that there are multiple authors for this article; therefore only the names of the first 5, including Federation University Australia affiliate “Venki Balasubramaniam”, are provided in this record**
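The abstract above does not detail the paper's lightweight authentication scheme; a common pattern for authenticating constrained IoT devices to an Edge server is an HMAC challenge-response over a pre-shared key. The Python sketch below is purely illustrative of that generic pattern — the function names and parameters are assumptions, not the paper's protocol.

```python
import hashlib
import hmac
import secrets

def make_challenge() -> bytes:
    """Edge server issues a fresh random nonce per authentication attempt."""
    return secrets.token_bytes(16)

def device_response(shared_key: bytes, device_id: bytes, challenge: bytes) -> bytes:
    """Device proves knowledge of its pre-shared key without transmitting it."""
    return hmac.new(shared_key, device_id + challenge, hashlib.sha256).digest()

def edge_verify(shared_key: bytes, device_id: bytes, challenge: bytes,
                response: bytes) -> bool:
    """Edge server recomputes the MAC and compares in constant time."""
    expected = hmac.new(shared_key, device_id + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Only symmetric hashing is involved, which is why schemes of this shape are considered "lightweight" for low-powered devices compared with public-key handshakes.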
A survey on behavioral pattern mining from sensor data in Internet of Things
- Authors: Rashid, Md Mamunur , Kamruzzaman, Joarder , Hassan, Mohammad , Shahriar Shafin, Sakib , Bhuiyan, Md Zakirul
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 33318-33341
- Full Text:
- Reviewed:
- Description: The deployment of large-scale wireless sensor networks (WSNs) for Internet of Things (IoT) applications is increasing day by day, especially with the emergence of smart city services. The sensor data streams generated by these applications are largely dynamic, heterogeneous, and often geographically distributed over large areas. For high-value use in business, industry and services, these data streams must be mined to extract insightful knowledge, for example for monitoring (e.g., discovering certain behaviors over a deployed area) or network diagnostics (e.g., predicting faulty sensor nodes). However, due to the inherent constraints of sensor networks and application requirements, traditional data mining techniques cannot be directly used to mine IoT data streams efficiently and accurately in real time. In the last decade, a number of works have been reported in the literature proposing behavioral pattern mining algorithms for sensor networks. This paper presents the technical challenges that need to be considered when mining sensor data. It then provides a thorough review of techniques proposed in the recent literature for mining behavioral patterns from sensor data in IoT, highlighting and comparing their characteristics and differences. We also propose a behavioral pattern mining framework for IoT and discuss possible future research directions in this area. © 2013 IEEE.
An adaptive and flexible brain energized full body exoskeleton with IoT edge for assisting the paralyzed patients
- Authors: Jacob, Sunil , Alagirisamy, Mukil , Menon, Varun , Kumar, B. Manoj , Balasubramanian, Venki
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 100721-100731
- Full Text:
- Reviewed:
- Description: The paralyzed population is increasing worldwide due to stroke, spinal cord injury, post-polio, and other related diseases. Different assistive technologies are used to improve the physical and mental health of the affected patients. Exoskeletons have emerged as one of the most promising technologies for providing movement and rehabilitation to the paralyzed, but they are limited by constraints of weight, flexibility, and adaptability. To resolve these issues, we propose an adaptive and flexible Brain Energized Full Body Exoskeleton (BFBE) for assisting paralyzed people. This paper describes the design, control, and testing of the BFBE, which has 15 degrees of freedom (DoF), for assisting users in their daily activities. Flexibility is incorporated into the system through a modular design approach. Brain signals captured by electroencephalogram (EEG) sensors are used to control the movements of the BFBE. Processing happens at the edge, reducing delay in decision making, and the system is further integrated with an IoT module that sends an alert message to multiple caregivers in case of an emergency. Energy harvesting is used in the system to address the power issues of the exoskeleton. Stability in the gait cycle is ensured by adaptive sensory feedback. The system was validated using six natural movements on ten different paralyzed persons. The system recognizes human intentions with an accuracy of 85%. The results show that the BFBE can be an efficient method for providing assistance and rehabilitation to paralyzed patients. © 2013 IEEE. **Please note that there are multiple authors for this article; therefore only the names of the first 5, including Federation University Australia affiliate “Venki Balasubramanian”, are provided in this record**
An enhancement to the spatial pyramid matching for image classification and retrieval
- Authors: Karmakar, Priyabrata , Teng, Shyh , Lu, Guojun , Zhang, Dengsheng
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 22463-22472
- Full Text:
- Reviewed:
- Description: Spatial pyramid matching (SPM) is one of the most widely used methods for incorporating spatial information into an image representation. Despite its effectiveness, traditional SPM is not rotation invariant. A rotation-invariant SPM has been proposed in the literature, but it has many limitations regarding its effectiveness. In this paper, we investigate how to make SPM robust to rotation by addressing those limitations. In an SPM framework, an image is divided into an increasing number of partitions at successive pyramid levels. Our main focus is on how to partition images so that the resulting structure can deal with image-level rotations. To that end, we investigate three concentric-ring partitioning schemes. Apart from image partitioning, another important component of the SPM framework is the weight function, which apportions the contribution of each pyramid level to the final matching between two images. We propose a new weight function that is suitable for the rotation-invariant SPM structure. Experiments on image classification and retrieval are performed on five image databases. The detailed result analysis shows that we are successful in enhancing the effectiveness of SPM for image classification and retrieval. © 2013 IEEE.
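To illustrate the concentric-ring idea described above: because a ring partition is centred on the image centre, a pixel's ring membership does not change under rotation, so per-ring histograms (and a pyramid match built from them) are rotation invariant. A minimal Python/NumPy sketch follows; the level weights and bin counts are illustrative, and the paper's exact weight function is not reproduced.

```python
import numpy as np

def ring_histograms(labels, n_rings, n_bins):
    """Per-ring histograms of quantized feature labels. Rings are centred
    on the image centre, so the partition is unchanged by rotation."""
    h, w = labels.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - (h - 1) / 2.0, xx - (w - 1) / 2.0)
    edges = np.linspace(0.0, r.max() + 1e-9, n_rings + 1)
    hists = []
    for i in range(n_rings):
        mask = (r >= edges[i]) & (r < edges[i + 1])
        hist = np.bincount(labels[mask], minlength=n_bins).astype(float)
        hists.append(hist / max(hist.sum(), 1.0))   # normalize non-empty rings
    return np.concatenate(hists)

def spm_match(a, b, levels=(1, 2, 4), n_bins=8):
    """Weighted histogram intersection across pyramid levels, with finer
    levels weighted more heavily (in the spirit of SPM)."""
    score = 0.0
    for lvl, n_rings in enumerate(levels):
        weight = 2.0 ** (lvl - len(levels) + 1)     # 1/4, 1/2, 1 for 3 levels
        ha = ring_histograms(a, n_rings, n_bins)
        hb = ring_histograms(b, n_rings, n_bins)
        score += weight * np.minimum(ha, hb).sum()
    return score
```

A 90° rotation permutes pixels within rings but never across them, so the match score of an image against its rotation equals its score against itself.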
Cyberattack triage using incremental clustering for intrusion detection systems
- Authors: Taheri, Sona , Bagirov, Adil , Gondal, Iqbal , Brown, Simon
- Date: 2020
- Type: Text , Journal article
- Relation: International Journal of Information Security Vol. 19, no. 5 (2020), p. 597-607
- Relation: http://purl.org/au-research/grants/arc/DP190100580
- Full Text:
- Reviewed:
- Description: Intrusion detection systems (IDSs) are devices or software applications that monitor networks or systems for malicious activity and signal alerts/alarms when such activity is discovered. However, an IDS may generate many false alerts, which affect its accuracy. In this paper, we develop a cyberattack triage algorithm to detect these alerts (so-called outliers). The proposed algorithm is designed using clustering, optimization, and distance-based approaches. An optimization-based incremental clustering algorithm is proposed to find clusters of different types of cyberattacks. Using a special procedure, the set of clusters is divided into two subsets: normal and stable clusters. Then, outliers are found among the stable clusters using the average distance to the centroids of the normal clusters. The proposed algorithm is evaluated using the well-known IDS data sets—Knowledge Discovery and Data mining Cup 1999 and UNSW-NB15—and compared with some other existing algorithms. Results show that the proposed algorithm has high detection accuracy and a very low false negative rate. © 2019, Springer-Verlag GmbH Germany, part of Springer Nature.
- Description: This research was conducted in Internet Commerce Security Laboratory (ICSL) funded by Westpac Banking Corporation Australia. In addition, the research by Dr. Sona Taheri and A/Prof. Adil Bagirov was supported by the Australian Government through the Australian Research Council’s Discovery Projects funding scheme (DP190100580).
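The triage step described in the abstract above — flagging outliers among stable clusters by their average distance to the normal-cluster centroids — can be illustrated with a small sketch. This is a generic distance-based illustration in Python/NumPy; the size-based split criterion and the factor `k` are assumptions for the example, not the paper's "special procedure".

```python
import numpy as np

def triage_outliers(centroids, sizes, size_thresh=5, k=2.0):
    """Illustrative distance-based triage (not the paper's exact algorithm):
    clusters are split into 'normal' (large) and 'stable' (small) by size,
    and a stable centroid is flagged as an outlier when its average
    distance to the normal centroids exceeds k times the mean such distance."""
    centroids = np.asarray(centroids, float)
    sizes = np.asarray(sizes)
    normal = centroids[sizes >= size_thresh]
    stable_idx = np.where(sizes < size_thresh)[0]
    if len(normal) == 0 or len(stable_idx) == 0:
        return []
    # average distance of each stable centroid to all normal centroids
    d = np.linalg.norm(
        centroids[stable_idx, None, :] - normal[None, :, :], axis=2
    ).mean(axis=1)
    return [int(i) for i, di in zip(stable_idx, d) if di > k * d.mean()]
```

Working on centroids rather than raw alerts keeps the triage cheap, which matters when the clustering is incremental over a live alert stream.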
Dual cost function model predictive direct speed control with duty ratio optimization for PMSM drives
- Authors: Liu, Ming , Hu, Jiefeng , Chan, Ka , Or, Siu , Ho, Siu , Xu, Wenzheng , Zhang, Xian
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 126637-126647
- Full Text:
- Reviewed:
- Description: Traditional speed control of permanent magnet synchronous motors (PMSMs) uses a cascaded speed loop with proportional-integral (PI) regulators. The output of this outer speed loop, i.e. the electromagnetic torque reference, is in turn fed to either an inner current controller or a direct torque controller. This cascaded control structure leads to relatively slow dynamic response and, more importantly, larger speed ripples. This paper presents a new dual cost function model predictive direct speed control (DCF-MPDSC) with duty ratio optimization for PMSM drives. Using accurate system state prediction, optimized duty ratios between one zero voltage vector and one active voltage vector are first deduced based on the deadbeat criterion. Then, two separate cost functions are formulated sequentially to refine the combinations of voltage vectors, providing two-degree-of-freedom control capability. Specifically, the first cost function yields better dynamic response, while the second contributes to speed ripple reduction and steady-state offset elimination. The proposed control strategy has been validated by both Simulink simulation and hardware-in-the-loop (HIL) experiments. Compared with existing control methods, the proposed DCF-MPDSC reaches the speed reference rapidly with very small speed ripple and offset. © 2013 IEEE.
- Description: This work was supported in part by the Research Grants Council of the Hong Kong Special Administrative Region (HKSAR) Government under Grant R5020-18, and in part by the Innovation and Technology Commission of the HKSAR Government to the Hong Kong Branch of National Rail Transit Electrification and Automation Engineering Technology Research Center under Grant K-BBY1.
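The sequential use of two cost functions described above can be sketched generically: cost 1 shortlists candidate voltage-vector combinations by predicted speed-tracking error, and cost 2 picks the shortlisted candidate with the smallest predicted ripple. The Python sketch below mirrors only that two-stage structure; the prediction model, candidate set, and cost terms of the actual DCF-MPDSC are not reproduced, and all names here are illustrative.

```python
import numpy as np

def dcf_select(candidates, predict, w_ref, n_shortlist=3):
    """Generic two-stage (dual cost function) candidate selection.
    predict(v) returns an assumed (predicted_speed, predicted_ripple)
    pair for candidate v; cost 1 ranks speed-tracking error, cost 2
    breaks ties among the shortlist by predicted ripple."""
    preds = [predict(v) for v in candidates]
    cost1 = [abs(p[0] - w_ref) for p in preds]        # tracking error
    shortlist = np.argsort(cost1)[:n_shortlist]       # best by cost 1
    cost2 = [preds[i][1] for i in shortlist]          # ripple among shortlist
    return int(shortlist[int(np.argmin(cost2))])      # index of chosen candidate
```

Splitting the objectives this way is what gives the two-degree-of-freedom behaviour: dynamic response is decided first, ripple is minimized second, rather than both competing inside a single weighted sum.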
Low-power wide-area networks: design goals, architecture, suitability to use cases and research challenges
- Authors: Buurman, Ben , Kamruzzaman, Joarder , Karmakar, Gour , Islam, Syed
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 17179-17220
- Full Text:
- Reviewed:
- Description: Previous survey articles on Low-Power Wide-Area Networks (LPWANs) lack a systematic analysis of LPWAN design goals and of the design decisions adopted by various commercially available and emerging LPWAN technologies, and no study has analysed how those design decisions affect a technology's ability to meet the design goals. Assessing that ability is essential in determining suitable technologies for a given application. To address these gaps, we have analysed six prominent design goals and identified the design decisions used to meet each goal in eight LPWAN technologies, ranging from technical considerations to business models, and determined which specific technique in a design decision helps meet each goal to the greatest extent. System architectures and specifications are presented for those LPWAN solutions, and their ability to meet each design goal is evaluated. We outline seventeen use cases across twelve domains that require large low-power network infrastructure and rate each design goal's importance to those applications as Low, Moderate, or High. Using these priorities and each technology's suitability for meeting the design goals, we suggest appropriate LPWAN technologies for each use case. Finally, a number of research challenges are presented for current and future technologies. © 2013 IEEE.
Mobility based network lifetime in wireless sensor networks: A review
- Authors: Nguyen, Linh , Nguyen, Hoc
- Date: 2020
- Type: Text , Journal article
- Relation: Computer Networks Vol. 174, no. (2020), p.
- Full Text:
- Reviewed:
- Description: Increasingly emerging technologies in micro-electromechanical systems and wireless communications allow mobile wireless sensor networks (MWSNs) to become an ever more powerful means in many applications such as habitat and environmental monitoring, traffic observation, battlefield surveillance, smart homes and smart cities. Nevertheless, due to sensor battery constraints, operating an MWSN energy-efficiently is of paramount importance in those applications, and a plethora of approaches have been proposed to prolong network longevity as much as possible. Therefore, this paper provides a comprehensive review of the methods that exploit mobility of sensor nodes and/or sink(s) to effectively maximize the lifetime of an MWSN. The survey systematically classifies the algorithms into categories where the MWSN is equipped with mobile sensor nodes, one mobile sink or multiple mobile sinks. How to drive the mobile sink(s) for energy efficiency in the network is also fully reviewed and reported. © 2020
Network representation learning: From traditional feature learning to deep learning
- Sun, Ke, Wang, Lei, Xu, Bo, Zhao, Wenhong, Teng, Shyh, Xia, Feng
- Authors: Sun, Ke , Wang, Lei , Xu, Bo , Zhao, Wenhong , Teng, Shyh , Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 205600-205617
- Full Text:
- Reviewed:
- Description: Network representation learning (NRL) is an effective graph analytics technique that helps users deeply understand the hidden characteristics of graph data. It has been successfully applied in many real-world tasks related to network science, such as social network data processing, biological information processing, and recommender systems. Deep learning is a powerful tool for learning data features. However, it is non-trivial to generalize deep learning to graph-structured data, since graphs differ from regular data such as pictures, which carry spatial information, and sounds, which carry temporal information. Recently, researchers have proposed many deep learning-based methods in the area of NRL. In this survey, we investigate classical NRL from traditional feature learning methods to deep learning-based models, analyze the relationships between them, and summarize the latest progress. Finally, we discuss open issues in NRL and point out future directions in this field. © 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
Online dispute resolution in mediating EHR disputes : a case study on the impact of emotional intelligence
- Bellucci, Emilia, Venkatraman, Sitalakshmi, Stranieri, Andrew
- Authors: Bellucci, Emilia , Venkatraman, Sitalakshmi , Stranieri, Andrew
- Date: 2020
- Type: Text , Journal article
- Relation: Behaviour and Information Technology Vol. 39, no. 10 (2020), p. 1124-1139
- Full Text:
- Reviewed:
- Description: An Electronic Health Record (EHR) is an individual’s record of all health events that enables critical information to be documented and shared electronically amongst health care providers and patients. The introduction of an EHR, particularly a patient-accessible EHR, can be expected to lead to an escalation of enquiries, complaints and, ultimately, disputes. Prevailing opinion is that Online Dispute Resolution (ODR) systems can help mediate certain types of disputes electronically, particularly systems which deploy Artificial Intelligence (AI) to reduce the need for a human mediator. However, disputes regarding health tend to invoke emotional responses from patients that may conceivably impact ODR efficacy. This raises an interesting question about the influence of emotional intelligence (EI) in the process of mediation. Using a phenomenological research methodology simulating doctor–patient disputes mediated with an AI Smart ODR system in place of a human mediator, we found an association between EI and the propensity for a participant to change their previously asserted claims. Our results indicate participants with lower EI tend to prolong resolution compared to those with higher EI. Future research directions, including trialling larger-scale ODR systems for specific cohorts of patients in the area of health-related dispute resolution, are advanced. © 2019 Informa UK Limited, trading as Taylor & Francis Group.
Privacy protection and energy optimization for 5G-aided industrial internet of things
- Humayun, Mamoona, Jhanjhi, Nz, Alruwaili, Madallah, Amalathas, Sagaya, Balasubramanian, Venki, Selvaraj, Buvana
- Authors: Humayun, Mamoona , Jhanjhi, Nz , Alruwaili, Madallah , Amalathas, Sagaya , Balasubramanian, Venki , Selvaraj, Buvana
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 183665-183677
- Full Text:
- Reviewed:
- Description: 5G is expected to revolutionize every sector of life by providing interconnectivity of everything everywhere at high speed. However, massively interconnected devices and fast data transmission will bring challenges of privacy as well as energy deficiency. In today's fast-paced economy, almost every sector is dependent on energy resources, while the energy sector itself relies mainly on fossil fuels, which constitute about 80% of energy globally. This massive extraction and combustion of fossil fuels has many adverse impacts on health, the environment, and the economy. The newly emerging 5G technology has changed the existing phenomenon of life by connecting everything everywhere using IoT devices. 5G-enabled IIoT devices have transformed everything from traditional to smart, e.g. smart city, smart healthcare, smart industry, and smart manufacturing. However, massive I/O technologies for providing D2D connections have also created privacy issues that need to be addressed. Privacy is the fundamental right of every individual, and 5G industries and organizations need to preserve it for their stability and competency; therefore, privacy at all three levels (data, identity and location) needs to be maintained. Further, energy optimization is a big challenge that must be addressed to leverage the potential benefits of 5G and 5G-aided IIoT: billions of IIoT devices expected to communicate over the 5G network will consume a considerable amount of energy while energy resources are limited. Energy optimization is therefore a future challenge faced by 5G industries. To fill these gaps, we provide a comprehensive framework that will help energy researchers and practitioners better understand 5G-aided Industry 4.0 infrastructure and energy resource optimization by improving privacy. The proposed framework is evaluated using case studies and mathematical modelling. © 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
Quantifying success in science : an overview
- Bai, Xiaomei, Pan, Habxiao, Hou, Jie, Guo, Teng, Lee, Ivan, Xia, Feng
- Authors: Bai, Xiaomei , Pan, Habxiao , Hou, Jie , Guo, Teng , Lee, Ivan , Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 123200-123214
- Full Text:
- Reviewed:
- Description: Quantifying success in science plays a key role in guiding funding allocations, recruitment decisions, and rewards. Recently, significant progress has been made towards quantifying success in science, but the lack of detailed analysis and summary remains a practical issue. The literature reports the factors influencing scholarly impact as well as evaluation methods and indices aimed at overcoming this crucial weakness. We focus on categorizing and reviewing current developments in evaluation indices of scholarly impact, including paper impact, scholar impact, and journal impact. Besides, we summarize the issues of existing evaluation methods and indices, investigate the open issues and challenges, and provide possible solutions, including the pattern of collaboration impact, unified evaluation standards, implicit success factor mining, dynamic academic network embedding, and scholarly impact inflation. This paper should help researchers obtain a broader understanding of quantifying success in science and identify some potential research directions. © 2013 IEEE.
- Description: This work was supported in part by the Liaoning Provincial Key Research and Development Guidance Project under Grant 2018104021, and in part by the Liaoning Provincial Natural Fund Guidance Plan under Grant 20180550011.
RaSEC : an intelligent framework for reliable and secure multilevel edge computing in industrial environments
- Usman, Muhammad, Jolfaei, Alireza, Jan, Mian
- Authors: Usman, Muhammad , Jolfaei, Alireza , Jan, Mian
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Transactions on Industry Applications Vol. 56, no. 4 (2020), p. 4543-4551
- Full Text:
- Reviewed:
- Description: Industrial applications generate big data with redundant information that is transmitted over heterogeneous networks. The transmission of big data with redundant information not only increases the overall end-to-end delay but also increases the computational load on servers which affects the performance of industrial applications. To address these challenges, we propose an intelligent framework named Reliable and Secure multi-level Edge Computing (RaSEC), which operates in three phases. In the first phase, level-one edge devices apply a lightweight aggregation technique on the generated data. This technique not only reduces the size of the generated data but also helps in preserving the privacy of data sources. In the second phase, a multistep process is used to register level-two edge devices (LTEDs) with high-level edge devices (HLEDs). Due to the registration process, only legitimate LTEDs can forward data to the HLEDs, and as a result, the computational load on HLEDs decreases. In the third phase, the HLEDs use a convolutional neural network to detect the presence of moving objects in the data forwarded by LTEDs. If a movement is detected, the data is uploaded to the cloud servers for further analysis; otherwise, the data is discarded to minimize the use of computational resources on cloud computing platforms. The proposed framework reduces the response time by forwarding useful information to the cloud servers and can be utilized by various industrial applications. Our theoretical and experimental results confirm the resiliency of our framework with respect to security and privacy threats. © 1972-2012 IEEE.
Real-time dissemination of emergency warning messages in 5G enabled selfish vehicular social networks
- Ullah, Noor, Kong, Xiangjie, Lin, Limei, Alrashoud, Mubarak, Tolba, Amr, Xia, Feng
- Authors: Ullah, Noor , Kong, Xiangjie , Lin, Limei , Alrashoud, Mubarak , Tolba, Amr , Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: Computer Networks Vol. 182, no. (2020), p.
- Full Text:
- Reviewed:
- Description: This paper addresses the issues of selfishness, limited network resources, and their adverse effects on real-time dissemination of Emergency Warning Messages (EWMs) in modern Autonomous Moving Platforms (AMPs) such as Vehicular Social Networks (VSNs). For this purpose, we propose a social intelligence based identification mechanism to differentiate between selfish and cooperative nodes in the network. To this end, we devise a crowdsensing based mechanism to calculate a tie-strength value based on several social metrics. Moreover, we design a recursive evolutionary algorithm to calculate and update each node's reputation. We then estimate each node's state-transition probability to select a super-spreader for rapid dissemination. To ensure a seamless and reliable dissemination process, we incorporate a 5G network structure instead of the conventional short-range communication used in most vehicular networks at present. Finally, we design a real-time dissemination algorithm for EWMs and evaluate its performance in terms of network parameters such as delivery ratio, delay, hop count, and message overhead for varying values of vehicular density, speed, and selfish nodes’ density based on realistic vehicular mobility traces. In addition, we present a comparative analysis of the performance of the proposed scheme with state-of-the-art dissemination schemes in VSNs. © 2020 Elsevier B.V.
Rectified softmax loss with all-sided cost sensitivity for age estimation
- Li, Daxiang, Ma, Xuan, Ren, Yaqiong, Teng, Shyh-Wei
- Authors: Li, Daxiang , Ma, Xuan , Ren, Yaqiong , Teng, Shyh-Wei
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 32551-32563
- Full Text:
- Reviewed:
- Description: In Convolutional Neural Network (ConvNet) based age estimation algorithms, softmax loss is usually chosen directly as the loss function, and the problems of Cost Sensitivity (CS), such as class imbalance and differing misclassification costs between classes, are not considered. Focusing on these problems, this paper constructs a rectified softmax loss function with all-sided CS and proposes a novel cost-sensitive ConvNet-based age estimation algorithm. Firstly, a loss function is established for each age category to address the imbalance in the number of training samples. Then, a cost matrix is defined to reflect the cost difference caused by misclassification between different classes, thus constructing a new cost-sensitive error function. Finally, the above methods are merged to construct a rectified softmax loss function for the ConvNet model, and a corresponding Back Propagation (BP) training scheme is designed to enable the network to learn robust face representations for age estimation during the training phase. The rectified softmax loss is also theoretically proven to satisfy the general conditions of a loss function used for classification. The effectiveness of the proposed method is verified by experiments on face image datasets of different races. © 2013 IEEE.
Reusing artifact-centric business process models : a behavioral consistent specialization approach
- Yongchareon, Sira, Liu, Chengfei, Zhao, Xiaohui
- Authors: Yongchareon, Sira , Liu, Chengfei , Zhao, Xiaohui
- Date: 2020
- Type: Text , Journal article
- Relation: Computing Vol. 102, no. 8 (2020), p. 1843-1879
- Full Text:
- Reviewed:
- Description: Process reuse is one of the important research areas that address efficiency issues in business process modeling. Similar to software reuse, business processes should be able to be componentized and specialized in order to enable flexible process expansion and customization. Current activity/control-flow centric workflow modeling approaches face difficulty in supporting highly flexible process reuse, limited by their procedural nature. In comparison, the emerging artifact-centric workflow modeling approach well fits into these reuse requirements. Beyond the classic class level reuse in existing object-oriented approaches, process reuse faces the challenge of handling synchronization dependencies among artifact lifecycles as parts of a business process. In this article, we propose a theoretical framework for business process specialization that comprises an artifact-centric business process model, a set of methods to design and construct a specialized business process model from a base model, and a set of behavioral consistency criteria to help check the consistency between the two process models. © 2020, Springer-Verlag GmbH Austria, part of Springer Nature.
TOSNet : a topic-based optimal subnetwork identification in academic networks
- Bedru, Hayat, Zhao, Wenhong, Alrashoud, Mubarak, Tolba, Amr, Guo, He, Xia, Feng
- Authors: Bedru, Hayat , Zhao, Wenhong , Alrashoud, Mubarak , Tolba, Amr , Guo, He , Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8, no. (2020), p. 201015-201027
- Full Text:
- Reviewed:
- Description: Subnetwork identification plays a significant role in analyzing, managing, and comprehending the structure and functions in big networks. Numerous approaches have been proposed to solve the problem of subnetwork identification as well as community detection. Most of the methods focus on detecting communities by considering node attributes, edge information, or both. This study focuses on discovering subnetworks containing researchers with similar or related areas of interest or research topics. A topic-aware subnetwork identification is essential to discover potential researchers on particular research topics and provide quality work. Thus, we propose a topic-based optimal subnetwork identification approach (TOSNet). Based on some fundamental characteristics, this paper addresses the following problems: 1) How to discover topic-based subnetworks with a vigorous collaboration intensity? 2) How to rank the discovered subnetworks and single out one optimal subnetwork? We evaluate the performance of the proposed method against baseline methods by adopting the modularity measure, assess the accuracy based on the size of the identified subnetworks, and check the scalability for different sizes of benchmark networks. The experimental findings indicate that our approach shows excellent performance in identifying contextual subnetworks that maintain intensive collaboration amongst researchers for a particular research topic. © 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
A lightweight integrity protection scheme for low latency smart grid applications
- Jolfaei, Alireza, Kant, Krishna
- Authors: Jolfaei, Alireza , Kant, Krishna
- Date: 2019
- Type: Text , Journal article
- Relation: Computers and Security Vol. 86, no. (2019), p. 471-483
- Full Text:
- Reviewed:
- Description: The substation communication protocol used in the smart grid allows the transmission of messages without integrity protection for applications that require very low communication latency. This leaves the real-time measurements taken by phasor measurement units (PMUs) vulnerable to man-in-the-middle attacks, and hence makes high voltage to medium voltage (HV/MV) substations vulnerable to cyber-attacks. In this paper, a lightweight and secure integrity protection algorithm is proposed to maintain the integrity of PMU data, which fills the missing integrity protection in the IEC 61850-90-5 standard when the MAC identifier is declared 0. A rigorous security analysis proves the security of the proposed integrity protection method against ciphertext-only attacks and known/chosen plaintext attacks. A comparison with existing integrity protection methods shows that our method is much faster, and is also the only integrity protection scheme that meets the strict timing requirement. Not only can the proposed method be used in power protection applications, but it can also be used in emerging anomaly detection scenarios, where a fast integrity check coupled with low latency communications is used for multiple rounds of message exchanges. This paper is an extension of work originally reported in Proceedings of the 14th International Conference on Security and Cryptography (Jolfaei and Kant, 2017).